Parallel repetition concerns solving multiple independent instances, say n, of the same problem. Naively, this can be achieved by executing each instance individually, thus using n times the resources needed for a single instance. One might ask the converse: does solving n instances require \Omega(n) times the resources of solving a single instance? In this talk, we will survey the regime where the "resource" is communication and see how tools from information theory can help partially resolve such questions. As time permits, we will discuss an application of these techniques to proving lower bounds for computing a spanning forest in dynamic streams.