Communication Complexity in Randomized Algorithms



  1. Randomized Algorithms Lecturer: Moni Naor Lecture 9

  2. Recap and Today
     Recap: the communication complexity of equality; proof by compression/encoding.
     Today: the Local Lemma.

  3. Simultaneous Messages Protocols
     Alice holds x ∈ {0,1}^n and sends m_A; Bob holds y ∈ {0,1}^n and sends m_B; Carol receives both messages and outputs f(x,y). Suggested by Yao 1979.
     With shared randomness independent of the inputs: O(1) communication.
     For the equality function:
     |m_A| + |m_B| = Ω(√n) [Newman Szegedy 1996]
     |m_A| × |m_B| = Ω(n) [Babai Kimmel 1997]
     Upper bound: O(√n)
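The O(1) shared-randomness protocol can be sketched with random inner products. This is a standard construction, not necessarily the exact one the lecture had in mind, and the function and parameter names are illustrative.

```python
import random

def inner_product_bit(s, r):
    # <s, r> over GF(2) for equal-length bit lists
    return sum(a & b for a, b in zip(s, r)) % 2

def simultaneous_equality(x, y, k=64, seed=0):
    """Simultaneous-messages equality with shared randomness.
    Alice and Bob each send k bits; Carol declares 'equal' iff the two
    messages agree.  If x == y the messages always agree; if x != y each
    bit pair disagrees with probability 1/2, so the error is 2**-k,
    independent of n."""
    rng = random.Random(seed)          # models the shared random string
    n = len(x)
    rs = [[rng.randrange(2) for _ in range(n)] for _ in range(k)]
    m_a = [inner_product_bit(x, r) for r in rs]   # Alice -> Carol
    m_b = [inner_product_bit(y, r) for r in rs]   # Bob -> Carol
    return m_a == m_b                             # Carol's answer
```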

  4. Communication Complexity without Shared Randomness
     Alice holds x ∈ {0,1}^n, Bob holds y ∈ {0,1}^n, and they should compute f(x,y).
     With shared randomness independent of the inputs: O(1).
     Theorem: any protocol that computes the equality function satisfies Σ|m_i| = Ω(log n).
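The matching O(log n) upper bound without shared randomness is the classic fingerprinting protocol: Alice privately picks a random prime p ≤ n² and sends (p, x mod p), which is O(log n) bits. The sketch below (names and the repetition count are my choices) uses the fact that if x ≠ y then |x − y| < 2^n has fewer than n prime divisors, while there are about n²/(2 ln n) primes below n².

```python
import random

def is_prime(m):
    # trial division; fine for the small moduli used here
    if m < 2:
        return False
    d = 2
    while d * d <= m:
        if m % d == 0:
            return False
        d += 1
    return True

def equality_private_coins(x_bits, y_bits, reps=20, seed=0):
    """Equality with private randomness, O(log n) bits per repetition:
    Alice sends (p, x mod p) for a random prime p <= n^2, Bob compares.
    One round errs with probability O(log n / n); independent
    repetitions drive the error down exponentially."""
    rng = random.Random(seed)
    n = max(len(x_bits), 2)
    x = int("".join(map(str, x_bits)), 2)
    y = int("".join(map(str, y_bits)), 2)
    for _ in range(reps):
        while True:
            p = rng.randrange(2, n * n + 1)
            if is_prime(p):
                break
        if x % p != y % p:     # Bob's fingerprint differs from Alice's
            return False
    return True
```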

  5. New: Working Against a Static Adversary
     The adversary chooses the sequence of cards ahead of time, independently of the guesses it sees, but knows the algorithm.
     Any randomness that must be accessed again and again is charged to the memory.
     As a first step: assume the guesser has access to the randomness for free.
     The fixed-subset construction [1-1],[1-2],[1-4],... does not work: the adversary can simply put 1 at the end.
     The probabilistic construction works, but the issue is the size of the representation.

  6. Pair-Wise Independence Representation
     Idea: use pairwise independence to represent the set choice.
     Recall: subset S_t is constructed by picking each element independently with probability 2^t/n.
     Let h: [n] → [n] be a pairwise-independent function. Example: h(x) = ax + b over a field.
     The set defined by h: all i s.t. h(i) ≤ 2^t.
     The representation cost: O(log n) bits.
     Does it work? Assume n is a power of 2.
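The slide's h(x) = ax + b construction can be sketched over a prime field; the threshold and function names below are illustrative, and the small prime is just for demonstration.

```python
import random

def make_pairwise_hash(p, rng):
    """h(x) = (a*x + b) mod p with uniform a, b in GF(p).  For x != y
    the map (a, b) -> (h(x), h(y)) is a bijection on GF(p)^2, so the
    pair (h(x), h(y)) is uniform: the family is pairwise independent."""
    return rng.randrange(p), rng.randrange(p)

def hash_value(a, b, p, x):
    return (a * x + b) % p

def represented_set(a, b, p, threshold):
    """The set defined by h: all i with h(i) < threshold.  Storing the
    set costs only the two field elements a, b, i.e. O(log p) bits, and
    each i belongs to it with probability threshold / p."""
    return {i for i in range(p) if hash_value(a, b, p, i) < threshold}
```

Exhausting all (a, b) pairs for a small prime verifies both pairwise independence and the expected set size.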

  7. Analysis of Pair-Wise Independence Representation
     To yield a reasonable guess at step t, the subset S_t should intersect the suffix T_t in a set of size 1. Here |T_t| = n/2^t, so the expected intersection size is 1.
     Using full independence:
     Pr[|S_t ∩ T_t| = 1] = |T_t| · (2^t/n) · (1 − 2^t/n)^{|T_t| − 1} ≈ 1/e
     (the number of possible intersections, times the prob. of a particular element being in S_t, times the prob. that the remaining elements are not chosen to S_t).
     Now only pairwise: by the union (Bonferroni) bound,
     Pr[|S_t ∩ T_t| = 0] ≤ 1 − Σ_i Pr[i ∈ S_t] + Σ_{i<j} Pr[i ∈ S_t ∧ j ∈ S_t] ≈ 1 − 1 + 1/2 = 1/2,
     so Pr[|S_t ∩ T_t| ≥ 1] ≳ 1/2 at each of the log n scales t = 1, ..., log n.
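The two probabilities on this slide can be computed exactly. Below, m stands for |T_t| and each element is included with probability 1/m, so the expected intersection is 1 (this matches the slide's 2^t/n inclusion probability with a suffix of size n/2^t).

```python
import math

def pr_exactly_one_full(m):
    # full independence: Pr[|S ∩ T| = 1] = m * (1/m) * (1 - 1/m)**(m-1),
    # which tends to 1/e as m grows
    return m * (1 / m) * (1 - 1 / m) ** (m - 1)

def pr_nonempty_pairwise(m):
    # only pairwise independence: Bonferroni's inequality gives
    # Pr[|S ∩ T| >= 1] >= sum_i Pr[i in S] - sum_{i<j} Pr[i,j in S]
    #                   = 1 - C(m, 2) / m**2,  which tends to 1/2
    return 1 - math.comb(m, 2) / m ** 2
```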

  8. Idea to Explore: Permute the Underlying Universe
     Consider the fixed-subset construction: [1-1],[1-2],[1-4],[1-8],[1-16],[1-32], ..., [1-n].
     Choose a permutation π ∈ Π and permute the universe accordingly; act with the fixed subsets on the permuted universe.
     What are the properties we require from Π that will allow us to retain the analysis?
     Min-wise independence.
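Min-wise independence asks that for every set S and every i ∈ S, Pr[min π(S) = π(i)] = 1/|S|. A uniformly random permutation trivially satisfies this, which the exhaustive check below illustrates on a tiny universe (the function name and parameters are mine).

```python
from itertools import permutations
from collections import Counter

def min_wise_counts(universe_size, subset):
    """For each i in subset, count the permutations pi of the universe
    with min(pi(subset)) == pi(i).  Uniform permutations give equal
    counts, i.e. Pr[min pi(S) = pi(i)] = 1/|S| for every i in S."""
    counts = Counter()
    for pi in permutations(range(universe_size)):
        argmin = min(subset, key=lambda i: pi[i])
        counts[argmin] += 1
    return counts
```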

  9. Structure of Proofs by Compression or Encoding
     Relevant to both upper and lower bounds.
     Consider a random string used in the system. It could be:
     the random function that should be inverted, or
     the randomness used by an algorithm.
     Show that the event you want to bound implies a compression of the randomness.
     The probability of compressing by ℓ bits is 2^{−ℓ}; hence the probability of the bad event is bounded by 2^{−ℓ}.
     Sometimes we need extra randomness to help us.

  10. Proof by Encoding: Inverting a Random Function is Hard
     Given black-box access to a random function f: {0,1}^n → {0,1}^n and a challenge y ∈ {0,1}^n, the goal is to find x s.t. f(x) = y.
     Claim: the number of queries needed is close to 2^n.
     Proof: homework.

  11. Proof by Encoding
     Given black-box access to a random function f: {0,1}^n → {0,1}^n and a challenge y ∈ {0,1}^n, the goal is to find x s.t. f(x) = y.
     Claim: the number of queries needed is close to 2^n.
     Proof: suppose we can invert y within T queries with probability p.
     We will use the inverter to compress f to fewer than the n·2^n bits needed for a random function.
     Fix the inverter's randomness as well as y.
     Consider the sequence of queries and answers (x_1,a_1), (x_2,a_2), ..., (x_T,a_T); with probability at least p, one of the a_i's is y.
     Only these pairs are recorded, plus log T bits to record which answer equals y; that answer itself need not be written down.
     Savings: n − log T bits.
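The compression step can be made concrete for a toy deterministic inverter that simply queries 0, 1, ..., T−1 in order (the general argument replays whatever query sequence the fixed inverter makes; the encoding below is my illustration).

```python
def encode_table(table, y, T):
    """If some answer among the first T queries (0, 1, ..., T-1) equals
    y, drop that n-bit answer from the table and record its index
    instead: we save an n-bit entry and pay only log T bits."""
    for i in range(T):
        if table[i] == y:
            return i, table[:i] + table[i + 1:]
    return None   # the inverter fails on this table; store it verbatim

def decode_table(i, rest, y):
    """Replay the inverter: the i-th answer must have been y."""
    return rest[:i] + [y] + rest[i:]
```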

  12. Mirror Games
     Two players, `first' and `second'. The game:
     A virtual deck with 2n cards.
     Each round, alternating, a player should pick a card (a value in 1..2n) that has not appeared so far.
     A player who repeats a card that has already appeared loses the game; if all cards are picked, the game is a draw.
     With perfect memory a draw is reached.
     Claim: the second player has a strategy that requires very little memory.
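The perfect-memory strategy for the second player is the mirror strategy over a fixed matching; the sketch below makes the memory explicit as a `seen` set (the low-memory versions discussed in the lecture are about replacing exactly this set).

```python
def play_mirror_game(n, first_moves):
    """Second player's mirror strategy on cards 1..2n: fix the matching
    (1,2), (3,4), ..., and always answer the first player's card with
    its partner.  After every full round the seen set is a union of
    complete pairs, so any fresh card's partner is also fresh: the reply
    is always legal and the second player never loses."""
    def partner(c):
        return c + 1 if c % 2 == 1 else c - 1
    seen = set()
    for c in first_moves:
        if c in seen:
            return "first loses"   # first player repeated a card
        seen.add(c)
        seen.add(partner(c))       # second's reply, always fresh
    return "draw" if len(seen) == 2 * n else "ongoing"
```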

  13. Mirror Games
     Claim (Sumegha Garg and Jon Schneider): the first player does not have a deterministic strategy that uses sublinear space and guarantees a draw.
     What about probabilistic strategies?
     Garg and Schneider, and Feige: suppose the first player has access to a secret matching.
     Can use techniques like the subset encoding we saw to figure out which numbers have appeared so far.
     The problem arises whenever the second player guesses the `Old Maid', the unmatched value.

  14. The Lovász Local Lemma
     Statement
     Application
     Constructive / Algorithmic

  15. The Union Bound
     Let A = A_1, ..., A_m be a collection of random events. For each A_i, the dependencies on A \ {A_i} may be arbitrary.
     Theorem: if there are 0 ≤ p_i < 1 s.t. Pr[A_i] ≤ p_i for all A_i and Σ_{i=1}^m p_i < 1, then the probability that no event A_i happens is positive.
     If all the p_i = p then we need m·p < 1.
     Example: a CNF formula with m clauses where each clause has more than log m literals is necessarily satisfiable.
     For a uniformly random assignment, let event A_i be "clause i is not satisfied"; then Pr[A_i] < 1/m.
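The CNF example turns into a checkable statement: with more than log₂ m distinct literals per clause, a uniform assignment falsifies each clause with probability 2^{−k} < 1/m, so some satisfying assignment exists and exhaustive search must find one. The DIMACS-style encoding below (literal v means variable v is true, −v false) is my choice.

```python
import math
from itertools import product

def find_satisfying_assignment(clauses, n_vars):
    """Union-bound guarantee from the slide: if every clause has
    k > log2(m) distinct literals, then Pr[some clause falsified]
    < m * 2**-k < 1, so a satisfying assignment exists.  Here we find
    it by exhaustive search (small n_vars only)."""
    m = len(clauses)
    assert all(len(c) > math.log2(m) for c in clauses)
    for bits in product([False, True], repeat=n_vars):
        if all(any(bits[abs(l) - 1] == (l > 0) for l in c) for c in clauses):
            return bits
    return None
```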

  16. The (Lovász) Local Lemma
     Question: what happens with k-wise independent events?
     Let A = A_1, ..., A_m be a collection of random events. For A_i, let Γ(A_i) be a minimal set of events that A_i depends on:
     A_i is independent of all events in A \ (Γ(A_i) ∪ {A_i}); information on events not in Γ(A_i) ∪ {A_i} does not affect the probability of the event A_i happening. (The dependency relation need not be symmetric.)
     Theorem: if there are 0 ≤ p < 1 and d s.t. for all A_i: Pr[A_i] ≤ p, |Γ(A_i)| ≤ d, and p·d·e < 1, then the probability that no event A_i happens is positive.
     Simultaneously: Pr[∧_{i∈[m]} ¬A_i] > 0.

  17. Applying the Local Lemma for k-CNF
     Theorem: if there are 0 ≤ p < 1 and d s.t. for all A_i: Pr[A_i] ≤ p, |Γ(A_i)| ≤ d, and p·d·e < 1, then the probability that no event A_i happens is positive.
     No dependency on the number of variables n or the number of clauses m!
     Theorem: every k-CNF formula in which each variable appears in fewer than 2^k/(4k) clauses is satisfiable.
     Proof: for a uniformly random assignment to the variables, let event A_i be "clause C_i is not satisfied"; Pr[A_i] ≤ 2^{−k}.
     If two clauses do not share a variable, their satisfiability is independent, so each clause depends on at most k · 2^k/(4k) = 2^k/4 clauses.
     Taking p = 2^{−k} and d = 2^k/4 we get p·d·e = e/4 < 1, so the probability that the formula is satisfied is positive.
     How do we find the satisfying assignment?

  18. Example: LLL and the Hat Game
     The nodes of a graph G = (V,E) are each assigned one of q colors.
     At each node there is a person wearing a hat of that color, observing the colors of the neighbors but not the color at the node itself.
     The players should guess the color of their own hat; the players win if at least one player guesses correctly.
     Do they have a deterministic strategy? A strategy: for every node v, a mapping g_v: [q]^{|N(v)|} → [q], where N(v) denotes the neighbors of v.
     The strategy wins if for every color assignment at least one node v gets its own color right.
     HG(G): the largest q for which there is a winning strategy for the graph G.
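A classic lower-bound example for this game: on the complete graph K_q with q colors the players can always win, so HG(K_q) ≥ q. Each player i sees everyone else and bets that the total color sum is i mod q; exactly one bet is always right. The code is my illustration of this standard strategy.

```python
from itertools import product

def sum_strategy_guesses(colors):
    """q players on the complete graph K_q, q = len(colors) colors
    0..q-1.  Player i sees every hat but their own and guesses the
    color that would make the total sum congruent to i mod q.  Player i
    is right exactly when sum(colors) % q == i, so exactly one player
    is always right."""
    q = len(colors)
    return [(i - (sum(colors) - colors[i])) % q for i in range(q)]

def players_win(colors):
    return any(g == c for g, c in zip(sum_strategy_guesses(colors), colors))
```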

  19. LLL and the Hat Game
     Claim: in a graph G with maximum degree Δ: HG(G) ≤ eΔ.
     For a graph G and strategies g_v: [q]^{|N(v)|} → [q], choose the colors at random from [q].
     Let A_v be the event that node v guesses correctly; Pr[A_v] = 1/q.
     Event A_v is independent of all {A_u | u ∉ N(v) ∪ {v}}. (Why not distance 2?) There are at most Δ dependent events, so d = Δ.
     When q > eΔ we get p·d·e = eΔ/q < 1, so the probability that no A_v occurs simultaneously is larger than 0: no winning strategy exists for such q.

  20. Proof of The Local Lemma
     Theorem: if there are 0 ≤ p < 1 and d s.t. for all A_i: Pr[A_i] ≤ p, |Γ(A_i)| ≤ d, and 4·p·d < 1, then the probability that no event A_i happens is positive.
     Induction hypothesis: let S ⊆ {1, ..., m}. By induction on s = 0, ..., m−1: if |S| ≤ s, then for all k ∉ S we have Pr[A_k | ∧_{j∈S} ¬A_j] ≤ 2p.
     For s = 0: Pr[A_k] ≤ p ≤ 2p.
     Given the hypothesis, the claim Pr[∧_{j∈[m]} ¬A_j] > 0 follows:
     Pr[∧_{j=1}^m ¬A_j] = ∏_{i=1}^m Pr[¬A_i | ∧_{j=1}^{i−1} ¬A_j] = ∏_{i=1}^m (1 − Pr[A_i | ∧_{j=1}^{i−1} ¬A_j]) ≥ ∏_{i=1}^m (1 − 2p) > 0,
     by the induction hypothesis.

  21. Proof of The Local Lemma
     Theorem: if there are 0 ≤ p < 1 and d s.t. for all A_i: Pr[A_i] ≤ p, |Γ(A_i)| ≤ d, and 4·p·d < 1, then the probability that no event A_i happens is positive.
     Induction hypothesis: let S ⊆ {1, ..., m}. By induction on s = 0, ..., m−1: if |S| ≤ s, then for all k ∉ S we have Pr[A_k | ∧_{j∈S} ¬A_j] ≤ 2p.
     For s > 0:
     ----------------------------------------------------------------------------------------------
     Let S = S_1 ∪ S_2 where the events indexed by S_2 are independent of A_k (so S_1 = S ∩ Γ(A_k)).
     Notation: let B_S = ∧_{j∈S} ¬A_j.
     If S_2 = S we are done: from independence, Pr[A_k | B_S] = Pr[A_k] ≤ p.
     Otherwise, we know that |S_1| ≤ d.

  22. Proof of The Local Lemma: the Manipulation
     S = S_1 ∪ S_2, B_S = ∧_{j∈S} ¬A_j. Want to prove: Pr[A_k | B_{S_1} ∧ B_{S_2}] ≤ 2p.
     Pr[A_k | B_{S_1} ∧ B_{S_2}] = Pr[A_k ∧ B_{S_1} | B_{S_2}] / Pr[B_{S_1} | B_{S_2}].
     Numerator: Pr[A_k ∧ B_{S_1} | B_{S_2}] ≤ Pr[A_k | B_{S_2}] = Pr[A_k] ≤ p, by the independence of A_k from S_2.
     Denominator: since |S_2| < |S| = s, we can apply the induction hypothesis to each added event A_j with j ∈ S_1: Pr[A_j | B_{S_2}] ≤ 2p. Hence
     Pr[B_{S_1} | B_{S_2}] ≥ 1 − Σ_{j∈S_1} Pr[A_j | B_{S_2}] ≥ 1 − d·2p > 1/2 (using 4·p·d < 1).
     Together: Pr[A_k | B_{S_1} ∧ B_{S_2}] ≤ p / (1/2) = 2p.

  23. Proof of The Local Lemma
     We know that Pr[A_k | ∧_{j∈S} ¬A_j] ≤ 2p. Want to show: Pr[∧_{i∈[m]} ¬A_i] > 0.
     Pr[∧_{i=1}^m ¬A_i] = ∏_{i=1}^m Pr[¬A_i | ∧_{j=1}^{i−1} ¬A_j] = ∏_{i=1}^m (1 − Pr[A_i | ∧_{j=1}^{i−1} ¬A_j]) ≥ (1 − 2p)^m > 0.

  24. On Making the Local Lemma Constructive
     The proof does not give an algorithm for finding the point in the space that makes all the bad events disappear; we only know it exists.
     Approach for finding it: partition the problem instance into smaller ones, logarithmic in size, by a partial assignment (each variable is either true, false, or undefined).
     If a clause is not yet satisfied, make sure it keeps enough free variables.
     Then do an exhaustive search on each part.

  25. Constructive vs. Algorithmic
     The partition approach was suggested by József Beck; it has a suboptimal dependency on k, and for the algorithm to work we need a base case that relies on the (non-constructive) LLL.
     Moser and Tardos suggested an algorithmic version of the LLL.
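The Moser-Tardos algorithm itself is strikingly simple; here is a minimal sketch for k-SAT (the clause encoding and the round cap are my choices, the resampling rule is theirs).

```python
import random

def moser_tardos(clauses, n_vars, seed=0, max_rounds=10**5):
    """Moser-Tardos algorithmic LLL for k-SAT: start from a uniform
    random assignment and, while some clause is falsified, resample
    just the variables of one falsified clause uniformly at random.
    Under the LLL condition on clause overlaps this terminates after
    O(m) expected resamplings.  Literal v / -v: variable v true/false."""
    rng = random.Random(seed)
    assign = [rng.randrange(2) for _ in range(n_vars)]

    def falsified(c):
        return all(assign[abs(l) - 1] != (l > 0) for l in c)

    for _ in range(max_rounds):
        bad = next((c for c in clauses if falsified(c)), None)
        if bad is None:
            return assign          # satisfying assignment found
        for l in bad:              # resample the offending clause
            assign[abs(l) - 1] = rng.randrange(2)
    return None
```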
