Randomized Algorithms Lecture 3 by Robert Krauthgamer & Moni Naor
This lecture covers probabilistic Turing machines, language recognition, the complexity class BPP, homework on RP, and more. It explores the role of randomization in decision problems and the extent to which it aids computation.
Presentation Transcript
Randomized Algorithms Lecture 3. Lecturers: Robert (Robi) Krauthgamer and Moni Naor
Administrivia
Meetings: Mondays 14:15-17:00.
All students are expected to attend all lectures and to participate actively:
- Read the material before class
- Present homework solutions etc. in class
- Interact during lectures
There will be a test. Lectures are recorded (please remind the speaker to start the recording).
Course sources:
- Mitzenmacher and Upfal, Probability and Computing
- Motwani and Raghavan, Randomized Algorithms
- A few lectures from Ryan O'Donnell's course "A Theorist's Toolkit"
- Also Alon and Spencer, The Probabilistic Method
Last Time: 2-SAT, 3-SAT, randomized complexity classes
Probabilistic Turing Machines
A Probabilistic Turing Machine (PTM) has, in addition to its input tape, a read-only random tape filled with uniform random bits. Its transition function reads the next random bit: δ: Q × Γ × {0,1} → Q × Γ × {Left, Right}.
[Figure: two tapes, an input tape and a random tape]
Alternative view: the transition function itself is probabilistic.
Language Recognition
When is a PTM M considered to recognize a language L? The probability is over the random tape!
Two-sided error (Monte Carlo; always stops):
- For all x ∈ L: Pr[M stops with 'yes'] ≥ 2/3
- For all x ∉ L: Pr[M stops with 'no'] ≥ 2/3
One-sided error (Monte Carlo; always stops):
- For all x ∈ L: Pr[M stops with 'yes'] ≥ 1/2
- For all x ∉ L: Pr[M stops with 'no'] = 1
Zero-sided error (Las Vegas; might never stop!):
- M never stops with a wrong answer; consider its expected running time (or memory).
Complexity Classes
BPP: decision problems solvable in poly time with two-sided error; the correct answer is returned with probability ≥ 2/3 (Monte Carlo).
RP: decision problems solvable in poly time with one-sided error: if the correct answer is 'no', always return 'no'; if the correct answer is 'yes', return 'yes' with probability ≥ 1/2 (Monte Carlo).
ZPP: decision problems solvable with zero error in expected poly time (Las Vegas). ZPP = RP ∩ co-RP, and running for twice the expected stopping time places ZPP inside BPP.
Theorem: P ⊆ ZPP ⊆ RP ⊆ NP.
To what extent does randomization help? Does P = ZPP? ZPP = RP? RP = NP?
Common belief (based on strong PRGs): P = ZPP = RP = BPP.
Strong vs. Weak BPP
Definition: A language L is in strong BPP if for any polynomial q(n) there exists a polynomial-time machine M(·,·) s.t. for any x ∈ {0,1}^n:
- if x ∈ L then Pr[M(x,·) accepts] ≥ 1 − 2^{-q(n)}
- if x ∉ L then Pr[M(x,·) accepts] ≤ 2^{-q(n)}
Strong vs. Weak BPP
Definition: A language L is in weak BPP if there is a polynomial p(n) and a polynomial-time machine M(·,·) s.t. for any x ∈ {0,1}^n:
- if x ∈ L then Pr[M(x,·) accepts] ≥ 1/2 + 1/p(n)
- if x ∉ L then Pr[M(x,·) accepts] ≤ 1/2 − 1/p(n)
Amplification Theorem
Theorem: Strong BPP = Weak BPP.
Proof: Given a weak-BPP machine M', construct M as follows. Pick independent random strings r_1, …, r_t for M'; for each r_i, compute z_i = M'(x, r_i) ∈ {0,1}; let z = z_1 + … + z_t. Then M accepts if and only if z > t/2 (majority vote).
How large should t be, as a function of p(n) and q(n)?
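As a concrete illustration (not from the slides), a minimal Python sketch of the majority-vote construction; the decider name and the 0.55 success probability in the toy example are invented for the demonstration:

```python
import random

def amplify(weak_decider, x, t):
    """Majority vote over t independent runs of a weak decider.

    Assumes weak_decider(x) returns True/False and is correct with
    probability at least 1/2 + 1/p(n) on every input x.
    """
    votes = sum(1 for _ in range(t) if weak_decider(x))
    return votes > t / 2  # accept iff a strict majority of runs accepted

# Toy weak decider: answers correctly with probability 0.55.
def noisy_decider(x):
    correct = (x % 2 == 0)  # pretend L is the set of even numbers
    return correct if random.random() < 0.55 else not correct

print(amplify(noisy_decider, 42, t=1001))  # True with overwhelming probability
```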
Deviation Bounds
Let X_1, …, X_t be random variables and consider S = X_1 + … + X_t. How far is S from its expectation?
- Markov's inequality (for non-negative X): Pr[X > a] ≤ E[X]/a
- Chebyshev's inequality: Pr[|X − E[X]| > a] ≤ Var[X]/a^2, where Var[X] = E[(X − E[X])^2]
- Chernoff-Hoeffding inequality: needs complete independence
Claim: if the X_i are pairwise independent, then Var[S] = Var[X_1] + … + Var[X_t] (the variances add).
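A quick numerical sanity check, not part of the lecture, comparing the two bounds against the true tail of a sum of fair coin flips:

```python
import random

t, trials, a = 100, 50_000, 65      # S = sum of t fair 0/1 coins; tail at a
samples = (sum(random.randint(0, 1) for _ in range(t)) for _ in range(trials))
empirical = sum(s > a for s in samples) / trials

mean, var = t / 2, t / 4            # E[S] = t/2, Var[S] = t/4
markov = mean / a                   # Pr[S > a] <= E[S]/a   (S is non-negative)
chebyshev = var / (a - mean) ** 2   # Pr[|S - E[S]| > a - E[S]] <= Var[S]/(a - E[S])^2

print(f"empirical {empirical:.4f}  Markov {markov:.3f}  Chebyshev {chebyshev:.3f}")
```

Both bounds hold, Chebyshev is much tighter than Markov here, and the true tail is smaller still, which is what the Chernoff-Hoeffding bound captures.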
Large Deviation Bounds
Several names: Chernoff, Hoeffding, Azuma, tail inequality.
Let X_1, …, X_t be independent random variables with |X_i| ≤ 1 and E[X_i] = 0. Then for S = X_1 + … + X_t:
Pr[S > a] < e^{-a^2/2t}
How to use for amplification: let X_i be the indicator that run i is correct, centered by subtracting its expectation (which is ≥ 1/2 + 1/p). The majority errs only if S deviates from its mean by a = t/p, so the failure probability is at most e^{-a^2/2t} = e^{-t/2p^2}. To make this ≤ 2^{-q}, set t = 2qp^2.
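And an empirical check of the tail inequality itself (again illustrative, not from the slides): sum t independent ±1 variables and compare Pr[S > a] with e^{-a^2/2t}:

```python
import math
import random

t, trials, a = 200, 20_000, 30

def S():
    # Sum of t independent X_i with X_i in {-1, +1}, so |X_i| <= 1, E[X_i] = 0.
    return sum(random.choice((-1, 1)) for _ in range(t))

empirical = sum(S() > a for _ in range(trials)) / trials
bound = math.exp(-a * a / (2 * t))  # e^{-a^2 / 2t}
print(f"empirical {empirical:.4f}  bound {bound:.4f}")  # bound ~0.105 holds
```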
Watch and Learn: Chernoff, Hoeffding, etc. bounds || @ CMU || Lectures 5a, 5b, 5c of CS Theory Toolkit
BPP is in Non-Uniform P
P/poly: the class of languages decidable by polynomial-time Turing machines that take advice depending only on the input length. Equivalent: languages with polynomial-size circuits.
Claim: BPP ⊆ P/poly. Idea: union bound.
Claim: BPP ⊆ P/poly
Proof, from strong BPP: Let L ∈ BPP and pick q(n) = 2n, so some machine M accepts L with error probability ≤ 2^{-2n}, where M(x,y) takes input x and random bits y. Say y is wrong advice for x if M(x,y) errs on x's membership in L; then Pr_y[y is wrong for x] ≤ 2^{-2n}. There are 2^n values of x of length n, so by the union bound Pr_y[∃x s.t. y is wrong for x] ≤ 2^n · 2^{-2n} = 2^{-n} < 1. Hence some single string y is correct advice for all inputs of length n.
Checking Matrix Multiplication
Given three n × n matrices A, B and C, how do you check that A·B = C, say over the finite field GF(2)? Recomputing the product A·B is expensive: it takes O(n^ω) time, where the best known value of ω is 2.3728639.
Freivalds: an O(n^2) algorithm for verification. Pick a random vector r ∈ {0,1}^n and compute A(Br) and Cr.
Checking Matrix Multiplication
Compare the two resulting vectors. If A·B = C then the algorithm always says 'yes'.
Prove: if A·B ≠ C then the algorithm says 'no' with probability at least 1/2. How does this probability vary with the field size?
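A minimal sketch of Freivalds' verifier over GF(2); the representation (matrices as lists of 0/1 rows) and the names are my own:

```python
import random

def freivalds(A, B, C, rounds=10):
    """Check A @ B == C over GF(2) in O(rounds * n^2) time.

    Each round errs (says 'yes' on a wrong C) with probability <= 1/2,
    so `rounds` independent repetitions push the error below 2**-rounds.
    """
    n = len(A)
    def matvec(M, v):  # M v over GF(2), O(n^2)
        return [sum(M[i][j] & v[j] for j in range(n)) % 2 for i in range(n)]
    for _ in range(rounds):
        r = [random.randint(0, 1) for _ in range(n)]
        if matvec(A, matvec(B, r)) != matvec(C, r):
            return False  # certain: A B != C
    return True           # probably A B == C

# Example: 2x2 identity times itself.
I = [[1, 0], [0, 1]]
print(freivalds(I, I, I))                 # True
print(freivalds(I, I, [[1, 1], [0, 1]]))  # False with high probability
```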
Pseudo-Deterministic Algorithms
An algorithm that, with high probability, returns the same answer on all runs. For decision problems this follows by amplification; it is interesting for search or coordination problems. Example: find a prime between N and 2N (a sketch follows below).
Question: turn the Contraction Algorithm into a pseudo-deterministic algorithm.
Open question (?): is it possible to turn the more advanced algorithms into pseudo-deterministic ones?
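One standard way to make the prime-finding example pseudo-deterministic (a sketch of a common trick, not anything prescribed on the slide; the function names are illustrative): scan N, N+1, … in a fixed order and return the first number that passes a randomized primality test. Randomness only affects the tiny error probability, so all runs return the same prime with high probability.

```python
import random

def is_probable_prime(n, rounds=40):
    """Miller-Rabin test; errs on a composite with prob <= 4**-rounds."""
    if n < 2:
        return False
    for p in (2, 3, 5, 7, 11, 13):
        if n % p == 0:
            return n == p
    d, s = n - 1, 0
    while d % 2 == 0:
        d, s = d // 2, s + 1
    for _ in range(rounds):
        a = random.randrange(2, n - 1)
        x = pow(a, d, n)
        if x in (1, n - 1):
            continue
        for _ in range(s - 1):
            x = pow(x, 2, n)
            if x == n - 1:
                break
        else:
            return False  # definitely composite
    return True           # probably prime

def first_prime_between(N):
    """Pseudo-deterministic: the scan order is fixed, so every run almost
    surely returns the same prime in [N, 2N] (one exists by Bertrand's
    postulate)."""
    for m in range(N, 2 * N + 1):
        if is_probable_prime(m):
            return m

print(first_prime_between(10**6))
```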
Proof by Encoding
A method to prove an algorithm succeeds by showing that failure would let us compress the random bits used.
Example: the birthday paradox. Suppose a collision were likely among fewer than k random elements x_1, x_2, …, x_k from a domain of size n. If x_i = x_j, then x_j can be encoded by the index i (LZ-style): instead of log n bits we spend log k bits for i plus log k bits for j. Uniform random bits cannot be compressed, so a collision is not likely to happen before k ≈ √n.
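A quick empirical check of the √n threshold (illustrative, not from the slides):

```python
import random

def first_collision(n):
    """Draw uniform elements from a domain of size n until a repeat;
    return how many draws that took."""
    seen = set()
    while True:
        x = random.randrange(n)
        if x in seen:
            return len(seen) + 1
        seen.add(x)

n, trials = 10**6, 1000
avg = sum(first_collision(n) for _ in range(trials)) / trials
print(f"n = {n}, sqrt(n) = {n**0.5:.0f}, average draws to collision = {avg:.0f}")
```

The average lands near 1.25·√n, matching the birthday-paradox prediction.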
Monte Carlo vs. Las Vegas Algorithms
Monte Carlo algorithm: guaranteed to run in poly time, likely to find the correct answer. Example: the Contraction Algorithm for global min-cut.
Las Vegas algorithm: guaranteed to find the correct answer, likely to run in poly time. Example: randomized quicksort.
One can always convert a Las Vegas algorithm into a Monte Carlo one (see the sketch below), but no method is known for converting in the other direction.
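A sketch of the standard conversion, under the assumption that we know the expected step count T: run the Las Vegas algorithm with a budget of 2T steps and report failure if it has not finished, which by Markov's inequality happens with probability at most 1/2. The generator-based framing and names here are my own:

```python
import random

def las_vegas_to_monte_carlo(las_vegas_steps, budget):
    """Run a step-by-step Las Vegas computation for at most `budget` steps.

    `las_vegas_steps()` is assumed to be a generator that yields once per
    unit of work and finally returns the (always correct) answer. If the
    expected step count is T and budget = 2T, Markov's inequality bounds
    the failure probability by 1/2; amplify by repeating.
    """
    gen = las_vegas_steps()
    for _ in range(budget):
        try:
            next(gen)
        except StopIteration as done:
            return done.value  # finished: the answer is always correct
    return None                # timed out: Monte Carlo failure

# Toy Las Vegas "algorithm": flip a coin each step until it lands heads.
def coin_search():
    while random.random() < 0.5:
        yield              # one unit of work
    return "heads"         # expected ~2 steps to finish

print(las_vegas_to_monte_carlo(coin_search, budget=4))  # 'heads' w.p. 15/16
```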
Watch: Lecture on Entropy || @ CMU || Lectures 24a and 24b of CS Theory Toolkit
https://www.youtube.com/watch?v=b6x4AmjdvvY
What was the mistake? Codes that are uniquely decodable are not necessarily prefix-free! See the Sardinas-Patterson algorithm.