Derandomization: New Framework and Assumptions for Hardness-to-Randomness

An overview of recent work on derandomization, focusing on new uniform hardness assumptions, non-black-box techniques, and instance-wise reconstructive targeted PRGs/HSGs, together with results showing when randomness is "indistinguishable from useless" and a list of open problems.

  • Derandomization
  • Hardness
  • Randomness
  • Uniform
  • Non-Black-Box


Presentation Transcript


  1. Hardness vs. Randomness, Revised: Uniform, Non-Black-Box, and Instance-Wise. Lijie Chen (MIT), Roei Tell (MIT).

  2. Today's Plan. Part I: an overview of the conceptual messages. Part II (motivation and results): the right assumptions for derandomization; randomness might be indistinguishable from useless. Part III (techniques): instance-wise reconstructive targeted PRGs/HSGs; derandomization via the GKR interactive proof system. Open problems.

  3. Conceptual Messages: The RIGHT Assumptions for Derandomization. A new hardness-to-randomness framework for non-black-box derandomization, based on new uniform hardness assumptions. A new type of uniform hardness assumption that is both necessary and sufficient for derandomization: new uniform assumptions ⟹ prBPP = prP.

  4. Conceptual Messages: Super-Fast Derandomization. The new hardness-to-randomness framework for non-black-box derandomization, based on new uniform hardness assumptions, is more flexible and enables extremely fast derandomization that is provably unachievable by PRGs. Under the new uniform assumptions, randomness is indistinguishable from useless: BPTIME[T(n)] is effectively contained in TIME[T(n) · n^{o(1)}].

  5. New Hardness Assumptions: A Template. An efficient function f with multiple output bits that is hard for n^c-time probabilistic algorithms (for a fixed constant c).

  6. Uniform, Non-Black-Box, Instance-Wise: Explained. PRGs: a generator G_f that fools all small circuits C, built from a hard function f; G_f does NOT depend on C. Targeted PRGs: a generator G_{f,C} that fools only the given circuit C, built from a hard function f and the circuit C. Non-black-box w.r.t. the input: G_{f,C} depends on C (unlike a PRG). Non-black-box w.r.t. the hard function f: the generator is determined from the computation history of f on the input x.
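
  To make the contrast concrete, here is a minimal formalization (a sketch in LaTeX; the seed length, the output length m, and the constant 1/6 are generic choices, not taken from the talk):

    % Classical PRG: a single generator fools every small circuit.
    \[
      \forall C \text{ of size } m:\quad
      \Bigl|\Pr_{s\in\{0,1\}^{O(\log m)}}\bigl[C(G_f(s))=1\bigr]
            -\Pr_{r\in\{0,1\}^{m}}\bigl[C(r)=1\bigr]\Bigr| < 1/6 .
    \]
    % Targeted PRG: the generator also receives the circuit it must fool
    % (derived from the input x as C = A(x,\cdot)), and only that circuit.
    \[
      \Bigl|\Pr_{s}\bigl[C(G_{f,C}(s))=1\bigr]-\Pr_{r}\bigl[C(r)=1\bigr]\Bigr| < 1/6 .
    \]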

  7. Uniform, Non-Black-Box, and Instance-Wise: Explained. The template: an efficient function f with multiple output bits that is hard for n^c-time probabilistic algorithms. Uniform: we only assume hardness against uniform probabilistic algorithms (NOT against circuits); this avoids the hard question of proving circuit lower bounds. Instance-wise: "f is hard on input x" means "derandomization works on input x".

  8. Today's Plan. Part I: an overview of the conceptual messages. Part II (motivation and results): the right assumptions for derandomization; randomness might be indistinguishable from useless. Part III (techniques): instance-wise reconstructive targeted PRGs/HSGs; derandomization via the GKR interactive proof system. Open problems.

  9. Previous hardness-to-randomness frameworks are based on PRGs. Circuit lower bounds (E ⊄ i.o.-SIZE[2^{Ω(n)}]) ⟹ log-seed PRGs ⟹ prP = prBPP [Yao82; BM84; Nis91; NW94; IW99]. Are the assumptions too strong? Uniform lower bounds (E ⊄ BPP) ⟹ PRGs that work on average ⟹ average-case derandomization [IW98; TV07; ...]. Are the assumptions too weak? Remark: compare circuit lower bounds for E vs. for NE.

  10. The New Hardness-to-Randomness Framework: Targeted PRGs. As before: circuit lower bounds (E ⊄ i.o.-SIZE[2^{Ω(n)}]) ⟹ log-seed PRGs ⟹ prP = prBPP; uniform lower bounds (E ⊄ BPP) ⟹ PRGs that work on average. New route: a new type of hardness assumption ⟹ targeted PRGs ⟹ prP = prBPP.

  11. The RIGHT Assumptions for Derandomization. Assumption: a poly-time f: {0,1}^n → {0,1}^n that is almost-all-input-hard against all n^{10}-time randomized algorithms; that is, for every n^{10}-time randomized algorithm A, there are at most finitely many inputs x ∈ {0,1}^* with Pr over the internal coins of A [A(x) = f(x)] ≥ 2/3. Claim: prP = prBPP ⟹ the Assumption holds. Take-away: almost-all-input hardness is necessary for derandomization; proved by simple diagonalization.
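
  To illustrate the diagonalization behind the necessity claim, here is a minimal Python sketch. It uses prBPP = prP only through the hypothetical helper estimate_prob (a deterministic estimator of a machine's output probabilities, which the assumed derandomization supplies); enumerate_machines is likewise a hypothetical stand-in for a standard enumeration of clocked probabilistic machines.

    def hard_function(x: str) -> str:
        """Illustrative diagonalization: fix f(x) bit-by-bit so that each of the
        first few n**10-time probabilistic machines prints f(x) with prob. < 2/3."""
        n = len(x)
        bits = ['0'] * n
        # Hypothetical enumeration of clocked probabilistic machines M_1, M_2, ...
        for j, M in enumerate(enumerate_machines(count=min(n, n.bit_length()))):
            # Hypothetical deterministic estimator (given by prBPP = prP) of
            # Pr[ bit j of M(x) equals 1 ], up to additive error 0.1.
            p = estimate_prob(M, x, bit_index=j, time_bound=n ** 10)
            # Set bit j of f(x) to the value M is less likely to print there, so
            # Pr[M(x) = f(x)] <= Pr[M(x)_j = f(x)_j] <= 0.6 < 2/3.
            bits[j] = '0' if p >= 0.5 else '1'
        return ''.join(bits)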

  12. The RIGHT Assumptions for Derandomization. Assumption (low-depth): a poly-time f: {0,1}^n → {0,1}^n that is almost-all-input-hard against all n^{10}-time randomized algorithms, AND f is computable by poly-size log-space uniform circuits of depth n^2. Claim: the Assumption (low-depth) ⟹ prP = prBPP. Take-away: almost-all-input hardness is sufficient for derandomization. (Log-space uniform circuits of size T: there is a uniform O(log T)-space algorithm that prints a description of the circuit.)

  13. The RIGHT Assumptions for Derandomization: Main Result. Assumption (low-depth): a poly-time f: {0,1}^n → {0,1}^n that is almost-all-input-hard against all n^{10}-time randomized algorithms and is computable by poly-size log-space uniform circuits of depth n^2 ⟹ prP = prBPP ⟹ Assumption: a poly-time f: {0,1}^n → {0,1}^n that is almost-all-input-hard against all n^{10}-time randomized algorithms. Take-away: almost-all-input hardness is both sufficient and necessary for derandomization.

  14. Generality and Robustness. Robustness of the derandomization: if f is hard with probability 1 - δ over a distribution D = {D_n}, then the derandomization of prBPP succeeds on a 1 - δ fraction of inputs from D. General trade-off: more running time for the hard function f corresponds to slower derandomization of prBPP. Restricted circuits: hardness against probabilistic circuits from a restricted class yields derandomization for that class.

  15. Today's Plan. Part I: an overview of the conceptual messages. Part II (motivation and results): the right assumptions for derandomization; randomness might be indistinguishable from useless. Part III (techniques): instance-wise reconstructive targeted PRGs/HSGs; derandomization via the GKR interactive proof system. Open problems.

  16. Quest for faster derandomization, or: Is randomness useless? (A table of assumptions and the derandomization each yields follows on the next slides.)

  17. Quest for faster derandomization, or: Is randomness useless? Caution: from now on, we use BPTIME instead of prBPTIME. Classical work: circuit lower bounds ⟹ BPTIME[T(n)] ⊆ TIME[T(n)^c], c ≥ 7.

  18. Quest for faster derandomization, or: Is randomness useless? Caution: from now on, we use BPTIME instead of prBPTIME. Classical work: circuit lower bounds ⟹ BPTIME[T(n)] ⊆ TIME[T(n)^c], c ≥ 7. [Doron, Moshkovitz, Oh, and Zuckerman, 2020]: lower bounds against non-uniform MA protocols ⟹ BPTIME[T(n)] ⊆ TIME[T(n)^2].

  19. Quest for faster derandomization, or: Is randomness useless? Caution: from now on, we use BPTIME instead of prBPTIME. Classical work: circuit lower bounds ⟹ BPTIME[T(n)] ⊆ TIME[T(n)^c], c ≥ 7. [Doron, Moshkovitz, Oh, and Zuckerman, 2020]: lower bounds against non-uniform MA protocols ⟹ BPTIME[T(n)] ⊆ TIME[T(n)^2]. [C.-Tell, 2021]: OWFs and lower bounds against non-uniform algorithms ⟹ BPTIME[T(n)] ⊆ TIME[T(n) · n].

  20. Quest for faster derandomization, or: Is randomness useless? Caution: from now on, we use BPTIME instead of prBPTIME. Classical work: circuit lower bounds ⟹ BPTIME[T(n)] ⊆ TIME[T(n)^c], c ≥ 7. [Doron, Moshkovitz, Oh, and Zuckerman, 2020]: lower bounds against non-uniform MA protocols ⟹ BPTIME[T(n)] ⊆ TIME[T(n)^2]. [C.-Tell, 2021]: OWFs and lower bounds against non-uniform algorithms ⟹ BPTIME[T(n)] ⊆ TIME[T(n) · n]. [Wil16] and [C.-Tell, 2021]: under #NSETH, BPTIME[T(n)] ⊄ TIME[T(n) · n^{1-ε}]: a barrier for faster derandomization.

  21. How to overcome the #NSETH barrier? A lesson from cryptography: if two things are indistinguishable by poly-time algorithms, then they are practically the same. Can we show that randomness is indistinguishable from useless, i.e., BPTIME[T(n)] ⊆ effective-TIME[T(n) · n^{o(1)}]? Goal: for every L ∈ BPTIME[T(n)], find a T(n) · n^{o(1)}-time deterministic algorithm D_L such that, for every poly-time-samplable distribution {X_n}_n, Pr_{x ← X_n}[D_L(x) ≠ L(x)] ≤ n^{-ω(1)}. That is, no efficient adversary can break D_L by finding a mistake!
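
  A LaTeX rendering of this goal, with {X_n} ranging over poly-time-samplable distributions and a negligible error bound as on the slide (the time bound T(n) · n^{o(1)} is the one reconstructed above):

    \[
      \forall L\in\mathrm{BPTIME}[T(n)]\ \ \exists\ \text{deterministic } D_L
      \text{ running in time } T(n)\cdot n^{o(1)}:\quad
      \forall\ \text{poly-time-samplable } \{X_n\}_n,\ \
      \Pr_{x\sim X_n}\bigl[D_L(x)\neq L(x)\bigr]\le n^{-\omega(1)} .
    \]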

  22. Derandomization from non-batch-computable functions: randomness is indistinguishable from useless. Assume that OWFs exist and that there is an f: {0,1}^n → {0,1}^m such that (1) each output bit f(x)_i can be computed in Õ(n) time, given x and i, and (2) f is hard to approximately print for randomized algorithms running in time somewhat larger than T(n), over every poly-time-samplable distribution. Then BPTIME[T(n)] ⊆ effective-TIME[T(n) · n^{o(1)}]. Hard-to-approximately-print with respect to a distribution X: for every such randomized algorithm A, Pr_{x ← X}[A(x) 0.99-prints f(x)] ≤ n^{-ω(1)}. Here A(x) 0.99-prints f(x) if, with probability at least 2/3 over the internal randomness of A(x), A(x)_i = f(x)_i for at least a 0.99 fraction of the indices i ∈ [m].
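
  In LaTeX, with m the output length of f and t(n) a generic placeholder for the adversary's time bound:

    \[
      A(x)\ \text{0.99-prints}\ f(x)\ \ :\Longleftrightarrow\ \
      \Pr_{\text{coins of }A}\Bigl[\,\bigl|\{\,i\in[m] : A(x)_i=f(x)_i\,\}\bigr|\ge 0.99\,m\Bigr]\ \ge\ 2/3 ,
    \]
    \[
      f\ \text{hard-to-approximately-print w.r.t. } \{X_n\}\ \ :\Longleftrightarrow\ \
      \forall\ t(n)\text{-time randomized } A:\ \
      \Pr_{x\sim X_n}\bigl[A(x)\ \text{0.99-prints}\ f(x)\bigr]\le n^{-\omega(1)} .
    \]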

  23. Randomness is indistinguishable from useless? The need for non-black-box derandomization. Goal (indistinguishable from being correct): for every L ∈ BPTIME[T(n)], find a T(n)^{1+o(1)}-time deterministic algorithm D_L such that no poly(T(n))-time adversary can find a mistake x with non-negligible probability (x is a mistake iff D_L(x) ≠ L(x)). Theorem: no PRG or black-box derandomization can achieve this goal. That is, only non-black-box derandomization can give super-fast derandomization!

  24. Today's Plan. Part I: an overview of the conceptual messages. Part II (motivation and results): the right assumptions for derandomization; randomness might be indistinguishable from useless. Part III (techniques): instance-wise reconstructive targeted PRGs/HSGs; derandomization via the GKR interactive proof system. Open problems.

  25. Review: Hardness-to-Randomness via Reconstructive PRGs. PRGs: a generator G_f fooling all small circuits D, built from a hard function f. Reconstructive PRGs: from any small circuit D that G_f does not fool, one obtains a small circuit for f.
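
  A schematic LaTeX statement of the reconstruction property (the distinguishing gap 1/6 and the circuit-size bound s' are generic placeholders):

    \[
      \Bigl|\Pr_{s}\bigl[D(G_f(s))=1\bigr]-\Pr_{r}\bigl[D(r)=1\bigr]\Bigr|\ \ge\ 1/6
      \quad\Longrightarrow\quad
      \exists\ \text{a circuit } C^{D}\ \text{of size } s'\ \text{computing } f .
    \]
    % Contrapositive: if f has no circuits of size s', then G_f fools every small D.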

  26. Revision: Instance-Wise Uniform Hardness-to-Randomness via Reconstructive Targeted PRGs. Targeted PRGs: a generator G_{f,x}, built from the function f and the given circuit C, that fools only C; the hardness is now that f(x) is hard to print. Instance-wise: "f is hard on input x" means derandomization works on input x.

  27. Revision: Instance-Wise Uniform Hardness-to-Randomness via Reconstructive Targeted PRGs. Targeted PRGs: a generator G_{f,x}, built from the function f and the given circuit C, that fools only C; the hardness is that f(x) is hard to print. Instance-wise: "f is hard on input x" means derandomization works on input x. Reconstructive targeted PRGs: from any small circuit C that G_{f,x} does not fool, we get a fast uniform algorithm R such that R(x) prints f(x) w.h.p.

  28. Revision: Instance-Wise Uniform Hardness-to-Randomness via Reconstructive Targeted HSGs. Targeted HSGs: a generator H_{f,x}, built from the function f and the given circuit C, that hits only C; the hardness is that f(x) is hard to print. Instance-wise: "f is hard on input x" means derandomization works on input x. Reconstructive targeted HSGs: from any small circuit C that avoids H_{f,x}, we get a fast uniform algorithm R such that R(x) prints f(x) w.h.p.

  29. A Construction of Reconstructive HSGs. Main Theorem: a reconstructive HSG for every f: {0,1}^n → {0,1}^n computable by log-space uniform circuits of size T and depth d. Targeted HSG: for x ∈ {0,1}^n, H_{f,x} outputs poly(T) many strings (of the required randomness length) and runs in poly(T) time. Instance-wise reconstruction: an oracle algorithm R^D running in d · poly(n) time such that, if D accepts half of the strings of that length but rejects all outputs of H_{f,x}, then R^D(x) prints f(x) w.h.p. Application: given A(x, r), an algorithm for a problem in prRPTIME[T(n)], apply H_{f,x} with D(·) = A(x, ·). Hence: if f: {0,1}^n → {0,1}^n is computable by poly-size log-space uniform circuits of depth n^2 and is almost-all-input-hard against all n^{10}-time randomized algorithms, then prP = prRP; and since prBPP ⊆ prRP^{prRP}, also prP = prBPP.
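
  A minimal sketch of how the targeted HSG is applied (one-sided error, as in the prRP application above); A and targeted_hsg are abstract parameters standing for the randomized algorithm A(x, r) and the generator H_{f,x}:

    from typing import Callable, Iterable

    def derandomize_rp(A: Callable[[str, str], bool],
                       targeted_hsg: Callable[[str], Iterable[str]],
                       x: str) -> bool:
        """Deterministic simulation of a one-sided-error algorithm A(x, r).

        On a YES instance, A(x, .) accepts at least half of the random strings, so
        if every output of H_{f,x} were rejected, the circuit D = A(x, .) would
        avoid H_{f,x} and the reconstruction R^D would print f(x) quickly,
        contradicting almost-all-input hardness.  On a NO instance, A(x, r)
        rejects every r, so we correctly reject."""
        for r in targeted_hsg(x):      # poly(T) candidate random strings, built from x
            if A(x, r):                # any accepting r certifies a YES instance
                return True
        return False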

  30. Today's Plan. Part I: an overview of the conceptual messages. Part II (motivation and results): the right assumptions for derandomization; randomness might be indistinguishable from useless. Part III (techniques): instance-wise reconstructive targeted PRGs/HSGs; derandomization via the GKR interactive proof system. Open problems.

  31. The GKR Interactive Proof Protocol. [GKR15] For every f: {0,1}^n → {0,1}^n computable by log-space uniform circuits of size T and depth d, there is an interactive protocol with an efficient prover and ℓ = Õ(d) rounds, in which the prover P wants to convince the verifier V that f(x) = y. Useful properties of GKR: short verifier messages: each message from V has length O(log T); history-independent messages for the honest prover (*): the honest P's message depends only on the verifier's last message; doubly efficient: the honest P's strategy is computable in poly(T) time. (Diagram: an ℓ-round interaction between the prover P and the verifier V.)

  32. The GKR Interactive Proof Protocol (continued). Same setup: [GKR15] for every f computable by log-space uniform circuits of size T and depth d, an interactive protocol with an efficient prover and ℓ = Õ(d) rounds; P wants to convince V that f(x) = y. New view: by history-independence, the honest prover's behaviour in round i is a strategy function f_i: {0,1}^{O(log T)} → ..., mapping the verifier's O(log T)-bit message to the prover's reply; the protocol is described by the round strategies f_1, ..., f_ℓ.

  33. The GKR Interactive Proof Protocol: Reductive. Consider the truth tables of the honest prover's round strategies f_1, f_2, f_3, ..., f_ℓ, each f_i: {0,1}^{O(log T)} → ... From f_1 one can compute f(x). Reductive: for every i ≥ 1 there is a fast procedure Dec_i such that Dec_i^{f_{i+1}}(x) = f_i; that is, each entry of f_i is computable in poly(n, log T) time from x and a few queries to the next strategy f_{i+1}. Base case: each entry of the round-ℓ strategy f_ℓ is computable in poly(n, log T) time from x alone; there is a Dec_ℓ with Dec_ℓ(x) = f_ℓ.
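
  A sketch of this "reductive" structure as an interface, treating each round strategy as a truth table indexed by the verifier's O(log T)-bit message; Dec_step and Dec_base are hypothetical stand-ins for the entry-wise reductions promised above:

    def strategy_entry(i: int, ell: int, msg: str, x: str, next_strategy) -> str:
        """Entry `msg` of the truth table of the round-i honest-prover strategy f_i.

        For i < ell each entry is computable quickly from x plus a few oracle
        queries to the next strategy f_{i+1} (hypothetical Dec_step); for the
        last round it is computable from x alone (hypothetical Dec_base)."""
        if i == ell:                                        # base case: round ell
            return Dec_base(x, msg)                         # hypothetical helper
        return Dec_step(i, x, msg, oracle=next_strategy)    # hypothetical helper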

  34. The Targeted HSG H_{f,x}. Take the truth tables of the honest prover's round strategies f_1, f_2, f_3, ..., f_ℓ (each f_i: {0,1}^{O(log T)} → ...), and apply the Nisan-Wigderson generator to each of them: NW^{f_1}, NW^{f_2}, ..., NW^{f_ℓ}. The targeted HSG is the union of their outputs: H_{f,x} = ∪_{i ∈ [ℓ]} NW^{f_i}.
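
  A high-level sketch of H_{f,x} as described on this slide; gkr_prover_strategies (returning the truth tables f_1, ..., f_ell) and nw_generator (the Nisan-Wigderson generator run on a given truth table) are abstract placeholders, not library calls:

    def targeted_hsg(f, x: str, output_len: int) -> list:
        """H_{f,x}: the union of NW generators, one per honest-prover round strategy."""
        pseudorandom = []
        # Truth tables of the honest prover's round strategies in the GKR protocol
        # for the claim "f(x) = y"; each is a function of an O(log T)-bit message.
        for f_i in gkr_prover_strategies(f, x):                # hypothetical helper
            # Use the truth table of f_i as the hard function of an NW generator.
            pseudorandom.extend(nw_generator(truth_table=f_i,  # hypothetical helper
                                             output_len=output_len))
        return pseudorandom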

  35. Overview of the Reconstruction Algorithm. Suppose D(·) = A(x, ·) avoids H_{f,x}, i.e., D accepts half of its inputs but rejects all outputs of H_{f,x}. Since H_{f,x} is the union of the NW^{f_i}, D avoids every NW^{f_i}; in particular, D is a distinguisher for every NW^{f_i}.

  36. Overview of the Reconstruction Algorithm. Since D is a distinguisher for every NW^{f_i}, the reconstruction R^D yields a small circuit for f_1, a small circuit for f_2, a small circuit for f_3, ..., a small circuit for f_ℓ.

  37. Review: Reconstruction as Learning. Reconstructive PRGs: a small circuit D that NW^f does not fool yields a small circuit for f. [IW98]: the Nisan-Wigderson PRG gives a uniform learning procedure. Reconstruction via uniform learning: a uniform learner R(D) that, given a circuit D that distinguishes NW^f, makes a small number of queries to f and outputs a small circuit for f w.h.p.
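
  The [IW98]-style learner can be viewed through the following interface; the body is deliberately left abstract (the actual procedure is the uniform NW reconstruction), so this is an interface sketch rather than an implementation:

    def nw_learner(distinguisher, query_oracle):
        """Uniform learning in the style of [IW98], abstracted.

        Given a circuit `distinguisher` that tells NW^{g} apart from uniform,
        make a small number of calls to `query_oracle` (answering "g(z) = ?")
        and return, w.h.p., a small circuit computing g."""
        raise NotImplementedError("placeholder for the uniform NW reconstruction")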

  38. The GKR Interactive Proof Protocol: Reminder. Truth tables of the honest prover's round strategies f_1, f_2, f_3, ..., f_ℓ, each f_i: {0,1}^{O(log T)} → ... From f_1 one can compute f(x). Reductive: for every i ≥ 1, each entry of f_i is computable in poly(n, log T) time from x and a few queries to f_{i+1} (via Dec_i). Base case: each entry of f_ℓ is computable in poly(n, log T) time from x alone (via Dec_ℓ).

  39. One step of the reconstruction: from a small circuit C_{i+1} for f_{i+1} to a small circuit C_i for f_i. R(C_{i+1}): run the uniform learning algorithm R(D) for NW^{f_i} (recall that D is a distinguisher for every NW^{f_i}), and answer its queries to f_i using C_{i+1} together with the reduction Dec_i; the learner then outputs a small circuit C_i for f_i w.h.p. Base case: i = ℓ, where f_ℓ is computable from x directly. Iterating from i = ℓ down to i = 1 gives small circuits C_ℓ, ..., C_1, and from C_1 one can compute f(x). Overall running time: about ℓ · poly(n) = d · poly(n).
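
  Putting the pieces together, a sketch of the reconstruction R^D, reusing the hypothetical interfaces from the earlier sketches (nw_learner, Dec_step, Dec_base) plus a hypothetical decode_output that computes f(x) from a circuit for the round-1 strategy:

    def reconstruct(D, x: str, ell: int):
        """R^D(x): learn small circuits for f_ell, ..., f_1 and decode f(x).

        One learning step per round, so the total time is about ell * poly(n),
        i.e. roughly depth(f) * poly(n)."""
        # Base case: the round-ell strategy is computable from x alone.
        circuit = lambda msg: Dec_base(x, msg)                       # hypothetical
        for i in range(ell - 1, 0, -1):
            # Queries "f_i(msg) = ?" are answered by the entry-wise reduction,
            # using the circuit already learned for f_{i+1}.
            oracle = (lambda msg, i=i, nxt=circuit:
                      Dec_step(i, x, msg, oracle=nxt))               # hypothetical
            # D distinguishes NW^{f_i}, so the learner outputs a circuit for f_i.
            circuit = nw_learner(distinguisher=D, query_oracle=oracle)
        # A small circuit for the round-1 strategy determines f(x).
        return decode_output(circuit, x)                             # hypothetical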

  40. A Construction of Reconstructive HSGs. Main Theorem: a reconstructive HSG for every f: {0,1}^n → {0,1}^n computable by log-space uniform circuits of size T and depth d. Targeted HSG: for x ∈ {0,1}^n, H_{f,x} outputs poly(T) many strings and runs in poly(T) time. Instance-wise reconstruction: an oracle algorithm R^D running in d · poly(n) time such that, if D accepts half of the strings of the appropriate length but rejects all outputs of H_{f,x}, then R^D(x) prints f(x) w.h.p.

  41. Conclusion. We give a new hardness-to-randomness framework that constructs targeted PRGs/HSGs from almost-all-input hardness (a.a.i.-hardness). A.a.i.-hardness is probably the right hardness assumption for prP = prBPP, and it enables super-fast derandomization. Our proof builds on the doubly efficient delegation system of Goldwasser, Kalai, and Rothblum.

  42. New Directions to Explore. Almost-all-input hardness: can we prove the required almost-all-input lower bounds? Can we study this notion in different settings? Will it lead to new developments? More connections between proof systems and derandomization: can we apply the doubly efficient delegation system for low-space computation by Reingold, Rothblum, and Rothblum? Is there a more general connection?

  43. Open Problems: Full Equivalence? Known: a poly-time f: {0,1}^n → {0,1}^n that is almost-all-input-hard against all n^{10}-time randomized algorithms AND is computable by poly-size log-space uniform circuits of depth n^2 ⟹ prP = prBPP ⟹ a poly-time f: {0,1}^n → {0,1}^n that is almost-all-input-hard against all n^{10}-time randomized algorithms. Open: does the latter assumption (without the low-depth requirement) already imply prP = prBPP?

  44. Open Problems: Low Space? Does an f: {0,1}^n → {0,1}^n computable in low space and polynomial time that is almost-all-input-hard against all n^{10}-time randomized algorithms suffice, i.e., does it imply prP = prBPP?

  45. Thanks!
