Cryptography and Hard Problems: Exploring Structure and Complexity
This presentation explores when computational hardness, combined with sufficient structure, yields cryptography. It covers statistical zero-knowledge proofs, one-way functions, worst-case and average-case hardness, lossy and compressing reductions, and the implications of these notions for building cryptographic primitives.
Cryptography from Information Loss
Marshall Ball, Elette Boyle, Akshay Degwekar, Apoorvaa Deshpande, Alon Rosen, Vinod Vaikuntanathan, Prashant Nalini Vasudevan
A hard problem with sufficient structure
Hardness + Structure ⇒ Crypto
- Hardness: worst-case, average-case, and some variants
- Structure: many possibilities and dimensions
What is sufficient?
Hardness + Structure ⇒ Crypto
- Average-case hardness + Statistical Zero-Knowledge proofs (SZK) ⇒ One-Way Functions (OWF) [Ostrovsky 91]
- Worst-case hardness + SZK ⇒ Auxiliary-input OWF [Ostrovsky-Wigderson 93]
- Worst-case hardness + Randomized Encodings ⇒ OWF [Applebaum-Raykov 16]
- Fine-grained worst-case hardness + low-degree polynomial representation ⇒ Proofs of Work [Ball et al 18]
- Worst-case hardness + lossy reductions ⇒ OWF [Ong-Vadhan 08, Berman et al 18, this work]
- Average-case hardness++ + compressing reductions ⇒ Collision-Resistant Hash Functions [this work]
Lossy Reductions
A randomized poly-time algorithm R is a reduction from language L to L′ if:
- x ∈ L ⇒ Pr[R(x) ∈ L′] ≥ 0.9
- x ∉ L ⇒ Pr[R(x) ∉ L′] ≥ 0.9
Reduction R is t-compressing if, for any input x of size n, |R(x)| ≤ t(n). (Often studied implicitly in parameterized complexity; see [Downey-Fellows 13].)
Examples: all problems in P have a 1-compressing reduction to a trivial language; various parameterized problems, notably Vertex Cover parameterized by the size of the cover.
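The "1-compressing reduction to a trivial language" observation for P can be sketched in a few lines. The language used here (bit-strings of even parity) is a hypothetical stand-in for any problem in P:

```python
# Toy 1-compressing reduction: for a language L in P, decide x in poly
# time and output a constant-size instance of the trivial target
# language T = {"1"}.

def in_L(x: str) -> bool:
    """Poly-time decider for the toy language L (even number of 1s)."""
    return x.count("1") % 2 == 0

def reduction(x: str) -> str:
    """1-compressing reduction from L to T: the output is a single bit,
    so |R(x)| <= 1 regardless of |x|."""
    return "1" if in_L(x) else "0"

def in_T(y: str) -> bool:
    """The trivial target language T = {"1"}."""
    return y == "1"

assert in_T(reduction("1010")) == in_L("1010")  # membership preserved
assert len(reduction("1" * 1000)) == 1          # output size independent of |x|
```

The reduction is extreme but legal: since L is decidable in polynomial time, the reduction may simply decide x and emit a constant-size yes/no instance.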
Lossy Reductions
A randomized poly-time algorithm R is a reduction from language L to L′ if:
- x ∈ L ⇒ Pr[R(x) ∈ L′] ≥ 0.9
- x ∉ L ⇒ Pr[R(x) ∉ L′] ≥ 0.9
Reduction R is t-compressing if, for any input x of size n, |R(x)| ≤ t(n) (often studied implicitly in parameterized complexity; see [Downey-Fellows 13]).
Reduction R is t-lossy if, for every random variable X over inputs of size n, I(X; R(X)) ≤ t(n) (inspired by definitions in communication complexity, e.g. those in [Braverman et al 13]).
Examples:
- Any random self-reduction, e.g. for Quadratic Residuosity, is 1-lossy.
- Given an n×n matrix M over a field F, is det(M) = 0? Multiplying by a random full-rank matrix R gives a random matrix RM with the same row span as M. This is roughly an (n²/4)·log|F|-lossy reduction.
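The random self-reduction for Quadratic Residuosity can be sketched concretely. A minimal toy version over a prime modulus (the cryptographic setting uses a composite modulus; the prime p = 1009 is an arbitrary illustrative choice):

```python
# Toy random self-reduction for Quadratic Residuosity mod a prime p.
# Mapping x -> x * r^2 mod p for uniformly random r sends x to a random
# element with the SAME residuosity status, so the output carries only
# that one bit of information about x: a 1-lossy reduction.
import random

p = 1009  # a prime (hypothetical illustrative choice)

def is_qr(x: int) -> bool:
    """Euler's criterion: x is a quadratic residue mod p
    iff x^((p-1)/2) = 1 (mod p)."""
    return pow(x, (p - 1) // 2, p) == 1

def randomize(x: int) -> int:
    """Random self-reduction: multiply by a uniformly random square."""
    r = random.randrange(1, p)
    return (x * pow(r, 2, p)) % p

x = 5
assert is_qr(randomize(x)) == is_qr(x)  # residuosity status preserved
```

Since r² is a uniformly random square, x·r² is uniform over the residues (or non-residues) matching x's status, so seeing the output reveals exactly the one-bit answer and nothing more.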
Related Concepts
- Kernelization: the reduction compresses to size poly in some given parameter of the input; pervasive in parameterized complexity and algorithms.
- Instance compression [Harnik-Naor 10, Bodlaender et al 08]: the reduction compresses to size poly in the witness length.
- Randomized encodings [Ishai-Kushilevitz 00, Applebaum et al 04]: 1-lossy reductions to some unspecified language; pervasive in theoretical cryptography.
- Worst-case to average-case Karp reductions: 1-lossy reductions to a specified language.
Loss and Cryptography
For a language L and function k, AND^k(L) = {(x_1, …, x_k) : ∀i, x_i ∈ L}.
Example: given k matrices M_1, …, M_k, each n×n over a field F, is there an i such that det(M_i) = 0? Let M = M_1 M_2 ⋯ M_k; then det(M) = 0 iff det(M_i) = 0 for some i. Taking L to be the language of non-singular matrices, mapping (M_1, …, M_k) to M gives an n²·log|F|-compressing reduction from AND^k(L) to L for any k.
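The product-of-matrices reduction can be sketched directly; determinants are computed mod a prime p so arithmetic is exact. All parameters (p, n, k) are arbitrary illustrative choices:

```python
# Sketch of the AND-compression for non-singularity: k matrices map to
# their product, whose size (n^2 * log p bits) is independent of k.
import random

p, n, k = 101, 3, 5  # field size, matrix dimension, number of instances

def det_mod_p(M):
    """Determinant mod p via Gaussian elimination (exact arithmetic)."""
    M = [row[:] for row in M]
    det = 1
    for col in range(n):
        pivot = next((r for r in range(col, n) if M[r][col] % p), None)
        if pivot is None:
            return 0
        if pivot != col:
            M[col], M[pivot] = M[pivot], M[col]
            det = -det
        det = (det * M[col][col]) % p
        inv = pow(M[col][col], p - 2, p)  # modular inverse of the pivot
        for r in range(col + 1, n):
            f = (M[r][col] * inv) % p
            for c in range(col, n):
                M[r][c] = (M[r][c] - f * M[col][c]) % p
    return det % p

def mat_mul(A, B):
    return [[sum(A[i][t] * B[t][j] for t in range(n)) % p
             for j in range(n)] for i in range(n)]

def reduce_and(mats):
    """Map (M_1, ..., M_k) to M = M_1 * ... * M_k: all are non-singular
    iff det(M) != 0. Output size is n^2 * log p bits, independent of k."""
    M = [[int(i == j) for j in range(n)] for i in range(n)]
    for A in mats:
        M = mat_mul(M, A)
    return M

mats = [[[random.randrange(p) for _ in range(n)] for _ in range(n)]
        for _ in range(k)]
all_nonsingular = all(det_mod_p(A) != 0 for A in mats)
assert (det_mod_p(reduce_and(mats)) != 0) == all_nonsingular
```

The correctness hinges on the determinant being multiplicative: det(M_1 ⋯ M_k) = det(M_1) ⋯ det(M_k) mod p, so the product is non-singular exactly when every factor is.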
Loss and Cryptography
For a language L and function k, AND^k(L) = {(x_1, …, x_k) : ∀i, x_i ∈ L}.
- Harnik-Naor 10: a poly(n)-compressing reduction for AND^k(SAT) for all k ⇒ Collision-Resistant Hashing from OWFs (big deal)
- Fortnow-Santhanam 11: such a reduction for AND^k(SAT) ⇒ PH collapses
- Drucker 12: a k/log(k)-compressing reduction from AND^k(L) to any L′ is, in particular, lossy
Statistical Zero Knowledge
Complete problem: Statistical Difference (SD)
Input: circuits C_0 and C_1 that both output m bits
SD_YES = {(C_0, C_1) : Δ(C_0, C_1) ≥ 2/3}
SD_NO = {(C_0, C_1) : Δ(C_0, C_1) ≤ 1/3}
Essentially: are two given distributions close or far?
Average-case hard problem in SZK ⇒ OWF [Ostrovsky 91]
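The statistical distance Δ in the definition of SD can be computed exactly for small explicit distributions. A toy sketch with dictionaries standing in for the circuits C_0, C_1 (the specific distributions are made up for illustration):

```python
# Statistical (total variation) distance between two explicit
# distributions over a finite support.
def statistical_distance(P, Q):
    """Delta(P, Q) = (1/2) * sum over y of |P(y) - Q(y)|."""
    support = set(P) | set(Q)
    return 0.5 * sum(abs(P.get(y, 0.0) - Q.get(y, 0.0)) for y in support)

uniform2 = {"00": 0.25, "01": 0.25, "10": 0.25, "11": 0.25}
far_dist = {"00": 0.97, "01": 0.01, "10": 0.01, "11": 0.01}
close_dist = {"00": 0.30, "01": 0.20, "10": 0.25, "11": 0.25}

print(statistical_distance(uniform2, far_dist))    # ~0.72 >= 2/3: a YES instance of SD
print(statistical_distance(uniform2, close_dist))  # ~0.05 <= 1/3: a NO instance of SD
```

In the actual SD problem the distributions are given implicitly as circuits, so Δ cannot be computed this way efficiently; that gap is exactly what makes the problem plausibly hard.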
Loss and Cryptography
- A reduction from AND^k(L) to L′ that is 100-lossy + L is worst-case hard ⇒ OWF
- A reduction from AND^k(L) to L′ that is k/log(k)-lossy + L is average-case hard ⇒ OWF
- Drucker 12: a k/log(k)-compressing reduction from AND^k(L) to any L′ is, in particular, lossy
Loss and Cryptography
A reduction from AND^k(L) to L′ that is 100-lossy + L is worst-case hard ⇒ OWF:
- L ∈ SZK [Drucker 12], proven using the SZK completeness theorems of [Sahai-Vadhan 03]
- Essentially [Drucker 12]: SD is one-sided average-case hard
Kinds of Hardness (of a language L)
- Worst-case: for every efficient algorithm A, there is an x such that A(x) ≠ L(x).
- Average-case: there is a distribution D such that for every efficient A, Pr_{x←D}[A(x) = L(x)] ≤ 1/2 + negl.
- One-sided average-case: there is a distribution D_Y over YES instances such that for every efficient A, there is a distribution D_N over NO instances with Pr_{x←D_Y}[A(x) = 1] − Pr_{x←D_N}[A(x) = 1] ≤ negl. (The YES distribution is fixed; the NO distribution may depend on A.)
- Two-sided average-case: there are distributions D_Y over YES and D_N over NO instances such that for every efficient A, |Pr_{x←D_Y}[A(x) = 1] − Pr_{x←D_N}[A(x) = 1]| ≤ negl.
Loss and One-Sided Hardness
Reduction R: (x_1, x_2, …, x_k) ↦ y, with loss I(X_1, …, X_k ; Y) ≤ 100.
Idea: y cannot contain much information about all of the x_i's.
Disguising Distribution Lemma [Drucker 12]: there is a distribution D, supported within L, such that for any x ∈ L (placed in a random coordinate, with the remaining coordinates drawn from D), the distribution of y above is almost the same; call this common distribution y_0.
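The loss of a reduction is the mutual information I(X; R(X)). A toy computation with parity as a hypothetical stand-in reduction: X uniform over 3-bit strings retains exactly one bit of information in Y = parity(X), so this map is 1-lossy:

```python
# Compute I(X; Y) empirically for a small, fully enumerated joint
# distribution (all (x, y) pairs equally likely).
from collections import Counter
from itertools import product
from math import log2

def mutual_information(pairs):
    """I(X;Y) = sum over (x,y) of p(x,y) * log2( p(x,y) / (p(x)p(y)) )."""
    n = len(pairs)
    pxy = Counter(pairs)
    px = Counter(x for x, _ in pairs)
    py = Counter(y for _, y in pairs)
    return sum((c / n) * log2((c / n) / ((px[x] / n) * (py[y] / n)))
               for (x, y), c in pxy.items())

xs = ["".join(b) for b in product("01", repeat=3)]  # X uniform over 3-bit strings
pairs = [(x, x.count("1") % 2) for x in xs]         # Y = R(X) = parity(X)
print(mutual_information(pairs))  # 1.0
```

The x's here carry 3 bits each, but the output reveals only their parity: the remaining 2 bits are lost, which is the kind of information gap the disguising argument exploits.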
Loss and One-Sided Hardness ? ?0 is contained in ? If ? ?, ? is contained in ? ?1 ?2 ? ? ? ? is worst-case hard efficient ?, distribution over ? that ? cannot distinguish from ?0 ? ?? ? ?? ? One-sided: ?0 is fixed, other dist is not Disguising Distribution Lemma [Drucker 12]: There is a distribution ? supported within ? such that for any ? ?, the distribution of ? above is almost the same. ?0
Loss and SZK
- If x ∈ L, the distribution of y is close to y_0.
- If x ∉ L, the distribution of y is far from y_0.
Reduction to SD: on input x, output (C_0, C_1), where C_0 samples y_0 and C_1 computes y with x embedded ⇒ L ∈ SZK.
Loss and Cryptography
A reduction from AND^k(L) to L′ that is 100-lossy + L is worst-case hard ⇒ OWF:
- Essentially [Drucker 12]: SD is one-sided average-case hard.
- If SD is one-sided average-case hard, then OWFs exist.
SZK and One-Way Functions
SD_YES = {(C_0, C_1) : Δ(C_0, C_1) ≥ 2/3}, SD_NO = {(C_0, C_1) : Δ(C_0, C_1) ≤ 1/3}
Given: SD is one-sided hard with distribution D over SD_YES.
Define f(r, b, x): use r to sample (C_0, C_1) ← D; output (C_0, C_1, C_b(x)) ⇒ distributional OWF [Impagliazzo-Luby 89].
Claim: an adversary A cannot sample random inverses for f.
- For any distribution D′ over SD_NO, consider f′(r, b, x): sample (C_0, C_1) ← D′; output (C_0, C_1, C_b(x)).
- If Δ(C_0, C_1) is larger, b is more biased given C_b(x).
- A successful random-inverse sampler A therefore outputs the correct b with higher probability when given f(r, b, x) than when given f′(r, b, x).
- So A can distinguish D from any such D′, a contradiction.
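The core of the claim (larger Δ means b is more biased given C_b(x)) can be seen in a toy posterior computation; the explicit distributions below are hypothetical stand-ins for the circuits' output distributions:

```python
# Posterior bias of the hidden bit b given one sample y from C_b, when
# b is uniform. Far-apart circuits pin down b; identical circuits don't.
def posterior_b1(C0, C1, y):
    """Pr[b = 1 | observed sample y], for uniform b."""
    p0, p1 = C0.get(y, 0.0), C1.get(y, 0.0)
    return p1 / (p0 + p1)

far_C0, far_C1 = {"0": 1.0}, {"1": 1.0}      # Delta = 1: y determines b
close_C0 = close_C1 = {"0": 0.5, "1": 0.5}   # Delta = 0: y says nothing

print(posterior_b1(far_C0, far_C1, "1"))      # 1.0: a random inverse pins b
print(posterior_b1(close_C0, close_C1, "1"))  # 0.5: b stays uniform
```

A random-inverse sampler implicitly samples b from this posterior, so its output statistics for b differ between far pairs (from D) and close pairs (from D′), turning it into a distinguisher.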
Our Results
- A reduction from AND^k(L) to L′ that is 100-lossy + L is worst-case hard ⇒ OWF
- The same for 100-lossy reductions obtained via Randomized Encodings, like [Applebaum-Raykov 16]
- An errorless reduction from AND^k(L) to L′ that is k/log(k)-lossy (it's complicated)
- An errorless reduction from AND^k(L) to L′ that is 100-compressing + L is one-sided average-case hard++ ⇒ CRHF (along the lines of [Ishai et al 05])
Collision-Resistant Hashing
A family H of shrinking functions h : {0,1}^m → {0,1}^{m′} with m′ < m, together with:
- Gen(1^n): sample h from a distribution over H
- Eval(h, x): evaluate h(x)
For any efficient adversary A, with h ← Gen(1^n) and (x, x′) ← A(h):
Pr[x ≠ x′ ∧ h(x) = h(x′)] ≤ negl(n)
Collision-Resistant Hashing ???(?): Sample 2 ? ? instances (?1,0,?1,1,?2,0, ,?2?,1) from ?, each of size ? Output ? = (?1,0,?1,1,?2,0, ,?2?,1) ????(?,?): Output ?(?1,?1,?2,?2, ,??,??) NO distribution ? Reduction ? Errorless redn from ??? ? to ? ? 100-compressing ? is one-sided average-case hard ++ + CRHF (along the lines of [Ishai et al 05])
Collision-Resistant Hashing ???(?): Sample 2 ? ? instances (?1,0,?1,1,?2,0, ,?2?,1) from ?, each of size ? Output ? = (?1,0,?1,1,?2,0, ,?2?,1) ????(?,?): Output ?(?1,?1,?2,?2, ,??,??) If adversary ? finds collisions with probability ?, consider distinguisher ? that: given input ?, samples ? = (?1,0,?1,1,?2,0, ,?2?,1), replaces a random ??,? with ? to get ? runs ?(? ) to get (?,? ) if (?,? ) is valid collision and they differ at bit ?, then output 1; else 0 Pr ? ?? ? = 1 ?/? Pr ? ?? ? = 1 = 0
Summary
- New sufficient conditions for hardness to be cryptographically useful:
  - worst-case hardness to OWFs
  - average-case hardness to CRHFs
- Concepts of lossy reductions and one-sided average-case hardness
Questions
- Lossy self-reductions for (AND of) currently unusable hard problems? Would enable OWFs from their worst-case hardness:
  - NP-complete problems: would show a converse of [Drucker 12]
  - Lattice problems, e.g., the Gap Shortest Vector Problem: OWFs are known from worst-case hardness for gap Õ(n); contained in SZK for gap √(n/log n)
- Even showing one-sided average-case hardness of the above problems would be sufficient for OWFs
- Can lossy reductions be profitably used in place of Randomized Encodings in any of their applications?
- Computational notions of information loss?