
Covert Learning: Strategies for Secure Information Transfer
An overview of covert learning: how a learner can run learning algorithms through an untrusted intermediary while preventing an adversary from learning the target or corrupting the result. Covers the Covert Verifiable Learning model, its applications to secure outsourcing and model extraction attacks, and open questions around general covert learning.
Presentation Transcript
Covert Learning: How to learn with an untrusted intermediary
Ran Canetti, Ari Karchmer (Boston University)
Can we prevent Adv from learning the target function?
Can we prevent Adv from tricking the Learner into producing a bad hypothesis?
In the paper we:
- Formalize these questions under a Covert Verifiable Learning model
  - Simulation-based privacy
  - Soundness inspired by interactive proofs for ML [GRSY20]
- Provide solutions to other salient problems (e.g. parities, decision trees)
- Motivate real-world applications of Covert Verifiable Learning
  - Secure outsourcing of automated scientific discovery (molecular biology)
  - Model extraction attacks
- Many open questions: a general Covert Learning compiler?
Problem: Given random x_1, ..., x_m ∈ {0,1}^n and y_1, ..., y_m ∈ {0,1} such that, for some s ∈ {0,1}^n,
Pr[ y_i = ⟨x_i, s⟩ ] ≥ 1/2 + ε for every i,
decode s w.h.p.
This problem is computationally hard if you believe the Learning Parity with Noise (LPN) assumption.
What if you can choose x_1, ..., x_m ∈ {0,1}^n? Then you can find s using the famed Goldreich-Levin (GL) algorithm in time poly(n, 1/ε).
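To see why chosen queries change the picture, here is a minimal runnable sketch (assuming, as LPN does, that the noise is independent across queries; the helper names make_noisy_parity_oracle and learn_parity_with_chosen_queries are illustrative, and this pairing-and-majority trick is a simplified stand-in for the full Goldreich-Levin algorithm). To recover bit i of s, query a random x together with x ⊕ e_i; the XOR of the two answers equals s_i with probability 1/2 + 2ε², so a majority vote recovers it.

```python
import random

def make_noisy_parity_oracle(s, eps):
    """Answers <x, s> mod 2, flipping the bit with probability 1/2 - eps,
    independently for every query (the independent-noise setting of LPN)."""
    def oracle(x):
        true_bit = sum(xi & si for xi, si in zip(x, s)) % 2
        return true_bit ^ int(random.random() < 0.5 - eps)
    return oracle

def learn_parity_with_chosen_queries(oracle, n, eps):
    """Recover s bit by bit: oracle(x) ^ oracle(x XOR e_i) equals s_i with
    probability 1/2 + 2*eps^2, so a majority vote over poly(1/eps) repetitions
    (a crude 20/eps^4 here) pins it down with high probability."""
    reps = int(20 / eps ** 4) + 1
    s_hat = []
    for i in range(n):
        votes = 0
        for _ in range(reps):
            x = [random.randint(0, 1) for _ in range(n)]
            x_flip = x[:]
            x_flip[i] ^= 1                      # x XOR e_i
            votes += oracle(x) ^ oracle(x_flip)
        s_hat.append(1 if 2 * votes > reps else 0)
    return s_hat

# Toy usage: chosen queries defeat the noise.
secret = [random.randint(0, 1) for _ in range(16)]
oracle = make_noisy_parity_oracle(secret, eps=0.2)
print(learn_parity_with_chosen_queries(oracle, n=16, eps=0.2) == secret)
```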
Now suppose an eavesdropper also obtains the queries x_1, ..., x_m ∈ {0,1}^n and the answers y_1, ..., y_m ∈ {0,1}. Then the eavesdropper can also decode s, simply by following along with the GL algorithm.
Question: How can we run the Goldreich-Levin algorithm covertly, i.e. learn s for ourselves while preventing any eavesdropper from doing the same?
Idea: Use the LPN assumption itself to fashion queries that look random to everyone except the party who invented them.
[Slide figure: the learner's GL queries (Query 1, Query 2, ..., Query q) are blended with LPN masks, so the transmitted query stream looks like "learning noisy parities the leaky way" while only the learner can undo the masking — the Covert GL protocol.]
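The pseudorandomness ingredient behind the masks is the LPN assumption itself: samples (a, ⟨a, k⟩ ⊕ e), with a uniform and e a low-rate Bernoulli error, are believed indistinguishable from uniformly random pairs by any efficient observer who does not know k. Below is a tiny sketch of such a sample generator (the name lpn_sample and the noise rate are illustrative; how the paper actually combines these masks with the GL queries is not reproduced here).

```python
import random

def lpn_sample(k, noise_rate=0.125):
    """One LPN sample: a uniform vector a in {0,1}^n together with <a, k> XOR a
    Bernoulli(noise_rate) error bit. Under the LPN assumption, a stream of such
    pairs is computationally indistinguishable from uniformly random (a, b)
    pairs for anyone who does not know k."""
    n = len(k)
    a = [random.randint(0, 1) for _ in range(n)]
    b = sum(ai & ki for ai, ki in zip(a, k)) % 2
    b ^= int(random.random() < noise_rate)
    return a, b

# The party who chose k keeps usable structure (it roughly knows <a, k> for each
# sample); to everyone else the samples look like pure randomness, which is what
# makes them suitable as masks for the learner's real queries.
k = [random.randint(0, 1) for _ in range(32)]
print(lpn_sample(k))
```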
Problem: Given x_1, ..., x_m ∈ {0,1}^n and y_1, ..., y_m ∈ {0,1} such that, for some s ∈ {0,1}^n, Pr[ y_i = ⟨x_i, s⟩ ] ≥ 1/2 + ε, decode s w.h.p.
What if you can choose x_1, ..., x_m ∈ {0,1}^n, but the answers y_1, ..., y_m ∈ {0,1} can be intercepted by a computationally bounded party and corrupted arbitrarily?
If we are additionally given access to ground-truth random examples, we can extend the previous algorithm to at least detect when this is happening (i.e. obtain soundness):

Test or Learn
  Input: S = trusted random examples
  Repeat r times:
    Flip a coin c
    If c = 0:  # learn
      run covert_learn()
    If c = 1:  # test
      query a section of S; abort if the results are inconsistent
  Decode s w.h.p.
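A minimal runnable sketch of this Test-or-Learn loop, under stated assumptions: covert_learn_round and query_oracle are hypothetical helpers standing in for one round of covert learning queries and for the (possibly corrupted) answer channel, and the threshold value is illustrative — in practice it would be set from the honest noise rate ε. Since the intermediary cannot tell learning rounds from test rounds, sustained tampering with the answers is detected with noticeable probability.

```python
import random

def test_or_learn(S, covert_learn_round, query_oracle,
                  rounds=40, batch=50, threshold=0.3):
    """Interleave covert-learning rounds with hidden consistency checks.
    S                  : trusted ground-truth examples [(x, y), ...]
    covert_learn_round : runs one round of covert learning queries (hypothetical)
    query_oracle(x)    : answer for x as relayed by the untrusted intermediary (hypothetical)
    threshold          : largest disagreement rate still plausible under honest noise
    Returns False (abort) if a test round finds the relayed answers inconsistent
    with the trusted labels, True otherwise."""
    for _ in range(rounds):
        if random.random() < 0.5:                      # coin c = 0: learn
            covert_learn_round()
        else:                                          # coin c = 1: test
            sample = random.sample(S, min(batch, len(S)))
            mismatches = sum(query_oracle(x) != y for x, y in sample)
            if mismatches / len(sample) > threshold:
                return False                           # looks corrupted -> abort
    return True
```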