Taint Leakage Model in Software Security


An overview of the Taint Leakage Model proposed by Ron Rivest in the context of software security: how computations with tainted inputs can leak information to adversaries, motivating attacks such as timing attacks, and how clean zones, tainted zones, and spoiled zones can be used to protect against potential leaks when building secure systems.





Presentation Transcript


  1. The Taint Leakage Model. Ron Rivest. Crypto in the Clouds Workshop, MIT Rump Session Talk, August 4, 2009.

  2. Taint. "Taint" is a common term in software security. Any external input is tainted, and a computation with a tainted input produces tainted output. Think tainted = controllable by the adversary. Untainted values are private inputs, random values you generate, and functions of untainted values. E.g., which values in a browser depend on user input?
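The propagation rule on this slide (tainted input implies tainted output) can be sketched with a small wrapper type. The `Tainted` class and `apply` helper below are my own illustration, not part of the talk:

```python
# Minimal sketch of taint propagation: any computation that consumes
# a tainted input yields a tainted output.  (Hypothetical helper names.)

class Tainted:
    """Wraps a value that originated from, or depends on, external input."""
    def __init__(self, value):
        self.value = value

def is_tainted(v):
    return isinstance(v, Tainted)

def unwrap(v):
    return v.value if is_tainted(v) else v

def apply(f, *args):
    """Apply f to the arguments, propagating taint: tainted in => tainted out."""
    result = f(*(unwrap(a) for a in args))
    if any(is_tainted(a) for a in args):
        return Tainted(result)
    return result

user_input = Tainted(7)   # external input: tainted
secret = 42               # private value: untainted

out = apply(lambda x, y: x + y, user_input, secret)
print(is_tainted(out))    # True: the sum depends on a tainted input

clean = apply(lambda x, y: x * y, 2, 3)
print(is_tainted(clean))  # False: untainted inputs give an untainted output
```

Real taint-tracking systems (e.g. in browsers or dynamic analysis tools) implement essentially this rule at the level of bytes or program variables.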

  3. Proposed Taint Leakage Model. Only computations with tainted inputs leak information: the adversary learns the output and all inputs (even untainted ones) of a computation with a tainted input. Define a value as spoiled if it is untainted but is an input to a computation with a tainted input. Examples (in the slide diagrams, tainted values are red, spoiled values purple, and clean values, i.e. untainted and unspoiled, black):
  z = f(x, y) with x and y clean: no leakage; clean inputs give clean outputs.
  z = f(x, y) with x tainted: z is tainted and y is spoiled.
  z = f(x, y) with x clean and y spoiled: z is clean.
  A value is leakable iff it is tainted or spoiled, and the adversary can learn all tainted and spoiled values. Leakage may be unbounded or bounded.
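The three classifications on this slide can be mechanized by tracking two sets of value names. The sketch below is my own illustration of the slide's rules, not code from the talk:

```python
# Track which named values are tainted or spoiled; everything else is clean.

tainted = {"x"}          # x is an external input
spoiled = set()

def record(out, in_names):
    """Apply the leakage rules for a computation out = f(*in_names)."""
    if any(n in tainted for n in in_names):
        tainted.add(out)                        # tainted input -> tainted output
        spoiled.update(n for n in in_names
                       if n not in tainted)     # untainted co-inputs are spoiled
    # else: clean inputs give a clean output -- nothing to record

record("z", ["x", "y"])   # z = f(x, y) with x tainted
print("z" in tainted)     # True:  z is tainted
print("y" in spoiled)     # True:  y is spoiled, hence leakable

record("w", ["a", "b"])   # an all-clean computation
leakable = tainted | spoiled
print("w" in leakable)    # False: w stays clean
```

The adversary's view is exactly the `leakable` set: everything tainted or spoiled, and nothing clean.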

  4. Motivating Examples. What attacks motivate this model? Various forms of chosen-input attacks, such as timing attacks or differential attacks. In C = EK(M), K is spoiled, and thus leakable; this models timing attacks on K using adversary-controlled probes via control of M.
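To see why a spoiled key is leakable in practice, consider a deliberately leaky toy cipher whose running time (modeled here as an operation count) depends jointly on K and the adversary-chosen M. This construction is my own illustration, not one from the talk:

```python
# Toy timing leak: the work done by toy_encrypt depends on K & M,
# so an adversary who controls M can probe individual key bits.

def toy_encrypt(K, M):
    """Toy 8-bit 'cipher' with a deliberate data-dependent branch."""
    ops = 0
    acc = K ^ M
    for bit in range(8):
        if (K >> bit) & (M >> bit) & 1:   # branch taken only if both bits set
            ops += 1                       # models extra time spent
    return acc, ops

K = 0b10110010                             # secret key: untainted, but spoiled
for bit in range(8):
    M = 1 << bit                           # adversary probes one key bit per query
    _, ops = toy_encrypt(K, M)
    print(bit, ops)                        # ops == 1 exactly when that key bit is 1
```

Eight chosen inputs recover K completely, even though K itself never flows to the adversary. This is the sense in which a computation with a tainted input can give up its untainted co-inputs.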

  5. Model useful in building systems. The system divides into a clean zone, a spoiled zone, and a tainted zone facing the adversary, with private inputs entering the clean zone. Zones can be implemented separately -- e.g. the untainted part on a TPM (or remote!). The clean zone may include a random source and can do computations (e.g. keygen); its output could even be stored when independent of adversarial input (ref. Dodis talk in this workshop).

  6. Example. Encrypting a (tainted) message M with key K. With C = EK(M), K is spoiled and thus leaks (since M is tainted). Instead let C = (R, S), where S = M xor Y and Y = EK(R). Then K is neither tainted nor spoiled, and thus protected; S is tainted (since M is tainted); R is spoiled (since it is paired with the tainted S), but it is known anyway; Y is spoiled (since M is tainted). Protect long-term keys by using random ephemeral working keys. (One can do similarly for signatures.) The taint model more or less distinguishes between chosen-plaintext and known-plaintext attacks. Related to on-line/off-line primitives.
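The protected construction C = (R, S) can be sketched concretely. Here SHA-256 over (K, R) stands in for the block cipher EK(R); that substitution, and all the function names, are assumptions of this sketch rather than choices from the talk:

```python
# Sketch of C = (R, S) with S = M xor Y and Y = E_K(R).
# SHA-256(K || R) is used as a stand-in PRF for E_K (an assumption).
import hashlib
import os

def encrypt(K: bytes, M: bytes) -> tuple:
    R = os.urandom(32)                         # fresh randomness: clean
    Y = hashlib.sha256(K + R).digest()         # Y = E_K(R): clean inputs only,
                                               # so K is never spoiled here
    S = bytes(m ^ y for m, y in zip(M, Y))     # only this step touches tainted M
    return R, S                                # K never meets a tainted input

def decrypt(K: bytes, R: bytes, S: bytes) -> bytes:
    Y = hashlib.sha256(K + R).digest()
    return bytes(s ^ y for s, y in zip(S, Y))

K = os.urandom(32)                             # long-term key, kept clean
M = b"attack at dawn"                          # adversary-influenced: tainted
R, S = encrypt(K, M)
print(decrypt(K, R, S) == M)                   # True: the scheme round-trips
```

The key point is the dataflow: the long-term key K only ever enters a computation whose inputs are clean, so under the taint leakage rules only the ephemeral Y (and the ciphertext) become leakable.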

  7. Relation to other models. Incomparable. The adversary is weaker with the taint model than with computational leakage, since values not depending on adversarial input don't leak. The adversary is stronger than with bounded leakage models, since it is OK to leak all inputs and the output of a computation with a tainted input. The taint model doesn't capture all attacks (e.g. power analysis, memory remanence attacks, etc.).

  8. Discussion. The contribution here is probably mostly terminology; the model is presumably implicit (or explicit?) in prior work. Results in the taint leakage model may be easy in some cases (e.g. using ephemeral keys; ref. Dodis talk in this workshop). Goals typically should be that leakage does at most "temporary damage". What can be done securely in this model?

  9. The End
