Memory-Sample Lower Bounds for LWE





Presentation Transcript


  1. Memory-Sample Lower Bounds for LWE
     Mingqi Lu, Junzhao Yang (Tsinghua University)
     CRYPTO 2024, August 2024

  2. Learning in a Stream
     Setup. Secret x ∈ X, samples a ∈ A; the learning task is described by a learning matrix M : A × X → [q].
     Given a stream (a_1, M(a_1, x)), (a_2, M(a_2, x)), ..., (a_m, M(a_m, x)), the learner reads it in one pass with limited memory and outputs x with probability 0.99.
     Example (Learning Parity). Secret and samples are n-bit vectors (|A| = |X| = 2^n), and the labels are inner products M(a, x) = ⟨a, x⟩ mod 2.
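
To make the streaming model concrete, here is a minimal sketch (our code; the names parity_stream and run_one_pass are ours, not from the talk) that generates a learning-parity stream and drives a one-pass learner which may only carry a bounded state between samples.

```python
import random

def parity_stream(x, num_samples):
    """Yield (a, <a, x> mod 2) pairs for a hidden n-bit secret x (a list of 0/1 ints)."""
    n = len(x)
    for _ in range(num_samples):
        a = [random.randrange(2) for _ in range(n)]
        yield a, sum(ai * xi for ai, xi in zip(a, x)) % 2

def run_one_pass(stream, init_state, update):
    """Drive a one-pass learner: it sees each sample exactly once and may only
    keep `state` (its bounded memory) between samples."""
    state = init_state
    for a, b in stream:
        state = update(state, a, b)
    return state
```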

  3. Memory-Sample Trade-off
     Upper bounds for Learning Parity:
     • Memorizing samples: O(n^2) memory and O(n) samples.
     • Enumerating the secret: O(n) memory and 2^O(n) samples.
     Raz first proved an information-theoretic lower bound [Raz 18]: learners with n^2/25 memory cannot learn x from 2^Ω(n) samples with non-negligible success probability. No intermediate algorithms!
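
For concreteness, here is a minimal sketch (our code, not from the talk) of the "memorizing samples" strategy for parity: store about n equations (roughly n^2 bits) and recover x by Gaussian elimination mod 2. The opposite extreme, enumerating all 2^n candidate secrets and discarding the inconsistent ones, needs only O(n) memory but exponentially many samples.

```python
def solve_parity(equations, n):
    """Recover the n-bit secret x from pairs (a, b) satisfying <a, x> = b (mod 2).

    Storing ~n linearly independent equations (n + 1 bits each) and running
    Gauss-Jordan elimination is the O(n^2)-memory, O(n)-sample strategy.
    """
    rows = [list(a) + [b] for a, b in equations]
    piv = {}                                  # column -> index of its pivot row
    for col in range(n):
        i = next((i for i, r in enumerate(rows)
                  if i not in piv.values() and r[col] == 1), None)
        if i is None:
            raise ValueError("samples do not determine bit %d" % col)
        piv[col] = i
        for j, r in enumerate(rows):          # eliminate column `col` elsewhere
            if j != i and r[col] == 1:
                rows[j] = [u ^ v for u, v in zip(r, rows[i])]
    return [rows[piv[col]][n] for col in range(n)]

# e.g. solve_parity([([1,0,0], 1), ([0,1,0], 0), ([0,0,1], 1)], 3) -> [1, 0, 1]
```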

  4. Motivation
     Can we get a similar information-theoretic lower bound for LWE?
     [Figure: an LWE instance, A^T · x + e]

  5. Learning with Errors (LWE)
     [Figure: A^T · x + e]
     Setup. Secret x ∈ Z_q^n, samples a ∈ Z_q^n, errors e ∈ Z_q. Learning matrix M(a, x) = ⟨a, x⟩ mod q; error distribution χ (discrete Gaussian / bounded uniform).
     Given a stream (a_1, ⟨a_1, x⟩ + e_1), (a_2, ⟨a_2, x⟩ + e_2), ..., (a_m, ⟨a_m, x⟩ + e_m), the learner reads it in one pass with limited memory and outputs x with probability 0.99.
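
A toy generator for this stream (our sketch; the function name is ours, and a rounded Gaussian stands in for a proper discrete Gaussian):

```python
import random

def lwe_stream(x, q, sigma, num_samples):
    """Yield LWE samples (a, <a, x> + e mod q) for a secret x in Z_q^n.

    The error e is drawn from a rounded Gaussian of width sigma; a
    bounded-uniform error would be generated the same way.
    """
    n = len(x)
    for _ in range(num_samples):
        a = [random.randrange(q) for _ in range(n)]
        e = round(random.gauss(0, sigma))
        yield a, (sum(ai * xi for ai, xi in zip(a, x)) + e) % q
```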

  6. Our Results
     For LWE with modulus q and error distribution χ_σ over Z_q, any learner either uses Ω(n^2 · log^2(q) / log(q/σ)) memory or 2^Ω(n) samples.
     • In some regimes (e.g., q is a product of large primes), the lower bound is tight: n^2 · log^2(q) / log(q/σ) matches the information-theoretic upper bound.
     • In other cases, the lower bound is almost tight (ignoring log factors).
     Remark. The lower bounds are unconditional and generalize to Learning with Rounding (LWR).
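
As a rough sanity check (our gloss, not from the slides): if each LWE label reveals about log(q/σ) bits about the secret, then the "memorize enough samples" strategy needs roughly

```latex
% Secret x in Z_q^n has n*log(q) bits of entropy; each label gives ~log(q/sigma) bits.
\[
  \#\text{samples} \approx \frac{n\log q}{\log(q/\sigma)},
  \qquad
  \text{memory} \approx
  \underbrace{\frac{n\log q}{\log(q/\sigma)}}_{\#\text{samples}}
  \cdot \underbrace{n\log q}_{\text{bits per sample}}
  = \frac{n^2\log^2 q}{\log(q/\sigma)},
\]
% which is the information-theoretic upper bound the lower bound is compared against.
```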

  7. Technical Overview
     Build on the extractor-framework proof of [Garg-Raz-Tal 18]: if the learning matrix has a suitable extractor property, then a memory-sample lower bound follows.
     1. Generalize from 2 to q: our new characterization, the q-balanced learning matrix. Adapt the extractor-based proof to q-balanced matrices, and show that LWE is q-balanced using reductions to almost orthogonality.
     2. Adapt the analysis of errors to [q]-valued labels. This is similar to [Garg-Kothari-Liu-Raz 21] (Learning Parity with Noise).

  8. From 2 to q
     Recall a, x ∈ Z_q^n and M(a, x) = ⟨a, x⟩ mod q.
     First attempt: write the learning matrix by bits. Given the log q bits of each label, M^(1)(a, x), M^(2)(a, x), ..., apply the lower bound to log q separate learning matrices?
     Failed! The samples are NOT independent, so lower bounds for 2 do not trivially imply lower bounds for q.

  9. The Extractor Property
     [GRT 18] M : A × X → {-1, 1} is a (k, ℓ, r)-L2-extractor if for every non-negative f : X → R_{≥0} with ||f||_2 / ||f||_1 ≤ 2^ℓ, at most a 2^{-k} fraction of rows a satisfy |⟨M(a, ·), f⟩| / ||f||_1 > 2^{-r}.
     Intuition. When the secret x ~ f / ||f||_1, the label M(a, x) is balanced for most a.

  10. Our Characterization: q-balanced
      [This work] M : A × X → [q] is q-balanced with parameters (k, ℓ, r) if for every non-negative f : X → R_{≥0} with ||f||_2 / ||f||_1 ≤ 2^ℓ, at most a 2^{-k} fraction of rows a have some value v ∈ [q] for which ⟨M_a^v, f⟩ / ||f||_1 deviates from 1/q by more than 2^{-r}, where M_a^v(x) = 1 iff M(a, x) = v.
      Intuition. When the secret x ~ f / ||f||_1, the label M(a, x) is balanced for most a: Pr[M(a, x) = v] ≈ 1/q.
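
The intuition can be checked by brute force on toy sizes. The sketch below (our code; all names and parameters are ours) takes a fairly spread distribution f over Z_q^n and measures, for every row a, how far the induced label distribution of ⟨a, x⟩ mod q is from uniform.

```python
import itertools
import random

def label_imbalance(q, n, f):
    """For each row a in Z_q^n, return max_v |Pr_{x~f}[<a, x> = v mod q] - 1/q|.

    f maps each secret x (a tuple in Z_q^n) to a non-negative weight and is
    normalized here.  Small values for most rows a are the balanced behaviour
    described above (brute force, toy sizes only)."""
    total = sum(f.values())
    imbalance = {}
    for a in itertools.product(range(q), repeat=n):
        mass = [0.0] * q
        for x, w in f.items():
            mass[sum(ai * xi for ai, xi in zip(a, x)) % q] += w / total
        imbalance[a] = max(abs(p - 1.0 / q) for p in mass)
    return imbalance

# Example: a flat distribution over a large random subset of Z_5^3 (small
# L2/L1 ratio).  Most rows then give nearly uniform labels; the all-zero row
# is the obvious exception.
q, n = 5, 3
subset = random.sample(list(itertools.product(range(q), repeat=n)), 60)
imb = label_imbalance(q, n, {x: 1.0 for x in subset})
print(sorted(imb.values())[len(imb) // 2])      # median imbalance over rows
```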

  11. From 2 to q: Our Idea
      Recall a, x ∈ Z_q^n and M(a, x) = ⟨a, x⟩ mod q.
      Our idea: generalize to q-balanced matrices. Two directions:
      1. Adapt the extractor-based proof to q-balanced matrices (the most tedious and technical part).
      2. Show that LWE is q-balanced.

  12. Direction #1: Computation Model
      Unbounded computational power & limited memory!
      (Read-Once) Branching Program. A DAG with m + 1 layers, each layer containing 2^s vertices (s = memory in bits). Taking input (a_1, b_1), ..., (a_m, b_m), the computational path
      • starts from v_0 in layer 0,
      • moves along the edge labeled (a_i, b_i) from layer i-1 to layer i,
      • arrives at some v_m in layer m and outputs a guess x̃(v_m).
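
A toy rendering of this model (our code; the dictionary representation is an assumption, not the paper's formalism): store, for each layer, a transition table from (current vertex, sample) to the next vertex, plus an output labeling of the last layer. Samples must be hashable, e.g. tuples (a, b).

```python
class ROBP:
    """Toy read-once branching program: m + 1 layers of bounded width.

    trans[i] maps (vertex, sample) -> vertex, the edge followed from layer i to
    layer i + 1 when the i-th sample is read; output maps a final-layer vertex
    to a guess for the secret.  The base-2 log of the width is the memory:
    remembering the current vertex is all the learner carries between samples.
    """

    def __init__(self, trans, output):
        self.trans = trans      # list of dicts, one per layer
        self.output = output    # dict: final vertex -> guessed secret

    def run(self, samples):
        v = 0                   # designated start vertex v_0 of layer 0
        for i, sample in enumerate(samples):
            v = self.trans[i][(v, sample)]
        return self.output[v]
```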

  13. Direction #1: Proof Sketch
      Posterior distribution: P_{x|v}(x') = Pr[x = x' | reach v].
      The success probability is low when the learner reaches a vertex v with a spread posterior. For a fixed target vertex s with a concentrated posterior, bound the probability of reaching s by 2^{-Ω(n^2)}.
      Progress measure towards s. Define the similarity ⟨P_{x|v}, P_{x|s}⟩; by q-balancedness,
      • for most samples a, the similarity does not increase;
      • for negligibly many a, the similarity blows up, but only by a bounded factor; this must happen Ω(n) times to reach s.
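
To make "posterior" concrete: for an observer with unbounded memory, the posterior over secrets after a few samples is a plain Bayes update against the error distribution. The sketch below (our illustration; it is not the vertex-conditioned posterior P_{x|v} of the proof, and the rounded-Gaussian weight is an assumption) computes it by brute force for tiny q and n.

```python
import itertools
from math import exp

def posterior(samples, q, n, sigma):
    """Brute-force posterior Pr[x | samples] over all secrets x in Z_q^n,
    assuming a uniform prior and rounded-Gaussian-like error weights.
    Only feasible for tiny q and n; purely illustrative."""
    def err_weight(d):
        d = min(d % q, (-d) % q)            # distance to 0 on the mod-q circle
        return exp(-(d * d) / (2.0 * sigma * sigma))
    post = {}
    for x in itertools.product(range(q), repeat=n):
        w = 1.0
        for a, b in samples:
            w *= err_weight(b - sum(ai * xi for ai, xi in zip(a, x)))
        post[x] = w
    total = sum(post.values())
    return {x: w / total for x, w in post.items()}

# With few samples the posterior stays spread over many secrets; it only
# concentrates as more samples accumulate.
```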

  14. Direction #2: LWE is q-balanced
      The Reduction. Define N(a, v, x) = q - 1 if M(a, x) = v, and -1 if M(a, x) ≠ v.
      1. The rows of N are almost orthogonal.
      2. Almost orthogonality implies q-balanced.
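
A toy brute-force illustration of the reduction (our code and our choice of example; for prime q, rows of N built from non-proportional sample vectors come out exactly orthogonal here, and the exceptional pairs that share essentially the same a are what the "almost" accounts for):

```python
import itertools

def N_row(a, v, q, n):
    """Row of the reduction matrix N: the entry at secret x is q-1 when
    <a, x> = v (mod q) and -1 otherwise (brute force over Z_q^n)."""
    return [q - 1 if sum(ai * xi for ai, xi in zip(a, x)) % q == v else -1
            for x in itertools.product(range(q), repeat=n)]

def inner(r1, r2):
    return sum(s * t for s, t in zip(r1, r2))

q, n = 5, 2
r1 = N_row((1, 2), 0, q, n)
r2 = N_row((1, 1), 4, q, n)   # (1, 1) is not a multiple of (1, 2) mod 5
r3 = N_row((2, 4), 1, q, n)   # (2, 4) = 2 * (1, 2) mod 5
print(inner(r1, r1), inner(r1, r2), inner(r1, r3))   # 100, 0, nonzero
```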

  15. Discussions on the Noisy Case
      The error analysis naturally generalizes to q-balanced matrices using ideas similar to [GKLR 21]. But this does NOT give tight lower bounds for LPN (Learning Parity with Noise).
      The Reason. The error of LPN is noisier than that of LWE:
      • the lower bound depends on χ_max = max_{v ∈ [q]} χ(v);
      • the upper bound depends on the entropy of χ (bits of information per sample).
      The two only match in the error regime of LWE.
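
One way to read the last point (our interpretation, not stated this way on the slide): the lower bound tracks the min-entropy -log2(max_v χ(v)) of the error, while the information carried per sample is governed by the Shannon entropy of χ. For a wide Gaussian-like LWE error the two essentially coincide, but for a sparse Bernoulli LPN error they are far apart:

```python
from math import exp, log2

def entropies(chi):
    """Return (min-entropy, Shannon entropy), in bits, of a probability vector."""
    return -log2(max(chi)), -sum(p * log2(p) for p in chi if p > 0)

# LPN-style error: flip the label with probability tau.
tau = 0.05
print(entropies([1 - tau, tau]))      # ~ (0.07, 0.29): Shannon entropy ~4x larger

# LWE-style error: rounded Gaussian of width sigma, reduced mod q.
q, sigma = 257, 8.0
w = [exp(-(min(v, q - v) ** 2) / (2 * sigma ** 2)) for v in range(q)]
s = sum(w)
print(entropies([p / s for p in w]))  # ~ (4.3, 5.0): the two nearly coincide
```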

  16. Thanks for listening! Questions are welcome.
