Learning and Testing Submodular Functions: Insights and Applications

Explore the world of submodular functions, from their role as a discrete analog of convexity/concavity to approximate learning techniques and positive results for valuation functions and beyond. Discover potential applications in optimization and algorithmic game theory.

  • Submodular Functions
  • Optimization
  • Algorithmic Game Theory
  • Learning
  • Applications


Presentation Transcript


  1. Learning and Testing Submodular Functions. Grigory Yaroslavtsev. Joint work with Sofya Raskhodnikova (SODA 13) and work in progress with Rocco Servedio. Columbia University, October 26, 2012.

  2. Submodularity. Discrete analog of convexity/concavity; captures the law of diminishing returns. Applications: optimization, algorithmic game theory. Let f: 2^X → [0, R]. Discrete derivative: ∂_x f(S) = f(S ∪ {x}) − f(S). Submodular function: ∂_x f(S) ≥ ∂_x f(T) for all S ⊆ T ⊆ X and x ∈ X ∖ T.
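A minimal brute-force sketch of this definition (an illustration added here, not part of the deck; all helper names are my own):

```python
from itertools import chain, combinations

def subsets(xs):
    """All subsets of the iterable xs, as frozensets."""
    return [frozenset(c) for c in chain.from_iterable(
        combinations(xs, r) for r in range(len(xs) + 1))]

def is_submodular(f, ground):
    """Check the diminishing-returns condition
    ∂_x f(S) >= ∂_x f(T) for all S ⊆ T ⊆ X and x ∈ X ∖ T."""
    for S in subsets(ground):
        for T in subsets(ground):
            if not S <= T:
                continue
            for x in ground:
                if x in T:
                    continue
                if f(S | {x}) - f(S) < f(T | {x}) - f(T):
                    return False
    return True

# Example: the truncated cardinality function f(S) = min(|S|, 2)
# (a matroid rank function) is monotone submodular.
print(is_submodular(lambda S: min(len(S), 2), {1, 2, 3, 4}))  # True
```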

  3. Exact learning. Q: Can one reconstruct a submodular f: 2^X → ℝ (correctly on all arguments) with poly(|X|) queries? A: No; only an Õ(√|X|)-approximation (multiplicative) is possible [Goemans, Harvey, Iwata, Mirrokni, SODA 09]. Q: What if we only require correctness on a (1 − ε)-fraction of points (PAC-style learning with membership queries under the uniform distribution), i.e. Pr_{S∼U(2^X)}[f′(S) = f(S)] ≥ 1 − ε, with probability ≥ 1/2 over the randomness of the learner? A: Almost as hard [Balcan, Harvey, STOC 11].

  4. Approximate learning. PMAC-learning (Multiplicative), with poly(|X|) queries: output f′ such that Pr_{S∼D}[f′(S) ≤ f(S) ≤ α·f′(S)] ≥ 1 − ε, with probability ≥ 1/2 over the randomness of the learner; α = Ω̃(|X|^{1/3}) is necessary over arbitrary distributions [BH 11]. PAAC-learning (Additive): output f′ such that Pr_{S∼U(2^X)}[|f′(S) − f(S)| ≤ α] ≥ 1 − ε, with probability ≥ 1/2 over the randomness of the learner. Running time: |X|^{O(log(1/α)/α²)} [Gupta, Hardt, Roth, Ullman, STOC 11]; |X|^{O(1/α²)}, polynomial for constant α [Cheraghchi, Klivans, Kothari, Lee, SODA 12].
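To make the two guarantees concrete, here is a small sketch (my illustration; f, h, and all parameter values are made up): it estimates how often a hypothesis h satisfies the multiplicative sandwich h(S) ≤ f(S) ≤ α·h(S) versus the additive bound |f(S) − h(S)| ≤ α on uniformly random sets.

```python
import random

def empirical_pmac_paac(f, h, ground, alpha_mult, alpha_add, samples=10000):
    """Estimate the fraction of uniformly random subsets on which h
    satisfies the PMAC sandwich and the PAAC additive bound."""
    ground = list(ground)
    pmac_ok = paac_ok = 0
    for _ in range(samples):
        S = frozenset(x for x in ground if random.random() < 0.5)
        fv, hv = f(S), h(S)
        if hv <= fv <= alpha_mult * hv:
            pmac_ok += 1
        if abs(fv - hv) <= alpha_add:
            paac_ok += 1
    return pmac_ok / samples, paac_ok / samples

# Toy example: f is a truncated cardinality function, h a crude scaled guess.
f = lambda S: min(len(S), 3)
h = lambda S: 0.8 * min(len(S), 3)   # hypothetical learned approximation
print(empirical_pmac_paac(f, h, range(10), alpha_mult=1.5, alpha_add=1.0))
```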

  5. Learning f: 2^X → [0, R]. For all algorithms below, ε = const.
  • Goemans, Harvey, Iwata, Mirrokni: multiplicative α = Õ(√|X|) approximation, holds everywhere; time poly(|X|).
  • Balcan, Harvey: PMAC; time poly(|X|); works under arbitrary distributions.
  • Gupta, Hardt, Roth, Ullman: PAAC (additive); time |X|^{O(log(1/α)/α²)}; tolerant queries.
  • Cheraghchi, Klivans, Kothari, Lee: PAAC (additive); time |X|^{O(1/α²)}; SQ-queries, agnostic.
  • Our result with Sofya: PAC (exact on a 1 − ε fraction) for f: 2^X → {0,…,R} (bounded integral range, R ≪ |X|); time poly(|X|) · 2^{O(R log(R/ε))}; agnostic.

  6. Learning: Bigger picture. Hierarchy of valuation functions [Badanidiyuru, Dobzinski, Fu, Kleinberg, Nisan, Roughgarden, SODA 12]: Subadditive ⊋ XOS (= fractionally subadditive) ⊋ Submodular ⊋ Gross substitutes ⊋ OXS ⊋ Additive (linear); studied with value and demand queries. Other positive results: learning valuation functions [Balcan, Constantin, Iwata, Wang, COLT 12]; PMAC-learning (sketching) valuation functions [BDFKNR 12]; PMAC-learning Lipschitz submodular functions [BH 10] (concentration around the average via Talagrand).

  7. Discrete convexity. Monotone convex f: {1,…,n} → {0,…,R}: [plot: since the range has only R + 1 values, all value changes are confined to an interval of ≤ R points]. Convex f: {1,…,n} → {0,…,R}: [plot: value changes are confined to the first R points (≤ R) and the last R points (≥ n − R)].

  8. Discrete submodularity. f: 2^X → {0,…,R}. Case study: R = 1 (Boolean submodular functions f: {0,1}^n → {0,1}). Monotone submodular = x_{i₁} ∨ x_{i₂} ∨ ⋯ ∨ x_{iₖ} (a single monotone clause). Submodular = (x_{i₁} ∨ ⋯ ∨ x_{iₖ}) ∧ (x̄_{j₁} ∨ ⋯ ∨ x̄_{jₗ}) (2-term CNF). [Diagrams: lattice examples of submodular and monotone submodular Boolean functions.]
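The R = 1 characterization is easy to sanity-check by brute force, reusing subsets/is_submodular from the sketch above (again my own illustration):

```python
ground = {0, 1, 2, 3}

def clause(S):                       # x0 ∨ x1 ∨ x2
    return 1 if S & {0, 1, 2} else 0

def two_term_cnf(S):                 # (x0 ∨ x1) ∧ (¬x2 ∨ ¬x3)
    pos = 1 if S & {0, 1} else 0
    neg = 0 if {2, 3} <= S else 1
    return pos * neg

print(is_submodular(clause, ground))        # True
print(is_submodular(two_term_cnf, ground))  # True
```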

  9. Discrete monotone submodularity. Monotone submodular f: 2^X → {0,…,R}. [Diagram on the subset lattice: the values f(S₁), f(S₂) on small subsets already control the values on larger sets; setup for the theorem on the next slide.]

  10. Discrete monotone submodularity. Theorem: for monotone submodular f: 2^X → {0,…,R} and all S: f(S) = max_{T ⊆ S, |T| ≤ R} f(T). Easy direction: f(S) ≥ max_{T ⊆ S, |T| ≤ R} f(T) (by monotonicity).

  11. Discrete monotone submodularity. Remains to show f(S) ≤ max_{T ⊆ S, |T| ≤ R} f(T). Let S′ be a smallest subset of S such that f(S′) = f(S). For every x ∈ S′ we have ∂_x f(S′ ∖ {x}) > 0, so the restriction of f to 2^{S′} is strictly increasing along any chain from ∅ to S′. Since the range is {0,…,R}, this forces |S′| ≤ R, and f(S) = f(S′) ≤ max_{T ⊆ S, |T| ≤ R} f(T).
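A brute-force check of the theorem on a sample monotone submodular function (a small coverage function; my illustration, not from the deck):

```python
from itertools import combinations

def check_small_subset_identity(f, ground, R):
    """Verify f(S) == max over T ⊆ S with |T| <= R of f(T), for every S."""
    ground = list(ground)
    for r in range(len(ground) + 1):
        for S in combinations(ground, r):
            S = frozenset(S)
            best = max(f(frozenset(T))
                       for k in range(min(R, len(S)) + 1)
                       for T in combinations(S, k))
            if f(S) != best:
                return False
    return True

# Coverage function: f(S) = size of the union of the sets indexed by S.
# The union of all six sets is {1,2,3}, so the range is {0,...,3} (R = 3).
sets = {0: {1}, 1: {2}, 2: {3}, 3: {1, 2}, 4: {2, 3}, 5: {1, 3}}
f = lambda S: len(set().union(*(sets[i] for i in S))) if S else 0
print(check_small_subset_identity(f, sets.keys(), R=3))  # True
```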

  12. Representation by a formula. Theorem: for monotone submodular f: 2^X → {0,…,R} and all S: f(S) = max_{T ⊆ S, |T| ≤ R} f(T). Notation switch: S ∈ 2^X ↔ (x₁,…,x_n) ∈ {0,1}^n, with x_i = 1 iff i ∈ S. A (monotone) pseudo-Boolean k-DNF (with constants a_t ≥ 0, no negations) has the form max_t [a_t · x_{t₁} ∧ x_{t₂} ∧ ⋯ ∧ x_{t_k}]. Corollary: a (monotone) submodular f(x₁,…,x_n): {0,1}^n → {0,…,R} can be represented as a (monotone) pseudo-Boolean 2R-DNF with constants a_t ∈ {0,…,R}.
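A tiny evaluator for this representation (my sketch; the (a_t, indices) term encoding is an assumption for illustration, not the paper's notation):

```python
def eval_pb_dnf(terms, x):
    """Evaluate a pseudo-Boolean DNF: the max over terms of a_t if all of
    the term's variables are 1, else 0. `terms` is a list of (a_t, indices)."""
    return max((a if all(x[i] for i in idx) else 0) for a, idx in terms)

# f(x) = max(2·x0x1, 3·x1x2x3, 1·x2): a pB 3-DNF with constants in {0,...,3}.
terms = [(2, (0, 1)), (3, (1, 2, 3)), (1, (2,))]
print(eval_pb_dnf(terms, [1, 1, 0, 0]))  # 2
print(eval_pb_dnf(terms, [0, 1, 1, 1]))  # 3
```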

  13. Discrete submodularity. Submodular f(x₁,…,x_n): {0,1}^n → {0,…,R} can be represented as a pseudo-Boolean 2R-DNF with constants a_t ∈ {0,…,R}. Hint [Lovász] (submodular monotonization): given submodular f, define f_mon(S) = min_{T ⊇ S} f(T). Then f_mon is monotone and submodular.
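A sketch of the monotonization and its claimed properties, reusing subsets/is_submodular from above (my illustration; the example f is a made-up non-monotone submodular function):

```python
def monotonization(f, ground):
    """Lovász-style monotonization: f_mon(S) = min over T ⊇ S of f(T)."""
    ground = frozenset(ground)
    def f_mon(S):
        S = frozenset(S)
        return min(f(S | T) for T in subsets(list(ground - S)))
    return f_mon

def is_monotone(f, ground):
    """Check f(S) <= f(S ∪ {x}) for every S ⊆ X and x ∈ X."""
    return all(f(S) <= f(S | {x})
               for S in subsets(list(ground)) for x in ground)

# Submodular but non-monotone: truncated cardinality minus a penalty for 3.
X = {0, 1, 2, 3}
f = lambda S: min(len(S), 2) - (1 if 3 in S else 0)
f_mon = monotonization(f, X)
print(is_monotone(f, X), is_monotone(f_mon, X))  # False True
print(is_submodular(f_mon, X))                   # True
```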

  14. Learning pB-formulas and k-DNF. pB(k, R) = the class of pB-DNFs of width k with constants a_t ∈ {0,…,R}. The i-slice f_i: {0,1}^n → {0,1} is defined by f_i(x₁,…,x_n) = 1 iff f(x₁,…,x_n) ≥ i. If f ∈ pB(k, R), its i-slices f_i are k-DNFs and f(x₁,…,x_n) = max_{1 ≤ i ≤ R} (i · f_i(x₁,…,x_n)). PAC-learning: output f′ with Pr_{x∼U({0,1}^n)}[f′(x) = f(x)] ≥ 1 − ε, with probability ≥ 1/2 over the randomness of the learner, in time poly(n).
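Slicing and the max-reconstruction are easy to check directly, reusing eval_pb_dnf and terms from the previous sketch (my illustration):

```python
from itertools import product

def slice_fn(f, i):
    """The i-slice of f: 1 iff f(x) >= i."""
    return lambda x: 1 if f(x) >= i else 0

def reconstruct_from_slices(f, R):
    """Rebuild f as max over 1 <= i <= R of i * f_i(x)."""
    slices = [(i, slice_fn(f, i)) for i in range(1, R + 1)]
    return lambda x: max(i * s(x) for i, s in slices)

# Check on the pB 3-DNF from the earlier sketch (range {0,...,3}).
f = lambda x: eval_pb_dnf(terms, x)
g = reconstruct_from_slices(f, R=3)
print(all(f(list(x)) == g(list(x)) for x in product([0, 1], repeat=4)))  # True
```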

  15. Learning pB-formulas and k-DNF. Learn every i-slice f_i on a 1 − ε′ = 1 − ε/R fraction of arguments (take ε′ = ε/R). Learning k-DNF: Fourier sparsity s_ε = k^{O(k log(1/ε))}; Kushilevitz-Mansour (Goldreich-Levin) uses poly(n, s_ε) queries/time. "Attribute-efficient learning": poly(s_ε) · log n queries. Lower bound: Ω(2^k) queries are needed to learn a random k-junta (a special case of a k-DNF) up to constant precision. Optimizations: slightly better than KM/GL by looking at the Fourier spectrum of pB(k, R) (see the SODA paper: switching lemma ⇒ ℓ₁ bound); do all R iterations of KM/GL in parallel by reusing queries.
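KM/GL is built on estimating Fourier coefficients from queries; here is only that basic estimation step, by uniform sampling (a simplified illustration of the subroutine, not the actual KM/GL tree search):

```python
import random
from itertools import product

def chi(S, x):
    """Character χ_S(x) = (-1)^(sum of x_i over i in S), for x in {0,1}^n."""
    return -1 if sum(x[i] for i in S) % 2 else 1

def estimate_fourier_coefficient(f, S, n, samples=20000):
    """Estimate the coefficient E_x[f(x) χ_S(x)] by uniform sampling."""
    total = 0.0
    for _ in range(samples):
        x = [random.randint(0, 1) for _ in range(n)]
        total += f(x) * chi(S, x)
    return total / samples

# Example: the 2-DNF f(x) = (x0 ∧ x1) ∨ x2 on n = 3 variables.
f = lambda x: 1 if (x[0] and x[1]) or x[2] else 0
exact = sum(f(list(x)) * chi({0}, x) for x in product([0, 1], repeat=3)) / 8
print(exact, estimate_fourier_coefficient(f, {0}, n=3))  # -0.125 vs estimate
```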

  16. Property testing. Let C be the class of submodular f: {0,1}^n → {0,…,R}. How do we (approximately) test whether a given f is in C? A property tester is a (randomized) algorithm for distinguishing: 1. f ∈ C; 2. f is ε-far from C: min_{g ∈ C} |{x : f(x) ≠ g(x)}| ≥ ε·2^n. Key idea: k-DNFs have small approximate representations [Gopalan, Meka, Reingold, CCC 12] (using quasi-sunflowers [Rossman 10]): for every ε > 0 and every k-DNF formula F there exists a k-DNF formula F′ of size (k log(1/ε))^{O(k)} such that |{x : F(x) ≠ F′(x)}| ≤ ε·2^n.
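For contrast with the implicit-learning tester on the next slide, here is the naive sampling approach one might try first (my illustration, not the tester from the paper): it hunts for random violations of diminishing returns.

```python
import random

def naive_violation_sampler(f, ground, trials=1000):
    """Sample random S ⊆ T and x ∉ T and reject on any witnessed
    violation of ∂_x f(S) >= ∂_x f(T). Illustration only: this one-sided
    check is not the tester from the SODA'13 work."""
    ground = list(ground)
    for _ in range(trials):
        T = {i for i in ground if random.random() < 0.5}
        S = {i for i in T if random.random() < 0.5}
        rest = [i for i in ground if i not in T]
        if not rest:
            continue
        x = random.choice(rest)
        S, T = frozenset(S), frozenset(T)
        if f(S | {x}) - f(S) < f(T | {x}) - f(T):
            return "reject"
    return "accept"

print(naive_violation_sampler(lambda S: min(len(S), 2), range(8)))  # accept
print(naive_violation_sampler(lambda S: len(S) ** 2, range(8)))     # reject (w.h.p.)
```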

  17. Testing by implicit learning. Good approximation by juntas ⇒ efficient property testing [surveys: Ron; Servedio]. Here: ε-approximation by a J(ε)-junta with good dependence on ε, J(ε) = (R log(1/ε))^{O(R)}, via [Blais, Onak]-style sunflowers for submodular functions. Query complexity: [(R log R + log(1/ε))/ε]^{O(R)}, independent of n. Running time: exponential in the query complexity (we think it can be reduced to roughly the query complexity). We also have a lower bound for testing k-DNF (reduction from Gap Set Intersection: distinguishing a random k-junta from a random (k + O(log k))-junta requires exponentially many queries).
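A brute-force notion of "distance to the nearest k-junta", to make the junta-approximation statement concrete (my illustration; feasible only for tiny n):

```python
from collections import Counter
from itertools import combinations, product

def distance_to_junta(f, n, k):
    """Fraction of inputs where f differs from the closest k-junta.
    Brute force over coordinate sets J; the optimal junta on J takes the
    most common value of f on each restriction of the J-coordinates."""
    points = list(product([0, 1], repeat=n))
    best = len(points)
    for J in combinations(range(n), k):
        buckets = {}
        for x in points:
            buckets.setdefault(tuple(x[i] for i in J), []).append(f(list(x)))
        errors = sum(len(v) - Counter(v).most_common(1)[0][1]
                     for v in buckets.values())
        best = min(best, errors)
    return best / len(points)

# f depends only on x0, x1, so its distance to the class of 2-juntas is 0.
f = lambda x: min(x[0] + x[1], 1)
print(distance_to_junta(f, n=4, k=2))  # 0.0
```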

  18. Previous work on testing submodularity. For f: {0,1}^n → [0, R] [Parnas, Ron, Rubinfeld 03; Seshadhri, Vondrak, ICS 11]: upper bound (1/ε)^{O(√n)}; the known lower bound is much weaker, leaving a gap in query complexity. Special case: coverage functions [Chakrabarty, Huang, ICALP 12].

  19. Thanks!
