Submodular Maximization with Cardinality Constraints: Algorithms and Applications


Explore submodular maximization with cardinality constraints, including the classical greedy algorithm, random greedy algorithm, and recent results for both monotone and non-monotone functions. Learn about the applications of submodular set functions in various settings and the efficiency of different algorithms in maximizing these functions subject to constraints.

  • Submodular Maximization
  • Cardinality Constraints
  • Algorithms
  • Applications
  • Greedy Algorithm




Presentation Transcript


  1. Submodular Maximization with Cardinality Constraints. Moran Feldman. Joint work with: Niv Buchbinder, Joseph (Seffi) Naor, and Roy Schwartz.

  2. Set Functions. Definition: given a ground set N, a set function f : 2^N → R assigns a number to every subset of the ground set. Notation: given a set A and an element u, f_u(A) denotes the marginal contribution of u to A: f_u(A) = f(A ∪ {u}) − f(A). Properties a set function may have: Non-negativity: f(A) ≥ 0 for every A ⊆ N. Monotonicity: f(A) ≤ f(B) for every A ⊆ B ⊆ N. Submodularity: f_u(A) ≥ f_u(B) for every A ⊆ B ⊆ N and u ∉ B; equivalently, f(A) + f(B) ≥ f(A ∪ B) + f(A ∩ B) for every A, B ⊆ N.
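
To make the notation concrete, here is a minimal Python sketch (the toy function f(A) = sqrt(|A|) is an illustrative example, not from the talk) that evaluates marginal contributions and spot-checks the diminishing-returns property on one instance:

    import math

    def f(A):
        # A toy monotone submodular function: f(A) = sqrt(|A|),
        # a concave function of the cardinality.
        return math.sqrt(len(A))

    def marginal(f, u, A):
        # f_u(A) = f(A | {u}) - f(A): the marginal contribution of u to A.
        return f(A | {u}) - f(A)

    A = {1}
    B = {1, 2, 3}
    # Diminishing returns: the marginal contribution of a new element with
    # respect to the smaller set A (a subset of B) is at least its marginal
    # contribution with respect to the larger set B.
    assert marginal(f, 4, A) >= marginal(f, 4, B)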

  3. Where Can One Find Submodular Set Functions? In combinatorial settings:
  • Ground set: the nodes of a graph. Submodular function: the number of edges leaving a set of nodes (the cut function).
  • Ground set: a collection of sets. Submodular function: the number of elements in the union of a sub-collection (the coverage function).
  In applicative settings: utility/cost functions in economics (economies of scale), and the influence of a set of users in a social network.
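
For concreteness, a minimal sketch of the coverage example above, using a small hand-made collection of sets (the data is purely illustrative):

    # Coverage function: the ground set is a collection of sets, and f(S)
    # is the number of elements covered by the union of the chosen sets.
    collection = {
        'a': {1, 2, 3},
        'b': {3, 4},
        'c': {4, 5, 6},
    }

    def coverage(chosen):
        covered = set()
        for name in chosen:
            covered |= collection[name]
        return len(covered)

    print(coverage({'a'}))            # 3
    print(coverage({'a', 'b'}))       # 4: element 3 is counted only once
    print(coverage({'a', 'b', 'c'}))  # 6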

  4. Maximization Subject to a Cardinality Constraint. Instance: a non-negative submodular function f : 2^N → R+ and an integer k. Objective: find a subset S ⊆ N of size at most k maximizing f(S). Accessing the function: a representation of f can be exponential in the size of the ground set, so the algorithm accesses f via an oracle. Value oracle: given a set S, it returns f(S).
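
To make the oracle model concrete, a possible sketch of a value-oracle wrapper that counts evaluations of f (the wrapper and its names are illustrative, not part of the talk):

    class ValueOracle:
        # Wraps a set function and counts value queries, i.e. evaluations f(S).
        def __init__(self, f):
            self.f = f
            self.queries = 0

        def __call__(self, S):
            self.queries += 1
            return self.f(frozenset(S))

    # The algorithms below interact with f only through calls of the form
    # oracle(S); their cost is measured by the number of such calls.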

  5. The (Classical) Greedy Algorithm. The algorithm does k iterations; in each iteration it picks the element with the maximum marginal contribution. More formally: 1. Let S_0 ← ∅. 2. For i = 1 to k do: 3. Let u_i be the element maximizing f_{u_i}(S_{i−1}). 4. Let S_i ← S_{i−1} ∪ {u_i}. 5. Return S_k.
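
A minimal Python sketch of the classical greedy algorithm above, assuming f is given as a value oracle over a ground set N with k ≤ |N| (names are illustrative):

    def greedy(f, N, k):
        # Classical greedy: k iterations, each adding the element with the
        # largest marginal contribution f_u(S) = f(S | {u}) - f(S).
        S = set()
        for _ in range(k):
            best_u, best_gain = None, float('-inf')
            for u in set(N) - S:
                gain = f(S | {u}) - f(S)
                if gain > best_gain:
                    best_u, best_gain = u, gain
            S.add(best_u)
        return S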

  6. Greedy Achieves. For monotone functions: a 1 − 1/e approximation [Nemhauser et al. 78], matching the 1 − 1/e hardness of [Nemhauser et al. 78]; time complexity O(nk). For non-monotone functions: same time complexity, but no constant approximation ratio. Recent results for non-monotone functions: 0.325 approximation (simulated annealing) [Oveis Gharan and Vondrak 11]; 1/e − o(1) approximation (measured continuous greedy) [Feldman et al. 11]; 0.491 hardness [Oveis Gharan and Vondrak 11]. All known algorithms are much slower than the greedy algorithm.

  7. The Random Greedy Algorithm. The algorithm does k iterations; in each iteration it picks, uniformly at random, one element out of the k elements with the largest marginal contributions. More formally: 1. Let S_0 ← ∅. 2. For i = 1 to k do: 3. Let M_i be the set of k elements maximizing f_u(S_{i−1}). 4. Let u_i be a uniformly random element of M_i. 5. Let S_i ← S_{i−1} ∪ {u_i}. 6. Return S_k.
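
A minimal Python sketch of the random greedy algorithm above, assuming a value oracle f over a ground set N with at least k candidates remaining in every iteration (which the dummy elements of the next slide guarantee; names are illustrative):

    import heapq
    import random

    def random_greedy(f, N, k):
        # Random greedy: in each of the k iterations, pick uniformly at random
        # one of the k elements with the largest marginal contributions.
        S = set()
        for _ in range(k):
            M = heapq.nlargest(k, set(N) - S, key=lambda u: f(S | {u}) - f(S))
            S.add(random.choice(M))
        return S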

  8. Reduction: we add k dummy elements of value 0 and remove them at the end; this allows us to assume that OPT has size exactly k. Helper lemma: for a non-negative submodular function g : 2^N → R+ and a random set R containing every element with probability at most p, E[g(R)] ≥ (1 − p) · g(∅). Similar to a lemma from [Feige et al. 2007]. Intuition: adding all the elements can reduce the value from g(∅) by at most g(∅), down to 0; adding every element with probability at most p should therefore reduce g(∅) by no more than p · g(∅).
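
A minimal sketch of the dummy-element reduction, in the same value-oracle style as above (the helper names are illustrative):

    def add_dummies(f, N, k):
        # Extend the ground set with k dummy elements that never change the
        # value of f; this lets the analysis assume OPT has size exactly k.
        dummies = {('dummy', i) for i in range(k)}

        def f_ext(S):
            # Dummy elements contribute nothing: evaluate f on the real part only.
            return f({u for u in S if u not in dummies})

        return f_ext, set(N) | dummies, dummies

    # After running an algorithm on the extended instance, the dummy elements
    # are discarded from the returned solution, which does not change its value.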

  9. Algorithm Analysis. First objective: lower bound E[f(OPT ∪ S_i)]. Method: show that no element belongs to S_i with large probability, and then apply the helper lemma. Observation: in every iteration i, every element outside of S_{i−1} has probability at most 1/k of getting into S_i. Corollary: an element belongs to S_i with probability at most 1 − (1 − 1/k)^i. Applying the helper lemma: let g(S) = f(OPT ∪ S), and observe that g is non-negative and submodular. Then E[f(OPT ∪ S_i)] = E[g(S_i)] ≥ (1 − 1/k)^i · g(∅) = (1 − 1/k)^i · f(OPT).
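
For completeness, a short worked derivation of the corollary and its application (a sketch using only the observation and the helper lemma above):

    % Each fixed element u outside S_{i-1} is added in iteration i with
    % probability at most 1/k, so it stays outside the solution for the first
    % i iterations with probability at least (1 - 1/k)^i:
    \[
      \Pr[u \in S_i] \le 1 - \left(1 - \tfrac{1}{k}\right)^{i}.
    \]
    % Plugging p = 1 - (1 - 1/k)^i into the helper lemma applied to
    % g(S) = f(OPT \cup S) gives
    \[
      \mathbb{E}[f(\mathrm{OPT} \cup S_i)] \ge \left(1 - \tfrac{1}{k}\right)^{i} f(\mathrm{OPT}).
    \]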

  10. Algorithm Analysis (cont.). In iteration i: fix everything that happened before iteration i; all expectations below are conditioned on this history. By submodularity: Σ_{u ∈ OPT} f_u(S_{i−1}) ≥ f(OPT ∪ S_{i−1}) − f(S_{i−1}). The element u_i is picked uniformly at random from M_i, and OPT is a potential candidate to be M_i, hence: E[f(S_i) − f(S_{i−1})] = E[f_{u_i}(S_{i−1})] = (1/k) Σ_{u ∈ M_i} f_u(S_{i−1}) ≥ (1/k) Σ_{u ∈ OPT} f_u(S_{i−1}) ≥ [f(OPT ∪ S_{i−1}) − f(S_{i−1})] / k.

  11. Algorithm Analysis (cont.). Unfixing the history and using the previous observations, we get: E[f(S_i)] − E[f(S_{i−1})] ≥ [(1 − 1/k)^(i−1) · f(OPT) − E[f(S_{i−1})]] / k. Adding up all iterations: we have a lower bound on the (expected) improvement in each iteration, and using induction it is possible to prove that E[f(S_i)] ≥ (i/k) · (1 − 1/k)^(i−1) · f(OPT); in particular, E[f(S_k)] ≥ (1 − 1/k)^(k−1) · f(OPT) ≥ (1/e) · f(OPT). Remarks: this algorithm is both faster than previous ones and gets rid of the o(1) in the approximation ratio; for monotone functions it achieves a 1 − 1/e approximation (in expectation).
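
A sketch of the induction behind the bound above, combining the per-iteration inequality from the previous slide with the helper-lemma bound on E[f(OPT ∪ S_{i−1})]:

    % Claim: E[f(S_i)] >= (i/k)(1 - 1/k)^{i-1} f(OPT); the base case i = 0 is
    % trivial since f is non-negative. Induction step:
    \begin{align*}
    \mathbb{E}[f(S_i)]
      &\ge \left(1-\tfrac{1}{k}\right)\mathbb{E}[f(S_{i-1})]
         + \tfrac{1}{k}\left(1-\tfrac{1}{k}\right)^{i-1} f(\mathrm{OPT}) \\
      &\ge \left(1-\tfrac{1}{k}\right)\cdot\tfrac{i-1}{k}\left(1-\tfrac{1}{k}\right)^{i-2} f(\mathrm{OPT})
         + \tfrac{1}{k}\left(1-\tfrac{1}{k}\right)^{i-1} f(\mathrm{OPT}) \\
      &= \tfrac{i}{k}\left(1-\tfrac{1}{k}\right)^{i-1} f(\mathrm{OPT}).
    \end{align*}
    % Setting i = k yields E[f(S_k)] >= (1 - 1/k)^{k-1} f(OPT) >= f(OPT)/e.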

  12. Equality Cardinality Constraint. New objective: find a subset S ⊆ N of size exactly k maximizing f(S). Monotone functions: not interesting, since we can always add arbitrary elements to the output. Non-monotone functions: best previous approximation: − o(1) [Vondrak 2009].

  13. The Double Greedy Algorithm. The algorithm is based on the 1/2-approximation for unconstrained submodular maximization of [Buchbinder et al. 2012]. It starts with two vectors, x = 1_∅ and y = 1_N, which continuously evolve toward each other and satisfy, at time t and for every element u ∈ N, y_u − x_u = 1 − t; at time 1, x = y is a feasible solution. Approximation ratio: for both variants, the algorithm achieves a ratio that depends on k and n; for k = n/2, both problems have an approximation ratio of 1/2, and this is tight.
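
The constrained double greedy is only sketched on the slide; for reference, here is a minimal Python sketch of the unconstrained randomized double greedy of [Buchbinder et al. 2012] that it builds on (a simplified discrete version with illustrative names, not the constrained algorithm from the talk):

    import random

    def double_greedy(f, N):
        # Unconstrained randomized double greedy: two solutions X and Y start
        # at the empty set and N, evolve toward each other, and settle each
        # element exactly once by a randomized choice.
        X, Y = set(), set(N)
        for u in N:
            a = f(X | {u}) - f(X)      # marginal gain of adding u to X
            b = f(Y - {u}) - f(Y)      # marginal gain of removing u from Y
            a, b = max(a, 0.0), max(b, 0.0)
            if a + b == 0 or random.random() < a / (a + b):
                X.add(u)               # keep u in both solutions
            else:
                Y.discard(u)           # drop u from both solutions
        return X                       # at the end X == Y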

  14. Other Results. Cardinality constraint. For a non-equality constraint: a (1/e + ε)-approximation, where ε = 0.004; this proves that 1/e is not the right answer for the problem. Is 1/e the right answer for a general matroid independence constraint? For an equality constraint: a fast algorithm with an approximation ratio depending on k/n; it always achieves at least a 0.266-approximation, and for every ratio k/n one of our algorithms achieves at least a 0.356-approximation. Fast algorithms for a general matroid constraint:
  • 1/4-approximation, time complexity O(nk).
  • (1 + e^(−2))/4 > 0.283 approximation, with a higher time complexity.
  State-of-the-art approximation ratio for a general matroid constraint: 1/e − o(1).

  15. Open Problems. Cardinality constraint: the approximability depends on k/n. For k/n = 0.5 we have a 0.5 approximation, while for small k one cannot beat 0.491 [Oveis Gharan and Vondrak 11]. What is the correct approximation ratio for a given k/n? Fast algorithms: finding fast algorithms for more involved constraints; beating 1/e with a fast algorithm, even for k = n/2; even faster algorithms: for monotone functions there is a (1 − 1/e − ε)-approximation using O(n ε^(−1) log(n/ε)) oracle queries [Ashwinkumar and Vondrak 14].

