Multi-Agent Submodular Optimization

03/10/2018
by Richard Santiago, et al.

Recent years have seen many algorithmic advances in the area of submodular optimization: (SO) min/max f(S): S ∈ F, where F is a given family of feasible sets over a ground set V and f: 2^V → R is submodular. This progress has been coupled with a wealth of new applications for these models. Our focus is on a more general class of multi-agent submodular optimization (MASO), which was introduced by Goel et al. in the minimization setting: min ∑_i f_i(S_i): S_1 ⊎ S_2 ⊎ ... ⊎ S_k ∈ F. Here we use ⊎ to denote disjoint union, and hence this model is attractive where resources are being allocated across k agents, each with its own submodular cost function f_i(). In this paper we explore the extent to which the approximability of the multi-agent problems is linked to their single-agent primitives, referred to informally as the multi-agent gap. We present different reductions that transform a multi-agent problem into a single-agent one. For maximization we show that (MASO) admits an O(α)-approximation whenever (SO) admits an α-approximation over the multilinear formulation, thus substantially expanding the family of tractable models. We also discuss several classes of families (such as spanning trees, matroids, and p-systems) that have a provable multi-agent gap of 1. In the minimization setting we show that (MASO) has an O(α · min{k, log(n) log(n/log n)})-approximation whenever (SO) admits an α-approximation over the convex formulation. In addition, we discuss the class of "bounded blocker" families, where there is a provably tight O(log n) gap between (MASO) and (SO).
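
To make the (MASO) model concrete, below is a minimal sketch of a toy maximization instance in Python. The two agents, their coverage functions (a standard example of monotone submodular functions), and the choice of F as all subsets of V are illustrative assumptions, not taken from the paper; the brute-force enumeration only illustrates the objective ∑_i f_i(S_i) and the disjoint-union constraint, not any of the paper's reductions.

```python
from itertools import product

# Illustrative MASO instance (assumptions, not from the paper):
# two agents with coverage functions f_i, and F = all subsets of V.
V = ["a", "b", "c", "d"]

# coverage[i][v] = set of targets that element v covers for agent i;
# f_i(S) = number of distinct targets covered by S (monotone submodular).
coverage = [
    {"a": {1, 2}, "b": {2, 3}, "c": {4}, "d": {1}},     # agent 0
    {"a": {5}, "b": {5, 6}, "c": {6, 7}, "d": {7, 8}},  # agent 1
]
k = len(coverage)

def f(i, S):
    """Coverage value of set S under agent i's submodular function."""
    covered = set()
    for v in S:
        covered |= coverage[i][v]
    return len(covered)

# Brute force over assignments of each element to one agent or to nobody
# (label -1); this encodes the disjoint-union constraint S_1 ⊎ ... ⊎ S_k,
# since no element can be given to two agents.
best_val, best_sets = -1, None
for labels in product(range(-1, k), repeat=len(V)):
    sets = [{V[j] for j, a in enumerate(labels) if a == i} for i in range(k)]
    val = sum(f(i, sets[i]) for i in range(k))
    if val > best_val:
        best_val, best_sets = val, sets

print("optimal multi-agent value:", best_val)
print("optimal allocation:", best_sets)
```

On non-toy instances the enumeration would of course be replaced by approximation algorithms such as those discussed in the paper; the sketch only pins down what the multi-agent objective and the disjoint-union constraint mean.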
