Distributed Forward-Backward algorithms for stochastic generalized Nash equilibrium seeking

12/09/2019
by   Barbara Franci, et al.

We consider the stochastic generalized Nash equilibrium problem (SGNEP) with expected-value cost functions. Inspired by Yi and Pavel (Automatica, 2019), we propose a distributed GNE seeking algorithm for SGNEPs based on preconditioned forward-backward operator splitting, where, at each iteration, the expected value of the pseudogradient is approximated via a number of random samples. As our main contribution, we show almost sure convergence of the proposed algorithm if the pseudogradient mapping is restricted (monotone and) cocoercive. For non-generalized SNEPs, we show almost sure convergence also if the pseudogradient mapping is restricted strictly monotone. Numerical simulations suggest that the proposed forward-backward algorithm is faster than other available algorithms.
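To illustrate the core idea, here is a minimal sketch of a stochastic forward-backward iteration with a sample-average approximation of the pseudogradient. This is a hypothetical, centralized toy example on a quadratic game with box constraints, not the paper's distributed, preconditioned algorithm: the matrix `A`, the noise model, the batch schedule, and the step size are all illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative quadratic game: pseudogradient F(x) = A x + b with A
# positive definite, hence (co)coercive; these values are assumptions.
A = np.array([[2.0, 0.5],
              [0.5, 2.0]])
b = np.array([-1.0, 1.0])

def sampled_pseudogradient(x, n_samples):
    # Expected-value pseudogradient observed through zero-mean noise;
    # the sample average over n_samples draws approximates E[F(x, xi)].
    noise = rng.normal(scale=0.1, size=(n_samples, x.size))
    return A @ x + b + noise.mean(axis=0)

def project_box(x, lo=-5.0, hi=5.0):
    # Backward step: for a box constraint set, the resolvent of the
    # normal-cone operator reduces to a Euclidean projection.
    return np.clip(x, lo, hi)

x = np.zeros(2)
gamma = 0.2  # step size chosen below 2/lambda_max(A) for stability
for k in range(500):
    # Forward step with a growing mini-batch (variance reduction),
    # then the backward (projection) step.
    g = sampled_pseudogradient(x, n_samples=10 + k)
    x = project_box(x - gamma * g)

# Unconstrained equilibrium A x* = -b lies inside the box here.
x_star = np.linalg.solve(A, -b)
print(x, x_star)
```

With the growing batch size, the sample-average noise vanishes and the iterates approach the equilibrium; a fixed small batch would instead leave a noise-dependent residual error.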
