Tighter Sparse Approximation Bounds for ReLU Neural Networks

10/07/2021
by Carles Domingo-Enrich, et al.

A well-known line of work (Barron, 1993; Breiman, 1993; Klusowski & Barron, 2018) provides bounds on the width n of a ReLU two-layer neural network needed to approximate a function f over the ball ℬ_R(ℝ^d) up to error ϵ, when the Fourier-based quantity C_f = 1/(2π)^{d/2} ∫_{ℝ^d} ‖ξ‖² |f̂(ξ)| dξ is finite. More recently, Ongie et al. (2019) used the Radon transform as a tool to analyze infinite-width ReLU two-layer networks. In particular, they introduced Radon-based ℛ-norms and showed that a function defined on ℝ^d can be represented as an infinite-width two-layer neural network if and only if its ℛ-norm is finite. In this work, we extend the framework of Ongie et al. (2019) and define similar Radon-based semi-norms (ℛ, 𝒰-norms) such that a function admits an infinite-width neural network representation on a bounded open set 𝒰 ⊆ ℝ^d when its ℛ, 𝒰-norm is finite. Building on this, we derive sparse (finite-width) neural network approximation bounds that refine those of Breiman (1993) and Klusowski & Barron (2018). Finally, we show that infinite-width neural network representations on bounded open sets are not unique and study their structure, providing a functional view of mode connectivity.
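For intuition, the finite-width object whose width n these bounds control is a two-layer ReLU network f_n(x) = Σᵢ aᵢ ReLU(⟨wᵢ, x⟩ + bᵢ). The sketch below is illustrative only (it is not the paper's construction, and the parameter values are arbitrary placeholders); it just evaluates such a network.

```python
# Minimal sketch of a width-n two-layer ReLU network,
#     f_n(x) = sum_i a_i * relu(<w_i, x> + b_i).
# Illustrative only: parameters here are random placeholders, not the
# approximating networks constructed in the paper.
import numpy as np

def relu(z):
    return np.maximum(z, 0.0)

def two_layer_relu(x, W, b, a):
    """Evaluate f_n(x) = sum_i a_i * relu(<w_i, x> + b_i).

    x : (d,)  input point
    W : (n, d) inner-layer weights w_i
    b : (n,)  inner-layer biases b_i
    a : (n,)  outer-layer weights a_i
    """
    return float(a @ relu(W @ x + b))

# Usage example: dimension d = 3, width n = 8, random parameters.
rng = np.random.default_rng(0)
d, n = 3, 8
W = rng.standard_normal((n, d))
b = rng.standard_normal(n)
a = rng.standard_normal(n)
x = rng.standard_normal(d)
print(two_layer_relu(x, W, b, a))
```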
