Optimal approximation of C^k-functions using shallow complex-valued neural networks

03/29/2023
by Paul Geuchen, et al.

We prove a quantitative result for the approximation of functions of regularity C^k (in the sense of real variables) defined on the complex cube Ω_n := [-1,1]^n + i[-1,1]^n ⊆ ℂ^n using shallow complex-valued neural networks. Precisely, we consider neural networks with a single hidden layer and m neurons, i.e., networks of the form z ↦ ∑_{j=1}^m σ_j · ϕ(ρ_j^T z + b_j), and show that one can approximate every function f ∈ C^k(Ω_n; ℂ) by a function of that form with an error of order m^{-k/(2n)} as m → ∞, provided that the activation function ϕ: ℂ → ℂ is smooth but not polyharmonic on some non-empty open set. Furthermore, we show that the weights σ_j, b_j ∈ ℂ and ρ_j ∈ ℂ^n can be selected continuously with respect to the target function f, and we prove that the derived rate of approximation is optimal under this continuity assumption. We also discuss the optimality of the result for a possibly discontinuous choice of the weights.
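
To make the architecture concrete, the following is a minimal NumPy sketch of a network of the form z ↦ ∑_{j=1}^m σ_j · ϕ(ρ_j^T z + b_j). The specific activation ϕ(z) = tanh(Re z) + i·tanh(Im z) and the random weights are illustrative choices of ours, not taken from the paper; the theorem only requires ϕ to be smooth but not polyharmonic on some non-empty open set, a condition this separable tanh plausibly satisfies since no iterated Laplacian of tanh vanishes identically on an open interval.

```python
import numpy as np

def phi(z):
    # Illustrative activation: smooth in the real-variable sense and
    # (plausibly) not polyharmonic on any open set. The paper does not
    # fix a specific activation; this is one admissible-looking choice.
    return np.tanh(z.real) + 1j * np.tanh(z.imag)

def shallow_complex_net(z, rho, b, sigma):
    """Evaluate f(z) = sum_{j=1}^m sigma_j * phi(rho_j^T z + b_j).

    z     : (n,)   complex input in Omega_n = [-1,1]^n + i[-1,1]^n
    rho   : (m, n) complex hidden-layer weights
    b     : (m,)   complex biases
    sigma : (m,)   complex output weights
    """
    pre = rho @ z + b            # (m,) pre-activations rho_j^T z + b_j
    return np.sum(sigma * phi(pre))

# Hypothetical usage: a random network with m = 64 neurons on Omega_2.
rng = np.random.default_rng(0)
n, m = 2, 64
rho = rng.normal(size=(m, n)) + 1j * rng.normal(size=(m, n))
b = rng.normal(size=m) + 1j * rng.normal(size=m)
sigma = rng.normal(size=m) + 1j * rng.normal(size=m)
z = np.array([0.3 - 0.2j, -0.5 + 0.7j])   # a point in Omega_2
print(shallow_complex_net(z, rho, b, sigma))
```

Note that `rho @ z` performs the plain (non-conjugating) transpose product ρ_j^T z used in the paper's network definition, not the Hermitian inner product.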
