A Loss Function for Box-Constrained Inverse Problems
DOI: https://doi.org/10.7494/dmms.2008.2.2.79

Keywords: individual behavior, inverse problems, simultaneous equations, optimization

Abstract
A loss function is proposed for solving box-constrained inverse problems. Given causality mechanisms between inputs and outputs expressed as smooth functions, an inverse problem requires adjusting the input levels so that the output levels come as close as possible to the target values; box-constrained refers to the requirement that all output levels remain within their respective permissible intervals. A feasible solution is assumed known, often the status quo. We propose a loss function that avoids activation of the constraints. A practical advantage of this approach over the usual weighted least squares is that permissible output intervals are required in place of target importance weights, facilitating data acquisition. The proposed loss function is smooth and strictly convex with a closed-form gradient and Hessian, permitting Newton-family algorithms. The author has not been able to locate the Gibbs distribution corresponding to this loss function in the literature. The loss function is closely related to the generalized matching law in psychology.
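The paper's specific loss function is not reproduced on this page. Purely as an illustration of the setup the abstract describes, the sketch below uses a quadratic distance to the targets plus a log-barrier on each permissible interval (a stand-in surrogate, not the proposed loss), minimized by damped Newton steps started from the known feasible status quo. The linear forward map A, the barrier weight mu, and all variable names are assumptions made for this example only.

```python
import numpy as np

# Hypothetical setup (not from the paper): a linear forward map y = A @ x,
# target outputs t, and permissible intervals (lo, hi) for each output.
# The surrogate loss below is smooth and strictly convex on the open box,
# has a closed-form gradient and Hessian, and keeps the constraints from
# activating -- mirroring the structure described in the abstract.

def loss_grad_hess(x, A, t, lo, hi, mu=1e-2):
    """Surrogate loss, gradient, and Hessian with respect to the inputs x."""
    y = A @ x
    d_lo, d_hi = y - lo, hi - y          # distances to interval ends (> 0 inside)
    L = np.sum((y - t) ** 2) - mu * np.sum(np.log(d_lo) + np.log(d_hi))
    dL_dy = 2.0 * (y - t) - mu * (1.0 / d_lo - 1.0 / d_hi)
    d2L_dy2 = 2.0 + mu * (1.0 / d_lo ** 2 + 1.0 / d_hi ** 2)   # diagonal terms
    grad = A.T @ dL_dy                   # chain rule through the linear map
    hess = A.T @ (d2L_dy2[:, None] * A)
    return L, grad, hess

def newton_solve(x0, A, t, lo, hi, iters=50, tol=1e-10):
    """Damped Newton iteration started from a known feasible point x0."""
    x = x0.astype(float).copy()
    for _ in range(iters):
        _, g, H = loss_grad_hess(x, A, t, lo, hi)
        step = np.linalg.solve(H, g)
        alpha = 1.0
        # Backtrack so the outputs stay strictly inside their intervals.
        while not (np.all(A @ (x - alpha * step) > lo)
                   and np.all(A @ (x - alpha * step) < hi)):
            alpha *= 0.5
        x -= alpha * step
        if np.linalg.norm(alpha * step) < tol:
            break
    return x

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    A = rng.normal(size=(3, 2))
    x0 = np.zeros(2)                          # status quo, assumed feasible
    lo, hi = A @ x0 - 1.0, A @ x0 + 1.0       # permissible intervals around the status quo
    t = A @ x0 + np.array([0.8, -0.5, 0.3])   # targets, some close to the bounds
    x = newton_solve(x0, A, t, lo, hi)
    print("adjusted inputs:", x)
    print("outputs:", A @ x)
```

With a full-column-rank A the Hessian is positive definite, so the Newton step is well defined; the backtracking line search only guards feasibility, since the barrier already diverges at the interval ends.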
License
The content of the journal is freely available according to the Creative Commons License Attribution 4.0 International (CC BY 4.0)
Accepted 2013-08-23
Published 2008-12-18