Multidimensional Gains for Stochastic Approximation

This paper deals with an iterative Jacobian-based recursion technique for the root-finding problem of a vector-valued function whose evaluations are contaminated by noise. Instead of a scalar step size, we use an iterate-dependent matrix gain to effectively weight the different elements associated with the noisy observations...

Full description

Saved in:
Bibliographic Details
Main Author: Saab, Samer S. (author)
Other Authors: Shen, Dong (author)
Format: article
Published: 2019
Online Access: http://hdl.handle.net/10725/11135
http://dx.doi.org/10.1109/TNNLS.2019.2920930
http://libraries.lau.edu.lb/research/laur/terms-of-use/articles.php
https://ieeexplore.ieee.org/abstract/document/8751995
_version_ 1864513488305520640
author Saab, Samer S.
author2 Shen, Dong
author2_role author
author_facet Saab, Samer S.
Shen, Dong
author_role author
dc.creator.none.fl_str_mv Saab, Samer S.
Shen, Dong
dc.date.none.fl_str_mv 2019-07-24T09:51:18Z
2019-07-24T09:51:18Z
2019
2019-07-24
dc.identifier.none.fl_str_mv 2162-237X
http://hdl.handle.net/10725/11135
http://dx.doi.org/10.1109/TNNLS.2019.2920930
Saab, S. S., & Shen, D. (2019). Multidimensional Gains for Stochastic Approximation. IEEE transactions on neural networks and learning systems.
http://libraries.lau.edu.lb/research/laur/terms-of-use/articles.php
https://ieeexplore.ieee.org/abstract/document/8751995
dc.language.none.fl_str_mv en
dc.relation.none.fl_str_mv IEEE Transactions on Neural Networks and Learning Systems
dc.rights.*.fl_str_mv info:eu-repo/semantics/openAccess
dc.title.none.fl_str_mv Multidimensional Gains for Stochastic Approximation
dc.type.none.fl_str_mv Article
info:eu-repo/semantics/publishedVersion
info:eu-repo/semantics/article
description This paper deals with an iterative Jacobian-based recursion technique for the root-finding problem of a vector-valued function whose evaluations are contaminated by noise. Instead of a scalar step size, we use an iterate-dependent matrix gain to effectively weight the different elements associated with the noisy observations. The analytical development of the matrix gain is built on an iterate-dependent linear function perturbed by additive zero-mean white noise, where the dimension of the function is M ≥ 1 and the dimension of the unknown variable is N ≥ 1. Necessary and sufficient conditions for M ≥ N algorithms are presented, pertaining to algorithm stability and to convergence of the estimation-error covariance matrix. Two algorithms are proposed: one for the case M ≥ N and one for the antithesis M < N. Both algorithms assume full knowledge of the Jacobian. Recursive algorithms are proposed for generating the optimal iterate-dependent matrix gain, aiming at per-iteration minimization of the mean-square estimation error. We show that the proposed algorithm satisfies the presented conditions for stability and convergence of the covariance. In addition, the convergence rate of the estimation-error covariance is shown to be inversely proportional to the number of iterations. For the antithesis M < N, contraction of the error covariance is guaranteed; this underdetermined system of equations can be helpful in training neural networks. Numerical examples are presented to illustrate the performance capabilities of the proposed multidimensional gain on nonlinear functions.
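To make the setting concrete, the following is a minimal sketch (not the paper's algorithm) of the root-finding problem the abstract describes: iterates x_{k+1} = x_k - G_k y_k, where y_k is a noisy evaluation of f(x) = Ax - b, contrasting a classic scalar step size with a Jacobian-based matrix gain. The dimensions (M = N = 3), noise level, and the specific gain schedule 1/(k+1) are illustrative assumptions, not taken from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative noisy linear root-finding setup with M = N = 3:
# f(x) = A x - b, observed as f(x) + zero-mean white noise. Root: x* = A^{-1} b.
M, N = 3, 3
A = rng.normal(size=(M, N)) + 3.0 * np.eye(N)  # well-conditioned Jacobian (assumed known)
x_star = rng.normal(size=N)
b = A @ x_star

def noisy_eval(x):
    """Return f(x) contaminated by additive zero-mean white noise."""
    return A @ x - b + 0.1 * rng.normal(size=M)

def sa_run(gain_fn, iters=5000):
    """Generic stochastic-approximation loop x_{k+1} = x_k - G_k y_k."""
    x = np.zeros(N)
    for k in range(iters):
        x = x - gain_fn(k) @ noisy_eval(x)
    return x

# Scalar step size a_k = 1/(k+1): every observation component is weighted equally.
scalar = sa_run(lambda k: (1.0 / (k + 1)) * np.eye(N))

# Matrix gain G_k = (1/(k+1)) A^{-1}: observation components are weighted
# through the known Jacobian, as in Jacobian-based matrix-gain schemes.
matrix = sa_run(lambda k: (1.0 / (k + 1)) * np.linalg.inv(A))

print("scalar-gain error:", np.linalg.norm(scalar - x_star))
print("matrix-gain error:", np.linalg.norm(matrix - x_star))
```

With the matrix gain above, the error recursion reduces to averaging of the noise, so the squared error shrinks roughly like 1/k, consistent with the covariance rate the abstract states for the M ≥ N case.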
eu_rights_str_mv openAccess
format article
id LAURepo_e49f4e47c4dfddd2a632d9c480dcd94a
identifier_str_mv 2162-237X
Saab, S. S., & Shen, D. (2019). Multidimensional Gains for Stochastic Approximation. IEEE transactions on neural networks and learning systems.
language_invalid_str_mv en
network_acronym_str LAURepo
network_name_str Lebanese American University repository
oai_identifier_str oai:laur.lau.edu.lb:10725/11135
publishDate 2019
status_str publishedVersion
title Multidimensional Gains for Stochastic Approximation
url http://hdl.handle.net/10725/11135
http://dx.doi.org/10.1109/TNNLS.2019.2920930
http://libraries.lau.edu.lb/research/laur/terms-of-use/articles.php
https://ieeexplore.ieee.org/abstract/document/8751995