Reproducing kernel-based semiparametric functional smoothed score estimation with binary responses
| Published: | 2025 |
|---|---|
| Summary: | <p>Functional data classification has gained increasing utility in the modern statistical learning era. In this article, we investigate the generalization ability and the Fisher consistency of the smoothed score (SS) classifier on intrinsically infinite-dimensional functional data, both theoretically and numerically. Formulating the empirical risk minimization based on a regularized smoothed nonconvex loss function, we first establish a rigorous error bound on the misclassification rate. The theoretical results reveal a trade-off between the choice of a tuning parameter and the size of a candidate function class. Additionally, a nonregular convergence rate of the SS estimation is derived in the $L_2$ norm, which scales as $h_n^{-(1-\nu)}\,n^{-\mu/(2(\mu+1))}$ as the bandwidth $h_n$ of the smoothed loss shrinks to 0. By projecting the functional data onto one specific direction over a reproducing kernel Hilbert space, where the estimated function is expected to deliver desirable performance, we address the problem of slope function estimation. Computationally, we tackle the nonconvex optimization by developing an efficient proximal gradient algorithm. Finally, finite-sample results from simulation studies, as well as a real data analysis from the ADNI study, demonstrate the favorable performance of the proposed method compared with several popular classifiers in terms of prediction and estimation.</p> |
|---|---|
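The abstract's computational step, proximal gradient descent on a regularized smoothed nonconvex loss, can be sketched in a finite-dimensional toy setting. This is a minimal illustration rather than the authors' implementation: the sigmoid-smoothed 0-1 loss, the $\ell_1$ regularizer, and every parameter value below are assumptions standing in for the paper's actual smoothed score loss, RKHS penalty, and bandwidth choice.

```python
import numpy as np

def smoothed_01_loss_grad(beta, X, y, h):
    """Gradient of a sigmoid-smoothed 0-1 loss (illustrative stand-in
    for the paper's smoothed score loss). Labels y are in {-1, +1};
    per-sample loss is sigma(-m_i / h) with margin m_i = y_i x_i' beta."""
    m = y * (X @ beta)
    s = 1.0 / (1.0 + np.exp(np.clip(m / h, -30, 30)))  # sigma(-m/h)
    # d/dbeta sigma(-m/h) = -(1/h) * s * (1 - s) * y * x, averaged over i
    return -(X * (s * (1.0 - s) * y)[:, None]).sum(axis=0) / (h * len(y))

def soft_threshold(z, t):
    """Proximal operator of t * ||.||_1 (soft-thresholding)."""
    return np.sign(z) * np.maximum(np.abs(z) - t, 0.0)

def proximal_gradient(X, y, h=0.5, lam=0.01, step=1.0, n_iter=300):
    """Proximal gradient: gradient step on the smooth (nonconvex) loss,
    then the prox of the nonsmooth lam * ||beta||_1 penalty."""
    beta = np.zeros(X.shape[1])
    for _ in range(n_iter):
        g = smoothed_01_loss_grad(beta, X, y, h)
        beta = soft_threshold(beta - step * g, step * lam)
    return beta
```

Because the smoothed loss is nonconvex, a fixed step size is only a simplification; in practice a backtracking line search or a decreasing step schedule is the safer choice, and the bandwidth `h` plays the same role as the $h_n$ that is shrunk to 0 in the paper's theory.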