Communication-Efficient Pilot Estimation for Non-Randomly Distributed Data in Diverging Dimensions
Published: 2025
Summary: The communication-efficient surrogate likelihood (CSL) framework (Jordan, Lee, and Yang) is notable for handling massive or distributed datasets. CSL methods use the first machine as the central one, optimizing with its local data, and assume a fixed dimension when establishing statistical properties. However, CSL may not suit non-randomly or heterogeneously distributed data, and its applicability to diverging- or high-dimensional datasets is limited. To address these issues, we propose a communication-efficient pilot (CEP) estimation strategy: pilot sampling is performed on each machine to form a pilot sample dataset, and a new pilot-sample-based surrogate loss is used to approximate the global one, with its minimizer termed the CEP estimator. We rigorously investigate the theoretical properties of the CEP estimator, including its convergence rate, which attains the global rate $\sqrt{p_n/N}$, and its asymptotic normality when the dimension $p_n$ diverges with the pilot sample size $r$ and $p_n < n$. Additionally, we extend CEP to high-dimensional cases ($p_n > n$) and propose a regularized version of CEP (CERP). We establish non-asymptotic error bounds for the CERP estimator with the Lasso penalty (CERP-Lasso) and derive convergence rates and asymptotic normality for the CERP estimator with the adaptive Lasso penalty (CERP-aLasso) under generalized linear models. Extensive experiments on synthetic and real datasets demonstrate the effectiveness of our approaches. Supplementary materials for this article are available online.
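To make the idea of a pilot-sample-based surrogate loss concrete, the following is a minimal sketch, assuming a logistic regression model, uniform pilot sampling of size r on each machine, and a CSL-style surrogate loss (pilot loss plus a one-round global-gradient correction). The function and variable names here are illustrative; the paper's actual CEP surrogate loss and sampling scheme may differ.

```python
# Sketch of a pilot-sample-based surrogate estimator for logistic regression,
# in the spirit of the CEP strategy described in the summary above.
# Assumption: a CSL-style surrogate (pilot loss + global-gradient correction);
# the exact loss used in the paper is not reproduced here.
import numpy as np
from scipy.optimize import minimize

def logistic_loss_grad(theta, X, y):
    """Average negative log-likelihood of logistic regression and its gradient."""
    p = 1.0 / (1.0 + np.exp(-(X @ theta)))
    loss = -np.mean(y * np.log(p + 1e-12) + (1 - y) * np.log(1 - p + 1e-12))
    grad = X.T @ (p - y) / len(y)
    return loss, grad

def cep_like_estimate(machines, r, theta_init, seed=0):
    """machines: list of (X_k, y_k) arrays held on each worker; r: pilot size per machine."""
    rng = np.random.default_rng(seed)
    # Step 1: uniform pilot sampling on every machine (hypothetical sampling rule),
    # then pool the pilot subsamples into one pilot dataset.
    pilot_X, pilot_y = [], []
    for X_k, y_k in machines:
        idx = rng.choice(len(y_k), size=r, replace=False)
        pilot_X.append(X_k[idx])
        pilot_y.append(y_k[idx])
    Xp, yp = np.vstack(pilot_X), np.concatenate(pilot_y)
    # Step 2: one communication round; each machine reports its local gradient
    # at the initial estimate, and the weighted average approximates the global gradient.
    sizes = np.array([len(y_k) for _, y_k in machines])
    grads = [logistic_loss_grad(theta_init, X_k, y_k)[1] for X_k, y_k in machines]
    global_grad = np.average(grads, axis=0, weights=sizes)
    pilot_grad = logistic_loss_grad(theta_init, Xp, yp)[1]
    shift = global_grad - pilot_grad
    # Step 3: minimize the surrogate loss  L_pilot(theta) + <shift, theta>
    # on the central machine using only the pooled pilot sample.
    def surrogate(theta):
        loss, grad = logistic_loss_grad(theta, Xp, yp)
        return loss + shift @ theta, grad + shift
    res = minimize(surrogate, theta_init, jac=True, method="L-BFGS-B")
    return res.x
```

Only the pilot subsamples and per-machine gradients travel across the network in this sketch, which is what makes the approach communication efficient; a regularized (CERP-style) variant would add a Lasso or adaptive-Lasso penalty to the surrogate objective before minimization.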