Showing 1,281 - 1,300 results of 5,616 for search '(( significantly small decrease ) OR ( significantly higher decrease ))', query time: 0.35s
  1. 1281

    S1 Dataset - by Na Wang (193263)

    Published 2024
    “…Results: We found that the numerical rating scale (NRS) score and the incidence of breast fistula in group A were significantly lower than in the other groups, and the continuous decrease in postoperative drainage in group A was greater than in the other groups; the differences among the groups were significant (p < 0.001). …”
  2. 1282

    Numerical Rating Scale (NRS). by Na Wang (193263)

    Published 2024
  3. 1283

    Mammary fistula (n(%)). by Na Wang (193263)

    Published 2024
  4. 1284

    Forest maps affecting lactation outcomes. by Na Wang (193263)

    Published 2024
  5. 1285

    Postoperative drainage(ml) (M(IQR)). by Na Wang (193263)

    Published 2024
  6. 1286
  7. 1287
  8. 1288
  9. 1289
  10. 1290
  11. 1291
  12. 1292

    Testing set error. by Xiangjuan Liu (618000)

    Published 2025
    “…Further integration of Spearman correlation analysis and PCA dimensionality reduction created multidimensional feature sets, revealing substantial accuracy improvements: the BiLSTM model achieved an 83.6% cumulative MAE reduction, from 1.65 (raw data) to 0.27 (STL-PCA), while traditional models such as Prophet showed an 82.2% MAE decrease after feature-engineering optimization. Finally, the Beluga Whale Optimization (BWO)-tuned STL-PCA-BWO-BiLSTM hybrid model delivered the best test-set performance (RMSE = 0.22, MAE = 0.16, MAPE = 0.99%), a 40.7% lower MAE than the unoptimized BiLSTM (MAE = 0.27). …” (the relative MAE reductions quoted here are worked through in the sketch after this results list)
  13. 1293

    Internal structure of an LSTM cell. by Xiangjuan Liu (618000)

    Published 2025
  14. 1294

    Prediction effect of each model after STL. by Xiangjuan Liu (618000)

    Published 2025
  15. 1295

    The kernel density plot for data of each feature. by Xiangjuan Liu (618000)

    Published 2025
  16. 1296

    Analysis of raw data prediction results. by Xiangjuan Liu (618000)

    Published 2025
  17. 1297

    Flowchart of the STL. by Xiangjuan Liu (618000)

    Published 2025
  18. 1298

    SARIMA predicts season components. by Xiangjuan Liu (618000)

    Published 2025
  19. 1299

    BWO-BiLSTM model prediction results. by Xiangjuan Liu (618000)

    Published 2025
  20. 1300

    Bi-LSTM architecture diagram. by Xiangjuan Liu (618000)

    Published 2025
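
The relative reductions quoted in the Xiangjuan Liu snippet above (83.6% and 40.7%) can be reproduced with simple relative-change arithmetic. The sketch below assumes the percentages are computed as (old - new) / old; the helper name relative_reduction is illustrative and not from the paper, and the 82.2% Prophet figure is not checked because the snippet does not give Prophet's raw-data MAE.

    # Check of the MAE reductions quoted in the search snippet above.
    # Assumption: the percentages are simple relative changes, (old - new) / old.

    def relative_reduction(old: float, new: float) -> float:
        """Percentage reduction from `old` to `new`."""
        return (old - new) / old * 100.0

    # BiLSTM: MAE 1.65 on raw data -> 0.27 after STL-PCA feature engineering.
    print(f"BiLSTM MAE reduction:  {relative_reduction(1.65, 0.27):.1f}%")  # -> 83.6%

    # BWO-tuned hybrid (MAE 0.16) vs. unoptimized BiLSTM (MAE 0.27).
    print(f"Hybrid vs. BiLSTM MAE: {relative_reduction(0.27, 0.16):.1f}%")  # -> 40.7%

Both printed values match the 83.6% and 40.7% figures quoted in the snippet, which supports reading the hybrid model's gain as a 40.7% reduction in MAE relative to the unoptimized BiLSTM.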