Showing 2,401 - 2,420 results of 21,342 for search '(( significant decrease decrease ) OR ( significant ((trend decrease) OR (mean decrease)) ))', query time: 0.53s
  1. 2401

    Changes in the active H3K27ac and repressive H3K27me3 histone marks among Vasa2+/Piwi1+ and all cells in fed, starved, and refed juvenile polyps. by Eudald Pascual-Carreras (12115380)

    Published 2025
    “…Between fed, T<sub>5ds</sub> and T<sub>20ds</sub> timepoints, MFI levels of H3K27ac progressively and significantly decreased, while levels of H3K27me3 (M) did not change significantly (N). …”
  2. 2402
  3. 2403
  4. 2404

    The TOR inhibitors Rapamycin and AZD-8055 strongly reduce RPS6 phosphorylation and cell proliferation in Vasa2+/Piwi1+ cells. by Eudald Pascual-Carreras (12115380)

    Published 2025
    “…<i>n</i> = 2–4 biological replicates per condition, with 15 individuals per replicate. Significance levels for Student <i>t</i> test are indicated for adjusted <i>p</i> values: *<i>p</i> < 0.05, ***<i>p</i> < 0.001, ****<i>p</i> < 0.0001. d: day(s), n.s.: non-significant. …”
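    The legend above describes per-condition Student t tests reported with adjusted p values and star annotations. As a rough illustration only (not the authors' analysis code), the Python sketch below runs such tests on hypothetical replicate values and prints conventional significance labels; the Benjamini-Hochberg correction and all numbers are assumptions, since the snippet does not state the adjustment method or the data.

```python
# Minimal sketch (not the authors' code): Student's t tests across conditions
# with multiple-comparison adjustment and star annotations, as in the legend.
import numpy as np
from scipy.stats import ttest_ind
from statsmodels.stats.multitest import multipletests

rng = np.random.default_rng(0)
# Hypothetical per-replicate measurements for a control and two treatments
# (the study reports n = 2-4 biological replicates per condition).
control = rng.normal(1.0, 0.1, size=4)
treatments = {"rapamycin": rng.normal(0.6, 0.1, size=3),
              "AZD-8055": rng.normal(0.5, 0.1, size=3)}

raw_p = [ttest_ind(control, vals).pvalue for vals in treatments.values()]
# Adjust p values (Benjamini-Hochberg here; the legend does not name its method).
_, adj_p, _, _ = multipletests(raw_p, method="fdr_bh")

def stars(p):
    # Conventional thresholds matching the legend's scheme:
    # * p < 0.05, *** p < 0.001, **** p < 0.0001.
    return "****" if p < 1e-4 else "***" if p < 1e-3 else "*" if p < 0.05 else "n.s."

for name, p in zip(treatments, adj_p):
    print(f"{name}: adjusted p = {p:.3g} ({stars(p)})")
```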
  5. 2405
  6. 2406
  7. 2407

    Testing set error. by Xiangjuan Liu (618000)

    Published 2025
    “…First, seven benchmark models including Prophet, ARIMA, and LSTM were applied to raw price series, where results demonstrated that deep learning models significantly outperformed traditional methods. Subsequently, STL decomposition decoupled the series into trend, seasonal, and residual components for component-specific modeling, achieving a 22.6% reduction in average MAE compared to raw data modeling. …”
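    Several of the following entries reference the same price-forecasting study, whose abstract describes STL decomposition of the series into trend, seasonal, and residual components followed by component-specific modeling. The sketch below is a minimal, hypothetical illustration of that decomposition step using statsmodels on a synthetic daily series; the actual data, seasonal period, and downstream SARIMA/BiLSTM models are not given in the snippet.

```python
# Minimal sketch (not the authors' pipeline): STL decomposition of a price
# series into trend, seasonal, and residual components, which can then be
# modeled separately and the component forecasts recombined.
import numpy as np
import pandas as pd
from statsmodels.tsa.seasonal import STL

# Hypothetical daily price series with weekly seasonality.
idx = pd.date_range("2024-01-01", periods=365, freq="D")
rng = np.random.default_rng(1)
price = pd.Series(
    100 + 0.05 * np.arange(365)                      # slow upward trend
    + 5 * np.sin(2 * np.pi * np.arange(365) / 7)     # weekly seasonal cycle
    + rng.normal(0, 1, 365),                         # irregular noise
    index=idx,
)

res = STL(price, period=7, robust=True).fit()
trend, seasonal, resid = res.trend, res.seasonal, res.resid

# Component-specific modeling would fit, e.g., a seasonal model to `seasonal`
# and a neural model to `trend`/`resid`, then sum the component forecasts.
reconstructed = trend + seasonal + resid
print(float(np.max(np.abs(reconstructed - price))))  # ~0: components sum back to the series
```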
  8. 2408

    Internal structure of an LSTM cell. by Xiangjuan Liu (618000)

    Published 2025
  9. 2409

    Prediction effect of each model after STL. by Xiangjuan Liu (618000)

    Published 2025
  10. 2410

    The kernel density plot for data of each feature. by Xiangjuan Liu (618000)

    Published 2025
  11. 2411

    Analysis of raw data prediction results. by Xiangjuan Liu (618000)

    Published 2025
  12. 2412

    Flowchart of the STL. by Xiangjuan Liu (618000)

    Published 2025
  13. 2413

    SARIMA predicts season components. by Xiangjuan Liu (618000)

    Published 2025
  14. 2414

    BWO-BiLSTM model prediction results. by Xiangjuan Liu (618000)

    Published 2025
  15. 2415

    Bi-LSTM architecture diagram. by Xiangjuan Liu (618000)

    Published 2025
  16. 2416

    STL Linear Combination Forecast Graph. by Xiangjuan Liu (618000)

    Published 2025
  17. 2417

    LOSS curves for BWO-BiLSTM model training. by Xiangjuan Liu (618000)

    Published 2025
  18. 2418

    Analysis of STL-PCA prediction results. by Xiangjuan Liu (618000)

    Published 2025
  19. 2419

    Accumulated contribution rate of PCA. by Xiangjuan Liu (618000)

    Published 2025
  20. 2420

    Figure of ablation experiment. by Xiangjuan Liu (618000)

    Published 2025