Showing 1,841 - 1,860 results of 5,763 for search '(( ct ((largest decrease) OR (larger decrease)) ) OR ( a ((mean decrease) OR (linear decrease)) ))', query time: 0.56s
  8. 1848

    Effects of menstrual cycle on MST performance. by Mateja Perović (21238182)

    Published 2025
    “…There is a distinct non-linear effect of cycle point on accuracy for lures, but not for foils or targets, showing a decrease in accuracy in the ML phase of the cycle. …”
  9. 1849

    Flowchart of CNNBLSTM algorithm. by Guozhu Sui (21672798)

    Published 2025
    “…Compared with the traditional convolutional neural network with bidirectional long short-term memory algorithm, the training loss decreased by 42.86%. The suggested algorithm outperformed the current advanced algorithms in terms of prediction precision, with an average absolute percentage error of 0.233 and a root mean square error of 23.87. …” (These two error metrics are sketched after the result list below.)
  10. 1850

    Comparison of TL and VL of different algorithms. by Guozhu Sui (21672798)

    Published 2025
    “…Compared with the traditional convolutional neural network with bidirectional long short-term memory algorithm, the training loss decreased by 42.86%. The suggested algorithm outperformed the current advanced algorithms in terms of prediction precision, with an average absolute percentage error of 0.233 and a root mean square error of 23.87. …”
  11. 1851

    Analysis of ST characteristics of TF. by Guozhu Sui (21672798)

    Published 2025
    “…Compared with the traditional convolutional neural network with bidirectional long short-term memory algorithm, the training loss decreased by 42.86%. The suggested algorithm outperformed the current advanced algorithms in terms of prediction precision, with an average absolute percentage error of 0.233 and a root mean square error of 23.87. …”
  12. 1852

    Schematic diagram of STTFP model. by Guozhu Sui (21672798)

    Published 2025
    “…Compared with the traditional convolutional neural network with bidirectional long short-term memory algorithm, the training loss decreased by 42.86%. The suggested algorithm outperformed the current advanced algorithms in terms of prediction precision, with an average absolute percentage error of 0.233 and a root mean square error of 23.87. …”
  13. 1853

    Summary of the methods in current research. by Guozhu Sui (21672798)

    Published 2025
    “…Compared with the traditional convolutional neural network with bidirectional long short-term memory algorithm, the training loss decreased by 42.86%. The suggested algorithm outperformed the current advanced algorithms in terms of prediction precision, with an average absolute percentage error of 0.233 and a root mean square error of 23.87. …”
  14. 1854

    Diagram of the improved CNNBLSTM model. by Guozhu Sui (21672798)

    Published 2025
    “…Compared with the traditional convolutional neural network with bidirectional long short-term memory algorithm, the training loss decreased by 42.86%. The suggested algorithm outperformed the current advanced algorithms in terms of prediction precision, with an average absolute percentage error of 0.233 and a root mean square error of 23.87. …”
  18. 1858

    Complexity comparison of different models. by Li Yuan (102305)

    Published 2025
    “…Therefore, the study proposes a signal automatic modulation classification model based on a fixed K-means algorithm and a denoising autoencoder. …”
  19. 1859

    Dynamic window based median filtering algorithm. by Li Yuan (102305)

    Published 2025
    “…Therefore, the study proposes a signal automatic modulation classification model based on a fixed K-means algorithm and a denoising autoencoder. …”
  20. 1860

    Flow of operation of improved KMA. by Li Yuan (102305)

    Published 2025
    “…Therefore, the study proposes a signal automatic modulation classification model based on a fixed K-means algorithm and a denoising autoencoder. …”
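
The Li Yuan records above (results 1858-1860) describe an automatic modulation classification model built on a fixed K-means algorithm and a denoising autoencoder. The snippets do not explain what the "fixed" variant does, so the following is only a minimal reference sketch of standard K-means clustering in Python; every name in it is illustrative and none of it is taken from the cited study.

```python
import numpy as np

def kmeans(X, k, n_iters=100, seed=0):
    """Minimal standard K-means. Returns (centroids, labels).

    X: (n_samples, n_features) array. This is a generic textbook
    implementation, not the 'fixed K-means' variant from the cited study.
    """
    rng = np.random.default_rng(seed)
    # Initialise centroids by sampling k distinct points from X.
    centroids = X[rng.choice(len(X), size=k, replace=False)]
    for _ in range(n_iters):
        # Assign each sample to its nearest centroid (Euclidean distance).
        dists = np.linalg.norm(X[:, None, :] - centroids[None, :, :], axis=2)
        labels = dists.argmin(axis=1)
        # Recompute each centroid as the mean of its assigned samples;
        # keep the old centroid if a cluster ends up empty.
        new_centroids = np.array([
            X[labels == j].mean(axis=0) if np.any(labels == j) else centroids[j]
            for j in range(k)
        ])
        if np.allclose(new_centroids, centroids):
            break
        centroids = new_centroids
    return centroids, labels

# Example: cluster 200 random 2-D points into 3 groups (toy data only).
X = np.random.default_rng(1).normal(size=(200, 2))
centroids, labels = kmeans(X, k=3)
```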
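
The Guozhu Sui records above (results 1849-1854) all quote the same abstract, which reports an average absolute percentage error of 0.233 and a root mean square error of 23.87 for the proposed CNNBLSTM model. The snippets do not give the exact formulas used in that study, so the sketch below assumes the conventional definitions of these two error metrics; the toy numbers in the usage example are invented purely for illustration.

```python
import numpy as np

def rmse(y_true, y_pred):
    """Root mean square error: sqrt(mean((y_true - y_pred)^2))."""
    y_true, y_pred = np.asarray(y_true, float), np.asarray(y_pred, float)
    return float(np.sqrt(np.mean((y_true - y_pred) ** 2)))

def mape(y_true, y_pred):
    """Mean absolute percentage error, returned as a fraction.

    Studies report this either as a fraction (e.g. 0.233) or multiplied
    by 100 as a percentage; the snippets do not say which convention
    the cited paper uses.
    """
    y_true, y_pred = np.asarray(y_true, float), np.asarray(y_pred, float)
    return float(np.mean(np.abs((y_true - y_pred) / y_true)))

# Toy usage with made-up prediction values (illustrative only).
y_true = [120.0, 150.0, 90.0, 200.0]
y_pred = [110.0, 160.0, 95.0, 190.0]
print(rmse(y_true, y_pred), mape(y_true, y_pred))
```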