Showing 161 - 180 results of 4,111 for search '(( ((algorithm its) OR (algorithm etc)) function ) OR ( algorithm python function ))*', query time: 0.30s
  1. 161
  2. 162
  3. 163
  4. 164

    Simulation settings of rMAPPO algorithm. by Jianbin Zheng (587000)

    Published 2025
    “…This paper first analyzes the H-beam processing flow and appropriately simplifies it, develops a reinforcement learning environment for multi-agent scheduling, and applies the rMAPPO algorithm to make scheduling decisions. The effectiveness of the proposed method is then verified on both the physical work cell for riveting and welding and its digital twin platform, and it is compared with other baseline multi-agent reinforcement learning methods (MAPPO, MADDPG, and MASAC). …”
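
    The excerpt does not show the learning objective itself; as background (not the paper's exact formulation), MAPPO-style methods, including the recurrent variant rMAPPO, train each scheduling agent's policy with the standard PPO clipped surrogate

    $$
    L^{\mathrm{CLIP}}(\theta) \;=\; \mathbb{E}_t\Big[\min\big(r_t(\theta)\,\hat{A}_t,\; \operatorname{clip}(r_t(\theta),\,1-\epsilon,\,1+\epsilon)\,\hat{A}_t\big)\Big],
    \qquad
    r_t(\theta) \;=\; \frac{\pi_\theta(a_t \mid o_t)}{\pi_{\theta_{\text{old}}}(a_t \mid o_t)},
    $$

    where the advantage estimates $\hat{A}_t$ come from a centralized value function and rMAPPO additionally uses recurrent policy and value networks to cope with partial observability.
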
  5. 165

    Parameters of the proposed algorithm. by Heba Askr (15572851)

    Published 2023
    “…First, MaAVOA was applied to the DTLZ functions and its performance was compared with that of several popular many-objective algorithms. According to the results, MaAVOA outperforms the competitor algorithms in terms of the inverted generational distance and hypervolume performance measures, and it shows beneficial adaptation ability in terms of both convergence and diversity. …”
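
    As a reference for the two metrics named in the excerpt: the inverted generational distance (IGD) is the mean distance from each point of a reference Pareto front to its nearest obtained solution (lower is better), while hypervolume is the volume of objective space dominated by the obtained set relative to a reference point (higher is better). The sketch below is a generic illustration of IGD, not the authors' implementation.

```python
import numpy as np

def inverted_generational_distance(obtained, reference):
    """IGD: mean Euclidean distance from each reference-front point to the
    closest obtained solution; lower values mean better convergence/coverage."""
    obtained = np.asarray(obtained, dtype=float)    # (n, m) objective vectors found
    reference = np.asarray(reference, dtype=float)  # (k, m) reference Pareto front
    # Pairwise distances between reference and obtained points: shape (k, n).
    dists = np.linalg.norm(reference[:, None, :] - obtained[None, :, :], axis=-1)
    return dists.min(axis=1).mean()
```
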
  6. 166
  7. 167

    Comparative analysis of algorithms. by Xumin Zhao (18261643)

    Published 2024
    “…Subsequent experimental validations of the LIRU algorithm underscore its superiority over conventional replacement algorithms, showing significant improvements in storage utilization and data access efficiency as well as reduced access delays. …”
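
    The excerpt does not define LIRU itself; for orientation, a conventional replacement policy of the kind it is benchmarked against (a plain least-recently-used cache) can be sketched as follows. This is a generic baseline, not code from the paper.

```python
from collections import OrderedDict

class LRUCache:
    """Conventional LRU replacement: when the cache is full, evict the
    entry that has gone longest without being accessed."""

    def __init__(self, capacity):
        self.capacity = capacity
        self._store = OrderedDict()

    def get(self, key):
        if key not in self._store:
            return None
        self._store.move_to_end(key)         # mark as most recently used
        return self._store[key]

    def put(self, key, value):
        if key in self._store:
            self._store.move_to_end(key)
        self._store[key] = value
        if len(self._store) > self.capacity:
            self._store.popitem(last=False)   # evict the least recently used entry
```
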
  8. 168

    Parameter settings for metaheuristic algorithms. by Junhao Wei (6816803)

    Published 2025
    “…In the experimental section, we validate the efficiency and superiority of LSWOA by comparing it with leading metaheuristic algorithms and strong WOA variants. The experimental results show that LSWOA delivers strong optimization performance on benchmark functions of various dimensions. …”
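
    To make "benchmark functions of various dimensions" concrete, a typical test function used in such comparisons is the Rastrigin function, which is highly multimodal and scales to any dimension. The snippet is a generic example, not necessarily one of the authors' chosen benchmarks.

```python
import numpy as np

def rastrigin(x):
    """Rastrigin benchmark: f(x) = 10*n + sum(x_i^2 - 10*cos(2*pi*x_i)),
    global minimum 0 at x = 0, defined for any dimension n = len(x)."""
    x = np.asarray(x, dtype=float)
    return 10.0 * x.size + np.sum(x**2 - 10.0 * np.cos(2.0 * np.pi * x))

# Metaheuristic comparisons typically evaluate such functions in, e.g., 10 or 30 dimensions.
print(rastrigin(np.zeros(10)))                             # 0.0 at the optimum
print(rastrigin(np.random.uniform(-5.12, 5.12, size=30)))  # a random 30-D point
```
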
  9. 169

    Mean training time of different algorithms. by Wei Liu (20030)

    Published 2023
    “…The results show that: (1) the global convergence probability of SGWO was 1, and its search process was a finite homogeneous Markov chain with an absorbing state; (2) SGWO not only achieves better optimization performance on complex functions of different dimensions, but also, when applied to parameter optimization of the Elman network, it significantly optimizes the network structure, and SGWO-Elman delivers accurate prediction performance.…”
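
    The convergence claim in this excerpt (shared with the next entry, from the same study) rests on a standard property of finite absorbing Markov chains. In the usual form of such arguments, sketched here for orientation rather than quoted from the paper, if the best-so-far state of the algorithm evolves as a finite homogeneous chain in which the set $A$ of states containing the global optimum is absorbing and reachable from every state, then

    $$
    \lim_{t \to \infty} \Pr\big(X_t \in A\big) \;=\; 1,
    $$

    which is what a "global convergence probability of 1" expresses: with elitist retention the search eventually enters, and never leaves, the optimal region.
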
  10. 170

    Algorithm ranking under different dimensions. by Wei Liu (20030)

    Published 2023
    “…The results show that: (1) the global convergence probability of SGWO was 1, and its search process was a finite homogeneous Markov chain with an absorbing state; (2) SGWO not only achieves better optimization performance on complex functions of different dimensions, but also, when applied to parameter optimization of the Elman network, it significantly optimizes the network structure, and SGWO-Elman delivers accurate prediction performance.…”
  11. 171
  12. 172
  13. 173

    Parameter sets of the chosen algorithms. by WanRu Zhao (18980374)

    Published 2024
    “…The IERWHO algorithm is an improved Wild Horse Optimization (WHO) algorithm that combines a chaotic sequence factor, a nonlinear factor, and an inertia weight factor. …”
  14. 174

    The flow chart of IERWHO algorithm. by WanRu Zhao (18980374)

    Published 2024
    “…The IERWHO algorithm is an improved Wild Horse Optimization (WHO) algorithm that combines a chaotic sequence factor, a nonlinear factor, and an inertia weight factor. …”
  15. 175

    The flow chart of WHO algorithm. by WanRu Zhao (18980374)

    Published 2024
    “…The IERWHO algorithm is an improved Wild Horse Optimization (WHO) algorithm that combines a chaotic sequence factor, a nonlinear factor, and an inertia weight factor. …”
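
    The three ingredients named in entries 13-15 are common metaheuristic modifications. The sketch below shows one standard realisation of two of them, a logistic chaotic map and a nonlinearly decreasing inertia weight; the exact factor definitions used by IERWHO are not given in the excerpt, so these formulas are illustrative assumptions, not the authors' equations.

```python
import numpy as np

def logistic_chaotic_sequence(length, x0=0.7, mu=4.0):
    """Chaotic sequence from the logistic map x_{k+1} = mu * x_k * (1 - x_k),
    often used to diversify initial populations or perturb candidate solutions."""
    seq = np.empty(length)
    x = x0
    for k in range(length):
        x = mu * x * (1.0 - x)
        seq[k] = x
    return seq

def nonlinear_inertia_weight(t, t_max, w_start=0.9, w_end=0.4, power=2.0):
    """Nonlinearly decreasing inertia weight: large early in the run
    (exploration), small late in the run (exploitation)."""
    return w_end + (w_start - w_end) * (1.0 - t / t_max) ** power
```
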
  16. 176

    CEC2017 test function test results. by Tengfei Ma (597633)

    Published 2025
    “…The optimal individual’s position is updated by randomly selecting from these factors, enhancing the algorithm’s ability to attain the global optimum and increasing its overall robustness. …”
  17. 177
  18. 178

    Flowchart of the specific incarnation of the BO algorithm used in the experiments. by Lisa Laux (9367681)

    Published 2020
    “…To choose the next pipeline configuration to evaluate, the BO algorithm uses an Expected Improvement function to trade off maximisation of QS with the need to fully learn the GP. …”
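
    For reference, the Expected Improvement acquisition mentioned in the excerpt has a well-known closed form under a Gaussian process posterior. The sketch below uses the standard maximisation convention (matching "maximisation of QS"); the function name and the optional exploration parameter xi are generic, not taken from the paper.

```python
import numpy as np
from scipy.stats import norm

def expected_improvement(mu, sigma, best_so_far, xi=0.0):
    """Closed-form EI for maximisation with GP posterior mean mu and std sigma:
    EI = (mu - best - xi) * Phi(z) + sigma * phi(z), z = (mu - best - xi) / sigma."""
    mu = np.asarray(mu, dtype=float)
    sigma = np.asarray(sigma, dtype=float)
    improvement = mu - best_so_far - xi
    # Avoid division by zero where the GP is already certain (sigma == 0).
    z = np.divide(improvement, sigma, out=np.zeros_like(mu), where=sigma > 0)
    ei = improvement * norm.cdf(z) + sigma * norm.pdf(z)
    return np.where(sigma > 0, ei, np.maximum(improvement, 0.0))
```

    In each iteration, the next pipeline configuration would then be the candidate with the largest EI under the current GP, which balances exploiting configurations predicted to score well against exploring configurations where the GP is still uncertain.
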
  19. 179
  20. 180