1. Scheduling time of five algorithms.
   Published 2025: "…The experimental results show that compared with the traditional particle swarm optimization algorithm, NACFPSO performs well in both convergence speed and scheduling time, with an average convergence speed of 81.17 iterations and an average scheduling time of 200.00 minutes, while the particle swarm optimization algorithm has an average convergence speed of 82.17 iterations and an average scheduling time of 207.49 minutes. …"
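A quick sanity check of the averages quoted in that snippet, expressed as relative improvements (the variable names are illustrative; only the four numbers come from the excerpt):

```python
# Figures quoted in the snippet: NACFPSO vs. traditional PSO.
pso_time, nacfpso_time = 207.49, 200.00   # average scheduling time, minutes
pso_iters, nacfpso_iters = 82.17, 81.17   # average iterations to converge

# Relative improvements implied by the reported averages.
time_gain = (pso_time - nacfpso_time) / pso_time
iter_gain = (pso_iters - nacfpso_iters) / pso_iters

print(f"scheduling time reduced by {time_gain:.1%}")          # about 3.6%
print(f"convergence iterations reduced by {iter_gain:.1%}")   # about 1.2%
```

So the reported gain is modest: roughly 3.6% in scheduling time and 1.2% in iterations to converge.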
4. Completion times for different algorithms.
   Published 2025: "…This paper first analyzes the H-beam processing flow and appropriately simplifies it, develops a reinforcement learning environment for multi-agent scheduling, and applies the rMAPPO algorithm to make scheduling decisions. The effectiveness of the proposed method is then verified on both the physical work cell for riveting and welding and its digital twin platform, and it is compared with other baseline multi-agent reinforcement learning methods (MAPPO, MADDPG, and MASAC). …"
5. The pseudocode for the NACFPSO algorithm.
   Published 2025: "…The experimental results show that compared with the traditional particle swarm optimization algorithm, NACFPSO performs well in both convergence speed and scheduling time, with an average convergence speed of 81.17 iterations and an average scheduling time of 200.00 minutes, while the particle swarm optimization algorithm has an average convergence speed of 82.17 iterations and an average scheduling time of 207.49 minutes. …"
6. PSO algorithm flowchart.
   Published 2025: "…The experimental results show that compared with the traditional particle swarm optimization algorithm, NACFPSO performs well in both convergence speed and scheduling time, with an average convergence speed of 81.17 iterations and an average scheduling time of 200.00 minutes, while the particle swarm optimization algorithm has an average convergence speed of 82.17 iterations and an average scheduling time of 207.49 minutes. …"
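Several of the entries above excerpt a comparison against standard particle swarm optimization. As background for those results, a minimal sketch of the textbook global-best PSO loop that such flowcharts depict (illustrative only; this is plain PSO, not the NACFPSO variant from the paper):

```python
import random

def pso(f, dim, n_particles=20, iters=100, w=0.7, c1=1.5, c2=1.5, seed=0):
    """Minimize f over [-5, 5]^dim with textbook global-best PSO."""
    rng = random.Random(seed)
    pos = [[rng.uniform(-5, 5) for _ in range(dim)] for _ in range(n_particles)]
    vel = [[0.0] * dim for _ in range(n_particles)]
    pbest = [p[:] for p in pos]            # each particle's best-seen position
    pbest_val = [f(p) for p in pos]
    g = min(range(n_particles), key=lambda i: pbest_val[i])
    gbest, gbest_val = pbest[g][:], pbest_val[g]   # swarm-wide best

    for _ in range(iters):
        for i in range(n_particles):
            for d in range(dim):
                r1, r2 = rng.random(), rng.random()
                # inertia + cognitive pull (pbest) + social pull (gbest)
                vel[i][d] = (w * vel[i][d]
                             + c1 * r1 * (pbest[i][d] - pos[i][d])
                             + c2 * r2 * (gbest[d] - pos[i][d]))
                pos[i][d] += vel[i][d]
            val = f(pos[i])
            if val < pbest_val[i]:
                pbest[i], pbest_val[i] = pos[i][:], val
                if val < gbest_val:
                    gbest, gbest_val = pos[i][:], val
    return gbest, gbest_val

# Usage: minimize the sphere function; the optimum is 0 at the origin.
best, best_val = pso(lambda x: sum(v * v for v in x), dim=3)
```

Variants like NACFPSO typically modify the inertia weight or the velocity update to improve convergence; the outer loop structure stays the same.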
7. Convergence speed of five algorithms.
   Published 2025: "…The experimental results show that compared with the traditional particle swarm optimization algorithm, NACFPSO performs well in both convergence speed and scheduling time, with an average convergence speed of 81.17 iterations and an average scheduling time of 200.00 minutes, while the particle swarm optimization algorithm has an average convergence speed of 82.17 iterations and an average scheduling time of 207.49 minutes. …"
9. Continuous probability distributions generated by the PIPE algorithm.
   Published 2022: "…Abstract: We investigate the use of the Probabilistic Incremental Programming Evolution (PIPE) algorithm as a tool to construct continuous cumulative distribution functions to model given data sets. …"
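The PIPE paper above evolves programs to fit cumulative distribution functions. As a much simpler stand-in that illustrates the modeling target (an empirical CDF estimated directly from a sample; this is not the PIPE method itself):

```python
import bisect

def empirical_cdf(data):
    """Return a step-function estimate of F(x) = P(X <= x) from a sample."""
    xs = sorted(data)
    n = len(xs)
    def F(x):
        # fraction of sample points that are <= x
        return bisect.bisect_right(xs, x) / n
    return F

# Usage on a tiny sample.
F = empirical_cdf([1.0, 2.0, 2.0, 3.0])
print(F(2.0))  # 0.75
```

PIPE's contribution is producing a smooth closed-form F rather than this step function, but both estimate the same quantity from the data.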
14. The average cumulative reward of algorithms.
    Published 2025: "…This paper first analyzes the H-beam processing flow and appropriately simplifies it, develops a reinforcement learning environment for multi-agent scheduling, and applies the rMAPPO algorithm to make scheduling decisions. The effectiveness of the proposed method is then verified on both the physical work cell for riveting and welding and its digital twin platform, and it is compared with other baseline multi-agent reinforcement learning methods (MAPPO, MADDPG, and MASAC). …"
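The entry above plots average cumulative reward, the standard comparison metric for reinforcement learning methods. It is typically computed as below (generic RL bookkeeping, not the paper's code):

```python
def average_cumulative_reward(episodes):
    """episodes: list of per-step reward lists; returns the mean episode return."""
    returns = [sum(rewards) for rewards in episodes]  # cumulative reward per episode
    return sum(returns) / len(returns)

# Usage: three episodes with returns 1.5, 2.0, and 1.5.
avg = average_cumulative_reward([[1.0, 0.5], [2.0], [0.0, 0.0, 1.5]])
print(avg)  # 5/3, about 1.67
```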
17. Average function evaluation times of the three optimization algorithms.
    Published 2025.
18. Simulation settings of the rMAPPO algorithm.
    Published 2025: "…This paper first analyzes the H-beam processing flow and appropriately simplifies it, develops a reinforcement learning environment for multi-agent scheduling, and applies the rMAPPO algorithm to make scheduling decisions. The effectiveness of the proposed method is then verified on both the physical work cell for riveting and welding and its digital twin platform, and it is compared with other baseline multi-agent reinforcement learning methods (MAPPO, MADDPG, and MASAC). …"
19. The run time for each algorithm in seconds.
    Published 2025: "…We find evidence that the generalised GLS-KGR algorithm is well-suited to such time-series applications, outperforming several standard techniques on this dataset. …"
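Per-algorithm wall-clock tables like the one in the entry above are commonly collected with a pattern like this (illustrative harness only; the GLS-KGR implementation itself is not shown in the excerpt):

```python
import time

def time_algorithms(algorithms, *args):
    """Return {name: wall-clock seconds} for each callable in `algorithms`."""
    runtimes = {}
    for name, fn in algorithms.items():
        start = time.perf_counter()
        fn(*args)
        runtimes[name] = time.perf_counter() - start
    return runtimes

# Usage with two toy "algorithms" run on the same input.
data = list(range(10_000))
times = time_algorithms({
    "builtin_sort": lambda xs: sorted(xs),
    "sum_pass": lambda xs: sum(xs),
}, data)
```

`time.perf_counter` is monotonic and high-resolution, which makes it the usual choice for such measurements over `time.time`.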