Search alternatives:
algorithm its » algorithm i, algorithm etc, algorithm iqa
its function » i function, loss function, cost function
4. The pseudocode for the NACFPSO algorithm.
   Published 2025: "…A scheduling optimization model based on the particle swarm optimization (PSO) algorithm is proposed. To address the problems of high-dimensional complexity and local optima, the neighborhood adaptive constrained fractional particle swarm optimization (NACFPSO) algorithm is used to solve it. …"
5. PSO algorithm flowchart.
   Published 2025: "…A scheduling optimization model based on the particle swarm optimization (PSO) algorithm is proposed. To address the problems of high-dimensional complexity and local optima, the neighborhood adaptive constrained fractional particle swarm optimization (NACFPSO) algorithm is used to solve it. …"
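Neither snippet spells out the NACFPSO update rules, so its neighborhood-adaptive, constrained, and fractional-order modifications are not reproduced here. As a point of reference only, below is a minimal sketch of the canonical PSO loop that such variants build on; the sphere objective, swarm size, and coefficients are illustrative assumptions, not values from the paper.

# Minimal sketch of the canonical PSO update that NACFPSO builds on.
# The neighborhood-adaptive, constrained, and fractional-order modifications
# from the paper are NOT reproduced; objective and parameters are assumptions.
import numpy as np

def pso(objective, dim=10, swarm=30, iters=200, w=0.7, c1=1.5, c2=1.5,
        lo=-5.0, hi=5.0, seed=0):
    rng = np.random.default_rng(seed)
    x = rng.uniform(lo, hi, (swarm, dim))          # particle positions
    v = np.zeros((swarm, dim))                     # particle velocities
    pbest = x.copy()                               # personal best positions
    pbest_val = np.apply_along_axis(objective, 1, x)
    g = pbest[np.argmin(pbest_val)].copy()         # global best position
    for _ in range(iters):
        r1, r2 = rng.random((swarm, dim)), rng.random((swarm, dim))
        v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (g - x)
        x = np.clip(x + v, lo, hi)                 # keep particles in bounds
        val = np.apply_along_axis(objective, 1, x)
        improved = val < pbest_val
        pbest[improved], pbest_val[improved] = x[improved], val[improved]
        g = pbest[np.argmin(pbest_val)].copy()
    return g, pbest_val.min()

best_x, best_f = pso(lambda z: float(np.sum(z**2)))   # sphere test objective
print(best_f)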
6. Flowchart of the specific incarnation of the BO algorithm used in the experiments.
   Published 2020: "…To choose the next pipeline configuration to evaluate, the BO algorithm uses an Expected Improvement function to trade off maximisation of QS with the need to fully learn the GP. …"
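The Expected Improvement acquisition named in this entry is a standard Bayesian-optimization ingredient; below is a minimal sketch of EI for a maximization objective. The paper's quality score (QS), GP model, and pipeline configuration space are not reproduced; the candidate means, standard deviations, and the xi margin are illustrative assumptions.

# Standard Expected Improvement acquisition for a maximization objective.
# Generic illustration only; the paper's QS and GP over pipeline
# configurations are not reproduced here.
import numpy as np
from scipy.stats import norm

def expected_improvement(mu, sigma, best_so_far, xi=0.01):
    """EI at candidates with GP posterior mean `mu` and std `sigma`.
    `xi` is an exploration margin (illustrative default, not from the paper)."""
    mu, sigma = np.asarray(mu, float), np.asarray(sigma, float)
    imp = mu - best_so_far - xi                    # predicted improvement
    with np.errstate(divide="ignore", invalid="ignore"):
        z = np.where(sigma > 0, imp / sigma, 0.0)
        ei = imp * norm.cdf(z) + sigma * norm.pdf(z)
    return np.where(sigma > 0, ei, np.maximum(imp, 0.0))

# The next configuration to evaluate is the candidate with the largest EI.
mu = np.array([0.60, 0.72, 0.55])     # GP posterior means (hypothetical)
sd = np.array([0.05, 0.20, 0.01])     # GP posterior stds  (hypothetical)
print(int(np.argmax(expected_improvement(mu, sd, best_so_far=0.70))))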
7. Efficient algorithms to discover alterations with complementary functional association in cancer
   Published 2019: "…We provide analytic evidence of the effectiveness of UNCOVER in finding high-quality solutions and show experimentally that UNCOVER finds sets of alterations significantly associated with functional targets in a variety of scenarios. In particular, we show that our algorithms find sets which are better than the ones obtained by the state-of-the-art method, even when sets are evaluated using the statistical score employed by the latter. …"
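The snippet does not describe UNCOVER's actual procedure, so the sketch below is not that algorithm: it is a generic greedy weighted-coverage heuristic, included only to illustrate the general flavor of selecting a small set of alterations that jointly cover samples carrying a functional-target signal. All names and numbers are hypothetical.

# NOT the UNCOVER algorithm: a generic greedy weighted-coverage sketch used
# only to illustrate picking a small alteration set; data are hypothetical.
def greedy_alteration_set(alterations, sample_weight, k=3):
    """Greedily pick up to k alterations maximizing the total weight of newly
    covered samples. `alterations` maps gene name -> set of altered samples."""
    chosen, covered = [], set()
    for _ in range(k):
        best, best_gain = None, 0.0
        for name, samples in alterations.items():
            if name in chosen:
                continue
            gain = sum(sample_weight[s] for s in samples - covered)
            if gain > best_gain:
                best, best_gain = name, gain
        if best is None:                 # no remaining alteration adds weight
            break
        chosen.append(best)
        covered |= alterations[best]
    return chosen

alts = {"TP53": {1, 2, 3}, "KRAS": {3, 4}, "EGFR": {5}}    # hypothetical
weights = {1: 0.9, 2: 0.4, 3: 0.8, 4: 0.7, 5: 0.1}         # hypothetical
print(greedy_alteration_set(alts, weights, k=2))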
10. Completion times for different algorithms.
    Published 2025: "…This paper first analyzes the H-beam processing flow and appropriately simplifies it, develops a reinforcement learning environment for multi-agent scheduling, and applies the rMAPPO algorithm to make scheduling decisions. The effectiveness of the proposed method is then verified on both the physical riveting-and-welding work cell and its digital twin platform, and it is compared with baseline multi-agent reinforcement learning methods (MAPPO, MADDPG, and MASAC). …"
11. The average cumulative reward of algorithms.
    Published 2025: "…This paper first analyzes the H-beam processing flow and appropriately simplifies it, develops a reinforcement learning environment for multi-agent scheduling, and applies the rMAPPO algorithm to make scheduling decisions. The effectiveness of the proposed method is then verified on both the physical riveting-and-welding work cell and its digital twin platform, and it is compared with baseline multi-agent reinforcement learning methods (MAPPO, MADDPG, and MASAC). …"
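The snippet does not give rMAPPO's update equations. As a hedged sketch, below is the clipped PPO surrogate at the core of MAPPO (and, presumably, its rMAPPO variant); the recurrent policy, centralized critic, and scheduling environment from the paper are not reproduced, and the batch values are illustrative.

# Core clipped PPO surrogate optimized per agent by MAPPO-family methods.
# The paper's recurrent policy, centralized critic, and H-beam scheduling
# environment are NOT reproduced; shapes and the clip range are assumptions.
import numpy as np

def ppo_losses(new_logp, old_logp, advantages, values, returns, clip=0.2):
    """Clipped policy loss and value loss for one agent's batch."""
    ratio = np.exp(new_logp - old_logp)                  # pi_new / pi_old
    unclipped = ratio * advantages
    clipped = np.clip(ratio, 1.0 - clip, 1.0 + clip) * advantages
    policy_loss = -np.mean(np.minimum(unclipped, clipped))
    value_loss = np.mean((returns - values) ** 2)        # critic regression
    return policy_loss, value_loss

# Toy batch (hypothetical numbers) just to show the call shape.
pl, vl = ppo_losses(new_logp=np.array([-1.0, -0.7]),
                    old_logp=np.array([-1.1, -0.9]),
                    advantages=np.array([0.5, -0.2]),
                    values=np.array([1.0, 0.8]),
                    returns=np.array([1.2, 0.7]))
print(pl, vl)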
13. Scheduling time of five algorithms.
    Published 2025: "…A scheduling optimization model based on the particle swarm optimization (PSO) algorithm is proposed. To address the problems of high-dimensional complexity and local optima, the neighborhood adaptive constrained fractional particle swarm optimization (NACFPSO) algorithm is used to solve it. …"
14. Convergence speed of five algorithms.
    Published 2025: "…A scheduling optimization model based on the particle swarm optimization (PSO) algorithm is proposed. To address the problems of high-dimensional complexity and local optima, the neighborhood adaptive constrained fractional particle swarm optimization (NACFPSO) algorithm is used to solve it. …"
17. Simulation settings of rMAPPO algorithm.
    Published 2025: "…This paper first analyzes the H-beam processing flow and appropriately simplifies it, develops a reinforcement learning environment for multi-agent scheduling, and applies the rMAPPO algorithm to make scheduling decisions. The effectiveness of the proposed method is then verified on both the physical riveting-and-welding work cell and its digital twin platform, and it is compared with baseline multi-agent reinforcement learning methods (MAPPO, MADDPG, and MASAC). …"