2
Computation time as a function of the sample size on the chain graph dataset.
Published 2024
3
Computation time as a function of the sample size on the random graph dataset.
Published 2024
4
Computation time as a function of the number of variables on the chain graph dataset.
Published 2024
5
Computation time as a function of the number of variables on the random graph dataset.
Published 2024
9
Completion times for different algorithms.
Published 2025
“…This paper first analyzes the H-beam processing flow and appropriately simplifies it, develops a reinforcement learning environment for multi-agent scheduling, and applies the rMAPPO algorithm to make scheduling decisions. The effectiveness of the proposed method is then verified on both the physical work cell for riveting and welding and its digital twin platform, and it is compared with other baseline multi-agent reinforcement learning methods (MAPPO, MADDPG, and MASAC). …”
17
Flowchart of PRGA algorithm.
Published 2025
“…A case study of a bidirectional disruption during 08:00–10:00 on the section of Xi’an Metro Line 2 demonstrates that: (1) The proposed model exhibits stronger robustness under demand uncertainty, achieving a reduction of 3 dispatched vehicles and a cost saving of 9,439 RMB by moderately increasing passenger costs by 850 RMB and extending bridging time; (2) The RPGA algorithm outperforms Non-dominated Sorting Genetic Algorithm II (NSGA-II), Reinforcement Learning-based NSGA-II (RLNSGA-II), and Multi-objective Particle Swarm Optimization Algorithm (MOPSO) in hypervolume (HV), generational distance (GD), and non-dominated ratio (NDR); (3) Increasing the rated passenger capacity within a certain range can reduce average passenger delays but correspondingly raises transportation costs. …”
18
Comparison of total time consumed for different offloading algorithms for N = 10, 20, 30.
Published 2025