3. The pseudocode for the NAFPSO algorithm.
Published 2025: "…The experimental results show that compared with the traditional particle swarm optimization algorithm, NACFPSO performs well in both convergence speed and scheduling time, with an average convergence speed of 81.17 iterations and an average scheduling time of 200.00 minutes; while the average convergence speed of the particle swarm optimization algorithm is 82.17 iterations and an average scheduling time of 207.49 minutes. …"
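The convergence figures quoted above count iterations of the particle swarm update. As a generic illustration only (a textbook PSO loop, not the paper's NACFPSO variant; the inertia weight `w` and acceleration coefficients `c1`, `c2` below are common default choices, not values from the source):

```python
import random

def pso(objective, dim, n_particles=30, iters=100,
        w=0.7, c1=1.5, c2=1.5, lo=-5.0, hi=5.0):
    """Minimize `objective` over [lo, hi]^dim with a textbook PSO loop."""
    # Random initial positions, zero initial velocities.
    pos = [[random.uniform(lo, hi) for _ in range(dim)] for _ in range(n_particles)]
    vel = [[0.0] * dim for _ in range(n_particles)]
    pbest = [p[:] for p in pos]                      # personal best positions
    pbest_val = [objective(p) for p in pos]
    g = min(range(n_particles), key=lambda i: pbest_val[i])
    gbest, gbest_val = pbest[g][:], pbest_val[g]     # global best so far

    for _ in range(iters):
        for i in range(n_particles):
            for d in range(dim):
                r1, r2 = random.random(), random.random()
                # Velocity update: inertia + cognitive pull + social pull.
                vel[i][d] = (w * vel[i][d]
                             + c1 * r1 * (pbest[i][d] - pos[i][d])
                             + c2 * r2 * (gbest[d] - pos[i][d]))
                # Position update, clamped to the search box.
                pos[i][d] = min(hi, max(lo, pos[i][d] + vel[i][d]))
            val = objective(pos[i])
            if val < pbest_val[i]:                   # improve personal best
                pbest[i], pbest_val[i] = pos[i][:], val
                if val < gbest_val:                  # improve global best
                    gbest, gbest_val = pos[i][:], val
    return gbest, gbest_val

# Minimize the sphere function in 3 dimensions.
best, best_val = pso(lambda x: sum(v * v for v in x), dim=3)
```

In this framing, "convergence speed" is the number of outer iterations needed before `gbest_val` stops improving meaningfully.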
4. PSO algorithm flowchart.
Published 2025.
5. Scheduling time of five algorithms.
Published 2025.
6. Convergence speed of five algorithms.
Published 2025.
8. Efficient Algorithms for GPU Accelerated Evaluation of the DFT Exchange-Correlation Functional
Published 2025: "…We show that batched formation of the XC matrix from the density matrix yields the best performance for large (>O(10³) basis functions), sparse systems such as glycine chains and water clusters. …"
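"Batched formation" of a quadrature matrix is a standard pattern: instead of contracting over the whole integration grid at once, partial contributions are accumulated one batch of grid points at a time. A minimal NumPy sketch of that pattern follows; the array sizes and the `fxc` values are random placeholders, not the paper's GPU kernels or a real exchange-correlation functional:

```python
import numpy as np

rng = np.random.default_rng(0)
n_basis, n_grid, batch = 8, 1000, 250

phi = rng.standard_normal((n_grid, n_basis))  # basis function values on the grid
w = rng.random(n_grid)                        # quadrature weights
fxc = rng.random(n_grid)                      # per-point kernel values (placeholder)

# Batched formation: accumulate V += Phi_b^T diag(w_b * fxc_b) Phi_b per grid batch.
V = np.zeros((n_basis, n_basis))
for s in range(0, n_grid, batch):
    pb = phi[s:s + batch]
    V += pb.T @ (w[s:s + batch, None] * fxc[s:s + batch, None] * pb)

# The batched accumulation matches the single full-grid contraction.
V_ref = phi.T @ (w[:, None] * fxc[:, None] * phi)
assert np.allclose(V, V_ref)
```

Batching keeps the per-step working set (one grid slab of `phi`) small enough for device memory while still using dense matrix multiplies, which is why it pays off for large, sparse systems.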
11. Completion times for different algorithms.
Published 2025: "…This paper first analyzes the H-beam processing flow and appropriately simplifies it, develops a reinforcement learning environment for multi-agent scheduling, and applies the rMAPPO algorithm to make scheduling decisions. The effectiveness of the proposed method is then verified on both the physical work cell for riveting and welding and its digital twin platform, and it is compared with other baseline multi-agent reinforcement learning methods (MAPPO, MADDPG, and MASAC). …"
12. The average cumulative reward of algorithms.
Published 2025.
13. Simulation settings of rMAPPO algorithm.
Published 2025.
14. Flexible CDOCKER: Hybrid Searching Algorithm and Scoring Function with Side Chain Conformational Entropy
Published 2021: "…We also describe a novel hybrid searching algorithm that combines both molecular dynamics (MD)-based simulated annealing and genetic algorithm crossovers to address the enhanced sampling of the increased search space. …"