Search results (all published 2025):

2. DMTD algorithm.
   “…On the basis of EITO<sub>E</sub>, we propose EITO<sub>P</sub> algorithm using the PPO algorithm to optimize multiple objectives by designing reinforcement learning strategies, rewards, and value functions. …”
3. Proposed Algorithm.
   “…Hence, an Energy-Harvesting Reinforcement Learning-based Offloading Decision Algorithm (EHRL) is proposed. …”
4. Comparisons between ADAM and NADAM optimizers. (Same snippet as result 3.)
8. The Pseudo-Code of the IRBMO Algorithm.
   “…In order to comprehensively verify the performance of IRBMO, this paper designs a series of experiments to compare it with nine mainstream binary optimization algorithms. The experiments are based on 12 medical datasets, and the results show that IRBMO achieves optimal overall performance in key metrics such as fitness value, classification accuracy and specificity. …”
10. IRBMO vs. meta-heuristic algorithms boxplot. (Same snippet as result 8.)
11. IRBMO vs. feature selection algorithm boxplot. (Same snippet as result 8.)
13. EITO<sub>P</sub> with flexible trip time. (Same snippet as result 2.)
14. Energy consumption in the PPO process. (Same snippet as result 2.)
15. Speed limits. (Same snippet as result 2.)
16. Running time in the PPO process. (Same snippet as result 2.)
17. Speed limits and gradients from RJ to WYJ. (Same snippet as result 2.)
18. EITO<sub>E</sub> speed distance profile. (Same snippet as result 2.)
19. Speed limits and gradient from SJZ to XHM. (Same snippet as result 2.)
20. Parameters of DKZ32. (Same snippet as result 2.)
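Several of the snippets above describe EITO<sub>P</sub> as being built on the PPO algorithm, but none of the listed results show what PPO actually optimizes. As general background only (this is a minimal sketch of PPO's standard clipped surrogate objective, not the EITO<sub>P</sub> implementation; the function name and sample values are hypothetical):

```python
import numpy as np

def ppo_clip_loss(ratio, advantage, eps=0.2):
    """Clipped surrogate loss used by PPO.

    ratio:     pi_new(a|s) / pi_old(a|s) for each sampled action
    advantage: estimated advantage for each sampled action
    eps:       clipping range (0.2 is a commonly used default)
    Returns the loss to *minimize* (negative of the surrogate objective).
    """
    unclipped = ratio * advantage
    # Clipping the ratio removes the incentive to move the policy
    # far outside the trust region [1 - eps, 1 + eps].
    clipped = np.clip(ratio, 1.0 - eps, 1.0 + eps) * advantage
    return -np.mean(np.minimum(unclipped, clipped))

# Toy batch: the middle ratio is unchanged, the outer two get clipped
# or kept depending on which side of the min() they fall on.
loss = ppo_clip_loss(np.array([0.5, 1.0, 1.5]), np.array([1.0, 1.0, 1.0]))
```

Because of the `min()` with the clipped term, increasing a ratio beyond `1 + eps` (for positive advantage) yields no further gain, which is what keeps PPO updates conservative.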