1. DMTD algorithm.
   Published 2025: “…On the basis of EITO_E, we propose EITO_P algorithm using the PPO algorithm to optimize multiple objectives by designing reinforcement learning strategies, rewards, and value functions. …”
7. EITO_P with flexible trip time.
   Published 2025: “…On the basis of EITO_E, we propose EITO_P algorithm using the PPO algorithm to optimize multiple objectives by designing reinforcement learning strategies, rewards, and value functions. …”
8. Energy consumption in the PPO process.
   Published 2025: “…On the basis of EITO_E, we propose EITO_P algorithm using the PPO algorithm to optimize multiple objectives by designing reinforcement learning strategies, rewards, and value functions. …”
9. Speed limits.
   Published 2025: “…On the basis of EITO_E, we propose EITO_P algorithm using the PPO algorithm to optimize multiple objectives by designing reinforcement learning strategies, rewards, and value functions. …”
10. Running time in the PPO process.
   Published 2025: “…On the basis of EITO_E, we propose EITO_P algorithm using the PPO algorithm to optimize multiple objectives by designing reinforcement learning strategies, rewards, and value functions. …”
11. Speed limits and gradients from RJ to WYJ.
   Published 2025: “…On the basis of EITO_E, we propose EITO_P algorithm using the PPO algorithm to optimize multiple objectives by designing reinforcement learning strategies, rewards, and value functions. …”
12. EITO_E speed distance profile.
   Published 2025: “…On the basis of EITO_E, we propose EITO_P algorithm using the PPO algorithm to optimize multiple objectives by designing reinforcement learning strategies, rewards, and value functions. …”
13. Speed limits and gradient from SJZ to XHM.
   Published 2025: “…On the basis of EITO_E, we propose EITO_P algorithm using the PPO algorithm to optimize multiple objectives by designing reinforcement learning strategies, rewards, and value functions. …”
14. Parameters of DKZ32.
   Published 2025: “…On the basis of EITO_E, we propose EITO_P algorithm using the PPO algorithm to optimize multiple objectives by designing reinforcement learning strategies, rewards, and value functions. …”
15. EITO_P with a variable trip time.
   Published 2025: “…On the basis of EITO_E, we propose EITO_P algorithm using the PPO algorithm to optimize multiple objectives by designing reinforcement learning strategies, rewards, and value functions. …”
20. Schematic of P. chabaudi within-host infection dynamics and fitness optimization.
   Published 2025: “…For local optimization, an arbitrary starting spline is picked (left panel) and an optimization algorithm is used to adjust the relative weights of the basis functions until a fitness maximum is achieved (going from left to right). …”
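The excerpt repeated across results 1 and 7-15 describes folding several objectives (for example energy consumption and running time, per the figure titles) into the reward that PPO optimizes. The sketch below is a minimal illustration of that idea, assuming a simple weighted-sum step reward; the field names, weights, and speed-limit penalty are invented for the example and are not taken from the cited paper.

```python
# Hypothetical sketch of a weighted multi-objective step reward for PPO.
# The state fields, weights, and penalty form below are assumptions made for
# illustration; they are not taken from the EITO_P paper.
from dataclasses import dataclass


@dataclass
class StepOutcome:
    energy_kwh: float       # traction energy consumed during the step
    elapsed_s: float        # running time spent during the step
    speed_mps: float        # train speed at the end of the step
    speed_limit_mps: float  # applicable line speed limit


def multi_objective_reward(step: StepOutcome,
                           w_energy: float = 1.0,
                           w_time: float = 0.5,
                           w_limit: float = 10.0) -> float:
    """Collapse several objectives into one scalar reward for a PPO agent.

    Lower energy use and shorter running time reduce the cost; exceeding the
    speed limit adds a large penalty, so the optimum respects the limit.
    """
    cost = w_energy * step.energy_kwh + w_time * step.elapsed_s
    overspeed = max(0.0, step.speed_mps - step.speed_limit_mps)
    cost += w_limit * overspeed
    return -cost


if __name__ == "__main__":
    # One step of a hypothetical trajectory that slightly exceeds the limit.
    print(multi_objective_reward(StepOutcome(2.4, 5.0, 23.0, 22.2)))
```

A weighted sum is only one way to scalarize multiple objectives; it is used here purely to keep the sketch short.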
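The excerpt under result 20 describes local optimization of a trait trajectory: the curve is expressed as a weighted sum of basis functions, and the weights are adjusted until fitness stops improving. The sketch below illustrates that loop with placeholder Gaussian bases, a toy fitness function, and SciPy's Nelder-Mead local optimizer; none of these specifics come from the cited study.

```python
# Toy sketch of local optimization over basis-function weights.
# The Gaussian bases, the fitness definition, and the optimizer choice are
# placeholders for illustration and do not come from the cited study.
import numpy as np
from scipy.optimize import minimize

t = np.linspace(0.0, 20.0, 200)            # time grid (e.g. days post-infection)
centers = np.linspace(0.0, 20.0, 6)
bases = np.exp(-0.5 * ((t[:, None] - centers) / 2.5) ** 2)  # fixed basis functions


def curve(weights: np.ndarray) -> np.ndarray:
    """Spline-like trajectory: a weighted sum of the fixed basis functions."""
    return bases @ weights


def fitness(weights: np.ndarray) -> float:
    """Toy fitness: reward saturating output, penalise extreme weights."""
    c = curve(weights)
    dt = t[1] - t[0]
    return float(np.sum(np.tanh(c)) * dt - 0.05 * np.sum(weights ** 2))


# Arbitrary starting spline (uniform weights), then local search on the weights
# until the optimizer can no longer improve the fitness.
w0 = np.full(centers.size, 0.5)
result = minimize(lambda w: -fitness(w), w0, method="Nelder-Mead")

print("fitness at start:        ", fitness(w0))
print("fitness at local optimum:", fitness(result.x))
```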