Search alternatives:
based optimization » whale optimization
binary case » binary mask, binary image, primary case
case based » made based, game based, rate based
4. MSE for ILSTM algorithm in binary classification.
Published 2023: “…The ILSTM was then used to build an efficient intrusion detection system for binary and multi-class classification cases. The proposed algorithm has two phases: phase one involves training a conventional LSTM network to get initial weights, and phase two involves using the hybrid swarm algorithms, CBOA and PSO, to optimize the weights of LSTM to improve the accuracy. …”
6. Comparison of optimization algorithms.
Published 2024: “…Subsequently, the GWO algorithm is used to optimize the number and the nodes of the hidden layer in the Dual-channel MLP-Attention model. …”
16. Algorithm comparison.
Published 2024: “…Subsequently, the GWO algorithm is used to optimize the number and the nodes of the hidden layer in the Dual-channel MLP-Attention model. …”
17. MEC three-layer architecture.
Published 2023: “…Based on this, combined with the characteristics of deep reinforcement learning, this paper investigates a computation offloading optimization scheme for the perception layer. The algorithm can adaptively adjust the computational task offloading policy of IoT terminals according to the network changes in the perception layer. …”
19. RGBD input layer.
Published 2024: “…Our proposed algorithm identifies the optimal layer replication configuration for the model. …”
20. Process of GWO optimization.
Published 2024: “…Subsequently, the GWO algorithm is used to optimize the number and the nodes of the hidden layer in the Dual-channel MLP-Attention model. …”
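Several of the results above use the Grey Wolf Optimizer (GWO) to tune the hidden-layer size of a network. As a rough illustration of that idea only (a minimal 1-D sketch; the fitness function, bounds, and the 64-node optimum below are hypothetical stand-ins, not taken from any of the cited papers):

```python
import random

def gwo_minimize(fitness, lo, hi, n_wolves=8, n_iters=30, seed=0):
    """Minimal 1-D Grey Wolf Optimizer: each wolf moves toward the three
    best solutions in the current pack (alpha, beta, delta)."""
    rng = random.Random(seed)
    wolves = [rng.uniform(lo, hi) for _ in range(n_wolves)]
    for t in range(n_iters):
        ranked = sorted(wolves, key=fitness)
        alpha, beta, delta = ranked[0], ranked[1], ranked[2]
        a = 2 - 2 * t / n_iters  # exploration coefficient decays 2 -> 0
        new_pack = []
        for x in wolves:
            candidates = []
            for leader in (alpha, beta, delta):
                r1, r2 = rng.random(), rng.random()
                A = 2 * a * r1 - a          # large |A| early: explore
                C = 2 * r2                  # random emphasis on the leader
                d = abs(C * leader - x)     # distance to the leader
                candidates.append(leader - A * d)
            x_new = sum(candidates) / 3.0   # average pull of the three leaders
            new_pack.append(min(hi, max(lo, x_new)))  # clamp to search bounds
        wolves = new_pack
    return min(wolves, key=fitness)

# Toy stand-in for "validation loss as a function of hidden-layer size",
# with a hypothetical optimum at 64 nodes.
best = gwo_minimize(lambda n: (round(n) - 64) ** 2, lo=4, hi=256)
print(round(best))  # typically lands near 64
```

In the papers listed here the fitness would instead be a trained model's validation error, and the search would cover both the number of hidden layers and the nodes per layer rather than a single scalar.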