Showing 101 - 120 results of 290 for search '(( final pre processing optimization algorithm ) OR ( binary a based optimization algorithm ))*', query time: 1.16s
  1. 101

    <i>hi</i>PRS algorithm process flow. by Michela C. Massi (14599915)

    Published 2023
    “…<b>(B)</b> Focusing on the positive class only, the algorithm exploits FIM (<i>apriori</i> algorithm) to build a list of candidate interactions of any desired order, retaining those that have an empirical frequency above a given threshold <i>δ</i>. …”
  2. 102
  3. 103

    Triplet Matching for Estimating Causal Effects With Three Treatment Arms: A Comparative Study of Mortality by Trauma Center Level by Giovanni Nattino (561797)

    Published 2021
    “…We implement the evidence factors method for binary outcomes, which includes a randomization-based testing strategy and a sensitivity analysis for hidden bias in three-group matched designs. …”
  4. 104

    Multicategory Angle-Based Learning for Estimating Optimal Dynamic Treatment Regimes With Censored Data by Fei Xue (24567)

    Published 2021
    “…In this article, we develop a novel angle-based approach to search for the optimal DTR under a multicategory treatment framework for survival data. …”
  5. 105

    Comparison with existing SOTA techniques. by Yasir Khan Jadoon (21433231)

    Published 2025
    “…The proposed architecture is trained on the selected datasets, and the hyperparameters are chosen using the particle swarm optimization (PSO) algorithm. The trained model is employed in the testing phase for feature extraction from the self-attention layer; the extracted features are passed to the shallow wide neural network classifier for the final classification. …”
  6. 106

    Proposed inverted residual parallel block. by Yasir Khan Jadoon (21433231)

    Published 2025
  7. 107

    Inverted residual bottleneck block. by Yasir Khan Jadoon (21433231)

    Published 2025
  8. 108

    Proposed architecture testing phase. by Yasir Khan Jadoon (21433231)

    Published 2025
  9. 109

    Sample classes from the HMDB51 dataset. by Yasir Khan Jadoon (21433231)

    Published 2025
  10. 110

    Sample classes from the UCF101 dataset [40]. by Yasir Khan Jadoon (21433231)

    Published 2025
  11. 111

    Self-attention module for feature learning. by Yasir Khan Jadoon (21433231)

    Published 2025
  12. 112

    Residual behavior. by Yasir Khan Jadoon (21433231)

    Published 2025
  13. 113

    Flowchart for developed MAOA. by P. M. Vijayan (17095147)

    Published 2023
    “…The resulting processed data is fed into two models: (i) an Autoencoder with a Deep Belief Network (DBN), in which the optimal features are selected from the Autoencoder with the aid of the Modified Archimedes Optimization Algorithm (MAOA). …”
  14. 114

    Representation of autoencoder with DBN. by P. M. Vijayan (17095147)

    Published 2023
  15. 115

    Threats in MQTT set. by P. M. Vijayan (17095147)

    Published 2023
  16. 116

    Features of MQTT set. by P. M. Vijayan (17095147)

    Published 2023
  17. 117

    Architectural view of smart home atmosphere. by P. M. Vijayan (17095147)

    Published 2023
  18. 118
  19. 119

    SHAP bar plot. by Meng Cao (105914)

    Published 2025
    “…Objective: This study aimed to develop a risk prediction model for CI in CKD patients using machine learning algorithms, with the objective of enhancing risk prediction accuracy and facilitating early intervention.…”
  20. 120

    Sample screening flowchart. by Meng Cao (105914)

    Published 2025