Search alternatives:
processing optimization » process optimization (expanded search), process optimisation (expanded search), routing optimization (expanded search)
whale optimization » swarm optimization (expanded search)
speech processing » pre processing (expanded search)
-
1
Structure of the Kuhn-Munkres Algorithm.
Published in 2025: "…To achieve this, publicly available datasets—Carnegie Mellon University Multimodal Opinion Sentiment Intensity (CMU-MOSI) and Interactive Emotional Dyadic Motion Capture (IEMOCAP)—are employed to collect speech, visual, and textual data relevant to multimodal interaction scenarios. …"
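As context for the first result, the Kuhn-Munkres (Hungarian) algorithm solves the minimum-cost assignment problem in O(n³). A minimal brute-force reference sketch of the same problem is shown below, assuming a small hypothetical cost matrix; it illustrates what the algorithm computes, not its internal structure.

```python
from itertools import permutations

def assignment_bruteforce(cost):
    """Exhaustive solver for the assignment problem that the
    Kuhn-Munkres (Hungarian) algorithm solves in O(n^3).
    Returns (min_cost, assignment), where assignment[i] is the
    column matched to row i."""
    n = len(cost)
    best_cost, best_perm = float("inf"), None
    for perm in permutations(range(n)):
        c = sum(cost[i][perm[i]] for i in range(n))
        if c < best_cost:
            best_cost, best_perm = c, perm
    return best_cost, list(best_perm)

# Hypothetical 3x3 cost matrix (illustrative values only)
cost = [[4, 1, 3],
        [2, 0, 5],
        [3, 2, 2]]
print(assignment_bruteforce(cost))  # → (5, [1, 0, 2])
```

Brute force enumerates all n! permutations, so it is only feasible for toy instances; Kuhn-Munkres reaches the same optimum in polynomial time.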
-
2
Hyperparameters of the LSTM Model.
Published in 2025: "…The capacity to confront and overcome this obstacle is where machine learning and metaheuristic algorithms shine. This study introduces the Adaptive Dynamic Particle Swarm Optimization enhanced with the Guided Whale Optimization Algorithm (AD-PSO-Guided WOA) for rainfall prediction. …"
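The AD-PSO-Guided WOA named in this result layers adaptive dynamics and whale-inspired guidance on top of plain particle swarm optimization. A minimal sketch of the basic PSO loop it builds on, minimizing the standard sphere function, is below; all parameter values are illustrative assumptions, not the paper's settings.

```python
import random

def pso_minimize(f, dim=2, n_particles=20, iters=200,
                 w=0.7, c1=1.5, c2=1.5, bounds=(-5.0, 5.0), seed=0):
    """Plain particle swarm optimization: each particle moves under
    inertia (w) plus attraction to its personal best (c1) and the
    swarm's global best (c2). Returns (best_position, best_value)."""
    rng = random.Random(seed)
    lo, hi = bounds
    pos = [[rng.uniform(lo, hi) for _ in range(dim)] for _ in range(n_particles)]
    vel = [[0.0] * dim for _ in range(n_particles)]
    pbest = [p[:] for p in pos]
    pbest_val = [f(p) for p in pos]
    g = min(range(n_particles), key=lambda i: pbest_val[i])
    gbest, gbest_val = pbest[g][:], pbest_val[g]
    for _ in range(iters):
        for i in range(n_particles):
            for d in range(dim):
                r1, r2 = rng.random(), rng.random()
                vel[i][d] = (w * vel[i][d]
                             + c1 * r1 * (pbest[i][d] - pos[i][d])
                             + c2 * r2 * (gbest[d] - pos[i][d]))
                # Clamp to the search bounds
                pos[i][d] = min(hi, max(lo, pos[i][d] + vel[i][d]))
            val = f(pos[i])
            if val < pbest_val[i]:
                pbest[i], pbest_val[i] = pos[i][:], val
                if val < gbest_val:
                    gbest, gbest_val = pos[i][:], val
    return gbest, gbest_val

# Sphere function: global minimum 0 at the origin
best, val = pso_minimize(lambda x: sum(v * v for v in x))
```

The paper's variant replaces parts of this update with adaptive coefficients and a whale-optimization guidance step; this sketch shows only the shared baseline.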
-
3
The AD-PSO-Guided WOA LSTM framework.
Published in 2025: "…The capacity to confront and overcome this obstacle is where machine learning and metaheuristic algorithms shine. This study introduces the Adaptive Dynamic Particle Swarm Optimization enhanced with the Guided Whale Optimization Algorithm (AD-PSO-Guided WOA) for rainfall prediction. …"
-
4
Prediction results of individual models.
Published in 2025: "…The capacity to confront and overcome this obstacle is where machine learning and metaheuristic algorithms shine. This study introduces the Adaptive Dynamic Particle Swarm Optimization enhanced with the Guided Whale Optimization Algorithm (AD-PSO-Guided WOA) for rainfall prediction. …"
-
5
Hyperparameter settings.
Published in 2025: "…To achieve this, publicly available datasets—Carnegie Mellon University Multimodal Opinion Sentiment Intensity (CMU-MOSI) and Interactive Emotional Dyadic Motion Capture (IEMOCAP)—are employed to collect speech, visual, and textual data relevant to multimodal interaction scenarios. …"
-
6
Initial weight values and correlation thresholds.
Published in 2025: "…To achieve this, publicly available datasets—Carnegie Mellon University Multimodal Opinion Sentiment Intensity (CMU-MOSI) and Interactive Emotional Dyadic Motion Capture (IEMOCAP)—are employed to collect speech, visual, and textual data relevant to multimodal interaction scenarios. …"
-
7
Ablation experiment results comparison.
Published in 2025: "…To achieve this, publicly available datasets—Carnegie Mellon University Multimodal Opinion Sentiment Intensity (CMU-MOSI) and Interactive Emotional Dyadic Motion Capture (IEMOCAP)—are employed to collect speech, visual, and textual data relevant to multimodal interaction scenarios. …"
-
8
Adjustment step size.
Published in 2025: "…To achieve this, publicly available datasets—Carnegie Mellon University Multimodal Opinion Sentiment Intensity (CMU-MOSI) and Interactive Emotional Dyadic Motion Capture (IEMOCAP)—are employed to collect speech, visual, and textual data relevant to multimodal interaction scenarios. …"
-
9
Curve of data size vs. running time.
Published in 2025: "…To achieve this, publicly available datasets—Carnegie Mellon University Multimodal Opinion Sentiment Intensity (CMU-MOSI) and Interactive Emotional Dyadic Motion Capture (IEMOCAP)—are employed to collect speech, visual, and textual data relevant to multimodal interaction scenarios. …"
-
10
Data (3).
Published in 2025: "…To achieve this, publicly available datasets—Carnegie Mellon University Multimodal Opinion Sentiment Intensity (CMU-MOSI) and Interactive Emotional Dyadic Motion Capture (IEMOCAP)—are employed to collect speech, visual, and textual data relevant to multimodal interaction scenarios. …"
-
11
Table_1_Brief Sensory Training Narrows the Temporal Binding Window and Enhances Long-Term Multimodal Speech Perception.DOCX
Published in 2019: "…We also investigated the influence of the TBW on speech intelligibility, where participants had to integrate auditory and visual speech information from a videotaped speaker. …"