A novel virtual patient approach for cross-patient multimodal fusion in enhanced breast cancer detection
<p>Multimodal medical imaging combining conventional imaging modalities such as mammography, ultrasound, and histopathology has shown significant promise for improving breast cancer detection accuracy. However, clinical implementation faces substantial challenges due to incomplete patient-matched...
Saved in:
| Main Author: | Younes Akbari |
|---|---|
| Other Authors: | Faseela Abdullakutty, Somaya Al-Maadeed, Rafif Al Saady, Ahmed Bouridane, Rifat Hamoudi |
| Published in: | Computerized Medical Imaging and Graphics (2025) |
| Subjects: | Breast cancer detection, Multimodal fusion, Cross-patient learning, Virtual patients, Medical imaging |
| _version_ | 1864513531549843456 |
|---|---|
| author | Younes Akbari (16303286) |
| author2 | Faseela Abdullakutty (22564814) Somaya Al-Maadeed (5178131) Rafif Al Saady (22921217) Ahmed Bouridane (2270131) Rifat Hamoudi (523339) |
| author2_role | author author author author author |
| author_facet | Younes Akbari (16303286) Faseela Abdullakutty (22564814) Somaya Al-Maadeed (5178131) Rafif Al Saady (22921217) Ahmed Bouridane (2270131) Rifat Hamoudi (523339) |
| author_role | author |
| dc.creator.none.fl_str_mv | Younes Akbari (16303286) Faseela Abdullakutty (22564814) Somaya Al-Maadeed (5178131) Rafif Al Saady (22921217) Ahmed Bouridane (2270131) Rifat Hamoudi (523339) |
| dc.date.none.fl_str_mv | 2025-12-08T12:00:00Z |
| dc.identifier.none.fl_str_mv | 10.1016/j.compmedimag.2025.102687 |
| dc.relation.none.fl_str_mv | https://figshare.com/articles/journal_contribution/A_novel_virtual_patient_approach_for_cross-patient_multimodal_fusion_in_enhanced_breast_cancer_detection/30962702 |
| dc.rights.none.fl_str_mv | CC BY 4.0 info:eu-repo/semantics/openAccess |
| dc.subject.none.fl_str_mv | Biomedical and clinical sciences Oncology and carcinogenesis Engineering Biomedical engineering Health sciences Health services and systems Information and computing sciences Artificial intelligence Breast cancer detection Multimodal fusion Cross-patient learning Virtual patients Medical imaging |
| dc.title.none.fl_str_mv | A novel virtual patient approach for cross-patient multimodal fusion in enhanced breast cancer detection |
| dc.type.none.fl_str_mv | Text Journal contribution info:eu-repo/semantics/publishedVersion text contribution to journal |
| description | <p>Multimodal medical imaging combining conventional imaging modalities such as mammography, ultrasound, and histopathology has shown significant promise for improving breast cancer detection accuracy. However, clinical implementation faces substantial challenges due to incomplete patient-matched multimodal datasets and resource constraints. Traditional approaches require complete imaging workups from individual patients, limiting their practical applicability. This study investigates whether cross-patient multimodal fusion, combining imaging modalities from different patients, can provide additional diagnostic information beyond single-modality approaches. We hypothesize that leveraging complementary information from heterogeneous patient populations enhances cancer detection performance, even when modalities originate from separate individuals. We developed a novel virtual patient framework that systematically combines imaging modalities across different patients based on quality-driven selection strategies. Two training paradigms were evaluated: a Fixed scenario with 1:1:1 cross-patient combinations (∼250 virtual patients), and a Combinatorial scenario with systematic companion selection (∼20,000 virtual patients). Multiple fusion architectures (concatenation, attention, and averaging) were assessed, and we designed a novel co-attention mechanism that enables sophisticated cross-modal interaction through learned attention weights. These fusion networks were evaluated using histopathology (BCSS), mammography, and ultrasound (BUSI) datasets. External validation using the ICIAR2018 BACH Challenge dataset as an alternative histopathology source demonstrated the generalizability of our approach, achieving promising accuracy despite differences in staining protocols and acquisition procedures across institutions. All models were evaluated on consistent fixed test sets to ensure fair comparison.
This dataset is well-suited for multiple breast cancer analysis tasks, including detection, segmentation, and Explainable Artificial Intelligence (XAI) applications. Cross-patient multimodal fusion demonstrated significant improvements over single-modality approaches. The best single modality achieved 75.36% accuracy (mammography), while the optimal fusion combination (histopathology-mammography) reached 97.10% accuracy, representing a 21.74 percentage point improvement. Comprehensive quantitative validation through silhouette analysis (score: 0.894) confirms that the observed performance improvements reflect genuine feature space structure rather than visualization artifacts. Cross-patient multimodal fusion demonstrates significant potential for enhancing breast cancer detection, particularly addressing real-world scenarios where complete patient-matched multimodal data is unavailable. This approach represents a paradigm shift toward leveraging heterogeneous information sources for improved diagnostic performance.</p><h2>Other Information</h2> <p> Published in: Computerized Medical Imaging and Graphics<br> License: <a href="http://creativecommons.org/licenses/by/4.0/" target="_blank">http://creativecommons.org/licenses/by/4.0/</a><br>See article on publisher's website: <a href="https://dx.doi.org/10.1016/j.compmedimag.2025.102687" target="_blank">https://dx.doi.org/10.1016/j.compmedimag.2025.102687</a></p> |
| eu_rights_str_mv | openAccess |
| id | Manara2_f9b8eabeaeceb692914b9bf0c2ccbfab |
| identifier_str_mv | 10.1016/j.compmedimag.2025.102687 |
| network_acronym_str | Manara2 |
| network_name_str | Manara2 |
| oai_identifier_str | oai:figshare.com:article/30962702 |
| publishDate | 2025 |
| repository.mail.fl_str_mv | |
| repository.name.fl_str_mv | |
| repository_id_str | |
| rights_invalid_str_mv | CC BY 4.0 |
| spellingShingle | A novel virtual patient approach for cross-patient multimodal fusion in enhanced breast cancer detection Younes Akbari (16303286) Biomedical and clinical sciences Oncology and carcinogenesis Engineering Biomedical engineering Health sciences Health services and systems Information and computing sciences Artificial intelligence Breast cancer detection Multimodal fusion Cross-patient learning Virtual patients Medical imaging |
| status_str | publishedVersion |
| title | A novel virtual patient approach for cross-patient multimodal fusion in enhanced breast cancer detection |
| title_full | A novel virtual patient approach for cross-patient multimodal fusion in enhanced breast cancer detection |
| title_fullStr | A novel virtual patient approach for cross-patient multimodal fusion in enhanced breast cancer detection |
| title_full_unstemmed | A novel virtual patient approach for cross-patient multimodal fusion in enhanced breast cancer detection |
| title_short | A novel virtual patient approach for cross-patient multimodal fusion in enhanced breast cancer detection |
| title_sort | A novel virtual patient approach for cross-patient multimodal fusion in enhanced breast cancer detection |
| topic | Biomedical and clinical sciences Oncology and carcinogenesis Engineering Biomedical engineering Health sciences Health services and systems Information and computing sciences Artificial intelligence Breast cancer detection Multimodal fusion Cross-patient learning Virtual patients Medical imaging |
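The abstract above names three fusion architectures (concatenation, attention, and averaging), a co-attention mechanism, and a silhouette analysis used to confirm feature-space separation. As a rough illustration only, not the authors' implementation, the sketch below shows toy versions of these ideas on NumPy feature vectors; all function names and the simplified co-attention weighting are assumptions made for this example.

```python
import numpy as np

def softmax(x):
    # Numerically stable softmax over a 1-D array.
    e = np.exp(x - x.max())
    return e / e.sum()

def fuse_average(feats):
    # Averaging fusion: element-wise mean of same-length modality features.
    return np.mean(feats, axis=0)

def fuse_concat(feats):
    # Concatenation fusion: join modality features into one long vector.
    return np.concatenate(feats)

def fuse_co_attention(a, b):
    # Toy co-attention (an assumption, not the paper's exact mechanism):
    # pairwise affinities between two modalities yield softmax weights
    # that re-weight each feature vector before averaging them.
    affinity = np.outer(a, b)            # (len(a), len(b)) affinity matrix
    w_a = softmax(affinity.sum(axis=1))  # attention over a's features
    w_b = softmax(affinity.sum(axis=0))  # attention over b's features
    return 0.5 * (a * w_a + b * w_b)

def mean_silhouette(X, labels):
    # Mean silhouette coefficient: (b - a) / max(a, b) per sample, where
    # a is the mean intra-cluster distance and b the mean distance to the
    # other cluster (two-cluster case, for simplicity).
    scores = []
    for i in range(len(X)):
        same = [np.linalg.norm(X[i] - X[j]) for j in range(len(X))
                if labels[j] == labels[i] and j != i]
        other = [np.linalg.norm(X[i] - X[j]) for j in range(len(X))
                 if labels[j] != labels[i]]
        a, b = np.mean(same), np.mean(other)
        scores.append((b - a) / max(a, b))
    return float(np.mean(scores))

# Demo: fuse two hypothetical 3-D modality features, then score a toy,
# well-separated feature space with the silhouette coefficient.
histo = np.array([1.0, 2.0, 3.0])   # stand-in histopathology features
mammo = np.array([3.0, 2.0, 1.0])   # stand-in mammography features
avg = fuse_average([histo, mammo])
cat = fuse_concat([histo, mammo])
coatt = fuse_co_attention(histo, mammo)
sil = mean_silhouette(
    np.array([[0.0, 0.0], [0.0, 0.1], [5.0, 5.0], [5.0, 5.1]]),
    [0, 0, 1, 1],
)
```

In this reading, a well-clustered fused feature space pushes the mean silhouette toward 1, which is the sense in which the reported score of 0.894 supports genuine class separation rather than a visualization artifact.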