MMA estimation on multi-modal images in the SEN12MS dataset.
| Main Author: | Yide Di (20969124) (author) |
|---|---|
| Other Authors: | Yun Liao (160524) (author), Hao Zhou (136535) (author), Kaijun Zhu (18283913) (author), Qing Duan (541846) (author), Junhui Liu (2063140) (author), Mingyu Lu (2333083) (author) |
| Published: | 2025 |
Similar Items
- MMA estimation on different multi-modal image datasets. The modalities of the images: (1) RGB-NIR, (2) Optical-SAR, (3) Optical-SAR, (4) T1-T2, (5) RGB-Depth, (6) UV-Green.
  by: Yide Di (20969124)
  Published: (2025)
- Feature matching of different-modal images: (a) MatchosNet, (b) LoFTR, (c) FeMIP, and (d) UFM. The datasets from top to bottom are: the SEN12MS dataset, the RGB-NIR Scene dataset, the WHU-OPT-SAR dataset, the Optical-SAR dataset, the BrainWeb dataset, the NYU-Depth V2 dataset, and the UV-Green dataset. The images of the SEN12MS, RGB-NIR Scene, and WHU-OPT-SAR datasets are rotated, and the images of the Optical-SAR and NYU-Depth V2 datasets are cropped. Matches with less than 1 pixel of error are represented by lines.
  by: Yide Di (20969124)
  Published: (2025)
- Illustration of feature matching with UFM. When working with specific data, the pre-trained backbone is frozen, and only the corresponding modal assistants need to be fine-tuned for feature matching. The multi-modal assistants comprise both same-modal assistants and different-modal matching assistants.
  by: Yide Di (20969124)
  Published: (2025)
- Examples of image pairs of different modalities. (a) The SEN12MS dataset: a1. Optical, a2. SAR, a3. NIR, a4. SWIR. (b) The RGB-NIR Scene dataset: b1. RGB, b2. Near-Infrared. (c) The WHU-OPT-SAR dataset: c1. Optical, c2. SAR. (d) The Optical-SAR dataset: d1. Optical, d2. SAR. (e) The BrainWeb dataset: e1. T1, e2. T2. (f) The NYU-Depth V2 dataset: f1. Depth, f2. RGB. (g) The UV/Green Image dataset: g1. UV, g2. Green.
  by: Yide Di (20969124)
  Published: (2025)
- Fine-tuning on same-modal feature matching tasks. The X-FFN and Y-FFN represent the assistants of any two kinds of pre-trained different-modal images in the second stage of Fig. 4. The fine-tuning of the X-modal images and the fine-tuning of the Y-modal images are independent of each other.
  by: Yide Di (20969124)
  Published: (2025)