Issues with automatic and team-based QC.

Bibliographic Details
Main Author: Michael E. Kim
Other Authors: Chenyu Gao, Nancy R. Newlin, Gaurav Rudravaram, Aravind R. Krishnan, Karthik Ramadass, Praitayini Kanakaraj, Kurt G. Schilling, Blake E. Dewey, David A. Bennett, Sid O’Bryant, Robert C. Barber, Derek Archer, Timothy J. Hohman, Shunxing Bao, Zhiyuan Li, Bennett A. Landman, Nazirah Mohd Khairi
Published: 2025
Description
Summary: When maintaining large neuroimaging datasets with multiple processing pipelines, shallow quality control (QC) processes that rely on derived metrics can fail to catch algorithmic failures. However, deep QC processes quickly become unscalable and inefficient as the amount of available data increases, because of the time required to visualize outputs en masse. For example, opening 50,000 T1w images one by one in an image viewer for deep QC would take roughly 70 hours (50,000 × 5 s ≈ 250,000 s) if loading each image in and out of the viewer takes five seconds. Team-driven efforts to alleviate such large time costs come with additional challenges, owing to inconsistencies in how QC is reported and performed.
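As a rough illustration of the scaling argument in the summary, the sketch below reproduces the back-of-the-envelope time estimate. It assumes a fixed per-image load time; the 50,000-image and five-second figures come from the summary, while the function and variable names are hypothetical.

    def deep_qc_hours(n_images: int, seconds_per_image: float) -> float:
        """Estimate total reviewer time for manually opening each image once."""
        return n_images * seconds_per_image / 3600  # convert seconds to hours

    # Figures from the summary: 50,000 T1w images at ~5 s per load/unload cycle.
    print(f"{deep_qc_hours(50_000, 5.0):.1f} hours")  # -> 69.4 hours

The estimate scales linearly with dataset size, which is the crux of the summary's argument: doubling the archive doubles the manual review time, so visualization-based deep QC cannot keep pace with growing datasets without automation or shared tooling.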