The global COVID-19 pandemic has catalyzed significant expansion in artificial intelligence (AI) research within healthcare, highlighting the challenges humanity faces in managing large-scale health crises. In response, research teams worldwide have employed computer vision to predict COVID-19 presence and severity. Despite these efforts, the clinical utility of AI models remains limited, as they do not adequately address common errors made by radiologists: satisfaction of search, premature closure, and anchoring bias. Most AI research has focused on single-label classifiers and severity predictors, which we identify as problematic because they cannot prevent missed diagnoses or identify additional diseases. Radiologists can detect moderate and severe thoracic diseases, but certain conditions may not be visible on X-rays in patients with mild symptoms. Hence, the added value of these AI models in clinical settings is unclear. Moreover, accuracy metrics can be misleading without radiologist-annotated bounding boxes outlining pathological areas. We address this by employing the intersection over union (IoU) metric, a spatial metric that indicates the portion of the observed pathological area identified by the model. This research utilizes X-ray images from patients with healthy lungs or various lung diseases, and Microsoft Azure AutoML to develop a multi-output classification model. Our model mitigates known cognitive errors in radiology by indicating the number of diseases, naming them, and highlighting pathology locations. Finally, we propose a comprehensive diagnostic system incorporating medical imaging as one component. To create clinic-ready AI products, we emphasize the necessity of diverse, well-labeled data spanning various demographics.
Keywords: multi-label classification, Azure AutoML, computer vision, AI in healthcare, healthcare diagnostics.
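The IoU metric mentioned above compares a predicted region against a radiologist-annotated bounding box. As a minimal illustrative sketch (not the paper's implementation), assuming boxes are given as `(x1, y1, x2, y2)` corner coordinates, IoU is the area of overlap divided by the area covered by either box:

```python
def iou(box_a, box_b):
    """Intersection over union of two axis-aligned boxes (x1, y1, x2, y2)."""
    # Corners of the intersection rectangle
    x_left = max(box_a[0], box_b[0])
    y_top = max(box_a[1], box_b[1])
    x_right = min(box_a[2], box_b[2])
    y_bottom = min(box_a[3], box_b[3])

    # No overlap: intersection is empty
    if x_right <= x_left or y_bottom <= y_top:
        return 0.0

    intersection = (x_right - x_left) * (y_bottom - y_top)
    area_a = (box_a[2] - box_a[0]) * (box_a[3] - box_a[1])
    area_b = (box_b[2] - box_b[0]) * (box_b[3] - box_b[1])
    union = area_a + area_b - intersection
    return intersection / union
```

An IoU of 1.0 means the predicted box exactly matches the annotated pathological area, while 0.0 means the prediction misses it entirely; two equal-sized boxes offset by half their width yield an IoU of 1/3.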