# CheXT

Under development.

Official implementation of the papers:
- https://openaccess.thecvf.com/content/WACV2022/html/Han_Knowledge-Augmented_Contrastive_Learning_for_Abnormality_Classification_and_Localization_in_Chest_WACV_2022_paper.html
- https://arxiv.org/abs/2207.04394

# Motivation

Accurate classification and localization of abnormalities in chest X-rays play an important role in clinical diagnosis and treatment planning. Building a highly accurate predictive model for these tasks usually requires a large number of manually annotated labels and pixel regions (bounding boxes) of abnormalities. However, such annotations are expensive to acquire, especially the bounding boxes. Before the recent success of deep learning methods for automated chest X-ray analysis, practitioners used handcrafted radiomic features to quantitatively describe local patches of chest X-rays. However, extracting discriminative radiomic features also relies on accurate pathology localization, so we run into an intriguing "chicken-and-egg" problem.

Here, we apply contrastive learning in the chest X-ray domain to solve this problem. The key knob of our framework is a unique positive sampling approach tailored to chest X-rays, which seamlessly integrates radiomic features as a knowledge augmentation. Specifically, we first apply an image encoder (a ViT or a CNN-based model) to classify the chest X-rays and to generate image features. We then leverage Grad-CAM or self-attention maps to highlight the crucial (abnormal) regions of the chest X-rays, from which we extract radiomic features. The radiomic features are passed through another dedicated encoder to act as the positive sample for the image features generated from the same chest X-ray. In this way, our framework constitutes a feedback loop in which image and radiomic features mutually reinforce each other.
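To make the positive-sampling loop concrete, below is a minimal PyTorch sketch of the idea described above; it is an illustration, not the repository's implementation. The ResNet-18 backbone, the simplified Grad-CAM routine, the first-order statistics used as a stand-in for proper radiomic features, and all names and dimensions are assumptions made for this example.

```
# Sketch of knowledge-augmented positive sampling (illustrative only).
import torch
import torch.nn.functional as F
from torchvision.models import resnet18

image_encoder = resnet18(num_classes=14)  # 14 findings, as in NIH-CXR8

def grad_cam_mask(model, x, class_idx):
    """Coarse Grad-CAM saliency over the last conv block (simplified)."""
    feats = {}
    def hook(_, __, out):
        feats["a"] = out
        out.retain_grad()
    h = model.layer4.register_forward_hook(hook)
    logits = model(x)
    h.remove()
    logits[:, class_idx].sum().backward()
    a = feats["a"]                                 # (B, C, H, W) activations
    w = a.grad.mean(dim=(2, 3), keepdim=True)      # channel importance weights
    cam = F.relu((w * a).sum(1, keepdim=True))     # (B, 1, H, W) saliency
    cam = F.interpolate(cam, x.shape[-2:], mode="bilinear", align_corners=False)
    return cam / (cam.amax(dim=(2, 3), keepdim=True) + 1e-8)

def pseudo_radiomic_features(x, mask, thresh=0.5):
    """Stand-in for radiomic extraction: first-order stats of the salient region."""
    region = x.mean(1, keepdim=True)               # treat the CXR as grayscale
    m = (mask > thresh).float()
    area = m.sum(dim=(2, 3)).clamp_min(1.0)
    mean = (region * m).sum(dim=(2, 3)) / area
    var = (((region - mean[..., None, None]) ** 2) * m).sum(dim=(2, 3)) / area
    return torch.cat([mean, var, area / m[0].numel()], dim=1)  # (B, 3)

radiomic_encoder = torch.nn.Sequential(            # maps radiomic stats into
    torch.nn.Linear(3, 64), torch.nn.ReLU(),       # the image-feature space
    torch.nn.Linear(64, 512),
)

def info_nce(q, k, tau=0.07):
    """Radiomic embedding k[i] is the positive for image embedding q[i]."""
    q, k = F.normalize(q, dim=1), F.normalize(k, dim=1)
    logits = q @ k.t() / tau                       # (B, B); diagonal = positives
    return F.cross_entropy(logits, torch.arange(q.size(0)))

x = torch.randn(4, 3, 224, 224)                    # toy batch of chest X-rays
cam = grad_cam_mask(image_encoder, x, class_idx=0)
backbone = torch.nn.Sequential(*list(image_encoder.children())[:-1])
img_feat = torch.flatten(backbone(x), 1)           # (B, 512) image features
rad_feat = radiomic_encoder(pseudo_radiomic_features(x, cam.detach()))
loss = info_nce(img_feat, rad_feat)
```

In the full framework the radiomic branch would use proper radiomic descriptors rather than these first-order statistics, and the contrastive loss would be trained jointly with the classification objective; the sketch only shows how the Grad-CAM region turns each image into its own knowledge-augmented positive pair.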
# Framework

CNN-based framework.

ViT-based framework.
# Data

The NIH-CXR8 dataset can be downloaded from its official website: https://nihcc.app.box.com/v/ChestXray-NIHCC/folder/36938765345

# Environment

We recommend creating a virtual environment:

```
conda env create -f environment.yml
conda activate
```