9:15 - 10:15 EST
Oral presentations are assigned 15 minutes
Early detection of cancer, and breast cancer in particular, can have a positive impact on the survival rate of cancer patients. However, visual inspection of whole-slide images by expert pathologists is limited because it is error-prone, and there is a lack of skilled pathologists in many parts of the world. To overcome this limitation, many researchers have proposed deep-learning-driven approaches to detect breast cancer from histopathology images. However, these datasets are often highly imbalanced, as patches belonging to the cancerous category are few in comparison to those of healthy cells. Consequently, the performance of conventional Convolutional Neural Network (CNN) models drops drastically, particularly on the minority class, which is often the main target of detection. This paper proposes a class-balanced affinity loss function that can be injected at the output layer of any deep learning classifier to address imbalanced learning. In addition to treating the imbalance, the proposed loss builds uniform class prototypes to address the fine-grained classification challenge in histopathology datasets, which the conventional softmax loss cannot handle. We validate the loss function on two public-access datasets with different levels of imbalance, namely the Invasive Ductal Carcinoma (IDC) and Colorectal Cancer (CRC) datasets. In both cases, injecting the proposed loss function yields better performance, especially on the minority class. We also observe a better 2D feature projection in multi-class classification with the class-balanced loss, making it more apt to handle imbalanced fine-grained classification challenges.
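As an illustrative sketch only (not the authors' exact affinity loss, which additionally learns uniform class prototypes), the class-balanced re-weighting idea of Cui et al. (2019) can be injected at the output layer of a classifier like this, in PyTorch:

```python
# Sketch of class-balanced re-weighting (Cui et al., 2019) applied to
# cross-entropy; the paper's affinity loss adds prototype learning on top.
import torch
import torch.nn as nn

class ClassBalancedCrossEntropy(nn.Module):
    def __init__(self, samples_per_class, beta=0.999):
        super().__init__()
        counts = torch.tensor(samples_per_class, dtype=torch.float)
        # "Effective number" of samples per class: (1 - beta^n) / (1 - beta)
        effective_num = (1.0 - beta ** counts) / (1.0 - beta)
        weights = 1.0 / effective_num
        # Normalize so the weights sum to the number of classes
        weights = weights / weights.sum() * len(samples_per_class)
        self.register_buffer("weights", weights)

    def forward(self, logits, targets):
        return nn.functional.cross_entropy(logits, targets, weight=self.weights)

# Usage: an imbalanced 2-class problem, e.g. 9,500 healthy vs 500 cancerous patches
loss_fn = ClassBalancedCrossEntropy(samples_per_class=[9500, 500])
logits = torch.randn(8, 2)              # model outputs for a batch of 8 patches
targets = torch.randint(0, 2, (8,))
loss = loss_fn(logits, targets)
```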
Melanoma is one of the most dangerous and deadly cancers in the world. In this contribution, we propose a convolutional neural network architecture whose design is obtained through genetic algorithms. The aim is to find the best structure of a neural network to improve melanoma classification. The presented approach was evaluated in an experimental study conducted on a refined subset of images from ISIC, one of the most referenced datasets for melanoma classification. The genetic algorithm used for the convolutional neural network design lets the population evolve over successive generations toward optimal fitness. Convergence leads to the survival of the best individuals, which designate the network optimized for melanoma classification. Our hybrid approach to CNN design for melanoma detection reaches 94% accuracy, 90% sensitivity, 97% specificity, and 98% precision. The preliminary results suggest that the proposed method could improve melanoma classification by eliminating the need for user interaction and avoiding a priori network architecture selection.
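A minimal sketch of the genetic-algorithm loop described above, with a toy genome (depth, filters, kernel size) and a placeholder fitness function standing in for the validation accuracy of the CNN trained on the ISIC subset:

```python
# Toy GA over CNN hyperparameters; fitness() is a runnable stand-in for
# "train the described CNN and return its validation accuracy".
import random

GENE_SPACE = {"blocks": [2, 3, 4, 5], "filters": [16, 32, 64, 128], "kernel": [3, 5, 7]}

def random_genome():
    return {g: random.choice(v) for g, v in GENE_SPACE.items()}

def fitness(genome):
    # Placeholder score; in practice, build and train the CNN here.
    return genome["blocks"] * 0.1 + genome["filters"] / 256 + random.random() * 0.05

def crossover(a, b):
    return {g: random.choice([a[g], b[g]]) for g in GENE_SPACE}

def mutate(genome, rate=0.2):
    return {g: random.choice(GENE_SPACE[g]) if random.random() < rate else v
            for g, v in genome.items()}

population = [random_genome() for _ in range(10)]
for generation in range(20):
    population.sort(key=fitness, reverse=True)
    parents = population[:4]                      # elitist selection
    children = [mutate(crossover(random.choice(parents), random.choice(parents)))
                for _ in range(len(population) - len(parents))]
    population = parents + children

print("best architecture:", max(population, key=fitness))
```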
Early screening for breast cancer is an effective tool to detect tumors and decrease mortality among women. However, COVID restrictions made screening difficult in recent years due to a decrease in screening tests and the reduction and delay of routine procedures. This preliminary study investigates mass detection on the large-scale OMI-DB dataset under three Transfer Learning settings for early screening. We considered a subset of the OMI-DB dataset consisting of 6,000 cases, from which we extracted 3,525 images with masses acquired on Hologic Inc. equipment. This paper proposes to use the RetinaNet model with a ResNet50 backbone to detect tumors in Full-Field Digital Mammograms. The model was initialized with ImageNet weights, with COCO weights, and from scratch. We evaluated with the True Positive Rate (TPR) at False Positives Per Image (FPPI) metric, using Free-Response Receiver Operating Characteristic (FROC) curves to visualize the distribution of the detections. The proposed framework obtained 0.93 TPR at 0.84 FPPI with COCO weight initialization; ImageNet weights gave a comparable 0.93 TPR at 0.84 FPPI, and training from scratch yielded 0.84 TPR at 0.84 FPPI.
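For readers unfamiliar with the metric, a simplified sketch of how a TPR-at-FPPI operating point is read off an FROC curve; the matching of detections to ground-truth masses is assumed already done, whereas real FROC analysis also handles the localization criterion:

```python
# Sketch of FROC computation from scored detections (toy inputs).
import numpy as np

def froc_curve(scores, is_tp, n_ground_truth, n_images):
    order = np.argsort(-np.asarray(scores))        # sweep thresholds high -> low
    tp = np.cumsum(np.asarray(is_tp)[order])
    fp = np.cumsum(1 - np.asarray(is_tp)[order])
    tpr = tp / n_ground_truth                      # sensitivity at each threshold
    fppi = fp / n_images                           # false positives per image
    return fppi, tpr

def tpr_at_fppi(fppi, tpr, target=0.84):
    idx = np.searchsorted(fppi, target, side="right") - 1
    return tpr[max(idx, 0)]

# Toy example: 6 detections over 3 images containing 4 true masses
fppi, tpr = froc_curve(scores=[0.9, 0.8, 0.7, 0.6, 0.5, 0.4],
                       is_tp=[1, 1, 0, 1, 0, 1],
                       n_ground_truth=4, n_images=3)
print("TPR at 0.84 FPPI:", tpr_at_fppi(fppi, tpr, target=0.84))
```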
Deep learning models have become state-of-the-art in many areas, ranging from computer vision to marine and agriculture research. However, concerns have been raised regarding the transparency of their decisions, especially in the image domain. In this regard, Explainable Artificial Intelligence has been gaining popularity in recent years. The ProtoPNet model, which breaks down an image into prototypes and uses evidence gathered from the prototypes to classify an image, represents an appealing approach. Still, questions regarding its effectiveness arise when the application domain changes from real-world natural images to gray-scale medical images. This work explores the applicability of prototypical part learning in medical imaging by experimenting with ProtoPNet on a breast masses classification task. We evaluated the applicability of this approach on two aspects: classification capability and validity of the explanations. We searched for the model's optimal hyperparameter configuration with a random search. We trained the model in a five-fold CV supervised framework, with mammography images cropped around the lesions and ground-truth labels of benign/malignant masses. Then, we compared the performance metrics of ProtoPNet to those of the corresponding base architecture, ResNet18, trained under the same framework. In addition, an experienced radiologist provided a clinical viewpoint on the quality of the learned prototypes, the patch activations, and the global explanations. We achieved a Recall of 0.769 and an area under the receiver operating characteristic curve of 0.719 in our experiments. Although these results are not yet sufficient for clinical practice, the radiologist found ProtoPNet's explanations very intuitive, reporting a high level of satisfaction. Therefore, we believe that prototypical part learning offers a reasonable and promising trade-off between classification performance and the quality of the related explanations.
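For context, a condensed sketch of ProtoPNet's prototype layer (Chen et al., 2019), in which similarity to the closest patch of the convolutional feature map provides the evidence fed to the final classifier; shapes and prototype counts below are illustrative, not those used in the paper:

```python
# Minimal prototype layer: distance of each prototype to every spatial patch,
# min-pooled and mapped to a similarity score.
import torch
import torch.nn as nn

class PrototypeLayer(nn.Module):
    def __init__(self, n_prototypes=20, channels=512):
        super().__init__()
        self.prototypes = nn.Parameter(torch.rand(n_prototypes, channels, 1, 1))

    def forward(self, features):                   # features: (B, C, H, W)
        # Squared L2 distance between every prototype and every spatial patch
        dists = ((features.unsqueeze(1) - self.prototypes.unsqueeze(0)) ** 2).sum(2)
        min_dists = dists.flatten(2).min(-1).values    # (B, n_prototypes)
        # Small distance -> high similarity (ProtoPNet's log activation)
        return torch.log((min_dists + 1) / (min_dists + 1e-4))

proto = PrototypeLayer()
feats = torch.randn(2, 512, 7, 7)                  # backbone output, e.g. ResNet18
evidence = proto(feats)                            # (2, 20), input to a linear head
```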
10:15 - 10:30 EST
10:30 - 11:30 EST
Oral presentations are assigned 15 minutes
Accurate remote pulse rate measurement from RGB face videos has gained considerable attention in recent years, since it allows non-invasive, contactless monitoring of a subject's heart rate, useful in numerous potential applications. There is now a global trend toward monitoring e-health parameters without physical devices, enabling at-home daily monitoring and telehealth. This paper includes a comprehensive state-of-the-art review of remote heart rate estimation from face images. We extensively tested a new framework to better understand several open questions in the domain: which areas of the face are the most relevant, how to manage the video color components, and what performance can be reached on a relevant public dataset with an optimal and reproducible neural network. From this study, we extract key elements to design an optimal, up-to-date, and reproducible framework that can serve as a baseline for accurately estimating the heart rate of a human subject, in particular from the cheek area using the green (G) channel of an RGB video.
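A minimal sketch of such a baseline: average the green channel over a cheek region of interest frame by frame, then recover the pulse rate as the dominant spectral frequency. The ROI coordinates are hypothetical and the video is synthetic, with a fake 72-bpm pulse injected for demonstration:

```python
# Toy green-channel rPPG pipeline on synthetic video.
import numpy as np

fps = 30
frames = np.random.rand(300, 480, 640, 3)        # 10 s of synthetic RGB video
t = np.arange(300) / fps
frames[..., 1] += 0.01 * np.sin(2 * np.pi * 1.2 * t)[:, None, None]  # 1.2 Hz pulse

cheek = frames[:, 200:260, 150:230, 1]           # hypothetical cheek ROI, G channel
signal = cheek.mean(axis=(1, 2))
signal = signal - signal.mean()                  # remove DC component

spectrum = np.abs(np.fft.rfft(signal))
freqs = np.fft.rfftfreq(len(signal), d=1 / fps)
band = (freqs >= 0.7) & (freqs <= 4.0)           # plausible heart rates: 42-240 bpm
hr_bpm = 60 * freqs[band][np.argmax(spectrum[band])]
print(f"estimated heart rate: {hr_bpm:.1f} bpm")
```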
Bowel preparation is considered a critical step in colonoscopy. Manual bowel preparation assessment is time-consuming and prone to human errors and biases; automatic assessment using machine/deep learning is a better and more efficient alternative. Most of the relevant literature has focused on achieving high validation accuracy on private, hand-picked datasets that do not reflect real-world conditions. Furthermore, treating a video dataset as a collection of individual frames may produce overestimated results: a video contains nearly identical consecutive frames, so dividing them between training and validation sets yields two similarly distributed datasets. Using the public Nerthus dataset, we show empirically a significant drop in performance when a video dataset is treated as a collection of videos (reflecting the real environment/context) instead of a collection of individual frames. We propose a model that utilizes both sequence and non-sequence (spatial) information within videos. Under 2-fold cross-validation, the proposed model achieved 76% validation accuracy on average, whereas state-of-the-art models achieved 66%-71% on average.
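The evaluation pitfall can be illustrated with scikit-learn: a frame-level split leaks near-duplicate frames of the same video across folds, while grouping by video does not. Video IDs, features, and labels below are synthetic stand-ins for Nerthus:

```python
# Frame-level vs video-level 2-fold splits on a toy video dataset.
import numpy as np
from sklearn.model_selection import KFold, GroupKFold

n_frames, n_videos = 1000, 20
video_id = np.repeat(np.arange(n_videos), n_frames // n_videos)
X, y = np.random.rand(n_frames, 8), np.random.randint(0, 4, n_frames)

# Frame-level split: frames of the same video land in both train and val
for train_idx, val_idx in KFold(n_splits=2, shuffle=True, random_state=0).split(X):
    leaked = np.intersect1d(video_id[train_idx], video_id[val_idx])
    print("frame-level split, videos shared across folds:", len(leaked))
    break

# Video-level split: no video is shared between train and val
for train_idx, val_idx in GroupKFold(n_splits=2).split(X, y, groups=video_id):
    leaked = np.intersect1d(video_id[train_idx], video_id[val_idx])
    print("video-level split, videos shared across folds:", len(leaked))
    break
```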
In this paper we define a deep learning architecture for automated segmentation of anatomical structures in Craniomaxillofacial (CMF) CT images that leverages the recent success of encoder-decoder models for semantic segmentation of medical images. The aim of this work is to propose an architecture capable of performing automated segmentation of the dental arch from Cranio-Maxillo-Facial CBCT scans, offering a fast, efficient, and reliable method of obtaining labeled images. A deep convolutional neural network exploiting the deep supervision mechanism is trained to extract feature maps at different levels of abstraction and produce an accurate segmentation of the input images. In particular, we propose a 3D CNN-based architecture for automated segmentation from CT scans, with a 3D encoder that learns to extract features at different levels of abstraction and sends them hierarchically to four 3D decoders that predict intermediate segmentation maps, which are then used to obtain the final detailed binary mask.
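A minimal sketch of such a deep-supervision objective: each decoder's intermediate map is upsampled to ground-truth resolution and contributes to the loss. Uniform weights are assumed here; the paper's exact weighting scheme may differ:

```python
# Deep-supervision loss over four multi-resolution decoder heads (toy shapes).
import torch
import torch.nn.functional as F

def deep_supervision_loss(decoder_outputs, target):
    """decoder_outputs: list of (B, 1, D, H, W) logits at different resolutions;
    target: (B, 1, D, H, W) binary mask at full resolution."""
    total = 0.0
    for logits in decoder_outputs:
        logits = F.interpolate(logits, size=target.shape[2:],
                               mode="trilinear", align_corners=False)
        total = total + F.binary_cross_entropy_with_logits(logits, target)
    return total / len(decoder_outputs)

# Toy example: four decoder heads at decreasing resolutions of a 32^3 volume
target = torch.randint(0, 2, (1, 1, 32, 32, 32)).float()
outputs = [torch.randn(1, 1, s, s, s) for s in (32, 16, 8, 4)]
loss = deep_supervision_loss(outputs, target)
```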
SARS-CoV-2 induced disease (Covid-19) was declared a pandemic by the World Health Organization in March 2020. It was confirmed as a severe disease that induces pneumonia followed by respiratory failure. Real-Time Polymerase Chain Reaction (RT-PCR) is the de-facto standard diagnostic test for Covid-19, but due to its cost and processing time it is impractical for large screening programs. By contrast, Chest X-Ray (CXR) imaging analysis offers a fast, sustainable, and effective approach for the early detection of Covid-19 disease. The proposed solution is a novel end-to-end intelligent system for CXR analysis embedding lung segmentation and an innovative 2D-to-3D augmentation approach, providing a robust classification of an input CXR as viral pneumonia (non-Covid-19), Covid-19 pneumonia, or healthy subject. Furthermore, to make the classification process robust, we implemented a compensation mechanism against adversarial attacks on CXR images using Jacobian regularization techniques. The collected performance results confirm the effectiveness of the designed pipeline.
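An illustrative sketch of Jacobian regularization as an adversarial-robustness term, in the spirit of Hoffman et al. (2019): alongside the task loss, the Frobenius norm of the input-output Jacobian is penalized, estimated here with a single random projection to stay cheap. The tiny CNN is a stand-in classifier, not the paper's model:

```python
# Cross-entropy plus a random-projection estimate of ||J||_F^2.
import torch
import torch.nn as nn

model = nn.Sequential(nn.Conv2d(1, 8, 3, padding=1), nn.ReLU(),
                      nn.AdaptiveAvgPool2d(1), nn.Flatten(), nn.Linear(8, 3))

def jacobian_penalty(x, logits):
    # Project the Jacobian onto a random unit vector v: ||J^T v||^2 estimates
    # ||J||_F^2 up to a constant factor.
    v = torch.randn_like(logits)
    v = v / v.norm(dim=1, keepdim=True)
    (grad,) = torch.autograd.grad((logits * v).sum(), x, create_graph=True)
    return grad.pow(2).sum(dim=(1, 2, 3)).mean()

x = torch.randn(4, 1, 64, 64, requires_grad=True)   # batch of CXR-like images
y = torch.randint(0, 3, (4,))
logits = model(x)
loss = nn.functional.cross_entropy(logits, y) + 0.01 * jacobian_penalty(x, logits)
loss.backward()
```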
11:30 - 11:45 EST
11:45 - 12:30 EST
Oral presentations are assigned 15 minutes
Conventional approaches to identifying depression are usually not scalable and require a high computational cost. Recent studies have demonstrated the great potential of social media posts for detecting mental disorders. Social media data are often unstructured and ill-formed and contain typos, making dictionary-based feature extraction methods inefficient. This study proposes a deep learning model, SERCNN, which detects depressed social media users based on user-generated text. We also trained SERCNN on posting histories of different lengths, measured in number of posts, to offer a different perspective on the amount of data required for early depression detection. This study demonstrates that SERCNN achieves a slightly higher detection accuracy, at 93.7%, than existing models. Our findings reveal the potential for optimized detection models that require less data for early depression detection.
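As a sketch of the general family SERCNN belongs to (its exact architecture is not reproduced here; vocabulary size and dimensions are placeholders), a CNN text classifier with max-over-time pooling over token embeddings looks like this:

```python
# Generic 1D-CNN text classifier with a binary depressed/not-depressed head.
import torch
import torch.nn as nn

class TextCNN(nn.Module):
    def __init__(self, vocab_size=20000, emb_dim=128, n_filters=100):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, emb_dim)
        self.convs = nn.ModuleList(
            [nn.Conv1d(emb_dim, n_filters, k) for k in (3, 4, 5)])
        self.head = nn.Linear(3 * n_filters, 2)

    def forward(self, tokens):                     # tokens: (B, seq_len)
        x = self.embed(tokens).transpose(1, 2)     # (B, emb_dim, seq_len)
        # Max-over-time pooling per filter size, then concatenate
        pooled = [conv(x).relu().max(dim=2).values for conv in self.convs]
        return self.head(torch.cat(pooled, dim=1))

model = TextCNN()
posts = torch.randint(0, 20000, (4, 200))          # 4 users, 200 tokens each
logits = model(posts)                              # (4, 2)
```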
Nowadays Artificial Intelligence (AI) is commonly used in many fields, such as image analysis, robotics, and text recognition, replacing traditional approaches to solving specific problems. In the field of medicine, in particular, AI has made great strides. Most of the medical data collected by healthcare systems are recorded in digital format, and the increased availability of these data has enabled a number of artificial intelligence applications. Specifically, machine learning can generate insights that improve the discovery of new therapeutic tools, support diagnostic decisions, and help in the rehabilitation process, to name only a few. The joint work of researchers and expert clinicians can play an important role in turning complex medical data (e.g., genomic data, online acquisitions of physicians, medical imagery) into actionable knowledge that ultimately improves patient care. In recent years, these topics have drawn clinical and machine learning research that has ultimately led to practical and successful applications in healthcare.
Supervised deep learning has been widely applied in medical imaging to detect multiple sclerosis. However, it is difficult to have perfectly annotated lesions in magnetic resonance images, due to the inherent difficulties with the annotation process performed by human experts. To provide a model that can completely ignore annotations, we propose an unsupervised anomaly detection approach. The method uses a convolutional autoencoder to learn a "normal brain" distribution and detects abnormalities as a deviation from the norm. Experiments conducted with the recently released OASIS-3 dataset and the challenging MSSEG dataset show the feasibility of the proposed method, as very encouraging sensitivity and specificity were achieved in the binary health/disease discrimination. Following the "normal brain" learning rule, the proposed approach can easily generalize to other types of brain diseases, due to its potential to detect arbitrary anomalies.
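A compact sketch of this scheme: a convolutional autoencoder is trained to reconstruct healthy data only, and at test time the reconstruction error serves as the anomaly score, with lesions expected to reconstruct poorly. Dimensions and data below are illustrative stand-ins for MRI slices:

```python
# Train on "normal" data only; score anomalies by reconstruction error.
import torch
import torch.nn as nn

autoencoder = nn.Sequential(                      # encoder
    nn.Conv2d(1, 16, 3, stride=2, padding=1), nn.ReLU(),
    nn.Conv2d(16, 32, 3, stride=2, padding=1), nn.ReLU(),
    # decoder
    nn.ConvTranspose2d(32, 16, 4, stride=2, padding=1), nn.ReLU(),
    nn.ConvTranspose2d(16, 1, 4, stride=2, padding=1), nn.Sigmoid())

optimizer = torch.optim.Adam(autoencoder.parameters(), lr=1e-3)
healthy = torch.rand(16, 1, 64, 64)               # stand-in for healthy brain slices
for _ in range(5):                                # learn the "normal brain" distribution
    recon = autoencoder(healthy)
    loss = nn.functional.mse_loss(recon, healthy)
    optimizer.zero_grad(); loss.backward(); optimizer.step()

test_slice = torch.rand(1, 1, 64, 64)
error_map = (autoencoder(test_slice) - test_slice) ** 2
anomaly_score = error_map.mean().item()           # deviation from the learned norm
```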
12:30 EST