
Few-Shot Learning Framework for Medical Image Analysis
This presentation explores a few-shot learning framework by Sohini Roychowdhury for reducing inter-observer variability in medical images, with a focus on Optical Coherence Tomography (OCT). The framework introduces models and algorithms for automated assessment of annotations, regional-proposal generation, and label selection, improving both accuracy and efficiency. The approach addresses the challenges posed by limited annotated data and variability in medical imaging, offering a path toward more reliable automated analysis.
Presentation Transcript
Few-Shot Learning Framework to Reduce Inter-observer Variability in Medical Images
Sohini Roychowdhury
Director of Machine Learning, FourthBrain.ai (formerly Volvo Cars USA)
Introduction
The medical imaging domain consistently suffers from the small-data challenge: generating large volumes of annotated data is costly, time-intensive, and subjective.
A need exists for automated assessment of labelled data using only a few samples (few-shot).
Optical Coherence Tomography (OCT) images suffer from high variability owing to the image acquisition system [1].
Existing works report aggregate OCT segmentation performance per image stack, but per-image subjectivity in the annotated labels remains.
Source: http://www.jbopticians.co.uk/about_us/topcon-3d-oct-scan/index.html
Key Contributions
1. Novel few-shot learning models for generating multiple regional proposals per image: global thresholding, parallel Echo State Networks (ESNs) [2], and a few-shot U-net [3].
2. A novel Target Label Selection Algorithm (TLSA) to select the best label automatically.
3. An end-to-end system design that receives stacks of OCT images and automatically selects the best manual label for 60-97% of images.
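The transcript does not include code, but the simplest of the three proposal models, global thresholding, can be illustrated with a short sketch. The threshold values and the [0, 1] intensity scaling below are illustrative assumptions, not parameters from the presentation.

```python
import numpy as np

def global_threshold_proposals(image, thresholds=(0.3, 0.5, 0.7)):
    """Return one binary regional proposal per global threshold.

    image: 2-D array of pixel intensities scaled to [0, 1]
    (the scaling and the threshold values are assumptions).
    """
    # Each threshold produces one candidate foreground mask.
    return [(image >= t).astype(np.uint8) for t in thresholds]
```

Each threshold yields one candidate binary mask, so a single OCT image produces several regional proposals for the label-selection step described next.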
Parallel ESN Method
S. Roychowdhury and L. S. Muppirisetty, "Fast proposals for image and video annotation using modified echo state networks," in 2018 17th IEEE International Conference on Machine Learning and Applications (ICMLA). IEEE, 2018, pp. 1225-1230.
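As a rough illustration of the echo state network idea behind the parallel ESN proposals [2], the sketch below builds a fixed random reservoir and trains only a ridge-regression readout. Feeding per-pixel features in scan-line order, and the specific reservoir size, spectral radius, and ridge penalty, are assumptions for illustration, not the modified ESN of the cited paper.

```python
import numpy as np

class EchoStateNetwork:
    """Minimal ESN sketch: fixed random reservoir + trained linear readout."""

    def __init__(self, n_inputs, n_reservoir=200, spectral_radius=0.9,
                 ridge=1e-3, seed=0):
        rng = np.random.default_rng(seed)
        self.W_in = rng.uniform(-0.5, 0.5, (n_reservoir, n_inputs))
        W = rng.uniform(-0.5, 0.5, (n_reservoir, n_reservoir))
        # Rescale recurrent weights so the largest eigenvalue magnitude
        # equals the chosen spectral radius (echo state property).
        W *= spectral_radius / np.max(np.abs(np.linalg.eigvals(W)))
        self.W = W
        self.ridge = ridge
        self.W_out = None

    def _states(self, U):
        # U: (T, n_inputs) sequence, e.g. per-pixel features in scan-line order.
        x = np.zeros(self.W.shape[0])
        states = []
        for u in U:
            x = np.tanh(self.W_in @ u + self.W @ x)
            states.append(x.copy())
        return np.asarray(states)

    def fit(self, U, y):
        # Ridge-regression readout on the reservoir states.
        X = self._states(U)
        A = X.T @ X + self.ridge * np.eye(X.shape[1])
        self.W_out = np.linalg.solve(A, X.T @ y)

    def predict(self, U):
        # Returns a per-step score, e.g. a foreground probability per pixel.
        return self._states(U) @ self.W_out
```

Because only the linear readout is trained, such a model can be fit from a handful of annotated images, which is what makes the ESN family attractive in the few-shot setting.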
Experiments
A. Noisy Label Generation
B. Target Label Selection
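As a hedged sketch of the selection experiment, the snippet below picks, among several manual labels, the one with the highest Dice overlap against a model-generated proposal. The actual TLSA criterion is not spelled out in this transcript, so the Dice-based rule is an assumption for illustration.

```python
import numpy as np

def dice(a, b, eps=1e-8):
    """Dice overlap between two binary masks."""
    a, b = a.astype(bool), b.astype(bool)
    return 2.0 * np.logical_and(a, b).sum() / (a.sum() + b.sum() + eps)

def select_target_label(manual_labels, proposal):
    """Return the index of the manual label that best matches the proposal,
    plus all agreement scores (Dice agreement is an assumed criterion)."""
    scores = [dice(label, proposal) for label in manual_labels]
    return int(np.argmax(scores)), scores
```

Applied per image across an OCT stack, a rule of this kind automatically flags which manual annotation to keep for each frame.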
References
[1] G. Girish, B. Saikumar, S. Roychowdhury, A. R. Kothari, and J. Rajan, "Depthwise separable convolutional neural network model for intraretinal cyst segmentation," IEEE EMBC, 2019.
[2] S. Roychowdhury and L. S. Muppirisetty, "Fast proposals for image and video annotation using modified echo state networks," in 2018 17th IEEE International Conference on Machine Learning and Applications (ICMLA). IEEE, 2018, pp. 1225-1230.
[3] G. Girish, B. Thakur, S. R. Chowdhury, A. R. Kothari, and J. Rajan, "Segmentation of intra-retinal cysts from optical coherence tomography images using a fully convolutional neural network model," IEEE Journal of Biomedical and Health Informatics, vol. 23, no. 1, pp. 296-304, 2018.