Publications

Computer-aided Assessment of Catheters and Tubes on Radiographs: How Good Is Artificial Intelligence for Assessment?

Catheters are the second most common abnormal finding on radiographs. The position of catheters must be assessed on all radiographs, as serious complications can arise if catheters are malpositioned. However, due to the large number of radiographs performed each day, there can be substantial delays between the time a radiograph is performed and when it is interpreted by a radiologist. Computer-aided approaches hold the potential to assist in prioritizing radiographs with potentially malpositioned catheters for interpretation and to automatically insert text indicating the placement of catheters into radiology reports, thereby improving radiologists’ efficiency. Surprisingly, after 50 years of research in computer-aided diagnosis, there is still a paucity of studies in this area. By carefully surveying the literature, we were able to find only 13 studies related to this task. Now, in the era of machine learning, or more specifically deep learning, the problem of catheter assessment is far more tractable. Therefore, we have performed a review of current algorithms and identified key challenges in building a reliable computer-aided diagnosis system for the assessment of catheters on radiographs. This review may serve to further the development of machine learning approaches for this important use case.

Generative Adversarial Network in Medical Imaging: A Review

Generative adversarial networks have gained a lot of attention in the general computer vision community due to their capability to generate data without explicitly modelling the probability density function and their robustness to overfitting. The adversarial loss brought by the discriminator provides a clever way of incorporating unlabeled samples into training and imposing higher-order consistency that has proven useful in many cases, such as domain adaptation, data augmentation, and image-to-image translation. These properties have attracted researchers in the medical imaging community, and we have seen rapid adoption in many traditional tasks as well as some novel applications. Based on our observations, this trend will continue to grow; we have therefore conducted a review of recent advances in medical imaging that use the adversarial training scheme, in the hope of benefiting researchers who are interested in this technique.
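
To make the adversarial training scheme discussed above concrete, here is a minimal PyTorch sketch of the standard non-saturating GAN objective; the placeholder networks, dimensions, and learning rates are illustrative assumptions and are not drawn from any specific paper in the review.

```python
# A minimal sketch of the standard non-saturating GAN objective, with tiny
# placeholder MLPs for the generator G and discriminator D (illustrative
# assumptions; real medical-imaging GANs use convolutional architectures).
import torch
import torch.nn as nn

latent_dim, image_dim = 64, 28 * 28
G = nn.Sequential(nn.Linear(latent_dim, 256), nn.ReLU(),
                  nn.Linear(256, image_dim), nn.Tanh())
D = nn.Sequential(nn.Linear(image_dim, 256), nn.LeakyReLU(0.2),
                  nn.Linear(256, 1))

bce = nn.BCEWithLogitsLoss()
opt_g = torch.optim.Adam(G.parameters(), lr=2e-4)
opt_d = torch.optim.Adam(D.parameters(), lr=2e-4)

def train_step(real_images):
    """real_images: (batch, image_dim) tensor scaled to [-1, 1]."""
    batch = real_images.size(0)
    z = torch.randn(batch, latent_dim)

    # Discriminator step: real images -> 1, generated images -> 0.
    fake = G(z).detach()
    loss_d = bce(D(real_images), torch.ones(batch, 1)) + \
             bce(D(fake), torch.zeros(batch, 1))
    opt_d.zero_grad(); loss_d.backward(); opt_d.step()

    # Generator step: try to make the discriminator output 1 on fakes.
    loss_g = bce(D(G(z)), torch.ones(batch, 1))
    opt_g.zero_grad(); loss_g.backward(); opt_g.step()
    return loss_d.item(), loss_g.item()
```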

Automatic Catheter and Tube Detection in Pediatric X-ray Images Using a Scale-Recurrent Network and Synthetic Data

Catheters are commonly inserted life-supporting devices. Because serious complications can arise from malpositioned catheters, X-ray images are used to assess the position of a catheter immediately after placement. Previous computer vision approaches to detecting catheters on X-ray images were either rule-based or capable of processing only a limited number or type of catheters projecting over the chest. With the resurgence of deep learning, supervised training approaches are beginning to show promising results. However, dense annotation maps are required, and the work of a human annotator is difficult to scale. In this work, we propose an automatic approach for the detection of catheters and tubes on pediatric X-ray images. We propose a simple way of synthesizing catheters on X-ray images to generate a training dataset, exploiting the fact that catheters are essentially tubular structures with various cross-sectional profiles. Further, we develop a UNet-style segmentation network with a recurrent module that can process inputs at multiple scales and iteratively refine the detection result. Trained on adult chest X-rays, the proposed network exhibits promising detection results on pediatric chest/abdomen X-rays in terms of both precision and recall, with Fβ = 0.8. The approach described in this work may contribute to the development of clinical systems that detect and assess the placement of catheters on X-ray images. This may provide a solution for triaging and prioritizing X-ray images with potentially malpositioned catheters for a radiologist’s urgent review and help automate radiology reporting.
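
As an illustration of the synthesis idea described above, the following NumPy/SciPy sketch renders a smooth random curve with a soft tubular intensity profile and blends it into an existing radiograph; the spline setup, tube width, and added intensity are illustrative assumptions rather than the paper’s exact parameters.

```python
# A minimal sketch of synthesizing a catheter-like tubular structure on an
# existing radiograph. The spline setup, tube width, and added intensity are
# illustrative assumptions, not the exact parameters used in the paper.
import numpy as np
from scipy.interpolate import splprep, splev

def synthesize_catheter(xray, n_ctrl=5, width=3.0, intensity=0.35, seed=0):
    """xray: 2-D float image in [0, 1]. Returns (synthetic image, label mask)."""
    rng = np.random.default_rng(seed)
    h, w = xray.shape

    # A smooth random curve defines the catheter's path.
    ctrl = rng.uniform([0, 0], [h - 1, w - 1], size=(n_ctrl, 2))
    tck, _ = splprep(ctrl.T, s=0, k=min(3, n_ctrl - 1))
    rows, cols = splev(np.linspace(0, 1, 1500), tck)

    # Distance from every pixel to the curve gives a soft tubular profile.
    yy, xx = np.mgrid[0:h, 0:w]
    dist = np.full((h, w), np.inf)
    for r, c in zip(rows, cols):
        dist = np.minimum(dist, np.hypot(yy - r, xx - c))
    tube = np.exp(-(dist / width) ** 2)

    # Catheters are radiopaque, so the tube brightens the image slightly.
    synthetic = np.clip(xray + intensity * tube, 0.0, 1.0)
    mask = (tube > 0.1).astype(np.float32)  # dense annotation for training
    return synthetic, mask
```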

Unsupervised and semi-supervised learning with Categorical Generative Adversarial Networks assisted by Wasserstein distance for dermoscopy image classification

Melanoma is an aggressive skin cancer that is curable if detected early. Typically, the diagnosis involves initial screening with subsequent biopsy and histopathological examination if necessary. Computer-aided diagnosis offers an objective score that is independent of clinical experience, as well as the potential to lower the workload of a dermatologist. In the recent past, the success of deep learning algorithms in general computer vision has motivated the successful application of supervised deep learning methods to computer-aided melanoma recognition. However, large quantities of labeled images are required to make further improvements on the supervised methods. A good annotation generally requires clinical and histological confirmation, which demands significant effort. In an attempt to alleviate this constraint, we propose to use a categorical generative adversarial network to automatically learn the feature representation of dermoscopy images in an unsupervised and semi-supervised manner. Thorough experiments on the ISIC 2016 skin lesion challenge demonstrate that the proposed feature learning method achieves an average precision score of 0.424 with only 140 labeled images. Moreover, the proposed method is also capable of generating realistic dermoscopy images.
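
The unsupervised part of the categorical GAN objective can be written down compactly; the PyTorch sketch below shows the entropy-based terms, with the discriminator/classifier producing the logits and the class count assumed to be defined elsewhere. The exact weighting of the terms is an illustrative assumption.

```python
# A minimal sketch of the entropy-based categorical GAN (CatGAN) objectives
# on the unsupervised data; the discriminator/classifier producing the logits
# and the number of classes are assumed to be defined elsewhere.
import torch
import torch.nn.functional as F

def entropy(p, eps=1e-8):
    """Shannon entropy along the last (class) dimension."""
    return -(p * (p + eps).log()).sum(dim=-1)

def catgan_d_loss(logits_real, logits_fake):
    p_real = F.softmax(logits_real, dim=-1)
    p_fake = F.softmax(logits_fake, dim=-1)
    # Be confident on real images, uncertain on generated images, and use
    # all classes evenly across the batch.
    return entropy(p_real).mean() - entropy(p_fake).mean() \
           - entropy(p_real.mean(dim=0))

def catgan_g_loss(logits_fake):
    p_fake = F.softmax(logits_fake, dim=-1)
    # The generator wants its samples classified confidently and spread
    # evenly over the classes.
    return entropy(p_fake).mean() - entropy(p_fake.mean(dim=0))
```

In the semi-supervised setting, a standard cross-entropy term on the small labeled subset (140 images in the experiments above) would simply be added to the discriminator loss.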

Sharpness-aware low dose CT denoising using conditional generative adversarial network

Low-dose computed tomography (LDCT) offers tremendous benefits in radiation-restricted applications, but the quantum noise resulting from the insufficient number of photons can potentially harm diagnostic performance. Current image-based denoising methods tend to blur the final reconstructed results, especially at high noise levels. In this paper, a deep learning based approach is proposed to mitigate this problem. An adversarially trained network and a sharpness detection network are used to guide the training process. Experiments on both simulated and real datasets show that the results of the proposed method exhibit very little resolution loss and achieve better performance than state-of-the-art methods, both quantitatively and visually.
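
A minimal sketch of how the sharpness-aware term can be combined with adversarial and pixel-wise losses for the denoising generator is shown below (PyTorch); the loss weights and the sharpness_net interface are illustrative assumptions rather than the paper’s exact configuration.

```python
# A minimal sketch of a sharpness-aware generator objective for LDCT
# denoising: pixel-wise fidelity plus an adversarial term plus a sharpness
# term. The loss weights and the sharpness_net interface are illustrative
# assumptions, not the paper's exact configuration.
import torch
import torch.nn as nn

mse = nn.MSELoss()
bce = nn.BCEWithLogitsLoss()

def generator_loss(denoised, target_ndct, discriminator, sharpness_net,
                   w_adv=1e-3, w_sharp=1e-1):
    # Pixel-wise fidelity to the normal-dose reference image.
    loss_pix = mse(denoised, target_ndct)

    # Adversarial term: the denoised output should be indistinguishable
    # from a real normal-dose image according to the discriminator.
    d_out = discriminator(denoised)
    loss_adv = bce(d_out, torch.ones_like(d_out))

    # Sharpness term: match the sharpness maps of the output and the
    # reference to counter the blur introduced by pixel-wise losses.
    loss_sharp = mse(sharpness_net(denoised), sharpness_net(target_ndct))

    return loss_pix + w_adv * loss_adv + w_sharp * loss_sharp
```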

LBP-based Segmentation of Defocus Blur

Defocus blur is extremely common in images captured using optical imaging systems. It may be undesirable, but it may also be an intentional artistic effect; thus, it can either enhance or inhibit our visual perception of the image scene. For tasks such as image restoration and object recognition, one might want to segment a partially blurred image into blurred and non-blurred regions. In this paper, we propose a sharpness metric based on LBP (local binary patterns) and a robust segmentation algorithm to separate in- and out-of-focus image regions. The proposed sharpness metric exploits the observation that most local image patches in blurry regions have significantly fewer of certain local binary patterns compared to those in sharp regions. Using this metric together with image matting and multi-scale inference, we obtain high-quality sharpness maps. Tests on hundreds of partially blurred images were used to evaluate our blur segmentation algorithm against six comparator methods. The results show that our algorithm achieves segmentation results comparable to the state of the art while offering a substantial speed advantage over the other methods.
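
A minimal sketch of the LBP-based sharpness idea, using scikit-image and SciPy: the per-pixel score is the local fraction of certain uniform LBP patterns, which drops in defocused regions. The window size and the particular set of counted patterns are illustrative assumptions, not the exact values from the paper.

```python
# A minimal sketch of an LBP-based local sharpness map: the score at each
# pixel is the local fraction of certain uniform LBP patterns, which drops
# in defocused regions. The window size and the set of counted patterns are
# illustrative assumptions rather than the exact values from the paper.
import numpy as np
from scipy.ndimage import uniform_filter
from skimage.feature import local_binary_pattern

def lbp_sharpness_map(gray, radius=1, n_points=8, window=21,
                      counted_patterns=(6, 7, 8)):
    """gray: 2-D float image. Returns a per-pixel sharpness score in [0, 1]."""
    # 'uniform' LBP codes: 0..n_points are uniform patterns, n_points + 1
    # collects the non-uniform ones.
    lbp = local_binary_pattern(gray, n_points, radius, method="uniform")
    hit = np.isin(lbp, counted_patterns).astype(np.float32)
    # Fraction of counted patterns inside each local window.
    return uniform_filter(hit, size=window)
```

Refining such a map with image matting and multi-scale inference, as described above, would then produce the final blur segmentation.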

Identification of morphologically similar seeds using multi-kernel learning

The use of digital image analysis for the identification of seeds has not been recognized as a validated method. Image analysis for seed identification has been studied previously, and good recognition rates have been achieved. However, the data sets used in these experiments either contain very few groups of non-verified specimens or provide little representation of intra-species variation. This study considered a data set containing seed specimens that were verified to represent the species and a typical population variation, as well as look-alike species that share the same morphological appearance, in particular seeds from species in the same genus, which can be difficult even for trained professionals to distinguish visually. With representative specimens, the image features and machine learning algorithms described herein can achieve a high recognition rate: >97%. Three different types of features were extracted from the seed images: colour, shape, and texture; a multi-kernel support vector machine was used as the classifier. We compared our features to the previous state-of-the-art features, and the results showed that the features we selected performed better on our data set.
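
To illustrate the multi-kernel idea, the sketch below combines separate RBF kernels computed on the colour, shape, and texture feature sets and feeds the result to a precomputed-kernel SVM in scikit-learn; the kernel weights, gamma, and C values are illustrative assumptions and would normally be tuned by cross-validation or a dedicated multiple-kernel-learning solver.

```python
# A minimal sketch of a multi-kernel SVM for the seed data: one RBF kernel
# per feature type (colour, shape, texture), combined as a weighted sum and
# passed to a precomputed-kernel SVM. The weights, gamma, and C values are
# illustrative assumptions and would normally be tuned.
import numpy as np
from sklearn.metrics.pairwise import rbf_kernel
from sklearn.svm import SVC

FEATURE_TYPES = ("colour", "shape", "texture")

def combined_kernel(feats_a, feats_b, weights=(0.4, 0.3, 0.3), gamma=0.5):
    """feats_*: dicts mapping each feature type to an (n_samples, d) array."""
    return sum(w * rbf_kernel(feats_a[name], feats_b[name], gamma=gamma)
               for w, name in zip(weights, FEATURE_TYPES))

# Hypothetical usage, assuming train_feats/test_feats dicts and train_labels:
# clf = SVC(kernel="precomputed", C=10.0)
# clf.fit(combined_kernel(train_feats, train_feats), train_labels)
# predictions = clf.predict(combined_kernel(test_feats, train_feats))
```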