
DICOM Networking in Python

Sixth post in a series of posts written by members of the Qurit Team. Written by Adam Watkins.

Part 1: The Handshake

At Qurit, our research relies heavily on analyzing medical images from PET and SPECT scanners. These images, like the majority of clinical images, come in the DICOM file format. A somewhat underrated yet essential part of the research process is the automation of image retrieval and anonymization, an important step before clinical images can be used to train and test our machine learning models. Using DICOM’s underlying networking protocol, imaging pipelines can be created to automate the process of image acquisition, anonymization, and analysis.

Under the hood, DICOM is not only a file format (.dcm) but also a fully featured, and admittedly somewhat archaic, networking protocol. The DICOM networking protocol resides just above the TCP layer in the OSI model. Built upon TCP/IP, the DICOM protocol relies upon making a handshake before communication between two remotes can begin. In DICOM this is called creating an association.

The DICOM Protocol Network Stack

In this post I will show how to associate with a remote DICOM server, such as a PACS (Picture Archiving and Communication System) server or a PET scanner. I will also show how to send a C-Echo, the first of four basic actions, to the associated remote. In future posts, I will move on to the other, increasingly advanced, actions shown below.

Name      Action
C-Echo    Test the connection between two devices
C-Find    Query a remote device
C-Move    Move an image between two remote devices
C-Store   Send an image from a local device to a remote device for storage

Performing a C-Echo

This article and all future articles will be using pynetdicom (1.5.1). Future articles will also be using pydicom (1.4.1). Pynetdicom handles the low-level networking implementation, whilst pydicom will allow us to read, write, and query DICOM files.
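If you would like to follow along, both packages can be installed with pip; pinning the versions above should keep the examples reproducible:

pip install pynetdicom==1.5.1 pydicom==1.4.1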

from dataclasses import dataclass

from pynetdicom.sop_class import VerificationSOPClass
from pynetdicom import AE

First we are going to create some helper classes.

@dataclass
class Modality:
    addr: str
    port: int
    ae_title: str

class Association:
    def __init__(self, modality, context):
        self.modality = modality
        self.context = context

    def __enter__(self):
        ae = AE()
        ae.add_requested_context(self.context)
        # vars() unpacks the Modality dataclass into addr, port, and ae_title
        self._association = ae.associate(**vars(self.modality))
        return self._association

    def __exit__(self, *args):
        # Always release, even if the body of the with-block raised
        self._association.release()
        self._association = None

Above, we are first creating a modality class to keep all our modalities organized. For our purposes, a modality is a remote that we will be connecting with. Each remote has an address, a port, and an application entity (AE) title.

Second, and more importantly, we are creating an association class to act as a context manager, helping to create and release associations. This is accomplished by implementing both dunder enter (__enter__) and dunder exit (__exit__). Though initially more complex, I believe using a context manager is good practice, as it ensures that releasing associations will never be forgotten (even under error conditions). As an added bonus, it will make all future code shorter, cleaner, and frankly more pythonic.

if __name__ == '__main__':
    modality = Modality('127.0.0.1', 104, 'DISCOVERY')

    with Association(modality, VerificationSOPClass) as assoc:
        resp = assoc.send_c_echo()
        print(f'Modality responded with status: {resp.Status}')

Here we are finally testing our connection with the modality we are trying to reach. If all goes well, the modality should respond with the success status 0x0000. Notice how, by using context management, the association is released automatically.
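One caveat worth noting: if the handshake fails (an unreachable host, wrong port, or rejected AE title), ae.associate still returns an Association object, just one that was never established, and the subsequent send_c_echo will fail with a less obvious error. An optional guard in __enter__, sketched below, surfaces the failure immediately:

    def __enter__(self):
        ae = AE()
        ae.add_requested_context(self.context)
        self._association = ae.associate(**vars(self.modality))
        # Fail fast if the remote rejected or ignored our association request
        if not self._association.is_established:
            raise ConnectionError(f'Could not associate with {self.modality.ae_title}')
        return self._association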

If we were to forgo automatic context management, the implementation would look like the code below. It would then be imperative to ensure that the association is always released, to prevent tying up valuable server resources.

if __name__ == '__main__':
    ae = AE()
    ae.add_requested_context(VerificationSOPClass)
    assoc = ae.associate('127.0.0.1', 104, ae_title='DISCOVERY')

    try:
        resp = assoc.send_c_echo()
        print(f'Modality responded with status: {resp.Status}')
    finally:
        # Release whether or not send_c_echo raised
        assoc.release()

Stay tuned for part two, where I will demonstrate how to query a remote device with a C-Find. In the meantime, check out Qurit on GitHub!


Identification of Parkinson’s Disease Subtypes using Radiomics and Machine Learning

Fourth post in a series of posts written by members of the Qurit Team. Written by Mohammad Salmanpour.

At the Qurit lab, my research aim is to subdivide Parkinson’s disease (PD) into specific subtypes. This is important since homogeneous groups of patients are more likely to share genetic and pathological features, enabling potentially earlier disease recognition and more tailored treatment strategies. We aim to identify reproducible PD subtypes using clinical as well as imaging information, including radiomics features, via hybrid machine learning (ML) methods that are robust to variations in the number of subjects and features. We also aim to identify the most important combination of features for the clustering task as well as for the prediction of outcome.

The segmentation process

We are applying multiple hybrid ML methods, constructed from various feature reduction algorithms, clustering evaluation methods, clustering algorithms, and classifiers, to longitudinal data from PD patients.
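As a rough, purely illustrative sketch (with placeholder data and library choices of my own, not our actual pipeline), one such hybrid combination could chain a feature reduction step, a clustering algorithm, and a clustering evaluation metric:

import numpy as np
from sklearn.decomposition import PCA
from sklearn.cluster import KMeans
from sklearn.metrics import silhouette_score

# Placeholder data: rows are patients, columns are clinical + radiomics features
X = np.random.rand(120, 40)

reduced = PCA(n_components=10).fit_transform(X)                      # feature reduction
labels = KMeans(n_clusters=3, random_state=0).fit_predict(reduced)   # clustering
print(f'Silhouette score: {silhouette_score(reduced, labels):.3f}')  # cluster evaluation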

Identified PD Subtypes

So far, we are seeing that without radiomics features the clusters are not robust to variations in features and samples, whereas utilizing SPECT radiomics information enables the consistent generation of three distinct clusters, with different motor, non-motor, and imaging contributions.

t-SNE Plots (as independent validation of our ML models)

At the Qurit lab, we are demonstrating that appropriate hybrid ML methods and radiomics features enable robust identification of PD subtypes as well as prediction of disease subtypes.


Pros of a Multidisciplinary Team

Third post in a series of posts written by members of the Qurit Team. Written by Cassandra Miller.

When the Qurit lab (then known as the QT lab) first materialized a couple of years ago, the climate of our group was much different. We were a small, close-knit team, and we all shared a common interest: physics! All of the lab members at the time either possessed a background in physics or were pursuing one. While this ensured that our lab excelled in the physics of nuclear medicine, it missed a key component: nuclear medicine is much more than just physics. To encompass and master all areas of nuclear medicine, we would require team members with heterogeneous knowledge from numerous disciplines.

And so, the Qurit lab began a transition from being composed mainly of physicists to a more well-rounded and diverse team. Now, our lab boasts members specializing in fields such as electrical engineering, computer science, medicine, biophysics, and more. This diversity gives our team a creative edge in solving problems. We can collaborate and compile multiple perspectives, while still allowing each individual on our team to excel within their area of expertise. Together, we can think of novel, exciting ideas and find new solutions to challenges.

Working together and collaborating also requires our team members to develop excellent communication skills. We are all motivated by each other and inspired by the unique knowledge we each have of our respective fields, which encourages us to learn new skills from one another. There are many benefits to multidisciplinary teams that allow our group to shine.

When it comes to nuclear medicine, our lab has all of the skills and expertise that enable us to be at the leading edge of research in our field. This transition has helped us become a more successful and thriving team, and we’re only getting better from here!


State-of-the-Art in PET/CT Image Segmentation

Second post in a series of posts written by members of the Qurit Team. Written by Fereshteh Yousefi Rizi, PhD.

With the growing importance of PET/CT scans in cancer care, the incentives for research to develop methods for PET/CT image enhancement, classification, and segmentation have been increasing in recent years. PET/CT analysis methods have been developed to utilize the functional and metabolic information of PET images and the anatomical localization of CT images. Although tumor boundaries in PET and CT images are not always matched [1], in order to use the complementary information of the two modalities, existing PET/CT segmentation methods process PET and CT images either separately or simultaneously to achieve accurate tumor segmentation [2].

Figure 1. Segmentation of PET and CT images separately [6]

With regard to standardized, reproducible, and automatic PET/CT tumor segmentation, there are still difficulties for clinical applications [1, 3]. The relatively low resolution of PET images, as well as noise and the possible heterogeneity of tumor regions, are limitations to this end. In addition, PET-CT registration (even hardware registration) may introduce errors due to patient motion [4]. Moreover, the fusion of complementary information from PET and CT images is problematic [2, 5]. To address these issues, segmentation techniques for PET/CT images can be divided into two categories [5]:

  1. Combining modality-specific features that are extracted separately from PET and CT images [6-9].
  2. Fusing complementary features from the CT and PET modalities, prioritized for different tasks [2, 5, 10-13].

Figure 1 depicts the overall scheme of a method proposed by Teramoto et al. [6] to separately identify tumor regions on the PET and CT images. Another method, a PET/CT co-segmentation approach by Zhong et al. [1] built from modality-specific encoder branches, is shown in Figure 2.

Figure 2. Co-segmentation of PET and CT images based on feature fusion by a fully convolutional deep network. Two parallel 3D-UNets are designed and the CT and PET branches are concatenated in the corresponding decoders, as depicted by the dotted arrow lines [1].
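As a toy illustration of this fusion idea (a minimal 2D sketch of my own, not the actual 3D architecture from [1]), two modality-specific convolutional branches can be run in parallel and their feature maps concatenated before a shared segmentation head:

import torch
import torch.nn as nn

class ToyCoSegNet(nn.Module):
    # Toy two-branch network: separate PET and CT encoders fused by concatenation
    def __init__(self):
        super().__init__()
        self.pet_branch = nn.Sequential(nn.Conv2d(1, 8, 3, padding=1), nn.ReLU())
        self.ct_branch = nn.Sequential(nn.Conv2d(1, 8, 3, padding=1), nn.ReLU())
        # The shared head maps the fused features to a per-pixel tumor probability
        self.head = nn.Sequential(nn.Conv2d(16, 1, 1), nn.Sigmoid())

    def forward(self, pet, ct):
        fused = torch.cat([self.pet_branch(pet), self.ct_branch(ct)], dim=1)
        return self.head(fused)

pet, ct = torch.rand(1, 1, 64, 64), torch.rand(1, 1, 64, 64)  # dummy aligned slices
print(ToyCoSegNet()(pet, ct).shape)  # torch.Size([1, 1, 64, 64])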

Deep learning solutions such as V-net [14], W-net [15], and generative adversarial networks (GANs) [16] have recently gained much attention for medical image segmentation [17]. Given access to a sufficient amount of annotated data, deep models outperform the majority of conventional segmentation methods without the need for expert-designed features [1, 18]. Furthermore, there are ongoing studies on using unsupervised and weakly supervised models [4], noisy labels [19], and scarce annotation [20] to make the solutions less dependent on radiologists.

I am currently working on developing new techniques to improve tumor segmentation results, benefiting from PET/CT complementary information while requiring less annotated data, in order to significantly improve predictive models in cancer.

References

1. Zhong, Z., Y. Kim, K. Plichta, B.G. Allen, L. Zhou, J. Buatti, and X. Wu, Simultaneous cosegmentation of tumors in PET-CT images using deep fully convolutional networks. Med Phys, 2019. 46(2): p. 619-633.

2. Li, L., X. Zhao, W. Lu, and S. Tan, Deep learning for variational multimodality tumor segmentation in PET/CT. Neurocomputing, 2019.

3. Markel, D., H. Zaidi, and I. El Naqa, Novel multimodality segmentation using level sets and Jensen‐Rényi divergence. Medical physics, 2013. 40(12): p. 121908.

4. Afshari, S., A. BenTaieb, Z. Mirikharaji, and G. Hamarneh. Weakly supervised fully convolutional network for PET lesion segmentation. in Medical Imaging 2019: Image Processing. 2019. International Society for Optics and Photonics.

5. Kumar, A., M. Fulham, D. Feng, and J. Kim, Co-learning feature fusion maps from PET-CT images of lung cancer. IEEE Transactions on Medical Imaging, 2019. 39(1): p. 204-217.

6. Teramoto, A., H. Fujita, O. Yamamuro, and T. Tamaki, Automated detection of pulmonary nodules in PET/CT images: Ensemble false‐positive reduction using a convolutional neural network technique. Medical physics, 2016. 43(6Part1): p. 2821-2827.

7. Bi, L., J. Kim, A. Kumar, L. Wen, D. Feng, and M. Fulham, Automatic detection and classification of regions of FDG uptake in whole-body PET-CT lymphoma studies. Computerized Medical Imaging and Graphics, 2017. 60: p. 3-10.

8. Xu, L., G. Tetteh, J. Lipkova, Y. Zhao, H. Li, P. Christ, M. Piraud, A. Buck, K. Shi, and B.H. Menze, Automated whole-body bone lesion detection for multiple myeloma on 68Ga-Pentixafor PET/CT imaging using deep learning methods. Contrast media & molecular imaging, 2018. 2018.

9. Han, D., J. Bayouth, Q. Song, A. Taurani, M. Sonka, J. Buatti, and X. Wu. Globally optimal tumor segmentation in PET-CT images: a graph-based co-segmentation method. in Biennial International Conference on Information Processing in Medical Imaging. 2011. Springer.

10. Song, Y., W. Cai, J. Kim, and D.D. Feng, A multistage discriminative model for tumor and lymph node detection in thoracic images. IEEE transactions on Medical Imaging, 2012. 31(5): p. 1061-1075.

11. Bi, L., J. Kim, D. Feng, and M. Fulham. Multi-stage thresholded region classification for whole-body PET-CT lymphoma studies. in International Conference on Medical Image Computing and Computer-Assisted Intervention. 2014. Springer.

12. Kumar, A., J. Kim, L. Wen, M. Fulham, and D. Feng, A graph-based approach for the retrieval of multi-modality medical images. Medical image analysis, 2014. 18(2): p. 330-342.

13. Hatt, M., C. Cheze-le Rest, A. Van Baardwijk, P. Lambin, O. Pradier, and D. Visvikis, Impact of tumor size and tracer uptake heterogeneity in 18F-FDG PET and CT non–small cell lung cancer tumor delineation. Journal of Nuclear Medicine, 2011. 52(11): p. 1690-1697.

14. Milletari, F., N. Navab, and S.-A. Ahmadi. V-net: Fully convolutional neural networks for volumetric medical image segmentation. in 2016 Fourth International Conference on 3D Vision (3DV). 2016. IEEE.

15. Xia, X. and B. Kulis, W-net: A deep model for fully unsupervised image segmentation. arXiv preprint arXiv:1711.08506, 2017.

16. Zhang, X., X. Zhu, N. Zhang, P. Li, and L. Wang. SegGAN: Semantic segmentation with generative adversarial network. in 2018 IEEE Fourth International Conference on Multimedia Big Data (BigMM). 2018. IEEE.

17. Taghanaki, S.A., K. Abhishek, J.P. Cohen, J. Cohen-Adad, and G. Hamarneh, Deep Semantic Segmentation of Natural and Medical Images: A Review. arXiv preprint arXiv:1910.07655, 2019.

18. Zhao, Y., A. Gafita, B. Vollnberg, G. Tetteh, F. Haupt, A. Afshar-Oromieh, B. Menze, M. Eiber, A. Rominger, and K. Shi, Deep neural network for automatic characterization of lesions on 68 Ga-PSMA-11 PET/CT. European journal of nuclear medicine and molecular imaging, 2020. 47(3): p. 603-613.

19. Mirikharaji, Z., Y. Yan, and G. Hamarneh, Learning to segment skin lesions from noisy annotations, in Domain Adaptation and Representation Transfer and Medical Image Learning with Less Labels and Imperfect Data. 2019, Springer. p. 207-215.

20. Tajbakhsh, N., L. Jeyaseelan, Q. Li, J.N. Chiang, Z. Wu, and X. Ding, Embracing imperfect datasets: A review of deep learning solutions for medical image segmentation. Medical Image Analysis, 2020: p. 101693.


Starting a new job with Qurit during COVID-19: What I’m learning

First post in a series of posts written by members of the Qurit Team. Written by Cariad Knight.

Starting a new job, especially one in a scientific research context, is often not a glamorous endeavour at the best of times. In the past, I’ve always found the initial learning curve in research to be exceptionally steep. And truthfully, this makes sense. Since the goal of research is to push the boundaries of knowledge, there is often a myriad of material to cover before arriving at a place where one understands not only the details of the research project in question, but also where it fits into the broader picture of the topic, the field, and the scientific community as a whole. So, understandably, the legwork required to arrive at the requisite understanding to tackle the questions being posed by the research is often extensive and filled with missteps.

This learning curve is no different now, during the COVID-19 pandemic. What is different, however, is the hurdle required to overcome it. Normally, the way that I amass the information needed to tackle new research, beyond reading papers, is through speaking with coworkers and (often repeatedly and extensively) asking my supervisors questions. This method of gaining insight is somewhat hindered during the present lockdown conditions, as the casual avenues of communication have been replaced with the somewhat more formal and often less immediate formats of email, messenger chats, and Zoom calls. For myself, I’ve found this has made me somewhat less inclined to, how should I put this, harass my coworkers and supervisors for the answers to my questions.

I actually think that this newfound hesitation in asking for clarification has been a good thing for me! It’s teaching me about myself and the way that I learn and problem-solve. Gradually, I’ve noticed that I’m becoming better not only at tackling the learning curve head-on myself, but also at learning when and which questions to ask for clarification and help. I feel that knowing when and what to ask is an incredibly valuable and empowering skill in both research and life.

So, I suppose I am grateful for my unglamorous COVID-19 start to my new NSERC USRA job with the Qurit lab and new research undertaking, because I am learning much about myself – and as Socrates said, “To know thyself is the beginning of wisdom.”


Using radioactive epoxy lesions in phantoms

A presentation by our fantastic co-op student, Roberto Fedrigo, at the UBC Multidisciplinary Undergraduate Research Conference (MURC), titled “Phantom-guided optimization of prostate cancer PET imaging using radioactive epoxy lesions”.


New opening for post-doc fellow in PET/CT imaging

We have an opening in our team for a post-doc fellow in PET/CT imaging. Please see the announcement.


Roberto Fedrigo awarded the 2020 Summer Undergraduate Fellowship by AAPM

Congratulations to Roberto Fedrigo, who has been awarded the 2020 Summer Undergraduate Fellowship by the American Association of Physicists in Medicine (AAPM). Roberto, an undergraduate student in UBC Physics & Astronomy, has been a very active member of our team since September 2019. The awarded fellowship is a 10-week summer program designed to provide opportunities for undergraduate university students to gain experience in medical physics by performing research in a medical physics laboratory or assisting with clinical service at a clinical facility.


Yansong Zhu awarded the 2020 Bradley-Alavi Student Fellowship

Congratulations to Yansong Zhu who has been awarded the 2020 Society of Nuclear Medicine and Molecular Imaging (SNMMI) Bradley-Alavi Student Fellowship!

Yansong, an Electrical & Computer Engineering PhD candidate in our team, is actively pursuing research in quantitative brain PET imaging. The awarded fellowship project, entitled “Application of voxel-based partial volume correction to amyloid PET imaging,” proposes to translate MR-guided PET image enhancement to human amyloid PET imaging, aiming to enhance the discriminative power to differentiate between Alzheimer’s disease (AD) and normal aging.

Bradley-Alavi Fellows are named in honor of the late Stanley E. Bradley, Professor of Medicine at Columbia University College of Physicians and Surgeons and a prominent researcher in the fields of renal physiology and liver disease, and Abass Alavi, M.D., Professor and Director of Research Education at the Department of Radiology at the University of Pennsylvania.


New visiting PhD student

We welcome Mohammad Salmanpour, who has been working closely with us since 2017 (from Tehran Polytechnic) and is now staying with us as an exchange PhD student. Mohammad focuses on improved prediction of outcomes in Parkinson’s disease (PD) using machine learning algorithms.

Here are two of his recent papers with us, one on prediction of cognitive outcome (2019):
https://rahmimlab.files.wordpress.com/2019/07/salmanpour_cbm19_moca_prediction_pd.pdf
And one just published (2020) on prediction of motor outcome in PD patients:
https://rahmimlab.files.wordpress.com/2020/01/salmanpour_physmed20_pd_motor_prediction_ml.pdf