
Generating RT Structs in Python with RT-UTILS

Seventh post in a series of posts written by members of the Qurit Team. Written by Asim Shrestha.

Introducing RT-UTILS

DICOM and RT Structures are the primary means by which medical images are stored. Because of this, it is important to be able to translate data in and out of RT Structs. Although there are well-documented mechanisms for loading RT Structures within Python, we found that creating such files ourselves proved difficult. Existing tools felt complicated and clunky, so we set out to make our own library. Introducing RT-Utils, a lightweight and minimal Python package for RT Struct manipulation.

What kind of manipulation is possible?

When we say manipulation, we mean that users can either load existing RT Struct files or create new RT Structs from a DICOM series. After doing so, they can perform the following operations:

  • Create a new ROI within the RT Struct from a binary mask. Optionally, one can also specify the color, name, and description of the ROI.
  • List the names of all the ROIs within the RT Struct.
  • Load an ROI mask from the RT Struct by name.
  • Save the RT Struct by name or path.

How are DICOM headers dealt with?

Our approach to headers is simple. We first add all of the headers required to make the file a valid RT Struct. After this, we transfer all of the headers we can from the input DICOM series, such as patient name, patient age, etc. Finally, as users add ROIs to the RT Struct, they are dynamically added to the correct sequences within the file.
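The header-transfer step can be sketched roughly as follows. This is an illustration only, not the actual RT-Utils internals: the tag list and the dict-based datasets are simplifications standing in for real DICOM datasets.

```python
# Sketch of the header-transfer idea: after the required RT Struct headers
# are set, copy whatever patient-level tags the input series provides.
PATIENT_TAGS = ["PatientName", "PatientID", "PatientAge", "PatientSex"]

def transfer_patient_headers(series_ds, rtstruct_ds):
    """Copy the patient tags that exist in the series; skip missing ones."""
    for tag in PATIENT_TAGS:
        if tag in series_ds:
            rtstruct_ds[tag] = series_ds[tag]
    return rtstruct_ds

series = {"PatientName": "DOE^JANE", "PatientAge": "042Y"}
rtstruct = {"Modality": "RTSTRUCT"}  # required headers added first
transfer_patient_headers(series, rtstruct)
```

Tags absent from the series (here, PatientID and PatientSex) are simply skipped rather than written with empty values.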

Installation

pip install rt-utils

After installation, import RTStructBuilder and either create a new RT Struct or load an existing one. Below are examples of both. Pay attention to the comments in the code snippets to get a better idea of what is going on.

Adding an ROI mask

The ROI mask format required for the add_roi method is a 3D array where each 2D slice of the array is a binary mask. Within each slice, values of one indicate that the pixel is part of the ROI. There must be as many slices within the array as there are slices within your DICOM series. The first slice in the array corresponds to the smallest slice location in the DICOM series, the second slice corresponds to the second-smallest slice location, etc.
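For example, a valid mask for a hypothetical series of 64 slices of 512×512 images could be built with NumPy like so (note the slice axis comes last, matching the mask_3d[:, :, 0] indexing used when loading masks back out):

```python
import numpy as np

# Hypothetical series: 64 slices of 512 x 512 images.
num_slices = 64
mask = np.zeros((512, 512, num_slices), dtype=bool)

# Mark a 20 x 20 square ROI on the first slice, i.e. the slice with the
# smallest slice location in the DICOM series.
mask[100:120, 100:120, 0] = True
```

A real mask would come from segmentation (manual or ML); the point here is only the shape and dtype convention.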

Creating new RT Structs

from rt_utils import RTStructBuilder

rtstruct = RTStructBuilder.create_new(dicom_series_path="./testlocation")

# …
# Create mask through means such as ML
# …

# Add the 3D mask as an ROI.
# The color, description, and name will be auto-generated.
rtstruct.add_roi(mask=MASK_FROM_ML_MODEL)

# Add another ROI, this time setting the color and name.
rtstruct.add_roi(
    mask=MASK_FROM_ML_MODEL,
    color=[255, 0, 255],
    name="RT-Utils ROI!"
)

rtstruct.save('new-rt-struct')
Resulting RT Struct ROI as viewed in Slicer

Loading and viewing existing RT Structs

from rt_utils import RTStructBuilder
import matplotlib.pyplot as plt

# Load existing RT Struct. Requires the series path and existing RT Struct path
rtstruct = RTStructBuilder.create_from(
    dicom_series_path="./testlocation",
    rt_struct_path="./testlocation/rt-struct.dcm"
)

# View all of the ROI names from within the image
print(rtstruct.get_roi_names())

# Load the 3D mask for a given ROI from within the RT Struct
mask_3d = rtstruct.get_roi_mask_by_name("ROI NAME")

# Display one slice of the region
first_mask_slice = mask_3d[:, :, 0]
plt.imshow(first_mask_slice)
plt.show()
Result of viewing a mask slice within an existing RT Struct

Bonus: Dealing with holes in masks and ROIs

Oftentimes, DICOM viewing tools such as MIM will automatically detect inner and outer contours and draw holes accordingly. This is not always the case, however, and sometimes you will need to create pin holes within your ROI so that it displays properly. This process is described in this link.

An ROI in Slicer not properly displaying the hole within the region

If you find yourself in this situation, we have another optional parameter that can deal with it for you. When adding ROIs to your RT Struct, simply set use_pin_hole=True as in the example below.

rtstruct.add_roi(mask, use_pin_hole=True)
The same region as before, but now with a pin hole carved out; the inner hole now displays properly.
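For intuition, a pin hole is just a thin cut connecting the inner hole to the outside of the region, so that a single outer contour can trace both. A toy NumPy sketch of the idea (purely illustrative; RT-Utils handles this for you when use_pin_hole=True):

```python
import numpy as np

# A square ROI with a square hole in the middle.
mask = np.zeros((20, 20), dtype=bool)
mask[2:18, 2:18] = True    # outer region
mask[7:13, 7:13] = False   # inner hole

# Carve a one-pixel-wide channel from the hole to the region's edge. The hole
# is now connected to the exterior, so one contour can describe the shape.
pin_hole_mask = mask.copy()
pin_hole_mask[2:7, 10] = False
```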

“On Growth, Success and Well-being in Academia”

Presented by Arman Rahmim at UBC Physics & Astronomy Department Colloquium on November 5, 2020.


Our team’s partnership with Microsoft “AI for Health” team

Our Qurit team has partnered with artificial intelligence (AI) experts at Microsoft’s “AI for Health” program to move towards the next generation of cancer imaging and assessment. Read more about this exciting work!


3 minute video summary of our research!

3 minute summary (pitch) of our research to graduate students at the UBC Department of Physics & Astronomy (Sept 24, 2020):


Identification of Parkinson’s Disease Subtypes using Radiomics and Machine Learning

Fourth post in a series of posts written by members of the Qurit Team. Written by Mohammad Salmanpour.

At Qurit lab, my research aim is to subdivide Parkinson’s disease (PD) into specific subtypes. This is important since homogeneous groups of patients are more likely to share genetic and pathological features, enabling potentially earlier disease recognition and more tailored treatment strategies. We aim to identify reproducible PD subtypes using clinical as well as imaging information, including radiomics features, via hybrid machine learning (ML) methods that are robust to variations in the number of subjects and features. We also aim to identify the most important combinations of features for the clustering task as well as for outcome prediction.

The segmentation process

We are applying multiple hybrid ML methods constructed from various feature-reduction algorithms, clustering evaluation methods, clustering algorithms, and classifiers, applied to longitudinal data from PD patients.
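As a toy illustration of such a hybrid pipeline (feature reduction followed by clustering), the sketch below uses synthetic data and plain NumPy stand-ins in place of the actual algorithms and patient data:

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic stand-in for patient feature data: three hidden groups
# in a 10-dimensional feature space.
centers = rng.normal(size=(3, 10)) * 5
X = np.vstack([c + rng.normal(size=(50, 10)) for c in centers])

# Feature reduction: project onto the top two principal components.
Xc = X - X.mean(axis=0)
_, _, Vt = np.linalg.svd(Xc, full_matrices=False)
X2 = Xc @ Vt[:2].T

# Clustering: plain k-means with k=3.
def kmeans(data, k, iters=50, seed=0):
    r = np.random.default_rng(seed)
    centroids = data[r.choice(len(data), k, replace=False)]
    for _ in range(iters):
        labels = np.argmin(((data[:, None] - centroids) ** 2).sum(-1), axis=1)
        for j in range(k):
            if np.any(labels == j):  # guard against empty clusters
                centroids[j] = data[labels == j].mean(axis=0)
    return labels

labels = kmeans(X2, k=3)
```

In practice we swap in different feature-reduction and clustering algorithms and evaluate the resulting clusters for robustness across subsets of subjects and features.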

Identified PD Subtypes

So far, we are seeing that when no radiomics features are used, the clusters are not robust to variations in features and samples, whereas utilizing SPECT radiomics information enables consistent generation of three distinct clusters, with different motor, non-motor, and imaging contributions.

t-SNE Plots (as independent validation of our ML models)

At Qurit lab, we are demonstrating that appropriate hybrid ML and radiomics features enable robust identification of PD subtypes as well as prediction of these subtypes.


Pros of a Multidisciplinary Team

Third post in a series of posts written by members of the Qurit Team. Written by Cassandra Miller.

When the Qurit lab (then known as the QT lab) first materialized a couple of years ago, the climate of our group was much different. We were a small, close-knit team, and we all shared a common interest: physics! All of the lab members at the time either possessed a background in physics or were pursuing one. While this ensured that our lab excelled in the physics of nuclear medicine, it missed a key component –  nuclear medicine is much more than just physics. To encompass and master all areas of nuclear medicine, we would require team members with heterogeneous knowledge from numerous disciplines.

And so, the Qurit lab began a transition from being composed of mainly physicists to a more well-rounded and diverse team. Now, our lab boasts members specializing in fields such as electrical engineering, computer science, medicine, biophysics, and more. This diversity gives our team a creative edge in solving problems. We can collaborate and combine multiple perspectives, while still allowing each individual on our team to excel within their area of expertise. Together, we can come up with novel, exciting ideas and find new solutions to challenges.

Working together and collaborating also requires our team members to develop excellent communication skills. We are all motivated by each other and inspired by the unique knowledge we each have of our respective fields, which encourages us to learn new skills from one another. The many benefits of multidisciplinary teams allow our group to shine.

When it comes to nuclear medicine, our lab has all of the skills and expertise that enables us to be at the leading edge of research in our field. This transition has helped us to be a more successful and thriving team, and we’re only getting better from here!


State-of-the-Art in PET/CT Image Segmentation

Second post in a series of posts written by members of the Qurit Team. Written by Fereshteh Yousefi Rizi, PhD.

With the growing importance of PET/CT scans in cancer care, incentives for research into PET/CT image enhancement, classification, and segmentation have been increasing in recent years. PET/CT analysis methods have been developed to utilize the functional and metabolic information of PET images and the anatomical localization of CT images. Although tumor boundaries in PET and CT images are not always matched [1], existing PET/CT segmentation methods process PET and CT images either separately or simultaneously in order to exploit the complementary information of the two modalities and achieve accurate tumor segmentation [2].

Figure 1. Segmentation of PET and CT images separately [6]

Standardized, reproducible, and automatic PET/CT tumor segmentation still faces difficulties in clinical application [1, 3]. The relatively low resolution of PET images, as well as noise and possible heterogeneity of tumor regions, are limitations to this end. In addition, PET-CT registration (even with hardware registration) may introduce errors due to patient motion [4]. Moreover, fusing complementary information from PET and CT images is problematic [2, 5]. To address these issues, segmentation techniques for PET/CT images can be divided into two categories [5]:

  1. Combining modality-specific features that are extracted separately from PET and CT images [6-9]
  2. Fusion of complementary features from CT and PET modalities that are prioritized for different tasks [2, 5, 10-13].

Figure 1 depicts the overall scheme of a method proposed by Teramoto et al. [6] to identify the tumor region on the PET and CT images separately. Another PET/CT co-segmentation method, proposed by Zhong et al. [1] and built from modality-specific encoder branches, is shown in Figure 2.

Figure 2. Co-segmentation of PET and CT images based on feature fusion by a fully convolutional deep network. Two parallel 3D-UNets are designed and the CT and PET branches are concatenated in the corresponding decoders, as depicted by the dotted arrow lines [1].

Deep learning solutions such as V-net [14], W-net [15], and generative adversarial networks (GANs) [16] have gained much attention recently for medical image segmentation [17]. Given access to a sufficient amount of annotated data, deep models outperform the majority of conventional segmentation methods without the need for expert-designed features [1, 18]. Furthermore, there are ongoing studies on using unsupervised and weakly supervised models [4], noisy labels [19], and scarce annotation [20] to make these solutions less dependent on radiologists.
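Shape-wise, the channel concatenation at the heart of dual-branch fusion networks such as the one in Figure 2 can be illustrated as follows (a NumPy stand-in for the actual framework tensors; the shapes are arbitrary examples):

```python
import numpy as np

# Feature volumes from separate CT and PET encoder branches at one decoder
# level, in (channels, depth, height, width) layout.
rng = np.random.default_rng(0)
ct_features = rng.random((32, 8, 16, 16))
pet_features = rng.random((32, 8, 16, 16))

# Channel-wise concatenation fuses the two modalities; subsequent decoder
# layers then learn from both sets of features jointly.
fused = np.concatenate([ct_features, pet_features], axis=0)
```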

I am currently working on developing new techniques to improve tumor segmentation results, benefiting from PET/CT complementary information and considering the use of less annotated data, in order to significantly improve predictive models in cancer.

References

1. Zhong, Z., Y. Kim, K. Plichta, B.G. Allen, L. Zhou, J. Buatti, and X. Wu, Simultaneous cosegmentation of tumors in PET-CT images using deep fully convolutional networks. Med Phys, 2019. 46(2): p. 619-633.

2. Li, L., X. Zhao, W. Lu, and S. Tan, Deep learning for variational multimodality tumor segmentation in PET/CT. Neurocomputing, 2019.

3. Markel, D., H. Zaidi, and I. El Naqa, Novel multimodality segmentation using level sets and Jensen‐Rényi divergence. Medical physics, 2013. 40(12): p. 121908.

4. Afshari, S., A. BenTaieb, Z. Mirikharaji, and G. Hamarneh. Weakly supervised fully convolutional network for PET lesion segmentation. in Medical Imaging 2019: Image Processing. 2019. International Society for Optics and Photonics.

5. Kumar, A., M. Fulham, D. Feng, and J. Kim, Co-learning feature fusion maps from PET-CT images of lung cancer. IEEE Transactions on Medical Imaging, 2019. 39(1): p. 204-217.

6. Teramoto, A., H. Fujita, O. Yamamuro, and T. Tamaki, Automated detection of pulmonary nodules in PET/CT images: Ensemble false‐positive reduction using a convolutional neural network technique. Medical physics, 2016. 43(6Part1): p. 2821-2827.

7. Bi, L., J. Kim, A. Kumar, L. Wen, D. Feng, and M. Fulham, Automatic detection and classification of regions of FDG uptake in whole-body PET-CT lymphoma studies. Computerized Medical Imaging and Graphics, 2017. 60: p. 3-10.

8. Xu, L., G. Tetteh, J. Lipkova, Y. Zhao, H. Li, P. Christ, M. Piraud, A. Buck, K. Shi, and B.H. Menze, Automated whole-body bone lesion detection for multiple myeloma on 68Ga-Pentixafor PET/CT imaging using deep learning methods. Contrast media & molecular imaging, 2018. 2018.

9. Han, D., J. Bayouth, Q. Song, A. Taurani, M. Sonka, J. Buatti, and X. Wu. Globally optimal tumor segmentation in PET-CT images: a graph-based co-segmentation method. in Biennial International Conference on Information Processing in Medical Imaging. 2011. Springer.

10. Song, Y., W. Cai, J. Kim, and D.D. Feng, A multistage discriminative model for tumor and lymph node detection in thoracic images. IEEE transactions on Medical Imaging, 2012. 31(5): p. 1061-1075.

11. Bi, L., J. Kim, D. Feng, and M. Fulham. Multi-stage thresholded region classification for whole-body PET-CT lymphoma studies. in International Conference on Medical Image Computing and Computer-Assisted Intervention. 2014. Springer.

12. Kumar, A., J. Kim, L. Wen, M. Fulham, and D. Feng, A graph-based approach for the retrieval of multi-modality medical images. Medical image analysis, 2014. 18(2): p. 330-342.

13. Hatt, M., C. Cheze-le Rest, A. Van Baardwijk, P. Lambin, O. Pradier, and D. Visvikis, Impact of tumor size and tracer uptake heterogeneity in 18F-FDG PET and CT non–small cell lung cancer tumor delineation. Journal of Nuclear Medicine, 2011. 52(11): p. 1690-1697.

14. Milletari, F., N. Navab, and S.-A. Ahmadi. V-net: Fully convolutional neural networks for volumetric medical image segmentation. in 2016 Fourth International Conference on 3D Vision (3DV). 2016. IEEE.

15. Xia, X. and B. Kulis, W-net: A deep model for fully unsupervised image segmentation. arXiv preprint arXiv:1711.08506, 2017.

16. Zhang, X., X. Zhu, N. Zhang, P. Li, and L. Wang. SegGAN: Semantic segmentation with generative adversarial network. in 2018 IEEE Fourth International Conference on Multimedia Big Data (BigMM). 2018. IEEE.

17. Taghanaki, S.A., K. Abhishek, J.P. Cohen, J. Cohen-Adad, and G. Hamarneh, Deep Semantic Segmentation of Natural and Medical Images: A Review. arXiv preprint arXiv:1910.07655, 2019.

18. Zhao, Y., A. Gafita, B. Vollnberg, G. Tetteh, F. Haupt, A. Afshar-Oromieh, B. Menze, M. Eiber, A. Rominger, and K. Shi, Deep neural network for automatic characterization of lesions on 68 Ga-PSMA-11 PET/CT. European journal of nuclear medicine and molecular imaging, 2020. 47(3): p. 603-613.

19. Mirikharaji, Z., Y. Yan, and G. Hamarneh, Learning to segment skin lesions from noisy annotations, in Domain Adaptation and Representation Transfer and Medical Image Learning with Less Labels and Imperfect Data. 2019, Springer. p. 207-215.

20. Tajbakhsh, N., L. Jeyaseelan, Q. Li, J.N. Chiang, Z. Wu, and X. Ding, Embracing imperfect datasets: A review of deep learning solutions for medical image segmentation. Medical Image Analysis, 2020: p. 101693.


Starting a new job with Qurit during COVID-19: What I’m learning

First post in a series of posts written by members of the Qurit Team. Written by Cariad Knight.

Starting a new job, especially one in a scientific research context, is often not a glamorous endeavour at the best of times. In the past, I’ve always found the initial learning curve in research to be exceptionally steep. And truthfully, this makes sense. Since the goal of research is to push the boundaries of knowledge, there is often a myriad of material to cover before arriving at a place where one understands not only the details of the research project in question, but also where it fits into the broader picture of the topic, field, and scientific community as a whole. So, understandably, the legwork required to arrive at the requisite understanding to tackle the questions posed by the research is often extensive and filled with missteps.

This learning curve is no different now, during the COVID-19 pandemic. What is different, however, is the hurdle required to overcome it. Normally, the way that I amass the information needed to tackle new research beyond reading papers is through speaking with coworkers and (often repeatedly and extensively) asking my supervisors questions. This method of gaining insight is somewhat hindered during the present lockdown conditions, as the casual avenues of communication have been replaced with the somewhat more formal and often less immediate format of email, messenger chats, and Zoom calls. For myself, I’ve found this has made me somewhat less inclined to, how should I put this, harass my coworkers and supervisors for the answers to my questions.

I actually think that this newfound hesitation in asking for clarification has been a good thing for me! It’s teaching me about myself and the way that I learn and problem-solve. Gradually, I’ve noticed that I’m becoming better not only at tackling the learning curve head-on myself, but also at learning when and which questions to ask for clarification and help. I feel that knowing when and what to ask is an incredibly valuable and empowering skill in both research and life.

So, I suppose I am grateful for my unglamorous COVID-19 start to my new NSERC USRA job with the Qurit lab and new research undertaking, because I am learning much about myself – and as Socrates said, “To know thyself is the beginning of wisdom.”


Using radioactive epoxy lesions in phantoms

Presentation by our fantastic co-op student, Roberto Fedrigo, to the UBC Multidisciplinary Undergraduate Research Conference (MURC), with title, “Phantom-guided optimization of prostate cancer PET imaging using radioactive epoxy lesions”:


New opening for post-doc fellow in PET/CT imaging

We have an opening in our team for a post-doc fellow in PET/CT imaging. Please see the announcement.