Seventh post in a series of posts written by members of the Qurit Team. Written by Asim Shrestha.
DICOM and RT Structures (RT Structs) are the primary means by which medical images and their associated contours are stored. Because of this, it is important to be able to translate data into and out of RT Structs. Although there are well-documented mechanisms for loading RT Structures within Python, we found that creating such files ourselves proved difficult. Existing tools felt complicated and clunky, so we set out to make our own library. Introducing RT-Utils, a lightweight and minimal Python package for RT Struct manipulation.
What kind of manipulation is possible?
When we say manipulation, we mean that users are able to load existing RT Struct files or create new RT Structs given a DICOM series. After doing so, they can perform the following operations:
Create a new ROI within the RT Struct from a binary mask. Optionally, one can also specify the color, name, and description of the ROI.
List the names of all the ROIs within the RT Struct.
Load an ROI mask from the RT Struct by name.
Save the RT Struct by name or path.
How are DICOM headers dealt with?
Our approach to headers is simple. We first add all of the headers required to make the file a valid RT Struct. After this, we transfer all the headers we can from the input DICOM series, such as patient name, patient age, etc. Finally, as users add ROIs to the RT Struct, the ROIs are dynamically added to the correct sequences within the file.
How do I get started?
First, install the package from PyPI:
pip install rt-utils
After installation, you then import RTStructBuilder and either create a new RT Struct or load an existing one. Below are examples of both. Pay attention to the comments in the code snippets to get a better idea of what is going on.
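The snippet below sketches both workflows and the operations listed above; the file paths and ROI name are placeholders for your own data, and running it requires a DICOM series on disk.

```python
from rt_utils import RTStructBuilder

# Create a brand-new RT Struct for an existing DICOM series...
rtstruct = RTStructBuilder.create_new(dicom_series_path="./my_dicom_series")

# ...or load an existing RT Struct alongside its DICOM series.
rtstruct = RTStructBuilder.create_from(
    dicom_series_path="./my_dicom_series",
    rt_struct_path="./my_rt_struct.dcm",
)

# List the names of all ROIs within the RT Struct.
print(rtstruct.get_roi_names())

# Load an ROI mask from the RT Struct by name.
mask_3d = rtstruct.get_roi_mask_by_name("GTV")

# Create a new ROI from a binary mask; color, name,
# and description are optional.
rtstruct.add_roi(mask=mask_3d, color=[255, 0, 255], name="My ROI")

# Save the RT Struct by name or path.
rtstruct.save("new-rt-struct")
```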
Adding an ROI mask
The ROI mask format required for the add_roi method is a 3D array where each 2D slice of the array is a binary mask. Values of one within the mask indicate that the pixel is part of the ROI for a given slice. There must be as many slices within the array as there are slices within your DICOM series. The first slice in the array corresponds to the smallest slice location in the DICOM series, the second slice corresponds to the next slice location, and so on.
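As a small sketch of this format: here we assume the slice index runs along the last axis of the array (matching the masks RT-Utils returns from get_roi_mask_by_name), and the dimensions are example values for a 512 x 512 series with 90 slices.

```python
import numpy as np

# One 2D binary mask per DICOM slice, slices along the last axis,
# ordered from the smallest slice location upward.
num_slices, rows, cols = 90, 512, 512
mask = np.zeros((rows, cols, num_slices), dtype=bool)

# Mark a 60 x 60 pixel square ROI on slices 40 through 49.
mask[200:260, 200:260, 40:50] = True
```

A mask built this way can then be passed directly to add_roi.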
Oftentimes, DICOM viewing tools such as MIM will automatically detect inner and outer contours and draw holes accordingly. This is not always the case, however, and sometimes you will be required to create pin holes within your ROI so that your ROIs are displayed properly. This process is described in this link.
If you find yourself in this situation, we have another optional parameter that can deal with this for you. When you are adding ROIs to your RT Struct, simply set use_pin_hole=True, as in the example below.
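To illustrate what a pin hole is, the toy NumPy sketch below carves a one-pixel channel through a "donut" mask so that the inner hole connects to the outside and a contour tracer sees a single boundary. This is only a sketch of the idea, not RT-Utils' actual implementation; with RT-Utils you simply pass use_pin_hole=True to add_roi.

```python
import numpy as np

# Build a 2D "donut" mask: an outer disk with an inner hole.
size = 64
yy, xx = np.ogrid[:size, :size]
dist = np.sqrt((yy - size // 2) ** 2 + (xx - size // 2) ** 2)
donut = (dist >= 10) & (dist <= 20)

# Carve a one-pixel-wide pin hole connecting the inner hole to the
# exterior, so the ROI becomes a single simply-connected contour.
pin_hole = donut.copy()
pin_hole[size // 2, size // 2 + 10 : size // 2 + 21] = False
```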
Our Qurit team has partnered with artificial intelligence (AI) experts at Microsoft’s “AI for Health” program, to move towards the next generation of cancer imaging and assessment. Read more about this exciting work!
Fourth post in a series of posts written by members of the Qurit Team. Written by Mohammad Salmanpour.
At Qurit lab, my research aim is to subdivide Parkinson's disease (PD) into specific subtypes. This is important since homogeneous groups of patients are more likely to share genetic and pathological features, enabling potentially earlier disease recognition and more tailored treatment strategies. We aim to identify reproducible PD subtypes using clinical as well as imaging information, including radiomics features, via hybrid machine learning (ML) methods that are robust to variations in the number of subjects and features. We also aim to identify the most important combinations of features for the clustering task as well as for prediction of outcome.
We are applying multiple hybrid ML methods constructed using: various feature reduction algorithms, clustering evaluation methods, clustering algorithms and classifiers, as applied to longitudinal data from PD patients.
So far, we are seeing that when no radiomics features are used, the clusters are not robust to variations in features and samples, whereas utilizing SPECT radiomics information enables consistent generation of 3 distinct clusters, with different motor, non-motor, and imaging contributions.
At Qurit lab, we are demonstrating that appropriate hybrid ML methods and radiomics features enable robust identification of PD subtypes as well as prediction of outcome.
Third post in a series of posts written by members of the Qurit Team. Written by Cassandra Miller.
When the Qurit lab (then known as the QT lab) first materialized a couple of years ago, the climate of our group was much different. We were a small, close-knit team, and we all shared a common interest: physics! All of the lab members at the time either possessed a background in physics or were pursuing one. While this ensured that our lab excelled in the physics of nuclear medicine, it missed a key component – nuclear medicine is much more than just physics. To encompass and master all areas of nuclear medicine, we would require team members with heterogeneous knowledge from numerous disciplines.
And so, the Qurit lab began a transition from being composed of mainly physicists to a more well-rounded and diverse team. Now, our lab boasts members specializing in fields such as electrical engineering, computer science, medicine, biophysics, and more. This diversity gives our team a creative edge to solving problems. We can collaborate together and compile multiple perspectives, while still allowing each unique individual in our team to excel within their area of expertise. We can think of novel, exciting ideas together and find new solutions to challenges.
Working together and collaborating also requires our team members to develop excellent communication skills. We are all motivated by each other and inspired by the unique knowledge we each have of our respective fields, which encourages us to learn new skills from one another. There are many benefits to multidisciplinary teams that allow our group to shine.
When it comes to nuclear medicine, our lab has all of the skills and expertise that enables us to be at the leading edge of research in our field. This transition has helped us to be a more successful and thriving team, and we’re only getting better from here!
With the growing importance of PET/CT scans in cancer care, the incentives for research to develop methods for PET/CT image enhancement, classification, and segmentation have been increasing in recent years. PET/CT analysis methods have been developed to utilize the functional and metabolic information of PET images and the anatomical localization of CT images. Although tumor boundaries in PET and CT images are not always matched, in order to use the complementary information of the PET and CT modalities, existing PET/CT segmentation methods process the PET and CT images either separately or simultaneously to achieve accurate tumor segmentation.
Regarding standardized, reproducible, and automatic PET/CT tumor segmentation, difficulties remain for clinical applications [1, 3]. The relatively low resolution of PET images, as well as noise and possible heterogeneity of tumor regions, are limitations to this end. In addition, PET-CT registration (even with hardware registration) may introduce errors due to patient motion. Moreover, fusion of complementary information from PET and CT images is problematic [2, 5]. To address these issues, segmentation techniques for PET/CT images can be divided into two categories:
Combining modality-specific features that are extracted separately from PET and CT images [6-9]
Fusion of complementary features from CT and PET modalities that are prioritized for different tasks [2, 5, 10-13].
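As a toy sketch of the difference between the two strategies (not an implementation of any of the cited methods; the thresholding "segmenters" and weights here are stand-ins for real models):

```python
import numpy as np

rng = np.random.default_rng(0)
pet = rng.random((64, 64))  # toy normalized PET slice
ct = rng.random((64, 64))   # toy normalized CT slice

# Category 1: extract modality-specific results separately,
# then combine them (here, by union of the binary masks).
pet_mask = pet > 0.8
ct_mask = ct > 0.8
combined = pet_mask | ct_mask

# Category 2: fuse the complementary information first (here, a
# weighted average prioritizing PET), then segment the fused image.
fused = 0.7 * pet + 0.3 * ct
cosegmented = fused > 0.8
```

Real methods replace the thresholds with learned models (e.g. modality-specific encoder branches whose features are merged in a shared decoder), but the placement of the fusion step is the distinguishing design choice.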
Figure 1 depicts the overall scheme of a method proposed by Teramoto et al. [6] to separately identify tumor regions on the PET and CT images. Another proposed PET/CT co-segmentation method by Zhong et al. [1], built on modality-specific encoder branches, is shown in Figure 2.
Deep learning solutions such as V-net [14], W-net [15], and generative adversarial networks (GANs) [16] have gained much attention recently for medical image segmentation [17]. Given access to a sufficient amount of annotated data, deep models outperform the majority of conventional segmentation methods without the need for expert-designed features [1, 18]. Furthermore, there are ongoing studies on using unsupervised and weakly supervised models [4], noisy labels [19], and scarce annotation [20] to make the solutions less dependent on radiologists.
I am currently working on developing new techniques to improve tumor segmentation results, benefiting from PET/CT complementary information and considering the use of less annotated data, in order to significantly improve predictive models in cancer.
1. Zhong, Z., Y. Kim, K. Plichta, B.G. Allen, L. Zhou, J. Buatti, and X. Wu, Simultaneous cosegmentation of tumors in PET-CT images using deep fully convolutional networks. Med Phys, 2019. 46(2): p. 619-633.
2. Li, L., X. Zhao, W. Lu, and S. Tan, Deep learning for variational multimodality tumor segmentation in PET/CT. Neurocomputing, 2019.
3. Markel, D., H. Zaidi, and I. El Naqa, Novel multimodality segmentation using level sets and Jensen‐Rényi divergence. Medical physics, 2013. 40(12): p. 121908.
4. Afshari, S., A. BenTaieb, Z. Mirikharaji, and G. Hamarneh. Weakly supervised fully convolutional network for PET lesion segmentation. in Medical Imaging 2019: Image Processing. 2019. International Society for Optics and Photonics.
5. Kumar, A., M. Fulham, D. Feng, and J. Kim, Co-learning feature fusion maps from PET-CT images of lung cancer. IEEE Transactions on Medical Imaging, 2019. 39(1): p. 204-217.
6. Teramoto, A., H. Fujita, O. Yamamuro, and T. Tamaki, Automated detection of pulmonary nodules in PET/CT images: Ensemble false‐positive reduction using a convolutional neural network technique. Medical physics, 2016. 43(6Part1): p. 2821-2827.
7. Bi, L., J. Kim, A. Kumar, L. Wen, D. Feng, and M. Fulham, Automatic detection and classification of regions of FDG uptake in whole-body PET-CT lymphoma studies. Computerized Medical Imaging and Graphics, 2017. 60: p. 3-10.
8. Xu, L., G. Tetteh, J. Lipkova, Y. Zhao, H. Li, P. Christ, M. Piraud, A. Buck, K. Shi, and B.H. Menze, Automated whole-body bone lesion detection for multiple myeloma on 68Ga-Pentixafor PET/CT imaging using deep learning methods. Contrast media & molecular imaging, 2018. 2018.
9. Han, D., J. Bayouth, Q. Song, A. Taurani, M. Sonka, J. Buatti, and X. Wu. Globally optimal tumor segmentation in PET-CT images: a graph-based co-segmentation method. in Biennial International Conference on Information Processing in Medical Imaging. 2011. Springer.
10. Song, Y., W. Cai, J. Kim, and D.D. Feng, A multistage discriminative model for tumor and lymph node detection in thoracic images. IEEE transactions on Medical Imaging, 2012. 31(5): p. 1061-1075.
11. Bi, L., J. Kim, D. Feng, and M. Fulham. Multi-stage thresholded region classification for whole-body PET-CT lymphoma studies. in International Conference on Medical Image Computing and Computer-Assisted Intervention. 2014. Springer.
12. Kumar, A., J. Kim, L. Wen, M. Fulham, and D. Feng, A graph-based approach for the retrieval of multi-modality medical images. Medical image analysis, 2014. 18(2): p. 330-342.
13. Hatt, M., C. Cheze-le Rest, A. Van Baardwijk, P. Lambin, O. Pradier, and D. Visvikis, Impact of tumor size and tracer uptake heterogeneity in 18F-FDG PET and CT non–small cell lung cancer tumor delineation. Journal of Nuclear Medicine, 2011. 52(11): p. 1690-1697.
14. Milletari, F., N. Navab, and S.-A. Ahmadi. V-net: Fully convolutional neural networks for volumetric medical image segmentation. in 2016 Fourth International Conference on 3D Vision (3DV). 2016. IEEE.
15. Xia, X. and B. Kulis, W-net: A deep model for fully unsupervised image segmentation. arXiv preprint arXiv:1711.08506, 2017.
16. Zhang, X., X. Zhu, N. Zhang, P. Li, and L. Wang. SegGAN: Semantic segmentation with generative adversarial network. in 2018 IEEE Fourth International Conference on Multimedia Big Data (BigMM). 2018. IEEE.
17. Taghanaki, S.A., K. Abhishek, J.P. Cohen, J. Cohen-Adad, and G. Hamarneh, Deep Semantic Segmentation of Natural and Medical Images: A Review. arXiv preprint arXiv:1910.07655, 2019.
18. Zhao, Y., A. Gafita, B. Vollnberg, G. Tetteh, F. Haupt, A. Afshar-Oromieh, B. Menze, M. Eiber, A. Rominger, and K. Shi, Deep neural network for automatic characterization of lesions on 68 Ga-PSMA-11 PET/CT. European journal of nuclear medicine and molecular imaging, 2020. 47(3): p. 603-613.
19. Mirikharaji, Z., Y. Yan, and G. Hamarneh, Learning to segment skin lesions from noisy annotations, in Domain Adaptation and Representation Transfer and Medical Image Learning with Less Labels and Imperfect Data. 2019, Springer. p. 207-215.
20. Tajbakhsh, N., L. Jeyaseelan, Q. Li, J.N. Chiang, Z. Wu, and X. Ding, Embracing imperfect datasets: A review of deep learning solutions for medical image segmentation. Medical Image Analysis, 2020: p. 101693.
First post in a series of posts written by members of the Qurit Team. Written by Cariad Knight.
Starting a new job, especially one in a scientific research context, is often not a glamorous endeavour at the best of times. In the past, I've always found the initial learning curve in research to be exceptionally steep. And truthfully, this makes sense. Since the goal of research is to push the boundaries of knowledge, there is often a myriad of material to cover before arriving at a place where one understands not only the details of the research project in question, but also where it fits into the broader picture of the topic, the field, and the scientific community as a whole. So, understandably, the legwork required to arrive at the requisite understanding to tackle the questions posed by the research is often extensive and filled with missteps.
This learning curve is no different now, during the COVID-19 pandemic. What is different, however, is the hurdle required to overcome it. Normally, the way that I amass the information needed to tackle new research, beyond reading papers, is through speaking with coworkers and (often repeatedly and extensively) asking my supervisors questions. This method of gaining insight is somewhat hindered during the present lockdown conditions, as the casual avenues of communication have been replaced with the somewhat more formal and often less immediate format of email, messenger chats, and Zoom calls. For myself, I've found this has made me somewhat less inclined to, how should I put this, harass my coworkers and supervisors for the answers to my questions.
I actually think that this newfound hesitation in asking for clarification has been a good thing for me! It's teaching me about myself and the way that I learn and problem-solve. Gradually, I've noticed that I'm becoming better not only at tackling the learning curve head-on myself, but also at learning when and which questions to ask for clarification and help. I feel that knowing when and what to ask is an incredibly valuable and empowering skill in both research and life.
So, I suppose I am grateful for my unglamorous COVID-19 start to my new NSERC USRA job with the Qurit lab and new research undertaking, because I am learning much about myself – and as Socrates said, “To know thyself is the beginning of wisdom.”