Presented by Arman Rahmim at UBC Physics & Astronomy Department Colloquium on November 5, 2020.
Our Qurit team has partnered with artificial intelligence (AI) experts at Microsoft’s “AI for Health” program to move towards the next generation of cancer imaging and assessment. Read more about this exciting work!
3 minute summary (pitch) of our research to graduate students at the UBC Department of Physics & Astronomy (Sept 24, 2020):
Fourth post in a series of posts written by members of the Qurit Team. Written by Mohammad Salmanpour.
At the Qurit lab, my research aims to subdivide Parkinson’s disease (PD) into specific subtypes. This is important because homogeneous groups of patients are more likely to share genetic and pathological features, enabling potentially earlier disease recognition and more tailored treatment strategies. We aim to identify reproducible PD subtypes using clinical as well as imaging information, including radiomics features, via hybrid machine learning (ML) methods that are robust to variations in the number of subjects and features. We also aim to identify the most important combinations of features for the clustering task as well as for outcome prediction.
We are applying multiple hybrid ML methods, constructed from various feature-reduction algorithms, clustering algorithms, cluster-evaluation methods, and classifiers, to longitudinal data from PD patients.
So far, we are seeing that without radiomics features, the clusters are not robust to variations in features and samples, whereas utilizing SPECT radiomics information enables consistent generation of three distinct clusters with different motor, non-motor, and imaging contributions.
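As a purely illustrative sketch (not the actual Qurit pipeline, which combines many feature-reduction, clustering, and classification algorithms), the idea of checking whether clusters are robust to variations in samples can be demonstrated with a toy k-means and a pairwise co-assignment score; all function names and data values below are hypothetical:

```python
# Toy sketch of cluster-robustness checking: cluster the same subjects
# twice (different initializations) and measure how often pairs of
# subjects agree on being in the same cluster. Illustrative only.
import random


def dist2(a, b):
    """Squared Euclidean distance between two feature vectors."""
    return sum((x - y) ** 2 for x, y in zip(a, b))


def kmeans(points, k, iters=50, seed=0):
    """Minimal k-means; returns a cluster label per point."""
    rng = random.Random(seed)
    centers = rng.sample(points, k)
    labels = [0] * len(points)
    for _ in range(iters):
        labels = [min(range(k), key=lambda c: dist2(p, centers[c]))
                  for p in points]
        for c in range(k):
            members = [p for p, l in zip(points, labels) if l == c]
            if members:  # keep old center if the cluster emptied out
                centers[c] = tuple(sum(x) / len(members)
                                   for x in zip(*members))
    return labels


def co_assignment_agreement(l1, l2):
    """Fraction of subject pairs on which two clusterings agree
    about 'same cluster' vs 'different cluster'."""
    n = len(l1)
    pairs = [(i, j) for i in range(n) for j in range(i + 1, n)]
    agree = sum((l1[i] == l1[j]) == (l2[i] == l2[j]) for i, j in pairs)
    return agree / len(pairs)


# Two well-separated toy "subject" groups in a 2-feature space.
subjects = [(0, 0), (0.1, 0), (0, 0.1), (5, 5), (5.1, 5), (5, 5.1)]
run_a = kmeans(subjects, 2, seed=0)
run_b = kmeans(subjects, 2, seed=1)
stability = co_assignment_agreement(run_a, run_b)
```

For well-separated data the agreement is high across initializations; unstable cluster structure (as reported here when radiomics features are omitted) would show up as low agreement across resampled runs.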
At the Qurit lab, we are demonstrating that appropriate hybrid ML methods and radiomics features enable robust identification and prediction of PD subtypes.
Third post in a series of posts written by members of the Qurit Team. Written by Cassandra Miller.
When the Qurit lab (then known as the QT lab) first materialized a couple of years ago, the climate of our group was much different. We were a small, close-knit team, and we all shared a common interest: physics! All of the lab members at the time either possessed a background in physics or were pursuing one. While this ensured that our lab excelled in the physics of nuclear medicine, it missed a key component – nuclear medicine is much more than just physics. To encompass and master all areas of nuclear medicine, we would require team members with heterogeneous knowledge from numerous disciplines.
And so, the Qurit lab began a transition from being composed of mainly physicists to a more well-rounded and diverse team. Now, our lab boasts members specializing in fields such as electrical engineering, computer science, medicine, biophysics, and more. This diversity gives our team a creative edge to solving problems. We can collaborate together and compile multiple perspectives, while still allowing each unique individual in our team to excel within their area of expertise. We can think of novel, exciting ideas together and find new solutions to challenges.
Working together and collaborating also requires our team members to develop excellent communication skills. We are all motivated by each other and inspired by the unique knowledge each of us has about our respective fields, which encourages us to learn new skills from one another. There are many benefits to multidisciplinary teams that allow our group to shine.
When it comes to nuclear medicine, our lab has all of the skills and expertise that enable us to be at the leading edge of research in our field. This transition has helped us become a more successful and thriving team, and we’re only getting better from here!
Second post in a series of posts written by members of the Qurit Team. Written by Fereshteh Yousefi Rizi, PhD.
With the growing importance of PET/CT scans in cancer care, the incentives for research to develop methods for PET/CT image enhancement, classification, and segmentation have been increasing in recent years. PET/CT analysis methods have been developed to exploit the functional and metabolic information of PET images and the anatomical localization of CT images. Although tumor boundaries in PET and CT images are not always matched, in order to use the complementary information of the PET and CT modalities, existing PET/CT segmentation methods process PET and CT images either separately or simultaneously to achieve accurate tumor segmentation.
Standardized, reproducible, and automatic PET/CT tumor segmentation still faces difficulties in clinical applications [1, 3]. The relatively low resolution of PET images, as well as noise and the possible heterogeneity of tumor regions, are limitations to this end. In addition, PET-CT registration (even hardware-based registration) may introduce errors due to patient motion. Moreover, fusing the complementary information of PET and CT images is problematic [2, 5]. To address these issues, PET/CT segmentation techniques can be grouped into two categories:
- Combining modality-specific features that are extracted separately from PET and CT images [6-9]
- Fusion of complementary features from CT and PET modalities that are prioritized for different tasks [2, 5, 10-13].
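The second category can be illustrated with a deliberately simplified sketch: extract features from each modality with its own "encoder", then fuse them into one vector for a downstream model. This is not the architecture of any cited paper; the statistics-based encoder and the toy patch values are hypothetical stand-ins for learned deep features:

```python
# Toy late-fusion sketch: modality-specific feature extraction for PET
# and CT "patches" (nested lists of intensities), followed by
# concatenation of the complementary features. Illustrative only.

def extract_features(image):
    """Hypothetical modality-specific encoder: summary statistics
    of patch intensities instead of learned deep features."""
    flat = [v for row in image for v in row]
    n = len(flat)
    mean = sum(flat) / n
    var = sum((v - mean) ** 2 for v in flat) / n
    return [mean, var, max(flat), min(flat)]


def fuse(pet_feats, ct_feats):
    """Category-2-style fusion: concatenate complementary features
    so a downstream segmenter/classifier sees both modalities."""
    return pet_feats + ct_feats


pet = [[0.9, 0.8], [0.7, 0.95]]   # toy high-uptake PET patch
ct = [[0.2, 0.25], [0.3, 0.22]]   # toy anatomical CT patch
fused = fuse(extract_features(pet), extract_features(ct))
# 'fused' now carries functional (PET) and anatomical (CT) information
```

In the deep models cited above, the same pattern appears as modality-specific encoder branches whose feature maps are merged before the decoder, rather than hand-crafted statistics concatenated into a vector.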
Figure 1 depicts the overall scheme of a method proposed by Teramoto et al. [6] to separately identify the tumor region on the PET and CT images. Another PET/CT co-segmentation method, proposed by Zhong et al. [1] and built on modality-specific encoder branches, is shown in Figure 2.
Deep learning solutions such as V-net [14], W-net [15], and generative adversarial networks (GANs) [16] have recently gained much attention for medical image segmentation [17]. Given access to a sufficient amount of annotated data, deep models outperform the majority of conventional segmentation methods without the need for expert-designed features [1, 18]. Furthermore, there are ongoing studies on using unsupervised and weakly supervised models [4], noisy labels [19], and scarce annotation [20] to make these solutions less dependent on radiologists.
I am currently working on developing new techniques to improve tumor segmentation results, benefiting from PET/CT complementary information and considering the use of less annotated data, in order to significantly improve predictive models in cancer.
1. Zhong, Z., Y. Kim, K. Plichta, B.G. Allen, L. Zhou, J. Buatti, and X. Wu, Simultaneous cosegmentation of tumors in PET-CT images using deep fully convolutional networks. Medical Physics, 2019. 46(2): p. 619-633.
2. Li, L., X. Zhao, W. Lu, and S. Tan, Deep learning for variational multimodality tumor segmentation in PET/CT. Neurocomputing, 2019.
3. Markel, D., H. Zaidi, and I. El Naqa, Novel multimodality segmentation using level sets and Jensen‐Rényi divergence. Medical Physics, 2013. 40(12): p. 121908.
4. Afshari, S., A. BenTaieb, Z. Mirikharaji, and G. Hamarneh. Weakly supervised fully convolutional network for PET lesion segmentation. in Medical Imaging 2019: Image Processing. 2019. International Society for Optics and Photonics.
5. Kumar, A., M. Fulham, D. Feng, and J. Kim, Co-learning feature fusion maps from PET-CT images of lung cancer. IEEE Transactions on Medical Imaging, 2019. 39(1): p. 204-217.
6. Teramoto, A., H. Fujita, O. Yamamuro, and T. Tamaki, Automated detection of pulmonary nodules in PET/CT images: Ensemble false‐positive reduction using a convolutional neural network technique. Medical Physics, 2016. 43(6Part1): p. 2821-2827.
7. Bi, L., J. Kim, A. Kumar, L. Wen, D. Feng, and M. Fulham, Automatic detection and classification of regions of FDG uptake in whole-body PET-CT lymphoma studies. Computerized Medical Imaging and Graphics, 2017. 60: p. 3-10.
8. Xu, L., G. Tetteh, J. Lipkova, Y. Zhao, H. Li, P. Christ, M. Piraud, A. Buck, K. Shi, and B.H. Menze, Automated whole-body bone lesion detection for multiple myeloma on 68Ga-Pentixafor PET/CT imaging using deep learning methods. Contrast Media & Molecular Imaging, 2018. 2018.
9. Han, D., J. Bayouth, Q. Song, A. Taurani, M. Sonka, J. Buatti, and X. Wu. Globally optimal tumor segmentation in PET-CT images: a graph-based co-segmentation method. in Biennial International Conference on Information Processing in Medical Imaging. 2011. Springer.
10. Song, Y., W. Cai, J. Kim, and D.D. Feng, A multistage discriminative model for tumor and lymph node detection in thoracic images. IEEE Transactions on Medical Imaging, 2012. 31(5): p. 1061-1075.
11. Bi, L., J. Kim, D. Feng, and M. Fulham. Multi-stage thresholded region classification for whole-body PET-CT lymphoma studies. in International Conference on Medical Image Computing and Computer-Assisted Intervention. 2014. Springer.
12. Kumar, A., J. Kim, L. Wen, M. Fulham, and D. Feng, A graph-based approach for the retrieval of multi-modality medical images. Medical Image Analysis, 2014. 18(2): p. 330-342.
13. Hatt, M., C. Cheze-le Rest, A. Van Baardwijk, P. Lambin, O. Pradier, and D. Visvikis, Impact of tumor size and tracer uptake heterogeneity in 18F-FDG PET and CT non–small cell lung cancer tumor delineation. Journal of Nuclear Medicine, 2011. 52(11): p. 1690-1697.
14. Milletari, F., N. Navab, and S.-A. Ahmadi. V-net: Fully convolutional neural networks for volumetric medical image segmentation. in 2016 Fourth International Conference on 3D Vision (3DV). 2016. IEEE.
15. Xia, X. and B. Kulis, W-net: A deep model for fully unsupervised image segmentation. arXiv preprint arXiv:1711.08506, 2017.
16. Zhang, X., X. Zhu, N. Zhang, P. Li, and L. Wang. SegGAN: Semantic segmentation with generative adversarial network. in 2018 IEEE Fourth International Conference on Multimedia Big Data (BigMM). 2018. IEEE.
17. Taghanaki, S.A., K. Abhishek, J.P. Cohen, J. Cohen-Adad, and G. Hamarneh, Deep Semantic Segmentation of Natural and Medical Images: A Review. arXiv preprint arXiv:1910.07655, 2019.
18. Zhao, Y., A. Gafita, B. Vollnberg, G. Tetteh, F. Haupt, A. Afshar-Oromieh, B. Menze, M. Eiber, A. Rominger, and K. Shi, Deep neural network for automatic characterization of lesions on 68Ga-PSMA-11 PET/CT. European Journal of Nuclear Medicine and Molecular Imaging, 2020. 47(3): p. 603-613.
19. Mirikharaji, Z., Y. Yan, and G. Hamarneh, Learning to segment skin lesions from noisy annotations, in Domain Adaptation and Representation Transfer and Medical Image Learning with Less Labels and Imperfect Data. 2019, Springer. p. 207-215.
20. Tajbakhsh, N., L. Jeyaseelan, Q. Li, J.N. Chiang, Z. Wu, and X. Ding, Embracing imperfect datasets: A review of deep learning solutions for medical image segmentation. Medical Image Analysis, 2020: p. 101693.
Starting a new job, especially one in a scientific research context, is rarely a glamorous endeavour, even at the best of times. In the past, I’ve always found the initial learning curve in research to be exceptionally steep. And truthfully, this makes sense. Since the goal of research is to push the boundaries of knowledge, there is often a myriad of material to cover before arriving at a place where one understands not only the details of the research project in question, but also where it fits into the broader picture of the topic, the field, and the scientific community as a whole. So, understandably, the legwork required to arrive at the requisite understanding to tackle the questions being posed by the research is often extensive and filled with missteps.
This learning curve is no different now, during the COVID-19 pandemic. What is different, however, is the hurdle required to overcome it. Normally, the way that I amass the information needed to tackle new research, beyond reading papers, is through speaking with coworkers and (often repeatedly and extensively) asking my supervisors questions. This method of gaining insight is somewhat hindered under the present lockdown conditions, as the casual avenues of communication have been replaced with the somewhat more formal and often less immediate format of email, messenger chats, and Zoom calls. For myself, I’ve found this has made me somewhat less inclined to, how should I put this... harass my coworkers and supervisors for the answers to my questions.
I actually think that this newfound hesitation in asking for clarification has been a good thing for me! It’s teaching me about myself and the way that I learn and problem-solve. Gradually, I’ve noticed that I’m becoming better not only at tackling the learning curve head-on myself, but also at learning when, and which, questions to ask for clarification and help. Knowing when and what to ask, I feel, is an incredibly valuable and empowering skill in both research and life.
So, I suppose I am grateful for my unglamorous COVID-19 start to my new NSERC USRA job with the Qurit lab and new research undertaking, because I am learning much about myself – and as Socrates said, “To know thyself is the beginning of wisdom.”
Presentation by our fantastic co-op student, Roberto Fedrigo, to the UBC Multidisciplinary Undergraduate Research Conference (MURC), with title, “Phantom-guided optimization of prostate cancer PET imaging using radioactive epoxy lesions”:
Congratulations to Roberto Fedrigo, who has been awarded the 2020 Summer Undergraduate Fellowship by the American Association of Physicists in Medicine (AAPM). Roberto, an undergraduate student at UBC Physics & Astronomy, has been a very active member of our team since September 2019. The fellowship is a 10-week summer program designed to give undergraduate university students experience in medical physics by performing research in a medical physics laboratory or assisting with clinical service at a clinical facility.