Introduction: Magnetic Resonance Imaging (MRI) and Computed Tomography (CT) each have their areas of specialty in medical imaging. MRI is considered the safer modality, as it exploits the magnetic properties of the hydrogen nucleus, whereas a CT scan uses multiple X-ray exposures, whose ionizing radiation is known to contribute to carcinogenesis and can adversely affect the patient's health.
Methods: In scenarios such as radiation therapy, where both MRI and CT are required for treatment, an attractive alternative to acquiring both scans is to obtain the MRI and synthesize a CT scan from it. Current deep learning methods for MRI-to-CT synthesis use either paired data or unpaired data exclusively. Models trained with paired data suffer from the limited availability of well-aligned data.
Results: Training with unpaired data may generate visually realistic images, but it still does not guarantee good accuracy. To overcome this, we propose a new model called PUPC-GANs (Paired Unpaired CycleGANs), based on CycleGANs (Cycle-Consistent Adversarial Networks).
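For context, the underlying CycleGAN framework trains two generators, G: MRI → CT and F: CT → MRI, together with two discriminators, using adversarial losses and a cycle-consistency loss. A minimal sketch of that standard objective (the published CycleGAN formulation, not the PUPC-GAN objective itself) is:

\[
\mathcal{L}_{\mathrm{cyc}}(G,F) = \mathbb{E}_{x}\big[\lVert F(G(x)) - x \rVert_1\big] + \mathbb{E}_{y}\big[\lVert G(F(y)) - y \rVert_1\big],
\]
\[
\mathcal{L}_{\mathrm{CycleGAN}} = \mathcal{L}_{\mathrm{GAN}}(G,D_Y) + \mathcal{L}_{\mathrm{GAN}}(F,D_X) + \lambda\,\mathcal{L}_{\mathrm{cyc}}(G,F),
\]

where x denotes an MRI sample, y a CT sample, and λ weights the cycle-consistency term.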
Conclusion: This model is capable of learning transformations from both paired and unpaired data. To support this, a paired loss is introduced. When compared on MAE, MSE, NRMSE, PSNR, and SSIM metrics, PUPC-GANs outperform CycleGANs.
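The exact form of the paired loss is not specified in this abstract; a plausible sketch, assuming an L1 penalty between the synthesized CT G(x) and its aligned ground-truth CT y, applied only to the paired subset of the training data and weighted by a hypothetical factor μ, would be:

\[
\mathcal{L}_{\mathrm{paired}}(G) = \mathbb{E}_{(x,y)\sim p_{\mathrm{paired}}}\big[\lVert G(x) - y \rVert_1\big],
\qquad
\mathcal{L}_{\mathrm{total}} = \mathcal{L}_{\mathrm{CycleGAN}} + \mu\,\mathcal{L}_{\mathrm{paired}}(G).
\]

Under this reading, unpaired samples contribute only to the CycleGAN terms, while aligned MRI-CT pairs are additionally penalized by the paired term.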