Birmingham’s HPC service supports cancer research

Researchers based at The University of Birmingham are working to create proton Computed Tomography (CT) images that will help to treat cancer patients

Proton therapy targets tumours very precisely using a proton beam and can cause less damage to surrounding tissue than conventional radiotherapy, which is why it can be a particularly beneficial treatment for children. Treatment planning currently relies on X-rays to image the body’s composition and locate healthy tissue before treatment; this research hopes to simulate the use of protons themselves, rather than X-rays, to image the body and in doing so improve the accuracy of the final treatment. It forms part of a larger research project set up to build a device capable of delivering protons in this way in a clinical setting.

Working for the PRaVDA Consortium, a three-year project funded by the Wellcome Trust and led by researchers at the University of Lincoln, the team is using The University of Birmingham’s centrally funded High Performance Computing (HPC) service, BlueBEAR, to simulate the use of protons for CT imaging. The team hopes to simulate 1,000 million protons per image over the course of the project, and to do so 97% faster than on a desktop computer. A test simulation of 180 million protons, which would usually take 5,400 hours without the cluster, has already been completed in 72 hours (three days).
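As a rough illustration of the scale of that speed-up, the figures quoted above can be checked with a little arithmetic. The short Python sketch below uses only the numbers reported in this article; the extrapolation to the full project target assumes runtime scales roughly linearly with the number of protons, which is an assumption made here purely for illustration.

```python
# Back-of-the-envelope check of the simulation figures quoted above.
# All input numbers come from the article; nothing here is measured independently.

protons_simulated = 180e6   # protons in the completed test simulation
desktop_hours = 5400        # estimated runtime without the cluster
cluster_hours = 72          # actual runtime on BlueBEAR (three days)

speedup = desktop_hours / cluster_hours                       # ~75x faster
time_saved_pct = (1 - cluster_hours / desktop_hours) * 100    # ~98.7% less wall-clock time

# Naive extrapolation to the project target of 1,000 million protons,
# assuming runtime scales roughly linearly with the number of protons.
target_protons = 1000e6
estimated_cluster_hours = cluster_hours * target_protons / protons_simulated  # ~400 hours

print(f"Speed-up: {speedup:.0f}x, wall-clock time saved: {time_saved_pct:.1f}%")
print(f"Estimated cluster time for {target_protons:,.0f} protons: {estimated_cluster_hours:.0f} hours")
```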

“The research will give us a better understanding of how a proton beam interacts with the human body, ultimately improving the accuracy of proton therapy.”

The research team is tasked with proving the principle that a 10cm proton CT image, similar in size to a child’s head, can be created. If successful, it will be the largest proton CT image ever created. Dr Tony Price, PRaVDA research fellow, said: “The research will give us a better understanding of how a proton beam interacts with the human body, ultimately improving the accuracy of proton therapy. The HPC service at The University of Birmingham is essential for us to complete our research, as it gives us the necessary capacity to simulate and record the number of histories needed to create an image. It took us only three days to run a simulation of 180 million protons, which would usually take 5,400 hours without the cluster.”

The BlueBEAR HPC service used by the PRaVDA Consortium was designed, built and integrated in 2013 by OCF, a provider of HPC, data management, storage and analytics. Thanks to the stability and reliability of the core service, researchers have invested in expanding it with nodes purchased from their own grants and integrated into the core service, on the understanding that these nodes will be available for general use when not required by the purchasing research group.

This has expanded the capacity of the centrally funded service by 33%, showing the confidence that researchers have in the core service. The service is used by researchers from across the full range of research areas at the University, from traditional HPC users in the STEM (Science, Technology, Engineering and Mathematics) disciplines to non-traditional HPC users in areas such as Psychology and Theology.

Paul Hatton, HPC & Visualisation Specialist, IT Services, The University of Birmingham, added: “The HPC service built by OCF has proven over the past two years to be of immense value to a multitude of researchers at the University. Instead of buying small workstations, researchers are using our central HPC service because it is easy for them to buy and add their own cores when required.

“We work closely with OCF to encourage new users onto the service and provide a framework for users requesting capacity. The flexible, scalable and unobtrusive design of the high performance clusters has made it easy for us to scale up our HPC service according to the increase in demand.”

Technology

- The server cluster uses Lenovo System x iDataPlex® servers with Intel Sandy Bridge processors. OCF has installed more high performance server clusters using the industry-leading Lenovo iDataPlex server than any other UK integrator.

- The cluster also uses IBM Tivoli Storage Manager for data backup and IBM GPFS software, which enables more effective storage capacity expansion, enterprise-wide and interdepartmental file sharing, commercial-grade reliability, cost-effective disaster recovery and business continuity.

- The scheduling system on BlueBEAR is Adaptive Computing’s Moab software, which enables the scheduling, managing, monitoring and reporting of HPC workloads (a minimal job-submission sketch follows this list).

- Mellanox’s network design would make it easier for IT Services to redeploy nodes between the various components of the BEAR services to meet changing workloads, should the demand arise.
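For readers unfamiliar with batch scheduling, work on a Moab-managed cluster is typically submitted as a job script through the msub command. The Python sketch below is a generic, minimal illustration of that workflow; the job name, resource requests and simulation command are assumptions made for illustration and do not describe BlueBEAR’s actual configuration.

```python
# Minimal, generic sketch of submitting a batch job to a Moab-scheduled
# cluster from Python. The job name, resource requests and simulation
# command are illustrative assumptions, not BlueBEAR's actual setup.
import subprocess
import tempfile

# A small PBS-style job script with Moab (#MSUB) directives:
# one node with 16 cores (assumed) and a three-day wall-clock limit.
job_script = """#!/bin/bash
#MSUB -N proton-sim-test
#MSUB -l nodes=1:ppn=16
#MSUB -l walltime=72:00:00

./run_simulation
"""

# Write the script to a temporary file so msub can read it.
with tempfile.NamedTemporaryFile("w", suffix=".sh", delete=False) as f:
    f.write(job_script)
    script_path = f.name

# msub prints the new job's ID on success.
result = subprocess.run(["msub", script_path], capture_output=True, text=True, check=True)
print("Submitted job", result.stdout.strip())
```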

W: www.ocf.co.uk
