By Vijay Shankar, Director of Bioinformatics and Statistics, Clemson Institute for Human Genetics
The title of this blog has two meanings. The first comes from the fact that the number of members attending the Supercomputing Conference through STEM-Trek has doubled from 2017 to 2025. This growth is thanks to the incredible effort put forth by the STEM-Trek team in securing funding that enables HPC professionals to attend meetings and conferences for career-development opportunities. The community is stronger than ever before. This became especially apparent when, after attending SC24, I was invited to join the HPC Ecosystems Slack workspace hosted by members of STEM-Trek. Being able to collaborate, share knowledge, and solve problems collectively within this community has become a true highlight of my professional life. I’m delighted to see this group actively growing and developing. Strength in numbers, indeed!
The second meaning of the title points to this year’s preconference STEM-Trek workshop at SC25, which focused on Next Generation Arithmetic (NGA), that is, cleverer and leaner representations of numbers and precisions that improve performance and reduce resource consumption (especially power). The timing couldn’t have been better, given the explosive popularity of AI and the pressure to fit more transistors into smaller footprints with rising power demands. The fundamental representation of numbers in computing needs to evolve to match current trends. One key insight we gained from CoNGA (Conference on Next Generation Arithmetic) was that traditional number formats and precisions do not align well with the distributions of values used in modern AI applications. This mismatch results in performance penalties and wasted resources. The conference covered the current contenders for next-generation representations, such as Posits and Takums, and compared their performance and precision across various applications. Biotech companies are now experimenting with purpose-built FPGA cards that use these newer numerical representations for more robust processing of sequencing data. Attending this workshop gave me a valuable foundation in NGA and will help me better evaluate the efficacy of these emerging technologies in my domain.
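To give a feel for the trade-off the CoNGA speakers described, here is a toy quantizer for a hypothetical E4M3-style 8-bit minifloat (sign bit, four exponent bits, three mantissa bits). This is a simplified sketch, not the exact OCP FP8 specification, but it illustrates the point: values in the range where AI weights typically cluster round cleanly, while values outside the format’s narrow dynamic range saturate or lose accuracy, which is exactly the mismatch that tapered-precision formats like posits aim to soften.

```python
import math

def quantize_minifloat(x, exp_bits=4, man_bits=3, bias=7):
    """Round x to the nearest value representable in a toy E4M3-style
    minifloat (sign, 4 exponent bits, 3 mantissa bits). Hypothetical
    illustration only; real FP8 formats reserve some encodings."""
    if x == 0.0:
        return 0.0
    sign = -1.0 if x < 0 else 1.0
    x = abs(x)
    e = math.floor(math.log2(x))
    # Clamp the exponent to the format's representable range.
    e = max(min(e, (1 << exp_bits) - 1 - bias), 1 - bias)
    step = 2.0 ** (e - man_bits)      # spacing between representable values
    q = round(x / step) * step        # round to the nearest representable value
    # Largest representable magnitude; anything bigger saturates.
    max_val = (2 - 2.0 ** -man_bits) * 2.0 ** ((1 << exp_bits) - 1 - bias)
    return sign * min(q, max_val)

print(quantize_minifloat(1.0))     # 1.0    (exactly representable)
print(quantize_minifloat(0.3))     # 0.3125 (~4% relative error)
print(quantize_minifloat(1000.0))  # 480.0  (saturates at the format maximum)
```

Running this shows why a fixed-exponent format wastes its bit budget when the data it stores is concentrated near zero: the precision is spent covering a dynamic range the workload never uses.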
Another major topic in the preconference workshop was the application of computing in geospatial and weather sciences. Speakers from both the public and private sectors shared the current state of research and the growing role of AI and HPC in these fields. This content resonated strongly with one of our large-scale projects at the Clemson Institute for Human Genetics, which involves extracting explanatory insights (e.g., socioeconomics, exposure data) from participants’ geographic locations. My post-presentation conversations with several speakers, including Mark Munsell from the National Geospatial-Intelligence Agency and Tara Mott from ESRI, opened the door to potential collaborations that I’m excited to explore.
There were also interesting hardware trends at SC25. AMD (a TANGO@SC25 sponsor) had a remarkably strong presence this year. During one of the preconference workshop sessions, an AMD representative briefed us on their latest developments in enterprise-grade CPU, GPU, and networking hardware. My colleagues and I were impressed by the cohesive ecosystem AMD has built for integrated HPC and AI solutions, as well as by the open-source ROCm stack and the HIPIFY tools, which translate CUDA code to HIP so that it can run within the ROCm framework. AMD’s networking solutions also stood out. Representatives at the Cisco booth highlighted how future networking trends are leaning toward hardware firewall solutions embedded directly into switches powered by AMD Pensando chips. Our institute has begun incorporating AMD hardware into our cluster based on faculty requests, so having a clearer understanding of their offerings helps us guide faculty in making informed hardware decisions.
Liquid cooling was another noticeable trend this year. We saw “solutions” (pun intended) at all levels, from single-node blocks to full multi-rack implementations, and many system developers and integrators now offer seamless pathways to liquid-cooled infrastructure. Thanks to our recent elevation from “Center” to “Institute” at Clemson University, we are now on track for a building upgrade within the next three to six years. The new building will include a dedicated floor for a purpose-built datacenter designed from the ground up. The insights we gained at SC25 will be invaluable when my team and I are asked to weigh in on appropriate cooling solutions for that space.
This year’s collection of SC workshops was also incredibly useful. The NIH now mandates that Controlled-Access Data (e.g., dbGaP) must reside primarily on compute environments self-certified to NIST 800-171 r2 standards. We attended multiple workshops (HPCSYSPROS25 and S-HPC) and BoFs (Zero Trust in HPC) that addressed the challenges and knowledge gaps around implementing these standards in HPC environments. We also learned about several quality-of-life upgrades, such as transparent container-based software management and global filesystem management across multiple authentication domains. This latter topic is particularly relevant to us, as Clemson University has two HPC clusters, our institute’s and the one on main campus, with separate authentication systems and filesystems. We hope to address this fractured architecture in the future, and our conversation with an iRODS representative at the RENCI booth was especially helpful, as iRODS is well suited to solving this problem.
This was also the first year I had the opportunity to interact with Bryan Johnston, the lead on the HPC Ecosystems project, at an SC conference. Although we were aware of each other through a mutual colleague, this year we finally got to spend time together discussing education, training, and outreach efforts in HPC. Our favorite routine involved ordering Indian food from a nearby food truck, bringing it back to the hotel lobby, and having long conversations about these topics. My colleagues and I supported his efforts by attending his talk at the 12th SC Workshop on Best Practices for HPC Training and Education, as well as his BoF panel on sustainable HPC outreach through the Reinvent, Reuse, and Repurpose paradigm. Our HPC center received KNL2 nodes on long-term loan from the Texas Advanced Computing Center (TACC), so we deeply appreciate Bryan’s work on reusable enterprise-grade solutions and the associated training necessary for HPC professionals to make good use of such hardware. We also attended many of the HPC Illumination Pavilion talks delivered by members of the HPC Ecosystems community to support their efforts and learn about new advancements. Additionally, I was glad to reconnect with Aaron Jezghani from Georgia Tech, whom I met last year at SC24 in Atlanta (his home base). Given our similar paths into the HPC world, we connected on many topics related to scientific computing and academia, and it was great to catch up on his recent work in HPC education and the GT Rogues Gallery. Bryan also connected me with Jay Lofstead from Sandia National Laboratories. We had great fun chatting at dinner about the role of HPC in academia. Jay invited me to join his HPC Gelato Group on LinkedIn, which of course I did.
Due to funding cuts at Clemson University, the Research Computing and Data division (which manages the university’s main cluster) was limited in the number of personnel it could send to staff the South Carolina Research Computing Consortium’s (SCRCC) booth this year. Clemson is one of the four member organizations of the SCRCC, so my team and I volunteered to help cover booth duties and reduce the strain on RCD. We spent a good amount of time talking to prospective students, faculty, and vendors interested in the consortium’s efforts in education and research. We assisted with booth presentations and poster sessions and contributed our own Precision Medicine Initiative poster to the rotating display. Staffing the booth helped us make connections with HPC professionals and research groups from external organizations, other SCRCC members, and RCD, strengthening relationships we hope to build on.
My final stop at SC25 was the TACC booth, where I had the chance to personally thank Dan Stanzione and Jennifer Schopf for the long-term loan of their KNL2 hardware to the Clemson Institute for Human Genetics. Bryan arranged for us to meet as part of the HPC Ecosystems initiative’s successful expansion. We discussed our planned use cases for these nodes, which include testing and development environments, workshops, educational outreach, and overflow compute capacity, as well as future TACC events my team might be invited to attend. We also invited the TACC team to visit us in Greenwood, South Carolina, to tour our facility and explore opportunities for future collaboration.
This year’s SC experience was just as fruitful, if not more so, than last year’s in terms of networking and knowledge gained. The timing could not have been better, given our elevation to an institute and the growing HPC demands of our organization. I know I am not alone in saying this, but I am incredibly thankful for the immense support provided by STEM-Trek, and especially Elizabeth Leake. Without that support, my team and I would not have been able to attend SC25. Thank you, and here’s hoping we can do it again next year!