SC25 Takeaways: Posits, Performance, and Building More Portable Scientific Software

By Brian Kyanjo (Georgia Tech Postdoc)

The Supercomputing Conference (SC) is unlike any other conference I’ve attended in the high-performance computing world. Thinking back to SC24, my first SC, I gained so many practical insights and workflow solutions, and I’m grateful to STEM Trek for making that experience possible. At SC25, despite the TANGO jet lag, things went even further. Even on the first day, I found myself energized by strong sessions on geospatial innovation and the newest architectural advances aimed at tackling the community’s most computationally expensive problems. Many of these challenges mirror what we face every day as scientific software developers working on real-world applications: memory pressure, I/O bottlenecks, storage constraints, limited access to HPC resources, and compatibility issues across tools and systems. What stood out most was that these weren’t just abstract problems; there were concrete ideas and approaches I can apply directly, and I also made valuable connections with people I can collaborate with when those challenges show up in practice.

The second day was packed with especially exciting discussions on posits and floating-point arithmetic. Anyone who has built or maintained large computational models with heavy memory and storage footprints knows how quickly things become difficult when accuracy, efficiency, and optimization all matter at once—especially for real-world problems where results are time-critical and the stakes can be high. Choosing what to trade off is rarely straightforward, and we often try to maximize everything at the same time (even though that’s nearly impossible). Arithmetic issues become even more painful when dividing very large numbers by very small ones (or the reverse), which can break codes and create debugging nightmares—particularly in parallel and accelerated settings with MPI ranks and GPU threads. The talks on posits and recent advances offered promising ways to reduce some of these pain points, and I’m eager to explore them in my GeoFlood hybrid GPU/CPU overland flooding model, where accuracy and efficiency often pull in different directions.
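
To make that hazard concrete, here is a small toy Python/NumPy snippet (not taken from GeoFlood, just an illustration of the general problem) showing how extreme ratios behave in single precision, which is exactly the regime where alternative formats like posits are pitched as degrading more gracefully:

import numpy as np

# Toy example (not from GeoFlood): dividing a very large float32 by a very
# small one overflows to infinity, while the reverse underflows to zero.
# NumPy will also emit an overflow warning for the first division.
big = np.float32(1.0e30)
small = np.float32(1.0e-30)

print(big / small)   # inf -- the true ratio (1e60) exceeds the float32 range
print(small / big)   # 0.0 -- the true ratio (1e-60) is flushed to zero

# The same ratios survive in double precision, which is one reason
# precision and format choices ripple through accuracy, memory, and speed.
print(np.float64(1.0e30) / np.float64(1.0e-30))   # 1e+60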

A personal highlight was meeting John Gustafson, someone I’ve long associated with the foundations of weak scaling and parallel performance thinking through Gustafson’s Law, and who also created the posit number format. After spending so much time working in parallel computing, it felt like meeting an academic celebrity whose ideas I’ve been applying for years. That moment alone made the conference feel special.

The tutorials added a whole new layer of value. For me, Spack and “Python for HPC” stood out the most, especially because I’m developing ICESEE, a Python-based data assimilation library designed to couple with ice-sheet models for predicting sea-level rise, melt rates, and ice dynamics. I walked away with clearer ideas for designing ICESEE to be more model-agnostic, more efficient, and better aligned with mature CPython tools that are already optimized for HPC use cases. Because my work frequently involves coupling legacy codes written in Fortran, MATLAB, C/C++, and Julia, integration is often the hardest part—not the science. Many of these models are MPI-parallel in their native languages, while ICESEE is MPI-parallel through mpi4py. Making these different MPI ecosystems work together can become a real headache, and communication often falls back to I/O (like HDF5) when memory-based approaches aren’t feasible. Then the dependency maze shows up: MPICH vs OpenMPI vs MVAPICH, plus HDF5 builds, plus h5py compiled with MPI support, while also dealing with whatever stack a particular system has deployed. In short: compatibility becomes the bottleneck.
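
As a minimal sketch of that file-based coupling pattern (assuming h5py was built against an MPI-enabled HDF5; the dataset name and file path are made up for illustration), each mpi4py rank can write its slice of model state to a shared HDF5 file that a Fortran or C model then reads back with its own parallel HDF5 calls:

from mpi4py import MPI
import numpy as np
import h5py

comm = MPI.COMM_WORLD
rank = comm.Get_rank()
size = comm.Get_size()

# Hypothetical exchange file: the Python side writes its analysis state here
# and the coupled ice-sheet model (Fortran/C/C++) reads it back via parallel HDF5.
n_local = 1000
local_state = np.random.default_rng(rank).random(n_local)

with h5py.File("state_exchange.h5", "w", driver="mpio", comm=comm) as f:
    dset = f.create_dataset("ice_thickness", shape=(size * n_local,), dtype="f8")
    # Each rank writes its own contiguous slab, so no cross-language MPI calls
    # are needed -- the two MPI ecosystems only meet at the file.
    dset[rank * n_local:(rank + 1) * n_local] = local_state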

Two themes from the tutorials felt like real answers to that problem: containerization and package management. I’ve been using both already, but SC25 helped me understand the tradeoffs more clearly and gave me practical ways to improve how I use them. The Spack tutorial, in particular, was hands-on in a way that directly addressed the challenges I face. I had been using Spack mainly to manage MPI, HDF5, h5py, and my Python environment. Now, I see a path to letting Spack manage everything: dependencies, ICESEE itself, and even the coupled ice-sheet models. That would be a huge win for users: fewer installation issues, fewer “sudo required” roadblocks, and a much smoother onboarding experience. The idea that users could eventually run something as simple as spack install icesee and get the full stack is exactly the kind of usability improvement our community needs. On the containerization side, I had workable solutions, but mostly for single-node Apptainer images for MPI-enabled applications. More recently, I came across CIQ’s approach that uses OpenMPI “wireup” under the hood and relies on Spack to build a multi-node container image, though the examples I saw were Rocky-based while our HPC environment is Red Hat. Because of SC and STEM Trek, I was able to meet people with deep experience in the definition-file “recipe” side of this workflow, and we sketched a prototype that should wire up cleanly with our system. My next step is to test ICESEE and its coupled models using that approach, and I’m genuinely excited about what this could unlock.
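
For context, a spack install icesee workflow would hinge on a recipe roughly like the sketch below. Everything here is an assumption on my part (package name, placeholder URL and checksum, dependency list), not an existing recipe, but it shows how Spack could pin one consistent MPI/HDF5/h5py stack for ICESEE:

# Hypothetical package.py for an "icesee" Spack package; the URL, checksum,
# and dependency list are illustrative placeholders.
from spack.package import *


class Icesee(PythonPackage):
    """Python-based data assimilation library that couples with ice-sheet models."""

    homepage = "https://github.com/example/icesee"                   # placeholder
    url = "https://github.com/example/icesee/archive/v0.1.0.tar.gz"  # placeholder

    version("0.1.0", sha256="0000000000000000000000000000000000000000000000000000000000000000")  # placeholder checksum

    depends_on("python@3.9:", type=("build", "run"))
    depends_on("py-numpy", type=("build", "run"))
    depends_on("py-mpi4py", type=("build", "run"))
    depends_on("py-h5py+mpi", type=("build", "run"))
    depends_on("hdf5+mpi")
    depends_on("mpi")  # satisfied by mpich, openmpi, or mvapich2 at concretization

In a container workflow, the same recipe could be built inside the Apptainer definition file, so the multi-node image and bare-metal installs stay in sync.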

SC25 didn’t just teach me new tools; it connected me with people, patterns, and practical solutions that will shape my work in the months ahead. I’m especially grateful to Elizabeth Leake (STEM Trek) for making this possible. These experiences consistently expand my network and help resolve problems that would otherwise take ages to untangle. Being in a space where people bring solutions to the exact challenges you’re facing—and where you can contribute back—is rare, and I’m selfishly happy about it. I’m looking forward to building on what I learned at SC25 and carrying that momentum into what comes next.
