Undergraduate Research Accelerates Machine Learning Efficiency


November 7, 2024

Seth Kay

Seth David Kay, a junior double-majoring in Computer Science and Economics, presented his research at the IEEE High-Performance Extreme Computing (HPEC) Conference in September 2024. In his presentation, he shared his findings on accelerating Sparse Matrix-Matrix Multiplication (SpMM) for machine learning (ML) applications. The advance has the potential to improve computational efficiency in complex data-processing tasks, such as training the ML models at the heart of many artificial intelligence applications used in daily life.

A leading global forum on advanced computing, IEEE HPEC covers computing hardware, software, systems, and applications where performance matters. SpMM is a well-studied operation used widely in emerging technologies to process vast, largely sparse datasets efficiently. Kay’s research, conducted under the mentorship of Professor of Electrical and Computer Engineering Howie Huang, demonstrates how combining just-in-time (JIT) programming with graphics processing unit (GPU) threading can significantly accelerate this fundamental operation, improving both runtime and memory consumption.

JIT programming leverages information available at runtime to generate binary code optimized for the specific matrix input and the underlying computer architecture. GPU threading complements this by exploiting the GPU’s ability to execute many threads in parallel. Together, the two techniques significantly reduce processing time and improve the efficiency of training ML models.
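To give a concrete sense of how these two ideas fit together, here is a minimal sketch, not the implementation from Kay’s paper: it uses Numba’s CUDA support to JIT-compile a sparse-times-dense SpMM kernel at runtime and launch it with one GPU thread per output element. The kernel name, thread layout, and CSR input format are illustrative assumptions.

```python
# A minimal, illustrative SpMM sketch (not the paper's implementation):
# Numba JIT-compiles the kernel to GPU machine code at runtime, and each
# GPU thread computes one element of the output matrix.
import numpy as np
from numba import cuda
from scipy import sparse

@cuda.jit  # compiled just-in-time the first time the kernel is launched
def spmm_csr_kernel(indptr, indices, data, dense, out):
    row, col = cuda.grid(2)                      # one thread per output entry
    if row < out.shape[0] and col < out.shape[1]:
        acc = 0.0
        for k in range(indptr[row], indptr[row + 1]):   # nonzeros of this row
            acc += data[k] * dense[indices[k], col]
        out[row, col] = acc

# Multiply a random sparse matrix (CSR format) by a dense matrix on the GPU.
m, n, p = 1024, 1024, 256
A = sparse.random(m, n, density=0.01, format="csr", dtype=np.float32)
B = np.random.rand(n, p).astype(np.float32)
C = cuda.device_array((m, p), dtype=np.float32)

threads = (16, 16)
blocks = ((m + threads[0] - 1) // threads[0],
          (p + threads[1] - 1) // threads[1])
spmm_csr_kernel[blocks, threads](
    cuda.to_device(A.indptr), cuda.to_device(A.indices),
    cuda.to_device(A.data), cuda.to_device(B), C)

# Sanity check against the CPU result.
assert np.allclose(C.copy_to_host(), A @ B, atol=1e-3)
```

The runtime compilation step in a sketch like this is also what makes it possible, in principle, to specialize the generated code to the particular matrix and hardware at hand, which is the kind of runtime information the article describes.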

“We were able to harness that parallelism of GPUs, and through some math, take Sparse Matrix Matrix Multiplication and feed it through the GPU, which can then parallelize it. Then, we’re able to run much of the complicated operations that were being done synchronously at the same time on a GPU,” explained Kay.

As an undergraduate researcher, Kay had the unusual chance to conduct work typically reserved for Ph.D. students. The opportunity arose when Professor Huang announced an opening for a research assistantship in his GraphLab, and Kay’s deep interest in high-performance computing and ML algorithms pushed him to reach out. After discussions with Huang about potential projects, this study stood out as the perfect fit. The research was partially supported by an NSF grant and GW Engineering’s Summer Undergraduate Program in Engineering Research.

“This is kind of a novelty among undergraduate students. I am very fortunate to have conducted research independently,” Kay stated.

Now taking a semester off to work at Philips Healthcare, Kay reflects on how the experience has shaped his understanding of high-performance computing. Presenting at HPEC, his first international, computer-science-focused conference, exposed him to a diverse network of researchers whose thought-provoking questions opened his eyes to new avenues in the field that he is eager to explore.

“There are so many areas of emerging tech, especially in ML acceleration right now. Being able to read research across these diverse areas, even just in ML research, and then apply that to a well-researched topic to make further advancements was truly eye-opening,” said Kay.