The gleam of graphics hardware is often the first thing people point to when discussing the growth of computing, yet the unseen machinery that enables its creation is just as crucial. Every GPU, no matter how powerful, begins as lines of code that must be compiled, tested, and turned into functioning systems.
This is what makes the internal workflows at NVIDIA, the world's largest GPU provider, so demanding: millions of lines of code spanning multiple hardware platforms, running through systems that must deliver both speed and reliability.
Addressing that need is Adarsh Kumar Sadhukha, an infrastructure architect at NVIDIA. Over the past six years, he has worked to turn outdated legacy systems into a broader architecture capable of meeting the increasing technical needs of millions of applications. From accelerating build processes to designing new domain-specific languages for configuration, his contributions have redefined what efficiency looks like inside NVIDIA.
From Circuits to Systems: A Path into Infrastructure
Adarsh’s journey began far from build pipelines. As an undergraduate in electrical and electronics engineering at the Birla Institute of Technology and Science, Pilani, he was fascinated by circuits and hardware, yet increasingly drawn to the precision of software. He took every opportunity to supplement his coursework with computer science, gaining fluency in both disciplines.
He later pursued a master’s degree at Georgia Tech, which eventually led to his breakthrough role when he was offered an internship at NVIDIA. There, he got his first taste of how critical internal tooling can be. “Behind every breakthrough product (whether it’s a GPU, a new software, or an advanced data pipeline) there’s an invisible layer of tooling making it all possible,” he recalled.
That realization set his trajectory: rather than consumer-facing features, his calling was the scaffolding that empowered other engineers. As he puts it, “My dual background in electronics and computer science made me uniquely positioned to apply hardware engineering and software architecture in equal fashion, a skillset that heavily drives my work today.”
Establishing Modern C++ at NVIDIA
When he was offered a full-time position at NVIDIA as a software tools infrastructure architect, he was tasked with helping build the protocols needed to scale the company’s internal systems.
One of his earliest challenges in this vein was a build system that was slowly but surely dragging down operations. The legacy system was slow, memory-intensive, and fragile at scale. Adarsh re-architected it from the ground up in modern C++, rewriting algorithms and redesigning data structures. The payoff was sharp: build times dropped to a quarter of what they had been, and memory usage fell tenfold.
But the change also meant helping engineers adjust to a new language. C++ had long been viewed warily inside the company, where Perl was the de facto company-wide standard. By proving that modern C++ could rival high-level scripting languages in developer velocity while far outstripping them in performance, Adarsh aimed to change minds. He became a mentor to interns and early-career engineers, guiding them through systems-level programming.
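The article doesn't describe the internal design, but a minimal sketch can illustrate the kind of modern C++ technique involved: storing a build-dependency graph in contiguous arrays with interned target names instead of pointer-heavy, string-duplicating structures, which typically cuts both memory use and traversal time. The types and names below are invented for illustration, not NVIDIA's code.

```cpp
// Hypothetical sketch: a compact build-dependency graph using interned names
// and contiguous storage (flat vectors) instead of per-node allocations.
#include <cstdint>
#include <iostream>
#include <string>
#include <string_view>
#include <unordered_map>
#include <utility>
#include <vector>

class DependencyGraph {
public:
    // Intern a target name: each unique string is stored exactly once.
    uint32_t add_target(std::string_view name) {
        auto it = index_.find(std::string(name));
        if (it != index_.end()) return it->second;
        uint32_t id = static_cast<uint32_t>(names_.size());
        names_.emplace_back(name);
        deps_.emplace_back();            // adjacency list for the new target
        index_.emplace(names_.back(), id);
        return id;
    }

    void add_dependency(uint32_t target, uint32_t dependency) {
        deps_[target].push_back(dependency);
    }

    // Iterative post-order walk: emits dependencies before their dependents.
    std::vector<uint32_t> build_order(uint32_t root) const {
        std::vector<uint32_t> order;
        std::vector<char> visited(names_.size(), 0);
        std::vector<std::pair<uint32_t, size_t>> stack{{root, 0}};
        visited[root] = 1;
        while (!stack.empty()) {
            auto& [node, next] = stack.back();
            if (next < deps_[node].size()) {
                uint32_t child = deps_[node][next++];
                if (!visited[child]) { visited[child] = 1; stack.push_back({child, 0}); }
            } else {
                order.push_back(node);
                stack.pop_back();
            }
        }
        return order;
    }

    std::string_view name(uint32_t id) const { return names_[id]; }

private:
    std::vector<std::string> names_;                    // id -> name, stored once
    std::vector<std::vector<uint32_t>> deps_;           // id -> dependency ids
    std::unordered_map<std::string, uint32_t> index_;   // name -> id
};

int main() {
    DependencyGraph g;
    uint32_t app = g.add_target("app");
    uint32_t lib = g.add_target("libgpu");
    uint32_t hdr = g.add_target("kernels.h");
    g.add_dependency(app, lib);
    g.add_dependency(lib, hdr);
    for (uint32_t id : g.build_order(app)) std::cout << g.name(id) << '\n';
}
```

The design choice being illustrated is the general one the rework implies: fewer allocations, better cache locality, and cheap integer handles in place of repeated string comparisons.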
His leadership style, rooted in ownership rather than micromanagement, cultivated confidence across teams. “This project proved that with the right engineering rigor, modern C++ could deliver unmatched performance and maintainability,” he explains.
Designing a New DSL for Configuration
Beyond the performance wins, Adarsh also needed to address an equally pressing issue: the sprawl of configuration files. These sprawling instructions governed how hardware codebases were compiled, often stretching into thousands of cryptic lines. For new engineers, they posed a formidable technical barrier; for veterans, they were a constant source of fragility.
Adarsh’s response was to design a domain-specific language that streamlined the process while keeping it just as efficient and accurate. By collapsing repetitive patterns into a concise syntax, the new language shrank configuration files by 30% while expanding their functionality across internal operations. What once required painstaking parsing could now be expressed declaratively.
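NVIDIA's DSL itself isn't public, but the underlying idea (one declarative rule expanding into the many concrete configurations engineers once wrote by hand) can be pictured with a small, hypothetical sketch. The rule format, fields, and flag names below are invented for illustration only.

```cpp
// Hypothetical illustration: one declarative rule expands into many concrete
// build configurations, replacing repetitive, hand-maintained entries.
#include <iostream>
#include <string>
#include <vector>

struct Rule {
    std::string target;                 // e.g. "gpu_sim"
    std::vector<std::string> archs;     // variants the rule applies to
    std::vector<std::string> flags;     // shared compile flags
};

struct BuildConfig {
    std::string name;
    std::vector<std::string> flags;
};

// Expand a concise rule into the explicit per-variant configurations that
// older configuration files spelled out one entry at a time.
std::vector<BuildConfig> expand(const Rule& rule) {
    std::vector<BuildConfig> configs;
    for (const auto& arch : rule.archs) {
        BuildConfig cfg{rule.target + "_" + arch, rule.flags};
        cfg.flags.push_back("-DARCH_" + arch);
        configs.push_back(std::move(cfg));
    }
    return configs;
}

int main() {
    // One declarative statement of intent...
    Rule rule{"gpu_sim", {"sm80", "sm90"}, {"-O2", "-Wall"}};
    // ...expands into the repetitive entries that used to be written by hand.
    for (const auto& cfg : expand(rule)) {
        std::cout << cfg.name << ":";
        for (const auto& f : cfg.flags) std::cout << ' ' << f;
        std::cout << '\n';
    }
}
```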
The new DSL showed how NVIDIA could improve its build process and turn its internal infrastructure into a shared asset rather than a burden.
Moving Toward AI-Augmented Tooling
With faster builds and cleaner configurations in place, Adarsh’s sights are set on the next frontier: intelligence. Backed by certifications in machine learning and AI, his vision is for developer tools that not only execute the actions engineers ask of them, but that, over time, learn from them.
In practice, that means build tools capable of predicting technical bottlenecks before they grow into serious problems, pipelines that auto-tune based on prior workloads, and diagnostic systems that surface insights without human prompting. Such systems would turn build infrastructure from reactive to proactive, inferring more accurately the more they are used. For organizations managing terabytes of code and thousands of engineers, that would mean less downtime, fewer surprises, and a smarter foundation that scales with complexity.
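None of this tooling is described in detail, but one way to picture "learning from prior workloads" is a build monitor that keeps a running estimate of each target's compile time and flags likely bottlenecks before the next build. The sketch below is a hypothetical illustration using a simple exponential moving average, not an actual NVIDIA system.

```cpp
// Hypothetical sketch: predict likely build bottlenecks from past timings
// using an exponential moving average per target.
#include <algorithm>
#include <iostream>
#include <string>
#include <unordered_map>
#include <utility>
#include <vector>

class BuildPredictor {
public:
    explicit BuildPredictor(double alpha = 0.3) : alpha_(alpha) {}

    // Record an observed compile duration (in seconds) for a target.
    void record(const std::string& target, double seconds) {
        auto [it, inserted] = estimates_.emplace(target, seconds);
        if (!inserted)
            it->second = alpha_ * seconds + (1.0 - alpha_) * it->second;
    }

    // Return the top-k targets expected to dominate the next build.
    std::vector<std::pair<std::string, double>> likely_bottlenecks(size_t k) const {
        std::vector<std::pair<std::string, double>> ranked(estimates_.begin(), estimates_.end());
        std::sort(ranked.begin(), ranked.end(),
                  [](const auto& a, const auto& b) { return a.second > b.second; });
        if (ranked.size() > k) ranked.resize(k);
        return ranked;
    }

private:
    double alpha_;                                       // weight given to the newest sample
    std::unordered_map<std::string, double> estimates_;  // target -> smoothed seconds
};

int main() {
    BuildPredictor predictor;
    // Feed timings from a few past builds (invented numbers).
    predictor.record("gpu_sim_sm90", 420.0);
    predictor.record("driver_core", 95.0);
    predictor.record("gpu_sim_sm90", 460.0);
    predictor.record("unit_tests", 30.0);

    for (const auto& [target, secs] : predictor.likely_bottlenecks(2))
        std::cout << target << " ~" << secs << "s expected\n";
}
```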
“Tomorrow’s tools will learn from every compilation and make the next one even better,” Adarsh explains. For him, AI is not a layer to bolt onto infrastructure, but the principle that will define its evolution.
As GPUs become essential across ever more fields, the invisible engines that build them must keep pace. Thanks to Adarsh Kumar Sadhukha’s work, NVIDIA’s infrastructure is faster, leaner, and ready for a future where tools learn as much as the engineers who use them.