When I look back at the last decade of ML and data work, there's a pattern that always repeats: job titles change slower than the work itself. Teams keep calling themselves "data science" long after they've stopped running A/B tests, and new roles quietly emerge before anyone knows what to name them.

But 2026 will feel different. The shift isn't subtle anymore. The work itself is evolving so quickly that the titles are being forced to catch up. The question I keep hearing from younger data scientists and engineers is simple: what should I prepare for?

The honest answer is that the industry is splintering. The traditional "end-to-end" ML engineer is becoming less common, replaced by deeper specialists who own smaller, more specific pieces of the stack. And this isn't just trend-spotting for the sake of it. These shifts tell us something real about where companies are investing, where the problems actually are, and what kind of engineer product teams can no longer afford to go without.

So in this piece, I want to walk through a few roles that I think will matter a lot more in 2026 than they do today: roles that sit at the intersection of product, infrastructure, and research. But before we dig into the details, here's the message I want you to take away: the future of these roles rewards people who understand how AI systems behave in the messy real world. Not in demos, not on toy datasets, but in production environments where latency, cost, safety, and continuous drift factor into everything.

1. Big-tech roles: the stack is more stratified than you think

In large tech companies, "ML person" is no longer a single archetype. If you look across Amazon, Google, Microsoft, Atlassian, Databricks and friends, you see a pattern: similar problems, but increasingly specialized titles. Let me group them by how you spend most of your day.
a) Applied Scientist: research-shaped work with production constraints

At big tech companies like Amazon, Applied Scientists sit close to the product but work like researchers: they design and test new models (recsys, ranking, ads, forecasting, gen AI, etc.), run experiments, and ship prototypes that engineers then harden. Job descriptions emphasize hands-on modeling, experimentation, and publication-friendly work, often on large-scale data and custom hardware (Inferentia, Trainium, etc.).

Interviews typically involve:

- LeetCode-style coding and data structures
- Statistical ML, optimization, and modeling questions (with some focus on how well you understand research papers)
- A product- or metrics-aware design question ("How would you design/improve a product recommender?")

If you enjoy reading papers and then rewriting half the method section in PyTorch, this is the path for you.

b) Machine Learning Engineer: production first, research-aware

At Google, Microsoft, and even many fintech startups, "ML Engineer" is explicitly defined as: design, build, and productionize ML systems end to end, including data pipelines, training, serving, and monitoring. You still do modeling, but your success is measured in latency, reliability, and cost, not just ROC-AUC.
Interviews lean into:

- Strong coding (usually in Python)
- Practical ML (feature engineering, model iteration, debugging)
- System design for training/serving pipelines, especially for cloud products or internal platforms

In India, a lot of these roles concentrate around cloud groups (Azure, GCP), Copilot-like teams, or ads/revenue products, where impact is tightly measured against ARR or usage.

c) ML Infrastructure / Platform Engineer: the people who make inference faster

ML infra engineers design and maintain the platforms that support the entire lifecycle: data ingestion, training clusters, model registry, deployment, observability. Job descriptions read like "distributed systems engineer who happens to know ML": Kubernetes, GPUs, feature stores, training orchestration, CI/CD for models.

Interviews here feel closer to backend/platform (traditional SDE) roles:

- Systems design (multi-tenant training clusters, feature stores, online/offline sync)
- Strong software engineering and distributed systems
- Enough ML literacy to know why a researcher is complaining about data skew

If you like building the rails instead of the train, this role is currently in high demand.
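To make "success is measured in latency" concrete, here's a minimal sketch of the kind of per-request latency tracking that serving and observability platforms automate. Everything here (the `track_latency` decorator, the toy `predict` function, the in-memory list) is illustrative; a real system would export these measurements to a metrics backend rather than keep them in a Python list.

```python
import time
from functools import wraps

# In-memory store of per-call latencies; a real platform would push
# these to a metrics backend (Prometheus, CloudWatch, etc.).
LATENCIES: list[float] = []

def track_latency(fn):
    # Wrap a function and record its wall-clock duration on every call.
    @wraps(fn)
    def wrapper(*args, **kwargs):
        start = time.perf_counter()
        try:
            return fn(*args, **kwargs)
        finally:
            LATENCIES.append(time.perf_counter() - start)
    return wrapper

@track_latency
def predict(features: list[float]) -> float:
    # Stand-in for a real model's forward pass.
    return sum(features) / len(features)

# Simulate traffic, then compute a tail-latency summary.
for _ in range(100):
    predict([0.1, 0.4, 0.5])

p99 = sorted(LATENCIES)[int(0.99 * len(LATENCIES)) - 1]
```

In an interview, the interesting follow-ups are exactly the production concerns the decorator glosses over: sampling under high QPS, histogram buckets versus raw samples, and alerting on the p99 rather than the mean.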
d) Data Scientist (product / analytics): still very real, just more focused

At FAANGMULA companies, data science roles skew toward experimentation, metrics, and stakeholder-heavy work: think causal inference, A/B tests, dashboards, and ML-light solutions for specific problems like fraud, trust & safety, or growth.

Coming to interviews:

- SQL-heavy analytics and product case studies
- Statistics, experiment design, metric design
- Some Python and basic modeling, depending on the team

e) Forward-Deployed / AI Engineer: the consulting-shaped IC

Forward-deployed engineers are having a moment. OpenAI, Salesforce, and a wave of AI companies now hire engineers who embed directly with customers, ship bespoke AI workflows, and then feed those learnings back into the core product.

Day-to-day, this looks like:

- Understanding messy customer systems
- Wiring APIs, data pipelines, and agents into real workflows
- Translating vague "we want AI" into something that respects SLAs and governance

Interviews mix:

- Full-stack or backend coding
- GenAI systems/API design (mostly agent- and RAG-focused)
- Product and communication skills (because you're effectively the "AI person" in someone else's org)

If you enjoy talking to humans as much as training models, this is a strong bet for 2026.
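Since RAG-focused design questions come up so often for these roles, here's a minimal sketch of the retrieval step. To keep it self-contained I use a toy bag-of-words cosine similarity in place of a real embedding model, and the documents and helper names (`embed`, `retrieve`) are made up for illustration.

```python
import math
from collections import Counter

def embed(text: str) -> Counter:
    # Toy stand-in for a real embedding model: bag-of-words counts.
    return Counter(text.lower().split())

def cosine(a: Counter, b: Counter) -> float:
    # Cosine similarity between two sparse count vectors.
    dot = sum(a[t] * b[t] for t in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def retrieve(query: str, docs: list[str], k: int = 2) -> list[str]:
    # Rank documents by similarity to the query; return the top k.
    q = embed(query)
    return sorted(docs, key=lambda d: cosine(q, embed(d)), reverse=True)[:k]

docs = [
    "Refund requests are processed within 5 business days.",
    "Our API rate limit is 100 requests per minute.",
    "Password resets require email verification.",
]

# Retrieved context gets stuffed into the prompt sent to the LLM.
context = retrieve("how fast are refunds processed", docs, k=1)
prompt = (
    f"Answer using this context:\n{context[0]}\n\n"
    "Question: how fast are refunds processed?"
)
```

The design conversation in an interview is usually about what this sketch omits: chunking, embedding model choice, vector indexes, reranking, and how to evaluate whether retrieval is actually helping the answer.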
2. Frontier-model and AI-native startup roles: narrower, deeper, weirder

On the newer startup side, especially in the US and Europe and increasingly in India, the titles sound similar but the work is more specific, and so is the interview focus.

a) LLM / Voice Research Engineer: product-driven research

At voice-focused companies like ElevenLabs and speech platforms like AssemblyAI, research engineers bridge deep modeling work with shipped APIs. They build and improve ASR, TTS, and embedding models, while also owning data pipelines and evaluation setups. DeepMind's Research Engineer role is a good archetype here: design, implement, and evaluate models and agents, often co-authoring papers but also building infrastructure for distributed training and evaluations.

b) LLM Engineer / GenAI Engineer: shipping features, not just models

Many startups now hire "LLM Engineers" whose job is to build LLM-powered products: agents, retrieval pipelines, custom tools, RAG, evaluation, and internal platforms. Fine-tuning models on GPUs with CUDA is also a fast-growing skill, in demand at frontier AI startups and some big tech companies.

c) LLM Evaluation / Safety Engineer: the new QA, but for models

A newer niche is LLM evaluation and safety roles focused on building automated and human-in-the-loop eval systems, red-teaming, and continuous benchmarking. Companies like Anthropic, and smaller safety-focused startups, explicitly hire engineers to design metrics, harnesses, and infra for model evaluation.

d) Applied AI / Agent Engineer at training-heavy startups

Companies like Mercor sit in a different corner: they orchestrate large pools of humans and models to train or supervise AI systems at scale. Their applied AI roles blend research, RLHF, product, and operations: refining human feedback datasets, building internal tools, and turning messy real-world tasks into model-ready signals.
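To make the "eval harness" idea from the evaluation and safety roles concrete, here's a minimal sketch: run a model over a fixed test set, score each answer with a rule-based grader, and report an aggregate metric. The model stub, test cases, and grading rule are all invented for illustration; real harnesses call an actual model API and often add model-graded rubrics and human review on top.

```python
def fake_model(prompt: str) -> str:
    # Stand-in for a real model call (e.g. an API client).
    canned = {
        "capital of France?": "Paris",
        "2 + 2?": "4",
        "largest ocean?": "Atlantic",  # deliberately wrong
    }
    return canned.get(prompt, "I don't know")

def grade(answer: str, expected: str) -> bool:
    # Rule-based grader: does the expected answer appear in the output?
    return expected.lower() in answer.lower()

# A fixed benchmark of (prompt, expected answer) pairs.
cases = [
    ("capital of France?", "Paris"),
    ("2 + 2?", "4"),
    ("largest ocean?", "Pacific"),
]

results = [grade(fake_model(q), exp) for q, exp in cases]
accuracy = sum(results) / len(results)
print(f"accuracy: {accuracy:.0%}")  # 2 of 3 cases pass
```

The engineering work in these roles is everything around this loop: versioning the benchmark, catching regressions across model releases, and deciding when substring matching is too crude and a stronger grader is worth its cost.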
There are many roles I've still missed here, but that's okay. If you zoom out, the pattern is simple: in big tech, roles are converging toward stable specialties (applied science, infra, product analytics, forward deployment). In AI startups, roles are converging toward frontier pressure points (LLMs, voice, eval, safety, agents).

The interviews for these roles? Completely different. Less LeetCode, more "here's a messy real-world problem with our customer; walk me through your thinking." They want to see if you can operate in ambiguity.