Description
WHAT YOU DO AT AMD CHANGES EVERYTHING
At AMD, our mission is to build great products that accelerate next-generation computing experiences—from AI and data centers to PCs, gaming, and embedded systems. Grounded in a culture of innovation and collaboration, we believe real progress comes from bold ideas, human ingenuity, and a shared passion to create something extraordinary. When you join AMD, you'll discover the real differentiator is our culture. We push the limits of innovation to solve the world's most important challenges—striving for execution excellence while being direct, humble, collaborative, and inclusive of diverse perspectives. Join us as we shape the future of AI and beyond. Together, we advance your career.
THE ROLE:
AMD is seeking a TPM Director to lead Inference programs for the AI Group BRAIN organization. You will be at the forefront of innovation and experimentation, shaping a vision for inference platform impact and ecosystem adoption, and engaging with internal and external stakeholders to navigate programs from inception to delivery. You will drive end-to-end execution of complex, cross-functional inference initiatives while owning multi-quarter planning, roadmap alignment, and the operating cadence that turns strategy into predictable delivery across the Inference engineering workstreams.
In this role, you will be a key partner to engineering, product, and business leadership, ensuring that near-term execution strength is matched by clear long-term planning, rigorous prioritization, and proactive management of risks, dependencies, and decision points across a rapidly evolving AI ecosystem.
You will help scale execution across initiatives spanning inference software, runtime enablement, model optimization, systems integration, performance, benchmark readiness, deployment workflows, ecosystem readiness, and product enablement deliverables—including engagement with public inference projects and ecosystems (e.g., SGLang, vLLM) where relevant, as well as benchmark platforms (e.g., MLPerf and InferenceX) where we drive submissions and readiness. This role requires strong technical judgment, executive communication, and the ability to align multiple organizations around shared goals and measurable outcomes.
THE PERSON:
The ideal candidate is a highly effective program leader with strong technical depth in AI/ML systems and large-scale inference, comfortable operating in ambiguity and translating strategy into executable roadmaps across a broad set of teams and priorities. You bring a clear vision for AI, backed by broad knowledge of AI technologies, algorithms, and tools, and you apply it to drive business results.
You communicate crisply at all levels, influence without direct authority, and build trust with senior engineering leaders by bringing structure, clarity, and rigor to complex technical programs. You proactively surface risks, tradeoffs, and decision points before they become blockers, and you create mechanisms that improve organizational visibility and delivery predictability.
You thrive in a fast-moving environment, bring strong operational discipline, and can establish durable processes for portfolio planning, executive reviews, milestone tracking, and accountability without creating unnecessary overhead for engineering teams.
KEY RESPONSIBILITIES:
- Own the Inference portfolio planning process by translating strategy into a multi-quarter roadmap, quarterly execution plans, and measurable business and engineering outcomes.
- Establish and run an execution operating model across the Inference organization, including planning reviews, OKRs, dashboards, decision logs, milestone tracking, and risk management mechanisms that drive rigor, transparency, and predictable delivery.
- Drive end-to-end delivery of large-scale inference capabilities across cross-functional engineering teams, including software, systems, architecture, performance, model enablement, runtime, and platform integration; manage scope, milestones, dependencies, critical path, and release readiness.
- Partner with engineering and product leadership to align priorities, sequencing, and resource planning across a complex portfolio of inference initiatives spanning platform readiness, model support, serving performance, benchmark readiness, and ecosystem integration.
- Apply technical judgment to identify and manage architecture-level tradeoffs, technical dependencies, and execution risks across inference workloads, runtimes, software stacks, and deployment environments.
- Analyze and quantify project risks; develop and maintain risk management plans; and proactively mitigate issues by driving clear owners, timelines, and path-to-green actions.
- Develop, maintain, and manage program requirements, execution plans, timelines, issues, risks, and challenges; ensure milestones, dependencies, and resources are tracked and escalated appropriately.
- Lead executive-level program reviews by clearly communicating status, key decisions, risks, dependencies, and resource needs; ensure leadership has accurate visibility into progress, gaps, and path-to-green plans.
- Drive cross-organizational alignment with internal stakeholders and external ecosystem partners where needed, helping remove blockers and accelerate delivery across upstream and downstream dependencies.
- Improve operational maturity across the organization by standardizing TPM best practices, governance frameworks, and planning mechanisms that increase accountability, reduce execution friction, and strengthen delivery consistency.
PREFERRED EXPERIENCE:
- Strong familiarity with modern AI inference ecosystems, including model serving, runtime software, compiler/toolchain dependencies, optimization techniques, and deployment workflows for production inference.
- Experience leading large, cross-functional programs across software, systems, architecture, hardware, and product teams in highly technical environments.
- Track record of building multi-quarter roadmaps, execution cadences, and governance mechanisms that improve predictability across fast-moving engineering organizations.
- Experience working across open-source and ecosystem-driven environments, including managing upstream dependencies and release planning.
- Strong executive presence with demonstrated success communicating program health, risks, tradeoffs, and decisions to senior leadership.
- Proven ability to influence across matrixed organizations, resolve ambiguity, and drive alignment among teams with competing priorities.
- Experience managing, mentoring, or scaling TPM teams.
ACADEMIC CREDENTIALS:
Master's or Bachelor's degree in Computer Engineering, Computer Science, Electrical Engineering, or a related technical field is desired.
Benefits offered are described here: AMD benefits at a glance.
AMD does not accept unsolicited resumes from headhunters, recruitment agencies, or fee-based recruitment services. AMD and its subsidiaries are equal opportunity, inclusive employers and will consider all applicants without regard to age, ancestry, color, marital status, medical condition, mental or physical disability, national origin, race, religion, political and/or third-party affiliation, sex, pregnancy, sexual orientation, gender identity, military or veteran status, or any other characteristic protected by law. We encourage applications from all qualified candidates and will accommodate applicants' needs under the respective laws throughout all stages of the recruitment and selection process.
AMD may use Artificial Intelligence to help screen, assess or select applicants for this position. AMD's “Responsible AI Policy” is available here.
This posting is for an existing vacancy.