
Description
WHAT YOU DO AT AMD CHANGES EVERYTHING
We care deeply about transforming lives with AMD technology to enrich our industry, our communities, and the world. Our mission is to build great products that accelerate next-generation computing experiences – the building blocks for the data center, artificial intelligence, PCs, gaming and embedded. Underpinning our mission is the AMD culture. We push the limits of innovation to solve the world's most important challenges. We strive for execution excellence while being direct, humble, collaborative, and inclusive of diverse perspectives.
AMD together we advance_
The Role:
This role requires a visionary leader with deep expertise in SOC architecture, power management, and performance optimization for cutting-edge discrete graphics solutions. The candidate will drive innovation in GPU SOC design, balancing performance, power, and area (PPA) while leading cross-functional teams to deliver industry-leading products.
The Person:
This role demands a strategic thinker who can bridge customer needs with groundbreaking SOC architectures, ensuring AMD's leadership in both graphics and AI compute markets. The ideal candidate will have a proven track record of shipping Compute and Graphics SOC products with advanced performance and power features.
Key Responsibilities:
SOC Architecture Leadership: Define and execute the architectural roadmap for discrete GPU SOCs, ensuring alignment with performance, power, and thermal targets.
Performance Benchmarking: Develop methodologies for SOC-level performance analysis, including ML workload characterization, bottleneck identification, and optimization for frameworks like TensorFlow/PyTorch. Optimize compute, memory, and interconnect subsystems for latency, bandwidth, and power efficiency across graphics and ML workloads.
Power Delivery & Power Management: Architect advanced power delivery networks and ML-specific power management strategies, including dynamic voltage/frequency scaling (DVFS) for AI accelerators. Design workload-aware power gating and adaptive voltage scaling for ML inference/training workloads.
Customer-Centric Architecture: Translate customer requirements (e.g., gaming, AI, HPC) into actionable product specifications and architectural solutions. Collaborate with business teams to prioritize features balancing market needs, technical feasibility, and PPA trade-offs.
Cross-Functional Collaboration: Partner with ML software teams to co-optimize architectures for frameworks, compilers, and runtime power management. Partner with IP teams to align industry-leading technologies with product roadmaps and coordinate next-generation IP roadmaps.
Preferred Experience:
We are looking for an SOC/IP architecture expert with at least fifteen years of relevant experience, including expertise in GPU/compute subsystems, ML accelerators, and power management; proficiency in ML frameworks (TensorFlow, PyTorch), quantization/sparsity techniques, and performance/power modeling for AI workloads; and mastery of power analysis tools (PowerArtist, MATLAB) and ML-specific optimizations (e.g., tensor core utilization, memory hierarchy tuning).
Expertise in SOC Performance & Power Efficiency for ML: Designed architectures for ML-specific workloads, including transformers, CNNs, and generative AI models. Implemented model-aware power management (e.g., dynamic precision scaling, sparsity exploitation) to reduce energy consumption by 20-30%. Delivered patented techniques for ML accelerator power optimization, such as tensor-core clock gating and workload-dependent voltage margins.
Customer & Market Insight: Proven ability to analyze customer use cases (gaming, content creation, data centers) and translate them into architecture requirements. Experience defining KPIs for ML inference/training efficiency (e.g., TOPS/Watt, batch-size scalability).
Customer-Driven Innovation: Led architecture definitions for customer-facing features like real-time ray tracing, AI upscaling, and low-latency inference engines. Developed performance/power benchmarking suites aligned with industry ML benchmarks (MLPerf, UL Procyon AI).
Leadership: Track record of mentoring teams in ML-optimized SOC design and driving cross-functional alignment on power/performance targets.
Academic Credentials:
Bachelor's degree in Computer Science, Computer Engineering, Electrical Engineering, or a related field. Advanced degrees, such as a Master's or Ph.D., are preferred.
This role can be remote, based in Austin, TX, Santa Clara, CA, or near an AMD US facility.
#HYBRID
#LI-LB1
Benefits offered are described in: AMD benefits at a glance.
AMD does not accept unsolicited resumes from headhunters, recruitment agencies, or fee-based recruitment services. AMD and its subsidiaries are equal opportunity, inclusive employers and will consider all applicants without regard to age, ancestry, color, marital status, medical condition, mental or physical disability, national origin, race, religion, political and/or third-party affiliation, sex, pregnancy, sexual orientation, gender identity, military or veteran status, or any other characteristic protected by law. We encourage applications from all qualified candidates and will accommodate applicants' needs under the respective laws throughout all stages of the recruitment and selection process.