Company: AMD
Location: Beijing, China
Career Level: Entry Level
Industries: Technology, Software, IT, Electronics

Description

WHAT YOU DO AT AMD CHANGES EVERYTHING 

At AMD, our mission is to build great products that accelerate next-generation computing experiences—from AI and data centers, to PCs, gaming and embedded systems. Grounded in a culture of innovation and collaboration, we believe real progress comes from bold ideas, human ingenuity and a shared passion to create something extraordinary. When you join AMD, you'll discover the real differentiator is our culture. We push the limits of innovation to solve the world's most important challenges—striving for execution excellence, while being direct, humble, collaborative, and inclusive of diverse perspectives. Join us as we shape the future of AI and beyond. Together, we advance your career.



1. Responsibilities
  • Train, fine-tune, and optimize Large Language Models (LLMs), including but not limited to pretraining, SFT, and RLHF pipelines
  • Design and develop LLM-based agent systems (e.g., tool use, planning and reasoning, multi-agent collaboration)
  • Optimize LLM inference performance, including latency, throughput, and memory (VRAM) usage
  • Participate in GPU computing optimization, including operator/kernel optimization and parallelization strategies
  • Collaborate with research and product teams to drive the deployment of LLMs in real-world applications
2. Requirements
  • Bachelor's degree or above in Computer Science, Artificial Intelligence, or a related field
  • 4+ years of relevant development experience
  • Proficient in at least one of Python or C++, with strong engineering skills
  • Familiar with LLM training workflows, with hands-on experience in training or fine-tuning; experience deploying LLM-based products is a plus
  • Experience in agent development (e.g., LangChain, in-house agents, tool use systems)
  • Familiar with LLM inference optimization techniques, including but not limited to acceleration, quantization, and KV cache
  • Understanding of GPU computing principles, with some experience in operator/kernel optimization
3. Preferred Qualifications
  • Experience with large-scale LLM training (e.g., distributed training, Megatron, DeepSpeed)
  • Familiarity with CUDA or Triton, with experience in GPU kernel development or optimization
  • Experience in high-performance computing (HPC) or inference framework optimization
  • Hands-on experience deploying agent systems in production (e.g., complex task planning, multi-tool orchestration)

 




Benefits offered are described in AMD benefits at a glance.

 

AMD does not accept unsolicited resumes from headhunters, recruitment agencies, or fee-based recruitment services. AMD and its subsidiaries are equal opportunity, inclusive employers and will consider all applicants without regard to age, ancestry, color, marital status, medical condition, mental or physical disability, national origin, race, religion, political and/or third-party affiliation, sex, pregnancy, sexual orientation, gender identity, military or veteran status, or any other characteristic protected by law. We encourage applications from all qualified candidates and will accommodate applicants' needs under the respective laws throughout all stages of the recruitment and selection process.

 

AMD may use Artificial Intelligence to help screen, assess or select applicants for this position. AMD's "Responsible AI Policy" is available here.

 

This posting is for an existing vacancy.

