XPENG


www.xpeng.com

2 Jobs

1,909 Employees

About the Company

XPENG is a leading tech company that designs, develops, manufactures, and markets intelligent mobility solutions. We explore the diversity of mobility including electric vehicles (EVs), electric vertical take-off and landing (eVTOL) aircraft, and robotics. We focus on creating a future of mobility that uses thoughtful and empathetic intelligence to improve the driving experience.
XPENG is committed to in-house R&D, with almost 40% of our employees working in R&D-related areas to help develop our expanding product portfolio. The Company has created a full-stack Advanced Driver Assistance System (XPILOT), as well as an intelligent operating system (Xmart OS) for an enhanced in-car experience. XPENG has also developed core vehicle systems for enhanced driving capabilities, including powertrains and advanced electronic architecture.
XPENG is headquartered in Guangzhou, China, with multi-regional offices in Beijing, Shanghai, Shenzhen, Silicon Valley, and San Diego. In 2021, the Company established its European headquarters in Amsterdam, along with dedicated offices in Copenhagen, Munich, Oslo, and Stockholm. XPENG's EVs are manufactured at the fully owned plant in Zhaoqing, China. To further expand production capacity, two new self-owned intelligent EV manufacturing bases in Guangzhou and Wuhan are now under construction.

Listed Jobs

Company Name
XPENG
Job Title
Research Scientist Intern
Job Description
**Job Title:** Research Scientist Intern

**Role Summary:**
Drive the design, training, and deployment of XPENG's next-generation Vision-Language-Action (VLA) foundation model for autonomous driving. Collaborate with researchers and engineers to create large-scale multimodal architectures that unify vision, language, and control, directly influencing L3/L4 autonomous driving systems.

**Expectations:**
- Full-time commitment to research and development of multimodal models.
- Contribute to academic publications and presentations at top AI/ML conferences.
- Translate research outcomes into production-ready systems within the autonomous driving pipeline.

**Key Responsibilities:**
- Design and implement large-scale vision-language-action transformers for end-to-end driving.
- Develop cross-modal alignment techniques (visual grounding, temporal reasoning, policy distillation, imitation and reinforcement learning) to improve interpretability and action quality.
- Collaborate closely with modeling, perception, and infrastructure teams to integrate research into production.
- Participate in code reviews, model evaluation, and iterative refinement of training pipelines.
- Produce research documentation, presentations, and conference papers.

**Required Skills:**
- Strong background in multimodal representation learning, temporal modeling, and reinforcement learning.
- Proficiency in PyTorch and modern transformer architecture design.
- Experience with distributed training (DDP, FSDP) and large-batch optimization.
- Familiarity with cross-modal alignment methods such as visual grounding, temporal reasoning, and policy distillation.
- Ability to analyze, debug, and optimize complex machine-learning systems.

**Required Education & Certifications:**
- Currently enrolled in a Master's or Ph.D. program in Computer Science, Electrical/Computer Engineering, or a related field with specialization in Computer Vision, Natural Language Processing, or Machine Learning.
Santa Clara, United States
On site
Fresher
09-12-2025
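As a toy illustration of the imitation-learning techniques this role references, here is a minimal behavior-cloning sketch. It is a hypothetical example (not XPENG code): a student policy `a = w * x` is fit by gradient descent to demonstrations from an assumed "expert" with gain 2.0.

```python
# Minimal behavior-cloning sketch (hypothetical example, not XPENG code).
# The "expert" maps observations x to actions a = 2.0 * x; the student's
# linear policy a_hat = w * x is fit by gradient descent on squared error.

def behavior_clone(demos, lr=0.1, epochs=100):
    """Fit a scalar linear policy to (observation, action) demonstrations."""
    w = 0.0
    for _ in range(epochs):
        # gradient of mean squared error (w*x - a)^2 with respect to w
        grad = sum(2 * (w * x - a) * x for x, a in demos) / len(demos)
        w -= lr * grad
    return w

demos = [(x, 2.0 * x) for x in [-1.0, -0.5, 0.5, 1.0]]
w = behavior_clone(demos)
# w converges toward the expert gain of 2.0
```

Real VLA training replaces the scalar weight with a multimodal transformer and the demonstrations with large-scale driving logs, but the supervised-regression core is the same.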
Company Name
XPENG
Job Title
2026 Campus Recruiting Robotics Center Internship Position
Job Description
Job Title: 2026 Campus Recruiting Robotics Center Internship Position

Role Summary:
Internship focused on developing and deploying robotics foundation models, manipulation policies, locomotion controllers, and whole-body control algorithms using in-house chip technology.

Expectations:
Collaborate with AI and motion control teams, contribute to code design and experimentation, learn advanced ML techniques (LLM/VLM, diffusion policy, RL), and support efficient training and deployment pipelines.

Key Responsibilities:
- Assist in co-designing robotics foundation models integrating LLM/VLM with custom chips.
- Develop data-driven end-to-end manipulation policies (e.g., diffusion policy, VLA).
- Implement and tune RL controllers for human-like walking and locomotion.
- Contribute to whole-body control algorithm development and testing.
- Conduct experiments, analyze results, and document findings.
- Maintain and improve the robotics codebase, ensuring reproducibility and scalability.

Required Skills:
- Proficient in Python and C++ with experience in AI/ML frameworks (PyTorch, TensorFlow).
- Familiarity with reinforcement learning, diffusion models, and vision-language models.
- Basic understanding of robotics kinematics, dynamics, and control theory.
- Strong analytical and problem-solving abilities.
- Ability to work collaboratively in interdisciplinary teams.

Required Education & Certifications:
- Currently enrolled in a Computer Science, Robotics, Artificial Intelligence, or related program; expected graduation 2026.
- Coursework in machine learning, deep learning, robotics, and control systems.
- No specific certifications required; demonstrated interest and academic performance in robotics and AI.
Santa Clara, United States
On site
10-02-2026
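For context on the reinforcement-learning controllers this internship mentions, here is a minimal tabular Q-learning sketch. It is a hypothetical illustration (not XPENG code) on a 5-state chain world, far simpler than locomotion control, but it shows the value-iteration loop that RL controllers build on.

```python
# Toy tabular Q-learning sketch (hypothetical illustration, not XPENG code).
# A 5-state chain world: the agent starts at state 0 and earns reward 1.0
# for reaching terminal state 4; "right" is the optimal action everywhere.
import random

N_STATES = 5  # states 0..4; state 4 is terminal

def env_step(s, a):
    """Move left (a=0) or right (a=1); reward 1.0 on reaching the goal."""
    s2 = max(0, min(N_STATES - 1, s + (1 if a == 1 else -1)))
    done = s2 == N_STATES - 1
    return s2, (1.0 if done else 0.0), done

def train_q(episodes=500, alpha=0.5, gamma=0.9, eps=0.2, seed=0):
    rng = random.Random(seed)
    q = [[0.0, 0.0] for _ in range(N_STATES)]
    for _ in range(episodes):
        s = 0
        for _ in range(50):  # step cap per episode
            # epsilon-greedy action selection (ties break toward "right")
            a = rng.randrange(2) if rng.random() < eps else (0 if q[s][0] > q[s][1] else 1)
            s2, r, done = env_step(s, a)
            # one-step temporal-difference update toward the Bellman target
            target = r if done else r + gamma * max(q[s2])
            q[s][a] += alpha * (target - q[s][a])
            s = s2
            if done:
                break
    return q

q = train_q()
# the learned greedy policy prefers "right" in every non-terminal state
```

Practical locomotion work swaps the table for a neural policy and the chain for a physics simulator, but the update rule above is the conceptual starting point.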