Minimum qualifications:
- PhD degree in Electronics and Communication Engineering, Electrical Engineering, Computer Engineering, or a related technical field, or equivalent practical experience.
- Experience with accelerator architectures and data center workloads.
- Experience with programming languages (e.g., C++, Python, Verilog) and with Synopsys and Cadence tools.
Preferred qualifications:
- 2 years of post-PhD experience.
- Experience with performance modeling tools.
- Knowledge of arithmetic units, bus architectures, accelerators, or memory hierarchies.
- Knowledge of high performance and low power design techniques.
About the job
In this role, you will shape the future of AI/ML hardware acceleration as a Silicon Architect/Design Engineer and drive cutting-edge TPU (Tensor Processing Unit) technology that fuels Google's most demanding AI/ML applications. You will collaborate with hardware and software architects and designers to architect, model, analyze, define and design next-generation TPUs. You will have dynamic, multi-faceted responsibilities in areas such as product definition, design, and implementation, collaborating with the Engineering teams to drive the optimal balance between performance, power, features, schedule, and cost.
The AI and Infrastructure team is redefining what’s possible. We empower Google customers with breakthrough capabilities and insights by delivering AI and Infrastructure at unparalleled scale, efficiency, reliability and velocity. Our customers include Googlers, Google Cloud customers, and billions of Google users worldwide.
We're the driving force behind Google's groundbreaking innovations, empowering the development of our cutting-edge AI models, delivering unparalleled computing power to global services, and providing the essential platforms that enable developers to build the future. From software to hardware, our teams are shaping the future of world-leading hyperscale computing, with key teams working on the development of our TPUs, Vertex AI for Google Cloud, Google Global Networking, Data Center operations, systems research, and much more.
Responsibilities
- Revolutionize Machine Learning (ML) workload characterization and benchmarking, and propose capabilities and optimizations for next-generation TPUs.
- Develop architecture specifications that meet current and future computing requirements for the AI/ML roadmap. Develop architectural and microarchitectural power/performance models, microarchitectures, and RTL designs, and perform quantitative and qualitative performance and power analysis.
- Partner with hardware design, software, compiler, Machine Learning (ML) model, and research teams for effective hardware/software co-design, creating high-performance hardware/software interfaces.
- Develop and adopt advanced AI/ML capabilities, and drive accelerated and efficient design verification strategies and implementations.
- Use AI techniques for faster and optimal physical design convergence (timing, floor planning, power grid, and clock tree design). Investigate, validate, and optimize DFT, post-silicon test, and debug strategies, contributing to the advancement of silicon bring-up and qualification processes.
Google is proud to be an equal opportunity workplace and is an affirmative action employer. We are committed to equal employment opportunity regardless of race, color, ancestry, religion, sex, national origin, sexual orientation, age, citizenship, marital status, disability, gender identity or Veteran status. We also consider qualified applicants regardless of criminal histories, consistent with legal requirements. See also Google's EEO Policy and EEO is the Law. If you have a disability or special need that requires accommodation, please let us know by completing our Accommodations for Applicants form.