Job Description
Overview

The Annapurna Labs team at Amazon Web Services (AWS) builds AWS Neuron, the software development kit used to accelerate deep learning and generative AI workloads on Amazon's custom machine learning accelerators, Inferentia and Trainium. The Acceleration Kernel Library team focuses on maximizing performance for AWS's custom ML accelerators, crafting high-performance kernels for ML functions at the hardware-software boundary to optimize demanding workloads. The AWS Neuron SDK includes an ML compiler, runtime, and application framework that integrates with popular ML frameworks such as PyTorch to accelerate ML inference and training. The broader Neuron Compiler organization works across frameworks, compilers, runtime, and collectives, optimizing current performance and contributing to future architecture designs while engaging with customers to enable models and ensure optimal performance.

This is an opportunity to work on cutting-edge products at the intersection of machine learning, high-performance computing, and distributed architectures, helping shape the future of AI acceleration technology. You will architect and implement business-critical features, publish cutting-edge research, and mentor a team of engineers. We operate in spaces that are very large, yet our teams remain small and agile. There is no blueprint. We are inventing. We are experimenting.
The team works closely with customers on their model enablement, providing direct support and optimization expertise to ensure machine learning workloads achieve optimal performance on AWS ML accelerators.

Explore the product and our history:
- https://awsdocs-neuron.readthedocs-hosted.com/en/latest/neuron-guide/neuron-cc/index.html
- https://aws.amazon.com/machine-learning/neuron/
- https://github.com/aws/aws-neuron-sdk
- https://www.amazon.science/how-silicon-innovation-became-the-secret-sauce-behind-awss-success

Key job responsibilities

Our kernel engineers collaborate across compiler, runtime, framework, and hardware teams to optimize machine learning workloads for our global customer base. Working at the intersection of software, hardware, and machine learning systems, you'll bring expertise in low-level optimization, system architecture, and ML model acceleration. In this role, you will:
- Design and implement high-performance compute kernels for ML operations, leveraging the Neuron architecture and programming models
- Analyze and optimize kernel-level performance across multiple generations of Neuron hardware
- Conduct detailed performance analysis using profiling tools to identify and resolve bottlenecks
- Implement compiler optimizations such as fusion, sharding, tiling, and scheduling
- Work directly with customers to enable and optimize their ML models on AWS accelerators
- Collaborate across teams to develop innovative kernel optimization techniques

A day in the life

As you design and code solutions to help our team drive efficiencies in software architecture, you'll create metrics, implement automation and other improvements, and resolve the root cause of software defects.
You'll also:
- Build high-impact solutions to deliver to our large customer base
- Participate in design discussions and code reviews, and communicate with internal and external stakeholders
- Work cross-functionally to help drive business decisions with your technical input
- Work in a startup-like development environment, where you're always working on the most important stuff

About the team

Diverse Experiences
AWS values diverse experiences. Even if you do not meet all of the qualifications and skills listed in the job description, we encourage candidates to apply. If your career is just starting, hasn't followed a traditional path, or includes alternative experiences, don't let that stop you from applying.

Why AWS
Amazon Web Services (AWS) is the world's most comprehensive and broadly adopted cloud platform. We pioneered cloud computing and never stopped innovating; that's why customers from the most successful startups to Global 500 companies trust our robust suite of products and services to power their businesses.

Inclusive Team Culture
Here at AWS, we embrace our differences. We are committed to furthering our culture of inclusion. We have ten employee-led affinity groups, reaching 40,000 employees in over 190 chapters globally. We have innovative benefit offerings, and host annual and ongoing learning experiences, including our Conversations on Race and Ethnicity (CORE) and AmazeCon (gender diversity) conferences. Amazon's culture of inclusion is reinforced within our 16 Leadership Principles, which remind team members to seek diverse perspectives, learn and be curious, and earn trust.

Work/Life Balance
Our team puts a high value on work-life balance. It isn't about how many hours you spend at home or at work; it's about the flow you establish that brings energy to both parts of your life. We believe striking the right balance between your personal and professional life is critical to lifelong happiness and fulfillment.
We offer flexibility in working hours and encourage you to find your own balance between your work and personal lives.

Mentorship & Career Growth
Our team is dedicated to supporting new members. We have a broad mix of experience levels and tenures, and we're building an environment that celebrates knowledge sharing and mentorship. We care about your career growth and strive to assign projects based on what will help each team member develop into a better-rounded professional and enable them to take on more complex tasks in the future.

BASIC QUALIFICATIONS
- 5+ years of non-internship professional software development experience
- 5+ years of experience programming with at least one software programming language
- 5+ years of experience leading design or architecture (design patterns, reliability, and scaling) of new and existing systems
- Experience as a mentor, tech lead, or leader of an engineering team

PREFERRED QUALIFICATIONS
- 5+ years of experience with the full software development life cycle, including coding standards, code reviews, source control management, build processes, testing, and operations
- Bachelor's degree in computer science or equivalent
- Expertise in accelerator architectures for ML or HPC, such as GPUs, CPUs, FPGAs, or custom architectures
- Experience with GPU kernel optimization and GPGPU computing, such as CUDA, NKI, Triton, OpenCL, SYCL, or ROCm
- Demonstrated experience with NVIDIA PTX and/or AMD GPU ISA
- Experience developing high-performance libraries for HPC applications
- Proficiency in low-level performance optimization for GPUs
- Experience with LLVM/MLIR backend development for GPUs
- Knowledge of ML frameworks (PyTorch, TensorFlow) and their GPU backends
- Experience with parallel programming and optimization techniques
- Understanding of GPU memory hierarchies and optimization strategies

Amazon is an equal opportunity employer and does not discriminate on the basis of protected veteran status, disability, or other legally protected status.
If you have a disability and need a workplace accommodation or adjustment during the application and hiring process, including support for the interview or onboarding process, please visit https://amazon.jobs/content/en/how-we-hire/accommodations for more information. If the country/region you’re applying in isn’t listed, please contact your Recruiting Partner.