Principal Software Engineer R&D

Extra holidays - Extra parental leave

Work set-up: Full Remote
Contract:
Experience: Senior (5-10 years)
Work from: United States

Offer summary

Qualifications:

  • MS or PhD in Computer Science, Electrical Engineering, or related fields
  • Strong understanding of computer architecture, data structures, and machine learning fundamentals
  • Experience leading R&D teams or senior software development for AI hardware and models
  • Proficiency in C/C++ and Python development in Linux environments

Key responsibilities:

  • Lead research and development of LLM-based kernel code generation for AI hardware.
  • Design and implement software kernels for large language and multimodal models.
  • Research ways to generate kernel code using large language models (LLMs).
  • Collaborate with hardware and software teams to optimize AI compute engine performance.

d-Matrix Scaleup https://www.d-matrix.ai
51 - 200 Employees

Job description

At d-Matrix, we are focused on unleashing the potential of generative AI to power the transformation of technology. We are at the forefront of software and hardware innovation, pushing the boundaries of what is possible. Our culture is one of respect and collaboration.

We value humility and believe in direct communication. Our team is inclusive, and our differing perspectives allow for better solutions. We are seeking individuals passionate about tackling challenges and are driven by execution. Ready to come find your playground? Together, we can help shape the endless possibilities of AI.

Location:

Hybrid, working onsite at our Santa Clara, CA, headquarters 3-5 days per week.

The Role: Principal Software Engineer R&D

What you will do:

As a principal engineer, you will be part of the team that designs the SW stack for our AI compute engine. Within the software team, you will lead research and development of LLM-based kernel code generation for the software kernel SDK targeting next-generation AI hardware. The d-Matrix software stack is a hybrid stack that uses both a compiler and hand-built kernels. The kernels team designs and implements operations for large language and multimodal models, such as SIMD operations, matrix multiplications, and convolutions, and composes these operations into kernels such as LayerNorms, convolution layers, attention heads, and KV caches. These kernels are implemented in a combination of the d-Matrix HW ISA and/or the ISAs of third-party IP-based processor units.
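
For illustration only, here is a minimal sketch of the kind of composition described above, written in Python/NumPy rather than the d-Matrix HW ISA or kernel SDK: a LayerNorm kernel assembled from primitive reduction and element-wise operations. All names are hypothetical and not part of the actual toolchain.

```python
# Illustrative sketch only: NumPy stands in for hardware SIMD/matrix
# primitives; the actual d-Matrix kernel SDK and HW ISA are not shown.
import numpy as np

def vector_mean(x: np.ndarray) -> np.ndarray:
    # Primitive reduction (would map to a SIMD reduce on the accelerator).
    return x.mean(axis=-1, keepdims=True)

def vector_rsqrt(x: np.ndarray) -> np.ndarray:
    # Primitive non-linear op (would map to a vector instruction or a LUT).
    return 1.0 / np.sqrt(x)

def layer_norm_kernel(x: np.ndarray, gamma: np.ndarray, beta: np.ndarray,
                      eps: float = 1e-5) -> np.ndarray:
    # Higher-level kernel composed from the primitives above, analogous to
    # building a LayerNorm from SIMD reductions and element-wise vector ops.
    mu = vector_mean(x)
    var = vector_mean((x - mu) ** 2)
    return (x - mu) * vector_rsqrt(var + eps) * gamma + beta
```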

What you will bring:

Experience tuning LLMs to generate code. You have exposure to building software kernels for HW architectures. You understand domain-specific hardware architectures (for example, GPUs, ML accelerators, SIMD vector processors, and DSPs) and how to map ML algorithms, such as nonlinear operations or complex data manipulation operations, onto an accelerator architecture. You understand how to map computational graphs generated by AI frameworks (such as PyTorch or TensorFlow) to an underlying architecture. You also know how to evaluate throughput and latency on such accelerators and how to modify algorithms to preserve numerical accuracy. Your role will be to research and develop ways of generating kernel code with LLMs.
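
As a hedged illustration of the LLM-driven kernel generation workflow described above, the sketch below shows one plausible way generated kernel code could be validated against a NumPy reference. The prompt, function names, and harness are assumptions for illustration, not d-Matrix tooling; any LLM client can be supplied as the `llm` callable.

```python
# Illustrative sketch only: the prompt, names, and harness are assumptions,
# not d-Matrix tooling. Pass any text-generation client as `llm`.
from typing import Callable
import numpy as np

PROMPT = (
    "Write a Python function softmax_kernel(x) that computes a numerically "
    "stable softmax over the last axis using only NumPy."
)

def generate_and_check(llm: Callable[[str], str], tol: float = 1e-6) -> bool:
    """Ask an LLM for kernel code, then validate it against a NumPy reference."""
    source = llm(PROMPT)
    namespace = {"np": np}
    exec(source, namespace)      # in practice, sandbox untrusted generated code
    candidate = namespace["softmax_kernel"]

    x = np.random.randn(4, 16)
    ref = np.exp(x - x.max(axis=-1, keepdims=True))
    ref = ref / ref.sum(axis=-1, keepdims=True)
    return bool(np.allclose(candidate(x), ref, atol=tol))
```

A numerical harness like this, combined with throughput and latency measurements on the accelerator, is one way to close the loop between generation and validation.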

Minimum:

  • MS or PhD in Computer Science, Electrical Engineering, or related fields

  • Strong grasp of computer architecture, data structures, system software, and machine learning fundamentals

  • Experience as a technical R&D lead, manager, or senior manager delivering software for AI accelerator HW and models for code generation

  • Experience designing and fine-tuning generative AI LLMs for code generation and/or coding assistance, with a record of open-source code and/or publications in this field

  • Proficient in C/C++ and Python development in a Linux environment using standard development tools

  • Self-motivated team player with a strong sense of ownership and leadership

Preferred:

  • Prior startup, small team, or incubation experience

  • Experience designing and implementing algorithms for specialized hardware such as FPGAs, DSPs, GPUs, and AI accelerators, using libraries such as CUDA

  • Experience with development for embedded SIMD vector processors such as Tensilica

  • Experience with ML frameworks such as TensorFlow and/or PyTorch

  • Experience working with ML compilers and compiler frameworks such as MLIR, LLVM, TVM, and Glow

  • Work experience at a cloud provider or AI compute subsystem company

Equal Opportunity Employment Policy

d-Matrix is proud to be an equal opportunity workplace and affirmative action employer. We’re committed to fostering an inclusive environment where everyone feels welcomed and empowered to do their best work. We hire the best talent for our teams, regardless of race, religion, color, age, disability, sex, gender identity, sexual orientation, ancestry, genetic information, marital status, national origin, political affiliation, or veteran status. Our focus is on hiring teammates with humble expertise, kindness, dedication, and a willingness to embrace challenges and learn together every day.

d-Matrix does not accept resumes or candidate submissions from external agencies. We appreciate the interest and effort of recruitment firms, but we kindly request that individuals interested in opportunities with d-Matrix apply directly through our official channels. This approach allows us to streamline our hiring processes and maintain a consistent and fair evaluation of all applicants. Thank you for your understanding and cooperation.

Required profile

Experience

Level of experience: Senior (5-10 years)
Spoken language(s):
English

Other Skills

  • Teamwork
  • Communication
  • Problem Solving
  • Leadership
