Overview
d-Matrix has fundamentally changed the physics of memory-compute integration with our digital in-memory compute (DIMC) engine. The holy grail of AI compute has been to break through the memory wall to minimize data movement; we've achieved this with a first-of-its-kind DIMC engine. Having secured over $154M in funding, including $110M in our Series B offering, d-Matrix is poised to scale generative inference acceleration for Large Language Models with our chiplet and in-memory compute approach. We are on track to deliver our first commercial product in 2024, built to meet the energy and performance demands of these Large Language Models.
The company has 100+ employees across Silicon Valley, Sydney and Bengaluru.
Our pedigree comes from companies like Microsoft, Broadcom, Inphi, Intel, Texas Instruments, Lucent, MIPS and Wave Computing. Our past successes include building chips for cloud hyperscalers globally (Amazon, Facebook, Google, Microsoft, Alibaba, Tencent) along with enterprise and mobile operators like China Mobile, Cisco, Nokia, Ciena, Reliance Jio, Verizon and AT&T. We are recognized leaders in the mixed-signal and DSP connectivity space, now applying our skills to next-generation AI.
Location
Hybrid, working onsite at our Santa Clara, CA headquarters 3 days per week.
d-Matrix is seeking a Machine Learning Engineer to join our Algorithm Team. We're looking for someone to invent, design, and implement efficient algorithms that optimize Large Language Model inference on the DNN accelerators we develop. You would be part of a close-knit team of mathematicians, ML researchers, and ML engineers who create and apply advanced algorithmic and numerical techniques to cutting-edge, high-impact research at the intersection of mathematics, ML, and modern LLM applications.
What You Will Do