Deepgram is the leading platform underpinning the emerging trillion-dollar Voice AI economy, providing real-time APIs for speech-to-text (STT), text-to-speech (TTS), and production-grade voice agents at scale. More than 200,000 developers and 1,300+ organizations build voice offerings that are ‘Powered by Deepgram’, including Twilio, Cloudflare, Sierra, Decagon, Vapi, Daily, Cresta, Granola, and Jack in the Box. Deepgram’s voice-native foundation models are accessed through cloud APIs or as self-hosted and on-premises software, with unmatched accuracy, low latency, and cost efficiency. Backed by a recent Series C led by leading global investors and strategic partners, Deepgram has processed over 50,000 years of audio and transcribed more than 1 trillion words. There is no organization in the world that understands voice better than Deepgram.
At Deepgram, we expect an AI-first mindset—AI use and comfort aren’t optional; they’re core to how we operate, innovate, and measure performance.
Every team member at Deepgram is expected to actively use and experiment with advanced AI tools, and even to build their own as part of their everyday work. We measure how effectively AI is applied to deliver results, and consistent, creative use of the latest AI capabilities is key to success here. Candidates should be comfortable adopting new models and modes quickly, integrating AI into their workflows, and continuously pushing the boundaries of what these technologies can do.
Additionally, we move at the pace of AI. Change is rapid, and you can expect your day-to-day work to evolve just as quickly. This may not be the right role if you’re not excited to experiment, adapt, think on your feet, and learn constantly, or if you’re seeking something highly prescriptive with a traditional 9-to-5.
Deepgram's speech AI models are among the fastest and most accurate in the world — and an increasing number of defense and edge computing customers need those models to run outside of the cloud: on devices, on-premises, in disconnected environments, and on hardware with strict power and compute constraints. This is the frontier where AI meets the physical world, and it requires a fundamentally different engineering approach.
As the Defense / Edge Tech Lead, you will own the technical direction for deploying Deepgram's models to edge and embedded environments. You will work closely with hardware partners like Qualcomm and Motorola, support defense customer requirements through AWS NatSec partnerships, and drive the model optimization and runtime engineering needed to deliver production-quality speech AI on constrained platforms. You will be the technical point of contact for some of Deepgram's most strategically important partnerships and customers.
This role requires a rare combination of systems engineering depth, model optimization expertise, and the judgment to navigate defense and government customer environments. Note that Deepgram does not currently hold facility clearance — this role does not require an active security clearance, though experience working in or alongside classified programs is highly valued.
Lead the technical strategy for edge deployment of Deepgram's STT and TTS models, defining the architecture for on-device, on-premises, and air-gapped inference across diverse hardware targets.
Optimize models for edge and embedded platforms, driving quantization, pruning, distillation, and runtime optimization to meet strict latency, memory, and power constraints.
Partner with Qualcomm, Motorola, and other hardware vendors to ensure Deepgram models run efficiently on their chipsets, collaborating on SDK integration, performance benchmarking, and joint go-to-market.
Support defense customer requirements through AWS NatSec partnerships, translating mission requirements into engineering deliverables and ensuring Deepgram's solutions meet the unique demands of government environments.
Design and build edge runtime infrastructure, including model packaging, deployment pipelines, OTA update mechanisms, and telemetry for devices operating in low-connectivity or disconnected environments.
Harden deployments for security-sensitive environments, implementing secure boot chains, encrypted model storage, tamper detection, and audit logging appropriate for defense and government use cases.
Benchmark and validate performance across target hardware platforms, establishing repeatable test suites for latency, accuracy, power consumption, and resource utilization.
Collaborate with Research and Engine teams to influence model architectures toward edge-friendly designs from the start, reducing the optimization burden at deployment time.
Provide technical leadership to cross-functional teams working on defense and edge projects, setting engineering standards, reviewing designs, and mentoring engineers on systems and optimization practices.
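To give a flavor of the optimization work described above, here is a minimal, illustrative sketch of post-training affine quantization — mapping float32 weights to int8 with a scale and zero-point. This is the basic mechanism behind the quantization tooling mentioned in this posting, not Deepgram's actual pipeline; production work relies on frameworks like ONNX Runtime or TensorRT rather than hand-rolled math.

```python
# Illustrative post-training affine (asymmetric) quantization of a
# weight tensor. Standalone sketch for explanation purposes only.

def quantize_int8(weights):
    """Map float weights onto the int8 range [-128, 127]."""
    w_min, w_max = min(weights), max(weights)
    # Guard against a constant tensor (zero dynamic range).
    scale = (w_max - w_min) / 255.0 or 1.0
    zero_point = round(-w_min / scale) - 128  # aligns w_min near -128
    return (
        [max(-128, min(127, round(w / scale) + zero_point)) for w in weights],
        scale,
        zero_point,
    )

def dequantize(q, scale, zero_point):
    """Recover approximate float values from the int8 representation."""
    return [(v - zero_point) * scale for v in q]

weights = [-1.0, -0.5, 0.0, 0.25, 1.0]
q, scale, zp = quantize_int8(weights)
recovered = dequantize(q, scale, zp)
# Round-trip error stays within about one quantization step.
assert all(abs(a - b) <= scale for a, b in zip(weights, recovered))
```

The memory win is the point: each weight shrinks from 4 bytes to 1, at the cost of the small reconstruction error bounded above.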
You find deep satisfaction in making a 300M-parameter model run on hardware with 4GB of RAM — and still hit accuracy targets.
You want to work at the intersection of AI and hardware, where optimization is not optional but existential.
You are energized by partnerships with hardware companies and enjoy the back-and-forth of getting a model to sing on a new chipset.
You understand the unique dynamics of defense and government customers and can navigate their requirements without losing engineering velocity.
You believe that edge AI is the next major deployment frontier, and you want to define how speech AI gets there.
You prefer working on hard, constrained problems over open-ended research — you want to ship, not just publish.
5+ years of experience in systems engineering, embedded computing, or edge AI deployment, with a track record of delivering production systems on constrained hardware.
Strong proficiency in C, C++, and/or Rust, with experience writing performance-critical code for resource-constrained environments.
Hands-on experience with model optimization for edge deployment, including quantization, pruning, knowledge distillation, or architecture-specific compilation.
Familiarity with edge inference runtimes such as ONNX Runtime, TensorRT, TFLite, or vendor-specific SDKs (Qualcomm SNPE/QNN, MediaTek NeuroPilot, etc.).
Experience with security-conscious development practices, including secure boot, encrypted storage, code signing, and secure deployment pipelines.
Strong understanding of hardware-software interaction — CPU/GPU/NPU architectures, memory hierarchies, power management, and how they affect model inference performance.
Excellent communication skills — you will be the technical face of Deepgram to hardware partners and defense customers, and you need to be credible and clear in both contexts.
Prior experience working on or alongside classified defense programs — you understand SCIFs, accreditation processes, and the operational constraints of secure environments, even if you do not currently hold an active clearance.
Experience with ML model optimization techniques at depth — custom quantization schemes, mixed-precision inference, neural architecture search for edge targets.
Familiarity with ONNX, TensorRT, or similar model compilation and optimization toolchains and their tradeoffs across hardware targets.
Defense or govtech industry experience, including familiarity with procurement processes, ITAR, FedRAMP, or DoD software development standards.
Experience with real-time audio processing on embedded platforms — DSP pipelines, audio codec optimization, or streaming inference on microcontrollers or edge SoCs.
Background in hardware evaluation and benchmarking — systematically comparing accelerators, SoCs, or GPUs for specific workload profiles.
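The repeatable benchmarking mentioned above can be sketched as a small harness that warms up the workload, then reports latency percentiles. This is an illustrative skeleton, not Deepgram tooling; `run_inference` is a hypothetical placeholder for a real model invocation on target hardware.

```python
# Minimal latency-benchmarking harness: warmup runs, then timed
# iterations summarized as p50 / p95 / mean in milliseconds.
import time
import statistics

def run_inference(payload):
    # Hypothetical stand-in for an on-device model call.
    return sum(i * i for i in range(1000))

def benchmark(fn, payload, warmup=5, iterations=50):
    """Time `fn(payload)` and report latency statistics in ms."""
    for _ in range(warmup):  # warm caches before measuring
        fn(payload)
    samples = []
    for _ in range(iterations):
        start = time.perf_counter()
        fn(payload)
        samples.append((time.perf_counter() - start) * 1000.0)
    samples.sort()
    return {
        "p50_ms": statistics.median(samples),
        "p95_ms": samples[int(0.95 * (len(samples) - 1))],
        "mean_ms": statistics.fmean(samples),
    }

report = benchmark(run_inference, payload=None)
```

In practice the same harness shape is extended with power and memory counters per platform, which is what makes cross-hardware comparisons repeatable.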
Medical, dental, vision benefits
Annual wellness stipend
Mental health support
Life, short-term disability (STD), and long-term disability (LTD) insurance plans
Unlimited PTO
Generous paid parental leave
Flexible schedule
12 Paid US company holidays
Quarterly personal productivity stipend
One-time stipend for home office upgrades
401(k) plan with company match
Tax Savings Programs
Learning / Education stipend
Participation in talks and conferences
Employee Resource Groups
AI enablement workshops / sessions
*For candidates outside of the US, we use an Employer of Record model in many countries, which means benefits are administered locally and governed by country-specific regulations. Because of this, benefits will differ by region — in some cases international employees receive benefits US employees do not, and vice versa. As we scale, we will continue to evaluate where we can create more alignment, but a 1:1 global benefits structure is not always legally or operationally possible.
Backed by prominent investors including Y Combinator, Madrona, Tiger Global, Wing VC and NVIDIA, Deepgram has raised over $215M in total funding. If you're looking to work on cutting-edge technology and make a significant impact in the AI industry, we'd love to hear from you!
Deepgram is an equal opportunity employer. We want all voices and perspectives represented in our workforce. We are a curious bunch focused on collaboration and doing the right thing. We put our customers first, grow together and move quickly. We do not discriminate on the basis of race, religion, color, national origin, gender, sexual orientation, gender identity or expression, age, marital status, veteran status, disability status, pregnancy, parental status, genetic information, political affiliation, or any other status protected by the laws or regulations in the locations where we operate.
We are happy to provide accommodations for applicants who need them.
