7+ years in programming and debugging; degree in Computer Science or related field; experience with streaming infrastructure (Flink, Spark, Pulsar); proficiency with Python runtimes and container sandboxing; familiarity with AWS or GCP.
Key responsibilities:
Develop advanced streaming capabilities in Rift
Build integrated observability solutions
Scale ingestion platform for low latency
Reduce cold start times for Python execution
Launch infrastructure across multiple cloud platforms
Founded by the team that created the Uber Michelangelo platform, Tecton provides an enterprise-ready feature platform to make world-class machine learning accessible to every company.
Machine learning creates new opportunities to generate more value than ever before from data. Companies can now build ML-driven applications to automate decisions at machine speed, deliver magical customer experiences, and re-invent business processes.
But ML models will only ever be as good as the data that is fed to them. Today, it’s incredibly hard to build and manage ML data. Most companies don’t have access to the advanced ML data infrastructure that is used by the internet giants. So ML teams spend the majority of their time building custom features and bespoke data pipelines, and most models never make it to production.
We believe that companies need a new kind of data platform built for the unique requirements of ML. Our goal is to enable ML teams to build great features, serve them to production quickly and reliably, and do it at scale. By getting the data layer for ML right, companies can get better models to production faster to drive real business outcomes.
Tecton helps companies unlock the full potential of their data for AI applications. The platform streamlines the complex process of preparing and delivering data to models. With Tecton, AI teams accelerate the development of smarter, more impactful AI applications.
Tecton is funded by Sequoia Capital, Andreessen Horowitz, and Kleiner Perkins, along with strategic investments from Snowflake and Databricks. We have a fast-growing team that’s distributed around the world, with offices in San Francisco and New York City. Our team has years of experience building and operating business-critical machine learning systems at leading tech companies like Uber, Google, Meta, Airbnb, Lyft, and Twitter.
Tecton’s Realtime Compute team builds streaming infrastructure that provides sub-second data freshness for AI applications in production. In addition to streaming, we offer a production-ready Python runtime that securely runs user code in realtime at scale. This runtime can handle tasks like generating embeddings or calling third-party APIs for information retrieval.
This position is open to candidates based anywhere in the United States. You can work in one of our hub offices in San Francisco, New York City, or Seattle, or work fully remotely from outside those areas within the US.
Responsibilities
Develop advanced streaming capabilities in Rift, such as joins, stateful operations, and native connectors to streaming data sources
Build an integrated observability solution that provides an exceptional operational experience with logs, metrics, and traces
Scale our ingestion platform to handle millions of requests per second with low latency and high availability
Reduce the cold start times of our sandboxed Python execution environment for extremely fast autoscaling
Launch our infrastructure across multiple cloud platforms, ensuring compliance with security protocols and data residency requirements
Assess and prioritize tasks, demonstrating a keen awareness of performance-critical areas
Qualifications
7+ years of experience in programming, debugging, and performance tuning distributed and/or highly concurrent software systems.
Degree in Computer Science, Software Engineering, or a related field, or equivalent practical experience, with strong proficiency in building high throughput infrastructure.
Experience with streaming infrastructure such as Flink, Spark, Pulsar, or Heron.
Experience with Python runtimes, dependency resolution, and container sandboxing.
Experience with at least one of AWS or GCP.
Experience with low-latency online storage such as DynamoDB, Redis, or Bigtable.
Tecton values diversity and is an equal opportunity employer committed to creating an inclusive environment for all employees and applicants without regard to race, color, religion, national origin, gender, sexual orientation, age, marital status, veteran status, disability status, or other applicable legally protected characteristics. If you would like to request any accommodations from the application through to the interview, please contact us at recruitingteam@tecton.ai.
This employer participates in E-Verify and will provide the federal government with your Form I-9 information to confirm that you are authorized to work in the U.S.
Required profile
Experience
Level of experience: Senior (5-10 years)
Spoken language(s): English