Vultr is on a mission to make high-performance cloud infrastructure easy to use, affordable, and locally accessible for enterprises and AI innovators around the world. With 32 global cloud data center locations, Vultr is trusted by hundreds of thousands of active customers across 185 countries for its flexible, scalable, global Cloud Compute, Cloud GPU, Bare Metal, and Cloud Storage solutions. In December 2024 Vultr announced an equity financing at a $3.5 billion valuation. Founded by David Aninowsky and self-funded for over a decade, Vultr has grown to become the world’s largest privately-held cloud infrastructure company.
Vultr Cares
100% company-paid insurance premiums for employee medical, dental, and vision plans.
401(k) plan that matches 100% up to 4%, with immediate vesting
Professional Development Reimbursement of $2,500 each year
11 Holidays + Paid Time Off Accrual + Rollover Plan
Commitment matters to Vultr! Increased PTO at 3-year and 10-year anniversaries + 1 month paid sabbatical every 5 years + Anniversary Bonus each year
$500 stipend for remote office setup in first year + $400 each following year
Internet reimbursement up to $75 per month
Gym membership reimbursement up to $50 per month
Company paid Wellable subscription
Join Vultr
Vultr is seeking a Senior Data Scientist, Capacity Analytics to design and build scalable capacity forecasting and analytics solutions that transform infrastructure telemetry and demand signals into actionable insights for planning and decision-making. In this highly visible role, you will develop end-to-end data pipelines, statistical and machine learning forecasting models, and analytics tools that help optimize infrastructure utilization, prevent capacity shortages, and guide strategic investment decisions. You’ll partner closely with Engineering, Finance, Product, and Operations to translate complex data into trusted metrics, dashboards, APIs, and automated workflows that drive real business outcomes. The ideal candidate is highly hands-on with SQL and Python, experienced working with large-scale time-series and infrastructure datasets, and motivated to solve complex problems that directly impact customer experience, operational efficiency, and the future growth of cloud infrastructure.
Key Responsibilities
Architect and build end-to-end capacity analytics data pipelines (ingestion, transformation, quality checks, feature generation, and serving) across telemetry, inventory/asset, and business demand datasets.
Design, develop, and optimize SQL data models and queries for large-scale time-series and dimensional datasets to enable fast, reliable reporting and analytics.
Develop production-grade Python services and workflows for forecasting, scenario planning, anomaly/sold-out risk detection, and automated capacity gap analysis.
Implement robust data quality, lineage, and observability (validation rules, reconciliation, alerting, and auditability) to ensure trusted “single source of truth” capacity metrics.
Build and maintain forecasting and simulation models that incorporate growth rates, pipeline/renewals/churn, lead times, and supply constraints (racks/power/ports, GPU/CPU pools, storage).
Partner with Infrastructure, SRE/Operations, Procurement, and Finance/FinOps to translate insights into capex recommendations, allocation policies, and operational actions.
Produce executive-ready dashboards and recurring reports (Tableau/Power BI) that clearly communicate utilization, runway, constraints, and monetization opportunities.
Establish standards and reusable libraries for capacity analytics (feature definitions, model evaluation, backtesting, and documentation) and mentor analysts/engineers on best practices.
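As an illustration of the kind of work the responsibilities above describe, the sketch below shows a rolling-origin backtest of a naive growth-rate forecaster over a synthetic utilization series. The function names and the toy model are hypothetical examples, not Vultr's actual tooling.

```python
# Illustrative sketch: rolling-origin backtest of a naive growth-rate
# forecaster. The model and metric (MAPE) are stand-ins for whatever
# forecasting stack the team actually uses.

def naive_growth_forecast(history, horizon):
    """Project forward using the average period-over-period growth ratio."""
    if len(history) < 2:
        return [history[-1]] * horizon
    growth = sum(b / a for a, b in zip(history[:-1], history[1:])) / (len(history) - 1)
    forecasts, last = [], history[-1]
    for _ in range(horizon):
        last *= growth
        forecasts.append(last)
    return forecasts

def rolling_backtest(series, min_train=6, horizon=3):
    """Mean absolute percentage error across rolling forecast origins."""
    errors = []
    for cutoff in range(min_train, len(series) - horizon + 1):
        preds = naive_growth_forecast(series[:cutoff], horizon)
        actuals = series[cutoff:cutoff + horizon]
        errors.extend(abs(p - a) / a for p, a in zip(preds, actuals))
    return sum(errors) / len(errors)

# Example: 18 months of synthetic vCPU utilization growing ~3%/month.
usage = [100 * 1.03 ** i for i in range(18)]
mape = rolling_backtest(usage)
print(f"backtest MAPE: {mape:.4%}")
```

In practice a backtest like this would be run per capacity pool and per model, with the MAPE (or a pinball loss for quantile forecasts) feeding the model-evaluation standards mentioned above.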
Qualifications
7-10+ years building data products for capacity planning, infrastructure analytics, or large-scale operational forecasting (cloud, data centers, networking, storage, or similar).
Expert in SQL (advanced query development & optimization, dimension/time-series modeling, window functions, performance tuning) across modern warehouses/lakes (e.g., Snowflake/BigQuery/ClickHouse).
Strong Python engineering skills for production analytics (data pipelines, API/services, workflow orchestration, testing, packages, and observability).
Proven experience designing end-to-end data pipelines: ingestion, transformation, validation, feature engineering, and serving layers; familiarity with tools like Airflow/Dagster/dbt/Spark (or equivalent).
Deep knowledge of forecasting and statistical modeling (time-series forecasting, backtesting, scenario analysis, uncertainty intervals) and the ability to apply models to real operational constraints.
Practical understanding of infrastructure capacity domains: CPU/vCPU oversubscription, memory contention, storage throughput/capacity, GPU scheduling, rack/power/cooling/ports, and supply lead times.
Demonstrated ability to translate analytics into actionable recommendations (capacity gap detection, capex timing, allocation/reservation policy impacts, monetization/runway reporting).
Strong stakeholder partnership and communication skills; comfortable working with Engineering, SRE/Ops, Finance, Procurement, and Product, and presenting to executives.
Track record of raising the bar on data quality and trust: reconciliation, lineage, monitoring/alerting, clear metric definitions, and documentation.
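The runway and capacity-gap work named in the qualifications above can be sketched as a simple compound-growth calculation: months until demand exhausts usable supply. The safety-headroom parameter and all names here are hypothetical examples, not Vultr policy.

```python
# Illustrative sketch: months-of-runway for a capacity pool, assuming
# compound demand growth against fixed installed supply. The 10%
# headroom default is an invented placeholder, not a real threshold.
import math

def months_of_runway(current_usage, installed_capacity, monthly_growth,
                     safety_headroom=0.10):
    """Months until usage exceeds capacity minus a safety headroom."""
    usable = installed_capacity * (1 - safety_headroom)
    if current_usage >= usable:
        return 0.0  # already at or past the sold-out risk threshold
    if monthly_growth <= 0:
        return math.inf  # flat or shrinking demand: no exhaustion date
    return math.log(usable / current_usage) / math.log(1 + monthly_growth)

# A pool at 700 of 1,000 units, growing 5%/month with 10% headroom:
print(round(months_of_runway(700, 1000, 0.05), 1))  # prints 5.2
```

A runway metric like this, compared against hardware lead times, is what turns a forecast into a capex-timing recommendation.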
Compensation
$110,000 - $140,000
Final compensation will vary depending on years of experience, background/skill set, location, and applicable laws.
We are an equal opportunity employer and are committed to creating an inclusive environment for all employees. We welcome applications from individuals of all backgrounds and experiences, and we prohibit discrimination based on race, color, religion, sex, sexual orientation, gender identity, national origin, age, disability, veteran status, or any other protected status under applicable laws. Vultr will consider qualified applicants with arrest or conviction records in accordance with applicable laws and will not conduct a background check until after an offer of employment has been extended and accepted.
We also take your privacy seriously. We handle personal information responsibly and follow applicable laws, including U.S. privacy rules and India’s Digital Personal Data Protection Act, 2023. Your data is used only for legitimate business purposes and is protected with proper security measures.
Where allowed by law, applicants may request details about the data we collect, access or delete their information, withdraw consent for its use, and opt out of nonessential communications. For more details, please see our Privacy Policy.