Staff Engineer – Data Platform and Lakehouse

Work set-up: 
Full Remote
Contract: 
Salary: 
170 - 190K yearly
Experience: 
Senior (5-10 years)
Work from: 

Offer summary

Qualifications:

  • 15+ years of experience in software development and full SDLC.
  • Proficiency in Python and SQL with strong software engineering skills.
  • Extensive experience with distributed frameworks like Apache Spark and cloud-native data architectures.
  • Hands-on experience with Databricks, AWS, and data pipeline orchestration tools.

Key responsibilities:

  • Design and implement scalable, secure data platforms on cloud infrastructure.
  • Lead architecture and ensure best practices in data engineering and governance.
  • Support AI/ML initiatives through robust data engineering and feature engineering.
  • Collaborate with cross-functional teams to translate business needs into platform capabilities.


Job description

Company Description

Curinos empowers financial institutions to make better, faster and more profitable decisions through industry-leading proprietary data, technologies and insights. With decades-long expertise in the financial services industry and a relentless focus on the future, Curinos' technology and analytics ecosystem allows clients to anticipate customer needs and optimize their go-to-market decisions in an increasingly competitive market.

Curinos operates in a hybrid-remote model, and this position is fully remote in the United States or hybrid in the Greater New York, Boston or Chicago metropolitan areas.

Job Description

We are seeking an experienced Staff Engineer – Data Platform and Lakehouse to lead the design and implementation of our cloud-native data platform that supports AI, advanced analytics, and machine learning applications across our ecosystem of B2B SaaS applications in the FinTech vertical. You’ll work with a diverse team of talented engineers, AI and ML scientists, and product managers to build our next-generation data and AI platforms, support migration of products from legacy infrastructure, and help product engineering teams leverage the Data Platform and Lakehouse to launch new products and build new genAI applications. As a company that specializes in data-driven insights, the reliability, scalability, and effectiveness of our data & AI platforms are integral to our product offerings. This role requires deep technical expertise, strategic thinking, and strong collaboration across engineering, data science, and product teams.

Responsibilities:

  • Design and implement scalable, secure, and maintainable data platforms on Databricks and AWS cloud infrastructure.
  • Provide architectural leadership across engineering domains, ensuring consistency, scalability, and resilience
  • Architect distributed data processing systems using Apache Spark and optimize for performance and scalability.
  • Lead development of reusable data pipelines and workflows using Databricks Workflows.
  • Translate business objectives into platform capabilities in collaboration with Product Managers and cross-functional teams.
  • Support AI/ML initiatives through robust data engineering, including feature engineering and model deployment.
  • Champion best practices in ETL/ELT, data quality, monitoring, observability, and Agile development.
  • Drive adoption of data governance standards: access control, metadata management, lineage, and compliance.
  • Establish and maintain CI/CD pipelines and DevOps automation for data infrastructure.
  • Evaluate and integrate emerging technologies to enhance development, testing, deployment, and monitoring.
  • Salary Range: $170,000 - $190,000 (plus bonus)

Qualifications

Desired Skills and Expertise:

Required profile

Experience

Level of experience: Senior (5-10 years)
Spoken language(s):
English

Other Skills

  • Strategic Thinking
  • Collaboration
  • Communication
