Staff Software Engineer, Data Ingestion

Work set-up: Full Remote
Experience: Senior (5-10 years)
Offer summary

Qualifications:

  • 6+ years of experience in software development, preferably with a computer science background.
  • Extensive expertise in Python for developing robust, production-grade applications.
  • Proven experience in data collection from diverse sources such as APIs, Kafka, and cloud storage.
  • Solid understanding of distributed systems, scalability, and cloud platform services such as AWS or GCP.

Key responsibilities:

  • Design, develop, and maintain scalable data ingestion pipelines using Python.
  • Integrate data from various sources including databases, APIs, and streaming platforms.
  • Monitor, troubleshoot, and resolve issues in data pipelines to ensure data quality and availability.
  • Collaborate with database engineers and evaluate new technologies to improve data ingestion processes.

BrightEdge (501-1000 employees): https://www.brightedge.com/

Job description

The Staff Software Engineer, Data Ingestion will be a critical individual contributor responsible for designing collection strategies and for developing and maintaining robust, scalable data pipelines. This role is at the heart of our data ecosystem, delivering new analytical software solutions that provide timely, accurate, and complete data for insights, products, and operational efficiency.


Key Responsibilities
  • Design, develop, and maintain high-performance, fault-tolerant data ingestion pipelines using Python.
  • Integrate with diverse data sources (databases, APIs, streaming platforms, cloud storage, etc.).
  • Implement data transformation and cleansing logic during ingestion to ensure data quality.
  • Monitor and troubleshoot data ingestion pipelines, identifying and resolving issues promptly.
  • Collaborate with database engineers to optimize data models for fast consumption.
  • Evaluate and propose new technologies or frameworks to improve ingestion efficiency and reliability.
  • Develop and implement self-healing mechanisms for data pipelines to ensure continuity.
  • Define and uphold SLAs and SLOs for data freshness, completeness, and availability.
  • Participate in on-call rotation as needed for critical data pipeline issues.

Required Skills
  • 6+ years of experience in the software development industry, preferably with a computer science background.
  • Extensive Python Expertise: Extensive experience in developing robust, production-grade applications with Python.
  • Data Collection & Integration: Proven experience collecting data from various sources (REST APIs, OAuth, GraphQL, Kafka, S3, SFTP, etc.).
  • Distributed Systems & Scalability: Strong understanding of distributed systems concepts, designing for scale, performance optimization, and fault tolerance.
  • Cloud Platforms: Experience with major cloud providers (AWS or GCP) and their data-related services (e.g., S3, EC2, Lambda, SQS, Kafka, Cloud Storage, GKE).
  • Database Fundamentals: Solid understanding of relational databases (SQL, schema design, indexing, query optimization). OLAP database experience (e.g., Hadoop) is a plus.
  • Monitoring & Alerting: Experience with monitoring tools (e.g., Prometheus, Grafana) and setting up effective alerts.
  • Version Control: Proficiency with Git.
  • Containerization (Plus): Experience with Docker and Kubernetes.
  • Streaming Technologies (Plus): Experience with real-time data processing using Kafka, Flink, or Spark Streaming.
Required Profile

Experience: Senior (5-10 years)
Spoken language(s): English