Staff Data Engineer, Infrastructure

Benefits: extra holidays, extra parental leave
Remote: Full Remote
Experience: Expert & Leadership (>10 years)
Work from: New York (USA), United States

Offer summary

Qualifications:

  • 12+ years in software development or DevOps
  • 5+ years in data engineering and pipeline processing
  • Expertise in SQL (Redshift, Postgres, etc.)
  • Proficiency in high-level programming languages
  • Experience with CI/CD and workflow management

Key responsibilities:

  • Design and deploy production data pipelines
  • Automate data flow patterns to enhance productivity
  • Develop automation solutions in Python, Spark, Flink
  • Maintain and monitor infrastructure systems
  • Increase efficiency of ETL processes
ASAPP | Information Technology & Services | Scaleup
https://www.asapp.com/
201 - 500 Employees

Job description

Join our team at ASAPP, where we're developing transformative Vertical AI designed to improve customer experience. Recognized by Forbes AI 50, ASAPP designs generative AI solutions that transform the customer engagement practices of Fortune 500 companies. With our automation and simplified work processes, we empower people to reach their full potential and create exceptional experiences for everyone involved. Work with our team of talented researchers, engineers, scientists, and specialists to help solve some of the biggest and most complex problems the world is facing.

The Data Engineering team at ASAPP designs, builds, and maintains our mission-critical core data infrastructure and analytics platform. Accurate, easy-to-access, and secure data is critical to our natural language processing (NLP) customer interaction platform, which interacts with tens of millions of end users in real time.

We’re looking to hire a Staff Data Engineer with a knack for building data infrastructure systems that can handle our ever-growing volumes of data and the demands we want to make of it. Automation is a key part of our workflow, so you’ll help design and build highly available data processing pipelines that self-monitor and report anomalies. You’ll need to be an expert in ETL processes and know the ins and outs of the various data stores that serve data rapidly and securely to all internal and external stakeholders. As part of our fast-growing data engineering team, you will also play an integral role in shaping the future of our data infrastructure as it applies to improving our existing metric-driven development and machine learning capabilities.

Applicants with all or some relevant combination of the requirements listed below are encouraged to apply. We are able to consider remote and hybrid candidates for this role.

What you'll do
  • Design and deploy improvements to our mission-critical production data pipelines, data warehouses, and data systems
  • Recognize data flow patterns and generalizations, automating as much as possible to drive productivity gains
  • Expand our logging and monitoring processes to discover and resolve anomalies and issues before they become problems
  • Develop state-of-the-art automation and data solutions in Python, Spark and Flink
  • Maintain, manage, and monitor our infrastructure, including Kafka, Kubernetes, Spark, Flink, Jenkins, general OLAP and RDBMS databases, S3 object buckets, and permissions
  • Increase the efficiency, accuracy, and repeatability of our ETL processes
  • Know how to make the tradeoffs required to ship without compromising quality

What you'll need
  • 12+ years of experience in general software development and/or DevOps/SRE roles in AWS
  • 5+ years of experience in data engineering, data systems, and pipeline and stream processing
  • Expertise in at least one flavor of SQL, e.g. Redshift, Postgres, MySQL, Presto/Trino, Spark SQL, Hive
  • Proficiency in one or more high-level programming languages. We use Python, Scala, Java, Kotlin, and Go
  • Experience with CI/CD (continuous integration and deployment)
  • Experience with workflow management systems such as Airflow, Oozie, Luigi, and Azkaban
  • Experience implementing data governance, e.g. access management policies, data retention, IAM
  • Confidence operating in a DevOps-like capacity with AWS, Kubernetes, Jenkins, Terraform, and other declarative infrastructure, thinking about automation, alerting, monitoring, and security

What we'd like to see
  • Bachelor's Degree in a field of science, technology, engineering, or math, or equivalent hands-on experience
  • Experience maintaining and managing Kafka (not just using it)
  • Experience maintaining and managing OLAP/HA database systems (not just using them)
  • Familiarity handling Kubernetes clusters for various jobs, apps, and high throughput
  • Technical knowledge of data exchange and serialization formats such as Protobuf, Avro, or Thrift
  • Experience deploying and creating Spark (Scala) and/or Flink applications

ASAPP is committed to creating a diverse environment and is proud to be an equal opportunity employer. All qualified applicants will receive consideration for employment without regard to race, color, religion, gender, gender identity or expression, sexual orientation, national origin, disability, age, or veteran status. If you have a disability and need assistance with our employment application process, please email us at careers@asapp.com to obtain assistance.

Required profile

Experience

Level of experience: Expert & Leadership (>10 years)
Industry: Information Technology & Services
Spoken language(s): English
