Collaborate as part of a cross-functional Agile team to create and enhance software that enables state-of-the-art, next-generation Big Data and Fast Data applications.
Build software and frameworks to automate high-volume and real-time data delivery between our cloud-based data platforms and applications.
Build data APIs and data delivery services that support critical operational and analytical applications for our internal business operations, customers, and partners.
Leverage DevOps practices such as Continuous Integration, Continuous Deployment, Test Automation, Build Automation, and Test-Driven Development to enable rapid software delivery, using tools such as Jenkins, Maven, Nexus, Chef, Terraform, Ruby, Git, and Docker.
Write unit tests and conduct code reviews with other team members to ensure the code is rigorously designed, elegantly coded, and effectively tuned for performance (see the unit-test sketch after this list).
Develop and deploy distributed data-processing applications using Spark or PySpark (a minimal ETL sketch follows this list).
Use programming languages such as Python, NoSQL databases, and cloud-based data warehousing services such as Redshift and Snowflake (see the warehouse query sketch after this list).
Apply hands-on SQL and ETL development skills across these responsibilities.
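
The unit-testing practice named above could look like the following pytest sketch; the function under test and its behavior are hypothetical, chosen only to illustrate the technique.

```python
# Minimal pytest sketch: a small data-cleaning helper plus a unit test.
# The function name and behavior are illustrative, not from this posting.

def dedupe_records(records: list[dict], key: str) -> list[dict]:
    """Drop records whose `key` value has already been seen, keeping the first."""
    seen = set()
    result = []
    for record in records:
        if record[key] not in seen:
            seen.add(record[key])
            result.append(record)
    return result

def test_dedupe_keeps_first_occurrence():
    rows = [{"id": 1, "v": "a"}, {"id": 1, "v": "b"}, {"id": 2, "v": "c"}]
    assert dedupe_records(rows, key="id") == [
        {"id": 1, "v": "a"},
        {"id": 2, "v": "c"},
    ]
```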
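For the Spark/PySpark responsibility, a minimal PySpark ETL job might read as follows; the S3 paths, column names, and aggregation are assumptions made for illustration.

```python
# PySpark ETL sketch: extract raw events, transform, load curated Parquet.
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.appName("orders-etl").getOrCreate()

# Extract: read raw JSON events from a (hypothetical) landing bucket.
raw = spark.read.json("s3://example-landing-bucket/orders/")

# Transform: keep completed orders and compute per-customer daily totals.
daily_totals = (
    raw.filter(F.col("status") == "COMPLETED")
       .withColumn("order_date", F.to_date("order_ts"))
       .groupBy("customer_id", "order_date")
       .agg(F.sum("amount").alias("total_amount"))
)

# Load: write partitioned Parquet for downstream warehouse ingestion.
daily_totals.write.mode("overwrite").partitionBy("order_date").parquet(
    "s3://example-curated-bucket/orders_daily/"
)
```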
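For the warehousing bullet, one common pattern is querying Redshift from Python through the boto3 Redshift Data API; the cluster, database, user, and table names below are placeholders.

```python
# Sketch: run SQL against Redshift via the boto3 Redshift Data API.
import time
import boto3

client = boto3.client("redshift-data", region_name="us-east-1")

# Submit a SQL statement; all identifiers here are placeholders.
response = client.execute_statement(
    ClusterIdentifier="example-cluster",
    Database="analytics",
    DbUser="etl_user",
    Sql="SELECT customer_id, SUM(amount) AS total FROM orders GROUP BY customer_id",
)
statement_id = response["Id"]

# Poll until the statement reaches a terminal state.
status = client.describe_statement(Id=statement_id)["Status"]
while status not in ("FINISHED", "FAILED", "ABORTED"):
    time.sleep(1)
    status = client.describe_statement(Id=statement_id)["Status"]

# Fetch the result set only on success.
if status == "FINISHED":
    result = client.get_statement_result(Id=statement_id)
    for row in result["Records"]:
        print(row)
```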
Basic Qualifications:
Bachelor's Degree or military experience
At least 3 years of professional work experience in data engineering
At least 3 years of experience with Python
At least 3 years of experience with SQL
At least 3 years of experience with Spark or Pyspark
At least 3 years of experience with ETL development
At least 2 years of experience working with cloud data services on AWS (see the sketch after this list)
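
As one sketch of routine AWS data work, the boto3 snippet below lists objects under a curated S3 prefix; the bucket and prefix names are hypothetical.

```python
# boto3 sketch: enumerate Parquet files under a curated S3 prefix.
import boto3

s3 = boto3.client("s3")
paginator = s3.get_paginator("list_objects_v2")

# Bucket and prefix are placeholders for illustration.
for page in paginator.paginate(Bucket="example-curated-bucket", Prefix="orders_daily/"):
    for obj in page.get("Contents", []):
        if obj["Key"].endswith(".parquet"):
            print(obj["Key"], obj["Size"])
```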
Required profile
Experience
Level of experience: Mid-level (2-5 years)
Spoken language(s): English