Data Engineer

Work set-up: Full Remote
Experience: Mid-level (2-5 years)

Offer summary

Qualifications:

  • Bachelor's degree in Computer Science, Statistics, Informatics, or a related field.
  • 3-6 years of experience as a Data Engineer.
  • Proficiency in SQL and experience with relational and NoSQL databases.
  • Experience with big data tools like Spark and Hadoop, and cloud platforms such as AWS.

Key responsibilities:

  • Build and optimize data pipelines and architectures.
  • Perform root cause analysis on data and processes.
  • Support data transformation, metadata, and workload management.
  • Collaborate with cross-functional teams in a dynamic environment.

CodersBrain SME https://www.codersbrain.com/
201 - 500 Employees

Job description

  • Advanced working SQL knowledge and experience working with relational databases, including query authoring (SQL) and working familiarity with a variety of databases.
  • Experience building and optimizing big data pipelines, architectures, and data sets (a brief illustrative sketch follows this list).
  • Experience performing root cause analysis on internal and external data and processes to answer specific business questions and identify opportunities for improvement.
  • Strong analytic skills related to working with unstructured datasets.
  • Experience building processes supporting data transformation, data structures, metadata, dependency, and workload management.
  • A successful history of manipulating, processing, and extracting value from large, disconnected datasets.
  • Working knowledge of message queuing, stream processing, and highly scalable 'big data' data stores.
  • Experience supporting and working with cross-functional teams in a dynamic environment.
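
      To make the pipeline-building requirement above concrete, here is a minimal PySpark sketch of the kind of batch transformation such a role involves. It is illustrative only: the input path, column names, and output location are hypothetical, not taken from this posting.

      ```python
      from pyspark.sql import SparkSession
      from pyspark.sql import functions as F

      # Hypothetical batch job: roll raw events up into daily counts per user.
      spark = SparkSession.builder.appName("daily_event_rollup").getOrCreate()

      # Placeholder S3 path, not from the posting.
      events = spark.read.parquet("s3://example-bucket/raw/events/")

      daily_counts = (
          events
          .withColumn("event_date", F.to_date("event_timestamp"))
          .groupBy("event_date", "user_id")
          .agg(F.count("*").alias("event_count"))
      )

      # Partitioning the output by date keeps downstream reads cheap.
      daily_counts.write.mode("overwrite").partitionBy("event_date").parquet(
          "s3://example-bucket/curated/daily_event_counts/"
      )
      ```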

We are looking for a candidate with 3-6 years of experience in a Data Engineer role who has attained a graduate degree in Computer Science, Statistics, Informatics, Information Systems, or another quantitative field. They should also have experience using the following software/tools:

  • Develop and improve the current data architecture using AWS Redshift, AWS S3, AWS Aurora (Postgres), Spark, AWS Glue, and Hadoop/EMR.
  • Improve the data ingestion models, ETL jobs, and alarming to maintain data integrity and data availability.
  • Experience with relational SQL and NoSQL databases, including MySQL and MongoDB.
  • Experience with data pipeline and workflow management tools such as Airflow (a minimal DAG sketch follows this list).
  • Experience with stream-processing systems: Kinesis, Spark Streaming, etc.
  • Experience with object-oriented/object function scripting languages: Python, Java, C++, Scala, etc.
  • Strong project management and organizational skills.
  • Experience supporting and working with cross-functional teams in a dynamic environment.
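
      As a concrete reference for the workflow-management requirement above, below is a minimal sketch of an Airflow DAG. It assumes Airflow 2.x; the DAG ID, schedule, and task callables are hypothetical placeholders.

      ```python
      from datetime import datetime

      from airflow import DAG
      from airflow.operators.python import PythonOperator

      # Placeholder task logic; a real pipeline would call extract/load code here.
      def extract():
          print("pull raw data from the source system")

      def transform():
          print("clean and aggregate the extracted data")

      with DAG(
          dag_id="example_etl",          # hypothetical DAG ID
          start_date=datetime(2024, 1, 1),
          schedule_interval="@daily",    # run once per day
          catchup=False,
      ) as dag:
          extract_task = PythonOperator(task_id="extract", python_callable=extract)
          transform_task = PythonOperator(task_id="transform", python_callable=transform)

          # extract must finish before transform starts
          extract_task >> transform_task
      ```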

Required profile

Experience

Level of experience: Mid-level (2-5 years)
Spoken language(s):
English

Other Skills

  • Organizational Skills
  • Analytical Skills
