We provide nearshoring of fully dedicated IT professionals from Latin America and build remarkable, process-driven, user-centered software that helps companies scale their teams quickly while meeting the highest customer-satisfaction standards.
We have 14+ years of experience developing software and nearshoring for global companies such as Volkswagen, FIAT, and IVECO, as well as government agencies and other organizations around the globe.
This job opportunity is only available to residents of Latin America.
Requirements:
B2 English Level or higher.
5+ years of experience in Data Engineering roles.
4+ years of experience with object-oriented or functional scripting languages such as Python.
Experience with Big Data tools such as Hadoop, Spark, and Kafka.
Proven experience with both relational SQL and NoSQL databases, such as PostgreSQL, Cassandra, or MongoDB.
Experience with Azure/AWS cloud services is a must.
Responsibilities:
Designing and developing scalable, efficient, and reliable data pipelines to extract, transform, and load data from various sources into a target system.
Ingesting data from various sources such as databases, files, APIs, or social media platforms using Python libraries like pandas, NumPy, and requests (see the sketch after this list).
Designing and implementing data storage solutions using relational databases like MySQL or PostgreSQL, NoSQL databases like MongoDB or Cassandra, or big data platforms like Hadoop or Spark.
Working closely with data scientists to understand their requirements and implement data pipelines that meet their needs.
Ensuring the security of the data pipeline by implementing access controls, encryption, and authentication mechanisms.
Working closely with DevOps teams to ensure smooth deployment of the data pipeline to production environments such as AWS or Azure.
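For illustration only, here is a minimal sketch of the kind of pipeline described above: it extracts records from a hypothetical JSON API with requests, cleans them with pandas, and appends them to a hypothetical PostgreSQL table via SQLAlchemy. The endpoint, connection string, and table name are placeholders, not details from this posting.

    import pandas as pd
    import requests
    from sqlalchemy import create_engine

    # Hypothetical source endpoint and target database; replace with real values.
    API_URL = "https://api.example.com/users"
    DB_URL = "postgresql://etl_user:secret@localhost:5432/analytics"  # requires psycopg2

    def extract(url: str) -> pd.DataFrame:
        """Pull raw records from an HTTP API (assumed to return a JSON array)."""
        response = requests.get(url, timeout=30)
        response.raise_for_status()  # fail fast on HTTP errors
        return pd.DataFrame(response.json())

    def transform(df: pd.DataFrame) -> pd.DataFrame:
        """Basic cleaning: drop duplicates, normalize column names, parse dates."""
        df = df.drop_duplicates()
        df.columns = [c.strip().lower() for c in df.columns]
        if "created_at" in df.columns:
            df["created_at"] = pd.to_datetime(df["created_at"], errors="coerce")
        return df

    def load(df: pd.DataFrame, table: str) -> None:
        """Append the cleaned records to a PostgreSQL table."""
        engine = create_engine(DB_URL)
        df.to_sql(table, engine, if_exists="append", index=False)

    if __name__ == "__main__":
        load(transform(extract(API_URL)), "users")

In production, the same extract/transform/load split maps naturally onto an orchestrator, with the access controls and encryption mentioned above applied at the storage and transport layers.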
What do we offer?
Type of contract: Independent Contractor with Venon Solutions LLC
Contract duration: Long-term
Benefits: 2 weeks of PTO (paid time off)
Holidays: from the North American calendar
Working hours: Full-time on EST, fully dedicated (flexible hours).
Required profile
Experience
Level of experience: Senior (5-10 years)
Spoken language(s): English