Semi-Senior Data Engineer

Remote: Full Remote

Offer summary

Qualifications:

  • Degree in Systems Engineering, Computer Science, Data Science, Industrial Engineering, or a related field.
  • 4+ years of experience designing, developing, and optimizing data pipelines.
  • Strong proficiency in Python and SQL, with hands-on experience in ETL processes.
  • Advanced English (B2+) for effective communication in an international environment.

Key responsibilities:

  • Design, develop, and optimize scalable and efficient data pipelines.
  • Maintain and improve ETL processes for data ingestion, transformation, and storage.
  • Implement data quality checks to ensure accuracy and consistency.
  • Collaborate with data scientists, analysts, and engineers to ensure smooth data flow.

Procalidad Analytics
Information Technology & Services (SME) · 51-200 employees
http://www.procalidad.com/

Job description

📊 Data is the new gold, and we need a skilled Data Engineer to help us mine it! If you love building scalable data solutions, optimizing pipelines, and working with cloud technologies, this role is for you. Join a dynamic team where innovation, automation, and performance are at the heart of everything we do.

Required Qualifications:

🎓 Education: Degree in Systems Engineering, Computer Science, Data Science, Industrial Engineering, or related fields.
📌 Experience: 4+ years in designing, developing, and optimizing data pipelines.

🗣 Language: Advanced English (B2+) required for effective communication in an international environment.

Technical Expertise:

Programming: Strong proficiency in Python (Pandas, NumPy, PySpark) and SQL (Snowflake, PostgreSQL, MySQL, SQL Server).
Data Pipelines & ETL: Hands-on experience in designing, developing, and maintaining scalable ETL processes and data ingestion/transformation workflows.
Databases: Experience with relational databases and NoSQL stores (e.g., MongoDB, Cassandra).
Cloud & Big Data: Experience with cloud data platforms such as AWS S3, Google BigQuery, and Snowflake; familiarity with big data frameworks (Hadoop, Spark) is a plus.
DevOps & Orchestration: Experience with containerization (Docker) and version control (Git), plus workflow orchestration tools such as Airflow or cron jobs.
Optimization & Performance: Strong knowledge of query optimization, database performance tuning, and best practices in data modeling.
CI/CD Pipelines: Experience in building and maintaining CI/CD pipelines for data solutions.
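
To make these expectations concrete, here is a minimal sketch of the kind of Python/pandas ETL step with built-in quality checks that the posting describes. It is an illustration only: the source file, the column names (order_id, order_date, quantity, unit_price), and the Parquet output are assumptions for the example, not details from the role.

import pandas as pd

def extract(csv_path: str) -> pd.DataFrame:
    # Read a raw batch from a CSV drop (hypothetical source).
    return pd.read_csv(csv_path, parse_dates=["order_date"])

def transform(df: pd.DataFrame) -> pd.DataFrame:
    # Normalize column names and derive a revenue column.
    df = df.rename(columns=str.lower)
    df["revenue"] = df["quantity"] * df["unit_price"]
    return df

def validate(df: pd.DataFrame) -> pd.DataFrame:
    # Data quality checks: reject the batch on null keys or negative revenue.
    if df["order_id"].isnull().any():
        raise ValueError("null order_id found; rejecting batch")
    if (df["revenue"] < 0).any():
        raise ValueError("negative revenue found; rejecting batch")
    return df

def load(df: pd.DataFrame, out_path: str) -> None:
    # Write the cleaned batch to Parquet for downstream consumers.
    df.to_parquet(out_path, index=False)

if __name__ == "__main__":
    load(validate(transform(extract("orders.csv"))), "orders_clean.parquet")

Failing fast on a bad batch, as validate does here, is one common way to keep inaccurate records from propagating downstream; production pipelines often quarantine bad rows instead of rejecting the whole batch.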

Key Responsibilities:

📌 Data Pipeline Development: Design, develop, and optimize scalable and efficient data pipelines.
📌 ETL Optimization: Maintain and improve ETL processes for data ingestion, transformation, and storage.
📌 Data Quality & Validation: Implement data quality checks to ensure accuracy and consistency.
📌 Collaboration: Work closely with data scientists, analysts, and engineers to ensure smooth data flow.
📌 Performance Tuning: Optimize SQL queries for scalability and efficiency.
📌 Cloud Data Solutions: Leverage AWS, GCP, or Azure for data storage and processing.
📌 Automation & Monitoring: Automate workflows using Python scripting and monitor data pipelines for reliability and performance.
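
As one illustration of the automation and monitoring work listed above, the sketch below schedules a daily batch with Apache Airflow (one of the tools the posting names). The DAG id, schedule, and retry policy are hypothetical choices for the example, not the company's actual configuration.

from datetime import datetime, timedelta

from airflow import DAG
from airflow.operators.python import PythonOperator

def run_etl() -> None:
    # Placeholder body; in practice this would call the
    # extract/transform/validate/load steps sketched earlier.
    print("running daily ETL batch")

with DAG(
    dag_id="orders_etl",             # hypothetical pipeline name
    start_date=datetime(2024, 1, 1),
    schedule_interval="@daily",      # one run per day
    catchup=False,                   # do not backfill past dates
    default_args={
        "retries": 2,                         # retry transient failures
        "retry_delay": timedelta(minutes=5),  # wait between attempts
    },
) as dag:
    PythonOperator(task_id="run_etl", python_callable=run_etl)

Retries with a delay are a simple reliability baseline; Airflow's task logs and failure callbacks cover the monitoring side.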

Soft Skills:

💡 Teamwork – Ability to collaborate effectively in a dynamic environment.
🎯 Problem-Solving – Proactive approach to identifying and solving data-related challenges.
⏳ Work Under Pressure – Ability to handle deadlines and ensure smooth operations.
📢 Communication – Strong assertive communication skills to interact with cross-functional teams.
🔍 Accountability & Responsibility – Ownership of tasks and commitment to objectives.

Required profile

Experience

Industry: Information Technology & Services
Spoken language(s): English

Other Skills

  • Calmness Under Pressure
  • Accountability
  • Communication
  • Teamwork
  • Problem Solving
