Location: CA
Employment Type: Full-time
Experience Level: Senior
We are seeking a skilled ETL Developer to design, develop, and maintain robust data integration solutions. The ideal candidate will be responsible for extracting, transforming, cleaning, and loading large volumes of data from multiple sources into data warehouses and analytical platforms. You will work closely with data analysts, engineers, and business teams to ensure the availability, accuracy, and efficiency of enterprise data pipelines.
Responsibilities:
- Design, develop, and implement ETL (Extract, Transform, Load) processes and data integration workflows.
- Work with diverse data sources, including SQL databases, APIs, flat files, and cloud storage systems.
- Develop and maintain data pipelines to ensure reliable and scalable data movement.
- Optimize ETL processes for performance, scalability, and error handling.
- Collaborate with data architects and business analysts to define data models and mappings.
- Perform data profiling, validation, and quality checks to ensure accuracy and consistency.
- Monitor ETL jobs, troubleshoot failures, and perform root cause analysis.
- Maintain documentation for ETL processes, data flows, and transformations.
- Support data warehouse design and ensure data is properly structured for reporting and analytics.
- Participate in code reviews, deployment planning, and continuous improvement initiatives.
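The extract-transform-load cycle described above can be sketched in a few lines of Python (one of the scripting languages this role uses); the CSV layout, the `orders` table, and all column names here are purely illustrative, not part of the role's actual stack:

```python
# Minimal ETL sketch: extract rows from a CSV source, apply basic
# data-quality cleaning, and load them into a SQLite table.
# All names (the "orders" table, the column layout) are illustrative.
import csv
import io
import sqlite3

RAW_CSV = """order_id,amount,region
1, 19.99 ,CA
2,,CA
3,5.00,NY
"""

def extract(source):
    """Extract: read raw rows from the source as dictionaries."""
    return list(csv.DictReader(source))

def transform(rows):
    """Transform: trim whitespace, drop rows missing an amount, parse types."""
    cleaned = []
    for row in rows:
        amount = row["amount"].strip()
        if not amount:  # basic data-quality check: reject incomplete rows
            continue
        cleaned.append((int(row["order_id"]), float(amount), row["region"].strip()))
    return cleaned

def load(conn, rows):
    """Load: write cleaned rows into the warehouse table."""
    conn.execute(
        "CREATE TABLE IF NOT EXISTS orders (order_id INT, amount REAL, region TEXT)"
    )
    conn.executemany("INSERT INTO orders VALUES (?, ?, ?)", rows)
    conn.commit()

conn = sqlite3.connect(":memory:")
load(conn, transform(extract(io.StringIO(RAW_CSV))))
total = conn.execute("SELECT COUNT(*), SUM(amount) FROM orders").fetchone()
print(total)  # row 2 was dropped by the quality check
```

In a production pipeline the same three stages would typically be separate, monitored tasks (e.g., in an orchestrator such as Airflow) rather than a single script, which is what makes failures observable and restartable.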
Qualifications:
- Bachelor's degree in Computer Science, Information Systems, or a related field.
- 3+ years of experience in ETL development, data integration, or data engineering.
- Strong proficiency in SQL and experience with ETL tools such as Informatica, Talend, SSIS, DataStage, Pentaho, or AWS Glue.
- Hands-on experience with data warehousing concepts and relational databases (e.g., Oracle, SQL Server, Snowflake, Redshift, BigQuery).
- Knowledge of scripting languages (e.g., Python, shell) for automation and data manipulation.
- Understanding of data modeling, performance tuning, and error handling.
- Familiarity with cloud platforms (AWS, Azure, or GCP) and modern data pipeline tooling (Airflow, dbt, etc.) is a plus.
- Experience with big data technologies (Spark, Hadoop, Kafka).
- Exposure to API integration and RESTful data sources.
- Knowledge of DevOps and CI/CD practices for data pipeline deployment.
- Strong analytical and problem-solving skills with attention to detail.
What We Offer:
- Opportunity to work on cutting-edge data technologies.
- Collaborative and innovative team environment.
- Competitive salary and benefits package.
- Career growth and learning opportunities in data engineering and analytics.
