[Job 24194] Mid-Level Data Developer (Affirmative Opening for Women, People with Disabilities, Black People, LGBTQIAPN+), Brazil

Work set-up: Full Remote
Experience: Mid-level (2-5 years)

Offer summary

Qualifications:

  • Experience as a data developer with focus on ETL/ELT pipelines and data modeling.
  • Strong experience with AWS data services (S3, Glue, Lambda, RDS, Redshift, Kinesis).
  • Proficiency in SQL and database design with PostgreSQL or similar relational databases.
  • Knowledge of data security principles, encryption, and access controls.

Key responsibilities:

  • Design and implement ETL pipelines for data migration and integration.
  • Build and maintain data pipelines using AWS services and develop data models for databases.
  • Create data transformation processes for multi-currency calculations and reporting.
  • Collaborate with stakeholders to understand data requirements and ensure data quality.

CI&T
5001 - 10000 Employees

Job description

We are tech transformation specialists, uniting human expertise with AI to create scalable tech solutions.
With over 7,400 CI&Ters around the world, we’ve built partnerships with more than 1,000 clients during our 30 years of history. Artificial Intelligence is our reality.


We are seeking a skilled Mid-Level Data Developer to join our data development team for a digital transformation project. You will play a crucial role in designing and implementing data pipelines, analytics infrastructure, and reporting solutions for a portal. As a Mid-Level Data Developer, you will be responsible for migrating data from legacy systems, building ETL processes, and ensuring data quality and compliance.

Key Responsibilities:
Design and implement ETL pipelines for migrating data from Microsoft Power Automate to a modern data architecture
Build and maintain data pipelines using AWS services (Glue, Lambda, Step Functions, Kinesis)
Develop data models and schemas for PostgreSQL databases supporting multi-tenant architecture
Create data integration processes for external systems (insurers, brokers, regulatory bodies)
Implement data quality checks, validation rules, and monitoring for insurance data accuracy
Build data transformation processes for multi-currency calculations and exchange rate management
Develop analytics data marts for reporting dashboards, KPIs, and loss triangle calculations
Create automated data export processes for Excel reports and regulatory submissions
Implement data archiving and retention policies compliant with insurance regulations
Design and maintain data cataloging and metadata management systems
Build real-time data streaming solutions for notifications and SLA monitoring
Develop data backup and disaster recovery processes ensuring RPO < 1 hour
Create data lineage documentation and impact analysis for regulatory compliance
Implement data masking and anonymization for PII and sensitive insurance information
Monitor data pipeline performance and optimize for scalability across LATAM regions
Collaborate with analysts and business stakeholders to understand data requirements

Requirements for this challenge:
Experience as a data developer with focus on ETL/ELT pipelines and data modeling
Strong experience with AWS data services (S3, Glue, Lambda, RDS, Redshift, Kinesis)
Proficiency in SQL and database design with PostgreSQL or similar relational databases
Experience with Python for data processing and automation scripting
Experience with SharePoint
Knowledge of data pipeline orchestration tools (Apache Airflow, AWS Step Functions)
Understanding of data warehousing concepts and dimensional modeling techniques
Experience with data quality frameworks and validation processes
Familiarity with streaming data processing (Apache Kafka, AWS Kinesis, Apache Spark)
Knowledge of data formats (JSON, Parquet, Avro) and data serialization
Experience with version control (Git) and CI/CD pipelines for data workflows
Understanding of data security principles, encryption, and access controls
Knowledge of data governance practices and metadata management
Strong problem-solving skills and attention to data accuracy and consistency
Experience working in Agile environments with cross-functional teams
Excellent communication skills for collaborating with business stakeholders

Nice to Have:
Experience with Apache Spark for large-scale data processing
Familiarity with data visualization tools (Grafana, Tableau, Power BI)
Knowledge of machine learning pipelines and MLOps practices
Experience with Infrastructure as Code (Terraform, CloudFormation)
Understanding of data lake architecture and modern data stack
Experience with real-time analytics and event-driven architectures
Knowledge of containerization (Docker, Kubernetes) for data workloads
Familiarity with data catalog tools (AWS Glue Catalog, Apache Atlas)
Experience with multi-region data replication and disaster recovery
Understanding of cost optimization strategies for cloud data services



Our benefits:

Health and dental insurance
Meal and food allowance
Childcare assistance
Extended paternity leave
Partnership with gyms and health and wellness professionals via Wellhub (Gympass) and TotalPass
Profit Sharing and Results Participation (PLR)
Life insurance
Continuous learning platform (CI&T University)

Required profile

Experience

Level of experience: Mid-level (2-5 years)
Spoken language(s):
English

Other Skills

  • Communication
  • Problem Solving
