
Data Engineer


Job description

Job Title: Data Engineer

Experience: 8 to 10 Years

Time Zone: IST (Indian Standard Time)

Job Type: Remote

Work Location: -

Domain: -




Responsibilities:

  1. Data Pipeline Architecture: Design, develop, and optimize end-to-end data pipelines to extract, transform, and load (ETL) data from various sources into our data warehouse. Ensure data quality, reliability, and performance throughout the pipeline.
  2. Data Modeling and Schema Design: Work with data scientists, analysts, and stakeholders to understand data requirements and create scalable and efficient data models. Implement and maintain database schemas that facilitate easy data access and querying.
  3. Data Integration: Integrate data from diverse internal and external sources, including databases, APIs, and third-party systems. Build connectors and adaptors to ensure seamless data flow between systems.
  4. Performance Optimization: Continuously monitor and fine-tune the performance of data pipelines and databases. Identify bottlenecks and implement optimizations to enhance processing speed and resource utilization.
  5. Data Security and Governance: Implement robust security measures to safeguard sensitive data. Ensure compliance with data protection regulations and industry best practices for data governance and privacy.
  6. Data Transformation and Enrichment: Develop data transformation routines to enrich raw data and make it suitable for analytical processing. Apply data cleansing, aggregation, and normalization techniques as needed.
  7. Data Monitoring and Error Handling: Establish monitoring systems to detect data inconsistencies, anomalies, and errors. Develop automated alerts and error handling processes to ensure data integrity.
  8. Technology Evaluation and Implementation: Stay up-to-date with the latest data engineering technologies and best practices. Evaluate new tools and frameworks, and lead the implementation of suitable technologies to improve data processing efficiency and scalability.
  9. Documentation and Collaboration: Maintain comprehensive documentation for data engineering processes, data dictionaries, and workflows. Collaborate with cross-functional teams to understand their data needs and deliver effective solutions.
  10. Mentoring and Leadership: Mentor and provide guidance to junior data engineering team members. Act as a technical leader, driving innovation and best practices within the data engineering team.
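The pipeline duties above (extract, transform with cleansing and aggregation, load, with basic data-quality handling) can be sketched in miniature. This is a hedged illustration only; the table name, record schema, and in-memory SQLite "warehouse" are hypothetical stand-ins, not the company's actual stack:

```python
import sqlite3


def extract(rows):
    """Extract: in production this would pull from source databases or
    APIs; here we accept an in-memory list of raw records (hypothetical
    schema with customer_id, name, amount)."""
    return rows


def transform(raw):
    """Transform: cleanse (drop records missing an id), normalize names,
    and aggregate order amounts per customer."""
    totals = {}
    for rec in raw:
        if rec.get("customer_id") is None:
            continue  # data cleansing: skip malformed records
        name = rec.get("name", "").strip().title()  # normalization
        key = (rec["customer_id"], name)
        totals[key] = totals.get(key, 0.0) + float(rec.get("amount", 0))
    return [(cid, name, amt) for (cid, name), amt in totals.items()]


def load(rows, conn):
    """Load: write the transformed rows into a warehouse table."""
    conn.execute(
        "CREATE TABLE IF NOT EXISTS customer_totals "
        "(customer_id INTEGER, name TEXT, total REAL)"
    )
    conn.executemany("INSERT INTO customer_totals VALUES (?, ?, ?)", rows)
    conn.commit()


if __name__ == "__main__":
    raw = [
        {"customer_id": 1, "name": " alice ", "amount": 10.0},
        {"customer_id": 1, "name": "Alice", "amount": 5.0},
        {"customer_id": None, "name": "ghost", "amount": 99.0},  # dropped
    ]
    conn = sqlite3.connect(":memory:")
    load(transform(extract(raw)), conn)
```

A real pipeline would wrap each stage in an orchestrator task (e.g., an Airflow DAG) and replace the in-memory list and SQLite connection with actual source and warehouse connectors.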

Requirements:

  • Bachelor's or Master's degree in Computer Science, Engineering, or a related field.
  • Proven track record of at least 8 years of experience in data engineering roles.
  • Expertise in building and maintaining data pipelines using ETL tools like Apache Spark, Apache Airflow, or similar.
  • Strong proficiency in SQL and database technologies (e.g., PostgreSQL, MySQL, NoSQL databases).
  • Extensive experience with cloud-based data platforms, such as AWS, Azure, or Google Cloud Platform.
  • Solid understanding of data modeling concepts and data warehousing principles.
  • Proficiency in at least one programming language (e.g., Python, Java) for data manipulation and automation.
  • Experience with data streaming technologies (e.g., Kafka, Kinesis) is a plus.
  • Familiarity with containerization and orchestration tools (e.g., Docker, Kubernetes) is beneficial.
  • Excellent problem-solving skills and the ability to work independently and as part of a team.
  • Strong communication and interpersonal skills to collaborate effectively with stakeholders across the organization.
