Requirements:
Experience with Python or Java, along with Spark, Airflow, and Databricks, for building data pipelines
Strong understanding of ETL/ELT architectures and data integration
Hands-on experience with cloud data platforms and data warehouses (Snowflake, Redshift, BigQuery) and cloud storage (e.g., S3) on AWS or Azure
Familiarity with infrastructure-as-code and deploying data solutions in cloud environments
Responsibilities:
Design, build, and maintain scalable data pipelines and ETL/ELT workflows using Spark, Airflow, and Databricks
Develop and optimize data models and warehousing solutions on Snowflake, Redshift, or BigQuery; ensure data quality and governance
Collaborate with data science and analytics teams to translate requirements into data solutions and enable business impact
Support deployment and operations on AWS and Azure, applying infrastructure-as-code practices and ensuring deployment automation
Job description
Launch Your Data Career with Proof, Not Promises, at SynergisticIT
If you're serious about starting a high-impact, high-paying career in Data Science, Analytics, or Engineering, it's time to stop guessing and start choosing results. At SynergisticIT, we don't just offer training; we offer outcomes.
Our candidates have landed jobs at top tech firms like Google, Apple, Client, Visa, PayPal, and countless startups and mid-sized tech teams. Whether you're transitioning from business, healthcare, or teaching, or you're a self-taught coder or bootcamp grad, we help bridge that final gap between potential and placement. See for yourself: Candidate Success Outcomes.
Specializations That Match Market Demand
Data Science Track:
Python, Pandas, NumPy, Scikit-Learn
Machine Learning (supervised/unsupervised), TensorFlow, Deep Learning, NLP
Model deployment, A/B testing, business impact communication
Data Analytics Track:
SQL, Excel, Power BI, Tableau
Exploratory Data Analysis, visualization, statistical inference