Profile Summary:
We are looking for Big Data Engineers ready to take their careers to the next level. You will evangelize and build data products, simplify critical ML and analytics products to enrich the customer experience, and streamline marketing operations. You will partner with other data engineering and platform teams within AI to lead the architecture, implementation, and operation of big data pipelines and tools for building high-quality data marts.
Responsibilities:
· Architect, design, build, implement, and support data pipelines and products that serve ML and analytical use cases.
· Collaborate with product managers, engineers, data scientists, and analysts on mission-critical property data needs to build world-class datasets.
· Identify opportunities to evangelize and support existing data processes.
· Contribute back to common tooling and infrastructure, enabling self-service that expedites customer onboarding.
Requirements:
· 5+ years of software development experience in Python, Scala, or Java, including experience leading the design and implementation of config-driven, scalable, reliable services and workflows/pipelines using Airflow, Hive, Spark, Kafka, EMR, or equivalents.
· A degree in Computer Science or a related technical field, or equivalent work experience.
· Expertise in establishing and promoting high standards for pipeline monitoring, data validation, testing, etc.
· Extensive experience applying automation to data engineering (DataOps).
· Passion for data engineering/analytics and distributed systems.
· Excellent interpersonal skills and enthusiasm for collaborating across organizational boundaries.
· Comfort distilling informal customer requirements into clear problem definitions, resolving ambiguity, and balancing challenging objectives.
· Excitement about mentoring, coaching, onboarding, and leading teammates.
YozmaTech - Empowering Tech Entrepreneurs
Product Hackers