Education
A master’s degree in computer science, data science, software engineering, or a related field.
Experience
At least three years of experience in BI development, data analytics, data engineering, software engineering, or a similar role.
Expertise in data modelling, ETL development, data architecture, and master data management.
Strong experience with data management architectures such as data warehouse, data lake, and lakehouse architectures, as well as data fabric and data mesh concepts, and with supporting processes such as data integration, MPP engines, governance, and metadata management.
Intermediate experience with Apache technologies such as Spark, Kafka, and Airflow for building scalable, efficient data pipelines.
Strong experience in designing, building, and deploying data solutions that capture, explore, transform, and utilize data to create data products and support data-informed initiatives. Proficiency in ETL/ELT, data replication/CDC, message-oriented data movement, API design and access, and emerging data ingestion and integration technologies such as streaming data integration and data virtualization.
Basic knowledge of and ability with data science languages and tools such as R, Python, TensorFlow, Databricks, Dataiku, or SAS.
Proficiency in designing and implementing modern data architectures and concepts, including cloud services (e.g., AWS, OCI, Azure, GCP) and modern data warehouse tools (e.g., Snowflake, Databricks).
Strong experience with database technologies, both SQL and NoSQL, such as PostgreSQL, Oracle, Hadoop, and Teradata.
Intermediate experience with popular data discovery, analytics, and BI tools such as Power BI, Tableau, Qlik Sense, Looker, ThoughtSpot, or MicroStrategy for semantic-layer-based data discovery is an advantage.
Expert problem-solving and debugging skills, including the ability to trace the source of issues in unfamiliar code or systems and to recognize and resolve recurring problems.