
Snowflake Data Engineer

Requirements

  • 8+ years of IT experience with 4+ years in data management (data integration, data modeling, data quality, and optimization)
  • Strong experience with advanced analytics tools and scripting languages (R, Python, Java, C++, Scala)
  • Proven ability to design, build, and manage data pipelines, data models, schemas, metadata, and workload management
  • Experience with SQL and database technologies for relational and non-relational data systems, plus relevant certifications (e.g., Snowflake); a brief sketch of this kind of work follows this list
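
As an illustration of the day-to-day work this implies, here is a minimal sketch of querying Snowflake from Python using the snowflake-connector-python package. The account, credentials, and table names are placeholders, not details from this posting.

```python
# Minimal sketch: run a SQL aggregate against Snowflake from Python.
# All connection parameters and object names are placeholders.
import snowflake.connector

conn = snowflake.connector.connect(
    account="my_account",       # placeholder account identifier
    user="my_user",             # placeholder credentials
    password="my_password",
    warehouse="ANALYTICS_WH",   # placeholder compute warehouse
    database="SALES_DB",
    schema="PUBLIC",
)

try:
    cur = conn.cursor()
    # Simple aggregate over a hypothetical ORDERS table.
    cur.execute(
        "SELECT region, COUNT(*) AS order_count "
        "FROM orders GROUP BY region ORDER BY order_count DESC"
    )
    for region, order_count in cur.fetchall():
        print(region, order_count)
finally:
    conn.close()
```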

Roles & Responsibilities:

  • Build data pipelines and optimize data structures, schemas and metadata
  • Drive automation through effective metadata management and modern data preparation, integration and AI-enabled techniques
  • Collaborate across departments and train counterparts in data pipelining and preparation to enable data consumption
  • Participate in ensuring compliance and governance during data use

Position details


Visa status: U.S. Citizens and those authorized to work in the U.S. are encouraged to apply.
Tax Terms: W2, 1099
Corp-Corp or 3rd Parties: Yes

 

Role: Snowflake Data Engineer

Location: Remote

Client: Kimberly Clark

 

Job Description

  • At least 8 years of IT experience and 4 years or more of work experience in data management disciplines including data integration, modeling, optimization and data quality.
  • Strong experience with advanced analytics tools for object-oriented/object-function scripting using languages such as R, Python, Java, C++, or Scala.
  • Strong ability to design, build and manage data pipelines for data structures encompassing data transformation, data models, schemas, metadata and workload management.
  • Strong experience with popular database technologies, including SQL, Blob Storage, and SAP HANA for relational data, and certifications on non-relational platforms such as Snowflake, HDInsight, and Cosmos DB.
  • Strong experience working with large, heterogeneous datasets to build and optimize data pipelines, pipeline architectures, and integrated datasets using traditional data integration technologies (ETL/ELT, data replication/CDC, message-oriented data movement, API design and access) as well as newer ingestion and integration technologies such as stream data integration, CEP, and data virtualization.
  • Strong experience working with and optimizing existing ETL processes and data integration and preparation flows, and helping to move them into production.
  • Strong experience with streaming and message-queuing technologies such as Azure Service Bus and Kafka.
  • Basic experience with popular data discovery, analytics, and BI tools such as Tableau and Power BI for semantic-layer-based data discovery.
  • Strong experience in working with data science teams in refining and optimizing data science and machine learning models and algorithms.
  • Demonstrated success in working with large, heterogeneous datasets to extract business value using popular data preparation tools.
  • Demonstrated ability to work across multiple deployment environments (cloud, on-premises, and hybrid), multiple operating systems, and containerization techniques such as Docker and Kubernetes.

Interpersonal Skills and Characteristics

  • Strong leadership, partnership and communication skills
  • Ability to coordinate with all levels of the firm to design and deliver technical solutions to business problems
  • Ability to influence without authority
  • Prioritization and time management

Additional Technical Skills

  • Data modeling with enterprise data warehouses and data marts, plus Azure Data Lake Storage Gen2 and Blob storage
  • Data engineering experience with Snowflake and Databricks
  • Hands-on experience with SQL, Python, NoSQL, JSON, XML, SSL, and RESTful APIs, and with data formats such as Parquet, ORC, and Avro
  • Hands-on emphasis, with a proven track record of building and evaluating data pipelines and delivering systems to production
  • Exposure to big data analytics and in-memory data processing using Spark (see the sketch after this list)
  • Working experience with various databases such as SAP HANA, Cassandra, and MongoDB
  • Strong understanding of DevOps and of on-premises and cloud deployments
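
Much of the pipeline work described above centers on batch transformation of columnar data. As a purely illustrative sketch rather than a prescribed stack, the following PySpark job reads raw Parquet, applies a simple cleanup and aggregation, and writes a partitioned result; the paths and column names are hypothetical.

```python
# Illustrative PySpark batch step: raw Parquet in, curated Parquet out.
# All paths and column names are hypothetical placeholders.
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.appName("orders-daily-rollup").getOrCreate()

# Read the raw dataset (placeholder S3 path).
raw = spark.read.parquet("s3://example-bucket/raw/orders/")

daily = (
    raw.filter(F.col("status") == "COMPLETE")              # keep finished orders
       .withColumn("order_date", F.to_date("created_at"))  # derive a partition key
       .groupBy("order_date", "region")
       .agg(
           F.sum("amount").alias("revenue"),
           F.count("*").alias("orders"),
       )
)

# Partitioning by date keeps downstream scans cheap.
(
    daily.write
         .mode("overwrite")
         .partitionBy("order_date")
         .parquet("s3://example-bucket/curated/orders_daily/")
)

spark.stop()
```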

Roles and responsibilities:

 

  • Build data pipelines
  • Drive automation through effective metadata management
  • Learn and apply modern data preparation, integration, and AI-enabled metadata management tools and techniques
  • Track data consumption patterns
  • Perform intelligent sampling and caching
  • Monitor schema changes (see the sketch after this list)
  • Recommend, or sometimes even automate, existing and future integration flows
  • Collaborate across departments
  • Train counterparts in these data pipelining and preparation techniques so they can more easily integrate and consume the data they need for their own use cases
  • Participate in ensuring compliance and governance during data use
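
One concrete reading of "monitoring schema changes" is detecting drift between a table's current columns and a previously recorded snapshot. The sketch below is one possible approach, again with placeholder names: the connection parameters, snapshot file, and table name are hypothetical, and it queries Snowflake's INFORMATION_SCHEMA.

```python
# Illustrative schema-drift check against Snowflake's INFORMATION_SCHEMA.
# The connection parameters, snapshot file, and table name are placeholders.
import json
import snowflake.connector

def current_columns(conn, table_name):
    """Return {column_name: data_type} for the table as it exists now."""
    cur = conn.cursor()
    cur.execute(
        "SELECT column_name, data_type "
        "FROM information_schema.columns "
        "WHERE table_name = %s",
        (table_name.upper(),),
    )
    return {name: dtype for name, dtype in cur.fetchall()}

def report_drift(expected, actual):
    """Print added, removed, and retyped columns."""
    for col in sorted(expected.keys() - actual.keys()):
        print(f"column removed: {col}")
    for col in sorted(actual.keys() - expected.keys()):
        print(f"column added:   {col}")
    for col in sorted(expected.keys() & actual.keys()):
        if expected[col] != actual[col]:
            print(f"type changed:   {col}: {expected[col]} -> {actual[col]}")

if __name__ == "__main__":
    conn = snowflake.connector.connect(
        account="my_account", user="my_user", password="my_password",
        database="SALES_DB", schema="PUBLIC",
    )
    try:
        with open("orders_schema.json") as f:   # previously recorded snapshot
            expected = json.load(f)
        report_drift(expected, current_columns(conn, "ORDERS"))
    finally:
        conn.close()
```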
