As brands have become publishers, the digital world has become their most important distribution channel. The BrightEdge Content Performance Marketing platform helps brands target demand, create and optimize content, and measure results to win on the content battleground. BrightEdge transforms online content into tangible business results, such as traffic, revenue, and engagement. Powered by DataMind, our Artificial Intelligence engine, BrightEdge is the only platform capable of web-wide, real-time measurement of digital content engagement across all digital channels, including search, social, and mobile.
Our platform processes massive volumes of data to deliver actionable insights to our clients. We're looking for a talented Big Data Engineer to join our Professional Services team and help us scale and optimize our data processing capabilities.
Role Overview
As a Big Data Engineer at BrightEdge, you will design, build, and maintain high-performance data pipelines that process terabytes of data. You'll work on optimizing our existing systems, identifying and resolving performance bottlenecks, and implementing solutions that improve the overall efficiency of our platform. This role is critical in ensuring our data infrastructure can handle increasing volumes of data while maintaining exceptional performance standards.
Key Responsibilities
Design and implement scalable batch processing systems using Python and big data technologies
Optimize database performance, focusing on slow-running queries and latency improvements
Use Python profilers and performance monitoring tools to identify bottlenecks
Reduce P95 and P99 latency metrics across our data platform
Build efficient ETL pipelines that can handle large-scale data processing
Collaborate with data scientists and product teams to understand data requirements
Monitor and troubleshoot data pipeline issues in production
Implement data quality checks and validation mechanisms
Document data architecture and engineering processes
Stay current with emerging big data technologies and best practices
Qualifications
Required
Bachelor's degree in Computer Science, Engineering, or related technical field
4+ years of experience in data engineering roles
Strong Python programming skills with focus on data processing libraries
Experience with big data technologies (Spark, Hadoop, etc.)
Proven experience optimizing database performance (SQL or NoSQL)
Knowledge of data pipeline orchestration tools (Airflow, Luigi, etc.)
Understanding of performance optimization techniques and profiling tools
Preferred
Master's degree in Computer Science or related field
Experience with SEO data or web crawling systems
Experience with the ClickHouse database
Knowledge of distributed systems and microservices architecture
Familiarity with containerization and orchestration (Docker, Kubernetes)