Description
● Focus Area: Data Collection
The team will take full ownership of all aspects of data collection, including:
○ Development and maintenance of crawlers and tracking systems.
○ Handling documentation, monitoring, and ensuring seamless integration with relevant vendors (e.g., proxy providers, crawler tools, and data API providers).
● Responsibilities:
○ Development, maintenance, and monitoring of data collection infrastructure.
○ Collaborating with the backend team to optimize data collection processes.
○ Participating in on-call shifts to ensure continuous support for data crawling and tracking operations.
○ Providing support for all issues related to data crawling and tracking.
Requirements
● Strong English proficiency for both verbal and written communication.
● Excellent communication skills, with the ability to articulate ideas clearly and work effectively in a team environment.
Soft Skills
● Organized and able to handle multiple tasks efficiently.
● Strong attention to detail with a proactive approach to identifying and solving problems.
● A hunger for success and a strong drive to excel in a dynamic environment.
● Experience working in remote positions, with the ability to manage tasks and communicate effectively in a distributed team.
● At least 1 year of experience as a team lead, managing and guiding teams to achieve goals.
Technical Expertise
● Proficiency in Python – mandatory, with at least 3 years of experience.
● Experience with databases and storage systems (e.g., MongoDB, PostgreSQL, S3).
● Knowledge of web scraping tools and frameworks, including Selenium.
● Familiarity with client-server and distributed task architectures in Python, particularly Celery.
● Hands-on experience with messaging technologies, such as Kafka and Kombu.
● Strong understanding of object-oriented programming (OOP) principles.
● Experience with AWS services, including ECS and EC2.
● Knowledge of Docker and containerization.
● Practical experience with Linux.
● Experience with Jira for task management and team organization.