
Senior MLOps Engineer

Work flexibility: fully flexible
Remote: Full Remote

Offer summary

Qualifications:

  • 5+ years of experience in MLOps, AI infrastructure, or DevOps.
  • Deep expertise in AWS, Helm, Kubernetes, Docker, and Terraform.
  • Extensive hands-on experience with ML model serving frameworks such as TensorFlow Serving and TorchServe.
  • Strong background in AI/ML pipeline orchestration and automation tools such as Kubeflow Pipelines and MLflow.

Key responsibilities:

  • Design and maintain scalable MLOps infrastructure for model deployment and monitoring.
  • Develop and optimize model serving infrastructure for real-time inference and API-based services.
  • Establish best practices for AI observability and monitoring to track model performance.
  • Foster a culture of technical excellence by mentoring teams and sharing best practices.

neoshare AG (Scaleup): https://www.neoshare.de/
51 - 200 Employees

Job description

Your mission

The AI teams at neoshare design and build cutting-edge solutions that transform how our customers collaborate on financing and transaction cases. We turn vast collections of documents into structured insights, empower users to interact with their data in natural language, and enhance transparency, efficiency, and decision-making. Our goal is not just to automate but to elevate, giving our customers greater control, clarity, and even joy in their workflows.

As an MLOps Engineer at neoshare, you will be at the core of scaling AI into production, ensuring that models are efficiently deployed, monitored, and continuously improved. You will work at the intersection of AI and DevOps, designing scalable ML pipelines, automating workflows, and enabling seamless AI operations across teams.

Where your experience is needed
  • Design and maintain scalable, reliable, and automated MLOps infrastructure, enabling seamless model deployment, versioning, and monitoring. Build self-service tools that empower AI teams to deploy models efficiently while ensuring high availability and operational excellence.

  • Develop and optimize model serving infrastructure for real-time inference, batch processing, and API-based AI services. Ensure low-latency, high-throughput execution across cloud and on-prem environments while collaborating with DevOps to scale AI workloads effectively.

  • Establish best practices for AI observability and monitoring, implementing tools to track model drift, accuracy, inference speed, and reliability. Drive continuous improvements in performance and stability, ensuring models operate securely and efficiently in production.

  • Foster a culture of technical excellence and collaboration. Share knowledge, refine best practices, and guide teams in adopting cutting-edge MLOps solutions that streamline AI development and deployment.

Your profile
  • 5+ years of blended industry experience in MLOps, AI infrastructure, or DevOps, with a strong track record of building and scaling machine learning pipelines, deploying models in production, and optimizing AI workflows in cloud environments.

  • 2+ years in a role focused primarily on MLOps work.

  • Deep expertise in AWS and Helm deployments, with proficiency in Kubernetes, Docker, and Terraform. Experience in serverless AI architectures and GPU/TPU-accelerated workloads is a plus.

  • Extensive hands-on experience in ML model serving frameworks such as TensorFlow Serving, TorchServe, and KFServing, ensuring low-latency, high-throughput AI services for real-world applications.

  • Strong background in AI/ML pipeline orchestration, model management, and ETL pipelines, with expertise in automating model training, validation, and deployment using tools like Kubeflow Pipelines, MLflow, Dagster, or Prefect.

  • Passion for streamlining MLOps workflows and enabling AI teams to iterate and deploy seamlessly.

  • A mindset for mentorship and collaboration, actively guiding teams and sharing best practices to foster technical excellence in AI infrastructure and deployment.

Why us?
  • Flexible working hours: Manage your workday autonomously. 
  • neoshare-Health: We offer our employees additional health insurance with dental coverage and a Multisport card. 
  • Remote-Work: While our beautiful Sofia office is always open, we make it possible to work remotely with no fixed office days. 
  • Equipment: Our employees can choose their hardware (between MacBook Pro and Lenovo). 
  • Vacation: We offer 26 days paid leave. 
  • Bonus: We offer a 13th salary in December.  
If you are interested, please send us your CV in English!
About us
neoshare AG, founded in 2019 in Munich, has quickly evolved into an international fintech company and now operates locations in Munich, Düsseldorf, and Sofia, Bulgaria. As an “AI-First Company,” it offers an innovative end-to-end solution with its SaaS platform "neoshare" for the efficient digitization and management of large-scale project and real estate financing. In close collaboration with banks and real estate companies, the product is continuously developed to sustainably transform the financial sector. 

Required profile

Spoken language(s): English

Other Skills

  • Mentorship
  • Collaboration
