Requirements:
Bachelor's degree in Computer Science, Data Science, or a related field.
Proficiency in programming languages such as Python and R.
Experience with machine learning frameworks like TensorFlow or PyTorch.
Strong analytical and problem-solving skills.
Key responsibilities:
Develop and implement machine learning models for various applications.
Collaborate with data scientists and engineers to optimize algorithms.
Conduct experiments to validate model performance and accuracy.
Prepare reports and presentations to communicate findings to stakeholders.
R Systems is a leading digital product engineering company that designs and builds next-gen products, platforms, and digital experiences, empowering clients across various industries to overcome digital barriers, put their customers first, and achieve higher revenue and greater operational efficiency.
We constantly innovate and bring fresh perspectives to harness the power of the latest technologies, such as cloud, automation, AI, ML, analytics, and Mixed Reality. Our 4,400+ technology expeditioners across 26 offices are driven to explore new digital paths, leaving no stone unturned in our quest to deliver business solutions that drive meaningful impact.
Our product mindset, capabilities, and tools allow us to partner with a tech industry that is no longer limited to ISV and SaaS companies but also includes Telecom, Media, FinTech, InsureTech, and HealthTech players, enabling faster new-feature releases with full ownership and integration into the CI/CD pipeline.
R Systems is seeking a talented Edge AI Machine Learning Engineer with specialized expertise in embedded GPU/NPU acceleration to join our team.
The ideal candidate will have extensive hands-on experience in developing and optimizing AI inference models for embedded GPU/NPU architectures. As a Machine Learning Engineer specializing in Edge AI, you will play a crucial role in shaping future Edge AI solutions, leveraging the power of GPU/NPU acceleration and enterprise-grade, large-scale edge computing.
If you are a skilled Edge AI Machine Learning Engineer with a passion for pushing the boundaries of edge computing and GPU/NPU acceleration, we want to hear from you!
Apply now to be part of our dynamic and collaborative team and join us in shaping the future of AI at the edge and revolutionizing industries with innovative Edge AI solutions!
Your Contribution
Develop and optimize AI inference models for deployment on edge devices with embedded GPU/NPU accelerators, focusing on low-latency inference runtimes.
Implement and fine-tune low-latency inference pipelines to meet real-time performance requirements.
Collaborate with cross-functional teams to integrate AI inference solutions into edge computing platforms and applications.
Conduct performance profiling and optimization to maximize the efficiency of GPU/NPU acceleration for Edge AI inference.
Influence the Edge AI strategy by providing expert advice on design and architecture.
Stay current with advancements in GPU, NPU, and Edge AI frameworks, incorporating them into solution designs as appropriate.
Provide technical expertise and support to project teams, ensuring successful implementation and deployment of Edge AI solutions.
Basic Qualifications
Bachelor’s degree in Computer Science, Engineering, or a related field; Master’s degree preferred.
5+ years of hands-on experience in AI model development and deployment, with a focus on edge computing and inference runtime optimization.
Experience developing full Edge / On-Prem AI applications based on NLP, time-series (sensor) processing, etc.
Strong programming skills in Python; C/C++ is a plus.
Proficiency in ML frameworks (e.g., Scikit-learn, TensorFlow, PyTorch, XGBoost), with a focus on edge AI deployment toolchains (e.g., Glow, TFLite, TensorRT); see the brief sketch after this list.
Experience with MLOps frameworks (e.g., Kubeflow, MLflow, TFX, Airflow, H2O).
Extensive experience with GPU/NPU acceleration for AI inference, including optimization techniques (tensor, pipeline, data, and sharded-data parallelism) and performance tuning.
Hands-on experience with one or more GPU/NPU frameworks (CUDA, Vulkan, OpenCL) and familiarity with NVIDIA Jetson, ARM Mali, or relevant SoC configurations.
Knowledge of parallel computation, memory scheduling, and structural optimization.
Excellent problem-solving and analytical skills, with a passion for innovation and continuous learning.
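As a brief, illustrative sketch of the kind of edge deployment toolchain referenced above, assuming a trained TensorFlow SavedModel at a hypothetical path (all names and paths here are illustrative), a typical TFLite conversion with post-training quantization looks like:

    import tensorflow as tf

    # Convert a trained SavedModel (hypothetical path) to TensorFlow Lite,
    # applying post-training dynamic-range quantization, a common first step
    # when targeting embedded GPU/NPU inference runtimes.
    converter = tf.lite.TFLiteConverter.from_saved_model("path/to/saved_model")
    converter.optimizations = [tf.lite.Optimize.DEFAULT]
    tflite_model = converter.convert()

    with open("model.tflite", "wb") as f:
        f.write(tflite_model)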
Additional Skills (Preferred)
Experience with edge device hardware and software integration.
Familiarity with edge computing architectures and IoT platforms.
Experience with edge AI applications in domains such as robotics, autonomous vehicles, or industrial automation.