Senior Machine Learning Engineer, Scaling and Performance

London / Paris
Research (All Teams) | Permanent | Hybrid
InstaDeep, founded in 2014, is a pioneering AI company at the forefront of innovation. With strategic offices in major cities worldwide, including London, Paris, Berlin, Tunis, Kigali, Cape Town, Boston, and San Francisco, InstaDeep collaborates with giants like Google DeepMind and prestigious educational institutions such as MIT, Stanford, Oxford, UCL, and Imperial College London. We are a Google Cloud Partner and a select NVIDIA Elite Service Delivery Partner, and in 2022 we were listed among Europe's 1000 fastest-growing companies by Statista and the Financial Times. Our recent acquisition by BioNTech has further solidified our commitment to leading the industry.

Join us to be a part of the AI revolution!

The Team:
Our team plays a pivotal role in enhancing the capabilities and efficiency of our advanced AI systems. We design solutions that enable our machine learning models to scale seamlessly and perform optimally in real-world applications and large-scale research. Collaborating across InstaDeep, we directly impact projects in diverse fields, including Life Sciences, Logistics, Chip Design, and Quantum ML.

The Role:
We seek a highly skilled Machine Learning Engineer with a passion for tackling the challenges of large-scale ML development. You'll play a vital role in making our ambitious AI solutions a practical reality. If you thrive on system-level analysis, find joy in squeezing every ounce of performance from hardware, and love diving deep into algorithm optimisation, this is the position for you.

TL;DR:
Train world-class billion-parameter models for some of the most exciting applications of ML in the industry – with minimum development time and maximum hardware utilisation.

Responsibilities

    • Scaling Expertise: Design and implement strategies to efficiently scale machine learning models across diverse hardware platforms (GPU/TPU).
    • Performance Optimisation: Analyse and profile ML systems under heavy load, pinpoint bottlenecks, and implement targeted optimisations.
    • Distributed Systems Architecture: Create robust distributed training and inference solutions for maximum computational efficiency.
    • Algorithmic Optimisation: Research and understand the latest deep learning literature to implement and optimise state-of-the-art algorithms and architectures, ensuring compute efficiency and performance.
    • Low-Level Mastery: Write high-quality Python, C/C++, XLA, Pallas, Triton, and/or CUDA code to achieve performance breakthroughs.

Required Skills

    • Understanding of Linux systems, performance analysis tools, and hardware optimisation techniques.
    • Experience with distributed training frameworks (Ray, Dask, PyTorch Lightning, etc.).
    • Expertise with Python and/or C/C++.
    • Development experience with machine learning frameworks (JAX, TensorFlow, PyTorch, etc.).
    • Passion for profiling, identifying bottlenecks, and delivering efficient solutions.

Highly Desirable

    • Track record of successfully scaling ML models.
    • Experience writing custom CUDA kernels or XLA operations.
    • Understanding of GPU/TPU architectures and their implications for efficient ML systems.
    • Strong fundamentals in modern deep learning.
    • Actively following ML trends and a desire to push boundaries.

Example Projects:

    • Profile algorithm traces, identifying opportunities for custom XLA operations and CUDA kernel development.
    • Implement and apply SOTA architectures (Mamba, Griffin, Hyena) in research and applied projects.
    • Adapt algorithms for large-scale distributed architectures across HPC clusters.
    • Employ memory-efficient techniques within models for increased parameter counts and longer context lengths.

What We Offer:

    • Real-World Impact: Directly contribute to the performance and reach of our AI solutions.
    • Cutting-Edge Challenges: Tackle complex problems at the forefront of machine learning and large-scale system design.
    • Growth-Oriented Environment: Expand your expertise in a team of talented engineers dedicated to advancing ML scalability.
* Important: All applicants must submit their CV/Resume and cover letter in English. *

Our commitment to our people
We empower individuals to celebrate their uniqueness here at InstaDeep. Our team comes from all walks of life, and we’re proud to continue encouraging and supporting applicants from underrepresented groups across the globe. Our commitment to creating an authentic environment comes from our ability to learn and grow from our diversity – and what better way to experience this than by joining our team? We operate on a hybrid work model, with guidance to work from the office at least 2 to 3 days per week to encourage close collaboration and innovation. We continue to review this arrangement with the well-being of InstaDeepers at the forefront of our minds.

Right to work: Please note that you will require the legal right to work in the location you are applying for.