Qureos

Senior AI Research Engineer, Model Inference (Remote)

Job Requirements

  • Hires in: Not specified

  • Employment Type: Not specified

  • Company Location: Not specified

  • Salary: Not specified

We are looking for an experienced AI Model Engineer with deep expertise in kernel development, model optimization, fine-tuning, and GPU acceleration. You will extend our inference framework to support inference and fine-tuning for language models, with a strong focus on mobile and integrated GPU acceleration (Vulkan).

This role requires hands-on experience with quantization techniques, LoRA architectures, the Vulkan backend, and mobile GPU debugging. You will play a critical role in pushing the boundaries of desktop and on-device inference and fine-tuning performance for next-generation SLMs/LLMs.

Responsibilities:

  • Implement and optimize custom inference and fine-tuning kernels for small and large language models across multiple hardware backends.

  • Implement and optimize full and LoRA fine-tuning for small and large language models across multiple hardware backends.

  • Design and extend datatype and precision support (int, float, mixed precision, ternary QTypes, etc.).

  • Design, customize, and optimize Vulkan compute shaders for quantized operators and fine-tuning workflows.

  • Investigate and resolve GPU acceleration issues on Vulkan and integrated/mobile GPUs.

  • Architect and prepare support for advanced quantization techniques to improve efficiency and memory usage.

  • Debug and optimize GPU operators (e.g., int8, fp16, fp4, ternary).

  • Integrate and validate quantization workflows for training and inference.

  • Conduct evaluation and benchmarking (e.g., perplexity testing, fine-tuned adapter performance).

  • Conduct GPU testing across desktop and mobile devices.

  • Collaborate with research and engineering teams to prototype, benchmark, and scale new model optimization methods.

  • Deliver production-grade, efficient language model deployment for mobile and edge use cases.

  • Work closely with cross-functional teams to integrate optimized serving and inference frameworks into production pipelines designed for edge and on-device applications. Define clear success metrics, such as improved real-world performance, low error rates, robust scalability, and optimal memory usage, and ensure continuous monitoring and iterative refinement for sustained improvements.

© 2025 Qureos. All rights reserved.