Qureos

Gen AI RAG Engineer

About Our Client

Our client is an established management consulting firm working with government,
financial services, and enterprise clients across the GCC and wider MENA
region.

The Team You'd Be Joining

A rapidly growing, ambitious AI engineering team being built from the ground up
to design and ship the AI products at the centre of the client’s consulting
work. As an early hire in this build, you will help shape both the technical
foundations and the engineering culture of the function.

The Role

This role builds and optimises retrieval-augmented generation pipelines and
LLM-based applications end-to-end — chunking, embeddings, retrieval logic,
prompt orchestration, LLM integration, and the evaluation loops that keep
production AI honest. You are the practitioner who turns a research-paper
architecture into a production system meeting real targets for latency, cost,
accuracy, and hallucination control.

This is a hands-on engineering role, not research. The team ships into live client
engagements against measured outcomes — not demos, not proofs of concept.
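For illustration only, the pipeline stages named above (chunking, embeddings, retrieval, prompt assembly) can be sketched in plain Python. This is a toy, self-contained sketch: the bag-of-words "embedding" stands in for a real embedding model, and all function names are hypothetical, not part of any specific library.

```python
import math
from collections import Counter

def chunk(text, size=40):
    """Split text into fixed-size word chunks (a naive chunking strategy)."""
    words = text.split()
    return [" ".join(words[i:i + size]) for i in range(0, len(words), size)]

def embed(text):
    """Toy bag-of-words 'embedding' standing in for a real embedding model."""
    return Counter(t.strip(".,") for t in text.lower().split())

def cosine(a, b):
    """Cosine similarity between two sparse term-count vectors."""
    dot = sum(a[t] * b[t] for t in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def retrieve(query, chunks, top_k=2):
    """Rank chunks by similarity to the query and keep the best top_k."""
    q = embed(query)
    return sorted(chunks, key=lambda c: cosine(q, embed(c)), reverse=True)[:top_k]

def build_prompt(query, contexts):
    """Assemble the grounded prompt that would be sent to the LLM."""
    context_block = "\n".join(f"- {c}" for c in contexts)
    return f"Answer using only this context:\n{context_block}\n\nQuestion: {query}"

corpus = "Paris is the capital of France. Berlin is the capital of Germany."
chunks = chunk(corpus, size=6)
contexts = retrieve("capital of France", chunks)
prompt = build_prompt("What is the capital of France?", contexts)
```

In production each stage would be swapped for real components (an embedding API, a vector database, an LLM call), but the data flow is the same.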

Mandatory Requirements

Education: Bachelor’s in Computer Science (or a very similar/related discipline) from a Tier 1 / Tier 2 university; Master’s preferred. “Very similar/related” includes Software Engineering, Computer Engineering, Information Systems, Mechatronics, Applied Mathematics with a software focus, and similar substantively quantitative or CS-equivalent engineering disciplines.

Experience: 5–10 years in software engineering, with at least 2 years on production GenAI / LLM applications and 12+ months specifically on retrieval-augmented systems.

Core technical:

  • Strong Python
  • Deep working knowledge of LLM APIs and embeddings
  • Vector databases
  • Prompt engineering
  • API integration

Mandatory certification: “Generative AI with LLMs” (DeepLearning.AI).

Engineering discipline: Production RAG without an evaluation loop is not engineering. Evidence of formal evaluation pipelines (RAGAS, custom eval sets, regression tests) is expected at interview.
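As a minimal sketch of what such an evaluation loop looks like (a hypothetical hit-rate regression check against a gold set; the retriever and index here are toy stand-ins, not a real vector store):

```python
def hit_rate_at_k(retrieve, gold, k=3):
    """Fraction of gold queries whose expected doc appears in the top-k results."""
    hits = 0
    for query, expected_id in gold:
        if expected_id in retrieve(query, k):
            hits += 1
    return hits / len(gold)

# Hypothetical stand-in retriever: a real one would query a vector store.
INDEX = {
    "reset password": ["doc-auth", "doc-faq"],
    "refund policy": ["doc-billing", "doc-faq"],
}

def toy_retrieve(query, k):
    return INDEX.get(query, [])[:k]

GOLD = [("reset password", "doc-auth"), ("refund policy", "doc-billing")]

score = hit_rate_at_k(toy_retrieve, GOLD, k=2)
# A regression gate like this runs in CI before any pipeline change ships.
assert score >= 0.9, f"retrieval regression: hit rate {score:.2f} below threshold"
```

Frameworks such as RAGAS add generation-side metrics (faithfulness, answer relevance) on top of retrieval checks like this one.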

Languages: Proficiency in English is required.

Strong Plus:

  • “LangChain for LLM Application Development” certification (DeepLearning.AI).
  • LangChain or LlamaIndex framework experience at production scale.
  • Reranking models in production use — Cohere Rerank, cross-encoders, ColBERT, or fine-tuned rerankers.
  • Hybrid retrieval design (dense + sparse / BM25 / RRF).
  • Fine-tuned or domain-adapted embedding models.
  • Agentic / tool-use orchestration in production.
  • Hallucination evaluation tooling — RAGAS, TruLens, custom eval pipelines.
  • Cost / latency tuning experience — prompt caching, model routing, semantic cache.
  • Optional certifications: NLP / Transformers (Hugging Face), Vector Databases (Pinecone).
  • A live GitHub with working RAG implementations.
  • Arabic-language RAG experience.
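One of the techniques listed above, Reciprocal Rank Fusion (RRF), is compact enough to sketch. This is an illustrative stand-alone implementation of the standard RRF formula — score(doc) = Σ 1/(k + rank) over the input rankings — with made-up document IDs:

```python
from collections import defaultdict

def rrf_fuse(rankings, k=60):
    """Fuse several ranked lists with Reciprocal Rank Fusion.

    Each document scores sum(1 / (k + rank)) across the lists it appears in,
    with rank starting at 1; k=60 is the commonly used default constant.
    """
    scores = defaultdict(float)
    for ranking in rankings:
        for rank, doc in enumerate(ranking, start=1):
            scores[doc] += 1.0 / (k + rank)
    return sorted(scores, key=scores.get, reverse=True)

dense = ["d2", "d1", "d3"]   # e.g. embedding-similarity order
sparse = ["d1", "d4", "d2"]  # e.g. BM25 order
fused = rrf_fuse([dense, sparse])
```

RRF needs only the rank positions, not the raw scores, which is why it is a popular way to combine dense and sparse retrievers whose score scales are incomparable.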

How to Apply

Click on Apply and complete our application form (designed to be quick and easy). You will be asked to upload your CV and to provide a 300-word narrative describing one RAG pipeline or LLM-based application you took from concept to production — use case, retrieval and orchestration design, evaluation methodology, deployment, and measurable adoption or quality outcomes. A GitHub link with working RAG code is welcomed but optional.

Job Types: Full-time, Permanent

Work Location: In person
