The ideal candidate will be a Python expert with a primary focus on AI/ML. They will also be responsible for maintaining existing applications and collaborating with cross-functional teams.
Responsibilities
- Design, develop, and optimize backend services and microservices using Python (FastAPI/Flask) following best practices for scalability, security, and maintainability (a brief illustrative FastAPI sketch follows this list).
- Build and maintain robust RESTful APIs, including authentication/authorization, rate limiting, logging, and monitoring.
- Architect and deploy cloud-native applications using AWS (Lambda, ECS/EKS, API Gateway, S3, CloudWatch, RDS/DynamoDB) or equivalent cloud platforms.
- Work with Docker/Kubernetes to containerize, orchestrate, and scale backend applications in production environments.
- Design efficient data models and integrate with relational and NoSQL databases (PostgreSQL, MySQL, DynamoDB).
- Implement CI/CD pipelines and automated deployments using GitHub Actions, GitLab, Azure DevOps, or similar tooling.
- Collaborate with product managers and engineering teams to integrate backend systems into enterprise workflows.
- Monitor and evaluate deployed services, improving performance, reliability, and cost efficiency based on metrics and user feedback.
- Document backend architecture, APIs, deployment processes, and operational guidelines for maintainability and reuse.
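As a hedged illustration of the FastAPI work described above, the sketch below shows a minimal endpoint with Pydantic validation and basic request logging. The Item model, the /items route, and the in-memory store are hypothetical stand-ins, not project code.

```python
# Illustrative sketch only: a minimal FastAPI service with Pydantic validation
# and basic logging. The Item model, /items route, and in-memory store are
# hypothetical placeholders, not part of any actual codebase.
import logging

from fastapi import FastAPI, HTTPException
from pydantic import BaseModel, Field

logging.basicConfig(level=logging.INFO)
logger = logging.getLogger("example-backend")

app = FastAPI(title="example-backend")

class Item(BaseModel):
    name: str = Field(..., min_length=1)
    quantity: int = Field(..., ge=1)

ITEMS: dict[int, Item] = {}  # in-memory stand-in for a real database

@app.post("/items/{item_id}", status_code=201)
def create_item(item_id: int, item: Item) -> Item:
    """Validate the payload with Pydantic, store it, and log the request."""
    if item_id in ITEMS:
        raise HTTPException(status_code=409, detail="item already exists")
    ITEMS[item_id] = item
    logger.info("created item %s", item_id)
    return item
```

Such a service could be run locally with uvicorn (for example, `uvicorn main:app --reload` if the file were saved as main.py).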
Required Skills & Qualifications
- Bachelor’s or Master’s degree in Computer Science, Software Engineering, Data Science, or a related field.
- 5+ years of professional experience in backend engineering, with strong proficiency in Python.
- Hands-on experience with FastAPI or Flask, including API design, validation, middleware, and performance optimization.
- Strong understanding of AWS or similar cloud platforms, with experience deploying and running production-grade services.
- Proficiency in Python libraries commonly used in backend and data workflows (SQLAlchemy, Pydantic, boto3, openai, pandas, numpy).
- Familiarity with Docker, Kubernetes, and containerized deployments in cloud environments.
- Strong understanding of distributed systems, microservices, and asynchronous programming (see the asyncio sketch after this list).
- Good problem-solving and analytical skills with a hands-on, ownership-driven mindset.
- Strong communication skills and ability to work in cross-functional teams.
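As a rough sketch of the asynchronous programming mentioned above, the snippet below fans out several I/O-bound calls concurrently with asyncio; fetch_record is a hypothetical placeholder for a real database query or downstream API call.

```python
# Illustrative sketch only: concurrent I/O with asyncio. fetch_record is a
# hypothetical placeholder for a real database query or downstream API call.
import asyncio

async def fetch_record(record_id: int) -> dict:
    await asyncio.sleep(0.1)  # placeholder for real non-blocking I/O
    return {"id": record_id, "status": "ok"}

async def main() -> None:
    # Issue all requests concurrently instead of awaiting them one by one.
    results = await asyncio.gather(*(fetch_record(i) for i in range(5)))
    print(results)

if __name__ == "__main__":
    asyncio.run(main())
```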
Nice to Have - AI/ML Skills & Responsibilities
- Exposure to transformers, embeddings, RAG, or LLM frameworks such as LangGraph, LangChain, or Autogen.
- Experience integrating AI/ML inference endpoints or 3rd-party AI services into backend systems.
- Experience with model fine-tuning or vector databases is a plus.
- Integrate AI/LLM-based components into backend services when required.
- Build and maintain basic RAG or vector-search pipelines that interface with text or image knowledge bases (a minimal retrieval sketch follows this list).
- Apply prompt engineering, model fine-tuning, or data preprocessing techniques when enhancing AI-powered features.
- Work with transformers, LLM APIs, or pretrained models as part of feature add-ons or experimental capabilities.
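As a minimal, hedged sketch of the retrieval step at the heart of a basic RAG or vector-search pipeline, the snippet below ranks documents by cosine similarity with numpy. The random vectors are placeholders for embeddings that would normally come from an embedding model.

```python
# Illustrative sketch only: the retrieval step of a basic vector-search / RAG
# pipeline. Random vectors stand in for embeddings produced by a real model.
import numpy as np

def top_k(query_vec: np.ndarray, doc_vecs: np.ndarray, k: int = 3) -> np.ndarray:
    """Return indices of the k documents most similar to the query (cosine)."""
    query_norm = query_vec / np.linalg.norm(query_vec)
    doc_norms = doc_vecs / np.linalg.norm(doc_vecs, axis=1, keepdims=True)
    scores = doc_norms @ query_norm        # cosine similarity per document
    return np.argsort(scores)[::-1][:k]    # highest-scoring documents first

rng = np.random.default_rng(0)
doc_embeddings = rng.normal(size=(100, 384))   # placeholder document embeddings
query_embedding = rng.normal(size=384)         # placeholder query embedding
print(top_k(query_embedding, doc_embeddings))
```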
We have an amazing team of 700+ individuals working on highly innovative enterprise projects & products. Our customer base includes Fortune 100 retail and CPG companies, leading store chains, fast-growth fintech, and multiple Silicon Valley startups.
What makes Confiz stand out is our focus on processes and culture. Confiz is certified to ISO 9001:2015 (QMS), ISO 27001:2022 (ISMS), ISO 20000-1:2018 (ITSM), ISO 14001:2015 (EMS), and ISO 45001:2018 (OHSMS). We have a vibrant culture of learning through collaboration and making the workplace fun.
People who work with us use cutting-edge technologies while contributing to the company's success as well as their own.
To know more about Confiz Limited, visit: https://www.linkedin.com/company/confiz-pakistan/