Location
Coimbatore
Role Summary
We are hiring a Data Engineer to build and run our data pipelines, analytics layer and AI data services. This role supports BI dashboards and AI-powered assistants across our platform.
Key Responsibilities
- Build and maintain ETL pipelines using Python and SQL.
- Manage PostgreSQL databases and analytics schemas.
- Support Metabase dashboards and reporting datasets.
- Implement vector search for AI assistants using pgvector (a minimal query sketch follows this list).
- Ingest and process documents from S3 and internal systems.
- Integrate data via secure APIs.
- Ensure data quality, security and reliability.
- Support reporting and AI audit requirements.
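For candidates less familiar with pgvector, the following is a minimal sketch of the kind of similarity query behind the vector-search responsibility above. The `documents` table, its `title` and `embedding` columns, and the connection string are hypothetical placeholders, and the query embedding is assumed to be computed upstream.

```python
# Minimal pgvector similarity-search sketch (assumed schema and connection).
import psycopg2

conn = psycopg2.connect("dbname=analytics user=etl")  # placeholder DSN


def top_k_similar(query_embedding: list[float], k: int = 5):
    """Return the k rows closest to the query embedding by cosine distance."""
    # pgvector accepts a bracketed string literal cast to the vector type,
    # so no extra adapter is needed for a read-only query like this.
    vector_literal = "[" + ",".join(str(x) for x in query_embedding) + "]"
    with conn.cursor() as cur:
        cur.execute(
            """
            SELECT id, title, embedding <=> %s::vector AS distance
            FROM documents
            ORDER BY distance
            LIMIT %s
            """,
            (vector_literal, k),
        )
        return cur.fetchall()
```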
Required Skills
- Strong SQL and PostgreSQL.
- Python for data pipelines.
- Experience with ETL and analytics models.
- Experience with BI tools such as Metabase, Tableau or Power BI.
- Experience with AWS S3, RDS and Lambda (an ingestion sketch follows this list).
- Understanding of API-based data integration.
- Experience with pgvector or vector databases.
- Exposure to AI or LLM-based systems.
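The S3 and Python items above typically come together in ingestion jobs like the rough sketch below, which copies JSON objects from an S3 prefix into a PostgreSQL staging table. The bucket, prefix, `raw_documents` table, and connection string are hypothetical.

```python
# Rough S3-to-PostgreSQL ingestion sketch (assumed bucket, table, and DSN).
import json

import boto3
import psycopg2

s3 = boto3.client("s3")
conn = psycopg2.connect("dbname=analytics user=etl")  # placeholder DSN


def ingest_prefix(bucket: str, prefix: str) -> int:
    """Load every JSON object under the prefix into raw_documents; return the count."""
    loaded = 0
    paginator = s3.get_paginator("list_objects_v2")
    with conn, conn.cursor() as cur:
        for page in paginator.paginate(Bucket=bucket, Prefix=prefix):
            for obj in page.get("Contents", []):
                body = s3.get_object(Bucket=bucket, Key=obj["Key"])["Body"].read()
                doc = json.loads(body)
                cur.execute(
                    "INSERT INTO raw_documents (s3_key, payload) VALUES (%s, %s)",
                    (obj["Key"], json.dumps(doc)),
                )
                loaded += 1
    return loaded
```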
Nice to Have
- Experience with CrewAI or LangChain for building AI agents and RAG workflows.
- Experience with Docker and container-based deployments.
- Experience with financial, insurance or ERP data models.
- Experience with data lake architectures on S3 or similar object storage.
Job Types: Full-time, Permanent
Pay: From ₹300,000.00 per month
Benefits:
- Flexible schedule
- Work from home
Work Location: In person