Role Overview
We are seeking an experienced Neo4j Developer / Graph Data Engineer with strong expertise in graph databases, data modeling, and Python-based data pipelines. The ideal candidate will have hands-on experience designing, maintaining, and optimizing Neo4j graph data models used in production environments. You will be responsible for building ingestion workflows, integrating Neo4j with external systems, and writing high-quality Cypher queries to support analytics and application features.
Key Responsibilities
1. Neo4j Development & Data Modeling
- Design, build, and maintain graph data models (schemas) for production-grade systems.
- Optimize node/relationship structures to improve performance and data integrity.
- Write complex Cypher queries for CRUD operations, analytics, and graph algorithms.
- Monitor and enhance Neo4j performance, indexes, and constraints (see the sketch below this list).
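For screening context, here is a minimal sketch of the kind of constraint, index, and query work this covers, assuming a recent Neo4j version (4.4+/5.x syntax) and the official Neo4j Python Driver; the labels (Customer, Product), property names, URI, and credentials are illustrative placeholders, not our actual schema.

```python
from neo4j import GraphDatabase

# Illustrative only: labels, properties, URI, and credentials are placeholders.
driver = GraphDatabase.driver("bolt://localhost:7687", auth=("neo4j", "password"))

with driver.session() as session:
    # Uniqueness constraint doubles as a backing index on Customer.id.
    session.run(
        "CREATE CONSTRAINT customer_id IF NOT EXISTS "
        "FOR (c:Customer) REQUIRE c.id IS UNIQUE"
    )
    # Range index to keep category lookups fast as the graph grows.
    session.run(
        "CREATE INDEX product_category IF NOT EXISTS "
        "FOR (p:Product) ON (p.category)"
    )
    # Parameterized upsert: MERGE keeps repeated loads idempotent.
    session.run(
        "MERGE (c:Customer {id: $id}) SET c.name = $name",
        id="c-123", name="Acme Corp",
    )
    record = session.run(
        "MATCH (c:Customer {id: $id}) RETURN c.name AS name", id="c-123"
    ).single()
    print(record["name"])

driver.close()
```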
2. Data Pipeline & ETL Development
- Build and maintain Python-based data ingestion pipelines for loading, syncing, and transforming data into Neo4j.
- Develop scheduled jobs and automated workflows using tools such as Airflow, Prefect, or custom scripts.
- Ensure data quality, consistency, and error handling across ingestion processes (example flow below).
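As an illustration of the scheduled-workflow piece, a minimal sketch assuming Prefect 2.x; the task breakdown, sample data, and Neo4j connection details are placeholders rather than a prescribed stack (Airflow or plain cron scripts would be equally acceptable).

```python
from prefect import flow, task
from neo4j import GraphDatabase


@task(retries=3, retry_delay_seconds=60)
def extract() -> list[dict]:
    # Stand-in for an API call or database read; retried automatically on failure.
    return [{"id": "c-1", "name": "Acme"}, {"id": "c-2", "name": "Globex"}]


@task
def validate(rows: list[dict]) -> list[dict]:
    # Basic data-quality gate: drop rows missing the primary key.
    return [row for row in rows if row.get("id")]


@task
def load(rows: list[dict]) -> None:
    # Placeholder connection details; MERGE keeps re-runs idempotent.
    driver = GraphDatabase.driver("bolt://localhost:7687", auth=("neo4j", "password"))
    with driver.session() as session:
        session.run(
            "UNWIND $rows AS row "
            "MERGE (c:Customer {id: row.id}) SET c.name = row.name",
            rows=rows,
        )
    driver.close()


@flow
def customer_ingestion():
    rows = extract()
    load(validate(rows))


if __name__ == "__main__":
    customer_ingestion()  # in practice, deployed on a schedule
```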
3. Systems Integration
- Integrate Neo4j with SQL/NoSQL databases, external APIs, or internal systems.
- Build connectors, transformation layers, and synchronization logic between multiple data sources.
- Collaborate with backend engineers to integrate Neo4j with application services.
4. Collaboration & Documentation
- Work closely with engineering, product, and analytics teams to understand graph-related business needs.
- Document data models, pipelines, integration workflows, and best practices.
- Participate in code reviews and architecture discussions.
Required Qualifications
- 3–7+ years of hands-on experience with Neo4j, including data modeling and writing Cypher queries.
- Proven experience designing or maintaining graph data models for production systems.
- Strong experience building Python-based ETL/ELT pipelines that load or sync data into Neo4j.
- Experience integrating Neo4j with SQL/NoSQL databases or external APIs.
- Solid understanding of graph theory, graph algorithms, and query optimization.
- Proficiency in Python (requests, pandas, py2neo, Neo4j Python Driver, etc.).
Preferred Qualifications
- Experience with data orchestration tools (Airflow, Prefect, Dagster).
- Familiarity with Neo4j Aura, APOC procedures, or the Graph Data Science (GDS) library.
- Experience with cloud platforms (AWS, GCP, Azure).
- Knowledge of microservices architecture and REST/GraphQL APIs.
Example Experience Indicators (for screening)
Neo4j Experience:
- Number of years working with Neo4j, building Cypher queries, and designing data models.
Graph Data Model Experience:
- Example: Designed a graph schema for customer behavior analytics, connecting customers, products, and interactions to support recommendation features (see the sketch below).
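One possible shape for such a model, shown as a hedged sketch; the labels, relationship types, properties, and connection details are assumptions made for illustration, not a required design.

```python
from neo4j import GraphDatabase

# Hypothetical model: (:Customer)-[:PURCHASED]->(:Product), with interaction
# details stored on the relationship. Connection details are placeholders.
driver = GraphDatabase.driver("bolt://localhost:7687", auth=("neo4j", "password"))

with driver.session() as session:
    # Record an interaction; MERGE keeps customers and products unique.
    session.run(
        """
        MERGE (c:Customer {id: $customer_id})
        MERGE (p:Product {sku: $sku})
        MERGE (c)-[r:PURCHASED]->(p)
        SET r.at = datetime($at)
        """,
        customer_id="c-42", sku="sku-9", at="2024-01-15T10:30:00",
    )
    # Simple collaborative-filtering style recommendation: products bought by
    # customers who share purchases with this customer.
    recs = session.run(
        """
        MATCH (me:Customer {id: $customer_id})-[:PURCHASED]->(:Product)
              <-[:PURCHASED]-(:Customer)-[:PURCHASED]->(rec:Product)
        WHERE NOT (me)-[:PURCHASED]->(rec)
        RETURN rec.sku AS sku, count(*) AS score
        ORDER BY score DESC LIMIT 5
        """,
        customer_id="c-42",
    )
    for row in recs:
        print(row["sku"], row["score"])

driver.close()
```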
Python Pipeline Experience:
- Example: Built a Python ETL pipeline pulling data from PostgreSQL, transforming it, and loading nodes/relationships into Neo4j via the Neo4j Python Driver (sketch below).
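A compressed sketch of that kind of pipeline, assuming psycopg2 on the relational side and the official Neo4j Python Driver on the graph side; the table, columns, batch size, and credentials are placeholders.

```python
import psycopg2
from neo4j import GraphDatabase

# Connection strings, table, and column names below are placeholders.
pg = psycopg2.connect("dbname=shop user=etl password=etl host=localhost")
neo = GraphDatabase.driver("bolt://localhost:7687", auth=("neo4j", "password"))

# Extract: read order rows from PostgreSQL.
with pg.cursor() as cur:
    cur.execute("SELECT id, customer_id, product_sku FROM orders")
    rows = [
        {"order_id": r[0], "customer_id": r[1], "sku": r[2]} for r in cur.fetchall()
    ]

# Transform + load: batch the rows and upsert nodes/relationships with UNWIND + MERGE.
BATCH = 1000
with neo.session() as session:
    for i in range(0, len(rows), BATCH):
        session.run(
            """
            UNWIND $rows AS row
            MERGE (c:Customer {id: row.customer_id})
            MERGE (p:Product {sku: row.sku})
            MERGE (c)-[:ORDERED {order_id: row.order_id}]->(p)
            """,
            rows=rows[i : i + BATCH],
        )

pg.close()
neo.close()
```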
Integrations Experience:
- Example: Synced data between Neo4j and MongoDB, or integrated external API data (e.g., CRM, ERP) into the graph model (illustrated below).
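One way such a sync might look, assuming pymongo and the Neo4j Python Driver; the database, collection, field names, and connection strings are hypothetical.

```python
from pymongo import MongoClient
from neo4j import GraphDatabase

# Hypothetical sync of CRM accounts stored in MongoDB into the graph.
# Database, collection, field names, and credentials are placeholders.
mongo = MongoClient("mongodb://localhost:27017")
neo = GraphDatabase.driver("bolt://localhost:7687", auth=("neo4j", "password"))

accounts = [
    {"id": str(doc["_id"]), "name": doc.get("name", ""), "owner": doc.get("owner_email")}
    for doc in mongo["crm"]["accounts"].find()
]

with neo.session() as session:
    # Upsert accounts keyed on the source-system id, so the sync can be re-run safely.
    session.run(
        "UNWIND $accounts AS a "
        "MERGE (acc:Account {source_id: a.id}) SET acc.name = a.name",
        accounts=accounts,
    )
    # Link accounts to owners, skipping documents with no owner recorded.
    owned = [a for a in accounts if a["owner"]]
    session.run(
        "UNWIND $accounts AS a "
        "MATCH (acc:Account {source_id: a.id}) "
        "MERGE (u:User {email: a.owner}) "
        "MERGE (u)-[:OWNS]->(acc)",
        accounts=owned,
    )

mongo.close()
neo.close()
```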
Why Join Us?
- Opportunity to work on advanced graph-based applications and large-scale data projects.
- Collaborative team environment with a focus on innovation and engineering excellence.
- Competitive salary, remote flexibility, and strong growth opportunities.