Who We Are
JIG-SAW operates 24/7 Operations Centers in Japan and Canada that proactively monitor systems, issue alerts, and deliver live incident response—keeping your web services and IoT environments secure and running smoothly.
Corporate site: https://jig-saw.com/en/
About the Role
Help build a game-changing, AI-driven IoT platform at enterprise scale. As a Senior Backend Engineer, you’ll own the core data pipeline—evolving our backend from its current foundation (Node.js/TypeScript, PostgreSQL, MongoDB, pg-boss job queues) toward a high-throughput, event-driven architecture capable of handling millions of sensor readings per day.
You’ll architect the migration to Kafka-centric streaming, implement distributed caching with Redis, extend IoT protocol support beyond MQTT, and build the resilient, scalable services that power real-time dashboards, intelligent alerts, and AI-driven anomaly detection for multi-tenant enterprise customers.
Key Responsibilities
High-Throughput Data Streaming: Architect the migration from PostgreSQL-backed job queues (pg-boss) to Kafka-centric event streaming. Design bi-directional data flow—ingest from IoT protocols (MQTT today, expanding to Modbus, BACnet, LoRaWAN) and expose processed data to downstream microservices via optimized topics.
Distributed Caching & Performance: Replace in-process caching (node-cache) with Redis. Design multi-layer caching strategies to reduce database load for high-frequency sensor data. Optimize API response times for real-time dashboarding—sub-second latency at scale.
Scalable Backend Services: Evolve the worker architecture (HTTP, WebSocket, MQTT, alerts, jobs) into independently scalable, distributed services. Design for horizontal scaling—stateless services, distributed state management, graceful degradation, backpressure handling.
Protocol Integration: Build protocol adapters and bridges that normalize heterogeneous device data (Modbus TCP/RTU, BACnet/IP, MQTT, LoRaWAN) into standardized schemas with reliable delivery and data integrity.
API & Integration Layer: Extend the REST API for enterprise integrations and partner platforms. Design webhook/event delivery systems for downstream consumers. Support real-time data delivery via WebSocket alongside REST.
Production Excellence: Own the backend end-to-end—from device ingest to dashboard delivery. Implement circuit breakers, retry strategies, dead-letter queues, and observability hooks across all services.
Infrastructure as Code: Help manage cloud resources via Terraform, CloudFormation, and Crossplane, enabling self-service infrastructure provisioning, drift detection, and GitOps-driven deployments tied to the backend services you own.
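To give a concrete flavor of the protocol-integration work, here is a minimal TypeScript sketch of an adapter interface that normalizes MQTT- and Modbus-style payloads into one standardized reading schema. All names (`SensorReading`, `mqttAdapter`, the register map) are illustrative, not part of the actual codebase:

```typescript
// Standardized schema that every protocol adapter normalizes into.
interface SensorReading {
  deviceId: string;
  metric: string;
  value: number;
  unit: string;
  timestamp: string; // ISO 8601
}

// Each adapter turns one protocol's raw payload into SensorReading[].
interface ProtocolAdapter<Raw> {
  protocol: string;
  normalize(raw: Raw): SensorReading[];
}

// MQTT payloads often arrive as JSON; the topic carries the device id.
interface MqttMessage { topic: string; payload: string }

const mqttAdapter: ProtocolAdapter<MqttMessage> = {
  protocol: "mqtt",
  normalize(raw) {
    const deviceId = raw.topic.split("/")[1] ?? "unknown"; // e.g. "devices/dev-42/telemetry"
    const body = JSON.parse(raw.payload) as Record<string, number>;
    const timestamp = new Date().toISOString();
    return Object.entries(body).map(([metric, value]) => ({
      deviceId, metric, value, unit: "raw", timestamp,
    }));
  },
};

// Modbus returns bare register values; a register map gives them meaning.
interface ModbusFrame { unitId: number; registers: number[] }

const registerMap = [
  { metric: "temperature", unit: "°C", scale: 0.1 },
  { metric: "humidity", unit: "%", scale: 0.5 },
];

const modbusAdapter: ProtocolAdapter<ModbusFrame> = {
  protocol: "modbus",
  normalize(raw) {
    const timestamp = new Date().toISOString();
    return raw.registers.map((value, i) => ({
      deviceId: `modbus-${raw.unitId}`,
      metric: registerMap[i].metric,
      value: value * registerMap[i].scale,
      unit: registerMap[i].unit,
      timestamp,
    }));
  },
};

// Example: normalize a two-register Modbus frame.
const readings = modbusAdapter.normalize({ unitId: 7, registers: [215, 90] });
// → temperature 21.5 °C and humidity 45 % for device "modbus-7"
```

Downstream services then consume only `SensorReading`, so adding a new protocol (BACnet/IP, LoRaWAN) means writing one adapter, not touching the pipeline.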
Required Skills & Qualifications
Backend Engineering: 5+ years building production backend services in Node.js/TypeScript (or equivalent) for high-traffic, distributed systems.
Event Streaming: Hands-on experience designing and operating Kafka (or comparable: Pulsar, Kinesis, EventHub)—topic design, consumer groups, partitioning, exactly-once semantics, schema evolution.
Distributed Caching: Production experience with Redis or Memcached—cluster mode, eviction policies, cache invalidation patterns, pub/sub.
Databases: Deep PostgreSQL experience—query optimization, partitioning, connection pooling (PgBouncer), replication. MongoDB experience—schema design for time-series/IoT data, sharding, aggregation pipelines.
Distributed Systems: Service decomposition, eventual consistency, backpressure handling, circuit breakers, idempotency patterns. Experience building systems that degrade gracefully under load.
API Design: RESTful conventions, versioning, rate limiting, authentication patterns, WebSocket architecture.
Cloud & Containers: Production experience with at least one of AWS, GCP, or Azure; Docker in production; CI/CD pipelines (GitHub Actions or similar).
Education: Bachelor’s degree or higher in Computer Science (or closely related field) is required.
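The distributed-systems patterns listed above (circuit breakers, graceful degradation) can be sketched minimally. This is an illustrative TypeScript implementation, not the team's actual code; the class name and state model are hypothetical:

```typescript
// Minimal circuit breaker: after `threshold` consecutive failures the
// breaker opens and fails fast; after `cooldownMs` it lets one trial
// call through (half-open) and closes again if that call succeeds.
type BreakerState = "closed" | "open" | "half-open";

class CircuitBreaker {
  private failures = 0;
  private openedAt = 0;
  state: BreakerState = "closed";

  constructor(
    private threshold: number,
    private cooldownMs: number,
    private now: () => number = Date.now, // injectable clock for tests
  ) {}

  async call<T>(fn: () => Promise<T>): Promise<T> {
    if (this.state === "open") {
      if (this.now() - this.openedAt < this.cooldownMs) {
        throw new Error("circuit open: failing fast");
      }
      this.state = "half-open"; // allow one trial call
    }
    try {
      const result = await fn();
      this.failures = 0;
      this.state = "closed";
      return result;
    } catch (err) {
      this.failures += 1;
      if (this.state === "half-open" || this.failures >= this.threshold) {
        this.state = "open";
        this.openedAt = this.now();
      }
      throw err;
    }
  }
}
```

Wrapping outbound calls (database, downstream service, device gateway) in a breaker like this is one way services degrade gracefully instead of piling up retries against a struggling dependency.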
Experience Guidelines
5+ years of backend engineering: building and operating high-throughput data services in production, including event streaming, caching layers, and database optimization.
3+ years with IoT or real-time data systems: high-frequency ingest, time-series patterns, device connectivity, protocol handling.
2+ years with Kafka or equivalent: designing topics, managing consumer groups, handling schema evolution and exactly-once delivery in production.
Nice-to-Have
IoT Protocols: Hands-on experience with MQTT brokers (HiveMQ), Modbus TCP/RTU, BACnet/IP, LoRaWAN, OPC-UA, certificate-based device authentication.
Time-Series Data: TimescaleDB, InfluxDB, or partitioned PostgreSQL for high-volume sensor data; data retention and downsampling strategies.
Migration Experience: Strangler fig pattern, dual-write, shadow traffic—incrementally evolving production systems without downtime.
Multi-Tenant SaaS: Data isolation patterns, tenant-aware routing, noisy-neighbor mitigation.
AI/ML Integration: Experience feeding data pipelines into ML models (anomaly detection, forecasting) or building feature stores.
Prisma ORM: Production experience with Prisma on PostgreSQL.
Security: PII handling, secrets management, RBAC, audit logging, SOC 2 awareness.
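The downsampling strategies mentioned under Time-Series Data can be illustrated with a short sketch: roll raw sensor points into fixed-width buckets, keeping the mean per bucket. The function and types are hypothetical, shown only to make the idea concrete:

```typescript
interface Point { ts: number; value: number } // ts = ms since epoch

// Downsample raw points into fixed-width time buckets, keeping the mean
// per bucket — the kind of rollup a retention policy applies before
// discarding high-frequency raw data.
function downsample(points: Point[], bucketMs: number): Point[] {
  const buckets = new Map<number, { sum: number; count: number }>();
  for (const p of points) {
    const key = Math.floor(p.ts / bucketMs) * bucketMs;
    const b = buckets.get(key) ?? { sum: 0, count: 0 };
    b.sum += p.value;
    b.count += 1;
    buckets.set(key, b);
  }
  return [...buckets.entries()]
    .sort(([a], [b]) => a - b)
    .map(([ts, { sum, count }]) => ({ ts, value: sum / count }));
}

// Example: three readings collapse into two 1-second buckets.
const rolled = downsample(
  [{ ts: 0, value: 10 }, { ts: 500, value: 20 }, { ts: 1000, value: 30 }],
  1000,
);
// → [{ ts: 0, value: 15 }, { ts: 1000, value: 30 }]
```

TimescaleDB and InfluxDB offer this natively (continuous aggregates, tasks); partitioned PostgreSQL typically runs an equivalent rollup as a scheduled job.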
Compensation & Benefits
Job Type: Full-time
Pay: $100,000.00 - $150,000.00 per year
Work Location: Remote
© 2026 Qureos. All rights reserved.