Role: Data Engineer
Location: New York, NY
Responsibilities
- Use the latest technology to build data pipelines and integrate machine learning models
- Build and expand our data platform
- Develop applications that run on Google Cloud infrastructure
Qualifications
- 2+ years of experience in Big Data and Data Engineering, building enterprise-level applications in a public cloud, preferably Google Cloud
- Experience building cloud-native data pipelines using Python, Apache Airflow, Apache Spark, and Apache Beam
- Working knowledge of SQL and data warehousing concepts, with hands-on experience in PostgreSQL and BigQuery; knowledge of MongoDB and AlloyDB is a plus
- Comfortable with Git and CI/CD tools such as Azure DevOps
- Experience with TDD, agile software development processes, and collaborating in multi-functional agile teams.
- Ability to communicate effectively with technical teams and non-technical stakeholders
- Bachelor's degree in Computer Science or related engineering discipline, or equivalent combination of education and experience.
- Experience building Docker images and deploying them to production is a plus, including orchestrating containers with Kubernetes (pods, ConfigMaps, deployments) managed through Terraform
- Experience with C#, .NET Core, and microservices is a plus
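For candidates unfamiliar with the day-to-day work, the pipeline-building described above typically follows an extract-transform-load pattern. The sketch below is a minimal, stdlib-only illustration (sqlite3 stands in for a warehouse such as BigQuery; the column names and sample data are invented for the example):

```python
import csv
import io
import sqlite3

def run_pipeline(csv_text: str) -> list[tuple[str, float]]:
    """Minimal extract-transform-load: parse CSV, stage into SQLite,
    and aggregate with SQL (a stand-in for a real warehouse)."""
    # Extract: read raw rows from the CSV text.
    rows = list(csv.DictReader(io.StringIO(csv_text)))

    # Transform: cast amounts to float, silently dropping malformed rows.
    clean = []
    for r in rows:
        try:
            clean.append((r["region"], float(r["amount"])))
        except (KeyError, ValueError):
            continue

    # Load and query: stage into an in-memory table, then aggregate in SQL.
    con = sqlite3.connect(":memory:")
    con.execute("CREATE TABLE sales (region TEXT, amount REAL)")
    con.executemany("INSERT INTO sales VALUES (?, ?)", clean)
    result = con.execute(
        "SELECT region, SUM(amount) FROM sales GROUP BY region ORDER BY region"
    ).fetchall()
    con.close()
    return result

raw = "region,amount\neast,10.5\nwest,3.0\neast,4.5\nwest,oops\n"
print(run_pipeline(raw))  # [('east', 15.0), ('west', 3.0)]
```

In production the same pattern is usually expressed as Airflow tasks or Beam/Spark transforms writing to BigQuery rather than in-process SQLite.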