ECS is seeking a Data Engineer to work remotely. Please Note: This position is contingent upon additional funding.
ECS is currently seeking a Data Engineer who will develop, implement, and maintain architecture solutions across a large enterprise data warehouse, supporting effective and efficient data management and enterprise-wide business intelligence analytics.
Responsibilities:
- Implement and optimize data pipeline architectures for data sourcing, ingestion, transformation, and extraction processes, ensuring data integrity, consistency, and compliance with organizational standards.
- Develop and maintain scalable database schemas, data models, and data warehouse structures; perform data mapping, schema evolution, and integration between source systems, staging areas, and data marts.
- Automate data extraction workflows and develop comprehensive technical documentation for ETL/ELT procedures; collaborate with cross-functional teams to translate business requirements into technical specifications and data schemas.
- Establish and enforce data governance standards, including data quality metrics, validation rules, and best practices for data warehouse design, architecture, and tooling.
- Develop, test, and deploy ETL/ELT scripts and programs using SQL, Python, Spark, or other relevant languages; optimize code for performance, scalability, and resource utilization.
- Implement and tune data warehouse systems, focusing on query performance, batch processing efficiency, and resource management; utilize indexing, partitioning, and caching strategies.
- Perform advanced data analysis, validation, and profiling using SQL and scripting languages; develop data models, dashboards, and reports in collaboration with stakeholders.
- Conduct testing and validation of ETL workflows to ensure data loads meet scheduled SLAs and business quality standards; document testing protocols, results, and remediation steps.
- Perform root cause analysis for data processing failures, troubleshoot production issues, and implement corrective actions; validate data accuracy and consistency across systems; support iterative development and continuous improvement of data pipelines.
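As an illustration of the pipeline work described above, here is a minimal, hypothetical Python sketch of an extract-transform-load flow with a validation step. All table names, field names, and sample records are invented for the example; a production pipeline would read from real source systems and enforce organizational validation rules.

```python
import sqlite3

def extract():
    # Stand-in for pulling rows from a source system.
    return [
        {"id": 1, "amount": "125.50", "region": "east"},
        {"id": 2, "amount": "80.00", "region": "west"},
        {"id": 3, "amount": None, "region": "east"},  # fails validation
    ]

def transform(rows):
    # Cast types and separate records that fail a simple quality rule.
    clean, rejected = [], []
    for row in rows:
        if row["amount"] is None:
            rejected.append(row)
            continue
        clean.append((row["id"], float(row["amount"]), row["region"]))
    return clean, rejected

def load(conn, rows):
    # Load validated rows into the warehouse staging table.
    conn.execute(
        "CREATE TABLE IF NOT EXISTS sales (id INTEGER, amount REAL, region TEXT)"
    )
    conn.executemany("INSERT INTO sales VALUES (?, ?, ?)", rows)
    conn.commit()

conn = sqlite3.connect(":memory:")
clean, rejected = transform(extract())
load(conn, clean)
total = conn.execute("SELECT SUM(amount) FROM sales").fetchone()[0]
print(f"loaded={len(clean)} rejected={len(rejected)} total={total}")
```

Keeping rejected records separate, rather than silently dropping them, is what makes the documented remediation steps mentioned above possible.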
Salary Range: $140,000 - $165,000
Requirements:
- 5-10+ years of experience
- US Citizen or Green Card holder and must be able to obtain a Public Trust clearance.
- Detail oriented with strong analytical and problem-solving skills
- Ability to use database tools, techniques, and applications (e.g., Teradata, Oracle, non-relational databases) to develop complex SQL statements (e.g., multi-join), and to tune and troubleshoot queries for optimal performance.
- Skill using Unix/Linux shell scripting to develop and implement automation scripts for Extract, Transform, Load (ETL) processes.
- Communication skills (both verbal and written); ability to work and communicate with all levels in the team structure.
- Team player with the ability to prioritize and multi-task, work in a fast-paced environment, and effectively manage time.
- Experience with Java/J2EE, REST APIs, and web services, including building event-driven microservices and Kafka streaming using Schema Registry and OAuth authentication.
- Experience with the Spring Framework and GCP services in public cloud infrastructure; Git, CI/CD pipelines, and containerization; data ingestion and data modeling.
- Ability to develop microservices using Java/J2EE Spring to ingest large volumes of real-time events into Kafka topics, and to architect solutions that make the data available to consumers in real time.