Liquid Technologies (LiqTeq) is an end-to-end design and development agency. LiqTeq offers DevOps, data, and AI services to established businesses, organizations, and startups. We're a team of creators, thinkers, explorers, and judo trainees. We approach work and play with curiosity and experimentation, using what we learn to create meaningful digital products that connect with people, just like you.
Job Description
- Assemble large, complex data sets and build data pipelines that meet functional and non-functional business requirements as well as data monetization requirements.
- Work with the infrastructure required for optimal extraction, transformation, and loading of data from a wide variety of data sources and cloud-based storage and analytics services such as AWS and Snowflake
- Build analytics tools that utilize the data pipeline to provide actionable insights
- Architect and implement large-scale data intelligence solutions on top of a variety of data warehouse platforms and technologies
- Understand data transformation and translation requirements and which tools to leverage to get the job done
- Develop ETL pipelines into and out of the data warehouse using a combination of Python and Snowflake's SnowSQL (a minimal sketch follows this list)
- Provide production support for data warehouse issues such as data load, transformation, and translation problems
- Translate BI and reporting requirements into database designs
- Understand data pipelines and modern approaches to automating them with cloud services
- Build conceptual, logical, and physical data models
- Utilize database and web application technologies to design, develop, and evaluate innovative business intelligence tools and automate reports
- Integrate BI platforms with enterprise systems and applications, and document models, solutions, and implementations.
- Work with internal stakeholders, including the product, data, and design teams, to assist with data-related technical issues.
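To give applicants a concrete feel for the ETL work described above, here is a minimal, hypothetical sketch of a Python-to-Snowflake load. The account, credentials, table name, and file path are illustrative placeholders, not part of LiqTeq's actual pipeline.

```python
# Hypothetical sketch only: account, credentials, table, and file path are
# placeholders, not LiqTeq's actual stack.
import snowflake.connector

def load_orders_snapshot(csv_path: str) -> None:
    # Connect to Snowflake; a real pipeline would read credentials from a
    # secrets manager or environment variables rather than hard-coding them.
    conn = snowflake.connector.connect(
        account="my_account",
        user="etl_user",
        password="***",
        warehouse="ETL_WH",
        database="ANALYTICS",
        schema="STAGING",
    )
    try:
        cur = conn.cursor()
        # DDL: make sure the staging table exists.
        cur.execute("""
            CREATE TABLE IF NOT EXISTS STG_ORDERS (
                ORDER_ID NUMBER,
                CUSTOMER_ID NUMBER,
                ORDER_TS TIMESTAMP_NTZ,
                AMOUNT NUMBER(12, 2)
            )
        """)
        # Upload the local CSV to the table's internal stage, then bulk-load
        # it with COPY INTO, Snowflake's idiomatic path for file-based loads.
        cur.execute(f"PUT file://{csv_path} @%STG_ORDERS OVERWRITE = TRUE")
        cur.execute("""
            COPY INTO STG_ORDERS
            FILE_FORMAT = (TYPE = CSV SKIP_HEADER = 1)
        """)
    finally:
        conn.close()
```

In practice a script like this would typically run on a schedule (for example, via Airflow) rather than being invoked by hand.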
Qualifications and Experience Requirements:
- A minimum of 2 years of experience in a data engineering role is a must.
- Advanced SQL knowledge and query-authoring experience with relational databases, as well as working familiarity with a variety of databases.
- Experience building and optimizing ‘big data’ data pipelines, architectures and data sets.
- Data engineering experience in database-centric and pipeline-centric roles.
- Strong analytic skills related to working with unstructured datasets.
- Working knowledge of message queuing, stream processing, and highly scalable ‘big data’ data stores.
- Solid experience with and understanding of architecting, designing, and operationalizing large-scale data and analytics solutions on the Snowflake Cloud Data Warehouse is a must.
- Solid proficiency in data warehousing concepts (ETL, Stage/Core layer, Dimensional Modeling, Data Marts etc.) and SQL programming
- Ability to capture customer requirements and translate them into functional and technical solutions
- Past experience using enterprise BI tools such as Power BI, Tableau, Oracle BI, and Oracle Analytics Cloud, as well as cloud platforms such as AWS, Azure, and GCP.
- Ability to develop technical design documentation, training and presentation material
- Understanding of and experience working in an agile project environment
- Highly proficient in DDL, DML, DCL, TCL, and DQL (one example of each is sketched after this list)
- Experience with big data tools such as Apache Hadoop, Spark, Kafka, Storm, and Airflow is a plus.
- Experience with relational SQL and NoSQL databases.
- Experience with AWS cloud services such as EC2, EMR, and RDS is a plus
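For reference, the hypothetical snippet below pairs each SQL statement category named in the list above with one example statement as it might be issued against Snowflake; table, column, and role names are placeholders.

```python
# Hypothetical examples of each SQL statement category; the table, column,
# and role names below are placeholders.
SQL_STATEMENT_EXAMPLES = {
    "DDL": "CREATE TABLE DIM_CUSTOMER (CUSTOMER_ID NUMBER, NAME STRING)",
    "DML": "INSERT INTO DIM_CUSTOMER (CUSTOMER_ID, NAME) VALUES (1, 'Acme')",
    "DQL": "SELECT NAME FROM DIM_CUSTOMER WHERE CUSTOMER_ID = 1",
    "DCL": "GRANT SELECT ON TABLE DIM_CUSTOMER TO ROLE REPORTING",
    "TCL": "BEGIN TRANSACTION",  # paired with COMMIT or ROLLBACK
}
```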
Job Type: Full-time
Application Question(s):
- How many years of experience do you have working with relational databases and SQL?
- Have you worked with big data tools like Spark, Hadoop, or Kafka?
- Have you built ETL pipelines using Python and Snowflake in a production environment?
- What is your current salary?
- What is your expected salary?
- Are you comfortable managing your commute to Zamzama, DHA Phase V?
Work Location: In person