Must-Have Technical/Functional Skills
1. Should be strong in Python, SQL, and PySpark for data wrangling (a PySpark sketch follows this list).
2. Associates should have extensive experience in ELT workflows across cloud and on-prem platforms.
3. Should have hands-on AWS knowledge and be proficient in Informatica IDMC, IBM DataStage,
Airflow, and Autosys (an Airflow orchestration sketch also follows this list).
4. Associates should have extensive knowledge of integrations across Azure, Salesforce, Snowflake, and AWS.
5. Associates should have experience supporting ER diagrams using SAP PowerDesigner.
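
To illustrate the kind of PySpark data wrangling called for in item 1, here is a minimal sketch; the source path, column names (status, order_ts, customer_id, amount), and aggregation are hypothetical assumptions, not details from this posting.

    from pyspark.sql import SparkSession
    from pyspark.sql import functions as F

    # Minimal wrangling sketch. Input/output paths and columns are
    # assumed for illustration only.
    spark = SparkSession.builder.appName("wrangling-sketch").getOrCreate()

    orders = spark.read.parquet("s3://example-bucket/raw/orders/")

    daily_totals = (
        orders
        .filter(F.col("status") == "COMPLETED")           # keep finished orders
        .withColumn("order_date", F.to_date("order_ts"))  # timestamp -> date
        .groupBy("customer_id", "order_date")
        .agg(
            F.sum("amount").alias("total_amount"),
            F.count("*").alias("order_count"),
        )
    )

    daily_totals.write.mode("overwrite").partitionBy("order_date").parquet(
        "s3://example-bucket/curated/daily_totals/"
    )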
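For the Airflow proficiency in item 3, a bare-bones DAG might look like the following; the DAG id, schedule, and spark-submit command are illustrative placeholders, and Autosys scheduling would be configured separately (not shown).

    from datetime import datetime

    from airflow import DAG
    from airflow.operators.bash import BashOperator

    # Bare-bones orchestration sketch (Airflow 2.x assumed). The dag_id,
    # schedule, and bash_command are illustrative placeholders.
    with DAG(
        dag_id="daily_curation",
        start_date=datetime(2024, 1, 1),
        schedule="@daily",
        catchup=False,
    ) as dag:
        run_curation = BashOperator(
            task_id="run_spark_curation",
            bash_command="spark-submit /jobs/curate_orders.py",
        )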
Roles & Responsibilities
- Building and implementing data ingestion and curation processes using big data tools such
as Spark (Scala/Python/Java), Hive, HDFS, Sqoop, Kafka, Kerberos, and Impala on CDP 7.x (an ingestion sketch follows this list)
- Ingesting huge volumes of data from various platforms for analytics needs and writing high-performance,
reliable, and maintainable ETL code
- Strong analytic skills related to working with unstructured datasets.
- Strong experience in designing and building data warehouses and data stores for analytics consumption,
on-prem and in the cloud (real-time as well as batch use cases)
- Ability to interact with business and functional analysts to gather requirements and
implement ETL solutions.
- Collect, store, process, and analyze large datasets to build and implement extract, transform, load (ETL)
processes.
- Develop reusable frameworks that reduce development effort, ensuring cost savings for projects.
- Develop quality code with well-considered performance optimizations in place from the
development stage.
- Appetite to learn new technologies and readiness to work on cutting-edge cloud technologies.
- Work with teams spread across the globe to drive project delivery and recommend
development and performance improvements.
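
As a sketch of the ingestion-and-curation work described in the first responsibility, the snippet below reads a Kafka topic with Spark Structured Streaming and lands it as Parquet; the broker address, topic, and paths are assumed, and Kerberos/JAAS setup and the spark-sql-kafka connector dependency are omitted.

    from pyspark.sql import SparkSession
    from pyspark.sql import functions as F

    # Ingestion sketch (requires the spark-sql-kafka connector on the
    # classpath). Brokers, topic, and output paths are illustrative
    # assumptions; Kerberos configuration is omitted.
    spark = SparkSession.builder.appName("ingestion-sketch").getOrCreate()

    raw = (
        spark.readStream
        .format("kafka")
        .option("kafka.bootstrap.servers", "broker1:9092")
        .option("subscribe", "events")
        .load()
    )

    # Kafka delivers key/value as binary; cast the payload to string.
    events = raw.select(
        F.col("value").cast("string").alias("payload"),
        F.col("timestamp"),
    )

    query = (
        events.writeStream
        .format("parquet")
        .option("path", "/data/curated/events/")
        .option("checkpointLocation", "/data/checkpoints/events/")
        .trigger(processingTime="1 minute")
        .start()
    )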
Salary Range: $95,000 to $110,000 per year