- Strong proficiency in Python programming.
- Hands-on experience with PySpark and Apache Spark (an illustrative sketch follows this list).
- Knowledge of Big Data technologies (Hadoop, Hive, Kafka, etc.).
- Experience with SQL and relational/non-relational databases.
- Familiarity with distributed computing and parallel processing.
- Understanding of data engineering best practices.
- Experience with REST APIs, JSON/XML, and data serialization.
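For context, here is a minimal, hypothetical sketch of the kind of work these requirements describe, combining PySpark, Spark SQL, and JSON/Parquet serialization. The file paths, column names, and aggregation are illustrative assumptions, not part of the role description.

```python
# Illustrative only: a small PySpark job touching several skills listed above
# (PySpark, Spark SQL, JSON ingestion, columnar serialization).
# Paths and column names are hypothetical placeholders.
from pyspark.sql import SparkSession

# Start a Spark session (cluster configuration would differ in production).
spark = SparkSession.builder.appName("orders-etl-example").getOrCreate()

# Read newline-delimited JSON records from a hypothetical input path.
orders = spark.read.json("/data/raw/orders.json")

# Register the DataFrame as a temporary view so plain SQL can be used
# alongside the DataFrame API.
orders.createOrReplaceTempView("orders")

# Aggregate with Spark SQL: total order amount per customer.
revenue = spark.sql(
    """
    SELECT customer_id, SUM(amount) AS total_amount
    FROM orders
    GROUP BY customer_id
    """
)

# Write the result as Parquet, a common serialization choice for curated data.
revenue.write.mode("overwrite").parquet("/data/curated/revenue_by_customer")

spark.stop()
```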
Desired Skills
Digital: PySpark | Python
Desired Candidate Profile
Qualifications: Bachelor of Technology