About the Role:
We are looking for a forward-thinking AI & Data Architecture Expert to design and shape the future of our artificial intelligence and data platforms. You will be responsible for architecting scalable, robust, and efficient systems for data ingestion, processing, storage, machine learning model development, deployment, and analytics. Your expertise will be crucial in building the foundational platforms that enable data-driven decision-making and power intelligent features within our software products and operations.
Responsibilities:
- Design end-to-end architectures for AI/ML and data platforms, encompassing data pipelines, data lakes/warehouses, ML model training/serving infrastructure, and analytics environments.
- Define standards, patterns, and best practices for data management, data governance, MLOps, and AI model development.
- Evaluate and recommend technologies, frameworks, and tools for data processing (e.g., Spark, Flink), storage (e.g., S3, HDFS, NoSQL), ML platforms (e.g., SageMaker, Kubeflow, MLflow), and data warehousing/analytics (e.g., Snowflake, BigQuery, Redshift).
- Collaborate with data scientists, ML engineers, data engineers, and software developers to understand requirements and translate them into effective platform architectures.
- Ensure platform designs address scalability, performance, security, reliability, and cost-efficiency requirements.
- Architect solutions for real-time data processing and streaming analytics.
- Design data governance and metadata management solutions.
- Provide technical leadership and guidance on AI/data architecture matters.
- Stay abreast of the latest trends and advancements in AI, machine learning, and big data technologies.
Qualifications:
Minimum Qualifications:
- Bachelor's degree in Computer Science, Engineering, Data Science, or a related field; Master's degree preferred.
- 8+ years of experience in data engineering, software engineering, or ML engineering.
- 3+ years of experience specifically architecting complex data and/or AI/ML platforms.
- Hands-on experience with big data technologies (e.g., Hadoop ecosystem, Spark), cloud data services (AWS/Azure/GCP), and ML frameworks/platforms.
- Strong understanding of data modeling, data warehousing concepts, data pipeline orchestration, and MLOps principles.
- Deep expertise in cloud-native data services and architectures.
- Experience with real-time data processing technologies (e.g., Kafka, Kinesis).
- Strong programming skills (e.g., Python, Scala, Java).