Job Description
About the position
The Data and Artificial Intelligence Platform (DAP) is a critical component of Visa's Technology organization, providing the essential technology and processes to manage Visa's extensive data and AI assets and deliver valuable intelligence, products, and services to customers. Within DAP, the Data as a Service (DaaS) team consists of skilled data engineers focused on building data pipelines that meet Visa's data needs and deliver secure, high-quality, and easy-to-use Visa core data assets.
We are seeking a Principal Data Engineer who will play a crucial role in developing next-generation, enterprise-level data pipelines for real-time and batch data ingestion and processing, incorporating Agentic AI to support Visa's 2030 strategy. This individual should be a versatile, curious, and energetic technology leader with expertise and experience in architecting, designing, and delivering enterprise-level Big Data solutions and services. In this role, you will collaborate with business partners, project managers, data engineers, and operations teams across the company to tackle complex, high-impact data initiatives on a global scale.
You will be responsible for executing a data strategy that leverages our data assets to drive innovation through data engineering and GenAI frameworks. You will also guide and mentor junior engineers, ensuring security, scalability, performance, and reliability in all data solutions. As an individual contributor based in Foster City, California, USA, you will report to the Sr. Director of the Data as a Service (DaaS) team. This team is growing fast! Join us and be an integral part of a group at the leading edge of pioneering Data and AI.
Responsibilities
• Oversee the entire data lifecycle, from data acquisition and ingestion to transformation, storage, and analysis, for both streaming and batch data pipelines.
• Lead the architecture, design, and development of highly scalable and reliable data engineering solutions.
• Ensure data security, privacy, governance, and compliance with all relevant regulations, and develop and implement auditable policies and procedures.
• Future-proof the data architecture for payment processing pipelines so that it aligns with the product vision and accelerates innovation and time to market.
• Actively contribute hands-on development to critical projects by building reusable modules, core frameworks, and automation tools.
• Establish engineering best practices for application development, testing, deployment, and monitoring.
• Leverage AI/ML technologies to improve productivity across all SDLC phases.
• Champion the adoption of GenAI and Agentic AI technologies, and develop strategies to integrate them into existing data pipeline workflows or to build new ones.
• Provide technology leadership to a high-performing team of data engineers, motivating them through coaching and mentoring, elevating the team's expertise, and fostering a culture of innovation and continuous learning.
• Collaborate with business partners to convert product requirements into high-quality solutions that comply with all non-functional requirements, including security, scalability, availability, and reliability.
• Effectively communicate technical strategy and engineering solutions to leadership and business stakeholders.
• Adhere to Visa's Leadership Principles by promoting collaboration, encouraging constructive debate, and executing with excellence.

Requirements
• 12 or more years of work experience with a Bachelor's degree; at least 10 years of work experience with an advanced degree (e.g., Masters/MBA/JD/MD); or a minimum of 5 years of work experience with a PhD.
• Proven track record of building and deploying complex architectures for streaming and batch ETL pipelines using the latest Big Data technologies.
• Strong hands-on proficiency in programming languages such as Java, Scala, SQL, and Python.
• Expertise with Apache Spark, Kafka, Hadoop, Hive, Trino, Presto, Apache Airflow, and NoSQL databases such as HBase and Cassandra.
• Experience with both on-prem (Hadoop) and cloud-based data platforms (AWS, Azure, Databricks), along with related data storage (HDFS, S3) and processing tools (Spark).
• Experience developing metrics instrumentation in software components using Prometheus and Grafana for monitoring, logging, auditing, and security, enabling real-time and remote troubleshooting and performance monitoring, is highly preferred.
• Experience deploying with automated and scalable CI/CD tools, including Jenkins and Maven.

Nice-to-haves
• Exposure to GenAI or Agentic AI technologies (e.g., Large Language Models, NLP), RAG-based architectures, and vector databases.
• Experience with containerization technologies and orchestration tools, including Docker and Kubernetes, preferred.

Benefits
• Medical
• Dental
• Vision
• 401(k)
• FSA/HSA
• Life Insurance
• Paid Time Off
• Wellness Program