Senior Data Engineer at Visa
Bengaluru, Karnataka, India
Full Time


Start Date

Immediate

Expiry Date

11 Jun, 26

Salary

0.0

Posted On

13 Mar, 26

Experience

2 year(s) or above

Remote Job

Yes

Telecommute

Yes

Sponsor Visa

No

Skills

Apache Spark, Python, Java, Scala, SQL, NoSQL, AWS, Azure, GCP, Apache Airflow, Docker, Kubernetes, CI/CD, Generative AI, Prompt Engineering, MLOps

Industry

IT Services and IT Consulting

Description
Company Description Visa is a world leader in payments technology, facilitating transactions between consumers, merchants, financial institutions and government entities across more than 200 countries and territories, dedicated to uplifting everyone, everywhere by being the best way to pay and be paid. At Visa, you'll have the opportunity to create impact at scale — tackling meaningful challenges, growing your skills and seeing your contributions impact lives around the world. Join Visa and do work that matters — to you, to your community, and to the world. Progress starts with you. Job Description Commercial Money Movement Solution (CMS) division’s charter is to capture new sources of money movement through card and non-card flows, including Visa Business Solutions, Government Solutions and Visa Direct which presents an enormous growth opportunity. Our team brings payment solutions and associated services to clients around the globe. Our global clients and partners deploy our solutions to serve the needs of Small Businesses, Middle Market Clients, Large Corporate Clients, Multi Nationals and Governments. The Visa Business Solutions (VBS) and Visa Government Solutions (VGS) team is a world-class technology organization experiencing tremendous, double-digit growth as we expand products into new payment flows and continue to grow our core card solutions. This is an incredibly exciting team to join as we expand globally. 
Essential Functions

- Work with manager and clients to fully understand business requirements and desired business outcomes
- Assist in scoping and designing analytic data assets, implementing modelled attributes and contributing to brainstorming sessions
- Build and maintain a robust data engineering process to develop and implement self-serve data and tools for Visa's data scientists
- Perform other tasks on R&D, data governance, system infrastructure, analytics tool evaluation, and other cross-team functions on an as-needed basis
- Find opportunities to create, automate and scale repeatable analyses, or build self-service tools for business users
- Execute data engineering projects ranging from small to large, either individually or as part of a project team
- Ensure project delivery within timelines and budget requirements

This is a hybrid position. Expectation of days in the office will be confirmed by your Hiring Manager.

Qualifications

Basic Qualifications:
- 2+ years of relevant work experience and a Bachelor's degree, OR 5+ years of relevant work experience

Preferred Qualifications:
- 3 or more years of work experience with a Bachelor's degree, or more than 2 years of work experience with an Advanced Degree (e.g. Masters, MBA, JD, MD)
- Bachelor's degree in Computer Science, Computer Engineering, Data Engineering, or a related technical field required
- Master's degree in Data Science, AI/ML, or Software Engineering preferred
- 3–6 years of software development experience, with a strong focus on data-centric and analytics platforms
- 3+ years of hands-on experience with Java or Scala

Core Experience:
- Demonstrated expertise in modern software engineering best practices (clean code, modular design, code reviews, CI/CD)

Big Data & Distributed Systems:
- Strong hands-on experience with distributed data processing frameworks, including:
  - Apache Spark (Core, SQL, Structured Streaming)
  - Hadoop ecosystem (HDFS, Hive, HBase, MongoDB)
  - Apache Flink or Kafka Streams for real-time processing (preferred)
- Experience building highly scalable, fault-tolerant, low-latency data pipelines

Programming Languages:
- Working proficiency in Python for data processing, ML, and automation
- Familiarity with SQL and performance tuning for analytical workloads

Data Platforms & Storage:
- Experience with relational and NoSQL databases:
  - Relational: DB2, MySQL, PostgreSQL
  - NoSQL / distributed stores: HBase, Cassandra, DynamoDB, MongoDB
- Experience with modern data lake and lakehouse architectures: Delta Lake, Apache Iceberg, Apache Hudi
- Familiarity with cloud data warehouses: Snowflake, BigQuery, Amazon Redshift, Azure Synapse

Cloud & Infrastructure:
- Hands-on experience with cloud platforms: AWS, Azure, or GCP
- Experience with cloud-native data services, such as:
  - AWS: EMR, Glue, Athena
  - Azure: Data Factory, Databricks
  - GCP: Dataflow, Dataproc
- Familiarity with containerization and orchestration: Docker, Kubernetes (basic to intermediate)

Data Engineering & Orchestration:
- Experience with data pipeline orchestration tools: Apache Airflow, Dagster, Prefect
- Knowledge of data quality, lineage, and governance frameworks
- Experience with schema evolution, data validation, and observability

AI / ML & GenAI:
- Practical experience supporting or implementing ML pipelines, including:
  - Feature engineering and dataset preparation
  - Model training, evaluation, and batch/real-time inference
- Hands-on exposure to ML frameworks: scikit-learn, TensorFlow, PyTorch (working knowledge)
- Experience with MLOps concepts: model versioning, monitoring, CI/CD for ML (MLflow, SageMaker, Vertex AI)

Generative AI & Prompt Engineering:
- Hands-on experience integrating Generative AI models into data workflows
- Strong understanding of prompt engineering techniques, including:
  - Few-shot and zero-shot prompting
  - Prompt optimization and evaluation
  - Structured outputs (JSON, schemas)
- Experience working with LLM APIs for data enrichment, text summarization, and semantic search and embeddings
- Familiarity with RAG (Retrieval-Augmented Generation) architectures and vector databases: FAISS, Pinecone, Weaviate, OpenSearch

CI/CD, DevOps & Automation:
- Experience with CI/CD pipelines and automation tools: Jenkins, GitHub Actions, GitLab CI
- Experience with version control and artifact management: Git, Artifactory, Nexus
- Familiarity with infrastructure as code: Terraform, CloudFormation (preferred)

Development Methodologies:
- Experience working in Agile / Scrum environments
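The few-shot prompting and structured-output skills listed among the GenAI requirements can be illustrated with a small sketch. This is not Visa's code: the classification task, example labels, and JSON schema are all invented for illustration, and the actual LLM API call is omitted since any provider's client would slot in between `build_prompt` and `parse_response`.

```python
import json

# Illustrative few-shot examples; the task and labels are assumptions.
FEW_SHOT_EXAMPLES = [
    {"text": "Payment of $120 declined at POS", "label": "declined"},
    {"text": "Refund issued for order 991", "label": "refund"},
]

def build_prompt(text: str) -> str:
    """Build a few-shot prompt that asks the model for a strict JSON response."""
    shots = "\n".join(
        "Text: {}\nJSON: {}".format(ex["text"], json.dumps({"label": ex["label"]}))
        for ex in FEW_SHOT_EXAMPLES
    )
    return (
        "Classify the transaction note. Respond with JSON only, "
        'matching the schema {"label": "<category>"}.\n\n'
        f"{shots}\nText: {text}\nJSON:"
    )

def parse_response(raw: str) -> str:
    """Validate the model's raw output against the expected schema."""
    payload = json.loads(raw)
    if not isinstance(payload.get("label"), str):
        raise ValueError("response missing 'label' string")
    return payload["label"]
```

The schema check in `parse_response` is the point: structured outputs only help downstream pipelines if malformed responses fail loudly instead of flowing into data assets.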
- Strong background in Test-Driven Development (TDD) and automated testing
- Familiarity with data testing frameworks (Great Expectations, dbt tests)

Analytics & Data Science (Nice to Have):
- Familiarity with data mining and statistical modeling techniques, including regression, classification, clustering, decision trees
- Ability to collaborate with data scientists and analysts to productionize models

Business & Soft Skills:
- Strong business acumen, able to translate business needs into scalable data solutions
- Strategic thinker with a product-oriented mindset
- Demonstrated analytical rigor, attention to detail, and problem-solving skills
- Team-oriented, collaborative, adaptable, and able to work across functions

Additional Information

Visa is an EEO Employer. Qualified applicants will receive consideration for employment without regard to race, color, religion, sex, national origin, sexual orientation, gender identity, disability or protected veteran status. Visa will also consider for employment qualified applicants with criminal histories in a manner consistent with EEOC guidelines and applicable local law.

Job Family Group: Technology and Operations
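The data testing frameworks named in the posting (Great Expectations, dbt tests) automate checks like the hand-rolled sketch below. The column names, currency whitelist, and failure rules are illustrative assumptions, not part of the posting.

```python
VALID_CURRENCIES = {"USD", "EUR", "INR"}  # assumed whitelist for illustration

def check_rows(rows):
    """Return human-readable failures for a batch of payment-like rows:
    unique txn_id, non-negative amount, known currency."""
    failures = []
    seen_ids = set()
    for i, row in enumerate(rows):
        if row.get("txn_id") in seen_ids:
            failures.append(f"row {i}: duplicate txn_id {row['txn_id']}")
        seen_ids.add(row.get("txn_id"))
        if row.get("amount") is None or row["amount"] < 0:
            failures.append(f"row {i}: amount must be non-negative")
        if row.get("currency") not in VALID_CURRENCIES:
            failures.append(f"row {i}: unknown currency {row.get('currency')}")
    return failures
```

In a real pipeline the same expectations would live in a Great Expectations suite or dbt test so they run automatically on every load, which is what the "data quality and observability" bullets above are asking for.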
Responsibilities
The role involves working with stakeholders to define business requirements, designing and implementing analytic data assets, and building robust data engineering processes to support data scientists with self-serve tools. Responsibilities also include executing data engineering projects and ensuring timely delivery.
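The pipeline-building responsibility above can be sketched as a tiny dependency-ordered task runner. This is only a conceptual sketch: in practice Apache Airflow, Dagster, or Prefect (all named in the posting) would own scheduling, retries, and monitoring, and the task names here are invented.

```python
from graphlib import TopologicalSorter  # stdlib, Python 3.9+

def run_pipeline(tasks, deps):
    """Run callables in an order that respects deps (task -> prerequisites).
    Each task receives the dict of results produced so far."""
    order = list(TopologicalSorter(deps).static_order())
    results = {}
    for name in order:
        results[name] = tasks[name](results)
    return order, results
```

An extract → transform → load chain would be declared as `deps = {"transform": {"extract"}, "load": {"transform"}}`; the runner guarantees upstream results exist before a downstream task reads them, which is the core contract an orchestrator provides.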