Azure Engineer

at Cognizant Technology Solutions

Melbourne VIC 3001, Australia

Start Date: Immediate
Expiry Date: 22 Jul, 2024
Salary: USD 100,000 Annual
Posted On: 28 Apr, 2024
Experience: 2 year(s) or above
Skills: Pipelines, Clarity, Kubernetes, Docker, SQL, Jenkins, Microsoft Azure, Business Requirements, Python, Solution Delivery, Testing, Containerization, Git, Demand, DevOps
Telecommute: No
Sponsor Visa: No
Required Visa Status:
Citizen, GC (Green Card), US Citizen, Student Visa, H1B, CPT, OPT, H4 Spouse of H1B

Employment Type:
Full Time, Part Time, Permanent, Independent - 1099, Contract – W2, C2H Independent, C2H W2, Contract – Corp 2 Corp, Contract to Hire – Corp 2 Corp

Description:

Cognizant (Nasdaq-100: CTSH) is one of the world’s leading professional services companies, transforming clients’ business, operating and technology models for the digital era. Our unique industry-based, consultative approach helps clients envision, build and run more innovative and efficient businesses. Learn how Cognizant helps clients lead with digital at www.cognizant.com or follow us @Cognizant.

POSITION SUMMARY:

  • We are seeking a talented and experienced Azure Cloud and Data Engineer to join our growing team and play a key role in developing and enhancing Data and Analytics applications for our clients within the Insurance domain.
  • The candidate must have a solid background in Python and SQL, strong knowledge of Azure cloud, data warehousing, and data modelling, and expertise in Azure Synapse.

MANDATORY SKILLS:

  • 7+ years of experience in Data and analytics roles.
  • Knowledge of Azure data technologies including Azure Data Factory (ADF), ADLS Gen2, Synapse, and Databricks/Spark pools.
  • Hands-on experience with Delta Lake on Azure Databricks and Azure Synapse (preferred), and with DevOps tooling including Azure Repos, Azure DevOps, Jenkins, Git, and CI/CD processes.
  • Prior extensive experience with Python, SQL, and PySpark.
  • Solid understanding of cloud architecture, demonstrated with Microsoft Azure.
  • Must have experience in Azure Cosmos DB.
  • Proficient in designing, developing, and maintaining data pipelines using Azure Data Factory.
  • Must have 4+ years of programming knowledge in Python/PySpark.
  • Good exposure to DataFrames and Spark SQL.
  • Must have experience in technically leading Duck Creek development & test teams.
  • Provide hands-on support and guidance to the Duck Creek platform team on best practices for solution delivery (development and testing) in the context of Duck Creek OnDemand billing, policy, and Clarity.
  • Experience with containerization and associated microservice tooling such as Docker, and Kubernetes.
  • More than 2 years of domain knowledge in Insurance projects is required.
  • Hands-on experience with SQL and NoSQL databases.
  • Prior experience with streaming (Kafka).
  • Excellent knowledge of ADF building blocks: integration runtimes, linked services, datasets, pipelines, and activities.
  • Strong customer-facing skills with ability to define and work through business requirements and problem statements with clients.
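The ADF building blocks named above (linked services, datasets, pipelines, activities) compose as nested JSON resources. A minimal sketch of that shape, using plain Python dicts; all resource names here (BlobLinkedService, SalesCsv, CopySalesToSql, SalesSqlTable) are hypothetical, not from the posting:

```python
# Sketch of how ADF resources reference each other: a linked service
# points at a store, a dataset binds to the linked service, and a
# pipeline's activities reference datasets as inputs/outputs.
linked_service = {
    "name": "BlobLinkedService",  # connection to Azure Blob Storage
    "properties": {
        "type": "AzureBlobStorage",
        "typeProperties": {"connectionString": "<redacted>"},
    },
}

dataset = {
    "name": "SalesCsv",  # dataset bound to the linked service above
    "properties": {
        "type": "DelimitedText",
        "linkedServiceName": {
            "referenceName": "BlobLinkedService",
            "type": "LinkedServiceReference",
        },
    },
}

pipeline = {
    "name": "CopySalesToSql",  # pipeline = an ordered list of activities
    "properties": {
        "activities": [{
            "name": "CopySales",
            "type": "Copy",
            "inputs": [{"referenceName": "SalesCsv", "type": "DatasetReference"}],
            "outputs": [{"referenceName": "SalesSqlTable", "type": "DatasetReference"}],
        }],
    },
}
```

In a real deployment these definitions live in an ADF/ARM repository (Azure Repos or Git, per the skills above) and are promoted through CI/CD rather than authored by hand.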

Responsibilities:

  • Lead end-to-end development and implementation of insurance projects in the Azure cloud.
  • Optimize data pipelines using Databricks, ensuring high performance and scalability.
  • Collaborate with data scientists and analysts to provide accessible, clean, and integrated data for advanced analytics.
  • Execute cloud deployments using Azure DevOps pipelines or GitHub Actions.
  • Enable development and testing for Duck Creek OnDemand implementations, including DevOps.
  • Ensure data quality and integrity, implementing appropriate data governance and security policies.
  • Implement CI/CD pipelines for AKS.
  • Create pipelines in ADF using linked services, datasets, and activities to extract, transform, and load data from sources such as Azure SQL and Blob Storage.
  • Support optimal pipelines, dataflows, and complex data transformations using ADF and PySpark with Databricks, using PolyBase to load tables in Azure Synapse.
  • Implement a framework to automate jobs using the different ADF trigger types: event-based, schedule, and tumbling window.
  • Use Azure Data Factory extensively to ingest data from different source systems (relational and unstructured) to meet business requirements.
  • Use ADF features such as Stored Procedure, Lookup, Execute Pipeline, Data Flow, and Azure Function activities.
  • Troubleshoot and create database objects within SQL Server, Azure SQL, or Synapse SQL pools.
  • Contribute to design reviews, stakeholder meetings and agile processes.
Salary Range: >$100,000
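Of the ADF trigger types mentioned in the responsibilities, the tumbling window trigger is the least self-explanatory: it fires once per fixed-size, contiguous, non-overlapping interval starting from a given time. A minimal stdlib sketch of that window arithmetic, with illustrative start time and interval:

```python
# Sketch of tumbling-window semantics: back-to-back fixed-size
# intervals, each window ending exactly where the next begins.
from datetime import datetime, timedelta

def tumbling_windows(start, interval, count):
    """Yield (window_start, window_end) pairs for `count` windows."""
    for i in range(count):
        window_start = start + i * interval
        yield window_start, window_start + interval

# Three hourly windows from an arbitrary start time.
windows = list(tumbling_windows(datetime(2024, 4, 28, 0, 0),
                                timedelta(hours=1), 3))
```

In ADF the service computes these boundaries itself and passes them to the pipeline as `trigger().outputs.windowStartTime` / `windowEndTime`; the sketch only illustrates the interval logic.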


REQUIREMENT SUMMARY

Min: 2.0, Max: 7.0 year(s)

Information Technology/IT

IT Software - System Programming

Software Engineering

Graduate

Proficient

1

Melbourne VIC 3001, Australia