Lead Data Engineer

at Cognizant

Melbourne VIC 3001, Australia

Start Date: Immediate
Expiry Date: 22 Nov, 2024
Salary: USD 100,000 Annual
Posted On: 22 Aug, 2024
Experience: 2 year(s) or above
Skills: Hive, Jenkins, Indexing, Bamboo, Programming Languages, Java, Scala, Spark, Maven, SQL, Git, Design Patterns, GitLab, Business Requirements, SBT, GitHub, Pipelines, HBase, Integration, Bitbucket, Gradle, Python, Continuous Delivery, MongoDB, Analytics
Telecommute: No
Sponsor Visa: No

Description:

Cognizant (Nasdaq-100: CTSH) is one of the world’s leading professional services companies, transforming clients’ business, operating and technology models for the digital era. Our unique industry-based, consultative approach helps clients envision, build and run more innovative and efficient businesses. Headquartered in the US, Cognizant is ranked 185 on the Fortune 500 and is consistently listed among the most admired companies in the world. Learn how Cognizant helps clients lead with digital at www.cognizant.com or follow us @Cognizant.

POSITION SUMMARY:

This position entails designing and building efficient data pipelines to deliver a modern future-state platform for collections operations. This involves migrating and streamlining the existing systems that service credit products, including loans and cards, used by collections operations teams. The role requires building robust batch and streaming pipelines that produce high-quality, consistent, structured data, and applying encryption to restrict access to sensitive customer information. It calls for strong knowledge of big data streaming technologies and the ability to translate complex business requirements into data processing capabilities, including ingestion, enrichment, and transformation across cloud services.
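To make the pipeline shape concrete, here is a minimal PySpark Structured Streaming sketch: ingest events, mask a sensitive field, and land structured output. Every name in it (broker, topic, schema, paths) is an assumption for illustration, not a detail of this role.

```python
# A minimal sketch, assuming a Kafka source; the broker address, topic,
# column names, and output paths are illustrative, not details taken from
# this posting. Requires the spark-sql-kafka connector on the classpath.
from pyspark.sql import SparkSession
from pyspark.sql import functions as F
from pyspark.sql.types import StructType, StructField, StringType, DoubleType

spark = SparkSession.builder.appName("collections-ingest").getOrCreate()

# Hypothetical schema for an incoming collections event.
event_schema = StructType([
    StructField("account_id", StringType()),
    StructField("customer_tfn", StringType()),   # sensitive: must be restricted
    StructField("balance", DoubleType()),
])

raw = (spark.readStream
       .format("kafka")
       .option("kafka.bootstrap.servers", "broker:9092")   # assumed broker
       .option("subscribe", "collections-events")          # assumed topic
       .load())

events = (raw
          .select(F.from_json(F.col("value").cast("string"), event_schema).alias("e"))
          .select("e.*"))

# Restrict access to the sensitive field by replacing it with a one-way hash.
masked = events.withColumn("customer_tfn", F.sha2("customer_tfn", 256))

query = (masked.writeStream
         .format("parquet")
         .option("path", "/data/collections/events")              # assumed sink
         .option("checkpointLocation", "/data/checkpoints/events")
         .start())
query.awaitTermination()
```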

MANDATORY SKILLS:

  • Solid experience with design patterns for implementing strategic, tactical, and operational products spanning all disciplines of BI and analytics, including data ingestion, integration, modelling, performance acceleration, and visualization.
  • Solid functional understanding of open-source big data technologies.
  • Experience working with Spark Streaming.
  • Work experience with the Hortonworks distribution and the Azure cloud platform is preferred.
  • Proficient with HDFS, Hive, HBase, Spark, and SQL.
  • Experience with the Azure cloud platform and services, including ADLS Gen2, HDInsight, Azure SQL, Azure Data Factory, Cosmos DB, Azure Blob Storage, and Azure Databricks, is desirable.
  • Expertise in various connectors and pipelines for batch and real-time data collection/delivery.
  • At least 2 years of experience with NoSQL databases such as HBase, MongoDB, and Phoenix.
  • Must have strong experience with code management and build tools such as Git, Bitbucket, GitHub, GitLab, SVN, Maven, Gradle, and SBT.
  • Must have hands-on experience with CI/CD deployment tools such as Jenkins, Bamboo, and Azure DevOps.
  • Proficient in relevant programming languages such as Java, Scala, Python, and SQL.
  • Proficiency in analyzing business requirements for data architecture with both SQL and NoSQL databases.
  • Performance tuning: table partitioning and indexing, and process threading (a partitioning sketch follows this list).
  • Ability to support multiple Agile Scrum teams with planning, scoping, and creation of technical solutions for new product capabilities, through to continuous delivery to production.
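As an illustration of the table-partitioning point above, the following hedged sketch writes a Hive table partitioned by business date, so queries filtering on that date prune partitions instead of scanning the whole table. The database, table, and column names are assumptions for the example.

```python
from pyspark.sql import SparkSession

spark = (SparkSession.builder
         .appName("partitioning-sketch")
         .enableHiveSupport()          # talk to the Hive metastore
         .getOrCreate())

# Allow dynamic partition inserts (partition values taken from the data).
spark.sql("SET hive.exec.dynamic.partition.mode=nonstrict")

# Hypothetical curated table, partitioned by business date.
spark.sql("""
    CREATE TABLE IF NOT EXISTS collections.loan_balances (
        account_id STRING,
        balance    DOUBLE
    )
    PARTITIONED BY (process_date STRING)
    STORED AS PARQUET
""")

# Load from an assumed staging table; each distinct process_date becomes its
# own partition, so filters on process_date avoid full-table scans.
spark.sql("""
    INSERT OVERWRITE TABLE collections.loan_balances
    PARTITION (process_date)
    SELECT account_id, balance, process_date
    FROM collections.staging_loan_balances
""")
```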

Responsibilities:

  • Leading data engineering initiatives and overseeing the development of pipelines that are dependable, optimized, testable, and sustainable, with significant responsibility for input into design and data solutioning.
  • Collaborating with business representatives, data analysts, and software developers to comprehend data requirements and devise strategies to fulfill those requirements.
  • Encrypting and masking sensitive customer information using Spark, and ingesting large data files into Hive, HBase, and Phoenix tables.
  • Performing batch processing on Hive tables with Spark and Python to merge and transform large volumes of data (an illustrative sketch follows this list).
  • Aiding in the creation of data governance protocols and processes that safeguard customers' personal information and keep the business compliant with government regulations and security benchmarks.
  • Improving overall code quality and reliability through automated functional and unit testing, enabling early detection of bugs and defects.
  • Using CI/CD tools such as Azure DevOps to build, test, and deploy code and to automate job schedules.
  • Developing Python and shell scripts to perform load assurance tests on ingestion and transformations.
  • Providing guidance and support to new joiners on the best Data Engineering practices.
  • Performing peer code reviews and offering input on the design and implementation of the code.
  • Performing system testing across multiple sources to identify errors, and supporting bug fixes in production releases.
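The batch merge-and-mask responsibilities above might look roughly like the following hedged PySpark sketch, which ends with a simple load-assurance check. All database, table, and column names are assumptions for the example, not details of the role.

```python
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = (SparkSession.builder
         .appName("collections-batch-merge")
         .enableHiveSupport()
         .getOrCreate())

payments = spark.table("collections.stg_payments")   # assumed staging table
accounts = spark.table("collections.dim_accounts")   # assumed reference table

merged = (payments
          .join(accounts, "account_id", "left")
          .withColumn("load_ts", F.current_timestamp())
          # Mask the sensitive identifier with a one-way hash before it
          # leaves the pipeline.
          .withColumn("tax_file_number", F.sha2("tax_file_number", 256)))

merged.write.mode("overwrite").saveAsTable("collections.fct_payments")

# A minimal load-assurance check of the kind the scripting responsibility
# mentions: fail fast if the written row count drifts from the source.
src_rows = payments.count()
tgt_rows = spark.table("collections.fct_payments").count()
assert src_rows == tgt_rows, f"load assurance failed: {src_rows} != {tgt_rows}"
```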
Salary Range: >$100,000


REQUIREMENT SUMMARY

Experience: Min 2.0, Max 7.0 year(s)

Industry: Information Technology/IT
Category: IT Software - Other
Role: Software Engineering
Education: Graduate
Proficiency: Proficient
Openings: 1
Location: Melbourne VIC 3001, Australia