Data Engineer at Environmental and Safety Solutions Inc
Glendale, Arizona, USA
Full Time


Start Date

Immediate

Expiry Date

12 Nov, 25

Salary

0.0

Posted On

12 Aug, 25

Experience

5 year(s) or above

Remote Job

Yes

Telecommute

Yes

Sponsor Visa

No

Skills

Good communication skills

Industry

Information Technology/IT

Description

ESS-254

Job Title: Data Engineer
Green Card and visa holders are acceptable.
Hybrid: 3 days in office required each week.
Client Corporation is seeking a talented and ambitious data engineer to join our team in designing, developing, and deploying industry-leading data science and big data engineering solutions. Using Artificial Intelligence (AI), Machine Learning (ML), and big data platforms and technologies, you will increase efficiency in complex work processes and enable data-driven decision making, planning, and execution throughout the lifecycle of mega-EPC projects.
Who you are:
• You yearn to be part of groundbreaking projects and cutting-edge research that deliver world-class solutions on schedule
• You are motivated to find opportunity in evolving challenges and to develop solutions for them, passionate about your craft, and driven to deliver exceptional results
• You love to learn new technologies and mentor junior engineers to raise the bar on your team
• You are imaginative and enthusiastic about intuitive user interfaces, as well as new and emerging concepts and techniques
Job Responsibilities:
• Big data design and analysis, data modeling, development, deployment, and operations of big data pipelines
• Collaborate with a team of other data engineers, data scientists, and business subject matter experts to process data and prepare data sources for a variety of use cases including predictive analytics, generative AI, and computer vision.
• Mentor other data engineers to develop a world-class data engineering team
• Ingest, process, and model data from structured, unstructured, batch, and real-time sources using the latest techniques and technology stack.
Basic Qualifications:
• Bachelor’s degree or higher in Computer Science or an equivalent degree, and 5+ years of work experience
• In-depth experience with a big data cloud platform such as Azure, AWS, Snowflake, or Palantir
• Strong grasp of programming languages and libraries (Python, Scala, SQL, Pandas, PySpark, or equivalent) and a willingness to learn new ones. Strong understanding of structuring code for testability.
• Experience writing database-heavy services or APIs
• Strong hands-on experience building and optimizing scalable data pipelines, complex transformations, architecture, and data sets with Databricks or Spark, Azure Data Factory, and/or Palantir Foundry for data ingestion and processing (see the sketch after this list)
• Proficient in distributed computing frameworks, with familiarity in handling drivers, executors, and data partitions in Hadoop or Spark.
• Working knowledge of queueing, stream processing, and highly scalable data stores such as Hadoop, Delta Lake, Azure Data Lake Storage (ADLS), etc.
• Deep understanding of data governance, access control, and secure view implementation
• Experience in workflow orchestration and monitoring
• Experience working with and supporting cross-functional teams
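As referenced in the pipeline bullet above, the following is a minimal sketch of the kind of batch ingestion-and-transform pipeline this role describes, not an actual project artifact. It assumes a Spark environment with Delta Lake configured; the paths, the record_id column, and the table layout are all hypothetical.

# Minimal PySpark sketch: ingest a batch source, apply a simple
# transformation, and write a partitioned Delta table.
# All names here are illustrative, not from the posting.
from pyspark.sql import SparkSession, functions as F

spark = (
    SparkSession.builder
    .appName("sample-ingest")       # hypothetical application name
    .getOrCreate()
)

raw_path = "/data/raw/readings"          # hypothetical source location
curated_path = "/data/curated/readings"  # hypothetical target location

# Read raw JSON records, stamp the ingest date, and deduplicate
# (assumes the source carries a record_id column).
df = (
    spark.read.json(raw_path)
    .withColumn("ingest_date", F.current_date())
    .dropDuplicates(["record_id"])
)

# Persist as a Delta table partitioned by ingest date.
(
    df.write.format("delta")
    .mode("overwrite")
    .partitionBy("ingest_date")
    .save(curated_path)
)

In practice, a pipeline like this would typically be scheduled and monitored through an orchestrator such as Azure Data Factory, per the workflow orchestration bullet above.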
Preferred Qualifications:
• Experience with schema evolution, data versioning, and Delta Lake optimization
• Exposure to data cataloging solutions in Foundry Ontology
• Professional experience implementing complex ML architectures in popular frameworks such as TensorFlow, Keras, PyTorch, scikit-learn, and CNTK
• Professional experience implementing and maintaining MLOps pipelines in MLflow or Azure ML (see the sketch after this list)
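To illustrate the MLOps bullet above, here is a minimal MLflow experiment-tracking sketch, not a prescribed workflow. It assumes an MLflow tracking setup (local ./mlruns by default) and scikit-learn; the run name, dataset, parameters, and model choice are all hypothetical.

# Minimal MLflow tracking sketch: train a model and log its
# parameters, metric, and artifact so the run is reproducible.
import mlflow
import mlflow.sklearn
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split

# Synthetic data in place of a real feature set.
X, y = make_classification(n_samples=500, random_state=42)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=42)

with mlflow.start_run(run_name="rf-baseline"):  # hypothetical run name
    model = RandomForestClassifier(n_estimators=100, random_state=42)
    model.fit(X_train, y_train)
    acc = accuracy_score(y_test, model.predict(X_test))

    # Record what was trained, how, and how well it performed.
    mlflow.log_param("n_estimators", 100)
    mlflow.log_metric("accuracy", acc)
    mlflow.sklearn.log_model(model, "model")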

How To Apply:

In case you would like to apply for this job directly from the source, please click here

Responsibilities

Please refer to the job description for details
