Data Engineer II at BD
Irvine, California, USA
Full Time


Start Date

Immediate

Expiry Date

03 Dec, 25

Salary

$80,600

Posted On

03 Sep, 25

Experience

2 year(s) or above

Remote Job

Yes

Telecommute

Yes

Sponsor Visa

No

Skills

Processing, AWS, Cloud Services, ETL Tools, Data Structures, Creativity, Python, Information Systems, Healthcare Industry, Statistics, Pentaho, IT, Airflow, Clinical Data, Relational Databases, MATLAB, Computer Science

Industry

Information Technology/IT

Description

JOB DESCRIPTION SUMMARY

Looking for an Engineer II, Data Engineering, who will be responsible for expanding and optimizing our data ingestion, storage, and analytics platform. The ideal candidate is an experienced modern data warehouse expert who enjoys optimizing data systems and building data pipelines.

JOB DESCRIPTION

We are the makers of possible
BD is one of the largest global medical technology companies in the world. Advancing the world of health™ is our Purpose, and it’s no small feat. It takes the imagination and passion of all of us—from design and engineering to the manufacturing and marketing of our billions of MedTech products per year—to look at the impossible and find transformative solutions that turn dreams into possibilities.
We believe that the human element, across our global teams, is what allows us to continually evolve. Join us and discover an environment in which you’ll be supported to learn, grow and become your best self. Become a maker of possible with us.

QUALIFICATIONS

  • 4+ years of experience building processes supporting data transformations, data structures, dependencies and workload management
  • Strong analytic skills related to working with structured and unstructured datasets
  • Advanced SQL and ETL experience working with relational databases
  • Experience building and optimizing "big data" pipelines, architectures, and datasets
  • Proven history of manipulating, processing, and extracting value from large, disconnected datasets
  • Working knowledge of one or more of the following: AWS Redshift, Azure Synapse, AWS or Azure cloud services, Postgres
  • Good knowledge of one of the following ETL tools: Informatica, Matillion, Pentaho, or SSIS
  • Good experience with Python and/or MATLAB
  • Experience with data pipeline and workflow management tools (Azkaban, Luigi, Airflow, etc.) is a plus
  • Knowledge of clinical data or medical device products is a huge plus
  • Experience supporting and working with cross-functional teams in a dynamic environment.
  • Bachelor’s degree in Computer Science, Statistics, Information Systems, Engineering, or another quantitative field with a minimum of 4 years of industry experience, or a Master’s degree with 2 years of industry experience
At BD, we prioritize on-site collaboration because we believe it fosters creativity, innovation, and effective problem-solving, which are essential in the fast-paced healthcare industry. For most roles, we require a minimum of 4 days of in-office presence per week to maintain our culture of excellence and ensure smooth operations, while also recognizing the importance of flexibility and work-life balance. Remote or field-based positions will have different workplace arrangements, which will be indicated in the job posting.
For certain roles at BD, employment is contingent upon the Company’s receipt of sufficient proof that you are fully vaccinated against COVID-19. In some locations, testing for COVID-19 may be available and/or required. Consistent with BD’s Workplace Accommodations Policy, requests for accommodation will be considered pursuant to applicable law.
Responsibilities
  • Design and implement data warehouse solutions using cloud services
  • Design data models and build highly scalable, cloud-based analytical solutions
  • Design, build, and maintain modern data warehouse schemas, data pipelines, and data streams
  • Design and develop ETL code for integrating large datasets into the cloud platform from various data sources
  • Automate ETL processes, optimize data delivery, and redesign the data warehouse for greater scalability
  • Design and implement audit processes for data ingestion and access
  • Good working experience with AWS, Azure, or GCP
  • Should be hands-on, with solid experience applying principles and best practices of schema design and storage solutions across various database/file systems: relational (Redshift, Synapse, or Snowflake) and data lakes