Scientist III, Data Sciences
at Thermo Fisher Scientific
Pittsburgh, PA 15205, USA
Start Date | Expiry Date | Salary | Posted On | Experience | Skills | Telecommute | Sponsor Visa |
---|---|---|---|---|---|---|---|
Immediate | 27 Dec, 2024 | Not Specified | 01 Oct, 2024 | 4 year(s) or above | Statistics, Jira, Glue, Apache Spark, GitHub, Data Analytics, ETL, Python, Jenkins, Working Experience, Confluence, Business Intelligence, IT, Computer Science, Data Science, Data Integration, Relational Databases, Kafka | No | No |
Required Visa Status:
Citizen, GC (Green Card), US Citizen, Student Visa, H1B, CPT, OPT, H4 (Spouse of H1B)
Employment Type:
Full Time, Part Time, Permanent, Independent – 1099, Contract – W2, C2H Independent, C2H W2, Contract – Corp 2 Corp, Contract to Hire – Corp 2 Corp
Description:
When you join us at Thermo Fisher Scientific, you’ll be part of an inquisitive team that shares your passion for exploration and discovery. With revenues of more than $40 billion and the largest investment in R&D in the industry, we give our people the resources and opportunities to make significant contributions to the world.
EDUCATION:
- Undergraduate degree in Statistics, Computer Science, Data Science, or a related field preferred; an MBA or equivalent consulting/working experience is strongly preferred; a services background is a plus.
- Demonstrated ability, backed by five years of AWS Cloud experience, in data integration with Apache Spark, EMR, Glue, Kafka, Kinesis, and Lambda across S3, Redshift, RDS, and MongoDB/DynamoDB ecosystems.
EXPERIENCE:
- 8+ years of total IT experience, leading and developing BI and data warehouse (DW) applications.
- 4+ years of experience with Data Lake, Data Analytics, and Business Intelligence problems.
- Experience with relational databases, ETL (Extract-Transform-Load), and ETL/DB scripting languages (preferably Databricks and Oracle).
- Solid experience building Data Lakes with AWS Databricks, Apache Spark, and Python (a minimal sketch follows this list).
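To make the stack above concrete, here is a minimal, illustrative PySpark ETL sketch of the kind of Data Lake work this role describes. It is a sketch under assumptions, not part of the posting: the bucket names, paths, and columns (order_id, order_ts, amount) are all hypothetical.

```python
# Minimal PySpark ETL sketch: raw S3 zone -> curated S3 zone.
# All bucket names, paths, and column names are hypothetical examples.
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.appName("orders-etl").getOrCreate()

# Extract: read raw CSV landed in an S3 "raw" zone.
raw = spark.read.option("header", True).csv("s3://example-raw-zone/orders/")

# Transform: cast types, drop malformed rows, derive a partition column.
clean = (
    raw.withColumn("order_ts", F.to_timestamp("order_ts"))
       .withColumn("amount", F.col("amount").cast("double"))
       .dropna(subset=["order_id", "order_ts"])
       .withColumn("order_date", F.to_date("order_ts"))
)

# Load: write partitioned Parquet to the curated zone for BI/analytics.
(clean.write.mode("overwrite")
      .partitionBy("order_date")
      .parquet("s3://example-curated-zone/orders/"))
```

A pipeline like this would typically be developed on AWS Databricks or EMR and promoted through the CI/CD tooling listed in the next section.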
KNOWLEDGE, SKILLS, ABILITIES
- Strong hands-on experience in Python development, especially PySpark in an AWS Cloud environment.
- Experience leading the full life cycle of ETL pipelines and working with supporting platform tools, including GitHub, Jenkins, Terraform, Jira, and Confluence.
- Hard-working and execution-focused, with a willingness to do “what it takes” to deliver results; you will be expected to rapidly handle a considerable volume of data integration demands.
- Ability to analyze trends in very large datasets (see the sketch after this list).
- Excellent prioritization and problem-solving skills.
- Takes a broad view when approaching issues, using a global lens.
- Ability to learn from and train other team members.
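As a companion to the trend-analysis bullet above, here is a small hypothetical PySpark sketch of computing daily order counts and a 7-day moving average per product; it reuses the invented curated dataset from the earlier example.

```python
# Hypothetical trend analysis over a large dataset with PySpark:
# daily order counts and a 7-day moving average of revenue per product.
from pyspark.sql import SparkSession
from pyspark.sql import functions as F
from pyspark.sql.window import Window

spark = SparkSession.builder.appName("trend-analysis").getOrCreate()

# Hypothetical curated dataset written by the earlier ETL sketch.
events = spark.read.parquet("s3://example-curated-zone/orders/")

# Aggregate to one row per product per day.
daily = (
    events.groupBy("product_id", "order_date")
          .agg(F.count("*").alias("orders"), F.sum("amount").alias("revenue"))
)

# 7-day trailing window per product, ordered by date.
w = Window.partitionBy("product_id").orderBy("order_date").rowsBetween(-6, 0)
trend = daily.withColumn("revenue_7d_avg", F.avg("revenue").over(w))

trend.orderBy("product_id", "order_date").show()
```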
ADDITIONAL REQUIREMENTS
- Bachelor’s degree in Computer Science or equivalent, with 5+ years of experience in a data engineering role and a solid grasp of technical, business, and operational process requirements.
Our Mission is to enable our customers to make the world healthier, cleaner and safer. As one team of 100,000+ colleagues, we share a common set of values: Integrity, Intensity, Innovation and Involvement.
Responsibilities:
Please refer to the job description above for details.
REQUIREMENT SUMMARY
Experience: Min 4.0, Max 8.0 year(s)
Industry: Information Technology/IT
Functional Area: IT Software - DBA / Data Warehousing
Role: Software Engineering
Qualification: MBA
Specialization: Computer Science, Statistics
Proficiency: Proficient
Openings: 1
Location: Pittsburgh, PA 15205, USA