Senior DevOps Engineer

at T. Rowe Price

London, England, United Kingdom

Start Date: Immediate
Expiry Date: 22 Dec, 2024
Salary: Not Specified
Posted On: 25 Sep, 2024
Experience: N/A
Skills: Version Control, Oversight, Security, Logging, Requirements Analysis, Storage, Root Cause, Glue, Operations, Python, ETL, Teams, Kafka, Athena, Data Models, Infrastructure, Programming Languages, IT Service Management, Problem Management, Data Infrastructure, Splunk, Design
Telecommute: No
Sponsor Visa: No

Description:

There is a place for you at T. Rowe Price to grow, contribute, learn, and make a difference. We are a premier asset manager focused on delivering global investment management excellence and retirement services that investors can rely on today and in the future. The work we do matters. We invite you to explore the opportunity to join us and grow your career with us.

Responsibilities

  • Develop data processing pipelines in languages such as Java and Python to extract, transform, and load log data.
  • Implement scalable, fault-tolerant solutions for data ingestion, processing, and storage.
  • Support systems engineering lifecycle activities for data engineering deployments, including requirements gathering, design, testing, implementation, operations, and documentation.
  • Automate platform management processes using Ansible or other scripting tools and languages.
  • Troubleshoot incidents impacting the log data platforms.
  • Collaborate with cross-functional teams to understand data requirements and design scalable solutions that meet business needs.
  • Develop training and documentation materials.
  • Support log data platform upgrades, including coordinating testing of upgrades with users of the platform.
  • Gather and process raw data from multiple disparate sources (including writing scripts, calling APIs, and writing SQL queries) into a form suitable for analysis.
  • Enable batch and real-time analytical processing of log data, leveraging emerging technologies.
  • Participate in on-call rotations to address critical issues and ensure the reliability of data engineering systems.

Experience

  • Ability to troubleshoot and diagnose complex issues.
  • Demonstrated experience supporting technical users and conducting requirements analysis.
  • Able to work independently with minimal guidance and oversight.
  • Experience with IT Service Management, including Incident and Problem Management.
  • Highly skilled in identifying performance bottlenecks and anomalous system behavior, and in resolving the root cause of service issues.
  • Demonstrated ability to work effectively across teams and functions to influence the design, operations, and deployment of highly available software.
  • Knowledge of standard methodologies related to security, performance, and disaster recovery.

Required Technical Expertise

  • Expertise in languages such as Java and Python, including hands-on experience implementing data processing pipelines that extract, transform, and load (ETL) data.
  • Ability to create and maintain data models, ensuring efficient storage, retrieval, and analysis of large datasets.
  • Ability to troubleshoot and resolve issues related to data processing, storage, and retrieval.
  • 3–5 years' experience designing, developing, and deploying data lakes using AWS native services (S3, Glue (Crawlers, ETL, Catalog), IAM, Terraform, Athena).
  • Experience developing systems for the extraction, ingestion, and processing of large volumes of data.
  • Experience with data pipeline orchestration platforms.
  • Experience with Ansible, Terraform, or CloudFormation and Infrastructure as Code scripting is required.
  • Experience implementing version control and CI/CD practices for data engineering workflows to ensure reliable and efficient deployments.
  • Proficiency in implementing monitoring, logging, and alerting solutions for data infrastructure (e.g., Prometheus, Grafana).
  • Proficiency in distributed Linux environments.

Preferred Technical Experience

  • Familiarity with data streaming technologies such as Kafka, Kinesis, and Spark Streaming.
  • Knowledge of cloud platforms (AWS preferred) and container and orchestration technologies.
  • Experience with AWS OpenSearch and Splunk.
  • Experience with common scripting and query languages.

Commitment to Diversity, Equity, and Inclusion:
We strive for equity, equality, and opportunity for all associates. When we embrace the power of diversity and create an environment where people can bring their authentic and best selves to work, our firm is stronger, and we create greater value for our clients. Our commitment and inclusive programming aim to lift the experience of each associate and build allies for our global associate community. We know that a sense of belonging is key not only to your success at the firm, but also to your ability to bring your best each day.
T. Rowe Price is an equal opportunity employer and values diversity of thought, gender, and race. We believe our continued success depends upon the equal treatment of all associates and applicants for employment without discrimination on the basis of race, religion, creed, colour, national origin, sex, gender, age, mental or physical disability, marital status, sexual orientation, gender identity or expression, citizenship status, military or veteran status, pregnancy, or any other classification protected by country, federal, state, or local law.


REQUIREMENT SUMMARY

Min: N/A  Max: 5.0 year(s)

Information Technology/IT

IT Software - Other

Software Engineering

Graduate

Proficient

1

London, United Kingdom