Senior Data Scientist - $55.59/hr at Healthcare Staffing Professionals Inc
California, USA
Full Time


Start Date

Immediate

Expiry Date

03 Dec, 25

Salary

55.59

Posted On

03 Sep, 25

Experience

4 year(s) or above

Remote Job

Yes

Telecommute

Yes

Sponsor Visa

No

Skills

Go, Data Engineering, PostgreSQL, Pipelines, Data Infrastructure, Data Models, Computer Science, SQL, Indexing, Agile, System Development, Kubernetes, Infrastructure, Docker, Java, R, Python, Testing, Scrum

Industry

Information Technology/IT

Description

Healthcare Staffing Professionals has an immediate need for a Senior Data Scientist to support the Disease Control Informatics program. The Disease Control Informatics Unit is seeking a Senior Data Scientist (SDS) with a strong DevOps and backend engineering orientation to join its data systems team. This individual will lead efforts in building scalable ETL pipelines and analytics-ready environments to support public health operations. The ideal candidate combines a deep understanding of data architecture and DevOps principles with hands-on experience in orchestrating workflows, managing large-scale relational datasets, and ensuring high availability and resilience in cloud-native environments.
This role will collaborate closely with data engineers, application developers, and analysts to design systems that can scale with the agency's growing data needs while maintaining security, uptime, and performance.
The position will start immediately upon acceptance and employment clearance and is initially funded through 12/31/25, with potential extension.
Pay Rate: $55.59/hr, working 40 hours a week
Location: Remote

MINIMUM QUALIFICATIONS:

  • Bachelor’s degree in Computer Science, Data Engineering, Information Systems, or a closely related field AND 2+ years of recent full-time experience in backend system development and data infrastructure OR
  • 4+ years of hands-on experience in a DevOps, infrastructure, or data engineering role with a focus on production pipeline support
  • A valid California Class C Driver License or equivalent transportation access

REQUIRED SKILLS:

  • Expertise in Python, SQL, and R, with proficiency in additional languages such as Go, Java, or Julia
  • Proficient in PostgreSQL, including advanced SQL, indexing, partitioning, and query planning for large datasets
  • Experience managing star schema and snowflake schema data models for BI/analytics
  • Production experience with Kubernetes, Docker, and other containerization technologies
  • Strong knowledge of Airflow, Spark, or other distributed data frameworks
  • Experience working on Linux systems, managing multiple applications across production and testing environments
  • Experience designing and managing CI/CD pipelines with testing, linting, and deployment automation
  • Working knowledge of monitoring, alerting, and observability practices across infrastructure and pipelines
  • Background in HIPAA or similar compliance frameworks is a plus
  • Strong team communication and documentation skills, with experience working in Agile or Scrum environments
We are an equal opportunity employer, and all qualified applicants will receive consideration for employment without regard to race, color, religion, sex, national origin, disability status, protected veteran status, sexual orientation, gender identity, or any other characteristic protected by law.
TrueBlue, Inc. and its brands welcome and encourage applications from candidates with disabilities. Accommodations are available upon request for candidates taking part in the application or interview process. If you require disability-related accommodation during the application or interview process, please contact your recruiter directly, employee relations at HR-Advice@trueblue.com, or 1-800-610-8920. TrueBlue, Inc. and its brands will consult with all applicants who request disability-related accommodation during the application or interview process to ensure that the accommodation provided takes into account the applicant's individual accessibility needs.
We consider qualified applicants with arrest and conviction records in accordance with applicable law.
Responsibilities
  • Lead architecture and deployment of production-grade ETL pipelines supporting millions of records daily across multiple relational databases, following a star-schema model.
  • Design and maintain scalable backend infrastructure for automated data ingestion, transformation, and reporting (Apache Airflow, Spark).
  • Manage and optimize Kubernetes-based workloads including container deployment, autoscaling, pod networking, and resource monitoring.
  • Develop and maintain secure, high-availability data services integrated with relational databases (PostgreSQL, SQL Server) and cloud-native platforms and data stores (Snowflake, Azure, AWS).
  • Automate infrastructure provisioning and deployment using CI/CD pipelines (GitHub Actions, GitLab CI).
  • Implement DevOps best practices including continuous testing, environment isolation, logging, and observability
  • Conduct performance tuning and query optimization on datasets exceeding millions of rows, ensuring efficient analytics on normalized and denormalized data models.
  • Support failover, redundancy, and disaster recovery planning for critical data systems and pipelines.
  • Mentor junior developers and data scientists, providing architectural guidance and code review support across the data systems lifecycle.
  • Document workflows, system architecture, and operational protocols to ensure team knowledge continuity and scalability.