Research Compute Platform Engineer at Lasso Informatics Inc
Montreal, Quebec, Canada
Full Time


Start Date

Immediate

Expiry Date

26 Jun, 2026

Salary

115,000

Posted On

28 Mar, 2026

Experience

5 year(s) or above

Remote Job

Yes

Telecommute

Yes

Sponsor Visa

No

Skills

AWS, GCP, Linux System Administration, Terraform, Pulumi, RStudio, Jupyter, MATLAB, Apptainer, Singularity, Docker, DataLad, Infrastructure as Code, Cost Management, Security, Compliance

Industry

Data Infrastructure and Analytics

Description
Research Compute Platform Engineer
Development / Engineering
Location: Remote, North America

About Lasso

Lasso Informatics is a SaaS start-up with a live research data management and analysis platform that brings together multi-modal (imaging, genetics, behavioral, and biosample) data for large-scale studies. Thousands of researchers across the globe rely on our platform today, and we're rapidly iterating and improving to push the boundaries of what's possible in research data management. We live to innovate and to empower scientists to focus on the science, not the technology, leading to a faster time to science and, ultimately, to cures.

Our team is incredibly diverse in both background and expertise, and that is not by accident: we believe that the most creative and powerful solutions come from different ways of thinking about the world. You will work in an inspiring ecosystem alongside world-renowned professionals in medicine, physics, engineering, imaging, epidemiology, software development, and genetics. We thrive on empowering our colleagues to be thought leaders and to innovate fresh solutions for an exciting and rapidly changing field.

ABOUT THE ROLE

We are looking for a Research Compute Platform Engineer to design and build the next generation of Secure Compute Environments (SAFE) across AWS and GCP. This role focuses on developing new platform capabilities, establishing scalable and secure patterns, and enabling a smooth handoff to SysOps for production rollout and ongoing operations. This is a platform design and implementation role: you will define how systems should work, ensure they scale across multiple environments, and package them for reliable operation by others.
RESPONSIBILITIES

PLATFORM DESIGN & IMPLEMENTATION
* Design and implement new platform capabilities for secure research environments (compute, storage, access, tooling)
* Build reusable reference architectures and standardized patterns for SAFE deployments
* Develop infrastructure-as-code (e.g., Terraform) to enable consistent and repeatable environments

RESEARCH & SCIENTIFIC COMPUTE ENABLEMENT
* Build and support platforms for RStudio, Jupyter, and MATLAB in secure, multi-user environments
* Enable reproducible workflows using tools such as DataLad and Apptainer (Singularity)
* Support machine learning and data-intensive workloads, including GPU-enabled environments

SCALABILITY & MULTI-ENVIRONMENT ARCHITECTURE
* Design multi-tenant, multi-environment systems with clear isolation boundaries
* Define cloud resource organization strategies (AWS accounts, GCP projects/folders)
* Ensure systems scale across teams, projects, and data sensitivity levels

PERFORMANCE & SYSTEMS OPTIMIZATION
* Optimize compute environments for CPU, GPU, memory, and disk I/O performance
* Design efficient storage and data access patterns (object storage, buckets, high-throughput file systems)
* Identify and resolve bottlenecks across compute, storage, and networking layers

COST MANAGEMENT & GOVERNANCE
* Define and implement tagging/labeling strategies for cost attribution and governance
* Establish billing visibility and usage tracking across environments
* Implement guardrails for budget control, quotas, and cost optimization

SECURITY & COMPLIANCE
* Translate security and compliance requirements into enforceable infrastructure patterns
* Implement access controls, audit logging, and data governance mechanisms
* Ensure environments meet regulatory and organizational requirements

HANDOFF TO SYSOPS
* Produce clear documentation, runbooks, and implementation guides
* Ensure solutions are operationally sound, automatable, and maintainable
* Partner closely with SysOps to support rollout and ongoing operations
* Iterate on designs based on operational feedback

QUALIFICATIONS

REQUIRED
* Strong experience with AWS and/or GCP (compute, storage, networking)
* Experience designing scalable, multi-environment or multi-tenant systems
* Hands-on experience with Linux system administration
* Experience with Infrastructure as Code (Terraform, Pulumi, or similar)
* Familiarity with:
  * RStudio, Jupyter, and/or MATLAB in shared environments
  * Containers (Apptainer/Singularity, Docker)
  * Data versioning or reproducibility tools (e.g., DataLad)
* Understanding of:
  * Disk I/O and storage performance
  * Object storage (S3/GCS) and bucket design
  * GPU selection and workload optimization
* Experience supporting machine learning or data-intensive workloads
* Ability to design systems that balance security, usability, performance, and cost

NICE TO HAVE
* Experience with HPC environments or research clusters
* Familiarity with schedulers (Slurm, Kubernetes)
* Experience with secure data enclaves or clean room environments
* Knowledge of compliance frameworks (HIPAA, SOC 2, ISO 27001)
* Experience with policy-as-code or security automation

KEY CHARACTERISTICS
* Strong systems thinker with attention to scalability and standardization
* Able to move from prototype to production-ready design
* Designs with operational handoff in mind
* Comfortable working across infrastructure, security, and research domains
* Clear communicator who can document and transfer knowledge effectively
Responsibilities
The role involves designing and implementing the next generation of Secure Compute Environments (SAFE) across AWS and GCP, focusing on developing new platform capabilities and establishing scalable, secure patterns. Responsibilities include building reusable reference architectures, developing infrastructure-as-code, enabling scientific compute platforms (RStudio, Jupyter, MATLAB), and optimizing performance for data-intensive workloads.