Senior Machine Learning Engineer - FedRAMP
at Okta
North Carolina, USA
| Start Date | Expiry Date | Salary | Posted On | Experience | Skills | Telecommute | Sponsor Visa |
|---|---|---|---|---|---|---|---|
| Immediate | 08 Sep, 2024 | USD 146,000 Annual | 10 Jun, 2024 | N/A | Python, Scala, C++, Java, Authentication, Model Development, Training, Testing, Automation, Documentation, Written Communication | No | No |
Description:
GET TO KNOW OKTA
Okta is The World’s Identity Company. We free everyone to safely use any technology—anywhere, on any device or app. Our Workforce and Customer Identity Clouds enable secure yet flexible access, authentication, and automation that transforms how people move through the digital world, putting Identity at the heart of business security and growth.
At Okta, we celebrate a variety of perspectives and experiences. We are not looking for someone who checks every single box - we’re looking for lifelong learners and people who can make us better with their unique experiences.
Join our team! We’re building a world where Identity belongs to you.
BASIC QUALIFICATIONS
- Bachelor’s degree in Computer Science, Engineering, or a related field.
- Fluency in a programming language, e.g. Python, Scala, C++, or Java.
- Knowledge of AWS Bedrock, OpenAI or similar Generative AI platforms.
- Experience with developing production AI/ML systems and platforms at scale, including retrieval-augmented generation (RAG) and embedding workflows.
- Experience with LLMOps, CI/CD, and IaC.
- Familiarity with the full AI/ML lifecycle: model development, training, testing, deployment, monitoring, and iteration.
- Knowledge in prompt engineering and guardrails.
- Excellent verbal and written communication.
- Exceptional troubleshooting and problem-solving skills; thrives in a fast-paced, innovative environment.
PREFERRED QUALIFICATIONS
- Master’s degree in Computer Science, Engineering, or a related field.
- Familiarity with training and fine-tuning models at scale, with experience in LLM evaluation.
- Experience with ML frameworks (e.g. TensorFlow, Spark ML, PyTorch), data workflow platforms (e.g. Airflow), and container technologies (e.g. Docker, Kubernetes).
- Familiarity with Python and libraries such as LangChain and FastAPI.
- Ability to work with ambiguity, self-motivate, prioritize needs, and deliver results in a dynamic environment.
- A combination of deep technical skills and business savvy to interface with all levels and disciplines within our organization and our customers’ organizations.
ADDITIONAL REQUIREMENTS:
- This position requires the ability to access federal environments and/or have access to protected federal data. As a condition of employment for this position, the successful candidate must be able to submit documentation establishing U.S. Person status (e.g. a U.S. Citizen, National, Lawful Permanent Resident, Refugee, or Asylee; see 22 CFR 120.15) upon hire.
Below is the annual base salary range for candidates located in California, Colorado, New York and Washington. Your actual base salary will depend on factors such as your skills, qualifications, experience, and work location. In addition, Okta offers equity (where applicable), bonus, and benefits, including health, dental and vision insurance, 401(k), flexible spending account, and paid leave (including PTO and parental leave) in accordance with our applicable plans and policies. To learn more about our Total Rewards program please visit: https://rewards.okta.com/us.
The annual base salary range for this position for candidates located in California (excluding the San Francisco Bay Area), Colorado, New York, and Washington is $146,000 to $218,000 USD. For candidates located in the San Francisco Bay Area, the range is $163,000 to $245,000 USD.
Responsibilities:
- Design and implement infrastructure and platform components for leveraging LLMs in production.
- Tune and optimize LLMs to improve response quality and performance.
- Build workflows and pipelines to process data from myriad sources into a knowledge base for various use cases.
- Collaborate with platform engineering teams to ensure that AI/ML systems integrate successfully into production environments while adhering to performance and availability SLOs.
- Participate in project planning, design, development, and code reviews.
- Communicate verbally and in writing to business customers and leadership teams with various levels of technical knowledge, educating them about our systems, as well as sharing insights and recommendations.
- Partner with Engineering, Product Management, Security, and Design teams to solve technical and non-technical challenges.
REQUIREMENT SUMMARY
Min: N/A | Max: 5.0 year(s)
Information Technology/IT
IT Software - System Programming
Software Engineering
LLM
Proficient
1
United States, USA