Sr. AI Applications & Prompt Engineering Analyst at OpenGov
Chicago, Illinois, USA
Full Time


Start Date

Immediate

Expiry Date

05 Nov, 25

Salary

$150,000

Posted On

06 Aug, 25

Experience

4+ years

Remote Job

Yes

Telecommute

Yes

Sponsor Visa

No

Skills

GTM, Knowledge Management Systems, Base, Python, dbt, Data Architecture, IT, Airflow

Industry

Information Technology/IT

Description

OpenGov is the leader in AI and ERP solutions for local and state governments in the U.S. More than 2,000 cities, counties, state agencies, school districts, and special districts rely on the OpenGov Public Service Platform to operate efficiently, adapt to change, and strengthen the public trust. Category-leading products include enterprise asset management, procurement and contract management, accounting and budgeting, billing and revenue management, permitting and licensing, and transparency and open data. These solutions come together in the OpenGov ERP, allowing public sector organizations to focus on priorities and deliver maximum ROI with every dollar and decision in sync.

Learn about OpenGov’s mission to power more effective and accountable government and the vision of high-performance government for every community at OpenGov.com.

BONUS SKILLS

  • Experience with Python or tools like LangChain, Airia (Agent Orchestration Platforms), Workato, Airflow, or dbt.
  • Familiarity with knowledge management systems and enterprise data architecture.
  • Past experience implementing AI-powered workflows in GTM, Customer Support, or IT.
COMPENSATION

$115k - $150k
On-target ranges above include base salary plus a portion of variable compensation earned based on company and individual performance. Final compensation will be determined by factors such as qualifications, expertise, and the candidate’s geographical location.
Responsibilities

ABOUT THE ROLE

This is an exciting opportunity to join the team leading AI innovation at OpenGov. You’ll help shape the future of AI-enabled work by developing and scaling both custom AI solutions and vendor-supported tools that drive real business impact. From internally built LLM agents and RAG pipelines to strategic oversight of AI capabilities in platforms like Gong, Enterprise Search, and customer support agents (e.g., Sierra, Decagon, Forethought), this role bridges the technical and the strategic to create cohesive, intelligent user experiences.
We’re seeking a strategic and hands-on Senior Analyst to lead our efforts in applying AI across enterprise systems and workflows. This role will be responsible for designing, optimizing, and scaling prompt engineering strategies across core business functions — from customer support agents to sales assistants and internal automation tools.
You’ll work closely with stakeholders across Product, RevOps, Support, and IT to transform complex workflows into intelligent, AI-assisted experiences using Large Language Models (LLMs), prompt libraries, and retrieval-augmented generation (RAG) frameworks.
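The retrieval-augmented generation (RAG) pattern mentioned above can be sketched in a few lines. This is an illustrative toy, not OpenGov's implementation: the knowledge-base entries, the keyword-overlap scorer, and the prompt template are all assumptions (a production pipeline would retrieve from something like Snowflake with vector search rather than word overlap):

```python
# Toy RAG prompt builder: retrieve relevant context, then ground the
# prompt in it. All documents and scoring below are illustrative.

KNOWLEDGE_BASE = {
    "permitting": "Permits are reviewed within 10 business days of submission.",
    "budgeting": "Budget amendments require council approval before posting.",
    "licensing": "Business licenses renew annually on the issue-date anniversary.",
}

def retrieve(question: str, k: int = 2) -> list[str]:
    """Rank documents by naive keyword overlap with the question."""
    q_terms = set(question.lower().split())
    scored = sorted(
        KNOWLEDGE_BASE.items(),
        key=lambda kv: len(q_terms & set(kv[1].lower().split())),
        reverse=True,
    )
    return [text for _, text in scored[:k]]

def build_prompt(question: str) -> str:
    """Ground the LLM prompt in retrieved context to limit hallucination."""
    context = "\n".join(f"- {doc}" for doc in retrieve(question))
    return (
        "Answer using only the context below. If the answer is not in the "
        "context, say you don't know.\n"
        f"Context:\n{context}\n"
        f"Question: {question}"
    )

prompt = build_prompt("How many business days until permits are reviewed?")
```

The instruction to answer only from the supplied context is the grounding step; swapping the overlap scorer for embedding similarity would not change the prompt-assembly shape.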

KEY RESPONSIBILITIES

  • Design and optimize high-quality prompts for generative AI agents embedded in Salesforce, Slack, support tools, and internal enterprise systems.
  • Collaborate with subject matter experts to turn business workflows into scalable, AI-powered use cases.
  • Lead the development of reusable prompt templates and libraries for cross-functional agents (e.g. support, sales, enablement).
  • Partner with Data and Engineering teams to implement retrieval workflows (e.g. Snowflake + vector search) for grounded LLM responses.
  • Conduct prompt testing and evaluation, continuously refining outputs based on user feedback and performance metrics.
  • Own the feedback loop and reporting for AI agent performance, including hit rates, grounding quality, and hallucination rates.
  • Partner with stakeholders to co-develop prompt libraries and support QA efforts across both internal and external AI systems, while ensuring domain experts own the final validation of outputs.
  • Partner with stakeholders to define and monitor agent success metrics (e.g., case deflection, MTTR reduction, response quality scores).
  • Stay current on developments in LLM technologies, open-source models, and prompt tuning strategies.
  • Contribute to internal enablement by documenting prompt frameworks, use case playbooks, and design patterns.