Safely Executing LLM-Proposed Actions with Typed Verifiers

This article presents a pattern for safely executing actions proposed by large language models (LLMs) by using a deterministic verifier and a strictly typed executor, avoiding direct execution of LLM output.

đź’ˇ

Why it matters

This pattern helps address the safety and reliability challenges of using LLMs in mission-critical business applications.

Key Points

  1. Split roles: the LLM proposes a plan, a verifier deterministically accepts, rejects, or degrades the plan, and an executor runs only verified typed actions.
  2. The input schema defines what counts as grounds; the LLM outputs a plan of typed actions; the verifier returns ACCEPT/REJECT/DEGRADE.
  3. The executor runs only verified typed actions, never raw LLM output.
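The three roles above can be sketched as follows. This is a minimal illustration, not the article's implementation: the action types (`RefundOrder`, `SendEmail`), the refund cap, and all policy values are hypothetical stand-ins for a domain-specific schema.

```python
from dataclasses import dataclass
from enum import Enum
from typing import List, Tuple, Union

# Hypothetical typed actions -- a real system defines its own schema.
@dataclass(frozen=True)
class SendEmail:
    to: str
    subject: str

@dataclass(frozen=True)
class RefundOrder:
    order_id: str
    amount_cents: int

Action = Union[SendEmail, RefundOrder]

class Verdict(Enum):
    ACCEPT = "accept"
    REJECT = "reject"
    DEGRADE = "degrade"

def verify(plan: List[Action]) -> Tuple[Verdict, List[Action]]:
    """Deterministic checks over typed actions; never calls the LLM."""
    checked: List[Action] = []
    degraded = False
    for action in plan:
        if isinstance(action, RefundOrder):
            if action.amount_cents <= 0:
                return Verdict.REJECT, []          # invalid plan: reject outright
            if action.amount_cents > 5_000:        # hypothetical policy cap
                action = RefundOrder(action.order_id, 5_000)
                degraded = True                    # degrade to a safe variant
        checked.append(action)
    return (Verdict.DEGRADE if degraded else Verdict.ACCEPT), checked

def execute(verdict: Verdict, plan: List[Action]) -> List[str]:
    """Runs only verified typed actions, never raw LLM text."""
    if verdict is Verdict.REJECT:
        return []
    return [f"ran {type(a).__name__}" for a in plan]

# The LLM's only job is to propose `plan` (e.g. parsed from structured output).
plan = [RefundOrder("ord-1", 9_000), SendEmail("a@example.com", "Refund issued")]
verdict, safe_plan = verify(plan)
print(verdict, execute(verdict, safe_plan))
```

Because the verifier is ordinary deterministic code, its accept/reject/degrade behavior can be unit-tested and audited independently of the model.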

Details

The article discusses the problem of directly executing the probabilistic output of LLMs, which can lead to accidents. It proposes a pattern with three components: 1) structuring the input schema to define what counts as grounds, 2) having the LLM output a plan of typed actions rather than free text, and 3) using a deterministic verifier to accept, reject, or degrade the plan before the executor runs the verified typed actions. This separation makes the system more operable and reduces the risk of accidents from executing LLM output directly. The article also explores a thought experiment around just-in-time production access: the LLM proposes a plan, but a compiler-like verifier determines eligibility, least privilege, and approval before the executor runs the approved actions.
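The just-in-time access thought experiment can be sketched the same way. Everything here is illustrative and assumed, not from the article: the `AccessRequest` shape, the eligibility table, and the per-role TTL caps stand in for whatever IAM policy a real system would consult.

```python
from dataclasses import dataclass
from typing import Optional

@dataclass(frozen=True)
class AccessRequest:
    user: str
    role: str          # role the LLM proposes
    ttl_minutes: int   # requested duration

# Hypothetical policy tables; a real verifier would read IAM config.
ELIGIBLE = {"alice": {"db-readonly", "db-admin"}}
MAX_TTL = {"db-readonly": 60, "db-admin": 15}

def verify_access(req: AccessRequest) -> Optional[AccessRequest]:
    """Compiler-like check: eligibility first, then least privilege."""
    if req.role not in ELIGIBLE.get(req.user, set()):
        return None                                   # REJECT: not eligible
    ttl = min(req.ttl_minutes, MAX_TTL[req.role])     # DEGRADE: clamp duration
    return AccessRequest(req.user, req.role, ttl)

def grant(req: AccessRequest) -> str:
    """Executor: grants only what the verifier approved."""
    return f"granted {req.role} to {req.user} for {req.ttl_minutes}m"

proposed = AccessRequest("alice", "db-admin", ttl_minutes=240)  # LLM proposal
approved = verify_access(proposed)
if approved is not None:
    print(grant(approved))
```

The model can over-ask (240 minutes of admin access), but the verifier, not the model, decides what is actually granted; the executor never sees the unverified request.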
