Revolutionizing Trading Systems: From Distributed Veto to Deterministic State Machines

A new approach to trading systems is emerging, one that strips large language models (LLMs) of their execution rights and instead hands trading decisions to a deterministic state machine. This shift is a response to the limitations of the ‘distributed veto’ system, in which multiple LLM agents argued until they reached consensus on a trade.

The new system, dubbed v2, implements a strict state machine on a deterministic runtime (llm-nano-vm). In this setup, Python is responsible for the mathematics and the execution contracts, while the LLM interprets context. The architecture consists of five modules, each with a distinct division of labor between Python and the LLM.

The modules include:

  • HTF Agent (Higher Timeframe – D1/H4): Python extracts structural levels, break-of-structure/change-of-character (BOS/CHoCH) signals, and premium/discount zones, while the LLM determines the institutional narrative and selects the most relevant Draw on Liquidity (DOL).
  • Structure Agent (H1): Python identifies valid Order Blocks (OB) and Fair Value Gaps (FVG) with displacement, and the LLM selects the highest-probability Point of Interest (POI) based on the HTF Agent’s narrative.
  • Trigger Agent (M15/M5): This module is 100% Python, focusing on deterministic checks for liquidity sweeps and a lower-timeframe (LTF) CHoCH inside the selected POI.
  • Context Agent: The LLM cross-references active killzones, news blackouts, and currency correlations to either greenlight or veto the setup.
  • Risk Agent: This module is also 100% Python, calculating Entry, SL, TP, Expected Value (EV), and position sizing.
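To make the "100% Python" claim for the Trigger Agent concrete, here is a minimal sketch of what a rule-based liquidity-sweep check might look like. The function names, the wick-above/close-below heuristic, and the POI range test are all illustrative assumptions, not the author's actual implementation:

```python
def swept_liquidity(candle_high: float, candle_close: float,
                    liquidity_level: float) -> bool:
    """Hypothetical sweep check: the wick trades above a resting
    liquidity level (e.g. a prior high) but the candle closes back
    below it. Purely rule-based -- no LLM judgment involved."""
    return candle_high > liquidity_level and candle_close < liquidity_level

def inside_poi(price: float, poi_low: float, poi_high: float) -> bool:
    """Hypothetical test that price sits inside the selected POI range."""
    return poi_low <= price <= poi_high
```

Because these checks are boolean and deterministic, the same candle data always yields the same verdict, which is exactly the property the Trigger layer is meant to guarantee.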

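The Risk Agent's job is pure arithmetic, so it can be sketched directly. The field names, the risk-per-trade convention, and the R-multiple expected-value formula below are assumptions for illustration, not taken from the article:

```python
from dataclasses import dataclass

@dataclass
class RiskPlan:
    entry: float
    stop_loss: float
    take_profit: float
    expected_value: float   # EV expressed in R multiples
    position_size: float    # units of the traded asset

def build_risk_plan(entry: float, stop_loss: float, take_profit: float,
                    win_rate: float, account_equity: float,
                    risk_pct: float = 0.01) -> RiskPlan:
    """Hypothetical Risk Agent: deterministic arithmetic, no LLM involvement."""
    risk_per_unit = abs(entry - stop_loss)
    reward_per_unit = abs(take_profit - entry)
    rr = reward_per_unit / risk_per_unit          # reward-to-risk ratio
    ev = win_rate * rr - (1 - win_rate)           # EV in R: win p * RR - loss p
    size = (account_equity * risk_pct) / risk_per_unit
    return RiskPlan(entry, stop_loss, take_profit, ev, size)
```

For example, a long at 1.1000 with the stop at 1.0950 and target at 1.1150 yields a 3R trade; at a 40% assumed win rate the EV is 0.4 × 3 − 0.6 = +0.6R, and risking 1% of a 10,000 account sizes the position at roughly 20,000 units.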
The state machine transitions to EXECUTING only if the deterministic Trigger and Risk modules agree. The LLMs serve as ‘context providers’ for the state machine, which raises questions about the division of labor and the responsibilities assigned to them.
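That transition rule can be sketched as a single deterministic step function. Only EXECUTING is named in the article; the other state names, and the exact veto ordering, are assumptions:

```python
from enum import Enum, auto

class State(Enum):
    # Hypothetical state names; the article only names EXECUTING explicitly.
    SCANNING = auto()
    AWAITING_TRIGGER = auto()
    EXECUTING = auto()
    VETOED = auto()

def step(state: State, trigger_ok: bool, risk_ok: bool,
         context_veto: bool) -> State:
    """One deterministic transition: EXECUTING is reachable only when
    both Python-side modules (Trigger and Risk) agree, and a Context
    Agent veto blocks the setup regardless of the other checks."""
    if state is State.AWAITING_TRIGGER:
        if context_veto:
            return State.VETOED
        if trigger_ok and risk_ok:
            return State.EXECUTING
    return state  # all other inputs leave the machine where it is
```

The design point is that the LLM outputs (the narrative, the POI choice, the context veto) enter this function only as plain inputs; the transition logic itself never consults a model.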

Key concerns include whether the LLMs carry too much or too little responsibility, whether the deterministic Trigger layer forfeits the advantages of AI, and whether merging the HTF and Structure agents would reduce token constraints and hallucinations.
