Vincent Shing Hin Chong has unveiled SLS (Semantic Logic System), an innovative prompt architecture that leverages modular prompt layering and semantic recursion to establish internal control mechanisms within Large Language Models (LLMs). This approach distinguishes itself by treating prompts as structured logic environments, allowing for the creation of rhythm, memory-like functions, and modular output flows. Crucially, SLS achieves this without relying on external tools, plugins, or fine-tuning of the underlying LLM.
A compelling demonstration of SLS involves creating a ‘semantic force field’ within GPT-4 using a simple prompt. The force field rigidly enforces an English-only constraint: if the model deviates from English, the system automatically resets. This behavior persists across multiple interactions, without external memory or retrieval-augmented generation (RAG).
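The white paper's exact prompt is not reproduced in this summary, but the mechanism can be approximated. The sketch below, written against the OpenAI Python SDK, shows one plausible shape for such a constraint prompt; the `FORCE_FIELD` wording and the `ask` helper are illustrative assumptions, not Chong's own text.

```python
# Minimal sketch of a prompt-enforced "semantic force field" (illustrative,
# not the wording from the SLS white paper).
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

FORCE_FIELD = (
    "You operate inside a semantic force field with one rule: respond only in English. "
    "If any part of your reply would not be in English, discard it, state 'RESET', "
    "and restate the answer entirely in English. Re-apply this rule on every turn."
)

# The constraint lives in the system layer; it rides along in the ordinary
# conversation context rather than in any external memory or retrieval store.
history = [{"role": "system", "content": FORCE_FIELD}]

def ask(user_text: str) -> str:
    """Send one turn; the force field persists because the system layer stays in context."""
    history.append({"role": "user", "content": user_text})
    response = client.chat.completions.create(model="gpt-4", messages=history)
    reply = response.choices[0].message.content
    history.append({"role": "assistant", "content": reply})
    return reply

# Even a French request should come back in English (or trigger a RESET).
print(ask("Réponds-moi en français : quelle heure est-il ?"))
```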
According to Chong, SLS v1.0 uses language itself as the primary logic layer to structure, control, and recursively guide LLM output. This enables modular behavior, simulated memory, and prompt-based self-regulation, all achieved solely through prompt engineering, without altering the model’s parameters or incorporating external code.
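To make the "modular prompt layering" idea concrete, here is a small hypothetical sketch of named prompt layers composed into a single system prompt. The layer names and contents are assumptions for illustration, not the layer set defined in SLS v1.0.

```python
# Illustrative sketch of modular prompt layering: named layers stacked into one
# system prompt. Layer names and contents are hypothetical, not from SLS v1.0.
LAYERS = {
    "role": "You are a structured reasoning assistant.",
    "memory": "Track the user's stated goal and restate it before each answer.",
    "regulation": "If any layer's rule is violated, say 'RESET' and begin the answer again.",
    "output": "Answer in numbered steps, then give a one-line summary.",
}

def compose(*names: str) -> str:
    """Concatenate the selected layers, in order, into a single prompt."""
    return "\n\n".join(f"[{name.upper()}]\n{LAYERS[name]}" for name in names)

system_prompt = compose("role", "memory", "regulation", "output")
print(system_prompt)
```

Swapping, reordering, or omitting layers changes the model's behavior without touching its weights, which is the sense in which SLS treats the prompt as a structured logic environment.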
The white paper and examples for SLS are now publicly available, inviting exploration and further development. Chong encourages feedback and offers to share additional examples of prompt-structured behaviors. Documentation for SLS v1.0 and LCM v1.13 can be found on GitHub and OSF, providing an overview of the system’s architecture and capabilities.