
Ittirit Saengow

Solo Founder & Developer

อิทธิฤทธิ์ แซ่โง้ว

Founder and Architect, RCT Labs

Ittirit Saengow (อิทธิฤทธิ์ แซ่โง้ว) is the founder, sole developer, and primary author of RCT Labs — a constitutional AI operating system platform built independently from architecture through publication. He conceived and developed the FDIA equation (F = (D^I) × A), the JITNA protocol specification (RFC-001), the 10-layer architecture, the 7-Genome system, and the RCT-7 process framework. The full platform — including bilingual infrastructure, enterprise SEO systems, 62 microservices, 41 production algorithms, and all published research — was built as a solo project in Bangkok, Thailand.

constitutional AI system design · FDIA equation and framework · JITNA protocol specification · enterprise AI governance · full-stack Next.js development · bilingual AI platform architecture · AI operating systems · Thailand enterprise AI deployment

Related articles

research
Constitutional AI vs RAG: Which Architecture Actually Prevents Hallucination?

RAG (Retrieval-Augmented Generation) reduces hallucination by grounding responses in retrieved documents. Constitutional AI prevents hallucination through architectural constraints. This comparison explains the fundamental difference, performance data, and when to use each approach — or both.

research
Delta Engine: How RCT Labs Achieves 74% Memory Compression and Sub-50ms Recall

The Delta Engine is the memory compression and recall system at the core of the RCT Ecosystem. By storing only state changes (deltas) rather than full state snapshots, it achieves 74% lossless compression and enables warm recall in under 50 milliseconds — reducing per-request cost to near zero for repeated patterns.
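The store-deltas-not-snapshots idea can be sketched in a few lines. This is a minimal dict-based illustration under stated assumptions, not the Delta Engine's actual storage format: the function names are hypothetical, and key deletions are omitted for brevity.

```python
def compute_delta(old: dict, new: dict) -> dict:
    """Record only the keys whose values changed or were added."""
    return {k: v for k, v in new.items() if old.get(k) != v}

def replay(initial: dict, deltas: list[dict]) -> dict:
    """Reconstruct the latest state by applying deltas in order."""
    state = dict(initial)
    for d in deltas:
        state.update(d)
    return state

initial = {"user": "a", "step": 1, "mode": "draft"}
v2 = {"user": "a", "step": 2, "mode": "draft"}
v3 = {"user": "a", "step": 3, "mode": "final"}

deltas = [compute_delta(initial, v2), compute_delta(v2, v3)]
assert replay(initial, deltas) == v3  # lossless reconstruction
```

Each delta here stores two keys instead of three, which is where the compression comes from: states that change little cost little to record.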

research
Evaluation Harnesses for Enterprise LLMs: Beyond Vibe-Testing

Most AI teams evaluate their LLM deployments by looking at outputs and deciding if they seem right. This is vibe-testing. Here is a rigorous alternative — how the RCT Ecosystem runs 4,849 automated tests across 8 evaluation levels to produce verifiable enterprise trust signals.
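The contrast with vibe-testing can be sketched as an ordinary assertion loop: every case is checked mechanically instead of eyeballed. The case format and exact-match pass criterion below are assumptions for illustration, not the RCT harness itself.

```python
def run_harness(model, cases: list[dict]) -> dict:
    """Run every case and count failures mechanically; no human judgment."""
    failures = [c for c in cases if model(c["input"]) != c["expected"]]
    return {"total": len(cases), "failed": len(failures), "failures": failures}

# Toy stand-in for a deployed model endpoint.
def toy_model(prompt: str) -> str:
    return prompt.upper()

cases = [
    {"input": "ok", "expected": "OK"},
    {"input": "ship", "expected": "SHIP"},
]
report = run_harness(toy_model, cases)
assert report == {"total": 2, "failed": 0, "failures": []}
```

A real harness layers many such suites (unit, integration, behavioral, and so on); the point is that "passes" means a countable, repeatable result rather than an impression.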

research
The FDIA Equation Explained: How F = (D^I) × A Powers Constitutional AI

FDIA is the mathematical foundation of RCT Labs — a four-variable equation that governs how AI systems produce trustworthy output. This article explains every component, why Intent acts as an exponent, and how FDIA achieves 0.92 accuracy vs the industry baseline of ~0.65.
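Taken at face value, the equation's shape can be sketched in a few lines. The variable names and the [0, 1] score ranges here are assumptions for illustration, not the published FDIA definitions:

```python
def fdia(data: float, intent: float, alignment: float) -> float:
    """Illustrative sketch of F = (D^I) x A.

    Assumes D (data quality), I (intent), and A (alignment) are
    scores in [0, 1]; these ranges are an assumption, not the
    published FDIA definitions.
    """
    return (data ** intent) * alignment

# Because Intent is an exponent, any data-quality shortfall (D < 1)
# is amplified as intent strength grows:
print(fdia(0.9, 1.0, 1.0))  # 0.9
print(fdia(0.9, 3.0, 1.0))  # ~0.729
```

This exponential sensitivity is what distinguishes an exponent from a fourth multiplicative factor: multiplying by intent would scale the shortfall linearly, while exponentiation compounds it.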

research
HexaCore: The 7-Model AI Infrastructure with Geopolitical Balance

HexaCore is the multi-model AI routing infrastructure at the heart of the RCT Ecosystem. This article explains how 7 AI models (3 Western + 3 Eastern + 1 Regional Thai) are selected, balanced, and verified to achieve 0.3% hallucination and 30-40% cost savings vs single-model deployments.

philosophy
The Intent Operating System: Why Enterprise AI Needs an Orchestration Layer

An LLM is not an operating system. It is an application. Enterprise AI needs what every enterprise software system needs: an orchestration layer that manages resources, enforces policies, routes tasks, and maintains state. This is what an Intent OS provides — and why the RCT Ecosystem is built as one.

research
PDPA and AI Compliance in Thailand: A 2026 Enterprise Guide

Thailand's PDPA (Personal Data Protection Act) imposes strict requirements on AI systems that process personal data. This guide explains the key obligations, common compliance gaps, and how a Constitutional AI framework like RCT Labs addresses PDPA requirements architecturally.

research
4,849 Tests, 0 Failures: How RCT Labs Verifies Everything

The RCT Ecosystem runs 4,849 automated tests — and passes all of them, with 0 failures and 0 errors. This article explains the 8-level test pyramid, the 62-microservice verification strategy, and why this testing discipline is a direct SEO and enterprise trust signal.

research
RCTDB v2.0: The 8-Dimensional Universal Memory Schema for AI Systems

RCTDB is the universal memory architecture of the RCT Ecosystem — an 8-dimensional schema designed for structured AI memory, full provenance tracking, and PDPA-compliant right-to-erasure. This article explains the schema, three storage zones, and why traditional vector databases fall short for enterprise AI.

philosophy
Reverse Component Thinking: The Engineering Philosophy Behind RCT Labs

Reverse Component Thinking (RCT) is the engineering methodology at the core of RCT Labs. Instead of building forward from features, RCT starts from the desired outcome and decomposes backwards to find the smallest verifiable parts. This article explains why this inversion changes what you build — and why it matters for AI safety.

research
SignedAI: Multi-LLM Consensus to Prevent Hallucination at Scale

SignedAI is the multi-model consensus verification system of the RCT Ecosystem. Instead of trusting a single AI model's output, SignedAI routes critical queries through 4-8 models simultaneously and requires formal agreement before any result is released — reducing hallucination by 95% vs single-model systems.
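The consensus idea — release a result only when independently produced answers agree — can be sketched like this. The quorum threshold and string normalization are assumptions for illustration; the actual SignedAI agreement protocol is formal and model-aware:

```python
from collections import Counter
from typing import Optional

def consensus(answers: list[str], quorum: float = 0.75) -> Optional[str]:
    """Return the most common answer only if it meets the quorum;
    otherwise withhold the result entirely (return None)."""
    if not answers:
        return None
    tally = Counter(a.strip().lower() for a in answers)
    value, count = tally.most_common(1)[0]
    return value if count / len(answers) >= quorum else None

assert consensus(["42", "42", "42", "41"]) == "42"   # 3/4 meets quorum
assert consensus(["42", "42", "41", "40"]) is None   # 2/4 withheld
```

Withholding on disagreement is the key property: a hallucination must independently appear in most of the panel to survive, which is far less likely than fooling a single model.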

news
Thai AI Platform Vision 2030: Building a 50-100 Billion THB National Infrastructure

RCT Labs was built with a specific long-term vision: become the constitutional AI operating standard for 1,000+ Thai enterprises by 2030, generating 50-100 billion THB in national economic value. This article explains the vision, the technical foundation that makes it credible, and the role of open standards in achieving it.

research
Verification vs Prompt Engineering: Why Constitutional AI Changes the Equation

Prompt engineering tells the model what to do. Constitutional AI verification ensures the system can only do what it is authorized to do. This article explains the fundamental difference — why verification is deterministic and prompt engineering is probabilistic — and what this means for enterprise AI deployments.

research
Constitutional AI for Thailand: A Practical Enterprise Deployment Guide

A practical guide for deploying constitutional AI in Thailand, combining global governance frameworks with local requirements around data control, bilingual operation, and enterprise trust.

research
Designing Low-Hallucination AI Systems: What Actually Reduces Failure Rates

Low-hallucination AI is not the result of one prompt trick. It comes from system design choices across retrieval, memory, verification, routing, evaluation, and operator review.

research
Enterprise AI Governance Playbook 2026: From Policy Principles to Operating Controls

A practical governance playbook for enterprise AI teams translating NIST AI RMF, OECD AI Principles, and the EU AI Act into operating controls, review loops, and deployment gates.

research
Enterprise AI Memory Systems Explained: What Teams Get Wrong About Context, Recall, and Trust

Enterprise AI memory is not just storing more tokens. It is about preserving relevant context, separating durable facts from temporary state, and making long-running AI behavior auditable.

research
How to Evaluate an Enterprise AI Platform Before Procurement

A buyer-side framework for evaluating enterprise AI platforms across governance, architecture, memory, routing, observability, and release transparency before procurement.

release
2026.03 Snapshot: Platform Reliability, Public Readiness, and Enterprise Launch Alignment

RCT Labs 2026.03 Snapshot aligns the public platform with the current benchmark baseline, strengthens enterprise readiness, and improves launch-critical SEO and content governance.

research
JITNA — Just In Time Nodal Assembly: The Communication Protocol for Agentic AI

JITNA (Just In Time Nodal Assembly) is the open agent-to-agent communication protocol of the RCT Ecosystem — think of it as the HTTP of Agentic AI. This article explains the RFC-001 specification, negotiation flow, and how JITNA differs from tool-calling APIs.

research
The RCT-7 Process: A Comprehensive Guide to Reverse Component Thinking

RCT-7 is the seven-step continuous improvement process at the heart of Reverse Component Thinking. This guide explains each step in detail — from decomposition through constitutional verification — and how it achieves systematic quality improvement across the entire AI platform.

philosophy
Understanding Intent Operations: The Foundation of RCT Labs

Intent operations form the core of RCT Labs' approach to AI. This article explains what they are and why they matter to the platform's design.