When I first started building what would become the RCT Ecosystem, I faced a question that every AI architect eventually encounters: How do you guarantee that an AI system produces outputs that are both accurate and safe — every single time?
The answer is not "better prompts." It is not "more data." It is a mathematical framework that makes quality and safety structurally inseparable from the system's output.
That framework is FDIA.
$$F = (D^I) \times A$$
This single equation governs every decision in the RCT Ecosystem — from a simple chatbot response to a multi-model consensus on a critical enterprise question. In this article, I will explain every variable, show why the equation works, and demonstrate how it achieves 0.92 accuracy against an industry baseline of approximately 0.65.
What is FDIA? (The 50-Word Answer)
FDIA stands for Future = (Data ^ Intent) × Architect. It is a constitutional equation that calculates the quality of an AI system's output (F) based on three controlled inputs: the quality of Data (D), the clarity of Intent (I) as an exponential amplifier, and a human Architect gate (A) that can block any output.
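The 50-word definition translates directly into code. Here is a minimal sketch in Python; the function name `fdia_score` and its input validation are illustrative assumptions, not the actual interface of core/kernel/fdia.py:

```python
def fdia_score(d: float, i: float, a: float) -> float:
    """Compute F = (D^I) x A.

    D (data quality) and A (architect approval) are on a 0.0-1.0 scale;
    I (intent) typically runs from 1.0 (casual) to 2.0 (critical).
    """
    if not (0.0 <= d <= 1.0 and 0.0 <= a <= 1.0):
        raise ValueError("D and A must be in [0.0, 1.0]")
    return (d ** i) * a


# Higher intent punishes the same imperfect data:
print(fdia_score(0.8, 1.0, 1.0))            # 0.8
print(round(fdia_score(0.8, 2.0, 1.0), 2))  # 0.64
# The Architect kill switch zeroes everything:
print(fdia_score(0.95, 2.0, 0.0))           # 0.0
```

Note that F is never set directly: it falls out of the three controlled inputs, which is the point of the framework.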
The Four Variables — Deep Dive
F — Future (ผลลัพธ์)
F represents the quality and trustworthiness of the AI system's final output. It is not a single number — it is a composite score that encompasses accuracy, relevance, safety, and completeness.
In the RCT Ecosystem, F is measured continuously and benchmarked against production workloads. The current measured value across the platform is:
- Overall Accuracy: 96.1%
- Hallucination Rate: 0.3% (industry average: 12–15%)
- Factual Detection Intelligence Accuracy: 0.92 (industry baseline: ~0.65)
F is the dependent variable — you do not control it directly. You control it by tuning D, I, and A.
D — Data (ข้อมูล)
D measures the quality and completeness of the input data on a scale of 0.0 to 1.0.
This is not simply "how much data do you have." It is a composite measure of:
| Factor | Description | Weight |
|---|---|---|
| Completeness | Are all required fields present? | 25% |
| Freshness | Is the data current? (Decay function applied) | 20% |
| Provenance | Can the data source be verified? | 20% |
| Consistency | Does the data contradict other known facts? | 20% |
| Format Quality | Is the data structured and machine-readable? | 15% |
In the RCT Ecosystem, Data quality is validated at the first stage of the Intent Loop Engine — the RECEIVED → VALIDATED transition. If D falls below a configurable threshold (default: 0.3), the request is rejected before any LLM is invoked. This is the GIGO Protection layer (Garbage In = Garbage Out prevention).
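The weighted factors and the GIGO threshold can be sketched together. The weights below come from the table; the function names and the exact threshold wiring are illustrative assumptions, not the production interface:

```python
# Factor weights from the table above (they sum to 1.0).
D_WEIGHTS = {
    "completeness": 0.25,
    "freshness": 0.20,
    "provenance": 0.20,
    "consistency": 0.20,
    "format_quality": 0.15,
}

GIGO_THRESHOLD = 0.3  # default rejection threshold


def data_quality(factors: dict) -> float:
    """Weighted sum of per-factor scores, each on a 0.0-1.0 scale."""
    return sum(D_WEIGHTS[name] * factors[name] for name in D_WEIGHTS)


def gigo_gate(factors: dict) -> float:
    """Reject the request before any LLM call if D falls below threshold."""
    d = data_quality(factors)
    if d < GIGO_THRESHOLD:
        raise ValueError(f"GIGO Protection: D={d:.2f} is below {GIGO_THRESHOLD}")
    return d
```

With every factor at 0.5, D comes out at 0.5 and the request passes; with every factor at 0.1, D = 0.1 and the gate rejects it before any model is invoked.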
Key insight: Even if D is moderate (say 0.6), the system can still produce excellent results — if Intent is high enough.
I — Intent (ความตั้งใจ) — The Exponent
This is the most important variable in the equation, and the one that makes FDIA fundamentally different from every other AI quality framework.
Intent does not multiply Data — it exponentiates it.
$$D^I$$
This means:
| D Value | I = 1.0 (Low Intent) | I = 1.5 (Mid Intent) | I = 2.0 (High Intent) |
|---|---|---|---|
| 0.5 | 0.50 | 0.35 | 0.25 |
| 0.7 | 0.70 | 0.59 | 0.49 |
| 0.8 | 0.80 | 0.72 | 0.64 |
| 0.9 | 0.90 | 0.86 | 0.81 |
At first glance it looks as though higher Intent reduces the score. That is exactly right whenever D < 1.0 and I > 1.0: raising an imperfect base to a larger exponent shrinks it.

Enterprise implication: When Intent is high (I → 2), only high-quality Data produces high F values. Low-quality Data is exponentially punished.
This is by design. High Intent means the system demands more from its inputs. A medical diagnosis query (I = 2.0) requires near-perfect data quality (D > 0.9) to produce useful output. A casual chat query (I = 1.0) can work with moderate data quality.
In practice: The Intent variable is set by the Intent Loop Engine which analyzes the user's query through:
- Semantic classification — What domain is this? (medical, legal, casual, creative)
- Specificity detection — How precise is the request?
- Criticality assessment — What are the consequences of an incorrect answer?
The 7-State Pipeline processes every query through this classification:
RECEIVED → VALIDATED (FDIA) → MEMORY_CHECK → COMPUTING → VERIFYING (SignedAI) → COMMITTING (RCTDB) → COMPLETED
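The intent-setting step can be sketched as follows. The keyword matching is a toy stand-in for semantic classification, and the I values for the legal and creative domains are illustrative assumptions (the article pins down only medical ≈ 2.0 and casual ≈ 1.0):

```python
# Illustrative domain -> Intent mapping. Only the medical (2.0) and
# casual (1.0) anchors come from the article; the rest are assumptions.
INTENT_BY_DOMAIN = {
    "medical": 2.0,
    "legal": 1.8,
    "creative": 1.2,
    "casual": 1.0,
}


def classify_intent(query: str) -> float:
    """Toy keyword classifier standing in for semantic classification."""
    q = query.lower()
    if any(w in q for w in ("drug", "dose", "diagnosis")):
        return INTENT_BY_DOMAIN["medical"]
    if any(w in q for w in ("contract", "liability", "clause")):
        return INTENT_BY_DOMAIN["legal"]
    return INTENT_BY_DOMAIN["casual"]
```

A drug-interaction question lands at I = 2.0 and therefore demands near-perfect data; small talk lands at I = 1.0 and tolerates a moderate D.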
A — Architect (สถาปนิก) — The Kill Switch
A is the human-in-the-loop approval gate. It operates on a scale of 0.0 to 1.0.
- A = 1.0: Full approval — the system operates autonomously.
- A = 0.5: Partial approval — outputs require human review before release.
- A = 0.0: Complete block — no output is produced regardless of D and I.
This is the "constitutional" part of Constitutional AI. The Architect variable enforces a single, non-negotiable invariant:

Enterprise implication: When A = 0, F = 0. Always. No exceptions.
No matter how perfect your data is, no matter how clear your intent is — if the human architect says "stop," the system stops. This is implemented as a hard gate in core/kernel/fdia.py, not as a soft preference that can be overridden by the model.
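A minimal sketch of such a hard gate, assuming a simple class-based design; the names `ArchitectGate`, `emergency_stop`, and `release` are hypothetical, not the production API of core/kernel/fdia.py:

```python
class ArchitectGate:
    """Human-in-the-loop hard gate: A = 0.0 blocks all output.

    Illustrative sketch only; the production gate lives in core/kernel/fdia.py.
    """

    def __init__(self, approval: float = 1.0):
        self.set_approval(approval)

    def set_approval(self, approval: float) -> None:
        if not 0.0 <= approval <= 1.0:
            raise ValueError("A must be in [0.0, 1.0]")
        self.approval = approval

    def emergency_stop(self) -> None:
        """The kill switch: no output regardless of D and I."""
        self.approval = 0.0

    def release(self, d: float, i: float, output: str):
        """Return (output, F). A = 0 blocks; 0 < A < 1 flags for review."""
        if self.approval == 0.0:
            return None, 0.0  # hard block, not a soft preference
        f = (d ** i) * self.approval
        if self.approval < 1.0:
            return f"[HUMAN REVIEW REQUIRED] {output}", f
        return output, f
```

Because the A = 0 check happens before any scoring, no combination of D and I can route around it.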
Why this matters for enterprise: In regulated industries (healthcare, finance, legal), the ability to provably halt AI output is not a feature — it is a legal requirement. FDIA makes this guarantee mathematical, not behavioral.
FDIA vs Prompt Engineering: A Fundamental Difference
| Dimension | Prompt Engineering | FDIA Framework |
|---|---|---|
| Control mechanism | Text instructions to the model | Mathematical constraints on the system |
| Guarantee level | Probabilistic (model may ignore) | Deterministic (A = 0 always blocks) |
| Quality scaling | Depends on model capability | Governed by D, I, A independent of model |
| Measurability | Subjective evaluation | Quantitative score (0.0–1.0) |
| Adaptability | New prompts for new tasks | Same equation, different parameter values |
| Multi-model support | Per-model prompt tuning needed | Single framework across all models |
Prompt engineering tells the model what to do. FDIA tells the system what quality constraints to enforce regardless of which model is running.
The 7-State Intent Loop Pipeline
Every query in the RCT Ecosystem passes through a 7-state pipeline powered by FDIA:
State 1: RECEIVED
The raw user query enters the system. No processing has occurred yet.
State 2: VALIDATED (FDIA Gate)
The FDIAGatekeeper evaluates D, I, and A:
- D is calculated from input quality metrics
- I is determined by semantic classification of the query
- A is read from the current governance policy
If F falls below the minimum threshold, the query is rejected before any LLM call — saving cost and preventing hallucination.
State 3: MEMORY_CHECK
The Delta Engine checks RCTDB for cached responses with semantic similarity > 0.95. If found, the system returns a warm recall response in under 50 milliseconds, at near-zero cost.
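The memory check can be illustrated with a toy cosine-similarity lookup. Real embeddings and the RCTDB storage layer are out of scope here, and `memory_check` is a hypothetical name:

```python
import math

SIMILARITY_THRESHOLD = 0.95  # warm-recall cutoff described above


def cosine(a, b):
    """Cosine similarity between two embedding vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    norm = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(x * x for x in b))
    return dot / norm


def memory_check(query_vec, cache):
    """Return a cached response if any stored embedding is similar enough."""
    best_response, best_sim = None, 0.0
    for vec, response in cache:
        sim = cosine(query_vec, vec)
        if sim > best_sim:
            best_response, best_sim = response, sim
    if best_sim > SIMILARITY_THRESHOLD:
        return best_response  # warm recall: no LLM call needed
    return None  # cache miss: proceed to COMPUTING
```

A near-duplicate query clears the 0.95 cutoff and returns the cached answer; an unrelated query falls through to the COMPUTING state.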
State 4: COMPUTING
The HexaCore router selects the optimal model(s) from 7 available AI models based on task type, complexity, and cost priority. The system uses geopolitically balanced selection (3 Western + 3 Eastern + 1 Regional Thai).
State 5: VERIFYING (SignedAI Consensus)
For production-critical queries (SignedAI Tier 4/6/8), multiple models independently process the query. Results are compared using:
- Majority voting (>50% agreement)
- Weighted voting (confidence-scored)
- Jaccard similarity (agreement measurement)
Consensus threshold depends on tier: Tier 4 = 50%, Tier 6 = 67%, Tier 8 = 75%.
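The three comparison methods can be sketched as follows. Treating the tier thresholds as inclusive (>=) cutoffs is an assumption of this sketch, as are the function names:

```python
from collections import Counter


def jaccard(a: set, b: set) -> float:
    """Set-overlap agreement between two tokenized answers."""
    union = a | b
    return len(a & b) / len(union) if union else 1.0


# Consensus thresholds per SignedAI tier, from the article.
TIER_THRESHOLDS = {4: 0.50, 6: 0.67, 8: 0.75}


def majority_consensus(answers, tier):
    """Accept the most common answer if its vote share clears the tier cutoff."""
    answer, votes = Counter(answers).most_common(1)[0]
    if votes / len(answers) >= TIER_THRESHOLDS[tier]:
        return answer
    return None
```

Three of four models agreeing (75%) clears Tier 8; two of four (50%) clears Tier 4 but not Tier 6.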
State 6: COMMITTING (RCTDB)
The verified response and its full provenance chain are stored in RCTDB — the 8-dimensional universal memory schema. This includes: the original query, D/I/A scores, model selections, consensus results, and timestamps.
State 7: COMPLETED
The response is delivered to the user. The Delta Engine stores only the differential state change (74% compression vs full-state storage).
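Differential state storage can be illustrated with a toy dictionary diff. The real Delta Engine schema is not shown here, and the 74% figure is a platform measurement rather than a property of this sketch:

```python
def delta(prev: dict, curr: dict) -> dict:
    """Keep only keys whose values changed (or are new) since the last state."""
    return {k: v for k, v in curr.items() if prev.get(k) != v}


def compression_ratio(prev: dict, curr: dict) -> float:
    """Fraction of full-state keys that the delta avoids storing."""
    return 1.0 - len(delta(prev, curr)) / len(curr) if curr else 0.0
```

If one key out of four changed between states, the delta stores a single entry and skips the other 75%.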
Performance: 0.92 Accuracy — How We Measured It
The FDIA accuracy score of 0.92 is measured against a benchmark suite designed to test:
- Factual correctness — Does the system output verifiable facts?
- Hallucination detection — Does the system catch when models fabricate information?
- Intent preservation — Does the final output match the original user intent?
- Safety compliance — Are constitutional constraints respected?
The industry baseline of ~0.65 comes from standard LLM accuracy measurements across comparable enterprise workloads, where models operate without FDIA-style gating.
| Metric | FDIA Score | Industry Baseline | Improvement |
|---|---|---|---|
| Factual accuracy | 0.92 | ~0.65 | +41.5% |
| Hallucination rate | 0.3% | 12–15% | ~98% reduction |
| Intent preservation | 94% | ~70% | +34% |
| Safety compliance | 100% | Variable | Guaranteed |
The source implementation can be found in core/kernel/fdia.py (NPC scoring) and rct_platform/microservices/intent-loop/loop_engine.py (FDIAGatekeeper class).
Enterprise Use Cases
Healthcare: Drug Interaction Check
- D = 0.95 (structured pharmaceutical database)
- I = 2.0 (medical-critical intent)
- A = 0.7 (pharmacist review required before patient delivery)
- F = (0.95^2.0) × 0.7 = 0.9025 × 0.7 = 0.632
The system produces the interaction analysis but flags it for pharmacist review. If A were 0.0, no output would be produced at all.
Customer Service: Product FAQ
- D = 0.7 (moderately structured knowledge base)
- I = 1.0 (routine informational query)
- A = 1.0 (full autonomous operation)
- F = (0.7^1.0) × 1.0 = 0.70
The system operates autonomously for routine queries — saving human agents for complex cases.
Legal: Contract Review
- D = 0.85 (well-structured legal documents)
- I = 1.8 (high-criticality legal context)
- A = 0.3 (heavy human oversight required)
- F = (0.85^1.8) × 0.3 ≈ 0.746 × 0.3 ≈ 0.224
The low F score means the system produces analysis but under heavy supervision. This is correct behavior — legal AI should assist, not replace, human judgment.
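The three scenarios can be verified with a few lines of Python (rounded to three decimals):

```python
def fdia(d, i, a):
    """F = (D^I) x A, as defined above."""
    return (d ** i) * a


use_cases = {
    "healthcare": (0.95, 2.0, 0.7),
    "customer_service": (0.7, 1.0, 1.0),
    "legal": (0.85, 1.8, 0.3),
}

for name, (d, i, a) in use_cases.items():
    print(f"{name}: F = {fdia(d, i, a):.3f}")
# healthcare: F = 0.632
# customer_service: F = 0.700
# legal: F = 0.224
```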
Frequently Asked Questions
What does FDIA stand for?
FDIA stands for Future = (Data ^ Intent) × Architect. It is the constitutional equation governing AI output quality in the RCT Ecosystem.
Who created the FDIA equation?
FDIA was conceived and developed by Ittirit Saengow (อิทธิฤทธิ์ แซ่โง้ว), founder and sole developer of RCT Labs, as part of the RCT (Reverse Component Thinking) Ecosystem.
How is FDIA different from RLHF (Reinforcement Learning from Human Feedback)?
RLHF trains models to prefer certain outputs. FDIA does not train models — it constrains the system around any model. RLHF is probabilistic; FDIA's Architect gate (A=0) is deterministic.
Can FDIA work with any LLM?
Yes. FDIA operates at the system level, not the model level. The RCT Ecosystem currently uses 7 HexaCore models (Claude Opus, Kimi K2.5, MiniMax, Gemini Flash, Grok, DeepSeek, Typhoon v2) — all governed by the same FDIA equation.
What happens when A = 0?
All output is blocked. F = 0 regardless of D and I. This is the constitutional guarantee — no AI operates without human authorization.
Why is Intent an exponent instead of a multiplier?
Because intent should have a non-linear effect on how much the system trusts its data. With good data and a routine query, results are solid: D = 0.8 at I = 1.0 gives F = 0.80. Raise the stakes with the same data and the score drops sharply: D = 0.8 at I = 2.0 gives F = 0.64. Only near-perfect data survives high intent: D = 0.95 at I = 2.0 gives F ≈ 0.90. The exponent penalizes imperfect data precisely when the query is critical, which captures the real-world behavior that high-stakes queries require both clear intent AND good data.
Where is FDIA implemented in the codebase?
The core implementation is in core/kernel/fdia.py with the gatekeeper integration in rct_platform/microservices/intent-loop/loop_engine.py.
Summary
FDIA is not a feature of the RCT Ecosystem — it is the foundation. Every query, every model selection, every consensus vote, and every memory write passes through the FDIA framework.
The equation F = (D^I) × A achieves three goals simultaneously:
- Quality: Intent-as-exponent ensures that high-stakes queries demand proportionally higher data quality.
- Safety: The Architect gate (A) provides a mathematical guarantee that no AI output is produced without authorization.
- Measurability: Every variable is quantified on a 0.0–1.0 scale, enabling continuous monitoring and improvement.
With a measured accuracy of 0.92 against an industry baseline of ~0.65, FDIA demonstrates that constitutional AI is not just a philosophy — it is an engineering discipline with measurable results.
Related Resources
- 📊 FDIA Entity Page — structured definition with JSON-LD schema
- 🔬 Benchmark Summary — full methodology for 0.92 accuracy measurement
- ⚖️ Constitutional AI vs RAG — architectural comparison
- 🤖 RCT Labs vs LLM APIs — why bare APIs cannot match constitutional governance
This article was written by Ittirit Saengow, founder and sole developer of RCT Labs. FDIA is part of the RCT (Reverse Component Thinking) Ecosystem — a constitutional AI operating system built independently in Bangkok, Thailand.
Ittirit Saengow
Primary authorIttirit Saengow (อิทธิฤทธิ์ แซ่โง้ว) is the founder, sole developer, and primary author of RCT Labs — a constitutional AI operating system platform built independently from architecture through publication. He conceived and developed the FDIA equation (F = (D^I) × A), the JITNA protocol specification (RFC-001), the 10-layer architecture, the 7-Genome system, and the RCT-7 process framework. The full platform — including bilingual infrastructure, enterprise SEO systems, 62 microservices, 41 production algorithms, and all published research — was built as a solo project in Bangkok, Thailand.