Every enterprise AI decision involves tradeoffs. Deploy faster and you pay more. Maximise control and you accept complexity. Optimise for cost and you sacrifice sovereignty. Real procurement decisions — AI model selection, infrastructure architecture, vendor evaluation — do not have a single "best" answer. They have a set of answers that are each best in a specific way.
Most organisations handle this badly. A committee meets, debates competing priorities, and eventually commits to one option because someone with authority prefers it. The mathematical structure of the tradeoff — which solutions actually dominate which others — is never formalised. The decision is made, but it is not proven.
MOIP (Multi-Objective Intent Planning) is the RCT Ecosystem's formal answer to this problem. It is an algorithm that takes any set of conflicting objectives, scores any set of candidate solutions against them, and returns a Pareto frontier — the mathematically complete set of non-dominated choices. From that frontier, a preference-weighted ranking surfaces a single recommended option, with full auditability of how that recommendation was derived.
The enterprise tradeoff problem, formalised
Consider a deployment decision with four objectives that always pull against each other:
| Objective | What it measures | Direction |
|---|---|---|
| Speed | Time to production | Maximise |
| Cost | Monthly operating spend | Minimise |
| Control | Infrastructure ownership | Maximise |
| Maintenance | Operational overhead | Minimise |
Three candidate solutions — Vercel, VPS + Docker, Kubernetes — each score differently across these four dimensions. No single option is best on all four simultaneously. Vercel is fastest and easiest to maintain, but most expensive and least controllable. VPS is cheapest and most controllable, but slowest and hardest to maintain. Kubernetes balances all four — but at medium values on every dimension.
The classical approach to this problem is to pick one metric, rank by it, and call it done. MOIP does something fundamentally different: it finds every solution that is not strictly worse than any other on all objectives simultaneously — the Pareto-optimal set — and then ranks within that set by stated preference weights.
What Pareto optimality actually means
A solution A is Pareto-optimal if there is no other solution B that is at least as good as A on every objective and strictly better on at least one.
Formally:
$$\text{dominates}(A, B) \iff \forall i: A_i \geq B_i \;\land\; \exists j: A_j > B_j$$
If no B dominates A, then A belongs to the Pareto frontier — the set of solutions that represent genuine, irreducible tradeoffs.
This matters because dominated solutions are never rational choices. If solution B is cheaper, faster, and more controllable than solution A, there is no principled reason to choose A. But if B is cheaper while A is more controllable — and neither dominates the other — then the choice between them is a genuine values question that depends on the organisation's weight on cost versus control.
MOIP's job is to eliminate the dominated options and surface the remaining tradeoff space with full mathematical clarity.
MOIP architecture: four core components
MOIP is implemented as a microservice within the RCT Ecosystem with four tightly coupled components:
1. multi_objective.py — Objective scoring engine
Takes a list of objectives and a list of candidate solutions, each scored per objective. Objectives carry:
- `weight` — the organisation's relative preference (e.g. `cost: 0.4`, `speed: 0.3`, `control: 0.2`, `maintenance: 0.1`)
- `direction` — `maximize` or `minimize`
- `target_value` — the ideal value used for normalisation
The scoring engine normalises all objectives to a common 0–10 scale before any comparison, ensuring that a 100ms latency improvement is not treated as equivalent to a $100 cost saving just because both are expressed as a difference of 100.
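As a concrete illustration, here is a minimal normalisation sketch. The actual scheme inside multi_objective.py is not published, so the target-ratio formula below is an assumption — it only demonstrates the property that matters: every objective lands on a shared 0–10 scale where higher always means better, regardless of direction.

```python
def normalise(raw: float, target: float, direction: str) -> float:
    """Map a raw objective value onto a 0-10 'goodness' scale.

    Assumed scheme (illustrative, not the production formula):
    target is the ideal value; scores are clamped ratios against it.
    For 'maximize', hitting or exceeding the target earns 10.
    For 'minimize', being at or below the target earns 10.
    """
    if direction == "maximize":
        score = 10.0 * min(raw / target, 1.0)
    else:  # "minimize": lower raw values are better
        score = 10.0 * min(target / raw, 1.0) if raw > 0 else 10.0
    return max(0.0, min(10.0, score))

# A throughput gain and a latency saving now compare on one scale:
print(normalise(400.0, 500.0, "maximize"))  # 8.0 (80% of the way to target)
print(normalise(250.0, 200.0, "minimize"))  # 8.0 (target/raw = 0.8)
```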
2. pareto_optimizer.py — Pareto frontier discovery
This is the mathematical core. For each pair of candidate solutions (A, B), the optimizer evaluates whether A dominates B:
```python
def dominates(sol_a: Solution, sol_b: Solution) -> bool:
    # True if A is at least as good as B on all objectives
    # and strictly better on at least one
    at_least_as_good = all(
        a >= b for a, b in zip(sol_a.normalised_scores, sol_b.normalised_scores)
    )
    strictly_better = any(
        a > b for a, b in zip(sol_a.normalised_scores, sol_b.normalised_scores)
    )
    return at_least_as_good and strictly_better


def pareto_frontier(solutions: list[Solution]) -> list[Solution]:
    # A solution is on the frontier iff no other solution dominates it
    return [s for s in solutions if not any(dominates(other, s) for other in solutions)]
```
The algorithm performs O(n²) pairwise comparisons across the candidate set, each costing O(m) for m objectives — an exact, deterministic method with no approximation. For enterprise planning contexts — typically 3–20 candidate solutions — this yields frontier results in under 100ms regardless of objective count.
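The functions above can be exercised end to end. This standalone sketch re-states them with a hypothetical `Solution` type and made-up scores (three objectives, already normalised so higher is better):

```python
from typing import NamedTuple

class Solution(NamedTuple):
    # Hypothetical stand-in for the service's Solution type
    name: str
    normalised_scores: tuple  # one 0-10 score per objective, higher = better

def dominates(sol_a: Solution, sol_b: Solution) -> bool:
    # A dominates B: at least as good everywhere, strictly better somewhere
    at_least_as_good = all(
        a >= b for a, b in zip(sol_a.normalised_scores, sol_b.normalised_scores)
    )
    strictly_better = any(
        a > b for a, b in zip(sol_a.normalised_scores, sol_b.normalised_scores)
    )
    return at_least_as_good and strictly_better

def pareto_frontier(solutions: list[Solution]) -> list[Solution]:
    return [s for s in solutions if not any(dominates(other, s) for other in solutions)]

candidates = [
    Solution("A", (8, 2, 6)),  # strong on objective 1
    Solution("B", (7, 9, 5)),  # strong on objective 2
    Solution("C", (6, 2, 5)),  # matched or beaten by A everywhere
]
frontier = pareto_frontier(candidates)
print([s.name for s in frontier])  # A and B survive; C is eliminated
```

Note that A and B represent a genuine tradeoff — each wins on a different objective — so both survive, while C is removed as an irrational choice.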
3. constraint_solver.py — Hard constraint enforcement
Not all objectives are tradeoffable. Some requirements are binary: a solution that fails a regulatory requirement or a security baseline is not a Pareto candidate — it is disqualified entirely before the frontier is computed.
The constraint solver runs before the optimizer and removes any solution that violates:
- Mandatory compliance requirements (e.g. PDPA data residency, SOC 2 certification)
- Hard latency ceilings (e.g. `p99 ≤ 500ms`)
- Non-negotiable cost floors
This keeps the Pareto analysis clean: the frontier only contains solutions that are genuinely viable.
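A minimal sketch of this pre-filtering step. The candidate attributes, constraint names, and thresholds below are illustrative, not the production policy values:

```python
from dataclasses import dataclass
from typing import Callable

@dataclass
class Candidate:
    name: str
    attributes: dict  # e.g. {"p99_ms": 420, "soc2": True}

# Hypothetical hard constraints: each is a predicate the candidate must pass.
constraints: list[Callable[[Candidate], bool]] = [
    lambda c: c.attributes.get("soc2", False),         # compliance gate
    lambda c: c.attributes.get("p99_ms", 1e9) <= 500,  # hard latency ceiling
]

def filter_viable(candidates: list[Candidate]) -> list[Candidate]:
    """Disqualify any candidate violating a hard constraint (O(n*m))."""
    return [c for c in candidates if all(check(c) for check in constraints)]

pool = [
    Candidate("vendor-a", {"p99_ms": 420, "soc2": True}),
    Candidate("vendor-b", {"p99_ms": 900, "soc2": True}),   # fails latency
    Candidate("vendor-c", {"p99_ms": 300, "soc2": False}),  # fails compliance
]
print([c.name for c in filter_viable(pool)])  # only vendor-a remains
```

Because this runs before frontier discovery, it also shrinks n for the O(n²) Pareto pass.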
4. ranking_engine.py — Preference-weighted final ranking
The Pareto frontier tells you which solutions are non-dominated. The ranking engine tells you which of those best matches the organisation's stated priorities.
It applies the objective weights from step 1 to compute a weighted sum score for each frontier solution, then ranks them. The output is:
- The full Pareto frontier (for transparency — all non-dominated options)
- A recommended solution (the highest weighted-sum score)
- Per-objective scores and the weight assumptions used
The ranking step runs in under 100ms on a 50-solution frontier. The total pipeline — scoring + frontier + ranking — completes in well under 200ms in practice.
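The weighted-sum ranking over a frontier can be sketched in a few lines. The option names, objective names, weights, and scores here are hypothetical:

```python
def weighted_score(scores: dict, weights: dict) -> float:
    """Weighted sum over normalised (0-10, higher-is-better) objective scores."""
    return sum(weights[obj] * scores[obj] for obj in weights)

# Hypothetical two-option frontier (both already non-dominated):
weights = {"accuracy": 0.5, "cost": 0.3, "latency": 0.2}
frontier = {
    "model-a": {"accuracy": 9, "cost": 4, "latency": 6},
    "model-b": {"accuracy": 6, "cost": 9, "latency": 8},
}

ranked = sorted(
    frontier,
    key=lambda name: weighted_score(frontier[name], weights),
    reverse=True,
)
print(ranked[0])  # the recommended option under these weights
```

Because the frontier is computed first, every entry in `ranked` is already a rational choice; the weights only decide which tradeoff the organisation prefers.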
How MOIP integrates with the RCT Kernel
MOIP does not make standalone decisions. It is a planning component within the FDIA equation pipeline — specifically, it sits at the Intent Classification stage where the kernel must resolve a multi-objective planning intent before routing to execution.
The integration path:
```
User/System Intent (multi-objective signal)
    ↓
Intent Classification (FDIA Tier 2)
    ↓
MOIP: POST /moip/plan
    → constraint_solver filters candidates
    → pareto_optimizer builds frontier
    → ranking_engine scores by weights
    ↓
Recommended solution + full frontier
    ↓
JITNA RFC-001 packet (objective trace embedded in metadata field)
    ↓
SignedAI: Ed25519 signature over the final recommendation
    ↓
Execution layer
```
Two aspects of this integration are architecturally significant:
JITNA protocol carries the objective trace. The MOIP output is not just a recommendation — it is a structured payload that includes the full Pareto frontier, the weight assumptions, and the per-objective scores. This is embedded in the JITNA packet's M (metadata) field, making the decision logic fully auditable at every stage downstream.
Ed25519 signs the recommendation. When the recommendation reaches SignedAI, it receives a cryptographic signature that includes the MOIP trace. This means the recommendation is tamper-evident: any modification to the weights, scores, or frontier after signing is detectable. For regulated industries — finance, healthcare, PDPA-governed data environments — this creates a chain of custody for planning decisions that satisfies audit requirements.
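The tamper-evidence property can be illustrated with a standard-library stand-in. SignedAI uses Ed25519 in production; since Ed25519 requires a third-party library, this sketch substitutes HMAC-SHA256 over a canonicalised record — the key, field names, and values are all hypothetical:

```python
# Tamper-evidence sketch (HMAC-SHA256 substituted for Ed25519, which needs
# a third-party library). The property shown is the same: any change to the
# signed MOIP record after signing is detectable at verification time.
import hashlib
import hmac
import json

key = b"demo-signing-key"  # hypothetical; real deployments use an Ed25519 keypair

record = {
    "recommendation": "rct-7",
    "weights": {"cost": 0.35, "latency": 0.25, "accuracy": 0.30, "sovereignty": 0.10},
    "frontier": ["gpt-4o", "typhoon", "rct-7"],
}
# Canonical serialisation so signer and verifier hash identical bytes
payload = json.dumps(record, sort_keys=True).encode()
signature = hmac.new(key, payload, hashlib.sha256).hexdigest()

# Verification fails if anyone edits the record after signing:
tampered = dict(record, recommendation="gpt-4o")
tampered_payload = json.dumps(tampered, sort_keys=True).encode()

print(hmac.compare_digest(
    signature, hmac.new(key, payload, hashlib.sha256).hexdigest()))           # True
print(hmac.compare_digest(
    signature, hmac.new(key, tampered_payload, hashlib.sha256).hexdigest()))  # False
```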
Enterprise use case: AI model selection
Consider an enterprise AI team evaluating five candidate LLMs for a Thai enterprise workload: GPT-4o, Claude Sonnet, Gemini Pro, Typhoon (Thai-optimised), and RCT-7.
The team defines four objectives with weights reflecting their priorities:
| Objective | Weight | Direction |
|---|---|---|
| Cost per 1M tokens | 0.35 | Minimise |
| Response latency (p95) | 0.25 | Minimise |
| Constitutional accuracy | 0.30 | Maximise |
| Data sovereignty (TH residency) | 0.10 | Maximise |
After scoring each model across these dimensions and applying MOIP:
- GPT-4o scores highest on accuracy but lowest on sovereignty — enters frontier
- Typhoon scores highest on sovereignty and lowest on cost — enters frontier
- RCT-7 scores second-highest on accuracy with full sovereignty compliance — enters frontier
- Two candidates are dominated and eliminated before ranking
The ranking engine applies the weights. With cost (0.35) and accuracy (0.30) carrying the most weight, RCT-7 surfaces as the recommended option — it is not cheapest or fastest in isolation, but it maximises the weighted objective function across all four dimensions.
The enterprise team receives:
- The Pareto frontier (3 non-dominated options)
- The recommended model with explanation
- The exact weights used — so if priorities shift, the team can re-run with different weights and get a new recommendation in under 200ms
This is not a soft recommendation. It is a mathematically proven result: given the stated objectives and weights, this choice is non-dominated and preference-optimal.
Enterprise use case: infrastructure deployment
The infrastructure deployment example from the MOIP architecture documentation illustrates the algorithm more concretely. Three deployment options are scored across the four objectives; speed, control, and maintenance are 0–10 scores, while cost is a raw relative spend figure that is normalised onto the shared 0–10 scale before comparison:

| Option | Speed (0.4) | Cost (0.3, min) | Control (0.2) | Maintenance (0.1) |
|---|---|---|---|---|
| Vercel | 10 | 20 | 3 | 10 |
| VPS + Docker | 4 | 5 | 9 | 4 |
| Kubernetes | 7 | 12 | 8 | 7 |
After normalising cost to a maximise scale (lower cost = higher score) and running the Pareto optimizer:
- Vercel: dominated by Kubernetes on control (3 < 8) and on normalised cost — eliminated
- VPS + Docker: not dominated by Kubernetes (higher control: 9 > 8; lower raw cost: 5 < 12) — frontier
- Kubernetes: not dominated by VPS (higher speed: 7 > 4; better maintenance score: 7 vs 4) — frontier
The Pareto frontier therefore contains two options: VPS + Docker and Kubernetes. With speed weighted at 0.4, the ranking engine returns Kubernetes as the recommended option — it wins on the highest-weighted objective and is non-dominated overall.
Performance benchmarks
MOIP v1 is integrated in RCT Ecosystem v5.4.5, verified against the full 4,849-test suite with 0 failures. The service achieves a score of 9.1/10 on the MOIP benchmark evaluation set, with 100% accuracy on Pareto frontier identification — all Pareto-optimal solutions are correctly included, and no dominated solutions are incorrectly included.
Key performance targets:
- Frontier discovery: O(n²), deterministic — no approximation, no randomness
- Ranking speed: under 100ms for a 50-solution candidate set
- Constraint evaluation: O(n·m) where n = candidates, m = constraint count — runs before frontier to reduce n
Reproducibility scope: The 9.1/10 score and 100% accuracy figures are measured on the MOIP internal benchmark set (RCT Ecosystem v5.4.5, April 2026). The benchmark covers deterministic Pareto correctness, constraint filtering precision, and ranking alignment with stated weights. Production performance varies with candidate set size and objective count — the <100ms figure is measured at n=50 solutions, 4 objectives on reference hardware. Independent benchmark methodology and caveats are published in the Benchmark Summary.
What MOIP changes for enterprise AI governance
AI decision governance has a documentation problem. Organisations make complex, high-stakes technology choices — model selection, vendor lock-in, infrastructure commitment — and the reasoning behind those choices is stored in meeting notes, email threads, or individual memory. When the decision is audited, the original reasoning is often unrecoverable.
MOIP produces an auditable, signed decision record. Every planning output includes:
- The candidate set and their scores
- The objectives and the weights used
- The Pareto frontier before and after constraint filtering
- The recommended solution and the mathematical basis for the recommendation
- An Ed25519 signature over the complete record
This means the reasoning is not just stored — it is cryptographically bound to the recommendation. Future audits, regulatory reviews, and post-decision analysis have access to the exact reasoning chain, not a reconstruction from memory.
For PDPA-governed data environments — where the legal basis for AI deployment decisions must be documented — this creates a compliance artefact that is both machine-readable and human-interpretable.
Frequently asked questions
Can MOIP handle more than four objectives? Yes. The algorithm is objective-count agnostic. However, as objective count grows, the Pareto frontier typically expands (more solutions become non-dominated), which can reduce the discriminating power of the recommendation. In practice, enterprise planning decisions rarely require more than six to eight objectives before the complexity exceeds what human decision-makers can meaningfully interpret.
What if the organisation's weight priorities change mid-process? MOIP is designed for re-evaluation. Because the scoring, frontier, and ranking are three separate steps, changing the weights requires only re-running the ranking engine on the existing frontier — not re-computing the Pareto analysis. This is sub-millisecond, making MOIP suitable for interactive "what-if" analysis where a team explores different priority configurations in real time.
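That what-if loop can be sketched directly — the frontier stays fixed while only the weighted ranking re-runs. The candidate names, objectives, and scores below are hypothetical:

```python
# Fixed Pareto frontier (normalised 0-10 scores, higher = better).
# Names and numbers are illustrative, not real benchmark data.
frontier = {
    "gpt-class":   {"cost": 3, "accuracy": 10, "sovereignty": 2},
    "local-class": {"cost": 9, "accuracy": 6,  "sovereignty": 10},
}

def recommend(weights: dict) -> str:
    """Re-rank the existing frontier under a new weight configuration."""
    score = lambda s: sum(weights[k] * s[k] for k in weights)
    return max(frontier, key=lambda name: score(frontier[name]))

# Accuracy-led priorities favour one option; cost-led priorities flip it:
print(recommend({"cost": 0.2, "accuracy": 0.7, "sovereignty": 0.1}))  # gpt-class
print(recommend({"cost": 0.5, "accuracy": 0.2, "sovereignty": 0.3}))  # local-class
```

No Pareto re-computation happens between the two calls — only the cheap ranking step, which is what makes interactive weight exploration practical.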
How does MOIP differ from a simple weighted sum? A weighted sum collapses all objectives into one number up front, which hides the tradeoff structure — and with a zero or near-zero weight on some objective, it can recommend a solution that another candidate matches or beats everywhere. If solution A scores 8 on cost and 2 on control while solution B scores 7 on cost and 9 on control, a cost-heavy weighted sum picks A without ever revealing that B offers a sevenfold control gain for one point of cost. MOIP computes the non-dominated set first and returns it alongside the weighted recommendation, so the final choice is always drawn from — and explained against — the rational choice set.
What is not disclosed in this article? The exact weight calibration methodology used in RCT's internal deployment decisions, the constitutional policy thresholds that trigger constraint disqualification in production, and the schema of the JITNA metadata field that carries the MOIP audit trace. These are implementation details that require additional enterprise context to interpret safely.
Related reading
- Understand the FDIA Equation — the constitutional pipeline that governs every MOIP intent classification cycle.
- Explore Dynamic AI Routing — the solution layer that uses MOIP outputs to route intent to the right model and execution path.
- Review SignedAI HexaCore to see how Ed25519 signing makes MOIP recommendations cryptographically auditable.
- Browse All 41 Algorithms for the full algorithm surface MOIP operates within.
- See Benchmark Summary for the production evidence and methodology behind RCT quality targets, including MOIP's 9.1/10 evaluation score.
What enterprise teams should retain from this briefing
MOIP (Multi-Objective Intent Planning) solves the problem every enterprise faces: multiple conflicting goals that cannot all be maximised simultaneously. Using Pareto frontier analysis, MOIP identifies decisions that are mathematically optimal — no alternative is better on all dimensions at once.
Ittirit Saengow
Primary author. Ittirit Saengow (อิทธิฤทธิ์ แซ่โง้ว) is the founder, sole developer, and primary author of RCT Labs — a constitutional AI operating system platform built independently from architecture through publication. He conceived and developed the FDIA equation (F = (D^I) × A), the JITNA protocol specification (RFC-001), the 10-layer architecture, the 7-Genome system, and the RCT-7 process framework. The full platform — including bilingual infrastructure, enterprise SEO systems, 62 microservices, 41 production algorithms, and all published research — was built as a solo project in Bangkok, Thailand.