
How It Works

From your natural-language request to a provably optimal result in milliseconds

Four steps to optimal

The system transforms your intent into a dominance-aware decision

1

Parse natural language → constraints + criteria

You describe what you need in plain English: "Under $1,000, for software development, must have at least 16GB RAM, 512GB SSD, under 1.5kg, 2 USB-C ports, Wi-Fi 6E, deliver by Nov 1. I prefer gray or black—gray is better than black. I prefer Intel to AMD processor, and I wouldn't like a refurbished laptop."

Mandatory Constraints

All candidates must satisfy these

max price: $1,000
min RAM: 16 GB
min storage: 512 GB
max weight: 1.5 kg
USB-C ports: 2
Wi-Fi: 6E
delivery by: 2025-11-01

Preference Hierarchy

Your explicit preferences organized by priority

PREFERENCE LAYER (PARTIAL ORDER)
Gray > Black
Intel > AMD
New > Refurbished

These preferences form a partial order (a DAG): they are independent comparisons, not strict priority levels

Implicit Domain Preferences

Universal domain preferences used to break ties between equally optimal results (lowest priority)

Higher Rating (tie-breaker)
More RAM (tie-breaker)
More Storage (tie-breaker)
Longer Battery Life (tie-breaker)
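
For illustration, the parsed request might be represented roughly like the Python sketch below. The field names and structure are placeholders chosen for this page, not the system's actual schema.

# Illustrative sketch only -- field names are assumptions, not the real schema.
parsed_request = {
    "constraints": {                      # mandatory: every candidate must satisfy all
        "max_price_usd": 1000,
        "min_ram_gb": 16,
        "min_storage_gb": 512,
        "max_weight_kg": 1.5,
        "min_usb_c_ports": 2,
        "wifi": "6E",
        "deliver_by": "2025-11-01",
    },
    "preferences": [                      # explicit partial order: independent "better-than" edges
        ("color", "gray", "black"),
        ("cpu_vendor", "intel", "amd"),
        ("condition", "new", "refurbished"),
    ],
    "tie_breakers": [                     # implicit domain preferences, lowest priority
        "higher_rating", "more_ram", "more_storage", "longer_battery_life",
    ],
}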
2

Filter by constraints (fast)

Eliminate candidates that fail any hard constraint. Use indexed lookups, bitmap filters, or Boolean formulas for millisecond filtering even on thousands of products.

Example: From 3,000 to 47

• Started with 3,000 laptops

• Failed RAM requirement: 1,200 eliminated

• Failed SSD requirement: 800 eliminated

• Failed weight threshold: 700 eliminated

• Failed seller rating: 253 eliminated

→ 47 candidates remain for scoring
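
As a sketch of the filtering logic only (the production path would use indexed lookups or bitmap filters rather than a Python loop), assuming candidates are plain records with the fields shown; the delivery dates below are made up to keep the example runnable:

CONSTRAINTS = {
    "max_price_usd": 1000, "min_ram_gb": 16, "min_storage_gb": 512,
    "max_weight_kg": 1.5, "min_usb_c_ports": 2, "wifi": "6E",
    "deliver_by": "2025-11-01",
}

def passes_constraints(laptop: dict) -> bool:
    """True only if the candidate satisfies every hard constraint."""
    return (laptop["price_usd"] <= CONSTRAINTS["max_price_usd"]
            and laptop["ram_gb"] >= CONSTRAINTS["min_ram_gb"]
            and laptop["storage_gb"] >= CONSTRAINTS["min_storage_gb"]
            and laptop["weight_kg"] <= CONSTRAINTS["max_weight_kg"]
            and laptop["usb_c_ports"] >= CONSTRAINTS["min_usb_c_ports"]
            and laptop["wifi"] == CONSTRAINTS["wifi"]
            and laptop["deliver_by"] <= CONSTRAINTS["deliver_by"])  # ISO dates compare as strings

catalog = [
    {"name": "Dell XPS 13 Plus", "price_usd": 949,  "ram_gb": 16, "storage_gb": 512,
     "weight_kg": 1.24, "usb_c_ports": 2, "wifi": "6E", "deliver_by": "2025-10-28"},
    {"name": "ASUS ZenBook",     "price_usd": 1099, "ram_gb": 16, "storage_gb": 512,
     "weight_kg": 1.29, "usb_c_ports": 2, "wifi": "6E", "deliver_by": "2025-10-30"},
]
survivors = [c for c in catalog if passes_constraints(c)]   # drops the ASUS (price > $1,000)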

3

Build a DAG (partial order) with dominance weights

Organize criteria into levels where higher-priority wins can't be gamed by stacking lower-level features. Assign level-dominant weights or use lexicographic vectors.

How the DAG Creates Dominance

The partial order determines which laptops are optimal

Preference Comparisons

The system evaluates each laptop against the partial order preferences. A laptop satisfying more preference comparisons dominates one satisfying fewer.

Gray is preferred over Black (meets 'gray > black')
Intel is preferred over AMD (meets 'intel > amd')
New is preferred over Refurbished (meets 'new > refurbished')
Implicit preferences (rating, battery, etc.) are only used to break ties

✓ Gaming-Resistant Property

A laptop matching more DAG preferences (Gray + Intel + New = 3/3 preferences met) will always dominate one matching fewer preferences (Black + Intel + New = 2/3 preferences met), even if the latter excels on implicit factors like price or battery life. The partial order ensures explicit preferences cannot be overridden by implicit tie-breakers.

How It Differs From Traditional Ranking

Traditional ML ranking might assign fixed feature weights, e.g. Price (30%), Battery (25%), Rating (20%), Color (15%), Processor (10%). Under such a scheme, a cheap laptop with great battery life and a high rating could beat your preferred Gray + Intel choice. With a dominance-based DAG, explicit preferences (color, processor, condition) always dominate implicit tie-breakers.
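
The exact weighting scheme is part of the patented method and isn't reproduced here. The sketch below shows one standard way to obtain the dominance property described above: compute weights bottom-up so that any single criterion at a given level outweighs all criteria at lower levels combined. Treat it as an illustration of the property, not as the system's actual formula.

def level_dominant_weights(levels):
    """levels: lists of criteria, highest priority first.
    Returns per-criterion weights such that satisfying one criterion at a level
    always outscores satisfying every criterion at all lower levels."""
    weights, lower_total = {}, 0
    for level in reversed(levels):        # start from the lowest-priority level
        w = lower_total + 1               # strictly beats everything below
        for criterion in level:
            weights[criterion] = w
        lower_total += w * len(level)
    return weights

levels = [
    ["gray>black", "intel>amd", "new>refurbished"],                    # explicit DAG preferences
    ["higher_rating", "more_ram", "more_storage", "longer_battery"],   # implicit tie-breakers
]
print(level_dominant_weights(levels))
# Tie-breakers get weight 1 each (4 in total); each explicit preference gets 5,
# so no combination of tie-breakers can outweigh a single explicit preference.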

4

Score survivors; return optimal or co-optimal results

Apply the preference DAG to rank surviving candidates. Return the single optimal choice, or all co-optimal results (if multiple items tie at the top) with a level-by-level explanation.

🏆

Optimal Result

Dell XPS 13 Plus - Gray, Intel i7-1360P, New

Price: $949
RAM: 16GB
Storage: 512GB SSD
Weight: 1.24kg
USB-C: 2 ports
Wi-Fi: 6E
Battery: 12h
Rating: 4.7/5

✓ Mandatory constraints:

Passed all: price≤$1000, RAM≥16GB, storage≥512GB, weight≤1.5kg, 2×USB-C, Wi-Fi 6E, delivers by Nov 1

✓ DAG Preferences:

Gray color (meets 'gray > black')
Intel i7-1360P (meets 'intel > amd')
New condition (meets 'new > refurbished')

🏆 Why optimal:

This laptop satisfies all mandatory constraints AND all three DAG preference comparisons (3/3 met). No other candidate dominates it on the partial order. It represents the unique optimal choice.

⚠️

Non-Optimal Result #1

HP Spectre x360 - Black, Intel i7-1355U, New

Price: $929
RAM: 16GB
Storage: 512GB SSD
Weight: 1.36kg
USB-C: 2 ports
Wi-Fi: 6E
Battery: 14h
Rating: 4.8/5

✓ Mandatory constraints:

Passed all constraints

DAG Preferences:

Black color (fails 'gray > black')
Intel i7-1355U (meets 'intel > amd')
New condition (meets 'new > refurbished')

Why NOT optimal:

Even though this laptop is cheaper ($929 vs $949) and has better battery life (14h vs 12h), it's dominated by the optimal choice because it meets only 2/3 DAG preferences (black instead of gray). Implicit advantages (price, battery) cannot overcome explicit preference differences in the partial order.

Non-Optimal Result #2

Lenovo ThinkPad X1 - Gray, AMD Ryzen 7, New

Price: $899
RAM: 16GB
Storage: 512GB SSD
Weight: 1.19kg
USB-C: 2 ports
Wi-Fi: 6E
Battery: 15h
Rating: 4.9/5

✓ Mandatory constraints:

Passed all constraints

DAG Preferences:

Gray color (meets 'gray > black')
AMD Ryzen 7 (fails 'intel > amd')
New condition (meets 'new > refurbished')

Why NOT optimal:

Despite being cheaper ($899), lighter (1.19kg), having better battery (15h), and higher rating (4.9/5), this laptop is dominated by the optimal choice because it meets only 2/3 DAG preferences (AMD instead of Intel). Gaming-resistant: you can't "win" by stacking implicit features (price, weight, battery) if you fail explicit DAG preferences.

🚫

Excluded Result

ASUS ZenBook - Silver, Intel i7-1355U, New

Price: $1,099
RAM: 16GB
Storage: 512GB SSD
Weight: 1.29kg
USB-C: 2 ports
Wi-Fi: 6E
Battery: 13h
Rating: 4.6/5

✗ Mandatory constraints:

Failed: price=$1,099 > $1,000 maximum

Why EXCLUDED:

This laptop violates a hard constraint (price > $1,000), so it's eliminated before preference evaluation. Hard constraints are non-negotiable filters applied first.
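
Putting the example together: a toy scorer over the three example laptops above that passed the constraints, using the illustrative level-dominant weights from step 3 (each explicit preference weighted 5; tie-breakers are omitted because no two candidates tie on the explicit layer). This is a didactic sketch, not the production scoring code.

SURVIVORS = [
    {"name": "Dell XPS 13 Plus",   "color": "gray",  "cpu": "intel", "condition": "new"},
    {"name": "HP Spectre x360",    "color": "black", "cpu": "intel", "condition": "new"},
    {"name": "Lenovo ThinkPad X1", "color": "gray",  "cpu": "amd",   "condition": "new"},
]

def explicit_score(laptop: dict) -> int:
    """Weighted count of satisfied explicit preference comparisons."""
    met = [laptop["color"] == "gray",
           laptop["cpu"] == "intel",
           laptop["condition"] == "new"]
    return 5 * sum(met)

best = max(explicit_score(l) for l in SURVIVORS)
co_optimal = [l["name"] for l in SURVIVORS if explicit_score(l) == best]
print(co_optimal)   # ['Dell XPS 13 Plus'] -- a unique optimum; ties would all be returned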

Why This Is Not a Trivial Problem

Finding optimal solutions with preferences requires solving fundamental computational challenges

Traditional Approaches Require Expensive Dominance Testing

Without our invention, systems must compare every solution against all others

When a system "sees" a potential solution, traditional approaches cannot determine if it's optimal without comparing it against all previously found solutions via computationally expensive dominance testing procedures.

The Dominance Testing Problem:

  • For each solution, perform a dominance test with all other solutions
  • Computational cost grows quadratically (O(N²)) with the number of solutions
  • With 1,000 solutions: ~1,000,000 comparisons needed
  • With 10,000 solutions: ~100,000,000 comparisons needed
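
For contrast, here is what naive pairwise dominance testing looks like in sketch form, using standard Pareto dominance over per-criterion scores. The nested scan over all pairs is where the quadratic cost comes from, and it only works once every solution is in hand.

def dominates(a, b):
    """Standard Pareto dominance: a is at least as good as b on every criterion
    and strictly better on at least one (higher = better here)."""
    return all(x >= y for x, y in zip(a, b)) and any(x > y for x, y in zip(a, b))

def pareto_front(solutions):
    """Naive O(N^2 * V) extraction: every solution is tested against every other."""
    return [s for s in solutions
            if not any(dominates(t, s) for t in solutions if t is not s)]

print(pareto_front([(3, 1), (2, 2), (1, 1)]))   # [(3, 1), (2, 2)]; (1, 1) is dominated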

Worse still, such approaches require that all solutions are already available before the computation of optimal results can even begin. This is impractical in real-world, industrial use cases dealing with:

Huge Volumes: millions of products from multiple sources
Progressive Arrival: solutions appear incrementally over time
Multiple Data Sources: parallel queries to different databases/APIs

Trivial Scoring Methods Are Mathematically Wrong

Simple linear scoring leads to falsely "optimal" solutions

A naive approach might try to compute weights for DAG nodes by simply summing them up following the partial order, or using linearly increasing weights from top to bottom preferences. This is mathematically incorrect and leads to falsely "optimal" solutions.

Example: Why Linear Weights Fail

Consider a DAG with preferences at different levels. A trivial system using simple additive or linear weights will produce incorrect results where lower-level preferences can overwhelm higher-level ones.

The Problem:

With linear weighting, a solution satisfying multiple lower-priority preferences can incorrectly outscore a solution satisfying a single higher-priority preference. This violates the dominance hierarchy and produces falsely "optimal" solutions.

This trivial approach only works correctly in the special case of a simple linear chain of preferences, but fails for general DAGs with multiple roots and parallel branches.
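
A tiny numeric illustration of the failure mode, assuming "linearly increasing" weights of 3, 2 and 1 from the top preference down (names are hypothetical):

WEIGHTS = {"top_preference": 3, "mid_preference": 2, "low_preference": 1}  # linear weights

def linear_score(satisfied):
    return sum(WEIGHTS[p] for p in satisfied)

a = {"top_preference"}                      # satisfies only the highest-priority preference
b = {"mid_preference", "low_preference"}    # stacks two lower-priority preferences

print(linear_score(a), linear_score(b))     # 3 3 -- the stacked lower preferences already
                                            # tie with the top preference; one more low-level
                                            # node and they would overtake it outright

With genuinely level-dominant weights (for example, top = 4 > 2 + 1), the higher-priority solution could never be overtaken this way.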

How Our Invention Solves These Problems

Our patent-pending method enables progressive scoring without dominance testing

Key Innovation: Direct Optimality Scoring

Score each solution independently using only the DAG structure

When the system "sees" a potential solution, it can produce an optimality score calculated using only the current solution and the DAG — no comparison with other solutions needed!
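
The scoring function itself is the proprietary part, so the sketch below treats it as a black box (the score callable) and only illustrates the consequence stated here: because each score depends on the solution and the DAG alone, a ranker can consume a stream and maintain the running co-optimal set without pairwise dominance tests and without rescoring earlier arrivals. Names such as fetch_results and dag_optimality_score are placeholders.

from typing import Callable, Iterable

def rank_stream(solutions: Iterable[dict], score: Callable[[dict], float]):
    """Consume solutions as they arrive, keeping the current co-optimal set.
    Each solution is scored exactly once, independently of all others."""
    best_score, best = float("-inf"), []
    for s in solutions:
        v = score(s)                  # uses only this solution and the DAG
        if v > best_score:
            best_score, best = v, [s]
        elif v == best_score:
            best.append(s)            # equal score => equivalent w.r.t. preferences
    return best_score, best

# Hypothetical usage, e.g. fed from a generator over paginated API results:
# top_score, top = rank_stream(fetch_results(), score=dag_optimality_score)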

✅ Properties Guaranteed by Our System:

🏆 Pareto-Optimality

Solutions with maximum score are guaranteed to be Pareto-optimal with respect to the DAG preferences. No other solution dominates them.

⚖️ Equivalence Property

Solutions with equal scores are truly equivalent with respect to preferences — they satisfy the same preference comparisons.

⚡ Progressive Computation

Solutions can arrive progressively (from multiple data sources, streaming APIs, etc.) and be scored immediately without waiting for all solutions.

♻️ No Recalculation Needed

When new solutions arrive, their scores remain valid. No need to recalculate or restart the computation from scratch.

📊 Dramatic Computational Improvement

❌ Traditional Approach
O(N² × V)

Quadratic growth: must compare each solution against all others (N²) for each preference criterion (V)

✅ Our System
O(N × V + N log N)

Near-linear growth: score each solution once (N × V), then sort results (N log N)

Real-World Impact:
• 1,000 solutions, 10 preferences: 10M ops → ~10K ops (1,000× faster)
• 10,000 solutions, 20 preferences: 2B ops → ~200K ops (10,000× faster)
• 100,000 solutions, 50 preferences: 500B ops → ~5.8M ops (86,000× faster)

⚙️ Parallel & Distributed Processing

Because each solution's optimality score is independent and stable, our system supports:

  • Parallel scoring of solutions across multiple CPU cores
  • Distributed computation using "partial rankers" for solution subsets
  • Incremental updates as new solutions arrive from different data sources
  • Real-time ranking without reprocessing the entire solution set
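
Under the same assumption (scores are independent and stable), the "partial rankers" idea above can be sketched as follows: each worker scores its own shard, and combining the shards needs only the scores. The names here are placeholders, not the product's API.

def score_subset(subset, score_one):
    """A 'partial ranker': scores its own shard, independently of all other shards
    (can run on another core, process, or service)."""
    return [(score_one(s), s) for s in subset]

def merge(partials):
    """Combining partial results needs only the scores -- no cross-shard dominance
    tests and no rescoring when another partial ranker reports in."""
    scored = [pair for part in partials for pair in part]
    best = max(score for score, _ in scored)
    return [s for score, s in scored if score == best]

# Toy demonstration with a stand-in score function over two shards:
shards = [[{"id": 1, "value": 5}, {"id": 2, "value": 3}], [{"id": 3, "value": 5}]]
partials = [score_subset(shard, lambda s: s["value"]) for shard in shards]
print(merge(partials))   # items 1 and 3 -- the co-optimal set across both shards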
Millisecond Response Times

Find optimal solutions from thousands of candidates in real-time, enabling live search, auto-suggest, and interactive user experiences

Key properties

Provably optimal

Dominance weights ensure higher-level wins can't be overturned by gaming lower levels. Patent-backed methodology (US 2025/0298806 A1).

Millisecond performance

Constraint filters and efficient scoring return results in real-time. Fast enough for live search, auto-suggest, and A/B testing.

Explainable

Every result includes a level-by-level rationale. Users and auditors can see exactly why each option won or lost.

Gaming-resistant

Sellers can't boost rank by adding irrelevant features. Only improvements at the right dominance level matter.

See it in action

Try our interactive demos or explore use cases for your industry