Patent Application

US 2025/0298806 (United States and other countries)

Method and system for optimization and personalization of search results according to preferences and mandatory constraints

Inventor
Emanuele Di Rosa, PhD
Published
September 25, 2025

Comparison with State-of-the-Art Methods

See how our Preference-DAG Optimality Scoring compares to existing approaches across key dimensions.

| Approach | Fast at Web Scale? | Gaming-Resistant? | Explainable? | Provably Optimal? | Streaming-Compatible?¹ | Algorithmic Complexity² |
|---|---|---|---|---|---|---|
| Traditional ML Ranking (Google, Amazon, Bing) | ✅ Yes | ⚠️ SEO arms race | ⚠️ Opaque | ❌ No proofs | ⚠️ System-level only | O(N log N), fixed candidate set |
| Recommendation Engines (Netflix, Spotify, Amazon Recs) | ✅ Yes | ⚠️ Fraud detection | ⚠️ Black-box | ❌ Engagement proxy | ⚠️ Session-level only | O(N), retrieval, not optimality |
| MCDA / Pareto / Skyline (AHP, TOPSIS, ELECTRE) | ⚠️ Limited scale | ⚠️ Gameable | ✅ Yes | ✅ Pareto-optimal | ❌ No | O(N²×V), dominance-testing based |
| LLM-Enhanced Search (Perplexity, ChatGPT+Search) | ⚠️ Slower (seconds) | ⚠️ Prompt injection | ⚠️ Citations only | ❌ No guarantees | ⚠️ Conversational only | O(N log N + T), retrieval + generation |
| Generic Preference-Based (Pareto, CP-Net, Utility) | ⚠️ Moderate | ⚠️ Data gameable | ✅ Yes | ✅ Under model | ❌ No | NP-hard, CP-net dominance testing |
| Preference-DAG, US 2025/0298806 (PerfectFinder) | ✅ Yes | ✅ Yes | ✅ Full audit | ✅ Guaranteed | ✅ Full streaming | O(N×V + N log N), no dominance testing |

Notation:

N = number of candidate items
V = number of DAG nodes (preferences)
E = number of DAG edges
T = tokens in LLM context

¹ Streaming-Compatible: Can solutions be evaluated and ranked as they arrive from multiple sources, without requiring the full solution set upfront? ✅ = algorithmically progressive; ⚠️ = UI/system-level only; ❌ = requires full dataset.

² Algorithmic Complexity: Worst-case per-query cost (Big O notation) for computing optimal ranking, excluding offline training, index building, and caching.

Practical Example: What These Numbers Mean

Scenario:

  • V = 10 preference nodes (e.g., color, brand, price range...)
  • E = 15 edges in the preference DAG
  • N = 10,000 candidate products (streaming from multiple sources)
  • CPU speed: ~1 billion simple operations/second

Computation Time:

| Method | Operations | Time |
|---|---|---|
| MCDA/Pareto, O(N²×V) | 10,000² × 10 = 1B | ~1 second |
| CP-Net (NP-hard) | exponential | seconds to minutes |
| PerfectFinder, O(N×V + N log N) | 100K + 133K ≈ 233K | ~0.2 ms |

PerfectFinder is ~4,300× faster than dominance-testing approaches for this scenario.
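
These figures are easy to reproduce. A quick sanity check in Python, assuming base-2 logarithms and the ~1-billion-operations-per-second budget above:

```python
import math

N, V = 10_000, 10            # candidates, preference nodes
OPS_PER_SEC = 1e9            # ~1 billion simple operations/second

pareto_ops = N ** 2 * V                    # O(N^2 * V) dominance testing
pf_ops = N * V + N * math.log2(N)          # O(N*V + N log N) direct scoring

print(f"Pareto/MCDA:   {pareto_ops:,.0f} ops, {pareto_ops / OPS_PER_SEC:.1f} s")
print(f"PerfectFinder: {pf_ops:,.0f} ops, {pf_ops / OPS_PER_SEC * 1e3:.2f} ms")
print(f"Speedup:       ~{pareto_ops / pf_ops:,.0f}x")
```

This prints roughly 1.0 s versus 0.23 ms, a factor of about 4,300.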

Detailed Analysis by Approach

Traditional ML Ranking

Google, Amazon Search, Bing

Train a model that scores items using weighted relevance signals.

Pros:

  • Extremely fast at web scale (ms latency)
  • Mature infrastructure (indexes, caching)

Cons:

  • Opaque: limited explainability
  • Vulnerable to SEO/spam manipulation
  • No formal guarantee of optimality

Recommendation Engines

Netflix, Spotify, Amazon Recs

Predict user-item affinity from past behavior using CF/content/hybrid methods.

Pros:

  • Highly personalized
  • Great for browsing or discovery

Cons:

  • Optimizes engagement, not explicit preferences
  • Vulnerable to review/interaction fraud
  • Not suitable for multi-criteria decisions

MCDA Methods

AHP, TOPSIS, ELECTRE, Skyline

Weighted criteria, dominance testing, or Pareto ranking.

Pros:

  • Transparent and explainable
  • Can compute Pareto-optimal solutions

Cons:

  • Requires O(N²) dominance tests
  • Difficult to scale to large N or streaming settings
  • Preference models usually rigid

LLM-Augmented Search

Perplexity, You.com, ChatGPT+Search, Google AI Overviews

Retrieve documents, then use LLMs to synthesize answers.

Pros:

  • Understands natural language needs
  • Conversational refinement

Cons:

  • Ranking logic remains opaque
  • Vulnerable to prompt injection
  • No formal optimality guarantee
  • Slower than traditional search

Generic Preference-Based

Utility, CP-Nets, Pareto

Explicit preference modeling; score or compare based on rules.

Pros:

  • Explainable preference structure
  • Can provide formal guarantees

Cons:

  • Pareto/CP-Net can be NP-hard
  • O(N²) overhead common
  • Difficult to scale across sources

Our Innovation

Preference-DAG Optimality

US 2025/0298806 / PerfectFinder

DAG weights with parallel scoring and streaming results.

Key Advantages:

  • Scales efficiently: Avoids O(N²) dominance tests
  • Parallel by design: Independent source scoring
  • Strict priority: Higher preferences always dominate
  • Fully explainable: Direct, inspectable influence
  • Gaming-resistant: Prevents feature stacking
  • Incremental: Score without recomputing
  • Formal guarantees: Top scores are optimal

Complexity: O(N×V + N log N), near-linear (no N² term)

Abstract

This invention provides a breakthrough method for finding truly optimal solutions from large datasets based on user preferences and mandatory requirements. It solves critical problems that plague traditional search and ranking systems: gaming through feature manipulation, computational infeasibility at scale, and lack of explainability.

The Core Innovation: The system represents user preferences as a directed acyclic graph (DAG) and computes an optimality score for each candidate solution using a novel scoring mechanism. This score can be calculated using only the current solution and the preference structure—without comparing against other solutions. This eliminates the need for expensive pairwise dominance testing that grows quadratically with dataset size.

Key Capabilities:

  • Gaming-Resistant: Higher-priority preferences always dominate any combination of lower-priority ones, preventing manipulation through feature stacking
  • Progressive Processing: Solutions arriving incrementally from multiple data sources can be scored immediately without waiting for the complete dataset
  • Provably Optimal: Maximum-score solutions are guaranteed to be Pareto-optimal with respect to the preference DAG
  • Computationally Efficient: Near-linear scaling (O(N × V + N log N)) instead of quadratic (O(N² × V)), enabling real-time performance on millions of candidates
  • Fully Explainable: Every ranking decision includes a complete audit trail showing which preferences were satisfied and why
  • Parallel & Distributed: Independent scoring enables massive parallelization across multiple processors and data sources

Impact: This technology transforms decision-making in e-commerce, supplier selection, personalized recommendations, and any domain requiring optimal choices from complex preference structures. It delivers millisecond response times while maintaining mathematical guarantees of optimality and full regulatory auditability.

Why Traditional Approaches Fail

Problem 1: Computationally Expensive Dominance Testing

Traditional multi-objective optimization systems must perform pairwise dominance testing: comparing each solution against all other solutions to determine if one dominates another.

The Dominance Testing Bottleneck:
  • Computational Complexity: O(N² × V), quadratic in solution count
  • Each new solution requires comparison with all existing solutions
  • Cannot process solutions incrementally or in parallel without re-testing
  • Requires all solutions to be available before optimization begins

This approach is impractical for real-world use cases with large datasets, multiple data sources, or streaming results.
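
For contrast, here is a textbook pairwise Pareto filter (a generic formulation of this bottleneck, not the patented method). It makes both costs visible: every candidate is tested against every other, and nothing can be returned until the full set has arrived.

```python
def dominates(a, b):
    """True if a is at least as good as b on every criterion (higher = better)
    and strictly better on at least one."""
    return all(x >= y for x, y in zip(a, b)) and any(x > y for x, y in zip(a, b))

def pareto_front(solutions):
    """O(N^2 * V): each solution is tested against all others, and the entire
    solution set must be available before any result can be emitted."""
    return [s for s in solutions
            if not any(dominates(t, s) for t in solutions if t is not s)]

# Toy data: 3 criteria per candidate, higher is better
print(pareto_front([(3, 1, 2), (2, 2, 2), (3, 1, 1), (1, 3, 3)]))
# (3, 1, 1) is dominated by (3, 1, 2); the other three survive
```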

Problem 2: Linear Weighting Produces Incorrect Results

A naive approach might assign linear or additive weights to preferences (e.g., Rank Sum or Rank Reciprocal methods). This is mathematically incorrect for preference DAGs.

Why Linear Weights Fail:

With linear weights, several lower-level preferences can accumulate a higher combined score than a single higher-level preference, violating the dominance hierarchy. For example, if the top preference has weight 5 and three minor preferences have weight 2 each, an item satisfying only the three minor preferences (score 6) outranks an item satisfying only the top preference (score 5).

The result is falsely optimal solutions: items that are actually dominated under the user's preferences but are incorrectly labeled as optimal.
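
The failure mode above, as a toy Python example (all names and weights are hypothetical, chosen only to illustrate the problem):

```python
# Linear (additive) weights: one high-priority preference vs. three minor ones.
weights = {"brand": 5, "color": 2, "size": 2, "material": 2}

solution_a = {"brand"}                      # satisfies only the top preference
solution_b = {"color", "size", "material"}  # stacks three minor preferences

def score(satisfied):
    return sum(weights[p] for p in satisfied)

print(score(solution_a), score(solution_b))  # 5 6 -> B incorrectly outranks A
```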

Our Solution: Direct Optimality Scoring

This patent introduces a novel scoring mechanism that assigns weights to DAG nodes in a way that preserves the dominance hierarchy, ensuring:

Key Mathematical Property:

Higher-level preferences in the DAG always dominate any combination of lower-level preferences, preventing gaming through feature stacking.
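
Stated in our own notation (an illustration of the property as described, not the patent's exact formula): writing w(v) for the weight of DAG node v and lower(v) for the set of all strictly lower-priority nodes, the weights are chosen so that

```latex
w(v) \;>\; \sum_{u \,\in\, \mathrm{lower}(v)} w(u) \qquad \text{for every node } v
```

so the contribution of v alone exceeds anything the nodes below it can contribute together, and no amount of feature stacking can compensate for failing a higher-priority preference.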

  • No dominance testing needed: each solution gets a single, stable optimality score
  • Pareto-optimality guaranteed: maximum-score solutions are provably optimal
  • Incremental processing: new solutions can be scored without recomputing existing scores
  • Parallel computation: independent scoring enables distributed processing (see the sketch below)
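
A minimal sketch of how such a scheme can work, under our own assumptions (one preference node per priority level, so powers of two satisfy the dominance inequality; hypothetical predicates; a heap-based running top-k for streaming). The patent's actual weight assignment and traversal may differ.

```python
import heapq

def level_weights(levels):
    """One node per level: level k (0 = lowest priority) gets weight 2**k,
    and 2**k > 2**(k-1) + ... + 2**0, so higher levels always dominate."""
    return {lvl: 2 ** lvl for lvl in levels}

def score(solution, prefs):
    """Score one solution in O(V) from the solution and the preference
    structure alone -- no comparison against other solutions."""
    return sum(w for check, w in prefs if check(solution))

# Hypothetical preference nodes: (predicate, priority level)
nodes = [
    (lambda s: s["brand"] == "Acme", 2),   # highest priority
    (lambda s: s["color"] == "red",  1),
    (lambda s: s["price"] < 50,      0),   # lowest priority
]
weights = level_weights({lvl for _, lvl in nodes})
prefs = [(check, weights[lvl]) for check, lvl in nodes]

# Streaming: maintain a running top-k as candidates arrive; no full dataset,
# no re-scoring of items already seen.
top_k, K = [], 2
for item in (
    {"brand": "Acme",  "color": "blue", "price": 80},
    {"brand": "Other", "color": "red",  "price": 40},
    {"brand": "Acme",  "color": "red",  "price": 45},
):
    heapq.heappush(top_k, (score(item, prefs), id(item), item))
    if len(top_k) > K:
        heapq.heappop(top_k)               # evict the current worst

for s, _, item in sorted(top_k, reverse=True):
    print(s, item)                         # scores 7, then 4
```

Note that the item satisfying only the highest-priority preference (score 4) outranks the item stacking two lower-priority preferences (score 3), which is exactly the behavior linear weights fail to provide.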

Technical Highlights

Computational Complexity

Time Complexity: O((V+E) + N×V + N log N)

  • V = number of DAG nodes (preference criteria)
  • E = number of DAG edges
  • N = number of feasible solutions (satisfying mandatory constraints)

Breakdown:
  • (V+E): Initialize DAG weights via topological sort
  • N×V: Score each feasible solution against DAG preferences
  • N log N: Sort solutions by optimality score

Key advantage: No pairwise dominance testing needed (which would be O(N² × V)). Our novel scoring mechanism ensures optimality through direct scoring of each solution.
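
A sketch of the level computation behind that initialization step, under our own assumptions (edges point from higher-priority to lower-priority nodes; a node's level is its longest distance from a root; each level's weight strictly exceeds the combined weight of everything below it). The patent may define levels and weights differently.

```python
from collections import Counter, defaultdict, deque

def dag_levels(num_nodes, edges):
    """O(V+E) Kahn-style topological pass. Edges run from higher- to
    lower-priority nodes; level 0 = highest priority."""
    succ = defaultdict(list)
    indeg = [0] * num_nodes
    for u, v in edges:
        succ[u].append(v)
        indeg[v] += 1
    level = [0] * num_nodes
    queue = deque(i for i in range(num_nodes) if indeg[i] == 0)
    while queue:
        u = queue.popleft()
        for v in succ[u]:
            level[v] = max(level[v], level[u] + 1)
            indeg[v] -= 1
            if indeg[v] == 0:
                queue.append(v)
    return level

def dominance_weights(levels):
    """Bottom-up: each level's weight exceeds the TOTAL weight of all nodes at
    deeper (lower-priority) levels -- the anti-stacking property."""
    counts = Counter(levels)
    level_weight, below = {}, 0
    for l in range(max(levels), -1, -1):   # deepest level first
        level_weight[l] = below + 1
        below += level_weight[l] * counts[l]
    return [level_weight[l] for l in levels]

# Toy DAG: 4 nodes, edges 0->1, 1->3, 0->2 (node 0 is the top preference)
levels = dag_levels(4, [(0, 1), (1, 3), (0, 2)])
print(levels, dominance_weights(levels))   # [0, 1, 1, 2] [6, 2, 2, 1]
```

Here node 0's weight (6) exceeds the sum of every lower node's weight (2 + 2 + 1 = 5), so satisfying it can never be outweighed by stacking the others.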

Space: O(V + E + N×V/w) with bitset optimization, where w is the machine word size
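
A sketch of the bitset idea (our illustration, using Python integers as arbitrary-width bitsets): each solution records which of the V preferences it satisfies as a V-bit mask, so per-solution bookkeeping takes roughly V/w machine words instead of V booleans, and the mask doubles as an audit trail for explainability.

```python
# Hypothetical V = 3 preference predicates, highest priority first,
# with dominance-preserving weights (4 > 2 + 1).
prefs = [
    lambda s: s["brand"] == "Acme",
    lambda s: s["color"] == "red",
    lambda s: s["price"] < 50,
]
weights = [4, 2, 1]

def satisfaction_mask(solution):
    """Pack which preferences a solution satisfies into one V-bit integer."""
    mask = 0
    for i, check in enumerate(prefs):
        if check(solution):
            mask |= 1 << i
    return mask

def score_from_mask(mask):
    """Recover the optimality score from the stored bits."""
    return sum(w for i, w in enumerate(weights) if (mask >> i) & 1)

m = satisfaction_mask({"brand": "Acme", "color": "blue", "price": 45})
print(bin(m), score_from_mask(m))   # 0b101 -> score 4 + 1 = 5
```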

Use Cases

  • Personalized recommendations
  • E-commerce product search
  • Vendor/supplier selection
  • Regulatory-compliant decisioning

Prior Art Distinctions

Unlike ML-based ranking (opaque and training-dependent), multi-objective optimization (NP-hard and slow), or weighted-sum models (gameable), this approach provides provable optimality, polynomial-time execution, and full auditability.

Priority and Filing Details

Application Number: US 2025/0298806
Status: Published

Integrate this technology

License the patent, book a pilot, or explore partnership opportunities.