V8 Decision Engine

Turn every “maybe” into a confident decision

The V8 Decision Engine screens dealbreakers, scores fit, maps gaps, and builds win strategies before you invest $50K-$200K writing a proposal. From opportunity intake to Go/No-Go in days, not weeks.

  • 10-20%: Avg win rate
  • $50K-$200K: Per proposal
  • 7 in 10: Below 30%
  • 30%: "New normal"

What is P-Win?

The single number that decides whether your next proposal is worth the investment.

The Definition

P-Win (Probability of Win) is a quantitative estimate of how likely your organization is to win a specific government contract.

It drives the highest-stakes decision your BD team makes: invest $50K-$200K pursuing an opportunity, or walk away.

The Math

  • Average competitive win rate: 10-20%
  • At those rates, at least 4 of every 5 proposals lose
  • 20 pursuits/year at a 15% win rate means $1M+ in wasted proposal spend annually (see the sketch below)
  • Every dollar on a low-probability pursuit is a dollar not spent on a winnable one
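A back-of-the-envelope version of that math. The inputs are assumptions drawn from the figures above; swap in your own numbers:

```python
# Pursuit-cost math behind the bullets above (illustrative inputs only).
pursuits_per_year = 20        # opportunities pursued annually
win_rate = 0.15               # mid-range of the 10-20% average
avg_proposal_cost = 100_000   # assumed midpoint of $50K-$200K

expected_losses = pursuits_per_year * (1 - win_rate)   # expected losing bids
wasted_spend = expected_losses * avg_proposal_cost     # dollars spent on losers

print(f"Expected losing proposals: {expected_losses:.0f}")   # 17
print(f"Wasted pursuit spend:      ${wasted_spend:,.0f}")    # $1,700,000
```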

P-Win vs P-Go vs PTW

  • P-Go (owned by the BD lead): Is this opportunity worth investigating?
  • P-Win (owned by the capture manager): What is our probability of winning?
  • Price-to-Win (owned by the pricing analyst): What price point beats competitors?

Eight engines, one decision workflow

The V8 Decision Engine is modular. Each engine is a product function that can be delivered in pieces, deepened over time, and reused across verticals. (A code sketch of that modular shape follows the engine list.)

01

Self-Knowledge Engine

Creates a structured profile of your organization's capabilities and constraints. Establishes the trustworthy baseline that powers Dealbreaker and keeps all downstream recommendations grounded.

Outputs: Capability map, hard exclusions, gaps

02

Market Forecast Engine

Ingests opportunities from many sources and normalizes them into a decision-ready format with scope, timing, buyer context, and requirement signals.

Outputs: Structured opportunity record, timelines

03

Fit & Alignment Engine

Produces an explainable prioritization signal that helps you decide whether an eligible opportunity is worth pursuing. Reduces pursuit noise and makes prioritization defensible.

Outputs: Fit score, factor breakdown, next action

04

Gap & Strategy Engine

Identifies what is missing to win and suggests concrete actions: partnering, narrowing scope, or declining. Prevents late discovery of disqualifying gaps.

Outputs: Gap list, mitigation options, risk flags

05

Competitive Intelligence Engine

Organizes competitor and incumbent context using available signals and optional customer-entered intelligence. Adds realism to prioritization and makes strategy recommendations credible.

Outputs: Likely competitors, incumbent context

06

Influence Layer

Surfaces early shaping moments and suggests engagement actions before requirements are locked. The highest-leverage window is pre-RFP; the Influence Layer turns time into advantage.

Outputs: Alerts, engagement prompts, shaping cues

07

Outflank Engine

Translates insights into a pursuit strategy: positioning, teaming posture, evidence planning, and differentiation guidance tied to what you actually have.

Outputs: Win themes, differentiation plan, posture

08

Learning & Feedback Engine

Captures outcomes and decision rationale to improve fit scoring and recommendations over time. Creates compounding advantage while remaining explainable.

Outputs: Updated weights, performance dashboards
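As referenced above, the modularity could look something like this in code. The `Engine` protocol, context object, and stub engine are illustrative assumptions, not Projectory's actual API:

```python
from dataclasses import dataclass, field
from typing import Protocol

@dataclass
class OpportunityContext:
    """Shared state that each engine reads and enriches (hypothetical)."""
    opportunity_id: str
    outputs: dict[str, object] = field(default_factory=dict)

class Engine(Protocol):
    """One modular stage of the workflow; engines can ship independently."""
    name: str
    def run(self, ctx: OpportunityContext) -> OpportunityContext: ...

class FitAlignmentEngine:
    """Stub standing in for engine 03; real scoring would be configurable."""
    name = "fit_alignment"
    def run(self, ctx: OpportunityContext) -> OpportunityContext:
        ctx.outputs[self.name] = {"fit_score": 0.62, "factors": {}}
        return ctx

def run_pipeline(engines: list[Engine], ctx: OpportunityContext) -> OpportunityContext:
    # Because engines only share the context object, any subset can run alone.
    for engine in engines:
        ctx = engine.run(ctx)
    return ctx
```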

P-Win decision thresholds

A P-Win score is only useful if it connects to a decision. These ranges reflect common industry practice for bid/no-bid actions.

Below 30%

No-Bid

Walk away unless specific factors can be improved before the RFP drops.

30%–50%

Conditional Go

Pursue with defined milestones. Re-evaluate P-Win at each gate.

50%–70%

Go

Full capture and proposal investment. Assign dedicated proposal manager.

Above 70%

Strong Go

Priority pursuit. Commit A-team resources and executive sponsorship.
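Transcribed into code, the bands above reduce to one mapping (a sketch using the exact cut points from this section):

```python
def bid_decision(p_win: float) -> str:
    """Map a P-Win score (0.0-1.0) to the threshold bands above."""
    if p_win < 0.30:
        return "No-Bid"            # walk away unless factors can improve
    if p_win < 0.50:
        return "Conditional Go"    # pursue with milestones, re-check at gates
    if p_win <= 0.70:
        return "Go"                # full capture and proposal investment
    return "Strong Go"             # priority pursuit, A-team resources

assert bid_decision(0.25) == "No-Bid"
assert bid_decision(0.45) == "Conditional Go"
assert bid_decision(0.60) == "Go"
assert bid_decision(0.80) == "Strong Go"
```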

Common P-Win mistakes

P-Win scoring only works if the inputs are honest and the process is disciplined. Most organizations make the same handful of mistakes.

1

Optimism bias

Capture managers are incentivized to keep opportunities alive. P-Win scores drift upward through wishful thinking and selective attention to positive signals. Require external validation and track predicted vs actual outcomes.

2

Confusing capability with past performance

"We could do this" and "we have done this and here is the proof" are scored very differently. Past performance means specific, recent, relevant contracts with CPARS ratings to prove delivery.

3

Static P-Win scores

A P-Win calculated at opportunity identification and never updated creates false confidence. The competitive landscape changes. Customer priorities shift. Recalculate at every major capture gate.

4

Same weights for every opportunity

Applying identical factor weights to an LPTA services contract and a best-value R&D procurement produces meaningless scores. Calibrate weights to the specific procurement type and evaluation criteria.

5

No calibration against outcomes

If you consistently assign 60% P-Win to opportunities you win only 30% of the time, your model is broken. Without calibration, there is no feedback loop, and the model never improves. (A simple calibration check is sketched after this list.)

6

The 50% trap

Teams frequently assign "around 50%" because it feels safe. It avoids No-Bid discomfort and Strong Go commitment. If your pipeline clusters at 45-55%, your scoring needs calibration.
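The calibration check promised under mistake 5 can be a few lines: bucket past predictions by score band and compare each band's predicted score to its actual win rate. The history data below is made up:

```python
from collections import defaultdict

# (predicted P-Win, won?) for past pursuits: entirely illustrative data.
history = [(0.60, False), (0.62, True), (0.65, False),
           (0.35, False), (0.48, True), (0.75, True),
           (0.52, False), (0.55, False)]

buckets: dict[int, list[bool]] = defaultdict(list)
for predicted, won in history:
    buckets[int(predicted * 10) * 10].append(won)    # 0.62 -> the 60% band

for band in sorted(buckets):
    outcomes = buckets[band]
    actual = sum(outcomes) / len(outcomes)
    print(f"{band}% band: actual win rate {actual:.0%} over {len(outcomes)} pursuits")

# A 60% band that wins ~33% of the time is mistake 5; a pile-up of
# predictions between 45% and 55% is mistake 6, the 50% trap.
```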

From “maybe” to decision in four steps

Dealbreaker screens what you cannot do. Fit Assessment scores what you should pursue. Go/No-Go captures the decision. Handoff starts the proposal.

Dealbreaker Screening

Hard exclusions checked first: clearances, set-asides, certifications, OCI, geographic constraints. Binary stop/continue before you invest any pursuit effort. No silent blocking; transparent rationale for every flag.
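A minimal sketch of that gate, assuming a simple capability profile from the Self-Knowledge Engine. The field names are hypothetical; the point is the binary result plus a visible rationale for every flag:

```python
from dataclasses import dataclass

@dataclass
class Flag:
    rule: str
    rationale: str   # shown to the user: no silent blocking

def dealbreaker_screen(opp: dict, profile: dict) -> tuple[bool, list[Flag]]:
    """Return (proceed, flags); any flag means stop before pursuit spend."""
    flags = []
    if opp.get("clearance_required") and not profile.get("holds_clearance"):
        flags.append(Flag("clearance", "Requires a clearance the organization does not hold"))
    if opp.get("set_aside") in profile.get("ineligible_set_asides", []):
        flags.append(Flag("set_aside", f"Not eligible for {opp['set_aside']} set-aside"))
    return (not flags, flags)

proceed, flags = dealbreaker_screen(
    {"clearance_required": True, "set_aside": "8(a)"},
    {"holds_clearance": False, "ineligible_set_asides": ["8(a)"]},
)
# proceed == False; each flag carries the transparent rationale for the stop.
```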

Fit Assessment

After passing Dealbreaker, opportunities are scored on configurable fit criteria with explainable rationale. An opportunity can be eligible but still not worth pursuing. Fit makes that distinction visible.
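Explainable fit scoring can be as simple as a weighted sum that keeps its per-factor breakdown. The factors and weights below are placeholders, not Projectory's model:

```python
def fit_score(factors: dict[str, float], weights: dict[str, float]) -> tuple[float, dict]:
    """Weighted fit score plus the per-factor breakdown that explains it."""
    breakdown = {name: factors[name] * w for name, w in weights.items()}
    return sum(breakdown.values()), breakdown

weights = {"past_performance": 0.35, "customer_relationship": 0.25,
           "technical_fit": 0.25, "price_competitiveness": 0.15}   # sums to 1.0
factors = {"past_performance": 0.8, "customer_relationship": 0.4,
           "technical_fit": 0.7, "price_competitiveness": 0.5}     # 0.0-1.0 each

score, breakdown = fit_score(factors, weights)
print(f"Fit score: {score:.2f}")   # 0.63: eligible, but is it worth pursuing?
```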

Go/No-Go Decision Workflow

Explicit decision capture with logged rationale. Reviewers see fit scores, gap analysis, competitive context, and risk flags in one view. Decisions are traceable and auditable.

Handoff to Projectory Core

A 'Go' decision creates a Projectory proposal project with all carried context: attachments, decision rationale, gap mitigations, and win themes. No re-entry. No lost context.
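Steps three and four can be pictured as a single decision record whose context travels with the handoff. All field names here are illustrative:

```python
from dataclasses import dataclass, field
from datetime import datetime, timezone

@dataclass
class GoNoGoRecord:
    """Auditable decision plus the context a 'Go' carries into the proposal."""
    opportunity_id: str
    decision: str          # "Go" / "No-Go" / "Conditional Go"
    rationale: str         # logged reasoning, traceable later
    decided_at: datetime = field(default_factory=lambda: datetime.now(timezone.utc))
    carried_context: dict = field(default_factory=dict)

record = GoNoGoRecord(
    opportunity_id="OPP-1234",
    decision="Go",
    rationale="Fit 0.63, dealbreakers clear, incumbent weak on price",
    carried_context={"win_themes": [], "gap_mitigations": [], "attachments": []},
)
# On "Go", carried_context seeds the proposal project: no re-entry, no lost context.
```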

Transparency by default

No silent blocking or advancing. Every flag, score, and recommendation comes with traceable rationale, source provenance, and freshness indicators. Your team sees exactly why an opportunity was flagged, scored, or recommended. Decisions are logged with full audit trails.

Dealbreaker is powered by the Self-Knowledge Engine's record of what your organization cannot do under any circumstances. That narrow starting point lets you get value immediately, without deep profile data upfront. Additional data is gathered only when it directly increases decision quality.

How the V8 Decision Engine works

A pre-proposal system built into the capture workflow. From opportunity intake to Go/No-Go decision, with a clean handoff to Projectory Core for proposal execution.

01

Self-Knowledge + Ingest

Your organization's capabilities and hard exclusions are captured once and reused. Opportunities are ingested from SAM.gov, GovWin, and other sources into a normalized, decision-ready format.
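A normalized, decision-ready record might carry fields like the following; the shape is an assumption that mirrors the signals named above, not the platform's actual schema:

```python
from dataclasses import dataclass
from datetime import date

@dataclass
class OpportunityRecord:
    """One decision-ready shape regardless of source (SAM.gov, GovWin, ...)."""
    source: str                      # where the notice was ingested from
    notice_id: str
    title: str
    buyer: str                       # agency / contracting office context
    response_due: date               # timing signal
    set_aside: str | None            # e.g. a small-business set-aside, if any
    requirement_signals: list[str]   # clearances, certifications, NAICS, ...
```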

02

Dealbreaker Gate

Hard exclusions checked immediately: clearances, set-asides, certifications, OCI, compliance boundaries. Impossible or prohibited opportunities are flagged before any pursuit investment. Transparent rationale; explicit user confirmation.

03

Fit, Gap & Strategy

Eligible opportunities are scored on fit with explainable rationale. Gaps are identified with concrete mitigation options. Competitive intelligence adds realism. You see exactly why an opportunity is or is not worth pursuing.

04

Go/No-Go + Handoff

Structured decision workflow with logged rationale. A 'Go' creates a Projectory proposal project with all carried context: win themes, evidence checklists, gap mitigations, and teaming posture.

Frequently asked questions

Common questions about P-Win scoring and capture intelligence.

What is a good P-Win score for a government contract?
A P-Win above 50% is generally considered a strong basis for a Go decision. Scores between 30% and 50% warrant conditional pursuit with clear milestones. Below 30%, most capture teams should default to No-Bid unless specific factors can be improved before proposal submission. The absolute number matters less than how honestly it was calculated and whether it is calibrated against your actual win/loss history.
How is P-Win different from Price-to-Win?
P-Win (Probability of Win) estimates your overall likelihood of winning a contract based on factors like customer relationships, past performance, and competitive positioning. Price-to-Win (PTW) is a pricing analysis that determines the optimal price point to beat competitors while maintaining profitability. PTW is one input to your P-Win score, typically weighted at 10-15%, but it is not the score itself. They are complementary but separate analyses.
Can AI accurately predict P-Win scores?
AI improves P-Win accuracy by analyzing larger datasets, identifying patterns in historical win/loss data, and removing subjective bias from competitive analysis. However, AI works best when combined with human judgment on relationship quality and strategic factors that data alone cannot capture. The goal is augmented decision-making, not fully automated prediction. Organizations using AI-assisted P-Win scoring report better calibration between predicted and actual outcomes over time.
How often should we recalculate P-Win during capture?
P-Win should be recalculated at every major capture milestone: initial opportunity identification, after customer engagement, after draft RFP review, at the bid/no-bid gate, and before final proposal submission. Static P-Win scores calculated once at opportunity identification are one of the most common mistakes in capture management. With AI-powered tools, recalculation can happen continuously as new competitive intelligence and capture activity data enters the system.
Does P-Win scoring work for small businesses?
Yes, but the weight of certain factors shifts. Small businesses competing on set-aside contracts may weight teaming arrangements and socioeconomic status higher, while large-contract past performance carries less weight. The scoring framework is the same; the calibration changes based on your competitive position and the acquisition strategy. Small businesses often benefit more from disciplined P-Win scoring because they have fewer resources to waste on low-probability pursuits.
What data does Projectory use to calculate P-Win?
Projectory analyzes solicitation requirements, your past performance records, competitor award histories from FPDS and SAM.gov, teaming arrangements, compliance alignment, and capture activity maturity. The platform combines structured data analysis with AI-driven pattern matching across historical procurement outcomes to generate a composite P-Win score with a confidence interval and factor-level breakdown.

Stop wasting pursuit dollars on unwinnable bids

See how the V8 Decision Engine turns messy opportunities into confident Go/No-Go decisions with transparent rationale.