Score stack overview
483Radar's FDA inspection risk scoring methodology separates three questions so leadership can see what each output claims, and what it does not:
- `BlindSpot` highlights rising external citation themes that may not yet be obvious from the site's own record.
- `Observation Risk` estimates observation likelihood if an inspection occurs.
- `Inspection Risk` estimates inspection likelihood over 6- and 12-month horizons.
BlindSpot
BlindSpot is the external-intelligence layer. It surfaces peer and market asymmetry using hard-entry gates, suppression rules, and confidence-bounded ranking rather than free-form AI generation.
- Signals are persisted and reviewed, not generated live on page load.
- Low-data conditions suppress weak claims instead of forcing them into the report.
- BlindSpot does not claim FDA targeting intent or replace internal regulatory judgment.
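The gate-then-rank behavior described above can be sketched as follows. The `Signal` fields, threshold names, and cutoff values are illustrative assumptions, not the product's actual gates:

```python
from dataclasses import dataclass

@dataclass
class Signal:
    theme: str
    citation_count: int   # external citations supporting the theme
    peer_sites: int       # distinct peer sites contributing evidence
    confidence: float     # 0..1 lower-bound confidence estimate

# Hypothetical hard-entry gates: a signal must clear every one to appear at all.
MIN_CITATIONS = 3
MIN_PEER_SITES = 2
MIN_CONFIDENCE = 0.5

def passes_gates(s: Signal) -> bool:
    """Hard-entry gate: weak signals are suppressed, not ranked low."""
    return (s.citation_count >= MIN_CITATIONS
            and s.peer_sites >= MIN_PEER_SITES
            and s.confidence >= MIN_CONFIDENCE)

def rank_signals(signals: list[Signal]) -> list[Signal]:
    """Confidence-bounded ranking over the surviving signals only."""
    kept = [s for s in signals if passes_gates(s)]
    return sorted(kept, key=lambda s: s.confidence, reverse=True)
```

Note that a suppressed signal never reaches the ranking step, which is what keeps low-data conditions from forcing weak claims into the report.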
Observation Risk Score
Observation Risk compares a contextual baseline with matched facility yield, profile vulnerability, and residual district context. Weak-evidence runs are shrunk back toward baseline so the output stays conservative when support is thin.
- The output distinguishes baseline probability, pre-shrink model output, and final probability.
- Weak matching, sparse peers, and missing citation detail lower reliability.
- Observation Risk does not forecast inspection timing, remediation success, or legal outcomes.
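The baseline / pre-shrink / final distinction can be sketched as an evidence-weighted blend. The `evidence_weight` parameter here is an illustrative stand-in for however match quality and evidence sufficiency are actually aggregated:

```python
def shrink_to_baseline(baseline: float, model_output: float,
                       evidence_weight: float) -> float:
    """Blend the pre-shrink model output back toward the contextual baseline.

    evidence_weight in [0, 1]: 1.0 = full evidence support, 0.0 = no support,
    so thin evidence pulls the final probability toward baseline.
    """
    w = max(0.0, min(1.0, evidence_weight))
    return (1 - w) * baseline + w * model_output
```

With weak matching or sparse peers the weight drops, and the final probability lands closer to the baseline than to the raw model output, which is the conservative behavior described above.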
Inspection Risk Score
Inspection Risk is a deterministic forecast built from saved inspection history, peer context, adverse-history context, and a bounded legacy-geography residual. Missing evidence stays neutral rather than becoming an adverse signal.
- Outputs IPS-6 and IPS-12 with basis and reliability labels.
- Weak support lowers reliability before it raises risk.
- Inspection Risk does not predict observations, warning letters, or enforcement actions.
Reliability and missing-data behavior
Reliability is a defined reading of evidence sufficiency, cohort support, and match quality. It is not a cosmetic label. Thin evidence lowers reliability, dampens the result, or suppresses the signal.
- High reliability means stronger support across facility history, peer context, and match quality.
- Moderate reliability means the signal is still useful, but at least one support layer is thinner than ideal.
- Low reliability means the output should be treated as directional planning support only.
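The three-tier reading can be sketched as thresholds over the support layers named above. The cutoff values are illustrative, not the product's actual boundaries:

```python
def reliability_label(facility_history: float, peer_context: float,
                      match_quality: float) -> str:
    """Map three support layers (each scored 0..1) to a reliability tier.

    High requires every layer to be strong; a single thin layer drops the
    tier to Moderate; any clearly weak layer drops it to Low.
    """
    layers = [facility_history, peer_context, match_quality]
    if all(x >= 0.7 for x in layers):
        return "High"
    if all(x >= 0.4 for x in layers):
        return "Moderate"
    return "Low"
```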
Basis labels and fallback modes
Reports are designed to show what kind of evidence is carrying the result, not just the final number. Labels such as `Context-limited`, `Profile-dominant`, `Fallback broad`, `Thin`, `Unavailable`, and `Supported` are meant to make handoff and challenge easier.
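One way the label selection could work, sketched as a precedence chain. Only the label strings come from the report; the evidence keys and the ordering are assumptions:

```python
def basis_label(evidence: dict[str, bool]) -> str:
    """Pick the label describing which evidence layer carries the result.

    Checked from strongest basis to weakest fallback; an empty evidence
    picture is reported as Unavailable rather than silently scored.
    """
    if not any(evidence.values()):
        return "Unavailable"
    if evidence.get("facility_history") and evidence.get("peer_cohort"):
        return "Supported"
    if evidence.get("facility_history"):
        return "Context-limited"
    if evidence.get("profile"):
        return "Profile-dominant"
    if evidence.get("broad_cohort"):
        return "Fallback broad"
    return "Thin"
```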
Warning disclosure
When support is weak, the product should disclose it explicitly. That includes filtered evidence exclusions, thin peer cohorts, weak site match conditions, fallback modes, and visible top-N limits where the displayed list is not the full underlying population.
Technical appendix and challenge layer
The executive layer is intentionally short. Technical basis, caveats, scope notes, and source-detail pages are preserved so quality, regulatory, and operating leaders can challenge the result without reverse-engineering the output.
AI boundary
AI may be used for optional narrative phrasing or briefing expansion. It does not derive BlindSpot, observation risk, or inspection risk scores. Deterministic scoring remains the source of truth for the public product claims.
Frequently asked questions
What do BlindSpot, observation risk, and inspection risk mean?
BlindSpot highlights external citation themes that may be building before they are obvious inside the site. Observation risk estimates the likelihood of observations if an inspection occurs. Inspection risk estimates the likelihood of an inspection itself.
How does reliability affect the score outputs?
Reliability reflects evidence sufficiency, cohort support, and match quality. When support is thin, the output is dampened, downgraded, or suppressed rather than overstated.
Does the methodology predict FDA targeting intent?
No. The methodology is designed for decision support and external signal interpretation. It does not claim FDA targeting intent, enforcement certainty, or guaranteed outcomes.
What data sources does FDA inspection risk scoring use?
483Radar scores are derived from locally synced FDA datasets: published Form 483 records, annual inspection and citation data, and warning letter context. The scoring engine runs entirely against this stored evidence, without live FDA API calls during analysis.