Why Damages Allocation results differ in Colorado

4 min read

Published April 15, 2026 • By DocketMath Team

The top 5 reasons results differ

Run this scenario in DocketMath using the Damages Allocation calculator.

If you’re using DocketMath → Damages Allocation in Colorado (US-CO) and your output doesn’t match another party’s spreadsheet, the differences usually come from jurisdiction-aware modeling choices—not “mystery math.” Below are the top five causes we see when comparing allocation runs.

  1. Colorado’s damages model is sensitive to category selection

    • If one run allocates losses to only economic damages while another includes a non-economic bucket (or vice versa), totals will diverge even if the underlying numbers look similar.
    • In practice, category mapping affects the allocation denominators, not just labels (the sketch after this list shows the effect).
  2. Inclusion/exclusion of mitigation and timing assumptions

    • Colorado damages calculations often hinge on what losses are treated as recoverable within the modeled period (e.g., losses occurring within a certain “loss window” or offset-eligible period).
    • DocketMath’s allocation results can change materially when you set different “loss window” start/end dates or allowable offset timing.
  3. Different treatment of offsets and credits

    • If one run accounts for offsets (like amounts already paid or credited) and another does not, you’ll typically see a higher “residual” damage allocation in the credit-free scenario.
    • Even small input differences here can create large allocation percentage swings.
  4. Attribution rules drive how shared components are split

    • When damages include shared elements (e.g., combined performance failures), allocation can depend on the attribution methodology.
    • DocketMath’s jurisdiction-aware rules will split shared components based on the selected attribution approach and the evidentiary drivers you input.
  5. Rounding and constraints

    • Colorado-focused allocation outputs can appear inconsistent when runs use different rounding precision or apply caps/floors.
    • Two spreadsheets may agree on raw totals but disagree after rounding and constraint logic is applied.
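
To make reasons 1–3 and 5 concrete, here is a minimal, self-contained Python sketch of a generic pro-rata allocator. It is not DocketMath's internal logic; the field names, numbers, and the pro-rata rule itself are illustrative assumptions. It shows why toggling a category re-weights every remaining line's percent share, why narrowing the loss window drops items, and why an offset shrinks the residual pool that all lines share.

```python
from datetime import date

# Hypothetical line items: (category, loss date, amount). Names and numbers
# are illustrative assumptions, not DocketMath's internal model.
LINE_ITEMS = [
    ("economic",     date(2025, 2, 1),  40_000.00),
    ("economic",     date(2025, 6, 15), 25_000.00),
    ("non_economic", date(2025, 3, 10), 35_000.00),
]

def allocate(items, *, categories, window, offsets=0.0, precision=2):
    """Pro-rata allocation over the enabled categories.

    categories -- categories to include (reason 1: sets the denominator)
    window     -- (start, end) dates; items outside are dropped (reason 2)
    offsets    -- credits subtracted before allocating (reason 3)
    precision  -- rounding applied per line item (reason 5)
    """
    start, end = window
    included = [(c, d, a) for c, d, a in items
                if c in categories and start <= d <= end]
    gross = sum(a for _, _, a in included)
    if not gross:
        return {}
    net = max(gross - offsets, 0.0)  # residual pool after credits
    # Each line's percent share uses the *included* total as denominator,
    # so toggling a category re-weights every remaining line.
    return {f"{c} {d}": (round(100 * a / gross, precision),  # percent share
                         round(net * a / gross, precision))  # dollars
            for c, d, a in included}

full_year = (date(2025, 1, 1), date(2025, 12, 31))
print(allocate(LINE_ITEMS, categories={"economic", "non_economic"},
               window=full_year))                       # baseline
print(allocate(LINE_ITEMS, categories={"economic"},
               window=full_year))                       # denominator shifts
print(allocate(LINE_ITEMS, categories={"economic", "non_economic"},
               window=full_year, offsets=10_000.00))    # residual shrinks
```

In a fuller model, attribution of shared components (reason 4) would add another layer on top of this, splitting a shared line across parties before the pro-rata step.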

Pitfall: Comparing only the grand total (instead of line-item allocations) hides the real cause. A run can shift $30,000 between categories while keeping the overall sum similar—making the discrepancy look random until you audit the inputs.
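
The pitfall is easy to reproduce. Below is a small sketch with hypothetical numbers: the two runs agree on the grand total to the dollar, yet $30,000 has moved between categories, and only a line-item diff surfaces it.

```python
# Hypothetical line-item outputs from two allocation runs. The totals match
# exactly, but $30,000 has shifted between categories.
run_a = {"economic": 130_000.00, "non_economic": 70_000.00}
run_b = {"economic": 100_000.00, "non_economic": 100_000.00}

def line_item_diff(a, b):
    """Return per-category deltas plus the (misleading) grand-total delta."""
    keys = sorted(set(a) | set(b))
    deltas = {k: b.get(k, 0.0) - a.get(k, 0.0) for k in keys}
    return deltas, sum(b.values()) - sum(a.values())

deltas, total_delta = line_item_diff(run_a, run_b)
print(deltas)       # {'economic': -30000.0, 'non_economic': 30000.0}
print(total_delta)  # 0.0 -- the grand total hides the shift entirely
```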

How to isolate the variable

Use DocketMath to run a controlled diagnostic: change one thing at a time, and record both (a) the final allocation and (b) which internal bucket changed.

Practical isolation workflow

  • Step 1: Lock the dataset
    • Keep claimant facts, event dates, and all dollar inputs constant except the suspected variable.
  • Step 2: Use the same output basis
    • Confirm you’re comparing the same reporting view (for example: totals by category vs. totals by party vs. percent shares).
  • Step 3: Flip one jurisdiction-aware modeling setting at a time
    • First places to check in Colorado runs:
      • loss window start/end dates
      • whether offsets/credits are included
      • which damages categories are enabled
      • attribution approach for shared components
      • rounding precision (if exposed in your configuration)

Diagnostic matrix (quick pattern-check)

| Variable to test | What to change in DocketMath | Expected pattern if it's the cause |
| --- | --- | --- |
| Damages category mapping | Toggle category inclusion | Category line items swing; totals re-balance |
| Loss timing | Adjust the date window | Earlier vs. later buckets shift (not everything moves together) |
| Offsets/credits | Include/exclude credit inputs | Residual allocation rises/falls consistently |
| Shared attribution | Switch attribution approach | Shared lines split differently; other lines stay more stable |
| Rounding/constraints | Change precision/caps | Differences concentrate near boundary values |

If you need a single starting point: run a baseline in the DocketMath Damages Allocation calculator, then branch only one change per run (the sketch below shows this discipline).
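
Here is one way to structure one-change-per-run branching in Python. The config keys mirror the Step 3 checklist; the commented-out run_allocation call stands in for whatever your actual tool invocation is, and every name here is an illustrative assumption.

```python
import copy

# Baseline config mirroring the Step 3 checklist. Keys are illustrative;
# map them to whatever your DocketMath configuration actually exposes.
BASELINE = {
    "loss_window":     ("2025-01-01", "2025-12-31"),
    "include_offsets": True,
    "categories":      ["economic", "non_economic"],
    "attribution":     "pro_rata",
    "precision":       2,
}

# Each variant flips exactly one setting relative to the baseline.
VARIANTS = {
    "narrow_window":   {"loss_window": ("2025-03-01", "2025-12-31")},
    "no_offsets":      {"include_offsets": False},
    "econ_only":       {"categories": ["economic"]},
    "alt_attribution": {"attribution": "causal_weighting"},
    "coarse_rounding": {"precision": 0},
}

def branch(baseline, overrides):
    cfg = copy.deepcopy(baseline)
    cfg.update(overrides)
    return cfg

runs = {"baseline": BASELINE}
runs.update({name: branch(BASELINE, ov) for name, ov in VARIANTS.items()})

for name, cfg in runs.items():
    # result = run_allocation(cfg)  # hypothetical wrapper around your tool;
    #                               # record each result for comparison
    changed = sorted(k for k in cfg if cfg[k] != BASELINE[k])
    print(f"{name:15} -> changed: {changed}")
```

Because each variant differs from the baseline by exactly one key, any output delta can be attributed to that key alone.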

Next steps

  1. Create a baseline run
    • Document the exact input set you used (dates, enabled categories, offsets/credits, attribution approach, and any rounding/constraint settings).
  2. Run five targeted variants
    • One variant for each of the top five reasons listed above.
  3. Compare outputs at the line-item level
    • Look for:
      • category-level deltas
      • which buckets changed first
      • whether the difference is driven by residual vs. base components
  4. Save an audit trail
    • Export results or capture screenshots for each run so you can point to exactly what moved (one way to structure this is sketched below).
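
If your exports are available as plain data, a small script can make the audit trail self-describing. This sketch assumes hypothetical config and line-item dicts; the file layout and content hash are just one convention, not a DocketMath feature.

```python
import hashlib
import json
from datetime import datetime, timezone
from pathlib import Path

def save_audit_record(run_name, config, line_items, out_dir="audit"):
    """Write one self-describing JSON record per run so every later
    discrepancy can be traced back to an exact input set."""
    record = {
        "run_name": run_name,
        "saved_at": datetime.now(timezone.utc).isoformat(),
        "config": config,
        "line_items": line_items,
    }
    payload = json.dumps(record, sort_keys=True, indent=2)
    # A content hash in the filename makes post-hoc edits obvious.
    digest = hashlib.sha256(payload.encode()).hexdigest()[:12]
    out_path = Path(out_dir)
    out_path.mkdir(exist_ok=True)
    target = out_path / f"{run_name}-{digest}.json"
    target.write_text(payload)
    return target

# Example: record a baseline run's inputs and outputs together.
print(save_audit_record(
    "baseline",
    {"categories": ["economic", "non_economic"], "include_offsets": True},
    {"economic": 130_000.00, "non_economic": 70_000.00},
))
```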

Gentle reminder: This is a modeling/data QA workflow. DocketMath helps you apply allocation logic consistently, but outcomes still depend on the facts and the mapping choices you select.
