Why Alimony & Child Support results differ in North Carolina
5 min read
Published April 15, 2026 • By DocketMath Team
The top 5 reasons results differ
If you’re using DocketMath (alimony-child-support) for North Carolina (US-NC) and your results don’t match what someone else got—even with similar incomes—most differences come from how the facts you enter line up with North Carolina–aware time windows and input mapping. Income alone rarely explains everything.
**1. Different “base period” expectations (timing matters)**
- North Carolina’s general statute-of-limitations (SOL) period is 3 years in this context, and the dataset also notes no claim-type-specific sub-rule was found. That means the default 3-year period functions as the general rule referenced here.
- Practical effect: if one scenario (or tool setting) implicitly models a different number of months for the calculation, the totals can shift even when the incomes look the same (see the sketch just below).
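For intuition, here is a minimal arithmetic sketch in plain Python. The amounts and window lengths are invented for illustration, and `monthly_support` is a made-up placeholder, not a DocketMath internal value; the 36-month figure simply mirrors the 3-year general baseline discussed above.

```python
# Hypothetical illustration: the same monthly amount totals differently
# when two runs silently model different time windows.

monthly_support = 850.00  # assumed monthly amount, identical in both runs

window_a_months = 36  # default 3-year general period
window_b_months = 30  # a run that implicitly models a shorter window

total_a = monthly_support * window_a_months
total_b = monthly_support * window_b_months

print(f"Run A (36 months): ${total_a:,.2f}")   # $30,600.00
print(f"Run B (30 months): ${total_b:,.2f}")   # $25,500.00
print(f"Delta from timing alone: ${total_a - total_b:,.2f}")  # $5,100.00
```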
**2. Legally relevant household and support inputs aren’t the same**
- Small input variations—like entering income as gross vs. net, selecting weekly vs. monthly payments, or including/excluding certain income items—can cascade into different support-relevant amounts (a worked frequency example follows this list).
- DocketMath is built to be jurisdiction-aware, so if two people enter slightly different versions of the same income facts, the outputs can diverge.
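One common case is frequency: “$900 weekly” and “$3,600 monthly” look similar but are not annual equivalents. A minimal sketch, assuming a 52-week year; the helper name is illustrative, not a DocketMath API:

```python
# Hypothetical illustration of a frequency mismatch.

WEEKS_PER_YEAR = 52
MONTHS_PER_YEAR = 12

def weekly_to_monthly(weekly_amount: float) -> float:
    """Convert a weekly amount to its monthly annual-equivalent."""
    return weekly_amount * WEEKS_PER_YEAR / MONTHS_PER_YEAR

entered_weekly = 900.00
entered_monthly = 3_600.00  # what a second person might type instead

print(f"Weekly $900 as monthly: ${weekly_to_monthly(entered_weekly):,.2f}")
# -> $3,900.00, not $3,600.00: a $300/month gap from frequency alone
gap = (weekly_to_monthly(entered_weekly) - entered_monthly) * MONTHS_PER_YEAR
print(f"Annual gap: ${gap:,.2f}")  # -> $3,600.00 per year
```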
**3. Alimony and child support are not interchangeable outputs**
- People often compare a single “total support” number across runs, but alimony and child support respond to different factual inputs and assumptions.
- Result: you can see a scenario with higher child support but lower alimony, or the reverse, depending on how the inputs map to each category (the worked example below shows equal totals masking exactly this kind of swing).
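A worked example with invented numbers: two runs can share the same combined total while the category outputs move in opposite directions, which a total-only comparison hides.

```python
# Hypothetical numbers: equal totals, different category breakdowns.

run_a = {"alimony": 1_200.00, "child_support": 800.00}
run_b = {"alimony": 900.00, "child_support": 1_100.00}

for name, run in (("Run A", run_a), ("Run B", run_b)):
    print(f"{name}: total ${sum(run.values()):,.2f}  breakdown {run}")
# Both totals are $2,000.00/month, yet alimony differs by $300 and
# child support by $300 in the opposite direction.
```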
**4. Changes over time (not just a single snapshot)**
- Calculations can be sensitive to whether you model the case as steady-state or with changes—such as income changes, employment stability assumptions, or adjustments during the modeled period.
- DocketMath outputs can differ when you run “as of now” assumptions versus “projected for the period” style assumptions, as sketched below.
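A minimal sketch with invented amounts, assuming a 36-month window to match the 3-year baseline: holding the current amount flat versus modeling a raise at month 19 changes the total even though the starting income is identical.

```python
# Hypothetical comparison of "as of now" vs "projected for the period".

months = 36
steady_monthly = 1_000.00  # snapshot: current amount held flat
projected = [1_000.00] * 18 + [1_150.00] * 18  # raise modeled at month 19

steady_total = steady_monthly * months
projected_total = sum(projected)

print(f"Steady-state total: ${steady_total:,.2f}")     # $36,000.00
print(f"Projected total:    ${projected_total:,.2f}")  # $38,700.00
print(f"Difference:         ${projected_total - steady_total:,.2f}")  # $2,700.00
```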
**5. Jurisdiction awareness vs. “local myths”**
- Two calculators may both claim “North Carolina,” but they can implement assumptions differently—especially around timing/default periods and how they interpret inputs.
- When the US-NC logic (including the default 3-year SOL baseline) differs in implementation, results diverge quickly.
Pitfall to avoid: comparing results without aligning (1) the time window (the default 3-year period, unless your tool scenario clearly reflects otherwise) and (2) the meaning of each input (frequency, inclusions/exclusions, and what counts as support-relevant) almost guarantees mismatches. A quick way to catch misalignment before comparing numbers is sketched below.
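A hypothetical pre-flight check that diffs two runs’ assumptions before you look at their outputs; the setting names here are illustrative, not DocketMath settings.

```python
# Diff the assumptions of two runs before comparing their numbers.

baseline = {
    "jurisdiction": "US-NC",
    "window_months": 36,  # 3-year general SOL baseline
    "income_frequency": "monthly",
    "income_basis": "gross",
    "include_bonuses": True,
}
other_run = dict(baseline, income_frequency="weekly", include_bonuses=False)

mismatches = {k: (baseline[k], other_run[k])
              for k in baseline if baseline[k] != other_run[k]}
if mismatches:
    print("Align these before comparing outputs:", mismatches)
```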
How to isolate the variable
Use a controlled troubleshooting approach in DocketMath so you don’t chase the wrong difference. Don’t retype everything—start with one baseline and change only one variable at a time.
- Freeze the jurisdiction and tool settings so both runs use the same rule set.
- Compare one input at a time (dates, rates, amounts) and re-run after each change.
- Review the breakdown to see which segment or assumption drives the difference.
A. Start from one “baseline” run
- Open the **alimony-child-support** tool with North Carolina (US-NC) selected.
- Enter your scenario using consistent units and frequencies (e.g., monthly vs. weekly) based on how you want the tool to interpret the facts.
B. Change only one thing per run
Run a tight sequence where each test changes just one factor (a code sketch of this sequence follows the list):
- Run once using the tool’s default 3-year general period assumption (no claim-type-specific sub-rule found in the dataset); run again only if your scenario or inputs explicitly imply an alternate timing model.
- Switch weekly ↔ monthly, keeping the annual equivalent consistent.
- Adjust only how certain categories (like bonuses or non-recurring items) are treated, if your scenario includes those items.
- Update only dependency-related inputs while leaving alimony-related inputs unchanged.
- Re-run using duration/period settings that match how your scenario defines the relevant timeframe.
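In code form, the same one-change-per-run idea looks like this. It is only a sketch: `run_calculator` is a stand-in for whatever produces your outputs, not a real DocketMath function, and the setting names are invented.

```python
# Hypothetical one-change-per-run sequence: each variant overrides
# exactly one baseline assumption.

baseline = {
    "window_months": 36,
    "income_frequency": "monthly",
    "include_bonuses": True,
    "dependents": 2,
}

variants = [
    ("time window", {"window_months": 30}),
    ("income frequency", {"income_frequency": "weekly"}),
    ("income inclusion", {"include_bonuses": False}),
    ("dependency inputs", {"dependents": 1}),
]

for label, override in variants:
    scenario = dict(baseline, **override)
    changed = set(scenario.items()) - set(baseline.items())
    print(f"{label}: changed only {dict(changed)}")
    # result = run_calculator(scenario)  # compare against the baseline run
```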
C. Track output deltas rather than only the final totals
Create a simple log like:
| Run | Key changed input | Alimony output | Child support output | Total delta |
|---|---|---|---|---|
| Baseline | — | | | |
| Run 2 | Time window | | | |
| Run 3 | Income frequency | | | |
| Run 4 | Income inclusion | | | |
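If you prefer to script the log, here is a minimal sketch with invented outputs that computes per-category deltas against the baseline, so a change in alimony is never masked by an offsetting change in child support.

```python
# Hypothetical delta log: record each run's outputs and compute the
# change from baseline per category.

runs = {
    "Baseline":            {"alimony": 1_200.00, "child_support": 800.00},
    "Run 2 (time window)": {"alimony": 1_200.00, "child_support": 700.00},
    "Run 3 (frequency)":   {"alimony": 1_300.00, "child_support": 800.00},
}

base = runs["Baseline"]
for name, out in runs.items():
    deltas = {k: out[k] - base[k] for k in base}
    total_delta = sum(deltas.values())
    print(f"{name}: deltas {deltas}, total delta ${total_delta:,.2f}")
```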
D. Tie the observed difference back to US-NC logic
When you find the first run where numbers change, look for whether the shift is driven by:
- Time-based assumptions: the dataset’s general 3-year SOL baseline and any tool modeling of which months count.
- Fact-based mapping: how entered amounts convert into support-relevant categories (frequency, inclusion/exclusion, and category labeling).
Gentle note: This is not legal advice—just a practical way to reconcile why two “same-income” scenarios produce different outputs.
Next steps
Reconcile definitions before chasing numbers
- Confirm that both runs use the same meaning for each input: monthly vs. annual, included vs. excluded income items, and the same scenario framing.
Run 2–3 focused diagnostic simulations
- Baseline + one targeted change usually reveals the culprit faster than attempting a broad re-entry of inputs.
Document the aligned assumptions
- Save a short checklist (frequency, included income types, modeled period) so you can reproduce the results.
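One lightweight way to do this (a sketch, not a DocketMath feature) is to save the checklist as a small JSON file next to your notes; the field names below are illustrative.

```python
# Hypothetical reproducibility checklist saved alongside your runs.
import json

checklist = {
    "jurisdiction": "US-NC",
    "modeled_period_months": 36,  # 3-year general SOL baseline
    "income_frequency": "monthly",
    "included_income_types": ["salary", "bonuses"],
}
with open("docketmath_run_assumptions.json", "w") as f:
    json.dump(checklist, f, indent=2)
```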
Use North Carolina context without assuming it changes every run
- The dataset references the 3-year general SOL period as the default, and also indicates no claim-type-specific sub-rule was found. Unless the tool scenario inputs clearly change that modeling, you should generally treat 3 years as the baseline reference point.
- The “SAFE Child Act” reference is included as part of the broader legislative framework context in the dataset; don’t assume it automatically overrides the default timing unless your tool scenario explicitly reflects such a change.
