trading psychology, scorecard, performance review, routines, discipline, journaling, KPIs, quality

Weekly Trading Scorecard KPIs: Adherence, Quality, State, Review Depth

Build a focused weekly trading scorecard around four KPIs: adherence, quality, state, and review depth, with clear scales, weights, and practical examples.

Headge Team

Product Development

January 28, 2026
9 min read
Overhead photo of a desk with a weekly trading scorecard notebook, pen, and coffee.

A weekly scorecard translates intentions into observable behavior. It converts the fog of PnL noise into a small set of inputs a trader controls day after day. Four KPIs form a compact and practical framework: adherence, quality, state, and review depth. Each captures a different failure mode and, together, they close the loop between plan, execution, and learning.

Research across performance domains shows that simple, behaviorally specific measures improve consistency by prompting self-regulation and supporting deliberate practice. In trading, a weekly cadence is long enough to smooth daily randomness and short enough to allow corrective action before bad habits set in.

KPI 1: Adherence

Adherence tracks compliance with the trading plan and risk rules. The goal is not perfection but visible fidelity to predefined behaviors. Operationalize it with a checklist tied to the playbook: pre-market preparation steps, risk limits, entry criteria, position sizing, and exit protocol. Count completed steps relative to the total steps required on days traded. Adherence can also be defined as the percentage of trades that met all pre-trade criteria.

Use a 1 to 5 scale with clear anchors. A 1 indicates frequent rule breaks and improvised trades. A 3 indicates most rules followed with occasional lapses that did not materially alter risk. A 5 indicates full compliance, including honoring stop levels and position sizing rules on every trade. Adherence should be assessed from contemporaneous notes or platform logs, not memory. When in doubt, score toward the stricter interpretation; it keeps the metric predictive of future outcomes.
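To keep the adherence number mechanical rather than impressionistic, it can be computed directly from the checklist. Below is a minimal Python sketch; the checklist field names and the band edges that map average compliance onto the 1 to 5 anchors are illustrative assumptions, not a fixed rubric.

```python
# Minimal sketch: score adherence as the share of pre-trade checklist items
# satisfied per trade, then map the weekly average onto a 1-5 rating.
# Field names and band edges are assumptions; calibrate them to your own rubric.
from statistics import mean

def trade_adherence(checklist: dict[str, bool]) -> float:
    """Fraction of required pre-trade steps completed for one trade."""
    return sum(checklist.values()) / len(checklist)

def weekly_adherence_score(trades: list[dict[str, bool]]) -> int:
    """Map the week's average checklist compliance to a 1-5 rating."""
    avg = mean(trade_adherence(t) for t in trades)
    bands = [(0.99, 5), (0.90, 4), (0.75, 3), (0.50, 2)]  # assumed cutoffs
    return next((score for cutoff, score in bands if avg >= cutoff), 1)

week = [
    {"prep_done": True, "size_within_limit": True, "entry_criteria_met": True, "stop_honored": True},
    {"prep_done": True, "size_within_limit": False, "entry_criteria_met": True, "stop_honored": True},
]
print(weekly_adherence_score(week))  # -> 3 with these illustrative cutoffs
```

Scoring from logged checklist fields also makes the stricter-interpretation rule easy to enforce: a step is either recorded as complete or it counts against the score.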

Why it matters: process compliance reduces variance and prevents a small number of extreme errors from dominating the week. In practice, adherence often correlates more tightly with long-run PnL stability than hit rate because it reflects exposure to tail risk.

Illustrative example: a trader executes eight trades in a week. Six meet all pre-trade criteria, one violates position size by 20 percent, and one is an impulsive entry outside the plan. The adherence assessment might be a 3 or 4, depending on how the rubric defines rounding and the severity of the sizing error.

KPI 2: Quality

Quality captures the merit of trade selection and execution relative to the plan. It is not the same as outcome. Define it using a brief rubric that inspects location quality, timing against the setup’s trigger, confluence with higher time frame context, and the presence of adverse information that should have disqualified the entry.

Evaluate a representative sample of trades and rate the average quality on a 1 to 5 scale. A 1 indicates trades mostly outside the playbook or taken in poor locations. A 3 indicates mixed selection with adequate locations and some late or early entries. A 5 indicates trades that matched core patterns, aligned with broader context, and were executed at prices consistent with the plan. Incorporate ex ante thinking by recording the planned R-multiple and the intended invalidation level before entry; post hoc drift in justifications should not influence the quality score.
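As a concrete illustration, the sketch below rates each sampled trade on four assumed rubric dimensions (location, timing, confluence, absence of disqualifiers) and averages them; the dimension names and per-dimension 1 to 5 scoring are assumptions for illustration, not a prescribed rubric.

```python
# Minimal sketch: per-trade quality from an assumed four-part rubric,
# averaged across a representative sample of the week's trades.
from dataclasses import dataclass
from statistics import mean

@dataclass
class TradeQuality:
    location: int          # 1-5: entry location relative to the planned level
    timing: int            # 1-5: entry relative to the setup's trigger
    confluence: int        # 1-5: alignment with higher time frame context
    no_disqualifiers: int  # 1-5: absence of adverse information at entry

    def rating(self) -> float:
        return mean([self.location, self.timing, self.confluence, self.no_disqualifiers])

def weekly_quality_score(sample: list[TradeQuality]) -> float:
    """Average quality across the sampled trades, on the 1-5 scale."""
    return round(mean(t.rating() for t in sample), 1)

sample = [TradeQuality(5, 4, 5, 5), TradeQuality(3, 2, 3, 4), TradeQuality(2, 2, 3, 3)]
print(weekly_quality_score(sample))  # -> 3.4
```

The planned R-multiple and invalidation level recorded before entry can live in the same structure, which helps keep post hoc justifications out of the score.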

Why it matters: over time, expectancy depends more on selecting high-quality opportunities than on managing mediocre ones. Quality scoring keeps attention on filters that generate edge rather than post-entry micromanagement.

Illustrative example: three trades matched a high-quality setup with clear catalysts and favorable structure, two were taken from boredom during a flat session, and one was a late entry. The composite quality might average near 3, despite some profitable outcomes, because two trades would not be repeatable under the plan.

KPI 3: State

State assesses readiness to trade, including sleep, stress, focus, and emotional load. It is a practical proxy for cognitive bandwidth and impulse control. Track state once pre-session with a brief self-rating that reflects how capable the trader is of monitoring risk and following the plan, not simply how motivated they feel.

Use a 1 to 5 scale linked to behavioral markers. A 1 reflects fatigue, elevated irritability, racing thoughts, or inability to sit still. A 3 reflects workable energy and attention with mild distraction. A 5 reflects calm, clear focus and stable energy. If biometric signals are available, treat them as a secondary check rather than the score itself. The key is consistency in the self-assessment method, ideally at the same time each trading day.

Why it matters: performance psychology research shows that state influences both vigilance and error rates. Traders in low-quality states are more likely to break rules, chase, and misread context. A weekly state KPI makes it legitimate to reduce size or skip marginal sessions when readiness is compromised.

Illustrative example: a trader logs 4, 3, 2, 4 across four sessions due to a midweek sleep deficit. The weekly state score trends down before mistakes increase. This is exactly the early warning the scorecard should surface.
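A minimal way to surface that warning is to log the daily pre-session ratings and flag any week that contains a low-state day. The sketch below uses the example week above; the threshold of 2 is an assumption to adjust to your own anchors.

```python
# Minimal sketch: weekly summary of daily 1-5 pre-session state ratings.
from statistics import mean

def weekly_state(scores: list[int]) -> dict:
    """Average state for the week plus a simple early-warning flag."""
    return {
        "average": round(mean(scores), 2),
        "low_state_day": min(scores) <= 2,  # assumed warning threshold
    }

print(weekly_state([4, 3, 2, 4]))  # -> {'average': 3.25, 'low_state_day': True}
```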

KPI 4: Review Depth

Review depth measures the thoroughness of post-trade learning. It is not about writing more words but about capturing hypotheses, evidence, error classification, and next steps. The simplest operationalization is a structured journal template: what was anticipated, what occurred, what was learned, and what process change will be tested.

Score on a 1 to 5 scale. A 1 is a thin recap with outcome-only notes. A 3 is a balanced narrative that includes why the trade was taken and what could be improved. A 5 includes explicit error tags or success tags, a brief root-cause statement, and a concrete change to test next time. If the trader records charts or screenshots, note whether the annotation addresses decision points rather than just marking entries and exits.
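One way to keep review depth scoring consistent is to grade the journal entry against the template fields directly. The sketch below assumes the four-field template described earlier plus optional error or success tags and a root-cause note; the scoring heuristic itself is illustrative rather than a standard.

```python
# Minimal sketch: a rough 1-5 review-depth rating based on which structured
# journal fields are filled in. Field names and thresholds are assumptions.
JOURNAL_FIELDS = ["anticipated", "occurred", "learned", "change_to_test"]

def review_depth_score(entry: dict) -> int:
    filled = sum(1 for f in JOURNAL_FIELDS if entry.get(f, "").strip())
    has_tags = bool(entry.get("error_tags") or entry.get("success_tags"))
    if filled == 4 and has_tags and entry.get("root_cause"):
        return 5
    if filled >= 3:
        return 4 if has_tags else 3
    return 2 if filled == 2 else 1

entry = {
    "anticipated": "Breakout continuation after the open",
    "occurred": "Trigger fired into a midday liquidity vacuum; stopped out",
    "learned": "The setup needs primary-session participation",
    "change_to_test": "Add a time-of-day filter to the checklist",
    "error_tags": ["timing"],
    "root_cause": "Entered outside the setup's intended session window",
}
print(review_depth_score(entry))  # -> 5 even though the trade lost
```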

Why it matters: durable improvement in skill depends on the reflection cycle. Traders who journal deeply tend to reduce repeat errors and increase the speed at which they internalize pattern recognition.

Illustrative example: after a stop-out, the journal captures that the trigger fired into a midday liquidity vacuum and that waiting for the primary session open aligns better with the setup. The next steps include adding a time filter to the checklist. That merits a higher review depth score even though the trade lost.

Combining the KPIs into a Weekly Score

A single composite score focuses attention without hiding the parts. One useful weighting is adherence 35 percent, quality 35 percent, state 15 percent, and review depth 15 percent. Convert each KPI to a 0 to 100 scale by multiplying the 1 to 5 rating by 20, apply weights, and sum. The formula is: composite = 0.35 × Adh + 0.35 × Qual + 0.15 × State + 0.15 × Review.

Worked example: adherence 4.0, quality 3.2, state 3.5, review depth 4.5. On a 100-point scale that is 80, 64, 70, and 90. The composite becomes 0.35×80 + 0.35×64 + 0.15×70 + 0.15×90 = 74.4. The breakdown points to selection quality as the main limiter, not discipline or learning effort.
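The same computation in code, for traders who keep the scorecard in a script or spreadsheet export; the function simply applies the ×20 conversion and the weights described above and reproduces the worked example.

```python
# Minimal sketch: weighted composite on a 0-100 scale from 1-5 KPI ratings.
WEIGHTS = {"adherence": 0.35, "quality": 0.35, "state": 0.15, "review": 0.15}

def composite_score(ratings: dict[str, float]) -> float:
    """Convert each 1-5 rating to 0-100 (x20), weight, and sum."""
    return round(sum(WEIGHTS[k] * ratings[k] * 20 for k in WEIGHTS), 1)

print(composite_score({"adherence": 4.0, "quality": 3.2, "state": 3.5, "review": 4.5}))
# -> 74.4
```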

Calibration matters. Early weeks should be used to refine anchors so that scores are stable and meaningfully different across weeks. If all KPIs cluster at 4 to 5, the rubric is too generous or too vague. If they swing wildly, definitions are unclear or the trader is not using contemporaneous data.

Data Hygiene and Reliability

Define each KPI before the week starts. Keep the scoring ritual short and do it at the same time each day. Use artifacts that already exist: broker logs for adherence, pre-planned levels and context notes for quality, a pre-market state check for readiness, and a structured post-trade template for review depth. Periodically spot-check a sample of trades and rescore them the next day without looking at outcomes. Stable scores under this small blind retest indicate a reliable rubric.
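The blind retest reduces to a simple agreement check: rescore a handful of trades the next day with outcomes hidden, then compare. The sketch below uses mean absolute difference; the 0.5-point stability threshold is an assumption to calibrate against your own rubric.

```python
# Minimal sketch: agreement between original scores and a blind rescore.
def retest_agreement(original: list[int], rescored: list[int]) -> dict:
    """Mean absolute difference between paired 1-5 ratings."""
    diffs = [abs(a - b) for a, b in zip(original, rescored)]
    mad = sum(diffs) / len(diffs)
    return {"mean_abs_diff": round(mad, 2), "rubric_stable": mad <= 0.5}  # assumed threshold

print(retest_agreement([4, 3, 5, 2], [4, 4, 5, 2]))
# -> {'mean_abs_diff': 0.25, 'rubric_stable': True}
```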

Guard against score gaming. The point is to detect friction, not to win the scoreboard. If a metric becomes easy to maximize without improving process, adjust the rubric. For example, if adherence looks perfect but quality lags, strengthen pre-trade criteria so that low-quality setups cannot be marked as compliant.

Using the Scorecard Within the Weekly Rhythm

Wednesday is an ideal pulse check. With two sessions in the books and two to come, there is enough data to adjust. If adherence is trending well but state is low, the intervention is recovery focused: shorter session, more breaks, or temporarily reduced size. If quality is lagging, tighten filters for the remainder of the week and accept fewer trades rather than working harder on management.

A short midweek review can include up to three actions: choose one constraint to reinforce, one situational adjustment to improve state, and one hypothesis to test in the next session. Keep the actions small enough to execute immediately rather than planning a wholesale overhaul on Friday.

Example Weekly Pattern and Adjustment

Consider a week where Monday adherence is high but selection quality is mediocre due to late entries during thin liquidity. Tuesday improves quality but review depth is low because post-trade notes are rushed. By Wednesday morning the composite is dragged down mainly by quality and review. The plan for Wednesday and Thursday becomes precise: predefine two A-quality setups with price levels and invalidation, cap total trades at three, and schedule a 15-minute post-close block to complete the structured review template before leaving the desk. If state is a 2 on Wednesday morning, reduce size by half and skip marginal setups to protect adherence.

By Friday, the expected pattern is a modest composite increase driven by better quality and deeper review, regardless of PnL. Over several weeks, the moving average of the composite and the dispersion of the four KPIs will show whether improvements are systemic or cosmetic. A rising composite with narrowing variance between adherence and quality is a strong signal that the plan and the trader’s behavior are converging.

Interpreting and Iterating

Use the KPIs to ask different questions. Low adherence suggests rule simplification or clearer cues. Low quality suggests better pre-trade filters or improved context mapping. Low state suggests changes in sleep timing or session duration. Low review depth suggests that the journal template is either too long or not specific enough to create next steps.

As patterns emerge, convert insights into micro experiments. Change only one variable at a time over a week, such as adding a time-of-day constraint or a volatility filter. Let the KPIs reflect the impact. The scorecard thus becomes a laboratory for improving execution rather than a weekly verdict on worth as a trader.

A good scorecard is compact, behavior anchored, and honest. Keep it the same long enough to learn from it, and use the midweek checkpoint to steer the ship while there is time to correct course.

James Strickland

Founder of Headge | 15+ years trading experience

James created Headge to help traders develop the mental edge that strategy alone can't provide. Learn more about Headge.

Ready to build your mental edge?

Meditation and mindfulness techniques designed specifically for traders. Build the discipline and emotional resilience you need to perform at your best.
