1k_scanner is not a document scanner. It is a multi-market, multi-timeframe trading scanner built with Rust and egui.
In post-trade reviews, people often write:
- “The signal was wrong.”
- “Indicators did not work today.”
The problem is simple: these lines explain the outcome, but they do not preserve a reusable decision process. So today’s template is:
“The signal was wrong” → Assumption / Constraint / Trigger
The focus is not prediction pride. It is preserving how you read the scene and why you acted.
1) Why “the signal was wrong” is weak review language
That sentence is result-oriented, not process-oriented.
The same loss can come from very different causes:
- assumption was overstretched,
- constraint (invalidation boundary) was too loose,
- trigger (execution condition) was accepted too early.
If those are not separated, the same mistake returns under a new label.
2) The 3-line frame: Assumption / Constraint / Trigger
Keep every review note in three lines:
- Assumption: how you read the market state
- Constraint: where this interpretation becomes invalid
- Trigger: what scene must appear before action
Example (long candidate):
- Assumption: “Higher-timeframe persistence is still active; pullback may re-accelerate.”
- Constraint: “If structure settles below NRZ, this idea is invalid.”
- Trigger: “Keep candidate only after reaction above NRZ is re-confirmed; otherwise drop.”
You do not need perfect wording. You need statements that can be re-tested next session.
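The three-line frame can be sketched as a tiny data type. This is a minimal illustration, not anything 1k_scanner exposes; the struct and field names are my own:

```rust
// Hypothetical sketch of the 3-line review frame as a data type.
// Names are illustrative, not part of 1k_scanner's API.
#[derive(Debug, Clone)]
struct ReviewNote {
    assumption: String, // how you read the market state
    constraint: String, // where this interpretation becomes invalid
    trigger: String,    // what scene must appear before action
}

impl ReviewNote {
    /// Render the note in the same three-line shape used in review.
    fn render(&self) -> String {
        format!(
            "- Assumption: {}\n- Constraint: {}\n- Trigger: {}",
            self.assumption, self.constraint, self.trigger
        )
    }
}
```

The point of the fixed shape is that every note can be re-tested: each field is a statement the next session can confirm or reject.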
3) Fix the observation order: run → scan → focus → record
Review quality improves when live observation order is consistent:
- Run: open the app, load your template (Cmd/Ctrl + L)
- Scan: broad market pass in dense mode (Cmd/Ctrl + 7)
- Focus: inspect candidates in expanded/single modes (Cmd/Ctrl + 8, Space)
- Record: store evidence in Check Note (V, N)
When this order is stable, your review can clearly reconstruct why each decision happened.
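The loop above can be modeled as an ordered sequence where each phase has exactly one successor. A minimal sketch, with phase names of my own choosing:

```rust
// Hypothetical sketch of the observation loop as an ordered sequence.
// Phase names are illustrative, not 1k_scanner's internal terms.
#[derive(Debug, Clone, Copy, PartialEq, Eq)]
enum Phase {
    Run,    // open app, load template
    Scan,   // broad market pass in dense mode
    Focus,  // inspect candidates in expanded/single modes
    Record, // store evidence in Check Note
}

/// Each phase has one successor; after Record, the loop
/// returns to Scan within the same session.
fn next_phase(p: Phase) -> Phase {
    match p {
        Phase::Run => Phase::Scan,
        Phase::Scan => Phase::Focus,
        Phase::Focus => Phase::Record,
        Phase::Record => Phase::Scan,
    }
}
```

Because the successor is fixed, a review can always reconstruct which phase a given decision was made in.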
4) Anchor your review to the grid model
In 1k_scanner, the visual model is:
- Rows = timeframes
- Columns = symbols
Use the same model in review:
- read one column as one symbol’s MTF story,
- compare columns to reprioritize candidates.
This prevents overreacting to one pretty candle and preserves context continuity.
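The row/column reading can be made concrete with a small sketch. The `Grid` type here is hypothetical, purely to show that one column read top-to-bottom is one symbol across all timeframes:

```rust
// Hypothetical grid model: rows = timeframes, columns = symbols.
// This struct is illustrative only; 1k_scanner does not expose it.
struct Grid<'a> {
    timeframes: Vec<&'a str>, // rows
    symbols: Vec<&'a str>,    // columns
}

impl<'a> Grid<'a> {
    /// One column read top-to-bottom: a single symbol's
    /// multi-timeframe (MTF) story.
    fn column(&self, symbol_idx: usize) -> Vec<(&'a str, &'a str)> {
        self.timeframes
            .iter()
            .map(|tf| (*tf, self.symbols[symbol_idx]))
            .collect()
    }
}
```

Comparing two such columns side by side is the "compare columns to reprioritize" step: the same timeframe rows, different symbols.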
5) Write EMA/NRZ as state sentences, not right/wrong verdicts
At user level, EMA/NRZ should stay observational:
- EMA: is persistence still alive?
- NRZ: holding or breaking?
Reusable review lines:
- “EMA persistence remains, but waiting for cleaner NRZ re-validation.”
- “Reaction near NRZ appeared, but holding quality was weak; lowered priority.”
These lines accelerate decision speed in the next session.
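One way to keep the wording observational is to represent EMA/NRZ readings as states and generate the sentence from them. A hedged sketch, with state names of my own invention:

```rust
// Hypothetical observational states; the variant names are
// illustrative, not 1k_scanner's internal terminology.
#[derive(Debug, Clone, Copy)]
enum EmaState {
    PersistenceAlive,
    PersistenceFading,
}

#[derive(Debug, Clone, Copy)]
enum NrzState {
    Holding,
    Breaking,
}

/// Produce a reusable state sentence instead of a right/wrong verdict.
fn state_sentence(ema: EmaState, nrz: NrzState) -> String {
    let ema_part = match ema {
        EmaState::PersistenceAlive => "EMA persistence remains",
        EmaState::PersistenceFading => "EMA persistence is fading",
    };
    let nrz_part = match nrz {
        NrzState::Holding => "NRZ is holding",
        NrzState::Breaking => "NRZ is breaking",
    };
    format!("{}; {}.", ema_part, nrz_part)
}
```

Because the sentence is built from states rather than outcomes, the same phrasing can be re-tested against the next session's chart.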
6) Treat consensus as candidate compression, not execution
In 1k_scanner, when multiple signals align, charts show green/red frame emphasis.
In review terms, this means:
- not “execute now,”
- but “review this candidate first.”
Operationally:
- stronger emphasis → inspect sooner,
- weak/neutral emphasis → defer or exclude faster.
Consensus is best used as attention allocation, not prediction certainty.
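"Attention allocation" can be sketched as a sort: emphasis only decides inspection order, never execution. The emphasis levels below are my own labels for the green/red frame intensity, not the app's internals:

```rust
// Hypothetical emphasis levels; variant order gives Neutral < Weak < Strong.
#[derive(Debug, Clone, Copy, PartialEq, Eq, PartialOrd, Ord)]
enum Emphasis {
    Neutral,
    Weak,
    Strong,
}

/// Order candidates for review: stronger emphasis is inspected sooner.
/// Note: this allocates attention only; it does not trigger execution.
fn inspection_order(mut candidates: Vec<(&str, Emphasis)>) -> Vec<&str> {
    // Stable sort, descending by emphasis: ties keep their input order.
    candidates.sort_by(|a, b| b.1.cmp(&a.1));
    candidates.into_iter().map(|(sym, _)| sym).collect()
}
```

Weak or neutral candidates are not deleted; they simply fall to the back of the queue, which is the "defer or exclude faster" behavior described above.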
7) Combine Check Note + templates for repeatable review
In live markets, speed and repeatability matter most.
- V: instantly add/toggle the current chart in Check Note
- N: open the list and batch-edit notes
Then lock layout repeatability:
- choose Grid Size, Timeframes per row, and Exchange, then run Generate Grid Template
- save with F12 before session end
- reload with Cmd/Ctrl + L next session
When this loop is fixed, your Assumption/Constraint/Trigger notes start matching real observed scenes.
8) Copy-paste 3-line review template
You can paste this directly into Check Note:
- Assumption: (how you read the market state)
- Constraint: (where this reading becomes invalid)
- Trigger: (what scene must appear before action)
The goal is not to grade the past. It is to maximize reusability of your decision process.
One final line:
“The signal was wrong” explains yesterday. “Assumption / Constraint / Trigger” prepares tomorrow.
Review is not a win/loss diary. It is design work for better next execution.