The first step was simple: we built an early screen in Python:
- A 5x5 grid
- Only one tab
Even with that minimal setup, the result was clear.
We still saw heavy lag and high system load.
So the issue was not “a bit more optimization.”
The real issue was scale.
Why scale changed everything
The target was not monitoring a few dozen symbols.
We needed to monitor hundreds of symbols, approaching a thousand.
And the goal was not “inspect all of them in detail.”
The goal was signal distillation.
- First, scan the whole market
- Then, filter down to only what deserves attention now
So filtering was not step one. It was step two.
If step one (full-market scan) stutters, step two never starts.
Why we moved to Rust
That changed our technical criteria.
Not:
- “Can one screen run?”
- “How many features can we add?”
But:
- Can full-market scanning run repeatedly?
- Can we keep the flow from scanning to candidate compression running without breaking?
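The first criterion is about sustained repetition, not a single pass. A minimal sketch of that check, with an illustrative per-symbol computation and a made-up time budget standing in for the real market-data work:

```rust
use std::time::{Duration, Instant};

// Hypothetical sketch: run the full-market scan repeatedly and verify
// that every pass stays inside a fixed time budget. The per-symbol
// work and the budget values are illustrative assumptions only.

// Placeholder per-symbol work standing in for real market-data reads.
fn scan_pass(universe: &[u32]) -> u64 {
    universe.iter().map(|&v| (v as u64).pow(2)).sum()
}

// Returns false as soon as any single pass blows the budget,
// because one slow pass is enough to stall the whole workflow.
fn scans_within_budget(universe: &[u32], passes: u32, budget: Duration) -> bool {
    for _ in 0..passes {
        let start = Instant::now();
        let _ = scan_pass(universe);
        if start.elapsed() > budget {
            return false;
        }
    }
    true
}

fn main() {
    // Roughly the scale described above: around a thousand symbols.
    let universe: Vec<u32> = (0..1_000).collect();
    println!("{}", scans_within_budget(&universe, 10, Duration::from_millis(50)));
}
```

The point of the sketch is the shape of the question: not "did one scan finish," but "does every repeated pass stay inside its budget."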
The Python prototype was enough to validate the problem,
but not enough to sustain the workflow we needed at runtime.
That is why we moved to Rust.
We chose Rust not for style, but to support the actual operating flow:
scan the whole market, decide what to watch, then apply filters as the next step.
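The operating flow above, scan everything first and filter second, can be sketched as two separate stages. The `Snapshot` struct, the symbol names, and the `momentum` score are illustrative assumptions, not the real screener's data model:

```rust
// Hypothetical two-step flow: a cheap full-market scan, then a
// filter that compresses the scan down to candidates worth watching.

#[derive(Debug, Clone)]
struct Snapshot {
    symbol: String,
    momentum: f64, // illustrative score produced by the scan step
}

// Step one: scan every symbol and emit a lightweight snapshot.
fn scan_market(symbols: &[&str]) -> Vec<Snapshot> {
    symbols
        .iter()
        .map(|s| Snapshot {
            symbol: s.to_string(),
            // Placeholder score; the real scan would read market data.
            momentum: s.len() as f64,
        })
        .collect()
}

// Step two: compress the full scan into the few candidates to watch.
fn filter_candidates(snapshots: &[Snapshot], threshold: f64) -> Vec<Snapshot> {
    snapshots
        .iter()
        .filter(|s| s.momentum >= threshold)
        .cloned()
        .collect()
}

fn main() {
    let universe = ["AAA", "BBBB", "CCCCC"]; // stand-in symbol universe
    let snapshots = scan_market(&universe);             // step one: full scan
    let watchlist = filter_candidates(&snapshots, 4.0); // step two: filter
    println!("{} candidates", watchlist.len());
}
```

Keeping the two stages as separate functions mirrors the ordering argument above: if `scan_market` cannot complete over the whole universe, `filter_candidates` never gets its input.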