Optimization
Written By AxiomCharts
Optimization can be useful here. It can also become the fastest path to self-deception if you start before the workflow is understandable. This page is not here to tell you never to tune. It is here to help you tune in a way that builds understanding instead of replacing it.
What Optimization Should Be For
Healthy optimization asks:
"Does this workflow remain itself when I refine one part of it?"
Unhealthy optimization asks:
"What can I change until the past tells a cleaner story?"
Those may sound similar when you are tired, but they produce very different habits. The first makes the method clearer. The second often makes the report prettier while the method becomes harder to explain.
The Best Time To Start Tuning
Do not optimize just because the strategy finally ran. Start tuning only after these things are true:
you can explain one setup, one entry, and one management path on the chart; optimization without behavioral clarity is mostly guesswork
you know which assumptions shaped the current report; otherwise you may be tuning the tester more than the workflow
you know which tokens matter most to the logic; otherwise a "win" may only be coming from a misunderstood input
you have one baseline worth preserving; otherwise improvement has nothing stable to compare against
If even one of those is missing, the stronger move is usually more verification, not more tuning.
What A Baseline Should Capture
A baseline is not just "the one I liked before." A real baseline should record:
the symbol and timeframe
the direction mode
the strategy properties that matter
the exact authored workflow in play
any custom tokens the workflow depends on
whether important non-market paths use fixed or monitored pricing
If your baseline is fuzzy, your optimization conclusions will be fuzzy too. This is one of the easiest professional habits to build and one of the most valuable. A small written baseline often prevents days of circular tuning.
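The written baseline above can be as small as a single record. Here is one minimal sketch in Python; the field names and example values are illustrative assumptions, not an AxiomCharts API — record whatever your platform actually exposes.

```python
from dataclasses import dataclass, field

@dataclass(frozen=True)
class Baseline:
    """One written baseline to compare every future change against.

    All field names here are hypothetical placeholders.
    """
    symbol: str
    timeframe: str
    direction_mode: str            # e.g. "long-only", "short-only", "both"
    properties: dict               # the strategy properties that matter
    workflow_version: str          # the exact authored workflow in play
    custom_tokens: tuple = ()      # tokens the workflow depends on
    pricing_paths: dict = field(default_factory=dict)  # path -> "fixed" or "monitored"

# Example record (values invented for illustration)
baseline = Baseline(
    symbol="EURUSD",
    timeframe="1h",
    direction_mode="both",
    properties={"initial_capital": 10_000, "commission_pct": 0.05},
    workflow_version="rev3",
    custom_tokens=("trend_state",),
    pricing_paths={"expiry_exit": "fixed"},
)
```

Freezing the record is deliberate: a baseline that can be silently mutated mid-tuning is no longer a baseline.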
Safer Places To Tune First
Some surfaces are more teachable than others. They still need discipline, but they usually let you learn something structural about the workflow without changing its entire identity.
Places To Tune Later
Some surfaces are powerful enough that they can make the backtest look dramatically better before your understanding catches up. Those are the ones to approach later and more cautiously.
The general rule is simple: the more a change affects identity rather than refinement, the later it should come.
A Clean Optimization Loop
Use this loop if you want to tune without losing the thread.
preserve one baseline in writing
choose one variable family only
keep the symbol and timeframe fixed on the first comparison
rerun the workflow
inspect the chart path before reading the summary
write down what changed and what you think caused it
only then test whether the result generalizes elsewhere
This is not slow for its own sake. It is fast in the sense that it keeps your future self from forgetting what actually happened.
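The discipline in the loop above — one variable family, everything else held fixed — can be sketched in a few lines. This is a hypothetical illustration: run_workflow, the family name, and the report shape are all assumptions, not a real AxiomCharts interface.

```python
def tune_one_family(baseline_params, candidate_values, run_workflow,
                    family="confirmation_bars"):
    """Vary exactly one variable family; keep everything else fixed.

    Returns raw (value, report) pairs so the chart path can be
    inspected before any summary metric is read.
    """
    results = []
    for value in candidate_values:
        params = dict(baseline_params)   # never mutate the written baseline
        params[family] = value           # change one family only
        report = run_workflow(params)    # same symbol and timeframe as baseline
        results.append((value, report))
    return results

# Quick self-check with a stand-in runner (invented numbers)
fake_run = lambda p: {"net_profit": p["confirmation_bars"] * 10}
results = tune_one_family({"confirmation_bars": 2}, [1, 2, 3], fake_run)
```

Returning the raw reports instead of a single "best" value is the point: the loop forces you to look at each changed run before declaring a winner.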
What To Look For On The Chart After A Change
Do not only ask whether the metrics improved. Ask:
did the setup become more selective or just more rare?
did the entry become more disciplined or simply later?
did the exits become more coherent or just harder to reach?
did the strategy become easier to explain or only easier to admire?
Those questions help you tell the difference between structural improvement and statistical cosmetics.
Optimization Questions That Keep You Honest
These are excellent questions to ask after every apparently successful change:
Did I strengthen the method or strengthen the story?
This is the single best optimization question in the whole manual.
Would I still make this change if I could not see the equity curve?
If the answer is no, the change may be more aesthetic than structural.
Did this edit make the workflow more understandable or more impressive?
Sometimes a valid improvement does both. But if it only improves the surface and not the explanation, be cautious.
What would I expect to remain true on another symbol or timeframe?
If the answer is "almost nothing," the improvement may be highly local.
Signs You Are Overfitting
Overfitting rarely announces itself dramatically. It usually arrives disguised as progress: the summary keeps improving while the workflow becomes harder to explain, and each "improvement" holds only on the exact symbol, timeframe, and stretch of history where it was found.
One Useful Distinction: Refinement Versus Identity Change
Some edits refine the same workflow. Others quietly turn it into a different workflow altogether.
Refinement usually looks like:
adjusting confirmation discipline
clarifying expiry behavior
improving ladder balance
cleaning up entry timing inside the same setup idea
Identity change usually looks like:
adding a new source of decision-making through custom tokens
changing ownership from named context to global behavior
changing participation style and management style together
changing cost, timing, and structure at the same time
Neither category is forbidden. The mistake is pretending the second category is the first one.
A Better Definition Of A Winning Change
A winning change is not only one that improves the summary. A winning change is one that:
produces behavior you can still explain
survives a realism check
improves the workflow without making it more mysterious
gives you a reason to believe the method learned something real
That is a higher standard than "the net profit went up." It is also a much more durable standard.
What To Do If A Change Looks Great Too Fast
When a change suddenly makes the report look much better, slow down and do this:
re-check the properties assumptions
inspect the first few changed trades
ask whether scope, ownership, or token meaning also changed
test whether the same improvement remains visible in a second context
You are not trying to kill good ideas. You are trying to make sure the improvement is not only a better arrangement of the past.
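The second-context test above can be made mechanical. A minimal sketch, assuming a run_workflow callable and a "net_profit" report key (both invented for illustration):

```python
def survives_second_context(params, run_workflow, contexts, min_ratio=0.5):
    """Check whether an improvement persists outside its original context.

    contexts: list of (symbol, timeframe) pairs; the first is the
    context where the improvement was found.
    min_ratio: fraction of the original result that must remain
    elsewhere for the change to count as more than local.
    """
    reports = [run_workflow({**params, "symbol": s, "timeframe": tf})
               for s, tf in contexts]
    original = reports[0]["net_profit"]
    if original <= 0:
        return False                      # nothing real to generalize
    return all(r["net_profit"] >= min_ratio * original
               for r in reports[1:])

# Stand-in runner: the improvement partly carries over (invented numbers)
fake_run = lambda p: {"net_profit": 100 if p["symbol"] == "EURUSD" else 60}
ok = survives_second_context({}, fake_run,
                             [("EURUSD", "1h"), ("GBPUSD", "1h")])
```

The threshold is a judgment call, not a law; the useful part is being forced to state it before you look at the second report.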
Where To Go Next
read Backtesting and Realism if your latest "improvement" came mostly from assumption changes
read Operating Checklist if you want a repeatable tune-and-verify routine
read For the Geeks if the behavior you are trying to tune depends on deeper mechanics such as fixed versus monitored pricing or scoped exit handling