Optimization


Written By AxiomCharts



Optimization can be a genuinely useful stage of building a workflow. It can also become the fastest path to self-deception if you start before the workflow is understandable. This page is not here to tell you never to tune. It is here to help you tune in a way that builds understanding instead of replacing it.

What Optimization Should Be For

Healthy optimization asks:

"Does this workflow remain itself when I refine one part of it?"

Unhealthy optimization asks:

"What can I change until the past tells a cleaner story?"

Those may sound similar when you are tired, but they produce very different habits. The first makes the method clearer. The second often makes the report prettier while the method becomes harder to explain.

The Best Time To Start Tuning

Do not optimize just because the strategy finally ran. Start tuning only after these things are true:

  • you can explain one setup, one entry, and one management path on the chart; optimization without behavioral clarity is mostly guesswork

  • you know which assumptions shaped the current report; otherwise you may be tuning the tester more than the workflow

  • you know which tokens matter most to the logic; otherwise a "win" may only be coming from a misunderstood input

  • you have one baseline worth preserving; otherwise improvement has nothing stable to compare against

If even one of those is missing, the stronger move is usually more verification, not more tuning.

What A Baseline Should Capture

A baseline is not just "the one I liked before." A real baseline should record:

  • the symbol and timeframe

  • the direction mode

  • the strategy properties that matter

  • the exact authored workflow in play

  • any custom tokens the workflow depends on

  • whether important non-market paths use fixed or monitored pricing

If your baseline is fuzzy, your optimization conclusions will be fuzzy too. This is one of the easiest professional habits to build and one of the most valuable. A small written baseline often prevents days of circular tuning.
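As a sketch, a written baseline can be as simple as a small record you keep alongside your tuning notes. The class and field names below are hypothetical, assuming you maintain this journal yourself outside the platform:

```python
from dataclasses import dataclass


@dataclass(frozen=True)  # frozen: a baseline should not be edited after the fact
class Baseline:
    """A written baseline: everything the next comparison must hold constant."""
    symbol: str
    timeframe: str
    direction_mode: str              # e.g. "long", "short", "both"
    strategy_properties: dict        # the strategy properties that matter
    workflow_version: str            # the exact authored workflow in play
    custom_tokens: tuple = ()        # custom tokens the workflow depends on
    monitored_pricing: bool = False  # fixed vs monitored non-market paths


baseline = Baseline(
    symbol="EURUSD",
    timeframe="1h",
    direction_mode="both",
    strategy_properties={"commission_per_trade": 0.5, "slippage_ticks": 1},
    workflow_version="pullback-entry-v3",
    custom_tokens=("trend_token",),
)
```

Freezing the record is the point: once a tuning pass starts, the baseline is something you compare against, not something you quietly edit.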

Safer Places To Tune First

Some surfaces are more teachable than others. They still need discipline, but they usually let you learn something without changing the entire identity of the workflow.

Better early tuning surface | What it can teach you | Main caution
setup confirmation and cancellation discipline | whether the context layer is too loose or too strict | easy to overtune to one symbol's rhythm
entry order behavior | whether the method should act immediately or rest more deliberately | changing order type also changes realism and fill behavior
fixed versus monitored price behavior | whether a resting path should stay committed or stay adaptive | these are different tests and should be compared honestly
leg allocation structure | whether participation or harvesting is too concentrated | improved smoothness is not automatically improved robustness
broader risk rails | whether the workflow benefits from harder participation boundaries | rails can also beautify the sample without improving the method

These surfaces are useful because they teach you something structural about the workflow.

Places To Tune Later

Some surfaces are powerful enough that they can make the backtest look dramatically better before your understanding catches up. Those are the ones to approach later and more cautiously.

Better later tuning surface | Why it deserves caution
custom-token selection | a new token can import a whole new assumption stack into the strategy
several parameter families at once | you lose the ability to explain what actually changed the result
tester friction assumptions | a friendlier cost model can make weak logic look cleaner than it is
ownership structure and global usage at the same time | you may change the whole operating model while thinking you only made a small edit
add behavior and exit targeting together | multi-layer changes make cause and effect much harder to read

The general rule is simple: the more a change affects identity rather than refinement, the later it should come.

A Clean Optimization Loop

Use this loop if you want to tune without losing the thread.

  1. preserve one baseline in writing

  2. choose one variable family only

  3. keep the symbol and timeframe fixed on the first comparison

  4. rerun the workflow

  5. inspect the chart path before reading the summary

  6. write down what changed and what you think caused it

  7. only then test whether the result generalizes elsewhere

This is not slow for its own sake. It is fast in the sense that it keeps your future self from forgetting what actually happened.
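The loop above can be sketched in code. This is a minimal illustration with hypothetical names (`run_workflow`, the config dictionary keys), not a real platform API; it only shows the one-family-at-a-time discipline:

```python
def clean_optimization_pass(baseline_config, variable_family, candidate_values, run_workflow):
    """Vary ONE variable family against a fixed baseline and journal every run."""
    journal = []
    for value in candidate_values:
        config = dict(baseline_config)   # symbol and timeframe stay fixed (step 3)
        config[variable_family] = value  # one variable family only (step 2)
        report = run_workflow(config)    # rerun the workflow (step 4)
        journal.append({"changed": {variable_family: value}, "report": report})
    return journal                       # generalization (step 7) comes after review


# Stand-in tester so the sketch runs; a real pass would drive your platform instead.
def fake_tester(config):
    return {"net_profit": config["confirmation_bars"] * 10}


journal = clean_optimization_pass(
    {"symbol": "EURUSD", "timeframe": "1h", "confirmation_bars": 1},
    "confirmation_bars",
    [1, 2, 3],
    fake_tester,
)
```

The journal is the part that matters: steps 5 and 6 (inspecting the chart path and writing down the suspected cause) happen per entry, before any metric is trusted.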

What To Look For On The Chart After A Change

Do not only ask whether the metrics improved. Ask:

  • did the setup become more selective or just more rare?

  • did the entry become more disciplined or simply later?

  • did the exits become more coherent or just harder to reach?

  • did the strategy become easier to explain or only easier to admire?

Those questions help you tell the difference between structural improvement and statistical cosmetics.

Optimization Questions That Keep You Honest

These are excellent questions to ask after every apparently successful change:

Did I strengthen the method or strengthen the story?

This is the single best optimization question in the whole manual.

Would I still make this change if I could not see the equity curve?

If the answer is no, the change may be more aesthetic than structural.

Did this edit make the workflow more understandable or more impressive?

Sometimes a valid improvement does both. But if it only improves the surface and not the explanation, be cautious.

What would I expect to remain true on another symbol or timeframe?

If the answer is "almost nothing," the improvement may be highly local.

Signs You Are Overfitting

Warning sign | Why it matters
every improvement comes from adding more conditions | the workflow may be memorizing noise rather than refining structure
the chart path is getting harder to explain while the report is getting easier to like | this is one of the clearest overfitting signals
you stop reviewing actual trades once the curve improves | summary comfort has replaced behavioral understanding
the result only survives under one narrow friction model | the apparent edge may be mostly assumption-dependent
each new version needs more narration to justify why it is better | the method may be losing clarity as it gains complexity

Overfitting rarely announces itself dramatically. It usually arrives disguised as progress.

One Useful Distinction: Refinement Versus Identity Change

Some edits refine the same workflow. Others quietly turn it into a different workflow altogether.

Refinement usually looks like

  • adjusting confirmation discipline

  • clarifying expiry behavior

  • improving ladder balance

  • cleaning up entry timing inside the same setup idea

Identity change usually looks like

  • adding a new source of decision-making through custom tokens

  • changing ownership from named context to global behavior

  • changing participation style and management style together

  • changing cost, timing, and structure at the same time

Neither category is forbidden. The mistake is pretending the second category is the first one.

A Better Definition Of A Winning Change

A winning change is not only one that improves the summary. A winning change is one that:

  • produces behavior you can still explain

  • survives a realism check

  • improves the workflow without making it more mysterious

  • gives you a reason to believe the method learned something real

That is a higher standard than "the net profit went up." It is also a much more durable standard.
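That standard can be sketched as a gate where every criterion must pass, not just the profit number. The dictionary keys are hypothetical labels for your own review notes, not platform outputs:

```python
def is_winning_change(review):
    """Net profit going up is not enough; every qualitative bar must also clear."""
    return all([
        review["explainable_behavior"],    # you can still explain the behavior
        review["survives_realism_check"],  # holds up under honest friction
        review["not_more_mysterious"],     # the workflow did not get more opaque
        review["learned_something_real"],  # a reason beyond a prettier curve
    ])
```

A single False anywhere vetoes the change, which is exactly the asymmetry the higher standard asks for.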

What To Do If A Change Looks Great Too Fast

When a change suddenly makes the report look much better, slow down and do this:

  1. re-check the properties assumptions

  2. inspect the first few changed trades

  3. ask whether scope, ownership, or token meaning also changed

  4. test whether the same improvement remains visible in a second context

You are not trying to kill good ideas. You are trying to make sure the improvement is not only a better arrangement of the past.
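Step 4 of that checklist can be sketched as a second-context rerun. `run_workflow` and the config keys are hypothetical placeholders for however you drive your own tester:

```python
def improvement_generalizes(run_workflow, changed_config, second_context):
    """Check that the improvement is still visible outside the original context."""
    here = run_workflow(changed_config)                       # original context
    there = run_workflow({**changed_config, **second_context})  # e.g. another symbol
    return here["net_profit"] > 0 and there["net_profit"] > 0
```

If the second context erases the gain entirely, the change was probably a better arrangement of the past rather than a better method.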

Where To Go Next

  • read Backtesting and Realism if your latest "improvement" came mostly from assumption changes

  • read Operating Checklist if you want a repeatable tune-and-verify routine

  • read For the Geeks if the behavior you are trying to tune depends on deeper mechanics such as fixed versus monitored pricing or scoped exit handling