Operating Checklist
Written By Axiom Admin
Last updated about 1 month ago
This is a repeatable routine for building, testing, and iterating on strategies in Axiom Strategy Lab Pro. It is not a workflow for one specific kind of strategy — it is a discipline that applies to every strategy you build, whether it has one setup or ten.
Use it as a pre-flight check before you take results seriously, a post-test review before you change anything, and a loop you return to every time you iterate. The checklist assumes you have already completed the Quick Start and understand the basics of Rules & Risk.
Phase 1: Build
Before you paste any YAML, get clear on what you are testing.
Write your thesis in plain language. Not in YAML, not in expressions — in a sentence or two. "I think price tends to bounce off the 200 EMA when RSI is oversold, and I want to enter long with a 2:1 reward-to-risk exit." This is your anchor. Everything you build in YAML should trace back to this sentence. If you cannot state the thesis clearly, you are not ready to configure it. If you find yourself adding setups or filters that you cannot explain in terms of this thesis, you are either expanding the thesis (write the new version down) or you are fitting to the backtest.
Identify the indicators you need. Which external indicators does your thesis require? Add them to the chart and verify they are producing data.
Map your custom tokens. For each external indicator output, create a custom token with the correct name, type, and source. Verify with the token diagnostics label.
Write the simplest version of the YAML. One direction. One setup (or GLOBAL). One entry. One exit. Do not build the full multi-setup, multi-exit version yet. Get the core logic working first and confirm it behaves as you intend. See YAML Reference for every available field, or start from a working example in Workflows.
Choose your direction mode. Long Only, Short Only, or Swing Mode — based on your thesis, not on which mode produces better results.
Phase 2: Validate
Before you interpret any results, confirm the engine is reading your rules correctly.
Check the error table. It should read "Strategy Active" in green. If it shows errors, fix them before proceeding. See Troubleshooting.
Check the schema summary table. Enable it and confirm that the parsed counts of setups, entries, TPs, and SLs match what you intended. If the counts are wrong, the YAML structure is off — fix it now, before you look at any equity curve.
Check the expression diagnostics. Enable the expression value label. Confirm that your key conditions show EVAL status and are producing values in the range you expect. If a condition shows SKIP, trace back to the owning setup — is it confirmed?
Check custom token values. Enable token diagnostics and confirm your external indicator data is flowing correctly. Look for unexpected zeroes, NaN values, or stale data.
Confirm the consent checkbox is checked after a genuine Properties review. Not just checked — reviewed. Do you know what commission, slippage, and pyramiding are set to?
Phase 3: Test
Now look at the results, but look at them carefully.
Run the baseline. Let the strategy process the full history with your current settings. Note the key metrics: net profit, profit factor, max drawdown, win rate, total trades, average trade.
Run the slippage sensitivity test. Test at 0, 5, 10, and 15 ticks of slippage. How much does the equity curve change? If it collapses as slippage increases, the edge is thinner than the execution cost. See Backtesting & Realism.
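The sweep above can be sketched outside the tool as well. This is a minimal illustration, not Axiom output: the tick value and per-trade profits below are hypothetical, and it assumes you can export per-trade results in ticks.

```python
# Illustrative sketch: re-price exported per-trade profits at several
# slippage levels. TICK_VALUE and trade_profits_ticks are hypothetical.

TICK_VALUE = 1.0  # dollars per tick (made-up contract)
trade_profits_ticks = [40, -20, 55, -20, 30, -25, 60, -20]

for slip in (0, 5, 10, 15):
    # Each round trip pays slippage twice: once on entry, once on exit.
    net = sum((p - 2 * slip) * TICK_VALUE for p in trade_profits_ticks)
    print(f"slippage {slip:>2} ticks -> net profit {net:>9.2f}")
```

If net profit at 15 ticks is a small fraction of the zero-slippage figure, the edge is mostly execution-cost-sized.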
Check trade count. If you have fewer than 30 trades, treat the metrics with extra skepticism. Small sample sizes can produce impressive numbers from luck alone.
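One way to build intuition for why 30 trades deserves skepticism is the binomial standard error of an observed win rate. A rough sketch (the 60% rate is an arbitrary example):

```python
import math

def win_rate_std_error(win_rate: float, n_trades: int) -> float:
    """Standard error of an observed win rate (binomial approximation)."""
    return math.sqrt(win_rate * (1.0 - win_rate) / n_trades)

# A 60% win rate over 30 trades carries roughly a +/-9-point standard
# error, so the true rate could plausibly sit near a coin flip.
se_30 = win_rate_std_error(0.60, 30)
se_300 = win_rate_std_error(0.60, 300)
print(f"n=30:  60% +/- {se_30:.1%}")
print(f"n=300: 60% +/- {se_300:.1%}")
```

Tenfold more trades shrinks the error by about a factor of three, which is why trade count matters more than a flattering win rate.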
Check trade distribution. Are the profitable trades clustered in one period, or spread across the test window? Is the strategy making money in one regime and giving it back in another?
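Clustering is easy to eyeball if you bucket exported trade P&L by period. A minimal sketch with made-up dates and profits:

```python
from collections import defaultdict

# Hypothetical exported trades: (close date, net P&L).
trades = [
    ("2021-03-04", 120.0), ("2021-07-19", 340.0), ("2021-11-02", 280.0),
    ("2022-02-10", -90.0), ("2022-06-21", -150.0), ("2022-10-05", 45.0),
    ("2023-01-12", 60.0),  ("2023-08-30", -30.0),
]

pnl_by_year = defaultdict(float)
for date, pnl in trades:
    pnl_by_year[date[:4]] += pnl  # bucket by year; quarters work too

for year in sorted(pnl_by_year):
    print(f"{year}: {pnl_by_year[year]:>8.2f}")
```

In this fabricated example, nearly all the profit sits in one year, which is exactly the pattern worth catching before you trust the aggregate numbers.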
Check max drawdown honestly. Would you hold through the maximum drawdown this backtest shows with real money? If not, the equity curve is describing a path you would not actually walk.
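If you want to sanity-check the reported figure yourself, max drawdown is just the largest peak-to-trough drop in the equity series. A small sketch with a hypothetical curve:

```python
def max_drawdown(equity_curve):
    """Largest peak-to-trough decline in an equity series, as a fraction."""
    peak = equity_curve[0]
    worst = 0.0
    for value in equity_curve:
        peak = max(peak, value)                  # running high-water mark
        worst = max(worst, (peak - value) / peak)  # drop from that peak
    return worst

# Hypothetical equity curve in account currency.
curve = [10_000, 10_800, 10_200, 11_500, 9_800, 10_400, 12_100]
print(f"max drawdown: {max_drawdown(curve):.1%}")
```

Here the worst drop is from 11,500 down to 9,800, about 14.8%. The question to ask is not whether that number is acceptable in a table, but whether you would have kept the strategy running at the bottom of it.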
Phase 4: Interpret
Results are data, not conclusions. This phase is about understanding what the data means.
What did the strategy actually capture? Look at the individual trades in the trade list. Do they match your thesis? Are the entries firing at the moments your thesis predicts? Are the exits closing where your logic intended?
What did the strategy miss? Were there obvious opportunities your conditions did not fire on? Is that a feature (the filter is correctly being selective) or a bug (the conditions are too strict or a gate is blocking evaluation)?
What would break this? What market condition would make this strategy fail? A regime change? A volatility spike? A prolonged range? A sudden liquidity gap? Be specific. "A range-bound market" is too vague. "A sideways range on low volume where the EMA flattens and RSI stays between 40–60 for weeks" is a condition you can actually watch for. Knowing the strategy's failure conditions is more valuable than knowing its win rate, because the failure conditions tell you when to stop trusting it.
Are the results regime-dependent? If you removed the best-performing 20% of the test window, would the strategy still be positive? If you isolated just the worst-performing segment, how bad does it get? A strategy that makes all its money in one regime and gives it back in another is a regime bet. That can be valid — but only if you are aware of it and have a plan for the other regime.
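The "remove the best 20%" check can be sketched as: split the window into five equal segments, drop the best one, and see whether what remains is still positive. The per-segment profits below are hypothetical.

```python
# Hypothetical net P&L for five equal sub-windows of the test period.
segment_pnl = [5200.0, -400.0, 300.0, -650.0, 150.0]

best = max(segment_pnl)
without_best = sum(segment_pnl) - best
print(f"total: {sum(segment_pnl):.0f}, without best segment: {without_best:.0f}")
```

In this fabricated case the total is positive but the strategy loses money outside its best segment, which is the signature of a regime bet.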
Do the results survive a different time window? Shift the test period forward or backward. If the strategy's performance changes dramatically, it may be fitted to the original window. See Optimization for how to structure this validation.
Phase 5: Iterate
If the strategy needs changes, make them deliberately.
Change one thing at a time. If you change multiple parameters, you will not know which change caused the difference. Isolate your adjustments.
Re-validate after every change. Re-check the schema summary, expression diagnostics, and token values. A YAML edit can introduce parse errors or change ownership in ways you did not intend.
Log what you changed and why. Even a simple note — "tightened RSI threshold from 30 to 25 because I wanted to test whether the signal quality improves" — prevents you from losing track of your reasoning across iterations.
Watch the trade count. If your adjustments keep reducing trades, you may be optimizing away noise — or you may be optimizing away data. See Optimization for how to tell the difference.
Know when to stop. The goal is a strategy you understand, not a strategy with the best possible numbers. If you can explain every rule, predict its behavior in different regimes, and accept its drawdown with real money, the iteration is done. The feeling you are looking for is not excitement about the equity curve — it is quiet confidence that you know what the strategy does, where it fails, and why you are willing to accept those failures. If you are still chasing a better number, you are still optimizing. If you can describe the strategy to someone else without referencing the equity curve and they can understand the logic, you are close.
Phase 6: Review
Before taking any result forward — to paper trading, automation, or real money — ask:
Can I explain this strategy to someone else in plain language without referencing the equity curve?
Have I tested at realistic slippage and commission for my actual broker and asset?
Have I tested on more than one time window?
Do I know the market conditions that would make this strategy fail?
Am I comfortable with the maximum drawdown I saw, knowing that live drawdown is almost always worse than the backtest shows?
Have I checked whether a risk circuit breaker halted trading at any point during the test?
If I plan to automate, have I read Alerts & Automation and understood the three-model gap?
If you can answer yes to all of these, you have done the work. The results still carry uncertainty — every backtest does — but you have reduced the self-deception surface as far as the tool allows.
If you cannot answer yes, you know where to go back.