For the Geeks

Written By Axiom Admin

Last updated About 1 month ago

This page is for the reader who does not want to take the oscillator's behavior on faith. If you are wondering why a standard MACD reading becomes a bounded -100 to +100 value, what happens at the extremes, and why the tool was designed this way instead of something simpler — this is where those questions get answered.

The goal is to build your understanding of the normalization well enough that you can trust what it shows you, verify its behavior, and know where it stops being precise — without needing to reproduce the implementation.


The problem the normalization solves

Standard MACD produces a number that is the difference between two moving averages. That number is denominated in the price units of the instrument. If SPY is at $500 and its MACD reads +2.5, the fast MA is $2.50 above the slow MA. If BTC is at $60,000 and its MACD reads +250, the fast MA is $250 above the slow MA.

These readings are not comparable. A reading of +2.5 on SPY and one of +250 on BTC do not describe the same intensity of momentum — they are not even on the same scale. And the same instrument's MACD at different timeframes produces values at different magnitudes, because the price swings captured by the MAs are proportional to the bar size.

If you want to stack multiple MACD readings in one pane and compare them — or blend them into a composite — the raw values are useless. Summing +2.5 and +250 produces a number dominated by the larger one. Averaging them hides the smaller one. The units do not match and the magnitudes are incomparable.

The normalization exists to fix this. It converts each slot's MACD from "dollars of momentum" into "momentum relative to recent volatility," then maps that into a fixed, bounded range. After normalization, every slot speaks the same language.


What the normalization does, step by step

There are three stages in the conversion from raw MACD to bounded oscillator value. Each stage has a specific purpose.

Stage 1: Make it volatility-relative

The raw MACD value (and the signal and histogram alongside it) is divided by ATR — the Average True Range over a configurable lookback period. ATR measures the instrument's typical bar-to-bar volatility in the instrument's own price units.

Dividing by ATR converts the reading from "price units of momentum" to "momentum as a multiple of recent volatility." If ATR is 1.0 and the MACD is 2.0, the ratio is 2.0 — the MACD spread is twice the size of a typical bar's range. If ATR is 100 and the MACD is 200, the ratio is also 2.0 — different instrument, different price level, but proportionally the same momentum intensity.

After this division, the readings are unitless. They no longer depend on the instrument's price level or its absolute volatility. This is what makes cross-timeframe and cross-ticker comparison possible.
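A minimal sketch of Stage 1. The true-range formula is the standard one; the averaging here is a plain mean over the lookback, which is an assumption — the indicator's own ATR smoothing may differ:

```python
# Stage 1 sketch: divide MACD by ATR to get a unitless, volatility-relative
# reading. ATR here is a simple mean of true ranges (an assumption; the
# indicator's actual smoothing may differ).

def true_range(high, low, prev_close):
    """Largest of: bar range, gap above prior close, gap below prior close."""
    return max(high - low, abs(high - prev_close), abs(low - prev_close))

def simple_atr(highs, lows, closes, length=14):
    """Mean true range over the last `length` bars (simple mean, not Wilder's)."""
    trs = [true_range(h, l, pc)
           for h, l, pc in zip(highs[1:], lows[1:], closes[:-1])]
    return sum(trs[-length:]) / min(length, len(trs))

# Two hypothetical instruments at very different price levels:
spy_macd, spy_atr = 2.0, 1.0      # $2.00 MACD spread, $1.00 typical bar range
btc_macd, btc_atr = 200.0, 100.0  # $200 MACD spread, $100 typical bar range

print(spy_macd / spy_atr)  # 2.0 -- spread is twice a typical bar's range
print(btc_macd / btc_atr)  # 2.0 -- same momentum intensity, different scale
```

Both ratios come out to 2.0: after the division, price level and absolute volatility have dropped out of the number entirely.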

Stage 2: Scale by sensitivity

The volatility-relative value is multiplied by the ATR Sensitivity setting. This is a user-controlled knob that determines how much of the oscillator's range the readings will use.

At Sensitivity = 1.0 (the default), the volatility-relative values map to a moderate portion of the output range. At higher sensitivity, the same raw momentum produces a larger input to the bounding function — the oscillator approaches its limits faster. At lower sensitivity, the input is smaller and the oscillator stays closer to zero.

Think of sensitivity as a zoom control on the bounding curve. Higher sensitivity zooms in on the extremes — you see more saturation and less mid-range detail. Lower sensitivity zooms out — you see more mid-range detail but the extremes are rarely reached.
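The zoom effect can be seen numerically. The sketch below uses tanh as a stand-in for the bounding function described in Stage 3 — the document does not name the actual function, but tanh has the same linear-center, compressed-extremes shape:

```python
import math

# The same volatility-relative reading (1.0 here: MACD spread equal to one ATR)
# lands at very different points on the -100..+100 scale depending on the
# sensitivity multiplier. tanh is an assumed stand-in for the bounding function.
reading = 1.0

for sensitivity in (0.3, 1.0, 3.0):
    bounded = math.tanh(reading * sensitivity) * 100
    print(f"sensitivity {sensitivity}: {bounded:+.1f}")
```

With tanh as the curve, the same reading sits near +29 at sensitivity 0.3, near +76 at 1.0, and is pinned near +100 at 3.0 — identical raw momentum, three very different positions in the range.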

Stage 3: Bound it

The scaled value is passed through a mathematical function that maps any input — no matter how large or small — to an output between -1 and +1. That output is then multiplied by 100, giving the -100 to +100 range on the chart. A final hard clamp ensures the value never exceeds the bounds, even with extreme numerical inputs.

The bounding function has properties that matter for how you interpret the readings:

Near zero, it behaves approximately linearly. Small inputs produce proportionally small outputs. A volatility-relative value of 0.1 produces a reading close to 10 (on the -100/+100 scale). A value of 0.2 produces a reading close to 20. In the middle of the range, the oscillator is a straightforward scaled version of the volatility-relative MACD.

At the extremes, it compresses asymptotically. As the input gets larger, the output approaches +100 but never reaches it through the bounding function alone (the hard clamp handles the final boundary). Each additional unit of input produces a smaller and smaller change in the output. Going from a volatility-relative value of 2.0 to 3.0 might move the oscillator from +90 to +96. Going from 3.0 to 4.0 might move it from +96 to +98. The curve flattens.

The transition between linear and compressed is smooth. There is no hard inflection point. The compression increases gradually as the reading moves away from zero. By about ±60 to ±70 on the output scale, the compression is noticeable. By ±85, it is substantial. By ±95, additional raw momentum barely moves the reading.

Mental model — the rubber band: Think of a rubber band stretched between -100 and +100. In the middle, it stretches easily — small forces produce noticeable movement. Near the ends, the band gets stiffer — the same force barely moves it. That is what the bounding function does. Small momentum moves near the center of the range produce proportional changes. Large momentum moves near the extremes produce smaller and smaller changes.
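The three stages can be put together in one end-to-end sketch. Again, tanh stands in for the unspecified bounding function, and the final clamp mirrors the hard bound described above:

```python
import math

def normalize_macd(macd, atr, sensitivity=1.0):
    """Raw MACD -> bounded -100..+100 oscillator value (a sketch).

    Stage 1: divide by ATR (volatility-relative, unitless).
    Stage 2: multiply by the sensitivity setting.
    Stage 3: bound (tanh assumed here), scale to +/-100, hard clamp.
    """
    scaled = (macd / atr) * sensitivity
    bounded = math.tanh(scaled) * 100.0
    return max(-100.0, min(100.0, bounded))  # final hard clamp

# The rubber band in numbers: equal steps of input, shrinking steps of output.
for ratio in (0.1, 0.2, 1.0, 2.0, 3.0, 4.0):
    print(f"input {ratio:4.1f} -> {normalize_macd(ratio, 1.0):+7.2f}")
```

With this curve, inputs of 0.1 and 0.2 land near +10 and +20 (the linear middle), while going from 2.0 to 3.0 moves the output only from about +96.4 to +99.5, and 3.0 to 4.0 barely moves it at all — the flattening described above.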


Why this approach instead of something simpler

Why not min-max scaling?

Min-max scaling maps the lowest and highest observed values to the bounds. The problem: the min and max change over time. Yesterday's maximum might be different from today's. A reading of +80 yesterday and +80 today could represent very different amounts of raw momentum because the scaling window shifted. The bounds move. That instability defeats the purpose.

Why not z-score normalization?

Z-scores divide by the standard deviation and center on the mean. More stable than min-max, but z-scores are unbounded. A z-score can be +5, +10, or +200 depending on how extreme the value is. That defeats the purpose of a fixed, bounded range. You would still need a bounding step on top of the z-score, and then you are back to a two-stage process.

Why not simple division by a fixed constant?

You could divide by a single number to bring everything into roughly -100 to +100. But what number? The right divisor for SPY is wrong for BTC, and the right divisor for a 5-minute MACD is wrong for a daily MACD. A fixed constant cannot adapt to different instruments or timeframes. ATR provides an adaptive divisor that adjusts to each instrument's own volatility, which is the right tool for a multi-instrument, multi-timeframe application.

The tradeoff this approach accepts

The ATR-normalized bounding gives you:

  • A fixed, bounded range (-100 to +100) that does not shift with the data

  • Adaptive scaling that responds to each instrument's volatility

  • Unitless readings that can be compared across timeframes and tickers

  • Stable bounds that do not depend on historical extremes

In exchange, it costs you:

  • Resolution at the extremes. When the reading approaches the bounds, the bounding compresses it. Two readings that look similar near ±100 might represent very different amounts of raw momentum. The oscillator tells you "things are extreme" but cannot tell you "how much more extreme they have gotten."

  • Perfect proportionality at high values. Near zero, doubling the raw momentum approximately doubles the oscillator reading. Near ±100, doubling the raw momentum barely moves the reading. The relationship becomes nonlinear in the tails.


How ATR Length and Sensitivity interact

These are the two user-controlled parameters that shape the normalization pipeline. Understanding their interaction helps you calibrate the oscillator for different instruments and trading styles.

ATR Length controls the lookback for the volatility baseline — the denominator. A longer ATR Length (30–50) produces a smoother, slower-moving denominator. The normalization baseline changes gradually, which means the oscillator responds primarily to momentum shifts, not to volatility shifts. A shorter ATR Length (5–10) produces a denominator that reacts quickly to recent volatility changes. This can make the oscillator noisier because the scaling factor itself is moving.

ATR Sensitivity controls the multiplier applied before bounding — it determines how aggressively the oscillator uses its range.

Together:

  • ATR Length long (30–50), Sensitivity default (1.0): Smooth, stable baseline. Readings change mainly in response to momentum, not volatility. Good for instruments with relatively stable volatility.

  • ATR Length short (5–10), Sensitivity default (1.0): Reactive baseline. The oscillator may jitter as the ATR denominator responds to recent spikes. Better for instruments where volatility itself is volatile — but accept the noise.

  • ATR Length default (14), Sensitivity high (2.0–3.0): Default normalization speed, but readings saturate quickly. The oscillator spends most of its time near the bounds. Loss of mid-range resolution. Useful if you only care about extreme readings and want them to trigger sooner.

  • ATR Length default (14), Sensitivity low (0.3–0.5): Default normalization speed, but readings cluster near zero. Rarely reaches OB/OS levels. Maximum mid-range resolution. Useful if you want to distinguish between small and moderate momentum moves precisely.

The interaction matters: a short ATR Length with high sensitivity amplifies volatility noise and produces a jittery oscillator that saturates on minor moves. A long ATR Length with low sensitivity produces an extremely smooth oscillator that barely moves. Finding the right combination depends on the instrument and on what you are trying to see.
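The ATR Length side of this tradeoff is easy to demonstrate numerically. The sketch below uses a synthetic random walk and reduces true range to the absolute close-to-close move — both simplifying assumptions, not the indicator's actual inputs:

```python
import random
import statistics

# How ATR Length changes the stability of the normalization denominator.
# Synthetic random-walk closes; "true range" is simplified to the absolute
# close-to-close move (an assumption for illustration only).
random.seed(7)
closes = [100.0]
for _ in range(500):
    closes.append(closes[-1] + random.gauss(0, 1))

moves = [abs(b - a) for a, b in zip(closes, closes[1:])]

def rolling_atr(moves, length):
    """Simple rolling mean of the move sizes over `length` bars."""
    return [sum(moves[i - length:i]) / length
            for i in range(length, len(moves))]

short = rolling_atr(moves, 5)    # reactive denominator
long_ = rolling_atr(moves, 50)   # smooth denominator

print(f"stdev of ATR(5):  {statistics.stdev(short):.3f}")
print(f"stdev of ATR(50): {statistics.stdev(long_):.3f}")  # noticeably smaller
```

The short-lookback series swings far more than the long one, even though both are measuring the same underlying volatility — movement that feeds straight into the oscillator as denominator noise.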


How to verify the normalization behavior yourself

Experiment 1: Sensitivity sweep

  1. Load the indicator with default settings on a 1-minute chart.

  2. Note the range of readings — they should use a moderate portion of the -100/+100 range.

  3. Change ATR Sensitivity to 3.0. The readings now cluster near the ±100 boundaries. The oscillator is saturating — moderate moves push it to the extremes.

  4. Change ATR Sensitivity to 0.3. The readings now cluster near zero. Even strong moves barely push the reading past ±30.

  5. Return to 1.0.

What this confirms: Sensitivity directly controls how aggressively the bounding function engages. The pipeline responds to this parameter exactly as described.

Experiment 2: ATR Length comparison

  1. With default settings, note how smoothly the oscillator moves.

  2. Change ATR Length to 5. The oscillator becomes noisier — not because the MACD changed, but because the normalization denominator is now reacting to short-term volatility spikes.

  3. Change ATR Length to 50. The oscillator becomes smoother and slower. The denominator is averaging over more bars and changes gradually.

  4. Return to 14.

What this confirms: ATR Length controls the stability of the normalization denominator. Shorter makes it more reactive and noisier. Longer makes it more stable and slower to adapt.

Experiment 3: Cross-ticker scale verification

  1. Set Slot 01 to SPY at 5m, Slot 02 to QQQ at 5m, Slot 03 to IWM at 5m (use the Optional Ticker field for Slots 02 and 03).

  2. Observe that all three slot lines occupy comparable portions of the -100/+100 range, despite the instruments having different price levels and different volatility profiles.

  3. Without normalization, their raw MACD values would be at different magnitudes. The normalization has made them comparable.

What this confirms: The ATR normalization successfully converts instrument-specific MACD values into unitless readings on a shared scale.

Experiment 4: Boundary compression check

  1. Find a period on the chart where the oscillator reading is near +90 or -90.

  2. Look at the price action β€” it should correspond to a strong move.

  3. Now find a nearby period where the reading was at +95 or more.

  4. Compare the price action. The move at +95 was likely larger than the move at +90, but the oscillator difference is only 5 points. In the mid-range, a 5-point difference would correspond to a smaller price change.

What this confirms: The bounding compresses values near the extremes. Equal point differences on the oscillator represent smaller raw momentum differences in the mid-range and larger raw momentum differences near the bounds.


What the normalization does not do

The normalization makes readings comparable. It does not make them equally reliable.

An instrument with smooth, consistent volatility produces stable, trustworthy normalization. The ATR denominator changes slowly, and the scaled readings are a clean reflection of momentum changes. An instrument with wild, erratic volatility produces normalization that is itself noisy. The denominator jumps, the readings jump, and some of the movement on the oscillator comes from the denominator shifting rather than from momentum actually changing.

The normalization does not eliminate MACD's inherent lag. The oscillator is a normalized, bounded MACD — not a replacement for MACD. The lag from the moving averages, the signal line, and the ATR lookback is all still present. The bounding adds no lag of its own, but it does not remove what was already there.

The normalization does not know about market context. A +80 during a news-driven spike and a +80 during a quiet trend look the same on the oscillator. The normalization measures magnitude relative to volatility — it is indifferent to cause. Whether those two readings deserve the same interpretive weight is a judgment call the tool cannot make for you. This is not a flaw in the normalization. It is the boundary of what any mathematical scaling operation can do. The normalization gives you comparability of magnitude. The meaning behind the magnitude is yours to assess.


What this means for how you read the oscillator

In the mid-range (-60 to +60): Trust the readings as approximately proportional. A +40 represents roughly twice the momentum intensity of a +20. Differences between readings are meaningful. This is where the oscillator gives you its best resolution.

In the extended zone (±60 to ±85): Readings are still meaningful but compression is building. A +75 and a +80 represent a real difference, and the underlying gap in raw momentum is larger than the same 5-point gap would imply in the mid-range. Comparisons between readings are still useful but less precise.

In the saturation zone (±85 to ±100): The compression is substantial. A +92 and a +97 may look close on the chart but could represent very different raw momentum levels. The oscillator is telling you "things are extreme." It cannot tell you how much more extreme they might get. In this zone, shift your attention from the oscillator to other inputs — price structure, volume, broader context. The oscillator has told you what it can. Beyond this, it cannot differentiate.

The practical takeaway: calibrate your trust to the zone. In the mid-range, you can compare readings, track changes, and draw meaningful distinctions between slot levels. In the saturation zone, the oscillator has told you something valuable — "things are extreme relative to recent volatility" — and that is where its contribution ends. Do not try to extract precision from a zone that was designed for categorization. The oscillator is doing its job across the full range. But the nature of that job changes at the boundaries, and reading it well means adjusting your expectations accordingly.