For the Geeks

Written By Axiom Admin

Last updated About 1 month ago

This page is for the reader who does not want to take the oscillator's behavior on faith. If you are wondering why a standard MACD reading becomes a bounded -100 to +100 value, what happens to the reading at the extremes, and why the tool was designed this way instead of a simpler alternative, this is where those questions get answered.

The goal is to build your understanding of the normalization mechanic well enough that you can trust what it shows you, verify its behavior, and know where it breaks down — without needing to reproduce the implementation.


The problem the normalization solves

Standard MACD produces a number that is the difference between two moving averages. That number is denominated in the price units of the instrument. If SPY is at $500 and its MACD is +2.5, that means the fast MA is $2.50 above the slow MA. If BTC is at $60,000 and its MACD is +250, that means the fast MA is $250 above the slow MA.

These two readings are not comparable. A MACD of +2.5 on SPY and +250 on BTC are not the same strength of momentum — they are not even on the same scale. And it gets worse: the same instrument's MACD at different timeframes produces values at different magnitudes too, because the price swings the MACD captures are proportional to the timeframe's bar range.

If you want to blend multiple MACD readings from different timeframes or different instruments into a single composite, the raw values are useless. Summing +2.5 and +250 gives you a number dominated by the larger one. Averaging them gives you a result that hides the smaller one. The units do not match and the magnitudes are incomparable.

The normalization exists to fix this. It converts each slot's MACD from "dollars of momentum" into "momentum relative to recent volatility," then maps that into a fixed, bounded range. After normalization, every slot speaks the same language. A +50 on one slot and a +50 on another mean similar things: that instrument's momentum at that timeframe is moderately above average relative to its own recent volatility.


What the normalization does, step by step

There are three stages in the conversion from raw MACD to bounded oscillator. Each stage has a specific purpose.

Stage 1: Make it volatility-relative

The raw MACD value (or Signal, or Histogram) is divided by ATR — the Average True Range over a configurable lookback period. ATR measures the instrument's typical bar-to-bar volatility in price units.

Dividing by ATR converts the reading from "price units of momentum" to "momentum as a multiple of recent volatility." If ATR is 1.0 and the MACD is 2.0, the ratio is 2.0 — meaning the MACD spread is twice the size of a typical bar's range. If ATR is 100 and the MACD is 200, the ratio is also 2.0 — different instrument, different price level, but proportionally the same momentum intensity relative to volatility.

After this division, the readings are unitless. They no longer depend on the instrument's price level. This is what makes cross-timeframe and cross-ticker comparison possible.
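
As a concrete sketch of this stage, the snippet below computes an ATR and divides a raw MACD value by it. The true-range formula is the standard one; the simple-average smoothing and the function names are illustrative assumptions, since the indicator's exact ATR smoothing is not specified here.

```python
# Stage 1 sketch: divide a raw MACD value by ATR to get a unitless,
# volatility-relative ratio. Simple-average TR smoothing is an assumption;
# the indicator may use Wilder smoothing instead.

def true_range(high, low, prev_close):
    """Largest of: bar range, gap up from prior close, gap down from prior close."""
    return max(high - low, abs(high - prev_close), abs(low - prev_close))

def atr(highs, lows, closes, length=14):
    """Simple moving average of true range over the last `length` bars."""
    trs = [true_range(h, l, pc)
           for h, l, pc in zip(highs[1:], lows[1:], closes[:-1])]
    return sum(trs[-length:]) / min(length, len(trs))

def volatility_relative(macd_value, highs, lows, closes, atr_length=14):
    """Convert 'price units of momentum' into 'multiples of recent volatility'."""
    return macd_value / atr(highs, lows, closes, atr_length)

# Two instruments at very different price levels, same momentum intensity:
print(2.0 / 1.0)      # SPY-like: MACD 2.0, ATR 1.0  -> ratio 2.0
print(200.0 / 100.0)  # BTC-like: MACD 200, ATR 100  -> ratio 2.0
```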

Stage 2: Scale by sensitivity

The volatility-relative value is multiplied by the ATR Sensitivity setting. This is a user-controlled knob that determines how much of the oscillator's range the readings will use.

At Sensitivity = 1.0 (default), the volatility-relative values map to a moderate portion of the final range. At higher sensitivity values, the same raw momentum produces a larger input to the bounding function, causing the oscillator to approach its bounds faster. At lower values, the input is smaller and the oscillator stays closer to zero.

This stage exists so you can adapt the oscillator to the instrument's typical behavior. Some instruments naturally produce MACD values that are large relative to ATR; others produce smaller ones. The sensitivity knob lets you calibrate the oscillator's dynamic range to match the instrument's personality.

Stage 3: Bound it

The scaled value is passed through a mathematical bounding function that maps any input to an output between -1 and +1 (then multiplied by 100 to give the -100 to +100 range you see on the chart).

The bounding function has specific properties that matter:

  • Near zero, it is approximately linear. Small inputs produce proportionally small outputs. A volatility-relative value of 0.1 produces a reading close to 10 (on the -100/+100 scale), and a value of 0.2 produces a reading close to 20. In the middle of the range, the oscillator behaves like a simple scaled version of the volatility-relative MACD.

  • At the extremes, it compresses asymptotically. As the input gets larger, the output approaches +100 (or -100) but never reaches it exactly. Each additional unit of input produces a smaller and smaller change in the output. Going from a volatility-relative value of 2.0 to 3.0 might move the oscillator from +90 to +96. Going from 3.0 to 4.0 might move it from +96 to +98. The oscillator approaches the boundary but flattens.

  • The output is hard-clamped at ±100. For safety, there is a final clamp that ensures the reading never exceeds the bounds, even with extreme numerical inputs.

Normalization mental model: Think of a rubber band stretched between -100 and +100. In the middle, it stretches easily — small forces produce noticeable movement. Near the ends, the band gets stiffer — the same force barely moves it. That is what the bounding function does to the readings. Small momentum moves near the center of the range produce proportional oscillator changes. Large momentum moves near the extremes produce smaller and smaller changes in the reading.
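
The article does not name the bounding function, but tanh has exactly the properties listed above: near-linear at zero, asymptotic compression toward ±1, and a fixed output range. Here is a minimal sketch of the full three-stage pipeline, assuming tanh as the bound:

```python
import math

def normalize(macd_value, atr_value, sensitivity=1.0):
    """Raw MACD -> bounded oscillator reading, assuming a tanh-style bound.
    Stage 1: volatility-relative. Stage 2: sensitivity scale. Stage 3: bound."""
    ratio = macd_value / atr_value            # unitless momentum intensity
    scaled = ratio * sensitivity              # user-controlled dynamic range
    bounded = math.tanh(scaled) * 100.0       # maps to (-100, +100)
    return max(-100.0, min(100.0, bounded))   # final safety clamp

# Near-linear middle, compressed tails:
for r in (0.1, 0.2, 1.0, 2.0, 3.0, 4.0):
    print(f"ratio {r:3.1f} -> reading {normalize(r, 1.0):6.2f}")
# ratio 0.1 ->   9.97  (approximately linear: 0.1 maps near 10)
# ratio 2.0 ->  96.40  (compression has set in)
# ratio 4.0 ->  99.93  (each extra unit barely moves the reading)
```

Note that doubling the sensitivity doubles the input to the bounding function, which is why higher sensitivity settings saturate the oscillator faster.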


Why this approach instead of something simpler

There are simpler ways to bound a value. The obvious alternatives and why they were not chosen:

Why not min-max scaling?

Min-max scaling maps the minimum and maximum observed values to the bounds. The problem: the minimum and maximum change over time. Yesterday's max might be different from today's, which means the scale shifts. A reading of +80 yesterday and +80 today might represent very different amounts of raw momentum because the scaling window changed. Min-max normalization is unstable over time.

Why not z-score normalization?

Z-score normalization divides by the standard deviation and centers on the mean. This is more stable than min-max, but z-scores are unbounded. A z-score can be +5, +10, or +200 depending on how extreme the value is. That defeats the purpose of having a fixed, bounded range that is easy to interpret. You would still need a bounding step, and then you are back to a two-stage process anyway.

Why not simple division by a fixed value?

You could divide by a constant to scale everything to roughly -100 to +100. The problem: what constant? The appropriate divisor depends on the instrument and the timeframe. SPY's MACD ranges are very different from BTC's. A fixed divisor that works for one instrument would over-scale or under-scale another. ATR provides an adaptive divisor that adjusts to the instrument's volatility, which is the right approach when you need to handle multiple instruments and timeframes with the same scaling logic.
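
The failure modes above are easy to demonstrate with toy numbers (all values here are made up for illustration):

```python
# Min-max scaling: the same raw value lands at a different reading when the
# rolling window's extremes change, so the scale is unstable over time.
def min_max_scale(value, window):
    lo, hi = min(window), max(window)
    return 200.0 * (value - lo) / (hi - lo) - 100.0  # map [lo, hi] -> [-100, +100]

print(min_max_scale(2.0, [-1.0, 3.0]))  # yesterday's window:  50.0
print(min_max_scale(2.0, [-1.0, 7.0]))  # today's window:     -25.0

# Z-score: stable centering, but unbounded. An extreme value produces an
# arbitrarily large reading, so a bounding step is still needed afterwards.
def z_score(value, mean, std):
    return (value - mean) / std

print(z_score(50.0, 2.0, 1.0))  # 48.0, with no fixed range
```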

The tradeoff this approach makes

The ATR-normalized bounding approach gives you:

  • A fixed, bounded range (-100 to +100)

  • Adaptive scaling that responds to the instrument's volatility

  • Unitless readings that are comparable across timeframes and tickers

  • Stable bounds that do not shift with the data window

In exchange, it takes away:

  • Resolution at the extremes. When the oscillator approaches the bounds, it compresses. Two readings that look similar near ±100 might represent very different amounts of raw momentum. The oscillator can tell you "things are extreme" but cannot tell you "exactly how extreme."

  • Perfect proportionality at high values. Near zero, a doubling of raw momentum approximately doubles the oscillator reading. Near ±100, a doubling of raw momentum barely moves the reading. The relationship between raw MACD and oscillator value is nonlinear at the tails.

This is the fundamental tradeoff of the bounding. You gain bounded comparability at the cost of tail resolution. For most of the oscillator's range — roughly -80 to +80 — the readings are proportional and meaningful. In the extreme zones, they tell you "things are stretched" but not "how much more stretched they have gotten."

What this means for your decisions: in the mid-range, you can trust the oscillator to show you relative differences between slots, between timeframes, and between time periods. A reading of +40 is meaningfully different from +60, and both are meaningfully different from +80. But once you are past about ±85, the readings are in the compression zone. A +92 and a +97 may look similar on the chart and may represent very different amounts of raw momentum underneath. In that zone, the oscillator is still useful — it tells you things are extreme — but it is no longer useful for gauging how extreme. That is the exact region where you need to shift your attention from the oscillator to other inputs: price structure, volume, and the broader context that the oscillator cannot see.
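
Assuming the tanh-style bound sketched earlier, you can invert the mapping to see how much raw momentum hides behind the same 5-point move in each zone:

```python
import math

def raw_input_for(reading):
    """Invert an assumed tanh bound: the volatility-relative input that
    produces a given oscillator reading (reading strictly inside ±100)."""
    return math.atanh(reading / 100.0)

mid = raw_input_for(45.0) - raw_input_for(40.0)   # raw change behind 40 -> 45
tail = raw_input_for(97.0) - raw_input_for(92.0)  # raw change behind 92 -> 97
print(f"mid-range 5 pts: {mid:.3f} raw units")
print(f"tail 5 pts:      {tail:.3f} raw units ({tail / mid:.1f}x more)")
```

Under this assumption, the same 5 chart points represent roughly eight times more raw momentum change near the bound than in the mid-range, which is the loss of tail resolution described above made concrete.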


How ATR Length and Sensitivity interact

ATR Length and ATR Sensitivity are the two user-controlled parameters that shape the normalization. Understanding their interaction helps you calibrate the oscillator.

ATR Length controls the lookback for the volatility baseline. A longer ATR Length (e.g., 50) produces a smoother, slower-moving denominator. The normalization baseline changes gradually, which means the oscillator responds to shifts in momentum but not to shifts in volatility regime (at least not quickly). A shorter ATR Length (e.g., 5) produces a denominator that reacts quickly to recent volatility. This can make the oscillator noisier, because the denominator itself is moving.

ATR Sensitivity controls the multiplier applied before bounding. It determines how aggressively the oscillator uses its available range.
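
You can see the Length effect directly on synthetic bars: after a single volatile bar, a short-lookback ATR jumps while a long-lookback ATR barely moves. The simple-average smoothing here is an assumption for illustration.

```python
def true_range(high, low, prev_close):
    """Largest of: bar range, gap up from prior close, gap down from prior close."""
    return max(high - low, abs(high - prev_close), abs(low - prev_close))

def rolling_atr(highs, lows, closes, length):
    """Series of simple-average ATR values over a rolling window."""
    trs = [true_range(h, l, pc)
           for h, l, pc in zip(highs[1:], lows[1:], closes[:-1])]
    return [sum(trs[i - length:i]) / length for i in range(length, len(trs) + 1)]

# 100 quiet bars (true range 2) followed by one volatile bar (true range 10):
highs  = [101.0] * 100 + [105.0]
lows   = [ 99.0] * 100 + [ 95.0]
closes = [100.0] * 101

fast = rolling_atr(highs, lows, closes, length=5)
slow = rolling_atr(highs, lows, closes, length=50)
print(fast[-1])  # 3.6  : short denominator jumps 80% on one spike
print(slow[-1])  # 2.16 : long denominator moves only 8%
```

A denominator that jumps 80% on one bar moves the oscillator reading even when the MACD numerator is unchanged, which is the jitter the short-Length row in the table below warns about.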

Together:

  • Long ATR Length (30-50) + Default Sensitivity (1.0): Smooth, stable normalization baseline. Oscillator readings change primarily in response to momentum shifts, not volatility shifts. Good for instruments with stable volatility.

  • Short ATR Length (5-10) + Default Sensitivity (1.0): Reactive normalization baseline. Oscillator may jitter as the ATR denominator responds to recent volatility spikes. Good for instruments where volatility itself is volatile — but accept the noise cost.

  • Default ATR Length (14) + High Sensitivity (2.0-3.0): Default normalization speed, but readings saturate quickly. The oscillator spends most of its time near the bounds. Loss of mid-range resolution.

  • Default ATR Length (14) + Low Sensitivity (0.3-0.5): Default normalization speed, but readings cluster near zero. The oscillator rarely reaches the attention thresholds. Loss of extreme-range utility.


How to verify the normalization behavior yourself

You do not need to understand the math to verify that it works as described. Here are three experiments.

Experiment 1: Sensitivity sweep

  1. Set the indicator to default settings on a 1m chart.

  2. Note the range of oscillator readings — they should use a moderate portion of the -100/+100 range.

  3. Change ATR Sensitivity to 3.0. Observe that readings now cluster near the ±100 bounds. The oscillator is saturating faster.

  4. Change ATR Sensitivity to 0.3. Observe that readings now cluster near zero. The oscillator barely reaches the attention thresholds.

  5. Return to 1.0.

What this confirms: The sensitivity multiplier directly controls how aggressively the bounding function is engaged. The normalization pipeline is responsive to this parameter in the way described.

Experiment 2: ATR Length comparison

  1. With default settings, note how smoothly the oscillator moves.

  2. Change ATR Length to 5. Observe that the oscillator becomes noisier — not because the MACD changed, but because the normalization denominator is now responding to short-term volatility shifts.

  3. Change ATR Length to 50. Observe that the oscillator becomes smoother and slower to respond. The normalization baseline is averaging over more bars.

  4. Return to 14.

What this confirms: ATR Length controls the smoothness and responsiveness of the normalization denominator, as described.

Experiment 3: Cross-ticker scale verification

  1. Set Slot 01 to SPY at 5m, Slot 02 to QQQ at 5m, Slot 03 to IWM at 5m.

  2. Observe that all three slot lines occupy comparable portions of the -100/+100 range despite the instruments having different price levels and different volatility profiles.

  3. If you had raw MACD readings for these three instruments, they would be at different magnitudes. The normalization has made them comparable.

What this confirms: The ATR normalization successfully converts instrument-specific MACD values into unitless readings on a shared scale.


What the normalization does not do

The normalization makes readings comparable. It does not make them equally reliable.

An instrument with smooth, consistent volatility will produce stable, trustworthy normalization. An instrument with wild, erratic volatility will produce normalization that is itself noisy — the denominator jumps around, which makes the oscillator reading jump around, even if the MACD numerator is stable.

The normalization also does not eliminate the MACD's inherent lag. The oscillator is a normalized, bounded MACD β€” not a replacement for MACD. The lag from the moving averages, the signal line, and the ATR lookback is all still present. The bounding does not add lag, but it does not remove the lag that was already there.

And the normalization does not know about market context. A reading of +80 during a news-driven spike and a reading of +80 during a quiet trend look the same on the oscillator. The normalization responded to the ATR of the period, not the cause of the momentum. Whether the reading deserves your trust depends on what drove it, and that is something you have to judge. Two identical readings on the oscillator can represent completely different market situations. The normalization makes them comparable in magnitude; it does not make them comparable in meaning.


The honest bottom line

The normalization is a tool for comparability, not a guarantee of accuracy. It gives you a way to put multiple MACD readings on the same scale so you can compare them, blend them, and make sense of them in a single pane. It does this well enough that the comparison is meaningful across timeframes and across instruments.

The cost is resolution at the extremes. The benefit is a bounded, unitless reading that does not lie about its scale. For most of the oscillator's range, the tradeoff is heavily in your favor. Near the bounds, you need to be aware that the oscillator is compressing and that the exact reading carries less information than it does in the mid-range.

Knowing this changes how you use the tool. In the mid-range, treat the readings as meaningful measurements — compare them, track them, weight them. In the extreme zones, treat them as category labels: "things are stretched." The oscillator is still doing its job in both zones. But the job it can do for you changes at the boundaries, and the reader who understands that shift will use it better than the reader who treats ±95 with the same precision they would treat ±40.