For the Geeks


Written By Axiom Admin

Last updated About 1 month ago


This page is optional. You can use the indicator correctly without reading it. But if you are the kind of person who wants to understand what happens between the raw price data and the line on your chart, what the processing pipeline actually does, why the numbers look the way they do, and what shapes the output at each stage, this is the page.

Nothing here is required for day-to-day use. Everything here will make you a better reader of what the tool shows you.


The bipolar conversion

Standard stochastic oscillators output values between 0 and 100. The 50 line is the mathematical center, but most traders do not use it as a reference. They focus on the overbought zone (usually 80) and the oversold zone (usually 20). The midpoint is a dead zone that people pass through without paying attention.

This indicator rescales the stochastic output to a bipolar range: -100 to +100, centered at zero. The conversion is a simple linear rescale: each value from the standard 0-100 range is recentered by subtracting 50, then doubled so the full range maps onto -100 to +100. The result is clamped at the bounds.

In concrete terms:

Traditional (0-100)    Bipolar (-100 to +100)
-------------------    ----------------------
0                      -100
10                     -80
20                     -60
30                     -40
40                     -20
50                     0
60                     +20
70                     +40
80                     +60
85                     +70 (default OB)
90                     +80
100                    +100
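The conversion in the table is easy to check by hand. As a sketch in Python (the indicator itself is Pine Script; this is just the arithmetic):

```python
def to_bipolar(value):
    """Map a traditional 0-100 stochastic value onto the -100..+100 scale."""
    bipolar = (value - 50.0) * 2.0           # recenter around 50, then scale
    return max(-100.0, min(100.0, bipolar))  # clamp at the bounds

print(to_bipolar(70))  # -> 40.0
print(to_bipolar(85))  # -> 70.0, the default OB level
```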

Why this matters: The zero line becomes meaningful. On a standard stochastic, 50 is the midpoint but nobody watches it. On the bipolar scale, zero is the dividing line between net bullish territory (positive values) and net bearish territory (negative values). When the oscillator crosses zero, it is moving from one side to the other. That crossing has interpretive weight that the 50-line on a standard stochastic never carried.

The bipolar scale also makes blending more intuitive. When you average a value of +60 with a value of -40, you get +10, a slightly bullish composite. On the traditional scale, averaging 80 and 30 gives you 55, which does not communicate "slightly bullish" as immediately. The bipolar scale's symmetry around zero makes the composite reading easier to parse at a glance.

The tradeoff: Every trader who learned stochastics on the 0-100 scale has to remap their overbought and oversold anchors. If your instinct says "80 is overbought," you need to translate that to +60 on the bipolar scale. The default OB level at +70 corresponds to traditional 85, which is more extreme than the 80 threshold most people carry in their heads. This is not wrong. It just requires a mental adjustment.

How to verify: Load a standard 14-period stochastic with SMA(3) smoothing on the same timeframe. Compare its value to the Axiom slot. A traditional value of 70 should correspond to an Axiom value of approximately +40. A traditional value of 50 should correspond to approximately 0. If the relationship holds, the conversion is working as expected.


The smoothing pipeline

The indicator does not go straight from raw stochastic to the chart. There are multiple processing stages, and understanding the order matters if you want to reason about lag, reactivity, and what each setting actually changes.

The reason to know the order is practical: when a setting does not behave the way you expect, the pipeline tells you which stage is responsible and which settings affect it. If the K line is too noisy, the problem is before the bipolar conversion. If the regime is flipping too often, the problem is at the D stage. If the blend is too sluggish, it may be master smoothing or it may be too many heavily smoothed slots. The pipeline is your diagnostic map.

Here is the pipeline for each slot:

Example
Raw %K (0-100)
  -> K smoothing MA -> smoothed K (0-100)
  -> D smoothing MA fed by smoothed K -> smoothed D (0-100)
  -> bipolar conversion of both series -> K/D outputs (-100 to +100)
  -> "On Bar Close?" shift (optional)
  -> returned slot values

And then after all slots are computed:

Example
All slot K values -> weighted blend -> blended K
All slot D values -> weighted blend -> blended D
                         |
                         v
         optional master smoothing -> final blended K/D

Stage 1: Raw %K

The stochastic calculation takes the configured source (default: close), the high, and the low over the K Length period (default: 14 bars). The output is raw %K on the traditional 0-100 scale. This is the standard stochastic formula. Nothing unusual there.

Stage 2: K Smoothing

Raw %K is passed through the selected K MA Type (default: SMA) at the K Smoothing length (default: 3). This produces a smoothed version of %K that is still on the 0-100 scale. The purpose of this stage is to control how reactive the K line is. Longer smoothing means a calmer, slower K line.

Stage 3: D Smoothing

The smoothed K value is then passed through the selected D MA Type (default: SMA) at D Length (default: 3). This produces D, which at this stage is still on the same 0-100 scale as K.

That order matters. D is not computed from the already-converted bipolar K line. It is computed from the smoothed 0-100 K series first, and only after that do both lines get converted for plotting and regime reading.

The purpose of D is regime detection. When K is above D, the slot is bullish. When K is below D, bearish. The D Length controls how sensitive this regime detection is. Shorter D stays close to K (more flips), longer D creates more separation and fewer flips.
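The regime rule is a straight comparison. A one-line sketch (treating an exact tie as bearish is an arbitrary choice in this sketch; the source does not specify the tie case):

```python
def regime(k, d):
    """Slot regime: bullish when K is above D, bearish otherwise."""
    return "bullish" if k > d else "bearish"

print(regime(+35.0, +20.0))  # -> bullish
```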

Stage 4: Bipolar conversion and output selection

Once the smoothed K and D values exist, both are converted from the standard 0-100 range to the bipolar -100/+100 range and clamped to those bounds.

After that conversion, the "On Bar Close?" setting decides which bar gets returned:

  • Enabled: the slot returns the previous converted bar's value.

  • Disabled: the slot returns the current converted bar's value.

So the repaint control happens after the smoothing and conversion work, not before it.
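Stages 1 through 4 can be sketched in Python. This is a simplified illustration, not the indicator's source: helper names are hypothetical, only SMA smoothing is shown (the real indicator uses the configured MA types), and the "On Bar Close?" shift is modeled as dropping the latest value:

```python
def raw_k(closes, highs, lows, length=14):
    """Stage 1: raw %K on the traditional 0-100 scale."""
    out = []
    for i in range(length - 1, len(closes)):
        hh = max(highs[i - length + 1 : i + 1])
        ll = min(lows[i - length + 1 : i + 1])
        out.append(100.0 * (closes[i] - ll) / (hh - ll) if hh > ll else 50.0)
    return out

def sma(series, length):
    """Simple moving average; output starts once `length` values exist."""
    return [sum(series[i - length + 1 : i + 1]) / length
            for i in range(length - 1, len(series))]

def to_bipolar(x):
    """Stage 4: convert 0-100 to -100..+100 and clamp."""
    return max(-100.0, min(100.0, (x - 50.0) * 2.0))

def slot_outputs(closes, highs, lows, k_len=14, k_smooth=3, d_len=3,
                 on_bar_close=False):
    k = sma(raw_k(closes, highs, lows, k_len), k_smooth)  # Stage 2
    d = sma(k, d_len)               # Stage 3: D is fed by the smoothed 0-100 K
    k_bip = [to_bipolar(v) for v in k[len(k) - len(d):]]  # Stage 4, aligned
    d_bip = [to_bipolar(v) for v in d]
    if on_bar_close:                # return the previous converted bar's value
        return k_bip[:-1], d_bip[:-1]
    return k_bip, d_bip
```

Note that `sma(k, d_len)` consumes the smoothed 0-100 K series, matching the stage order described above: the bipolar conversion happens only after both lines exist.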

Stage 5: Weighted blend

After all enabled slots have produced their K and D values, the indicator computes a weighted average across slots. For each slot that is enabled, has a nonzero weight, and has a non-missing K value, the blend adds that slot's K (and D) multiplied by its weight. The total is divided by the sum of contributing weights.

Slots with a weight of zero are skipped entirely. They do not contribute zero to the average. They are excluded from the calculation. Slots where K is not available (missing data, warmup period) are also excluded, and the weights of the remaining slots re-normalize around whatever is left.

This means the blend is always a properly weighted average of whatever data is actually available. If one of three slots has missing data and the other two have weights of 40 and 60, the blend becomes a 40/60 average of those two slots, not a 40/60/0 average that includes a zero for the missing slot.

Why auto-normalization matters: You do not need to make your weights sum to 100. Weights of 2, 1, and 1 produce the same result as 50, 25, and 25. The indicator handles the division. This makes it easy to express relative importance (for example, "I want the hourly to count twice as much as the 5m") without doing arithmetic.
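A minimal sketch of the blend logic, assuming missing slot values are represented as None:

```python
def blend(values, weights):
    """Weighted average across slots.

    Zero-weight slots and slots with missing data are excluded entirely;
    the remaining weights re-normalize around whatever is left.
    """
    total, weight_sum = 0.0, 0.0
    for value, weight in zip(values, weights):
        if weight == 0 or value is None:  # skipped, not counted as zero
            continue
        total += value * weight
        weight_sum += weight
    return total / weight_sum if weight_sum else None

# One of three slots is missing: the result is a 40/60 average of the rest
print(blend([+60.0, +20.0, None], [40, 60, 25]))  # -> 36.0

# Weights need not sum to 100: 2/1/1 behaves exactly like 50/25/25
assert blend([80.0, 20.0, -40.0], [2, 1, 1]) == blend([80.0, 20.0, -40.0], [50, 25, 25])
```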

Stage 6: Master Smoothing (optional)

If Master Smoothing is enabled, the blended K and blended D each get one more MA pass at the Master MA Type and Master Length. This is a third smoothing layer, applied after blending.

The cumulative effect is important. By this point, the data has been: calculated as a raw stochastic, smoothed once (K Smoothing), smoothed again (D Smoothing), converted to bipolar, averaged across slots, and now smoothed a third time. Each layer adds lag. The result can be very stable and very slow. That stability is not free. You are paying for it with timeliness.
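As a rough rule of thumb (not from the indicator itself): on a steadily trending input, an SMA of length n lags by about (n-1)/2 bars, and chained smoothing layers add their lags. With hypothetical settings of K Smoothing 3, D Length 3, and Master Length 5:

```python
def sma_lag(length):
    """Approximate delay of an SMA on a trending input, in bars."""
    return (length - 1) / 2

# K smoothing 3 + D smoothing 3 + master smoothing 5 (hypothetical settings)
print(sma_lag(3) + sma_lag(3) + sma_lag(5))  # -> 4.0 bars of added lag
```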


The safe-MTF pattern

When a slot's timeframe is higher than the chart timeframe, the indicator needs to fetch data from a different timeframe context. Pine Script provides a mechanism for this, but the mechanism has a trap that is worth understanding because it explains why so many multi-timeframe indicators produce misleading historical charts.

By default, requesting a higher-timeframe value on a lower-timeframe chart can introduce future information. Here is what that looks like in practice: your chart is on 5-minute bars, and you request the 1-hour stochastic. At 10:05 (one bar into the hourly period), the chart already has access to the hourly bar's final value, a value that in real time would not exist until 11:00. On the live chart, this manifests as the indicator "knowing" the hourly result before the hour is over. On the historical chart, every 5-minute bar within that hour shows the final hourly value, creating a perfectly clean reading that was never actually available bar by bar during the live session.

This indicator uses the standard safe-MTF approach to avoid that trap. When "On Bar Close?" is enabled, the data request is configured to access the higher-timeframe bar with lookahead enabled and then shifted back by one bar. The effect: the indicator reads the previous completed HTF bar's values, not the current (possibly still-building) bar. This eliminates the future-leak problem because the previous bar's values are fully confirmed. They existed in real time and they will not change.

When "On Bar Close?" is disabled, the one-bar shift is removed. The indicator reads the current HTF bar's values, including the still-forming data. This is faster and more responsive but introduces the repainting behavior described in MTF & Repainting.

Why this matters for interpretation: On higher-timeframe slots, confirmed mode means the stochastic values you see are always from one completed HTF bar ago. They are stable and trustworthy, but they do not reflect what is happening right now in the HTF bar that is currently building. That one-bar lag is the price of stability. If a slot is set to the chart timeframe instead, confirmed mode is still one completed slot bar behind. It just does not create the same staircase look you get on true HTF slots.

In live mode, you see the current slot-timeframe stochastic, which is more current but provisional. It can and will change as the bar builds. The indicator gives you the choice per slot because both modes have legitimate uses, and neither is universally better. Knowing which mode each slot is running in is part of knowing what your chart is telling you.


What this means for how you read the output

Understanding the pipeline is not academic. It helps you reason about the indicator's behavior in the exact moments where the chart surprises you, and those are the moments where reasoning matters most:

  • Why the blend is smoother than the individual slots: Blending averages out the noise from individual slots. Adding master smoothing on top adds another layer of damping. If the blend looks like it barely moves, check the smoothing chain.

  • Why the blend lags the fastest slot: The blend includes slower slots that have not turned yet. The fastest slot (lowest timeframe) reacts first, but its contribution is diluted by the others. The blend catches up only when enough slots turn to shift the weighted average.

  • Why regime flips happen at different times on different slots: Each slot runs its own stochastic on its own timeframe with its own smoothing settings. A regime flip on the 5m slot does not cause a flip on the 1h slot. They are independent calculations. The blend flips when the composite weighted average of K crosses the composite weighted average of D.

  • Why higher-timeframe slots step in confirmed mode: When a slot is above the chart timeframe, it is locked to the last closed HTF bar. It updates only when a new HTF bar closes, so between closes the value is constant. If the slot timeframe matches the chart timeframe, confirmed mode is still one bar behind. It just behaves like a delayed line instead of a staircase.

  • Why over-smoothing feels deceptively stable: Three layers of smoothing (K + D + Master) can produce a line that barely moves. That stability is mathematical, not informational. The underlying momentum may have shifted bars ago, but the smoothing chain has not propagated the change yet. The line tells you where momentum was, not where it is.