For the Geeks
Written By Axiom Admin
Last updated About 1 month ago
This page is for anyone who has looked at the oscillator and thought, "this does not behave like the stochastic I know." It does not. Three mechanics make it different from a textbook stochastic setup: the bipolar scale, the K/D smoothing chain, and the weight-normalized blending. Each one is a deliberate design choice with a clear tradeoff. Understanding them will not change what the oscillator shows, but it will change how much sense the display makes to you.
Nothing here is implementation detail. This page explains what each mechanic does to your experience and how to verify it, not how to reproduce the code.
The bipolar scale
Standard stochastic oscillators run from 0 to 100. The midpoint is 50: above it is considered bullish, below it is bearish. This oscillator converts the standard stochastic into a bipolar range: -100 to +100, with zero as the midpoint.
The conversion is straightforward in concept: the old midpoint (stochastic 50) becomes zero. Everything above 50 maps to positive values. Everything below maps to negative values. The full range (0-100) maps to the full bipolar range (-100 to +100). The midpoint is subtracted and the result is scaled to fill the full range, then clamped to prevent overshoot.
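In code, the mapping can be sketched like this (a minimal Python illustration with a hypothetical `to_bipolar` helper, not the indicator's actual source):

```python
def to_bipolar(stoch: float) -> float:
    """Map a 0-100 stochastic reading onto the -100..+100 bipolar scale.

    The midpoint (50) is subtracted, the result is doubled to fill the
    full bipolar range, and the output is clamped to guard against
    overshoot.
    """
    bipolar = (stoch - 50.0) * 2.0
    return max(-100.0, min(100.0, bipolar))

# Spot checks matching the text:
# to_bipolar(50) -> 0, to_bipolar(80) -> +60, to_bipolar(85) -> +70
```

The inverse direction is just as useful for recalibration: any traditional threshold T maps to (T - 50) * 2, which is how 80/20 becomes +60/-60.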
Conversion reference
Standard stochastic 0 = bipolar -100
Standard stochastic 20 = bipolar -60
Standard stochastic 50 = bipolar 0
Standard stochastic 80 = bipolar +60
Standard stochastic 85 = bipolar +70
Standard stochastic 100 = bipolar +100
This table is the key to the most common confusion with this oscillator. When you see +70 on the display, you are not looking at stochastic 70. You are looking at something equivalent to stochastic 85: a genuinely extreme reading that occurs less frequently than traditional overbought. The default overbought and oversold thresholds on this oscillator are already deep into territory that standard stochastic analysis would consider rare.
The traditional stochastic 80 overbought level corresponds to +60 on this oscillator. If you have been trading with 80/20 thresholds and want a similar frequency of OB/OS events on this tool, set the levels to +60/-60 rather than using the +70/-70 defaults.
Why use a bipolar scale?
Three practical reasons:
Zero becomes meaningful. On a standard stochastic, 50 is the midpoint, but the visual center of the 0-100 range does not feel like a midpoint, because the chart's axis still runs from 0 to 100. On the bipolar scale, zero is both the mathematical and visual center. Above zero is positive momentum within the range. Below zero is negative. The zero line becomes an immediate visual reference for direction.
Symmetry around center. The overbought and oversold thresholds are equidistant from zero (+70 and -70 by default), which makes the visual display symmetric. The upper extreme and lower extreme occupy equal visual space.
Slot-to-slot comparability. Every slot and the blended output live on the same centered range, so a 5-minute read, a 60-minute read, and a cross-ticker read can all be compared on one ruler without mentally remapping the midpoint each time.
The recalibration cost
The practical consequence of the bipolar scale is that your existing stochastic instincts need adjustment, and that adjustment is harder than it sounds. If you have spent months or years reading a standard stochastic (knowing what 80 "feels like," recognizing the difference between 70 and 85 at a glance, having a trained sense for when OB means "stretched but fine" versus "about to snap"), those calibrations were built for a different coordinate system. On this oscillator, 80 is not 80. Your instinct that says "the reading is high, probably overbought" fires at the wrong threshold.
This is not something you solve by reading the conversion table once. It takes time. The table tells you the math. Your eye needs to learn the new scale through repetition. During that transition, the most likely mistake is treating +60 as "not extreme" because it does not match the 80 you are used to, when +60 is, in fact, exactly traditional 80. The second most likely mistake is treating +70 as "just a little past overbought" when it corresponds to traditional 85, a significantly rarer level. Until the new scale feels as natural as the old one, keep the conversion table somewhere you can check it quickly.
How to verify
Set a slot to the same timeframe as your chart. Set K Smoothing to 1 with SMA and D Length to 1 with SMA (effectively passing near-raw values through). Compare the slot's K reading to a standard stochastic indicator with the same K Length on the same chart. If the standard stochastic shows 80, this oscillator should show approximately +60. If the standard stochastic shows 50, this oscillator should show approximately 0.
The match gets less direct once smoothing is active, because this script smooths the stochastic before plotting the bipolar result. But with smoothing minimized, the conversion should be clear.
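The same check can be simulated offline. The raw %K core is the standard formula (the latest close relative to the highest high and lowest low of the lookback); the helper names below are illustrative, not the script's:

```python
def raw_stoch_k(highs, lows, closes, k_length):
    """Standard raw %K over the last k_length bars."""
    hh = max(highs[-k_length:])
    ll = min(lows[-k_length:])
    if hh == ll:
        return 50.0  # flat range: treat the reading as the midpoint
    return 100.0 * (closes[-1] - ll) / (hh - ll)

# With K Smoothing = 1 and D Length = 1, the smoothing passes are
# identity, so the plotted K is just the bipolar conversion of raw %K.
highs, lows, closes = [10, 12, 15], [5, 6, 7], [9, 11, 13]
k = raw_stoch_k(highs, lows, closes, 3)                 # 80.0 on the 0-100 scale
bipolar_k = max(-100.0, min(100.0, (k - 50.0) * 2.0))   # +60 on this oscillator
```

A standard stochastic reading of 80 on the same bars should therefore appear on this oscillator as approximately +60.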
The K/D smoothing chain
When you look at the settings, you see "K Smoothing" and "K Type" alongside "D Length" and "D Type." These are two separate smoothing passes applied in sequence, and they mirror the classic stochastic %K and %D relationship, with the additional flexibility of selectable MA types.
What actually happens
Raw %K is calculated using the standard ta.stoch() function with the K Length you set. This measures where the source price closed relative to the highest high and lowest low over the lookback period. Nothing unusual here: it is the same stochastic core that any standard indicator uses.
First smoothing pass: K Smoothing. The raw %K is passed through a moving average: the type selected by "K Type" with the period set by "K Smoothing." The result, after bipolar conversion, is the slot's K line (the one that is plotted). This is analogous to the "slow %K" in traditional stochastic terminology.
Second smoothing pass: D. The already-smoothed 0-to-100 K value is passed through another moving average: the type selected by "D Type" with the period set by "D Length." That result is then converted to the slot's bipolar D line. The D line is not plotted individually for each slot, but it determines the slot's regime (bullish when K > D, bearish when K < D) and feeds into the blended D value.
This is similar in structure to how some oscillators use a fast line and a signal line. K is the faster, more responsive one. D is the slower reference. The crossover between them defines regime.
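A minimal sketch of the chain, assuming SMA for both passes (the indicator lets you pick other MA types; the function names here are illustrative):

```python
def sma(values, length):
    """Simple moving average; early indexes use a shorter window."""
    out = []
    for i in range(len(values)):
        window = values[max(0, i - length + 1): i + 1]
        out.append(sum(window) / len(window))
    return out

def smoothing_chain(raw_k, k_smoothing, d_length):
    """Two sequential passes, mirroring slow %K and %D.

    Pass 1 (K Smoothing) produces the slot's K line; pass 2 (D Length)
    smooths that result again to produce D, which drives the regime
    check (bullish when K > D, bearish when K < D)."""
    k_line = sma(raw_k, k_smoothing)
    d_line = sma(k_line, d_length)
    return k_line, d_line
```

With both lengths at 1, each pass is an identity and K and D coincide, which is exactly the minimized-smoothing verification setup described elsewhere on this page.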
How the smoothing stacks
Each smoothing pass introduces its own lag. K Smoothing delays the slot's reaction to new price data. D Length delays the regime detection further. If Master Smoothing is enabled (see below), a third pass delays the blend on top of everything else.
The lag from each layer compounds.
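A back-of-the-envelope way to see the compounding: an SMA of length N lags its input by roughly (N - 1) / 2 bars, and cascaded linear filters add their delays. A rough Python sketch under that SMA-only assumption (other MA types lag differently):

```python
def sma_lag(length):
    """Approximate delay, in bars, introduced by one SMA pass."""
    return (length - 1) / 2

def total_lag(k_smoothing, d_length, master_smoothing=1):
    """Cascaded passes add their individual lags. A length of 1 is an
    identity pass and contributes no delay."""
    return sma_lag(k_smoothing) + sma_lag(d_length) + sma_lag(master_smoothing)

# K Smoothing 3 + D Length 9: about 1 + 4 = 5 bars before the regime
# reflects a turn; enabling Master Smoothing 5 pushes that to about 7.
```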
The stable, confident appearance of a heavily smoothed line is exactly what makes over-smoothing dangerous. The line looks like it knows what it is doing. What it is actually doing is reporting old news slowly. And the more smoothing you add, the harder it becomes to tell the difference between genuine trend conviction and a line that simply has not caught up to a reversal yet.
If you find yourself adding smoothing because the display is "too noisy," consider whether the noise is the indicator being honest about choppy conditions rather than a problem to solve with another MA pass. Choppy markets produce choppy stochastic readings. That is the stochastic doing its job. If the market has no clear direction, the oscillator should look uncertain. When you smooth away that uncertainty, you are not revealing an underlying trend; you are inventing the appearance of one by averaging away the disagreement. Sometimes that averaging is useful. Sometimes it is the most expensive kind of self-deception. The test is always the same: toggle the smoothing off and see what the less-filtered picture says. If it says something meaningfully different, the smoothing was hiding information, not cleaning it.
How to verify
Set both K Smoothing and D Length to 1 with SMA. The K and D values should nearly overlap, because both are near-identity smooths of the same input. Now increase D Length to 9. You should see the D line visibly lag behind K, and regime flips (brightness changes) happen later. That lag is what D Length adds to every reading. Then reduce D Length back to 1 and increase K Smoothing to 9 instead. The K line itself becomes smoother and more delayed, while D tracks it closely. Each smoothing stage has a different effect on the visual output, and understanding which one is responsible for a particular behavior requires testing them independently.
Weight-normalized blending
When multiple slots are enabled, the indicator blends their K values and their D values into a single composite pair: the Blended K and Blended D.
How weights work
Each slot has a Blended Weight setting. The weights are relative, not absolute. They do not need to sum to 100 or any specific number. The indicator normalizes automatically by dividing each slot's weighted contribution by the total weight of all contributing slots.
This means:
Weights of 33.3 / 33.3 / 33.3 give approximately equal influence (one-third each)
Weights of 10 / 20 / 30 give the same ratios as 16.7% / 33.3% / 50%
Weights of 1 / 1 / 1 are identical to 33.3 / 33.3 / 33.3
The normalization is automatic and invisible. You never see a "total weight" or a percentage breakdown. You just set the relative numbers and the indicator does the division.
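The behavior described above can be sketched in a few lines (illustrative Python, with None standing in for a slot that produced no valid data, like Pine's na):

```python
def blend(values, weights):
    """Weight-normalized blend of slot readings.

    Weights are relative: each contributing slot's value is multiplied
    by its weight, and the sum is divided by the total contributing
    weight. Slots with weight 0 or no data (None) are skipped; if
    nothing contributes, the blend itself is undefined (None)."""
    total = 0.0
    weight_sum = 0.0
    for value, weight in zip(values, weights):
        if value is None or weight == 0:
            continue
        total += value * weight
        weight_sum += weight
    return total / weight_sum if weight_sum > 0 else None
```

Because of the normalization, weights of 1/1/1 and 33.3/33.3/33.3 produce the same blend, and a single contributing slot passes through unchanged.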
Weight 0 versus disabled
This distinction matters and is easy to miss:
A slot at weight 0 still exists as an independent stochastic reading. It still plots its K line on the chart and still fires its alerts. The only thing weight 0 removes is the slot's contribution to the blended composite. This is useful when you want to see a slot's data alongside the blend without letting it influence the blend.
If all enabled slots have weight 0 (or all produce no valid stochastic data), the blended lines show nothing; the blend requires at least one contributing slot.
Consensus bias
Because the blend is a weighted average, it inherently smooths out disagreement. Two strongly bullish slots and one strongly bearish slot produce a moderately bullish blend. The bearish slot's contribution is diluted by the averaging, not amplified.
This is not a flaw; it is how weighted averages work. But it has a practical consequence: the blend will always understate extremes and always mute minority opinions. If one slot is at -60 while the other two are at +40, the blend sits near +7, a reading that suggests mild bullishness and hides the fact that one timeframe is deeply bearish. The most important slot at any given moment might be the dissenting one, and the blend is the last place its dissent will show clearly. The individual slot lines are where you see disagreement; the blend is where disagreement gets averaged away.
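The dilution in that scenario is plain arithmetic (equal weights assumed):

```python
# One deeply bearish slot against two moderately bullish ones.
slots = [-60, 40, 40]
blended = sum(slots) / len(slots)
print(round(blended, 1))  # 6.7: mild bullishness, dissent averaged away
```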
Edge cases
One slot enabled, other two disabled: the blend matches that slot exactly. No averaging occurs.
One slot at weight 0, two at equal weight: the zero-weight slot is excluded from the blend. The blend averages the other two.
All enabled slots at weight 0: the blend shows nothing (na). At least one slot must have a non-zero weight for the blend to compute.
One slot returns no valid data (for example, insufficient history on a cross-ticker symbol): the blend ignores that slot and normalizes around the remaining contributing slots. This happens silently; there is no visual indicator that the blend is missing a contributor.
How to verify
Enable only one slot (disable the other two). The blend should exactly match that slot's K and D. Now enable a second slot with equal weight. The blend should sit at the midpoint between the two. Set one slot's weight to 0. The blend should ignore it and match the other contributing slot(s) exactly. This simple test confirms that the weighting and normalization are behaving as described.
The MA library integration
The smoothing behind each slot, both K smoothing and D smoothing, is powered by the Axiom Moving Average Library. This library provides a range of moving average algorithms beyond the standard SMA and EMA, including WMA, RMA, VWMA, HMA, ALMA, and SWMA.
For most users, SMA or EMA will be the right choice. The other algorithms exist for traders who have specific reasons to use them: for example, HMA for reduced lag at the cost of occasional overshoot, or ALMA for its tunable center-of-mass weighting.
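For a sense of what ALMA's offset and sigma actually do, here is a common formulation of the algorithm (an illustrative sketch; the Axiom library's exact variant may differ):

```python
import math

def alma(values, length, offset=0.85, sigma=6.0):
    """Arnaud Legoux MA over the last `length` values.

    Weights follow a Gaussian whose peak sits at offset * (length - 1):
    offset near 1.0 favors recent bars, near 0.0 favors older bars.
    sigma controls the width: larger sigma spreads weight more evenly,
    smaller sigma concentrates it around the peak."""
    m = offset * (length - 1)
    s = length / sigma
    window = values[-length:]
    weights = [math.exp(-((i - m) ** 2) / (2 * s * s)) for i in range(length)]
    return sum(w * v for w, v in zip(weights, window)) / sum(weights)
```

Because the offset and sigma are global in this indicator, changing them re-shapes this weight curve for every ALMA instance at once.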
The key thing to know about the library integration is that ALMA is a special case. When you select ALMA as any K Type, D Type, or Master MA Type, the ALMA Offset, ALMA Sigma, and ALMA Floor Offset settings in the Power User section apply globally. Every instance of ALMA across all three slots and the master smoother shares the same parameters. If you change the ALMA Offset to tune Stoch 02's K smoothing, Stoch 01's D smoothing will also change if it uses ALMA. There is no per-slot ALMA configuration.
This global scope is easy to forget. If you are using ALMA on multiple slots and one slot starts behaving unexpectedly after a parameter change, check whether the change propagated to the other ALMA instances. The simplest way to avoid this is to limit ALMA use to one slot or one smoothing stage, so the global parameters only affect one calculation.
A note on the MTF safety pattern
This oscillator uses a well-known technique for fetching higher-timeframe data safely. When On Bar Close is on, the indicator fetches the previous HTF bar's finalized values, which eliminates repainting. When Off, it fetches the current building bar's values, which provides responsiveness at the cost of retroactive display changes.
This page is not the right place for the full explanation; see MTF and Repainting for that. The relevant point here is that the stochastic math runs inside the higher-timeframe data context. The ta.stoch(), the K smoothing, and the D smoothing all execute using the higher-timeframe bar series. A K Length of 14 on a 60-minute slot means 14 sixty-minute bars of lookback, not 14 one-minute bars. The slot behaves as if it were a standalone stochastic indicator running on that timeframe's chart; the multi-timeframe request wraps the entire calculation, not just the price data.
The On Bar Close toggle exists because the tradeoff between stability and freshness is a real one, and hiding it from the user would make the tool less honest, not more convenient.