What an extensive review of pattern detection performance taught us about the gap between technical analysis theory and what actually works in live markets.


The cleanest ascending triangle on your screen right now is probably the worst trade on it.

That sounds wrong. Every technical analysis textbook teaches the opposite: cleaner patterns are higher-conviction setups. Sharper trendlines, more confirmation touches, tighter geometric symmetry. Reward those features in your scanning, downweight the ambiguous ones, and your hit rate should climb.

I built a stock pattern detection system around exactly that assumption. Then I measured what actually happened.

Patterns scoring 30 or higher on our 32-point composite quality scale hit their targets 29% of the time. Patterns scoring under 23 hit theirs 60% of the time. The system I had built was systematically selecting the worst available trades.

This article is what I learned from that failure, drawn from roughly 370,000 pattern detections analyzed across nearly two years of NASDAQ and NYSE coverage. Most of it contradicts what mainstream trading guides recommend, including respected sources like Bulkowski's Encyclopedia of Chart Patterns. All of it is what my own data showed.


The detection system, briefly

I run StockDataAnalytics, where we scan roughly 6,000 stocks every trading day. Around 3,500 of those get filtered out for being too thinly traded or priced under five dollars. The remaining 2,500 receive full pattern detection across sixteen bullish formations: ascending triangles, bull flags, cup-and-handle setups, double bottoms, volatility compressions, falling wedges, and others.

Every detection earns a score across three independent dimensions:

  • Structure (0 to 12 points): geometric fit, meaning how cleanly the price action matches the textbook definition.
  • Volume (0 to 10 points): whether participation behavior confirms the pattern.
  • Breakout Readiness (0 to 10 points): how positioned the setup looks for an imminent move.

The total runs from zero to thirty-two.
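In code, the composite works roughly like this. This is a minimal sketch of the shape of the scoring, not the production implementation; the class name and clamping details are illustrative:

```python
from dataclasses import dataclass

@dataclass
class PatternScore:
    structure: int  # 0-12: geometric fit against the textbook definition
    volume: int     # 0-10: whether participation behavior confirms
    readiness: int  # 0-10: positioning for an imminent move

    def total(self) -> int:
        # Clamp each dimension to its range, then sum to the 0-32 total.
        s = min(max(self.structure, 0), 12)
        v = min(max(self.volume, 0), 10)
        r = min(max(self.readiness, 0), 10)
        return s + v + r
```

A detection scoring 8/9/8 totals 25, comfortably below the "perfect" ceiling that the findings below show is actually a warning sign.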

That three-dimensional framework was the right one. The problem was how I weighted the inputs inside it. Structure was treated as the primary signal, with volume and readiness acting as secondary confirmations. That ordering was backwards. Properly measured, structure score was inversely correlated with two-week forward returns. The geometry that scanners reward most was the geometry that produced the worst trades.

Three findings explain why.


Finding one: each resistance touch erodes the edge

The cleanest version of this story comes from the ascending triangle dataset. As we tightened the detector, we tracked how many times price had pressed against the resistance ceiling before the pattern qualified for scoring.

Resistance Touches    Market Beat Rate    Average Alpha
2 touches             55%                 +1.6%
3 to 4 touches        48%                 +0.4%
5 touches             42%                 -0.3%
6 or more touches     35%                 -0.9%

The decay is monotonic. Every additional touch shaved predictive value off the pattern. By six touches, the trade carried negative alpha: you would have done better simply buying the index.

The mechanism is straightforward. A resistance level touched twice is fresh territory. Few participants have built positions around it, few stop-loss clusters sit beneath it, and a break carries genuine surprise. A resistance level touched six times is well-known territory. Every algorithmic scanner has flagged it. Every retail charting platform has drawn a horizontal line on it. By the time the pattern looks confirmed, the trade is crowded, the stops are obvious, and the move that breakout buyers are paying for has already largely happened.
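Counting touches sounds trivial, but it hides one judgment call: consecutive bars pressing the same level should collapse into a single touch, or the count inflates. A minimal sketch of that logic, with the 0.5% tolerance as an illustrative assumption rather than our production threshold:

```python
def count_resistance_touches(highs, resistance, tol=0.005):
    """Count distinct touches of a resistance level.

    A bar "touches" when its high is within `tol` (fractional) of the
    level; a run of consecutive near-level bars counts as one touch.
    """
    touches = 0
    at_level = False
    for h in highs:
        near = abs(h - resistance) / resistance <= tol
        if near and not at_level:
            touches += 1
        at_level = near
    return touches
```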

This is the inversion that broke my earlier model: confirmation and opportunity are not the same thing. The features I had treated as evidence of pattern quality were actually evidence that the pattern was no longer tradable.


Finding two: perfect structure scores predicted the underperforming trades

The resistance touch result was a specific case of a broader effect. When we correlated scores against two-week forward returns across the full dataset, the relationship inverted somewhere around a combined score of 28 and stayed inverted through the top of the 32-point range.

For the ascending triangle detector, scores of 7 to 8 produced a 62% market beat rate. Scores of 9 or higher produced 40%. The gap was not subtle, and it was not isolated.

What kept the system viable was a small bucket we noticed in the outliers. Patterns combining a moderate structure score (7 to 8) with a strong volume score (8 or higher) and a strong breakout readiness score (8 or higher) hit a 75% market beat rate. We started calling that combination the golden bucket. The problem: only 2.5% of detected patterns landed there. The dominant bucket, holding 58% of detections, was the opposite: high structure with weak volume and weak readiness. Textbook-perfect setups with no real participation behind them.
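As a sketch, the bucket logic reduces to a few threshold checks. The golden-bucket thresholds come straight from the numbers above; the cutoffs for the dominant bucket and the labels themselves are illustrative assumptions, not the production boundaries:

```python
def classify_bucket(structure, volume, readiness):
    """Rough bucket classifier for a scored detection (sketch only)."""
    # "Golden bucket": moderate structure, strong volume, strong readiness.
    if 7 <= structure <= 8 and volume >= 8 and readiness >= 8:
        return "golden"
    # Dominant bucket: textbook-perfect geometry with no participation.
    # (Exact cutoffs here are assumed for illustration.)
    if structure >= 9 and volume < 8 and readiness < 8:
        return "pretty-but-empty"
    return "other"
```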

When we ran the correlations directly, the math explained the imbalance. Structure score correlated negatively with volume score at -0.30 and with breakout score at -0.51. The detector was systematically favoring patterns that looked good but lacked the underlying activity that actually predicted breakouts.
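Those cross-dimension figures are plain Pearson correlation coefficients, which you can compute with no dependencies at all:

```python
import math

def pearson(xs, ys):
    """Pearson correlation coefficient between two equal-length series."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = math.sqrt(sum((x - mx) ** 2 for x in xs))
    sy = math.sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)
```

Run over (structure, volume) and (structure, breakout) score pairs across the dataset, this is what produced the -0.30 and -0.51 figures.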

The conclusion was uncomfortable to internalize but mechanically obvious: rewarding pattern beauty was selecting against pattern readiness.


Finding three: volume is simpler than the textbooks claim

Volume was the part of the analysis where I expected the most nuance. The conventional framework says quiet accumulation during consolidation followed by a volume spike at breakout is the highest-confidence setup. I expected the data to reveal a Goldilocks zone: too quiet means abandonment, too loud means amateur chasing, somewhere in the middle is institutional positioning.

Across 137,937 pattern detections with sufficient volume data, the relationship was essentially monotonic. More volume produced better outcomes. Less volume produced worse ones.

Volume Ratio     N         Beat Rate
Below 0.4x       2,593     44.5%
0.4 to 0.6x      14,923    47.8%
0.6 to 0.8x      32,846    49.2%
0.8 to 1.0x      32,119    50.4%
1.0 to 1.5x      37,322    53.5%
1.5 to 2.5x      14,379    55.6%
Above 2.5x       3,755     58.2%

Random sampling across sixteen patterns confirmed it: thirteen of them showed higher volume outperforming middle volume. The Goldilocks idea was wrong.

One specific signal did hold strongly. Patterns identified on stocks trading below 40% of average volume were almost universally bad trades regardless of pattern type. The 7.6 percentage point gap between the lowest and highest volume buckets was statistically significant at Z=17.56, p<0.001. The mechanism is intuitive: stocks with extreme volume dryup are stocks institutions have stopped tracking. A pretty pattern on an abandoned stock is still an abandoned stock.
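The significance check is a standard pooled two-proportion z-test. This sketch shows the shape of the calculation, not the exact script that produced the Z=17.56 figure:

```python
import math

def two_proportion_z(p1, n1, p2, n2):
    """Pooled two-proportion z-test: is the beat rate p2 (sample n2)
    significantly different from beat rate p1 (sample n1)?"""
    pooled = (p1 * n1 + p2 * n2) / (n1 + n2)
    se = math.sqrt(pooled * (1 - pooled) * (1 / n1 + 1 / n2))
    return (p2 - p1) / se
```

With bucket sizes in the tens of thousands, even a few percentage points of beat-rate difference pushes the z-statistic far past conventional significance thresholds.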

The most uncomfortable finding was about our own scoring. Our volume score, which I had designed carefully across multiple metrics, showed essentially zero correlation with outcomes. In the bull flag detector, winners averaged 7.83 points, losers averaged 7.82. Whatever our volume score was measuring, it was not what predicted breakout follow-through.

Volume mattered. Our measurement of it did not.


What we changed

These results forced a fundamental redesign of the scoring framework.

We renamed Structure Score to Setup Quality Score. The change was not cosmetic. We reduced weight on geometric perfection and added weight on factors the geometry could not capture: volatility compression, moving average alignment, and momentum positioning. We also introduced what we now call obviousness penalties. Patterns scoring at the top of every dimension simultaneously get flagged as probable crowded trades and downweighted, the inverse of how the original model treated them. A perfect score is now a yellow flag, not a green light.
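A sketch of the obviousness penalty, with the trigger thresholds and the size of the haircut as illustrative assumptions rather than the production values:

```python
def apply_obviousness_penalty(structure, volume, readiness, penalty=0.85):
    """Downweight setups that max out every dimension at once.

    Returns (adjusted_total, flagged). Thresholds and the 15% haircut
    are illustrative assumptions.
    """
    total = structure + volume + readiness
    if structure >= 11 and volume >= 9 and readiness >= 9:
        return total * penalty, True   # probable crowded trade
    return total, False
```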

Volume score got rebuilt around behavior rather than absolute levels. We added On-Balance Volume slope analysis to detect divergences between price and cumulative buying pressure. We replaced "more dryup is always better" thresholds with empirically validated ranges. We weighted up-day versus down-day volume spikes to capture institutional behavior. And we shifted weight away from breakout-day volume, which turned out to be less predictive, toward formation-period volume, which turned out to be more predictive.
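The OBV slope check is simple to reproduce: accumulate volume signed by each day's close direction, then fit a least-squares slope over the formation window. A rising OBV while price moves sideways is the divergence we look for. A pure-Python sketch:

```python
def obv(closes, volumes):
    """On-Balance Volume: cumulative volume, signed by close direction."""
    out = [0]
    for i in range(1, len(closes)):
        if closes[i] > closes[i - 1]:
            out.append(out[-1] + volumes[i])
        elif closes[i] < closes[i - 1]:
            out.append(out[-1] - volumes[i])
        else:
            out.append(out[-1])
    return out

def slope(series):
    """Least-squares slope of a series against its bar index."""
    n = len(series)
    mx = (n - 1) / 2
    my = sum(series) / n
    num = sum((x - mx) * (y - my) for x, y in zip(range(n), series))
    den = sum((x - mx) ** 2 for x in range(n))
    return num / den
```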

Breakout Readiness got tied to genuine timing indicators: RSI positioning, MACD histogram direction as momentum confirmation, moving average alignment, and proximity to resistance with recency weighting.

The net effect was shifting weight away from structure (which had been actively misleading) toward volume behavior (which, properly measured, predicted outcomes) and breakout readiness (which captured timing rather than aesthetics). Performance improved across most pattern types, with several detectors that had previously been close to coin flips climbing into the 60s and 70s.


What this means if you trade off charts

Three principles fall out of this work, whether you scan algorithmically or read charts manually.

Be skeptical of textbook-clean setups. If a pattern jumps off the screen at you, it has jumped off thousands of other screens too. By the time you act on it, the move has already been priced in by every other trader running a similar scan. The opportunities that survive are usually the ones that look slightly off: the trendline with a kink, the base that is not perfectly flat, the right shoulder that does not quite mirror the left. Ambiguity keeps competition out, and competition is what arbitrages the edge away.

Watch volume during formation, not at breakout. The breakout-day volume surge gets all the attention because it is visually obvious and emotionally satisfying. It is also too late to be useful as a signal. The information that matters is what happened during consolidation. Were up days carrying more participation than down days? Was OBV trending up while price moved sideways? Was the stock attracting any volume at all, or had it been left for dead? Quiet accumulation beats loud breakouts more often than the textbooks suggest.

Fewer touches mean fresher setups. A resistance level touched six times is well-mapped territory. Everyone has it on their chart. A resistance level touched twice is unmapped. The break carries surprise value, the stops are not yet clustered, and the move can run before the crowd catches up. Confirmation and opportunity are not the same thing.

The meta-lesson behind all three is that the textbooks teach pattern detection, not pattern prediction. Recognizing an ascending triangle is table stakes. Anyone with a charting platform can do it. The actual edge lives in distinguishing the triangles that work from the triangles that do not, and the data on that distinction looks very different from what most retail trading content describes.


Methodology and limitations

These findings come from roughly 370,000 pattern detections gathered through SDA's daily scanner across NASDAQ and NYSE, with two-week forward outcomes tracked for each detection. Market beat rate is calculated against SPY total return over the same window. The volume analysis specifically uses 137,937 detections with sufficient data for ratio classification.
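For clarity, market beat rate is simply the fraction of detections whose two-week forward return exceeded SPY's total return over the matched window:

```python
def market_beat_rate(pattern_returns, spy_returns):
    """Fraction of detections beating SPY over the same two-week window."""
    wins = sum(r > s for r, s in zip(pattern_returns, spy_returns))
    return wins / len(pattern_returns)
```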

A few limitations worth naming directly:

  • The earliest version of the scoring backtest was later found to contain a bug that overstated raw performance numbers. The directional findings in this article (more resistance touches predicting worse outcomes, structure scores inversely correlating with returns, volume dryup signaling abandonment) replicated cleanly after the bug fix and are not affected. The absolute beat rate numbers in the original detector versions were higher than the recalibrated system would now report.
  • All scoring numbers in this article reference the original detector versions as they were running when the analysis was performed.
  • Bear regime data was thin during the analysis window and is not separately reported here.

The current production detectors use the redesigned scoring described above and produce different score-to-outcome curves. We will publish updated per-pattern performance breakdowns separately.


About StockDataAnalytics

StockDataAnalytics scans roughly 6,000 NASDAQ and NYSE stocks every trading day for sixteen bullish chart patterns, scores each detection across structure, volume, and breakout readiness, and delivers curated daily recommendations to subscribers before market open. The findings in this article shape the current production scoring. If the analysis here was useful to you, the daily signals apply these filters and others across the full pattern set in real time.

About the author

Rene Haase is the founder of StockDataAnalytics and Prismora Data LLC. He spent roughly 30 years in software engineering with leadership roles at Amazon, Groupon, Paylocity, and Integral Ad Science, managing teams of up to 100 engineers. He holds economics degrees from German universities. He writes about pattern research on the SDA blog, Medium, Seeking Alpha and Investing.com.


Disclosure: The author is the founder of StockDataAnalytics.com and uses recommendations from this system for personal investment decisions. This article presents analysis and observations from research data and should not be construed as investment advice. Past performance does not guarantee future results.


Disclaimer: StockDataAnalytics.com is a financial data and analytics service. The information provided through our platform, including stock pattern detection, entry zones, stop losses, and price targets, is for informational and educational purposes only and does not constitute financial advice, investment advice, trading advice, or any other type of advice. We are not registered investment advisors, broker-dealers, or financial planners. Past performance of any pattern or recommendation does not guarantee future results. All investments involve risk, including the possible loss of principal. You should consult with a qualified financial advisor before making any investment decisions. By using our service, you acknowledge that all trading decisions are made at your own risk.