A critical review of the 2024 Gartner Magic Quadrant for Supply Chain Planning Solutions, April 2025

By Léon Levinas-Ménard
Last modified: April 5th, 2025

Introduction

The 2024 Gartner Magic Quadrant (MQ) for Supply Chain Planning Solutions purports to map the top software vendors by “Completeness of Vision” and “Ability to Execute,” a format that carries an illusion of objectivity. In practice, however, this MQ tells us more about Gartner’s own incentives and the industry’s legacy baggage than about genuine technical merit. This critical review takes a truth-seeking, skeptical look behind the glossy quadrant. We dissect the methodology and structure of the MQ, exposing systemic flaws – from pay-to-play dynamics that skew rankings, to an over-representation of decades-old “dinosaur” vendors. We scrutinize the Leaders quadrant, calling out vague marketing claims (inflated ROI figures, magical AI/ML promises with no technical detail, “black-box” automation), and highlight internal contradictions in vendor narratives (e.g. boasting real-time planning while also claiming to optimize massive assortments – a combination that is computationally infeasible on real-world hardware). Throughout, we draw on deep technical reasoning and independent analyses (including Lokad’s 2021–2025 research) to cut through the hype. We also spotlight what the MQ omits – notably the frequent failed implementations of these very solutions, and the absence of disruptive, scientifically rigorous vendors who choose not to play Gartner’s game. The goal is a comprehensive, skeptical analysis that challenges Gartner’s vision of the supply chain planning software landscape and equips readers to see through the quadrant’s comforting-but-misleading simplicity.

The Magic Quadrant Methodology: Structure, Biases, and Pay-to-Play

Gartner’s MQ is presented as an impartial evaluation: a neat chart with two axes, “Ability to Execute” (y-axis) and “Completeness of Vision” (x-axis). In theory, a vendor in the coveted top-right “Leaders” quadrant combines strong execution with a compelling vision. Yet the process behind these rankings is far from neutral. In Gartner’s own descriptions, the criteria include things like product capabilities, customer experience, market responsiveness, strategy, etc. – highly qualitative factors that give analysts broad leeway. It’s an open secret in the enterprise software world that major analyst firms like Gartner operate on a “pay-to-play” model, and their endorsements often reflect vendor relationships more than product excellence 1. As one Lokad FAQ bluntly puts it, “vendors that choose not to engage in the substantial paid interactions with Gartner typically see themselves relegated to less favorable positions or entirely omitted.” The result is that Magic Quadrants tend to function as infomercials for those who pay up, rather than rigorous evaluations – many executives treat these rankings “with the same credibility they would assign to casual horoscopes” 2.

This systemic bias is not just an accusation from competitors; it’s borne out by how Gartner conducts business. Vendors invest heavily in analyst relations – buying Gartner research services, briefing analysts, purchasing reprint rights – knowing full well that their MQ position can improve with more engagement. Gartner, of course, denies any quid pro quo, but even if individual analysts strive for objectivity, the conflict of interest is inescapable. As Joannes Vermorel observed, there is a “pretense of neutrality” in these vendor evaluations, but in reality “the conflicts of interest are so prominent that you don’t get neutrality; what you get is pay-to-win.” 3 4 No code of conduct or analyst firewall can fully remove the subtle pressures; as Vermorel notes, even well-intentioned people exhibit unconscious bias when significant commercial interests loom 5 6. In the MQ context, this means large vendors with big marketing budgets and Gartner subscriptions are systematically favored. The absence of truly independent analysis is baked into the model – Gartner’s revenues come from the very firms being “objectively” ranked.

Vision vs. Execution – Who Defines Success?

The MQ’s two axes ostensibly measure a vendor’s “Vision” and “Execution,” but these concepts are nebulous. What counts as a bold vision in supply chain planning software? In many cases, it’s whatever Gartner’s analysts have been hearing in vendor briefings and market buzzwords. For example, having all the trendy acronyms on your roadmap (AI/ML, digital twin, real-time IBP, etc.) will tick the Vision box, whether or not your product truly delivers on them. Conversely, a vendor with a genuinely new approach might be marked down if it doesn’t fit Gartner’s preconceived template of “what good looks like.” Ability to Execute often boils down to size: number of customers, global presence, implementation partner network – essentially a proxy for marketing reach and enterprise sales execution, not actual successful outcomes. This skews the MQ against smaller, technically innovative firms (who might have better algorithms but fewer big references) and in favor of incumbents who have large install-bases even if their implementations often under-deliver.

Crucially, Gartner’s scoring does not account for the real-world success rate of deployments in any transparent way. A vendor that sells 100 copies of its software, 80 of which end in failure, will still score high on “Ability to Execute” by sheer sales and presence, whereas a vendor that sells 10 copies and succeeds with all 10 might be deemed weaker on execution. The MQ’s methodology thus penalizes quality in favor of quantity. It’s telling that Gartner’s own analysts admitted that user adoption of these planning solutions is abysmally low. At the 2024 Supply Chain Planning Summit, Gartner’s Pia Orup Lund shared that on average only 32% of a typical organization’s planners actually migrated to the new planning tool that was implemented – a shockingly low adoption rate given the multi-million dollar projects 7. In other words, two-thirds of supposedly successful deployments fail to win over users, becoming shelfware. Yet such outcomes barely dent a vendor’s Magic Quadrant positioning, since Gartner’s evaluation glosses over these failures. The “Ability to Execute” axis is not a measure of delivering value, but largely a measure of market penetration and vendor stamina. This calls into question the meaningfulness of the Leaders quadrant: execution in name only, not in reality.

The Quadrant’s False Objectivity

The very format of the MQ – a quadrant graphic – lends an air of scientific analysis, as if vendors were precisely measured and plotted on a Cartesian chart. This is misleading. Unlike a data-driven scatter plot, the positions on a Magic Quadrant are the result of closed-door discussions, weighted scoring rubrics that Gartner doesn’t fully disclose, and ultimately subjective judgment. The visual simplicity (who’s up and to the right vs down and left) masks a multitude of subjective choices. It also forces a one-size-fits-all comparison that ignores context: a “Leader” for one type of company might be a terrible choice for another’s needs, yet the MQ will still portray one as universally better. By condensing multifaceted products into a single dot, nuance gets lost. For example, a vendor might have an excellent solution for forecasting but a mediocre one for production scheduling – how do you reflect that in one X–Y point? Gartner’s answer is effectively to average it out and weigh it by whatever criteria they fancy that year. The result is a blurring of distinctions that can mislead readers into thinking differences are merely incremental. The quadrant format encourages a lazy interpretation: “top-right is best, bottom-left is worst,” bypassing the hard work of understanding trade-offs and specific capabilities. As Vermorel quipped, “Magic Quadrants are, as the name suggests, superstition at their best, and fake science at their worst.” 8 The harsh phrasing underscores that the quadrant graphic is more marketing theater than rigorous research.
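
To make the averaging problem concrete, here is a minimal sketch of how a weighted composite collapses a multi-dimensional product into one number. The criteria, weights, and scores below are entirely hypothetical (Gartner does not disclose its actual rubric); the mechanism, not the figures, is the point.

```python
# Minimal sketch: weighted composite scoring erases real differences.
# Criteria, weights, and scores are hypothetical, not Gartner's rubric.

CRITERIA_WEIGHTS = {
    "forecasting": 0.4,
    "production_scheduling": 0.3,
    "user_experience": 0.3,
}

vendors = {
    # Excellent forecasting, weak production scheduling.
    "Vendor A": {"forecasting": 9.0, "production_scheduling": 4.0, "user_experience": 6.0},
    # Weak forecasting, excellent production scheduling.
    "Vendor B": {"forecasting": 5.0, "production_scheduling": 9.0, "user_experience": 6.0},
}

def composite(scores):
    """Collapse a capability profile into a single number."""
    return sum(CRITERIA_WEIGHTS[c] * s for c, s in scores.items())

for name, scores in vendors.items():
    print(f"{name}: composite = {composite(scores):.2f}")
# Vendor A: composite = 6.60
# Vendor B: composite = 6.50
# Two opposite products land on nearly the same dot; a buyer who mostly
# needs scheduling would be badly served by reading the chart literally.
```

Shift the weights slightly and the ranking flips; that unverifiable degree of freedom is exactly what the dot hides.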

Legacy Vendors Dominating the Leaders Quadrant

Looking at the 2024 MQ for Supply Chain Planning, one cannot help but notice that the Leaders quadrant is effectively an alumni club of legacy vendors. Kinaxis, Blue Yonder, Oracle, OMP, Logility – these companies (or their predecessor names) have been around for decades. Kinaxis was founded in the 1980s (as WebPlan), Blue Yonder dates back to 1985 (as JDA Software), OMP to the 1970s, Logility to the 1990s, and Oracle is as old as modern IT itself. Their continued presence at the top could indicate enduring excellence – or it could indicate Gartner’s criteria inherently favor scale and longevity. History suggests the latter. These incumbents achieved prominence often not purely through superior technology, but through aggressive acquisitions and expanding their portfolios. Blue Yonder is a case in point: it is “the outcome of a long series of M&A operations”, resulting in “a haphazard collection of products, most of them dated” under one brand 9. Gartner’s MQ still lists Blue Yonder as a Leader with a “comprehensive microservices architecture” and full end-to-end suite, glossing over the reality that much of that suite is stitched together from older tools. Enterprise software doesn’t magically unify through M&A; integration is hard, and Blue Yonder’s stack shows its seams. The Lokad study of vendors noted that Blue Yonder “prominently features AI” in marketing, but the “claims are vague with little or no substance.” In fact, the few clues from Blue Yonder’s public technical materials (e.g. some open-source projects) “hint at pre-2000 approaches” like basic ARMA forecasting models 10. So we have a Leader hyping “AI” while likely using 20+ year-old forecasting techniques under the hood. This raises a tough question: Is Blue Yonder a Leader because of technical merit, or because of legacy momentum? Gartner’s report doesn’t ask this, but a skeptical review must.

Kinaxis and Oracle are also instructive examples. Kinaxis, celebrated for its RapidResponse platform, is indeed a pioneer of sorts – it introduced fast, in-memory “concurrent planning” well ahead of many competitors, and it remains very popular for Sales & Ops Planning. But it, too, is a legacy player modernizing on the go. Historically, Kinaxis did not offer advanced statistical or ML forecasting in its core; users had to import forecasts or use simple methods. A few years ago, Kinaxis recognized this gap and started bolting on probabilistic tools via acquisitions/partnerships (e.g. acquiring Rubikloud for AI forecasting, partnering with Wahupa for inventory optimization) 11 12. These are positive moves, but essentially Kinaxis is catching up on AI/ML capabilities that others have had, and doing so by integrating separate modules. This raises questions of technology coherence – Kinaxis’s new features are “bolt-ons” that “raise questions of tech stack coherence” 12. It remains to be seen whether these probabilistic modules are deeply integrated or just superficial add-ons for marketing. In the MQ narrative, Kinaxis is top-ranked due to its “Ability to Execute” and a decade of success, but a deep technical audit shows a deterministic legacy architecture evolving into a hybrid. Not to mention, the very in-memory approach that gives Kinaxis speed also imposes limits – large deployments face “high hardware costs and scalability limits as data grows (large deployments require massive RAM)” 13. This nuance is missing from Gartner’s assessment of “execution.” A planner reading the MQ might think Kinaxis is a safe bet because of its Leader status, without realizing that if their supply chain data is huge, they might hit cost/performance walls or need significant hardware investment to use Kinaxis’s real-time simulations. These realities rarely surface in Gartner’s write-up.
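
To illustrate the scalability point, here is a back-of-envelope sketch of an in-memory planning footprint. Every constant is an illustrative assumption rather than a Kinaxis-specific figure; what matters is that memory grows linearly with granularity and with the number of concurrent what-if scenarios.

```python
# Back-of-envelope RAM estimate for an in-memory planning model.
# All constants are illustrative assumptions, not vendor specifics.

sku_locations = 2_000_000      # SKU x location combinations
horizon_buckets = 52           # weekly buckets over a one-year horizon
series_per_cell = 6            # demand, supply, inventory, open orders, ...
bytes_per_value = 8            # 64-bit floats
overhead_factor = 3.0          # indexes, dependency graph, metadata
concurrent_scenarios = 10      # planners each holding a what-if copy

raw = sku_locations * horizon_buckets * series_per_cell * bytes_per_value
total_gib = raw * overhead_factor * concurrent_scenarios / 2**30
print(f"approx. {total_gib:,.0f} GiB of RAM")   # ~139 GiB here

# Switch to daily buckets and the bill grows roughly sevenfold; every
# dimension multiplies, which is why "just add RAM" gets costly fast.
```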

Oracle’s inclusion as a Leader in 2024 is another nod to incumbency. Oracle’s SCP solution is part of its vast Cloud SCM suite. Gartner applauds Oracle’s “vision for composable architecture” and its ability to “plan at any level of detail.” 14 But that reads like a brochure – “any level of detail” planning sounds great, except seasoned practitioners know that planning at extremely high granularity (say, SKU-store level with complex constraints) will not be instantaneous or even feasible if you truly mean any detail. There’s a computational trade-off: you either aggregate to plan fast, or you take more time (or more computing power) to plan in detail. Oracle, like others, is effectively implying they can square the circle. Perhaps their cloud can crunch more than older systems, but the claim of full granularity with no consequence strains credulity. It mirrors a general trend: legacy vendors rebranding as “cloud AI platforms” but under the hood still grappling with limitations. Oracle has acquired numerous companies over the years (Demantra for demand planning, G-Log, etc.) and integrated them to build its suite. Credit where due: Oracle has invested to modernize, but again the MQ blurb won’t mention how many years and consulting hours it might take to actually realize that “composable” vision in a client deployment.

It’s also notable which legacy vendors do not appear in the MQ Leaders or at all. SAP, for instance, is only a Challenger in 2024 (despite being the ERP juggernaut with an SCP product, IBP). Infor – another big ERP player that had acquired the likes of Mercia and Predictix for planning – is absent from the 2024 MQ altogether. Why? Possibly because Infor’s focus shifted (or it chose not to participate in Gartner’s evaluation). Lokad’s vendor study pointed out that Infor acquired Predictix (an AI forecasting specialist) in 2016, but “the forecast angle remained a second-class citizen” within Infor’s suite 15. Predictix’s supposedly advanced ML techniques were “deprioritized” and it’s “dubious that those methods outperform pre-2000 forecasting models”, with Infor’s “AI” claims deemed dubious as well 16. In short, Infor’s planning innovation fizzled, so they’re not on the MQ. This is actually a strike in favor of Gartner’s integrity – they didn’t mind dropping a big name when it fell behind. But it also underlines how acquisitions can lead nowhere: buying AI startups doesn’t guarantee leadership if the core company can’t integrate and execute. The irony is, those that do stay in the Leaders quadrant have similar acquisition-heavy histories (Blue Yonder with JDA/i2/Manugistics, Logility grabbing Garvis and Starboard in recent years 17 18, Kinaxis with Rubikloud, etc.), yet Gartner continues to give them the benefit of the doubt. The over-representation of legacy vendors suggests that past market share and relationship with Gartner often trump present technical excellence.

Hype vs. Reality: Questionable Claims in the Leaders Quadrant

The marketing fluff in supply chain planning is legendary, and the MQ write-ups often reflect vendor claims that warrant extreme skepticism. A recurring pattern with Leaders is the boast of sky-high ROI and transformative outcomes – usually without concrete evidence. For instance, many vendors tout figures like “30% inventory reduction, 98% service level, 90% productivity improvement” after implementing their solution. ToolsGroup, now a Niche Player but historically often cited by analysts, has advertised results such as “90+% product availability, 20-30% less inventory, 40-90% reduced workload.” While these numbers have presumably occurred for some client somewhere, they sound too good to be true in combination. A Lokad analysis cautioned that such stats are typically cherry-picked: “likely come from different clients each hitting one of those high marks, not one client hitting all simultaneously” – no one should expect all those gains at once 19. Reality involves trade-offs; you might reduce inventory 20% but then service level might dip, or vice versa. The MQ, however, rarely includes any such caveats when praising a Leader’s “ability to deliver value.” It tends to echo the success stories the vendor provides. The result is an inflation of expectations. A supply chain executive reading about Kinaxis or Blue Yonder in the MQ might think these tools will automagically solve problems and yield rapid ROI, when in fact the implementation might struggle and the gains, if any, come after long change management efforts.
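
A small simulation makes the cherry-picking mechanism visible. The client outcomes below are synthetic draws from plausible ranges (assumptions, not real vendor data): the per-metric best case that ends up in the brochure dominates what any single client actually achieved.

```python
# Sketch: why "best result per metric" marketing overstates what any one
# client achieves. Outcomes are synthetic draws, not real vendor data.
import random

random.seed(42)

clients = [
    {
        "availability_pct": random.uniform(88, 99),
        "inventory_reduction_pct": random.uniform(0, 30),
        "workload_reduction_pct": random.uniform(0, 90),
    }
    for _ in range(50)
]

# The brochure: best client for each metric, picked independently.
brochure = {k: max(c[k] for c in clients) for k in clients[0]}

# Reality: the single best all-round client (highest average outcome).
best_single = max(clients, key=lambda c: sum(c.values()) / len(c))

print("brochure   :", {k: round(v, 1) for k, v in brochure.items()})
print("best client:", {k: round(v, 1) for k, v in best_single.items()})
# The brochure row dominates every real client on every metric at once,
# a combination that no individual deployment actually delivered.
```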

Another area of hype is forecasting accuracy and AI. Every vendor now claims some form of “AI-powered forecasting” that will drastically improve demand predictions. Yet specifics are almost always missing. Blue Yonder’s and Logility’s blurbs mention AI/ML, Kinaxis talks about “Planning AI,” etc., but Gartner’s summary doesn’t press for details on how their AI is different or proven. A stark example is the concept of “demand sensing” – a buzzword for using very near-term data to adjust forecasts. ToolsGroup has used this term, as have others. However, as Lokad’s research noted, “claims about ‘demand sensing’ are unsupported by scientific literature.” 20 It’s basically a marketing term; there’s little evidence that what vendors call demand sensing yields consistently better forecasts beyond what good short-term stats can do. Similarly, one vendor (John Galt Solutions, a Challenger) brags about a proprietary algorithm “Procast” being more accurate than competitors, but provides no public proof – in fact, it’s telling that this algorithm was absent from the top ranks of the M5 forecasting competition, where open-source methods excelled 21. In all likelihood, John Galt’s secret sauce is not beating the likes of Facebook’s Prophet or Hyndman’s R packages in pure accuracy, but the MQ write-up wouldn’t reveal that. It takes independent digging to uncover these things. The MQ’s Vision axis tends to reward vendors for talking up AI and analytics, regardless of whether their approaches are novel or statistically sound. Consider o9 Solutions: last year (2023) Gartner had o9 in the Leaders quadrant, partly on the strength of its hype as a “digital brain” platform. By 2024, o9 slipped to Visionary. What changed? Possibly Gartner realized some of o9’s grand claims were unproven. Lokad’s inspection of o9 found that “many of its [AI] claims (for example, that its knowledge graph uniquely improves forecasting) are dubious without scientific backing” 22. Indeed, analysis of o9’s publicly visible tech components showed mostly standard techniques, “nothing fundamentally novel enough to justify the grand ‘AI’ branding” 22. This is a common story: marketing outpaces reality. Gartner, to its credit, eventually adjusts (as with o9), but only after initially amplifying some of that hype by placing the vendor as a Leader. This flip-flop also highlights how subjective the MQ is – a visionary one year, a leader the next, then back to visionary – which doesn’t inspire confidence in a stable, criteria-driven process.

One of the most misleading claims spread across Leader vendors is the idea of “real-time, end-to-end planning.” This phrasing suggests you can have a truly up-to-the-minute synchronized plan across your entire supply chain, maybe even automatically adjusting in real-time. Kinaxis and Blue Yonder have both used language around concurrent or continuous planning; Gartner’s text for Oracle highlights “planning at any level of detail” and Kinaxis is praised for automation and alignment. The contradiction lies in the scale versus speed trade-off. For large enterprises, “any level of detail” can mean millions of SKU-location combinations, complex multi-echelon constraints, seasonality, etc. Achieving an optimal plan even daily for that scope is a massive computational undertaking. To do it in real-time (sub-second or instantaneous updates whenever data changes) is virtually impossible with today’s algorithms and hardware, unless you sacrifice detail or optimality. Kinaxis addresses this by using in-memory architecture to recalc fast, but even they have limits (needing huge RAM and simplifying some calculations) 13. Blue Yonder’s “Luminate” platform talks about an AI engine and perhaps uses heuristics for quick adjustments rather than full reoptimization. The MQ write-ups don’t acknowledge these technical realities. They let the vendors have it both ways: claim comprehensive, granular analysis and instant response. A critical eye should notice this as marketing doublethink. For example, if a vendor claims to handle “real-time planning” and also “attribute-based planning at highly granular levels” (as Gartner notes for some Visionaries too) 23 24, one should ask: how do they maintain real-time speed with such granularity? The likely answer: they don’t, not without heavy hardware or simplifications. Lokad’s team has pointed out that pushing the envelope on both extremes usually fails – either the system bogs down, or it silently drops the granularity (e.g. updates some aggregate numbers in real-time but not everything). Unfortunately, Gartner’s MQ does not press vendors to resolve these contradictions. The appearance of cutting-edge capability is presented, and it falls to users to discover later that certain combinations of promises are infeasible.
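
A back-of-envelope calculation shows why the two promises collide. The constants below are illustrative assumptions (granularity, solver behavior, hardware throughput), chosen generously, and a full re-optimization still lands at about an hour, not sub-second.

```python
# Back-of-envelope: "real-time" vs. "any level of detail".
# Every constant is an illustrative assumption, deliberately generous.

sku_locations = 5_000_000       # planning granularity (SKU x location)
horizon_buckets = 365           # daily buckets over a one-year horizon
ops_per_cell = 200              # netting, constraints, allocation per cell
solver_passes = 500             # iterations of a heuristic/LP-style solver
sustained_ops_per_sec = 5e10    # one large server; memory-bound workloads
                                # rarely approach theoretical peak FLOPS

total_ops = sku_locations * horizon_buckets * ops_per_cell * solver_passes
minutes = total_ops / sustained_ops_per_sec / 60
print(f"one full re-optimization: ~{minutes:.0f} minutes")   # ~61 minutes

# Sub-second "real-time" updates at this scale therefore require either
# aggregation, massive hardware, or skipping re-optimization altogether.
```

Real systems use smarter incremental math than this naive count, but incrementality itself is the concession: something (optimality, scope, or granularity) is being traded away.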

Black-Box “AI” and Lack of Transparency

Another concern with the Leader quadrant vendors is how much they rely on “black-box” solutions. Many brag about AI-driven automation where the system makes decisions with minimal human input. In theory, this is great – who wouldn’t want an autopilot for supply chain? – but in practice, if the AI is a black box, it can be dangerous. Planners have decades of experience with optimization software that is unexplainable; they tend not to trust it, or it produces weird recommendations that are hard to debug. Blue Yonder, for example, has leaned heavily into AI since rebranding (the very name “Blue Yonder” came from an AI startup it acquired). However, little is published about how their AI works, and users often describe the need to manually override or adjust the outputs. Léon Levinas-Ménard noted that Blue Yonder’s approach comes with “black-box AI complexity”, a double-edged sword 25. It might be sophisticated inside, but if it’s opaque, it increases user resistance and risk of unseen errors. Gartner’s evaluation gives almost no insight into this. A vendor could have a fragile machine learning model under the hood, but as long as they have a few reference customers willing to say it helped them, Gartner will mark them high. There is a broader pattern of lack of technical transparency: with a few exceptions, these vendors do not publish research papers, do not compete in open algorithm competitions (as mentioned with John Galt’s absence from M5, and similarly none of the big Leaders had top entries in such events), and do not open-source meaningful parts of their software. They operate on trust and brand. Gartner’s Quadrant perpetuates that because it doesn’t demand evidence beyond customer interviews and demos. It’s telling that a vendor like ToolsGroup, which historically had a more analytical, white-box approach (with its well-known SO99+ optimization engine), felt the need to join the AI hype wave recently. ToolsGroup started dubbing everything “AI-powered” and introduced probabilistic forecasting in marketing around 2018, but did so in a clumsy way – advertising probabilistic forecasts yet still bragging about MAPE improvements 26 27 (even though MAPE, an error metric, is meaningless for probabilistic forecasts!). This kind of inconsistency shows a marketing-driven adoption of buzzwords without true understanding. Lokad’s critique was pointed: ToolsGroup’s claims of AI were “dubious” and their materials “hint at pre-2000 forecasting models” dressed up as new 28. If a relatively technical vendor like ToolsGroup succumbed to buzzword inflation, one can imagine how much pure marketing goes into the portfolios of more sales-driven companies.
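
The MAPE inconsistency is easy to demonstrate. The sketch below uses synthetic intermittent demand (a Poisson slow-mover, purely illustrative): MAPE breaks on zero-demand periods and says nothing about the distribution, while a quantile metric such as pinball loss, the metric family used in the M5 uncertainty track, actually scores a probabilistic forecast.

```python
# Why MAPE is the wrong yardstick for probabilistic forecasts.
# Synthetic intermittent demand: a Poisson(0.5) slow-mover.
import numpy as np
from scipy.stats import poisson

rng = np.random.default_rng(0)
actuals = rng.poisson(0.5, size=1000)       # mostly zeros

# The probabilistic forecast is the full Poisson(0.5) distribution.
# The point forecast a MAPE report would use: the median, which is 0.
median_forecast = 0.0

# MAPE is undefined where actual = 0, so zero periods get dropped --
# for a slow-mover that silently discards most of the history.
nonzero = actuals > 0
mape = np.mean(np.abs(actuals[nonzero] - median_forecast) / actuals[nonzero])
print(f"MAPE on the surviving periods: {mape:.0%}")   # 100%, and useless

# Pinball (quantile) loss scores the distribution quantile by quantile.
def pinball(y, q_forecast, tau):
    diff = y - q_forecast
    return np.mean(np.maximum(tau * diff, (tau - 1.0) * diff))

for tau in (0.5, 0.9, 0.99):
    q = poisson.ppf(tau, mu=0.5)            # quantile of the forecast
    print(f"pinball loss @ {tau}: {pinball(actuals, q, tau):.3f}")
```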

The Gartner MQ report does occasionally acknowledge when something is mostly vision. For instance, it notes a vendor’s “vision for AI” as a strength (e.g. Logility’s “above-average vision for AI” is mentioned after its recent acquisitions) 17 18. But calling “vision for AI” a strength essentially means they talk a good AI game. It’s not a delivered feature – it’s a plan or aspiration. Praising that in the same breath as actual capabilities blurs the line between current reality and future roadmap. This again serves the vendors: it rewards slideware and announced intentions. A customer might sign with a Leader thinking they’re buying an AI-powered, fully automated, real-time planning solution, only to find that a lot of those capabilities are early, unproven or require separate projects to implement. Gartner’s format doesn’t clearly differentiate proven functionality vs. roadmapped features in the MQ graphic; both get baked into that “Completeness of Vision” placement. Thus the Leaders quadrant tends to be filled with companies that are great at telling a compelling story about the future of supply chain (often borrowing that story from Gartner’s own published trends to curry favor), regardless of whether they are the ones actually realizing that future.

Ignoring the Ugly: Omitted Failures and Ongoing Struggles

An aspect conspicuously missing from Gartner’s glossy quadrant is the dark side of enterprise software: the failed projects, massive cost overruns, and shelved implementations. Supply chain planning, in particular, has a long history of failed or underwhelming deployments – so much so that many practitioners grow cynical of any new “solution” after being burned a couple of times. Yet if one reads the MQ report, you would think it’s all success stories and differentiating features. Gartner does gather customer feedback as part of MQ research, but it typically only publishes a sanitized summary of “Strengths” and “Cautions” for each vendor. Those cautions are usually phrased mildly (“some customers cite usability challenges” or “integration can be complex”). You will not see blunt statements like “Vendor X had multiple project failures in the last year” in an MQ. That kind of truth, if it emerges, comes through grapevines and user forums, not from Gartner. The result is an information asymmetry: a prospective buyer reading MQ might be unaware that, say, a certain Leader vendor has a reputation for 18-month implementations that often never go live. Gartner’s omission of failure rates does a disservice to the industry, as it paints an overly rosy picture.

Consider “time to value” – an absolutely critical factor for any project. Did Gartner evaluate how long each vendor’s typical implementation takes, or how often they deliver on time? If it did, that insight is not reflected clearly in the quadrant. We know anecdotally that some of the large suite vendors (like traditional Blue Yonder or SAP projects) could take years to fully roll out. Meanwhile, some newer SaaS players might deploy in months. But the MQ’s Ability to Execute doesn’t explicitly call this out. In fact, a smaller vendor might get knocked as “not scalable for large projects” even if they actually deploy faster, simply because they haven’t tackled as many global rollouts yet. Success bias also creeps in: Gartner largely talks to reference customers provided by the vendors, who are usually the happier ones. The many unhappy or less successful clients aren’t proactively offered up for interviews. So the sample is biased towards success cases. Gartner analysts know this, but the MQ write-ups seldom acknowledge it beyond generic caution statements.

The frequency of failed implementations is the elephant in the room. Various studies (including one by Gartner in a different context) have cited extremely high failure rates for big tech initiatives – e.g., Gartner famously said 85% of AI projects fail, and a large percentage of supply chain tech projects underdeliver. A LinkedIn summary of the 2024 Gartner SCP Summit mentioned that despite modern planning tech, many companies still struggle and planners don’t adopt the tools 29 7. When only 32% adoption is the average, that means the majority of projects are not yielding their intended impact. Yet, the MQ doesn’t integrate that metric into vendor rankings. If anything, it hints at it obliquely: a vendor with lower “Ability to Execute” might be one whose customers complained about usability or complexity. But it’s all reading tea leaves. The MQ graphic itself, showing some dots lower on the execution scale, doesn’t tell you “many clients failed to go live with this software.” It just shows a dot in the lower half, which could be misinterpreted as the company being small or something, rather than a red flag of troubled implementations. Gartner’s narrative thus sidesteps accountability: the vendors aren’t truly held accountable for outcomes in the field, only for selling and having a nice roadmap.

For a practitioner audience, this is a serious flaw. It means the MQ is not a reliable predictor of success. A “Leader” could very well lead you into a multi-year, multimillion-dollar quagmire if your organization isn’t extremely prepared and aligned, and Gartner wouldn’t have signaled that clearly. Conversely, a niche or visionary vendor might actually give you a quicker win, but Gartner’s low ranking might scare off your executives from considering them. This dynamic is why many experienced supply chain leaders take the MQ with a grain of salt and rely on peer recommendations and independent evaluations instead. In the words of Lokad’s FAQ, “genuine due diligence is best served by examining proven results in live operational contexts”, rather than trusting a “seal of approval from a pay-to-play consultancy” 30. The MQ provides at best a starting list of vendors, but absolutely must be tempered with external research into how those vendors have fared in companies similar to yours.

Challenging Gartner’s Leaders: Case Studies in Underwhelming Technology

To ground the critique, let’s zoom in on two of the vaunted Leaders from 2024 – Kinaxis and Blue Yonder – and examine whether their top-right positioning is justified by technical substance or belied by known issues.

Kinaxis (Leader) – Concurrent Planning, but Late to AI. Gartner positions Kinaxis as the highest Leader, praising its “unified user experience” and automation. Kinaxis’s strength indeed lies in its responsive planning engine: an in-memory model that propagates changes quickly so you can do scenario simulations on the fly. This is very useful for S&OP and what-if analyses. However, Kinaxis historically did not offer advanced forecasting or optimization out of the box. Its planning was largely rule-based and deterministic, relying on planners to set up supply/demand balancing logic. Recognizing industry shifts, Kinaxis recently added probabilistic forecasting and inventory optimization capabilities – but it did so by acquiring or partnering for those pieces (e.g. the Wahupa MEIO engine, the Rubikloud AI forecasting) 11 31. These additions raise questions: Are they seamlessly integrated into the RapidResponse platform, or are they external modules kludged in? Early indications suggest the latter – effectively Kinaxis now has “apps” for inventory optimization and ML forecasting that plug into its system. It’s not the same as a homegrown, unified analytical core. Moreover, Kinaxis’s venture into AI is quite new. As of 2023, it started marketing “Planning.AI,” which signals it knows it must play the AI game, but it has been cautious in its messaging – perhaps because it knows its AI/ML depth is still developing 32 33. Lokad’s analysis pointed out that Kinaxis had not publicly demonstrated its probabilistic forecasting prowess (no publications or competitions), so one must take it on faith that it’s effective 34. In short, Kinaxis absolutely deserves credit for its pioneering concurrency and many happy customers, but from a purely technical angle, it is hardly the most advanced in analytics. Its core architecture is aging – reliant on lots of RAM and CPU to brute-force fast calculations – and it is only now modernizing its forecasting approach, which others embraced years earlier. There have been whispers in user communities of Kinaxis struggling when datasets get very large or when trying to do detailed planning beyond certain thresholds (which aligns with the noted RAM/scalability concerns 13). So is Kinaxis truly the “best of the best” in supply chain planning software in 2024? Or is it just the best at selling an end-to-end vision and having a track record of implementations (albeit at a hefty price and effort)? Gartner’s MQ squarely puts it as #1, but a more critical ranking might put Kinaxis as very strong in interactive planning but still middling in algorithmic forecasting. The MQ’s single-dot scoring can’t reflect that dichotomy well. Thus, Kinaxis’s Leader position – while earned through market success – papers over its late start in AI and potential integration challenges ahead.

Blue Yonder (Leader) – All-in-One Suite or Miscellaneous Mess? Blue Yonder’s presence as a Leader seems almost a given due to its long heritage (formerly JDA). Gartner cites its “Luminate Platform” and comprehensive functionality, implying it does everything: demand planning, supply planning, inventory optimization, production scheduling, etc., plus newer things like analytics and microservices. The promise is an end-to-end, integrated platform. The reality reported by those who know the product is different. Blue Yonder’s suite is the result of many acquisitions over decades: they have multiple demand planning engines (the legacy JDA vs the newer Blue Yonder ML engine), multiple supply planning and fulfillment modules, store replenishment tools from different origins, etc. It has been a challenge for them to truly unify these. Lokad’s vendor study gave a scathing review: “under the BY banner lies a haphazard collection of products, most of them dated.” 9 The integration is more at the user interface and marketing level than at a deep technical level. For example, Blue Yonder might offer a common portal, but behind the scenes the demand planning might be a different codebase than the fulfillment or the production scheduling. From a customer perspective, that can mean inconsistent user experience and data synchronization headaches. Gartner’s MQ write-up does not mention this at all; it portrays Blue Yonder as a modern, unified cloud (the term “microservices architecture” is used 35, which sounds very cutting-edge). The skeptic asks: if Blue Yonder truly had a unified microservices re-architecture, why did it need to be acquired by Panasonic to stay afloat, and why are so many of its long-time customers reportedly still running older on-prem versions of JDA modules? The answer is that the transformation is incomplete. Blue Yonder’s marketing also leans heavily on AI now, likely due to the influence of the small Blue Yonder (a German AI startup) they acquired and then named the whole company after. Yet, as noted, their AI claims are vague. Lokad noted the lack of substance and that their known techniques were quite conventional 36. In day-to-day use, some BY modules like demand forecasting are okay, but not necessarily better than off-the-shelf statistical packages – and sometimes worse, given reports of struggles to get the “AI” to outperform simple baselines. There have also been high-profile implementation challenges: for instance, large retailers who tried to implement Blue Yonder’s demand and fulfillment planning have encountered multi-year delays and only partial success (this often isn’t public, but insiders know some examples). Gartner’s MQ, of course, makes no mention of any such cases. Blue Yonder remains in Leaders, likely buoyed by its breadth and global reach (and yes, its consistent engagement with Gartner and presence in analyst conversations). Challenging Blue Yonder’s Leader placement, one would say: if a vendor’s stack is an “aging technology” mix and its AI is unproven, should it be top-right? The MQ says yes, because they can execute (they have lots of service partners, they can support big clients – which is true) and they have a broad vision (i.e., a solution for everything). This illustrates the MQ’s bias: breadth and market presence trump depth or elegance. A company that does 10 things half-well will outrank one that does 3 things extremely well. Blue Yonder does many things, and some arguably poorly, yet it’s a Leader because it covers all bases and no one got fired for buying JDA (to paraphrase the old IBM saying). But supply chain teams should be wary – a jack of all trades suite can be a master of none, and integrating old tech under a new interface can create more complexity than it solves. The MQ doesn’t account for this risk.

These case studies reinforce why a skeptical lens is needed. The Leaders often have credentials (lots of customers, full feature lists, big teams) but also have baggage (old code, past failures, marketing fluff). Gartner’s format mostly only sees the former. It’s left to the user to uncover the latter, which is what we’re highlighting here.

The Visionaries and Niche Players: More Signal or Noise?

While much of our focus is on the Leaders and the MQ methodology, a brief word on the other quadrants: Visionaries, Challengers, and Niche Players. Paradoxically, some of the most interesting vendors reside there – but Gartner’s nomenclature can mislead here too. A “Visionary” in MQ terms means high Completeness of Vision, lower Ability to Execute. It might as well say “good ideas, not enough market presence/resources.” In 2024, the Visionaries quadrant included o9 Solutions, GAINSystems, E2open, and Dassault Systèmes (DELMIA). These are a mix of relatively newer players (o9, GAINS) and established ones that haven’t dominated this segment (E2open, Dassault). Notably, o9 was demoted from Leader to Visionary 37, which Gartner explained by saying o9 still has strong vision (no kidding – they market aggressively with buzzwords) but that execution concerns arose or competitors caught up. E2open and Dassault have interesting tech components (E2open has a broad supply chain network focus; Dassault owns Quintiq, a powerful optimization tool). Yet, none of them made Leader. Why? Likely because they are either smaller in SCP market share (GAINS is a smaller specialty provider, Quintiq is often used in very custom planning scenarios, etc.) or have had mixed customer feedback. The thing to note is that some Visionaries or even Niche players might be the right choice for certain situations. For example, GAINS (a.k.a. GAINSystems) is well-regarded for its inventory optimization prowess and has very happy clients in certain sectors – it’s just not as big as the Leaders. A company whose primary pain point is inventory optimization might get more value from GAINS than from, say, implementing Oracle’s whole suite. But the MQ’s nature is to emphasize Leaders. Visionaries get a nod, but many executives reading it will think, “They aren’t Leaders, so they’re second-tier.” This is unfortunate: in some cases a Visionary is a Leader-in-waiting that just hasn’t paid dues in the market, or a niche specialist that chooses depth over breadth. Gartner at least acknowledges them, but again the format downplays their stature.

The Niche Players quadrant in 2024 is crowded (Adexa, Coupa, ToolsGroup, Slimstock, AIMMS, Blue Ridge). This quadrant effectively says “lower vision, lower execution” – which can be a kiss-of-death label. But within Niche are also some new entrants and specialists that simply don’t match Gartner’s broad SCP definition. AIMMS, for instance, is a specialist in supply chain modeling (an optimization toolkit), and Blue Ridge focuses on distribution-centric planning. They are niche by design, serving particular needs, not aiming to be end-to-end. Their placement doesn’t necessarily imply they’re bad; it just means they aren’t broad or big enough in Gartner’s eyes. ToolsGroup being in Niche as a “new addition” 38 is interesting, because ToolsGroup was an established vendor for years – its absence before might have been due to not participating. Now it’s included, but Gartner tossed it into Niche with some praise for its vision on handling uncertainty (probably referencing its probabilistic approach) 39. One could argue ToolsGroup has more real vision (with its focus on probabilistic forecasting years ago) than some so-called Visionaries did. But Gartner’s criteria can be quirky. Coupa’s presence as Niche (after being Challenger before) shows how quickly fortunes change – Coupa acquired LLamasoft (supply chain design) and was then itself acquired, and apparently its SCP story isn’t resonating; hence its drop in the quadrant. The common theme is that the quadrant placements often lag or smooth over industry turmoil. A company might be struggling or evolving in reality, but in MQ land they move one quadrant over, or linger in a category that doesn’t fully reflect their potential or problems. It’s a coarse categorization.

From a critical perspective, one should treat Visionary/Niche players not as “ignore list” but as potentially hidden gems or at least as sources of specific capabilities. However, Gartner’s text often gives them short shrift – a few sentences each – compared to the attention lavished on Leaders. This again reflects Gartner’s business model: their clients (the readers of MQs, typically large enterprises) are often only interested in “top vendors,” and Gartner obliges. The unfortunate side effect is innovation suffers; if emerging or more focused players don’t get visibility, enterprises keep feeding the big fish, and the cycle continues.

The Omission of Disruptors: Where is Lokad (and Others)?

Perhaps the strongest indictment of the Gartner MQ is not who it includes, but who it leaves out. Nowhere on the 2024 quadrant do we see names like Lokad, despite Lokad being a supply chain software firm that by many technical measures out-innovates most of the MQ incumbents. Granted, Lokad is smaller and has taken an unconventional approach (focusing on probabilistic forecasting, a domain-specific language for supply chain, and a “quantitative supply chain” philosophy). But consider Lokad’s track record: it pioneered probabilistic forecasting a decade ago (far ahead of Kinaxis et al.), and in the M5 forecasting competition in 2020 (a global benchmark with hundreds of teams), Lokad’s methodology ranked #1 worldwide at the SKU level (and #6 overall among 909 teams) 40 41 – essentially proving its algorithms on an open stage. The company has its technology openly documented and even teaches supply chain science via YouTube lectures 42. This kind of transparency and technical achievement is rare. By objective standards, shouldn’t a vendor like that at least qualify as a Visionary, if not a Challenger? The reason it’s absent is very simple: Lokad refuses to play the Gartner game. Lokad has publicly stated it is not a Gartner subscriber and does not invest in analyst relations, focusing instead on building product and serving customers 43. As a result, Gartner’s analysts have minimal exposure to Lokad (and perhaps even a bias against it, since it’s a challenge to their narrative). The MQ inclusion criteria might say that a vendor needs a certain revenue or customer count, but one suspects even if Lokad met those, without paying Gartner, it would remain ignored or undervalued. This absence is a red flag for the MQ’s completeness. A quadrant that claims to cover “the most significant SCP solution providers” and yet omits a player known for technical excellence and unique approach is clearly not all-encompassing. And Lokad is not alone – other analytics-focused or emerging players (perhaps in academic or open-source realms, or regional specialists) don’t feature either.

One might argue Gartner can’t include everyone, and that’s fair. But the omission of a known innovator suggests a pattern: the MQ is inherently conservative. It lags on recognizing paradigm shifts. It’s great at cataloguing the established vendors and incremental improvements, but poor at acknowledging when a smaller entrant has a fundamentally better mousetrap. Gartner’s clients (big companies) also often ask Gartner to evaluate only established vendors (“we want to see how the usual suspects stack up”). Thus the MQ is as much a mirror of large enterprise procurement shortlists as it is an analysis. It reinforces a cycle: if it’s not on the MQ, many won’t consider it. Lokad’s strategy has been to bypass this by proving value directly to practitioners and through independent media. But how many potential buyers might never even hear of Lokad because it’s absent from Gartner’s reports? This is why we call it a pay-to-play bias – not in the sense of a crude bribe, but in the sense that the rules of the game favor those who participate in Gartner’s ecosystem.

From a truth-seeking standpoint, the absence of technically sound yet disruptive vendors like Lokad from the MQ should make readers very cautious. It means the MQ’s view of “completeness of vision” might actually be incomplete. It also means that if your goal is to find the best solution for your supply chain problem, you cannot rely solely on the MQ; you must cast a wider net. The MQ should perhaps come with a warning label: “Non-traditional or maverick approaches not represented.” In scientific terms, it’s as if a research review excluded outlier studies that had breakthrough results simply because they weren’t published in the usual journals. A major quadrant purporting to map innovation that excludes one of the few vendors known for a radically different approach (probabilistic programming in this case) is arguably invalid as a map of innovation. It has a big blind spot.

Conclusion: A Call for Skepticism and Deeper Analysis

The Gartner Magic Quadrant for Supply Chain Planning Solutions, 2024 edition, presents itself as the definitive guide to choosing a planning software vendor. In reality, it is a highly subjective, commercially influenced snapshot that should be read with healthy skepticism. We have seen how the MQ’s structure – its axes and visuals – conceal deep biases: favoring large legacy vendors, rewarding marketing hype and broad promises, and overlooking critical factors like implementation success and technical depth. The Leaders quadrant, far from an assurance of quality, contains vendors with well-known shortcomings, from Kinaxis’s reliance on bolt-on AI to Blue Yonder’s patchwork platform and others’ inflated claims. Gartner’s pay-to-play dynamics and the “infomercial” nature of some Magic Quadrants mean that vendor ratings can correlate as much with Gartner engagement as with product excellence 1. The over-emphasis on vision (often meaning buzzwords) and execution (often meaning sales footprint) creates a ranking that is only loosely connected to what actually drives success in supply chain planning – namely, solid technology tailored to the business’s needs, implemented by capable people, and adopted by its users.

For a company seeking a supply chain planning solution, the MQ can be a starting point – it lists many players, and Gartner’s detailed report (outside the quadrant graphic) does note some strengths and weaknesses. But one must go beyond the quadrant. Treat it as one input among many, and critically cross-examine its claims. Ask: What’s not being said? What might be biased? Investigate independent reviews, talk to actual users (not just the happy references), and consider doing pilots or benchmarks. The maxim “trust but verify” applies strongly – or perhaps “mistrust until verified.” As we have highlighted, even Gartner’s own analysts acknowledge how tough it is to get these projects to succeed (with shockingly low adoption rates in many cases) 44. That reality should humble any glowing quadrant ranking.

Ultimately, the Magic Quadrant’s biggest value may be in provoking the right questions rather than giving answers. It can alert you to who the big players are and what they claim. But it falls on you to cut through the hype. If a vendor says “AI-driven real-time planning,” challenge them to explain concretely how it works and how they avoid the pitfalls. If a Leader has never published or proven its tech, don’t take Gartner’s word that it’s great – demand evidence. And beware of confirmation bias: once a vendor is labeled a Leader, we tend to rationalize why they deserve it. Try the inverse: imagine they weren’t on the quadrant; would you still shortlist them? Conversely, imagine a niche player had the marketing clout of a Leader; would their tech suddenly seem more viable?

The MQ provides a comforting simplification in a complex domain, but managing a supply chain is not as simple as picking the farthest-up-right dot. In fact, the dot might be steering you away from a better solution that’s off the chart. Savvy supply chain professionals will therefore use Gartner’s MQ as a light reference, not a bible. They will appreciate why some call these quadrants “fake science” 8 and instead focus on first principles and real evidence. As Joannes Vermorel advises, real-world case studies and proven results should trump paid ratings 30. In supply chain planning, what matters is whether the software delivers improvements in service levels, inventory, cost, and agility – and whether it can be sustained in your organization. That doesn’t come out of an x-y plot, but from rigorous evaluation and maybe a bit of adversarial thinking (testing vendor claims against hard scenarios).

In conclusion, Gartner’s 2024 MQ for Supply Chain Planning, when stripped of its mystique, appears as a conservative, marketing-tinged portrayal of the vendor landscape. It highlights the usual giants (with all their warts unspoken), sprinkles in some smaller ones, and misses some true innovators. A maximally truth-seeking review finds that the emperor has few clothes: the quadrant graphic conceals more than it reveals. By being skeptical and demanding technical depth over glossy narratives, one can avoid the quadrant’s pitfalls. The onus is on the buyer to see through the quadrant’s limitations – because successful supply chain planning is rooted in reality, not in magic. 2 4

Footnotes


  1. FAQ: SCM Reassurance ↩︎ ↩︎

  2. FAQ: SCM Reassurance ↩︎ ↩︎

  3. Adversarial market research for enterprise software - Lecture 2.4 ↩︎

  4. Adversarial market research for enterprise software - Lecture 2.4 ↩︎ ↩︎

  5. Adversarial market research for enterprise software - Lecture 2.4 ↩︎

  6. Adversarial market research for enterprise software - Lecture 2.4 ↩︎

  7. The State of Supply Chain Planning: Takeaways from Gartner’s London Summit ↩︎ ↩︎

  8. Joannes Vermorel, LinkedIn post: #supplychain #digitaltransformation #predictiveanalytics ↩︎ ↩︎

  9. Market Study, Supply Chain Optimization Vendors ↩︎ ↩︎

  10. Market Study, Supply Chain Optimization Vendors ↩︎

  11. Supply Chain Planning and Forecasting Software ↩︎ ↩︎

  12. Supply Chain Planning and Forecasting Software ↩︎ ↩︎

  13. Supply Chain Planning and Forecasting Software ↩︎ ↩︎ ↩︎

  14. What’s Changed: 2024 Magic Quadrant for Supply Chain Planning Solutions ↩︎

  15. Market Study, Supply Chain Optimization Vendors ↩︎

  16. Market Study, Supply Chain Optimization Vendors ↩︎

  17. What’s Changed: 2024 Magic Quadrant for Supply Chain Planning Solutions ↩︎ ↩︎

  18. What’s Changed: 2024 Magic Quadrant for Supply Chain Planning Solutions ↩︎ ↩︎

  19. eCommerce Optimization Software ↩︎

  20. Market Study, Supply Chain Optimization Vendors ↩︎

  21. Market Study, Supply Chain Optimization Vendors ↩︎

  22. Supply Chain Planning and Forecasting Software ↩︎ ↩︎

  23. What’s Changed: 2024 Magic Quadrant for Supply Chain Planning Solutions ↩︎

  24. What’s Changed: 2024 Magic Quadrant for Supply Chain Planning Solutions ↩︎

  25. eCommerce Optimization Software ↩︎

  26. Market Study, Supply Chain Optimization Vendors ↩︎

  27. Market Study, Supply Chain Optimization Vendors ↩︎

  28. Market Study, Supply Chain Optimization Vendors ↩︎

  29. The State of Supply Chain Planning: Takeaways from Gartner’s London Summit ↩︎

  30. FAQ: SCM Reassurance ↩︎ ↩︎

  31. Supply Chain Planning and Forecasting Software ↩︎

  32. Supply Chain Planning and Forecasting Software ↩︎

  33. Supply Chain Planning and Forecasting Software ↩︎

  34. Supply Chain Planning and Forecasting Software ↩︎

  35. What’s Changed: 2024 Magic Quadrant for Supply Chain Planning Solutions ↩︎

  36. Market Study, Supply Chain Optimization Vendors ↩︎

  37. What’s Changed: 2024 Magic Quadrant for Supply Chain Planning Solutions ↩︎

  38. What’s Changed: 2024 Magic Quadrant for Supply Chain Planning Solutions ↩︎

  39. What’s Changed: 2024 Magic Quadrant for Supply Chain Planning Solutions ↩︎

  40. Market Study, Supply Chain Optimization Vendors ↩︎

  41. Market Study, Supply Chain Optimization Vendors ↩︎

  42. Market Study, Supply Chain Optimization Vendors ↩︎

  43. FAQ: SCM Reassurance ↩︎

  44. The State of Supply Chain Planning: Takeaways from Gartner’s London Summit ↩︎