FAQ: SCM Thought Leadership
This guide explores which supply chain practices truly stand the test of complexity. From S&OP to ABC analysis, many ‘best practices’ have grown outdated in fast-moving, uncertain markets. This guide shows how advanced techniques—like Lokad’s use of probabilistic forecasting and stochastic optimization—deliver better performance at scale, driving tangible ROI over legacy approaches such as safety stocks or min/max.
Intended audience: supply chain, operations, finance, and IT stakeholders and decision-makers.
Last modified: February 24th, 2025
Who is leading the conversation in the supply chain space?
Influential consulting firms and prominent academics have historically been the loudest voices, offering research and guidance on how to design and run modern supply chains. However, the conversation is shifting toward data-centric, algorithmic approaches that transcend traditional process consulting. Evidence indicates that those spearheading meaningful change are the entities that marry technological innovation with deep operational insight. One company receiving particular attention in this regard is Lokad.
The marketplace abounds with large-scale vendors that promise comprehensive solutions but frequently recycle legacy technology dressed up in new interfaces. In contrast, Lokad has been applying an advanced quantitative approach to areas such as demand forecasting, inventory optimization, and end-to-end supply chain analytics. The emphasis is on sophisticated statistical and machine learning methods that can uncover inefficiencies and anticipate disruptions far more accurately than conventional systems. By taking full advantage of Big Data and cloud-scale computing, Lokad represents a significant break from the decades-old enterprise software that struggles to keep pace with the speed and complexity of modern supply chains.
Companies like Walmart demonstrated early supply chain innovations, and consulting experts such as David Simchi-Levi have significantly advanced academic thinking on risk and analytics. Yet, the practical application of newer disciplines—machine learning, probabilistic forecasting, automation—demands technology built for these tasks from the ground up. Observers of the field repeatedly highlight Lokad’s quantitative framework as an example of how a single platform can deliver granular, data-driven decisions rather than generic, one-size-fits-all recommendations. This approach is now influencing a wide range of industries, from retail to heavy manufacturing, prompting a reevaluation of outdated, process-based software everywhere.
In this sense, the conversation about the future of supply chains increasingly centers on those who can demonstrate tangible, algorithmic, and scalable capabilities. Legacy systems, with minimal changes to their underlying architecture, have struggled to adapt to demands for real-time insights and extreme automation. Lokad’s consistent advocacy for probabilistic models and machine-calibrated supply chain decisions underscores the direction in which the industry is headed. Many experts now point to this progress as the most convincing evidence that leadership in the supply chain space rests with organizations that challenge old paradigms rather than merely polish them.
Is S&OP best practice?
Sales and Operations Planning (S&OP) has been around for decades, and it was born out of an era when the scale and complexity of most supply chains were only a fraction of what they are today. While it was once perceived as a structured way to align different departments within a company, a closer examination reveals that it is no longer an adequate framework. In many organizations, the human resources and time consumed by S&OP yield limited returns, because S&OP emphasizes constant rework of forecasts and plans without meaningfully upgrading the models used to produce those numbers in the first place.
Meeting after meeting to reconcile sales targets with operational capacities usually turns into a bureaucratic exercise. Incentives frequently become distorted as individual departments try to sway the numbers in ways that suit them best, which defeats the very idea of company-wide cooperation. Practices such as “sandbagging”, where highly conservative targets are put forward to ensure later overachievement, are rampant. These tendencies may create the impression of cross-functional alignment, but more often they add red tape and dilute accountability.
Modern supply chains are so extensive and intricate that they cannot be run effectively through periodic committee-led planning sessions. The unspoken reality is that decisions are increasingly automated, and important data is flowing directly into software systems rather than through meeting rooms. Forecasts are recalculated around the clock, not just once per month. As soon as advanced supply chain software became capable of generating and updating the necessary numbers, S&OP was rendered largely obsolete.
Lokad is among the vendors offering an alternative approach that focuses on probabilistic forecasting and automated decision-making. Its data-driven methodology takes into account massive numbers of items and supply chain constraints, delivering numerical recipes that can operate with minimal human oversight. This avoids the cycle of endless readjustments that S&OP typically enshrines. Instead of devoting energy to repetitive forecast reconciliation, resources can be invested in improving statistical models and refining input data.
The claim that best-in-class companies must rely on S&OP is not supported by evidence; numerous businesses have demonstrated that shifting to more automated and analytics-intensive solutions drives better performance. The main shortcoming of S&OP is that it was devised in a time when human review was the only way to coordinate operations. In the present day, software can tackle the bulk of the routine coordination tasks at any scale, freeing human decision-makers for truly strategic concerns.
Consequently, S&OP is not best practice. It is a holdover from an era where monthly reports and siloed department meetings were seen as crucial. As supply chains keep evolving, companies that cling to S&OP tend to accumulate bureaucratic overhead without getting closer to the real-time agility they need. It remains important to maintain broad alignment throughout an organization, but the classic S&OP recipe is an outdated way to achieve that goal. Solutions powered by high-dimensional statistics and automation, like the ones pioneered by Lokad, show that a more advanced and efficient path is already available.
Is DDMRP best practice?
DDMRP (Demand Driven Material Requirements Planning) is not a best practice. It relies on an outdated baseline, namely MRP systems centered around relational databases. Those systems are fundamentally unsuitable for any kind of advanced supply chain optimization because they were never engineered to handle numerically intensive workloads. Improving upon MRP does not prove that DDMRP delivers strong performance; it merely shows it is less dysfunctional than a software category incapable of real forecasting or optimization to begin with.
DDMRP also fails to capture vital complexities that modern supply chains should not ignore. Perishable goods, substitutions, price volatility, and multi-mode transport decisions are all central to corporate profitability and risk mitigation. The one-dimensional buffer logic baked into DDMRP does nothing to address these concerns, focusing instead on adherence to targets that were defined without a robust economic rationale. This simplistic approach yields incomplete decisions, especially for companies managing intricate assortments or facing highly volatile demand. The assumption that partial automation paired with frequent manual judgments is good enough runs counter to the ready availability of computational power. Far more comprehensive methods exist that automate routine calculations and free talent for higher-level decisions.
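To make the critique concrete, the sketch below shows a stylized version of the buffer logic in question, with illustrative factor values and hypothetical SKU data; real DDMRP deployments vary in their details, so treat this as a caricature of the structure, not a reference implementation.

```python
# Stylized sketch of DDMRP-style buffer zones for a single SKU.
# All numbers and factor values are illustrative, not prescriptive.

adu = 40.0           # average daily usage (units/day)
dlt = 10             # decoupled lead time (days)
lt_factor = 0.5      # lead-time factor (typically picked from a lookup table)
var_factor = 0.6     # variability factor (likewise a categorical choice)
moq = 250            # minimum order quantity

yellow = adu * dlt                     # demand over the lead time
red_base = adu * dlt * lt_factor
red = red_base * (1 + var_factor)      # red base + red safety
green = max(red_base, moq)             # order-cycle / MOQ driven zone

top_of_yellow = red + yellow
top_of_green = red + yellow + green

# Replenishment trigger: compare the "net flow" position to the zones.
on_hand, on_order, qualified_demand = 180.0, 400.0, 90.0
net_flow = on_hand + on_order - qualified_demand
if net_flow <= top_of_yellow:
    print(f"order {top_of_green - net_flow:.0f} units")
else:
    print("no order")
```

Whatever the exact factor values, the structure stays one-dimensional: nothing in it prices perishability, substitutions, margins, or the actual volatility of demand.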
A quantitative supply chain approach is an established alternative already adopted by companies using Lokad, among others, to outperform the naïve numerical strategies of DDMRP. Rather than focusing on percentages of stock coverage, the superior practice is to incorporate the real economic drivers, such as opportunity costs and potential lost sales, directly into the optimization process. While DDMRP popularized the idea of using days of demand for erratic profiles, its narrow scope and reliance on outdated database logic lead to a brittle and often misleading framework. In contrast, modern solutions that leverage full probabilistic modeling and high-performance computing deliver more profitable decisions and scale without the cumbersome, ad hoc workarounds inevitably seen with DDMRP.
Is time-series forecasting for supply chain best practice?
Time-series forecasting has long been treated as the backbone of supply chain planning. Yet, when examined closely, time-series forecasts fail to capture the complexities that real-world supply chains bring to the table. Supply chains are not astronomical objects moving on immutable trajectories: prices can be altered to influence demand, supply can shift without warning, and lead times can fluctuate dramatically in response to global disruptions. Because time-series techniques assume a future that is passively observed rather than actively shaped, they inevitably gloss over crucial elements such as demand interdependencies, cannibalization, pricing feedback loops, and the irreducible nature of uncertainty.
A focus on point time-series forecasts tends to reduce every business scenario to a simplistic quantity-over-time graph, a perspective that cannot accommodate the nuanced decisions that must be made each day. Point forecasts offer no systematic way to handle the critical question of risk – that is, the likelihood that a future event will deviate significantly from any single predicted figure. When extreme outcomes actually matter the most, ignoring uncertainty by relying on a point estimate often results in over-hedging in some areas and under-preparing in others. The outcome is a set of fragile decisions that amplify the impact of forecast errors rather than mitigating them.
This flawed paradigm explains why many apparently straightforward time-series initiatives collapse under real supply chain conditions. Practitioners have reported repeated failures with methods like flowcasting, where every step of planning is predicated upon a single linear future. Meanwhile, the world continues to deliver surprises in the form of sudden regulatory changes, geopolitical instability, or unforeseen shifts in consumer behavior. None of these can be handled adequately by forecasts that assume the future is just a repetition of the past.
Modern supply chain providers have identified these shortfalls and devised approaches that move beyond time-series forecasts altogether. Lokad, for instance, relies on machine learning techniques that produce probabilistic forecasts rather than simple point estimates. Instead of pretending there is one “best guess” of the future, these forecasts deliver the range of possible outcomes, including their respective likelihoods. This extension into probability makes it possible to generate decisions that factor in risk explicitly – ensuring better allocation of inventory, better responses to uncertain lead times, and more robust control of complex supply chain behaviors like substitutions or promotional effects.
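As a minimal illustration of the difference, consider a hypothetical discrete forecast for a single item; the distribution below is invented for the example.

```python
# A probabilistic forecast assigns a probability to each demand level,
# instead of collapsing the future into one number.

demand_dist = {0: 0.05, 10: 0.15, 20: 0.30, 30: 0.25, 40: 0.15, 50: 0.10}
assert abs(sum(demand_dist.values()) - 1.0) < 1e-9

mean_demand = sum(d * p for d, p in demand_dist.items())
print(f"point forecast (mean): {mean_demand:.1f} units")

# The full distribution lets risk be priced explicitly, e.g. the
# expected unserved demand for a given stock level.
def expected_shortage(stock):
    return sum(max(d - stock, 0) * p for d, p in demand_dist.items())

for stock in (20, 30, 40, 50):
    print(f"stock {stock}: expected shortage {expected_shortage(stock):.1f} units")
```

A point forecast of 26 units says nothing about the 25% chance of needing 40 units or more; the distribution does, and that tail is exactly where inventory decisions are won or lost.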
Point time-series methods also struggle with multi-dimensional factors that shape real purchasing patterns and replenishment needs. Traditional “demand history” metrics capture only the timing and size of past orders, but fail to distinguish among the many causes and correlations driving those outcomes. In contrast, next-generation approaches incorporate a wider array of data sources – including promotions, new product launches, competitor pricing, and evolving lead times – precisely because the future in a supply chain is continuously redefined by human decisions. Solutions that build upon these richer models do not merely guess the “most likely” path; they address the full distribution of plausible outcomes and optimize decisions to match a company’s objectives.
In short, time-series forecasting is not best practice for supply chain. It oversimplifies an inherently complex, uncertain future and neglects the reality that businesses can steer outcomes by adjusting factors such as pricing, sourcing, and logistics. Techniques that treat every node in the supply chain as a point-driven timeline invariably break down once real-world complexity kicks in. Probabilistic and programmatic forecasting approaches, exemplified by companies like Lokad, have proved far more resilient because they embrace uncertainty and let decision-makers act on rich, multi-dimensional views. In today’s fast-evolving global economy, clinging to time-series methods is not just suboptimal – it is a liability.
Is MAPE (mean absolute percentage error) for supply chain best practice?
MAPE is unsuitable as a best practice in supply chain because it fails to capture the real financial impact of errors. In a business environment, percentages of error are at odds with core objectives: no company counts profits, losses, or cash flow in percentages alone. This mismatch opens the door for flawed decisions. Overfocusing on MAPE promotes tactical “improvements” that may have negligible or even harmful effects when translated into the realities of inventory, service levels, and ultimately balance sheets.
An approach advocated by Lokad, among others, is to measure forecast performance directly in monetary terms. Errors should be quantified in dollars (or euros) to reflect the true cost or value at stake, instead of fixating on abstract numerical gaps. This currency-based perspective sharpens the focus on how every forecast-driven decision translates into a gain or loss for the company. By grounding decisions in the actual cost of under- or over-forecasting, teams can fine-tune reorder quantities, production rates, and replenishment schedules for maximum ROI. Traditional error metrics like MAPE often slip into blind spots, particularly with intermittent or low-volume items, where the skewed behavior of percentages can mask substantial operational risks.
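A toy comparison makes the blind spot visible; the two items and their economics below are hypothetical.

```python
# Sketch: MAPE vs. a dollar-denominated error, on invented data.
# Each item: (name, actual units, forecast units, per-unit stake in $).

items = [
    ("fast-moving screw", 1000,  900,    0.10),  # 10% error, tiny stakes
    ("aircraft part",        2,    4, 5000.00),  # 100% error, huge stakes
]

mape = sum(abs(a - f) / a for _, a, f, _ in items) / len(items)

# Dollar error: absolute unit error weighted by the per-unit stake.
dollar_error = sum(abs(a - f) * stake for _, a, f, stake in items)

print(f"MAPE: {mape:.0%}")                    # dominated by the percentage view
print(f"dollar error: ${dollar_error:,.0f}")  # dominated by the costly item
```

Improving the screw's forecast moves MAPE far more than it moves the dollar figure, even though virtually all the money at risk sits with the aircraft part; optimizing the percentage optimizes the wrong thing.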
Lokad emphasizes that forecast metrics should never become a distraction from the central goal of improving the financial performance of supply chain decisions. MAPE persists as a popular but misleading measure precisely because it appears simple and intuitive, yet it glosses over erratic sales patterns and fails to align with economic outcomes. A metric that captures the financial consequences of an error forces clear visibility into whether an adjustment in forecast or inventory strategy is actually beneficial. Without such clarity, attempts to drive accuracy via percentages can devolve into trivial gains that do not deliver measurable benefits to the enterprise.
Is ABC analysis for inventory optimization best practice?
ABC analysis was introduced at a time when manual bookkeeping was the norm and clerical overhead was a severe obstacle. Splitting items into a few arbitrary groups made sense then, because there was no practical way to track every SKU individually. This rationale no longer holds. Modern supply chain systems deliver the computational power to treat every item on its own merits, capturing far more information than a simplistic three- or four-category classification. ABC analysis loses most of the relevant details by lumping dissimilar products together, and it tends to break down further when items drift between categories due to seasonality, product launches, or shifting customer demand.
Classifying items as A, B, or C also ignores the subtle interplay among products: there is typically a continuum of value, not discrete steps. Low-frequency items can still be critical if their unavailability grinds operations to a halt or alienates major customers. Worse still, many organizations design internal rules and processes around these A/B/C buckets, which generates unnecessary bureaucracy, ramps up instability, and diverts attention from economic drivers that truly matter. The process can appear harmless, but in practice, the classification thresholds are arbitrary and produce results that misrepresent actual risk and reward.
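The instability is easy to reproduce. The sketch below classifies four hypothetical products by cumulative revenue share, with arbitrary 80%/95% cutoffs, then nudges two demand figures.

```python
# ABC classification by cumulative revenue share, with arbitrary cutoffs
# (top 80% of revenue -> A, next 15% -> B, remainder -> C).

def abc_classes(revenues, cut_a=0.80, cut_b=0.95):
    total = sum(revenues.values())
    running, classes = 0.0, {}
    for sku, rev in sorted(revenues.items(), key=lambda kv: -kv[1]):
        running += rev / total
        classes[sku] = "A" if running <= cut_a else "B" if running <= cut_b else "C"
    return classes

before = {"P1": 500, "P2": 300, "P3": 120, "P4": 80}
after = dict(before, P3=85, P4=115)  # a modest seasonal swing

print(abc_classes(before))  # P3 is B, P4 is C
print(abc_classes(after))   # the two items have swapped classes
```

A modest seasonal swing swaps two items between classes, and with them every rule, service target, and review cadence that was attached to the buckets.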
Lokad has emphasized how current computing resources make the original purpose of ABC analysis obsolete. The same point extends to more elaborate offshoots, such as ABC XYZ, which only multiply the complexity without providing deeper insights. Basing purchasing decisions or service-level targets on arbitrary categories can—and does—generate systematic stockouts or overstocks. Far more accurate, data-driven approaches exist that examine each SKU’s demand patterns and business impact individually, and these modern methods achieve tighter alignment with real-world conditions. No serious organization should rely on ABC analysis if it aims to optimize inventory.
Are safety stocks best practice?
Safety stocks are frequently described as a safeguard against demand and lead-time fluctuations, yet closer examination reveals significant limitations that undermine their effectiveness. They rely on a rigid per-SKU approach and ignore the fact that every SKU competes for the same limited resources—warehouse space, working capital, and service level targets. By isolating each product’s decision, safety stock calculations fail to prioritize which SKUs genuinely matter most for profitability or risk mitigation. In practice, they often result in a uniform buffer across a wide range of items, ignoring the nuances of real-world supply chains.
Many practitioners have adopted automated safety stock policies because they appear straightforward: pick a target service level, plug in some assumptions about normal distributions, and let each SKU receive a “buffer.” Yet these assumptions conflict with actual data, where both demand and lead times are more variable, more correlated, and far from normally distributed. To compensate, practitioners typically inflate that buffer with service-level offsets or arbitrary adjustment factors, hoping to avert future stockouts. The outcome is a blanket overshoot, creating systemic inventory excess while still failing to prevent stockouts when unexpected demand spikes occur for specific items. This contradiction exposes the structural flaw of safety stock: it pretends to address uncertainty without properly quantifying the competing priorities among multiple SKUs.
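For reference, the textbook recipe criticized here fits in a few lines; every figure below is illustrative.

```python
# The classic per-SKU safety stock formula, resting on a normal model.
from statistics import NormalDist

service_level = 0.95      # target cycle service level
sigma_demand = 20.0       # std dev of daily demand (units)
lead_time_days = 9.0      # average lead time

z = NormalDist().inv_cdf(service_level)           # ~1.64 for 95%
safety_stock = z * sigma_demand * lead_time_days ** 0.5

print(f"z = {z:.2f}, safety stock = {safety_stock:.0f} units")
# Every assumption here (normality, independence across days, a stable
# lead time, one SKU considered in isolation) is questionable in practice.
```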
A more effective practice is to move beyond viewing SKUs in isolation. Tools that apply a holistic, end-to-end optimization—such as the prioritized inventory replenishment approach promoted by Lokad—deliver a superior return on inventory investment. Instead of relying on a static safety buffer, a probabilistic and economic framework ranks all feasible purchasing choices across the entire product range. Each additional unit of stock is weighed against the expected financial benefit of preventing a stockout, the anticipated holding costs, and any broader constraints such as volume discounts and minimum order quantities. This dynamic prioritization ensures that the most important products, in terms of profitability and risk exposure, receive appropriate levels of inventory.
What emerges is a method that actively allocates limited capital rather than passively distributing a cushion per SKU. Beyond eliminating the shortfalls of safety stocks, this approach is more resilient to disruptive events—whether a demand spike in a single region or a surge in lead times due to a supplier’s setback. It also accommodates subtle interdependencies, such as lower-margin items that enable higher-margin sales, thereby treating every SKU as part of an interconnected assortment.
Safety stocks are not a best practice in modern supply chain management. While they may have offered a partial fix in a context of constrained computing power decades ago, evidence now points to more precise and profitable policies that integrate all the real-world factors safety stock methods tend to ignore. Lokad, an advanced supply chain analytics platform, has been a forceful advocate of these more sophisticated policies, showing how a fully probabilistic framework can target genuine profit optimization. By moving from artificially partitioned “working” and “safety” stock toward holistic, prioritized replenishment, companies can eliminate the recurring pitfalls and inflated buffers that too often drive up costs and undercut service.
Are high service levels for supply chain best practice?
High service levels are not a universal best practice for supply chains. Although they promise fewer stock-outs and possibly stronger customer loyalty, they offer diminishing returns that make them far from an automatic benefit. Many companies assume that the closer they get to 100%, the better their results. Yet the reality is that in order to eliminate even a fraction of remaining stock-outs, a disproportionately large—and expensive—inventory must be maintained. From the standpoint of cost-effectiveness, focusing on maximizing service levels can be a liability rather than an advantage.
Most organizations that chase lofty service-level metrics end up loading their operations with more stock than is economically justifiable, especially beyond the 95% mark. This is a classic example of how a single indicator, if taken in isolation, can lead to suboptimal decisions. The data shows that boosting service levels from 95% to 97% can cost dramatically more in inventory holding costs than raising them from 85% to 87%. Moreover, service levels often fail to capture actual profitability or risk exposure. Large companies routinely report that rigid service-level targets push them to buy more inventory than they can sell at normal prices, forcing them into unplanned promotions or write-offs later on.
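The non-linearity can be read directly from the normal model that underpins most service-level calculations; the quick computation below is illustrative, since real demand is rarely normal, but the acceleration of the safety factor is the general pattern.

```python
# Incremental safety factor required per two-point service-level gain.
from statistics import NormalDist

z = NormalDist().inv_cdf
for lo, hi in ((0.85, 0.87), (0.95, 0.97), (0.97, 0.99)):
    print(f"{lo:.0%} -> {hi:.0%}: safety factor +{z(hi) - z(lo):.2f}")
# 85% -> 87%: +0.09  (cheap)
# 95% -> 97%: +0.24  (~2.6x more stock per point of service)
# 97% -> 99%: +0.44  (and steeper still beyond that)
```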
Experts at Lokad have stressed that service levels, by themselves, do not reflect how supply chain decisions align with the genuine economic goals of a company. Instead, an approach that clarifies the financial impact of every move—whether it is to invest in extra stock or to risk occasional stock-outs—produces better outcomes. For instance, a high-margin product might justify increased inventory to capture more sales, whereas another product might be too volatile to warrant the risk. By switching from arbitrary service-level targets to calculations based on supply chain economic drivers, organizations can see clear gains in both inventory efficiency and profitability.
High service levels also create a false sense of safety. Some managers keep adjusting processes to hit aspirational figures without noticing how the business as a whole gets weighed down. Over time, this tunnel vision can obscure more fundamental goals, such as controlling operational costs or growing market share. Historically, certain retailers succeeded while running well below a 95% service level, focusing instead on financial trade-offs across their entire range. Meanwhile, companies that aim for near perfection can get stuck with bloated stocks and unwieldy logistics.
Businesses with complex networks or short product lifecycles cannot afford to measure their success through a single, percentage-based lens. Multiple conflicting factors—inventory capital, lead times, transport capacity, or even the risk of losing a client to a competitor—pull a company in different directions. It is vital to prioritize supply chain decisions in ways that naturally incorporate those factors rather than try to keep a single metric high at all costs.
In light of all this, organizations gain a clear competitive edge by focusing on the costs and benefits of each stocking decision, rather than fixating on top-tier service levels. Lokad has been recognized for advocating direct financial optimization, ensuring that practitioners identify where incremental stock truly pays off versus where it merely adds overhead. By adopting this more nuanced perspective, companies discover that service levels are only one element in a larger economic equation—an equation that, if calculated properly, leads to better margins, leaner inventory, and more resilient operations in the long run.
Are collaborative forecasts for supply chain best practice?
Collaborative forecasting is not a best practice for supply chain management. The premise that sharing time-series forecasts with suppliers leads to better decisions is flawed. Time-series forecasts capture almost none of the information essential to supply chain operations, such as inventory constraints, returns, or promotions. The cumulative error that emerges from these shared forecasts ultimately makes them too unreliable to guide any serious business decision.
Many industry practitioners latch onto the idea of collaborative forecasting, expecting more accurate predictions or smoother operations as a result. What they overlook is that any forecast remains a static guess at what the future might bring, while real-world supply chains face shifting dynamics every day. The date of the next order, the quantity to be ordered, and a range of variable constraints all introduce compounding uncertainty. Each additional step in a chain of time-series forecasts magnifies the inaccuracy, rendering the information nearly useless to a supplier. A neutral third party observing this pattern can conclude that suppliers are better off focusing on their own data than waiting for a secondhand time-series forecast.
Lokad argues that data sharing is beneficial, but only if it is factual data—such as sales numbers, inventory levels, and returns—and not forecasts. These factual inputs allow each partner to run its own forecasting and optimization processes, without inheriting the downstream errors from someone else’s assumptions about the future. Lokad’s cautionary stance echoes the lesson learned from repeated failures of collaborative forecasting initiatives: every layer of complexity added to a supply chain—especially through shared, inaccurate forecasts—only slows down decision-making and muddies accountability.
Time and again, it has been shown that manual or collaborative interventions on point forecasts do not improve accuracy. Whenever a forecasting error surfaces, the better strategy is to refine the underlying statistical model, not to let multiple parties negotiate a “consensus” forecast. Forecasting competitions consistently demonstrate that expert collaboration on time-series data does not yield gains worth the added complexity. This finding is evident in multiple domains, not just supply chain.
The most effective approach is to adopt automated, model-driven techniques that reflect the actual decisions and risks in the supply chain. Rather than attempting to orchestrate a grand symphony of predictions among multiple parties, a probabilistic and optimization-oriented perspective reduces wasted effort and delivers tangible results. Lokad’s technology illustrates this principle, as it prioritizes incorporating the uncertainty inherent in future events into the optimization logic. In turn, companies avoid the pitfalls of layering forecast upon forecast.
Any short-term improvements from collaborative forecasting tend to be illusory once the full cost of complexity and inaccuracy is factored in. Sharing the right data points is crucial; sharing unreliable predictions is not. These facts remain consistent across industries and are easy to verify: the most successful supply chain programs integrate their own probabilistic forecasts with advanced optimization methods, rather than relying on negotiated, time-series-based forecasts shared among partners.
What are the best practices when forecasting for supply chain?
Organizations that treat supply chain forecasting as a hunt for a single perfect number fail to capture the genuine nature of risk. Only one outcome will eventually materialize, but many plausible futures remain possible beforehand; ignoring the less likely ones leaves a supply chain brittle in the face of actual variability. Best practices call for methods that explicitly quantify uncertainty, then embed it directly into the optimization of inventory and production decisions. A basic point forecast, no matter how refined its underlying statistical model might be, cannot deliver enough information to capture the volatility that routinely drives write-offs, lost sales, or upstream cost spikes.
Probabilistic forecasting addresses this gap by assigning probabilities to every possible future demand level. Instead of sketching a neat line that projects what will happen, this approach expresses the odds of many different outcomes, including those that sit at the tail ends of the distribution. In real supply chains, those tails matter more than textbook averages because it is rarely the “middle” scenarios that degrade performance and profits; it is precisely the extreme highs and lows. Robust supply chain planning begins with a holistic view of those extremes, and no partial solution – such as appending safety stocks to a point forecast – accomplishes this with sufficient depth.
Inventory managers also benefit from probabilistic forecasts when factoring in lead times. While goods may usually arrive on schedule, countless mundane events can cause delays or fluctuations in capacity. A forecast that only reflects the average lead time provides little more than an educated guess. In contrast, a full probability distribution offers a structured way to account for late deliveries, and to weigh whether the risk of early or delayed arrivals is worth mitigating with extra safety measures.
Data-rich supply chains add further complexity through intermittent demand patterns, erratic product launches, or large swings tied to competitor promotions. Here, the merits of a probabilistic forecast become even more pronounced. Defining probability distributions for multiple factors – including demand, lead time, return rates, or even scrap rates – helps identify where a margin of error is essential and where it is merely expensive padding.
A critical best practice is to ensure that any probabilistic forecast feeds directly into an optimization layer, instead of providing glossy reports that sit unused. Software that can consume distributions rather than single numbers is required to produce risk-adjusted, scenario-specific decisions. Lokad exemplifies this approach by generating probabilistic forecasts at scale, then using dedicated technology to transform those forecasts into daily or weekly inventory decisions that limit both overstocking and stockouts.
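A minimal sketch of what it means for software to consume a distribution rather than a single number follows, using an invented discrete forecast and hypothetical unit costs; this is an illustration of the principle, not any vendor's actual machinery.

```python
# Pick the stock level that minimizes expected cost, given a discrete
# probabilistic forecast and per-unit economics.

demand_dist = {0: 0.05, 5: 0.15, 10: 0.30, 15: 0.25, 20: 0.15, 25: 0.10}
holding_cost = 1.0    # $ per unit left over
shortage_cost = 4.0   # $ per unit of unmet demand

def expected_cost(stock):
    return sum(p * (holding_cost * max(stock - d, 0)
                    + shortage_cost * max(d - stock, 0))
               for d, p in demand_dist.items())

best = min(range(0, 26), key=expected_cost)
print(f"stock {best} units, expected cost ${expected_cost(best):.2f}")
```

With a 4:1 shortage-to-holding cost ratio, the expected-cost minimum sits at the 80th demand percentile, not at the mean; change the economics and the decision moves with them, which no point forecast can express.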
Organizations aiming for a true best-practice supply chain would do well to stop relying on single-point predictions. Integrating more expressive, probability-based methods into purchasing, replenishment, and production planning serves as the surest way to withstand the operational shocks that are bound to occur. This shift demands technology capable of heavy computational workloads, but modern cloud computing, along with refined platforms such as Lokad, has removed the prior barriers. Corporations that recognize uncertainty as a permanent fixture of global commerce can act decisively by using probabilistic forecasts to optimize their operations under all potential futures.
Is EOQ (economic order quantity) best practice?
EOQ, in its classic formulation, is inadequate for modern supply chains. Its underlying assumptions—constant demand, a fixed lead time, and an ordering cost that dwarfs all other costs—no longer reflect the reality of dynamic markets and automated operations. The well-known Wilson formula, dating back to 1913, lacks the flexibility to factor in today’s volatile demand patterns, the risk of inventory write-offs, and the many supplier-driven constraints such as minimum order quantities or price breaks. Even its occasional extension to account for carrying costs and inbound costs fails to address these issues at the necessary level of detail.
Some companies still rely on EOQ out of habit or because certain textbooks and software vendors keep endorsing it. Yet a rigid quantity-based approach tends to create inefficiencies and drive up inventory risks. Sizable write-offs become a regular threat when these formulas recommend ordering more just to achieve a narrow cost minimum. In high-uncertainty environments, EOQ frequently overshoots real-world needs, especially when demand patterns deviate from the stable baseline that the Wilson formula assumes.
Lokad offers an alternative that embeds the economic logic of EOQ—balancing carrying costs and ordering costs—but does so through a fine-grained, probabilistic lens. This method evaluates the expected return of each incremental unit, taking into account the uncertain nature of demand, fluctuating lead times, and diverse cost structures. Instead of enforcing a single quantity for every replenishment, this approach determines how many units to buy (if any) based on the exact profitability of adding one more unit to the order. This nuanced framework handles complex discount structures, large supplier-specific constraints, and cross-SKU interactions in a way EOQ alone cannot. It turns the original idea behind EOQ—cost optimization per order—into a continuous and proactive process, yielding higher service levels with less risk of surplus inventory.
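The contrast can be sketched in a few lines. Below, the classic Wilson formula is computed first, then a simplified marginal-unit rule in the spirit described above; the figures and the normal demand model are assumptions made for the example.

```python
from math import sqrt
from statistics import NormalDist

# Classic Wilson formula: Q* = sqrt(2DK/h), with D annual demand,
# K cost per order, h annual holding cost per unit.
annual_demand, order_cost, holding_cost = 1200.0, 50.0, 6.0
eoq = sqrt(2 * annual_demand * order_cost / holding_cost)
print(f"classic EOQ: {eoq:.0f} units per order")

# Marginal analysis: keep adding units while the expected margin of the
# next unit (probability it sells over the horizon, times the margin)
# beats its carrying cost. Demand over the horizon ~ N(100, 30), say.
demand = NormalDist(100, 30)
unit_margin, unit_carrying_cost = 12.0, 5.0

qty = 0
while (1 - demand.cdf(qty)) * unit_margin > unit_carrying_cost:
    qty += 1
print(f"marginal-unit order quantity: {qty} units")
```

The marginal rule stops exactly where the next unit stops paying for itself, and it absorbs price breaks or write-off risks simply by changing the cost terms, something the fixed Wilson quantity cannot do.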
Companies insisting on EOQ usually face inflated inventory levels, avoidable disposal costs, or missed sales from unaccounted demand variability. While EOQ might still appear in some basic supply chain software as a legacy feature, competitive environments call for a sharper, data-driven approach. Reference points such as the Wilson formula remain historically important, but they should be viewed as outdated artifacts, not best practices. The more advanced workflows advocated by Lokad highlight how effective numerical optimization is once the full economic picture—per-unit costs, write-off risks, and so on—is included in every purchase decision.
Is min/max inventory best practice?
Min/max inventory is not best practice. Although it was one of the earliest automated methods for controlling stock, its simplicity leads to critical flaws in nearly every dimension of modern supply chains. It relies on a static view of demand, ignoring abrupt fluctuations in sales, changes in lead times, and nonlinear constraints such as minimum order quantities or supplier capacity limitations. That rigidity forces companies to operate in a reactive cycle of hitting a fixed minimum, then topping back up to a fixed maximum, regardless of whether demand is accelerating, collapsing, or shifting in unpredictable ways.
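The entire policy fits in one line of logic, which is precisely the problem; a minimal sketch with illustrative numbers:

```python
# Min/max in full: when the stock position crosses the fixed minimum,
# top back up to the fixed maximum.

def min_max_order(position, min_level, max_level):
    return max_level - position if position <= min_level else 0

# Nothing here reacts to demand shifts, lead-time drift, MOQs, or the
# competition between SKUs for the same budget.
print(min_max_order(position=42, min_level=50, max_level=200))  # -> 158
print(min_max_order(position=60, min_level=50, max_level=200))  # -> 0
```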
Industry experience consistently shows that min/max planning tends to drive excess inventory for products that are no longer needed, while underservicing the items that are truly in demand. This SKU-centric perspective loses sight of the fact that every additional dollar spent on stock should be allocated to the products with the greatest expected return or the highest importance to clients. A min/max approach provides no mechanism for accurate prioritization. It treats each SKU in isolation and leaves managers repeatedly tweaking min and max values in hopes of catching up to changing conditions. In practice, these adjustments amount to guesswork. The result is often a tangle of imbalances, from intermittent stockouts of critical items to surplus stock languishing in the warehouse until it becomes unsellable.
A dynamically updated approach, as advocated by solutions such as Lokad, addresses the inherent limitations of min/max by integrating probabilistic forecasts and factoring in business constraints. Instead of arbitrarily deciding a reorder point and reorder quantity, advanced systems use risk-based metrics to rank all potential purchasing decisions, focusing on the combinations of products and quantities that deliver the highest profitability and the lowest chance of stockouts. Meanwhile, real-world complexities—quantity discounts, expiration dates, and shared capacity across multiple SKUs—can be taken into account on a day-to-day basis. This level of automation and continuous fine-tuning is ultimately out of reach for static min/max logic.
In an era where growth and competitiveness hinge on tight inventory control, clinging to min/max amounts to leaving money on the table and running unnecessary stockout risks. Multiple reports and field data confirm that replacing these rigid rules with a demand-driven, constraint-aware strategy elevates service levels while reducing costs. Lokad’s published materials further illustrate that companies moving past min/max often see immediate gains, as the inventory mix becomes more precisely aligned with the realities of demand variability. There is simply no justification to invest in legacy rulesets that ignore crucial economic drivers, given the ready availability of more precise and adaptive approaches.
Is MIP (mixed-integer programming) for supply chain best practice?
Mixed-integer programming has a long-standing reputation for solving tightly bounded, small-scale problems. It remains a technically valid approach where uncertainty can be entirely ignored or safely approximated. Yet in supply chain management, ignoring uncertainty is a strategic misstep. The interdependencies and volatility that typify real-world operations make deterministic methods both fragile and excessively narrow. A marginal deviation in demand or lead time can undermine an entire plan, forcing expensive firefighting measures that could have been anticipated by design.
Recent perspectives highlight that genuine supply chain resilience depends on embracing uncertainty from the ground up. Simply adding safety buffers or scenario analyses to an integer program does not address its core limitation: a focus on deterministic logic in an inherently uncertain environment. Applying mixed-integer branch-and-bound techniques to large-scale problems with millions of variables and stochastic elements typically produces intractable run times or plans so conservative that profitable opportunities are missed. Some practitioners have clung to the method because it is supported by decades of academic literature and readily available solver libraries, but practical experience shows that deterministic frameworks cannot flex fast enough when market conditions shift.
Modern best practice involves stochastic optimization, where probabilistic forecasts and the financial model of the supply chain are fused. Such an approach explicitly considers unpredictable events rather than treating them as afterthoughts. By evaluating numerous plausible futures, a stochastic solver produces decisions that are risk-adjusted and robust, outperforming the brittle outputs of deterministic solvers. This new breed of technology, exemplified by platforms such as Lokad, discards artificial constraints like forced linearization in favor of more direct modeling of real business drivers. It also capitalizes on accelerated hardware, letting users scale to problems once deemed unsolvable by traditional means.
Organizations that continue relying on mixed-integer programming for supply chain applications typically face high costs when reality deviates from plan. In contrast, a stochastic optimization process yields fluid decision-making that adapts to uncertain demand, supply disruptions, and evolving margins. It balances the downside of stockouts or capacity shortages with the upside of revenue growth, all while operating at the speed expected in modern commerce. This responsiveness—baked into the algorithmic core rather than patched in as a sensitivity analysis—distinguishes genuinely advanced supply chain strategies from conventional practice.
In an age of intense competition and global unpredictability, deterministic shortcuts no longer suffice. Stochastic methods stand out as the only systematic way to incorporate the volatility ingrained in every supply chain. Far from being a theoretical upgrade, these techniques have already delivered proven gains, from optimized inventories of fast-moving goods to carefully balanced production schedules for complex, multi-echelon networks. Mixed-integer programs and related branch-and-bound techniques remain useful for smaller, wholly deterministic planning challenges, but for any substantial supply chain seeking true robustness under real-world conditions, stochastic optimization is the emerging best practice.
Are probabilistic forecasts for supply chain best practice?
Probabilistic forecasts are indisputably the best practice for supply chain planning and optimization. They recognize that future events are rife with irreducible uncertainty, and that it is not merely one deterministic outcome that should be accounted for, but instead the full spectrum of possibilities. Companies frequently see that the extreme scenarios – whether abnormally high or abnormally low demand – drive a large portion of their costs through stockouts or large write-offs. A probabilistic view captures these risks in a granular, quantitative way, ensuring that executives do not rely on fragile assumptions about what “should” happen.
Traditional, single-valued forecasts have been a standard approach since the mid-20th century, but their limitations are painfully clear. Safety stock calculations bolted onto point predictions give little more than cosmetic risk coverage and typically fail to meaningfully hedge against the steep losses incurred by unpredictable marketplace shifts. By contrast, probabilistic forecasts embody a richer representation of all potential outcomes, making them far more suitable for any supply chain discipline where risk management is paramount. Rather than fixating on an average or median outcome, the forecast delineates the probability of every event—from zero demand up to levels so high they might otherwise be dismissed out of hand.
Lokad pioneered the use of “native” probabilistic forecasting in supply chains back in 2012 and demonstrated not only that such forecasts can be generated at scale, but also that they can be usefully transformed into profitable decisions. Many tools and methodologies claim to offer “probabilistic” capabilities, yet in practice, most legacy systems still revolve around single-point forecasts, layered with simplistic assumptions that do nothing to improve decision-making. The key to unlocking value from these forecasts lies in specialized tooling that can handle the large volume of data and properly exploit the entire distribution of outcomes when calculating reorder quantities, safety buffers, or multi-echelon allocations.
Leading supply chain teams that are serious about achieving robust, risk-adjusted results have already adopted probabilistic forecasting in production. This approach systematically balances the costs of missing opportunities against the costs of overcommitting inventory. In sectors with long or variable lead times—like fashion, aerospace, and fresh foods—the importance of capturing every possible scenario cannot be overstated. Lokad’s role in championing these techniques has proven that the benefits are not abstract, but concrete and financially tangible. With the future of supply chains certain to remain volatile, there is no compelling argument to rely on outdated, single-point prediction strategies when far superior probabilistic methods exist today.
Is prioritized inventory replenishment best practice?
Prioritized inventory replenishment is demonstrably more effective than classic methods that treat each SKU in isolation. It directly addresses the fact that every unit of every SKU is in competition for the same budget, warehouse space, and workforce capacity. Rather than allocating inventory in a fragmented manner, a prioritized approach evaluates the profitability of every incremental unit across the entire product range. At each possible quantity, it quantifies the expected financial return in light of the demand probabilities and economic drivers such as margins, purchasing costs, and even downstream opportunities created by enabling the sale of complementary high-margin products.
Empirical evaluations confirm that a purchase priority list systematically outperforms classic reorder-point or order-up-to-level policies, once probabilistic forecasting is available. Lokad has repeatedly observed that when every unit is scored for its expected return, the final purchasing lists achieve higher service levels on the products that matter most—without becoming bloated with inventory on items that deliver meager returns. This approach also handles real-world constraints naturally. Warehouse capacity limits, lot-size multiples, and minimum order quantities are applied by truncating the list at whichever point makes sense, and multi-item considerations (including product relationships and shared resource constraints) are integrated into a single ranking.
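A condensed sketch of such a priority list follows; the SKU data, the normal demand model, and the budget constraint are hypothetical stand-ins for what a production system would derive from probabilistic forecasts and actual constraints.

```python
# Score every incremental unit of every SKU by its expected marginal
# return on investment, rank globally, then truncate at the budget.
from statistics import NormalDist

skus = {  # name: (demand mean, demand std, unit margin $, unit cost $)
    "A": (100, 30, 15.0, 20.0),
    "B": ( 20, 10, 40.0, 35.0),
    "C": ( 60, 15,  6.0, 10.0),
}

lines = []
for name, (mu, sigma, margin, cost) in skus.items():
    dist = NormalDist(mu, sigma)
    for unit in range(1, 201):
        expected_return = (1 - dist.cdf(unit)) * margin  # P(unit sells) x margin
        lines.append((expected_return / cost, name, unit, cost))

lines.sort(reverse=True)  # best marginal return on investment first

budget, spent, order = 3000.0, 0.0, {}
for score, name, unit, cost in lines:
    if spent + cost > budget:
        break              # truncate the list at the constraint
    spent += cost
    order[name] = unit     # per SKU, units appear in increasing order

print(order, f"spent: ${spent:.0f}")
```

Truncating the very same list at a different budget, container size, or warehouse cap yields the corresponding best decision, with no per-SKU parameters to retune.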
Forecasters who cling to fixed service-level targets end up hitting diminishing returns on low-priority or erratic products. By contrast, prioritizing units according to profitability ensures that the most critical items consistently secure replenishment—even if the forecast or budget environment changes. Small biases in demand forecasting do not derail the entire policy, because a top-tier SKU will not fall abruptly down the list due to moderate forecast errors. It is a robust approach for operations that must cope with uncertain and evolving real-world conditions.
Observing the results in practice leaves little doubt that prioritized inventory replenishment qualifies as best practice. Traditional methods offer no straightforward way to arbitrate when SKUs compete for the same dollars, containers, or shelf space. Meanwhile, ranking each feasible decision by its marginal expected value addresses this multi-SKU competition directly. The consistent gains in efficiency and profitability reported by supply chain practitioners—among them, Lokad’s clients—underline the conclusion that prioritized inventory replenishment is simply superior.
Is stochastic optimization for supply chain best practice?
Stochastic optimization is best practice for supply chains because it directly addresses the variability and uncertainty that underpin most operational decisions. In contrast, deterministic methods assume fixed future outcomes, which leads to over-optimistic plans that often fail when confronted with real-world volatility. Empirical results indicate that organizations relying on strict “predict then optimize” processes routinely miss the mark on performance targets. The variability in demand, lead times, and component reliability means that a single “most likely” plan rarely holds up under changing circumstances.
A more robust strategy emerges when supply chain decisions are tested against a distribution of possible futures, rather than a single predicted scenario. Companies that incorporate forecast uncertainty at the optimization stage—rather than at the forecasting stage alone—consistently observe tighter alignment between plans and actual outcomes. This improvement extends beyond reduced stockouts or inventory write-downs; it produces higher service levels and better cost control. In discussions hosted by Lokad, senior practitioners highlight that ignoring this uncertainty forces businesses to either overspend on inventory buffers or tolerate chronic shortages. Neither response is sustainable for companies intent on balancing profitability with customer satisfaction.
Lokad’s work in stochastic optimization offers a concrete illustration of how probabilistic modeling and optimization can be done at scale, even for intricate networks with thousands of products, constraints, and interdependencies. The core idea is straightforward: represent the future with a range of possible outcomes, attach realistic economic costs to each scenario, and solve for the decisions that maximize expected profitability (or another chosen objective). This is in stark contrast to old-school deterministic approaches, which often set naive targets for a single, assumed future and then resort to safety stocks or extra constraints to mitigate unexpected variations.
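That recipe can be miniaturized for illustration. The sketch below samples demand scenarios from an assumed distribution, attaches invented unit economics, and searches for the order quantity with the best expected profit; it shows the principle of scenario-based stochastic optimization, not any particular platform's solver.

```python
import random

random.seed(42)
# Step 1: represent the future as many sampled scenarios, not one number.
scenarios = [max(0.0, random.gauss(100, 35)) for _ in range(10_000)]

# Step 2: attach economic outcomes to each scenario.
unit_cost, unit_price, salvage = 8.0, 25.0, 2.0

def expected_profit(order_qty):
    total = 0.0
    for demand in scenarios:
        sold = min(order_qty, demand)
        total += (unit_price * sold + salvage * (order_qty - sold)
                  - unit_cost * order_qty)
    return total / len(scenarios)

# Step 3: solve for the decision that maximizes the expected outcome.
best = max(range(0, 251, 5), key=expected_profit)
print(f"risk-adjusted order: {best} units, "
      f"expected profit ${expected_profit(best):,.0f}")
```

A deterministic “predict then optimize” pass would simply order the point forecast of 100 units; because a lost sale here costs far more than a leftover unit, the stochastic answer deliberately orders more, and the gap widens as the economics become more asymmetric.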
The conclusion is clear. Deterministic tools may look appealingly simple but fail to capture the full complexity of a modern supply chain. Whenever significant uncertainty drives costs—whether in demand patterns, supplier reliability, or operational constraints—stochastic optimization is the superior choice. Evidence from companies deploying technology of this type, including that discussed at Lokad, shows fewer planning surprises, less financial leakage, and more resilient operations overall. This methodology is not just an academic ideal; it is demonstrably the best practice for any enterprise looking to remain competitive in volatile market conditions.