Artificial intelligence is ushering in a new way of looking at supply chain optimization. It allows supply chain managers to compress the sense-plan-act cycle to almost real-time intervals. Companies that plan supply and demand on a monthly or weekly basis can now read signals from their supply chain in seconds, plan against the realities of the moment, and take immediate action on the new plan.
Planning teams have always reacted to actual data, so what is different now? The main difference is that companies can now get granular enough to sense, plan, and act on micro-events. A micro-event is a factor that can affect a plan, is highly specific, and is potentially intermittent, meaning it does not occur with regular frequency.
Probably the most intuitive example of a micro-event is weather. Weather has a profound impact on supply chain operations, but it is intermittent and difficult to predict. So how can we use weather to help optimize supply chains? Take demand planning, for example. Most demand planning is performed using historical statistical algorithms that average or smooth demand data. Some use moving averages, which take a window of previous periods and average them to predict the next period's demand. For instance, if a product had demand of 10, 12, and 14 units in the last three months, we could average these to predict demand of 36/3, or 12 units, for the next month. Other approaches use exponential smoothing to capture trends in demand signals. These ubiquitous approaches use actual data from previous periods to predict future demand.
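As a rough illustration of these two traditional techniques, here is a minimal sketch in plain Python using the hypothetical demand numbers from the example above; the smoothing factor alpha is an assumed value chosen only for illustration.

```python
# Minimal sketch of the traditional forecasting approaches described above.
# The demand history and the smoothing factor alpha are illustrative assumptions.

def moving_average_forecast(history, window=3):
    """Forecast the next period as the mean of the last `window` periods."""
    recent = history[-window:]
    return sum(recent) / len(recent)

def exponential_smoothing_forecast(history, alpha=0.3):
    """Simple exponential smoothing: recent periods get geometrically more weight."""
    forecast = history[0]                      # seed with the first observation
    for actual in history[1:]:
        forecast = alpha * actual + (1 - alpha) * forecast
    return forecast

demand = [10, 12, 14]                          # last three months from the example
print(moving_average_forecast(demand))         # (10 + 12 + 14) / 3 = 12 units
print(round(exponential_smoothing_forecast(demand), 1))
```

Both functions look only backward at actual demand, which is exactly why they struggle with intermittent micro-events.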
But in the food supply chain, the growing trend of fresh, local food making its way to grocery stores, restaurants, fast-casual chains, and even quick-service restaurant (QSR) chains has made monthly, and even weekly, planning almost obsolete. Now, companies across the food industry are tapping into machine learning and artificial intelligence to make better, faster decisions about supply and demand.
Using Machine Learning to Manage Micro-Events
New machine learning approaches to forecasting augment this historical approach by adding a real-time predictive component. Machine learning models are often used to classify data into categories. For example, suppose we built a model that learned whether demand is affected by weather. The model would take weather variables and statistically correlate them with demand signals. If the model were accurate, we could apply it in real time: it would evaluate current conditions and predict the degree to which the weather will affect the forecast, and we could then increment or decrement the forecast based on that prediction.
There are many mathematical approaches to machine learning, but what they have in common here is that the model, trained on past experience, can dynamically predict an adjustment to demand based on the weather.
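To make this concrete, here is a minimal sketch of one way such a model could be trained, assuming scikit-learn and entirely synthetic weather and demand data; the features, the regression technique, and the magnitudes are illustrative assumptions, not a description of any particular production system.

```python
# Sketch: learn how weather shifts demand away from a baseline forecast,
# then use the model to adjust a real-time forecast. All data is synthetic.
import numpy as np
from sklearn.ensemble import GradientBoostingRegressor

rng = np.random.default_rng(0)
n = 1_000

# Hypothetical weather features observed alongside historical demand.
temperature = rng.normal(70, 15, n)        # degrees F
precipitation = rng.exponential(0.1, n)    # inches
baseline = 100.0                           # units the traditional forecast predicts

# Synthetic "actual" demand: hot, dry days lift demand; rain suppresses it.
actual_demand = (baseline
                 + 0.8 * (temperature - 70)
                 - 40.0 * precipitation
                 + rng.normal(0, 5, n))

# Train on the *adjustment* (actual minus baseline), not on raw demand.
X = np.column_stack([temperature, precipitation])
y = actual_demand - baseline
model = GradientBoostingRegressor().fit(X, y)

# At planning time, score today's weather and adjust the statistical forecast.
todays_weather = np.array([[92.0, 0.0]])   # a hot, dry day (assumed values)
adjustment = model.predict(todays_weather)[0]
print(f"baseline {baseline:.0f}, weather adjustment {adjustment:+.1f}, "
      f"adjusted forecast {baseline + adjustment:.1f}")
```

Training on the delta rather than on raw demand keeps the traditional statistical forecast in place and lets the model contribute only the weather-driven correction, which matches the increment-or-decrement framing above.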
What other micro-events can powerfully influence supply and demand? They tend to be localized, and perhaps short-lived, but they often wreak havoc in a supply chain.
For example:
- Local sporting events and concerts. In one geography, a local QSR noticed large spikes after Friday night high school football games
- Power outages. One short power outage can spike demand for certain products, or cause spoilage that strips supply
- Social media. One celebrity mention can have a significant impact on demand
- Commerce activity. The popularity of certain products can affect demand for other products, even across vendors and categories
- Fires and natural disasters. An event can take out full sources of supply, while also dramatically changing demand
- Recalls. Product quality announcements can affect both demand and supply
- Strikes. In some geographies, strikes can have a profound impact on delivery times, making supply difficult
- Traffic. An accident that closes a bridge can ripple through delivery times down the supply chain
Building Smarter, AI-Powered Applications Is a Journey
There are many micro-events that can affect supply and demand, and they change over time. So, while machine learning is a powerful tool to augment traditional supply and demand planning, it is difficult to standardize. Machine learning is a continuous journey of experimentation and an organizational team sport: an ongoing, iterative process that requires cooperation among IT, data science, and the business.
There is almost never a standard model for any problem over time. Effective data scientists continuously produce new models, trying new experiments that make models more predictive. They adjust many dimensions of a model, including the algorithms and the parameters to those algorithms, but most important, they vary the features of the data that train the learning model. This is often referred to as “feature engineering,” and it is the most time-consuming and perhaps most powerful factor contributing to the precision and accuracy of models.
Every organization that attempts to use machine learning to predict micro-events must establish a culture of experimentation around a “feature factory.” It must have the systems and processes in place that allow data scientists to capture, cleanse, and transform raw data, describe micro-events as features, and run experiments that train models in search of a lift in both precision and accuracy.
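As an illustration of what feature engineering for micro-events might look like in practice, the pandas sketch below derives a handful of hypothetical event features (a game-day flag, a heat index, a social-mention spike) from raw signal data; the column names, values, and transformations are assumptions made for the example, not a prescribed schema.

```python
# Sketch of a "feature factory" step: turn raw signals into model features.
# Column names, sample values, and derivations are illustrative assumptions.
import pandas as pd

raw = pd.DataFrame({
    "date": pd.date_range("2024-01-01", periods=7, freq="D"),
    "units_sold": [110, 95, 102, 180, 130, 98, 105],
    "high_temp_f": [68, 71, 65, 90, 88, 60, 62],
    "local_event": [None, None, None, "HS football", None, None, None],
    "social_mentions": [3, 2, 4, 55, 20, 5, 3],
})

features = pd.DataFrame({
    "dow": raw["date"].dt.dayofweek,                        # day-of-week seasonality
    "is_game_day": raw["local_event"].notna().astype(int),  # micro-event flag
    "heat_index": (raw["high_temp_f"] - 70).clip(lower=0),  # degrees above "normal"
    "mention_spike": (raw["social_mentions"]
                      > raw["social_mentions"]
                          .rolling(3, min_periods=1).median() * 2).astype(int),
    "demand_lag_1": raw["units_sold"].shift(1),              # yesterday's demand
})

print(features)
```

Each iteration of an experiment typically adds, drops, or reshapes columns like these before retraining the model, which is why this step tends to dominate a data scientist's time.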
Markets change over time, and customers' desires and behaviors change with them. Suppliers change their behaviors as well. Micro-events that predicted spikes in supply and demand last year, last month, or last week may no longer have the same impact. The data scientist must therefore always be on the lookout for change, trying new features and deploying new models in production to predict supply or demand.
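One lightweight way to stay on the lookout for change is to compare a model's recent error against the error it had when it was trained; the sketch below does exactly that, with an assumed metric (mean absolute error) and an assumed tolerance that any real deployment would tune.

```python
# Sketch: a simple drift check. If recent forecast error rises well above the
# error seen at training time, flag the model for retraining with new features.
# The metric and the tolerance are assumptions for illustration.
import numpy as np

def needs_retraining(recent_actuals, recent_forecasts, training_mae, tolerance=1.5):
    """Return True when the recent mean absolute error exceeds the
    training-time error by more than `tolerance` times."""
    recent_mae = np.mean(np.abs(np.asarray(recent_actuals)
                                - np.asarray(recent_forecasts)))
    return recent_mae > tolerance * training_mae

# Example: the model used to miss by about 5 units; lately it misses by 15-20.
print(needs_retraining([120, 140, 95], [105, 120, 110], training_mae=5.0))  # True
```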
A Vision for Operational AI Applications
This feature factory and culture of experimentation impose new system requirements on IT. No longer can you simply purchase an integrated business planning software package; you need more flexibility. There are three system requirements that need to be seamlessly integrated:
- Operational intelligence. Companies need a data platform that can consume signals from the supply chain in real time to predict micro-events. It no longer suffices to store only transactional data such as inventory changes from orders; you must store and retrieve data from exogenous signals such as weather and social media in real time. This requires new “scale-out” architectures that store data across many machines to scale to petabytes.
- Business intelligence. Companies now need to perform analytics at petabyte scale to account for the signals and derive new features for models. These new analytical processes also require scale-out architectures to perform distributed computation. By putting many CPUs on many machines to work simultaneously, data scientists can prepare data sets for machine learning interactively.
- Artificial intelligence. Now companies need machine learning platforms that enable data scientists to use multiple algorithms, keep track of experiments, and deploy learned models in operational real-time systems.
By bringing these three dimensions together, you enable operational AI applications at scale. There are a variety of ways to assemble these components, both on premises and in the cloud. Until recently, IT would have to duct-tape these components together, requiring large teams to continuously engineer the interfaces and operate the engines that form the operational AI system. Now there are seamlessly integrated operational AI systems that bring these three dimensions together to power smart applications.
The age of operational AI is here, with the seamless integration of scale-out operational databases, data warehouses, and machine learning platforms. Companies can now use these platforms to store the data necessary to model micro-events and to perform the computation required to build machine learning predictors of those micro-events. These platforms can then inject the predictors into real-time sense-plan-act applications that sense micro-events, plan changes to supply and demand, and issue orders to continuously adapt the supply chain.
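To tie the pieces together, here is a deliberately simplified sketch of what a sense-plan-act loop could look like once a trained micro-event model is deployed; the signal source, the model, and the ordering logic are all placeholders for whatever operational platform an organization actually uses.

```python
# Highly simplified sense-plan-act loop. Every component here is a placeholder:
# a real system would read signals from a streaming platform, score a deployed
# model, and write orders back to an execution system.
import time

def sense():
    """Read the latest micro-event signals (placeholder values)."""
    return {"temperature_f": 92.0, "precip_in": 0.0, "game_day": 1}

def toy_model(signals):
    """Stand-in for a deployed micro-event model: hot game days add demand."""
    return 0.8 * (signals["temperature_f"] - 70) + 25 * signals["game_day"]

def plan(signals, baseline_forecast, model):
    """Adjust the statistical forecast with the model's micro-event prediction."""
    return baseline_forecast + model(signals)

def act(adjusted_forecast, on_hand):
    """Issue a replenishment order to cover the gap between forecast and stock."""
    order_qty = max(0, round(adjusted_forecast - on_hand))
    print(f"ordering {order_qty} units")

for _ in range(3):                      # in production, this loop runs continuously
    signals = sense()
    forecast = plan(signals, baseline_forecast=100.0, model=toy_model)
    act(forecast, on_hand=90)
    time.sleep(1)
```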