
The situation
A consumer goods business had a forecasting problem layered on top of its incentive problem. Forecasts swung by 15-25% month to month. Nobody trusted the numbers. Stock availability suffered because planning couldn't keep up with reality. Sales blamed operations, operations blamed sales, and the CEO only found out on day 30.
What I did
I built simple but robust forecasting models using historical patterns, seasonality, and campaign effects. Nothing exotic, just rigorous. Then I integrated these into a disciplined operating rhythm: a standard Week 1-4 view with a fixed set of questions for every review meeting.
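For illustration, here's a minimal sketch of the kind of model this describes: a least-squares fit on trend, month-of-year seasonality, and a campaign flag, projected one month ahead. The column names (units_sold, campaign_active) and the one-month horizon are assumptions for the example, not the client's actual setup.

```python
import numpy as np
import pandas as pd


def build_design(idx: pd.DatetimeIndex, campaign: pd.Series) -> pd.DataFrame:
    """Design matrix: intercept, linear trend, month-of-year dummies, campaign flag."""
    X = pd.DataFrame(index=idx)
    X["intercept"] = 1.0
    X["trend"] = np.arange(len(idx), dtype=float)
    for m in range(2, 13):                      # January is the baseline month
        X[f"month_{m}"] = (idx.month == m).astype(float)
    X["campaign"] = campaign.to_numpy(dtype=float)
    return X


def forecast_next_month(history: pd.DataFrame, campaign_next: int = 0) -> float:
    """Fit on history, then predict one month ahead.

    history: monthly DatetimeIndex plus columns 'units_sold' and 'campaign_active'.
    campaign_next: 1 if a campaign is planned for the forecast month, else 0.
    """
    X = build_design(history.index, history["campaign_active"])
    coefs, *_ = np.linalg.lstsq(
        X.to_numpy(), history["units_sold"].to_numpy(dtype=float), rcond=None
    )

    nxt = history.index[-1] + pd.offsets.MonthBegin(1)
    x_next = np.zeros(X.shape[1])
    x_next[X.columns.get_loc("intercept")] = 1.0
    x_next[X.columns.get_loc("trend")] = float(len(history))   # trend keeps moving
    if nxt.month != 1:
        x_next[X.columns.get_loc(f"month_{nxt.month}")] = 1.0
    x_next[X.columns.get_loc("campaign")] = float(campaign_next)
    return float(x_next @ coefs)


# Usage, once your own monthly export is loaded into `history`:
# print(forecast_next_month(history, campaign_next=1))
```

The point of keeping it this plain is that everyone in the review meeting can see exactly what drives the number, which is what makes the Week 1-4 rhythm stick.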
In parallel, I built a monthly performance engine that tracked the incentive scheme itself. One unified data export, behaviour flags, growth buckets, and a compact eight-panel dashboard. Month-on-month and year-on-year views separated volume changes from quality changes from mix shifts.
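As a sketch of what that engine does mechanically, the snippet below takes one customer-level monthly export and produces behaviour flags (new, lapsed, active) and growth buckets. The column names, bucket thresholds, and labels are hypothetical; the real scheme tracked more dimensions than this.

```python
import pandas as pd

# Illustrative thresholds: anything beyond +/-10% month-on-month is a real move.
GROWTH_BINS = [-float("inf"), -0.10, 0.0, 0.10, float("inf")]
GROWTH_LABELS = ["declining", "softening", "stable", "growing"]


def monthly_flags(export: pd.DataFrame) -> pd.DataFrame:
    """Behaviour flags and growth buckets for the latest month in the export.

    export: one row per customer per month, with columns
      customer_id, month (month-start dates), volume
    """
    latest = export["month"].max()
    prev = latest - pd.offsets.MonthBegin(1)

    cur = export[export["month"] == latest].set_index("customer_id")
    pre = export[export["month"] == prev].set_index("customer_id")

    out = pd.DataFrame(index=cur.index.union(pre.index))
    out["active_this_month"] = out.index.isin(cur.index)
    out["active_last_month"] = out.index.isin(pre.index)
    out["new"] = out["active_this_month"] & ~out["active_last_month"]
    out["lapsed"] = ~out["active_this_month"] & out["active_last_month"]

    growth = (cur["volume"] - pre["volume"]) / pre["volume"]   # NaN for new/lapsed
    out["volume_growth"] = growth
    out["growth_bucket"] = pd.cut(growth, bins=GROWTH_BINS, labels=GROWTH_LABELS)
    return out


# A dashboard panel is then just a count per flag or bucket, e.g.:
# monthly_flags(export).groupby("growth_bucket").size()
```

Run month-on-month and year-on-year, counts like these are what let you say whether a movement is more customers, better customers, or just a different product mix.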
I presented everything in plain English. Not "the model suggests a variance" but "we're short because we have fewer active customers this month, not because the scheme is weaker," or "we hit volume but lost money because the mix drifted to low-margin products."
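To make that translation concrete, here's a toy example of turning a variance decomposition into one plain-English sentence; the component names and numbers are made up for illustration.

```python
def explain_gap(components: dict[str, float]) -> str:
    """components: signed contribution of each driver to the gap vs plan,
    e.g. {"volume": -120, "mix": -340, "price": +60} (illustrative figures)."""
    driver, impact = max(components.items(), key=lambda kv: abs(kv[1]))
    direction = "behind" if sum(components.values()) < 0 else "ahead of"
    return (f"We're {direction} plan mainly because of {driver} "
            f"({impact:+,.0f}); the other drivers are secondary.")


print(explain_gap({"volume": -120, "mix": -340, "price": 60}))
# -> We're behind plan mainly because of mix (-340); the other drivers are secondary.
```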
What changed
Forecast errors narrowed significantly. Leadership stopped being blindsided at month end. Stock and resource decisions improved because planning could actually rely on the numbers. Monthly conversations shifted from "why is this report different to that one" to "what exactly do we do in Week 2 to close the gap."
Who this is for
If you're still finding out how the month went on day 30, I can give you a simple, disciplined view of where the month is heading and whether the problem is volume, quality, mix, or price. That becomes the backbone for your weekly and monthly operating process.