via an online survey in January. In all, 602
individuals filled out the questionnaire, of
which 579 provided usable responses.
Respondents were asked to identify the metrics they used as well as to grade their own
facilities’ performance in 2010 against 44 specific operational metrics. (For purposes of
analysis, the measures have been grouped into
five balanced sets: customer, operational,
financial, capacity/quality, and employee.)
The research, which was jointly sponsored
by DC VELOCITY and WERC with support
from Ryder, was carried out by Georgia
Southern University and the consultancy
Supply Chain Visions. The full results will be
available online at www.werc.org after the
annual WERC conference, which takes place
in Orlando, Fla., from May 15–18.
EXHIBIT 1
The Top 12: The most commonly used DC metrics

Metric (by rank in 2011 survey) and category                 2010 rank   2009 rank
 1. On-time shipments (Customer)                                  1           1
 2. Average warehouse capacity used (Capacity/Quality)            4           7
 3. Order picking accuracy (Capacity/Quality)                     2           3
 4. Peak warehouse capacity used (Capacity/Quality)               9           *
 5. Dock-to-stock cycle time, in hours (Operational)              6           6
 6. Internal order cycle time (Customer)                         10           8
 7. Total order cycle time (Customer)                            12
 8. Lines picked and shipped per hour (Operational)              11          11
 9. Lines received and put away per hour (Operational)            *           *
10. % of supplier orders received damage-free (Operational)                  10
11. Fill rate – line (Operational)                                3           4
12. Annual workforce turnover (Employee)                          8           *

* Did not appear in top 12
WHICH METRICS MATTER MOST?
When it comes to the performance metrics used by DC professionals, the survey showed that the most popular measures don’t vary much from year to year. The metrics that received the most mentions in this year’s survey—“on-time shipments,” “average warehouse capacity used,” and “order picking accuracy”—have appeared on the top 12 list since the study was launched.
But that’s not to say the situation has remained static. As
Exhibit 1 shows, there has been some change in the list of
top 12 metrics compared with the 2010 survey results. Why
is that? This year, we changed the methodology used to calculate the top 12 list. To stay consistent with the new methodology, we recalculated prior years’ top 12 lists. While we found
that the choice of metrics remained largely unchanged,
there were some shifts in the rankings.
It’s important to note that decisions about which metrics a facility will use may be dictated by company policy and may not reflect the respondents’ own opinions or preferences. For that reason, the survey included a question asking, “If you were the boss, what metrics would you use to run the DC or warehouse?”
As it turned out, there were some disparities between the
two sets of metrics. Although “on-time shipments” and
“order picking accuracy” appeared on both lists, the
respondents’ top five picks included three measures that did
not make the list of the most widely used metrics: “inventory count accuracy, by unit,” “inventory count accuracy, by location,” and “distribution costs as a percentage of sales.”
The fact that respondents chose a financial metric indicates
that what we do in the DC—and how we do it—affects
more than customer satisfaction; it also has an impact on
the organization’s bottom line.
HOLDING THEIR OWN
As for how the nation’s warehouses and DCs are performing against key metrics, the news is generally good. As noted
above, the upswing in volume hasn’t brought a halt to the
improvement trend. In fact, the latest survey found that relative to last year’s findings, respondents either maintained
or improved their performance against 52 percent of the 44
metrics studied.
The news was even better among the top-performing
companies, the 20 percent of respondents designated “best
in class.” A comparison with last year’s findings showed that
these companies either maintained or improved their performance against nearly seven out of 10 metrics.
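The study doesn’t spell out exactly how the best-in-class designation is scored, but a top-20-percent cut is effectively a quintile split. A minimal sketch, assuming a simple percentile cutoff on a single metric and using hypothetical numbers rather than survey data:

```python
from statistics import quantiles

# Hypothetical on-time shipment percentages from ten facilities
# (illustrative values only, not figures from the WERC survey).
scores = [91, 93, 94, 95, 96, 96.5, 97, 98, 99, 99.5]

# quantiles(n=5) returns the four cut points that split the data into
# quintiles; the last cut point marks the top 20 percent of respondents.
best_in_class_cutoff = quantiles(scores, n=5)[-1]
print(best_in_class_cutoff)  # → 98.8

best_in_class = [s for s in scores if s >= best_in_class_cutoff]
print(best_in_class)  # → [99, 99.5]
```

Under this sketch, only the two facilities above the 80th-percentile cut would count as best in class on that metric.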
Exhibit 2 identifies the metrics that saw the most
improvement over last year across the entire respondent
base. (When making comparisons from year to year, we
have continued to use the median—the midpoint of all the
responses—rather than the mean, or average, because it’s
less likely to be skewed by very high or low numbers.)
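The skew the median guards against is easy to see with a quick sketch (the response values below are hypothetical, not survey data):

```python
from statistics import mean, median

# Hypothetical dock-to-stock times (hours) reported by nine facilities;
# a single extreme response drags the mean well above the typical value.
times = [6, 7, 7, 8, 8, 9, 9, 10, 80]

print(mean(times))    # → 16.0 (pulled up by the one 80-hour response)
print(median(times))  # → 8 (the midpoint of the responses is unaffected)
```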
Of particular note are the improvements in average internal order cycle time and total order cycle time, both of
which dropped by a whopping 12 hours compared with the
two previous years. We believe these results speak to a
greater sense of urgency among warehouse and DC managers to keep up with orders as activity picks up.
Another interesting finding is the shift in the status of the
“dock-to-stock cycle time” metric, a measure of receiving
and put-away efficiency. Last year, “dock to stock” performance was identified as one of the major pain points, with
median performance slipping to 9.1 hours from eight hours
the year before. This year, however, “dock-to-stock time”
ranked among the “most improved” metrics, with the