Benchmarking: Behind the Numbers
Hidden variables and information overload make it difficult to go from raw information to informed decisions.
Facility executives and senior managers alike are often hungry for a boilerplate measure of their current design or building’s performance against an industry standard. The hope is that the data will provide guidance or a solution to a difficult question: How do we learn to optimize our own facilities from what our industry is doing?
Organizations often undertake benchmarking to identify the industry average for space allocation. It is important to keep in mind, however, that although the concept of comparing one facility to another is important, benchmarking itself is not always black and white. Anyone who has undertaken a comparative analysis is aware of the dangers of oversimplifying data while neglecting the day-to-day situation. There is a tendency to focus solely on the numbers: Are we above or below the average?
The danger in making comparisons is twofold: The comparative items are often very different, and data compilations are open to interpretation.
Any accurate comparative analysis needs to take into account hidden variables and their implications for the data. It is crucial to understand why the numbers say what they say and how they can inform the decision-making process. The presentation of results should include more than just a description of the data-gathering process; instead, it should analyze the information and explain the findings. It should also illustrate ways to use the information to promote effective decisions.
Benchmarking can capture lessons learned from projects. This information can help to establish what worked well and what did not. Because of the changing nature of business, operations and technology, this information should be reviewed and updated frequently.
Benchmarking can also identify a range of critical space metrics that can be used to evaluate current projects. These numbers can be used to help evaluate drivers such as building size, overall population and gross square footage. But these metrics are not meant to stand alone. New technologies and trends should be considered to present a true picture of space needs. This type of analysis is intended not as a simple comparison, but as a way to identify an acceptable range.
It is important to identify both the similarities among what is being compared — such as facility type and operations — and the differences — for example, stand-alone facilities vs. facilities that are part of a larger campus — to confirm the findings and validate the analysis. All of the variables in a benchmarking study should be normalized; for example, adjustments should be made to account for location and current dollars. The data gathering must be consistent, accurate and in a common language, or it will not be credible.
Here are some examples of how data and certain comparisons can be erroneously applied:
- Benchmarking alone cannot accurately establish relationships between facility design and productivity. Trying to do this could create inflexible or rigid standards that limit improvements in design.
- By itself, benchmarking cannot predict future needs or quantify future trends and performance improvements in the evolution of technology and techniques. Using this approach could make it harder to explore alternative design approaches and paradigm shifts that could benefit the project and the business.
- Without other information, benchmarking cannot evaluate the need for a reduction in project scope. Doing so could create somewhat false statistics relative to cost/area which might then be used to limit project costs unrealistically.
But there are ways to create more thoughtful comparisons that will address many of the challenges facing top executives. These include:
- Using a common language, normalization, and consistent analytical methods.
- Identifying subjective factors that influence the numbers.
- Creating tools to tell the story.
Apples to apples
A recent benchmarking analysis looked at five research facilities for a multinational pharmaceutical company. Because each facility was created as a response to specific business and design needs, the first step to developing a useful benchmark analysis was to understand and normalize the context to ensure that the study was comparing apples to apples.
- Building program: Although the focus of all five facilities was on basic research laboratories, some also had administrative offices, specialized research, campus-supporting amenity spaces or enclosed penthouse mechanical spaces.
- Geographic location: The facilities spanned the globe. Three were located in the United States — one in the East, one in the Midwest, one in the West. The other two were located in Europe and Japan. Labor availability and labor requirements (open vs. closed shop) and material availability and supply are just some of the influences that location can have on a benchmarking study.
- Time: The completion of the facilities ranged over a period of three years. Currency exchange rates had changed over this period of time.
Normalization in this study meant breaking the facility program into like components: laboratory spaces, specialized research facilities, and administrative, amenity and support spaces. Normalization also called for either inflating or deflating costs to adjust for time and location.
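The time and location adjustments described above can be sketched as a small calculation. This is only an illustration of the idea, not the study's actual method: the inflation rate, location cost index and facility cost below are hypothetical values invented for the example.

```python
# Illustrative sketch of cost normalization for time and location.
# All figures here are hypothetical, not data from the study.

def escalate(cost, completion_year, baseline_year, annual_rate=0.03):
    """Inflate (or deflate) a cost to a common baseline year,
    assuming a flat annual construction-cost inflation rate."""
    return cost * (1 + annual_rate) ** (baseline_year - completion_year)

def localize(cost, location_index):
    """Express a cost in neutral-location terms by dividing out a
    relative location cost index (1.0 = the neutral baseline)."""
    return cost / location_index

# Example: a facility completed in 2001 at a location with a 1.15
# cost index, normalized to 2003 dollars at a neutral location.
raw_cost = 42_000_000
normalized = localize(escalate(raw_cost, 2001, 2003), 1.15)
print(f"Normalized cost: ${normalized:,.0f}")
```

With every facility expressed in the same year's dollars and a neutral location, the cost comparisons that follow become meaningful.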
Another goal was a consistent approach to data analysis. Globalization makes it essential to understand cultural differences and to develop a method of neutralizing those differences.
Facility executives at each site measured and identified space somewhat differently. Therefore, during data gathering, it was important to establish which space measurement methodology and definitions would be used. Discussions explored both the method of space measurement — for example, whether to use BOMA standards — and broad definitions of space types, for example, what constitutes laboratory space vs. a specialized space.
To create a credible analysis that was supported by facility executives at each site, it was essential to document how space was measured and in which category each space belonged. If a specific space was in question, a closer review of the definitions clarified the situation.
One of the primary goals of the analysis was to identify a cost-per-square-foot benchmark for laboratory facilities and to understand if, where and why facilities did not meet these benchmarks. Laboratory facilities, by nature, are expensive. They must provide a safe, high-quality environment for scientists to perform experiments using highly volatile compounds, as well as an atmosphere that allows scientists to concentrate and collaborate. Doing this requires creative office and gathering spaces as well as research spaces. Each of these space types uses different building criteria.
More than Numbers
As space becomes more specialized, its cost increases. It is not the highly visible collaborative spaces that drive facility costs but the quality and redundancy of mechanical and support spaces required by laboratories and other specialized research facilities. It was necessary to develop a tool that would easily convey this to business managers making decisions.
Two methods were used to achieve that goal. One was a strict analysis of the numbers (gross square feet per person, cost per gross square foot and cost per person). The other was identification of “influencing factors” for each component, as described below.
Clearly, any benchmarking study needs to provide numerical data. As already mentioned, facilities were broken into components. During this study, construction costs were broken into those same components (laboratory spaces, specialized research spaces, and administrative, amenity and support spaces) to make it clear to senior management where construction dollars were being spent.
However, the data from this study was intended to instruct rather than to serve as a mandate for future facilities. Rather than identify a single number for gross square feet per person, cost per gross square foot or cost per person, it was important to develop a target range that could help guide future projects. In addition, because this study was focused internally, there was a risk of ignoring what was happening in the industry. To avoid that problem, a broad survey of similar facilities built at similar times was conducted. This information was provided for additional context, not as a strict comparison.
Because future facilities will be built to address distinct business and design needs, the benchmarking study identified “influencing factors” that fell into six categories:
- Scientific mission — research activities that drive lesser or greater space allocations.
- Operational efficiencies — circulation strategies (whether personnel and service traffic are separated or mixed) and sharing concepts (whether basic functions are centralized to support a larger population, reducing duplication of equipment, space or both).
- Safety — providing adequate space to safely conduct research activities; factors include module widths and lengths, equipment densities and proper support allocation.
- Flexibility — redundancy of mechanical, electrical and plumbing systems and providing infrastructure to support future occupancy needs.
- Security — the range of levels of access to facilities.
- Workplace standards — characteristics of the organization that affect allocation of lab and office space (e.g., whether internal standards follow industry best practices, whether open-plan workstations or hard-walled offices are used, and whether employees get more or less space based on their rank in the organization).
Each factor increased or decreased both cost and space. A simple graphic comparison of the facilities and how each of the influencing factors was addressed became a very instructive tool for senior management. (See box below.) They could quickly and easily see what was driving the cost of facilities up or down. They were then able to focus on the value rather than the cost of facilities.
An accurate comparative analysis that instructs is a solid start. How that analysis is presented to decision-makers matters as much, if not more. Numbers can be dangerous when evaluated out of context. A highly complex, detailed and analytical presentation can be confusing and ineffective in transmitting a message to senior management.
Globalization requires that senior management spend more time at other locations, leaving them with less time to review data and make decisions. They need to be confident that the analysis is credible, understand the data, and be able to use what they have learned from the data.
A successful benchmarking study does not stop at the analysis. It requires developing tools that senior management can use to support effective decision-making, presenting the right amount of information for the audience and being consistent in the presentation of the data.
The results of the comparative analysis of the five research facilities were presented in four ways.
Benchmark Analysis is a high-level data comparison that summarizes the findings of the study. It uses a simple matrix format to identify the data of most interest to senior management: historical data for the five buildings that were evaluated (gross square feet per person, cost per gross square foot and cost per person); a planning range that could be used to evaluate future research facilities; and a broad list of the influencing factors that could have an effect on each of the space components. (See second box below).
Influencing Factors is an educational document that uses graphics and text to illustrate the issues around space and cost, and then organizes these issues in matrix form for the buildings that were evaluated. Senior management could then see that there may have been acceptable reasons why a facility was out of the range.
Area Summaries is a graphical analysis of each of the facilities. It is a supporting reference document that addresses the accuracy and consistency of the method of evaluation.
Design Guidelines is a tool to aid in assessing the value of future facilities at the time that funding is being sought. This document uses both graphics and text to identify the key architectural, planning and engineering cost drivers for facilities. A simple visual rating system to illustrate three approaches (basic or low cost, intermediate or mid-range cost, and complex/expansive or higher cost) allows project managers to use design guidelines as a scorecard for a proposed building and to help senior management focus on the kind of facility that will be built.
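The scorecard idea behind the design guidelines can be illustrated with a minimal sketch: each cost driver is rated basic (1), intermediate (2), or complex (3), and the total shows where a proposed building sits on the cost spectrum. The drivers and ratings below are hypothetical examples, not the study's actual rating system.

```python
# Hypothetical sketch of a design-guidelines scorecard. The cost
# drivers and ratings below are invented for illustration.

RATINGS = {"basic": 1, "intermediate": 2, "complex": 3}

# A proposed building, rated against a few example cost drivers.
proposed_building = {
    "mechanical redundancy": "complex",
    "circulation strategy": "intermediate",
    "workplace standards": "basic",
    "security provisions": "intermediate",
}

score = sum(RATINGS[rating] for rating in proposed_building.values())
max_score = max(RATINGS.values()) * len(proposed_building)

print(f"Scorecard: {score} of {max_score} "
      f"({score / max_score:.0%} toward the complex/expansive end)")
```

A one-line summary like this lets senior management see at a glance what kind of facility is being proposed, while the per-driver ratings preserve the detail project managers need.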
Each of these four components includes summaries that senior management can use to quickly ascertain both the quality and the value of the facilities they are funding. Yet each component also includes supporting detailed information that facility executives can use to compare current projects with historical projects, and to underline why there are differences. Using the same tools for both audiences during the funding process ensures that both sides evaluate consistent information in the same manner.
This method of benchmarking showed that there are features of research facilities that can add great value without adding excessive cost, features such as interaction areas, connection to scientific operations and the use of repetitive building blocks. There are also features that, on the surface, appear to be low cost — including multicorridor configurations, mechanical penthouses and minimal sharing arrangements — but that can impede operations and have a much higher long-term operational cost. The added value of the analysis was not in the historical data it provided, but in the tools that it offered for making better decisions leading to well designed, more cost-effective laboratory facilities.
The benefit of an internal benchmark study, as opposed to an industry benchmark, is that the accuracy, consistency and context are customized to the particular organization’s situation.
Without proper context and validation, benchmarking is simply numbers on a page. When appropriately undertaken and understood, however, the process of benchmarking can be of enormous benefit. Organizations can evaluate past planning and design decisions with an eye toward establishing new paradigms, tapping into lessons learned, and understanding the value of thoughtful space allocation and design solutions.
Lynn Mignola is a senior strategic facility planner with CUH2A. She played a significant role in the global strategic facilities planning efforts following a merger of two large pharmaceutical companies, including developing relevant benchmarking data. Erik W. Terry is a senior laboratory planner with CUH2A. His experience includes institutions of higher learning, government agencies, pharmaceutical and biotech companies.