Giving the customer a voice
Editor's note: Henry Blackwell is customer acceptance survey coordinator for Caterpillar Inc., a Peoria, Illinois-based manufacturer of heavy equipment. This article is adapted from a presentation delivered in July at The Manufacturing Institute's conference on "Measuring and Improving Customer Satisfaction."
Caterpillar's interest in customer satisfaction began with founder Daniel Best, who wrote a personal letter to every customer to ask if he was satisfied with his machine. That concern for customer satisfaction endures today, and it is why Caterpillar was ranked 7th out of 305 United States companies for the quality of its products and services in Fortune magazine's annual "Corporate Reputation" survey. Caterpillar has ranked in the top 10 for quality every year since Fortune began the survey in 1982.
In the early 1980s, Dr. Joseph M. Juran, a noted lecturer on quality, made a strong case for quality-related market research to gather essential input that is not available in-house. In our case, that input is the customer's perception of product quality. With this in mind, our corporate quality committee decided in 1986 that Caterpillar would measure customer perception.
We have three major reasons for surveying customers.
1. To solicit customer participation in improving products.
Who better to evaluate the product and tell us the best way to improve it than the people who use it day in and day out? They know about even the smallest problems, especially the ones they fix themselves with no dealer involvement.
Additionally, the customer is in the best position to evaluate dealer product support. Since all but one of our 200-plus dealerships in North America are independently owned, it is difficult for Caterpillar to know what kind of service our customers are really getting from our dealer organizations. The survey is a very efficient way to measure customer satisfaction with our product support capabilities.
But most important, this is the customer's chance to let us know what they think. To illustrate this point, about a year ago, we became concerned about our survey response rate, so we decided to do a little test. During a two-month period, we enclosed a dollar bill in each mailing. It worked well and our response rate jumped to 38%. I even received comments back such as, "Please send me another survey with more money," and a note from a municipality in New York that said, "What are you guys trying to do to us? We are already being investigated by the FBI." But I also got this: "Here's your dollar back. It was worth 100 times that to tell you what I think. Thanks for asking."
2. To provide information needed to manage new product programs.
Before you can build a quality product, you must have accurate information about your customers' needs. Then you can begin to build products based on those requirements, and then measure how well customers like the result. And keep measuring.
Our customers are the only ones who can tell us if our new products are up to snuff. Quality is the customer's assessment; it is his opinion. But it is the only one that matters, so that is how we must verify the quality of our products. Our customers tell us what they expect the product to be able to do, both now and in the future. That includes productivity, serviceability, and comfort-related items.
Also, depending on the sample size for a given model, the customer survey has the potential to provide the credibility needed to move the company into action. Usually people within the organization have heard all of the comments and complaints before. But once you can quantify these complaints and show the decision makers the customers' comments in black and white, then your organization can truly become customer driven.
3. To calibrate existing quality indicators.
Most companies have a large variety of internal quality indicators. For example, the number of defects counted during the manufacturing and assembly process is often used to measure product quality. But we all recognize how far these internal measures can be from the customer's perception of quality. Even external indicators, like dealer-reported repairs, do not necessarily measure customer satisfaction.
Internal targets which are used as an indicator of product reliability need to be calibrated against a customer satisfaction index. If customers are not satisfied with a product that achieves its target, then the target needs to be revised. Statistical analysis of our survey data and of our internal measure of product reliability shows that there is a direct correlation between these two quality indicators.
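To illustrate the kind of calibration check described above, here is a minimal sketch of the underlying arithmetic. The repairs-per-100-machines metric, the per-model figures, and the satisfaction scale are all hypothetical stand-ins, not Caterpillar's actual data or analysis; the sketch only shows how a Pearson correlation would relate an internal reliability measure to a survey-based satisfaction index.

```python
# Hedged sketch: correlating a hypothetical internal reliability
# measure with mean survey satisfaction, per model.

def pearson_r(xs, ys):
    """Pearson correlation coefficient between two equal-length lists."""
    n = len(xs)
    mx = sum(xs) / n
    my = sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    vx = sum((x - mx) ** 2 for x in xs)
    vy = sum((y - my) ** 2 for y in ys)
    return cov / (vx * vy) ** 0.5

# Hypothetical per-model figures: internal repairs per 100 machines
# (lower is better) and mean survey satisfaction on a 1-10 scale.
repairs_per_100 = [12, 18, 7, 25, 15, 9]
satisfaction    = [8.1, 7.2, 8.9, 6.0, 7.6, 8.5]

r = pearson_r(repairs_per_100, satisfaction)
print(round(r, 2))  # strongly negative: more repairs, lower satisfaction
```

A strong correlation between the two indicators supports using the internal target as a proxy; a weak one signals that the target needs recalibrating against what customers actually report.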
The 1990 Technical Assistance Research Program Institute (TARP) report indicates that top management is directly involved with only 5% of customer problems. Forty-five percent are handled through normal channels. But 50% of our customers do not complain to the manufacturer at all. Why?
- It's not worth the time or trouble.
- They believe no one really cares.
- They do not know how to complain.
With a customer survey, we hope to eliminate the first and third of these excuses. First, we have given customers a method to complain. If nothing else, they have the name of someone to call. Second, because a simple survey is sent for every machine sold, customers have an easy and trouble-free way to tell us about any problems or concerns they have.
Theoretically, the survey has the potential to reduce the percentage of customers who do not complain or just complain to front line people. This should increase top management's awareness of many product or dealer related issues.
Development of the survey
In 1983, we introduced a new product to the field. As usual, we did a field follow-up program on a few pre-production machines. Because we watched them so closely and gave those customers special attention during that time, no major problems were perceived. Once production machines hit the field, we found out differently. After several rework programs and millions of dollars in cost, we realized that we needed additional input.
So before introducing the backhoe loader in 1985, we decided to send a survey to prospective customers to get direct feedback from the end users. That way, information about the product was not filtered by dealer and support staff, giving us a better understanding of any deficiencies before going into full production.
In early 1986, a corporate committee was formed and they determined that we needed another quality indicator. It was agreed that an external measure was needed and that the backhoe loader survey seemed to work quite well. It was reconstructed into a survey which could be sent to any machine owner. This survey was four pages long and was sent six months after the sale of each machine.
With the help of an outside marketing company, we soon began monthly production mailings in the United States and Canada. That firm also provided us with a computer dump of the data. That is what we asked for, but that is not what we wanted or needed. Only a handful of people actually read and understood this compilation of data.
In January 1988, since our response rate at that time was only 20% and the computer reports were difficult to interpret, we decided to redesign the form and to collect the data in-house. This is when we really began to manage the entire customer satisfaction program.
Start-up considerations
It is very important to start with a customer focus group to determine the kinds of issues customers are concerned about in their business; then try to pick out their most critical concerns.
Because of the variety of information desired, surveys should be constructed by a multi-functional team, with representatives from marketing, engineering, quality and service. Collect input from each group, and then develop a survey with questions that can result in actionable information.
Market research consultants can be very helpful in the development, execution and analysis of a customer survey. In fact, Caterpillar works with some outside consulting firms on other types of market research. A research firm helped to get this project going back in 1986. However, because of a falling response rate and insufficient report capabilities, we felt it was best to handle this survey in-house.
Timing important
The timing of the survey is also important. We wanted the respondent to have time to use the machine and to get familiar with the product, but we were also very interested in his initial impression of our product. We believe surveying six months after delivery allows us to accomplish both objectives. Early hour reliability and a quality image go hand in hand. Many of these early hour deficiencies can be attributed to the dealer, plant, design, or supplier. Once these deficiencies are discovered, then you can begin to address the causes of those problems and resolve them.
Survey method
The survey method was determined by a process of elimination. Focus groups allowed us to get detailed information from the appropriate people, but on a continuous basis this method was too expensive and it limited us in the number of respondents we could contact.
Telephone surveys provided quick answers at relatively low costs. The problem is that most of our customers are not accessible by phone. But, assuming we could contact customers by phone, they may not be able to take the time to give well thought-out answers. Most customers need to review their records on any given machine in order to give accurate and intelligent responses. Also, we felt that this method would not serve our customers well because most phone operators would not be able to interact in a knowledgeable fashion with our customers.
A mail survey lets us ask both more questions and more detailed questions. Another advantage is that the customer can complete the survey at his convenience. And the cost is even less than a phone survey when attempting 100% coverage.
Constructing a survey
A survey should first include an acknowledgment of the recent purchase and tell users that their opinions are important. Then we want to know why they bought our product, whether they are satisfied with it, what kind of failures (if any) they have had, and, finally, what kind of service they have received from our dealers.
The envelope should look official and important. Using a first class stamp rather than a postage meter helps to get the customer's attention. With all of the junk mail that people get these days, the survey could end up in the trash without even being opened.
In addition, every survey should have some kind of introduction that tells what we are doing and why we are doing it. Then we tell the customer what we expect of him. We feel the cover letter should come from a real person and someone who has authority to take some action if necessary. Every once in a while a customer will call and want to talk to someone rather than complete the survey.
It is important to keep the form short and simple. Our survey is a single sheet, front and back. Most questions ask for a simple "X" in a box that best reflects their feelings. But we also leave room for unstructured comments. We include a postage paid return envelope and a thank you note at the end of the survey form.
Our survey gathers customer opinions in three areas.
1. The buying decision.
It is important to know why the customer bought our products, so we ask about specific attributes that the customer considers before making a purchase. We felt this was necessary so that we could evaluate whether we were living up to customers' expectations. Once you know why your customers buy your product (at least in our industry), you do not need to keep asking. The answer does not change from month to month or year to year unless there is some major change in the company's marketing philosophy or the world economy.
2. Satisfaction with product characteristics.
The heart of the survey asks how satisfied the customer is with various characteristics of the product. Satisfaction at the time of delivery indicates whether plant quality systems and dealer pre-delivery systems are working as we intend. Here the survey touches on objective observations: Were there any leaks, loose bolts, or missing parts? Did the dealer have to make adjustments? Again, as a company, these are things we can take action on to improve future products. Since we continue to measure, we know if the corrective action worked.
Satisfaction with product performance attributes allows us to evaluate how this product compares with a previous model, the competition, or with our customers' own standards. Customers' expectations are usually reasonable, and these expectations must be met if we hope to continue to be successful.
3. Repair information.
The survey respondents also provide input that can be used to help convince decision makers to make changes to products in the field. Data on the number of repairs and the types of repairs can also be obtained. Not all repairs will be reported on the survey, and we had no intention of using these questions as a measure of product reliability. The repair data do, however, give some insight into the customer's frame of mind when he answered the survey. They also give us some specific things to discuss during follow-up calls.
Customer perception of the dealer's service can also be measured. The quality of dealer service, because it reflects back on the manufacturer, has a strong effect on sales. Any strengths or weaknesses identified in the dealer organizations are addressed by our Dealer Marketing group.
Finally, it is important for the customer to summarize his overall impression of the product. Summary information could include questions such as: How satisfied is the customer with the ownership experience? How can the product be improved? Is there anything else that the customer might want to communicate to the factory? Can we assist in resolving any concerns he may have? If the customer does indicate he would require assistance, then make sure you have people with good product knowledge and the personality to communicate with a potentially irate customer.
When we have an unhappy customer, we feel it is imperative to communicate with him. We make contact, listen to his concerns, and rectify them if we can. But most important, we let him know we are interested in his comments.
Throughout corporation
Because all of the responses are collected and entered into our mainframe computer system, the survey data is available throughout the corporation. Since different groups are interested in different issues and products, we created an on-line, menu-driven selection report system which allows for sorting of data by plant, model, dealer, etc.
Looking at the results geographically, we might, for example, see that customers in one region were less satisfied with the number of repairs than those in other regions. This possible sensitivity to the number of repairs could be due to a cultural difference or may be due to a concentration of a certain product which is less reliable. Environmental differences such as the climate might affect performance which could have some impact on the customers' impression of the product. Remember, the survey results are only to help find inconsistencies. Think of them as a starting point for identifying the real problems or issues.
A breakdown by dealers within the same region could also be revealing. Dealer delivery inspection, parts availability or service support do vary from dealer to dealer. These are things that district managers are tracking and can work on with deficient dealers to bring up their customers' satisfaction level. A secondary benefit of the survey is an improvement in product support. If your dealer organization knows you are getting direct customer feedback, they will try harder to make the customer happy.
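The sort-and-compare reporting described above can be sketched as a simple group-and-average over survey records. This is only an illustration of the idea; the record fields, dealer and model names, regions, and scores below are invented, not actual survey data or the actual mainframe report system.

```python
# Hedged sketch: grouping hypothetical survey responses by an
# arbitrary field (dealer, model, region, ...) and averaging
# their satisfaction scores.
from collections import defaultdict

responses = [
    {"dealer": "A", "model": "416", "region": "NE", "satisfaction": 8},
    {"dealer": "A", "model": "416", "region": "NE", "satisfaction": 6},
    {"dealer": "B", "model": "416", "region": "SE", "satisfaction": 9},
    {"dealer": "B", "model": "D6",  "region": "SE", "satisfaction": 7},
]

def report(records, key):
    """Mean satisfaction for each value of the chosen key."""
    groups = defaultdict(list)
    for rec in records:
        groups[rec[key]].append(rec["satisfaction"])
    return {k: sum(v) / len(v) for k, v in groups.items()}

print(report(responses, "dealer"))  # {'A': 7.0, 'B': 8.0}
print(report(responses, "region"))  # {'NE': 7.0, 'SE': 8.0}
```

Running the same report with a different key is what makes the regional and per-dealer comparisons cheap: one pass over the data per question asked.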
Tabular reports can be generated as needed. We currently send a report to our executive office on a quarterly basis highlighting any significant changes or trends.
Response rates are reported as well. We want to ensure adequate sample sizes so there is no question about the credibility of the data. Generally, this rate fluctuates between 25% and 30%. The 35% rate in the second quarter of 1988 was due to the inclusion of a dollar bill in two of the three monthly mailings during that quarter. A double mailing will accomplish nearly the same result.
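The response-rate figure itself is simply completed surveys returned divided by surveys mailed for the period. A minimal sketch, with hypothetical monthly counts rather than actual mailing figures:

```python
# Hedged sketch: quarterly response rate from hypothetical
# monthly mailing and return counts.
mailed   = [1200, 1150, 1300]   # surveys sent in each month of the quarter
returned = [ 330,  310,  360]   # completed surveys received back

rate = sum(returned) / sum(mailed)
print(f"{rate:.1%}")  # 27.4%
```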
Promote the survey
All of this information is of little value unless people know about it and use it. At Caterpillar, we promoted the survey in several different ways. We started by routing the summaries to all of our Service Engineering division managers. Quarterly updates were sent to the administrative vice-president, who communicated the survey results to the executive committee. Once plant management realized that the executive committee was reviewing survey results, they got very interested. Now, all of our Service Engineering personnel have been made aware of the process and have access to and utilize the survey files.
Finally, we did an internal media blitz. The survey was highlighted in a management newsletter and in an article in the company newspaper, and a short video blurb was created as part of a monthly company news show.
Take corrective action
The final step in evaluating the results is to take corrective action. This means: do something. Once you begin to get feedback from your customers, do something with it. Make follow-up calls or visits to disappointed customers, just to get additional information about their specific problems or concerns. Customers are pleasantly surprised that someone read their responses and are elated that someone is following-up with them. That gesture alone could elevate your company's image in the customer's mind. Chart the results and look for trends (positive or negative) and use the data to generate action. Then continue to measure, because the target is customer satisfaction and it is constantly changing.