Communication During a Crisis

Summary:

When a crisis hits a business, you've got to work hard and fast to mitigate the negative consequences--a process that includes communicating with your clients. In this week's column, Payson Hall reminds us that keeping clients in the know is critical to a successful recovery and can preserve clients' faith in you, even when everything else has failed. Drawing from a recent crisis in which he was the client, Payson gives us key points to consider the next time we are overwhelmed by customers who want to know when business will return to normal.

My Internet service provider (ISP) was bought out earlier this year. Three weeks ago, in a poorly executed "big bang" migration, all servers and domains were transferred from the previous provider to the new owners. The smoke still hasn't cleared. Our Web site was crippled and email service has been sporadic since the migration. It's my guess that most customers have either found a new provider or are looking; my company already switched.

There are two interesting aspects to this fiasco:

  • The technical debacle--It's a tale of arrogance and ignorance in trying to convert tens of thousands of domains all at once with what appeared to be little planning, prototyping, or testing. To complicate matters, the ISP also changed the help ticket application and moved the support telephone lines on the same day they migrated all the customer domains. The outcome was sadly predictable: The migration created a huge, smoking crater in place of customer Web sites. As customers realized they had problems and tried to contact the provider, they discovered the help ticket and telephone systems were offline. This exacerbated the second part of the disaster.
  • Botched communication, both before and during the crisis

This column explores the troubled communication aspects of this tragedy to discover what can be learned and applied to make your next crisis less disastrous. (Feel free to share a link to this column with any friends working for a federal disaster support agency.)

In March, the ISP notified customers via email that the migration was going to occur and encouraged users to back up their sites and data. It also cautioned that any changes made after backups were taken on March 15 would be lost when systems were restored on April 15. Setting aside the fact that a thirty-day delay between backup and restore is criminally poor service in the twenty-first century, I received the email warnings prior to the conversion only because my personal email account is registered as our domain administrator. Several thousand other customers never saw the message, because they check their administrator email only when they are actively engaged in site administration. Others apparently had non-technical types monitoring the admin accounts; they received the message but didn't understand its significance. The bottom line is that many businesses lost thirty days of critical customer data--not just Web pages, but databases with purchase and payment information.

Takeaway points:

  • Keep people informed about what you plan to do.
  • Verify that they are getting the message.
  • Check to see if they understand the message.

At the time of the conversion, the vendor established a Web page to communicate migration status--a good idea.

Unfortunately, the only way to find the page was to navigate through the trouble-ticket-creation Web page--a bad idea. People who identified problems days after the conversion and tried to enter trouble tickets discovered what looked like a secret site--one that gave the impression the provider was hiding the fact that many of their problems were already known and still unresolved.

The migration status page was updated every few hours on the first day, then once per shift for a day or so, then once per day, then not at all. The last entry was April 22, two weeks before I wrote this column. The first entry triumphantly proclaimed the success of the migration, noting a few "minor issues." Later updates acknowledged more serious problems and asked customers to be patient. Even later entries actually chided users for some of the ways they had coded Web pages and applications, telling them that many of the problems wouldn't have occurred if they had done things "correctly." Starting and stopping communication while the problems persisted? A terrible idea.

Takeaway points:

  • Set up and publicize a central source of status information.
  • Make sure the status is easy to find and available to everyone who might care.
  • Keep status current, even if you have no new information. Better a message every two hours saying "no change" than silence.
  • Establish a gatekeeper or editor for all broadcast communication who is responsible for ensuring that content is balanced rather than defensive and that it explains what is happening and why. Blaming your victims for your mistakes is a truly awful idea.

I logged five trouble tickets and never received a reply. On the rare occasions when the support phone number worked, I left voice messages but never received a call back. Five days after the migration, a notice was posted to the status page reporting that all problems were known and asking customers to please stop submitting trouble tickets.

Takeaway points:

  • Positive acknowledgment of all incoming communication is essential. One reason for the flood of trouble tickets that overwhelmed the provider's support staff was that customers were reporting the same problem several times because they had not received a response. Customers were doubly frustrated because they felt their issues were not being recognized or addressed.
  • Precision is vital if you must broadcast a response. It is arrogant to say (and dismaying to hear), "We know about all problems." Much better to say, "We are aware of problems X, Y, and Z and will notify you when they have been resolved."

The best communication rule I've heard is "The burden of communication lies with the party that has the most to lose." Everyone had a lot to lose in this situation. The ISP may fail as a consequence of its poor migration. Customer operations were disrupted, and many lost business, data, and money. Some may be mortally wounded.

Better planning and testing clearly could have prevented much of this disaster, and better communication might have mitigated the consequences. Informed customers, most of whom gave up in frustration, might have been better prepared and more patient once they understood the nature of the problems and could see progress in addressing them.
