Or, perhaps put another way, does the automation of flight control activities sufficiently remove the pilots from the loop that they are no longer truly in control of the plane, even when they are nominally at the controls? A story out last week in the New Yorker addresses this topic in reasonable detail, though not quite to the level of a full research paper. Still, the tales told and conclusions reached present a bit of a conundrum for pilots and passengers alike.
Autopilot is not a new concept. It was 1914 when Lawrence Sperry demonstrated the gyroscopic Airplane Stabilizer, developed at his father's company, the first version of autopilot, to the world. In the 100 years since then, the balance of control between the electronics and the humans on the flight deck has shifted many times. And, generally speaking, the decisions to automate more rested on sound reasoning.
The supporting logic was the same in aviation as it was in other fields: humans are highly fallible; systems, much less so. Automation would prevent mistakes caused by inattention, fatigue, and other human shortcomings, and free people to think about big-picture issues and, therefore, make better strategic decisions. Yet, as automation has increased, human error has not gone away: it remains the leading cause of aviation accidents.
Yes, computers do exactly what we program them to do, but there's still the part where humans are highly fallible. And even with all the checking and double-checking of the code, things can go wrong. Moreover, the automated systems do not necessarily have a plan for every scenario. There are times when human control is required even in such a highly automated environment. The story discusses the Colgan Air crash outside Buffalo in 2009, one in which the automated systems indicated problems to the pilots but where the pilots' reaction to those indications contributed to the crash of that plane.
Much of the recent research on the topic has focused on the ability of pilots, both new and highly experienced, to react to abnormal situations as presented in simulators. One study tracked how much time pilots spent focused on actively flying the plane at varying levels of automation.
It was true, as automation proponents argued, that pilots spent less time worrying about the minutiae of flying when they were using more highly automated systems. But they weren't necessarily using the newfound mental space to perform higher-order computations. Instead, a full twenty-one per cent of the pilots surveyed reported thinking about inconsequential topics, just as Shaw and Renslow, the Colgan Air pilots, had done.
Another study of 16 highly experienced 747 pilots showed that their manual flying skills remained strong, even with higher levels of automation in place. But their complex reasoning actions suffered. They proved less capable of deciding what to do to ensure the safety of the plane, even though they were still able to actively enact the decisions they made.
Or, as the article puts it: "The pilots' ability to make complex cognitive decisions, however—what Casner calls their 'manual thinking' skills—had suffered a palpable hit. They were less able to visualize their plane's position, to decide what navigational step should come next, and to diagnose abnormal situations."
Honeywell is one of several companies in the avionics world, and its history there goes back to the very beginning; Sperry's gyroscope work is part of the company's foundation. Earlier this year I got to tour its research facilities near Phoenix, where new flight deck control systems are studied and evaluated. Many of the topics those researchers discussed mirror those raised in the New Yorker article. The day I was there they were working on Electronic Flight Bag testing, running demo models in their simulator to test the impact of things like minor turbulence on pilots' ability to navigate effectively and find the appropriate details on an approach plate chart. When they ran some of us through the cycle it was no surprise that we were terribly distracted by the electronics and "forgot" to fly the plane. Not an identical comparison, to be sure, but it was a telling enough moment to leave me reminded that this sort of stuff is vitally important.
Or, as a good friend and automation proponent said in describing this same story, “I feel like computers are supposed to work for human beings.” And that’s great, so long as the automation doesn’t actually decrease the reliability of the transaction.
Yes, more things in the cockpit are automated now than in the past. But it is important that the pilots remain actively engaged in the process of flying the plane. They need to be in the feedback loop, not next to it. Falling out of that loop is a very real risk, because computers aren't perfect either: they depend on the inputs we give them and the code we write for them. We need both automation and human interaction. Or the automation needs to be WAY better than what we've got today.
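That "in the loop, not next to it" idea can be sketched in code. This is a purely illustrative toy, not real avionics: every name, number, and threshold below is a hypothetical I've made up to show one narrow point, that automation only has a plan for the scenarios it was programmed to expect, and someone has to be paying attention when it hands control back.

```python
# Toy altitude-hold sketch (illustrative only; no resemblance to real avionics).

def autopilot_correction(altitude, target, gain=0.1, max_correction=5.0):
    """Return a proportional correction toward the target altitude.

    Returns None to hand control back to the human when the reading falls
    outside the envelope the automation was programmed for -- the "no plan
    for every scenario" case. All parameters here are invented for the sketch.
    """
    if altitude is None or not (0 <= altitude <= 45000):
        return None  # scenario the code has no answer for: the human must act
    error = target - altitude
    correction = gain * error
    # Clamp so the automation never commands an extreme input on its own.
    return max(-max_correction, min(max_correction, correction))

# A pilot monitoring the loop notices the None hand-back immediately;
# one sitting "next to" the loop may not.
print(autopilot_correction(34900, 35000))  # routine case: a small, clamped nudge
print(autopilot_correction(None, 35000))   # failed sensor: control handed back
```

The design choice worth noticing is the `None` return: the automation does exactly what its code says, including explicitly giving up, which is only safe if a human is actively watching for that moment.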