As noted in a renowned quote by Professor James Reason: "In their efforts to compensate for the unreliability of human performance, the designers of automated control systems have unwittingly created opportunities for new error types that can be even more serious than those they were seeking to avoid." Our investigation experience provides three lessons learned that support Professor Reason's statement.

The first is that the theory of removing human error by removing the human assumes that the automation is working as designed. So the question, as always, is: what if the automation quits or fails? Will it fail in a way that is safe? If it cannot be guaranteed to fail safely, will the operator be aware of the failure in a timely manner, and will the operator then be able to take over to avoid a crash?

An example of the automation failing without the operator's knowledge occurred right here in Washington; you may remember the Metro crash near the Fort Totten station in 2009 that tragically killed the train operator and eight passengers. In that accid