For decades, Hollywood has attempted to bring human thinking to robotic creations. The Terminator, starring Arnold Schwarzenegger, sticks out in my mind as one of the first movies to make this effort. The latest offering in this genre is Chappie. The plot is set in a not-too-distant future in which crime is combated by a mechanized police force. When one police droid, Chappie, is stolen and given new programming, he becomes the first robot in this fictional society with the ability to think and feel for himself – just like a human.
Nicholas Carr, in his latest book, The Glass Cage, deals with the phenomenon of automation. He explains: “There is no question that computers are efficient at performing many demanding human tasks, whether of the brain or the body. They are able to replicate our ends without replicating our means, and they can do it with superhuman speed by following explicit instructions. But here is the ultimate question: Can computer automation ever provide the same tacit (implicit) knowledge available to humans?” Are they limited to being mindless, even as they become ever more useful?
In light of this question, I pondered the impact that automation and artificial intelligence can have on the delivery of services in four important areas that affect us all quite often. And yes, one of those areas is indeed investing.
But let’s deal with Chappie first. Police robots can carry a bomb to a safe distance at the direction of a human officer, but they can’t know why it is so important to do so. A police robot cannot “know the neighborhood,” or its residents, in the same manner that an officer who patrols there regularly can. A robot can’t read non-verbal cues, which provide subtle insight into serious or dangerous situations. Thus, while it may make for some entertaining motion picture action, it is unrealistic to think robots could ever be placed in positions of authority (law enforcement, for example) that are so vital to our society. What about other professions? Next, let’s look at airline pilots.
Carr’s investigation revealed that, while few people believe human pilots could ever be completely removed from the cockpit of passenger jets, researchers “have studied what’s gained and lost when pilots share the work of flying with software. They’ve learned that a heavy reliance on computer automation can erode pilots’ expertise, dull their reflexes, and diminish their attentiveness, leading to what Jan Noyes, a human-factors expert, calls ‘a deskilling of the crew.’ Eventually, thought and action become seamless. Skills fade.”
Of course, the scary part of this scenario for us passengers is what will happen when the pilot – having been lulled into a sense of automation security – is faced with an unexpected and sudden danger in the cockpit. Will a reduction in a pilot’s ongoing attention to the airplane’s instruments, and thus his “feel” for the aircraft, hamper his ability to react naturally (implicitly)? Or will he have to “think about it,” thus burning valuable reaction time?
In an airplane, the autopilot is, for the most part, clearly a blessing and a valuable tool. But perhaps you would agree that, in some cases, it also has the potential to be a detriment, especially when a pilot’s vigilance and training are not continuously applied. So, even if automation can never replace the human pilot, it is somewhat disconcerting that a pilot’s skills may be diminished as a result of relying too much on it.
Carr posits that “multitasking is the opposite of mindful presence.” With regard to the medical profession, he states:
“The intrusiveness of the computer creates another problem that’s been widely documented. EMR (electronic medical records) and related systems are set up to provide on-screen warnings to doctors, a feature that can help avoid dangerous oversights or mistakes. If, for instance, a physician prescribes a combination of drugs that could trigger an adverse reaction in a patient, the software will highlight the risk. Most of the alerts, though, turn out to be unnecessary. They’re irrelevant, redundant, or just plain wrong. They seem to be generated not so much to protect the patient from harm as to protect the software vendor from lawsuits. Studies show that primary-care physicians routinely dismiss about nine out of ten of the alerts they receive. That breeds a condition known as alert fatigue. Treating the software as an electronic boy-who-cried-wolf, doctors begin to tune out the alerts altogether. They dismiss them so quickly when they pop up that even the occasional valid warning ends up being ignored. Not only do the alerts intrude on the doctor-patient relationship; they’re served up in a way that can defeat their purpose.”
“Being led by the screen rather than the patient is particularly perilous for young practitioners, [Dr. Beth] Lown suggests, as it forecloses opportunities to learn the most subtle and human aspects of the art of medicine — the tacit knowledge that can’t be garnered from textbooks or software. It may also, in the long run, hinder doctors from developing the intuition that enables them to respond to emergencies and other unexpected events, when a patient’s fate can be sealed in a matter of minutes.”
Again, by showing the apparent tipping point at which automation can be a detriment to a doctor treating her patients, the implication is that replacing the human doctor altogether would be disastrous – and impossible. Now, what about in the financial advice arena?
There is no question that new investment vehicles and more efficient retirement planning calculations have been created with the aid of computer technology. But can the entire task be handed over to a computer? Some in the profession would argue, “Yes.” The industry calls them robo-advisors. The premise is that algorithms are now so powerful that an educated technician could watch a computer screen and adequately support a client’s investment needs, wants and wishes. But isn’t the role of a trusted financial professional that of police officer, pilot and doctor all rolled into one?
Like a respected police officer who knows his community, doesn’t your financial advisor seek to keep you financially secure by knowing you personally?
Just as a pilot has seen every situation in the air and flown safely through it before – navigating nuanced conditions based on experience – the seasoned financial advisor has “flown” through choppy markets and managed a safe portfolio landing for many “passengers.”
And, like a physician, your financial advisor knows your history, the context within which your needs, wants and wishes reside. No other person’s financial circumstances, goals and family dynamics are exactly like yours. A computer cannot know you like a fellow human being, especially not one who is always working in your best interest – the very definition of a fiduciary.
Ultimately, what automation lacks in each example I’ve cited here is simply wisdom. Wisdom can be defined as “knowing what is true and right – coupled with just judgment as to action.” That is why, no matter how hard technology tries, this critical factor will never be replaced – in a squad car, a cockpit, an operating room or in your financial advisor’s office.
As Mr. Carr so keenly observes, “Artificial intelligence is not human intelligence. People are mindful; computers are mindless.”
And that will never change.
The opinions expressed by featured authors are their own and may not accurately reflect those of the BAM ALLIANCE. This article is for general information only and is not intended to serve as specific financial, accounting or tax advice.
© 2014, The BAM ALLIANCE