Sometimes the big questions hit us: What does it mean to be human?
Machines will never be human. They will never be able to think, feel, or rely on intuition in the same way that we do. Yes, AI is getting to the place where it can teach itself to do things, where it can calculate all the possible outcomes of moves (and countermoves, and the outcomes of those moves) on a chess board to determine how to win the game, but that doesn’t mean that it should get the descriptor of ‘super-human’ applied to it.
Last year, DeepMind’s newest AI software, dubbed AlphaZero, was unveiled as the first ‘multi-skilled’ AI: it can learn games like Go, Shogi, or chess from scratch, whereas previous AI systems were highly specialised to play just one of these. While it cannot play the three games at once, it points to a future where AI will be more generalised and adaptable. The latest gaming tests Alphabet ran pitted AlphaZero against the specialised programmes for each game, so the fact that it succeeded is a significant breakthrough.
Then we have chatbots and AI assistants.
The Turing Test (developed by, and named after, the computing pioneer Alan Turing in the 1950s) measures a machine’s ability to exhibit intelligent behaviour convincing enough that it passes for a human.
In line with this, the American radio show Radiolab recently conducted its own “Turing Test” with the chatbot Mitsuku (recognised as one of the most human-like conversational AI programmes in the world), successfully fooling a significant portion of the audience into thinking they were conversing with a human.
Now, last month at Google I/O 2018, Google demoed its new digital assistant Duplex, an AI feature designed to take phone calls, make appointments, and generally do the inane tasks that we all avoid. Of note is that it’s a voice-based programme designed to pass for a human. Long gone is the robotic text-to-speech voice of yesteryear: Duplex has a natural cadence and rhythm, complete with the small voice cues (“mmhmm”s and “uh”s) that we use to fill gaps in conversation or indicate we are listening.
While currently constrained to well-defined conversation topics and tasks (like booking a hair appointment or making a dinner reservation), this points to an ever-nearing time when AI could play a significant role in our day-to-day lives.
All this is a great advancement for technology, humankind, etc., but a deeper question is, why-oh-why do we need them to be human? Why do we insist on applying human descriptors to machines, and what is driving this need to create in our own image? Is it because if we knew we were chatting to a machine we would not be as comfortable or likely to trust it? Are we afraid of a lack of empathy or appropriate action? On deeper reflection, isn’t it more unsettling to be conversing with a robot that is programmed highly enough that it passes for a sentient being? And, aren’t we actually limiting the capacity of bots and the like by insisting on this as a feature?
Similarly, Steve Worswick, the creator and developer of Mitsuku, reflected on some frustrations with the mindset we tend to have towards AI:
One of my biggest frustrations about entering contests… is having to “dumb down” my entry so it passes for a human. This involves putting deliberate spelling mistakes and changing many answers to be less accurate. For example:
Human: How high is Mount Everest?
Bot: 8,848 m (29,029 ft)
Human: No idea but I know it’s the tallest mountain
While the bot’s answer is more intelligent and useful, it’s certainly not humanlike… Let’s use AI and chatbots as a useful tool rather than trying to deceive people.
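The “dumbing down” Worswick describes can be pictured as a post-processing step on the bot’s output. The sketch below is a hypothetical illustration (the function and mapping names are invented, not Mitsuku’s actual code): a precise answer is swapped for the vaguer, more human-sounding one from his example, and typos are occasionally injected.

```python
import random

# Invented mapping for illustration: precise bot answers that get
# replaced with vaguer, more "human" ones (from Worswick's example).
PRECISE_TO_VAGUE = {
    "8,848 m (29,029 ft)": "No idea but I know it's the tallest mountain",
}

def add_typo(text: str, rng: random.Random) -> str:
    """Swap two adjacent letters at a random position to mimic a typo."""
    if len(text) < 2:
        return text
    i = rng.randrange(len(text) - 1)
    return text[:i] + text[i + 1] + text[i] + text[i + 2:]

def humanize(answer: str, rng: random.Random, typo_rate: float = 0.3) -> str:
    """Degrade a bot answer so it reads as more 'human'."""
    answer = PRECISE_TO_VAGUE.get(answer, answer)
    if rng.random() < typo_rate:
        answer = add_typo(answer, rng)
    return answer

print(humanize("8,848 m (29,029 ft)", random.Random(0), typo_rate=0.0))
# -> No idea but I know it's the tallest mountain
```

The point of the sketch is that every line of it is effort spent making the answer *worse* — exactly the inversion of priorities the quote complains about.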
One argument for making bots more human-like is the aforementioned natural skepticism or mistrust of them: so we name them, give them human voices, and apply human descriptors. Yet this accommodation is exactly what feeds the limitation. By using ‘human’ descriptors we place certain expectations (and unconscious restrictions) on machines, while often making the dangerous assumption that they will inherently carry the same morals and ethics as well.
Is our insistence on the constructed ‘humanity’ of these machines actually limiting their potential (outside of the obvious ethical parameters conversation)? Like computer programs, cars, etc., bots are tools, and should be designed and wielded to fulfil their potential. This means appreciating their machinery and construction, and leveraging it to enhance our lives in whatever way we build them to, free of ‘human’ attributions and limitations.