7 The good and the bad

As with the development of many technologies—cars, electricity, nuclear power—the construction of artificial minds is likely to have both negative and positive impacts. However, there is a sense in which building minds is much more fraught than these other technologies. We may, after all, build agents that are themselves capable of immorality. Presumably we would much prefer to build Commander Data than to build HAL or the Terminator. But how to do this is by no means obvious. There have been several interesting suggestions as to how this might be accomplished, perhaps most notably from Isaac Asimov in his entertaining and thought-provoking exploration of the three laws of robotics. For my purposes, however, I will sidestep this issue—not because it is not important, but because more immediate concerns arise from considering the development of these agents from a technological perspective. Let me then focus on the more immediately pressing consequences of constructing intelligent machines.

The rapid development of technologies related to artificial intelligence has not escaped the notice of governments around the world. One of the primary concerns for governments is the potentially massive changes in the nature of the economy that may result from an increase in automatization. It has recently been suggested that almost half of the jobs in the United States are likely to be computerized in the next twenty years (Rutkin 2013). The US Bureau of Labor Statistics regularly publishes articles on the significant consequences of automation for the labour force in its journal Monthly Labor Review (Goodman 1996; Plewes 1990). This work suggests that greater automatization of jobs may cause standard measures of productivity and output to increase even while unemployment also rises.

Similar interest in the economic and social impacts of automatization is evident in many other countries. For instance, Policy Horizons Canada, a think-tank that works for the Canadian government, has published work on the effects of increasing automatization and the future of the economy (Arshad 2012). Soon after the publication of our recent work on Spaun, I was contacted by this group to discuss the impact of Spaun and related technologies. It was clear from our discussion that machine learning, automated control, robotics, and so on are of great interest to those who have to plan for the future, namely our governments and policy makers (Padbury et al. 2014).

This is not surprising. A recent McKinsey report suggests that these highly disruptive technologies are likely to have an economic value of about $18 trillion by 2025 (Manyika et al. 2013). It is also clear from the majority of analyses that lower-paid jobs will be the first affected, and that the benefits will accrue to those who can afford what will initially be expensive technologies. Every expectation, then, is that automatization will exacerbate the already large and growing divide between rich and poor (Malone 2014; "The Future of Jobs: The Onrushing Wave" 2014). Being armed with this knowledge now means that individuals, governments, and corporations can support progressive policies to mitigate these kinds of potentially problematic societal shifts (Padbury et al. 2014).

Indeed, many of the benefits of automatization may help alleviate the potential downsides. Automatization has already had a significant impact on the growth of new technology, both speeding up the process of development and making new technology cheaper. The Human Genome Project was a success largely because of the automatization of the sequencing process. Similarly, many aspects of drug discovery can be automatized by using advanced computational techniques (Leung et al. 2013). Automatization of more intelligent behaviour than simply generating and sifting through data is likely to have an even greater impact on the advancement of science and engineering. This may lead more quickly to cleaner and cheaper energy, advances in manufacturing, decreases in the cost of, and increases in access to, advanced technologies, and other societal benefits.

As a consequence, manufacturing is likely to become safer, a trend already seen in areas of manufacturing that employ large numbers of robots (Robertson et al. 2005). At the same time, additional safety considerations come into play as robotic and human workspaces begin to overlap. This concern has resulted in a significant focus in robotics on compliant robots. Compliant robots are those that have "soft" environmental interactions, often implemented by including real or virtual springs on the robotic platform. As a result, control becomes more difficult, but interactions become much safer, since the robotic system does not rigidly drive to a target position even if there is an unexpected obstacle (e.g., a person) in the way.
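The virtual-spring idea can be sketched in a few lines. The following one-dimensional simulation is my own illustration, not code from any actual robotic platform; the mass, stiffness (k), and damping (b) values are arbitrary assumptions chosen for the example. It shows the key safety property: under a virtual spring-damper controller, the force the robot can exert against an unexpected obstacle is capped by the spring law, rather than growing without limit as it would under stiff position control.

```python
def virtual_spring_force(x, x_target, v, k=50.0, b=10.0):
    """Virtual spring-damper ("impedance") control law:
    force pulls toward the target, damped by velocity.
    k and b are illustrative values, not from a real platform."""
    return k * (x_target - x) - b * v


def simulate(x_target, x_obstacle=None, mass=1.0, dt=0.001, steps=5000):
    """Integrate a point mass driven by the virtual spring.

    If an obstacle lies between the start (x = 0) and the target, the
    mass simply rests against it, and the steady contact force is the
    bounded spring force k * (x_target - x_obstacle)."""
    x, v = 0.0, 0.0
    contact_force = 0.0
    for _ in range(steps):
        f = virtual_spring_force(x, x_target, v)
        v += (f / mass) * dt
        x += v * dt
        if x_obstacle is not None and x >= x_obstacle:
            # Motion is blocked: the robot does not push harder and
            # harder; it exerts only the (bounded) virtual spring force.
            x, v = x_obstacle, 0.0
            contact_force = virtual_spring_force(x, x_target, 0.0)
    return x, contact_force


# Unobstructed: the mass settles at the target.
x_free, _ = simulate(x_target=1.0)

# Obstructed at x = 0.6: contact force is capped at k * 0.4 = 20 N.
x_blocked, f_contact = simulate(x_target=1.0, x_obstacle=0.6)
```

A stiff position controller, by contrast, would keep increasing its corrective effort to eliminate the remaining position error, which is exactly what makes rigid robots dangerous around people.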

As the workplace continues to become one where human and automated systems co-operate, additional concerns may arise as to what kinds of human-machine relationships employers should be permitted to demand. Will employees have the right not to work with certain kinds of technology? Will employers still have to provide jobs to employees who refuse certain work situations? These questions touch on many of the same subjects highlighted in the previous section regarding the ethical challenges that will be raised as we develop more and more sophisticated artificial minds.

Finally, much has been made of the possibility that the automatization of technological advancement could eventually result in machines designing themselves more effectively than humans can. This idea has captured the public imagination, and the point in time at which this occurs is now broadly known as "The Singularity," a term first introduced by von Neumann (Ulam 1958). Given the vast variety of functions that machines are built to perform, it seems highly unlikely that there will be anything analogous to a mathematical singularity (a clearly defined, discontinuous point) after which machines will be superior to humans. As with most things, such a shift, if it occurs, is likely to be gradual. Indeed, the earlier timeline is one suggestion for how such a gradual shift might occur. Machines are already used in many aspects of design, performing optimizations that would not be possible without them. Machines are also already much better at many functions than people: most obviously mechanical functions, but more recently cognitive ones, like playing chess and answering trivia questions in certain circumstances.

Because the advancement of intelligent machines is likely to remain smooth and continuous (even if exponential at times), we will likely remain in a position to make informed decisions about what they are permitted to do. As with members of a strictly human society, we do not tolerate arbitrary behaviour simply because such behaviour is possible. If anything, we will be in a better position to specify appropriate behaviour in machines than we are in the case of our human peers. Perhaps we will need laws and other societal controls for determining forbidden or tolerable behaviour. Perhaps some people and machines will choose to ignore those laws. But, as a society, it is likely that we will enforce these behavioural constraints the same way we do now: with publicly sanctioned agencies that act on behalf of society. In short, the dystopian predictions we often see that revolve around the development of intelligent robots seem no more or less likely because of the robots. Challenges to societal stability are nothing new: war, hunger, poverty, and weather are constant destabilizing forces. Artificial minds are likely to introduce another force, but one that may be just as likely to be stabilizing as problematic.

Unsurprisingly, like many other technological changes, the development of artificial minds will bring with it both costs and benefits. It may even be the case that deciding what is a cost and what is a benefit is not straightforward. If indeed many jobs become automated, it would be unsurprising if the average working week became shorter. As a result, a large number of people may have much more recreational time than has been typical in recent history. This may seem like a clear benefit, as many of us look forward to holidays and time off work. However, it has been argued that fulfilling work is central to human happiness (Thagard 2010). Consequently, overly limited or unchallenging work may end up being a significant cost of automation.

As good evidence for costs and benefits becomes available, decision-makers will be faced with the challenge of determining what the appropriate roles of artificial minds should be. These roles will no doubt evolve as technologies change, but there is little reason to presume that unmanageable upheavals or “inflection points” will be the result of artificial minds being developed. While we, as a society, must be aware of, and prepared for, being faced with new kinds of ethical dilemmas, this has been a regular occurrence during the technological developments of the last several hundred years. Perhaps the greatest challenges will arise because of the significant wealth imbalances that may be exacerbated by limited access to more intelligent machines.