A helpful framework for understanding the evolution of technology, and where we currently stand in that process, is to divide the historical record into six stages of development:
- The Age of Tools (prehistory to 1500 AD)
- The Age of Machines (1500-1830)
- The Age of Systems (1830-1945)
- The Age of Automation (1945-1990)
- The Age of Autonomy/AI (1990 to today)
- The Age of Sentience (time: unknown)
The first four stages were delineated by Martin van Creveld in his book Technology and War: From 2000 BC to the Present, written in the late 1980s. We have expanded his framework to include the fifth and present stage, as well as the last, which many refer to as the Singularity.
Though many still conceptualize the current state of technology in terms of its prior stages of development, i.e., as mere tools or machines, the most relevant description now is autonomy: not just the ability to run without human intervention, but also the ability to learn and adapt while acting on its own behalf, something we usually equate with intelligence. For that reason, the present age may equally be described as the age of artificial intelligence.
The most interesting, and perhaps most controversial, places where AI and autonomous systems either operate currently or are likely to very soon are the financial markets and the battlefield, in the form of autonomous weapons systems, a.k.a. "killer robots."
This time on FS Insider, we spoke with Dr. Heather Roff (@HMRoff), a research scientist at ASU’s Global Security Initiative, to get a briefing on the latter.
Here is a clip of what she had to say when it came to how these systems may start to exhibit unforeseen behaviors due to their growing complexity:
AI Still in Its Infancy
These types of truly autonomous systems are in their infancy, Dr. Roff stated. What’s more, it really depends on how we define autonomous versus automated systems.
If automated means a system that can home in on a target, or find a preselected target, those types of systems have been in use for decades, said Dr. Roff.
“If you want to talk about autonomous weapons systems that can select their own targets and attack them, this capability is still not really operational yet,” she noted.
In other words, as far as we know, militaries aren't yet using technologies on the battlefield that are independently able to identify, select, and attack targets.
Artificial general intelligence (AGI) refers to a system with common sense and general reasoning capabilities, Dr. Roff stated. An AGI might have any number of attributes, including the ability to reason, plan, and anticipate; process natural language; recognize images; and understand basic concepts and related tasks.
Google is working toward something along these lines with DeepMind, though the company wants nothing to do with military applications of its technology, Dr. Roff noted, and it has publicly denounced autonomous weapons systems.
An AGI is a far cry from the notion of a superintelligence, Dr. Roff added, such as Skynet, the super-advanced malevolent artificial intelligence depicted in the Terminator movies.
“While they’re qualitatively different, the worry, of course, is that an AGI could learn to become a superintelligence very quickly … without us taking necessary precautions,” she said.
Luckily, we’re still years away from even an AGI, she added. Still, the challenge is that we must guide the development of such a system in an appropriate, ethical manner.
“We have to be very careful about how much we attribute to the system that we’re creating in terms of capabilities,” she said.
Listen to this full interview with Dr. Heather Roff on our website by logging in and clicking here. Become a subscriber and gain full access to our premium weekday interviews with leading guest experts by clicking here.