Artificial intelligence seems to be nearly everywhere these
days, yet most people have little understanding of AI technology, its
capabilities and its limitations.
Despite evocative names like “artificial intelligence,” “machine
learning” and “neural networks,” such technologies have little to do with human
thought or intelligence. Rather, they are alternative ways of programming
computers, using vast amounts of data to train them to perform a task. The
power of these methods is that they are increasingly proving useful for tasks
that have been challenging for conventional software development approaches.
The commercial use of AI had a bit of a false start nearly a
quarter century ago, when a system developed by IBM called Deep Blue beat chess
grandmaster Garry Kasparov. That generation of AI technology did not prove
general enough to solve many real-world problems, and thus did not lead to
major changes in how computer systems are programmed.
Since then, there have been substantial technical advances
in AI, particularly in the area known as machine learning, which brought AI out
of the research lab and into commercial products and services. Vast increases
in computing power and the massive amounts of data that are being gathered
today compared to 25 years ago also have been vital to the practical
applicability of AI technologies.
Today, AI technology has made its way into a host of
products, from search engines like Google, to voice assistants like Amazon
Alexa, to facial recognition in smartphones and social media, to a range of
“smart” consumer devices and home appliances. AI also is increasingly part of
automobile safety systems, with fully autonomous cars and trucks on the
horizon.
Because of recent improvements in machine learning and
neural networks, computing systems can now be trained to solve challenging
tasks, usually based on data from humans performing the task. This training
generally involves not only large amounts of data but also people with
substantial expertise in software development and machine learning. While
neural networks were first developed in the 1950s, they have only been of
practical utility for the past few years.
But how does machine learning work? Neural networks are
motivated by neurons in humans and other animals, but do not function like
biological neurons. Rather, neural networks are collections of connected,
simple calculators, taking only loose inspiration from true neurons and the
connections between them.
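The "simple calculator" idea can be made concrete with a few lines of code. The sketch below is purely illustrative (the function name and numbers are invented for this example, not taken from any library): each artificial neuron just computes a weighted sum of its inputs and passes the result through a simple nonlinearity.

```python
# A minimal sketch of one artificial "neuron": a weighted sum of its
# inputs passed through a simple nonlinearity. Names and values here
# are illustrative only.

def neuron(inputs, weights, bias):
    # Weighted sum of the inputs, plus a bias term.
    total = sum(x * w for x, w in zip(inputs, weights)) + bias
    # A common nonlinearity (ReLU): pass the sum through if positive, else 0.
    return max(0.0, total)

# Example: two inputs with fixed weights.
print(neuron([1.0, 2.0], [0.5, -0.25], 0.1))  # 0.5 - 0.5 + 0.1 -> 0.1
```

A neural network is many of these units wired together, with the output of one layer's neurons serving as the input to the next.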
The biggest recent progress in machine learning has been in
so-called deep learning, where a neural network is arranged into multiple
“layers” between an input, such as the pixels in a digital image, and an
output, such as the identification of a person’s face in that image. Such a network
is trained by exposing it to large numbers of inputs (e.g. images in the case
of face recognition) and corresponding outputs (e.g. identification of people
in those images).
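What "training" means can be illustrated with a toy example. Real deep-learning systems adjust millions of weights across many layers; the hedged sketch below fits a single weight to a made-up rule (y = 3x) using plain gradient descent, but the principle is the same: repeatedly nudge the parameters so the outputs move closer to the desired outputs.

```python
# Toy illustration of training: adjust a parameter so the model's
# outputs approach the desired outputs. The data and the rule y = 3x
# are invented for this example.

data = [(1.0, 3.0), (2.0, 6.0), (3.0, 9.0)]  # (input, desired output) pairs
w = 0.0    # the single weight to be learned
lr = 0.01  # learning rate: how large each adjustment is

for _ in range(1000):
    for x, y in data:
        pred = w * x         # the model's current guess
        error = pred - y     # how far off the guess is
        w -= lr * error * x  # nudge w to reduce the squared error

print(round(w, 3))  # approaches 3.0
```

Deep learning scales this idea up: the same nudge-to-reduce-error step is applied to every weight in every layer, using the labeled examples (such as images paired with the identities of the people in them) as the training data.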
To understand the potential societal and economic impacts of
AI, it is instructive to look back at the industrial revolution. Steam power
drove industrialization for most of the nineteenth century; the advent of
electric power in the twentieth century then brought further tremendous
advances. Similarly, we are now entering an age where AI technology will be a
major new force in the digital revolution.
AI will not replace software, as electricity did not replace
steam. Steam turbines still generate most electricity today, and conventional
software is an integral part of AI systems. However, AI will make it easier to
solve more complex tasks, which have proven challenging to address solely with
conventional software techniques.
While both conventional software development and AI methods
require a precise definition of the task to be solved, conventional software
development requires that the solution be explicitly expressed in computer code
by software developers. In contrast, solutions with AI technology can be found
automatically, or semi-automatically, greatly expanding the range and
difficulty of tasks that can be addressed.
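This contrast can be shown in miniature. In the hedged sketch below (an invented classification task, not any specific AI method), the conventional approach hard-codes the rule, while the learning approach derives the rule automatically from labeled examples.

```python
# Conventional programming vs. learning, in miniature. The task and
# the data are invented for illustration.

# Conventional approach: a developer writes the rule explicitly.
def is_hot_rule(temp):
    return temp > 25.0  # threshold chosen and coded by hand

# Learning approach: the rule is found automatically from examples.
examples = [(10, False), (18, False), (22, False), (28, True), (31, True)]

def learn_threshold(examples):
    # Take the midpoint between the warmest "cold" and coolest "hot" example.
    warmest_cold = max(t for t, hot in examples if not hot)
    coolest_hot = min(t for t, hot in examples if hot)
    return (warmest_cold + coolest_hot) / 2

print(learn_threshold(examples))  # 25.0 for these examples
```

The learned solution adapts automatically when the examples change; the hand-written rule must be edited by a developer. That, in a nutshell, is why learning expands the range of tasks that can be addressed.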
Despite the massive potential of AI systems, they are still
far from solving many kinds of tasks that people are good at, like tasks
involving hand-eye coordination or manual dexterity; most skilled trades,
crafts and artisanship remain well beyond the capabilities of AI systems. The
same is true for tasks that are not well-defined, and that require creativity,
innovation, inventiveness, compassion or empathy. However, repetitive tasks
involving mental labor stand to be automated, much as repetitive tasks
involving manual labor have been for generations.
The relationship between new technologies and jobs is
complex, with new technologies enabling better-quality products and services at
more affordable prices, but also increasing efficiency, which can lead to
reduction in jobs. New technologies are arguably good for society overall
because they can broadly raise living standards; however, when they lead to job
loss, they can threaten not only individual livelihoods but also workers'
sense of identity.
An interesting example is the introduction of ATMs in the
1970s, which transformed banking from an industry with highly limited customer
access to one that operated 24/7. At the same time, levels of teller employment
in the U.S. remained stable for decades. The employment effects of AI-driven
automation are likely to be particularly complex, because AI holds the
potential to automate roles that are themselves more complex than those
affected by previous technologies.
We are in the early days of a major technology revolution and have yet to see
the full possibilities of AI; we will also need to address its possible
disruptive effects on employment and on the sense of identity of workers in
certain jobs.
Source: TechCrunch
