Would Artificial Intelligence Guide Education or Grab It?

Published February 15, 2018

What is artificial intelligence (AI), and should we welcome it as a major force in education and other areas of our lives? Closing out 2017, Education Week published essays written by futurists that seemed to give a big “yes” to the second part of that question.

The Encyclopedia Britannica explains that AI is, at least in part, “the ability of a digital computer or computer-controlled robot to perform tasks commonly associated with intelligent beings. The term is frequently applied to the project of developing systems endowed with the intellectual processes characteristic of humans, such as the ability to reason, discover meaning, generalize, or learn from past experience.”

That’s heavy stuff! Machines that think with us, do our thinking for us, or are even flat-out as smart as we are. Does advanced AI bode well for civilization?

One of the EdWeek essayists, Arizona State University futurist Michael Bennett, presents vignettes of a “plausible near future” in which a 5th-grader’s virtual mentor has monitored her “cognitive and emotional development since shortly after her conception.” Thus, on a snowy day, “Maestro” guides the child to a favorite storybook, “having determined the intervention will induce an optimal psychological state for the school day’s lessons.”

“As a society, we must get used to the concept of ‘technological legislation,’ the notion that widely distributed technological systems and devices often govern our lives more effectively than local, state, or federal laws,” Bennett added.

Wow! So I guess this means it’s off to the scrapheap for school boards, PTAs, representative governments, and potentially even the Bill of Rights, which has protected Americans’ individual liberties for more than two centuries. According to some, people should just trust everything, our children included, to High Tech.

Strangely, none of EdWeek’s cosmic thinkers explored the possible downsides of dominant AI. Indeed, the featured back-page author, Harvard’s Christopher Dede, argued that today’s students must continually “unlearn the old ways” while learning to use digital tools and media to prepare for “a lifetime of amplified collaborative intelligence.”

Although the EdWeek writers seemed all too willing to embrace AI’s control of our lives, it is reassuring to know that some very smart people, among them the famed British physicist Stephen Hawking, believe that developing artificial intelligence to the max could have dire consequences. Currently, we mostly have a narrow, weak form of AI designed to perform specific tasks, such as an Internet search or driving a car. However, as Hawking made clear, a strong or generalized AI conceivably could go beyond assigned tasks and outperform humans in just about every cognitive realm.

“The primitive forms of artificial intelligence we already have have proved very useful,” said Hawking in a December 2014 BBC interview. “But I think the development of full artificial intelligence could spell the end of the human race. Once humans develop artificial intelligence, it would take off on its own and re-design itself at an ever-increasing rate.”

As the Boston-based Future of Life Institute (FLI) makes clear, the goal should be to ensure AI’s impact on society is beneficial, not destructive. One of its analyses offers two examples of how AI could become dangerous: (1) by being programmed as an autonomous weapons system designed to produce mass casualties, one built to be enormously difficult for anyone to turn off once the killing starts; or (2) by being designed to do good deeds but resorting to destructive methods to achieve its goals. Example: “If a super-intelligent system is tasked with an ambitious geo-engineering project, it might wreak havoc with our ecosystem as a side effect, and view human attempts to stop it as a threat to be met.”

The potential effects of artificial intelligence on the education of our children also ought to be on our radar screens. We need a more serious analysis than Education Week’s cheerleading for computer-directed learning.

Perhaps carefully controlled AI devices might help tutor pupils who have fallen behind in reading and math. It is hard for teachers to do catch-up instruction while keeping the class as a whole on schedule, and there are rarely enough trained tutors to go around. Thus, AI devices might be useful for bringing literacy-building algorithms to bear with individual kids when teachers cannot.

On the other hand, do we really want AI agents to become mentors or social-emotional guides to our children practically from the moment of their conception? As the FLI notes, it is unlikely that even super-intelligent AI will ever “exhibit human emotions like love or hate,” or “become intentionally benevolent or malevolent.” However, when thwarted in reaching its goal, it could lurch off-track in a destructive direction. How would that work with an uncooperative 10-year-old?

A huge question is what place would exist, if any, for parental rights in this brave new world. We need to put our natural intelligence to work on such issues while we still can.

[Originally Published at American Spectator]