A recent article appeared on the Forbes website, “Researchers Attempt to Predict and Prevent Suicide Using Deep Learning and Math”, discussing a team of scientists working on an algorithm to predict suicidal patterns and prevent suicides. One of the scientists describes the project goal: “if our algorithm can identify and stop just one or two, we will feel really good about that.”
On the one hand, this looks like a good idea that could help save lives. On the other hand, it brings to mind the nightmares of Philip K. Dick’s (PKD) scifi short stories. PKD is probably my favorite scifi author, and his stories and novels contain as many warnings against future nightmares as they do predictions of future technologies.
In the PKD story “The Minority Report” (PKDR/PKD4), murders and violent crimes are prevented through a combination of precognition abilities and computer algorithms that produce three reports. If two of the three reports agree there will be a murder, the computer produces a card with the name of the would-be murderer, who is then sought out and apprehended by the pre-crime department. However, the existence of the minority report indicates the future crime accepted for prevention may not show the whole picture. In fact, the story’s narration raises the possibility of innocent persons being imprisoned for crimes that were never going to be committed.
While pre-crime exists in a fictional setting, the overlooked minority reports, which might clear someone of a future crime, raise the question of whether this suicide algorithm’s identification of suicidal patterns might result in a similar problem: non-suicidal persons being institutionalized. The behaviors might merely match a list programmed into a computer and not indicate an actual future suicide. Some of us with depression would definitely want assurances that something was in place to prevent this result.
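To make that worry concrete, here is a quick back-of-the-envelope calculation. The base rate, sensitivity, and specificity below are all invented numbers, since the Forbes article reports none, but they show how even an accurate screen for a rare outcome flags mostly false positives:

```python
# Back-of-the-envelope math with invented numbers (the article gives none):
# a rare outcome plus an imperfect screen means mostly false alarms.
base_rate = 0.001    # assumed: 1 in 1,000 screened patients is truly at risk
sensitivity = 0.95   # assumed: fraction of at-risk patients correctly flagged
specificity = 0.95   # assumed: fraction of not-at-risk patients correctly cleared

true_positives = base_rate * sensitivity
false_positives = (1 - base_rate) * (1 - specificity)

# Positive predictive value: the chance a flagged person is actually at risk.
ppv = true_positives / (true_positives + false_positives)
print(f"Chance a flagged patient is truly at risk: {ppv:.1%}")  # about 1.9%
```

Under those assumptions, roughly 98 of every 100 people the algorithm flags would be its own overlooked minority reports.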
The Forbes article also explains that project development involved “student[-]developed algorithms to do statistical analysis … to look for key factors related to suicide risks and apply deep learning methods to these large and complex datasets”. When deep learning methods are discussed in connection with the algorithms, questions of artificial intelligence (AI) immediately arise.
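The article doesn’t describe the team’s actual model, so what follows is only a rough sketch of what a deep-learning risk classifier of this general kind might look like; the features, labels, class weighting, and alert threshold are all assumptions invented for illustration:

```python
# Hypothetical sketch only: the article gives no model details, so this is an
# illustrative binary risk classifier trained on made-up data.
import torch
import torch.nn as nn

torch.manual_seed(0)

# Fake dataset: 1,000 patient records with 10 numeric features each (invented),
# and a rare positive class (~2%) to mirror how uncommon the outcome is.
X = torch.randn(1000, 10)
y = (torch.rand(1000) < 0.02).float()

model = nn.Sequential(
    nn.Linear(10, 32),
    nn.ReLU(),
    nn.Linear(32, 1),
)

# Weight the rare positive class so the model can't just predict "no risk".
loss_fn = nn.BCEWithLogitsLoss(pos_weight=torch.tensor(49.0))
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)

for _ in range(100):
    optimizer.zero_grad()
    loss = loss_fn(model(X).squeeze(1), y)
    loss.backward()
    optimizer.step()

# Scores above some threshold would flag a record for human review; the choice
# of threshold is exactly where the false-positive problem above creeps in.
risk_scores = torch.sigmoid(model(X).squeeze(1))
print(f"Records flagged at a 0.5 threshold: {(risk_scores > 0.5).sum().item()}")
```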
In “The Defenders” (PKD1) and “Second Variety” (PKD3), PKD offers two differing views of what it could look like when an AI determines humans are their own worst threat.
In a more benign take on AI, “The Defenders” features robots designed to fight a war between the U.S. and Russia. When humans leave the surface due to the increasing lethality of the weapons of war they’ve created, the robots determine that the two groups of humans are the bigger threat to the earth and that their war doesn’t even make sense. Instead, the robots dupe humanity into believing the war is going on by building and destroying model replicas of human cities, while cleaning and preserving the actual untouched cities. The robots’ analysis of human behavior determined that they should keep the humans underground until, in a generation or so, their societies move on from warfare to a focus on survival.
In “Second Variety”, however, PKD presents the nightmarish concept of robots designed to fight a war who have determined human life is a bigger threat than the robots on either side. These robots are designed to reproduce and to use learning processes to create ever more deceptive and lethal versions of themselves to outwit the other side. Further, the two sides of the robot war eventually focus on making those deceptions more effective against the robots on the other side. The war continues to be fought by the robot combatants, with human soldiers largely forgotten, except for being seen as a threat by the robots on both sides.
To clarify, I do not think the algorithm will use deep learning methods to gather humans into camps and exterminate them Terminator-franchise-style.
However, we must bear in mind that any technology that involves computerized systems learning on their own, making decisions on their own, and enforcing objectives has the potential to become a serious long-term threat unless safety precautions are put in place to prevent the machines from deciding that non-machines are the threat.
Sources
Dick, Philip K. The Collected Stories of Philip K. Dick, Vol. I. 5 vols. New York: Citadel Twilight, 1990. (PKD1)
—. The Collected Stories of Philip K. Dick, Vol. II. 5 vols. New York: Citadel Twilight, 1995. (PKD2)
—. The Collected Stories of Philip K. Dick, Vol. III. 5 vols. New York: Citadel Press, 2002. (PKD3)
—. The Collected Stories of Philip K. Dick, Vol. IV. 5 vols. New York: Citadel Twilight, 1991. (PKD4)
—. The Collected Stories of Philip K. Dick, Vol. V. 5 vols. New York: Citadel Press, 1992. (PKD5)
—. The Philip K. Dick Reader. New York: Citadel Press, 1997. (PKDR)