A recent study found that AI can exploit vulnerabilities in human habits to influence our behavior.
Before you freak out about having your mind controlled by an evil AI supercomputer, you’ll be glad to hear that this study only tested the AI system’s ability to influence human participants in very limited, game-like settings.
This study was run by CSIRO’s Data61, the digital arm of Australia’s national science agency, and found that an AI system was able to influence participant behavior in a variety of game scenarios.
Although the article does touch on the importance of data privacy and regulation, it noticeably avoids discussing the negative applications of this technology (perhaps unsurprisingly, given that the article’s author is Jon Whittle, the director of CSIRO’s Data61). The author notes some positive applications of such a system, like training someone to develop healthier eating habits, but a technology that can influence someone to eat an apple instead of a chocolate bar could just as easily influence someone to eat a chocolate bar instead of an apple.
The article even mentions influencing policy and public opinion as potential future applications. Here at RAISO, we are wondering at what point this influence actually becomes manipulation, and how we should classify whether a given application of this technology is good, bad, or somewhere in between.