Are AI and data technology outpacing the law?


Challenges facing lawmakers at the interface of robotics, cybersecurity, and humans. © Blue Planet Studio/Shutterstock.com

Advances in AI over the past few decades have made this an exciting time for technology. Yet according to Keio University's Fumio Shimpo, AI's rapid progress in recent years may be leaving law and policy behind in several areas, including infringements of the right to privacy, data protection, criminal liability, information security, and consumer protection. In his recent paper in the Global Privacy Law Review, Shimpo also highlights the importance of robust international agreements to enable the smooth transfer of the kind of data that today's AI-equipped technology can amass.

AI technologies have followed a "boom and bust" trajectory ever since they first moved from the realm of myth to the real-world programming of the 1950s. As the limitations of algorithms running on fixed rules became increasingly apparent, R&D funding in the sector dwindled. The Internet and its potential as a repository for expert knowledge from around the world refuelled optimism for the capabilities of AI in "knowledge-based systems" and "knowledge engineering," but these hopes too hit a wall in the late 1980s. One of the key distinguishing features of the "Third AI Boom" that followed is the technology's aptitude for "machine learning" and the sheer quantity of data it can access. "Now AI is being used in devices without us even being aware of the fact," Shimpo explains in the paper. "Thus, it is clear that there is a very real personal data-handling confidentiality threat where AI is gaining 'insights' into our everyday lives without us knowing that this is happening."

Data protection is not a new concern: Japan passed its Personal Information Protection Act back in 2003. However, Shimpo argues that even with the revisions made in 2015, the act still cannot definitively establish which side of the law mega IT companies fall on when they make use of the vast quantities of personal information that people generate when, for instance, operating their smartphones.

The judgements made by AI are another concern. AI decision paths may be prone to discriminatory or criminal output, which calls for informed countermeasures. Consumer choice may also be compromised as the suggestions made by an online "chatbot"-type service begin to stray further down the transaction channel. A blind spot is also opening up around possible breaches of safeguards for moral safety. Shimpo points out that for years, progress in robotics has proceeded on the premise of an agreed "absolute safety standard." With AI at the controls, however, such standards may not be able to prevent a drone from, for example, invading private airspace or engaging in voyeuristic or terrorist activity.

For over a decade now, Shimpo has been insisting that we discuss these issues in the context of a whole new legal area, which he terms "Robot Law." Yet even with the development of such laws, he notes that a jurisdiction's AI and data protection policies may be undermined without some kind of agreed international framework governing the transfer of data across borders. Efforts in this direction are already underway. At the G20 Osaka summit in June 2019, the Japanese government presented the concept of Data Free Flow with Trust (DFFT), which would open paths toward the free and trusted flow of data. The initiative was primarily intended to promote international rule-making, which may in turn lead to responsible AI data collection and analysis. Japan and the EU also successfully established a framework for the mutual transfer of data in January 2019.

Ultimately, Shimpo concludes, the task of bringing law and policy up to speed with AI and data technology calls for addressing specific issues without exaggerating or underplaying the potential hazards that lie ahead. "We need to bask in the warmth of the light of AI innovation by ensuring that we have been able to mitigate any of its chilling effects."

Published online 30 November 2020


About the researcher


Fumio Shimpo ― Professor

Faculty of Policy Management

Fumio Shimpo's areas of academic expertise are Constitutional Law, Cyber-Law, and Robot Law. He is the Commissioner for International Academic Exchange at the Personal Information Protection Commission of Japan, and he also served as the Vice-Chair of the OECD Working Party on Security and Privacy in the Digital Economy (SPDE). He is also the Executive Director of the Japanese Constitutional Law Society, Director of the Japan Society of Information and Communication Research, Director of the Law and Computer Society, and Senior Research Fellow at the Institute for Information and Communications Policy of the Ministry of Internal Affairs and Communications.


Reference

  1. Fumio Shimpo. The Importance of 'Smooth' Data Usage and the Protection of Privacy in the Age of AI, IoT and Autonomous Robots. Global Privacy Law Review, Volume 1, Issue 1, 2020, pp. 49-54.

Related research

Fumio Shimpo. The Principal Japanese AI and Robot Law, Strategy and Research toward Establishing Basic Principles. Journal of Law and Information System, Vol. 3, 2018, pp. 44-65.