Wednesday, April 19, 2023

Industry insiders welcome regulation of artificial intelligence

In an interesting development, a group of about 75 Canadian artificial intelligence (AI) experts, researchers and industry startup CEOs has come together to call on the Canadian government to significantly expedite its planned regulation of AI development.

A government bill is in progress, known as the Artificial Intelligence and Data Act (AIDA), although it is part and parcel of the larger Bill C-27, and consultations and drafting are currently expected to take up to two years.

The industry group, which includes some influential figures and pioneers in the field of deep learning, argues that "generative" AI is developing at such a pace that some kind of regulation is needed NOW, not in two years' time. It recommends that the AIDA provisions be separated from the broader Bill C-27 and pushed through as soon as possible, preferably before the government breaks for the summer.

The draft law has already been criticized as being too vague. It is not even clear which AI systems would be affected by the proposed law, beyond a reference to "high impact" AI systems. The government's intent was to write more specific regulations AFTER the act passes into law.

The group of researchers argues that the bill, vague or not, really cannot wait another six months, and that it is essential to have at least a baseline set of legal guidelines that can then be tweaked as needed. They cite a number of possible harms from AI, including the perpetuation of biases and discrimination, misinformation and the dissemination of errors, labour market turmoil, and effects on human mental health, many of which may grow in importance (and others of which may newly arise) as development continues at the current breakneck pace.

Canada is not the only country dealing with AI issues. Germany is currently calling for tougher rules on ChatGPT over copyright concerns, and Italy has banned ChatGPT outright pending further regulation. There are also concerns over the ability of AI to produce deep-fake porn. Ultimately, all countries will need to establish some level of regulation.

It's interesting to see this level of regulatory warning from industry insiders. The more typical pattern would be for the industry to push ahead with no holds barred while socially conscious politicians and protest groups try to put on the brakes. That, if nothing else, should give us a heads-up.

UPDATE

To be clear, the Canadian AI warnings referred to above are far from the only voices of concern.

A high-profile open letter calling for a pause on AI development, with 27,000 (and counting) signatories including the likes of Elon Musk and Steve Wozniak, was released in late March by the Future of Life Institute (you can see the complete open letter here). The letter advocates a six-month moratorium to give AI companies and regulators time to formulate safeguards to protect society from the technology's potential risks.

Most recently, Geoffrey Hinton, one of the most influential figures in the field and sometimes referred to as the "Godfather of AI", left his position at Google specifically so that he could speak freely and continue to warn the world about the dangers of AI. Hinton is particularly concerned about AI's ability to flood the Internet with fake photos, videos and text, impairing people's ability to distinguish fact from fiction, as well as its potential to outsmart humans and to disrupt the labour market. He has admitted to having profound regrets about some parts of his life's work, but he stresses that he is not leaving Google in order to criticize the company, which he describes as having "acted very responsibly".

Mr. Hinton, now 75 and an emeritus professor of computer science at the University of Toronto, as well as the former leader of an AI startup acquired by Google, pioneered much of the work on neural networks that set off the current deep learning boom roughly a decade ago. If Geoffrey Hinton is worried, I am worried.

UPDATE UPDATE

Hell, even the CEO of OpenAI, the company behind ChatGPT, is calling for some sort of "global licensing and regulatory framework". We should probably listen.
