Regulators in both the EU and the UK are finally planning to act on growing concerns around the use of artificial intelligence, although they appear divided on exactly how to tackle the rapidly spreading technology.
This week, the UK Information Commissioner's Office (ICO) joined forces with the Alan Turing Institute to open a consultation on draft guidance the two organisations have drawn up for organisations developing AI decision-making systems.
In an article to support the launch of the consultation, ICO executive director for technology policy and innovation Simon McDougall said: "What do we really understand about how decisions are made about us using artificial intelligence? The potential for AI is huge, but its implementation is often complex, which makes it difficult for people to understand how it works. And when people don't understand a technology, it can lead to doubt, uncertainty and mistrust."
He cited the ICO's own research, which found that more than half of consumers are concerned about machines making complex automated decisions about them.
The regulator insists the four principles of its draft guidance (be transparent, be accountable, consider context and reflect on impacts) are already rooted in the GDPR. The consultation runs until 24 January 2020.
The World Economic Forum recently confirmed its intention to develop global rules for AI and create an AI Council that will aim to find common ground on policy between nations on the potential of AI and other emerging technologies.
Meanwhile, the EU has published seven guidelines for the development and implementation of AI ethics as part of its AI strategy, which is targeting investment of €20bn in the technology annually over the next decade.
However, the new European Commission president, Ursula von der Leyen, is planning to take things much further and has pledged to introduce GDPR-style legislation to regulate AI during her first 100 days in charge.
In a speech to the European Parliament, von der Leyen said: "We must have mastery and ownership of key technologies in Europe. These include quantum computing, artificial intelligence, blockchain and critical chip technologies.
"With GDPR, we set the pattern for the world. We have to do the same with artificial intelligence. Because in Europe we start with the human being. It is not about damming up the flow of data."
Even so, Osborne Clarke's head of artificial intelligence and machine learning John Buyers believes designing effective regulation of AI would be highly challenging, claiming that GDPR was an "uneasy bedfellow" with AI and generates significant compliance issues.
"The challenges with AI - specifically deep learning - are its breadth of application across every walk of life, and the fact that it is not always transparent or predictable," he said. "Beyond high-level ethical principles, one-size-fits-all regulation sounds attractive but the complexity of different sectors will be difficult, if not impossible, to boil down into a single overarching law."
In a recent blog post, AntWorks co-founder and CEO Asheesh Mehra called for cross-industry collaboration. He added: "In order to properly regulate, governments and policymakers will need to work closely with professional bodies from each industry, who can advise the decision makers on policy and regulation, and on best practice with regard to what the technology is needed for, how they'll make it work, and how it may impact their workforce.
"The only way we can realistically see AI and automation take the world of business by storm is if it is smartly regulated. This begins with incentivising further advancements and innovation to the tech, which means regulating applications rather than the tech itself."