UK: Consultation on copyright and patents legislation for AI

The UK’s Intellectual Property Office (the “UKIPO”) has launched a consultation on how the copyright and patent system should deal with artificial intelligence (“AI”).

The aim of the consultation is to determine the right incentives for AI development and innovation while continuing to promote human creativity and innovation. In particular, the consultation looks in detail at three areas, which arose out of the Call for Views on AI and Intellectual Property (see here):

  1. Copyright protection for computer-generated works which do not have a human creator. In the United Kingdom, such works are presently protected for 50 years. Should they be protected at all and, if so, how? Read our previous post on whether an AI-Generated Work may be used to make a Copyright Claim.
  2. Licensing or exceptions to copyright for text and data mining, which is often significant in AI use and development.
  3. Patent protection for inventions devised by AI. Should they be protected and, if so, how?

Without question, AI is a transformational technology that has the potential to have a massive influence on human existence, and is already doing so in some areas. We may be decades away from a fully autonomous AI, but legislators are already grappling with the complex issues that arise as a result (see our article on the World Intellectual Property Organization’s AI Issue Paper for more information).

For the time being, there are more questions than answers, and any proposed answers frequently raise new ones: who is accountable for the actions of an autonomous AI? Who is the creator of the work it produces, and who owns it?

Across the world, governments are trying to establish how to allow for safe and effective testing, development and implementation of AI without hampering its advancement. The UK government in particular is jockeying for position as the leading jurisdiction for AI development, as evidenced in its National AI Strategy (the “Strategy”). The Strategy sets out the government’s ten-year plan to make the UK a “global AI superpower”, built around three broad objectives:

  • Invest and plan for the long-term needs of the AI ecosystem to continue UK “leadership as a science and AI superpower”;
  • Support the transition to an AI-enabled economy, capturing the benefits of innovation in the UK, and ensuring AI benefits all sectors and regions;
  • Ensure the UK gets the national and international governance of AI technologies right to encourage innovation, investment, and protect the public and our fundamental values.

By offering clear regulations, applicable ethical standards, and a pro-innovation regulatory environment, the UK hopes to establish itself as the ideal place to live and work with AI. The government’s AI Council has played a key role in gathering information to guide the Strategy’s development, notably through a roadmap released at the start of the year that presented a series of recommendations based on input from the UK’s AI community.

However, the current AI framework (or lack thereof) has been characterised as a wild west of testing and creation with no oversight, especially when compared to the legislation governing the pharmaceutical industry. Some argue that, given the potential harm AI could cause, it should be regulated similarly to medicines, with closely monitored trials, licences and authorisations, and appropriate safeguards in place. Others argue that such strict controls would discourage innovation and delay growth, and that they are unnecessary (and, incidentally, unachievable in an internet-connected society). Certainly, the House of Lords’ view is that “blanket AI-specific regulation, at this stage, would be inappropriate”, but it remains to be seen how the UK intends to regulate such a far-reaching and unpredictable technology.

Importantly, the Strategy acknowledges that no single definition of AI is suitable for every scenario, but recommends the following definition: “Machines that perform tasks normally performed by human intelligence, especially when the machines learn from data how to do those tasks.” This could indicate how future UK legislation may define AI. By contrast, the National Security and Investment Act 2021 defines AI more narrowly (for the purposes of foreign direct investment analysis) as “technology enabling the programming or training of a device or software to—

  • (i) perceive environments through the use of data;
  • (ii) interpret data using automated processing designed to approximate cognitive abilities;
  • (iii) make recommendations, predictions or decisions.”

There have been some intriguing developments in the AI legal sphere over the last few months. Dr Stephen Thaler’s AI system, DABUS (which stands for “device for the autonomous bootstrapping of unified sentience”), was named as the inventor of a patent at the South African and Australian patent offices, marking the first successes in Dr Thaler’s long-running inventorship battle (his applications having previously failed at the UK, European and US patent offices).
