We are witnessing the rapid development of Artificial Intelligence (AI) worldwide. The pace of development is so relentless that, in due time, many professions could be completely replaced by machines. Experts at the World Economic Forum (2020) and McKinsey (2021) forecast that the people most likely to be replaced at work by machines are those working as cashiers, drivers, translators, journalists, and even bookkeepers.
We are already well aware that ChatGPT, created by the company OpenAI, has taken the world by storm. Recently, it has been shown that it can even pass some of what are considered the most difficult exams in the United States – the US Medical Licensing Exam, the Wharton MBA exam and the Bar. Many AI programs are already used to rewrite texts, generate headlines and perform similar tasks. However, in order to avoid the Matrix or Terminator scenario (where the machines that were, ironically, created by humans took over the world while humans fought for survival), a legal framework must be set out.
What are the legal implications that revolve around AI?
AI is machine (computer) intelligence created by humans in controlled surroundings, capable of mimicking the cognitive functions of the human brain. Its potential, however, is far greater, because it exists in the digital world and can gather information from all corners of the internet. Enormous volumes of data can be processed with unfathomable speed, so an AI-powered machine or program can learn and solve problems far faster than the average human.
Though it has tremendous potential, AI can be a source of many problems. From a legal perspective, AI-related issues are currently most prominent in the fields of Intellectual Property, Data Protection and Privacy, as well as Security. This is why countries are working “around the clock” to tackle these problems, since there is a huge gap between reality and the current legal framework. The USA was among the first movers, adopting Executive Order 13859 in 2019, which contains 10 general principles for the development of AI (among them public trust in AI, public participation, transparency and disclosure, scientific integrity and quality of information).
The Republic of Serbia followed in these footsteps by drafting Ethical Guidelines for the Development, Implementation and Use of Reliable and Responsible AI at the end of 2022. The final version awaits adoption, but the principles are, in essence, the same as those of the US. The Republic of Serbia had, moreover, already created a National Platform for AI the year before, in 2021, laying the groundwork for more focused development of AI.
It is fair to mention that guidelines and principles are not binding, but they put things into perspective and create a solid basis for the adoption of laws and regulations in this area – and those are binding.
Who gets the credit for the AI generated work?
An interesting dilemma has arisen from the current worldwide situation: can AI be an author, designer or inventor, despite lacking human personhood and moral agency? For now, it seems that it cannot. A couple of cases give us a better perspective.
First is the DABUS case: two patent offices, the European Patent Office (EPO) and the UK Intellectual Property Office (UKIPO), declined two patent applications in which DABUS, an AI program, was named as the inventor. The creator of the program, Dr. Stephen Thaler, however, did nothing in terms of creating the patentable discoveries (a plastic food container and a warning light), because he did not have the knowledge to create them. In fact, he only fed DABUS data on various subjects and let the program learn and do everything by itself. Both the EPO and the UKIPO acknowledged that the inventions themselves could satisfy the applicable legal standards, but both declined the applications because the regulations demand that the named inventor be a natural person. Finally, Thaler himself could not be accepted as the inventor, since he did not engage in the creation of the inventions, and so the inventions were left unprotected.
The second case is more recent – Getty Images v. Stability AI. The former company sued the latter for unlawfully copying and processing “millions of images protected by copyright” to train its software. The software, based on a textual command and/or description, would create a “new” image by drawing on its training on publicly available images from Getty Images, combining all of that experience into one “original” image (after all, the program was trained to act in such a way). The problem is that it did so without the consent of the authors. Further details of the case are yet to be uncovered. So far, the public is aware that lawsuits were filed before both UK and US courts.
These cases show the great perils AI carries for virtually all aspects of Intellectual Property (IP). As we can see, AI can infringe IP rights or create uncertainty about their protection. For example, under patent regulations an inventor must be human – whether as a standalone individual, an employee of a company or a subcontractor of a parent company. All of these are legal categories, and AI does not fit into any of them. Furthermore, AI cannot be a contracting party, which means it cannot sue or be sued, grant licenses and so on.
In the world of IP, suggestions are already emerging. Most revolve around naming the AI as the inventor, while treating the creator of the AI program as the applicant for and owner of the resulting inventions, designs, copyrights, etc., since the creator brought the AI to life and enabled it to perform the task it was designed for.
What are the collected data used for?
AI shows great promise (and risk) in other areas as well, such as Security and Data Protection, namely biometrics. Recently, a project was presented to venture capital funds in which an AI-generated “person” acts as the job interviewer, essentially as an HR officer. It evaluates candidates based on their command of language, facial expressions, clarity of speech, tone and overall body language. It uses all of the available data to recommend the right candidate to the company, thus greatly reducing the time and cost for a company seeking talent. After a select few are shortlisted by the AI program, the company continues the selection process and has the final say.
This project has both benefits and risks. The program could be abused to gather information about people without their permission, monitor them, and analyze their faces and voices – a clear violation of privacy. Additionally, the existing guidelines in this domain, apart from being non-binding, only regulate a certain type of AI that can collect data and create new content, like ChatGPT (so-called Weak AI, or Generative AI). A more powerful type of AI that could think like a human and improve itself beyond human abilities (Strong AI) falls outside the scope of the existing guidelines.
Bearing in mind all of the above, lawmakers have a Herculean task ahead: to regulate the whole of AI in an objective and thorough way – a subject matter that is still developing rapidly and advancing faster than the law itself. This task requires outside-the-box thinking and user-friendly, consumer-oriented solutions, rather than attempts by a country and its government to assert dominance over the individuals and companies under its jurisdiction. A long-term solution, constructed with great care and quality, can benefit all parties.
Authors:
Miloš Vučković, Senior & Managing Partner
Aleksandar Čermelj, Associate
*The information in this document does not represent legal advice and is provided for general informational purposes only.
**Partner, Senior Associate, Associate and/or Junior Associate refers to Independent Attorney at Law in cooperation with IVVK Lawyers.
16/06/2023