Written by León Lanis V., Paralegal
Artificial Intelligence has been in the spotlight of business and legal news this year, mainly because of the enormous (and not yet fully understood) potential of automated systems such as ChatGPT. Many of these technologies are beneficial for companies: they reduce costs, lower the risk of human error, are highly customisable and offer many more advantages we are yet to discover. Nevertheless, as with any trending technology, there are legal risks and challenges that may affect your company. In this blog we will dive into 4 key considerations you should keep in mind when implementing AI (especially publicly available engines such as ChatGPT) in your business.
INTRODUCTION – HOW DOES AI WORK?
This is a tricky question, mainly because there are many different applications and systems, and they all work differently from each other. With that said, there are some common aspects that can help us understand how AI works.
Artificial Intelligence is any software that can learn patterns from information in order to make predictions or perform a given task. This happens through a process known as “Machine Learning”, where a computer is fed a large amount of data and an algorithm helps it order, classify and understand that input. The techniques an AI system uses to learn vary from system to system. One of the most advanced approaches applied to date is the “neural network”, which is loosely modelled on how the human brain works, allowing an AI engine not only to recognise patterns in human language but also to generate human-like responses.
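To make the idea of “learning patterns from data” concrete, here is a minimal, illustrative sketch of machine learning: a single artificial neuron (a perceptron) adjusting its internal weights until its predictions match a set of examples. This is a toy illustration only, not how large systems like ChatGPT are built, but the core idea is the same.

```python
# A single artificial neuron learns the logical AND pattern from examples.
# Each training example is ((input1, input2), correct_answer).
training_data = [
    ((0, 0), 0), ((0, 1), 0), ((1, 0), 0), ((1, 1), 1),
]

weights = [0.0, 0.0]
bias = 0.0
learning_rate = 0.1

def predict(x):
    total = weights[0] * x[0] + weights[1] * x[1] + bias
    return 1 if total > 0 else 0

# Repeatedly show the examples and nudge the weights toward the right answer.
for _ in range(20):
    for x, target in training_data:
        error = target - predict(x)
        weights[0] += learning_rate * error * x[0]
        weights[1] += learning_rate * error * x[1]
        bias += learning_rate * error

print([predict(x) for x, _ in training_data])  # prints [0, 0, 0, 1]
```

After a few passes over the data, the neuron has “learned” the pattern: it answers correctly for every example without ever being given an explicit rule. Real-world engines do the same thing with billions of such adjustable weights and vast amounts of data, which is precisely why the quality and provenance of that data matters so much, as discussed below.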
1. DATA PRIVACY
On this blog, we regularly discuss privacy and how companies benefit from complying with local and global standards for protecting the company’s and users’ personal or private data. One of the biggest issues involving AI engines such as ChatGPT in day-to-day business is the lack of privacy protection from the company offering the engine. This has been a great concern, especially in Europe, where Italy’s data protection authority temporarily banned ChatGPT (developed by OpenAI) over concerns that it did not comply with the General Data Protection Regulation, or GDPR.
Using engines with poor privacy compliance not only poses a threat to your company’s secrets, but is also a significant liability towards your clients or users, who may be negatively affected by the lack of protection in this area.
In this regard, it is highly important to have a clear workplace policy: a structured set of rules detailing which engines your company approves, when your employees may use such technologies, and to what extent you are willing to share information with a trusted AI software.
2. CYBERSECURITY
Information security has been a major concern for many companies over the last decade. As we have discussed in previous blogs, there are many cyber threats that are dangerous to your business. One of the biggest is phishing: the crime of sending fake emails or messages to a person or company in order to trick the target into opening an infected link, downloading infected content or revealing valuable information. Criminals nowadays are using AI to convincingly mimic, for example, a CEO’s emails or even their voice on phone calls. It is essential, in this sense, to keep firewalls up to date to protect your company from such attacks, but also to train your staff to recognise and handle these incidents.
3. BIASED INFORMATION
As mentioned above, AI works from the data it has been fed, which makes it susceptible to absorbing misinformation or biased data and building its model on that flawed input. Many companies today allow engines such as ChatGPT to be used in day-to-day work, for example to gather information or draft reports. It is therefore important not only to have a company policy, as mentioned above, but also to fact-check the information before using it, as these engines may well have learned from bad data.
4. INTELLECTUAL PROPERTY
There is a trend in AI called “generative AI”: artificial intelligence capable of creating art, videos, voice files and computer code from just a prompt written by a user. This has revolutionised the art industry, especially with the boom of NFTs during the pandemic. The risk is that anyone can produce such creations simply by writing a prompt, thereby threatening your company’s intellectual property. For example, the luxury brand Hermès recently won a trademark case against a digital artist who created and sold “MetaBirkin” NFTs based on the company’s products for thousands of dollars. Something similar happened to Coca-Cola, when a generative AI advertising engine produced a Coca-Cola commercial from just a prompt.
Bearing that in mind, it is essential to keep your trademarks up to date and register your company’s intellectual property, because if a generative AI engine creates something related to your brand, it may mislead your clients.
CONCLUSION
While using AI may be very beneficial for your company, it is important to be mindful of the many legal risks and challenges that may negatively impact your business. A clear internal policy on this matter is essential, so that your staff and your clients understand how you are protecting the company from the challenges of this trending technology. If you need assistance minimising the legal risks of using AI in your business, get in touch with our experienced team today.