When the ChatGPT (Chat Generative Pre-trained Transformer) application was made available to the general public in November 2022, it created a media storm in the field of artificial intelligence and generated a veritable craze. ChatGPT made visible a dynamic of artificial intelligence research that has been developing for several years, the consequences of which have been predicted and documented. It is now clear that generative AI tools will have a major impact on public administrations and businesses across a wide range of sectors, with systemic effects on their productivity and performance.
This memo is also available in French.
This is why Cigref has launched a Task Force, led by Baladji Soussilane, Vice-President Digital & IT of the Air Liquide Group, and facilitated by Marine de Sury, Cigref’s Mission Director, bringing together more than forty of its members to exchange practices and pool experiences. The purpose of this memo is to list the various recommendations put forward by Cigref so that its members, and potentially other organisations, companies, government departments, academies or associations, can tailor them to their own context and challenges.
Companies are reacting in different ways to the use of generative AI tools. Some prefer, at this stage, to prohibit any internal use of company data or the opening of accounts with employees' professional email addresses, in order to prevent the exfiltration of strategic or sensitive data. Others, by contrast, are seizing the opportunity to build an appetite for these new technologies and generate business opportunities. To do so, they provide private generative AI tools, either in SaaS mode or hosted internally, set out guidelines indicating what can and cannot be done (for example, prohibiting the use of company documents on public tools), and put in place a "control tower" to regulate usage.
Some players are offering services that ride the "generative AI" wave, requiring documents to be uploaded for analysis while offering no guarantee as to how the information they contain will be used. The same applies to ChatGPT-type tools when they train LLMs (Large Language Models) on conversations or corpora of company data.
Whatever their position, all organisations are unanimous in saying that the biggest risk is to miss out on, or fall behind, the transformation brought about by generative AI. Risks and security need to be managed in parallel, not as a prerequisite to thinking about opportunities.
The first part of this memo lists the recommendations for use and the good practices concerning generative AI systems that it is important to share internally. Generative AI is already shifting the boundaries of productivity and creativity, and therefore offers real opportunities to be seized. However, these technologies also present risks that need to be identified in order to better guard against them. This is the subject of the second part of this document.