
Are you ready to write a new AI policy? 

Recent surveys by McKinsey & Company, a management consulting firm, and Littler, a law firm, found that many businesses lack policies governing employees' use of artificial intelligence (AI) in the workplace. Some organizations have policies in place, while others have policies that need revision. HR departments can mitigate the primary risks of AI, such as inaccuracy, plagiarism, and misappropriation, by developing a policy together with IT professionals and legal counsel.

Workplace Artificial Intelligence


Many businesses have yet to implement AI policies despite needing them, said Mark Girouard, an attorney with Nilan Johnson Lewis in Minneapolis. As a result, many employees are using generative AI without fully understanding the technology's limitations and risks.


Survey Results

McKinsey's survey of 1,684 participants across a variety of regions and occupations found that only 21 percent of respondents reporting AI adoption said their organizations have AI policies. The survey was conducted in April and published in August; 913 respondents said their organizations had implemented AI in at least one function. To account for differences in response rates, the data were weighted by each respondent's country's contribution to global GDP.
Littler's September survey of 399 respondents, nearly all of them from the United States (96 percent), found a higher rate of AI policy adoption (38 percent) than McKinsey's survey. Respondents to a July Littler survey said the IT, HR, and legal departments share roughly equal responsibility for administering AI-driven HR tools.

Such a policy has become more important as generative AI enters many organizations' routine operations, said Asha Palmer, senior vice president of Skillsoft, an online training company in New York City.

The McKinsey study found that companies prioritize mitigating AI risks related to inaccuracy and regulatory compliance. Girouard said these issues are likely to grow as new questions arise about the security and credibility of various AI sources, and as new legislation regulating their use takes effect.

Mitigating AI Inaccuracy

According to McKinsey's report, accuracy is businesses' most significant concern about AI, yet just 32 percent of respondents said they were taking steps to reduce inaccuracies.
Girouard said businesses can take several measures to reduce the risks of inaccurate AI output, including:

  • Determining which employees may use AI in their work, and which types of work AI may be applied to.

  • Increasing the resources and diligence employers devote to fact-checking, particularly for material that will be made public.

"Today's AI tools are notorious people pleasers, with tendencies to look for, if not outright hallucinate, answers and data that fit with what they believe people want to see," according to Girouard. "It's best for information that was generated and informed by AI to be preliminarily flagged as such, and to task humans to fact-check AI-derived data before disseminating to broader audiences."


Watch for Plagiarism and Misappropriation

AI plagiarism is another major concern, according to McKinsey. Girouard advised employers to update and expand the plagiarism-related language in their acceptable-use policies or corporate codes of conduct and ethics.

He noted that because AI is new, an AI policy should not only state what is and isn't appropriate but also educate staff about the technology and its risks.
Intellectual property counsel reviewing the AI policy "can help with developing language to reduce the risk that employees will infringe on other parties' intellectual property or divulge their own organization's intellectual property in their AI interactions," Girouard added.

"What you give generative AI, you can't get back," Palmer suggested in an AI strategy to prevent IP theft. Indeed, it can distribute it to anyone without your consent. It belongs to them. Don't give generative AI anything you don't own. Do not offer it company, customer, or proprietary information.

Avoid Other Dangers

An AI policy should also address bias resulting from AI and potential privacy infringements, Palmer said.

New York City, for example, requires employers to conduct an annual independent audit of AI bias on the technology platforms they use to make hiring or promotion decisions, and to publicly disclose the audit results on their websites.

Girouard noted that disclosing certain company information to AI tools may compromise its confidentiality or violate legal restrictions, since the data may be designated as confidential or subject to consent and usage limitations.

Maintain a Current Policy Framework


Policies must not only be drafted but also kept up to date. Because generative AI is constantly evolving, Palmer said, Skillsoft has included the following statement in its AI policy: "We're committed to maintaining a responsible, sustainable GenAI policy for our team that is up-to-date, adaptable, and clearly defines our ongoing expectations for the technology."
An AI policy should not be limited to a particular AI technology. It "should encompass technologies more broadly—that is, generative and other AI-informed tools—and not be limited to specific tools," Girouard said. For example, he noted that a policy covering only ChatGPT is already too narrow and outdated, given the growing number of AI tools.
HR should also obtain written or other verified acknowledgment from employees that they have received the updated generative AI policy, said Elizabeth Shirley, an attorney with Burr & Forman in Birmingham, Alabama.

"It is important to have a generative AI policy because without one, employees may presume that they are free to use generative AI for whatever purposes they see fit and with whatever company information they have access to," according to her. "This causes great risks to the quality of work product, as well as to the confidentiality of company and personal information."

Source: SHRM
