
10075 Red Run Boulevard
Suite 401
Owings Mills, MD 21117
(443) 379-4900

10 North Jefferson Street
Suite 200
Frederick, MD 21701
(240) 220-2415

Skynet May Not Yet Be A Threat But Complacency Is

March 27, 2024

Don Walsh

I recently presented a seminar about the legal implications of Artificial Intelligence (AI) in the workplace. Although the sophistication suggested in The Terminator movie franchise is still decades away, many employers remain oblivious to the need to protect their organizations against AI hazards that may already be directly affecting them.

Although much of the current focus is on cautionary tales for HR departments, which are the last line of defense against the release of sensitive and personally identifying information, an organization's vigilance needs to extend beyond HR. Employees at multiple levels could misuse AI in ways that cause their organizations problems.

At the simplest level, AI tools are constantly digesting all information fed into them. The more information a tool consumes, the more sophisticated its output becomes. Because information fed into these tools continues, in some fashion, to reside in the tool, employees who use them may inadvertently provide private or sensitive information that can never be extracted and may later be disclosed to or used by others. Even ChatGPT's current terms of service warn of the retention of information for future use.

For instance, employees could use AI to draft simple pricing proposals for customers; however, the pricing would continue to be retained in the tool for future users to discover. Employees who use an AI tool to create a style or brand similar to the protected copyrights or trademarks of competitors may also violate the intellectual property rights of those third parties. Even using an AI tool to draft a specific letter terminating an employee for misconduct will allow that information to continue to reside in the tool's "memory" for future searches. All of these scenarios echo the disaster Microsoft had with its chatbot "Tay," which quickly became offensive as users kept feeding more and more NSFW commentary into it.

To protect themselves, all organizations need to develop a simple policy to guide employees who seek to use AI to accomplish work tasks. Although policies should be tailored to the organizations adopting them, these standards should include the following:

  • Identify the AI tools currently permitted for use
  • Prohibit the use of private, sensitive, confidential, or proprietary information with such tools
  • Require that all vendors working with the organization who use AI tools also agree not to use the organization's private, sensitive, confidential, or proprietary information with such tools
  • Prohibit employees from using an AI platform in a way that violates any intellectual property rights
  • Require employees to fact-check any information they receive from AI tools before using it, and remind them that work generated with an AI tool becomes their work, to which all company standards regarding accuracy continue to apply
  • Make clear that employees who use AI tools are not excused from compliance with other company policies, including those against discrimination and harassment
  • Prohibit activities that may infringe upon the privacy rights of individuals
  • Require users to contact their supervisor, manager, or other appropriate individual if they become aware of a data privacy breach, an AI system failure, or any circumstance in which an AI tool is generating erroneous output that violates company policy
  • State that any breach of the policy will subject an employee to discipline, up to and including discharge

If you need assistance in drafting such a policy, feel free to reach out to any of RKW's labor and employment attorneys.

© 2024 RKW, LLC. All Rights Reserved.

