8-Minute AI: Crafting an Acceptable Use Policy for Artificial Intelligence

As artificial intelligence continues to transform industries, it is crucial for organizations to establish comprehensive AI policies to govern ethical and effective use. AI use policies serve not only to control AI's rapid evolution, but also to channel that evolution in a way that accelerates its utility by suppressing its risks.

Consider car brakes, for instance. Their function is to reduce speed, yet they are what make high-speed travel possible: because drivers know they can slow down when needed, they are free to go fast. Well-crafted AI policies can be thought of in the same light: their goal is not to restrict advancement, but to maximize it within a reasonable set of justified rules. These policies will need to be revised repeatedly as AI spreads into more aspects of daily life, including how organizations govern their processes and data.

A well-structured AI Acceptable Use policy not only ensures compliance with company-defined rules, but also fosters trust among the internal and external stakeholders whose data the policy is designed to protect. Here, we outline key components of a comprehensive AI Acceptable Use policy, including the need to 1) classify the types of data in question; 2) categorize the AI tools and services that employees may encounter; and 3) define what it means to use those tools responsibly.

1. Classifying Data Types

Whether data belongs to an organization or to its customers, it is essential to make clear the distinction between Sensitive and Non-Sensitive data. In doing so, an organization defines for employees how they may use AI securely and effectively, both as individuals and on behalf of the company as a whole.

Data can be grouped into two major categories: Sensitive Data and Non-Sensitive Data. Sensitive Data may include any of the following:

  • Data indicating an association between an organization and its clients
  • Data specific to an organization’s personnel, internal operations, practices, or products/services
  • Data specific to client personnel, customers, operations, practices, or products/services
  • Financial Data
  • Personally Identifiable Information (PII)
  • Intellectual Property
Sensitive Data can be grouped into finer categories, each reflecting the level of impact to stakeholders should that data be compromised (a minimal sketch of encoding these tiers follows the list):
  • Highly-Sensitive Data
    • Loss or compromise would cause significant damage if made public
    • May impact the organization, its employees, or its customers
    • Examples:
      • Client financial information
      • Personally identifiable information (PII) such as SSNs, driver’s license numbers, credit card numbers, bank account numbers, etc.
  • Moderately-Sensitive Data
    • Loss or compromise would cause some damage if made public
    • May include impact to customers
    • Examples:
      • Contact information like phone numbers or email addresses
      • Internal financial information
  • Minimally-Sensitive Data
    • Loss or compromise would cause minimal damage if made public
    • Does not impact customers
    • Examples:
      • Internal meeting agendas
      • Internal project schedules, etc.
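
To make these tiers actionable, they can be encoded in code or configuration so that both tooling and people apply them consistently. Below is a minimal Python sketch of one such encoding; the tier names mirror the list above, while the data labels and the `requires_approved_tool` helper are hypothetical illustrations, not a prescribed implementation.

```python
from enum import Enum

class Sensitivity(Enum):
    """Sensitivity tiers mirroring the policy categories above."""
    HIGHLY_SENSITIVE = 3      # significant damage if compromised (e.g., PII, client financials)
    MODERATELY_SENSITIVE = 2  # some damage if compromised (e.g., contact information)
    MINIMALLY_SENSITIVE = 1   # minimal damage if compromised (e.g., meeting agendas)
    NON_SENSITIVE = 0         # public-facing, anonymized, or third-party data

# Hypothetical label-to-tier mapping; a real policy would keep this
# in a governed data catalog rather than a hard-coded dictionary.
DATA_LABELS = {
    "client_financials": Sensitivity.HIGHLY_SENSITIVE,
    "employee_pii": Sensitivity.HIGHLY_SENSITIVE,
    "customer_contact_info": Sensitivity.MODERATELY_SENSITIVE,
    "internal_meeting_agenda": Sensitivity.MINIMALLY_SENSITIVE,
    "public_marketing_copy": Sensitivity.NON_SENSITIVE,
}

def requires_approved_tool(label: str) -> bool:
    """Return True if data with this label may only be processed by
    tools explicitly authorized for Sensitive Data."""
    # Fail closed: unknown labels are treated as Highly Sensitive.
    tier = DATA_LABELS.get(label, Sensitivity.HIGHLY_SENSITIVE)
    return tier is not Sensitivity.NON_SENSITIVE
```

Note the fail-closed default: data whose label has not yet been classified is treated as Highly Sensitive until someone classifies it, keeping honest mistakes on the safe side of the policy.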

Tools approved for Sensitive Data should be rigorously evaluated to ensure that an organization’s own privacy & security measures are enforced within the tool. This may include internally configured instances of AI applications run within an organization’s own tenant, allowing data to remain private to the host entity.

Non-Sensitive Data, on the other hand, comprises information that either 1) does not belong to the company or its clients; 2) has been anonymized to remove sensitive information; or 3) is public-facing. Making the distinction between Sensitive and Non-Sensitive data clear is crucial to ensuring that employees are adequately informed and prepared to use AI responsibly.

2. Categorizing AI Tools and Services

Whether employees are curious about AI applications built for job-specific use cases or looking to utilize general-purpose tools like ChatGPT, an organization’s AI policy must delineate which tools are approved for processing Sensitive Data and which are not. If this line is not drawn, an organization risks employees inputting sensitive data into untrusted tools out of unfamiliarity or confusion.

In its own AI Acceptable Use policy, ATX classifies AI tools into these four defined categories:

  • Authorized Tools for Sensitive Data
    • Established or assessed to ensure proper security and privacy features are enforced
  • Unauthorized Tools for Sensitive Data
    • Reviewed and deemed unacceptable for use with Sensitive Data
  • Unauthorized Tools for General Use
    • Reviewed and deemed unacceptable for general use; employees are not permitted to use these AI tools by any means
  • Unverified Tools
    • These tools have not yet been reviewed
    • Provided that employees abide by the considerations specifically outlined in ATX’s Responsible Use section, use with Non-Sensitive Data is permitted so that employees may test and validate the service in question

To keep the policy relevant, organizations ought to maintain a registry of authorized and unauthorized tools that is regularly updated as new tools are evaluated. How employees interact with these tools must be codified in the AI policy as well, not only to ensure safe use, but to educate employees on AI’s inherent limitations.
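
As a sketch of what such a registry might look like when codified, the Python snippet below models the four categories and a lookup that fails closed: tools missing from the registry are treated as Unverified, and therefore permitted for Non-Sensitive Data only. The tool names and the `may_use` helper are hypothetical, not drawn from any real registry.

```python
from enum import Enum

class ToolCategory(Enum):
    AUTHORIZED_SENSITIVE = "authorized for Sensitive Data"
    UNAUTHORIZED_SENSITIVE = "unauthorized for Sensitive Data"
    UNAUTHORIZED_GENERAL = "unauthorized for any use"
    UNVERIFIED = "not yet reviewed"

# Hypothetical registry entries; in practice this would be a centrally
# governed, regularly updated document or service, not a literal dict.
TOOL_REGISTRY = {
    "internal-llm-tenant": ToolCategory.AUTHORIZED_SENSITIVE,
    "public-chatbot-free-tier": ToolCategory.UNAUTHORIZED_SENSITIVE,
    "banned-summarizer": ToolCategory.UNAUTHORIZED_GENERAL,
}

def may_use(tool: str, data_is_sensitive: bool) -> bool:
    """Decide whether a tool may be used with the given data class.
    Unknown tools default to Unverified: Non-Sensitive Data only."""
    category = TOOL_REGISTRY.get(tool, ToolCategory.UNVERIFIED)
    if category is ToolCategory.UNAUTHORIZED_GENERAL:
        return False  # banned outright, regardless of data
    if data_is_sensitive:
        return category is ToolCategory.AUTHORIZED_SENSITIVE
    return True  # Non-Sensitive Data: permitted unless banned outright
```

For example, `may_use("public-chatbot-free-tier", data_is_sensitive=True)` returns False, while the same tool is permitted for experimentation with Non-Sensitive Data.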

3. Outlining Responsible Use

Using AI responsibly is critical to upholding ethical standards and maintaining data integrity; emphasizing security, compliance, and alignment with organizational standards is essential. For instance, a responsible AI policy should prohibit linking company devices and accounts to Unauthorized or Unverified AI applications. At the same time, it is essential to create an intuitive and safe pathway through which employees can request that new tools be evaluated and approved (or disapproved) for use. Doing this not only fosters internal AI awareness and competence, but also empowers staff to recommend tools they feel may be useful. In this way, an organization can prioritize compliance with overarching security measures while expanding its AI know-how.

Even when using trusted tools, however, AI-generated outputs should not be accepted blindly. For the foreseeable future, humans will need to review and validate AI-derived materials for correctness prior to publishing. When AI outputs cite real-world statistics or make factual claims, tools like ChatGPT can be queried to produce relevant citations (e.g., research papers and other publications) that a user can then check against the output. This, along with general proofreading, reduces the risk of publishing misinterpretations, unconscious biases, or hallucinations (outputs not based in reality) put forth by the model. This rule of thumb should be stated in any policy to remind users that they remain responsible for final drafts that include AI-generated content.

Due diligence is also required for more technical AI-assisted activities, like coding. It should be mandatory that generated code adheres to the organization's coding standards and security protocols and is policed by regular code reviews and testing. Fostering a culture of accountability around vetting that code, just as one would vet a report containing AI-generated text, is paramount, and a governing policy needs to say so explicitly.
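
One way to operationalize that mandate is to require AI-assisted changes to pass the same automated gates as any other code before human review. The sketch below assumes ruff, bandit, and pytest as the organization's linter, security scanner, and test runner; substitute whatever toolchain your standards actually name.

```python
# Hypothetical CI gate for AI-assisted code: lint, security-scan, and
# test before a human reviewer signs off. Assumes ruff, bandit, and
# pytest are installed; swap in your organization's own tools.
import subprocess
import sys

CHECKS = [
    ["ruff", "check", "."],   # coding-standards lint
    ["bandit", "-r", "src"],  # static security scan of the src/ tree
    ["pytest", "-q"],         # full test suite
]

def main() -> int:
    for cmd in CHECKS:
        print(f"Running: {' '.join(cmd)}")
        if subprocess.run(cmd).returncode != 0:
            print("Check failed; do not merge until resolved.")
            return 1
    print("Automated checks passed; human code review is still required.")
    return 0

if __name__ == "__main__":
    sys.exit(main())
```

The final message is deliberate: passing the gate is a floor, not a substitute for the human accountability the policy calls for.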

Creating an AI Acceptable Use policy is not a one-time task, but an ongoing process that evolves with technological advancements and emerging risks. By classifying data types, categorizing and listing AI tools and services, and defining “Responsible Use”, organizations can build strong foundations for ethical and effective AI deployment. A well-defined AI policy not only protects an organization and its data, but also builds trust with customers and stakeholders, enabling sustainable growth & survivability in an increasingly AI-driven world.

If you are curious about how to get started on your own AI Acceptable Use Policy, please contact us for a template!

Author: Andrew Hutchins