Artificial Intelligence (AI) is rapidly transforming the workplace by streamlining tasks, enhancing communication, and enabling innovative solutions such as better writing, improved customer interactions, and faster IT support. While these advancements bring many benefits, they can also create anxiety about hallucinations and inaccuracies, plagiarism, data security, and privacy. Effectively managing this anxiety requires thoughtful planning and a clear understanding of the risks and safeguards involved.
Practical Uses of AI: Secure and Incremental Adoption
Adopting AI does not mean you have to overhaul everything at once. Instead, you can start small and keep your data secure by storing it on-site or in trusted, protected environments. For example, an AI system can analyze patterns in your company’s data locally, keeping the records separate from external sources, and then query outside databases for similar customer profiles without exposing your sensitive information. This approach lets you benefit from AI-driven insights while maintaining data privacy.
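To make the pattern concrete, here is a minimal Python sketch with made-up customer records and a hypothetical `external_lookup_api` service: the raw records stay in-process, and only derived, non-identifying aggregates (plus a one-way pseudonym for any ID) would ever be sent outside.

```python
import hashlib
import statistics

# Local customer records: these never leave this process; only
# derived, non-identifying aggregates are shared externally.
customers = [
    {"id": "C001", "monthly_spend": 120.0, "tenure_months": 18},
    {"id": "C002", "monthly_spend": 95.5, "tenure_months": 7},
    {"id": "C003", "monthly_spend": 240.0, "tenure_months": 30},
]

def derive_profile(records):
    """Summarize local data into anonymous aggregate features."""
    return {
        "avg_spend": round(statistics.mean(r["monthly_spend"] for r in records), 2),
        "avg_tenure": round(statistics.mean(r["tenure_months"] for r in records), 2),
    }

def pseudonymize(customer_id: str) -> str:
    """One-way hash so external systems never see real customer IDs."""
    return hashlib.sha256(customer_id.encode()).hexdigest()[:12]

profile = derive_profile(customers)
print(profile)  # {'avg_spend': 151.83, 'avg_tenure': 18.33}

# Hypothetical external call: only the aggregate profile is sent out.
# similar_customers = external_lookup_api.find_similar(profile)
```

The design choice here is simple but powerful: the boundary between "stays inside" and "goes outside" is explicit in the code, which makes it auditable.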
Another practical use is deploying AI-powered chatbots for online banking or customer care. These bots can be custom-built or purchased from reputable providers. The key is to ensure you share only appropriate information and to understand exactly what data is being used, how it’s stored, and what safeguards are in place. This is where risk management comes in: it’s essential to clarify the responsibilities of both your organization and the AI provider in protecting your data.
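As one hedged illustration of sharing only appropriate information, the sketch below masks common PII patterns before a message would be forwarded to a provider's bot. The regex patterns and the `vendor_chatbot.send` call are illustrative assumptions; a production system would rely on a vetted redaction library and the vendor's documented API.

```python
import re

# Common PII patterns; a real deployment would use a vetted
# redaction library, but the principle is the same.
PII_PATTERNS = {
    "ssn": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "account": re.compile(r"\b\d{10,16}\b"),
    "email": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b"),
}

def redact(text: str) -> str:
    """Mask sensitive values before a message leaves your network."""
    for label, pattern in PII_PATTERNS.items():
        text = pattern.sub(f"[{label.upper()} REDACTED]", text)
    return text

message = "My SSN is 123-45-6789 and account 1234567890 is locked."
safe = redact(message)
# safe == "My SSN is [SSN REDACTED] and account [ACCOUNT REDACTED] is locked."
# Only the redacted text is forwarded to the vendor's chatbot:
# reply = vendor_chatbot.send(safe)  # hypothetical provider client
```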
Whether building or buying AI solutions, established business tools such as System and Organization Controls 2 (SOC 2) reports and privacy agreements play a vital role. SOC 2 is a widely recognized standard that assesses how well a service provider manages data security, availability, processing integrity, confidentiality, and privacy. Requesting a SOC 2 or equivalent report helps ensure your vendors meet the necessary security standards.
Data Protection: Storage, Backup, and Vendor Transparency
Protecting your data is a top priority. This includes knowing exactly where your data is stored - whether on your own servers, in the cloud, or at a secure data center - and ensuring it’s only accessible to authorized individuals. Regular backups are essential, and storing these backups offline can further reduce risk. When working with vendors or developers, ask them to specify what data they use, where it’s stored, and how it’s protected. Transparency here is crucial for security and compliance.
It’s also important to clarify if and how AI systems will access your data, and whether any information might leave your organization. Not all AI solutions require sharing sensitive data, so you can often configure them to work within your own secure environment.
Linking Data Protection to Agreements and Privacy
Strong data protection practices go hand in hand with robust agreements and privacy settings. These measures help you control what information AI systems can access and ensure your organization’s policies are followed.
Agreements and Privacy: SOC 2, Privacy Settings, and Copilot’s Learning Process
When using AI tools that adapt based on your interactions, such as Microsoft Copilot, it’s especially important to review privacy settings and agreements. Privacy settings control what information an AI system can collect and use. For example, Copilot may learn about your work patterns, collaborators, and the files you access to provide more relevant assistance. While this can be a powerful productivity boost, it’s essential to monitor and adjust these settings to prevent unnecessary data exposure.
End users may not always check privacy settings, so an organizational approval process should include a risk management and information security review. Always seek a written agreement that your data will not be used to train external AI models unless you consent. Sometimes, stronger protections require an upgraded license. Requesting a SOC 2 Type 2 report and conducting a risk assessment help confirm that your vendor takes security seriously.
Monitoring and Governance: Importance of Ongoing Oversight
Ongoing monitoring is critical for identifying and addressing new risks as AI evolves. Governance policies are formal rules and procedures your organization establishes to define, protect, and track sensitive data. Regularly reviewing these policies ensures that your data remains secure and that your AI systems comply with privacy standards.
Monitoring includes checking who can access data, how it is handled, and whether protections are working as intended. Examples of data requiring strong governance include (see the tagging sketch after this list):
- Information showing relationships between your company and customers
- Details about personnel, internal operations, or company products/services
- Customer data such as financial information, account balances, or credit history
- Personally identifiable information (PII)
- Company intellectual property
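As a minimal sketch of what such governance can look like in practice, the Python below tags hypothetical data assets with sensitivity labels and flags those an AI integration should not be allowed to touch. The category names and labels are illustrative assumptions, not a standard.

```python
from dataclasses import dataclass

# Illustrative sensitivity labels for the data categories above.
SENSITIVITY = {
    "customer_financials": "restricted",
    "pii": "restricted",
    "intellectual_property": "confidential",
    "internal_operations": "internal",
    "public_marketing": "public",
}

@dataclass
class DataAsset:
    name: str
    category: str
    owner: str

    @property
    def label(self) -> str:
        return SENSITIVITY.get(self.category, "unclassified")

assets = [
    DataAsset("loan_ledger.csv", "customer_financials", "finance"),
    DataAsset("org_chart.pdf", "internal_operations", "hr"),
]

# Governance check: block AI access to anything above "internal".
for asset in assets:
    allowed = asset.label in {"internal", "public"}
    print(f"{asset.name}: {asset.label} -> AI access {'allowed' if allowed else 'blocked'}")
```

Even a lightweight inventory like this makes the later questions (who accessed what, and should they have?) answerable.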
Backups and Firewalls: Best Practices for Protection
Reliable backups are a cornerstone of data security. Choose backup systems that cannot be tampered with (often called immutable backups) and consider offline backups for extra safety. No system is 100% secure, which is why security updates (patches) are regularly released for software and platforms; apply them promptly.
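One simple way to verify that a backup has not been tampered with is to compare cryptographic checksums of the live file and its offline copy. The sketch below shows the idea; the file paths are hypothetical.

```python
import hashlib
from pathlib import Path

def file_checksum(path: Path) -> str:
    """SHA-256 digest of a file, used to detect tampering or corruption."""
    digest = hashlib.sha256()
    with path.open("rb") as f:
        for chunk in iter(lambda: f.read(8192), b""):
            digest.update(chunk)
    return digest.hexdigest()

def backup_is_intact(source: Path, backup: Path) -> bool:
    """Compare a live file against its offline backup copy."""
    return file_checksum(source) == file_checksum(backup)

# Hypothetical paths; in practice the backup sits on offline or
# write-once (immutable) storage.
# ok = backup_is_intact(Path("data/ledger.db"), Path("/mnt/offline/ledger.db"))
```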
Firewalls and intrusion prevention systems are essential for guarding your network. A firewall acts as a barrier between your internal systems and external threats, tracking and blocking unauthorized access. If your internet and email traffic are not protected by a firewall, it is difficult to detect or prevent data leaks. Many tools are available to monitor and alert you to suspicious activity in real time.
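Commercial monitoring tools handle this at scale, but the underlying idea is simple; the toy sketch below counts failed logins per source address in a hypothetical log format and flags repeat offenders for alerting or blocking.

```python
import re
from collections import Counter

# Hypothetical log format: "<timestamp> <source-ip> <event>"
FAILED_LOGIN = re.compile(r"(\S+) FAILED_LOGIN")
THRESHOLD = 5  # repeated failures from one address look like a probe

def flag_suspicious(log_lines):
    """Count failed logins per source IP and flag likely intrusion attempts."""
    failures = Counter()
    for line in log_lines:
        match = FAILED_LOGIN.search(line)
        if match:
            failures[match.group(1)] += 1
    return [ip for ip, count in failures.items() if count >= THRESHOLD]

sample = ["2024-05-01T10:00 10.0.0.7 FAILED_LOGIN"] * 6
print(flag_suspicious(sample))  # ['10.0.0.7'] -> candidate for blocking and alerting
```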
Hallucinations and Inaccuracies
Hallucinations, meaning inaccurate or fabricated information from AI, are a growing problem. Although it is well known that AI hallucinates, even lawyers, who are in a position of trust, have used AI without checking that its responses are grounded in reality. In many recent lawsuits (over 150 so far), attorneys have cited cases that were entirely fabricated by AI. Clearly, new standards are needed for verifying the information AI provides.
Plagiarism
Using AI to check and rewrite your writing can help both you and your readers. However, plagiarism is widely viewed as unethical, and it can cross into copyright infringement. Universities now face a form of plagiarism that did not always exist: papers and test responses in college classes are becoming more generic and indicative of excessive AI use, even in the very places where original thought and creativity are required. Worse, many AI tools are now available to reword AI responses so they evade detection.
AI is plagiarizing research, ideas, art, music, videos, and many other works of innovation that have traditionally been protected by copyright or patent. While the freedom to use AI is powerful, it is also costly if you are the one being copied. Encourage your team to credit AI when they use it, to check the accuracy of the references, data, and details AI provides, and to avoid straight copy-and-paste.
Conclusion
AI is a helpful resource whether you are managing data, developing code, automating tasks, writing documents, or managing projects. As AI’s usefulness grows, we will all be embracing it in Google search, in medical settings, in our jobs, and for work that humans can’t do alone.
Here are key items to keep in mind for managing AI risk:
- Credit AI and check the details when using AI to generate writing or data.
- Check for inaccuracies when using AI by asking questions, doing your own research, and verifying data.
- Protect your data by storing it securely, backing it up, and using firewalls.
- Be mindful of what information AI tools can access—only share what’s necessary.
- Set clear rules and agreements for how AI is used and how your data is handled.
- Monitor your data’s location and access regularly.
- Balance innovation with safety—start small, review often, and keep sensitive information secure.
- Stay informed through training and by asking questions when unsure.
If you’re ready to establish an AI innovation strategy, governance program, and risk management approach, we’d love to connect and guide you through your options - reach out to Denise Butler at denise.butler@atxadvisory.com.
