Skip links

AI Risks: A Practical Approach for Technology Leaders

What is AI?

I am sure most everyone reading this will have some idea of what AI is and how it works, but I would like to take a moment to discuss how we can safely align AI use with business objectives. AI is far from what we think of when we conjure thoughts from sci-fi movies of sentient human-like machines. Its much less mystical and involves more complex mathematics, programming, and a ton of data. At it’s core, artificial Intelligence uses algorithms to mimic human intelligence to perform tasks, analyze data, and make predictions. I’ve put together some study notes on what artificial intelligence (AI) and machine learning (ML) is; you can find it here.

The benefits and dangers of AI

AI can help businesses by providing powerful data analysis, assist humans in tasks, and enhance customer experience. It can be used to provide 24/7 customer service chatbots, generate content for marketing departments, or write code to help programmers solve unique problems. However, like any powerful technology, AI brings risks, like how email enabled global communication but also introduced phishing attacks. Specific AI-related risks include:

  1. Ethical concerns like bias
  2. Misuse such as deepfakes, advanced phishing campaigns and AI enabled polymorphic malware
  3. Limitations such as hallucinations and misinformation
  4. Vulnerabilities like prompt injection and data leakage.

Our role as a technology leader

As technology leaders, our role is to enable the business while balancing risk. We must find ways to safely align AI with the business objectives without compromising security. Here are a few points to consider.

Controls

Risk assessment: Start with a risk assessment, like we would for a new SaaS application your business wants to implement, there are just unique considerations and additional risks. Understand how the AI system will function, the potential risks, and how and where your data will be used. Identify any vendor controls and ensure they meet your organization’s security requirements.

Policy documentation: Develop policies that align with business goals, risk appetite, and regulatory requirements. Consider creating an AI Acceptable Use Policy (AUP) or integrate AI guidelines into your existing AUP to define what is allowed and prohibited.

Education

Educate yourself: To accurately assess and mitigate AI risks, aim to become your organization’s AI subject matter expert. I recommend an AI foundations course as a good starting point. Additionally, you should stay informed on applicable laws and regulations, such as the EU AI Act, and NY Local Law 144, as more AI regulations will likely follow. Finally, consider how existing privacy laws such as GLBA or HIPAA might impact your organization’s use of AI.

Educate your employees: Employee awareness is critical, even if you don’t plan on using AI. For example, shadow use of AI tools can present a risk since employees can easily access AI on personal devices. Train employees on responsible AI use and data handling to prevent misuse or accidental exposure of sensitive data.

Practical Steps for Implementing AI

  1. Use case and risk assessment: Begin with examining the proposed use case of AI to fully understand what the business aims to accomplish. Next, assess the risk of the proposed use case and determine if it aligns with the business’ risk appetite and then communicate this to senior leadership.
  2. Select controls and implement policies: Develop policies based on your risk assessment. Update the AI AUP or integrate AI guidelines into your existing AUP. Where possible, apply controls to mitigate identified risks; many controls will be vendor-implemented, but ensure any complementary user entity controls are properly applied.
  3. Educate employees: Include AI in your security awareness training, covering acceptable use, ethical concerns, and compliance requirements.
  4. Monitoring: Establish ongoing monitoring of AI use to ensure there is no misuse, and confirm existing controls and policies are adequate so that no additional risk is introduced.
  5. Continuous Risk Assessments: Integrate AI into your vendor and third-party risk assessments to ensure they meet your organization’s requirements. AI technology is rapidly evolving; you should perform a stand-alone AI risk assessment at least annually or more frequently if necessary.

AI has the potential to improve productivity, even for small businesses, but it requires careful implementation and oversight. As technology leaders, we must securely enable business operations, balancing innovation with security. By establishing strong policies, educating ourselves and employees, and remaining vigilant, we can help our organizations safely and responsibly harness the power of AI.

This website uses cookies to improve your web experience.
Explore
Drag