August 23, 2023

Guiding your AI journey: crafting an effective AI policy for responsible governance within your business

By KYND


Artificial intelligence. It’s been the talk of the town for many months. With the surge of countless AI vendors, it’s clear that AI has firmly established its presence and is here to stay. With more than a third of enterprises now integrating AI into their operations, KYND has proactively embraced this shift and developed our own AI usage policy in response to the rapid development of these technologies.

In this blog we’re going to discuss the importance of AI within businesses and explore how you can take advantage of AI safely, integrating it seamlessly into your organisation’s policy framework so that its implementation augments rather than compromises your business objectives. At the end of this blog, you will find our policy template, which will serve as a compass for forging an AI policy tailored to the unique needs of your business.

Why is AI governance important for businesses?

So, why should you put so much faith in this new technology? While it’s absolutely true that artificial intelligence offers a plethora of advantages, such as streamlining processes, automating tasks and cutting costs, we should also be mindful of the challenges that come along with it, many of which are particularly pertinent to businesses. These include: security risks arising from the data-consuming behaviour of such tools; errors made by tools that have not been trained on up-to-date information; the potential for tools to inadvertently breach regulations; output that can introduce new security risks; and more.

Beyond these risks, the hazardous reality is that very limited guidance is currently available to organisations in light of AI’s widespread presence. KYND’s view is that using AI can be as beneficial as it is harmful, so defining clear expectations is the best step businesses can take, given how difficult it currently is to gauge how AI will impact the future of cyber security.

This is precisely why we developed and deployed our own AI usage policy. With more and more businesses adopting artificial intelligence to optimise and enhance their operational processes, we wanted to share our considerations and insights, along with a template we made to help your organisation create its own AI policy and mitigate the risks mentioned above.

Key considerations and actions for implementing a strong AI policy for your business

Identifying stakeholders and collaborators

The foundation of developing any policy is knowing who it will impact. Not all members of an organisation will use the same tools or perform the same activities, so it is important to identify who will be affected by the policy. Given the broad, adaptable and constantly expanding capabilities of AI, it can assist the roles of almost every member of an organisation. This means an AI policy will affect a wide range of people, which can include:

  • Internal teams such as: IT departments, leadership teams, finance departments, human resources etc.

  • External parties such as: customers, clients, partners, vendors and regulatory bodies who may interact with your AI tools/platforms/systems

  • Any individual with access to company information who may interact with AI systems

We recommend involving teams such as your organisation’s IT and legal departments in the development of this policy, so it can be tailored to your organisation’s specific needs. With this, your organisation will be better placed to provide guidance, define accountability for AI use, and give stakeholders clarity on relevant AI issues. Cumulatively, this creates a culture of responsibility, better protecting internal assets, customers and employees from the risks associated with currently available AI tools. Additionally, we recommend appointing an individual responsible for owning, updating and maintaining the policy, including the list of approved tools.

Assessing and addressing AI risks

Another crucial element of creating a policy is understanding the “why”. This enables organisations to demonstrate proactivity and awareness of current issues, and helps stakeholders understand how the policy applies to them. This can be achieved through a comprehensive risk assessment.

Such an assessment should cover categories including data privacy, bias and fairness, security, accuracy and reliability, legal and regulatory, economic, and operational risk. Below are the risks associated with each concern:

  • Data privacy risks: Because AI tools rely heavily on data and user input, any information users provide to a tool may be collected and processed by the vendor.

  • Bias & fairness: For the same reason, AI tools may inadvertently maintain or amplify biases present in their training data.

  • Security risks: Similarly, AI tools can generate code that introduces new security vulnerabilities, reflecting insecure patterns present in their training data.

  • Accuracy & reliability: Output provided by AI tools may be subtly or outright false due to out-of-date, inaccurate, or biased training data sets.

  • Legal & regulatory risks: AI tools can breach regulations, for example by reproducing licensed code from their training data, by processing data in ways that contravene GDPR or CCPA, or by infringing the privacy of a specific person or group of people.

  • Economic risks: AI tools can be used to automate processes governed by financial conditions, whether strictly defined or not. Combined with the potential bias, unreliability and data-collection behaviours of AI, this could lead to adverse or unexpected outcomes.

  • Operational risks: Because AI tools are dynamic, they may not always behave as intended over time. This could lead to malfunctions or future disruption of business operations.

Considering these risks, and as a starting point, KYND recommends disallowing the use of any tool not specifically approved in the policy. Maintaining a list of pre-vetted alternatives enables organisations to minimise their risk exposure while exploring potential opportunities for AI deployment. A minimal sketch of how such a default-deny approach could be enforced is shown below.
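To make this concrete, here is a minimal, illustrative sketch of how a pre-vetted allowlist could be enforced programmatically. The file name, tool names and structure are our own assumptions for illustration, not part of any specific policy or product:

```python
import json

# Illustrative allowlist; in practice it would be maintained by the
# policy owner and updated as new tools are vetted and approved.
APPROVED_TOOLS_FILE = "approved_ai_tools.json"  # hypothetical file name

def load_approved_tools(path: str = APPROVED_TOOLS_FILE) -> set[str]:
    """Load the set of pre-vetted AI tools from a policy-controlled file.

    Expected file contents (illustrative):
        {"approved": ["ExampleSummariser", "ExampleCodeAssistant"]}
    """
    with open(path) as f:
        return {tool.lower() for tool in json.load(f)["approved"]}

def is_tool_approved(tool_name: str, approved: set[str]) -> bool:
    """Default-deny: any tool not explicitly listed is disallowed."""
    return tool_name.lower() in approved

if __name__ == "__main__":
    approved = load_approved_tools()
    for tool in ("ExampleSummariser", "BrandNewChatbot"):
        verdict = "allowed" if is_tool_approved(tool, approved) else "blocked (not yet vetted)"
        print(f"{tool}: {verdict}")
```

The key design choice is the default-deny stance: anything absent from the list is blocked, so a newly released tool must go through vetting before anyone in the organisation can use it.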

By conducting a risk assessment, everyone is better able to anticipate and mitigate potential risks, with employers providing further guidance where needed, creating a safer working environment for all.

Preserving data privacy and compliance

Another fundamental part of this policy is to specifically outline the behaviours and actions stakeholders can take to preserve the privacy and security of data when interacting with AI tools. One of the more prominent issues currently faced is the processing of sensitive information through AI tools. The sensitivity of data can be subjective, so we recommend that each organisation review the types of data intended to be input into AI tools, while ensuring its practices comply with legislation relevant to the jurisdictions in which the company operates (GDPR, CCPA, HIPAA, FERPA etc.).

Some examples of sensitive information to consider here include: personally identifiable information (PII), protected health information (PHI), biometric information, proprietary intellectual property, credentials, source code, internal processes and documents, asset details, and physical security information or intelligence.

Additionally, unless the company is willing to accept the risks above and their potential consequences, KYND strongly recommends prohibiting users from entering any personal or company information into AI tools. Where possible, input should be anonymised, pseudonymised or segregated, with sensitive information such as the examples listed above modified to help preserve individual and organisational privacy.

For instance, if AI is being used to generate a report concerning a specific client, the client’s details should not be fed to the tool; instead, provide the section, variable or table names you intend to pull the information from. This may cost a little efficiency, but it must be recognised that AI tools were not released to replace human input; rather, they exist to enhance and optimise existing processes. The sketch below illustrates this pseudonymisation approach.
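As an illustration of the pseudonymisation approach described above, the following sketch replaces known client names and email addresses with placeholder tokens before a prompt is sent to an AI tool, keeping a local mapping so the originals can be restored in the output. The function names, regular expression and `call_ai_tool` reference are illustrative assumptions, not a production-ready redaction library:

```python
import re

# Minimal, illustrative pseudonymisation: replace known client details and
# email addresses with placeholder tokens before sending text to an AI tool.
EMAIL_RE = re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+")

def pseudonymise(text: str, known_names: list[str]) -> tuple[str, dict[str, str]]:
    """Return (sanitised_text, mapping), where mapping restores the originals."""
    mapping: dict[str, str] = {}
    for i, name in enumerate(known_names):
        token = f"<CLIENT_{i}>"
        mapping[token] = name
        text = text.replace(name, token)
    for j, email in enumerate(set(EMAIL_RE.findall(text))):
        token = f"<EMAIL_{j}>"
        mapping[token] = email
        text = text.replace(email, token)
    return text, mapping

def restore(text: str, mapping: dict[str, str]) -> str:
    """Re-insert the original values into the AI tool's output, locally."""
    for token, original in mapping.items():
        text = text.replace(token, original)
    return text

prompt = "Draft a renewal summary for Acme Ltd; contact jane@acme.example."
safe_prompt, mapping = pseudonymise(prompt, known_names=["Acme Ltd"])
print(safe_prompt)  # client name and email address replaced with tokens
# ai_output = call_ai_tool(safe_prompt)  # hypothetical call to an approved tool
# print(restore(ai_output, mapping))     # originals restored on your side only
```

The point of the pattern is that the sensitive values never leave your environment; only the placeholder tokens reach the AI vendor, and the mapping used to restore them stays local.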

Moreover, if data sets held by an AI vendor were leaked, sensitive information entered into their tools could become publicly available or be traded on the dark web by malicious actors; hence sensitive information should never be entered into such tools. Ultimately, data ingested by AI tools may be retained and used as “live” training data, so any information entered should be treated as exposed regardless of whether it ever surfaces.

Furthermore, since the use and regulation of AI tools is still in its infancy, providing stakeholders with as much guidance as possible in this area will help reduce the human element of risk.

Enabling accountability and oversight

When deploying a policy into a business, it is important for stakeholders to know who they can bring their queries, concerns and requests to. As mentioned in the stakeholder considerations, having a dedicated individual responsible for oversight and decisions relating to AI allows employees to raise any issues they face once a policy is introduced. This also enables your business to refine and revise the policy as the AI landscape broadens.

Beyond narrow data protection and security considerations, generative AI tools expose businesses to a broader range of risks, from discriminatory output (also known as “algorithmic bias”), to the erosion of an organisation’s standards, to the inadvertent use of copyrighted material.

Since AI tools make use of existing data, they are liable to reflect the biases within that data, as well as biases introduced by the selection or availability of data. We’ve seen this take the form of content that reflects wider societal biases, prejudices or discrimination, as well as decisions that result in differing outcomes for individuals or groups based on protected characteristics. For these reasons, where AI has been used in a generative process, we highly recommend mandatory human review of its output. This ensures that your organisation continues to fulfil its obligations under wide-ranging legislation such as the UK’s Equality Act 2010 or the USA’s Americans with Disabilities Act 1990.

As well as meeting your legal obligations, human review of AI outputs ensures that the quality of delivered output (including tone of voice, professionalism, personalisation and accuracy) continues to meet your organisation’s standards.
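One lightweight way to operationalise mandatory human review is to treat AI-generated content as a draft that cannot be released until a named reviewer signs it off. The sketch below is our own illustrative assumption of how such a gate might look in code, not a prescribed workflow:

```python
from dataclasses import dataclass

@dataclass
class AIDraft:
    """AI-generated content held as a draft until a human signs it off."""
    content: str
    source_tool: str                # which approved AI tool produced it
    approved_by: str | None = None  # named human reviewer, once signed off

    def approve(self, reviewer: str) -> None:
        # The reviewer checks tone, accuracy, bias and potential copyright
        # issues before anything AI-generated is released.
        self.approved_by = reviewer

    def publish(self) -> str:
        if self.approved_by is None:
            raise PermissionError("AI-generated content requires human review before release")
        return self.content

draft = AIDraft(content="Quarterly risk summary...", source_tool="ExampleSummariser")
# draft.publish()        # would raise PermissionError: not yet reviewed
draft.approve(reviewer="J. Smith")
print(draft.publish())   # released only after a named human signs off
```

Recording who approved each output also gives you an audit trail, which supports the accountability and oversight goals discussed above.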

With new AI tools being released daily, it has become impractical to track and assess every tool. By considering the factors mentioned above, businesses can narrow the scope of what is available and actionable.

At KYND, we’re all about helping you navigate, understand, and manage cyber risks, whether you’re a portfolio manager or a business. If you have any questions about how to implement AI safely or need help fortifying your portfolio or business against cyber risks, click here to get in touch.

If you would like to download our policy template, please complete the form below.
