We are on the cusp of piloting AI tooling: we intend to use Gemini and Copilot, providing training, guidance and policy to our user group. Before we start we want to conduct a risk assessment. My question: can you recommend a tool or template?

Group Director of Information Security in Banking · 6 days ago

Usage of AI tools, be it Gemini, Copilot or Grok, can be an open-ended endeavour that is better ring-fenced by establishing the principles of a Responsible AI policy than by conducting a one-off AI risk assessment. An AI system impact assessment comes closer to your requirements. Allow me to explain the terminology a bit more.

1. Definition: AI risk management is an overall management activity that addresses the organization as a whole, including the way AI system developers conduct development projects or use cases (be they on Gemini or Copilot), AI system providers manage their relationships with customers, and AI system users utilize the functions of AI systems.
Refer: ISO 23894 - Information technology — Artificial intelligence — Guidance on risk management

2. Definition: AI system impact assessment, as opposed to general risk management, addresses reasonably foreseeable impacts of AI systems:
a) to a restricted set of relevant interested parties, namely individuals, groups of individuals, societies and the environment;
b) to potential or concrete uses of AI systems as opposed to overall governance and management issues such as business strategies, compliance management, financial management or technology strategies.
AI system impact assessment therefore takes a more product or service-oriented view than risk management. It is also intended to be performed directly by teams concerned with the development, provisioning or technical management of the AI system.
Refer: ISO 42005 - Information technology — Artificial intelligence (AI) — AI system impact assessment

Sam has compiled a list of templates for both assessments in his LinkedIn post:
https://www.linkedin.com/posts/sam-burrett_ai-risk-assessment-template-activity-7311223857174966272-J-YC

Reply · 6 days ago

Hi Faheem, that's really helpful - thank you for your prompt response. I shall endeavour to read further into this using the reference materials you've suggested. Thanks again, Nick

VP of Information Security · 6 days ago

We work closely with our Risk Management Department to determine a compliance checklist:
• Governing Law Compliance - Ensure all activities adhere to the laws of jurisdictions with strict personal data protection and privacy rights, such as the EU, UK, and US.
• Document Security - Maintain the integrity and confidentiality of all uploaded files throughout their lifecycle via our DLP tool.
• No Model Training Usage - Confirm that uploaded documents are not used for training any models.
• Retention and Deletion - Verify that documents are deleted promptly once the specified retention period has expired.
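To make this concrete, here is a minimal sketch of how that checklist could be captured as structured data per tool under assessment. All names, questions and statuses below are illustrative assumptions, not real assessment results; in practice the evidence field would point to your DLP reports, vendor contract clauses or admin-console settings.

```python
# Minimal sketch: the checklist above as structured data, so each pilot
# tool (e.g. Gemini, Copilot) can be assessed the same way. Illustrative
# names only - not real assessment results.
from dataclasses import dataclass

@dataclass
class ComplianceCheck:
    control: str           # checklist item
    question: str          # what the assessor must verify
    evidence: str = ""     # e.g. contract clause, DLP report, admin setting
    satisfied: bool = False

CHECKLIST = [
    ComplianceCheck("Governing Law Compliance",
                    "Does usage adhere to EU/UK/US data protection law?"),
    ComplianceCheck("Document Security",
                    "Are uploaded files covered by the DLP tool end to end?"),
    ComplianceCheck("No Model Training Usage",
                    "Has the vendor confirmed uploads are not used for training?"),
    ComplianceCheck("Retention and Deletion",
                    "Are documents deleted when the retention period expires?"),
]

def report(tool: str, checks: list[ComplianceCheck]) -> None:
    """Print a simple pass/open summary for one tool under assessment."""
    print(f"Compliance report: {tool}")
    for c in checks:
        status = "PASS" if c.satisfied else "OPEN"
        print(f"  [{status}] {c.control}: {c.question}")

report("Gemini (pilot)", CHECKLIST)
```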
Hope this helps :)

Reply · 6 days ago

Hi Sudarat, thank you for those pointers - really useful input. I'll make sure our team takes note of and considers these. This is a brand-new foray into the world of AI, so any advice is welcome.

Director of IT in Healthcare and Biotech · 7 days ago

While all of the tools mentioned thus far address bias, you may want to pay particular attention to Section 1557 of the ACA and make sure you are addressing this topic in accordance with its requirements. I am seeing wildly differing interpretations of what this section expects.

Reply · 5 days ago

Thanks for your response, Michael - I'll check that out.

Director of IT · 7 days ago

Hi Nick,

California State Government uses this GenAI Risk Assessment Form. https://cdt.ca.gov/wp-content/uploads/2025/08/SIMM-5305-F-Generative-Artificial-Intelligence-Risk-Assessment-20250822FINAL.pdf

AI Governance Strategist in Travel and Hospitality · 7 days ago

Great points, Brian and Girish: the NIST AI RMF is the right foundation.
Nick, to operationalize it, the Cloud Security Alliance AI Controls Matrix (AICM) might be useful. It's a vendor-neutral control checklist mapped to the NIST AI RMF and ISO/IEC 42001, helpful for validating AI risks before deploying Gemini or Copilot. https://cloudsecurityalliance.org/artifacts/ai-controls-matrix
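As a rough illustration of what "operationalizing" the framework can look like, here is a minimal sketch that groups pilot risk items under the NIST AI RMF's four functions (Govern, Map, Measure, Manage - those function names are from the framework itself). The items and their mappings are illustrative assumptions, not entries from the actual AICM.

```python
# Minimal sketch: organizing pilot risk items by NIST AI RMF function.
# The four function names are real; the items and mappings below are
# illustrative assumptions, not drawn from the actual AICM.
from collections import defaultdict

RISK_ITEMS = [
    ("Approve Responsible AI policy before the pilot starts", "Govern"),
    ("Inventory Gemini/Copilot use cases and data flows", "Map"),
    ("Test outputs for bias and accuracy on sample tasks", "Measure"),
    ("Define incident response for AI misuse or data leakage", "Manage"),
]

by_function: dict[str, list[str]] = defaultdict(list)
for item, function in RISK_ITEMS:
    by_function[function].append(item)

for function in ("Govern", "Map", "Measure", "Manage"):
    print(f"{function}:")
    for item in by_function[function]:
        print(f"  - {item}")
```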

