What sorts of challenges have you encountered in adapting your governance framework to address AI adoption across business units, and how are you attempting to resolve these?
From my perspective, we have been very aggressive in adopting tools like Copilot and ChatGPT, providing training and access to thousands of employees. Our CIO has pointed out that there is no such thing as shadow IT anymore, because everyone now has access to these powerful tools. The challenge is striking the right balance between enabling innovation and maintaining control. We use cloud access security broker (CASB) and data loss prevention (DLP) tools to block unapproved services, but demand for more AI tools is high. We try not to position security as the main barrier; instead, we frame some decisions as financial or operational to gain more leverage.
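The allow/block approach described above can be sketched as a simple policy check. This is a hypothetical illustration of the decision logic a CASB or DLP rule might encode, not any vendor's actual API; the domain lists and function name are invented for the example.

```python
# Illustrative sketch only: a simplified allow/deny/review decision for
# outbound requests to AI services, mimicking the kind of policy a CASB
# or DLP tool enforces. Domain lists here are hypothetical examples.

APPROVED_AI_DOMAINS = {"copilot.microsoft.com", "chat.openai.com"}
BLOCKED_AI_DOMAINS = {"unvetted-ai-tool.example.com"}

def evaluate_request(domain: str) -> str:
    """Return a policy decision for a request to an AI service domain."""
    if domain in BLOCKED_AI_DOMAINS:
        return "block"    # known unapproved service
    if domain in APPROVED_AI_DOMAINS:
        return "allow"    # sanctioned tool
    return "review"       # unknown service: route to governance review

print(evaluate_request("chat.openai.com"))               # allow
print(evaluate_request("unvetted-ai-tool.example.com"))  # block
```

In practice the "review" path matters most: rather than silently blocking unknown services, routing them to a governance queue preserves the visibility that makes shadow AI manageable.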
There are two main categories of AI tools: individual-efficiency tools and engineering tools such as GitHub Copilot. We are selective about which tools we approve in order to avoid sprawl and manage costs, but this can create frustration when people have to switch tools after integrating them into their workflows. The key is to provide clear guidance and build buy-in from employees, so they understand the reasoning behind these decisions.
Shadow AI is a significant challenge, much like shadow IT was in the past. The main issues for us are accountability and ownership. Business units may initiate AI projects independently, using their own tools and processes, and when we follow up to determine ownership and accountability, it can be difficult to get clear answers. This lack of transparency is a major challenge, as teams may be reluctant to admit they started something outside the established process.
Visibility and speed are critical issues. Business Information Security Officers (BISOs) play an important role in providing the CISO with insight into what is happening across the business, allowing us to respond in a timely manner and partner effectively with business units.
We have formed an AI governance committee that meets regularly to discuss both the potential of AI and the governance and security considerations around its use. One of the main challenges is dealing with shadow AI and the volume of requests to use AI technologies. Technology is still evolving, so we are cautious about moving too quickly. Our primary focus is on making AI available in a secure manner, and so far, business units have been supportive of joining the committee and participating in proper governance.
We are also seeing an overwhelming pace of requests for AI adoption, both for individual productivity and for client solutions. We do not block these requests outright, but we do require due diligence and a risk assessment; the volume and speed of requests make it difficult to keep up. Another challenge is that the team responsible for evaluation and risk assessment was not hired specifically for AI security; instead, we have upskilled existing staff and leveraged available frameworks. There is still a learning curve, and ensuring the team is fully prepared to evaluate AI risks remains an ongoing challenge.