Have you been able to effectively scale existing controls for non-human identities, or have you found it necessary to add new controls to meet needs resulting from AI adoption?

113 views · 5 comments
IT Manager in Banking, a day ago

Our primary risk involves protecting member and employee data, especially when employees submit prompts to generative AI chatbots. To address this, we have implemented an AI firewall that detects and sanitizes sensitive data in prompts before they are sent to the model, then rehydrates the response as needed. Additionally, our web gateway allows access only to approved generative AI websites. These controls are supplemented with policy and training to ensure proper use.
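The detect → sanitize → rehydrate flow described above can be sketched roughly as follows. This is a minimal illustration, not the commenter's actual AI-firewall product: the detection patterns, placeholder token format, and function names are all assumptions for the example.

```python
import re
import uuid

# Illustrative detection patterns; a real AI firewall would use far richer
# classifiers (NER models, dictionaries of member IDs, etc.).
PATTERNS = {
    "EMAIL": re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+"),
    "SSN": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
}

def sanitize(prompt: str) -> tuple[str, dict[str, str]]:
    """Replace sensitive values with opaque tokens; keep a mapping for later."""
    mapping: dict[str, str] = {}
    for label, pattern in PATTERNS.items():
        for match in set(pattern.findall(prompt)):
            token = f"<{label}_{uuid.uuid4().hex[:8]}>"
            mapping[token] = match
            prompt = prompt.replace(match, token)
    return prompt, mapping

def rehydrate(response: str, mapping: dict[str, str]) -> str:
    """Restore the original values in the model's response."""
    for token, value in mapping.items():
        response = response.replace(token, value)
    return response
```

Only the tokenized prompt ever leaves the perimeter; the mapping stays local, so the model never sees the raw values.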

Chief Information Security Officer, a day ago

For us, controls for non-human identities can often be extended to AI. Segregation of data and governance for service accounts and APIs have evolved with AI adoption. Rather than creating entirely new controls, we are primarily enforcing and adapting existing ones, particularly around API governance when AI is involved. Our approach is more about adjusting and tightening controls than building new ones from scratch.

Reply, a day ago

We have also improved our processes for onboarding and offboarding AI identities. It is important to maintain control and ensure proper cleanup, especially after proof-of-concept projects. Larger organizations often have numerous service accounts, and we are now more vigilant in flagging and managing accounts related to AI workflows to ensure they are offboarded correctly when no longer needed.
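One lightweight way to make the onboarding-and-cleanup discipline above enforceable is to require an owner and an expiry date when an AI identity is created, so post-POC cleanup becomes a query rather than guesswork. A hypothetical sketch, assuming a simple registry record (the field names are illustrative, not any specific IAM product's schema):

```python
from dataclasses import dataclass
from datetime import date

@dataclass
class AIIdentityRecord:
    account_id: str
    owner: str      # human accountable for the identity
    purpose: str    # e.g. "POC: invoice-classification bot"
    expires: date   # forces a review when a POC ends

def expired(records: list[AIIdentityRecord], today: date) -> list[str]:
    """Identities past expiry, due for offboarding review."""
    return [r.account_id for r in records if r.expires < today]
```

Flagged accounts go to the named owner for renewal or decommissioning, which addresses the "numerous service accounts with no one accountable" problem.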

CISO, a day ago

We are exploring several tools to help manage the proliferation of non-human identities, such as service accounts. As an older company, some of these accounts are quite old and lack necessary metadata. Newer companies that started with an AI-first approach can move quickly, but legacy organizations face more challenges due to technical debt. We have conducted proof-of-value trials with advanced solutions that analyze patterns and context to help identify ownership of non-human identities. Manual efforts can address some of the easier cases, but scaling across the enterprise is difficult. The ideal solution would be a policy enforcement tool that could, for example, automatically suspend inactive identities. While there is no silver bullet, these tools provide valuable guidance and help us better understand our environment.
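The "automatically suspend inactive identities" policy the commenter describes as ideal can be sketched in a few lines. This is an assumption-laden illustration: the identity dict shape, the 60-day threshold, and the `suspend` stub stand in for whatever a real enforcement tool would do via the IdP or cloud IAM API.

```python
from datetime import datetime, timedelta, timezone

INACTIVITY_LIMIT = timedelta(days=60)  # illustrative threshold

def suspend(identity_id: str) -> None:
    """Stub: in practice, call the IdP/cloud IAM API to disable the identity."""
    print(f"suspending {identity_id}")

def enforce(identities, now=None, dry_run=True):
    """Return (and optionally suspend) identities idle past the limit."""
    now = now or datetime.now(timezone.utc)
    to_suspend = [i for i in identities
                  if now - i["last_activity"] > INACTIVITY_LIMIT]
    if not dry_run:
        for identity in to_suspend:
            suspend(identity["id"])
    return [i["id"] for i in to_suspend]
```

Running in dry-run mode first matches the reality the commenter notes for legacy environments: old accounts lacking metadata need human review before any automated suspension is trusted.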

Vice President - Global Head of Information Security, Privacy & Business Continuity, a day ago

This is a multi-layered question, but I’ll start with the governance aspect. We have deployed what we call a responsible AI framework, which ensures visibility into both the AI solutions we are using and those being built internally. Our approach is to incorporate the right guardrails without slowing down business operations, and we have developed a comprehensive framework and process to support this.
