As organizations deploy LLMs and other AI systems, how do you recommend security teams address risks such as data leakage, prompt injection, and adversarial manipulation? Which frameworks or practices should be prioritized to make AI security programs resilient from day one?
A common challenge with our supplier risk-tiering framework is that even vendors in the lowest risk tier still require processing. Has anyone implemented a "not relevant" tier in their model, applied to vendors that pose genuinely negligible security risk?
I'm mostly building on... (share "why" in the comments)
Public Cloud: 72%
Private Cloud: 27%
Agree or disagree: Business unit leaders typically oppose the SOC’s recommendations.
Apart from applying the just-released patches, what other mitigation tactics are you using to address the zero-day SharePoint vulnerabilities affecting on-premises servers?
Does anyone have recommendations for risk management background services for a large technology consulting company with more than 20,000 employees?