What are the most pressing AI risks software teams are encountering today? What strategies do you find most effective in helping staff mitigate them?

VP of Engineering · 5 days ago

Organization-wide AI usage guidelines help. For example, developers are shown how to turn off data sharing in the various tools they use. Security experts should educate the team, and that guidance should be backed up with documentation.

Sr. Software Principal Engineer (Gen AI and ML Security) in Hardware · 5 days ago

The biggest risk is using AI without fully understanding it, especially since many AI systems lack transparency. If you build AI in-house, you know how it reasons; with external systems, you don't have that level of insight. We focus on AI security and advise against using tools that are not fully understood. Education is our main strategy: we analyze every tool before implementation and train our team thoroughly. We avoid rushing adoption and prioritize understanding and building our own AI systems.

Another mitigation strategy is limiting the context the AI is given to only what is necessary for the task, rather than exposing it to all available information. This limits how much sensitive data the model can see and helps safeguard against cybersecurity risks.
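As a rough illustration of that idea, here is a minimal Python sketch of context minimization before a prompt is assembled. The record layout, the SENSITIVE_KEYS set, and the build_prompt helper are hypothetical, not any particular product's API; the point is simply that only an explicit allow-list of fields ever reaches the model.

```python
# Hypothetical sketch of context minimization before calling an LLM.
# Only fields on an explicit allow-list (and never on the sensitive list)
# are included in the prompt, so everything else stays out of the model's view.

SENSITIVE_KEYS = {"ssn", "salary", "home_address", "api_key"}

def minimize_context(record: dict, allowed_keys: set) -> dict:
    """Keep only explicitly allowed, non-sensitive fields."""
    return {
        k: v
        for k, v in record.items()
        if k in allowed_keys and k not in SENSITIVE_KEYS
    }

def build_prompt(task: str, record: dict, allowed_keys: set) -> str:
    """Compose a prompt from the task and the minimized record only."""
    context = minimize_context(record, allowed_keys)
    lines = [f"{k}: {v}" for k, v in context.items()]
    return task + "\n\nContext:\n" + "\n".join(lines)

if __name__ == "__main__":
    employee = {
        "name": "Ada",
        "team": "Platform",
        "ssn": "000-00-0000",  # never forwarded to the model
        "ticket": "Rotate expiring TLS certificates",
    }
    prompt = build_prompt(
        "Summarize the open ticket for a status report.",
        employee,
        allowed_keys={"name", "team", "ticket"},
    )
    print(prompt)  # contains name, team, and ticket; the SSN is excluded
```

The same allow-list principle applies whether the context comes from a database row, a retrieved document, or a log excerpt: decide up front what the task needs and withhold the rest.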
