If you’re using AI chatbots in a regulated industry (like healthcare or banking), have your end-users shared any discomfort with using them or distrust of their output?

Chief Data Officer in Media, a year ago

I have heard both concerns from multiple clients. Building small (100M - 1B parameters) language models that run on low-cost hardware works very well. Developing a single platform where all ML and AI tools are available helps keep shadow tool usage to a minimum.
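To put the 100M–1B parameter range mentioned above in perspective, a decoder-only transformer's size can be estimated directly from its dimensions. The sketch below uses hypothetical config values (loosely in the spirit of GPT-2 small's dimensions, which are not stated in the original answer) and ignores biases and layer norms, which are comparatively tiny:

```python
def transformer_param_count(vocab_size, d_model, n_layers, d_ff):
    """Rough parameter estimate for a decoder-only transformer.

    Counts the token embedding matrix, the attention projections
    (Q, K, V, output), and the feed-forward block per layer;
    biases and layer-norm weights are omitted as negligible.
    """
    embedding = vocab_size * d_model    # token embedding matrix
    attention = 4 * d_model * d_model   # Q, K, V, and output projections
    feed_forward = 2 * d_model * d_ff   # up- and down-projection
    per_layer = attention + feed_forward
    return embedding + n_layers * per_layer

# Hypothetical config landing near the low end of the 100M-1B range.
params = transformer_param_count(vocab_size=50_000, d_model=768,
                                 n_layers=12, d_ff=3_072)
print(f"{params / 1e6:.0f}M parameters")  # ~123M
```

Models at this scale fit comfortably in a few hundred megabytes at 8-bit precision, which is why they run well on low-cost hardware.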

Global Chief Cybersecurity Strategist & CISO in Healthcare and Biotech, a year ago

Yes, end-users have expressed discomfort and distrust of AI chatbots across industries, and with good reason! Concerns often stem from data breaches and inaccurate responses. It’s crucial to address these issues by implementing strong data security measures, clearly communicating them to users, ensuring response accuracy, seeking feedback, and being transparent about data handling practices.
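One concrete data-handling measure along these lines is redacting obvious PII from user messages before they ever reach the chatbot backend. A minimal sketch is below; the regex patterns are purely illustrative (a real deployment would use a dedicated PII-detection service, not hand-written patterns):

```python
import re

# Illustrative patterns only; not production-grade PII detection.
PII_PATTERNS = {
    "EMAIL": re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+"),
    "SSN": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "PHONE": re.compile(r"\b\d{3}[-.]\d{3}[-.]\d{4}\b"),
}

def redact_pii(text: str) -> str:
    """Replace recognized PII spans with labeled placeholders."""
    for label, pattern in PII_PATTERNS.items():
        text = pattern.sub(f"[{label}]", text)
    return text

print(redact_pii("Reach me at jane.doe@example.com or 555-867-5309."))
# -> Reach me at [EMAIL] or [PHONE].
```

Showing users exactly what gets redacted before transmission is itself a form of the transparency the answer calls for.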

Senior Director of Technology in Software, a year ago

We are using an AI chatbot for our feedback messages. The bot interprets each customer's response and, based on its tonality, starts the conversation.

We don't recommend medicines or address health-related issues in chat, but we plan to venture into that area soon.
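The tonality-based opener described above can be sketched with a simple keyword lexicon standing in for a real sentiment model (the word lists and opener strings here are hypothetical, not from the original answer):

```python
# Hypothetical lexicon standing in for a trained sentiment model.
NEGATIVE = {"bad", "terrible", "slow", "broken", "refund", "angry"}
POSITIVE = {"great", "love", "fast", "excellent", "thanks", "happy"}

def classify_tonality(message: str) -> str:
    """Score a message by counting positive vs. negative words."""
    words = set(message.lower().split())
    score = len(words & POSITIVE) - len(words & NEGATIVE)
    if score > 0:
        return "positive"
    if score < 0:
        return "negative"
    return "neutral"

# The detected tonality selects how the bot opens the conversation.
OPENERS = {
    "positive": "Glad to hear it! What did you like most?",
    "negative": "Sorry about the trouble. What went wrong?",
    "neutral": "Thanks for your feedback. Could you share a bit more?",
}

def start_conversation(feedback: str) -> str:
    return OPENERS[classify_tonality(feedback)]

print(start_conversation("The delivery was slow and the box arrived broken"))
```

A production bot would replace the lexicon with a trained classifier, but the routing structure (tonality label selects the opener) stays the same.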

