Has anyone started exploring the incorporation of AI tools and training into their developer communities, particularly in the government space where we tend to be more risk-averse? I’m interested in hearing about the strategies you're using to evaluate these tools and how you distinguish between those offering real value and those that might be overhyped. While tools that simplify code reviews, generate test cases, and ensure automated test coverage are top of mind, I recognize there may be other valuable AI applications. I’d love to hear how others are navigating this space, especially in areas where risk management is a key consideration.
I am happy to help here, but I am not sure whether I can provide what the peer is looking for. In our BASF developer community we have an extended exploration program testing GitHub Copilot. We have collected many insights and best practices and are preparing a broad rollout. However, we are not in the governmental space but a chemical company. Would that still be interesting? If so, I could connect the peer to my colleague orchestrating this GitHub Copilot program.
Within our company, we set up an AI taskforce first. This taskforce delivered a plan for evaluating the value of AI for our business and set the conditions for adoption. Basically, the process is to convert concerns into requirements.
Technology-wise, we evaluate multiple providers concurrently, including OpenAI, Microsoft, Google, and AWS. We arranged corporate contracts, set up single sign-on, and established private connectivity to these providers; our prompts are not used to train their models.
ChatGPT licenses are handed out to a variety of users from different disciplines, and we conduct surveys to monitor the value. AI is in active use today; we are aware of its shortcomings, and we are investing in it further to improve our business.
We are trying to transform a traditional software development team into an AI development team. We are taking a few approaches:
1. Identify training and upskilling needs based on the different phases of the AI/ML lifecycle. Broadly, we are upskilling in the following areas:
a. Data Engineering
b. ML
c. GenAI
d. Agentic AI
This is high level, but I have written a position paper for leadership on what to do, when, and how.
2. Then, we mentor team members to identify AI use cases within their development domains or in proof-of-concept projects they are currently working on or familiar with. This enables them to apply what they’ve learned in a practical context.
3. Once we achieve maturity in items #1 and #2, we start involving software development team members in applying their AI skills to other projects.
4. In parallel, we are investing in learning GenAI, LLMs, and Agentic AI. We are building RAG applications that enable prompt-based question-and-answer capabilities by connecting to application databases and delivering responses in plain English for end users (see the sketch after this list).
5. We’re actively encouraging team members to explore Agentic AI and get hands-on experience with Copilot Studio by building small AI agents.
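To make item #4 concrete, here is a minimal retrieval-augmented generation (RAG) sketch in Python. It is illustrative only, not our production setup: the sample documents stand in for rows pulled from an application database, the model names are just examples, and it assumes the `openai` package with an OPENAI_API_KEY set in the environment.

```python
# Minimal RAG sketch (illustrative): embed a handful of internal documents,
# retrieve the most relevant ones for a question, and ask an LLM to answer
# in plain English using only that context.
import numpy as np
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

# In practice these would be rows pulled from an application database.
documents = [
    "Order 1042 shipped on 2024-03-01 and was delivered on 2024-03-05.",
    "Refunds are processed within 7 business days of approval.",
    "Premium support is available Monday through Friday, 9am-5pm ET.",
]

def embed(texts: list[str]) -> np.ndarray:
    """Return one embedding vector per input text."""
    resp = client.embeddings.create(model="text-embedding-3-small", input=texts)
    return np.array([d.embedding for d in resp.data])

doc_vectors = embed(documents)

def answer(question: str, top_k: int = 2) -> str:
    # Retrieve: rank documents by cosine similarity to the question.
    q = embed([question])[0]
    scores = doc_vectors @ q / (
        np.linalg.norm(doc_vectors, axis=1) * np.linalg.norm(q)
    )
    context = "\n".join(documents[i] for i in np.argsort(scores)[::-1][:top_k])
    # Generate: answer strictly from the retrieved context.
    resp = client.chat.completions.create(
        model="gpt-4o-mini",
        messages=[
            {"role": "system",
             "content": "Answer using only the provided context. "
                        "If the answer is not in the context, say so."},
            {"role": "user",
             "content": f"Context:\n{context}\n\nQuestion: {question}"},
        ],
    )
    return resp.choices[0].message.content

print(answer("How long do refunds take?"))
```

The design choice that matters here is the system prompt: constraining the model to answer only from the retrieved context is what keeps the plain-English answers grounded in the application's own data rather than the model's general knowledge.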
In this space, we keep seeing new information popping up every week. Hard to keep up. Hope this helps.