Emerging Technology Watch

Trending Questions on AI and Emerging Technologies

Gartner experts share quick answers to recently asked client questions on emerging technologies.

Last Updated September 2025

What does the road ahead look like for agentic AI?

Agentic AI technologies are poised for significant transformation and growth, driven by advancements in AI capabilities, increasing demand for automation and the need for enhanced decision-making processes across various industries. As agentic AI matures, you can expect:

  • Increased agency and autonomy: This will enhance productivity and efficiency, allowing organizations to automate intricate workflows and processes that were previously labor-intensive.

  • Integration into business processes: By 2028, it is anticipated that 33% of enterprise software applications will include agentic AI, up from less than 1% in 2024. This will facilitate more sophisticated interactions between humans and AI systems, improving overall operational efficiency.

  • Focus on use cases with clear value: Gartner predicts that over 40% of agentic AI projects will be canceled by 2027 due to escalating costs or unclear business value. Prioritize agentic AI projects that demonstrate clear business value and ROI.

  • Challenges in adoption: These include implementation complexity, the need for robust governance frameworks and the difficulty of ensuring data quality.

  • Collaboration and governance: Organizations will need to establish clear guidelines for the autonomy granted to AI agents that balance the benefits of automation with the need for oversight.

  • Market dynamics and competition: The market for agentic AI is expected to grow rapidly, with both startups and established companies investing heavily in this area. Yet the market is currently rife with “agent washing,” with many vendors branding their offerings as agentic regardless of the underlying capabilities. True AI system autonomy — acting as reliable, communicative and collaborative multi-AI-agent systems — remains aspirational.


What is the business value of AI ethics?

The business value of AI ethics is increasingly recognized among organizations leveraging AI. Each of its principles relates to concrete business benefits:

  • Human-centric and socially beneficial: Implementing ethical AI practices helps build trust with customers, employees and investors. Organizations that prioritize ethical considerations in their AI initiatives are more likely to maintain a positive reputation, which can boost customer loyalty and strengthen the brand.

  • Fair: Fairness includes bias mitigation, which helps with regulatory compliance. Furthermore, when customers feel they are treated fairly, based on their specific circumstances, it can positively affect retention.

  • Explainable and transparent: When you understand what your AI is doing, you are more confident in bringing it to market. This also positively affects the organization’s risk metrics, such as reputation risk. 

  • Secure and safe: The business value is in data security, privacy metrics and operational risk. 

  • Accountable: Strong AI governance facilitates faster response times and a lower cost of compliance.

  • Sustainable: Efficient use of resources and alignment with an organization’s ESG goals are cost-effective.

Where do I apply enhanced reasoning language models? Do I need to use them to upgrade all my GenAI use cases?

Enhanced reasoning language models (RMs) are designed to tackle complex tasks that require logical inference, multi-step reasoning and structured outputs. The best applications for these models include:

  • Providing explainable and structured outputs: RMs provide visibility into AI’s decision process and generate clear explanations of complex analysis or decision logic.

  • Analyzing extensive and complicated information: These models can review and assess vast document sets, generate comprehensive summaries and facilitate complex code analysis, comprehension and refactoring.

  • Tackling complex technical and logical problems: RMs excel in scenarios that involve intricate decision-making processes, such as regulatory compliance checks, financial risk analysis and medical diagnostics. 

  • Automating workflows and using tools: These models can coordinate the specialized skills of individual AI agents to collaboratively execute multistep processes, connect and interact with enterprise systems and integrate real-time external information retrieval. 

While enhanced reasoning models offer significant advantages, not all generative AI use cases should be upgraded with them. Here are some considerations:

  • Complexity of use cases: For simple tasks that do not involve multiple steps or require elaborate outputs, traditional large language models (LLMs) may suffice.

  • Resource requirements: RMs typically consume substantial computational resources and time due to their planning and validation processes. For applications where speed and efficiency are critical, traditional LLMs might be more appropriate.

  • Current limitations: While RMs show promise, they are still maturing. Their performance can vary, and they may not always provide the desired accuracy or reliability. 

August 2025

What should I know about GPT-5?

OpenAI’s GPT-5 introduces a modular architecture that blends fast-response models with deep reasoning capabilities, improving coding accuracy, multimodal performance and enterprise efficiency. It features expanded context windows, dynamic model routing and safer completions, making it a strong candidate for tasks like document automation, customer service and software development. However, GPT-5 is not a breakthrough in artificial general intelligence (AGI) and still requires strong governance, integration planning and human oversight.

Leaders should treat GPT-5 as a strategic upgrade — not a silver bullet. It’s best deployed in controlled environments to benchmark performance and assess ROI. Organizations must update governance frameworks to reflect GPT-5’s new behaviors and optimize cost-performance by experimenting with model sizing, reasoning parameters and caching strategies.

July 2025

What are the benefits and risks for organizations pursuing an agentic AI strategy?

While agentic AI can provide significant advantages in efficiency, decision making and innovation, organizations must also be aware of the associated risks, particularly regarding data quality, security and employee acceptance. A balanced approach that includes robust AI governance, risk management and employee engagement is crucial for successfully leveraging agentic AI technologies.

The top benefits of pursuing an agentic AI strategy include:

  • Faster and more informed decision making: Agentic AI can autonomously analyze complex datasets, identify patterns and make choices.

  • Increased efficiency: By automating routine tasks and workflows, such as processes in logistics, customer service and supply chain management, agentic AI can significantly reduce operational costs and improve productivity. 

  • Scalability: Agentic AI systems can handle increased workloads without a proportional increase in human resources.

  • Upskilling the workforce: Agentic AI can empower employees by enabling them to manage complex processes through natural language interfaces.

  • Improved customer experience: AI agents can provide personalized interactions and support and enact tailored marketing strategies.

  • Innovation and competitive advantage: By leveraging agentic AI, organizations can adapt to market changes quickly and maintain a competitive edge in their industries.

The major risks of pursuing an agentic AI strategy include:

  • Data quality and integrity issues: Poor data quality can create inaccurate outputs and decision-making errors, which can have significant operational impacts.

  • Lack of human oversight: This can lead to uncontrolled actions and compliance issues.

  • Integration complexity: Integrating agentic AI into existing systems may require significant and disruptive changes to infrastructure and workflows.

  • Security and privacy risks: New security vulnerabilities include data breaches and unauthorized access to sensitive information. 

  • Regulatory and compliance challenges: Organizations must navigate a complex landscape of regulations regarding AI use, data privacy and ethical considerations. Noncompliance can lead to legal repercussions and reputational damage.

  • Resistance to change: Employees who fear job displacement, or lack understanding of the technology, may be hesitant to adopt agentic AI, hindering successful implementation and usage.

What are the most common use cases for domain-specific language models (DSLMs)?

Organizations across industries are increasingly adopting domain-specific language models (DSLMs) to provide tailored solutions that address specific business needs. Common use cases include:

  • Writing and text generation: Sixty-nine percent of tech providers are developing DSLM solutions to create blog posts, product descriptions, legal documents and technical manuals.

  • Knowledge management: DSLMs enhance knowledge retrieval and facilitate efficient question-and-answer sessions within specific domains, improving collaboration and decision-making processes.

  • Conversational AI: DSLMs empower chatbots and virtual assistants to provide contextually relevant responses, enabling better customer interactions and support.

  • Data analysis and insights: DSLMs can analyze domain-specific data to provide more relevant and accurate insights than those generated by general-purpose models. This includes applications in finance for fraud detection and in healthcare for patient data analysis.

  • Compliance and regulatory tasks: In highly regulated industries such as finance and healthcare, DSLMs help automate compliance-related tasks, ensuring that organizations adhere to legal standards while improving operational efficiency. 

  • Semantic tagging and classification: DSLMs enhance data organization through semantic tagging, which improves the accuracy of information retrieval and categorization.

  • Personalized recommendations: In retail, DSLMs can provide personalized product recommendations based on customer behavior and preferences.

  • Translation and summarization: These capabilities are particularly useful in industries that require multilingual support and quick information dissemination.

  • Healthcare applications: DSLMs can be used to generate clinical notes and assist in diagnostics, which can significantly improve patient care and operational efficiency. 

How can I use technology to combat disinformation risk?

Consider these key technologies to establish disinformation security:

  • Deepfake detection: Effective deepfake detection requires a comprehensive approach that combines digital forensics, AI techniques and continuous updates to adapt to evolving threats.

  • Impersonation prevention: These technologies continuously evaluate user behavior across devices and interactions to validate authentic actions by incorporating continuous adaptive trust models. This mitigates risks associated with account takeovers and impersonation attacks.

  • Reputation protection: Narrative intelligence and media monitoring tools manage brand reputation by identifying harmful narratives and tracking their spread across various platforms. 

  • Content verification technologies: These technologies assess the authenticity of content through fact-checking and verification processes. They can help organizations discern legitimate information from disinformation, especially in real-time communications.

  • Digital risk protection services: Monitoring external digital landscapes, such as social media and news platforms, allows these tools to identify and mitigate disinformation threats before they escalate.

  • AI-powered narrative monitoring: By analyzing narratives that spread through social media and other channels, this technology helps organizations understand public sentiment, identify potential disinformation threats and track the evolution of harmful narratives.

  • Identity verification solutions: Often with biometric authentication methods enhanced with liveness detection capabilities, these solutions ensure that the identities of users are accurately verified, especially in contexts where deepfakes and impersonations are prevalent. 

  • Collaborative learning models: Federated learning approaches can train models on decentralized data without compromising privacy, allowing organizations to improve their disinformation detection capabilities while maintaining data confidentiality.

Why is photonic high-speed AI important for the future of artificial intelligence and computing?

By Alizeh Khare

Photonic high-speed AI is poised to play a transformative role in the future of artificial intelligence and computing by enabling faster, more efficient and scalable data processing capabilities, which are essential for meeting the growing demands of AI applications. Its significance stems from:

  • High-speed data transfer: Photonic interconnects use light (photons) for data transmission, enabling data rates of up to 4 terabits per second (Tbps). This capability is crucial for handling the massive data requirements of AI workloads, particularly in data centers where rapid processing and communication between components are essential.

  • Energy efficiency: Photonic systems are designed to consume significantly less power than traditional electrical interconnects. This reduction in energy consumption is vital as the demand for computational power increases, especially with the rise of AI applications that require extensive processing capabilities.

  • Scalability: The integration of photonic interconnects into computing architectures allows for greater scalability of compute clusters. As AI models grow in complexity and size, the ability to efficiently scale up resources without a corresponding increase in power consumption becomes increasingly important.

  • Reduced latency: Photonic technologies can significantly lower latency in data transmission, which is critical for real-time AI applications. This improvement in speed and responsiveness can enhance the performance of AI systems, particularly in applications that require immediate data processing and decision making.

As photonic technology matures, it is expected to drive further innovations in AI and computing. The development of industry standards and reduction in manufacturing costs will facilitate broader adoption, making photonic interconnects a foundational technology for future AI systems.

To see previously featured answers to client questions on emerging technologies, visit the archive.

