New research reveals why some workers don’t trust agentic AI

Credibility and trust can help determine whether a new and rapidly improving technology is accepted and succeeds. If people can’t trust the technology to get the job done, they’re less likely to use it or recommend it to others.

The latest example is agentic AI, a more advanced form of AI.

“The way humans interact and collaborate with AI is taking a dramatic step forward with agentic AI… With their advanced reasoning and execution capabilities, agentic AI systems promise to transform many aspects of human-machine collaboration,” Harvard Business Review explained.

But new research underscores why some workers are reluctant to use AI or its latest iteration:

  • 33% of respondents were concerned about the quality of work produced by AI
  • 32% said AI lacks human intuition and emotional intelligence
  • 30% did not trust the accuracy of the AI-generated answers they received

That’s according to the results of a new study conducted by YouGov for Pegasystems (NASDAQ: PEGA), a software development, marketing and licensing company. In November 2024, YouGov surveyed more than 2,100 working adults in the US and UK who use digital devices for their jobs.

More than half of respondents – 58% – were already using AI agents for various tasks.

“This research highlights that many still have reservations [about the technology] and it’s up to enterprise leaders to strategically and thoughtfully incorporate technology to help ensure adoption,” said Don Schuerman, Pegasystems’ chief technology officer, in a statement provided by the company’s public relations representative.

Delving deeper into the concerns of the employees surveyed:

  • 47% believed that AI lacks human intuition and emotional intelligence
  • 40% were uncomfortable presenting AI-generated work
  • 34% were concerned that the work produced by AI is not as good as their own, a concern linked to the accuracy and reliability of the technology

There are other risks for companies relying on AI, including creating more work and stress for employees and being overly confident about how ready their organizations are to embrace the technology.

AI experts and CEOs shared their concerns and reservations about using agentic AI-enabled programs and tools.

Adopt AI with caution

“Until AI guarantees ethical decision-making and 100% compliance, every business must adopt it carefully, audit it rigorously and override it as necessary. We can’t just let AI run our businesses for us, but we can certainly use it — with strict monitoring — to augment our efforts,” attorney Jonathan Feniak advised via email.

Be wary of free tools

“Executives should approach AI solutions with caution, especially free or mass-market tools. The old saying applies: ‘If the product is free, you are the product,’” warned Elle Ferrell-Kingsley, an ethicist and AI specialist, via email.

“Public AI tools often come with greater risks, especially in relation to sensitive company data. It is generally advised not to enter any confidential information (or customer information) into these systems. However, it can be easy for employees to think of it as a quick fix or a cool new tool to improve their workflow,” she warned.

Understand the limitations of AI

“Confidence in AI is not just about the technology itself – it’s also about understanding its limitations. For example, while AI can simplify data analysis and improve compliance efforts, it is still essential to have human oversight to interpret insights and ensure alignment with broader business goals,” noted Ryan Niddel, CEO of MIT45, via email.

“I see AI as a powerful opportunity, but it works best when paired with human intuition and leadership,” he advised.

Prioritize ethical data use

“One of the biggest concerns I see with AI tools is the potential for over-reliance on algorithms without fully understanding the datasets that drive them. In my opinion, companies should prioritize tools that demonstrate the ethical use of data and deliver actionable insights instead of generic results,” Niddel concluded.

Set limits

“We use AI in various ways in our PR agency to speed up the creation of media lists and perhaps test variations on subject lines. So on a tactical level, it makes a lot of sense to use AI in your work,” said Shannon Blood, CEO of InFlux Technologies.

“Where AI is powering an application that you’re using at a high level to build your business (say, to speed up the sales process), there’s no need to give that software an executive-approved green light to do anything and everything. Instead, it’s about testing what works, what’s bringing in new business, or what’s maybe even turning off customers. The pros and cons have to be weighed, as with any technological tool,” advised Blood.

Have AI policies

“If an executive trusts AI and AI-powered tools, they need to create processes and policies. And they should expect that even if they don’t trust the technology or the tools… others on their team—both leaders and subordinates—will trust AI to varying degrees,” David Radin, author of Time Management in the Age of AI, advised via email.

“A properly constructed AI policy and process must include organizational expectations as well as identify the people who are empowered to implement and enforce the policy. It should address issues ranging from copyright infringement (in both directions) to potential loss or damage to others, with a methodology for addressing each. It should also include an ongoing process of communication and training to ensure that all members of the executive team and staff are aware of the policies and how to implement them,” advised Radin.

A balancing act

All technologies come with some level of potential risk and benefit. Business leaders who are too quick to embrace AI and agentic AI before they and their employees are fully ready may create more problems than they solve.
