The new AI tools are remarkably good at many tasks, and getting better, but one area where they fall badly short is in expressing their level of confidence in the answers they provide. The great mathematician Terence Tao made this point recently in an interview with Matteo Wong about the use of AI tools to suggest answers for unsolved problems in mathematics (“The Edge of Mathematics,” Atlantic Online, February 24, 2026). Tao said:
One very basic thing that would help the math community: When an AI gives you an answer to a question, usually it does not give you any good indication of how confident it is in this answer, or it will always say, I’m completely certain that this is true. Humans do this. Whether they are confident in something or whether they are not is very important information, and it’s okay to tentatively propose something which you’re not sure about, but it’s important to flag that you’re uncertain about it. But AI tools do not rate their own confidence accurately. And this lowers their usefulness. We would appreciate more honest AIs.
Additionally, a lot of AI companies have this obsession with push-of-a-button, completely autonomous workflows where you give your task to the AI, and then you just go have a coffee, and you come back and the problem is solved. That’s actually not ideal. With difficult problems, you really want a conversation between humans and AI. And the AI companies are not really facilitating that. If we can work with at least some tech companies that are willing to develop more interactive platforms, that will be much more readily embraced by the people.
Of course, Tao’s points are interrelated. You can only hand off a job entirely to an AI tool if you are 100% confident that the result will be correct and appropriate. Handing off arithmetic to a spreadsheet program is fine. Handing off the strategic priorities of a company to an AI program is something else.
Moreover, it has seemed to me that the focus of AI companies on completely autonomous workflows is not only unrealistic (because “completely autonomous” means assuming perfect trustworthiness for the AI), but also a public relations disaster. Instead of emphasizing how workers can use AI tools to improve productivity, the companies tend to emphasize how AI can replace workers. As one example, a company called Genspark ran an ad during the Super Bowl about how workers could all take Monday off after the big game, and just let the AI tool do their jobs that day. Cute! But of course, the obvious question is why any company would want to hire workers to return to their easily-replaced jobs on Tuesday, Wednesday, Thursday, or Friday.
David Autor had a pleasantly acerbic comment on this “replace all the workers” mindset in an interview published earlier this year. Sara Frueh was interviewing Autor on the subject: “How Is AI Shaping the Future of Work?” (Issues in Science and Technology, January 6, 2026). Frueh says: “I was looking at OpenAI’s mission statement, which is, ‘To ensure that artificial general intelligence, by which we mean highly autonomous systems that outperform humans at most economically valuable work, benefits all of humanity.’ Is that mission statement made up of two mutually exclusive goals? … Can you build something that takes away the jobs of most humans—if that’s possible—and still benefit all humans?” Autor reacted:
Or even should that be your goal? Actually, I don’t even like that definition of artificial general intelligence. The goal of machines should not be to just do what people do slightly better. Our tools are valuable to us because they allow us to do things we can’t do. So many technologies enable capabilities that we simply don’t possess. Powered flight didn’t automate the way we used to fly. We just didn’t fly.
And that’s true for most of modern technologies. They’re important and consequential, not because they do the same old thing better, cheaper, faster, but because they enable us to do things we couldn’t do, telecommunication, flight, fighting disease with penicillin, seeing the insides of the interiors of subatomic particles, designing, computing things that we could never in a lifetime compute.
So I actually, I find their mission statement an amazing bait-and-switch. Artificial intelligence, by which we mean a machine that outcompetes humans in every domain. I’m reminded of this tweet I once saw that said, ‘We’re a modest company with modest goals. One, sell a quality product at a fair price. Two, drain the world’s ocean so we can find and kill God.’ And that’s what I feel like when I read the OpenAI mission statement.
An honest AI can be wrong, but it would not be declaratively and definitively wrong. It would convey its level of uncertainty and try to point out where additional input and feedback would be useful in moving toward a stronger conclusion. It would replace some of the tasks that workers currently do, but it would also expand the tasks that workers are able to do.