"No it isn't [explains a commenter]. The current models don't have any general intelligence, or actual understanding of the data they are manipulating. They are statistical models that note that certain words are statistically associated with other words in their training data (the text contents of the internet), and estimate the probability distribution of what words are likely to come next.
"So it might note that when the word 'Adam' occurs next to the word 'Smith' in a sentence, words like 'markets' and 'economics' and 'trade' will be especially common. It's a lot more sophisticated than that in the s'rt of patterns and relationships it can recognise, but that's basically all it's doing. It doesn't know what 'Adam' and 'Smith" denote. It doesn't know what a market is. It just has a big bucket of related phrases and combinations that people have put together in the past, and it picks them randomly from the bucket and pastes them together.
"If lots of people have written about a topic on the internet, it has a bigger bucket to pick from, and can generate something that is at least originally phrased, repeating the information in those phrases. If it's only been discussed once or twice on the internet, it will reproduce what they said verbatim. And if it hasn't been discussed before, it will make something up that seems plausible. So if you ask it for a biography of Adam Smith, it has lots to choose from. If you ask for a biography of Joe Random, it will select randomly from all the biographies and news articles it has read, which is why people have put their own names in and been shocked to find it falsely accusing them of crimes and scandals.
"If it doesn't know, it won't say 'I don't know,' because fundamentally it never knows. It has no concept of truth. These are not facts about the world. They are strings of meaningless symbols that it is looking for patterns in. So it can never solve a problem that hasn't already been solved and written about on the internet. It doesn't know anything that isn't on the internet. You either get the product of human intelligence regurgitated, or you get sentences picked and put together at random.
"There are some very basic new word-smithing capabilities that it may be able to help with. It can generate summaries and paraphrases, and restructure information scattered across multiple sources to pull out the bits relevant to a particular aspect. It might be usable as a first-pass helpline assistant to answer questions from people who haven't read the documentation. But it can go no further, because it is a statistical model of the text on the internet, not any sort of general intelligence. We still have no idea how to do that.
"The [AI] market bubble is a con. People are pouring billions into it, and they're not going to get it back. Most of the new AI companies will go bust.
"That said, it's their own money to lose, and I'm very much in favour of deregulating it and letting innovation try. You never know. Somebody might come up with an actual advance in the process of all the messing around. But I will note in passing that the main obstacle to doing it in the UK is energy prices — it uses vast amounts of electricity to do the training — so if you want to do it [in the UK], the best thing you could do would be to abandon Net Zero.
"And that's not likely to happen, so as usual, it's politicians talking about how they're going to solve all our problems ('Growth!') while misguidedly doing everything in their power to prevent that."
~ commenter NiV arguing against the post 'AI, not Tariffs, is the Future of U.S. Economic Dominance'
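As an editorial aside, NiV's description of a model that "estimate[s] the probability distribution of what words are likely to come next" can be made concrete with a toy word-bigram counter. The sketch below only illustrates the statistical idea the commenter invokes: real LLMs are large transformer networks trained over subword tokens with learned parameters, not lookup tables of word pairs, and the tiny corpus and function names here are invented for the example.

```python
from collections import Counter, defaultdict
import random

# Hypothetical toy corpus; a real model trains on vastly more text.
corpus = (
    "adam smith wrote about markets and trade . "
    "adam smith studied economics and markets . "
    "free markets shape trade and economics ."
).split()

# Count how often each word follows each preceding word.
follows = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    follows[prev][nxt] += 1

def next_word_distribution(word):
    """Relative frequency of words seen after `word` in the corpus."""
    counts = follows[word]
    total = sum(counts.values())
    return {w: c / total for w, c in counts.items()}

def sample_continuation(word, length=5):
    """Pick following words at random, weighted by those frequencies."""
    out = [word]
    for _ in range(length):
        counts = follows[out[-1]]
        if not counts:
            break
        words, weights = zip(*counts.items())
        out.append(random.choices(words, weights=weights)[0])
    return " ".join(out)

print(next_word_distribution("smith"))  # e.g. {'wrote': 0.5, 'studied': 0.5}
print(sample_continuation("adam"))      # e.g. "adam smith studied economics and markets"
```

Scaling this "count, then sample" idea up to billions of learned parameters and long contexts is, on the commenter's account, all an LLM is doing; whether that amounts to any real understanding is exactly what the quoted passages dispute.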
"LLMs [Large Language Models] are regurgitation-with-minor-changes machines. When a particular prompt is close enough to a bunch of prior data points, LLMs do well; when they subtly differ from prior cases in their databases they often fail. ...
"As ... Brad DeLong just put it in a blunt essay, 'if your large language model reminds you of a brain, it's because you're projecting—not because it's thinking. It's not reasoning, it's interpolation. And anthropomorphising the algorithm doesn't make it smarter—it makes you dumber.'"
~ Gary Marcus from his post 'OpenAI's o3 and Tyler Cowen's Misguided AGI Fantasy'
"OpenAI launched its latest AI reasoning models, dubbed o3 and o4-mini, last week. According to the Sam Altman-led company, the new models outperform their predecessors and 'excel at solving complex math, coding, and scientific challenges while demonstrating strong visual perception and analysis.'
"But there's one extremely important area where o3 and o4-mini appear to instead be taking a major step back: they tend to make things up — or 'hallucinate' — substantially more than those earlier versions ...
"According to OpenAI's own internal testing, o3 and o4-mini tend to hallucinate more than older models, including o1, o1-mini, and even o3-mini, which was released in late January. Worse yet, the firm doesn't appear to fully understand why. ...
"Its o3 model scored a hallucination rate of 33 percent on the company's in-house accuracy benchmark, dubbed PersonQA. That's roughly double the rate compared to the company's preceding reasoning models.
"Its o4-mini scored an abysmal hallucination rate of 48 percent, part of which could be due to it being a smaller model that has 'less world knowledge' and therefore tends to 'hallucinate more,' according to OpenAI."
~ Victor Tangerman from his article 'OpenAI's Hot New AI Has an Embarrassing New Problem'
"The greatest achievement of AI might be in the irony: by oppositional example, it will teach us to love human creativity more than ever. It turns out that human intelligence, while deeply fallible, offers something AI cannot: Sincerity, creativity, and apparently (and for now) a greater degree of old-fashioned accuracy."
~ Jeffrey Tucker from his post 'How Much Can We Really Trust AI?'