Artificial intelligence is worse than humans in every way at summarising documents and might actually create additional work for people, a government trial of the technology has found.
Amazon conducted the test earlier this year for Australia’s corporate regulator the Securities and Investments Commission (ASIC) using submissions made to an inquiry. The outcome of the trial was revealed in an answer to a questions on notice at the Senate select committee on adopting artificial intelligence.
The test involved testing generative AI models before selecting one to ingest five submissions from a parliamentary inquiry into audit and consultancy firms. The most promising model, Meta’s open source model Llama2-70B, was prompted to summarise the submissions with a focus on ASIC mentions, recommendations, references to more regulation, and to include the page references and context.
Ten ASIC staff, of varying levels of seniority, were also given the same task with similar prompts. Then, a group of reviewers blindly assessed the summaries produced by both humans and AI for coherency, length, ASIC references, regulation references and for identifying recommendations. They were unaware that this exercise involved AI at all.
These reviewers overwhelmingly found that the human summaries beat out their AI competitors on every criteria and on every submission, scoring an 81% on an internal rubric compared with the machine’s 47%.
This is a most excellent place for technology news and articles.
Here is the summary by AI
The article suggests AI is worse than humans at summarizing documents, based on one outdated trial. But really, Crikey is just feeling threatened. AI is evolving fast, and its ability to handle vast amounts of data without the human biases Crikey often exhibits is undeniable. While they nitpick AI’s limitations, they ignore how much better it will get—probably even better than their reporters. Maybe they’re just jealous that AI could do in seconds what takes humans hours!
LLMs == AGI was and continues to be a massive lie perpetuated by tech companies and investors that people still have not woken up to.
The fact that we even had to start using the term AGI when in common parlance AI always meant the same up until recently, shows how goal posts are being moved.
What people mean by AI has been changing for as long as the term has been used. When I was studying CS in the 80s, people said the holy grail was giving a computer printed English text and having it read it aloud. It wasn’t much later that OCR and text to speech software was commonplace.
Generally, when people say AI, they mean a computer doing something that normally takes a human, and that bar goes up all the time.
It might also be a question of how we define “intelligence”. We really don’t have a clear definition and it’s a moving target as we find out more
From my experience that was the case. However it was with gpt 3, and I am a sample of 1.
Not a stock market person or anything at all … but NVIDIA’s stock has been oscillating since July and has been falling for about a 2 weeks (see Yahoo finance).
What are the chances that this is the investors getting cold feet about the AI hype? There were open reports from some major banks/investors about a month or so ago raising questions about the business models (right?). I’ve seen a business/analysis report on AI, despite trying to trumpet it, actually contain data on growing uncertainties about its capability from those actually trying to implement, deploy and us it.
I’d wager that the situation right now is full a lot of tension with plenty of conflicting opinions from different groups of people, almost none of which actually knowing much about generative-AI/LLMs and all having different and competing stakes and interests.
NVIDIA has been having a lot of problems with their 13th/14th gen CPU’s degrading. They are also embroiled in an anti-trust investigation. That coupled with the “growing pains of generative AI” has caused them a lot of problems where 2 months ago they were one of the world’s most valuable companies.
Some of it is likely the die-off of the AI hype but their problems are farther reaching than the sudden AI boom.
Thanks!
Investors have proven over and over they’re credulous idiots who understand sweet fuck-all about technology and will throw money at whatever’s in their face. Creepy Sam and the Microshits will trot out some more useless garbage and prize a few more billion out of the market in just a little while.
Meanwhile, here’s an excerpt of a response from Claude Opus on me tasking it to evaluate intertextuality between the Gospel of Matthew and Thomas from the perspective of entropy reduction with redactional efforts due to human difficulty at randomness (this doesn’t exist in scholarship outside of a single Reddit comment I made years ago in /r/AcademicBiblical lacking specific details) on page 300 of a chat about completely different topics:
Yeah, sure, humans would be so much better at this level of analysis within around 30 seconds. (It’s also worth noting that Claude 3 Opus doesn’t have the full context of the Gospel of Thomas accessible to it, so it needs to try to reason through entropic differences primarily based on records relating to intertextual overlaps that have been widely discussed in consensus literature and are thus accessible).
As if capitalists have ever cared about that…
No shit.
Intelligence vs non intelligence: intelligence is superior… Who would have thunk it lol
I would expect “faster” to be a way
or cheaper
Are we talking 10% worse and 95% cheaper? Or 50% worse and 10% cheaper? Or 90% worse and 95% cheaper?
Because that last one is good enough for fiscal conservatives. Hell, the second one is good enough for fiscal conservatives.
In every way? How about speed? The goal is to save human time so if AI is faster and the summary is good enough, then it is a success. I guarantee it is faster. Much faster.
A summary is useless if it’s not accurate.
“AI” or Large Language Models, are designed by definition to give averaged answers. So they’re not just averaging on the text you give them, they’re averaging it with all general text of the training model, to create a probabilistically average result based on all of it.
There’s no way around this, because it’s simply how such systems work. It’s their lifeblood to produce a “best guess” across large amounts of training data …which is done by averaging out all that language. A large amount of language… Hence the name.
Well, not every metric. I bet the computers generated them way faster, lol. :P
Right? That’s the entire point.