Who could have predicted writing bullshit-y papers for kids in school wasn’t a billion dollar business?
That’s how this works: blow through VC money trying to “strike gold”, fail, change the model to become profitable, move on to the next scam.
if A.I. dies out because capitalism I will wheeze
The current breed of generative “AI” won’t ‘die out’. It’s here to stay. We are just in the early Wild-West days of it, where everyone’s rushing to grab a piece of the pie, but the shine is starting to wear off and the hype is juuuuust past its peak.
What you’ll see soon is the “enshittification” of services like ChatGPT as the financial reckoning comes, startup variants shut down by the truckload, and the big names put more and more features behind paywalls. We’ve gone past the “just make it work” phase, now we are moving into the “just make it sustainable/profitable” phase.
In a few generations of chips, the silicon will have made progress in catching up with the compute workload, and cost per task will drop. That’s the innovation to watch out for now, who will de-throne Nvidia and its H100?
GPT already got way shittier from the version we all saw when it first came out to the heavily curated, walled garden version now in use
AI isn’t paying off if you’re too dumb to figure out how to use the many amazing tools that have come about.
These are the same kind of people who go, “We spent money on Timmy’s clothes for over two years and it’s not paying off.”
Bro, AI is an investment.
So far what I’ve seen from AI is that it lies and lies and lies. It lies about history. It lies about science. It lies about politics. It lies about case law. It lies about programming libraries. Maybe this will all be fixed some day, or maybe it will just get worse. Until then, the only thing I would trust it with is something for which there is no wrong answer.
I never ask it things I don’t know. I don’t think that’s really what it’s useful for. It’s really good at combining words, though. So it can write a better sentence than I could. Better in the sense that it’s easier for others to understand my thoughts if I feed them in as input. Since they were my thoughts originally, I can spot the bullshit pretty fast.
Do people really not understand that we are in the early stages of AI development? The first time most people were made aware of LLMs was, like, 6 months ago. What ChatGPT can do is impressive for a self-contained application, but it is far from mature enough to do the things people are complaining it can’t do.
The point the industry is trying to warn about is that this technology is past its infancy and moving into, from a human comparison standpoint, childhood or adolescence. But it iterates significantly faster than humans, so the time until it can do the type of things people are bitching about is years, not decades, away.
If you think businesses have sunk this much money and effort into AI and didn’t do a cost-benefit analysis that stretched out decades, you are being naive or disingenuous.
Great, now factor in the cost of data collection if not subsidizing usage that you are effectively getting free RLHF from…
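For what it’s worth, the “free RLHF” point is just that every thumbs-up/thumbs-down a free user clicks is a labeled preference pair the operator would otherwise have to pay annotators for. A minimal sketch of that collection step (the class and function names here are made up for illustration, not any real vendor’s API):

```python
from dataclasses import dataclass, field

@dataclass
class PreferencePair:
    """One unit of free human feedback: a prompt plus a chosen/rejected answer."""
    prompt: str
    chosen: str
    rejected: str

@dataclass
class FeedbackLog:
    pairs: list = field(default_factory=list)

    def record_vote(self, prompt, answer_a, answer_b, user_prefers_a):
        # Each user click becomes a training example for a reward model.
        chosen, rejected = (answer_a, answer_b) if user_prefers_a else (answer_b, answer_a)
        self.pairs.append(PreferencePair(prompt, chosen, rejected))

log = FeedbackLog()
log.record_vote("capital of France?", "Paris", "Lyon", user_prefers_a=True)
print(len(log.pairs))  # 1
```

Multiply that by millions of free users and the subsidized tier starts to look like a data-collection budget rather than pure loss.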
The one thing that’s been pretty much a guarantee over the last 6 months is that if there’s a mainstream article with ‘AI’ in the title, there’s going to be idiocy abound in the text of it.
AI is a tool to assist creators, not a full on replacement. Won’t be long until they start shoving ads into Bard and ChatGPT.
AI is a tool to plagiarize the work of creators. Fixed it
LOL OK it’s a super-powerful technology that will one day generate tons of labor very quickly, but none of that changes that in order to train it to be able to do that, you have to feed it the work of actual creators- and for any of that to be cost-feasible, the creators can’t be paid for their inputs.
The whole thing is predicated on unpaid labor, stolen property.
At what line does it become stolen property? There are plenty of tools which artists use today that use AI. Those AI tools they are using are more than likely trained on some creation without payment. It seems the data it’s using isn’t deemed important enough for that to be an issue. Google has likely scraped billions of images from the Internet for training on Google Lens and there was not as much of an uproar.
Honestly, I’m just curious if there is an ethical line and where people think it should be.
Well see, it shaves off a fraction of the creative work’s statistical signal, and deposits it into this vector database that we created…
Goood. Gooooooooooood.
Oh, surprise surprise, looks like generative AI isn’t going to fulfill Silicon Valley and Hollywood studios’ dream of replacing artists, writers, and programmers with computers to maximize value for the poor, poor shareholders. Oh no!
As I said here before, generative AIs are not the universal solution to everything that they are hyped up to be, but neither are they useless. At the end of the day, they are ultimately tools. Complex, powerful, useful tools, but tools nonetheless. A good artist can create better work faster with the help of a diffusion model, the same way LLM code generation can help a good programmer finish their project faster and better (I think). All of these AI models are trained on data from everyone on the Internet, which is why I think it’s reasonable that everyone should have access to these generative AI models, for the benefit of humanity and not for profit, and not just those who took other people’s work for free to train the models. In other words, these generative AI models should belong to everyone.
And herein lies my distaste for Sam Altman: OpenAI was founded as a nonprofit for the benefit of humanity, but at the first chance of money he immediately started venture capitalisting and put everything from GPT-2 onwards under lock and key for money. Now it looks like they are being crushed under the weight of their own operating costs while groups like Facebook and Stability catch up with actual open models. I will not be sad if "Open"AI fails.
(For as much crap as I give Zuck for the other awful things they do, I do admire their commitment to open source.)
I have to admit, playing with these generative models is pretty fun.
There was a smallish VFX group here that was attached to a volume screen company. They employed something like 20 people I think? So pretty small.
But the volume screen company instead employed a guy who could do an adequate enough job with generative tools, and the VFX group folded. The larger VFX company they partnered with had 200 employees; they recently cut to 50.
In my field, a team leader in 2018 could earn about 180,000 AUD p.a. Now those jobs are advertised at 130,000 AUD, because new models can do ~80% of the analysis at human accuracy.
AI is already folding companies and cutting jobs. It’s not in the news maybe, but as industries shift to compete with smaller firms leveraging AI it will cascade.
I had/have my own company, we were attached to Metropolis which unfortunately folded. I think that had a role to play in the job cuts as well. Luckily for me I wasn’t overleveraged, but I am packing up and changing careers for sure.
Generative AI can make each individual artist/writer/programmer much more efficient at their job, but if the shareholders and executives get their way and only big companies have access to this technology, this increased productivity will instead be used to reduce headcount and make the remaining people do more work on a tighter deadline, instead of helping everyone work less, do better work, and be happier.
This is the reason I think democratizing generative AI via local models is important, because as your example shows, it levels the playing field between small and big players, and helps people work less while making more cool stuff.
A big problem in Aus is the industry culture. They don’t care about using technology to improve results. They only care about cutting costs, even if the final product doesn’t meet the previous standard.
And we’ve seen that with VFX across the globe, the overall quality dropped drastically. Because studios play silly buggers to weasel out of paying VFX companies what they are due.
From what I hear, even DNEG is in trouble, and was even before the strike.
It’s a race to the bottom it seems.
My honest hope for the film industry is likely the same as yours. That we have smaller productions with access to better post due to improvements in AI-driven compositing software and so on.
But it’s likely that a role that was earning $$$ before is now devalued significantly. And while I’m an unabashed anti-capitalist, I think a lot of folks misunderstand what this sudden downward pressure on income can do. Cost of living increasing while wages shrink is an awful combination.
I’m 35, left a six-figure job, folding my company, and starting an electrician’s apprenticeship. That should give you an idea of where my views on AI sit. And of course this is as an Australian; we have a garbage white-collar work culture anyway.
I think there will be a net improvement. But I worry that others will fail to adapt quickly. Too many are writing off AI as this thing that already came and went, but the tools have just landed, and we don’t have workflows that correctly implement and leverage them yet.
It’s crazy that with current economic systems, tools that make people work more efficiently have such a negative impact on society.
A powerful tool maybe, but useless
If your drill needs a nuclear plant and a monthly subscription to drill a hole, it’s a shitty tool.
Going to have to disagree with you there. I’ve gotten plenty of use out of ChatGPT in multiple scenarios. I find it difficult to imagine what exactly you think is useless about it, because it seems so indispensable to me at this point.
Indispensable, nothing less. lmao
Have fun when they decide to multiply the price by 10 and you are too dependent to have an alternative, or when it becomes stupid or malevolent 👍
Sorry, I’m not sure I understand how that makes it useless. I get the feeling that you just want to feel smug, so if it makes you feel better go ahead, I guess.
Because it’s too fragile and not ready to be used at scale without causing massive damage.
Not useless for now (even if I’d like to know more about the domains where it’s really “indispensable”), but it becomes as useless as a drill with a dead battery the day they decide to cut it off.
I don’t find it future-proof, as impressive as some results are
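The “dead battery” worry is concrete if a product is a thin wrapper over one hosted API: when the vendor repriced or retires the endpoint, you have nothing. One common hedge is to code against an abstraction with a local fallback. A toy sketch of that pattern (all names hypothetical; the “models” here are just stubs):

```python
def hosted_model(prompt: str) -> str:
    # Stand-in for a call to a hosted API that may be cut off or repriced.
    raise ConnectionError("endpoint retired")

def local_model(prompt: str) -> str:
    # Stand-in for a smaller self-hosted model: worse output, but it's yours.
    return f"[local draft] {prompt}"

def generate(prompt: str) -> str:
    """Prefer the hosted model, but degrade gracefully instead of dying."""
    try:
        return hosted_model(prompt)
    except (ConnectionError, TimeoutError):
        return local_model(prompt)

print(generate("summarize this thread"))  # [local draft] summarize this thread
```

The point isn’t the three lines of code; it’s that a business whose only asset is `hosted_model` has no second branch to fall back on.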
You sound like the people who thought credit cards would never replace cash.
And you sound like the people who thought cryptos would replace credit cards ;)
Tough shit. It’s the Next Big Thing, so everyone has to have it. It doesn’t matter that it’s not useful for most use cases (yet).
So much fucking this.
Every cash grab right now around AI is just a frontend for the ChatGPT API. And every investor who throws money at them is the mark. And now they’re crying a river.
Woah, shiny bland images that are a regurgitation of stolen artwork!!!
No they ain’t doing shit, they just prompt
You’d think at this point that investors would wait for a thing to fill out the question mark second step in their business plan before investing in it, but you’d be way, way wrong.
Every new tech company comes to the investor panel with:
1) build expensive-to-run new tool and give it away to end users for free
2) ???
3) profit!
And somehow they keep falling for it.
Because people assume all these investors know what they are doing. They don’t. Now, some investors are good, but they usually don’t go for shit like this. A lot of investors are VCs, rich upper-class twits who can afford to lose money, pure and simple. It’s like a bunch of lotto winners telling people they know how to pick numbers: they make outside bets once in a while, get lucky, and have selection bias.
Plus, they have enough money to hedge their bets. For example, say you invest $1mil each in companies A, B, C, D, E, and F. All lose everything except A and B, which earn you $3mil each. You put in $6mil and got back $6mil. You broke even, tell people you knew what you were doing because you picked A and B, and conveniently never mention the rest. Then rich twits invest in what YOU invest in. So you invest in H, others invest in H because you did, and that drives up the value. Now magnify this by a lot of investors, hundreds of letters, and it’s all like some weird game of luck and timing.
But a snapshot in time leads to your step 2) “???” point. Many know this is a confidence game based on luck, charm, and timing. Some just stumble through it, and others are fleeced, but who cares? Daddy’s got money.
Money works different for rich people. It’s truly puzzling.
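The break-even arithmetic in the portfolio example above checks out; here is a quick sketch of it (the companies and dollar amounts are the hypothetical ones from the comment):

```python
# Hypothetical VC portfolio from the example: $1M into each of six companies.
investments = {name: 1_000_000 for name in "ABCDEF"}

# Everything goes to zero except A and B, which return $3M each.
returns = {"A": 3_000_000, "B": 3_000_000, "C": 0, "D": 0, "E": 0, "F": 0}

total_in = sum(investments.values())   # $6M deployed
total_out = sum(returns.values())      # $6M returned
print(total_in, total_out)             # 6000000 6000000 -> broke even overall

# But the story the investor tells only mentions the winners:
winners_only = {k: v / investments[k] for k, v in returns.items() if v > 0}
print(winners_only)                    # {'A': 3.0, 'B': 3.0} -> "I pick 3x companies!"
```

Report only `winners_only` and you look like a genius; report the whole dictionary and you made nothing.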
Have they not tried simply asking the AI how to make it profitable?