It’s time to stop taking any CEO at their word.
Edit: scratch that, the time to stop taking any CEO at their word was 100 years ago.
We should never have taken them at their word.
The best time was 100 years ago. The second-best time is now.
Hear hear!
I think the quote that “power corrupts, and absolute power corrupts absolutely” is a bit older, and was said about all the lessons of history before it.
Somehow humanity doesn’t like the wisest rules out there, and prefers to read Palahniuk and talk about post-modernism instead of looking at the root.
It’s time to take CEO’s money away!
Name a CEO tech bro that isn’t a raving douche.
Does Woz count? He’s CEO of the Silicon Valley Comic Con (or he used to be anyway).
deleted by creator
That’s a very sharp prediction, thanks. I will run that by some people.
Yeah, this might actually not be that far from reality. Computer vision already did a large amount of the lifting; with the massive push towards AI, AI will take the rest of us plebeians’ healthcare.
Considering how fractured medical billing is these days, often the techs contracted by your in-network doctor’s office are actually out-of-network.
Isn’t medical billing fun?
From the way claims get sent back during billing, I became suspicious that a lot of them are getting read by machine (and very poorly) during the first round of mail. So don’t worry, medical billing will get even more fun thanks to AI.
Surprise medical billing has mostly been nerfed by the No Surprises Act here in the US. Since 2022, so long as you went to an in-network (INN) provider, you can’t be charged out-of-network (OON) pricing for any OON services you may have encountered during that visit.
Source: https://www.health.state.mn.us/facilities/insurance/managedcare/faq/nosurprisesact.html
Also, I work in insurance as a software engineer
It’s been a while since I’ve had supplementary procedures, so that’s good to know.
Now I just have to wait for all nine (and a half) bills after emergency services.
When that major drama unfolded with him getting booted then re-hired, it was super fucking obvious that it was all about the money, the data, and the salesmanship. He is nothing but a fucking tech-bro. Part Theranos, part Musk, part SBF, part (whatever that pharma asshat was), and all fucking douchebag.
AI is fucking snake oil and an excuse to scrape every bit of data like it’s collecting every skin cell dropping off of you.
I’d agree with the first part, but to say all AI is snake oil is just untrue and out of touch. There are a lot of companies that slap “AI” on literally anything, and I can see how that is snake oil.
But real, innovative AI, everything from protein folding to robotics, is here to stay, good or bad. It’s already too valuable for governments to ignore. And AI is improving at a rate that I think most are underestimating (faster than Moore’s law).
I think part of the difficulty with these discussions is that people mean all sorts of different things by “AI”. Much of the current usage is that AI = LLMs, which changes the debate quite a lot.
No doubt LLMs are not the be-all and end-all. That said, especially after seeing what the next-gen ‘thinking models’ like o1 from “ClosedAI” OpenAI can do, even LLMs are going to get absurdly good. And they are getting faster and cheaper at a rate faster than my best optimistic guess from 2 years ago; hell, even from 6 months ago.
Even if all progress stopped tomorrow on the software side, the benefits from purpose-built silicon would make them even cheaper and faster. And that purpose-built hardware is coming very soon.
Open models are about 4-6 months behind in quality, but probably a lot closer (if not ahead) for small ~7B models that can be run locally on low- to mid-end consumer hardware.
I don’t doubt they’ll get faster. What I wonder is whether they’ll ever stop being so inaccurate. I feel like that’s a structural feature of the model.
May I ask how you’ve used LLMs so far? Because I hear that type of complaint from a lot of people who have tried to use them mainly to get answers to things, or maybe more broadly to replace their search engine, which is not what they’re best suited for, in my opinion.
What are they best suited for?
Personally, I’ve found that LLMs are best as discussion partners, to put it in the broadest terms possible. They do well for things you would use a human discussion partner for IRL.
“I’ve written this thing. Criticize it as if you were the recipient/judge of that thing. How could it be improved?” (Then address its criticisms in your thing… it’s surprisingly good at revealing ways to make your “thing” better, in my experience.)
“I have this personal problem.” (Tell it to keep responses short. Have a natural conversation with it. This is best done spoken out loud if you are using ChatGPT; it prevents you from overthinking responses and forces you to keep the conversation moving. It takes fifteen minutes or more, but you will end up with some good advice related to your situation nearly every time. I’ve used this to work out several things internally much better than just thinking on my own. A therapist would be better, but this is surprisingly good.)
I’ve also had it be useful, for various reasons, to tell it to play a character as I describe, and then speak to the character in a pretend scenario to work something out. Use your imagination for how this might be helpful to you. In this case, tell it not to ask you so many questions, and to only ask questions when the character would truly want to ask one. That helps keep it more normal; otherwise (in the case of ChatGPT, which I’m most familiar with) it will always end every response with a question. Often that’s useful, like in the previous example, but in this case it is not.
etc.
For anything but criticism of something written, I find that the “spoken conversation” features are most useful. I use it a lot in the car during my commute.
For what it’s worth, in case this makes it sound like I’m a writer and my examples are only writing-related, I’m actually not a writer. I’m a software engineer. The first example can apply to writing an application or a proposal or whatever. Second is basically just therapy. Third is more abstract, and often about indirect self-improvement. There are plenty more things that are good for discussion partners, though. I’m sure anyone reading can come up with a few themselves.
It’s not snake oil. It’s a way to brute-force some problems that it wasn’t possible to brute-force before.
And also it’s very useful for mass surveillance and war.
The techbros who think that with sufficiently advanced AI we could solve climate change are so stupid. Like, we might not have a perfect solution, but we have ideas on how to start making things better (less car-centric cities, less meat and animal products, more investment in public transport and solar), and they get absolutely ignored. Why would it be different when an AI gives the solution? Unless they want the “eat fat-free food and you will be thin” solution to climate change, where we change absolutely nothing about our current situation but it’s magically ecological.
It’s beyond time to stop believing and parroting that whatever would make your source the most money is literally true without verifying any of it.
Who is Sam Altman?
(This is a rhetorical question)
He is the cousin of Sam Mainman, who is an actual human being.
Don’t forget the lesser known Sam Shiftman and Sam Ctrlman.
I’m just glad Sam Delman is in jail for killing those processes.
I don’t trust any of these types. If you haven’t noticed by now, morally decent people are never in charge of any large organization. The type of personality suited to clawing their way to the top usually lacks any real moral compass beyond advancing their pursuit of power.
The time was last year, but better late than never.
Anyone have a non-paywalled link?
This trick should come in handy, pal:
12ft.io/https://www.theatlantic.com/technology/archive/2024/10/sam-altman-mythmaking/680152/
Ironically, your link is broken on Voyager because it doesn’t treat anything before the https as a link. It just leads straight to the normal paywalled site.
You need to embed the link for it to actually work. And even then, it may not work on this comment because it’ll try to route to my home instance due to having a Lemmy.world link for my image.
The best time to stop taking Altman seriously was ten years ago.
The second best time is now.
People did that? Lol
He’s the Musk in the making.
AI skeptics don’t live in the real world.
You shouldn’t judge people on appearances.
… but, I mean, come OOON… he looks like a reanimated Madame Tussaud’s sculpture. Like someone said, “Give me a Wish.com Mark Zuckerberg… but not so vivacious this time.” And he’s the CEO of an AI-related company.