The thing is, LLMs can be used for something like this, but it’s like asking a stranger to write a letter to a loved one while giving them only the vaguest information about that person or yourself: you’re going to end up with a really generic letter.
…but to give it the amount of info and detail it would need, you would probably end up writing 3/4 of the letter yourself, which defeats the purpose of being able to completely ignore and write off those you care about!
No clue? Somewhere between a few years (assuming some unexpected breakthrough) and many decades? The consensus among experts (which I am not) seems to be somewhere in the 2030s/40s for AGI. I’m guessing accuracy will improve on a topic-by-topic basis; LLMs might never even get there, or only for things they’ve been heavily trained on. If predictive text doesn’t do it, then I would be betting on whatever Yann LeCun is working on.
GPT-2 came out a little more than 5 years ago; it answered 0% of questions accurately and couldn’t string a sentence together.
GPT-3 came out a little less than 4 years ago and was kind of a neat party trick, but I’m pretty sure it answered ~0% of programming questions correctly.
GPT-4 came out a little less than 2 years ago and can answer 48% of programming questions accurately.
I’m not talking about morality, or creativity, or good/bad for humanity, but if you don’t see a trajectory here, I don’t know what to tell you.
I’m not making any moral judgements one way or the other, but I have a strong feeling kids today are just going to grow up with this stuff and it will be normalized. We are going to be the weird old prudes with a quaint sense of personal identity tied to our physical appearance and voice, while they go around looking like SpongeBob and talking like The Fonz.
The hype around AI language models has companies scrambling to hire prompt engineers to improve their AI queries and create new products.
Who is hiring all these prompt engineers? Who is ‘scrambling’ to find people for this? The jobs I do see have basically swapped “developer” for “prompt engineer” while keeping the same job requirements.
On a technical level, it’s hard to say why Meteor Lake has regressed in this test, but the CPU’s performance characteristics elsewhere imply that Intel simply might not have cared as much about IPC. Meteor Lake is primarily designed to excel in AI applications and comes with the company’s most powerful integrated graphics yet. It also features Foveros technology and multiple tiles manufactured on different processes. So while Intel doesn’t beat AMD or Apple with Meteor Lake in IPC measurements, there’s a lot more going on under the hood.
Crazy thing I’ve been noticing more and more: when I search “[thing I want to know] reddit”, there are always one or two Reddit comments in the top results, usually much more recent than the others, very clearly shilling a product. Sometimes it’s an edit made purely to plug a product the user just thinks is really great, linking out to an affiliate-link-ridden site.
So a Board member wrote a paper about prioritizing safety over profit in AI development. Sam Altman did not take kindly to this and started pushing to fire her (to which end he may or may not have lied to other Board members to split them up). Sam got fired for trying to fire someone for putting safety over profit. Everything exploded, and now profit is firmly at the head of the table.
I like nothing about this version of events either.
“I don’t have any theories that make sense,” Paskalis says. “There is a revenue model in his head that eludes me.”
You don’t need a complex business model for this to make sense. The man has had “fuck you money” his entire life. Things are finally not going his way and he only has one way to respond… by saying “fuck you” to the people he doesn’t like.
Sutskever, who also co-founded OpenAI and leads its researchers, was instrumental in the ousting of Altman this week, according to multiple sources. His role in the coup suggests a power struggle between the research and product sides of the company, the sources say.
I know very little of the situation (as does everyone else not directly involved), but in my experience, when the people making a thing are saying one thing and the people selling it (who for some reason are running the show) insist everything is just fine, it means not-great things for the final product… which in this case is the creation of sentient artificial life with unknown future ramifications…