• 0 Posts
  • 32 Comments
Joined 1Y ago
Cake day: Jun 17, 2023


Similar thing happened to the games industry as well, I think. Initially it was creative people and engineers who were focused on what they were making. These days the industry is dominated by suits that just want to extract as much cash as possible from players.


I don’t doubt they’ll get faster. What I wonder is whether they’ll ever stop being so inaccurate. I feel like that’s a structural feature of the model.


I think part of the difficulty with these discussions is that people mean all sorts of different things by “AI”. Much of the current usage is that AI = LLMs, which changes the debate quite a lot.



The earliest one I bought was an Amiga 500, which I still think is one of the greatest machines of all time. I loved the community - really creative. I also own a ZX81, which was the first machine I ever did any programming on.



Alternatively: MS’s AI plans, and the negative reaction to them, have reduced their value cf. Apple


Imo waterfall is an imagined beast for most software devs today. I worked on many successful waterfall projects. It was nowhere near as bad as the caricature that people imagine.


I’m always sceptical about results like these. I was told that waterfall always failed when I’d worked on successful waterfall projects with no failures. The complaints about waterfall were exaggerated, as I think are the complaints about agile. The loudest complaints always seem to be motivated by people trying to sell something.


I’m Gen X. I was pissed off about how boomers took so much and left so little for us. Millennials got it even worse. Gen Z worse again. The concentration of capital in fewer and fewer hands is a looming disaster.


I DO expect better use from new technologies. I don’t expect technologies to do things that they cannot. I’m not saying it’s unreasonable to expect better technology; I’m saying that expecting human qualities from an LLM is a category error.


Why would anyone expect “nuance” from a generative AI? It doesn’t have nuance, it’s not an AGI, it doesn’t have EQ or sociological knowledge. This is like that complaint about LLMs being “warlike” when they were quizzed about military scenarios. It’s like getting upset that the clunking of your photocopier clashes with the peaceful picture you asked it to copy



Always makes me laugh when the US pushes deregulation and small government on the rest of the world and then acts protectionist AF



I find this extraordinarily unconvincing. Firstly, it’s based on the idea that random graphs are a great model for LLMs because they share a single superficial similarity. That’s not science, that’s poetry. Secondly, the researchers completely misunderstand how LLMs work. The assertion that a sentence could not have appeared in the training set does not prove anything. That’s expected behaviour. “Stochastic parrot” wasn’t supposed to mean that it only regurgitates text that it’s already seen, but rather that the text is a statistically plausible response to the input text based on very high-dimensional feature vectors. Those features definitely could relate to what we think of as meaning or concepts, but they’re meaning or concepts that were inherent in the training material.



Maybe this is just a British thing? They’re very popular here in NZ


Yet again confusing LLMs with an AGI. They make statistically plausible text on the basis of past text; that’s it. There’s no thinking thing there.
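A minimal sketch of what “statistically plausible text from past text” means, scaled down to a toy bigram model (the corpus and function names here are purely illustrative, and a real LLM uses learned high-dimensional representations rather than a lookup table, but the principle of sampling the next token from observed statistics is the same):

```python
import random
from collections import defaultdict

# Toy training text: the model will only ever know which word followed which.
corpus = "the cat sat on the mat the cat ate the fish".split()

# Record every observed next-word for each word (duplicates preserve frequency).
followers = defaultdict(list)
for prev, nxt in zip(corpus, corpus[1:]):
    followers[prev].append(nxt)

def generate(start, length=6):
    """Emit a word sequence by repeatedly sampling a plausible next word."""
    word, out = start, [start]
    for _ in range(length - 1):
        if word not in followers:
            break  # dead end: no observed continuation
        word = random.choice(followers[word])  # sample per observed frequency
        out.append(word)
    return " ".join(out)

print(generate("the"))
```

Every output is “plausible” in the narrow sense that each transition was seen in training, yet the full sentence may never have appeared in the corpus, which is the point being made above about novelty not implying understanding.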


I’m happy with age verification. I don’t GAF about whether it’s unconstitutional cos I’m not American and I don’t GAF about the 200 year old opinion of dead revolutionaries