• 1 Post
  • 53 Comments
Joined 1Y ago
Cake day: Jun 24, 2023


Why do they struggle so much with some “obvious things” sometimes? We wouldn’t have a type-C iPhone if the EU hadn’t pressured them to make the switch


I was going to say this: their new architecture seems to be better than previous ones, they have more compute and, I’m guessing, more data. The only explanation for this downgrade is that they tried to ban porn. I hadn’t read anything about this online at the time anyway; I’m only learning about it now


Interestingly enough, even if it makes sense that Boeing is now fully focused on improving quality, it also makes sense to me that Airbus must be ensuring and pushing a lot of quality upgrades as well. It would be perfect marketing for them if no mistakes whatsoever happened on Airbus’s planes



Right, but AFAIK Glaze targets the CLIP model inside diffusion models, which means any new version of CLIP would remove the effect of the protection


People get very confused about this. Pre-training “ChatGPT” (or any transformer model) on “internet shitposting text” doesn’t cause it to reply with garbage comments; bad alignment does. Google seems to have implemented no framework to prevent hallucinations whatsoever, and the RLHF/DPO applied seems to be lacking. But this is not a “problem with training on the entire web”. You could pre-train a model exclusively on a 4chan database and, with the right finetuning, end up with a perfectly healthy and harmless model. Actually, it’s not bad to have “shitposting” or “toxic” text in the pre-training data, because that gives the model the ability to identify and understand it

If anything, the “problem with training on the entire web” is that we would be drinking from a poisoned well: AI-generated text has a very different statistical distribution from the one real users produce, which would degrade the quality of subsequent models. Evidence of how much data quality matters can be seen in the SlimPajama dataset, which improves the scores of trained models simply because it has less duplicated information and is a denser dataset than the original RedPajama: https://www.cerebras.net/blog/slimpajama-a-627b-token-cleaned-and-deduplicated-version-of-redpajama
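The core deduplication idea can be sketched in a few lines. This is a toy exact-hash version (the real SlimPajama pipeline uses fuzzy MinHash dedup over n-grams); everything here is illustrative, not their actual code:

```python
import hashlib

# Toy document-level dedup: drop any document whose normalized text hashes
# to a digest we've already seen. Real pipelines do fuzzy (near-duplicate)
# matching; exact hashing after normalization is the simplest possible case.

docs = [
    "The quick brown fox jumps over the lazy dog.",
    "the  quick brown fox jumps over the lazy dog.",   # near-duplicate (case/spacing)
    "A completely different document about language models.",
]

def normalize(text: str) -> str:
    # Lowercase and collapse whitespace so trivial variants hash identically
    return " ".join(text.lower().split())

def dedup(documents):
    seen, kept = set(), []
    for doc in documents:
        digest = hashlib.sha256(normalize(doc).encode()).hexdigest()
        if digest not in seen:
            seen.add(digest)
            kept.append(doc)
    return kept

print(len(dedup(docs)))  # 2: the near-duplicate is dropped
```

Fewer repeated documents means each training token carries more unique information, which is exactly why the deduplicated set scores better at the same token budget.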


Lemmy seems to be very near-sighted when it comes to the exponential curve of AI progress; I think this is because the community is very anti-corp


How did this clickbaity headline get so many upvotes? Are we really cherry-picking some outlier example of a hallucination and using it to say “haha, google dumb”? I think there is plenty of valid criticism against Google out there that we can stick to instead of paying attention to stupid, provocative articles


Hahahh, it would be hilarious to then get a whistleblower from the team of “Boeing’s hitmen” because of bad working conditions


I feel terrible for all the work Boeing’s hitman is going to have to do this week 🤦🏻‍♂️



How many Android apps are designed by Teenage Engineering?


Yes, very good point!

I wonder if someday in the future we might use reinforcement learning to iterate over different mechanical designs to explore even more exotic combinations of wheels, springs, hydraulic pistons, steel wires, legs and joints (optimizing for metrics like mobility, etc.). I even wonder if flexible joints made out of hard rubber could offer any advantages for bipedal motion
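Purely as a toy illustration of that idea: treat a design as a small parameter vector and let a search loop propose mutations, keeping whichever candidate scores best. The “mobility” metric below is an invented placeholder (a real setup would score each candidate in a physics simulator), and all the parameter names and numbers are made up:

```python
import random

random.seed(0)

def mobility(design):
    # Placeholder score: a smooth function that peaks at leg=0.6 m,
    # stiffness=300 N/m, damping=0.2 (all invented for illustration).
    leg, stiffness, damping = design
    return -((leg - 0.6) ** 2 + ((stiffness - 300) / 500) ** 2 + (damping - 0.2) ** 2)

def search(steps=2000, sigma=(0.05, 20.0, 0.02)):
    # Simple hill climbing: mutate the best design with Gaussian noise,
    # keep the mutation only if it scores higher.
    best = [0.3, 100.0, 0.5]            # arbitrary starting design
    best_score = mobility(best)
    for _ in range(steps):
        cand = [p + random.gauss(0, s) for p, s in zip(best, sigma)]
        score = mobility(cand)
        if score > best_score:
            best, best_score = cand, score
    return best, best_score

best, score = search()
print(best, score)
```

Proper RL or evolutionary methods (CMA-ES, population-based training) would scale this to much larger design spaces, but the loop above captures the propose-evaluate-keep structure.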


I think they specifically chose that to show it has no “forward” axis. Robots don’t need to be 100% anthropomorphic and follow our biological limitations; this is a very significant evolution in design that will allow for better mobility



I hate so much how pinterest occludes and pollutes google images 🙄


  • “Show the movie, LG”
  • “I’m sorry Dave, I cannot play ‘Harry.Potter.and.the.Deathly.Hallows.Part.2.2011.2160p.MAX.WEB-D.mkv’”
  • “Dammit LG, show the movie!”
  • “I think this conversation has no other purpose, goodbye Dave”

I know lemmy is fundamentally critical of reddit, but let’s not forget that if lemmy ever achieves a significant weight in humanity’s attention, it won’t be immune to the same disease. The problem is systemic, not inherent to a specific platform. Any place with a lot of eyes will be susceptible to manipulation, even more so now that we have tamed artificial intelligence into writing text about just about anything. We as a community need to think about countermeasures to fend this off



Dumb question: Evernote has a feature to embed audio recordings within notes, synced across devices.

How could this be replicated with something like Obsidian/Roam/Typora/Notepad++/Notion/something/Joplin? Any suggestions?


(As if we didn’t have enough with Dropbox selling our data for AI training!)


I have the feeling that a big chunk of Apple consumers (I know there are many professionals and developers who love Apple) don’t even know what RAM is used for and will just buy it because it’s the “cheapest version of the newest thing”, without much critical consideration


I’m not deep on how the core of an OS works, but to my understanding the Linux kernel should be more robust and reliable; shouldn’t it always perform better than Windows on the same hardware?

Where could I read about the things that hinder performance on Linux? Does anybody have any educational resources?



Thank you for the info!! Still, given the current state of the web, 42% seems like too little. I guess it’s just a matter of waiting for the word to spread even more


Ok, I get all the hate towards Google Chrome. But pragmatically speaking, will this boost Firefox? I feel like even nowadays a ton of people don’t know about adblockers; I’m not sure they will make the switch to a whole different browser…



I agree with the conclusions of the boomers, but for very different reasons: I think long-term AI will produce vastly more harm than good. Just this week we got a headline about Google, a serious, grown company that already makes billions, being up to some fuckery against Firefox; Facebook has been fined a million times for not respecting privacy; and Amazon workers have to pee in bottles. To my sadness, all the movement against integrating AI into weapons built basically to “kill people” is very noble but won’t do jackshit. Do we think China/Russia are going to give a single fuck about this? Even the US will start selling AI drones once it becomes normalized. And that’s just AI in war; there are a trillion other things where AI will fuck things up: artists will be devalued, misinformation will reach a new all-time high, captchas are long dead making the internet a more polluted place, surveillance will become more toxic, the list goes on


It’s reassuring that this opinion is based on many years of experience reading scientific papers, implementing these models and following the trends closely!


I just remembered that Destin from Smarter Every Day did a dedicated video about the privacy of this: https://www.youtube.com/watch?v=U3EEmVfbKNs. So, was it complete bullshit?



Up next, windows 13 is cloud-based only, thus requiring constant internet connection


Here’s the paper: https://arxiv.org/pdf/2302.04222.pdf

I find it very interesting that someone went in this direction to try to mitigate plagiarism. This is very akin to adversarial attacks on neural networks (you can read more in this short review: https://arxiv.org/pdf/2303.06032.pdf)

I saw some comments saying that you could just build an AI that detects poisoned images, but that wouldn’t be feasible with a simple NN classifier or feature-based approaches. This technique shifts the artist’s style itself to something the AI sees differently in the latent space, yet is visually perceived as the same image. And since it’s shifting toward a different style the AI has already learned, it’s fair to assume the result will look realistic and coherent. Although maaaaaaaybe you could detect poisoned images with some dark magic: get the targeted AI, then analyze the latent space to see if the image has been tampered with

On the other hand, I think that if you build more robust features and just scale the data, these problems might go away with more regularization in the network. Plus, it assumes you’re targeting one specific AI generation tool; there are a dozen of these, and if someone trains with a few more images in a cluster, that’s it: you’ve shifted the features and the poisoned images are invalid
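As a toy illustration of the adversarial mechanism involved (not Glaze’s actual algorithm), the sketch below nudges an “image” so that a fixed stand-in encoder maps it close to a decoy point in latent space, while the pixel-level change stays small. The linear encoder, the decoy target, and all numbers here are invented placeholders for a real CLIP-like model:

```python
import numpy as np

rng = np.random.default_rng(0)
W = rng.normal(size=(8, 64))   # fake linear "encoder": 64 pixels -> 8-dim latent
x = rng.normal(size=64)        # original "image"
target = rng.normal(size=8)    # latent point of a decoy style

def encode(img):
    return W @ img

def cloak(img, steps=200, eps=0.01, budget=0.5):
    """Minimize ||encode(img) - target||^2 with small signed gradient steps,
    keeping the perturbation inside an L-inf ball so the image 'looks the same'."""
    adv = img.copy()
    for _ in range(steps):
        grad = 2 * W.T @ (encode(adv) - target)   # analytic gradient (linear encoder)
        adv -= eps * np.sign(grad)                # FGSM-style signed descent step
        adv = np.clip(adv, img - budget, img + budget)  # project back into the ball
    return adv

adv = cloak(x)
print(np.max(np.abs(adv - x)))                 # pixel change stays within the budget
print(np.linalg.norm(encode(x) - target))      # original latent: far from the decoy
print(np.linalg.norm(encode(adv) - target))    # cloaked latent: much closer
```

The scaling argument above is exactly why this is fragile: retrain the encoder (or average over several), and the gradient direction that made the cloak work no longer points the same way.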


I know that the N stands for Netflix, but why did people consider it important enough to be in the name? It sounds to me like Microsoft deserved that spot


When did netflix become a FAANG company? What do they have that is so valuable? To me it seems like they don’t develop any particularly incredible tech besides streaming and storage


Maybe the 5th episode of the 6th season was written by an AI and they were playing some 4D chess game with our minds all along, because otherwise I wonder how such fucking trash got the green light to be produced 🤗

Edit: Typo


Hopefully this might collaterally improve wearable tech :[


This is great for Linux, but I think many laptops come with a locked BIOS that won’t allow you to boot other OSes. What do you guys do in this case? Also, correct me if I’m wrong!



And they reimburse you that money with a gift card? Is that even legal?