Meta will train AI with data from European users - Stack Diary
Meta has announced that it will begin training its own AI using data from European users. The company claims that it has a legitimate interest in this practice. Users can object to the use of their data; however, there have been complaints regarding the procedure for submitting these objections. To object, users must fill out a form, detailing how the data…

The only exception is private messages, and some users have reported difficulty opting out.

@j4k3@lemmy.world
From my experience with Llama models, this is great!

Not all training info is about answers to instructive queries. Most of this kind of data will likely be used for cultural and emotional alignment.

At present, open source Llama models have a rather prevalent prudish bias. I hope European data can help overcome it. I can easily defeat the filtering part of alignment; that is not what I am referring to here. There is a bias baked into the entire training corpus that is much more difficult to address while retaining nuance in creative writing.

I’m writing a hard science fiction universe and find it difficult to overcome many of the present cultural biases in character descriptions. I’m working in a novel writing space with a mix of concepts that no one else has worked with before. With all of my constraints in place, the model struggles to overcome things like a default of submissive behavior in women. Creating a complex and strong-willed female character is difficult because I’m fighting too many constraints for the model to fit into attention. If the model trained on a more egalitarian corpus, I would struggle far less in this specific area.

It is key to understand that nothing inside a model exists independently. Everything is related in complex ways, so this edge case has far more relevance than it may at first seem. I’m talking about a window into an abstract problem with far-reaching consequences.

People also seem to misunderstand that model inference works both ways. The model is always trying to infer what you know and what it should know, and, just as importantly, what you do not know and what it should not know. If you do not spell these things out, it will make assumptions, likely bad ones, and default to treating you as average relative to the training corpus. What do you think of the intelligence of the average person? The model needs to be trained on what not to say, and when not to say it, along with the enormous range of unrecognized inner conflicts and biases we all carry beneath the surface of our conscious thoughts.
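One practical consequence of this bidirectional inference is that stating your background explicitly in the system prompt removes the model's need to guess it. A minimal sketch of that idea (the function, field names, and wording here are my own illustration, not any particular model's API):

```python
# Build a system prompt that states, rather than lets the model infer,
# what the author knows, what it should skip explaining, and which
# default biases it should avoid. Purely illustrative prompt text.
def build_system_prompt(author_knows, skip_explaining, avoid_defaults):
    lines = ["You are assisting a hard science fiction author."]
    lines.append("The author already knows: " + "; ".join(author_knows) + ".")
    lines.append("Do not explain: " + "; ".join(skip_explaining) + ".")
    lines.append("Avoid these defaults: " + "; ".join(avoid_defaults) + ".")
    return "\n".join(lines)

prompt = build_system_prompt(
    author_knows=["orbital mechanics", "the story's established canon"],
    skip_explaining=["basic physics concepts"],
    avoid_defaults=["submissive default behavior in female characters"],
)
print(prompt)
```

The point is not the string formatting but the habit: every assumption you leave implicit is one the model fills in from the average of its training corpus.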

This is why it might be a good thing to get European sources. Just some things to think about.
