This is potentially misleading, as we’re not sure what it means. MS did something similar, but it was to break up a centralised team and embed the AI ethics experts inside various product teams. So rather than coordinating with a separate team, the AI ethics researchers are part of the same team.
Flip side: if these researchers are not comparing notes, they have less ability to push back on irresponsible products.
This actually makes sense.
My company started with a security team of about 15 people. Honestly, all they did was write up security reports and then tell someone else to fix the issues. Fucking useless.
When they disbanded the team, they did integrate them into other teams. So now they’re actually part of the solution.
And I can totally see the news twisting that story and making it look like “[company] removes entire security team”.
Microsoft and Ethics in the same sentence.
Meta disbanded responsible AI team
Meta formed irresponsible AI team
How would generative AI be useful for Facebook? I think the major use would be making fake posts to manipulate public opinion.
Now they can have AI generate posts that advertise specific products or sway people’s opinions about a specific topic, and it would all look like it came from a legitimate user. The bots might even respond to your comments.
That doesn’t look like something a team meant for the responsible use of AI (assuming they did their job) would be OK with.
BTW: it’s kind of crazy, but theoretically, with this Facebook doesn’t really need users to generate content and can still make the site seem busy. And you wouldn’t even know it. They could also subtly modify the comments of real people to change their meaning. Based on what Facebook has already done, I don’t think anything is taboo to them.
FWIW, Facebook makes a ton of tools useful for AI computing, and most if not all of the free AI resources I use depend on at least some FB-developed tools. I know it doesn’t answer your question, but I thought it would be of interest.
I understand, but no company (especially Facebook) spends money on something unless they see a return from it (I remember that 20 years ago Google received a lot of praise for being different; now we know they aren’t). When a company like Facebook or Google open sources something, it is to:
get free contributors to technology they use internally
make sure that their standard dominates so they can still steer it in the direction they want
There might as well only be one Silicon Valley company at this point. They’re all trying to do the same thing, and AI is currently that thing.
Facebook has a lot of data. They can use it to train models and sell those as services to other companies, as one example. They can also apply it to tracking data to help them steer their sites toward the things they want to achieve. They can use it to look at your data and determine what kind of feed would keep you there longer. All kinds of things, really.
How shockingly expected.
One can assume that whatever Facebook ends up doing with AI, it will be poorly thought out, kind of half-assed, abusive to customers, and have all sorts of negative side effects and consequences that they were either too lazy to think of or actually desire for some reason.
It’ll be abusive to users, not customers. Their customers are advertisers, not the users.
deleted by creator
No. The users are still no more than raw material that gets used. Now maybe the raw material has gained a little worth, that’s all.
Not abusive to customers (advertisers) at first. Only once they have lock-in will they start abusing them too. That’s the third step on the enshittification pathway.