Yani Bellini Saibene (@yabellini@fosstodon.org)
Did you realize that we live in a reality where SciHub is illegal, and OpenAI is not?
@hottari@lemmy.ml

This is different. AI as a transformative tech is going to usher the US economy into the next boom of prosperity. The AI revolution will change the world and allow people to decide if they want to work for money or not (read: UBI). In case you haven’t caught on, I’m being sarcastic.

All this despite ChatGPT being a total complete joke.

In case you haven’t caught on, I’m being sarcastic.

It sounds like a completely sincere Marc Andreessen post to me.

TurtleJoe

This was a case where you needed the sarcasm tag. Up to then, it was a totally “reasonable” comment from an AI bro.

BTW, plug “crypto” into your comment in place of AI, and it’s a totally normal statement from 2020/21. It’s such a similar VC grift.

@adrian783@lemmy.world

deleted by creator

Joe Cool

So, I feel taking an .epub and putting it in a .zip is pretty transformative.

Also you can make ChatGPT (or Copilot) print out quotes with a bit of effort, now that it has Internet.

Can you elaborate on the specific ways that chatgpt is a joke?

Cyber Yuki

https://youtu.be/ro130m-f_yk

Adam explains it. Enjoy.

Ah yes, of course. I remember this video. Not all of the specific points, but I do remember Adam Conover really chewing into large language models. Interestingly, that same Adam Conover must have believed AI isn’t actually that useless seeing as he became a leading member of the 2023 Hollywood writers strike, in which AI was a central focus:

Writers also wanted artificial intelligence, such as ChatGPT, to be used only as a tool that can help with research or facilitate script ideas and not as a tool to replace them.

https://en.wikipedia.org/wiki/2023_Writers_Guild_of_America_strike

That said, I’m not going to rewatch a 25-minute video for a discussion on Lemmy. Any specific points you want to make against ChatGPT?

@wikibot@lemmy.world
bot account

Here’s the summary for the wikipedia article you mentioned in your comment:

From May 2 to September 27, 2023, the Writers Guild of America (WGA)—representing 11,500 screenwriters—went on strike over a labor dispute with the Alliance of Motion Picture and Television Producers (AMPTP). With a duration of 148 days, the strike is tied with the 1960 strike as the second longest labor stoppage that the WGA has performed, only behind the 1988 strike (153 days). Alongside the 2023 SAG-AFTRA strike, which continued until November, it was part of a series of broader Hollywood labor disputes. Both strikes contributed to the biggest interruption to the American film and television industries since the COVID-19 pandemic. The lack of ongoing film and television productions resulted in some studios having to close doors or reduce staff. The strike also jeopardized long-term contracts created during the media streaming boom: big studios could terminate production deals with writers through force majeure clauses after 90 days, saving them millions of dollars. In addition, numerous other areas within the global entertainment ecosystem were impacted by the strike action, including the VFX industry and prop making studios. Following a tentative agreement, union leadership voted to end the strike on September 27, 2023. On October 9, the WGA membership officially ratified the contract with 99% of WGA members voting in favor of it. Its combined impact with the 2023 SAG-AFTRA strike resulted in the loss of 45,000 jobs, and "an estimated $6.5 billion" loss to the economy of Southern California.


@douglasg14b@lemmy.world

Honestly, I couldn’t tell if you were being sarcastic or not because of Poe’s law, until I saw your note.

If all the wealth created by these sorts of things didn’t funnel up to the 0.01% then yeah. It could usher in economic changes that help bring about greater prosperity in the same way mechanical automation should have.

Unfortunately it’s just going to be another vector for more wealth to be removed from your average American and transferred to a corporation.

@Fedizen@lemmy.world

Make the AI folks use public domain training data or nothing and maybe we’ll see the “life of the author + 75 years” bullshit get scaled back to something reasonable.

Exactly this. I can’t believe how many comments I’ve read accusing the AI critics of holding back progress with regressive copyright ideas. No, the regressive ideas are already there, codified as law, holding the rest of us back. Holding AI companies accountable for their copyright violations will force them to either push to reform the copyright system completely, or to change their practices for the better (free software, free datasets, non-commercial uses, real non-profit orgs for the advancement of the technology). Either way we have a lot to gain by forcing them to improve the situation. Giving AI companies a free pass on the copyright system will waste what is probably the best opportunity we have ever had to improve the copyright system.

They let the Mouse die finally, maybe there is hope for change.

@yokonzo@lemmy.world

Tbf that number was originally like 20+ years and then Disney lobbied several times to expand it

Flying Squid

19 years. It wasn’t life of the author either. It was 19 years after creation date plus an option to renew for another 19 at the end of that period. It was sensible. That’s why we don’t do it anymore.

@Tillman@lemmy.world

Weird, why would OpenAI be illegal? Bizarre comp.

They steal data from everything, including paywalled sources and proprietary data.

@Jknaraa@lemmy.ml

And people wonder why there’s so much pushback against everything corps/govs do these days. They do not act in a manner which encourages trust.

What do you expect when people support 90 year copyrights after death?

Consider who sits on OpenAI’s board and owns all their equity.

SciHub’s big mistake was to fail to get someone like Sundar Pichai or Jamie Iannone with a billion-dollar stake in the company.

@Maggoty@lemmy.world

Oh OpenAI is just as illegal as SciHub. More so because they’re making money off of stolen IP. It’s just that the Oligarchs get to pick and choose. So of course they choose the arrangement that gives them more control over knowledge.

Lemminary

They’re not serving you the exact content they scraped, and that makes all the difference.

@LibreFish@lemmy.world

Yes, because 1:1 duplication of copyrighted works violates copyright, but summarizing those works and relaying facts stated in them is perfectly legal (whether done by an AI or not).

@unexpectedteapot@lemmy.ml

If by “perfectly legal” you mean a fair use claim, then could you please explain how a commercial, for-profit company using the works, sometimes echoing results verbatim, qualifies as fair use rather than infringement?

@LibreFish@lemmy.world

I do not mean a fair use claim. To quote the copyright office “Copyright does not protect facts, ideas, systems, or methods of operation, although it may protect the way these things are expressed” source

Facts and ideas cannot be copyrighted, so what I was specifically referring to is that if I or an AI read a paper about jellyfish being ocean creatures, then later talk about jellyfish being ocean creatures, there are no restrictions on that whatsoever as long as we don’t reproduce the paper word for word.

Now, most of the time AI summarizes things or collects facts, and since those themselves cannot be protected by copyright, it’s perfectly legal. On the occasions when AI spits out copyrighted work, that’s a gray area, and liability, if any, will probably be decided in the courts.

@Mango@lemmy.world

removed by mod

Aielman15

I pirated 90% of the texts I used to write my thesis at university, because those books would have cost me hundreds of euros that I didn’t have.

Fuck you, capitalism.

@BloodSlut@lemmy.world

unfathomably based

He has me so inspired imma go pirate a bunch of textbooks just because I can. I don’t even need them.

Star
creator

It’s so ridiculous that when corporations steal everyone’s work for their own profit, no one bats an eye, but when a group of individuals does the same to make education and knowledge free for everyone, it’s somehow illegal, unethical, immoral, and what not.

@Grimy@lemmy.world

Using publicly available data to train isn’t stealing.

Daily reminder that the ones pushing this narrative are literally corporations like OpenAI. If you can’t use copyrighted materials freely to train on, it drives up the cost in such a way that only a handful of companies can afford the data.

They want to kill the open-source scene and are manipulating you to do so. Don’t build their moat for them.

OpenAI is definitely not the one arguing that they have stolen data to train their AIs, and Disney will be fine whether AI requires owning the rights to training materials or not. Small artists, the ones protesting the most against it, will not. They are already seeing jobs and commission opportunities decline because of it.

Being publicly available in some form is not permission to use and reproduce those works however you feel like. Only the real owner has the right to decide. We on the internet have always been a bit blasé about it, sometimes deservedly, but as we reach a point where we are driving away the very same artists that we enjoy and get inspired by, maybe we should be a bit more understanding about their position.

@Grimy@lemmy.world

That’s basically my main point: Disney doesn’t need the data, and neither does Getty. AI isn’t going away, and the jobs will be lost no matter what.

Putting a price tag in the high millions for any kind of generative model only benefits the big players.

I feel for the artists. It was already a very competitive domain that didn’t really pay well and it’s now much worse but if they aren’t a household name, they aren’t getting a dime out of any new laws.

I’m not ready to give the economy to Microsoft, Google, Getty and Adobe so GRRM can get a fat payday.

If AI companies lose, small artists may have the recourse of seeking compensation for the use and imitation of their art too. Just feeling for them is not enough if they are going to be left to the wolves.

There isn’t a scenario here in which big media companies lose so talking of it like it’s taking a stand against them doesn’t make much sense. What are we fighting for here? That we get to generate pictures of Goofy? The small AI user’s win here seems like such a silly novelty that I can’t see how it justifies just taking for granted that artists will have it much rougher than they already have.

The reality here is that even if AI gets the free pass, large media and tech companies are still primed to profit from them far more than any small user. They will be the one making AI-assisted movies and integrating chat AI into their systems. They don’t lose in either situation.

There are ways to train AI without relying on unauthorized copyrighted data. Even if OpenAI loses, it wouldn’t be the death of the technology. It may be more efficient and effective to train them with that data, but why is “efficiency” enough to justify this overreach?

And is it even wise to be so callous about it? Because it’s not going to stop with artists. This technology has the potential to replace large swaths of service industries. If we don’t think of the human costs now, it will be even harder to make a case for everyone else.

@Grimy@lemmy.world

I fully believe AI will be able to replace 50% or more of desk jobs in the near future. It’s definitely a complicated situation and you make good points.

First and foremost, I think it’s imperative that the barrier to entry for model training is as low as possible. Anything else basically gives a select few companies the ability to charge a huge subscription fee on all our goods and services.

The data needed is pretty heavy as well; it’s not very feasible to go off of donated or public domain data.

I also think the job losses are virtually guaranteed, and trying to save those jobs is misguided as well as not really benefiting most of those affected.

And yeah, the big companies win either way, but if it’s easier to use this new tech, we might not lose as hard. Disney, for instance, doesn’t have any competition, but if a bunch of indie animation companies and groups start popping up, it levels the playing field a bit.

@Mango@lemmy.world

removed by mod

deweydecibel

The point is that the entire concept of AI training on people’s work to make profit for others is wrong without the permission of, and compensation for, the creator, regardless of whether it’s corporate or open source.

@givesomefucks@lemmy.world

And using publicly available data to train gets you a shitty chatbot…

Hell, even using copyrighted data to train isn’t that great.

Like, what do you even think they’re doing here for your conspiracy?

You think OpenAI is saying they should pay for the data? They’re trying to use it for free.

Was this a meta joke and you had a chatbot write your comment?

@Grimy@lemmy.world

If the data has to be paid for, OpenAI will gladly do it with a smile on their face. It guarantees them a monopoly and ownership of the economy.

Paying more but having no competition except google is a good deal for them.

@givesomefucks@lemmy.world

Eh, the issue is lots of people wouldn’t be willing to sell tho.

Like, you think an author wants the chatbot to read their collected works and use that? Regardless of whether it’s quoting full texts or “creating” text in their style.

No author is going to want that.

And if it’s up to publishers, they likely won’t either. Why take one small payday if that could potentially lead to loss of sales a few years down the road?

It’s not like the people making the chatbots just need to buy a retail copy of the text to be in the legal clear.

@Grimy@lemmy.world

The publishers will absolutely sell, imo. They just publish; the book will be worth the same with or without the help of AI to write it.

I guess there is a possibility that people start replacing bought books with personalized LLM outputs, but that strikes me as unlikely.

@tourist@lemmy.world

Was this a meta joke and you had a chatbot write your comment?

if someone said this to me I’d cry

Hey man, that’s damn hurtful

@webghost0101@sopuli.xyz

The point that was being made was that publicly available data includes a whole lot of copyrighted data to begin with, and it’s pretty much impossible to filter it out. A prime example: the Eiffel Tower in Paris is not copyright-protected, but the light show on it is, so you can only use pictures of the Eiffel Tower taken during the day, and only if the picture itself isn’t copyright-protected by the original photographer. Copyright law has all these complex caveats and exceptions that make it impossible to tell at a glance whether or not something is protected.

This in turn means that if AI cannot legally train on copyrighted materials it finds online without paying huge sums of money, then effectively only megacorporations who can pay copyright fines as a cost of business will be able to afford training decent AI.

The only other option to produce any AI of this type is a very narrow, curated set of known materials with a public use license, but that is not going to get you anything competent on its own.

EDIT: In case it isn’t clear, I am clarifying what I understood from Grimy@lemmy.world’s comment, not adding to it.

It’s not like all this data was randomly dumped at the AIs. For data sets to serve as good training materials, they need contextual information so that the AI can discern patterns and replicate them when prompted.

We see this when you can literally prompt AIs with whose style you want it to emulate. Meaning that the data it was fed had such information.

Midjourney is facing extra backlash from artists after a spreadsheet was leaked containing a list of artist styles their AI was trained on. Meaning they can keep track of it and they trained the AI with those artists’ works deliberately. They simply pretend this is impossible to figure out so that they might not be liable to seek permission and compensate the artists whose works were used.

I didn’t want any of this shit. IDGAF if we don’t have AI. I’m still not sure the internet actually improved anything, let alone what the benefits of AI are supposed to be.

A perfectly valid stance to take.

@myslsl@lemmy.world

Machine learning techniques are often thought of as fancy function approximation tools (i.e. for regression and classification problems). They are tools that receive a set of values and spit out some discrete or possibly continuous prediction value.

One use case is that there are a lot of really hard and important problems within CS that we can’t solve exactly in an efficient way (look up TSP, SOP, SAT and so on), but that we can solve using heuristics or approximations in reasonable time. Often the accuracy of the heuristic even determines the efficiency of our solution.

Additionally, sometimes we want predictions for other reasons. For example, software that relies on user preference, that predicts home values, that predicts the safety of an engineering plan, that predicts the likelihood that a person has cancer, that predicts the likelihood that an object in a video frame is a human etc.

These tools have legitimate and important use cases; it’s just that a lot of the hype now is centered around the dumbest possible uses and a bunch of idiots trying to make money regardless of any associated ethical concerns or consequences.
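To make the “fancy function approximation” framing concrete, here is a minimal, self-contained sketch: fit a line to noisy samples of an unknown function, then read off the model’s prediction parameters. The function names and data are purely illustrative, not any particular library’s API.

```python
import random

def fit_line(points):
    """Ordinary least squares for y = a*x + b over (x, y) pairs."""
    n = len(points)
    sx = sum(x for x, _ in points)
    sy = sum(y for _, y in points)
    sxx = sum(x * x for x, _ in points)
    sxy = sum(x * y for x, y in points)
    a = (n * sxy - sx * sy) / (n * sxx - sx * sx)
    b = (sy - a * sx) / n
    return a, b

random.seed(0)
# Noisy observations of an unknown "true" function y = 2x + 1.
samples = [(x, 2 * x + 1 + random.uniform(-0.1, 0.1)) for x in range(10)]
a, b = fit_line(samples)
print(f"slope ≈ {a:.2f}, intercept ≈ {b:.2f}")  # close to 2 and 1
```

A neural network does the same job with a vastly more flexible function family, but the contract is identical: values in, prediction out.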

@Grimy@lemmy.world

You don’t have to use it. You can even disconnect from the internet completely.

What’s the benefit of stopping me from using it?

It doesn’t matter what you want. What matters is if corporations can extract $ from you, gain an efficiency, or cut their workforce using it.

That’s what the drive for AI is all about.

No doubt.

That’s insane logic…

Like you’re essentially saying I can copy/paste any article without a paywall to my own blog and sell adspace on it…

And you’re still saying OpenAI is trying to make AI companies pay?

Like, do you think AI runs off free cloud services? The hardware is insanely expensive.

And OpenAI is trying to argue the opposite, that AI companies shouldn’t have to pay to use copyrighted works.

You have zero idea what is going on, but you are really confident you do

I clarified the comment above, which was misunderstood; whether it makes a moral/sane argument is subjective, and I am not covering that.

I am not sure why you think there is a claim that OpenAI is trying to make companies pay; on the contrary, the comment I was clarifying (so not my opinion/words) states that OpenAI is arguing that anyone should be able to use copyrighted materials for free to train AI.

The costs of running an online service like ChatGPT are wildly beside the argument presented. You can run your own open source large language models at home about as well as you can run Bethesda’s Starfield on a same-spec’d PC.

Those Open source large language models are trained on the same collections of data including copyrighted data.

The logic being used here is:

If it becomes globally forbidden to train AI with copyrighted materials, or a large price or fine is attached to using them for training, then the non-corporate, free, open source side of AI will perish or have to go underground, while the for-profit megacorporations will continue to exploit and train AI as usual because they can pay to settle in court.

The Ethical dilemma as i understand it is:

Allowing AI to train for free is a direct threat towards creatives and a win for BigProfit Entertainment; not allowing it to train for free is a threat to public, democratic AI and a win for BigTech merging with BigCrime.

Allowing AI to train for free is a direct threat towards creatives

No. Many creatives fear that AI allows anyone to do what they do, lowering the skill premium they can charge. That doesn’t depend on free training.

Some seem to feel that paying for training will delay AI deployment for some years, allowing the good times to continue (until they retire or die?)

But afterward, you have to ask who’s paying the extra cost once AI is a normal tool for creatives. Where does the money come from to pay the rent to intellectual property owners? Obviously the general public will pay a part through higher prices. But I think creatives may bear the brunt, because it’s the tools of their trade that become more expensive, and I don’t think all of that cost can be passed on.

I don’t think lowering the skill level is something we will need to worry about, as over time this actually trickles up: a creative professional trained with AI tools will almost always top an amateur using the same tools.

The real issue is style. If you are an artist with a very recognizable, specific style, and you make your money through commissions, you are basically screwed. Many artists feature a personal style, and while borrowing people’s styles is common (Disney-esque), it’s usually not a problem, because within a unique and diverse human mind it rarely results in unintentional latent copying.

@Grimy@lemmy.world

That is very well put, I really wish I could have started with that.

Though I envision it as a loss for BigProfit Entertainment, since I see this as a real boon for the indie gaming, animation, and eventually filmmaking industries.

It’s definitely overall quite a messy situation.

@givesomefucks@lemmy.world

You can run your own open source large language models at home about as well as you can run Bethesda’s Starfield on a same-spec’d PC

Yes, you can download an executable of a chatbot lol.

That’s different than running something remotely like even OpenAI.

The more it has to reference, the more the system scales up. Not just storage, but everything else.

Like, in your example of video games it would be more like stripping down a PS5 game of all the assets, then playing it on a NES at 1 frame per five minutes.

You’re not only wildly overestimating chatbots’ abilities, you’re doing that while drastically underestimating the resources needed.

Edit:

I think you literally don’t know what people are talking about…

Do you think people are talking about AI image generators?

No one else is…

I think you’re confusing training it with running it. After it’s trained, you can run it on much weaker hardware.
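The back-of-envelope arithmetic behind that point: just holding the weights of a model at inference time takes parameters × bits-per-weight, which is why quantized models fit on consumer hardware while training needs far more. The 13B figure below is a hypothetical, roughly Vicuna-13B-sized:

```python
# Memory to hold model weights alone at inference time.
# Training needs several times more (gradients, optimizer state,
# activations); these figures are weights only.
PARAMS = 13e9  # hypothetical 13B-parameter model

def weight_memory_gb(params: float, bits_per_weight: int) -> float:
    """Bytes for the weight tensor, expressed in decimal GB."""
    return params * bits_per_weight / 8 / 1e9

print(f"fp16 : {weight_memory_gb(PARAMS, 16):.1f} GB")  # 26.0 GB
print(f"4-bit: {weight_memory_gb(PARAMS, 4):.1f} GB")   # 6.5 GB
```

Whether a given quantized model is *good enough* is a separate question, but this is the sense in which running a trained model is far cheaper than training it.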

@webghost0101@sopuli.xyz

I am talking about generative AI, be it text or image both have a challenge with copyrighted material.

“executable of a chatbot” lol, aint you cute

“example of video games”

Are you referring to my joke?

I am far from overestimating capacity. Starfield runs only so-so on a modern gaming system compared to other games; the Vicuna 13B LLM likewise runs only so-so on the same system compared with GPT-3.5. To date there is no local model that I would trust for professional use, and GPT-3.5 doesn’t hit that level either.

But it remains a very interesting, rapidly evolving technology that I hope receives as much future open source support as possible.

“I think you literally don’t know what people are talking about” I hate to break it to you but you’re embarrassing yourself.

I presume you must believe the following Lemmy community and resources to be typed up by a group of children; either that, or you’re just naive.

https://lemmy.world/c/fosai

https://www.fosai.xyz/

https://github.com/huggingface/transformers

https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard

https://huggingface.co/microsoft/phi-2 & https://www.microsoft.com/en-us/research/blog/phi-2-the-surprising-power-of-small-language-models/

https://www.theguardian.com/technology/2023/may/05/google-engineer-open-source-technology-ai-openai-chatgpt

True, Big Tech loves monopoly power. It’s hard to see how there can be an AI monopoly without expanding intellectual property rights.

It would mean a nice windfall profit for intellectual property owners. I doubt they worry about open source or competition but only think as far as lobbying to be given free money. It’s weird how many people here, who are probably not all rich, support giving extra money to owners, merely for owning things. That’s how it goes when you grow up on Ayn Rand, I guess.

@kibiz0r@lemmy.world

We have a mechanism for people to make their work publicly visible while reserving certain rights for themselves.

Are you saying that creators cannot (or ought not be able to) reserve the right to ML training for themselves? What if they want to selectively permit that right to FOSS or non-profits?

@BURN@lemmy.world

That’s exactly what they’re saying. The AI proponents believe that copyright shouldn’t be respected and they should be able to ignore any licensing because “it’s hard to find data otherwise”

@Grimy@lemmy.world

Essentially yes. There isn’t a happy solution where FOSS gets the best images and remains competitive. The amount of data needed is outside what can be donated. Any open source work will be so low in quality as to be unusable.

It also won’t be up to them. The platforms where the images are posted will be selling and brokering. No individual is getting a call unless they are a household name.

None of the artists are getting paid either way so yeah, I’m thinking of society in general first.

@kibiz0r@lemmy.world

The artists (and the people who want to see them continue to have a livelihood, a distinct voice, and a healthy engaged fanbase) live in that society.

The platforms where the images are posted will be selling and brokering

Isn’t this exactly the problem though?

From books to radio to TV, movies, and the internet, there’s always:

  • One group of people who create valuable works
  • Another group of people who monopolize distribution of those works

The distributors hijack ownership (or de facto ownership) of the work, through one means or another (either logistical superiority, financing requirements, or IP law fuckery) and exploit their position to make themselves the only channel for creators to reach their audience and vice-versa.

That’s the precise pattern that OpenAI is following, and they’re doing it at a massive scale.

It’s not new. Youtube, Reddit, Facebook, MySpace, all of these companies started with a public pitch about democratizing access to content. But a private pitch emerged, of becoming the main way that people access content. When it became feasible for them to turn against their users and liquidate them, they did.

The difference is that they all had to wait for users to add the content over time. Imagine if Google knew they could’ve just seeded Google Video with every movie, episode, and clip ever aired or uploaded anywhere. Just say, “Mon Dieu! It’s impossible for us to run our service without including copyrighted materials! Woe is us!” and all is forgiven.

But honestly, whichever way the courts decide, the legality of it doesn’t matter to me. It’s clearly a “Whose Line Is It?” situation where the rules are made up and ownership doesn’t matter. So I’m looking at “Does this consolidate power, or distribute it?” And OpenAI is pulling perhaps the biggest power grab that we’ve seen.

Unrelated: I love that there’s a very distinct echo of something we saw with the previous era of tech grift, crypto. The grifters would always say, after they were confronted, “Well, there’s no way to undo it now! It’s on the blockchain!” There’s always this back-up argument of “it’s inevitable so you might as well let me do it”.

@BURN@lemmy.world

Too bad

If you can’t afford to pay the authors of the data required for your project to work, then that sucks for you, but doesn’t give you the right to take anything you want and violate copyright.

Making a data agnostic model and releasing the source is fine, but a released, trained model owes royalties to its training data.

@grue@lemmy.world

They want to kill the open-source scene

Yeah, by using the argument you just gave as an excuse to “launder” copyleft works in the training data into permissively-licensed output.

Including even a single copyleft work in the training data ought to force every output of the system to be copyleft. Or if it doesn’t, then the alternative is that the output shouldn’t be legal to use at all.

@Grimy@lemmy.world

100% agree, making all outputs copyleft is a great solution. We get to keep the economic and cultural boom that AI brings while keeping the big companies in check.

@burliman@lemmy.world

deleted by creator

@TrickDacy@lemmy.world

Whoosh

Because it’s easy to get these chatbots to output direct copyrighted text…

Even ones the company never paid for, without even a single subscription for one human to view the articles they’re reproducing. Like, think of it as buying a movie, then burning a copy for anyone who asks.

Though reproducing it word for word for people who didn’t pay is a whole other issue, so this is really more like torrenting a movie, then seeding it.

@burliman@lemmy.world

It’s not that easy; don’t believe the articles being broadcast every day. They are heavily cherry-picked.

Also, if someone is creating copyright works, it is on that person to be responsible if they release or sell it, not the tool they used. Just because the tool can be good (learns well and responds well when asked to make a clone of something) doesn’t mean it is the only thing it does or must do. It is following instructions, which were to make a thing. The one giving the instructions is the issue, and the intent of that person when they distribute is the issue.

If I draw a perfect clone of Donald Duck in the privacy of my home after looking at hundreds of Donald Duck images online, there is nothing wrong with that. If I go on Etsy and start selling them without a license, they will come after ME. Not because I drew it, but because I am selling it and violating a copyright. They won’t go after the pencil or ink manufacturer. And they won’t go after Adobe if I drew it on a computer with Photoshop.

@givesomefucks@lemmy.world

If I draw a perfect clone of Donald Duck in the privacy of my home after looking at hundreds of Donald Duck images online, there is nothing wrong with that

In your picture example it would be an exact copy…

But if you started a business and, when people asked for a picture of Donald Duck, gave them a traced copy, that would still be copyright infringement… Hell, even in your analogy, the person’s own drawing is still copyright infringement.

The worst thing about these chatbots is the people who think it’s amazing don’t understand what it’s doing. If you understood it, it wouldn’t be impressive.

@Grimy@lemmy.world

You are missing his point. Is Disney going after the one who is selling the copy online, or are they going after Adobe?

@givesomefucks@lemmy.world

In that analogy, OpenAI is the one selling it, because they’re the ones using it to prop up their product.

I didn’t think I needed to explicitly state that, but well, here we are.

Have a nice life tho. I’m over accounts that stop replying to one thread of replies and then just go and reply to one of my other comments asking me to explain what I’ve already told them.

Waaaay easier to just never see replies from that account

@Grimy@lemmy.world

Some of us have to work for a living. I can’t reply to every comment the moment it comes in, and it seems rude to break the chain.

In his analogy, OpenAI’s product was the tool. You can do the same with both image generation and Photoshop, and neither of these props up its product by implying it’s easy to infringe copyright. That’s why I said you were missing his point, but you do you, buddy.

Because humans have more rights than tools. You are free to look at copyrighted text and pictures, memorize them, and describe them to others. It doesn’t mean you can use a camera to take and share pictures of them.

Acting like every right that AIs have must be identical to humans’, and if not that means the erosion of human rights, is a fundamentally flawed argument.

Flying Squid

deleted by creator

@erranto@lemmy.world

If you have enough money, you can do whatever you want!

rivermonster

Kind of a strawman, I’d like everything to be FOSS, and if we keep Capitalism (which we shouldn’t), it should be HEAVILY regulated not the laissez-faire corporatocracy / oligarchy we have now.

I don’t want any for-profit capitalists to have any control of AI. It should all be owned by the public and all productive gains from it taxed at 100%. But open source AI models, right on.

And team SciHub–FUCK YEAH!

Flying Squid

Yeah, but did SciHub pay Nigerians a pittance to look at and read about child rape? Because- wait, I have no idea what I’m even arguing. Fuck OpenAI though.

a Kendrick fan

OpenAI did that subhuman training of ChatGPT in Kenya, not Nigeria. And since the Kenyan govt is a western lapdog these days, nothing will ever come of it.

Flying Squid

Oh, well that makes it okay then. My mistake.
