• 0 Posts
  • 11 Comments
Joined 1Y ago
Cake day: Jul 05, 2023


Technically, each time it is viewed it is a republication from a copyright perspective. It’s a digital copy that is redistributed; the original copy that was made doesn’t go away when someone views it. There isn’t just one copy that people pass around like a library book.


I’m thinking about it from the perspective of an artist or creator under existing copyright law. You can’t just take someone’s work and republish it.

It’s not allowed with books, it’s not allowed with music, and it’s not even allowed with public sculpture. If a sculpture shows up in a movie scene, the filmmakers need the artist’s permission and may have to pay a licensing fee.

Why should text created on the internet have lesser protections?

But copyright law is deeply rooted in damages, and lost advertising revenue is a very real example of damages.

And I have recourse; I used it. I used current law (the DMCA) to remove over 1,000,000 pages because it was my legal right to remove infringing content. If what they were doing had been legal, they wouldn’t have had to remove anything.


How do you expect an archive to happen if they are not allowed to archive while the site is still up?

I don’t want them publishing their archive while the site is up. If they archive but don’t republish while the site exists, there’s less damage.

I support the concept of archiving and screenshotting. I have my own Linkwarden server set up, and I use it all the time.

But I don’t republish anything that I archive, because that dilutes the value of the original creator’s work.


Yes, some Wikipedia editors are submitting the pages to archive.org and then linking to that instead of to the actual source.

So when you go to the Wikipedia page, it takes you straight to archive.org – that is the reader’s first stop.


You misunderstood. If they view the site at the Internet Archive, our site loses out on the opportunity for ad revenue.


I just sent a DMCA takedown last week to remove my site. They’ve claimed to follow meta tags and robots.txt since 1998, but no: they had over 1,000,000 of my pages going back that far. They even had an archived copy, from 1998, of the robots.txt that was configured to exclude them.
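
For context, the opt-out they claim to honor is a crawler exclusion in robots.txt. A minimal sketch is below; the ia_archiver user-agent is the name commonly associated with their crawler, and whether it is actually respected is exactly what’s in dispute here, so treat both as assumptions.

```
# robots.txt at the site root
# Assumes the crawler identifies as ia_archiver and still honors this rule
User-agent: ia_archiver
Disallow: /
```

The per-page equivalent is a <meta name="robots" content="noarchive"> tag, which is presumably the “meta tags” part of their claim.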

I’m tired of people linking to archived versions of things that I worked hard to create. Sites like Wikipedia were archiving URLs and then linking to the archive, effectively removing branding and blocking user engagement.

Not to mention that I’m losing advertising revenue if someone views the site in an archive. I have fewer problems with archiving if the original site is gone, but to mirror and republish active content with no supported way to prevent it short of legal action is ridiculous. Not to mention that I lose control over what’s done with that content – are they going to let Google train AI on it with their new partnership?

I’m not a fan. They could easily allow people to block archiving, but they choose not to. They offer a way to circumvent artist or owner control, and I’m surprised that they still exist.

So… That’s what I think is wrong with them.

From a security perspective it’s terrible that they were breached. But it is kind of ironic – maybe they can think of it as an archive of their passwords or something.


Here they’re pushing the “must be within 60 miles of the office” trope; I bet they’d say to drive in if it’s after hours.


It’s already been done, and will soon be revealed…

In the middle of his cage match with Mark Zuckerberg, Musk will say “No, I am your father.” After Zuck yells “Noooo!” he’ll follow up with, “Well, just the AI parts.”


I still use Perl for most things – it’s my go-to language when I have to get something done quickly. And quickly doesn’t have to mean small one-liner scripts.

My biggest reason for using it is that mod_perl is still blazingly fast.
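
For anyone curious what that looks like in practice, a mod_perl (2.x) response handler is just a Perl module loaded into Apache, so the interpreter and your code stay resident between requests instead of being spawned per hit – that’s where the speed comes from. A minimal sketch, with made-up module and handler names:

```perl
# MyApp/Hello.pm – illustrative mod_perl2 response handler
package MyApp::Hello;

use strict;
use warnings;

use Apache2::RequestRec ();            # provides $r->content_type
use Apache2::RequestIO  ();            # provides $r->print
use Apache2::Const -compile => qw(OK); # compile the OK status constant

sub handler {
    my $r = shift;                     # the Apache2::RequestRec for this request
    $r->content_type('text/plain');
    $r->print("Hello from mod_perl\n");
    return Apache2::Const::OK;         # tell Apache the response is complete
}

1;
```

It gets wired up in the Apache config with something like SetHandler perl-script plus PerlResponseHandler MyApp::Hello inside a <Location> block.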


I thought this was an article about the X Window System based on the preview for the article. Boy, are those two similar-looking.


Maybe they ‘won’, but I don’t count a Pyrrhic victory as winning. It will take years to recover.