• 7fb2adfb45bafcc01c80@lemmy.world · 1 month ago

      I sent a DMCA takedown just last week to get my site removed. They’ve claimed to honor meta tags and robots.txt since 1998, but in practice they had over 1,000,000 of my pages archived going back that far. They had even archived my robots.txt, configured specifically for their crawler, from 1998.
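
      Honoring robots.txt is also trivial to implement, which makes the claim ring hollow. Here’s a minimal sketch in Python of the check a well-behaved archiver would run before fetching a page (ia_archiver is the user agent their crawler historically identified as, and example.com stands in for my domain; both are just placeholders):

          from urllib import robotparser

          # Minimal sketch: check robots.txt before archiving a page.
          # "ia_archiver" is assumed as the archive crawler's user agent;
          # "example.com" stands in for the site being crawled.
          rp = robotparser.RobotFileParser()
          rp.set_url("https://example.com/robots.txt")
          rp.read()

          if rp.can_fetch("ia_archiver", "https://example.com/some-page"):
              print("robots.txt allows archiving this page")
          else:
              print("robots.txt disallows archiving this page")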

      I’m tired of people linking to archived versions of things I worked hard to create. Sites like Wikipedia were archiving URLs and then linking to the archived copy, effectively stripping my branding and cutting off user engagement.

      Not to mention that I’m losing advertising revenue whenever someone views the site through the archive instead. I have fewer problems with archiving once the original site is gone, but mirroring and republishing active content, with no supported way to prevent it short of legal action, is ridiculous. On top of that, I lose control over what’s done with that content: are they going to let Google train AI on it under their new partnership?

      I’m not a fan. They could easily allow people to block archiving, but they choose not to. They offer a way to circumvent artist or owner control, and I’m surprised that they still exist.

      So… That’s what I think is wrong with them.

      From a security perspective it’s terrible that they were breached, but it is kind of ironic: maybe they can think of it as an archive of their passwords or something.