• @1984@lemmy.today
    108 points • 2 months ago

    I guess this is an attempt to discredit them.

    Having worked at many, many companies, I can say security is usually very bad. This is typical. Not rotating access tokens is also very common.

    • @Shdwdrgn@mander.xyz
      26 points • 2 months ago

      Discrediting someone usually has the goal of pushing customers to another source, though. There is no other source for this information, so what would be the point?

      • @qfe0@lemmy.dbzer0.com
        108 points • 2 months ago (edited)

        Destroy a source of historical documents so that the past can be contested. Sow doubt, confusion, deniability. Hide evidence of past crimes, or inconvenient documents. Plant documents, etc.

          • ɐɥO
            2 points • 2 months ago

            I really hate that Reddit slang, but username checks out.

        • @Broken@lemmy.ml
          2 points • 2 months ago

          Sow doubt. As in spreading it like seeds to take root and grow. 100% in agreement with you, just being a grammar Nazi. Carry on.

        • @Gigasser@lemmy.world
          1 point • 2 months ago (edited)

          Lol, we should create a society of sorts along the lines of the original Bavarian Illuminati. Create a decentralized storage network and archive of knowledge and history. Create a list of important shit that needs to be archived, and delegate standardized chunks of data (let's say 5 or 10 GB each) to be downloaded by people. Any time 5 or 10 people have downloaded a chunk, strike it off the priority list and move on to the next chunk. For this to work it needs a lot of people, though.
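The delegation scheme above is simple enough to sketch. This is coordination logic only, not storage; the chunk names, volunteer names, and the 5-copy target are all hypothetical:

```python
# Sketch of the chunk-delegation idea from the comment above: hand out
# fixed-size chunks until each one has enough volunteer copies, then
# strike it off the priority list.

TARGET_REPLICAS = 5  # "anytime 5 or 10 people have downloaded a chunk"

class ChunkLedger:
    def __init__(self, chunk_ids):
        # chunk id -> set of volunteers currently holding a copy
        self.holders = {cid: set() for cid in chunk_ids}

    def record_download(self, chunk_id, volunteer_id):
        self.holders[chunk_id].add(volunteer_id)

    def next_chunk(self):
        """Return the least-replicated chunk still below target, or None."""
        open_chunks = [c for c, held in self.holders.items()
                       if len(held) < TARGET_REPLICAS]
        if not open_chunks:
            return None  # everything is archived enough times
        return min(open_chunks, key=lambda c: len(self.holders[c]))

ledger = ChunkLedger(["chunk-000", "chunk-001"])
for volunteer in ("alice", "bob", "carol", "dave", "erin"):
    ledger.record_download("chunk-000", volunteer)
# chunk-000 hit its target, so the next volunteer is pointed at chunk-001
assert ledger.next_chunk() == "chunk-001"
```

A real version would also have to handle volunteers disappearing, i.e. dropping holders that stop seeding.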

    • queermunist she/her
      50 points • 2 months ago (edited)

      People use Archive links to avoid giving sites traffic.

      This is a problem for advertisers and media corps.

      Not saying they’re the ones doing this, but they’d definitely benefit.

    • Draconic NEO
      38 points • 2 months ago

      Well, right-wingers want to ban books, and services like the IA make that harder since they provide easy access to download or digitally borrow those books. It's harder to deny people access to books they can find online. Of course there are other ways people can still obtain those books - the IA isn't the only one - but it's the easiest and the most convenient.

      • @rottingleaf@lemmy.world
        -28 points • 2 months ago

        I’ll give you my opinion though you haven’t asked for it:

        Some right wingers (libertarian mostly) don’t want to ban books, they want books in fact to be reliably available, and having one centralized Internet Archive to store all of them is not reliable.

        (Or, by the same logic, they want humanity to be knowledgeable and resistant to propaganda, and see treating sources' availability as a given as harmful to that goal - naive people can believe wrong things.)

        See the Babylon 5 example of kicking the ant hive again and again toward some well-meaning goal of the evolutionary kind.

        Mind that I don’t think these people have such an intent.

        It's just that in my childhood someone gaslit me into trying to be optimistic in such cases. Like, "if someone is digging a grave for you, just wait till they're done - you'll get a nice pond." It's the same as a precedent created with one intent and interpretation that then works for all possible intents and interpretations, because it's a real-world event.

        So, gaslighting aside, real effects are real - including positive ones, like all of us right now realizing that a centralized IA is unacceptable. We need something like "IA@home": forkable without duplicating the data, so that someone who hijacked the private key (or whatever identifies the new IA's authority) couldn't harm existing versions, and the forks wouldn't require much more storage.

        Shit, I can't stop thinking about that "common network and identities and metadata exchange, but data storage shared per community one joins, Freenet-like" idea, but I don't even remotely know where to start developing it and doubt I ever will.

        • @towerful@programming.dev
          10 points • 2 months ago

          4 years ago (the best number I can find, considering IA's blog pages are down) the IA held about 50 petabytes on servers that each have 250 terabytes of storage and a 2 Gbps network link.
          From this, we can conclude that 1 TB of storage requires about 8 Mbps of network speed.
          Let's just say that average residential broadband has 8 Mbps of spare symmetrical bandwidth.
          We would need 50,000 volunteers contributing 1 TB each to cover the absolute minimum.
          Probably 100k to 200k to have any sort of reliability, considering it's all residential networking and commodity hardware.

          In the last 4 years, I imagine IA has increased their storage requirements significantly.
          And all of that would need to be coordinated, so some shards don't get over-replicated while others go unhosted.
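The arithmetic above is worth writing down. The 50 PB, 250 TB, and 2 Gbps figures come from the comment; the 1 TB-per-volunteer contribution and the replication factors are assumptions:

```python
# Back-of-envelope capacity math for a volunteer-run IA mirror.
archive_tb = 50_000               # ~50 PB of archive data
mbps_per_tb = 2_000 / 250         # 2 Gbps serving 250 TB -> 8 Mbps per TB

volunteer_tb = 1                  # assumed contribution per volunteer
volunteer_mbps = volunteer_tb * mbps_per_tb   # 8 Mbps of spare uplink

min_volunteers = archive_tb // volunteer_tb   # one copy of everything
for replicas in (1, 2, 4):
    print(f"{replicas}x replication -> {min_volunteers * replicas:,} volunteers")
```

One copy already needs 50,000 people; any real redundancy lands in the 100k-200k range the comment estimates.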

          • @rottingleaf@lemmy.world
            -1 point • 2 months ago

            This seems to confirm my critique of the "manual" solutions with torrents and such offered in other comments, and it led to the idea briefly described in the comment you were answering.

            Yes, this would require a lot of people, but some would contribute more and some less, just like with other public P2P solutions.

            From my POV the biggest problem is synchronizing the indexes of such a storage (similar to a superblock, maybe) and balancing replication based on them, in a decentralized way - because those indexes by themselves would not be small.

            There should also be all the usual stuff with controlling data integrity.

            I think it's realistic to attract many volunteers if the thing in question is also the user client (socially similar to Freenet and torrents), and if contributing more storage lets people fetch the things they access most often faster, as a cache. But then balancing that against storing necessary-but-unpopular parts of the space is an open question.

            I think I need to read up.
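One hypothetical way to balance those two pulls - replicating scarce chunks vs. caching what the local user actually reads - is to fold both into a single priority score. The weighting here is invented:

```python
# Toy priority score: network scarcity dominates, local popularity
# breaks ties. alpha is an invented tuning knob.

def chunk_score(current_replicas, target_replicas, local_hits, alpha=0.1):
    """Higher score = fetch/keep this chunk first."""
    scarcity = max(target_replicas - current_replicas, 0)  # network need
    popularity = alpha * local_hits                        # cache benefit
    return scarcity + popularity

# A rare chunk outranks a popular but already well-replicated one:
rare = chunk_score(current_replicas=1, target_replicas=5, local_hits=0)
hot = chunk_score(current_replicas=20, target_replicas=5, local_hits=30)
assert rare > hot
```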

            • @locuester@lemmy.zip
              2 points • 2 months ago (edited)

              There are really good, incentivized versions of decentralized storage networks. Unfortunately discussions about them are stigmatized under the “crypto” umbrella so the mere mention typically gets you buried.

              If you have an open mind, check them out!

  • @grue@lemmy.world
    80 points • 2 months ago

    Okay, enough is enough. The Internet Archive is both essential infrastructure and an irreplaceable historical record; it cannot be allowed to fall. Rather than just hoping the Archive can defend itself, I say it's time to hunt down and counterattack the scum perpetrating this!

    • @psycotica0@lemmy.ca
      40 points • 2 months ago

      Knowing the folks at IA, I'm sure they would love a backup. They would love a community. I'm sure they don't want to be the only ones doing this. But dang, they've got like 99 petabytes of data. I don't know about you, but my NAS doesn't have that laying around…

      • @el_abuelo@programming.dev
        11 points • 2 months ago (edited)

        I wonder if someone can come up with some kind of distributed storage that isn’t insanely slow. Kinda like a CDN but on personal devices. I’m thinking like SETI@HOME did with distributed compute.

        Edit: this is kinda like torrents but where the contents are changing frequently.

        • @psycotica0@lemmy.ca
          10 points • 2 months ago

          You should look up IPFS! It’s trying to be kinda like that.

          It’ll always be slower than a CDN, though, partly because CDNs pay big money to be that fast, but also anything p2p is always going to have some overhead while the swarm tries to find something. It’s just a more complicated problem that necessarily has more layers.

          But that doesn't mean it's not possible for it to be "fast enough".

          • @sugar_in_your_tea@sh.itjust.works
            5 points • 2 months ago

            And there’s a promising new IPFS-like system called Iroh, which should have a lot less overhead and in general just be faster than IPFS. It’s not quite ready to just switch to right now, but an enterprising individual could probably make something useful with it without too much work (i.e. months, not years).

            I’m using it for a distributed application project right now, but the intent is a bit different than the IA use-case.

      • @notfromhere@lemmy.ml
        3 points • 2 months ago

        That is an insane amount of storage. How much does it grow every year and is it stable growth or accelerating?

  • @zlatiah@lemmy.world
    47 points • 2 months ago

    This again??

    This time, once archive.org is back online… is it possible to get torrents of some of their popular collections? For example, I wouldn't imagine their catalog of books with expired copyright to be very big. I'd love a community way to keep the data alive if something even worse happens in the future (and their track record isn't looking good right now).

    • @njordomir@lemmy.world
      13 points • 2 months ago

      Yep, that seems like the ideal decentralized solution. If all the info can be distributed via torrent, anyone with spare disk space can help back up the data and anyone with spare bandwidth can help serve it.

      • @Shdwdrgn@mander.xyz
        8 points • 2 months ago

        Most of us can’t afford the sort of disk capacity they use, but it would be really cool if there were a project to give volunteers pieces of the archive so that information was spread out. Then volunteers could specify if they want to contribute a few gigabytes to multiple terabytes of drive space towards the project and the software could send out packets any time the content changes. Hmm this description sounds familiar but I can’t think of what else might be doing something similar – anyone know of anything like that that could be applied to the archive?

        • @njordomir@lemmy.world
          5 points • 2 months ago

          Yeah, the projects I've heard of that have done something like this broke the data into multiple torrents.

          For example, 1000 GB could be broken into forty 25 GB torrents, and within each, you can tell the client to only download some of the files.

          At scale, a webpage can show the seed/leech numbers and averages for each torrent over a time period, to give an idea of what is well mirrored and what people could shore up. You could also rotate which torrent is shown as the top download when people visit the contributor page and say they want to help host, ensuring a better distribution.
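The splitting and "surface the least-seeded torrent first" logic is small enough to sketch; the torrent names and seeder counts here are invented:

```python
# Split a collection into fixed-size torrents, then point the next
# volunteer at whichever one has the fewest seeders.

CHUNK_GB = 25

def split(total_gb, chunk_gb=CHUNK_GB):
    full, remainder = divmod(total_gb, chunk_gb)
    return [chunk_gb] * full + ([remainder] if remainder else [])

pieces = split(1000)
assert len(pieces) == 40  # forty 25 GB torrents, as in the comment

# torrent name -> seeder count, as scraped from a hypothetical stats page
seeders = {"part-00": 12, "part-01": 3, "part-02": 7}
needs_help = min(seeders, key=seeders.get)  # show this one on top
assert needs_help == "part-01"
```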

        • @rottingleaf@lemmy.world
          0 points • 2 months ago

          Since I'm spamming with this same idea right now - the description is similar to Freenet (the old one, Hyphanet), but you'd need some way to choose which parts of which collections get stored in your contributed storage, whereas with Freenet it's the whole network. (You could form a separate F2F net - there is such an option - but then there's no way to be sure all peers store only IA data and not, say, their own porn collections, eating precious storage.) I've described one idea in my previous comment, but it's purely an idea; I'm nowhere close to having the knowledge to build it.

      • @rottingleaf@lemmy.world
        1 point • 2 months ago

        There's an issue with torrents: only the most popular ones get replicated, and the process is manual/social.

        Something like Freenet is needed, which automatically "spreads" data over the machines contributing storage - but Freenet is unreliable storage, basically a cache where older and unwanted stuff gets erased.

        So it should be something like Freenet, but possibly with "clusters" or "communities", each with a central (cryptographic) authority able to determine the state of a collection of data as a whole and pick priorities. My layman's understanding is that this would be something between Freenet and Ceph, LOL - more like a cluster filesystem spread over many nodes than a cache.
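That "per-community cryptographic authority" could be as simple as a signed manifest of chunk hashes that any node checks its data against. In this sketch, HMAC with a shared key stands in for a real public-key signature (an actual system would use something like Ed25519), so it is illustration only:

```python
import hashlib
import hmac
import json

AUTHORITY_KEY = b"demo-only-community-key"  # hypothetical; never hardcode keys

def make_manifest(chunks):
    """Authority side: hash every chunk and sign the resulting index."""
    index = {name: hashlib.sha256(data).hexdigest()
             for name, data in chunks.items()}
    body = json.dumps(index, sort_keys=True).encode()
    sig = hmac.new(AUTHORITY_KEY, body, hashlib.sha256).hexdigest()
    return body, sig

def verify_chunk(name, data, body, sig):
    """Node side: check the manifest signature, then the chunk hash."""
    expected = hmac.new(AUTHORITY_KEY, body, hashlib.sha256).hexdigest()
    if not hmac.compare_digest(sig, expected):
        return False  # manifest itself was tampered with
    index = json.loads(body)
    return index.get(name) == hashlib.sha256(data).hexdigest()

body, sig = make_manifest({"book-001": b"some archived bytes"})
assert verify_chunk("book-001", b"some archived bytes", body, sig)
assert not verify_chunk("book-001", b"tampered bytes", body, sig)
```

The signed index is also roughly the "superblock" problem from the earlier comment: it's the one piece that must be distributed and trusted before any chunk can be judged authentic.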

        • @njordomir@lemmy.world
          1 point • 2 months ago (edited)

          You have more knowledge on this than I do. I enjoyed reading about Freenet and Ceph. I have dealt with cloud stuff, but not as much at the technical-underpinnings level. My first Freenet impression from reading some articles gives me 90s internet vibes, based on the common use cases they listed.

          I remember Ceph because I once ended up building it from the AUR on my weak little personal laptop, because it got dropped from some repository or whatever but was still flagged to stay installed. I could have saved myself an hours-long build if I had read the release notes.

          • @rottingleaf@lemmy.world
            1 point • 2 months ago

            My first Freenet impression from reading some articles gives me 90s internet vibes, based on the common use cases they listed.

            That’s correct, I meant the way it works.

    • @SilentStorms@lemmy.dbzer0.com
      3 points • 2 months ago

      Anna's Archive does this. I think it's a really good way to make it difficult to take them down.

      Hopefully this hack starts some conversations on how they can ensure longevity for their project. Seems they’re being attacked on multiple fronts now.

    • @ikidd@lemmy.world
      26 points • 2 months ago

      Since it’s Reddit, I would guess copyright sockpuppets are steering the narrative to help damage them further.

  • @obbeel@lemmy.eco.br
    10 points • 2 months ago (edited)

    Apparently, BlackMeta is behind the DDoS attack on the Internet Archive. They appear to be pro-Palestine hacktivists - their X account also has some Russian written in it.

    (Edit) Also, the Internet Archive has been banned in China since 2012 and in Russia since 2015.

  • nickwitha_k (he/him)
    6 points • 2 months ago

    Quick question for those more in the know: have these events disrupted IA's ability to archive pages? I ask because I was recently talking with a security guy about a novel malware that used a hacked webpage for command injection. One possible motive that came to mind, if archiving were disrupted, would be to cover tracks for similar malware: inject code, perform the malicious activity, revert - then there's more time before the control code is discovered.
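If archiving keeps running, that inject-then-revert pattern is exactly what snapshots make visible: a page whose content hash changes and then changes back. A toy detector over (timestamp, content) snapshots, with invented data:

```python
import hashlib

def revert_windows(snapshots):
    """Yield (start, end) timestamps where content changed and then reverted."""
    hashes = [(ts, hashlib.sha256(body).hexdigest()) for ts, body in snapshots]
    for i in range(len(hashes) - 2):
        (t0, h0), (_, h1), (t2, h2) = hashes[i], hashes[i + 1], hashes[i + 2]
        if h0 == h2 and h0 != h1:   # A -> B -> A: injected, then reverted
            yield (t0, t2)

history = [
    ("day1", b"<html>clean</html>"),
    ("day2", b"<html>clean<script>evil()</script></html>"),
    ("day3", b"<html>clean</html>"),
]
assert list(revert_windows(history)) == [("day1", "day3")]
```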