Archived link

Polyfill.js is a popular open-source library used to support older browsers; 100K+ sites embed it via the cdn.polyfill.io domain. Notable users include JSTOR, Intuit, and the World Economic Forum. However, in February this year, a Chinese company bought the domain and the GitHub account. Since then, the domain has been caught injecting malware into mobile devices via any site that embeds cdn.polyfill.io. Complaints were quickly removed (archive here) from the GitHub repository.
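
For illustration only, here is roughly what such an embed looks like, and one common way to remove the third-party dependency. The feature list and local path below are made-up examples, not taken from any particular affected site:

    <!-- Typical embed: the script is fetched from a third-party domain on every
         page load, so whoever controls cdn.polyfill.io controls what runs in
         visitors' browsers. -->
    <script src="https://cdn.polyfill.io/v3/polyfill.min.js?features=fetch%2CPromise"></script>

    <!-- One mitigation: drop the tag entirely, or self-host a pinned copy of the
         polyfills actually needed, so no remote party can swap the payload. -->
    <script src="/assets/polyfill.min.js"></script>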

  • @efstajas@lemmy.world
    5 months ago

    > I know. Just the “full-stack meta frameworks” part alone makes any ADHD person feel nausea.

    But why? What’s bad about this?

    > I disagree. Geminispace is very usable without scripts

    That’s great; I’m not saying that it’s impossible to make usable apps without JS. I’m saying that the capabilities of websites would be greatly reduced without JS being a thing. Sure, a forum can be served as fully static pages. But the web can support many more advanced use-cases than that.

    > If only one paradigm must remain, then naturally I pick mine. If not, then there’s no problem and I still shouldn’t care.

    So you can see that other people have different needs to yours, but you think those shouldn’t be considered? We’re arguing about the internet. It’s a pretty diverse space.

    > For me it’s obvious that embeddable cross-platform applications as content inside hypertext are much better than turning a hypertext system into some overengineered crappy mess of a cross-platform application system.

    Look, I’m not saying that the web is the most coherent platform to develop for or use, but it’s just where we’re at after decades of evolving needs that had to be met.

    That said, embedded interactive content is absolutely not better than what we have now. For one, both Flash and Java Applets were mostly proprietary technologies, placing far too much trust in the corpos developing them. There were massive cross-platform compatibility problems, and neither were in any way designed for or even ready for a responsive web that displays well on different screen sizes. Accessibility was a big problem as well, given that an entirely different accessibility paradigm was necessary inside the embedded content vs. the HTML+CSS shell around it.

    Today, the web can do everything Flash + Java Applets could do and more, except in a way that’s not proprietary but based on shared standards, one that’s backwards-compatible, builds on top of foundational technologies like HTML rather than around them, and can actually keep up with the plethora of different client devices we have today. And speaking of security — sure, maybe web browsers were pretty insecure back then generally, but I don’t see how you can argue that a system requiring third-party browser plug-ins that have to be updated separately from the browser can ever be a better basis for security than just relying entirely on the (open-source!) JS engine of the browser for all interactivity.

    > I ask you for links and how many clicks and fucks it would take to make one with these, as opposed to back then. These are measurable, scientific things. Ergonomics is not a religion.

    The idea that any old website builder back in the day was more “ergonomic” while even approaching the result quality and capabilities of any no-code homepage builder solution you can use today is just laughable. Sorry, but I don’t really feel the burden of proof here. And I’m not even a fan of site builders, I would almost prefer building my own site, but I recognize that they’re the only (viable) solution for the majority of people just looking for a casual website.

    Besides — there’s nothing really preventing those old-school solutions from working today. If they’re so much better than modern offerings, why didn’t they survive?

    • @rottingleaf@lemmy.zip
      5 months ago

      > But why? What’s bad about this?

      What I said, literally.

      > But the web can support many more advanced use-cases than that.

      Which can be done with something embeddable, and not by breaking a hypertext system.

      > So you can see that other people have different needs to yours, but you think those shouldn’t be considered? We’re arguing about the internet. It’s a pretty diverse space.

      If those people don’t consider mine, then I don’t consider theirs. If I must consider theirs, they must consider mine.

      > Look, I’m not saying that the web is the most coherent platform to develop for or use, but it’s just where we’re at after decades of evolving needs that had to be met.

      That says nothing. It’s a market/evolution argument: if something changes tomorrow, that too will be the result of evolution; if somebody uses a different system, then that’s the result for them.

      > That said, embedded interactive content is absolutely not better than what we have now. For one, both Flash and Java Applets were mostly proprietary technologies, placing far too much trust in the corpos developing them.

      And today’s web browsers are as open as Microsoft’s OOXML. De facto proprietary.

      > There were massive cross-platform compatibility problems,

      For Flash? Are you sure? I don’t remember any such problems.

      > and neither were in any way designed for or even ready for a responsive web that displays well on different screen sizes.

      Nothing was. Doesn’t tell us anything.

      > Accessibility was a big problem as well, given that an entirely different accessibility paradigm was necessary inside the embedded content vs. the HTML+CSS shell around it.

      Yes, but an applet’s problems in that area wouldn’t spread to the HTML page embedding it. Unlike now.

      > Today, the web can do everything Flash + Java Applets could do and more, except in a way that’s not proprietary but based on shared standards, one that’s backwards-compatible, builds on top of foundational technologies like HTML rather than around them, and can actually keep up with the plethora of different client devices we have today.

      I’ve already said how it’s similar to OOXML. Only MS documented the then-proprietary standard of their proprietary program and made it open, while Chromium is itself open, yet somehow that doesn’t make things better.

      > And speaking of security — sure, maybe web browsers were pretty insecure back then generally, but I don’t see how you can argue that a system requiring third-party browser plug-ins that have to be updated separately from the browser can ever be a better basis for security than just relying entirely on the (open-source!) JS engine of the browser for all interactivity.

      That’s similar to the Apple walled-garden arguments. It’s valuable in areas other than security, because it separates power between the browser developer and the plugin developer. And fighting monoculture is also good for security.

      Also people still use plugins, still separately updated, which still get compromised.

      Also plugins can be properly sandboxed.

      > The idea that any old website builder back in the day was more “ergonomic” while even approaching the result quality and capabilities of any no-code homepage builder solution you can use today is just laughable. Sorry, but I don’t really feel the burden of proof here. And I’m not even a fan of site builders, I would almost prefer building my own site, but I recognize that they’re the only (viable) solution for the majority of people just looking for a casual website.

      Sorry, I still do feel that burden of proof, because for a static site like in 2002 I’d just export a page from OpenOffice, edit some links, and upload it.