• 272 Posts
  • 4.61K Comments
Joined 2 years ago
Cake day: June 7th, 2023






  • Originally charged with second-degree murder… Darren Bulldog pleaded guilty to manslaughter… after negotiations between defence lawyer Anna Konye and prosecutors Aaron Rankin and Britta Kristensen.

    Disgraceful. There was nothing to negotiate!

    Shortly after the victim arrived, Bulldog confronted him about the debt and then “signalled” to the others in the room, who began to assault the 22-year-old, according to the agreed statement of facts.

    Several people in the home began assaulting Crane and then bound his hands and feet with duct tape before administering a “hot shot” — a lethal dose of fentanyl.

    Crane’s body was then taken to a bathroom, where he was dismembered.

    Sounds like first-degree (premeditated) murder, plus indignity to a dead body under the Criminal Code (5 years for that), and they scattered the remains of the deceased in order to hide the body!

    There should have been multiple life sentences for everyone who participated!





  • I’d just have to ignore most “user-generated” content.

    Dead Internet hypothesis is only applicable to user-generated content platforms.

    AI will affect far more than just social media shit posts, though.

    All news (local, national, global). Educational sites. Medical information. Historical data. Almanacs/encyclopedias. Political information. Information about local services (e.g. outages). Interviews. Documentaries.

    I mean, all of these can be easily manipulated now in any medium. It’s only a matter of how quickly AI saturates these spaces.

    Trustworthy sources will be few and far between, drowned out by content that can be generated thousands of times faster than real humans can produce it.

    What then?

    I even worry about things like printed encyclopedias being manipulated, too. We stand to lose real human knowledge if AI content continues to accelerate.

    There really aren’t viable alternatives to those things, unless they are created anew, as they existed before the internet was used for everything.


  • Gen AI has been around for a while. It’s made things worse, but it’s not like there aren’t real users anymore. I don’t see why that would suddenly change now

    For context, we went from AI-generated images and videos (e.g. Will Smith eating spaghetti) being laughably bad and only good for memes, to video content that is convincing in essentially every way - in under two years!

    The accessibility, scale, quality, and power of AI have changed things, and will RAPIDLY improve even further in a much shorter period of time.

    That’s the difference. AI from 2023 couldn’t fool your grandma. AI from 2025 and beyond will fool entire populations.


  • I think there are going to be tools to identify networks of people and content you don’t want to interact with. This website is pushed by that social media account, which is boosted by these 2000 accounts that all exhibit bot-like behavior? Well let’s block the website, of course, but also let’s see who else those 2000 bots promote; let’s see who else promotes that website.

    In an ethical, human-first world, that would be the case.

    Do you think that social media platforms, which run on stealing attention from users so they can harvest their private data and behaviour history, would want to block content that’s doing exactly that? No way. Not ever.

    And the incentive to make easy money drives users, who otherwise wouldn’t have the skill or talent to create and present content, to type in a prompt and send it as a post… over and over, automated so no effort at all needs to be made. Do this a million times over, and there’s no way to avoid it.

    And once we get to the point where AI content can be generated on-the-fly for each doom-scrolling user based on their behaviour on the platform, it’s game over. It’ll be like digital meth, but disguised to look funny/sexy/informative/cute/trustworthy.

    I’m using tools to blacklist AI sites in search, but the lists aren’t keeping up, and they don’t extend beyond search.

    There will come a point, probably very soon, where companies will figure out how to deliver ads and AI content as if it were from the original source content, which will make it impossible to block or filter out. It’s a horrific thought, TBH.



  • Thank you for your thoughtful reply.

    I grew up when the internet was still dial-up, so I think I could adapt to going back to the “old way” of doing things.

    But unless society moves in that same direction, it would seem that things would become more and more difficult. We can’t rely on old books and already-created content to move us forward.

    I’ve been finding more value in IRL contact with other people these days. But I don’t think everyone has that luxury, I’m afraid.


  • There will always be a place like Lemmy, where AI-generated content will be filtered through real, intelligent, moral, empathetic people. So we’ll continue to block and analyze and filter as much of the churn as we can…

    As much as I appreciate the optimism, that’s not realistic seeing how fast technology is going.

    If you amped up indistinguishable-from-real bot activity by 1000 or a million times, there would be no effective filter. That’s what I know is coming, to every corner of the internet.

    Other options such as paywalling, invite-only, and other such barriers only serve to fragment and minimize the good that the internet can do for humanity.


  • It’s probably taking my content more seriously than necessary, but I take pride in what I post and I want to be seen as a trusted person in the community.

    Plot twist: How do I know you aren’t a bot? /s

    As information multiplies, and people have less time to apply critical thinking or skepticism to what they see, we’ll have an extremely hard time sorting through it all.

    Even if we had a crowdsourced system that generates a whitelist of trusted sites, bots could easily overwhelm such a system, skewing the results. If Wikipedia, for example, had bots tampering with the data at a million times the rate that it does now, would anyone still want to use it?

    One option might be an invite-only system, and even that would create new problems with fragmentation and exploitation.

    Trust is almost becoming a thing of the past because of unprecedented digital threats.




  • I’ve been pretty reserved on my opinion about AI ruining the internet until a few days ago.

    We’re now seeing videos with spoken dialogue that look incredibly convincing. No more “uncanny valley” to act as a mental safeguard.

    We can only whitelist websites and services for so long before greed consumes them all.

    I mean, shit, you might already have friends or family using AI generated replies in their text messages and email… not even our real connections are “real”.