• Biran
    13 • 1 year ago

    I’ve seen many where the captchas are generated by an AI…
    It’s essentially one set of humans programming an AI to prevent an attack from another AI owned by a different set of humans. Does this technically make it an AI war?

    • Unaware7013
      3 • 1 year ago

      Adversarial training is pretty much the MO for a lot of the advanced machine learning algorithms you’d see for this sort of task. Attacking your own algorithm helps the model learn, and it also shows you how to protect against a real malicious actor attacking it later.
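      A toy sketch of that loop (every name and number here is my own illustration, not any real anti-bot system): a “defender” fits a score threshold separating humans from bots, an “attacker” nudges bot scores just under that threshold to evade it, and the defender retrains on the evasive samples.

      ```python
      import random

      def fit_threshold(humans, bots):
          """Defender: midpoint between the highest human score and lowest bot score."""
          return (max(humans) + min(bots)) / 2

      def attack(bots, threshold, step=0.05):
          """Attacker: lower each bot's score until it slips just under the threshold."""
          return [min(b, threshold - step) for b in bots]

      # Toy data: humans score low, bots score high on some detection feature.
      random.seed(0)
      humans = [random.uniform(0.0, 0.4) for _ in range(50)]
      bots = [random.uniform(0.6, 1.0) for _ in range(50)]

      t0 = fit_threshold(humans, bots)
      threshold = t0
      for _ in range(3):
          bots = attack(bots, threshold)           # attacker adapts to the current model
          threshold = fit_threshold(humans, bots)  # defender retrains on evasive samples
      ```

      Each round the margin between the two classes shrinks, which is the arms race in miniature.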

      • Bizarroland
        1 • 1 year ago

        So what you’re saying is that we should train an AI to detect AIs, so that only the human beings survive on the site. The problem is: how do you train the AI? It would need some sort of meta interface where it could analyze the IP address of every single person that posts, and the time frames in which they post.

        It would make some sense that a large portion of bots would be run from relatively similar locations, IP-wise, since it’s a lot easier to run a large bot farm from a data center than from 1,000 different people’s houses.

        You could probably filter out the most egregious bot farms by doing that, but some would still slip through.
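        That IP-concentration filter could be as simple as counting posts per /24 subnet and flagging any subnet that contributes an outsized share of traffic (the function names and the 5% cutoff are just illustrative):

        ```python
        from collections import Counter

        def subnet24(ip):
            """Collapse an IPv4 address to its /24 prefix: '203.0.113.7' -> '203.0.113'."""
            return ip.rsplit(".", 1)[0]

        def flag_dense_subnets(post_ips, max_share=0.05):
            """Flag /24 subnets contributing more than max_share of all posts."""
            counts = Counter(subnet24(ip) for ip in post_ips)
            total = len(post_ips)
            return {net for net, n in counts.items() if n / total > max_share}
        ```

        A data-center bot farm lights up as one very dense subnet, while 1,000 residential users spread thin across many subnets stay below the cutoff.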

        After that you would need to train it on heuristics to identify the kinds of conversations these bots would have with each other, not knowing that the others are bots, given that each of them is using LLaMA or GPT, and the kinds of conversations that would start.
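        As a crude stand-in for those heuristics (the phrase list and weights here are purely my own guesses, not a trained model): score an account by boilerplate LLM phrases plus how uniform its comment lengths are, since LLM replies tend to come out samey.

        ```python
        import statistics

        # Hypothetical "tells" of LLM-generated replies; a real system would learn these.
        LLM_TELLS = ("as an ai language model", "i cannot", "certainly!", "great question")

        def bot_likelihood(comments):
            """Crude heuristic score in [0, 1]: boilerplate phrases plus
            unusually uniform comment lengths."""
            text = " ".join(comments).lower()
            phrase_hits = sum(t in text for t in LLM_TELLS) / len(LLM_TELLS)
            lengths = [len(c.split()) for c in comments]
            uniformity = 1.0
            if len(lengths) > 1 and statistics.mean(lengths) > 0:
                cv = statistics.stdev(lengths) / statistics.mean(lengths)
                uniformity = max(0.0, 1.0 - cv)  # low length variance -> high uniformity
            return 0.5 * phrase_hits + 0.5 * uniformity
        ```

        A few canned phrases plus near-identical comment lengths push the score up; varied, phrase-free comments keep it near zero.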

        I guess the next step would be giving people an opportunity to prove that they’re not bots if they end up accidentally saying something the way a bot would say it, but then you get into the whole “you need to either pay for access or provide government ID” issue, and that’s its own can of worms.