• @bufordt@sh.itjust.works · 30 points · 1 year ago

    It’s similar in IT. Almost no one recommends regular password changes anymore, but we won’t pass our audit if we don’t require password changes every 90 days.

      • @bufordt@sh.itjust.works · 10 points · 1 year ago

        When we first switched to JD Edwards, it still sent passwords in plain text, and our Oracle partner set up our WebLogic instances over HTTP instead of HTTPS.

        I had to prove I could steal passwords as just a local admin on a workstation before they made encrypting the traffic a priority.
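        The point above is easy to demonstrate: over plain HTTP, a login request travels as readable text, so anyone positioned to capture packets sees the credential directly. A minimal sketch (hypothetical host and credentials) that constructs a login POST as it would appear on the wire:

```python
from urllib.parse import urlencode

# A login form POST body exactly as it would appear on the wire over
# plain HTTP (hypothetical credentials for illustration).
body = urlencode({"username": "jdoe", "password": "hunter2"})
request = (
    "POST /login HTTP/1.1\r\n"
    "Host: erp.example.com\r\n"
    "Content-Type: application/x-www-form-urlencoded\r\n"
    f"Content-Length: {len(body)}\r\n"
    "\r\n"
    f"{body}"
)

# Anyone who can observe the traffic (e.g. a local admin running a
# capture tool on a workstation) can read the password in cleartext.
print("hunter2" in request)
```

        With HTTPS, the same bytes would be encrypted in transit, which is why encrypting the traffic closes this hole.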

    • @InfiniWheel@lemmy.one · 7 points · 1 year ago

      A very non-techy relative works at a company that requires password changes every month. At this point his passwords are extremely easy to guess: basically 123aBc+ and variations of it.
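      This is the standard failure mode of forced rotation: users fall back on a base word plus a rotating suffix, which an attacker can enumerate cheaply. A tiny sketch (hypothetical base and suffixes) showing how small that guess space is:

```python
# Hypothetical illustration: a "rotated" password built from a fixed
# base plus a predictable suffix falls inside a tiny candidate set.
base = "123aBc"
suffixes = ["+", "!", "1", "2", "2023", "2024"]
candidates = [base + s for s in suffixes]

actual = "123aBc+"  # this month's "new" password
print(actual in candidates)  # the rotation added no real entropy
```

      A real attacker would extend the suffix list with months, years, and symbols, but the candidate set stays trivially small compared to a randomly generated password.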

      Yeah, no clue how that caught traction.

      • ddh · 8 points · edited · 1 year ago

        Our IT department won’t allow password managers. Their current stance on what we should do instead is “Uh, we’re working on it”. So everyone at work uses weak passwords and writes them down in notepad. headdesk

      • @Corkyskog@sh.itjust.works · 3 points · 1 year ago

        I never understood why this caught on; you even see it recommended for personal applications, which is just stupid. The only reason it existed in the first place was concern about shoulder surfing.

  • AlexRogansBeta · 7 points · 1 year ago

    I feel this in my bones as an anthropologist when it comes to semi-structured interviews, which frankly have very little to do with anthropological inquiry but have nonetheless become a rote methodology.

  • thanevim · 7 points · 1 year ago

    Don’t know if this is the intended reference, but this pretty much perfectly describes why we use the polygraph. As covered (and explained better than I can) on Adam Ruins Everything: https://youtu.be/nyDMoGjKvNk

  • Hellsadvocate · 3 points · 1 year ago

    It makes me wonder if we could create AIs that behave close enough to humans by adding neurological baseline noise to LLM training, then throwing them into simulations to see whether social-science methods still work. I’d be curious how true to life something like that would be.

    A while ago, some researchers designed a game where ChatGPT-driven agents were assigned characters and told to act and live like humans. It was interesting to watch. https://www.iflscience.com/stanford-scientists-put-chatgpt-into-video-game-characters-and-its-incredible-68434