


    1. Don’t reuse a password on Steam that you use for other services.
    2. Use a long password, with random characters and numbers.
    3. Use a password manager.
    4. Do not click on links in emails unless you are 100% sure they are from Valve. Better yet, visit Steam in your browser or the Steam app and search for the page there directly. Do not log in on random websites that ask you to sign in with Steam.
    5. Do not click on links in chat from people you don’t know or who were only recently added to your friends list.
    6. Set your Steam profile to private, or at most restrict it to friends only. Especially if you have a lot of skins.
    7. Ultimate weapon: use official 2FA (two-factor authentication) with the Steam app on your phone. Do not lose your phone, as you cannot log in to Steam otherwise.

    No security is perfect, but following these basic rules will help you secure your account. And there shouldn’t be any need to reset the password often. If it makes you feel better, reset it once a year or so. I don’t.
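If you don’t use a password manager’s built-in generator (points 2 and 3), a long random password is easy to make yourself. A minimal sketch in Python, using the standard library’s `secrets` module (made for security-sensitive randomness, unlike `random`); the length and character set here are just example choices:

```python
import secrets
import string

def generate_password(length: int = 24) -> str:
    """Return a random password of letters, digits, and punctuation."""
    alphabet = string.ascii_letters + string.digits + string.punctuation
    return "".join(secrets.choice(alphabet) for _ in range(length))

print(generate_password())
```

Each character is drawn independently, so every extra character multiplies the guessing work; 24 characters from this ~94-symbol alphabet is far beyond brute-force range.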







  • I mean, a lot of stuff is written in forums and on social media, where people hallucinate. Or even in real life if you talk to someone. It’s normal for a human to pick something up during their life and later talk about it as fact, regardless of where they learned it (TV, forum, video game, school). Hallucinations are part of our brain.

    Sometimes being aware of the hallucination issue is itself a hallucination. Sometimes we do notice the hallucinations an AI makes, because they are obvious or we can check them. And there are also AI chatbots that “talk” and phrase things in a more natural, human-sounding way. Not all of them sound obviously robotic.

    Just for the record, I’m skeptical of AI technology… not its biggest fan. Please don’t fork me. :D


  • To be fair, humans tend to hallucinate too.

    I asked a local AI chatbot, DeepSeek-R1-Distill-Qwen-14B:

    Does Ai hallucinate and is it getting worse? Answer in one paragraph.

    Answer:

    AI does exhibit a form of “hallucination” where it generates plausible but incorrect information, particularly when faced with uncertainty or creative prompts beyond its training scope. This phenomenon occurs as AI constructs logical-sounding answers using available data, even if those answers aren’t accurate. While advancements in AI make these hallucinations more convincing and harder to detect due to improved coherence, the issue isn’t worsening in severity but rather evolving in how it presents itself.