• 7.06K Posts
  • 1.17K Comments
Joined 11 months ago
Cake day: December 11th, 2024





  • m_‮f@discuss.online (OP) to The Far Side@sh.itjust.works, 2025-11-15

    Some background on this comic:

    Transcript:

    A few years ago, the Citizen-Journal in Columbus, Ohio, made a slight mistake regarding which Far Side caption went with which cartoon.

    The caption for the “slug” cartoon, depicting a mass of slugs worshiping their “god” and chanting some nonsensical intonation, was repeated the following day with the “tree house” cartoon. Instead of the version shown in the upper left corner, what Columbus readers saw was the cartoon at left.

    And how many letters did I have forwarded to me asking for an explanation? Don’t ask.
















  • The norm for me is breakfast, lunch, and dinner, with dinner interchangeable with supper. I found out recently that the word supper derives from the soup you’d have at the end of the day right before bed, and in some places those are still different meals. Some places also call the biggest meal of the day “dinner”, even when it’s eaten at noon, which is weird to me. What I’m used to is a small breakfast in the morning, a smaller meal at noon for lunch, and then a bigger meal for supper/dinner in the evening.













  • Not really surprising that they’re good at analyzing language, since they are Large Language Models after all. Still neat to see, though. Here’s the most interesting bit:

    In the phonology task, the group made up 30 new mini-languages, as Beguš called them, to find out whether the LLMs could correctly infer the phonological rules without any prior knowledge. Each language consisted of 40 made-up words. Here are some example words from one of the languages:

    • θalp
    • ʃebre
    • ði̤zṳ
    • ga̤rbo̤nda̤
    • ʒi̤zṳðe̤jo

    They then asked the language models to analyze the phonological processes of each language. For this language, o1 correctly wrote that “a vowel becomes a breathy vowel when it is immediately preceded by a consonant that is both voiced and an obstruent” — a sound formed by restricting airflow, like the “t” in “top.”

    The languages were newly invented, so there’s no way that o1 could have been exposed to them during its training. “I was not expecting the results to be as strong or as impressive as they were,” Mortensen said.
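    The rule o1 inferred can be sketched as a quick consistency check over the five sample words. This is just an illustration, not the study’s method: the segment inventory below covers only what appears in these examples, and breathy voice is assumed to be marked with the IPA diacritic ◌̤ (combining diaeresis below).

    ```python
    import unicodedata

    BREATHY = "\u0324"  # combining diaeresis below: IPA mark for breathy voice
    VOICED_OBSTRUENTS = set("bdgzðʒ")  # voiced obstruents occurring in the samples
    VOWELS = set("aeiou")

    def follows_rule(word):
        """True iff every vowel is breathy exactly when it directly
        follows a voiced obstruent."""
        # NFD splits precomposed letters (e.g. ṳ) into base + diacritic
        segments = []
        for ch in unicodedata.normalize("NFD", word):
            if ch == BREATHY:
                segments[-1] = (segments[-1][0], True)  # mark previous segment
            else:
                segments.append((ch, False))
        prev = None
        for base, breathy in segments:
            if base in VOWELS and breathy != (prev in VOICED_OBSTRUENTS):
                return False
            prev = base
        return True

    words = ["θalp", "ʃebre", "ði̤zṳ", "ga̤rbo̤nda̤", "ʒi̤zṳðe̤jo"]
    print(all(follows_rule(w) for w in words))  # True: all five fit the rule
    ```

    All five example words are consistent with the rule, while a word like “ʃe̤bre” (breathy vowel after a voiceless consonant) would fail it.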

    I’ve also tried out various LLMs on daily puzzles they couldn’t have been trained on, like Connections, and they do a really good job. I don’t think the end of humanity is nigh or anything dramatic like that, but IMO this invalidates people who really want to hate AI and claim it has zero intelligence.