• v_krishna

    I fully agree with this; I would have written something similar, but I was eating lunch when I made my earlier comment. I also think there’s a big part of pragmatics that comes from embodiment, which will become more and more important (and I wish Merleau-Ponty were still around to hear what he’d think about this).

    • ☆ Yσɠƚԋσʂ ☆OP

      Indeed, I definitely expect interesting things to start developing on that front, and we may see old ideas getting dusted off now that there’s enough computing power to put them to use. For example, I think The Society of Mind by Minsky lays out a plausible architecture for a mind. Imagine each agent in that scenario being a GPT system, and the bigger mind being built out of a society of such agents, each concerned with a particular domain it learns about.
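
      A rough sketch of that shape, hedged heavily: the llm() stub, the Coordinator class, and the domain names below are placeholders I’m inventing for illustration, not any real API.

      ```python
      # Toy "society of mind": each agent is an LLM prompted for one domain,
      # and a coordinator only ever sees the other agents' outputs.

      def llm(prompt: str) -> str:
          """Placeholder for a real GPT call; just echoes for demonstration."""
          return f"[model response to: {prompt[:50]}...]"

      class DomainAgent:
          def __init__(self, domain: str):
              self.domain = domain

          def respond(self, observation: str) -> str:
              return llm(f"You are a specialist in {self.domain}. Observation: {observation}")

      class Coordinator:
          def __init__(self, agents: list):
              self.agents = agents

          def decide(self, observation: str) -> str:
              # The "bigger mind" reasons over its society's reports, not the raw input.
              reports = [f"{a.domain}: {a.respond(observation)}" for a in self.agents]
              return llm("Given these reports, decide what to do:\n" + "\n".join(reports))

      mind = Coordinator([DomainAgent("physics"), DomainAgent("language"), DomainAgent("planning")])
      print(mind.decide("a glass is sliding toward the edge of the table"))
      ```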

      • v_krishna

        Many (14?) years back I attended a conference (I can’t remember now what it was for; I think it was hosted by a complex systems department at some DC-area university) and saw a lady give a talk about using agent-based modeling to do computational sociology for planning around federal (mostly Navy/Army) development in Hawaii. Essentially a SimCity-type thing, but purpose-built to help inform public planning decisions. Now imagine that, but the agents aren’t just sets of weighted heuristics; instead they’re heuristic- and prompt-driven LLMs, with higher-level executive prompts to bring them together.
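
        A hedged sketch of what that could look like; the llm() stub, the roles, and the scenario below are invented for illustration, not taken from the actual talk.

        ```python
        # Agent-based simulation where each agent's heuristic is a prompt-driven LLM
        # and an executive prompt reconciles their moves each tick.

        def llm(prompt: str) -> str:
            """Stand-in for a real LLM call."""
            return f"[decision for: {prompt[:50]}...]"

        class SimAgent:
            def __init__(self, role: str, goals: str):
                self.role, self.goals = role, goals

            def act(self, world_state: str) -> str:
                return llm(f"You are a {self.role} with goals: {self.goals}. "
                           f"Current state: {world_state}. What do you do this step?")

        def executive(world_state: str, actions: list) -> str:
            # Higher-level executive prompt that merges actions into the next state.
            return llm("Reconcile these actions into an updated plan:\n"
                       + "\n".join(actions) + f"\nPrevious state: {world_state}")

        agents = [SimAgent("harbor planner", "expand port capacity"),
                  SimAgent("residents' association", "limit noise and traffic"),
                  SimAgent("base commander", "keep training areas accessible")]

        state = "proposed development near a residential district"
        for step in range(3):  # a few simulation ticks
            state = executive(state, [a.act(state) for a in agents])
            print(f"step {step}: {state}")
        ```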

        • ☆ Yσɠƚԋσʂ ☆OP

          I’m really excited to see this kind of thing experimented with. I find it useful to think of training a machine learning agent as creating a topology, through the balancing of weights and connections, that ends up being a model of the particular domain described by the data it’s fed. The agent learns patterns in the data it observes and builds an internal predictive model from them. Currently, most machine learning systems seem to focus on either individual agents or small groups, such as adding a supervisor. It would be interesting to see large graphs of such agents that interact in complex ways, where high-level agents only interact with other agents and never need to see any of the external inputs directly. One example would be to have one system trained to work with visual input and another with audio, and then a high-level system responsible for integrating those inputs and doing the actual decision making; I’ll sketch that below.

          And I just ran across this: https://arxiv.org/abs/2308.00352
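
          To make that split concrete, here’s a minimal sketch where the high-level agent only ever sees the specialists’ summaries, never the raw pixels or audio; every function name here is a placeholder.

          ```python
          # Two modality-specific agents and one decision agent
          # that never touches the raw inputs.

          def vision_agent(pixels) -> str:
              # In practice a vision model; here a stub returning a text summary.
              return "a person is waving near the doorway"

          def audio_agent(waveform) -> str:
              # In practice a speech/audio model; here a stub.
              return "someone said 'hello, over here'"

          def decision_agent(vision_summary: str, audio_summary: str) -> str:
              # The high-level agent reasons only over the other agents' outputs.
              prompt = (f"Vision reports: {vision_summary}\n"
                        f"Audio reports: {audio_summary}\n"
                        "Decide what to do next.")
              return f"[decision based on: {prompt}]"  # stand-in for a real LLM call

          raw_pixels, raw_audio = object(), object()  # the decision agent never sees these
          print(decision_agent(vision_agent(raw_pixels), audio_agent(raw_audio)))
          ```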