Seems to make sense for maximum privacy. Put together a large enough model to answer health queries, add OCR and image recognition to read exam results, give it web access to search for medication details and, of course, gather raw data from any devices you use to measure weight, heart rate, etc.

But that’s just the theory, and we all know how hard these pieces are to put together. In practice, have you had any experience getting anything like this working locally?

  • nagaram@piefed.social · 1 day ago

    Personally, I think we have too much data on our own health habits.

    Like it’s cool that we can record that data if needed, but smartwatches, rings, and the like make us track too much info for non-medical professionals. I, and most people I know, barely understand what our blood pressure readings mean.

    It’s good to be mindful of your health, but genuinely, a food journal and a poop journal would be more useful for your immediate health, and asking an LLM to process that data into trends would be easier to do and more useful for you and your doctor.
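
    A rough sketch of what I mean, assuming a local Ollama instance on its default port with some model like llama3 pulled (the journal format and prompt are just placeholders to adjust):

    # Turn a plain-text food journal into a trend summary with a local LLM.
    import requests

    JOURNAL = """\
    2024-05-01 breakfast: oatmeal; lunch: pizza; felt bloated
    2024-05-02 breakfast: oatmeal; lunch: salad; fine
    2024-05-03 skipped breakfast; lunch: pizza; felt bloated"""

    prompt = ("Here is a food journal. List any patterns between foods and "
              "symptoms as short bullet points for a doctor's visit:\n"
              + JOURNAL)

    resp = requests.post(
        "http://localhost:11434/api/generate",  # Ollama's default endpoint
        json={"model": "llama3", "prompt": prompt, "stream": False},
        timeout=120,
    )
    print(resp.json()["response"])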

    • rkd@sh.itjust.works (OP) · 1 day ago

      You could even skip the data layer altogether. The only thing missing from a local LLM is knowledge of current medications by name, if you want to just mention whatever prescription you’re following.

  • hendrik@palaver.p3x.de · 2 days ago (edited)

    I feel you can’t just dump in the CSV values from your Xiaomi Scale and Garmin watch… And hope AI will figure out the correct math on your body… And then also come up with good recommendations.

    As far as I know, there are a few local, self-hosted health trackers available. It’s a bit tricky to own the right gadgets that connect to them… But I don’t think there’s anything with AI.

    I mean to give proper recommendations, you’d need a very elaborate setup. It needs all the sensor values, and then to correlate them with what you’re doing all day long… What you eat and how much you drink… things the AI (or traditional algorithms) can’t see. So maybe it can calculate your BMI in a thinking step. But it’s a whole lot of math to then figure out if you’re too fat, or just have a lot of muscle mass… And then to work out what that means for your diet. AI won’t figure that out along the way. So you’re probably looking at a few thousand lines of code, after reading a few textbooks on biology.
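
    At least the deterministic bits you can pin down in code instead of hoping the model does the math in a thinking step. A minimal sketch (standard WHO adult cut-offs; the example numbers are made up):

    # BMI is trivial to compute outside the model; the hard part described
    # above (fat vs. muscle, what it means for your diet) is exactly what
    # this can't tell you.
    def bmi(weight_kg: float, height_m: float) -> float:
        """Body mass index: weight divided by height squared."""
        return weight_kg / height_m ** 2

    def classify(b: float) -> str:
        if b < 18.5:
            return "underweight"
        if b < 25.0:
            return "normal"
        if b < 30.0:
            return "overweight"
        return "obese"

    print(classify(bmi(82.0, 1.80)))  # BMI ~25.3 -> "overweight"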

    I mean you can try to vibe-code some agent. But I think your best bet is to look for some open-source software cloning Google Health, or something like that. (And then maybe you can write some MCP server for that. And an agent to interpret the aggregated results.)
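
    An MCP server for that could be pretty small, by the way. A sketch using the official Python MCP SDK (pip install mcp), with a made-up tool that would query whatever self-hosted tracker you end up with:

    # Minimal MCP server exposing one hypothetical aggregated-stats tool.
    from mcp.server.fastmcp import FastMCP

    mcp = FastMCP("health-stats")

    @mcp.tool()
    def weekly_weight_trend() -> str:
        """Return a pre-aggregated weekly weight summary for the agent."""
        # Placeholder: in reality, query your self-hosted tracker here.
        return "avg 81.4 kg, down 0.3 kg vs. the previous week"

    if __name__ == "__main__":
        mcp.run()  # serves over stdio by default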

    • rkd@sh.itjust.works (OP) · 2 days ago

      For sure, context rot is a problem, but that’s also the easiest thing to control for in this case. If sensor data is relevant to you, having some code to process and reduce it to a dashboard you can read is always a good idea, independent of whether you get an LLM into the loop.
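
      Something like this rough sketch, say (the file and column names are made up, match them to whatever your scale exports):

      # Reduce a raw sensor CSV to a weekly summary a human (or an LLM)
      # can actually read.
      import pandas as pd

      df = pd.read_csv("scale_export.csv", parse_dates=["date"])
      weekly = (df.set_index("date")["weight_kg"]
                  .resample("W")
                  .agg(["mean", "min", "max"]))
      print(weekly.round(1).to_string())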

      This becomes more complicated with data you can’t really interpret yourself, like blood test results. But maybe you just don’t summarize any of that.

      • hendrik@palaver.p3x.de · 2 days ago (edited)

        It wasn’t really clear to me where you want to go with this. I mean, judging by conversations with my friends, there’s a 3h conversation to be had about every nuance of health. The diet, how to work out, how much to eat and drink. There are rules of thumb, but in reality it’s a very individual thing. It also changes with the situation: if you start going to the gym or running and want to progress, it’ll be an entirely different story a few months later. Also, some people have office jobs, some do 18,000 steps each day… And don’t get me started on mental health.

        Diagnosing medical conditions is hard for AI. We got some news on that a few days ago: it’s good at exam questions, but doesn’t perform in reality. So I wouldn’t call it a health agent for those kinds of questions. More a shaman, or an alternative practitioner / healer. (Or it’d need to stick to specific things. Or we need a few more years of progress in AI.)

        I mean, I think the available tools aren’t even half bad? There are smartwatches with all kinds of features, apps, dashboards… Training modes and advice. They can help you define goals, track your period if you have one… Water intake, activity levels. It’s not AI, but there will be summaries, achievements, reminders…
        Just the privacy part is a bit tricky, as most of these ecosystems come as cloud services.

  • Toes♀@ani.social · 2 days ago

    Using an LLM for anything that requires perfect precision is just not happening. Even the billion-dollar setups can’t do it.

    But it is therapeutic to vent at them. With the understanding that you’re in an echo chamber.

    • rkd@sh.itjust.works (OP) · 1 day ago

      What part requires perfect precision?

      If you want to parse sensor data, you do it in code before the LLM sees it.
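
      E.g., a sketch of that handoff, where the model only ever sees already-computed numbers (the readings are made up):

      # Deterministic parsing first; the LLM gets a compact text summary.
      readings = [("2024-05-01", 82.4), ("2024-05-02", 82.1), ("2024-05-03", 81.9)]

      weights = [w for _, w in readings]
      summary = (f"{len(weights)} weigh-ins, min {min(weights)} kg, "
                 f"max {max(weights)} kg, avg {sum(weights) / len(weights):.1f} kg")

      prompt = f"My weight this week: {summary}. Any trend worth noting?"
      print(prompt)  # this string, not the raw sensor dump, goes to the LLM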

  • slazer2au@lemmy.world · 2 days ago

    No, because a regurgitation machine prone to hallucinations cannot replace someone with a decade of training.

    • rkd@sh.itjust.works (OP) · 2 days ago

      What we’d call a health advisor is not a doctor. In fact, depending on the model, it will actively tell you to seek professional medical help.