• PM_ME_VINTAGE_30S [he/him]
    link
    English
    11
    5 months ago

    I haven’t read any article about this specific ‘discovery’, but this kind of work usually uses a completely different technique from the AI that comes to mind when people think of AI these days.

    From the conclusion of the actual paper:

    Deep learning models that use full-field mammograms yield substantially improved risk discrimination compared with the Tyrer-Cuzick (version 8) model.

    If I read this paper correctly, the novelty is in the model itself: a deep learning model that works on mammogram images + traditional risk factors.

    • @FierySpectre@lemmy.world
      link
      fedilink
      English
      7
      edit-2
      5 months ago

      For the image-only DL model, we implemented a deep convolutional neural network (ResNet18 [13]) with PyTorch (version 0.31; pytorch.org). Given a 1664 × 2048 pixel view of a breast, the DL model was trained to predict whether or not that breast would develop breast cancer within 5 years.

      The only “innovation” here is feeding full-view mammograms to a ResNet18 (a 2016 model). The traditional risk-factor regression is nothing special (barely machine learning). They don’t go in depth about how they combine the two for the hybrid model, so it’s probably safe to assume it’s something simple (merely combining the results, i.e. nothing special in the training step). Edit: I stand corrected; a commenter below pointed out the appendix, and the regression does in fact come into play in the training step.
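
      For anyone curious, here’s roughly what that looks like in (modern) PyTorch. Just a sketch, not their code; the single-channel stem and the binary head are my assumptions about how you’d adapt a stock ResNet18:

      ```python
      import torch
      import torch.nn as nn
      from torchvision import models

      # Stock ResNet18, as in the paper. Everything below is my guess at
      # how you'd adapt it for this task, not the authors' actual code.
      model = models.resnet18(weights=None)

      # Mammograms are grayscale, so swap the RGB stem for a 1-channel one.
      model.conv1 = nn.Conv2d(1, 64, kernel_size=7, stride=2, padding=3, bias=False)

      # Replace the 1000-class ImageNet head with a single logit:
      # "will this breast develop cancer within 5 years?"
      model.fc = nn.Linear(model.fc.in_features, 1)

      x = torch.randn(1, 1, 1664, 2048)  # one fake full-field view
      risk = torch.sigmoid(model(x))     # predicted 5-year risk in [0, 1]
      ```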

      As a different commenter mentioned, the data collection is largely the interesting part here.

      I’ll admit I was wrong about my first guess as to the network topology, though: I was thinking they used something like autoencoders (but those are mostly used in cases where examples of bad samples are rare).
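
      For reference, the autoencoder approach I was picturing looks something like this: train it to reconstruct healthy images only, then flag anything it reconstructs poorly. Purely a sketch of the general idea; the architecture and threshold are arbitrary placeholders:

      ```python
      import torch
      import torch.nn as nn

      # Anomaly-detection idea: train an autoencoder on normal (healthy)
      # images only, then flag inputs it reconstructs poorly.
      class AutoEncoder(nn.Module):
          def __init__(self):
              super().__init__()
              self.encoder = nn.Sequential(
                  nn.Conv2d(1, 16, 3, stride=2, padding=1), nn.ReLU(),
                  nn.Conv2d(16, 32, 3, stride=2, padding=1), nn.ReLU(),
              )
              self.decoder = nn.Sequential(
                  nn.ConvTranspose2d(32, 16, 3, stride=2, padding=1, output_padding=1),
                  nn.ReLU(),
                  nn.ConvTranspose2d(16, 1, 3, stride=2, padding=1, output_padding=1),
                  nn.Sigmoid(),
              )

          def forward(self, x):
              return self.decoder(self.encoder(x))

      model = AutoEncoder()
      x = torch.rand(1, 1, 256, 256)             # downscaled grayscale image
      error = nn.functional.mse_loss(model(x), x)
      flagged = error.item() > 0.05              # made-up threshold
      ```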

      • PM_ME_VINTAGE_30S [he/him]
        link
        English
        5
        edit-2
        5 months ago

        They don’t go in depth about how they combine the two for the hybrid model

        Actually they did: it’s in Appendix E (PDF warning). A GitHub repo would have been nice, but I think there’s enough info to replicate this if we had the data.
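
        For anyone who doesn’t want to dig through the appendix: a common way to wire up this kind of hybrid model is to concatenate the image features with the risk factors before the final layers, so everything trains end to end. A sketch of that general idea (my reading, not necessarily their exact architecture; the sizes are placeholders):

        ```python
        import torch
        import torch.nn as nn
        from torchvision import models

        # Hybrid model sketch: concatenate the CNN's image features with
        # the traditional risk factors, then train the whole thing end to
        # end. Feature sizes are placeholders; Appendix E has the details.
        class HybridRiskModel(nn.Module):
            def __init__(self, n_risk_factors=10):
                super().__init__()
                backbone = models.resnet18(weights=None)
                backbone.conv1 = nn.Conv2d(1, 64, 7, stride=2, padding=3, bias=False)
                backbone.fc = nn.Identity()   # keep the 512-dim image features
                self.backbone = backbone
                self.head = nn.Sequential(
                    nn.Linear(512 + n_risk_factors, 128),
                    nn.ReLU(),
                    nn.Linear(128, 1),        # single 5-year risk logit
                )

            def forward(self, image, risk_factors):
                features = self.backbone(image)                        # (B, 512)
                combined = torch.cat([features, risk_factors], dim=1)  # (B, 512+n)
                return self.head(combined)

        model = HybridRiskModel()
        image = torch.randn(2, 1, 512, 512)   # downscaled mammogram views
        factors = torch.randn(2, 10)          # age, family history, BMI, ...
        logits = model(image, factors)
        ```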

        Yeah it’s not the most interesting paper in the world. But it’s still a cool use IMO even if it might not be novel enough to deserve a news article.

      • @errer@lemmy.world
        link
        fedilink
        English
        3
        5 months ago

        ResNet18 is ancient and tiny… I don’t understand why they didn’t go with a deeper network. ResNet50 is usually the smallest I’ll use.
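
        Swapping it out in torchvision is basically a one-liner, for what it’s worth; the only catch is that ResNet50’s feature dimension is 2048 instead of ResNet18’s 512, so the head has to match:

        ```python
        import torch.nn as nn
        from torchvision import models

        # Same recipe, bigger backbone; in_features is 2048 here vs 512.
        model = models.resnet50(weights=None)
        model.fc = nn.Linear(model.fc.in_features, 1)
        ```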

    • @llothar@lemmy.ml
      link
      fedilink
      English
      3
      5 months ago

      I skimmed the paper. As you said, they made an ML model that takes images and traditional risk factors (TCv8).

      I would love to see comparison against risk factors + human image evaluation.

      Nevertheless, this is the AI that will really help humanity.