• Cosmic Cleric
    -3
    7 months ago

    When you see the cue that the event has happened, you rewind

    The event has happened, or the aftereffects of the event are still visible. That is my point: the aftereffects matter as much as the event itself. As long as the ‘after’ looks different than the ‘before’ for any reason, that is a marker indicating which way to go: rewind or advance.

    And yes, either the event or its aftereffects have to last long enough to be noticed by a human, or less long for an AI (which can detect changes faster than humans can). But the vast majority of events, when humans are involved, leave long aftereffects usually. Yes, not 100% of the time, but usually.
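    The rewind-or-advance rule described here is exactly the invariant a binary search needs: every frame before the event looks one way, and every frame from the event onward looks different. A minimal sketch in Python (the `changed` predicate is hypothetical, standing in for whatever human or AI "does the after look different?" check is used):

```python
def find_transition(num_frames, changed):
    """Binary-search for the first frame showing the aftereffect.

    Assumes changed(i) is False for every frame before the event and
    True for every frame after it -- i.e. the 'after' keeps looking
    different from the 'before' for the rest of the recording.
    """
    lo, hi = 0, num_frames - 1
    while lo < hi:
        mid = (lo + hi) // 2
        if changed(mid):
            hi = mid      # aftereffect visible: event is at or before mid
        else:
            lo = mid + 1  # still 'before': event is later
    return lo

# If the event happened at frame 42 of 1000, ~10 probes pinpoint it:
print(find_transition(1000, lambda i: i >= 42))  # prints: 42
```

    Note that this only works when the aftereffect persists: a change that fades away breaks the False-then-True assumption, which is the crux of the disagreement in the replies.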

    • @starman2112@sh.itjust.works
      3
      7 months ago

      The event has happened, or the aftereffects of the event are still visible.

      In which case there are visual cues, and it’s something that the comment you argued with acknowledged would be eligible for binary search.

      But the vast majority of events, when humans are involved, leave long aftereffects usually. Yes, not 100% of the time, but usually.

      Nobody said otherwise, you’re arguing with strawmen

      • Cosmic Cleric
        0
        7 months ago

        But the vast majority of events, when humans are involved, leave long aftereffects usually. Yes, not 100% of the time, but usually.

        Nobody said otherwise, you’re arguing with strawmen

        Yes, they have. They’ve used it as a reason why a binary search would not work, that the event duration would be too short to be detectable.

        And that’s not a strawman; that’s making my point: it’s not just the event, but the aftereffects of the event, that make a binary search possible.

    • @starman2112@sh.itjust.works
      2
      7 months ago

      less long by AI (faster to detect changes than humans).

      Many different things cause the same changes. A bit of smoke in the air might be from a gunshot that happened 10 minutes ago, or from a cigarette 15 minutes ago. Binary search relies on changes that indicate a specific thing has happened: a broken window, a bike no longer there, blood stains on the street. Anything undetectable by humans would still be useless to AIs. A bit of smoke? Could have been a gunshot 3 minutes ago, could have been a cigarette, could be fog, could be a vape. Even the things that AIs are truly useful for, like interpreting video compression artifacts, wouldn’t help, because any number of things can cause compression artifacts. How could it tell which pixels are slightly off color because of a gunshot 3 minutes ago, and which are slightly off color because someone walked past the camera?

      At that point, just feed the entire video to the AI and have it tell you when it sees guns or puffs of smoke or hears screams. Binary search is useless when you can just have a machine watch the entire video in one sitting over the course of five seconds and tell you when the interesting thing happens.

      • Cosmic Cleric
        -2
        7 months ago

        Anything undetectable by humans would still be useless to AIs. A bit of smoke? Could have been a gunshot 3 minutes ago, could have been a cigarette, could be fog, could be a vape.

        Actually, an AI could determine the difference between those, based on shape, location, and opacity, etc.

        At that point, just feed the entire video to the AI and have it tell you when it sees guns or puffs of smoke or hears screams.

        Is there a point where one technique works better than another technique? Sure. I’m not arguing that. But if you’re dealing with a very long time, you’d still want to do a binary search first.

        Binary search is useless when you can just have a machine watch the entire video in one sitting over the course of five seconds and tell you when the interesting thing happens.

        Depends on how long that tape is, which is what was being originally discussed by the OP.

        A binary search, assisted by AI to quickly determine the point in the tape where the effect happened, is still a very fast approach (assuming the tape duration is very long), as alluded to by others elsewhere in this thread.
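        The speed claim is easy to quantify. Assuming one sampled frame per second and a checker (human or AI) that can label any single frame as ‘before’ or ‘after’, bisecting a 24-hour tape takes logarithmically few looks compared to an exhaustive pass (a back-of-the-envelope sketch):

```python
import math

SECONDS = 24 * 60 * 60                   # 24-hour tape, one frame per second
probes = math.ceil(math.log2(SECONDS))   # frames a binary search must check
full_scan = SECONDS                      # frames an exhaustive pass must check

print(probes, full_scan)  # prints: 17 86400
```

        At, say, 50 ms of AI inference per frame (an assumed figure), that is under a second of compute for the binary search versus roughly 72 minutes for the full pass, and the gap only grows with longer tapes.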

        • @starman2112@sh.itjust.works
          2
          7 months ago

          Actually, an AI could determine the difference between those, based on shape, location, and opacity, etc.

          Lmao now I know you’re fucking with me

          Yeah lemme spend three weeks training this AI on the difference between gunsmoke, cigarette smoke, vapes, and fog in this specific alley. Oh, y’all already found the killer because someone just watched the video? Well my point stands, the AI could do it faster

          Once it’s trained

          In another week

          Oh shit, it thought that guy’s cell phone was a gun. See you in another month!

          • Cosmic Cleric
            -2
            7 months ago

            Actually, an AI could determine the difference between those, based on shape, location, and opacity, etc.

            Lmao now I know you’re fucking with me

            Yeah lemme spend three weeks training this AI on the difference between gunsmoke, cigarette smoke, vapes, and fog in this specific alley. Oh, y’all already found the killer because someone just watched the video? Well my point stands, the AI could do it faster

            Once it’s trained

            In another week

            Oh shit, it thought that guy’s cell phone was a gun. See you in another month!

            Um, I was being completely serious. Having an AI determine shapes and opaqueness is a simple matter for it. And I’m assuming the training would already have been done before the event happens, built up over time.

            You don’t think crime forensics labs will be training AI to do these kinds of detections going forward? Really?

            (Maybe it’s a matter of people not truly grokking what AI will do and how it will change things going forward. /shrug)

            • @starman2112@sh.itjust.works
              2
              7 months ago

              Having an AI search for shapes and opaqueness is still totally useless for a binary search if those semi-opaque shapes appear for 10 minutes starting 34 minutes into an hour-long video

              Again, you’d just feed the whole video to an AI, you wouldn’t have it do a binary search

              • Cosmic Cleric
                -2
                7 months ago

                Having an AI search for shapes and opaqueness is still totally useless for a binary search if those semi-opaque shapes appear for 10 minutes starting 34 minutes into an hour-long video

                Well, one of those shapes would appear at the time of the event, so it’s not useless. One of them would be gunshot smoke, and could be flagged for review.

                Again, you’d just feed the whole video to an AI, you wouldn’t have it do a binary search

                One day, when computers and AI are powerful enough, this will be the answer, but even then I would like to think that behind the scenes they would use a binary search to speed up processing time.

                • @starman2112@sh.itjust.works
                  2
                  7 months ago

                  The time of the event doesn’t necessarily coincide with any of the times that you’re checking. That’s the whole point of looking for visual cues. Again, if the event happens 34 minutes into the video, and it leaves AI-detectable visual cues for 10 minutes, the AI will never find it using binary search. It will skip to 30 minutes, see nothing, skip to 45 minutes, see nothing, skip to 52:30, see nothing, skip to 56:15, see nothing, and fail at some point when it can’t divide the video further. Binary search would fail in this scenario. It’s not just useless, it’s an abject failure, and the AI was a waste of processing power when you could have scrubbed forward five minutes at a time instead. That would have found the visual cue, but it would not be a binary search.
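                  The failure mode described above is easy to demonstrate. In this sketch (with a hypothetical `cue_visible` detector), the cue is visible only from minute 34 to minute 44 of a 60-minute video; midpoint probing marches straight past it, while a five-minute scrub lands inside the window:

```python
def cue_visible(t):
    """Hypothetical detector: the cue lingers from minute 34 to 44."""
    return 34 <= t < 44

def midpoint_probe(lo=0.0, hi=60.0):
    """Probe midpoints, treating 'no cue yet' as 'event is later'.
    Sound for persistent aftereffects; unsound for transient cues."""
    while hi - lo > 0.01:
        mid = (lo + hi) / 2
        if cue_visible(mid):
            return mid
        lo = mid  # probes 30:00, 45:00, 52:30, 56:15, ... past the cue
    return None

def linear_scrub(step=5, end=60):
    """Check every `step` minutes; guaranteed to land inside any cue
    window lasting at least `step` minutes."""
    for t in range(0, end, step):
        if cue_visible(t):
            return t
    return None

print(midpoint_probe(), linear_scrub())  # prints: None 35
```

                  The scrub finds the cue at minute 35, while the bisection converges on the end of the tape without ever seeing it, exactly as the comment describes.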

                  • Cosmic Cleric
                    -3
                    7 months ago

                    The time of the event doesn’t necessarily coincide with any of the times that you’re checking. That’s the whole point of looking for visual cues.

                    But one of them potentially will. A gun firing leaves smoke behind.

                    Even if there’s other smoke in the video, you’re looking at 5 minutes of a 24-hour video, not scanning through 24 hours of video manually. And an AI could use a binary search to find any moments of smoke (or not). I’m not saying it’s a one-size-fits-all solution, just one very important tool in the toolbox.

                    I don’t mean to be rude, but I’m exhausted talking about this topic, and so if you don’t mind, I’m just going to bail at this point.

                    Thanks for keeping it civil.