A judge in Washington state has blocked video evidence that’s been “AI-enhanced” from being submitted in a triple murder trial. And that’s a good thing, given that too many people seem to think applying an AI filter can give them access to secret visual data.

  • Bobby Turkalino@lemmy.yachts · 7 months ago

    Everyone uses the word “hallucinate” when describing visual AI because it’s normie-friendly and cool-sounding, but the results are a product of math. Very complex math, yes, but computers aren’t taking drugs and randomly pooping out images, because computers can’t do anything truly random.

    You know what else uses math? Basically every image modification algorithm, including resizing. I wonder how this judge would feel about viewing a 720p video on a 4K courtroom TV, because “hallucination” takes place in that case too.

    • Downcount@lemmy.world · 7 months ago

      There is a huge difference between interpolating pixels and inserting whole objects into pictures.

      • Bobby Turkalino@lemmy.yachts · 7 months ago

        Both insert pixels that didn’t exist before, so where do we draw the line on how much of that is acceptable?

        • Downcount@lemmy.world · 7 months ago

          Look at it this way: if you have an unreadable licence plate because of low resolution, interpolating won’t make it readable (as long as we haven’t switched to a CSI universe). An AI, on the other hand, could just “invent” (I know, I know, normie speak in your eyes) a readable one.

          You’ll draw that line yourself the first time you get a speeding ticket for a car that wasn’t yours.

          • Natanael@slrpnk.net · 7 months ago

            License plates are an interesting case, because with a known set of visual symbols (the fonts used by approved plate issuers) you can often accurately deblur even very, very blurry text: not with AI algorithms, but by modeling the camera’s blur and the unique blur gradients it produces for each letter. It does require a certain minimum pixel resolution of the letters to guarantee an unambiguous result, though.
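
            Rough sketch of that idea (a hedged illustration, not a production deblurrer; it assumes each template has already been cut to the same shape as the patch):

            ```python
            import numpy as np
            from scipy.signal import fftconvolve

            def best_glyph_match(blurred_patch, glyph_templates, psf):
                """Pick the known-font glyph whose simulated blur best matches the patch.

                blurred_patch   -- 2D array cut from the photo (one character)
                glyph_templates -- dict: character -> sharp 2D template, same shape
                psf             -- 2D array modeling the camera's blur
                """
                scores = {}
                for ch, template in glyph_templates.items():
                    # Simulate how this character would look through the same blur.
                    simulated = fftconvolve(template, psf, mode="same")
                    # Normalized cross-correlation as the similarity score.
                    a = (simulated - simulated.mean()).ravel()
                    b = (blurred_patch - blurred_patch.mean()).ravel()
                    denom = np.linalg.norm(a) * np.linalg.norm(b)
                    scores[ch] = float(a @ b / denom) if denom else 0.0
                return max(scores, key=scores.get)
            ```

            Because the candidate set is finite and the blur model is explicit, every recovered character comes with a traceable score, rather than being sampled from a learned prior.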

          • Bobby Turkalino@lemmy.yachts · 7 months ago

            Interesting example, because tickets issued by automated cameras aren’t enforced in most places in the US. You can safely ignore those tickets and the police won’t do anything about it because they know how faulty these systems are and most of the cameras are owned by private companies anyway.

            “Readable” is a subjective matter of interpretation, so again, I’m confused about how exactly you’re distinguishing good & pure fictional pixels from bad & evil fictional pixels.

            • Downcount@lemmy.world · 7 months ago

              Whether or not tickets are enforced doesn’t change my argument, nor does it invalidate it.

              You are acting stubborn and childish. Everything there was to say has been said. If you still think you are right, go on thinking it, since you are not able or willing to understand. Let me be clear: I think you are trolling, and I’m not in any mood to participate in this anymore.

              • Bobby Turkalino@lemmy.yachts · 7 months ago

                Sorry, it’s just that I work in a field where distinctions are based on math and/or logic, while you’re drawing a distinction between AI-based and non-AI-based image interpolation from opinion and subjective observation.

                • pm_me_ur_thoughts@lemmy.world · 7 months ago

                  Okay, I’m not disagreeing with you about the fact that it’s all math.

                  However, interpolation of pixels is simple math. AI generation is complex math and is only as good as its training data.

                  The licence plate example is a good one. Interpolation will just find some average, midpoint, etc., and fill in the pixel. With AI generation, if the training set contained your number plate 999 times in a set of 1,000, it will generate your number plate no matter whose plate you feed in. To use it as evidence, it would need to be far more deterministic than the probabilistic nature of AI-generated content allows.
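
                  A deliberately silly sketch of that failure mode (plate numbers made up):

                  ```python
                  from collections import Counter

                  # Training data dominated by one plate, as in the 999-out-of-1,000 example.
                  training_plates = ["ABC123"] * 999 + ["XYZ789"]

                  def toy_enhancer(blurry_input):
                      # Ignores the input entirely and returns the most frequent training
                      # plate, which is exactly the degenerate behaviour described above.
                      return Counter(training_plates).most_common(1)[0][0]

                  print(toy_enhancer("any blurry image at all"))  # -> "ABC123", every time
                  ```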

            • abhibeckert@lemmy.world · 7 months ago

              You can safely ignore those tickets and the police won’t do anything

              Wait what? No.

              It’s entirely possible that if you ignore the ticket, a human will review it and find there’s insufficient evidence. But if, for example, you ran a red light and they have a photo that shows your number plate and your face, then you don’t want to ignore that ticket. And they generally take multiple photos, so even if the one you received with the ticket doesn’t identify you, that doesn’t mean you’re safe.

              When automated infringement systems were brand new, the cameras were low quality, poorly installed, and didn’t gather the evidence necessary to win a court challenge; getting tickets overturned was so easy that authorities didn’t even bother taking them to court. It’s not that easy now. They have picked up their game and are continuing to improve the technology.

              Also: if you claim someone else was driving your car, and then they prove in court that you were driving, congratulations, your slap-on-the-wrist fine is now a much more serious matter.

        • Blackmist@feddit.uk · 7 months ago

          I mean we “invent” pixels anyway for pretty much all digital photography based on Bayer filters.

          But the answer is linear interpolation. That’s where we draw the line. We have to be able to point to a line of code and say where the data came from, rather than a giant blob of image data that could contain anything.
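
          For example, here’s a minimal (and deliberately naive) bilinear upscaler; every “invented” pixel is an explicit weighted average you can point at:

          ```python
          import numpy as np

          def bilinear_upscale(img, factor):
              """Upscale a 2D grayscale image by an integer factor."""
              h, w = img.shape
              out = np.empty((h * factor, w * factor))
              for y in range(h * factor):
                  for x in range(w * factor):
                      # Map the output pixel back into source coordinates.
                      sy, sx = y / factor, x / factor
                      y0, x0 = int(sy), int(sx)
                      y1, x1 = min(y0 + 1, h - 1), min(x0 + 1, w - 1)
                      dy, dx = sy - y0, sx - x0
                      # The "new" pixel is a weighted average of its four neighbours.
                      out[y, x] = (img[y0, x0] * (1 - dy) * (1 - dx)
                                   + img[y0, x1] * (1 - dy) * dx
                                   + img[y1, x0] * dy * (1 - dx)
                                   + img[y1, x1] * dy * dx)
              return out
          ```

          Same input, same output, every time, and nothing outside the four source pixels can leak in.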

        • Catoblepas@lemmy.blahaj.zone · 7 months ago

          What’s your bank account information? I’m either going to add or subtract a lot of money from it. Both alter your account balance, so you should be fine with either, right?

    • Catoblepas@lemmy.blahaj.zone · 7 months ago

      Has this argument ever worked on anyone who has ever touched a digital camera? “Resizing video is just like running it through AI to invent details that didn’t exist in the original image”?

      “It uses math” isn’t the complaint and I’m pretty sure you know that.

    • Flying Squid@lemmy.world · 7 months ago

      normie-friendly

      Whenever people say things like this, I wonder why that person thinks they’re so much better than everyone else.

      • Hackerman_uwu@lemmy.world · 7 months ago

        Tangentially related: the more people seem to support AI-ing all the things, the less it turns out they understand it.

        I work in the field. I had to explain to a CIO that his beloved “ChatPPT” was just autocomplete. He became enraged. We implemented a 2015-era chatbot instead, and he got his bonus.

        We have reached the winter of my discontent. Modern life is rubbish.

      • Bobby Turkalino@lemmy.yachts · 7 months ago

        Normie, layman… as you’ve pointed out, it’s difficult to use these words without sounding condescending (which I didn’t mean to be). The media’s use of words like “hallucinate” to describe linear algebra is necessary because most people just don’t know enough math to understand the fundamentals of deep learning, which is completely fine; people can’t know everything, and everyone has their own specialties. But any time you simplify science to make it digestible by the masses, you lose critical information in the process, which can sometimes be harmfully misleading.

        • Krauerking@lemy.lol · 7 months ago

          Or sometimes the colloquial term people have picked up is a simplified tool for getting the right point across.

          Just because it’s guessing using math doesn’t mean it isn’t, in a sense, hallucinating the additional data. The data did not exist before, and the model willed it into existence, much like a hallucination, and the word makes it easy for people to quickly grasp that the output isn’t trustworthy, thanks to their existing understanding of the term.

          Part of language is finding the right words so that people can quickly understand a topic, even if that means giving up nuance. What matters is that the simplification still leads them to the right conclusion, which doesn’t always happen when there is bias. I think this one works just fine.

        • cucumberbob@programming.dev · 7 months ago

          It’s not just the media who use this term. According to this study, which I’ve had a very brief skim of, the term “hallucination” was used in the literature as early as 2000, and in Table 1 you can see hundreds of studies from various databases whose use of “hallucination” the authors then go on to analyse.

          It’s worth saying that the study is focused on showing how vague the term is, and how many different and conflicting definitions of “hallucination” there are in the literature, so I agree for sure that it’s a confusing term. It’s just that researchers use it as well as laypeople.

        • Hackerman_uwu@lemmy.world · 7 months ago

          LLMs (the models that “hallucinate” is most often used in conjunction with) are not Deep Learning, normie.

            • Hackerman_uwu@lemmy.world · 7 months ago

              I’m not going to bother arguing with you, but for anyone reading this: the poster above is making a bad-faith semantic argument.

              In the strictest technical terms, AI, ML, and Deep Learning are distinct, and they have specific applications.

              This insufferable asshat is arguing that since they all use fuel, fire, and air, they are all engines. Which isn’t wrong, but it’s also not the argument we are having.

              @OP good day.

                  • Bobby Turkalino@lemmy.yachts · 7 months ago

                    Ok but before you go, just want to make sure you know that this statement of yours is incorrect:

                    In the strictest technical terms, AI, ML, and Deep Learning are distinct, and they have specific applications

                    Actually, they are not the distinct, mutually exclusive fields you claim they are. ML is a subset of AI, and Deep Learning is a subset of ML. AI is a very broad term for programs that emulate human perception and learning. As you can see in the last intro paragraph of the AI Wikipedia page (whoa, another source! aren’t these cool?), some examples of AI tools are listed:

                    including search and mathematical optimization, formal logic, artificial neural networks, and methods based on statistics, operations research, and economics

                    Some of these, namely mathematical optimization, formal logic, statistics, and artificial neural networks, comprise the field known as machine learning. If you’ll remember from my earlier citation about artificial neural networks, “deep learning” is when an artificial neural network has more than one hidden layer. Thus, DL is a subset of ML, which is a subset of AI (wow, sources are even cooler when there are multiple of them that you can logically chain together! knowledge is fun).

                    Anyways, good day :)

    • abhibeckert@lemmy.world · 7 months ago

      computers aren’t taking drugs and randomly pooping out images

      Sure, no drugs involved, but they are running a random number generator (one that passes statistical tests for randomness) and using its output, along with non-random data, to generate the image.

      The result: ask for the same image twice and you get two different images. Similar, but clearly not the same person; sisters or cousins, perhaps. Nowhere near usable as evidence in court.

      • Gabu@lemmy.world · 7 months ago

        Tell me you don’t know shit about AI without telling me you don’t know shit. You can easily reproduce the exact same image by defining the starting seed and constraining the network to a specific sequence of operations.
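
        A minimal sketch with the Hugging Face diffusers library (model id and prompt are illustrative):

        ```python
        import torch
        from diffusers import StableDiffusionPipeline

        pipe = StableDiffusionPipeline.from_pretrained("runwayml/stable-diffusion-v1-5")

        # Identical seeds -> pixel-identical outputs.
        img_a = pipe("portrait photo of a man",
                     generator=torch.Generator("cpu").manual_seed(42)).images[0]
        img_b = pipe("portrait photo of a man",
                     generator=torch.Generator("cpu").manual_seed(42)).images[0]

        # Omit the generator argument and successive calls sample fresh noise,
        # giving the "sisters or cousins" effect described above.
        ```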

        • Natanael@slrpnk.net · 7 months ago

          But if you don’t do that, then the ML engine doesn’t have the introspective capability to realize it failed to recreate an image.

          • Gabu@lemmy.world · 7 months ago

            And if you take your eyes out of their sockets, you can no longer see. That’s a meaningless statement.

            • blind3rdeye@lemm.ee · 7 months ago

              The point is that the AI ‘enhanced’ photos have nice clear details that are randomly produced, and thus should not be relied on. Are you suggesting that we can work around that problem by choosing a random seed manually? Do you think that solves the problem?

    • Malfeasant@lemmy.world · 7 months ago

      computers can’t do anything truly random.

      Technically incorrect - computers can be supplied with sources of entropy, so while it’s true that they will produce the same output given identical inputs, it is in practice quite possible to ensure that they do not receive identical inputs if you don’t want them to.
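
      A minimal sketch of that in Python; both calls below draw from the operating system’s entropy pool, so re-running the program won’t reproduce the output:

      ```python
      import os
      import secrets

      # Bytes gathered from the kernel's entropy pool (hardware event timing, etc.).
      print(os.urandom(16).hex())

      # The stdlib's CSPRNG interface, seeded from the same pool.
      print(secrets.token_hex(16))
      ```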

      • Hackerman_uwu@lemmy.world · 7 months ago

        IIRC there was a random number generator website where the machine was hooked up to a potato or some shit.

    • Kedly@lemm.ee · 7 months ago

      Bud, “hallucinate” is a perfect term for the shit AI creates, because it doesn’t understand reality, regardless of whether math is creating that hallucination or not.