• li10@feddit.uk · ↑35 ↓8 · 1 year ago

    I don’t understand Lemmy’s hate boner over AI.

    Yeah, it’s probably not going to take over like companies/investors want, but you’d think it’s absolutely useless based on the comments on any AI post.

    Meanwhile, people are actively making use of ChatGPT and finding it to be a very useful tool. But because sometimes it gives an incorrect response that people screenshot and post to Twitter, it’s apparently absolute trash…

    • Zeth0s@lemmy.world · ↑20 ↓9 · edited · 1 year ago

      AI is literally one of the most incredible creations of humanity, and people shit on it as if they know better. It’s genuinely an astonishing historical and cultural achievement, a peak of human ingenuity.

      No idea why such hate…

      One can hate the Disney CEO for misusing AI, but why shit on AI itself?

      • wizardbeard@lemmy.dbzer0.com · ↑17 ↓3 · 1 year ago

        It’s shit on because it is not actually AI as the general public tends to use the term. This isn’t Data from Star Trek, or anything even approaching Asimov’s three laws.

        The immediate defense against this statement is people going into mental gymnastics and hand-waving about “well, we don’t have a formal definition for intelligence, so you can’t say they aren’t”, which is just… nonsense rhetorically, because the inverse would be true as well: you can’t label something as intelligent if we have no formal definition either. Or they point at various arbitrary tests that ChatGPT has passed and claim that clearly something without intelligence could never have passed the bar exam, in complete and utter ignorance of how well suited LLMs are to those types of problem domains.

        Also, I find that anyone bringing up the limitations and dangers is immediately lumped into this “AI haters” group, like belief in AI is some sort of black-and-white religion or requires some sort of ideological purity. Like having honest conversations about these systems’ problems intrinsically means you want them to fail. That’s BS.


        Machine Learning and Large Language Models are amazing, they’re game changing, but they aren’t magical panaceas and they aren’t even an approximation of intelligence despite appearances. LLMs are especially dangerous because of how intelligent they appear to a layperson, which is why we see everyone rushing to apply them to entirely non-fitting use cases as a race to be the first to make the appearance of success and suck down those juicy VC bux.

        Anyone trying to say differently either isn’t familiar with the field or is trying to sell you something. It’s the classic case of the difference between tech developers/workers and tech news outlets/enthusiasts.

        The frustrating part is that people caught up in the hype train of AI will say the same thing: “You just don’t understand!” But then they’ll start citing the unproven potential future that is being bandied around by people who want to keep you reading their publication or who want to sell you something, not any technical details of how these (amazing) tools function.


        At least in my opinion that’s where the negativity comes from.

        • Aceticon@lemmy.world · ↑6 ↓2 · edited · 1 year ago

          Personally, having been in tech for almost three decades, I am massively skeptical when the usual suspects put out yet another incredible claim backed by overly positive, one-sided evaluations of something they own - worse still in an area where I actually have quite a lot of knowledge and can see through a lot of the bullshit. Then it gets picked up by mindless fanboys who don’t have the expertise to understand jack-shit of what they’re parroting, and by greedy fuckers using sales-speak because they stand to personally gain if enough useful idiots jump on the hype train.

          You don’t even need to be old enough to remember that “revolution in human transportation” was how the Segway was announced: all it takes is to look at the claims about Bitcoin and the blockchain and remember the fraud-ridden shitshow the whole area became.

          As I see it, anybody who is not skeptical towards “yet another ‘world changing’ claim from the usual types” is either dumb as a doorknob, young and naive or a greedy fucker invested in it trying to make money out of any “suckers” that jump into that hype train.

          It’s not even negativity (except towards the greedy fuckers trying to take advantage of others and who can Burn In Hell), it’s informed (both historically and by domain knowledge) skepticism.

          • SirGolan@lemmy.sdf.org · ↑3 · 1 year ago

            As I see it, anybody who is not skeptical towards “yet another ‘world changing’ claim from the usual types” is either dumb as a doorknob, young and naive or a greedy fucker invested in it trying to make money out of any “suckers” that jump into that hype train.

            I’ve been working on AI projects on and off for about 30 years now. Honestly, for most of that time I didn’t think neural nets were the way to go, so when LLMs and transformers got popular, I was super skeptical. After learning the architecture and using them myself, I’m convinced they’re part of, but not the whole, solution to AGI. As they are now, yes, they are world changing. They’re capable of improving productivity in a wide range of industries. That seems pretty world changing to me. There are already products out there proving this (GitHub Copilot, Jasper, even ChatGPT). You’re welcome to downplay it and be skeptical, but I’d highly recommend giving it an honest try. If you’re right, then you’ll have more to back up your opinion, and if you’re wrong, you’ll have learned to use the tech and won’t be left behind.

            • Aceticon@lemmy.world · ↑3 · edited · 1 year ago

              In my experience they’re a great tool for wrapping and unwrapping knowledge in and out of language envelopes with different characteristics, and I wouldn’t be at all surprised if they replace certain jobs that deal mostly with communicating with people (for example, I suspect the kind of reporting done by news agencies doesn’t really need human writers to compose articles - just feed data in bullet-point format to an LLM to turn it into a “story”).

              What LLMs are not is AGI, and using them as knowledge engines or even just knowledge sources is a recipe for frustration, as you end up either going down the wrong route by believing the AI, or spending more time validating the AI’s output than it would take to find the knowledge yourself from reliable sources.

              Whilst I’ve been on and off on the whole question of “might they be the starting point from which AGI comes” (which really comes down to the question “what is intelligence”), what I am certain of is that nobody who is truly knowledgeable about it can honestly and assuredly state that “they are the seed from which AGI will come”, and that kind of crap (or worse, people flatly stating that LLMs already are intelligent) is almost all of the hype we get about AI at the moment.

              At the moment, and judging by the developments we are seeing, I’m more inclined to think that at least the reasoning part of intelligence won’t be solved by this path, though the intuition part of it might be, as that stuff is mainly about pattern recognition.

              • SirGolan@lemmy.sdf.org · ↑4 · edited · 1 year ago

                Yeah, I generally agree there. And you’re right. Nobody knows if they’ll really be the starting point for AGI because nobody knows how to make AGI.

                In terms of usefulness, I do use it for knowledge retrieval and have a very good success rate with that. Yes, I have to double check certain things to make sure it didn’t make them up, but on the whole, GPT4 is right a large percentage of the time. Just yesterday I’d been Googling to find a specific law or regulation on whether airlines were required to refund passengers. I spent half an hour with no luck. ChatGPT with GPT4 pointed me to the exact document, down to the right subsection, on the first try. If you try that with GPT3.5 or really anything else out there, there’s a much higher rate of failure, and I suspect a lot of people who use the “it gets stuff wrong” argument probably haven’t spent much time with GPT4. Not saying it’s perfect - it still confidently says incorrect things and will even double down if you press it, but 4 is really impressive.

                Edit: Also agree, anyone saying LLMs are AGI or sentient or whatever doesn’t understand how they work.

                • Aceticon@lemmy.world · ↑2 ↓1 · edited · 1 year ago

                  That’s a good point.

                  I’ve been thinking about the possibility of LLMs revolutionizing search (basically search engines), which are not authoritative sources of information (far from it), but which get you to those sources much faster.

                  LLMs hold most of the same information search engines do, and add the whole extra layer of being able to query it in more natural language. Due to their massive training sets, even if one’s question is slightly incorrect, the nearest cluster of textual tokens in the token space (an oversimplified description of how LLMs work, I know) to that incorrect question might very well be where the correct questions and answers sit, so you get the correct answer (and, funnily enough, the more naturally one poses the question, the better).
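That “nearest cluster in token space” intuition can be sketched as a toy nearest-neighbor lookup. The vectors below are hand-made stand-ins for a model’s learned embeddings, purely for illustration:

```python
import math

# Toy "embedding space": hand-picked vectors standing in for learned
# representations (illustrative only, not real model embeddings).
EMBEDDINGS = {
    "how do I refund a flight":    [0.9, 0.1, 0.0],
    "airline ticket refund rules": [0.8, 0.2, 0.1],
    "best pasta recipes":          [0.0, 0.1, 0.9],
}

def cosine(a, b):
    # standard cosine similarity between two vectors
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(y * y for y in b))
    return dot / (na * nb)

def nearest(query_vec):
    # an imprecise query still lands nearest the right cluster
    return max(EMBEDDINGS, key=lambda k: cosine(EMBEDDINGS[k], query_vec))

# A paraphrased / slightly "wrong" question, embedded near the refund cluster:
query = [0.85, 0.15, 0.05]
print(nearest(query))
```

The point of the sketch: the lookup is driven by proximity in the vector space, not by exact wording, which is why a loosely phrased question can still retrieve the right cluster of answers.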

                  However as a direct provider of answers, certainly in a professional setting, it quickly becomes something that produces more work than it saves, because you always have to check the answers since there are no cues about how certain or uncertain that result was.

                  I suspect many if not most of us have also had human colleagues who were just like that: delivering even the most “this is a wild guess” answer to somebody’s question as an assured “this is the way things are”, and I suspect also that most of those who had such colleagues quickly learned not to go to them for answers, and to always double check the answer when they did.

                  This is why I doubt it will do things like revolutionize programming or, in fact, replace humans in producing output in hard-knowledge domains that operate mainly on logic, though it might very well replace humans whose work is to wrap things up in the appropriate language for a target audience (I suspect it’s going to revolutionize the production of highly segmented, even individually targeted, propaganda on social networks).

      • HellAwaits@lemm.ee · ↑1 · 1 year ago

        What I don’t understand is why so many people conflate “hating the Disney CEO for misusing AI” with “hating AI”. Maybe if people understood the difference, they would “understand the hate”.

      • Aceticon@lemmy.world · ↑6 ↓7 · edited · 1 year ago

        Ah, yes.

        Remind me again how that “revolution of human mobility”, the Segway, is doing now…

        Or how wonderful every single one of the announcements of breakthroughs in fusion power generation has turned out to be…

        Or how the safest Operating System ever, Windows 7, turned out in terms of security…

        Or how Bitcoin has revolutionized how people pay each other for stuff…

        Some of us have seen lots of hype trains go by over the years, always in the same format and almost all of them originating from exactly the same subset of people as the AI one, and we recognize the sales-speak from greedy fuckers designed to excite ignorant, naive fanboys of such bullshit choo-choo trains when they pull into the station.

        Rational people who are not driven by “personal profit maximization on the backs of suckers” will not use sales-speak, refer to anything brand new as “the most incredible creation of humanity” (it’s way too early to tell), or deem any and all criticism of it to be “shitting on it”.

        • FaceDeer@kbin.social · ↑4 ↓3 · 1 year ago

          “Completely unrelated thing X didn’t live up to its hype, therefore thing Y must also suck” is not particularly sound logic for shitting on something.

          • Aceticon@lemmy.world · ↑3 ↓2 · 1 year ago

            Funny how, from all the elements where it resonates with historical events - “people promoting it”, “bleeding-edge tech”, “style of messaging”, “extraordinary claims without extraordinary proof” and more - you ended up making the kind of simplistic conclusion a young child might make.

            • SirGolan@lemmy.sdf.org · ↑2 · 1 year ago

              extraordinary claims without extraordinary proof

              What are you looking for here? Do you want it to be self-aware, and anything less than that is hot garbage? The latest advances in AI have many uses. Sure, Bitcoin was overhyped and so is AI, but Bitcoin was always a solution with no problem. AI (as in AGI) offers a solution to literally all problems (or maybe the end of humans, but hopefully not, hah). The current tech, though, is widely useful. With GPT4 and GitHub Copilot, I can write good working code at multiple times my normal speed. It’s not going to replace me as an engineer yet, but it can enhance my productivity by a huge amount. I’ve heard similar from many others in different jobs.

        • Zeth0s@lemmy.world · ↑2 ↓2 · edited · 1 year ago

          AI, even in its current state, is one of the most incredible creations of humanity.

          If there were a Nobel Prize for math and computer science, the whole field would deserve one next year. It would probably go to a number of different people who contributed to the current methodologies.

          You cannot compare NFTs to AI. Open Nature or Science (the scientific publications) right now and you’ll see how big the impact of AI is.

          You can start your research here: https://www.deepmind.com/research/highlighted-research/alphafold . More Nobel Prize material.

          • Aceticon@lemmy.world · ↑2 ↓1 · edited · 1 year ago

            I actually have some domain expertise, so excuse me if I don’t just eat up that overexcited, ignorant fanboy pap and pamphlet from one of the very companies trying to profit from such things.

            AGI (Artificial General Intelligence, i.e. a “thinking machine”) would indeed be that “incredible creation of humanity”, but that’s not this shit. This shit is a pattern-matching and pattern-reassembly engine - a technologically evolved parrot capable of producing outputs that mimic what was present in its training sets so well that it even parrots associations present in those sets (i.e. certain questions get certain answers, only the LLM doesn’t even understand them as “questions” and “answers”, just as textual combinations).

            Insufficiently intelligent people with no training in the hard sciences often confuse such perfect parroting of what intelligent beings previously produced with actually having intelligence, which is half hilarious and half sad.

            Edit: that was actually unfair, so let me put it better: some reactions to the hype around this AI remind me of how my grandmother - an illiterate old lady from the countryside who had been very poor most of her life - used to get very confused when she saw the same actor in multiple soap operas. The whole concept of actors and acting was beyond her life experience, so when I was a kid and she had moved to live with us in the “big city”, she took what she saw on TV at face value. I suspect a lot of people with no previous understanding of the domain are going down the same route of reasoning on AI as my nana did on soap operas, and end up confusing the LLM’s impeccable imitation of human language use with there actually being a human-like intelligence behind it, just as my nana confused the “living truthfully in imaginary circumstances” of good actors with the real living it imitated.

            • Zeth0s@lemmy.world · ↑3 · edited · 1 year ago

              As you have domain expertise, you will agree with us that, despite not being AGI, deep learning, reinforcement learning, and generative AI as they are now are an incredible creation of humanity that, among other things, is already capable of:

              1. solving long-standing scientific challenges such as protein folding;
              2. taking independent decisions and developing strategies that, on specific tasks, surpass human experts;
              3. mapping human languages and artistic creations into high-dimensional vector spaces where concepts and relationships are retained as properties of the space, allowing one to perform math and statistical inference and to generate original images and text (a thing for which, a few decades ago, not many would have guessed such a manageable mathematical representation could even exist).
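Point 3 is the classic word-vector analogy trick. Here is a toy sketch of it; the 3-d vectors below are hand-picked for illustration, not learned embeddings (real models use hundreds of dimensions):

```python
import math

# Hand-made 3-d "word vectors": one axis loosely encodes royalty,
# one encodes gender (illustrative stand-ins for learned embeddings).
V = {
    "king":  [0.9, 0.9, 0.0],
    "queen": [0.9, 0.1, 0.0],
    "man":   [0.1, 0.9, 0.0],
    "woman": [0.1, 0.1, 0.0],
}

def add(a, b):  return [x + y for x, y in zip(a, b)]
def sub(a, b):  return [x - y for x, y in zip(a, b)]

# The famous analogy as plain vector arithmetic: king - man + woman ≈ queen
target = add(sub(V["king"], V["man"]), V["woman"])
best = min(V, key=lambda w: math.dist(V[w], target))
print(best)  # → queen
```

With these toy vectors the arithmetic lands exactly on “queen”; in a real embedding space it lands near it, which is the sense in which “relationships are retained as properties of the space”.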

              On top of this we take for granted all the already-existing applications, such as image recognition, translation, and text classification…

              You would also agree with us that the potential of current AI methodologies in all fields of science and technology is already enormous, as demonstrated by AlphaFold for instance. We just need a few more years to see even more groundbreaking applications of the existing methodologies, while we wait for even more powerful techniques or, why stop dreaming, AGI in a few decades.

              • Aceticon@lemmy.world · ↑1 ↓2 · edited · 1 year ago

                What it’s doing is just a natural extension of what was done with basic neural networks back in the 90s, when they started being used to recognize human-written postal codes on mail envelopes.

                This is why I disagree that this specific moment in the development of AI is “an incredible creation of humanity”. Maybe the domain as a whole will turn out to be as groundbreaking as computers, but the idea that what’s being done now by itself is that is ignorant, premature or both.

                As for the rest, I actually studied physics at degree level, and with it complex mathematics, and your point #3 is absolute total bollocks.

                • Zeth0s@lemmy.world · ↑3 · edited · 1 year ago

                  I was actually taking the time to share some very basic resources for you to learn about basic concepts such as latent spaces, embeddings, attention mechanisms, and Markov decision processes, but your attitude really made me change my mind.

                  It’s fine that you clearly don’t have the domain knowledge you claim, but your rudeness is really annoying. Enjoy your life with your achievement of complex math at degree level, and learn how to speak to people.

                  BTW, neural networks, even if a few decades old, are an incredible achievement of humanity. Even knowing how to roughly simulate a biological neural network involves understanding of the brain, non-linear math, and the existence of computers, and each of those is an astonishing achievement of humanity.

    • Not A Bird@lemmy.world · ↑3 ↓1 · 1 year ago

      Lemmy, and Mastodon to a larger extent, hate anything owned by a corporation. That voice is growing louder by the day.

    • deadcream@kbin.social · ↑6 ↓15 · edited · 1 year ago

      It’s just a projection of the hate for techbros (especially celebrities like Musk). Everything techbros love (crypto, AI, space, etc.) is hated automatically.
      I.e. they don’t really hate AI. You can’t hate something you have zero understanding of. It’s just an expression of hate for someone who promotes that something.

      • chaogomu@kbin.social · ↑12 ↓6 · 1 year ago

        AI is not good. I want it to be good, but it’s not.

        I’ll clarify, it’s basically full of nonsense. Half of the shit it spits out is nonsense, and the rest is questionable. Even with that, it’s already being used to put people out of their jobs.

        Techbros think AI will run rampant and kill all humans, when they’re the ones killing people by replacing them with shitty AI. And the worst part is that it isn’t even good at the jobs it’s being used for. It makes shit up, it plagiarizes, it spits out nonsense. And a disturbing amount of the internet is starting to become AI generated. Which is also a problem. See, AI is trained on the wider internet, and now AI is being trained on the shitty output of AI. Which will lead to fun problems and the collapse of the AI. Sadly, the jobs taken by AI will not come back.

        • Aceticon@lemmy.world · ↑4 · edited · 1 year ago

          It’s a tool that can be used to great effect in the right setting, for example to wrap cold, summarily stated knowledge into formats with much broader appeal, and to reverse the process.

          However, it’s being sold by greedy fuckers, who stand to gain from people jumping on the hype train, as something else altogether: a shortcut to knowledge and to the output of those who have it, because there’s a lot more money to be made from that than from something that can “write an article from a set of bullet points”.

          For me the most infuriating aspect is that this is hardly the first such hype train out of “TechBrosCity” bound for “FleeceTheSuckersTown” that we’ve seen in the last two decades - not even the second or the third. There have been a lot of them, always following the same formula, to the point that the “great men” of the current tech age (such as Musk) are, unlike those of the first tech boom (which ended in 2000), people who repeatedly used this kind of thing to make themselves rich by fleecing suckers, not makers.

        • _danny@lemmy.world · ↑3 · 1 year ago

          It’s definitely gone downhill recently, but at the launch of GPT4 it was pretty incredible. It would make several logical jumps that a lot of actual people probably wouldn’t make. I remember my “wow moment” was asking how many M&M’s would fit in a typical glass milk jug; I then measured it myself (by weight) and its answer was about 8% off. It gave measurements and cited actual equations. I couldn’t find anything through Google that solved the same problem or had the same answer it could have just copied. It was supposed to be bad at math, but GPT4 got those types of problems pretty much spot on for me.
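That kind of estimate is ordinary back-of-envelope math. A sketch of one way to do it, where every number is an assumption (jug size, candy dimensions, packing density) rather than a measured value:

```python
import math

# Assumed: a half-gallon glass milk jug (~1893 cm^3 of interior volume).
jug_volume_cm3 = 1893.0

# Assumed: an M&M modeled as an oblate spheroid, ~13.4 mm wide and
# ~6.8 mm thick, giving semi-axes in cm:
a, b, c = 0.67, 0.67, 0.34
mm_volume_cm3 = (4.0 / 3.0) * math.pi * a * b * c

# Assumed: ~64% packing fraction (random close packing of spheres;
# ellipsoids like M&M's actually pack a bit denser).
packing_fraction = 0.64

count = jug_volume_cm3 * packing_fraction / mm_volume_cm3
print(round(count))
```

With these assumptions the estimate lands somewhere around two thousand candies; the interesting part is that the whole problem reduces to three quantities, which is exactly the kind of decomposition the model reproduced.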

          I think that most people who have tried the latest AI models have had a bad experience because its power is distributed over more users.

          • chaogomu@kbin.social · ↑4 ↓1 · 1 year ago

            There’s also the issue of model collapse: when an AI is trained on data generated by AI, the errors and hallucinations start to compound until all you have left is gibberish. We’re about halfway there.
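The mechanism being described can be caricatured with a toy “model” that just fits a normal distribution to data, then retrains on its own finite samples. This is only a cartoon of the idea: each generation sees only the previous generation’s output, so estimation noise compounds and tail information is progressively lost.

```python
import random
import statistics

random.seed(0)

# Generation 0: "real" data from a standard normal distribution.
data = [random.gauss(0.0, 1.0) for _ in range(500)]

spreads = []
for generation in range(10):
    # "Train" the model: fit mean and spread to the current data.
    mu = statistics.fmean(data)
    sigma = statistics.stdev(data)
    spreads.append(sigma)
    # Next generation trains only on this model's sampled output.
    data = [random.gauss(mu, sigma) for _ in range(500)]

print(f"spread per generation: {[round(s, 3) for s in spreads]}")
```

Each refit is slightly wrong, and because later generations never see the original data again, those errors have nothing to correct against - the statistical version of photocopying a photocopy.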

            • FaceDeer@kbin.social · ↑3 · 1 year ago

              ChatGPT is trained on data with a cutoff in September 2021. It’s not training on AI-generated data.

              Even if some AI-generated data is included, as long as it’s reasonably curated and mixed with non-AI data, model collapse can be avoided.

              “Model collapse” is starting to feel like just a keyword for “this AI isn’t as good as I wanted.”

            • _danny@lemmy.world · ↑3 ↓1 · 1 year ago

              I feel like you’re undereducated about how and when AI models are trained. The GPT models especially are not “constantly learning” like some other models; they’re tweaked in discrete increments by developers trying to cover their asses and get the model to less frequently say things they can be sued for.

              Also, AIs already train other AIs; that’s kind of how these AIs are made. There’s a model that scores how well a given phrase follows another, and that’s used to train the part of the AI you interact with (arguably they’re part of the same whole, depending on how you view the architecture).
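A loose sketch of that “one model scores another” idea. Everything here is hypothetical: the hand-written heuristic below stands in for a learned reward/preference model, and the candidate list stands in for a generator’s sampled outputs.

```python
def reward_model(text: str) -> float:
    # Stand-in scorer (hypothetical heuristic, not a trained model):
    # mildly prefers concise text and complete sentences.
    score = 1.0 / (1 + len(text))
    if text.endswith("."):
        score += 0.5
    return score

# Stand-ins for a generator's candidate outputs for the same prompt:
candidates = [
    "The capital of France is Paris.",
    "paris i think??? maybe idk",
    "The capital of France is Paris, which is in France, obviously",
]

# The preference signal picks which candidate gets reinforced.
best = max(candidates, key=reward_model)
print(best)
```

In the real pipelines the scorer is itself a trained network, and its preferences are used as the training signal for the conversational model, which is the “AI training AI” loop being described.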

              CGP Grey has a good intro video on how bots learn; it’s pretty outdated and not really applicable to how LLMs learn, but the general idea is still there.

      • aesthelete@lemmy.world · ↑4 ↓2 · 1 year ago

        Not everyone who dislikes a thing, or the promoters of that thing, “has no idea what it is”… but sure, go off I guess. 🤷