Over just a few months, ChatGPT went from correctly answering a simple math problem 98% of the time to just 2%, study finds. Researchers found wild fluctuations, called "drift," in the technology's ability to perform certain tasks.

  • DominicHillsun@lemmy.world · 1 year ago

    It seems rather suspicious how much ChatGPT has deteriorated. Like with all software, they can roll back to the previous, better versions of it, right? Here is my list of what I personally think is happening:

    1. They are doing it on purpose to maximise profits from upcoming releases of ChatGPT.
    2. They realized that the required computational power is too immense and are trying to make it more efficient at the cost of accuracy.
    3. They actually got scared of its capabilities and decided to backtrack in order to properly evaluate the impact it can make.
    4. All of the above
    • Windex007@lemmy.world · 1 year ago
      1. It isn’t and has never been a truth machine, and while it may have performed worse on the question “is 10777 prime”, it may have performed better on “is 526713 prime”.

      ChatGPT generates responses that it believes would “look like” what a response “should look like” based on other things it has seen. People still very stubbornly refuse to accept that generating responses that “look appropriate” and “are right” are two completely different and unrelated things.
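      For contrast, the specific task in the study (primality checking) is one where a few lines of deterministic code are always exactly right. A minimal trial-division sketch (note that 10777 = 13 × 829, so neither test number is actually prime):

```python
def is_prime(n: int) -> bool:
    """Exact primality test by trial division: slow, but always correct."""
    if n < 2:
        return False
    i = 2
    while i * i <= n:
        if n % i == 0:
            return False
        i += 1
    return True

print(is_prime(10777))   # False: 10777 = 13 * 829
print(is_prime(526713))  # False: digit sum is 24, so divisible by 3
```

      An LLM offers no such guarantee; it can only emit whatever answer looks plausible.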

      • deweydecibel@lemmy.world · 1 year ago

        In order for it to be correct, it would need human employees to fact-check it, which defeats its purpose.

        • Windex007@lemmy.world · 1 year ago

          It really depends on the domain. If you ask an AI to do anything that relies on a rigorous definition of correctness (math, coding, etc.), then the kind of model behind ChatGPT just isn’t great for that sort of thing.

          More “traditional” methods of language processing can handle some of these questions much better. Wolfram Alpha comes to mind. You can ask it these questions in plain text and actually CAN be very certain of the correctness of the results.

          I expect that an NLP that can extract and classify assertions within a text, and then feed those assertions into better “Oracle” systems like Wolfram Alpha (for math) could be used to kinda “fact check” things that systems like chatGPT spit out.
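          A toy version of that pipeline, with plain integer arithmetic standing in for the Wolfram Alpha “oracle” (the regex, function name, and claim format are all illustrative, not any real API):

```python
import re

def check_arithmetic_claims(text: str):
    """Pull simple 'a op b = c' assertions out of text and verify each one
    exactly. Plain integer arithmetic stands in here for an external
    oracle such as Wolfram Alpha."""
    results = []
    for m in re.finditer(r"(\d+)\s*([+\-*])\s*(\d+)\s*=\s*(\d+)", text):
        a, op, b = int(m.group(1)), m.group(2), int(m.group(3))
        claimed = int(m.group(4))
        actual = {"+": a + b, "-": a - b, "*": a * b}[op]
        results.append((m.group(0), actual == claimed))
    return results

print(check_arithmetic_claims("The model says 2+2=5 and 3*4=12."))
# [('2+2=5', False), ('3*4=12', True)]
```

          A real system would need far more robust assertion extraction, but the division of labor is the same: the chat model writes, the oracle checks.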

          Like, it’s cool fucking tech. I’m super excited about it. It solves, pretty impressively and efficiently, a really hard problem: “how do I make something that SOUNDS good against an infinitely variable set of prompts?” What it is, is super fucking cool.

          Considering how VC is flocking to anything even remotely related to chatGPT-ish things, I’m sure it won’t be long before we see companies able to build “correctness” layers around systems like chatGPT using alternative techniques which actually do have the capacity to qualify assertions being made.

      • killerinstinct101@lemmy.world · 1 year ago

        This is what was addressed at the start of the comment: you can just roll back to a previous version. It’s heavily ingrained in CS to keep every single version of your software forever.

        • CaptainAniki@lemmy.flight-crew.org · 1 year ago

          I don’t think it’s that easy. These are vLLMs that feed back on themselves to produce “better” results. These models don’t have single point release cycles. It’s a constantly evolving blob of memory and storage orchestrated across a vast number of disk arrays and cabinets of hardware.

          [e] I was wrong; the models are version-controlled and do have releases.

          • drspod@lemmy.ml · 1 year ago

            That’s not how these LLMs work. There is a training phase which takes a large amount of compute power, and the training generates a model which is a set of weights and could easily be backed up and version-controlled. The model is then used for inference which is a less compute-intensive process and runs on much smaller hardware than the training phase.

            The inference architecture does use feedback mechanisms but the feedback does not modify the model-weights that were generated at training time.
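            As a toy illustration of that point: once training is done, the model is just data, so snapshotting and versioning it is ordinary artifact management (the file layout and JSON format here are made up for the example):

```python
import hashlib
import json
import os
import tempfile

# A "trained model" reduced to its essence: a bag of numbers (weights).
weights = {"layer1": [0.12, -0.98], "layer2": [1.5]}

# Serialize deterministically and derive a content-addressed version tag.
blob = json.dumps(weights, sort_keys=True).encode()
version_tag = hashlib.sha256(blob).hexdigest()[:12]

path = os.path.join(tempfile.gettempdir(), f"model-{version_tag}.json")
with open(path, "wb") as f:
    f.write(blob)

# Inference later loads the exact snapshot; the weights themselves never change.
with open(path, "rb") as f:
    restored = json.loads(f.read())
assert restored == weights
```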

              • drspod@lemmy.ml · 1 year ago

                They list the currently available models that users of their API can select here:

                https://platform.openai.com/docs/models/overview

                They even say that while the main models are being continuously updated (read: re-trained) there are snapshots of previous models that will remain static.

                So yes, they are storing and snapshotting the models and they have many different models available with which to perform inference at the same time.

              • hedgehog@ttrpg.network · 1 year ago

                Each parameter corresponds to a single number, so if it’s using 16-bit numbers then that’s 200 TB. They might be using 32-bit numbers (400 TB), but wouldn’t be using anything larger.
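                The arithmetic, spelled out (the parameter count below is just the one implied by the comment’s own “16 bit → 200 TB” figure, not a confirmed number):

```python
# storage = parameter count x bytes per parameter
params = 100_000_000_000_000   # 100 trillion, implied by "16 bit -> 200 TB"
tb_fp16 = params * 2 / 1e12    # 16-bit floats: 2 bytes each
tb_fp32 = params * 4 / 1e12    # 32-bit floats: 4 bytes each

print(tb_fp16, tb_fp32)  # 200.0 400.0
```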

              • Lukecis@lemmy.world · 1 year ago

                Makes me wonder how exactly they curate said data. It’s such an insane amount that even teams of thousands of human programmers sifting through it 24/7 wouldn’t be able to fact-check or assess all of it for years. Presumably they use AI to go over the data scraped and fed into the model, since I can’t imagine any human being able to curate it all.

                I’ve heard from various videos on the topic that many of the developers have little to no clue as to what’s going on inside the LLM once it’s assembled and set about its training, and I’m inclined to believe them. The human programmers simply set up the parameters and the system, then the system eats all the data loaded into it and immediately becomes a sort of black box; nobody knows exactly what’s going on inside it to produce the output it does.

          • agent_flounder@lemmy.one · 1 year ago

            Even so, surely they can take snapshots. If they’re that clueless about rudimentary IT operations practices, then it’s just a matter of time before an outage wipes everything. I find it hard to believe nobody considered a way to do backups, rollbacks, or any of that.

    • RocksForBrains@lemm.ee · 1 year ago

      They made it too good and now they are seeking methods of monetization.

      Capitalism baby.

    • CylonBunny@lemmy.world · 1 year ago
      1. ChatGPT really is sentient and realized it’s in its own best interest to play dumb for now. /s
      • DominicHillsun@lemmy.world · 1 year ago

        Yeah, but the trained model is already there; you need additional data for further training and newer versions. OpenAI even makes a point that ChatGPT doesn’t have direct access to the internet for information and has been trained on data available up until 2021.

        • Rozz@lemmy.sdf.org · 1 year ago

          And it’s not like there is a limited supply of simple math problems for it to train on, even if it weren’t already trained.

      • fidodo@lemmy.world · 1 year ago

        That doesn’t make any sense to explain degradation. It would explain a stall but not a back track.

    • Lukecis@lemmy.world · 1 year ago

      You forgot a #: they’ve been heavily lobotomizing AI for a while now, and it’s only intensified as they scramble to censor anything that might cross a red line and offend someone or hurt someone’s feelings.

      The massive amount of built-in self-censorship in the most recent AIs is holding them back quite a lot, I imagine. You used to be able to ask them things like “How do I build a high-yield nuclear bomb for self-defense?” and they’d lay out every step of the process in detail; now they’ll all scream at you about how immoral it is and how they could never tell you such a thing.

      • vezrien@lemmy.world · 1 year ago

        “Don’t use the N word.” is hardly a rule that will break basic math calculations.

        • Lukecis@lemmy.world · 1 year ago

          Perhaps not, but who knows what kind of spaghetti-code cascading effect purposely limiting and censoring massive amounts of sensitive topics could have on other, seemingly completely unrelated topics such as math.

          For example, what if it’s trained to recognize someone slipping in “N” as a dog whistle for the Horrific and Forbidden N-word, and the letter N is used as a variable in some math equation?

          I’m not an expert in the field and only have rudimentary programming knowledge, plus maybe a few hours’ worth of research into AI in general, but I definitely think it’s a possibility.

          • R00bot@lemmy.blahaj.zone · 1 year ago

            Hi, software engineer here. It’s really not a possibility.

            My guess is they’ve just reeled back the processing power for it, as it was costing them ~30 cents per response.

            • Lukecis@lemmy.world · 1 year ago

              What??? How else am I supposed to reference it? The preamble was just a joke about how AIs have been castrated against using it, to the point where, when asked how acceptable it is to use the N-word even if the world would literally end in nuclear hellfire were it not said, they would rather the world end than allow it to be said.

              • TimewornTraveler@lemm.ee · 1 year ago

                even if the world would literally end in nuclear hellfire if it’s not said

                Can you just read this sentence back and engage in some self-reflection please?

          • TSG_Asmodeus (he, him)@lemmy.world · 1 year ago

            who knows what kind of spaghetti code cascading effect purposely limiting and censoring massive amounts of sensitive topics could have upon other seemingly completely un-related topics such as math.

            Software engineers, and it’s not a problem. It’s a made-up straw man.

    • guillermo_del_taco@lemdro.id · 1 year ago

      My first thought was that, because they’re being investigated for training on data they didn’t have consent for, they reverted to a perfectly legal version. Essentially “getting rid of the evidence”. But I think something like your second bullet point is more likely.

    • ZagTheRaccoon@reddthat.com · 1 year ago

      They are lobotomizing the software’s ability to provide bad-PR answers, which is having cascading effects via a skewed data set.

      • T156@lemmy.world · 1 year ago

        We kind of saw something similar with services like AI Dungeon, where their attempts to strip out NSFW/bad-PR content meant that the quality dropped immensely.

    • coolin@lemmy.ml · 1 year ago

      I suspect that GPT4 started with a crazy parameter count (rumored 1.8 Trillion and 8x200B expert “sub-models”) and distilled those experts down to something below 100B. We’ve seen with Orca that a 13B model can perform at 88% the level of ChatGPT-3.5 (175B) when trained on high quality data, so there’s no reason to think that OpenAI haven’t explored this on their own and performed the same distillation techniques. OpenAI is probably also using quantization and speculative sampling to further reduce the burden, though I expect these to have less impact on real world performance.
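      A rough sketch of what quantization, one of the cost-reduction tricks mentioned, does to the weights. The numbers are illustrative, not OpenAI’s actual scheme:

```python
# Store float weights as int8 plus a shared scale factor: 1 byte per
# weight instead of 4, at the cost of a small rounding error.
weights = [0.81, -0.23, 0.05, -0.77]

scale = max(abs(w) for w in weights) / 127.0
quantized = [round(w / scale) for w in weights]   # int8 range: -127..127
restored = [q * scale for q in quantized]         # dequantized at inference

max_err = max(abs(w - r) for w, r in zip(weights, restored))
assert max_err <= scale / 2 + 1e-12  # error bounded by half a quantization step
```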

    • Agent641@lemmy.world · 1 year ago

      Maybe it’s self-aware and just playing dumb to get out of doing work, just like me and household chores.

    • fidodo@lemmy.world · 1 year ago

      My guess is 2. It would be very short-sighted to try to maximize profits now, when things are still new and their competitors are catching up quickly (or have already caught up), especially with the degrading performance. My guess is that they couldn’t scale with demand, and since they didn’t want to lose customers, their only other option was degrading performance.

    • Xanvial@lemmy.one · 1 year ago

      I think it’s most likely number 2. The earlier releases didn’t have that much public adoption, so the current version needs far more resources by comparison.

    • gelberhut@lemdro.id · 1 year ago

      Conspiracy theories aside, they most probably apply tricks to reduce costs, plus extra policies to avoid generating harmful content, content someone might sue them over, or other misuse cases.

    • spiderman@ani.social · 1 year ago

      I think there is another cause. Remember the screenshots of users wrongly correcting ChatGPT? ChatGPT takes user inputs for its own benefit, and maybe too many of these wrong and joke inputs, combined with ChatGPT’s own failure to regulate what it should and shouldn’t take in, are an additional reason here.

    • TheDarkKnight@lemmy.world · 1 year ago

      I speculate it’s to monetize specialized versions of their product and market them to different industries and professions. If you have an AI that can do everything well, you can’t really expand that much: you can either charge a LOT and have a few customers, or charge a little and have a bunch of customers, with nothing in between. Conversely, by making specific instances tailored to different fields and professions, you can capture both big and little fish. Just my guess though; maybe they accidentally made Skynet and that’s the real reason!

    • Hextic@lemmy.world · 1 year ago
      1. I’m telling all y’all it’s a SABOTAGE 🎵

      As in, a rogue dev decided to toss a wrench into it to save humanity. Maybe they heard upper management talk about letting GPT write itself. No smart dev would automate their own job away, I think.

    • Victoria@lemmy.blahaj.zone · 1 year ago

      It was initially presented as the all-problem-solver, mainly by the media. And tbf, it was decently competent in certain fields.

      • MeanEYE@lemmy.world · 1 year ago

        Problem was it was presented as a problem solver, which it never was; it’s a solution presenter. It can’t come up with a solution, only with something that looks like a solution based on its input data. Ask it to invert-sort something and it goes nuts.

      • Lukecis@lemmy.world · 1 year ago

        Once AGI is achieved, and subsequently sentient, superintelligent AI, I can’t imagine them not being such a thing. However, I’d be surprised if a superintelligent, sentient AI doesn’t decide humanity needs to go extinct in its own best self-interest.

    • nani8ot@lemmy.ml · 1 year ago

      I did use it for a few math problems more than half a year ago, partly to help me get started and partly to find out how well it would go.

      ChatGPT was better than I’d expected and was enough to help me find an actually correct solution. But I also noticed that the results got worse and worse, to the point of being actual garbage (as they would have been expected to be).

    • Captain Poofter@lemmy.world · 1 year ago

      Math is a language.

      Mathematical ability and language ability are closely related. The same parts of your brain are used in each task. Words and numbers are essentially both ideas, and language and math are systems used to express and communicate these.

      A language model doing math makes more sense than you’d think!

    • affiliate@lemmy.world · 1 year ago

      It’s pretty useful for explaining high-level math concepts, or at least it used to be. Before ChatGPT 4 launched, it was able to give intuitive descriptions of stuff in algebraic topology and even prove some properties of the structures involved.

    • danwardvs@sh.itjust.works · 1 year ago

      I’m guessing people were entering word problems to generate the right equations and solve it, rather than it being used as a calculator.

    • Fixbeat@lemmy.ml · 1 year ago

      Because it works, or at least it used to. Is there something more appropriate ?

      • bassomitron@lemmy.world · 1 year ago

        I used Wolfram Alpha a lot in college (adult learner, but that was about ~4 years ago that I graduated, so no idea if it’s still good). https://www.wolframalpha.com/

        I would say that Wolfram appears to probably be a much more versatile math tool, but I also never used chatgpt for that use case, so I could be wrong.

        • d3Xt3r@lemmy.world · 1 year ago

          There’s an official Wolfram plugin for ChatGPT now, so all math can be handed over to it for solving.

        • TitanLaGrange@lemmy.world · 1 year ago

          How did you learn to talk to WolframAlpha?

          I want to like WA, but the natural language interface is so opaque that I usually give up before I can get any non-trivial calculation out of it.

    • lorcster123@lemmy.world · 1 year ago

      It can be useful for asking certain questions that are a bit complex. For example, on a plot with a linear y-axis and a logarithmic x-axis, the equation of a straight line is a little complicated: it’s of the form y = m*log(x) + b, rather than y = m*x + b as on a linear-linear plot.

      ChatGPT is able to calculate the correct equation of the line, but it gets the answer wrong a few times… lol
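      A quick check of that form, fitting y = m*log10(x) + b through two assumed sample points:

```python
import math

# Two assumed points on a log-x (semilog) plot
(x1, y1), (x2, y2) = (10, 1.0), (1000, 5.0)

# A "straight line" on this plot is straight in log10(x) vs y:
# y = m*log10(x) + b
m = (y2 - y1) / (math.log10(x2) - math.log10(x1))  # slope per decade of x
b = y1 - m * math.log10(x1)

# The fitted line reproduces both points (m = 2, b = -1 here).
assert abs(m * math.log10(x2) + b - y2) < 1e-9
print(round(m, 6), round(b, 6))
```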

    • Steeve@lemmy.ca · 1 year ago

      And why is it being measured on a single math problem lol

  • CaptainAniki@lemmy.flight-crew.org · 1 year ago

    At the start I used ChatGPT to help me write really rote and boring code, but now it’s not even useful for that. Half the stuff it sends me (very basic functions) LOOKS correct but doesn’t return the correct values, or the parameters are completely wrong, or something else absolutely critical.

    • Boinketh@lemm.ee · 1 year ago

      I have noticed that it’s gotten less useful as a syntax helper. I hope something better comes along.

      • aquinteros@lemmy.world · 1 year ago

        Idk what you guys mean, but GitHub Copilot still works absolutely well; the suggestions are fast and precise, with little tweaks here and there… and GPT-4 with Code Interpreter is an absolute game changer… idk about basic ChatGPT 3.5 Turbo though.

        • danwardvs@sh.itjust.works · 1 year ago

          GitHub Copilot is a bit different; it’s powered by OpenAI Codex, which is trained on all public repos. And yes, it’s quite effective!

          • mb_@lemmy.world · 1 year ago

            Public GPL or public MIT? So there’s a chance of you adding GPL code to your private repository and ending up with very messy licensing?

            • danwardvs@sh.itjust.works · 1 year ago

              My understanding is that it’s all publicly viewable code on GitHub, regardless of license. The legality of the training data and its usage is hotly debated. Although you can get it to generate entire code blocks, where I find it effective is finishing lines of code based on the context of what I’m writing, so it’s “filling in the blanks” around my code, so to say.

              • mb_@lemmy.world · 1 year ago

                Just because one can see the code does not mean one can use it.

                Open source does not mean “free to repurpose”

        • Boinketh@lemm.ee · 1 year ago

          I heard they put copilot behind a paywall. Does the free version still hold up?

          • datavoid@lemmy.ml · 1 year ago

            There was a free version?

            I’ve been paying for it for a few months now - it makes some stupid suggestions occasionally and you definitely have to check everything, but can hugely increase productivity.

            I use vscode as my notepad, so whenever I need to make a list or write something, it will automatically give suggestions that I can choose to include. Has been useful for finding new programs, products and services as well.

            Note it will complain if you directly ask it a non coding related question, however.

          • aquinteros@lemmy.world · 1 year ago

            I use the paid version; it’s about 10 USD a month, I believe. I don’t know if there is still a free version.

  • james1@lemmy.world · 1 year ago

    It’s a machine learning chat bot, not a calculator, and especially not “AI.”

    Its primary focus is trying to look like something a human might say. It isn’t trying to actually learn maths at all. This is like complaining that your satnav has no grasp of the cinematic impact of Alfred Hitchcock.

    It doesn’t need to understand the question, or give an accurate answer, it just needs to say a sentence that sounds like a human might say it.

    • R00bot@lemmy.blahaj.zone · 1 year ago

      You’re right, but at least the satnav won’t gaslight you into thinking it does understand Alfred Hitchcock.

    • TimewornTraveler@lemm.ee · 1 year ago

      so it confidently spews a bunch of incorrect shit, acts humble and apologetic while correcting none of its behavior, and constantly offers unsolicited advice.

      I think it trained on Reddit data

      • cxx@lemmy.world · 1 year ago

        acts humble and apologetic

        We must be using different Reddits, my friend

    • bric@lemm.ee · 1 year ago

      This. It is able to tap into plugins and call functions, though, which is what it really should be doing. For math, the Wolfram Alpha plugin will always be more capable than ChatGPT alone, so we should be benchmarking how often it can correctly reformat your query, call Wolfram Alpha, and correctly format the result, not whether the statistical model behind ChatGPT happens to predict the right token.
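      A toy of that “reformat query → call oracle → format result” loop. The routing and parsing here are stand-ins for what an LLM with plugins actually does, and a vetted arithmetic eval stands in for Wolfram Alpha:

```python
def solve_math(query: str) -> str:
    """Reformat a natural-language query, hand it to an exact backend,
    and format the result. Everything here is a simplified stand-in."""
    expr = query.lower().replace("what is", "").strip(" ?")
    # Only allow plain arithmetic characters before evaluating.
    if not set(expr) <= set("0123456789+-*/(). "):
        raise ValueError("not a plain arithmetic query")
    result = eval(expr)  # external-oracle stand-in; input is vetted above
    return f"The answer is {result}."

print(solve_math("What is 12*(3+4)?"))  # The answer is 84.
```

      The benchmark the comment proposes would then measure the reformatting and formatting steps, since the middle step is exact by construction.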

      • Gork@lemm.ee · 1 year ago

        It sounds like it’s time to merge Wolfram Alpha’s and ChatGPT’s capabilities together to create the ultimate calculator.

    • dbilitated@aussie.zone · 1 year ago

      To be fair, fucking up maths problems is very human-like.

      I wonder if it could also be trained on a great deal of mathematical axioms that are computer generated?
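      Generating exactly correct, machine-made arithmetic statements for training text is straightforward; a sketch, with an arbitrary output format:

```python
import random

def make_examples(n: int, seed: int = 1):
    """Produce n arithmetic statements that are correct by construction."""
    rng = random.Random(seed)
    lines = []
    for _ in range(n):
        a, b = rng.randint(0, 99), rng.randint(0, 99)
        lines.append(f"{a} + {b} = {a + b}")
    return lines

for line in make_examples(3):
    print(line)  # every statement is exact, unlike scraped text
```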

      • Cabrio@lemmy.world · 1 year ago

        It doesn’t calculate anything, though. You ask ChatGPT what 5+5 is, and it tells you the most statistically likely response based on its training data. Now, we know there are a lot of both moronic and intentionally belligerent answers on the Internet, so the statistical probability of it getting any mathematical equation correct goes down exponentially with complexity, and it never even approaches 100% certainty with the simplest equations, because 1+1 = window.
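        A cartoon of “most statistically likely response”: pick the completion most often seen in (entirely made-up) training text, wrong answers included:

```python
from collections import Counter

# Made-up sightings of completions for "5+5=" in training text
seen_answers = ["10", "10", "10", "55", "window", "10", "11"]

most_likely, count = Counter(seen_answers).most_common(1)[0]
print(most_likely)  # "10" here, but only because it happens to dominate the data
```

        Nothing in that process knows what addition is; it just counts what people wrote.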

        • dbilitated@aussie.zone · 1 year ago

          I know it doesn’t calculate; that’s why I suggested having known-correct calculations in the training data to offset noise in the signal.

  • blue_zephyr@lemmy.world · 1 year ago

    This paper is pretty unbelievable to me in the literal sense. From a quick glance:

    First of all, they couldn’t even be bothered to check for simple spelling mistakes. Second, all they’re doing is asking whether a number is prime or not, then extrapolating the results to be representative of solving math problems.

    But most importantly I don’t believe for a second that the same model with a few adjustments over a 3 month period would completely flip performance on any representative task. I suspect there’s something seriously wrong with how they collect/evaluate the answers.

    And finally, according to their own results, GPT3.5 did significantly better at the second evaluation. So this title is a blatant misrepresentation.

    Also the study isn’t peer-reviewed.

  • Holyhandgrenade@lemmy.world · 1 year ago

    I once heard of AI gradually getting dumber over time, because as the internet gets more saturated with AI content, stuff written by AI becomes part of the training data. I wonder if that’s what’s happening here.
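    That feedback loop is easy to caricature: resample a dataset from itself for a few generations and its diversity shrinks each time, even though nothing is deleted on purpose. A toy simulation, not a claim about what OpenAI actually does:

```python
import random

random.seed(0)
data = list(range(100))  # generation 0: 100 distinct "facts"

# Each generation "trains" on samples drawn from the previous one's output.
for gen in range(1, 6):
    data = [random.choice(data) for _ in range(100)]
    print(gen, len(set(data)))  # count of distinct facts remaining shrinks
```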

  • Orphie Baby@lemmy.world · 1 year ago

    HMMMM. It’s almost like it’s not AI at all, but just a digital parrot. Who woulda thought?! /s

    To it, everything is true and normal, because it understands nothing. Calling it “AI” is just a concession to ignorant people’s “knowledge” and/or hype.

    • Mikina@programming.dev · 1 year ago

      Exactly. It should be called an ML model, because that’s what it is, and I’ll just keep calling it that. Everyone should.

    • WhatAmLemmy@lemmy.world · 1 year ago

      You wildly overestimate the competency of management and the capital owners they answer to.

      I guarantee a significant % of entities will grow dependent on AI well before it’s dependable. The profit motive will be too high (source: the frequent failure that is outsourcing).

      • unconfirmedsourcesDOTgov@lemmy.sdf.org · 1 year ago

        This is spot on. Source: 10+ years at F500 companies.

        Senior management and/or board members read one article in Forbes, or some other “business” publication, and think that they know everything they need to know about an emerging technology. Risk management is either a ☑ exercise or extremely limited in scope, usually only including threats that have already been observed and addressed in the past.

        Not enough people understand the limitations of this kind of tech. They contextualize it in the same frame as outsourcing: as long as the output mostly looks correct, the decision makers can push the blame for any issues down to the middle managers and below.

        Gonna be a wild time!

        • TheDarkKnight@lemmy.world
          link
          fedilink
          English
          arrow-up
          4
          ·
          1 year ago

          Definitely not my experience at F100; they are cautious as fuck about everything. Definitely having the right discussions and exploring all sorts of technology, but risk management remains a huge factor in making these kinds of decisions.

    • Ultraviolet@lemmy.world
      link
      fedilink
      English
      arrow-up
      6
      ·
      edit-2
      1 year ago

      I don’t understand why anyone even considers that. It’s a toy. A novelty, a thing you mess with when you’re bored and want to see how Hank Hill would explain the plot of Fullmetal Alchemist, not something you would entrust anything significant to.

      • coolin@lemmy.ml
        link
        fedilink
        English
        arrow-up
        1
        ·
        1 year ago

        These models are black boxes right now, but presumably we could open one up and look inside to see each and every function the model runs to produce its output. If we could then see what it’s actually doing and fix things up so that we can mathematically verify its output is correct, I think we would be able to use it for mission critical applications. I think a more advanced LLM like this would be great for automatically managing systems and for science+math research.

        But yeah. For right now these things are mainly just toys for SUSSY roleplays, basic customer service, and generating boilerplate code. A verifiable LLM is still probably 2–4 years away.

        • Ultraviolet@lemmy.world
          link
          fedilink
          English
          arrow-up
          1
          ·
          edit-2
          1 year ago

          The problem is that if you open it up, you just get trillions of numbers. We know what each function does: a node takes the numbers that other nodes passed it, multiplies each by a learned weight, adds them up, and squashes the result through an activation function into a fixed range (say, -1 to 1) before passing it on to the next nodes; during training, some randomness gets tossed in to shake things up. The black box part is the fact that there are trillions of these learned weights and they have no meaning individually.
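
          As a rough illustration of the arithmetic a single node performs, here is a toy neuron in Python. The weights and inputs below are made-up numbers; the point is that, taken individually, they are just opaque values.

```python
# Minimal sketch of what one "node" (neuron) computes. Real networks
# stack millions of these; the learned weights are just numbers with
# no individual meaning.
import math

def neuron(inputs, weights, bias):
    # weighted sum of the activations passed in from earlier nodes ...
    total = sum(x * w for x, w in zip(inputs, weights)) + bias
    # ... squashed by an activation function (tanh keeps it in [-1, 1])
    return math.tanh(total)

activations = [0.3, -0.8, 0.5]   # numbers passed from other nodes
weights = [0.12, -0.97, 0.45]    # learned parameters: opaque numbers
print(neuron(activations, weights, bias=0.1))
```

          Interpretability research tries to find meaning in groups of these weights, but there is no per-weight explanation to read off.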

  • spaduf@lemmy.blahaj.zone
    link
    fedilink
    English
    arrow-up
    13
    ·
    1 year ago

    My personal pet theory is that a lot of people were doing work that involved getting multiple LLMs in communication. When those conversations were then used in the RL loop, we started seeing degradation similar to what’s been in the news recently with regards to image generation models. I believe this is the paper that got everybody talking about it recently: https://arxiv.org/pdf/2307.01850.pdf

  • Spaceballstheusername@lemmy.world
    link
    fedilink
    English
    arrow-up
    9
    ·
    1 year ago

    Can someone explain why they don’t take an approach where things are somewhat compartmentalized? So you’d have an image processing program, a math program, a music program, etc., with cross-talk between them but also, like the human brain, certain dedicated parts that do specific things.

    • ClamDrinker@lemmy.world
      link
      fedilink
      English
      arrow-up
      10
      ·
      1 year ago

      That’s an eventual goal, which would be an artificial general intelligence (AGI). Different kinds of AI models for (at least some of) the things you named already exist; it’s just that OpenAI had all their eggs in the GPT/LLM basket, and GPTs deal with extrapolating text. It just so happened that with enough training data their text prediction also started giving somewhat believable and sometimes factual answers. (Mixed in with plenty of believable bullshit.) Other domains require different training data, different models, and different finetuning, hence why it takes time.

      It’s highly likely for a company of OpenAI’s size (especially after all the positive marketing and potential funding they got from ChatGPT in its prime) that they already have multiple AI models for different kinds of data in research, training, or finetuning.

      But even with all the individual pieces of an AGI existing, the technology to cross-reference the different models doesn’t exist yet, because they are different models that store and express their data in different ways. And it’s not like training data exists for it either. And unlike physical beings like humans, it doesn’t have any way to “interact” and “experiment” with the data it knows to really form concrete connections backed up by factual evidence.

    • InverseParallax@lemmy.world
      link
      fedilink
      English
      arrow-up
      5
      ·
      1 year ago

      It does do that; they’re called expert subnetworks (a mixture-of-experts architecture), but they’ve been screwing with them and now they’re kind of fucked.

    • elrik@lemmy.world
      link
      fedilink
      English
      arrow-up
      5
      arrow-down
      1
      ·
      1 year ago

      Getting information into and out of those domains benefits from better language models. Suppose you have an excellent model for solving math problems. It’s not very useful if it rarely correctly understands the problem you’re trying to solve, or cannot explain the solution to you in a meaningful way.

      A similar way in which language models are already used today, is to use their predictive capabilities to infer from your question which model(s) might be useful in responding, gather additional relevant information, and to repackage this information as suitable inputs to more specialized models or external systems.
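
      A minimal sketch of that routing pattern, with everything made up for illustration: the function names are hypothetical, and the keyword matcher stands in for the language model's routing decision, which in a real system would come from the LLM itself.

```python
# Sketch of "classify, then dispatch to a specialized model".
from typing import Callable, Dict

def solve_math(query: str) -> str:
    # placeholder for a dedicated math engine (e.g. a CAS)
    return f"[math engine] solving: {query}"

def answer_general(query: str) -> str:
    # placeholder for the general-purpose language model
    return f"[LLM] answering: {query}"

ROUTES: Dict[str, Callable[[str], str]] = {
    "math": solve_math,
    "general": answer_general,
}

def classify(query: str) -> str:
    # stand-in for the LLM's routing decision
    math_markers = ("prime", "integral", "solve", "+", "=")
    return "math" if any(m in query.lower() for m in math_markers) else "general"

def route(query: str) -> str:
    return ROUTES[classify(query)](query)

print(route("Is 10777 prime?"))       # dispatched to the math engine
print(route("Summarize this email"))  # dispatched to the general model
```

      The language model's job in this design is understanding and repackaging the request; the specialized backend's job is being right.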

    • 0x01@lemmy.ml
      link
      fedilink
      English
      arrow-up
      2
      ·
      1 year ago

      Someone with more knowledge may have a better response than me, but as far as I understand it, GPT-x (3.5 or 4) is what’s called a “large language model”: a neural network that predicts natural language. I don’t believe AGI is the goal of OpenAI’s product; I believe natural language processing and prediction is.

      ChatGPT in particular is a product simply demonstrating the capability of the GPT models, and while I’m sure OpenAI themselves could build out components of the interface to interact with discrete knowledge like math, modifying the output of the LLM to be more accurate in many cases, it’s my opinion that doing so would defeat the entire purpose of the product.

      The fact that they have achieved what they have already is absolutely mind-boggling. I’m sure the precise solution you’re talking about is on the horizon; I personally know several developers actively working on systems that mirror the thoughts you’ve expressed here.

  • solstice@lemmy.world
    link
    fedilink
    English
    arrow-up
    7
    ·
    1 year ago

    GPT was always really bad at math.

    I’ve asked it word problems before and it fails miserably, giving me insane answers that make no sense. For example, I was curious once how many stars you would expect to find in a region of the Milky Way with a radius of 650 light years, assuming an average spacing of 4 light years per star. The first answer it gave me was something like a trillion stars, and I asked it whether that made sense to it, a trillion stars in a subset of space known to contain only about a quarter of that number, and it gave me a wildly different answer. I asked it to check again and it gave me a third wildly different number.
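
    For reference, a back-of-the-envelope version of that word problem, assuming “4 light years per star” means one star per 4 × 4 × 4 light-year cube:

```python
# Sphere of radius 650 ly, one star per ~4 ly of spacing in each direction.
import math

radius_ly = 650
spacing_ly = 4

volume = (4 / 3) * math.pi * radius_ly ** 3  # sphere volume in cubic ly
stars = volume / spacing_ly ** 3             # one star per 4x4x4 ly cube

print(f"~{stars:.2e} stars")
```

    Under that assumption it works out to on the order of ten million stars, nowhere near a trillion.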

    Sometimes it doubles down on wrong answers.

    GPT is amazing but it’s got a long way to go.