• Johnny Bravo@lemm.ee · +19/-1 · 28 days ago

        Not everybody owns or runs a website even now, despite that being the actual narrative in the years leading up to the .com bubble burst.

        1999: Your business needs a .com website. 2024: Your business needs an AI chatbot.

        It’s hard not to notice the parallels.

          • omarfw@lemmy.world · +8/-1 · 27 days ago

            The vast majority of businesses don’t own domains and host their own websites, which is what the .com bubble was about. They host pages on Etsy, Facebook, Squarespace, WordPress, etc.

            • daniskarma@lemmy.dbzer0.com · +3/-1 · 27 days ago

              Are you saying the .com bubble would never have happened if all the small businesses had just gone under some big site’s umbrella?

              Also: https://www.statista.com/chart/amp/19058/number-of-websites-online/

              The raw number of individual websites has exploded since the .com bubble era, so that argument doesn’t hold up very well. You’re somehow implying that there are fewer sites now than then, and that is simply untrue.

              It’s also not true that the .com bubble affected the creation of new websites or internet technology in the long term.

              • omarfw@lemmy.world · +1/-1 · 27 days ago

                The number of websites isn’t what eventually consolidated. Internet traffic did, and the value of domains went way down. That’s why it’s described as a bubble.

                • daniskarma@lemmy.dbzer0.com · +2 · edited · 27 days ago

                  There’s no way internet traffic hasn’t drastically increased since then. Also, domain prices weren’t what bubbled…

                  And don’t miss my main point: if there’s an “AI bubble”, it has nothing to do with AI disappearing, consolidating, or even stopping its growth when the bubble bursts. The same was true of the internet and the dotcom bubble.

                  Bubbles like this mean there’s a bunch of overvalued companies that will disappear once they can no longer keep getting investor money without any real return. But it means nothing for the core technology, which will continue to be developed.

  • n3m37h@sh.itjust.works · +31/-3 · 28 days ago

    These idiots need to stop with LLMs and create more specific machine learning algorithms, like language translation or program writing. Get them good at that specific thing, then use those algorithms to train new ones.

    The way they’re building it, they’ve poured a massive slab of concrete and are refusing to make relief cuts. Over time the stresses cause breaks, then shit gets under it and causes a mess. Build a good foundation, don’t jump to the final fucking stage.
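
    A rough sketch of what that “get good at one specific thing” idea looks like in practice. This is a minimal example assuming the Hugging Face transformers library and one of the public Helsinki-NLP translation checkpoints; it’s an illustration, not anyone’s actual pipeline:

        # Minimal sketch: a small, task-specific translation model used directly,
        # instead of a general-purpose LLM. Assumes the Hugging Face "transformers"
        # library; the checkpoint is a public Helsinki-NLP English->German model.
        from transformers import pipeline

        # Compact model, runs fine on CPU, does exactly one job.
        translator = pipeline("translation", model="Helsinki-NLP/opus-mt-en-de")

        result = translator("The report is due on Friday.")
        print(result[0]["translation_text"])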

    • omarfw@lemmy.world · +5 · 27 days ago

      The modern tech industry is all about building products that generate hype to impress the shareholders. Building good functional products to impress the consumer isn’t necessary if you can simply put on a flashy show for the geriatric billionaires.

    • dch82@lemmy.zip · +2 · 28 days ago

      I need an automated code optimiser using genetic algorithms, but the best thing I could find is Copilot.
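
      For what it’s worth, the genetic-algorithm core of such an optimiser isn’t exotic. Here’s a minimal, hypothetical sketch in Python, where the genome (which optimisation passes to enable) and the fitness function are stand-ins for a real compile-and-benchmark step:

          # Hypothetical sketch of the genetic-algorithm loop an automated code
          # optimiser could be built around. A real tool would replace fitness()
          # with "build the candidate, run the benchmark, time it"; here it's a
          # stand-in so the loop runs on its own.
          import random

          PASSES = ["inline", "unroll", "vectorize", "fold_constants", "dead_code_elim"]

          def fitness(genome):
              # Stand-in score: each enabled pass helps a bit, but "unroll" plus
              # "vectorize" is penalised, so there is something to learn.
              score = sum(genome)
              if genome[1] and genome[2]:
                  score -= 1.5
              return score

          def mutate(genome, rate=0.1):
              return [not g if random.random() < rate else g for g in genome]

          def crossover(a, b):
              cut = random.randrange(1, len(a))
              return a[:cut] + b[cut:]

          def evolve(pop_size=20, generations=30):
              population = [[random.random() < 0.5 for _ in PASSES] for _ in range(pop_size)]
              for _ in range(generations):
                  population.sort(key=fitness, reverse=True)
                  survivors = population[: pop_size // 2]   # selection
                  children = [mutate(crossover(random.choice(survivors), random.choice(survivors)))
                              for _ in range(pop_size - len(survivors))]
                  population = survivors + children
              return max(population, key=fitness)

          best = evolve()
          print([name for name, on in zip(PASSES, best) if on])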

  • itsgroundhogdayagain@lemmy.ml · +28 · 29 days ago

    Google doesn’t really let you say no, do they? Maybe there’s a setting somewhere to turn it off, but Gemini shows up at the top of most searches.

    • Voroxpete@sh.itjust.works · +56/-6 · 29 days ago

      Maybe because we’re all getting really tired of industry propaganda designed to sell us on the “inevitability” of genAI when anyone who’s paying even a little attention can see that the only thing inevitable about this current genAI fad is it crashing and burning.

      (Even when content like this comes from a place of sincere interest, it becomes functionally indistinguishable from the industry propaganda, because the primary goal of the propagandists is to keep genAI in the public conversation, thus convincing their investors that it’s still the hottest thing around and that they should keep shoveling money into it so that they don’t miss the boat).

      OpenAI, the company behind that giant bubble in the middle there, loses two dollars and thirty five cents for every dollar of revenue. Not profit. Revenue. Every interaction with ChatGPT costs them a ridiculous amount of money, and the percentage of users willing to actually pay for those interactions is unbelievably small. Their enterprise sales are even smaller. They are burning money at an absolutely staggering pace, and that’s with the deeply discounted rate they currently get on their compute costs.

      No one has proposed anything that will lower their backend costs to the point where this model is profitable, and even doubling prices (which is their current plan) will not make them profitable either. Literally not one person at OpenAI has put forth a concrete plan for the company to reach profitability. And that’s the biggest player in the game. If the most successful genAI company on the planet can’t figure out a way to actually make profit off this thing, it’s dead. Not just OpenAI; the whole idea.
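
      Back-of-the-envelope version of that, using the figures as stated above (a sketch of the arithmetic, not audited numbers):

          # Rough arithmetic behind the claim: losing $2.35 per $1.00 of revenue
          # means spending about $3.35 to earn each dollar.
          revenue = 1.00
          cost = revenue + 2.35

          print(cost / revenue)        # 3.35  -> dollars spent per dollar earned today
          # If prices double and (very optimistically) usage and costs stay flat:
          print(cost / (2 * revenue))  # 1.675 -> still well above break-even (1.0)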

      The numbers don’t lie; users, at best, find it moderately interesting and fun to play around with for a while. Barely anyone wants this, and absolutely nobody needs it. Not one single genAI product has created a meaningful use-case that would justify the staggering cost of building and running a transformer-based model. The entire industry is just a party trick that’s massively overstayed its welcome.

      • TheFriar@lemm.ee · +25/-2 · edited · 28 days ago

        I mean, fuck the profitability. What about its massive toll on our already crumbling climate? What about its hallucinations, which the massively powerful companies (currently, anyway) tell us to just not worry about? What about the promise of it “revolutionizing” industries (corporate speak for fucking workers in new and exciting ways)? What about the paradigm-solidifying nature of this tech that they keep lying to us about being a democratizing super tool?

        Everything about this shit is trouble because of the world it was built into. This type of tech (even though most of its claimed capabilities and uses are lies), in the hands it’s currently in, will only serve the rich and gobble up resources when we need to be scaling back our consumption. Instead, that endless investment is fueling our climate collapse.

        Fuck this LLM bullshit. It’s not for us. It will only hurt us in this timeline.

        • Uli@sopuli.xyz · +5/-4 · 28 days ago

          My feelings are mixed. Everything you are saying is true. LLMs, right now at least, are a huge waste of resources. It’s triggering us to move closer to fossil fuels when we should be moving away. Every time I step outside to a nice balmy day, I think, am I going to miss this in a few years’ time? In a few decades, am I going to envy my current self who can do dishes without worrying too much about how much water goes down the drain? Are the generations to come going to look at my occasional can of tuna with contempt and jealousy? Or will they even have the luxury of retrospection?

          I understand what we have to lose and how little we are doing about it. But I have also grown up being subjugated inside a capitalist hellscape. And I’ve spent the past few days having ChatGPT help me set up a CI/CD pipeline and start coding some games I’ve wanted to make for years. It’s allowed me to take a few hours of free time and make progress that I expected would have taken a week. It doesn’t have that effect on every task, but when learning new software, it really feels like having someone knowledgeable sitting next to me to answer my questions and point me in the right direction.

          GPT 3 was kind of a neat party trick - sounds kind of like a person, but a pretty dumb person. GPT 4 sounded smarter, but still couldn’t code for shit. The o1 model still makes mistakes, but it retains the thread of our conversation weeks after the fact and has put together some code that I would have struggled to do myself. Even if it loses more money than it makes right now, I can see the value in progressing development until we achieve AGI.

          People have expressed hopes that AGI will solve a lot of the world’s problems. That it will know just what to do about climate change. That it will crack codes in our DNA and give us endless healthy life. I am doubtful that these dreams will come to fruition. At least not in the way people think. It might have the intelligence to tell us things that we should have already known. Like that we can’t get much better yields in scrubbing carbon from the air than nature itself and we should have reforested far more land than we currently are. And that immortality will take huge amounts of resources and will come at the expense of the health of the masses. More gain for the rich. More suffering for the poor. Business as usual.

          But I think there is a window of time where we can be hopeful about what AI has to offer. And we may even be able to leverage it to solve a big piece of the income inequality puzzle.

          If we make a social media app that is not designed for profit but instead for the good of the people, there are a lot of problems such an app could solve.

          We could design it to seek out real (non-bot) contributors. It will always be an arms race trying to sort real humans from bots but that is no reason to give up. It is a reason to get as far ahead in the race as we possibly can. We should build an app that both recognizes when someone is very likely to be real and when they have also contributed to a cause.

          Imagine an application that tracks creative innovation, such as the creation of a funny video or a new meme format. When someone makes an idea and it is popular, the AI model would determine how much of a given experience is improved by their idea and give them profit residuals based on their contribution. And the more ideas that get built on top of the original idea, the more the newer contributors are rewarded for their contributions.

          Think about if people could design a farm from the ground up using a socialized app for collaboration. Someone could design a camera system to keep track of livestock wellbeing and to head off diseases. They could make AI-empowered systems to track livestock happiness and find ways of increasing quality of life. And creating more humane automated methods of turning crops and livestock into food ready to transport. Some people would focus on creating ideal distribution methods. Others would create stores or restaurants. Others might work on the people themselves, encouraging them to give new more climate friendly meal options a try. Investors would be paid their dues, but there would be no CEO or board of executives. The means of production would belong to the people.

          When people talk about the potential of AI, that’s what I envision. If I can make some passive income with my games and apps, that’s the next project I’ll be diverting my time towards. Because this is a narrow window we have to make this happen. The technology is here, but barriers from climate change and income inequality are only going up. We can lament the fact that AI is currently not profitable and hurting the planet, or we can put more of that energy to use by taking the tools humanity has made and using them to dismantle the systems which made this timeline so intolerable to begin with. The only way to take the current system apart is to make a new one that outcompetes our old ways of life in every measurable way.

          • Voroxpete@sh.itjust.works · +13 · 28 days ago

            This is a long post and I’m not even going to try to address all of it, but I want to call out one point in particular: this idea that if we somehow made a quantum leap from the current generation of models to AGI (there is, for the record, zero evidence of any path to that happening), it would magically hand us the solutions to anthropogenic climate change.

            That is absolute nonsense. We know all the solutions to climate change. Very smart people have spent decades telling us what those solutions are. The problem is that those solutions ultimately boil down to “Stop fucking up the planet for the sake of a few rich people getting richer.” It’s not actually a complicated problem, from a technical perspective. The complications are entirely social and political. Solving climate change requires us to change how our global culture operates, and we lack the will to do that.

            Do you really think that if we created an AGI, and it told us to end capitalism in order to save the planet, we’d suddenly drop all our objections and do it? Do you think that an AGI created by Google or Microsoft would even be capable of saying “Stop allowing your planet’s resources to be hoarded by a privileged few”?

            • Uli@sopuli.xyz · +3 · 28 days ago

              I choose to take your questions as rhetorical, as I think our points do align. I quite agree.

      • basmati@lemmus.org · +25/-8 · edited · 29 days ago

        It’s really not. It’s a fad like zip disks or ewaste recycling. Only it’s even more expensive while reducing productivity and quality of work everywhere it’s implemented, all for the vague hope it eventually might get better.

        • laranis@lemmy.zip · +8 · 28 days ago

          How dare you besmirch the good name of zip disks! There was a good 18-month period in the nineties when they filled a valid use case in the gap between floppy disks and the widespread adoption of WAN solutions for moving and storing data.

        • asmodee59@lemmy.world · +4/-4 · 28 days ago

          I would be really annoyed if it was just a fad, seeing as it saves me at least an hour of work a day.

        • dependencyinjection@discuss.tchncs.de · +3/-5 · 28 days ago

          Do you think AI and/or AGI is a possibility at all, given enough time?

          Because if the answer is yes, then don’t we need people working on it all the time to keep inching towards that? I’m not saying that the current implementations are anywhere close, but they do have their use cases. I’m a software developer, and my boss, the lead engineer (the smartest person I’ve ever met), has made some awesome tools that save our company of 7 people maybe 100 hours of work a month.

          People used to complain about the LHC and that’s made countless discoveries that help in other fields.

          • Voroxpete@sh.itjust.works · +4 · 28 days ago

            Powered flight was an important goal, but that wouldn’t have justified throwing all the world’s resources at making Da Vinci’s flying machine work. Some ideas are just dead ends.

            Transformer-based generative models do not have any demonstrable path to becoming AGI, and we’re already hitting a hard ceiling of diminishing returns on the very limited set of things that they actually can do. Developing better versions of these models requires exponentially larger amounts of data, at exponentially scaling compute costs (yes, exponentially… to the point where current estimates are that there literally isn’t enough training data in the world to get past another generation or two of development on these things).
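
            To put rough numbers on the data side of that: the roughly 20-training-tokens-per-parameter rule of thumb comes from the Chinchilla scaling work, while the “usable public text” figure in the sketch below is just an assumed round number to show the shape of the problem:

                # Rough illustration of the training-data ceiling. The 20 tokens-per-
                # parameter rule of thumb is from the Chinchilla scaling results; the
                # stock of usable public text is an assumed round number, not a
                # measured figure.
                TOKENS_PER_PARAM = 20
                USABLE_PUBLIC_TOKENS = 50e12   # assumption: a few tens of trillions

                for params in (70e9, 700e9, 7e12):   # successive ~10x generations
                    tokens_needed = TOKENS_PER_PARAM * params
                    share = tokens_needed / USABLE_PUBLIC_TOKENS
                    print(f"{params / 1e9:6.0f}B params -> {tokens_needed / 1e12:6.1f}T tokens "
                          f"({share:.0%} of the assumed supply)")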

            Whether or not AGI is possible, it has become extremely apparent that this approach is not going to be the one that gets us there. So what is the benefit of continuing to pile more and more resources into it?

          • basmati@lemmus.org · +5/-2 · 28 days ago

            LLMs and GANs in general are to AI and AGI what a hand-pumped well is to the ISS. Sure, they’re both technological marvels of their time, but if you want to float in microgravity, there is no possible adjustment you can make to the former to get it to actually behave like the latter.

        • TʜᴇʀᴀᴘʏGⒶʀʏ@lemmy.blahaj.zone · +3/-11 · 28 days ago

          This is such a weak take. It’s constantly getting more efficient, and it’s already extremely helpful: it’s been incorporated into countless applications. OpenAI might go away, but LLMs and genAI won’t. I run an open-source local LLM to automate most of my documentation workflow, and that’s not going away.
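
          For anyone curious what that kind of local setup can look like, here’s a minimal sketch assuming an Ollama-style server on its default port; the model name and the prompt are just placeholders, not a description of the actual workflow:

              # Minimal sketch of a local-LLM documentation helper. Assumes an
              # Ollama-style server listening on its default port (11434); the
              # model name and the prompt are placeholders.
              import requests

              def draft_changelog(diff_text):
                  resp = requests.post(
                      "http://localhost:11434/api/generate",
                      json={
                          "model": "llama3",   # any model already pulled locally
                          "prompt": "Write a short changelog entry for this diff:\n" + diff_text,
                          "stream": False,
                      },
                      timeout=120,
                  )
                  resp.raise_for_status()
                  return resp.json()["response"]

              print(draft_changelog("- retries = 3\n+ retries = 5"))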

          • Voroxpete@sh.itjust.works · +9 · 28 days ago

            “it’s been incorporated into countless applications”

            I think the phrasing you were looking for there was “hastily bolted onto.” Was the world actually that desperate for tools to make bad summaries of data, and sometimes write short form emails for us? Does that really justify the billions upon billions of dollars that are being thrown at this technology?

            • Zos_Kia@lemmynsfw.com · +2/-9 · 28 days ago

              This comment shows you have no idea of what is going on. Have fun in your little bubble, son.

      • omarfw@lemmy.world · +8/-1 · 27 days ago

        LLMs are not AI. They’re content-stealing blenders wearing name tags that say AI.

    • Halcyon@discuss.tchncs.de · +2/-1 · 27 days ago

      Microsoft Copilot integrates OpenAI technology such as GPT-4, but it is not exactly the same as ChatGPT. Copilot is designed to bring AI into specific productivity tools like Microsoft Word, Excel, PowerPoint, and other Microsoft 365 apps. It also seems to use different system prompts and fine-tuning, which leads to different answers and styles.
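
      The system-prompt part is easy to see with the plain API. Here’s a minimal sketch using the official openai Python client, with two invented system prompts just to show how the same model gets steered into different styles (these are not Microsoft’s or OpenAI’s real prompts):

          # Sketch: the same underlying model, steered by different system prompts.
          # Uses the official "openai" Python client; both system prompts are
          # invented examples, not the real Copilot or ChatGPT prompts.
          from openai import OpenAI

          client = OpenAI()  # reads OPENAI_API_KEY from the environment
          question = "Summarize this quarter's sales in one sentence."

          prompts = {
              "chat assistant": "You are a helpful, conversational assistant.",
              "office sidebar": "You assist inside a spreadsheet app. Be terse and concrete.",
          }

          for style, system_prompt in prompts.items():
              reply = client.chat.completions.create(
                  model="gpt-4o-mini",
                  messages=[
                      {"role": "system", "content": system_prompt},
                      {"role": "user", "content": question},
                  ],
              )
              print(style, "->", reply.choices[0].message.content)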