Retool, a development platform for business software, recently published the results of its State of AI survey. Over 1,500 people took part, all from the tech industry:...
Over half of all tech industry workers view AI as overrated
I asked ChatGPT to generate a list of 5 random words and then tell me the fourth word from the bottom. It kept telling me the third. I corrected it, and it gave me the right word. I asked again, and it made the same error. It does amazing things while failing comically at simple tasks, and there is a lot of procedural code added to plug the leaks. That doesn't mean it's overrated, but when something is hyped hard enough as being able to replace human expertise, any crack in the system becomes ammunition for dismissal. I see it more as a revolutionary technology going through evolutionary growing pains. I think it's actually underrated in its future potential, and worrisome in that its processing is essentially a black box that can't be understood at the same level as traditional code. You can't debug it or trace the exact procedure that needs patching. For contrast, the task itself is trivial in ordinary code; see the sketch below.
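Here is the "fourth word from the bottom" task in a few lines of Python. The word list is made up purely for illustration; the point is just that negative indexing makes the answer unambiguous:

    # The same task the model kept getting wrong, as plain code.
    # The word list here is invented for the example.
    words = ["lantern", "orbit", "velvet", "quarry", "ember"]

    fourth_from_bottom = words[-4]  # counting from the end: -1, -2, -3, -4
    third_from_bottom = words[-3]   # the answer the model kept giving instead

    print(fourth_from_bottom)  # "orbit"
    print(third_from_bottom)   # "velvet"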
I believe I read that this kind of issue comes from the token system. For example, if you ask it to find a word starting with a given letter, it can't really do that without a hard-coded workaround, because it doesn't know about single letters, only about tokens, which are chunks of the text.
It's definitely more complicated than that, but it doesn't mean AI is bad, only that this particular implementation can't do these kinds of tasks. You can see the tokenization directly in the sketch below.
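You can see what the model actually "sees" with OpenAI's open-source tiktoken tokenizer. This is only a sketch, assuming the tiktoken package is installed; the exact split depends on which encoding the model uses:

    # Minimal sketch (requires the tiktoken package) showing that text is split
    # into multi-character tokens, not individual letters.
    import tiktoken

    enc = tiktoken.get_encoding("cl100k_base")  # encoding used by several OpenAI chat models

    word = "strawberry"
    token_ids = enc.encode(word)
    pieces = [enc.decode_single_token_bytes(t).decode("utf-8") for t in token_ids]

    print(token_ids)  # a handful of integer IDs, not one per letter
    print(pieces)     # sub-word chunks, e.g. ['str', 'aw', 'berry'] (exact split may vary)

So questions like "which words start with the letter X" or "count the letters" are asked about units the model never directly represents, which is why tool calls or other workarounds get bolted on.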
It's definitely feasible, like what they tried to do with Wolfram Alpha, but do you have a source for this?