I use Claude for SQL and Power Query whenever I brain fart.
There’s more usefulness in reading its explanation than its code, though. It’s like bouncing ideas off someone, except you’re the one who can actually code them. Never bother copying its code unless it’s a really basic request where pasting is quicker than typing it yourself.
Bad quality in mass quantity is obviously much quicker for LLMs, and people who don’t understand the tech behind AI don’t realize that’s what’s actually going on, so it’s “magic”. A GPT is fundamentally quite simple and produces simple results full of potential issues; combine that with poor training quality and you get “gross”. There are minimal check iterations it can do, and how would it even do them when its knowledge base is more bullshit than it is quality?
Truth is, it will be years before AI can reliably code. Training for that requires building a large knowledge base of refined, working solutions covering many scenarios, with explanations, to train on. It’d take even longer for AI to self-learn these without significant input from the trainer.
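For what it’s worth, a single record in that kind of knowledge base might look something like the sketch below. The field names are made up for illustration; the point is a vetted, working solution paired with its scenario and an explanation.

```python
# Hypothetical shape of one training record: scenario + verified solution +
# explanation, serialized as one JSON line (JSONL). Field names are made up.
import json

record = {
    "scenario": "list customers with no orders",
    "solution": (
        "SELECT c.id, c.name\n"
        "FROM customers c\n"
        "LEFT JOIN orders o ON o.customer_id = c.id\n"
        "WHERE o.id IS NULL;"
    ),
    "explanation": (
        "LEFT JOIN keeps every customer; the WHERE filter keeps only rows "
        "where no order matched."
    ),
    "verified": True,  # i.e. a human actually ran and reviewed this
}
print(json.dumps(record))
```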
Right now you can prompt the same thing six times and hope it manages a valid solution in one. Or just code it yourself.
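If you want to automate that retry loop, a minimal sketch looks like the code below. generate_sql() is a hypothetical stand-in for whatever LLM client you use, and the check only catches queries that fail to parse against a toy schema, not answers that are simply wrong.

```python
# Sketch: ask for a solution up to six times, keep the first one that at
# least parses. generate_sql() is hypothetical; the validation is real sqlite3.
import sqlite3

def generate_sql(prompt: str) -> str:
    """Hypothetical: call your LLM of choice and return its SQL answer."""
    raise NotImplementedError

def parses_ok(query: str) -> bool:
    conn = sqlite3.connect(":memory:")
    conn.execute("CREATE TABLE customers (id INTEGER, name TEXT)")  # toy schema
    try:
        conn.execute("EXPLAIN " + query)  # plans the query without running it
        return True
    except sqlite3.Error:
        return False
    finally:
        conn.close()

def first_valid(prompt: str, attempts: int = 6) -> str | None:
    for _ in range(attempts):
        candidate = generate_sql(prompt)
        if parses_ok(candidate):
            return candidate
    return None  # ...or just code it yourself
```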
Same with writing and image generation. It can give you ideas or handle little details, like making sure all your commas are in the right place, the formatting is cohesive, and you used the right your/you’re, or filling in grass and sky textures in the background and putting a bit of polish on a finished image. But it definitely requires some editing to get a truly cohesive final result.
Tbh it would be easier if we could train our own small models on controlled codebases and documentation, instead of the random stuff some people feed in.
We can, but it’s a lot of effort and time. Good AI requires a lot of patience and specificity.
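To give a sense of what “a lot of effort” buys you: the mechanics of fine-tuning a small causal LM on your own curated files are only a few lines with Hugging Face’s transformers. This is a minimal sketch; the model name and data file are placeholders, and the real work is curating the data and iterating on it.

```python
# Minimal fine-tune of a small causal LM on a curated text/code corpus.
# "gpt2" and "curated_code.txt" are placeholders for your own choices.
from datasets import load_dataset
from transformers import (AutoModelForCausalLM, AutoTokenizer,
                          DataCollatorForLanguageModeling, Trainer,
                          TrainingArguments)

model_name = "gpt2"
tokenizer = AutoTokenizer.from_pretrained(model_name)
tokenizer.pad_token = tokenizer.eos_token  # gpt2 ships without a pad token
model = AutoModelForCausalLM.from_pretrained(model_name)

# "Controlled" data: your own vetted code and docs, as plain text.
dataset = load_dataset("text", data_files={"train": "curated_code.txt"})

def tokenize(batch):
    return tokenizer(batch["text"], truncation=True, max_length=512)

tokenized = dataset["train"].map(tokenize, batched=True, remove_columns=["text"])

trainer = Trainer(
    model=model,
    args=TrainingArguments(output_dir="out", num_train_epochs=1,
                           per_device_train_batch_size=2),
    train_dataset=tokenized,
    # mlm=False -> plain next-token (causal) objective, labels built from inputs
    data_collator=DataCollatorForLanguageModeling(tokenizer, mlm=False),
)
trainer.train()
```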
I’ve sort of accepted that the gimmick of LLMs is a bit of a plateau in training. The goal has always been to teach AI to learn, but right now the public has been exposed to what they perceive to be magic, and that’s “good enough”. Like, being wrong so often due to bad information, bad interpretation of information, and bias within information is acceptable now, apparently. So teaching to learn isn’t a high mainstream priority compared to throwing in mass information instead; working on infrastructure is far less exciting.
But here’s the cool thing about AI: it’s pretty fucking easy to learn. If you have the patience and creativity to put toward training, you can do what you want. Give it a crack! But always be working on refining it. I’m sure someone out there right now has been inspired enough to do what you’re talking about, and after a few years of tears and insane electricity bills, there’ll be a viable model.