Yeah, the quality on Lemmy is nowhere (…)
Go ahead and contribute things that you find interesting instead of wasting your time whining about what others might like.
So far, all you’re contributing is whiny shitposting. You can find plenty of that on Reddit too.
It’s from 2015, so it’s probably what you are doing anyway.
No, you are probably not using this at all. The problem with JSON is that these details are all handled in an implementation-defined way, and most implementations just fail or round silently.
Just give it a try: send a JSON document down the wire with, say, a huge integer, and see whether that triggers a parsing error. For starters, in .NET both Newtonsoft and System.Text.Json cap integers at 64 bits.
https://learn.microsoft.com/en-us/dotnet/api/system.text.json.jsonserializeroptions.maxdepth
Why restrict to 54-bit signed integers?
Because number is a double, and IEEE 754 specifies the mantissa of double-precision numbers as 53 bits plus a sign bit. Meaning, it’s the highest integer precision that a double-precision value can express.
I suppose that makes sense for maximum compatibility, but feels gross if we’re already identifying value types.
It’s not about compatibility. It’s because JSON only has a number type, which covers both floating-point values and integers, and number is typically implemented as a double-precision value. If you have to express integers with a double-precision type, once you go beyond 53 bits you start to lose precision, which goes completely against the notion of an integer.
Ok.
It’s very hard for “Safe C++” to exist when integer overflow is UB.
You could simply state you did not read the article and decided to comment out of ignorance.
If you spent one minute skimming through the article, you would have stumbled upon the section on undefined behavior. Instead, you opted to post ignorant drivel.
I wouldn’t call bad readability a loaded gun really.
Bad readability is a problem caused by the developer, not the language. Anyone can crank out unreadable symbol soup in any language, if that’s what they want or are able to deliver.
Blaming the programming language for the programmer’s incompetence is very telling, so telling that there’s even a saying: a bad workman always blames his tools.
Well, auto looks just like var in that regard.
It really isn’t, neither in C# nor in Java. They are just syntactic sugar to avoid redundant type specifications. I mean things like Foo foo = new Foo(). Who gets confused by that?
Why do you think IDEs are able to tell which type a variable is?
C# even takes it a step further and allows developers to omit the type from the constructor call with its target-typed new expressions. No one is whining about dynamic typing just because the language lets you instantiate an object with Foo foo = new();
I think I could have stated my opinion better. I think LLMs’ total value remains to be seen. They allow totally incompetent developers to occasionally pass as below-average developers.
This is a baseless assertion from your end, and a purely personal one.
My anecdotal evidence is that the best software engineers I know use these tools extensively to get rid of churn and drudge work, and they apply them anywhere and everywhere they can.
the first thing I saw is 150 lines of C# reimplementing functions available in the .NET standard lib.
Once again: https://en.wikipedia.org/wiki/Dunning–Kruger_effect
Bad developers existed before LLMs were spitting out code like they do today, and LLMs will undoubtedly lower the bar for bad developers to enter.
If LLMs allow bad programmers to deliver work with good enough quality to pass themselves off as good programmers, this means LLMs are fantastic value for money.
Also worth noting: programmers do learn by analysing the output of LLMs, just as the programmers of old learned by reading someone else’s code.
Claude is laughable hypersensitive and self-censoring to certain words independently of contexts (…)
That’s not a problem, nor Claude’s main problem.
Claude’s main problem is that it is frequently down, unreliable, and extremely buggy. Overall I think it might be better than ChatGPT and Copilot, but it’s simply so unstable it becomes unusable.
I agree. Those who make bold claims like “AI is making programmers worse” neither have any first-hand experience with AI tools nor any contact with how programmers actually use them in their day-to-day work.
Let’s think about this for a second: one feature of GitHub Copilot is the /explain command, which puts together a synthetic description of what a codebase does. Please, someone tell me how a programmer gets worse at their job by having a tool that helps them understand any codebase anywhere.
C++ continues to be the dumping ground of paradigms and language features. This proposal just aims to add even more to an overloaded language.
I think you could not be more wrong even if you tried, and you clearly did not even read the proposal you’re commenting on.
This proposal basically aims to create an entirely different programming language, one designed to be easy to integrate into existing codebases. The language just so happens to share some syntax with C++, but you definitely can’t compile it with a C++ compiler, because it introduces a series of backwards-incompatible changes.
It’s also absurd that you complain about introducing new features. Can you point to any language that isn’t completely dead and isn’t introducing new features with each release?
C++ programmers mocked languages for being dynamically typed then they introduced auto (…)
I’m sorry, but you are clearly confused. The auto keyword is not “dynamically typed”. It is called “auto” because it performs automatic type deduction: it is syntactic sugar that avoids having to explicitly spell out a type name in places where the compiler already knows it. Do you understand what this means?
Your comment sounds like trolling, frankly.
I feel like this will have zero protection against
Zero protection against what? Against the programmer telling the program to do something it shouldn’t? No programming language protects against that. If you resort to this sort of convoluted reasoning, the same hypothetical programmer can also just swallow all exceptions.
The main problem you’re creating for yourself is that you’ve been handed an open-ended problem but prefer not to look for solutions.
Have you ever worked at large old corporation?
I’m not sure you understand that far more than “large old corporations” use it. Everyone uses it, from large multinationals to small one-person shops, and even people like you and me in personal projects. This has been going on for years. I really don’t know what led you to bring up large old corporations, seriously.
The problem is that you need some support from the language to make it easy to deal with.
Nonsense.
if (result.isSuccess()) {
    do_something(result.value);
}
else {
    handle_error(result.error);
}
I mean, yeah, if your language does not support error values, do not use them.
Nonsense. If adopting one of the many libraries already available is not for you, it’s trivial to roll your own result type.
Even if that were somehow inexplicably not an option, even the laziest of developers can write a function that returns a std::tuple or a std::pair and use structured bindings.
That’s only true in crappy languages that have no concept of async workflows, monads, effects systems, etc.
You don’t even need to sit on your ass and wait for these data types to be added to standard libraries. There are countless libraries that support those, and even if that is somehow not an option it’s trivial to roll your own.
It’s used because the ones who use it have enough money to pay for any problems that may arise from it’s use, (…)
That’s laughable. Literally the whole world uses it. Are you telling me that everyone in the world just loves to waste money? Unbelievable.
Documentation in software projects, more often than not, is a huge waste of time and resources.
If you expect your docs to go too much into detail, they will quickly become obsolete and dissociated from the actual project. You will need to waste a lot of work keeping them in sync with the project, with little to no benefit at all.
If you expect your docs to stick to high-level descriptions and overviews, they quickly lose relevance and become useless once you’ve onboarded onto the project.
If you expect your docs to document use cases, you’re doing it wrong. That’s the job of automated test suites.
The hard truth is that the only people who think they benefit from documentation are junior devs just starting out their career. Their need for docs is a proxy for the challenges they face reading the source code and understanding how the technology is being used and how things work and are expected to work. Once they go through onboarding, documentation quickly vanishes from their concerns.
Nowadays software is self-documenting through a combination of three tools: the software projects themselves, version control systems, and ticketing systems. A PR shows you which code changes were involved in implementing a feature or fixing a bug, the commit log of a component tells you how that component can and does change, and tickets show you the motivation and context behind changes. Automated test suites track the conditions the software must meet and which the development team feels must be ensured for the software to work. The higher you go in the testing pyramid, the closer you get to documented use cases.
If you care about improving your team’s ability to document their work, you focus on ticketing, commit etiquette, automated tests, and writing clean code.