A software developer and Linux nerd, living in Germany. I’m usually a chill dude but my online persona doesn’t always reflect my true personality. Take what I say with a grain of salt, I usually try to be nice and give good advice, though.

I’m into Free Software, selfhosting, microcontrollers and electronics, freedom, privacy and the usual stuff. And a few select other random things, too.

  • 0 Posts
  • 170 Comments
Joined 2 months ago
Cake day: June 25th, 2024

  • But going a level deeper, the whole position only exists because a company wants to get some job done. Describing it is just a means to achieve that, not a thing in itself. I think we’re circling around what I consider the main point: what matters is whether the job gets done. If you do it with a description and it gets done, fine. If you manage to go without and it also gets done, also fine. If a human manages people and that gets the job done, or an AI does it and that also gets the job done… Delivering goods is how a company makes a profit. They don’t really care how it’s done, because that’s not what it’s about. It just needs to fulfill a few criteria: be profitable (have a good price/performance ratio) and be sustainable/reliable. It doesn’t matter to them if it’s an AI or a human, with a description or without.

    And I’ve already had jobs where there wasn’t any proper job description (just something on paper). That usually leads to severe issues if there’s ever a dispute. But nonetheless it worked out well for me and my employer, and I know people in similar situations, or who had their job descriptions updated because things changed. So I don’t exactly welcome going without one, since it can result in issues, and it shouldn’t be like that. But speaking from experience, a job can get done without a description if the circumstances are right. I also regularly see people organize their old stuff when retiring, read their job description from decades ago for fun, and find it’s not really what they’ve been doing for the last 20 years.

    I think our fundamental disagreement is: you say it’s currently usually done like this, and therefore that’s the only way to do it. That might be a conservative perspective, but logically it doesn’t follow. Just because something works some way doesn’t exclude other possibilities or ways to achieve the same thing.



  • I think the question then becomes: what’s more important, and to whom? Doing what’s in the job description, or actually getting the job done? These are two separate things, and I see arguments for both, depending on context.

    And you have a point with the algorithms. They follow the goals given to them by their masters, exactly to the outcome you’ve outlined. But the goal is configurable. You could just as well give it the goal of maximising team efficiency, or employer satisfaction, or company revenue. Practically anything you can obtain a metric for.
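    The point that the goal is just a swappable metric can be sketched in a few lines. This is purely illustrative (the option names and fields are made up), but it shows how the same selection machinery targets completely different "goals" depending on which scoring function you hand it:

```python
from typing import Callable

def pick_best(options: list[dict], metric: Callable[[dict], float]) -> dict:
    # The optimization target is just a parameter; swapping the metric
    # changes the algorithm's "goal" without touching the machinery.
    return max(options, key=metric)

# Hypothetical candidate actions with made-up scores.
candidates = [
    {"name": "plan_a", "efficiency": 0.7, "satisfaction": 0.9},
    {"name": "plan_b", "efficiency": 0.9, "satisfaction": 0.6},
]

# Same selector, two different goals:
best_for_efficiency = pick_best(candidates, lambda o: o["efficiency"])
best_for_satisfaction = pick_best(candidates, lambda o: o["satisfaction"])
```

    Each objective picks a different winner, which is exactly the point: the outcome is determined by whoever configures the metric.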


  • But why have an extra website? I just don’t see the point. That’s just extra work. And a good amount of people will never find out it’s there and get here in a different way. We could just do away with all of that.

    This way the Lemmy devs have to maintain a separate website and curate a list of instances (I have to agree with that point). Everyone has to remember to always post links to join-lemmy.org. And users who are just lurking, then decide to open the menu and click “sign up”, will miss that information entirely. All of that to save someone a single click?


  • I get what you’re saying. I think we’re getting a bit philosophical here with the empathy. My point was: sometimes, what matters is whether something gets the job done. And I see some reason to believe it might become capable, despite doing it differently and having shortcomings.

    I think it’s true that empathy gets the job done. But I think it’s a logical flaw to say that because empathy can do it, ONLY empathy can do it. It might very well not be like that. I think we don’t know yet. I’m not set on one side or the other; I just want to see some research done and get a definitive answer instead of speculating.

    And I see some reason to believe it’s more complicated than that. What I outlined earlier is that it can apply something loosely resembling a theory of mind and get some use out of that. But we can also log every interaction of someone, do sentiment analysis, and find out with great accuracy whether someone sitting at a computer is happy, angry, resigned or frustrated. AI can do that all day for each employee and outperform any human manager. On the flipside it can’t do other things. And we already have “algorithms”, for example on TikTok, YouTube etc., that can tell a lot about someone and predict things about them. That works quite well, as we all know. All of that makes me believe there’s some potential in the kind of thing we’re currently discussing.
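    To make the sentiment-analysis point concrete, here is a deliberately naive toy sketch. The keyword lists and log messages are made up, and real systems use trained models rather than word lists, but the principle of classifying mood from logged text at scale is the same:

```python
# Toy keyword-based sentiment scorer (illustrative only).
POSITIVE = {"great", "thanks", "happy", "works"}
NEGATIVE = {"broken", "stuck", "angry", "frustrating"}

def sentiment(message: str) -> int:
    # Positive score = happier wording, negative score = frustrated wording.
    words = [w.strip(".,!?") for w in message.lower().split()]
    return sum(w in POSITIVE for w in words) - sum(w in NEGATIVE for w in words)

log = [
    "thanks, the fix works great",
    "the build is broken and this is frustrating",
]
scores = [sentiment(m) for m in log]
```

    A machine can run something like this (or a far better model) over every message, all day, for every employee, which is the scale argument above.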



  • Because what the user wants is to sign up on Lemmy. We have to meet them at exactly that point: when they click on “Sign up”. And then our motive comes into play. We want to say to them »Hey, stop. In case you don’t know, the Fediverse works differently than other social media platforms you might be used to. You have to make a decision here.« And I think it’s super effective to do it at that point. Doing it later is too late, because then they’re already signed up somewhere random without being informed. And doing it earlier is inconvenient: they might not be motivated to find out yet, or it’s a hassle to gather that info yourself without even knowing it’s a thing.

    And why does the way the Fediverse works look bad? If it seems that way to you, you might be wrong here. So even more reason to prominently display these kinds of things.

    And it doesn’t have to be complicated. As with software design in general: don’t overwhelm the user with options. Just offer a description of what’s happening and why, and give some sane default options, no more than, let’s say, 5-10. It’s just one click more in the process, and it’s not too hard for the user to click once more if it’s for a good cause. And if done right, they could just click on something at random if in doubt.

    I’d just make it like my earlier proposal: add the page and force the user to choose. Either they answer a few questions and get a tailored instance, or they sign up at the current instance, or at one of the 4 other instances we promote for a better distribution. And then continue to ask for a username and password.


  • Yeah. I think in theory I disagree with you, but in practice I agree. I’ve seen people do exactly that. And almost every time, that behaviour comes from a place that also causes more issues. These people are better off with something else, I agree. And usually they’re annoying (to me) anyway, so I don’t consider them a loss for the platform.

    And I also regularly complain about the internet being less than it used to be. Back when it took some skill and effort to operate a computer and be on the internet, it was filled with intelligent people, people who were there for a reason. That meant they were motivated enough to go through all the hassle, and you could engage with them in a different way than you nowadays do with the average user. Now everyone is here, and lots of places and discussions feel different. It certainly affects things. So there is that.

    It’s the same question as when I ponder whether Linux should be used by more people. There are some other dynamics at play there. But in the end, growing to a broader audience (on the desktop) is certainly going to change it, and I’m not sure if in a good way.


  • Yeah. I mean the fundamental issue is: ChatGPT isn’t human. It just mimics things. That’s the way it generates text, audio and images, and it’s also the way it handles “empathy”: it mimics what it learned from human interactions during training.

    But in the end: does it really matter where it comes from and why? The goal of a venture is to produce or achieve something, and that isn’t measured by where it comes from, but by actual output. I don’t want to speculate too much, but despite not having real empathy, it could theoretically achieve the same thing by faking it well enough. And that has been shown in some narrow tasks already: we have customer satisfaction rates, and quite a few people saying it helps them with different things. We need to measure that and do more studies on the actual outcome of replacing something with AI. It could very well be that our perspective is wrong.

    And with that said: I tried roleplaying with AI. It seems to have some theory of mind. Not really, of course, but it gets what I’m hinting at: the desires and behaviour of characters, and so on. Lots of models are very agreeable; some can roleplay conflict. I think the current capabilities of these kinds of AI are enough to fake some things well enough to get somewhere and be actually useful. I’m not saying it has or lacks people skills; I think it’s somewhere on the spectrum between the two. I can’t really tell where, because I haven’t yet read any research considering this context.

    And of course there’s a big difference between everyday tasks and handling a situation that went completely haywire. We have to factor that in. But in reality there are ways to handle that: for example, AI and humans could split the tasks amongst themselves, and things could get escalated so that humans make the difficult decisions. But that could already mean 80% of the labor gets replaced.



  • I’m also not sure about that. Do they really need to be bothered with it? Can’t they just expect a social media platform to do whatever, without learning anything? I mean, they might just want to use something and not be bothered. And arguably they’ll have more freedom here than they’d have, for example, on Reddit, where this isn’t an issue. I’d say design the software to get out of their way, cater to them and have them here. Ultimately there’s a limit, of course: sometimes you need to know how things actually work to get anywhere. But I still refuse to accept your point. I think that should be kept to a minimum, and users should be eased into it at the point it becomes necessary to know. That can be done by good software design.


  • Well, there are some proposals to change this. I’d say it’s fixable by technology, to some degree. For example, instead of a sign-up page that directly signs someone up with the specific instance, we could have a more general Fediverse signup page. Maybe ask the new user a few questions about what they envision their instance to be: whether they’re more aligned with this set of rules or the other, whether they want “free speech” or a place with more moderation and fewer argumentative people. And then make some suggestions.

    Or instead of just signing them up with whatever instance they visited first, display a list with the current instance and 5 other random ones, shuffle them, and make them deliberately click on one.

    That’d all help. Of course it can’t be solved 100%. But we could at least make an effort to do something about it.
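    That second idea is simple enough to sketch. The instance list here is illustrative (only lemmy.world and lemmy.ml come up elsewhere in this thread; the rest are stand-ins), but it shows the mechanism: current instance plus a handful of random alternatives, shuffled so no slot is favored:

```python
import random

# Illustrative instance list; a real implementation would pull a
# curated or live list instead of hardcoding one.
KNOWN_INSTANCES = [
    "lemmy.world", "lemmy.ml", "sh.itjust.works",
    "lemm.ee", "programming.dev", "sopuli.xyz", "feddit.org",
]

def signup_choices(current: str, n_others: int = 5) -> list[str]:
    # Current instance plus n random others, shuffled so the user
    # has to deliberately pick one rather than accept a default.
    others = [i for i in KNOWN_INSTANCES if i != current]
    choices = [current] + random.sample(others, n_others)
    random.shuffle(choices)
    return choices

options = signup_choices("lemmy.world")
```

    The signup form would then render these six options as equal buttons, with no preselected default.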


  • I don’t think I agree. The big gap between total users and monthly active users tells me lots of people abandon their accounts. As long as that policy is in place, it’ll naturally shrink, because people leave and there aren’t any new users to replace them. The only question is at what rate that’s going to happen.

    And I also don’t agree that people who don’t understand the Fediverse shouldn’t be here… People should be here because it’s a nice place and they have a good time engaging here. The exact technology behind it shouldn’t matter too much, if at all.


  • On the flipside, that also scares people away. New users want to take part and immediately hit a barrier: the place where everyone mingles is closed off. They have to learn why that is and how the Fediverse is supposed to work, find some instance overview list and make a choice, and be angry for a short while until they understand the concept and realize it’s for the better… I think that’d be detrimental to the cause. I’d rather live with the issues that come with big instances than with a complicated onboarding process. But people have already complained about onboarding on the Fediverse in general. I think we need to solve that issue first, and then we can go ahead and also add some mechanism to steer people towards a more even distribution. But I don’t see anyone working on any of that for Lemmy. Until then, I’d say don’t do it.

    (And btw: I don’t want to see lemmy.world shrink, which would be the outcome. What I’d like to see is other nice instances come into existence and grow to a similar size, because they’re a nice place and people can identify with the community there. It’d foster good behaviour if things happened for good reasons, not just growing because you’re already the biggest. That doesn’t foster anything. It’s just like playing Osmos.)


  • First of all, we’ve had big instances die, like feddit.de and kbin.social. That always damages a big part of the network. If things were distributed more evenly, it’d be a smaller chunk of the Fediverse that vanishes in such a case.

    Then, being way bigger than the others gives someone disproportionately more power. If you don’t have any issue with that, you might as well join Reddit. And the first big Lemmy instance (lemmy.ml) arguably exploits that: they’ll act against you once you say something negative about communism, China, … and that’s not okay. Now we have lemmy.world as the biggest instance and it’s way better, but I’ve still read people complain about its moderation practices.

    If we have some dominating entities, they’ll disproportionately shape the tone, atmosphere and behaviour on the whole network. We might or might not want that.

    In the end, I think what actually happens should reflect the vision and the capabilities of the software. The Fediverse is supposed to be an interconnected network of instances. If the technology works as intended (and the vision behind the Fediverse is correct), I expect that to manifest in the way it actually grows. If it favors one or two large instances, we either have an issue with the technology/software, and it’s not able to truly achieve the vision because of some shortcomings, or the idea behind all of it might be more a theoretical concept than something viable in the real world.

    If we want to look at the end state, we have email as an example. That’s a super old federated standard, now also dominated by a few big players. It’s still possible to host your own email, but not really fun because of all the complications that come with it.

    [Edit: The dynamics could also be viewed as competition succeeding. If someone does their job well, they’ll naturally attract people?! And that’s not necessarily a bad thing. I’m not sure what to make of this. And I’m not sure if that’s the dynamics at play here in the first place.]


  • I’m not even sure about the “people skills” of ChatGPT. Maybe it’s good at that. It always says …you have to consider this side but also the other side… …this is like that, however it might… It can weasel itself out of situations (as it did in this video). It makes a big effort to keep a very friendly tone in all circumstances. I think OpenAI has put a lot of effort into giving ChatGPT something that resembles a portion of people skills.

    I’ve used those capabilities to rephrase emails that needed to tell some uncomfortable truths without scaring someone away. It did a halfway decent job, better than I could do. And we already see those people skills in use by the companies replacing their first-level support with AI. I read somewhere that it has a better customer satisfaction rate than a human-powered call center. It’s good at pacifying people, being nice to them and answering the most common 90% of questions over and over again.

    So I’m not sure what to make of this. I think my point still remains valid. AI (at least ChatGPT) is orders of magnitude better at people skills than at programming. I’m not sure what kind of counterexamples we have… Sure, it can’t come to your desk, look you in the eyes and see if you’re happy or need something. Because it doesn’t have any eyes. But at the same time that’s a thing I rarely see with average human managers in big offices, either…


  • Sure. There are lots of tedious tasks in a programmer’s life that don’t require a great amount of intelligence. I suppose writing comments, docstrings, unit tests, “glue” and boilerplate code that connects things, and probably several other things that escape my mind right now, are good tasks for an AI to assist a proper programmer with, making them more effective and getting things done faster.

    I just wouldn’t call that programming software. “Assisting with some narrow tasks” is more exact.

    Maybe I should try doing some backend stuff. Or give it an API definition and see what it does 😅 Maybe I was a bit blinded by ChatGPT having read Wikipedia and claiming it understands robotics concepts. But it really doesn’t seem to have any proper knowledge. The same probably applies to engineering and other neighboring fields that might need software.


  • I don’t think so. I’ve had success letting it write boilerplate code, and simple stuff I could have copied from Stack Overflow or a beginner’s programming book. But with every task from my real life it failed miserably. I’m not sure if I did anything wrong. It’s been half a year since I last tried, so maybe things have changed substantially in the last few months, but I don’t think so.

    The last thing I tried was some hobby microcontroller code for robotics calculations. ChatGPT didn’t really get what it was supposed to do. And instead of doing the maths, it would just invent some library functions, call them with some input values, and imagine the maths being miraculously done in the background by that nonexistent library.