• 0 Posts
  • 147 Comments
Joined 1 year ago
Cake day: June 8th, 2023

  • The events that are the least emphasized are those carried out by dominant powers, particularly when those powers are still around today writing the history and propaganda books. The way such events are handled can be subtle, and the most powerful way they avoid emphasis is to simply never frame the violence they did in terms of its widest impacts.

    For example, you mentioned that you are from India. The greatest violences done to India in the last few hundred years came from the Raj, that is, the British. And those greatest violences were not the actual acts of ships and soldiers, but things like these:

    • Dismantling of industry and craftsmanship in the subcontinent, converting production to the schemes of empire. Namely, producing crops like cotton to supply a British industrialized textile monopoly. This directly created poverty where before there was immense high-value production.

    • Famines caused by extreme poverty and the imposition of imbalanced production where farmers had to farm export crops and even export food crops when there were famine risks. The British also did this to Ireland and other colonies.

    • The less-talked-about but still incredible violence of poverty in general. Halting industrialization also meant no balanced infrastructure for the greater public (only what served export and British control), limited hospitals, poor education, more frequent death of one’s children, and so on.

    • The tweaking of caste to be more racist and classist (per English tastes), creating internal strife and misery.

    • Emphasizing other ethnic divides to use marginalization as a scapegoat for suffering and exploitation. The British created or escalated many of the ethnic rifts in the subcontinent, making events like the exodus from and neocolonialism in Kashmir, or the Partition, more likely and more dramatic.

    People that attempt to tally these things lay hundreds of millions of deaths at the feet of the British Raj. Yet such numbers are not well-known!

    In fact, the liberal economist Amartya Sen even applied this kind of logic to modern India and suggested that capitalism in India killed around 100 million people from 1947 to 1979. But how often do you hear Westerners talk about the mass death campaign of ongoing capitalism, citing millions every decade? Very rarely, because this is treated as “normal” and “natural” and not something imposed by the dominant system all around us.

    A similar example is looking at the published numbers about deaths in Gaza. What we hear is an outdated number of people confirmed dead, one that has barely changed for months. Is this because Israel stopped bombing children, hospitals, schools, refugee camps? No, it is because they explicitly targeted and disrupted the entire system responsible for doing these counts, the healthcare system. But even then, let us say the counts continued. Is this everyone killed by Israel’s genocide there? No! These numbers do not include the people dying from poor sanitation (Israel cut off water and electricity), of disease, of malnutrition, of any kind of malady that could have been treated by the medical system the Israelis destroyed. The number of civilians killed by deprivation is usually larger than the number directly killed in war, yet it is rarely reported as the death count of a given war, or in this case a genocidal occupation.

    So, the greatest missed events are those hidden from us without our knowledge: hidden by controlling the definitions of terms like “killed in war” or “died under colonialism” or “excess deaths”, and hidden by the thought patterns ingrained into us since we were young, taught to us by teachers and books and journalists and entertainment media. They weren’t all in on some grand conspiracy, either. At least, not most of them. They were miseducated in the same way. It is a reflection of the ruling class, filtering down in myriad ways until it dictates our very thoughts.



  • I’m sorry you’ve had to experience that transphobia on Lemmy. It is unfortunately common. And sometimes it even lurks as internalized transphobia in people who do not think of themselves as transphobic. For example, there are Lemmy instances that actually promote chasers.

    I believe all instances of transphobia should be called out, and obvious examples should result in bans. Sometimes it is good to let people have a chance to accept criticism and retract, but I lean towards banning more often. Comments that are transphobic should also be removed.





  • That’s twice in a row you’ve just made something up on my behalf rather than criticizing what I actually said. The first was claiming that I say tankie means communist (I obviously disagree), and now you say I am calling MLK a tankie, lmao.

    For your benefit, I will remind you that I said you would have called him one, as in back when he was alive and organizing. This is for the reasons I already stated and that you have not responded to in any way.


  • > Surely that is because we make it do that. We cripple it. Could we not unbound AI so that it genuinely weighed alternatives and made value choices?

    It’s not that we cripple it, it’s that the term “AI” has been used as a marketing term for generative models built on LLMs and similar technology. The mimicry is inherent to how these models function: they are all about patterns.

    A good example is “hallucinations” with LLMs, when the models give wrong answers that appear to be made up. Really, they are incapable of differentiating true from false; they are just producing sophisticated patterns from very large models. There is no real underlying conceptualization or notion of true answers, only answers that are often true when the training material was true, the model captured the patterns, and those patterns were highly weighted. The hot topic for the last year has been to augment these models with a more specific corpus, like a company database, for a given application so that the output is biased towards relevant things.

    This is also why these models are bad at basic math.
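
    To make this concrete, here is a toy sketch in Python of the autoregressive loop this kind of generation boils down to. The hardcoded probability table is my invented stand-in for a trained network, but the loop structure is the real point: nothing in it ever asks whether a statement is true, only whether a continuation is statistically likely.

    ```python
    import random

    # Toy stand-in for a trained model: a lookup table from context to a
    # distribution over next tokens. A real LLM computes this distribution
    # with a neural network instead of a table, but samples the same way.
    NEXT_TOKEN_PROBS = {
        "the capital of": {"france": 0.5, "mars": 0.3, "cheese": 0.2},
        "france": {"is": 1.0},
        "mars": {"is": 1.0},
        "cheese": {"is": 1.0},
        "is": {"paris.": 0.6, "olympus.": 0.2, "brie.": 0.2},
    }

    def generate(context, steps=3):
        out = []
        for _ in range(steps):
            probs = NEXT_TOKEN_PROBS.get(context, {})
            if not probs:
                break
            tokens, weights = zip(*probs.items())
            # The only criterion is likelihood under learned patterns;
            # there is no step that asks "is this claim actually true?"
            context = random.choices(tokens, weights=weights)[0]
            out.append(context)
        return " ".join(out)

    print("the capital of", generate("the capital of"))
    # Sometimes "france is paris." and sometimes, by the exact same
    # mechanism, "mars is brie." That is a "hallucination."
    ```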

    So the fundamental problem here is companies calling this AI as if reasoning is occurring. It is useful for marketing because they want to sell the idea that this can replace workers but it usually can’t. So you get funny situations like chatbots at airlines that offer money to people without there being any company policy to do so.

    > If AI is only a “parrot” as you say, then why should there be worries about extinction from AI? https://www.safe.ai/work/statement-on-ai-risk#open-letter

    There are a lot of very intelligent academics and technical experts who have completely unrealistic ideas of what counts as an actual real-world threat. For example, I know one who worked on military drones, the kind that drop bombs on kids, who was worried about right-wing grifters getting protested at a college campus as if it were the end of the world. Not his material contribution to military domination and instability, but whether a racist he clearly sympathized with would have to see some protest signs.

    That petition seems to be modeled on the ones against nuclear proliferation from the 80s. Those could be simple because nuclear war was obviously a substantial threat. It still is, but there is no propaganda fear campaign to keep the concern alive. For AI, it is in no way obvious what threat they are talking about.

    I have personal concepts of AI threats: ridiculously high energy requirements compared to their utility, while energy is still a major contributor to climate change; the potential to kill knowledge bases, like how it is making search engines garbage with a flood of nonsense websites; and the enclosure of creative works and production by a few monopoly “AI” companies. They are already suing others for IP infringement when their own models are all based on it! But I can’t tell if this petition is about any of that, it doesn’t explain. Maybe they’re thinking of a Terminator scenario, which is absurd.

    > It COULD help us. It WILL be smarter and faster than we are. We need to find ways to help it help us.

    Technology is both a reflection and determinant of social relations. As we can see with this round of “AI”, it is largely vaporware that has not helped much with productivity but is nevertheless very appealing to businesses that feel they need to get on the hype train or be left behind. What they really want is a smaller workforce so they can make more money, which they can then use to make more money, etc. For example, plenty of people use “AI” to generate questionably appealing graphics for their websites rather than paying an artist. So we can see that “AI” tech is a solution searching for a problem, that its actual use cases are about profit over real utility, and that this is not the fault of the technology, but of how we currently organize society: not for people, but for profit.

    So yes, of course, real AI could be very helpful! How nice would it be to let computers do the boring work and then enjoy the fruits of huge productivity increases? The real risk is not the technology; it is our social relations, who has power, and how technology is used. Is making the production of art a less viable career path an advancement? Is it helping people overall? What are the graphic designers displaced by what is basically an infinite pile of same-y stock images going to do now? They still have to have jobs to live. The fruits of “AI” removing much of their job market haven’t really been shared equally, nor has it meant an early retirement. This is because the fundamental economic system remains in place, and it cannot survive without forcing people to do jobs.




  • Okay so both of those ideas are incorrect.

    As I said, many are literally Markovian, and the main discriminator is beam search, which does not really matter for helping people understand my meaning, nor should it confuse anyone who understands this topic. I will repeat: there are examples that are literally Markovian. In your analogy, it would be me saying there are rectangular phones, but you step in to say, “but look, those ones are curved! You should call it a shape, not a rectangle.” I’m not really wrong, and your point is a nitpick that makes communication worse.

    In terms of stochastic processes, no, that is incredibly vague, just like calling a phone a “shape” would not be more descriptive or communicate better. So many things follow stochastic processes that are nothing like a Markov chain, whereas LLMs are like Markov chains, either literally being them or being a modified version that uses derived tree representations.
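
    For anyone following along who hasn’t implemented one, here is a minimal word-level Markov chain text generator in Python, to make the structural comparison concrete. The toy corpus is my own invented example. The defining property is visible in the loop: the next word depends only on a fixed window of preceding words.

    ```python
    import random
    from collections import defaultdict

    def train(words, order=2):
        # Transition table: (previous `order` words) -> observed next words.
        table = defaultdict(list)
        for i in range(len(words) - order):
            table[tuple(words[i:i + order])].append(words[i + order])
        return table

    def generate(table, seed, steps=20):
        state = tuple(seed)
        out = list(state)
        for _ in range(steps):
            options = table.get(state)
            if not options:
                break
            nxt = random.choice(options)  # sample purely from observed patterns
            out.append(nxt)
            # Markov property: only the last `order` words matter;
            # everything earlier is forgotten.
            state = state[1:] + (nxt,)
        return " ".join(out)

    corpus = "the cat sat on the mat and the cat ate the fish".split()
    print(generate(train(corpus), seed=("the", "cat")))
    ```

    An LLM swaps the lookup table for a neural network and a much larger window, but the generation structure, sampling the next token from a distribution conditioned on bounded context, is the same.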



  • Tankie was originally a Trotskyist term for the people who supported rolling tanks into Hungary in the 50s.

    Of course, the term “authoritarian bootlicker” is a funny one, as its purveyors have a habit of recycling and promulgating the propaganda pushes of the US State Department and opposition to that tendency is often what gets one labelled a tankie. Like when MLK spoke positively of Castro’s revolution or a Vietnam united under Ho Chi Minh rather than targeted for bombing by the US. Though I am being generous: so many people using the term are so politically illiterate that they apply it to basically anything vaguely left that they disagree with.

    I think you’d be calling him a tankie.






  • “AI” is a parlor trick. Very impressive at first, then you realize there isn’t much to it that is actually meaningful. It regurgitates language patterns, patterns in images, etc. It can make a great Markov chain. But if you want to create an “AI” that just mines research papers, it will be unable to do useful things like synthesize information or describe the state of a research field. It is incapable of critical or analytical approaches. It will only be able to answer simple questions with dubious accuracy and to summarize texts (also with dubious accuracy).

    Let’s say you want to understand research on sugar and obesity using only a corpus of peer-reviewed articles. You want to ask something like, “what is the relationship between sugar and obesity?”. What will LLMs do when you ask this question? Well, they will just make associations and construct reasonable-sounding sentences based on their set of research articles. They might even take an actual sentence from an article and reframe it a little, just like a high schooler trying to get away with plagiarism. But they won’t be able to actually explain the overall mechanisms, and they will fall flat on their face when trying to discern nonsense funded by food lobbies from critical research. LLMs do not think or criticize. If they do produce an answer that suggests controversy, it will be because they either recognized diversity in the papers or, more likely, their corpus contains review articles that criticize articles funded by the food industry. But they will be unable to actually criticize the poor work or provide a summary of the relationship between sugar and obesity based on any actual understanding that questions, for example, whether this is even a valid question to ask in the first place (bodies are not simple!). They can only copy and mimic.
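
    As a crude illustration of that “associate and reframe” behavior, here is a toy Python sketch. Plain word-overlap similarity is standing in for the much richer associations a real model learns, and the corpus sentences are invented placeholders, not real findings. The “answer” is just whichever sentence shares the most vocabulary with the question; nothing evaluates study quality, funding bias, or whether the question is well posed.

    ```python
    import math
    import re
    from collections import Counter

    # Invented placeholder corpus -- not real research findings.
    CORPUS = [
        "higher sugar intake was associated with increased obesity rates",
        "the study found no link between dietary fat and weight gain",
        "exercise frequency correlated with lower body mass index",
    ]

    # Drop common function words so matches aren't dominated by "the", "is".
    STOPWORDS = {"what", "is", "the", "and", "of", "a", "with", "was", "between"}

    def bag(text):
        # Bag-of-words: the text reduced to word counts, order discarded.
        return Counter(w for w in re.findall(r"[a-z]+", text.lower())
                       if w not in STOPWORDS)

    def cosine(a, b):
        dot = sum(a[w] * b[w] for w in set(a) & set(b))
        norm = math.sqrt(sum(v * v for v in a.values())) * \
               math.sqrt(sum(v * v for v in b.values()))
        return dot / norm if norm else 0.0

    def answer(question):
        # Return the corpus sentence most "associated" with the question.
        # This is matching, not comprehension: a well-funded nonsense paper
        # with the right vocabulary would win just as easily.
        q = bag(question)
        return max(CORPUS, key=lambda s: cosine(q, bag(s)))

    print(answer("what is the relationship between sugar and obesity?"))
    # -> picks the sugar/obesity sentence on word overlap alone
    ```

    A real LLM’s associations are vastly more sophisticated than word overlap, but the selection criterion remains pattern similarity rather than understanding.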