I write articles and interview people about the Fediverse and decentralized technologies. In my spare time, I play lots of video games. I also like to make pixel art, music, and games.

  • 45 Posts
  • 63 Comments
Joined 9 months ago
Cake day: November 30th, 2023

  • Good question. For now, we have a basic process for submitting icons, which requires adding details about the project repo, information on the icon, and copyright attribution for whoever created or owns the brand.

    We recently incorporated a JS library that lets us generate the font from the SVG files themselves and also builds the preview pages, which can be viewed at icons.wedistribute.org. With a bit of extra automation on Codeberg, we could basically update the preview page and the generated set every time a new icon gets merged into the main branch.

    Our goal is to get to a point where new releases automatically get created, with an archive of the assets attached as well. That way, once a milestone gets completed, a new release goes out with a minimal amount of work.
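
    To sketch what that release automation could look like in practice: Codeberg (via Forgejo) exposes a Gitea-style REST API for creating releases and attaching assets. The repo name, tag, token variable, and archive path below are placeholders, not the project’s actual setup.

```ts
// Hypothetical CI step: create a release on Codeberg and attach the generated
// icon archive. Repo, tag, and paths are placeholders for illustration.
import { readFile } from "node:fs/promises";

const API = "https://codeberg.org/api/v1";
const REPO = "example/fediverse-icons";          // hypothetical owner/repo
const TOKEN = process.env.CODEBERG_TOKEN ?? "";  // supplied as a CI secret

async function publishRelease(tag: string, archivePath: string): Promise<void> {
  // 1. Create a release for the completed milestone.
  const res = await fetch(`${API}/repos/${REPO}/releases`, {
    method: "POST",
    headers: { Authorization: `token ${TOKEN}`, "Content-Type": "application/json" },
    body: JSON.stringify({ tag_name: tag, name: tag, body: "Generated icon font and previews" }),
  });
  const release = await res.json();

  // 2. Attach the generated font/preview archive as a release asset.
  const form = new FormData();
  form.append("attachment", new Blob([await readFile(archivePath)]), "icons.zip");
  await fetch(`${API}/repos/${REPO}/releases/${release.id}/assets?name=icons.zip`, {
    method: "POST",
    headers: { Authorization: `token ${TOKEN}` },
    body: form,
  });
}

publishRelease("v0.2.0", "./dist/icons.zip").catch(console.error);
```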




  • Interesting insights!

    The original reason we started this was actually for our own development. Our site includes project icons and colors in dedicated tags, which link to dedicated topic hubs. As we started working on this, we realized that there wasn’t a really good resource, and that we would have to build something from scratch.

    Those symbols that you see are typically Unicode. Icon fonts are generally a CSS hack, in that a collection of SVGs has been converted into a font, and the Unicode strings can be thought of as “letters” for that font (there’s a small sketch of that mapping at the end of this comment). You’re absolutely right that there are accessibility limitations, but the tradeoff is that people get an easy way to use their favorite project icons to represent where they are on the Web.

    At the very least, you won’t have any uBO problems with our site, as the font is incorporated directly into the theme we’re using. We’ll likely explore making a WordPress plugin next, so people can add these to their profiles and menus and other places.
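
    To make the “letters” idea concrete, here’s a minimal illustration. The icon names, codepoints, and font-family are invented for the example; they aren’t the actual assignments used in our font.

```ts
// Each icon is just a character in Unicode's Private Use Area.
// These name -> codepoint assignments are made up for illustration.
const codepoints: Record<string, number> = {
  mastodon: 0xe001,
  lemmy: 0xe002,
  peertube: 0xe003,
};

// The icon font then becomes a CSS class whose ::before content is that "letter".
function iconCss(name: string): string {
  const cp = codepoints[name].toString(16);
  return `.icon-${name}::before { font-family: "FediIcons"; content: "\\${cp}"; }`;
}

console.log(iconCss("mastodon"));
// .icon-mastodon::before { font-family: "FediIcons"; content: "\e001"; }
```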






  • Not yet, but it doesn’t seem like it would be too hard to add support for that. I think one of the core ideas here is that you could take NeoDB and use it as a foundation for any review system you want to integrate. Hook it up to a service on the search side, support data import on the backend, and suddenly you have a way to not only create the reviews, but also populate the objects being reviewed with the necessary metadata.
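
    As a purely illustrative sketch (not NeoDB’s actual API, and all names are hypothetical), the flow might look like this: resolve the thing being reviewed through an external catalog lookup, import its metadata once, then hang the review off that imported object.

```ts
// Hypothetical types and helpers -- this only illustrates the shape of the idea.
interface CatalogItem { id: string; title: string; year?: number; }
interface Review { item: CatalogItem; rating: number; body: string; }

// Stand-in for the "search side": a real integration would query a books,
// films, or music API here and persist the returned metadata.
async function lookupItem(query: string): Promise<CatalogItem> {
  return { id: `imported:${query.toLowerCase()}`, title: query };
}

// Creating a review first resolves (and imports) the item it points at.
async function createReview(query: string, rating: number, body: string): Promise<Review> {
  const item = await lookupItem(query);
  return { item, rating, body };
}

createReview("Dune", 5, "Great read.").then(console.log);
```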



  • I agree with you in spirit, but some of this stuff needs to be spelled out for people interested in the space. Not every person who builds for ActivityPub is all that aware of the technical and cultural expectations. A lot of that knowledge exists in someone’s head somewhere, and the Fediverse does a pretty poor job of getting it out of those heads and into the open.

    Case in point: one of the stories linked in the piece discusses a guy who implemented ActivityPub on his own and got it to work, but didn’t know enough about the space. People thought his project was a crawler when it was actually a blogging platform, and the drama ignited to the point that someone remote-loaded CSAM onto the dude’s server using Webfinger. The dude was in Germany, and could have gone to prison simply for having it.

    We can’t hold two contradictory positions, where we invite people to build for this space, and then gaslight them over not knowing things that nobody told them about. More than ever, we need quality resources to help devs figure this stuff out early on. This article is one small step in service to that.










  • They kind of fucked up everything in approaching this by not talking to the community and collecting feedback, making dumb assumptions about how the integration was supposed to work, leaking private posts, running everything through their AI system, and neglecting to represent the remote content as having come from anywhere else.

    The other thing is that Maven’s whole concept is training an AI over and over again on the platform’s posts. Conceivably, a lot of Fediverse content ended up in the training data.







  • I can’t tell whether this is serious or sarcastic 😅

    As far as the “global square” part of the equation is concerned: yeah, you’re right! A firehose of public statuses requires indexing to work, as a basic foundational premise.

    However, there’s nothing preventing someone from standing up a PDS, opting out of the firehose / big graph service, and instead leaning on federation between individual PDSes. I’m not saying it would necessarily be a common use-case, but it’s definitely not impossible.


  • It’s a different approach with different ideas. It uses open protocols, focuses on data and account portability, and incorporates peer-to-peer concepts in its architecture. The vision behind Bluesky is to build a global square with these concepts.

    I definitely wish they would’ve extended ActivityPub and collaborated on the wider network, but I kind of understand wanting to start from scratch and not get involved with the cultural debt Mastodon brought to the network.



  • Misskey is a little bit odd, in the sense that there are constantly new forks in various stages of development. New forks emerge just as quickly as old ones die off.

    It may be that the frontend and backend both being written in one language helps make the system easier to hack on. I can’t say for sure. What’s weird is that some of these forks go in really odd directions, like rewriting the whole backend in a different programming language.

    The other thing is that, despite their proliferation, the effort is somewhat fragmented into all of these little projects. I’m not sure how viable any of these forks are in the long term.


  • Thank you for these insights!

    Yeah, aside from developer muscle, an effort like this requires deep knowledge of the existing system. Or, failing that, a commitment to learning it.

    It’s also not something that can be done as a side project, if it hopes to compete with the main project to the point of replacing it. Something like that requires an ungodly amount of effort and dedication. Someone would have to commit years of their life to solely working on that.



  • It’s an interesting and frustrating problem. I think there are three potential ways forward, but all of them are flawed:

    1. Quasi-centralization: a project like Mastodon or a vetted non-profit entity operates a high-concurrency server whose sole purpose is to cache link metadata and images. Servers initially pull preview data from that, instead of from the page directly.

    2. We find a way to do this in some zero-trust peer-to-peer way, where multiple servers compare their copies of the same data. Whatever doesn’t match ends up not being used.

    3. Servers cache link metadata and previews locally with a minimal number of requests; any boost or reshare only reflects a proxied local preview of that link. Instead of doing this on a per-view or per-user basis, it’s simply per-instance (sketched below).

    I honestly think the third option might be the least destructive, even if it’s not as efficient as it could be.
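
    Here’s a rough sketch of what that per-instance caching could look like. The TTL, the Open Graph parsing, and all names are assumptions for illustration, not an existing implementation.

```ts
// Per-instance link preview cache: the instance fetches a URL's metadata once,
// and every boost/reshare of that link reuses the cached copy.
interface LinkPreview { url: string; title: string; image?: string; fetchedAt: number; }

const cache = new Map<string, LinkPreview>();
const TTL_MS = 24 * 60 * 60 * 1000; // refresh a given link at most once a day

async function getPreview(url: string): Promise<LinkPreview> {
  const hit = cache.get(url);
  if (hit && Date.now() - hit.fetchedAt < TTL_MS) return hit; // no outbound request

  // One outbound request for the whole instance; naive Open Graph extraction.
  const html = await (await fetch(url)).text();
  const title = /<meta property="og:title" content="([^"]*)"/.exec(html)?.[1] ?? url;
  const image = /<meta property="og:image" content="([^"]*)"/.exec(html)?.[1];

  const preview: LinkPreview = { url, title, image, fetchedAt: Date.now() };
  cache.set(url, preview);
  return preview;
}
```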