• 29 Posts
  • 49 Comments
Joined 1 year ago
Cake day: July 1st, 2024

  • Two possible issues with that for my use case:

    • not in official Debian repos – not a showstopper, but it definitely counts against it for installation and maintenance burdens across migrations
    • apparently read-only access for users. This is fine in simple cases where I would just be sharing with others, but a complete solution enables users to share with others on the same server by uploading. Otherwise everyone with a file to share must run rejetto hfs.

    Nonetheless, I appreciate the suggestion. It could be handy in some situations.


  • What’s the point of spending a day compressing something that I only need to watch once?

    If I pop into the public library and start a ripping job with HandBrake, the library will close for the day before a single title is done. I could check out the media instead, but there are trade-offs:

    • no one else can access the disc while you have it out
    • some libraries charge a fee for media check-outs
    • privacy (I avoid Netflix & the like to prevent making a record in a DB of everything I do; checking out a movie still gets into a DB)
    • libraries tend to have limits on the number of media discs you can have out at a given moment
    • checking out a dozen DVDs will take a dozen days to transcode, which becomes a race condition with the due date
    • probably a notable cost in electricity, at least on my old hardware


  • Indeed, I really meant tools that have some cloud interaction but give us asynchronous autonomy from the cloud.

    Of course there are also scenarios that normally use the cloud but can be made fully offline, e.g. Argos Translate. If you use a web-based translator like Google Translate or Yandex Translate, you are not only exposed to the dependency of having a WAN whenever you need to translate, you also give up privacy. Argos Translate empowers you to translate text without the cloud dependency while also getting a sensible level of privacy. Or in the case of Google Maps vs. OSMand, you get the privacy of not sharing your location and the robustness of not being dependent on a functioning uplink.

    Both scenarios (fully offline apps and periodic syncing of msgs) are about power and control. If all your content sits on someone else’s server, you are disempowered: they can boot you at any moment, alter your content, or pull the plug on their server without warning (this has happened to me many times). They can limit your search capability too. A natural artifact of offline consumption is that you have your own copy of the data.

    if it aint broke dont fix it

    It’s broke from where I’m sitting. Many times, Mastodon and Lemmy servers have gone offline out of the blue and my msgs were mostly gone, apart from what got cached on other hosts, which is tedious and non-trivial to track down. That is technically broken security, in the form of data loss / loss of availability.


  • I have nothing for these use cases, off the top of my head:

    • Lemmy
    • kbin
    • Mastodon (well, I have Mastodon Archive by Kensenada but it’s only useful for backups and searching, not posting)
    • airline, train, and bus routes and fares – this is not just an app non-existence problem, since the websites are often bot-hostile. But the idea is that it fucking sucks to have to do the manual labor of using their shitty web GUI to search for schedules one parameter set at a time. E.g. I want to go from city A to B, possibly via city C, anytime in the next 6 or 8 weeks, and I want the cheapest fare. That likely requires 100+ separate searches. It should just be open data: we fetch a CSV or XML file, study the data offline, and run our own queries. For flights, Matrix ITA was a great thing (though purely online)… until Google bought it and ruined it.
    • Youtube videos – yt-dl and Invidious are a shitshow (Google’s fault). YT is designed so you have to be online, because of Google’s protectionism. I used to be able to pop into a library and grab ~100 YT videos via Invidious in the time that I could only view a few, and have days of content to absorb offline (and while the library is closed). Google sabotaged that option. But they got away with it because of a lousy culture of novice users willing to be enslaved to someone else’s shitty UIs. There should have been widespread outrage when Google pulled that shit… a backlash that would twist their arm into being less protectionist. But it’s easy to oppress a minority of people.
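    To make the fare-search point concrete, here is the kind of trivial offline query that open data would enable. This is a hypothetical sketch: the feed format, column names, and fares are all invented for illustration.

```python
import csv
import io

# Hypothetical open-data fare feed; in reality you'd download a CSV/XML
# dump from the operator. Column names here are assumptions.
SAMPLE_FEED = """origin,destination,date,fare_eur
A,B,2024-07-02,89.00
A,C,2024-07-03,40.00
C,B,2024-07-05,35.00
A,B,2024-07-10,55.00
"""

def cheapest(feed, origin, dest):
    """Return the cheapest direct fare row from origin to dest, or None."""
    rows = [r for r in csv.DictReader(io.StringIO(feed))
            if r["origin"] == origin and r["destination"] == dest]
    return min(rows, key=lambda r: float(r["fare_eur"]), default=None)

best = cheapest(SAMPLE_FEED, "A", "B")
print(best["date"], best["fare_eur"])  # → 2024-07-10 55.00
```

    One download, then any number of queries (cheapest, via-C routings, date ranges) run locally at zero cost – instead of 100+ round trips through a bot-hostile web GUI.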

  • You just wrote your response using an app that’s dysfunctional offline. You had to be online.

    Perhaps before your time, Usenet was the way to do forums. Gnus (an Emacs mode) was good for this. Gnus would fetch everything to my specification and store a local copy, serving as an offline newsreader. I could search my local archive of messages, and the search was not constrained to a specific tool (e.g. grep would work, but Gnus was better). I could configure it to grab all headers for new msgs in a particular newsgroup, or full payloads. Then, when disconnected, it was possible to read posts. I never tested replies because I had other complexities in play (Mixmaster), but it was likely possible to compose a reply and sync/upload it later when online. The UX was similar to how mailing lists work.

    None of that is possible with Lemmy. It’s theoretically possible given the API, but the tools don’t exist for that.

    Offline workflows were designed to accommodate WAN access interruptions, but an unforeseen benefit was control. Having your own copy naturally gives you a bit of control and censorship resilience.

    (update) It makes no sense that I have to be online to read something I previously wrote. I sometimes post a useful bit of information, but there are only so many notes I can keep organised. Then I later need to recall it (e.g. what was that legal statute I cited for situation X?). If I wrote it in a Lemmy post, I have to be online to find it again. The search tool might be too limited to search the way I need… and that assumes the host I wrote it on is even still online.


  • We need a reform and a robust way to interact digitally with the government, pay taxes and also send messages etc.

    I think that’s nearly impossible. Some people use the Tor network and govs tend to block it. For me, “robust” means being strong enough to handle Tor traffic, but I don’t think anti-Tor ignorance could ever be flushed out.

    Some people also use very OLD devices, like myself, and refuse to contribute e-waste to landfills. That crowd is also hard to cater for. For me, “robust” also means working with lynx browser, but I don’t think the chase-the-shiny incompetence of only supporting new devices could ever be flushed out.

    So I must ultimately disagree, because if the gov were to achieve what they believe is robust, it would be a recipe for ending the analog transactions that everyone excluded from their digital systems relies on. They should strive for robustness but never call it robust. They should recognise that digital tech always excludes some people, so analog systems are still needed.

    By the way: If your emails frequently lands in spam folders you should check your mail servers IP if it’s on some spam filter list.

    That is exactly the problem. My mail server runs on a residential IP – deliberately so. My comment stands: it’s naive to make a sender responsible for email landing in a spam folder when the sender has no control or even transparency over the operation of the recipient’s mail server.
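    For what it’s worth, the check the quoted advice describes amounts to a reversed-octet DNS lookup against a blocklist zone. A minimal sketch; the zone name is one well-known example (substitute whichever list you want to check, and note that some lists refuse queries via public resolvers):

```python
import socket

def dnsbl_name(ip, zone="zen.spamhaus.org"):
    """Build the query name DNS blocklists use: octets reversed,
    then the blocklist zone appended."""
    return ".".join(reversed(ip.split("."))) + "." + zone

def is_listed(ip, zone="zen.spamhaus.org"):
    """True if the IP resolves in the blocklist zone, i.e. is listed.
    NXDOMAIN (socket.gaierror) means not listed."""
    try:
        socket.gethostbyname(dnsbl_name(ip, zone))
        return True
    except socket.gaierror:
        return False
```

    Residential ranges are often listed wholesale on policy lists regardless of a sender’s behavior – which is the point: the sender can see the listing but has no control over whether a recipient’s server consults it.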