Blog Posts
-
Married, One Way or Another
October 20, 2019
In the end we went to Reno.
In the beginning, was a yellow envelope that sat on top of the fridge for longer than it should have, in a beautifully lit kitchen of a bad idea apartment in the wrong end of the wrong town.
In the middle, was a series of offices in a German city hall; a fax machine in a kiosk in a downtown shopping district; a postman hand-delivering a battered envelope with our address listed as both destination and return.
In the middle, was the limbo of not knowing under what category we’d be able to file ourselves. Are your papers in order? This is Germany; your papers are never in order. They are always in process.
I could say a thousand times and ways that we come as a package deal. I told a German lawyer a year ago that I’d marry Eric anywhere and anywhen I could; he said it was romantic. There wasn’t much romantic about being denied on a technicality; about last minute flights squeezed into school holidays; about seeing any part of the USA again, let alone of Nevada.
Some parts of Reno were beautiful. The trees by the river, the sky with clouds rolling through looking like they were trying to tempt the pot smokers into liking rain by their shape alone. Brisk mornings walking to Starbucks.
Some were distasteful. Lack of basic stores in town, insanely high prices, masses of street surveillance cameras. So many homeless people, the parks looked like an emergency shelter in the aftermath of … something. Seeing thin blue line hoodies worn openly. Though Germany did no better on that last, with a guy with ‘pepe’ tattooed on his knuckles on the train on our way home. Some things are everywhere.
Chi was able to join us for a couple days, which I was very glad for. It’s hard to have close friends and almost never see them. The actual wedding was as simple as possible; we didn’t really have inclination to complicate it. Our lives are complicated enough. Proofread, sign here, show up there, say the thing, wait an hour, pick up papers. The irony was not lost on me that we had to come all the way from Germany to enjoy some Nevada efficiency.
The next day, we took Kimlet to an interactive science museum down the road. She had a blast; I’m surprised it was possible to drag her out of there. Like so many buildings in Reno, it seemed halfway made and halfway designed, with no wifi (there was an eduroam node fading in and out, never quite usable) and no acoustic dampening. Painful echo seems to be an aesthetic there. Naturally, it’s one only the children don’t mind.
Andrew came by for coffee… I’m still averaging a decade from one time I see a friend to the next (two, in the case of my uncle); and that’s for the friends I ever see! We had a day to spare thanks to contingency planning. Not that this got any more of Reno seen, but really, by then I didn’t want to see any more of it. I just wanted to get out with our bank account nominally intact.
-
End of POTS & Chat Fragmentation
July 17, 2019
In reply to Doc Morbius https://mastodon.cloud/@dredmorbius/102357651020681668
I consider POTS dead in the water. Simple reason: It only works if you have an address, and expect to be present at it. The only people I can imagine having POTS at this point:
a) Can’t avoid it because it’s a Requirement for ring-in of guests in their apartment complex. They keep it unplugged unless they’re expecting someone.
b) Refuse to get cellphones, are over 65 years old, disdain phone calls anyway, and just have it on General Principles because they had it for the last 40 years so why stop now.
c) Got it bundled with their DSL and never use it.
My generation and beyond… seriously? If we even have a stable address, the last thing we want is a ringer plugged into the wall. It can’t even take texts, are you kidding? I hand someone a phone number, they’re gonna SMS it; what’s my wall gonna do with that?
So we’ve got mobiles. Often plural. I used to know folks with a pager and a phone, or a pager and two phones. Now it’s not so rare that they’ve got two or three phones. Probably none of them with what would be recognized as a ‘phone plan’, and they might not even have sim cards in all of them. And this is where it gets interesting, because telephony means connectivity, and do people really want to be connected? When? Why? //paging system operator//
Now I’m not talking about the folks who can afford a data plan, who have credit cards, who walk into a Sprint office and buy the latest Galaxy without flinching. I mean real-deal folks; the ones you’d sit next to on the bus. The increasing majority. In Europe, hell… I spend less now on an unlimited data plan than it would have taken to get basic service in the states, and I didn’t have to get an expensive credit check run to do it. But let’s stick to the states briefly.
One of those phones is the ‘work’ phone - only the boss has their number, and only because they have to be contactable if there’s a shift schedule change. There’s always a shift schedule change. It’s pre-paid per minute. They don’t answer it anyway; anyone but the boss is spam.
One is the ‘family’ phone; it’s for emergencies. Known numbers get answered, voicemail gets picked up, but if you didn’t have a really good reason for calling you’re gonna get cussed up and down because, again, pre-paid per minute and they’re not happy you’re spending their money. It’s a bit of an extravagance and may be combined with the work phone in some cases, or skipped entirely. Unless they’ve got kids in school; then it can’t be skipped.
Anyone with a side-hustle probably puts it on its own phone. That way it can’t run the emergency phone out of minutes. (Note I’m talking swap-meet out-of-date ten-dollar phones here… resold low end of low end.)
Most contact with family or friends, doesn’t happen by SMS or phone calls. That costs money. No, it happens by Facebook over the wifi when you hit a place with signal like the library or a friend’s house. Via the phone because they probably don’t even own a computer.
So the mobile phone network is already ending… as I said, not everyone bothers having (or can afford to have) an active sim card. But so far, you still need the boss or the kids to be able to get through. So pre-paid emergency lines are the big thing and for so long as that isn’t solved another way they will remain so. Which leaves the rich few basically funding the infrastructure, because there’s no mid-range option; but that works fine as they’re insanely overcharged. (See again how inexpensive all of this is in europe.)
What I halfway expect to see is a resurgence of cheap pagers. They’re more efficient for emergency contacts, and ‘get to a wifi’ can relatively easily replace ‘get to a POTS phone’. Even if wifi is a lot harder to get without a purchase there due to the lack of Freifunk points.
The messenger service proliferation… is a phenomenon I mostly see among the tech crowd. Outside tech, the states use Facebook messenger and europe uses Whatsapp. Inside tech, oh gosh… it’s bad. I’ve got Telegram and Wire and Signal and Briar and Hangouts and .. still I can’t keep track of everyone, but every app costs me another 60Mb of space, which adds up fast on a mid-range phone. I’ve basically got a whole messaging app per contact!
Any application that managed to collate these would be a major success. This is what catapulted Pidgin, in its prior incarnation as gaim, into the top spot on desktop. Any such application, by dint of being an amalgamated pile of partially compatible parsers, is guaranteed to be a security nightmare. (cf. libpurple, to this day) And the walled gardens would fight it without mercy, because they like their back doors into user data cleverly disguised as applications.
I expect the next-generation messenger to appear as a tack-on feature to a mesh or bump net file transfer app. But that’s my personal pie-in-the-sky tech dream, which doesn’t increase its practical likelihood. Rather, I hope one will be written, as I think we need it.
But again, I’m looking at a different set of users than usually show up on statistical measures organized around purchase records. Most of the people I’m thinking about are not very tech savvy but they can manage what they need to, or they know someone. They will go long and far out of their way to do something without spending a lot of money they don’t have. If they spend, they spend cash, second hand. Or trade. Many of their friends are in physical proximity, but their families often aren’t.
What information they desire to interchange, has the capacity I think to drive a new platform. But these are not people who put everything out there; they aren’t building OpenStreetMap (although the ones in the know love it), their knowledge is for their friends only. So what do they need? I wish I knew… someone should ask them.
Now; as for the topology problem. //take it slow; wait for them to ask you who you know// The solution I’m seeing in practice is basically already FoaF. On services without friends lists visible to friends, where FoaF cannot be determined, texts and calls do not get returned unless they contain some explanation of where you got that number from. If for some reason they do get picked up, the conversation had better contain where you got that number, or you don’t get talked to again.
On systems like Facebook, it’s more like .. if you’re FoaF you get a pass to be considered as a contact, if you’re friended you can message, otherwise you’re spam. The only places I don’t see this are Twitter and to a lesser degree Mastodon. And as soon as it goes culturally out the window, welcome to sea lion town.
This came up recently in the context of physical mail, and sending mail ‘care of’ one person in order to reach another. Which I have had to do, in the form of attaching cover letters requesting the forwarding of documents to persons to whom were attached cover letters… It gets ‘interesting’ quickly. This is rather closer to what I would like to see a version of. Ie., if you want to message someone and you’re outside their radius, you need to pass it through someone intermediate (closed or open envelope) who has the power of discretion as to whether they pass the message on or not (read-receipt naturally implemented).
My suspicion is that this would clean up the network a Lot. However, as per thread discussion, yes this recreates every single good-old-boys problem we ‘solved’ in brief by allowing everyone to speak openly to everyone. Though invite-only mailing lists never stopped being where the good stuff was at.
It’s basically a means of distributing the burden of secretary onto one’s friends network. I suspect some people would adamantly insist on retaining an infinite radius of direct contact. There would be issues with at what point in a network to register what someone’s radius was. There would be… a very large technical side to this. And it would create and worsen some types of social inequality and group isolation.
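As a concrete sketch of that radius rule, here is how the accept-or-hand-off decision could be modeled (the graph representation, the names, and the breadth-first distance measure are all my own illustrative choices, not any real protocol):

```python
from collections import deque

def hops(graph, a, b, limit=6):
    # Breadth-first search distance through the friend graph,
    # or None if b is unreachable within `limit` hops.
    seen, queue = {a}, deque([(a, 0)])
    while queue:
        node, d = queue.popleft()
        if node == b:
            return d
        if d == limit:
            continue
        for nxt in graph.get(node, ()):
            if nxt not in seen:
                seen.add(nxt)
                queue.append((nxt, d + 1))
    return None

def deliver(graph, sender, recipient, radius):
    # Within the recipient's chosen radius (1 = friends, 2 = FoaF, ...)
    # the message goes straight through; anything farther must be handed
    # to an intermediary who holds the power of discretion to pass it on.
    d = hops(graph, sender, recipient)
    if d is not None and d <= radius:
        return "deliver"
    return "needs-intermediary"
```

So at radius 1 only direct friends get through, at radius 2 FoaF contacts pass as well, and everyone else has to find someone in between willing to carry the envelope.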
However, if not that then what? And here is where I get to my main point. I propose, rather than a solution, a simulation. Pure artificial stupidity – simulated users on simulated nodes engaging in meta-behaviors of posting, liking, friending, dogpiling, sea lioning… everything we can imagine. This is the kind of thing computers are good at emulating, and by pulling the behaviors from a statistical distribution it doesn’t have to veer into AI or other such hype. Just a model. Then we can begin to really play with these designs and their potential effects.
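A minimal sketch of what that artificial stupidity could look like, with behaviors drawn from a fixed distribution (the behavior list, the weights, and the structure are invented purely for illustration):

```python
import random
from collections import Counter

# Meta-behaviors drawn from a fixed statistical distribution -- no AI hype,
# just a model. Both lists are illustrative assumptions.
BEHAVIORS = ["post", "like", "boost", "friend", "dogpile"]
WEIGHTS = [0.45, 0.30, 0.12, 0.10, 0.03]

class Agent:
    def __init__(self, name):
        self.name = name
        self.follows = set()
        self.posts = 0

def simulate(n_agents=50, ticks=1000, seed=1):
    rng = random.Random(seed)
    agents = [Agent(i) for i in range(n_agents)]
    tally = Counter()
    for _ in range(ticks):
        actor = rng.choice(agents)
        action = rng.choices(BEHAVIORS, WEIGHTS)[0]
        tally[action] += 1
        if action == "post":
            actor.posts += 1
        elif action == "friend":
            other = rng.choice(agents)
            if other is not actor:
                actor.follows.add(other.name)
        # likes, boosts, and dogpiles would feed back into connection
        # formation in a fuller model; here they are only tallied.
    return agents, tally

agents, tally = simulate()
```

Even this toy version lets you ask how quickly follow clusters form under different weightings; the real work would be fitting the behavior distributions to observed network data.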
-
Academic Daily Workflow (iPad)
November 19, 2018
I bought an iPad mini a few years back as an academic daily driver. I take my notes on it, use my books on it, and generally can get nearly anything I need to do done via it. This post is an overview of how I do that, what apps and features I use, and what those features actually do for me.
(I’m mainly writing it because I would love to swap the device out for a Remarkable tablet, but am concerned about loss of features potentially impeding my productivity more than eink would improve it.)
Hardware
Gen. 4 iPad Mini with generic screen cover and case, Adonit Jot pen, generic Bluetooth headphones.
Books
3rd party apps are essential for this; iBooks eats files. There appears to be no reasonable export mechanism from it.
Books typically enter my world by incomplete reference. “Look it up in Cox, it’s that one about primes of the form of something.” or “Notes of Milne, it’s on his website.” Or worse, “Wasn’t there something on that? I think the guy’s name started with B.” And the inevitable “No, the other one by Lang.”
I search via [default browser], and after suitable finding or purchase drop the pdf into [ReAddle pdf expert]. This is my main centralizer for pdf content. It used to have a really great feature…
You used to be able to go into the menus and hit something to share via it starting up an http server. Then you’d just connect to the iPad as a website by IP address and be able to shuffle files back and forth from any computer. This was a sufficient backup solution and allowed me to sync my books, assignments and downloaded notes to my laptop.
Unfortunately as of the latest version, that feature has been replaced by one which uses a 3rd party website and requires a full net connection. This not only means I don’t trust it, but also that it doesn’t work on the flaky nodes of the campus network where I may have high speeds on the local segment and nothing reliable to the outside net.
I presently have no good backup solution, and that’s a problem. I work around it by retaining urls and doing a secondary document download when I hit my laptop, but that’s bad flow and doesn’t synchronize annotations.
(Contrast: The main sync solution for my phone is that it can recognize USB sticks and I have one with a micro-USB plug on one end and a standard USB plug on the other; all I really need to improve there is making an automated rsync script on both ends, or one that handles when the phone is plugged directly to the computer, either way.)
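For what it’s worth, the automated script mentioned above could be as small as this (the paths are placeholders, and it assumes rsync is installed on both ends):

```python
import subprocess

def build_sync_cmd(src, dst):
    # -a preserves metadata and recurses; --delete mirrors removals
    # so the stick stays an exact copy rather than accumulating cruft.
    return ["rsync", "-a", "--delete", src, dst]

def sync(src, dst):
    # check=True makes a failed transfer raise, so whatever triggers this
    # (cron, a udev rule, a manual run) can tell the sync didn't complete.
    subprocess.run(build_sync_cmd(src, dst), check=True)

# e.g. sync("/mnt/usbstick/books/", "/home/me/books/") once the stick mounts
```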
ReAddle is tolerable for annotation. You can write on pdfs in an ample number of ways, although the zoom on inserting handwritten notes is in many cases not sufficient. It has the ability to delete, insert, reorder, or excerpt pages from pdfs, and I use this at length. I’ve had plenty of cases of books missing a page I had to find physically and insert, or wanting to excerpt chapters from larger volumes for separate annotation. The search capability is sufficient.
Languages
I pair ReAddle for pdfs with [Linguee] for languages. This is my go-to dictionary, far in preference to the iPad available defaults. Because they use such a large corpus, it’s far better at handling domain specific jargon such as mathematical language. Many of the technical books I read are in German at this point, and my German is awful, so this is essential.
Pair is not a flowery statement here. The workflow is that Linguee goes up in a split screen window beside ReAddle, then words to search are pulled up by highlighting them in the pdf and selecting ‘copy.’ This automatically drops them into Linguee without manual pasting.
Despite its small screen, I very often use the iPad Mini in split-screen mode, either 2:1 or 1:1 style. While it can do vertical split-screen, I never use that. What I would like very much would be split screen with the windows atop each other instead of beside, but no combination of rotation lock commands will produce that.
Note taking
My primary notes app is [Noteability]. I think I use nearly everything about this app at this point. It’s the first thing I pull up when I sit down to a lecture, and the last I close after revisions for the day.
In lectures where audio is allowed, the audio recording feature can be a wrist and sanity saver. You can on playback watch the pen strokes appear in realtime as the audio proceeds, as well as add new pen strokes which will be marked to the point in the audio stream at which they were produced. This can wipe away the entire stress for me of writing quickly to capture the whole of lectures.
However, this is also a productivity trap since total capture is a false goal. But that’s one for another time. Consider here at least the vital capture of quickly described hints to homework problems, and ability to repeat complicated explanations after going back to do further readings. And also being able to add personal recording of ideas and verbal readings to lecture notes after the fact (great if you think faster out loud). I wish it could export videos; that would make it cool for giving online lectures. But at least the audio is in a standard format.
While note material can be reorganized without breaking the replay timestamps, there are limits. If a document has been reorganized repeatedly, Noteability will often become sluggish on it and may crash, so I don’t tend to use that for more than brief actions. Using digital notes, however, is a great boon when faced with professors who keep revising equations they’ve already written. I would be swearing up and down if I were trying to keep up in pen.
Synchronization is moderately good. There is a WebDAV backup which can connect to NextCloud smoothly, and allows selection of a subset of folders for upload (as pdf or native format). I use that consistently to make pdf formats available and as a personal backup. However, on encountering even a single network error, it not only ceases to attempt uploads but deconfigures which folders are synced, with only a non-persistent notification that this has occurred. It’s very easy not to know you stopped syncing, or to re-enable sync too hastily and accidentally sync folders you didn’t want to publish.
There is an in-app feature for side-by-side view of two documents. Despite the small screen, I find this readable and use it frequently. For dense lectures, I will pull up the notes of the previous session on the left. For dull lectures, I will pull up a personal notebook or a shorthand practice sheet. In this way I have more context for what is happening (and can annotate the left panel as well if some change is made).
This has the unexpected advantage that I can see a bird’s-eye view of the page I’m writing in the main note, which allows me to better structure the document as I go. I typically use the zoomed writing window in handwriting mode, which provides an ample viewing panel at the bottom of the screen for current lines of text or equations.
In some instances, the most useful second reference document is a pdf. Then I pull ReAddle up in side-by-side. This is also a good configuration for making more extensive notes based on a pdf than would fit within its margins. While this halves the writing area, it’s still usable.
Although I don’t use OCR conversion to text in Noteability, it does work well enough to recognize my handwriting, and appears to be integrated with the in-app search. I’ve had search results come up that were apparently from handwriting body text, and quite useful.
I use multiple colors of annotation to distinguish between realtime notes and later revisions. Whether this actually improves comprehension, I don’t know. A prior version of this workflow involved exporting initial notes as pdf to render them immutable, then annotating revisions on top.
Scan and Print
I use [Genius Scan] as my scanner. It can produce large files by default, as can Noteability export, at least compared to the upload limits I’ve seen on sites for (for instance) disability note-taker file sharing. For this reason I pair it with [PDF Compressor] where size matters.
As both of these retain file copies, the workflow with them involves a cleanup phase after the pdf has reached its destinations. Failure to perform that cleanup can clog the device with large redundant files quickly.
Using the automatic mode for detecting pages in batch processing, I can copy a set of notes for an entire semester (~50-70 pages) in a few minutes easily. This makes it very low hassle for someone to provide me notes, compared to students whose workflow involves taking the sheets down to the xerox scanner and manually setting them one side at a time into the tray until complete.
Printing is a campus-specific matter; we have followme printers with a central queue, so I use [default browser] to drop files in the queue. This is a small improvement over my old tactic of emailing them all to myself before hitting a campus computer room.
Forms that can be submitted by email or online-fax get scanned in, sent to ReAddle as pdf, annotated there, exported as flat pdf, then emailed to destination.
Email etc.
I do use apps for gmail and protonmail, but not out of necessity, and would be comfortable ditching them. My campus mail is not configured outside of the web interface (I tried; it didn’t like me). So in effect all important emails go through [default browser]. Likewise all coursework discussions, obtaining homework files, etc.
Downsides
There are unsatisfactory aspects to this workflow, despite all its advantages.
— PDF annotation is still a marginal matter. In theory [Liquid Text] would fix this, but I couldn’t get used to its workflow the last time I tried it.
— Calendaring and time tracking aren’t really well integrated. Then again, do they belong here or just on my phone?
— I often end up keeping screen brightness turned massively down, almost to the point of unreadability, just to save battery. All downsides of lit rather than eink screens apply.
— It’s an iPad; I can’t just plug my USB drive into it to move files, or an SD card to expand storage. And not being able to easily move photos etc. out of the Apple ecosystem means the device is getting really full.
— The small screen does feel cramped when trying to handle complex content or multiple sources at once.
There are also some tasks where I still pull out a full blown computer, which may or may not be useful to integrate.
— LaTeX authorship, both articles and beamer presentations.
— Jupyter notebooks.
— Longform markdown writing. (Would need a keyboard?)
Or perhaps the line should move the other direction, and more of my network based tasks move onto the full computer.
But the largest weaknesses of my iPad workflow appear in those situations where I find myself returning to physical paper. Which happens to me consistently every semester as complexity increases, or battery life becomes a concern, or I just want the feel of pen in hand again. In some cases I simply fall back to my pre-Noteability workflow of writing on paper then scanning and reading on iPad. In others, I end up making fully paper volumes out of my initial digital notes.
I still write my homeworks on paper. I don’t always remember to scan them in… The few times I’ve done them digitally have been a boon, yet I don’t feel like I can think as clearly on the screen, so I still end up working on paper. Plus I need at minimum the assignment, a reference book, a draft page, and the final page; typically all at once.
I still dayplan on paper. That may be changing now that I can side-by-side within Noteability, but I waffle about whether to use calendars or handwritten planners a lot. And I just plain enjoy the feeling of my pen in hand. I was a pen snob before going digital, and still appreciate it a lot.
Should I Switch?
Would any of my remaining paper use transfer onto a Remarkable directly? Maaaaybe. Maybe not. The fact that it has no scanner camera is a big big problem. So is not being able to split the screen into two views.
It’s easy to see from this workflow that I have a complex relationship with the written page. It needs to become even more intense as my scholarship increases in depth. My daily driver isn’t just an ebook reader, or a note taker, it’s also a print queue manager, scanner, emailer, downloader… A creator, modifier and mover of pages in and out of the digital domain, as well as across the network, in both directions. (This is in part driven not just by academic use, but also by fulfilling paperwork on tight deadlines with limited resources.)
The Remarkable tablet wouldn’t do this for me.
That isn’t a condemnation; rather, an acknowledgment that its role would differ. My entire workflow would change. The boundaries between devices and ease or difficulty of various uses would alter entirely. Saying this is a bad thing would be like saying eink isn’t good for video - of course not; that’s not its purpose.
I also need to consider that as my career progresses, my use case alters. I won’t be spending so many hours of my week in lectures anymore after this semester. Rather, thesis work will become the focus, and I have little experience on which to base my expectations of that.
-
Oops, we made a net
September 12, 2018
The social network. A model we take for granted… It began as a simple set of links, human to human, ‘x knows y’. At best somewhat directed. In that form it’s not quite innocent—you can tell a lot about people from who they know. But it’s relatively powerless.
Add communication. X follows Y, Y posts and X sees it. Now we have a simple forward-propagation network. It may have loops, but content moves slowly and mutates heavily. Everyone has a nice day, aside from cussing at the bad MySpace page formats. Except for a few refresh obsessives, most of us go on our day unobstructed.
Add comment threads / replies. This goes two ways. Either they’re second class objects, leading to forum or livejournal style posts-with-discussion, or they’re first class objects leading to the twitter or mastodon situation. But the malleability of the generated comments gives these a flexible effect.
Add likes / favorites. Now the trouble begins. They’re fast and cheap and they do one thing the simple forward-propagation network didn’t. They quickly and succinctly back-propagate. You know whether you got likes. You soon realize why.
Add boosts / shares. And the nodes of your back-propagating perceptron now have a convenient means of both acting as filters and getting around the single greatest weakness of the typical model, namely exponential loss of back-propagation effectiveness with increase in node depth, since likes / favorites hit straight to the source and boosts / shares interact with the connection formation mechanisms.
This is a fully functional neural network. With entire human minds for nodes. Operating at the speed of your keyboard.
Add hashtags. Now the nodes connect by a layer orthogonal to their pre-existing mechanisms, allowing the rapid re-formation of connective clusters. As news events ebb and flow, the remains of these constructs will come to dominate the connective pattern. This selects for collective excitation patterns emanating from a few key players. You now have a sensory apparatus. Users sort themselves by chosen distance from various sensory clusters, as outward propagators act as filter nodes.
Add federation. By means of local and fedi timelines, a concept of locality is reclaimed which may not correspond to the sensory cluster patterns. However, over time it increasingly seems to. These provide broad arrays of lower activation input (one doesn’t always look at fedi unless bored) in addition to the high activation personal timeline. Something akin to a regional mood is achieved.
Add celebrities. Which here I define as anyone possessing a hefty skew of followers:follows, or who is unduly prone to being boosted / shared. With functioning sensory clusters, a single node is no longer recognized as a good source of information to have such a broad effect. Allergic responses follow, attempting to dampen that node, until rejection becomes inevitable. No aspect of this system beyond the nodal scale retains awareness that its effects are upon humans.
Ebbs and flows give way to flash mobs and consternation as we wonder what went wrong. Nothing went wrong. Something went inhuman / ahuman. We did. The neurons don’t know what the brain is pondering, and we’re the neurons. Shit posting and flash mobs, meme wars and refresh addiction, are all just part of being part of something. A pity we don’t quite know what, because we’re going to have to square with it.
Maybe this is something humans have always had the drive to do, if not the ability at this scale. But it’s more than simply communication. It’s a computationally complete AI built on top of a network of natural intelligences. And we have no idea what it’s up to.
-
The next time you see the present,
April 4, 2018
consider believing your eyes. Since last I wrote, the walled gardens took advantage, becoming something far more insidious. It’s common now to decry their influence. Less common to leave them. I didn’t leave them either; more fool me. Yet so long as the social connections we need are with people who are there, it’s not feasible to just up sticks. But a component of the problem is (presently) dependent on the interface, and that can be altered.
.
I’ll give you another word for the automated collection, ad placement, algorithmic timelines, and autosuggest lists: Man in the middle attack. {eavesdropping, injection, replay and drop, and injection again respectively}; just remove all mention of encryption and substitute “Mallory” with “server”. Is there any longer any doubt that it’s an attack? The worst part is on what, but that will become clear.
I went into my facebook account and loaded each followed page individually. About 1/5 of the content was stuff that had never appeared in my feed. RSS export is no longer supported so I can’t trivially move off timeline there. I may move to email notification, if that’s still available. They probably don’t have sufficient cause to bother making that not chronological. I did the same verification on twitter. About 4/5 of the content had not appeared on my timeline. (Taking into account posting times; it’s not like I just ‘missed’ old content because I was offline and it’d already scrolled down.) Some accounts had made upwards of 20 posts without one of them appearing. (Anyone bored enough to make a plugin for performing this analysis properly instead of guesstimating?) Instead I was treated to ‘likes’, sponsored posts, and replays of posts I already read.
Only posts from locked accounts seemed to appear consistently. An RSS pull of my twitter follows provided far more interesting reading than the curated (mitm attacked) so-called timeline. Maybe I picked a bad moment, or I have interesting choice in follows. I don’t know or care; the curated (mitm’d) experience isn’t what I signed up for. The desolate-feeling landscape of replays and personal tragedies I’d become accustomed to on twitter was not at all representative of what my follows were actually posting. Everything about how I felt there existed only in that distorted lens. What if you bought a pair of rose colored glasses, but over time they turned harsh and muddy until they showed you a world of grime devoid of joy? Wouldn’t you take them off?
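The guesstimate above amounts to simple set bookkeeping; a plugin doing it properly would just need the two lists of post IDs (the function and names here are mine, for illustration):

```python
def shown_fraction(source_posts, feed_posts):
    """Fraction of posts from followed accounts that actually surfaced.

    source_posts: post IDs gathered by loading each followed account directly.
    feed_posts:   post IDs observed in the curated timeline over the same window.
    """
    source = set(source_posts)
    if not source:
        return 0.0
    return len(source & set(feed_posts)) / len(source)

# Toy numbers echoing the rough estimates above: if only one in five
# followed posts surfaced, the fraction is 0.2.
print(shown_fraction(["a", "b", "c", "d", "e"], ["a", "x", "y"]))  # 0.2
```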
Interpretation of the mind renders worlds of text as affective as the virtual realities of our fiction. You don’t have to cover your eyes to alter your perception.
.
Collated apart from their native interfaces, the feeds are initially overwhelming. I’ve migrated between services so many times, generally only using one heavily at a time, that each has follows up to my magic number—the quantity necessary to create an ideal reward profile for hitting refresh. Not so many that I get anxious trying to keep up; not so few that I get frustrated by having nothing new to read.
Are there somehow exactly that many interesting people per service? Hardly. There are thousands of interesting people, and thousands of boring ones; I follow some few of each. Excepting the refresh reward cycle, this number would not be at all optimal. Refresh is not a desirable behavior. So one of two options must be practiced: Either I need the computer to help me obtain some information from all the data sources I desire by pre-processing them, or I have to pare the list down. A lot.
Even if I desire the former in some cases, I must begin with the latter. After all, the lists as they stand were not curated for impact of data they provided, only for a pleasant refresh cycle. This is not easy. Oh, on the technical level it is, but humanly it’s another matter entirely. Stop looking there? Stop involving myself in the melodrama of those 20 or 80 or 180 people I don’t even know? But… But nothing. Watching their show play out isn’t achieving anything.
It’s more productive to rant in a notebook than to hit refresh. But refresh seeps back in. The habit of finding new interesting follows recurs, dogging me even into mastodon now. No, no, enough. There is always another interesting follow… No, enough. No refresh. No new follows (at least for now). A time limit to reading—if it’s not complete within time, unfollow someone. This is going to hurt but it’s a lot more sustainable than logging off while leaving the temptation lurking ready to swallow me whole the next time I have cause to log in.
When I’m ready… RSS follows, topic follows, carefully ferreted out blogs away from the main flow, actual books and journals. Not yet. I don’t want those to fill my time again, when there’s a real world out here in need of those hours.
.
I’d considered giving up the net entirely, other than for necessary tasks. We’ll see how far I need to go. And what I deem necessary.
-
Simon Invariant Presentations
June 1, 2017
-
Celiac Food Roulette, Explained
May 16, 2016
Feeding me is a little like feeding a very confused slightly-alien who didn’t remember to bring important things like a tricorder or a sonic screwdriver, so can’t actually tell whether something is edible or not. I therefore take a lot of risks living on this planet, but haven’t rightly stopped to calculate (even in a back of the console sort of way) what size those risks are.
Bounding the desired risk
The effects of mistaking something for food vary in strength and duration, and depend on whether I’ve made other mistakes recently. But for a back-of-the-envelope perspective, let’s presume a single mistake causes approximately a week of significant illness. Then, if I’m willing to be ill half the time, I should set the average risk of meals such that about one every two weeks goes wrong. Given about 5 meals or snacks in a day, that’s 1 in 70.
But no one wants to be sick half the time. Suppose I accept an average meal risk of 1 in 1000. That equates to an average of about 12 days per year ill, in addition to the sundry colds, flus and seasonal allergies that already take their toll. Little enough that the effects which take months rather than weeks to dissipate stay under control, and it doesn’t sound impossible.
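The arithmetic above can be sketched directly. This is a minimal back-of-the-envelope model, assuming (as above) 5 meals a day and a week of illness per mistake; the function name is mine, not anything standard:

```python
def sick_days_per_year(meal_risk, meals_per_day=5, days_ill_per_mistake=7):
    """Expected days ill per year for a given average per-meal risk."""
    mistakes_per_year = meal_risk * meals_per_day * 365
    return mistakes_per_year * days_ill_per_mistake

print(sick_days_per_year(1 / 70))    # ~182.5 days: ill half the year
print(sick_days_per_year(1 / 1000))  # ~12.8 days per year
```

A 1-in-70 average meal risk works out to exactly half the year ill, which is where the "ill half the time" figure comes from.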
So given that range, where a 1 in 70 average meal risk is dismal but a 1 in 1000 may be acceptable, what is the actual risk of food? Can such an acceptable bound be achieved, and if so then what is required to do it?
Bounding the achievable risk
Some foods will instantly create an effective 100% risk for the meal in which they appear. Anything labeled “contains wheat,” or with ingredients like “wheat flour” or “barley malt” or “rye” anything. These are the problems almost anyone with a passing familiarity with celiac will recognize. In theory I could blow an entire risk allowance on one bagel, but in practice the effects of a large dose are so phenomenally worse than the cross-contamination and minor-ingredient effects I based the estimated one-week duration on that it would be a horrible mistake.
Next up are those labeled “shared equipment” with wheat processing. I’ve seen cases where this was ok, but only extremely rarely. I’d call it a 99% risk. Then there are oats. Oats are a real problem. They don’t have to be labeled as containing wheat, but they are almost always cross-contaminated unless strictly tested. By “almost always,” I mean I loosely assign a 99% risk to any oat product not strictly and persistently tested for gluten contamination.
Then comes “shared facility,” which is far more variable. It can mean anything from running right next to wheat products with flour in the air to an entirely different building on the site with good controls between. At a rough guess I’ll call these a 60% risk, although the variance is huge. Once in a while you’ll see “shared facility but [description of separation procedures]”; I’m not really counting those as shared. Risk tends to be consistent within a product and brand, though.
Restaurants with GF menus fall somewhere between shared facility and shared equipment, with tremendous variation but a tendency toward the high-risk end. Because so many products are involved, as opposed to a steady few in manufacturing, risks are less consistent between meals and higher overall. I would put the average risk, without further prior knowledge, around 90%. Non-dedicated home kitchens tend to be less well separated or cleaned, verging back up toward 99% risk.
Then come the subtle but significant risks…
Soy sauce: Well labeled on prepackaged food but a major source of oblivious failures in restaurants. Soy sauce contains wheat; tamari does not.
Malt flavoring: Can be made from corn, but is usually made from barley. I’ve seen arguments about this one; apparently there’s a move away from barley, so it may have dropped to around an 80% risk.
Misreading the label: There are a lot of little details to go through, and sometimes one just gets sloppy or tired. Or eye strain; those fonts are tiny. I’d say I pick up something that looks ‘fine’ at first read, then later discover wheat or shared facility listed on it, about 5% of the time. For risk bounding purposes, I’ll run with that number, even though sometimes I catch it before consumption (but not often).
Natural flavors: Can contain malt flavoring. This is very rare, but does happen on occasion. I have no good data as to how often so I’m ballparking it at 1% risk.
Undeclared allergen, cross-contamination, etc.: Problems that can cause recalls… eventually. In the mean while, they cause pain. Not frequent but existent issue, call it 1% risk. More like 10% for oats.
Stuff I forgot: Maybe a 3% risk. I forget things a lot, and by definition I don’t remember what they are. But there are sneaky ingredients. Like starches. And… I forget.
For simplicity, I’m going to lump all factors below malt flavoring together as about a 5% risk attached to any product not strictly labeled as gluten free. Due to FDA regs and good testing, I’m going to loosely and laughably claim labeled products are safe. Things can and do still go wrong, but this is a back of the envelope calculation.
All of these risks are without prior knowledge. With prepackaged products, consistency in production means prior experience is an excellent guide, so a personal database of known-safe-enough products can be built up over time. This is absolutely the key to cheap GF eating, since well tested and labeled products tend to be expensive, although that’s been getting better. Things can and do still go wrong, of course, so some bound would have to be put on that if this weren’t the back of an envelope.
Intersection of bounds
Suppose all untried foods are chosen as well as possible from foods not strictly labeled GF. By the above, I’ve given this a 5% risk. One untried food a day doesn’t sound unreasonable while still figuring things out… Until you note this equates to a 1 in 100 average risk, which isn’t much better than the 1 in 70 that was marked unacceptable. The desired risk threshold of 1 in 1000 can be met with one untried food per ten days, presuming all previously tried foods that remain in the diet are safe. Since mistakes usually have to be made twice for certainty, and not all low grade ones are caught quickly, this is generous.
A more numerically pleasant and realistic bound would be one untried not-GF-labeled food per two weeks. But only prepackaged foods, with full ingredients lists, that are not from dubious facilities. There is no room in such a scheme for food at a party, shared kitchens, friends trying to cook, restaurants, or indulging in any other risky bad ideas.
Suppose among these risky bad ideas one makes value judgements that give ‘risky’ meals, including those with the 5% risk scenario considered above, an overall average 50% risk. So some new restaurants, occasional friends kitchens, but mostly trying out new foods as safely as possible. Then the acceptable rate of risk taking drops by a factor of ten, to one every twenty weeks. Or just over two attempts at food discovery outside GF labeling taken in a year. (Three if you didn’t make the two weeks estimation above.)
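The spacing between risky attempts follows from the same model. A sketch under the assumptions above (1-in-1000 target, 5 meals a day, everything else eaten being safe); the function name is mine:

```python
def days_between_attempts(attempt_risk, target_meal_risk=1 / 1000, meals_per_day=5):
    """Days between risky food attempts that keep the average per-meal
    risk at the target, assuming all other meals are safe."""
    return attempt_risk / (target_meal_risk * meals_per_day)

print(days_between_attempts(0.05))  # 10.0 days for a 5% untried-food risk
print(days_between_attempts(0.50))  # 100.0 days for a 50% risky-meal risk
```

The exact figure for a 50% risk is 100 days, about fourteen weeks; multiplying the rounded two-week figure by ten is where twenty weeks comes from.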
There is a balance, I think. But it involves getting sick more often than is desirable, doing a lot of smiling stiffly at parties while turning things down, and gritting of the teeth when it’s summer new product season because advertising really does work and so many things look tempting.
For myself, I think my takeaway will be to try for no more than one careful-but-risky whim a month. And penny-pinch and plan in order to take GF-labeled, well-tested foods with me more often. Because I am dearly tired of being ‘careful’ yet sick half the time due to curiosity. One new restaurant a year and one new food a month is a ridiculously tight bound, even though it sounds pretty accurate to me. Thankfully, cooking from scratch can bound this in a far less psychologically stressful manner. But that’s a different calculation entirely.
Addendum: Tiny risks add up
Suppose, rather than being perfectly safe, prior good foods and GF labeled foods have an associated risk of only 0.05% (1 in 2000) per meal (which may contain multiple foods, so each must have a lower risk than that). Then, right there, the entire risk allowance is cut in half. With frequent risks, little bits matter a lot.
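In the same hypothetical terms: a per-meal baseline risk on the “safe” foods comes straight out of the risk budget before any deliberate risk is taken. A tiny sketch, with the 0.05% baseline assumed above:

```python
target = 1 / 1000    # acceptable average per-meal risk from earlier
baseline = 0.0005    # hypothetical residual risk of a "safe" meal
remaining = target - baseline

print(remaining / target)  # 0.5: half the allowance is already spent
```

So halving the allowance is not a rounding quirk; at these magnitudes the baseline and the budget are the same order of size.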
-
[gardenbreak] 1: Put a face on
March 7, 2016
-----BEGIN PGP SIGNED MESSAGE----- Hash: SHA256

Sitting here with my carefully written triplicate hardcopies, I feel more than a bit better. I back-of-envelope calculated the bits of entropy in my old shallows-of-imagination passphrase, and it came out to somewhere between 23 and 44 bits. Ouch. Then I try to actually make the relevant move... And find I can't bring myself to click the 'change passphrase' button. All of human psychology rebels. This is my most intimate connection to my computer, the secret only it and I know. The thing I've told to it and to no human ever. And I mean to wipe that away with an utterly impersonal text string? A machine-generated nonsense? Blasphemy! I won't even remember it fully until I've used it for a week. I'll be tied to a piece of paper, constantly afraid of losing that and all my access with it. *click*

**Plot 1: You are you.** Not whoever you say you are, but... A crypto key. Just that. Not an email address, not a face, not a URL, or a username. Welcome to the wide wonderful world of PGP. (Not that it's the only horse in this race, but out of the gate it's the simplest way to get things working.) What you signed, you wrote. Everything else is questionable. Normally this alone is a rabbit hole of almost unbearable proportion for the average user. But normally people attempt software 'integration' at a level that is frankly infeasible right now. I'm taking a different tack, at least temporarily.

This seems like overkill, doesn't it? We're used to taking a combination of acts-like-them and centralized authority to mean identity online. But that means doing a big old "I here am me there" dance for every occasion of platform drift and risking impersonation at most turns. AND it means that the only verification on what you post is the site authentication. Faked emails, faked FB posts, apps we didn't remember authorizing, ... it's not a small problem.
Also I'm philosophically opposed to 'acts like them' as an identity measure, since it inhibits change.

**Step 1: Rabbit, meet hole.** Build a GPG key attached to a good passphrase. If you already have a GPG key, any decent client should let you change the passphrase. If you don't... I'm sorry, this will be an annoying part. Go here: https://www.gnupg.org/ Linux: apt-get or yum install or whatever you do 'gpg'. Windows: install gpg4win. And I'm sorry, but it's probably going to be really annoying. I don't have a nice answer to that. Take a deep breath and follow tutorials; thankfully you only need to get as far as generating a key.

Once you have a key, upload it to keybase.io. (And at least one old-fashioned keyserver, but still, keybase is a good place to start for this.) But do everyone a MUCH bigger favor and **DO NOT put a private key on keybase.io**. Ever. Which also means not using keybase to generate the key. Yes it's a neat tool, and so very convenient, ... and it takes your private key out of your hands. Your private key is your most personal data-possession. Don't let it out. It should only touch computers that are trusted to act as you, to speak as you.

So far so normal... This is the point at which most people talk about installing enigmail, which leads down a thunderbird rabbit hole, and suddenly your entire infrastructure is one giant headache of new problems, because you touched /mail/ and that always goes wrong. Besides, we want to deal easily with crypto on /everything/, not just mail, and preferably without hitting the command-line all the time. Or trusting keybase.io with privkeys. I use enigmail, I like it fine, but: I was looking for a seahorse-to-gedit plugin. And apparently there used to be one, but... bitrot. So. Looks like for a text editor that handles gpg operations relatively cleanly, the choice atm is http://www.geany.org/ with geany-plugin-pg.
I like it better than fighting with a command-line or trying to find plugins that work in Firefox for more than ten minutes between website updates before someone gets tired of trying to 'support' gmail. The plain old text input box never changes. I can cut-paste just fine. So an editor that can handle encrypt/decrypt/sign/verify is enough. Install, enable plugin, click Tools->GeanyPG->Sign. Yay! (Turning on Document->Linewrapping helps too.) As much as I would love everything scripted and integrated and pretty... This works. The requirement that every (large) thing I write be signed as written by me can thus be covered. (Although it temporarily won't be, realistically, because iPad and Android need covering.) Making the results of that non-ugly and taking care of signatures for smaller items (like tweets) is another problem for another day.

-----BEGIN PGP SIGNATURE-----
Version: GnuPG v2

iQIcBAEBCAAGBQJW5zSDAAoJENnrcPCV5+y/RVUQAK4ZT1mBpDvOi3m8wSdyjfGJ
My991GUXtWDynEtU1uWYCt46jV1vlTwBO8LJ19QfY8KwsZej0PWmU9Bt450FLOyV
4yq8gijcrvPjTR0lbjr38YxZOao0bRoVh+gxuGNJhNuu1PXVoixkQx62IFojSt0P
wr9tOZ5/NdYUrzhgZLVjYYP6RhdpptkrI2ilSxMEtNR6g4GbZ3uZFA5R0ZUv40TS
rgj6KZxDhlVwcbROB6XE9YvxQc4e83J7AmTWXM7P1ZZozDUxgORgYjRY56CNTdLA
/933DuVR6O2YNZSp6kT+97ro+Z3flKdKkj84x7Z/0+ImOI2tZgFk7kekEyUXqoya
parU5nMz/NSQ8NU1+LJU1zNSMVG9zgLyltwhekVaa3oY/O0cx+4g0cixyQYgqWVD
JyuUQ/Xg3NSCDUXamIM8j1sKuRzSmGrVWV2Obg6+ywEpTS4FYrKLPOrNfldjyA69
nCm3SVLYfkSrW8PVhan+RroCKq5X9jKrtiXHIA57xd28ETyS1gCoVBUAVAQRxjQC
T17XVa2KBQRA0qcjKEfhL7L9fJEA5DcaO3r+1oxRnI6+ExeDMapdkpo/fkDrujQ/
yBEK+vKXkpOUwEfivZWnu9FYqHs28Yen/oUA78rBLrwgA+TCv8+XuDeBKeoPVb1u
fY4u4SBiEy4svuuR/RPc
=jJVP
-----END PGP SIGNATURE-----
-
[gardenbreak] 0: Get out your dice
February 28, 2016
Gardenbreak is going to be my project of getting off the walled garden bandwagon of Twitter, Facebook, etc. and onto something more effective, hopefully without losing half my friends and news sources in the process. This is going to be a long, slow series of very small pieces. Made longer and slower because I will be playing along myself. I’m aiming for a mix of next-do-this and why-do-that, but with a few mysteries along the way. Note: In general, despite many of the tools involved, this is /not/ about information security. Just communication control.
Plot 0. Identity sunt. Without a walled garden, there is no central authority with a claim to determining whether you are you. I’m therefore turning to known cryptographic means of establishing identity that is at least internally consistent, even if not consistent in relation to any external identity. This means signed messages, which means public/private key pairs, which means passphrases.
I’m also handling the many-walled-gardens problem in part through simplifying login management. Which means having a password manager with, you guessed it, a passphrase. Since this will be a recurring theme, due to being the sanest form of human memorizable private key, it’s worth getting out of the way first. (Not that I didn’t do some of this before, but…)
Step 0. Roll a new character. Get out your dice and a sheet of paper. (No, really.) Perform the Diceware algorithm (http://world.std.com/~reinhold/diceware.html) to get at least three passphrases of seven words each. If you have the time, make seven of them, in case you need more later and don’t feel like re-rolling. But that’s 245d6 so it might take a while. Write them down on physical paper. This is important. Do not write them down on a computer if you can avoid it.
Do NOT run the algorithm 200 times and pick the ones you like. (This was amazingly difficult for me to not do.) Your likes are exactly the factor the algorithm is designed to eliminate from the equation. Thankfully, human psychology being what it is, you will probably eventually come to like whatever it picks. DO, if you think it’ll be fun, use alternative Diceware word lists - there are some fun ones, and lots of languages. It’s a pity they don’t have one in Klingon; that would increase adoption rates quickly.
Put that piece of paper somewhere only you control. Like in your purse, next to your credit cards. Come on, it’s not like losing those would be any better… Actually, since you can’t get your passphrases reissued, it would. Put a second copy somewhere only you control AND you’re not likely to mislay, like where you keep your tax documents. They will probably get lost once or twice anyway. Just don’t write them down digitally on a computer. It’s the internet you’re defending against, not all of humanity.
If you don’t have, dislike, or distrust dice, there is https://www.dmuth.org/diceware/ (which also has links to lots of discussion about how and why to do the Diceware algorithm). I’ve read the code, it operates client-side and sends nothing back, so it should have the same trust level as your browser. Which shouldn’t be very high.
Most adherents to Diceware will decry using a digital generator, and with reason. (See also: not writing the passphrases down on a computer.) But, and I am going to make this note here since it will need making sooner or later, there WILL be trust gaps in a real-world bootstrap process, because if you already had a trusted process, why would you need to bootstrap? So just do something and keep going. It will still be better than pulling a passphrase out of the shallows of your imagination.
{Gritty bits:
Seven words in Diceware should give you ~90 bits of entropy. Any more than this is a bit needless since there are other, weaker links in your security chain at that point for almost any practical use.
}
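For the digital-generator route, the rolls and the entropy arithmetic above can be sketched in a few lines. This is a hedged sketch, not a full generator: real Diceware looks each five-roll key up in its 7776-word list, which isn't reproduced here, so this only produces the lookup keys and the gritty-bits math. It uses Python's `secrets` module for cryptographically secure rolls; the function names are mine:

```python
import math
import secrets

def roll_word_key(dice_per_word=5):
    """Five d6 rolls, concatenated into a Diceware lookup key like '43216'."""
    return "".join(str(secrets.randbelow(6) + 1) for _ in range(dice_per_word))

def passphrase_keys(words=7):
    """Keys for one passphrase; look each up in a Diceware word list."""
    return [roll_word_key() for _ in range(words)]

# Each word is one of 6**5 = 7776 equally likely choices,
# so seven words give 7 * log2(7776) ~ 90.5 bits of entropy.
print(passphrase_keys())
print(round(7 * math.log2(6 ** 5), 1))  # 90.5
```

The same caveat applies as to any digital generator: it should only run on a machine you'd trust with the resulting passphrase anyway.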
-
Looking for old posts?
October 4, 2015
I haven’t ported everything over here yet. If you’re looking for older writing or code, here are some places you could try:
-
feonixrift.wordpress.com - Semi-technical old blog. Actually, probably the most useful stuff I’ve written.
-
sourceforge.net/u/feonixrift - Code is replicated on github, but useful for old dl stats for rtprio.
-
-
Gardening in the Dark
July 20, 2015
I spoke with someone whom I’d hardly known, and now will not, because they’ve moved. We shared so many interests, I wanted to show them videos from the last cannon event. Oh but they don’t Facebook. This… was a problem for me? Really? Infosec, you know, they explained. I smile, nod. I know. They don’t know how deeply I know, and never will. Infosec, you know. I say “public face,” while thinking “personal brand,” while thinking “I’m screwed,” because this is exactly the kind of branding moment in which the choice to participate in a garden reveals its walls. The world of people visible to me automatically excludes those I seek so long as I am not like them. So long as I am (solely? primarily?) within the walls.
I can spend weeks, years even, training the automated algorithms to give me what I want to see. Out of the subset of things those algorithms are willing to show at all. From the set of items within the garden. But if I do that, I will never see the wilderness, let alone be it. You cannot walk out of the wilderness if you never walked in.
Those gardens offer a tempting ease, and the ease is not the problem. Faust is. The walls are not the problem; if they were, there would be no problem in gardens without them. But as soon as those gardens are planted, sure as rain, the walls start to rise. Gardens are the problem. I’m going for a walk.
-
Gnupg SSL Cert Errors
April 5, 2015
You try to do one little thing… and it turns into a herd of yaks. We’ve got a serious yak problem around this here internet.
keys.gnupg.net uses an invalid security certificate...
No, it uses more than one. Let’s elucidate.
keys.gnupg.net. 85040 IN CNAME pool.sks-keyservers.net.
pool.sks-keyservers.net. 60 IN A 140.211.169.202
pool.sks-keyservers.net. 60 IN A 173.79.12.47
pool.sks-keyservers.net. 60 IN A 176.9.100.87
pool.sks-keyservers.net. 60 IN A 192.146.137.11
pool.sks-keyservers.net. 60 IN A 198.84.249.106
pool.sks-keyservers.net. 60 IN A 211.155.92.83
pool.sks-keyservers.net. 60 IN A 37.59.144.15
pool.sks-keyservers.net. 60 IN A 46.229.47.134
pool.sks-keyservers.net. 60 IN A 78.47.176.74
pool.sks-keyservers.net. 60 IN A 130.83.63.25

140.211.169.202 uses an invalid security certificate. The certificate is only valid for the following names: *.fedoraproject.org, fedoraproject.org (Error code: ssl_error_bad_cert_domain)

173.79.12.47 uses an invalid security certificate. The certificate is only valid for the following names: keys.stueve.us, *.stueve.us, stueve.us, *.stueve.tv, stueve.tv (Error code: ssl_error_bad_cert_domain)

176.9.100.87 uses an invalid security certificate. The certificate is only valid for git.ccs-baumann.de (Error code: ssl_error_bad_cert_domain)

192.146.137.11 uses an invalid security certificate. The certificate is not trusted because the issuer certificate is unknown. The certificate is only valid for the following names: hkps.pool.sks-keyservers.net, *.pool.sks-keyservers.net, pool.sks-keyservers.net, pgpkeys.co.uk The certificate expired on 03/09/2015 05:47 AM. The current time is 04/05/2015 03:40 PM. (Error code: sec_error_unknown_issuer)

Iceweasel can't establish a connection to the server at 198.84.249.106.

211.155.92.83 uses an invalid security certificate. The certificate is not trusted because the issuer certificate is unknown. The certificate is only valid for the following names: hkps.pool.sks-keyservers.net, *.pool.sks-keyservers.net, pool.sks-keyservers.net, pek1.sks.reimu.io (Error code: sec_error_unknown_issuer)

37.59.144.15 uses an invalid security certificate. The certificate is not trusted because the issuer certificate is unknown. The certificate is only valid for the following names: hkps.pool.sks-keyservers.net, *.pool.sks-keyservers.net, pool.sks-keyservers.net, pgpkeys.eu (Error code: sec_error_unknown_issuer)

46.229.47.134 uses an invalid security certificate. The certificate is only valid for the following names: 2015.alpha-labs.net, alpha-labs.net, *.alpha-labs.net, *.mc.alpha-labs.net, static.domian.alpha-labs.net (Error code: ssl_error_bad_cert_domain)

78.47.176.74 uses an invalid security certificate. The certificate is not trusted because the issuer certificate is unknown. The certificate is only valid for the following names: hkps.pool.sks-keyservers.net, *.pool.sks-keyservers.net, pool.sks-keyservers.net, sks.openpgp-keyserver.de (Error code: sec_error_unknown_issuer)

The server at 130.83.63.25 is taking too long to respond.
Not one entry with a valid ssl cert; not ONE. Our yaks, they are very shaggy this season. We are bootstrapping the future on a house of cards masquerading as a jenga set, instead of a sturdy scaffold. The wonder of it is, it’s working.
-
New site setup
October 18, 2014
I am sorry the concept of ‘blog’ has taken over the internet. It’s not bad, it’s just not the only game in town. Let’s play.