
Mike Sugarbaker

Software for interiority

2 min read

I learned the word from Douglas Coupland's Microserfs, I'm ashamed to say. But it's what's going on inside the head. How much software is left that has anything to do with that?

Not business software, surely; that's all performance. Not image software or page layout; those are canvases, things you have to fight to make look the way they do inside you. Maybe music software, but only in the most technical way, arguably a more technical way than code editors.

Research tools ought to do it. They ought to have at least some tools in the toolbar that are for thinking with. But they don't seem to. It's just search, navigate somewhere, hardcode a link, painstakingly weave a web that you're never going to put to a use besides distraction. Where is the page in Notion - and I'm sure it exists, but still, where is it really? - that hands you two ideas at random and asks you to synthesize them, or else separate them for good?
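
To be concrete about the mechanic I'm wishing for: it's tiny. A sketch in plain JavaScript, with the notes themselves standing in for whatever a real tool would actually store:

// A sketch of the prompt I'm imagining: pull two notes at random and
// put them in front of you with one question attached. The notes array
// is a stand-in for whatever your actual tool keeps.
const notes = [
  "software for interiority",
  "undo as a way of thinking",
  "feeds belong in browsers",
  "the web as decaying infrastructure",
];

function randomPair(items) {
  // Pick two distinct indices at random.
  const first = Math.floor(Math.random() * items.length);
  let second = Math.floor(Math.random() * (items.length - 1));
  if (second >= first) second += 1;
  return [items[first], items[second]];
}

const [a, b] = randomPair(notes);
console.log(`Synthesize these, or separate them for good:\n- ${a}\n- ${b}`);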

Or: to really think with software, you need a lot of undo: so much undo that it almost becomes something else. Let me walk down a path, rewind it, then pluck it out like a green branch from a tree and save it somewhere. Efforts at doing something like this are generally a mess, maybe because they have to go on a screen with everything else. Or maybe I'm asking the impossible again.
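
Nothing I use works this way, so take this as a sketch of the data structure rather than any real tool's API: undo history kept as a tree, where rewinding and then editing starts a new branch, and any branch can be plucked off and saved under a name.

// A sketch of "undo that becomes something else": history as a tree.
// Rewinding moves a cursor up the tree; new edits branch off from
// wherever the cursor sits; a branch can be saved under a name.
class UndoTree {
  constructor(initialState) {
    this.root = { state: initialState, parent: null, children: [] };
    this.cursor = this.root;
    this.saved = new Map();
  }

  edit(newState) {
    const node = { state: newState, parent: this.cursor, children: [] };
    this.cursor.children.push(node);
    this.cursor = node;
  }

  undo() {
    if (this.cursor.parent) this.cursor = this.cursor.parent;
    return this.cursor.state;
  }

  // Pluck the branch out and save it: the path from the root to the
  // current node, kept under a name.
  saveBranch(name) {
    const path = [];
    for (let node = this.cursor; node; node = node.parent) {
      path.unshift(node.state);
    }
    this.saved.set(name, path);
    return path;
  }
}

const doc = new UndoTree("a blank page");
doc.edit("a first draft");
doc.edit("a digression about gophers");
doc.undo();                         // rewind past the digression
doc.edit("a tighter second draft"); // new branch off the first draft
doc.saveBranch("keeper");
console.log(doc.saved.get("keeper"));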

Maybe this would end up being a half-decent way to use LLM-based generative AI: turn up the temperature just a bit, go off the beaten path, but have my collection of notes and topics in mind. Find the themes that are hiding from me. But then there's the kind of training you'd need, and all the GPU time and energy and heat already going into a galaxy of customer-service chatbots.

We haven't made real tools for thought, just tools for nailing down thoughts we're already finished having. What I'm after is an aesthetic, software that doesn't feel like I'm interacting with anyone else but that I don't have to write for myself. Software that wasn't there to help me make anything for anyone, and didn't even make me feel like I'd taken something meant for productivity and subverted it to useless ends. How could software say to you, with its being, this isn't about ends?

Mike Sugarbaker

The eternal stumble continues

3 min read

First, I don't judge myself, or any other web developer, for getting pulled into the single-page application way of building to begin with. We all said it resulted in better developer experience, and it did, for a while; we said it resulted in faster apps, and sometimes it did, but really we just weren't checking. It felt fast to us on our fancy, up-to-date dev machines, but I don't think I was alone in thinking it just felt cooler, and that that was enough.

The ol' hipness pendulum seems to be swinging away from SPAs now, and back to building pages on the server, augmenting them with a touch of client-side code. A library called htmx has become popular enough for that augmentation that it's now emblematic of the approach; other brands for this new model of the old ways include "hypermedia" (a sure way to get my attention), HOWL, and the truly awful acronym HATEOAS. (Whenever I see this acronym I want to pronounce it "Hatey-O's." Have you eaten yours today?)
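
htmx itself is just attributes in your HTML, but the trick it wraps is small enough to sketch in a few lines of plain JavaScript (the endpoint and element IDs here are made up): ask the server for a fragment of rendered HTML and swap it into the page, no JSON in sight.

// The essence of the htmx-style approach, hand-rolled: the server
// renders HTML, and the client's only job is to fetch a fragment and
// swap it into the page. The URL and IDs are hypothetical.
async function swapFragment(targetSelector, url) {
  const response = await fetch(url);
  const html = await response.text(); // server-rendered HTML, not JSON
  document.querySelector(targetSelector).innerHTML = html;
}

// e.g. a "load more" button that replaces a list with the server's next page
document.querySelector("#load-more")?.addEventListener("click", () => {
  swapFragment("#post-list", "/posts?page=2");
});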

My personal reasons for getting on this train are pretty big: I crashed out of the React scene for a few reasons, but the biggest was complexity. The jungle of files you had to hack through by the end there, with GraphQL and Apollo and Redux - holy gods, Redux - and server-side shells for everything (I'm not talking about good code here) and JSX through build steps and all the configs. I would feel dizzy whenever I started to think about changing anything. The call of YouTube from the other tab got louder and louder. So no, when people get radical about the present state of things and say to chuck it all in history's fire, I don't judge.

But I don't have to judge - someone will do it for me. People are rolling their eyes saying "oh boy, guess it was time for a new hype cycle, guess there was too much money to be made on contrarian takes," and like, thanks for inching closer to a critique of capitalism, comrade. It's not as if there's any money to be made with complexity!

But would we have made everything much too complicated anyway, before the train managed to turn around? I think so. I think it is okay, natural, and maybe inevitable that the field of computing lurches back and forth like a giant, busted strandbeest in its search for the best way to build information systems. How could our thinking ever not have been socially influenced, to the point of herd behavior? How could we not have let the implications of machines that you write into being carry us off toward endless complexity?

This post isn't really about technology, but I will end by saying that like Ursula Le Guin did, I think hard times are coming. So efficiency in computing is going to become a lot more important than it is now, but at the same time, the severe C++ priests who demand it at every turn are also not the keepers of the one true path forward. When it comes to machines that are written into being, we will always have to think about tools, they'll never be a settled question. But we will have to learn what it really means to think about people first.

Mike Sugarbaker

Arc has been my default browser for two weeks

7 min read

I've been using web browsers, the kind you run on regular ol' computers, for... 29 and a half years? Let's call it 30. So forgive me if I think it's worth writing about when I switch to a new one. I switched my default from Firefox to Arc a couple weeks ago. I have some thoughts about it!

The first thing I have to correct for: I'm sure that fully half of what I'm enjoying is the novelty of it. Shiny new interface, and even just faint threats of new ways to drive around on the internet? Fuhgeddaboutit! My ADHD is fully powered and sparking with energy. Combine this with my recently rekindled web development fervor and we may well be in full midlife-crisis mode here, but it's still possible to take a hard look at what's up.

Tabs and spaces and favorites, oh my: while it's sometimes disorienting to have one's tabs on the left rather than across the top, what's really head-twisty about Arc for me is that there aren't bookmarks the way I'm used to; there are just different varieties of tabs. Most people are apparently not bookmark users anyway, and prefer to rack up tabs, so why not pave the cowpaths, I suppose. This hasn't caused any real trouble for me apart from occasional momentary confusion, though - all my Firefox bookmarks are sitting here in a folder, in what Arc calls the pinned-tabs section. If you're highly concerned with a tab and think it's a keeper, you'll want to drag it "above the fold" into the pinned tabs for a given workspace (that's another thing - Arc favors these "spaces" instead of making new windows for everything, and you can associate them with conventional browser profiles if you want). If you leave a tab down in the dross, it might get cleaned up for you, which means spelunking into the archived tabs, which is easy enough but induces a little anxiety.

There are also the favorites, which are big ol' buttons you can make at the top of the sidebar. These don't create a new tab the way I keep expecting, but will open an existing one if it's still in memory... yet not particularly represented on screen in any way. This induces a little anxiety too, but it also just works the way I want, pretty much all the time, so I'm letting it ride. (This is reminding me of how Beaker impressed me just as much with its general completeness and absence of annoyingly unsupported things as it did with its new features. This might mostly just come with the territory of embedding an existing browser engine, although one should note that Arc is not an Electron app, but is written in native Swift and feels that way: snappy and clean.)

Belly up to the bar: the URL bar is... not entirely gone, but mostly you interact with this popup thing that'll remind you of Spotlight, or other sorts of launcher utilities. This thing, that also manifests when you ask for a new tab with the familiar command-T, offers the usual URL-bar options plus a bunch of built-in commands and shortcuts - things like Open Downloads or New Google Doc and such. This is hot, hot, hot and puts me in mind of my beloved late launcher Enso and its in-browser progeny Ubiquity. Make these commands a thing we can write as extensions, BCNY!

(On the subject of extensions, though: Arc calls these Boosts, which is apparently supposed to reflect their nature of boosting the functionality of specific sites, or of sites in general but by means of injection into the page. They're just Chrome extensions, written to the same APIs and linking to the same help docs. But Arc seems to cut out the popup-menu-ish extensions that I think are the most popular, and that would be a natural fit to sit amongst the favorites buttons. Just in general, Boosts feel underbaked, despite the relatively elaborate UI that's been built out for them.)

The extra jazz: Notes and Easels. Notes amount to a no-op at this stage; they're a text editor in your browser, and that's kind of it. That's not nothing, and there is some sharing functionality I don't understand which is probably aimed at blogging-or-something at some point in the future - all to the good, but nothing that's really there yet. Easels are more interesting, albeit more problematic.

Easels are virtual scrapbook pages. You can type in some fun fonts, do little doodles... but the point is they're tightly coupled with capturing, which is just a screenshot of a portion of a page. That is, they're screenshots until you click the little Play button to make them... live? Yep, they turn into little frame displays of a live page, which is cool and has fascinating possibilities for dashboards and such. I'm concerned about how there's no view-source option for easels. There's no capacity to edit them outside of Arc itself, perhaps because you can open them to editing by others ("anyone"!?!??!), indicating they might be meant as part of the eventual "multiplayer" feature that BCNY has mentioned as a possible business model.

Last but a long way from least is the audio and video functionality. As someone who makes much more use of YouTube and Twitch than a middle-aged man perhaps ought, this has been the single biggest win of my time with Arc. When you play video or sound, then go to another tab, the video pops up in a mini-window that's resizable and quite full-featured; audio-only media will get you a little mini-controller at the bottom of the sidebar. Both are super handy! The only note I have is it would be cool if the video player also popped up when you switch away from Arc entirely, although I'd wager it's impossible to tell when the frontmost tab is covered up visually.

The future: BCNY is venture-backed, and it's reasonable to be suspicious of any commercial venture that has a closed-source browser for you to download. There is such a strong need for UI innovation in browsers that they have me excited, but they sometimes don't seem aware of currents of thought along those lines that one would hope they'd be. For example, they don't have a fediverse account, and don't generally show evidence that they understand the history of efforts to make the web into the internet computer they claim to want. I sometimes worry that this vision of such a device is not coming from internet people.

I'm talking about feeds. Some capacity to do what RSS readers did, and probably what social media clients do, belongs in browsers. Browsers have already circled back to one of the major purposes people went to feeds for - readability - and made it a feature on its own (Arc's readability mode is allegedly present, but hidden and buggy); the other major purpose of feeds, catching updates, could be served by all sorts of approaches. Imagine a browser that had configurable means of checking a URL in the background and letting you know if something interesting (the meaning of "interesting" being something you specify) has changed since the last check. Maybe it has custom tooling for, ahem, certain websites from which people often wish to see the latest.
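
None of that exists in Arc as far as I can tell, but the core of it is almost embarrassingly small. A sketch, assuming a recent Node runtime (run it as an ES module), with the URL and the fifteen-minute interval pulled out of thin air; "interesting" here is just "anything changed," which is exactly the part you'd want to make configurable:

// A sketch of the "configurable follow" idea: poll a URL in the
// background, hash what comes back, and speak up when your own
// definition of "interesting" says the change matters.
import { createHash } from "node:crypto";

const url = "https://example.com/";   // stand-in for a site you want to follow
const checkEveryMs = 15 * 60 * 1000;  // every fifteen minutes

const digestOf = (text) => createHash("sha256").update(text).digest("hex");

// "Interesting" is whatever you say it is; here it's just "anything changed."
function isInteresting(previousDigest, currentDigest) {
  return previousDigest !== null && previousDigest !== currentDigest;
}

let lastDigest = null;

async function check() {
  const body = await (await fetch(url)).text();
  const digest = digestOf(body);
  if (isInteresting(lastDigest, digest)) {
    console.log(`Something changed at ${url}`);
  }
  lastDigest = digest;
}

check();
setInterval(check, checkEveryMs);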

I'm honestly not sure whether that's a bigger or smaller feature than just building a damn feed reader into the browser, which has still never been tried apart from that one awful half-assed one that Firefox made back in the day. And I understand the reluctance to give people another inbox they have to check. But giving us all the opportunity to say for ourselves what "follow" means would be halfway to the information revolution of which Arc has been waving the flag. I know y'all know what a memex is, BCNY; now show us.

Mike Sugarbaker

And then he started writing a whimsical-ass web site again

2 min read

Because every medium that dies just becomes art, right? Although my messianic drive to save the web or something is also back in play, thanks to Robin Sloan's exhortation to fuck around and find out in 2023. It is certainly an opportune time to experiment with this platform, as existing platforms-on-the-platform begin to collapse, losing their grip on our thinking. But mostly, I just wanted to program again - but to do it in exactly the way that brings me joy. This includes using Clojure, the Lisp I find coziest, and... not only no JavaScript but precious little other client-side code. (Were we rooked?)

I think Mr. Sloan's manifesto is right that it's not time for "products" but for weird tendencies; for development processes that are lived in the open and in social context, not teaser campaigns and invite-only rollouts. But right now I'm wary of writing too much about what I'm doing, rather than doing it, which I think has contributed to other projects petering out. Plus I am 100% back on my bullshit with regard not just to independent web development but to both the weird hypertext systems I used in college, and HyperCard again for god's sake. As a middle-aged man it is naturally embarrassing to be so precisely middle-aged. So maybe I'll say more next time. (Unless you find the secret posts.)

Mike Sugarbaker

Why AI audio is a different ballgame

5 min read

(Sorry I'm late.)

In case you missed it, the latest thing we're calling AI (or Machine Learning or whatever) is ethically very problematic! A legal case has been brought by a few professional illustrators who have had their hard-won, marketable styles straight-up ganked by text-to-image processors. While the second most discoursed-about ML-generation medium, that of text, is not criticized as often on an intellectual-property basis as on its tendency to present statistically-likely nonsense as authoritative-sounding truth, I still worry about its contribution to AI's brand in general as ripping off artists and creators who most often weren't even given opportunity to opt out.

Why would I be worried personally? Because I'm enchanted by the sounds of Dance Diffusion, an application of the Stable Diffusion method to the generation of audio. Like its image-creating sibling, Dance Diffusion starts either from noise - fully random data - or from some starting data with noise transparently overlaid. It then "denoises" the data toward what its model says is likely. As with image generators, giving the model some starting data can result in what's known as "style transfer" - rendering the preexisting content in a style closer to the model, while keeping the fundamentals of the starting point intact. This is, for the most part, how I've been using DD - to do things like ask a model trained on nothing but drum solos to transform a clip from a rap vocal, and other such. (Models that do text-to-sound generation, or creation of audio from written prompts, are beginning to emerge as of this writing but are still fairly limited.)
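
Dance Diffusion's actual model is a trained neural network I can't reproduce here, so the denoise step in this sketch is a stand-in; but the overall shape described above - mix your clip with noise to a chosen depth, then walk it back step by step toward what the model considers likely - looks roughly like this:

// A rough sketch of diffusion-style transfer on raw audio samples.
// The real model is a trained network; denoise() below is a stand-in
// that just shrinks samples toward zero, more strongly at later steps.
function gaussianNoise() {
  // Box-Muller transform: two uniform samples in, one Gaussian sample out.
  const u = 1 - Math.random();
  const v = Math.random();
  return Math.sqrt(-2 * Math.log(u)) * Math.cos(2 * Math.PI * v);
}

function denoise(samples, t, totalSteps) {
  const strength = 1 / (totalSteps - t);
  return samples.map((s) => s - s * strength * 0.1);
}

function styleTransfer(startSamples, noiseAmount, totalSteps) {
  // Mix the starting clip with noise: more noise strays further from the
  // original, less keeps its fundamentals intact.
  let x = startSamples.map(
    (s) => (1 - noiseAmount) * s + noiseAmount * gaussianNoise()
  );
  // Walk the noisy clip back, step by step (with a real model, toward
  // whatever it considers likely).
  for (let t = 0; t < totalSteps; t += 1) {
    x = denoise(x, t, totalSteps);
  }
  return x;
}

// A hummed bassline would be tens of thousands of samples; eight will do here.
const clip = Array.from({ length: 8 }, (_, i) => Math.sin(i / 2));
console.log(styleTransfer(clip, 0.4, 50));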

I have mostly done my style transfers with pre-made models trained on relatively small sets of recordings. My favorite results have come from a model called "unlocked-250k," which gets its first name from its training data, the Unlocked Recordings collection on the Internet Archive. This collection, despite what one might assume from its presence on IA for free download, is mostly under copyright, and not under any sort of unusually permissive license. (None of it is "in print" or otherwise commercially available.) So why are these recordings here, in this model that's in the default model selection for this tool? How does it keep not occurring to nerds that this is a problem?

But here's the thing: when it comes to music, we already have the tools to deal with this situation. They're called ASCAP and BMI. In fact, these tools were created in response to a nearly identical problem: technological changes to the way musicians' IP is distributed, which made said distribution much more indirect and, uh, diffuse.

I doubt these institutions are perfect - I'm certainly not at a point in my music career where I'm in a position to need to know lots about them. (Also I'm completely eliding the issue of voice actors, for whom style transfer is already becoming a threat - but they do have a union!) But I bet they could be talked into handling artists' contributions to the statistical probability of a piece of music's direction and form as it evolves out of noise. Those contributions are individually small, but we generally know exactly what group of artists ought to be getting them, for which recordings. And in the case of very composed starting data, like the hummed basslines and melody fragments I've been making so I can transform them into weird blurts of orchestra, the songwriting is not actually in the picture in the final product (there's some mechanism that gets royalties to arrangers and producers, but I don't know that it's one of the same ones). So I'd expect what users of audio AIs end up paying, as royalties or fees, wouldn't be as large as if you sample something outright. Everybody wins!

I can think of a number of things to quibble back and forth about in such an arrangement (what about consent? You don't get to opt out of having your song played on the radio; is this similar?), but the point is this sort of problem can be understood and handled, and appropriate institutions can be created to attempt to deal with it. Is it conceivable that visual artists could respond to AI by forming a similar layer of institutions? I doubt they have the muscle, plus maybe there is too much work-for-hire happening in illustration, compared to recorded music, to make that approach make sense. But it all puts me in mind of Elinor Ostrom's work debunking (before it was published??!?!) the so-called tragedy of the commons. Everything stays complicated about human beings working together, but invested people with a commitment to each other can work out creative solutions. That gives us an alternative to the simple, absolute cancellation of an entire, fascinating line of research and activity. I hope we take it, in whatever form.

Mike Sugarbaker

What is web abolition?

6 min read

Let's do our part for internet content by starting with someone else's Twitter thread:


For several months now I've been having these periodic waves of compulsion to build a web site. I don't even know exactly what web site. Something nostalgic, just for fun; something that helps people create a world made of writing, like the weird, failed alt-wikipedias that obsessed me in my early twenties. Something game-like, maybe... or just a literal game? I've started to cut code for a few of these ideas (I'll still write code, as it turns out, just not for anyone else), but more often I've ruled them out or lost steam before even getting started. Any single one of them would fail to be what I want, which is a weird fever-dream amalgam of all of them.

Hopefully this has all been an extinction burst. The truth is, the web's done. It's best considered legacy now, all of it. This is the oldest of news to zoomers, who've never seen it as a virtuality to explore, but as the place where most forms of bureaucracy live (plus there's a Facebook interface there, I guess, but you'd only use it if you had to). The future almost certainly lies in networks that are roughly monkeysphere-sized, made not for the whole world but for a semi-gated group of people, who are probably connected by some context that isn't solely an online one; Slack or Discord is the paradigm, although given recent events, it sure seems worth looking at open source alternatives.

For a lot longer than several months, I've been involved with efforts to look past the web infrastructure we've got, towards something more useful and less costly and harmful. For a while, this meant the so-called IndieWeb, a collection of efforts to get the things we like about social media back into independent blogs. Then it meant the more radical fixes of the peer-to-peer web. Recently I've had my eye out for entirely new protocols - fresh takes on something like gopher, maybe with some of the interactivity of HTTP, and hopefully some of the writeability that was supposed to be a part of browsers from the beginning. That would be awesome, I've been thinking. Sort of.

And lo, here is Gemini, a new protocol which is almost those things. It was created by a few coders who were still hanging out on Gopher, which alone pleases my soul and gives me that feeling of good-hearted willful obscurity that goes with a lot of my favorite subcultures over the course of my life. It cuts a lot of features out of the web, which is a good and productive way to proceed, but writeability and interactivity are among them, sacrificed for user privacy. (They think the fact that you can send server queries that are rich enough to let you edit the web without editing HTML is inherently tied to everything that's gone wrong! What if they're right? What are my weird, dying dreams of textscapes in that case?) Gemini is like the IndieWeb, and like Beaker turned out to be, in that it doesn't offer anything that an average user, who seems pretty unmoved by privacy and such if their continued enthusiasm for Facebook is any hint, will actually experience as better - as motivation to switch. But the virtuality and exploreability of Gemini is delightful, if not strictly necessary anymore. (I recommend Lagrange.)

So what is a movement toward realizing the web as decaying infrastructure, something whose main verb is "crumble," not "connect"? I envision something like a Matrix client with a web browser bolted onto its side, the way that early versions of Netscape also supported gopher. That web browser would include JavaScript blocking and image blocking by default, sandbox all domains from each other's cookies, and possibly make use of gateway servers to further insulate the user. I'm sure that doesn't cover everything, but you get the idea.

At this late date, we have the gift of knowing what people want from the web, so we could make our client a lot less generic, with features that support RSS, photo feeds, maybe even some IndieWeb-style distributed social stuff. Or what the hell, bolt a Facebook client on there too! Their terms of service don't forbid this from what I can tell (Twitter's do, but Twitter should be abandoned, not embraced and extended). All to the point of positioning the web as something that isn't the center, for those few of us left who need help with that.

I admit I've also been trying idly to think of ways to augment places like Slack and Discord; something that a virtual world, visual or textual, can offer these groups of people... but most likely, my nerd scenes only ever valued such virtualities because we lacked the real-world context these new chat spaces have. Once your community is real and online both, the only adventurous journey that motivates you is... unionizing? Or in the case of non-corporate Slacks, maybe abolishing cops and landlords. If you want something to explore, look out on the streets.

So there's your nice tall glass of goodbye to all that. Don't get me wrong, the web isn't going away or anything. We're stuck with it, just like it itself is stuck with HTML and JavaScript, 25-year-old technologies that were largely intended for other uses (thanks to the Twitter widget above, I have had to add <P> tags to this post manually). Where else are Google results going to come from but a hundred million blogs and forums that just won't quite die? Nothing dies on the internet. Not even Gopher died. If the Internet Archive were a company, I'd want to invest in it;* it's the only operation that feels relevant to the future of "new media," and it's specifically all about its past.

(A fun inside-baseball place to go from this conclusion: when do we short Google? Not immediately, for sure, but it's starting to look a little bit dead-man-walking, like how Yahoo looked ten years ago. Its only hope as a company is either to divest from search almost all the way, or else to really double down on owning the web even more than it does now. Hilariously, one of the best ways to do the latter would be to give direct financial and rhetorical support to the various IndieWeb initiatives. The fact that this will never occur to them would be the most damning evidence that they're in their decline as a company, if it weren't eclipsed by their massive failure to deal on any level with diversity and the general footgun shootout of their corporate culture.)

* You can invest in the sense of donating, which I very much encourage, if you still have the means after donating to things that are much more humanly material.

Mike Sugarbaker

Fuck you, logins!

5 min read

First, hands up if you remember Yacht Rock.

Second, when my old WordPress site got thoroughly hacked, I had the opportunity during the recovery process to reread every post I've ever made here at Gibberish. There are a few common threads, but the predominant one jumped out at me because posts I made about it as much as fourteen years ago are still current: why is it still so much damned work to do certain bits of web development that literally every site does? Why hasn't any layer of the tech infrastructure grown to handle them, or at least help?

Take as an example my number one nemesis, user authentication. Whether you're doing OAuth for logging in via Twitter and Facebook, and therefore doing intense multistep tangos of cryptographic token trading, or just trying to let people recover their forgotten passwords without creating security holes, getting auth right is so hard that you can't even trust it to a library. Half the time, the libraries for your chosen platform are incomplete or have major security bugs; the rest of the time, their authors throw up their hands and say (...not really wrongly) that individuals should take responsibility for understanding such sensitive areas of their code.

But apps of any seriousness need to have logins... right?

Sometimes, when something is a hard problem to solve, that's because it's the wrong problem to solve. The web wasn't designed to support single-page applications, and it certainly wasn't particularly designed for what are essentially single-user applications. Canva is the example that springs to mind right now, but there are plenty of others: tools that are centered on individuals creating things, that happen to run in a web browser. Collaboration features sometimes come in, but those are only salient for a handful of apps; features that support discovering and sharing content from other users are often kind of a joke and always a bolt-on that could be entirely separate. Apart from those, these apps could have been written as native apps. Many argue that they should be.

The first single-user web app was the web itself - making a page of your own, before anyone dreamed that a service would ever do it for you. Everyone talked about how great and revolutionary "view source" was; it democratized creation, and so forth. But then things got a lot more complicated, and despite code-sharing services like CodePen and (on a lower level) StackOverflow, the web's default openness became increasingly meaningless. When so much of the meaning in the web was in its connections, why not do your publishing through connections? Why have anything of your own?

"View source" is not enough, hasn't been enough for a long time, was never really enough. The lack of a full, secure set of in-browser tools for trading and sharing code, and the resulting need for server-based services to pick up the slack, is possibly the fundamental flaw that led us to our current state of total cultural capture by a handful of huge corporations, mainly Facebook in my country. The CEO of Facebook is widely expected to run for president sometime soon.

Well, I'm working on a single-user web application, because the languages we build the web with are languages I speak, and because it's still so powerful to just turn up in the user's browser without their having to download and install anything. But I'm building it against APIs that are only available in Beaker, an experimental web browser that adds support for the peer-to-peer protocol known as Dat. Dat implies a whole new way of doing multi-user web apps, as distributed networks of compatible files on various Dat sites, brought together via JavaScript and augmented by more traditional web services. But that stuff's not even what I care about: when you're browsing a site via Dat, Beaker gives you a "fork this site" button.

If you find something you like, you can clone it, and just start making changes to your own copy. Like it's a HyperCard stack you got off a BBS in 1989.

That leaves out the importance of the web's connections - Beaker doesn't track or share these forkings, the way GitHub does - and the multi-user side of Beaker's capabilities is definitely going to be its main selling point. But I think this simple, SneakerNet-style brand of sharing could be more important than anyone guesses, perhaps even Beaker's own creators, for the simple reason that nobody has to log into anything. Sharing and collaboration can still be done, but those are separate applications, finally snapped clean. And no, Beaker isn't the first browser to do things like this - it's just modern, insightfully designed, and built on an IPFS-like base that does plenty of cool tricks. Since it's built on a mainstream browser engine, you could make it your go-to web browser and scarcely notice you were doing it (It Happened To Me!). Unless you're on Windows - they're still working on that - I strongly recommend picking it up and exploring.

Mike Sugarbaker

There is no front-end web development crisis

10 min read

I mean, that’s a thing, right? Us devs are all talking all the time about the tool chain and the libraries and the npms and just how hard building the web has become. But we’re talking about it wrong. The problem is not that writing JavaScript today is hard; the problem is we still have to write so much of it.

There isn’t a front-end development crisis; there is a browser development crisis.

Think about how many rich text editors there are. I don’t mean how many repositories come up when you search for “wysiwyg” or whatever on GitHub; I mean how many individuals out there had to include a script to put a rich text editor in their page. And for how long now? Ten years? Sure, we got contenteditable, but how much human suffering did that bring us?

Where is <textarea rich="true" />? Where, for the Medium-editor fans, is <textarea rich="true" controls="popup" />?

Believe me, I have already thought of your cynical response. There are 9,999 reasons to call this a pipe dream. I don’t have time for them, thanks to all the Webpack docs I have to read. I’m not talking about things that’d break the web here – I don’t want us to try to build jQuery or React into the JS engine. I’m talking about things that are eminently polyfillable, no matter how people are deploying them now. And do I want to start another browser war? Yes – if that’s the only way my time can be won back.

Lots of web pages have modal content – stuff that comes up over the top of other stuff, blocking interaction until it’s dismissed. It was pointed out to me on Twitter that there have already been not one, but two standards for modals in JS, both of which have been abandoned. But they tried to reach toward actual, application-level modals, which already constitutes a UX disaster even before you add the security problems. By contrast, the web modals you see in use today are just elements in the page; a <modal> element, that you can inspect and delete in Dev Tools if you want, makes perfect sense. It might not replace a ton of code, but every little bit helps.
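
No such element exists in any standard I know of, and custom element names need a hyphen, so call it <x-modal> for now; but the polyfill story really is short. A sketch with plain Web Components, nothing more:

// A sketch of a <modal>-ish element as a Web Component: page content
// in a dismissible overlay, inspectable and deletable in Dev Tools like
// anything else. Assumes the script loads after the markup (e.g. at the
// end of <body>), so the element's children exist when it connects.
class XModal extends HTMLElement {
  connectedCallback() {
    this.style.cssText =
      "position:fixed;inset:0;display:grid;place-items:center;" +
      "background:rgba(0,0,0,0.5)";
    const box = document.createElement("div");
    box.style.cssText = "background:#fff;padding:1em;max-width:30em";
    // Move the element's existing children into the inner box.
    box.append(...this.childNodes);
    const close = document.createElement("button");
    close.textContent = "Close";
    close.addEventListener("click", () => this.remove());
    box.append(close);
    this.append(box);
  }
}
customElements.define("x-modal", XModal);

Drop <x-modal><p>Are you sure?</p></x-modal> into the page above that script and that's the whole feature.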

It doesn’t stop with obvious individual elements, although that may be the best initial leverage point (reference the <web-map /> initiative and its Web Components-based polyfill, the better to slot neatly into a standards proposal). There are plenty of cowpaths to pave. We need to start looking at anything that gets built over and over again in JS as a polyfill… even if for a standard that might not have been proposed yet.

You know what a lot of web sites have? Annotations on some element or another, that users can create. They have some sort of style, probably; that’s already handled. They might send data to a URL when you create them; that could be handled by nothing more than an optional attribute or two. While you’re at it, I want my Ajax-data-backed typeahead combobox. But now that we’re talking to servers…

You know what a lot of web sites have? Users. I’m not the first to point out that certificates have been a thing in browsers for pretty much the entire history of the web, but have always had the worst UX on all the civilized planets. There is no reason a browser vendor couldn’t do a little rethinking of that design, and establish a world in which identity lives in the browser. People who want to serve different content to different humans should be able to do it with 20% of the code it takes now, tops. (Web Access Control is on a standards track. Might some of it require code to be running on the server? Okay – Apache and Nginx are extensible, and polyfills aren’t just for JS; they’re for PHP too.)

And all of that implies: you know what a lot of web sites have? ReST APIs. Can our browser APIs know more about that, and use it to make Ajax communication way more declarative without any large JS library having to reinvent HTML? Again, it’s been like ten years. ReST is a thing.

While we’re talking reinvention, remember the little orange satellite-dish icons that nobody could figure out? Well, if we didn’t want to reinvent RSS, maybe we shouldn’t have de-invented it to begin with. In the time since we failed to build adequate feed-reading tools into browsers and the orange icons faded away, nearly all of the value of the interconnected web has been captured for profit by about three large companies, the largest being Facebook. For all practical purposes in America, you can no longer simply point to a thing on the web and expect people who read you to see it. Nor can you count on them seeing any update you make, unless you click Boost Post and kick down some cash.

Users voted with their feet for a connected web, which had to be built on one company or another’s own servers – centralized. It had to be centralized because we weren’t pushing forward on the strength of the web’s connective tissue, making it easy enough to get the connections users wanted. And credit where it’s due to Facebook and Twitter (and Flickr before them) for doing the hard work of making the non-obvious obvious – now we know, for example, that instead of inscrutable little orange squares in the location bar, we should put a Follow button in the toolbar whenever a page has an h-feed microformat in it. Or a bunch of FOAF resources marked out in RDFa, for that matter.
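
The detection half of that is the easy part, and it's the sort of thing a browser (or even an extension) could run on every page load. A sketch - the h-feed class and the alternate-link convention are real, everything else here is made up:

// A sketch of "show a Follow button when a page is followable":
// look for an h-feed microformat or an advertised RSS/Atom feed.
function findFollowables(doc = document) {
  const followables = [];
  // Pages marked up with the h-feed microformat.
  for (const el of doc.querySelectorAll(".h-feed")) {
    followables.push({ kind: "h-feed", element: el });
  }
  // Classic <link rel="alternate"> feed advertisements.
  const selector =
    'link[rel="alternate"][type="application/rss+xml"],' +
    'link[rel="alternate"][type="application/atom+xml"]';
  for (const link of doc.querySelectorAll(selector)) {
    followables.push({ kind: "feed-link", url: link.href });
  }
  return followables;
}

// A real browser would light up a toolbar button; this just reports.
if (findFollowables().length > 0) {
  console.log("This page is followable.");
}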

Speaking of microformats and RDF and bears oh my[1], it might be time to stop laughing at the Semantic Web people, now known as the Linked Data people. While we’ve been (justifiably) mocking their triplestores, they’ve quietly finished building a bunch of really robust data-schema stuff that happens to be useful for a clear and present problem: that of marking things up for half a dozen different browser-window sizes. Starting with structured data is great for that. Structured data may also be helpful for the project of making browsers help us do things to data by default, instead of having to build incredibly similar web applications, over and over and over again.

But Mike, you’re thinking, if the browsers build all these things we’ve been building in JS as in-browser elements, then everything will look the same! To which I say, yes – and users will stand up and applaud. They love Facebook, after all, and there ain’t no custom CSS on my status updates. It’s not worth it. Look, I don’t want to live in a visual world defined by Bootstrap any more than you do, but it’s time for the pendulum to swing back for a little while. We need to spend some time getting right about how the web works. Then we can go back to sweating its looks. And it’s not as if I’m asking for existing browser functionality to go away.

But Mike, you’ve now thought hard enough that you’re furiously typing it into a response box already, you have no idea. Seriously, you have no idea how hard it would be to do all this. Well, you don’t spend 20 years building the web, as I have, without getting at least some idea of how hard some of this will be. But you’re right, it will be stupid hard. And I’ve never been a browser engineer, so I have no real idea how hard.

And you, I counter, have no idea how worth it all the hard work will be. To break Facebook’s chokehold on all the linking and most of the journalism, or, if that doesn’t move you, to just see what would happen, what new fields would open up to us if connection were free instead of free-to-play. To bring users some more power and consistency whether individual web builders lift a finger or not. And yes, to bring front-end web development back a little bit towards the realm of the possible and practical.

Flash is dead; that is good. Apple may have dealt the decisive blow, but browser vendors did most of the legwork, and now as a direct result we have browser APIs for everything from peer-to-peer networking to 3D, VR, and sound synthesis. All of that is legitimately awesome. But for all the talk about defending the open web, that stuff only got done because a software-platform vendor (or three – Google and Microsoft’s browser teams helped a bunch) detected a mortal threat to the market of its product. When Mozilla threw its first View Source conference in Portland last year, that was my biggest takeaway: Mozilla is a software platform vendor, first and foremost, and will make decisions like one. It happens to be a nonprofit, which is great, but which may also contribute to its proclivity to protect itself first. That self-interest is what will drive it to do things.

So. Dear Mozilla: there is a new mortal threat to the market of your product. It is the sheer weight of all this code, not in terms of load time, although that’s bad enough, but development time. The teacups of abstraction are wobbling something awful, and we need you to have our backs. You employ people who are way smarter than me, and they can probably think of way better things to put into place than the examples I’ve got here. That isn’t the point. The point is there has to be less code to write. Pave some cowpaths. Make Browsers Great Again. Or something. Please. Thank you.

[1] Because I know they’ll get all in my mentions, I hasten to add that microformats were created by an entirely different tribe of developers than RDF-and-such, and were in fact created as a direct response to how awful RDF was to deal with at the time. And yeah, it was pretty awful to deal with… at the time. Now it’s better, and I kind of think team Linked Data has regained the edge. I tried really hard not to make this piece into a Semantic Web/IndieWeb beef tape. I’m sorry.

Mike Sugarbaker

That funny-looking Roundhead kid

7 min read

One of my favorite memories of childhood is lying on the floor of my Dad’s office at Northbrae Community Church – he was the minister, about which I have a great story that has been cut for time – in Berkeley, California, reading Peanuts strips, of which my Dad had several collections shelved alongside Bibles, commentaries, philosophy and whatnot. I want to say those comic strip collections had pride of place on that bookshelf, but the truth is I was lying on the floor and that’s the only reason I spotted them there on the lowest shelf. So who knows?

Somehow I got to reading some about Charles Schulz and his approach to his work. Maybe there was an interview in the back of one of the books. Early on, I took in his opinion that the reason we all love Charlie Brown so much is that “he keeps on trying.” To kick the football, to talk to the little red-haired girl, to win a baseball game, to belong.

I didn’t connect with that sentiment. It’s not clear what I did do with it, but I never felt like that was a reason to love Charlie Brown. At the time I just loved him instinctually. Here was this kid who, like me, didn’t really fit in, and got a lot of shit thrown at him for no reason that he could see – maybe just because others needed some entertainment. Just like him, I didn’t have the tools to deal with that harassment, not without poisoning myself a little bit inside every time, and just to mix metaphors and switch over to the Peanuts animated cartoons, none of the adults seemed to be speaking my language when I asked them what to do. Their advice – just ignore them when they pick on you! – might as well have been a series of muted trumpet sounds.

I didn’t love Charlie Brown because he kept on trying – I loved him because the alternative was loving a world that thinks some people are just better than others, and that those people who don’t seem to have the world’s favor should certainly never ask why or why not. They should just keep on trying. (Charles Schulz, by the way, was a lifelong Republican donor.)

Now, I’m notorious for reading literature a bit shallowly (and yes, Peanuts is literature, up there with The Great Gatsby as some of the greatest and most iconically American of the 20th century, but that’s another post), and I miss layers of meaning sometimes. My dad pointed out as I was writing this that reading Charlie Brown more generally as hope, and specifically as a tragic hero defined by his inability to give up hope, is a pretty strong reading that also supports that Schulz quote. Personally, I could see Schulz connecting with Charlie Brown more on the level of commitment to one’s job; the fact that Schulz could do the same gags with Charlie Brown for 50+ years and never have to deal with him changing is something he could feel good about (n.b. his own career as a cartoonist, and the occasional strips about Brown’s father, a barber, and his connection to that craft). Charlie Brown kept showing up for work, which Schulz and others could admire and enjoy on more than one level.

But permit me an indulgence. Lately I’ve been nursing this crackpot theory that the American Civil War actually started in England in the 1600’s. I have another theory on the side, more straightforwardly supportable, that said war is also ongoing. To get at my case for its beginning, though, I’ve gone to Albion’s Seed: Four British Folkways in America by historian David Hackett Fischer. One of the so-called folkways – a “normative structure of values, customs and meanings” – Fischer chronicles is that of the Royalist side of the English Civil War that became known as the Cavaliers.

The Cavaliers were, as you might guess, known for having horses when their opponents more often didn’t, but also for mostly being wealthy and interested in letting you know they were wealthy, and for their interest in having big estates with really, really big fuck-off lawns; a particular style of being landed as well as moneyed. The English Civil War separated the monarchy from political power – if not quite for good, and as it turns out, Puritans make lousy rulers – but it didn’t separate the Cavaliers from the kind of power that they had. And when England got cold for them in the 1640’s, a lot of them moved to more receptive territory in the colonies, namely in Virginia and points south. Fischer draws a strong correlation between this migration and the “Southern Strategy” that put conservatism back into its current power in America.

In the English Civil War, the King and the Cavaliers were opposed by a bunch of factions which, thanks in part to the close-cropped Puritan hairstyle, became collectively known as Roundheads. I was so happy when I heard that. I imagined that round-headed kid, good ol’ Charlie Brown, in peasant clothes holding up a pike, demanding an end to the divine right of kings. Permit me that.

I allow that Charlie Brown is an awkward symbol for forces aligned against conservatism. He doesn’t win much, for starters. There’s also the uncomfortable invitation to misogyny in the relationship between failed jock Charlie Brown and frequent football holder Lucy Van Pelt, which a certain flavor of person will accept wholeheartedly. Speaking of which, one facet of Charlie’s woes is a major contributor to the entitlement we now see in certain nerd cultures gone sour. (There was a point when it could easily have done that in me. I’m still not entirely sure how I avoided this.)

Instead, I ask you to respond to Charlie-Brown-the-symbol the way I did as a child, but couldn’t articulate until recently: negatively. I want you to tell him to stop being who he is, to grow out of his perhaps-essential nature and start making demands. But stay his friend, by demanding that the forces that make his world step into the frame and be seen, lose the muted trumpets this time, and name their reasons for letting this world exist. Charlie Brown has hope, but he shouldn’t need it.

This is obviously personal for me. I didn’t become tough and wise by virtue of recreational abuse at the hands of my peers; any wisdom I have I was able to get in spite of their best efforts. Any strength is left over from what they sapped. Some kids might respond to abuse and interpersonal adversity by getting stronger, but if you’re writing off the ones who don’t as losers, or trying the same methods over and over of teaching them to cope, you’re indulging yourself in a toxic, convenient fantasy. Making others feel small to feel bigger yourself is no more inevitable a part of human life than humans killing one another for sport. Polite society eliminated one of those; it can lose its taste for the other.

When people become identified with a power they take for granted, they go halfway into bloodlust when you threaten to mitigate that power in even the smallest way. In the end, that’s the basis of conservatism. But the power to take a shit on someone, at some point, when we’ve decided it’s okay, might be one that we all identify with. So I don’t have a lot of hope that we’ll change this in my lifetime, or even make a dent. But I want to stop kicking the football. I want to start asking the question.

Mike Sugarbaker

You knew you were tired, but then where are your friends tonight?

7 min read

In late October I declared November to be NaNoTwiMo – National No Twitter Month – and took the month off of Twitter. I pledged neither to read posts nor to make them, except in emergencies. I declared an emergency for the day I finally got user creation working for theha.us, my multi-user instance of the up-and-coming “distributed social network” tool Known. (I say “up-and-coming” when I ought to say “coming someday,” since the distributed part is still unimplemented, but uh, I’ll get into that later.) And I decided not to count the occasional trip to the profile page of a tech person who’d recently announced something – the public nature of Twitter often makes it more useful than email for open-source-related communications. And I cheated a few times.

Why do this when Twitter is more or less where I live online these days? Because Twitter, corporately speaking, is steadily becoming less committed to letting me direct my own attention. I can turn off the display of retweets, but not globally – just one friend at a time – and Twitter now also occasionally offers me something from someone a friend follows, apropos of nothing. I can use a list, for those times that I only want updates from the people dearest to me, but lists now ignore my no-retweets settings. Without that ability to turn down the noise when I want, I find that using Twitter makes me less happy. And this is all to say nothing of Twitter’s then-ongoing refusal to do anything systemic to manage its abuse problem and protect my most vulnerable friends. (Things have since gotten a hair better on that front.)

In a post on Ello that’s no longer visible to the public, net analyst Clay Shirky wrote, “really, the only first-order feature that anyone cares about on a social network is ‘Where my dogs @?’” It is devastatingly, sublimely true. It is astonishing how much people will put up with to be where their people are.

For November, when I had something to say I generally put it on Ello. My account, like Shirky’s, is set only to be visible to other registered Ello users (I have invites if you’re curious). I’m not sure why I’m doing that, as it doesn’t make things private per se; Shirky is also aware of this and thoughtful about how different levels of privacy influence a piece of writing. It feels right sometimes to talk this way in a different room, even if the door isn’t closed. The most surprising thing about the last month is how many people – how many of my friends – not only came over to Ello when I raised it as an option, but stayed. They didn’t burn their Twitter accounts down behind them, and they didn’t show up a lot; I’m often the only voice I can see above the fold in my Ello Friends stream. But there were Monica and Jesse and Jenny and Megan, showing up now and then, posting things that are longer than 140 characters, the way we thought we would (and did for a while!) at Google+.

But that’s not a movement. It’s a pleasant day trip, and it might be over.

It’s an article of faith in the tech community that a social network can always hollow out the way MySpace did when a new competitor reaches a certain level. But that was a different world. Almost ten years ago, right? Getting all the kids to move is a whole other ballgame from moving the kids, plus their parents, plus the brands and photo albums and invitations and who knows what else. Not to be too specific; I’m just citing Facebook as an example, my beef isn’t with them in particular. (Facebook also beat MySpace in part by being perceived as high status, and what’s higher status than every celebrity you could name having an @-name?)

The last ten years have made us awfully demanding in some ways. If you ship social software to the web, it had better have every feature that people might want and have it immediately, because it will be taken for always-and-forever being what it is when the first wave of hype hits. No minimum viable product is going to win over the mass. Even more frustrating is the IndieWeb movement: I may be about to display myself here as one of those who give up hope when a feature is missing, but I’m also in a position to know that the rate of progress of open-source distributed social networks has been ludicrously slow. We finally have an almost-viable open-source product, analogous to WordPress – that’s the aforementioned Known – but it still has no interface for following people, whether on the same site or elsewhere. The code infrastructure is there, but there’s no way to use it yet. I guess all its hardcore users are still using standalone RSS readers like good Web citizens or something, but the mainstream was never interested in fiddling with that. (Nor will standalone RSS readers support private posts.) Given the, er, known impatience of the mass for anything that doesn’t do all of the things already, I’m starting to worry that the indie web won’t have what it needs to get traction when the time is ripe (that is, when Twitter finally falls over).

Maybe I’m only running a Known instance, or caring at all, out of nostalgia. I’m old enough to remember the web we lost. On the other hand, there’s an important sense in which we got what we (I) wanted – we’re all together, all connected… and it’s terrible. Clay Shirky has an idea – a whole book in fact – about the cognitive surplus of a population having been liberated by the 40-hour work week and creating a kind of crisis where we didn’t know what to do with ourselves, until television stepped in. Like the gin pushcarts on the streets of London after the industrial revolution, television stopped us from having to figure out what was wrong and fix it. In (Shirky’s) theory, the internet is our equivalent to the parks and urban reforms that made gin pushcarts obsolete – but what if all that connection is actually a crisis of its own? I think a lot about something Brian Eno wrote in 1995 in his book A Year With Swollen Appendices (he was writing about terrorism, but it applies): “the Utopian techie vision of a richly connected future will not happen – not because we can’t (technically) do it, but because we will recognize its vulnerability and shy away from it.”

We may be shying away already, by using mass-blocking lists and tools and the like. Maybe that’s not so bad, provided that Twitter’s infrastructure can keep up. But then, we’re usually willing to do as little as we can to stay comfortable instead of getting to the root of the problem. I’m back on Twitter now, using a second account in place of a list, which isn’t ideal (lists can be private). But where else am I going to tell my friends when I’ve found something better?