Three things from this week.
The second best thing this week was Tailscale Up. The best thing was picking up my wife at the airport the next day. These are very different things, but I can’t very well put a conference ahead of seeing my s.o. for the first time in a month, even if it was a really good conference. And it was a really good conference. It was just the right size to feel connected and conversational, but also structured tightly to value everyone’s time. If you’re new to Tailscale, it’s an easy way to connect your business or personal machines over a private network built on top of WireGuard. My interest is in the technology and the adjacent communities.
I drove up to San Francisco and met up with Jeff. This is conference number one million for us over the past 20-odd years. We ran into a few other Googlers and Xooglers between talks. Catching up with Will Norris was nice. We talked about Home Assistant integrations, OpenID Connect, and some self-hosting-related stuff. Self-hosting turned out to be an underlying theme of the conference. Cozy coders welcome. Run your services and connect them privately over the internet when you need to. It was wild to hear so many people bringing up projects that I rely on too. Jeremy, one of the MCs, mentioned Miniflux and it’s now the feed reader running on my LAN.
The talks were varied and interesting. Brad Fitzpatrick trolled us expertly with his lingo bingo lightning talk. Corey Quinn delivered a whole lot of energy with an over-the-top performance. The presenters were all great and the demos were all refreshingly real, but this week I’m going to tell you about three things I learned from conversations in person at the conference.
Ed told me about Bare Metal as a Service. My first reaction was “isn’t that just renting a datacenter?” but there’s a lot of nuance to it. Ed had a very easy heuristic that gets to the heart of the matter. People who don’t need bare metal lose interest very quickly. But people who do are very, very interested. I think some of it has to do with how a business regards the compute that it uses. For example, someone like me coming from a technical background would decide that I want to run some software and let users access it over the internet. Maybe my software needs a certain mix of CPU, RAM, and GPU. So I’d buy some computers with those parts and plug them in. Seems obvious.
There are, of course, other perspectives though and a different solution is obvious from a different perspective. Suppose I’m a business guy doing business things. I have some users who are on the internet. I want my users to be able to access another service that my businessy business friends will set up for me. In this case my thinking isn’t centered around the compute but around my operations budget. Through this lens I see the computers as something that I want to rent from someone else for the time that I need them. In this case the computers become an operational expense rather than a capital expenditure. This was my takeaway but I’m sure there are a lot of other relevant details. I guess it also sounds a lot like cloud services but the BMaaS providers promise options for dedicated hardware in specific locations.
Accessing hardware from containers
Justin Garrison told me about some privileged container options. He built a shiny 4-node cluster last year. Some of the blinky bling on that case is driven by CircuitPython on an RP2040. CircuitPython and MicroPython devices show up as USB mass storage and generally run the Python program that you save to the drive. It’s a tidy solution and is reachable from inside a container, but I believe it requires a little special configuration, which he sorted out.
The last time I played with a proper tiny Kubernetes cluster I was exploring the idea of camera applications deployed in containers on the RPi. I was stuck for a long time getting access to I2C and camera devices (under /dev/video) from inside a container running in K3s. Eventually I sorted out how to run privileged containers and decided that would have to be enough. I dropped containerization from that project in the end. But while talking with Justin, he suggested there might be some more refined options out there. It has been a couple of years, and some cursory searching tells me that device plugins may be worth looking at again. They’re tied to the idea of advertising extended resources from a node: a node with the special hardware advertises it as an extended resource, and pods request it much like they request CPU or memory.
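As a sketch of how that looks from the pod’s side, assuming some device plugin on the node advertises a hypothetical extended resource named `example.com/video` (the resource name and image here are mine, not from any real plugin):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: camera-app
spec:
  containers:
    - name: app
      image: camera-app:latest       # hypothetical image
      resources:
        limits:
          example.com/video: 1       # extended resource advertised by a device plugin
```

The scheduler then only places the pod on a node reporting that resource, and the device plugin takes care of wiring the device into the container without marking the whole thing privileged.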
The other option I came across is simply setting Linux capabilities. It’s more in the vein of allowing privileged access, but it could be less messy than running the whole container as privileged.
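A rough sketch of that direction, granting a single capability instead of `privileged: true` (the image name, capability choice, and device path are my assumptions, and depending on the container runtime, device cgroup rules may still block access):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: i2c-app
spec:
  containers:
    - name: app
      image: i2c-app:latest          # hypothetical image
      securityContext:
        capabilities:
          add: ["SYS_RAWIO"]         # a much narrower grant than privileged: true
      volumeMounts:
        - name: i2c-bus
          mountPath: /dev/i2c-1
  volumes:
    - name: i2c-bus
      hostPath:
        path: /dev/i2c-1             # expose the host's I2C device node
```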
Rust is coming for me
Xe talked about libtailscale in their presentation. The library is useful for writing code that uses Tailscale directly - as opposed to incidentally using it as a network interface on the machine where the code is running. It’s also interesting that the library is wrapped in a C language interface while the code is actually written in Go. C is still the common interface with the broadest reach, despite many attempts over many years to bring in more refined mechanisms. Chatting with Xe a bit later was a reminder that Rust is coming for me though. I generally don’t dive into new programming languages just for the sake of trying them out. Rust has been taking root though. Perhaps one of these days something I want to do will finally pull me into it for real. In the meantime, I appreciate all the people out there who build the wrappers and tie things together so we can keep the C family alive for just a little bit longer.
It’s time to find some more excuses to make the trek up to the city. And time to get into some more small scale conversations. Speaking is different from writing - I think I may have confused some people when I told them that at home I use “a Debian Linux box and weasel on Windows”. Doesn’t everyone pronounce WSL as weasel?