There’s an issue with the way the shutdown command works on gammacam P6. I really like using the little OLED bonnet user interface to control the system and I’d like to fix things so that I can reliably shut it down without needing to check in over ssh.
Here’s a quick overview of how the menuing system works.
First off, the UI is split into two parts: one file holds the hardware-specific code and another holds the core, which has no hardware dependencies. Currently the only hardware supported is the “Adafruit 128x64 OLED Bonnet for Raspberry Pi”. The idea of splitting this into two files is that the core of the UI code can run anywhere; it doesn’t need a Raspberry Pi or an appropriate display. If you normally test your code directly on the hardware where you use it then this might not sound important. But in ui.py there’s a FakeDisplayHat class that’s not used in the production code. I wrote that class to work out some kinks in the code that makes the menus easy to configure, and also to confirm that the code would send off the expected commands. If you look at this in the future then hopefully that testing is more robust, but at the moment there are just a few commented-out remnants.
Ideally there’d be a complete controllable mock UI that could be driven in a terminal and a fake UI that could be used for running unit tests. There’s a cost to building and maintaining mocks and fakes though, and I’m just one developer. Even the small decoupling I’ve made makes a big difference, but you do have to tinker with things to switch to and from testing mode.
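For a flavour of what that decoupling enables, here’s a minimal sketch of a test double in the spirit of FakeDisplayHat. The method and field names are invented for illustration; the real class in ui.py may look quite different:

```python
# Hypothetical sketch of a test double in the spirit of FakeDisplayHat;
# the real class in ui.py may differ. It records what the UI asked it to
# draw and lets a test script queue up button presses, with no Raspberry
# Pi or display attached.
class FakeDisplayHat:
    def __init__(self, queued_presses=None):
        self.frames_drawn = []               # every frame of text the UI rendered
        self.queued_presses = list(queued_presses or [])

    def draw_lines(self, lines):
        # The hardware class would push pixels to the OLED here; the fake
        # just remembers the text so a test can assert on it.
        self.frames_drawn.append(list(lines))

    def poll_buttons(self):
        # Return the next scripted press, or None once the script runs out.
        return self.queued_presses.pop(0) if self.queued_presses else None


# A test can then drive the UI and inspect what would have been shown:
hat = FakeDisplayHat(queued_presses=["a"])
hat.draw_lines(["Shutdown", "hold A to confirm"])
assert hat.frames_drawn[0][0] == "Shutdown"
```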
Adding new hardware
The way the menuing system works is pretty simple. The abstraction is very thin: it expects that you have a display with buttons connected to one of the nodes, and it relies on the debouncer library from Adafruit to receive button presses. This abstraction is meant to be only broad enough to cover the small number of devices that are known now. It will need to be extended or adapted to handle boards that differ significantly from the OLED bonnet I’ve been using. When such a board is added there will be a little refactoring, and some more details can be abstracted out.
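To make the button-handling part concrete, here’s roughly how the hardware side can read the bonnet’s buttons with adafruit_debouncer. This is a sketch under my assumptions, not the code in ui_<hardware>.py; D5/D6 match Adafruit’s published pinout for the two push buttons on the 128x64 OLED Bonnet, but double-check if you’re on different hardware:

```python
import time
import board
import digitalio
from adafruit_debouncer import Debouncer

def make_button(pin):
    io = digitalio.DigitalInOut(pin)
    io.direction = digitalio.Direction.INPUT
    io.pull = digitalio.Pull.UP            # the bonnet's buttons pull to ground
    return Debouncer(io)

# D5/D6 are the two push buttons on the 128x64 OLED Bonnet; the joystick
# directions sit on other GPIO pins.
button_a = make_button(board.D5)
button_b = make_button(board.D6)

while True:
    button_a.update()
    button_b.update()
    if button_a.fell:                      # True for one update after a press
        print("A pressed")
    if button_b.fell:
        print("B pressed")
    time.sleep(0.01)
```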
One candidate for the next display is the Monochrome E-Ink Bonnet. I had hoped to include it in the initial release but I had a lot of trouble dealing with the refresh rate. It would be a good option for working outdoors since it doesn’t need backlighting at all. Bringing in support for an E-Ink display like this would require a few changes:
The background fill colour is white rather than black, so the ui_<hardware>.py file would need to provide a colour palette (see the sketch after this list).
The refresh rate is really, really low. This would be a challenge that I’m not sure how to tackle.
The E-Ink bonnet has only two buttons rather than a joystick and buttons. For this I’d probably make one button the selector and the other the action.
The resolution is smaller. This is a bit of a challenge but the menus are mostly text.
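Here’s one hypothetical way the palette and button differences could be captured in a per-hardware profile. Every name and number below is invented for illustration (the 180-second refresh gap, for instance, is just a conservative guess for e-ink), not something that exists in the project today:

```python
from dataclasses import dataclass
from typing import Tuple

# A hypothetical per-hardware profile; names and values are invented
# for illustration, not taken from ui_<hardware>.py.
@dataclass(frozen=True)
class DisplayProfile:
    background: int             # fill colour: 0 = black (OLED), 255 = white (e-ink)
    foreground: int
    min_refresh_seconds: float  # e-ink panels want long gaps between redraws
    buttons: Tuple[str, ...]    # logical inputs the board provides

OLED_BONNET = DisplayProfile(background=0, foreground=255,
                             min_refresh_seconds=0.0,
                             buttons=("joystick", "a", "b"))
# Two buttons only: one cycles the selection, the other fires it.
EINK_BONNET = DisplayProfile(background=255, foreground=0,
                             min_refresh_seconds=180.0,
                             buttons=("select", "action"))
```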
In general I like to take advantage of whatever capabilities the hardware gives. If I have a joystick and two buttons then the UI on my rig shouldn’t be hampered just because other implementations have fewer buttons and a smaller screen. Unfortunately, sometimes we do make that trade in order to keep software maintainable. If you want to see this display supported, or have suggestions, there’s an open issue on GitHub for adding it.
Adding tiny hardware
Another target I’d like to aim at for the UI is these tiny monochrome OLEDs. There are a lot of these available in the same size from other vendors too. That’s enough space to show a hostname, an IP address, the amount of free local storage, or some other project-specific info. In this case I’d branch out and add another display-only UI class to the code; it wouldn’t make sense to try to scale the menus down to something this small with no inputs. It would be fun to put one of these on every node in the cluster and have local status directly available at all times.
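A display-only class like that could be very small. Here’s a sketch of the status-gathering side; the class name is hypothetical and the `hostname -I` call assumes Raspberry Pi OS or similar, so treat it as an illustration rather than project code:

```python
import shutil
import socket
import subprocess

# Hypothetical display-only status source; nothing like this exists in
# the project yet, and the name is invented.
class StatusDisplay:
    def status_lines(self):
        hostname = socket.gethostname()
        # `hostname -I` prints the assigned IPs on Raspberry Pi OS / Debian.
        addrs = subprocess.check_output(["hostname", "-I"], text=True).split()
        ip = addrs[0] if addrs else "no network"
        free_gb = shutil.disk_usage("/").free / 1e9
        return [hostname, ip, f"{free_gb:.1f} GB free"]
```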
How the menuing code does what it does
The core of the UI is the simple UiAction class. It’s a @dataclass because I love using them. Each UiAction directly contains everything a menu entry needs. This will get more complicated when more displays are added. Right now it just serves to set up the actions_ list in the main Ui class, where it holds the text for the top line, a function to populate the next line, and functions for responding to button presses.
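In spirit, it looks something like the following. The field names here are my guesses for illustration; check ui.py for the real ones:

```python
from dataclasses import dataclass
from typing import Callable, Optional

# A sketch in the spirit of UiAction; field names are hypothetical.
# Each entry owns one menu screen: a fixed top line, a callable that
# fills in the second line on each redraw, and button handlers.
@dataclass
class UiAction:
    title: str                                        # text for the top line
    detail: Callable[[], str]                         # populates the next line
    on_select: Optional[Callable[[], None]] = None    # e.g. button A pressed
    on_alt: Optional[Callable[[], None]] = None       # e.g. button B pressed

# The main Ui class would hold a list of these, something like:
actions_ = [
    UiAction(title="Shutdown",
             detail=lambda: "hold A to confirm",
             on_select=lambda: print("shutdown requested")),
]
```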
The UI code is simple and purpose-built. It generally doesn’t directly access the camera or imagery functionality on the node where it happens to be running. Instead, the UI uses the same grpc client that a user outside the cluster would use. This provides consistency across the different interfaces, plus the flexibility to install the UI on a node that doesn’t have a camera at all. The trade-off is the extra compute required to run a grpc client for every task; that’s worthwhile for this application since we have outsized compute compared to the imagery needs. One giant exception to this consistency is the shortcut used to get the current image count: there’s no way to access this from the grpc client, so the UI currently scans the data directory over and over to get a file count. This is an expedient hack but I’d really like to find a better solution.
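For concreteness, the hack amounts to something like this (the path is a made-up placeholder; the real data directory will differ):

```python
from pathlib import Path

# Roughly what the shortcut does: count files on disk because the grpc
# client has no image-count call. Rescanned on every menu refresh.
def image_count(data_dir="/data/images"):
    return sum(1 for p in Path(data_dir).iterdir() if p.is_file())
```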
So the UI does some nice work and it serves the purpose well but it’s clear that there’s room for improvement. As always, contributions are welcome.