There’s an old saying that I mistrust deeply: users don’t know what they want. I think they do, and I think that products that are designed based on the unshakeable conviction that users don’t know what they want routinely turn out to be terrible.
You don’t see it because this frame of mind sometimes produces good results: specifically, it produces good results when the team building the product wants the same things its users want. The bad results it produces — which far outnumber the good ones — don’t live long enough to be reviewed and discussed over and over and over again on Hacker News.
In my experience, users do know what they want. It’s just that they can’t always state it in a useful form — which is completely understandable, given that it’s really not their job to do that. They’re a bit like an ancient oracle.
Continue reading Users: The Most Unhelpful Oracle
I’m getting old, and there are things that I don’t notice too well anymore. But I’m pretty sure UI elements have been getting bigger. Everywhere.
That makes a lot of sense on touch devices. Fingers are big and bulky, and the widgets they press need to be big and bulky, too. But why are UI elements getting bigger everywhere, even on devices that aren’t touch-enabled, or for applications that are nowhere near being productively used on touch-enabled devices, like IDEs?
There is a reason that everyone is citing ad nauseam these days: Fitts’ law. Its “popular” formulation states that bigger widgets are easier to hit, and the obvious interpretation is that we need to make them bigger. The reason why you can fit about as much code on your 24″ Full HD monitor as you could fit on the old Trinitron you had back when you saw The Matrix at the cinema is SCIENCE!
I want to argue that:
1. This is an incomplete and narrow interpretation of Fitts’ law, and
2. This is an unproductive use of Fitts’ law, because:
   a) it is routinely applied without any numerical analysis, and
   b) it fails to account for other metrics, and consequently it rarely results in good design trade-offs.
Continue reading The Limits and Interpretation Pitfalls of Fitts’ Law
It’s very hard to write an introduction to an article about BLE without sounding a little ridiculous. What are you going to say, that it’s all around us today? It’s been all around us for five years. It’s the #1 choice for IoT applications today, owing in no small part to the fact that you can connect to any IoT device with a phone.
Today, I’m going to “talk” you through one of the most common, but also one of the most illustrative tasks that BLE development involves: writing a custom service (or a “vendor-specific service”, in BLE jargon). We’re going to do it from scratch, and we’ll discuss all the background on why we do things as we do — a lengthy discussion but, I hope, a useful one.
Continue reading Custom BLE Services with the nRF SDK
Modbus is a quaint protocol, and one of my favourites — it’s not very convenient to use, but it’s pretty convenient to implement, and remarkably flexible for something otherwise so opinionated. Its specs are very self-contained and easy to follow.
That being said, like all protocols that are a) from an entirely different era of computing and b) royalty-free, there are a lot of non-conforming devices out there. When you run into one, you quickly start to doubt the specs, your documentation, your code and eventually your sanity. My favourite stumbling block? The endianness of the CRC value.
It’s not so much that nobody gets it right — in fact, it’s the one thing that even non-conforming devices get right, because their developers end up swapping the bytes until they get the order right, otherwise the device can’t talk to anything. It’s just that a lot of people don’t understand why they got it right.
Continue reading The Modbus CRC Endianness Kerfuffle
It’s been nearly 20 years since Rob Pike infamously won the “Best at saying what we’re all thinking” prize with his talk about how systems software research is irrelevant. And, while systems software research is doing slightly (though not glamorously) better than in 2000, it’s still mostly circling the drain.
That being said, it’s not a field that’s devoid of challenges. But I think the main challenge of the next 10-15 years is going to be even less glamorous than we like to think.
I think the main challenge of the next 10-15 years will be to keep existing software and, more importantly, existing programming models, up and running.
Continue reading The Compatibility Struggle Looming Over the Horizon
Or a C++ Engineer. Or a C Engineer. Or a JS Engineer. If a job ad reads anything like that, it’s bad. If it’s representative of a company’s recruitment efforts, it’s very likely that you don’t want to work there.
Continue reading There’s No Such Thing as a Java Engineer
It’s impossible to discuss Electron without the topic of disk space being brought up, and once that happens, you have to survive the talk about how storage is cheap today and space just doesn’t matter anymore.
Here is why I think all that is bogus — for bonus points, without any unironic use of the terms “engineering”, “real programmers” and “web developers”.
One of the easiest ways to “settle” a technical discussion is to resort to an analogy. It took me three minutes of browsing HN (why do I keep doing that to myself?) to find the first one today. When operating systems and computers are discussed, a car analogy is sure to pop up within minutes.
I want to argue that reasoning by analogy is bad when technology is involved. And, more generally, that analogies and “common sense” are bad things to rely on in matters of science and engineering.
Continue reading Reasoning by Analogy is Lazy
I just read that the Max Planck Society discontinued its agreement with Elsevier, and it sent me whirling back to my time in research — the time when I gained an even deeper appreciation for the programming community.
Continue reading Computers, Programming and Free Information
An article about C Portability Lessons for Weird Machines has been making the headlines on the Interwebs lately. It’s full of interesting examples, though none of them are from machines relevant to the last two decades of high-end computing.
I think these lessons are still relevant today, though, and that you should still pay attention to them, and that you should still write “proper” code. Here is why.
Continue reading How Relevant are C Portability Pitfalls?