Lowest Common Denominator Users

Historically, computer hardware development was held back by lowest-common-denominator hardware and software. Mainframe screens were designed as character-based displays with a single colour. If a VDU designer came on the scene and looked at the technology of those screens, he might have realised that it was relatively old; TVs can display ‘moving’ images! In colour! But what would be the point of adding colour capabilities to a screen used on a system that could not produce colour output?

Of course, IBM has more than one employee (I hesitate to use IBM as an example, but it is easy to imagine it was them), and one of them was probably an expert on the signalling that happened between the mainframe and the dumb terminals (let’s call him the interface expert). Now, the interface expert and the VDU designer could get together (and maybe a software expert too) and realise that it would be theoretically possible to make the mainframe talk to different types of dumb terminal. Perhaps they figured out a way to make the mono terminals receive colour signals but ignore all the signals they did not understand… or, perhaps more likely, they created a mechanism through which the mainframe would recognise the type of terminal and adjust the information sent to it (no-colour information, or with-colour information).
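
A minimal sketch of that second idea, in Python – everything here (the terminal labels, the (character, colour) screen format, the downgrade step) is invented purely to show the shape of the mechanism, not any real mainframe protocol:

    # Purely illustrative: the host asks the terminal what it is, then tailors
    # the data stream to what that terminal can actually show. The terminal
    # names and the (character, colour) 'screen' format are invented here.
    COLOUR_TERMINALS = {"colour-terminal"}

    def strip_colour(cells):
        # Mono terminals get the characters only; colour attributes are dropped.
        return [char for char, _colour in cells]

    def prepare_screen(terminal_type, cells):
        if terminal_type in COLOUR_TERMINALS:
            return cells                # send colour information as-is
        return strip_colour(cells)      # downgrade for terminals that cannot use it

    screen = [("H", "green"), ("i", "white")]
    print(prepare_screen("mono-terminal", screen))    # ['H', 'i']
    print(prepare_screen("colour-terminal", screen))  # [('H', 'green'), ('i', 'white')]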

That example – of having some type of configuration that described or implied the capabilities of the output device – marks the beginning of the important concept of ‘device drivers’. I don’t mean to imply that this was the very first type of ‘device driver’, but it is a suitable example for the narrative.

Device Drivers

Device drivers give modern computers (and, as we have seen, some not-so-modern ones) the ability to abstract away – ‘de-reference’ – the exact hardware in use, and instead use (for large swathes of devices) an intermediate format for communication. In practice, even these device drivers have themselves been abstracted away by larger-scale frameworks or interfaces.

To put it in context, let’s examine some more history. Way back in DOS-land, application writers had to write a separate printer routine for each type of printer they wanted to support. Each make and model of printer used a different ‘language’ to tell it what to print, so these printer control routines had to speak all of those different languages… and in addition they had to understand the output of the source application. Over the years, the separation between the application and the printer has increased; the key difference is that now, with Windows (and undoubtedly other operating systems), the operating system provides a broad concept called the ‘graphics device interface’ (to use the Windows terminology).

The GDI is a very broadly specified framework… for programmatic input it tries to be as flexible as possible, to allow applications and the OS to display many different things, and for output it tries to allow for the fact that there are thousands of possible output devices with different capabilities. So now, in Windows-land, application writers can almost totally ignore the specific printers that might be used with the application, and can instead concentrate on building a single, capable interface to the GDI. Hardware manufacturers (e.g. the printer manufacturers), however, have been forced to create the device drivers for their printers (but at least, because the interface is generic, only one driver is required per device per operating system across the whole world!).
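
As a rough Python sketch of the shape of that arrangement – not of the real GDI API; the driver classes, method names and printer ‘languages’ below are simplified inventions – the application writes to one generic interface and each driver translates for its own device:

    from abc import ABC, abstractmethod

    class PrinterDriver(ABC):
        @abstractmethod
        def draw_text(self, text: str) -> bytes:
            """Translate a generic drawing request into this device's own language."""

    class EscPDriver(PrinterDriver):
        # Speaks an ESC/P-style printer language (sketched, not complete).
        def draw_text(self, text):
            return b"\x1b@" + text.encode("ascii")

    class PostScriptDriver(PrinterDriver):
        # Speaks a PostScript-style language (sketched, not complete).
        def draw_text(self, text):
            return f"({text}) show".encode("ascii")

    def application_print(driver: PrinterDriver, text: str) -> bytes:
        # The application only ever talks to the generic interface; it has
        # no idea which printer is on the other end.
        return driver.draw_text(text)

    print(application_print(EscPDriver(), "Hello"))
    print(application_print(PostScriptDriver(), "Hello"))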

The cost to the average application developer of these enhancements is that there may be some additional complexity in talking to the GDI – even if they only want to use the most basic ‘text-only’ facilities of the device. This effort is more than compensated for, however, given that the historic alternative was to code specific output routines for every type of printer you might wish to support!

There is a cost to the system in the extra CPU time spent in ‘generic’ code before output is optimised for the specific device, but with the general increase in hardware capability, that cost is effectively hidden.

The User

Like hardware devices, users have differing capabilities. Command-line interfaces (CLIs) did an excellent job of keeping many people away from computers, through (if not outright complexity) a lot of typing, non-standard interfaces between programs, and, perhaps most importantly, it never being at all clear what you are actually able to do! For example, Unix implementations come with thousands of handy little programs, but in a CLI they are not readily apparent to the user. In contrast, a GUI application may have a little picture labelled “Word Processor”, and hey – most people will be able to guess what that might help them do.

GUI operating systems could themselves be considered ‘input device drivers’ – they provide a typically easy-to-use interface for the human user… and of course we are all used to the flexibility of such interfaces, and the many ways you can achieve a result. The difference, when compared with output device drivers, is the embedded (and expected) flexibility of the GUI… which is not something most hardware can tolerate. Nevertheless, the GUI operating system does ‘de-reference’ the inner complexity of the computer and replace it with a ‘standard’ interface.

The issue is that many software development companies are realising that their ‘average’ user knows less than ever before! They know less, yet their expectations of results are probably nearly as high as those of the people (family, friends, colleagues) who have been using computers for years. The trend has therefore been to change applications and operating systems to account for those users… and also for those users who may say something like:

I don’t care about files or windows or printers, I just want to be able to write a letter to my bank manager!

What? You don’t care about the very things it would be helpful to know in order to do what you want to do? And the problem is, the ‘idiot pound’ is worth £billions! OK, so ‘idiot’ is very unfair there… but the truth is, people who have no interest in learning still want to do the things they’ve been told computers can help them do. And management and marketing departments like money, so they want to appeal to these people: “Now even easier to use!” So how do they make a product easier?

Let’s consider an example of a simplification that, as an ‘advanced’ user, I consider a frustration, but was probably intended to make Windows a little easier for novices to understand: the default setting that hides file extensions in Windows Explorer. Now, it is true that you can enable the display of those file extensions in Windows Explorer (so I am happy enough)… right until the point that any friend / family member / colleague wants me to help them understand something – especially by phone. They almost certainly won’t have enabled this option (they don’t even know it’s there). So, instead of being able to ask them “what type of file is that?”, I might have to ask them to enable the option, or use other techniques to identify what type a file is. And how much effort would it really have been to explain to that user that all those little ‘.doc’s and ‘.jpg’s indicated what type each file was? And perhaps also that a file’s type indicates what sort of thing is in it and/or what it is likely to be used for, and maybe even what application would be used to open it. That’s not so confusing, is it? [Note 1]
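
For illustration only, here is that whole ‘lesson’ as a toy Python sketch – the extension table is invented (Windows keeps its real associations in the registry), but the principle really is this small:

    # A toy mapping from extension to 'what the file is' and 'what will
    # probably open it'. The table below is invented for illustration.
    import os

    FILE_TYPES = {
        ".doc": ("a word-processing document", "a word processor"),
        ".jpg": ("a picture", "an image viewer"),
        ".txt": ("plain text", "a text editor"),
    }

    def describe(filename):
        extension = os.path.splitext(filename)[1].lower()
        kind, opener = FILE_TYPES.get(extension, ("an unknown type of file", "who knows what"))
        return f"{filename} is {kind}, usually opened by {opener}"

    print(describe("letter to bank manager.doc"))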

Instead, this particular option hides something of value, creates an obstacle for me every time I set up a new copy of Windows (I have to set several options in Explorer), and has much worse effects in the long run for people doing support.

The problem with the ‘perceived difficult stuff’ is that it tends to be there already, baked into the system. Think file extensions are too complicated? Redesigning the whole file system to somehow avoid them is too complex (and anyway, file extensions are easier in many ways than the alternatives), so they just hide them!

Now of course, the problem with hiding stuff is that we have to provide people with the tools to ‘unhide’ it. With our device driver concept, the extra work (e.g. complying with a more complex interface) is handled by extra CPU clock cycles, which are getting quicker and quicker. With the current (typical) model of user interface changes, the extra cycles are instead forced onto the user, and often repetitively.

Another problem, discussed in a recent entry (Stopping the Proceeding For Idiocy), is that of modal dialog boxes (or alerts). In this case, the software designer has taken the view:

We’re not sure if the user knows what is about to happen, so let’s make sure that they know, by stopping them in their tracks and telling them!

Again, the cost is now loaded onto the user, all users, because someone was worried some of the users might not understand.

I also commented on some of the new changes in Word 2007, and how they were ostensibly there for convenience, but could easily result in more work for certain types of task. In this case, the interface had been changed to put a selection of ‘default presets’ for margin sizes in front of the dialog where one could gain absolute control over the margins. So again, another obstacle had been created for the user who wants to reach those details. To some extent, this is just ‘hiding’ again, in a slightly different form.

Meanwhile, some application developers are retracing the steps taken by Philippe Kahn of Borland fame when he started Starfish Software (see the Wikipedia SideKick entry). The broad stroke of his idea was simple: keep it simple. It is a concept we are seeing a renaissance of right now in certain quarters of the web.

But we are not talking about simple applications designed from the ground up to be limited in functionality and high in simplicity. We are talking about already-complex applications and operating systems that need to retain their power whilst becoming easier to use for all users.

We have to find a way to fulfil the marketing team’s needs for a ‘simple’ application, but we also have to find a way to move those resulting ‘user cycles’ back into ‘CPU clock cycles’.

Notes

  1. It is true that there are a lot of such little pieces of information that are necessary to learn in order to use Windows (or other similar GUIs). However, many of them are so useful that the user will probably encounter them tens of times within their first few uses of the software (resizing windows, for example). Others, like the file-extension example, only need to be learned when dealing with files specifically – and possibly only within the context of Windows Explorer (file types are not hidden from users in Save dialogs, for example).
