A widespread business scenario
Omitting the general definition of usability (see the entry on Wikipedia to get an idea), I want to discuss a particular and precise field of software usability: business software. From here on, by "usability" I will mean only the usability of that kind of software.
Discussions of usability rarely consider the different modes of use. Modes of use can be distinguished by frequency (intensive or sporadic) and by object (operating system or application). Your phone's operating system, for example, is used frequently; an occasionally used one could be a borrowed laptop running a different operating system from your usual one. A frequently used application could be an internet search engine or a word processor, while an occasionally used one could be a public administration website visited for a one-off obligation, or desktop software procured specifically to solve a contingent problem. In each of these scenarios, the designers may have measured usability with varying degrees of care. Anyone who sells a commercial operating system has every interest in making it very usable and therefore attractive, and the same utilitarian reasoning applies to applications such as search engines, e-commerce sites, and word processors. A public administration, on the other hand, may not have the same sensitivity when releasing an application for citizens, because no "direct" profit is involved and, above all, because it enjoys a kind of monopoly. From the arguments below I exclude micro companies, which behave like the "consumer" market: given their limited turnover, they cannot afford reckless choices and, where possible, keep an eye on the usability of the software they rely on.
With the exception of operating systems and "office automation" programs, whose vendors take great care of usability for the reasons already mentioned, usability is often penalized in companies with dozens of employees or more. Sometimes this happens because they face monopoly software suppliers; much more often, the monopoly is created internally: a small part of the company (top management) dominates and, in effect, harnesses the rest with choices based on political criteria that often ignore important technical aspects such as usability.
The most obvious circumstance occurs under an external monopoly or quasi-monopoly. If a vendor of very specialized software has no competitors, or only one, then usability is left to the vendor's goodwill or to luck (along with other aspects such as security, but that is another matter). Less obvious is that usability easily becomes secondary when one of two circumstances occurs: 1) the software is commissioned from a company trusted by a corporate executive (in perfect good faith, because trust is a selection criterion that generally works); 2) the software is made in-house, commissioned by a manager from the internal IT department.
But why is there never a revolt from the end users who have to deal with unusable business software? I think there are two main reasons. The first is political and legal: the company mandates certain software, employment is subordinate work and, as such, employees must use that software. And that's that. The second is technical, and it has heavy side effects on how usability is commonly perceived: through the habit of daily, intensive use, one learns to operate even unusable software! What are the side effects of all this?

- Habituation, first of all, which no longer lets us distinguish unusable software from usable software, or appreciate the countless advantages of the latter.
- Technological ignorance, which dulls awareness of a computer's real potential: the end user comes to think there is nothing strange if, to go from point "A" to point "B", the computer follows a tangle of curves instead of a simple straight line.
- Unproductivity, since barely usable software typically produces less than it could.
- Mental stress, because barely usable software demands more attention and concentration from the end user than usable software does, even if habit hides this from the user.
- Physical stress, because barely usable software may require far more movements of the limbs, head, and eyes than necessary.
- More errors, since unusable software leaves to the user calculations it could perform itself.
- Wrong beliefs, since unusable software often follows inconsistent, non-uniform operating schemes, so the user may become convinced that a certain operation "is always done that way" (perhaps wrongly) or that another operation "is unfeasible" (when it can be done, just differently from what one expected).

Many other problems follow logically from these.
That said, in summary: if usability were properly taken care of, a company would benefit both directly, through savings and profits, and indirectly, by preventing processing errors, customer disservices, and negative repercussions on workers' health and mood. A widespread doubt: how much does it cost the programmer to implement usability well? Probably the same amount of time or, at worst, a little more. Usability implies rationalizing processes: if the analysis phase takes longer, the time is recovered, and perhaps gained back, during implementation. But even if it took much longer, and even if 1 extra minute of programmer work bought only 1 tenth of a second less of end-user work, would it be worth it? For the programmer, that minute relates to a single program developed once, so it costs 1 minute, forever; the tenth of a second relates to many days of work by many users, so it becomes 1, 10, 100, 1000 minutes saved, depending on the case, and, net of the higher costs, the profit for the company is always considerable.
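To make the arithmetic concrete, here is a minimal sketch; every figure in it (number of users, operations per day, working days) is purely illustrative:

```python
# One-off cost: 1 extra minute of the programmer's time, paid once.
extra_programmer_minutes = 1

# Recurring gain: each operation becomes 0.1 s faster for every user.
saving_per_operation_s = 0.1
users = 20            # illustrative assumption
ops_per_day = 50      # illustrative assumption
work_days = 220       # illustrative assumption (working days per year)

yearly_saving_minutes = saving_per_operation_s * ops_per_day * users * work_days / 60
# 0.1 * 50 * 20 * 220 / 60 ≈ 367 minutes saved per year for a 1-minute investment
```

Even under these modest assumptions, the one-off minute buys back hundreds of minutes a year, and the gap only widens with more users or more frequent use.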
In the next section I will discuss two aspects of usability, among the many possible, where in business environments I always find boundless room for improvement, with all the attendant advantages: error prevention and speed of use.
Preventing Errors and the Computer-Employee
A colleague of mine told me an emblematic anecdote. Decades ago, a medium-sized company used software to carry out an important and delicate accounting calculation. The software had been developed internally and had to be run on the last working day of each month, so the start-up procedure asked the operator to enter the right date. As a precaution, the end user was asked at least twice to confirm the correctness of this date before moving on. After some time, one of the programmers thought of preventing possible errors, so he added a final step to the procedure: if a wrong date was entered, the message "Caution! The last working day of the month is not the one you entered; it is [xx/xx/xxxx] instead" was shown. In retrospect, the procedure might as well have done it all by itself!
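Indeed, the date the program validated against is trivially computable, so it never needed to be asked for at all. A minimal sketch (it treats only Saturday and Sunday as non-working days; company holidays would need an extra table):

```python
from datetime import date, timedelta
import calendar

def last_working_day(year: int, month: int) -> date:
    """Return the last working day of a month, assuming a Mon-Fri week
    and ignoring holidays (an assumption; real payroll calendars vary)."""
    # Start from the last calendar day of the month...
    day = date(year, month, calendar.monthrange(year, month)[1])
    # ...and walk backwards past Saturday (5) and Sunday (6).
    while day.weekday() >= 5:
        day -= timedelta(days=1)
    return day
```

With this, the "confirm the date twice" dialog disappears entirely: the program already knows the answer it was checking the user against.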
Given a workflow, there are various human-machine interaction schemes with which errors can be handled. The episode I have recounted corresponds to the 1-2-2-2 scheme in the table below, which, however, is not the worst possible.
| ASPECT | 1 (WORST) | 2 | 3 (BEST) |
|---|---|---|---|
| Types of data required | Non-essential data requested | Essential data only | |
| Error prevention | None | Warnings only | Automatic |
| Error detection | Never | Only at the end | Step by step |
| Error correction | Fix everything by hand | Redo all over again | Fix only the wrong data |
In this table I have selected four important aspects of error handling in interfaces and, for each, listed the possible operating modes ordered from worst to best. The worst scheme is therefore 1-1-1-1, which is not infrequently found in companies that keep outdated software because it is considered robust and reliable. This scheme, however, is deleterious in many respects.
- Requesting non-essential data. This indicates laziness or inability on the programmer's part and is the main cause of exposure to errors: non-essential data must never be requested! The last working day of the month, for instance, is already known, whatever it happens to be. If the user must choose among options that are not all mutually compatible, it is advisable to deactivate the incompatible remaining options as selections are made, rather than showing a final error such as "Warning, choice A is incompatible with choices B and C!". Unless it is quiz software, data that is already known should be taken directly from the database, not asked of the user, who can easily enter it wrong (for example, from the name of a locality, if unique, the postal code can be extrapolated uniquely). And so on, with a series of practices that seem obvious to apply, yet on closer inspection turn out to be disregarded quite frequently.
- Not preventing mistakes. This is one of the most frequent causes of wasted time. For example, some systems launch long and complex processing jobs whose only answer is "processing started"; the user must then verify the outcome later through a separate query function. If the user knows the procedure takes twenty minutes on average, incorrect initial parameters translate into an equal waste of time. If the request's parameters include a range of dates (from day xx/xx/xxxx to day yy/yy/yyyy), it would be better to check instantly that the final date is equal to or later than the initial date, and even that both dates exist. If some required parameters can be derived from others (for example, if a user has a maximum number of permitted choices, he can be stopped from exceeding the limit), the computer, not the human, should do the computing, guiding and supporting the user step by step. Otherwise, the result of an initial slip is noticing a failed run after those famous 20 minutes, now lost. A middle way between automatic prevention and no prevention is the communication mode, in which the programmer provides mechanisms based purely on warnings and alerts: for example, printing "attention, insert a correct date in the format xx/xx/xxxx" but then letting the user write anything and submit the request anyway. Of course, all this is not very effective unless combined with automatic prevention.
- Not detecting errors. Imagine a customer phoning a company to learn the outcome of an order placed earlier. The operator asks for the order date; the customer does not remember it exactly but provides a time interval in which it certainly falls. If the operator enters the date range on the computer with the initial and final dates reversed, and this error is never detected, he will end up seeing "no order found!" and will tell the customer that, unfortunately, the order was never received. The customer might then file a complaint or, trusting the answer, place a second order in the belief that the first never went through. This is what can happen when errors are never detected. If they are detected only at the end, the result is a more or less serious waste of time (as in the 20-minute case above). The ideal, of course, is to detect errors step by step in an interactive context, leaving until the end only those checks that require processing the complete data.
- Solving everything by hand. Imagine a process that includes database updates going into error and crashing halfway through. If the programmer has not provided for restoring the initial state, the user is forced to reconstitute it himself or, when returning to the starting point is impossible, even to compose the remaining updates by hand to reach the final state. If, on the contrary, the initial state is restored, the user may still have to re-enter all the parameters he had already entered before the run. The ideal, of course, is to be able to go back to the beginning and find all the previously entered data on screen, correct it where necessary, and restart the process. It seems the most obvious of observations, yet it often happens, for example in "business process management" or "ticketing" contexts, that a "request" for which the user has filled in countless fields is rejected for a single wrong field, and the user is then forced to retype everything from scratch instead of starting from the content already filled in.
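The first point's advice about deactivating incompatible options can be sketched as a small model. The option names and the incompatibility map here are hypothetical:

```python
# Hypothetical incompatibility map: selecting a key makes its values unavailable.
INCOMPATIBLE = {"A": {"B", "C"}, "B": {"A"}, "C": {"A"}}

def disabled_options(selected: set) -> set:
    """Return the options the interface should grey out right now, so that an
    invalid combination can never be submitted (prevention instead of a final error)."""
    disabled = set()
    for choice in selected:
        disabled |= INCOMPATIBLE.get(choice, set())
    return disabled - set(selected)
```

As soon as the user ticks "A", the interface disables "B" and "C" before they can even be clicked; the final "incompatible choices" error message becomes unreachable by construction.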
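The instant check on a date range described in the second point, run before the twenty-minute job is launched rather than after, might look like this (the dd/mm/yyyy format is an assumption):

```python
from datetime import datetime

def check_date_range(start_text: str, end_text: str, fmt: str = "%d/%m/%Y"):
    """Verify that both dates exist and are ordered BEFORE launching the
    long-running processing, returning (ok, message)."""
    try:
        start = datetime.strptime(start_text, fmt)
        end = datetime.strptime(end_text, fmt)
    except ValueError:
        return False, "one of the dates does not exist or is malformed"
    if end < start:
        return False, "the final date precedes the initial date"
    return True, "ok"
```

The check costs microseconds; skipping it costs the user those famous twenty minutes.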
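Keeping the user's input across a failed or rejected submission, as advocated in the last point, can be sketched like this; the field names and the validation rule are invented for illustration:

```python
def submit(fields: dict, validate, execute) -> dict:
    """Run validation and execution, but ALWAYS hand the entered fields back,
    so the user corrects only the wrong data instead of retyping everything."""
    errors = validate(fields)
    if errors:
        return {"status": "rejected", "fields": fields, "errors": errors}
    try:
        execute(fields)
    except Exception as exc:
        # Database rollback is assumed to happen inside execute();
        # either way, the typed input is preserved for resubmission.
        return {"status": "failed", "fields": fields, "errors": [str(exc)]}
    return {"status": "done", "fields": fields}
```

Whatever goes wrong, the form is redrawn pre-filled with `fields`, and the user touches only the offending entry.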
From this picture it emerges that poor error management can cause the most disparate problems, from wasted time and worker wear and tear to damage to customers and economic and reputational losses. It also emerges that the worker is often forced to stand in for the computer, and for a computer (and for a programmer, too) there is no worse defeat and debasement than this.
Correct and careful error management, on the other hand, becomes a strategic choice for a company attentive to speed, profit, reputation, and working climate. Without time wasted on circumstances that should not even exist, the workers' efforts can be directed and concentrated only on the phases of activity that generate value.
Fast users, slow programmers: use the mouse carefully
Computer interfaces have adapted over time to the advent of many new data input devices. Speaking of the most common and general cases (to simplify the discussion I will not address particular issues, such as accessibility), we have moved from purely keyboard input to a mixed "keyboard/mouse" setup, and then to "keyboard/mouse/touchscreen". Leaving aside this last case, in the business environment the most widespread combination remains "keyboard/mouse". Ever since the mouse has existed, however, it has played two roles, one positive and one negative: evolution of the keyboard, and surrogate for it. In the first role, I believe that for software dealing with graphics, CAD, and multimedia, and more generally for rapid visual operations, such as resizing portions of the screen or selecting one or more objects among a myriad of graphical elements, the mouse offers endless advantages. But when it is used as a lifesaver by graphic designers and software programmers who, thanks to it, do not "waste time" designing usable interfaces, it causes unexpected damage.
Several times in the past I have thought of measuring mouse use: the number of clicks, the number of direction changes, the average length of individual movements, and the total length of the path traveled by the pointer on the screen. All this to compare it with keyboard use and to evaluate the alternation between the two: the number of device switches and the average time of use after each switch. I never carried out these measurements, however, because the only significant data they would probably have yielded relate to health aspects outside both my competence and my interest (for example, studying strain and possible damage to limbs and joints over various time frames). On the production-efficiency side, the measurements would be superfluous: to understand whether or not the mouse is appropriate in a given situation, a much more linear approach, schematized below, is sufficient.
- Does the action to be performed require the typing of written texts? Keyboard.
- Does the action to be taken require graphic manipulation on the screen? Mouse.
- Otherwise: both.
The first two points concern the "natural" uses of the keyboard and mouse: trivial uses such as word processing or typing data into text fields for the keyboard, and moving icons, enlarging windows, selecting portions of the screen, or technical drawing for the mouse. The third point is more nuanced; it concerns the borderline, where it is not obvious which device is more pertinent. In the business environment, the user should be induced to use a single device as much as possible (habit will see to the rest) and, weighing pros and cons, that device had better be the keyboard. The user will realize, through frequency of use or a colleague's suggestion, that the keyboard is the better tool, but the interface programmer must not create obstacles in that direction. If I can only click "OK", I will never use the "Enter" key on the keyboard. If, instead of "OK", I find a button labeled "Enter" that I can click, which above all leads me to believe that pressing "Enter" on the keyboard will work too, and that key really does work, then the goal is reached. In the vast majority of cases the keyboard will beat the mouse, but unfortunately it is not always easy to predict in which use cases it is really more efficient. For this reason both devices must be usable indiscriminately, without favoritism. That favoritism is the key point. Here are two examples:
- A list of multiple-choice items. If the programmer displays a list of this kind, allowing choices from the keyboard with "Tab" to scroll down, "Shift+Tab" to scroll up, and "Space" to select, the user will probably still tend to use the mouse. If he adds a "select all" field, in many cases he will speed up the task for both keyboard and mouse. If he adds a search box to filter items, he will make keyboard use less laborious, since the user no longer has to struggle up and down, especially in very long lists. Finally, if he adds a selector bar that blinks slowly on the first element of the list and lets the user scroll through the elements with the "Up" and "Down" keys, the keyboard becomes fully equivalent to the mouse and, with practice, will surpass it in efficiency.
- A date field. If the programmer displays a "date" field which, on focus (whether reached by mouse or keyboard), pops up a calendar the user is forced to scroll through, he will probably find it difficult to program that calendar to be easily usable from the keyboard, and the user will fall back on the mouse. If, on the other hand, the calendar opens only at the user's discretion and the date can be typed directly from the keyboard, he will create a perfect balance and turn the calendar into an extra facility for users who do not already know the date to enter and must search for it by common criteria such as day of the week, holidays, and so on.
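The first example can be sketched as a small interface model. The key bindings themselves live in the UI layer; this class captures only the cursor, filter, and select-all logic, and all names in it are invented for illustration:

```python
class MultiSelectList:
    """Model of a multiple-choice list fully drivable from the keyboard:
    a slowly blinking selector bar (the cursor), arrow-key scrolling,
    Space to toggle, a filter box, and select-all."""

    def __init__(self, items):
        self.items = list(items)
        self.selected = set()
        self.cursor = 0        # index into the currently visible items
        self.filter_text = ""

    def visible(self):
        return [i for i in self.items if self.filter_text.lower() in i.lower()]

    def set_filter(self, text):
        self.filter_text = text
        self.cursor = 0        # keep the bar on a visible element

    def move(self, delta):     # bound to the Up/Down keys
        shown = self.visible()
        if shown:
            self.cursor = max(0, min(len(shown) - 1, self.cursor + delta))

    def toggle(self):          # bound to Space
        shown = self.visible()
        if shown:
            self.selected ^= {shown[self.cursor]}

    def select_all(self):      # bound to the "select all" control
        self.selected |= set(self.visible())
```

Because filtering, scrolling, and toggling are plain state transitions, binding them to keys costs the programmer nothing extra over binding them to clicks.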
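The second example reduces to letting a typed date win whenever it parses, with the calendar opened only as a fallback. A sketch, where the set of accepted typing formats is an assumption:

```python
from datetime import datetime, date

# Typing styles accepted directly from the keyboard (an assumption).
FORMATS = ("%d/%m/%Y", "%d-%m-%Y", "%d%m%Y")

def parse_typed_date(text: str):
    """Return the typed date, or None to signal that the UI should
    open the calendar picker instead of blocking keyboard entry."""
    for fmt in FORMATS:
        try:
            return datetime.strptime(text.strip(), fmt).date()
        except ValueError:
            pass
    return None
```

A user who already knows the date never touches the mouse; one who does not gets the calendar on demand.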
In general, the technique for maximizing keyboard use is to draw attention to the keyboard and to schematize the options (or make them easily searchable and selectable). The interface must be designed upstream for parity of choice between keyboard and mouse; it is not enough, as many attempt, to bolt a set of hotkeys downstream onto an interface designed for the mouse.
Drawing attention to keyboard use means using graphical representation techniques that make screen elements look like parts of a large word-processing document: constant use of elements such as cursors and selection bars, both slowly blinking, to signal implicitly that the keyboard also works, so that the user naturally behaves accordingly.
Schematizing means logically grouping the options and defining logical steps. Typically this takes the form of a tree: it starts from a root (though it makes more sense to think of a trunk) to which the main branches attach: a few selectable options, easily scrolled from the keyboard or reached with single shortcut keys. Behind each option other branches unwind, and so on. The general menus of a program are typically built this way (for example "File" -> "New" -> "Empty document"). When the options cannot be grouped logically, as in long multiple-choice lists, one can use the tricks specified in example no. 1 above (search box, select-all control, a cursor scrolled with the arrow keys, etc.).
These precautions, which:
- favor the use of the very agile keyboard, exploiting users' habits, and
- indirectly force the programmer to rationalize and condense the software's usage patterns and logic as much as possible,
ultimately create speed!