Classnotes for The Fluency User-Interface Builder Document



Gregory J. E. Rawlins

Department of Computer Science, Indiana University, Bloomington, Indiana 47405, USA.

Class Notes:

Fluency wouldn't just be impossible, it literally wouldn't be thinkable, without the four principles continually enunciated in class:

  • Hide as much as possible
  • Push intelligence down as far as possible
  • Make the abstract concrete
  • Put a box around the complex, variant, or expensive.

Here's a list of the sixteen design patterns you'll need to know to fully understand this document. Twelve we've covered in class: Observer, Command, Proxy, Adapter, Composite, MacroCommand, Factory Method, Abstract Factory, MVC, Singleton, Null Object, and Mediator. Four more we haven't: Strategy, Prototype, Bridge, and Interpreter.

You need to recall what a 'Dictionary' is from data structures. It's any implementation of a Collection interface that specifies that it can add(), delete(), and find() elements. Normally it can also test for empty() and report its size().
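As a reminder, here's one way such a Dictionary might be sketched in Java. The method names add(), delete(), find(), empty(), and size() follow the notes; the backing structure (a hash map here) is an implementation detail, and the names are illustrative, not Fluency's actual classes:

```java
import java.util.HashMap;
import java.util.Map;

// A minimal Dictionary interface as described above: add, delete, and find
// elements, plus the usual empty/size queries.
interface Dictionary<K, V> {
    void add(K key, V value);
    void delete(K key);
    V find(K key);          // returns null if the key is absent
    boolean empty();
    int size();
}

// One possible implementation, backed by a java.util.HashMap.
class HashDictionary<K, V> implements Dictionary<K, V> {
    private final Map<K, V> map = new HashMap<>();
    public void add(K key, V value) { map.put(key, value); }
    public void delete(K key)       { map.remove(key); }
    public V find(K key)            { return map.get(key); }
    public boolean empty()          { return map.isEmpty(); }
    public int size()               { return map.size(); }
}
```

Any implementation satisfying this interface---hash table, balanced tree, even a plain list---counts as a Dictionary for our purposes.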

The section on problems in the current implementation is necessary to help guide this term's project development, but it may also sound like I'm blaming past students. I'm not. All of Fluency's faults are mine. I'm only now developing the skills needed to run a major software research project in an academic setting with students who start with variable knowledge and ability, who have no experience with modern tools and design patterns, with working in teams, or with large projects, and who have limited time to do real work since, until this term, it used to take most of the term just to give them the basic skills needed for modern software development. This is the first term that I've figured out a way to intermix both tools and real team experience early enough in the term that project development can begin in earnest much earlier than in years past.

Further, like most programmers, I have no special skill in user-interface design, although I can at least recognize when something is wrong. Figuring out the right thing to do about it, though, is a different matter, especially given the state of open-source visual tools. Even simple ones, like graph browsers or layout algorithms, are still hard to find in the commercial world, and almost non-existent in the open-source world. More often than not, past students had to roll their own, unable to leverage earlier open-source graphical projects, partly because those projects were so limited, so badly designed, or so badly written.

The credit for Fluency as it stands now thus goes to a few outstanding students over the years who each worked long and hard on Fluency near the end of their respective terms, and far beyond. Fluency's user interface, then, wasn't as planned as its architecture was---it simply happened, as various programmers fought their weak and buggy open-source graphical tools to get something working up on screen so that you can critique and extend it today.

Programmers are poor user-interface developers for several related reasons.

  1. Programmers make decisions when programming that ease their current programming task, thus, almost invariably, dumping the core of the problem back in the author's lap. The 'ok' button in the current Fluency's properties palette is a typical example. Why is it there? Why can't the same meaning be conveyed by the author doing the natural thing and simply moving away after typing something? It's there because making selection modal is easy for programmers (an action is modal if the author can't do anything else until completing the action). Modal selection also reduces complexity, since it cuts the number of possible paths through the program, thus reducing user freedom but increasing programmer controllability. An ok button is frequently only a slightly disguised modal dialog box. Such things are also easier to program than a more thoughtful, author-friendly interaction. Finally, it's also there because it's what programmers (and users) see every day. It's familiar. Unless someone has used one of the few well-designed user interfaces out there, no one pays any attention to the extra effort involved in a dialog box, but each little frustration adds up until the user interface can become unusable.
  2. Programmers often confuse the possible with the probable (Cooper2003). Just because something is possible (some obscure and rarely used Action on some Widget, say), programmers will give it equal billing with even the most popular and normal actions on that Widget, thus cluttering every menu with every possible option, and thus making normal choices hard to make---or even find. It's very common to see menu options listed alphabetically, for example. That's good from a programmer's point of view because:
    • as control freaks, they love having all options at all times, and
    • any new option can just be thrown in and the sorting algorithm will place it appropriately.
    The result is bloated, hard-to-use menus. Programmers, like most human beings, do such things because it's what they're used to doing, and because it's the easiest thing to do---simply throw everything in at all times and let the user figure it out.
  3. Programmers are often too wrapped up in their current little piece of functionality---typically on the backend, where authors can't see---so when it's done they just throw up a menu, or add a menu option, or tell the author to right-click while pressing shift and holding an arm at an angle, or whatever, without thought to what cognitive burden that choice adds to the user interface as a whole. Through these forces, the resulting user interface inevitably becomes more cluttered, more scattered, more complex, more incoherent, and thus less and less valuable. In each case, the author is always left holding the complexity bag.

There are also more social, political, and economic reasons that we have the user interfaces we have today. First, computer applications are still growing because computer penetration in the general population is still growing. It's not worth a company's while to develop a near-perfect user interface because possessing such a user interface doesn't guarantee that the company will seize or hold on to its market share. Companies like Microsoft hold on to the share they do not because they produce brilliant software but because they own the desktop and have enough money to buy products once such products finally manage to serve a new population segment tolerably well. A startup company is more motivated to produce good software, but may not have the resources. And all companies are in a race where every year or so the population of users doubles. Thus, whatever products they offer, the population of people they can offer them to will double in a year. Why spend a lot on continually improving what you have when you can throw something together for the new population, which will be at least as large as, and much less computer literate than, the old one? A good user interface can still be a net plus, so companies are motivated to produce them, but with our present system of producing them the production costs are far too high to make them economically attractive the way they are in a static or slowly changing engineering field, like, say, washing machines or cars or airplane cockpits. There, interface problems have been well-studied for generations. Further, in many consumer devices, not only have interfaces been well-studied because they've been around a long time, but physical dangers, like electrical shock, brake failure, airplane crashes, and so on, have placed forces on the developers to make sure everything is done right. Few such forces yet operate on computer software, even as it comes to dominate many markets today.

Design Pattern References:

Abstract Factory:
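As a quick illustration of the pattern: an Abstract Factory is a factory interface whose implementations each produce a consistent family of related objects. The WidgetFactory and widget types below are hypothetical, chosen only to sketch the idea, and are not Fluency's actual classes:

```java
// Abstract Factory: client code asks one factory object for every member
// of a family of related products, so the family always matches.
// (All names here are hypothetical, for illustration only.)
interface Button    { String paint(); }
interface Scrollbar { String paint(); }

interface WidgetFactory {
    Button makeButton();
    Scrollbar makeScrollbar();
}

// Each concrete factory produces one consistent widget family.
class PlainFactory implements WidgetFactory {
    public Button makeButton()       { return () -> "plain button"; }
    public Scrollbar makeScrollbar() { return () -> "plain scrollbar"; }
}

class FancyFactory implements WidgetFactory {
    public Button makeButton()       { return () -> "fancy button"; }
    public Scrollbar makeScrollbar() { return () -> "fancy scrollbar"; }
}
```

Client code that holds only a WidgetFactory reference can be switched from one look-and-feel to another by swapping a single object.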

Null Object:
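As a quick illustration of the pattern: instead of scattering null checks through client code, a Null Object supplies a do-nothing implementation that can be used unconditionally. The Logger interface and names below are hypothetical, for illustration only:

```java
// Null Object pattern: a harmless stand-in that satisfies an interface
// by deliberately doing nothing. (Names are hypothetical.)
interface Logger {
    void log(String message);
}

class ConsoleLogger implements Logger {
    public void log(String message) { System.out.println(message); }
}

class NullLogger implements Logger {
    public void log(String message) { /* deliberately do nothing */ }
}

class Component {
    private final Logger logger;
    // Never store null: callers who want no logging pass a NullLogger,
    // so doubleIt() needn't test logger != null before every call.
    Component(Logger logger) { this.logger = logger; }
    int doubleIt(int x) {
        logger.log("doubling " + x);
        return 2 * x;
    }
}
```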

Pipes are related to the Pipes and Filters design pattern of Buschmann et al., 1996, which is intended for largely linear, non-visual data processing tasks, as in the Unix command line, or for various parallel processing tasks:
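A minimal sketch of the Pipes and Filters style in Java: each filter transforms its input, and the pipe simply chains filters in order, much as a shell chains commands with `|`. The Pipeline class below is illustrative, not Fluency's actual pipe machinery:

```java
import java.util.function.Function;

// Pipes and Filters: a linear chain of independent processing stages.
// Each filter is a Function from data to data; the Pipeline composes
// them in order, like a Unix shell pipeline.
class Pipeline<T> {
    private Function<T, T> chain = Function.identity();

    // Append a filter stage to the end of the pipeline.
    Pipeline<T> then(Function<T, T> filter) {
        chain = chain.andThen(filter);
        return this;
    }

    // Push one piece of data through every stage in order.
    T run(T input) {
        return chain.apply(input);
    }
}
```

A pipeline of, say, `String::trim` followed by `String::toLowerCase` maps `"  Hello "` to `"hello"`; because each filter knows nothing about its neighbors, stages can be added, removed, or reordered freely.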