The KnownSpace Datamanager
"Halfway To Anywhere"


Computing In Crisis

Software production is beginning to reach the limits of what's possible within the present well-exploited paradigm of how computers must work. Now that our hardware is surpassing, or has already surpassed, most of our more mundane needs for computers, we must turn our attention largely to software. It's time to reevaluate how we build, test, deploy, interface, and verify software.

Much of what we presently do is a result of running on automatic. Hardware's exponentially increasing competence has brought us to a place that no one (except perhaps Turing) foresaw fifty years ago, and now we're feeling the early birth pains of a new way of approaching how computers should and could be used.

Our thinking about how computers must work stems from a philosophical, mathematical, and engineering world that's now over fifty years old. Yet we seem not to be reexamining what computers are for and how they can work, perhaps because we've all been trained within that paradigm and its problems have not yet become insurmountable. Absent a clear and present crisis, it's easiest to simply continue teaching and thinking within the same cycle of attitudes and concerns that has persisted for the past fifty or so years.

Take Unix, for example. Its Stone-Age characteristics are partly a result of the low levels of funding in academia (relative to the business and military worlds), but it's still indicative of a certain attitude toward computers and users that most computer scientists share. The attitude is this: the computer must never be allowed to change its behavior unless that change has been thoroughly preplanned for it by the programmer. Further, the user's time doesn't much matter, so he or she simply has to acclimate to the computer's (or rather, the programmer's) wishes rather than the other way around.

This attitude leads to a world where users are continually frustrated in their use of computers---except of course for the lucky few who have minds just like ours and so can master the intricacies of working with a computer just as we do.

It's easy to see where that attitude comes from. The first computers were enormously expensive and extremely finicky. Only a certain type of puzzle-solving mind had even a hope of interacting with such a machine, and those minds absolutely loved all the intricacies of making it do anything useful at all. Because the computer was so enormously expensive, its time was judged far more important than the programmer's or the user's time. So from the start, the emphasis was placed on using the computer as efficiently as possible. But things are different today, when we can buy a quite powerful computer for less than the price of a good vacuum cleaner.

The attitude that programmers and users don't much matter compared to the computer was only reinforced by the theoretical scaffolding computer scientists erected when trying to understand what the computer was capable of. The early mathematicians, physicists, and engineers working to clarify the machine's capabilities made certain assumptions about the primitiveness of the machine, assumptions we still hold today even though today's machines are far more sophisticated.

To make predictability and analyzability as simple as possible, they enforced the rule that the machine's behavior from run to run must always be exactly the same. Thus, from the absolute beginning, the machine was disallowed from adapting its behavior, no matter how much experience it accumulated.

For example, consider compilers. To a compiler, each program submitted for translation is the very first program it's ever seen---even if it has previously compiled millions of programs. No history is kept, so there is no possibility of the compiler detecting patterns of usage in the failures it sees. Thus, the user can only attempt to learn the rules that the compiler lives by as exactly as possible and then diligently and continuously abide by them. In other words, the user must become a machine to best use the machine.
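To make the contrast concrete, here is a small sketch, in Java and purely hypothetical, of what a compiler driver that remembered its past diagnostics might look like. No real compiler works this way; the history file, the sample diagnostics, and the frequency threshold are all invented for illustration.

    import java.io.IOException;
    import java.nio.file.*;
    import java.util.*;

    // Hypothetical sketch only: a compiler driver that accumulates the
    // diagnostics it has reported across runs and warns the user about
    // the mistakes they make over and over.
    public class RememberingCompilerDriver {
        private static final Path HISTORY = Paths.get(".compile-history");

        public static void main(String[] args) throws IOException {
            List<String> pastErrors = Files.exists(HISTORY)
                ? new ArrayList<>(Files.readAllLines(HISTORY))
                : new ArrayList<>();

            // Stand-ins for the diagnostics the real compiler just produced.
            List<String> currentErrors = List.of("missing semicolon", "unclosed brace");

            // Count how often each kind of error has occurred across all runs.
            Map<String, Long> frequency = new HashMap<>();
            for (String e : pastErrors) frequency.merge(e, 1L, Long::sum);
            for (String e : currentErrors) frequency.merge(e, 1L, Long::sum);

            // Surface the recurring mistakes before the usual error dump.
            frequency.forEach((error, count) -> {
                if (count > 2) {
                    System.out.println("You've hit \"" + error + "\" " + count
                        + " times now; watch for it.");
                }
            });

            // Persist the accumulated history for the next run.
            pastErrors.addAll(currentErrors);
            Files.write(HISTORY, pastErrors);
        }
    }

Nothing about this changes the language being compiled; it only gives the translator a memory, which is exactly what today's compilers are, by convention, denied.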

The programmer too must become a machine to use the machine. Not only do programmers have to put up with more of the machine's idiosyncrasies, but they are closer to the bare machine, and so are usually dealing only with programs produced solely for other programmers. Their programming environments, their editors, their languages: none of them makes much allowance for simple human weakness. Perhaps as a result, programmers then pass that machine-centered attitude straight on to their users.

This extreme bias toward the machine and away from the human must now be corrected. Various waves of change have moved through software development over the decades, but the pressure to change has never been this intense, because until now the hardware simply wasn't up to what we wanted to ask of it. Of course we're still a long way away from smart computers, but we can do a lot to make computers easier to use without having full-blown artificial intelligence.

To make a start on this change of viewpoint requires rethinking pretty much every part of computer science and engineering. There is much that computers do well, but almost all of it belongs to a class of problems that might be called puzzles: problems small and well-defined enough that, before the program ever runs, the programmer can imagine every possibility that can arise and can figure out exactly what to have the computer do when each of those things happens.

It's a problem domain where extreme efficiency in execution is very important; so important, in fact, that it overrides every other concern. And that places the bulk of the effort involved in human-computer interaction far toward the human end. There will always be such problems, of course. But that isn't all that computer science must now address. The ratio of those puzzle-like problems to other, more open-ended problems is shrinking rapidly, because the hardware has improved so dramatically in the past ten to twenty years that it's now possible for us to start trying to solve those new problems. So of course we're trying. But we're failing---because we're trying to solve them within the same set of constraints that we evolved to help us handle the old style of problems.

The newer problems aren't the same as the puzzle-like problems computer scientists are trained to handle, but their solutions are nevertheless still required to fit within the same old constraints. For example, no program can run forever (except for operating systems, network managers, and other programs that must be immortal), so no program can work on improving its performance while the user isn't actually using it. No program can adapt its behavior over time. No program can vary its response time, so no program can prioritize its internal schedule of tasks to better handle emergencies when they arise---every task is treated with exactly the same urgency (unless of course the programmer was clever enough to program in an option). No program can be deliberately non-deterministic. And so on. And while all these properties are desirable for puzzle-like problems, they may not be the right ones to help us solve the more complex, more open-ended, less well-defined problems we increasingly find ourselves facing today.
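To be concrete about just one of those missing freedoms, here is a tiny sketch, again in Java and entirely hypothetical, of what it would mean for a program to prioritize its internal schedule of tasks: late-arriving but urgent work jumps ahead of everything already queued. The Task type and its urgency scale are invented solely for illustration.

    import java.util.Comparator;
    import java.util.PriorityQueue;

    // Hypothetical sketch only: a task queue that orders pending work by
    // urgency instead of treating every task with exactly the same priority.
    public class UrgencyScheduler {
        record Task(String name, int urgency) {}  // higher urgency runs first

        private final PriorityQueue<Task> queue =
            new PriorityQueue<>(Comparator.comparingInt(Task::urgency).reversed());

        public void submit(Task task) { queue.add(task); }

        public void runAll() {
            while (!queue.isEmpty()) {
                System.out.println("running: " + queue.poll().name());
            }
        }

        public static void main(String[] args) {
            UrgencyScheduler scheduler = new UrgencyScheduler();
            scheduler.submit(new Task("reindex old mail", 1));
            scheduler.submit(new Task("redraw the screen", 5));
            // An emergency arrives last but still runs first.
            scheduler.submit(new Task("save the user's work before shutdown", 10));
            scheduler.runAll();
        }
    }

The point is not that such flexibility is hard to write; it's that our habits, tools, and theory rarely ask for it, so most programs never get even this much.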