Writing software is hard, particularly when the schedules keep programmers “nose to the grindstone”; every so often, it's important to take a breather and look around at the world to see what we can find. Ironically, what we find can often help us write better software.

Philosophy seems a strange partner for the software craftsman, but a brief dip in the waters of abstract thought often hones skills useful to the craft: writing code, building models, and dealing with idiot users.

While cruising the Internet the other day, I ran across a great article from Eric Sink, who opined that there are three categories of software:

MeWare: The developer creates software. The developer uses it. Nobody else does.

ThemWare: The developer creates software. Other people use it. The developer does not.

UsWare: The developer creates software. Other people use it. The developer uses it too.

(He also points out that there's probably a fourth category, NobodyWare, where nobody, not even the developer, uses it, but we'll reserve that category purely for software created by 13-year-old anime fans.)

In many respects, the heart of the “bad software” problem is the vast amount of ThemWare being created. Software written for the developer by the developer is, in many ways, the easiest to build, because we can see exactly what we want it to do and how it should work. “Dogfooding” your software (from the phrase “to eat your own dog food”) is a tried-and-true approach to creating usable software, but it only works in those situations where the developers have at least a remote clue about the domain.

Since I'm probably not going to be a purchasing agent, university professor, or accountant any time soon, I'll leave the other two categories alone for now and focus on the middle one: ThemWare.

How do we create good ThemWare? That's easy: ask the users. Right?

The On-site Customer

Let's see a quick show of hands: how many of you have had projects where the users were utterly clueless idiots, who clearly had no business touching a keyboard, much less trying to give you requirements for the software you were trying to build?

My guess (a somewhat conservative one, mind you) is that at least three-quarters of you are holding a hand high in the air. (Go ahead and put it down; it's a bit embarrassing to sit in a crowded coffee shop reading with an arm in the air.) The clueless user is ubiquitous, a time-honored staple of the software industry, so much so that it's almost a waste of breath to debate the topic.

That's not my purpose here; I'll be the first to agree that some users are clueless when it comes to software. The users I had to deal with while working on the accounting system for a major university were a great example: some of them were, quite literally, geniuses in their fields, and yet they still exhibited the classic symptoms of technical idiocy.

And why shouldn't they? Most of them are paid to do something other than figure out software: to research, write, audit, serve customers, or manage. In fact, in an era and a society that strongly suggest (if not outright demand) a degree of vertical skill specialization unheard of by our ancestors, it stands to reason that those users would be less competent at using computers than we are, just as I am vastly less competent at making predictions from macroeconomic theory or at isolating protein-coding genes.

This raises an uncomfortable question: if the professors at the university don't consult me about how to do their jobs, why should I consult the users about how to do mine? After all, if I'm the expert, and I'm the one who understands the technology, why should I take the time to get their thoughts and ideas about how it should work, much less take anything they say seriously?

Yet this is precisely what Agile says I need to do.

From Software to Civilization

Let's see where an analogy gets us. Assume we're not talking about software anymore, but about something vastly more important, like creating and governing a society, such as the one I operate in (the USA).

On the surface of things, our form of government (republic or democracy, choose your degree of accuracy as you wish) doesn't make a whole lot of sense. How much experience does the average American (you) have with fiscal and monetary policy? National security? International finance? Welfare benefits?

Let's take it down to a (presumably simpler) domestic, community level: how many of you have any experience with the zoning of land? Specifically, with marking a parcel public, slated for a park, versus marking it private, slated for some kind of private development (housing, perhaps)? Are you familiar with the economic and social impact of each choice on the local community? Are you comfortable marking that parcel next to your house as public park space, increasing the tax burden on you and your neighbors, both by denying the city the taxes the land would generate as commercial or residential space and by creating more maintenance work for the Parks and Recreation Department? And if it's marked private instead, are you comfortable with the lost social benefit, and the lost boost in value to the surrounding property, that a nice park with swings and grass and a volleyball pit would have provided?

Unless your background includes some real estate assessment, not to mention some psychological analysis (to weigh the social benefit of said park), chances are any opinion you might have on the subject is entirely subjective: you can think about the benefits to you as an individual, and maybe to you as a societal unit (such as your family), but that's as far as your analysis goes.

Which, as it turns out, is entirely normal.

Reason, the Slave of Passion

According to Drew Westen, author of The Political Brain, most voters vote with their hearts first: when presented with information on an issue, the prospective voter makes a decision first, then looks for the facts or information that back or defend that decision.

This isn't hyperbole; it's established fact.

In a scientific experiment just prior to the 2004 Presidential election, participants grouped into three categories (strongly Democratic, strongly Republican, or politically neutral) were presented with contradictory statements by the candidates and asked to rate the degree of contradiction on a scale of 1 to 4. Not surprisingly, the strongly Democratic participants, on average, found Senator Kerry's contradictory statements pretty easy to reconcile (a 2) and President Bush's statements entirely not so (a 4), and the strongly Republican participants came in at almost exactly the same scores going the other way. The fascinating part of the experiment was that, with the participants in an fMRI scanner, the scientists could watch, firsthand, the emotional centers of the brain firing during the process, not the logical or reasoning centers.

It gets worse. It turns out we're entirely irrational on a number of subjects, including our own ability to gauge the price of eggs.

According to traditional economic theory, when the price of eggs goes down a little, we buy a little more, because we derive that much more utility (in the economic sense) from each dollar spent on eggs. Conversely, when the price goes up a little, we should buy a little less, because a dollar's worth of eggs now carries that much less utility.

But we don't. When the price of eggs goes up, consumption drops by two and a half times as much as it rises when the price goes down by the same amount. In other words, people feel the loss from the higher price two and a half times more strongly than they feel the savings from the lower price, even though the price difference is identical. Ditto for orange juice.

Translation? We fear the potential for lost value far more than the actual lost value itself.
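If you want to see that asymmetry as arithmetic, here's a minimal sketch of a loss-averse value function in the style of prospect theory. The 2.5 coefficient is simply the egg figure cited above, and the linear shape is my simplifying assumption, not the real model:

```python
# A deliberately crude, linear take on loss aversion. The 2.5 factor is
# the egg/orange-juice figure cited above; real prospect-theory value
# curves are nonlinear, so treat this as a sketch, not the model.

LOSS_AVERSION = 2.5

def felt_value(change_in_dollars):
    """How a price change feels, as opposed to what it is on paper."""
    if change_in_dollars >= 0:
        return change_in_dollars
    return LOSS_AVERSION * change_in_dollars  # losses loom larger

print(felt_value(0.10))   # a dime saved feels like a dime: 0.1
print(felt_value(-0.10))  # a dime lost feels like a quarter: -0.25
```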

Want to see it in action? Consider this: the next time you find yourself weighing some service or good that comes in several variations across a spectrum (cable service, cell phone service, even a new laptop), pay attention to your own thought process. How many of us choose the more expensive flat-rate, unlimited-minutes cell phone plan, even though we'll never actually use all those minutes, simply because we don't want to face the monstrous bill in that one month we go over a cheaper plan's limits? How many of us pay for “unlimited” purely “just in case”?

In economic terms, it's entirely irrational behavior. When the numbers come in, even if we do have that one month where we grossly blow past the limit, overage charge and all, we generally still come out far ahead on the cheaper, metered plan. I can even show you those numbers ahead of time, and chances are you'll still take the more expensive plan, because you (like the rest of us) fear the pain of potential loss more than you feel the pain of current loss.
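Here's a back-of-the-envelope version of those numbers. Every price and usage figure below is invented for illustration, but the shape of the result matches the argument above:

```python
# Comparing a metered phone plan against the "just in case" unlimited
# plan. All prices and usage figures here are made up for illustration.

METERED_BASE = 40.00    # per month, includes 600 minutes
OVERAGE_RATE = 0.45     # per minute over the 600-minute limit
UNLIMITED_BASE = 70.00  # per month, unlimited minutes

# Eleven typical months, plus one ugly month where we blow past the limit.
usage = [450] * 11 + [900]

def metered_month(minutes):
    return METERED_BASE + max(0, minutes - 600) * OVERAGE_RATE

metered_total = sum(metered_month(m) for m in usage)  # 12*$40 + 300*$0.45
unlimited_total = UNLIMITED_BASE * len(usage)         # 12*$70

print(f"metered, ugly month and all: ${metered_total:.2f}")   # $615.00
print(f"unlimited, 'just in case':   ${unlimited_total:.2f}")  # $840.00
```

Even with the one monstrous month in there, the metered plan comes out $225.00 ahead, and yet the unlimited plan still feels like the safer bet.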

You, my friend, are an irrational creature. So am I. So is everybody else.

Why on earth would we put you in charge, then?

Sir Winston's Insight

Winston Churchill is famously quoted as saying, “Democracy is the worst form of government ever invented. Except for all the rest.” Even in a republic, we put technical decisions in the hands of those who are effectively clueless on any subject other than how to win elections. Cultural elitists will even go so far as to point out that certain leaders could only be elected because of the “unwashed masses,” the presumed hordes of easily swayed, under-informed individuals who vote entirely with their selfish hearts, and not from an “enlightened” viewpoint born of education and intelligence.

If you hold that opinion (and secretly, who among us highly-educated types doesn't?), you're in good company. Aristotle and Plato, two of the finest philosophical minds Ancient Greece could produce, were both of the opinion that democracy wasn't a great way to run things. Aristotle understood democracy to be a government run “with a view to the advantage of those who are poor,” and for that reason felt it was a danger to the common good. He specifically contrasted it with a polity wherein “the multitude governs with a view to the common advantage.” Plato went one step further, believing the best form of government to be that of the philosopher-king, a benevolent dictator well educated in the exercise of reason, governing with the interests of the whole in mind.

To Aristotle, the key to the good of both the individual and the state was virtue: the excellence or proper functioning of a thing, whatever that thing may be, based on its purpose. The purpose of a knife is to cut, and thus its virtue lies in its sharpness. The purpose of a government is to rule, and thus its virtue lies not just in ruling well (such that every citizen can live, i.e., has enough food, water, and shelter), but in ruling virtuously (such that every citizen can “live well,” that is, live virtuously themselves).

Roughly speaking, Aristotle described the problem like this: “it is possible for one or a few [citizens] to be outstanding in virtue, but where more are concerned it is difficult for them to be proficient with a view to virtue as a whole.” In other words, citizens will tend to focus on their own needs, and not so much on the needs or virtue of their fellow citizens.

Come to think of it, that sounds like the users I've had to work with on various projects: each was concerned with his or her particular use cases or features, and couldn't care less about the impact those features or requirements might have on the rest of the system as a whole. “So what if relaxing all the relational constraints makes the data harder to work with in the long run? It means I can still fill out this form and submit it, so I'm happy, so do it that way.”

So why do we persist in this idea that users are the right people to include in system development? Given the generalized ignorance of voters… I mean, users… it would seem that our best bet for creating good software would be to follow Aristotle's and Plato's lead and simply put the smartest guy in charge. (That would be one of us, of course.) Right?

This turns out to be a bad idea, for several reasons.

The Wrongness of the Philosopher-Project Lead

The first, most basic problem is that since it's the users who are most often paying for the software (in one way or another), basic capitalist ethos (“The customer is always right.”) says that we need to build what they want.

The second problem centers on our presumption of perfect knowledge: as the technical lead or architect on the project, it's very easy to assume that we know what will happen when the project is complete. If we turn off the relational integrity constraints, we'll get malformed data.

But we can't know this; we're assuming it, based on past experience. No matter how often it's happened before, we can't know that it'll happen again. This time, the company might be training its users to verify their input before clicking Submit, or it might have a downstream operator verifying the data through an out-of-band mechanism, such as a phone call to the customer. That's perhaps not as efficient as simply having the integrity constraints in place, but frankly, that's missing the point: we can't know the future.
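To make the trade-off concrete, here's a minimal sketch using Python's built-in sqlite3 module. The tables are hypothetical, and SQLite happens to leave foreign-key enforcement off by default, which makes both worlds easy to demonstrate:

```python
import sqlite3

# Autocommit mode, so the PRAGMA below actually takes effect
# (SQLite silently ignores it in the middle of an open transaction).
conn = sqlite3.connect(":memory:", isolation_level=None)
conn.executescript("""
    CREATE TABLE customers (id INTEGER PRIMARY KEY, name TEXT NOT NULL);
    CREATE TABLE orders (
        id          INTEGER PRIMARY KEY,
        customer_id INTEGER NOT NULL REFERENCES customers(id)
    );
""")

# Constraints relaxed (SQLite's default): the orphan order sails right in.
# The user fills out the form, clicks Submit, and is happy, for now.
conn.execute("INSERT INTO orders (customer_id) VALUES (999)")

# Constraints enforced: the very same insert is stopped at the door.
conn.execute("PRAGMA foreign_keys = ON")
try:
    conn.execute("INSERT INTO orders (customer_id) VALUES (999)")
except sqlite3.IntegrityError as err:
    print("rejected:", err)  # FOREIGN KEY constraint failed
```

Whether the second behavior is the “right” one depends entirely on who, if anyone, is verifying that data downstream, which is exactly the point.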

Even if we could, somehow, know the future outcome of a decision, it wouldn't help, because we can't know the outcomes of the other decisions. Suppose we do know that relaxing the integrity constraints will cause data loss. Suppose we enforce the integrity constraints based on that knowledge. “Ever in motion, the future is,” said the great philosopher Yoda, and he's right. Now that the integrity constraints are in place, what other consequences follow from them? Are we losing customers because our users have to demand information from them at inopportune times? (“I'm sorry, sir, I know your house is on fire, but I need your work fax number before I can move on to the next step in this call.”)

The Freedom to Frak Up

Most of all, because this is software for the users, these kinds of decisions are their call to make, and that means giving them the freedom to screw it up.

Only when we screw up do we make the mistakes that enable us to learn. Users are the same way: only when given the opportunity to make mistakes will they have the opportunity to learn.

Will they? That's entirely up to them, and unfortunately, it's a fact of life that a good number of them will choose to blame the programmers for the consequences of those decisions rather than themselves. But if we don't acknowledge their desires and give them what they want, we take away any moral responsibility they have for that software, and any accusation they level at us about the software not being what they want becomes absolutely true. Unfulfilled desires, no matter how ridiculous, carry no motivation to change so long as they remain unfulfilled. Only when we give users what they want, and it turns out not to be the everything-they-ever-wanted they expected, do they begin to realize that their decisions have consequences.

In fact, this is true of all three categories of software (MeWare, ThemWare, and UsWare); it's just that the loop between “mistake” and “fix based on that mistake” is that much tighter when we, the developers, are also the users.

Now, if you'll excuse me, I have to contemplate an age-old philosophical question: if a build fails in the forest, and no developer is around to see it, does it still make noise, and if not, why the heck not?

(For those interested in a light introduction to philosophical topics, I highly recommend the Blackwell Philosophy and Pop Culture Series; this essay in particular was inspired by a similar essay in “Battlestar Galactica and Philosophy: Knowledge Here Begins Out There.”)