For an industry that prides itself on its analytical ability and abstract mental processing, we often don't do a great job applying that mental skill to the most important element of the programmer's tool chest - ourselves.

Over the last year or so, I've been writing a lot about management and related topics. But two incidents in the last several months have finally pushed me over the edge into getting back to a topic that is closer to the individual developer. The first is the (as of this writing) ongoing saga around the San Bernardino shooter's iPhone, and the US court order demanding that Apple comply with the FBI's request for assistance in breaking into the device to retrieve information that the FBI deems useful and/or necessary to the investigation. The second is the ongoing saga around the Volkswagen emissions “hack”: software that changed the engine's performance characteristics based on information fed to it from sensors elsewhere in the car, so that when the vehicle was being tested for noxious emissions, it operated in a less-performant but eco-friendly way, and otherwise pumped out far more toxins than allowed by various national environmental standards (including both Germany's and the US's).

In each of these cases, software lies at the heart of the story. But my concern is this: To what degree are software developers responsible for their actions in each case; to what degree will we be held responsible by the legal framework in which we live (be that in the US or elsewhere); and to what degree should we hold ourselves responsible?

Although the debate is easier to have if readers possess a basic understanding of each situation, each one can be abstracted away from its details pretty easily.

In the VW case, software engineers were instructed by management - and as of right now, it's not clear exactly where those orders came from, but popular perception holds that they definitely “came from above,” rather than originating with the engineers themselves - to deliberately change the operation of the vehicle under particular conditions. Whether the engineers understood that those conditions were the ones most likely to indicate the vehicle was being tested for emissions standards is not entirely clear to me, but this is not new ground for us as an industry.

Relational databases have been doing this for years: A number of benchmark suites, the TPC benchmarks (TPC-A through TPC-E), are routinely run against the various RDBMS vendors' products to determine relative performance numbers. And, as the story goes, database vendors cheat outrageously inside their databases in order to optimize specifically for those use cases, thereby distorting the results in order to post faster (and therefore presumably better) numbers than their competitors.
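To make the pattern concrete, here is a deliberately simplified, entirely hypothetical sketch of what benchmark special-casing can look like: the system recognizes a query shape it knows appears in a benchmark suite and silently takes a pre-tuned path. None of the names here correspond to any real product; this is an illustration of the technique, not an accusation about any particular vendor's code.

```java
import java.util.Map;

// Hypothetical sketch of benchmark special-casing; not taken from any real product.
class QueryExecutor {
    // A query shape the vendor knows shows up in the benchmark workload.
    private static final String BENCHMARK_SIGNATURE = "UPDATE accounts SET balance";

    // Pre-tuned plans used only when the benchmark is detected.
    private final Map<String, String> fastPathPlans;

    QueryExecutor(Map<String, String> fastPathPlans) {
        this.fastPathPlans = fastPathPlans;
    }

    String planFor(String sql) {
        // The ethically dubious part: behavior changes based on *who is asking*,
        // not on what is best for general workloads.
        if (sql.startsWith(BENCHMARK_SIGNATURE)) {
            return fastPathPlans.getOrDefault(sql, "special-cased plan");
        }
        return "general-purpose plan";
    }
}
```

The uncomfortable observation is how little code this takes: one string comparison separates "optimization" from "cheating," and nothing in the code itself announces which one it is.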

This raises an interesting ethical question: If a software developer is asked to implement a feature that either leads to, or is itself, unethical, immoral, or illegal behavior, is that software developer liable for the damages caused?

Ethics, Not Legalities

Just to be clear, the argument I want to have here is based around the morals and/or ethics of the issue, not its legalities. The reasons for this are manifold: To start, the classic IANAL (I am not a lawyer) disclaimer is in full force here, and I defer any and all questions of the law to my friend and colleague, John V. Petersen, who also writes in this publication and may be so moved as to take up this question in a future issue. Second, though, is the simple fact that laws change with the times - the question of slavery in the US being but one extreme example - and so what holds as legal today may not be so tomorrow, or vice versa. It is fully safe, I believe, to assume that the laws around software and liability will change as the role, nature, and responsibility of software in our daily lives expand.

This is not without precedent - in the earliest days of the Industrial Revolution, no laws around machine safety, even for food production, were in place. As the impact of industrial processing grew, however, and abuses grew (particularly against those who worked the machines - read Upton Sinclair for more graphic details), legal frameworks were established and liabilities enforced. One might argue that the pendulum has swung too much the other way by this point, and another might argue the reverse and that still more protections are necessary, but either way, the fact remains that laws change.

“Just Following Orders”

The argument goes like this: Programmers must build what they are told to build. Most of us are, after all, performing “work for hire,” and in the same vein that we garner no benefit (in the form of profits, royalties, license payments, or other money) from having written the software, neither can we be held responsible for its effects.

In many ways, this is the same argument offered at the Nuremberg Trials by various members of the German Army and SS in defense of their crimes against humanity. In a military hierarchy, if a superior officer gives you an order, you're compelled to obey it or face court-martial. That means, of course, that the officer who gave the order holds the responsibility for its effects; as the individual who merely carried it out, you are immune from its consequences.

The Last Responsible Moment

But the “following orders” argument holds significant flaws: One is that, except for those programmers who are part of the armed forces, we aren't in a strict hierarchy, and there are no legal consequences for disobeying a superior. If instructed to implement a feature that you find unethical, immoral, or illegal, you are certainly capable of refusing to do so, and many in the industry would applaud you for it. You'd be unemployed, granted, but you wouldn't be the one responsible - legally or otherwise - for said feature. Somebody else who can't really afford to be unemployed for any length of time might build it instead, but that's their problem, not yours.

Perhaps the right way to think about this is in the same way we think about technical decisions - that the individual who has the last responsible moment before an incident occurs is the one to whom we assign responsibility. Thus, since you are the one who wrote the code, you are the one responsible for it, irrespective of what your boss or management chain demanded.

This is not without precedent - in the US, many states have a similar concept in place around the application of fault in a driving accident. If one driver had an opportunity to avoid the accident and didn't take it (for whatever reason), that driver is at fault regardless of the actions or behavior of the other driver. This ensures that I can't use your speeding as an excuse to get out of my drunk driving, or vice versa; instead, it remains entirely focused on the exact moment of the crash.

And this makes a certain amount of sense, except that it's too hard, in most scenarios, to determine where that last responsible moment was. The code crashed because a method was invoked on a NULL reference - certainly, this is the fault of the developer who wrote that method call, right? Unless, of course, that object reference is an argument to this function, and it's clearly documented that it cannot be NULL; then it's the fault of the individual who wrote the code that passed it in, right? Unless, of course, that reference is actually set by the object/relational mapper when pulling data out of the database, and that relationship is supposed to have at least one member and it doesn't, in which case it's the fault of whoever last wrote to that table...and so on. And even if you believe in the most paranoid “defensive programming,” as soon as you move into multithreaded/parallel programming, references can change even in the small amount of time between the null-reference check and its usage. We could do this for days.
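That last, multithreaded case is worth seeing in code. In this minimal Java sketch (all names are hypothetical), the gap between checking a shared field and using it means even a "defensive" null check doesn't guarantee safety; reading the field once into a local variable closes that particular gap, because a local can't be changed by another thread.

```java
class Greeter {
    // Shared, mutable state: another thread may null this out at any time.
    private volatile String name;

    Greeter(String name) { this.name = name; }

    void clear() { name = null; }

    // Unsafe: 'name' can become null between the check and the length() call,
    // so this can still throw NullPointerException under concurrency.
    int unsafeLength() {
        if (name != null) {
            return name.length();
        }
        return 0;
    }

    // Safer: snapshot the field into a local first; the local cannot change
    // out from under us, so the check and the use see the same value.
    int safeLength() {
        String snapshot = name;
        return (snapshot != null) ? snapshot.length() : 0;
    }
}
```

Even here, "safer" only means the crash moves elsewhere: the snapshot may be stale by the time it's used, which is exactly the point about how slippery the "last responsible moment" becomes once concurrency enters the picture.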

This all presumes, of course, that developers can even find the bug in question - in some cases, that's assuming a lot.

Regardless of where you fall on this question, it's not going away any time soon. In fact, as software becomes more ubiquitous (is that even possible?), it stands to reason that software will, before too long, be directly responsible for more and more aspects of our lives. It already directs when we begin our day (alarm clocks), when we engage with others (schedulers), and for many, when we go to sleep (personal health monitors like the Fitbit or Band). And, if the reports from Google are any indication, before too long it will be driving our cars.

Which, of course, brings this question all the way back to a much more practical, real question: If two self-driving cars get into an accident, with whom, exactly, does the fault lie for insurance purposes? If a self-driving car drives off the road for some reason, and there is a provable software bug that caused it, are the developers who worked on that code now liable for the damages to the car, the surrounding terrain, or even the families of those who were injured or killed?

Because - and let's get this clear right from the start - if we don't think about this, it's certain that others will when the time comes. And we may not like their answers.