Monday, November 8, 2010

Performance Metric

In programming, people assemble instructions that tell a computer how to do something. Complex tasks can take years, and it's not always obvious how far along the work is. Managers of programmers therefore love to take measurements. Most commonly, they measure the number of lines of code that have been written. More code means more work done, and therefore more progress, right?
Well, not exactly. You want the program to do a certain job, and an efficient program does it with less code. (A computer runs a certain number of instructions per second, so fewer instructions means it finishes faster, see?) Many programmers love to lampoon their bosses' insistence on measuring progress by lines of code, most famously Bill Atkinson, who reported "-2000" lines on his weekly form after rewriting a chunk of Apple's QuickDraw to be smaller and faster, promptly throwing his managers' metrics into a tizzy. The story ends with his managers never asking him for that particular measurement again.
So, what are some better measurements?
* Features vs. Bugs
Good code offers a number of features that make the software attractive. It also has few bugs: places where the code doesn't work properly or produces unexpected results. The more features and the fewer bugs found in the program, the more progress has been made.
* WTFs per minute
Have someone who isn't the original programmer read the code. The less confused they are by it, the better. ("WTF" being an abbreviation for a particular something a confused or dismayed person would say.) Now, admittedly, some of the most genius programming is still immensely confusing, but code that is hard to read or understand is harder still to maintain. Maintenance is necessary, because assumptions that were valid last year can be invalid today. Tax laws change every year. The year 2000 problem emerged from 1960s- and 1970s-era programs that stored years as two digits, on the assumption the code would be replaced long before 2000. (Much of it wasn't fixed until practically the last possible second.) Architecture changes over time too. My computer today is 64-bit: the machine word doubled from 32 bits, which holds whole numbers from 0 to 4,294,967,295, to 64 bits, which holds 0 to 18,446,744,073,709,551,615, so each one takes twice as much memory. Old code feels that change. The Fast Inverse Square Root (a confusing but genius algorithm from 1990s game engines) reads a float's bits through a "long" integer, which was 32 bits back then; on many 64-bit systems today a "long" is 64 bits, so the bits no longer line up and the trick breaks until the types are re-aligned (see the sketch after this list).
* Customer Satisfaction
Most code is written for people who aren't programmers or mathematicians, to help them get their work done. The author of a simulation suite says that good software is like a butler: it solves your problems, cleans up your messes, and then escapes your notice as it prepares to help you again. Good software is fun and helpful to use, and the tester stays absorbed and doesn't complain. Bad software leaves the tester frustrated and screaming about a thousand different things. Ideally, that carries over to the eventual end user, who buys the software and is very satisfied with it.
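
To make that maintenance point concrete, here is a minimal sketch in C of the Fast Inverse Square Root, adjusted so it keeps working on a 64-bit machine. The algorithm and its magic constant are the widely published Quake III version; the memcpy-based bit copying and the little demonstration program around it are my own framing for this post, not the historical code.

#include <stdint.h>
#include <stdio.h>
#include <string.h>

/* The 1990s original punned the float's bits through a "long":
 *     i = *(long *)&y;   // assumes long is 32 bits!
 * On most 64-bit Unix-like systems, long is now 64 bits, so that
 * read drags in four extra bytes and the magic stops working.
 * A fixed-width int32_t (copied with memcpy, which also avoids
 * strict-aliasing trouble) restores the original assumption. */
static float fast_inv_sqrt(float y)
{
    float half = 0.5f * y;
    int32_t i;
    memcpy(&i, &y, sizeof i);          /* reinterpret the float's bits as a 32-bit int */
    i = 0x5f3759df - (i >> 1);         /* the famous magic constant                    */
    memcpy(&y, &i, sizeof y);
    return y * (1.5f - half * y * y);  /* one Newton-Raphson refinement step           */
}

int main(void)
{
    printf("sizeof(long) here: %zu bytes\n", sizeof(long));
    printf("fast_inv_sqrt(4.0f) = %f (exact answer: 0.5)\n",
           (double)fast_inv_sqrt(4.0f));
    return 0;
}

On a 32-bit build, sizeof(long) is 4 and the historical cast happened to work; on a typical 64-bit Linux or Mac build it is 8, which is exactly the mismatch described above.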

Can you, my readers, name a better way to measure the development of something as abstract as software?
