Why I code – why I code.
Style – why style is important.
Languages – thoughts on some languages.
What code should do – performance and clarity.
What code shouldn’t do – errors and user needs.
Planning for failure – expecting the unexpected.
Development vs Execution – where time is best spent.
Voodoo Programming – deliberate versus magic.

Why I code

I code because it appeals to my obsessive-compulsive nature. I’m attracted to it as a moth is to light; I’ll never achieve nirvana, but I’ll never stop trying, either.
To me, coding is the art of taking an action and breaking it into basic components. Take, for example, a seemingly simple act such as taking a sip of a drink. Something so simple, and yet to write code for it would be extraordinary. The average person processes all of the data required in almost no time or thought. Is it a bottle, a can, or a cup? Does it have a handle? Does it have a top? How, exactly, do I even pick it up? How tight do I hold it so that it doesn’t slip and isn’t crushed? Is it level so I don’t spill it? How about now? How about now? If it’s not level, which direction and amount do I need to rotate it so that it is level?
I also code because, in my opinion, I’m good at it. I have no desire to be in management, as that will take me away from my beloved code. Some learn to code as a step to management or because they heard that it pays well. I code because I love to code.
I am almost entirely self-taught. I wrote my first program on an old IBM keyboard-pad type of computer; I don’t even know what kind it was. I graduated to a Commodore 128 and started using the good ol’ sprite engine and BASIC. I had no storage at the time, so I just left my computer on all the time. If I had to turn it off, I’d print everything out and type it all in again the next time. I used that computer for about eight years before I could afford my first “real” PC.
I did end up taking some college courses for programming and even tried to get a degree (a couple of times), but the Official School Environment doesn’t mesh well with the way I learn. I haven’t given up, however!

Style

Code style, to me, is extremely important. Style is not only useful for maintaining code; it shows the coder’s respect for the code itself. If code doesn’t have style, it doesn’t have organization. If it doesn’t have organization, it’s difficult to understand. If it’s difficult to understand, it’s difficult to maintain.
For my own uses, I break the term “style” up into four distinct areas: organization, format, commenting, and functional flow.
With well-organized code, it is clear at a glance how the various modules relate to each other. A reasonably well-designed project will have intuitive modules whose purposes are clear.
Code format consists of braces, indentation, and general look of the code. Is the code consistent? Is the style clear and intuitive? Is it easy to read and understand?
Good commenting doesn’t mean the code contains lots of comments; it means the comments are of good quality, concise, and placed where necessary.
Akin to format, good functional flow of the code improves readability, reduces errors, and eases maintenance. Function or method size, use of shortcut conditionals, and functional statement formatting are all examples of functional flow.

Languages

A great many people spend a great deal of time debating which language is the “best.” In my limited experience, there is no “best” language, and there never will be. Every language has strengths and weaknesses, and each is appropriate for what it was designed to do.
Assembly language is best for creating small, high-performance code, but the tradeoff is development time and portability. C and C++ are good for making small, fast code, but they also provide enough rope to shoot yourself in the foot. .NET is good for tools and user interfaces, but at the cost of slower operation and less fine control. I could go on about Lua, Python, Java, etc., but what it really comes down to is this: does the language support the features and libraries you require?
Another consideration is the capability of the other developers on hand. If everyone uses C# for development and one person insists on using Assembly, no one else can maintain that code. It’s easy to move up (or down, depending on your point of view) the chain of development languages, but extremely difficult to go the other way: Assembly developers can maintain C, C developers can maintain C++, C++ developers can maintain C#, and so on.
I try to make it a point to not be bound to any one language in particular. When it comes to languages, I have mastered a few, am pretty good at others, and am rusty at the remainder, but I’m always learning.

What code should do

Code needs to work in every instance, under every condition. Note that when I say “work,” I don’t necessarily mean succeed, or that it can never crash. If an error does occur, the program should fail as gracefully as possible. If the software can’t fail gracefully, the only option left is to crash.
Software should operate as efficiently as reasonably possible. Design the code thinking about the worst possible case. When that case becomes reality, the software will respond efficiently. Allocate only the memory necessary. Don’t do what doesn’t need to be done. Binary searches are your friend. Understand how data flows through the system.
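As a small illustration of “binary searches are your friend,” here’s a minimal sketch in C (my own toy example, not from any particular codebase): the worst case is O(log n) comparisons, which is exactly the kind of guarantee that worst-case design asks for.

```c
#include <stddef.h>

/* Classic binary search over a sorted int array.
   Returns the index of `target`, or -1 if not found.
   Even the worst case costs only O(log n) comparisons. */
int binary_search(const int *sorted, size_t count, int target)
{
    size_t lo = 0, hi = count;           /* search the half-open range [lo, hi) */
    while (lo < hi) {
        size_t mid = lo + (hi - lo) / 2; /* written this way to avoid lo + hi overflowing */
        if (sorted[mid] == target)
            return (int)mid;
        else if (sorted[mid] < target)
            lo = mid + 1;
        else
            hi = mid;
    }
    return -1;                           /* not found */
}
```

Compare that to a linear scan: at 10,000,000 sorted records, the scan averages 5,000,000 comparisons while the binary search never needs more than about 24.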

What code shouldn’t do

Software shouldn’t work only if the user uses it correctly. Too many times have I heard the excuse “well, they’re not supposed to do that!” If the user isn’t supposed to do something then they shouldn’t be able to do it in the first place. The developer has full and total control over the software; they can make it do anything they want it to do.
Code should never work on one machine, but not on another identical machine. If the software relies on external files, it should have acceptable defaults or display a useful error to the user. If particular settings are required for the software to work, the software needs to check or ensure that those settings are in place.
“Crashing” isn’t necessarily a bad thing. During development, asserts that halt the program identify the who, what, when, where, why, and how of an error. Use them to find problems during development, but handle the cases correctly so that when the asserts are compiled out, the program can (hopefully) still continue.
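A sketch of that idea in C (the function and its checks are hypothetical): assert loudly in development builds, but still handle the bad case so a release build, compiled with NDEBUG, continues gracefully.

```c
#include <assert.h>
#include <stddef.h>

/* Average of `count` values. In a development build, the assert halts
   right here and reports the file and line -- the who/what/where of
   the error. In a release build (compiled with -DNDEBUG) the assert
   vanishes, so the explicit check below still lets the program
   continue instead of dividing by zero. */
double average(const double *values, int count)
{
    assert(values != NULL && count > 0); /* loud failure during development */

    if (values == NULL || count <= 0)    /* graceful failure in release */
        return 0.0;

    double sum = 0.0;
    for (int i = 0; i < count; ++i)
        sum += values[i];
    return sum / count;
}
```

The duplication is deliberate: the assert documents and enforces the contract while you develop; the `if` is the handling that remains once the asserts are gone.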

Planning for failure

Occasionally, software does fail. Be it from corrupt data, the computer running out of memory, or plain old-fashioned programmer error, code will always have limits.
So what do you do? Plan for the worst! Anything, and I mean anything, can crash unexpectedly. Yes, even addition. The trick is to know how to identify risk and have a strategy to deal with it before it becomes a problem. Typically, this includes checking return values, validating pointers, and predicting and preventing math errors.
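Here’s a toy C function (names are my own invention) that rolls those three habits into one place: validate the pointers, predict the math errors, and return a status value that the caller is expected to check. Yes, even this division can “crash” -- `INT_MIN / -1` overflows.

```c
#include <stddef.h>
#include <limits.h>

/* Divide *numerator by denominator into *out.
   Returns 0 on success, or a negative code describing what
   would have gone wrong -- a return value the caller must check. */
int safe_ratio(const int *numerator, int denominator, int *out)
{
    if (numerator == NULL || out == NULL)            /* validate pointers */
        return -1;
    if (denominator == 0)                            /* predict divide-by-zero */
        return -2;
    if (*numerator == INT_MIN && denominator == -1)  /* predict signed overflow */
        return -3;
    *out = *numerator / denominator;
    return 0;
}
```

None of these checks cost anything measurable, and each one turns an unexplained crash into a specific, reportable condition.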
When the software does fail, display and log an error message with information about the state of the software. More often than not, users will simply go for the “Ok” button when given two choices; it’s a side effect of our prompt-driven lives. If you’re lucky, the user will read the error. If you’re really lucky, the user will remember parts of it. If you’re outstandingly lucky, the user will actually write it down or take a screenshot. If you’ve got a log of the error, however, it should contain enough information to determine what happened and give you a good head start on reproducing and fixing it.

Development vs Execution

An application’s expense of time occurs in two places: development and execution. During development, the focus is on getting a working product into the field in as little time as possible. Execution time is how long the user sits there waiting for the application to finish what it’s doing. There is a fine balance between the time it takes to develop an application and the time it takes the application to do its work.
While I don’t advocate optimizing every aspect of an application, the amount of time the user waits for the application needs to be considered. Assume for a moment that we’ve just written some software that counts widgets. If it were to take 8 hours to optimize a piece of code that would save one user one minute per day, would you do it? What if the software is only used by 10 people? What if it’s used by 10,000,000? In the latter case, if you spent the 8 hours to save each person one minute per day, you’d be paid back (well, theoretically) in about 4 seconds.
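The arithmetic behind that payback claim is worth making explicit; here it is as a tiny C helper, using the hypothetical user counts from the example above:

```c
/* Payback time for an optimization: the hours invested versus the
   minutes it saves all users each day. 8 hours of work is 480 minutes;
   10,000,000 users saving one minute per day save 10,000,000 minutes
   per day, so the investment is repaid 480/10,000,000ths of the way
   into the first day -- about 4 seconds in. */
double payback_seconds(double hours_invested,
                       double minutes_saved_per_user_per_day,
                       double users)
{
    double cost_minutes    = hours_invested * 60.0;
    double saved_per_day   = minutes_saved_per_user_per_day * users;
    double fraction_of_day = cost_minutes / saved_per_day;  /* days to break even */
    return fraction_of_day * 24.0 * 60.0 * 60.0;            /* expressed in seconds */
}
```

Run the same numbers with 10 users and the break-even point stretches to 48 days, which is the whole point of the comparison.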
For games, this occurs in very small segments of time. If the goal is to run at 60 frames per second, each frame has exactly 16.667 milliseconds to do all of its work. If it takes longer than this, the game will lag and feel less responsive.
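The frame budget above is simple arithmetic, but it’s useful to keep it explicit in code; a minimal sketch:

```c
/* Per-frame time budget at a target frame rate: 1000 / fps
   milliseconds. At 60 fps that's 16.667 ms for ALL of the frame's
   work -- input, simulation, rendering. */
double frame_budget_ms(double fps)
{
    return 1000.0 / fps;
}

/* Milliseconds left after a subsystem has spent `spent_ms` of the frame.
   A negative result means the frame is already over budget. */
double budget_remaining_ms(double fps, double spent_ms)
{
    return frame_budget_ms(fps) - spent_ms;
}
```

Budgeting per subsystem this way (say, 4 ms for physics, 8 ms for rendering) makes an overrun show up as a number rather than as a vague feeling that the game is sluggish.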
Of course, there is a flip side to this…
If the user spends the entire day running the software and only gains one minute, is it really worth the time? Realistically, probably not, as you’re only giving the person back about 0.07% of their day. Congratulations, you get to blink again! If the user only runs it for about 10 minutes a day, however, you’re giving them a tenth of that time back. Now we’re getting somewhere!
What it basically comes down to is pure math. Does the time saved have meaningful value? Is that value greater than the value of the time spent optimizing the application? If you’re writing software that affects multiple people (such as processing a queue of people), you’ll need to take all of their time into account.

Voodoo Programming

Ah yes, one of my favorite topics and one of my biggest pet peeves. Voodoo programming is when a developer writes “magic” code: they don’t understand how or why it works, but it works. A typical line is “I don’t know what I did, but it works now!” or, one of my personal favorites, “I just kept trying stuff until it started working.”
My biggest problem with this is that the problem is not solved; only the case that was tested is. Because the fix was found through trial and error, the developer doesn’t understand how or why the code works. A problem cannot be solved until it’s understood; otherwise it’s just foreplay.
The next issue that voodoo programming presents is maintenance. How can code be maintained if it isn’t understood? If the developer doesn’t understand why the code is doing what it’s doing, they certainly can’t make meaningful modifications to it.
An infinite number of monkeys on an infinite number of pianos will eventually produce Mozart, but it’s not a work of art; it’s a work of chance.
