I’m a big fan of simplicity. It’s easier to learn, understand, maintain, and optimize. It’s also easier to write for, debug, and handle errors. As such, I tend to think a lot about how we code in C and C++. I experiment and refactor… a lot. Probably more than is healthy for my limited at home coding time, but that’s partially the point of what I’m doing at home.

So starting my ramble, this is a typical (albeit somewhat contrived) use of a C++ object. An object is declared somewhere, someone passes a pointer to it and it does something:

class MyObject {
  public:
    MyObject();
    ~MyObject();

    void JumpUpAndDown();

  private:
    int m_myInternalBits;
};

void myFunc(MyObject *theThing) {
  if (theThing == nullptr) {
    return;
  }

  theThing->JumpUpAndDown();
}

In my opinion, this approach is more difficult than it needs to be for several reasons:

First, I don’t need to know what MyObject’s private bits are. I don’t care how it stores whatever it is that it stores. If I do, I more than likely have to read the implementation anyway. This isn’t data hiding, it’s strongly-suggested-access-restriction.

Second, every time I go to use theThing, I have to check that it’s a valid pointer. Why? I’m calling a function on a piece of data (strictly speaking, C++ calls the function with the data as a hidden argument, but that’s for another post.) If the pointer isn’t valid, shouldn’t the function be intelligent enough to not use it? Also, the majority of times theThing is compared against null, it isn’t null. And if it is null and that’s an error, it should have been caught at object creation, not object use.

Third, the act of checking is contagious. Whatever calls myFunc likely did some validity checking, so theThing was probably validated before the function was called. The computer is spending cycles and branches doing these checks, but worse, we humans are spending time and brain power handling cases that occur only in extremely rare circumstances.

Finally, it makes code and debugging very complex. If we have to validate data at every step, we have to handle multiple conditions and have multiple ways of doing things. We could also be hiding issues by inserting null checks until the crash goes away. Sure, it doesn’t crash anymore, but that’s because that critical line of code also isn’t running!

Let’s try something completely different. What happens if we only check for pointer validity when it’s critical, at the point where an invalid pointer would actually cause the crash? How difficult is it to write code such that, if the pointer isn’t valid, it doesn’t matter?

Let’s start by converting the above to a more C style OOP approach:

typedef struct MyThing_s* MyThing_t;

MyThing_t MyThing_Create(void);
void MyThing_Destroy(MyThing_t theThing);
void MyThing_JumpUpAndDown(MyThing_t theThing);

void myFunc(MyThing_t theThing) {
  MyThing_JumpUpAndDown(theThing);
}
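For illustration, here’s a minimal sketch of what the implementation file might look like (the field name and the internals are my own assumptions, not part of the original interface). The point is that only the implementation can see inside a MyThing_t, so any validity checking lives in exactly one place, right where a bad pointer would actually crash:

```c
#include <stdlib.h>

/* The full definition is visible only in this .c file. */
struct MyThing_s {
    int myInternalBits;  /* hypothetical internal state */
};

typedef struct MyThing_s* MyThing_t;

MyThing_t MyThing_Create(void) {
    /* Returns NULL on allocation failure; that's the one place
       callers might reasonably care about validity. */
    return calloc(1, sizeof(struct MyThing_s));
}

void MyThing_Destroy(MyThing_t theThing) {
    free(theThing);  /* free(NULL) is a safe no-op */
}

void MyThing_JumpUpAndDown(MyThing_t theThing) {
    /* The one null check lives here, where the dereference happens. */
    if (theThing == NULL) {
        return;
    }
    theThing->myInternalBits++;
}
```

With this split, callers like myFunc above can pass along whatever they were handed; only the code that actually dereferences the pointer needs to care.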

Interestingly, this has a number of immediate impacts.

First, it’s half the size. Hopefully, smaller code means easier maintenance.

Second, changes to the interface or implementation have different impacts. If the interface changes (add/remove/modify a function), both styles must recompile everything that references the interface. But if internal data changes, only the original example forces a full recompile, since it has published its workings to the world (even though callers aren’t supposed to use them.) This potentially improves iteration time, especially in large, complicated code bases that take a while to compile or have tangled include trees.

Third, validity checks are no longer necessary which reduces complexity and eliminates checking contagion. Because a MyThing_t is only ever accessed internally, we actually have no way to validate it. In fact, I’d argue that we don’t have a right to validate it as that crosses the “need to know” boundary. We could check to see if it’s a null pointer, but that’s about it. The caller has no idea what’s in the object, so they can’t possibly know if it’s valid or not, so why check it?

Here’s a more realistic example. This is a snippet of UI code in my engine:

  void add_child(window_t parent, window_t child)
  {
    if (instance(parent))
    {
      if (instance(child))
      {
        make_orphan(child->ui, child);
        add_to_head(parent->ui, child);

        child->parent = parent;

        send_message(child, WMSG_EVENT_PARENT_CHANGED, 0, (uintptr_t)parent);
        send_message(parent, WMSG_EVENT_GAIN_CHILD, 0, (uintptr_t)child);

        release(child);
      }
      release(parent);
    }
  }

The only checks in this code are the reference counting. If the child didn’t have a previous UI (it’s sort of like a window group), make_orphan instantly succeeds. After all, it technically did exactly what it advertised: the window is an orphan. Besides, the only time that’s possible is the first time the window is created. If the window ever changes parents or moves to another UI hierarchy, the UI would be valid.
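I won’t claim this is how the engine’s instance/release pair actually works, but a plausible sketch of that reference-counting idiom looks like this (the struct fields and single-threaded counter are my assumptions; a real engine would likely use atomics). The idea is that instance tries to take a strong reference and reports whether the object was alive, so the take doubles as the only validity check:

```c
#include <stdbool.h>
#include <stddef.h>

/* Hypothetical window object; only ref_count matters for the sketch. */
typedef struct window_s {
    int ref_count;   /* 0 means the window is dead */
    /* ... other window state ... */
} *window_t;

/* Take a reference if the object is still alive. The return value
   is the validity check, so callers never inspect the object itself. */
static bool instance(window_t w) {
    if (w == NULL || w->ref_count <= 0) {
        return false;
    }
    w->ref_count++;
    return true;
}

/* Drop a reference; destruction would happen when the count hits zero. */
static void release(window_t w) {
    if (w != NULL && w->ref_count > 0) {
        w->ref_count--;
    }
}
```

Note how add_child’s shape falls out of this: the body only runs when both takes succeed, and a dead or null window simply causes the whole operation to quietly do nothing.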

Here’s more code, which renders a sphere. This is the lowest level rendering interface there is and, as such, is the most verbose:

    data_cluster_t cmd = data_cluster_create();
    render::push_fill_mode_to_cluster(cmd, FM_SOLID);
    render::push_alpha_blend_enable_to_cluster(cmd, true);
    render::push_alpha_blend_mode_to_cluster(cmd, ABM_ONE_MINUS_SOURCE);

    render::push_vertex_buffer_to_cluster(cmd, sphere_vertex_buffer);
    render::push_index_buffer_to_cluster(cmd, sphere_index_buffer);

    render::set_shader_param_mat44_to_cluster(cmd, modelview_index, modelview);
    render::set_shader_param_vec4_to_cluster(cmd, color_index, color);

    const uint32_t index_count = render::index_buffer::length(sphere_index_buffer);
    render::draw_object_to_cluster(cmd, PM_TRIANGLE, index_count);

    render::pop_index_buffer_to_cluster(cmd);
    render::pop_vertex_buffer_to_cluster(cmd);

    render::pop_alpha_blend_mode_to_cluster(cmd);
    render::pop_alpha_blend_enable_to_cluster(cmd);
    render::pop_fill_mode_to_cluster(cmd);
    render::insert_cluster(cmd);
    data_cluster_release(cmd);

My engine sends all render commands to a command buffer that gets executed later. Individual commands are not guaranteed order so they are pushed to a “cluster” which can then be inserted as an atomic unit into the command buffer. This guarantees state atomicity, allowing me to “render” from all threads at the same time without having to worry about state changes.
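Under those guarantees, insert_cluster only needs a single synchronization point. Here’s a simplified stand-in for the idea (my own sketch, using a plain mutex and a fixed array rather than whatever scheme the engine actually uses): each thread builds its cluster with no locking at all, and the one contended operation is inserting the finished cluster into the shared buffer.

```c
#include <pthread.h>

#define MAX_CLUSTERS 256

/* Simplified stand-in for the engine's data_cluster_t: a cluster of
   render commands built up on one thread without any locking. */
typedef struct {
    int command_count;
    /* ... encoded command data would live here ... */
} data_cluster_t;

static data_cluster_t g_command_buffer[MAX_CLUSTERS];
static int g_cluster_count = 0;
static pthread_mutex_t g_buffer_lock = PTHREAD_MUTEX_INITIALIZER;

/* The only synchronized step: the whole cluster lands at once, so no
   other thread can interleave its state changes with this one's. */
void insert_cluster(const data_cluster_t *cmd) {
    pthread_mutex_lock(&g_buffer_lock);
    if (g_cluster_count < MAX_CLUSTERS) {
        g_command_buffer[g_cluster_count++] = *cmd;
    }
    pthread_mutex_unlock(&g_buffer_lock);
}
```

Because the lock is held only for a copy, threads spend almost all of their time recording commands privately, which is what makes “rendering” from every thread at once practical.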

After I moved to this style, I was able to remove about 30% of the code in my user interface code and even simplified a number of algorithms.

Of course, this style of coding isn’t appropriate for all use cases. For example, if code is written in such a way that a system needs to do something whether or not it has a valid MyThing_t, you would need some kind of validity check (or modify the code so it only runs that step when it actually does have a valid object?) It’s also definitely not intended to be a topic where I expect people to jump up and down and go “Wow! That’s the best thing evar! Everyone Should Convert!” because, frankly, it may be a terrible idea. Who knows, right?

And, finally, in what feels like a litany of caveats and commas, this isn’t meant to be about Defensive or Offensive Programming, or a critique of error codes versus exception handling. I’m more than happy to discuss those, but I didn’t want to address them here; this was about simplifying code.
