Idle hands…

“Idle hands are the devil’s workshop”, as the saying goes. In other words, idleness brings about professional regression.

Each of us is granted a finite amount of time. If we choose to squander that time, a non-renewable resource is lost. Meanwhile, the competition has made a small, incremental improvement that compounds daily. Over time, that adds up to a great advantage.

The choice is ours: small, daily improvements or basically nothing. Idleness is too expensive and must be eliminated. 

Exception Handling of Memory

This bit of code caught my eye:

throw std::string("something went wrong");

The throw is allocating a string, which is caught somewhere up the call chain. If the string is allocated on the stack, how can a handler up the call chain use that memory after the stack has been popped clean? Conversely, if the allocation is in dynamic memory, where is its ‘delete’?

Fortunately, I work with someone who knows quite a bit about this topic. It turns out I know a lot less about programming than I thought! The ensuing lecture encouraged me to dig deeper (there are a lot of references on exception implementation). Wowsa, there is far more going on here than meets the eye.
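For the curious, here is a minimal sketch of how standard C++ resolves the puzzle (the function name and message are mine, not from the snippet above): the thrown object is copied, or moved, into storage owned by the exception-handling runtime before the stack unwinds, and the runtime destroys it when the handler that caught it finishes. No dangling stack memory, no explicit delete.

#include <iostream>
#include <string>

// risky() throws a std::string by value. The runtime copies (or moves) the
// thrown object into storage that it owns, not the soon-to-be-unwound stack
// frame, and not heap memory that we must delete ourselves.
void risky()
{
    throw std::string("something went wrong");
}

int main()
{
    try {
        risky();
    } catch (const std::string& msg) {  // catch by const reference: no extra copy
        std::cerr << "caught: " << msg << '\n';
    }  // the runtime destroys the exception object when the handler exits
    return 0;
}

(How that runtime storage is obtained is implementation-defined, which is exactly the rabbit hole the references lead into.)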

On The Job Training – Survival Tip

On the Job Training (OJT) is very popular. It’s likely you will end up living with the mess produced by OJT. It’s not you; it’s the code. So what can be done?

Create an independent project that captures the minimal essence of that mess. Use it to experiment, test, and revert (commit it to a source control system somewhere). It will be much faster to build & change than the actual project. And the learning will be far better!

A major problem with OJT approaches is that it’s easy to make bad/incorrect/poor choices. As implementation moves along, the errors compound. Your reference implementation will serve as the baseline for unraveling the compounded mistakes. If the project has had one OJT person after another in this role, it can quickly become a quagmire.

Fix Now & Improve Your Life

If you’ve had the experience of picking up software that has been poorly managed, a decision will confront you: fix it, or ignore (work around) it?

Better to fix it now.

The other option rests on the assumption that the poor code “isn’t that bad” and that there are no consequences to stepping around the potholes and crafting yet more patches on patches. Until you’re surprised to find that several months have gone by and you’re still pulling your hair out dealing with what is now really bad stuff.

If the fix is implemented immediately, the payback recurs every day from then on. The old junk is no longer there, needing to be read, deciphered into some bizarre former context, then re-deciphered into a new context… Instead, you simply focus on the problem at hand in a productive manner.

(I’m not advocating wholesale re-writes; rather, small incremental changes such as eliminating duplicated code, unused strings, etc. These small bits of clutter interfere with the bigger picture, and clearing them out keeps your skills sharp.)
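For illustration only (the function names and message below are invented, not from any real code base), this is the scale of change I mean: a string literal that had been pasted into two call sites becomes a single named constant, so the next wording change happens in exactly one place.

#include <iostream>
#include <string>

// One named constant instead of the same literal duplicated at each call site.
namespace {
const std::string kRetryMessage = "connection lost: retrying";
}

void logMessage(const std::string& msg) { std::cout << "log: " << msg << '\n'; }
void alertUser(const std::string& msg)  { std::cout << "alert: " << msg << '\n'; }

int main()
{
    logMessage(kRetryMessage);
    alertUser(kRetryMessage);
    return 0;
}

Small, safe, done in minutes, and one less bit of clutter the next reader has to decipher.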

In my experience, you will be more productive by simply fixing the poor stuff immediately.

Yoking Oxen (Active Mentoring)

I was discussing my disdain for “On the Job Training” (OJT) with my son, who has had a different experience. He harkened back to days past when young oxen were trained by yoking them with trained oxen. This worked very well.

One particular attribute of yoking is that the training is constant, in real time, and 100% in the real world. One might think of this as pair programming!

Mentoring may be a form of yoking, but if it’s done in a hand-off’ish manner, there is a lot to lose. Or not much to gain. My son’s experience is more along the lines of traditional oxen yoking, which he found favorable.

Antifragile Software Culture

Nassim Nicholas Taleb’s “Antifragile” book has a very powerful observation applicable to software development:

Sensitivity to harm from volatility is tractable, more so than forecasting the event that would cause the harm.

Taleb, Nassim Nicholas (2012-11-27). Antifragile: Things That Gain from Disorder (Incerto) (Kindle Locations 339-340). Random House Publishing Group. Kindle Edition.

There are many software development measures that indicate the quality/wellness/adaptability/correctness of code: cohesion, coupling, bug density, number of unit tests, code coverage, etc. Many software developers, managers & executives look at these measures as academically interesting, but of little or no business value.

Taleb’s statement directly supports the value of software measures.

For argument’s sake, take code coverage: it measures how complete the testing is. If code coverage is not measured, or is close to zero, clearly any change is very high risk. As code coverage approaches a meaningful number (say, 80%), it’s easy to see that the volatility of the system is much more under control than in the zero case.

Let’s take the opposite approach: little or no code coverage, with reliance on hunches or guesses to predict the event that breaks the system. In other words, the culture is completely reactive to bugs, customer complaints and so on.

In today’s world, software attacks are a totally new burden on the development team. This non-trivial burden is the last thing a team needs, and it is exactly the case where naive hunches about the vulnerability of software are 100% wrong. We have no idea where the next attack is coming from, and we do not have the time or resources to fix every weakness. (But they must be fixed.)

The need for software development metrics is higher than ever.

Building Software in Model-T Factories

Ford Motor Company frequently updates its factories for each new model. In fact, a factory will completely cease production whilst updates take place – sometimes for weeks at a time. These shutdowns are planned, scheduled, detailed, negotiated, etc., as they are very costly events.

Not performing these shutdowns is even more costly, as Ford would quickly go out of business.

Software people tend to have a different approach: we think updates can be done in place while delivering business value. Shutdowns are not what we do.

So then, how well do we do it? Do you really think you can refactor your object-oriented database interface into a NoSQL database via a series of 2-week iterations? What percentage of the team will work on the refactoring? How will integration of the final refactored branch back into the mainline be handled? How are other new features & defects being integrated into the NoSQL branch? How are you handling turnover during this (better to pretend it won’t happen, huh?) refactoring effort? What percentage of your team has the intellectual bandwidth to manage this much concurrent change? Do you have it? Does your team really think they understand the depth & breadth of the change? How do they have the expertise to affirm that?

Let’s consider a formal “shutdown” to do some really heavy lifting.

If the team stopped delivering new features for 4 or 6 weeks, would the customers really care? Even notice? Are your customers so demanding that frequent updates are a must-have? Have you ever discussed this with them? Would the team perform better with a simple definition of success (the refactoring)? Would bugs be easier to triage & resolve? Wouldn’t it be great for developers, testers, documentation, etc. to be able to have a single, focused discussion? Would progress be easier to track & evaluate? Would your confidence be higher at each step along the way? Would everyone be glad to have a definitive start/stop?

Perhaps shutdowns have some consideration after all.
