Counterintuitive Facts in Software Development

Robert Ruzitschka
5 min read · May 10, 2023


[Image: what Bing Image Creator thinks is a fitting illustration for this article]

Software development is a complex activity. It is, as Dave Farley said, a “discipline of discovery and learning”.
Still, for many years the software development process was handled more like manufacturing, and this is the reason why so many fallacies and misconceptions about the process persist.
Looking at it from the other side, there are many statements describing important characteristics and facets of software development that sound counterintuitive, and I will try to collect some of them here.
They are not easy to grasp, and in many cases they are not even specific to software development, because dealing with complex systems in general is something we humans are not very good at.

Human beings are usually inclined to try to make sense of the world they experience. It is very difficult for us to accept that we don’t understand what is going on, so we construct an explanation one way or another. This constructivist approach often leads to simplified theories about cause and effect, and in turn to wrong conclusions about how to structure our ways of working.

An obvious disclaimer: this list is not exhaustive. I will start with four examples and am happy to get your input for additional ones. As always, context is king, so nothing written below is absolutely true or false. It always depends on the specific situation, and we must avoid falling into the trap of oversimplification!

1) Adding more people to the team won’t make it faster

This is the wisdom exemplified in Fred Brooks’ all-time classic “The Mythical Man-Month”. Work can’t be distributed evenly at every stage of product development. Adding more people increases the cost of coordination in a way that actually slows the team down and decreases the potential outcome. In other words: throwing more people at a delayed project won’t work!
Late joiners need on-boarding, which takes capacity from the existing team, and slicing the problem into more independent pieces is not trivial.
If the deadline is fixed, reduce scope.
Overall, there is a sweet spot for team size; the typically quoted numbers are seven plus or minus two. Above that, close collaboration gets difficult and coordination effort eats up any capacity gains.
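To make the coordination argument concrete: the number of potential communication paths between n people grows quadratically, n(n-1)/2. A minimal sketch (plain Python, the team sizes are purely illustrative):

```python
def communication_paths(team_size: int) -> int:
    """Distinct pairwise communication channels in a team of the given size."""
    return team_size * (team_size - 1) // 2

for size in (3, 5, 7, 9, 15, 30):
    print(f"{size:2d} people -> {communication_paths(size):3d} channels")

# 3 people  ->   3 channels
# 7 people  ->  21 channels
# 15 people -> 105 channels
# 30 people -> 435 channels
```

Going from 7 to 15 people quintuples the channels while only doubling the headcount, and that gap is where the coordination overhead comes from.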

2) Releasing less often increases risk

Never touch a running system, right? If you don’t change anything, you can’t break anything, so the risk of production issues is minimised. Sounds logical, but it doesn’t stand the test of reality. A system that never requires changes does not exist. Even if no functionality is added (which, according to Lehman’s First Law of Software Evolution, will make the system less satisfactory for its users), the pressure to change the system will increase. Why? Because software does not exist in a vacuum but in the real world, and even if the software is kept unchanged, the world around it changes. Hardware fails, security holes need to be patched, libraries stop being maintained. So the capability to change the system must be preserved at all times. But how do you know that your change process works if you don’t actually exercise it? Changing a system after a long period of time is actually quite a risky endeavour. If you do it often, you know the process, you have tested it, and you can be confident it works in an emergency, which is exactly when that confidence is needed most.
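A deliberately crude toy model can illustrate the trade-off. Assume (and these numbers are pure assumptions, not measurements) that every change has a small independent chance of breaking production, and that the cost of a failed release grows with its batch size, because diagnosing a failure among many simultaneous changes is harder:

```python
# Toy model: every constant below is an illustrative assumption.
P_FAULT_PER_CHANGE = 0.02   # assumed probability that one change breaks production
CHANGES_PER_YEAR = 120      # assumed yearly volume of changes

def expected_failure_cost(releases_per_year: int) -> float:
    batch = CHANGES_PER_YEAR / releases_per_year
    # probability that a release contains at least one faulty change
    p_release_fails = 1 - (1 - P_FAULT_PER_CHANGE) ** batch
    # diagnosis/rollback cost assumed proportional to batch size
    cost_per_failure = batch
    return releases_per_year * p_release_fails * cost_per_failure

for releases in (1, 4, 12, 52):
    print(f"{releases:2d} releases/year -> expected failure cost {expected_failure_cost(releases):6.1f}")

# 1 release/year   -> ~109 (arbitrary cost units)
# 52 releases/year -> ~5
```

Under these assumptions the big-bang release is an order of magnitude more expensive over the year, even before accounting for the rustiness of a rarely exercised release process.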

3) Bigger releases don’t reduce testing effort

A statement I heard quite often in more traditional organisations: “We don’t want to do more releases as it will multiply our testing efforts. We don’t have capacity for this.”
This statement again shows a misunderstanding of the nature of changes. In most cases, changes to a system interact, at least potentially, whether intended or not. If we want to assure the quality of a release, we must not only test the individual changes but also their interactions. The number of interactions grows non-linearly with the number of changes. Ultimately, the effort to test one big release is higher than the effort to test multiple smaller releases. If this is not the case, then the testing is not comprehensive and interactions between changes are being neglected.
The consequence is reduced quality and a higher risk of production problems. Of course, the effort to change the system must also be minimised: if rolling out a release is a very cumbersome process, it will be difficult to deliver more often. Nevertheless, the non-linear scaling of testing effort remains.
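The combinatorics are easy to sketch. With n changes in a release there are n(n-1)/2 potential pairwise interactions to worry about; splitting the same changes across several smaller releases shrinks that number drastically (illustrative Python, the change counts are made up):

```python
from math import comb

def pairwise_interactions(changes: int) -> int:
    """Potential pairwise interactions among changes shipped together."""
    return comb(changes, 2)

# One big release containing 12 changes:
print(pairwise_interactions(12))       # 66 interactions to consider

# The same 12 changes split into four releases of 3 each:
print(4 * pairwise_interactions(3))    # 12 interactions in total
```

Changes in different releases can of course still interact, but each small release is verified against a known-good baseline, which is exactly what keeps the interaction space manageable.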

4) Pair programming is the most efficient way to produce high quality software

I think it is undisputed that code reviews provide value. Besides giving (hopefully well-intentioned) feedback to a developer, they are also a tremendous opportunity to learn and share experience. There are multiple ways to do code reviews; one quite common way is pull requests. Pull requests do have some benefits but also a lot of drawbacks:
- feedback is delayed
- the quality of feedback gets worse when there are many big pull requests
- reviews are usually done by more senior people who need to spend a lot of time on pull requests; they have to reserve dedicated time slots to avoid constant, expensive context switching, which in many cases delays feedback even further
- it requires a lot of discipline to provide good reviews, as communication via tools typically lacks context

There are strong indications that the most efficient way to write high quality software is pair programming. Many of the disadvantages of the pull request process are avoided: feedback is immediate, both developers working on the functionality have complete context, communication via multimodal tools, or in the best case while sitting together and talking to each other, is easy, and there is no need to switch contexts.
This leads directly to software with better architecture and fewer bugs.
Hey, but there are two people working on the same thing, so don’t we waste a lot of capacity? Well, we must consider the whole life cycle and include maintenance, refactoring and bug fixing in the overall effort calculation. Exactly these efforts are the most significant over time in software product development, and they are greatly reduced when we deliver high quality software from the start. So the pairing pays off, if you want to deliver high quality software. Who doesn’t?
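A back-of-the-envelope calculation shows how this can pay off over the whole life cycle. Every figure below is an assumption chosen for illustration, not empirical data; the break-even point depends entirely on these ratios:

```python
# All figures are illustrative assumptions (person-days).
INITIAL_EFFORT = 100         # initial development, done solo
MAINTENANCE_EFFORT = 300     # lifetime maintenance, bug fixing, rework

PAIRING_FACTOR = 2.0         # pessimistic: two people on every task
MAINTENANCE_REDUCTION = 0.5  # assumed: pairing halves downstream rework

solo_total = INITIAL_EFFORT + MAINTENANCE_EFFORT
pair_total = INITIAL_EFFORT * PAIRING_FACTOR + MAINTENANCE_EFFORT * MAINTENANCE_REDUCTION

print(f"solo:   {solo_total:.0f} person-days")   # 400
print(f"paired: {pair_total:.0f} person-days")   # 350
```

Even with the pessimistic assumption that pairing doubles the initial effort, the reduced maintenance burden can dominate, because maintenance is where most of the lifetime effort goes.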

I am sure there are many other counterintuitive facts about software development. If you have some ideas, don’t hesitate to add a comment with your proposal.

Getting a better, fact-based understanding of how software development works, as a complex activity dealing with complex systems, will help us make better decisions along the way.
