Traditionally, software was designed as a large up-front planning task, using techniques such as the waterfall model of software development. In these approaches the entire system was analysed and the software and hardware components architected in advance, culminating in a complete design that merely needed to be executed by programmers to deliver the entire working system.
This is actually how most projects are planned in the non-software world. For example, when a building is constructed the plans are approved up front, the materials are calculated, and a plan for the construction is produced. Then it is simply a matter of following that plan, with the right people for the job, with the right materials, in the right order, and the building gets built.
Of course, problems inevitably occur in the delivery phase: something wasn’t thought through properly, some materials don’t perform as planned, the environment doesn’t behave as expected, or external dependencies don’t arrive on time. This means that even when all the planning has been done in advance, and done to a high standard, there is still large uncertainty around delivery dates and the overall cost of the project. It is rare for a project to be delivered without some re-evaluation of the design axioms, prompted by knowledge gained during delivery.
Things get interesting when you really take on board that, however much you believe it can all be planned in advance, the reality is that the moment you make any change to the system, new information emerges that wasn’t present when the original plan was made.
I challenge you to recall any circumstance in which you planned everything first and then implemented the entire project without making any changes to that plan along the way. I bet you can’t!