Facts and Fallacies of Software Engineering

This content is taken from the Table of Contents of the book. I can think of no more important step in professional and product development than to (at minimum) be aware of these enduring facts and fallacies of software.

Studying and pondering the implications is the next step.

This listing was created for use as a searchable index. Since I am currently reviewing one item a week, it is useful to know there are 65 items in total (55 facts and 10 fallacies).

Facts and Fallacies of Software Engineering, Robert L. Glass (2003)

About Management

People

  1. The most important factor in software work is the quality of the programmers.
  2. The best programmers are up to 28 times better than the worst programmers.
  3. Adding people to a late project makes it later.
  4. The working environment has a profound impact on productivity and quality.

Tools and Techniques

  1. Hype (about tools and techniques) is the plague on the house of software.
  2. New tools and techniques cause an initial loss of productivity/quality.
  3. Software developers talk a lot about tools. They evaluate quite a few, buy a fair number, and use practically none.

Estimation


  1. One of the two most common causes of runaway projects is poor estimation.
  2. Most software estimates are performed at the beginning of the life cycle. This makes sense until we realize that estimates are obtained before the requirements are defined and thus before the problem is understood. Estimation, therefore, usually occurs at the wrong time.
  3. Software estimation is usually done by the wrong people.
  4. Software estimates are rarely corrected as the project proceeds.
  5. Since estimates are so faulty, there is little reason to be concerned when software projects do not meet estimated targets. But everyone is concerned anyway.
  6. There is a disconnect between software management and their programmers. In one research study of a project that failed to meet its estimate and was seen by its management as a failure, the technical participants saw it as the most successful project they had ever worked on.
  7. The answer to a feasibility study is almost always “yes”.

Reuse


  1. Reuse-in-the-small is a solved problem.
  2. Reuse-in-the-large remains a mostly unsolved problem.
  3. Reuse-in-the-large works best in families of related systems.
  4. There are two rules of three in reuse:
    1. It is three times as difficult to build reusable components as single use components, and
    2. a reusable component should be tried out in three different applications before it will be sufficiently general to accept into a reuse library.
  5. Modification of reused code is particularly error-prone. If more than 20 to 25 percent of a component is to be revised, it is more efficient and effective to rewrite it from scratch.
    • Corollary: It is almost always a mistake to modify packaged, vendor-produced software systems.
  6. Design pattern reuse is one solution to the problems inherent in code reuse.
    • Corollary: Design patterns emerge from practice, not from theory.

Complexity


  1. For every 25 percent increase in problem complexity, there is a 100 percent increase in solution complexity. That’s not a condition to try to change (even though reducing complexity is always a desirable thing to do); that’s just the way it is.
  2. Eighty percent of software work is intellectual. A fair amount of it is creative. Little of it is clerical.

About the Life Cycle

Requirements


  1. One of the two most common causes of runaway projects is unstable requirements.
  2. Requirements errors are the most expensive to fix when found during production.
  3. Missing requirements are the hardest requirements errors to correct.

Design


  1. Explicit requirements ‘explode’ as implicit requirements for a solution evolve.
  2. There is seldom one best design solution to a software problem.
  3. Design is a complex, iterative process. Initial design solutions are usually wrong and certainly not optimal.

Coding


  1. Designer ‘primitives’ rarely match programmer ‘primitives’.
  2. COBOL is a very bad language, but all the others are so much worse.

Error Removal

  1. Error removal is the most time-consuming phase of the lifecycle.

Testing


  1. Software is usually tested at best to the 55 to 60 percent coverage level.
  2. 100 percent test coverage is still far from enough.
  3. Test tools are essential, but rarely used.
  4. Test automation rarely is. Most testing activities cannot be automated.
  5. Programmer-created, built-in debug code is an important supplement to testing tools.
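Fact 2 above, that 100 percent test coverage is still far from enough, can be illustrated with a small hypothetical snippet (the example is mine, not Glass’s): a test can execute every line of a function and still miss a defect, because coverage measures which code ran, not which inputs were tried.

```python
def is_leap_year(year: int) -> bool:
    # Deliberately buggy: ignores the Gregorian century rules
    # (years divisible by 100 are not leap years unless divisible by 400).
    return year % 4 == 0

# This single assertion executes 100 percent of the lines in
# is_leap_year, so a coverage tool would report full coverage...
assert is_leap_year(2024) is True

# ...yet the defect survives: 1900 was not a leap year, but the buggy
# function says it was. Line coverage never saw this input.
assert is_leap_year(1900) is True  # wrong answer, despite full coverage
```

Branch, path, and input-domain testing each catch defects that line coverage alone misses, which is the gap this fact points at.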

Reviews and Inspections

  1. Rigorous inspections can remove up to 90 percent of errors before the first test case is run.
  2. Rigorous inspections should not replace testing.
  3. Post-delivery reviews, postmortems, and retrospectives are important and seldom performed.
  4. Reviews are both technical and sociological, and both factors must be accommodated.

Maintenance


  1. Maintenance typically consumes 40 to 80 percent (average, 60 percent) of software costs. Therefore, it is probably the most important phase of the software life cycle.
    • Corollary: Old hardware becomes obsolete; old software goes into production every night.
  2. Enhancements represent roughly 60 percent of maintenance costs.
  3. Maintenance is a solution – not a problem.
  4. Understanding the existing product is the most difficult maintenance task.
  5. Better methods lead to more maintenance, not less.

About Quality

Quality



  1. Quality is a collection of attributes. (ED NOTE: Glass lists seven)
  2. Quality is not user satisfaction, meeting requirements, achieving cost and schedule, or reliability.

Errors


  1. There are errors that most programmers tend to make.
  2. Errors tend to cluster.
  3. There is no single best approach to software error removal.
  4. Residual errors will always persist. The goal should be to minimize or eliminate severe errors.

Efficiency


  1. Efficiency stems more from good design than good coding.
  2. High-order language code can be about 90 percent as efficient as comparable assembler code.
  3. There are tradeoffs between optimizing for time and optimizing for space.

About Research


  1. Many researchers advocate rather than investigate. As a result, (a) some advocated concepts are worth far less than their advocates believe, and (b) there is a shortage of evaluative research to help determine what the value of such concepts really is.

Fallacies


About Management

  1. (x) You can’t manage what you can’t measure.
  2. (x) You can manage quality into a software product.


  1. (x) Programming can and should be egoless.
  2. (x) Tools and techniques: one size fits all.
  3. (x) Software needs more methodologies.

Estimation


  1. (x) To estimate cost and schedule, first estimate lines of code.

About the Life Cycle

Testing


  1. (x) Random test input is a good way to optimize testing.

Reviews


  1. (x) “Given enough eyeballs, all bugs are shallow”.

Maintenance


  1. (x) The way to predict future maintenance costs and to make product replacement decisions is to look at past cost data.

About Education

  1. (x) You teach people how to program by showing them how to write programs.
