
Software Engineering and dogs
Friday, December 17, 2004

Everyone and their dog has written about Software Engineering. I don't have a dog, but I've been known to stop and smell the proverbial virtual flower from time to time, and thus I started writing this entry about Software Engineering and some of my very personal views on the field.

The Software Engineering Institute defines Software Engineering as:
    Software engineering covers the development of software systems. Software engineers focus on applying systematic, disciplined, and quantifiable approaches to the development, operation, and maintenance of software.

The Scientific Method can be defined as:
    The scientific method is the process by which scientists, collectively and over time, endeavor to construct an accurate (that is, reliable, consistent and non-arbitrary) representation of the world.
If we were to intersect both approaches, you would think we'd end up with the secret recipe for building 100% quality software.

The fact of the matter is that nothing is 100% free of defects. Thus, we have to deal with the statistical concoction of "rate of failure." In manufacturing settings, we can use this number to measure the failure rate of a given process, and we've been doing so for years with a certain degree of predictability and satisfaction.

I started writing this entry a couple of weeks back, and while I was working on the draft, Microsoft started talking about Software Factories. The Microsoft marketing machine at work: they almost make it sound (and perhaps wish) that building Software Systems should have the same predictable failure rates as any other manufacturing process. However, that is not the case: Software Engineering cannot be considered a manufacturing process. There is no "piece work" that can be handed down or automated as on a factory line. No matter how many times Microsoft says it, it won't be true - well, at least not yet. Perhaps in the future, but not now.

There are some who have said (myself included) that Software is not built, but grown. Software is "grown" iteratively and follows an optimization pattern. I.e., on each pass we build on top of what's already been coded, and each iteration yields a better module than the one before.

I'm sure you've already thought about software in this way. For example, given a set of business rules, with proper analysis already completed, you'd start building an arbitrary Software component in the following manner:

while (constraints hold) {
   1. Architectural design
   2. Detailed design
   3. Coding + unit testing
   4. QA

   Check the constraints: if the Software is good enough for our purposes,
   OR there's no more money left to continue development,
   OR management pulls the plug on the project,
   then the constraints no longer hold and the loop ends.
}

At the end of this while loop, you hope to end up with a Software module that meets the original requirements and implements the business rules given to the Software Engineers, in a reasonable amount of time and at a reasonable cost - note that the constraints are many and arbitrary; sometimes projects are canceled for obscure reasons.
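The loop above can be turned into runnable code. A minimal sketch, assuming toy stand-ins for cost and quality (the numbers, the phase costs, and the `iterate_module` name are all made up for illustration, not a real process model):

```python
# Hypothetical sketch of the iterative "grow the software" loop above.
# Each pass runs the four phases, improves quality, and then checks the
# exit constraints: good enough, out of money, or plug pulled.

def iterate_module(budget, max_iterations, quality_target):
    quality = 0.0   # how close the module is to "good enough" (0..1)
    spent = 0       # budget consumed so far
    iterations = 0
    while True:
        # One "meta" iteration: design, code, test.
        for phase in ("architectural design", "detailed design",
                      "coding + unit testing", "QA"):
            spent += 1  # stand-in: each phase consumes one unit of budget
        quality += (1.0 - quality) * 0.5  # each pass improves on the last
        iterations += 1
        # Check the constraints.
        if quality >= quality_target:
            return ("shipped", iterations)
        if spent >= budget:
            return ("out of budget", iterations)
        if iterations >= max_iterations:
            return ("cancelled", iterations)

print(iterate_module(budget=40, max_iterations=10, quality_target=0.9))
# -> ('shipped', 4)
```

The halving step mirrors the optimization pattern in the text: each iteration closes half the remaining gap, so the module converges toward - but never reaches - perfection.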

Each iteration in our while loop, above, is a "meta" iteration containing inner iterations that allow the Software to grow. Different steps can be used, in a different order, with whatever life cycle you prefer: iterative, Waterfall, Spiral, etc.

I can also say here that the quality of each iteration depends on various criteria: the quality of the Software Engineers, the quality of the development environment, each developer's personal problems, management incompetence, etc. Everything affects the growing process.

So, I come to the conclusion of my entry.

The fields of Software Engineering and Computer Science (academic and professional) have existed for less than a century; compared to other fields, we are still in the early stages of development. The advancement of technology is tightly related to Moore's Law, meaning that our task of Engineering Software Systems will only grow in complexity - there is no slowdown in the growth of information and technology.

In addition to Moore's Law, we have other theoretical matters to consider. The field of Mathematics has a "Fundamental Theorem" for everything: we have fundamental theorems of Algebra, Calculus, Arithmetic, and others I can't recall. Not surprisingly, Computer Science's fundamental theorems are based on the same Mathematical Sciences. We also have limitations in the computing paradigm. Does P = NP? If you know, there is a million dollars waiting for you.

One of the things that interests me most, though, is the practical limitation of our current Software Engineering processes, our testing methodologies, and the quality of the software we build (or grow).

A few years back, Fred Brooks wrote in The Mythical Man-Month (in the essay "No Silver Bullet") that the problems we encounter have to do with the complexity of the systems themselves. I.e., Software is inherently complex, and this complexity is one of its essential characteristics - one we cannot take away or even abstract in order to make a suitable, repeatable model (similar to the Scientific Method).

In other words, and to greatly simplify his essay, we suffer from combinatorial explosion. For example, if we consider a computing model based on a Turing machine implemented in a digital/binary architecture, a machine whose state fits in n binary cells has 2^n possible configurations of 0's and 1's, and each state of the machine can yield a totally different result. Fully testing anything this complex is humanly impossible - and it is very unlikely that every combination of the system is actually encountered in the life of any Software System.

Note: 2^n doubles with each additional bit, so even a modest 64 bits allow more than 10^19 distinct states.
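To get a feel for the explosion, here is a tiny sketch that simply counts the configurations of n binary cells (the sizes chosen are arbitrary examples):

```python
# Sketch: how fast the number of distinct states of n binary cells grows.
# With n bits there are 2**n combinations; exhaustive testing is hopeless
# long before n reaches the size of any real machine's state.

for n in (8, 32, 64, 256):
    states = 2 ** n
    print(f"{n:>3} bits -> {states} possible states")
# ->   8 bits -> 256 possible states
#     32 bits -> 4294967296 possible states
#     ...
```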

Mind boggling - It means that we will never be able to tame the beast - We have no silver bullet.

What can we do?
For the moment, all we can do is invent new ways (and terms) to fake control over our processes. We can't just sit idle hoping that we will eventually write perfect code. A la: "1000 monkeys at 1000 typewriters will eventually code the perfect JVM."

So, many companies have emerged claiming that their trademarked processes yield quantifiable results: ISO 9000, Six Sigma, and SW-CMM (the Capability Maturity Model), to name a few. Most of these processes have been adapted to Software Engineering and are based on a proven track record of success in the purely manufacturing field.

There is no doubt that attempting to follow any of these processes yields better software. After all, any intelligent action we take to improve our building methodologies will improve the resulting systems.

We must note that all the companies offering such services are for-profit companies, so there is an incentive for their methods to become the quality measurement standard for all of Software Engineering. As with anything else in life, there are many attempted solutions to one problem, and it's up to the implementer to pick the best approach for the case at hand. And as always, there are no guarantees that any process will work, and each one is very costly to implement - a price many companies are willing to pay in order to stay competitive.

BTW, I don't know much about the ISO process. However, I'm familiar with SW-CMM. And the only thing I know about the Six Sigma process is where the name comes from: it is based on the theory of the Normal Distribution. I.e., a company manufacturing under the Six Sigma umbrella can "guarantee" (more of a "claim," really) that its specification limits sit six standard deviations from the process mean, so defects are the rare items that fall beyond that point. However minimal the number of defective "whatevers" is, they will always exist - by the usual Six Sigma accounting, about 3.4 in 1,000,000. So, no assurance of a defect-free process, just a really cool sounding name: Six Sigma.
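The 3.4-per-million figure can be checked against the normal distribution. A minimal sketch, assuming the 1.5-sigma process shift that Six Sigma practitioners conventionally apply (which puts the effective tail at 4.5 standard deviations); the `upper_tail` helper name is made up here:

```python
import math

def upper_tail(k):
    """P(X > k) for X ~ N(0, 1), via the complementary error function."""
    return 0.5 * math.erfc(k / math.sqrt(2))

# The literal 6-sigma tail is vanishingly small (about 1e-9)...
print(f"P(X > 6)   = {upper_tail(6.0):.2e}")
# ...but with the conventional 1.5-sigma process shift, the effective
# tail sits at 4.5 sigma, which gives the famous ~3.4 defects per million.
print(f"defects per million at 4.5 sigma: {upper_tail(4.5) * 1e6:.1f}")
# -> defects per million at 4.5 sigma: 3.4
```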

Applying any statistical measurement to Software Engineering is quite tricky - how do you measure productivity? How do you estimate progress?

Lines of Code are not a very good indicator of productivity, so how can you measure the "rate of failure" of any particular system? I'm sure this very topic keeps many CS departments and PhD advisors busy reading and reviewing dissertations across the top universities of the world.

At some time in the future, one of those dissertations will contain a good solution, or at least a suggestion that leads us in the right direction and gets us closer to the silver bullet for our software beast - "If you had 1000 PhD students at 1000 typewriters, they would eventually... blah, blah..."


11:30 PM | 1 comment(s) |


© Jose Sandoval 2004-2009 jose@josesandoval.com