Sunday, December 12, 2010

Long Projects

I recently found out that a project I started years ago was just cancelled. I was devastated. How could this happen? We did everything right with this project! It even had a code name that indicated it would be a success.

This was a disruptive technology project. It would change the way we do our work and the way we work with our business partners. It had strong backing from many management tiers. These managers were committed; they knew the perils of supporting a multi-year project like this while also meeting their commitments to deliver a steady stream of new products in the meantime.

From the beginning there were constant challenges because we could not get enough engineers on the project. I was moved off the project in a cascade of reorganizations. But as I watched from afar, many talented engineers were moved onto the project.

And yet, I smelled trouble from afar. The project communications were stellar. I was impressed with the progress. They were creating deliverable systems and getting all the key stakeholders involved. More and more of a working system was delivered. But every time I asked how much time remained before it would be delivered, I was told “two years”. Another year passed, I asked the question again, and got “two years”.

That “two years” is the kiss of death. In fact, “two” of anything is the kiss of death. We have an engineer in another organization who we have been waiting on for months for a design. Every month he tells us it will be done in “two weeks”.

After six months we gave up and did the design ourselves. It’s not as solid a design, because he is the expert in this area, but we are learning. We’ll have to build the design expertise ourselves.

Long projects notoriously fail. Here is some research from

The project I started on was about four years old.

I recall when we made the key decision. We could either build something more malleable that could be evolved slowly over time, or something disruptive; the disruptive approach was ultimately chosen. At the time, it appeared the disruptive technology would be more robust, and indeed that is still the case. But ultimately, the business could not sustain the investment to make this disruptive change.

What does this all mean for architecture? Architects have to make decisions that the organization can follow through on. I had my doubts, having seen our organization fail to follow through on other disruptive changes. And yet I felt at the time that this environment was different. In the end, however, business needs usually win. And software architecture may have to bend to business demands rather than be the most elegant architecture possible.

Saturday, November 13, 2010

Design Patterns in Everyday Architecture

Software Design Patterns were a sensation when the book about them was published. I became aware of the book in 2001 when a team I worked with was using design patterns. The book promoted software design patterns as something similar to a cookbook in other engineering disciplines like building architecture or electronics. The authors define a Design Pattern:

A design pattern systematically names, motivates, and explains a general design that addresses a recurring design problem in object-oriented systems. It describes the problem, the solution, when to apply the solution, and its consequences. It also gives implementation hints and examples. The solution is a general arrangement of objects and classes that solve the problem. The solution is customized and implemented to solve the problem in a particular context.

It all made sense. And yet…

I have a confession to make. I haven’t used Software Design Patterns in a very long time. I think I developed an aversion to them. In the last project where we actively used them as a team, we produced a massive BDUF (Big Design Up Front) design, and the project was an utter failure, ultimately being canceled with many people on the team being laid off. Since then, while I am a proponent of thinking about the project up front, I also know that the team can and will get lost in oodles of documentation. Our team did, and ultimately that documentation was of little value to our stakeholders.

Our team used to work like this: we would see a problem that needed to be solved and then look through the Design Patterns book for a solution.

The problem with doing that is this: often the pattern didn’t precisely solve our problem. Since we worked on embedded systems, object-oriented solutions were often avoided to save memory. Our patterns tended to be a lot simpler than what was presented in the book.
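To illustrate what I mean by simpler: here is a hypothetical sketch (my own, not from the book) of how a pattern like Observer might shrink on a memory-constrained embedded target. No virtual dispatch, no dynamic allocation, just a fixed table of callbacks:

```cpp
#include <cstddef>

// A stripped-down "observer" for embedded use: instead of abstract
// Subject/Observer classes, a fixed-size table of plain function pointers.
using Callback = void (*)(int event);

class TinySubject {
public:
    // Register a callback; returns false if the fixed table is full.
    bool subscribe(Callback cb) {
        if (count_ >= kMax) return false;
        table_[count_++] = cb;
        return true;
    }
    // Invoke every registered callback with the event value.
    void notify(int event) {
        for (std::size_t i = 0; i < count_; ++i) table_[i](event);
    }
private:
    static const std::size_t kMax = 4;  // tuned to the device's memory budget
    Callback table_[kMax] = {};
    std::size_t count_ = 0;
};
```

The intent (decoupling the subject from its listeners) is the same as the book’s pattern; the machinery is just a fraction of the size, at the cost of a hard-coded capacity.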

I found a lot of criticism of design patterns on the internet. One person felt that design patterns only existed because of language restrictions and that in languages like Perl, the patterns are built in and ready for use. (This poor guy got a ton of backlash for not believing in Design Patterns).

The biggest problem I had with design patterns is that the ones I wanted to use weren’t in the book. That said, there is an effort to create a larger repository of design patterns. Since I’m working mostly with XML, I plan to explore this XML repository.

My experience with Design Patterns left me running away from them. While they have some obvious drawbacks, it’s worth looking at them again to see if I can find a use for them in my daily architecture work, where decisions are not usually made up front but on a daily basis.

Saturday, October 16, 2010

Software Architects: The Best Job

I was very happy to see this article on CNN’s Money web site:

This article ranks the Software Architect as the #1 ranked job in America!

I couldn’t agree more with the ranking of course. Like the person profiled, I work on business strategy some days and other days I’m deep in code. And I’m everywhere in between.

There is a projected 34% job growth in this area over the next ten years.

Wednesday, October 6, 2010

Agile Processes – So What’s the Architecture?

About seven years ago I attended a Best Practices conference for software development. The seminars were on RUP (Rational Unified Process) and CMMI (Capability Maturity Model Integration). At that time, I was working with my own company’s internal product release processes, which were waterfall in nature. There were only a handful of seminars on agile development.

Fast forward two years: I’m attending the same Best Practices conference. And the seminars couldn’t be more different. Agile development was the subject of the vast majority of topics presented, with titles like Agile Development and XXX (you fill in the XXX). There were lively roundtables with authors of books on agile. One poor fellow asked skeptical questions about agile methods and was roasted by the roundtable leaders.

And one seminar stood out by itself because Agile was not in the title: Documenting Software Architectures, taught by one of the authors of the book of the same title. This seminar was packed and one of the best attended of the conference.

But don’t software architecture methods and agile methods directly conflict with each other? Don’t they have different values?

Being a software architect and systems engineer at my current company, I have observed and been part of BDUF (Big Design Up Front) efforts and features that were YAGNI (You Ain’t Gonna Need It). I’m guessing that poor fellow who was roasted in the roundtable was also part of such an effort. And yet those efforts sometimes show incredible forethought. The foundation was laid down to allow for robust change in the future. As someone who has worked with existing architectures, I have great respect for the foresight the earlier designers had when that serendipity occurs: “Wow, this is already in place and all we have to do is….”

During a business trip with the SDM program at MIT in 2001, we visited a software company in Dublin, Ireland, that was an early adopter of agile methods. One of the students asked the lead software engineer how they would deal with legacy issues given the way they were using agile. The lead designer didn’t know and thought it would be a problem in the future.

Fast forward to 2010. My company has rolled out agile methods and we’ve all been using them for several years now. We show improvements in productivity and reliability. And what has happened to architecture? It’s more important than ever.

Like most companies, our software systems are large and complex. An overall framework is needed at the beginning of a project, when the decisions that are the hardest to change during the lifecycle of the project are made.

Many architectural decisions are made during the project and the key for the architect is to communicate those changes to all stakeholders. See my previous blog on communication tools for architectural decisions.

At that conference, Agile zealots accused architects of being software dinosaurs and architects accused Agile zealots of being cowboys. I don’t really see the contradiction between the two. I’ve come from using CMMI to using Agile. Both are disciplined processes, and Agile is more disciplined and constrained, with its focus on the final product. Complex systems have architecture whether it is done iteratively or not. It’s just smart to make some of the overarching decisions before coding begins. The architect’s role continues as someone who manages key decisions and tests the code to make sure it is executing those decisions.

When those decisions are not made up front, it can be disastrous for the project (at least for some that I’ve been involved in).

I wonder if there are any software developers who don’t use Agile methods? Maybe not. And I also wonder if there is another, better process in our future. Regardless, classes like Documenting Software Architectures will always be interesting because we all need to know “What’s the architecture?” Architecture occurs whether you have someone working on it or not. It’s better to be aware up front of what it is and how it meets the business requirements of the project.

Sunday, September 12, 2010

Systems Tools: SysML

Systems engineering – specifying functionality, where that functionality resides in the system, and what interfaces are required – is by definition a complex activity that involves an array of players and a multitude of considerations. As if the pressure of efficiently and accurately developing systems in the face of tight budgets isn’t enough, competitive considerations are forcing companies to develop increasingly sophisticated systems faster.
(Systems Engineering Best Practices White Paper, IBM/Rational Software, December 2009, by Hoffman, Sibbald, and Chard)

Where I work, systems engineers are also system architects. They, with the rest of the development team, make day to day decisions that evolve software architecture. I can’t speak for other professions, but the disciplines are deeply entwined in the world of software.

One tool that I’ve been looking at adopting is based on an extension to the Unified Modeling Language (UML) called SysML. SysML has views that deal with multiple aspects of the system – functional and behavioral, structural, performance, and a slew of other models like cost and safety.

INCOSE (International Council on System Engineering) defines SysML as:

  • A graphical modeling language developed in response to the UML for Systems Engineering RFP issued by the OMG, INCOSE, and AP233
  • A UML Profile that represents a subset of UML 2 with extensions
  • Supports the specification, analysis, design, verification, and validation of systems that include hardware, software, data, personnel, procedures, and facilities
  • Supports model and data interchange via XML Metadata Interchange (XMI®) and the evolving AP233 standard (in-process)

The four key diagram types of SysML are structure (similar to a class diagram in UML), requirements (new to SysML), behavior (which uses standard UML behavior diagrams), and parametrics (which are used to express system constraints).

Requirements can be mapped to structure and behavior, as can parametrics. Moreover, models can be customized by extending SysML with mechanisms called stereotypes.

I plan to use SysML in the near future and report back, but my success with model-driven development has been poor. I’m not sure if I and the teams I work with lack the discipline and technical expertise to generate such a complex model of a system, or if it simply doesn’t work.

Have any of you in the blogosphere used SysML?

Wednesday, September 8, 2010

IEEE 1471 – What’s required for Software Architecture?

I recently read through the IEEE Std 1471-2000. In one of the last sections it gives an example of what is needed for an architectural description (an AD) to meet the standard’s requirements. A large group of industry representatives worked on this standard for several years and this standard was approved in 2000.

The first section is the basic information about the architecture. This includes the date of issue, the organization issuing the architecture, change history, a summary, the scope and context, a glossary, and references. This is standard for the technical documents we write in my organization.

The second section must identify “architecturally relevant stakeholders”. This would include users, acquirers, developers, and maintainers (at a minimum). In this section, the concerns of the architecture are also included: the purpose, the appropriateness of the system in fulfilling that purpose, the feasibility of constructing the system, the risks to all the stakeholders, and the general maintainability of the system. Which concerns are most important to each of the stakeholders?

An example for this section would be a stakeholder who takes the software system being built and uses it in another system. This stakeholder is interested in the interfaces to the system and not so much in the internal design. But the developer/stakeholder is very interested in the internal design.

The third section describes the viewpoints, which are required. The specific views for an architecture are not specified, but the standard says the views must be “selected, defined, and analyzed for coverage and that the rationale be given for their selection”. Each viewpoint must have a name, the stakeholders interested in that view, the concerns addressed, the modeling technique or method used to construct the view, and the source information (author, date, etc.).

The viewpoints should cover the stakeholders’ concerns. Note my blog entry where I lay out the minimum viewpoints I need to define an architecture.

The fourth section lays out more requirements about views. Each view corresponds to exactly one viewpoint. Hmmm…this seems to favor the Unified Modeling Language (UML). Check out the Object Process Methodology, which has one diagram with many views.

The next section requires that the views must be consistent with each other. This can be hard to do in practice if your tools don’t do this for you. Peer reviews can also help here.

Section six specifies that the architectural rationale must be included. This seems most important in evolving systems. See my blog post on recording architectural decisions – this is a practice we do in my organization on a daily basis.

Reading this specification reminded me of a time long ago when I was asked to come up with an architecture for a very large system. I came up with several views and assembled a team of cross-functional designers from across the system. I spent months working on the diagrams trying to get agreement. I could not get convergence – not even on which views to include. If I had had this standard in hand at the time, it might have given me the framework to define the views better in relation to the stakeholders.

I recommend reading the specification – it’s short.

Sunday, August 29, 2010

Ch-Ch-Ch-Changes: Software Interface and Software System Dynamics

When a software system is created, there are typically interfaces to that system so that third-party developers can write programs that use the functionality of that system. These external interfaces, once released, must be supported practically forever. Once a third-party developer uses them, we, the system designers, must make sure any changes we make to the interface are both forward and backward compatible.

Forward and backward compatibility put considerable constraints on the interface designer, especially when changes are to be made. The definition is:

Forward compatibility or upward compatibility (sometimes confused with extensibility) is the ability of a system to gracefully accept input intended for later versions of itself.

If we use the same sentence structure, we can define backward compatibility as:

Backward compatibility or downward compatibility is the ability of a system to gracefully accept input intended for earlier versions of itself.

As an example, think of a software library like the Boost C++ Libraries. If a change needs to be made to an interface, it must be done very carefully to ensure at least backward compatibility. If a method must be deprecated, it is not removed immediately. The method remains supported, but it is documented that support for this method will be removed in the future. And it is often years before that happens.
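As a small illustration of that staged deprecation (the class and method names here are hypothetical, not from Boost or any real library), C++14’s [[deprecated]] attribute lets us keep the old entry point alive while steering callers to the new one:

```cpp
#include <string>

class Document {
public:
    // New, preferred spelling of the operation.
    std::string renderTitle() const { return "Title: " + title_; }

    // Old spelling kept for existing callers. Compilers warn at the call
    // site, but the code still builds and behaves as before.
    [[deprecated("use renderTitle() instead")]]
    std::string getTitle() const { return renderTitle(); }

    void setTitle(const std::string& t) { title_ = t; }

private:
    std::string title_;
};
```

Existing callers keep working (and get a nudge from the compiler); only in some future release, after the documented grace period, would getTitle() actually be removed.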

Much of my work is on existing systems so I have the difficult task of making sure my changes to external interfaces are both forward and backward compatible. This is difficult to do intellectually and I often miss things that break compatibility. I’ve built up what our team has learned into a set of standards we follow to assure that we can keep this compatibility.

But our interfaces are getting more complex and we are still missing things.
This is causing an unfortunate dynamic on our team. The first behavior is tremendous fear. Team members fear that what they release will not be forward and backward compatible, so they get into the mode of carefully trying to find every problem imaginable.

Here’s a systems dynamics model of our current problem. The models were developed using Vensim and there is a free trial download of the software at their website.

The first picture is of a simple model. Note that the development rate is constant if the rate of problems found equals the rate of problems fixed.

Note: The graphed results of a simulation of this model are shown in the picture of the model (see the flat rate of software productivity and the rate of problems found and fixed).

Now we’ll add the fear factor to the model. When fear occurs, more problems, whether real or imaginary, are discovered. The software development rate goes down.

When this occurs, schedule pressure starts building (note that this dynamic is not shown in the model). Schedule pressure creates more fear, which causes more problems to be found, real and imaginary, all of which need to be analyzed. Schedule pressure might create an increase in productivity (“you must fix more bugs – please work as much as you can to do this!”). But it can’t match the fear factor, and the software development rate decreases even more.
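The loop is easy to see in a toy simulation. This is not the Vensim model itself; it is a simple stepped version of the same feedback structure, and every coefficient below is invented purely for illustration:

```cpp
#include <vector>

// Toy stock-and-flow loop: fear inflates the rate at which "problems"
// (real or imaginary) are found, open problems feed schedule pressure,
// pressure feeds fear, and fear drags the development rate down.
// All coefficients are made up for illustration.
struct Step { double fear, openProblems, devRate; };

std::vector<Step> simulate(int steps, double fearGain) {
    std::vector<Step> out;
    double fear = 0.1, open = 10.0;
    const double baseFind = 2.0;   // problems found per step, before fear
    const double fixRate  = 2.0;   // problems fixed per step
    const double baseDev  = 1.0;   // nominal development rate
    for (int t = 0; t < steps; ++t) {
        double found = baseFind * (1.0 + fearGain * fear);
        open += found - fixRate;
        // Open problems above the baseline create schedule pressure,
        // and schedule pressure feeds fear.
        fear += 0.05 * (open - 10.0) / 10.0;
        if (fear < 0.0) fear = 0.0;
        double dev = baseDev / (1.0 + fear);  // fear slows development
        out.push_back({fear, open, dev});
    }
    return out;
}
```

With fearGain set to zero the system is in equilibrium and the development rate stays flat (the first picture); with a positive fearGain the open-problem stock and fear reinforce each other and the development rate steadily falls (the second).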

My theory is that if we create a more supportive team environment, we can counter the fear and get our development rate back up. The team will have the following operating conditions:
1. All problems raised are considered and accepted respectfully
2. Problems are classified and analyzed by a cross-disciplinary team (people with a variety of expertise) and a decision is made – do we make this change or not?
a. This should decrease the churn we have from changes – we probably won’t do as many changes as we have in the past.
3. This team will also recommend the best fix for each problem, so the rate of problems fixed will increase.
Voila – software productivity is back up and even increases.

I’m trying this experiment on our team and will report back the results in a few months. I feel confident that this supportive team model will change our dynamics.

Wednesday, August 25, 2010

Architectural Decisions – Accidental or On Purpose?

“Every interesting software-intensive system has an architecture. While some of these architectures are intentional, most appear to be accidental.” - Grady Booch

Last week I wrote about what software architecture is and that I rely on a software architecture consisting of goals which map to structure and behavior. But in my everyday work in the field of software design, I rarely get the luxury of creating such a complete mapping.

What typically happens is that I join a project that already has the vision established but may not have any artifacts explaining that vision. I have to do some archeology in the code to discover what the original vision was. And more often than not, there are several conflicting visions in place.

Typically where I work, we make design and architecture decisions on a daily basis. And as we iterate through our work, developing code and testing our solutions, we find changes we need to make. To make sure these decisions are not accidental and that they merge with our team’s vision of our architecture, we meet weekly to review changes. We often argue.

Part of my job as a software architect is to examine and analyze the various points of view on the team. There are two tools I use that are invaluable for getting the team to converge on a solution (sometimes convergence is not possible and a decision simply has to be made, but in the majority of cases, convergence does occur).

The first tool is a decision matrix. This is from the book “Getting Started in Project Management” by Martin and Tate - double-click the image to get a close-up:

This works well but is very quantitative and the human nuances of decision making are often lost in this format. We also sometimes break the rule above “Do not change the numbers to affect the selection of a “favored” solution.”
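The quantitative core of such a matrix is simple: score each option against each criterion, multiply by the criterion’s weight, and sum. Here is a minimal sketch of that arithmetic; the structure and any weights you plug in are illustrative, not the specific format from Martin and Tate’s book:

```cpp
#include <cstddef>
#include <string>
#include <vector>

// One candidate solution with its per-criterion scores.
struct Option {
    std::string name;
    std::vector<double> scores;  // one score per criterion
};

// Returns the index of the option with the highest weighted total.
std::size_t bestOption(const std::vector<double>& weights,
                       const std::vector<Option>& options) {
    std::size_t best = 0;
    double bestTotal = -1.0;
    for (std::size_t i = 0; i < options.size(); ++i) {
        double total = 0.0;
        for (std::size_t c = 0; c < weights.size(); ++c)
            total += weights[c] * options[i].scores[c];
        if (total > bestTotal) { bestTotal = total; best = i; }
    }
    return best;
}
```

The mechanics are trivial; as noted above, the hard (and human) part is agreeing on the weights and resisting the urge to adjust them until a favored option wins.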

My favorite format is to use a Mind Map. The tool I use is Mind Manager, but there are free tools available for download and templates available for Visio.

Here are two of the templates I work with:

I just noticed that I have hundreds, yes, hundreds of these maps in place. We refer back to them when we forget why we made a certain decision, or when someone new to the team questions a decision. Sometimes we re-evaluate a decision and use our map as a launch point to see why our thinking has changed.

In the article Booch says:

"Thus, having an accidental architecture is not necessarily a bad thing, as long as
• the decisions that make up that architecture are made manifest and
• the essential ones are made visible as soon as they are instituted and then are allowed to remain visible throughout the meaningful life of that system."

By using these tools you can achieve, and thrive with, an accidental architecture. Let me know if these tools are useful to you, or what tools you use.

Wednesday, August 18, 2010

Software Architecture Definition?

When I was a student in the MIT Systems Design and Management Program, I had to take a course titled “Systems Architecture”. The course was about overall systems and not specifically about software. In that course, there was a paper I read that forever shaped my view on what software architecture should be. The paper is titled “A Taxonomy of Decomposition Strategies Based on Structures, Behaviors, and Goals” by Phillip J. Koopman (DE-Vol. 83, 1995 Design Engineering Technical Conferences, Volume 2, ASME 1995). In this paper, Koopman describes architecture as containing structures, behaviors, and goals, along with a variety of decomposition strategies. In my own work in software architecture, I find that I must tie my structure and behavior specifications to business goals; when I don’t have all three pieces, my architecture often doesn’t withstand the rigors of peer review from a wide audience.

How does this definition of architecture components stand up to other definitions? The standard on software architecture - IEEE 1471-2000 - says:
Software architecture is the fundamental organization of a system, embodied in its components, their relationships to each other and the environment, and the principles governing its design and evolution.

The standard defines the following bold terms:

A system is a collection of components organized to accomplish a specific function or set of functions. The term system encompasses individual applications, systems in the traditional sense, subsystems, systems of systems, product lines, product families, whole enterprises, and other aggregations of interest. A system exists to fulfill one or more missions in its environment. [IEEE 1471]

The environment, or context, determines the setting and circumstances of developmental, operational, political, and other influences upon that system. [IEEE 1471]

A mission is a use or operation for which a system is intended by one or more stakeholders to meet some set of objectives. [IEEE 1471]

A stakeholder is an individual, team, or organization (or classes thereof) with interests in, or concerns relative to, a system. [IEEE 1471]

This definition correlates to Koopman’s paper in the following ways: the goals of the architecture are the mission; the behavior maps to the relationships and the principles; and the structure maps to the fundamental organization.

In practice I see many designs that show mapping and decomposition with behavior and structure (think UML class diagrams and sequence diagrams), or a mapping between goals and behavior (UML use cases and sequence diagrams), but very few architecture artifacts that successfully map all three.

I will be looking at other definitions of architecture in future blog posts.

Sunday, July 25, 2010

Polymorphism: A Place in Systems Engineering?

This article originally appeared in the Fall 2009 edition of MIT SDM Pulse newsletter.

Polymorphism: A Place in Systems Engineering?

As a software architect and a systems engineer, I’ve been working with larger systems that include software, hardware, and mechanical subsystems. I work on designing software and systems for a multifunction device that scans, prints, and faxes documents. In my work, I often try to translate principles of software design to larger systems.

One such principle is polymorphism. How does polymorphism work in systems engineering? A key principle in software engineering is that software objects should have only one function; polymorphism allows abstract software objects to have more than one function (more on the meaning of abstract below). In the non-software world, some objects that you can hold in your hands can also have more than one function. Systems engineers often want to fit many functions into a single piece of hardware, and polymorphism is a pattern embraced by system designers.

So what the heck is polymorphism? If you are not a software engineer, you may not be familiar with the term, but you are probably familiar with the principle. The basic idea comes from a Greek root meaning “having multiple forms”. In object-oriented programming, polymorphism is the ability to execute a function specific to a context. The classic textbook example uses shape objects. A Shape is a general object, and a Circle and a Rectangle are more specific objects that inherit from a Shape. A Circle is a Shape and a Rectangle is a Shape.

In C++, here is a partial definition of a Shape object. The “virtual” function that is set to “0” means this class is an Abstract Base Class, or an ABC. This object can never be instantiated or created in executable code but exists for other objects to inherit from.

class Shape {
public:
   virtual void draw()=0;
};

Now let’s define a rectangle and a circle class. These objects inherit from the Shape class. Inheritance is an “is a” relationship. That is, a Rectangle “is a” Shape and a Circle “is a” Shape.

class Rectangle: public Shape {
public:
   Rectangle(int width, int height);
   void draw();
private:
   int width;
   int height;
};

class Circle: public Shape {
public:
   Circle(int radius);
   void draw();
private:
   int radius;
};

The Rectangle and Circle objects each have their own implementation, or code, for the draw function. Each shape has its own formula or algorithm for drawing itself.

So why is this useful? In a program, a pointer to a Shape can be used to draw any shape without knowing at compile time what exact shape it is. For example, the line of code

shape->draw();

will draw the correct shape (with some help from the compiler). This is why C++ is a polymorphic language. You don’t need to write this kind of code:

if (Circle) then

draw a circle

else if (Rectangle) then

draw a rectangle
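Putting the fragments above together, here is a complete, compilable version of the same idea. I’ve made draw() return a string here (rather than actually rendering anything) purely so the polymorphic dispatch is easy to observe; that detail is mine, not part of the original sketch:

```cpp
#include <memory>
#include <string>
#include <vector>

// Abstract base class: draw() is pure virtual, so a Shape can never
// be instantiated directly.
class Shape {
public:
    virtual ~Shape() {}  // virtual destructor, since we delete via Shape*
    virtual std::string draw() const = 0;
};

class Rectangle : public Shape {
public:
    Rectangle(int w, int h) : width(w), height(h) {}
    std::string draw() const override { return "rectangle"; }
private:
    int width, height;  // kept to mirror the sketch above
};

class Circle : public Shape {
public:
    explicit Circle(int r) : radius(r) {}
    std::string draw() const override { return "circle"; }
private:
    int radius;
};

// The caller never tests the concrete type: no if/else chain.
// Each object's own draw() is selected at run time.
std::string drawAll(const std::vector<std::unique_ptr<Shape>>& shapes) {
    std::string out;
    for (const auto& s : shapes) out += s->draw() + " ";
    return out;
}
```

drawAll() is exactly the code the if/else version tries to avoid: it works unchanged if a Triangle class is added tomorrow, because the dispatch lives in the objects, not in the caller.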

So this is useful in software – less code, and the abstraction is easier to understand. How is this useful in larger systems? Polymorphism can work with a vehicle that can fly, drive over roads, and float in the water. Imagine that you, the driver, could drive this vehicle in the same way without knowing how the vehicle moved over these different mediums. You just know how to drive, and it works the same way no matter where you are. That’s polymorphism, and that’s useful to the customers of my product – the machine that has multiple functions.

Another example of polymorphism comes from my days at MIT in the SDM program. In a systems engineering class one day, we talked about a seat cushion in a boat. This seat cushion serves two functions – as padding for a seat in one context and as a flotation safety device in another (the cushion has straps on it). The design of this seat cushion has been influenced by the fact that it has two functions. However, its basic core function, or Abstract Base Class, is to lift a human – whether on a seat or in the water.

Here’s an example of how polymorphism can save Unit Manufacturing Cost (UMC). Let’s return to the vehicle example. If you want a vehicle that drives over land and floats in water, you could buy two vehicles, one that runs over land and another that floats and moves in water. Or we engineers could design you one vehicle that does both. [i]

The multipurpose vehicle is more expensive than a single-purpose vehicle, but should be less expensive than two vehicles (one for land, one for water). That is the challenge with polymorphism. If our customers do indeed want a multi-function device, and if we can reuse systems to perform multiple functions in a cost-effective manner, then polymorphism works for our business.

Polymorphism has a role in product platform design. A platform has a set of common functionality; how that functionality is implemented varies with the system the platform is built into. Another example, from my Product Design and Development class, is a line of power tools that had a product platform of a power supply. This power supply fit onto a variety of power tools and could be switched between them. Like the Shape and the draw function, the basic functionality of obtaining power to do work is the “base class”. The function of doing the work varies from power tool to power tool.

But polymorphism isn’t always the best idea. In product design, designing hardware to have multiple functions often has tradeoffs. For example, if you had to design a seat cushion that did not have to be a flotation device, you would not be as constrained as when you had to design it to be two things. Here’s where software and hardware design differ. Notice that in the shape example, the Shape is a distinct object and the Circle and Rectangle objects are distinct. Each one serves a purpose. There are no tradeoffs with polymorphism as far as the function of the object goes (there is a memory cost). The Shape object is never actually created in software but is used as an abstract reference. At run time, the shape object will always actually be either a Circle or a Rectangle. Each object is distinct, with only one purpose. With objects-you-can-hold, the object is two things at once. For example, the seat cushion is something that provides padding and floats a person in water. The tradeoffs are in the physical object, whereas in software there is no tradeoff in the objects themselves.

Polymorphism does have an expense associated with it and should be used only when you need different functions in different contexts. The base functionality should be the same in both contexts – to draw a Shape, to drive a vehicle, to lift up a person, to power a tool for work. Polymorphism is a powerful design pattern, popular in software, that can be used in larger systems to build flexibility into product design, and one that will be increasingly used in our push to design smaller, cheaper, better products for our customers.

[i] Incidentally, such a vehicle was designed and built during WWII and many of them are still in use today with Boston Duck Tours: