Software Design Patterns

Alexander Liss

12/02/98; 03/28/99


Method, Data, Function and Binder

Organizing Method Variants

Interactive Construction of "Method"

Layers and Modules

Library - a System of Concepts


There is a literature dedicated to stable systems of software design ideas. We can name a few authors, S. Meyers and J. Alger among them. E. Gamma, R. Helm, R. Johnson and J. Vlissides coined a good term for such systems - "design patterns" - which we gladly use.

Here we present a few of our patterns. Some of them we have not seen described before; others are rearrangements and combinations of well-known patterns, particularly those described by the above-mentioned authors. These patterns present solutions to problems which we actually had to solve.


Method, Data, Function and Binder

Here we describe a design pattern which makes the design of some libraries and tool kits clear, and some Application Programming Interfaces (APIs) actually useful.

We describe everything here in terms of C++, but the idea is applicable to other languages as well; for example, it can be applied to C APIs.

There is a problem which anyone who often works with heavy computation knows. Numerical methods and models take functions and data as parameters and return data as results. The problem grows when we have to pass some other object's member function as a parameter of our procedure.

Certainly, all this can be done in a straightforward way with void pointers, pointers to members and casts, but this approach is error-prone.

We describe here a few steps of separation which help to solve the problem. We illustrate them with an optimization procedure.

First, we separate the method from everything else. We know it is cool to make the object optimize itself, but it does not work well, and we want the optimization procedure to be developed and maintained separately by specialists. Hence, we have a class Method, which gets what it needs as a "parameter" and returns a result.

Second, we separate data and functions. Data is a fairly obvious object. Functions are not real functions but hooks to real functions. Some might argue that there are too many objects already, so we combine the input data object and the result data object. This is actually useful - the same data member can carry the initial value before the call to the optimization procedure and the result after it.

Now we have three classes:

class Data{...};

class Functions{...};

class Method
{
public:
    Method(Data & data, Functions & functions);
    int run();
};


Method needs Data and Functions to run.

In the case of optimization:

class Data
{
public:
    double left,right,x;
};

class Functions
{
public:
    virtual double f(double x)=0;
};


We have to use either pointers to functions or abstract functions - we do not have a function yet, we need only hooks. We like abstract functions better - they are easy to hook to member functions.
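To illustrate the difference (all names here are ours, not taken from any library), compare a plain pointer-to-function hook with an abstract-function hook:

```cpp
#include <cassert>

// Hook as a plain pointer to function: it can carry a free function,
// but not a non-static member function without extra machinery.
typedef double (*FunctionHook)(double);

double square(double x) { return x * x; }

// Hook as an abstract function: a derived class can forward the call
// to any member function it likes.
class Functions {
public:
    virtual double f(double x) = 0;
    virtual ~Functions() {}
};

// A minimal concrete hook wrapping the free function.
class SquareHook : public Functions {
public:
    double f(double x) { return square(x); }
};
```

The abstract-function form costs one small class per hook, but it avoids casts entirely.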

Now we want to optimize the function

double profit(double);

which is a member of the class A:

class A
{
public:
    double profit(double);
};


We create a special Binder class, which knows class A:

class ABinder: public Functions
{
    A *a;
public:
    ABinder(A& object){a=&object;}
    double f(double x){ return a->profit(x);}
};


Actually, ABinder's function f() can do much more than pass the call along: it can count steps of the optimization, check a timer, etc.

Now we call Method with an ABinder in place of Functions, and we are OK.
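To make the whole chain concrete, here is a minimal self-contained sketch; the naive grid search inside run() and the particular profit() function are ours, purely for illustration:

```cpp
#include <cassert>

class Data {
public:
    double left, right, x;  // x carries the initial value in, the result out
};

class Functions {
public:
    virtual double f(double x) = 0;
    virtual ~Functions() {}
};

// A deliberately naive "optimization": scan the interval on a grid and
// keep the best point. A real Method would be far more refined.
class Method {
    Data &data;
    Functions &functions;
public:
    Method(Data &d, Functions &fn) : data(d), functions(fn) {}
    int run() {
        double best = data.left;
        double bestVal = functions.f(data.left);
        for (int i = 1; i <= 100; ++i) {
            double x = data.left + (data.right - data.left) * i / 100.0;
            double v = functions.f(x);
            if (v > bestVal) { bestVal = v; best = x; }
        }
        data.x = best;
        return 0;
    }
};

class A {
public:
    double profit(double x) { return -(x - 2.0) * (x - 2.0); }  // peak at x=2
};

class ABinder : public Functions {
    A *a;
public:
    ABinder(A &object) { a = &object; }
    double f(double x) { return a->profit(x); }
};
```

Method never learns that a class A exists; the binder is the only piece that knows both sides.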

If we want to call Method from inside class A, which is what we do when we want object A to coolly optimize itself, then we do the following somewhere in one of A's member functions:

ABinder binder(*this);

Data data;

// set data

Method method(data, binder);

int status = method.run();

// get result from data

We have to create these little servant Binders each time we want to optimize something - this is not a big price to pay for clean separation. Anyone still in doubt should recall using a library's sorting routine from C++ code, especially when different criteria are used for sorting.
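The sorting case follows the same scheme: each criterion is a small binder-like object handed to the library routine. A minimal sketch (the criteria themselves are ours):

```cpp
#include <algorithm>
#include <cassert>
#include <vector>

// One comparison object per sorting criterion, passed to std::sort
// the way a Binder is passed to Method.
struct ByValue {
    bool operator()(int a, int b) const { return a < b; }
};

struct ByAbsoluteValue {
    bool operator()(int a, int b) const {
        int aa = a < 0 ? -a : a;
        int bb = b < 0 ? -b : b;
        return aa < bb;
    }
};
```

The routine itself never changes; only the little servant objects do.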

A Binder can be parameterized with a switch inside its function f().

class ABinder: public Functions
{
    int type;
    A *a;
public:
    ABinder(A& object,int atype)
    {a=&object; type=atype;}
    double f(double x)
    {
        switch(type){
        case 0: return a->profit(x);
        // other cases - other members of A, counters, checks, etc.
        default: return a->profit(x);
        }
    }
};

This reduces the number of Binder classes while preserving the same variety of Binder objects.
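A hedged sketch of such a parameterized Binder in use; the member functions of A and the type names are ours, for illustration only:

```cpp
#include <cassert>

class Functions {
public:
    virtual double f(double x) = 0;
    virtual ~Functions() {}
};

// A hypothetical class with two members we may want to optimize.
class A {
public:
    double profit(double x) { return 10.0 * x; }
    double cost(double x) { return 3.0 * x; }
};

// One Binder class, many behaviors: the switch selects the member to call.
class ABinder : public Functions {
    int type;
    A *a;
public:
    enum { Profit, Cost };
    ABinder(A &object, int atype) { a = &object; type = atype; }
    double f(double x) {
        switch (type) {
        case Cost:   return a->cost(x);
        case Profit:
        default:     return a->profit(x);
        }
    }
};
```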


Organizing Method Variants

This pattern is an application of known patterns to the "Method" object.

"Method" is an object which is created when the computation is needed and deleted after the computation is finished. Usually it does not have any internal data which is independent of the input and has to be computed at object initialization. If it has such data, it is reasonable to present this data as a separate object, which is maintained separately.

Hence, the "Method" does not need a copy constructor or an assignment operator.

Usually there are a few ways of implementing the "Method"; all or at least some of them are tricky and have to be done by specialists, and the number of reasonable implementations is small. Hence, the "Method" is implemented as a part of a library, a "server" which is called by "clients". "Client" objects and "client" users do not need to know anything about the details of the implementation - we might change it later. Designers of different variants of the "server" can be different specialists, and it is desirable to keep them working independently.

These requirements lead to the following multi-file design.



File "client.h":

// forward declaration
class LocalMethod;

class Method
{
    LocalMethod *method; // does the work
    // forbid the use
    Method(const Method &);
    Method & operator = (const Method &);
public:
    enum{ Method1 /* , ... - names of types of methods */ };
    int status;
    // functionality, for example
    int f();
    Method(/*something and */ int type);
    ~Method(){delete method;}
};



File "client.cpp":

// pass the call, for example
int Method::f(){return method->f();}

// create working object with factory
Method::Method(int type):method(0),status(0)
{
    method = LocalMethod::newObject(type);
}





File "server.h", (only "Method" variants designers see it):

#include "client.h" // need enumerator in Method

class LocalMethod
{
public:
    int status;
    // functionality - abstract functions, for example
    virtual int f()=0;

    virtual ~LocalMethod(){}
    // factory
    static LocalMethod * newObject(int type);
};




File "server1.h":

#include "server.h"

class Method1:public LocalMethod
{
public:
    // functionality, for example
    virtual int f();

    virtual ~Method1();
};



And so on: for each "Method" variant (i) there is a pair of files "server(i).h" and "server(i).cpp".


File "factory.cpp":

#include "server1.h"
// ... and so on for each variant

LocalMethod * LocalMethod::newObject(int type)
{
    LocalMethod *t;
    switch(type){
    case Method::Method1: t=new Method1; break;
    // other cases - other variants
    default: t=new Method1;
    }
    return t;
}



This creates plenty of files, but the separation of programmers' work and the ease of maintenance justify it.
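For illustration, the whole arrangement can be collapsed into a single translation unit; each marker below would be a separate file in the layout described above, and the trivial body of f() is ours:

```cpp
#include <cassert>

// --- "server.h" ---
class LocalMethod
{
public:
    int status;
    virtual int f() = 0;
    virtual ~LocalMethod() {}
    static LocalMethod *newObject(int type);
};

// --- "server1.h" ---
class Method1 : public LocalMethod
{
public:
    virtual int f();
    virtual ~Method1() {}
};

// --- "server1.cpp" --- (the body of f() is ours, for illustration)
int Method1::f() { return 1; }

// --- "factory.cpp" ---
LocalMethod *LocalMethod::newObject(int type)
{
    switch (type) {
    // one case per variant; here there is only Method1
    default: return new Method1;
    }
}
```

A client holds only a LocalMethod pointer obtained from the factory and never sees Method1's declaration.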


Interactive Construction of "Method"

This is a design pattern which we apply to the run-time creation of an object which represents a model, an optimization method or another "Method" object, and which is used many times after it is set up, for example, for computation. It can be applied in other situations as well.

The creation of such an object requires input from a user, who sets the various parameters which determine the structure of the object - the number of rows and columns in a table, the types of procedures which have to be deployed, etc. The process of setting these parameters is guided by the application - some values make particular potential turns in the process irrelevant, others require the setting of some special parameters, etc.

In this design we have to decide where the logic which governs parameter setting and the corresponding object construction resides. While it looks natural to place it close to the graphical interface, there is one argument which makes such a decision undesirable.

This "Method" object will change: the way it computes, the way it is structured, etc. If we place structuring decisions in one module of the software and computing decisions in another, we will have a perpetual maintenance and debugging problem. We need to put them in one place. This place is the object itself.

Now we have another problem: how do we pass this guiding logic to the graphical interface? In addition, how do we make the design independent of a particular graphical interface, so that we can share the code between different platforms?

We found a solution in the following construction.

We have two objects: one is our "Method" object and the other is its "Proxy" for the graphical interface. Both objects have modules which can exchange information in their special "language". These modules communicate with their "host" objects in whatever way is convenient for the object, and they communicate between themselves with "messages".

This "language" and the conversion between the "language" and the "messages" are entirely the internal business of this pair of objects, "Method" and "Proxy"; hence they are very flexible.

"Proxy" presents data in a form convenient for a graphical interface.

This pair works as follows.

The "Method" starts with some default settings and presents the initial structure of the data to the "Proxy". The "Proxy" in turn presents it to the graphical user interface (GUI).

Now we have a dialog:

The user changes data. The GUI sends it to the "Proxy", and the "Proxy" sends it to the "Method". The "Method" reconstructs itself according to the new settings and sends new data to the "Proxy". The "Proxy" sends it to the GUI, and the user has an opportunity to make the next change, and so on.

Obviously, speed-ups are possible, because not every change in the data leads to reconstruction; mostly it leads to the setting of a new value within the bounds of the current data structure.

To implement the design we need a "Method" which has a kind of dynamic constructor (and, hence, a dynamic destructor) with proper setting of initial values. The ordinary "static" constructor and destructor are useful as well - they manage the structure which is not changed as a result of user input.

The simplest, if somewhat wasteful, implementation of the "language" used internally by the "Method" and "Proxy" is a set of cells holding data values, with some additional information about their placement on the user's screen.
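A minimal sketch of such a cell-based "message"; the field names are our assumption, not a prescribed layout:

```cpp
#include <cassert>
#include <string>
#include <vector>

// A "message" as a flat set of cells: each cell holds one value plus a
// hint about its placement on the screen.
struct Cell {
    std::string name;   // parameter name, e.g. "rows"
    std::string value;  // current value as text
    int row, column;    // placement hint for the GUI
};

struct Message {
    std::vector<Cell> cells;
    // linear search is enough for the small messages this design exchanges
    const Cell *find(const std::string &name) const {
        for (size_t i = 0; i < cells.size(); ++i)
            if (cells[i].name == name) return &cells[i];
        return 0;
    }
};
```

The "Method" fills such a message from its internal state; the "Proxy" needs no knowledge of that state, only of the cells.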

If we need to present the "Method" with the help of an abstract base class, it can look like this:

class BaseMethod
{
public:
    int status;
    // a derived class constructor should set status with initial() -
    // calling the pure virtual initial() here would fail
    BaseMethod():status(0){}
    // destroys both static and dynamic parts
    // need to call destruct() in derived class
    virtual ~BaseMethod(){}
    // initial form and data of the dynamic part of the model
    virtual int initial()=0;
    // dynamic constructor
    virtual int construct(Message & data){return 0;}
    // dynamic destructor
    // clears structure constructed by the construct()
    virtual void destruct(){}
    virtual int compute()=0;
    // presents current state
    virtual int current_state(Message & data)=0;
};
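A hedged sketch of a derived "Method" built on such a base; the toy table model, the stand-in Message and the trivial compute() are ours, for illustration only:

```cpp
#include <cassert>

struct Message { int rows; };  // stand-in for the real Message class

class BaseMethod {
public:
    int status;
    BaseMethod() : status(0) {}   // derived constructor calls initial()
    virtual ~BaseMethod() {}
    virtual int initial() = 0;
    virtual int construct(Message &data) { return 0; }  // dynamic constructor
    virtual void destruct() {}                          // dynamic destructor
    virtual int compute() = 0;
    virtual int current_state(Message &data) = 0;
};

// A toy model: a table whose row count is set interactively.
class TableMethod : public BaseMethod {
    int rows;
public:
    TableMethod() { status = initial(); }
    ~TableMethod() { destruct(); }
    int initial() { rows = 1; return 0; }            // default structure
    int construct(Message &data) { rows = data.rows; return 0; }
    void destruct() { rows = 0; }
    int compute() { return rows * 2; }               // placeholder work
    int current_state(Message &data) { data.rows = rows; return 0; }
};
```

Each user change arrives as a Message, passes through construct(), and the new state goes back out through current_state().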

Layers and Modules

There is an important and often overlooked issue of separating the development areas (groups of files) of different software developers. It is closely related to the separation of different logical units into different groups of files, to minimize recompilation in the case of code corrections. Obviously, this leads to a style based on the creation of interfaces and their separation from the implementation. The ideas we present here are well known; we offer only a different point of view and different terms and concepts to describe them.

The traditional solution - presenting an interface as a base class (preferably abstract) and particular variants of implementation as derived classes (preferably with the support of an object factory) - often does not provide sufficient separation of the "work area" of the user of the interface from the "work areas" of the developers of the various variants of implementation. The user has to see the declarations of the derived classes, and in the case of a small change in the implementation of a derived class, the user's "work area" is affected.

We have found useful an approach like the one we demonstrated above: wrapping the working object.

In the interface header file we have:

class InnerLayer;

class OuterLayer
{
    InnerLayer *inner;
public:
    enum{ type1,type2 /* ,...,typen */ };
    // interface functions
    OuterLayer (int type);
    ~OuterLayer ();
};


This file does not change often. OuterLayer passes the work to an object of class InnerLayer.

The constructor OuterLayer(int type) relies on the object factory

InnerLayer *newObject(int type);

which is implemented in a separate file. To reduce pollution of the namespace, it is reasonable to make this function a static member function of the class InnerLayer.

Class InnerLayer is an interface class itself, but it is a local interface, visible to the developers of the variants of implementation and invisible to users. It is a base class with virtual member functions. It does not need to be as refined as the OuterLayer class, and it can undergo frequent corrections, because it is shared by a small group of developers.

Variants of implementation - classes derived from InnerLayer - each have a separate group of files.

This organization requires only a few additional files, but it simplifies the development and maintenance of the code.

The implementation of OuterLayer's copy constructor and assignment operator is done with a cloning function - a virtual function in each class "Variant" derived from InnerLayer:

virtual InnerLayer * newClone() const
{ return new Variant(*this);}
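For illustration, here is a minimal self-contained sketch of how OuterLayer's copy operations can rely on such a cloning function; the trivial Variant and its value() are ours:

```cpp
#include <cassert>

class InnerLayer {
public:
    virtual int value() = 0;  // stand-in for real functionality
    virtual InnerLayer *newClone() const = 0;
    virtual ~InnerLayer() {}
};

class Variant : public InnerLayer {
    int v;
public:
    Variant(int av) : v(av) {}
    int value() { return v; }
    InnerLayer *newClone() const { return new Variant(*this); }
};

class OuterLayer {
    InnerLayer *inner;
public:
    OuterLayer(int v) : inner(new Variant(v)) {}
    // deep copy through the virtual clone - no knowledge of Variant needed
    OuterLayer(const OuterLayer &b) : inner(b.inner->newClone()) {}
    OuterLayer &operator=(const OuterLayer &b) {
        if (this != &b) {
            InnerLayer *t = b.inner->newClone();
            delete inner;
            inner = t;
        }
        return *this;
    }
    ~OuterLayer() { delete inner; }
    int value() { return inner->value(); }
};
```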

Class InnerLayer and the classes derived from it can now be refined through iterative improvements. It is desirable to construct this system of classes through the composition of independent modules. Usually it takes a long time to extract such modules, which present an independent concept and can serve a few derived classes simultaneously, but the work is well worth the effort. Independent modules simplify the design, improve the reliability of the system, extend functionality and, again, reduce recompilation in the case of small corrections.

This set of independent modules forms the next layer of the design.

Instead of one class OuterLayer we can use a few different classes which all rely on the inner layers. This is an inexpensive way to achieve diversity.


Library - a System of Concepts

While we advocate the separation of the "working areas" of different programmers, we are strongly against hiding a library's implementation and forcing its use through the learning of its interface alone.

We perceive a good Library as a treasury where programming ideas and concepts accumulate in a coherent system. The main advantage of a Library is not that it supplies functionality, but that it supplies well-tested concepts and coordinates development through the coordination of the thinking of developers.

Obviously, there is a place for supplying ready functionality, bundled together with a few lines of code, to deliver an extensive and well-coordinated system. In this case, hiding the diversity of the available functionality and providing access to it through special interface objects is appropriate. However, we always have to keep in mind the area of applicability of such an approach. Narrowing down this area often allows very effective, focused development, while generalizing leads to unmanageable complexity.

A Library which works as a system of concepts should be judged from the learning point of view.

Its elements should be relatively simple, they should form relatively independent logical groups, and they should encapsulate non-trivial solutions.

Overly complex elements of a Library are counterproductive, even when they deliver great functionality, and even when they are elegant. They cannot serve as a conceptual basis for the thinking of a programmer.

Non-trivial solutions are solutions at which one arrives after a few trials. Hence, there is no sense in opening a Library to chaotic updating. Each change and addition should be carefully considered. Also, a healthy conservatism in a Library is welcome - it is better to add a new concept than to change one which is already well embedded in the thinking.

On the other hand, a good Library (special or general) cannot stand still - new ideas in programming and new hardware and software environments should prompt changes in it. Sometimes the changes in thinking are so great that a need for a new Library arises.

In any developed set of concepts there are many interrelations between the concepts of the set; similarly, there are many interrelations between library elements. However, it is usually possible to extract subsets of files which can be used independently from other such subsets. This is not done based on a simple logical scheme; rather, it emerges from the process of the library's development.