The Art of Optimization

Alexander Liss

Part II

11/19/97; 11/21/99

 

Classification

1. Classification of Parameters and Measurements

2. Classification of Optimization Models

3. Classification of Optimization Procedures

4. Classification of Optimization Methods

Optimization Models

Models

Model Walk

Iterations of Formal Description

Building of the Goal Function

The Model and Data Gathering

Base Models

Library of Models

 

 

We bring here an extended Classification and an extended summary related to the Models in our system. Inevitably, some of the information presented in Part I is repeated here.

Classification

The Classification is based on a set of different viewpoints from which we can look at the decision-making process - properties of the process. Some of them are quite obvious; others are less obvious and reflect the experience of making decisions in different areas.

 

1. Classification of Parameters and Measurements

The Classification of Parameters and Measurements emerged as a reaction to the misuse of methods of optimization and methods of evaluating the quality of solutions. Formally correct manipulation of parameters which are not suited to the description of the particular problem leads to wrong results. We still have this kind of problem with many "famous" characteristics: inflation, cost of living, IQ, etc. The characteristics are not bad by themselves - indeed, they are very useful - however, they are often used improperly: some expect from them something they were not intended to deliver, high precision in particular. We wanted to specify what proper parameters are. This turns out not to be a simple task. It is easy to come up with something theoretically right but useless. Say, we could demand that each measurement be accompanied by information about its precision. Sounds nice, but it is not practical.

We came up with two complementary ideas.

First, we described carefully what a measurement is and what can be expected from measurements of different types. For example, giving a test and computing a person's IQ according to the rules is a measurement. However, what can we do with that number - the result of this measurement? Can we say that a person with an IQ of 150 is 50% smarter than a person with an IQ of 100? And if not, how can we use it in a model, or in any other evaluation? If one IQ is 101 and the other is 103, can we say that the second is better than the first, or can this difference be caused by factors which have no relation to what we want to measure? In other words, what is the error margin of our measurement? If we make important decisions based on results of measurement which differ so little that the difference can be caused by random errors, how is this decision better than rolling a die? This subject, however, has been discussed many times in the literature, and the results of those discussions are similar to what we offer.

Second, we address the issue of adequately describing a property of the object with a parameter that we can measure. Let us start with an explanation of why this is a problem to begin with, using an example. Say we have to estimate how much it costs us to produce something - for example, the same sheet-metal box that we worked with before. The obvious approach is to add up the prices of materials, the cost of labor, and the depreciation of machinery and buildings. This is the traditional accounting approach, and it works quite well. However, let us look closer at what we are doing: how much does this estimate of the cost reflect what we really want to measure, and especially, how well can we use the numbers we get to compare different variants, in particular costs which occur at different moments in time? To make this measurement of the cost we use fluid values - current prices. On top of that, we use depreciation formulas which were invented for different accounting goals, not to be part of this measurement process. And we certainly did not take inflation into account. Altogether, the numbers we get do not reflect the cost as a property of this product with the precision we might expect merely because all calculations were done precisely. The very nature of the data used for this measurement does not allow us to reach a high level of precision. This does not mean that the measurement is bad; it means we have to be aware of its limitations and use its results properly.

We define three distinct classes of parameters. Each class is suited differently to describing properties. How we mix these classes of parameters is very important. It is important in the model - if we use "weak" parameters, which cannot deliver high precision, we should not expect high precision from the model.

Even more important is which classes of parameters are used in the measurement procedure. Very often this stays hidden. The same conclusion holds here as with models - we cannot expect high precision of measurement if we use "weak" parameters in the measurement procedure.

To the best of our knowledge, it was D.M. Komarov who pointed out this problem explicitly and came up with the idea of three classes of parameters.

In the scientific community, the introduction of a new type of parameter goes together with its measurement. Each such case is a sign of a substantial advance in the understanding of nature, society, etc. A new parameter reflects a new concept. It is important how this concept applies to the problem which we want to solve. The measurement reflects how far this new concept can be quantified.

We have to pay close attention to the way the parameter is measured. Misunderstanding it leads to serious mistakes.

In general, measurement includes several steps:

How much we can expect from the results of a measurement, and to what degree we can improve the measurement if we need to, depend on the type of the parameter and the type of measurement used. For example, if we measure someone's acceptance of risk, we cannot expect great precision of measurement. It is easy to achieve a high degree of precision if we measure the length of an object with tools, but not when we have to rely solely on our visual perception.

What we can do with the results of a measurement is determined by the type of measurement. Sometimes results of measurement can be used only to determine that one object is better than another in some sense. Sometimes we know, as in the case of measuring volume, that the ratio of the measured results of different objects has a definite meaning. There is a case in between, such as temperature, where the difference in temperature between two objects has real meaning, but the ratio does not reflect any real property of the objects.

The precision of the measurement is an important and difficult issue.

It is understandable how to estimate a statistical characteristic of the precision - measure the same object a few times and compare results. Average them if possible - remember, it is not always permissible to add up results of measurement and take the average value as a better measurement. The variance gives an estimate of the statistical error of the measurement.
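A minimal sketch of this statistical estimate, assuming hypothetical repeated readings of one object and a scale that permits averaging:

    import statistics

    # Hypothetical repeated measurements of the same object (e.g., a length in mm).
    readings = [102.1, 101.8, 102.4, 101.9, 102.0]

    mean = statistics.mean(readings)    # valid only if the scale permits addition
    stdev = statistics.stdev(readings)  # spread: an estimate of random measurement error
    print(f"estimate: {mean:.2f}, statistical error ~ {stdev:.2f}")
    # This captures only random error; a biased "measuring tape"
    # (systematic error) is invisible to this check.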

But what if the tape which we use to measure is wrong? For established parameters (length, weight, etc.) we have institutions which help us keep the "measuring tape" in order.

However, if we use non-technical parameters, we have to look closely at the measurement procedure itself and analyze whether we measure the property we want to measure, or some other property which is close to what we want but is not exactly it.

We have to analyze the sources of information used in the measurement and what kinds of parameters we use in the process.

1.1. Classification of Parameters

We divide parameters and characteristics into three classes:

1.1.1. technical (length, mass, time, fuel efficiency etc.),

1.1.2. money (cost, price, interest rate etc.),

1.1.3. conventional (acceptance of risk, warmth of color, etc.).

The precision of optimization depends on the class of values used as parameters and goals in the process of optimization. There are errors of the models, which we can estimate and from which we can derive errors of the optimal solution. However, it is not always possible to find these errors. What is known for sure is that the available tools and the experience in precise measurement differ greatly from technical values to money values, and from money values to conventional values. Obviously we can make big mistakes using technical values; however, as a rule they are the most precise, and functions, relations and other technical studies about them are available. This makes technical values very attractive as a basis for optimization. In some cases we can use technical values to describe goals as well, especially if some norms are available. In this case the entire optimization problem lies in an area familiar to engineers, and it can be solved more effectively and more precisely. Hence we recommend limiting the use of values to technical ones whenever possible.

We use money values to reach a compromise between "incomparable" areas. Hence we have to use money values in many cases when we describe goals. Sometimes money values are the main parameters of the model, as with cash flow. Money values, as tools for describing reality, are by their nature less precise than technical values (even when they are measured precisely). Consider inflation and the fluctuation of interest rates and prices, and the lack of precision becomes obvious. However, there is extensive experience of the appropriate use of money values in making decisions, and we rely on it in optimization.

We use conventional values when the available technical and money values are not enough to solve the problem. These values are the least precise tools of description, and they have to be used carefully.
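The "weakest link" effect of mixing classes can be made explicit. A minimal sketch, under the working assumption that the three classes are ordered by achievable precision:

    from enum import IntEnum

    class ParamClass(IntEnum):
        # Ordered by the precision we can typically expect (an assumption).
        CONVENTIONAL = 1  # acceptance of risk, warmth of color, ...
        MONEY = 2         # cost, price, interest rate, ...
        TECHNICAL = 3     # length, mass, time, fuel efficiency, ...

    def expected_precision_class(param_classes):
        """A model is no more precise than its weakest class of parameters."""
        return min(param_classes)

    model_params = [ParamClass.TECHNICAL, ParamClass.TECHNICAL, ParamClass.MONEY]
    print(expected_precision_class(model_params).name)  # -> MONEY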

 

1.2. Classification of Measurements

We provide the classification of measurements based on two properties:

1.2.1. What kinds of operations can we perform with the results of measurement?

We cannot add measurements of temperature in some scales, and so on. In optimization it is very important to keep in mind what we can (and cannot) do with the values we use, especially when we build a goal function.
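To illustrate, the standard scale types from measurement theory (ordinal, interval, ratio) can be encoded with the operations each permits; the temperature and volume examples match the discussion above:

    # Permissible operations depend on the type of measurement.
    OPS = {
        "ordinal":  {"compare"},                         # e.g., ranks: only order is meaningful
        "interval": {"compare", "difference"},           # e.g., temperature in Celsius
        "ratio":    {"compare", "difference", "ratio"},  # e.g., volume
    }

    def check(scale, op):
        if op not in OPS[scale]:
            raise ValueError(f"'{op}' is not meaningful on a {scale} scale")

    check("ratio", "ratio")          # fine: "twice the volume" means something
    check("interval", "difference")  # fine: "10 degrees warmer" means something
    check("interval", "ratio")       # raises: "twice as warm" in Celsius is meaningless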

1.2.2. What are the sources of information used in the measurement procedures?

This is used as an indirect way of assuring proper measurement precision. We can do everything properly, but if our "measuring tape" is imprecise, we make big mistakes. For example, we can execute a survey (a poll) precisely, but if the underlying model which determines how the sampling is done is wrong, the entire result is wrong (and the mistakes of this model are usually not estimated).

 

2. Classification of Optimization Models

Many models are built from modules. We should look at the modules of a model from a few different points of view, which we describe below, and ask:

- How are the modules of the model combined?

2.1. How do we incorporate previous decisions in the model?

2.1.1. A previous decision can supply initial values of the parameters we want to optimize.

2.1.2. If we have a model based on the previous decision, we check increments - we vary the parameters a little and watch what happens to the constraints and the goal (see the sketch after this list). Models using increments are much simpler, because we can assume that the functions involved depend linearly on the value of the increment. Obviously, if the value of the increment is too big, this assumption is wrong. It is the responsibility of the model designer to point out the boundaries of applicability for models of this type.

2.1.3. If we optimize a Group of Objects (we optimize a set of objects, for example a set of machine tools of the same type but of different power), "small" increments include adding new objects to the group and removing some of the group's objects. In this case the number of objects in the group changes as well.
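A minimal sketch of an increment model (2.1.2); the function f and the stated increment bound are hypothetical:

    def f(x):
        """Stand-in for a goal (or constraint) computed by the full model."""
        return 3.0 * x + 0.05 * x ** 2

    x0 = 10.0            # previous decision
    max_increment = 1.0  # boundary of applicability stated by the model designer

    slope = (f(x0 + 1e-6) - f(x0)) / 1e-6  # local slope at the previous decision

    def linearized(dx):
        """Increment model: assumes f depends linearly on the increment."""
        return f(x0) + slope * dx

    for dx in (0.1, 1.0, 10.0):
        err = abs(f(x0 + dx) - linearized(dx))
        note = "within bounds" if abs(dx) <= max_increment else "outside stated bounds"
        print(f"dx={dx:5.1f}  linearization error={err:8.4f}  ({note})")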

2.2. How is the functioning of the object of optimization described?

2.2.1. We look at the structure of the object which we want to optimize - whether it is divided into elements, and how the elements are connected.

2.2.1.1. There are models which do not divide the object into elements. An important class of models of this kind is the class of operations research models.

2.2.1.2. If there is a division into elements, we have a general case, with complicated connections. There are two important cases we would like to look at separately:

2.2.1.2.1. Elements are connected sequentially, possibly with a few additional connections (as feedback for example).

2.2.1.2.2. Elements are not connected but have to be optimized as a group; we call this "the Group of Objects".

2.2.2. If we look at the source of information for the model's functions, we divide models into two classes:

2.2.2.1. theoretical,

2.2.2.2. empirical.

2.2.3. An important class of models is macro models. Macro models are widely used; they are built on some kind of general theory applicable to different types of objects - for example, statistical analysis or fundamental physical equations.

 

2.3. How are preferences described?

2.3.1. Sometimes the preferences can be described with one goal function.

2.3.2. More often we have to use a goal function, a few constraints and a few equations, especially if we use norms.

2.3.3. We can provide a description of a few contradicting goal functions and a method of constructing a compromise goal function and constraints in the particular situation.

2.3.4. There is no need to present preferences with a classical goal function - any algorithm which allows us to compare any two potential solutions will suffice.

2.3.4.1. This can be a given algorithm, for example two or more hierarchically organized goal functions (see the sketch after this list).

2.3.4.2. It can also be an algorithm which requires input from specialists each time we need to compare two potential solutions.
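A minimal sketch of 2.3.4.1: preferences given as an algorithm that compares two variants using hierarchically ordered goals. The characteristic names and the tie tolerance are hypothetical:

    def better(a, b, tol=1e-6):
        """True if variant a is preferred to variant b.
        The primary goal decides; the secondary goal only breaks near-ties."""
        if abs(a["cost"] - b["cost"]) > tol:
            return a["cost"] < b["cost"]  # primary goal: minimize cost
        return a["weight"] < b["weight"]  # secondary goal: minimize weight

    def best(variants):
        champion = variants[0]
        for v in variants[1:]:
            if better(v, champion):
                champion = v
        return champion

    variants = [
        {"cost": 100.0, "weight": 9.0},
        {"cost": 100.0, "weight": 7.5},  # ties on cost, wins on weight
        {"cost": 120.0, "weight": 5.0},
    ]
    print(best(variants))  # -> {'cost': 100.0, 'weight': 7.5}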

 

2.4. What is the structure of the set of variants?

The set of variants of possible solutions is defined by the possible values of the parameters. It affects which kind of optimization procedure can be used, what kind of data gathering system can be employed, how the solution has to be interpreted, etc.

2.4.1. It can be:

2.4.1.1. finite,

2.4.1.2. infinite;

2.4.1.2.1 discrete,

2.4.1.2.2. continuous,

2.4.1.2.2.1. one dimensional,

2.4.1.2.2.2. has many dimensions,

2.4.1.2.3. a discrete set of continuous components.

This mostly affects the optimization procedure.

2.4.2. If parameters are changing in time, or if they are distributed in space, this affects many things in the way we use the model.

2.4.2.1. If they change in time, in the majority of models they stay the same for some period of time and then jump to the next value.

2.4.2.2. If they are distributed in space, this is mostly a finite-element model.

2.4.3. Some models include in the set of optimized parameters some parameters which we do not control (for example, together with the design parameters, the production volume or parameters of the technological process can be optimized). The applicability of a model of this kind has to be specially checked.

2.5. What kinds of tools supplement the model to help us check the model's applicability and correct the model, making it applicable to the particular problem?

There are three major sources which can help us check the applicability of the model and find out how to correct it in the needed direction.

2.5.1. If there is a better model, which describes the situation in more detail (deeper) and more fully (wider), we can use it to check how good the model in question is in the particular situation. We do not use the better model instead of our model because it is too expensive to use, or there is not enough data, or simply because our model delivers results of sufficient quality.

2.5.2. Sensitivity coefficients allow us to check the assumption that some parameters are fixed. This is a known and efficient tool (see the sketch after this list).

2.5.3. We can compute the model and compare the results with experimental data. This can be special data gathered for this purpose, or some retrospective data.
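A minimal sketch of a sensitivity coefficient (2.5.2): how the optimal goal value responds to a parameter p that the model assumes fixed. The inner optimization here has a closed form purely for illustration:

    def optimum(p):
        """Optimal goal value for a fixed parameter p.
        min over x of (x - p)**2 + p*x; the argmin is x = p/2."""
        x = p / 2.0
        return (x - p) ** 2 + p * x

    p0, dp = 4.0, 0.01
    sensitivity = (optimum(p0 + dp) - optimum(p0 - dp)) / (2 * dp)
    print(f"d(optimal goal)/dp at p={p0}: {sensitivity:.3f}")
    # A large coefficient warns that "p is fixed" is a risky assumption.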

 

3. Classification of Optimization Procedures

We look at the variety of optimization procedures from a few viewpoints:

3.1. What is the structure of the set of variants and how do we approximate it?

We provide this classification in the model classification above.

3.2. What are the assumptions about the existence and the number of "best" variants?

3.3. Can we reduce the problem to the case when all functions involved (the goal and the constraints) are linear?

The optimization procedure can be constructed from a few "basic" procedures. In this case we apply our questions to each "basic" procedure and ask:

- How do the "basic" procedures work together?
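A minimal sketch of such a combination: coarse screening over a discrete grid of variants, followed by local one-dimensional refinement with a golden-section search. The goal function is hypothetical:

    import math

    def goal(x):
        return (x - 2.7) ** 2 + math.sin(5 * x)  # hypothetical goal to minimize

    # Basic procedure 1: coarse screening over a grid of variants.
    grid = [i * 0.5 for i in range(13)]  # 0.0 .. 6.0
    x0 = min(grid, key=goal)

    # Basic procedure 2: golden-section refinement near the screened winner.
    def golden(f, a, b, tol=1e-5):
        phi = (math.sqrt(5) - 1) / 2
        c, d = b - phi * (b - a), a + phi * (b - a)
        while b - a > tol:
            if f(c) < f(d):
                b, d = d, c
                c = b - phi * (b - a)
            else:
                a, c = c, d
                d = a + phi * (b - a)
        return (a + b) / 2

    x_star = golden(goal, x0 - 0.5, x0 + 0.5)
    print(f"screened: {x0}, refined: {x_star:.4f}, goal: {goal(x_star):.4f}")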

 

 

4. Classification of Optimization Methods

4.1. How are calculations organized?

4.1.1. Do we use databases, and which ones?

Databases turn out to be an important part of an optimization method:

- we screen variants of the solution, and sometimes want to get back to a solution which we discarded before;

- we change the formal description and want to navigate easily among different formal descriptions.

All this is done with databases - some of them can be really simple. If we do not use databases at all, that in itself says the method does not support fast manipulation of the data and the model, perhaps because the problem is simple and does not need extensive computerized support.
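A minimal sketch of the simplest such database: variants are logged as they are screened, so a discarded solution can be retrieved later. The schema and names are hypothetical:

    import sqlite3

    db = sqlite3.connect(":memory:")  # a real file would survive between sessions
    db.execute("""CREATE TABLE variants (
        id INTEGER PRIMARY KEY,
        description TEXT,
        goal_value REAL,
        status TEXT)""")  # 'kept' or 'discarded'

    def log_variant(desc, goal, status):
        db.execute(
            "INSERT INTO variants (description, goal_value, status) VALUES (?, ?, ?)",
            (desc, goal, status))

    log_variant("steel box, 2 mm sheet", 104.0, "discarded")
    log_variant("steel box, 3 mm sheet", 98.5, "kept")

    # Later: get back to discarded variants, best first.
    for row in db.execute(
            "SELECT description, goal_value FROM variants "
            "WHERE status = 'discarded' ORDER BY goal_value"):
        print(row)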

4.1.2. Do we use a model of the functioning of the object, and which type?

If we do not use a model of the functioning of the object, most likely we have a very simple problem, or a problem for which we do not have a formal description.

4.1.3. How do we translate a system of preferences into the goal function, constraints and a set of norms?

To make an optimization model we need to formally describe a system of preferences - a method to find which of two variants of the solution we prefer. We add it to the model of functioning, and then we can optimize the solution. The system of preferences does not need to be described in the rigid form of a goal function, as is assumed in many optimization procedures; it can be presented as a set of rules (norms) supplemented with constraints, etc. After all, a computer can execute a search as long as the preferences and the acceptable variants can be distinguished with the help of some algorithm.

4.1.4. Do we use an optimization procedure, and which one?

Sometimes we can find a solution without an optimization procedure - based on norms only (a sketch follows). We define the types of optimization procedures based on the presence of non-linear constraints or a non-linear goal function, and on the presence of discrete variables. One-dimensional optimization we define as a special type.
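A minimal sketch of a norms-only selection, with hypothetical norms and variants:

    # Each norm is a rule that must hold for a variant to be acceptable.
    norms = [
        lambda v: v["thickness"] >= 2.0,                  # minimal-thickness norm
        lambda v: v["stress"] <= 0.6 * v["yield_limit"],  # safety-factor norm
    ]

    variants = [
        {"name": "A", "thickness": 1.5, "stress": 100.0, "yield_limit": 250.0},
        {"name": "B", "thickness": 2.5, "stress": 120.0, "yield_limit": 250.0},
        {"name": "C", "thickness": 3.0, "stress": 180.0, "yield_limit": 250.0},
    ]

    acceptable = [v["name"] for v in variants if all(rule(v) for rule in norms)]
    print(acceptable)  # -> ['B']; A fails the thickness norm, C fails the safety norm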

 

4.2. To what degree do the different parts of the optimization process have a formal description, and to what degree are they automated?

4.2.1. Do we use a database for the initial accumulation of information and which one?

4.2.2. Do we use models and norms, and how much do we rely on them?

4.2.3. Do we use formal procedures to find proper correction for the:

4.2.3.1. set of parameters and characteristics,

4.2.3.2. model of functioning of the object,

4.2.3.3. model of preferences?

Formal correction procedures are relatively rare, but if we do have them, this speeds up and simplifies decision-making.

4.2.4. Do we have the following automated, and to what degree:

4.2.4.1. data input,

4.2.4.2. calculations,

4.2.4.3. optimization,

4.2.4.4. realization of final decision?

When our decision-making process is part of a technological process (as with the decisions of operators of a technological process in the chemical industry), we often have automated data input and automated implementation of the final decision.

If we use optimization procedures, most likely they are computerized.

 

4.3. How is work scheduled; in particular, do we run different tasks in parallel?

4.3.1. Do we have separate tasks in the process, and how are they scheduled?

4.3.1.1. We have separate tasks.

4.3.1.1.1. Do we run some tasks in parallel?

4.3.1.1.1.1. We run some tasks in parallel.

4.3.1.1.1.1.1. Do we use redundant parallel tasks and how do we combine their results?
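A minimal sketch of redundant parallel tasks: the same crude random search is run from several starts, and the results are combined by keeping the best. The goal function is hypothetical:

    import random
    from concurrent.futures import ProcessPoolExecutor

    def goal(x):
        return (x - 1.3) ** 2

    def one_task(seed, steps=1000):
        """One redundant task: a random search from its own starting point."""
        rng = random.Random(seed)
        best_x = rng.uniform(-10, 10)
        for _ in range(steps):
            cand = best_x + rng.gauss(0, 0.5)
            if goal(cand) < goal(best_x):
                best_x = cand
        return best_x

    if __name__ == "__main__":
        with ProcessPoolExecutor() as pool:
            results = list(pool.map(one_task, range(4)))  # four redundant tasks
        combined = min(results, key=goal)                 # combine: keep the best
        print(f"best of {len(results)} runs: {combined:.4f}")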

 

 

4.4. Which set of parameters and characteristics do we use, and how do we measure them?

The set of parameters and characteristics that we use in the decision-making process, its properties, and the subset of parameters that we treat as variables, whose values we have to determine - all this is very important and greatly affects the quality of the decision. The types of parameters and measurements are taken from the Classification of Parameters and Measurements.

4.4.1. Which types of parameters and characteristics do we use and for which purpose?

We need to look at the Optimization Method from this point of view to make a difficult decision about the suitability of the method for solving our problem. We judge the method by the precision we want to achieve and by where the method's "weakest link" is, which might prevent us from achieving it.

If we employ money parameters to optimize technical parameters, then most likely this is an obstacle to achieving high precision, because money parameters by their nature reflect the concepts needed for the optimization of technical parameters with much lower precision than the technical parameters themselves.

Hence the expected precision of the method depends on whether "weak" types of parameters are used anywhere in it - the weaker the type, the weaker the method.

4.4.2. Which types of measurement do we use and for which purpose?

If we use a "weak" measurement of the parameter we employ as a goal function - for example, a measurement whose result can only be one of a finite set of values - we cannot expect high precision of the solution, because there is a whole family of solutions which deliver the same value of the goal function.
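The effect is easy to demonstrate: if the goal survives measurement only as one of a few grades, many distinct variants become indistinguishable. The goal, the grid of candidates and the rounding rule here are hypothetical:

    def true_goal(x):
        return (x - 3.0) ** 2

    def measured_goal(x):
        """A "weak" measurement: only an integer grade survives."""
        return round(true_goal(x))

    candidates = [2.0 + 0.1 * i for i in range(21)]  # 2.0 .. 4.0
    best_grade = min(measured_goal(x) for x in candidates)
    tied = [x for x in candidates if measured_goal(x) == best_grade]
    print(f"variants indistinguishable by the measured goal: {len(tied)}")
    # The whole tied family is "optimal" as far as this measurement can tell.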

4.4.3. How do we combine different types of parameters and characteristics, and different types of their measurement, to ensure the applicability and reliability of the results?

4.4.4. For the parameters which we treat as variables:

4.4.4.1. what is the structure of this set,

4.4.4.2. how precise should the result be,

4.4.4.3. how does this set affect the type of the model,

4.4.4.4. how does this set affect the type of optimization procedure?

 

4.5. What is the place of the chosen problem description among the possible descriptions?

The usual approach is to alternate between macro methods of full analysis and detailed methods of partial analysis. The computation is organized as a chain of corrections to an initial decision.

4.5.1. How many of the factors that can influence the situation in question did we incorporate in our description?

4.5.1.1. Methods of partial analysis look at a part of the problem - at some of the parameters needed to describe the problem, the object and the circumstances. Usually we assume that the other parameters keep their current values. These methods are used because the detailed description is easier this way.

4.5.1.2. In contrast, methods of full analysis take all relevant factors into consideration, but they do not need to be detailed.

4.5.2. How far into detail did we go in our description?

4.5.2.1. Macro methods take into consideration factors from many areas and describe the object in general, in connection with other objects and events; however, they lack details, and they employ general parameters which are themselves defined as some kind of average values.

4.5.2.1.1. Based on previous decisions.

It is hard to overestimate the importance of previous decisions as a basis for current ones. In macro methods they are taken as is, or they are corrected to reflect new conditions and new goals. This correction can be a part of the formal model. Usually it is done by introducing and optimizing incremental values of parameters.

4.5.2.1.2. Based on opinions of specialists.

4.5.2.1.3. Based on macro model of the object.

4.5.2.1.4. Based on extrapolation in time.

Sometimes a cautious use of forecast methods, which extrapolate parameters in time, can yield a satisfactory solution.

4.5.2.2. Detailed methods deal with one aspect of the object; they describe the object in great detail, and their parameters either can be measured directly or can be computed in a known way from directly measured values.

4.5.3. Do we use a combination of descriptions, and how do we combine them to find the solution?

Methods which describe the situation across the full spectrum of factors involved and in full detail are not practical - it is expensive or impossible to design methods of this kind or to gather the information needed to compute them, and it is much more than is needed to solve the problem.

 

4.6. How do we reduce uncertainty and its negative influence on the decision?

We have to deal with uncertainty in any decision-making. The uncertainty occurs because we make a decision today that has to be right in the unknown circumstances of tomorrow. In addition, we never have complete knowledge about the object of the decision and the corresponding situation. We learn in the process of decision-making - this is our main way of dealing with uncertainty.

The most effective way of dealing with uncertainty is to build a system, an organization, whose purpose is to ensure effective decisions in spite of uncertainty. This is hard and expensive.

The next best thing is the use of norms and models. They accumulate the knowledge and experience of many people in a convenient, ready-to-use form.

We mentioned above one special way of building the Optimization Method - a combination of macro methods and methods of partial analysis - which is a method of "fast learning" in an uncertain situation.

There are many special inventions in decision-making that help reduce uncertainty, or help reduce the influence of uncertainty on the results of the decision (which is not the same thing).

Here, our interest is only in the use of these special inventions - the "tools" of decision-making.

4.6.1. Reduction of uncertainty can be done in different ways. Mostly it is done by gathering additional information. This, in turn, can be tricky.

4.6.1.1. Changing the model to adjust it to the situation.

4.6.1.2. Using a set of models instead of one model. Most often this set of models is organized hierarchically, so that the results of one model are fed into other models.

4.6.1.3. Combining a theoretical model with experimental procedures. These procedures can supply the information needed by the model, and we optimize based on the model alone; or they can be combined with the model in one optimization process, as is done in experimental optimization.

4.6.1.4. Using probability theory and statistics to formally describe and gather additional information which can help us improve the quality of the solution.

4.6.2. Many of these "tools" of decision-making are based on the same underlying idea: if the level of uncertainty is high, make a decision that can be corrected later according to newly available information.

4.6.2.1. Instead of fixing the value of a parameter at the time of manufacturing, we make the parameter variable and give the customer the ability to set it according to their particular circumstances.

4.6.2.2. Instead of a "universal" object, we produce a set of objects, so that a customer can choose the one which fits his needs best.

4.6.2.3. We produce a chain of models of the product, each time introducing relatively small, manageable innovations. Along the way we learn the market, adapt manufacturing, prepare new technical decisions and keep customers interested (a new feature of the product is an important attraction). This approach is especially common in software production. It is an interesting example where a technical problem (the lack of information about what is needed by and acceptable to the customers) was turned into the lucrative business of selling software upgrades.

 

4.7. Do we consider different types of solutions, and which ones?

4.7.1.1. We consider the production of one universal object versus the production of a set of more specialized ones, and choose the production variant which better suits our goals.

4.7.1.2. We consider independent production of a set of objects versus a set of objects with a modular structure, where modules can be exchanged between different objects.

4.7.1.3. We consider some parameters to be variable, and do not fix their values until the moment of the final decision.

4.7.1.4. We optimize a product, its future modifications and the schedule of their introduction. Note that methods of this kind must have low sensitivity of the part of the solution related to the current time to variations in the part of the solution related to future times, because most likely we will correct the values of the future part of the solution when we gather more information.

 

Optimization Models

Models

Acceptance of a model goes through several steps, which include test use, where the area of applicability is found, and refinement of the description, where standardization is achieved. This is a long process, and we have to speed it up.

The description of a model's area of applicability, and the procedures which can be used to check the applicability of the model in a particular situation, are parts of the model.

If we know some objects or situations which are described by the model in question, or if we can produce them, we have a chance to check how close the model is to reality.

We can use another model, or a set of other models, and check how factors which are captured by the other model but not by the model in question influence the quality of the model in the particular situation.

In this way we can estimate coefficients that show how sensitive the results are to these factors, and find a direction for correcting the model based on this information.

The collection of models includes general models, formulated in general terms, and specific models, formulated in terms specific to an industry, an area, etc.

General models are close to computation but further from real problems; specific models capture specific problems and provide a link between a specific problem and an abstract general model which is ready to be computed.

The description of specific models can include a reference to a general model, if such a model exists in the collection. This structure simplifies both description and correction of models.

The collection of computerized models follows suit: it includes general models, some of which can call optimization procedures, and concrete ones that use some general model. Some models can be used as modules in other models.

Forecast procedures can be a part of the model of functioning. Experienced decision-makers use as much as possible of the available knowledge about the relationships between parameters, and carefully choose the parameters whose values are extrapolated into the future. In all respects, this is modeling.

The developers of our system supply standardized general models, model modules and methods of their use in conjunction with optimization procedures.

 

Model Walk

In the process of solving a problem we might use a set of models, switching from one to another to get a better understanding of the problem. We can study one facet of the problem in depth and in detail, under the assumption that the other facets of the problem are known. On the other hand, we can study all aspects of the problem but restrict ourselves to aggregate parameters, without paying attention to details.

How far we go in this experimentation with different points of view depends on the available resources, mainly the time left until the moment when we have to deliver a solution.

In our system, we assume that this kind of model walk is a natural part of the decision-making process, and we must support the ability to perform it in a convenient way. This means we supply an ample set of parameters, from which the ones needed for a particular model are chosen, and a set of interrelated models which can share the results of calculations.

We do not assume that we have a ready model in the library. We take the "nearest" one and adjust it to our circumstances. This is not a formal process; it requires an in-depth understanding of the situation.

There are a few rules which we follow in our system.

The model has to be tied to the particular decision-making process which it serves. For example, if this is a stage of product design, then usually we do not model the production process. Only if this is an advanced design, which optimizes a product together with the production process, do we model them together.

Usually, we adjust the model to work with existing data gathering systems. Models are much more flexible tools than data gathering systems. It is much easier to adjust a model to work with data already gathered for decision-making in different areas (in combination with some scientific data) than to set up and maintain a new data gathering system and verify its results.

 

Iterations of Formal Description

A formal description of the optimization problem is achieved as the result of a chain of corrections, in which new details are added, unneeded ones are dropped, and so on. There are some recommendations which can be given here; they come from the experience of solving problems of this kind.

We use a few different simple models which describe different aspects of the problem, and we either combine them into a more precise model or combine their results.

We assume one group of parameters which should be optimized to be fixed in one step, and we build a model under this assumption. In the next step we assume the other group to be fixed and correct the model, and so on.

We check what happens if one of our assumptions is off a little - we change the model correspondingly and see how much the result changes. If the change is substantial, we need to examine this assumption.

A particular case of this last method is the estimation of the sensitivity of the results of optimization to errors in the data. In many cases it is possible to build a special model, associated with the original one used for the optimization, which is used to compute this sensitivity, usually as coefficients of sensitivity. This model is a useful part of the optimization model, and it is used at the stage of Analysis. It allows fast estimates of the model's applicability which would otherwise require a lot of calculations.

 

Building of the Goal Function

The initial description of the optimization problem usually includes a few different goal functions. It is possible to optimize with a set of contradicting goal functions, but it is not recommended, because this kind of optimization usually yields a huge set of possible solutions no two of which we can compare - the first is better than the second according to one goal and worse according to another. We need a goal which is a compromise between the original set of goals.

There are only a few ways of writing down this compromise which allow easy interpretation. If there is a need for a more complicated description of the compromise, it is better to introduce some additional characteristics, reformulate the basic set of contradicting goals, and still use these simple tools of compromise.

Weight coefficients, as a means of achieving a compromise, should be avoided. Usually their justification is weak, their measurement is of low precision, and it is too easy to adjust weight coefficients so that the optimal solution matches just about any given solution.
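A small demonstration of this danger, with hypothetical variants: by moving the weights, any of the three variants can be made "optimal":

    variants = {
        "A": {"cost": 100.0, "mass": 9.0},
        "B": {"cost": 130.0, "mass": 6.0},
        "C": {"cost": 160.0, "mass": 4.0},
    }

    def weighted(v, w_cost, w_mass):
        return w_cost * v["cost"] + w_mass * v["mass"]

    for w_cost, w_mass in [(1.0, 1.0), (1.0, 12.0), (1.0, 25.0)]:
        best = min(variants, key=lambda n: weighted(variants[n], w_cost, w_mass))
        print(f"weights ({w_cost}, {w_mass}) -> 'optimal' variant {best}")
    # Each choice of weights crowns a different variant.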

However, technical weight coefficients, which, for example, help to combine hierarchically ordered goals, are useful.

If the goals are expressed in the same type of values, and the measurement of this type of values supports addition, we can subtract (or add) goals and take the result as a compromise goal. Usually the interpretation is obvious.

Another method is to take the ratio of two goals. In fact, in this case we introduce a new type of value, and it helps if we can interpret it clearly.

A useful tool for the formal description of preferences is a combination of a goal function and constraints. Usually, it allows easy interpretation and easy correction of the formal description.

In some cases we can describe local goals. These goals affect only a few variables, and we demand that they be optimized first. The optimal solution deviates from these locally optimal values only if there is no other way to fit within the constraints. This can be a powerful tool; we use it in "soft constraints".

Maximization of the minimum of the values of several characteristics can be a potent candidate for the goal function. Similarly, minimization of the maximum of the values of characteristics can be used.
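A minimal sketch of this maximin compromise, with hypothetical characteristics already normalized to comparable [0, 1] scales:

    variants = {
        "A": {"strength": 0.9, "reliability": 0.4, "ease_of_repair": 0.8},
        "B": {"strength": 0.7, "reliability": 0.7, "ease_of_repair": 0.6},
        "C": {"strength": 0.8, "reliability": 0.5, "ease_of_repair": 0.9},
    }

    def worst(characteristics):
        return min(characteristics.values())  # the weakest characteristic of a variant

    best = max(variants, key=lambda name: worst(variants[name]))
    print(best, worst(variants[best]))  # -> B 0.6: its weakest point is the strongest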

If we can replace a goal function formulated in money or conventional values with an equivalent one formulated in technical values, we should do so. This improves the quality of the solution. For example, if the cost of a metal frame is determined solely by its weight, then it is much better to minimize the weight than to minimize the cost. In this case we have a clearly defined engineering problem, and we use technical values only.

Similarly, we should replace a goal function formulated in conventional values with one formulated in money values, if we can.

The building of the goal function is an iterative process: we try different variants and look at the results of the optimization until we find one which reflects our understanding of the current situation.

 

The Model and Data Gathering

Usually a model designer has to keep in mind what kinds of tools (mathematical, computational) are available at every step of the design. Hence, model design requires a good understanding of several subjects. In our system, we went to great lengths to separate optimization procedures from modeling. This allows the model designer to think less about practical optimization and to concentrate on the model. However, there is a subject which is inseparable from model design: the organization of data gathering.

It is much less expensive (in terms of money and time) to change the model and adjust it to the possibilities of data gathering than to find the needed data in an acceptable form. The rule is familiar: use all available tools to get things done.

The earlier example of the measurement of the river's width is an appropriate illustration of this point.

 

Base Models

Similar to the concept of the base (and derived) class in C++, we introduce the concept of the base model. Like base classes, base models encapsulate the description and functionality common to many different models, which are presented as models derived from this base model. This description and functionality do not need to be complete - this is a way to simplify the description, to make it more manageable and easier to correct.
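A minimal sketch of a base model and a derived model (Python is used for consistency with the other sketches; the cost rules are hypothetical):

    class CostModel:
        """Base model: description and functionality common to derived models."""
        def materials_cost(self, variant):
            return variant["material_price"] * variant["material_qty"]

        def total_cost(self, variant):
            # The common skeleton; derived models supply the specific parts.
            return self.materials_cost(variant) + self.processing_cost(variant)

        def processing_cost(self, variant):
            raise NotImplementedError  # deliberately incomplete in the base model

    class SheetMetalBoxModel(CostModel):
        """Derived model: adds what is specific to this product."""
        def processing_cost(self, variant):
            return 12.0 * variant["welds"]  # hypothetical industry-specific rule

    model = SheetMetalBoxModel()
    print(model.total_cost({"material_price": 2.5, "material_qty": 8, "welds": 4}))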

 

Library of Models

A Library of Models used in decision-making is a natural extension of a set of recognized and used norms and methods of calculation specific to the particular area. Acceptance of these norms and methods goes through several steps, which include test use, where the area of applicability is found, and refinement of the description, where standardization is achieved. In the standardization of models we follow suit.

Models in the decision-making process are mostly computerized. It takes a long time to program, debug and validate them. It takes even longer to convince decision-makers that these particular models can be used, and to make them comfortable with their use. This means that models have to be introduced into the decision-making community slowly and carefully. First, models are introduced for experimental use. If they are accepted, they can be refined, described in an extended way, coordinated with other models in use, and finally placed into the library of accepted models.

The library of accepted models itself undergoes a process of regular revision, to accommodate changes in our knowledge and changes in the area (industry) where it is used.

Models in the library are viewed as a system. Some models are more general; they are not used directly but serve as a basis for other models. Some models are already formulated in industry terms and prepared for direct use, but they do not provide all the necessary calculations; instead, they provide a reference to some general model which supplies this functionality. A model's description should be coordinated with the descriptions of the other models in the library. The degree of this coordination varies with the size of the library and the amount of effort dedicated to ensuring it.

A modular structure of models should be encouraged: models should be described in a form that allows them to be used as modules in other models, and, if possible, the description of a model should rely on the descriptions of other models as its modules.

The organization of such a library of models dedicated to the decision-making process has an additional positive side effect. New models developed in this environment have a better chance of being reused. This allows more resources to be devoted to model development.

This organization of models reflects the existing way of working with models in the scientific community. It organizes models so as to speed up their unification and their use by decision-makers.

The structure of the library, the exchange of general models between different areas and the carefully guided process of model revision lead to model standardization. Standardization, in turn, increases the rate of model reuse, simplifies the learning process and helps exchange knowledge between different industries.

Initially, the library of models consists of known methods of calculation, accepted in the industry, which can be a part of the model of the functioning of the object.

In our system, the description of a model's area of applicability and the procedures for checking that applicability are parts of the model description. This is very important, because information about the use of models in the literature is scarce.

The model description, including the applicability-related parts of the description, undergoes a process of revision to incorporate newly available knowledge.

The model description should include recommendations on the model's use, including examples of its use and methods of correction which make the model applicable in a particular situation.

General models refer to general methods of evaluating the applicability of a model and finding the proper correction.