The Art of Optimization

Alexander Liss

Part I

11/19/97; 11/21/99

**Common Sense Optimization**

Optimization in the Decision-making

Variety of Decision-making Processes

Previous Works

Main Ideas

Accumulated Knowledge

Forecast in Optimization

Example

Optimization Procedures

Where to Start

**System and Technology of Optimization**

Introduction

Classification

*Classification of Parameters and Measurements*

*Classification of Optimization Methods*

Optimization System

*Structure and Development of the System*

*Library of Models*

*Classification of Optimization Models*

Training

Optimization Technology

*The Process*

*Calculations*

*Two Loops*

*Variants*

*Parallel Computing*

*Models*

*Model Walk*

*Building of Optimization Models*

*The Model and Data Gathering*

*Optimal Solution*

*Set of Parameters and Characteristics*

*Measurement of Parameters and Characteristics*

*Correction*

*Reports*

*Resources Allocation*

Software

*Specifications*

*Parameters*

*Goal Description*

*Model*

*Optimization Procedure*

User Interface

**Optimization Procedures**

Linear Optimization

*Description*

*Special Cases*

*Standard Form*

*Simplex Method*

*Error Accumulation*

Optimization on the Segment

*Four Point Scheme*

Stochastic Optimization

*Boolean Variables*

*Real Variables*

Classification of Optimization Procedures

Optimization Procedures

**Optimization of the Group of Objects**

Group of Objects

Base Model

Examples of Optimization Models

*Given Demand and Minimum Cost*

*Production Adjustment*

*Choosing Suppliers*

*Market Penetration*

Local Models in the Group of Objects Model

Models Application

Set of Models of the Group of Objects

Model Variants

Models Based on the Set of Existing Objects

**Optimization Models in Finance**

Fixed Income Portfolio

*Funding*

*Account Adjustment*

*Cash Management*

*Reducing Risk with Portfolio of Securities*

*Trading Models*

*Investment Models*

*Mutual Funds Optimization*

*Investment Optimization Based on Trading Model*

Concept of Wealth

Optimization is an activity familiar to everyone. One example is comparison shopping - we check the prices of similar goods in a few different stores and choose the one with the best combination of quality and price. For different people this optimal combination of quality and price can be different - it depends on their particular requirements and budget. Another example is tax preparation. We compute the amount of taxes for different combinations of tax options - filing jointly or separately, itemizing on the federal return only, on the state return, or on both - and find which is smaller. In the first case we deal with real objects and situations, in the second with potential ones; however, the nature of the optimization stays the same.

Optimization is the screening of variants. Some variants are dropped because they do not fit our constraints. The remaining variants are compared against our goal, and the variant that satisfies the goal best we choose as the solution.

It is not a problem that, in many cases, our constraints and our goal are not described formally at the start - the formal description emerges in the process of optimization. We learn as we go.

In fact, this is the most appealing part of optimization - it allows us to gather the information relevant to the particular decision we have to make. For example, when we shop, we learn about different models and different applications of the object we are looking for. At the end of the process we know much more about the object, and we can justify our decision much better than at the beginning.

The most important examples of optimization come from the experience of the workplace. Each of us can offer a few examples from our own experience, but it is difficult to find one that is common to many of us.

This optimization, familiar to everyone, we call *common sense optimization*.

In some areas of decision-making, optimization has been developed to a high degree. Special tools were added to simplify screening, to make it faster and less expensive, and to make sure that all variants worthy of attention are accounted for.

However, the methods of optimization we learned in college are mostly useless in practice. We do use common sense optimization, but we have rarely been able to apply the known mathematical and computerized optimization procedures that are supposed to be so helpful. This is a frustrating experience that many of us share.

In general, there are many interesting and potentially powerful methods that are supposed to be useful but are not. On the other hand, there are some simple methods that make a difference in our everyday work. Nevertheless, even these simple methods are not easy to apply - a method that does wonders in one environment can be useless in another.

Do we need sophisticated methods of optimization, when what we are doing is optimization already?

We should ask:

- Which methods work in our particular circumstances?
- How much can we gain from using more complex and potentially more effective methods?
- How much can it cost us?

Our answer is: we can gain enough to justify our efforts, but it is not trivial to make it work.

We are interested in optimization used as a method in decision-making (it has plenty of other uses, which are not our area of interest).

We define the area of our interest even more narrowly. There is decision-making based on experience and gut feeling; in many cases this is the only available, or the only acceptable, way to make decisions. We analyze and organize the area of decision-making where there is enough knowledge that the use of models and computers can bring better results than decision-making based on experience alone.

This is not always the case. For example, political decisions are made in a complex environment with a small amount of formal knowledge, where computations can play only a supporting role. Many business and administrative decisions are similar. However, the vast majority of technical and financial decisions can benefit from optimization based on formal descriptions and models.

As with common sense optimization, optimization with the use of formal descriptions and models is, in great part, a process of learning. We insist that optimization as a process of learning is the most important part of the art of optimization in decision-making. However, there is very little literature that helps one learn this art.

It was not accidental that we used the word "art" in describing optimization. Decision-making, and the use of its tools, optimization in particular, is an art, and we are sure it will stay an art. However, it is possible to organize the methods of this art that have proved successful over generations, and it is possible to teach this art.

This is what we set out to do here.

The most impressive results in this area are achieved when models are used as tools for the formal description of the object of optimization and of the circumstances in which the object has to function. There are many obstacles in the way of this sophisticated optimization. There can be a lack of models, or the cost and time required to make such a description can be prohibitive. In addition, models can be misused - wrong models can be used, or good models can be applied improperly, or results can be interpreted improperly, and so on.

We want to organize optimization so as to simplify the use of formal descriptions, models and computers. Consequently, in the areas where it is not used now, it can become a ubiquitous practical tool. In the areas where it is used already, the quality of results can be improved and the time spent achieving them reduced.

This is a difficult task. On one hand, computers and formal descriptions (models) are good at keeping track of the enormous number of important details we have to take into consideration when we make complicated decisions. On the other hand, it is hard to organize this diverse information into a comprehensive system, and it is hard to make that system flexible enough that we can maneuver when circumstances change or the boss changes his mind.

Decision-making processes can be very different. The difference can be in the types of specialists involved, in the methods used, etc.

For example, suppose we need to design a device from known parts, using known technological processes, that achieves the best value of some technical characteristic. This is a pure engineering problem. Its solution can involve optimization, but the goal function, the constraints and the set of possible variants are well known and well understood, and values can be measured with substantial precision.

Another example: we have a set of potential suppliers and a few locations to which we have to send the parts purchased from them. There is a huge number of possible combinations. In addition, the price depends on the volume of the order and sometimes can be negotiated, we do not want to depend entirely on one supplier for any type of part, and so on. This is an economic problem. The goal function and constraints are not so obvious and still have to be determined in the process of decision-making; however, we can come up with some initial variant of them. Computation can be complicated and can demand a substantial amount of computer power.

The optimization of specifications gives a third example, where the solution is a compromise between the technical level and the cost involved, a compromise that allows us to pass to engineers a purely technical goal and constraints. To find this solution we need a set of special models, which allow the computation of average costs and technical trends.

If we have already made similar decisions, and the new conditions are only slightly different from the previous ones, we need only a small amount of calculation to find a new solution based on the previous one. In this case, that can be the best decision-making process.

If we do not have enough expertise or resources to find a proper solution, we can hire a few firms to find this solution for us independently and go with the best result. This, too, is a decision-making process.

Choosing a decision-making process that suits our problem, time constraints and available resources can be a difficult decision in itself. Usually there are no sophisticated calculations that can help here; an important and difficult part is keeping in sight all the methods that can be useful in the circumstances.

In the seventies, in the former USSR, a group of scientists led by David M. Komarov set out to organize the decision-making process so that computers and models could be utilized to their potential.

It started with the optimization of standards. Standards in the USSR were approved in very much the same way as they are approved in the USA or at the international level today. It was a lengthy process of negotiation among the parties affected by the proposed standard, where technical issues yielded to other issues, such as how convenient the standard was to some of the parties. However, there were some more or less obvious technical issues that should be taken into consideration first. For example, the sizes of paper documents should be coordinated so that a document of standard size folded in half is again of a standard size. Another example: if there are standard sizes of packages and standard sizes of boxes to be filled with these packages, then for each package there should be a subset of box sizes such that a set of packages can fit into every box of that subset without empty space.

An especially thorny issue was the standardization of products (designs) so that they can be used together. This problem does not exist in a market economy - the market forces its participants to cooperate in coordinating the parameters of their products. Not so in the command economy (which was called the planned economy), where this coordination was achieved exclusively by administrative decisions.

It was easy to realize that by the moment the standardization phase starts, it is already too late to change anything in the design. Thus, the approach was expanded, and work started on organizing the process of making technical decisions with the use of computers and models (mathematical, computerized, etc.).

This work was inspired by other needs of the Soviet economy as well. The Soviet economy of that time consisted of two loosely connected parts: the military-industrial complex and the civilian sector. The military-industrial complex had to keep up with the advances of Western military development, and it had all the resources needed to do so. Technologically it was behind its Western counterparts; however, it compensated for this with scientific advances. The civilian sector was insulated from any kind of competition, the level of consumption of the population was kept artificially low in quantity and especially in quality, and the technological state of this sector was ridiculously low. Many attempts were made to move technological advances from the military-industrial complex to the civilian sector. They were not successful. Partially the "blanket secrecy" policy in the military-industrial complex and bureaucratic barriers were to blame. However, deeper and more general causes were there also. Recall the recent post-cold-war transition in the US economy: practically all attempts to adjust military-oriented production to civilian production were unsuccessful. The only economically viable solution was to close down some facilities, consolidate others, and reuse the skilled work force elsewhere, usually after retraining. This shows how difficult the problem is.

Knowledge in the proper forms can be transferred from industry to industry and from one group of specialists to another. That was the underlying idea of the project. The task was to find these forms and to organize the process of decision-making so that the transfer of knowledge and the education of specialists become a natural part of decision-making.

This was done at a time when "magical thinking" was prevalent in the field. People were looking for "the method" that would solve all problems at once. It was a fresh thought that we have to rely instead on accumulated knowledge and well-trained specialists in their respective fields - engineers, economists, etc. - and build a system that simplifies their cooperation for the better quality of decisions.

A lot was done at that time. The results were presented in a plethora of documents and in a few books. However, the sea change in the social and economic life of the country made all this a secondary issue for a while.

We cannot take these results as they are. Many ideas were presented in the form of recommendations, without explanations. Some ideas are not acceptable in a market economy. Too little attention was paid to implementing the ideas with computers.

We have to adapt them.

There are a few main ideas.

We took a few ideas from the military. This is not surprising - the military makes decisions fast, makes them in conditions of great uncertainty, and gets the maximum from the available information. This is exactly what we want.

Other ideas we brought from the experience of decision-making in industry.

They are:

1. If we want to improve solutions (technical, financial, etc.), we have to build upon the wealth of knowledge and experience of specialists, upon the existing education system, and upon existing computational tools (including computers).

The introduction of a specially organized system has to bring verifiable results. We definitely should not monkey around with the "get results quickly" methods that appear now and then. We improve existing methods of decision-making, methods developed and used by smart people, and we are able to improve them only because we introduce a new system and a new education.

2. Decision-making consists of two contradictory processes. One is the meticulous gathering and organization of the information needed to find a solution. The other is learning about the problem itself, in which our understanding of the problem changes; this can easily lead to reformulating the problem itself. These processes are meshed together - the solution we find at one step leads to a reformulation of the problem at the next.

3. There are examples of excellent decisions made, with a sophisticated use of optimization, by very talented individuals. However, if we want this to be a widely used approach, we have to make it available to the average specialist who currently makes good technical decisions.

4. Decision-making, and optimization as a part of it, is not a science, and it will never be a science. However, there are many things that can be done in this area. First, we need to organize a flow of needed information to specialists who are pressed for time. Second, the effective use of this information has to be taught. This means we need a system that supplies specialists with the needed information, and a technology of optimization, together with the needed tools, which we can offer to them.

5. The system and the technology have to be updated as new results come in. They have to accumulate all workable methods. There are many methods that look nice but are useless. The challenge is to find what really works and to incorporate it in the system.

6. We should have a solution at any moment. In the beginning this is a weak solution, based on our experience only. With time, we replace it with better and better solutions, as our understanding of the problem grows.

This idea has an additional benefit - we have an instant audit system: we know how well we perform with the optimization, that is, how much better the optimized solution is than the "base" solution, which can be found without special optimization procedures.

7. We might use a few teams of specialists trying to solve the same problem independently. This can be somewhat expensive, but a quality solution can be found much faster. This approach can be seen in military procurement.

From our point of view this is a method of optimization, and it is an interesting example of how far our understanding of what optimization is can be from what is understood as optimization in academic research.

8. Before the military starts an operation, it prepares scenarios. It does not have enough information to make a decision today; however, it can identify a few classes of situations it might get into in the future. Once the operation starts, it will not have enough time to make quality decisions, unlike before the operation, when it has plenty of time. Hence, it makes assumptions about the conditions corresponding to each class of situation and finds the optimal behavior for those conditions. As time goes on and additional information comes in, it updates these variants of the solution. In many cases, these variants of the solution converge, because more information is available.

This method is highly recommended for finding optimal solutions in industry, finance, etc. The use of models and computers makes the preparation of scenarios at early stages of the decision-making process possible and relatively inexpensive. When the deadline is close and there is not much time left to accommodate new information and adjust the solution to new circumstances, these scenarios can be very helpful.

9. If one does not have the data needed to find a solution, one has to use one's head - change the formal description of the problem so that the needed data can be obtained. We illustrate this with an example.

We need to measure a river's width because we have to build a bridge over it. We could do it with a cord if we could cross the river, or with an instrument if we had one. Let us say we cannot do it that way, because the river is under enemy fire, and we do not have an instrument.

Let us say we can mark a point C on the other bank of the river. It can be some recognizable object we can see through our optical instruments. In addition, we can mark and measure a segment AB parallel to the river on our bank. We have a triangle ABC. We can measure its two angles ABC and BAC and the side AB. Now we recall trigonometry, and we have the result.

However, we might need a more complicated model.

Let us say we cannot measure the segment AB well enough (the terrain is rough, or someone is shooting). We can make this segment a part of another triangle ABD, with point D on our side of the river. In this triangle we can measure the other side AD and the angles BAD and ADB. Now we can find AB, and then the width of the river.

Here we are using a well-known method: if there is a nice model to use, but some pieces of data needed for this model are unavailable, we build additional small models to find the missing pieces.
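The basic triangle computation can be written down directly. The sketch below takes the measured baseline AB on our bank and the two angles at A and B, and returns the perpendicular distance from C to the baseline (the function name and the choice of degrees are ours):

```python
import math

def river_width(ab, angle_bac_deg, angle_abc_deg):
    """Width of the river: perpendicular distance from point C on the
    far bank to the measured baseline AB on our bank."""
    a = math.radians(angle_bac_deg)        # angle at A, between AB and AC
    b = math.radians(angle_abc_deg)        # angle at B, between BA and BC
    c = math.pi - a - b                    # third angle of triangle ABC
    ac = ab * math.sin(b) / math.sin(c)    # law of sines: AC/sin(B) = AB/sin(C)
    return ac * math.sin(a)                # drop a perpendicular from C onto AB

# With AB = 100 and angles of 45 and 90 degrees, C sits directly
# across from B, so the width is 100.
```

The auxiliary triangle ABD is handled the same way: the law of sines applied to AD and the angles BAD and ADB recovers the missing side AB.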

However, there are cases when this method does not work, and we have to change the model substantially.

Let us say that, in the case above, we cannot build this other triangle ABD, because from D we cannot see the far end B of the segment, and hence we cannot measure the needed angles. If we can still see the marked point C on the other bank, we can solve the problem.

We build a triangle using this point C and the points A and D. Now we have quite a picture, but we can solve it: we measure AD and the angles DAC and ADC and obtain the needed distance.

From this example we want to learn the simultaneous adjustment, to the existing circumstances, of both the data gathering system and the formal description of the problem.

This is the ability we want to see in specialists applying optimization to solve problems.

Society has worked out a few methods of accumulating knowledge valuable in decision-making. We should carefully build upon them.

Work in a particular area hones skills and allows previous similar decisions to be used as a basis for new decisions. This leads to higher quality solutions, which can be found sooner. However, when the situation changes drastically, old experience can be useless.

One such method of accumulating knowledge is a set of norms.

Technical norms, rules and approved methods of computing come to help. They are developed by high-level specialists, reflect existing experience and can be taught. They are of great importance. For example, the use of new materials in some areas was delayed for decades only because the appropriate norms did not exist.

They have an additional advantage - they separate a technical problem from the economic and even social problems associated with technical decisions. Take, for example, the "safety margin" norms. Can one put a money value on human health or life? Yet these norms do just that, albeit indirectly. They protect an engineer from the necessity of making morally hard decisions.

The other method of accumulating knowledge is a set of models.

Years ago, the idea of using models to improve the quality of decisions sounded somewhat utopian. Now, when Boeing develops a sophisticated airplane with the help of models alone, this does not surprise anyone.

While norms, rules, etc., allow the computation of the values of the parameters one controls from the values one wants to achieve, models are made for experimentation.

Experimentation is an inevitable phase of any complex development. Usually it is done with prototypes. This kind of experimentation is limited and expensive.

Models greatly expand our ability to experiment. Models also offer a convenient way to incorporate into a decision-making process the knowledge we can get from scientific research. This is something whose value is hard to overestimate.

As flexible and versatile as models are, in the real world they do not replace norms, rules and skills. They supplement them.

When an employee leaves the company, his skills leave with him. Therefore, companies have learned to fix norms, rules and methods of computation in documents and to teach them to newcomers.

Paradoxically, in many companies models are not accumulated properly. There is no systematic description of models and no systematic education of young specialists in them. Yet these models are part of the wealth of a company; they are not and cannot be taught in college, simply because the diversity and specialization are so great.

Norms are frequently used in combination with models.

Norms employ a reverse calculation: from the desirable values of the characteristics, which we use to judge the decision, to the corresponding values of the parameters, which we control.

Models use a direct calculation: from controlled parameters to characteristics.

When we use norms in models, we often present the norms as constraints and build the computation procedures correspondingly. This is a very convenient way of incorporating norms into models.

In some cases, the set of norms is so complete that we do not need to employ optimization procedures at all and can find a solution based on norms alone.

Because norms are constantly revised, we see them as accumulated experience. However, the less ordinary the solution we have to find, the less likely we are to have up-to-date norms to use. Models can be updated or created much faster; hence, they are more useful in a changing situation.

If we apply to models the same prudent approach of careful documentation and updating that we practice with norms, we can improve the process of decision-making - its quality and its speed.

It is common practice to make a decision based on similar decisions made before by the decision-maker himself, by a colleague, or by a competitor (decisions of competitors are often carefully studied). Usually, the previous decision has to be modified to fit new circumstances, and the only thing known about it is the decision itself. The person who has access to previous decisions (an experienced one) can make decisions faster and of better quality. Hence, previous decisions are accumulated one way or another.

It would be much more effective to remember not only previous decisions but also the essence of the knowledge that led to those decisions - the models. Modifying the models according to new circumstances is much more understandable than modifying the decision itself.

Many of us are familiar with the dreadful process in which an administration wants your forecast for the particular area of industry you work in. The idea behind it is: "we collect the opinions of the specialists at hand, somehow average them, and we will have a pretty good picture of future development inside the company and in the world".

We all know that this does not work - usually forecasts are off the mark, because there is not enough information. "Averaging" different opinions makes things worse, because people have in mind different paths of development, which are irreconcilable.

We envy the success of the industry's "visionaries", and we forget that so-called "visionaries" mostly do not foresee the future; they make this future happen. They spend time, energy and money to introduce their "vision". This is not a forecast.

However, we still spend time making that forecast.

Sometimes, when decisions similar to the one we need to make have been made before, some try to find a new solution by extrapolating the old ones. Usually this is a bad idea - the most important things happen exactly when this method does not work. Either accumulated small changes lead to a drastic change in the situation, or other factors have come into play, and their influence should be reflected in our decisions.

The precision of forecasts is low. This is one of the reasons norms are so widely used in engineering, finance, etc. When norms are set, the future situation is taken into consideration one way or another. Therefore, the specialists who use norms do not need to think about the future situation; they work with familiar technical parameters.

The other way to save specialists from forecasting is to give them technical goals.

For example, we want our engineer to design a part made from sheet metal, and, obviously, we want him to make it inexpensive.

We can demand from him a justification of the design showing that, at future prices, this is the least expensive way to do what is required. He knows that future prices are very fuzzy. Most likely, he goes for something like a previous decision, or something that fits well into the existing technological process. After the decision is made, he adds the economic justification.

We can make our life easier if we know that the major cost component is defined by the amount of sheet metal spent (including cutoffs). In this case, we ask the engineer to minimize this value. This is a clear technical goal: it is easy to measure precisely, and it is possible to find a comprehensive solution. We did not eliminate the economic justification; we moved it to other specialists, who decided (using their experience) that in the given circumstances minimizing the cost is the same as minimizing the amount of spent metal.

In different circumstances, it could be the amount of time a worker spends making the part. In different circumstances, the engineer should get a different technical goal.

This is an effective way to deal with forecasts in optimization - a separation of the problem.

A special team of specialists produces technical specifications based on its own models and extrapolations in time. These specifications are then used for an optimization based on technical parameters. We utilize the experience of specialists in forecasting: they choose the parameters to forecast, or the current technical goals to set, so that these are not sensitive to future uncertainty. This is an art in itself.

The forecast is a major source of uncertainty. Experienced decision-makers use as much as possible the available knowledge about the relationships between parameters, and carefully choose the parameters whose values are extrapolated into the future. In all respects, this is modeling.

It is strongly recommended that the parameters we extrapolate have a tendency to change linearly with time. Let us look at one example. If we have the possibility to reinvest at 5% annually, our investment (with reinvestment of the interest) grows exponentially. Hence, exponential growth is the natural growth of an investment. Now, if we want to forecast a market index, we look at the logarithm of its value. The change of this value in time should be close to linear. This is our parameter to forecast; other parameters we recalculate from this one.

Linear functions allow simple interpretations; hence, we can bring more of our experience and "gut feeling" into the forecast.
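A quick numeric check of this, as a sketch (the 5% rate and the ten-year horizon are our illustration, not a recommendation):

```python
import math

rate = 0.05                                    # assumed annual reinvestment rate
values = [(1 + rate) ** t for t in range(11)]  # the investment grows exponentially
logs = [math.log(v) for v in values]           # its logarithm grows linearly

# The year-to-year change of the logarithm is the constant log(1.05),
# so the logarithm is the convenient parameter to extrapolate.
steps = [logs[t + 1] - logs[t] for t in range(10)]
```

Extrapolating `logs` linearly and exponentiating back gives the forecast of the original value, which is the recalculation mentioned above.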

As one can see, norms are incorporated into optimization in a straightforward way: present them as relations between parameters and make them a part of the model used for the optimization (as a constraint, or as a goal function).

In contrast, forecasts need a more careful approach, because they introduce big errors, which are sometimes intolerable, and because there is a history of irresponsible use of them.

We see the solution in the careful deployment of forecasts, with the use of special models, and in defining the place of forecasts in the decision-making system so that forecast errors do not affect final decisions too much.

There is no practical example that speaks to everybody's heart, so we use an illustrative one - the optimization of a box.

We have to make a box from a square sheet of metal of size L*L. We cut small squares of size h*h off the corners and bend up the sides.

The height of the box is h and the width

w=L-2*h,

hence the volume is

V=(L-2*h)^{2}*h.

Our goal function is

V(h)=(L-2*h)^{2}*h,

and we have to maximize volume:

V -> max.

If L is given, then we can plot the function V(h) and find the value h0 where it reaches its maximum.
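Instead of a plot, a computer can scan a fine grid of h values and keep the best one. The sketch below assumes L = 12 for illustration; for this goal function, calculus places the maximum at h0 = L/6.

```python
# Grid scan of the goal function V(h) = (L - 2h)^2 * h; L = 12 is an assumed size.
L = 12.0

def volume(h):
    return (L - 2 * h) ** 2 * h

# h makes sense only between 0 and L/2 (otherwise the width w = L - 2h is negative).
candidates = [i * 0.001 for i in range(1, int(L / 2 / 0.001))]
h0 = max(candidates, key=volume)   # close to L/6 = 2 for L = 12
```

This brute-force screening of variants is exactly what an optimization procedure does, only faster.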

It is possible that we have some additional constraints on the box width and height:

w < W and h < H,

where W and H are given.

Note that these constraints can contradict each other. In this case, we have to change our description of the problem.

If we have a few different sizes

L_{1}, ..., L_{n}

of metal sheets, then we can compute the corresponding optimal box heights

h_{1}, ..., h_{n}

and compare which box utilizes the metal better, using the function

C_{i} = V_{i}/S_{i} ,

where

V_{i} = (L_{i}-2*h_{i})^{2}*h_{i}

and the area of the spent metal is

S_{i} = L_{i}^{2} .

Note that in this last case the goal function is

C(L,h) = (L-2*h)^{2}*h / L^{2}

and variable L is taken from the finite set of values

L_{1}, ..., L_{n} .
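A sketch of this comparison in code, with illustrative sheet sizes. It uses the fact that for V(h) = (L-2*h)^2*h the optimal height is h = L/6 (from setting the derivative to zero); a grid scan like the one shown earlier would find the same value.

```python
# Compare metal utilization C_i = V_i / S_i over a finite set of sheet sizes.
sizes = [8.0, 10.0, 12.0]          # illustrative values of L_1, ..., L_n

def best_height(L):
    return L / 6.0                 # optimal h for V(h) = (L - 2h)^2 * h

def utilization(L):
    h = best_height(L)
    return (L - 2 * h) ** 2 * h / L ** 2   # C = V / S, with S = L^2

best_L = max(sizes, key=utilization)
```

For this particular goal function C = 2*L/27, so the largest sheet always wins; with other goal functions or added constraints the answer can differ.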

Now, suppose we decide to use, instead of the square bottom, a rectangular one of size

a*b,

however, we want to fit it into some sheet metal which we have at hand; hence, we want side a to be one of the values

L, L/2, or L/3

and we want the bottom to be not too far from square:

a/2 < b < 2*a.

The volume of the box is

V = (a-h)*(b-h)*h

and the area of the spent metal

S = a*b.

We want effective use of the metal, so we maximize the goal function

C = V/S -> max,

C(a,b,h) = (a-h)*(b-h)*h / (a*b) -> max,

subject to the constraints we described above.
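This richer model is still easy to screen by brute force. The sketch below implements exactly the formulas above (V = (a-h)*(b-h)*h, S = a*b, a from {L, L/2, L/3}, a/2 < b < 2*a); the value of L and the grid resolutions are illustrative assumptions.

```python
# Brute-force screening of the rectangular-bottom model, formulas as in the text.
L = 12.0                                      # assumed sheet dimension

best = None
for a in (L, L / 2, L / 3):
    # b must satisfy a/2 < b < 2a; scan it on a grid of 199 interior points.
    for i in range(1, 200):
        b = a / 2 + (2 * a - a / 2) * i / 200
        # h must stay below both a and b for the volume to be positive.
        for j in range(1, 200):
            h = min(a, b) * j / 200
            c = (a - h) * (b - h) * h / (a * b)   # C(a, b, h) = V / S
            if best is None or c > best[0]:
                best = (c, a, b, h)

c_max, a_opt, b_opt, h_opt = best
```

Adding or changing a constraint here means changing one line - which is the point the text makes about working with models.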

We can see how easy it is to change the description of the problem when we use models. We simply add or change constraints, change the goal function, and expand or change the set of parameters which we vary. The computation of the result may not be easy, but this is where we use computers to do the routine work for us. We insist that this is a very reasonable trade-off - the quality of the decision depends on the quality of its formal description much more than on the quality of the final computation. In addition, optimization procedures are already well developed by mathematicians and specialists in operations research and computing, and we can use them.

Now we are at the point where we have to say a few words about optimization procedures. In many cases, when people say "optimization" they have in mind only optimization procedures - linear and non-linear optimization, integer optimization, and so on. This is what is taught in colleges, and it is well developed.

As one can see, we use this word in a broader sense.

How important are optimization procedures for optimization in decision-making?

From the point of view of the decision-maker, the quality of the formal description of the problem - the quality of the model - is far more important than the quality of the optimization procedure. For him, an optimization procedure is only a sophisticated method of screening variants - it does this simple job faster.

From the point of view of the software designer, this is an important and difficult issue. Different optimization procedures have to be available to solve the same problem - sometimes only by trial and error can we find what works and what does not. Optimization procedures have to be easily paired with models, which, as we saw, can change.

Still, there is no good software library of optimization procedures - this shows that the problem is difficult.

In some cases, without a good optimization procedure we cannot solve a problem at a satisfactory level. This type of problem is a favorite topic of mathematicians and specialists in operations research. Usually, the models in these cases are simple enough, but the computation is difficult.

There is an important issue here. There is a definite gap between the needs of industry and academic research in the area of optimization. In industry, people cannot get their hands on optimization, and in the academic community there is a shortage of real problems to solve, with real models to work on and real data. Also, there is a gap between academic research on methods of optimization with the use of computer models and the practical use of these methods in industry. This gap persists in spite of advances in some industries and the substantial benefits of deploying these methods.

The gap persists in spite of honest efforts on the part of academia. There are examples of impressive results of optimization, usually associated with a frustrating experience of the process itself - too much effort, appreciated too little.

The situation begs for some system - a bridge between industry and academia in this area.

We do not want to invent optimization; we want to advance the use of optimization in industry.

Optimization has already become an important tool in all industries and services. Many of its elements have developed into "de facto" standards in particular industries or companies.

Soliciting proposals for the delivery of goods or services from competing firms, and choosing the best one based on the merits of the proposal and the estimated ability of the firm to realize it, has to be viewed as optimization. Procedures of proposal solicitation and of choosing the best one are tools of optimization.

In many industries, physical or computer modeling as a part of product development is commonplace. The way it is used shows that these models are tools of optimization.

Norms and rules define the values of one set of parameters (or the object structure) in relation to another set of parameters (or structure). They accumulate the theory and experience of product development. They are used for direct computation of needed parameter values. They formalize factors which are difficult to deal with.

From the point of view of optimization, they are restrictions on the set of available variants. They reduce the set of variants chosen for screening. Whenever such norms and rules are available, they are invaluable tools of optimization.

Mathematicians focus on computational procedures of optimization, when there is a formal description of the problem to the degree that

- the set of potential variants is known and described with parameters,
- constraints are known and fixed, and described as functions of these parameters,
- goals are described formally to the degree that they can be presented as (one) goal function.

From this point, mathematicians offer fast screening procedures, which are especially effective if the constraints and the goal function are linear functions of the parameters, or if we can distinguish variants by the value of only one parameter.

For other cases, iterative procedures are available, which move from an initial solution to the next one, better than the previous. In many cases, these procedures achieve an acceptable solution in acceptable time.
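A toy sketch of such an iterative procedure - simple random hill climbing on the box volume from the earlier example, with L = 12 assumed. Real procedures are far more sophisticated, but the move-if-better loop is the common core.

```python
import random

random.seed(0)                     # reproducible illustration
L = 12.0

def goal(h):
    # Box volume; infeasible heights get -infinity so they are never accepted.
    return (L - 2 * h) ** 2 * h if 0 < h < L / 2 else float("-inf")

h = 1.0                            # initial solution
for _ in range(10000):
    step = random.uniform(-0.1, 0.1)
    if goal(h + step) > goal(h):   # move only to a better neighbor
        h += step
# h ends up near the optimum h = L/6 = 2
```

Each accepted step is a solution better than the previous one, exactly as described above.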

Specialists, who work for the company for many years, accumulate a specific experience. They pass it to new specialists. Young specialists learn the art of decision-making and optimization from experienced specialists.

There are cases where a system exists that maintains models and supports their use in the decision-making process. In these cases, its competitive advantage is obvious:

- It is easy to teach new specialists,
- Knowledge accumulates in the system, making it more valuable every day,
- Decisions made with its help are better, and can be found faster and at lower expense,
- Decisions are easier to adjust to changing market conditions,
- Decisions are better explained - it is easier to convince a customer or a manager that they are good decisions.

The implementation of such a system cannot be a one-time project. It is an ongoing process of improving the quality of the decision-making process. It requires a gradual change in the company's culture. It need not be expensive, but it has to be a long-term commitment.

What are the sources of potential benefits of such a system? There are a few of them.

The additional support structure in decision-making is beneficial. Better coordination between different departments can be achieved, and more decisions can be delegated, simply because the decision-making process is made more formal and comprehensive. This can be done not by constraining this process, but by giving busy decision-makers new standard tools which make their life easier.

Faster and better utilization of research results - general scientific research, special research done in the academic world, or research within the company - makes decisions better, and the introduction of innovations faster and easier.

Better use of the specific experience of the company's specialists, who can contribute to the company's knowledge base, allows faster and more efficient training of new specialists.

We offer a solution to the problems described above. It is the result of a careful evaluation of existing ideas and methods, and of binding the useful ones into a system. In this work, we rely on experience - our own and that of other specialists who were successful in optimization. As much as we can, we explain our choices.

We present a way to build a working system which improves the quality of the decision-making process. These recommendations emerged as a result of a long development, and they have many coordinated details. All these details are important, and the best way to implement our recommendations is to take them as is, implement the system, and change it step by step, adjusting it to particular needs.

The system helps to reach a competitive edge (technical and economic) appropriate to the current technical and economic situation. It is a tool which the decision-maker uses to find better decisions - to improve the ones he already has at hand. This system does not make decisions - it helps to make them.

The system has built-in self-evaluation - we start with some decision and change it toward the optimal one, and each time we can see the improvement and its cost.

However, the main benefit of its implementation is hidden - it educates specialists and improves the culture of decision-making.

It provides a channel for the utilization of academic research results, which in many cases are available practically for free. It improves the exchange of knowledge between different departments. It allows the effective use of the knowledge and expertise of specialists, without placing them under the usual stress of deadlines.

It allows the formal and proper accumulation of special, often unique, knowledge within the company. It

- organizes the optimization in decision-making as a special technological process,
- standardizes proper methods and models,
- brings advanced optimization procedures closer to practical problems.

In it, we have the following basic areas of development:

- Classification of Optimization Methods,
- Optimization System,
- Optimization Technology,
- Optimization Models,
- Software.

The field which we are organizing is too new and its scientific analysis is undeveloped; hence, we have to start with the Classification of Optimization Methods.

We use this Classification when we build and update the Optimization System, as a description of the area which we are organizing. We also use it when we have to choose a method to solve a problem, and when we build a new model as a part of the system of models.

We need an Optimization System similar to a system of norms and rules used in engineering - a System that provides methodological support for specialists using optimization as a part of decision-making.

We design this system as a hierarchy, according to levels of generality.

The higher levels of the hierarchy are close to academic research - they provide general models, general optimization procedures, and approaches to decision-making.

The lower levels are close to practical problems, in all their fullness of detail.

We build all the needed levels in between, to bridge the gap between academic research and practical decisions. We build them according to existing specializations and existing industrial divisions.

This system creates and updates the Classification and the Library of Models, and provides training.

It uses the experience of practical optimization at its lower levels as feedback and as a source of information about practical problems and solutions. It uses the higher levels to prepare the base part of the models needed to solve problems, the optimization procedures, and the software, and passes this information down to the lower levels.

The Optimization Technology helps to perform more effectively and faster the part of the decision-making process that can utilize formal knowledge and formal information, delivering well-explained results to the final stage. It deals with the process of optimization and with the models used in this process.

Models are essential elements of effective optimization. Their organization, accumulation, and reuse can contribute greatly to the effectiveness of product development and of other decision-making processes where models can be used.

The Optimization Technology gives recommendations about the use of models, while the Optimization System organizes the Libraries of Models and the training.

Software which can support changing models and changing methods of optimization is unusual, and there are specific difficulties in its development. We provide general recommendations in this area.

We found that the difficulties of implementing practical optimization software, in the form of a stand-alone program or in the form of a library, are caused by the lack of proper concepts rather than by the lack of effort. We describe these concepts here.

A few words have to be said about specialization.

One of the reasons for specialization is that ordinary people cannot handle tasks beyond some level of complexity. This level depends on the quality of the available tools, and it moves up with technological and educational advances.

The decision-making process is a complex process, and often we have to rely on the specialization within it. The Optimization Technology supports and encourages specialization.

We know good examples of one talented person using computers and optimization to find the best solution in given circumstances, but the Optimization Technology is oriented towards ordinary people. We achieve a high quality of solutions through standardization and the accumulation of the experience of many decision-makers, in a form stable enough and familiar to specialists in their areas.

On the other hand, the Optimization Technology brings new sophisticated tools into the decision-making process, and this alone lifts the level of problem complexity starting from which we have to rely on specialization.

In other words, we will need fewer specialists, with a lower level of experience, to make decisions of the same quality.

The Classification of Optimization Methods is used for choosing a proper method to solve a problem and for the development of the Optimization System.

The development of the Classification of Optimization Methods starts at the general level of the System and continues at the levels that are close to practical use. It is the main tool for organizing the knowledge, and the vast number of inventions, in the area of decision-making.

The Classification is based on a set of different viewpoints from which we can look at the decision-making process - properties of the process. Some of them are quite obvious; others are less obvious and reflect the experience of making decisions in different areas. The viewpoints which we present here are the ones which survived a long process of criticism and elimination - we wanted to keep the Classification simple and illuminating.

We base our Classification of Optimization Methods on the more general Classification of Parameters and Measurements, which can have much wider applications. We present these two classifications first.

Classification of Parameters and Measurements

The Classification of Parameters and Measurements emerged as a reaction to the misuse of methods of optimization and methods of evaluating the quality of solutions.

For example, proper manipulation of parameters which are not proper for the description of the particular problem leads to wrong results. We still have this kind of problem with many famous characteristics: inflation, the cost of living, IQ, etc. Characteristics are not bad by themselves. Indeed, they are very useful; however, their use is often improper - some expect from them something they were not intended to deliver, high precision in particular.

We wanted to specify which parameters are proper for the description of a particular problem. This turns out not to be a simple task. It is easy to come up with something theoretically right but useless. Say, we could demand that each measurement be accompanied by information about its precision. It sounds nice, but it is not practical.

We came up with two complementary ideas.

First, we described carefully what measurement is and what can be expected from measurements of different types.

For example, giving a test and computing, according to the rules, the IQ of a person is a measurement. However, what can we do with that number - the result of this measurement? Can we say that a person with an IQ of 150 is 50% smarter than a person with an IQ of 100? If not, how can we use it in a model, or in any other evaluation? If one IQ is 101 and the other is 103, can we say that the second is better than the first, or can this difference be caused by factors which have no relation to what we want to measure? In other words, what is the error margin of our measurement? Because, if we make important decisions based on results of measurement which differ so little that the difference can be caused by random errors, how is such a decision better than rolling dice?

This subject has already been discussed many times in the literature, and the results of these discussions are similar to what we offer.

Second, we address the issue of adequately describing a property of the object with a parameter that we can measure.

Let us start by explaining that this is a problem to begin with. We start with an example. Let us say that we have to estimate how much it costs us to produce something - for example, the sheet-metal box which we described above. The obvious approach is to add up the prices of materials, the cost of work, and the depreciation of machinery and buildings. This is a traditional accounting approach, and it works quite well. However, let us look closer at what we are doing. How much does this estimate of the cost reflect what we really want to measure? How well can we use the numbers which we get this way to compare different variants? Are they enough to take into consideration costs which occur at different moments in time?

To make this measurement of the cost, we use fluid values - current prices. On top of it, we use depreciation formulas, which were invented for different accounting goals, not to be a part of this measurement process. And we certainly did not take into account inflation. Altogether, these numbers do not reflect the cost, as a property of this product, with the precision which we expect. This low level of precision comes in spite of precisely performed calculations. The very nature of the data used for this measurement does not allow us to achieve a high level of precision. This does not mean that the measurement is bad; it means we have to be aware of its limitations and use its results properly.

We define three distinctive classes of parameters. Each class is suited differently to describing properties. It is very important how we mix these classes of parameters. It is important in the model - if we use "weak" parameters, which cannot deliver high precision, we should not expect high precision from the model.

Even more important is which classes of parameters are used in the procedure of measurement. Very often, this stays hidden. The same result holds here as with models - we cannot expect high precision of measurement if we use "weak" parameters in the measurement procedure.

To the best of our knowledge, it was D. M. Komarov who pointed out this problem explicitly and came up with the idea of three classes of parameters.

In the scientific community, the introduction of a new type of parameter goes together with its measurement. Each case is a sign of a substantial advance in the understanding of nature, society, etc. A new parameter reflects a new concept.

It is important to know how this concept is applicable to the problem, which we want to solve. The measurement reflects how much this new concept can be quantified.

We have to pay close attention to the way the parameter is measured. Misunderstanding it leads to really bad mistakes.

In general, measurement includes several steps:

- The description of a set of objects that can be measured (for example, thin objects of rectangular shape),
- The consistent assignment of the parameter to an object, as a characteristic of some property of these objects (the length of the longer side of the object),
- Establishing a measurement procedure that consistently assigns a numeric value to each object from the set (use of a special tape, which is applied tightly to the longer side of the object - not to its diagonal or short side - and reading a number off the tape),
- The description of the circumstances in which repeated measurement of the same parameter of the same object yields (about) the same result (for example, if we measure blocks of salt, we should keep them in dry conditions all the time),
- Evaluating the precision of this measurement procedure - a description of possible differences in the results of measurements made in the same circumstances. (The distance between marks on the tape, the reading conditions, the alignment of the beginning of the tape with the corner of the object, and how elastic the tape is - all affect the precision of measuring the length of this object.)

How much we can expect from the results of measurement, and to what degree we can improve the measurement if we need to, depends on the type of the parameter and the type of measurement used.

For example, if we measure someone's acceptance of risk, we cannot expect great precision of measurement. It is easy to achieve a high degree of precision if we measure the length of an object, but not when we have to rely solely on our visual perception, without the use of tools.

The type of measurement determines what we can do with the results of measurement. Sometimes the results can be used only to determine that one object is better than another, in some sense. Sometimes we know, as in the case of measuring volume, that the ratio of the results of measurement of different objects has a distinctive sense. There is a case in between, such as temperature, where the difference between two objects has a real sense, but the ratio does not reflect any real property of the objects.

The precision of the measurement is an important issue and a difficult issue.

It is understandable how to estimate a statistical characteristic of the precision - measure the same object a few times and compare the results. Average them, if possible (it is not always possible to add up results of measurement), and take the average value as a better measurement. The variance gives an estimate of the statistical errors of measurement.
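The statistical estimate described above is straightforward to compute; the readings below are illustrative.

```python
# Repeated measurements of the same object (illustrative readings).
readings = [10.02, 9.98, 10.01, 9.99, 10.00]

n = len(readings)
mean = sum(readings) / n                       # a better single estimate
# Sample variance characterizes the random error of one measurement.
variance = sum((r - mean) ** 2 for r in readings) / (n - 1)
```

The variance captures only the random component of the error - not a systematic bias of the "tape" itself, which is the subject of the next question.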

What if the tape itself, which we use to measure, is wrong?

For the established parameters (length, weight, etc.) we have institutions which help us keep the "measuring tape" in order. However, if we use non-technical parameters, then we have to look closely at the measurement procedure itself. We have to analyze: do we really measure the property which we want to measure, or do we measure some other property, which is close to what we want but is not exactly it?

We have to analyze the sources of information used in the measurement, and what kinds of parameters we use in the process.

Classification of Parameters

We divide parameters and characteristics into three classes:

- technical (length, mass, time, fuel efficiency etc.),
- money (cost, price, interest rate etc.),
- conventional (acceptance of risk, warmth of color, etc.).

The precision of optimization depends on the class of values used as parameters, and used to describe the constraints and goals in the process of optimization.

There are model errors which we can estimate. From them, we can derive the errors of the optimal solution. However, it is not always possible to find these errors.

What is known for sure is the following: the available tools and experience in precise measurement differ greatly from technical values to money values, and from money values to conventional values. Obviously, we can have big errors using technical values; however, as a rule, they are the most precise, and there are functional relations and other technical studies about them available. This makes technical values very attractive as a basis for optimization.

In some cases, we can also use technical values to describe constraints and goals, especially if some norms are available. In this case, the entire optimization problem is in an area familiar to engineers, and it can be solved more effectively and more precisely. Hence, we recommend limiting the use of values to technical ones whenever possible.

We use money values to reach a compromise in "incomparable" areas. Hence, we have to use money values in many cases when we describe goals. Sometimes money values are the main parameters of the model - cash flow, for example.

By their nature, money values, as tools for describing reality, are less precise than technical values (even when they are measured precisely). Note inflation and the fluctuation of interest rates and prices, and the lack of precision becomes obvious. However, there is broad experience of the appropriate use of money values to make decisions, and we rely on it in optimization.

We use conventional values when the available technical and money values are not enough to solve the problem. These values are the least precise tools of description, and they have to be used carefully.

Classification of Measurements

We provide the classification of measurements based on two properties:

- What kind of operations can we perform with results of measurement?

- What are sources of information used for measurement procedures?

The first is obvious - we cannot add measurements of temperature in some scales, and so on. In optimization, it is very important to keep in mind what we can (and cannot) do with the values which we use, especially when we build a goal function.

The second is used as an indirect way of assuring proper measurement precision. We can do everything properly, but if our "measuring tape" is imprecise, then we have big errors. For example, we can execute a survey (a poll) precisely, but if the underlying model which determines how the sampling is done is wrong, the entire result is wrong (and usually the errors of this model are not estimated).

Classification of Optimization Methods

The Classification of Optimization Methods is the result of a long process of selecting characteristics of the Optimization Methods - selecting their main properties. It is valuable for that alone: it brings a vocabulary needed to describe the universe of these methods and the place of a particular method in this universe. It was designed to be extended - however, to be extended in a particular manner. We do not think that general characteristics of methods need to be added to it. However, in each industry and in each company there are special methods and ways of using them, which have to be reflected in this Classification.

In this Classification, we concentrated on a few basic questions:

- How are calculations organized?

- To what degree do different parts of the optimization process have a formal description, and to what degree are they automated?

- How is the work scheduled; particularly, do we run different tasks in parallel?

- Which set of parameters and characteristics do we use, and how do we measure them?

- What is a place of the chosen problem description among possible descriptions?

- How do we reduce uncertainty and its negative influence on the decision?

- Do we consider different types of solution and which ones?

These are our basic questions - points of view from which we want to look at the method. Now we unfold them into a hierarchical system of questions, forming some classes of methods along the way.

How are calculations organized?

- Do we use databases and which ones?

- Do we use a model of functioning of the object, and which type of it do we use?

- How do we translate a system of preferences into the goal function, constraints and set of norms?

- Do we use an optimization procedure and which one?

Databases happen to be an important part of the method of optimization:

- we screen variants of the solution, and sometimes want to get back to a solution which we discarded before,
- we change the formal description and want to navigate easily among different formal descriptions.

All this is done with databases - some of them can be simple. If we do not use databases at all, this already says that the method does not support fast manipulation of the data and the model - maybe because the problem is simple and does not need extensive computerized support.

The model of functioning of the object - the name explains itself - is an important part of our design. We return to it again and again, because it is the main tool of knowledge accumulation. How the types of this model are defined is a subject in itself.

One type of the models is the Optimization Model of the Group of Objects.

Another type is the model where the object is presented as a sequence of its modules.

Many models can be presented as an extended combination of models of these two types.

If we do not use a model of functioning of the object, most likely we have a very simple problem, or a problem for which we do not have a formal description.

To make an optimization model, we need to describe formally a system of preferences - a method to find which of two variants of the solution we prefer. We add it to the model of functioning, and we can optimize the solution. The system of preferences does not need to be described in the rigid form of a goal function, as is assumed in many optimization procedures. It can be presented as a set of rules (norms) supplemented with constraints, etc. After all, a computer can execute a search if the preferences and the acceptable variants can be distinguished with the help of any algorithm.
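A minimal sketch of this point: an algorithmic acceptability rule plus a pairwise preference check is enough for a computer to screen variants, with no explicit goal function. The variants and the rules below are illustrative assumptions.

```python
# Candidate variants of a solution (illustrative data).
variants = [
    {"volume": 120.0, "height": 2.5},
    {"volume": 128.0, "height": 2.0},
    {"volume": 125.0, "height": 4.5},
]

def acceptable(v):
    # A norm acting as a constraint: height must not exceed 4.
    return v["height"] <= 4.0

def preferred(a, b):
    # A rule of preference: prefer the variant with the larger volume.
    return a["volume"] > b["volume"]

# Screening: keep the best acceptable variant seen so far.
best = None
for v in variants:
    if acceptable(v) and (best is None or preferred(v, best)):
        best = v
```

Replacing `preferred` with a more elaborate set of rules changes nothing in the search loop - which is why a rigid goal function is not strictly required.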

We have mentioned that sometimes we can find a solution without an optimization procedure - based on norms only. We define the types of optimization procedures based on the presence of non-linear constraints or a non-linear goal function, and on the presence of discrete variables. One-dimensional optimization we define as a special type.

To what degree do different parts of the optimization process have a formal description, and to what degree are they automated?

Successful optimization processes go through distinctive steps:

- gathering of information (formal descriptions, norms, rules, similar decisions, etc.),
- building of the optimization model,
- computation, screening of variants,
- analysis of results, attempts to implement the solution,
- if the solution is not satisfactory - correction, which goes through gathering of additional information and correction of the initial assumptions and the model,
- computation again, and so on.

This distinctive presentation of the decision-making process as a correction loop (until a proper decision is found) was made by D. M. Komarov, and we found it very useful.

When our decision-making process is a part of a technological process (as with the decisions of operators of technological processes in the chemical industry), we often have automated data input and automated implementation of the final decision.

If we use optimization procedures, most likely they are computerized.

Formal correction procedures are relatively rare, but when we do have them, they speed up and simplify decision-making.

Now we look at details.

- Do we use a database for the initial accumulation of information, and if so, which one?

- Do we use models and norms, and how much do we rely on them?

- Do we use formal procedures to find the proper correction for:

the set of parameters and characteristics,

the model of functioning of the object,

the model of preferences?

- Do we have automated:

the data input,

the calculations,

the optimization,

the realization of the final decision,

*and to what degree are they automated?*

How is the work scheduled, and in particular, do we run different tasks in parallel?

- Do we have separate tasks in the process, and how are they scheduled?

If we have separate tasks:

- Do we run some tasks in parallel?

and

- Do we use redundant parallel tasks, and how do we combine their results?

For some specialists these questions sound strange, but they reflect the reality of effective optimization. We have mentioned how the military uses this technique on a big scale. The same technique can be effective on a small scale as well.

Which set of parameters and characteristics do we use, and how do we measure them?

The set of parameters and characteristics which we use in the decision-making process, its properties, and the subset of parameters which we treat as variables and whose values we have to determine - all of this is very important. It affects the quality of the decision to a great degree. Here we use the Classification of Parameters and Measurements, which gives the types of parameters and measurements from our point of view.

- Which types of parameters and characteristics do we use and for which purpose?

- Which types of measurement do we use and for which purpose?

- How do we combine different types of parameters and characteristics, and different types of their measurement, to ensure the applicability and reliability of results?

- Parameters which we treat as variables:

what is the structure of this set,

how precise should the result be,

how does this set affect the type of the model,

how does this set affect the type of optimization procedure?

To make a difficult decision about the suitability of a method for our problem, we need to look at the Optimization Method from this point of view. We judge the method by the precision we want to achieve, and by the location of the method's "weakest link", which might prevent us from achieving this precision.

If we employ monetary parameters to optimize technical parameters, this will most likely be an obstacle to achieving high precision, because monetary parameters, by their nature, reflect the concepts needed for the optimization of technical parameters with much lower precision than the technical parameters themselves.

Hence the expected precision of the method depends on whether we use "weak" types of parameters anywhere in it - the weaker the type, the weaker the method.

If we use a "weak" measurement of the parameter which we employ as a goal function, then we cannot expect high precision of the solution. For example, if we use a measurement whose result can only be one of a finite set of values, then we cannot expect high precision of the solution. This happens because there is a whole family of solutions which deliver the same value of the goal function.
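A small numeric illustration (the quality function and the grade scale are invented for the example): when the goal is observable only as one of a few grades, a whole interval of distinct solutions ties for the best grade:

```python
import math

# Hypothetical continuous quality with its best value near x = 0.61:
def true_quality(x):
    return 1.0 - (x - 0.6137) ** 2

def graded_quality(x):
    """'Weak' measurement: the result is one of a few discrete grades."""
    return math.floor(5 * true_quality(x))

candidates = [i / 100 for i in range(101)]        # 0.00, 0.01, ..., 1.00
best_grade = max(graded_quality(x) for x in candidates)
tied = [x for x in candidates if graded_quality(x) == best_grade]
print(best_grade, len(tied))  # many distinct solutions share the best grade
```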

What is the place of the chosen problem description among the possible descriptions?

This is a subject whose practical discussion one can hardly find in the existing scientific and technical literature.

There are "dimensions" in this "space of descriptions":

- How many of the factors that can influence the situation in question did we incorporate in our description?

- How far into detail did we go in our description?

Macro methods take into consideration factors from many areas; they describe the object in general, in connection with other objects and events. However, they lack details; they employ general parameters which are themselves defined as some kind of average values.

Detailed methods deal with some aspect of the object; they describe the object in detail, and their parameters can be measured directly, or can be computed in a known way from directly measured values.

Methods of partial analysis look at a part of the problem - at some part of the parameters needed to describe the problem, the object and its circumstances. Usually we assume that the other parameters keep their current values. These methods are used because they make a detailed description easier.

In contrast, methods of full analysis take into consideration all relevant factors, but they do not need to be detailed.

Methods which describe the situation in the full spectrum of involved factors and in full detail are not practical. It is expensive or impossible to design such methods, or to gather the information needed to use them. In addition, they usually deliver much more than is needed to solve the problem, and hence one would do better with simpler methods.

The practical approach is the alternating use of macro methods of full analysis and detailed methods of partial analysis. The computation is organized as a chain of corrections of the initial decision.

Hence, we have an additional characteristic of the method:

- Do we use a combination of descriptions, and how do we combine them to find the solution?

Among macro methods we distinguish the following classes of methods, based on

- previous decisions,
- opinions of specialists,
- macro model of the object,
- extrapolation in time.

It is hard to overestimate the importance of previous decisions as a basis for the current ones. In macro methods they are taken as is, or they are corrected to reflect new conditions and new goals. This correction can be a part of the formal model. Usually it is done by introducing and optimizing incremental values of parameters.

Sometimes a cautious use of forecast methods, which extrapolate parameters in time, can yield a satisfactory solution.

How do we reduce uncertainty and its negative influence on the decision?

We have to deal with uncertainty in any decision-making. The uncertainty occurs because we make a decision today that has to be right in the unknown circumstances of tomorrow. In addition, we never have complete knowledge about the object of the decision and the corresponding situation. We learn in the process of decision-making - this is our main way to deal with uncertainty.

The most effective way of dealing with uncertainty is to build a system, an organization, whose purpose is to ensure effective decisions in spite of uncertainty. This is hard and expensive.

The next best thing is the use of norms and models. They accumulate the knowledge and experience of many people in a convenient, ready-to-use form.

We mentioned above one special way of building the Optimization Method - a combination of macro methods and methods of partial analysis - which is a method of "fast learning" in an uncertain situation.

Many special inventions in decision-making help reduce uncertainty, or help reduce the influence of uncertainty on the results of the decision (which is not the same thing).

The use of these special inventions, these "tools" of decision-making, is an object of our interest here.

Reduction of uncertainty can be done in different ways. Mostly it is done by gathering additional information. This, in turn, can be tricky.

Sometimes we need to change the model to adjust to situation.

Sometimes we use a set of models instead of one model. Most often, this set of models is organized hierarchically, so that results of one model are fed into other models.

Sometimes we combine a theoretical model with experimental procedures. These procedures can supply needed information for the model - then we optimize based on the model alone - or they can be combined with the model in one optimization process, as is done in experimental optimization.

Sometimes we use probability theory and statistics to formally describe and gather additional information, which can help improve the quality of solution.

Many of these "tools" of decision-making are based on the same underlying idea: if the level of uncertainty is high, make a decision that can be corrected later according to new available information.

Following are three examples.

1. Instead of fixing the value of a parameter at the time of manufacturing, we make the parameter variable and give the customer the ability to fix it according to their particular circumstances.

2. Instead of a "universal" object, we produce a set of objects, so that a customer can choose the one which fits his needs best.

3. We produce a chain of models of the product, each time introducing relatively small, manageable innovations. On the way we learn the market, adapt manufacturing, prepare new technical decisions and keep customers interested (a new feature of the product is an important attraction). This approach is especially used in software production. It is an interesting example where a technical problem (the lack of information about what is needed and acceptable to the customers) was turned into the lucrative business of selling software upgrades.

Do we consider different types of solution and which ones?

For example, we consider the production of one universal object versus the production of a set of more specialized objects, and we choose the production variant which better suits our goals.

We can also compare independent production of a set of objects with production of a set of objects of modular structure, where modules can be exchanged between different objects.

When we need to fight uncertainty, we consider some parameters to be variable, and do not fix their values until the moment of the final decision.

We can optimize a product, its future modifications and the schedule of their introduction. Note that methods of this kind have to have a small sensitivity of the part of the solution related to the current time to variations in the parts of the solution related to future times, because most likely we will correct the values of the future parts of the solution when we gather more information.

How far we can go in these directions, and what is appropriate, has to be decided according to the situation.

The goal of the Optimization System is

- accumulation of experience in decision-making,
- development of standardized methods that guide the process of decision-making,
- development of a standard Library of Models and model modules for use in this process.

Structure and Development of the System

The Optimization System is hierarchical: the part of the System associated with a particular department uses tools made available by another part of the System, which is responsible for more general development. In turn, the experience of concrete decision-making processes is accumulated and studied on the general level and is made available to others.

On each level, the Optimization System starts with the adaptation of general recommendations and collections of proven methods, and proceeds from version to version with additions, improvements and revisions.

The development of the Optimization System requires the coordinated efforts of specialists in different areas. Different levels of experience are needed for the specialists involved in this development. Specialists who develop the Classification of Optimization Methods need a great degree of experience; here we have to employ senior and even semi-retired specialists. This work is not subject to deadlines. Each new idea related to the organization of a decision-making process, clearly described and placed into the Classification (in the particular industry, company, or department), will eventually be helpful to any specialist - an inexperienced one learning a new area, or even an experienced one working under the pressure of deadlines.

Specialists who are involved in the building of the Library of Models do not need to be experienced - a college education is usually enough. They might need supervision so that they do not overextend the Library with models which are not useful.

A further standardization of the decision-making process in a particular company or department can be done only with the consensus of the persons actively involved in this process, and it has to be based on existing practice which has proved to be successful.

A Library of Models used in decision-making will be a valuable asset in any company. These models are a natural extension of the set of recognized and used norms and methods of calculation specific to the particular area. Acceptance of these norms and methods goes through several steps, which include test use, where the area of applicability is found, and refinement of the description, where standardization is achieved. In the standardization of models, we follow suit.

Models in the decision-making process are mostly computerized. It takes a long time to program, debug and validate them. It takes even longer to convince decision-makers that these particular models can be used, and to make them comfortable with their use. This means that models have to be introduced into the decision-making community slowly and carefully. First, models are introduced for experimental use. If they are accepted, they can be refined, described in an extended way, coordinated with other models in use, and finally placed into the library of accepted models.

The library of accepted models itself undergoes a process of regular revisions, to accommodate changes in our knowledge and changes in the area (industry) where it is used.

Models in the library are viewed as a system. Some models are more general; they are not used directly, but serve as a basis for other models. Some models are formulated directly in industry terms and prepared for direct use, but do not provide all the necessary calculations; instead, they reference some general model which supplies this functionality. A model's description should be coordinated with the descriptions of the other models in the library. The degree of this coordination varies with the size of the library and the amount of effort dedicated to ensuring it.

A modular structure of models should be encouraged: models should be described in a form that allows them to be used as modules in other models, and, if possible, the description of a model should rely on the descriptions of other models as its modules.

The organization of such a library of models dedicated to the decision-making process has an additional positive side effect. New models developed in this environment have a better chance of being reused. This allows more resources to be directed to model development.

This organization of models reflects the existing way of working with models in the scientific community. It organizes models so as to speed up their unification and their use by decision-makers.

The structure of the library, the exchange of general models between different areas and the carefully guided process of model revisions lead to model standardization. Standardization in turn increases the rate of model reuse, simplifies the learning process and helps exchange knowledge between different industries.

Initially, the library of models consists of known methods of calculation, accepted in the industry, which can be a part of the model of functioning of the object.

When one creates a model to describe a situation or an object, one instantly learns the boundaries within which the model can be used; but if the model is taken from the library, then there is a danger of inappropriate use of it.

There are some general procedures which can be used to check the applicability of a model in a particular situation. First, if we know some objects or situations that are described by the model in question, or we can produce them, we have a chance to check how close the model is to reality. Second, we can use another model, or a set of other models, and check how factors that are captured by the other model but not by the model in question influence the quality of the model in the situation.

In this way, we can estimate coefficients which show the sensitivity of the results to these factors, and find a direction for model correction based on this information.

In the Optimization System

- the description of the area of applicability of the model,
- procedures that can be used to check the applicability of the model in a particular situation,
- the description of correction methods which make the model applicable

are parts of the model description. This is very important, because information on model use in the literature is scarce.

The model description, including the applicability-related parts, undergoes a process of revisions to incorporate newly available knowledge.

The model description should include recommendations on model use, including examples of its use and methods of correction which make the model applicable in a particular situation.

General models usually refer to general methods of evaluating the applicability of a model and finding the proper correction. Concrete models have to be more specific.

Classification of Optimization Models

We should look at the Models we want to place into the Library from different points of view:

- How do we incorporate previous decisions in the model?

- How is the functioning of the object of optimization described?

- How are preferences described?

- What is the structure of the set of variants?

- What kind of tools supplement the model to help us check its applicability and correct it, making it applicable to a particular problem?

Many models are built from modules; hence we apply these questions to the modules separately and also check:

- How are the modules of the model combined?

How do we incorporate previous decisions in the model?

A previous decision can supply the initial values of the parameters we want to optimize.

If we have a model based on a previous decision, we check the increments - vary the parameters a little and watch what happens to the constraints and the goal. Models using increments are much simpler, because we can assume that the functions involved depend linearly on the value of the increment. Obviously, if the value of the increment is too big, this assumption is wrong. It is the responsibility of the model designer to point out the boundaries of applicability of this type of model.

If we optimize a Group of Objects (a set of objects, for example a set of machine tools of the same type but of different power), "small" increments include adding new objects to the group and removing some of the group's objects. In this case the quantity of objects in the group changes as well.
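A sketch of an increment-based model (the numbers and sensitivities are assumed for illustration): the effect of a small increment on cost and output is taken as linear, and the model itself refuses increments outside its declared boundary of applicability:

```python
# Hypothetical previous decision with assumed local sensitivities of cost
# and output to a small power increment dp around the previous value.
cost0, output0 = 1000.0, 200.0
dcost_dp, doutput_dp = 12.0, 3.5
MAX_INCREMENT = 5.0               # declared boundary of model applicability

def incremental_model(dp):
    """Linear increment model, valid only for small |dp|."""
    if abs(dp) > MAX_INCREMENT:
        raise ValueError("increment outside the declared boundary")
    return cost0 + dcost_dp * dp, output0 + doutput_dp * dp

# Screen small increments for the cheapest variant meeting an output norm:
increments = [i / 10 for i in range(-50, 51)]     # -5.0 .. 5.0
feasible = [dp for dp in increments if incremental_model(dp)[1] >= 205.0]
best = min(feasible, key=lambda dp: incremental_model(dp)[0])
print(best, incremental_model(best))  # 1.5 (1018.0, 205.25)
```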

How is the functioning of the object of optimization described?

We look at the structure of the object which we want to optimize - whether it is divided into elements, and how the elements are connected.

There are models which do not divide the object into elements. An important class of models of this kind is the class of operations research models.

If there is a division into elements, we have the general case, with complicated connections. There are two important cases we would like to look at separately:

Elements are connected sequentially, possibly with a few additional connections (such as feedback).

Elements are not connected, but have to be optimized as a group; we call this "the Group of Objects".

If we look at the source of information for the model's functions, we divide models into two classes:

- theoretical,
- empirical.

An important class of models is macro models. Macro models are widely used; they are built on the basis of some general theory applicable to different types of objects - for example, statistical analysis, or fundamental physical equations, etc.

How are preferences described?

Sometimes the preferences can be described with a single goal function.

More often, we have to use a goal function, a few constraints and a few equations, especially if we use norms.

We can provide a description of a few contradicting goal functions, and a method of constructing a compromise goal function and constraints in a particular situation.

There is no need to present preferences as a classical goal function - any algorithm which allows us to compare any two potential solutions will suffice.

This can be a given algorithm - for example, two or more hierarchically organized goal functions.

It can also be an algorithm which requires input from specialists each time we need to compare two potential solutions.
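For instance, two hierarchically organized goal functions can be expressed directly as a comparison algorithm (a sketch; the tolerance and the example data are assumptions): the primary goal decides first, and the secondary goal breaks practical ties:

```python
def hierarchical_compare(a, b, primary, secondary, tol=1e-6):
    """Return -1 if a is preferred, 1 if b is preferred, 0 if equivalent."""
    pa, pb = primary(a), primary(b)
    if abs(pa - pb) > tol:                 # primary goal decides first
        return -1 if pa < pb else 1
    sa, sb = secondary(a), secondary(b)    # practical tie: secondary decides
    if abs(sa - sb) > tol:
        return -1 if sa < sb else 1
    return 0

cost = lambda v: v["cost"]
weight = lambda v: v["weight"]
a = {"cost": 100.0, "weight": 5.0}
b = {"cost": 100.0, "weight": 4.0}
print(hierarchical_compare(a, b, cost, weight))  # 1: equal cost, b is lighter
```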

What is the structure of the set of variants?

The set of variants of possible solutions is defined by the possible values of parameters. It affects which kind of optimization procedure can be used, what kind of data gathering system can be employed, how the solution has to be interpreted etc.

It can be finite or infinite; an infinite set can be discrete, continuous, or a discrete set of continuous components. A continuous set can be one-dimensional, or it can have many dimensions. This mostly affects the optimization procedure.

If the parameters change in time, or if they are distributed in space, this affects many things in the way we use the model.

If they change in time, in the majority of models they stay the same for some period of time and then jump to the next value.

If they are distributed in space, this is mostly a finite-element model.

Some models include in the set of optimized parameters some parameters which we do not control (for example, together with the design parameters, the production volume, parameters of the technological process, etc. can be optimized). The applicability of a model of this kind has to be specially checked.

What kind of tools supplement the model to help us check its applicability and correct it, making it applicable to a particular problem?

There are three major sources which can help us check the applicability of the model and find how to correct it in the needed direction.

If there is a better model, which describes the situation in more detail (deeper) and more fully (wider), we can use it to check how good the model in question is in the particular situation. We do not use the better model instead of ours, because it is too expensive to use, or there is not enough data, or simply because our model delivers results of sufficient quality.

Sensitivity coefficients allow us to check the assumption that some parameters are fixed. This is a known and efficient tool.
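Such a check can be sketched as follows (the model and the threshold are hypothetical): estimate sensitivity coefficients by central finite differences for the parameters assumed fixed, and flag any whose influence is too large for that assumption to hold:

```python
def model(params):
    """Hypothetical model: result depends on one optimized and two 'fixed' parameters."""
    x, temperature, humidity = params
    return x ** 2 + 0.8 * temperature + 0.001 * humidity

def sensitivity(f, params, i, h=1e-5):
    """Central finite-difference sensitivity with respect to parameter i."""
    up, down = list(params), list(params)
    up[i] += h
    down[i] -= h
    return (f(up) - f(down)) / (2 * h)

fixed = {1: "temperature", 2: "humidity"}   # parameters assumed fixed
point = [3.0, 20.0, 50.0]
for i, name in fixed.items():
    s = sensitivity(model, point, i)
    status = "RECHECK the assumption" if abs(s) > 0.1 else "ok to keep fixed"
    print(f"{name}: sensitivity {s:.4f} -> {status}")
```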

We can compute the model and compare the results with experimental data. This can be special data gathered for this purpose, or some retrospective data.

Many of the ideas we present here are very unusual - and for different groups of specialists, different ideas are unusual. Here we bridge the gap between academic research in modeling, optimization and computer science on one side, and the reality of decision-making, with all its uncertainty of the problem description, its limited available information and all those deadlines, on the other. In addition, there have been many attempts to use computers in ways which discredited the idea altogether. Many of the ideas described here came to life as a result of concentrated efforts to fight this irresponsible use of "computation".

For example, there was an attempt to find an optimal future solution simply by extrapolating the parameters which were optimal in the past. This is a very bad idea, because even if the environment is changing slowly, we can get to a point where the optimal parameters are substantially different from the corresponding parameters of the past. Another bad idea was a mindless linear combination of different parameters used as an important criterion. The classification of parameters and measurements sprang to life when it was needed to show that the criteria used to make important decisions were not used properly: values were added when their type of measurement did not support addition, and variants were "distinguished" by values of the criteria so close that the difference could easily be caused by measurement errors. We still see these kinds of problems.

The training has to concentrate on the ideas that are difficult to understand.

Experienced specialists, who have already forgotten what they learned in college, have to be familiarized with the possibilities of model use and fast optimization procedures, so that they keep in mind these additional tools, which they can use for real-life decisions.

Young specialists just out of college need to learn how the real problem dictates which decision-making tools are used, and how a close deadline can eliminate almost all of the nice methods they learned in college.

For the majority of specialists, the explicit description of the correction loop as a tool for arriving at a formal description of the problem is a new idea. Practically everyone has done it this way at one time or another, and still its explicit description seems to be a novelty. Similar is the idea of a "model walk", where "play" with different models turns out to be a powerful tool for learning about a problem.

The Optimization Technology is a methodology and a set of computerized tools that help one reach a competitive edge in the process of making decisions. This technology helps one keep in view the technical possibilities and the economic constraints (keep the cost in check, assure a profit substantial enough to offset the market uncertainty, provide enough funds for future development, etc.) when one is looking for a solution that is consistent with technical trends and brings enough profit to justify the effort.

Final decisions are based on assumptions that can hardly be described formally to the degree where they can be used in quantitative analysis. Usually, final decisions are based on some intuitive or conventional compromise. In order to make the proper final decision, one needs to gain some insight into the technical and economic edge that can be reached today. In addition, one needs a set of variants of the potential decision that lie on this edge. The Optimization Technology is designed to help in the analysis of the part of this edge that corresponds to one's goals.

The preparation of a set of potential solutions is an iterative process, in which one successively adds available information as it is gathered in a guided manner. The more time one has, the higher the quality of the decision. Eventually the final decision must be made, because otherwise one either accepts the status quo, or the decision is made for him. The moment when the final decision must be made is sometimes based on clear deadlines, but often it is a part of the final decision itself. Often this part of the decision is based on informal methods.

In the process of implementing the final decision, unanticipated obstacles can arise. They bring one back to the "drawing board". The Optimization Technology assumes that this is a part of a normal decision-making process; it helps one anticipate this step and avoid a potential disaster.

The Optimization Technology helps one perform more effectively and faster the part of the decision-making process that can utilize formal knowledge, delivering results to the final stage in an understandable and easy-to-use form. It utilizes all kinds of methods that have proved to be useful in the particular area of decision-making. Higher productivity is reached through the standardization of proper methods and the use of sophisticated techniques of variant screening known as optimization.

The process of decision-making under our guideline is divided into three parts.

The first part is an iterative analysis of the edge. Here we set up a set of variants of basic assumptions: technical, economic, and our system of preferences. Based on these and on all available knowledge about the subject, we calculate a corresponding set of optimal (best under their assumptions) variants. This set of potential solutions instantly gives insight into the nature of the current edge. With this new knowledge we gather additional information, add new information that was not available before, drop some variants of basic assumptions as unrealistic or useless, add new variants of basic assumptions that we find interesting in the light of the new data, and correct the basic assumptions of some other variants. After that we repeat the calculations, and so on.

The second part collects the results of the first part in a general database for future analysis, and provides the first part with information about the time available for its work. Here the final decision is made.

The third part is where an attempt to implement the final decision is made; if this is impossible, information about the obstacles is gathered.

Parts two and three are not formal, so we concentrate on part one. This part is a chain of calculations, corrections, and gathering and utilization of additional information. This brings a natural division of each step in this chain into:

- information gathering and input
- calculation
- analysis
- correction.

It is reasonable to base the information input on some kind of general database - we need to validate and correct information of many different types, save it for reports, etc.

In our system, a specially organized calculation module supports the calculations in the process.

An important step in the formal description of a decision is the choice of a set of characteristics, based on which we judge the quality of a solution, and the choice of a set of parameters (variables) whose values affect the values of these characteristics.

The core part of the calculation module is a "stimuli - response" module - a model (analytical, hardware, computer, or a combination of the above) that can be used to find the values of the characteristics as a function of the values of the parameters (variables). If we have this module, we can already achieve a substantial improvement in the quality of a decision by experimenting with potential solutions. We can achieve better results if we utilize some sort of automatic variant screening - an automatic optimization procedure. An automatic procedure does not eliminate the "manual" one; it only supplements it. There are many factors which are not described formally, but whose influence on the final decision can be tested "manually". This gives us a second module, which organizes "manual" calculations and automatic optimization.

A calculation always involves the input of data, often a big amount of data, which has to be verified, corrected, saved, etc. This can be performed effectively with the help of some kind of database, specially tuned to the model. This gives us a third module - a data input database.

For many reasons, in our system the core part of the calculation module - the model - is divided into two parts:

- the model of functioning and
- the model of preferences.

The model of functioning establishes the relations between the parameters (variables) that reflect the basic properties of the analyzed product, technological process, etc.

The model of preferences is a set of constraints and a goal function.

This consistent division simplifies correction: the model of preferences is corrected often, while the model of functioning is corrected much less often. It assures reuse: the model of functioning can be used as a model for another case, or in other calculations not related to the current decision-making. It uses an existing specialization: the model of functioning is a traditional part of analysis and education, and can be built by specialists different from those who build the model of preferences. On the other hand, this division does not place any real restriction on the model, because either the model of functioning or the model of preferences can be trivial.
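This division can be sketched in code (a hypothetical example of choosing a beam thickness; all laws and norms are assumed): the model of functioning maps parameters to characteristics, while the separately kept model of preferences turns characteristics into constraints and a goal:

```python
# Model of functioning: parameters -> characteristics (reusable on its own).
def functioning(thickness_mm):
    strength = 40.0 * thickness_mm        # assumed linear strength law
    weight = 2.5 * thickness_mm ** 2      # assumed quadratic weight law
    return {"strength": strength, "weight": weight}

# Model of preferences: constraints and a goal, corrected independently.
def acceptable(ch):
    return ch["strength"] >= 200.0        # norm: minimum required strength

def goal(ch):
    return ch["weight"]                   # minimize weight

# The optimization touches the two parts only through their interfaces:
candidates = [t / 2 for t in range(2, 41)]          # 1.0 .. 20.0 mm
feasible = [t for t in candidates if acceptable(functioning(t))]
best = min(feasible, key=lambda t: goal(functioning(t)))
print(best)  # 5.0: thinnest beam meeting the strength norm
```

Either part can be corrected or reused without disturbing the other.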

Tools of analysis used to find proper correction or justify a solution occupy a special place in our system. Some of those tools are more formal, as sensitivity analysis, other less formal, but all are potent tools of optimization.

A special technical feature of our approach is two loops, one inside the other.

Big loop:

- the initial data input, or data correction,
- the work of the small loop,
- analysis,
- the final or intermediate report.

The small loop of optimization procedure (automatic or manual) includes:

- the input of control parameters of the optimization procedure,
- if needed, the input of starting values of parameters of the object (which usually are default model values),
- steps of the procedure,
- the procedure report (usually data which allow us to stop the procedure when a solution of acceptable precision is found).

There is the same underlying idea in both loops: a chain of corrections, where each new step is made with the use of additional information gained on the previous one. The outer loop works with the formal description of the problem; the inner loop searches for the best solution of the formally described problem. We want the user to keep this idea in mind and to drive these chains actively when he tries to solve the problem.

The analysis, which precedes the correction, can be automated, as in automatic optimization procedures, or it can be informal.

We have to manage two sets of variants: variants of formal description and, for each of them, variants of possible solutions, which we have to screen.

Variants of Formal Description

We adopt a military approach to the management of variants of the formal description. It starts with a few variants of the description (scenarios), which become more precise as new information is gathered. Some of them are dropped on the way, others produce variants of their class, and so on, as long as we have time and resources to gather information.

Hence, when we formulate a problem, we consider a few different types of a formal description - this is a beginning of the classification of variants. We proceed with subtypes, and so on.

The logical organization of the variants of formal description is a "forest" - a set of "trees", where each "tree" is hierarchically organized classes of variants and their subclasses.
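As a sketch, this "forest" organization can be represented with a small tree structure; the names here (VariantNode, refine, active_leaves) and the sample scenarios are illustrative, not part of the system:

```python
# A minimal sketch of the "forest" of formal-description variants.
# Each tree node is a class of descriptions; children are its subclasses.

class VariantNode:
    def __init__(self, name, parent=None):
        self.name = name
        self.children = []
        self.dropped = False          # variant abandoned during screening
        if parent is not None:
            parent.children.append(self)

    def refine(self, name):
        """Create a subclass (subtype) of this description variant."""
        return VariantNode(name, parent=self)

    def active_leaves(self):
        """Variants still in play: undropped nodes with no undropped children."""
        if self.dropped:
            return []
        live = [leaf for c in self.children for leaf in c.active_leaves()]
        return live if live else [self]

# The forest is simply a list of root variants (the initial scenarios).
forest = [VariantNode("linear model"), VariantNode("stochastic model")]
a, b = forest
a.refine("linear, fixed demand")
a.refine("linear, seasonal demand").dropped = True
```

Dropping a node prunes its whole subtree from the active set, which matches the screening process described above.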

In general, formal and computerized procedures in our system take a set of potential solution variants corresponding to a given formal description and reduce it to a small subset of best solution variants, among which we choose one using informal methods. Usually, we take one best solution variant for each variant of the formal description, and then we do not need to use informal procedures.

After that, we compare best solution variants for different descriptions.

Solution Variants

For a given variant of the description, the structure of the solution variants can be complex, while developed automated procedures of variant screening exist only for simple structures. In our system we present a set of variants as a set of subsets, each with a structure that an automatic procedure can screen. We find the best solution variant for each subset and choose the best among them.

We define a simple database - Journal, to help in the accumulation of "best solution variants". It stores parameters of the model, and we need the model to read and interpret them.

The possibilities of parallel computing can be utilized in optimization.

The very nature of optimization allows parallel computation - we divide the set of solution variants into subsets (which can overlap) and search for the best solution in each subset independently (using different computers or different processes on the same computer). The overall best can be found either after the search in the subsets is finished, or by means of an additional parallel process, which collects intermediate results of the partial searches and compares them as the process goes.
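This scheme can be sketched in a few lines; the variants, the toy goal function, and the overlapping subset split below are all invented for illustration:

```python
# Illustrative sketch of parallel variant screening: split the variant set
# into (possibly overlapping) subsets, search each independently, then
# compare the partial winners.
from concurrent.futures import ThreadPoolExecutor

def goal(x):
    # toy goal function to be minimized
    return (x - 37) ** 2

def best_in_subset(subset):
    # independent search within one subset of variants
    return min(subset, key=goal)

variants = list(range(100))
subsets = [variants[i:i + 30] for i in range(0, 100, 25)]  # overlapping

with ThreadPoolExecutor() as pool:
    partial_winners = list(pool.map(best_in_subset, subsets))

overall_best = min(partial_winners, key=goal)
```

The final comparison of partial winners corresponds to the "after the search is finished" option in the text; the collecting process would replace it in the streaming option.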

The Optimization System takes care of the standardization of models and the building of the Model Library. Creating models is a difficult area. In our system most models come from practical problems and their solutions.

Acceptance of the model goes through several steps, which include test use, where area of applicability is found, and refining of description, where standardization is achieved. This is a long process, and we have to speed it up.

There are procedures that can be used to check the applicability of a model in a particular situation.

When we know objects or situations, which are described by the model in question, or we can produce them, we check how close the model is to reality.

We can use another model, or a set of other models, and check how factors that are captured by the other model but not by the model in question influence the quality of the model in a particular situation.

The collection of models includes general models formulated in general terms and specific models formulated in terms specific to industry, area, etc.

General models are close to computation but further from real problems; specific models capture specific problems and provide a link between a specific problem and an abstract general model, which is ready to be computed.

The description of specific models can include a reference to a general model, if such a model exists in the collection. This structure simplifies both description and correction of models.

The collection of computerized models follows suit: it includes general models, some of which can call optimization procedures, and concrete ones that use some general model. Some models can be used as modules in other models.

Developers of our system supply standardized general models, model modules, and methods of their use in conjunction with optimization procedures.

Note that forecast procedures can be a part of the model of functioning. Experienced decision-makers use the available knowledge about relationships between parameters as much as possible, and carefully choose the parameters whose values are extrapolated into the future. In all respects this is modeling.

In the process of solving a problem we might use a set of models, switching from one to another to get a better understanding of the problem. We can study one facet of the problem in depth and in detail, with the assumption that the other facets of the problem are known. On the other hand, we can study all aspects of the problem, but restrain ourselves to aggregate parameters without paying attention to details.

How far we go in this experimentation with different points of view depends on available resources, mainly the time until the moment when we have to deliver a solution.

This kind of model walk is a natural part of the decision-making process, and we must support the ability to perform it in a convenient way. This means we supply an excessive set of parameters, from which we choose the ones needed for a particular model, and a set of interrelated models, which can share results of calculations.

We do not assume that we have a ready model in the library. We take the "nearest" one and adjust it to our circumstances. This is not a formal process; it requires in-depth understanding of the situation.

There are a few rules we follow in our system.

The model has to be tied to the particular decision-making process it serves. For example, if this is a stage of product design, usually we do not model the production process; only if this is an advanced design, which optimizes a product together with the production process, do we model them together.

Usually we adjust the model to work with existing data gathering systems. Models are much more flexible tools than data gathering systems, and it is much easier to adjust a model to work with data already gathered for decision-making in different areas (in combination with some scientific data), than to set up and maintain a new data gathering system.

Building of Optimization Models

Iterations of Formal Description

A formal description of the optimization problem is achieved as the result of a chain of corrections, in which new details are added, unneeded ones are dropped, etc. There are some recommendations that can be given here, which come from the experience of solving this kind of problem.

We use a few different simple models which describe different aspects of the problem, and either combine them into more precise model, or combine their results.

In one step, we assume that one group of parameters, which should be optimized, is fixed, and build a model under this assumption. On the next step we assume the other group to be fixed and correct the model, and so on.

We check what happens if one of our assumptions is off a little - we change the model correspondingly and see how much the result changes. If the change is substantial, we need to examine this assumption.

A particular case of this last method is the estimate of the sensitivity of the results of optimization to mistakes in the data. In many cases it is possible to build a special model, associated with the original one used for the optimization, which is used to compute this sensitivity, usually as coefficients of sensitivity. This model is a useful part of the optimization model, and it is used at the stage of Analysis. It allows fast estimates of the model's applicability, which otherwise would require a lot of calculations.
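As a hedged illustration of such sensitivity coefficients, one can perturb a datum slightly, re-run the optimization, and see how much the optimal value moves per unit of data error. The model below is a toy quadratic with a closed-form optimum, invented for this sketch:

```python
# Sensitivity of the optimal goal value to a data error, estimated by a
# central finite difference. Toy model: f(x) = x**2 - d*x, optimum x* = d/2.

def optimize(d):
    x_star = d / 2.0
    return x_star ** 2 - d * x_star       # optimal goal value, -d**2/4

def sensitivity(d, h=1e-6):
    # coefficient of sensitivity of the optimal value w.r.t. the datum d
    return (optimize(d + h) - optimize(d - h)) / (2 * h)

s = sensitivity(4.0)   # analytic value is -d/2 = -2.0
```

In a real model there is no closed form, but the same perturb-and-reoptimize scheme applies, at the cost of extra optimization runs.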

Building of the Goal Function

The initial description of the optimization problem usually includes a few different goal functions. It is possible to optimize with a set of contradicting goal functions, but it is not recommended. Usually, this kind of optimization gives a huge set of possible solutions, no two of which we can compare, because the first is better than the second according to one goal and worse according to the other. We need a goal which is a compromise between the original set of goals.

There are only a few methods of writing down this compromise which allow easy interpretation. If there is a need for a more complicated compromise description, it is better to introduce some additional characteristics, reformulate the basic set of contradicting goals, and still use these simple tools of compromise.

Weight coefficients, as a means of achieving a compromise should be avoided. Usually, their justification is weak, their measurement is of low precision, and it is too easy to adjust weight coefficients so that optimal solution matches about any given solution.

However, technical weight coefficients, which, for example, help to combine hierarchically ordered goals, are useful.

If goals are expressed in the same type of values and the measurement of this type of values supports addition, we can subtract (or add) goals and have the result as a compromise goal. Usually, the interpretation is obvious.

The other method is to take a ratio of two goals. In fact we introduce in this case a new type of value, and it helps if we can clearly interpret it.

A useful tool of the formal description of the preferences is a combination of the goal function and constraints. Usually, it allows easy interpretation and easy correction of the formal description.

In some cases, we can describe local goals. These goals affect only a few variables, and we demand that they be optimized first. The optimal solution deviates from these locally optimal values only if there is no possibility of fitting into the constraints. This can be a powerful tool. We use it in "soft constraints".

A hierarchical goal function is a set of functions of different rank, and we consider the function of the lower rank only when the variants are equivalent from the point of view of any function of higher rank. In many cases this is a useful way to describe preferences, for example, local goals.

Maximization of the minimum of the values of characteristics can be a potent candidate for the goal function. Similarly, minimization of the maximum of the values of characteristics can be used.

For example, we have a few similar suppliers and we want to deploy them equally: we minimize a maximum value of the cost related to these suppliers.

By the way, there is a simple way of incorporating this kind of goal function - minimization of a maximum value - as a part of the linear optimization problem. Let

y(b), b from B

be our set of characteristics, and we want

max y(b) -> min.

We introduce a new variable x and a set of constraints

y(b) <= x, b from B

and minimize x

x -> min.
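The equivalence can be checked numerically on a toy example; the characteristics y(b) and the decision grid below are invented for this sketch:

```python
# A small numeric check of the reformulation: minimizing max over b of y(b)
# is the same as minimizing a new variable x under constraints y(b) <= x.

def characteristics(t):
    # y(b) for b in B = {0, 1, 2}, linear in the decision t
    return [2 * t, 10 - t, t + 3]

decisions = [i / 10.0 for i in range(0, 101)]   # t in [0, 10], step 0.1

# Direct form: max y(b) -> min over t
direct = min(max(characteristics(t)) for t in decisions)

# Reformulated: minimize x subject to y(b) <= x for all b; for each t the
# smallest feasible x is exactly max over b of y(b)
def smallest_feasible_x(t):
    return max(characteristics(t))

reformulated = min(smallest_feasible_x(t) for t in decisions)
```

A linear-programming solver would exploit the reformulated form directly, since x enters the constraints and the goal linearly; the brute-force grid here only demonstrates that both forms deliver the same optimum.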

Above, we replaced one goal function with another, equivalent one, which delivers the same optimal solution. We did this to make computations easier. However, there are cases when such a replacement has more important reasons.

If we can replace a goal function formulated in money or conventional values with an equivalent one formulated in technical values, we should do so. This improves the quality of the solution. For example, if the cost of a metal frame is defined solely by its weight, it is much better to minimize weight than to minimize cost - in this case we have a clearly defined engineering problem, and we use technical values only.

Similarly, we have to replace a goal function formulated in conventional values with one formulated in money values, if we can.

The building of the goal function is an iterative process: we try different variants and look at the results of the optimization until we find one which reflects our understanding of the current situation.

Usually a model designer has to keep in mind what kinds of tools (mathematical, computer) are available at every step of the design. Hence, model design requires a good understanding of a few subjects. In our system, we went to great lengths to separate optimization procedures from modeling. This allows the model designer to think less about practical optimization and concentrate on the model. However, there is one subject which is inseparable from model design: the organization of data gathering.

It is much less expensive (in terms of money and time) to change the model and adjust it to the possibilities of data gathering than to find the needed data in an acceptable form. The rule is familiar: use all available tools to get things done.

The above example of measurement of the river's width is an appropriate illustration of this thought.

We have some formal description of the problem, which includes constraints and a goal, and we are looking for the solution which fits into the constraints and delivers the best value of the goal. However, the constraints and the goal are given with some mistakes, and it is possible that we could come up with a slightly different formal description of the same problem. We want the solution we found to still be close to its best value for all these different formal descriptions. This is what we understand as an optimal solution.

From the mathematical point of view this is a "near optimal" solution for any particular formal description. There are many ways to define this type of solution, and there are fast procedures which help to find one.

One of the approaches is building a chain of solutions, whose members eventually are near the "absolute" optimum. We stop generating members of this chain when we "feel" that the solution we have is "near" enough. This is how iterative methods work.

The fact that we do not really need an "absolute" optimum, that a "near" optimum is enough, is of great help in the case of discrete problems - there are fast computational procedures which can find a "near" optimal solution.

There is one approach to the formal description of what is "near" which is really helpful in many cases, and specialists making decisions based on the results of an optimization procedure have to get accustomed to it. We say that the probability that a solution is optimal is big enough (say, 0.99). Conceptually this is not much different from the chain of solutions which we generated above. Specialists only have to get some experience with it.
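One common way to make this probabilistic notion concrete (an illustration, not the text's prescribed procedure) is random screening: if we sample n variants at random, the chance that the best of them falls in the top eps fraction of all variants is 1 - (1 - eps)**n, and we can pick n so that this probability reaches the desired level:

```python
# How many random samples are needed so that, with probability at least
# `confidence`, the best sampled variant lies in the top `eps` fraction.
import math

def samples_needed(eps, confidence=0.99):
    # solve 1 - (1 - eps)**n >= confidence for the smallest integer n
    return math.ceil(math.log(1 - confidence) / math.log(1 - eps))

n = samples_needed(0.05)        # a top-5% solution with probability 0.99
prob = 1 - (1 - 0.05) ** n      # achieved probability, slightly above 0.99
```

Note that this guarantees a good quantile of the solution set, not closeness of the goal value itself; the two coincide only when the goal is reasonably well behaved over the variants.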

Sometimes we have to go a step further: this probability is a member of iterative chain, and we stop it when we "feel" it is enough. Again, it takes some experience to get accustomed to it.

Set of Parameters and Characteristics

When quantitative methods can be employed in the decision-making process, the choice of parameters and characteristics taken into consideration is crucial. This choice involves a great deal of experience, and our system supplies a few tools that help with it. When we begin building a model, we choose a point of view - the type of abstraction to use in the model. This determines, in general, the set of parameters and characteristics used in the model. This kind of work requires understanding of the problem and knowledge of the available methods that can be used to solve it. For this stage of the decision-making process, our system can offer only a more organized than usual description of the methods which can be used.

Starting from this point, we initialize a chain of corrections - we experiment with a model and adjust it until its precision is enough to make a decision.

The set of parameters and characteristics can also decrease on the way, because some of them turn out to be unimportant for the solution of the problem. We eliminate unneeded parameters to simplify the model, and consequently to make it easier to use and to save resources dedicated to information gathering.

The set of parameters and characteristics can increase on the way, because the model turns out to describe the situation to an extent that is not enough to be useful in the decision-making process, i.e. the values of characteristics used in judging the quality of a proposed solution cannot be determined with the needed precision from the values of parameters. There are other important factors involved that we did not take into consideration.

There are two main directions in which we increase or decrease the set of parameters and characteristics. In one of them we work with parameters independent from the others; in the other we work with a subset for which there is some aggregated parameter in the set. We can add an independent parameter, or we can add a set of parameters instead of an aggregated one. Or we can remove an independent parameter, or we can substitute an aggregate parameter for a subset of parameters.

In some cases, we add parameters that we cannot control, and analyze how the consequences of a decision are changed, when values of these parameters are changed. If the influence is big we need to correct the initial approach to the problem.

The process of correction of the model serves as a base for the justification of the use of this model.

Measurement of Parameters and Characteristics

A problem with parameters and characteristics can arise when people expect more from quantitative analysis than the nature of the measurement of parameters and characteristics, or the precision of this measurement, can allow. We supply a few guidelines on how to judge a measurement's influence on the decision-making process and how to build this process accordingly. This problem happens less often with parameters of the model of functioning and more often with the model of preferences.

In general, measurement includes several steps (see Classification of Parameters and Measurements):

- Description of a set of objects which can be measured.
- Consistent assignment of a parameter to an object, as a characteristic of some common property of these objects.
- Establishing a measurement procedure, that in a consistent way assigns a numeric value to each object from the set. If the procedure is applied to the same object in the same circumstances, it has to deliver about the same value; the difference between these values is described by precision of this measurement procedure.

The problem is that we cannot always manipulate these numeric values as numbers. What we can do with these numeric values is determined by the type of the particular measurement. Sometimes these numeric values can be used only to determine that one object is better than another in some sense. Sometimes we know, as in the case of volume, that if we combine objects in some way, the numeric value of the combined object is the sum of the numeric values of its components. In this case, we can take the results of measurement and add them or multiply them by a scalar. There is a case in between, such as temperature. The use of these numbers requires special precaution.
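These types of measurement, and the operations each one supports, can be sketched as a small table of permitted operations; the scale names (ordinal, interval, additive) and the function names are illustrative assumptions:

```python
# Operations meaningful on each type of measurement: ordinal values support
# only comparison (ranking), interval values such as temperature also support
# differences, and additive values such as volume support sums and scaling.
ALLOWED = {
    "ordinal":  {"compare"},
    "interval": {"compare", "subtract"},
    "additive": {"compare", "subtract", "add", "scale"},
}

def check(scale, operation):
    """Refuse operations the type of measurement does not support."""
    if operation not in ALLOWED[scale]:
        raise TypeError(f"{operation!r} is not meaningful on a {scale} scale")
    return True
```

A guard like this is one way to enforce the prohibition stated below: adding two ordinal preference scores would raise an error instead of silently producing a meaningless number.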

We explicitly forbid the use of results of measurement in the way that is not supported by this type of measurement.

Description of preferences is a relatively young scientific discipline, and there is still some abuse in the usage of results of measurement of preferences. The problem usually occurs when a new characteristic is built as a sum or other combination of known ones. Setting aside the issue of the usefulness of such combinations, sometimes the very nature of measurement of the component characteristics does not allow treating the results of measurement as numbers and adding, multiplying or dividing them.

Precision of measurement is the degree to which the results of different attempts to measure the same object resemble each other. The precision of results that we can get from the model directly depends on the precision of measurement of parameters and characteristics.

Usually the least precise part is the model of preferences. The precision of the model of functioning has to correspond to the precision of the model of preferences. There is no sense in spending more trying to obtain more precise results of measurement in one part of a model, if in another part these results are obscured by mistakes in the measurement of other parameters. "Correspond" means that the mistakes of one model have to have effects on the solution similar to the mistakes of the other.

Mistakes of measurement affect the ability to distinguish which of two potential solutions is better. The level of precision needed to do so depends on the localization of potential solutions. We determine this needed precision of measurement of different parameters and characteristics in the chain of corrections, as is done when we are choosing a set of parameters and characteristics. This is an important principle of optimization.

Correction is one of the most important principles in our system. We use correction everywhere we have to deal with any kind of uncertainty, mostly uncertainty caused by the absence of knowledge where we need it.

We build a chain of corrections, starting from some reasonable amount of information, and gather additional needed information on the way. As a result, we save resources, because we are narrowly focused in our information gathering, and we can cope with an unknown future, because we correct the assumptions as the situation changes.

In the selection of parameters and characteristics we start with some reasonable set and build a chain of corrections, adjusting models to a degree, where they are sufficient for this particular decision-making process.

We determine the precision of measurement of parameters and characteristics to achieve a balance between precision of measurement of different parameters and characteristics and to achieve a sufficient level of this precision in the chain of corrections.

When we build a model, we use a chain of corrections to find a more appropriate one.

Even data input involves input validation and correction.

If an implementation of final solution fails, we perform a correction of the decision.

We distinguish and recommend five types of reports:

- process reports,
- final decision reports,
- implementation reports,
- reports for external customer,
- development reports.

The process report is a report of an ongoing analysis that helps to choose the next correction.

The final report is based on the assumption that the one who reads it is familiar with our system, so we can omit some known details.

If implementation of the final solution failed, we need an implementation report to find what went wrong and how we can correct the solution.

We cannot assume that an external customer is familiar with our system. For an external customer we need a special proof that a recommended solution is good for him. It includes proof that the solution is done according to specifications and a comparison with a few other relatively obvious solutions.

When there is no applicable model in a library, a new model is developed. To document this activity we need special reports - model development reports.

Each step in a decision-making process has a tendency to take as much time and resources as possible, especially if different specialists are involved in different steps. To balance these tendencies we use our basic principle: at any moment, we are ready to deliver a solution. In the beginning, this is a weak solution, some obvious, not optimized one. As the process matures - we gather information, organize it in models, etc. - we can deliver a better solution. We can draw a curve of proposed improvements of the quality of the solution over time, and we allocate resources in time to fit this curve. If we do not know the moment when we have to make a decision, we make this improvement steady. If we have a deadline, which we know with some precision, we draw the curve to maximize the improvement achieved during the period up to this deadline. In some cases, this can be described formally, and resource allocation and reallocation can be made a part of the correction process.

This type of software is difficult to develop because we need to connect automatic optimization procedures to the models, which should be easy to replace or correct. In addition, we need a convenient user interface to the procedures and models.

We start with explanations of the difficulties facing anyone who wants to do this part of the work. After that, we provide a set of interrelated concepts which make the development manageable.

There are a few basic requirements for software of this sort, which make its design not so obvious. We want:

- The system to be a module in some Decision Support System, or in the other program, which is already in use in a particular place. It is a server, which comes up on demand and goes out, when the job is done.
- Models to be modules in the system. We do not want to reprogram the system each time we use another variant of the model. The model itself has to have two parts - the computational part and the user interface part. These parts need to communicate with each other uniformly - we correct models often, and we need to correct both parts in unison.
- To be able to communicate with other computers possibly with different ways of data encoding and run subtasks on them.
- To be able to communicate with a database for data input and output of results.
- Optimization procedures to be modules - it should be easy to add a new procedure, or to switch from one to the other during the search for the solution.
- A consistent user interface; this makes learning and use easier. We want the part of the interface related to the optimization procedures to be a separate module, plugged in together with the procedure.

We need to support a "forest" of variants, and we need a kind of journal for the intermediate results of optimization.

We need to coordinate the activity of a user and the activity of automatic optimization procedure (they compete for the control over the process).

Hence, we have to produce a set of software objects, which work together.

They include

- control system,
- user interface,
- models - visual and computational,
- optimization procedures,
- databases.

Each data item, which can be changed during the optimization (by a user, who sets values, or by an automatic procedure, which screens variants) has to be presented as a special object, which holds all information related to this parameter.

We need different types of parameters: real numbers, Boolean values as yes-no, finite values as a day of the week, etc. Hence, we need different types of such objects.

In real life, parameters come with bounds; for example, the value of a parameter can be restricted to be non-negative. This information - types and values of bounds, belongs to this object.

Proceeding naturally, we also place in this object information about the desirable value of the parameter. For example, we might want a non-negative parameter to be as small as possible, or we might want it to be as close to some given value as possible - we call these "soft constraints".

On some steps of optimization, we want to have values of some parameters to be fixed. We have to be able to specify in the object that the value is fixed.

Finally, we need a set of parameters. This set has to be flexible; we have to be able to add new parameters as we build a model.
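A minimal sketch of such a parameter object might look like the following; the class and field names (Parameter, lower, upper, fixed, target) are illustrative, not the system's actual API:

```python
# One object per optimizable data item: value, bounds, a "fixed" flag,
# an optional soft-constraint target, and a default value that doubles
# as the initial point for optimization procedures.
from dataclasses import dataclass
from typing import Optional

@dataclass
class Parameter:
    name: str
    value: float = 0.0              # default value = initial point
    lower: Optional[float] = None   # None means unbounded below
    upper: Optional[float] = None   # None means unbounded above
    fixed: bool = False             # excluded from optimization when True
    target: Optional[float] = None  # soft constraint: stay near this value

    def set(self, v):
        if self.fixed:
            raise ValueError(f"{self.name} is fixed")
        if self.lower is not None and v < self.lower:
            raise ValueError(f"{self.name} below lower bound")
        if self.upper is not None and v > self.upper:
            raise ValueError(f"{self.name} above upper bound")
        self.value = v

# A flexible set of parameters is simply a dict we can extend as the
# model grows.
params = {"price": Parameter("price", 10.0, lower=0.0, target=8.0)}
```

Other parameter types (Boolean, finite sets such as a day of the week) would be sibling classes with the same `set` interface.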

In our system, we use a goal description as a set of local goals. For each local goal, we specify its rank. Hence, a goal value is a set of numbers ordered by rank.

If, for two variants of the goal value, the first numbers of their ordered sets are the same, we choose the better variant by the value of the second number, and so on.

Immediately, we have a convenient way of describing soft constraints. We associate a local goal with each soft constraint. The original goal becomes a local goal also. Local goals corresponding to soft constraints have a higher rank than the original goal.
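This comparison rule is exactly lexicographic ordering, which, for example, Python's tuple comparison provides directly; the variants, the soft cap, and the ranks below are invented for illustration:

```python
# Hierarchical goal value as a tuple ordered by rank: slot 0 holds the
# soft-constraint violation (higher rank, minimized first), slot 1 holds
# the original goal (cost). Tuple comparison implements the rule: compare
# by the first number, break ties by the second, and so on.

def goal_value(variant):
    violation = max(0.0, variant["load"] - 100.0)   # soft cap of 100
    return (violation, variant["cost"])

variants = [
    {"load": 120.0, "cost": 50.0},   # cheapest, but violates the soft cap
    {"load":  90.0, "cost": 70.0},
    {"load":  95.0, "cost": 60.0},
]
best = min(variants, key=goal_value)
```

The cheapest variant loses because its rank-0 component is nonzero; among the variants that satisfy the soft constraint, the original cost goal decides.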

We use ranks to reach compromise between different contradicting goals in original formulation of the problem. This is a useful tool also.

We need as many tools as we can get to describe our preferences. Sometimes we have to experiment with these descriptions to find an acceptable "approximation" of our system of preferences. Sometimes the best way to describe them is presenting goals as hard constraints, or as soft constraints.

Hence, while this way of presentation of goal is unusual, it is definitely useful.

Models are supplied by the model designer; however, they must be of a special form.

There is a trade-off between a model of the fixed format, which can be computed fast and a model of flexible format, which has to construct itself each time the value of parameters changes. We opt for flexibility.

Each change of parameters (manual or automatic) is done with a message, which is first analyzed by a special function (supplied by the model designer) which sets the structure of the model. After that, the values of the parameters are set.

Models in our system have two parts - Visual and Computational.

The Visual part of the model is supposed to be raw material for the interface designer and a conduit of interaction with the computational part of the model. Hence it is simple - a set of lists of visual elements. These lists correspond to different presentations of the model and its parts.

Here are classes of the model parts presentation:

- model of functioning,
- model of preferences,
- variables,
- analysis,
- report.

The first procedure of the Computational Model is initialization, where parameters' structures are defined.

After that, the system performs a chain of three step operations:

- Sets the data for linear optimization, one group of linear parameters after the other,
- Calls linear optimization with this data,
- Calls a proper procedure, which performs direct calculations of the values of parameters that can be done after this step of linear optimization.

In addition, in the Computational Model we have analysis procedures and report procedures.

The Computational Model and the Visual Model are parts of the same object.

Many automatic optimization procedures require an initial point, from which they take off with a chain of improvements.

When a user learns a model, he would like to have an example to start with.

When we change the value of a parameter in a way that leads to a change of the model structure, we can get a model with blank parameters, which we do not know how to handle.

Blank fields, which do not allow us to proceed with the calculations before everything is set and checked, are a good design for accounting software, but not for optimization, where correction and experimentation are the essence.

Therefore, we demand that all parameters have default values. This solves the problem of "initial point" and user's experimentation.

If someone wants some other initial point, he can change values of the parameters manually, and this will become the current state of the model.

This has some subtle differences from the usual approach. Usually a model (as a set of functions) and its data (as a set of numbers) are separate entities. In our case, they are combined: different data means a different state of our computer model.

This state can be saved.

Hence, our model is a persistent object.

The iterative optimization procedure has to be driven by the user: he has to change its parameters when the speed of the procedure is low.

Hence, we need to monitor this procedure, and we have to be able to stop it and change its parameters. We accomplish this with a special object - procedure control data, which depends on the type of the procedure. This data is updated by the procedure, and it includes parameters of this procedure.

According to our design, a model designer specifies the Search Space (a segment, a finite set, all possible combinations of a few variables, a "choice tree" of variables' values, etc.) for the model variables.

We need many different optimization procedures; some are ready now, and some we expect to be developed in the future. The simplest among them is direct screening, when we have a relatively small number of variants and we check each of them. The next step is to divide all the variants into classes and check representatives of all classes. A more sophisticated technique is sliding. When we have an effective optimization procedure which can work only with a small number of parameters, we divide the set of parameters into groups, assume the values of the parameters fixed in all groups except one, and optimize only that one group. We move from one group to the other, improving the result on each step.
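The "sliding" technique can be sketched as follows - a minimal C++ illustration, with a per-parameter direct screening over a candidate list standing in for whatever small-scale inner procedure is available (the names are assumptions):

```cpp
#include <cstddef>
#include <functional>
#include <vector>

// "Sliding": parameters are divided into groups; we optimize one group at a
// time (here by screening each parameter over a candidate list), keeping the
// other parameters fixed, and sweep over the groups until a full sweep
// brings no improvement.
double slide(std::vector<double>& x,
             const std::vector<std::vector<std::size_t>>& groups,
             const std::vector<double>& candidates,
             const std::function<double(const std::vector<double>&)>& f) {
    double best = f(x);
    bool improved = true;
    while (improved) {
        improved = false;
        for (const auto& group : groups) {
            for (std::size_t i : group) {
                double keep = x[i];
                for (double c : candidates) {   // screen this parameter
                    x[i] = c;
                    double v = f(x);
                    if (v > best) { best = v; keep = c; improved = true; }
                }
                x[i] = keep;                    // fix the best value found
            }
        }
    }
    return best;
}
```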

There are a few ideas in optimization, which are combined to produce an optimization procedure. Each idea can be presented as a simple optimization procedure itself, which can be used as a module in a complex procedure.

In our system, we supply modules of optimization procedures and complex procedures.

It is desirable to have a similar User Interface in all implementations of our system. There are some features, which have to be present in it.

The User Interface has to give a feeling of driving the process, in two directions:

- the formal description of the problem, which gets more complete as the user learns details of the problem when he tries different variants, and
- the optimization procedure.

The starting point of the User Interface has to be a kind of control panel; each button opens a path into a particular tree of views, where data input and correction are done.

From any window, there is a direct path back to the control panel. It should have capabilities to be hidden and recalled.

Data input and display have to be shaped by the model; the user has to have the ability to choose an appropriate view of the model, which makes interpretation of the particular task easy. In particular, we have to use the consistent division of the model into the model of functioning and the model of preferences.

The presentation of the model of functioning can have a few different views.

The model of preferences is simple, however a user has to have the ability to change it easily - this is a frequently used tool.

In addition, it is possible to have a combined presentation, which depends on the problem.

The most frequently used part of the model presentation is the part, which works with values of parameters - variables and their bounds. Changing the values of the bounds is a powerful tool of the model correction and of the study of solution properties. We present values of parameters together with bounds and other related information.

Note that changes of the model parameters (variables) can lead to changes of the model presentation: a different size of the data table, different additional variables needed, etc.

A special part of the User Interface deals with the management of variants of the formal description: their structuring, storing, and choosing a subset of the promising ones. This can be done with a well-structured system of files.

In many cases, the user needs the ability to "play" with different variants of the model parameters. Some of these variants can be saved for future analysis. We should provide this ability.

Solution variants are organized in a file system with a familiar directory structure. Information related to a solution variant is also stored as a file.

The separate parts of the user interface are the analysis part and the report part.

We can enumerate the "gates" of the control panel - starting points into different paths:

- computation driver
- model (by departments)
- analysis
- report.

Computation driver includes:

- parallel processes control,
- display of the process' state,
- results of optimization (accumulated from all processes).

Many optimization procedures require a starting point as a parameter. We set this parameter through model windows: when we start a new optimization process, we set its parameters in the appropriate process control window, we set all model variables in the model windows, and after that we can start the optimization process.

When a user changes the model, all optimization processes are stopped, and the new model's data is sent to optimization processes.

Linear Optimization is an important module of automatic optimization procedures. It is well studied and well developed. We formulate it here in our terms.

The general form of a linear optimization problem is: find values of a set of n variables

x[0], ..., x[n-1]

that

1) have to belong to the allowed area described by a set of direct constraints:

lower(x[i]) <= value(x[i]), or

value(x[i]) <= upper(x[i]), or

lower(x[i]) <= value(x[i]) <= upper(x[i]), or

value(x[i]) = level(x[i]),

2) satisfy a set of m linear constraints for m variables

y[0], ..., y[m-1]

VALUE[j]=a[j][0]*value(x[0])+...+a[j][n-1]*value(x[n-1])

lower(y[j]) <= VALUE[j], or

VALUE[j] <= upper(y[j]), or

lower(y[j]) <= VALUE[j] <= upper(y[j]), or

value(y[j]) = VALUE[j],

where j=0, ..., m-1,

3) maximize (or minimize) a goal function

g[0]*value(x[0])+...+g[n-1]*value(x[n-1]),

4) level(y[j]) is used in the description of the "soft constraints".

In our system the Linear Optimization problem is presented in the following form:

x[0], ..., x[n-1] is a set of pointers to 'primary' parameters,

y[0], ..., y[m-1] is a set of pointers to 'secondary' parameters,

the coefficients a[j][i] are the part of the model that describes the relation between these two sets of parameters,

bounds and levels are used to describe the area where the values of parameters can be, and

the coefficients of the goal function

g[0], ..., g[n-1]

are chosen from the known preferences to find the one solution that is the best among the acceptable solutions. It is possible that we need to extend the set of variables to describe the goal in this way.
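A minimal sketch of how such a problem description might be held in memory, in C++; the structure and names are illustrative assumptions, not the system's actual classes:

```cpp
#include <cstddef>
#include <limits>
#include <vector>

// Hypothetical in-memory form: primary parameters x with direct bounds,
// secondary parameters y defined by the matrix a, and goal coefficients g.
struct Bounds {
    double lower = -std::numeric_limits<double>::infinity();
    double upper =  std::numeric_limits<double>::infinity();
};

struct LinearProblem {
    std::vector<Bounds> x_bounds;        // direct constraints on x[i]
    std::vector<Bounds> y_bounds;        // constraints on VALUE[j]
    std::vector<std::vector<double>> a;  // a[j][i]: relation between x and y
    std::vector<double> g;               // goal function coefficients

    // VALUE[j] = a[j][0]*x[0] + ... + a[j][n-1]*x[n-1]
    double value(std::size_t j, const std::vector<double>& x) const {
        double s = 0.0;
        for (std::size_t i = 0; i < x.size(); ++i) s += a[j][i] * x[i];
        return s;
    }

    // Check that x satisfies both direct and linear constraints.
    bool feasible(const std::vector<double>& x) const {
        for (std::size_t i = 0; i < x.size(); ++i)
            if (x[i] < x_bounds[i].lower || x[i] > x_bounds[i].upper) return false;
        for (std::size_t j = 0; j < a.size(); ++j) {
            double v = value(j, x);
            if (v < y_bounds[j].lower || v > y_bounds[j].upper) return false;
        }
        return true;
    }

    double goal(const std::vector<double>& x) const {
        double s = 0.0;
        for (std::size_t i = 0; i < x.size(); ++i) s += g[i] * x[i];
        return s;
    }
};
```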

Unbounded Variables

The solution of the Linear Optimization problem resides at a place where several boundaries produced by constraints meet. If we have some solution that satisfies the constraints but is not optimal, we can slide it along the boundaries as long as the value of the goal function is increasing. If we can find a direction in which we can move a solution without meeting any boundary, we have a case of an unbounded problem - we do not have a reasonable solution. From this we can see that unbounded variables in a properly defined Linear Optimization problem are rather an exception than a rule, but they are often helpful.

An unbounded variable x in a Linear Optimization problem has to be represented with a pair of non-negative variables *xplus* and *xminus*:

x = *xplus* - *xminus*.

In an optimal solution, one of the components will be zero and the value of the other, with the appropriate sign, will be the value of x.

The coefficients for *xplus* and *xminus* are produced in the obvious way from the coefficients for x.

When we want to use the absolute value of an unbounded variable in the goal function, we simply change the sign of the goal function coefficient for *xminus*, because the absolute value of x is

*xplus* + *xminus*.

When we have an unbounded variable which we want to be as close to zero as possible, we give the corresponding goal function coefficient a big absolute value (positive for minimization and negative for maximization).
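The bookkeeping for an unbounded variable can be sketched as follows (illustrative helper names):

```cpp
#include <utility>

// x = xplus - xminus with non-negative xplus, xminus. A coefficient c on x
// in any row becomes the pair (c, -c); a coefficient c on |x| in the goal
// becomes (c, c), since |x| = xplus + xminus.
std::pair<double, double> split_coeff(double c)     { return {c, -c}; }
std::pair<double, double> split_abs_coeff(double c) { return {c,  c}; }

// Recover x from an optimal pair; one of the two components is zero.
double recover(double xplus, double xminus) { return xplus - xminus; }
```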

Soft Constraints

Soft constraints are a powerful tool for the formal description of real problems. Soft constraints allow us to keep the value of a parameter (primary or secondary) as close as possible to some specified level. We add a variable - the distance between the value and the level - and minimize it. To make this a part of the linear optimization problem, we give the goal function coefficients that correspond to this additional variable a big absolute value (positive for minimization and negative for maximization). We can combine this soft constraint with bounds for the parameter.

To force the value of the parameter V to be as close as possible to some level L, we add an unbounded variable x - effectively a pair of non-negative variables *xplus* and *xminus*

x = *xplus* - *xminus* = V - L

and minimize the absolute value of it |x| = *xplus* + *xminus* . We place in the goal function big artificial coefficients for *xplus* and *xminus* with the same sign (positive for minimization and negative for maximization).

After the optimization, one of the artificial variables will be equal to zero, and the other will be equal to the distance between the parameter and the level.

If we want the parameter to be as close as possible to some level and not exceed it, we add only one non negative variable *xminus* with a similar big coefficient in the goal function and coefficient -1 in the corresponding constraint.

Similarly, in the case when a parameter must be above some value and as close as possible, we use only one additional variable *xplus*.
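The construction of a soft constraint row can be sketched like this, assuming a minimization problem and a hypothetical row/goal representation:

```cpp
#include <vector>

// Sketch: add a soft constraint "keep parameter V near level L" to a
// minimization problem by appending non-negative xplus, xminus with
// V - xplus + xminus = L and a big penalty M on both in the goal.
struct SoftRow {
    std::vector<double> coeffs;  // row over (original vars..., xplus, xminus)
    double rhs;                  // = L
};

SoftRow make_soft_constraint(const std::vector<double>& v_row, double level,
                             std::vector<double>& goal, double big_m) {
    SoftRow r;
    r.coeffs = v_row;            // coefficients expressing V in original vars
    r.coeffs.push_back(-1.0);    // -xplus
    r.coeffs.push_back(+1.0);    // +xminus
    r.rhs = level;
    goal.push_back(big_m);       // penalty for xplus (minimization)
    goal.push_back(big_m);       // penalty for xminus
    return r;
}
```

After optimization, xplus + xminus equals the distance |V - L|, which the big coefficients drive as close to zero as the other constraints allow.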

Linear Hierarchical Goal Function

Soft constraints and other cases can be easily described with a Linear Hierarchical Goal Function - a hierarchical goal function where all functions of the set comprising it are linear. Usually all functions of higher rank have many optimal solutions, and each subsequent function reduces the set of optimal solutions. The last function allows choosing one solution from this reduced set.

The optimization procedure in this case is a small modification of classic procedure.

Non-linear Goal and Linear Constraints

This case often arises in financial models: the constraints are linear, but the goal is something like a yield - a ratio of linear functions. Even when the goal is not exactly a ratio of linear functions, it is often easy to approximate it with one.

Goal

G(x) = a(x)/b(x) -> max,

where x is a vector and a(x) and b(x) are linear.

This can be presented as:

G(x) = t, and t -> max,

or

a(x)/b(x) = t and t -> max,

or

a(x) = t * b(x) and t -> max.

For each fixed t

a(x) = t * b(x)

is a linear constraint. Now we increase t and check that the system of linear constraints with this additional one has a solution. Eventually we hit a point where there is no solution. We have to catch the edge point between the area where a solution exists and the area where there is none. This is an easy problem: we gradually reduce the segment where this point sits. Note that for this algorithm we only need to check that a solution exists; we do not need to solve the optimization problem - this is faster.
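The segment reduction can be sketched as a bisection on t; here feasible() stands in for the (not shown) check that the constraint system with the additional constraint still has a solution:

```cpp
#include <functional>

// Sketch of maximizing a(x)/b(x) under linear constraints by searching on t:
// feasible(t) reports whether the constraint system together with the
// additional constraint for this t still has a solution. We bisect on the
// edge between "has a solution" and "no solution".
double max_ratio(const std::function<bool(double)>& feasible,
                 double t_lo, double t_hi, double eps) {
    // Pre-condition: feasible(t_lo) is true and feasible(t_hi) is false.
    while (t_hi - t_lo > eps) {
        double mid = 0.5 * (t_lo + t_hi);
        if (feasible(mid)) t_lo = mid; else t_hi = mid;
    }
    return t_lo;   // best t proved achievable, within eps
}
```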

If we know that b(x) > 0 for any x which is a solution, then we can present the problem in the form

a(x)/b(x) >= t and t -> max,

or

a(x) >= t * b(x) and t -> max.

For each fixed t we use

a(x) >= t * b(x)

instead of

a(x) = t * b(x)

which can be computationally more convenient.

We consider this case - a goal function which is a ratio of two linear functions - to be a case of linear optimization, and we supply the corresponding functionality in the linear optimization procedures.

Integer optimization

Theoretically, we get a problem of linear integer optimization by adding a set of constraints:

value(x[i]) / round_unit(x[i]) is integer

for some variables. However, this addition makes the problem very difficult. There are general approaches to solving this type of problem - the branch and bound method, for example - but they are slow. Some kinds of problems can be solved with special custom-made "rounding" procedures.

We have a similar situation when there are "gaps" in the area of possible values of variables. Again the branch and bound method is applicable here, and again it is generally slow.

In our system, the ability to handle soft constraints and unbounded variables is a part of the optimization procedure. The Linear Optimization procedure is a two-layered procedure: the first layer translates the original problem into a standard form, where all variables are non-negative and some of them have an upper bound, and the second layer solves the standard problem.

The linear optimization procedure translates data presented in our form, with all the bounds and soft constraints, into a problem of the standard form. The problem in the standard form is solved by one of a few different optimization procedures. These procedures can be developed to the degree that they can handle some integer variables and "gaps".

There are many "standard forms" of the linear optimization problem. We favor a "bounded" one.

1. If there are bounds for x[i], the substitution

z[i] = value(x[i-1]) - lower(x[i-1])

or

z[i] = upper(x[i-1]) - value(x[i-1])

leads to the case with non-negative z[i], sometimes with the additional upper bound

upper(x[i-1]) - lower(x[i-1]),

i=1,...,n.

2. If we have the constraint

a[j][0]*z[1]+...+a[j][n-1]*z[n] = value(y[j])

or

lower(y[j]) <= a[j][0]*z[1]+...+a[j][n-1]*z[n]

or

a[j][0]*z[1]+...+a[j][n-1]*z[n] <= upper(y[j])

or

lower(y[j]) <= a[j][0]*z[1]+...+a[j][n-1]*z[n] <= upper(y[j])

we add variable

z[k] = VALUE[j] - value(y[j])

or

z[k] = VALUE[j] - lower(y[j])

or

z[k] = upper(y[j]) - VALUE[j]

and we have a constraint that looks like

z[k] = a1[j][0]+a1[j][1]*z[1]+...+a1[j][n]*z[n]

where

z[k] = 0

or

0 <= z[k]

or

0 <= z[k] <= b,

where upper bound is

upper(y[j]) - lower(y[j]).

The corresponding goal function coefficient is zero.

At this step, we silently introduced a new artificial coefficient 1. This can affect the precision of the solution when the a[j][i] are too small or too big, because of the inevitable truncation errors during calculations. This means we have to normalize the original matrix of coefficients, making them close to 1 in absolute value.

3. We enumerate variables in the homogeneous form. The problem now is in the form: find a solution

u[1], ... ,u[N]

with non-negative u[i], some of which can have an upper bound or be equal to zero, which satisfies a set of constraint equations

u[j] = c[j][0]+c[j][1]*u[m+1]+ ... +c[j][N]*u[N]

j = 1, ..., m

and maximizes goal function

g[0]+g[1]*u[m+1]+ ... +g[N]*u[N].

Now we have a set of non-negative variables, some of which can have an upper bound, a set of constraints-equations and a goal function.

The linear optimization problem in this form is solved by the Simplex Method, which consists of two phases. In the first phase, we find a solution that satisfies the given set of constraints. We do it step by step: in each step we do not violate the constraints we have already satisfied.

In the second phase, we find an optimal solution among those that satisfy all constraints.

There are different ways of implementing phase one of the method. We favor a method based on the same type of steps as phase two, but with an artificial goal function built from the constraints that are violated if we take the point with all z[i]=0.

The idea of the Simplex Method is that, by virtue of the linearity of the problem, we need to test only a finite set of variants of solutions (which reside on the vertexes of the figure carved out by the original set of constraints or its subsets). We step from one variant to the next one, which has a better value of the goal function, until we find one we can not improve.

The variant which is a vertex is one where an appropriate subset of variables has zero values, and the other variables are recalculated from the equations. This is used to build an algorithm. We take a variable with zero value whose increase is supposed to improve the goal function, and increase it to the maximum possible value. We hit a constraint placed on another variable, which depends on this one. Now that variable has a zero value.

In the case with an upper bound, each time an upper-bounded variable hits this bound we make an exchange of variable:

u -> upper - u,

now its value is zero.

We proceed with this improvement of the goal function.

The difficult case is when some of the c[j][0] = 0. Sometimes in this case we have to go from one solution to another that delivers the same value of the goal function, but we have to use the improvement of the goal function on each step as an assurance against endless looping. To solve this problem, we change zero coefficients to small non-zero ones (according to the error margin) when we need to.

It is possible that in this way we can increase a variable indefinitely. It means that the problem is 'unbounded' and a meaningful optimal solution does not exist.

For the first phase, if we have some set of values u1[1], ..., u1[N] which satisfies at least some of the constraints, we can find the next one that does not violate the constraints the previous one satisfied and delivers a better value of the special artificial goal function. We start with the solution

u[m+1] = 0, ..., u[N] =0,

with the goal function which is the sum of all constraints that are violated at this point. This function improves when we move closer to constraint satisfaction, and continues to improve when we have satisfied one constraint and are moving to satisfy another. On each step we satisfy some constraint.

If we cannot find a variable whose change improves the goal function, it means we cannot find a better solution, i.e., this is an optimal solution (phase two) or there is no solution at all (phase one).

Each step of the linear optimization procedure (the step is a variable exchange) leads to error accumulation due to the truncated representation of numbers in the computer. This is a big problem with linear optimization, mostly because there are many variables and many steps. It can be interpreted as solving a slightly different problem after each step. In some cases this has disastrous consequences: for example, if somewhere inside a system of equations we have something equivalent to the pair 0 <= x and x <= 0, which means x=0, and after a few steps we have E1 <= x and x <= -E2, where E1 and E2 are two small positive numbers, we have no solution. This is not something of pure theoretical interest - this is a situation which happens often in practical calculations. The main idea of the linear optimization procedure is moving from one vertex of the polyhedron (produced by the equations) to another where the goal function has a better value. Because of error accumulation, the procedure can lose the path in places where the improvement of the goal function is really small.

We can fight this problem with a similar weapon - each time the procedure stalls, we widen the appropriate area: we change the constraints, according to the margin of error of the appropriate parameters, in the direction which makes it more likely for the procedure to move ahead. This makes the procedure robust with high probability; we have somewhat changed the definition of robustness and made it statistical.

Experience shows that this approach leads to practical, acceptable procedures.

It is an interesting example of how the concept of a "near optimal" solution allows us to use simpler optimization procedures and find an acceptable solution.

The building block of many optimization procedures is a procedure for optimizing a function of one variable on a segment, where it is known that the function has only one maximum.

The building block in our system is somewhat more complicated: it works with "ranks" (which present the goal), takes into consideration the margins of error for the argument and for the function, can handle long plateaus (where values of the function are the same), and can work with continuous variables and with integer variables. It works on the entire line: it starts from some initial point and localizes a segment where the maximum resides. However, the main ideas are the same, and it is helpful for anyone involved in optimization to know them.

Let f(x) have a maximum on the segment [a,b]. We add two points c and d:

a < c < d < b,

and analyze the values of the function at these points

f(a), f(c), f(d), f(b).

Because we have only one maximum, we must have either

f(a) < f(c) < f(d), or

f(c) > f(d) > f(b).

More precisely, some values can equal each other; this makes the analysis a little more complicated, but the main idea is the same.

Now, in the first case the maximum is for sure not in the interval [a,c), and we have a smaller interval [c,b] to look into; similarly, in the second case we can drop the interval (d,b].

We add one more point. We have a 4 point scheme again, and we can shorten the segment again. We stop when the segment is short enough.
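The scheme can be sketched as follows; here the two inner points are placed at the third-points of the segment, one simple choice among many:

```cpp
#include <functional>

// Sketch of the four point scheme for a unimodal f on [a,b]: place two
// inner points c and d, drop the end whose side cannot contain the
// maximum, and repeat until the segment is short enough.
double segment_max(const std::function<double(double)>& f,
                   double a, double b, double eps) {
    while (b - a > eps) {
        double c = a + (b - a) / 3.0;
        double d = b - (b - a) / 3.0;
        if (f(c) < f(d)) a = c;   // maximum is not in [a,c)
        else             b = d;   // maximum is not in (d,b]
    }
    return 0.5 * (a + b);
}
```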

We describe a few variants of stochastic optimization to give a general understanding of the method.

Boolean variables:

x = (x0,x1,...,xn), xi from {0,1}

Goal function:

f(x) -> max.

Technical definition: generalized probability is a value from minus infinity to plus infinity with a smooth map to the ordinary probability in (0,1), where zero value of generalized probability corresponds to ordinary probability 1/2.

Randomization

We define

a random value yi with realizations in {0,1}, realization of yi is xi and

a random value

y = (y0,y1,...,yn),

hence, realization of y is x and

pi - the generalized probability corresponding to the ordinary probability P{yi = 1},

p = (p0,p1,...,pn).

We define a randomized goal function

F(p) = M(f(y)) - mean value

which is defined on n dimensional space of generalized probabilities.

The problem

F(p) -> max

is a randomized optimization problem.

Optimization

We use statistical estimate FF(p) of F(p):

x(0), x(1), ..., x(m) - realizations of y with given probabilities p

FF(p) = 1/(m+1) * ( f(x(0)) + f(x(1)) + ... + f(x(m)) )

We use also an algorithm based on stochastic gradient of FF(p):

p(i+1) = p(i) + d(i) * G,

where the length of the step (scalar)

d(i) = O(1/i)

and vector G is a (normalized) stochastic gradient of FF(p) in the point p(i). It determines the direction of the step.

The simplest way to calculate G in the point p is this one. We select some direction g in the point p and make a small step h in this direction:

pp = p + h * g,

if FF(pp) > FF(p), we select G = g, otherwise G = -g.

To catch the convergence, we watch the real probabilities, which we compute from the generalized probabilities.
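The randomized estimate FF(p) can be sketched like this, using a sigmoid as one possible smooth map from generalized to ordinary probabilities (zero maps to 1/2, as required); the names are illustrative:

```cpp
#include <cmath>
#include <cstddef>
#include <functional>
#include <random>
#include <vector>

// Map a generalized probability (whole real line) to an ordinary one.
double to_probability(double p) { return 1.0 / (1.0 + std::exp(-p)); }

// Estimate FF(p) = M(f(y)): draw m realizations of the Boolean vector y
// with the given generalized probabilities and average f over them.
double estimate_FF(const std::vector<double>& p, int m,
                   const std::function<double(const std::vector<int>&)>& f,
                   std::mt19937& rng) {
    std::uniform_real_distribution<double> u(0.0, 1.0);
    double sum = 0.0;
    std::vector<int> x(p.size());
    for (int k = 0; k < m; ++k) {
        for (std::size_t i = 0; i < p.size(); ++i)
            x[i] = (u(rng) < to_probability(p[i])) ? 1 : 0;  // realization of y[i]
        sum += f(x);
    }
    return sum / m;
}
```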

Real variables:

x = (x0,x1,...,xn),

Goal function:

f(x) -> max,

Constraints

0 <= C(x)

where C(x) can be a vector and <= means each component of it is non-negative.

We use an algorithm based on stochastic gradient of f(x):

x(i+1) = x(i) + d(i) * G,

where the length of the step (scalar)

d(i) = O(1/i)

and vector G is a (normalized) stochastic gradient of f(x) in the point x(i). It determines the direction of the step.

As above, the simplest way to calculate G in the point x is this one. We select some direction g in the point x and make a small step h in this direction:

xx = x + h * g,

if f(xx) > f(x), we select G = g, otherwise G = -g.

Stepping like this, we can get out of the area defined by the constraints. We need some "projection" mechanism to project x(i+1) back into the area. A simple way to do this is as follows. We keep track of a good point z (or a few such points), which has a good value of the goal function (for example, the best one so far) and belongs to the area. Now if x is outside the area, we "draw" a line between x and z and "fast optimize" on this line - find some point on this line which is inside the area.
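The projection idea can be sketched as a bisection along the line between x and the good point z; here inside() stands in for the constraint check 0 <= C(x):

```cpp
#include <cstddef>
#include <functional>
#include <vector>

// If x stepped outside the allowed area, search along the line between x
// and a remembered good interior point z for a point that is back inside.
std::vector<double> pull_inside(std::vector<double> x,
                                const std::vector<double>& z,
                                const std::function<bool(const std::vector<double>&)>& inside,
                                int steps) {
    if (inside(x)) return x;
    double lo = 0.0, hi = 1.0;   // q(t) = z + t*(x - z); q(0) = z is inside
    for (int k = 0; k < steps; ++k) {
        double t = 0.5 * (lo + hi);
        std::vector<double> q(z.size());
        for (std::size_t i = 0; i < z.size(); ++i) q[i] = z[i] + t * (x[i] - z[i]);
        if (inside(q)) lo = t; else hi = t;
    }
    for (std::size_t i = 0; i < z.size(); ++i) x[i] = z[i] + lo * (x[i] - z[i]);
    return x;   // last point on the line known to be inside
}
```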

We look at the variety of optimization procedures from a few viewpoints:

- What is the structure of the set of variants and how do we approximate it?

- What are assumptions about existence and quantity of "best" variants?

- Can we reduce it to the case, when all functions involved (goals and constraints) are linear?

The optimization procedure can be constructed from a few "basic" procedures. In this case we apply our questions to the "basic" procedures and ask

- How do the "basic" procedures work together?

The classification of the set of variants was provided above, in the model classification.

By far the most important method of optimization is the simplest one - direct search. If the area to search consists of a relatively small number of points, it is reasonable to check all of them and pick the best one. The variables can be finite, Boolean, or integer with lower and upper bounds; all together they form a finite search space.

When the variables are real and the search space is not finite at all, we can still use direct search. We choose a set of points in the area, close enough to each other - a lattice - and search through this set of points.
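Direct search over a lattice can be sketched as follows (an odometer-style walk over a grid laid on a box; the names are illustrative):

```cpp
#include <cstddef>
#include <functional>
#include <vector>

// Lay a grid of points_per_axis points per axis over the box [lo, hi],
// screen every grid point, and return the best one found.
std::vector<double> lattice_search(const std::vector<double>& lo,
                                   const std::vector<double>& hi,
                                   int points_per_axis,
                                   const std::function<double(const std::vector<double>&)>& f) {
    std::vector<double> best, x(lo.size());
    std::vector<int> idx(lo.size(), 0);
    double best_v = 0.0;
    bool first = true;
    while (true) {
        for (std::size_t i = 0; i < lo.size(); ++i)
            x[i] = lo[i] + (hi[i] - lo[i]) * idx[i] / (points_per_axis - 1);
        double v = f(x);
        if (first || v > best_v) { best_v = v; best = x; first = false; }
        // Advance the multi-dimensional index, odometer style.
        std::size_t k = 0;
        while (k < idx.size() && ++idx[k] == points_per_axis) idx[k++] = 0;
        if (k == idx.size()) break;   // all points screened
    }
    return best;
}
```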

A general optimization procedure has to handle a hierarchically organized set of variables: we might have a set of variants and a set of parameters for each variant; for each value of parameters from any of these sets there are variants specific to this value, and so on. Usually this can be handled with a specially organized direct search.

However, when the space grows big (in the sense that it takes too long to compute the model in all points), we have to use special "smart" procedures.

Optimization procedures are developing in many directions. There are extensive theoretical studies in this area, however results of these studies often can not be used in practical applications yet.

For example, there are special procedures fine-tuned to a specific problem, which can not be used anywhere else. There are well-developed procedures which can not be used when the number of variables is more than 20. There are procedures prepared to deliver an exact solution in about any case at the expense of the speed of the procedure. None of this is acceptable in our case.

We need a set of procedures to be:

- Fast, because we use corrections and iterations and often have to finish optimization by a deadline.
- With a standardized way of using them, because we will switch from one procedure to another in the process of corrections, or when we can not find a proper solution fast enough.
- General, because we do not want to change a procedure as soon as we make a small correction of the initial assumptions.
- Standardized, because we have to maintain them (correct mistakes, improve).

In addition, we would like to have them built from a set of standard blocks.

Analysis of the current state of optimization procedures shows that we can use the following as blocks of an optimization procedure (after some standardization effort):

- "segment" optimization (one variable in the segment),
- "gradient based" procedures, for the small number of variables,
- linear optimization,
- procedures based on "stochastic gradient",
- discrete optimization based on "branch and bound" method.

One of the effective ways of combining blocks of direct search, gradient and segment optimization is "sliding": we divide parameters into groups and optimize one group at a time, with the other parameters fixed to the values found on the previous step.

These general-purpose procedures could be supplemented by special procedures developed and fine tuned for a specific class of problems.

In general, the problem of Optimization of the Group of Objects arises when

- we have a set of (potential) objects,
- each object from the set has parameters, which have to be determined, and
- we have an area of objects "application" (use).

We assume that objects of the same type, but with different values of parameters are different objects. The object is in fact a class of units.

Units of one object (one class) are "applied" only in the part of the area of "application" - in the sub-area corresponding to this object. These sub-areas can intersect.

The "application" of the unit of the object in each "point of application" produces some benefits and costs something. We would like to conduct a cost-benefit analysis of this situation over all objects and all "points of application". To do so we need to know how many units of the object are "applied" in a particular "point of application".

Different optimization models belong to the class of Optimization of the Group of Objects. Some of them are static - all functions and all forecasts are averaged out in time, and there is no explicit time in the model; others have to be dynamic, because this is where the main action is, as in optimization models which include product modification. Some of them do not analyze parameters of objects - they simply have a big set of different objects to choose from. Some of them are macro models, which have to utilize non-linear functions, for example, the cost of production as a function of production volume.

There are many examples of the optimization problems where this type of models is appropriate:

1. Often a manufacturer produces a set of products of similar functionality. They are (partially) exchangeable, but it is reasonable to produce a set of them, each object of which is tuned to a particular segment of the market. If the market segmentation is not obvious and the usage of the objects from the set overlaps, we have this particular type of problem - we can not optimize the objects separately; even how many objects we need can be unknown.

The major contradiction here is: the fewer object types in the set, the higher the production volumes and the more economical the production; but the more object types, the better they can be tuned to the particular needs of particular customers, and the more economical and attractive their use.

2. There is a way to improve the effectiveness of production without decreasing the number of products produced. This is done with unification - making many parts and technological production processes common to the products of the set. To do so, we have to optimize a group of parts and technological processes together with the final products.

3. There are cases when objects are used in sets and we have to optimize objects together with the sets, which we treat as special objects.

4. Complex systems are designed by specialists working in different departments, each concentrated on their part of the problem. As a result, similar functions in the system are performed by different parts and subsystems. A close look at this problem allows a reduction of the number of parts used. This is an Optimization of the Group of Objects.

5. There are a few types of raw liquid products (oil, for example) which come to a mixing plant; the output of the plant is a set of other liquid products with defined parameters, produced by mixing the original ones. The demand and supply change over time, but they are known. We have to produce a mixing schedule. This is an Optimization of the Group of Objects, where a unit is a unit of measurement and the quantity of units does not need to be integer.

With all this variety of models, it is possible to design a uniform approach to this class of optimization problems and a uniform notation. This notation is close to contemporary mathematical notation and well coordinated with ways of coding in C++ (a set of objects - a collection of objects, etc.).

- the set of objects: A ;
- an object from the set A : a ;
- its parameters p, which are from the set P[a] ;
- a set of restrictions on them - a model: M[a](p) ;
- the area of "application": B ;
- a "point" in this area: b ;
- the quantity of units of object a in the "point" b - a volume: v[a,b] ;
- the sum of v[a,b] for all b from B and given a: V[a] ;
- the sum of v[a,b] for all a from A and given b: V[b] ;
- an area where an object a is "applicable": B[a] ;
- a subset of objects "applicable" in the "point" b: A[b] .
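This notation can be sketched as simple data structures. A minimal illustration in Python; the object names, point names and numbers are hypothetical:

```python
# Hypothetical sketch of the notation: objects a from A, points b from B,
# volumes v[a,b], and the marginal sums V[a] and V[b].
A = ["a1", "a2"]                    # set of objects
B = ["b1", "b2", "b3"]              # area of "application"
v = {("a1", "b1"): 5.0, ("a1", "b2"): 2.0,
     ("a2", "b2"): 1.0, ("a2", "b3"): 4.0}   # volumes v[a,b]

def V_of_a(a):
    """Sum of v[a,b] over all b from B for a given object a."""
    return sum(v.get((a, b), 0.0) for b in B)

def V_of_b(b):
    """Sum of v[a,b] over all a from A for a given point b."""
    return sum(v.get((a, b), 0.0) for a in A)

print(V_of_a("a1"))   # 7.0
print(V_of_b("b2"))   # 3.0
```

Missing pairs (a,b) simply carry zero volume, which matches the convention that an object not applicable in a point has v[a,b] = 0.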

A major part of the model is a system of equations representing all kinds of balances. For example, if we have a set of production types B0 and a set of usage types B1, our area of application consists of those two sets, and a system of balance equations reflects a simple idea: the quantity of all objects a in use (a sum of v[a,b] when b belongs to B1) is equal to the quantity of all objects a produced (a sum of v[a,b] when b belongs to B0).

If we optimize a set of parts together with objects, and it is known how many parts of particular type are in each object, we have additional balance equations: quantities of objects determine quantities of parts.

Now we have to describe the (partial) exchangeability of the objects - the reason why we optimize them together. The obvious approach, and sometimes a useful one, is to bring into the model a matrix of exchange coefficients: how many objects a0 are needed to replace objects a1 in the point b (conditions b). However, as our basic approach we use a different, more general one.

We measure the level of the benefit which occurs when object a is applied in the point b in the volume x. It is possible that we need a few characteristics to do so, or that we need to invent an artificial one to replace the coefficients of exchangeability. This set of characteristics we denote z, and the function

z = z[a,b](p) * x,

where p are parameters of the object a, is proportional to the volume with coefficient z[a,b](p).

Now we can apply this function to actual volumes v[a,b] and we know actual benefits: z[a,b](p)*v[a,b].

If we have objects a0 in the volume x0 and objects a1 in the volume x1 applied in the point b, the benefit from them is the sum

z = z[a0,b](p0) * x0 + z[a1,b](p1) * x1.

For example, if we optimize a set of machine tools, z shows how much work of given type b can be done in an hour with particular tool a. If we have a few tools, the amount of work is a sum of amounts of work done with separate tools.
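The additivity of benefits can be sketched in a few lines of Python; the coefficients and volumes below are hypothetical, with parameters p held fixed:

```python
# Sketch: benefit is proportional to volume with coefficient z[a,b](p),
# and benefits of several objects applied in the same point b add up.
z_coef = {("a0", "b"): 3.0, ("a1", "b"): 2.0}   # z[a,b](p) for fixed p

def total_benefit(volumes, b):
    """Sum of z[a,b](p) * x over the objects applied in point b."""
    return sum(z_coef[(a, b)] * x for a, x in volumes.items())

v = {"a0": 2.0, "a1": 5.0}      # volumes x0 and x1
print(total_benefit(v, "b"))    # 3.0*2.0 + 2.0*5.0 = 16.0
```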

The cost characteristics

u = u[a,b](p,x),

where p - parameters of the object a and x - an appropriate volume.

Now we can apply these functions to the actual volumes v[a,b] and we know the actual cost: u[a,b](p,v[a,b]).

The goal function is determined using cost-benefit analysis. Particular components of the cost and the benefit are determined by all parameters: an object a and its parameters p, a point b and a volume v[a,b]. The overall cost and overall benefit we get as a sum taken over all objects and all points.

We can have a few characteristics of cost and a few characteristics of benefit and we face a problem of a compromise: we have to build a goal function from these "overall cost" and "overall benefit" characteristics. This is a usual problem in the optimization; nothing specific exists in the case of the Group of Objects.

We can have some additional constraints, volumes can have upper and lower bounds, or can be fixed. Some objects have to belong to any solution, etc.

What we optimize and what is calculated from the model depends on the problem and parameters that we can control. In some cases, we only have to choose

S

a subset of A and for each a from S parameters p from P[a]. In other cases, we also have to choose some (maybe all) volumes v[a,b].

In our system we pay more attention to the ease of model correction than to the speed of computation, for a reason: the problems of formal description are usually much harder than the computational ones. However, in the case of the Group of Objects optimization, computational problems can be very hard. This happens mostly because we have to find a subset of a usually big set of objects. To solve this problem, we use special models and special optimization procedures.

One of the helpful tools is stochastic optimization with randomized variables - instead of finding an answer to the question "Does this object belong to the solution?" we find the answer to another question: "How big is the probability that the object belongs to the solution?" After that, we decide which objects we will include in the solution and which not. This makes the final stage of decision-making somewhat harder, but it brings us closer to a solution which we might not be able to find otherwise.
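The final decision step of the randomized-variable approach can be sketched as follows. The inclusion probabilities here are hypothetical inputs, standing in for the output of a stochastic optimization procedure:

```python
import random

# Hypothetical inclusion probabilities: "how big is the probability
# that the object belongs to the solution?"
prob = {"a1": 0.9, "a2": 0.1, "a3": 0.6}

def round_solution(prob, threshold=0.5):
    """Deterministic decision: include objects with high probability."""
    return {a for a, p in prob.items() if p >= threshold}

def sample_solution(prob, rng):
    """Alternative: draw a random subset according to the probabilities."""
    return {a for a, p in prob.items() if rng.random() < p}

print(sorted(round_solution(prob)))             # ['a1', 'a3']
rng = random.Random(0)
print(sample_solution(prob, rng) <= set(prob))  # always a subset: True
```

Sampling several subsets and keeping the best one is a common way to turn the probabilities into a concrete solution.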

The other approach is an optimization of V[a] for all a from the original set A. If the value of V[a] is zero for a given a, then the object a is not in the solution. This produces a large number of variables, but sometimes it can be presented as a problem of linear optimization, and then it is manageable.

Also there are special models, with really rigid requirements on the properties of functions involved, models for which mathematicians have already found fast optimization procedures.

There are

- a few variants of the production of the objects (a set B0 - a subset of B) and
- a few variants of the use of the objects (a set B1 - a subset of B);
- v[a,b], when b is from B0, is a production volume for these production conditions, and
- v[a,b], when b is from B1, shows how many objects are in use for the corresponding usage conditions.

The demand is given as a requirement: for each b from B1 (usage) there is a value of the benefit E(b) which has to be achieved by the combination of objects from the solution which are used in the conditions b.

Usually cost functions in the case of the usage are proportional to the volume:

u=u[a,b](p) * x,

when b is from B1 - usage.

This leaves us with a set of coefficients u[a,b](p) (generally depending on the parameters of the object) as a part of the model data.

Partial exchangeability happens when some objects in some usage conditions do not deliver some type of benefit. It means that they are not exchangeable with an object which does deliver this type of benefit. This can happen when z - the benefits - is a vector (has a few components).

In the case of production we are interested only in cost, but usually the cost, as a function of the volume of production, has some properties which make computations difficult.

First, and most important, each production has an initial cost. It means that even for very small volumes the cost is not close to zero. This makes our little trick - with V[a] as parameters and V[a]=0 as a sign that a is not in the solution S - useless. We are forced to specify directly which objects belong to the set.

The other property: when volume increases, the cost of production of a unit of the object falls (a more effective technology can be utilized - the technology which requires bigger initial spending). It means that the cost, as a function of the volume of production, is a non-linear function.

However, we can get a linear function of the cost of production if we describe the production in more detail. Each type of production b (b from B0) we break into subtypes according to these different technologies. This is already a model correction, which leads to a larger set B0 and a larger amount of data which we have to gather, but the model itself becomes simpler.
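The effect of breaking a production type into technology subtypes can be sketched as follows; within each subtype the cost u0 + u1*x is linear in the volume x. The two technologies and all numbers are hypothetical:

```python
# Each technology subtype has its own fixed (initial) cost u0 and unit
# cost u1, so within one subtype the cost u0 + u1*x is linear in x.
technologies = {
    "b0_manual":    {"u0": 10.0, "u1": 5.0},   # cheap setup, costly per unit
    "b0_automated": {"u0": 100.0, "u1": 1.0},  # costly setup, cheap per unit
}

def production_cost(volume):
    """Cost of producing `volume` units with the best technology subtype."""
    return min(t["u0"] + t["u1"] * volume for t in technologies.values())

print(production_cost(10))   # manual wins: 10 + 5*10 = 60
print(production_cost(100))  # automated wins: 100 + 1*100 = 200
```

The overall cost curve is non-linear (piecewise linear), but each subtype taken separately is linear, which is what the model correction exploits.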

In many cases, we do not need to take into consideration benefits, which we get from the use of objects when demand is known. We already have our main contradiction: we want to minimize cost, but we have to satisfy demand. Therefore the optimization model in this case is - minimize the cost, but satisfy constraints, especially the demand constraint.

In the case above, if we do not start production of new objects (we only adjust existing production to new demand), we can assume, that the cost of production is proportional to the production volume. Everything else stays the same. Now we can use V[a]=0 as a sign that the object should be taken out of production, and the entire problem can be a problem of linear optimization.

With suppliers, we have a logistic problem: we have suppliers B0 and buyers B1 and all possible combinations of them

B = B0*B1,

if b0 is from B0 and b1 is from B1, then b is from B

b=(b0,b1).

Now the volume v[a,b] shows how much supplier b0 sends to the buyer b1. These variables are used to calculate the transportation cost.

To calculate the production cost of supplier b0 we need a variable

V0[a,b0]

which is a sum of v[a,(b0,b1)] for this b0 and all b1 from B1. This is a production volume of the object a for the supplier b0.
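The aggregation of shipments into production volumes can be sketched like this; the supplier and buyer names and the shipment numbers are hypothetical:

```python
# Logistics sketch: suppliers B0, buyers B1, shipments v[a,(b0,b1)].
B0 = ["s1", "s2"]          # suppliers
B1 = ["c1", "c2"]          # buyers
v = {("a", ("s1", "c1")): 3.0, ("a", ("s1", "c2")): 2.0,
     ("a", ("s2", "c2")): 4.0}

def V0(a, b0):
    """Production volume of object a at supplier b0:
    the sum of v[a,(b0,b1)] over all buyers b1 from B1."""
    return sum(v.get((a, (b0, b1)), 0.0) for b1 in B1)

print(V0("a", "s1"))  # 3.0 + 2.0 = 5.0
print(V0("a", "s2"))  # 4.0
```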

The rest of the model can be built similarly to the models above.

We introduce a set of products to the market (to a new market or to an established one, where we have to push out competitors). We assume that customers make rational decisions - they compare the cost and benefit of the new product, compare it with other similar products on the market, and choose the best one for their circumstances.

If we can build a model of this decision-making - where v[a,b], the quantity of units of the object a bought for use in conditions (market segment) b, is a value which is either determined by a "local model" or restricted by one - we can incorporate this model into the Optimization Model of the Group of Objects.

For example, in the conditions b we compare the cost (price plus exploitation cost) of all objects which deliver the needed level of benefits (with the proper adjustment for the needed quantities of units), and choose the one whose cost is close to minimal.

The specifics of this model - the existing set of objects has to be a part of optimal solution.

In this model we optimize all v[a,b] and the parameters of the objects (their prices).

This model can be a good tool for the study of the situation; for example, we can find the relationship between price levels and market share for each object.

Local Models - models which allow us to determine some of the volumes v[a,b] for a particular point of application b - can be used for different purposes.

The first, obvious one is to model decision-making by the customers, when we do not control the values v[a,b] and have to find a way to calculate them for each possible solution S.

This is an interesting example when an optimization is used as a tool of the forecast (forecast of quantities of the objects bought by consumers).

However even when we control values v[a,b] and it is possible to make them variables in a "global" optimization problem, it can be useful to determine them locally.

For example, we have a problem of unification, when we study whether it is reasonable to replace the special block of the system, which performs a particular type of function in the particular type of conditions b, with one of the generally designed blocks from the set A. There are cases when the results of global optimization look strange from the local point of view: the optimal solution designates for the use in b an object from S which is not the least expensive or the most effective, because of the benefits of production in big quantities. It is not easy to implement this type of solution. In many cases, by sacrificing a little of the goal function value, we can deliver a solution which looks perfectly reasonable locally. We have to add this "local reasoning" as a local model, which looks like the model with consumers.

Sometimes there are too many variables v[a,b], and to speed up a computational procedure we introduce local models coordinated with the goal function, so that the optimal solution of the modified model is close to the original one.

Therefore, local models deserve more careful study.

Local models are mostly about the structure of the set of sets S[b] - for each possible solution S and point b we define a subset of S which is applicable in b.

If the description of points is detailed (which requires many characteristics and a large amount of points and data), then we can assume that there is only one object from S which should be applied in the point b: S[b] has only one object. Otherwise, we need to find which a from S should not be in the set S[b].

This can be presented in a simple form. Let's say we can determine the cost of the application of the object a in the point b (a price plus a cost of the exploitation averaged and discounted in time) C[a,b] . Then the cost we expect from the object in the point b is

min C[a,b] , where we compare all a from S.

The difference

d[a,b] = C[a,b] - min C[a,b]

is non-negative.

If we demand that d[a,b] should be zero, we, practically, restrict sets S[b] to the case when they do not overlap for different b and hold only one object.

If we give some leeway d[a,b] < d0 we allow a few objects to be applicable in the "point" b and we need additional information to find their volumes v[a,b].
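This local model can be sketched in a few lines; the costs C[a,b] and the leeway d0 below are hypothetical:

```python
# Local model sketch: cost C[a,b] of applying object a in point b, and a
# leeway d0; objects within d0 of the minimal cost are "applicable" in b.
C = {("a1", "b"): 10.0, ("a2", "b"): 11.0, ("a3", "b"): 15.0}

def applicable(S, b, d0):
    """Subset S[b]: objects a from S with d[a,b] = C[a,b] - min C[.,b] < d0."""
    c_min = min(C[(a, b)] for a in S)
    return {a for a in S if C[(a, b)] - c_min < d0}

S = {"a1", "a2", "a3"}
print(sorted(applicable(S, "b", d0=0.5)))  # ['a1'] - only the minimum
print(sorted(applicable(S, "b", d0=2.0)))  # ['a1', 'a2'] - with leeway
```

With d0 close to zero the sets S[b] shrink to a single object; a larger d0 admits several objects, and additional information is then needed to split the volumes among them.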

When we define C[a,b] we take into consideration, that we need different amounts of units of different objects to deliver the same level of benefit.

Similarly, we can work with benefits. We can define benefit functions E[a,b] and

d[a,b] = - E[a,b] + max E[a,b],

where the maximum is taken with a from S, and proceed with similar analysis.

There is one observation we will illustrate on a particular Optimization Model of the Group of Objects - Given Demand and Minimum Cost.

The benefit for each type of the use conditions b is fixed E(b). We minimize cost and optimize all v[a,b] without any additional constraints (in addition to the balance equations).

We can modify the model so that the cost of production is a linear function of the production volume x

u0[a,b0] + u1[a,b0] * x

and only one type of production conditions b0[a] is used for any object a from optimal solution.

The part of the cost which depends on the volume can be written as

a sum of v[a,b1]*{u1[a,b0[a]] + u[a,b1]},

where summation goes by all b1 from B1 - use conditions, and by all a from S - solution.

We know that for each b1 a fixed value E(b1) is equal to

a sum e[a,b1]*v[a,b1]

where summation goes by all a from the solution S.

We denote

C[a,b1]=( u1[a,b0[a]] + u[a,b1] )/ e[a,b1]

and if we want the cost to be minimal we have to make v[a,b1] zero when C[a,b1] is not equal to a minimal value for given b1 and all a from the solution S.

This is exactly what we had before for the case when customers choose the best for themselves.
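The selection rule derived above can be sketched numerically; the coefficients and the chosen production types below are hypothetical:

```python
# C[a,b1] = (u1[a,b0[a]] + u[a,b1]) / e[a,b1]: cost per unit of benefit.
u1 = {"a1": 2.0, "a2": 3.0}                   # unit production cost
u  = {("a1", "b1"): 1.0, ("a2", "b1"): 1.0}   # unit usage cost
e  = {("a1", "b1"): 1.0, ("a2", "b1"): 2.0}   # benefit per unit

def C(a, b1):
    return (u1[a] + u[(a, b1)]) / e[(a, b1)]

def cheapest(S, b1):
    """Objects that may carry non-zero volume v[a,b1] in a minimal-cost solution."""
    c_min = min(C(a, b1) for a in S)
    return {a for a in S if C(a, b1) == c_min}

print({a: C(a, "b1") for a in ["a1", "a2"]})  # {'a1': 3.0, 'a2': 2.0}
print(cheapest({"a1", "a2"}, "b1"))           # {'a2'}
```

Here a2 is more expensive per unit but delivers twice the benefit, so all volume for b1 goes to a2 - the same behavior as customers choosing the best object for themselves.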

Note that here we have only a discrete optimization problem - we have to choose a subset S and assign the proper production type to its elements. After that, we can determine all volumes.

We have two groups of characteristics and related functions:

- cost, in the production and in the use,
- benefits, mostly in the use.

There is a helpful observation: if the model is detailed enough - we distinguish different technological processes, different conditions of the object use, etc. - then we can assume that these functions are linear. However, it is not always reasonable to use a detailed model: we might not have enough information for that, or we do not need this kind of precision in the result, or we are not ready for the expenses that data gathering for the detailed model requires. In this case we go with a macro model and non-linear functions.

We mentioned above the nature of the non-linearity of the cost of production: higher volumes of production allow the use of technological processes which require more initial expenses, but the cost of production of one unit of the object is lower. These functions of the cost of production, as functions of the production volume, usually can be taken from the department involved in the economic analysis of the production.

Benefits of the object's use depend on the usage conditions. When volumes are small, the use is effective and benefits are big. As volumes grow, the object is also used in less effective conditions, and the benefits from the use of a unit of the object fall. This is a source of non-linearity of the benefit function. In general, the benefit has to be defined as a function of all exchangeable objects, and this function is non-linear. This can be described with boundaries of model applicability - we assume a linear function, and this assumption is reasonable up to some level of benefit for each given "point" b.

The real situation is dynamic: the volumes of production and usage are changing over time, cost and benefit occur in different moments in time, etc. Yet, our model is static. It means that values and functions are averaged in time.

Volumes - quantities of units of objects, are the average values of real quantities in time. Hence, they are not necessarily integer numbers.

Economic components of the sets of characteristics we denote as cost and benefit are averaged in time also. Averaging in time in this case means discounting future values with the proper weight function.

There are cases when the use of a dynamic model is inevitable, especially if some dynamic schedule has to be optimized.

There is one example from a related area when a static model is not sufficient. Let us say we know the reliability characteristics of the product, and based on them we calculate the cost of warranty repairs as a ratio to the production cost. Usually the computation is straightforward. However, if we are gradually reducing production volumes, then the bulk of past production, which generates warranty repairs, is bigger than the current production volumes; hence, this ratio is higher than expected, and we cannot use the usual computation to estimate the amount of warranty repairs.
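The warranty example can be illustrated with a small simulation. The repair rate, the two-year warranty window, and all volumes are hypothetical numbers chosen only to show the effect:

```python
# Hypothetical: each year's warranty repairs cost 5% of the production
# volume of each of the previous two years (a two-year warranty).
repair_rate = 0.05

def repair_to_production_ratio(history, current):
    """Ratio of warranty-repair cost to current production cost."""
    repairs = repair_rate * sum(history[-2:])
    return repairs / current

flat = repair_to_production_ratio([100.0, 100.0], 100.0)
declining = repair_to_production_ratio([100.0, 80.0], 50.0)
print(flat)       # matches the naive 10% expectation
print(declining)  # noticeably higher than 10% under declining volumes
```

Under flat production the static ratio holds; under declining production the repairs generated by the larger past volumes inflate the ratio, which is exactly why the static estimate fails.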

In the majority of problems the number of variables v[a,b] is large and the optimization problem can be difficult. With the growth of computer capabilities grows the desire to solve ever-bigger problems; hence, in this case, unlike other cases in our system, we have to pay attention to the availability of computation methods.

The other effective approach - the use of a simple model and its gradual expansion to take into consideration more factors - does not help here either. First, the model is simple enough already. Second, we start with some "reasonable" but not optimal solution. From experience, an optimal solution is usually 10%-30% better than a "reasonable" one. We do not have much room to maneuver, so we need a good optimization procedure.

To find a reasonable solution, we usually need a formal description of the problem. However, if the anticipated 10%-30% improvement from the use of the optimization procedure is not worth the trouble, we can be satisfied with the reasonable solution which we got from the formal description.

The variety of Optimization Models of Groups of Objects is huge. There are plenty of publications describing particular cases of the creation and use of these models, and each time there are arguments why the model creator chose a particular approach.

Our goal here is to describe a set of such models in a coherent way, so that it is easy to move from one model to another. The models included in the set, and the ways they are described, were chosen from experience. Instead of describing models in complete detail, we offer a set of "building blocks" which should be used when one builds an Optimization Model of the Group of Objects. This simplifies the description.

We also include simple models which have fast computational procedures, because a starting point in solving the problem is a simple model, or a few different simple models.

We can have problems with the interpretation of the model's variables and parameters. If we build a model "from scratch", its parameters and variables are what we arrived at from the problem description. If we have a set of models to choose from, we have to interpret the model parameters. For example, if we are optimizing specifications for a new bridge across the river, in addition to existing bridges, we have to use traffic measurements as model elements. In this case we need not average traffic values, but peak values (rush hour values).

Here we summarize concepts and notation of Optimization Models of the Group of Objects and bring variants of their interpretation.

1. There is the set of objects A , an object from the set a, its parameters p, which are from the set P[a] , a set of restrictions on them - a (local) model M[a](p) .

This set can have an internal structure, for example it can consist of devices, which are used directly, and their parts which are not used directly.

Local models related to objects M[a](p) are actually a set of constraints on the parameter values. Usually, if they are present at all, they include some money parameters, such as price, and they are not complicated.

2. Generally speaking, objects of the set A are potential objects. However, among them there can be actual objects - a subset R - which are in production, in use, etc. It is possible that as a result of optimization we decide to remove some of these objects. Their presence makes the problem in some respects easier: we have more precise data, and we can utilize simpler models.

3. There is the area of "application" B , and a "point" in this area b.

This set can have its own internal structure, for example, it can consist of the types of technological processes used in production B0 and the types of conditions of the use B1. Further, subset B0 can be divided according to the objects B0[a].

Note that there is a natural division of the set B according to the stages in the life cycle of the product - each stage can be described independently. For example, the stages of production, distribution (including transportation) and use. We should include the distribution stage if, for example, some transportation costs are big and can affect the optimal solution - logistics becomes a part of the optimization problem.

4. There is an area where an object a is "applicable": B[a] and a subset of objects "applicable" in the "point" b: A[b].

Sometimes it is known that the B[a] do not overlap for different a (no two different objects are applied in the same point, which means that each A[b] holds at most one object), or they are allowed to overlap only in a specific way.

5. There is a quantity of units of object a in the "point" b - a volume: v[a,b] , and the sum of v[a,b] for all b from B and given a: V[a], and the sum of v[a,b] for all a from A and given b: V[b] .

A set of values v[a,b] is a major subset of model variables (at least on initial stages of the model building). These variables determine the specifics of the Group of Objects optimization problem. Their values have to be found from the overall model because cost and benefit related to different objects and points of applications are interrelated.

It is possible to bound values v[a,b] or V[a] or V[b]. These bounds can be direct, or presented as some system of constraints.

There are cases when we optimize v[a,b]. For example, if we are engaged in unification:

- there is a system where we can find different blocks performing similar functions,
- it is possible to use a set of "standard" blocks instead of customized ones in some of these places, and achieve lower cost and higher quality at the expense of adapting the "standard" blocks to specific conditions.

In this case the volumes v[a,b] are under our control, and we optimize them.

Some of the values v[a,b] can be determined "locally"; in this case we need local models. For example, it can be a local optimization model which reflects the customers' decision-making, when they buy a particular product, paying its price (the price is the product's parameter), after comparing it to other similar products.

There is one variant of this model, which is very useful.

Let us say C[a,b] is this local goal which customers optimize, for example, a ratio of the benefit which can be achieved by using the object to the cost of the object and its exploitation.

We define values

C0(b) = max C[a,b], where a are from S,

and

d[a,b] = - C[a,b] + C0(b),

and we set a constraint:

v[a,b]=0, if d[a,b] > d0,

where d0 is a given "error margin", which shows how much customers are ready to sacrifice in the value of this goal to achieve some other goals which we did not take in consideration in our model.

This gives us a more realistic description of local optimization.
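This constraint can be sketched directly; the goal values C[a,b] and the margin d0 below are hypothetical:

```python
# Customers optimize a local goal C[a,b] (bigger is better); objects
# falling more than d0 below the best get volume v[a,b] = 0.
C = {("a1", "b"): 5.0, ("a2", "b"): 4.8, ("a3", "b"): 3.0}

def forced_zero(S, b, d0):
    """Objects a with d[a,b] = C0(b) - C[a,b] > d0, hence v[a,b] = 0."""
    c0 = max(C[(a, b)] for a in S)          # C0(b)
    return {a for a in S if c0 - C[(a, b)] > d0}

S = {"a1", "a2", "a3"}
print(sorted(forced_zero(S, "b", d0=0.5)))  # ['a3'] - too far from the best
```

Objects a1 and a2 stay within the error margin, so both may carry non-zero volumes; only a3 is excluded by the customers' local optimization.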

6. The values v[a,b] are parameters in a system of balance equations. For example, if we have a set of production types B0 and a set of usage types B1, the quantity of all objects a in use (a sum of v[a,b] when b belongs to B1) is equal to the quantity of all objects a produced (a sum of v[a,b] when b belongs to B0). If we optimize a set of parts together with objects, and it is known how many parts of a particular type are in each object, then the quantities of objects determine the quantities of parts.

7. Gasoline of different types in the same quantities can run a car different distances in given conditions. This distance can be taken as a measure of benefit.

Two television sets can be exchanged one to one (no one uses two small sets instead of a big one), so we have one component of the benefit characteristic - the set delivers TV images. However, there is another component also - the size of the screen. With television sets we have a vector benefit characteristic.

We want to measure the level of benefits, which occur when object a is applied in the point b in the volume x. It is possible that we need a few characteristics to do so, or we need to invent an artificial characteristic.

This set of characteristics we denote z, and there is a function which is a part of the model

z = z[a,b](p)*x,

where p are the parameters of the object a and the coefficient of proportionality is z[a,b](p).

The proportionality of benefits to the volumes for the entire group of exchangeable objects in the point b holds up to some level of benefit; after that, the application of objects gets ineffective, and the benefit delivered by each additional unit is smaller than by the previous one. As was pointed out above, this is a boundary of applicability of this model.

It is possible to bound benefit values, overall, as well as for particular subsets of objects and points of application. For example, cases with fixed demand, are cases when the values of the benefit are bounded.

8. In some cases it is easy to determine exchange coefficients: how many objects a0 are needed to replace objects a1 in the point b. We use them to calculate a benefit function: the ratio z[a0,b](p) / z[a1,b](p) is an exchange coefficient.

The cost characteristics

u = u[a,b](p,x),

where p - the parameters of the object a, b - the point of application, and x - the appropriate volume, can be money values, but can also be technical values, such as the size of a television set or the amount of metal spent.

The cost function can be non-linear - for example, because of more effective production at bigger production volumes. However, with a proper level of detail in the description of production, it is linear:

u[a,b](p,x)= u0[a,b](p) + u1[a,b](p) * x.

It is possible to bound cost values, overall, as well as for particular subsets of objects and points of application.

9. The goal function is determined from the overall cost-benefit analysis.

The usual approach is to fix the benefit in each point (or to set its lower bound) and minimize the cost. In the case of the given demand this is obvious, however it can be useful in other cases too.

What we optimize, depends on the problem. In some cases we only have to choose S - a subset of A, and maybe for each a from S parameters p from P[a]. In other cases we also have to choose some (maybe all) volumes v[a,b].

10. For each point of application b from B there is a subset of S : S[b] - objects from S which are applicable in b. This subset is defined by non zero values v[a,b]. Sometimes there is a constraint on the structure of these subsets, for example, each can hold only one object.

There are important model variants, which we have, if we adopt some assumptions:

- In one application point only one object is applied,
- The object applied in the given application point is optimal based on some "local" criterion,
- v[a,b] are variables, and their optimal values have to be found based on the "global" criterion,
- Objects can be exchanged only on a one-to-one basis,
- Objects are ordered, and an object with a bigger number can replace an object with a smaller number, but not vice versa,
- Objects can be distinguished by the value of one parameter only,
- Points of application can be distinguished by the value of only one parameter,
- Functions of the cost and benefit are linear in all parameters.

Here we present a set of simple special models, which can be sufficient as they are, or can be used on initial stages of optimization.

Note that it is possible to formulate these models without the concept of points of application at all - in terms of exchanging one object with the other and coefficients of exchangeability. However, it is difficult to move from this type of model to more general one; hence, we opt for general description.

1. The set of existing objects may be used in conditions different from ours, and we want to extend its area of use (to a different market, for example). We do not need to vary the objects and their parameters, but we have to find appropriate values of the volumes.

If we extend our production, we use variables-increments (additional volumes), however costs and benefits are proportional to (original) volumes.

Only the market capacity limits our expansion and our problem is to find additional volumes or a ratio of new volumes to old ones.

2. If there is a set of existing objects, only part of which will be in the final solution, then the natural inclination is to look at what happens if we remove one of the objects, or add one to them, or make small changes to their parameters. This gives a general understanding of the possible benefits of optimization and the difficulties of the problem. At the same time, the corresponding models can be made simple. In our system, where switching from model to model is easy, this is a reasonable approach to a problem.

If we have the same set of types of the use, and we do not anticipate the change to the demand, we already know the needed level of the benefit and we minimize the cost.

2.1 In the case where we drop one object from the set we need to adjust volumes - other objects have to cover the absence of that one. The cost changes are usually proportional to volumes changes.

2.2 In the case where we add one object, the new object (partially) pushes out some of the old ones. The volumes of the old objects have to be adjusted, maybe some of them to zero. The cost related to the new object follows the general rules - the cost as a function of the volume can be non-linear, or we have to describe the production conditions in detail according to the different technological processes.

2.3 If we make small changes to the parameters of the objects, we have a situation close to removing one object from the set - an adjustment of volumes for all objects, most likely small. On top of it, we have to optimize the values of the small parameter changes. Small parameter changes lead to changes in all the functions involved, which we can assume are proportional to the changes in parameters. We have to find the coefficients of proportionality. Usually we can do this by taking the derivatives of the appropriate functions.
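Finding a coefficient of proportionality by taking a derivative can be sketched numerically; the cost function below is hypothetical:

```python
# Hypothetical cost as a function of one object parameter p; the
# coefficient of proportionality for small changes is the derivative du/dp.
def cost(p):
    return 100.0 + 4.0 * p + 0.01 * p * p

def coefficient(f, p, h=1e-6):
    """Central finite-difference approximation of df/dp at p."""
    return (f(p + h) - f(p - h)) / (2.0 * h)

k = coefficient(cost, 10.0)   # exact derivative is 4.0 + 0.02*10 = 4.2
delta_cost = k * 0.1          # predicted cost change for a small dp = 0.1
print(k)                      # ~4.2
print(delta_cost)             # ~0.42
```

When the function is given analytically, the derivative can of course be taken directly; the finite difference is useful when the cost is available only as a computation.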

Note that usually the resulting problem is a problem of linear optimization. Its solution has parameter values at their bounds.

Some bounds reflect the essence of the problem. Others are set to reflect the fact that this model is a kind of approximation. If the optimal value is limited by this kind of bound, further analysis is needed.

3. So far, models based on the Set of Existing Objects brought us one specific way of description: instead of non-negative volumes, cost and benefit, we use their increments, which can be negative. We have to modify the corresponding constraints, so that a volume change does not lead to a negative volume.

As a result, the change in the cost and the change in the benefit are proportional to the change in the volume. The added object still has to be treated in the ordinary way.

If parameters are changed, the value of each parameter change has to be a variable. We have to add constraints reflecting the restriction that the changes have to be small. All involved functions are linear functions of these increments.

4. We have a special case when points of application can be specified with one parameter: B is a segment.

For example, we have a set of machine tools, which are used in the production of parts of different sizes. For each tool, there is a segment of sizes assigned to be worked with this tool. These segments do not overlap, because the smaller the size, the less effective the use of the tool.

We decide to buy an additional tool, to relieve the pressure on the set of tools and to decrease the cost of parts production. We have a set of available tool variants, and we know into which segment each of them will fit, breaking it in two.

The model is as above, only instead of summation we use integration, and the fixed level of benefit is the fixed quantity of parts of each given size that has to be worked.

In this case, the available variants are usually described with one parameter, and the set A is a segment also. Once we know the additional object, we can calculate all volumes; hence, we have a problem of optimization on the segment.
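A minimal sketch of such optimization on the segment, assuming the total cost as a function of the new object's parameter is unimodal. The cost function here is a made-up stand-in for the full model, which, given the parameter, would compute all volumes and the resulting cost:

```python
# Sketch: one-dimensional minimization on a segment by golden-section
# search. total_cost is a hypothetical stand-in for the full model.
import math

def golden_section_min(f, a, b, tol=1e-6):
    # shrink [a, b], keeping two interior probe points at golden ratios
    inv_phi = (math.sqrt(5.0) - 1.0) / 2.0
    x1 = b - inv_phi * (b - a)
    x2 = a + inv_phi * (b - a)
    f1, f2 = f(x1), f(x2)
    while b - a > tol:
        if f1 < f2:
            # minimum lies in [a, x2]; reuse x1 as the new right probe
            b, x2, f2 = x2, x1, f1
            x1 = b - inv_phi * (b - a)
            f1 = f(x1)
        else:
            # minimum lies in [x1, b]; reuse x2 as the new left probe
            a, x1, f1 = x1, x2, f2
            x2 = a + inv_phi * (b - a)
            f2 = f(x2)
    return (a + b) / 2.0

def total_cost(p):
    # hypothetical unimodal cost over the segment of tool parameters
    return (p - 0.37) ** 2 + 1.0

best = golden_section_min(total_cost, 0.0, 1.0)
```

Each probe evaluates the full model once, so the number of model evaluations grows only logarithmically with the required precision on the segment.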

Optimization Models in Finance

Here we present a few examples of Optimization Models in Finance. The models are simple and general, with a clear interpretation of the goal and constraints. They can be used as a starting point in optimization, either as a base for a model tuned to particular circumstances, or as a tool to get an initial solution, which then has to be adjusted to reflect factors not taken into consideration in the simple model.

Fixed Income Portfolio

Funding

We have specific cash needs according to a given schedule. These can be project needs, college payments, a retirement supplement, etc. We have to come up with the money up front, but we can invest the money in a bond portfolio and use its cash flow to cover the known cash outflow.

We have a set of dates d when the cash outflow is planned and when the initial investment is made.

There is an accumulated amount of cash produced by bond a up to the moment d. This accumulated cash includes the coupons and the payment at maturity.

At the moment of the initial investment, we incur a cost, which is determined by the bond prices and the quantities of bonds bought.

We calculate the accumulated cash outflow at the moment d, and we demand that the difference between the accumulated cash produced by the portfolio and the accumulated cash outflow not be negative.

Now we minimize the cost of the portfolio.

This is a linear optimization problem, however one with integer variables.

We can add plenty of constraints on bond quantities and still stay within a linear optimization problem.
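A minimal sketch of the funding model with made-up bonds and cash flows. For realistic problem sizes one would hand this to an integer linear programming solver; here a brute-force search over small integer quantities stands in for the solver, to keep the example self-contained:

```python
# Sketch: minimum-cost bond portfolio whose accumulated cash covers the
# accumulated outflow at every date (all data hypothetical).
from itertools import product

dates = [1, 2, 3]                    # cash outflow dates
outflow = [100.0, 100.0, 100.0]      # outflow at each date

# each bond: (price, cash produced at each date: coupons + maturity)
bonds = [
    (95.0, [105.0, 0.0, 0.0]),       # matures at date 1
    (90.0, [5.0, 105.0, 0.0]),       # matures at date 2
    (85.0, [5.0, 5.0, 105.0]),       # matures at date 3
]

def feasible(qty):
    cash, need = 0.0, 0.0
    for t in range(len(dates)):
        cash += sum(q * b[1][t] for q, b in zip(qty, bonds))
        need += outflow[t]
        if cash < need:              # accumulated difference must be >= 0
            return False
    return True

best_qty, best_cost = None, float("inf")
for qty in product(range(4), repeat=len(bonds)):  # integer quantities
    cost = sum(q * b[0] for q, b in zip(qty, bonds))
    if cost < best_cost and feasible(qty):
        best_qty, best_cost = qty, cost
```

Extra constraints on bond quantities (issuer limits, lot sizes, and so on) would appear as additional tests inside `feasible`, leaving the structure of the problem unchanged.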

We might have a lot of "float" - an excess of cash between moments of inflow and outflow. This cash can be invested at a fixed interest rate with the help of Guaranteed Investment Contracts (GICs). We can expand our model to include GICs - we know the periods of time where we might use them: the smallest periods between cash flow dates. For each of these periods we define a variable - the amount of cash invested this way. The beginning of the investment is analogous to a cash outflow, and the end to an equivalent cash inflow plus interest. If we add these elements to the model, we still have a linear model.

If the original set of fixed income securities which we can use is small, we can have a situation where we cannot use a security with a good yield, because it matures a day later than the date of a big cash outflow. We can solve this problem with a line of credit at a fixed interest rate. Similarly to the case of GICs, for each period between two consecutive cash flow dates we define a variable - the amount of money borrowed. At the beginning of the period we have a cash inflow, and at the end the equivalent cash outflow with interest. If we add these elements to the model, we still have a linear model.

If we already have a bond portfolio in the account, but our cash outflow has increased and we need to add funding, we have a similar optimization problem. We need to add the obvious constraints: the amounts of bonds are not smaller than in the original portfolio. When we calculate the cost of the new bonds, we take as the argument of the cost function the difference between the amount of bonds in the adjusted portfolio and in the original one.

Account Adjustment

A business has an estimate of cash inflow and cash outflow. The excess cash is invested in (short-term) fixed income securities. If we take a worst-case scenario of the cash flow, we can be sure that we do not need to sell securities - we hold each to maturity.

Each time new cash comes in, we have to adjust our portfolio. We might have some adjustment of the cash flow schedule, we have new prices of securities, and we might change our investment horizon. Each time, we face an optimization problem.

We build the model as above.

However, we need a set of additional special constraints.

For each moment d we define a constraint: the accumulated cash produced by the portfolio, plus the accumulated cash inflow, minus the accumulated cash outflow, should not be negative.
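This constraint can be sketched as a running-balance check over the dates; all figures below are made up for illustration:

```python
# Sketch: checking the account-adjustment constraint at every date.
# Accumulated portfolio cash + accumulated inflow - accumulated outflow
# must never be negative (hypothetical per-date data).

def never_short(portfolio_cash, inflow, outflow):
    balance = 0.0
    for p, i, o in zip(portfolio_cash, inflow, outflow):
        balance += p + i - o          # running accumulated balance
        if balance < 0.0:
            return False
    return True

# per-date figures for three dates (made up)
ok  = never_short([50.0, 60.0, 0.0], [20.0, 0.0, 30.0], [40.0, 70.0, 10.0])
bad = never_short([50.0,  0.0, 0.0], [20.0, 0.0, 30.0], [40.0, 70.0, 10.0])
```

In the optimization model proper, this check becomes one linear inequality per date d over the portfolio variables, rather than a test on fixed numbers.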

Because this is not a case of funding but a case of investment, we need an appropriate goal. For example, we can maximize the yield of the added part of the portfolio - the part above what is needed to cover the cash outflow. This makes the problem non-linear.

There are a few directions, in which the model can be expanded.

We can define a few scenarios of cash flow and demand that under none of these scenarios do we incur a cash shortfall. This is a sophisticated way of building a "worst-case scenario".

Also, we can add borrowing of funds, to cover some short-term cash shortfalls and to add some flexibility to the portfolio, as was done for the funding.

In the model above, we did not take into consideration cash flow from future investments, mostly because we do not know them and we wanted to stay on the safe side. However, we can assume that all future investments are made in some security paying a minimum coupon, or in some savings account paying a minimum interest rate. This can be incorporated into the model as well.

Reducing Risk with Portfolio of Securities

Securities are placed in a portfolio and traded as part of the portfolio because there are benefits of managing them together which cannot be realized if they are managed separately. Mostly the benefits are related to a statistical effect - the uncertainty of the future states of the portfolio as a whole can be made smaller without additional knowledge of the future states of the securities included in the portfolio. Hence a portfolio is a tool of risk management. There are additional benefits also - smaller trading costs, administrative costs, etc.

The first step of risk management is a classification of securities according to their risk level, and limiting the risk level of the portfolio accordingly. This is the technique of asset allocation.

The second step is a quantitative description of the volatility of the securities' states and the placing of restrictions on the characteristics of the volatility of the portfolio. This is already a model, which can be a part of the optimization model.

On top of that, there are forecasts of the trends of change of the securities' characteristics. We need these forecasts to choose between securities of a similar risk level. However, forecasts are made with different degrees of precision, and we have to take this into consideration as well.

In addition, we can consider forecasts of the cash inflow and outflow (for example, in a mutual fund - customers' investments and redemptions, trading fees, taxes), or the use of options and other derivative products.

We offer here a general approach to building financial trading models of a particular class based on optimization. Models built this way can be incorporated into our system, and we can benefit from all the functionality we have in this system.

We need to describe a few concepts first.

We base all trading on possessions. We have different types of possessions, which we denote C, and quantities v[c], where c is from C.

Trading is done with the help of trading operations. Here are some examples of trading operations: we set an order to sell some stock at a given price for a limited period of time; we bought a bond with the intent to keep it to maturity, and we route the interest payments to a money market fund; we "marked" a hundred shares of a particular company's stock and wrote a call - a promise to sell these shares at a specified price for a limited period of time.

Each trading operation is a set of rules, which determines our behavior when certain events happen. A trading operation "takes hold" of some of our possessions. There are a cost and benefits associated with each trading operation; these can be computed with the model of the trading operation. The model of the trading operation also shows which possessions have to be "put on hold" while the trading operation is in force, and how many of them. We denote the set of possible trading operations B, and the amount of operation b from B in force v[b].

Many trading operations are chains of elementary trading operations. Therefore, we have to be able to present a trading operation b from B as such a chain of elementary trading operations.

If we introduce the holding of a possession of a particular type as a trivial trading operation, we can say that our trading portfolio is a set of trading operations. There is a model associated with each trading operation, which allows us to check that this portfolio is supported by our possessions.

When the time comes to make a trading decision, we check the current state of our possessions and trading portfolio. Some trading operations were executed since we made our previous decision, and our possessions have changed. Some trading operations were executed only partially; our possessions have changed, and the trading operations have changed - they transformed themselves into other operations. Hence, we have a new trading portfolio.

We want to transform the existing trading portfolio into another one. This is done with transformations of operations.

Each transformation of operations is a set of rules, which shows how one set of operations can be transformed into another set of operations. There are a cost and benefits associated with each transformation of operations; these can be computed with the model of the transformation of operations. The model of the transformation of operations also shows which operations participate in the transformation and how many of them. We denote the set of possible transformations of operations A.

Many transformations of operations are chains of elementary transformations of operations. Therefore, we have to be able to present a transformation of operations a from A as such a chain of elementary transformations of operations.

We initiate a set of transformations of operations S, each transformation a with its quantity v[a], and we want it to be optimal.

With the help of the models of transformations of operations, we can compute the overall cost and benefits associated with the transformation S. Also, we compute the cost and benefits associated with the transformed trading portfolio. These data are used to form a goal function and constraints.

Now we need "balance equations" - we have to be sure that the current trading portfolio allows the transformation. The current trading portfolio has to have enough trading operations to serve as the source of the combined transformation S in the quantities v[a], a from S, and we have to have enough possessions to support the trading portfolio which we get as a result of the transformation.
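The balance check can be sketched in code. Everything here is hypothetical - the operation names, the holds table, and the quantities are illustrations of the structure, not a real product catalog:

```python
# Sketch of the "balance equations": the current portfolio must supply
# enough operations to feed the combined transformation S, and the
# possessions must support the resulting portfolio (all data made up).
from collections import Counter

# transformation type -> (operations consumed, operations produced) per unit
transforms = {
    "roll_bond": ({"hold_bond_A": 1}, {"hold_bond_B": 1}),
    "write_call": ({"hold_stock": 100}, {"covered_call": 1}),
}

# possessions "put on hold" per unit of each operation
holds = {"hold_bond_A": {"bond_A": 1}, "hold_bond_B": {"bond_B": 1},
         "hold_stock": {"stock": 1}, "covered_call": {"stock": 100}}

def apply_transforms(portfolio, possessions, s):
    # s: transformation type -> quantity v[a]; returns new portfolio or None
    new_port = Counter(portfolio)
    for name, qty in s.items():
        consumed, produced = transforms[name]
        for op, n in consumed.items():
            new_port[op] -= n * qty
            if new_port[op] < 0:       # not enough source operations
                return None
        for op, n in produced.items():
            new_port[op] += n * qty
    for c in possessions:              # possessions must support the holds
        needed = sum(holds[op].get(c, 0) * q for op, q in new_port.items())
        if needed > possessions[c]:
            return None
    return dict(new_port)

port = {"hold_bond_A": 2, "hold_stock": 200}
poss = {"bond_A": 2, "bond_B": 2, "stock": 200}
new_port = apply_transforms(port, poss, {"roll_bond": 1, "write_call": 1})
```

In the optimization proper, these checks become linear constraints on the quantities v[a]; the sketch only shows how the data of the models fit together.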

Now we vary S and the associated quantities, and find the best combined transformation among the possible variants of transformation.

We can easily take into consideration the tax consequences of the portfolio transformation - we need to add some accumulated values, such as this year's profit and loss, last year's loss, etc.

The main strength of our approach is flexibility, and this is important in an environment where new financial products and new trading ideas come along very often.

There are a few recommendations on data organization. The most natural way is to organize the data as a software object database. It is natural to present a possession type as a software object. Another software object is a trading operation type together with the model of the trading operation. The next one is a transformation of operations type together with the model of the transformation of operations. Software objects which are chains of other software objects can be presented with lists.

Note that, computationally, this can be a difficult problem. Hence, we build simplified variants of this model and special optimization procedures.

Note also that a commodity trading model can be built following the same approach.

The problem of investment optimization is a problem of optimization of an investment portfolio: we have some investment portfolio (including cash in a bank), and we make decisions about what, when, and in which quantities to add to and remove from the portfolio, keeping the current balance intact - portfolio reorganization. Simple methods optimize only one step of such a reorganization and take into consideration only that one step. More complicated methods take into consideration possible future steps as well.

Technically, investment models are simpler than trading models - the number of types of possessions used for investment is smaller, operations are simple (usually buy and sell), and there are not so many transformations, because there are not so many complex operations.

Conceptually, they are more complicated; hence their application requires considerable skill and experience.

Trading operations are short-term. Money is well adapted to comparing profits and losses which occur at moments separated by a small period of time; hence money works well when we estimate the effectiveness of a trading operation, and when we compare the effectiveness of different trading operations.

Investment operations are usually "open-ended" - we do not set a time for the termination of the investment. Therefore, investment operations and the investment portfolio as a whole need long-term estimates. But when the period of time is big, money does not work well. We know this, and we try to correct estimates with the concept of inflation. Still, in many cases this is not sufficient.

This seems to be the nature of money. Money "reflects" the current balance in the economy; hence it should work well over short periods of time. Over long periods, we need other concepts and measurements, such as the concept of wealth. However, the measurement of wealth is not as well developed as we would like for our models.

This is a serious problem with investment optimization models. It affects the way we construct the goal function and the way we discount future earnings (and losses) when we build a single time-averaged characteristic of the effectiveness of the portfolio.

The other problem is that we have to use a long-term forecast of the future standing of actual and possible investments (the stock price, for example). The mistakes of these forecasts are huge. Partially this is compensated by the discounting of future values (in the goal function), but the resulting mistakes are still big.

We insist that we cannot base our investment decisions on the average values of the forecast; we must take the mistakes into consideration, because otherwise our decisions are no different from gambling.

All this makes the use of investment optimization models not so simple. Each time we introduce such a model, we need a "reality check". We have to analyze the possible results of the model and compare them with the known recommendations of "investment gurus". The general recommendations of Warren Buffett we present here as a model which we can use for this purpose.

Warren Buffett's Model

Warren Buffett noted that we have a discrepancy in description: we want to increase wealth, yet we describe investments in terms of prices and the tendencies of their change. He decided to describe the investments themselves in terms of wealth.

When such a description is possible, the investment decision is relatively easy - move your investment from an area of slow wealth accumulation to an area of fast accumulation, when the prices are favorable. This does not solve the problem of the imprecise measurement of wealth - we still have to make decisions about moving into "cash" investment and back - but it makes the problem more manageable.

However, this method is applicable only to money one "does not need" - when one can wait indefinitely before withdrawing money from the portfolio and can wait through any market downturn. (Peter Lynch likes to emphasize this investment condition.)

A portfolio reorganization can be triggered only when a pair of investments shows a clear need for an exchange. Because we measure wealth so imprecisely, the forecast has to show a big difference in the future monetary value - a difference which cannot be explained by mistakes of the forecast, and especially by mistakes of measuring future wealth with future money.

Note that if we work carefully with the mistakes of the forecast - use "sure" numbers (bounds) and use a deep enough discount of future values - we will arrive at a similar conclusion in our model.
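The conservative decision rule can be sketched as follows, with made-up forecast numbers. The forecast is reduced to a "sure" lower bound before discounting, and an exchange is triggered only when the gap exceeds a large threshold:

```python
# Sketch: trigger an exchange between two investments only when the gap
# in discounted lower-bound ("sure") forecasts is large. All numbers are
# hypothetical illustrations.

def discounted_lower_bound(forecasts, errors, rate):
    # forecasts: expected value per year; errors: forecast mistake bounds
    total = 0.0
    for t, (f, e) in enumerate(zip(forecasts, errors), start=1):
        total += (f - e) / (1.0 + rate) ** t   # "sure" number, discounted
    return total

def should_exchange(old, new, threshold):
    return new - old > threshold               # demand a clear difference

old_inv = discounted_lower_bound([110.0], [30.0], 0.10)
new_inv = discounted_lower_bound([150.0], [30.0], 0.10)
swap = should_exchange(old_inv, new_inv, threshold=50.0)
```

With large error bounds and a deep discount, even a sizable difference in expected value does not trigger an exchange - which is exactly the conservative behavior described above.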

Investment for Retirement

If we invest for retirement, we expect to supplement our pension and Social Security benefits by selling parts of the investment portfolio. This goal contradicts the investment rule we cited above - invest only money you will not rely on. The obvious solution is a fixed income portfolio, and we already know how to optimize it against a known cash outflow. However, it is possible to modify our model so that we can still use investment in the stock market.

The cause of the rule limiting our investing ability is the possibility of a prolonged market downturn, when we should not sell securities but should wait until the market returns to the level projected by the market trend. Only if we have the ability to survive this period without touching the portfolio can we work with stock market investment. This can be done with a kind of self-insurance - a special bond portfolio, which we draw on in the case of a market downturn, and we move funds from the stock portfolio back into this bond portfolio as soon as the market recovers. Again we have an insurance bond portfolio of a specified size, which we maintain until the next market downturn.

This is a kind of "asset allocation" approach, with which we have expanded the area of applicability of the stock market investment optimization model.

The biggest advantage of our approach is its flexibility - we can describe in the model the special conditions which affect investment decisions. We will illustrate this with mutual funds.

Mutual funds, in addition to the goal of accumulating wealth, have other goals - they have to accommodate fund withdrawal requests, and they have to meet certain performance criteria to be competitive in the eyes of their customers. A mutual fund is a special business, and it has to work according to the rules of this business. Optimization models can be very helpful in this case, especially if there are

- forecasts of the fund inflow and outflow,
- delays between the moment of the investment or withdrawal request and the actual operation (which is equivalent to an option).

All this can be easily incorporated into the model.

The additional goals and constraints which a mutual fund has to satisfy do not come cheap. These additional restrictions inevitably impede the main goal - wealth accumulation. In some cases the fund has access to a type of investment the individual investor does not have - for example, index investment (investment which mimics some market index), which needs a lot of money, or overseas investment, which needs special expertise and special investment vehicles. In these cases the fund can accumulate wealth fast enough in spite of its impediments. In other cases, a specially tuned and managed individual investment portfolio should be more effective than an investment in mutual funds.

Investment Optimization Based on Trading Model

A prudent approach to the optimization of the trading portfolio should not trigger portfolio reorganization often - the mistakes of the forecast are big, and there are trading fees and taxes which have to be justified before such a reorganization takes place. We will use this conservative nature of the trading portfolio optimization model to build an investment optimization method.

We will convert an "open-ended" investment operation into a "closed-ended" trading operation. We set a schedule for the computation of the trading portfolio - decision moments. Each open-ended holding (one without a set date of liquidation) we declare to be liquidated at the next decision moment. When the next decision moment comes, most likely we will not liquidate the holding but "roll it over".

This method does not require complicated discounting in time of future profits and losses - we have only short-term operations on each step. However, it does not allow utilization of a specific form of the forecast, where we can anticipate the "final" value of the stock but do not know the dynamics of the stock price on its way to this value. In other words, we lose the ability to make long-term decisions.

The concept of wealth gives us a chance to illustrate the problem of the relation between the parameters which we use and the concept which we want to describe with those parameters.

Money values (prices, interest rates, etc.) exist to describe the current economic state - the current exchange value of goods and services, and their balances. They are well suited to solving this type of accounting problem.

The fixed income models which we presented here can be viewed as fairly precise models. We have a balance to maintain, and we find the currently least expensive portfolio which allows this.

However, as soon as we want to maximize wealth, we get into a situation similar to the one where we have to optimize technical parameters but need to use an economic goal function. We do not have parameters that describe wealth adequately.

Let us start with the concept. We loosely define wealth as an ability to get goods and services. A wealthy individual can lead a modest life, while another individual, who has a lot of illegal money which he cannot spend, cannot be called wealthy. If the average citizen of one country has to spend less effort to get goods and services than a citizen of another country, and has access to a greater variety of them, we assume that the first country is wealthier than the second.

Money that is invested and cannot be touched does not add to current wealth; investment is intended to add to future wealth. The usual way to increase one's wealth is investment: one temporarily gives up current wealth (limits one's ability to acquire goods and services) for future wealth.

With time, the needs and desires of the individual change, and the market's ability to satisfy them changes as well (which is reflected in changing prices); hence the wealth of the individual changes even if the money is not spent.

We do not have a good set of parameters to describe "wealth". Hence we should expect problems in describing our preferences and in building models which are supposed to help us accumulate wealth. It is only natural that we have a lot of specially designed systems which help accumulate wealth without a formal description of the problem, or with a very limited one.

In investment portfolio optimization, we run into problems when we describe the goal, when we estimate the price of an investment, and when we make investment decisions based on parameters which describe wealth only indirectly and imprecisely.

Financial trading is another way to accumulate wealth. Trading, financial trading included, adds to the availability of goods and services: it delivers goods and services to market participants in the proper form and at the proper time.

Trading models do not rely on the concept of wealth to formulate a goal function and constraints, but they still rely on this concept when they have to evaluate the prices of investments - the initial data of the optimization - because stock prices reflect the wealth-accumulating ability of a business.