Q: Using ASP to solve a non-linear problem, I got excellent results and an "optimum" solution. Decision variables range from 0-700, and one set of constraints is in the 60,000 range. However, when the range of those constraints is increased by a factor of ten, ASP comes up with an answer that is clearly not optimum. My results are graphed in such a manner that it is easy to see a less-than-optimum solution. Furthermore, Solver states it has a "converged" solution rather than an optimum solution. I am using scaling, but the solution is still clearly worse than the original case. When I reduced the constraint values back to the original levels, the model ran perfectly and gave consistently accurate results. I cannot understand how just changing the constraints by a factor of 10 can have such a huge impact on the results.

Help indicates changing the "converge" value might help. I tried this but there was no change. Are there some recommended "options" values which can help?

A: I'm assuming you are using the LSGRG Nonlinear Solver, since you're seeing "Solver converged to a solution" and you're adjusting the Convergence tolerance. (One option would be to try the LSSQP Solver or KNITRO Solver, but let's stick with the built-in Solver engines for the moment.) There are two topics to consider here - the way Automatic Scaling works inside the Solver engine, and the way scaling can affect objective and constraint values in your worksheet formulas, before the Solver engine ever sees them.

When you set the "Use Automatic Scaling" option to True, the LSGRG Solver uses "typical value scaling". Essentially, it takes the values of the objective and constraint functions at the initial values of the decision variables (what you have on the spreadsheet when the optimization starts), and on future iterations internally divides the objective and constraint values it works with by these initial values. If the initial values are very different in magnitude from the values on subsequent iterations, or from the final values as the Solver approaches the optimal solution, this scaling will lose its effectiveness, and the Kuhn-Tucker local optimality conditions (which would give you the message "Solver found a solution") won't be satisfied. So the take-away is to use initial values for the decision variables that make the objective and constraints as close in magnitude as you can to the expected final solution.
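To make the idea concrete, here is a small sketch of "typical value scaling" in Python. The function names and the clamping floor are my own illustration, not the actual LSGRG internals; the point is only to show how scale factors chosen at the starting point stop being "typical" when later iterates drift in magnitude:

```python
# Hypothetical sketch of "typical value scaling" (not actual LSGRG code):
# each objective/constraint value is divided by its magnitude at the
# starting point, so all scaled values begin near 1.0.

def make_scaler(initial_values, floor=1.0):
    # Use |initial value| as the typical magnitude; clamp tiny values
    # with `floor` so we never divide by something near zero.
    scales = [max(abs(v), floor) for v in initial_values]
    def scale(values):
        return [v / s for v, s in zip(values, scales)]
    return scale

# Objective and constraints evaluated at the starting point,
# e.g. one constraint near 60,000 and one near 350:
initial = [60000.0, 350.0]
scale = make_scaler(initial)

print(scale([60000.0, 350.0]))   # [1.0, 1.0] at the start
# If a later iterate drifts to 600,000, its scaled value is 10.0 --
# the scaling chosen at the start is no longer "typical":
print(scale([600000.0, 350.0]))  # [10.0, 1.0]
```

This is why starting values whose magnitudes resemble the expected solution keep the internally scaled problem well-conditioned throughout the run.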

This all happens inside the Solver engine. But it is possible for scaling to affect your Excel model itself. To take an extreme example, suppose A1 contains =1E+10 and A2 contains =1E-10. Then A3: =A1+A2 will be 1E+10, and A4: =A3=A1 will be TRUE -- that is, the value of A2 will "disappear". This is called a "loss of significance" in numerical computation. This example spans 20 orders of magnitude, while IEEE floating-point numbers, with their 52-bit mantissa, can represent only about 15 decimal digits. If you have a large model and it is difficult to find the cells or formulas where there can be a loss of significance, you can sometimes find them (not always -- this is an approximate test) by selecting the Scaling Report (Reports - Optimization Reports - Scaling) when the LSGRG Solver has stopped.
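The same loss of significance can be reproduced outside Excel, since Excel and most other software use the same IEEE double-precision arithmetic. A minimal sketch in Python:

```python
# IEEE 754 double precision carries roughly 15-16 significant decimal
# digits, so adding 1E-10 to 1E+10 spans ~20 orders of magnitude and
# the small term vanishes entirely -- the same behavior as the Excel
# cells A1-A4 in the example above.
a1 = 1e10
a2 = 1e-10
a3 = a1 + a2           # 1e-10 is far below the precision of 1e10
print(a3 == a1)        # True -- the value of a2 has "disappeared"

# By contrast, a term within ~15 digits of the larger value survives:
b = 1e10 + 1e-4
print(b == 1e10)       # False -- the sum is distinguishable from 1e10
```

If a constraint formula in your model sums terms that differ by many orders of magnitude in this way, rescaling the model (for example, expressing a 600,000-range constraint in units of thousands) can restore the lost digits.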

I hope this is helpful.

Daniel Fylstra daniel@solver.com

Frontline Systems Inc. 775-831-0300 x120
