LOPEZ GARCIA DE LOMANA, ADRIAN
2005-12-20 16:08:50 UTC
Hi all,
I'm testing the routine fmin_bfgs from scipy.optimize and I have some problems. I do not understand why it gives me this warning.
Warning: Desired error not necessarily achieved due to precision loss <----------------------------
Current function value: 152.114542
Iterations: 5
Function evaluations: 23
Gradient evaluations: 13
Could it be because of the machine (quite new, dual Intel Xeon 3.0 GHz)? Python is version 2.4.1.
The code I'm using is at:
http://diana.imim.es/~alopez/questions/code.py
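In case that link goes stale, here is a minimal sketch of the kind of call I'm making (the quadratic objective f, its gradient fprime, and the starting point x0 are just placeholders, not my real model, which is in code.py):

import numpy
from scipy.optimize import fmin_bfgs

# Placeholder objective and analytic gradient; my real ones are in code.py.
def f(x):
    return numpy.sum((x - 3.0) ** 2)

def fprime(x):
    return 2.0 * (x - 3.0)

x0 = numpy.zeros(5)

# With full_output=1 the routine also returns warnflag:
# 1 = maximum number of iterations exceeded,
# 2 = precision loss (the line search could not make further progress).
xopt, fopt, gopt, Bopt, fcalls, gcalls, warnflag = fmin_bfgs(
    f, x0, fprime=fprime, gtol=1e-5, full_output=1)
print warnflag, numpy.sqrt(numpy.dot(gopt, gopt))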
I also plotted a graph of the final function value against the norm of the gradient:
[graph: final function value vs. gradient norm]
Most of the time it stops at very low gradient norm values, but other times it stops while the gradient norm AND the function value are higher than the stopping conditions. Red crosses come from optimizations where the gradient was supplied analytically, and green ones from the same optimizations where the gradient was calculated numerically.
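One thing that may be worth checking (as far as I understand, an inaccurate analytic gradient is a common cause of this kind of precision-loss failure) is to compare the analytic gradient against a finite-difference approximation with scipy.optimize.check_grad; f, fprime and x0 below are the same placeholders as in the sketch above:

from scipy.optimize import check_grad

# Returns the 2-norm of the difference between the analytic gradient
# and a finite-difference approximation at x0; it should be close to 0.
print 'gradient error at x0:', check_grad(f, fprime, x0)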
Has anyone faced the same problem? Is there a problem with the fmin_bfgs routine? Why is it stopping while the gradient norm is bigger than gtol? What does it mean that precision was lost?
Any help or comments will be welcome,
thanks,
Adrián.