Discussion:
[SciPy-user] fmin_bfgs
LOPEZ GARCIA DE LOMANA, ADRIAN
2005-12-20 16:08:50 UTC
Hi all,

I'm testing the routine fmin_bfgs from scipy.optimize and I have some problems. I do not understand why it gives me this warning.

Warning: Desired error not necessarily achieved due to precision loss <----------------------------
Current function value: 152.114542
Iterations: 5
Function evaluations: 23
Gradient evaluations: 13


Could this be because of the machine (quite new, a dual Intel Xeon 3.0)? Python is version 2.4.1.

The code I'm using is at:

http://diana.imim.es/~alopez/questions/code.py

I have also plotted a graph of the final function value against the norm of the gradient, shown here:

[Plot: final function value vs. gradient norm at termination]

Most of the time the optimization stops at very low gradient-norm values, but other times it stops while the gradient norm AND the function value are both higher than the stopping conditions. Red crosses come from optimizations given the gradient analytically, and green ones from the same optimizations where the gradient has been calculated numerically.

Does anyone face the same problem? Is there any problem with the fmin_bfgs routine? Why does it stop while the gradient norm is bigger than gtol? What does it mean that precision was lost?

Any help or commentary will be welcomed,

thanks,

Adrián.
Travis Oliphant
2005-12-21 06:53:06 UTC
Post by LOPEZ GARCIA DE LOMANA, ADRIAN
Hi all,
I'm testing the routine fmin_bfgs from scipy.optimize and I have some problems. I do not understand why it gives me this warning.
Warning: Desired error not necessarily achieved due to precision loss <----------------------------
Current function value: 152.114542
Iterations: 5
Function evaluations: 23
Gradient evaluations: 13
This is a potential problem with the quasi-Newton minimizers. If you
look in the code at where this warning shows up, you will see that it
happens because of a divide-by-zero problem. In practice it means that
either the gradient or the function value is not changing from one
iteration to the next. I'm not sure what to do in this situation; it
theoretically shouldn't happen, so it is perhaps due to the fact that
the function is not able to change in sufficiently small ways (hence the
precision-loss warning). One approach is to reset the Hessian
approximation and move on (but I did not like that result on some problems
I was working with). Another approach is to set rhok to some large
number and move on. You could edit optimize.py to do that.
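
For concreteness, here is a minimal sketch of the kind of guard one could
patch into the rhok computation. The threshold and fallback value are my
own assumptions, not the actual optimize.py code, and it is written against
current numpy (the optimize.py quoted later in this thread uses Num.dot):

import numpy as np

def safe_rhok(yk, sk, fallback=1000.0, eps=1e-10):
    # rhok = 1 / (yk . sk) in the BFGS update; when the denominator is
    # effectively zero (the case that triggers the precision-loss warning),
    # fall back to a large fixed value instead of dividing by ~0.
    denom = np.dot(yk, sk)
    if abs(denom) < eps:
        return fallback
    return 1.0 / denom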

Right now, the code is not guessing what to do but stopping and letting
you know that the quasi-Newton method has failed on your function. I
don't think anything is wrong with fmin_bfgs per se (except that it could
deal with this situation more gracefully).
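
If you would rather detect this outcome programmatically than read the
printed warning, fmin_bfgs reports a warnflag when called with
full_output=1 (a value of 2 corresponds to this precision-loss stop). A
minimal sketch on a toy quadratic, not your code.py:

from scipy.optimize import fmin_bfgs

def f(x):
    return (x[0] - 3.0)**2 + (x[1] + 1.0)**2

out = fmin_bfgs(f, [0.0, 0.0], full_output=1, disp=0)
xopt, fopt = out[0], out[1]
warnflag = out[-1]   # last element of the tuple when retall is not requested
if warnflag == 2:
    print("stopped due to precision loss; gradient norm may still exceed gtol")
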
Post by LOPEZ GARCIA DE LOMANA, ADRIAN
Does anyone face the same problem? Is there any problem with the fmin_bfgs routine? Why does it stop while the gradient norm is bigger than gtol? What does it mean that precision was lost?
The stoppage is occurring not because you've found a minimum but because
the method has essentially failed. You could try again from a different
starting location or try a different optimizer. There is no "always
best" optimizer that I know of; hence the family of optimizers that
exists in scipy. Try them and see if they work for you.
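
As a sketch of that advice (a toy Rosenbrock-style function standing in
for the real objective, and three of the scipy.optimize minimizers; this
is not the code from code.py):

from scipy.optimize import fmin_bfgs, fmin_cg, fmin_powell

def rosen(x):
    # classic banana-shaped test function with its minimum at (1, 1)
    return (1.0 - x[0])**2 + 100.0 * (x[1] - x[0]**2)**2

x0 = [-1.2, 1.0]
for solver in (fmin_bfgs, fmin_cg, fmin_powell):
    xopt = solver(rosen, x0, disp=0)
    print("%s -> %s, f = %g" % (solver.__name__, xopt, rosen(xopt)))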

Best,

-Travis
LOPEZ GARCIA DE LOMANA, ADRIAN
2005-12-21 10:07:39 UTC
Thanks for your answer.

I've found the variable "rhok" in optimize.py, and I would like to run it again with a larger number set there, but what counts as large?

rhok = 1 / Num.dot(yk,sk) ------> rhok = 0.1?, rhok = 1000000.0?

Thanks,

Adrián.

Steve Schmerler
2005-12-21 12:10:38 UTC
Hi

I think this is a general Python topic rather than a purely scipy one:

#############################################################################################

In [26]: from scipy import arange

In [27]: ar = arange(0, 0.5, 0.05); a = ar[4]

In [28]: a
Out[28]: 0.20000000000000001

In [29]: b = 0.2

In [30]: b
Out[30]: 0.20000000000000001

In [31]: a == b
Out[31]: True

In [32]: c = 1.0 - 0.8

In [33]: c
Out[33]: 0.19999999999999996

In [34]: a == c
Out[34]: False

In [35]: a - b
Out[35]: 0.0

In [36]: a - c
Out[36]: 5.5511151231257827e-17

#############################################################################################

If a and c are "equal" with respect to numerical accuracy (a - c < 1e-16),
aren't they supposed to be considered equal when comparing them?

cheers,
steve
--
Should array indices start at 0 or 1? My compromise of 0.5 was rejected
without, I thought, proper consideration.
-- Stan Kelly-Bootle
Robert Kern
2005-12-21 12:29:36 UTC
Post by Steve Schmerler
If a and c are "equal" with respect to numerical accuracy (a - c < 1e-16),
aren't they supposed to be considered equal when comparing them?
No, for floating point numbers there are several possible ways to define
"equality". Python uses "the bit patterns are equal". You also probably want to
use a relative tolerance (roughly, (a-c)/a < eps) rather than the absolute
tolerance that you describe. Use scipy.allclose() for full control.
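
For example, continuing the session above (allclose is imported from scipy
here, matching the imports used earlier in the thread; in current code the
same function lives in numpy):

from scipy import allclose, arange

a = arange(0, 0.5, 0.05)[4]   # 0.20000000000000001
c = 1.0 - 0.8                 # 0.19999999999999996
print(a == c)                 # False: the bit patterns differ
print(allclose(a, c))         # True: equal within the default tolerances
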
--
Robert Kern
***@gmail.com

"In the fields of hell where the grass grows high
Are the graves of dreams allowed to die."
-- Richard Harter
Travis Oliphant
2005-12-25 05:49:52 UTC
Post by LOPEZ GARCIA DE LOMANA, ADRIAN
Thanks for your answer.
I've found the variable "rhok" in optimize.py, and I would like to run it again with a larger number set there, but what counts as large?
rhok = 1 / Num.dot(yk,sk) ------> rhok = 0.1?, rhok = 1000000.0?
There is really no way to tell, but I'd try rhok > 1, maybe rhok=10**3
or rhok=10**6.

-Travis
