Discussion:
[SciPy-user] Problem with scipy.optimize.leastsq: Improper input parameters
Adam Ginsburg
2008-03-08 22:27:36 UTC
I'm not sure if my previous e-mail (below) went out earlier, but I've spent
a while trying to understand what the problem really meant, and I think it's
that I'm trying to use leastsq for something it was not intended for.
However, the Levenberg-Marquardt algorithm should be capable of this task.

Specifically, I'm trying to minimize a function of 5 variables, while
leastsq expects the number of functions m to be at least as large as the
number of variables n. I figured out that my args=() inputs determine the
size of m, which seems like a bad idea to me, but I can deal with it. My new
error, then, is that my 2D array is apparently rejected (traceback below).
Am I wrong in thinking that the function should be able to minimize over a
2D space, or am I just not specifying that properly in my function call?
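For concreteness, the basic calling convention as I understand it (a toy
1D sketch, not my actual code): leastsq wants a function that returns the
m residuals themselves, with m >= n.

import numpy as np
from scipy.optimize import leastsq

# Toy data: y = a*x + b plus noise.
x = np.linspace(0, 1, 50)
y = 2.0 * x + 1.0 + 0.01 * np.random.randn(50)

def residuals(params, x, y):
    a, b = params
    # Return the m = 50 residuals; leastsq squares and sums them itself.
    return y - (a * x + b)

bestfit, ier = leastsq(residuals, [1.0, 1.0], args=(x, y))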

Thanks, and sorry to be flooding the list a little,
Adam

---------------------------------------------------------------------------
<type 'exceptions.ValueError'> Traceback (most recent call last)


<type 'exceptions.ValueError'>: object too deep for desired array
Error in sys.excepthook:
Traceback (most recent call last):
File "/sw/lib/python2.5/site-packages/IPython/iplib.py", line 1714, in
excepthook
self.showtraceback((etype,value,tb),tb_offset=0)
File "/sw/lib/python2.5/site-packages/IPython/iplib.py", line 1514, in
showtraceback
self.InteractiveTB(etype,value,tb,tb_offset=tb_offset)
File "/sw/lib/python2.5/site-packages/IPython/ultraTB.py", line 872, in
__call__
self.debugger()
File "/sw/lib/python2.5/site-packages/IPython/ultraTB.py", line 729, in
debugger
while self.tb.tb_next is not None:
AttributeError: 'NoneType' object has no attribute 'tb_next'

Original exception was:
ValueError: object too deep for desired array
---------------------------------------------------------------------------
<class 'minpack.error'> Traceback (most recent call last)

/Users/adam/classes/probstat/hw7.py in <module>()
52 #a,b,x0,y0,s = fmin(chi2gaussnoglob,x0in,args=[im,RON])
53 #print "Parameters a: %f b: %f x0: %f y0: %f sigma: %f chi2: %f" % (a,b,x0,y0,s,chi2gaussnoglob([a,b,x0,y0,s],im,RON))
---> 54 leastsq(chi2gaussnoglob,x0in,args=(im),Dfun=None)
55 #leastsq(chi2gauss,x0in,maxfev=1000)
56

/sw/lib/python2.5/site-packages/scipy/optimize/minpack.py in leastsq(func, x0, args, Dfun, full_output, col_deriv, ftol, xtol, gtol, maxfev, epsfcn, factor, diag, warning)
267 if (maxfev == 0):
268 maxfev = 200*(n+1)
--> 269 retval = _minpack._lmdif(func,x0,args,full_output,ftol,xtol,gtol,maxfev,epsfcn,factor,diag)
270 else:
271 if col_deriv:

<class 'minpack.error'>: Result from function call is not a proper array of
floats.
/sw/lib/python2.5/site-packages/scipy/optimize/minpack.py(269)leastsq()
268 maxfev = 200*(n+1)
--> 269 retval = _minpack._lmdif(func,x0,args,full_output,ftol,xtol,gtol,maxfev,epsfcn,factor,diag)
Hi, I've been trying to get the Levenberg-Marquardt minimization routine
leastsq to work and have received the error "Improper input parameters" when
using my own non-trivial function.
The functions in question are:

def chi2gauss(ARR):
    # chi-squared of the 2D Gaussian model against the image im
    # (im and RON are globals here; see below)
    a, b, x0, y0, s = ARR
    noisemap = sqrt(im + RON**2)
    mychi2 = (((im - gi(a, b, x0, y0, s, im)) / noisemap)**2).sum()
    return mychi2

def gi(a, b, x0, y0, s, im):
    # 2D Gaussian with amplitude a, baseline b, center (x0, y0), width s
    myx, myy = indices(im.shape)
    gi = b + a * exp(-((myx - x0)**2 + (myy - y0)**2) / (2 * s**2))
    return gi
where in this case im and RON are global variables, though I have also
tested with im and RON passed in as input parameters via the args=()
keyword of leastsq.
In [123]: leastsq(chi2gauss,[1,1,1,1,1],full_output=1)
(array([ 1., 1., 1., 1., 1.]),
None,
{'fjac': array([[ 0.],
[ 0.],
[ 0.],
[ 0.],
[ 0.]]),
'fvec': 24222.5789746,
'ipvt': array([0, 0, 0, 0, 0]),
'nfev': 0,
'qtf': array([ 0., 0., 0., 0., 0.])},
'Improper input parameters.',
0)
Can anyone help me out? What about my input is improper?
Thanks,
Adam
Dave
2008-03-10 10:59:26 UTC
I may be on the wrong track here, but what happens if you remove the
squaring and summing from chi2gauss, i.e.

def chi2gauss(ARR):
    a, b, x0, y0, s = ARR
    noisemap = sqrt(im + RON**2)
    return (im - gi(a, b, x0, y0, s, im)) / noisemap


-Dave
Adam Ginsburg
2008-03-11 21:33:56 UTC
Thanks Dave, Mark. I eventually figured out the input parameters and
got my minimization to work; the biggest problem was that you can't
simply return a 2D array if you're fitting image data, you have to do
something like the process outlined at
http://www.scipy.org/Cookbook/FittingData#head-11870c917b101bb0d4b34900a0da1b7deb613bf7
(which I had a surprisingly difficult time finding; maybe links to the
cookbook pages could be put in the documentation?). Basically, my issue
wasn't putting in the errors vs. the function, but that I didn't
ravel() my data.
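Roughly, the working shape of things looks like this (a minimal sketch in
the spirit of the cookbook recipe; the function names and the fake image
are just illustrative, not my actual homework code):

import numpy as np
from scipy.optimize import leastsq

def gaussian2d(params, x, y):
    a, b, x0, y0, s = params
    return b + a * np.exp(-((x - x0)**2 + (y - y0)**2) / (2 * s**2))

def residuals(params, im, noisemap):
    myx, myy = np.indices(im.shape)
    # leastsq needs a 1D array of residuals, so flatten the 2D residual image.
    return ((im - gaussian2d(params, myx, myy)) / noisemap).ravel()

# im would be the 2D image and RON the read-out noise from my earlier post;
# here is a fake image just to make the sketch self-contained.
im = gaussian2d([5.0, 1.0, 10.0, 12.0, 3.0], *np.indices((32, 32)))
im += 0.1 * np.random.randn(*im.shape)
RON = 1.0
noisemap = np.sqrt(np.abs(im) + RON**2)

bestfit, ier = leastsq(residuals, [1.0, 1.0, 16.0, 16.0, 2.0],
                       args=(im, noisemap))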

I have come across a new problem in my leastsq fitting when trying to
pass in my own analytic derivative/gradient function, which, I finally
discovered after a few days of hunting, is because a patch has not been
applied to the latest version of scipy, specifically this one:
http://article.gmane.org/gmane.comp.python.scientific.devel/5848
I'm in the process of applying the patch and recompiling all of scipy
with it. Is there any reason the patch wasn't applied, or should I be
asking the scipy-dev list this question?
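For anyone following along, the calling convention I'm trying to use for
the analytic Jacobian looks roughly like this (a toy sketch, not my actual
code; as I understand it, with col_deriv=1 the Jacobian function returns
one row per parameter):

import numpy as np
from scipy.optimize import leastsq

# Toy model: y = a*exp(-k*x) plus noise.
x = np.linspace(0, 4, 40)
y = 2.5 * np.exp(-1.3 * x) + 0.01 * np.random.randn(40)

def residuals(p, x, y):
    a, k = p
    return y - a * np.exp(-k * x)

def jacobian(p, x, y):
    a, k = p
    # Derivatives of the residuals with respect to a and k.
    d_da = -np.exp(-k * x)
    d_dk = a * x * np.exp(-k * x)
    # Shape (n, m): one row per parameter, since col_deriv=1 below.
    return np.array([d_da, d_dk])

p, ier = leastsq(residuals, [1.0, 1.0], args=(x, y),
                 Dfun=jacobian, col_deriv=1)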

Thanks for the help,
Adam
Adam Ginsburg
2008-03-11 21:56:50 UTC
OK... follow-up... can anyone help me rebuild scipy with the patched
__minpack.h in place? I have no idea how to do it, and I can't get scipy
itself to compile, I think because it's failing on a lot of dependencies.
I installed scipy from a .deb package originally, not from source, and I'm
really not sure whether it would be worse to try to resolve all those
dependencies or to just work around the problem.

Thanks,
Adam
Chiara Caronna
2008-03-27 17:32:16 UTC
Hi, I hope this is the right mailing list. I need to perform matrix
multiplication with big matrices, and I realized that numpy is much slower
compared to matlab; does anyone know why? For example, to perform this
calculation:

a = 2000x10000 matrix
b = 10000x2000 matrix
c = a x b   (c is a 2000x2000 matrix)

it takes roughly 100-300 sec, depending on the PC, while in matlab it is
almost instantaneous. What can I do? This is my python installation:

Python 2.5.1 (r251:54863, Mar 7 2008, 04:10:12)
[GCC 4.1.3 20070929 (prerelease) (Ubuntu 4.1.2-16ubuntu2)] on linux2
Type "help", "copyright", "credits" or "license" for more information.
>>> import numpy
>>> numpy.__version__
'1.0.3'
Conor Robinson
2008-03-27 21:10:08 UTC
In my experience numpy can be _much_ faster than matlab. Do you have
a code snippet? Furthermore, did you compile numpy on your machine,
and with what? Do you have a fortran compiler? Something sounds fishy,
because I've multiplied much larger matrices in no time at all, e.g.
your example x100. You may not have linked the BLAS or ATLAS libs at
compile time. Even a raw python function to multiply matrices of the
size you're dealing with should take less time.

1. Make sure you linked BLAS or ATLAS (and note which compilers were used); check the build config (a quick check is sketched after this list).

2. A code snippet of how you went about this would help.

3. Pick up a copy of the numpy manual.
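For example (assuming numpy.show_config() is available in your numpy
version), a quick way to see what numpy was built against:

import numpy

# Prints the BLAS/LAPACK/ATLAS info numpy was compiled against; empty
# atlas/blas sections usually mean dot() falls back to numpy's slow
# built-in multiply.
numpy.show_config()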

HTH,
Conor

David Cournapeau
2008-03-28 06:17:54 UTC
Post by Conor Robinson
In my experience numpy can be _much_ faster than matlab.
For matrix multiplication, the relative speed of matlab and numpy will
mostly depend on the BLAS. Up to matlab 6.5 at least, matlab used ATLAS,
which means that numpy should be as fast, if not faster (if you compiled
ATLAS yourself).
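A quick way to see where you stand is to time the product directly; a
rough sketch with random matrices of the shapes from the original post:

import time
import numpy as np

# Same shapes as in the original post.
a = np.random.rand(2000, 10000)
b = np.random.rand(10000, 2000)

t0 = time.time()
c = np.dot(a, b)   # 2000 x 2000 result
print("dot took %.1f seconds" % (time.time() - t0))

# With an ATLAS-backed numpy this should be many times faster than the
# 100-300 s reported above.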

Here, the problem is that the Debian package does not use ATLAS for
numpy.dot, I think. If you build numpy by yourself, the problem is easy
to fix, and that's what I would recommend if speed really is an issue.
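For the build-it-yourself route, the usual trick is to point numpy at your
ATLAS installation via a site.cfg next to setup.py before building. A rough
sketch only; the exact paths, section names and library names depend on your
system and on the site.cfg.example shipped with your numpy:

[atlas]
library_dirs = /usr/lib/atlas:/usr/lib
include_dirs = /usr/include/atlas
atlas_libs = lapack, f77blas, cblas, atlas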

But for packages, it is not so easy to fix, because numpy.dot uses the
CBLAS interface. Not all BLAS implementations have a CBLAS
implementation, which is problematic since Debian packages have to use
the lowest common interface among all the available BLAS. It may be
possible to make numpy use the plain BLAS interface instead, which I
think is the best solution in the mid term, but this requires someone
to step in and do the work.

cheers,

David
