
Lmfit Minimize Fails With ValueError: Array Is Too Big

I am trying to use the 'brute' method to minimize a function of 20 variables. It is failing with a mysterious error. Here is the complete code: import random import numpy as np imp…

Solution 1:

"What is causing this problem"

Math

You can't brute-force a high-dimensional problem, because brute-force methods require work that grows exponentially with the number of variables (in time, and in memory if implemented naively).

More directly, lmfit uses numpy (*) under the hood, and numpy has a hard limit on how large an array it will allocate. Your initial data structure isn't too big (10x10); it's the combinatorial grid required for a brute-force search that's causing the problem.
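
For a sense of scale, here is a back-of-the-envelope calculation. The grid size per parameter is an assumption (Ns = 20 is the scipy.optimize.brute default; your setup may differ):

```python
# Back-of-the-envelope: size of the grid a brute-force search over 20 variables needs.
# Ns = 20 grid points per parameter is an assumption (the scipy.optimize.brute default).
Ns = 20                      # grid points per parameter (assumed)
nvars = 20                   # number of fit variables
points = Ns ** nvars         # total grid points to evaluate
bytes_needed = points * 8    # at least one float64 per point
print(f"{points:.2e} grid points, >= {bytes_needed:.2e} bytes")  # ~1e26 points, ~1e27 bytes
```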

If you're willing to hack the implementation, you could switch to a memory structure that never materializes the full grid (a rough sketch follows). But this doesn't solve the math problem.
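
A minimal sketch of that idea, assuming a hypothetical objective `func(x)` and a list of `(lo, hi)` bounds: iterate over the grid lazily instead of allocating it. This sidesteps the memory error but still costs Ns**nvars function evaluations.

```python
import itertools
import numpy as np

def lazy_brute(func, bounds, Ns=5):
    """Evaluate func on a grid one point at a time, never building the full array."""
    axes = [np.linspace(lo, hi, Ns) for lo, hi in bounds]
    best_x, best_f = None, np.inf
    for point in itertools.product(*axes):      # yields grid points lazily
        f = func(np.asarray(point))
        if f < best_f:
            best_x, best_f = np.asarray(point), f
    return best_x, best_f
```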

On High-Dimensional Optimization

Try a different minimizer, but be warned: it is very difficult to minimize globally in a high-dimensional space. "Local minima" methods such as fixed-point iteration or gradient descent might be more productive.
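
To make "local" concrete, here is a toy gradient-descent sketch. It assumes you can supply the gradient analytically, and it only finds a local minimum near the starting point:

```python
import numpy as np

def gradient_descent(grad, x0, lr=0.1, steps=500):
    """Step downhill along the negative gradient, starting from x0."""
    x = np.asarray(x0, dtype=float)
    for _ in range(steps):
        x -= lr * grad(x)
    return x

# Example: minimize sum((x - 3)^2) in 20 dimensions; its gradient is 2*(x - 3).
x_min = gradient_descent(lambda x: 2 * (x - 3.0), np.zeros(20))
```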

I hate to be pessimistic, but high-dimensional optimization is very hard in the general case, and I'm afraid a full treatment is beyond the scope of an SO question. Here is a survey.

Practical Alternatives

Gradient descent has some support in sklearn, but it's geared toward machine learning rather than general optimization. scipy has quite good optimization coverage and great documentation; I'd start there. Gradient descent is possible there too, but it isn't necessary.

From scipy's docs on unconstrained minimization, you have many options:

Method Nelder-Mead uses the Simplex algorithm. This algorithm is robust in many applications. However, if numerical computation of derivative can be trusted, other algorithms using the first and/or second derivatives information might be preferred for their better performance in general.

Method Powell is a modification of Powell’s method, which is a conjugate direction method. It performs sequential one-dimensional minimizations along each vector of the directions set (direc field in options and info), which is updated at each iteration of the main minimization loop. The function need not be differentiable, and no derivatives are taken.

and many more derivative-based methods are available. (In general, you do better when you have derivative information available.)
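
As a hedged illustration of what "start with scipy" might look like for a 20-variable problem, here is the derivative-free Powell method from the quotes above. The objective is a placeholder, not your actual model:

```python
import numpy as np
from scipy.optimize import minimize

def objective(x):
    return np.sum((x - 1.0) ** 2)   # stand-in 20-dimensional cost function

x0 = np.zeros(20)                   # local methods need a starting guess
result = minimize(objective, x0, method="Powell")
print(result.x, result.fun)
```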


Footnotes/Looking at the Source Code

(*) the actual error is thrown in numpy's C array-creation code (the exact location depends on your numpy version). Quoted:

    if (npy_mul_with_overflow_intp(&nbytes, nbytes, dim)) {
        PyErr_SetString(PyExc_ValueError,
            "array is too big; `arr.size * arr.dtype.itemsize` "
            "is larger than the maximum possible size.");
        Py_DECREF(descr);
        return NULL;
    }
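
You can trip the same check directly (a hedged sketch; the exact exception type and wording can vary with numpy version and platform):

```python
import numpy as np

try:
    np.empty((20,) * 20)            # 20**20 float64s: size * itemsize overflows
except (ValueError, MemoryError) as exc:
    print(type(exc).__name__, exc)  # typically "array is too big; ..."
```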
