Hello all,
I often find myself writing the following code:
try:
    features = np.asanyarray(features)
except (ValueError, TypeError):
    features = np.asanyarray(features, dtype=object)
I basically want to be able to use fancy indexing on features and, in most
cases, it will be a numpy floating-point array.
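A sketch of how that fallback can be wrapped up (the function name as_indexable is hypothetical; the caught exceptions assume a modern numpy, where ragged input raises ValueError):

```python
import numpy as np

def as_indexable(features):
    # Hypothetical helper: coerce to an ndarray so fancy indexing works;
    # ragged or mixed inputs fall back to dtype=object
    try:
        return np.asanyarray(features)
    except (ValueError, TypeError):
        return np.asanyarray(features, dtype=object)

regular = as_indexable([1.0, 2.0, 3.0])  # float64 array, fancy-indexable
ragged = as_indexable([[1, 2], [3]])     # falls back to an object array
```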
Hi,
On current trunk, all tests pass but running the (forgive my language)
doctests, I found this:
In [1]: import numpy as np
In [2]: np.__version__
Out[2]: '2.0.0.dev-730b861'
In [3]: np.lookfor('cos')
Segmentation fault
on:
Linux angela 2.6.38-10-generic #46-Ubuntu SMP Tue Jun 28 15:07:17 U
Torgil Svensson wrote:
> Try the fromiter function, that will allow you to pass an iterator
> which can read the file line by line and not preload the whole file.
>
> file_iterator = iter(open('filename.txt'))
> line_parser = lambda x: map(float, x.split('\t'))
> a = np.fromiter(itert
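A minimal, self-contained sketch of that fromiter approach (the two-column tab-separated layout is an assumption, and an in-memory StringIO stands in for the file; np.fromiter consumes a flat stream of scalars, so the parsed rows are flattened and reshaped afterwards):

```python
import io
import numpy as np

# In-memory stand-in for open('filename.txt') (hypothetical data)
text = io.StringIO("1.0\t2.0\n3.0\t4.0\n")

# Flatten each parsed line into one scalar stream for fromiter,
# then reshape into the known number of columns
values = (float(field) for line in text for field in line.split('\t'))
a = np.fromiter(values, dtype=float).reshape(-1, 2)
```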
Hi, I put together a set of tools for inverting, multiplying and
finding eigenvalues for many small matrices (arrays of shape (N, M, M),
where MxM is the size of each matrix). Thanks to the poster who
suggested using the Tokyo package. Although not used directly, it
helped with figuring the correct
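On a modern numpy (>= 1.8), the stacked linalg routines cover this use case directly; a quick sketch (the shapes and the diagonal shift used to keep the matrices well-conditioned are arbitrary choices, not from the original post):

```python
import numpy as np

rng = np.random.default_rng(0)
# N=10 random 3x3 matrices; adding 3*I keeps them comfortably invertible
mats = rng.normal(size=(10, 3, 3)) + 3.0 * np.eye(3)

invs = np.linalg.inv(mats)         # inverts all N matrices in one call
prods = mats @ invs                # batched matrix multiply, shape (10, 3, 3)
eigvals = np.linalg.eigvals(mats)  # eigenvalues per matrix, shape (10, 3)
```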
Hi Andrea,
I believe benchmarks should be done the way Hans Mittelmann does them (
http://plato.asu.edu/bench.html ), and of course the number of function
evaluations matters when slow Python code is tested against compiled code,
but my current work doesn't allow me to spend so much time on OpenOpt
development, so,
Hi Dmitrey,
2011/8/15 Dmitrey :
> Hi all,
> I'm glad to inform you that general constraints handling for interalg (free
> solver with guaranteed user-defined precision) is now available. Although it
> is very premature and requires lots of improvements, it is already capable
> of outperforming comm
Hi all,
I'm glad to inform you that general constraints handling for interalg
(free solver with guaranteed user-defined precision) is now available.
Although it is very premature and requires lots of improvements, it is
already capable of outperforming commercial BARON (example:
http
Hi Chris and All,
On 12 August 2011 16:53, Christopher Jordan-Squire wrote:
> Hi Andrea--An easy way to get something like this would be
>
> import numpy as np
> import scipy.stats as stats
>
> sigma = #some reasonable standard deviation for your application
> x = stats.norm.rvs(size=1000, loc=125
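For completeness, the range-limited sampling being discussed can also be done with plain numpy by rejection sampling (the bounds and distribution parameters below are hypothetical, not taken from the thread):

```python
import numpy as np

rng = np.random.default_rng(0)
mu, sigma = 125.0, 25.0  # assumed mean / standard deviation
lo, hi = 50.0, 200.0     # assumed acceptance range
n = 1000

# Draw batches of normals and keep only values inside [lo, hi]
samples = np.empty(0)
while samples.size < n:
    draw = rng.normal(mu, sigma, n)
    samples = np.concatenate([samples, draw[(draw >= lo) & (draw <= hi)]])
samples = samples[:n]
```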
The reason is there can be multiple dtypes (i.e. with different .num)
representing the same kind of data.
Usually in Python this goes unnoticed, because you do not test a dtype
through its .num; instead you use, for instance, == 'uint32', and all works
fine.
However, it can indeed confuse C code in
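The point can be seen from the Python side with a small probe (note: whether 'l' and 'q' compare equal depends on the platform's C long size, but their .num values always differ):

```python
import numpy as np

# 'l' (C long) and 'q' (C long long) always have distinct type numbers,
# yet on LP64 platforms both describe the same 64-bit signed integer,
# so the dtypes compare equal while the .num values do not
l, q = np.dtype('l'), np.dtype('q')
print(l.num, q.num, l == q, l.itemsize, q.itemsize)
```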
Hi,
A student of mine, using 32-bit numpy 1.5 under 64-bit Windows 7, noticed
that when a numpy array with dtype=uint32 is passed to an extension module,
the following codelet would fail:
switch (PyArray_TYPE(ARR)) {
    case PyArray_UINT16: /* do smth */ break;
    case PyArray_UINT32: /* do smth */ break;
    ca
Hi Chris & Brennan,
On 15 August 2011 00:59, Brennan Williams wrote:
> You can use scipy.stats.truncnorm, can't you? Unless I misread, you want to
> sample a normal distribution, but with generated values only being within a
> specified range? However, you also say you want to do this with triangula