yields
array([[ 0., -1.,  0.,  0.],
       [ 1.,  0., -1.,  0.],
       [ 0.,  1.,  0., -1.],
       [ 0.,  0.,  1.,  0.]])
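The snippet that produced this output is cut off above; one way to reproduce it (my reconstruction, not necessarily the original code) is the difference of two shifted identity matrices:

```python
import numpy as np

# Ones on the first subdiagonal minus ones on the first superdiagonal
# gives the antisymmetric matrix shown above.
m = np.eye(4, k=-1) - np.eye(4, k=1)
print(m)
```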
Hope this helps,
Ian Henriksen
___
NumPy-Discussion mailing list
NumPy-Discussion@scipy.org
https://mail.scipy.org/mailman/listinfo/numpy-discussion
in on the discussion. I suppose the moral of the story is that there's still
no clear-cut "best" way of building wrappers, and your mileage may vary
depending on what features you need.
Thanks,
Ian Henriksen
rs or variadic templates will often require some extra C++ code to work them
into something that Cython can understand. In my experience, those particular
limitations haven't been that hard to work with.
Best,
Ian Henriksen
On Wed, Aug 31, 2016 at 12:20 PM Jason Newton wrote:
> I just wan
Personally, I think this is a great idea. +1 to more informative errors.
Best,
Ian Henriksen
On Mon, Jun 13, 2016 at 2:11 PM Nathaniel Smith wrote:
> It was recently pointed out:
>
> https://github.com/numpy/numpy/issues/7730
>
> that this code silently truncates floats
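The snippet from the linked issue is truncated above, but the general failure mode is easy to sketch (illustrative values, not the code from the report):

```python
import numpy as np

a = np.zeros(3, dtype=np.int64)
a[0] = 3.9          # a float assigned into an integer array
print(a[0])         # 3: truncated toward zero, with no error or warning
```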
ing 64-bit integers default in more cases would help here. Currently arange
gives 32-bit integers on 64-bit Windows, but 64-bit integers on 64-bit
Linux/OSX. Using size_t (or even int64_t) as the default size would help with
overflows in the more common use cases. It's a hefty backc
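The platform difference described can be checked directly. At the time of this discussion the default integer type followed the platform's C long, which is what makes the first dtype below differ between 64-bit Windows and 64-bit Linux/OSX:

```python
import numpy as np

print(np.arange(10).dtype)                  # platform-dependent default
print(np.arange(10, dtype=np.int64).dtype)  # int64, requested explicitly
```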
doing operations with integers, I expect integer output, regardless of floor
division and overflow. There's a lot to both sides of the argument though.
Python's arbitrary-precision integers alleviate overflow concerns very nicely,
but forcing float output for people who actually want integers is not at all
ideal either.
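Both sides of the trade-off show up in a few lines: true division always promotes to float, while staying integral leaves fixed-width overflow on the table:

```python
import numpy as np

a = np.array([7, 8], dtype=np.int64)
print((a / 2).dtype)    # float64: true division promotes
print((a // 2).dtype)   # int64: floor division keeps the integer type

# Fixed-width integers wrap silently where Python's int would keep growing.
b = np.array([2**62], dtype=np.int64)
print(b * 4)            # [0]: 2**64 wraps around in 64-bit arithmetic
```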
Best,
Ian Henriksen
On Mon, Apr 11, 2016 at 5:24 PM Chris Barker wrote:
> On Fri, Apr 8, 2016 at 4:37 PM, Ian Henriksen <
> insertinterestingnameh...@gmail.com> wrote:
>
>
>> If we introduced the T2 syntax, this would be valid:
>>
>> a @ b.T2
>>
>> It makes the intent
If we introduced the T2 syntax, this would be valid:
a @ b.T2
It makes the intent much clearer. This helps readability even more when you're
trying to put together something that follows a larger equation while still
broadcasting correctly.
Does this help make the use cases a bit clearer?
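For reference, T2 here is the proposed attribute, not an existing NumPy feature; today the same intent takes an explicit new axis:

```python
import numpy as np

a = np.random.rand(3, 4)
b = np.random.rand(4)

col = b[:, np.newaxis]      # today's spelling of the hypothetical b.T2
print((a @ col).shape)      # (3, 1): the column orientation survives
print((a @ b).shape)        # (3,):   a 1-D result drops the orientation
```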
Best
common reshape operations. It seems like it would cover all the needed cases
for simplifying expressions involving matrix multiplication. It's not totally
clear what the semantics should be for higher-dimensional arrays or 2D arrays
that already have a shape incompatible with the one
On Thu, Apr 7, 2016 at 1:53 PM wrote:
> On Thu, Apr 7, 2016 at 3:26 PM, Ian Henriksen <
> insertinterestingnameh...@gmail.com> wrote:
>
>> On Thu, Apr 7, 2016 at 12:31 PM wrote:
>>
>>> write unit tests with non square 2d arrays and the exception / test
On Thu, Apr 7, 2016 at 12:31 PM wrote:
> write unit tests with non square 2d arrays and the exception / test error
> shows up fast.
>
> Josef
>
>
Absolutely, but good programming practices don't totally obviate the need for
helpful error messages.
Best,
-Ian
On Thu, Apr 7, 2016 at 12:18 PM Chris Barker wrote:
> On Thu, Apr 7, 2016 at 10:00 AM, Ian Henriksen <
> insertinterestingnameh...@gmail.com> wrote:
>
>> Here's another example that I've seen catch people now and again.
>>
>> A = np.random.rand(100,
becomes difficult to find. This error isn't necessarily unique to beginners
either. It's a common typo that catches intermediate users who know about
broadcasting semantics but weren't keeping close enough track of the
dimensionality of the
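The example above is cut off, but a minimal version of this class of bug (with illustrative shapes) is:

```python
import numpy as np

A = np.random.rand(100, 1)   # a column that was meant to be 1-D
b = np.random.rand(100)

# Intended an elementwise difference of two length-100 vectors, but
# broadcasting silently produces a full outer difference instead.
print((A - b).shape)         # (100, 100): no error anywhere
```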
broadcasting usually prepends ones to fill in missing dimensions and the fact
that our current linear algebra semantics often treat rows as columns, but
making 1-D arrays into rows makes a lot of sense as far as user experience
goes.
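The prepending rule in concrete terms:

```python
import numpy as np

M = np.ones((2, 3))
v = np.arange(3)

# Missing dimensions are padded on the left: (3,) -> (1, 3), i.e. a row.
print((M + v).shape)              # (2, 3)

# There is no right-padding, so a length-2 "column" must be made explicit.
w = np.arange(2)[:, np.newaxis]   # shape (2, 1)
print((M + w).shape)              # (2, 3)
```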
Great ideas everyone!
Best,
-Ian Henriksen
re the case here too.
In general, you can't expect Linux distros to have a uniform shared-object
interface for LAPACK, so you don't gain much by using the version that ships
with CentOS 5 beyond not having to compile it all yourself. It might be better
to use a newer LAPACK built from source with the older toolchains already
there.
Best,
-Ian Henriksen
On Tue, Dec 22, 2015 at 12:14 AM Ralf Gommers
wrote:
> That would be quite useful I think. 32/64-bit issues are mostly orthogonal
> to py2/py3 ones, so maybe only a 32-bit Python 3.5 build to keep things fast?
>
Done in https://github.com/numpy/numpy/pull/6874.
Hope this helps,
-Ian
>
> The Python 3 build runs much faster than the Python 2. You can close and
>> reopen my testing PR to check what happens if you enable the numpy project.
>
>
I'm not sure why this is the case. MSVC 2015 is generally better about a lot
of things, but it's surprising that the speed difference is so
>
> Also, am I correct that these are win64 builds only? Anyone know if it
> would be easy to add win32?
>
It'd be really easy to add 32-bit builds. The main reason I didn't was because
AppVeyor only gives one concurrent build job for free, and I didn't want to
slow things down too much. I can get
On Fri, Dec 18, 2015 at 3:27 PM Nathaniel Smith wrote:
> On Dec 18, 2015 2:22 PM, "Ian Henriksen" <
> insertinterestingnameh...@gmail.com> wrote:
> >
> > An appveyor setup is a great idea. An appveyor build matrix with the
> > various supported MSVC
On Fri, Dec 18, 2015 at 2:51 PM Ralf Gommers wrote:
> On Fri, Dec 18, 2015 at 5:55 PM, Charles R Harris <
> charlesr.har...@gmail.com> wrote:
>
>>
>>
>> On Fri, Dec 18, 2015 at 2:12 AM, Nathaniel Smith wrote:
>>
>>> Hi all,
>>>
>>> I'm wondering what people think of the idea of us (= numpy) stop
On Mon, Jul 13, 2015 at 11:08 AM Nathaniel Smith wrote:
> I think that if you can make this work then it would be great (who doesn't
> like less code that does more?), but that as a practical matter
> accomplishing this may be difficult. AFAIK there simply is no way to write
> generic code to pro
cipy.
>
Supporting Eigen sounds like a great idea. BLIS would be another
one worth supporting at some point. As far as implementation goes, it may
be helpful to look at https://github.com/numpy/numpy/pull/3642 and
https://github.com/numpy/numpy/pull/4191 for the corresponding set of
change
the current amount of work that is coming from the 1.10 release, but this
sounds like a really great idea. Computed/fixed dimensions would allow gufuncs
for things like:
- polynomial multiplication, division, differentiation, and integration
- convolutions
- views of different types (see the
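As a concrete instance of the first two bullets, polynomial multiplication is a convolution of coefficient arrays, so its output length (m + n - 1) is exactly the kind of computed dimension a gufunc signature would need. np.convolve already shows the shape arithmetic for 1-D inputs:

```python
import numpy as np

# (1 + 2x) * (1 + 3x) = 1 + 5x + 6x^2
p = np.array([1, 2])        # coefficients, lowest degree first
q = np.array([1, 3])
print(np.convolve(p, q))    # [1 5 6], length 2 + 2 - 1
```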
se and reverse the order of the SVD, but taking the transposes once the
algorithm is finished will ensure they are C ordered. You could also use
np.ascontiguousarray on the output arrays, though that results in unnecessary
copies that change the memory layout.
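A sketch of the transpose-and-swap trick (my reconstruction of the idea, using np.linalg.svd): since a.T = X @ diag(s) @ Y implies a = Y.T @ diag(s) @ X.T, running the SVD on the transpose and swapping the factors recovers the decomposition of a:

```python
import numpy as np

a = np.random.rand(5, 3)

X, s, Y = np.linalg.svd(a.T, full_matrices=False)
u, vt = Y.T, X.T            # factors of a, recovered by swapping

print(np.allclose(a, u @ np.diag(s) @ vt))   # True
```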
Best of luck!
-Ian Henriksen
ning when the default behavior is expected.
The first point would be a break in backwards compatibility, so I'm not sure
if it's feasible at this point. The advantage would be that all arrays
returned when using this functionality would be contiguous along the last
axis. The shape
On Mon, Oct 13, 2014 at 8:39 PM, Charles R Harris wrote:
>
>
> On Mon, Oct 13, 2014 at 12:54 PM, Sebastian Berg <
> sebast...@sipsolutions.net> wrote:
>
>> On Mo, 2014-10-13 at 13:35 +0200, Daniele Nicolodi wrote:
>> > Hello,
>> >
>> > I have a C++ application that collects float, int or complex
ision to return inf since that is what IEEE-754 arithmetic
> specifies.
>
> For me the question is whether the floor division should also perform a cast
> to an integer type. Since inf cannot be represented in most common integer
> formats
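Concretely, with floats the IEEE-754 behavior is available, and floor division today keeps each input's type rather than casting:

```python
import numpy as np

with np.errstate(divide='ignore'):
    print(np.array([1.0]) / 0.0)    # [inf] per IEEE-754

print(np.array([7.0]) // 2.0)       # [3.]: float in, float out
print((np.array([7]) // 2).dtype)   # integer dtype preserved, no cast
```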