Nabble is the other site that probably should be notified of the mailing
list move.
On Wed, Mar 22, 2017 at 10:11 AM, Nathan Goldbaum
wrote:
> Are you sure about astropy? They recently moved to google groups.
>
>
> On Wednesday, March 22, 2017, Ralf Gommers wrote:
>
>> Hi all,
>>
>> The server
How is it that scipy.org's bandwidth usage is that much greater than
matplotlib.org's? We are quite image-heavy, but we haven't hit any
bandwidth limits that I am aware of.
Ben Root
On Wed, Mar 15, 2017 at 9:16 AM, Bryan Van de ven
wrote:
> NumPy is a NumFocus fiscally sponsored project, perhap
You are going to need to provide much more context than that. Overhead
compared to what? And where (io, cpu, etc.)? What are the size of your
arrays, and what sort of operations are you doing? Finally, how much
overhead are you seeing?
There can be all sorts of reasons for overhead, and some can e
Ben Root
On Mon, Feb 27, 2017 at 2:31 PM, Matthew Brett
wrote:
> Hi,
>
> On Mon, Feb 27, 2017 at 11:27 AM, Charles R Harris
> wrote:
> >
> >
> > On Mon, Feb 27, 2017 at 11:43 AM, Benjamin Root
> > wrote:
> >>
> >> What's the timelin
That's not a bad idea. Matplotlib is currently considering something
similar for its mlab module. It has been there since the beginning, but it
is very outdated and very out-of-scope for matplotlib. However, there are
still a lot of code out there that depends on it. So, we are looking to
split it o
Might be os-specific, too. Some virtual memory management systems might
special case the zeroing out of memory. Try doing the same thing with a
different value than zero.
On Dec 26, 2016 6:15 AM, "Nicolas P. Rougier"
wrote:
Thanks for the explanation Sebastian, makes sense.
Nicolas
> On 26 D
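The zero-page hypothesis Ben raises is easy to probe. A rough sketch (timings are machine- and OS-dependent; on many systems `np.zeros` can hand back lazily-zeroed pages from the allocator, while `np.full` must actually write every element):

```python
import timeit

import numpy as np

n = 10**7
# np.zeros may be backed by calloc / lazily-zeroed virtual memory pages...
t_zeros = timeit.timeit(lambda: np.zeros(n), number=5)
# ...while np.full has to touch and write the fill value everywhere.
t_full = timeit.timeit(lambda: np.full(n, 1.0), number=5)
print(f"zeros: {t_zeros:.3f}s  full(1.0): {t_full:.3f}s")
```

On Linux the gap often shrinks once the zeroed memory is actually touched, which is exactly the OS-specific effect being discussed.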
"only be used by engineers/scientists for research"
Famous last words. I know plenty of scientists who would love to "do
research" with an exposed eval(). Full disclosure, I personally added a
security hole into matplotlib thinking I covered all my bases in protecting
an eval() statement.
Ben Roo
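For the record, the usual stdlib escape hatch when only literals are needed is `ast.literal_eval`, which rejects names, calls, and attribute access outright. A sketch, not a full sandbox:

```python
import ast

def safe_literal(expr):
    """Evaluate only Python literals (numbers, strings, tuples, lists,
    dicts, sets, booleans, None). Names, calls, and attribute access
    are all rejected, unlike a bare eval()."""
    return ast.literal_eval(expr)

assert safe_literal("[1, 2, 3]") == [1, 2, 3]
try:
    safe_literal("__import__('os').system('echo pwned')")
    blocked = False
except ValueError:
    blocked = True
assert blocked  # the injection attempt is rejected at parse time
```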
Perhaps the numexpr package might be safer? Not exactly meant for this
situation (meant for optimizations), but the evaluator is pretty darn safe.
Ben Root
On Thu, Oct 27, 2016 at 5:33 PM, John Ladasky wrote:
> This isn't just a Numpy issue. You are interested in Python's eval().
>
> Keep in m
+1. I was almost always setting it to True anyway.
On Thu, Oct 20, 2016 at 1:18 PM, Nathan Goldbaum
wrote:
> Agreed, especially given the prevalence of using this function in
> downstream test suites:
>
> https://github.com/search?utf8=%E2%9C%93&q=numpy+assert_
> allclose&type=Code&ref=searchres
Why not "propagated"?
On Fri, Oct 14, 2016 at 1:08 PM, Sebastian Berg
wrote:
> On Fr, 2016-10-14 at 13:00 -0400, Allan Haldane wrote:
> > Hi all,
> >
> > Eric Wieser has a PR which defines new functions np.ma.correlate and
> > np.ma.convolve:
> >
> > https://github.com/numpy/numpy/pull/7922
> >
With regards to arguments about holding onto large arrays, I would like to
emphasize that my original suggestion mentioned weakref'ed numpy arrays.
Essentially, the idea is to claw back only the raw memory blocks during
that limbo period between discarding the numpy array python object and when
pyt
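The weakref idea sketches out like this under CPython's reference counting (the holder keeps only a weak reference, so it never prolongs the array's life):

```python
import weakref

import numpy as np

a = np.arange(1000)
r = weakref.ref(a)    # does not keep the array alive
assert r() is a
del a                 # last strong reference gone...
assert r() is None    # ...so the weakref now reports the array collected
```

A cache of raw memory blocks keyed this way can be reclaimed exactly in that limbo window between the Python object being discarded and the memory being reused.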
This is the first I am hearing of tempita (looks to be a templating
language). How is it a dependency of numpy? Do I now need tempita in order
to use numpy, or is it a build-time-only dependency?
Ben
On Fri, Sep 30, 2016 at 11:13 AM, Stephan Hoyer wrote:
> One way to do this is to move to vendo
There was similar discussion almost two years ago with respect to
capitalization of matplotlib in prose. Most of the time, it was lower-case
in our documentation, but then the question was if it should be upper-case
at the beginning of the sentence... or should it always be upper-cased like
a prope
Don't know if it is what you are looking for, but NumPy has a built-in
suite of benchmarks:
http://docs.scipy.org/doc/numpy/reference/generated/numpy.testing.Tester.bench.html
Also, some projects have taken to utilizing the "airspeed velocity" utility
to track benchmarking stats for their projects
One thing that I have always wished for from a project like mypy is the
ability to annotate what the expected shape should be. Too often, I get a
piece of code from a coworker and it comes with no docstring explaining the
expected dimensions of the input arrays and what the output array is going
to
s such as 'left' or
'right' or maybe something akin to what at_least3d() implements.
On Wed, Jul 6, 2016 at 3:20 PM, Joseph Fox-Rabinovitz <
jfoxrabinov...@gmail.com> wrote:
> On Wed, Jul 6, 2016 at 2:57 PM, Eric Firing wrote:
> > On 2016/07/06 8:25 AM, Benjamin Ro
I wouldn't have the keyword be "where", as that collides with the notion of
"where" elsewhere in numpy.
On Wed, Jul 6, 2016 at 2:21 PM, Joseph Fox-Rabinovitz <
jfoxrabinov...@gmail.com> wrote:
> I still think this function is useful. I have made a change so that it
> only accepts one array, as Ma
While atleast_1d/2d/3d predate my involvement in numpy, I am probably
partly to blame for popularizing them as I helped to fix them up a fair
amount. I wouldn't call their use "guessing". Rather, I would treat them as
useful input sanitizers. If your function is going to be doing 2d indexing
on an i
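The "input sanitizer" pattern described above looks like this in practice (`row_means` is a hypothetical illustration, not a NumPy function):

```python
import numpy as np

def row_means(x):
    # Sanitize: scalars and 1-D inputs are promoted so that the 2-D
    # indexing below is always valid -- a contract, not a guess.
    x = np.atleast_2d(x)
    return x.mean(axis=1)

assert row_means(5.0).tolist() == [5.0]
assert row_means([1.0, 2.0, 3.0]).tolist() == [2.0]
assert row_means([[1.0, 2.0], [3.0, 4.0]]).tolist() == [1.5, 3.5]
```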
That seems like the only reasonable behavior, but I will admit that my
initial desire is essentially a vectorized "unique" such that it returns
the unique values of the stated axis. But that isn't possible because there
can be different number of unique values in the given axis, resulting in a
ragg
As a bit of a real-life example where things can go wrong with naming. The
"pylab" name was accidentally hijacked a couple years ago on pypi, and
caused several bug reports to be filed against matplotlib for failing
scripts. Some people thought that one should do "pip install pylab" to do
"from py
Oftentimes, if one needs to share numpy arrays for multiprocessing, I would
imagine that it is because the array is huge, right? So, the pickling
approach would copy that array for each process, which defeats the purpose,
right?
Ben Root
On Wed, May 11, 2016 at 2:01 PM, Allan Haldane
wrote:
> O
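On Python 3.8+, `multiprocessing.shared_memory` gives exactly this copy-free sharing; a minimal sketch (the worker-side attach is shown as a comment):

```python
from multiprocessing import shared_memory

import numpy as np

a = np.arange(10, dtype=np.float64)
shm = shared_memory.SharedMemory(create=True, size=a.nbytes)
b = np.ndarray(a.shape, dtype=a.dtype, buffer=shm.buf)
b[:] = a                     # one copy in; no further copies per worker
total = b.sum()              # 0 + 1 + ... + 9

# A worker process would attach by name, with no pickling of the data:
#   existing = shared_memory.SharedMemory(name=shm.name)
#   view = np.ndarray((10,), dtype=np.float64, buffer=existing.buf)

del b                        # release the exported buffer before closing
shm.close()
shm.unlink()
assert total == 45.0
```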
shape is (256, 256). So the image seems to have
> > been loaded as a color image.
> >
> > 2016-04-29 13:38 GMT-03:00 Benjamin Root :
> >> What kind of array is "img"? What is its dtype and shape?
> >>
> >> plt.imshow() will use the default col
What kind of array is "img"? What is its dtype and shape?
plt.imshow() will use the default colormap for matplotlib if the given
array is just 2D. But if it is 3D (a 2D array of RGB[A] channels), then it
will forego the colormap and utilize that for the colors. It knows nothing
of the colormap con
Working on closing out some bug reports at work, I ran into one about how
comparisons to 'None' will result in elementwise object comparison in the
future. Now, I totally get the idea behind the change, and I am not here to
argue that decision. However, I have come across a situation where the
chan
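In NumPy releases after this transition, `==` against None is elementwise, and an identity check is the unambiguous spelling of "was anything passed at all?":

```python
import numpy as np

a = np.arange(3)
mask = (a == None)  # noqa: E711 -- deliberately elementwise
assert mask.tolist() == [False, False, False]
# Identity never broadcasts, so it works for arrays and None alike:
assert (a is None) is False
```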
Yeah! That's the bug I encountered! So, that would explain why this seems
to work fine now (I tried it out a bit on Friday on a CentOS6 system, but
didn't run the test suite).
Cheers!
Ben Root
On Sun, Apr 17, 2016 at 1:46 PM, Olivier Grisel
wrote:
> Thanks for the clarification, I read your ori
haniel Smith wrote:
>
>> On Apr 14, 2016 11:11 AM, "Benjamin Root" wrote:
>> >
>> > Are we going to have to have documentation somewhere making it clear
>> that the numpy wheel shouldn't be used in a conda environment? Not that I
>> would expec
Are we going to have to have documentation somewhere making it clear that
the numpy wheel shouldn't be used in a conda environment? Not that I would
expect this issue to come up all that often, but I could imagine a scenario
where a non-scientist is simply using a base conda distribution because
th
You might do better using scipy.spatial. It has very useful data structures
for handling spatial coordinates. I am not exactly sure how to use them for
this specific problem (not a domain expert), but I would imagine that the
QHull wrappers there might give you some useful tools.
Ben Root
On Tue,
ovitz <
jfoxrabinov...@gmail.com> wrote:
> On Tue, Mar 29, 2016 at 1:46 PM, Benjamin Root
> wrote:
> > Is there a quick-n-easy way to reflect a NxM array that represents a
> > quadrant into a 2Nx2M array? Essentially, I am trying to reduce the size
> of
> > an expens
Is there a quick-n-easy way to reflect a NxM array that represents a
quadrant into a 2Nx2M array? Essentially, I am trying to reduce the size of
an expensive calculation by taking advantage of the fact that the first
part of the calculation is just computing gaussian weights, which is
radially symm
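One flip-and-stack sketch of the reflection (this version duplicates the center row/column; whether those should instead be shared depends on how the grid is defined):

```python
import numpy as np

q = np.arange(6, dtype=float).reshape(2, 3)  # the computed N x M quadrant
top = np.hstack([q[:, ::-1], q])             # mirror left-right: N x 2M
full = np.vstack([top[::-1, :], top])        # mirror up-down:   2N x 2M
assert full.shape == (4, 6)
assert np.array_equal(full[2:, 3:], q)        # lower-right is the original
assert np.array_equal(full[:2, 3:], q[::-1])  # upper-right is its mirror
```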
On Tue, Feb 23, 2016 at 3:30 PM, Nathaniel Smith wrote:
> What should this do?
>
> np.zeros((12, 0)).reshape((10, -1, 2))
>
It should error out, I already covered that. 12 != 20.
Ben Root
___
NumPy-Discussion mailing list
NumPy-Discussion@scipy.org
h
On Tue, Feb 23, 2016 at 3:14 PM, Nathaniel Smith wrote:
> Sure, it's totally ambiguous. These are all legal:
I would argue that except for the first reshape, all of those should be an
error, and that the current algorithm is buggy.
This isn't a heuristic. It isn't guessing. It is making the s
re like a bug to me.
Ben Root
On Tue, Feb 23, 2016 at 1:58 PM, Sebastian Berg
wrote:
> On Di, 2016-02-23 at 11:45 -0500, Benjamin Root wrote:
> > but, it isn't really ambiguous, is it? The -1 can only refer to a
> > single dimension, and if you ignore the zeros in the ori
> On Tue, Feb 23, 2016 at 11:32 AM, Benjamin Root
> wrote:
>
>> Not exactly sure if this should be a bug or not. This came up in a fairly
>> general function of mine to process satellite data. Unexpectedly, one of
>> the satellite files had no scans in it, triggering an e
Not exactly sure if this should be a bug or not. This came up in a fairly
general function of mine to process satellite data. Unexpectedly, one of
the satellite files had no scans in it, triggering an exception when I
tried to reshape the data from it.
>>> import numpy as np
>>> a = np.zeros((0, 5
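In NumPy versions that adopted the resolution of this thread, a zero-sized array reshapes fine when the -1 is pinned down by the non-zero dimensions, and errors when it is not:

```python
import numpy as np

a = np.zeros((0, 5))
# Unambiguous: the -1 dimension is fixed by the other, non-zero sizes.
assert a.reshape((-1, 5)).shape == (0, 5)
# Ambiguous: 0 * anything == 0, so the -1 cannot be recovered.
try:
    a.reshape((0, -1))
    raised = False
except ValueError:
    raised = True
assert raised
```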
tlib's mlab submodule.
>> >
>> > I've been in situations (::cough:: Esri production ::cough::) where I've
>> > had one hand tied behind my back and unable to install pandas. mlab was
>> > a big help there.
>> >
>> > https://goo.gl/M7Mi
Re-reading your post, I see you are talking about something different. Not
exactly sure what your use-case is.
Ben Root
On Fri, Feb 12, 2016 at 9:49 AM, Benjamin Root wrote:
> Seems like you are talking about xarray: https://github.com/pydata/xarray
>
> Cheers!
> Ben Root
>
&
Seems like you are talking about xarray: https://github.com/pydata/xarray
Cheers!
Ben Root
On Fri, Feb 12, 2016 at 9:40 AM, Sérgio wrote:
> Hello,
>
> This is my first e-mail, I will try to make the idea simple.
>
> Similar to masked array it would be interesting to use a label array to
> guide
Huh... matplotlib could use that! We have been using our own internal
function left over from the numerix days, I think.
Ben Root
On Thu, Feb 11, 2016 at 2:12 PM, Nathaniel Smith wrote:
> Oh wow, yeah, there are tons of uses:
>
> https://github.com/search?q=%22np.iterable%22&ref=simplesearch&ty
I like the idea of bumping the stacklevel in principle, but I am not sure
it is all that practical. For example, if a warning came up when doing "x /
y", I am assuming that it is emitted from within the ufunc np.divide(). So,
you would need two stacklevels based on whether the entry point was the
o
Are there other functions where this behavior may or may not be happening?
If it isn't consistent across all np.random functions, it probably should
be, one way or the other.
Ben Root
On Tue, Jan 19, 2016 at 5:10 AM, Jaime Fernández del Río <
jaime.f...@gmail.com> wrote:
> Hi all,
>
> There is a
Travis -
I will preface the following by pointing out how valuable miniconda and
anaconda have been for our workplace because we were running into issues
with ensuring that everyone in our mixed platform office had access to all
the same tools, particularly GDAL, NetCDF4 and such. For the longest t
A warning about HDF5. It is not a database format, so you have to be
extremely careful if the data is getting updated while it is open for
reading by anybody else. If it is strictly read-only, and nobody else is
updating it, then have at it!
Cheers!
Ben Root
On Thu, Jan 14, 2016 at 9:16 AM, Edis
The other half of the fun is how to deal with weird binary issues with
libraries like libgeos_c, libhdf5 and such. You have to get all of the
compile options right for your build of those libraries to get your build
of GDAL and pyhdf working right. You also have packages like gdal and
netcdf4 have
TBH, I wouldn't have expected it to work, but now that I see it, it does
make some sense. I would have thought that it would error out as being
ambiguous (prepend? append?). I have always used ellipses to make it
explicit where the new axis should go. But, thinking in terms of how
regular indexing
Maybe use searchsorted()? I will note that I have needed to do something
like this once before, and I found that the list comprehension form of
calling .index() for each item was faster than jumping through hoops to
vectorize it using searchsorted (needing to sort and then map the sorted
indices to
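The hoops in question look roughly like this: sort once, bisect, then map back through the sorter to recover original positions (assuming every needle is actually present in the haystack):

```python
import numpy as np

haystack = np.array([30, 10, 20, 40])
needles = np.array([20, 40, 10])

# The list-comprehension form being compared against:
idx_loop = [list(haystack).index(v) for v in needles]

# The vectorized form: searchsorted on the sorted order, mapped back.
sorter = np.argsort(haystack)
idx_vec = sorter[np.searchsorted(haystack, needles, sorter=sorter)]

assert idx_loop == idx_vec.tolist() == [2, 3, 1]
```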
I believe that a lot can be learned from matplotlib's recent foray into
appveyor. Don't hesitate to ask questions on our dev mailing list (I wasn't
personally involved, so I don't know what was learned).
Cheers!
Ben Root
On Fri, Dec 18, 2015 at 5:27 PM, Nathaniel Smith wrote:
> On Dec 18, 2015
Would it make sense to at all to bring that optimization to np.sum()? I
know that I have np.sum() all over the place instead of count_nonzero,
partly because it is a MatLab-ism and partly because it is easier to write.
I had no clue that there was a performance difference.
Cheers!
Ben Root
On Th
Heh, never noticed that. Was it implemented more like a generator/iterator
in older versions of Python?
Thanks,
Ben Root
On Mon, Dec 14, 2015 at 12:38 PM, Robert Kern wrote:
> On Mon, Dec 14, 2015 at 3:56 PM, Benjamin Root
> wrote:
>
> > By the way, any reason why this works?
&
Devil's advocate here: np.array() has become the de-facto "constructor" for
numpy arrays. Right now, passing it a generator results in what, IMHO, is a
useless result:
>>> np.array((i for i in range(10)))
array(<generator object <genexpr> at 0x7f28b2beca00>, dtype=object)
Passing pretty much any dtype argument will cause t
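For completeness, `np.fromiter` is the intended way to build an array from a generator; it requires an explicit dtype and optionally a count:

```python
import numpy as np

gen = (i * i for i in range(5))
a = np.fromiter(gen, dtype=np.int64, count=5)  # count is optional
assert a.tolist() == [0, 1, 4, 9, 16]
```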
Oooh, this will be nice to have. This would be one of the few times I would
love to see unicode versions of these function names supplied, too.
On Wed, Nov 25, 2015 at 5:31 PM, Antonio Lara wrote:
> Hello, I have added three new functions to the file function_base.py in
> the numpy/lib folder. T
How is this different from using np.newaxis and broadcasting? Or am I
misunderstanding this?
Ben Root
On Tue, Nov 24, 2015 at 9:13 PM, wrote:
>
>
> On Tue, Nov 24, 2015 at 7:13 PM, Nathaniel Smith wrote:
>
>> On Nov 24, 2015 11:57 AM, "John Kirkham" wrote:
>> >
>> > Takes an array and tacks o
Just pointing out np.loadtxt(..., ndmin=2) will always return a 2D array.
Notice that without that option, the result is effectively squeezed. So if
you don't specify that option, and you load up a CSV file with only one
row, you will get a very differently shaped array than if you load up a CSV
fi
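The squeeze effect is easy to demonstrate with a one-row CSV loaded from a string:

```python
from io import StringIO

import numpy as np

one_row = "1,2,3\n"
a = np.loadtxt(StringIO(one_row), delimiter=",")
b = np.loadtxt(StringIO(one_row), delimiter=",", ndmin=2)
assert a.shape == (3,)    # squeezed down to 1-D
assert b.shape == (1, 3)  # ndmin=2 keeps the row structure
```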
My personal rule for flexible inputs like that is that it should be
encouraged so long as it does not introduce ambiguity. Furthermore,
allowing a scalar as an input doesn't add a cognitive disconnect for the
user on how to specify multiple columns. Therefore, I'd give this a +1.
On Mon, Nov 9, 201
allclose() needs to return a bool so that one can do "if np.allclose(foo,
bar) is True" or some such. The "good behavior" is for np.isclose() to
return a memmap, which still confuses the heck out of me, but I am not a
memmap expert.
On Thu, Nov 5, 2015 at 4:50 PM, Ralf Gommers wrote:
>
>
> On We
I am not sure I understand what you mean. Specifically that np.isclose will
return a memmap if one of the inputs is a memmap. The result is a brand new
array, right? So, what is that result memmapping from? Also, how does this
impact np.allclose()? That function returns a scalar True/False, so what
Py beats Pandas?
Ben
On Mon, Nov 2, 2015 at 6:44 PM, Chris Barker wrote:
> On Tue, Oct 27, 2015 at 7:30 AM, Benjamin Root
> wrote:
>
>> FWIW, when I needed a fast Fixed Width reader
>>
>
> was there potentially no whitespace between fields in that case? In which
>
Conda is for binary installs and largely targeted for end-users. This topic
pertains to source installs, and is mostly relevant to developers, testers,
and those who like to live on the bleeding edge of a particular project.
On Tue, Oct 27, 2015 at 10:31 AM, Edison Gustavo Muenz <
edisongust...@gm
FWIW, when I needed a fast Fixed Width reader for a very large dataset last
year, I found that np.genfromtxt() was faster than pandas' read_fwf().
IIRC, pandas' text reading code fell back to pure python for fixed width
scenarios.
On Fri, Oct 23, 2015 at 8:22 PM, Chris Barker - NOAA Federal <
chr
The change to nansum() happened several years ago. The main thrust of it
was to make the following consistent:
np.sum([]) # zero
np.nansum([np.nan]) # zero
np.sum([1]) # one
np.nansum([np.nan, 1]) # one
If you want to propagate masks and such, use masked arrays.
Ben Root
On Fri, Oct 23, 201
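The four identities above, plus the masked-array alternative, check out directly:

```python
import numpy as np

assert np.sum([]) == 0.0              # empty sum is zero...
assert np.nansum([np.nan]) == 0.0     # ...so an all-NaN nansum is too
assert np.sum([1]) == 1
assert np.nansum([np.nan, 1]) == 1
# Propagating missing values is the masked-array job:
m = np.ma.masked_invalid([np.nan, 1.0])
assert m.sum() == 1.0
```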
In many other parts of numpy, calling the numpy function that had an
equivalent array method would result in the method being called. I would
certainly be surprised if the copy() method behaved differently from the
np.copy() function.
Now it is time for me to do some grepping of my code-bases...
I'd be totally in support of switching to the timezone-naive form. While it
would be ideal that everyone stores their dates in UTC, the real world is
messy and most of the time, people are just loading dates as-is and don't
even care about timezones. I work on machines with different TZs, and I
hate it
There is the concept of consensus-driven development, which centers on veto
rights. It does assume that all actors are driven by a common goal to
improve the project. For example, the fact that we didn't have consensus
back during the whole NA brouhaha was actually a good thing because IMHO
includi
Ow! Ow! Ow! I am just a meteorologist that has an obsession with looking up
unfamiliar technology terms.
I need a Tylenol...
Ben Root
On Fri, Sep 25, 2015 at 12:27 PM, Anne Archibald
wrote:
> goto! and comefrom! Together with exceptions, threads, lambda, super,
> generators, and coroutines, all
Most of the time when I wanted to use goto in my early days, I found that
breaks and continues were better and easier to understand. I will admit
that there is occasional nested if/elif/else code that gets messy without a
goto. But which smells worse? A "goto" package or a complex if/elif/else?
Be
To expand on Ryan's point a bit about recusal... this is why we have a
general policy against self-merging and why peer review is so valuable. A
ban on self-merging is much like recusal, and I think it is a fantastic
policy.
As for a BDFL, I used to like that idea having seen it work well for Linu
You realize that this is part of his plans for world domination, right?
+1, Congrats! I wish I had enough skill and time to contribute to more
projects.
Cheers!
Ben Root
On Sep 9, 2015 3:56 PM, "Charles R Harris"
wrote:
> Hi All,
>
> Stephan Hoyer now has commit rights to Numpy to complement al
it is
entrenched in our codebase.
Ben Root
On Thu, Aug 27, 2015 at 11:33 AM, Sebastian Berg wrote:
> On Do, 2015-08-27 at 11:15 -0400, Benjamin Root wrote:
> > Ok, I just wanted to make sure I understood the issue before going bug
> > hunting. Chances are, it has been a bug on our
that at the REPL doesn't produce a warning, so I am guessing that
it is valid.
Ben Root
On Thu, Aug 27, 2015 at 10:44 AM, Sebastian Berg wrote:
> On Do, 2015-08-27 at 08:04 -0600, Charles R Harris wrote:
> >
> >
> > On Thu, Aug 27, 2015 at 7:52 AM, Benjamin Root
> &g
Ok, I tested matplotlib master against numpy master, and there were no
errors. I did get a bunch of new deprecation warnings though such as:
"/nas/home/broot/centos6/lib/python2.7/site-packages/matplotlib-1.5.dev1-py2.7-linux-x86_64.egg/matplotlib/colorbar.py:539:
VisibleDeprecationWarning: boolea
Aw, crap... I looked at the list of tags and saw the rc1... I'll test again
in the morning. Grumble, grumble...
On Aug 26, 2015 10:53 PM, "Nathaniel Smith" wrote:
> On Aug 26, 2015 7:03 PM, "Benjamin Root" wrote:
> >
> > Just a data point, I just
Just a data point, I just tested 1.9.0rc1 (built from source) with
matplotlib master, and things appear to be fine there. In fact, matplotlib
was built against 1.7.x (I was hunting down a regression), and worked
against the 1.9.0 install, so the ABI appears intact.
Cheers!
Ben Root
On Wed, Aug 26
I used to be a huge advocate for the "develop" mode, but not anymore. I
have run into way too many Heisenbugs that would clear up if I nuked my
source tree and re-clone.
I should also note that there is currently an open issue with "pip install
-e" and namespace packages. This has been reported to
Did you do a "git clean -fxd" before re-installing?
On Thu, Aug 13, 2015 at 2:34 PM, Sebastian Berg
wrote:
> Hey,
>
> just for hacking/testing, I tried to add to shape.c:
>
>
> /*NUMPY_API
> *
> * Checks if memory overlap exists
> */
> NPY_NO_EXPORT int
> PyArray_ArraysShareMemory(PyArrayObje
1:10 PM, Sebastian Berg
wrote:
> On Mo, 2015-08-10 at 12:09 -0400, Benjamin Root wrote:
> > Just came across this one today:
> >
> > >>> np.in1d([1], set([0, 1, 2]), assume_unique=True)
> > array([ False], dtype=bool)
> >
> > >>> np.in1d([1]
Just came across this one today:
>>> np.in1d([1], set([0, 1, 2]), assume_unique=True)
array([ False], dtype=bool)
>>> np.in1d([1], [0, 1, 2], assume_unique=True)
array([ True], dtype=bool)
I am assuming this has something to do with the fact that order is not
guaranteed with set() objects? I was
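The underlying issue is that a set is not a sequence, so `np.asarray` wraps it as a 0-d object scalar rather than unpacking its elements; converting to a list first restores the expected behavior (`np.isin` is the modern spelling of `in1d`):

```python
import numpy as np

s = {0, 1, 2}
# A set is not array-like: it becomes a single 0-d object "element".
assert np.asarray(s).shape == ()
# Convert to a sequence first and membership behaves as expected:
assert np.isin([1], list(s)).tolist() == [True]
```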
What a coincidence! A very related bug just got re-opened today at my
behest: https://github.com/numpy/numpy/issues/5095
Not the same, but I wouldn't be surprised if it stems from the same
sources. The short of it...
np.where(x, 0, x)
where x is a masked array, will return a masked array in 1.8.
I think there is a misunderstanding. What you are seeing is documentation
on how to use f90 for compiling, which then outputs some stuff to the
terminal, which is being shown in the documentation. We don't actually
include any compilers in numpy.
Ben Root
On Wed, Jul 8, 2015 at 11:53 AM, Nguyen,
On Thu, Jun 4, 2015 at 10:41 PM, Nathaniel Smith wrote:
> My comment was about the second type. Are your comments about the
> second type? The second type definitely does not produce a flattened
> array:
>
I was talking about the second type in that I never even knew it existed.
My understandin
On Thu, Jun 4, 2015 at 9:04 PM, Nathaniel Smith wrote:
> On Thu, Jun 4, 2015 at 5:57 PM, Nathaniel Smith wrote:
>
> One place where the current behavior is particularly baffling and annoying
> is when you have multiple boolean masks in the same indexing operation. I
> think everyone would expect
Speaking from the matplotlib project, our binaries are substantial due to
our suite of test images. Pypi worked with us on relaxing size constraints.
Also, I think the new cheese shop/warehouse server they are using scales
better, so size is not nearly the same concern as before.
Ben Root
On May 2
Then add in broadcasting behavior...
On Fri, May 22, 2015 at 4:58 PM, Nathaniel Smith wrote:
> On May 22, 2015 1:26 PM, "Benjamin Root" wrote:
> >
> > That assumes that the said recently-confused ever get to the point of
> understanding it...
>
> Well, I
python guy".
So, having a page I can point them to would be extremely valuable.
Ben Root
On Fri, May 22, 2015 at 4:05 PM, Nathaniel Smith wrote:
> On May 22, 2015 11:34 AM, "Benjamin Root" wrote:
> >
> > At some point, someone is going to make a single documentation
At some point, someone is going to make a single documentation page
describing all of this, right? Tables, mathtex, and such? I get woozy
whenever I see this discussion go on.
Ben Root
On Fri, May 22, 2015 at 2:23 PM, Nathaniel Smith wrote:
> On May 22, 2015 11:00 AM, "Alexander Belopolsky" wr
On Sat, May 9, 2015 at 4:03 PM, Nathaniel Smith wrote:
> Not sure what this has to do with Jaime's post about nonzero? There is
> indeed a potential question about what 3-argument where() should do with
> subclasses, but that's effectively a different operation entirely and to
> discuss it we'd n
Absolutely, it should be writable. As for subclassing, that might be messy.
Consider the following:
inds = np.where(data > 5)
In that case, I'd expect a normal, bog-standard ndarray because that is
what you use for indexing (although pandas might have a good argument for
having it return one of t
I have been very happy with the bitarray package. I don't know if it is
faster than bitstring, but it is worth a mention. Just watch out for any
hashing operations on its objects, it doesn't seem to do them right (set(),
dict(), etc...), but comparison operations work just fine.
Ben Root
at 10:19 AM, Jaime Fernández del Río <
jaime.f...@gmail.com> wrote:
> On Tue, Apr 28, 2015 at 7:00 AM, Benjamin Root wrote:
>
>> I have a need to have a numpy array of 17 byte (more specifically, at
>> least 147 bits) values that I would be doing some bit twiddling on. I ha
I have a need to have a numpy array of 17 byte (more specifically, at least
147 bits) values that I would be doing some bit twiddling on. I have found
that doing a dtype of "i17" yields a dtype of int32, which is completely
not what I intended. Doing 'u17' gets a "data type not understood" error. I have
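One workaround sketch: 'i17' is not a valid integer width, but raw bytes are. An extra uint8 axis gives 17 freely twiddle-able bytes (>= 147 bits) per value, and the same memory can be viewed as opaque 17-byte void scalars:

```python
import numpy as np

n = 4
a = np.zeros((n, 17), dtype=np.uint8)  # one row = one 147+-bit value
a[:, -1] |= 0b1                        # set the lowest bit of every value
assert a[0, -1] == 1
# The same buffer viewed as 17-byte void elements:
v = a.view(np.dtype('V17'))
assert v.shape == (n, 1)
```

Bitwise operators (`|`, `&`, `^`, shifts) all work elementwise on the uint8 view.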
"Then you can set about convincing matplotlib and friends to
use it by default"
Just to note, this proposal was originally made over in the matplotlib
project. We sent it over here where its benefits would have wider reach.
Matplotlib's plan is not to change the defaults, but to offload as much as
The distinction that boolean indexing has over the other 2 methods of
indexing is that it can guarantee that it references a position at most
once. Slicing and scalar indexes are also this way, hence why these methods
allow for in-place assignments. I don't see boolean indexing as an
extension of o
Another usecase would be for MaskedArrays. ma.masked_array.min() wouldn't
have to make a copy anymore (there is a github issue about that). It could
just pass its mask into the where= argument of min() and be done with it.
Problem would be generalizing situations where where= effectively results
in
mixed C and python development? I would just wait for the Jupyter folks to
create "IC" and maybe even "IC++"!
On Wed, Apr 1, 2015 at 12:04 PM, Charles R Harris wrote:
> Hi All,
>
> In a recent exchange Mark Wiebe suggested that the lack of support for
> numpy development in Visual Studio might l
other magical object that doesn't work like anything else in python.
Ben Root
On Wed, Mar 25, 2015 at 5:11 PM, Jaime Fernández del Río <
jaime.f...@gmail.com> wrote:
> On Wed, Mar 25, 2015 at 1:45 PM, Benjamin Root wrote:
>
>> I fail to see the wtf.
>>
>
>>
I fail to see the wtf.
flags = a.flags
So, "flags" at this point is just an alias to "a.flags", just like any
other variable in python
"flags.writeable = False" would then be equivalent to "a.flags.writeable =
False". There is nothing numpy-specific here. a.flags is a mutable object.
This is how P
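The aliasing behavior being described, spelled out:

```python
import numpy as np

a = np.arange(3)
flags = a.flags              # an alias, not a snapshot
flags.writeable = False      # same as a.flags.writeable = False
assert not a.flags.writeable
try:
    a[0] = 99
    raised = False
except ValueError:           # "assignment destination is read-only"
    raised = True
assert raised
```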
Release minion? Sounds a lot like an academic minion:
https://twitter.com/academicminions
On Fri, Mar 13, 2015 at 2:51 AM, Ralf Gommers
wrote:
>
>
> On Fri, Mar 13, 2015 at 7:29 AM, Jaime Fernández del Río <
> jaime.f...@gmail.com> wrote:
>
>> On Thu, Mar 12, 2015 at 10:16 PM, Charles R Harris <
I think the question is if scalars should be acceptable for the first
argument, not if it should be for the 2nd and 3rd argument.
If a scalar can be given for the first argument, then the first three make
sense. Although, I have no clue why we would allow that.
Ben Root
On Mar 12, 2015 9:25 PM, "Na
A slightly different way to look at this is one of sharing data. If I am
working on a system with 3.4 and I want to share data with others who may
be using a mix of 2.7 and 3.3 systems, this problem makes npz format much
less attractive.
Ben Root
On Fri, Mar 6, 2015 at 12:51 PM, Charles R Harris
On Fri, Mar 6, 2015 at 7:59 AM, Charles R Harris
wrote:
> Datetime64 seems to use the highest precision
>
> In [12]: result_type(ones(1, dtype='datetime64[D]'), 'datetime64[us]')
> Out[12]: dtype('<M8[us]')
> In [13]: result_type(ones(1, dtype='datetime64[D]'), 'datetime64[Y]')
> Out[13]: dtype('<M8[D]')
Ah, ye
On Thu, Mar 5, 2015 at 12:04 PM, Chris Barker wrote:
> well, the precision of those is 64 bits, yes? so if you asked for less
> than that, you'd still get a dt64. If you asked for 64 bits, you'd get it,
> if you asked for datetime128 -- what would you get???
>
> a 128 bit integer? or an Exceptio