Is numpy considered the backend to scipy, or vice versa? At times
it seems like things are implemented twice, in both numpy and scipy, but in
general should one not implement things in numpy that already exist in scipy?
Thanks,
Robert
I had not looked into sympy that closely, thinking it was mostly a symbolic
package. However, there appear to be functions that convert back to numpy
expressions so that np.ndarray's and such can work. There also appear to be
extensive polynomial classes already defined.
Thanks for pointing me i
ave it at that and not extend the `Generator`
interface.
https://github.com/numpy/numpy/issues/24458#issuecomment-1685022258
--
Robert Kern
___
NumPy-Discussion mailing list -- numpy-discussion@python.org
To unsubscribe send an email to numpy-discussion-le...@python.org
https://mail.python.org/mailman3/lists/numpy-discussion.python.org/
f a plain function like `fix()` and has a (strict, I believe)
superset of functionality. You can ignore `fix()`, more or less. I'm not
sure if it's on the list for deprecation/removal in numpy 2.0, but it
certainly could be.
--
Robert Kern
e the rest of the information in `__array_interface__`, and I think
you should be good to go. I don't think you'll need to infer or represent
the precise path of Python-level operations (slices, transposes, reshapes,
etc.) by which it got to that point.
--
Robert Kern
es, kudos, etc. you may
have.
Enjoy data!
--
Robert McLeod
robbmcl...@gmail.com
robert.mcl...@hitachi-hightech.com
it this
problem. I.e. if you use pickling, you're told to use it only for transient
data with the same versions of libraries on both ends of the pipe, but the
reality is that it's too useful to avoid in creating files with arbitrarily
long lives. Not their fault; they warned us!
--
Robe
integer
sampling. The builtin `random.randrange()` will do arbitrary-sized integers
and is quite reasonable for this task. If you want it to use our
BitGenerators underneath for clean PRNG state management, this is quite
doable with a simple subclass of `random.Random`:
https://github.com/num
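A hedged sketch of such a subclass (the code Robert linked may differ; `getrandbits` is the only hook `randrange()` needs for arbitrary-sized integers):

```python
import random

import numpy as np


class BitGenRandom(random.Random):
    """Sketch: back `random.Random` with a NumPy Generator for PRNG state."""

    def __init__(self, seed=None):
        self._rng = np.random.default_rng(seed)
        super().__init__()

    def random(self):
        return self._rng.random()

    def getrandbits(self, k):
        # Assemble k random bits from 32-bit draws; this is what
        # randrange() uses for arbitrary-sized integers.
        out = 0
        for shift in range(0, k, 32):
            nbits = min(32, k - shift)
            out |= int(self._rng.integers(0, 1 << nbits)) << shift
        return out

    def seed(self, *args, **kwargs):
        # State management lives in the Generator; ignore reseeding.
        pass


# Arbitrary-sized integer sampling, with a NumPy BitGenerator underneath:
r = BitGenRandom(42)
print(r.randrange(10**30))
```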
On Fri, Nov 17, 2023 at 4:15 PM Aaron Meurer wrote:
> On Fri, Nov 17, 2023 at 12:10 PM Robert Kern
> wrote:
> >
> > If the arrays you are drawing indices for are real in-memory arrays for
> present-day 64-bit computers, this should be adequate. If it's a notional
&g
just
doing set-checking anyways, so no loss.
--
Robert Kern
seen = set()
while len(seen) < size:
    dsize = size - len(seen)
    seen.update(map(tuple, rng.integers(0, ashape,
                                        size=(dsize, len(shape)))))
return list(seen)
That optimistic optimization makes this the fastest solution.
--
Robert Kern
subarrays
<https://numpy.org/doc/stable/reference/arrays.dtypes.html#index-7> (e.g.
`np.dtype((np.int32, (2,2)))`), that info here.
3. If there are fields, a tuple of the names of the fields
4. If there are fields, the field descriptor dict.
5. If extended dtype (e.g. fields, strings, void, etc.
ng of
the assertion of correctness (`random()`, as used in that StackOverflow
demonstration, does *not* exercise a lot of the important edge cases in the
floating point format). But if your true concern is that 9% of disk space,
you probably don't want to be using `savetxt()` in any case.
-
rwise difficult to
> work with since it compares equal to 0.0. I would find it surprising
> for copysign to do a numeric calculation on complex numbers. Also,
> your suggested definition would be wrong for 0.0 and -0.0, since
> sign(0) is 0, and this is precis
ually sorted to).
Either way, we probably aren't going to add this as its own function. Both
options are straightforward combinations of existing primitives.
--
Robert Kern
vor of functions. A general way to add some kind of fluency cheaply in
an Array API-agnostic fashion might be helpful to people trying to make
their numpy-only code that uses our current set of methods in this way a
bit easier. But you'll have to make the proposal to them, I thi
t from
an array to a reasonable JSONable encoding (e.g. base64). The time you are
seeing is the time it takes to encode that amount of data, period. That
said, if you want to use a quite inefficient hex encoding, `a.data.hex()`
is somewhat faster than the ba
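A sketch of the base64 round-trip under discussion (the payload layout is illustrative, not from the thread):

```python
import base64
import json

import numpy as np

a = np.arange(1000, dtype=np.float64)

# Encode the raw bytes as base64 for embedding in JSON (~33% overhead).
payload = json.dumps({
    "dtype": str(a.dtype),
    "shape": a.shape,
    "data": base64.b64encode(a.tobytes()).decode("ascii"),
})

# Decode and reconstruct the array.
obj = json.loads(payload)
b = np.frombuffer(base64.b64decode(obj["data"]),
                  dtype=obj["dtype"]).reshape(obj["shape"])

# The hex alternative mentioned above: 100% overhead, but simple.
hex_data = a.data.hex()
```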
ure JSON. Consider looking at
BJData, but it's "JSON-based" and not quite pure JSON.
https://neurojson.org/
--
Robert Kern
On Tue, Mar 26, 2024 at 3:39 PM Luca Bertolotti
wrote:
> Thanks for your reply yes it seems more appropriate a cubic spline but how
> can i get what they call SC
>
I don't think any of us has enough context to know what "SC" is. It's not a
standard term that I
oing the type inference yourself and implement that in the
`PyObjectArray.__array__()` implementation and avoid implementing
`__array_interface__` for that object. Then `np.asarray()` will just
delegate to `PyObjectArray.__array__()`.
--
Robert Kern
64/python_d.exe`.
```batch
set PREFIX=C:/Users/Robert/dev/cpython
set PATH=%PREFIX%;%PREFIX%/PCBuild/amd64;%PREFIX%/Scripts;%PATH%
```
Next we have to install pip
(https://docs.python.org/3/library/ensurepip.html), meson, and cython.
```shell
python_d -m ensurepip
python_d -m pip install meson meson-python cython
```
ainers). If it gets picked up by a bunch of other
array implementations, then you can make a proposal to have them added to
the Array API standard. numpy probably won't add them unless that
happens first.
--
Robert Kern
ely
desired, so it can be rarely requested in an explicit manner.
For `np.heaviside()`, a default value was intentionally left unspecified
because of the multiple conventions that people want. In the face of
ambiguity, refuse the temptation to guess.
I think we're comfortable with these cho
lues to "func". The result
> value at position (1,1,1) in the output array would be y = func(X). The
> same would apply for all entries excluding the padding area (or according
> to some padding policy).
>
scipy.ndimage.generic_filter()
<https://docs.scipy.org/doc/scipy/reference
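A sketch of how `generic_filter` maps onto that description: the window function is applied at every position, and `mode` sets the padding policy.

```python
import numpy as np
from scipy import ndimage

x = np.arange(25, dtype=float).reshape(5, 5)


# func receives the flattened 3x3 neighborhood at each position.
def local_range(window):
    return window.max() - window.min()


out = ndimage.generic_filter(x, local_range, size=3, mode="nearest")
```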
tor
>
>>> rng = np.random.default_rng()
>>> rng.bit_generator.seed_seq
SeedSequence(
entropy=186013007116029215180532390504704448637,
)
In some older versions of numpy, the attribute was semi-private as
_seed_seq, if you're still using one of those.
--
Robert Kern
different encodings and things like NULL-termination (also working
with the legacy dtypes and handling structured arrays easily, etc.).
--
Robert Kern
___
NumPy-Discussion mailing list
NumPy-Discussion@python.org
https://mail.python.org/mailman/listinfo/numpy-discussion
On Thu, Apr 20, 2017 at 12:05 PM, Stephan Hoyer wrote:
>
> On Thu, Apr 20, 2017 at 11:53 AM, Robert Kern
wrote:
>>
>> I don't know of a format off-hand that works with numpy uniform-length
strings and Unicode as well. HDF5 (to my recollection) supports arrays of
NULL-ter
On Thu, Apr 20, 2017 at 12:17 PM, Anne Archibald
wrote:
>
> On Thu, Apr 20, 2017 at 8:55 PM Robert Kern wrote:
>> For example, to my understanding, FITS files more or less follow numpy
assumptions for its string columns (i.e. uniform-length). But it enforces
7-bit-clean ASCII a
On Thu, Apr 20, 2017 at 12:27 PM, Julian Taylor <
jtaylor.deb...@googlemail.com> wrote:
>
> On 20.04.2017 20:53, Robert Kern wrote:
> > On Thu, Apr 20, 2017 at 6:15 AM, Julian Taylor
> > mailto:jtaylor.deb...@googlemail.com>>
> > wrote:
> >
> >
On Thu, Apr 20, 2017 at 12:51 PM, Stephan Hoyer wrote:
>
> On Thu, Apr 20, 2017 at 12:17 PM, Robert Kern
wrote:
>>
>> On Thu, Apr 20, 2017 at 12:05 PM, Stephan Hoyer wrote:
>> >
>> > On Thu, Apr 20, 2017 at 11:53 AM, Robert Kern
wrote:
>> >>
&g
BINTABLE extensions can have columns containing strings, and in that
case the values are NULL terminated, except that if the string fills the
field (i.e. there's no room for a NULL), the NULL will not be written.
Ah, that's what I was think
al case for latin-1. Solve the
HDF5 problem (i.e. fixed-length UTF-8 strings) or leave it be until someone
else is willing to solve that problem. I don't think we're at the
bikeshedding stage yet; we're still disagreeing about fundamental
requirements.
--
Robert Kern
ts encoding to and decoding from a hardcoded latin-1
encoding.
--
Robert Kern
I'm working with specifies another encoding? Am I
supposed to encode all of my Unicode strings in the specified encoding,
then decode as latin-1 to assign into my array? HDF5's UTF-8 arrays are a
really important use case for me.
--
Robert Kern
On Mon, Apr 24, 2017 at 11:56 AM, Aldcroft, Thomas <
aldcr...@head.cfa.harvard.edu> wrote:
>
> On Mon, Apr 24, 2017 at 2:47 PM, Robert Kern
wrote:
>>
>> On Mon, Apr 24, 2017 at 10:51 AM, Aldcroft, Thomas <
aldcr...@head.cfa.harvard.edu> wrote:
>> >
>
00 PM, Chris Barker wrote:
>
> On Mon, Apr 24, 2017 at 11:36 AM, Robert Kern
wrote:
>> Solve the HDF5 problem (i.e. fixed-length UTF-8 strings)
>
> I agree-- binary compatibility with utf-8 is a core use case -- though is
it so bad to go through python's encoding/decoding
On Mon, Apr 24, 2017 at 4:06 PM, Aldcroft, Thomas <
aldcr...@head.cfa.harvard.edu> wrote:
>
> On Mon, Apr 24, 2017 at 4:06 PM, Robert Kern
wrote:
>>
>> I am not unfamiliar with this problem. I still work with files that have
fields that are supposed to be in EBCDIC but
ave decided, as a
developer, to write code that just hardcodes latin-1 for such cases, I have
regretted it. While it's just personal anecdote, I think it's at least
measuring the right thing. :-)
--
Robert Kern
On Mon, Apr 24, 2017 at 5:56 PM, Aldcroft, Thomas <
aldcr...@head.cfa.harvard.edu> wrote:
>
> On Mon, Apr 24, 2017 at 7:11 PM, Robert Kern
wrote:
>>
>> On Mon, Apr 24, 2017 at 4:06 PM, Aldcroft, Thomas <
aldcr...@head.cfa.harvard.edu> wrote:
>> >
>> &
nes.
https://support.hdfgroup.org/HDF5/doc/Advanced/UsingUnicode/index.html
--
Robert Kern
On Mon, Apr 24, 2017 at 7:41 PM, Nathaniel Smith wrote:
>
> On Mon, Apr 24, 2017 at 7:23 PM, Robert Kern
wrote:
> > On Mon, Apr 24, 2017 at 7:07 PM, Nathaniel Smith wrote:
> >
> >> That said, AFAICT what people actually want in most use cases is
support
> >&g
On Tue, Apr 25, 2017 at 9:01 AM, Chris Barker wrote:
> Anyway, I think I made the mistake of mingling possible solutions in with
the use-cases, so I'm not sure if there is any consensus on the use cases
-- which I think we really do need to nail down first -- as Robert has made
clear
as latin-1 does not make for a happy time. Both
encodings also technically derive from ASCII in the lower half, but most of
the actual language is written with the high-bit characters.
--
Robert Kern
the fixed width, whichever comes
first", effectively being NULL-terminated just not requiring the reserved
space.
--
Robert Kern
On Tue, Apr 25, 2017 at 12:30 PM, Charles R Harris <
charlesr.har...@gmail.com> wrote:
>
> On Tue, Apr 25, 2017 at 12:52 PM, Robert Kern
wrote:
>>
>> On Tue, Apr 25, 2017 at 11:18 AM, Charles R Harris <
charlesr.har...@gmail.com> wrote:
>> >
>> >
me involve in-memory
manipulation. Whatever change we make is going to impinge somehow on all of
the use cases. If all we do is add a latin-1 dtype for people to use to
create new in-memory data, then someone is going to use it to read existing
data in unknown or ambiguous encodings.
--
Robert Kern
e the in memory problem, but does have some advantages on disk as well
as making for easy display. We could compress it ourselves after encoding
by truncation.
The major use case that we have for a UTF-8 array is HDF5, and it specifies
the width in bytes, not Unicode characters.
--
Rob
be adding
utf-8 later)? Or a latin-1-specific dtype such that we will have to add a
second utf-8 dtype at a later date?
If you're not going to support arbitrary encodings right off the bat, I'd
actually suggest implementing UTF-8 and ASCII-surrogateescape first as they
seem to knock off more use cases straight away.
--
Robert Kern
On Wed, Apr 26, 2017 at 3:27 AM, Anne Archibald
wrote:
>
> On Wed, Apr 26, 2017 at 7:20 AM Stephan Hoyer wrote:
>>
>> On Tue, Apr 25, 2017 at 9:21 PM Robert Kern
wrote:
>>>
>>> On Tue, Apr 25, 2017 at 6:27 PM, Charles R Harris <
charlesr.har...@gmail.com&
On Wed, Apr 26, 2017 at 10:43 AM, Julian Taylor <
jtaylor.deb...@googlemail.com> wrote:
>
> On 26.04.2017 19:08, Robert Kern wrote:
> > On Wed, Apr 26, 2017 at 2:15 AM, Julian Taylor
> > mailto:jtaylor.deb...@googlemail.com>>
> > wrote:
> >
> >>
. 2 bytes
for UTF-16). It's only if you have to hack around at a higher level with
numpy's S arrays, which return Python byte strings that strip off the
trailing NULL bytes, that you have to worry about such things. Getting a
Python scalar from the
On Wed, Apr 26, 2017 at 4:49 PM, Nathaniel Smith wrote:
>
> On Apr 26, 2017 12:09 PM, "Robert Kern" wrote:
>> It's worthwhile enough that both major HDF5 bindings don't support
Unicode arrays, despite user requests for years. The sticking point seems
to be the d
the eventual support of UTF-8 will be constrained by specification of the
width in terms of characters rather than bytes, which conflicts with the
use cases of UTF-8 that have been brought forth.
https://mail.python.org/pipermail/numpy-discussion/2017-April/076668.html
--
Robert Kern
invert it to get a
boolean mask which is True where they are "far" with respect to the
threshold: `far_mask = ~close_mask`. Then you can use `i_idx, j_idx =
np.nonzero(far_mask)` to get arrays of the `i` and `j` indices where the
values are far. For example:
for i, j in zip(i_idx, j_idx):
et the following output:
$ python isclose.py
0, 1, 20, 30, 1.0, Fail
1, 2, 60, 160, 1.0, Fail
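A self-contained sketch of that mask-and-`nonzero` pattern (values here are illustrative, not the thread's actual script):

```python
import numpy as np

a = np.array([20.0, 60.0, 5.0])
b = np.array([30.0, 160.0, 5.0])

# Invert the "close" mask to find where values are far apart.
close_mask = np.isclose(a, b)
far_mask = ~close_mask
(far_idx,) = np.nonzero(far_mask)

for i in far_idx:
    print(f"{i}: {a[i]} vs {b[i]}: Fail")
```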
--
Robert Kern
homogenized coefficients
- support for saving custom structured data to HDF5 files
- new tutorial on preparing meshes using FreeCAD/OpenSCAD and Gmsh
For full release notes see http://docs.sfepy.org/doc/release_notes.html#id1
(rather long and technical).
Cheers,
Robert Cimrman
---
Contributors
uld have the positional argument,
> 'other' equal to a[0][0].
>
> What am I missing?
> a = np.array([[ShortestNull, ShortestPath(12)], [ShortestPath(12), ShortestNull()]], dtype=object)
You didn't instantiate ShortestNull but passed the class object instead.
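A minimal sketch of the fix (the class bodies are stand-ins for the poster's actual classes):

```python
import numpy as np


class ShortestNull:  # stand-in for the poster's class
    pass


class ShortestPath:  # stand-in for the poster's class
    def __init__(self, n):
        self.n = n


# Note the parentheses: ShortestNull() stores an instance; a bare
# ShortestNull would store the class object itself in the array.
a = np.array([[ShortestNull(), ShortestPath(12)],
              [ShortestPath(12), ShortestNull()]], dtype=object)
```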
--
Robert
On Tue, Jun 27, 2017 at 3:01 PM, Benjamin Root wrote:
>
> Forgive my ignorance, but what is "Z/2"?
https://groupprops.subwiki.org/wiki/Cyclic_group:Z2
https://en.wikipedia.org/wiki/Cyclic_group
--
Robert Kern
d be useful.
It's not that hard: wrap the new `set_printoptions(pad=True)` in a `try:`
block to catch the error under old versions.
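That pattern, as a sketch. Note that `pad=True` is the hypothetical new option being discussed in this thread, not a real numpy keyword; on versions without it, `set_printoptions` raises `TypeError`.

```python
import numpy as np

# `pad` is the proposed option from this thread, not an existing
# numpy argument; older (and current) versions raise TypeError.
try:
    np.set_printoptions(pad=True)
    has_pad = True
except TypeError:
    has_pad = False
```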
--
Robert Kern
contribution to the
whole ecosystem.
I'd recommend just making an independent project on Github and posting it
as its own project to PyPI when you think it's ready. We'll link to it in
our documentation. I don't think that it ought to be pa
to get it again. It would mess up the
iteration if you did and cause you to skip lines.
By the way, it is useful to help us help you if you copy-paste the exact
code that you are running as well as the full traceback instead of
paraphrasing the error message.
--
Robert Kern
brary/io.html
https://docs.scipy.org/doc/numpy/reference/generated/numpy.frombuffer.html
Robert
On Wed, Jul 5, 2017 at 3:21 PM, Robert Kern wrote:
> On Wed, Jul 5, 2017 at 5:41 AM, wrote:
> >
> > Dear all
> >
> > I’m sorry if my question is too basic (not fully in relation to Nump
n
handle multiple data values on one line (not especially well-tested, but it
ought to work), but it assumes that the number of sub-blocks, index of the
sub-block, and sub-block size are each on their own line. The code gets a
little more complicated if that's not the case.
--
Robert Kern
from __
r to just write the code than to try to explain in
prose what to do. :-)
--
Robert Kern
urning a
Boolean array for each element, which cannot be coerced to a single Boolean.
>
> The expression
>
> >>> numpy.vectorize(operator.is_)(a,None)
>
> gives the desired result, but feels a bit clumsy.
Wrap the clumsiness up in a docume
to those who have to read the code (e.g. you in 6 months). :-)
--
Robert Kern
ouldn't. The coincidental similarity in functional form (domain
and normalizing constants notwithstanding) obscures the very different
mechanisms each represent.
The ambiguous name of the method `power` instead of `power_function` is my
fault. You have my apologies.
--
Robert Kern
63],
[ 72, 81, 90, 99],
[108, 117, 126, 135]])
[~]
|32> c[1]
array([[ 1, 10, 19, 28],
[ 37, 46, 55, 64],
[ 73, 82, 91, 100],
[109, 118, 127, 136]])
--
Robert Kern
uplicates or intermediate arrays, but that's the extent of memory
> optimization you can do in numpy itself.
>
NumPy does have its own memory map variant on ndarray:
https://docs.scipy.org/doc/numpy/reference/generated/numpy.memmap.html
--
Robert McLeod, Ph.D.
index. It's being assigned into a float
array.
Rather, it's the slicing inside of `trace_block()` when it's being given
arrays as inputs for `x` and `y`. numpy simply doesn't support that because
in general the result wouldn't have a uniform shape.
--
Robert Kern
).
Cheers,
Robert Cimrman
---
Contributors to this release in alphabetical order:
Robert Cimrman
Lubos Kejzlar
Vladimir Lukes
Matyas Novak
ly the most inefficient way of doing
it. What would be a decent rewrite?
Index with a boolean mask.
mask = (tmp_px > 2)
px = tmp_px[mask]
py = tmp_py[mask]
# ... etc.
--
Robert Kern
avior with np.concatenate or
np.stack?
Quite frankly, I ignore the documentation, as I think its recommendation is
wrong in these cases. Vive la vstack!
--
Robert Kern
ferences:
https://github.com/numpy/numpy/pull/7253
--
Robert Kern
On Thu, Nov 9, 2017 at 2:49 PM, Allan Haldane
wrote:
>
> On 11/09/2017 05:39 PM, Robert Kern wrote:
> > On Thu, Nov 9, 2017 at 1:58 PM, Mark Bakker wrote:
> >
> >> Can anybody explain why vstack is going the way of the dodo?
> >> Why are stack / concatenat
leases as
required. This will still cause regressions but it's a matter of modifying
`requirements.txt` in downstream Python 2.7 packages and not much else.
E.g. in `requirements.txt`:
numpy; python_version > "3.0"
numpylts; python_version < "3.0"
In both cases you still call `import numpy` in the code.
Robert
--
Robert McLeod, Ph.D.
robbmcl...@gmail.com
robbmcl...@protonmail.com
www.entropyreduction.al
;2` and scipy saying `numpylts` and now the packages are
> incompatible ?
The trouble is PyPI doesn't allow multiple branches. So if you upload
NumPy 2.0 wheels, then you cannot turn around and upload 1.18.X bug-fix
patches. At least, this is my understanding of PyPI.
--
Robert
ion for users. The regular
docs should be the authority. To the extent that the NEPs happen to provide
useful documentation for the new feature (and represent a significant
amount of sunk effort to draft that documentation), we should feel free to
copy-paste that into the regular d
probably use `np.isscalar(x)` for the test
and `x = np.atleast_1d(x)` for the coercion for readability, but otherwise,
that's it.
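That pattern, as a sketch (the computation is a placeholder):

```python
import numpy as np


def f(x):
    scalar_input = np.isscalar(x)
    x = np.atleast_1d(x)
    ret = 2.0 * x  # The magic happens here
    # Return a scalar if a scalar came in, else the array.
    return ret[0] if scalar_input else ret
```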
--
Robert Kern
On Wed, Dec 13, 2017 at 5:00 AM, Kirill Balunov
wrote:
>
> One minor thing: instead of 'ret' there should be 'x'.
No, `x` is the input. The code that actually does the computation (elided
here by the `# The magic happens here` comment) would have assigned
exing, it
works as required for other arrays too.
--
Robert Kern
- support for user-defined contexts in all solvers and preconditioners
- new example: dispersion analysis of heterogeneous periodic materials
For full release notes see http://docs.sfepy.org/doc/release_notes.html#id1
(rather long and technical).
Cheers,
Robert Cimrman
---
Contributors to this
C and Python APIs are flexible
enough to do what the GPU libraries need. This ties into the work that's
being done to make ndarray subclasses better and formalizing the notions of
an "array-like" interface that things like pandas Series, etc. can
implement and play well with the re
he-warnings-filter
It also explains the previous results. The same warning would have been
issued from the same place in each of the variations you tried. Since the
warnings mechanism had already seen that RuntimeWarning with the same
message from the same code location, they were not printed.
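The once-per-location behavior can be seen with a small sketch:

```python
import warnings


def noisy():
    # Same message, category, and code location on every call.
    warnings.warn("invalid value encountered", RuntimeWarning)


with warnings.catch_warnings(record=True) as caught:
    warnings.simplefilter("default")  # show once per unique location
    noisy()
    noisy()

# Only the first warning is recorded; the repeat was filtered.
print(len(caught))
```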
--
R
out
multitaper methods that may be useful to you:
http://nipy.org/nitime/examples/multi_taper_spectral_estimation.html
--
Robert Kern
ances.
In any case, we have a lot of different options to discuss if we decide to
relax our stream-compatibility policy. At the moment, I'm not pushing for
any particular changes to the code, just the policy in order to enable a
more wide-ranging field of options that we h
case. Since that's the only
real difference between rounding routines, we often recognize that from the
context and speak in a somewhat elliptical way (i.e. just "round-to-even"
instead of "round to the nearest integer and rounding numbers ending in .5
to the nearest
eam-compatible np.random
version and maintain it in future for those usecases, and add a new
"high-performance" version with the new features.
That is one of the alternatives I raised.
--
Robert Kern
On Sat, Jan 20, 2018 at 2:57 AM, Stephan Hoyer wrote:
>
> On Fri, Jan 19, 2018 at 6:57 AM Robert Kern wrote:
>>
>> As an alternative, we may also want to leave `np.random.RandomState`
entirely fixed in place as deprecated legacy code that is never updated.
This would allow
On Sat, Jan 20, 2018 at 7:34 AM, Robert Kern wrote:
>
> On Sat, Jan 20, 2018 at 2:27 AM, wrote:
>
> > I'm not sure I fully understand
> > Is the proposal to drop stream-backward compatibility completely for
the future or just a one time change?
>
> For all future
want to do things like caching the next Box-Muller variate and not
force that onto the core PRNG state like I currently do. Though I'd rather
just drop Box-Muller, and that's not a common pattern outside of
Box-Muller. But it's a possibility.
--
Robert Kern
On Tue, Jan 30, 2018 at 5:39 AM, Pierre de Buyl <
pierre.deb...@chem.kuleuven.be> wrote:
>
> Hello,
>
> On Sat, Jan 27, 2018 at 09:28:54AM +0900, Robert Kern wrote:
> > On Sat, Jan 27, 2018 at 1:14 AM, Kevin Sheppard
> > wrote:
> > >
> > > In term
gt;
> But it only prints the first number correctly, i.e., dims[0]. The second
> number is always 0.
>
The correct typecode would be NPY_INTP.
--
Robert Kern
or
every bugfix release).
> Directions will be greatly appreciated.
> I suspect that this info is all gathered somewhere
> I did not find.
Sorry this isn't gathered anywhere, but truly, the answer is "there is not
much to it". You're doing everything right. :-)
--
Rober
On Feb 22, 2018 16:30, "Kevin Sheppard" wrote:
> What is the syntax to construct an initialized generator?
> RandomGenerator(Xoroshiro128(my_seed))?
>
>
Not 100% certain on this. There was talk in the earlier thread that seed
should be killed,
No, just the np.random.seed() function alias for
/release_notes.html#id1
(rather long and technical).
Cheers,
Robert Cimrman
---
Contributors to this release in alphabetical order:
Robert Cimrman
Jan Heczko
Jan Kopacka
Vladimir Lukes
hat you can test that the two initialization routines
are equivalent. But if you're going to do that, you might as well take my
recommended approach.
--
Robert Kern
emulate the same behaviour in my Scala code by sampling
from a
> Gaussian distribution with mean = 0 and std dev = 1.
`np.random.randn(n_h, n_x) * 0.01` gives a Gaussian distribution of mean=0
and stdev=0.01
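Numerically, for instance, with the newer `Generator` API (equivalent scaling):

```python
import numpy as np

# Analogous to np.random.randn(n_h, n_x) * 0.01.
rng = np.random.default_rng(0)
w = rng.standard_normal((100, 1000)) * 0.01

print(w.mean(), w.std())  # mean ~ 0, std ~ 0.01
```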
--
Robert Kern
On Thu, Mar 8, 2018 at 12:44 PM, Marko Asplund
wrote:
>
> On Wed, 7 Mar 2018 13:14:36, Robert Kern wrote:
>
> > > With NumPy I'm simply using the following random initialization code:
> > >
> > > np.random.randn(n_h, n_x) * 0.01
> > >
> > >
y on my to-do list for quite a while now.
+1 for scipy.special.
--
Robert Kern