[Numpy-discussion] Re: Improved 2DFFT Approach

2024-03-05 Thread via NumPy-Discussion
Good evening, Ralf!

My apologies; for some reason I didn't get the notification of your response to 
this issue, so I couldn't answer in a more timely fashion.

We'll address all the points you mentioned as soon as possible (we also have 
some university and job commitments), and I really appreciate that someone as 
experienced as you found the time to review our contribution.

I'll contact you later on here on all the mentioned points with the results.

Thanks for your time,

Regards,

Aleksandr


[Numpy-discussion] Re: Improved 2DFFT Approach

2024-03-11 Thread via NumPy-Discussion
Good afternoon, Ralf.

We have done some of the measurements you recommended. For your convenience, we 
have created a separate folder of notebooks in which we measured the memory 
usage and performance of our implementation against SciPy. You can also run the 
tests on your own hardware and measure memory separately. I've left the link below.

https://github.com/2D-FFT-Project/2d-fft/tree/main/notebooks

We measured performance for four variants: with and without multithreading, and 
with and without data type conversion. According to the test results, our 
algorithm has its greatest lead with multithreading and without data type 
conversion (75%), and its smallest without multithreading and with data type 
conversion (14%). In terms of memory usage we beat NumPy and SciPy by a factor 
of two in all cases, which I think is a solid achievement at this point.
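As a rough sketch of the kind of timing comparison in those notebooks (the 
`fft2d` import is only a placeholder for our package; the exact code lives in 
the notebooks linked above):

import time

import numpy as np
from scipy import fft as sp_fft
# import fft2d  # hypothetical placeholder for our package; see the notebooks for the real import

rng = np.random.default_rng(0)
x = rng.standard_normal((2048, 2048)) + 1j * rng.standard_normal((2048, 2048))

def bench(fn, *args, repeats=5):
    # Best-of-N wall-clock timing of fn(*args).
    times = []
    for _ in range(repeats):
        t0 = time.perf_counter()
        fn(*args)
        times.append(time.perf_counter() - t0)
    return min(times)

print("scipy.fft.fft2, 1 worker :", bench(sp_fft.fft2, x))
print("scipy.fft.fft2, 8 workers:", bench(lambda a: sp_fft.fft2(a, workers=8), x))
# print("our fft2d                :", bench(fft2d.fft2d, x))  # hypothetical call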

Overall, I can say that our mathematical approach still has a serious advantage; 
nevertheless, we always lose to SciPy in the inverse-transform case. We haven't 
figured out the reason yet and are discussing it at the moment, but we will fix it.

It is important to note that at this stage our algorithm shows the above 
performance only on matrices whose dimensions are powers of two. This is a 
property of the mathematical butterfly formula. We are investigating ways to 
remove this limitation; we have already assessed the effect of element 
imputation and column dropping, but the results are not accurate enough. 
Otherwise, we can suggest using our version only for matrices of the mentioned 
sizes, which would still be an upgrade for NumPy.

At this point I can say that we are willing to keep working on and improving the 
existing version within our skills, knowledge and available resources. We still 
hope to add our implementation, or at least the underlying idea, to the existing 
NumPy package, as in terms of memory usage and efficiency it could in theory 
give a serious advantage to other projects built on NumPy.

Thank you for your time, we will continue our work and look forward to your 
review.


[Numpy-discussion] Re: Improved 2DFFT Approach

2024-03-12 Thread via NumPy-Discussion
I appreciate your correction; indeed you are right, it was my fault.

I have changed everything and I believe it is all in the correct order now.

Our current best results relative to SciPy are:

FFT: -46% (no multithreading, no type conversions); the worst case is +0.37% 
(multithreaded, no type conversions).

IFFT: -32% (no multithreading, no type conversions); +4.71% (multithreaded, no 
type conversions).

https://github.com/2D-FFT-Project/2d-fft/blob/main/notebooks/comparisons.ipynb

Thanks to your corrections we were able to achieve even better results. We will 
take a closer look at the (M+, TC-) case, as it seems to be where our algorithm 
performs worst.

All in all, we'll be waiting for your further directions on what we could do.

Please take a moment to review it again, and thanks for your time. Have a lovely 
day.


[Numpy-discussion] Assessment of the difficulty in porting CPU architecture for numpy

2023-11-16 Thread xuanbao via NumPy-Discussion
Hello everyone! I am working on a tool to assess the complexity of porting a 
project to a new CPU architecture. It primarily focuses on porting to the RISC-V 
architecture, though the tool can give an average estimate of the porting effort 
for various architectures. My focus is on the overall workload and difficulty of 
porting, past and future, even if a project has already been ported. As part of 
my dataset, I have collected the **numpy** project. **I would like to gather 
community opinions to support my assessment. I appreciate your help and 
response!** Based on the scanning tools, the porting complexity is rated as 
moderate leaning towards simple, with a moderate amount of architecture-specific 
code in the project. Is this assessment accurate? Do you have any opinions on 
the personnel allocation and time required? I look forward to your help and 
response.


[Numpy-discussion] Need help in numpy building from source on windows.

2024-02-26 Thread rajoraganesh--- via NumPy-Discussion
A detailed description of the problem can be found at:
https://stackoverflow.com/questions/78059816/issues-in-buildingnumpy-from-source-on-windows


[Numpy-discussion] Improved 2DFFT Approach

2024-02-27 Thread camrymanjr--- via NumPy-Discussion
Good day! 

My name is Alexander Levin.

My colleague and I did a project on optimising the two-dimensional Fourier 
transform algorithm six months ago. We took NumPy's fft2 implementation as our 
quality benchmark.

In the course of our research we found that, mathematically, the method uses a 
far from optimal algorithm. As you may know, the operation applies a 
one-dimensional transformation first along the rows and then along the columns. 
After spending some time researching mathematical papers on this topic, we found 
and implemented a new approach, the so-called Cooley-Tukey butterfly, optimised 
by Russian mathematicians in 2016.
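For reference, the row-column decomposition we are referring to can be 
reproduced in a few lines of NumPy (a sketch of the standard approach, not of 
our package):

import numpy as np

rng = np.random.default_rng(0)
x = rng.standard_normal((512, 2048)) + 1j * rng.standard_normal((512, 2048))

# Row-column decomposition: a 1-D FFT along the rows, then along the columns.
row_col = np.fft.fft(np.fft.fft(x, axis=1), axis=0)

# It agrees with the built-in 2-D transform up to floating-point error.
assert np.allclose(row_col, np.fft.fft2(x))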

The result is a completely new approach in which we apply a genuinely 
two-dimensional operation directly and save a serious proportion of the time. 
We wrote a C++ package for Python, using Cython as the wrapper, along with an 
article on Medium describing the process. In tests on matrices ranging in size 
from 2048x512 to 8192x8192, our algorithm outperformed the NumPy transformation 
by an average of 50% in time.

After discussing this with the colleague with whom I did the above development, 
we decided that we would like to share our results with you. Your project has 
been making a huge contribution to the IT community for almost 20 years on a pro 
bono basis. We share your philosophy, and so we want to make the CS world a 
better place by offering you our code to optimise an existing operation in your 
package.

We would like to offer you our development version. It will need some minor 
improvements, which we'd love to finish collaboratively with you, and we'd like 
to hear your thoughts on what we have discovered and done so far. We'd be 
honored to help you and become NumPy contributors.

I trust that our vision will resonate with you. I invite you to read our short 
article about the process, linked below, together with our fully functioning 
package, and to share your views on our proposed contribution to NumPy. We are 
willing to edit the code to fit your standards, as we believe the best use of 
our work is to contribute to the development of technology.

Thank you for your time. We look forward to your response and opinion. 

Medium Article about the process:
https://medium.com/p/7963c3b2f3c9
GitHub Source code Repository:
https://github.com/2D-FFT-Project/2d-fft


[Numpy-discussion] Re: JSON format for multi-dimensional data

2024-02-28 Thread fangqq--- via NumPy-Discussion
Aside from the previously mentioned ticket 
https://github.com/numpy/numpy/issues/12481, I also made a similar proposal, 
posted in 2021:

https://github.com/numpy/numpy/issues/20461
https://mail.python.org/archives/list/numpy-discussion@python.org/message/EVQW2PO64464JEN3RQXSCDP32RQDIQFW/


Lightweight JSON annotations for various data structures (trees, tables, 
graphs), especially ND-arrays, are defined in the JData spec:

https://github.com/NeuroJSON/jdata/blob/master/JData_specification.md#data-annotation-keywords


JSON/binary JSON annotation encoders/decoders have been implemented for Python 
(https://pypi.org/project/jdata/), MATLAB/Octave 
(https://github.com/fangq/jsonlab), JavaScript/NodeJS 
(https://www.npmjs.com/package/jda), as well as C++ (JSON for Modern C++, 
https://json.nlohmann.me/features/binary_formats/bjdata/)


I have been using this annotation extensively in JSON/binary JSON in my 
neuroimaging data portal, https://neurojson.io/; for example, for 3D data:

https://neurojson.org/db/fieldtrip(atlas)/FieldTrip--Brainnetome--Brainnetome
https://neurojson.org/db/fieldtrip(atlas)/FieldTrip--Brainnetome--Brainnetome#preview=$.tissue

for mesh data

https://neurojson.org/db/brainmeshlibrary/BrainWeb--Subject04--gm--surf
https://neurojson.org/db/brainmeshlibrary/BrainWeb--Subject04--gm--surf#preview=$

The ND-array annotation supports binary data with lossless compression, which I 
have also implemented.

In a renewed thread posted in 2022, I also tested the blosc2 
(https://www.blosc.org/) compression codecs and got excellent read/write speeds:

https://mail.python.org/archives/list/numpy-discussion@python.org/thread/JIT4AIVEYJLSSHTSA7GOUBIVQLT3WPRU/#U33R5GL34OTL7EZX2VRQGOO4KUWED56M
https://mail.python.org/archives/list/numpy-discussion@python.org/message/TUO7CKTQL2GGH2MIWSBH6YCO3GX4AV2O/

The blosc2 compression codecs are supported in my Python and MATLAB/C parsers.


Qianqian


[Numpy-discussion] Re: (no subject)

2024-04-23 Thread steven.kakaire--- via NumPy-Discussion

On 2024-04-19 07:20, Shreya Nalawade wrote:

Dear Sir/Madam,
I want to contribute to GSoD for NumPy. Until when are proposals accepted, and 
what should the format of the proposals be? Kindly guide me through the 
organisation's policies and contribution guidelines.

I would also like information about GSoD for NumPy.
Thanks


[Numpy-discussion] np.where and ZeroDivisionError: float division by zero

2024-04-25 Thread 840362492--- via NumPy-Discussion

In my code, I use the following calculation for a column in the dataframe:

np.where(df_score['number'] != 0, 100 - ((100 * df_score[rank_column] - 50) / df_score['number']), None)

I have used df_score['number'] != 0, but the code still fails with 
ZeroDivisionError: float division by zero, even after changing 
df_score['number'] != 0 to df_score['number'] > 0. Why?
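For reference, a pattern that computes the expression only where the denominator 
is nonzero (np.where evaluates all of its arguments before selecting, which may 
be the cause) would look like the sketch below, though I do not know whether it 
is the idiomatic fix:

import numpy as np
import pandas as pd

df_score = pd.DataFrame({"number": [12.0, 12.0, 0.0, 0.0, 12.0],
                         "rank":   [ 1.0,  2.0, 3.0, 4.0,  5.0]})
rank_column = "rank"

# The division is restricted to rows where 'number' is nonzero, so it is
# never evaluated for the zero rows.
result = pd.Series(None, index=df_score.index, dtype=object)
mask = df_score["number"] != 0
result[mask] = 100 - (100 * df_score.loc[mask, rank_column] - 50) / df_score.loc[mask, "number"]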

pandas version: 1.1.5, numpy version: 1.24.4

Here are my numbers: 12.0 12.0 12.0 12.0 12.0 0.0 
0.0 0.0 0.0 0.0 12.0 12.0 12.0

I want to know why it went wrong and what should be done to fix it. Thank you 
for your help.


[Numpy-discussion] .npy MIME type?

2024-05-17 Thread None via NumPy-Discussion
Hi,

As I wanted to have a textual preview of .npy files, I noticed that there is no 
support (partial support in magic, none in xdg-mime upstream), and simply no 
documented MIME type (official or not), for the .npy file format.

Would it be reasonable to consider application/x-numpy (which is the placeholder 
I use for now in my code [1], so I'm only suggesting it out of laziness and lack 
of imagination), and perhaps even registration of application/numpy at
 https://www.iana.org/assignments/media-types/media-types.xhtml
?

Best regards,

--
Jérôme

[1] Links to code:
- xdg-mime XML info at 
https://gitlab.com/cJ/xdg-shared-mime-info-packages/-/blob/main/numpy.xml ; 
could be upstreamed
- text preview at 
https://gitlab.com/exmakhina/lessopen/-/blob/main/render/xm_lessopen_render_numpy.py


[Numpy-discussion] Windows 11 arm64 wheel

2024-07-11 Thread slobodan.miletic--- via NumPy-Discussion
Hi,

I am writing on behalf of my team at the company Endava. Our task is to work 
with the open-source community and set up multiple applications for use on 
Windows 11 arm64 machines.
One of those tasks is to understand the problems and, if possible, help the 
NumPy team set up a Windows 11 arm64 compatible wheel on PyPI.
I saw that there were some earlier discussions about this subject, and as I 
understood it, the problem was that at the time this configuration was not very 
common and there were no appropriate CI runners to build the wheel.
Today more devices with this configuration are available on the market and more 
have been announced, so I wanted to check whether there are plans for creating 
this wheel, and whether we can somehow help with that work.

Regards,
Slobodan


[Numpy-discussion] Re: Windows 11 arm64 wheel

2024-07-12 Thread slobodan.miletic--- via NumPy-Discussion
We will start looking at the bug and the CI job.

Thanks, 
Slobodan


[Numpy-discussion] Re: Windows 11 arm64 wheel

2024-08-12 Thread slobodan.miletic--- via NumPy-Discussion
As the bug fix is merged, we are starting to investigate the CI job.
I have a few questions about this:
1) Are there additional instructions in the documentation for creating and 
running the NumPy CI jobs and for cross-compilation?
2) Do we need an arm64 scipy-openblas release to build the NumPy wheel?


[Numpy-discussion] Re: Windows 11 arm64 wheel

2024-09-03 Thread slobodan.miletic--- via NumPy-Discussion
Hi,
I finished the first step and created a workflow for cross-compiling the wheel 
without OpenBLAS. The PR can be found at:
https://github.com/numpy/numpy/pull/27330

The output wheel installs successfully and has been tested on the arm machine.

After this gets merged I will continue by adding the steps for cross-compiling 
OpenBLAS and linking NumPy against it.

Thanks,
Slobodan


[Numpy-discussion] Looking for insights and Authors for some ideas

2023-03-02 Thread kritis--- via NumPy-Discussion
Hi All,
Packt, an established publishing company (https://www.packtpub.com), is looking 
for authors to develop two books with the following working titles: “Clean 
Architecture in Python” and “Domain-Driven Design in Python”. If this is 
something you would be interested in working on, please email me 
(kri...@packt.com) to schedule a call.

Looking forward to your reply.
Thank you :)


[Numpy-discussion] update / revision suggest around f2py

2023-04-13 Thread nbehrnd--- via NumPy-Discussion
Dear maintainers of the documentation,

NumPy includes f2py to access the functionality (and performance) of Fortran 
modules, which

https://numpy.org/doc/stable/f2py/f2py.getting-started.html#the-smart-way

aims to present.  However, I was not able to fully replicate any of the three 
approaches (the quick way, the smart way, the quick and smart way) on Linux 
Debian 12/bookworm with gcc and gfortran (12.2.0), Python (3.11.2), and f2py 
(1.24.2) as provided by the repositories of Debian's testing branch.

I would like to suggest an update and a revision of the page in question.

First, the Fortran snippets adhere to the fixed-format standard of FORTRAN 77, 
which became "old" with the major revision of Fortran 90 (around 1991/1992) and 
the introduction of the free format.  With the current standard being Fortran 
2018 (and Fortran 2023 anticipated for July this year), an update would be very 
welcome; one can still state, as now, that multiple Fortran standards are 
supported.  Second, it would be nice if the instructions could be cross-checked 
to confirm that they still provide the functionality claimed.  If helpful, I can 
send the captured terminal input/output as ASCII text to one of the maintainers 
in a direct message.

Regards


[Numpy-discussion] Re: update / revision suggest around f2py

2023-04-14 Thread nbehrnd--- via NumPy-Discussion
Thank you, the issue is filed (https://github.com/numpy/numpy/issues/23592) as 
relevant to numpy's documentation.


[Numpy-discussion] Installing numpy on an "unsupported" platform

2023-05-23 Thread asyropoulos--- via NumPy-Discussion
Hello,

I am using Python 3.10.0 on OpenIndiana, and yesterday I tried to install numpy 
on my system. The command

/opt/gnu/python/bin/python3.10  -m pip install --user numpy

failed and printed a long error report. The errors are of the form

from numpy/core/src/umath/string_ufuncs.cpp:1:
/usr/gcc/10/include/c++/10.4.0/cmath:1134:11: error: 'llrint' has not been declared in '::'
 1134 |   using ::llrint;
      |           ^~
First I noticed the warning '"__STDC_VERSION__" redefined', and after some 
experimentation I realized that the extra option -D__STDC_VERSION__=0 is 
actually the one causing the problem. So my question is: how can I disable this 
option?

Regards,

Apostolos


[Numpy-discussion] Re: mixed mode arithmetic

2023-07-09 Thread glaserj--- via NumPy-Discussion
Neal Becker wrote:
> I've been browsing the numpy source.  I'm wondering about mixed-mode
> arithmetic on arrays.  I believe the way numpy handles this is that it
> never does mixed arithmetic, but instead converts arrays to a common type. 
> Arguably, that might be efficient for a mix of say, double and float. 
> Maybe not.
> But for a mix of complex and a scalar type (say, CDouble * Double), it's
> clearly suboptimal in efficiency.
> So, do I understand this correctly?  If so, is that something we should
> improve?

Reviving this old thread: I note that numpy.dot supports writing into a 
pre-allocated output array for performance reasons, like this

c = np.empty_like(a, order='C')
np.dot(a, b, out=c)

However, the data type of the pre-allocated c array must match the result dtype 
of a times b. Now, with some accelerator hardware (e.g. tensor cores or 
matrix-multiplication engines in GPUs), mixed-precision arithmetic with relaxed 
floating-point precision (i.e. not necessarily IEEE 754 conformant) but faster 
performance is possible, and could be supported in downstream libraries such as 
CuPy.

Case in point: a mixed-precision calculation may take half-precision inputs but 
accumulate in, and return, full-precision outputs. Due to the above-mentioned 
type consistency, the outputs would be unnecessarily demoted (truncated) to half 
precision again. The current NumPy API does not expose mixed-precision concepts. 
Therefore, it would be nice if it were possible to build in support for 
hardware-accelerated linear algebra, even if that may not be available on the 
standard (CPU) platforms NumPy is typically compiled for.
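A minimal illustration of the constraint I mean (assuming half-precision inputs; 
on the NumPy versions I have tried, the out= buffer must carry the exact result 
dtype):

import numpy as np

a = np.ones((64, 64), dtype=np.float16)
b = np.ones((64, 64), dtype=np.float16)

# The result dtype of float16 @ float16 is float16, and out= must match it.
c16 = np.empty((64, 64), dtype=np.float16)
np.dot(a, b, out=c16)  # fine

# Requesting a float32 output buffer is rejected on the versions I've tried,
# so a higher-precision accumulation cannot be expressed through this API.
c32 = np.empty((64, 64), dtype=np.float32)
try:
    np.dot(a, b, out=c32)
except ValueError as exc:
    print("rejected:", exc)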

I'd be happy to flesh out some API concepts, but would be curious to first get 
an opinion from others. It may be necessary to weigh the complexity of adding 
such support explicitly against providing minimal hooks for add-on libraries in 
the style of JMP (for jax.numpy), or AMP (for torch).

Jens


[Numpy-discussion] Re: Endorsing SPECs 1, 6, 7, and 8

2024-10-08 Thread Nathan via NumPy-Discussion
Thanks for clarifying! In that case I think endorsing SPEC 7 makes sense.

On Tue, Oct 8, 2024 at 3:08 PM Robert Kern  wrote:

> On Tue, Oct 8, 2024 at 8:36 AM Nathan via NumPy-Discussion <
> numpy-discussion@python.org> wrote:
>
>>
>> Since the legacy RNG interface cannot be deprecated and we encourage
>> downstream to use it in tests according to the text of NEP 19, I'm not sure
>> about the text in SPEC 7 that talks about deprecating using legacy RNGs. Or
>> are you saying that we have now reached the point where we can update NEP
>> 19 to encourage moving away from the legacy interface?
>>
>
>  We have already always encouraged people to move away from the legacy
> interface in their APIs. SPEC 7 recommends a principled way for downstream
> projects to implement that move.
>
> NEP 19 acknowledged that sometimes one might still have a use case for
> creating a legacy RandomState object and calling it in their tests to
> generate test data (but not otherwise pass that RandomState object to the
> code under test), but that's not what SPEC 7 addresses. NEP 19 doesn't
> really actively recommend the use of RandomState for this purpose, just
> acknowledges that it's a valid use case that numpy will continue to support
> even while we push for the exclusive use of Generator inside of
> library/program code. NEP 19 doesn't need an update for us to endorse SPEC
> 7 (whether it needs one, separately, to clarify its intent is another
> question).
>
> --
> Robert Kern
>


[Numpy-discussion] Re: Endorsing SPECs 1, 6, 7, and 8

2024-10-08 Thread Nathan via NumPy-Discussion
Regarding thread safety - that's not a problem. At least for Python 3.13,
the GIL is temporarily re-enabled during imports. That won't necessarily be
true in the future, but separately CPython also uses per-module locks on
import, so there shouldn't be any issues with threads simultaneously
importing submodules.

It looks like we already implement lazy-loading for e.g. linalg, fft,
random, and other submodules. Does that lazy-loading mechanism conform to
the SPEC? If not, should it?
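For reference, the pattern I have in mind is a PEP 562 module-level
`__getattr__` (a minimal sketch, not necessarily exactly what NumPy or the
SPEC's lazy_loader does):

# package/__init__.py
import importlib

_SUBMODULES = {"linalg", "fft", "random"}

def __getattr__(name):
    # Import heavy submodules only on first attribute access.
    if name in _SUBMODULES:
        return importlib.import_module(f"{__name__}.{name}")
    raise AttributeError(f"module {__name__!r} has no attribute {name!r}")

def __dir__():
    return sorted(set(globals()) | _SUBMODULES)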

The keys to the castle SPEC makes sense to me, I'm fine with endorsing it.
I believe that all of NumPy's online accounts are already spread out over
multiple maintainers, so presumably we don't actually need to do much here
to implement it?

Since the legacy RNG interface cannot be deprecated and we encourage
downstream to use it in tests according to the text of NEP 19, I'm not sure
about the text in SPEC 7 that talks about deprecating using legacy RNGs. Or
are you saying that we have now reached the point where we can update NEP
19 to encourage moving away from the legacy interface? From the text of NEP
19 regarding the legacy RNG interface:

> This NEP does not propose that these requirements remain in perpetuity.
After we have experience with the new PRNG subsystem, we can and should
revisit these issues in future NEPs.
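For concreteness, the two interfaces under discussion look like this (a minimal
sketch; the `rng=` normalization pattern is the SPEC 7 style as I understand it):

import numpy as np

# Legacy interface: stream-compatible per NEP 19, still usable for test data.
rs = np.random.RandomState(12345)
legacy = rs.normal(size=3)

# Recommended interface for library and program code.
gen = np.random.default_rng(12345)
modern = gen.normal(size=3)

# SPEC-7-style entry point: accept None, an int seed, or an existing Generator.
def simulate(n, rng=None):
    rng = np.random.default_rng(rng)
    return rng.standard_normal(n)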

I don't have a problem with SPEC 8, although I suspect there might be a
fair bit of work to get NumPy's CI to match the suggestions in the SPEC.



On Tue, Oct 8, 2024 at 2:08 PM Joren Hammudoglu via NumPy-Discussion <
numpy-discussion@python.org> wrote:

> Is SPEC 1 thread-safe enough for py313+nogil?


[Numpy-discussion] Re: What to do with np.matrix

2024-10-14 Thread Nathan via NumPy-Discussion
Here's a github code search for the string "np.matrix":

https://github.com/search?q=%22np.matrix%22&type=code

First, if you narrow down to just Python code, there are almost 60 thousand
results, which is quite high, much higher than we were comfortable with
for outright removals for NumPy 2.0.

Compared with code searches I did in service of the NumPy 2.0 API changes,
this returns a lot of repositories in the flavor of "someone's homework
assignments" rather than "core scientific python package" or "package owned
by a billion dollar corporation".

So, it's good that "important" packages don't seem to use np.matrix much,
but also it's bad given that the code that *does* seem to use it is
probably infrequently or poorly tested, and will require a lengthy
deprecation period to catch, if the authors are inclined to do anything
about it at all.

In that case, I think moving things to an external pypi package along with
a long-lived shim in NumPy that points people to the pypi package is
probably the least disruptive thing to do, if we're going to do anything.

-Nathan


[Numpy-discussion] Re: Suggestion to show the shape in repr for summarized arrays

2024-10-30 Thread mattip via NumPy-Discussion
We discussed this again and will merge the current version in a few days unless 
there is more discussion.


[Numpy-discussion] Re: How do I use the numpy on spyder?

2024-10-31 Thread Alan via NumPy-Discussion
Nowadays, free AIs are good for such questions:
https://www.youtube.com/watch?v=bA-tbHwvA6A

hth

On Thu, Oct 31, 2024 at 9:31 AM Joao Pereira via NumPy-Discussion <
numpy-discussion@python.org> wrote:

> I don't know how to use numpy in Spyder. I have tried to use this
> line: "import numpy as np" but I don't know if it's the only necessary thing
> to do.
> If you could explain it to me I'll appreciate it.


[Numpy-discussion] C API could be more consistent with returning NPY_SUCCEED/NPY_FAIL

2024-10-31 Thread sverre.hassing--- via NumPy-Discussion
While using the C API to work with NumPy arrays, I came across some 
inconsistencies in how a success state is returned from various NumPy functions. 
For example, the array iterator functions return either NPY_SUCCEED (defined as 
1) or NPY_FAIL (defined as 0) to indicate whether the function finished 
successfully. However, various array functions instead return 0 on success and 
-1 on failure. The effect is similar enough, but I would think it more 
consistent to use NPY_SUCCEED/NPY_FAIL everywhere. As an example, consider the 
documentation pages for the Array API:
https://numpy.org/doc/stable/reference/c-api/array.html
and the Array iterator API:
https://numpy.org/doc/stable/reference/c-api/iterator.html#array-iterator-api

I hope this is the right place to post this; it is the first time I have really 
engaged publicly with this kind of thing. It did not seem like a bug, and the 
GitHub page indicated that feature requests should go to the mailing list first.


[Numpy-discussion] (no subject)

2024-09-23 Thread slobodan.miletic--- via NumPy-Discussion
Hi,

I made a second PR for the full wheel with OpenBLAS. It has been reviewed, but 
the solution with OpenBLAS is not good enough, so I am investigating alternative 
options for cross-compiling OpenBLAS.
While I am working on this, I have one question about the wheel without 
OpenBLAS. As the job is now on main, it can be triggered manually and the 
produced wheel can be used for publishing to PyPI. I'm wondering what the next 
steps are for this build: is it planned for the wheel to be built/published 
without OpenBLAS, and what would be the conditions for that?


[Numpy-discussion] Re: (no subject)

2024-09-24 Thread slobodan.miletic--- via NumPy-Discussion
Sorry for the misunderstanding. I accidentally created this thread from the 
Windows 11 arm64 wheel thread, which is also why there is no subject. I will 
repost this to that discussion.
It was not strictly an OpenBLAS problem. I was trying to make the GitHub Actions 
build for Windows 11 arm64 NumPy with OpenBLAS. As a native runner for Windows 
arm64 was not available, it was agreed to try cross-compiling OpenBLAS. The 
problem was that there were no Fortran compiler binaries to use, so I tried 
using Linaro-built OpenBLAS libraries for arm64, which is not secure. Locally I 
could modify flang-new to build OpenBLAS, but setting that up on GitHub Actions 
would take too much time.


[Numpy-discussion] Re: Windows 11 arm64 wheel

2024-09-24 Thread slobodan.miletic--- via NumPy-Discussion
I accidentally created a new thread without a subject containing a message that 
belongs here:

Hi,
I made a second PR for the full wheel with OpenBLAS. It has been reviewed but 
rejected, as the solution with Linaro-built OpenBLAS is not good enough, so I am 
investigating alternative options for cross-compiling OpenBLAS.
While I am working on this, I have one question about the wheel without 
OpenBLAS. As the job is now on main, it can be triggered manually and the 
produced wheel can be used for publishing to PyPI. I'm wondering what the next 
steps are for this build: is it planned for the wheel to be built/published 
without OpenBLAS, and what would be the conditions for that?


[Numpy-discussion] Re: NumPy 2.2.0 Released

2024-12-08 Thread Nathan via NumPy-Discussion
Improvements to the promoters for some of the string ufuncs:
https://github.com/numpy/numpy/pull/27636

Support for stringdtype arrays in the type hints and typing support for the
string ufuncs:
https://github.com/numpy/numpy/pull/27470

If you have a particular improvement you’re looking for I’d love to hear
more.

On Sun, Dec 8, 2024 at 3:01 PM Neal Becker via NumPy-Discussion <
numpy-discussion@python.org> wrote:

> Where can I find more information on improvements to stringdtype?
>
> On Sun, Dec 8, 2024, 11:25 AM Charles R Harris via NumPy-Discussion <
> numpy-discussion@python.org> wrote:
>
>> Hi All,
>>
>> On behalf of the NumPy team, I'm pleased to announce the release of NumPy
>> 2.2.0. The NumPy 2.2.0 release is a short release that brings us back
>> into sync with the usual twice yearly release cycle. There have been a
>> number of small cleanups, as well as work bringing the new StringDType to
>> completion and improving support for free threaded Python. Highlights are:
>>
>>- New functions `matvec` and `vecmat`, see below.
>>- Many improved annotations.
>>- Improved support for the new StringDType.
>>- Improved support for free threaded Python
>>- Fixes for f2py
>>
>> This release supports Python 3.10-3.13. Wheels can be downloaded from
>> PyPI <https://pypi.org/project/numpy/2.2.0>; source archives, release
>> notes, and wheel hashes are available on Github
>> <https://github.com/numpy/numpy/releases/tag/v2.2.0>.
>>
>> Cheers,
>>
>> Charles Harris


[Numpy-discussion] Proposing a flattening functionality for deeply nested lists in NumPy

2024-12-30 Thread Mark via NumPy-Discussion
Hello all,


Many people have asked how to flatten a nested list into a one-dimensional
list (e.g., see this StackOverflow thread). While flattening a 2D list is
relatively straightforward, deeply nested lists can become cumbersome to
handle. To address this challenge, I propose adding built-in list-flattening
functionality to NumPy.

By adding this feature to NumPy, the library would not only simplify a
frequently used task but also enhance its overall usability, making it an
even more powerful tool for data manipulation and scientific computing.

The code snippet below demonstrates how a nested list can be flattened,
enabling conversion into a NumPy array. I believe this would be a valuable
addition to the package. See also this issue.


from collections.abc import Iterable


def flatten_list(iterable):
    """
    Flatten a (nested) list into a one-dimensional list.

    Parameters
    ----------
    iterable : iterable
        The input collection.

    Returns
    -------
    flattened_list : list
        A one-dimensional list containing all the elements from the input,
        with any nested structures flattened.

    Examples
    --------
    Flattening a list containing nested lists:

    >>> obj = [[1, 2, 3], [1, 2, 3]]
    >>> flatten_list(obj)
    [1, 2, 3, 1, 2, 3]

    Flattening a list with sublists of different lengths:

    >>> obj = [1, [7, 4], [8, 1, 5]]
    >>> flatten_list(obj)
    [1, 7, 4, 8, 1, 5]

    Flattening a deeply nested list:

    >>> obj = [1, [2], [[3]], [[[4]]]]
    >>> flatten_list(obj)
    [1, 2, 3, 4]

    Flattening a list with various types of elements:

    >>> obj = [1, [2], (3), (4,), {5}, np.array([1, 2, 3]), range(3), 'Hello']
    >>> flatten_list(obj)
    [1, 2, 3, 4, 5, 1, 2, 3, 0, 1, 2, 'Hello']

    """
    if not isinstance(iterable, Iterable) or isinstance(iterable, str):
        return [iterable]

    def flatten_generator(iterable):
        for item in iterable:
            if isinstance(item, Iterable) and not isinstance(item, str):
                yield from flatten_generator(item)
            else:
                yield item

    return list(flatten_generator(iterable))


[Numpy-discussion] ENH: Efficient vectorized sampling without replacement

2025-01-01 Thread Mark via NumPy-Discussion
Hello,

NumPy provides efficient, vectorized methods for generating random samples of an
array with replacement. However, it lacks similar functionality for sampling
*without replacement* in a vectorized manner. To address this limitation, I
developed a function capable of performing this task, achieving approximately a
30x performance improvement over a basic Python loop for small sample sizes
(and a 2x performance improvement using numba). Could this functionality, or
something similar, be integrated into NumPy? See also this issue.

Kind regards,
Mark


def random_choice_without_replacement(array, sample_size, n_iterations):

    """
    Generates random samples from a given array without replacement.

    Parameters
    ----------
    array : array-like
        Array from which to draw the random samples.
    sample_size : int
        Number of random samples to draw without replacement per iteration.
    n_iterations : int
        Number of iterations to generate random samples.

    Returns
    -------
    random_samples : ndarray
        The generated random samples.

    Raises
    ------
    ValueError
        If sample_size is greater than the population size.

    Examples
    --------
    Generate 10 random samples from np.arange(5) of size 3 without
    replacement.

    >>> array = np.arange(5)
    >>> random_choice_without_replacement(array, 3, 10)
    array([[4, 0, 1],
           [1, 4, 0],
           [1, 3, 2],
           [0, 1, 3],
           [1, 0, 2],
           [3, 2, 4],
           [0, 3, 1],
           [1, 3, 4],
           [3, 1, 4],
           [0, 1, 3]])  # random

    Generate 4 random samples from an n-dimensional array of size 3 without
    replacement.

    >>> array = np.arange(10).reshape(5, 2)
    >>> random_choice_without_replacement(array, 3, 4)
    array([[[0, 1],
            [8, 9],
            [4, 5]],

           [[2, 3],
            [8, 9],
            [0, 1]],

           [[0, 1],
            [2, 3],
            [8, 9]],

           [[4, 5],
            [2, 3],
            [8, 9]]])  # random

    """

    if sample_size > len(array):
        raise ValueError(
            f"Sample_size ({sample_size}) is greater than the "
            f"population size ({len(array)})."
        )

    indices = np.tile(np.arange(len(array)), (n_iterations, 1))
    random_samples = np.empty((n_iterations, sample_size), dtype=int)
    rng = np.random.default_rng()

    for i, int_max in zip(range(sample_size),
                          reversed(range(len(array) - sample_size, len(array)))):
        random_indices = rng.integers(0, int_max + 1, size=(n_iterations, 1))
        random_samples[:, i] = np.take_along_axis(indices, random_indices,
                                                  axis=-1).T
        np.put_along_axis(indices, random_indices,
                          indices[:, int_max:int_max + 1], axis=-1)

    return array[random_samples]


[Numpy-discussion] Re: Is a Python function a gufunc if it broadcasts its arguments appropriately?

2024-12-27 Thread john.a.dawson--- via NumPy-Discussion
Can gufuncs be written in Python?


[Numpy-discussion] Is a Python function a gufunc if it broadcasts its arguments appropriately?

2024-12-21 Thread john.a.dawson--- via NumPy-Discussion
For example, is the function `stack` below a gufunc with signature (),()->(2)?

def stack(a, b):
    broadcasts = np.broadcast_arrays(a, b)
    return np.stack(broadcasts, axis=-1)

Or must gufuncs be written in C? Or are other things required?
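The closest thing I have found in pure Python is np.vectorize with a signature,
though I am not sure whether the result counts as a gufunc; a small sketch:

import numpy as np

def stack(a, b):
    broadcasts = np.broadcast_arrays(a, b)
    return np.stack(broadcasts, axis=-1)

# np.vectorize can attach a gufunc-style signature to a scalar kernel;
# broadcasting over the loop dimensions is then handled by NumPy.
stack_v = np.vectorize(lambda a, b: np.array([a, b]), signature="(),()->(2)")

x = np.arange(3.0)                    # shape (3,)
y = np.arange(6.0).reshape(2, 3, 1)   # broadcasts against x

print(stack(x, y).shape)    # (2, 3, 3, 2)
print(stack_v(x, y).shape)  # (2, 3, 3, 2)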


[Numpy-discussion] New GitHub issue UI

2025-01-14 Thread Nathan via NumPy-Discussion
Hi all,

GitHub is rolling out the new UI for issues, which includes a lot of new
opportunities to reorganize our backlog. More detail on the changelog blog:
https://github.blog/changelog/2025-01-13-evolving-github-issues-public-preview/

In particular, there is now much richer support for tracking issues by
marking issues as "sub-issues". We can also (finally) get rid of the issue
category labels - GitHub now has support for "issue types".

If someone with triage rights would like to take this on, it would be a
nice project to try to go through the backlog and update things to use the
new system, as well as the bot that auto-applies labels. You could probably
use a script rather than doing it manually.

-Nathan


[Numpy-discussion] Re: Is a Python function a gufunc if it broadcasts its arguments appropriately?

2024-12-31 Thread john.a.dawson--- via NumPy-Discussion
Is the function `stack` above a gufunc?


[Numpy-discussion] unique_2D

2024-12-24 Thread Mark via NumPy-Discussion
Hello,

I've made a contribution to NumPy: unique_2D(), which determines the unique
values and counts of a multi-dimensional array for each row (along the last
dimension).

As this is my first contribution, I expect some feedback before it is added
to the final version.

The pull request can be found here: https://github.com/numpy/numpy/pull/28064

I hope to get some constructive feedback such that I can be more effective
in contributing to Numpy in the future.

Kind regards,
Mark


[Numpy-discussion] Beginners

2025-03-11 Thread Monkeysigh via NumPy-Discussion
Do you know if beginners are invited to attend?


[Numpy-discussion] Re: Wondering if there is interest in a "variable convolution" feature in numpy?

2025-06-05 Thread cantor.duster--- via NumPy-Discussion
Thanks for the response, Nathan!  I'll check in with SciPy.  They have quite a 
few different convolution functions.

Our backup is publishing this as a standalone package on PyPI and conda-forge.  
I wanted to see if it made sense to integrate it with something else first, 
though.

Unfortunately since it's a single function I don't think it's a candidate for 
JOSS.


[Numpy-discussion] Wondering if there is interest in a "variable convolution" feature in numpy?

2025-06-04 Thread cantor.duster--- via NumPy-Discussion
Hello,

My team and I (especially @Arqu1100) have been working on energy-dependent 
convolutions for a nuclear physics application: 
https://github.com/det-lab/energyDependentColvolve.

We're looking to release this code either as a standalone library or as part of 
a library because we ran into quite a few issues when writing the code and 
would like to help out other groups who need to do this "simple" calculation.  

This code is definitely not ready for a pull request, but if there's any 
interest in this feature we're happy to create one.  @Arqu1100 has worked 
particularly hard on creating test cases. The only existing library we found 
that does what we do here is varconvolve, which we haven't been able to verify 
against our test cases. Other examples of code that does a similar job are 
referenced on Stack Overflow as being embedded into ldscat 
(https://stackoverflow.com/questions/18624005/how-do-i-perform-a-convolution-in-python-with-a-variable-width-gaussian).

If there's a better place to ask the question, please let me know.  Thanks all!

Amy Roberts


[Numpy-discussion] Re: Wondering if there is interest in a "variable convolution" feature in numpy?

2025-06-04 Thread Nathan via NumPy-Discussion
NumPy probably isn’t the right spot for this - we’re very conservative
about adding new functionality to NumPy that might also live in SciPy.
SciPy has convolution functionality but I’m not sure if they would want
greenfield code for this. Definitely worth asking the SciPy developers.

That said, have you considered publishing and promoting your own package on
PyPI and conda-forge? It’s a bit of work to get everything set up, but at
least these days you can “publish” the work (in an academic sense)
relatively straightforwardly with a Journal of Open Source Software
submission.

See also the PyOpenSci guide, which has extensive guidance for writing and
publishing packages for general consumption:

https://www.pyopensci.org/python-package-guide/index.html

On Wed, Jun 4, 2025 at 6:32 AM cantor.duster--- via NumPy-Discussion <
numpy-discussion@python.org> wrote:

> Hello,
>
> My team and I (especially @Arqu1100) have been working on energy-dependent
> convolutions for a nuclear physics application:
> https://github.com/det-lab/energyDependentColvolve.
>
> We're looking to release this code either as a standalone library or as
> part of a library because we ran into quite a few issues when writing the
> code and would like to help out other groups who need to do this "simple"
> calculation.
>
> This code is definitely not ready for a pull request, but if there's any
> interest in this feature we're happy to create one.  @Arqu1100 has worked
> particularly hard on creating test cases. The only existing library we
> found that does what we do here is varconvolve, which we haven't been able
> to verify against our test cases. Other examples of code that does a
> similar job are referenced on Stack Overflow as being embedded into ldscat (
> https://stackoverflow.com/questions/18624005/how-do-i-perform-a-convolution-in-python-with-a-variable-width-gaussian
> ).
>
> If there's a better place to ask the question, please let me know.  Thanks
> all!
>
> Amy Roberts


[Numpy-discussion] Re: Addition of eigenvalue functions

2025-06-12 Thread Nathan via NumPy-Discussion
If functionality is available in SciPy we usually don’t consider adding it
to NumPy. That rules out adding eig.

Is there any reason why polyeig doesn’t make sense to add to SciPy instead
of NumPy? Generally if functionality makes sense to add to SciPy that’s
where we point people to.
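For reference, the generalized problem is already exposed there; a quick sketch:

import numpy as np
from scipy import linalg

A = np.array([[3.0, 1.0],
              [1.0, 2.0]])
B = np.array([[2.0, 0.0],
              [0.0, 1.0]])

# Generalized eigenvalue problem A @ x = lambda * B @ x
w, v = linalg.eig(A, B)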

On Thu, Jun 12, 2025 at 6:39 AM waqar jamali via NumPy-Discussion <
numpy-discussion@python.org> wrote:

> NumPy currently lacks a generalized eigenvalue function such as eig(A, B)
> or polyeig(A, B).
>
> These functions are essential for several algorithms, including the
> Criss-Cross algorithm and various eigenvalue problems. In particular,
> large-scale problems in control theory are often reduced to subspace
> problems where MATLAB routines like eig(A, B) and polyeig are widely used
> in research. Therefore, I believe that adding such functionality to NumPy
> would be highly beneficial.
>
> I have submitted a pull request here.
>
> https://github.com/numpy/numpy/pull/29163


[Numpy-discussion] Re: Change in numpy.percentile

2023-10-11 Thread Peter Cock via NumPy-Discussion
On Tue, Oct 10, 2023 at 6:32 PM Matthew Brett 
wrote:

> Hi,
>
>
> On Tue, 10 Oct 2023 at 00:55, Andrew Nelson  wrote:
> >
> >
> > On Mon, 9 Oct 2023 at 23:50, Matthew Brett 
> wrote:
> >>
> >> Hi,
> >>
> >> On Mon, Oct 9, 2023 at 11:49 AM Andrew Nelson 
> wrote:
> >> Could you say more about why you consider:
> >> np.mean(x, dropna=True)
> >> to be less clear in intent than:
> >> np.nanmean(x)
> >> ?  Is it just that someone could accidentally forget that the default
> >
> >
> > The discussion isn't a deal breaker for me, I just wanted to put out a
> different POV.
> > The name of the function encodes what it does. By putting them both in
> the function name it's clear what the function does.
> >
> > ...
> >
> > Imagine that one has a large codebase and you have to find all the
> locations where nans could affect a mean. There may be lots of prod, sum,
> etc, also distributed within the codebase. You wouldn't want to search for
> `dropna` because you get every function that handles a nan. If you search
> for nanmean you only get the locations you want.
>
> So, is this the more or less the difference between:
>
> grep 'np\.nanmean' *.py
>
> and
>
> grep 'np\.mean(.*,\s*dropna\s*=\s*True' *.py
>
> ?
>
> Cheers,
>
> Matthew
>
>
Keep in mind that the dropna argument might very well be on a different
line (especially with black formatting), so searches could be much harder
than looking for the nanmean function.

(I do not deal with enough NaN data to have a strong view either way here)

Peter


[Numpy-discussion] Black style as applied to np.array(...) and the ruff formatter

2023-11-03 Thread Peter Cock via NumPy-Discussion
Hello all,

I imagine there are many people here using the black coding style as
implemented by the tool black, albeit with reservations about how it lays out
arrays by default (such arrays are therefore often wrapped in a format off/on
block to exclude them from automatic layout and allow manual column-based
layouts).
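For instance, a manually aligned array literal is typically shielded from the
formatter like this (a small illustrative sketch):

import numpy as np

# fmt: off
transform = np.array([
    [ 1.0,  0.0,  0.0],
    [ 0.0,  0.5, -0.5],
    [ 0.0,  0.5,  0.5],
])
# fmt: on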

You may already be aware of the tool ruff as a fast alternative to flake8,
but it now has a formatter which implements the black format (with some
minor divergences):

https://astral.sh/blog/the-ruff-formatter

The authors are open to exploring special casing how it autoformats
arrays, and I think input now from the numpy community would be a
good idea:

https://github.com/astral-sh/ruff/discussions/8452

Kind regards,

Peter


[Numpy-discussion] How is "round to N decimal places" defined for binary floating point numbers?

2023-12-28 Thread Stefano Miccoli via NumPy-Discussion
I have always been puzzled about how to correctly define the python built-in 
`round(number, ndigits)` when `number` is a binary float and `ndigits` is 
greater than zero.
Apparently CPython and numpy disagree:
>>> round(2.765, 2)
2.77
>>> np.round(2.765, 2)
2.76
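The two results presumably come from rounding different intermediate values; one
can inspect what each implementation actually sees (a small sketch, outputs
omitted since they depend on the exact binary representation):

import numpy as np
from decimal import Decimal

x = 2.765
print(Decimal(x))        # the exact binary value stored for the literal 2.765
print(Decimal(x * 100))  # the scaled intermediate that a scale-round-unscale
                         # scheme (as np.round effectively uses) ends up rounding
print(round(x, 2), np.round(x, 2))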

My questions for the numpy devs are:
- Is there an authoritative source that explains what `round(number, ndigits)` 
means when the digits are counted in a base different from the one used in the 
floating point representation?
- Which was the first programming language to implement an intrinsic function 
`round(number, ndigits)` where ndigits are always decimal, irrespective of the 
representation of the floating point number? (I’m not interested in algorithms 
for printing a decimal representation, but in languages that allow to store and 
perform computations with the rounded value.)
- Is `round(number, ndigits)` a useful function that deserves a rigorous 
definition, or is its use limited to fuzzy situations, where accuracy can be 
safely traded for speed?

Personally I cannot think of sensible uses of `round(number, ndigits)` for 
binary floats: whenever you positively need `round(number, ndigits)`, you 
should use a decimal floating point representation.

Stefano



[Numpy-discussion] Re: How is "round to N decimal places" defined for binary floating point numbers?

2023-12-29 Thread Stefano Miccoli via NumPy-Discussion
Oscar Gustafsson wrote:
> I would take it that round x to N radix-R digits means
> round_to_integer(x * R**N)/R**N
> (ignoring floating-point issues)

Yes, this is the tried-and-true way: first define the function in exact 
arithmetic, then ask for the floating point implementation to return an 
approximate result within a given tolerance, say 1/2 ulp.
This is what CPython does, and it seems to be quite hard to get it right, at 
least inspecting the code of the current implementation. BTW I think that numpy 
uses the same definitions, but simply accepts a bigger (not specified) 
tolerance in its implementation.
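To make the definition concrete, here is a minimal sketch (my own, not from the
thread) that evaluates round_to_integer(x * R**N) / R**N in exact rational
arithmetic, so that the only approximation is the final conversion back to a float:

from fractions import Fraction

def exact_round(x, ndigits, base=10):
    scale = Fraction(base) ** ndigits
    q = Fraction(x) * scale         # exact: Fraction(x) is the true value of the binary float
    return float(round(q) / scale)  # round() on a Fraction rounds half to even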

Maybe I should have phrased my question differently: is this definition the 
only accepted one, or are there different formulations which give rise to 
faster implementations?
___
NumPy-Discussion mailing list -- numpy-discussion@python.org
To unsubscribe send an email to numpy-discussion-le...@python.org
https://mail.python.org/mailman3/lists/numpy-discussion.python.org/
Member address: arch...@mail-archive.com


[Numpy-discussion] PyData yerevan chapter - sprint

2024-01-09 Thread Habet Madoyan via NumPy-Discussion
Dear community members,

I am Habet, the co-founder and organizer of the PyData Yerevan chapter. We
are planning to host a contributors' sprint for NumPy in Yerevan, Armenia,
during one of our upcoming monthly meetups in either February or March. We
have allocated a modest budget to facilitate the participation of
individuals from Europe and Asia who can assist in organizing this event.
If you have relevant experience and are interested in contributing, please
feel free to reach out to me at *hmadoyan@aua*.

thanks
habet



-- 
Habet Madoyan, PhD | Data Science Program Chair

+374 60 612 632,

hmado...@aua.am



_

40 Baghramyan Avenue, Yerevan 0019, Republic of Armenia

http://cse.aua.am/ds


To know wisdom and instruction; to perceive the words of understanding;
___
NumPy-Discussion mailing list -- numpy-discussion@python.org
To unsubscribe send an email to numpy-discussion-le...@python.org
https://mail.python.org/mailman3/lists/numpy-discussion.python.org/
Member address: arch...@mail-archive.com


[Numpy-discussion] Re: New Ruff rule for migrating to NumPy 2.0

2024-01-11 Thread Peter Cock via NumPy-Discussion
This looks handy - I used the following to try it:

$ pip install -U ruff
$ ruff --preview --select NPY201 --fix 

Happily, nothing to address on the code base I tried.
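For anyone curious what the autofix does, an illustrative sketch (my own, based
on the NumPy 2.0 migration guide; exact coverage depends on the ruff release)
of the kind of rewrite NPY201 performs:

import numpy as np

# Before (NumPy 1.x aliases that are gone or renamed in 2.0):
#   x = np.float_(1.5)
#   mask = np.in1d([1, 2, 3], [2])
# After the NPY201 autofix:
x = np.float64(1.5)
mask = np.isin([1, 2, 3], [2])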

Thanks,

Peter

On Thu, Jan 11, 2024 at 11:32 AM Mateusz Sokol  wrote:
>
> Hi all!
>
> Some time ago we added a new rule to Ruff linter, "NPY201", which updates the 
> codebase to a NumPy 2.0 compatible version.
>
> You can read about it in the migration guide: 
> https://numpy.org/devdocs/numpy_2_0_migration_guide.html#ruff-plugin
> And on the Ruff docs website: 
> https://docs.astral.sh/ruff/rules/numpy2-deprecation/
> (it's still in a "preview" mode but available since 0.1.4 release).
>
> Best regards,
> Mateusz
> ___
> NumPy-Discussion mailing list -- numpy-discussion@python.org
> To unsubscribe send an email to numpy-discussion-le...@python.org
> https://mail.python.org/mailman3/lists/numpy-discussion.python.org/
> Member address: p.j.a.c...@googlemail.com
___
NumPy-Discussion mailing list -- numpy-discussion@python.org
To unsubscribe send an email to numpy-discussion-le...@python.org
https://mail.python.org/mailman3/lists/numpy-discussion.python.org/
Member address: arch...@mail-archive.com


[Numpy-discussion] Re: welcome Raghuveer, Chris, Mateusz and Matt to the NumPy maintainers team

2024-01-27 Thread Hameer Abbasi via NumPy-Discussion
Welcome, Raghuveer, Chris, Mateusz and Matt!

> Am 26.01.2024 um 21:04 schrieb Ralf Gommers :
> 
> Hi all,
> 
> We've got four new NumPy maintainers! Welcome to the team, and 
> congratulations to:
> 
> - Raghuveer Devulapalli (https://github.com/r-devulap)
> - Chris Sidebottom (https://github.com/mousius)
> - Mateusz Sokół (https://github.com/mtsokol/)
> - Matt Haberland (https://github.com/mdhaber)
> 
> Raghuveer and Chris have been contributing to the effort on SIMD and 
> performance optimizations for quite a while now. Mateusz has done a lot of 
> the heavy lifting on the Python API improvements for NumPy 2.0. And Matt has 
> been contributing to the test infrastructure and docs.
> 
> Thanks to all four of you for the great work to date!
> 
> Cheers,
> Ralf
> 
> ___
> NumPy-Discussion mailing list -- numpy-discussion@python.org
> To unsubscribe send an email to numpy-discussion-le...@python.org
> https://mail.python.org/mailman3/lists/numpy-discussion.python.org/
> Member address: hameerabb...@yahoo.com

___
NumPy-Discussion mailing list -- numpy-discussion@python.org
To unsubscribe send an email to numpy-discussion-le...@python.org
https://mail.python.org/mailman3/lists/numpy-discussion.python.org/
Member address: arch...@mail-archive.com


[Numpy-discussion] API: make numpy.lib._arraysetops.intersect1d work on multiple arrays #25688

2024-02-02 Thread Stephan Kuschel via NumPy-Discussion

Dear Community,

For my own work, I required the intersect1d function to work on multiple 
arrays while returning the indices (using `return_indices=True`). 
Consequently I changed the function in numpy and now I am seeking 
feedback from the community.


This is the corresponding PR: https://github.com/numpy/numpy/pull/25688

My motivation for the change may also apply to a larger group of people 
as it is important for lots of simulation data analysis:


In various simulations it is often the case that many entities 
(particles, cells, vehicles, whatever the simulation consists of) are 
being tracked throughout the simulation. A typical approach is to assign 
a unique ID to every entity which stays constant and unique throughout 
the simulation and is written, together with other properties of the 
entities, on every simulation snapshot in time. Note that during the 
simulation new entities may enter or leave the simulation, and due to 
parallelization the order of those entities is not conserved.
Tracking the position of entities over, let's say, 100 snapshots requires 
the intersection of 100 id lists instead of only two.


Consequently I changed the intersect1d function from
`intersect1d(ar1, ar2, assume_unique=False, return_indices=False)` to
`intersect1d(*ars, assume_unique=False, return_indices=False)`.
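A hedged usage sketch of the proposed signature applied to the tracking example
above (here `intersect1d` means the modified function from the PR, and the exact
return layout with `return_indices=True` is my assumption, not settled API):

import numpy as np

snapshots = [np.array([4, 1, 7, 2]),
             np.array([2, 7, 9, 4]),
             np.array([7, 4, 2, 5])]

# IDs present in every snapshot; with return_indices=True, presumably one
# index array per input would follow the intersection itself.
common, *indices = intersect1d(*snapshots, return_indices=True)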

Please let me know if there is any interest in those changes -- be it in 
this form or another.


All the Best
Stephan
___
NumPy-Discussion mailing list -- numpy-discussion@python.org
To unsubscribe send an email to numpy-discussion-le...@python.org
https://mail.python.org/mailman3/lists/numpy-discussion.python.org/
Member address: arch...@mail-archive.com


[Numpy-discussion] Re: API: make numpy.lib._arraysetops.intersect1d work on multiple arrays #25688

2024-02-06 Thread Stephan Kuschel via NumPy-Discussion

Dear Dom,

Thanks for bringing up the possible constriction. I agree that this 
would be a serious argument against the change.


However, as you said, the overlapping/non-overlapping indices would 
become ambiguous with more than two arrays. And calling the function 
with only two arrays at a time would still be possible. So what we would 
lose is only the ability to generalize in the future towards a problem 
that has only ambiguous solutions anyway. So I fail to see what exactly 
the other use case would be.


The point of this change is not the luxury of allowing multiple arrays 
in the intersection calculation. It's all about getting the indices in the 
original arrays, using `return_indices=True`.


All the Best
Stephan

Am 02.02.24 um 17:36 schrieb Dom Grigonis:
Also, I don’t know if this could be of value, but my use case for this 
is to find overlaps, then split arrays into overlapping and 
non-overlapping segments.


Thus, it might be useful for `return_indices=True` to return indices of 
all instances, not only the first.


Also, in my case I need both overlapping and non-overlapping indices, 
but this would become ambiguous with more than 2 arrays.


If it was left with 2 array input, then it can be extended to return 
both overlapping and non-overlapping parts. I think it could be another 
potential path to consider.


E.g. what would be the speed comparison:


intr = intersect1d(arr1, arr2, assume_unique=False)
intr = intersect1d(intr, np.unique(arr3), assume_unique=True)

# VS new

intr = intersect1d(arr1, arr2, arr3, assume_unique=False)

Then, does the gain from such a generalisation justify the constriction it 
introduces?


Regards,
DG

On 2 Feb 2024, at 17:31, Marten van Kerkwijk wrote:



For my own work, I required the intersect1d function to work on multiple
arrays while returning the indices (using `return_indices=True`).
Consequently I changed the function in numpy and now I am seeking
feedback from the community.

This is the corresponding PR: 
https://github.com/numpy/numpy/pull/25688 





To me this looks like a very sensible generalization.  In terms of numpy
API, the only real change is that, effectively, the assume_unique and
return_indices arguments become keyword-only, i.e., in the unlikely case
that someone passed those as positional, a trivial backward-compatible
change will fix it.

-- Marten



___
NumPy-Discussion mailing list -- numpy-discussion@python.org
To unsubscribe send an email to numpy-discussion-le...@python.org
https://mail.python.org/mailman3/lists/numpy-discussion.python.org/
Member address: stephan.kusc...@gmail.com

___
NumPy-Discussion mailing list -- numpy-discussion@python.org
To unsubscribe send an email to numpy-discussion-le...@python.org
https://mail.python.org/mailman3/lists/numpy-discussion.python.org/
Member address: arch...@mail-archive.com


[Numpy-discussion] Re: API: make numpy.lib._arraysetops.intersect1d work on multiple arrays #25688

2024-02-06 Thread Stephan Kuschel via NumPy-Discussion

Dear Dom,

Just checked, and on my computer the new version is about a factor of 2 faster 
than the reduce approach if the arrays are shuffled. For sorted 
arrays, the new version is a factor of 3.4 faster:



import numpy as np
from functools import reduce

idss = [np.random.permutation(np.arange(a*100, int(1e5)+a*100, 1)) for a in range(20)]

%timeit intersect1d(*idss)            # 166 +- 47 ms  (proposed multi-array version)
%timeit reduce(np.intersect1d, idss)  # 301 +- 3.7 ms

and

idss = [np.arange(a*100, int(1e5)+a*100, 1) for a in range(20)]

%timeit intersect1d(*idss)            # 77 +- 6 ms
%timeit reduce(np.intersect1d, idss)  # 212 +- 3.8 ms

Stephan


Am 02.02.24 um 17:10 schrieb Dom Grigonis:

Just curious, how much faster is it compared to the currently recommended `reduce` 
approach?

DG


On 2 Feb 2024, at 17:31, Marten van Kerkwijk  wrote:


For my own work, I required the intersect1d function to work on multiple
arrays while returning the indices (using `return_indices=True`).
Consequently I changed the function in numpy and now I am seeking
feedback from the community.

This is the corresponding PR: https://github.com/numpy/numpy/pull/25688




To me this looks like a very sensible generalization.  In terms of numpy
API, the only real change is that, effectively, the assume_unique and
return_indices arguments become keyword-only, i.e., in the unlikely case
that someone passed those as positional, a trivial backward-compatible
change will fix it.

-- Marten
___
NumPy-Discussion mailing list -- numpy-discussion@python.org
To unsubscribe send an email to numpy-discussion-le...@python.org
https://mail.python.org/mailman3/lists/numpy-discussion.python.org/
Member address: dom.grigo...@gmail.com


___
NumPy-Discussion mailing list -- numpy-discussion@python.org
To unsubscribe send an email to numpy-discussion-le...@python.org
https://mail.python.org/mailman3/lists/numpy-discussion.python.org/
Member address: stephan.kusc...@gmail.com

___
NumPy-Discussion mailing list -- numpy-discussion@python.org
To unsubscribe send an email to numpy-discussion-le...@python.org
https://mail.python.org/mailman3/lists/numpy-discussion.python.org/
Member address: arch...@mail-archive.com


[Numpy-discussion] Re: Introducing quarterly date units to datetime64 and timedelta64

2024-02-24 Thread Stefano Miccoli via NumPy-Discussion
Actually quarters (3-month sub-year groupings) are already supported as 
‘M8[3M]’ and ‘m8[3M]’:
>>> np.datetime64('2024-05').astype('M8[3M]') - np.datetime64('2020-03').astype('M8[3M]')
numpy.timedelta64(17,'3M')
So explicitly introducing a ‘Q’ time unit is only to enable more intuitive 
representation/parsing of dates and durations.

I’m moderately negative on this proposal:
- there is no native support of quarters in Python
- ISO 8601-1 does not support sub-year groupings
- the ISO 8601-2 extension representing sub-year groupings is not sufficiently 
widespread to be adopted by numpy. (E.g. '2001-34' expresses "second quarter of 
2001", but I suppose nobody would guess this meaning.) 

In other words, without a clear normative reference, implementing quarters in 
numpy would risk introducing a custom/arbitrary notation.

Stefano

___
NumPy-Discussion mailing list -- numpy-discussion@python.org
To unsubscribe send an email to numpy-discussion-le...@python.org
https://mail.python.org/mailman3/lists/numpy-discussion.python.org/
Member address: arch...@mail-archive.com


[Numpy-discussion] builing numpy on windows

2024-02-27 Thread Ganesh Rajora via NumPy-Discussion
Hi Team,
I am Ganesh, working for an MNC here in India, and I am working on a customised 
Python where I build a set of python modules along with python itself.

I do not install them directly from the web; I build each and everything from 
source code. This is because of security concerns at the organisation. 

In a similar line I want to get some help from you on building numpy on Windows, 
as I did not find any direct reference on how to do that on numpy's 
official website. I am facing lots of issues building it. If you could help me 
out with it, or could give me the right point of contact to discuss the issue, 
that would be a great help.
I have posted the issue here on stackoverflow - 
Issues in building numpy from source on Windows

Thanks,
Ganesh
___
NumPy-Discussion mailing list -- numpy-discussion@python.org
To unsubscribe send an email to numpy-discussion-le...@python.org
https://mail.python.org/mailman3/lists/numpy-discussion.python.org/
Member address: arch...@mail-archive.com


[Numpy-discussion] Re: Improved 2DFFT Approach

2024-03-12 Thread Alexander Levin via NumPy-Discussion
Thanks for your extensive feedback. If I got you right, we can't claim the 
outperformance in all cases, because it was measured with an insufficiently 
precise function and over a relatively short period of time.

I understand your point of view and thank you for your observation. We will 
start working on fixing it and re-testing. 

I don't fully see your point about -O3 and --fastmath; if I understand correctly, 
these aggressive optimizations sacrifice computational accuracy for performance 
in the long run. At the moment our operation has been 
tested by another project dealing with signal processing, and the testing was 
successful and indeed showed better performance. As for computational accuracy, 
I don't fully grasp your point here either, since before publishing this we ran 
a number of tests on different dimensions and sizes; some of those tests can be 
found as well. 

Nevertheless, I would agree that our development is the result of the personal 
enthusiasm of two individuals, based on the fact that the mathematical aspect 
of the algorithm currently used in the field is far from ideal. We have spent 
some of our time understanding the problem, analyzing articles on this matter, 
and implementing functions to achieve a more efficient mathematical algorithm. 

We have spent some of our time bringing the aforementioned package to a state 
where it can be further worked on and improved, and we have a great deal of 
respect for the motivations of more experienced colleagues working on 
open-source development to help us in this endeavor. As I mentioned earlier, we 
are ready to work within our abilities and knowledge, applying all the 
information available to us for research. I believe that with the combination 
of our efforts and the guidance of colleagues from NumPy, our operation can be 
tested and integrated into the package itself. We are ready to put in all 
necessary efforts from our side.

I thank you for your comments; we will replace our module with the one you 
mentioned, re-run the tests, and will be happy to share the results. I 
appreciate that such a knowledgeable person in the industry took the time to 
pay attention to our work.
___
NumPy-Discussion mailing list -- numpy-discussion@python.org
To unsubscribe send an email to numpy-discussion-le...@python.org
https://mail.python.org/mailman3/lists/numpy-discussion.python.org/
Member address: arch...@mail-archive.com


[Numpy-discussion] Re: Improved 2DFFT Approach

2024-03-14 Thread Alexander Levin via NumPy-Discussion
Good day, Ralf.

I am sharing the results of the latest updates to our code. We have taken into 
account the comments below and are testing the timing with %timeit -o inside 
jupyter, which gives information about the best of 7 runs and the average 
deviation. I am writing to summarise the intermediate results.

The testing notebooks:
Memory Usage - 
https://github.com/2D-FFT-Project/2d-fft/blob/testnotebook/notebooks/memory_usage.ipynb
 
Timing comparisons(updated) - 
https://github.com/2D-FFT-Project/2d-fft/blob/testnotebook/notebooks/comparisons.ipynb
Our version always loses to Scipy if multithreading is enabled. We also 
wondered about type conversions - whether to leave them in for the test metrics or 
not. The point is that they are necessary for converting matrix values from int 
to complex128 (we will replace them with complex64 if necessary) and back when 
outputting. For a more convenient user experience we preferred to leave the 
conversions in for testing; we will be interested in your opinion. 

Regarding the results we have after all updates - memory usage is stable, and 
our operation wins by a factor of 2. Regarding execution time and efficiency 
- I have the following opinion. On tests with multithreading enabled we are 
consistently losing, while on tests with multithreading disabled we are 
consistently winning. From this we should draw one logical conclusion - our 
algorithm is mathematically smarter, which makes it possible for it to win 
steadily within the limits of memory usage and performance when multithreading 
is switched off. At the same time, multithreading itself, used by Scipy 
authors, is better and more efficient than ours - that's why our operation 
loses algorithmically at the moment when it is switched on.

From this I can conclude that our algorithm is still more performant, but it 
obviously needs modification of the existing multithreading system. In this 
situation we need your advice. In theory, we can figure out and write a more 
efficient and smarter algorithm for multithreading than our current one. In 
practice, I'm sure the best way forward would be to collaborate with someone 
responsible for FFT in Scipy or NumPy so that we can test our algorithm with 
their multithreading; I'm sure this would give the best possible 
performance at the moment in general. I propose this option instead of writing our 
own multithreading separately, as the goal of our work is to embed this in NumPy 
so that as many people as possible can use a more efficient algorithm for 
their work. And if we write our multithreading first, we will then have to 
switch to the NumPy version to synthesise the package anyway. So I'm asking 
for your feedback on our memory usage and operation efficiency results, to 
decide together the next steps of our hopefully collaborative work, if you're interested 
in doing so. Thank you for your time.

Regards, 

Alexander
___
NumPy-Discussion mailing list -- numpy-discussion@python.org
To unsubscribe send an email to numpy-discussion-le...@python.org
https://mail.python.org/mailman3/lists/numpy-discussion.python.org/
Member address: arch...@mail-archive.com


[Numpy-discussion] Re: Improved 2DFFT Approach

2024-03-14 Thread Alexander Levin via NumPy-Discussion
Hi Stefan, 

Indeed you're right, the underlying formula was initially created by V. 
Tutatchikov for power-of-two matrices. The initial butterfly approach requires 
a recursive breakdown to 2x2 matrices in order to proceed with precalculation of the 
roots of unity (exactly what provides the aforementioned performance 
advantage). 

I did my own research on implementations where the size is not a power of two 
and they still implement the butterfly; usually they proceeded with zero 
padding to build up to the closest power of 2 (which is a mess at bigger sizes) 
or tried dropping the columns and building them back, which is also not a 
brilliant solution. At some point I found patent US 8,484,274 B2, where 
they discussed the possible padding options for such matrices. The methodology 
was picked depending on the actual size of the signal/data; therefore, in case you 
proceed with implementing the butterfly with such an approach, you'd need to write 
several cases and check the size. Theoretically, it's possible, though I can't 
really say whether this particular case would give much more performance advantage 
rather than the same or worse. 

Yep, indeed the intent is to generate a random matrix, so our tests can be 
as objective as they can be. I appreciate your review and will also run the memory 
comparison against rfft (I hope tomorrow). 

So for now I'm sure this method shows better performance on 
power-of-two square matrices as well as rectangular matrices of size 2^n x 2^m 
(this was tested during the development process). Speaking of other sizes, I 
mentioned above that I have some thoughts, but it's very case sensitive in 
terms of specific type of signal we are working with.  I would like to 
emphasize that regardless of my genuine desire to create the best possible 
version for the user, my work is not limited by my desire but constrained by 
capabilities. As you may have noticed earlier, the multithreading implemented 
by the authors of Scipy surpasses the version created by us. Considering the 
matrix dimension sensitivity of the butterfly method, I am ready to share my 
thoughts regarding specific data sizes and use cases. However, I cannot readily 
provide an optimal solution for all cases, otherwise, I would have implemented 
it first and then presented it to you. At the moment, I believe this is the 
only significant vulnerability in the mathematics we have presented. I can't provide 
much feedback on Bluestein / Chirp-Z at this very moment, but I'll research 
this matter, and if in some case it solves our issue I will definitely implement 
it.

At this very point, I believe that if our method, after the corrections discussed above, 
still has some performance advantage (even only on matrices of simpler sizes 
like powers of two), it is worth embedding it, even if only behind 'if' cases, in my 
opinion. Yet I'm only a contributor, and that's why I'm discussing this matter 
with you. 

I want to say that I appreciate your feedback and your time, and will be waiting 
for your response.

Regards,

Alexander
___
NumPy-Discussion mailing list -- numpy-discussion@python.org
To unsubscribe send an email to numpy-discussion-le...@python.org
https://mail.python.org/mailman3/lists/numpy-discussion.python.org/
Member address: arch...@mail-archive.com


[Numpy-discussion] Re: Improved 2DFFT Approach

2024-03-15 Thread Alexander Levin via NumPy-Discussion
Hi Stéfan,

Update: 

Indeed, rfft2 has equal memory usage with our fft2d in terms of reals. Thanks, 
Stefan. 

To this moment, I believe the results are the following:

> scipy time outperformance on rectangular signals with sides of power-of-two. 
> equal memory usage with rfft2

In my eyes, it's worth trying to put our algorithm and scipy's multithreading 
together; considering previous results, I believe it'll show major performance 
improvements. In case it does, I still think it's worth trying to put the 
Cooley-Tukey operation to work for the cases of the mentioned signals. So 
I suggest we try testing our code as a part of numpy/scipy; to be honest, I've 
really lost track of whether this thread is about numpy or scipy embedding. 

I believe that if we could place the butterfly algorithm into scipy and add a 
size check for the matrix, we would rather win some performance 
than lose. I think any advantage in the performance of the algorithm is important, 
as long as the balance of memory usage and time is still maintained. I suppose 
in terms of algorithm performance, one small step for us is a giant leap for the 
projects built on top.

Please let me know if you share this opinion and whether we should try testing. 

Regards,

Alexander
___
NumPy-Discussion mailing list -- numpy-discussion@python.org
To unsubscribe send an email to numpy-discussion-le...@python.org
https://mail.python.org/mailman3/lists/numpy-discussion.python.org/
Member address: arch...@mail-archive.com


[Numpy-discussion] Accelerated NumPy with Weld compiler

2024-03-21 Thread Hemant Singh via NumPy-Discussion
Stanford and MIT jointly developed the Weld language and compiler to run NumPy 
and Pandas significantly faster, 10x-250x.
My company has changed the compiler to be production ready. Anyone interested 
in trialing this compiler, please contact me with your work email.

Best,

Hemant
___
NumPy-Discussion mailing list -- numpy-discussion@python.org
To unsubscribe send an email to numpy-discussion-le...@python.org
https://mail.python.org/mailman3/lists/numpy-discussion.python.org/
Member address: arch...@mail-archive.com


[Numpy-discussion] Should we add a NumPy default protocol version for np.save?

2024-05-07 Thread Chunqing Shan via NumPy-Discussion
Currently, when NumPy saves data using pickle, it hard-codes the protocol
version to 3, which was the default value from Python 3.0 to Python 3.7.
However, since Python 3.7 has reached its end-of-life (EOL), there are no
actively maintained Python versions that default to using pickle protocol 3.  

  

By allowing NumPy to use the default pickle protocol version, objects larger
than 4GB can be pickled, resolving the issue described in #26224 [1]. Although
this new protocol version is incompatible with Python 3.3 and earlier
versions, Python 3.3 has long reached its EOL. Therefore, there is no need to
force pickle protocol 3 when saving data using pickle.  

  

One of the reasons for having a hard-coded version is that upstream Python may
bump the default version to a new version that will break the compatibility
between two supported versions of NumPy. This scenario won't happen as Python
core developers promised that it's a policy to only use the protocol supported
by all currently supported versions of Python as default.[2][3]  

  

Another reason for having a different NumPy default version of pickle protocol
is that because NumPy does not support every old Python version, we can
actually bump the default version earlier than upstream Python to get the
performance gains.  

  

I have done some experiments[4] that show while pickle protocol 4 improved
about 4% for overall np.save performance, protocol 5 did not have that kind of
performance improvement. Therefore, I'm not sure whether it is worth it to add
a NumPy default version of the pickle protocol. Protocol 5 uses out-of-band buffers,
which may increase performance for lots of small pickles, but since np.save's
typical usage is saving to a file, filesystem overhead in creating lots of
small files is the main bottleneck.  

  

There are two possible solutions to this problem. One is to just follow the
upstream protocol version(which is currently 4) and expect them to bump it
wisely when they should. The second is to add a pickle_protocol keyword to the
numpy.save function, defaulting to the highest version that the oldest Python
version NumPy supports (which is 5). Since this introduces some complexity
into a pretty base interface for NumPy, I believe it needs members of NumPy to
decide which way to go. I can finish the documentation (with an explanation of
performance differences on different protocol versions of pickle) and add this
to the release checklist of NumPy and code if we decide to add this keyword.  

  

I can also see if we can utilize pickle protocol 5's improvement with reuse of
out-of-band buffers to keep things in cache and improve performance in another
PR, if we decide to add NumPy's own default version of pickle protocol and set
it to 5.  

  

Best regards,  

Chunqing Shan  

  

1.  

2.  

3.  

4. With  

a = np.array(np.zeros((1024, 1024, 24, 7)), dtype=object) # 1.5GB  

and time the following statement (/dev/shm is configured as a tmpfs)  

np.save('/dev/shm/a.npy', a)  

For protocol 3, on average 4.25s is required; for protocol 4, on average
4.07s; for protocol 5, on average 4.06s. This result is acquired on a Xeon
E-2286G bare metal with 2 16GB 2667 MT/s DDR4 ECC, with Python 3.11.2 and
NumPy 1.26.4.  
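As a side note (my own suggestion, not part of the proposal above): until np.save
grows such a keyword, one can already sidestep the protocol-3 limit for large
object arrays by pickling them directly, e.g.

import pickle

with open('/dev/shm/a.pkl', 'wb') as f:
    pickle.dump(a, f, protocol=5)  # protocol >= 4 supports objects larger than 4 GiB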

___
NumPy-Discussion mailing list -- numpy-discussion@python.org
To unsubscribe send an email to numpy-discussion-le...@python.org
https://mail.python.org/mailman3/lists/numpy-discussion.python.org/
Member address: arch...@mail-archive.com


[Numpy-discussion] Adding bfill() to numpy.

2024-05-20 Thread Raquel Braunschweig via NumPy-Discussion
Hello everyone,

My colleague and I will be opening a Pull Request (PR) about adding a bfill() 
(backward fill) function to NumPy. This function is designed to fill NaN values 
in an array by propagating the next valid observation backward along a 
specified axis. We believe this addition will be highly useful for data 
preprocessing and manipulation tasks.

Here are some key points regarding our proposed implementation:

Function Explanation: The bfill() function identifies NaN values in an array 
and replaces them by copying the next valid value in the array backwards. Users 
can specify the axis along which the filling should be performed, providing 
flexibility for different data structures.
Use Cases: This function is particularly beneficial in time series analysis, 
data cleaning, and preparing datasets for machine learning models.
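For discussion purposes, a minimal 1-D sketch (my own illustration, not the
authors' implementation) of the described behaviour built from existing NumPy
primitives:

import numpy as np

def bfill_1d(a):
    a = np.asarray(a, dtype=float)
    idx = np.arange(a.size)
    valid = ~np.isnan(a)
    # For each position, the index of the next valid entry (a.size if there is none).
    next_valid = np.minimum.accumulate(np.where(valid, idx, a.size)[::-1])[::-1]
    out = a.copy()
    fillable = next_valid < a.size
    out[fillable] = a[next_valid[fillable]]
    return out

print(bfill_1d([np.nan, 1.0, np.nan, np.nan, 4.0, np.nan]))
# -> [ 1.  1.  4.  4.  4. nan]  (the trailing NaN stays, as there is no later valid value)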

We are looking forward to your feedback and suggestions.

Thank you for your attention and we appreciate your support.

Best regards,
Raquel and Gonçalo
___
NumPy-Discussion mailing list -- numpy-discussion@python.org
To unsubscribe send an email to numpy-discussion-le...@python.org
https://mail.python.org/mailman3/lists/numpy-discussion.python.org/
Member address: arch...@mail-archive.com


[Numpy-discussion] Re: Adding bfill() to numpy.

2024-05-22 Thread Raquel Braunschweig via NumPy-Discussion
Thank you for the suggestion. We will look into it!
___
NumPy-Discussion mailing list -- numpy-discussion@python.org
To unsubscribe send an email to numpy-discussion-le...@python.org
https://mail.python.org/mailman3/lists/numpy-discussion.python.org/
Member address: arch...@mail-archive.com


[Numpy-discussion] Re: next Documentation team meeting at 7 PM UTC - New Time!

2024-06-17 Thread Kakaire Steven via NumPy-Discussion

On 2024-06-16 18:33, Mukulika Pahari wrote:

Hi all,

Our next Documentation Team meeting will happen on *Monday, June 17* at 
*7PM UTC*.


I picked the new time trying my best to accommodate my and other's 
schedules (thanks to those who responded to the poll). Sorry to those 
who can't make the new time- hopefully we can collaborate through the 
other communication channels.


All are welcome - you don't need to already be a contributor to join. 
If you have questions or are curious about what we're doing, we'll be 
happy to meet you!


If you wish to join on Zoom, use this (updated) link:
https://numfocus-org.zoom.us/j/85016474448?pwd=TWEvaWJ1SklyVEpwNXUrcHV1YmFJQ...

Here's the permanent hackmd document with the meeting notes (still being 
updated):
https://hackmd.io/oB_boakvRqKR-_2jRV-Qjg

Hope to see you around!

Best wishes,
Mukulika
___
NumPy-Discussion mailing list -- numpy-discussion@python.org
To unsubscribe send an email to numpy-discussion-le...@python.org
https://mail.python.org/mailman3/lists/numpy-discussion.python.org/
Member address: steven.kaka...@mak.ac.ug

Thanks for reviewing the time. I hope all can find it favorable.
___
NumPy-Discussion mailing list -- numpy-discussion@python.org
To unsubscribe send an email to numpy-discussion-le...@python.org
https://mail.python.org/mailman3/lists/numpy-discussion.python.org/
Member address: arch...@mail-archive.com


[Numpy-discussion] Comparison between numpy scalars returns numpy bool class and not native python bool class

2024-06-27 Thread Stefano Miccoli via NumPy-Discussion
It is well known that ‘np.bool' is not interchangeable with python ‘bool’, and 
in fact 'issubclass(np.bool, bool)’ is false.

On the contrary, numpy floats are subclassing python 
floats—'issubclass(np.float64, float) is true—so I’m wondering if the fact that 
scalar comparison returns a np.bool breaks the Liskov substitution principle. 
In fact  ’(np.float64(1) > 0) is True’ is unexpectedly false.

I was hit by this behaviour because in python structural pattern matching, the 
‘a > 1’ subject will match neither ‘True’ nor ‘False’ if ‘a' is a numpy 
scalar. In this short example

import numpy as np
a = np.float64(1)
assert isinstance(a, float)
match a > 1:
    case True | False:
        print('python float')
    case _:
        print('Huh?: numpy float')

the default clause is matched. If we set instead ‘a = float(1)’, the first 
clause will be matched. The surprise factor is quite high here, in my opinion.
(Let me add that ‘True', ‘False', ‘None' are special in python structural 
pattern matching, because they are matched by identity and not by equality.)

I’m not sure if this behaviour can be avoided, or if we have to live with the 
fact that numpy floats are to be kept well contained and never mixed with 
python floats.

Stefano

___
NumPy-Discussion mailing list -- numpy-discussion@python.org
To unsubscribe send an email to numpy-discussion-le...@python.org
https://mail.python.org/mailman3/lists/numpy-discussion.python.org/
Member address: arch...@mail-archive.com


[Numpy-discussion] Re: Comparison between numpy scalars returns numpy bool class and not native python bool class

2024-06-28 Thread Stefano Miccoli via NumPy-Discussion


> On 27 Jun 2024, at 23:48, Aaron Meurer  wrote:
> 
> Apparently the reason this happens is that True, False, and None are
> compared using 'is' in structural pattern matching (see

Please let me stress that the ‘match/case’ snippet was only a concrete example 
of a situation in which, say ‘f(a)’ gives the correct result when ‘a’ is a 
‘float’ instance and breaks down when ‘a’ is a ‘np.float64’ instance.
Now the fact that numpy floats are subclasses of python floats is quite a 
strong promise that this should never be the case…
Realistically this can be solved in a couple of ways.

(i) Refactoring ‘f(a)’ so that it is aware of the numpy float quirks… not 
always possible, especially if ‘f(a)’ belongs to an external package.

(ii) Sanitizing numpy floats, let's say by ‘f(a.item())’ in the calling code.

(iii) Ensuring that scalar comparisons always return python bools and not 
‘np.bool'


(i) and (ii) are quite simple user-side workarounds, but sometimes the surprise 
factor is high, as in the given code snippet. 
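For instance, a minimal sketch (my own) of workaround (ii) applied to the earlier
match example:

import numpy as np

a = np.float64(1)
match bool(a > 1):                   # sanitize the NumPy scalar before matching
    case True | False:
        print('plain Python bool')   # now matched, since bool() returns the singleton False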

On the contrary (iii) is a radical solution on the library side, but I’m not 
sure if it’s worth implementing for a few edge cases. In fact  ‘b is True’ is 
an anti-pattern in python, and probably the places in which this behaviour 
surfaces should be sparse.

Stefano

___
NumPy-Discussion mailing list -- numpy-discussion@python.org
To unsubscribe send an email to numpy-discussion-le...@python.org
https://mail.python.org/mailman3/lists/numpy-discussion.python.org/
Member address: arch...@mail-archive.com


[Numpy-discussion] Re: Policy on AI-generated code

2024-07-03 Thread Loïc Estève via NumPy-Discussion
Hi,

in scikit-learn, more of a FYI than some kind of policy (amongst other
things it does not even mention explicitly "AI" and avoids the licence
discussion), we recently added a note in our FAQ about "fully automated
tools":
https://github.com/scikit-learn/scikit-learn/pull/29287

From my personal experience in scikit-learn, I am very skeptical about
the quality of this kind of contribution so far ... but, you know, the future
may well prove me very wrong.

Cheers,
Loïc

> Hi,
>
> We recently got a set of well-labeled PRs containing (reviewed)
> AI-generated code:
>
> https://github.com/numpy/numpy/pull/26827
> https://github.com/numpy/numpy/pull/26828
> https://github.com/numpy/numpy/pull/26829
> https://github.com/numpy/numpy/pull/26830
> https://github.com/numpy/numpy/pull/26831
>
> Do we have a policy on AI-generated code?   It seems to me that
> AI-code in general must be a license risk, as the AI may well generate
> code that was derived from, for example, code with a GPL-license.
>
> Cheers,
>
> Matthew
> ___
> NumPy-Discussion mailing list -- numpy-discussion@python.org
> To unsubscribe send an email to numpy-discussion-le...@python.org
> https://mail.python.org/mailman3/lists/numpy-discussion.python.org/
> Member address: loic.est...@ymail.com
___
NumPy-Discussion mailing list -- numpy-discussion@python.org
To unsubscribe send an email to numpy-discussion-le...@python.org
https://mail.python.org/mailman3/lists/numpy-discussion.python.org/
Member address: arch...@mail-archive.com


[Numpy-discussion] Accessing real and imaginary parts of npy_complex(nbits)

2024-07-08 Thread Brendan Murphy via NumPy-Discussion
Hi,

This is my first look at the numpy internals, so 

I'm trying to help update pytensor to be compatible with numpy 2.0, and we have 
some structs that inherit from npy_complex64 and npy_complex128, which 
currently use .real and .imag to access real and imaginary parts.

Assuming a 64 bit system, it is easy to update the code using npy_crealf (etc) 
for the struct that inherits from npy_complex64, and so on. (I'll put an 
example of the code at the bottom of the post.)

Since numpy doesn't assume a 64 bit system, I'm writing some aliases for 
npy_crealf etc., depending on NPY_SIZEOF_FLOAT etc. 

I'm wondering if there is a smarter way to do this.

Also, the pytensor code redefines complex arithmetic in terms of the standard 
"math" definitions. For C99 complex types, this can be achieved using #pragma 
STDC CX_LIMITED_RANGE (in theory, but really depending on the compiler). Is 
there any way to ask numpy to use this directive? (Googling says this makes 
complex arithmetic 3-5 times faster. The C99 complex arithmetic tries to avoid 
overflow, and this directive is only recommended if you know that won't be an 
issue. I don't know if we can assume that, but this code has been this way 
since it was added to Theano.)

Example code:

struct pytensor_complex64 : public npy_complex64 {
  typedef pytensor_complex64 complex_type;
  typedef npy_float32 scalar_type;

  complex_type operator+(const complex_type &y) const {
    complex_type ret;
    // ret.real = this->real + y.real;
    // ret.imag = this->imag + npy_cimagf(y);
    npy_csetrealf(&ret, npy_crealf(*this) + npy_crealf(y));
    npy_csetimagf(&ret, npy_cimagf(*this) + npy_cimagf(y));
    return ret;
  }

  complex_type operator-() const {
    complex_type ret;
    npy_csetrealf(&ret, -npy_crealf(*this));
    npy_csetimagf(&ret, -npy_cimagf(*this));
    return ret;
  }

  bool operator==(const complex_type &y) const {
    return (npy_crealf(*this) == npy_crealf(y)) &&
           (npy_cimagf(*this) == npy_cimagf(y));
  }

  bool operator==(const scalar_type &y) const {
    // Equal to a real scalar only if the imaginary part is zero.
    return (npy_crealf(*this) == y) && (npy_cimagf(*this) == 0);
  }

  complex_type operator-(const complex_type &y) const {
    complex_type ret;
    npy_csetrealf(&ret, npy_crealf(*this) - npy_crealf(y));
    npy_csetimagf(&ret, npy_cimagf(*this) - npy_cimagf(y));
    return ret;
  }

  complex_type operator*(const complex_type &y) const {
    complex_type ret;
    npy_csetrealf(&ret, npy_crealf(*this) * npy_crealf(y) - npy_cimagf(*this) * npy_cimagf(y));
    npy_csetimagf(&ret, npy_crealf(*this) * npy_cimagf(y) + npy_cimagf(*this) * npy_crealf(y));
    return ret;
  }

  complex_type operator/(const complex_type &y) const {
    // (a+bi)/(c+di) = ((ac+bd) + (bc-ad)i) / (c^2+d^2)
    complex_type ret;
    scalar_type y_norm_square = npy_crealf(y) * npy_crealf(y) + npy_cimagf(y) * npy_cimagf(y);
    npy_csetrealf(&ret, (npy_crealf(*this) * npy_crealf(y) + npy_cimagf(*this) * npy_cimagf(y)) / y_norm_square);
    npy_csetimagf(&ret, (npy_cimagf(*this) * npy_crealf(y) - npy_crealf(*this) * npy_cimagf(y)) / y_norm_square);
    return ret;
  }

  template <typename T> complex_type &operator=(const T &y);

  pytensor_complex64() {}

  template <typename T> pytensor_complex64(const T &y) { *this = y; }

  template <typename TR, typename TI>
  pytensor_complex64(const TR &r, const TI &i) {
    npy_csetrealf(this, r);
    npy_csetimagf(this, i);
  }
};
___
NumPy-Discussion mailing list -- numpy-discussion@python.org
To unsubscribe send an email to numpy-discussion-le...@python.org
https://mail.python.org/mailman3/lists/numpy-discussion.python.org/
Member address: arch...@mail-archive.com


[Numpy-discussion] Re: Accessing real and imaginary parts of npy_complex(nbits)

2024-07-09 Thread Brendan Murphy via NumPy-Discussion
I can partially answer my own questions here:

1) To avoid figuring out the type underlying npy_complex64 etc, the following 
macro seems to work:

  #define set_real(X, Y) _Generic((X), \
  npy_cfloat: npy_csetrealf, \
  npy_cdouble: npy_csetreal, \
  npy_clongdouble: npy_csetreall \
)((X), (Y))

#define set_imag(X, Y) _Generic((X), \
  npy_cfloat: npy_csetimagf, \
  npy_cdouble: npy_csetimag, \
  npy_clongdouble: npy_csetimagl \
)((X), (Y))

#define get_real(X) _Generic((X), \
  npy_cfloat: npy_crealf, \
  npy_cdouble: npy_creal, \
  npy_clongdouble: npy_creall \
)(X)

#define get_imag(X) _Generic((X), \
  npy_cfloat: npy_cimagf, \
  npy_cdouble: npy_cimag, \
  npy_clongdouble: npy_cimagl \
)(X)

Since we're using C++ for pytensor, overloading inline functions for get_real, 
set_real, etc. would also be an option.

For 2), I realised that scalarmath.c.src already redefines the arithmetic of 
complex types. I was assuming that the built in C99 operations were being used 
(and if __cplusplus is defined, I guess I was assuming that the structs were 
cast to C complex types).

This post suggests this gives better performance than the built in C99 
operations: 
https://stackoverflow.com/questions/11076924/how-to-use-cx-limited-range-on-off

Using the compiler flag they suggest could mean that numpy doesn't need to 
redefine the arithmetic operators for complex numbers, although I don't know if 
there is a clang equivalent, and I suppose it isn't much code saved.
___
NumPy-Discussion mailing list -- numpy-discussion@python.org
To unsubscribe send an email to numpy-discussion-le...@python.org
https://mail.python.org/mailman3/lists/numpy-discussion.python.org/
Member address: arch...@mail-archive.com


[Numpy-discussion] Re: ENH: Uniform interface for accessing minimum or maximum value of a dtype

2024-08-27 Thread Hameer Abbasi via NumPy-Discussion
I’d advocate for something like a `DTypeInfo` object in the Array API itself, 
with `max_value` and `min_value` being members. Of course, one would have to 
imagine how this would work with complex-valued dtypes, but I'd like an API that 
returns an object rather than a million different calls, similar to `finfo` and 
`iinfo`, but unified.
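For context, a minimal sketch (my own) of the dtype-dependent branching,
described in the quoted message below, that such a unified object would replace:

import numpy as np

def dtype_min(dt):
    dt = np.dtype(dt)
    if dt == np.bool_:
        return False
    if np.issubdtype(dt, np.integer):
        return np.iinfo(dt).min
    if np.issubdtype(dt, np.floating):
        return np.finfo(dt).min
    raise TypeError(f"no obvious minimum for dtype {dt}")

print(dtype_min('int32'))    # -2147483648
print(dtype_min('float64'))  # the most negative finite float64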

> Am 25.08.2024 um 21:50 schrieb Lucas Colley :
> 
> +1 for the general idea!
> 
> It may be nice to have such a function which sits at the top level of the 
> API, to fit into 
> https://data-apis.org/array-api/draft/API_specification/data_type_functions.html
>  nicely. However, ‘min_value’ or ‘min‘ won’t do then - we’d probably need to 
> include ‘dtype’ in the name somewhere. But I don’t really like 
> `np.min_dtype(dt)`. Maybe `np.min_dtype_value(dt)`?
> 
> Cheers,
> Lucas
> 
>> On 25 Aug 2024, at 20:59, Carlos Martin  wrote:
>> 
>> As discussed 
>> [here](https://github.com/numpy/numpy/issues/5032#issuecomment-1830838701), 
>> [here](https://github.com/numpy/numpy/issues/5032#issuecomment-2307927804), 
>> and 
>> [here](https://github.com/google/jax/issues/18661#issuecomment-1829031914), 
>> I'm interested in a uniform interface for accessing the minimum or maximum 
>> value of a given dtype.
>> 
>> Currently, this requires branching on the type of dtype (boolean, integer, 
>> or floating point) and then (for the latter two) calling either 
>> [iinfo](https://numpy.org/doc/stable/reference/generated/numpy.iinfo.html) 
>> or 
>> [finfo](https://numpy.org/doc/stable/reference/generated/numpy.finfo.html), 
>> respectively. It would be more ergonomic to have a single, uniform interface 
>> for accessing this information that is dtype-independent.
>> 
>> Possible interfaces include:
>> 
>> ```python3
>> import numpy as np
>> dt = np.dtype('int32')
>> 
>> dt.min
>> np.dtypes.info(dt).min
>> np.dtypes.min(dt)
>> np.dtypes.min_value(dt)
>> ```
>> ___
>> NumPy-Discussion mailing list -- numpy-discussion@python.org 
>> 
>> To unsubscribe send an email to numpy-discussion-le...@python.org 
>> 
>> https://mail.python.org/mailman3/lists/numpy-discussion.python.org/
>> Member address: lucas.coll...@gmail.com 
> ___
> NumPy-Discussion mailing list -- numpy-discussion@python.org 
> 
> To unsubscribe send an email to numpy-discussion-le...@python.org 
> 
> https://mail.python.org/mailman3/lists/numpy-discussion.python.org/
> Member address: hameerabb...@yahoo.com 
___
NumPy-Discussion mailing list -- numpy-discussion@python.org
To unsubscribe send an email to numpy-discussion-le...@python.org
https://mail.python.org/mailman3/lists/numpy-discussion.python.org/
Member address: arch...@mail-archive.com


[Numpy-discussion] Re: Adding `P.coef_natural` property to polynomials

2024-09-01 Thread oc-spam66--- via NumPy-Discussion
I can summarize the different possibilities/proposals:
(A) Create new properties: add a `P.coef_natural` property, with a suitable 
documentation ; maybe also add a `P.coef_internal` property. There would be no 
change to the existing code (only addition of properties).
(B) Change `P.coef` attribute into a property, with a suitable documentation. 
Hide `P.coef` attribute into `P._coef` (change existing code). Do not create 
more properties (unlike A).

- About (A), I don't think that adding `P.coef_natural` would add a risk.
- About (B), it may be appreciated that the API does not change (does not 
occupy more namespace)
- Both (A) and (B) would help basic users to get out of the `P.coef` attribute 
confusion.

Side remark (not important):
> "natural" coefficients make very little if any sense for some of the other 
> polynomial subclasses, such as Chebyshev -- for those, there's nothing 
> natural about them!
Are you sure? Can they not be the weights at different order of approximation 
of a solution?
___
NumPy-Discussion mailing list -- numpy-discussion@python.org
To unsubscribe send an email to numpy-discussion-le...@python.org
https://mail.python.org/mailman3/lists/numpy-discussion.python.org/
Member address: arch...@mail-archive.com


[Numpy-discussion] Adding `P.coef_natural` property to polynomials

2024-08-31 Thread oc-spam66--- via NumPy-Discussion
Hello,
I would like to add a property `P.coef_natural` to polynomials. Would you 
accept it?

Reason:
Most people who took introductory courses on polynomials expect `P.coef` to return the 
natural coefficients. They face a huge amount of confusion because this is not the case 
and because the `coef` attribute is not linked to any documentation. Moreover, 
the documentation of the class does not explain the situation clearly either.
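To illustrate the confusion (my own example; `convert()` is the current way to
recover what this proposal calls the natural coefficients):

import numpy as np

x = np.linspace(0, 10, 11)
y = 2.0 + 3.0 * x
P = np.polynomial.Polynomial.fit(x, y, deg=1)
print(P.coef)            # coefficients in the internal, scaled variable -- not 2 and 3
print(P.convert().coef)  # [2., 3.], the coefficients most users expect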

Solution:
I propose to add a property `P.coef_natural` to polynomials, with the suitable 
documentation. In this situation, basic users will have an easier path to 
understanding.

Proposed implementation:
https://github.com/numpy/numpy/pull/27232/files

Further possible options:
- Add a property `P.coef_internal` that would copy `P.coef` and provide 
documentation.
- Hide `P.coef` into `P._coef` and leave only `P.coef_natural` and 
`P.coef_internal` visible.

(1) Can you comment and tell if you accept this? 
(2) Any opinion about whether the options should be implemented?

The main idea is that we should not require basic users to be specialists of 
the subtleties of the module.

Olivier
___
NumPy-Discussion mailing list -- numpy-discussion@python.org
To unsubscribe send an email to numpy-discussion-le...@python.org
https://mail.python.org/mailman3/lists/numpy-discussion.python.org/
Member address: arch...@mail-archive.com


[Numpy-discussion] Re: What should remain on PyPi

2024-09-03 Thread Peter Cock via NumPy-Discussion
If I recall correctly, people were building against the Numpy 2.0.0 release
candidates in particular. In hindsight keeping those on PyPI might have
been better. A formal NEP/SPEC seems a good idea.

Peter

On Tue, Sep 3, 2024 at 6:20 PM matti picus  wrote:

> I would prefer we never delete packages once we upload them to PyPI,
> unless there are security issues with them. As Sean demonstrated,
> someone somewhere is going to be using them, and deleting packages
> will inevitably break something.
> Matti
>
>
> On Tue, Sep 3, 2024 at 7:44 PM Sean Gillies 
> wrote:
> >
> > Hi Chuck,
> >
> > I've got a version of a package on PyPI that requires Numpy 2.0.0rc1 at
> build time. Not the best decision in hindsight, but I assumed that Numpy
> was the kind of project that wouldn't remove published distributions unless
> there were security issues. It had not, up to today, right? Would it be
> possible to restore 2.0.0rc1?
> >
> > On Tue, Sep 3, 2024 at 9:20 AM Charles R Harris <
> charlesr.har...@gmail.com> wrote:
> >>
> >> Hi All,
> >>
> >> I just got through deleting a bunch of pre-releases on PyPi and it
> occurred to me that we should have a policy as to what releases should be
> kept. I think that reproducibility requires that we keep all the major and
> micro versions, but if so, we should make that an official guarantee.
> Perhaps a short NEP? This might even qualify for an SPEC. Thoughts?
> >>
> >> Chuck
> >
> >
> > --
> > Sean Gillies
> > ___
> > NumPy-Discussion mailing list -- numpy-discussion@python.org
> > To unsubscribe send an email to numpy-discussion-le...@python.org
> > https://mail.python.org/mailman3/lists/numpy-discussion.python.org/
> > Member address: matti.pi...@gmail.com
> ___
> NumPy-Discussion mailing list -- numpy-discussion@python.org
> To unsubscribe send an email to numpy-discussion-le...@python.org
> https://mail.python.org/mailman3/lists/numpy-discussion.python.org/
> Member address: p.j.a.c...@googlemail.com
>
___
NumPy-Discussion mailing list -- numpy-discussion@python.org
To unsubscribe send an email to numpy-discussion-le...@python.org
https://mail.python.org/mailman3/lists/numpy-discussion.python.org/
Member address: arch...@mail-archive.com


[Numpy-discussion] next NumPy triage meeting - September 18th, 2024 at 18:00 UTC

2024-09-15 Thread Inessa Pawson via NumPy-Discussion
The next NumPy triage meeting will be held this Wednesday, September 18th
at 18:00 UTC. This is a meeting where we synchronously triage prioritized
PRs and issues.
Join us via Zoom:
https://numfocus-org.zoom.us/j/82096749952?pwd=MW9oUmtKQ1c3a2gydGk1RTdYUUVXZz09
.
Everyone is welcome to attend and contribute to a conversation.
Please notify us of issues or PRs that you’d like to have reviewed by
adding a GitHub link to them in the meeting agenda:
https://hackmd.io/68i_JvOYQfy9ERiHgXMPvg.

-- 
Cheers,
Inessa

Inessa Pawson
GitHub: inessapawson
___
NumPy-Discussion mailing list -- numpy-discussion@python.org
To unsubscribe send an email to numpy-discussion-le...@python.org
https://mail.python.org/mailman3/lists/numpy-discussion.python.org/
Member address: arch...@mail-archive.com


[Numpy-discussion] next NumPy Newcomers' Hour - September 19th, 2024 at 10 pm UTC

2024-09-16 Thread Inessa Pawson via NumPy-Discussion
Our next Newcomers' Hour will be held this Thursday, September 19th at 10
pm UTC. Stop by to ask questions, share your progress, celebrate success,
or just to say hi.

To add to the meeting agenda the topics you’d like to discuss, follow the
link: https://hackmd.io/3f3otyyuTte3FU9y3QzsLg?both.

Join the meeting via Zoom:
https://us06web.zoom.us/j/82563808729?pwd=ZFU3Z2dMcXBGb05YemRsaGE1OW5nQT09.

-- 
Cheers,
Inessa

Inessa Pawson
GitHub: inessapawson
___
NumPy-Discussion mailing list -- numpy-discussion@python.org
To unsubscribe send an email to numpy-discussion-le...@python.org
https://mail.python.org/mailman3/lists/numpy-discussion.python.org/
Member address: arch...@mail-archive.com


[Numpy-discussion] Re: Adding `P.coef_natural` property to polynomials

2024-09-16 Thread oc-spam66--- via NumPy-Discussion
What do you mean by "changing the API"?
- Case (A): Adding a property `P.coef_natural` is not a change IMO, it is an 
addition.
- Case (B): Do you consider that changing `P.coef` from an attribute to a 
property is a change in the API? It is transparent IMO.
___
NumPy-Discussion mailing list -- numpy-discussion@python.org
To unsubscribe send an email to numpy-discussion-le...@python.org
https://mail.python.org/mailman3/lists/numpy-discussion.python.org/
Member address: arch...@mail-archive.com


[Numpy-discussion] Re: Invalid value encoutered : how to prevent numpy.where to do this?

2023-02-18 Thread Hameer Abbasi via NumPy-Discussion
Hi! You can use a context manager: with np.errstate(all="ignore"): …
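Spelled out for the example in the quoted message below (a sketch of my own; the
np.divide variant avoids the warning without suppressing it globally):

import numpy as np

data = np.array([0.0, 2.0, 4.0])
with np.errstate(divide="ignore", invalid="ignore"):
    result = np.where(data == 0.0, 0.0, 1.0 / data)

# Alternative: only divide where the denominator is non-zero.
result2 = np.divide(1.0, data, out=np.zeros_like(data), where=(data != 0.0))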

Best regards,
Hameer Abbasi
Sent from my iPhone

> Am 18.02.2023 um 16:00 schrieb David Pine :
> 
> I agree.  The problem can be avoided in a very inelegant way by turning 
> warnings off before calling where() and turning them back on afterward, like 
> this
> 
>warnings.filterwarnings("ignore", category=RuntimeWarning)
>result = np.where(x == 0.0, 0.0, 1./data)
>warnings.filterwarnings("always", category=RuntimeWarning)
> 
> But it would be MUCH nicer if there were an optional keyword argument in the 
> where() call.
> 
> Thanks,
> Dave
> ___
> NumPy-Discussion mailing list -- numpy-discussion@python.org
> To unsubscribe send an email to numpy-discussion-le...@python.org
> https://mail.python.org/mailman3/lists/numpy-discussion.python.org/
> Member address: einstein.edi...@gmail.com

___
NumPy-Discussion mailing list -- numpy-discussion@python.org
To unsubscribe send an email to numpy-discussion-le...@python.org
https://mail.python.org/mailman3/lists/numpy-discussion.python.org/
Member address: arch...@mail-archive.com


[Numpy-discussion] Re: removing NUMPY_EXPERIMENTAL_ARRAY_FUNCTION env var

2023-03-10 Thread Hameer Abbasi via NumPy-Discussion
+1 from my side as well.

Sent from my iPhone

> On 10.03.2023 at 19:10, Stephan Hoyer wrote:
> +1 for removing this environment variable. It was never intended to stick around this long.
>
> On Fri, Mar 10, 2023 at 6:48 AM Ralf Gommers wrote:
>> Hi all,
>> In https://github.com/numpy/numpy/pull/23364 we touched on the NUMPY_EXPERIMENTAL_ARRAY_FUNCTION environment variable. This was a temporary feature during the introduction of `__array_function__` (see NEP 18), but we never removed it. I propose we do so now, since it is cumbersome to have around (see gh-23364 for one reason why). GitHub code search shows some usages, but that's mostly old code to explicitly enable it or print diagnostic info it looks like - none of it seemed relevant.
>> In case there is any need for this functionality to disable `__array_function__`, then please speak up. In that case it probably applies to `__array_ufunc__` as well, and there should be a better way to do this than an undocumented environment variable with "experimental" in the name.
>> Cheers,
>> Ralf
NumPy-Discussion mailing list -- numpy-discussion@python.org
To unsubscribe send an email to numpy-discussion-le...@python.org
https://mail.python.org/mailman3/lists/numpy-discussion.python.org/
Member address: arch...@mail-archive.com


[Numpy-discussion] PR-23061

2023-03-25 Thread Matteo Raso via NumPy-Discussion
I have an open PR that's been reviewed but keeps getting dropped. Specifically, 
I've had to make a comment asking for updates on the PR's status 3 times, with 
the last comment going ignored. I'm not upset, since I understand that the team 
is very busy (I actually looked through the archives a while ago and saw that 
NumPy 2.0 is in development, which must be taking up a lot of your time), but 
I'm sure you also understand that I don't want my PR to hang in limbo forever. 
If the team doesn't have time to deal with it now, it should at least be given 
a triage tag so it's properly labelled and put on the backburner.

Best regards, Matteo.

P.S. I originally tried to send this message as an email, but it was instantly 
rejected because I'm not a list member. That's a pretty serious error for a 
public mailing list.
___
NumPy-Discussion mailing list -- numpy-discussion@python.org
To unsubscribe send an email to numpy-discussion-le...@python.org
https://mail.python.org/mailman3/lists/numpy-discussion.python.org/
Member address: arch...@mail-archive.com


[Numpy-discussion] Re: PR-23061

2023-03-25 Thread Peter Cock via NumPy-Discussion
On Sat, Mar 25, 2023 at 12:35 PM Matteo Raso via NumPy-Discussion
 wrote:
>
> P.S. I originally tried to send this message as an email, but it was instantly
> rejected because I'm not a list member. That's a pretty serious error for a
> public mailing list.

That's very normal on a mailing list. Even if you disagree with the design,
the immediate failure message was very clear so you know how to fix it
(sign up first, then resend your message).

And here is the URL to the pull request in question; it is certainly not a 
trivial change for which a quick review and resolution might be expected:

https://github.com/numpy/numpy/pull/23061

Peter
___
NumPy-Discussion mailing list -- numpy-discussion@python.org
To unsubscribe send an email to numpy-discussion-le...@python.org
https://mail.python.org/mailman3/lists/numpy-discussion.python.org/
Member address: arch...@mail-archive.com


[Numpy-discussion] Re: Precision changes to sin/cos in the next release?

2023-05-31 Thread Stefano Miccoli via NumPy-Discussion


On 31 May 2023, at 16:32, numpy-discussion-requ...@python.org wrote:

It seems fairly clear that with this recent change, the feeling is that the 
tradeoff is bad and that too much accuracy was lost, for not enough real-world 
gain. However, we now had several years worth of performance work with few 
complaints about accuracy issues. So I wouldn't throw out the baby with the 
bath water now and say that we always want the best accuracy only. It seems to 
me like we need a better methodology for evaluating changes. Contributors have 
been pretty careful, but looking back at SIMD PRs, there were usually detailed 
benchmarks but not always detailed accuracy impact evaluations.

Cheers,
Ralf


If I can throw my 2 cents in, my feeling is that most users will notice neither 
the decrease in accuracy nor the increase in speed.
(I failed to mention that I'm an engineer, so a few ULPs are almost nothing for 
me; unless I have to solve a very ill-conditioned problem, but then I do not 
blame numpy, but myself, for formulating such a bad model ;-)

The only real problem is for code that relies on these assumptions:

assert np.sin(np.pi/2) == -np.cos(np.pi) == 1

which will fail in numpy==1.25.rc0 but should hold true for numpy~=1.24.3, at 
least on most runtime environments.

I do not have strong feelings on this issue: in an ideal world, code would have 
unit-testing modules and assertions scattered here and there in order to make 
all implicit assumptions explicit (see the sketch below). Adapting to the new 
routines should be fairly simple.
Of course we do not live in an ideal world, and there will definitely be a 
number of users who experience hard-to-debug failures linked to these new 
trig routines.
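
For instance, a minimal sketch of such an explicit, tolerance-based assertion 
using np.testing; the tolerances shown are illustrative assumptions, not a 
NumPy recommendation:

```
import numpy as np

# Exact identity checks such as `np.sin(np.pi/2) == 1` are fragile across
# implementations; stating a tolerance makes the accuracy assumption explicit.
np.testing.assert_allclose(np.sin(np.pi / 2), 1.0, rtol=0, atol=4e-16)
np.testing.assert_allclose(-np.cos(np.pi), 1.0, rtol=0, atol=4e-16)
```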

But again I prefer to remain neutral.

Stefano


___
NumPy-Discussion mailing list -- numpy-discussion@python.org
To unsubscribe send an email to numpy-discussion-le...@python.org
https://mail.python.org/mailman3/lists/numpy-discussion.python.org/
Member address: arch...@mail-archive.com


[Numpy-discussion] Re: Precision changes to sin/cos in the next release?

2023-05-31 Thread Allan, Daniel via NumPy-Discussion
Thanks for your work on this, Sebastian.

I think there is a benefit for new users and learners to have visually 
obviously-correct results for the identities. The SciPy Developer's Guide [1] 
several of us worked on last week uses Snell's Law as a teaching example, and 
it would now give some results that would make newcomers double-take. You could 
argue that it's important to learn early not to expect too much from floats, 
but nonetheless I think something tangible is lost with this change, in a 
teaching context.

[1] https://learn.scientific-python.org/development/tutorials/module/

Dan

Daniel B. Allan, Ph.D (he/him)
Data Science and Systems Integration
NSLS-II
Brookhaven National Laboratory

From: Thomas Caswell 
Sent: Wednesday, May 31, 2023 6:38 PM
To: Discussion of Numerical Python 
Subject: [Numpy-discussion] Re: Precision changes to sin/cos in the next 
release?

Just for reference, this is the current results on the numpy main branch at 
special points:

In [1]: import numpy as np

In [2]: np.sin(0.0)
Out[2]: 0.0

In [3]: np.cos(0.0)
Out[3]: 0.

In [4]: np.cos(2*np.pi)
Out[4]: 0.9998

In [5]: np.sin(2*np.pi)
Out[5]: -2.4492935982947064e-16

In [6]: np.sin(np.pi)
Out[6]: 1.2246467991473532e-16

In [7]: np.cos(np.pi)
Out[7]: -0.9998

In [8]: np.cos(np.pi/2)
Out[8]: 6.123233995736766e-17

In [9]: np.sin(np.pi/2)
Out[9]: 0.9998

In [10]: np.__version__
Out[10]: '2.0.0.dev0+60.g174dfae62'

On Wed, May 31, 2023 at 6:20 PM Robert Kern <robert.k...@gmail.com> wrote:
On Wed, May 31, 2023 at 5:51 PM Benjamin Root <ben.v.r...@gmail.com> wrote:
I think it is the special values aspect that is most concerning. Math is just 
littered with all sorts of identities, especially with trig functions. While I 
know that floating point calculations are imprecise, there are certain 
properties of these functions that still hold, such as going from -1 to 1.

As a reference point on an M1 Mac using conda-forge:
```
>>> import numpy as np
>>> np.__version__
'1.24.3'
>>> np.sin(0.0)
0.0
>>> np.cos(0.0)
1.0
>>> np.sin(np.pi)
1.2246467991473532e-16
>>> np.cos(np.pi)
-1.0
>>> np.sin(2*np.pi)
-2.4492935982947064e-16
>>> np.cos(2*np.pi)
1.0
```

Not perfect, but still right in most places.

FWIW, those ~0 answers are actually closer to the correct answers than 0 would 
be because `np.pi` is not actually π. Those aren't problems in the 
implementations of np.sin/np.cos, just the intrinsic problems with floating 
point representations and the choice of radians which places particularly 
special values at places in between adjacent representable floating point 
numbers.

I'm ambivalent about reverting. I know I would love speed improvements because 
transformation calculations in GIS are slow using numpy, but also some 
coordinate transformations might break because of these changes.

Good to know. Do you have any concrete example that might be worth taking a 
look at in more detail? Either for performance or accuracy.

--
Robert Kern
___
NumPy-Discussion mailing list -- numpy-discussion@python.org
To unsubscribe send an email to numpy-discussion-le...@python.org
https://mail.python.org/mailman3/lists/numpy-discussion.python.org/
Member address: tcasw...@gmail.com


--
Thomas Caswell
tcasw...@gmail.com
___
NumPy-Discussion mailing list -- numpy-discussion@python.org
To unsubscribe send an email to numpy-discussion-le...@python.org
https://mail.python.org/mailman3/lists/numpy-discussion.python.org/
Member address: arch...@mail-archive.com


[Numpy-discussion] Re: mixed mode arithmetic

2023-07-11 Thread Jens Glaser via NumPy-Discussion
Hi Matti,

The documentation for numpy.dot currently states

"""
out
ndarray, optional
Output argument. This must have the exact kind that would be returned if it was 
not used. In particular, it must have the right type, must be C-contiguous, and 
its dtype must be the dtype that would be returned for dot(a,b). This is a 
performance feature. Therefore, if these conditions are not met, an exception 
is raised, instead of attempting to be flexible.
"""

I think this means that if dot(a,b) returned FP32 for FP16 inputs, it would be 
consistent with this API to supply a full precision output array. All that 
would be needed in an actual implementation is a mixed_precision flag (or 
output_dtype option) for this op to override the usual type promotion rules. Do 
you agree?
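
For reference, a minimal sketch of what the quoted contract implies today; the 
matrix shapes are arbitrary, and the exact exception type raised for a 
mismatched output dtype is an assumption on my part:

```
import numpy as np

a = np.ones((4, 4), dtype=np.float16)
b = np.ones((4, 4), dtype=np.float16)

# Under the current promotion rules dot(a, b) yields float16, so only a
# float16 output buffer satisfies the documented requirement.
out16 = np.empty((4, 4), dtype=np.float16)
np.dot(a, b, out=out16)  # accepted

out32 = np.empty((4, 4), dtype=np.float32)
try:
    np.dot(a, b, out=out32)  # rejected today: dtype must match dot(a, b)
except (ValueError, TypeError) as exc:
    print("current behaviour:", exc)
```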

Jens
___
NumPy-Discussion mailing list -- numpy-discussion@python.org
To unsubscribe send an email to numpy-discussion-le...@python.org
https://mail.python.org/mailman3/lists/numpy-discussion.python.org/
Member address: arch...@mail-archive.com


[Numpy-discussion] [JOB] Astropy Research Software engineer

2023-08-04 Thread Aldcroft, Thomas via NumPy-Discussion
Astropy is hiring a Research Software engineer. We are looking for people
who can spend 50-100% of their time on Astropy development in the next 6-9
months.

Qualified candidates can range from software developers with open source
experience to astronomy students with software experience.


If you are interested, please apply! If you know a qualified candidate,
please encourage them to apply: https://jobs.numfocus.org/job/2023-07-31_astropy

The role will include solving some long-standing bugs and issues in Astropy and
some coordinated packages as well as making progress on items that the
community already prioritized on the Astropy roadmap, but that no volunteer
has yet found time to tackle.

Thanks,
Tom Aldcroft, on behalf of the Astropy Project
___
NumPy-Discussion mailing list -- numpy-discussion@python.org
To unsubscribe send an email to numpy-discussion-le...@python.org
https://mail.python.org/mailman3/lists/numpy-discussion.python.org/
Member address: arch...@mail-archive.com


[Numpy-discussion] next NumPy triage meeting - October 16th, 2024 at 18:00 UTC

2024-10-12 Thread Inessa Pawson via NumPy-Discussion
The next NumPy triage meeting will be held this Wednesday, October 16th at
18:00 UTC. This is a meeting where we synchronously triage prioritized PRs
and issues.
Join us via Zoom:
https://numfocus-org.zoom.us/j/82096749952?pwd=MW9oUmtKQ1c3a2gydGk1RTdYUUVXZz09
.
Everyone is welcome to attend and contribute to a conversation.
Please notify us of issues or PRs that you’d like to have reviewed by
adding a GitHub link to them in the meeting agenda:
https://hackmd.io/68i_JvOYQfy9ERiHgXMPvg.

-- 
Cheers,
Inessa

Inessa Pawson
GitHub: inessapawson
___
NumPy-Discussion mailing list -- numpy-discussion@python.org
To unsubscribe send an email to numpy-discussion-le...@python.org
https://mail.python.org/mailman3/lists/numpy-discussion.python.org/
Member address: arch...@mail-archive.com


[Numpy-discussion] Community Review #4 of the NumPy Comics

2024-10-14 Thread Mars Lee via NumPy-Discussion
Hi all,

The “How to Contribute to NumPy” comics are open for review again! Since
the last review, I have added the last 6 pages, back cover and credits
section.

Here’s the issue link: https://github.com/numpy/numpy/issues/27375

Here’s the comic link: https://heyzine.com/flip-book/3e66a13901.html

I have conducted some review sessions at the NumPy Community Call and NumPy
Documentation call this and last month.

The issue will be open for a few more days and will close on Thurs, Oct 17.

Best,

Mars
___
NumPy-Discussion mailing list -- numpy-discussion@python.org
To unsubscribe send an email to numpy-discussion-le...@python.org
https://mail.python.org/mailman3/lists/numpy-discussion.python.org/
Member address: arch...@mail-archive.com


[Numpy-discussion] Re: What to do with np.matrix

2024-10-15 Thread Ralf Gommers via NumPy-Discussion
On Sat, Oct 12, 2024 at 6:23 PM Marten van Kerkwijk 
wrote:

> Hi Dan, others,
>
> Great news that the sparse array implementation is getting there!
> The continued existence of np.matrix has in large part been because of
> sparse matrices, so in some sense the decision depends also on what
> happens to those.
>

+1, this is the critical point. Deprecating `np.matrix` effectively
deprecates SciPy's sparse matrices, which is the wrong way around. We
cannot force SciPy's hand like that, the deprecations in `scipy.sparse`
should come first.

But generally I'm in favour of just deprecating and removing matrix, as
> I don't see any advantages.


Agreed here as well. Once the deprecation of `np.matrix` goes through, just
removing it makes sense to me. There is little value in moving it to an
external package (scipy won't move to that as a dependency, nor would I
expect any other maintained packages to make that move), and it will force
keeping workarounds within NumPy that somehow have to couple to that new
package.

Cheers,
Ralf



> But admittedly I'm biased: I was the one
> that added the PendingDeprecationWarning, and attempted to consolidate
> at least all the tests in matrixlib to make eventual deprecation easier...
>
> Regardless, since there have been 7 years of PendingDeprecationWarning,
> I think changing that to a regular DeprecationWarning should not
> surprise anybody, at least not if they had built a package based on
> np.matrix.
>
> It is maybe good to add that it is less easy to split of matrix into its
> own package than it was for the financial routines: because matrix
> overrides some pretty basic assumptions of arrays (mostly how shapes
> behave), there are bits of special-casing for np.matrix throughout the
> rest of the code, which we would want to remove if np.matrix is no
> longer part of numpy.  I.e., moving it might mean having to define
> ``matrix.__array_function__`` and override those functions. This means a
> new package would need a bit of a champion.
>
> All the best,
>
> Marten
> ___
> NumPy-Discussion mailing list -- numpy-discussion@python.org
> To unsubscribe send an email to numpy-discussion-le...@python.org
> https://mail.python.org/mailman3/lists/numpy-discussion.python.org/
> Member address: ralf.gomm...@gmail.com
>
___
NumPy-Discussion mailing list -- numpy-discussion@python.org
To unsubscribe send an email to numpy-discussion-le...@python.org
https://mail.python.org/mailman3/lists/numpy-discussion.python.org/
Member address: arch...@mail-archive.com


[Numpy-discussion] Re: Endorsing SPECs 1, 6, 7, and 8

2024-10-08 Thread Robert Kern via NumPy-Discussion
On Tue, Oct 8, 2024 at 8:36 AM Nathan via NumPy-Discussion <
numpy-discussion@python.org> wrote:

>
> Since the legacy RNG interface cannot be deprecated and we encourage
> downstream to use it in tests according to the text of NEP 19, I'm not sure
> about the text in SPEC 7 that talks about deprecating using legacy RNGs. Or
> are you saying that we have now reached the point where we can update NEP
> 19 to encourage moving away from the legacy interface?
>

We have always encouraged people to move away from the legacy interface in
their APIs. SPEC 7 recommends a principled way for downstream projects to
implement that move.

NEP 19 acknowledged that sometimes one might still have a use case for
creating a legacy RandomState object and calling it in their tests to
generate test data (but not otherwise pass that RandomState object to the
code under test), but that's not what SPEC 7 addresses. NEP 19 doesn't
really actively recommend the use of RandomState for this purpose, just
acknowledges that it's a valid use case that numpy will continue to support
even while we push for the exclusive use of Generator inside of
library/program code. NEP 19 doesn't need an update for us to endorse SPEC
7 (whether it needs one, separately, to clarify its intent is another
question).
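
As a hedged illustration of the direction SPEC 7 points in (the function and 
variable names below are made up), library code would accept a seed-like 
argument and normalize it with np.random.default_rng, while RandomState stays 
available for producing fixed test data:

```
import numpy as np

def simulate(n, rng=None):
    # default_rng accepts None, an integer seed, or an existing Generator.
    rng = np.random.default_rng(rng)
    return rng.standard_normal(n)

# Legacy interface, still supported for stable test fixtures:
legacy = np.random.RandomState(1234)
fixture = legacy.standard_normal(8)

print(simulate(4, rng=42))
print(fixture[:2])
```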

-- 
Robert Kern
___
NumPy-Discussion mailing list -- numpy-discussion@python.org
To unsubscribe send an email to numpy-discussion-le...@python.org
https://mail.python.org/mailman3/lists/numpy-discussion.python.org/
Member address: arch...@mail-archive.com


[Numpy-discussion] Expected behavior of np.array(..., copy=True)

2024-10-08 Thread Kevin Sheppard via NumPy-Discussion
Can anyone shed some light on the expected behavior of code using
array(..., copy=True) with pandas objects? We ran into this in statsmodels
and I think there are probably plenty of places where we explicitly call
array(..., copy=True) and think we should have a totally independent copy
of the data. One workaround is to use np.require(...,requirements="O") but
it would help to understand the expected behavior.

Here is a simple example:

import numpy as np
import pandas as pd

weeks = 2
now = pd.to_datetime('2024-01-01')
testdata = pd.DataFrame(columns=['dates', 'values'])
rg = np.random.default_rng(0)
testdata['dates'] = pd.date_range(start=now, periods=weeks * 7, freq='D')
testdata['values']=rg.integers(0, 100, size=(weeks * 7))

values = testdata['values']
print("*"*10, " Before ", "*"*10)
print(values.head())
arr = np.array(values, copy=True)
arr.sort()
print("*"*10, " After ", "*"*10)
print(values.head())
print("*"*10, " Flags ", "*"*10)
print(arr.flags)

This produces

**********  Before  **********
0    85
1    63
2    51
3    26
4    30
Name: values, dtype: int64
**********  After  **********
0     1
1     4
2     7
3    17
4    26
Name: values, dtype: int64
**********  Flags  **********
  C_CONTIGUOUS : True
  F_CONTIGUOUS : True
  OWNDATA : False
  WRITEABLE : True
  ALIGNED : True
  WRITEBACKIFCOPY : False
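
For completeness, a minimal sketch of the np.require workaround mentioned above, 
reusing `values` from the example (behaviour as I understand it: the OWNDATA 
requirement forces a copy whenever the converted array would otherwise share 
the pandas buffer):

```
arr = np.require(values, requirements="O")
arr.sort()                      # should no longer mutate the pandas Series
print(arr.flags["OWNDATA"])     # expected: True
print(values.head())            # expected: unchanged
```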

Thanks,
Kevin
___
NumPy-Discussion mailing list -- numpy-discussion@python.org
To unsubscribe send an email to numpy-discussion-le...@python.org
https://mail.python.org/mailman3/lists/numpy-discussion.python.org/
Member address: arch...@mail-archive.com


[Numpy-discussion] next NumPy community meeting - October 9th, 2024 at 6 pm UTC

2024-10-06 Thread Inessa Pawson via NumPy-Discussion
The next NumPy community meeting will be held this Wednesday, October 9th
at 18:00 UTC.
Join us via Zoom:
https://numfocus-org.zoom.us/j/83278611437?pwd=ekhoLzlHRjdWc0NOY2FQM0NPemdkZz09
.
Everyone is welcome and encouraged to attend.
To add to the meeting agenda the topics you’d like to discuss, follow the
link: https://hackmd.io/76o-IxCjQX2mOXO_wwkcpg?both.
For the notes from the previous meetings, visit:
https://github.com/numpy/archive/tree/main/community_meetings.

-- 
Cheers,
Inessa

Inessa Pawson
GitHub: inessapawson
___
NumPy-Discussion mailing list -- numpy-discussion@python.org
To unsubscribe send an email to numpy-discussion-le...@python.org
https://mail.python.org/mailman3/lists/numpy-discussion.python.org/
Member address: arch...@mail-archive.com


[Numpy-discussion] next NumPy Optimization Team meeting - October 7th, 2024 at 5 pm UTC

2024-10-04 Thread Inessa Pawson via NumPy-Discussion
The next NumPy Optimization Team meeting will be held this Monday, October
7th at 17:00 UTC.
Join us via Zoom:
https://numfocus-org.zoom.us/j/81261288210?pwd=iwV99tGSjR61RTGEERKM4QKxe46g1n.1
.
Everyone is welcome and encouraged to attend.
To add to the meeting agenda the topics you’d like to discuss, follow the
link: https://hackmd.io/dVdSlQ0TThWkOk0OkmGsmw?both.
For the notes from the previous meetings, visit:
https://github.com/numpy/archive/tree/main/optim_team_meetings.

-- 
Cheers,
Inessa

Inessa Pawson
GitHub: inessapawson
___
NumPy-Discussion mailing list -- numpy-discussion@python.org
To unsubscribe send an email to numpy-discussion-le...@python.org
https://mail.python.org/mailman3/lists/numpy-discussion.python.org/
Member address: arch...@mail-archive.com


[Numpy-discussion] Re: next Documentation team meeting

2024-10-06 Thread Mukulika Pahari via NumPy-Discussion
Hi all,

Our next Documentation Team meeting will happen on *Monday, October 7* at *7PM 
UTC*. I will not be around for the next few docs meetings but they will 
continue to be hosted by other members.

All are welcome - you don't need to already be a contributor to join. If you 
have questions or are curious about what we're doing, we'll be happy to meet 
you!

If you wish to join on Zoom, use this (updated) link:
https://numfocus-org.zoom.us/j/85016474448?pwd=TWEvaWJ1SklyVEpwNXUrcHV1YmFJQ...

Here's the permanent hackmd document with the meeting notes (still being
updated):
https://hackmd.io/oB_boakvRqKR-_2jRV-Qjg

Hope to see you around!

Best wishes,
Mukulika
___
NumPy-Discussion mailing list -- numpy-discussion@python.org
To unsubscribe send an email to numpy-discussion-le...@python.org
https://mail.python.org/mailman3/lists/numpy-discussion.python.org/
Member address: arch...@mail-archive.com


[Numpy-discussion] Re: Endorsing SPECs 1, 6, 7, and 8

2024-10-07 Thread matti picus via NumPy-Discussion
It seems to me that we should only endorse SPECs that we ourselves
implement, otherwise it is kind of "do as I say, not as I do". For
instance, it would be strange to endorse SPEC0 but stay with NEP 29.
If we are to endorse SPEC0 without changing our version end-of-life
timing, we should at least modify NEP 29 with some commentary about
why we chose not to implement SPEC0. If a SPEC is not relevant,
then I don't think the NumPy project (as a project) can have an
opinion on whether it is "good" for other projects. Individual
contributors can of course endorse whatever they want, but as a
project we should only weigh in when it is relevant to our community
experience
Matti

On Mon, Oct 7, 2024 at 1:04 PM Sebastian Berg
 wrote:
>
> Hi all,
>
> TL;DR: NumPy should endorse some or all of the new SPECs if we like
> them.  If you don't or do like them, please discuss, otherwise I
> suspect we will propose and endorsing them soon and do it if a few core
> maintainers agree.
> ...
> Cheers,
>
> Sebastian
___
NumPy-Discussion mailing list -- numpy-discussion@python.org
To unsubscribe send an email to numpy-discussion-le...@python.org
https://mail.python.org/mailman3/lists/numpy-discussion.python.org/
Member address: arch...@mail-archive.com


[Numpy-discussion] help

2024-10-02 Thread Usha Gayatri via NumPy-Discussion
I am working on a Jupyter notebook in Anaconda Navigator. I have done some
projects in 2021, 2022, 2023 and 2024. When I run my old project, which was
created in 2021, it is giving errors. I am just testing

import numpy as np
import pandas as pd

which is giving an error.

File E:\anaconda3\Lib\site-packages\pandas\_libs\interval.pyx:1, in
init pandas._libs.interval()
ValueError: numpy.dtype size changed, may indicate binary
incompatibility. Expected 96 from C header, got 88 from PyObject.


I uninstalled numpy and pandas and installed them again, and even updated Anaconda.

Please help me. I am unable to run any program.

Thank you.

Usha
___
NumPy-Discussion mailing list -- numpy-discussion@python.org
To unsubscribe send an email to numpy-discussion-le...@python.org
https://mail.python.org/mailman3/lists/numpy-discussion.python.org/
Member address: arch...@mail-archive.com


[Numpy-discussion] Re: What should remain on PyPi

2024-10-02 Thread Ralf Gommers via NumPy-Discussion
On Tue, Sep 3, 2024 at 7:53 PM Peter Cock via NumPy-Discussion <
numpy-discussion@python.org> wrote:

> If I recall correctly, people were building against the Numpy 2.0.0
> release candidates in particular. In hindsight keeping those on PyPI might
> have been better. A formal NEP/SPEC seems a good idea.
>

The only reason we deleted pre-releases in the past is for space limit
constraints (PyPI has a serious issue with approving limit increase
requests). We may have to do that again, but shouldn't delete anything less
than 2 years old. I've always kept the last 2 years of pre-releases as well
as the 1.0 pre-releases which are of historical interest.

Cheers,
Ralf



>
> Peter
>
> On Tue, Sep 3, 2024 at 6:20 PM matti picus  wrote:
>
>> I would prefer we never delete packages once we upload them to PyPI,
>> unless there are security issues with them. As Sean demonstrated,
>> someone somewhere is going to be using them, and deleting packages
>> will inevitably break something.
>> Matti
>>
>>
>> On Tue, Sep 3, 2024 at 7:44 PM Sean Gillies 
>> wrote:
>> >
>> > Hi Chuck,
>> >
>> > I've got a version of a package on PyPI that requires Numpy 2.0.0rc1 at
>> build time. Not the best decision in hindsight, but I assumed that Numpy
>> was the kind of project that wouldn't remove published distributions unless
>> there were security issues. It had not up today, right? Would it be
>> possible to restore 2.0.0rc1?
>> >
>> > On Tue, Sep 3, 2024 at 9:20 AM Charles R Harris <
>> charlesr.har...@gmail.com> wrote:
>> >>
>> >> Hi All,
>> >>
>> >> I just got through deleting a bunch of pre-releases on PyPi and it
>> occurred to me that we should have a policy as to what releases should be
>> kept. I think that reproducibility requires that we keep all the major and
>> micro versions, but if so, we should make that an official guarantee.
>> Perhaps a short NEP? This might even qualify for an SPEC. Thoughts?
>> >>
>> >> Chuck
>> >
>> >
>> > --
>> > Sean Gillies
>> > ___
>> > NumPy-Discussion mailing list -- numpy-discussion@python.org
>> > To unsubscribe send an email to numpy-discussion-le...@python.org
>> > https://mail.python.org/mailman3/lists/numpy-discussion.python.org/
>> > Member address: matti.pi...@gmail.com
>> ___
>> NumPy-Discussion mailing list -- numpy-discussion@python.org
>> To unsubscribe send an email to numpy-discussion-le...@python.org
>> https://mail.python.org/mailman3/lists/numpy-discussion.python.org/
>> Member address: p.j.a.c...@googlemail.com
>>
> ___
> NumPy-Discussion mailing list -- numpy-discussion@python.org
> To unsubscribe send an email to numpy-discussion-le...@python.org
> https://mail.python.org/mailman3/lists/numpy-discussion.python.org/
> Member address: ralf.gomm...@googlemail.com
>
___
NumPy-Discussion mailing list -- numpy-discussion@python.org
To unsubscribe send an email to numpy-discussion-le...@python.org
https://mail.python.org/mailman3/lists/numpy-discussion.python.org/
Member address: arch...@mail-archive.com


[Numpy-discussion] Re: next Documentation team meeting

2024-10-20 Thread Mukulika Pahari via NumPy-Discussion
Hi all,

Our next Documentation Team meeting will happen on *Monday, October 21* at *7PM 
UTC*. I will not be around for the next few docs meetings but they will 
continue to be hosted by other members.

All are welcome - you don't need to already be a contributor to join. If you 
have questions or are curious about what we're doing, we'll be happy to meet 
you!

If you wish to join on Zoom, use this (updated) link:
https://numfocus-org.zoom.us/j/85016474448?pwd=TWEvaWJ1SklyVEpwNXUrcHV1YmFJQ...

Here's the permanent hackmd document with the meeting notes (still being
updated):
https://hackmd.io/oB_boakvRqKR-_2jRV-Qjg

Hope to see you around!

Best wishes,
Mukulika
___
NumPy-Discussion mailing list -- numpy-discussion@python.org
To unsubscribe send an email to numpy-discussion-le...@python.org
https://mail.python.org/mailman3/lists/numpy-discussion.python.org/
Member address: arch...@mail-archive.com


[Numpy-discussion] np.ndenumerate doesn't obey mask?

2024-10-21 Thread Neal Becker via NumPy-Discussion
I was using ndenumerate with a masked array, and it seems that the mask is
ignored.  Is this true?  If so, isn't that a bug?
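
For illustration, a minimal sketch of the behaviour being asked about; the 
mask-aware ma.ndenumerate is assumed to be available (it was added in 
NumPy 1.23):

```
import numpy as np
import numpy.ma as ma

a = ma.array([1, 2, 3], mask=[False, True, False])

# np.ndenumerate iterates over the underlying data, so the masked element
# is yielded like any other:
print(list(np.ndenumerate(a)))

# ma.ndenumerate skips masked entries instead:
print(list(ma.ndenumerate(a)))
```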
___
NumPy-Discussion mailing list -- numpy-discussion@python.org
To unsubscribe send an email to numpy-discussion-le...@python.org
https://mail.python.org/mailman3/lists/numpy-discussion.python.org/
Member address: arch...@mail-archive.com


[Numpy-discussion] Re: What to do with np.matrix

2024-10-20 Thread Ralf Gommers via NumPy-Discussion
On Sat, Oct 19, 2024 at 2:18 PM Dan Schult  wrote:

> This is quite helpful. Thanks!
>
> Github search:
> I'm not surprised that many github hits are like homework problems. The
> big resistance to removing np.matrix early on (~2008) came from educators
> who wanted a Matrix oriented experience for their students who had recent
> linear algebra background. It was heavily used for at least a decade in the
> education setting. That started to wane when Python created the `@`
> operator. But change is slow. It's been 10 years with `@`.
>
> I very much support Ralf's concerns that this not push SciPy. It would be
> pushing me after-all. But it is hard to know how to proceed with the
> transfer from spmatrix to sparray without also knowing something about the
> support for np.matrix from the numpy devs. But... there's no reason removal
> of spmatrix needs to happen first. It may be quite natural for both to be
> moved to  a single separated package. Or they could be removed at the same
> time. We'll have to see what makes sense.
>
> Thanks Nathan for the preview of github usage and perspective from the 2.0
> release. I'm also pleased to find that github search results for PRs can be
> sorted by date. While there are 28K PRs involving np.matrix, the recent PRs
> are almost all dependabot reminding folks to upgrade their dependencies. Of
> the top 30, 3 were actions to **remove** np.matrix in favor of ndarray. 1
> was `scipy.interpolate` (which also mentioned removing support for
> `np.matrix`, though provided a workaround instead). And the remaining 26
> are dependency updates. That takes us back to Aug 30. Jumping to the most
> recent 80 PRs gave the same type of results, but I didn't bother counting.
> Almost all of them are dependency updates. Most of the rest are moving away
> from np.matrix. It is clear that recent activity (as measured by PRs) does
> not show much activity using np.matrix.
>
> Perhaps most importantly, there don't seem to be any courses being run
> this semester that have students creating PRs using np.matrix.
>
> And thanks Marten, Sebastian and Chuck for the nudge to find a way to move
> forward with the deprecation process. I think the change to
> `VisibleDeprecationWarning` is a good next step. Hopefully we don't have to
> wait another 7 years for the following step unless we decide that keeping
> that code in numpy is the best way to go.  No one seems to have argued for
> just leaving np.matrix in the package forever, but I think it is a
> reasonable approach (similar to stating that RandomState will remain
> forever). But given the decline in usage, and the negative impacts of
> having multiple interfaces to array-like objects, it is probably better to
> stop supporting matrix at some point.
>
> Summary:
> It seems like eventually removing np.matrix is desirable. The choice of
> removing versus separating depends somewhat on how easy that is for both
> devs, and for users. It might be worth a short exploration to see if there
> is a solution. We should time this so it doesn't negatively impact the
> transition SciPy sparse is making. They are the main users, and leaving
> np.matrix as it is costs very little.
>
> Action items from this discussion include:
> - Exploring impact on SciPy of a change to `VisibleDeprecationWarning`,
> possibly followed by a PR to make the change.
>

When something gets deprecated in NumPy, as a rule we remove it in SciPy
immediately. We could _maybe_ postpone full removal for a bit, but I don't
really see a way to keep sparse matrices around if `np.matrix` gets a
visible deprecation warning. The most conservative open source project
which depends on sparse matrices is probably scikit-learn, so I'd suggest
getting an answer from the scikit-learn team about what the minimum
timeline is that they can live with for first deprecation and then removal
of sparse matrices.

Cheers,
Ralf



> - Investigating a light-weight, simple separation package that wouldn't
> affect user experience much. If that's hard, then we have identified the
> pain points. If that's easy then it informs the choice of a path forward
> for both matrix and spmatrix.
> - Collect info about the current usage of np.matrix, and what type of
> usage the large existing codebase needs. Put that info into a NEP, along
> with a summary of the history and current discussion, and a description of
> our exploration into possible light-weight routes to separation.
>
> I don't expect this to be soon -- maybe by next summer -- unless other
> people get involved. I'm interested in further discussion and suggestions
> too.
>
> FYI Chuck: It looks like Event Horizon Telescope doesn't use np.matrix at
> all any more.
> ___
> NumPy-Discussion mailing list -- numpy-discussion@python.org
> To unsubscribe send an email to numpy-discussion-le...@python.org
> https://mail.python.org/mailman3/lists/numpy-discussion.python.org/
> Member address: ralf.gomm...@gmail.com
>

[Numpy-discussion] next NumPy community meeting - Wednesday, October 23rd, 2024 at 6 pm UTC

2024-10-20 Thread Inessa Pawson via NumPy-Discussion
The next NumPy community meeting will be held this Wednesday, October 23rd
at 18:00 UTC.
Join us via Zoom:
https://numfocus-org.zoom.us/j/83278611437?pwd=ekhoLzlHRjdWc0NOY2FQM0NPemdkZz09
.
Everyone is welcome and encouraged to attend.
To add to the meeting agenda the topics you’d like to discuss, follow the
link: https://hackmd.io/76o-IxCjQX2mOXO_wwkcpg?both.
For the notes from the previous meetings, visit:
https://github.com/numpy/archive/tree/main/community_meetings.

-- 
Cheers,
Inessa

Inessa Pawson
GitHub: inessapawson
___
NumPy-Discussion mailing list -- numpy-discussion@python.org
To unsubscribe send an email to numpy-discussion-le...@python.org
https://mail.python.org/mailman3/lists/numpy-discussion.python.org/
Member address: arch...@mail-archive.com


[Numpy-discussion] Re: Endorsing SPECs 1, 6, 7, and 8

2024-10-08 Thread Joren Hammudoglu via NumPy-Discussion
Is SPEC 1 thread-safe enough for py313+nogil?
___
NumPy-Discussion mailing list -- numpy-discussion@python.org
To unsubscribe send an email to numpy-discussion-le...@python.org
https://mail.python.org/mailman3/lists/numpy-discussion.python.org/
Member address: arch...@mail-archive.com


[Numpy-discussion] next NumPy triage meeting - October 30th, 2024 at 18:00 UTC

2024-10-29 Thread Inessa Pawson via NumPy-Discussion
The next NumPy triage meeting will be held this Wednesday, October 30th at
18:00 UTC. This is a meeting where we synchronously triage prioritized PRs
and issues.
Join us via Zoom:
https://numfocus-org.zoom.us/j/82096749952?pwd=MW9oUmtKQ1c3a2gydGk1RTdYUUVXZz09
.
Everyone is welcome to attend and contribute to a conversation.
Please notify us of issues or PRs that you’d like to have reviewed by
adding a GitHub link to them in the meeting agenda:
https://hackmd.io/68i_JvOYQfy9ERiHgXMPvg.

-- 
Cheers,
Inessa

Inessa Pawson
GitHub: inessapawson
___
NumPy-Discussion mailing list -- numpy-discussion@python.org
To unsubscribe send an email to numpy-discussion-le...@python.org
https://mail.python.org/mailman3/lists/numpy-discussion.python.org/
Member address: arch...@mail-archive.com


[Numpy-discussion] next NumPy community meeting - Wednesday, November 6th, 2024 at 6 pm UTC

2024-11-03 Thread Inessa Pawson via NumPy-Discussion
The next NumPy community meeting will be held this Wednesday, November 6th
at 18:00 UTC.
Join us via Zoom:
https://numfocus-org.zoom.us/j/83278611437?pwd=ekhoLzlHRjdWc0NOY2FQM0NPemdkZz09
.
Everyone is welcome and encouraged to attend.
To add to the meeting agenda the topics you’d like to discuss, follow the
link: https://hackmd.io/76o-IxCjQX2mOXO_wwkcpg?both.
For the notes from the previous meetings, visit:
https://github.com/numpy/archive/tree/main/community_meetings.

-- 
Cheers,
Inessa

Inessa Pawson
GitHub: inessapawson
___
NumPy-Discussion mailing list -- numpy-discussion@python.org
To unsubscribe send an email to numpy-discussion-le...@python.org
https://mail.python.org/mailman3/lists/numpy-discussion.python.org/
Member address: arch...@mail-archive.com

