Re: [Numpy-discussion] Software Capabilities of NumPy in Our Tensor Survey Paper

2016-01-15 Thread Yuxiang Wang
I echo Robert's suggestion that the contraction can be done with
np.einsum(). Also check out np.tensordot() - it can be used to
perform contractions as well.

Shawn
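
For concreteness, here is a small sketch of the same double contraction written both ways (illustrative shapes only):

```python
import numpy as np

# A double contraction C[k,l] = sum_{i,j} A[i,j] * B[i,j,k,l],
# written with einsum and with tensordot (illustrative shapes only).
A = np.arange(4.).reshape(2, 2)
B = np.arange(16.).reshape(2, 2, 2, 2)

C1 = np.einsum('ij,ijkl->kl', A, B)
C2 = np.tensordot(A, B, axes=([0, 1], [0, 1]))

assert np.allclose(C1, C2)
```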

On Fri, Jan 15, 2016 at 12:32 PM, Robert Kern  wrote:
> On Fri, Jan 15, 2016 at 5:30 PM, Nathaniel Smith  wrote:
>>
>> On Jan 15, 2016 8:36 AM, "Li Jiajia"  wrote:
>> >
>> > Hi all,
>> > I’m a PhD student at Georgia Tech. Recently, we’ve been working on a survey
>> > paper about tensor algorithms: basic tensor operations, tensor decomposition
>> > and some tensor applications. We are making a table to compare the
>> > capabilities of different software packages and plan to include NumPy. We’d
>> > like to make sure these parameters are correct so that the comparison is
>> > fair. Although we have looked into the related documents, please help us
>> > confirm them. Besides, if you think there are more features of your software
>> > or a more preferred citation, please let us know and we’ll consider updating
>> > them. We want to show that NumPy supports tensors, and we also include
>> > “scikit-tensor” in our survey, which is based on NumPy.
>> > Please let me know of any confusion, and any advice is welcome!
>> > Thanks a lot! :-)
>> >
>> > Notice:
>> > 1. “YES/NO” to show whether or not the software supports the operation
>> > or has the feature.
>> > 2. “?” means we’re not sure of the feature, and please help us out.
>> > 3. “Tensor order” means the maximum number of tensor dimensions that
>> > users can work with in this software.
>> > 4. For computational cores,
>> > 1) “Element-wise Tensor Operation (A * B)” includes element-wise
>> > add/subtract/multiply/divide, as well as the Kronecker, outer and Khatri-Rao
>> > products. If the software contains one of them, we mark “YES”.
>> > 2) “TTM” means tensor-times-matrix multiplication. We distinguish TTM
>> > from tensor contraction. If the software includes tensor contraction, it can
>> > also support TTM.
>> > 3) For “MTTKRP”, we know most software can realize it through the above
>> > two operations. We mark it “YES” only if there is a specialized optimization
>> > for the whole operation.
>>
>> NumPy has support for working with multidimensional tensors, if you like,
>> but it doesn't really use the tensor language and notation (preferring
>> instead to think in terms of "arrays" as a somewhat more computationally
>> focused and less mathematically focused conceptual framework).
>>
>> Which is to say that I actually have no idea what all those jargon terms
>> you're asking about mean :-) I am suspicious that NumPy supports more of
>> those operations than you have marked, just under different names/notation,
>> but really can't tell either way for sure without knowing what exactly they
>> are.
>
> In particular check if your operations can be expressed with einsum()
>
> http://docs.scipy.org/doc/numpy-1.10.1/reference/generated/numpy.einsum.html
>
> --
> Robert Kern
>
> ___
> NumPy-Discussion mailing list
> NumPy-Discussion@scipy.org
> https://mail.scipy.org/mailman/listinfo/numpy-discussion
>



-- 
Yuxiang "Shawn" Wang
Gerling Haptics Lab
University of Virginia
yw...@virginia.edu
+1 (434) 284-0836
https://sites.google.com/a/virginia.edu/yw5aj/
___
NumPy-Discussion mailing list
NumPy-Discussion@scipy.org
https://mail.scipy.org/mailman/listinfo/numpy-discussion


Re: [Numpy-discussion] Should I use pip install numpy in linux?

2016-01-08 Thread Yuxiang Wang
Dear Nathaniel,

Gotcha. That's very helpful. Thank you so much!

Shawn

On Thu, Jan 7, 2016 at 10:01 PM, Nathaniel Smith <n...@pobox.com> wrote:
> On Thu, Jan 7, 2016 at 6:18 PM, Yuxiang Wang <yw...@virginia.edu> wrote:
>> Dear all,
>>
>> I know that in Windows, we should use either Christoph's package or
>> Anaconda for MKL-optimized numpy. In Linux, the fortran compiler issue
>> is solved, so should I directly use pip install numpy to get numpy
>> with a reasonable BLAS library?
>
> pip install numpy should work fine; whether it gives you a reasonable
> BLAS library will depend on whether you have the development files for
> a reasonable BLAS library installed, and whether numpy's build system
> is able to automatically locate them. Generally this means that if
> you're on a regular distribution and remember to install a decent BLAS
> -dev or -devel package, then you'll be fine.
>
> On Debian/Ubuntu, 'apt install libopenblas-dev' is probably enough to
> ensure something reasonable happens.
>
> Anaconda is also an option on linux if you want MKL (or openblas).
>
> -n
>
> --
> Nathaniel J. Smith -- http://vorpus.org
> ___
> NumPy-Discussion mailing list
> NumPy-Discussion@scipy.org
> https://mail.scipy.org/mailman/listinfo/numpy-discussion
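
For anyone checking the result of such an install, NumPy can report which BLAS/LAPACK it was built against (a sketch; the exact output varies by system and NumPy version):

```python
import numpy as np

# Prints the BLAS/LAPACK build configuration of the installed NumPy,
# e.g. an openblas or MKL section; output varies by system and version.
np.show_config()
```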



-- 
Yuxiang "Shawn" Wang
Gerling Haptics Lab
University of Virginia
yw...@virginia.edu
+1 (434) 284-0836
https://sites.google.com/a/virginia.edu/yw5aj/
___
NumPy-Discussion mailing list
NumPy-Discussion@scipy.org
https://mail.scipy.org/mailman/listinfo/numpy-discussion


[Numpy-discussion] Should I use pip install numpy in linux?

2016-01-07 Thread Yuxiang Wang
Dear all,

I know that in Windows, we should use either Christoph's package or
Anaconda for MKL-optimized numpy. In Linux, the fortran compiler issue
is solved, so should I directly use pip install numpy to get numpy
with a reasonable BLAS library?

Thanks!

Shawn

-- 
Yuxiang "Shawn" Wang
Gerling Haptics Lab
University of Virginia
yw...@virginia.edu
+1 (434) 284-0836
https://sites.google.com/a/virginia.edu/yw5aj/
___
NumPy-Discussion mailing list
NumPy-Discussion@scipy.org
https://mail.scipy.org/mailman/listinfo/numpy-discussion


Re: [Numpy-discussion] future of f2py and Fortran90+

2015-12-03 Thread Yuxiang Wang
To add to Sturla's point - I think this is what he mentioned, but in more detail:

http://www.fortran90.org/src/best-practices.html#interfacing-with-python

Shawn

On Tue, Jul 14, 2015 at 9:45 PM, Sturla Molden  wrote:
> Eric Firing  wrote:
>
>> I'm curious: has anyone been looking into what it would take to enable
>> f2py to handle modern Fortran in general?  And into prospects for
>> getting such an effort funded?
>
> No need. Use Cython and Fortran 2003 ISO C bindings. That is the only
> portable way to interop between Fortran and C (including CPython) anyway.
>
> ___
> NumPy-Discussion mailing list
> NumPy-Discussion@scipy.org
> http://mail.scipy.org/mailman/listinfo/numpy-discussion



-- 
Yuxiang "Shawn" Wang
Gerling Haptics Lab
University of Virginia
yw...@virginia.edu
+1 (434) 284-0836
https://sites.google.com/a/virginia.edu/yw5aj/
___
NumPy-Discussion mailing list
NumPy-Discussion@scipy.org
https://mail.scipy.org/mailman/listinfo/numpy-discussion


Re: [Numpy-discussion] IDE's for numpy development?

2015-04-08 Thread Yuxiang Wang
I think Spyder supports syntax highlighting for C, and that's all...
There's no way to compile from within Spyder, is there?

Shawn

On Tue, Apr 7, 2015 at 2:46 AM, Suzen, Mehmet <msu...@gmail.com> wrote:
>> Spyder supports C.
>
> Thanks for correcting this. I wasn't aware of it.
> How was your experience with it?
>
> Best,
> -m
> ___
> NumPy-Discussion mailing list
> NumPy-Discussion@scipy.org
> http://mail.scipy.org/mailman/listinfo/numpy-discussion



-- 
Yuxiang Shawn Wang
Gerling Research Lab
University of Virginia
yw...@virginia.edu
+1 (434) 284-0836
https://sites.google.com/a/virginia.edu/yw5aj/
___
NumPy-Discussion mailing list
NumPy-Discussion@scipy.org
http://mail.scipy.org/mailman/listinfo/numpy-discussion


Re: [Numpy-discussion] IDE's for numpy development?

2015-04-01 Thread Yuxiang Wang
That would really be hilarious - and IFortran probably! :)

Shawn

On Wed, Apr 1, 2015 at 12:07 PM, Benjamin Root <ben.r...@ou.edu> wrote:
> mixed C and python development? I would just wait for the Jupyter folks to
> create IC and maybe even IC++!
>
> On Wed, Apr 1, 2015 at 12:04 PM, Charles R Harris
> <charlesr.har...@gmail.com> wrote:
>
>> Hi All,
>>
>> In a recent exchange Mark Wiebe suggested that the lack of support for
>> numpy development in Visual Studio might limit the number of developers
>> attracted to the project. I'm a vim/console developer myself and make no
>> claim of familiarity with modern development tools, but I wonder if such
>> tools might now be available for Numpy. A quick google search turns up a
>> beta plugin for Visual Studio, and there is an xcode IDE for the mac that
>> apparently offers some Python support. The two things that I think are
>> required are: 1) support for mixed C/Python development and 2) support for
>> building and testing numpy. I'd be interested in information from anyone
>> with experience in using such an IDE and ideas of how Numpy might make using
>> some of the common IDEs easier.
>>
>> Thoughts?
>>
>> Chuck
>>
>> ___
>> NumPy-Discussion mailing list
>> NumPy-Discussion@scipy.org
>> http://mail.scipy.org/mailman/listinfo/numpy-discussion
>
>
> ___
> NumPy-Discussion mailing list
> NumPy-Discussion@scipy.org
> http://mail.scipy.org/mailman/listinfo/numpy-discussion




-- 
Yuxiang Shawn Wang
Gerling Research Lab
University of Virginia
yw...@virginia.edu
+1 (434) 284-0836
https://sites.google.com/a/virginia.edu/yw5aj/
___
NumPy-Discussion mailing list
NumPy-Discussion@scipy.org
http://mail.scipy.org/mailman/listinfo/numpy-discussion


[Numpy-discussion] F2PY cannot see module-scope variables

2015-01-26 Thread Yuxiang Wang
Dear all,

Sorry about being new to both Fortran 90 and f2py.

I have a module in fortran, written as follows, with a module-scope variable dp:


! testf2py.f90
module testf2py
implicit none
private
public dp, i1
integer, parameter :: dp=kind(0.d0)
contains
real(dp) function i1(m)
real(dp), intent(in) :: m(3, 3)
i1 = m(1, 1) + m(2, 2) + m(3, 3)
return
end function i1
end module testf2py


Then, if I run f2py -c testf2py.f90 -m testf2py

It reports an error stating that dp was not declared.

If I copy the module-scope declaration into the function scope, it works.


! testf2py.f90
module testf2py
implicit none
private
public i1
integer, parameter :: dp=kind(0.d0)
contains
real(dp) function i1(m)
integer, parameter :: dp=kind(0.d0)
real(dp), intent(in) :: m(3, 3)
i1 = m(1, 1) + m(2, 2) + m(3, 3)
return
end function i1
end module testf2py


However, this does not look like the best coding practice, as it is
pretty WET (the kind declaration is duplicated).

Any ideas?

Thanks,

Shawn
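
One workaround sometimes suggested for module-scope kind parameters is f2py's .f2py_f2cmap file, which maps Fortran kind names to C types. I have not verified it against this exact example, but a sketch would be a file named .f2py_f2cmap, placed in the directory where f2py is invoked, containing:

```
# .f2py_f2cmap -- read by f2py from the current directory;
# maps Fortran real(dp) to a C double so the module-scope
# kind parameter need not be resolved by f2py itself.
{'real': {'dp': 'double'}}
```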

-- 
Yuxiang Shawn Wang
Gerling Research Lab
University of Virginia
yw...@virginia.edu
+1 (434) 284-0836
https://sites.google.com/a/virginia.edu/yw5aj/
___
NumPy-Discussion mailing list
NumPy-Discussion@scipy.org
http://mail.scipy.org/mailman/listinfo/numpy-discussion


Re: [Numpy-discussion] F2PY cannot see module-scope variables

2015-01-26 Thread Yuxiang Wang
Sorry that I forgot to report the environment -

Windows 64-bit, Python 3.4 64-bit. The NumPy version is 1.9.1, and I
commented out the raise NotImplementedError("Only MS compiler supported
with gfortran on win64") in gnu.py, as instructed in this link:
http://scientificcomputingco.blogspot.com.au/2013/02/f2py-on-64bit-windows-python27.html

On Mon, Jan 26, 2015 at 10:29 PM, Yuxiang Wang <yw...@virginia.edu> wrote:
> Dear all,
>
> Sorry about being new to both Fortran 90 and f2py.
>
> I have a module in fortran, written as follows, with a module-scope variable
> dp:
>
>
> ! testf2py.f90
> module testf2py
> implicit none
> private
> public dp, i1
> integer, parameter :: dp=kind(0.d0)
> contains
> real(dp) function i1(m)
> real(dp), intent(in) :: m(3, 3)
> i1 = m(1, 1) + m(2, 2) + m(3, 3)
> return
> end function i1
> end module testf2py
>
>
> Then, if I run f2py -c testf2py.f90 -m testf2py
>
> It would report an error, stating that dp was not declared.
>
> If I copy the module-scope to the function-scope, it would work.
>
>
> ! testf2py.f90
> module testf2py
> implicit none
> private
> public i1
> integer, parameter :: dp=kind(0.d0)
> contains
> real(dp) function i1(m)
> integer, parameter :: dp=kind(0.d0)
> real(dp), intent(in) :: m(3, 3)
> i1 = m(1, 1) + m(2, 2) + m(3, 3)
> return
> end function i1
> end module testf2py
>
>
> However, this does not look like the best coding practice though, as
> it is pretty wet.
>
> Any ideas?
>
> Thanks,
>
> Shawn
>
> --
> Yuxiang Shawn Wang
> Gerling Research Lab
> University of Virginia
> yw...@virginia.edu
> +1 (434) 284-0836
> https://sites.google.com/a/virginia.edu/yw5aj/



-- 
Yuxiang Shawn Wang
Gerling Research Lab
University of Virginia
yw...@virginia.edu
+1 (434) 284-0836
https://sites.google.com/a/virginia.edu/yw5aj/
___
NumPy-Discussion mailing list
NumPy-Discussion@scipy.org
http://mail.scipy.org/mailman/listinfo/numpy-discussion


Re: [Numpy-discussion] Pass 2d ndarray into C **double using ctypes

2015-01-03 Thread Yuxiang Wang
Hi Sturla,

First of all, my apologies to have spelled your name wrong for the
past year - I just realized it! Thanks to Eric Firing who pointed this
out to me. Thank you Sturla for bearing with me!

And then - thank you for pointing out Numba! I tried to use it years
ago, but ended up Cython eventually because the loop-jitting
constraint 
(http://numba.pydata.org/numba-doc/0.16.0/arrays.html#loop-jitting-constraints)
was too strict by that time. After seeing your email, I went to the
latest version and saw that it has been greatly relaxed! I really look
forward to using it for any new projects. Most of the time,
loop-jitting is all I need.

Lastly, your comments on Fortran 90/95 are convincing me to move away
from Fortran 77. I am writing a small section of code that is called
by some other legacy code written in F77. I heard that as long as I
compile it correctly, it will interface with the legacy code with no
problem. I'll definitely give it a try!

Thanks again for all the help Sturla,

Shawn



On Fri, Jan 2, 2015 at 8:22 AM, Sturla Molden <sturla.mol...@gmail.com> wrote:
> Yuxiang Wang <yw...@virginia.edu> wrote:
>
>> 4) I wanted to say that it seems to me, as the project gradually
>> scales up, Cython is easier to deal with, especially when I am using a
>> lot of numpy arrays. If it is even higher dimensional data, it would
>> be verbose while it is really succinct to use Cython.
>
> The easiest way to speed up NumPy code is to use Numba, which is an LLVM
> based JIT compiler. Numba will often give you performance comparable to C
> for free. All you have to do is to add the @numba.jit decorator to your
> Python function and possibly include some type hints. If all you want is to
> speed up NumPy code, just use Numba and it will take care of it in at least
> 9 of 10 cases.
>
> Numexpr is also a JIT compiler which can speed up NumPy code, but it does
> not give as dramatic results as Numba.
>
> Cython is easier to work with than ctypes, particularly when the problem
> scales up. If you use typed memoryviews in Cython you can also avoid having
> to work with pointer arithmetic. Cython is mainly a competitor to using
> the Python C API manually for C extension modules to Python. Cython also
> allows you to wrap external C and C++ code, and e.g. use Python and C++
> objects together. The drawback is that you need to learn the Cython
> language as well as Python and C and C++ and know how they differ. Cython
> also has many of the same hidden dangers as C++, due to the possibility of
> exceptions being raised between C statements. But because Cython is the
> most flexible tool for writing C extensions to Python you will in the long
> run do yourself a favor by learning to use it.
>
> ctypes is good when you have a DLL, possibly from a foreign source, and you
> just want to use it without any build step. CFFI is easier to work with
> than ctypes and has the same use case. It can parse C headers and does not
> require you to define the C API with Python statements like ctypes does.
> Generally I would say it is always better to use CFFI than ctypes. ctypes is
> also currently an orphan, albeit in the Python standard library, while CFFI
> is actively maintained.
>
> Numba will also JIT compile ctypes and CFFI calls to remove the extra
> overhead. This is good to know if you need to call a C function in a tight
> loop. In that case Numba can JIT compile away the Python as well as the
> ctypes/CFFI overhead.
>
> Fortran 90/95 is also underrated. It is easier to work with than C, and
> gives similar results performance-wise. You can call Fortran with f2py,
> ctypes, CFFI, or Cython (use fwrap). Generally I would say that it is
> better for a student to learn C than Fortran if you have to choose, because
> C is also useful for other things than numerical computing. But if you want
> fast and robust numerical code, it is easier to get good results with
> Fortran than C or Cython.
>
> Sturla
>
> ___
> NumPy-Discussion mailing list
> NumPy-Discussion@scipy.org
> http://mail.scipy.org/mailman/listinfo/numpy-discussion



-- 
Yuxiang Shawn Wang
Gerling Research Lab
University of Virginia
yw...@virginia.edu
+1 (434) 284-0836
https://sites.google.com/a/virginia.edu/yw5aj/
___
NumPy-Discussion mailing list
NumPy-Discussion@scipy.org
http://mail.scipy.org/mailman/listinfo/numpy-discussion


Re: [Numpy-discussion] Pass 2d ndarray into C **double using ctypes

2015-01-01 Thread Yuxiang Wang
Great thanks to both Strula and also Nathaniel!

@Strula, thanks for your help! And I do think your solution makes
total sense. However, the code doesn't work well on my computer.


// dummy.c
#include <stdlib.h>

__declspec(dllexport) void foobar(const int m, const int n, const
double **x, double **y)
{
    size_t i, j;
    y = (double **)malloc(sizeof(double *) * m);
    for(i=0; i<m; i++)
        y[i] = (double *)calloc(sizeof(double), n);
    for(i=0; i<m; i++)
        for(j=0; j<n; j++)
            y[i][j] = x[i][j];
}


Then, using MSVC, I ran cl /LD dummy.c to generate dummy.dll.

And then:


import numpy as np
from numpy.ctypeslib import ndpointer
import ctypes

_doublepp = ndpointer(dtype=np.uintp, ndim=1, flags='C')
_dll = ctypes.CDLL('dummy.dll')
_foobar = _dll.foobar
_foobar.argtypes = [ctypes.c_int, ctypes.c_int, _doublepp, _doublepp]
_foobar.restype = None

def foobar(x, y):
    assert x.shape == y.shape
    xpp = (x.__array_interface__['data'][0]
           + np.arange(x.shape[0])*x.strides[0]).astype(np.uintp)
    ypp = (y.__array_interface__['data'][0]
           + np.arange(y.shape[0])*y.strides[0]).astype(np.uintp)
    m = ctypes.c_int(x.shape[0])
    n = ctypes.c_int(x.shape[1])
    _foobar(m, n, xpp, ypp)
    return x, y

if __name__ == '__main__':
    x = np.arange(9.).reshape((3, 3))
    y = np.zeros_like(x)
    xnew, ynew = foobar(x, y)



The result is:

ynew
Out[76]:
array([[ 0.,  0.,  0.],
       [ 0.,  0.,  0.],
       [ 0.,  0.,  0.]])


Was I doing something wrong here?

Shawn

On Thu, Jan 1, 2015 at 1:56 PM, Nathaniel Smith <n...@pobox.com> wrote:
> On Thu, Jan 1, 2015 at 6:00 PM, Yuxiang Wang <yw...@virginia.edu> wrote:
>> Dear all,
>>
>> I am currently using a piece of C code, where one of the input
>> arguments of a function is **double.
>
> As you discovered, Numpy's ctypes utilities are helpful for getting a
> *double out of an ndarray, but they don't really have anything to do
> with **double's -- for that you should refer to the plain-old-ctypes
> documentation: https://docs.python.org/2/library/ctypes.html#ctypes.pointer
>
> However, I suspect that this question can't really be answered in a
> useful way without more information about why exactly the C code wants
> a **double (instead of a *double) and what it expects to do with it.
> E.g., is it going to throw away the passed in array and return a new
> one?
>
> -n
>
> --
> Nathaniel J. Smith
> Postdoctoral researcher - Informatics - University of Edinburgh
> http://vorpus.org
> ___
> NumPy-Discussion mailing list
> NumPy-Discussion@scipy.org
> http://mail.scipy.org/mailman/listinfo/numpy-discussion



-- 
Yuxiang Shawn Wang
Gerling Research Lab
University of Virginia
yw...@virginia.edu
+1 (434) 284-0836
https://sites.google.com/a/virginia.edu/yw5aj/
___
NumPy-Discussion mailing list
NumPy-Discussion@scipy.org
http://mail.scipy.org/mailman/listinfo/numpy-discussion


[Numpy-discussion] Pass 2d ndarray into C **double using ctypes

2015-01-01 Thread Yuxiang Wang
Dear all,

I am currently using a piece of C code, where one of the input
argument of a function is **double.

So, in numpy, I tried np.ctypeslib.ndpointer(ctypes.c_double), but
obviously this wouldn't work because this is only *double, not
**double.

Then I tried np.ctypeslib.ndpointer(np.ctypeslib.ndpointer(ctypes.c_double)),
but this didn't work either, because it says "ArgumentError: argument
4: <class 'TypeError'>: array must have data type uint64".

np.ctypeslib.ndpointer(ctypes.c_double, ndim=2) wouldn't work either,
because **double is not the same as *double[].

Could anyone please give any thoughts to help?

Thanks,

Shawn
-- 
Yuxiang Shawn Wang
Gerling Research Lab
University of Virginia
yw...@virginia.edu
+1 (434) 284-0836
https://sites.google.com/a/virginia.edu/yw5aj/
___
NumPy-Discussion mailing list
NumPy-Discussion@scipy.org
http://mail.scipy.org/mailman/listinfo/numpy-discussion


Re: [Numpy-discussion] Pass 2d ndarray into C **double using ctypes

2015-01-01 Thread Yuxiang Wang
1) @Strula Sorry about my stupid mistake! That piece of code totally
gave away how green I am in coding C :)

And yes, that piece of code works like a charm now! I am able to run
my model. Thanks a million!

2) @Strula and also thanks for your insight on the limitation of the
method. Currently I am just passing in 2d ndarray for data input, so I
can get away with this method; but it is really important to keep that
piece of knowledge in mind.

3) @Nathaniel Could you please give a hint on how should this be done
with the ctypes library (only for reading a 2d ndarray)? I noticed
that it wouldn't work if I set:
_doublepp = ctypes.POINTER(ctypes.POINTER(ctypes.c_double))
xpp = x.ctypes.data_as(ctypes.POINTER(ctypes.POINTER(ctypes.c_double)))
Could you please give a hint if possible?
(Complete code is attached at the end of this message)

4) I wanted to say that it seems to me, as the project gradually
scales up, Cython is easier to deal with, especially when I am using a
lot of numpy arrays. If it is even higher dimensional data, it would
be verbose while it is really succinct to use Cython.

Attached is the complete code.

Code #1: From Strula, and it worked:
// dummy.c
#include <stdlib.h>

__declspec(dllexport) void foobar(const int m, const int n, const
double **x, double **y)
{
    size_t i, j;
    for(i=0; i<m; i++)
        for(j=0; j<n; j++)
            y[i][j] = x[i][j];
}

# test.py
import numpy as np
from numpy.ctypeslib import ndpointer
import ctypes

_doublepp = ndpointer(dtype=np.uintp, ndim=1, flags='C')

_dll = ctypes.CDLL('dummy.dll')

_foobar = _dll.foobar
_foobar.argtypes = [ctypes.c_int, ctypes.c_int, _doublepp, _doublepp]
_foobar.restype = None

def foobar(x):
    y = np.zeros_like(x)
    xpp = (x.__array_interface__['data'][0]
           + np.arange(x.shape[0])*x.strides[0]).astype(np.uintp)
    ypp = (y.__array_interface__['data'][0]
           + np.arange(y.shape[0])*y.strides[0]).astype(np.uintp)
    m = ctypes.c_int(x.shape[0])
    n = ctypes.c_int(x.shape[1])
    _foobar(m, n, xpp, ypp)
    return y

if __name__ == '__main__':
    x = np.arange(9.).reshape((3, 3))
    y = foobar(x)

Code #2: Tried to use ctypes but it does not seem to work. Just being
curious how it should be done.
# test2.py
import numpy as np
import ctypes

_doublepp = ctypes.POINTER(ctypes.POINTER(ctypes.c_double))

_dll = ctypes.CDLL('dummy.dll')

_foobar = _dll.foobar
_foobar.argtypes = [ctypes.c_int, ctypes.c_int, _doublepp, _doublepp]
_foobar.restype = None

def foobar(x):
    y = np.zeros_like(x)
    xpp = x.ctypes.data_as(ctypes.POINTER(ctypes.POINTER(ctypes.c_double)))
    ypp = y.ctypes.data_as(ctypes.POINTER(ctypes.POINTER(ctypes.c_double)))
    m = ctypes.c_int(x.shape[0])
    n = ctypes.c_int(x.shape[1])
    _foobar(m, n, xpp, ypp)
    return y

if __name__ == '__main__':
    x = np.arange(9.).reshape((3, 3))
    y = foobar(x)



Shawn

On Thu, Jan 1, 2015 at 2:52 PM, Sturla Molden <sturla.mol...@gmail.com> wrote:
> On 01/01/15 19:56, Nathaniel Smith wrote:
>
>> However, I suspect that this question can't really be answered in a
>> useful way without more information about why exactly the C code wants
>> a **double (instead of a *double) and what it expects to do with it.
>>
>> E.g., is it going to throw away the passed in array and return a new
>> one?
>
> That is an important question.
>
> The solution I provided only allows a 2D array to be passed in and
> possibly modified inplace. It does not allow the C function to pass back a
> freshly allocated array.
>
> The problem is of course that the meaning of double** is ambiguous. It
> could mean a pointer to an array of pointers. But it could also mean a
> double* passed by reference, in which case the function would modify the
> pointer instead of the data it points to.
>
> Sturla
>
> ___
> NumPy-Discussion mailing list
> NumPy-Discussion@scipy.org
> http://mail.scipy.org/mailman/listinfo/numpy-discussion



-- 
Yuxiang Shawn Wang
Gerling Research Lab
University of Virginia
yw...@virginia.edu
+1 (434) 284-0836
https://sites.google.com/a/virginia.edu/yw5aj/
___
NumPy-Discussion mailing list
NumPy-Discussion@scipy.org
http://mail.scipy.org/mailman/listinfo/numpy-discussion


[Numpy-discussion] Why is numpy.ma.extras.clump_masked() not in main documentation?

2014-12-28 Thread Yuxiang Wang
Dear all,

I am really glad to have found a very useful function called
numpy.ma.extras.clump_masked(), and it is indeed well documented if
you look into the source. However, may I ask why it does not show up
on the main documentation website
(http://docs.scipy.org/doc/numpy/reference/routines.ma.html)?

Not a big deal, just being curious.

Shawn
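
For reference, a quick sketch of what clump_masked returns:

```python
import numpy as np

# np.ma.clump_masked returns a list of slices, one per contiguous
# run of masked values in a 1-D masked array.
a = np.ma.masked_array(np.arange(10), mask=[0, 0, 1, 1, 0, 1, 0, 0, 1, 1])
slices = np.ma.clump_masked(a)
print(slices)  # [slice(2, 4, None), slice(5, 6, None), slice(8, 10, None)]
```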

-- 
Yuxiang Shawn Wang
Gerling Research Lab
University of Virginia
yw...@virginia.edu
+1 (434) 284-0836
https://sites.google.com/a/virginia.edu/yw5aj/
___
NumPy-Discussion mailing list
NumPy-Discussion@scipy.org
http://mail.scipy.org/mailman/listinfo/numpy-discussion


Re: [Numpy-discussion] Why is numpy.ma.extras.clump_masked() not in main documentation?

2014-12-28 Thread Yuxiang Wang
Hi Ralf,

Thanks for the quick response! I will submit a PR soon. I noticed
there are some other functions with outdated examples, such as
numpy.ma.notmasked_contiguous() (the examples still use the
numpy.ma.extras.notmasked_contiguous form).

-Shawn

On Sun, Dec 28, 2014 at 3:04 PM, Ralf Gommers <ralf.gomm...@gmail.com> wrote:
>
>
> On Sun, Dec 28, 2014 at 8:48 PM, Yuxiang Wang <yw...@virginia.edu> wrote:
>
>> Dear all,
>>
>> I am really glad to find out a very useful function called
>> numpy.ma.extras.clump_masked(), and it is indeed well documented if
>> you look into the source. However, may I ask why does it not show up
>> in the main documentation website
>> (http://docs.scipy.org/doc/numpy/reference/routines.ma.html)?
>
> Because they (there's also clump_unmasked) weren't added to the function
> list in doc/source/reference/routines.ma.rst. Care to send a PR for that?
>
> The other todo there is to fix up the examples; they should be used as
> np.ma.clump_masked, not np.ma.extras.clump_masked.
>
> Cheers,
> Ralf
>
>
>
>
>> Not a big deal, just being curious.
>>
>> Shawn
>>
>> --
>> Yuxiang Shawn Wang
>> Gerling Research Lab
>> University of Virginia
>> yw...@virginia.edu
>> +1 (434) 284-0836
>> https://sites.google.com/a/virginia.edu/yw5aj/
>> ___
>> NumPy-Discussion mailing list
>> NumPy-Discussion@scipy.org
>> http://mail.scipy.org/mailman/listinfo/numpy-discussion
>
>
>
> ___
> NumPy-Discussion mailing list
> NumPy-Discussion@scipy.org
> http://mail.scipy.org/mailman/listinfo/numpy-discussion




-- 
Yuxiang Shawn Wang
Gerling Research Lab
University of Virginia
yw...@virginia.edu
+1 (434) 284-0836
https://sites.google.com/a/virginia.edu/yw5aj/
___
NumPy-Discussion mailing list
NumPy-Discussion@scipy.org
http://mail.scipy.org/mailman/listinfo/numpy-discussion


[Numpy-discussion] Implementation of tensor product and tensor contraction?

2014-11-25 Thread Yuxiang Wang
Dear all,

I have been doing tensor algebra recently (for continuum mechanics)
and was looking into two common operations: tensor product & tensor
contraction.

1. Tensor product

One common usage is:
a[i1, i2, i3, ..., iN, j1, j2, j3, ..., jM] = b[i1, i2, i3, ..., iN] *
c[j1, j2, j3, ..., jM]

I looked into the current np.outer(), and the only difference is that
it always flattens the input array. So actually, the function for
tensor product is simply

np.outer(a, b, out=out).reshape(a.shape+b.shape)  -- I think I got
this right but please do correct me if I am wrong

Would anyone think it helpful or harmful to add such a function,
np.tensorprod()? It would simply be like:

def tensorprod(a, b, out=None):
    return np.outer(a, b, out=out).reshape(a.shape + b.shape)


2. Tensor contraction

It is currently np.tensordot(a, b), which does np.tensordot(a,
b, axes=2) by default. I think this is all great, but it would be even
better if the doc:
i) said explicitly that by default it is the double-dot or
contraction operator, and
ii) explained, in cases where axes is an integer-like scalar,
which axes are selected from the two arrays and in what order. Like:
"if axes is an integer-like scalar, it is the number of axes to sum over,
equivalent to axes=(list(range(-axes, 0)), list(range(0, axes)))" (or
something like this)


It'd be great to hear what you would think about it,

Shawn
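
A quick numerical sketch of both points, for concreteness:

```python
import numpy as np

a = np.arange(6.).reshape(2, 3)
b = np.arange(24.).reshape(2, 3, 4)

# 1. Tensor product: np.outer flattens its inputs, so reshaping restores
#    the combined index structure t[i1,i2,j1,j2,j3] = a[i1,i2] * b[j1,j2,j3].
t = np.outer(a, b).reshape(a.shape + b.shape)
assert np.allclose(t, np.multiply.outer(a, b))

# 2. Integer `axes`: tensordot(a, b, axes=2) contracts the last two
#    axes of a with the first two axes of b (the double contraction).
c = np.tensordot(a, b, axes=2)
assert np.allclose(c, np.einsum('ij,ijk->k', a, b))
```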


-- 
Yuxiang Shawn Wang
Gerling Research Lab
University of Virginia
yw...@virginia.edu
+1 (434) 284-0836
https://sites.google.com/a/virginia.edu/yw5aj/
___
NumPy-Discussion mailing list
NumPy-Discussion@scipy.org
http://mail.scipy.org/mailman/listinfo/numpy-discussion


Re: [Numpy-discussion] Implementation of tensor product and tensor contraction?

2014-11-25 Thread Yuxiang Wang
Hi Charles,

Thank you for your response!

I do think np.einsum() is really great. I am not clear about how that
ties to my question though, because I was thinking more in lines of
wrapping and reshaping the output (#1) and improve the documentation
(#2), where the proper outcome was already calculated.


-Shawn

On Tue, Nov 25, 2014 at 11:12 PM, Charles R Harris
<charlesr.har...@gmail.com> wrote:
> Take a look at einsum, it is quite good for such things.
>
> Chuck
>
> On Tue, Nov 25, 2014 at 9:06 PM, Yuxiang Wang <yw...@virginia.edu> wrote:
>
>> Dear all,
>>
>> I have been doing tensor algebra recently (for continuum mechanics)
>> and was looking into two common operations: tensor product & tensor
>> contraction.
>>
>> 1. Tensor product
>>
>> One common usage is:
>> a[i1, i2, i3, ..., iN, j1, j2, j3, ..., jM] = b[i1, i2, i3, ..., iN] *
>> c[j1, j2, j3, ..., jM]
>>
>> I looked into the current np.outer(), and the only difference is that
>> it always flattens the input array. So actually, the function for
>> tensor product is simply
>>
>> np.outer(a, b, out=out).reshape(a.shape+b.shape)  -- I think I got
>> this right but please do correct me if I am wrong
>>
>> Would anyone think it helpful or harmful to add such a function,
>> np.tensorprod()? it will simply be like
>> def tensorprod(a, b, out=None):
>> return outer(a, b, out=out).reshape(a.shape+b.shape)
>>
>>
>> 2. Tensor contraction
>>
>> It is currently the np.tensordot(a, b) and it will do np.tensordot(a,
>> b, axes=2) by default. I think this is all great, but it would be even
>> better if we specify in the doc, that:
>> i) say explicitly that by default it will be the double-dot or
>> contraction operator, and
>> ii) explain that in cases where axes is an integer-like scalar,
>> which axes were selected from the two array and in what order. Like:
>> if axes is an integer-like scalar, it is the number axes to sum over,
>> equivalent to axes=(list(range(-axes, 0)), list(range(0, axes)))   (or
>> something like this)
>>
>>
>> It'd be great to hear what you would think about it,
>>
>> Shawn
>>
>>
>> --
>> Yuxiang Shawn Wang
>> Gerling Research Lab
>> University of Virginia
>> yw...@virginia.edu
>> +1 (434) 284-0836
>> https://sites.google.com/a/virginia.edu/yw5aj/
>> ___
>> NumPy-Discussion mailing list
>> NumPy-Discussion@scipy.org
>> http://mail.scipy.org/mailman/listinfo/numpy-discussion
>
>
>
> ___
> NumPy-Discussion mailing list
> NumPy-Discussion@scipy.org
> http://mail.scipy.org/mailman/listinfo/numpy-discussion




-- 
Yuxiang Shawn Wang
Gerling Research Lab
University of Virginia
yw...@virginia.edu
+1 (434) 284-0836
https://sites.google.com/a/virginia.edu/yw5aj/
___
NumPy-Discussion mailing list
NumPy-Discussion@scipy.org
http://mail.scipy.org/mailman/listinfo/numpy-discussion


Re: [Numpy-discussion] Why ndarray provides four ways to flatten?

2014-10-27 Thread Yuxiang Wang
Hi Alexander,

In my opinion, it is because they don't do the same thing, especially
when you think in lower-level terms.

ndarray.flat returns an iterator; ndarray.flatten() returns a copy;
ndarray.ravel() only makes copies when necessary; ndarray.reshape() is
more general purpose, even though you can use it to flatten arrays.

They are very distinct in behavior. For example, copies and views may
be laid out in memory very differently, and you would have to pay
attention to the strides if you are passing them down to
C/Fortran code. (Please correct me if I am wrong.)
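A small sketch (my own, not from the thread) of how the four spellings differ in copy/view behavior:

```python
import numpy as np

x = np.arange(6).reshape(2, 3)

flat_iter = x.flat        # np.flatiter: an iterator, no new array
copy1 = x.flatten()       # always a fresh copy
view1 = x.ravel()         # a view when possible (here x is C-contiguous)
view2 = x.reshape(-1)     # likewise a view when possible

# Writing to the copy leaves x untouched...
copy1[0] = 99
assert x[0, 0] == 0

# ...but writing through the ravel() view propagates back to x.
view1[0] = 42
assert x[0, 0] == 42
```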

-Shawn

On Mon, Oct 27, 2014 at 8:06 PM, Alexander Belopolsky ndar...@mac.com wrote:
 Given an n-dim array x, I can do

 1. x.flat
 2. x.flatten()
 3. x.ravel()
 4. x.reshape(-1)

 Each of these expressions returns a flat version of x with some variations.
 Why does NumPy implement four different ways to do essentially the same
 thing?








[Numpy-discussion] Inverse function of numpy.polyval()

2014-05-19 Thread Yuxiang Wang
Dear all,

I was wondering is there a convenient inverse function of
np.polyval(), where I give the y value and it solves for x?

I know one way I could do this is:

import numpy as np

# Set up the question
p = np.array([1, 1, -10])
y = 100

# Solve
p_temp = p.copy()  # copy, so that p itself is not mutated in place
p_temp[-1] -= y
x = np.roots(p_temp)

However, my guess is that most would agree this code has low
readability. Any suggestions?
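For what it's worth, one slightly more readable spelling (my own suggestion, not from the thread) uses np.polysub, which treats the scalar as a constant polynomial and avoids mutating p:

```python
import numpy as np

p = np.array([1, 1, -10])  # x**2 + x - 10
y = 100

# Solve p(x) - y = 0 without touching p in place:
x = np.roots(np.polysub(p, y))
assert np.allclose(sorted(x), [-11.0, 10.0])
assert p[-1] == -10  # p is unchanged
```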

Thanks!

-Shawn




Re: [Numpy-discussion] Second order gradient in numpy

2014-05-02 Thread Yuxiang Wang
Hi Chris,

Thank you! This is useful information. Unfortunately, I am doing this
on data from a sensor, which would be hard to fit to a simple
polynomial without overfitting.

Thanks again!

Shawn

On Thu, May 1, 2014 at 7:01 PM, Chris Barker chris.bar...@noaa.gov wrote:
 On Thu, May 1, 2014 at 3:42 PM, Christian K. ckk...@hoc.net wrote:


 It looks like you are looking for the derivative rather than the
 gradient. Have a look at:

 np.diff(a, n=1, axis=-1)

 n is the order of the derivative.


 depending on your use case, you may want to use a polynomial fit for a
 higher order derivative:

 np.polyder()

 -Chris
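To illustrate the polynomial-fit route Chris suggests (a sketch on made-up data, not the sensor data from the thread):

```python
import numpy as np

# Hypothetical smooth data standing in for a signal that a cubic fits well.
x = np.linspace(0, 10, 101)
y = x**3 - 2 * x

coeffs = np.polyfit(x, y, deg=3)  # fit a cubic polynomial
d2 = np.polyder(coeffs, m=2)      # coefficients of the 2nd derivative

# The 2nd derivative of x**3 - 2x is 6x, i.e. coefficients [6, 0].
assert np.allclose(d2, [6.0, 0.0], atol=1e-6)
```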


 --

 Christopher Barker, Ph.D.
 Oceanographer

 Emergency Response Division
 NOAA/NOS/ORR(206) 526-6959   voice
 7600 Sand Point Way NE   (206) 526-6329   fax
 Seattle, WA  98115   (206) 526-6317   main reception

 chris.bar...@noaa.gov







Re: [Numpy-discussion] Second order gradient in numpy

2014-05-02 Thread Yuxiang Wang
Hi Christian,

Thank you for your input! I prefer np.gradient because it uses
central (mid-point) finite-difference estimates instead of one-sided
ones, but np.diff() is also a good idea. Just wondering, out of
curiosity, why np.gradient does not have something similar :)
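To illustrate the difference (my own example): np.diff is one-sided and shortens the array, while np.gradient is central in the interior and preserves the length:

```python
import numpy as np

a = np.array([0.0, 1.0, 4.0, 9.0, 16.0])  # f(x) = x**2 at x = 0..4

# np.diff: one-sided differences, output one element shorter
assert list(np.diff(a)) == [1.0, 3.0, 5.0, 7.0]

# np.gradient: central differences in the interior, one-sided only
# at the two edges, same length as the input
assert list(np.gradient(a)) == [1.0, 2.0, 4.0, 6.0, 7.0]
```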

Shawn

On Thu, May 1, 2014 at 6:42 PM, Christian K. ckk...@hoc.net wrote:
 On 01.05.14 18:45, Yuxiang Wang wrote:
 Hi all,

 I am trying to calculate the 2nd-order gradient numerically of an
 array in numpy.

  import numpy as np
  a = np.sin(np.arange(0, 10, .01))
  da = np.gradient(a)
  dda = np.gradient(da)

 It looks like you are looking for the derivative rather than the
 gradient. Have a look at:

 np.diff(a, n=1, axis=-1)

  n is the order of the derivative.

 Christian







[Numpy-discussion] Second order gradient in numpy

2014-05-01 Thread Yuxiang Wang
Hi all,

I am trying to calculate the 2nd-order gradient numerically of an
array in numpy.

import numpy as np
a = np.sin(np.arange(0, 10, .01))
da = np.gradient(a)
dda = np.gradient(da)

This is what I came up with. Is this the way it should be done?

I am asking this because numpy has no option like
np.gradient(a, order=2). I am concerned that this usage might be
wrong, and that that is why numpy does not have it implemented.
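As a sanity check of the nested-np.gradient approach (my own sketch; note that passing the sample spacing matters), the second derivative of sin should come out close to -sin:

```python
import numpy as np

dx = 0.01
x = np.arange(0, 10, dx)
a = np.sin(x)

# Apply np.gradient twice, passing the sample spacing each time.
dda = np.gradient(np.gradient(a, dx), dx)

# d^2/dx^2 sin(x) = -sin(x); the interior points should match well
# (the endpoints use one-sided differences and are less accurate).
assert np.allclose(dda[2:-2], -np.sin(x)[2:-2], atol=1e-3)
```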

Thank you!

-Shawn


Re: [Numpy-discussion] shortcut nonzero?

2014-02-25 Thread Yuxiang Wang
Hi Alan,

If you are only dealing with a 1-D array, what about:

np.nonzero(your_array)[0][:k]

?
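For example (with one caveat I should add: this computes all of the nonzero indices first and then slices, so unlike Matlab's find(X, k) it cannot stop early):

```python
import numpy as np

arr = np.array([0, 0, 3, 0, 5, 7, 0])

k = 2
idx = np.nonzero(arr)[0][:k]  # indices of the first k nonzero entries
assert list(idx) == [2, 4]
```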

-Shawn


On Tue, Feb 25, 2014 at 2:20 PM, Alan G Isaac alan.is...@gmail.com wrote:

 Is there a shortcut version for finding the first (k) instance(s) of
 nonzero entries?
 I'm thinking of Matlab's `find(X,k)`:
 http://www.mathworks.com/help/matlab/ref/find.html
 Easy enough to write of course.

 I thought `flatnonzero` would be the obvious place for this,
 but it does not have a `first=k` option.
 Is such an option worth suggesting?

 Thanks,
 Alan Isaac



