Works great! Thanks! Code attached.
Franck
>> make; python dummyTest1.py; python dummyTest2.py; python dummyTest3.py
g++ -I/usr/include/python2.7 -o dummy.so -shared -fPIC dummy.cpp -lboost_python -lboost_numpy
[1 2 3]
[1.2 2.3 3.4]
[4.5 5.6 6.7]
ptrInt[0] = 1, 0x555a42aede80
ptrInt[1] = 2, 0
On 2020-02-11 2:05 p.m., HOUSSEN Franck wrote:
OK, I understand this is a type-related problem. I found a workaround
(dummyTest3.py: initializing the numpy arrays from lists seems to work).
But I unfortunately do not see how to change the code to get dummyTest2.py
to work. Should I read the reinterpret_cast'ed data sizeof(double) by
sizeof(double)? See
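For what it's worth, a minimal sketch in plain numpy (no Boost involved; the values are the ones from the test output above) of what that size mismatch looks like: numpy's .view reinterprets the raw buffer the same way a reinterpret_cast on the C++ side would.

```python
import numpy as np

d = np.array([1.2, 2.3, 3.4])   # float64 buffer, like the test output above

# Reinterpreting the same bytes 4 at a time (as a C++ reinterpret_cast<int*>
# would) yields garbage values:
as_int32 = d.view(np.int32)
print(as_int32)

# Reading the buffer sizeof(double) by sizeof(double), i.e. with the dtype
# it really has, recovers the values:
as_f64 = d.view(np.float64)
assert np.array_equal(as_f64, d)
```

Rather than hard-coding sizeof(double) on the C++ side, it may be safer to check the incoming array's dtype (or itemsize) and reject or convert anything unexpected before casting the data pointer.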
> On Feb 11, 2020, at 13:24, HOUSSEN Franck wrote:
Finally able to reproduce the "real" problem with a "dummy" example: it seems
that, on the Python side, when you use "np.append", the C++ side gets corrupted
data. (Note that, with or without "np.append", everything is OK on the Python side.)
Can somebody help here? Known problem? Possible workaround or fix?
Should I open
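A possible explanation, sketched in plain Python (assuming the platform's default integer dtype; no Boost involved): np.append returns a new array and can silently promote the dtype, so a C++ extension that casts the buffer to the original element type would read garbage.

```python
import numpy as np

a = np.array([1, 2, 3])        # default integer dtype (e.g. int64 on Linux)
b = np.append(a, [4.5, 5.6])   # returns a NEW array; the float values
                               # promote the whole result to float64
print(a.dtype, b.dtype)        # the dtypes differ
assert a.dtype != b.dtype
assert b.dtype == np.float64
```

If this is what happens here, forcing the dtype on the Python side (e.g. with astype) before handing the array to the extension, or checking the dtype on the C++ side, should avoid the corruption.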