Yup. That was it. Manually updating this flag (the one that records whether
the host or the device copy was updated last) after PyCUDA modified the
device buffer fixed it. Thank you!

And if anyone is interested, here is a petsc4py extension that lets you
access PETSc vectors as PyCUDA GPUArrays:
https://github.com/ashwinsrnth/petsc-pycuda
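For anyone who wants the gist without cloning the repo, here's a minimal
sketch of the idea. It assumes a petsc4py build recent enough to expose
Vec.getCUDAHandle/restoreCUDAHandle (newer than the API discussed in this
thread), and it glosses over CUDA context sharing between PyCUDA and PETSc,
which the extension above takes care of. The `mode` argument is what does
the "which copy is current" bookkeeping I was originally updating by hand:

    import numpy as np
    import pycuda.autoinit  # sets up a CUDA context (see context caveat above)
    import pycuda.gpuarray as gpuarray
    from petsc4py import PETSc

    n = 10
    vec = PETSc.Vec().create()
    vec.setType(PETSc.Vec.Type.CUDA)  # GPU-backed vector
    vec.setSizes(n)
    vec.set(1.0)

    # Borrow the raw device pointer. mode='w' tells PETSc we intend to
    # overwrite the device copy, so it marks it as most recently updated.
    handle = vec.getCUDAHandle(mode='w')
    arr = gpuarray.GPUArray((n,), dtype=np.float64, gpudata=handle)
    arr.fill(42.0)  # modify device memory through PyCUDA

    # Hand the pointer back. PETSc now knows the device side changed and
    # will copy device -> host on the next host access instead of serving
    # a stale host copy.
    vec.restoreCUDAHandle(handle, mode='w')

    print(vec.getArray())  # triggers the device->host transfer; all 42s

The same wrap works for read access with mode='r' or 'rw', in which case
PETSc first copies host -> device if the host copy is newer.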

On Thu, Oct 16, 2014 at 1:37 PM, Ashwin Srinath <ashwinsr...@gmail.com>
wrote:

> Thank you, Andreas. The documentation does mention that PETSc internally
> keeps track of which copy of a vector (host or device) was last updated.
> So when I update the memory on the PyCUDA side, maybe PETSc doesn't know
> about it.
>
> Thank you, I'll investigate further.
>
> On Thu, Oct 16, 2014 at 1:28 PM, Andreas Kloeckner <
> li...@informa.tiker.net> wrote:
>
>> Ashwin Srinath <ashwinsr...@gmail.com> writes:
>> > I'm not sure - but this may have something to do with the
>> > implementation of `fill`. Because on the flip side, changes to the
>> > PETSc Vec *are* reflected in the GPUArray. So I can see that they are
>> > actually sharing device memory.
>>
>> As far as I know, PETSc maintains a copy of the vector on the host and
>> on the device and tacitly assumes that nobody except it makes
>> modifications to the vector. So if it assumes nobody has modified the
>> device-side vector, it may give you a (stale) host-side copy.
>>
>> HTH,
>> Andreas
>>
>
>
_______________________________________________
PyCUDA mailing list
PyCUDA@tiker.net
http://lists.tiker.net/listinfo/pycuda
