Hi Al,

It's definitely wrong. I confirmed the behavior is present on master.

https://issues.apache.org/jira/browse/ARROW-10121

I made this a blocker for the release.
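
Until the fix lands, one possible workaround (a sketch, assuming the problem
is confined to replacement dictionaries in the stream reader) is to decode
dictionary columns to their plain value type before writing and re-encode
after reading:

```
import pyarrow as pa

def decode_dictionary_columns(batch):
    # Hypothetical helper: cast every dictionary-encoded column to its
    # plain value type so the stream carries no dictionaries at all.
    fields, arrays = [], []
    for field, column in zip(batch.schema, batch.columns):
        if pa.types.is_dictionary(field.type):
            fields.append(pa.field(field.name, field.type.value_type))
            arrays.append(column.cast(field.type.value_type))
        else:
            fields.append(field)
            arrays.append(column)
    return pa.record_batch(arrays, schema=pa.schema(fields))
```

On the read side, calling .dictionary_encode() on the affected columns
restores the original type; the trade-off is losing dictionary compression
on the wire.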

Thanks,
Wes

On Mon, Sep 28, 2020 at 10:52 AM Al Taylor
<al.taylor1...@googlemail.com.invalid> wrote:
>
> Hi,
>
> I've found that when I serialize two record batches that share a 
> dictionary-encoded field but have different encoding dictionaries to a 
> sequence of pybytes with a RecordBatchStreamWriter, then deserialize with 
> pa.ipc.open_stream(), the dictionaries get jumbled (or at least, on 
> deserialization, the dictionary for the first batch is being reused for the 
> second).
>
> MWE:
> ```
> import pyarrow as pa
> from io import BytesIO
>
> print(pa.__version__)
>
> schema = pa.schema([
>     pa.field('foo', pa.int32()),
>     pa.field('bar', pa.dictionary(pa.int32(), pa.string())),
> ])
> r1 = pa.record_batch(
>     [
>         [1, 2, 3, 4, 5],
>         pa.array(["a", "b", "c", "d", "e"]).dictionary_encode()
>     ],
>     schema
> )
>
> r1.validate()
> r2 = pa.record_batch(
>     [
>         [1, 2, 3, 4, 5],
>         pa.array(["c", "c", "e", "f", "g"]).dictionary_encode()
>     ],
>     schema
> )
>
> r2.validate()
>
> assert not r1.column(1).dictionary.equals(r2.column(1).dictionary)
>
>
> sink = pa.BufferOutputStream()
> writer = pa.RecordBatchStreamWriter(sink, schema)
>
> writer.write(r1)
> writer.write(r2)
> writer.close()  # finalize the stream with an end-of-stream marker
>
> serialized = BytesIO(sink.getvalue().to_pybytes())
> stream = pa.ipc.open_stream(serialized)
>
> deserialized = []
>
> while True:
>     try:
>         deserialized.append(stream.read_next_batch())
>     except StopIteration:
>         break
>
> print(deserialized[0].column(1).to_pylist())
> print(deserialized[1].column(1).to_pylist())
> ```
> (The last line of the above prints `['a', 'a', 'b', 'c', 'd']` rather than 
> the expected `['c', 'c', 'e', 'f', 'g']`.) This behaviour doesn't look 
> right. I was wondering whether I'm simply not using the library correctly 
> or whether this is a bug in pyarrow.
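>
> A round-trip equality check makes the mismatch concrete (a sketch reusing 
> r1, r2 and deserialized from above):
> ```
> # The first batch survives the round trip; the second comes back
> # carrying the first batch's dictionary, so the values differ.
> assert deserialized[0].equals(r1)
> assert not deserialized[1].equals(r2)  # should be equal, but isn't
> ```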
>
> Thanks,
>
> Al
