Hi Radu,

I tried to reproduce the issue you described, but I couldn't trigger it.
Could you provide an example of how you built the Table?

I tried reproducing it with a table with the following schema:

pa.schema([
    pa.field('nums', pa.list_(pa.int32())),
    pa.field('chars', pa.list_(pa.dictionary(pa.int32(), pa.string())))
])

but it serialized and deserialized correctly.
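
Concretely, here is roughly what I ran. It is a minimal sketch: the
make_batch helper, the placeholder data, and building the
list-of-dictionary column via pa.ListArray.from_arrays are my own
choices, so they may well differ from how you constructed your table:

```
import pyarrow as pa

schema = pa.schema([
    pa.field('nums', pa.list_(pa.int32())),
    pa.field('chars', pa.list_(pa.dictionary(pa.int32(), pa.string())))
])

def make_batch(words):
    # Build the list<dictionary<int32, string>> column by wrapping a
    # dictionary-encoded values array in a ListArray.
    values = pa.array(words).dictionary_encode()
    offsets = pa.array([0, 2, len(words)], type=pa.int32())
    chars = pa.ListArray.from_arrays(offsets, values)
    nums = pa.array([[1, 2], [3]], type=pa.list_(pa.int32()))
    return pa.record_batch([nums, chars], schema=schema)

# A two-chunk table whose chunks carry different dictionaries
table = pa.Table.from_batches([make_batch(["a", "b", "c"]),
                               make_batch(["x", "y", "z"])])

sink = pa.BufferOutputStream()
writer = pa.ipc.new_stream(sink, schema)
writer.write_table(table)
writer.close()

result = pa.ipc.open_stream(sink.getvalue()).read_all()
print(result.column('chars').to_pylist())
```

Both chunks round-tripped with their own dictionaries on the version I
tested, so an example of your construction would help narrow this down.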

On Fri, Apr 23, 2021 at 6:36 AM Radu Teodorescu
<radukay...@yahoo.com.invalid> wrote:

> Hi, I am seeing a similar problem when serializing tables with lists of
> dictionary-encoded elements: each resulting chunk points to the first
> chunk’s original dictionary.
> Is this a known issue/limitation?
> I can follow up with a repro otherwise.
> Thank you,
> Radu
>
> > On Sep 28, 2020, at 1:26 PM, Wes McKinney <wesmck...@gmail.com> wrote:
> >
> > hi Al,
> >
> > It's definitely wrong. I confirmed the behavior is present on master.
> >
> > https://issues.apache.org/jira/browse/ARROW-10121
> >
> > I made this a blocker for the release.
> >
> > Thanks,
> > Wes
> >
> > On Mon, Sep 28, 2020 at 10:52 AM Al Taylor
> > <al.taylor1...@googlemail.com.invalid> wrote:
> >>
> >> Hi,
> >>
> >> I've found that when I serialize two record batches that have a
> >> dictionary-encoded field but different encoding dictionaries to a
> >> sequence of pybytes with a RecordBatchStreamWriter, then deserialize
> >> using pa.ipc.open_stream(), the dictionaries get jumbled (or at least,
> >> on deserialization, the dictionary for the first batch is reused for
> >> the second).
> >>
> >> MWE:
> >> ```
> >> import pyarrow as pa
> >> from io import BytesIO
> >>
> >> pa.__version__
> >>
> >> schema = pa.schema([pa.field('foo', pa.int32()), pa.field('bar',
> pa.dictionary(pa.int32(), pa.string()))] )
> >> r1 = pa.record_batch(
> >>    [
> >>        [1, 2, 3, 4, 5],
> >>        pa.array(["a", "b", "c", "d", "e"]).dictionary_encode()
> >>    ],
> >>    schema
> >> )
> >>
> >> r1.validate()
> >> r2 = pa.record_batch(
> >>    [
> >>        [1, 2, 3, 4, 5],
> >>        pa.array(["c", "c", "e", "f", "g"]).dictionary_encode()
> >>    ],
> >>    schema
> >> )
> >>
> >> r2.validate()
> >>
> >> assert r1.column(1).dictionary != r2.column(1).dictionary
> >>
> >>
> >> sink = pa.BufferOutputStream()
> >> writer = pa.RecordBatchStreamWriter(sink, schema)
> >>
> >> writer.write(r1)
> >> writer.write(r2)
> >> writer.close()  # write the end-of-stream marker
> >>
> >> serialized = BytesIO(sink.getvalue().to_pybytes())
> >> stream = pa.ipc.open_stream(serialized)
> >>
> >> deserialized = []
> >>
> >> while True:
> >>    try:
> >>        deserialized.append(stream.read_next_batch())
> >>    except StopIteration:
> >>        break
> >>
> >> print(deserialized[0].column(1).to_pylist())
> >> print(deserialized[1].column(1).to_pylist())
> >> ```
> >> (The last line of the above prints `['a', 'a', 'b', 'c', 'd']`.) This
> >> behaviour doesn't look right. I was wondering whether I'm simply not
> >> using the library correctly, or if this is a bug in pyarrow.
> >>
> >> Thanks,
> >>
> >> Al
>
>
