zeroshade commented on issue #37976:
URL: https://github.com/apache/arrow/issues/37976#issuecomment-1743406221
There are a few possible ways of handling this:
1. Clients could use the call option `MaxCallRecvMsgSize(s)` to raise
that 4MB limit on a per-call basis, or pass the dial option
`WithDefaultCallOptions(MaxCallRecvMsgSize(s))` when connecting to raise
the limit for every call on the connection by default.
2. Rather than creating a single record with all of your results, you could
create multiple records. The `RecordBuilder` resets with the same schema after
you call `NewRecord`, so you can easily write to it again and generate a new
record. Then call `rw.Write(rec)` on each record you create, sending the
results in chunks.
3. You can slice a record via `rec.NewSlice(i, j)`, which creates a view of
the record without performing a copy (slices need to have `Release` called
on them separately!). Then call `rw.Write(slice)` on each slice, chunking
the record out by sending one slice at a time.
On the client side, you already have the loop, so you can create an Arrow
table from the records:
```go
recs := make([]arrow.Record, 0)
for reader.Next() {
	r := reader.Record()
	r.Retain() // the reader releases the record on the next call to Next
	defer r.Release()
	recs = append(recs, r)
}
if err := reader.Err(); err != nil {
	return err
}

// if you want to treat all of the records as a single table without copying
tbl := array.NewTableFromRecords(reader.Schema(), recs)
defer tbl.Release()
// do stuff with tbl....
```