[NIGHTLY] Arrow Build Report for Job nightly-2020-11-17-0

2020-11-17 Thread Crossbow
Arrow Build Report for Job nightly-2020-11-17-0 All tasks: https://github.com/ursa-labs/crossbow/branches/all?query=nightly-2020-11-17-0 Failed Tasks: - conda-linux-gcc-py37-cpu: URL: https://github.com/ursa-labs/crossbow/branches/all?query=nightly-2020-11-17-0-azure-conda-linux-gcc-py37-cp

Re: C++: Cache RecordBatch

2020-11-17 Thread Antoine Pitrou
Hi Rares, On 17/11/2020 at 03:34, Rares Vernica wrote: > > I'm using an arrow::io::BufferReader and > arrow::ipc::RecordBatchStreamReader to read an arrow::RecordBatch from a > file. There is only one batch in the file so I do a single > RecordBatchStreamReader::ReadNext call. I store the popu

[Discuss] Should dense union offsets be always increasing?

2020-11-17 Thread Antoine Pitrou
Hello, The format spec and the C++ implementation disagree on one point: * The spec says that dense union offsets should be increasing: """The respective offsets for each child value array must be in order / increasing.""" (from https://arrow.apache.org/docs/format/Columnar.html#dense-union)
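To make the question concrete, here is a small pure-Python model of dense-union addressing (illustrative only, not Arrow's API): each slot carries a type id selecting a child array plus an offset into that child. The spec as quoted requires the offsets for each child to be increasing; out-of-order offsets would still be mechanically decodable, which is what the C++ implementation currently tolerates.

```python
# Pure-Python model of a dense union array (illustrative, not Arrow's API).
# children: one value list per child type; type_ids: which child each slot
# uses; offsets: index into the selected child's values.
def take_dense_union(children, type_ids, offsets):
    return [children[t][o] for t, o in zip(type_ids, offsets)]

def offsets_increasing_per_child(type_ids, offsets, num_children):
    # The rule quoted from the spec: the offsets referencing each child
    # must appear in increasing order.
    last = [-1] * num_children
    for t, o in zip(type_ids, offsets):
        if o < last[t]:
            return False
        last[t] = o
    return True

children = [["a", "b"], [1, 2, 3]]
print(take_dense_union(children, [0, 1, 1, 0], [0, 0, 1, 1]))
# -> ['a', 1, 2, 'b']
print(offsets_increasing_per_child([0, 1, 1, 0], [0, 0, 1, 1], 2))  # -> True
print(offsets_increasing_per_child([0, 0], [1, 0], 2))  # -> False (decreasing)
```

Note that `take_dense_union` decodes either way; only the validity check distinguishes the two readings of the spec.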

Re: [Discuss] Should dense union offsets be always increasing?

2020-11-17 Thread Wes McKinney
In principle I'm in favor of #2 -- the only question is what kinds of problems it might pose for forward compatibility. Note * This is completely backward compatible (any data conforming to the spec to the letter will continue to be conforming) * It is also forward compatible at a protocol level,

Re: Using arrow/compute/kernels/*internal.h headers

2020-11-17 Thread Benjamin Kietzman
Hi Niranda, hastebin: That looks generally correct, though I should warn you that a recent PR ( https://github.com/apache/arrow/pull/8574 ) changed the return type of DispatchExact to Kernel so you'll need to insert an explicit cast to ScalarAggregateKernel. 1: This seems like a feature which mig

[Discuss] Refreshing a bearer token with retry mechanism

2020-11-17 Thread Keerat Singh
I am trying to implement an auth mechanism using access tokens that will expire, with the ability to retry the Flight API call automatically with the basic credentials (username/password) when the Flight Server comes back with an "access token expired" response. For this, I need to keep track of the
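The retry pattern being described can be sketched in plain Python as below. All names here (`FlightClientSketch`, `TokenExpired`, `rpc`) are hypothetical stand-ins: Arrow Flight's real clients surface this through their auth-handler/middleware hooks rather than this exact shape.

```python
# Generic sketch of "retry once with basic credentials when the bearer token
# has expired". Hypothetical names; not Arrow Flight's actual API.
class TokenExpired(Exception):
    pass

class FlightClientSketch:
    def __init__(self, username, password, authenticate):
        self._username = username
        self._password = password
        self._authenticate = authenticate  # (user, pass) -> fresh bearer token
        self._token = None

    def call(self, rpc):
        # rpc: a callable taking the current bearer token.
        if self._token is None:
            self._token = self._authenticate(self._username, self._password)
        try:
            return rpc(self._token)
        except TokenExpired:
            # Server rejected the token: re-authenticate with the stored
            # basic credentials and retry the call exactly once.
            self._token = self._authenticate(self._username, self._password)
            return rpc(self._token)

# Toy server: only the most recently issued token is accepted.
issued = []
def authenticate(user, pw):
    issued.append(f"token-{len(issued)}")
    return issued[-1]

def rpc(token):
    if token != issued[-1]:
        raise TokenExpired
    return "ok"

client = FlightClientSketch("user", "secret", authenticate)
print(client.call(rpc))   # -> "ok" (fresh token obtained on first use)
issued.append("stale")    # simulate server-side expiry of the cached token
print(client.call(rpc))   # -> "ok" (transparent re-auth and single retry)
```

The key design point is that the client caches the bearer token across calls and falls back to the basic credentials only on the expiry signal, so the username/password pair must remain available for the client's lifetime.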

Re: C++: Cache RecordBatch

2020-11-17 Thread Rares Vernica
Hi Antoine, On Tue, Nov 17, 2020 at 2:34 AM Antoine Pitrou wrote: > > On 17/11/2020 at 03:34, Rares Vernica wrote: > > > > I'm using an arrow::io::BufferReader and > > arrow::ipc::RecordBatchStreamReader to read an arrow::RecordBatch from a > > file. There is only one batch in the file so I do a

Re: C++: Cache RecordBatch

2020-11-17 Thread Wes McKinney
On Tue, Nov 17, 2020 at 5:41 PM Rares Vernica wrote: > > Hi Antoine, > > On Tue, Nov 17, 2020 at 2:34 AM Antoine Pitrou wrote: > > > > On 17/11/2020 at 03:34, Rares Vernica wrote: > > > > > > I'm using an arrow::io::BufferReader and > > > arrow::ipc::RecordBatchStreamReader to read an arrow::Re

Re: Using arrow/compute/kernels/*internal.h headers

2020-11-17 Thread Niranda Perera
1. This is great. I will follow this JIRA. (better yet, I'll see if I can make that contribution) 2. If we forget about the multithreading case for a moment, this requirement came up while implementing a "groupby + aggregation" operation (single-threaded). Let's assume that a table is not sorted.
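The single-threaded "groupby + aggregation" over an unsorted table described here is, at its core, a hash-based pass. A minimal pure-Python sketch (plain lists standing in for table columns; Arrow's compute kernels operate on Arrow arrays instead):

```python
# Minimal hash-based groupby + sum over an unsorted "table", represented
# as parallel Python lists (illustrative only, not Arrow's kernel API).
def group_by_sum(keys, values):
    # Single pass: map each key to its running sum; no sort required,
    # and group order follows first appearance of each key.
    sums = {}
    for k, v in zip(keys, values):
        sums[k] = sums.get(k, 0) + v
    return sums

keys = ["b", "a", "b", "c", "a"]   # deliberately unsorted
values = [1, 2, 3, 4, 5]
print(group_by_sum(keys, values))  # -> {'b': 4, 'a': 7, 'c': 4}
```

This is why the table need not be sorted: the hash table absorbs rows in whatever order they arrive, which is also the property that makes a later multithreaded variant a matter of merging per-thread hash tables.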

Graph model in arrow

2020-11-17 Thread Leonidus Bhai
Hi, I am thinking of building out a query system using Arrow. I have a graph data model with objects having bidirectional relationships to each other. Objects are persisted in an OLTP system with a normalized schema. Queries are scan-like, across multiple object types having nested relationsh

RE: Travis CI jobs gummed up on Arrow PRs?

2020-11-17 Thread Kazuaki Ishizaki
We got a response at https://travis-ci.community/t/s390x-jobs-are-stuck-in-the-received-state-for-days/10581/3?u=kiszk . Now, this problem has been solved. An interesting comment in the post is as follows: > If you would like to have an increased build capacity, we are happy to discuss the plans