Yes, I guess I’d end up taking a similar path to Arrow in this regard. I think
I have some homework to do, to see whether I can use the Arrow format to model
some things like meshes, scene graph layout, etc. If that is a good fit, it
makes sense to use Arrow. Even if it isn’t a perfect fit, I li

Arrow only uses Flatbuffers to serialize metadata, *not* data.
On Mon, Dec 14, 2020 at 1:39 PM Robert Bigelow wrote:
>
> This is an excellent point. I could use Flatbuffers directly to define any
> custom format needed by the engine. The engine itself would need to use the
> same principles the

This is an excellent point. I could use Flatbuffers directly to define any
custom format needed by the engine. The engine itself would need to use the
same principles the Arrow devs have, which I guess is true of any
data-intensive system. Thanks for your response!
> On Dec 14, 2020, at 11:24 A

Hi all,
I've created
https://cwiki.apache.org/confluence/display/ARROW/Arrow+3.0.0+Release,
cloned from our previous release dashboards, for our upcoming release. A
few things I'd like to draw your attention to:
* Blockers: there are 5 currently. I'm not certain that they all are
blockers, nor am

Hi Rares,
Ok, so here is the explanation: `pa.ipc.open_stream` opens the given
file memory-mapped, so the buffers read from the file are zero-copy
views into it. But then you rewrite the file from scratch, so those
zero-copy buffers now point at invalid memory. Hence the "Bad
address" error you'
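The failure mode described here can be sketched with the standard library's `mmap` alone (no pyarrow involved; the file name and sizes below are made up for illustration). The point is that a memory-mapped "read" hands back a view of the file's pages rather than a copy, so rewriting the file changes, or invalidates, buffers that were read zero-copy:

```python
# Sketch of the zero-copy hazard using plain mmap (not pyarrow).
# A memory-mapped "read" returns a view of the file's pages, not a
# copy, so rewriting the file mutates data already handed to a reader.
import mmap
import os
import tempfile

path = os.path.join(tempfile.mkdtemp(), "data.bin")
with open(path, "wb") as f:
    f.write(b"x" * 4096)

f = open(path, "rb")
view = mmap.mmap(f.fileno(), 0, access=mmap.ACCESS_READ)
first_read = view[:4]            # slicing copies out: b"xxxx"

# Rewrite the start of the file, as the test in this thread rewrites
# its Arrow file after reading it memory-mapped.
with open(path, "r+b") as g:
    g.write(b"y" * 4)

print(first_read)                # b'xxxx' -- this was copied out in time
print(view[:4])                  # b'yyyy' -- the mapping went stale
view.close()
f.close()
```

If the rewrite truncates the file instead of overwriting it in place, a stale mapped pointer handed to a system call such as write() makes the kernel report EFAULT, which strerror renders as exactly "Bad address".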
Arrow uses FlatBuffers under the hood.
https://google.github.io/flatbuffers/
FlatBuffers is an efficient cross platform serialization library for C++, C#,
C, Go, Java, Kotlin, JavaScript, Lobster, Lua, TypeScript, PHP, Python, Rust
and Swift. It was originally created at Google for game develo

Hi Antoine,
Here is a repro for this issue:
import pyarrow

fn = '/tmp/foo'

# Data
data = [
    pyarrow.array(range(1000)),
    pyarrow.array(range(1000))
]
batch = pyarrow.record_batch(data, names=['f0', 'f1'])

# File Prep
writer = pyarrow.ipc.RecordBatchStreamWriter(fn, batch.schema)
writer.write_batch(batch)
writer.close()

Hi,
I have a simple Feather file created via pandas `to_feather` with a
datetime64[ns] column, and I cannot read the timestamps in JavaScript
with apache-arrow@2.0.0.
See this notebook:
https://observablehq.com/@nite/apache-arrow-timestamp-investigation
I'm guessing I'm missing something, has anyone got any

I would like to draw people's attention to the following
proposal documenting the acceptable use of `unsafe` Rust in the Rust Arrow
implementation:
https://github.com/apache/arrow/pull/8901
I wanted to increase the visibility of this proposal as it has implications
for future contributions.
Andr

Also, do not feel the need to be constrained by the structures that
are currently defined.
On Mon, Dec 14, 2020 at 4:33 AM Antoine Pitrou wrote:
>
>
> Hi,
>
> If you set `can_execute_chunkwise = false` on the kernel options, you
> should see the whole chunked array.
>
> Regards
>
> Antoine.

Hi,
If you set `can_execute_chunkwise = false` on the kernel options, you
should see the whole chunked array.
Regards
Antoine.
On 14/12/2020 11:27, Yibo Cai wrote:
> The current kernel framework divides inputs (e.g. arrays, chunked arrays)
> into batches and feeds them to kernel code.
> Does it

The current kernel framework divides inputs (e.g. arrays, chunked arrays)
into batches and feeds them to kernel code.
Does it make sense to pass the input args directly to the kernel?
I'm writing a quantile kernel and need to allocate a buffer to record all
inputs and find the nth element at the end. For a chunked array, input is recei
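The buffering pattern described above can be sketched in Python (purely illustrative; this is not the Arrow C++ kernel API, and `quantile_chunked` is a made-up name). An exact quantile cannot be computed chunk-at-a-time, so a kernel fed batch by batch has to accumulate every chunk and only compute once the last one has arrived:

```python
# Hypothetical sketch of why a quantile kernel wants all input at once:
# per-chunk quantiles cannot be merged, so a chunk-at-a-time kernel
# must buffer every chunk and compute only at the end.
# (Illustrative only -- not the Arrow C++ compute framework.)

def quantile_chunked(chunks, q):
    """Exact quantile over a sequence of chunks (lists of numbers)."""
    buf = []                      # the buffer the kernel must allocate
    for chunk in chunks:          # chunks arrive one batch at a time
        buf.extend(chunk)
    buf.sort()
    # nearest-rank style index into the fully materialized input
    n = len(buf)
    idx = min(int(q * (n - 1) + 0.5), n - 1)
    return buf[idx]

chunks = [[9, 1, 5], [3, 7], [2, 8, 4, 6]]
print(quantile_chunked(chunks, 0.5))   # median of 1..9 -> 5
```

A mergeable statistic (sum, min/max, count) would not need this buffer, which is presumably why batch-at-a-time execution is the framework's default.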
Hello Rares,
Is there a complete reproducer that we may try out?
Regards
Antoine.
On 14/12/2020 06:52, Rares Vernica wrote:
> Hello,
>
> As part of a test, I'm reading a record batch from an Arrow file,
> re-batching the data in smaller batches, and writing back the result to the
> sam