Golden files can also make it easier to implement the read side without
firing up the entire integration machinery.
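That read-side-only idea can be sketched in a few lines of stdlib Python. Everything here is hypothetical — `read_arrow_file` is a stand-in for a real reader, and the byte-count "description" it returns is just enough to make the sketch run; the point is only the shape of a golden-file check:

```python
import json
import os
import tempfile

def read_arrow_file(path):
    # Hypothetical reader under test: a real one would decode the
    # Arrow IPC format; this stand-in just reports the byte count
    # so the sketch runs end to end.
    with open(path, "rb") as f:
        payload = f.read()
    return {"num_bytes": len(payload)}

def check_against_golden(binary_path, golden_json_path):
    # Golden-file test: run the reader, compare with committed JSON.
    actual = read_arrow_file(binary_path)
    with open(golden_json_path) as f:
        expected = json.load(f)
    return actual == expected

# Tiny demo: a fake binary file plus a matching golden JSON file.
tmpdir = tempfile.mkdtemp()
bin_path = os.path.join(tmpdir, "tensor.arrow")
gold_path = os.path.join(tmpdir, "tensor.json")
with open(bin_path, "wb") as f:
    f.write(b"\x00" * 16)
with open(gold_path, "w") as f:
    json.dump({"num_bytes": 16}, f)
ok = check_against_golden(bin_path, gold_path)
```

The appeal is exactly what Antoine notes: only the reader and a directory of committed files are needed, not a second live implementation.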
Regards
Antoine.
Le 19/03/2021 à 17:56, Micah Kornfield a écrit :
For historical context golden files were first introduced so we could
verify backwards compatibility. I think the preferred method is still to
do "live" testing (i.e., having one implementation consume JSON and output a
binary file, then read the binary file with the second implementation and emit
JSON,
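The round trip Micah describes can be sketched with stdlib Python only. Both "implementations" below are just functions, and the length-prefixed encoding stands in for the real Arrow IPC format — this is illustrative, not actual Arrow code:

```python
import json
import struct

def producer_json_to_binary(json_text):
    # "Implementation A": consume JSON, emit a binary payload.
    # Length-prefixed UTF-8 stands in for the real Arrow IPC format.
    data = json_text.encode("utf-8")
    return struct.pack(">I", len(data)) + data

def consumer_binary_to_json(blob):
    # "Implementation B": read the binary payload, emit JSON again.
    (length,) = struct.unpack(">I", blob[:4])
    return blob[4:4 + length].decode("utf-8")

original = json.dumps({"shape": [2, 2], "data": [1, 2, 3, 4]})
round_tripped = consumer_binary_to_json(producer_json_to_binary(original))
# The integration check passes when both sides agree on the JSON.
agreed = json.loads(original) == json.loads(round_tripped)
```

In the real integration suite the two functions would be separate language implementations, with the comparison done on the re-emitted JSON.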
Hi,
Thanks a lot for bringing this up, Fernando. I had the same thought when I
first looked at the tensor implementation in Rust. Now it is a bit more
clear :)
So, if I understood correctly, the direction would be to declare a
"JSON-integration" equivalent for tensors and generate a set of "golden"
files.
Hi Fernando,
Le 16/03/2021 à 17:01, Fernando Herrera a écrit :
Thanks for the update. It would be interesting to add a centralized
plan for tensors in Arrow. It would make sharing data between packages
like numpy, ndarray, pytorch, and tensorflow really easy.
Those Python packages already
Hi Wes,
Thanks for the update. It would be interesting to add a centralized
plan for tensors in Arrow. It would make sharing data between packages
like numpy, ndarray, pytorch, and tensorflow really easy. Don't you think so?
Let me have a look at how the integration tests are created in Archery
so I
Hi Fernando, for clarity: there is no centralized planning in this
project. If a volunteer wants to do something and there are no
objections from other people, then they are free to go ahead and do
it. If there aren't any Jira issues about adding integration tests, it
would make sense to go ahead
Hi Fernando,
Currently there are no explicit plans to do it, but it would certainly be
useful if other implementations start implementing tensor IPC support.
One should start by defining a reference format (probably JSON) such as
exists for other IPC types:
Are there any plans to include integration testing for tensors in the
pipeline?
Thanks,
Fernando
On Mon, Mar 15, 2021 at 8:16 PM Antoine Pitrou wrote:
On Mon, 15 Mar 2021 19:48:22 +
Fernando Herrera wrote:
> Hi Neal,
>
> Thanks for the update and the link.
>
> I found that the project has these files for tensor checking
> https://github.com/apache/arrow-testing/tree/e8ce32338f2dfeca3a5126f7677bdee159604000/data/arrow-ipc-tensor-stream
These are fuzzing files. From the README in that directory:
> These files contain Arrow IPC tensor streams,
Hi Neal,
Thanks for the update and the link.
I found that the project has these files for tensor checking
https://github.com/apache/arrow-testing/tree/e8ce32338f2dfeca3a5126f7677bdee159604000/data/arrow-ipc-tensor-stream
So, if I understand correctly, for any application to be compatible
with
Hi Fernando,
https://arrow.apache.org/docs/status.html lists the implementation status
across languages, and that mostly reflects integration testing, I think
(though there may be some features listed as "implemented" that aren't
tested).
Neal
On Mon, Mar 15, 2021 at 5:00 AM Fernando Herrera <
Hi all,
Does anyone know what is the status for the dense and sparse tensor tests?
I was looking for a data file with a tensor created with the C++
implementation
but I couldn't find anything. Is anybody testing IPC for a tensor?
Thanks in advance,
Fernando