Could you resend your reply to
https://lists.apache.org/thread/5rpykkfoz416mq889pcpx9rwrrtjog60
on dev@ to connect the existing thread?
In
"Re: StreamReader" on Tue, 12 Jul 2022 10:01:00 +0200,
L Ait wrote:
> Thank you, I will look into that,
> The real problem is that I read data in chunks a
Hi,
How do you send the written data over the network? Do you
use a raw socket(2) and write(2)? If you use a raw socket, can
we wrap the raw socket with GUnixOutputStream[1]? We can
wrap the raw socket with g_unix_output_stream_new()[2], passing
the file descriptor of the raw socket.
[1] https://docs.gtk.o
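For illustration, a rough Python/pyarrow analogue of the same idea (the thread itself is about Arrow GLib, and the host/port below are made up): wrap the socket in an Arrow output stream and let the IPC writer use it, rather than calling write(2) on the raw file descriptor yourself.

import socket
import pyarrow as pa

# Hypothetical peer; assumes something is listening on localhost:9000.
batch = pa.RecordBatch.from_pydict({"x": [1, 2, 3]})

sock = socket.create_connection(("localhost", 9000))
sink = pa.output_stream(sock.makefile("wb"))   # Arrow output stream over the socket

with pa.ipc.new_stream(sink, batch.schema) as writer:
    writer.write_batch(batch)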
Hi David,
Are there any good examples for the first
section of your reference [1]: Controlling conversion to
pyarrow.Array with the __arrow_array__ protocol?
I find examples of creating an extension array using an extension
type with explicit code in test_
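For what it's worth, a minimal sketch of the protocol itself, with a made-up class name and data: any object that defines __arrow_array__ is converted through that method when pa.array() is called on it.

import pyarrow as pa

class MeasurementSeries:
    # Illustrative container class; the name and payload are hypothetical.
    def __init__(self, values):
        self.values = values

    def __arrow_array__(self, type=None):
        # Must return a pyarrow.Array (or ChunkedArray); honor the requested type.
        return pa.array(self.values, type=type)

arr = pa.array(MeasurementSeries([1.5, 2.5, None]))
print(arr.type)   # double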
Hello,
I integrated the Arrow library into a larger project, and was testing
exports/imports of the same tables to see whether it behaved well. Doing this, I
became aware that Arrow DURATION types were exported as INT64 (as the
corresponding number of µs, if I remember correctly) in the Parquet e
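A small sketch to reproduce the observation (whether writing a duration column is supported at all depends on the pyarrow version): Parquet itself has no DURATION logical type, so the column is stored with an INT64 physical type, and it only comes back as duration if the Arrow schema metadata stored in the file is honored when reading.

import pyarrow as pa
import pyarrow.parquet as pq

# duration[us] column; stored as INT64 in the Parquet file.
table = pa.table({"elapsed": pa.array([1_000_000, 2_500_000], type=pa.duration("us"))})
pq.write_table(table, "durations.parquet")

print(pq.read_schema("durations.parquet"))          # Arrow view of the file
print(pq.ParquetFile("durations.parquet").schema)   # underlying Parquet (physical) schema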
Is there any way to filter rows by remainder?
Something like call("mod", field_ref("id"), literal(10), literal(3)) to keep
rows where id % 10 == 3?
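One possible workaround, assuming your Arrow version has no dedicated "mod"/modulo kernel: derive the remainder from integer division (divide truncates for integer inputs) and filter on that.

import pyarrow as pa
import pyarrow.compute as pc

table = pa.table({"id": pa.array(range(20), type=pa.int64())})

ids = table["id"]
# remainder = id - (id // 10) * 10
remainder = pc.subtract(ids, pc.multiply(pc.divide(ids, 10), 10))
mask = pc.equal(remainder, 3)
print(table.filter(mask)["id"])   # ids where id % 10 == 3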
Thank you, I will look into that.
The real problem is that I read the data in chunks, and the end of a chunk is
truncated (not a complete line). I need to wait for the next chunk to
complete the line.
Is there a way you would suggest to process the chunks smoothly?
Thank you
On Fri, 8 Jul.
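A generic buffering sketch for the incomplete-line problem described above (not a specific Arrow API): keep the truncated tail of each chunk and prepend it to the next one, so only complete lines are processed. read_chunks below is a placeholder for however the chunks actually arrive.

def iter_complete_lines(read_chunks):
    buffer = b""
    for chunk in read_chunks:
        buffer += chunk
        *lines, buffer = buffer.split(b"\n")   # last element is the incomplete tail
        for line in lines:
            yield line
    if buffer:                                 # whatever remains after the last chunk
        yield buffer

# Lines split across chunk boundaries are reassembled:
chunks = [b"alpha\nbe", b"ta\ngam", b"ma\n"]
print(list(iter_complete_lines(chunks)))       # [b'alpha', b'beta', b'gamma']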