jecsand838 opened a new pull request, #7966:
URL: https://github.com/apache/arrow-rs/pull/7966
… Avro files
# Which issue does this PR close?
- Part of https://github.com/apache/arrow-rs/issues/4886
- Follow up to https://github.com/apache/arrow-rs/pull/7834
# Rati
zhuqi-lucas commented on code in PR #7962:
URL: https://github.com/apache/arrow-rs/pull/7962#discussion_r2217158575
##
arrow-ord/src/sort.rs:
##
@@ -4709,4 +4731,77 @@ mod tests {
assert_eq!(&sorted[0], &expected_struct_array);
}
+
+/// A simple, correct but
zhuqi-lucas commented on code in PR #7962:
URL: https://github.com/apache/arrow-rs/pull/7962#discussion_r2217158467
##
arrow-buffer/src/util/bit_iterator.rs:
##
@@ -323,4 +380,110 @@ mod tests {
let mask = &[223, 23];
BitIterator::new(mask, 17, 0);
}
+
+
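The truncated test above exercises `BitIterator` over a packed bitmask (`&[223, 23]` holds 16 bits, so an offset of 17 is out of range). As a hedged sketch, not the arrow-buffer implementation, a minimal LSB-first bit iterator over a byte mask looks like this; the function name `iter_bits` and the bounds assert are assumptions for illustration:

```rust
// Sketch: iterate `len` bits of a packed little-endian bitmask starting at
// `offset`. Out-of-range requests panic, mirroring the test's expectation.
fn iter_bits(mask: &[u8], offset: usize, len: usize) -> impl Iterator<Item = bool> + '_ {
    assert!(offset + len <= mask.len() * 8, "bit range out of bounds");
    // LSB-first within each byte, matching Arrow's bit-packed layout.
    (offset..offset + len).map(move |i| mask[i / 8] & (1u8 << (i % 8)) != 0)
}

fn main() {
    // 223 = 0b1101_1111: bit 5 (LSB-first) is the only zero bit in byte 0.
    let mask = [223u8, 23];
    let bits: Vec<bool> = iter_bits(&mask, 0, 8).collect();
    assert!(!bits[5]);
    assert_eq!(bits.iter().filter(|b| **b).count(), 7);
}
```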
zhuqi-lucas commented on PR #7962:
URL: https://github.com/apache/arrow-rs/pull/7962#issuecomment-3091882200
Latest result for the new implementation; it shows a small regression but is
still promising:
```shell
critcmp --filter "nulls to indices" fast_path_for_bit_map_scan main
group
zhuqi-lucas commented on PR #7962:
URL: https://github.com/apache/arrow-rs/pull/7962#issuecomment-3091846858
Thank you @alamb @Dandandan @jhorstmann for the review.
I addressed the comments in the latest PR and also added more thorough tests. Thanks!
--
This is an automated message from the Apache Git
zhuqi-lucas commented on code in PR #7962:
URL: https://github.com/apache/arrow-rs/pull/7962#discussion_r2217145627
##
arrow-ord/src/sort.rs:
##
@@ -178,44 +178,136 @@ where
}
}
-// partition indices into valid and null indices
-fn partition_validity(array: &dyn Array) -
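The diff above removes the old `partition_validity` helper, which split row indices into valid and null groups. As a hedged sketch of what such a helper does (the real arrow-ord function takes `&dyn Array` and consults its null buffer; an `Option` slice stands in here):

```rust
// Sketch: partition row indices into (valid, null) index lists,
// analogous to the removed arrow-ord helper.
fn partition_validity<T>(values: &[Option<T>]) -> (Vec<u32>, Vec<u32>) {
    let mut valid = Vec::new();
    let mut nulls = Vec::new();
    for (i, v) in values.iter().enumerate() {
        if v.is_some() {
            valid.push(i as u32);
        } else {
            nulls.push(i as u32);
        }
    }
    (valid, nulls)
}

fn main() {
    let (valid, nulls) = partition_validity(&[Some(3), None, Some(1), None]);
    assert_eq!(valid, vec![0, 2]);
    assert_eq!(nulls, vec![1, 3]);
}
```

The PR under discussion replaces this index-by-index scan with a faster bitmap-based path.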
zhuqi-lucas commented on code in PR #7962:
URL: https://github.com/apache/arrow-rs/pull/7962#discussion_r2217145748
##
arrow-ord/src/sort.rs:
##
@@ -178,44 +178,136 @@ where
}
}
-// partition indices into valid and null indices
-fn partition_validity(array: &dyn Array) -
kou commented on code in PR #3176:
URL: https://github.com/apache/arrow-adbc/pull/3176#discussion_r2217102143
##
docs/source/format/driver_manifests.rst:
##
@@ -0,0 +1,300 @@
+.. Licensed to the Apache Software Foundation (ASF) under one
+.. or more contributor license agreement
blacha commented on PR #47086:
URL: https://github.com/apache/arrow/pull/47086#issuecomment-3091392804
Thanks for all the discussion; I was unaware this would affect IPC as well.
Up until a few weeks ago GDAL did not have a `compression_level` parameter,
so all parquet files with comp
yu-iskw commented on PR #3174:
URL: https://github.com/apache/arrow-adbc/pull/3174#issuecomment-3091358486
We can also implement a kind of acceptance test that calls the BigQuery API only
if the environment variables for BigQuery are set. Indeed, I tested the changed
code with that approach on the lo
yu-iskw commented on PR #3174:
URL: https://github.com/apache/arrow-adbc/pull/3174#issuecomment-3091341131
@zeroshade Thank you for the feedback. I have updated the code at
https://github.com/apache/arrow-adbc/pull/3174/commits/47683ec893483afbeac6a123ffd11ec3090dd3f7
.
alexguo-db opened a new pull request, #3177:
URL: https://github.com/apache/arrow-adbc/pull/3177
## Motivation
In scenarios like PowerBI dataset refresh, if a query runs longer than the
OAuth token's expiration time (typically 1 hour for AAD tokens), the connection
fails. PowerBI onl
scovich commented on code in PR #7935:
URL: https://github.com/apache/arrow-rs/pull/7935#discussion_r2216465127
##
parquet-variant/src/builder.rs:
##
@@ -598,6 +599,49 @@ impl ParentState<'_> {
}
}
}
+
+// returns the beginning offset of buffer for
zeroshade commented on code in PR #7965:
URL: https://github.com/apache/arrow-rs/pull/7965#discussion_r2217006088
##
parquet-variant-compute/src/variant_get.rs:
##
@@ -177,4 +192,209 @@ mod test {
r#"{"inner_field": 1234}"#,
);
}
+
+/// Shredding:
zeroshade commented on issue #7895:
URL: https://github.com/apache/arrow-rs/issues/7895#issuecomment-3091018615
I'm in favor of @scovich's suggestion, and that is what I did for the Go
implementation along with my plan for defining the Canonical extension type.
The schema
```
zeroshade opened a new pull request, #3176:
URL: https://github.com/apache/arrow-adbc/pull/3176
With the driver manager implementations for C/C++, Go, Rust and Python
updated to utilize and leverage driver manifests, we should properly document
how manifests work and what the format is.
cashmand commented on code in PR #7965:
URL: https://github.com/apache/arrow-rs/pull/7965#discussion_r2216981871
##
parquet-variant-compute/src/variant_get.rs:
##
@@ -177,4 +192,209 @@ mod test {
r#"{"inner_field": 1234}"#,
);
}
+
+/// Shredding: e
veronica-m-ef commented on code in PR #7954:
URL: https://github.com/apache/arrow-rs/pull/7954#discussion_r2216923022
##
arrow-avro/src/reader/mod.rs:
##
@@ -221,12 +221,11 @@ impl ReaderBuilder {
}
fn make_record_decoder(&self, schema: &AvroSchema<'_>) ->
Result {
scovich commented on issue #7895:
URL: https://github.com/apache/arrow-rs/issues/7895#issuecomment-3090705369
> We would need this schema:
>
> ```
> STRUCT {
> metadata: BinaryView,
> value: BinaryView,
> typed_value: STRUCT {
> foo: Int64,
> bar: Int32
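The quoted schema can be illustrated with plain Rust types standing in for Arrow arrays (`BinaryView` becomes `Vec<u8>`). Only the field names `metadata`, `value`, `typed_value`, `foo`, and `bar` come from the quoted snippet; the struct names and the fully/partially shredded semantics shown here are assumptions for illustration:

```rust
// Sketch of one row under the shredded-variant schema quoted above.
#[derive(Debug)]
struct TypedValue {
    foo: Option<i64>,
    bar: Option<i32>,
}

#[derive(Debug)]
struct ShreddedVariantRow {
    metadata: Vec<u8>,               // variant metadata bytes (BinaryView)
    value: Option<Vec<u8>>,          // fallback variant bytes when not shredded
    typed_value: Option<TypedValue>, // strongly typed fields when shredded
}

fn main() {
    // A fully shredded row carries typed columns and no fallback `value`.
    let row = ShreddedVariantRow {
        metadata: vec![0x01],
        value: None,
        typed_value: Some(TypedValue { foo: Some(42), bar: None }),
    };
    assert!(row.value.is_none() && row.typed_value.is_some());
    println!("{row:?}");
}
```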
kou commented on issue #47127:
URL: https://github.com/apache/arrow/issues/47127#issuecomment-3090705259
@fvalenduc Could you open a new issue for it? You may need to wait for the R
package release. #46950 is the related issue for it.
kou commented on issue #47127:
URL: https://github.com/apache/arrow/issues/47127#issuecomment-3090701946
@lscheilling Could you open a new issue for it? FYI: #46959 is the related
issue for it.
scovich commented on code in PR #7965:
URL: https://github.com/apache/arrow-rs/pull/7965#discussion_r2216900571
##
parquet-variant-compute/src/variant_get.rs:
##
@@ -177,4 +192,209 @@ mod test {
r#"{"inner_field": 1234}"#,
);
}
+
+/// Shredding: ex
scovich commented on code in PR #7965:
URL: https://github.com/apache/arrow-rs/pull/7965#discussion_r2216895846
##
parquet-variant-compute/src/variant_get.rs:
##
@@ -177,4 +192,209 @@ mod test {
r#"{"inner_field": 1234}"#,
);
}
+
+/// Shredding: ex
alamb merged PR #7940:
URL: https://github.com/apache/arrow-rs/pull/7940
alamb commented on PR #7940:
URL: https://github.com/apache/arrow-rs/pull/7940#issuecomment-3090645347
Thanks again @brancz
alamb commented on code in PR #7954:
URL: https://github.com/apache/arrow-rs/pull/7954#discussion_r2216815522
##
arrow-avro/src/reader/mod.rs:
##
@@ -221,12 +221,11 @@ impl ReaderBuilder {
}
fn make_record_decoder(&self, schema: &AvroSchema<'_>) ->
Result {
-
alamb commented on issue #7715:
URL: https://github.com/apache/arrow-rs/issues/7715#issuecomment-3090538741
We are discussing reading shredded variants here;
- https://github.com/apache/arrow-rs/issues/7941
We are discussing writing shredded variants here:
- https://github.co
alamb commented on issue #7941:
URL: https://github.com/apache/arrow-rs/issues/7941#issuecomment-3090535697
There is a bunch more back and forth on the thread as well that might be
interesting
alamb commented on issue #7941:
URL: https://github.com/apache/arrow-rs/issues/7941#issuecomment-3090534867
@friendlymatthew in
https://github.com/apache/arrow-rs/pull/7915#discussion_r2203418536
Hi, how do we plan on storing `typed_value`s? Do we plan on encoding it as a
`Variant` a
alamb commented on issue #7941:
URL: https://github.com/apache/arrow-rs/issues/7941#issuecomment-3090534233
@alamb in https://github.com/apache/arrow-rs/pull/7915#discussion_r2203360483
> Shredded fields need a full blown variant builder, because they're
strongly typed and we need to
alamb commented on issue #7941:
URL: https://github.com/apache/arrow-rs/issues/7941#issuecomment-3090533015
@scovich and I were discussing other options here
https://github.com/apache/arrow-rs/pull/7915#discussion_r2202981997:
---
@scovich :
https://github.com/apache/arrow-rs
alamb commented on PR #7946:
URL: https://github.com/apache/arrow-rs/pull/7946#issuecomment-3090523533
> Is there any issue for implementing this? I would love to work on it
I think we are discussing reading shredded variants on
- https://github.com/apache/arrow-rs/issues/7941
veronica-m-ef commented on code in PR #7954:
URL: https://github.com/apache/arrow-rs/pull/7954#discussion_r2216802532
##
arrow-avro/src/reader/record.rs:
##
@@ -301,9 +301,23 @@ impl Decoder {
}
Codec::Uuid => Self::Uuid(Vec::with_capacity(DEFAULT_CAPAC
jecsand838 commented on code in PR #7954:
URL: https://github.com/apache/arrow-rs/pull/7954#discussion_r2214304249
##
arrow-avro/src/reader/record.rs:
##
@@ -301,9 +301,23 @@ impl Decoder {
}
Codec::Uuid => Self::Uuid(Vec::with_capacity(DEFAULT_CAPACITY
alamb commented on code in PR #7946:
URL: https://github.com/apache/arrow-rs/pull/7946#discussion_r2216801798
##
parquet-variant-compute/src/field_operations.rs:
##
@@ -0,0 +1,532 @@
+// Licensed to the Apache Software Foundation (ASF) under one
+// or more contributor license a
carpecodeum commented on PR #7946:
URL: https://github.com/apache/arrow-rs/pull/7946#issuecomment-3090517778
> Thank you for this PR @carpecodeum
>
> This is very cool
>
> I think there is already a `variant_get` implementation in
https://github.com/apache/arrow-rs/blob/d809f19
carpecodeum commented on code in PR #7946:
URL: https://github.com/apache/arrow-rs/pull/7946#discussion_r2216798559
##
parquet-variant-compute/src/field_operations.rs:
##
@@ -0,0 +1,532 @@
+// Licensed to the Apache Software Foundation (ASF) under one
+// or more contributor lic
carpecodeum commented on code in PR #7946:
URL: https://github.com/apache/arrow-rs/pull/7946#discussion_r2216798292
##
parquet-variant-compute/src/variant_array.rs:
##
@@ -154,6 +155,172 @@ impl VariantArray {
fn find_value_field(array: &StructArray) -> Option {
ar
jecsand838 commented on code in PR #7954:
URL: https://github.com/apache/arrow-rs/pull/7954#discussion_r2216795716
##
arrow-avro/src/reader/record.rs:
##
@@ -301,9 +301,23 @@ impl Decoder {
}
Codec::Uuid => Self::Uuid(Vec::with_capacity(DEFAULT_CAPACITY
alamb commented on code in PR #7946:
URL: https://github.com/apache/arrow-rs/pull/7946#discussion_r2216789293
##
parquet-variant-compute/src/field_operations.rs:
##
@@ -0,0 +1,532 @@
+// Licensed to the Apache Software Foundation (ASF) under one
+// or more contributor license a
alamb commented on code in PR #7946:
URL: https://github.com/apache/arrow-rs/pull/7946#discussion_r2216788578
##
parquet-variant-compute/src/variant_array.rs:
##
@@ -154,6 +155,172 @@ impl VariantArray {
fn find_value_field(array: &StructArray) -> Option {
array.co
CurtHagenlocher merged PR #3175:
URL: https://github.com/apache/arrow-adbc/pull/3175
jecsand838 commented on code in PR #7954:
URL: https://github.com/apache/arrow-rs/pull/7954#discussion_r2216783991
##
arrow-avro/src/codec.rs:
##
@@ -161,6 +161,66 @@ impl<'a> TryFrom<&Schema<'a>> for AvroField {
}
}
+/// Builder for an [`AvroField`]
+#[derive(Debug)]
+p
jecsand838 commented on code in PR #7954:
URL: https://github.com/apache/arrow-rs/pull/7954#discussion_r2216779091
##
arrow-avro/src/codec.rs:
##
@@ -161,6 +161,66 @@ impl<'a> TryFrom<&Schema<'a>> for AvroField {
}
}
+/// Builder for an [`AvroField`]
+#[derive(Debug)]
+p
jecsand838 commented on code in PR #7954:
URL: https://github.com/apache/arrow-rs/pull/7954#discussion_r2216766988
##
arrow-avro/src/reader/record.rs:
##
@@ -431,12 +422,18 @@ impl Decoder {
let nanos = (millis as i64) * 1_000_000;
builder.appen
Rich-T-kid commented on code in PR #7933:
URL: https://github.com/apache/arrow-rs/pull/7933#discussion_r2216752880
##
arrow-ord/src/cmp.rs:
##
@@ -232,6 +239,7 @@ fn compare_op(op: Op, lhs: &dyn Datum, rhs: &dyn Datum) ->
Result
Rich-T-kid commented on code in PR #7933:
URL: https://github.com/apache/arrow-rs/pull/7933#discussion_r2216753563
##
arrow-ord/src/cmp.rs:
##
@@ -855,4 +863,122 @@ mod tests {
neq(&col.slice(0, col.len() - 1), &col.slice(1, col.len() -
1)).unwrap();
}
+
+#[
Rich-T-kid commented on code in PR #7933:
URL: https://github.com/apache/arrow-rs/pull/7933#discussion_r2216752202
##
arrow-ord/src/cmp.rs:
##
@@ -224,6 +223,14 @@ fn compare_op(op: Op, lhs: &dyn Datum, rhs: &dyn Datum) ->
Result
alamb commented on code in PR #7965:
URL: https://github.com/apache/arrow-rs/pull/7965#discussion_r2216749153
##
parquet-variant-compute/src/variant_get.rs:
##
@@ -177,4 +192,209 @@ mod test {
r#"{"inner_field": 1234}"#,
);
}
+
+/// Shredding: extr
jackyhu-db commented on PR #3175:
URL: https://github.com/apache/arrow-adbc/pull/3175#issuecomment-3090431479
> This change will obviously work, but I think it's pretty confusing. Can we
instead put a `virtual int DefaultQueryTimeoutSeconds { get; }` on
`Hive2Connection` and then override it
Rich-T-kid commented on code in PR #7933:
URL: https://github.com/apache/arrow-rs/pull/7933#discussion_r2216748841
##
arrow-arith/src/aggregate.rs:
##
@@ -17,7 +17,7 @@
//! Defines aggregations over Arrow arrays.
-use arrow_array::cast::*;
+use arrow_array::cast::{*};
Revi
toddmeng-db commented on code in PR #3140:
URL: https://github.com/apache/arrow-adbc/pull/3140#discussion_r2213774571
##
csharp/src/Drivers/Databricks/DatabricksStatement.cs:
##
@@ -64,10 +64,53 @@ public DatabricksStatement(DatabricksConnection connection)
enablePK
toddmeng-db commented on code in PR #3140:
URL: https://github.com/apache/arrow-adbc/pull/3140#discussion_r2214598029
##
csharp/test/Drivers/Databricks/E2E/StatementTests.cs:
##
@@ -1279,5 +1279,80 @@ public async Task
OlderDBRVersion_ShouldSetSchemaViaUseStatement()
alamb commented on code in PR #7965:
URL: https://github.com/apache/arrow-rs/pull/7965#discussion_r2216573737
##
parquet-variant-compute/src/variant_get.rs:
##
@@ -177,4 +192,209 @@ mod test {
r#"{"inner_field": 1234}"#,
);
}
+
+/// Shredding: extr
jhorstmann commented on code in PR #7962:
URL: https://github.com/apache/arrow-rs/pull/7962#discussion_r2216741483
##
arrow-ord/src/sort.rs:
##
@@ -178,44 +178,136 @@ where
}
}
-// partition indices into valid and null indices
-fn partition_validity(array: &dyn Array) ->
amoeba commented on issue #40735:
URL: https://github.com/apache/arrow/issues/40735#issuecomment-3090390884
That seems fine and fair @pitrou.
jecsand838 commented on code in PR #7954:
URL: https://github.com/apache/arrow-rs/pull/7954#discussion_r2216696821
##
arrow-avro/src/reader/record.rs:
##
@@ -344,7 +332,10 @@ impl Decoder {
Self::Decimal256(_, _, _, builder) =>
builder.append_value(i256::ZERO),
Dandandan commented on code in PR #7962:
URL: https://github.com/apache/arrow-rs/pull/7962#discussion_r2216677908
##
arrow-ord/src/sort.rs:
##
@@ -178,44 +178,136 @@ where
}
}
-// partition indices into valid and null indices
-fn partition_validity(array: &dyn Array) ->
Samyak2 commented on code in PR #7965:
URL: https://github.com/apache/arrow-rs/pull/7965#discussion_r2216671411
##
parquet-variant-compute/src/variant_get.rs:
##
@@ -177,4 +192,209 @@ mod test {
r#"{"inner_field": 1234}"#,
);
}
+
+/// Shredding: ex
Mandukhai-Alimaa commented on issue #436:
URL: https://github.com/apache/arrow-go/issues/436#issuecomment-3090308881
take
carpecodeum commented on code in PR #7946:
URL: https://github.com/apache/arrow-rs/pull/7946#discussion_r2216626412
##
parquet-variant-compute/src/field_operations.rs:
##
@@ -0,0 +1,532 @@
+// Licensed to the Apache Software Foundation (ASF) under one
+// or more contributor lic
birschick-bq commented on code in PR #2949:
URL: https://github.com/apache/arrow-adbc/pull/2949#discussion_r2216612783
##
csharp/src/Telemetry/Traces/Exporters/ExportersBuilder.cs:
##
@@ -0,0 +1,207 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one or more
+
birschick-bq commented on code in PR #2949:
URL: https://github.com/apache/arrow-adbc/pull/2949#discussion_r2216611639
##
csharp/src/Telemetry/Traces/Exporters/FileExporter/TracingFile.cs:
##
@@ -0,0 +1,219 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one o
veronica-m-ef commented on code in PR #7954:
URL: https://github.com/apache/arrow-rs/pull/7954#discussion_r2216601845
##
arrow-avro/src/reader/record.rs:
##
@@ -301,9 +301,23 @@ impl Decoder {
}
Codec::Uuid => Self::Uuid(Vec::with_capacity(DEFAULT_CAPAC
alamb commented on PR #7961:
URL: https://github.com/apache/arrow-rs/pull/7961#issuecomment-3090226051
So I really think it is important, for testing purposes, to be able to compare
the logical value the Variant encodes. You can see that almost all tests do
this, and as we move into shredding
alamb commented on PR #7961:
URL: https://github.com/apache/arrow-rs/pull/7961#issuecomment-3090220267
> I agree that whatever we do should not be merely physical byte
comparisons... but what does logical equality even mean? As in, if two variant
objects compare logically equal, what can I
alamb commented on code in PR #7962:
URL: https://github.com/apache/arrow-rs/pull/7962#discussion_r2216588076
##
arrow-ord/src/sort.rs:
##
@@ -178,44 +178,136 @@ where
}
}
-// partition indices into valid and null indices
-fn partition_validity(array: &dyn Array) -> (Vec
alamb commented on PR #7965:
URL: https://github.com/apache/arrow-rs/pull/7965#issuecomment-3090174790
FYI @Samyak2 @scovich @friendlymatthew and @klion26 and @carpecodeum as I
think you are interested in this feature
alamb commented on issue #7941:
URL: https://github.com/apache/arrow-rs/issues/7941#issuecomment-3090173185
Here is a practical suggestion on how to make progress on variant shredding
-- we start working out how it would work for a simple example
I have written some tests here that ma
jduo commented on code in PR #2949:
URL: https://github.com/apache/arrow-adbc/pull/2949#discussion_r2216571309
##
csharp/src/Telemetry/Traces/Exporters/FileExporter/TracingFile.cs:
##
@@ -0,0 +1,219 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one or more
+
alamb opened a new pull request, #7965:
URL: https://github.com/apache/arrow-rs/pull/7965
# Which issue does this PR close?
- Part of https://github.com/apache/arrow-rs/issues/6736
- Part of https://github.com/apache/arrow-rs/issues/7941
# Rationale for this change
In
jackyhu-db opened a new pull request, #3175:
URL: https://github.com/apache/arrow-adbc/pull/3175
## Motivation
Currently, the default `QueryTimeoutSeconds` is **60s** (set by
`Hive2Server2Connection`
[here](https://github.com/apache/arrow-adbc/blob/main/csharp/src/Drivers/Apache/Hive
jduo commented on code in PR #2949:
URL: https://github.com/apache/arrow-adbc/pull/2949#discussion_r2216549385
##
csharp/src/Telemetry/Traces/Exporters/ExportersBuilder.cs:
##
@@ -0,0 +1,207 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one or more
+ * contr
fvalenduc commented on issue #47127:
URL: https://github.com/apache/arrow/issues/47127#issuecomment-3090049020
I tried to install the arrow package with devtools using the
apache-arrow-21.0.0 tag and it failed like this:
** testing if installed package can be loaded from temporary locatio
scovich commented on code in PR #7943:
URL: https://github.com/apache/arrow-rs/pull/7943#discussion_r2216453780
##
parquet-variant/src/variant/object.rs:
##
@@ -387,6 +389,38 @@ impl<'m, 'v> VariantObject<'m, 'v> {
}
}
+// Custom implementation of PartialEq for variant o
scovich commented on code in PR #7943:
URL: https://github.com/apache/arrow-rs/pull/7943#discussion_r2216451926
##
parquet-variant/src/variant/object.rs:
##
@@ -387,6 +389,31 @@ impl<'m, 'v> VariantObject<'m, 'v> {
}
}
+impl<'m, 'v> PartialEq for VariantObject<'m, 'v> {
scovich commented on code in PR #7943:
URL: https://github.com/apache/arrow-rs/pull/7943#discussion_r2216449041
##
parquet-variant/src/variant/object.rs:
##
@@ -387,6 +389,31 @@ impl<'m, 'v> VariantObject<'m, 'v> {
}
}
+impl<'m, 'v> PartialEq for VariantObject<'m, 'v> {
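The `PartialEq` impl under review, and the "logical equality" thread above, hinge on comparing objects by their field-name/value pairs rather than by physical byte layout. A hedged sketch of that idea over plain tuples (the function `logically_eq` is an assumption for illustration, not the arrow-rs API):

```rust
use std::collections::BTreeMap;

// Sketch: two "objects" are logically equal when they hold the same
// field-name/value pairs, regardless of the order fields were encoded in.
fn logically_eq(a: &[(&str, i64)], b: &[(&str, i64)]) -> bool {
    let a: BTreeMap<_, _> = a.iter().copied().collect();
    let b: BTreeMap<_, _> = b.iter().copied().collect();
    a == b
}

fn main() {
    // Same fields, different encoding order: logically equal.
    assert!(logically_eq(&[("x", 1), ("y", 2)], &[("y", 2), ("x", 1)]));
    // Same field name, different value: not equal.
    assert!(!logically_eq(&[("x", 1)], &[("x", 2)]));
}
```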
zeroshade merged PR #3150:
URL: https://github.com/apache/arrow-adbc/pull/3150
scovich commented on PR #7961:
URL: https://github.com/apache/arrow-rs/pull/7961#issuecomment-3089972179
> While reviewing this I was thinking maybe we should revisit equality
>
> I think what we are doing is trying to make `Variant::eq` compare whether
the Variants are _logically_ equa
zeroshade commented on code in PR #3174:
URL: https://github.com/apache/arrow-adbc/pull/3174#discussion_r2216383547
##
go/adbc/driver/bigquery/driver.go:
##
@@ -77,6 +77,49 @@ const (
AccessTokenEndpoint = "https://accounts.google.com/o/oauth2/token";
AccessT
alamb commented on code in PR #7935:
URL: https://github.com/apache/arrow-rs/pull/7935#discussion_r2216382026
##
parquet-variant/src/builder.rs:
##
@@ -1317,7 +1414,15 @@ impl<'a> ObjectBuilder<'a> {
/// This is to ensure that the object is always finalized before its parent
b
alamb commented on PR #7911:
URL: https://github.com/apache/arrow-rs/pull/7911#issuecomment-3089948663
This one is now ready for review. I am quite pleased it already shows some
benchmarks going 30% faster
- https://github.com/apache/arrow-rs/pull/7911#issuecomment-3089911559
Along
alamb commented on PR #7935:
URL: https://github.com/apache/arrow-rs/pull/7935#issuecomment-3089923144
🤖: Benchmark completed
Details
```
group
7899-avoid-extra-allocation-in-object-buildermain
---
alamb commented on PR #7935:
URL: https://github.com/apache/arrow-rs/pull/7935#issuecomment-3089911780
🤖 `./gh_compare_arrow.sh` [Benchmark
Script](https://github.com/alamb/datafusion-benchmarking/blob/main/gh_compare_arrow.sh)
Running
Linux aal-dev 6.11.0-1016-gcp #16~24.04.1-Ubuntu SMP
alamb commented on PR #7911:
URL: https://github.com/apache/arrow-rs/pull/7911#issuecomment-3089911559
🤖: Benchmark completed
Details
```
group
alamb_append_variant_builder main
-
alamb commented on PR #7911:
URL: https://github.com/apache/arrow-rs/pull/7911#issuecomment-3089900939
🤖 `./gh_compare_arrow.sh` [Benchmark
Script](https://github.com/alamb/datafusion-benchmarking/blob/main/gh_compare_arrow.sh)
Running
Linux aal-dev 6.11.0-1016-gcp #16~24.04.1-Ubuntu SMP
alamb commented on code in PR #7911:
URL: https://github.com/apache/arrow-rs/pull/7911#discussion_r2216348377
##
parquet-variant-compute/src/variant_array_builder.rs:
##
@@ -55,9 +55,14 @@ use std::sync::Arc;
/// };
/// builder.append_variant_buffers(&metadata, &value);
///
+
alamb merged PR #7963:
URL: https://github.com/apache/arrow-rs/pull/7963
alamb commented on PR #7963:
URL: https://github.com/apache/arrow-rs/pull/7963#issuecomment-3089894974
In order to keep the CI clean, I am going to merge this without review
alamb commented on PR #7911:
URL: https://github.com/apache/arrow-rs/pull/7911#issuecomment-3089880742
🤖: Benchmark completed
Details
```
group
alamb_append_variant_builder main
-
codephage2020 commented on code in PR #7956:
URL: https://github.com/apache/arrow-rs/pull/7956#discussion_r2216344593
##
parquet-variant/src/variant/metadata.rs:
##
@@ -240,28 +240,23 @@ impl<'m> VariantMetadata<'m> {
let value_buffer =
string_from_
alamb commented on PR #7911:
URL: https://github.com/apache/arrow-rs/pull/7911#issuecomment-3089864104
🤖 `./gh_compare_arrow.sh` [Benchmark
Script](https://github.com/alamb/datafusion-benchmarking/blob/main/gh_compare_arrow.sh)
Running
Linux aal-dev 6.11.0-1016-gcp #16~24.04.1-Ubuntu SMP
alamb commented on code in PR #7911:
URL: https://github.com/apache/arrow-rs/pull/7911#discussion_r2216344784
##
parquet-variant-compute/src/from_json.rs:
##
@@ -41,10 +40,10 @@ pub fn batch_json_string_to_variant(input: &ArrayRef) ->
Result
criccomini commented on issue #368:
URL:
https://github.com/apache/arrow-rs-object-store/issues/368#issuecomment-3089855509
Just came here to say we're hitting:
```
thread 'tokio-runtime-worker' panicked at
/root/.cargo/git/checkouts/slatedb-a6e73982df30678a/2fe991a/slatedb/src/co
alamb opened a new issue, #7964:
URL: https://github.com/apache/arrow-rs/issues/7964
**Is your feature request related to a problem or challenge? Please describe
what you are trying to do.**
In a quest to have the fastest and most efficient Variant implementation I
would like to avoi
alamb merged PR #7958:
URL: https://github.com/apache/arrow-rs/pull/7958
alamb closed issue #7947: [Variant] remove VariantMetadata::dictionary_size
URL: https://github.com/apache/arrow-rs/issues/7947
alamb commented on PR #7953:
URL: https://github.com/apache/arrow-rs/pull/7953#issuecomment-3089829118
Thanks again @friendlymatthew
alamb merged PR #7953:
URL: https://github.com/apache/arrow-rs/pull/7953