viirya commented on issue #3215:
URL: https://github.com/apache/arrow-rs/issues/3215#issuecomment-1331769380
If your IPC payload is generated by the apache-arrow NPM package function
`tableToIPC`, the buffer sizes are produced by that package, not by arrow-rs. The
IPC reader just reads the provided buffer
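The point above — that the reader trusts whatever buffer lengths the payload declares — can be illustrated with a minimal, crate-free sketch. This is a hypothetical length-prefixed format, not the actual Arrow IPC layout or arrow-rs code:

```rust
// Minimal sketch: a reader trusts the buffer length declared in the
// payload header. If the producer (e.g. `tableToIPC`) writes a length
// larger than the bytes it actually provides, the mismatch surfaces in
// the reader even though the reader itself is not at fault.
// Hypothetical format for illustration only.
fn read_declared_buffer(payload: &[u8]) -> Result<&[u8], String> {
    if payload.len() < 4 {
        return Err("payload too short for length prefix".to_string());
    }
    let declared =
        u32::from_le_bytes([payload[0], payload[1], payload[2], payload[3]]) as usize;
    payload
        .get(4..4 + declared)
        .ok_or_else(|| format!("declared length {declared} exceeds payload"))
}

fn main() {
    // Producer declares 3 bytes and provides 3 bytes: ok.
    assert_eq!(
        read_declared_buffer(&[3, 0, 0, 0, 10, 20, 30]).unwrap(),
        [10, 20, 30]
    );
    // Producer declares 8 bytes but only 2 follow: the reader reports it.
    assert!(read_declared_buffer(&[8, 0, 0, 0, 10, 20]).is_err());
}
```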
jackwener commented on code in PR #4365:
URL: https://github.com/apache/arrow-datafusion/pull/4365#discussion_r1035631504
##
datafusion/optimizer/src/push_down_filter.rs:
##
@@ -403,94 +294,90 @@ fn extract_or_clause(expr: &Expr, schema_columns:
&HashSet) -> Option,
plan:
mingmwang commented on code in PR #4365:
URL: https://github.com/apache/arrow-datafusion/pull/4365#discussion_r1035626623
##
datafusion/optimizer/src/push_down_filter.rs:
##
@@ -403,94 +294,90 @@ fn extract_or_clause(expr: &Expr, schema_columns:
&HashSet) -> Option,
plan:
jackwener commented on PR #4365:
URL:
https://github.com/apache/arrow-datafusion/pull/4365#issuecomment-1331756958
All followup enhancement in #4433
--
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go
jackwener opened a new issue, #4433:
URL: https://github.com/apache/arrow-datafusion/issues/4433
**Is your feature request related to a problem or challenge? Please describe
what you are trying to do.**
- support push_down_filter when meeting a Window
- support SEMI/ANTI JOIN push_down
ursabot commented on PR #14770:
URL: https://github.com/apache/arrow/pull/14770#issuecomment-1331755566
Benchmark runs are scheduled for baseline =
ccb68afedf00a064c280220f480f3a639cce28f6 and contender =
0f66b714860f25ef711c39ee9cb068a70b302c69.
0f66b714860f25ef711c39ee9cb068a70b302c69 is
tustvold commented on code in PR #4400:
URL: https://github.com/apache/arrow-datafusion/pull/4400#discussion_r1035620730
##
datafusion/expr/src/type_coercion/binary.rs:
##
@@ -287,8 +287,8 @@ fn get_wider_decimal_type(
(DataType::Decimal128(p1, s1), DataType::Decimal128
jackwener commented on code in PR #4365:
URL: https://github.com/apache/arrow-datafusion/pull/4365#discussion_r1035613833
##
datafusion/optimizer/src/push_down_filter.rs:
##
@@ -500,302 +387,359 @@ fn optimize_join(
// vector will contain only join keys (without additi
retikulum commented on issue #4386:
URL:
https://github.com/apache/arrow-datafusion/issues/4386#issuecomment-1331748181
Hi. I added this on purpose (but without knowing it is extremely expensive)
to pass the `test_dictionary_type_to_array_coersion` test case. The following error
was generated
jackwener commented on code in PR #4365:
URL: https://github.com/apache/arrow-datafusion/pull/4365#discussion_r1035609444
##
datafusion/optimizer/src/push_down_filter.rs:
##
@@ -403,94 +294,90 @@ fn extract_or_clause(expr: &Expr, schema_columns:
&HashSet) -> Option,
plan:
jackwener commented on code in PR #4365:
URL: https://github.com/apache/arrow-datafusion/pull/4365#discussion_r1035611930
##
datafusion/optimizer/src/push_down_filter.rs:
##
@@ -500,302 +387,359 @@ fn optimize_join(
// vector will contain only join keys (without additi
mingmwang commented on code in PR #4365:
URL: https://github.com/apache/arrow-datafusion/pull/4365#discussion_r1035609575
##
datafusion/optimizer/src/push_down_filter.rs:
##
@@ -500,302 +387,359 @@ fn optimize_join(
// vector will contain only join keys (without additi
mingmwang commented on code in PR #4365:
URL: https://github.com/apache/arrow-datafusion/pull/4365#discussion_r1035603648
##
datafusion/optimizer/src/push_down_filter.rs:
##
@@ -403,94 +294,90 @@ fn extract_or_clause(expr: &Expr, schema_columns:
&HashSet) -> Option,
plan:
mingmwang commented on PR #4365:
URL:
https://github.com/apache/arrow-datafusion/pull/4365#issuecomment-1331709722
Except for the LogicalPlan::Window, the others LGTM.
liukun4515 commented on code in PR #4377:
URL: https://github.com/apache/arrow-datafusion/pull/4377#discussion_r1035549338
##
datafusion/core/src/physical_plan/joins/hash_join.rs:
##
@@ -2306,23 +2162,36 @@ mod tests {
Ok(())
}
+fn build_semi_anti_left_table(
ygf11 commented on issue #4389:
URL:
https://github.com/apache/arrow-datafusion/issues/4389#issuecomment-1331669666
> If we can change the pub on: Vec<(column,column)> to option, we
don't need to do the https://github.com/apache/arrow-datafusion/pull/4353
specifically for the expr in the J
wjones127 opened a new pull request, #3236:
URL: https://github.com/apache/arrow-rs/pull/3236
# Which issue does this PR close?
Closes #3235.
# Rationale for this change
The `copy_if_not_exist` function was not tested, and didn't pass the test
when enabled. It needed to
wjones127 opened a new issue, #3235:
URL: https://github.com/apache/arrow-rs/issues/3235
**Describe the bug**
An error in the GCP `copy_if_not_exist` was reported upstream in delta-rs:
https://github.com/delta-io/delta-rs/issues/878#issue-1404449207
```
PyDeltaTableError: Fa
HaoYang670 opened a new pull request, #4432:
URL: https://github.com/apache/arrow-datafusion/pull/4432
Signed-off-by: remzi <1371656737...@gmail.com>
# Which issue does this PR close?
Closes #4431 .
# Rationale for this change
# What changes are inc
HaoYang670 opened a new issue, #4431:
URL: https://github.com/apache/arrow-datafusion/issues/4431
**Is your feature request related to a problem or challenge? Please describe
what you are trying to do.**
https://github.com/apache/arrow-datafusion/blob/49166ea55f317722ab7a37fbfc253bcd497c
ursabot commented on PR #14768:
URL: https://github.com/apache/arrow/pull/14768#issuecomment-1331653701
Benchmark runs are scheduled for baseline =
b1bcd6f3f17ceee958fae6905185a99e1307e6a7 and contender =
ccb68afedf00a064c280220f480f3a639cce28f6.
ccb68afedf00a064c280220f480f3a639cce28f6 is
liukun4515 commented on code in PR #4377:
URL: https://github.com/apache/arrow-datafusion/pull/4377#discussion_r1035533851
##
datafusion/core/src/physical_plan/joins/hash_join.rs:
##
@@ -2306,23 +2162,36 @@ mod tests {
Ok(())
}
+fn build_semi_anti_left_table(
aarashy commented on issue #3215:
URL: https://github.com/apache/arrow-rs/issues/3215#issuecomment-1331649397
I removed the unwraps here https://github.com/apache/arrow-rs/pull/3232
I have some bytes which reproduce this error, but the data is private. The
bytes were the result of the
liukun4515 commented on code in PR #4377:
URL: https://github.com/apache/arrow-datafusion/pull/4377#discussion_r1035532240
##
datafusion/core/src/physical_plan/joins/hash_join.rs:
##
@@ -2306,23 +2162,36 @@ mod tests {
Ok(())
}
+fn build_semi_anti_left_table(
jackwener commented on PR #4365:
URL:
https://github.com/apache/arrow-datafusion/pull/4365#issuecomment-1331646328
I have added it in a UT.
jackwener commented on PR #4429:
URL:
https://github.com/apache/arrow-datafusion/pull/4429#issuecomment-1331644695
Agree with @HaoYang670, it looks like this should be fixed in sqlparser-rs.
HaoYang670 commented on PR #4429:
URL:
https://github.com/apache/arrow-datafusion/pull/4429#issuecomment-1331635067
> Hi @HaoYang670 please check the PR But tbh the optimizer doesn't respect
errors now so the error message looks like
>
> ```
> DataFusion CLI v14.0.0
> ❯ explain
wgtmac commented on PR #14742:
URL: https://github.com/apache/arrow/pull/14742#issuecomment-1331634459
I have addressed your comment, and the unsuccessful CI checks are unrelated
to my change. Can you please take a look again? @emkornfield
mingmwang commented on PR #4365:
URL:
https://github.com/apache/arrow-datafusion/pull/4365#issuecomment-1331632285
Before this PR, there was a global state which helped avoid duplicate
Filters being generated and pushed down.
Now the global state is removed. Need to double-confirm th
liukun4515 commented on code in PR #4377:
URL: https://github.com/apache/arrow-datafusion/pull/4377#discussion_r1035513158
##
datafusion/core/src/physical_plan/joins/hash_join.rs:
##
@@ -1440,44 +1181,150 @@ fn equal_rows(
err.unwrap_or(Ok(res))
}
-// Produces a batch fo
liukun4515 commented on PR #4411:
URL:
https://github.com/apache/arrow-datafusion/pull/4411#issuecomment-1331624320
cc @Dandandan if it looks good to you, I will merge this.
wjones127 opened a new pull request, #3234:
URL: https://github.com/apache/arrow-rs/pull/3234
# Which issue does this PR close?
Closes #3233.
# Rationale for this change
Bumping up the size of the test data as well, so it's easier to catch this.
However, I think the loc
wjones127 opened a new issue, #3233:
URL: https://github.com/apache/arrow-rs/issues/3233
**Describe the bug**
Our multi-part upload parts are too small for the AWS API's liking.
Currently, it is using 5,000,000-byte parts, but the minimum is either 5 MB or 5
MiB (not sure).
Examp
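The MB-versus-MiB gap described in #3233 is easy to quantify; a crate-free sketch of the arithmetic (the 5 MiB figure matches what AWS documents as the S3 multipart minimum, but treat that as an assumption here, consistent with the issue's "not sure"):

```rust
// A "5 MB" part (decimal, 10^6-based) falls short of a 5 MiB
// (binary, 2^20-based) minimum by about 243 KB.
const CURRENT_PART: usize = 5_000_000; // bytes used today
const FIVE_MB: usize = 5 * 1_000_000; // decimal megabytes
const FIVE_MIB: usize = 5 * 1024 * 1024; // binary mebibytes = 5_242_880

fn main() {
    assert_eq!(CURRENT_PART, FIVE_MB);
    // Too small if the limit is expressed in MiB.
    assert!(CURRENT_PART < FIVE_MIB);
    println!("shortfall: {} bytes", FIVE_MIB - CURRENT_PART); // 242880
}
```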
liukun4515 commented on issue #4389:
URL:
https://github.com/apache/arrow-datafusion/issues/4389#issuecomment-1331621205
😭, I am also confused about why we split the join into `join` and `crossjoin` in the
logical phase. I think we can combine the two and just add a
`crossjoin` join_type f
mingmwang commented on PR #4365:
URL:
https://github.com/apache/arrow-datafusion/pull/4365#issuecomment-1331618851
Could you please also modify the UT `optimize_plan()` method, let the
rule run twice, and see what happens?
```
fn optimize_plan(plan: &LogicalPlan) -> Log
liukun4515 commented on issue #4389:
URL:
https://github.com/apache/arrow-datafusion/issues/4389#issuecomment-1331617846
Can we change the logical plan of join to match Presto or Doris, and extract the
`on condition` into an `Option`?
If we can change the `pub on: Vec<(column,column)>,` to o
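The shape being proposed can be sketched crate-free; names and types below are invented for illustration, not DataFusion's actual `Join` plan node:

```rust
// Hypothetical sketch: instead of a mandatory equi-join key list
// (`on: Vec<(Column, Column)>`), the join carries an optional condition,
// so a cross join is simply a Join with `condition: None`, and equi-keys
// can be extracted from the predicate during planning.
#[derive(Debug)]
#[allow(dead_code)]
enum JoinType {
    Inner,
    Left,
    Right,
}

#[derive(Debug)]
struct Join {
    join_type: JoinType,
    // `None` expresses a cross join; `Some` holds an arbitrary predicate.
    condition: Option<String>, // stand-in for a real expression type
}

fn is_cross_join(join: &Join) -> bool {
    join.condition.is_none()
}

fn main() {
    let cross = Join { join_type: JoinType::Inner, condition: None };
    let equi = Join {
        join_type: JoinType::Inner,
        condition: Some("a.id = b.id".into()),
    };
    assert!(is_cross_join(&cross));
    assert!(!is_cross_join(&equi));
}
```

This mirrors the quoted Spark `Join` case class, where `condition` is likewise an `Option[Expression]`.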
liukun4515 commented on issue #4389:
URL:
https://github.com/apache/arrow-datafusion/issues/4389#issuecomment-1331614793
> equi_preds
in the spark just the
```
case class Join(
left: LogicalPlan,
right: LogicalPlan,
joinType: JoinType,
condition:
aarashy opened a new pull request, #3232:
URL: https://github.com/apache/arrow-rs/pull/3232
# Which issue does this PR close?
Addresses part of https://github.com/apache/arrow-rs/issues/3215, but there
is a separate mystery at play - what types of inputs were triggering panics
mingmwang commented on PR #4365:
URL:
https://github.com/apache/arrow-datafusion/pull/4365#issuecomment-1331609471
> > You can try this: select (a + b) as c, count(*) from Table_A group by 1
>
> ```rust
> #[test]
> fn push_down_filter_groupby_expr_contains_alias() {
> let
jackwener commented on code in PR #4365:
URL: https://github.com/apache/arrow-datafusion/pull/4365#discussion_r1035499731
##
datafusion/optimizer/src/push_down_filter.rs:
##
@@ -500,302 +387,336 @@ fn optimize_join(
// vector will contain only join keys (without additi
liukun4515 commented on code in PR #4400:
URL: https://github.com/apache/arrow-datafusion/pull/4400#discussion_r1035496939
##
datafusion/sql/src/planner.rs:
##
@@ -3213,7 +3213,7 @@ mod tests {
let sql = "SELECT CAST(10 AS DECIMAL(0))";
let err = logica
liukun4515 commented on code in PR #4400:
URL: https://github.com/apache/arrow-datafusion/pull/4400#discussion_r1035496657
##
datafusion/sql/src/utils.rs:
##
@@ -522,9 +522,12 @@ pub(crate) fn make_decimal_type(
};
// Arrow decimal is i128 meaning 38 maximum decimal
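The validation under discussion can be sketched without any crates; this is a simplified stand-in, not DataFusion's actual `make_decimal_type`:

```rust
// Arrow's Decimal128 is backed by an i128, which bounds precision at 38
// digits. A precision of 0 (as in `CAST(10 AS DECIMAL(0))`) should come
// back as a planner error rather than a panic. Simplified sketch only.
const MAX_DECIMAL128_PRECISION: u64 = 38;

fn make_decimal_type(precision: u64, scale: u64) -> Result<(u64, u64), String> {
    if precision == 0 || precision > MAX_DECIMAL128_PRECISION || scale > precision {
        Err(format!(
            "Decimal(precision = {precision}, scale = {scale}) is not supported"
        ))
    } else {
        Ok((precision, scale))
    }
}

fn main() {
    assert!(make_decimal_type(0, 0).is_err()); // the DECIMAL(0) case
    assert!(make_decimal_type(38, 10).is_ok());
    assert!(make_decimal_type(39, 0).is_err()); // beyond i128 capacity
}
```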
liukun4515 commented on code in PR #4400:
URL: https://github.com/apache/arrow-datafusion/pull/4400#discussion_r1035493777
##
datafusion/sql/src/utils.rs:
##
@@ -522,9 +522,12 @@ pub(crate) fn make_decimal_type(
};
// Arrow decimal is i128 meaning 38 maximum decimal
jackwener opened a new issue, #4430:
URL: https://github.com/apache/arrow-datafusion/issues/4430
**Describe the bug**
A clear and concise description of what the bug is.
**To Reproduce**
run in optimizer integration-test
```rust
#[test]
fn push_down_filter_groupby_ex
HaoYang670 commented on code in PR #4400:
URL: https://github.com/apache/arrow-datafusion/pull/4400#discussion_r1035494671
##
datafusion/sql/src/planner.rs:
##
@@ -3213,7 +3213,7 @@ mod tests {
let sql = "SELECT CAST(10 AS DECIMAL(0))";
let err = logica
liukun4515 commented on code in PR #4400:
URL: https://github.com/apache/arrow-datafusion/pull/4400#discussion_r1035493298
##
datafusion/sql/src/planner.rs:
##
@@ -3213,7 +3213,7 @@ mod tests {
let sql = "SELECT CAST(10 AS DECIMAL(0))";
let err = logica
liukun4515 commented on code in PR #4400:
URL: https://github.com/apache/arrow-datafusion/pull/4400#discussion_r1035492782
##
datafusion/sql/src/planner.rs:
##
@@ -3213,7 +3213,7 @@ mod tests {
let sql = "SELECT CAST(10 AS DECIMAL(0))";
let err = logica
liukun4515 commented on code in PR #4400:
URL: https://github.com/apache/arrow-datafusion/pull/4400#discussion_r1035491442
##
datafusion/optimizer/src/simplify_expressions/utils.rs:
##
@@ -108,8 +106,12 @@ pub fn is_one(s: &Expr) -> bool {
| Expr::Literal(ScalarValue::U
liukun4515 commented on code in PR #4400:
URL: https://github.com/apache/arrow-datafusion/pull/4400#discussion_r1035489865
##
datafusion/expr/src/type_coercion/binary.rs:
##
@@ -287,8 +287,8 @@ fn get_wider_decimal_type(
(DataType::Decimal128(p1, s1), DataType::Decimal1
liukun4515 commented on code in PR #4400:
URL: https://github.com/apache/arrow-datafusion/pull/4400#discussion_r103543
##
datafusion/expr/src/type_coercion/binary.rs:
##
@@ -287,8 +287,8 @@ fn get_wider_decimal_type(
(DataType::Decimal128(p1, s1), DataType::Decimal1
jackwener commented on PR #4365:
URL:
https://github.com/apache/arrow-datafusion/pull/4365#issuecomment-1331592683
> You can try this: select (a + b) as c, count(*) from Table_A group by 1
```rust
#[test]
fn push_down_filter_groupby_expr_contains_alias() {
let sql = "SEL
mingmwang commented on PR #4365:
URL:
https://github.com/apache/arrow-datafusion/pull/4365#issuecomment-1331584666
> @mingmwang look like alias can't be in groupby.
>
> sql 1999
>
> ```
> Function
> Specify a grouped table derived by the application of the to the result
ursabot commented on PR #14731:
URL: https://github.com/apache/arrow/pull/14731#issuecomment-1331578942
Benchmark runs are scheduled for baseline =
fde7b937c84eaad842ab0457d2490c6c8c244697 and contender =
b1bcd6f3f17ceee958fae6905185a99e1307e6a7.
b1bcd6f3f17ceee958fae6905185a99e1307e6a7 is
liukun4515 commented on issue #3223:
URL: https://github.com/apache/arrow-rs/issues/3223#issuecomment-1331566959
@viirya @tustvold thanks for your advice.
In the user's case, some callers want to get an error when the data overflows
the precision, and some don't want to get the
wgtmac commented on code in PR #14742:
URL: https://github.com/apache/arrow/pull/14742#discussion_r1035455360
##
cpp/src/parquet/metadata.h:
##
@@ -171,6 +171,13 @@ class PARQUET_EXPORT ColumnChunkMetaData {
int64_t total_uncompressed_size() const;
std::unique_ptr crypto_m
HaoYang670 commented on PR #4429:
URL:
https://github.com/apache/arrow-datafusion/pull/4429#issuecomment-1331528826
Could we add a test for this?
xudong963 commented on PR #4395:
URL:
https://github.com/apache/arrow-datafusion/pull/4395#issuecomment-1331510714
Thanks for reviewing @alamb. I'll review it in the evening (GMT+8).
Jimexist closed pull request #65: version update of python and maturin
URL: https://github.com/apache/arrow-datafusion-python/pull/65
mvanschellebeeck commented on code in PR #4395:
URL: https://github.com/apache/arrow-datafusion/pull/4395#discussion_r1035414419
##
Cargo.toml:
##
@@ -31,6 +31,7 @@ members = [
"test-utils",
"parquet-test-utils",
"benchmarks",
+"tests/sqllogictests",
Review C
mvanschellebeeck commented on code in PR #4395:
URL: https://github.com/apache/arrow-datafusion/pull/4395#discussion_r1035414041
##
tests/sqllogictests/README.md:
##
@@ -0,0 +1,45 @@
+ Overview
+
+This is the Datafusion implementation of
[sqllogictest](https://www.sqlite.or
ursabot commented on PR #14744:
URL: https://github.com/apache/arrow/pull/14744#issuecomment-1331492200
Benchmark runs are scheduled for baseline =
a594e38fad126a63c952e0fd84e773f80fc3b3f0 and contender =
fde7b937c84eaad842ab0457d2490c6c8c244697.
fde7b937c84eaad842ab0457d2490c6c8c244697 is
vibhatha commented on code in PR #14646:
URL: https://github.com/apache/arrow/pull/14646#discussion_r1035405839
##
cpp/src/arrow/dataset/partition_test.cc:
##
@@ -1048,5 +1051,60 @@ TEST(TestStripPrefixAndFilename, Basic) {
"year=2019/m
wjones127 commented on code in PR #14679:
URL: https://github.com/apache/arrow/pull/14679#discussion_r1035397111
##
r/R/csv.R:
##
@@ -722,9 +731,10 @@ write_csv_arrow <- function(x,
if (is.null(write_options)) {
write_options <- readr_to_csv_write_options(
- inclu
mvanschellebeeck commented on code in PR #4395:
URL: https://github.com/apache/arrow-datafusion/pull/4395#discussion_r1035395990
##
tests/sqllogictests/src/main.rs:
##
@@ -0,0 +1,121 @@
+// Licensed to the Apache Software Foundation (ASF) under one
+// or more contributor licens
comphead commented on PR #4429:
URL:
https://github.com/apache/arrow-datafusion/pull/4429#issuecomment-1331454937
Hi @HaoYang670 please check the PR
But tbh the optimizer doesn't respect errors now so the error message looks
like
```
DataFusion CLI v14.0.0
❯ explain explain sel
comphead opened a new pull request, #4429:
URL: https://github.com/apache/arrow-datafusion/pull/4429
# Which issue does this PR close?
Closes #4378 .
# Rationale for this change
# What changes are included in this PR?
Replace panics in favor of Error
fatemehp commented on code in PR #14603:
URL: https://github.com/apache/arrow/pull/14603#discussion_r1035378763
##
cpp/src/parquet/column_reader.cc:
##
@@ -263,6 +269,11 @@ class SerializedPageReader : public PageReader {
int compres
fatemehp commented on code in PR #14603:
URL: https://github.com/apache/arrow/pull/14603#discussion_r1035378900
##
cpp/src/parquet/metadata.h:
##
@@ -182,6 +184,28 @@ class PARQUET_EXPORT ColumnChunkMetaData {
std::unique_ptr impl_;
};
+// \brief DataPageStats stores stati
fatemehp commented on code in PR #14603:
URL: https://github.com/apache/arrow/pull/14603#discussion_r1035363944
##
cpp/src/parquet/column_reader.cc:
##
@@ -337,6 +348,50 @@ void SerializedPageReader::UpdateDecryption(const
std::shared_ptr& de
}
}
+bool SerializedPageReade
dmitrijoseph opened a new issue, #4428:
URL: https://github.com/apache/arrow-datafusion/issues/4428
```
let test_df = ctx.read_csv("test.csv", CsvReadOptions::new()).await?;
let test_df = test_df.with_column_renamed("id", "renamedID")?;
println!("{:#?}", test_df.explain(true, true)?
github-actions[bot] commented on PR #14777:
URL: https://github.com/apache/arrow/pull/14777#issuecomment-1331406762
:warning: Ticket **has not been started in JIRA**, please click 'Start
Progress'.
github-actions[bot] commented on PR #14777:
URL: https://github.com/apache/arrow/pull/14777#issuecomment-1331406735
https://issues.apache.org/jira/browse/ARROW-18112
alamb commented on issue #4349:
URL:
https://github.com/apache/arrow-datafusion/issues/4349#issuecomment-1331402197
Here is my next contribution to clean up configuration:
https://github.com/apache/arrow-datafusion/pull/4427 (slowly consolidating the
configurations)
alamb closed pull request #3885: Consolidate remaining parquet config options
into ConfigOptions
URL: https://github.com/apache/arrow-datafusion/pull/3885
alamb commented on PR #3885:
URL:
https://github.com/apache/arrow-datafusion/pull/3885#issuecomment-1331401524
Updated version in https://github.com/apache/arrow-datafusion/pull/4427
alamb commented on code in PR #4427:
URL: https://github.com/apache/arrow-datafusion/pull/4427#discussion_r1035340983
##
benchmarks/src/bin/tpch.rs:
##
@@ -396,7 +396,8 @@ async fn get_table(
}
"parquet" => {
let path = format!("{}/{}",
alamb opened a new pull request, #4427:
URL: https://github.com/apache/arrow-datafusion/pull/4427
this is a reworked version of
https://github.com/apache/arrow-datafusion/pull/3885
# Which issue does this PR close?
Closes https://github.com/apache/arrow-datafusion/issues/3821
ursabot commented on PR #14762:
URL: https://github.com/apache/arrow/pull/14762#issuecomment-1331370822
['Python', 'R'] benchmarks have high level of regressions.
[ursa-i9-9960x](https://conbench.ursa.dev/compare/runs/1f50bc0aff244c6db1a8dd358b21f256...8a1000f38f5849a9aaaf1dcc92024de7/)
ursabot commented on PR #14762:
URL: https://github.com/apache/arrow/pull/14762#issuecomment-1331370501
Benchmark runs are scheduled for baseline =
d77ced27a008ef0cb32093e62f890ba38a16febd and contender =
a594e38fad126a63c952e0fd84e773f80fc3b3f0.
a594e38fad126a63c952e0fd84e773f80fc3b3f0 is
kou commented on code in PR #14585:
URL: https://github.com/apache/arrow/pull/14585#discussion_r1035301666
##
cpp/cmake_modules/ThirdpartyToolchain.cmake:
##
@@ -183,7 +184,9 @@ macro(build_dependency DEPENDENCY_NAME)
build_orc()
elseif("${DEPENDENCY_NAME}" STREQUAL "Pro
alamb commented on code in PR #3885:
URL: https://github.com/apache/arrow-datafusion/pull/3885#discussion_r1035276922
##
datafusion/core/src/config.rs:
##
@@ -237,6 +247,29 @@ impl BuiltInConfigs {
to reduce the number of rows decoded.",
false,
ursabot commented on PR #3231:
URL: https://github.com/apache/arrow-rs/pull/3231#issuecomment-1331288540
Benchmark runs are scheduled for baseline =
bdfe0fdeb127c99ef918af779a3b8404e91e41b1 and contender =
1a8e6ed957e483ec27b88fce54a48b8176be3179.
1a8e6ed957e483ec27b88fce54a48b8176be3179 i
viirya merged PR #3231:
URL: https://github.com/apache/arrow-rs/pull/3231
codecov-commenter commented on PR #78:
URL: https://github.com/apache/arrow-nanoarrow/pull/78#issuecomment-1331273561
#
[Codecov](https://codecov.io/gh/apache/arrow-nanoarrow/pull/78?src=pr&el=h1&utm_medium=referral&utm_source=github&utm_content=comment&utm_campaign=pr+comments&utm_term=The+
paleolimbot opened a new pull request, #78:
URL: https://github.com/apache/arrow-nanoarrow/pull/78
Work in progress!
alamb commented on issue #4426:
URL:
https://github.com/apache/arrow-datafusion/issues/4426#issuecomment-1331261721
I think the feature described in this proposal is needed to properly handle
prepared statements in FlightSQL
For example, in ballista parameter handling appears to stil
viirya opened a new pull request, #3231:
URL: https://github.com/apache/arrow-rs/pull/3231
# Which issue does this PR close?
Closes #.
# Rationale for this change
CI now failed by
```
error: failed to select a version for the requirement `tonic-
NGA-TRAN commented on issue #4426:
URL:
https://github.com/apache/arrow-datafusion/issues/4426#issuecomment-1331260826
Thanks @alamb . Let me see how Logical Plan looks like and propose a clearer
one
lidavidm merged PR #14573:
URL: https://github.com/apache/arrow/pull/14573
alamb commented on issue #4426:
URL:
https://github.com/apache/arrow-datafusion/issues/4426#issuecomment-1331258449
This is great @NGA-TRAN -- thank you for writing it up. My only feedback is
that for option 2 it might be easier if the output was a new LogicalPlan that
had the parameter v
viirya opened a new pull request, #3230:
URL: https://github.com/apache/arrow-rs/pull/3230
# Which issue does this PR close?
Closes #.
# Rationale for this change
I re-checked how Spark handles negative scale. Negative scale is not limited
to the max sca
NGA-TRAN commented on issue #4426:
URL:
https://github.com/apache/arrow-datafusion/issues/4426#issuecomment-1331255758
@alamb What do you think?
NGA-TRAN opened a new issue, #4426:
URL: https://github.com/apache/arrow-datafusion/issues/4426
**Is your feature request related to a problem or challenge? Please describe
what you are trying to do.**
In order to support [Prepare
statement](https://en.wikipedia.org/wiki/Prepared_stateme
viirya commented on code in PR #3222:
URL: https://github.com/apache/arrow-rs/pull/3222#discussion_r1035233528
##
arrow-cast/src/cast.rs:
##
@@ -3614,7 +3616,6 @@ mod tests {
}
#[test]
-#[cfg(not(feature = "force_validate"))]
Review Comment:
For the tests whi
ursabot commented on PR #4406:
URL:
https://github.com/apache/arrow-datafusion/pull/4406#issuecomment-1331242517
Benchmark runs are scheduled for baseline =
66c95e70ae2ff9f3f89b91898ede875d316e731f and contender =
49166ea55f317722ab7a37fbfc253bcd497c1672.
49166ea55f317722ab7a37fbfc253bcd4