Fokko commented on PR #36846:
URL: https://github.com/apache/arrow/pull/36846#issuecomment-1689372348
Alright, doing casts during reads has its [own
issues](https://github.com/apache/arrow/issues/36845) (this might be faster,
because it reads the data directly into the correct format?). Also, other
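For context, a minimal sketch of the two approaches being weighed, assuming PyArrow's Parquet reader (the file name, column name, and target type are made up for illustration; passing `schema=` to `read_table` assumes a reasonably recent PyArrow):
```python
import pyarrow as pa
import pyarrow.parquet as pq

# Hypothetical target schema; "ts" and the timestamp type are illustrative.
target = pa.schema([pa.field("ts", pa.timestamp("us"))])

# Cast after the read: the file's native arrays are materialized first,
# then a second, casted copy is produced.
table = pq.read_table("data.parquet").cast(target)

# Cast during the read: hand the reader the desired schema up front, so
# values can be read straight into the requested format where supported.
table = pq.read_table("data.parquet", schema=target)
```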
wgtmac commented on code in PR #36574:
URL: https://github.com/apache/arrow/pull/36574#discussion_r1302547704
##
cpp/src/parquet/page_index.cc:
##
@@ -830,14 +894,13 @@ RowGroupIndexReadRange
PageIndexReader::DeterminePageIndexRangesInRowGroup(
// -
wgtmac commented on code in PR #36574:
URL: https://github.com/apache/arrow/pull/36574#discussion_r1302546880
##
cpp/src/parquet/encryption/test_encryption_util.cc:
##
@@ -509,4 +513,178 @@ void FileDecryptor::CheckFile(parquet::ParquetFileReader*
file_reader,
}
}
+void F
rsm-23 commented on code in PR #37301:
URL: https://github.com/apache/arrow/pull/37301#discussion_r1302538388
##
docs/source/cpp/json.rst:
##
@@ -58,9 +58,8 @@ the output table.
// Instantiate TableReader from input stream and options
std::shared_ptr reader;
Rev
wgtmac commented on code in PR #36574:
URL: https://github.com/apache/arrow/pull/36574#discussion_r1302537048
##
cpp/src/parquet/page_index.h:
##
@@ -332,7 +340,8 @@ class PARQUET_EXPORT OffsetIndexBuilder {
class PARQUET_EXPORT PageIndexBuilder {
public:
/// \brief API co
wgtmac commented on code in PR #36574:
URL: https://github.com/apache/arrow/pull/36574#discussion_r1302534492
##
cpp/src/parquet/page_index.h:
##
@@ -186,8 +191,7 @@ class PARQUET_EXPORT PageIndexReader {
/// that creates this PageIndexReader.
static std::shared_ptr Make(
smallzhongfeng commented on issue #7289:
URL:
https://github.com/apache/arrow-datafusion/issues/7289#issuecomment-1689346970
Could you assign it to me? I am very interested. @izveigor :-)
Light-City commented on code in PR #37171:
URL: https://github.com/apache/arrow/pull/37171#discussion_r1302524312
##
cpp/src/arrow/record_batch.cc:
##
@@ -283,6 +283,25 @@ bool RecordBatch::ApproxEquals(const RecordBatch& other,
const EqualOptions& opt
return true;
}
+Sta
Light-City commented on code in PR #37171:
URL: https://github.com/apache/arrow/pull/37171#discussion_r1302523577
##
cpp/src/arrow/record_batch.cc:
##
@@ -283,6 +283,25 @@ bool RecordBatch::ApproxEquals(const RecordBatch& other,
const EqualOptions& opt
return true;
}
+Sta
zinking commented on PR #36704:
URL: https://github.com/apache/arrow/pull/36704#issuecomment-1689332969
https://github.com/apache/arrow/issues/37323
@davisusanibar could this be pushed forward?
wgtmac commented on code in PR #36574:
URL: https://github.com/apache/arrow/pull/36574#discussion_r1302509826
##
cpp/src/parquet/encryption/test_encryption_util.cc:
##
@@ -509,4 +513,178 @@ void FileDecryptor::CheckFile(parquet::ParquetFileReader*
file_reader,
}
}
+void F
wgtmac commented on code in PR #36574:
URL: https://github.com/apache/arrow/pull/36574#discussion_r1302503911
##
cpp/src/parquet/encryption/test_encryption_util.cc:
##
@@ -509,4 +513,178 @@ void FileDecryptor::CheckFile(parquet::ParquetFileReader*
file_reader,
}
}
+void F
spaydar commented on code in PR #7362:
URL: https://github.com/apache/arrow-datafusion/pull/7362#discussion_r1302501616
##
docs/source/user-guide/sql/dml.md:
##
@@ -0,0 +1,60 @@
+
+
+# DML
+
+## COPY
+
+Copy a table to file(s). Supported file formats are `parquet`, `csv`, and
`
wgtmac commented on code in PR #36574:
URL: https://github.com/apache/arrow/pull/36574#discussion_r1302499077
##
cpp/src/parquet/encryption/test_encryption_util.cc:
##
@@ -509,4 +513,178 @@ void FileDecryptor::CheckFile(parquet::ParquetFileReader*
file_reader,
}
}
+void F
spaydar commented on code in PR #7362:
URL: https://github.com/apache/arrow-datafusion/pull/7362#discussion_r1302497654
##
docs/source/user-guide/sql/dml.md:
##
@@ -0,0 +1,60 @@
+
+
+# DML
+
+## COPY
+
+Copy a table to file(s). Supported file formats are `parquet`, `csv`, and
`
mapleFU commented on PR #36967:
URL: https://github.com/apache/arrow/pull/36967#issuecomment-1689296443
@westonpace @pitrou would you mind taking a look at this interface? It splits a
parquet file scanner by offset-length
AlenkaF commented on code in PR #35865:
URL: https://github.com/apache/arrow/pull/35865#discussion_r1302476258
##
python/pyarrow/array.pxi:
##
@@ -2053,6 +2053,38 @@ cdef class ListArray(BaseListArray):
@property
def values(self):
+"""
+Return the und
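For reference, a small sketch of the behaviour this docstring is describing, using the public PyArrow API (the sample offsets and values are made up):
```python
import pyarrow as pa

values = pa.array([0, 1, 2, 3])
offsets = pa.array([1, 2, 3], type=pa.int32())
arr = pa.ListArray.from_arrays(offsets, values)  # two lists: [1] and [2]

# `values` exposes the raw child array (0, 1, 2, 3), ignoring the offsets...
print(arr.values)

# ...whereas `flatten()` only returns the values the lists actually
# reference (1, 2).
print(arr.flatten())
```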
conbench-apache-arrow[bot] commented on PR #37272:
URL: https://github.com/apache/arrow/pull/37272#issuecomment-1689267864
After merging your PR, Conbench analyzed the 5 benchmarking runs that have
been run so far on merge-commit 9ddd8d5c52796f20cf619b8f43538b9d454fb9c0.
There were no
JayjeetAtGithub opened a new issue, #4725:
URL: https://github.com/apache/arrow-rs/issues/4725
(no comment)
JayjeetAtGithub commented on issue #7342:
URL:
https://github.com/apache/arrow-datafusion/issues/7342#issuecomment-1689266808
I looked into this issue a little bit. It looks like changes are needed
in `arrow-string`, which is basically part of `arrow-rs`. Specifically, I
found out tha
mapleFU commented on PR #37264:
URL: https://github.com/apache/arrow/pull/37264#issuecomment-1689259680
I've fixed the comment, would you mind taking a look again?
yjshen merged PR #7329:
URL: https://github.com/apache/arrow-datafusion/pull/7329
mapleFU commented on code in PR #37171:
URL: https://github.com/apache/arrow/pull/37171#discussion_r1302430191
##
cpp/src/arrow/record_batch.cc:
##
@@ -283,6 +283,25 @@ bool RecordBatch::ApproxEquals(const RecordBatch& other,
const EqualOptions& opt
return true;
}
+Status
mapleFU commented on code in PR #36073:
URL: https://github.com/apache/arrow/pull/36073#discussion_r1302427543
##
cpp/src/parquet/column_writer.cc:
##
@@ -2305,6 +2307,74 @@ struct SerializeFunctor<
int64_t* scratch;
};
+// -
nseekhao opened a new pull request, #7382:
URL: https://github.com/apache/arrow-datafusion/pull/7382
## Which issue does this PR close?
Closes #7381 .
## Rationale for this change
To add support for aggregation with `ROLLUP` and `GROUPING SETS`.
## What
jiangzhx commented on issue #7380:
URL:
https://github.com/apache/arrow-datafusion/issues/7380#issuecomment-1689204698
I found another way to make this work.
select make_array(case when 1>0 then true else false end,case when 2>0 then
true else false end);
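A sketch of how the workaround might be run from the DataFusion Python bindings, assuming the `datafusion` package is installed (untested here, purely for illustration):
```python
from datafusion import SessionContext

ctx = SessionContext()
# Build the array with make_array(); each element is a CASE expression.
batches = ctx.sql(
    "select make_array("
    "case when 1 > 0 then true else false end, "
    "case when 2 > 0 then true else false end)"
).collect()
print(batches[0])
```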
kou commented on issue #37296:
URL: https://github.com/apache/arrow/issues/37296#issuecomment-1689198643
Do you have `nvcc`?
mapleFU commented on issue #31678:
URL: https://github.com/apache/arrow/issues/31678#issuecomment-1689196976
```
>>> metadata_collector[0].schema
required group field_id=-1 schema {
optional int64 field_id=-1 A;
optional binary field_id=-1 B (String);
}
>>> metad
nseekhao opened a new issue, #7381:
URL: https://github.com/apache/arrow-datafusion/issues/7381
### Is your feature request related to a problem or challenge?
The Substrait producer currently throws an error if `ROLLUP` or `GROUPING
SETS` is used in the query.
### Describe the
wgtmac commented on PR #36519:
URL: https://github.com/apache/arrow/pull/36519#issuecomment-1689190499
Thanks, I agree with you @lidavidm
kou merged PR #37315:
URL: https://github.com/apache/arrow/pull/37315
mhkeller commented on issue #35041:
URL: https://github.com/apache/arrow/issues/35041#issuecomment-1689171844
Is there any update on this?
jiangzhx commented on issue #7380:
URL:
https://github.com/apache/arrow-datafusion/issues/7380#issuecomment-1689169251
I'm not sure if this feature should be included in the discussion on the
following issue:
https://github.com/apache/arrow-datafusion/issues/6980.
jiangzhx opened a new issue, #7380:
URL: https://github.com/apache/arrow-datafusion/issues/7380
### Is your feature request related to a problem or challenge?
Support using the CASE WHEN statement inside an ARRAY literal.
`select [case when col1>0 then true else false end,case when col1>0 then
true e
github-actions[bot] commented on PR #37321:
URL: https://github.com/apache/arrow/pull/37321#issuecomment-1689163906
:warning: GitHub issue #37320 **has been automatically assigned in GitHub**
to PR creator.
Light-City opened a new pull request, #37321:
URL: https://github.com/apache/arrow/pull/37321
### Rationale for this change
Ordinary comparison operators yield null (signifying “unknown”), not true or
false, when either input is null. For example, 7 = NULL yields null, as does 7
<> N
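The same three-valued semantics can be observed from Python through the Arrow compute kernels; a minimal sketch:
```python
import pyarrow as pa
import pyarrow.compute as pc

seven = pa.scalar(7, type=pa.int64())
null = pa.scalar(None, type=pa.int64())

# Comparisons involving null propagate null ("unknown") rather than false:
print(pc.equal(seven, null))      # null boolean scalar
print(pc.not_equal(seven, null))  # null boolean scalar
```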
avantgardnerio commented on PR #7192:
URL:
https://github.com/apache/arrow-datafusion/pull/7192#issuecomment-1689158528
> Reported performance results
I'd like to reiterate that this PR is really about using constant memory
(which it does), not increasing throughput, but here's some
ozankabak commented on code in PR #7364:
URL: https://github.com/apache/arrow-datafusion/pull/7364#discussion_r1302364213
##
datafusion/sqllogictest/test_files/order.slt:
##
@@ -410,3 +410,38 @@ SELECT DISTINCT time as "first_seen" FROM t ORDER BY 1;
## Cleanup
statement ok
d
conbench-apache-arrow[bot] commented on PR #36977:
URL: https://github.com/apache/arrow/pull/36977#issuecomment-1689147489
After merging your PR, Conbench analyzed the 6 benchmarking runs that have
been run so far on merge-commit fe750ed10531c47131b447397e67486656cf8135.
There were no
avantgardnerio commented on PR #7192:
URL:
https://github.com/apache/arrow-datafusion/pull/7192#issuecomment-1689144857
> Tests for the optimizer pass
@alamb this bothered me as well. Would you be able to direct me to the most
exemplary test to reference?
zinking commented on issue #37005:
URL: https://github.com/apache/arrow/issues/37005#issuecomment-1689137038
> Sorry, I deleted the comment I just posted. You mean call into Arrow Rust
_from_ Java, right?
That's correct.
paleolimbot commented on PR #280:
URL: https://github.com/apache/arrow-nanoarrow/pull/280#issuecomment-1689124104
Yes, I'm on M1.
If I change the unpacking to a macro, I get 3x faster unpacking (and no
difference between shift/no shift):
```c
#define ARROW_BITS_UNPACK1(word,
kou commented on code in PR #37238:
URL: https://github.com/apache/arrow/pull/37238#discussion_r1302344970
##
cpp/CMakeLists.txt:
##
@@ -18,35 +18,55 @@
cmake_minimum_required(VERSION 3.16)
message(STATUS "Building using CMake version: ${CMAKE_VERSION}")
-# Compiler id for A
kou merged PR #37258:
URL: https://github.com/apache/arrow/pull/37258
kou commented on PR #37258:
URL: https://github.com/apache/arrow/pull/37258#issuecomment-1689118588
+1
parkma99 commented on PR #7350:
URL:
https://github.com/apache/arrow-datafusion/pull/7350#issuecomment-1689114310
> Perhaps we could implement this as part of the coercion rules as opposed
to internal to the evaluation logic? See coerce_arguments_for_fun perhaps?
Thank you, it looks
lidavidm commented on code in PR #989:
URL: https://github.com/apache/arrow-adbc/pull/989#discussion_r1302336577
##
python/adbc_driver_manager/adbc_driver_manager/dbapi.py:
##
@@ -973,7 +1012,7 @@ def fetchone(self) -> Optional[tuple]:
self.rownumber += 1
retur
lidavidm commented on code in PR #989:
URL: https://github.com/apache/arrow-adbc/pull/989#discussion_r1302335936
##
python/adbc_driver_manager/adbc_driver_manager/dbapi.py:
##
@@ -926,6 +927,44 @@ def fetch_df(self) -> "pandas.DataFrame":
)
return self._res
lidavidm commented on code in PR #989:
URL: https://github.com/apache/arrow-adbc/pull/989#discussion_r1302335724
##
python/adbc_driver_manager/adbc_driver_manager/dbapi.py:
##
@@ -926,6 +927,44 @@ def fetch_df(self) -> "pandas.DataFrame":
)
return self._res
lidavidm commented on code in PR #989:
URL: https://github.com/apache/arrow-adbc/pull/989#discussion_r1302335522
##
python/adbc_driver_manager/adbc_driver_manager/dbapi.py:
##
@@ -926,6 +927,44 @@ def fetch_df(self) -> "pandas.DataFrame":
)
return self._res
lidavidm commented on code in PR #989:
URL: https://github.com/apache/arrow-adbc/pull/989#discussion_r1302334457
##
python/adbc_driver_manager/adbc_driver_manager/dbapi.py:
##
@@ -973,7 +1012,7 @@ def fetchone(self) -> Optional[tuple]:
self.rownumber += 1
retur
lidavidm commented on issue #37318:
URL: https://github.com/apache/arrow/issues/37318#issuecomment-1689106759
Hmm, interesting. Ideally on the C++ side we would change concat_arrays to
not allocate a new array if there's only one chunk. But that would actually be
a breaking change, since th
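To make the behaviour concrete, a hedged Python illustration of the single-chunk path under discussion (whether the buffers are copied here is exactly the implementation detail at issue):
```python
import pyarrow as pa

chunked = pa.chunked_array([pa.array([1, 2, 3])])  # exactly one chunk

# combine_chunks() concatenates all chunks into one contiguous array; with a
# single chunk this currently still goes through concatenation, which is the
# extra allocation the issue would like to avoid.
combined = chunked.combine_chunks()
print(combined)
```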
R-JunmingChen commented on code in PR #37100:
URL: https://github.com/apache/arrow/pull/37100#discussion_r1302332197
##
cpp/src/arrow/compute/kernels/aggregate_basic.cc:
##
@@ -492,11 +492,24 @@ Result>
MinMaxInit(KernelContext* ctx,
return visitor.Create();
}
+namespace
2010YOUY01 commented on PR #7337:
URL:
https://github.com/apache/arrow-datafusion/pull/7337#issuecomment-1689072998
> # Does this belong in Datafusion core? Or does it belong as an add on?
> With this level of specialization required, I wonder where shall we stop
adding built in aggregat
ywc88 opened a new pull request, #989:
URL: https://github.com/apache/arrow-adbc/pull/989
Fixes #968
2010YOUY01 commented on PR #7337:
URL:
https://github.com/apache/arrow-datafusion/pull/7337#issuecomment-1689064579
> Thank you @2010YOUY01 . This PR, as all your others, is well written,
documented and tested and is easy to read and understand. Thank you so much.
>
> # Sorting
>
spenczar commented on PR #35865:
URL: https://github.com/apache/arrow/pull/35865#issuecomment-1689059622
Is there anything I can do to get this merged?
github-actions[bot] commented on PR #37319:
URL: https://github.com/apache/arrow/pull/37319#issuecomment-1689054222
:warning: GitHub issue #37318 **has been automatically assigned in GitHub**
to PR creator.
spenczar opened a new pull request, #37319:
URL: https://github.com/apache/arrow/pull/37319
### Rationale for this change
The associated issue explains the rationale.
I'd love to add benchmarks, but don't really know how; Arrow's benchmark
system is pretty daunting for adding a
conbench-apache-arrow[bot] commented on PR #37275:
URL: https://github.com/apache/arrow/pull/37275#issuecomment-1689043221
After merging your PR, Conbench analyzed the 6 benchmarking runs that have
been run so far on merge-commit 369bb318b016e26db1c9933418a8855975eeab01.
There were no
wiedld commented on code in PR #7379:
URL: https://github.com/apache/arrow-datafusion/pull/7379#discussion_r1302269471
##
datafusion/core/src/physical_plan/sorts/streaming_merge.rs:
##
@@ -0,0 +1,92 @@
+// Licensed to the Apache Software Foundation (ASF) under one
+// or more co
alamb commented on PR #7355:
URL:
https://github.com/apache/arrow-datafusion/pull/7355#issuecomment-1689036656
Thank you @DDtKey !
wiedld commented on code in PR #7379:
URL: https://github.com/apache/arrow-datafusion/pull/7379#discussion_r1302270314
##
datafusion/core/src/physical_plan/sorts/merge.rs:
##
@@ -15,95 +15,20 @@
// specific language governing permissions and limitations
// under the License.
wiedld commented on code in PR #7379:
URL: https://github.com/apache/arrow-datafusion/pull/7379#discussion_r1302264850
##
datafusion/core/src/physical_plan/sorts/cursor.rs:
##
@@ -99,6 +100,16 @@ pub trait Cursor: Ord {
/// Advance the cursor, returning the previous row i
wiedld opened a new pull request, #7379:
URL: https://github.com/apache/arrow-datafusion/pull/7379
**WIP: still have a few optimizations to do, including those noted in this code.**
## Which issue does this PR close?
External sorting (cascading merges) of the internal-sorted (in-memory
github-actions[bot] commented on PR #37255:
URL: https://github.com/apache/arrow/pull/37255#issuecomment-1689012808
Revision: 8eb925d16c808109b174ca1e82e53e6e4f87b06b
Submitted crossbow builds: [ursacomputing/crossbow @
actions-a1c4d5ead7](https://github.com/ursacomputing/crossbow/bra
danepitkin commented on PR #37255:
URL: https://github.com/apache/arrow/pull/37255#issuecomment-1689011035
@github-actions crossbow submit test-conda-python-3.10-hdfs*
github-actions[bot] commented on PR #37317:
URL: https://github.com/apache/arrow/pull/37317#issuecomment-1688996632
:warning: GitHub issue #37310 **has been automatically assigned in GitHub**
to PR creator.
danepitkin opened a new pull request, #37317:
URL: https://github.com/apache/arrow/pull/37317
### Rationale for this change
Warnings are enabled for some nightly jobs, but not for CI jobs. This is not
helpful since devs typically rely on the CI job as part of the PR process.
##
github-actions[bot] commented on PR #37255:
URL: https://github.com/apache/arrow/pull/37255#issuecomment-1688948279
Revision: 7710a866a309148304da9873f77c5f0b7637cc33
Submitted crossbow builds: [ursacomputing/crossbow @
actions-c53268ddd6](https://github.com/ursacomputing/crossbow/bra
danepitkin commented on PR #37255:
URL: https://github.com/apache/arrow/pull/37255#issuecomment-1688943523
@github-actions crossbow submit *python*
Fokko commented on issue #37219:
URL: https://github.com/apache/arrow/issues/37219#issuecomment-1688941267
Thanks @bkietz for the context and pointer. I was digging into the code, but
was unable to see when the loop is actually executed. It seems that the size of
`exprs` is always zero. I w
viirya commented on code in PR #7378:
URL: https://github.com/apache/arrow-datafusion/pull/7378#discussion_r1302200666
##
datafusion/physical-expr/src/expressions/in_list.rs:
##
@@ -94,7 +94,7 @@ impl Set for ArraySet
where
T: Array + 'static,
for<'a> &'a T: ArrayAcce
kou commented on code in PR #37311:
URL: https://github.com/apache/arrow/pull/37311#discussion_r1302186757
##
cpp/examples/tutorial_examples/CMakeLists.txt:
##
@@ -23,6 +23,7 @@ find_package(Arrow REQUIRED)
get_filename_component(ARROW_CONFIG_PATH ${Arrow_CONFIG} DIRECTORY)
zeroshade commented on PR #37174:
URL: https://github.com/apache/arrow/pull/37174#issuecomment-1688930246
@felipecrv I've pushed the Go implementation for REE with c-export/import
sarutak commented on code in PR #7378:
URL: https://github.com/apache/arrow-datafusion/pull/7378#discussion_r1302178468
##
datafusion/physical-expr/src/expressions/in_list.rs:
##
@@ -609,50 +643,100 @@ mod tests {
#[test]
fn in_list_float64() -> Result<()> {
l
kou commented on code in PR #37301:
URL: https://github.com/apache/arrow/pull/37301#discussion_r1302178475
##
docs/source/cpp/json.rst:
##
@@ -58,9 +58,8 @@ the output table.
// Instantiate TableReader from input stream and options
std::shared_ptr reader;
Review
sarutak commented on code in PR #7378:
URL: https://github.com/apache/arrow-datafusion/pull/7378#discussion_r1302178468
##
datafusion/physical-expr/src/expressions/in_list.rs:
##
@@ -609,50 +643,100 @@ mod tests {
#[test]
fn in_list_float64() -> Result<()> {
l
sarutak opened a new pull request, #7378:
URL: https://github.com/apache/arrow-datafusion/pull/7378
## Which issue does this PR close?
Closes #7377
## Rationale for this change
This PR fixes an issue where `'NaN'::double in ('NaN'::double)` is evaluated
as `false`, which is inco
legout commented on issue #31678:
URL: https://github.com/apache/arrow/issues/31678#issuecomment-1688905002
Create a toy dataset of parquet files that have identical column types but
different column ordering.
```python
import os
import tempfile
import pyarrow as pa
impo
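import pyarrow.parquet as pq

# (The snippet above is truncated in this digest; what follows is a hedged
#  sketch of one way to build such a toy dataset, not the author's original
#  code. File and column names are illustrative.)
base = tempfile.mkdtemp()

# Two files with identical column types but different column ordering.
pq.write_table(pa.table({"A": [1, 2], "B": ["x", "y"]}), os.path.join(base, "part-0.parquet"))
pq.write_table(pa.table({"B": ["z"], "A": [3]}), os.path.join(base, "part-1.parquet"))
```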
conbench-apache-arrow[bot] commented on PR #37285:
URL: https://github.com/apache/arrow/pull/37285#issuecomment-1688897859
After merging your PR, Conbench analyzed the 5 benchmarking runs that have
been run so far on merge-commit 9ecd0f2a5fb76cca859269a6ff13eaf315abac62.
There were no
kou merged PR #37300:
URL: https://github.com/apache/arrow/pull/37300
sarutak opened a new issue, #7377:
URL: https://github.com/apache/arrow-datafusion/issues/7377
### Describe the bug
Given the following query:
```
SELECT 'NAN'::double in ('NAN'::double);
```
I expected the result to be `true`, but the actual result is `false`.
It's not inconsisten
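For background, a minimal Python sketch of why a plain IEEE 754 comparison makes this membership test fail, and how a total-order style comparison (conceptually what a fix needs; shown here simply via raw bit patterns, not DataFusion's actual implementation) differs:
```python
import struct

nan = float("nan")

# IEEE 754 equality: NaN is never equal to itself, so an IN check built on
# plain `==` reports false for NaN IN (NaN).
print(nan == nan)  # False

# Comparing total-order keys -- here just the raw 8-byte encodings -- treats
# identical NaN values as equal, matching the expected result.
print(struct.pack("<d", nan) == struct.pack("<d", nan))  # True
```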
jorisvandenbossche commented on PR #280:
URL: https://github.com/apache/arrow-nanoarrow/pull/280#issuecomment-1688896563
(I am on ubuntu / intel cpu)
Dandandan commented on code in PR #7376:
URL: https://github.com/apache/arrow-datafusion/pull/7376#discussion_r1302163472
##
datafusion/physical-expr/src/aggregate/median.rs:
##
@@ -106,159 +126,75 @@ impl PartialEq for Median {
}
}
-#[derive(Debug)]
/// The median accu
sgilmore10 commented on code in PR #37315:
URL: https://github.com/apache/arrow/pull/37315#discussion_r1302141482
##
matlab/src/matlab/+arrow/+array/Time32Array.m:
##
@@ -0,0 +1,84 @@
+% arrow.array.Time32Array
+
+% Licensed to the Apache Software Foundation (ASF) under one or m
rsm-23 commented on PR #37301:
URL: https://github.com/apache/arrow/pull/37301#issuecomment-1688850144
@wjones127 I have resolved all the comments.
paleolimbot commented on PR #280:
URL: https://github.com/apache/arrow-nanoarrow/pull/280#issuecomment-1688849511
FWIW I also had to compile slightly differently because I got a bunch of
missing symbol errors.
```
gcc -O3 -Wall -Werror -shared -fPIC \
-I$(python -c "import s
sgilmore10 commented on code in PR #37315:
URL: https://github.com/apache/arrow/pull/37315#discussion_r1302132211
##
matlab/src/cpp/arrow/matlab/array/proxy/time32_array.cc:
##
@@ -0,0 +1,62 @@
+// Licensed to the Apache Software Foundation (ASF) under one
+// or more contributo
Dandandan commented on code in PR #7362:
URL: https://github.com/apache/arrow-datafusion/pull/7362#discussion_r1302125394
##
docs/source/user-guide/sql/dml.md:
##
@@ -0,0 +1,60 @@
+
+
+# DML
+
+## COPY
+
+Copy a table to file(s). Supported file formats are `parquet`, `csv`, and
Dandandan commented on code in PR #7362:
URL: https://github.com/apache/arrow-datafusion/pull/7362#discussion_r1302124882
##
docs/source/user-guide/sql/dml.md:
##
@@ -0,0 +1,60 @@
+
+
+# DML
+
+## COPY
+
+Copy a table to file(s). Supported file formats are `parquet`, `csv`, and
paleolimbot commented on PR #280:
URL: https://github.com/apache/arrow-nanoarrow/pull/280#issuecomment-1688836990
The difference is more subtle for me on packing (but more pronounced for
packing)...I'm game!
Can you add a comment above each hard-coded shift and explain that it was
do
Dandandan commented on code in PR #7362:
URL: https://github.com/apache/arrow-datafusion/pull/7362#discussion_r1302124087
##
docs/source/user-guide/sql/dml.md:
##
@@ -0,0 +1,60 @@
+
+
+# DML
+
+## COPY
+
+Copy a table to file(s). Supported file formats are `parquet`, `csv`, and
kevingurney merged PR #37316:
URL: https://github.com/apache/arrow/pull/37316
kevingurney commented on PR #37316:
URL: https://github.com/apache/arrow/pull/37316#issuecomment-1688831897
+1
Dandandan commented on code in PR #7364:
URL: https://github.com/apache/arrow-datafusion/pull/7364#discussion_r1302119255
##
datafusion/sqllogictest/test_files/order.slt:
##
@@ -410,3 +410,38 @@ SELECT DISTINCT time as "first_seen" FROM t ORDER BY 1;
## Cleanup
statement ok
d
lidavidm commented on PR #988:
URL: https://github.com/apache/arrow-adbc/pull/988#issuecomment-1688825412
I subscribed to the issue for making Breathe compatible with Sphinx 7. Once
that passes, I'll unpin this and bump the minimum Sphinx version. (That said,
I've been seeing colleagues eva