paleolimbot merged PR #1:
URL: https://github.com/apache/arrow-nanoarrow/pull/1
--
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
To unsubscribe, e-mail:
paleolimbot opened a new pull request, #1:
URL: https://github.com/apache/arrow-nanoarrow/pull/1
This PR imports the preliminary work from
https://github.com/paleolimbot/nanoarrow (with reviews from @lidavidm and
@pitrou and input from the dev mailing list via the [initial design
tpgillam opened a new issue, #327:
URL: https://github.com/apache/arrow-julia/issues/327
It seems like the ArrowTypes representation of ZonedDateTime doesn't include
enough information to resolve ambiguities around DST, e.g.:
```julia
julia> zdt = ZonedDateTime(DateTime(2020, 11,
```
lidavidm commented on issue #23:
URL: https://github.com/apache/arrow-adbc/issues/23#issuecomment-1165550741
We also need to implement rollback
pitrou commented on PR #56:
URL: https://github.com/apache/arrow-testing/pull/56#issuecomment-1168754007
@liukun4515 The compression file was generated using Arrow C++ IIRC.
lidavidm opened a new issue, #32:
URL: https://github.com/apache/arrow-adbc/issues/32
We should have a structure like
```
/
adbc.h
c/
driver_manager
driver
validation
java/
...
```
(also, the Java "testsuite" package should be
lidavidm merged PR #34:
URL: https://github.com/apache/arrow-adbc/pull/34
lidavidm opened a new pull request, #35:
URL: https://github.com/apache/arrow-adbc/pull/35
Fixes #32.
lidavidm opened a new issue, #33:
URL: https://github.com/apache/arrow-adbc/issues/33
For C/Java parity.
lidavidm merged PR #35:
URL: https://github.com/apache/arrow-adbc/pull/35
lidavidm closed issue #32: Cleanly separate C/Java code
URL: https://github.com/apache/arrow-adbc/issues/32
lidavidm opened a new pull request, #36:
URL: https://github.com/apache/arrow-adbc/pull/36
Not all of the metadata is currently mapped and there are some more cases to
consider in the future.
Fixes #33.
lidavidm merged PR #36:
URL: https://github.com/apache/arrow-adbc/pull/36
lidavidm closed issue #33: Implement database metadata calls in Java
URL: https://github.com/apache/arrow-adbc/issues/33
lidavidm opened a new issue, #37:
URL: https://github.com/apache/arrow-adbc/issues/37
The existing bindings aren't fully complete.
We should have two packages: a low-level package with no/minimal
dependencies that mirrors the C API closely; and eventually, a high-level
package that
lidavidm opened a new pull request, #30:
URL: https://github.com/apache/arrow-adbc/pull/30
Also factors out a helper module. Currently the only feature is to bind
Arrow data to a JDBC prepared statement. Ideally this would be migrated
upstream into arrow-jdbc, though!
lidavidm commented on PR #18:
URL: https://github.com/apache/arrow-adbc/pull/18#issuecomment-1161903416
Implemented for SQLite, though the constraint metadata ignores the column
name filter for simplicity.
lidavidm merged PR #11:
URL: https://github.com/apache/arrow-adbc/pull/11
lidavidm opened a new pull request, #12:
URL: https://github.com/apache/arrow-adbc/pull/12
Next steps:
- CI setup
- Java driver manager
- Flight SQL driver
- JNI ADBC driver
lidavidm opened a new pull request, #8:
URL: https://github.com/apache/arrow-adbc/pull/8
This only builds the driver manager (without tests).
TODOs
- [ ] Set up helper scripts, Conda environments
- [ ] Can we share the main Arrow project's caches?
- [ ] Build tests as
lidavidm commented on PR #18:
URL: https://github.com/apache/arrow-adbc/pull/18#issuecomment-1156813679
CC @hannes, @krlmlr. This is roughly patterned off of Flight SQL (and seems
similar to DBI from a quick look as well)
I think we discussed an MSSQL style `information_schema` or
lidavidm opened a new pull request, #18:
URL: https://github.com/apache/arrow-adbc/pull/18
Adds methods to get table schema/columns, and implements all the metadata
methods for SQLite.
The methods differ slightly from Flight SQL. Instead of returning a
serialized schema with the
lidavidm opened a new issue, #20:
URL: https://github.com/apache/arrow-adbc/issues/20
We should fill out the error codes. Things to consider:
- Flight/gRPC status codes: the gRPC status codes are nice because they're
well-defined.
- PEP 249 exception hierarchy
- SQLSTATE
lidavidm opened a new issue, #19:
URL: https://github.com/apache/arrow-adbc/issues/19
Removed in https://github.com/apache/arrow/pull/13382
lidavidm opened a new pull request, #21:
URL: https://github.com/apache/arrow-adbc/pull/21
Adds a wider set of error codes and describes what they're meant to be used
for.
Fixes #20.
lidavidm opened a new issue, #22:
URL: https://github.com/apache/arrow-adbc/issues/22
We need to set up:
- cpplint
- clang-tidy
- ASan/UBSan
- Valgrind, possibly
- flake8 for Cython
pitrou commented on issue #20:
URL: https://github.com/apache/arrow-adbc/issues/20#issuecomment-1157797045
> Do we want to extend AdbcError with space for database-specific error
codes?
Probably? It seems like that wouldn't hurt, and being able to faithfully
recreate errors is always
lidavidm opened a new issue, #23:
URL: https://github.com/apache/arrow-adbc/issues/23
JDBC, ODBC, Flight SQL (implicitly): auto-commit
PEP 249: manual commit
We should define what the default is and add a function to set this on the
connection.
lidavidm commented on PR #21:
URL: https://github.com/apache/arrow-adbc/pull/21#issuecomment-1157998986
Updates: now the error struct also includes space for vendor-specific codes
and the SQLSTATE code from the SQL standards, for completeness.
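A minimal C sketch of what such an updated error struct could look like, going by this description (the field names are assumptions for illustration, not necessarily what PR #21 merged):

```c
#include <stdint.h>

// Sketch only: an error struct carrying a vendor-specific code and a
// SQLSTATE alongside the message, as the comment describes. Field names
// here are assumptions, not the authoritative ADBC definition.
struct AdbcError {
  char* message;                             // human-readable error message
  int32_t vendor_code;                       // database-specific error code
  char sqlstate[5];                          // SQLSTATE per the SQL standard
  void (*release)(struct AdbcError* error);  // frees the contained message
};
```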
krlmlr commented on PR #18:
URL: https://github.com/apache/arrow-adbc/pull/18#issuecomment-1158022230
Thanks. I wonder if we need that many entry points.
If we assume a hierarchy like catalog -> schema -> table -> column, could we
have one single entry point that returns a nested
liukun4515 commented on PR #56:
URL: https://github.com/apache/arrow-testing/pull/56#issuecomment-1158424874
@pitrou Hi, how did you generate the compression file?
I'm doing the compression work in the Rust version
https://github.com/apache/arrow-rs/pull/1855
But the test failed when reading
krlmlr commented on PR #18:
URL: https://github.com/apache/arrow-adbc/pull/18#issuecomment-1158246255
Fair enough, we can go with a static output schema for type stability.
lidavidm commented on PR #18:
URL: https://github.com/apache/arrow-adbc/pull/18#issuecomment-1159135921
Implemented for SQLite. It's a little tedious to implement but isn't
especially complicated, at least.
lidavidm opened a new pull request, #24:
URL: https://github.com/apache/arrow-adbc/pull/24
Clarify that autocommit is the default, add an option to disable it, and add
an explicit commit.
Though, this is all not currently implemented.
Fixes #23.
lidavidm commented on PR #18:
URL: https://github.com/apache/arrow-adbc/pull/18#issuecomment-1158027202
Hmm, that sounds reasonable. So the schema would be something like this?
```
catalog_name: utf8
catalog_schemas: list<...>
```
And if you filtered by (say)
krlmlr commented on PR #18:
URL: https://github.com/apache/arrow-adbc/pull/18#issuecomment-1158037665
I'm not sure how difficult unpacking will be, and perhaps if filtering by
catalog name we would only get
```
schema_name: utf8
schema_tables: list<...>
```
because
lidavidm commented on PR #18:
URL: https://github.com/apache/arrow-adbc/pull/18#issuecomment-1158040008
Hmm, the filters (at least currently, and in Flight SQL) can be patterns as
well as fixed strings.
If filtering by catalog name gives you the schemas directly, that starts to
lidavidm merged PR #16:
URL: https://github.com/apache/arrow-adbc/pull/16
lidavidm opened a new pull request, #16:
URL: https://github.com/apache/arrow-adbc/pull/16
On top of #15
lidavidm merged PR #15:
URL: https://github.com/apache/arrow-adbc/pull/15
lidavidm opened a new issue, #14:
URL: https://github.com/apache/arrow-adbc/issues/14
See https://github.com/lidavidm/arrow/issues/9
> Similar to AdbcConnectionGetTables, there should be a way to query the
columns available in a table with their names, types, NULL-ness etc.
>> I
lidavidm opened a new issue, #13:
URL: https://github.com/apache/arrow-adbc/issues/13
- Function to `SELECT *` from a table without providing a query (makes it
easier to provide non-query-engine based backends, e.g. a Parquet file backend)
lidavidm opened a new pull request, #17:
URL: https://github.com/apache/arrow-adbc/pull/17
These bindings are structured as a low-level module that mostly
mirrors the ADBC API, and a TBD high-level module that will
implement PEP 249 (except with Turbodbc-style extensions).
This
lidavidm commented on PR #18:
URL: https://github.com/apache/arrow-adbc/pull/18#issuecomment-1160481520
Is something like this what you were thinking? (This has less information
than what Flight SQL provides but feels minimal, unless we also want to reflect
the cascade/delete rules)
lidavidm commented on PR #18:
URL: https://github.com/apache/arrow-adbc/pull/18#issuecomment-1160634749
Ah, something more like this then?
```
/// CONSTRAINT_SCHEMA is a Struct with fields:
///
/// Field Name | Field Type | Comments
///
```
krlmlr commented on PR #18:
URL: https://github.com/apache/arrow-adbc/pull/18#issuecomment-1160597922
This is similar to what MySQL does -- others (like Postgres, SQL Server,
DuckDB (?)) would store the `fk_` columns in a detail table
`constraint_column_usage` (we would use a list of
krlmlr commented on PR #18:
URL: https://github.com/apache/arrow-adbc/pull/18#issuecomment-1160714111
Yes, that looks about right -- and is still creatable with *one single
query* on DuckDB and perhaps Postgres.
krlmlr commented on PR #18:
URL: https://github.com/apache/arrow-adbc/pull/18#issuecomment-1159752806
Great! I think I can come up with a SQL query that returns this result for
Postgres right from the database, but this is useful only if we support getting
nested data from the database.
lidavidm commented on PR #18:
URL: https://github.com/apache/arrow-adbc/pull/18#issuecomment-1159733606
Since we have all the other metadata here, I think it's reasonable to add
that information as well. So the 'tables' would now be something like:
```
table_name: utf8
```
krlmlr commented on PR #18:
URL: https://github.com/apache/arrow-adbc/pull/18#issuecomment-1159732527
I wonder if we also should offer a way to expose primary and foreign key,
and perhaps unique constraints. This allows learning the data model from a
remote database.
I have devised
lidavidm opened a new pull request, #10:
URL: https://github.com/apache/arrow-adbc/pull/10
Builds on #7.
The driver manager and drivers define the same symbols. Normally,
this means that if the driver manager loads a driver, and then the
driver attempts to resolve ADBC
lidavidm commented on PR #9:
URL: https://github.com/apache/arrow-adbc/pull/9#issuecomment-1149072070
Hmm, the pre-commit action isn't allowed. Time to do it manually…
lidavidm merged PR #8:
URL: https://github.com/apache/arrow-adbc/pull/8
lidavidm merged PR #9:
URL: https://github.com/apache/arrow-adbc/pull/9
lidavidm commented on issue #14:
URL: https://github.com/apache/arrow-adbc/issues/14#issuecomment-1153984889
Also see https://github.com/lidavidm/arrow/issues/6 where we need to clarify
the signature for GetTables
lidavidm merged PR #12:
URL: https://github.com/apache/arrow-adbc/pull/12
lidavidm merged PR #6:
URL: https://github.com/apache/arrow-adbc/pull/6
lidavidm merged PR #7:
URL: https://github.com/apache/arrow-adbc/pull/7
lidavidm merged PR #5:
URL: https://github.com/apache/arrow-adbc/pull/5
lidavidm merged PR #10:
URL: https://github.com/apache/arrow-adbc/pull/10
lidavidm opened a new pull request, #15:
URL: https://github.com/apache/arrow-adbc/pull/15
Also renames the CMake options so they don't clash with the Arrow CMake
config.
lidavidm opened a new pull request, #7:
URL: https://github.com/apache/arrow-adbc/pull/7
Sketch out an API for bulk data ingestion. Reuses the AdbcStatement
structure.
Based on #6.
lidavidm opened a new pull request, #41:
URL: https://github.com/apache/arrow-adbc/pull/41
Also refactors the bindings to not depend on PyArrow.
lidavidm closed issue #37: Reorganize and complete Python bindings
URL: https://github.com/apache/arrow-adbc/issues/37
lidavidm merged PR #41:
URL: https://github.com/apache/arrow-adbc/pull/41
lidavidm merged PR #42:
URL: https://github.com/apache/arrow-adbc/pull/42
lidavidm commented on PR #45:
URL: https://github.com/apache/arrow-adbc/pull/45#issuecomment-1194001552
CC @hannes @krlmlr if either of you have comments - I noticed this was
missing while working on the Ibis backend
GavinRay97 opened a new issue, #46:
URL: https://github.com/apache/arrow-adbc/issues/46
I noticed that the ADBC metadata information assumes a fixed hierarchy:
lidavidm commented on issue #46:
URL: https://github.com/apache/arrow-adbc/issues/46#issuecomment-1194305728
Hmm, real-world edge cases are always fun, thanks for poking around.
(Admittedly we should've looked more closely at these.)
We were debating whether to keep the hierarchy or
lidavidm commented on code in PR #47:
URL: https://github.com/apache/arrow-adbc/pull/47#discussion_r929911902
##
java/driver/flight-sql/src/main/java/org/apache/arrow/adbc/driver/flightsql/FlightSqlDriver.java:
##
@@ -0,0 +1,68 @@
+/*
+ * Licensed to the Apache Software
tokoko commented on code in PR #47:
URL: https://github.com/apache/arrow-adbc/pull/47#discussion_r929794347
##
java/driver/flight-sql/src/main/java/org/apache/arrow/adbc/driver/flightsql/FlightSqlDriver.java:
##
@@ -0,0 +1,68 @@
+/*
+ * Licensed to the Apache Software
lidavidm opened a new pull request, #40:
URL: https://github.com/apache/arrow-adbc/pull/40
- Implement prepared statements
- Define method needed for partitioned data (though no driver implements it
yet)
- Refactor test suite to test both Derby and Postgres (Postgres needs more
CI
lidavidm opened a new pull request, #39:
URL: https://github.com/apache/arrow-adbc/pull/39
- `AdbcDriver#connect` -> `AdbcDriver#open`
- Make append vs create explicit in `bulkIngest`
- Add `AdbcStatement#executeQuery` as a convenience method
paleolimbot commented on PR #2:
URL: https://github.com/apache/arrow-nanoarrow/pull/2#issuecomment-1183393758
Thanks!
paleolimbot merged PR #2:
URL: https://github.com/apache/arrow-nanoarrow/pull/2
lidavidm merged PR #38:
URL: https://github.com/apache/arrow-adbc/pull/38
paleolimbot opened a new issue, #5:
URL: https://github.com/apache/arrow-nanoarrow/issues/5
Now that we have buffer holders (`struct ArrowBuffer`) we can implement an
owning `struct ArrowArray`. I envision the API something like the
`ArrowSchema*` helpers:
- `ArrowArrayInit(struct
lidavidm commented on issue #7:
URL: https://github.com/apache/arrow-nanoarrow/issues/7#issuecomment-1195873419
FWIW: Arrow does this sort of thing here
https://github.com/apache/arrow/blob/master/cpp/valgrind.supp
Can't claim to understand how it works, but it gets set up here:
paleolimbot opened a new issue, #7:
URL: https://github.com/apache/arrow-nanoarrow/issues/7
Right now the valgrind test isn't actually testing for memory leaks...we
just need to:
- Add `include(CTest)` in CMakeLists.txt if building tests
- Change the current command to
paleolimbot opened a new issue, #6:
URL: https://github.com/apache/arrow-nanoarrow/issues/6
Now that we have growable buffers we can implement a more natural way to
build key/value metadata. It might just take one function:
```
ArrowMetadataBuild(struct ArrowBuffer* buffer, const
```
wesm commented on issue #8:
URL: https://github.com/apache/arrow-nanoarrow/issues/8#issuecomment-1195977036
I guess for the performance sensitive use case, I would focus on the
reserve-then-unsafeappend usage pattern which we have in plenty of places in
the Arrow codebase.
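A toy C sketch of that reserve-then-unsafe-append pattern: one checked call grows the buffer up front, then a cheap unchecked append runs in the hot loop. The buffer type and function names here are illustrative only, not the nanoarrow API.

```c
#include <stdint.h>
#include <stdlib.h>

// Toy growable buffer, for illustration only.
struct Int64Buffer {
  int64_t* data;
  int64_t length;
  int64_t capacity;
};

// Checked: grow the buffer so that `n` more elements fit. Returns 0 on
// success, -1 on allocation failure.
int BufferReserve(struct Int64Buffer* buf, int64_t n) {
  if (buf->length + n > buf->capacity) {
    int64_t new_capacity = buf->capacity ? buf->capacity * 2 : 8;
    while (new_capacity < buf->length + n) new_capacity *= 2;
    int64_t* data = (int64_t*)realloc(buf->data, new_capacity * sizeof(int64_t));
    if (!data) return -1;
    buf->data = data;
    buf->capacity = new_capacity;
  }
  return 0;
}

// Unchecked: the caller must have reserved space first. Simple enough for a
// compiler to inline into the append loop.
void BufferAppendUnsafe(struct Int64Buffer* buf, int64_t value) {
  buf->data[buf->length++] = value;
}
```

The point of the split is that the bounds check and reallocation happen once per batch rather than once per element.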
paleolimbot opened a new issue, #4:
URL: https://github.com/apache/arrow-nanoarrow/issues/4
Dealing with bitmaps is hard and we should provide some tools to do the
finicky hard-to-get-right math:
- `ArrowBitmapIsNull(const void* bitmap, int64_t i)`
- `ArrowBitmapSetNull(void*
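For illustration, the finicky bit math such helpers would wrap could look like this in C. The names loosely follow the issue's sketch but are assumptions, and this assumes Arrow's LSB-ordered validity bitmaps where a set bit means "valid":

```c
#include <stdint.h>

// Returns nonzero if element i is null (its validity bit is 0).
// Arrow validity bitmaps are LSB-ordered: element i lives in bit (i % 8)
// of byte (i / 8), and 1 means the value is valid.
int BitmapIsNull(const uint8_t* bitmap, int64_t i) {
  return ((bitmap[i / 8] >> (i % 8)) & 1) == 0;
}

// Marks element i as null by clearing its validity bit.
void BitmapSetNull(uint8_t* bitmap, int64_t i) {
  bitmap[i / 8] &= (uint8_t)~(1 << (i % 8));
}
```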
paleolimbot merged PR #3:
URL: https://github.com/apache/arrow-nanoarrow/pull/3
paleolimbot commented on issue #8:
URL: https://github.com/apache/arrow-nanoarrow/issues/8#issuecomment-1195967377
That's a good point...we need a system for that for the buffer builders too,
since those functions should be inlinable as well (but it's inconvenient for
readability if they're all
paleolimbot opened a new issue, #8:
URL: https://github.com/apache/arrow-nanoarrow/issues/8
This will require a bit of code but is also the thing that most people need
help with when generating C arrays (looping over some row-based structure and
generating a column-based structure). One
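A minimal C sketch of that row-to-column loop, the pattern most people need help with; the struct and function names are invented for the example:

```c
#include <stdint.h>

// A row-based record, as an application might hold it.
struct Row {
  int32_t id;
  double value;
};

// Scatter each field of the row-based input into its own column buffer,
// the core of building a column-based (Arrow-style) structure.
void RowsToColumns(const struct Row* rows, int64_t n_rows,
                   int32_t* id_column, double* value_column) {
  for (int64_t i = 0; i < n_rows; i++) {
    id_column[i] = rows[i].id;
    value_column[i] = rows[i].value;
  }
}
```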
paleolimbot commented on issue #8:
URL: https://github.com/apache/arrow-nanoarrow/issues/8#issuecomment-1195980100
Maybe the first step is not a builder class then...as per #5, a `struct
ArrowArray` is already just a "bag of `struct ArrowBuffer`s" that can be
reserved/appended. Then we
lidavidm commented on issue #8:
URL: https://github.com/apache/arrow-nanoarrow/issues/8#issuecomment-1195959994
Looks reasonable to me. We may eventually want more conveniences, e.g. being
able to append a range of values from a `int64_t*`, but this would work to
start with. Also a
lidavidm commented on issue #8:
URL: https://github.com/apache/arrow-nanoarrow/issues/8#issuecomment-1195974480
They could perhaps be Arrow type-oblivious macros that expand into a write
into a particular buffer (and possibly a bounds-check/reserve call)? Then the
builder is just a bag of
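A sketch of what such a type-oblivious macro could look like, assuming the builder exposes raw data and length fields (the macro name and shape are illustrative, and the bounds check/reserve is left to a separate step as the comment suggests):

```c
#include <stdint.h>

// Type-oblivious append: expands into a write into a particular buffer.
// The caller chooses the element type and is responsible for having
// reserved enough space beforehand.
#define BUFFER_APPEND_UNSAFE(data, length, type, value) \
  (((type*)(data))[(length)++] = (value))
```

Because the macro takes the element type as a parameter, the same expansion works for any fixed-width buffer without a per-type function.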
wesm commented on issue #8:
URL: https://github.com/apache/arrow-nanoarrow/issues/8#issuecomment-1195963131
I'm thinking that for performance-sensitive applications (especially those
that might be migrating from C++, where anything that's in the Arrow builder
headers is inlined), many of the
paleolimbot commented on issue #8:
URL: https://github.com/apache/arrow-nanoarrow/issues/8#issuecomment-1195962996
I'm not sure about the nested types...perhaps for the first PR I'll get the
non-nested types in place and after that maybe it will be clearer whether the
`struct
paleolimbot commented on issue #8:
URL: https://github.com/apache/arrow-nanoarrow/issues/8#issuecomment-1195970110
I think with the callable approach those functions will all exist (i.e.,
`ArrowInt32BuilderAppendInteger()`)...maybe they live in something like
lidavidm opened a new pull request, #49:
URL: https://github.com/apache/arrow-adbc/pull/49
Clarify how we approach dependencies.
wjones127 commented on PR #80:
URL: https://github.com/apache/arrow-testing/pull/80#issuecomment-1191717690
cc @pitrou
lidavidm opened a new pull request, #44:
URL: https://github.com/apache/arrow-adbc/pull/44
- The SQLite catalog is called "main"
- Fix a couple bugs and note areas of improvement
lidavidm opened a new issue, #43:
URL: https://github.com/apache/arrow-adbc/issues/43
- Need way to query driver and server version
- Need handling of multiple databases within a connection
pitrou merged PR #80:
URL: https://github.com/apache/arrow-testing/pull/80
pitrou commented on PR #80:
URL: https://github.com/apache/arrow-testing/pull/80#issuecomment-1191800075
Thanks for the update @wjones127 :-)
pitrou commented on PR #80:
URL: https://github.com/apache/arrow-testing/pull/80#issuecomment-1191734043
Some nits:
* avoid creating a subdir for a single file?
* add a README.md in `data/parquet` to start describing the files being
added? A bit like in
wjones127 opened a new pull request, #80:
URL: https://github.com/apache/arrow-testing/pull/80
We used to always set `is_compressed=false` in page headers regardless of
whether there was actual compression. Check that you can read this file if
you want to support files written by Arrow C++ 2.0.
lidavidm merged PR #39:
URL: https://github.com/apache/arrow-adbc/pull/39