Hi Martin,
Thanks a lot for preparing the new repo and making it super easy for me to
just copy my code over! I will create a new PR there.
> I think the PR is fine from a code perspective as a starting point. I've
> prepared the go repository with all the things necessary so that it
These are all valid points and it makes total sense to continue to consider
them. However, reading the mail I'm wondering if we're discussing the same
problems.
Deprecation of APIs aside, the main benefit of Spark Connect is that the
contract is explicitly not a Jar file full of transitive dependencies.
Hi Alex,
- Your first assertion is correct. Regardless of Spark, and going back to
Jurassic Park-era data processing, partition pruning and column pruning are
all-or-nothing. This means that for a given query, each
partition and column is either used or not used. There is no
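The all-or-nothing pruning behaviour described above can be sketched with a toy model (plain Python, not Spark code; the table contents, partition keys, and function names below are made up for illustration):

```python
# Toy model of partition pruning: a "table" stored as one list of rows per
# partition value, and a scan that only opens partitions whose key satisfies
# the predicate. Each partition is either read in full or skipped entirely.

partitions = {
    "2023-01-01": [("a", 1), ("b", 2)],
    "2023-01-02": [("c", 3)],
    "2023-01-03": [("d", 4)],
}

def scan(pred):
    """Return (touched partitions, rows), opening only partitions whose
    key passes `pred` -- the untouched partitions are never scanned."""
    touched = [k for k in partitions if pred(k)]
    rows = [r for k in touched for r in partitions[k]]
    return touched, rows

# A predicate on the partition column prunes the scan to one partition.
touched, rows = scan(lambda d: d == "2023-01-02")
```

There is no notion here of reading half a partition: for a given query, every partition (and, analogously, every column) is fully in or fully out of the scan.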
we would cut it from the hadoop dependencies, but still allow IPC messages
using it to be marshalled if the protoc-compiled classes were using the
protobuf-2.5 JAR *and* that JAR was on the classpath.
it'd become the homework of those apps which need protobuf-2.5, here
hbase, to set things up.
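For illustration, that homework might amount to the downstream app declaring protobuf-2.5 explicitly in its own build instead of inheriting it from hadoop (a hedged sketch of a Maven fragment; whether hbase would actually do it this way is up to that project):

```xml
<!-- The downstream app pins protobuf 2.5 on its own classpath,
     since hadoop would no longer pull it in transitively. -->
<dependency>
  <groupId>com.google.protobuf</groupId>
  <artifactId>protobuf-java</artifactId>
  <version>2.5.0</version>
</dependency>
```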
Hi Martin,
On 5/30/23 11:50, Martin Grund wrote:
> I think it makes sense to split this discussion into two pieces. On
> the contribution side, my personal perspective is that these new
> clients are explicitly marked as experimental and unsupported until
> we deem them mature enough to be
Hi Bo,
I think the PR is fine from a code perspective as a starting point. I've
prepared the go repository with all the things necessary so that it reduces
friction for you. The protos are automatically generated, pre-commit checks
are set up, etc. All you need to do is drop your code :)
Once we have the