In general, master should be a superset of what is in any of the release
branches. In the particular case of Spark SQL, master and branch-1.1 should
be identical (though that will likely change once Patrick cuts the first
RC).
On Mon, Aug 25, 2014 at 12:50 PM, Dmitriy Lyubimov wrote:
Ok, I was just asking whether the changes you've mentioned are likely to be
found on the 1.1 branch, so that it would make sense for my starting point
to fork off 1.1. Or perhaps master.
The question of a PR is fairly far off at this point, for legal reasons if
nothing else. If and by the time the work is approved…
In general, all PRs should be made against master. When necessary, we can
backport them to the 1.1 branch as well. However, since we are in
code freeze for that branch, we'll only do that for major bug fixes at this
point.
On Thu, Aug 21, 2014 at 10:58 AM, Dmitriy Lyubimov wrote:
Ok, I'll try; I happen to do that a lot with other tools.
So I am guessing you are saying that if I wanted to do it now, I'd start
against https://github.com/apache/spark/tree/branch-1.1 and PR against it?
On Thu, Aug 21, 2014 at 12:28 AM, Michael Armbrust wrote:
I do not know of any existing way to do this. It should be possible using
the new public API for applying a schema (which will be available in 1.1) to
an RDD. Basically, you'll need to convert the protobuf records into rows,
and also create a StructType that represents the schema. With these two
things you can apply the schema to the RDD.
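The two steps described above might look roughly like the following sketch
against the 1.1 applySchema API. The Person message, its generated accessors
(getName, getAge), and the protoToRow helper are hypothetical stand-ins for
whatever your .proto defines; only SQLContext.applySchema, StructType/Row,
and registerTempTable come from the actual 1.1 public API.

```scala
import org.apache.spark.rdd.RDD
import org.apache.spark.sql.{SQLContext, Row, StructType, StructField,
  StringType, IntegerType}

// Hypothetical generated protobuf message type; substitute your own.
// protoc-generated Java classes expose getters like getName/getAge.
def protoToRow(p: Person): Row = Row(p.getName, p.getAge)

// 1. Describe the proto schema as a StructType, one StructField per field.
val schema = StructType(Seq(
  StructField("name", StringType, nullable = true),
  StructField("age",  IntegerType, nullable = true)))

// 2. Convert the RDD of proto messages into an RDD of Rows.
val protos: RDD[Person] = ???  // e.g. deserialized from a SequenceFile
val rows: RDD[Row] = protos.map(protoToRow)

// 3. Apply the schema and register the result so it is queryable via SQL.
val sqlContext = new SQLContext(sc)
val schemaRDD = sqlContext.applySchema(rows, schema)
schemaRDD.registerTempTable("people")
sqlContext.sql("SELECT name FROM people WHERE age > 21")
```

This requires a running Spark context, so it is illustrative only; a real
data source would also need to derive the StructType programmatically from
the proto Descriptor rather than writing it out by hand.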
Hello,
is there any known work to adapt protobuf schemas to Spark SQL data
sourcing? If not, would it be of interest to contribute one?
thanks.
-d