OK, I was just checking: since the changes you've mentioned are likely to be
found on the 1.1 branch, it would make sense for my starting point to fork
off 1.1. Or perhaps master.

The question of a PR is fairly far off at this point, for legal reasons if
nothing else. If and when the work is approved for contribution, the PR
process will obviously be followed.


On Mon, Aug 25, 2014 at 11:57 AM, Michael Armbrust <mich...@databricks.com>
wrote:

> In general all PRs should be made against master.  When necessary, we can
> back port them to the 1.1 branch as well.  However, since we are in
> code-freeze for that branch, we'll only do that for major bug fixes at this
> point.
>
>
> On Thu, Aug 21, 2014 at 10:58 AM, Dmitriy Lyubimov <dlie...@gmail.com>
> wrote:
>
>> OK, I'll try. I happen to do that a lot for other tools.
>>
>> So I am guessing you are saying if i wanted to do it now, i'd start
>> against https://github.com/apache/spark/tree/branch-1.1 and PR against
>> it?
>>
>>
>> On Thu, Aug 21, 2014 at 12:28 AM, Michael Armbrust <
>> mich...@databricks.com> wrote:
>>
>>> I do not know of any existing way to do this.  It should be possible
>>> using the new public API for applying schema (available in 1.1) to an
>>> RDD.  Basically you'll need to convert the protobuf records into rows,
>>> and also create a StructType that represents the schema.  With these two
>>> things you can call the applySchema method on SQLContext.
>>>
>>> Would be great if you could contribute this back.
>>>
>>>
>>> On Wed, Aug 20, 2014 at 5:57 PM, Dmitriy Lyubimov <dlie...@gmail.com>
>>> wrote:
>>>
>>>> Hello,
>>>>
>>>> is there any known work to adapt protobuf schemas to Spark SQL data
>>>> sourcing? If not, would it be of interest to contribute one?
>>>>
>>>> thanks.
>>>> -d
>>>>
>>>
>>>
>>
>
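For reference, the approach Michael describes could be sketched roughly as
follows. This is a sketch only, assuming Spark 1.1's SQLContext.applySchema
API and protobuf's Java Descriptors API; the message type MyProto, the RDD
protoRdd, and the partial type mapping are all hypothetical and would need
to be filled out (nested messages, repeated fields, etc.):

```scala
// Sketch, not a tested implementation. Assumes Spark 1.1 and protobuf-java
// on the classpath; MyProto and protoRdd are hypothetical placeholders.
import scala.collection.JavaConverters._
import com.google.protobuf.Descriptors.FieldDescriptor
import org.apache.spark.sql.{SQLContext, Row, StructType, StructField,
  StringType, IntegerType, LongType, DoubleType, BooleanType}

// Map a protobuf field descriptor to a Spark SQL DataType (partial).
def toSqlType(fd: FieldDescriptor) = fd.getJavaType match {
  case FieldDescriptor.JavaType.INT     => IntegerType
  case FieldDescriptor.JavaType.LONG    => LongType
  case FieldDescriptor.JavaType.DOUBLE  => DoubleType
  case FieldDescriptor.JavaType.BOOLEAN => BooleanType
  case FieldDescriptor.JavaType.STRING  => StringType
  // nested MESSAGE, ENUM, BYTES, repeated fields omitted in this sketch
}

// Build a StructType from the generated message's descriptor.
val schema = StructType(
  MyProto.getDescriptor.getFields.asScala.map { fd =>
    StructField(fd.getName, toSqlType(fd), nullable = true)
  })

// Convert each protobuf record into a Row, in descriptor field order.
val rowRdd = protoRdd.map { msg =>
  Row(msg.getDescriptorForType.getFields.asScala.map(msg.getField): _*)
}

// Apply the schema and register the result for SQL queries.
val sqlContext = new SQLContext(sc)
val schemaRdd = sqlContext.applySchema(rowRdd, schema)
schemaRdd.registerTempTable("protos")
```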
