Good to hear there will be partitioning support.  I’ve had some success loading 
partitioned data specified with Unix glob syntax, e.g.:

sc.textFile("s3://bucket/directory/dt=2014-11-{2[4-9],30}T00-00-00")

would load dates 2014-11-24 through 2014-11-30.  Not ideal, but it works for 
loading data from a date range.
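If the range doesn't line up neatly with a character class, you can also build 
the brace-expansion glob programmatically.  Here's a minimal sketch (assuming 
spark-shell, where `sc` already exists; the bucket and path are placeholders, 
not from my actual job):

import java.time.LocalDate

// Enumerate every date in [start, end] and join them into a
// Hadoop-style {a,b,c} alternation glob. LocalDate.toString
// already yields the yyyy-MM-dd form used in the partition names.
val start = LocalDate.of(2014, 11, 24)
val end   = LocalDate.of(2014, 11, 30)

val glob = Iterator.iterate(start)(_.plusDays(1))
  .takeWhile(d => !d.isAfter(end))
  .map(d => s"dt=${d}T00-00-00")
  .mkString("{", ",", "}")

// e.g. {dt=2014-11-24T00-00-00,...,dt=2014-11-30T00-00-00}
val rdd = sc.textFile(s"s3://bucket/directory/$glob")

Listing each partition directory explicitly is more verbose than a character 
class, but it avoids getting the range arithmetic wrong when a range crosses a 
month boundary.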

Best,
Chris

> On Jan 26, 2015, at 10:55 AM, Cheng Lian <lian.cs....@gmail.com> wrote:
> 
> Currently no if you don't want to use Spark SQL's HiveContext. But we're 
> working on adding partitioning support to the external data sources API, with 
> which you can create, for example, partitioned Parquet tables without using 
> Hive.
> 
> Cheng
> 
> On 1/26/15 8:47 AM, Danny Yates wrote:
>> Thanks Michael.
>> 
>> I'm not actually using Hive at the moment - in fact, I'm trying to avoid it 
>> if I can. I'm just wondering whether Spark has anything similar I can 
>> leverage?
>> 
>> Thanks
> 
> 
> ---------------------------------------------------------------------
> To unsubscribe, e-mail: user-unsubscr...@spark.apache.org
> For additional commands, e-mail: user-h...@spark.apache.org
> 

