> On Wed, May 11, 2016 at 10:05 PM, Giri P <gpatc...@gmail.com> wrote:
>
>> I'm not using docker
>>
>> On Wed, May 11, 2016 at 8:47 AM, Raghavendra Pandey <
>> raghavendra.pan...@gmail.com> wrote:
>>
>>> By any chance, are you using docker to execute?
>>>
>>> On 11 May 2016 21:16, "Raghavendra Pandey" wrote:
>>>
>>>> On 11 May 2016 02:13, "gpatcham" wrote:
method1 looks like this:

reRDD.map(row => method1(sc, row)).saveAsTextFile(outputDir)

reRDD has userIds

def method1(sc: SparkContext, userId: String): String = {
  sc.cassandraTable("Keyspace", "Table2").where("userid = ?", userId)
  // ...do something
  "Test"
}
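The call above fails because SparkContext lives only on the driver and cannot be serialized into the map closure. A minimal sketch of a common alternative with the spark-cassandra-connector is joinWithCassandraTable, which does the per-key lookup on the executors (keyspace, table, and column names below are illustrative, taken from the snippet above):

```scala
import com.datastax.spark.connector._

// SparkContext cannot be shipped to executors, so calling
// sc.cassandraTable inside map() fails. joinWithCassandraTable instead
// performs the lookup on the executors themselves. Each userId is
// wrapped in a Tuple1 so the connector can match it against the
// partition key column "userid".
val joined = reRDD
  .map(userId => Tuple1(userId))
  .joinWithCassandraTable("Keyspace", "Table2")

joined.saveAsTextFile(outputDir)
```

This is a sketch under the assumption that userid is the partition key of Table2; it is not the method from the thread, which never reached a working version.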
On Wed, Jan 20, 2016 at 11:00 AM,
I'm using the spark cassandra connector to do this, and the way we access the
cassandra table is:

sc.cassandraTable("keySpace", "tableName")
Thanks
Giri
On Mon, Jan 18, 2016 at 12:37 PM, Ted Yu wrote:
> Can you pass the properties which are needed for accessing Cassandra
> without
Can we use @transient ?
that would work.

> Doesn't seem to be good practice.
>
> On Mon, Jan 18, 2016 at 1:27 PM, Giri P <gpatc...@gmail.com> wrote:
>
>> Can we use @transient ?
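For reference, a sketch of what @transient does here (the class below is illustrative, not from the thread): it keeps a non-serializable field out of the serialized closure, but the field comes back as null on the executors, which is why it does not solve the per-row query problem:

```scala
import com.datastax.spark.connector._
import org.apache.spark.SparkContext

// @transient excludes sc from serialization, so the closure can be
// shipped, but after deserialization on an executor the field is null;
// touching it there throws a NullPointerException.
class Helper(@transient val sc: SparkContext) extends Serializable {
  def driverOnlyCount(): Long =
    sc.cassandraTable("Keyspace", "Table2").count() // safe on the driver only
}
```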
Any idea what's causing this error?
15/08/28 21:03:03 WARN scheduler.TaskSetManager: Lost task 34.0 in stage
9.0 (TID 20, dtord01hdw0228p.dc.dotomi.net): java.lang.RuntimeException:
cannot find field message_campaign_id from
[0:error_error_error_error_error_error_error, 1:cannot_determine_schema,
can we run hive queries using spark-avro ?

In our case it's not just reading the avro file; we have a view in hive which
is based on multiple tables.

On Thu, Aug 27, 2015 at 9:41 AM, Giri P gpatc...@gmail.com wrote:

We are using hive 1.1. I was able to fix the below error when I used the
right version of spark:
15/08/26 17:51:12 WARN avro.AvroSerdeUtils: Encountered AvroSerdeException
determining schema. Returning signal schema to indicate problem
org.apache.hadoop.hive.serde2.avro.AvroSerdeException: Neither
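On the question of running hive queries (including views over multiple tables) from Spark: a hedged sketch, assuming a Spark 1.x build with Hive support, is to go through HiveContext rather than reading the avro files directly with spark-avro (the view name below is hypothetical):

```scala
import org.apache.spark.sql.hive.HiveContext

// HiveContext resolves table and view definitions from the Hive
// metastore, so a view defined over multiple avro-backed tables can be
// queried as-is, with the avro SerDe handled by Hive.
val hiveContext = new HiveContext(sc)
val df = hiveContext.sql("SELECT * FROM my_multi_table_view") // hypothetical view
df.show()
```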
but on spark 0.9 we don't have these options
--num-executors: controls how many executors will be allocated
--executor-memory: RAM for each executor
--executor-cores: CPU cores for each executor
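For comparison, in Spark 1.x (spark 0.9 predates spark-submit) the same three flags map to configuration properties that can also be set programmatically on a SparkConf; a sketch, where spark.executor.instances applies on YARN:

```scala
import org.apache.spark.{SparkConf, SparkContext}

// Programmatic equivalents of the spark-submit flags above:
val conf = new SparkConf()
  .setAppName("example")
  .set("spark.executor.instances", "10") // --num-executors (YARN)
  .set("spark.executor.memory", "4g")    // --executor-memory
  .set("spark.executor.cores", "2")      // --executor-cores
val sc = new SparkContext(conf)
```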
On Fri, Dec 12, 2014 at 12:27 PM, Sameer Farooqui same...@databricks.com
wrote:
Hi,
FYI - There