Re: Avro Schema + GenericRecord to HadoopRDD

2014-07-30 Thread Laird, Benjamin
That makes sense, thanks Chris.

I'm currently reworking my code to use newAPIHadoopFile with an
AvroSequenceFileInputFormat (see below), but I think I'll run into the
same issue. I'll give your suggestion a try.

val avroRdd = sc.newAPIHadoopFile(fp,
  classOf[AvroSequenceFileInputFormat[AvroKey[GenericRecord], NullWritable]],
  classOf[AvroKey[GenericRecord]],
  classOf[NullWritable])
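
Even so, I expect the datum will still come back as a GenericRecord, so
I'll be casting fields by hand. A rough, untested sketch (amt is one of
our double fields):

// Rough sketch (untested): under GenericRecord each field is still a
// plain Object, so it needs an explicit cast.
val amounts = avroRdd.map { case (k, _) =>
  k.datum.get("amt").asInstanceOf[Double]
}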

Avro Schema + GenericRecord to HadoopRDD

2014-07-29 Thread Laird, Benjamin
Hi all, 

I can read Avro files into Spark with HadoopRDD and submit the schema in
the jobConf, but with the guidance I've seen so far, I'm left with an Avro
GenericRecord whose fields are untyped Java Objects. How do I actually use
the schema to have the types inferred?

Example:

scala> AvroJob.setInputSchema(jobConf,schema);
scala> val rdd = sc.hadoopRDD(jobConf,
  classOf[org.apache.avro.mapred.AvroInputFormat[GenericRecord]],
  classOf[org.apache.avro.mapred.AvroWrapper[GenericRecord]],
  classOf[org.apache.hadoop.io.NullWritable], 10)
14/07/29 09:27:49 INFO storage.MemoryStore: ensureFreeSpace(134254) called with curMem=0, maxMem=308713881
14/07/29 09:27:49 INFO storage.MemoryStore: Block broadcast_0 stored as values to memory (estimated size 131.1 KB, free 294.3 MB)
rdd: org.apache.spark.rdd.RDD[(org.apache.avro.mapred.AvroWrapper[org.apache.avro.generic.GenericRecord], org.apache.hadoop.io.NullWritable)] = HadoopRDD[0] at hadoopRDD at <console>:50

scala> rdd.first._1.datum.get("amt")
14/07/29 09:31:34 INFO spark.SparkContext: Starting job: first at <console>:53
14/07/29 09:31:34 INFO scheduler.DAGScheduler: Got job 3 (first at <console>:53) with 1 output partitions (allowLocal=true)
14/07/29 09:31:34 INFO scheduler.DAGScheduler: Final stage: Stage 3 (first at <console>:53)
14/07/29 09:31:34 INFO scheduler.DAGScheduler: Parents of final stage: List()
14/07/29 09:31:34 INFO scheduler.DAGScheduler: Missing parents: List()
14/07/29 09:31:34 INFO scheduler.DAGScheduler: Computing the requested partition locally
14/07/29 09:31:34 INFO rdd.HadoopRDD: Input split: hdfs://nameservice1:8020/user/nylab/prod/persistent_tables/creditsetl_ref_etxns/201201/part-0.avro:0+34279385
14/07/29 09:31:34 INFO spark.SparkContext: Job finished: first at <console>:53, took 0.061220615 s
res11: Object = 24.0
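
For now, the only way I see to get a typed value back is an explicit cast
per field, which ignores the schema I passed in. A rough sketch (assuming
amt is a double in the schema):

scala> rdd.first._1.datum.get("amt").asInstanceOf[Double]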


Thanks!
Ben






RE: Avro Schema + GenericRecord to HadoopRDD

2014-07-29 Thread Severs, Chris
Hi Benjamin,

I think the best bet would be to use the Avro code generation stuff to generate 
a SpecificRecord for your schema and then change the reader to use your 
specific type rather than GenericRecord. 

Trying to read the generic record and then do type inference and spit out
a tuple is way more headache than it's worth if you already have the schema
in hand (I've done it for Cascading/Scalding).
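
Roughly what I have in mind, as an untested sketch (Transaction is a
made-up name; substitute whatever class your schema compiles to):

// Untested sketch: assumes avro-tools "compile schema" generated a
// SpecificRecord subclass named Transaction (hypothetical) with a double
// field "amt", and that the generated class is on the classpath.
import org.apache.avro.mapred.{AvroInputFormat, AvroJob, AvroWrapper}
import org.apache.hadoop.io.NullWritable
import org.apache.hadoop.mapred.JobConf

val jobConf = new JobConf(sc.hadoopConfiguration)
AvroJob.setInputSchema(jobConf, Transaction.getClassSchema())

val rdd = sc.hadoopRDD(jobConf,
  classOf[AvroInputFormat[Transaction]],
  classOf[AvroWrapper[Transaction]],
  classOf[NullWritable], 10)

// datum is now a Transaction, so fields come back typed (no Object casts).
val amounts = rdd.map(_._1.datum.getAmt)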

-
Chris


