[ 
https://issues.apache.org/jira/browse/PIG-3015?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13509964#comment-13509964
 ] 

Cheolsoo Park commented on PIG-3015:
------------------------------------

Hi Joe,

Thanks for your prompt response!

To answer your questions,
{quote}
I have always assumed that AvroStorage was designed to be used with Hadoop 
sequence files that contained a series of records, so I implemented AvroStorage 
to only work with a file in this format. Are there cases where the highest 
level schema for a file will be another type? If so... what does that mean for 
pig? Is there one record per file?
{quote}
This is a good question, and I see your argument. But this will be very 
different from what the current AvroStorage does. Currently, a non-record type 
is automatically wrapped in a tuple. For example, "1" is loaded as (1) in Pig. 
If a file includes multiple values, they are loaded as multiple tuples as 
follows:
{code:title=avro}
cheolsoo@localhost:~/workspace/avro $java -jar avro-tools-1.5.4.jar getschema multiple_int.avro
"int"
cheolsoo@localhost:~/workspace/avro $java -jar avro-tools-1.5.4.jar tojson multiple_int.avro
1
2
3
{code}
{code:title=pig}
in = LOAD 'multiple_int.avro' USING org.apache.pig.piggybank.storage.avro.AvroStorage();
DUMP in;
(1)
(2)
(3)
{code}
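The wrapping rule described above can be sketched as follows. This is a hypothetical illustration of the mapping, not the actual AvroStorage code; the helper name {{to_pig_tuple}} is invented for this sketch:
{code:title=sketch.py}
# Hypothetical sketch of AvroStorage's top-level mapping: a datum whose
# schema is a record maps to a Pig tuple field-by-field, while any
# non-record datum is wrapped in a single-field tuple.
def to_pig_tuple(schema_type, datum):
    if schema_type == "record":
        # Record fields become the tuple's fields (field order assumed).
        return tuple(datum.values())
    # Non-record top-level values ("int", "string", etc.) are wrapped.
    return (datum,)

# Each top-level datum in the file becomes one Pig tuple:
print([to_pig_tuple("int", v) for v in [1, 2, 3]])  # [(1,), (2,), (3,)]
{code}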
Agreed that we could tell users that the top-level schema must be a record type, but I am afraid that people might not accept that. In my experience, people tend to expect that every valid Avro file can be loaded by AvroStorage. Granted, there exist some restrictions (e.g. recursive records and unions), but even these restrictions have been loosened recently. Unless there is a convincing reason not to, I think that we should keep it that way.

In many cases, people already have a data pipeline in place (e.g. Flume produces Avro files => Pig consumes them), and it is not guaranteed that the top-level schema is always a record type.
{quote}
Here's a specific example: suppose that we have this schema:
\{"name" : "IntArray", "type" : "array", "items" : "int"\}
Suppose that we have 3 files to load, each with this schema, each containing an 
array of 10 integers. Should we load this into pig as a single bag with 30 
integers? A bag containing three bags (each, in turn, containing 10 integers)? 
Or reject this file entirely?
{quote}
Currently, they are loaded as 3 tuples, and each tuple contains a bag of 10 
integers.
{code}
({(1),(2), ... ,(10)})
({(1),(2), ... ,(10)})
({(1),(2), ... ,(10)})
{code}
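That mapping (one tuple per top-level array, each holding a bag of single-field tuples) can be sketched like this. Again a hypothetical illustration, not the AvroStorage implementation; {{array_to_pig}} is an invented name:
{code:title=array_sketch.py}
# Hypothetical sketch: an Avro array datum becomes one Pig tuple holding
# a bag, where each array element is itself a single-field inner tuple.
def array_to_pig(arr):
    bag = [(x,) for x in arr]   # each element becomes an inner tuple
    return (bag,)               # the bag is wrapped in one outer tuple

# Three files, each containing a 10-int array, load as three such tuples:
files = [list(range(1, 11))] * 3
rows = [array_to_pig(a) for a in files]
print(len(rows))        # 3
print(rows[0][0][:2])   # [(1,), (2,)]
{code}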
Thoughts?
                
> Rewrite of AvroStorage
> ----------------------
>
>                 Key: PIG-3015
>                 URL: https://issues.apache.org/jira/browse/PIG-3015
>             Project: Pig
>          Issue Type: Improvement
>          Components: piggybank
>            Reporter: Joseph Adler
>            Assignee: Joseph Adler
>         Attachments: PIG-3015.patch
>
>
> The current AvroStorage implementation has a lot of issues: it requires old 
> versions of Avro, it copies data much more than needed, and it's verbose and 
> complicated. (One pet peeve of mine is that old versions of Avro don't 
> support Snappy compression.)
> I rewrote AvroStorage from scratch to fix these issues. In early tests, the 
> new implementation is significantly faster, and the code is a lot simpler. 
> Rewriting AvroStorage also enabled me to implement support for Trevni.
> I'm opening this ticket to facilitate discussion while I figure out the best 
> way to contribute the changes back to Apache.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira
