As Doug notes, the documentation is somewhat lacking. As you start working with Avro, please feel free to add thoughts to the wiki, and hopefully a structure will start to cohere.
I've collected some random thoughts about the non-spec stuff at
http://wiki.apache.org/hadoop/Avro/Glossary.

On Tue, Dec 1, 2009 at 1:10 PM, Marko Milicevic <[email protected]> wrote:
> Thanks Philip, i did not clue in that "specific" meant generating java
> classes from schemas specified by the "Specification" doc.
> http://hadoop.apache.org/avro/docs/current/spec.html
>
> Thanks to Doug and yourself, i'm straight now.
>
> Marko.
>
> ------------------------------
> *From:* Philip Zeyliger [mailto:[email protected]]
> *Sent:* Tuesday, December 01, 2009 3:07 PM
> *To:* [email protected]
> *Subject:* Re: reflected char array returning null package.
>
> On Tue, Dec 1, 2009 at 11:42 AM, Marko Milicevic <[email protected]> wrote:
>
>> Thanks Doug.
>>
>> Is there any documentation on the Specific API and the Specific schema
>> format?
>
> The schema format is the same, always. What's going on is that there are
> three different APIs for working with data. The "specific" API uses
> generated code. (You generate that code with "avroj compile" or by using
> an ant task; "avroj compile" is only in trunk, though.) The "generic" API
> exposes a map-like interface: you use string keys to access fields. The
> "reflect" API maps onto existing classes and is quite handy for adapting
> existing systems.
>
> -- Philip
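Since the thread notes that the schema format is the same regardless of which API you use, here is a minimal sketch of an Avro record schema of the kind "avroj compile" would turn into a generated ("specific") Java class. The record name, namespace, and fields are purely hypothetical, for illustration:

```json
{
  "type": "record",
  "name": "User",
  "namespace": "example.avro",
  "fields": [
    {"name": "name", "type": "string"},
    {"name": "age",  "type": "int"}
  ]
}
```

The same schema file works with all three APIs: the specific API compiles it to a `User` class, the generic API reads records against it with string-keyed field access (e.g. `record.get("name")`), and the reflect API can map it onto an existing class with matching fields.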
