[ 
https://issues.apache.org/jira/browse/AVRO-1905?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sean Busbey resolved AVRO-1905.
-------------------------------
    Resolution: Information Provided
      Assignee: Sean Busbey

When using Apache Avro as a serialization format, you must always know what 
schema was used to write the data and present that schema to the library as a 
part of reading.

In your examples above, it appears you are attempting to read data written with 
_old_schema_ using only _new_schema_, which will not work. Avro needs to know 
both _old_schema_ and _new_schema_ in order to resolve the differences between them.
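A minimal sketch of the two-schema read described above, using the GenericRecord API instead of the generated {{user}} class (the class name {{SchemaResolutionDemo}} is illustrative, not from the report):

```java
import java.io.ByteArrayInputStream;
import java.io.ByteArrayOutputStream;
import org.apache.avro.Schema;
import org.apache.avro.generic.GenericData;
import org.apache.avro.generic.GenericDatumReader;
import org.apache.avro.generic.GenericDatumWriter;
import org.apache.avro.generic.GenericRecord;
import org.apache.avro.io.BinaryDecoder;
import org.apache.avro.io.BinaryEncoder;
import org.apache.avro.io.DecoderFactory;
import org.apache.avro.io.EncoderFactory;

public class SchemaResolutionDemo {
  // The two schemas from the report, inlined as JSON strings.
  static final String OLD_SCHEMA = "{\"namespace\":\"example.avro\",\"type\":\"record\","
      + "\"name\":\"user\",\"fields\":["
      + "{\"name\":\"name\",\"type\":\"string\"},"
      + "{\"name\":\"favorite_number\",\"type\":\"int\"}]}";
  static final String NEW_SCHEMA = "{\"namespace\":\"example.avro\",\"type\":\"record\","
      + "\"name\":\"user\",\"fields\":["
      + "{\"name\":\"name\",\"type\":\"string\"},"
      + "{\"name\":\"favorite_number\",\"type\":\"int\"},"
      + "{\"name\":\"favorite_color\",\"type\":\"string\",\"default\":\"green\"}]}";

  public static void main(String[] args) throws Exception {
    Schema writerSchema = new Schema.Parser().parse(OLD_SCHEMA);
    Schema readerSchema = new Schema.Parser().parse(NEW_SCHEMA);

    // Write a record using the OLD (writer's) schema.
    GenericRecord u = new GenericData.Record(writerSchema);
    u.put("name", "Amod");
    u.put("favorite_number", 1);
    ByteArrayOutputStream out = new ByteArrayOutputStream();
    BinaryEncoder enc = EncoderFactory.get().binaryEncoder(out, null);
    new GenericDatumWriter<GenericRecord>(writerSchema).write(u, enc);
    enc.flush();

    // Read it back presenting BOTH schemas: writer's first, reader's second.
    BinaryDecoder dec = DecoderFactory.get()
        .binaryDecoder(new ByteArrayInputStream(out.toByteArray()), null);
    GenericDatumReader<GenericRecord> reader =
        new GenericDatumReader<GenericRecord>(writerSchema, readerSchema);
    GenericRecord resolved = reader.read(null, dec);

    // Schema resolution fills the added field from its default value.
    System.out.println(resolved.get("favorite_color"));
  }
}
```

Constructing the reader with only one schema, as in the quoted {{Consumer}}, is what causes both directions to fail: resolution (and default-filling) only happens when the reader knows the writer's schema too.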

Please use the user@avro mailing list for help with getting started using Avro: 
http://avro.apache.org/mailing_lists.html#Users. It is easiest to do so by 
first subscribing to the user list and then posting your question.

> Backward and forward compatible
> -------------------------------
>
>                 Key: AVRO-1905
>                 URL: https://issues.apache.org/jira/browse/AVRO-1905
>             Project: Avro
>          Issue Type: Bug
>          Components: java
>            Reporter: Amod Kumar Pandey
>            Assignee: Sean Busbey
>
> My understanding is that Avro is both backward and forward compatible (for 
> certain schema changes). But as per my tests it is neither backward nor 
> forward compatible.
> Maven project with avro
> {code}
> {"namespace": "example.avro",
>  "type": "record",
>  "name": "user",
>  "fields": [
>      {"name": "name", "type": "string"},
>      {"name": "favorite_number",  "type": "int"}
>  ]
> }
> {code}
> Producer
> {code}
> public class Producer {
>   public static void main(String[] args) throws IOException {
>     try (ByteArrayOutputStream outputStream = new ByteArrayOutputStream()) {
>       user u1 = user.newBuilder().setFavoriteNumber(1).setName("Amod").build();
>       writeBinaryEncodedAvro(u1, outputStream);
>       user u2 = user.newBuilder().setFavoriteNumber(2).setName("Pandey").build();
>       writeBinaryEncodedAvro(u2, outputStream);
>       System.out.println(Arrays.toString(outputStream.toByteArray()));
>     }
>   }
>   static void writeBinaryEncodedAvro(SpecificRecord specificRecord, OutputStream os) throws IOException {
>     BinaryEncoder binaryEncoder = EncoderFactory.get().binaryEncoder(os, null);
>     @SuppressWarnings("unchecked")
>     DatumWriter<SpecificRecord> datumWriter =
>         new SpecificDatumWriter<SpecificRecord>((Class<SpecificRecord>) specificRecord.getClass());
>     datumWriter.write(specificRecord, binaryEncoder);
>     binaryEncoder.flush();
>   }
> }
> {code}
> Consumer 
> {code}
> public class Consumer {
>   public static void main(String[] args) throws IOException {
>     byte[] data = {8, 65, 109, 111, 100, 2, 10, 103, 114, 101, 101, 110};
>     try (ByteArrayInputStream inputStream = new ByteArrayInputStream(data)) {
>       System.out.println(fromBinaryMulti(inputStream));
>     }
>   }
>   static user fromBinary(InputStream is) throws IOException {
>     BinaryDecoder binaryDecoder = DecoderFactory.get().binaryDecoder(is, null);
>     DatumReader<user> datumReader = new SpecificDatumReader<user>(user.class);
>     return datumReader.read(null, binaryDecoder);
>   }
>   static List<user> fromBinaryMulti(InputStream is) throws IOException {
>     List<user> users = new ArrayList<user>();
>     BinaryDecoder binaryDecoder = DecoderFactory.get().binaryDecoder(is, null);
>     while (!binaryDecoder.isEnd()) {
>       DatumReader<user> datumReader = new SpecificDatumReader<user>(user.class);
>       users.add(datumReader.read(null, binaryDecoder));
>     }
>     return users;
>   }
> }
> {code}
> I changed the schema to 
> {code}
> {"namespace": "example.avro",
>  "type": "record",
>  "name": "user",
>  "fields": [
>      {"name": "name", "type": "string"},
>      {"name": "favorite_number",  "type": "int"},
>      {"name": "favorite_color", "type": "string", "default": "green"}
>  ]
> }
> {code}
> The following does not work:
> Consumer code using the new-schema generated class cannot consume a byte array 
> produced with the old schema.
> Consumer code using the old-schema generated class cannot consume a byte array 
> produced with the new schema.
> Is there a problem in the way I am trying to understand forward and backward 
> compatibility?



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)
