Re: Apache Avro to .NET Core

2016-09-06 Thread Ryan Blue
Hi Welly,

I'm not very familiar with .NET or the C# Avro library, so hopefully others
can help answer your specific questions. One alternative is to look at the
Microsoft C# Avro library, which also has codegen features and is intended
for use with .NET:


https://azure.microsoft.com/en-us/updates/microsoft-avro-library-updated-to-include-c-code-generator/

rb

On Mon, Sep 5, 2016 at 5:12 AM, Welly Tambunan <if05...@gmail.com> wrote:

> Hi All,
>
> I'm trying to port Apache Avro to .NET Core.
>
> Here's the repository for the project
>
> https://github.com/welly87/Apache-Avro-Core
>
>
> However, I found that some of the classes and methods related to assembly
> loading and IL generation are missing. I'm really close to completing the
> port.
>
> #1. Is there any replacement for this line of code in .NET Core?
> Assembly[] assemblies = AppDomain.CurrentDomain.GetAssemblies();
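
One possible replacement (a sketch, and an assumption to verify against your
target framework): .NET Core has no AppDomain, but the
Microsoft.Extensions.DependencyModel package describes the application's
dependency context, which you can use to load and enumerate the runtime
assemblies:

    // A sketch of one way to approximate AppDomain.CurrentDomain.GetAssemblies()
    // on .NET Core. Requires the Microsoft.Extensions.DependencyModel package.
    using System.Collections.Generic;
    using System.Reflection;
    using Microsoft.Extensions.DependencyModel;

    public static class AssemblyEnumerator
    {
        public static IList<Assembly> GetAssemblies()
        {
            var assemblies = new List<Assembly>();
            // DependencyContext.Default describes the running application's
            // dependencies (read from its .deps.json file).
            foreach (RuntimeLibrary library in DependencyContext.Default.RuntimeLibraries)
            {
                try
                {
                    assemblies.Add(Assembly.Load(new AssemblyName(library.Name)));
                }
                catch
                {
                    // Packages with no loadable managed assembly (native or
                    // resource-only) will fail to load; skip them.
                }
            }
            return assemblies;
        }
    }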
>
> #2. Another one is the ILGenerator. Any idea how to replace this code?
>
> public CtorDelegate GetConstructor(string name, Schema.Type schemaType, Type type)
> {
>     ConstructorInfo ctorInfo = type.GetConstructor(Type.EmptyTypes);
>     if (ctorInfo == null)
>         throw new AvroException("Class " + name + " has no default constructor");
>
>     DynamicMethod dynMethod = new DynamicMethod("DM$OBJ_FACTORY_" + name, typeof(object), null, type, true);
>     ILGenerator ilGen = dynMethod.GetILGenerator();
>     ilGen.Emit(OpCodes.Nop);
>     ilGen.Emit(OpCodes.Newobj, ctorInfo);
>     ilGen.Emit(OpCodes.Ret);
>
>     return (CtorDelegate)dynMethod.CreateDelegate(ctorType);
> }
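
One possible replacement (a rough sketch, not tested): System.Linq.Expressions
is available on .NET Core, so you can compile a small lambda instead of
emitting IL by hand. This assumes CtorDelegate is a parameterless delegate
returning object, which is what the original usage suggests:

    // A sketch replacing the ILGenerator-based factory with a compiled
    // expression tree; System.Linq.Expressions works on .NET Core.
    using System;
    using System.Linq.Expressions;
    using System.Reflection;

    // Assumed shape of the delegate, based on how the original code uses it.
    public delegate object CtorDelegate();

    public static class ConstructorFactory
    {
        public static CtorDelegate GetConstructor(string name, Type type)
        {
            // On older netstandard targets, Type.GetConstructor may need the
            // System.Reflection.TypeExtensions package.
            ConstructorInfo ctorInfo = type.GetConstructor(Type.EmptyTypes);
            if (ctorInfo == null)
                throw new InvalidOperationException( // AvroException in the real code
                    "Class " + name + " has no default constructor");

            // Compile "() => (object) new T()" instead of emitting IL by hand.
            NewExpression ctorCall = Expression.New(ctorInfo);
            UnaryExpression boxed = Expression.Convert(ctorCall, typeof(object));
            return Expression.Lambda<CtorDelegate>(boxed).Compile();
        }
    }

Compiled expression trees have similar call performance to a DynamicMethod
once compiled, so this should be a drop-in substitute for the object factory.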
>
> Thanks
>
> Cheers
> --
> Welly Tambunan
> Triplelands
>
> http://weltam.wordpress.com
> http://www.triplelands.com <http://www.triplelands.com/blog/>
>



-- 
Ryan Blue
Software Engineer
Netflix


Re: Avro schema doesn't honor backward compatibility

2016-02-02 Thread Ryan Blue

Hi Raghvendra,

It looks like the problem is that you're using the new schema in place
of the schema that the data was written with. You should call setSchema
on your SpecificDatumReader to set the schema the data was written with.

What's happening is that the schema you're using, the new one, has the
new field, so Avro assumes it is present and tries to read it. By setting
the schema that the data was actually written with, the datum reader will
know that the field isn't present and will use your default instead. When
you read data that was encoded with the new schema, you need to pass the
new schema as the written schema as well, so the datum reader knows that
the field should be read.
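
For example, something like this (untested, and it assumes you have a way
to look up the schema each payload was written with, passed in here as
writerSchema):

    import java.io.IOException;

    import org.apache.avro.Schema;
    import org.apache.avro.io.Decoder;
    import org.apache.avro.io.DecoderFactory;
    import org.apache.avro.specific.SpecificDatumReader;

    public class PayloadParser {
        // writerSchema is the schema the payload was actually written with.
        // MyPayLoad.SCHEMA$, the new schema, is the reader's schema, so the
        // resolver fills in the default for fields the writer didn't have.
        public static MyPayLoad parseBinaryPayload(byte[] payload, Schema writerSchema)
                throws IOException {
            SpecificDatumReader<MyPayLoad> reader =
                new SpecificDatumReader<>(writerSchema, MyPayLoad.SCHEMA$);
            Decoder decoder = DecoderFactory.get().binaryDecoder(payload, null);
            return reader.read(null, decoder);
        }
    }

Equivalently, you can keep your existing reader and call
payloadReader.setSchema(writerSchema) before reading. When a payload was
written with the new schema, pass the new schema as writerSchema so the
new field is actually read.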


Does that make sense?

rb

On 02/01/2016 12:31 PM, Raghvendra Singh wrote:


I have this avro schema

{
  "namespace": "xx..x.x",
  "type": "record",
  "name": "MyPayLoad",
  "fields": [
  {"name": "filed1",  "type": "string"},
  {"name": "filed2", "type": "long"},
  {"name": "filed3",  "type": "boolean"},
  {
   "name" : "metrics",
   "type":
   {
  "type" : "array",
  "items":
  {
  "name": "MyRecord",
  "type": "record",
  "fields" :
  [
{"name": "min", "type": "long"},
{"name": "max", "type": "long"},
{"name": "sum", "type": "long"},
{"name": "count", "type": "long"}
  ]
  }
   }
  }
   ]}

Here is the code we use to parse the data:

public static final MyPayLoad parseBinaryPayload(byte[] payload) {
    DatumReader<MyPayLoad> payloadReader = new SpecificDatumReader<>(MyPayLoad.class);
    Decoder decoder = DecoderFactory.get().binaryDecoder(payload, null);
    MyPayLoad myPayLoad = null;
    try {
        myPayLoad = payloadReader.read(null, decoder);
    } catch (IOException e) {
        logger.log(Level.SEVERE, e.getMessage(), e);
    }

    return myPayLoad;
}

Now I want to add one more field to the schema, so the schema looks like
the one below:

{
  "namespace": "xx..x.x",
  "type": "record",
  "name": "MyPayLoad",
  "fields": [
    {"name": "filed1", "type": "string"},
    {"name": "filed2", "type": "long"},
    {"name": "filed3", "type": "boolean"},
    {
      "name": "metrics",
      "type": {
        "type": "array",
        "items": {
          "name": "MyRecord",
          "type": "record",
          "fields": [
            {"name": "min", "type": "long"},
            {"name": "max", "type": "long"},
            {"name": "sum", "type": "long"},
            {"name": "count", "type": "long"}
          ]
        }
      }
    },
    {"name": "agentType", "type": ["null", "string"], "default": "APP_AGENT"}
  ]
}

Note the field that was added and that a default is defined. The problem is
that when we receive data that was written with the older schema, I get this
error:

java.io.EOFException: null
    at org.apache.avro.io.BinaryDecoder.ensureBounds(BinaryDecoder.java:473) ~[avro-1.7.4.jar:1.7.4]
    at org.apache.avro.io.BinaryDecoder.readInt(BinaryDecoder.java:128) ~[avro-1.7.4.jar:1.7.4]
    at org.apache.avro.io.BinaryDecoder.readIndex(BinaryDecoder.java:423) ~[avro-1.7.4.jar:1.7.4]
    at org.apache.avro.io.ResolvingDecoder.doAction(ResolvingDecoder.java:229) ~[avro-1.7.4.jar:1.7.4]
    at org.apache.avro.io.parsing.Parser.advance(Parser.java:88) ~[avro-1.7.4.jar:1.7.4]
    at org.apache.avro.io.ResolvingDecoder.readIndex(ResolvingDecoder.java:206) ~[avro-1.7.4.jar:1.7.4]
    at org.apache.avro.generic.GenericDatumReader.read(GenericDatumReader.java:152) ~[avro-1.7.4.jar:1.7.4]
    at org.apache.avro.generic.GenericDatumReader.readRecord(GenericDatumReader.java:177) ~[avro-1.7.4.jar:1.7.4]
    at org.apache.avro.generic.GenericDatumReader.read(GenericDatumReader.java:148) ~[avro-1.7.4.jar:1.7.4]
    at org.apache.avro.generic.GenericDatumReader.read(GenericDatumReader.java:139) ~[avro-1.7.4.jar:1.7.4]
    at com.appdynamics.blitz.shared.util.X.parseBinaryPayload(BlitzAvroSharedUtil.java:38) ~[blitz-shared.jar:na]

What I understood from this document
<https://martin.kleppmann.com/2012/12/05/schema-evolution-in-avro-protocol-buffers-thrift.html>
is that this should be backward compatible, but somehow that doesn't seem
to be the case. Any idea what I am doing wrong?




--
Ryan Blue
Software Engineer
Cloudera, Inc.