[ 
https://issues.apache.org/jira/browse/AVRO-3223?focusedWorklogId=662686&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-662686
 ]

ASF GitHub Bot logged work on AVRO-3223:
----------------------------------------

                Author: ASF GitHub Bot
            Created on: 08/Oct/21 12:39
            Start Date: 08/Oct/21 12:39
    Worklog Time Spent: 10m 
      Work Description: tgnm commented on pull request #1358:
URL: https://github.com/apache/avro/pull/1358#issuecomment-938610523


   @Indomitable the reason I force the abstract class to use `int` is that 
arrays in .NET can't exceed 2GB in size unless 
`gcAllowVeryLargeObjects` is set. Regardless, the indexer on an array is `int`.
   If we look at more recent .NET APIs, even the new `Span<T>` type doesn't 
support a 64-bit length (for the reasons described here: 
https://github.com/dotnet/apireviews/tree/main/2016/11-04-SpanOfT).
   
   Finally, the library itself doesn't support block sizes exceeding 32-bit:
   ```
   _blockSize = _decoder.ReadLong();           // read block size
   if (_blockSize > System.Int32.MaxValue || _blockSize < 0)
   {
       throw new AvroRuntimeException("Block size invalid or too large for this " +
                                      "implementation: " + _blockSize);
   }
   ```
   
   So having a `long` length isn't actually possible, and it would be 
misleading for people implementing `Codec`.
   Having said this, I'd like to see the library adopt the new `Memory<T>` and 
`Span<T>` APIs. There are a lot of opportunities for performance improvements and 
for modernising the API of this library. However, that is out of scope for the 
issue and PR I've raised :)
   


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.



Issue Time Tracking
-------------------

    Worklog Id:     (was: 662686)
    Time Spent: 40m  (was: 0.5h)

> Support optional codecs in C# library
> -------------------------------------
>
>                 Key: AVRO-3223
>                 URL: https://issues.apache.org/jira/browse/AVRO-3223
>             Project: Apache Avro
>          Issue Type: New Feature
>          Components: csharp
>            Reporter: Tiago Margalho
>            Priority: Minor
>              Labels: pull-request-available
>          Time Spent: 40m
>  Remaining Estimate: 0h
>
> I'd like to propose a change to the C# library to let it dynamically support 
> codecs that aren't shipped out of the box. That is, instead of adding more 
> codecs to the library, the library should allow codecs to be resolved by the 
> user. At the moment, only deflate is supported.
> In fact, the library already allows "custom" codecs to be used when writing 
> to an Avro file. A user can implement a Codec and pass it into the call to 
> OpenWriter: 
> {code:c#|title=DataFileWriter.cs|borderStyle=solid}
> public static IFileWriter<T> OpenWriter(DatumWriter<T> writer, string path, Codec codec);
> {code}
> However, there's no way a user can call DataFileReader.OpenRead or 
> DataFileReader.OpenAppendWriter and specify how the codec should be resolved.
> So what I'm proposing is that we allow these other methods to resolve custom 
> codecs, thus enabling the library to support codecs that are available in 
> other language implementations but not in the C# one.
> Here are some reasons why I believe this would be a good approach to extend 
> support to other codecs:
>  - The .NET base class library may not support all codecs.
>  - The library won't have to depend on other OSS projects with potentially 
> conflicting license agreements.
>  - The user can pick whichever implementation of zstd, snappy, etc. is 
> best for their use case (e.g. they may prefer a version compatible with an 
> older version of .NET, or one that is more recent and optimised for newer 
> versions of .NET/C#).
> h2. How this would work
> At present, the library uses a codec resolution method that is only used 
> internally:
> {code:c#|title=DataFileReader.cs|borderStyle=solid}
>         private Codec ResolveCodec()
>         {
>             return Codec.CreateCodecFromString(GetMetaString(DataFileConstants.MetaDataCodec));
>         }
> {code}
>  This can be extended fairly easily to support other "resolvers", but first 
> we would need to let resolvers be registered so that they can be discovered 
> by the DataFileReader:
> {code:c#|title=DataFileReader.cs|borderStyle=solid}
>         /// <summary>
>         /// Represents a function capable of resolving a codec identifying string
>         /// into a matching codec implementation a reader can use to decompress data.
>         /// </summary>
>         public delegate Codec CodecResolver(string codecMetaString);
>
>         private static List<CodecResolver> _codecResolvers = new List<CodecResolver>();
>
>         /// <summary>
>         /// Registers a function that will be used to resolve a codec identifying string
>         /// into a matching codec implementation when reading compressed Avro data.
>         /// </summary>
>         public static void RegisterCodecResolver(CodecResolver resolver)
>         {
>             _codecResolvers.Add(resolver);
>         }
>  {code}
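> The registration hook above could then be used like this (a sketch only: 
> {{ZstdCodec}} is a hypothetical user-supplied {{Codec}} implementation, and 
> "zstandard" is the codec string assumed to appear in the file metadata):
> {code:c#|title=Example: registering a resolver|borderStyle=solid}
> // Register once at startup, before opening any readers.
> DataFileReader<GenericRecord>.RegisterCodecResolver(codecMetaString =>
> {
>     // Return a codec for strings we recognise; returning null lets
>     // other resolvers (and the built-in fallback) take over.
>     if (codecMetaString == "zstandard")
>     {
>         return new ZstdCodec();
>     }
>     return null;
> });
> {code}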
> Thus, the implementation of {{ResolveCodec}} could become:
> {code:c#|title=DataFileReader.cs|borderStyle=solid}
>         private Codec ResolveCodec()
>         {
>             var codecString = GetMetaString(DataFileConstants.MetaDataCodec);
>             
>             foreach (var resolver in _codecResolvers)
>             {
>                 var candidateCodec = resolver(codecString);
>                 if (candidateCodec != null)
>                 {
>                     return candidateCodec;
>                 }
>             }
>             return Codec.CreateCodecFromString(codecString);
>         }
> {code}
> h2. One additional change required
> The only change left to make is related to how decompression buffers are 
> passed around. When I tried adding support for zstd using this approach, I 
> found that I couldn't decompress data: the DataFileReader reuses a 
> {{DataBlock}} buffer, and the actual compressed data length doesn't 
> necessarily match the length of that buffer. Although some compression 
> algorithms are likely to carry metadata in frames/headers describing the 
> length of the encoded payload, not all implementations tolerate a buffer 
> larger than the payload, so I think it would make sense to make the data 
> length explicit.
>  As a consequence, I had to make a change that passes the length of the data 
> in the DataBlock into the Codec.Decompress call:
> {code:c#|title=DataFileReader.cs|borderStyle=solid}
> if (HasNextBlock())
> {
>     _currentBlock = NextRawBlock(_currentBlock);
>     _currentBlock.Data = _codec.Decompress(_currentBlock.Data, (int)this._blockSize);
>     ...
> {code}
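> With the explicit length, a codec implementation can wrap exactly the 
> compressed slice of the (possibly larger, reused) buffer. A sketch, using 
> deflate from the BCL purely for illustration; whether {{Codec.Decompress}} 
> gains this exact overload is what the PR proposes:
> {code:c#|title=Sketch: honouring the explicit length|borderStyle=solid}
> public override byte[] Decompress(byte[] compressedData, int length)
> {
>     // Only the first 'length' bytes of the reused buffer hold
>     // compressed data; wrap exactly that slice.
>     using (var input = new MemoryStream(compressedData, 0, length))
>     using (var inflate = new DeflateStream(input, CompressionMode.Decompress))
>     using (var output = new MemoryStream())
>     {
>         inflate.CopyTo(output);
>         return output.ToArray();
>     }
> }
> {code}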
> I will be raising a PR shortly with the changes I've described above. 
> Hopefully people will find this proposal reasonable but please let me know 
> what you think!
> Thanks!



--
This message was sent by Atlassian Jira
(v8.3.4#803005)
