[ 
https://issues.apache.org/jira/browse/APEXCORE-635?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15888527#comment-15888527
 ] 

Vlad Rozov commented on APEXCORE-635:
-------------------------------------

bq. Yes, agree, one array of bytes is only treated as one object. But as the memory 
is relatively big, it probably will not be allocated in the TLA (Thread Local Area), 
which means it needs to take a lock for each allocation, so the allocation is not a 
very light operation.
Is your concern GC or allocating a new byte array? From what I understand, it was 
GC.
bq. Output.getBuffer() probably does not work in our case. getBuffer() returns a 
reference to Output's internal buffer, which is used for the serialized bytes. As 
we need to share one instance of Output across multiple tuples, the buffer is 
reused when the next tuple is serialized, which causes problems whether we reset 
the output's position to zero or not. Resetting the position to zero discards the 
previously serialized data; leaving the position unchanged concatenates the next 
tuple's data to the previous one, so we don't know the boundary of each tuple, and 
the buffer keeps growing, which means Output has to keep allocating new large 
buffers and copying the old data (if the initial buffer size is not very big).
Output.getBuffer() works. Tuples are serialized one at a time, and after each 
serialization the bytes from the shared Output buffer are copied into a 
SerializedTuple that allocates a new buffer each time. That avoids the extra copy 
you refer to.
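The copy-out pattern can be sketched as follows (a minimal stand-in that uses 
plain UTF-8 encoding instead of Kryo so it stays dependency-free; with Kryo it 
would be output.setPosition(0), kryo.writeObject(output, tuple), then 
Arrays.copyOf(output.getBuffer(), output.position())):

```java
import java.nio.charset.StandardCharsets;
import java.util.Arrays;

// One shared, reusable serialization buffer; each tuple's bytes are copied out
// into a per-tuple array right after serialization, so reusing the shared
// buffer for the next tuple is safe and only one copy per tuple is made.
class SharedBufferSerializer {
  private byte[] shared = new byte[4096];  // reused across tuples

  byte[] serialize(String tuple) {
    // stand-in for kryo.writeObject(output, tuple) into a shared Output
    byte[] utf8 = tuple.getBytes(StandardCharsets.UTF_8);
    if (utf8.length > shared.length) {
      shared = new byte[utf8.length];      // grow, as Output's internal buffer would
    }
    System.arraycopy(utf8, 0, shared, 0, utf8.length);
    int position = utf8.length;            // stand-in for output.position()
    // the single copy: out of the shared buffer into the per-tuple array
    return Arrays.copyOf(shared, position);
  }
}
```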
bq. Yes, the original prototype reset the buffer when it should not have. But as 
I pointed out in this proposal, the buffer will be reset after the data is sent 
to the socket. That is why the write() method needs to be overridden.
The original prototype has a serious bug, so the benchmarking results are not 
reliable. Prior to implementing the proposed approach, I suggest building a 
prototype in stages and proving that the approach provides a significant 
performance improvement. Without such proof, the complexity of the proposal is 
not justifiable.
bq. Yes, agree. But I think it is worth changing StreamCodec; as suggested in 
the proposal, it is not easy to implement a customized, reusable serialization 
without memory copies on top of the previous interface, and it is not compatible 
with Kryo's interface.
The current implementation (DefaultStatefulStreamCodec) of StatefulStreamCodec 
does not seem natural to me. The pairs are cleared after the first tuple, and 
for every subsequent tuple the pairs need to be checked and an instance of 
DataStatePair needs to be created to wrap the state (which is null except for 
the first tuple) and the data.
I agree that the interface is not optimal and needs to be changed, but it is 
not possible to change it without breaking backward compatibility. That means a 
new interface needs to be introduced and BufferServer will need to deal with 3 
different codec interfaces. Before this is done, I'd like to have proof that 
there is a significant performance benefit that justifies the complexity.
bq. The netlet client needs to be aware of the memory management mechanism in 
order to reset the memory. But the code is maintained in the same class 
(currently BufferServerPublisher; an extended class in the proposal), so this 
should be ok.
The serialization format doesn't have to change; it can be compatible with the 
previous one. But if we want to take advantage of the reserve() function of 
SerializationBuffer to avoid another extra copy, the format needs to be changed 
to avoid variable-length encoding.
Again, I'd like to see a staged prototype that proves there is a performance 
benefit and that it is possible to avoid tight coupling between the Codec, the 
Publisher, and the Tuple serialization format.

Overall, I am still (n) on the approach, and I think it needs to be revisited 
when the netlet->netty migration is done. Netty has its own memory allocator, 
and it will be necessary to see how it can be integrated with Kryo and other 
serializers.

> Proposal: Manage memory to avoid memory copy and garbage collection
> -------------------------------------------------------------------
>
>                 Key: APEXCORE-635
>                 URL: https://issues.apache.org/jira/browse/APEXCORE-635
>             Project: Apache Apex Core
>          Issue Type: Wish
>            Reporter: bright chen
>            Assignee: bright chen
>
> Manage memory to avoid memory copy and garbage collection
> The aim of this proposal is to reuse memory to avoid garbage collection and 
> to avoid unnecessary memory copies in order to increase performance. In this 
> proposal the term serde means serialization and deserialization; it is the 
> same as codec.
> Currently, Apex by default uses DefaultStatefulStreamCodec for serde, which 
> extends Kryo and optimizes it by replacing the class with a class id. An 
> application developer can optimize serialization by implementing the 
> StreamCodec interface.
> First, let's look into the default codec, DefaultStatefulStreamCodec. As I 
> understand it, it basically optimizes serde by replacing the class name with 
> a class id. The state information is only sent before the first tuple; it is 
> kind of like configuration for the serde. So I suggest separating this 
> feature from the serde. The benefit is that a customized serde can still use 
> this feature. Also, Kryo has some limitations which I'll state later.
> Second, let's look at a customized serde from the application developer's 
> point of view and how to implement StreamCodec. I take a simple tuple, 
> List<String>, as an example.
> The first solution is to use Kryo. This is basically the same as the Apex 
> default codec.
> The second solution is to implement StreamCodec for String and List, with 
> ListSerde delegating String to StringSerde. The benefit of this solution is 
> that StringSerde and ListSerde can be reused. The problem is that it needs a 
> lot of temporary memory and memory copies. Following is a sample 
> implementation.
> class StringSerde {
>   Slice toByteArray(String o) {
>     byte[] b = o.getBytes("UTF-8");            // new bytes
>     byte[] b1 = new byte[b.length + 4];        // new bytes
>     // set the length of the string in the first 4 bytes
>     System.arraycopy(b, 0, b1, 4, b.length);   // copy bytes
>     return new Slice(b1);
>   }
> }
> class ListSerde<T> {
>   StreamCodec itemSerde;  // the serde used to serialize/deserialize each item
>   Slice toByteArray(List<T> list) {
>     Slice[] itemSlices = new Slice[list.size()];
>     int size = 0;
>     int index = 0;
>     for (T item : list) {
>       Slice slice = itemSerde.toByteArray(item);
>       size += slice.length;
>       itemSlices[index++] = slice;
>     }
>     byte[] b = new byte[size + 4];             // allocate the memory
>     // set the length of the list in the first 4 bytes
>     // copy the data from itemSlices
>     return new Slice(b);
>   }
> }
>   
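> The sketch above can also be written out as a compilable stand-in (using a 
> plain byte[] instead of Slice and a hypothetical 4-byte big-endian length 
> prefix), which makes the extra allocations and copies visible:

```java
import java.nio.charset.StandardCharsets;
import java.util.List;

// Compilable version of StringSerde: two allocations and one copy per string.
class StringSerde {
  byte[] toByteArray(String o) {
    byte[] b = o.getBytes(StandardCharsets.UTF_8);  // allocation #1
    byte[] b1 = new byte[b.length + 4];             // allocation #2
    writeInt(b1, 0, b.length);                      // 4-byte big-endian length prefix
    System.arraycopy(b, 0, b1, 4, b.length);        // copy #1
    return b1;
  }

  static void writeInt(byte[] dst, int at, int v) {
    dst[at] = (byte) (v >>> 24);
    dst[at + 1] = (byte) (v >>> 16);
    dst[at + 2] = (byte) (v >>> 8);
    dst[at + 3] = (byte) v;
  }
}

// Compilable version of ListSerde: one more allocation, and every item's bytes
// are copied a second time into the merged array.
class ListSerde {
  StringSerde itemSerde = new StringSerde();

  byte[] toByteArray(List<String> list) {
    byte[][] items = new byte[list.size()][];
    int size = 0;
    for (int i = 0; i < list.size(); i++) {
      items[i] = itemSerde.toByteArray(list.get(i));
      size += items[i].length;
    }
    byte[] b = new byte[size + 4];                  // another allocation
    StringSerde.writeInt(b, 0, list.size());        // list size prefix
    int pos = 4;
    for (byte[] item : items) {
      System.arraycopy(item, 0, b, pos, item.length);  // copy #2
      pos += item.length;
    }
    return b;
  }
}
```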
> From the code above, we can see that around 2 times the required memory is 
> allocated and the data is copied twice (one copy may be specific to String, 
> but the other copy is mandatory). And when the bytes are written to the 
> socket, none of the allocated memory can be reused; it all needs to be 
> garbage collected.
> The above tuple only has two levels; if the tuple has n levels, n times the 
> required memory would be allocated and n-1 data copies would be required.
> The third solution could be to allocate memory and then pass the memory and 
> an offset to the item serde. There are some problems with this solution:
> How do we pass the memory from the caller? Our previous interface only 
> passes the object and has no way to pass memory, so passing the memory would 
> depend on the implementation.
> Another big problem with this solution is that it is hard to allocate the 
> proper amount of memory (for this particular case, it could probably 
> allocate 2 times the total string length). Memory allocated beyond what is 
> required is wasted until the data is sent to the socket (or exact memory is 
> allocated and the data copied, to avoid wasting memory). The code also 
> needs to handle the case where the memory is not enough.
> The fourth solution could be to treat the whole object as flat, allocate 
> memory, and handle it directly, for example as follows. This solution solves 
> the problem of passing memory, but it has the other problems of the third 
> solution and introduces some new ones:
> Can't reuse the code: we already have StringSerde, but ListSerde<String> has 
> to implement almost the same logic again.
> The serializeItemToMemory() method has to be implemented separately for each 
> item type.
> class ListSerde<T> {
>   Slice toByteArray(List<T> list) {
>     byte[] b = new byte[...];      // hard to estimate the proper size
>     int size = 0;
>     for (T item : list) {
>       int length = serializeItemToMemory(item, b, size);
>       size += length;
>     }
>     // allocate new memory and copy the data if we don't want to waste memory
>   }
> }
> So, from the analysis of these solutions, it is not easy to implement a 
> good, reusable customized serde.
> Third, let's look at the Kryo serde. Kryo provides Output, so each field 
> serde writes to the same Output. This approach solves the memory problem, 
> but Output has some problems too:
> Output, as a stream, can only write continuously, which can be a problem. 
> For example, when serializing a String into LV (length-value) format, we 
> don't know what the length will be before serialization.
> Output doesn't have a cache, which means the serialized data must be copied 
> to the outside and managed there.
> The allocated memory can't be reused without extra management.
> Another copy is required when adding partition information.
> The memory allocated for different objects is not contiguous, which means 
> another copy is needed when merging multiple serialized tuples into one 
> block to send to the socket.
> My suggested solution is:
> Add a SerializationBuffer which extends Kryo's Output and writes data to a 
> BlockStream.
> BlockStream manages a list of blocks; BlockStream can reserve space and fill 
> values into the reserved space; BlockStream can reset the memory when the 
> data is no longer used. We can probably use unsafe mode to increase the 
> performance of this part in the future.
> Add a MemReuseCodec interface which extends StreamCodec; deprecate Slice 
> toByteArray(T o) and add the method void toByteArray(T o, SerializationBuffer 
> output). Here, toByteArray does not return a slice, as the codec could be 
> the top-level codec or the codec of a field. Call 
> SerializationBuffer.toSlice() to get the slice of serialized data.
> In the Publisher, keep two lists/arrays of slices: one for serializing the 
> tuples, the other for sending to the socket. When woken up for writing, 
> switch the lists/arrays, then merge the slices into one large slice and call 
> socket write. Reset the stream after the data is written.
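> As a rough sketch, the two-list swap could look like this (class and method 
> names are hypothetical, not existing Apex APIs):

```java
import java.util.ArrayList;
import java.util.List;

// Two lists of serialized slices: one being filled by serialization, one being
// drained to the socket. On write-readiness the lists are swapped and the
// drained slices are merged into a single block for one socket write.
class DoubleBufferedPublisher {
  private List<byte[]> filling = new ArrayList<>();
  private List<byte[]> sending = new ArrayList<>();

  void publish(byte[] slice) {
    filling.add(slice);
  }

  // called when the channel becomes writable
  byte[] onWriteReady() {
    List<byte[]> tmp = sending;  // swap the two lists
    sending = filling;
    filling = tmp;
    int total = 0;
    for (byte[] s : sending) total += s.length;
    byte[] block = new byte[total];  // merge slices into one large block
    int pos = 0;
    for (byte[] s : sending) {
      System.arraycopy(s, 0, block, pos, s.length);
      pos += s.length;
    }
    sending.clear();  // reset: this memory can now be reused for new tuples
    return block;     // hand the merged block to the socket write
  }
}
```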
> The previous ListSerde can then be implemented as follows:
> class ListSerde<T> {
>   MemReuseCodec itemSerde;  // the serde used to serialize/deserialize each item
>   void toByteArray(List<T> list, SerializationBuffer buffer) {
>     buffer.reserveForLength();
>     for (T item : list) {
>       itemSerde.toByteArray(item, buffer);
>     }
>     buffer.fillLength();
>   }
> }
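> The reserveForLength()/fillLength() pair relies on the buffer keeping the 
> written bytes addressable so that the length slot can be back-patched, 
> which a pure stream cannot do. A minimal sketch of that pattern 
> (LengthPrefixBuffer is a hypothetical stand-in for the proposed 
> SerializationBuffer, without block management or growth handling):

```java
import java.util.Arrays;

// A 4-byte slot is reserved before the items are written and back-patched with
// the byte count once it is known; no temporary per-item buffers are needed.
// Buffer growth and nested reservations are omitted for brevity.
class LengthPrefixBuffer {
  private final byte[] buf = new byte[4096];
  private int pos;
  private int reservedAt = -1;

  void reserveForLength() {  // skip 4 bytes and remember where they are
    reservedAt = pos;
    pos += 4;
  }

  void write(byte[] data) {
    System.arraycopy(data, 0, buf, pos, data.length);
    pos += data.length;
  }

  void fillLength() {        // back-patch the reserved slot (big-endian)
    int len = pos - reservedAt - 4;
    buf[reservedAt] = (byte) (len >>> 24);
    buf[reservedAt + 1] = (byte) (len >>> 16);
    buf[reservedAt + 2] = (byte) (len >>> 8);
    buf[reservedAt + 3] = (byte) len;
  }

  byte[] toSlice() {         // the serialized bytes written so far
    return Arrays.copyOf(buf, pos);
  }
}
```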
> The benefits of this mechanism:
> the memory can be reused instead of garbage collected after the data is sent 
> to the socket
> unnecessary memory copies are avoided; basically all of the extra copies 
> required by Kryo can be avoided
> the data sent to the socket can easily be merged into one block without an 
> extra memory copy
> it can easily be integrated with the Kryo serde, since SerializationBuffer 
> extends Output.
> The work needed to integrate this mechanism into Apex without modifying 
> netlet:
> Add a MemReuseCodec field in BufferServerPublisher, initialized in setup() 
> if the codec implements MemReuseCodec.
> Change DefaultStatefulStreamCodec to be implemented using 
> SerializationBuffer.
> To integrate with the socket, basically only write(byte[] message, int 
> offset, int size) and write() need to be overridden. Unfortunately, write() 
> is final, so the following workaround is needed: add an interface 
> ListenerExt which has only one method, writeExt(); change 
> BufferServerPublisher to implement ListenerExt; add a DefaultEventLoopExt 
> which extends DefaultEventLoop and overrides handleSelectedKey so that, for 
> selection key OP_WRITE, if the attachment implements ListenerExt it calls 
> ListenerExt.writeExt(), otherwise it calls write().



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)
