> On 20 Jun 2017, at 16:06, David Turner <drakonst...@gmail.com> wrote:
> 
> Ceph is a large-scale storage system. You're hoping that it is going to care 
> about and split files that are 9 bytes in size. Run this same test with a 4 MB 
> file and see how it splits up the content of the file.
> 
> 

Makes sense. I was just hoping to reproduce the behavior depicted in the figure 
at 
http://docs.ceph.com/docs/master/rados/operations/erasure-code/#creating-a-sample-erasure-coded-pool
with the exact same values. Thanks for the help!
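
For completeness, here is roughly how I plan to repeat the test with a larger 
object (the invocation below is a sketch of my own; BIGFILE is an arbitrary 
name, and ecpool is the pool from my original mail):

dd if=/dev/urandom of=BIGFILE bs=1M count=4   # create a 4 MB file of random data
rados --pool ecpool put BIGFILE BIGFILE       # store it in the erasure-coded pool
ceph osd map ecpool BIGFILE                   # check its placement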

> 
> On Tue, Jun 20, 2017, 6:48 AM Jonas Jaszkowic <jonasjaszkowic.w...@gmail.com> wrote:
> I am currently evaluating erasure coding in Ceph. I wanted to know where my 
> data and coding chunks are located, so I followed the example at 
> http://docs.ceph.com/docs/master/rados/operations/erasure-code/#creating-a-sample-erasure-coded-pool
> and set up an erasure-coded pool with k=3 data chunks and m=2 coding chunks. I 
> stored an object named 'NYAN' with content 'ABCDEFGHI' in the pool.
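> 
> The steps, following the linked example, were along these lines ('myprofile' 
> stands in for whatever profile name the pool was actually created with):
> 
> ceph osd pool create ecpool 12 12 erasure myprofile
> echo ABCDEFGHI | rados --pool ecpool put NYAN -
> 
> (The trailing newline that echo appends explains the 0a byte visible in the 
> hexdumps below.)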
> 
> The output of ceph osd map ecpool NYAN is the following, which seems correct:
> 
> osdmap e97 pool 'ecpool' (6) object 'NYAN' -> pg 6.bf243b9 (6.39) -> up 
> ([3,1,0,2,4], p3) acting ([3,1,0,2,4], p3)
> 
> (For an erasure-coded pool, the position in the up set is the shard index: 
> shard 0 is on osd.3, shard 1 on osd.1, and so on, which matches the s0-s4 
> suffixes in the paths below; p3 marks osd.3 as the primary.)
> 
> But when I look at the chunks stored on the corresponding OSDs, I see three 
> chunks containing the whole content of the original file (padded with zeros 
> to a size of 4.0K) and two chunks containing nothing but zeros. I do not 
> understand this behavior. According to the link above, "The NYAN object will 
> be divided in three (K=3) and two additional chunks will be created (M=2)." 
> What I observe instead is that the file is replicated three times in its 
> entirety, and what appear to be the coding chunks (i.e. the ones holding 
> parity information) are objects containing nothing but zeros. Am I doing 
> something wrong here?
> 
> Any help is appreciated!
> 
> Attached is the output from each OSD node, showing the path to the chunk, its 
> md5sum and size, and its content as a hexdump:
> 
> osd.0
> path: 
> /var/lib/ceph/osd/ceph-0/current/6.39s2_head/NYAN__head_0BF243B9__6_ffffffffffffffff_2
> md5sum: 1666ba51af756693678da9efc443ef44  
> /var/lib/ceph/osd/ceph-0/current/6.39s2_head/NYAN__head_0BF243B9__6_ffffffffffffffff_2
> filesize: 4.0K        
> /var/lib/ceph/osd/ceph-0/current/6.39s2_head/NYAN__head_0BF243B9__6_ffffffffffffffff_2
> hexdump: 00000000  00 00 00 00 00 00 00 00  00 00 00 00 00 00 00 00  
> |................|
> *
> 00000560
> 
> osd.1
> path: 
> /var/lib/ceph/osd/ceph-1/current/6.39s1_head/NYAN__head_0BF243B9__6_ffffffffffffffff_1
> md5sum: 1666ba51af756693678da9efc443ef44  
> /var/lib/ceph/osd/ceph-1/current/6.39s1_head/NYAN__head_0BF243B9__6_ffffffffffffffff_1
> filesize: 4.0K        
> /var/lib/ceph/osd/ceph-1/current/6.39s1_head/NYAN__head_0BF243B9__6_ffffffffffffffff_1
> hexdump: 00000000  00 00 00 00 00 00 00 00  00 00 00 00 00 00 00 00  
> |................|
> *
> 00000560
> 
> osd.2
> path: 
> /var/lib/ceph/osd/ceph-2/current/6.39s3_head/NYAN__head_0BF243B9__6_ffffffffffffffff_3
> md5sum: ff6a7f77674e23fd7e3a0c11d7b36ed4  
> /var/lib/ceph/osd/ceph-2/current/6.39s3_head/NYAN__head_0BF243B9__6_ffffffffffffffff_3
> filesize: 4.0K        
> /var/lib/ceph/osd/ceph-2/current/6.39s3_head/NYAN__head_0BF243B9__6_ffffffffffffffff_3
> hexdump: 00000000  41 42 43 44 45 46 47 48  49 0a 00 00 00 00 00 00  
> |ABCDEFGHI.......|
> 00000010  00 00 00 00 00 00 00 00  00 00 00 00 00 00 00 00  |................|
> *
> 00000560
> 
> osd.3
> path: 
> /var/lib/ceph/osd/ceph-3/current/6.39s0_head/NYAN__head_0BF243B9__6_ffffffffffffffff_0
> md5sum: ff6a7f77674e23fd7e3a0c11d7b36ed4  
> /var/lib/ceph/osd/ceph-3/current/6.39s0_head/NYAN__head_0BF243B9__6_ffffffffffffffff_0
> filesize: 4.0K        
> /var/lib/ceph/osd/ceph-3/current/6.39s0_head/NYAN__head_0BF243B9__6_ffffffffffffffff_0
> hexdump: 00000000  41 42 43 44 45 46 47 48  49 0a 00 00 00 00 00 00  
> |ABCDEFGHI.......|
> 00000010  00 00 00 00 00 00 00 00  00 00 00 00 00 00 00 00  |................|
> *
> 00000560
> 
> osd.4
> path: 
> /var/lib/ceph/osd/ceph-4/current/6.39s4_head/NYAN__head_0BF243B9__6_ffffffffffffffff_4
> md5sum: ff6a7f77674e23fd7e3a0c11d7b36ed4  
> /var/lib/ceph/osd/ceph-4/current/6.39s4_head/NYAN__head_0BF243B9__6_ffffffffffffffff_4
> filesize: 4.0K        
> /var/lib/ceph/osd/ceph-4/current/6.39s4_head/NYAN__head_0BF243B9__6_ffffffffffffffff_4
> hexdump: 00000000  41 42 43 44 45 46 47 48  49 0a 00 00 00 00 00 00  
> |ABCDEFGHI.......|
> 00000010  00 00 00 00 00 00 00 00  00 00 00 00 00 00 00 00  |................|
> *
> 00000560
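> 
> (Each block above was produced on the respective OSD host with commands along 
> these lines; the find pattern is an approximation, and the chunk filename 
> differs per OSD:
> 
> CHUNK=$(find /var/lib/ceph/osd/ceph-0/current -name 'NYAN*')
> md5sum "$CHUNK"
> du -h "$CHUNK"        # reported as "filesize" above
> hexdump -C "$CHUNK"
> )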
> 
> 
> The erasure code profile used:
> 
> jerasure-per-chunk-alignment=false
> k=3
> m=2
> plugin=jerasure
> ruleset-failure-domain=host
> ruleset-root=default
> technique=reed_sol_van
> w=8
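> 
> (A profile with these values can be created and inspected with the following; 
> 'myprofile' is a placeholder name, and the remaining values shown above are 
> defaults:
> 
> ceph osd erasure-code-profile set myprofile \
>     k=3 m=2 plugin=jerasure technique=reed_sol_van \
>     ruleset-failure-domain=host
> ceph osd erasure-code-profile get myprofile
> )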
> 
> 

_______________________________________________
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com
