Re: [ceph-users] Erasure Coding: Wrong content of data and coding chunks?

2017-06-20 Thread Jonas Jaszkowic

> On 20.06.2017 at 16:06, David Turner wrote:
> 
> Ceph is a large-scale storage system. You're expecting it to care about, and 
> split, files that are only 9 bytes in size. Do the same test with a 4MB file 
> and see how it splits up the content.
> 
> 

Makes sense. I was just hoping to reproduce, with the exact same values, the 
behavior depicted in the figure at 
http://docs.ceph.com/docs/master/rados/operations/erasure-code/#creating-a-sample-erasure-coded-pool
Thanks for the help!
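
For reference, here is a quick way to redo the test with a larger object, as
David suggests. This is only a sketch: the pool name ecpool is the one from my
original mail, and the object name BIG is just an example.

# create a 4MB file of random data and store it in the EC pool
dd if=/dev/urandom of=test4m.bin bs=4M count=1
rados --pool ecpool put BIG test4m.bin

# show which PG and OSDs the object maps to
ceph osd map ecpool BIG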

> 
> On Tue, Jun 20, 2017, 6:48 AM, Jonas Jaszkowic wrote:
> I am currently evaluating erasure coding in Ceph. I wanted to know where my 
> data and coding chunks are located, so I followed the example at 
> http://docs.ceph.com/docs/master/rados/operations/erasure-code/#creating-a-sample-erasure-coded-pool
> and set up an erasure-coded pool with k=3 data chunks and m=2 coding chunks. 
> I stored an object named 'NYAN' with content 'ABCDEFGHI' in the pool.
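> 
> Roughly, these are the commands from the linked docs example that I followed 
> (profile and pool names as given there):
> 
> ceph osd erasure-code-profile set myprofile k=3 m=2 ruleset-failure-domain=host
> ceph osd pool create ecpool 12 12 erasure myprofile
> echo ABCDEFGHI | rados --pool ecpool put NYAN -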
> 
> The output of ceph osd map ecpool NYAN is the following, which seems correct 
> (shard i of an EC object goes to the i-th OSD in the up set, so shard 0 is on 
> osd.3, shard 1 on osd.1, and so on):
> 
> osdmap e97 pool 'ecpool' (6) object 'NYAN' -> pg 6.bf243b9 (6.39) -> up 
> ([3,1,0,2,4], p3) acting ([3,1,0,2,4], p3)
> 
> But when I look at the chunks stored on the corresponding OSDs, I see three 
> chunks containing the whole content of the original file (padded with zeros; 
> each chunk file shows a size of 4.0K) and two chunks containing nothing but 
> zeros. I do not understand this behavior. According to the link above, "The 
> NYAN object will be divided in three (K=3) and two additional chunks will be 
> created (M=2)." What I observe instead is that the file is replicated three 
> times in its entirety, and what should be the coding chunks (i.e. the ones 
> holding parity information) are objects containing nothing but zeros. Am I 
> doing something wrong here?
> 
> Any help is appreciated!
> 
> Attached is the output for each OSD node with the path to the chunk, its 
> md5sum and size, and its content as a hexdump (the sN suffix in the PG 
> directory name is the shard id):
> 
> osd.0
> path: /var/lib/ceph/osd/ceph-0/current/6.39s2_head/NYAN__head_0BF243B9__6__2
> md5sum: 1666ba51af756693678da9efc443ef44
> filesize: 4.0K
> hexdump:
> 00000000  00 00 00 00 00 00 00 00  00 00 00 00 00 00 00 00  |................|
> *
> 00000560
> 
> osd.1
> path: /var/lib/ceph/osd/ceph-1/current/6.39s1_head/NYAN__head_0BF243B9__6__1
> md5sum: 1666ba51af756693678da9efc443ef44
> filesize: 4.0K
> hexdump:
> 00000000  00 00 00 00 00 00 00 00  00 00 00 00 00 00 00 00  |................|
> *
> 00000560
> 
> osd.2
> path: /var/lib/ceph/osd/ceph-2/current/6.39s3_head/NYAN__head_0BF243B9__6__3
> md5sum: ff6a7f77674e23fd7e3a0c11d7b36ed4
> filesize: 4.0K
> hexdump:
> 00000000  41 42 43 44 45 46 47 48  49 0a 00 00 00 00 00 00  |ABCDEFGHI.......|
> 00000010  00 00 00 00 00 00 00 00  00 00 00 00 00 00 00 00  |................|
> *
> 00000560
> 
> osd.3
> path: /var/lib/ceph/osd/ceph-3/current/6.39s0_head/NYAN__head_0BF243B9__6__0
> md5sum: ff6a7f77674e23fd7e3a0c11d7b36ed4
> filesize: 4.0K
> hexdump:
> 00000000  41 42 43 44 45 46 47 48  49 0a 00 00 00 00 00 00  |ABCDEFGHI.......|
> 00000010  00 00 00 00 00 00 00 00  00 00 00 00 00 00 00 00  |................|
> *
> 00000560
> 
> osd.4
> path: /var/lib/ceph/osd/ceph-4/current/6.39s4_head/NYAN__head_0BF243B9__6__4
> md5sum: ff6a7f77674e23fd7e3a0c11d7b36ed4
> filesize: 4.0K
> hexdump:
> 00000000  41 42 43 44 45 46 47 48  49 0a 00 00 00 00 00 00  |ABCDEFGHI.......|
> 00000010  00 00 00 00 00 00 00 00  00 00 00 00 00 00 00 00  |................|
> *
> 00000560
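> 
> (The per-OSD listing can be reproduced with something along these lines, 
> using the shard path shown above for each OSD:
> 
> f=/var/lib/ceph/osd/ceph-0/current/6.39s2_head/NYAN__head_0BF243B9__6__2
> md5sum "$f"
> ls -sh "$f"       # reported as "filesize" above
> hexdump -C "$f"
> )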
> 
> 
> The erasure code profile used:
> 
> jerasure-per-chunk-alignment=false
> k=3
> m=2
> plugin=jerasure
> ruleset-failure-domain=host
> ruleset-root=default
> technique=reed_sol_van
> w=8
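> 
> (The profile can be dumped with the following command; "myprofile" stands in 
> for whatever the profile is actually called:
> 
> ceph osd erasure-code-profile get myprofile
> )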
> 

_______________________________________________
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com