Re: [ceph-users] Broken snapshots... CEPH 0.94.2

2015-08-26 Thread Voloshanenko Igor
Great!
Yes, the behaviour is exactly as I described, so it looks like that's the root cause.

Thank you, Sam and Ilya!

2015-08-21 21:08 GMT+03:00 Samuel Just :

> I think I found the bug -- need to whiteout the snapset (or decache
> it) upon evict.
>
> http://tracker.ceph.com/issues/12748
> -Sam
>
> On Fri, Aug 21, 2015 at 8:04 AM, Ilya Dryomov  wrote:
> > On Fri, Aug 21, 2015 at 5:59 PM, Samuel Just  wrote:
> >> Odd, did you happen to capture osd logs?
> >
> > No, but the reproducer is trivial to cut & paste.
> >
> > Thanks,
> >
> > Ilya
>
___
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com


Re: [ceph-users] Broken snapshots... CEPH 0.94.2

2015-08-21 Thread Samuel Just
I think I found the bug -- need to whiteout the snapset (or decache
it) upon evict.

http://tracker.ceph.com/issues/12748
-Sam

On Fri, Aug 21, 2015 at 8:04 AM, Ilya Dryomov  wrote:
> On Fri, Aug 21, 2015 at 5:59 PM, Samuel Just  wrote:
>> Odd, did you happen to capture osd logs?
>
> No, but the reproducer is trivial to cut & paste.
>
> Thanks,
>
> Ilya


Re: [ceph-users] Broken snapshots... CEPH 0.94.2

2015-08-21 Thread Ilya Dryomov
On Fri, Aug 21, 2015 at 5:59 PM, Samuel Just  wrote:
> Odd, did you happen to capture osd logs?

No, but the reproducer is trivial to cut & paste.

Thanks,

Ilya


Re: [ceph-users] Broken snapshots... CEPH 0.94.2

2015-08-21 Thread Samuel Just
Odd, did you happen to capture osd logs?
-Sam

On Thu, Aug 20, 2015 at 8:10 PM, Ilya Dryomov  wrote:
> On Fri, Aug 21, 2015 at 2:02 AM, Samuel Just  wrote:
>> What's supposed to happen is that the client transparently directs all
>> requests to the cache pool rather than the cold pool when there is a
>> cache pool.  If the kernel is sending requests to the cold pool,
>> that's probably where the bug is.  Odd.  It could also be a bug
>> specific 'forward' mode either in the client or on the osd.  Why did
>> you have it in that mode?
>
> I think I reproduced this on today's master.
>
> Setup, cache mode is writeback:
>
> $ ./ceph osd pool create foo 12 12
> *** DEVELOPER MODE: setting PATH, PYTHONPATH and LD_LIBRARY_PATH ***
> pool 'foo' created
> $ ./ceph osd pool create foo-hot 12 12
> *** DEVELOPER MODE: setting PATH, PYTHONPATH and LD_LIBRARY_PATH ***
> pool 'foo-hot' created
> $ ./ceph osd tier add foo foo-hot
> *** DEVELOPER MODE: setting PATH, PYTHONPATH and LD_LIBRARY_PATH ***
> pool 'foo-hot' is now (or already was) a tier of 'foo'
> $ ./ceph osd tier cache-mode foo-hot writeback
> *** DEVELOPER MODE: setting PATH, PYTHONPATH and LD_LIBRARY_PATH ***
> set cache-mode for pool 'foo-hot' to writeback
> $ ./ceph osd tier set-overlay foo foo-hot
> *** DEVELOPER MODE: setting PATH, PYTHONPATH and LD_LIBRARY_PATH ***
> overlay for 'foo' is now (or already was) 'foo-hot'
>
> Create an image:
>
> $ ./rbd create --size 10M --image-format 2 foo/bar
> $ sudo ./rbd-fuse -p foo -c $PWD/ceph.conf /mnt
> $ sudo mkfs.ext4 /mnt/bar
> $ sudo umount /mnt
>
> Create a snapshot, take md5sum:
>
> $ ./rbd snap create foo/bar@snap
> $ ./rbd export foo/bar /tmp/foo-1
> Exporting image: 100% complete...done.
> $ ./rbd export foo/bar@snap /tmp/snap-1
> Exporting image: 100% complete...done.
> $ md5sum /tmp/foo-1
> 83f5d244bb65eb19eddce0dc94bf6dda  /tmp/foo-1
> $ md5sum /tmp/snap-1
> 83f5d244bb65eb19eddce0dc94bf6dda  /tmp/snap-1
>
> Set the cache mode to forward and do a flush, hashes don't match - the
> snap is empty - we bang on the hot tier and don't get redirected to the
> cold tier, I suspect:
>
> $ ./ceph osd tier cache-mode foo-hot forward
> *** DEVELOPER MODE: setting PATH, PYTHONPATH and LD_LIBRARY_PATH ***
> set cache-mode for pool 'foo-hot' to forward
> $ ./rados -p foo-hot cache-flush-evict-all
> rbd_data.100a6b8b4567.0002
> rbd_id.bar
> rbd_directory
> rbd_header.100a6b8b4567
> bar.rbd
> rbd_data.100a6b8b4567.0001
> rbd_data.100a6b8b4567.
> $ ./rados -p foo-hot cache-flush-evict-all
> $ ./rbd export foo/bar /tmp/foo-2
> Exporting image: 100% complete...done.
> $ ./rbd export foo/bar@snap /tmp/snap-2
> Exporting image: 100% complete...done.
> $ md5sum /tmp/foo-2
> 83f5d244bb65eb19eddce0dc94bf6dda  /tmp/foo-2
> $ md5sum /tmp/snap-2
> f1c9645dbc14efddc7d8a322685f26eb  /tmp/snap-2
> $ od /tmp/snap-2
> 0000000 000000 000000 000000 000000 000000 000000 000000 000000
> *
> 50000000
>
> Disable the cache tier and we are back to normal:
>
> $ ./ceph osd tier remove-overlay foo
> *** DEVELOPER MODE: setting PATH, PYTHONPATH and LD_LIBRARY_PATH ***
> there is now (or already was) no overlay for 'foo'
> $ ./rbd export foo/bar /tmp/foo-3
> Exporting image: 100% complete...done.
> $ ./rbd export foo/bar@snap /tmp/snap-3
> Exporting image: 100% complete...done.
> $ md5sum /tmp/foo-3
> 83f5d244bb65eb19eddce0dc94bf6dda  /tmp/foo-3
> $ md5sum /tmp/snap-3
> 83f5d244bb65eb19eddce0dc94bf6dda  /tmp/snap-3
>
> I first reproduced it with the kernel client, rbd export was just to
> take it out of the equation.
>
>
> Also, Igor sort of raised a question in his second message: if, after
> setting the cache mode to forward and doing a flush, I open an image
> (not a snapshot, so may not be related to the above) for write (e.g.
> with rbd-fuse), I get an rbd header object in the hot pool, even though
> it's in forward mode:
>
> $ sudo ./rbd-fuse -p foo -c $PWD/ceph.conf /mnt
> $ sudo mount /mnt/bar /media
> $ sudo umount /media
> $ sudo umount /mnt
> $ ./rados -p foo-hot ls
> rbd_header.100a6b8b4567
> $ ./rados -p foo ls | grep rbd_header
> rbd_header.100a6b8b4567
>
> It's been a while since I looked into tiering, is that how it's
> supposed to work?  It looks like it happens because rbd_header op
> replies don't redirect?
>
> Thanks,
>
> Ilya


Re: [ceph-users] Broken snapshots... CEPH 0.94.2

2015-08-20 Thread Voloshanenko Igor
Exactly as in our case.

Ilya, we see the same for images on our side: headers are opened from the hot tier.

On Friday, August 21, 2015, Ilya Dryomov wrote:

> On Fri, Aug 21, 2015 at 2:02 AM, Samuel Just  > wrote:
> > What's supposed to happen is that the client transparently directs all
> > requests to the cache pool rather than the cold pool when there is a
> > cache pool.  If the kernel is sending requests to the cold pool,
> > that's probably where the bug is.  Odd.  It could also be a bug
> > specific 'forward' mode either in the client or on the osd.  Why did
> > you have it in that mode?
>
> I think I reproduced this on today's master.
>
> Setup, cache mode is writeback:
>
> $ ./ceph osd pool create foo 12 12
> *** DEVELOPER MODE: setting PATH, PYTHONPATH and LD_LIBRARY_PATH ***
> pool 'foo' created
> $ ./ceph osd pool create foo-hot 12 12
> *** DEVELOPER MODE: setting PATH, PYTHONPATH and LD_LIBRARY_PATH ***
> pool 'foo-hot' created
> $ ./ceph osd tier add foo foo-hot
> *** DEVELOPER MODE: setting PATH, PYTHONPATH and LD_LIBRARY_PATH ***
> pool 'foo-hot' is now (or already was) a tier of 'foo'
> $ ./ceph osd tier cache-mode foo-hot writeback
> *** DEVELOPER MODE: setting PATH, PYTHONPATH and LD_LIBRARY_PATH ***
> set cache-mode for pool 'foo-hot' to writeback
> $ ./ceph osd tier set-overlay foo foo-hot
> *** DEVELOPER MODE: setting PATH, PYTHONPATH and LD_LIBRARY_PATH ***
> overlay for 'foo' is now (or already was) 'foo-hot'
>
> Create an image:
>
> $ ./rbd create --size 10M --image-format 2 foo/bar
> $ sudo ./rbd-fuse -p foo -c $PWD/ceph.conf /mnt
> $ sudo mkfs.ext4 /mnt/bar
> $ sudo umount /mnt
>
> Create a snapshot, take md5sum:
>
> $ ./rbd snap create foo/bar@snap
> $ ./rbd export foo/bar /tmp/foo-1
> Exporting image: 100% complete...done.
> $ ./rbd export foo/bar@snap /tmp/snap-1
> Exporting image: 100% complete...done.
> $ md5sum /tmp/foo-1
> 83f5d244bb65eb19eddce0dc94bf6dda  /tmp/foo-1
> $ md5sum /tmp/snap-1
> 83f5d244bb65eb19eddce0dc94bf6dda  /tmp/snap-1
>
> Set the cache mode to forward and do a flush, hashes don't match - the
> snap is empty - we bang on the hot tier and don't get redirected to the
> cold tier, I suspect:
>
> $ ./ceph osd tier cache-mode foo-hot forward
> *** DEVELOPER MODE: setting PATH, PYTHONPATH and LD_LIBRARY_PATH ***
> set cache-mode for pool 'foo-hot' to forward
> $ ./rados -p foo-hot cache-flush-evict-all
> rbd_data.100a6b8b4567.0002
> rbd_id.bar
> rbd_directory
> rbd_header.100a6b8b4567
> bar.rbd
> rbd_data.100a6b8b4567.0001
> rbd_data.100a6b8b4567.
> $ ./rados -p foo-hot cache-flush-evict-all
> $ ./rbd export foo/bar /tmp/foo-2
> Exporting image: 100% complete...done.
> $ ./rbd export foo/bar@snap /tmp/snap-2
> Exporting image: 100% complete...done.
> $ md5sum /tmp/foo-2
> 83f5d244bb65eb19eddce0dc94bf6dda  /tmp/foo-2
> $ md5sum /tmp/snap-2
> f1c9645dbc14efddc7d8a322685f26eb  /tmp/snap-2
> $ od /tmp/snap-2
> 0000000 000000 000000 000000 000000 000000 000000 000000 000000
> *
> 50000000
>
> Disable the cache tier and we are back to normal:
>
> $ ./ceph osd tier remove-overlay foo
> *** DEVELOPER MODE: setting PATH, PYTHONPATH and LD_LIBRARY_PATH ***
> there is now (or already was) no overlay for 'foo'
> $ ./rbd export foo/bar /tmp/foo-3
> Exporting image: 100% complete...done.
> $ ./rbd export foo/bar@snap /tmp/snap-3
> Exporting image: 100% complete...done.
> $ md5sum /tmp/foo-3
> 83f5d244bb65eb19eddce0dc94bf6dda  /tmp/foo-3
> $ md5sum /tmp/snap-3
> 83f5d244bb65eb19eddce0dc94bf6dda  /tmp/snap-3
>
> I first reproduced it with the kernel client, rbd export was just to
> take it out of the equation.
>
>
> Also, Igor sort of raised a question in his second message: if, after
> setting the cache mode to forward and doing a flush, I open an image
> (not a snapshot, so may not be related to the above) for write (e.g.
> with rbd-fuse), I get an rbd header object in the hot pool, even though
> it's in forward mode:
>
> $ sudo ./rbd-fuse -p foo -c $PWD/ceph.conf /mnt
> $ sudo mount /mnt/bar /media
> $ sudo umount /media
> $ sudo umount /mnt
> $ ./rados -p foo-hot ls
> rbd_header.100a6b8b4567
> $ ./rados -p foo ls | grep rbd_header
> rbd_header.100a6b8b4567
>
> It's been a while since I looked into tiering, is that how it's
> supposed to work?  It looks like it happens because rbd_header op
> replies don't redirect?
>
> Thanks,
>
> Ilya
>


Re: [ceph-users] Broken snapshots... CEPH 0.94.2

2015-08-20 Thread Ilya Dryomov
On Fri, Aug 21, 2015 at 2:02 AM, Samuel Just  wrote:
> What's supposed to happen is that the client transparently directs all
> requests to the cache pool rather than the cold pool when there is a
> cache pool.  If the kernel is sending requests to the cold pool,
> that's probably where the bug is.  Odd.  It could also be a bug
> specific to 'forward' mode, either in the client or on the osd.  Why did
> you have it in that mode?

I think I reproduced this on today's master.

Setup, cache mode is writeback:

$ ./ceph osd pool create foo 12 12
*** DEVELOPER MODE: setting PATH, PYTHONPATH and LD_LIBRARY_PATH ***
pool 'foo' created
$ ./ceph osd pool create foo-hot 12 12
*** DEVELOPER MODE: setting PATH, PYTHONPATH and LD_LIBRARY_PATH ***
pool 'foo-hot' created
$ ./ceph osd tier add foo foo-hot
*** DEVELOPER MODE: setting PATH, PYTHONPATH and LD_LIBRARY_PATH ***
pool 'foo-hot' is now (or already was) a tier of 'foo'
$ ./ceph osd tier cache-mode foo-hot writeback
*** DEVELOPER MODE: setting PATH, PYTHONPATH and LD_LIBRARY_PATH ***
set cache-mode for pool 'foo-hot' to writeback
$ ./ceph osd tier set-overlay foo foo-hot
*** DEVELOPER MODE: setting PATH, PYTHONPATH and LD_LIBRARY_PATH ***
overlay for 'foo' is now (or already was) 'foo-hot'

Create an image:

$ ./rbd create --size 10M --image-format 2 foo/bar
$ sudo ./rbd-fuse -p foo -c $PWD/ceph.conf /mnt
$ sudo mkfs.ext4 /mnt/bar
$ sudo umount /mnt

Create a snapshot, take md5sum:

$ ./rbd snap create foo/bar@snap
$ ./rbd export foo/bar /tmp/foo-1
Exporting image: 100% complete...done.
$ ./rbd export foo/bar@snap /tmp/snap-1
Exporting image: 100% complete...done.
$ md5sum /tmp/foo-1
83f5d244bb65eb19eddce0dc94bf6dda  /tmp/foo-1
$ md5sum /tmp/snap-1
83f5d244bb65eb19eddce0dc94bf6dda  /tmp/snap-1

Set the cache mode to forward and do a flush, hashes don't match - the
snap is empty - we bang on the hot tier and don't get redirected to the
cold tier, I suspect:

$ ./ceph osd tier cache-mode foo-hot forward
*** DEVELOPER MODE: setting PATH, PYTHONPATH and LD_LIBRARY_PATH ***
set cache-mode for pool 'foo-hot' to forward
$ ./rados -p foo-hot cache-flush-evict-all
rbd_data.100a6b8b4567.0002
rbd_id.bar
rbd_directory
rbd_header.100a6b8b4567
bar.rbd
rbd_data.100a6b8b4567.0001
rbd_data.100a6b8b4567.
$ ./rados -p foo-hot cache-flush-evict-all
$ ./rbd export foo/bar /tmp/foo-2
Exporting image: 100% complete...done.
$ ./rbd export foo/bar@snap /tmp/snap-2
Exporting image: 100% complete...done.
$ md5sum /tmp/foo-2
83f5d244bb65eb19eddce0dc94bf6dda  /tmp/foo-2
$ md5sum /tmp/snap-2
f1c9645dbc14efddc7d8a322685f26eb  /tmp/snap-2
$ od /tmp/snap-2
0000000 000000 000000 000000 000000 000000 000000 000000 000000
*
50000000
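(An aside on reading the od output above: the `*` line means the previous data line repeats until the final octal offset, so the snapshot export is entirely zeros. The same convention can be seen on a tiny zero-filled input:)

```shell
# od collapses runs of identical lines into a single '*' and ends with
# the final byte offset in octal; 16 zero bytes give one data line
# (eight 2-byte octal words) followed by the offset 0000020 (= 16).
head -c 16 /dev/zero | od
# prints:
# 0000000 000000 000000 000000 000000 000000 000000 000000 000000
# 0000020
```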

Disable the cache tier and we are back to normal:

$ ./ceph osd tier remove-overlay foo
*** DEVELOPER MODE: setting PATH, PYTHONPATH and LD_LIBRARY_PATH ***
there is now (or already was) no overlay for 'foo'
$ ./rbd export foo/bar /tmp/foo-3
Exporting image: 100% complete...done.
$ ./rbd export foo/bar@snap /tmp/snap-3
Exporting image: 100% complete...done.
$ md5sum /tmp/foo-3
83f5d244bb65eb19eddce0dc94bf6dda  /tmp/foo-3
$ md5sum /tmp/snap-3
83f5d244bb65eb19eddce0dc94bf6dda  /tmp/snap-3

I first reproduced it with the kernel client, rbd export was just to
take it out of the equation.


Also, Igor sort of raised a question in his second message: if, after
setting the cache mode to forward and doing a flush, I open an image
(not a snapshot, so may not be related to the above) for write (e.g.
with rbd-fuse), I get an rbd header object in the hot pool, even though
it's in forward mode:

$ sudo ./rbd-fuse -p foo -c $PWD/ceph.conf /mnt
$ sudo mount /mnt/bar /media
$ sudo umount /media
$ sudo umount /mnt
$ ./rados -p foo-hot ls
rbd_header.100a6b8b4567
$ ./rados -p foo ls | grep rbd_header
rbd_header.100a6b8b4567

It's been a while since I looked into tiering, is that how it's
supposed to work?  It looks like it happens because rbd_header op
replies don't redirect?

Thanks,

Ilya


Re: [ceph-users] Broken snapshots... CEPH 0.94.2

2015-08-20 Thread Samuel Just
It would help greatly if, on a disposable cluster, you could reproduce
the snapshot problem with

debug osd = 20
debug filestore = 20
debug ms = 1

on all of the osds and attach the logs to the bug report.  That should
make it easier to work out what is going on.
-Sam
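(For reference, a sketch of where those debug settings would go, assuming the usual ceph.conf layout; restart the OSDs after editing:)

```
[osd]
    debug osd = 20
    debug filestore = 20
    debug ms = 1
```

They can also be applied to running OSDs without a restart, e.g. with `ceph tell osd.* injectargs '--debug-osd 20 --debug-filestore 20 --debug-ms 1'`.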

On Thu, Aug 20, 2015 at 4:40 PM, Voloshanenko Igor
 wrote:
> The attachment was blocked, so posting it as text...
>
> root@zzz:~# cat update_osd.sh
> #!/bin/bash
>
> ID=$1
> echo "Process OSD# ${ID}"
>
> DEV=`mount | grep "ceph-${ID} " | cut -d " " -f 1`
> echo "OSD# ${ID} hosted on ${DEV::-1}"
>
> TYPE_RAW=`smartctl -a ${DEV} | grep Rota | cut -d " " -f 6`
> if [ "${TYPE_RAW}" == "Solid" ]
> then
> TYPE="ssd"
> elif [ "${TYPE_RAW}" == "7200" ]
> then
> TYPE="platter"
> fi
>
> echo "OSD Type = ${TYPE}"
>
> HOST=`hostname`
> echo "Current node hostname: ${HOST}"
>
> echo "Set noout option for CEPH cluster"
> ceph osd set noout
>
> echo "Marked OSD # ${ID} out"
> ceph osd out ${ID}
>
> echo "Remove OSD # ${ID} from CRUSHMAP"
> ceph osd crush remove osd.${ID}
>
> echo "Delete auth for OSD# ${ID}"
> ceph auth del osd.${ID}
>
> echo "Stop OSD# ${ID}"
> stop ceph-osd id=${ID}
>
> echo "Remove OSD # ${ID} from cluster"
> ceph osd rm ${ID}
>
> echo "Unmount OSD# ${ID}"
> umount ${DEV}
>
> echo "ZAP ${DEV::-1}"
> ceph-disk zap ${DEV::-1}
>
> echo "Create new OSD with ${DEV::-1}"
> ceph-disk-prepare ${DEV::-1}
>
> echo "Activate new OSD"
> ceph-disk-activate ${DEV}
>
> echo "Dump current CRUSHMAP"
> ceph osd getcrushmap -o cm.old
>
> echo "Decompile CRUSHMAP"
> crushtool -d cm.old -o cm
>
> echo "Place new OSD in proper place"
> sed -i "s/device${ID}/osd.${ID}/" cm
> LINE=`cat -n cm | sed -n "/${HOST}-${TYPE} {/,/}/p" | tail -n 1 | awk
> '{print $1}'`
> sed -i "${LINE}iitem osd.${ID} weight 1.000" cm
>
> echo "Modify ${HOST} weight into CRUSHMAP"
> sed -i "s/item ${HOST}-${TYPE} weight 9.000/item ${HOST}-${TYPE} weight
> 1.000/" cm
>
> echo "Compile new CRUSHMAP"
> crushtool -c cm -o cm.new
>
> echo "Inject new CRUSHMAP"
> ceph osd setcrushmap -i cm.new
>
> #echo "Clean..."
> #rm -rf cm cm.new
>
> echo "Unset noout option for CEPH cluster"
> ceph osd unset noout
>
> echo "OSD recreated... Waiting for rebalancing..."
>
> 2015-08-21 2:37 GMT+03:00 Voloshanenko Igor :
>>
>> As we now use journal collocation (because we want to utilize the cache
>> layer), I use ceph-disk to create the new OSD, with the journal size
>> changed in ceph.conf. I don't prefer manual work.
>>
>> So I created a very simple script to update the journal size.
>>
>> 2015-08-21 2:25 GMT+03:00 Voloshanenko Igor :
>>>
>>> Exactly
>>>
>>> On Friday, August 21, 2015, Samuel Just wrote:
>>>
 And you adjusted the journals by removing the osd, recreating it with
 a larger journal, and reinserting it?
 -Sam

 On Thu, Aug 20, 2015 at 4:24 PM, Voloshanenko Igor
  wrote:
 > Right ( but also was rebalancing cycle 2 day before pgs corrupted)
 >
 > 2015-08-21 2:23 GMT+03:00 Samuel Just :
 >>
 >> Specifically, the snap behavior (we already know that the pgs went
 >> inconsistent while the pool was in writeback mode, right?).
 >> -Sam
 >>
 >> On Thu, Aug 20, 2015 at 4:22 PM, Samuel Just 
 >> wrote:
 >> > Yeah, I'm trying to confirm that the issues did happen in writeback
 >> > mode.
 >> > -Sam
 >> >
 >> > On Thu, Aug 20, 2015 at 4:21 PM, Voloshanenko Igor
 >> >  wrote:
 >> >> Right. But issues started...
 >> >>
 >> >> 2015-08-21 2:20 GMT+03:00 Samuel Just :
 >> >>>
 >> >>> But that was still in writeback mode, right?
 >> >>> -Sam
 >> >>>
 >> >>> On Thu, Aug 20, 2015 at 4:18 PM, Voloshanenko Igor
 >> >>>  wrote:
 >> >>> > WE haven't set values for max_bytes / max_objects.. and all
 >> >>> > data
 >> >>> > initially
 >> >>> > writes only to cache layer and not flushed at all to cold
 >> >>> > layer.
 >> >>> >
 >> >>> > Then we received notification from monitoring that we collect
 >> >>> > about
 >> >>> > 750GB in
 >> >>> > hot pool ) So i changed values for max_object_bytes to be 0,9
 >> >>> > of
 >> >>> > disk
 >> >>> > size... And then evicting/flushing started...
 >> >>> >
 >> >>> > And issue with snapshots arrived
 >> >>> >
 >> >>> > 2015-08-21 2:15 GMT+03:00 Samuel Just :
 >> >>> >>
 >> >>> >> Not sure what you mean by:
 >> >>> >>
 >> >>> >> but it's stop to work in same moment, when cache layer
 >> >>> >> fulfilled
 >> >>> >> with
 >> >>> >> data and evict/flush started...
 >> >>> >> -Sam
 >> >>> >>
 >> >>> >> On Thu, Aug 20, 2015 at 4:11 PM, Voloshanenko Igor
 >> >>> >>  wrote:
 >> >>> >> > No, when we start draining cache - bad pgs was in place...
 >> >>> >> > We have big rebalance (disk by disk - to change journal side
 >> >>> >> > on
 >> >>> >> > both
 >> >>> >> > hot/cold layers).. All was Ok, but after 2 days - arrived scrub errors and 2 pgs inconsistent...

Re: [ceph-users] Broken snapshots... CEPH 0.94.2

2015-08-20 Thread Voloshanenko Igor
The attachment was blocked, so posting it as text...

root@zzz:~# cat update_osd.sh
#!/bin/bash

ID=$1
echo "Process OSD# ${ID}"

DEV=`mount | grep "ceph-${ID} " | cut -d " " -f 1`
echo "OSD# ${ID} hosted on ${DEV::-1}"

TYPE_RAW=`smartctl -a ${DEV} | grep Rota | cut -d " " -f 6`
if [ "${TYPE_RAW}" == "Solid" ]
then
TYPE="ssd"
elif [ "${TYPE_RAW}" == "7200" ]
then
TYPE="platter"
fi

echo "OSD Type = ${TYPE}"

HOST=`hostname`
echo "Current node hostname: ${HOST}"

echo "Set noout option for CEPH cluster"
ceph osd set noout

echo "Marked OSD # ${ID} out"
ceph osd out ${ID}

echo "Remove OSD # ${ID} from CRUSHMAP"
ceph osd crush remove osd.${ID}

echo "Delete auth for OSD# ${ID}"
ceph auth del osd.${ID}

echo "Stop OSD# ${ID}"
stop ceph-osd id=${ID}

echo "Remove OSD # ${ID} from cluster"
ceph osd rm ${ID}

echo "Unmount OSD# ${ID}"
umount ${DEV}

echo "ZAP ${DEV::-1}"
ceph-disk zap ${DEV::-1}

echo "Create new OSD with ${DEV::-1}"
ceph-disk-prepare ${DEV::-1}

echo "Activate new OSD"
ceph-disk-activate ${DEV}

echo "Dump current CRUSHMAP"
ceph osd getcrushmap -o cm.old

echo "Decompile CRUSHMAP"
crushtool -d cm.old -o cm

echo "Place new OSD in proper place"
sed -i "s/device${ID}/osd.${ID}/" cm
LINE=`cat -n cm | sed -n "/${HOST}-${TYPE} {/,/}/p" | tail -n 1 | awk
'{print $1}'`
sed -i "${LINE}iitem osd.${ID} weight 1.000" cm

echo "Modify ${HOST} weight into CRUSHMAP"
sed -i "s/item ${HOST}-${TYPE} weight 9.000/item ${HOST}-${TYPE} weight
1.000/" cm

echo "Compile new CRUSHMAP"
crushtool -c cm -o cm.new

echo "Inject new CRUSHMAP"
ceph osd setcrushmap -i cm.new

#echo "Clean..."
#rm -rf cm cm.new

echo "Unset noout option for CEPH cluster"
ceph osd unset noout

echo "OSD recreated... Waiting for rebalancing..."
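(One fragile spot in the script above: `${DEV::-1}` assumes the partition device name ends in exactly one digit. A minimal illustration with hypothetical device names; `${DEV%?}` is the portable equivalent:)

```shell
# ${DEV::-1} (bash >= 4.2) and ${DEV%?} (POSIX) both drop the last
# character, turning a partition device into its parent disk.
DEV=/dev/sdb1
echo "${DEV%?}"    # prints /dev/sdb
# Note this breaks for NVMe-style names (/dev/nvme0n1p1 -> parent is
# /dev/nvme0n1, not /dev/nvme0n1p) and for partitions >= 10
# (/dev/sdb10 -> /dev/sdb1).
```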

2015-08-21 2:37 GMT+03:00 Voloshanenko Igor :

> As we now use journal collocation (because we want to utilize the cache
> layer), I use ceph-disk to create the new OSD, with the journal size
> changed in ceph.conf. I don't prefer manual work.
>
> So I created a very simple script to update the journal size.
>
> 2015-08-21 2:25 GMT+03:00 Voloshanenko Igor :
>
>> Exactly
>>
>> On Friday, August 21, 2015, Samuel Just wrote:
>>
>> And you adjusted the journals by removing the osd, recreating it with
>>> a larger journal, and reinserting it?
>>> -Sam
>>>
>>> On Thu, Aug 20, 2015 at 4:24 PM, Voloshanenko Igor
>>>  wrote:
>>> > Right ( but also was rebalancing cycle 2 day before pgs corrupted)
>>> >
>>> > 2015-08-21 2:23 GMT+03:00 Samuel Just :
>>> >>
>>> >> Specifically, the snap behavior (we already know that the pgs went
>>> >> inconsistent while the pool was in writeback mode, right?).
>>> >> -Sam
>>> >>
>>> >> On Thu, Aug 20, 2015 at 4:22 PM, Samuel Just 
>>> wrote:
>>> >> > Yeah, I'm trying to confirm that the issues did happen in writeback
>>> >> > mode.
>>> >> > -Sam
>>> >> >
>>> >> > On Thu, Aug 20, 2015 at 4:21 PM, Voloshanenko Igor
>>> >> >  wrote:
>>> >> >> Right. But issues started...
>>> >> >>
>>> >> >> 2015-08-21 2:20 GMT+03:00 Samuel Just :
>>> >> >>>
>>> >> >>> But that was still in writeback mode, right?
>>> >> >>> -Sam
>>> >> >>>
>>> >> >>> On Thu, Aug 20, 2015 at 4:18 PM, Voloshanenko Igor
>>> >> >>>  wrote:
>>> >> >>> > WE haven't set values for max_bytes / max_objects.. and all data
>>> >> >>> > initially
>>> >> >>> > writes only to cache layer and not flushed at all to cold layer.
>>> >> >>> >
>>> >> >>> > Then we received notification from monitoring that we collect
>>> about
>>> >> >>> > 750GB in
>>> >> >>> > hot pool ) So i changed values for max_object_bytes to be 0,9 of
>>> >> >>> > disk
>>> >> >>> > size... And then evicting/flushing started...
>>> >> >>> >
>>> >> >>> > And issue with snapshots arrived
>>> >> >>> >
>>> >> >>> > 2015-08-21 2:15 GMT+03:00 Samuel Just :
>>> >> >>> >>
>>> >> >>> >> Not sure what you mean by:
>>> >> >>> >>
>>> >> >>> >> but it's stop to work in same moment, when cache layer
>>> fulfilled
>>> >> >>> >> with
>>> >> >>> >> data and evict/flush started...
>>> >> >>> >> -Sam
>>> >> >>> >>
>>> >> >>> >> On Thu, Aug 20, 2015 at 4:11 PM, Voloshanenko Igor
>>> >> >>> >>  wrote:
>>> >> >>> >> > No, when we start draining cache - bad pgs was in place...
>>> >> >>> >> > We have big rebalance (disk by disk - to change journal side
>>> on
>>> >> >>> >> > both
>>> >> >>> >> > hot/cold layers).. All was Ok, but after 2 days - arrived
>>> scrub
>>> >> >>> >> > errors
>>> >> >>> >> > and 2
>>> >> >>> >> > pgs inconsistent...
>>> >> >>> >> >
>>> >> >>> >> > In writeback - yes, looks like snapshot works good. but it's
>>> stop
>>> >> >>> >> > to
>>> >> >>> >> > work in
>>> >> >>> >> > same moment, when cache layer fulfilled with data and
>>> evict/flush
>>> >> >>> >> > started...
>>> >> >>> >> >
>>> >> >>> >> >
>>> >> >>> >> >
>>> >> >>> >> > 2015-08-21 2:09 GMT+03:00 Samuel Just :
>>> >> >>> >> >>
>>> >> >>> >> >> So you started draining the cache pool before you saw
>>> either the
>>> >> >>> >> >> inconsistent pgs 

Re: [ceph-users] Broken snapshots... CEPH 0.94.2

2015-08-20 Thread Voloshanenko Igor
Will do, Sam!

Thanks in advance for your help!

2015-08-21 2:28 GMT+03:00 Samuel Just :

> Ok, create a ticket with a timeline and all of this information, I'll
> try to look into it more tomorrow.
> -Sam
>
> On Thu, Aug 20, 2015 at 4:25 PM, Voloshanenko Igor
>  wrote:
> > Exactly
> >
> > On Friday, August 21, 2015, Samuel Just wrote:
> >
> >> And you adjusted the journals by removing the osd, recreating it with
> >> a larger journal, and reinserting it?
> >> -Sam
> >>
> >> On Thu, Aug 20, 2015 at 4:24 PM, Voloshanenko Igor
> >>  wrote:
> >> > Right ( but also was rebalancing cycle 2 day before pgs corrupted)
> >> >
> >> > 2015-08-21 2:23 GMT+03:00 Samuel Just :
> >> >>
> >> >> Specifically, the snap behavior (we already know that the pgs went
> >> >> inconsistent while the pool was in writeback mode, right?).
> >> >> -Sam
> >> >>
> >> >> On Thu, Aug 20, 2015 at 4:22 PM, Samuel Just 
> wrote:
> >> >> > Yeah, I'm trying to confirm that the issues did happen in writeback
> >> >> > mode.
> >> >> > -Sam
> >> >> >
> >> >> > On Thu, Aug 20, 2015 at 4:21 PM, Voloshanenko Igor
> >> >> >  wrote:
> >> >> >> Right. But issues started...
> >> >> >>
> >> >> >> 2015-08-21 2:20 GMT+03:00 Samuel Just :
> >> >> >>>
> >> >> >>> But that was still in writeback mode, right?
> >> >> >>> -Sam
> >> >> >>>
> >> >> >>> On Thu, Aug 20, 2015 at 4:18 PM, Voloshanenko Igor
> >> >> >>>  wrote:
> >> >> >>> > WE haven't set values for max_bytes / max_objects.. and all
> data
> >> >> >>> > initially
> >> >> >>> > writes only to cache layer and not flushed at all to cold
> layer.
> >> >> >>> >
> >> >> >>> > Then we received notification from monitoring that we collect
> >> >> >>> > about
> >> >> >>> > 750GB in
> >> >> >>> > hot pool ) So i changed values for max_object_bytes to be 0,9
> of
> >> >> >>> > disk
> >> >> >>> > size... And then evicting/flushing started...
> >> >> >>> >
> >> >> >>> > And issue with snapshots arrived
> >> >> >>> >
> >> >> >>> > 2015-08-21 2:15 GMT+03:00 Samuel Just :
> >> >> >>> >>
> >> >> >>> >> Not sure what you mean by:
> >> >> >>> >>
> >> >> >>> >> but it's stop to work in same moment, when cache layer
> fulfilled
> >> >> >>> >> with
> >> >> >>> >> data and evict/flush started...
> >> >> >>> >> -Sam
> >> >> >>> >>
> >> >> >>> >> On Thu, Aug 20, 2015 at 4:11 PM, Voloshanenko Igor
> >> >> >>> >>  wrote:
> >> >> >>> >> > No, when we start draining cache - bad pgs was in place...
> >> >> >>> >> > We have big rebalance (disk by disk - to change journal side
> >> >> >>> >> > on
> >> >> >>> >> > both
> >> >> >>> >> > hot/cold layers).. All was Ok, but after 2 days - arrived
> >> >> >>> >> > scrub
> >> >> >>> >> > errors
> >> >> >>> >> > and 2
> >> >> >>> >> > pgs inconsistent...
> >> >> >>> >> >
> >> >> >>> >> > In writeback - yes, looks like snapshot works good. but it's
> >> >> >>> >> > stop
> >> >> >>> >> > to
> >> >> >>> >> > work in
> >> >> >>> >> > same moment, when cache layer fulfilled with data and
> >> >> >>> >> > evict/flush
> >> >> >>> >> > started...
> >> >> >>> >> >
> >> >> >>> >> >
> >> >> >>> >> >
> >> >> >>> >> > 2015-08-21 2:09 GMT+03:00 Samuel Just :
> >> >> >>> >> >>
> >> >> >>> >> >> So you started draining the cache pool before you saw
> either
> >> >> >>> >> >> the
> >> >> >>> >> >> inconsistent pgs or the anomalous snap behavior?  (That is,
> >> >> >>> >> >> writeback
> >> >> >>> >> >> mode was working correctly?)
> >> >> >>> >> >> -Sam
> >> >> >>> >> >>
> >> >> >>> >> >> On Thu, Aug 20, 2015 at 4:07 PM, Voloshanenko Igor
> >> >> >>> >> >>  wrote:
> >> >> >>> >> >> > Good joke )
> >> >> >>> >> >> >
> >> >> >>> >> >> > 2015-08-21 2:06 GMT+03:00 Samuel Just  >:
> >> >> >>> >> >> >>
> >> >> >>> >> >> >> Certainly, don't reproduce this with a cluster you care
> >> >> >>> >> >> >> about
> >> >> >>> >> >> >> :).
> >> >> >>> >> >> >> -Sam
> >> >> >>> >> >> >>
> >> >> >>> >> >> >> On Thu, Aug 20, 2015 at 4:02 PM, Samuel Just
> >> >> >>> >> >> >> 
> >> >> >>> >> >> >> wrote:
> >> >> >>> >> >> >> > What's supposed to happen is that the client
> >> >> >>> >> >> >> > transparently
> >> >> >>> >> >> >> > directs
> >> >> >>> >> >> >> > all
> >> >> >>> >> >> >> > requests to the cache pool rather than the cold pool
> >> >> >>> >> >> >> > when
> >> >> >>> >> >> >> > there
> >> >> >>> >> >> >> > is
> >> >> >>> >> >> >> > a
> >> >> >>> >> >> >> > cache pool.  If the kernel is sending requests to the
> >> >> >>> >> >> >> > cold
> >> >> >>> >> >> >> > pool,
> >> >> >>> >> >> >> > that's probably where the bug is.  Odd.  It could also
> >> >> >>> >> >> >> > be a
> >> >> >>> >> >> >> > bug
> >> >> >>> >> >> >> > specific 'forward' mode either in the client or on the
> >> >> >>> >> >> >> > osd.
> >> >> >>> >> >> >> > Why
> >> >> >>> >> >> >> > did
> >> >> >>> >> >> >> > you have it in that mode?
> >> >> >>> >> >> >> > -Sam
> >> >> >>> >> >> >> >
> >> >> >>> >> >> >> > On Thu, Aug 20, 2015 at 3:58 PM, Voloshanenko Igor
> >> >> >>> >> >> >> >  wrote:
> >> >> >>> >> >> >> >> We used 4.x branch, a

Re: [ceph-users] Broken snapshots... CEPH 0.94.2

2015-08-20 Thread Voloshanenko Igor
As we now use journal collocation (because we want to utilize the cache
layer), I use ceph-disk to create the new OSD, with the journal size changed
in ceph.conf. I don't prefer manual work.

So I created a very simple script to update the journal size.
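(For context, the journal size in question is set in ceph.conf before the OSD is re-prepared; a sketch with a placeholder value, not our actual setting:)

```
[osd]
    ; journal size in MB; 10240 is a hypothetical example value
    osd journal size = 10240
```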

2015-08-21 2:25 GMT+03:00 Voloshanenko Igor :

> Exactly
>
> On Friday, August 21, 2015, Samuel Just wrote:
>
> And you adjusted the journals by removing the osd, recreating it with
>> a larger journal, and reinserting it?
>> -Sam
>>
>> On Thu, Aug 20, 2015 at 4:24 PM, Voloshanenko Igor
>>  wrote:
>> > Right ( but also was rebalancing cycle 2 day before pgs corrupted)
>> >
>> > 2015-08-21 2:23 GMT+03:00 Samuel Just :
>> >>
>> >> Specifically, the snap behavior (we already know that the pgs went
>> >> inconsistent while the pool was in writeback mode, right?).
>> >> -Sam
>> >>
>> >> On Thu, Aug 20, 2015 at 4:22 PM, Samuel Just  wrote:
>> >> > Yeah, I'm trying to confirm that the issues did happen in writeback
>> >> > mode.
>> >> > -Sam
>> >> >
>> >> > On Thu, Aug 20, 2015 at 4:21 PM, Voloshanenko Igor
>> >> >  wrote:
>> >> >> Right. But issues started...
>> >> >>
>> >> >> 2015-08-21 2:20 GMT+03:00 Samuel Just :
>> >> >>>
>> >> >>> But that was still in writeback mode, right?
>> >> >>> -Sam
>> >> >>>
>> >> >>> On Thu, Aug 20, 2015 at 4:18 PM, Voloshanenko Igor
>> >> >>>  wrote:
>> >> >>> > WE haven't set values for max_bytes / max_objects.. and all data
>> >> >>> > initially
>> >> >>> > writes only to cache layer and not flushed at all to cold layer.
>> >> >>> >
>> >> >>> > Then we received notification from monitoring that we collect
>> about
>> >> >>> > 750GB in
>> >> >>> > hot pool ) So i changed values for max_object_bytes to be 0,9 of
>> >> >>> > disk
>> >> >>> > size... And then evicting/flushing started...
>> >> >>> >
>> >> >>> > And issue with snapshots arrived
>> >> >>> >
>> >> >>> > 2015-08-21 2:15 GMT+03:00 Samuel Just :
>> >> >>> >>
>> >> >>> >> Not sure what you mean by:
>> >> >>> >>
>> >> >>> >> but it's stop to work in same moment, when cache layer fulfilled
>> >> >>> >> with
>> >> >>> >> data and evict/flush started...
>> >> >>> >> -Sam
>> >> >>> >>
>> >> >>> >> On Thu, Aug 20, 2015 at 4:11 PM, Voloshanenko Igor
>> >> >>> >>  wrote:
>> >> >>> >> > No, when we start draining cache - bad pgs was in place...
>> >> >>> >> > We have big rebalance (disk by disk - to change journal side
>> on
>> >> >>> >> > both
>> >> >>> >> > hot/cold layers).. All was Ok, but after 2 days - arrived
>> scrub
>> >> >>> >> > errors
>> >> >>> >> > and 2
>> >> >>> >> > pgs inconsistent...
>> >> >>> >> >
>> >> >>> >> > In writeback - yes, looks like snapshot works good. but it's
>> stop
>> >> >>> >> > to
>> >> >>> >> > work in
>> >> >>> >> > same moment, when cache layer fulfilled with data and
>> evict/flush
>> >> >>> >> > started...
>> >> >>> >> >
>> >> >>> >> >
>> >> >>> >> >
>> >> >>> >> > 2015-08-21 2:09 GMT+03:00 Samuel Just :
>> >> >>> >> >>
>> >> >>> >> >> So you started draining the cache pool before you saw either
>> the
>> >> >>> >> >> inconsistent pgs or the anomalous snap behavior?  (That is,
>> >> >>> >> >> writeback
>> >> >>> >> >> mode was working correctly?)
>> >> >>> >> >> -Sam
>> >> >>> >> >>
>> >> >>> >> >> On Thu, Aug 20, 2015 at 4:07 PM, Voloshanenko Igor
>> >> >>> >> >>  wrote:
>> >> >>> >> >> > Good joke )
>> >> >>> >> >> >
>> >> >>> >> >> > 2015-08-21 2:06 GMT+03:00 Samuel Just :
>> >> >>> >> >> >>
>> >> >>> >> >> >> Certainly, don't reproduce this with a cluster you care
>> about
>> >> >>> >> >> >> :).
>> >> >>> >> >> >> -Sam
>> >> >>> >> >> >>
>> >> >>> >> >> >> On Thu, Aug 20, 2015 at 4:02 PM, Samuel Just
>> >> >>> >> >> >> 
>> >> >>> >> >> >> wrote:
>> >> >>> >> >> >> > What's supposed to happen is that the client
>> transparently
>> >> >>> >> >> >> > directs
>> >> >>> >> >> >> > all
>> >> >>> >> >> >> > requests to the cache pool rather than the cold pool
>> when
>> >> >>> >> >> >> > there
>> >> >>> >> >> >> > is
>> >> >>> >> >> >> > a
>> >> >>> >> >> >> > cache pool.  If the kernel is sending requests to the
>> cold
>> >> >>> >> >> >> > pool,
>> >> >>> >> >> >> > that's probably where the bug is.  Odd.  It could also
>> be a
>> >> >>> >> >> >> > bug
>> >> >>> >> >> >> > specific 'forward' mode either in the client or on the
>> osd.
>> >> >>> >> >> >> > Why
>> >> >>> >> >> >> > did
>> >> >>> >> >> >> > you have it in that mode?
>> >> >>> >> >> >> > -Sam
>> >> >>> >> >> >> >
>> >> >>> >> >> >> > On Thu, Aug 20, 2015 at 3:58 PM, Voloshanenko Igor
>> >> >>> >> >> >> >  wrote:
>> >> >>> >> >> >> >> We used 4.x branch, as we have "very good" Samsung 850
>> pro
>> >> >>> >> >> >> >> in
>> >> >>> >> >> >> >> production,
>> >> >>> >> >> >> >> and they don;t support ncq_trim...
>> >> >>> >> >> >> >>
>> >> >>> >> >> >> >> And 4,x first branch which include exceptions for this
>> in
>> >> >>> >> >> >> >> libsata.c.
>> >> >>> >> >> >> >>
>> >> >>> >> >> >> >> sure we can backport this 1 line to 3.x branch, but we
>> >> >>> >> >> >> >> p

Re: [ceph-users] Broken snapshots... CEPH 0.94.2

2015-08-20 Thread Samuel Just
Ok, create a ticket with a timeline and all of this information, I'll
try to look into it more tomorrow.
-Sam

On Thu, Aug 20, 2015 at 4:25 PM, Voloshanenko Igor
 wrote:
> Exactly
>
> On Friday, August 21, 2015, Samuel Just wrote:
>
>> And you adjusted the journals by removing the osd, recreating it with
>> a larger journal, and reinserting it?
>> -Sam
>>
>> On Thu, Aug 20, 2015 at 4:24 PM, Voloshanenko Igor
>>  wrote:
>> > Right ( but also was rebalancing cycle 2 day before pgs corrupted)
>> >
>> > 2015-08-21 2:23 GMT+03:00 Samuel Just :
>> >>
>> >> Specifically, the snap behavior (we already know that the pgs went
>> >> inconsistent while the pool was in writeback mode, right?).
>> >> -Sam
>> >>
>> >> On Thu, Aug 20, 2015 at 4:22 PM, Samuel Just  wrote:
>> >> > Yeah, I'm trying to confirm that the issues did happen in writeback
>> >> > mode.
>> >> > -Sam
>> >> >
>> >> > On Thu, Aug 20, 2015 at 4:21 PM, Voloshanenko Igor
>> >> >  wrote:
>> >> >> Right. But issues started...
>> >> >>
>> >> >> 2015-08-21 2:20 GMT+03:00 Samuel Just :
>> >> >>>
>> >> >>> But that was still in writeback mode, right?
>> >> >>> -Sam
>> >> >>>
>> >> >>> On Thu, Aug 20, 2015 at 4:18 PM, Voloshanenko Igor
>> >> >>>  wrote:
>> >> >>> > WE haven't set values for max_bytes / max_objects.. and all data
>> >> >>> > initially
>> >> >>> > writes only to cache layer and not flushed at all to cold layer.
>> >> >>> >
>> >> >>> > Then we received notification from monitoring that we collect
>> >> >>> > about
>> >> >>> > 750GB in
>> >> >>> > hot pool ) So i changed values for max_object_bytes to be 0,9 of
>> >> >>> > disk
>> >> >>> > size... And then evicting/flushing started...
>> >> >>> >
>> >> >>> > And issue with snapshots arrived
>> >> >>> >
>> >> >>> > 2015-08-21 2:15 GMT+03:00 Samuel Just :
>> >> >>> >>
>> >> >>> >> Not sure what you mean by:
>> >> >>> >>
>> >> >>> >> but it's stop to work in same moment, when cache layer fulfilled
>> >> >>> >> with
>> >> >>> >> data and evict/flush started...
>> >> >>> >> -Sam
>> >> >>> >>
>> >> >>> >> On Thu, Aug 20, 2015 at 4:11 PM, Voloshanenko Igor
>> >> >>> >>  wrote:
>> >> >>> >> > No, when we start draining cache - bad pgs was in place...
>> >> >>> >> > We have big rebalance (disk by disk - to change journal side
>> >> >>> >> > on
>> >> >>> >> > both
>> >> >>> >> > hot/cold layers).. All was Ok, but after 2 days - arrived
>> >> >>> >> > scrub
>> >> >>> >> > errors
>> >> >>> >> > and 2
>> >> >>> >> > pgs inconsistent...
>> >> >>> >> >
>> >> >>> >> > In writeback - yes, looks like snapshot works good. but it's
>> >> >>> >> > stop
>> >> >>> >> > to
>> >> >>> >> > work in
>> >> >>> >> > same moment, when cache layer fulfilled with data and
>> >> >>> >> > evict/flush
>> >> >>> >> > started...
>> >> >>> >> >
>> >> >>> >> >
>> >> >>> >> >
>> >> >>> >> > 2015-08-21 2:09 GMT+03:00 Samuel Just :
>> >> >>> >> >>
>> >> >>> >> >> So you started draining the cache pool before you saw either
>> >> >>> >> >> the
>> >> >>> >> >> inconsistent pgs or the anomalous snap behavior?  (That is,
>> >> >>> >> >> writeback
>> >> >>> >> >> mode was working correctly?)
>> >> >>> >> >> -Sam
>> >> >>> >> >>
>> >> >>> >> >> On Thu, Aug 20, 2015 at 4:07 PM, Voloshanenko Igor
>> >> >>> >> >>  wrote:
>> >> >>> >> >> > Good joke )
>> >> >>> >> >> >
>> >> >>> >> >> > 2015-08-21 2:06 GMT+03:00 Samuel Just :
>> >> >>> >> >> >>
>> >> >>> >> >> >> Certainly, don't reproduce this with a cluster you care
>> >> >>> >> >> >> about
>> >> >>> >> >> >> :).
>> >> >>> >> >> >> -Sam
>> >> >>> >> >> >>
>> >> >>> >> >> >> On Thu, Aug 20, 2015 at 4:02 PM, Samuel Just
>> >> >>> >> >> >> 
>> >> >>> >> >> >> wrote:
>> >> >>> >> >> >> > What's supposed to happen is that the client
>> >> >>> >> >> >> > transparently
>> >> >>> >> >> >> > directs
>> >> >>> >> >> >> > all
>> >> >>> >> >> >> > requests to the cache pool rather than the cold pool
>> >> >>> >> >> >> > when
>> >> >>> >> >> >> > there
>> >> >>> >> >> >> > is
>> >> >>> >> >> >> > a
>> >> >>> >> >> >> > cache pool.  If the kernel is sending requests to the
>> >> >>> >> >> >> > cold
>> >> >>> >> >> >> > pool,
>> >> >>> >> >> >> > that's probably where the bug is.  Odd.  It could also
>> >> >>> >> >> >> > be a
>> >> >>> >> >> >> > bug
>> >> >>> >> >> >> > specific 'forward' mode either in the client or on the
>> >> >>> >> >> >> > osd.
>> >> >>> >> >> >> > Why
>> >> >>> >> >> >> > did
>> >> >>> >> >> >> > you have it in that mode?
>> >> >>> >> >> >> > -Sam
>> >> >>> >> >> >> >
>> >> >>> >> >> >> > On Thu, Aug 20, 2015 at 3:58 PM, Voloshanenko Igor
>> >> >>> >> >> >> >  wrote:
>> >> >>> >> >> >> >> We used 4.x branch, as we have "very good" Samsung 850
>> >> >>> >> >> >> >> pro
>> >> >>> >> >> >> >> in
>> >> >>> >> >> >> >> production,
>> >> >>> >> >> >> >> and they don;t support ncq_trim...
>> >> >>> >> >> >> >>
>> >> >>> >> >> >> >> And 4,x first branch which include exceptions for this
>> >> >>> >> >> >> >> in
>> >> >>> >> >> >> >> libsata.c.
>> >> >>> >> >> >> >>
>> >> >>> 

Re: [ceph-users] Broken snapshots... CEPH 0.94.2

2015-08-20 Thread Samuel Just
And you adjusted the journals by removing the osd, recreating it with
a larger journal, and reinserting it?
-Sam
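(For context, the journal swap Sam is describing is usually done roughly as below. This is a hedged sketch only: the OSD id `12`, the journal partition `/dev/sdb2`, and the sysvinit service invocation are placeholders/assumptions appropriate to the Hammer era, and it requires a live cluster, so run nothing here verbatim.)

```shell
# Sketch: replacing an OSD's journal with a larger one (osd.12 and
# /dev/sdb2 are made-up placeholders).

# 1. Avoid a rebalance while the OSD is down, then stop the daemon.
ceph osd set noout
sudo service ceph stop osd.12

# 2. Flush the old journal so no pending transactions are lost.
sudo ceph-osd -i 12 --flush-journal

# 3. Repoint the journal symlink at the new, larger partition.
sudo rm /var/lib/ceph/osd/ceph-12/journal
sudo ln -s /dev/sdb2 /var/lib/ceph/osd/ceph-12/journal

# 4. Create a fresh journal on the new device and restart.
sudo ceph-osd -i 12 --mkjournal
sudo service ceph start osd.12
ceph osd unset noout
```

Doing this disk by disk, as described in the thread, triggers exactly the kind of multi-day rebalance Igor mentions.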

On Thu, Aug 20, 2015 at 4:24 PM, Voloshanenko Igor
 wrote:
> Right ( but also was rebalancing cycle 2 day before pgs corrupted)
>
> 2015-08-21 2:23 GMT+03:00 Samuel Just :
>>
>> Specifically, the snap behavior (we already know that the pgs went
>> inconsistent while the pool was in writeback mode, right?).
>> -Sam
>>
>> On Thu, Aug 20, 2015 at 4:22 PM, Samuel Just  wrote:
>> > Yeah, I'm trying to confirm that the issues did happen in writeback
>> > mode.
>> > -Sam
>> >
>> > On Thu, Aug 20, 2015 at 4:21 PM, Voloshanenko Igor
>> >  wrote:
>> >> Right. But issues started...
>> >>
>> >> 2015-08-21 2:20 GMT+03:00 Samuel Just :
>> >>>
>> >>> But that was still in writeback mode, right?
>> >>> -Sam
>> >>>
>> >>> On Thu, Aug 20, 2015 at 4:18 PM, Voloshanenko Igor
>> >>>  wrote:
>> >>> > WE haven't set values for max_bytes / max_objects.. and all data
>> >>> > initially
>> >>> > writes only to cache layer and not flushed at all to cold layer.
>> >>> >
>> >>> > Then we received notification from monitoring that we collect about
>> >>> > 750GB in
>> >>> > hot pool ) So i changed values for max_object_bytes to be 0,9 of
>> >>> > disk
>> >>> > size... And then evicting/flushing started...
>> >>> >
>> >>> > And issue with snapshots arrived
>> >>> >
>> >>> > 2015-08-21 2:15 GMT+03:00 Samuel Just :
>> >>> >>
>> >>> >> Not sure what you mean by:
>> >>> >>
>> >>> >> but it's stop to work in same moment, when cache layer fulfilled
>> >>> >> with
>> >>> >> data and evict/flush started...
>> >>> >> -Sam
>> >>> >>
>> >>> >> On Thu, Aug 20, 2015 at 4:11 PM, Voloshanenko Igor
>> >>> >>  wrote:
>> >>> >> > No, when we start draining cache - bad pgs was in place...
>> >>> >> > We have big rebalance (disk by disk - to change journal side on
>> >>> >> > both
>> >>> >> > hot/cold layers).. All was Ok, but after 2 days - arrived scrub
>> >>> >> > errors
>> >>> >> > and 2
>> >>> >> > pgs inconsistent...
>> >>> >> >
>> >>> >> > In writeback - yes, looks like snapshot works good. but it's stop
>> >>> >> > to
>> >>> >> > work in
>> >>> >> > same moment, when cache layer fulfilled with data and evict/flush
>> >>> >> > started...
>> >>> >> >
>> >>> >> >
>> >>> >> >
>> >>> >> > 2015-08-21 2:09 GMT+03:00 Samuel Just :
>> >>> >> >>
>> >>> >> >> So you started draining the cache pool before you saw either the
>> >>> >> >> inconsistent pgs or the anomalous snap behavior?  (That is,
>> >>> >> >> writeback
>> >>> >> >> mode was working correctly?)
>> >>> >> >> -Sam
>> >>> >> >>
>> >>> >> >> On Thu, Aug 20, 2015 at 4:07 PM, Voloshanenko Igor
>> >>> >> >>  wrote:
>> >>> >> >> > Good joke )
>> >>> >> >> >
>> >>> >> >> > 2015-08-21 2:06 GMT+03:00 Samuel Just :
>> >>> >> >> >>
>> >>> >> >> >> Certainly, don't reproduce this with a cluster you care about
>> >>> >> >> >> :).
>> >>> >> >> >> -Sam
>> >>> >> >> >>
>> >>> >> >> >> On Thu, Aug 20, 2015 at 4:02 PM, Samuel Just
>> >>> >> >> >> 
>> >>> >> >> >> wrote:
>> >>> >> >> >> > What's supposed to happen is that the client transparently
>> >>> >> >> >> > directs
>> >>> >> >> >> > all
>> >>> >> >> >> > requests to the cache pool rather than the cold pool when
>> >>> >> >> >> > there
>> >>> >> >> >> > is
>> >>> >> >> >> > a
>> >>> >> >> >> > cache pool.  If the kernel is sending requests to the cold
>> >>> >> >> >> > pool,
>> >>> >> >> >> > that's probably where the bug is.  Odd.  It could also be a
>> >>> >> >> >> > bug
>> >>> >> >> >> > specific 'forward' mode either in the client or on the osd.
>> >>> >> >> >> > Why
>> >>> >> >> >> > did
>> >>> >> >> >> > you have it in that mode?
>> >>> >> >> >> > -Sam
>> >>> >> >> >> >
>> >>> >> >> >> > On Thu, Aug 20, 2015 at 3:58 PM, Voloshanenko Igor
>> >>> >> >> >> >  wrote:
>> >>> >> >> >> >> We used 4.x branch, as we have "very good" Samsung 850 pro
>> >>> >> >> >> >> in
>> >>> >> >> >> >> production,
>> >>> >> >> >> >> and they don;t support ncq_trim...
>> >>> >> >> >> >>
>> >>> >> >> >> >> And 4,x first branch which include exceptions for this in
>> >>> >> >> >> >> libsata.c.
>> >>> >> >> >> >>
>> >>> >> >> >> >> sure we can backport this 1 line to 3.x branch, but we
>> >>> >> >> >> >> prefer
>> >>> >> >> >> >> no
>> >>> >> >> >> >> to
>> >>> >> >> >> >> go
>> >>> >> >> >> >> deeper if packege for new kernel exist.
>> >>> >> >> >> >>
>> >>> >> >> >> >> 2015-08-21 1:56 GMT+03:00 Voloshanenko Igor
>> >>> >> >> >> >> :
>> >>> >> >> >> >>>
>> >>> >> >> >> >>> root@test:~# uname -a
>> >>> >> >> >> >>> Linux ix-s5 4.0.4-040004-generic #201505171336 SMP Sun
>> >>> >> >> >> >>> May 17
>> >>> >> >> >> >>> 17:37:22
>> >>> >> >> >> >>> UTC
>> >>> >> >> >> >>> 2015 x86_64 x86_64 x86_64 GNU/Linux
>> >>> >> >> >> >>>
>> >>> >> >> >> >>> 2015-08-21 1:54 GMT+03:00 Samuel Just :
>> >>> >> >> >> 
>> >>> >> >> >>  Also, can you include the kernel version?
>> >>> >> >> >>  -Sam
>> >>> >> >> >> 
>> >>> >> >> >>  On Thu, Aug 20, 2015 at 3:51 PM

Re: [ceph-users] Broken snapshots... CEPH 0.94.2

2015-08-20 Thread Voloshanenko Igor
Exactly

On Friday, August 21, 2015, Samuel Just wrote:

> And you adjusted the journals by removing the osd, recreating it with
> a larger journal, and reinserting it?
> -Sam
>
> On Thu, Aug 20, 2015 at 4:24 PM, Voloshanenko Igor
> > wrote:
> > Right ( but also was rebalancing cycle 2 day before pgs corrupted)
> >
> > 2015-08-21 2:23 GMT+03:00 Samuel Just >:
> >>
> >> Specifically, the snap behavior (we already know that the pgs went
> >> inconsistent while the pool was in writeback mode, right?).
> >> -Sam
> >>
> >> On Thu, Aug 20, 2015 at 4:22 PM, Samuel Just  > wrote:
> >> > Yeah, I'm trying to confirm that the issues did happen in writeback
> >> > mode.
> >> > -Sam
> >> >
> >> > On Thu, Aug 20, 2015 at 4:21 PM, Voloshanenko Igor
> >> > > wrote:
> >> >> Right. But issues started...
> >> >>
> >> >> 2015-08-21 2:20 GMT+03:00 Samuel Just  >:
> >> >>>
> >> >>> But that was still in writeback mode, right?
> >> >>> -Sam
> >> >>>
> >> >>> On Thu, Aug 20, 2015 at 4:18 PM, Voloshanenko Igor
> >> >>> > wrote:
> >> >>> > WE haven't set values for max_bytes / max_objects.. and all data
> >> >>> > initially
> >> >>> > writes only to cache layer and not flushed at all to cold layer.
> >> >>> >
> >> >>> > Then we received notification from monitoring that we collect
> about
> >> >>> > 750GB in
> >> >>> > hot pool ) So i changed values for max_object_bytes to be 0,9 of
> >> >>> > disk
> >> >>> > size... And then evicting/flushing started...
> >> >>> >
> >> >>> > And issue with snapshots arrived
> >> >>> >
> >> >>> > 2015-08-21 2:15 GMT+03:00 Samuel Just  >:
> >> >>> >>
> >> >>> >> Not sure what you mean by:
> >> >>> >>
> >> >>> >> but it's stop to work in same moment, when cache layer fulfilled
> >> >>> >> with
> >> >>> >> data and evict/flush started...
> >> >>> >> -Sam
> >> >>> >>
> >> >>> >> On Thu, Aug 20, 2015 at 4:11 PM, Voloshanenko Igor
> >> >>> >> > wrote:
> >> >>> >> > No, when we start draining cache - bad pgs was in place...
> >> >>> >> > We have big rebalance (disk by disk - to change journal side on
> >> >>> >> > both
> >> >>> >> > hot/cold layers).. All was Ok, but after 2 days - arrived scrub
> >> >>> >> > errors
> >> >>> >> > and 2
> >> >>> >> > pgs inconsistent...
> >> >>> >> >
> >> >>> >> > In writeback - yes, looks like snapshot works good. but it's
> stop
> >> >>> >> > to
> >> >>> >> > work in
> >> >>> >> > same moment, when cache layer fulfilled with data and
> evict/flush
> >> >>> >> > started...
> >> >>> >> >
> >> >>> >> >
> >> >>> >> >
> >> >>> >> > 2015-08-21 2:09 GMT+03:00 Samuel Just  >:
> >> >>> >> >>
> >> >>> >> >> So you started draining the cache pool before you saw either
> the
> >> >>> >> >> inconsistent pgs or the anomalous snap behavior?  (That is,
> >> >>> >> >> writeback
> >> >>> >> >> mode was working correctly?)
> >> >>> >> >> -Sam
> >> >>> >> >>
> >> >>> >> >> On Thu, Aug 20, 2015 at 4:07 PM, Voloshanenko Igor
> >> >>> >> >> > wrote:
> >> >>> >> >> > Good joke )
> >> >>> >> >> >
> >> >>> >> >> > 2015-08-21 2:06 GMT+03:00 Samuel Just  >:
> >> >>> >> >> >>
> >> >>> >> >> >> Certainly, don't reproduce this with a cluster you care
> about
> >> >>> >> >> >> :).
> >> >>> >> >> >> -Sam
> >> >>> >> >> >>
> >> >>> >> >> >> On Thu, Aug 20, 2015 at 4:02 PM, Samuel Just
> >> >>> >> >> >> >
> >> >>> >> >> >> wrote:
> >> >>> >> >> >> > What's supposed to happen is that the client
> transparently
> >> >>> >> >> >> > directs
> >> >>> >> >> >> > all
> >> >>> >> >> >> > requests to the cache pool rather than the cold pool when
> >> >>> >> >> >> > there
> >> >>> >> >> >> > is
> >> >>> >> >> >> > a
> >> >>> >> >> >> > cache pool.  If the kernel is sending requests to the
> cold
> >> >>> >> >> >> > pool,
> >> >>> >> >> >> > that's probably where the bug is.  Odd.  It could also
> be a
> >> >>> >> >> >> > bug
> >> >>> >> >> >> > specific 'forward' mode either in the client or on the
> osd.
> >> >>> >> >> >> > Why
> >> >>> >> >> >> > did
> >> >>> >> >> >> > you have it in that mode?
> >> >>> >> >> >> > -Sam
> >> >>> >> >> >> >
> >> >>> >> >> >> > On Thu, Aug 20, 2015 at 3:58 PM, Voloshanenko Igor
> >> >>> >> >> >> > > wrote:
> >> >>> >> >> >> >> We used 4.x branch, as we have "very good" Samsung 850
> pro
> >> >>> >> >> >> >> in
> >> >>> >> >> >> >> production,
> >> >>> >> >> >> >> and they don;t support ncq_trim...
> >> >>> >> >> >> >>
> >> >>> >> >> >> >> And 4,x first branch which include exceptions for this
> in
> >> >>> >> >> >> >> libsata.c.
> >> >>> >> >> >> >>
> >> >>> >> >> >> >> sure we can backport this 1 line to 3.x branch, but we
> >> >>> >> >> >> >> prefer
> >> >>> >> >> >> >> no
> >> >>> >> >> >> >> to
> >> >>> >> >> >> >> go
> >> >>> >> >> >> >> deeper if packege for new kernel exist.
> >> >>> >> >> >> >>
> >> >>> >> >> >> >> 2015-08-21 1:56 GMT+03:00 Voloshanenko Igor
> >> >>> >> >> >> >> >:
> >> >>> >> >> >> >>>
> >> >>> >> >> >> >>> root@test:~# uname -a
> >> >>> >> >> >> >>> Linux ix-s5 4.0.4-040004-generic #201505171336 SMP Sun
> >> >>> >> >> >> >>> May 17

Re: [ceph-users] Broken snapshots... CEPH 0.94.2

2015-08-20 Thread Voloshanenko Igor
Right (but there was also a rebalancing cycle 2 days before the pgs got corrupted)

2015-08-21 2:23 GMT+03:00 Samuel Just :

> Specifically, the snap behavior (we already know that the pgs went
> inconsistent while the pool was in writeback mode, right?).
> -Sam
>
> On Thu, Aug 20, 2015 at 4:22 PM, Samuel Just  wrote:
> > Yeah, I'm trying to confirm that the issues did happen in writeback mode.
> > -Sam
> >
> > On Thu, Aug 20, 2015 at 4:21 PM, Voloshanenko Igor
> >  wrote:
> >> Right. But issues started...
> >>
> >> 2015-08-21 2:20 GMT+03:00 Samuel Just :
> >>>
> >>> But that was still in writeback mode, right?
> >>> -Sam
> >>>
> >>> On Thu, Aug 20, 2015 at 4:18 PM, Voloshanenko Igor
> >>>  wrote:
> >>> > WE haven't set values for max_bytes / max_objects.. and all data
> >>> > initially
> >>> > writes only to cache layer and not flushed at all to cold layer.
> >>> >
> >>> > Then we received notification from monitoring that we collect about
> >>> > 750GB in
> >>> > hot pool ) So i changed values for max_object_bytes to be 0,9 of disk
> >>> > size... And then evicting/flushing started...
> >>> >
> >>> > And issue with snapshots arrived
> >>> >
> >>> > 2015-08-21 2:15 GMT+03:00 Samuel Just :
> >>> >>
> >>> >> Not sure what you mean by:
> >>> >>
> >>> >> but it's stop to work in same moment, when cache layer fulfilled
> with
> >>> >> data and evict/flush started...
> >>> >> -Sam
> >>> >>
> >>> >> On Thu, Aug 20, 2015 at 4:11 PM, Voloshanenko Igor
> >>> >>  wrote:
> >>> >> > No, when we start draining cache - bad pgs was in place...
> >>> >> > We have big rebalance (disk by disk - to change journal side on
> both
> >>> >> > hot/cold layers).. All was Ok, but after 2 days - arrived scrub
> >>> >> > errors
> >>> >> > and 2
> >>> >> > pgs inconsistent...
> >>> >> >
> >>> >> > In writeback - yes, looks like snapshot works good. but it's stop
> to
> >>> >> > work in
> >>> >> > same moment, when cache layer fulfilled with data and evict/flush
> >>> >> > started...
> >>> >> >
> >>> >> >
> >>> >> >
> >>> >> > 2015-08-21 2:09 GMT+03:00 Samuel Just :
> >>> >> >>
> >>> >> >> So you started draining the cache pool before you saw either the
> >>> >> >> inconsistent pgs or the anomalous snap behavior?  (That is,
> >>> >> >> writeback
> >>> >> >> mode was working correctly?)
> >>> >> >> -Sam
> >>> >> >>
> >>> >> >> On Thu, Aug 20, 2015 at 4:07 PM, Voloshanenko Igor
> >>> >> >>  wrote:
> >>> >> >> > Good joke )
> >>> >> >> >
> >>> >> >> > 2015-08-21 2:06 GMT+03:00 Samuel Just :
> >>> >> >> >>
> >>> >> >> >> Certainly, don't reproduce this with a cluster you care about
> :).
> >>> >> >> >> -Sam
> >>> >> >> >>
> >>> >> >> >> On Thu, Aug 20, 2015 at 4:02 PM, Samuel Just <
> sj...@redhat.com>
> >>> >> >> >> wrote:
> >>> >> >> >> > What's supposed to happen is that the client transparently
> >>> >> >> >> > directs
> >>> >> >> >> > all
> >>> >> >> >> > requests to the cache pool rather than the cold pool when
> there
> >>> >> >> >> > is
> >>> >> >> >> > a
> >>> >> >> >> > cache pool.  If the kernel is sending requests to the cold
> >>> >> >> >> > pool,
> >>> >> >> >> > that's probably where the bug is.  Odd.  It could also be a
> bug
> >>> >> >> >> > specific 'forward' mode either in the client or on the osd.
> >>> >> >> >> > Why
> >>> >> >> >> > did
> >>> >> >> >> > you have it in that mode?
> >>> >> >> >> > -Sam
> >>> >> >> >> >
> >>> >> >> >> > On Thu, Aug 20, 2015 at 3:58 PM, Voloshanenko Igor
> >>> >> >> >> >  wrote:
> >>> >> >> >> >> We used 4.x branch, as we have "very good" Samsung 850 pro
> in
> >>> >> >> >> >> production,
> >>> >> >> >> >> and they don;t support ncq_trim...
> >>> >> >> >> >>
> >>> >> >> >> >> And 4,x first branch which include exceptions for this in
> >>> >> >> >> >> libsata.c.
> >>> >> >> >> >>
> >>> >> >> >> >> sure we can backport this 1 line to 3.x branch, but we
> prefer
> >>> >> >> >> >> no
> >>> >> >> >> >> to
> >>> >> >> >> >> go
> >>> >> >> >> >> deeper if packege for new kernel exist.
> >>> >> >> >> >>
> >>> >> >> >> >> 2015-08-21 1:56 GMT+03:00 Voloshanenko Igor
> >>> >> >> >> >> :
> >>> >> >> >> >>>
> >>> >> >> >> >>> root@test:~# uname -a
> >>> >> >> >> >>> Linux ix-s5 4.0.4-040004-generic #201505171336 SMP Sun
> May 17
> >>> >> >> >> >>> 17:37:22
> >>> >> >> >> >>> UTC
> >>> >> >> >> >>> 2015 x86_64 x86_64 x86_64 GNU/Linux
> >>> >> >> >> >>>
> >>> >> >> >> >>> 2015-08-21 1:54 GMT+03:00 Samuel Just :
> >>> >> >> >> 
> >>> >> >> >>  Also, can you include the kernel version?
> >>> >> >> >>  -Sam
> >>> >> >> >> 
> >>> >> >> >>  On Thu, Aug 20, 2015 at 3:51 PM, Samuel Just
> >>> >> >> >>  
> >>> >> >> >>  wrote:
> >>> >> >> >>  > Snapshotting with cache/tiering *is* supposed to work.
> >>> >> >> >>  > Can
> >>> >> >> >>  > you
> >>> >> >> >>  > open a
> >>> >> >> >>  > bug?
> >>> >> >> >>  > -Sam
> >>> >> >> >>  >
> >>> >> >> >>  > On Thu, Aug 20, 2015 at 3:36 PM, Andrija Panic
> >>> >> >> >>  >  wrote:
> >>> >> >> >>  >> This was relate

Re: [ceph-users] Broken snapshots... CEPH 0.94.2

2015-08-20 Thread Voloshanenko Igor
I mean in forward mode it's a permanent problem - snapshots are not working.
And for writeback mode, after we changed the max_bytes/object values, it's
around 30 by 70... 70% of the time it works... 30% - not. Looks like for old
images snapshots work fine (images which already existed before we changed
the values). For any new images - no
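(For readers following along: the cache-mode switch and the eviction thresholds discussed here are set with commands along these lines. A sketch only - the pool names `hot`/`cold` and the byte value are placeholders, and it needs a running cluster.)

```shell
# Placeholders: "cold" is the backing pool, "hot" is the cache tier.

# Switch the tier between the two modes discussed in this thread.
ceph osd tier cache-mode hot writeback
# ceph osd tier cache-mode hot forward   # the mode where snapshots broke

# Flushing/eviction only starts once targets are set; with no
# target_max_bytes / target_max_objects the tier simply fills up,
# which is what happened before the ~750GB alert.
ceph osd pool set hot target_max_bytes 966367641600   # ~0.9 of a 1 TB tier
ceph osd pool set hot cache_target_dirty_ratio 0.4
ceph osd pool set hot cache_target_full_ratio 0.8
```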

2015-08-21 2:21 GMT+03:00 Voloshanenko Igor :

> Right. But issues started...
>
> 2015-08-21 2:20 GMT+03:00 Samuel Just :
>
>> But that was still in writeback mode, right?
>> -Sam
>>
>> On Thu, Aug 20, 2015 at 4:18 PM, Voloshanenko Igor
>>  wrote:
>> > WE haven't set values for max_bytes / max_objects.. and all data
>> initially
>> > writes only to cache layer and not flushed at all to cold layer.
>> >
>> > Then we received notification from monitoring that we collect about
>> 750GB in
>> > hot pool ) So i changed values for max_object_bytes to be 0,9 of disk
>> > size... And then evicting/flushing started...
>> >
>> > And issue with snapshots arrived
>> >
>> > 2015-08-21 2:15 GMT+03:00 Samuel Just :
>> >>
>> >> Not sure what you mean by:
>> >>
>> >> but it's stop to work in same moment, when cache layer fulfilled with
>> >> data and evict/flush started...
>> >> -Sam
>> >>
>> >> On Thu, Aug 20, 2015 at 4:11 PM, Voloshanenko Igor
>> >>  wrote:
>> >> > No, when we start draining cache - bad pgs was in place...
>> >> > We have big rebalance (disk by disk - to change journal side on both
>> >> > hot/cold layers).. All was Ok, but after 2 days - arrived scrub
>> errors
>> >> > and 2
>> >> > pgs inconsistent...
>> >> >
>> >> > In writeback - yes, looks like snapshot works good. but it's stop to
>> >> > work in
>> >> > same moment, when cache layer fulfilled with data and evict/flush
>> >> > started...
>> >> >
>> >> >
>> >> >
>> >> > 2015-08-21 2:09 GMT+03:00 Samuel Just :
>> >> >>
>> >> >> So you started draining the cache pool before you saw either the
>> >> >> inconsistent pgs or the anomalous snap behavior?  (That is,
>> writeback
>> >> >> mode was working correctly?)
>> >> >> -Sam
>> >> >>
>> >> >> On Thu, Aug 20, 2015 at 4:07 PM, Voloshanenko Igor
>> >> >>  wrote:
>> >> >> > Good joke )
>> >> >> >
>> >> >> > 2015-08-21 2:06 GMT+03:00 Samuel Just :
>> >> >> >>
>> >> >> >> Certainly, don't reproduce this with a cluster you care about :).
>> >> >> >> -Sam
>> >> >> >>
>> >> >> >> On Thu, Aug 20, 2015 at 4:02 PM, Samuel Just 
>> >> >> >> wrote:
>> >> >> >> > What's supposed to happen is that the client transparently
>> directs
>> >> >> >> > all
>> >> >> >> > requests to the cache pool rather than the cold pool when
>> there is
>> >> >> >> > a
>> >> >> >> > cache pool.  If the kernel is sending requests to the cold
>> pool,
>> >> >> >> > that's probably where the bug is.  Odd.  It could also be a bug
>> >> >> >> > specific 'forward' mode either in the client or on the osd.
>> Why
>> >> >> >> > did
>> >> >> >> > you have it in that mode?
>> >> >> >> > -Sam
>> >> >> >> >
>> >> >> >> > On Thu, Aug 20, 2015 at 3:58 PM, Voloshanenko Igor
>> >> >> >> >  wrote:
>> >> >> >> >> We used 4.x branch, as we have "very good" Samsung 850 pro in
>> >> >> >> >> production,
>> >> >> >> >> and they don;t support ncq_trim...
>> >> >> >> >>
>> >> >> >> >> And 4,x first branch which include exceptions for this in
>> >> >> >> >> libsata.c.
>> >> >> >> >>
>> >> >> >> >> sure we can backport this 1 line to 3.x branch, but we prefer
>> no
>> >> >> >> >> to
>> >> >> >> >> go
>> >> >> >> >> deeper if packege for new kernel exist.
>> >> >> >> >>
>> >> >> >> >> 2015-08-21 1:56 GMT+03:00 Voloshanenko Igor
>> >> >> >> >> :
>> >> >> >> >>>
>> >> >> >> >>> root@test:~# uname -a
>> >> >> >> >>> Linux ix-s5 4.0.4-040004-generic #201505171336 SMP Sun May 17
>> >> >> >> >>> 17:37:22
>> >> >> >> >>> UTC
>> >> >> >> >>> 2015 x86_64 x86_64 x86_64 GNU/Linux
>> >> >> >> >>>
>> >> >> >> >>> 2015-08-21 1:54 GMT+03:00 Samuel Just :
>> >> >> >> 
>> >> >> >>  Also, can you include the kernel version?
>> >> >> >>  -Sam
>> >> >> >> 
>> >> >> >>  On Thu, Aug 20, 2015 at 3:51 PM, Samuel Just <
>> sj...@redhat.com>
>> >> >> >>  wrote:
>> >> >> >>  > Snapshotting with cache/tiering *is* supposed to work.
>> Can
>> >> >> >>  > you
>> >> >> >>  > open a
>> >> >> >>  > bug?
>> >> >> >>  > -Sam
>> >> >> >>  >
>> >> >> >>  > On Thu, Aug 20, 2015 at 3:36 PM, Andrija Panic
>> >> >> >>  >  wrote:
>> >> >> >>  >> This was related to the caching layer, which doesnt
>> support
>> >> >> >>  >> snapshooting per
>> >> >> >>  >> docs...for sake of closing the thread.
>> >> >> >>  >>
>> >> >> >>  >> On 17 August 2015 at 21:15, Voloshanenko Igor
>> >> >> >>  >> 
>> >> >> >>  >> wrote:
>> >> >> >>  >>>
>> >> >> >>  >>> Hi all, can you please help me with unexplained
>> >> >> >>  >>> situation...
>> >> >> >>  >>>
>> >> >> >>  >>> All snapshot inside ceph broken...
>> >> >> >>  >>>
>> >> >> >>  >>> So, as example, we have VM template, as rbd insid

Re: [ceph-users] Broken snapshots... CEPH 0.94.2

2015-08-20 Thread Samuel Just
Specifically, the snap behavior (we already know that the pgs went
inconsistent while the pool was in writeback mode, right?).
-Sam
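(A minimal way to check for the inconsistent pgs and to exercise snapshot behavior against the tiered pool - a hedged sketch; the pool/image names are placeholders and the commands assume a live cluster.)

```shell
# Spot the inconsistent pgs reported by scrub.
ceph health detail | grep -i inconsistent

# Exercise a snapshot through the cache tier ("cold/test-image" is a
# placeholder image on the backing pool).
rbd snap create cold/test-image@s1
rbd snap ls cold/test-image
rbd export cold/test-image@s1 /tmp/s1.img   # read back via the snapshot
```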

On Thu, Aug 20, 2015 at 4:22 PM, Samuel Just  wrote:
> Yeah, I'm trying to confirm that the issues did happen in writeback mode.
> -Sam
>
> On Thu, Aug 20, 2015 at 4:21 PM, Voloshanenko Igor
>  wrote:
>> Right. But issues started...
>>
>> 2015-08-21 2:20 GMT+03:00 Samuel Just :
>>>
>>> But that was still in writeback mode, right?
>>> -Sam
>>>
>>> On Thu, Aug 20, 2015 at 4:18 PM, Voloshanenko Igor
>>>  wrote:
>>> > WE haven't set values for max_bytes / max_objects.. and all data
>>> > initially
>>> > writes only to cache layer and not flushed at all to cold layer.
>>> >
>>> > Then we received notification from monitoring that we collect about
>>> > 750GB in
>>> > hot pool ) So i changed values for max_object_bytes to be 0,9 of disk
>>> > size... And then evicting/flushing started...
>>> >
>>> > And issue with snapshots arrived
>>> >
>>> > 2015-08-21 2:15 GMT+03:00 Samuel Just :
>>> >>
>>> >> Not sure what you mean by:
>>> >>
>>> >> but it's stop to work in same moment, when cache layer fulfilled with
>>> >> data and evict/flush started...
>>> >> -Sam
>>> >>
>>> >> On Thu, Aug 20, 2015 at 4:11 PM, Voloshanenko Igor
>>> >>  wrote:
>>> >> > No, when we start draining cache - bad pgs was in place...
>>> >> > We have big rebalance (disk by disk - to change journal side on both
>>> >> > hot/cold layers).. All was Ok, but after 2 days - arrived scrub
>>> >> > errors
>>> >> > and 2
>>> >> > pgs inconsistent...
>>> >> >
>>> >> > In writeback - yes, looks like snapshot works good. but it's stop to
>>> >> > work in
>>> >> > same moment, when cache layer fulfilled with data and evict/flush
>>> >> > started...
>>> >> >
>>> >> >
>>> >> >
>>> >> > 2015-08-21 2:09 GMT+03:00 Samuel Just :
>>> >> >>
>>> >> >> So you started draining the cache pool before you saw either the
>>> >> >> inconsistent pgs or the anomalous snap behavior?  (That is,
>>> >> >> writeback
>>> >> >> mode was working correctly?)
>>> >> >> -Sam
>>> >> >>
>>> >> >> On Thu, Aug 20, 2015 at 4:07 PM, Voloshanenko Igor
>>> >> >>  wrote:
>>> >> >> > Good joke )
>>> >> >> >
>>> >> >> > 2015-08-21 2:06 GMT+03:00 Samuel Just :
>>> >> >> >>
>>> >> >> >> Certainly, don't reproduce this with a cluster you care about :).
>>> >> >> >> -Sam
>>> >> >> >>
>>> >> >> >> On Thu, Aug 20, 2015 at 4:02 PM, Samuel Just 
>>> >> >> >> wrote:
>>> >> >> >> > What's supposed to happen is that the client transparently
>>> >> >> >> > directs
>>> >> >> >> > all
>>> >> >> >> > requests to the cache pool rather than the cold pool when there
>>> >> >> >> > is
>>> >> >> >> > a
>>> >> >> >> > cache pool.  If the kernel is sending requests to the cold
>>> >> >> >> > pool,
>>> >> >> >> > that's probably where the bug is.  Odd.  It could also be a bug
>>> >> >> >> > specific 'forward' mode either in the client or on the osd.
>>> >> >> >> > Why
>>> >> >> >> > did
>>> >> >> >> > you have it in that mode?
>>> >> >> >> > -Sam
>>> >> >> >> >
>>> >> >> >> > On Thu, Aug 20, 2015 at 3:58 PM, Voloshanenko Igor
>>> >> >> >> >  wrote:
>>> >> >> >> >> We used 4.x branch, as we have "very good" Samsung 850 pro in
>>> >> >> >> >> production,
>>> >> >> >> >> and they don;t support ncq_trim...
>>> >> >> >> >>
>>> >> >> >> >> And 4,x first branch which include exceptions for this in
>>> >> >> >> >> libsata.c.
>>> >> >> >> >>
>>> >> >> >> >> sure we can backport this 1 line to 3.x branch, but we prefer
>>> >> >> >> >> no
>>> >> >> >> >> to
>>> >> >> >> >> go
>>> >> >> >> >> deeper if packege for new kernel exist.
>>> >> >> >> >>
>>> >> >> >> >> 2015-08-21 1:56 GMT+03:00 Voloshanenko Igor
>>> >> >> >> >> :
>>> >> >> >> >>>
>>> >> >> >> >>> root@test:~# uname -a
>>> >> >> >> >>> Linux ix-s5 4.0.4-040004-generic #201505171336 SMP Sun May 17
>>> >> >> >> >>> 17:37:22
>>> >> >> >> >>> UTC
>>> >> >> >> >>> 2015 x86_64 x86_64 x86_64 GNU/Linux
>>> >> >> >> >>>
>>> >> >> >> >>> 2015-08-21 1:54 GMT+03:00 Samuel Just :
>>> >> >> >> 
>>> >> >> >>  Also, can you include the kernel version?
>>> >> >> >>  -Sam
>>> >> >> >> 
>>> >> >> >>  On Thu, Aug 20, 2015 at 3:51 PM, Samuel Just
>>> >> >> >>  
>>> >> >> >>  wrote:
>>> >> >> >>  > Snapshotting with cache/tiering *is* supposed to work.
>>> >> >> >>  > Can
>>> >> >> >>  > you
>>> >> >> >>  > open a
>>> >> >> >>  > bug?
>>> >> >> >>  > -Sam
>>> >> >> >>  >
>>> >> >> >>  > On Thu, Aug 20, 2015 at 3:36 PM, Andrija Panic
>>> >> >> >>  >  wrote:
>>> >> >> >>  >> This was related to the caching layer, which doesnt
>>> >> >> >>  >> support
>>> >> >> >>  >> snapshooting per
>>> >> >> >>  >> docs...for sake of closing the thread.
>>> >> >> >>  >>
>>> >> >> >>  >> On 17 August 2015 at 21:15, Voloshanenko Igor
>>> >> >> >>  >> 
>>> >> >> >>  >> wrote:
>>> >> >> >>  >>>
>>> >> >> >>  >>> Hi all, can you please help me with unexplained
>>> >> >> >> 

Re: [ceph-users] Broken snapshots... CEPH 0.94.2

2015-08-20 Thread Voloshanenko Igor
Right. But that's when the issues started...

2015-08-21 2:20 GMT+03:00 Samuel Just :

> But that was still in writeback mode, right?
> -Sam
>
> On Thu, Aug 20, 2015 at 4:18 PM, Voloshanenko Igor
>  wrote:
> > WE haven't set values for max_bytes / max_objects.. and all data
> initially
> > writes only to cache layer and not flushed at all to cold layer.
> >
> > Then we received notification from monitoring that we collect about
> 750GB in
> > hot pool ) So i changed values for max_object_bytes to be 0,9 of disk
> > size... And then evicting/flushing started...
> >
> > And issue with snapshots arrived
> >
> > 2015-08-21 2:15 GMT+03:00 Samuel Just :
> >>
> >> Not sure what you mean by:
> >>
> >> but it's stop to work in same moment, when cache layer fulfilled with
> >> data and evict/flush started...
> >> -Sam
> >>
> >> On Thu, Aug 20, 2015 at 4:11 PM, Voloshanenko Igor
> >>  wrote:
> >> > No, when we start draining cache - bad pgs was in place...
> >> > We have big rebalance (disk by disk - to change journal side on both
> >> > hot/cold layers).. All was Ok, but after 2 days - arrived scrub errors
> >> > and 2
> >> > pgs inconsistent...
> >> >
> >> > In writeback - yes, looks like snapshot works good. but it's stop to
> >> > work in
> >> > same moment, when cache layer fulfilled with data and evict/flush
> >> > started...
> >> >
> >> >
> >> >
> >> > 2015-08-21 2:09 GMT+03:00 Samuel Just :
> >> >>
> >> >> So you started draining the cache pool before you saw either the
> >> >> inconsistent pgs or the anomalous snap behavior?  (That is, writeback
> >> >> mode was working correctly?)
> >> >> -Sam
> >> >>
> >> >> On Thu, Aug 20, 2015 at 4:07 PM, Voloshanenko Igor
> >> >>  wrote:
> >> >> > Good joke )
> >> >> >
> >> >> > 2015-08-21 2:06 GMT+03:00 Samuel Just :
> >> >> >>
> >> >> >> Certainly, don't reproduce this with a cluster you care about :).
> >> >> >> -Sam
> >> >> >>
> >> >> >> On Thu, Aug 20, 2015 at 4:02 PM, Samuel Just 
> >> >> >> wrote:
> >> >> >> > What's supposed to happen is that the client transparently
> directs
> >> >> >> > all
> >> >> >> > requests to the cache pool rather than the cold pool when there
> is
> >> >> >> > a
> >> >> >> > cache pool.  If the kernel is sending requests to the cold pool,
> >> >> >> > that's probably where the bug is.  Odd.  It could also be a bug
> >> >> >> > specific 'forward' mode either in the client or on the osd.  Why
> >> >> >> > did
> >> >> >> > you have it in that mode?
> >> >> >> > -Sam
> >> >> >> >
> >> >> >> > On Thu, Aug 20, 2015 at 3:58 PM, Voloshanenko Igor
> >> >> >> >  wrote:
> >> >> >> >> We used 4.x branch, as we have "very good" Samsung 850 pro in
> >> >> >> >> production,
> >> >> >> >> and they don;t support ncq_trim...
> >> >> >> >>
> >> >> >> >> And 4,x first branch which include exceptions for this in
> >> >> >> >> libsata.c.
> >> >> >> >>
> >> >> >> >> sure we can backport this 1 line to 3.x branch, but we prefer
> no
> >> >> >> >> to
> >> >> >> >> go
> >> >> >> >> deeper if packege for new kernel exist.
> >> >> >> >>
> >> >> >> >> 2015-08-21 1:56 GMT+03:00 Voloshanenko Igor
> >> >> >> >> :
> >> >> >> >>>
> >> >> >> >>> root@test:~# uname -a
> >> >> >> >>> Linux ix-s5 4.0.4-040004-generic #201505171336 SMP Sun May 17
> >> >> >> >>> 17:37:22
> >> >> >> >>> UTC
> >> >> >> >>> 2015 x86_64 x86_64 x86_64 GNU/Linux
> >> >> >> >>>
> >> >> >> >>> 2015-08-21 1:54 GMT+03:00 Samuel Just :
> >> >> >> 
> >> >> >>  Also, can you include the kernel version?
> >> >> >>  -Sam
> >> >> >> 
> >> >> >>  On Thu, Aug 20, 2015 at 3:51 PM, Samuel Just <
> sj...@redhat.com>
> >> >> >>  wrote:
> >> >> >>  > Snapshotting with cache/tiering *is* supposed to work.  Can
> >> >> >>  > you
> >> >> >>  > open a
> >> >> >>  > bug?
> >> >> >>  > -Sam
> >> >> >>  >
> >> >> >>  > On Thu, Aug 20, 2015 at 3:36 PM, Andrija Panic
> >> >> >>  >  wrote:
> >> >> >>  >> This was related to the caching layer, which doesnt
> support
> >> >> >>  >> snapshooting per
> >> >> >>  >> docs...for sake of closing the thread.
> >> >> >>  >>
> >> >> >>  >> On 17 August 2015 at 21:15, Voloshanenko Igor
> >> >> >>  >> 
> >> >> >>  >> wrote:
> >> >> >>  >>>
> >> >> >>  >>> Hi all, can you please help me with unexplained
> >> >> >>  >>> situation...
> >> >> >>  >>>
> >> >> >>  >>> All snapshot inside ceph broken...
> >> >> >>  >>>
> >> >> >>  >>> So, as example, we have VM template, as rbd inside ceph.
> >> >> >>  >>> We can map it and mount to check that all ok with it
> >> >> >>  >>>
> >> >> >>  >>> root@test:~# rbd map
> >> >> >>  >>> cold-storage/0e23c701-401d-4465-b9b4-c02939d57bb5
> >> >> >>  >>> /dev/rbd0
> >> >> >>  >>> root@test:~# parted /dev/rbd0 print
> >> >> >>  >>> Model: Unknown (unknown)
> >> >> >>  >>> Disk /dev/rbd0: 10.7GB
> >> >> >>  >>> Sector size (logical/physical): 512B/512B
> >> >> >>  >>> Partition Table: msdos
> >> >> >>  >>>
> >> >>

Re: [ceph-users] Broken snapshots... CEPH 0.94.2

2015-08-20 Thread Samuel Just
Yeah, I'm trying to confirm that the issues did happen in writeback mode.
-Sam

On Thu, Aug 20, 2015 at 4:21 PM, Voloshanenko Igor
 wrote:
> Right. But issues started...
>
> 2015-08-21 2:20 GMT+03:00 Samuel Just :
>>
>> But that was still in writeback mode, right?
>> -Sam
>>
>> On Thu, Aug 20, 2015 at 4:18 PM, Voloshanenko Igor
>>  wrote:
>> > WE haven't set values for max_bytes / max_objects.. and all data
>> > initially
>> > writes only to cache layer and not flushed at all to cold layer.
>> >
>> > Then we received notification from monitoring that we collect about
>> > 750GB in
>> > hot pool ) So i changed values for max_object_bytes to be 0,9 of disk
>> > size... And then evicting/flushing started...
>> >
>> > And issue with snapshots arrived
>> >
>> > 2015-08-21 2:15 GMT+03:00 Samuel Just :
>> >>
>> >> Not sure what you mean by:
>> >>
>> >> but it's stop to work in same moment, when cache layer fulfilled with
>> >> data and evict/flush started...
>> >> -Sam
>> >>
>> >> On Thu, Aug 20, 2015 at 4:11 PM, Voloshanenko Igor
>> >>  wrote:
>> >> > No, when we start draining cache - bad pgs was in place...
>> >> > We have big rebalance (disk by disk - to change journal side on both
>> >> > hot/cold layers).. All was Ok, but after 2 days - arrived scrub
>> >> > errors
>> >> > and 2
>> >> > pgs inconsistent...
>> >> >
>> >> > In writeback - yes, looks like snapshot works good. but it's stop to
>> >> > work in
>> >> > same moment, when cache layer fulfilled with data and evict/flush
>> >> > started...
>> >> >
>> >> >
>> >> >
>> >> > 2015-08-21 2:09 GMT+03:00 Samuel Just :
>> >> >>
>> >> >> So you started draining the cache pool before you saw either the
>> >> >> inconsistent pgs or the anomalous snap behavior?  (That is,
>> >> >> writeback
>> >> >> mode was working correctly?)
>> >> >> -Sam
>> >> >>
>> >> >> On Thu, Aug 20, 2015 at 4:07 PM, Voloshanenko Igor
>> >> >>  wrote:
>> >> >> > Good joke )
>> >> >> >
>> >> >> > 2015-08-21 2:06 GMT+03:00 Samuel Just :
>> >> >> >>
>> >> >> >> Certainly, don't reproduce this with a cluster you care about :).
>> >> >> >> -Sam
>> >> >> >>
>> >> >> >> On Thu, Aug 20, 2015 at 4:02 PM, Samuel Just 
>> >> >> >> wrote:
>> >> >> >> > What's supposed to happen is that the client transparently
>> >> >> >> > directs
>> >> >> >> > all
>> >> >> >> > requests to the cache pool rather than the cold pool when there
>> >> >> >> > is
>> >> >> >> > a
>> >> >> >> > cache pool.  If the kernel is sending requests to the cold
>> >> >> >> > pool,
>> >> >> >> > that's probably where the bug is.  Odd.  It could also be a bug
>> >> >> >> > specific 'forward' mode either in the client or on the osd.
>> >> >> >> > Why
>> >> >> >> > did
>> >> >> >> > you have it in that mode?
>> >> >> >> > -Sam
>> >> >> >> >
>> >> >> >> > On Thu, Aug 20, 2015 at 3:58 PM, Voloshanenko Igor
>> >> >> >> >  wrote:
>> >> >> >> >> We used 4.x branch, as we have "very good" Samsung 850 pro in
>> >> >> >> >> production,
>> >> >> >> >> and they don;t support ncq_trim...
>> >> >> >> >>
>> >> >> >> >> And 4,x first branch which include exceptions for this in
>> >> >> >> >> libsata.c.
>> >> >> >> >>
>> >> >> >> >> sure we can backport this 1 line to 3.x branch, but we prefer
>> >> >> >> >> no
>> >> >> >> >> to
>> >> >> >> >> go
>> >> >> >> >> deeper if packege for new kernel exist.
>> >> >> >> >>
>> >> >> >> >> 2015-08-21 1:56 GMT+03:00 Voloshanenko Igor
>> >> >> >> >> :
>> >> >> >> >>>
>> >> >> >> >>> root@test:~# uname -a
>> >> >> >> >>> Linux ix-s5 4.0.4-040004-generic #201505171336 SMP Sun May 17
>> >> >> >> >>> 17:37:22
>> >> >> >> >>> UTC
>> >> >> >> >>> 2015 x86_64 x86_64 x86_64 GNU/Linux
>> >> >> >> >>>
>> >> >> >> >>> 2015-08-21 1:54 GMT+03:00 Samuel Just :
>> >> >> >> 
>> >> >> >>  Also, can you include the kernel version?
>> >> >> >>  -Sam
>> >> >> >> 
>> >> >> >>  On Thu, Aug 20, 2015 at 3:51 PM, Samuel Just
>> >> >> >>  
>> >> >> >>  wrote:
>> >> >> >>  > Snapshotting with cache/tiering *is* supposed to work.
>> >> >> >>  > Can
>> >> >> >>  > you
>> >> >> >>  > open a
>> >> >> >>  > bug?
>> >> >> >>  > -Sam
>> >> >> >>  >
>> >> >> >>  > On Thu, Aug 20, 2015 at 3:36 PM, Andrija Panic
>> >> >> >>  >  wrote:
>> >> >> >>  >> This was related to the caching layer, which doesnt
>> >> >> >>  >> support
>> >> >> >>  >> snapshooting per
>> >> >> >>  >> docs...for sake of closing the thread.
>> >> >> >>  >>
>> >> >> >>  >> On 17 August 2015 at 21:15, Voloshanenko Igor
>> >> >> >>  >> 
>> >> >> >>  >> wrote:
>> >> >> >>  >>>
>> >> >> >>  >>> Hi all, can you please help me with unexplained
>> >> >> >>  >>> situation...
>> >> >> >>  >>>
>> >> >> >>  >>> All snapshot inside ceph broken...
>> >> >> >>  >>>
>> >> >> >>  >>> So, as example, we have VM template, as rbd inside ceph.
>> >> >> >>  >>> We can map it and mount to check that all ok with it
>> >> >> >>  >>>
>> >> >> >>  >>> root@test:~

Re: [ceph-users] Broken snapshots... CEPH 0.94.2

2015-08-20 Thread Voloshanenko Igor
Our initial journal sizes were adequate, but the flush interval was 5 secs,
so we increased the journal size to fit a flush timeframe of min/max 29/30
seconds.

By "flush time" I mean
  filestore max sync interval = 30
  filestore min sync interval = 29
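For reference, those two settings live in the `[osd]` section of ceph.conf; a minimal sketch with the values quoted in this thread:

```ini
[osd]
# Sync (flush) the filestore journal to the backing store on a 29-30s
# window, as described above, instead of the much shorter defaults.
filestore min sync interval = 29
filestore max sync interval = 30
```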

2015-08-21 2:16 GMT+03:00 Samuel Just :

> Also, what do you mean by "change journal side"?
> -Sam
>
> On Thu, Aug 20, 2015 at 4:15 PM, Samuel Just  wrote:
> > Not sure what you mean by:
> >
> > but it's stop to work in same moment, when cache layer fulfilled with
> > data and evict/flush started...
> > -Sam
> >
> > On Thu, Aug 20, 2015 at 4:11 PM, Voloshanenko Igor
> >  wrote:
> >> No, when we start draining cache - bad pgs was in place...
> >> We have big rebalance (disk by disk - to change journal side on both
> >> hot/cold layers).. All was Ok, but after 2 days - arrived scrub errors
> and 2
> >> pgs inconsistent...
> >>
> >> In writeback - yes, looks like snapshot works good. but it's stop to
> work in
> >> same moment, when cache layer fulfilled with data and evict/flush
> started...
> >>
> >>
> >>
> >> 2015-08-21 2:09 GMT+03:00 Samuel Just :
> >>>
> >>> So you started draining the cache pool before you saw either the
> >>> inconsistent pgs or the anomalous snap behavior?  (That is, writeback
> >>> mode was working correctly?)
> >>> -Sam
> >>>
> >>> On Thu, Aug 20, 2015 at 4:07 PM, Voloshanenko Igor
> >>>  wrote:
> >>> > Good joke )
> >>> >
> >>> > 2015-08-21 2:06 GMT+03:00 Samuel Just :
> >>> >>
> >>> >> Certainly, don't reproduce this with a cluster you care about :).
> >>> >> -Sam
> >>> >>
> >>> >> On Thu, Aug 20, 2015 at 4:02 PM, Samuel Just 
> wrote:
> >>> >> > What's supposed to happen is that the client transparently directs
> >>> >> > all
> >>> >> > requests to the cache pool rather than the cold pool when there
> is a
> >>> >> > cache pool.  If the kernel is sending requests to the cold pool,
> >>> >> > that's probably where the bug is.  Odd.  It could also be a bug
> >>> >> > specific 'forward' mode either in the client or on the osd.  Why
> did
> >>> >> > you have it in that mode?
> >>> >> > -Sam
> >>> >> >
> >>> >> > On Thu, Aug 20, 2015 at 3:58 PM, Voloshanenko Igor
> >>> >> >  wrote:
> >>> >> >> We used 4.x branch, as we have "very good" Samsung 850 pro in
> >>> >> >> production,
> >>> >> >> and they don;t support ncq_trim...
> >>> >> >>
> >>> >> >> And 4,x first branch which include exceptions for this in
> libsata.c.
> >>> >> >>
> >>> >> >> sure we can backport this 1 line to 3.x branch, but we prefer no
> to
> >>> >> >> go
> >>> >> >> deeper if packege for new kernel exist.
> >>> >> >>
> >>> >> >> 2015-08-21 1:56 GMT+03:00 Voloshanenko Igor
> >>> >> >> :
> >>> >> >>>
> >>> >> >>> root@test:~# uname -a
> >>> >> >>> Linux ix-s5 4.0.4-040004-generic #201505171336 SMP Sun May 17
> >>> >> >>> 17:37:22
> >>> >> >>> UTC
> >>> >> >>> 2015 x86_64 x86_64 x86_64 GNU/Linux
> >>> >> >>>
> >>> >> >>> 2015-08-21 1:54 GMT+03:00 Samuel Just :
> >>> >> 
> >>> >>  Also, can you include the kernel version?
> >>> >>  -Sam
> >>> >> 
> >>> >>  On Thu, Aug 20, 2015 at 3:51 PM, Samuel Just  >
> >>> >>  wrote:
> >>> >>  > Snapshotting with cache/tiering *is* supposed to work.  Can
> you
> >>> >>  > open a
> >>> >>  > bug?
> >>> >>  > -Sam
> >>> >>  >
> >>> >>  > On Thu, Aug 20, 2015 at 3:36 PM, Andrija Panic
> >>> >>  >  wrote:
> >>> >>  >> This was related to the caching layer, which doesnt support
> >>> >>  >> snapshooting per
> >>> >>  >> docs...for sake of closing the thread.
> >>> >>  >>
> >>> >>  >> On 17 August 2015 at 21:15, Voloshanenko Igor
> >>> >>  >> 
> >>> >>  >> wrote:
> >>> >>  >>>
> >>> >>  >>> Hi all, can you please help me with unexplained
> situation...
> >>> >>  >>>
> >>> >>  >>> All snapshot inside ceph broken...
> >>> >>  >>>
> >>> >>  >>> So, as example, we have VM template, as rbd inside ceph.
> >>> >>  >>> We can map it and mount to check that all ok with it
> >>> >>  >>>
> >>> >>  >>> root@test:~# rbd map
> >>> >>  >>> cold-storage/0e23c701-401d-4465-b9b4-c02939d57bb5
> >>> >>  >>> /dev/rbd0
> >>> >>  >>> root@test:~# parted /dev/rbd0 print
> >>> >>  >>> Model: Unknown (unknown)
> >>> >>  >>> Disk /dev/rbd0: 10.7GB
> >>> >>  >>> Sector size (logical/physical): 512B/512B
> >>> >>  >>> Partition Table: msdos
> >>> >>  >>>
> >>> >>  >>> Number  Start   End SizeType File system  Flags
> >>> >>  >>>  1  1049kB  525MB   524MB   primary  ext4 boot
> >>> >>  >>>  2  525MB   10.7GB  10.2GB  primary   lvm
> >>> >>  >>>
> >>> >>  >>> Than i want to create snap, so i do:
> >>> >>  >>> root@test:~# rbd snap create
> >>> >>  >>> cold-storage/0e23c701-401d-4465-b9b4-c02939d57bb5@new_snap
> >>> >>  >>>
> >>> >>  >>> And now i want to map it:
> >>> >>  >>>
> >>> >>  >>> root@test:~# rbd map
> >>> >>  >>> cold-s

Re: [ceph-users] Broken snapshots... CEPH 0.94.2

2015-08-20 Thread Samuel Just
But that was still in writeback mode, right?
-Sam

On Thu, Aug 20, 2015 at 4:18 PM, Voloshanenko Igor
 wrote:
> WE haven't set values for max_bytes / max_objects.. and all data initially
> writes only to cache layer and not flushed at all to cold layer.
>
> Then we received notification from monitoring that we collect about 750GB in
> hot pool ) So i changed values for max_object_bytes to be 0,9 of disk
> size... And then evicting/flushing started...
>
> And issue with snapshots arrived
>
> 2015-08-21 2:15 GMT+03:00 Samuel Just :
>>
>> Not sure what you mean by:
>>
>> but it's stop to work in same moment, when cache layer fulfilled with
>> data and evict/flush started...
>> -Sam
>>
>> On Thu, Aug 20, 2015 at 4:11 PM, Voloshanenko Igor
>>  wrote:
>> > No, when we start draining cache - bad pgs was in place...
>> > We have big rebalance (disk by disk - to change journal side on both
>> > hot/cold layers).. All was Ok, but after 2 days - arrived scrub errors
>> > and 2
>> > pgs inconsistent...
>> >
>> > In writeback - yes, looks like snapshot works good. but it's stop to
>> > work in
>> > same moment, when cache layer fulfilled with data and evict/flush
>> > started...
>> >
>> >
>> >
>> > 2015-08-21 2:09 GMT+03:00 Samuel Just :
>> >>
>> >> So you started draining the cache pool before you saw either the
>> >> inconsistent pgs or the anomalous snap behavior?  (That is, writeback
>> >> mode was working correctly?)
>> >> -Sam
>> >>
>> >> On Thu, Aug 20, 2015 at 4:07 PM, Voloshanenko Igor
>> >>  wrote:
>> >> > Good joke )
>> >> >
>> >> > 2015-08-21 2:06 GMT+03:00 Samuel Just :
>> >> >>
>> >> >> Certainly, don't reproduce this with a cluster you care about :).
>> >> >> -Sam
>> >> >>
>> >> >> On Thu, Aug 20, 2015 at 4:02 PM, Samuel Just 
>> >> >> wrote:
>> >> >> > What's supposed to happen is that the client transparently directs
>> >> >> > all
>> >> >> > requests to the cache pool rather than the cold pool when there is
>> >> >> > a
>> >> >> > cache pool.  If the kernel is sending requests to the cold pool,
>> >> >> > that's probably where the bug is.  Odd.  It could also be a bug
>> >> >> > specific 'forward' mode either in the client or on the osd.  Why
>> >> >> > did
>> >> >> > you have it in that mode?
>> >> >> > -Sam
>> >> >> >
>> >> >> > On Thu, Aug 20, 2015 at 3:58 PM, Voloshanenko Igor
>> >> >> >  wrote:
>> >> >> >> We used 4.x branch, as we have "very good" Samsung 850 pro in
>> >> >> >> production,
>> >> >> >> and they don;t support ncq_trim...
>> >> >> >>
>> >> >> >> And 4,x first branch which include exceptions for this in
>> >> >> >> libsata.c.
>> >> >> >>
>> >> >> >> sure we can backport this 1 line to 3.x branch, but we prefer no
>> >> >> >> to
>> >> >> >> go
>> >> >> >> deeper if packege for new kernel exist.
>> >> >> >>
>> >> >> >> 2015-08-21 1:56 GMT+03:00 Voloshanenko Igor
>> >> >> >> :
>> >> >> >>>
>> >> >> >>> root@test:~# uname -a
>> >> >> >>> Linux ix-s5 4.0.4-040004-generic #201505171336 SMP Sun May 17
>> >> >> >>> 17:37:22
>> >> >> >>> UTC
>> >> >> >>> 2015 x86_64 x86_64 x86_64 GNU/Linux
>> >> >> >>>
>> >> >> >>> 2015-08-21 1:54 GMT+03:00 Samuel Just :
>> >> >> 
>> >> >>  Also, can you include the kernel version?
>> >> >>  -Sam
>> >> >> 
>> >> >>  On Thu, Aug 20, 2015 at 3:51 PM, Samuel Just 
>> >> >>  wrote:
>> >> >>  > Snapshotting with cache/tiering *is* supposed to work.  Can
>> >> >>  > you
>> >> >>  > open a
>> >> >>  > bug?
>> >> >>  > -Sam
>> >> >>  >
>> >> >>  > On Thu, Aug 20, 2015 at 3:36 PM, Andrija Panic
>> >> >>  >  wrote:
>> >> >>  >> This was related to the caching layer, which doesnt support
>> >> >>  >> snapshooting per
>> >> >>  >> docs...for sake of closing the thread.
>> >> >>  >>
>> >> >>  >> On 17 August 2015 at 21:15, Voloshanenko Igor
>> >> >>  >> 
>> >> >>  >> wrote:
>> >> >>  >>>
>> >> >>  >>> Hi all, can you please help me with unexplained
>> >> >>  >>> situation...
>> >> >>  >>>
>> >> >>  >>> All snapshot inside ceph broken...
>> >> >>  >>>
>> >> >>  >>> So, as example, we have VM template, as rbd inside ceph.
>> >> >>  >>> We can map it and mount to check that all ok with it
>> >> >>  >>>
>> >> >>  >>> root@test:~# rbd map
>> >> >>  >>> cold-storage/0e23c701-401d-4465-b9b4-c02939d57bb5
>> >> >>  >>> /dev/rbd0
>> >> >>  >>> root@test:~# parted /dev/rbd0 print
>> >> >>  >>> Model: Unknown (unknown)
>> >> >>  >>> Disk /dev/rbd0: 10.7GB
>> >> >>  >>> Sector size (logical/physical): 512B/512B
>> >> >>  >>> Partition Table: msdos
>> >> >>  >>>
>> >> >>  >>> Number  Start   End SizeType File system  Flags
>> >> >>  >>>  1  1049kB  525MB   524MB   primary  ext4 boot
>> >> >>  >>>  2  525MB   10.7GB  10.2GB  primary   lvm
>> >> >>  >>>
>> >> >>  >>> Than i want to create snap, so i do:
>> >> >>  >>> root@test:~# rbd snap create
>> >> >>  >>>

Re: [ceph-users] Broken snapshots... CEPH 0.94.2

2015-08-20 Thread Voloshanenko Igor
We hadn't set values for max_bytes / max_objects, so all data was initially
written only to the cache layer and never flushed to the cold layer at all.

Then we received a notification from monitoring that about 750GB had
accumulated in the hot pool ) So I changed max_bytes to 0.9 of the disk
size... And then evicting/flushing started...

And the issue with snapshots arrived
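The "0,9 of disk size" threshold above is simple to sketch; note the pool name `hot-storage` and the 800 GB capacity below are placeholders for illustration, not values taken from this cluster:

```python
# Compute a cache-tier eviction threshold as a fraction of the tier's
# raw capacity (0.9 here, matching the value chosen above).
def cache_target_bytes(disk_bytes: int, fill_ratio: float = 0.9) -> int:
    return int(disk_bytes * fill_ratio)

# Example: an 800 GB hot tier.
target = cache_target_bytes(800 * 10**9)
print(target)  # 720000000000

# The resulting value would then be applied to the cache pool with
# something like:
#   ceph osd pool set hot-storage target_max_bytes 720000000000
```

Eviction/flush only begins once such a target is set, which matches the behavior described in this message.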

2015-08-21 2:15 GMT+03:00 Samuel Just :

> Not sure what you mean by:
>
> but it's stop to work in same moment, when cache layer fulfilled with
> data and evict/flush started...
> -Sam
>
> On Thu, Aug 20, 2015 at 4:11 PM, Voloshanenko Igor
>  wrote:
> > No, when we start draining cache - bad pgs was in place...
> > We have big rebalance (disk by disk - to change journal side on both
> > hot/cold layers).. All was Ok, but after 2 days - arrived scrub errors
> and 2
> > pgs inconsistent...
> >
> > In writeback - yes, looks like snapshot works good. but it's stop to
> work in
> > same moment, when cache layer fulfilled with data and evict/flush
> started...
> >
> >
> >
> > 2015-08-21 2:09 GMT+03:00 Samuel Just :
> >>
> >> So you started draining the cache pool before you saw either the
> >> inconsistent pgs or the anomalous snap behavior?  (That is, writeback
> >> mode was working correctly?)
> >> -Sam
> >>
> >> On Thu, Aug 20, 2015 at 4:07 PM, Voloshanenko Igor
> >>  wrote:
> >> > Good joke )
> >> >
> >> > 2015-08-21 2:06 GMT+03:00 Samuel Just :
> >> >>
> >> >> Certainly, don't reproduce this with a cluster you care about :).
> >> >> -Sam
> >> >>
> >> >> On Thu, Aug 20, 2015 at 4:02 PM, Samuel Just 
> wrote:
> >> >> > What's supposed to happen is that the client transparently directs
> >> >> > all
> >> >> > requests to the cache pool rather than the cold pool when there is
> a
> >> >> > cache pool.  If the kernel is sending requests to the cold pool,
> >> >> > that's probably where the bug is.  Odd.  It could also be a bug
> >> >> > specific 'forward' mode either in the client or on the osd.  Why
> did
> >> >> > you have it in that mode?
> >> >> > -Sam
> >> >> >
> >> >> > On Thu, Aug 20, 2015 at 3:58 PM, Voloshanenko Igor
> >> >> >  wrote:
> >> >> >> We used 4.x branch, as we have "very good" Samsung 850 pro in
> >> >> >> production,
> >> >> >> and they don;t support ncq_trim...
> >> >> >>
> >> >> >> And 4,x first branch which include exceptions for this in
> libsata.c.
> >> >> >>
> >> >> >> sure we can backport this 1 line to 3.x branch, but we prefer no
> to
> >> >> >> go
> >> >> >> deeper if packege for new kernel exist.
> >> >> >>
> >> >> >> 2015-08-21 1:56 GMT+03:00 Voloshanenko Igor
> >> >> >> :
> >> >> >>>
> >> >> >>> root@test:~# uname -a
> >> >> >>> Linux ix-s5 4.0.4-040004-generic #201505171336 SMP Sun May 17
> >> >> >>> 17:37:22
> >> >> >>> UTC
> >> >> >>> 2015 x86_64 x86_64 x86_64 GNU/Linux
> >> >> >>>
> >> >> >>> 2015-08-21 1:54 GMT+03:00 Samuel Just :
> >> >> 
> >> >>  Also, can you include the kernel version?
> >> >>  -Sam
> >> >> 
> >> >>  On Thu, Aug 20, 2015 at 3:51 PM, Samuel Just 
> >> >>  wrote:
> >> >>  > Snapshotting with cache/tiering *is* supposed to work.  Can
> you
> >> >>  > open a
> >> >>  > bug?
> >> >>  > -Sam
> >> >>  >
> >> >>  > On Thu, Aug 20, 2015 at 3:36 PM, Andrija Panic
> >> >>  >  wrote:
> >> >>  >> This was related to the caching layer, which doesnt support
> >> >>  >> snapshooting per
> >> >>  >> docs...for sake of closing the thread.
> >> >>  >>
> >> >>  >> On 17 August 2015 at 21:15, Voloshanenko Igor
> >> >>  >> 
> >> >>  >> wrote:
> >> >>  >>>
> >> >>  >>> Hi all, can you please help me with unexplained situation...
> >> >>  >>>
> >> >>  >>> All snapshot inside ceph broken...
> >> >>  >>>
> >> >>  >>> So, as example, we have VM template, as rbd inside ceph.
> >> >>  >>> We can map it and mount to check that all ok with it
> >> >>  >>>
> >> >>  >>> root@test:~# rbd map
> >> >>  >>> cold-storage/0e23c701-401d-4465-b9b4-c02939d57bb5
> >> >>  >>> /dev/rbd0
> >> >>  >>> root@test:~# parted /dev/rbd0 print
> >> >>  >>> Model: Unknown (unknown)
> >> >>  >>> Disk /dev/rbd0: 10.7GB
> >> >>  >>> Sector size (logical/physical): 512B/512B
> >> >>  >>> Partition Table: msdos
> >> >>  >>>
> >> >>  >>> Number  Start   End SizeType File system  Flags
> >> >>  >>>  1  1049kB  525MB   524MB   primary  ext4 boot
> >> >>  >>>  2  525MB   10.7GB  10.2GB  primary   lvm
> >> >>  >>>
> >> >>  >>> Than i want to create snap, so i do:
> >> >>  >>> root@test:~# rbd snap create
> >> >>  >>> cold-storage/0e23c701-401d-4465-b9b4-c02939d57bb5@new_snap
> >> >>  >>>
> >> >>  >>> And now i want to map it:
> >> >>  >>>
> >> >>  >>> root@test:~# rbd map
> >> >>  >>> cold-storage/0e23c701-401d-4465-b9b4-c02939d57bb5@new_snap
> >> >>  >>> /dev/rbd1
> >> >>  >>> root@test:~# parted /dev/rbd1 

Re: [ceph-users] Broken snapshots... CEPH 0.94.2

2015-08-20 Thread Samuel Just
Also, what do you mean by "change journal side"?
-Sam

On Thu, Aug 20, 2015 at 4:15 PM, Samuel Just  wrote:
> Not sure what you mean by:
>
> but it's stop to work in same moment, when cache layer fulfilled with
> data and evict/flush started...
> -Sam
>
> On Thu, Aug 20, 2015 at 4:11 PM, Voloshanenko Igor
>  wrote:
>> No, when we start draining cache - bad pgs was in place...
>> We have big rebalance (disk by disk - to change journal side on both
>> hot/cold layers).. All was Ok, but after 2 days - arrived scrub errors and 2
>> pgs inconsistent...
>>
>> In writeback - yes, looks like snapshot works good. but it's stop to work in
>> same moment, when cache layer fulfilled with data and evict/flush started...
>>
>>
>>
>> 2015-08-21 2:09 GMT+03:00 Samuel Just :
>>>
>>> So you started draining the cache pool before you saw either the
>>> inconsistent pgs or the anomalous snap behavior?  (That is, writeback
>>> mode was working correctly?)
>>> -Sam
>>>
>>> On Thu, Aug 20, 2015 at 4:07 PM, Voloshanenko Igor
>>>  wrote:
>>> > Good joke )
>>> >
>>> > 2015-08-21 2:06 GMT+03:00 Samuel Just :
>>> >>
>>> >> Certainly, don't reproduce this with a cluster you care about :).
>>> >> -Sam
>>> >>
>>> >> On Thu, Aug 20, 2015 at 4:02 PM, Samuel Just  wrote:
>>> >> > What's supposed to happen is that the client transparently directs
>>> >> > all
>>> >> > requests to the cache pool rather than the cold pool when there is a
>>> >> > cache pool.  If the kernel is sending requests to the cold pool,
>>> >> > that's probably where the bug is.  Odd.  It could also be a bug
>>> >> > specific 'forward' mode either in the client or on the osd.  Why did
>>> >> > you have it in that mode?
>>> >> > -Sam
>>> >> >
>>> >> > On Thu, Aug 20, 2015 at 3:58 PM, Voloshanenko Igor
>>> >> >  wrote:
>>> >> >> We used 4.x branch, as we have "very good" Samsung 850 pro in
>>> >> >> production,
>>> >> >> and they don;t support ncq_trim...
>>> >> >>
>>> >> >> And 4,x first branch which include exceptions for this in libsata.c.
>>> >> >>
>>> >> >> sure we can backport this 1 line to 3.x branch, but we prefer no to
>>> >> >> go
>>> >> >> deeper if packege for new kernel exist.
>>> >> >>
>>> >> >> 2015-08-21 1:56 GMT+03:00 Voloshanenko Igor
>>> >> >> :
>>> >> >>>
>>> >> >>> root@test:~# uname -a
>>> >> >>> Linux ix-s5 4.0.4-040004-generic #201505171336 SMP Sun May 17
>>> >> >>> 17:37:22
>>> >> >>> UTC
>>> >> >>> 2015 x86_64 x86_64 x86_64 GNU/Linux
>>> >> >>>
>>> >> >>> 2015-08-21 1:54 GMT+03:00 Samuel Just :
>>> >> 
>>> >>  Also, can you include the kernel version?
>>> >>  -Sam
>>> >> 
>>> >>  On Thu, Aug 20, 2015 at 3:51 PM, Samuel Just 
>>> >>  wrote:
>>> >>  > Snapshotting with cache/tiering *is* supposed to work.  Can you
>>> >>  > open a
>>> >>  > bug?
>>> >>  > -Sam
>>> >>  >
>>> >>  > On Thu, Aug 20, 2015 at 3:36 PM, Andrija Panic
>>> >>  >  wrote:
>>> >>  >> This was related to the caching layer, which doesnt support
>>> >>  >> snapshooting per
>>> >>  >> docs...for sake of closing the thread.
>>> >>  >>
>>> >>  >> On 17 August 2015 at 21:15, Voloshanenko Igor
>>> >>  >> 
>>> >>  >> wrote:
>>> >>  >>>
>>> >>  >>> Hi all, can you please help me with unexplained situation...
>>> >>  >>>
>>> >>  >>> All snapshot inside ceph broken...
>>> >>  >>>
>>> >>  >>> So, as example, we have VM template, as rbd inside ceph.
>>> >>  >>> We can map it and mount to check that all ok with it
>>> >>  >>>
>>> >>  >>> root@test:~# rbd map
>>> >>  >>> cold-storage/0e23c701-401d-4465-b9b4-c02939d57bb5
>>> >>  >>> /dev/rbd0
>>> >>  >>> root@test:~# parted /dev/rbd0 print
>>> >>  >>> Model: Unknown (unknown)
>>> >>  >>> Disk /dev/rbd0: 10.7GB
>>> >>  >>> Sector size (logical/physical): 512B/512B
>>> >>  >>> Partition Table: msdos
>>> >>  >>>
>>> >>  >>> Number  Start   End SizeType File system  Flags
>>> >>  >>>  1  1049kB  525MB   524MB   primary  ext4 boot
>>> >>  >>>  2  525MB   10.7GB  10.2GB  primary   lvm
>>> >>  >>>
>>> >>  >>> Than i want to create snap, so i do:
>>> >>  >>> root@test:~# rbd snap create
>>> >>  >>> cold-storage/0e23c701-401d-4465-b9b4-c02939d57bb5@new_snap
>>> >>  >>>
>>> >>  >>> And now i want to map it:
>>> >>  >>>
>>> >>  >>> root@test:~# rbd map
>>> >>  >>> cold-storage/0e23c701-401d-4465-b9b4-c02939d57bb5@new_snap
>>> >>  >>> /dev/rbd1
>>> >>  >>> root@test:~# parted /dev/rbd1 print
>>> >>  >>> Warning: Unable to open /dev/rbd1 read-write (Read-only file
>>> >>  >>> system).
>>> >>  >>> /dev/rbd1 has been opened read-only.
>>> >>  >>> Warning: Unable to open /dev/rbd1 read-write (Read-only file
>>> >>  >>> system).
>>> >>  >>> /dev/rbd1 has been opened read-only.
>>> >>  >>> Error: /dev/rbd1: unrecognised disk label
>>> >>  >>>
>>> >>  >>> Even md5 different...
>

Re: [ceph-users] Broken snapshots... CEPH 0.94.2

2015-08-20 Thread Samuel Just
Not sure what you mean by:

but it's stop to work in same moment, when cache layer fulfilled with
data and evict/flush started...
-Sam

On Thu, Aug 20, 2015 at 4:11 PM, Voloshanenko Igor
 wrote:
> No, when we start draining cache - bad pgs was in place...
> We have big rebalance (disk by disk - to change journal side on both
> hot/cold layers).. All was Ok, but after 2 days - arrived scrub errors and 2
> pgs inconsistent...
>
> In writeback - yes, looks like snapshot works good. but it's stop to work in
> same moment, when cache layer fulfilled with data and evict/flush started...
>
>
>
> 2015-08-21 2:09 GMT+03:00 Samuel Just :
>>
>> So you started draining the cache pool before you saw either the
>> inconsistent pgs or the anomalous snap behavior?  (That is, writeback
>> mode was working correctly?)
>> -Sam
>>
>> On Thu, Aug 20, 2015 at 4:07 PM, Voloshanenko Igor
>>  wrote:
>> > Good joke )
>> >
>> > 2015-08-21 2:06 GMT+03:00 Samuel Just :
>> >>
>> >> Certainly, don't reproduce this with a cluster you care about :).
>> >> -Sam
>> >>
>> >> On Thu, Aug 20, 2015 at 4:02 PM, Samuel Just  wrote:
>> >> > What's supposed to happen is that the client transparently directs
>> >> > all
>> >> > requests to the cache pool rather than the cold pool when there is a
>> >> > cache pool.  If the kernel is sending requests to the cold pool,
>> >> > that's probably where the bug is.  Odd.  It could also be a bug
>> >> > specific 'forward' mode either in the client or on the osd.  Why did
>> >> > you have it in that mode?
>> >> > -Sam
>> >> >
>> >> > On Thu, Aug 20, 2015 at 3:58 PM, Voloshanenko Igor
>> >> >  wrote:
>> >> >> We used 4.x branch, as we have "very good" Samsung 850 pro in
>> >> >> production,
>> >> >> and they don;t support ncq_trim...
>> >> >>
>> >> >> And 4,x first branch which include exceptions for this in libsata.c.
>> >> >>
>> >> >> sure we can backport this 1 line to 3.x branch, but we prefer no to
>> >> >> go
>> >> >> deeper if packege for new kernel exist.
>> >> >>
>> >> >> 2015-08-21 1:56 GMT+03:00 Voloshanenko Igor
>> >> >> :
>> >> >>>
>> >> >>> root@test:~# uname -a
>> >> >>> Linux ix-s5 4.0.4-040004-generic #201505171336 SMP Sun May 17
>> >> >>> 17:37:22
>> >> >>> UTC
>> >> >>> 2015 x86_64 x86_64 x86_64 GNU/Linux
>> >> >>>
>> >> >>> 2015-08-21 1:54 GMT+03:00 Samuel Just :
>> >> 
>> >>  Also, can you include the kernel version?
>> >>  -Sam
>> >> 
>> >>  On Thu, Aug 20, 2015 at 3:51 PM, Samuel Just 
>> >>  wrote:
>> >>  > Snapshotting with cache/tiering *is* supposed to work.  Can you
>> >>  > open a
>> >>  > bug?
>> >>  > -Sam
>> >>  >
>> >>  > On Thu, Aug 20, 2015 at 3:36 PM, Andrija Panic
>> >>  >  wrote:
>> >>  >> This was related to the caching layer, which doesnt support
>> >>  >> snapshooting per
>> >>  >> docs...for sake of closing the thread.
>> >>  >>
>> >>  >> On 17 August 2015 at 21:15, Voloshanenko Igor
>> >>  >> 
>> >>  >> wrote:
>> >>  >>>
>> >>  >>> Hi all, can you please help me with unexplained situation...
>> >>  >>>
>> >>  >>> All snapshot inside ceph broken...
>> >>  >>>
>> >>  >>> So, as example, we have VM template, as rbd inside ceph.
>> >>  >>> We can map it and mount to check that all ok with it
>> >>  >>>
>> >>  >>> root@test:~# rbd map
>> >>  >>> cold-storage/0e23c701-401d-4465-b9b4-c02939d57bb5
>> >>  >>> /dev/rbd0
>> >>  >>> root@test:~# parted /dev/rbd0 print
>> >>  >>> Model: Unknown (unknown)
>> >>  >>> Disk /dev/rbd0: 10.7GB
>> >>  >>> Sector size (logical/physical): 512B/512B
>> >>  >>> Partition Table: msdos
>> >>  >>>
>> >>  >>> Number  Start   End SizeType File system  Flags
>> >>  >>>  1  1049kB  525MB   524MB   primary  ext4 boot
>> >>  >>>  2  525MB   10.7GB  10.2GB  primary   lvm
>> >>  >>>
>> >>  >>> Than i want to create snap, so i do:
>> >>  >>> root@test:~# rbd snap create
>> >>  >>> cold-storage/0e23c701-401d-4465-b9b4-c02939d57bb5@new_snap
>> >>  >>>
>> >>  >>> And now i want to map it:
>> >>  >>>
>> >>  >>> root@test:~# rbd map
>> >>  >>> cold-storage/0e23c701-401d-4465-b9b4-c02939d57bb5@new_snap
>> >>  >>> /dev/rbd1
>> >>  >>> root@test:~# parted /dev/rbd1 print
>> >>  >>> Warning: Unable to open /dev/rbd1 read-write (Read-only file
>> >>  >>> system).
>> >>  >>> /dev/rbd1 has been opened read-only.
>> >>  >>> Warning: Unable to open /dev/rbd1 read-write (Read-only file
>> >>  >>> system).
>> >>  >>> /dev/rbd1 has been opened read-only.
>> >>  >>> Error: /dev/rbd1: unrecognised disk label
>> >>  >>>
>> >>  >>> Even md5 different...
>> >>  >>> root@ix-s2:~# md5sum /dev/rbd0
>> >>  >>> 9a47797a07fee3a3d71316e22891d752  /dev/rbd0
>> >>  >>> root@ix-s2:~# md5sum /dev/rbd1
>> >>  >>> e450f50b9ffa0073fae940ee858a43ce  /dev/rbd1
>> >>  >>>
>> >>  >>>
>

Re: [ceph-users] Broken snapshots... CEPH 0.94.2

2015-08-20 Thread Voloshanenko Igor
No, when we started draining the cache, the bad pgs were already in place...
We did a big rebalance (disk by disk, to change the journal size on both
hot/cold layers). All was OK, but after 2 days scrub errors arrived and
2 pgs went inconsistent...

In writeback mode - yes, snapshots looked to work fine, but it stops working
at the same moment the cache layer fills with data and evict/flush
starts...



2015-08-21 2:09 GMT+03:00 Samuel Just :

> So you started draining the cache pool before you saw either the
> inconsistent pgs or the anomalous snap behavior?  (That is, writeback
> mode was working correctly?)
> -Sam
>
> On Thu, Aug 20, 2015 at 4:07 PM, Voloshanenko Igor
>  wrote:
> > Good joke )
> >
> > 2015-08-21 2:06 GMT+03:00 Samuel Just :
> >>
> >> Certainly, don't reproduce this with a cluster you care about :).
> >> -Sam
> >>
> >> On Thu, Aug 20, 2015 at 4:02 PM, Samuel Just  wrote:
> >> > What's supposed to happen is that the client transparently directs all
> >> > requests to the cache pool rather than the cold pool when there is a
> >> > cache pool.  If the kernel is sending requests to the cold pool,
> >> > that's probably where the bug is.  Odd.  It could also be a bug
> >> > specific 'forward' mode either in the client or on the osd.  Why did
> >> > you have it in that mode?
> >> > -Sam
> >> >
> >> > On Thu, Aug 20, 2015 at 3:58 PM, Voloshanenko Igor
> >> >  wrote:
> >> >> We used 4.x branch, as we have "very good" Samsung 850 pro in
> >> >> production,
> >> >> and they don;t support ncq_trim...
> >> >>
> >> >> And 4,x first branch which include exceptions for this in libsata.c.
> >> >>
> >> >> sure we can backport this 1 line to 3.x branch, but we prefer no to
> go
> >> >> deeper if packege for new kernel exist.
> >> >>
> >> >> 2015-08-21 1:56 GMT+03:00 Voloshanenko Igor
> >> >> :
> >> >>>
> >> >>> root@test:~# uname -a
> >> >>> Linux ix-s5 4.0.4-040004-generic #201505171336 SMP Sun May 17
> 17:37:22
> >> >>> UTC
> >> >>> 2015 x86_64 x86_64 x86_64 GNU/Linux
> >> >>>
> >> >>> 2015-08-21 1:54 GMT+03:00 Samuel Just :
> >> 
> >>  Also, can you include the kernel version?
> >>  -Sam
> >> 
> >>  On Thu, Aug 20, 2015 at 3:51 PM, Samuel Just 
> >>  wrote:
> >>  > Snapshotting with cache/tiering *is* supposed to work.  Can you
> >>  > open a
> >>  > bug?
> >>  > -Sam
> >>  >
> >>  > On Thu, Aug 20, 2015 at 3:36 PM, Andrija Panic
> >>  >  wrote:
> >>  >> This was related to the caching layer, which doesnt support
> >>  >> snapshooting per
> >>  >> docs...for sake of closing the thread.
> >>  >>
> >>  >> On 17 August 2015 at 21:15, Voloshanenko Igor
> >>  >> 
> >>  >> wrote:
> >>  >>>
> >>  >>> Hi all, can you please help me with unexplained situation...
> >>  >>>
> >>  >>> All snapshot inside ceph broken...
> >>  >>>
> >>  >>> So, as example, we have VM template, as rbd inside ceph.
> >>  >>> We can map it and mount to check that all ok with it
> >>  >>>
> >>  >>> root@test:~# rbd map
> >>  >>> cold-storage/0e23c701-401d-4465-b9b4-c02939d57bb5
> >>  >>> /dev/rbd0
> >>  >>> root@test:~# parted /dev/rbd0 print
> >>  >>> Model: Unknown (unknown)
> >>  >>> Disk /dev/rbd0: 10.7GB
> >>  >>> Sector size (logical/physical): 512B/512B
> >>  >>> Partition Table: msdos
> >>  >>>
> >>  >>> Number  Start   End SizeType File system  Flags
> >>  >>>  1  1049kB  525MB   524MB   primary  ext4 boot
> >>  >>>  2  525MB   10.7GB  10.2GB  primary   lvm
> >>  >>>
> >>  >>> Than i want to create snap, so i do:
> >>  >>> root@test:~# rbd snap create
> >>  >>> cold-storage/0e23c701-401d-4465-b9b4-c02939d57bb5@new_snap
> >>  >>>
> >>  >>> And now i want to map it:
> >>  >>>
> >>  >>> root@test:~# rbd map
> >>  >>> cold-storage/0e23c701-401d-4465-b9b4-c02939d57bb5@new_snap
> >>  >>> /dev/rbd1
> >>  >>> root@test:~# parted /dev/rbd1 print
> >>  >>> Warning: Unable to open /dev/rbd1 read-write (Read-only file
> >>  >>> system).
> >>  >>> /dev/rbd1 has been opened read-only.
> >>  >>> Warning: Unable to open /dev/rbd1 read-write (Read-only file
> >>  >>> system).
> >>  >>> /dev/rbd1 has been opened read-only.
> >>  >>> Error: /dev/rbd1: unrecognised disk label
> >>  >>>
> >>  >>> Even md5 different...
> >>  >>> root@ix-s2:~# md5sum /dev/rbd0
> >>  >>> 9a47797a07fee3a3d71316e22891d752  /dev/rbd0
> >>  >>> root@ix-s2:~# md5sum /dev/rbd1
> >>  >>> e450f50b9ffa0073fae940ee858a43ce  /dev/rbd1
> >>  >>>
> >>  >>>
> >>  >>> Ok, now i protect snap and create clone... but same thing...
> >>  >>> md5 for clone same as for snap,,
> >>  >>>
> >>  >>> root@test:~# rbd unmap /dev/rbd1
> >>  >>> root@test:~# rbd snap protect
> >>  >>> cold-storage/0e23c701-401d-4465-b9b4-c02939d57bb5@new_snap
> >>  >>> root@test:~# rbd clone
> >>  >

Re: [ceph-users] Broken snapshots... CEPH 0.94.2

2015-08-20 Thread Samuel Just
Created a ticket to improve our testing here -- this appears to be a hole.

http://tracker.ceph.com/issues/12742
-Sam

On Thu, Aug 20, 2015 at 4:09 PM, Samuel Just  wrote:
> So you started draining the cache pool before you saw either the
> inconsistent pgs or the anomalous snap behavior?  (That is, writeback
> mode was working correctly?)
> -Sam
>
> On Thu, Aug 20, 2015 at 4:07 PM, Voloshanenko Igor
>  wrote:
>> Good joke )
>>
>> 2015-08-21 2:06 GMT+03:00 Samuel Just :
>>>
>>> Certainly, don't reproduce this with a cluster you care about :).
>>> -Sam
>>>
>>> On Thu, Aug 20, 2015 at 4:02 PM, Samuel Just  wrote:
>>> > What's supposed to happen is that the client transparently directs all
>>> > requests to the cache pool rather than the cold pool when there is a
>>> > cache pool.  If the kernel is sending requests to the cold pool,
>>> > that's probably where the bug is.  Odd.  It could also be a bug
>>> > specific 'forward' mode either in the client or on the osd.  Why did
>>> > you have it in that mode?
>>> > -Sam
>>> >
>>> > On Thu, Aug 20, 2015 at 3:58 PM, Voloshanenko Igor
>>> >  wrote:
>>> >> We used 4.x branch, as we have "very good" Samsung 850 pro in
>>> >> production,
>>> >> and they don;t support ncq_trim...
>>> >>
>>> >> And 4,x first branch which include exceptions for this in libsata.c.
>>> >>
>>> >> sure we can backport this 1 line to 3.x branch, but we prefer no to go
>>> >> deeper if packege for new kernel exist.
>>> >>
>>> >> 2015-08-21 1:56 GMT+03:00 Voloshanenko Igor
>>> >> :
>>> >>>
>>> >>> root@test:~# uname -a
>>> >>> Linux ix-s5 4.0.4-040004-generic #201505171336 SMP Sun May 17 17:37:22
>>> >>> UTC
>>> >>> 2015 x86_64 x86_64 x86_64 GNU/Linux
>>> >>>
>>> >>> 2015-08-21 1:54 GMT+03:00 Samuel Just :
>>> 
>>>  Also, can you include the kernel version?
>>>  -Sam
>>> 
>>>  On Thu, Aug 20, 2015 at 3:51 PM, Samuel Just 
>>>  wrote:
>>>  > Snapshotting with cache/tiering *is* supposed to work.  Can you
>>>  > open a
>>>  > bug?
>>>  > -Sam
>>>  >
>>>  > On Thu, Aug 20, 2015 at 3:36 PM, Andrija Panic
>>>  >  wrote:
>>>  >> This was related to the caching layer, which doesnt support
>>>  >> snapshooting per
>>>  >> docs...for sake of closing the thread.
>>>  >>
>>>  >> On 17 August 2015 at 21:15, Voloshanenko Igor
>>>  >> 
>>>  >> wrote:
>>>  >>>
>>>  >>> Hi all, can you please help me with unexplained situation...
>>>  >>>
>>>  >>> All snapshot inside ceph broken...
>>>  >>>
>>>  >>> So, as example, we have VM template, as rbd inside ceph.
>>>  >>> We can map it and mount to check that all ok with it
>>>  >>>
>>>  >>> root@test:~# rbd map
>>>  >>> cold-storage/0e23c701-401d-4465-b9b4-c02939d57bb5
>>>  >>> /dev/rbd0
>>>  >>> root@test:~# parted /dev/rbd0 print
>>>  >>> Model: Unknown (unknown)
>>>  >>> Disk /dev/rbd0: 10.7GB
>>>  >>> Sector size (logical/physical): 512B/512B
>>>  >>> Partition Table: msdos
>>>  >>>
>>>  >>> Number  Start   End SizeType File system  Flags
>>>  >>>  1  1049kB  525MB   524MB   primary  ext4 boot
>>>  >>>  2  525MB   10.7GB  10.2GB  primary   lvm
>>>  >>>
>>>  >>> Than i want to create snap, so i do:
>>>  >>> root@test:~# rbd snap create
>>>  >>> cold-storage/0e23c701-401d-4465-b9b4-c02939d57bb5@new_snap
>>>  >>>
>>>  >>> And now i want to map it:
>>>  >>>
>>>  >>> root@test:~# rbd map
>>>  >>> cold-storage/0e23c701-401d-4465-b9b4-c02939d57bb5@new_snap
>>>  >>> /dev/rbd1
>>>  >>> root@test:~# parted /dev/rbd1 print
>>>  >>> Warning: Unable to open /dev/rbd1 read-write (Read-only file
>>>  >>> system).
>>>  >>> /dev/rbd1 has been opened read-only.
>>>  >>> Warning: Unable to open /dev/rbd1 read-write (Read-only file
>>>  >>> system).
>>>  >>> /dev/rbd1 has been opened read-only.
>>>  >>> Error: /dev/rbd1: unrecognised disk label
>>>  >>>
>>>  >>> Even md5 different...
>>>  >>> root@ix-s2:~# md5sum /dev/rbd0
>>>  >>> 9a47797a07fee3a3d71316e22891d752  /dev/rbd0
>>>  >>> root@ix-s2:~# md5sum /dev/rbd1
>>>  >>> e450f50b9ffa0073fae940ee858a43ce  /dev/rbd1
>>>  >>>
>>>  >>>
>>>  >>> Ok, now i protect snap and create clone... but same thing...
>>>  >>> md5 for clone same as for snap,,
>>>  >>>
>>>  >>> root@test:~# rbd unmap /dev/rbd1
>>>  >>> root@test:~# rbd snap protect
>>>  >>> cold-storage/0e23c701-401d-4465-b9b4-c02939d57bb5@new_snap
>>>  >>> root@test:~# rbd clone
>>>  >>> cold-storage/0e23c701-401d-4465-b9b4-c02939d57bb5@new_snap
>>>  >>> cold-storage/test-image
>>>  >>> root@test:~# rbd map cold-storage/test-image
>>>  >>> /dev/rbd1
>>>  >>> root@test:~# md5sum /dev/rbd1
>>>  >>> e450f50b9ffa0073fae940ee858a43ce  /dev/rbd1
>>>  >>>
>>>  >>>  but it's broken...
>>>  >>> root@test:~# parted /de

Re: [ceph-users] Broken snapshots... CEPH 0.94.2

2015-08-20 Thread Samuel Just
So you started draining the cache pool before you saw either the
inconsistent pgs or the anomalous snap behavior?  (That is, writeback
mode was working correctly?)
-Sam

On Thu, Aug 20, 2015 at 4:07 PM, Voloshanenko Igor
 wrote:
> Good joke )
>
> 2015-08-21 2:06 GMT+03:00 Samuel Just :
>>
>> Certainly, don't reproduce this with a cluster you care about :).
>> -Sam
>>
>> On Thu, Aug 20, 2015 at 4:02 PM, Samuel Just  wrote:
>> > What's supposed to happen is that the client transparently directs all
>> > requests to the cache pool rather than the cold pool when there is a
>> > cache pool.  If the kernel is sending requests to the cold pool,
>> > that's probably where the bug is.  Odd.  It could also be a bug
>> > specific 'forward' mode either in the client or on the osd.  Why did
>> > you have it in that mode?
>> > -Sam
>> >
>> > On Thu, Aug 20, 2015 at 3:58 PM, Voloshanenko Igor
>> >  wrote:
>> >> We used 4.x branch, as we have "very good" Samsung 850 pro in
>> >> production,
>> >> and they don;t support ncq_trim...
>> >>
>> >> And 4,x first branch which include exceptions for this in libsata.c.
>> >>
>> >> sure we can backport this 1 line to 3.x branch, but we prefer no to go
>> >> deeper if packege for new kernel exist.
>> >>
>> >> 2015-08-21 1:56 GMT+03:00 Voloshanenko Igor
>> >> :
>> >>>
>> >>> root@test:~# uname -a
>> >>> Linux ix-s5 4.0.4-040004-generic #201505171336 SMP Sun May 17 17:37:22
>> >>> UTC
>> >>> 2015 x86_64 x86_64 x86_64 GNU/Linux
>> >>>
>> >>> 2015-08-21 1:54 GMT+03:00 Samuel Just :
>> 
>>  Also, can you include the kernel version?
>>  -Sam
>> 
>>  On Thu, Aug 20, 2015 at 3:51 PM, Samuel Just 
>>  wrote:
>>  > Snapshotting with cache/tiering *is* supposed to work.  Can you
>>  > open a
>>  > bug?
>>  > -Sam
>>  >
>>  > On Thu, Aug 20, 2015 at 3:36 PM, Andrija Panic
>>  >  wrote:
>>  >> This was related to the caching layer, which doesnt support
>>  >> snapshooting per
>>  >> docs...for sake of closing the thread.
>>  >>
>>  >> On 17 August 2015 at 21:15, Voloshanenko Igor
>>  >> 
>>  >> wrote:
>>  >>>
>>  >>> Hi all, can you please help me with unexplained situation...
>>  >>>
>>  >>> All snapshot inside ceph broken...
>>  >>>
>>  >>> So, as example, we have VM template, as rbd inside ceph.
>>  >>> We can map it and mount to check that all ok with it
>>  >>>
>>  >>> root@test:~# rbd map
>>  >>> cold-storage/0e23c701-401d-4465-b9b4-c02939d57bb5
>>  >>> /dev/rbd0
>>  >>> root@test:~# parted /dev/rbd0 print
>>  >>> Model: Unknown (unknown)
>>  >>> Disk /dev/rbd0: 10.7GB
>>  >>> Sector size (logical/physical): 512B/512B
>>  >>> Partition Table: msdos
>>  >>>
>>  >>> Number  Start   End SizeType File system  Flags
>>  >>>  1  1049kB  525MB   524MB   primary  ext4 boot
>>  >>>  2  525MB   10.7GB  10.2GB  primary   lvm
>>  >>>
>>  >>> Than i want to create snap, so i do:
>>  >>> root@test:~# rbd snap create
>>  >>> cold-storage/0e23c701-401d-4465-b9b4-c02939d57bb5@new_snap
>>  >>>
>>  >>> And now i want to map it:
>>  >>>
>>  >>> root@test:~# rbd map
>>  >>> cold-storage/0e23c701-401d-4465-b9b4-c02939d57bb5@new_snap
>>  >>> /dev/rbd1
>>  >>> root@test:~# parted /dev/rbd1 print
>>  >>> Warning: Unable to open /dev/rbd1 read-write (Read-only file
>>  >>> system).
>>  >>> /dev/rbd1 has been opened read-only.
>>  >>> Warning: Unable to open /dev/rbd1 read-write (Read-only file
>>  >>> system).
>>  >>> /dev/rbd1 has been opened read-only.
>>  >>> Error: /dev/rbd1: unrecognised disk label
>>  >>>
>>  >>> Even md5 different...
>>  >>> root@ix-s2:~# md5sum /dev/rbd0
>>  >>> 9a47797a07fee3a3d71316e22891d752  /dev/rbd0
>>  >>> root@ix-s2:~# md5sum /dev/rbd1
>>  >>> e450f50b9ffa0073fae940ee858a43ce  /dev/rbd1
>>  >>>
>>  >>>
>>  >>> Ok, now i protect snap and create clone... but same thing...
>>  >>> md5 for clone same as for snap,,
>>  >>>
>>  >>> root@test:~# rbd unmap /dev/rbd1
>>  >>> root@test:~# rbd snap protect
>>  >>> cold-storage/0e23c701-401d-4465-b9b4-c02939d57bb5@new_snap
>>  >>> root@test:~# rbd clone
>>  >>> cold-storage/0e23c701-401d-4465-b9b4-c02939d57bb5@new_snap
>>  >>> cold-storage/test-image
>>  >>> root@test:~# rbd map cold-storage/test-image
>>  >>> /dev/rbd1
>>  >>> root@test:~# md5sum /dev/rbd1
>>  >>> e450f50b9ffa0073fae940ee858a43ce  /dev/rbd1
>>  >>>
>>  >>>  but it's broken...
>>  >>> root@test:~# parted /dev/rbd1 print
>>  >>> Error: /dev/rbd1: unrecognised disk label
>>  >>>
>>  >>>
>>  >>> =
>>  >>>
>>  >>> tech details:
>>  >>>
>>  >>> root@test:~# ceph -v
>>  >>> ceph version 0.94.2 (5fb85614ca8f354284c713a2f9c610860720bbf3)
>>  >>>
>>  >>> We have 

Re: [ceph-users] Broken snapshots... CEPH 0.94.2

2015-08-20 Thread Voloshanenko Igor
Good joke )

2015-08-21 2:06 GMT+03:00 Samuel Just :

> Certainly, don't reproduce this with a cluster you care about :).
> -Sam
>
> On Thu, Aug 20, 2015 at 4:02 PM, Samuel Just  wrote:
> > What's supposed to happen is that the client transparently directs all
> > requests to the cache pool rather than the cold pool when there is a
> > cache pool.  If the kernel is sending requests to the cold pool,
> > that's probably where the bug is.  Odd.  It could also be a bug
> > specific 'forward' mode either in the client or on the osd.  Why did
> > you have it in that mode?
> > -Sam
> >
> > On Thu, Aug 20, 2015 at 3:58 PM, Voloshanenko Igor
> >  wrote:
> >> We used 4.x branch, as we have "very good" Samsung 850 pro in
> production,
> >> and they don;t support ncq_trim...
> >>
> >> And 4,x first branch which include exceptions for this in libsata.c.
> >>
> >> sure we can backport this 1 line to 3.x branch, but we prefer no to go
> >> deeper if packege for new kernel exist.
> >>
> >> 2015-08-21 1:56 GMT+03:00 Voloshanenko Igor <
> igor.voloshane...@gmail.com>:
> >>>
> >>> root@test:~# uname -a
> >>> Linux ix-s5 4.0.4-040004-generic #201505171336 SMP Sun May 17 17:37:22
> UTC
> >>> 2015 x86_64 x86_64 x86_64 GNU/Linux
> >>>
> >>> 2015-08-21 1:54 GMT+03:00 Samuel Just :
> 
>  Also, can you include the kernel version?
>  -Sam
> 
>  On Thu, Aug 20, 2015 at 3:51 PM, Samuel Just 
> wrote:
>  > Snapshotting with cache/tiering *is* supposed to work.  Can you
> open a
>  > bug?
>  > -Sam
>  >
>  > On Thu, Aug 20, 2015 at 3:36 PM, Andrija Panic
>  >  wrote:
>  >> This was related to the caching layer, which doesnt support
>  >> snapshooting per
>  >> docs...for sake of closing the thread.
>  >>
>  >> On 17 August 2015 at 21:15, Voloshanenko Igor
>  >> 
>  >> wrote:
>  >>>
>  >>> Hi all, can you please help me with unexplained situation...
>  >>>
>  >>> All snapshot inside ceph broken...
>  >>>
>  >>> So, as example, we have VM template, as rbd inside ceph.
>  >>> We can map it and mount to check that all ok with it
>  >>>
>  >>> root@test:~# rbd map
>  >>> cold-storage/0e23c701-401d-4465-b9b4-c02939d57bb5
>  >>> /dev/rbd0
>  >>> root@test:~# parted /dev/rbd0 print
>  >>> Model: Unknown (unknown)
>  >>> Disk /dev/rbd0: 10.7GB
>  >>> Sector size (logical/physical): 512B/512B
>  >>> Partition Table: msdos
>  >>>
>  >>> Number  Start   End SizeType File system  Flags
>  >>>  1  1049kB  525MB   524MB   primary  ext4 boot
>  >>>  2  525MB   10.7GB  10.2GB  primary   lvm
>  >>>
>  >>> Than i want to create snap, so i do:
>  >>> root@test:~# rbd snap create
>  >>> cold-storage/0e23c701-401d-4465-b9b4-c02939d57bb5@new_snap
>  >>>
>  >>> And now i want to map it:
>  >>>
>  >>> root@test:~# rbd map
>  >>> cold-storage/0e23c701-401d-4465-b9b4-c02939d57bb5@new_snap
>  >>> /dev/rbd1
>  >>> root@test:~# parted /dev/rbd1 print
>  >>> Warning: Unable to open /dev/rbd1 read-write (Read-only file
> system).
>  >>> /dev/rbd1 has been opened read-only.
>  >>> Warning: Unable to open /dev/rbd1 read-write (Read-only file
> system).
>  >>> /dev/rbd1 has been opened read-only.
>  >>> Error: /dev/rbd1: unrecognised disk label
>  >>>
>  >>> Even md5 different...
>  >>> root@ix-s2:~# md5sum /dev/rbd0
>  >>> 9a47797a07fee3a3d71316e22891d752  /dev/rbd0
>  >>> root@ix-s2:~# md5sum /dev/rbd1
>  >>> e450f50b9ffa0073fae940ee858a43ce  /dev/rbd1
>  >>>
>  >>>
>  >>> Ok, now i protect snap and create clone... but same thing...
>  >>> md5 for clone same as for snap,,
>  >>>
>  >>> root@test:~# rbd unmap /dev/rbd1
>  >>> root@test:~# rbd snap protect
>  >>> cold-storage/0e23c701-401d-4465-b9b4-c02939d57bb5@new_snap
>  >>> root@test:~# rbd clone
>  >>> cold-storage/0e23c701-401d-4465-b9b4-c02939d57bb5@new_snap
>  >>> cold-storage/test-image
>  >>> root@test:~# rbd map cold-storage/test-image
>  >>> /dev/rbd1
>  >>> root@test:~# md5sum /dev/rbd1
>  >>> e450f50b9ffa0073fae940ee858a43ce  /dev/rbd1
>  >>>
>  >>>  but it's broken...
>  >>> root@test:~# parted /dev/rbd1 print
>  >>> Error: /dev/rbd1: unrecognised disk label
>  >>>
>  >>>
>  >>> =
>  >>>
>  >>> tech details:
>  >>>
>  >>> root@test:~# ceph -v
>  >>> ceph version 0.94.2 (5fb85614ca8f354284c713a2f9c610860720bbf3)
>  >>>
>  >>> We have 2 inconstistent pgs, but all images not placed on this
> pgs...
>  >>>
>  >>> root@test:~# ceph health detail
>  >>> HEALTH_ERR 2 pgs inconsistent; 18 scrub errors
>  >>> pg 2.490 is active+clean+inconsistent, acting [56,15,29]
>  >>> pg 2.c4 is active+clean+inconsistent, acting [56,10,42]
>  >>> 18 scrub errors
>  >>>
>  >>> ===

Re: [ceph-users] Broken snapshots... CEPH 0.94.2

2015-08-20 Thread Voloshanenko Igor
We switched to forward mode as a step toward switching the cache layer off.

Right now we have "very good" Samsung 850 Pro in the cache layer (10 SSDs,
2 per node), and they show 2 MB/s for 4K blocks... 250 IOPS... instead of the
18-20K IOPS of the Intel S3500 240G which we chose as a replacement..

So with such "good" disks, the cache layer is a very big bottleneck for us...
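For readers following along: the tear-down Igor describes (forward mode, then flush/evict, then detach) corresponds roughly to the sequence below, per the cache-tiering docs of this era. This is a sketch, not the poster's exact commands; "hot-storage" is a placeholder name for the cache pool, since only "cold-storage" is named in the thread.

```shell
# Sketch of backing a writeback cache tier out of the data path.
# "hot-storage" is a hypothetical cache pool name; adjust to the real one.

# 1. Stop caching new writes: pass requests through to the backing pool.
ceph osd tier cache-mode hot-storage forward

# 2. Flush dirty objects and evict everything still held in the cache pool.
rados -p hot-storage cache-flush-evict-all

# 3. Once the cache pool is empty, detach it from the backing pool.
ceph osd tier remove-overlay cold-storage
ceph osd tier remove cold-storage hot-storage
```

These commands require a live cluster with admin credentials; step 2 can take a long time on a full cache pool.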

2015-08-21 2:02 GMT+03:00 Samuel Just :

> What's supposed to happen is that the client transparently directs all
> requests to the cache pool rather than the cold pool when there is a
> cache pool.  If the kernel is sending requests to the cold pool,
> that's probably where the bug is.  Odd.  It could also be a bug
> specific to 'forward' mode, either in the client or on the osd.  Why did
> you have it in that mode?
> -Sam
>
> On Thu, Aug 20, 2015 at 3:58 PM, Voloshanenko Igor
>  wrote:
> > We used 4.x branch, as we have "very good" Samsung 850 pro in production,
> > and they don;t support ncq_trim...
> >
> > And 4,x first branch which include exceptions for this in libsata.c.
> >
> > sure we can backport this 1 line to 3.x branch, but we prefer no to go
> > deeper if packege for new kernel exist.
> >
> > 2015-08-21 1:56 GMT+03:00 Voloshanenko Igor  >:
> >>
> >> root@test:~# uname -a
> >> Linux ix-s5 4.0.4-040004-generic #201505171336 SMP Sun May 17 17:37:22
> UTC
> >> 2015 x86_64 x86_64 x86_64 GNU/Linux
> >>
> >> 2015-08-21 1:54 GMT+03:00 Samuel Just :
> >>>
> >>> Also, can you include the kernel version?
> >>> -Sam
> >>>
> >>> On Thu, Aug 20, 2015 at 3:51 PM, Samuel Just  wrote:
> >>> > Snapshotting with cache/tiering *is* supposed to work.  Can you open
> a
> >>> > bug?
> >>> > -Sam
> >>> >
> >>> > On Thu, Aug 20, 2015 at 3:36 PM, Andrija Panic
> >>> >  wrote:
> >>> >> This was related to the caching layer, which doesnt support
> >>> >> snapshooting per
> >>> >> docs...for sake of closing the thread.
> >>> >>
> >>> >> On 17 August 2015 at 21:15, Voloshanenko Igor
> >>> >> 
> >>> >> wrote:
> >>> >>>
> >>> >>> Hi all, can you please help me with unexplained situation...
> >>> >>>
> >>> >>> All snapshot inside ceph broken...
> >>> >>>
> >>> >>> So, as example, we have VM template, as rbd inside ceph.
> >>> >>> We can map it and mount to check that all ok with it
> >>> >>>
> >>> >>> root@test:~# rbd map
> >>> >>> cold-storage/0e23c701-401d-4465-b9b4-c02939d57bb5
> >>> >>> /dev/rbd0
> >>> >>> root@test:~# parted /dev/rbd0 print
> >>> >>> Model: Unknown (unknown)
> >>> >>> Disk /dev/rbd0: 10.7GB
> >>> >>> Sector size (logical/physical): 512B/512B
> >>> >>> Partition Table: msdos
> >>> >>>
> >>> >>> Number  Start   End SizeType File system  Flags
> >>> >>>  1  1049kB  525MB   524MB   primary  ext4 boot
> >>> >>>  2  525MB   10.7GB  10.2GB  primary   lvm
> >>> >>>
> >>> >>> Than i want to create snap, so i do:
> >>> >>> root@test:~# rbd snap create
> >>> >>> cold-storage/0e23c701-401d-4465-b9b4-c02939d57bb5@new_snap
> >>> >>>
> >>> >>> And now i want to map it:
> >>> >>>
> >>> >>> root@test:~# rbd map
> >>> >>> cold-storage/0e23c701-401d-4465-b9b4-c02939d57bb5@new_snap
> >>> >>> /dev/rbd1
> >>> >>> root@test:~# parted /dev/rbd1 print
> >>> >>> Warning: Unable to open /dev/rbd1 read-write (Read-only file
> system).
> >>> >>> /dev/rbd1 has been opened read-only.
> >>> >>> Warning: Unable to open /dev/rbd1 read-write (Read-only file
> system).
> >>> >>> /dev/rbd1 has been opened read-only.
> >>> >>> Error: /dev/rbd1: unrecognised disk label
> >>> >>>
> >>> >>> Even md5 different...
> >>> >>> root@ix-s2:~# md5sum /dev/rbd0
> >>> >>> 9a47797a07fee3a3d71316e22891d752  /dev/rbd0
> >>> >>> root@ix-s2:~# md5sum /dev/rbd1
> >>> >>> e450f50b9ffa0073fae940ee858a43ce  /dev/rbd1
> >>> >>>
> >>> >>>
> >>> >>> Ok, now i protect snap and create clone... but same thing...
> >>> >>> md5 for clone same as for snap,,
> >>> >>>
> >>> >>> root@test:~# rbd unmap /dev/rbd1
> >>> >>> root@test:~# rbd snap protect
> >>> >>> cold-storage/0e23c701-401d-4465-b9b4-c02939d57bb5@new_snap
> >>> >>> root@test:~# rbd clone
> >>> >>> cold-storage/0e23c701-401d-4465-b9b4-c02939d57bb5@new_snap
> >>> >>> cold-storage/test-image
> >>> >>> root@test:~# rbd map cold-storage/test-image
> >>> >>> /dev/rbd1
> >>> >>> root@test:~# md5sum /dev/rbd1
> >>> >>> e450f50b9ffa0073fae940ee858a43ce  /dev/rbd1
> >>> >>>
> >>> >>>  but it's broken...
> >>> >>> root@test:~# parted /dev/rbd1 print
> >>> >>> Error: /dev/rbd1: unrecognised disk label
> >>> >>>
> >>> >>>
> >>> >>> =
> >>> >>>
> >>> >>> tech details:
> >>> >>>
> >>> >>> root@test:~# ceph -v
> >>> >>> ceph version 0.94.2 (5fb85614ca8f354284c713a2f9c610860720bbf3)
> >>> >>>
> >>> >>> We have 2 inconstistent pgs, but all images not placed on this
> pgs...
> >>> >>>
> >>> >>> root@test:~# ceph health detail
> >>> >>> HEALTH_ERR 2 pgs inconsistent; 18 scrub errors
> >>> >>> pg 2.490 is active+clean+inconsistent, acting [56,15,29]
> >>> >>> pg 2.c4 is active+clean+inconsistent, acting [56,10,42]
> >>> >>> 18 scrub errors
> >>> >>>
> >>> >>> =

Re: [ceph-users] Broken snapshots... CEPH 0.94.2

2015-08-20 Thread Samuel Just
Certainly, don't reproduce this with a cluster you care about :).
-Sam

On Thu, Aug 20, 2015 at 4:02 PM, Samuel Just  wrote:
> What's supposed to happen is that the client transparently directs all
> requests to the cache pool rather than the cold pool when there is a
> cache pool.  If the kernel is sending requests to the cold pool,
> that's probably where the bug is.  Odd.  It could also be a bug
> specific to 'forward' mode, either in the client or on the osd.  Why did
> you have it in that mode?
> -Sam
>
> On Thu, Aug 20, 2015 at 3:58 PM, Voloshanenko Igor
>  wrote:
>> We used 4.x branch, as we have "very good" Samsung 850 pro in production,
>> and they don;t support ncq_trim...
>>
>> And 4,x first branch which include exceptions for this in libsata.c.
>>
>> sure we can backport this 1 line to 3.x branch, but we prefer no to go
>> deeper if packege for new kernel exist.
>>
>> 2015-08-21 1:56 GMT+03:00 Voloshanenko Igor :
>>>
>>> root@test:~# uname -a
>>> Linux ix-s5 4.0.4-040004-generic #201505171336 SMP Sun May 17 17:37:22 UTC
>>> 2015 x86_64 x86_64 x86_64 GNU/Linux
>>>
>>> 2015-08-21 1:54 GMT+03:00 Samuel Just :

 Also, can you include the kernel version?
 -Sam

 On Thu, Aug 20, 2015 at 3:51 PM, Samuel Just  wrote:
 > Snapshotting with cache/tiering *is* supposed to work.  Can you open a
 > bug?
 > -Sam
 >
 > On Thu, Aug 20, 2015 at 3:36 PM, Andrija Panic
 >  wrote:
 >> This was related to the caching layer, which doesnt support
 >> snapshooting per
 >> docs...for sake of closing the thread.
 >>
 >> On 17 August 2015 at 21:15, Voloshanenko Igor
 >> 
 >> wrote:
 >>>
 >>> Hi all, can you please help me with unexplained situation...
 >>>
 >>> All snapshot inside ceph broken...
 >>>
 >>> So, as example, we have VM template, as rbd inside ceph.
 >>> We can map it and mount to check that all ok with it
 >>>
 >>> root@test:~# rbd map
 >>> cold-storage/0e23c701-401d-4465-b9b4-c02939d57bb5
 >>> /dev/rbd0
 >>> root@test:~# parted /dev/rbd0 print
 >>> Model: Unknown (unknown)
 >>> Disk /dev/rbd0: 10.7GB
 >>> Sector size (logical/physical): 512B/512B
 >>> Partition Table: msdos
 >>>
 >>> Number  Start   End SizeType File system  Flags
 >>>  1  1049kB  525MB   524MB   primary  ext4 boot
 >>>  2  525MB   10.7GB  10.2GB  primary   lvm
 >>>
 >>> Than i want to create snap, so i do:
 >>> root@test:~# rbd snap create
 >>> cold-storage/0e23c701-401d-4465-b9b4-c02939d57bb5@new_snap
 >>>
 >>> And now i want to map it:
 >>>
 >>> root@test:~# rbd map
 >>> cold-storage/0e23c701-401d-4465-b9b4-c02939d57bb5@new_snap
 >>> /dev/rbd1
 >>> root@test:~# parted /dev/rbd1 print
 >>> Warning: Unable to open /dev/rbd1 read-write (Read-only file system).
 >>> /dev/rbd1 has been opened read-only.
 >>> Warning: Unable to open /dev/rbd1 read-write (Read-only file system).
 >>> /dev/rbd1 has been opened read-only.
 >>> Error: /dev/rbd1: unrecognised disk label
 >>>
 >>> Even md5 different...
 >>> root@ix-s2:~# md5sum /dev/rbd0
 >>> 9a47797a07fee3a3d71316e22891d752  /dev/rbd0
 >>> root@ix-s2:~# md5sum /dev/rbd1
 >>> e450f50b9ffa0073fae940ee858a43ce  /dev/rbd1
 >>>
 >>>
 >>> Ok, now i protect snap and create clone... but same thing...
 >>> md5 for clone same as for snap,,
 >>>
 >>> root@test:~# rbd unmap /dev/rbd1
 >>> root@test:~# rbd snap protect
 >>> cold-storage/0e23c701-401d-4465-b9b4-c02939d57bb5@new_snap
 >>> root@test:~# rbd clone
 >>> cold-storage/0e23c701-401d-4465-b9b4-c02939d57bb5@new_snap
 >>> cold-storage/test-image
 >>> root@test:~# rbd map cold-storage/test-image
 >>> /dev/rbd1
 >>> root@test:~# md5sum /dev/rbd1
 >>> e450f50b9ffa0073fae940ee858a43ce  /dev/rbd1
 >>>
 >>>  but it's broken...
 >>> root@test:~# parted /dev/rbd1 print
 >>> Error: /dev/rbd1: unrecognised disk label
 >>>
 >>>
 >>> =
 >>>
 >>> tech details:
 >>>
 >>> root@test:~# ceph -v
 >>> ceph version 0.94.2 (5fb85614ca8f354284c713a2f9c610860720bbf3)
 >>>
 >>> We have 2 inconstistent pgs, but all images not placed on this pgs...
 >>>
 >>> root@test:~# ceph health detail
 >>> HEALTH_ERR 2 pgs inconsistent; 18 scrub errors
 >>> pg 2.490 is active+clean+inconsistent, acting [56,15,29]
 >>> pg 2.c4 is active+clean+inconsistent, acting [56,10,42]
 >>> 18 scrub errors
 >>>
 >>> 
 >>>
 >>> root@test:~# ceph osd map cold-storage
 >>> 0e23c701-401d-4465-b9b4-c02939d57bb5
 >>> osdmap e16770 pool 'cold-storage' (2) object
 >>> '0e23c701-401d-4465-b9b4-c02939d57bb5' -> pg 2.74458f70 (2.770) -> up
 >>> ([37,15,14], p37) acting ([37,15,14], p37)
 >>> root@test:~# ceph osd map cold-storage
 >>> 0e23c701-401

Re: [ceph-users] Broken snapshots... CEPH 0.94.2

2015-08-20 Thread Samuel Just
What's supposed to happen is that the client transparently directs all
requests to the cache pool rather than the cold pool when there is a
cache pool.  If the kernel is sending requests to the cold pool,
that's probably where the bug is.  Odd.  It could also be a bug
specific to 'forward' mode, either in the client or on the osd.  Why did
you have it in that mode?
-Sam
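For reference, the reproducer quoted below condenses to the following sketch, assuming the image is idle between the snapshot and the comparison (pool and image names are the ones used earlier in the thread):

```shell
#!/bin/sh
# Reproducer sketch: snapshot an idle image, map both the parent and the
# snapshot, and compare checksums. With no writes in between, the digests
# should be identical; a mismatch is the symptom discussed in this thread.
IMG=cold-storage/0e23c701-401d-4465-b9b4-c02939d57bb5

rbd snap create "$IMG"@new_snap
PARENT_DEV=$(rbd map "$IMG")            # e.g. /dev/rbd0
SNAP_DEV=$(rbd map "$IMG"@new_snap)     # e.g. /dev/rbd1

md5sum "$PARENT_DEV" "$SNAP_DEV"

# Clean up.
rbd unmap "$SNAP_DEV"
rbd unmap "$PARENT_DEV"
rbd snap rm "$IMG"@new_snap
```

This needs a node with the rbd kernel client and cluster credentials; as Sam notes, don't run it against a cluster you care about.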

On Thu, Aug 20, 2015 at 3:58 PM, Voloshanenko Igor
 wrote:
> We used the 4.x branch, as we have "very good" Samsung 850 Pro in
> production, and they don't support ncq_trim...
>
> And 4.x is the first branch which includes exceptions for this in libsata.c.
>
> Sure, we could backport this 1 line to the 3.x branch, but we prefer not to
> go deeper if a package for the new kernel exists.
>
> 2015-08-21 1:56 GMT+03:00 Voloshanenko Igor :
>>
>> root@test:~# uname -a
>> Linux ix-s5 4.0.4-040004-generic #201505171336 SMP Sun May 17 17:37:22 UTC
>> 2015 x86_64 x86_64 x86_64 GNU/Linux
>>
>> 2015-08-21 1:54 GMT+03:00 Samuel Just :
>>>
>>> Also, can you include the kernel version?
>>> -Sam
>>>
>>> On Thu, Aug 20, 2015 at 3:51 PM, Samuel Just  wrote:
>>> > Snapshotting with cache/tiering *is* supposed to work.  Can you open a
>>> > bug?
>>> > -Sam
>>> >
>>> > On Thu, Aug 20, 2015 at 3:36 PM, Andrija Panic
>>> >  wrote:
>>> >> This was related to the caching layer, which doesnt support
>>> >> snapshooting per
>>> >> docs...for sake of closing the thread.
>>> >>
>>> >> On 17 August 2015 at 21:15, Voloshanenko Igor
>>> >> 
>>> >> wrote:
>>> >>>
>>> >>> Hi all, can you please help me with unexplained situation...
>>> >>>
>>> >>> All snapshot inside ceph broken...
>>> >>>
>>> >>> So, as example, we have VM template, as rbd inside ceph.
>>> >>> We can map it and mount to check that all ok with it
>>> >>>
>>> >>> root@test:~# rbd map
>>> >>> cold-storage/0e23c701-401d-4465-b9b4-c02939d57bb5
>>> >>> /dev/rbd0
>>> >>> root@test:~# parted /dev/rbd0 print
>>> >>> Model: Unknown (unknown)
>>> >>> Disk /dev/rbd0: 10.7GB
>>> >>> Sector size (logical/physical): 512B/512B
>>> >>> Partition Table: msdos
>>> >>>
>>> >>> Number  Start   End SizeType File system  Flags
>>> >>>  1  1049kB  525MB   524MB   primary  ext4 boot
>>> >>>  2  525MB   10.7GB  10.2GB  primary   lvm
>>> >>>
>>> >>> Than i want to create snap, so i do:
>>> >>> root@test:~# rbd snap create
>>> >>> cold-storage/0e23c701-401d-4465-b9b4-c02939d57bb5@new_snap
>>> >>>
>>> >>> And now i want to map it:
>>> >>>
>>> >>> root@test:~# rbd map
>>> >>> cold-storage/0e23c701-401d-4465-b9b4-c02939d57bb5@new_snap
>>> >>> /dev/rbd1
>>> >>> root@test:~# parted /dev/rbd1 print
>>> >>> Warning: Unable to open /dev/rbd1 read-write (Read-only file system).
>>> >>> /dev/rbd1 has been opened read-only.
>>> >>> Warning: Unable to open /dev/rbd1 read-write (Read-only file system).
>>> >>> /dev/rbd1 has been opened read-only.
>>> >>> Error: /dev/rbd1: unrecognised disk label
>>> >>>
>>> >>> Even md5 different...
>>> >>> root@ix-s2:~# md5sum /dev/rbd0
>>> >>> 9a47797a07fee3a3d71316e22891d752  /dev/rbd0
>>> >>> root@ix-s2:~# md5sum /dev/rbd1
>>> >>> e450f50b9ffa0073fae940ee858a43ce  /dev/rbd1
>>> >>>
>>> >>>
>>> >>> Ok, now i protect snap and create clone... but same thing...
>>> >>> md5 for clone same as for snap,,
>>> >>>
>>> >>> root@test:~# rbd unmap /dev/rbd1
>>> >>> root@test:~# rbd snap protect
>>> >>> cold-storage/0e23c701-401d-4465-b9b4-c02939d57bb5@new_snap
>>> >>> root@test:~# rbd clone
>>> >>> cold-storage/0e23c701-401d-4465-b9b4-c02939d57bb5@new_snap
>>> >>> cold-storage/test-image
>>> >>> root@test:~# rbd map cold-storage/test-image
>>> >>> /dev/rbd1
>>> >>> root@test:~# md5sum /dev/rbd1
>>> >>> e450f50b9ffa0073fae940ee858a43ce  /dev/rbd1
>>> >>>
>>> >>>  but it's broken...
>>> >>> root@test:~# parted /dev/rbd1 print
>>> >>> Error: /dev/rbd1: unrecognised disk label
>>> >>>
>>> >>>
>>> >>> =
>>> >>>
>>> >>> tech details:
>>> >>>
>>> >>> root@test:~# ceph -v
>>> >>> ceph version 0.94.2 (5fb85614ca8f354284c713a2f9c610860720bbf3)
>>> >>>
>>> >>> We have 2 inconstistent pgs, but all images not placed on this pgs...
>>> >>>
>>> >>> root@test:~# ceph health detail
>>> >>> HEALTH_ERR 2 pgs inconsistent; 18 scrub errors
>>> >>> pg 2.490 is active+clean+inconsistent, acting [56,15,29]
>>> >>> pg 2.c4 is active+clean+inconsistent, acting [56,10,42]
>>> >>> 18 scrub errors
>>> >>>
>>> >>> 
>>> >>>
>>> >>> root@test:~# ceph osd map cold-storage
>>> >>> 0e23c701-401d-4465-b9b4-c02939d57bb5
>>> >>> osdmap e16770 pool 'cold-storage' (2) object
>>> >>> '0e23c701-401d-4465-b9b4-c02939d57bb5' -> pg 2.74458f70 (2.770) -> up
>>> >>> ([37,15,14], p37) acting ([37,15,14], p37)
>>> >>> root@test:~# ceph osd map cold-storage
>>> >>> 0e23c701-401d-4465-b9b4-c02939d57bb5@snap
>>> >>> osdmap e16770 pool 'cold-storage' (2) object
>>> >>> '0e23c701-401d-4465-b9b4-c02939d57bb5@snap' -> pg 2.793cd4a3 (2.4a3)
>>> >>> -> up
>>> >>> ([12,23,17], p12) acting ([12,23,17], p12)
>>> >>> root@test:~# ceph osd map cold-

Re: [ceph-users] Broken snapshots... CEPH 0.94.2

2015-08-20 Thread Voloshanenko Igor
I already killed the cache layer, but will try to reproduce it in the lab.

2015-08-21 1:58 GMT+03:00 Samuel Just :

> Hmm, that might actually be client side.  Can you attempt to reproduce
> with rbd-fuse (different client side implementation from the kernel)?
> -Sam
>
> On Thu, Aug 20, 2015 at 3:56 PM, Voloshanenko Igor
>  wrote:
> > root@test:~# uname -a
> > Linux ix-s5 4.0.4-040004-generic #201505171336 SMP Sun May 17 17:37:22
> UTC
> > 2015 x86_64 x86_64 x86_64 GNU/Linux
> >
> > 2015-08-21 1:54 GMT+03:00 Samuel Just :
> >>
> >> Also, can you include the kernel version?
> >> -Sam
> >>
> >> On Thu, Aug 20, 2015 at 3:51 PM, Samuel Just  wrote:
> >> > Snapshotting with cache/tiering *is* supposed to work.  Can you open a
> >> > bug?
> >> > -Sam
> >> >
> >> > On Thu, Aug 20, 2015 at 3:36 PM, Andrija Panic <
> andrija.pa...@gmail.com>
> >> > wrote:
> >> >> This was related to the caching layer, which doesnt support
> >> >> snapshooting per
> >> >> docs...for sake of closing the thread.
> >> >>

Re: [ceph-users] Broken snapshots... CEPH 0.94.2

2015-08-20 Thread Voloshanenko Igor
We used the 4.x branch, as we have "very good" Samsung 850 Pro drives in
production, and they don't support queued (NCQ) TRIM...

And 4.x is the first branch that includes blacklist exceptions for this in
libata (drivers/ata/libata-core.c).

Sure, we could backport this one line to the 3.x branch, but we prefer not
to go deeper when packages for a new kernel exist.
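For what it's worth, both the symptom and the blacklist entry can be checked before deciding on a kernel. A rough sketch; the kernel source path is an assumption, and the exact blacklist match string varies by kernel version:

```shell
# On an affected kernel, broken queued TRIM on these drives shows up in
# the kernel log as failed "SEND FPDMA QUEUED" commands:
dmesg | grep -i 'SEND FPDMA QUEUED'

# The blacklist flag (ATA_HORKAGE_NO_NCQ_TRIM) lives in
# drivers/ata/libata-core.c; check whether a given source tree carries
# entries for it (path /usr/src/linux is an assumption):
grep -n 'NO_NCQ_TRIM' /usr/src/linux/drivers/ata/libata-core.c
```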

2015-08-21 1:56 GMT+03:00 Voloshanenko Igor :

> root@test:~# uname -a
> Linux ix-s5 4.0.4-040004-generic #201505171336 SMP Sun May 17 17:37:22 UTC
> 2015 x86_64 x86_64 x86_64 GNU/Linux
>
> 2015-08-21 1:54 GMT+03:00 Samuel Just :
>
>> Also, can you include the kernel version?
>> -Sam
>>
>> On Thu, Aug 20, 2015 at 3:51 PM, Samuel Just  wrote:
>> > Snapshotting with cache/tiering *is* supposed to work.  Can you open a
>> bug?
>> > -Sam
>> >
>> > On Thu, Aug 20, 2015 at 3:36 PM, Andrija Panic 
>> wrote:
>> >> This was related to the caching layer, which doesnt support
>> snapshooting per
>> >> docs...for sake of closing the thread.
>> >>

Re: [ceph-users] Broken snapshots... CEPH 0.94.2

2015-08-20 Thread Samuel Just
Hmm, that might actually be client-side.  Can you attempt to reproduce
with rbd-fuse (a different client-side implementation from the kernel one)?
-Sam
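A sketch of such a reproduction attempt, using the pool/image names from this thread and assuming rbd-fuse is installed. rbd-fuse exposes each image in a pool as a regular file under the mountpoint, going through librbd rather than the kernel rbd driver:

```shell
# Assumed pool/image names from the thread.
mkdir -p /mnt/rbd-fuse
rbd-fuse -p cold-storage /mnt/rbd-fuse

# Checksum the image through librbd and compare against the checksum
# taken via the kernel client (/dev/rbd0 above):
md5sum /mnt/rbd-fuse/0e23c701-401d-4465-b9b4-c02939d57bb5

fusermount -u /mnt/rbd-fuse
```

Note that rbd-fuse lists images, not snapshots, so for the snapshot itself one would first clone it to a scratch image (as done above with test-image) and checksum the clone.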

On Thu, Aug 20, 2015 at 3:56 PM, Voloshanenko Igor
 wrote:
> root@test:~# uname -a
> Linux ix-s5 4.0.4-040004-generic #201505171336 SMP Sun May 17 17:37:22 UTC
> 2015 x86_64 x86_64 x86_64 GNU/Linux
>
> 2015-08-21 1:54 GMT+03:00 Samuel Just :
>>
>> Also, can you include the kernel version?
>> -Sam
>>
>> On Thu, Aug 20, 2015 at 3:51 PM, Samuel Just  wrote:
>> > Snapshotting with cache/tiering *is* supposed to work.  Can you open a
>> > bug?
>> > -Sam
>> >
>> > On Thu, Aug 20, 2015 at 3:36 PM, Andrija Panic 
>> > wrote:
>> >> This was related to the caching layer, which doesnt support
>> >> snapshooting per
>> >> docs...for sake of closing the thread.
>> >>

Re: [ceph-users] Broken snapshots... CEPH 0.94.2

2015-08-20 Thread Voloshanenko Igor
root@test:~# uname -a
Linux ix-s5 4.0.4-040004-generic #201505171336 SMP Sun May 17 17:37:22 UTC
2015 x86_64 x86_64 x86_64 GNU/Linux

2015-08-21 1:54 GMT+03:00 Samuel Just :

> Also, can you include the kernel version?
> -Sam
>
> On Thu, Aug 20, 2015 at 3:51 PM, Samuel Just  wrote:
> > Snapshotting with cache/tiering *is* supposed to work.  Can you open a
> bug?
> > -Sam
> >
> > On Thu, Aug 20, 2015 at 3:36 PM, Andrija Panic 
> wrote:
> >> This was related to the caching layer, which doesnt support
> snapshooting per
> >> docs...for sake of closing the thread.
> >>

Re: [ceph-users] Broken snapshots... CEPH 0.94.2

2015-08-20 Thread Samuel Just
Also, can you include the kernel version?
-Sam

On Thu, Aug 20, 2015 at 3:51 PM, Samuel Just  wrote:
> Snapshotting with cache/tiering *is* supposed to work.  Can you open a bug?
> -Sam
>
> On Thu, Aug 20, 2015 at 3:36 PM, Andrija Panic  
> wrote:
>> This was related to the caching layer, which doesnt support snapshooting per
>> docs...for sake of closing the thread.
>>
___
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com


Re: [ceph-users] Broken snapshots... CEPH 0.94.2

2015-08-20 Thread Voloshanenko Igor
Yes, will do.

What we see: when the cache tier is in forward mode and I do an
rbd snap create, it uses the rbd_header not from the cold tier but from the
hot tier, and these two headers are not synced.
The header also can't be evicted from hot-storage, as it's locked by KVM
(QEMU). If I kill the lock and evict the header, everything starts to work...
But it's unacceptable for production to kill a lock under a running VM (((
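The manual workaround described above can be sketched roughly as follows. Names are the ones from this thread, the lock id and locker have to be read from the `rbd lock list` output, and removing the lock under a running VM is exactly the unsafe part:

```shell
IMG=cold-storage/0e23c701-401d-4465-b9b4-c02939d57bb5   # image from the thread

# See which client (the running QEMU) holds the lock on the image:
rbd lock list "$IMG"
# Unsafe while the VM is running -- id and locker come from the output above:
# rbd lock remove "$IMG" <lock-id> <locker>

# Derive the header object name (format-2 headers are rbd_header.<image-id>;
# the id is the suffix of block_name_prefix in `rbd info`):
ID=$(rbd info "$IMG" | awk -F'rbd_data.' '/block_name_prefix/ {print $2}')

# Flush and evict the stale header copy from the hot tier by hand:
rados -p hot-storage cache-flush "rbd_header.$ID"
rados -p hot-storage cache-evict "rbd_header.$ID"
```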

2015-08-21 1:51 GMT+03:00 Samuel Just :

> Snapshotting with cache/tiering *is* supposed to work.  Can you open a bug?
> -Sam
>
> On Thu, Aug 20, 2015 at 3:36 PM, Andrija Panic 
> wrote:
> > This was related to the caching layer, which doesnt support snapshooting
> per
> > docs...for sake of closing the thread.
> >

Re: [ceph-users] Broken snapshots... CEPH 0.94.2

2015-08-20 Thread Samuel Just
Snapshotting with cache/tiering *is* supposed to work.  Can you open a bug?
-Sam

On Thu, Aug 20, 2015 at 3:36 PM, Andrija Panic  wrote:
> This was related to the caching layer, which doesn't support snapshotting
> per the docs... posting for the sake of closing the thread.
>
> On 17 August 2015 at 21:15, Voloshanenko Igor 
> wrote:
>>
>> Hi all, can you please help me with unexplained situation...
>>
>> All snapshot inside ceph broken...
>>
>> So, as example, we have VM template, as rbd inside ceph.
>> We can map it and mount to check that all ok with it
>>
>> root@test:~# rbd map cold-storage/0e23c701-401d-4465-b9b4-c02939d57bb5
>> /dev/rbd0
>> root@test:~# parted /dev/rbd0 print
>> Model: Unknown (unknown)
>> Disk /dev/rbd0: 10.7GB
>> Sector size (logical/physical): 512B/512B
>> Partition Table: msdos
>>
>> Number  Start   End SizeType File system  Flags
>>  1  1049kB  525MB   524MB   primary  ext4 boot
>>  2  525MB   10.7GB  10.2GB  primary   lvm
>>
>> Than i want to create snap, so i do:
>> root@test:~# rbd snap create
>> cold-storage/0e23c701-401d-4465-b9b4-c02939d57bb5@new_snap
>>
>> And now i want to map it:
>>
>> root@test:~# rbd map
>> cold-storage/0e23c701-401d-4465-b9b4-c02939d57bb5@new_snap
>> /dev/rbd1
>> root@test:~# parted /dev/rbd1 print
>> Warning: Unable to open /dev/rbd1 read-write (Read-only file system).
>> /dev/rbd1 has been opened read-only.
>> Warning: Unable to open /dev/rbd1 read-write (Read-only file system).
>> /dev/rbd1 has been opened read-only.
>> Error: /dev/rbd1: unrecognised disk label
>>
>> Even md5 different...
>> root@ix-s2:~# md5sum /dev/rbd0
>> 9a47797a07fee3a3d71316e22891d752  /dev/rbd0
>> root@ix-s2:~# md5sum /dev/rbd1
>> e450f50b9ffa0073fae940ee858a43ce  /dev/rbd1
>>
>>
>> Ok, now i protect snap and create clone... but same thing...
>> md5 for clone same as for snap,,
>>
>> root@test:~# rbd unmap /dev/rbd1
>> root@test:~# rbd snap protect
>> cold-storage/0e23c701-401d-4465-b9b4-c02939d57bb5@new_snap
>> root@test:~# rbd clone
>> cold-storage/0e23c701-401d-4465-b9b4-c02939d57bb5@new_snap
>> cold-storage/test-image
>> root@test:~# rbd map cold-storage/test-image
>> /dev/rbd1
>> root@test:~# md5sum /dev/rbd1
>> e450f50b9ffa0073fae940ee858a43ce  /dev/rbd1
>>
>>  but it's broken...
>> root@test:~# parted /dev/rbd1 print
>> Error: /dev/rbd1: unrecognised disk label
>>
>>
>> =
>>
>> tech details:
>>
>> root@test:~# ceph -v
>> ceph version 0.94.2 (5fb85614ca8f354284c713a2f9c610860720bbf3)
>>
>> We have 2 inconstistent pgs, but all images not placed on this pgs...
>>
>> root@test:~# ceph health detail
>> HEALTH_ERR 2 pgs inconsistent; 18 scrub errors
>> pg 2.490 is active+clean+inconsistent, acting [56,15,29]
>> pg 2.c4 is active+clean+inconsistent, acting [56,10,42]
>> 18 scrub errors
>>
>> 
>>
>> root@test:~# ceph osd map cold-storage
>> 0e23c701-401d-4465-b9b4-c02939d57bb5
>> osdmap e16770 pool 'cold-storage' (2) object
>> '0e23c701-401d-4465-b9b4-c02939d57bb5' -> pg 2.74458f70 (2.770) -> up
>> ([37,15,14], p37) acting ([37,15,14], p37)
>> root@test:~# ceph osd map cold-storage
>> 0e23c701-401d-4465-b9b4-c02939d57bb5@snap
>> osdmap e16770 pool 'cold-storage' (2) object
>> '0e23c701-401d-4465-b9b4-c02939d57bb5@snap' -> pg 2.793cd4a3 (2.4a3) -> up
>> ([12,23,17], p12) acting ([12,23,17], p12)
>> root@test:~# ceph osd map cold-storage
>> 0e23c701-401d-4465-b9b4-c02939d57bb5@test-image
>> osdmap e16770 pool 'cold-storage' (2) object
>> '0e23c701-401d-4465-b9b4-c02939d57bb5@test-image' -> pg 2.9519c2a9 (2.2a9)
>> -> up ([12,44,23], p12) acting ([12,44,23], p12)
>>
>>
>> Also we use cache layer, which in current moment - in forward mode...
>>
>> Can you please help me with this.. As my brain stop to understand what is
>> going on...
>>
>> Thank in advance!
>>
>>
>>
>>
>>
>> ___
>> ceph-users mailing list
>> ceph-users@lists.ceph.com
>> http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com
>>
>
>
>
> --
>
> Andrija Panić
>
> ___
> ceph-users mailing list
> ceph-users@lists.ceph.com
> http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com
>


Re: [ceph-users] Broken snapshots... CEPH 0.94.2

2015-08-20 Thread Andrija Panic
This was related to the caching layer, which doesn't support snapshotting
per the docs... posting for the sake of closing the thread.

On 17 August 2015 at 21:15, Voloshanenko Igor 
wrote:

> [original message quoted in full; trimmed]


-- 

Andrija Panić


[ceph-users] Broken snapshots... CEPH 0.94.2

2015-08-17 Thread Voloshanenko Igor
Hi all, can you please help me with an unexplained situation...

All snapshots inside Ceph are broken...

So, as an example, we have a VM template stored as an RBD image inside Ceph.
We can map and mount it to check that everything is OK with it:

root@test:~# rbd map cold-storage/0e23c701-401d-4465-b9b4-c02939d57bb5
/dev/rbd0
root@test:~# parted /dev/rbd0 print
Model: Unknown (unknown)
Disk /dev/rbd0: 10.7GB
Sector size (logical/physical): 512B/512B
Partition Table: msdos

Number  Start   End SizeType File system  Flags
 1  1049kB  525MB   524MB   primary  ext4 boot
 2  525MB   10.7GB  10.2GB  primary   lvm

Then I want to create a snap, so I do:
root@test:~# rbd snap create
cold-storage/0e23c701-401d-4465-b9b4-c02939d57bb5@new_snap

And now I want to map it:

root@test:~# rbd map
cold-storage/0e23c701-401d-4465-b9b4-c02939d57bb5@new_snap
/dev/rbd1
root@test:~# parted /dev/rbd1 print
Warning: Unable to open /dev/rbd1 read-write (Read-only file system).
 /dev/rbd1 has been opened read-only.
Warning: Unable to open /dev/rbd1 read-write (Read-only file system).
 /dev/rbd1 has been opened read-only.
Error: /dev/rbd1: unrecognised disk label

Even the md5 sums are different...
root@ix-s2:~# md5sum /dev/rbd0
9a47797a07fee3a3d71316e22891d752  /dev/rbd0
root@ix-s2:~# md5sum /dev/rbd1
e450f50b9ffa0073fae940ee858a43ce  /dev/rbd1
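For repeated checks, the comparison above can be wrapped in a small POSIX shell helper (a sketch only; `compare_md5` is a made-up name, and the arguments would be the mapped devices, e.g. `/dev/rbd0` and `/dev/rbd1`):

```shell
#!/bin/sh
# compare_md5 FILE_A FILE_B
# Prints "match" when both block devices (or files) hash identically,
# otherwise prints a MISMATCH line with both digests.
compare_md5() {
    a=$(md5sum "$1" | awk '{print $1}')
    b=$(md5sum "$2" | awk '{print $1}')
    if [ "$a" = "$b" ]; then
        echo "match"
    else
        echo "MISMATCH: $1=$a $2=$b"
    fi
}
```

A snapshot taken while nothing is writing to the image should report `match` against its parent; here it clearly does not.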


OK, now I protect the snap and create a clone... but same thing:
the md5 for the clone is the same as for the snap.

root@test:~# rbd unmap /dev/rbd1
root@test:~# rbd snap protect
cold-storage/0e23c701-401d-4465-b9b4-c02939d57bb5@new_snap
root@test:~# rbd clone
cold-storage/0e23c701-401d-4465-b9b4-c02939d57bb5@new_snap
cold-storage/test-image
root@test:~# rbd map cold-storage/test-image
/dev/rbd1
root@test:~# md5sum /dev/rbd1
e450f50b9ffa0073fae940ee858a43ce  /dev/rbd1

...but it's broken:
root@test:~# parted /dev/rbd1 print
Error: /dev/rbd1: unrecognised disk label


=

tech details:

root@test:~# ceph -v
ceph version 0.94.2 (5fb85614ca8f354284c713a2f9c610860720bbf3)

We have 2 inconsistent PGs, but none of the images are placed on these PGs...

root@test:~# ceph health detail
HEALTH_ERR 2 pgs inconsistent; 18 scrub errors
pg 2.490 is active+clean+inconsistent, acting [56,15,29]
pg 2.c4 is active+clean+inconsistent, acting [56,10,42]
18 scrub errors



root@test:~# ceph osd map cold-storage 0e23c701-401d-4465-b9b4-c02939d57bb5
osdmap e16770 pool 'cold-storage' (2) object
'0e23c701-401d-4465-b9b4-c02939d57bb5' -> pg 2.74458f70 (2.770) -> up
([37,15,14], p37) acting ([37,15,14], p37)
root@test:~# ceph osd map cold-storage
0e23c701-401d-4465-b9b4-c02939d57bb5@snap
osdmap e16770 pool 'cold-storage' (2) object
'0e23c701-401d-4465-b9b4-c02939d57bb5@snap' -> pg 2.793cd4a3 (2.4a3) -> up
([12,23,17], p12) acting ([12,23,17], p12)
root@test:~# ceph osd map cold-storage
0e23c701-401d-4465-b9b4-c02939d57bb5@test-image
osdmap e16770 pool 'cold-storage' (2) object
'0e23c701-401d-4465-b9b4-c02939d57bb5@test-image' -> pg 2.9519c2a9 (2.2a9)
-> up ([12,44,23], p12) acting ([12,44,23], p12)
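When cross-checking the images against the two inconsistent PGs, a tiny filter for the `ceph osd map` output saves some squinting (a sketch; `pg_of` is a made-up helper name, and it assumes the hammer-era output format shown above):

```shell
#!/bin/sh
# pg_of: read `ceph osd map` output on stdin and print just the PG id
# (the token after "-> pg ", e.g. "2.74458f70").
pg_of() {
    sed -n 's/.*-> pg \([0-9a-f.]*\).*/\1/p'
}
```

For example, `ceph osd map cold-storage 0e23c701-401d-4465-b9b4-c02939d57bb5 | pg_of` prints `2.74458f70`; none of the PGs above (2.770, 2.4a3, 2.2a9) are the inconsistent 2.490 or 2.c4, consistent with the images not being on the bad PGs.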


Also, we use a cache layer, which at the moment is in forward mode...

Can you please help me with this? My brain has stopped understanding what is
going on...

Thanks in advance!
___
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com