Re: [ceph-users] ceph 12.2.9 release

2018-11-07 Thread Christoph Adomeit
So my question regarding the latest ceph releases still is:

Where do all these scrub errors come from and do we have to worry about ?


On Thu, Nov 08, 2018 at 12:16:05AM +0800, Ashley Merrick wrote:
> I am seeing this on the latest mimic on my test cluster aswel.
> 
> Every automatic deep-scrub comes back as inconsistent, but doing another
> manual scrub comes back as fine and clear each time.
> 
> Not sure if related or not..
> 
> On Wed, 7 Nov 2018 at 11:57 PM, Christoph Adomeit <
> christoph.adom...@gatworks.de> wrote:
> 
> > Hello together,
> >
> > we have upgraded to 12.2.9 because it was in the official repos.
> >
> > Right after the update and some scrubs we have issues.
> >
> > This morning after regular scrubs we had around 10% of all pgs inconstent:
> >
> > pgs: 4036 active+clean
> >   380  active+clean+inconsistent
> >
> > After repairung these 380 pgs we again have:
> >
> > 1/93611534 objects unfound (0.000%)
> > 28   active+clean+inconsistent
> > 1active+recovery_wait+degraded
> >
> > Now we stopped repairing because it does not seem to solve the problem and
> > more and more error messages are occuring. So far we did not see corruption
> > but we do not feel well with the cluster.
> >
> > What do you suggest, wait for 12.2.10 ? Roll Back to 12.2.8 ?
> >
> > Is ist dangerous for our Data to leave the cluster running ?
> >
> > I am sure we do not have hardware errors and that these errors came with
> > the update to 12.2.9.
> >
> > Thanks
> >   Christoph
> >
> >
> >
> > On Wed, Nov 07, 2018 at 07:39:59AM -0800, Gregory Farnum wrote:
> > > On Wed, Nov 7, 2018 at 5:58 AM Simon Ironside 
> > > wrote:
> > >
> > > >
> > > >
> > > > On 07/11/2018 10:59, Konstantin Shalygin wrote:
> > > > >> I wonder if there is any release announcement for ceph 12.2.9 that I
> > > > missed.
> > > > >> I just found the new packages on download.ceph.com, is this an
> > official
> > > > >> release?
> > > > >
> > > > > This is because 12.2.9 have a several bugs. You should avoid to use
> > this
> > > > > release and wait for 12.2.10
> > > >
> > > > Argh! What's it doing in the repos then?? I've just upgraded to it!
> > > > What are the bugs? Is there a thread about them?
> > >
> > >
> > > If you’ve already upgraded and have no issues then you won’t have any
> > > trouble going forward — except perhaps on the next upgrade, if you do it
> > > while the cluster is unhealthy.
> > >
> > > I agree that it’s annoying when these issues make it out. We’ve had
> > ongoing
> > > discussions to try and improve the release process so it’s less drawn-out
> > > and to prevent these upgrade issues from making it through testing, but
> > > nobody has resolved it yet. If anybody has experience working with deb
> > > repositories and handling releases, the Ceph upstream could use some
> > > help... ;)
> > > -Greg
> > >
> > >
> > > >
> > > > Simon
> > > > ___
> > > > ceph-users mailing list
> > > > ceph-users@lists.ceph.com
> > > > http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com
> > > >
> >
> > > ___
> > > ceph-users mailing list
> > > ceph-users@lists.ceph.com
> > > http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com
> >
> > ___
> > ceph-users mailing list
> > ceph-users@lists.ceph.com
> > http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com
> >

-- 
Was macht ein Clown im Büro ? Faxen 
Christoph Adomeit
GATWORKS GmbH
Reststrauch 191
41199 Moenchengladbach
Sitz: Moenchengladbach
Amtsgericht Moenchengladbach, HRB 6303
Geschaeftsfuehrer:
Christoph Adomeit, Hans Wilhelm Terstappen

christoph.adom...@gatworks.de Internetloesungen vom Feinsten
Fon. +49 2166 9149-32  Fax. +49 2166 9149-10
___
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com


[ceph-users] Migrate OSD journal to SSD partition

2018-11-07 Thread Dave.Chen
Hi all,

I have been trying to migrate the journal to SSD partition for an while, 
basically I followed the guide here [1],  I have the below configuration 
defined in the ceph.conf

[osd.0]
osd_journal = /dev/disk/by-partlabel/journal-1

And then create the journal in this way,
# ceph-osd -i 0 -mkjournal

After that, I started the osd,  and I saw the service is started successfully 
from the log print out on the console,
08 14:03:35 ceph1 ceph-osd[5111]: starting osd.0 at :/0 osd_data 
/var/lib/ceph/osd/ceph-0 /dev/disk/by-partlabel/journal-1
08 14:03:35 ceph1 ceph-osd[5111]: 2018-11-08 14:03:35.618247 7fe8b54b28c0 -1 
osd.0 766 log_to_monitors {default=true}

But I not sure whether the new journal is effective or not, looks like it is 
still using the old partition (/dev/sdc2) for journal, and new partition which 
is actually "dev/sde1" has no information on the journal,

# ceph-disk list

/dev/sdc :
/dev/sdc2 ceph journal, for /dev/sdc1
/dev/sdc1 ceph data, active, cluster ceph, osd.0, journal /dev/sdc2
/dev/sdd :
/dev/sdd2 ceph journal, for /dev/sdd1
/dev/sdd1 ceph data, active, cluster ceph, osd.1, journal /dev/sdd2
/dev/sde :
/dev/sde1 other, 0fc63daf-8483-4772-8e79-3d69d8477de4
/dev/sdf other, unknown

# ls -l /var/lib/ceph/osd/ceph-0/journal
lrwxrwxrwx 1 ceph ceph 58  21  2018 /var/lib/ceph/osd/ceph-0/journal -> 
/dev/disk/by-partuuid/5b5cd6f6-5de4-44f3-9d33-e8a7f4b59f61

# ls -l /dev/disk/by-partuuid/5b5cd6f6-5de4-44f3-9d33-e8a7f4b59f61
lrwxrwxrwx 1 root root 10 8 13:59 
/dev/disk/by-partuuid/5b5cd6f6-5de4-44f3-9d33-e8a7f4b59f61 -> ../../sdc2


My question is how I know which partition is taking the role of journal? Where 
can I see the new journal partition is linked?

Any comments is highly appreciated!


[1] https://fatmin.com/2015/08/11/ceph-show-osd-to-journal-mapping/


Best Regards,
Dave Chen

___
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com


[ceph-users] troubleshooting ceph rdma performance

2018-11-07 Thread Raju Rangoju
Hello All,

I have been collecting performance numbers on our ceph cluster, and I had 
noticed a very poor throughput on ceph async+rdma when compared with tcp. I was 
wondering what tunings/settings should I do to the cluster that would improve 
the ceph rdma (async+rdma) performance.

Currently, from what we see: Ceph rdma throughput is less than half of the ceph 
tcp throughput (ran fio over iscsi mounted disks).
Our ceph cluster has 8 nodes and configured with two networks, cluster and 
client networks.

Can someone please shed some light.

I'd be glad to provide any further information regarding the setup.

Thanks in Advance,
Raju
___
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com


Re: [ceph-users] Filestore to Bluestore migration question

2018-11-07 Thread Hector Martin
On 11/8/18 4:54 AM, Hayashida, Mami wrote:
> Wow, after all of this, everything went well and I was able to convert
> osd.120-129 from Filestore to Bluestore. 

Glad to hear it works! Make sure you reboot and check that everything
comes back up cleanly.

FWIW, I expect most of the files under /var/lib/ceph/osd/ceph-* to
disappear after a reboot, but this is normal and it should still work. I
think a bunch of stuff gets created when the OSD is created that isn't
strictly necessary to persist, and ceph-volume activate does not
re-create it. On my box this is what I have:

lrwxrwxrwx 1 ceph ceph  50 Oct 28 16:12 block ->
/dev/mapper/5bZDkx-uj0S-T4sU-aUeY-zhok-bD8s-xzcyVi
-rw--- 1 ceph ceph  37 Oct 28 16:12 ceph_fsid
-rw--- 1 ceph ceph  37 Oct 28 16:12 fsid
-rw--- 1 ceph ceph  56 Oct 28 16:12 keyring
-rw--- 1 ceph ceph 106 Oct 28 16:12 lockbox.keyring
-rw--- 1 ceph ceph   6 Oct 28 16:12 ready
-rw--- 1 ceph ceph  10 Oct 28 16:12 type
-rw--- 1 ceph ceph   3 Oct 28 16:12 whoami

(lockbox.keyring is for encryption, which you do not use)

-- 
Hector Martin (hec...@marcansoft.com)
Public Key: https://mrcn.st/pub
___
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com


Re: [ceph-users] ceph 12.2.9 release

2018-11-07 Thread Neha Ojha
For those on 12.2.9 -

If you have successfully upgraded to 12.2.9, there is no reason for
you to downgrade, since the bug appears while upgrading to 12.2.9 -
http://tracker.ceph.com/issues/36686. We suggest you to not upgrade to
12.2.10, which reverts the feature that caused this bug. Also, 12.2.10
does not have much in-store except for the revert. We are working on a
clean upgrade path for this feature and will announce it when it is
ready.

For those who haven't upgraded to 12.2.9 -

Please avoid this release and wait for 12.2.10.

More information here:
https://www.spinics.net/lists/ceph-devel/msg43509.html,
https://www.spinics.net/lists/ceph-users/msg49112.html

Again, sorry about the inconvenience and hope this helps!

Thanks,
Neha

On Wed, Nov 7, 2018 at 2:38 PM, Ricardo J. Barberis
 wrote:
> El Miércoles 07/11/2018 a las 10:58, Simon Ironside escribió:
>> On 07/11/2018 10:59, Konstantin Shalygin wrote:
>> >> I wonder if there is any release announcement for ceph 12.2.9 that I
>> >> missed. I just found the new packages on download.ceph.com, is this an
>> >> official release?
>> >
>> > This is because 12.2.9 have a several bugs. You should avoid to use this
>> > release and wait for 12.2.10
>>
>> Argh! What's it doing in the repos then?? I've just upgraded to it!
>> What are the bugs? Is there a thread about them?
>>
>> Simon
>
> Is it safe to downgrade from 12.2.9 to 12.2.8?
>
> Or should we just wait for 12.2.10?
>
> Thanks,
> --
> Ricardo J. Barberis
> Usuario Linux Nº 250625: http://counter.li.org/
> Usuario LFS Nº 5121: http://www.linuxfromscratch.org/
> Senior SysAdmin / IT Architect - www.DonWeb.com
> ___
> ceph-users mailing list
> ceph-users@lists.ceph.com
> http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com
___
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com


Re: [ceph-users] ceph 12.2.9 release

2018-11-07 Thread Ricardo J. Barberis
El Miércoles 07/11/2018 a las 10:58, Simon Ironside escribió:
> On 07/11/2018 10:59, Konstantin Shalygin wrote:
> >> I wonder if there is any release announcement for ceph 12.2.9 that I
> >> missed. I just found the new packages on download.ceph.com, is this an
> >> official release?
> >
> > This is because 12.2.9 have a several bugs. You should avoid to use this
> > release and wait for 12.2.10
>
> Argh! What's it doing in the repos then?? I've just upgraded to it!
> What are the bugs? Is there a thread about them?
>
> Simon

Is it safe to downgrade from 12.2.9 to 12.2.8?

Or should we just wait for 12.2.10?

Thanks,
-- 
Ricardo J. Barberis
Usuario Linux Nº 250625: http://counter.li.org/
Usuario LFS Nº 5121: http://www.linuxfromscratch.org/
Senior SysAdmin / IT Architect - www.DonWeb.com
___
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com


Re: [ceph-users] ceph 12.2.9 release

2018-11-07 Thread Ricardo J. Barberis
El Miércoles 07/11/2018 a las 11:28, Matthew Vernon escribió:
> On 07/11/2018 14:16, Marc Roos wrote:
> >  
> > 
> > I don't see the problem. I am installing only the ceph updates when 
> > others have done this and are running several weeks without problems. I 
> > have noticed this 12.2.9 availability also, did not see any release 
> > notes, so why install it? Especially with recent issues of other 
> > releases.
> 
> Relevantly, if you want to upgrade to Luminous in many of the obvious
> ways, you'll end up with 12.2.9.
> 
> Regards,
> 
> Matthew

Also relevant: if you use ceph-deploy like I do con CentOS 7, it installs the 
latest version available, so I inadvertedly ended up with 12.2.9 on my last 
four servers.

Thanks,
-- 
Ricardo J. Barberis
Usuario Linux Nº 250625: http://counter.li.org/
Usuario LFS Nº 5121: http://www.linuxfromscratch.org/
Senior SysAdmin / IT Architect - www.DonWeb.com
___
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com


Re: [ceph-users] ceph 12.2.9 release

2018-11-07 Thread Ricardo J. Barberis
El Miércoles 07/11/2018 a las 11:05, Matthew Vernon escribió:
> On 07/11/2018 10:59, Konstantin Shalygin wrote:
> >> I wonder if there is any release announcement for ceph 12.2.9 that I 
> >> missed.
> >> I just found the new packages on download.ceph.com, is this an official
> >> release?
> > 
> > This is because 12.2.9 have a several bugs. You should avoid to use this
> > release and wait for 12.2.10
> 
> It seems that maybe something isn't quite right in the release
> infrastructure, then? The 12.2.8 packages are still available, but e.g.
> debian-luminous's Packages file is pointing to the 12.2.9 (broken) packages.
> 
> Could the Debian/Ubuntu repos only have their releases updated (as
> opposed to what's in the pool) for safe/official releases? It's one
> thing letting people find pre-release things if they go looking, but
> ISTM that arranging that a mis-timed apt-get update/upgrade might
> install known-broken packages is ... unfortunate.
> 
> Regards,
> 
> Matthew

Not only Debian/Ubuntu, CentOS also:

# LANG=C yum list ceph-common
Loaded plugins: fastestmirror, priorities
Loading mirror speeds from cached hostfile
 * elrepo: repos.mia.lax-noc.com
 * epel: epel.gtdinternet.com
8 packages excluded due to repository priority protections
Installed Packages
ceph-common.x86_64  2:12.2.8-0.el7 @Ceph
Available Packages
ceph-common.x86_64  2:12.2.9-0.el7 Ceph


-- 
Ricardo J. Barberis
Usuario Linux Nº 250625: http://counter.li.org/
Usuario LFS Nº 5121: http://www.linuxfromscratch.org/
Senior SysAdmin / IT Architect - www.DonWeb.com
___
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com


Re: [ceph-users] ceph 12.2.9 release

2018-11-07 Thread Ken Dreyer
On Wed, Nov 7, 2018 at 8:57 AM Kevin Olbrich  wrote:
> We solve this problem by hosting two repos. One for staging and QA and one 
> for production.
> Every release gets to staging (for example directly after building a scm tag).
>
> If QA passed, the stage repo is turned into the prod one.
> Using symlinks, it would be possible to switch back if problems occure.
> Example: https://incoming.debian.org/

With the CentOS Storage SIG's cbs.centos.org , we have the ability to
tag builds as "-candidate", "-testing", and "-released". I think that
mechanism could help here, so brave users can run "testing" early
before it goes out to the entire world in "released".

We would have to build out something like that for Ubuntu, maybe
copying around the binaries.

- Ken
___
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com


Re: [ceph-users] Move rdb based image from one pool to another

2018-11-07 Thread Uwe Sauter




Am 07.11.18 um 21:17 schrieb Alex Gorbachev:

On Wed, Nov 7, 2018 at 2:38 PM Uwe Sauter  wrote:


I've been reading a bit and trying around but it seems I'm not quite where I 
want to be.

I want to migrate from pool "vms" to pool "vdisks".

# ceph osd pool ls
vms
vdisks

# rbd ls vms
vm-101-disk-1
vm-101-disk-2
vm-102-disk-1
vm-102-disk-2

# rbd snap ls vms/vm-102-disk-2
SNAPID NAME SIZE TIMESTAMP
  81 SL6_81 100GiB Thu Aug 23 11:57:05 2018
  92 SL6_82 100GiB Fri Oct 12 13:27:53 2018

# rbd export --export-format 2 vms/vm-102-disk-2 - | rbd import - 
vdisks/vm-102-disk-2
Exporting image: 100% complete...done.
Importing image: 100% complete...done.

# rbd snap ls vdisks/vm-102-disk-2
(no output)

# rbd export-diff --whole-object vms/vm-102-disk-2 - | rbd import-diff - 
vdisks/vm-102-disk-2
Exporting image: 100% complete...done.
Importing image diff: 100% complete...done.

# rbd snap ls vdisks/vm-102-disk-2
(still no output)

It looks like the current content is copied but not the snapshots.

What am I doing wrong? Any help is appreciated.


Hi Uwe,

If these are Proxmox images, would you be able to move them simply
using Proxmox Move Disk in hardware for VM?  I have had good results
with that.


You are correct that this is on Proxmox but the UI prohobits moving Ceph-backed 
disks when the VM has snapshots.

I know how to alter the config files so I'm going the manual route here.

But thanks for the suggestion.






--
Alex Gorbachev
Storcium



Thanks,

 Uwe



Am 07.11.18 um 14:39 schrieb Uwe Sauter:

I'm still on luminous (12.2.8). I'll have a look on the commands. Thanks.

Am 07.11.18 um 14:31 schrieb Jason Dillaman:

With the Mimic release, you can use "rbd deep-copy" to transfer the
images (and associated snapshots) to a new pool. Prior to that, you
could use "rbd export-diff" / "rbd import-diff" to manually transfer
an image and its associated snapshots.
On Wed, Nov 7, 2018 at 7:11 AM Uwe Sauter  wrote:


Hi,

I have several VM images sitting in a Ceph pool which are snapshotted. Is there 
a way to move such images from one pool to another
and perserve the snapshots?

Regards,

  Uwe
___
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com







___
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com

___
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com


Re: [ceph-users] Move rdb based image from one pool to another

2018-11-07 Thread Alex Gorbachev
On Wed, Nov 7, 2018 at 2:38 PM Uwe Sauter  wrote:
>
> I've been reading a bit and trying around but it seems I'm not quite where I 
> want to be.
>
> I want to migrate from pool "vms" to pool "vdisks".
>
> # ceph osd pool ls
> vms
> vdisks
>
> # rbd ls vms
> vm-101-disk-1
> vm-101-disk-2
> vm-102-disk-1
> vm-102-disk-2
>
> # rbd snap ls vms/vm-102-disk-2
> SNAPID NAME SIZE TIMESTAMP
>  81 SL6_81 100GiB Thu Aug 23 11:57:05 2018
>  92 SL6_82 100GiB Fri Oct 12 13:27:53 2018
>
> # rbd export --export-format 2 vms/vm-102-disk-2 - | rbd import - 
> vdisks/vm-102-disk-2
> Exporting image: 100% complete...done.
> Importing image: 100% complete...done.
>
> # rbd snap ls vdisks/vm-102-disk-2
> (no output)
>
> # rbd export-diff --whole-object vms/vm-102-disk-2 - | rbd import-diff - 
> vdisks/vm-102-disk-2
> Exporting image: 100% complete...done.
> Importing image diff: 100% complete...done.
>
> # rbd snap ls vdisks/vm-102-disk-2
> (still no output)
>
> It looks like the current content is copied but not the snapshots.
>
> What am I doing wrong? Any help is appreciated.

Hi Uwe,

If these are Proxmox images, would you be able to move them simply
using Proxmox Move Disk in hardware for VM?  I have had good results
with that.

--
Alex Gorbachev
Storcium

>
> Thanks,
>
> Uwe
>
>
>
> Am 07.11.18 um 14:39 schrieb Uwe Sauter:
> > I'm still on luminous (12.2.8). I'll have a look on the commands. Thanks.
> >
> > Am 07.11.18 um 14:31 schrieb Jason Dillaman:
> >> With the Mimic release, you can use "rbd deep-copy" to transfer the
> >> images (and associated snapshots) to a new pool. Prior to that, you
> >> could use "rbd export-diff" / "rbd import-diff" to manually transfer
> >> an image and its associated snapshots.
> >> On Wed, Nov 7, 2018 at 7:11 AM Uwe Sauter  wrote:
> >>>
> >>> Hi,
> >>>
> >>> I have several VM images sitting in a Ceph pool which are snapshotted. Is 
> >>> there a way to move such images from one pool to another
> >>> and perserve the snapshots?
> >>>
> >>> Regards,
> >>>
> >>>  Uwe
> >>> ___
> >>> ceph-users mailing list
> >>> ceph-users@lists.ceph.com
> >>> http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com
> >>
> >>
> >>
> >
> ___
> ceph-users mailing list
> ceph-users@lists.ceph.com
> http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com
___
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com


Re: [ceph-users] Move rdb based image from one pool to another

2018-11-07 Thread Jason Dillaman
Yes, that's it -- or upgrade your local Ceph client packages (if you
are on luminous).
On Wed, Nov 7, 2018 at 3:02 PM Uwe Sauter  wrote:
>
> I do have an empty disk in that server. Just go the extra step, save the 
> export to a file and import that one?
>
>
>
> Am 07.11.18 um 20:55 schrieb Jason Dillaman:
> > There was a bug in "rbd import" where it disallowed the use of stdin
> > for export-format 2. This has been fixed in v12.2.9 and is in the
> > pending 13.2.3 release.
> > On Wed, Nov 7, 2018 at 2:46 PM Uwe Sauter  wrote:
> >>
> >> I tried that but it fails:
> >>
> >> # rbd export --export-format 2 vms/vm-102-disk-2 - | rbd import 
> >> --export-format 2 - vdisks/vm-102-disk-2
> >> rbd: import header failed.
> >> Importing image: 0% complete...failed.
> >> rbd: import failed: (22) Invalid argument
> >> Exporting image: 0% complete...failed.
> >> rbd: export error: (32) Broken pipe
> >>
> >>
> >> But the version seems to support that option:
> >>
> >> # rbd help import
> >> usage: rbd import [--path ] [--dest-pool ] [--dest ]
> >> [--image-format ] [--new-format]
> >> [--order ] [--object-size ]
> >> [--image-feature ] [--image-shared]
> >> [--stripe-unit ]
> >> [--stripe-count ] [--data-pool 
> >> ]
> >> [--journal-splay-width ]
> >> [--journal-object-size ]
> >> [--journal-pool ]
> >> [--sparse-size ] [--no-progress]
> >> [--export-format ] [--pool ]
> >> [--image ]
> >>  
> >>
> >>
> >>
> >>
> >>
> >> Am 07.11.18 um 20:41 schrieb Jason Dillaman:
> >>> If your CLI supports "--export-format 2", you can just do "rbd export
> >>> --export-format 2 vms/vm-102-disk2 - | rbd import --export-format 2 -
> >>> vdisks/vm-102-disk-2" (you need to specify the data format on import
> >>> otherwise it will assume it's copying a raw image).
> >>> On Wed, Nov 7, 2018 at 2:38 PM Uwe Sauter  wrote:
> 
>  I've been reading a bit and trying around but it seems I'm not quite 
>  where I want to be.
> 
>  I want to migrate from pool "vms" to pool "vdisks".
> 
>  # ceph osd pool ls
>  vms
>  vdisks
> 
>  # rbd ls vms
>  vm-101-disk-1
>  vm-101-disk-2
>  vm-102-disk-1
>  vm-102-disk-2
> 
>  # rbd snap ls vms/vm-102-disk-2
>  SNAPID NAME SIZE TIMESTAMP
> 81 SL6_81 100GiB Thu Aug 23 11:57:05 2018
> 92 SL6_82 100GiB Fri Oct 12 13:27:53 2018
> 
>  # rbd export --export-format 2 vms/vm-102-disk-2 - | rbd import - 
>  vdisks/vm-102-disk-2
>  Exporting image: 100% complete...done.
>  Importing image: 100% complete...done.
> 
>  # rbd snap ls vdisks/vm-102-disk-2
>  (no output)
> 
>  # rbd export-diff --whole-object vms/vm-102-disk-2 - | rbd import-diff - 
>  vdisks/vm-102-disk-2
>  Exporting image: 100% complete...done.
>  Importing image diff: 100% complete...done.
> 
>  # rbd snap ls vdisks/vm-102-disk-2
>  (still no output)
> 
>  It looks like the current content is copied but not the snapshots.
> 
>  What am I doing wrong? Any help is appreciated.
> 
>  Thanks,
> 
>    Uwe
> 
> 
> 
>  Am 07.11.18 um 14:39 schrieb Uwe Sauter:
> > I'm still on luminous (12.2.8). I'll have a look on the commands. 
> > Thanks.
> >
> > Am 07.11.18 um 14:31 schrieb Jason Dillaman:
> >> With the Mimic release, you can use "rbd deep-copy" to transfer the
> >> images (and associated snapshots) to a new pool. Prior to that, you
> >> could use "rbd export-diff" / "rbd import-diff" to manually transfer
> >> an image and its associated snapshots.
> >> On Wed, Nov 7, 2018 at 7:11 AM Uwe Sauter  
> >> wrote:
> >>>
> >>> Hi,
> >>>
> >>> I have several VM images sitting in a Ceph pool which are 
> >>> snapshotted. Is there a way to move such images from one pool to 
> >>> another
> >>> and perserve the snapshots?
> >>>
> >>> Regards,
> >>>
> >>>Uwe
> >>> ___
> >>> ceph-users mailing list
> >>> ceph-users@lists.ceph.com
> >>> http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com
> >>
> >>
> >>
> >
> >>>
> >>>
> >>>
> >
> >
> >



-- 
Jason
___
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com


Re: [ceph-users] Move rdb based image from one pool to another

2018-11-07 Thread Uwe Sauter

I do have an empty disk in that server. Just go the extra step, save the export 
to a file and import that one?



Am 07.11.18 um 20:55 schrieb Jason Dillaman:

There was a bug in "rbd import" where it disallowed the use of stdin
for export-format 2. This has been fixed in v12.2.9 and is in the
pending 13.2.3 release.
On Wed, Nov 7, 2018 at 2:46 PM Uwe Sauter  wrote:


I tried that but it fails:

# rbd export --export-format 2 vms/vm-102-disk-2 - | rbd import --export-format 
2 - vdisks/vm-102-disk-2
rbd: import header failed.
Importing image: 0% complete...failed.
rbd: import failed: (22) Invalid argument
Exporting image: 0% complete...failed.
rbd: export error: (32) Broken pipe


But the version seems to support that option:

# rbd help import
usage: rbd import [--path ] [--dest-pool ] [--dest ]
[--image-format ] [--new-format]
[--order ] [--object-size ]
[--image-feature ] [--image-shared]
[--stripe-unit ]
[--stripe-count ] [--data-pool ]
[--journal-splay-width ]
[--journal-object-size ]
[--journal-pool ]
[--sparse-size ] [--no-progress]
[--export-format ] [--pool ]
[--image ]
 





Am 07.11.18 um 20:41 schrieb Jason Dillaman:

If your CLI supports "--export-format 2", you can just do "rbd export
--export-format 2 vms/vm-102-disk2 - | rbd import --export-format 2 -
vdisks/vm-102-disk-2" (you need to specify the data format on import
otherwise it will assume it's copying a raw image).
On Wed, Nov 7, 2018 at 2:38 PM Uwe Sauter  wrote:


I've been reading a bit and trying around but it seems I'm not quite where I 
want to be.

I want to migrate from pool "vms" to pool "vdisks".

# ceph osd pool ls
vms
vdisks

# rbd ls vms
vm-101-disk-1
vm-101-disk-2
vm-102-disk-1
vm-102-disk-2

# rbd snap ls vms/vm-102-disk-2
SNAPID NAME SIZE TIMESTAMP
   81 SL6_81 100GiB Thu Aug 23 11:57:05 2018
   92 SL6_82 100GiB Fri Oct 12 13:27:53 2018

# rbd export --export-format 2 vms/vm-102-disk-2 - | rbd import - 
vdisks/vm-102-disk-2
Exporting image: 100% complete...done.
Importing image: 100% complete...done.

# rbd snap ls vdisks/vm-102-disk-2
(no output)

# rbd export-diff --whole-object vms/vm-102-disk-2 - | rbd import-diff - 
vdisks/vm-102-disk-2
Exporting image: 100% complete...done.
Importing image diff: 100% complete...done.

# rbd snap ls vdisks/vm-102-disk-2
(still no output)

It looks like the current content is copied but not the snapshots.

What am I doing wrong? Any help is appreciated.

Thanks,

  Uwe



Am 07.11.18 um 14:39 schrieb Uwe Sauter:

I'm still on luminous (12.2.8). I'll have a look on the commands. Thanks.

Am 07.11.18 um 14:31 schrieb Jason Dillaman:

With the Mimic release, you can use "rbd deep-copy" to transfer the
images (and associated snapshots) to a new pool. Prior to that, you
could use "rbd export-diff" / "rbd import-diff" to manually transfer
an image and its associated snapshots.
On Wed, Nov 7, 2018 at 7:11 AM Uwe Sauter  wrote:


Hi,

I have several VM images sitting in a Ceph pool which are snapshotted. Is there 
a way to move such images from one pool to another
and perserve the snapshots?

Regards,

   Uwe
___
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com















___
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com


Re: [ceph-users] Move rdb based image from one pool to another

2018-11-07 Thread Jason Dillaman
There was a bug in "rbd import" where it disallowed the use of stdin
for export-format 2. This has been fixed in v12.2.9 and is in the
pending 13.2.3 release.
On Wed, Nov 7, 2018 at 2:46 PM Uwe Sauter  wrote:
>
> I tried that but it fails:
>
> # rbd export --export-format 2 vms/vm-102-disk-2 - | rbd import 
> --export-format 2 - vdisks/vm-102-disk-2
> rbd: import header failed.
> Importing image: 0% complete...failed.
> rbd: import failed: (22) Invalid argument
> Exporting image: 0% complete...failed.
> rbd: export error: (32) Broken pipe
>
>
> But the version seems to support that option:
>
> # rbd help import
> usage: rbd import [--path ] [--dest-pool ] [--dest ]
>[--image-format ] [--new-format]
>[--order ] [--object-size ]
>[--image-feature ] [--image-shared]
>[--stripe-unit ]
>[--stripe-count ] [--data-pool ]
>[--journal-splay-width ]
>[--journal-object-size ]
>[--journal-pool ]
>[--sparse-size ] [--no-progress]
>[--export-format ] [--pool ]
>[--image ]
> 
>
>
>
>
>
> Am 07.11.18 um 20:41 schrieb Jason Dillaman:
> > If your CLI supports "--export-format 2", you can just do "rbd export
> > --export-format 2 vms/vm-102-disk2 - | rbd import --export-format 2 -
> > vdisks/vm-102-disk-2" (you need to specify the data format on import
> > otherwise it will assume it's copying a raw image).
> > On Wed, Nov 7, 2018 at 2:38 PM Uwe Sauter  wrote:
> >>
> >> I've been reading a bit and trying around but it seems I'm not quite where 
> >> I want to be.
> >>
> >> I want to migrate from pool "vms" to pool "vdisks".
> >>
> >> # ceph osd pool ls
> >> vms
> >> vdisks
> >>
> >> # rbd ls vms
> >> vm-101-disk-1
> >> vm-101-disk-2
> >> vm-102-disk-1
> >> vm-102-disk-2
> >>
> >> # rbd snap ls vms/vm-102-disk-2
> >> SNAPID NAME SIZE TIMESTAMP
> >>   81 SL6_81 100GiB Thu Aug 23 11:57:05 2018
> >>   92 SL6_82 100GiB Fri Oct 12 13:27:53 2018
> >>
> >> # rbd export --export-format 2 vms/vm-102-disk-2 - | rbd import - 
> >> vdisks/vm-102-disk-2
> >> Exporting image: 100% complete...done.
> >> Importing image: 100% complete...done.
> >>
> >> # rbd snap ls vdisks/vm-102-disk-2
> >> (no output)
> >>
> >> # rbd export-diff --whole-object vms/vm-102-disk-2 - | rbd import-diff - 
> >> vdisks/vm-102-disk-2
> >> Exporting image: 100% complete...done.
> >> Importing image diff: 100% complete...done.
> >>
> >> # rbd snap ls vdisks/vm-102-disk-2
> >> (still no output)
> >>
> >> It looks like the current content is copied but not the snapshots.
> >>
> >> What am I doing wrong? Any help is appreciated.
> >>
> >> Thanks,
> >>
> >>  Uwe
> >>
> >>
> >>
> >> Am 07.11.18 um 14:39 schrieb Uwe Sauter:
> >>> I'm still on luminous (12.2.8). I'll have a look on the commands. Thanks.
> >>>
> >>> Am 07.11.18 um 14:31 schrieb Jason Dillaman:
>  With the Mimic release, you can use "rbd deep-copy" to transfer the
>  images (and associated snapshots) to a new pool. Prior to that, you
>  could use "rbd export-diff" / "rbd import-diff" to manually transfer
>  an image and its associated snapshots.
>  On Wed, Nov 7, 2018 at 7:11 AM Uwe Sauter  
>  wrote:
> >
> > Hi,
> >
> > I have several VM images sitting in a Ceph pool which are snapshotted. 
> > Is there a way to move such images from one pool to another
> > and perserve the snapshots?
> >
> > Regards,
> >
> >   Uwe
> > ___
> > ceph-users mailing list
> > ceph-users@lists.ceph.com
> > http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com
> 
> 
> 
> >>>
> >
> >
> >



-- 
Jason
___
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com


Re: [ceph-users] Filestore to Bluestore migration question

2018-11-07 Thread Hayashida, Mami
Wow, after all of this, everything went well and I was able to convert
osd.120-129 from Filestore to Bluestore.

***
root@osd2:~# ls -l /var/lib/ceph/osd/ceph-120
total 48
-rw-r--r-- 1 ceph ceph 384 Nov  7 14:34 activate.monmap
lrwxrwxrwx 1 ceph ceph  19 Nov  7 14:38 block -> /dev/hdd120/data120
lrwxrwxrwx 1 ceph ceph  15 Nov  7 14:38 block.db -> /dev/ssd0/db120
-rw-r--r-- 1 ceph ceph   2 Nov  7 14:34 bluefs
-rw-r--r-- 1 ceph ceph  37 Nov  7 14:38 ceph_fsid
-rw-r--r-- 1 ceph ceph  37 Nov  7 14:38 fsid
-rw--- 1 ceph ceph  57 Nov  7 14:38 keyring
-rw-r--r-- 1 ceph ceph   8 Nov  7 14:34 kv_backend
-rw-r--r-- 1 ceph ceph  21 Nov  7 14:34 magic
-rw-r--r-- 1 ceph ceph   4 Nov  7 14:34 mkfs_done
-rw-r--r-- 1 ceph ceph  41 Nov  7 14:34 osd_key
-rw-r--r-- 1 ceph ceph   6 Nov  7 14:38 ready
-rw-r--r-- 1 ceph ceph  10 Nov  7 14:38 type
-rw-r--r-- 1 ceph ceph   4 Nov  7 14:38 whoami
***

and df -h showing
tmpfs   126G   48K  126G   1% /var/lib/ceph/osd/ceph-120
tmpfs   126G   48K  126G   1% /var/lib/ceph/osd/ceph-121
tmpfs   126G   48K  126G   1% /var/lib/ceph/osd/ceph-122
tmpfs   126G   48K  126G   1% /var/lib/ceph/osd/ceph-123 

**
It seems like wipefs did delete all the remnants of the filestore partition
correctly since I did not have to do any additional clean-up this time. I
basically followed all the steps that I wrote out (with a few minor edits
Hector suggested).   THANK YOU SO MUCH!!!  After I work on the rest of this
node, I will go back to the previous node and see if I can zap it and start
all over again.

On Wed, Nov 7, 2018 at 12:21 PM, Hector Martin 
wrote:

> On 11/8/18 2:15 AM, Hayashida, Mami wrote:
> > Thank you very much.  Yes, I am aware that zapping the SSD and
> > converting it to LVM requires stopping all the FileStore OSDs whose
> > journals are on that SSD first.  I will add in the `hdparm` to my steps.
> > I did run into remnants of gpt information lurking around when trying to
> > re-use osd disks in the past -- so that's probably a good preemptive
> move.
>
> Just for reference, "ceph-volume lvm zap" runs wipefs and also wipes the
> beginning of the device separately. It should get rid of the GPT
> partition table. hdparm -z just tells the kernel to re-read it (which
> should remove any device nodes associated with now-gone partitions).
>
> I just checked the wipefs manpage and it seems it does trigger a
> partition table re-read itself, which would make the hdparm unnecessary.
> It might be useful if you can check that the partition devices (sda1
> etc) exist before the zap command and disappear after it, confirming
> that hdparm is not necessary. And if they still exist, then run hdparm,
> and if they persist after that too, something's wrong and you should
> investigate. GPT partition tables can be notoriously annoying to wipe
> because there is a backup at the end of the device, but wipefs *should*
> know about that as far as I know.
>
> --
> Hector Martin (hec...@marcansoft.com)
> Public Key: https://mrcn.st/pub
>



-- 
*Mami Hayashida*

*Research Computing Associate*
Research Computing Infrastructure
University of Kentucky Information Technology Services
301 Rose Street | 102 James F. Hardymon Building
Lexington, KY 40506-0495
mami.hayash...@uky.edu
(859)323-7521
___
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com


Re: [ceph-users] Move rdb based image from one pool to another

2018-11-07 Thread Uwe Sauter

Looks like I'm hitting this:

http://tracker.ceph.com/issues/34536

Am 07.11.18 um 20:46 schrieb Uwe Sauter:

I tried that but it fails:

# rbd export --export-format 2 vms/vm-102-disk-2 - | rbd import --export-format 
2 - vdisks/vm-102-disk-2
rbd: import header failed.
Importing image: 0% complete...failed.
rbd: import failed: (22) Invalid argument
Exporting image: 0% complete...failed.
rbd: export error: (32) Broken pipe


But the version seems to support that option:

# rbd help import
usage: rbd import [--path ] [--dest-pool ] [--dest ]
   [--image-format ] [--new-format]
   [--order ] [--object-size ]
   [--image-feature ] [--image-shared]
   [--stripe-unit ]
   [--stripe-count ] [--data-pool ]
   [--journal-splay-width ]
   [--journal-object-size ]
   [--journal-pool ]
   [--sparse-size ] [--no-progress]
   [--export-format ] [--pool ]
   [--image ]
    





Am 07.11.18 um 20:41 schrieb Jason Dillaman:

If your CLI supports "--export-format 2", you can just do "rbd export
--export-format 2 vms/vm-102-disk2 - | rbd import --export-format 2 -
vdisks/vm-102-disk-2" (you need to specify the data format on import
otherwise it will assume it's copying a raw image).
On Wed, Nov 7, 2018 at 2:38 PM Uwe Sauter  wrote:


I've been reading a bit and trying around but it seems I'm not quite where I 
want to be.

I want to migrate from pool "vms" to pool "vdisks".

# ceph osd pool ls
vms
vdisks

# rbd ls vms
vm-101-disk-1
vm-101-disk-2
vm-102-disk-1
vm-102-disk-2

# rbd snap ls vms/vm-102-disk-2
SNAPID NAME SIZE TIMESTAMP
  81 SL6_81 100GiB Thu Aug 23 11:57:05 2018
  92 SL6_82 100GiB Fri Oct 12 13:27:53 2018

# rbd export --export-format 2 vms/vm-102-disk-2 - | rbd import - 
vdisks/vm-102-disk-2
Exporting image: 100% complete...done.
Importing image: 100% complete...done.

# rbd snap ls vdisks/vm-102-disk-2
(no output)

# rbd export-diff --whole-object vms/vm-102-disk-2 - | rbd import-diff - 
vdisks/vm-102-disk-2
Exporting image: 100% complete...done.
Importing image diff: 100% complete...done.

# rbd snap ls vdisks/vm-102-disk-2
(still no output)

It looks like the current content is copied but not the snapshots.

What am I doing wrong? Any help is appreciated.

Thanks,

 Uwe



Am 07.11.18 um 14:39 schrieb Uwe Sauter:

I'm still on luminous (12.2.8). I'll have a look on the commands. Thanks.

Am 07.11.18 um 14:31 schrieb Jason Dillaman:

With the Mimic release, you can use "rbd deep-copy" to transfer the
images (and associated snapshots) to a new pool. Prior to that, you
could use "rbd export-diff" / "rbd import-diff" to manually transfer
an image and its associated snapshots.
On Wed, Nov 7, 2018 at 7:11 AM Uwe Sauter  wrote:


Hi,

I have several VM images sitting in a Ceph pool which are snapshotted. Is there a way to move such images from one 
pool to another

and perserve the snapshots?

Regards,

  Uwe
___
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com











___
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com


Re: [ceph-users] Move rdb based image from one pool to another

2018-11-07 Thread Uwe Sauter

I tried that but it fails:

# rbd export --export-format 2 vms/vm-102-disk-2 - | rbd import --export-format 
2 - vdisks/vm-102-disk-2
rbd: import header failed.
Importing image: 0% complete...failed.
rbd: import failed: (22) Invalid argument
Exporting image: 0% complete...failed.
rbd: export error: (32) Broken pipe


But the version seems to support that option:

# rbd help import
usage: rbd import [--path ] [--dest-pool ] [--dest ]
  [--image-format ] [--new-format]
  [--order ] [--object-size ]
  [--image-feature ] [--image-shared]
  [--stripe-unit ]
  [--stripe-count ] [--data-pool ]
  [--journal-splay-width ]
  [--journal-object-size ]
  [--journal-pool ]
  [--sparse-size ] [--no-progress]
  [--export-format ] [--pool ]
  [--image ]
   





Am 07.11.18 um 20:41 schrieb Jason Dillaman:

If your CLI supports "--export-format 2", you can just do "rbd export
--export-format 2 vms/vm-102-disk2 - | rbd import --export-format 2 -
vdisks/vm-102-disk-2" (you need to specify the data format on import
otherwise it will assume it's copying a raw image).
On Wed, Nov 7, 2018 at 2:38 PM Uwe Sauter  wrote:


I've been reading a bit and trying around but it seems I'm not quite where I 
want to be.

I want to migrate from pool "vms" to pool "vdisks".

# ceph osd pool ls
vms
vdisks

# rbd ls vms
vm-101-disk-1
vm-101-disk-2
vm-102-disk-1
vm-102-disk-2

# rbd snap ls vms/vm-102-disk-2
SNAPID NAME SIZE TIMESTAMP
  81 SL6_81 100GiB Thu Aug 23 11:57:05 2018
  92 SL6_82 100GiB Fri Oct 12 13:27:53 2018

# rbd export --export-format 2 vms/vm-102-disk-2 - | rbd import - 
vdisks/vm-102-disk-2
Exporting image: 100% complete...done.
Importing image: 100% complete...done.

# rbd snap ls vdisks/vm-102-disk-2
(no output)

# rbd export-diff --whole-object vms/vm-102-disk-2 - | rbd import-diff - 
vdisks/vm-102-disk-2
Exporting image: 100% complete...done.
Importing image diff: 100% complete...done.

# rbd snap ls vdisks/vm-102-disk-2
(still no output)

It looks like the current content is copied but not the snapshots.

What am I doing wrong? Any help is appreciated.

Thanks,

 Uwe



Am 07.11.18 um 14:39 schrieb Uwe Sauter:

I'm still on luminous (12.2.8). I'll have a look on the commands. Thanks.

Am 07.11.18 um 14:31 schrieb Jason Dillaman:

With the Mimic release, you can use "rbd deep-copy" to transfer the
images (and associated snapshots) to a new pool. Prior to that, you
could use "rbd export-diff" / "rbd import-diff" to manually transfer
an image and its associated snapshots.
On Wed, Nov 7, 2018 at 7:11 AM Uwe Sauter  wrote:


Hi,

I have several VM images sitting in a Ceph pool which are snapshotted. Is there 
a way to move such images from one pool to another
and perserve the snapshots?

Regards,

  Uwe
___
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com











___
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com


Re: [ceph-users] Move rdb based image from one pool to another

2018-11-07 Thread Jason Dillaman
If your CLI supports "--export-format 2", you can just do "rbd export
--export-format 2 vms/vm-102-disk2 - | rbd import --export-format 2 -
vdisks/vm-102-disk-2" (you need to specify the data format on import
otherwise it will assume it's copying a raw image).
On Wed, Nov 7, 2018 at 2:38 PM Uwe Sauter  wrote:
>
> I've been reading a bit and trying around but it seems I'm not quite where I 
> want to be.
>
> I want to migrate from pool "vms" to pool "vdisks".
>
> # ceph osd pool ls
> vms
> vdisks
>
> # rbd ls vms
> vm-101-disk-1
> vm-101-disk-2
> vm-102-disk-1
> vm-102-disk-2
>
> # rbd snap ls vms/vm-102-disk-2
> SNAPID NAME SIZE TIMESTAMP
>  81 SL6_81 100GiB Thu Aug 23 11:57:05 2018
>  92 SL6_82 100GiB Fri Oct 12 13:27:53 2018
>
> # rbd export --export-format 2 vms/vm-102-disk-2 - | rbd import - 
> vdisks/vm-102-disk-2
> Exporting image: 100% complete...done.
> Importing image: 100% complete...done.
>
> # rbd snap ls vdisks/vm-102-disk-2
> (no output)
>
> # rbd export-diff --whole-object vms/vm-102-disk-2 - | rbd import-diff - 
> vdisks/vm-102-disk-2
> Exporting image: 100% complete...done.
> Importing image diff: 100% complete...done.
>
> # rbd snap ls vdisks/vm-102-disk-2
> (still no output)
>
> It looks like the current content is copied but not the snapshots.
>
> What am I doing wrong? Any help is appreciated.
>
> Thanks,
>
> Uwe
>
>
>
> Am 07.11.18 um 14:39 schrieb Uwe Sauter:
> > I'm still on luminous (12.2.8). I'll have a look on the commands. Thanks.
> >
> > Am 07.11.18 um 14:31 schrieb Jason Dillaman:
> >> With the Mimic release, you can use "rbd deep-copy" to transfer the
> >> images (and associated snapshots) to a new pool. Prior to that, you
> >> could use "rbd export-diff" / "rbd import-diff" to manually transfer
> >> an image and its associated snapshots.
> >> On Wed, Nov 7, 2018 at 7:11 AM Uwe Sauter  wrote:
> >>>
> >>> Hi,
> >>>
> >>> I have several VM images sitting in a Ceph pool which are snapshotted. Is 
> >>> there a way to move such images from one pool to another
> >>> and perserve the snapshots?
> >>>
> >>> Regards,
> >>>
> >>>  Uwe
> >>> ___
> >>> ceph-users mailing list
> >>> ceph-users@lists.ceph.com
> >>> http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com
> >>
> >>
> >>
> >



-- 
Jason
___
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com


Re: [ceph-users] Move rdb based image from one pool to another

2018-11-07 Thread Uwe Sauter

I've been reading a bit and trying around but it seems I'm not quite where I 
want to be.

I want to migrate from pool "vms" to pool "vdisks".

# ceph osd pool ls
vms
vdisks

# rbd ls vms
vm-101-disk-1
vm-101-disk-2
vm-102-disk-1
vm-102-disk-2

# rbd snap ls vms/vm-102-disk-2
SNAPID NAME SIZE TIMESTAMP
81 SL6_81 100GiB Thu Aug 23 11:57:05 2018
92 SL6_82 100GiB Fri Oct 12 13:27:53 2018

# rbd export --export-format 2 vms/vm-102-disk-2 - | rbd import - 
vdisks/vm-102-disk-2
Exporting image: 100% complete...done.
Importing image: 100% complete...done.

# rbd snap ls vdisks/vm-102-disk-2
(no output)

# rbd export-diff --whole-object vms/vm-102-disk-2 - | rbd import-diff - 
vdisks/vm-102-disk-2
Exporting image: 100% complete...done.
Importing image diff: 100% complete...done.

# rbd snap ls vdisks/vm-102-disk-2
(still no output)

It looks like the current content is copied but not the snapshots.

What am I doing wrong? Any help is appreciated.

Thanks,

Uwe



Am 07.11.18 um 14:39 schrieb Uwe Sauter:

I'm still on luminous (12.2.8). I'll have a look on the commands. Thanks.

Am 07.11.18 um 14:31 schrieb Jason Dillaman:

With the Mimic release, you can use "rbd deep-copy" to transfer the
images (and associated snapshots) to a new pool. Prior to that, you
could use "rbd export-diff" / "rbd import-diff" to manually transfer
an image and its associated snapshots.
On Wed, Nov 7, 2018 at 7:11 AM Uwe Sauter  wrote:


Hi,

I have several VM images sitting in a Ceph pool which are snapshotted. Is there 
a way to move such images from one pool to another
and perserve the snapshots?

Regards,

 Uwe
___
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com







___
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com


Re: [ceph-users] ceph 12.2.9 release

2018-11-07 Thread David Turner
My big question is that we've had a few of these releases this year that
are bugged and shouldn't be upgraded to... They don't have any release
notes or announcement and the only time this comes out is when users
finally ask about it weeks later.  Why is this not proactively announced to
avoid a problematic release and hopefully prevent people from installing
it?  It would be great if there was an actual release notes saying not to
upgrade to this version or something.

On Wed, Nov 7, 2018 at 11:16 AM Ashley Merrick 
wrote:

> I am seeing this on the latest mimic on my test cluster aswel.
>
> Every automatic deep-scrub comes back as inconsistent, but doing another
> manual scrub comes back as fine and clear each time.
>
> Not sure if related or not..
>
> On Wed, 7 Nov 2018 at 11:57 PM, Christoph Adomeit <
> christoph.adom...@gatworks.de> wrote:
>
>> Hello together,
>>
>> we have upgraded to 12.2.9 because it was in the official repos.
>>
>> Right after the update and some scrubs we have issues.
>>
>> This morning after regular scrubs we had around 10% of all pgs inconstent:
>>
>> pgs: 4036 active+clean
>>   380  active+clean+inconsistent
>>
>> After repairung these 380 pgs we again have:
>>
>> 1/93611534 objects unfound (0.000%)
>> 28   active+clean+inconsistent
>> 1active+recovery_wait+degraded
>>
>> Now we stopped repairing because it does not seem to solve the problem
>> and more and more error messages are occuring. So far we did not see
>> corruption but we do not feel well with the cluster.
>>
>> What do you suggest, wait for 12.2.10 ? Roll Back to 12.2.8 ?
>>
>> Is ist dangerous for our Data to leave the cluster running ?
>>
>> I am sure we do not have hardware errors and that these errors came with
>> the update to 12.2.9.
>>
>> Thanks
>>   Christoph
>>
>>
>>
>> On Wed, Nov 07, 2018 at 07:39:59AM -0800, Gregory Farnum wrote:
>> > On Wed, Nov 7, 2018 at 5:58 AM Simon Ironside 
>> > wrote:
>> >
>> > >
>> > >
>> > > On 07/11/2018 10:59, Konstantin Shalygin wrote:
>> > > >> I wonder if there is any release announcement for ceph 12.2.9 that
>> I
>> > > missed.
>> > > >> I just found the new packages on download.ceph.com, is this an
>> official
>> > > >> release?
>> > > >
>> > > > This is because 12.2.9 have a several bugs. You should avoid to use
>> this
>> > > > release and wait for 12.2.10
>> > >
>> > > Argh! What's it doing in the repos then?? I've just upgraded to it!
>> > > What are the bugs? Is there a thread about them?
>> >
>> >
>> > If you’ve already upgraded and have no issues then you won’t have any
>> > trouble going forward — except perhaps on the next upgrade, if you do it
>> > while the cluster is unhealthy.
>> >
>> > I agree that it’s annoying when these issues make it out. We’ve had
>> ongoing
>> > discussions to try and improve the release process so it’s less
>> drawn-out
>> > and to prevent these upgrade issues from making it through testing, but
>> > nobody has resolved it yet. If anybody has experience working with deb
>> > repositories and handling releases, the Ceph upstream could use some
>> > help... ;)
>> > -Greg
>> >
>> >
>> > >
>> > > Simon
>> > > ___
>> > > ceph-users mailing list
>> > > ceph-users@lists.ceph.com
>> > > http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com
>> > >
>>
>> > ___
>> > ceph-users mailing list
>> > ceph-users@lists.ceph.com
>> > http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com
>>
>> ___
>> ceph-users mailing list
>> ceph-users@lists.ceph.com
>> http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com
>>
> ___
> ceph-users mailing list
> ceph-users@lists.ceph.com
> http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com
>
___
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com


Re: [ceph-users] Filestore to Bluestore migration question

2018-11-07 Thread Hector Martin
On 11/8/18 2:15 AM, Hayashida, Mami wrote:
> Thank you very much.  Yes, I am aware that zapping the SSD and
> converting it to LVM requires stopping all the FileStore OSDs whose
> journals are on that SSD first.  I will add in the `hdparm` to my steps.
> I did run into remnants of gpt information lurking around when trying to
> re-use osd disks in the past -- so that's probably a good preemptive move.

Just for reference, "ceph-volume lvm zap" runs wipefs and also wipes the
beginning of the device separately. It should get rid of the GPT
partition table. hdparm -z just tells the kernel to re-read it (which
should remove any device nodes associated with now-gone partitions).

I just checked the wipefs manpage and it seems it does trigger a
partition table re-read itself, which would make the hdparm unnecessary.
It might be useful if you can check that the partition devices (sda1
etc) exist before the zap command and disappear after it, confirming
that hdparm is not necessary. And if they still exist, then run hdparm,
and if they persist after that too, something's wrong and you should
investigate. GPT partition tables can be notoriously annoying to wipe
because there is a backup at the end of the device, but wipefs *should*
know about that as far as I know.

-- 
Hector Martin (hec...@marcansoft.com)
Public Key: https://mrcn.st/pub
___
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com


Re: [ceph-users] Filestore to Bluestore migration question

2018-11-07 Thread Hayashida, Mami
Thank you very much.  Yes, I am aware that zapping the SSD and converting
it to LVM requires stopping all the FileStore OSDs whose journals are on
that SSD first.  I will add in the `hdparm` to my steps. I did run into
remnants of gpt information lurking around when trying to re-use osd disks
in the past -- so that's probably a good preemptive move.

On Wed, Nov 7, 2018 at 10:46 AM, Hector Martin 
wrote:

> On 11/8/18 12:29 AM, Hayashida, Mami wrote:
> > Yes, that was indeed a copy-and-paste mistake.  I am trying to use
> > /dev/sdh (hdd) for data and a part of /dev/sda (ssd)  for the journal.
> > That's how the Filestore is set-up.  So, for the Bluestore, data on
> > /dev/sdh,  wal and db on /dev/sda.
>
> /dev/sda is the SSD you use for all OSDs on each node, right? Keep in
> mind that what you're doing here is wiping that SSD entirely and
> converting it to LVM. If any FileStore OSDs are using that SSD as
> journal then this will kill them. If you're doing one node at a time
> that's fine, but then you need to out and stop all the FileStore OSDs on
> that node first.
>
> You should throw a "systemctl daemon-reload" in there after tweaking
> fstab and the systemd configs, to make sure systemd is aware of the
> changes, e.g. after the `ln -s /dev/null ...`. FWIW I don't think that
> symlink is necessary, but it won't hurt.
>
> Also, `ceph-volume lvm zap` doesn't seem to trigger a re-read of the
> partition table, from a quick look. Since you used partitions before, it
> might be prudent to do that. Try `hdparm -z /dev/sdh` after the zap
> (same for sda). That should get rid of any /dev/sdh1 etc partition
> devices and leave only /dev/sdh. Do the same for sda and anything else
> you zap. This also shouldn't strictly be needed as those will disappear
> after a reboot anyway, and it's possible some other tool implicitly does
> this for you, but it's good to be safe, and might avoid trouble if some
> FileStore remnant tries to mount phantom partitions.
>
> --
> Hector Martin (hec...@marcansoft.com)
> Public Key: https://mrcn.st/pub
>



-- 
*Mami Hayashida*

*Research Computing Associate*
Research Computing Infrastructure
University of Kentucky Information Technology Services
301 Rose Street | 102 James F. Hardymon Building
Lexington, KY 40506-0495
mami.hayash...@uky.edu
(859)323-7521
___
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com


Re: [ceph-users] ceph 12.2.9 release

2018-11-07 Thread Ashley Merrick
I am seeing this on the latest mimic on my test cluster aswel.

Every automatic deep-scrub comes back as inconsistent, but doing another
manual scrub comes back as fine and clear each time.

Not sure if related or not..

On Wed, 7 Nov 2018 at 11:57 PM, Christoph Adomeit <
christoph.adom...@gatworks.de> wrote:

> Hello together,
>
> we have upgraded to 12.2.9 because it was in the official repos.
>
> Right after the update and some scrubs we have issues.
>
> This morning after regular scrubs we had around 10% of all pgs inconstent:
>
> pgs: 4036 active+clean
>   380  active+clean+inconsistent
>
> After repairung these 380 pgs we again have:
>
> 1/93611534 objects unfound (0.000%)
> 28   active+clean+inconsistent
> 1active+recovery_wait+degraded
>
> Now we stopped repairing because it does not seem to solve the problem and
> more and more error messages are occuring. So far we did not see corruption
> but we do not feel well with the cluster.
>
> What do you suggest, wait for 12.2.10 ? Roll Back to 12.2.8 ?
>
> Is ist dangerous for our Data to leave the cluster running ?
>
> I am sure we do not have hardware errors and that these errors came with
> the update to 12.2.9.
>
> Thanks
>   Christoph
>
>
>
> On Wed, Nov 07, 2018 at 07:39:59AM -0800, Gregory Farnum wrote:
> > On Wed, Nov 7, 2018 at 5:58 AM Simon Ironside 
> > wrote:
> >
> > >
> > >
> > > On 07/11/2018 10:59, Konstantin Shalygin wrote:
> > > >> I wonder if there is any release announcement for ceph 12.2.9 that I
> > > missed.
> > > >> I just found the new packages on download.ceph.com, is this an
> official
> > > >> release?
> > > >
> > > > This is because 12.2.9 have a several bugs. You should avoid to use
> this
> > > > release and wait for 12.2.10
> > >
> > > Argh! What's it doing in the repos then?? I've just upgraded to it!
> > > What are the bugs? Is there a thread about them?
> >
> >
> > If you’ve already upgraded and have no issues then you won’t have any
> > trouble going forward — except perhaps on the next upgrade, if you do it
> > while the cluster is unhealthy.
> >
> > I agree that it’s annoying when these issues make it out. We’ve had
> ongoing
> > discussions to try and improve the release process so it’s less drawn-out
> > and to prevent these upgrade issues from making it through testing, but
> > nobody has resolved it yet. If anybody has experience working with deb
> > repositories and handling releases, the Ceph upstream could use some
> > help... ;)
> > -Greg
> >
> >
> > >
> > > Simon
> > > ___
> > > ceph-users mailing list
> > > ceph-users@lists.ceph.com
> > > http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com
> > >
>
> > ___
> > ceph-users mailing list
> > ceph-users@lists.ceph.com
> > http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com
>
> ___
> ceph-users mailing list
> ceph-users@lists.ceph.com
> http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com
>
___
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com


Re: [ceph-users] ceph 12.2.9 release

2018-11-07 Thread Gregory Farnum
The specific bug you are known at risk for when installing the 12.2.9
packages is http://tracker.ceph.com/issues/36686.

It only triggers when PGs are not active+clean and are running different
minor versions. (Even more specifically, it seems to only show up when
doing backfill from an OSD running new code to an OSD running old code
during the upgrade process.)

If you have encountered other issues, there are no special troubleshooting
steps I'm aware of; follow the standard advice.
-Greg

On Wed, Nov 7, 2018 at 8:08 AM Christoph Adomeit <
christoph.adom...@gatworks.de> wrote:

> Hello together,
>
> we have upgraded to 12.2.9 because it was in the official repos.
>
> Right after the update and some scrubs we have issues.
>
> This morning after regular scrubs we had around 10% of all pgs inconstent:
>
> pgs: 4036 active+clean
>   380  active+clean+inconsistent
>
> After repairung these 380 pgs we again have:
>
> 1/93611534 objects unfound (0.000%)
> 28   active+clean+inconsistent
> 1active+recovery_wait+degraded
>
> Now we stopped repairing because it does not seem to solve the problem and
> more and more error messages are occuring. So far we did not see corruption
> but we do not feel well with the cluster.
>
> What do you suggest, wait for 12.2.10 ? Roll Back to 12.2.8 ?
>
> Is ist dangerous for our Data to leave the cluster running ?
>
> I am sure we do not have hardware errors and that these errors came with
> the update to 12.2.9.
>
> Thanks
>   Christoph
>
>
>
> On Wed, Nov 07, 2018 at 07:39:59AM -0800, Gregory Farnum wrote:
> > On Wed, Nov 7, 2018 at 5:58 AM Simon Ironside 
> > wrote:
> >
> > >
> > >
> > > On 07/11/2018 10:59, Konstantin Shalygin wrote:
> > > >> I wonder if there is any release announcement for ceph 12.2.9 that I
> > > missed.
> > > >> I just found the new packages on download.ceph.com, is this an
> official
> > > >> release?
> > > >
> > > > This is because 12.2.9 have a several bugs. You should avoid to use
> this
> > > > release and wait for 12.2.10
> > >
> > > Argh! What's it doing in the repos then?? I've just upgraded to it!
> > > What are the bugs? Is there a thread about them?
> >
> >
> > If you’ve already upgraded and have no issues then you won’t have any
> > trouble going forward — except perhaps on the next upgrade, if you do it
> > while the cluster is unhealthy.
> >
> > I agree that it’s annoying when these issues make it out. We’ve had
> ongoing
> > discussions to try and improve the release process so it’s less drawn-out
> > and to prevent these upgrade issues from making it through testing, but
> > nobody has resolved it yet. If anybody has experience working with deb
> > repositories and handling releases, the Ceph upstream could use some
> > help... ;)
> > -Greg
> >
> >
> > >
> > > Simon
> > > ___
> > > ceph-users mailing list
> > > ceph-users@lists.ceph.com
> > > http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com
> > >
>
> > ___
> > ceph-users mailing list
> > ceph-users@lists.ceph.com
> > http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com
>
>
___
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com


Re: [ceph-users] ceph 12.2.9 release

2018-11-07 Thread Simon Ironside

On 07/11/2018 15:39, Gregory Farnum wrote:
On Wed, Nov 7, 2018 at 5:58 AM Simon Ironside > wrote:




On 07/11/2018 10:59, Konstantin Shalygin wrote:
>> I wonder if there is any release announcement for ceph 12.2.9
that I missed.
>> I just found the new packages on download.ceph.com
, is this an official
>> release?
>
> This is because 12.2.9 have a several bugs. You should avoid to
use this
> release and wait for 12.2.10

Argh! What's it doing in the repos then?? I've just upgraded to it!
What are the bugs? Is there a thread about them?


If you’ve already upgraded and have no issues then you won’t have any 
trouble going forward — except perhaps on the next upgrade, if you do 
it while the cluster is unhealthy.


Thanks, the upgrade went fine and I've no known issues. The only warning 
I have is about too many PGs per OSD which is my fault not ceph's. I 
trust that doesn't count as a reason not to proceed to 13.2.2?


I agree that it’s annoying when these issues make it out. We’ve had 
ongoing discussions to try and improve the release process so it’s 
less drawn-out and to prevent these upgrade issues from making it 
through testing, but nobody has resolved it yet. If anybody has 
experience working with deb repositories and handling releases, the 
Ceph upstream could use some help... ;)


Totally, I get that this happens from time to time but once a bad 
release is known why not just delete the affected packages from the 
official repos? That seems to me to be a really easy step to take 
especially if release announcements haven't been sent, docs.ceph.com 
hasn't been updated yet etc. I reposync --newest-only the RPMs from the 
official repos to my own then update my ceph hosts from there which is 
how I ended up with 12.2.9.


Simon
___
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com


Re: [ceph-users] ceph 12.2.9 release

2018-11-07 Thread Christoph Adomeit
Hello together,

we have upgraded to 12.2.9 because it was in the official repos.

Right after the update and some scrubs we have issues.

This morning after regular scrubs we had around 10% of all pgs inconstent:

pgs: 4036 active+clean
  380  active+clean+inconsistent

After repairung these 380 pgs we again have:

1/93611534 objects unfound (0.000%)
28   active+clean+inconsistent
1active+recovery_wait+degraded

Now we stopped repairing because it does not seem to solve the problem and more 
and more error messages are occuring. So far we did not see corruption but we 
do not feel well with the cluster.

What do you suggest, wait for 12.2.10 ? Roll Back to 12.2.8 ?

Is ist dangerous for our Data to leave the cluster running ?

I am sure we do not have hardware errors and that these errors came with the 
update to 12.2.9.

Thanks
  Christoph



On Wed, Nov 07, 2018 at 07:39:59AM -0800, Gregory Farnum wrote:
> On Wed, Nov 7, 2018 at 5:58 AM Simon Ironside 
> wrote:
> 
> >
> >
> > On 07/11/2018 10:59, Konstantin Shalygin wrote:
> > >> I wonder if there is any release announcement for ceph 12.2.9 that I
> > missed.
> > >> I just found the new packages on download.ceph.com, is this an official
> > >> release?
> > >
> > > This is because 12.2.9 have a several bugs. You should avoid to use this
> > > release and wait for 12.2.10
> >
> > Argh! What's it doing in the repos then?? I've just upgraded to it!
> > What are the bugs? Is there a thread about them?
> 
> 
> If you’ve already upgraded and have no issues then you won’t have any
> trouble going forward — except perhaps on the next upgrade, if you do it
> while the cluster is unhealthy.
> 
> I agree that it’s annoying when these issues make it out. We’ve had ongoing
> discussions to try and improve the release process so it’s less drawn-out
> and to prevent these upgrade issues from making it through testing, but
> nobody has resolved it yet. If anybody has experience working with deb
> repositories and handling releases, the Ceph upstream could use some
> help... ;)
> -Greg
> 
> 
> >
> > Simon
> > ___
> > ceph-users mailing list
> > ceph-users@lists.ceph.com
> > http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com
> >

> ___
> ceph-users mailing list
> ceph-users@lists.ceph.com
> http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com

___
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com


Re: [ceph-users] ceph 12.2.9 release

2018-11-07 Thread Kevin Olbrich
Am Mi., 7. Nov. 2018 um 16:40 Uhr schrieb Gregory Farnum :

> On Wed, Nov 7, 2018 at 5:58 AM Simon Ironside 
> wrote:
>
>>
>>
>> On 07/11/2018 10:59, Konstantin Shalygin wrote:
>> >> I wonder if there is any release announcement for ceph 12.2.9 that I
>> missed.
>> >> I just found the new packages on download.ceph.com, is this an
>> official
>> >> release?
>> >
>> > This is because 12.2.9 have a several bugs. You should avoid to use
>> this
>> > release and wait for 12.2.10
>>
>> Argh! What's it doing in the repos then?? I've just upgraded to it!
>> What are the bugs? Is there a thread about them?
>
>
> If you’ve already upgraded and have no issues then you won’t have any
> trouble going forward — except perhaps on the next upgrade, if you do it
> while the cluster is unhealthy.
>
> I agree that it’s annoying when these issues make it out. We’ve had
> ongoing discussions to try and improve the release process so it’s less
> drawn-out and to prevent these upgrade issues from making it through
> testing, but nobody has resolved it yet. If anybody has experience working
> with deb repositories and handling releases, the Ceph upstream could use
> some help... ;)
> -Greg
>
>>
>>
We solve this problem by hosting two repos. One for staging and QA and one
for production.
Every release gets to staging (for example directly after building a scm
tag).

If QA passed, the stage repo is turned into the prod one.
Using symlinks, it would be possible to switch back if problems occure.
Example: https://incoming.debian.org/

Currently I would be unable to deploy new nodes if I use the official
mirrors as apt is unable to use older versions (which does work on yum/dnf).
Thats why we are implementing "mirror-sync" / rsync with a copy of the repo
and the desired packages until such solution is available.

Kevin


>> Simon
>> ___
>> ceph-users mailing list
>> ceph-users@lists.ceph.com
>> http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com
>>
> ___
> ceph-users mailing list
> ceph-users@lists.ceph.com
> http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com
>
___
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com


Re: [ceph-users] Filestore to Bluestore migration question

2018-11-07 Thread Hector Martin
On 11/8/18 12:29 AM, Hayashida, Mami wrote:
> Yes, that was indeed a copy-and-paste mistake.  I am trying to use
> /dev/sdh (hdd) for data and a part of /dev/sda (ssd)  for the journal. 
> That's how the Filestore is set-up.  So, for the Bluestore, data on
> /dev/sdh,  wal and db on /dev/sda. 

/dev/sda is the SSD you use for all OSDs on each node, right? Keep in
mind that what you're doing here is wiping that SSD entirely and
converting it to LVM. If any FileStore OSDs are using that SSD as
journal then this will kill them. If you're doing one node at a time
that's fine, but then you need to out and stop all the FileStore OSDs on
that node first.

You should throw a "systemctl daemon-reload" in there after tweaking
fstab and the systemd configs, to make sure systemd is aware of the
changes, e.g. after the `ln -s /dev/null ...`. FWIW I don't think that
symlink is necessary, but it won't hurt.

Also, `ceph-volume lvm zap` doesn't seem to trigger a re-read of the
partition table, from a quick look. Since you used partitions before, it
might be prudent to do that. Try `hdparm -z /dev/sdh` after the zap
(same for sda). That should get rid of any /dev/sdh1 etc partition
devices and leave only /dev/sdh. Do the same for sda and anything else
you zap. This also shouldn't strictly be needed as those will disappear
after a reboot anyway, and it's possible some other tool implicitly does
this for you, but it's good to be safe, and might avoid trouble if some
FileStore remnant tries to mount phantom partitions.

-- 
Hector Martin (hec...@marcansoft.com)
Public Key: https://mrcn.st/pub
___
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com


Re: [ceph-users] ceph 12.2.9 release

2018-11-07 Thread Gregory Farnum
On Wed, Nov 7, 2018 at 5:58 AM Simon Ironside 
wrote:

>
>
> On 07/11/2018 10:59, Konstantin Shalygin wrote:
> >> I wonder if there is any release announcement for ceph 12.2.9 that I
> missed.
> >> I just found the new packages on download.ceph.com, is this an official
> >> release?
> >
> > This is because 12.2.9 have a several bugs. You should avoid to use this
> > release and wait for 12.2.10
>
> Argh! What's it doing in the repos then?? I've just upgraded to it!
> What are the bugs? Is there a thread about them?


If you’ve already upgraded and have no issues then you won’t have any
trouble going forward — except perhaps on the next upgrade, if you do it
while the cluster is unhealthy.

I agree that it’s annoying when these issues make it out. We’ve had ongoing
discussions to try and improve the release process so it’s less drawn-out
and to prevent these upgrade issues from making it through testing, but
nobody has resolved it yet. If anybody has experience working with deb
repositories and handling releases, the Ceph upstream could use some
help... ;)
-Greg


>
> Simon
> ___
> ceph-users mailing list
> ceph-users@lists.ceph.com
> http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com
>
___
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com


Re: [ceph-users] Filestore to Bluestore migration question

2018-11-07 Thread Ashley Merrick
Sorry my mixup.

Therefore you shouldn’t be running ZAP against /dev/sda as this will wipe
the whole SSD.

I Guess currently in its setup it’s using a partition on /dev/sda? Like
/dev/sda2 for example.

,Ashley

On Wed, 7 Nov 2018 at 11:30 PM, Hayashida, Mami 
wrote:

> Yes, that was indeed a copy-and-paste mistake.  I am trying to use
> /dev/sdh (hdd) for data and a part of /dev/sda (ssd)  for the journal.
> That's how the Filestore is set-up.  So, for the Bluestore, data on
> /dev/sdh,  wal and db on /dev/sda.
>
>
> On Wed, Nov 7, 2018 at 10:26 AM, Ashley Merrick 
> wrote:
>
>> ceph osd destroy 70  --yes-i-really-mean-it
>>
>> I am guessing that’s a copy and paste mistake and should say 120.
>>
>> Is the SSD @ /dev/sdh fully for the OSD120 is a partition on this SSD the
>> journal and other partitions are for other SSD’s?
>>
>> On Wed, 7 Nov 2018 at 11:21 PM, Hayashida, Mami 
>> wrote:
>>
>>> I would agree with that.  So, here is what I am planning on doing
>>> today.  I will try this from scratch on a different OSD node from the very
>>> first step and log input and output for every step.  Here is the outline of
>>> what I think (based on all the email exchanges so far) should happen.
>>>
>>> ***
>>> Trying to convert osd.120 to Bluestore.  Data is on /sda/sdh.
>>>  Filestore Journal is on a partition drive (40GB) on /dev/sda.
>>>
>>> #Mark those OSDs out
>>> ceph osd out 120
>>>
>>> # Stop the OSDs
>>> systemctl kill ceph-osd@120
>>>
>>> # Unmount the filesystem
>>> sudo umount /var/lib/ceph/osd/ceph-120
>>>
>>> # Destroy the data
>>> ceph-volume lvm zap /dev/sdh --destroy   # data disk
>>> ceph-volume lvm zap /dev/sda --destroy   # ssd for wal and db
>>>
>>> # Inform the cluster
>>> ceph osd destroy 70  --yes-i-really-mean-it
>>>
>>> # Check all the /etc/fstab and /etc/systemd/system to make sure that all
>>> the references to the filesystem is gone. Run
>>> ln -sf /dev/null /etc/systemd/system/ceph-disk@70.service
>>>
>>> # Create PVs, VGs, LVs
>>> pvcreate /dev/sda # for wal and db
>>> pvcreate /dev/sdh # for data
>>>
>>> vgcreate ssd0 /dev/sda
>>> vgcreate hdd120  /dev/sdh
>>>
>>> lvcreate -L 40G -n db120 ssd0
>>> lvcreate -l 100%VG data120 hdd120
>>>
>>> # Run ceph-volume
>>> ceph-volume lvm prepare --bluestore --data hdd120/data120 --block.db
>>> ssd0/db120  --osd-id 120
>>>
>>> # Activate
>>> ceph-volume lvm activate 120 
>>>
>>> **
>>> Does this sound right?
>>>
>>> On Tue, Nov 6, 2018 at 4:32 PM, Alfredo Deza  wrote:
>>>
 It is pretty difficult to know what step you are missing if we are
 getting the `activate --all` command.

 Maybe if you try one by one, capturing each command, throughout the
 process, with output. In the filestore-to-bluestore guides we never
 advertise `activate --all` for example.

 Something is missing here, and I can't tell what it is.
 On Tue, Nov 6, 2018 at 4:13 PM Hayashida, Mami 
 wrote:
 >
 > This is becoming even more confusing. I got rid of those 
 > ceph-disk@6[0-9].service
 (which had been symlinked to /dev/null).  Moved
 /var/lib/ceph/osd/ceph-6[0-9] to  /var/./osd_old/.  Then, I ran
 `ceph-volume lvm activate --all`.  I got once again
 >
 > root@osd1:~# ceph-volume lvm activate --all
 > --> Activating OSD ID 67 FSID 17cd6755-76f9-4160-906c-1bf13d09fb3d
 > Running command: mount -t tmpfs tmpfs /var/lib/ceph/osd/ceph-67
 > --> Absolute path not found for executable: restorecon
 > --> Ensure $PATH environment variable contains common executable
 locations
 > Running command: ceph-bluestore-tool --cluster=ceph prime-osd-dir
 --dev /dev/hdd67/data67 --path /var/lib/ceph/osd/ceph-67
 >  stderr: failed to read label for /dev/hdd67/data67: (2) No such file
 or directory
 > -->  RuntimeError: command returned non-zero exit status: 1
 >
 > But when I ran `df` and `mount` ceph-67 is the only one that exists.
 (and in  /var/lib/ceph/osd/)
 >
 > root@osd1:~# df -h | grep ceph-6
 > tmpfs   126G 0  126G   0% /var/lib/ceph/osd/ceph-67
 >
 > root@osd1:~# mount | grep ceph-6
 > tmpfs on /var/lib/ceph/osd/ceph-67 type tmpfs (rw,relatime)
 >
 > root@osd1:~# ls /var/lib/ceph/osd/ | grep ceph-6
 > ceph-67
 >
 > But in I cannot restart any of these 10 daemons (`systemctl start
 ceph-osd@6[0-9]`).
 >
 > I am wondering if I should zap these 10 osds and start over although
 at this point I am afraid even zapping may not be a simple task
 >
 >
 >
 > On Tue, Nov 6, 2018 at 3:44 PM, Hector Martin 
 wrote:
 >>
 >> On 11/7/18 5:27 AM, Hayashida, Mami wrote:
 >> > 1. Stopped osd.60-69:  no problem
 >> > 2. Skipped this and went to #3 to check first
 >> > 3. Here, `find /etc/systemd/system | grep ceph-volume` returned
 >> > nothing.  I see in that directory
 >> >
 >> > /etc/systemd/system/ceph-disk@60.service# and 61 - 69.
 

Re: [ceph-users] Filestore to Bluestore migration question

2018-11-07 Thread Hayashida, Mami
Yes, that was indeed a copy-and-paste mistake.  I am trying to use /dev/sdh
(hdd) for data and a part of /dev/sda (ssd)  for the journal.  That's how
the Filestore is set-up.  So, for the Bluestore, data on /dev/sdh,  wal and
db on /dev/sda.


On Wed, Nov 7, 2018 at 10:26 AM, Ashley Merrick 
wrote:

> ceph osd destroy 70  --yes-i-really-mean-it
>
> I am guessing that’s a copy and paste mistake and should say 120.
>
> Is the SSD @ /dev/sdh fully for the OSD120 is a partition on this SSD the
> journal and other partitions are for other SSD’s?
>
> On Wed, 7 Nov 2018 at 11:21 PM, Hayashida, Mami 
> wrote:
>
>> I would agree with that.  So, here is what I am planning on doing today.
>> I will try this from scratch on a different OSD node from the very first
>> step and log input and output for every step.  Here is the outline of what
>> I think (based on all the email exchanges so far) should happen.
>>
>> ***
>> Trying to convert osd.120 to Bluestore.  Data is on /sda/sdh.   Filestore
>> Journal is on a partition drive (40GB) on /dev/sda.
>>
>> #Mark those OSDs out
>> ceph osd out 120
>>
>> # Stop the OSDs
>> systemctl kill ceph-osd@120
>>
>> # Unmount the filesystem
>> sudo umount /var/lib/ceph/osd/ceph-120
>>
>> # Destroy the data
>> ceph-volume lvm zap /dev/sdh --destroy   # data disk
>> ceph-volume lvm zap /dev/sda --destroy   # ssd for wal and db
>>
>> # Inform the cluster
>> ceph osd destroy 70  --yes-i-really-mean-it
>>
>> # Check all the /etc/fstab and /etc/systemd/system to make sure that all
>> the references to the filesystem is gone. Run
>> ln -sf /dev/null /etc/systemd/system/ceph-disk@70.service
>>
>> # Create PVs, VGs, LVs
>> pvcreate /dev/sda # for wal and db
>> pvcreate /dev/sdh # for data
>>
>> vgcreate ssd0 /dev/sda
>> vgcreate hdd120  /dev/sdh
>>
>> lvcreate -L 40G -n db120 ssd0
>> lvcreate -l 100%VG data120 hdd120
>>
>> # Run ceph-volume
>> ceph-volume lvm prepare --bluestore --data hdd120/data120 --block.db
>> ssd0/db120  --osd-id 120
>>
>> # Activate
>> ceph-volume lvm activate 120 
>>
>> **
>> Does this sound right?
>>
>> On Tue, Nov 6, 2018 at 4:32 PM, Alfredo Deza  wrote:
>>
>>> It is pretty difficult to know what step you are missing if we are
>>> getting the `activate --all` command.
>>>
>>> Maybe if you try one by one, capturing each command, throughout the
>>> process, with output. In the filestore-to-bluestore guides we never
>>> advertise `activate --all` for example.
>>>
>>> Something is missing here, and I can't tell what it is.
>>> On Tue, Nov 6, 2018 at 4:13 PM Hayashida, Mami 
>>> wrote:
>>> >
>>> > This is becoming even more confusing. I got rid of those 
>>> > ceph-disk@6[0-9].service
>>> (which had been symlinked to /dev/null).  Moved
>>> /var/lib/ceph/osd/ceph-6[0-9] to  /var/./osd_old/.  Then, I ran
>>> `ceph-volume lvm activate --all`.  I got once again
>>> >
>>> > root@osd1:~# ceph-volume lvm activate --all
>>> > --> Activating OSD ID 67 FSID 17cd6755-76f9-4160-906c-1bf13d09fb3d
>>> > Running command: mount -t tmpfs tmpfs /var/lib/ceph/osd/ceph-67
>>> > --> Absolute path not found for executable: restorecon
>>> > --> Ensure $PATH environment variable contains common executable
>>> locations
>>> > Running command: ceph-bluestore-tool --cluster=ceph prime-osd-dir
>>> --dev /dev/hdd67/data67 --path /var/lib/ceph/osd/ceph-67
>>> >  stderr: failed to read label for /dev/hdd67/data67: (2) No such file
>>> or directory
>>> > -->  RuntimeError: command returned non-zero exit status: 1
>>> >
>>> > But when I ran `df` and `mount` ceph-67 is the only one that exists.
>>> (and in  /var/lib/ceph/osd/)
>>> >
>>> > root@osd1:~# df -h | grep ceph-6
>>> > tmpfs   126G 0  126G   0% /var/lib/ceph/osd/ceph-67
>>> >
>>> > root@osd1:~# mount | grep ceph-6
>>> > tmpfs on /var/lib/ceph/osd/ceph-67 type tmpfs (rw,relatime)
>>> >
>>> > root@osd1:~# ls /var/lib/ceph/osd/ | grep ceph-6
>>> > ceph-67
>>> >
>>> > But in I cannot restart any of these 10 daemons (`systemctl start
>>> ceph-osd@6[0-9]`).
>>> >
>>> > I am wondering if I should zap these 10 osds and start over although
>>> at this point I am afraid even zapping may not be a simple task
>>> >
>>> >
>>> >
>>> > On Tue, Nov 6, 2018 at 3:44 PM, Hector Martin 
>>> wrote:
>>> >>
>>> >> On 11/7/18 5:27 AM, Hayashida, Mami wrote:
>>> >> > 1. Stopped osd.60-69:  no problem
>>> >> > 2. Skipped this and went to #3 to check first
>>> >> > 3. Here, `find /etc/systemd/system | grep ceph-volume` returned
>>> >> > nothing.  I see in that directory
>>> >> >
>>> >> > /etc/systemd/system/ceph-disk@60.service# and 61 - 69.
>>> >> >
>>> >> > No ceph-volume entries.
>>> >>
>>> >> Get rid of those, they also shouldn't be there. Then `systemctl
>>> >> daemon-reload` and continue, see if you get into a good state.
>>> basically
>>> >> feel free to nuke anything in there related to OSD 60-69, since
>>> whatever
>>> >> is needed should be taken care of by the ceph-volume activation.
>>> >>
>>> >>
>>> >> --
>>> >> Hector Martin 

Re: [ceph-users] Filestore to Bluestore migration question

2018-11-07 Thread Ashley Merrick
ceph osd destroy 70  --yes-i-really-mean-it

I am guessing that’s a copy and paste mistake and should say 120.

Is the SSD @ /dev/sdh fully for the OSD120 is a partition on this SSD the
journal and other partitions are for other SSD’s?

On Wed, 7 Nov 2018 at 11:21 PM, Hayashida, Mami 
wrote:

> I would agree with that.  So, here is what I am planning on doing today.
> I will try this from scratch on a different OSD node from the very first
> step and log input and output for every step.  Here is the outline of what
> I think (based on all the email exchanges so far) should happen.
>
> ***
> Trying to convert osd.120 to Bluestore.  Data is on /sda/sdh.   Filestore
> Journal is on a partition drive (40GB) on /dev/sda.
>
> #Mark those OSDs out
> ceph osd out 120
>
> # Stop the OSDs
> systemctl kill ceph-osd@120
>
> # Unmount the filesystem
> sudo umount /var/lib/ceph/osd/ceph-120
>
> # Destroy the data
> ceph-volume lvm zap /dev/sdh --destroy   # data disk
> ceph-volume lvm zap /dev/sda --destroy   # ssd for wal and db
>
> # Inform the cluster
> ceph osd destroy 70  --yes-i-really-mean-it
>
> # Check all the /etc/fstab and /etc/systemd/system to make sure that all
> the references to the filesystem is gone. Run
> ln -sf /dev/null /etc/systemd/system/ceph-disk@70.service
>
> # Create PVs, VGs, LVs
> pvcreate /dev/sda # for wal and db
> pvcreate /dev/sdh # for data
>
> vgcreate ssd0 /dev/sda
> vgcreate hdd120  /dev/sdh
>
> lvcreate -L 40G -n db120 ssd0
> lvcreate -l 100%VG data120 hdd120
>
> # Run ceph-volume
> ceph-volume lvm prepare --bluestore --data hdd120/data120 --block.db
> ssd0/db120  --osd-id 120
>
> # Activate
> ceph-volume lvm activate 120 
>
> **
> Does this sound right?
>
> On Tue, Nov 6, 2018 at 4:32 PM, Alfredo Deza  wrote:
>
>> It is pretty difficult to know what step you are missing if we are
>> getting the `activate --all` command.
>>
>> Maybe if you try one by one, capturing each command, throughout the
>> process, with output. In the filestore-to-bluestore guides we never
>> advertise `activate --all` for example.
>>
>> Something is missing here, and I can't tell what it is.
>> On Tue, Nov 6, 2018 at 4:13 PM Hayashida, Mami 
>> wrote:
>> >
>> > This is becoming even more confusing. I got rid of those 
>> > ceph-disk@6[0-9].service
>> (which had been symlinked to /dev/null).  Moved
>> /var/lib/ceph/osd/ceph-6[0-9] to  /var/./osd_old/.  Then, I ran
>> `ceph-volume lvm activate --all`.  I got once again
>> >
>> > root@osd1:~# ceph-volume lvm activate --all
>> > --> Activating OSD ID 67 FSID 17cd6755-76f9-4160-906c-1bf13d09fb3d
>> > Running command: mount -t tmpfs tmpfs /var/lib/ceph/osd/ceph-67
>> > --> Absolute path not found for executable: restorecon
>> > --> Ensure $PATH environment variable contains common executable
>> locations
>> > Running command: ceph-bluestore-tool --cluster=ceph prime-osd-dir --dev
>> /dev/hdd67/data67 --path /var/lib/ceph/osd/ceph-67
>> >  stderr: failed to read label for /dev/hdd67/data67: (2) No such file
>> or directory
>> > -->  RuntimeError: command returned non-zero exit status: 1
>> >
>> > But when I ran `df` and `mount` ceph-67 is the only one that exists.
>> (and in  /var/lib/ceph/osd/)
>> >
>> > root@osd1:~# df -h | grep ceph-6
>> > tmpfs   126G 0  126G   0% /var/lib/ceph/osd/ceph-67
>> >
>> > root@osd1:~# mount | grep ceph-6
>> > tmpfs on /var/lib/ceph/osd/ceph-67 type tmpfs (rw,relatime)
>> >
>> > root@osd1:~# ls /var/lib/ceph/osd/ | grep ceph-6
>> > ceph-67
>> >
>> > But in I cannot restart any of these 10 daemons (`systemctl start
>> ceph-osd@6[0-9]`).
>> >
>> > I am wondering if I should zap these 10 osds and start over although at
>> this point I am afraid even zapping may not be a simple task
>> >
>> >
>> >
>> > On Tue, Nov 6, 2018 at 3:44 PM, Hector Martin 
>> wrote:
>> >>
>> >> On 11/7/18 5:27 AM, Hayashida, Mami wrote:
>> >> > 1. Stopped osd.60-69:  no problem
>> >> > 2. Skipped this and went to #3 to check first
>> >> > 3. Here, `find /etc/systemd/system | grep ceph-volume` returned
>> >> > nothing.  I see in that directory
>> >> >
>> >> > /etc/systemd/system/ceph-disk@60.service# and 61 - 69.
>> >> >
>> >> > No ceph-volume entries.
>> >>
>> >> Get rid of those, they also shouldn't be there. Then `systemctl
>> >> daemon-reload` and continue, see if you get into a good state.
>> basically
>> >> feel free to nuke anything in there related to OSD 60-69, since
>> whatever
>> >> is needed should be taken care of by the ceph-volume activation.
>> >>
>> >>
>> >> --
>> >> Hector Martin (hec...@marcansoft.com)
>> >> Public Key: https://mrcn.st/pub
>> >
>> >
>> >
>> >
>> > --
>> > Mami Hayashida
>> > Research Computing Associate
>> >
>> > Research Computing Infrastructure
>> > University of Kentucky Information Technology Services
>> > 301 Rose Street | 102 James F. Hardymon Building
>> > Lexington, KY 40506-0495
>> > mami.hayash...@uky.edu
>> > (859)323-7521
>>
>
>
>
> --
> *Mami Hayashida*
>
> *Research Computing 

Re: [ceph-users] Filestore to Bluestore migration question

2018-11-07 Thread Hayashida, Mami
I would agree with that.  So, here is what I am planning on doing today.  I
will try this from scratch on a different OSD node from the very first step
and log input and output for every step.  Here is the outline of what I
think (based on all the email exchanges so far) should happen.

***
Trying to convert osd.120 to Bluestore.  Data is on /sda/sdh.   Filestore
Journal is on a partition drive (40GB) on /dev/sda.

#Mark those OSDs out
ceph osd out 120

# Stop the OSDs
systemctl kill ceph-osd@120

# Unmount the filesystem
sudo umount /var/lib/ceph/osd/ceph-120

# Destroy the data
ceph-volume lvm zap /dev/sdh --destroy   # data disk
ceph-volume lvm zap /dev/sda --destroy   # ssd for wal and db

# Inform the cluster
ceph osd destroy 70  --yes-i-really-mean-it

# Check all the /etc/fstab and /etc/systemd/system to make sure that all
the references to the filesystem is gone. Run
ln -sf /dev/null /etc/systemd/system/ceph-disk@70.service

# Create PVs, VGs, LVs
pvcreate /dev/sda # for wal and db
pvcreate /dev/sdh # for data

vgcreate ssd0 /dev/sda
vgcreate hdd120  /dev/sdh

lvcreate -L 40G -n db120 ssd0
lvcreate -l 100%VG data120 hdd120

# Run ceph-volume
ceph-volume lvm prepare --bluestore --data hdd120/data120 --block.db
ssd0/db120  --osd-id 120

# Activate
ceph-volume lvm activate 120 

**
Does this sound right?

On Tue, Nov 6, 2018 at 4:32 PM, Alfredo Deza  wrote:

> It is pretty difficult to know what step you are missing if we are
> getting the `activate --all` command.
>
> Maybe if you try one by one, capturing each command, throughout the
> process, with output. In the filestore-to-bluestore guides we never
> advertise `activate --all` for example.
>
> Something is missing here, and I can't tell what it is.
> On Tue, Nov 6, 2018 at 4:13 PM Hayashida, Mami 
> wrote:
> >
> > This is becoming even more confusing. I got rid of those 
> > ceph-disk@6[0-9].service
> (which had been symlinked to /dev/null).  Moved
> /var/lib/ceph/osd/ceph-6[0-9] to  /var/./osd_old/.  Then, I ran
> `ceph-volume lvm activate --all`.  I got once again
> >
> > root@osd1:~# ceph-volume lvm activate --all
> > --> Activating OSD ID 67 FSID 17cd6755-76f9-4160-906c-1bf13d09fb3d
> > Running command: mount -t tmpfs tmpfs /var/lib/ceph/osd/ceph-67
> > --> Absolute path not found for executable: restorecon
> > --> Ensure $PATH environment variable contains common executable
> locations
> > Running command: ceph-bluestore-tool --cluster=ceph prime-osd-dir --dev
> /dev/hdd67/data67 --path /var/lib/ceph/osd/ceph-67
> >  stderr: failed to read label for /dev/hdd67/data67: (2) No such file or
> directory
> > -->  RuntimeError: command returned non-zero exit status: 1
> >
> > But when I ran `df` and `mount` ceph-67 is the only one that exists.
> (and in  /var/lib/ceph/osd/)
> >
> > root@osd1:~# df -h | grep ceph-6
> > tmpfs   126G 0  126G   0% /var/lib/ceph/osd/ceph-67
> >
> > root@osd1:~# mount | grep ceph-6
> > tmpfs on /var/lib/ceph/osd/ceph-67 type tmpfs (rw,relatime)
> >
> > root@osd1:~# ls /var/lib/ceph/osd/ | grep ceph-6
> > ceph-67
> >
> > But in I cannot restart any of these 10 daemons (`systemctl start
> ceph-osd@6[0-9]`).
> >
> > I am wondering if I should zap these 10 osds and start over although at
> this point I am afraid even zapping may not be a simple task
> >
> >
> >
> > On Tue, Nov 6, 2018 at 3:44 PM, Hector Martin 
> wrote:
> >>
> >> On 11/7/18 5:27 AM, Hayashida, Mami wrote:
> >> > 1. Stopped osd.60-69:  no problem
> >> > 2. Skipped this and went to #3 to check first
> >> > 3. Here, `find /etc/systemd/system | grep ceph-volume` returned
> >> > nothing.  I see in that directory
> >> >
> >> > /etc/systemd/system/ceph-disk@60.service# and 61 - 69.
> >> >
> >> > No ceph-volume entries.
> >>
> >> Get rid of those, they also shouldn't be there. Then `systemctl
> >> daemon-reload` and continue, see if you get into a good state. basically
> >> feel free to nuke anything in there related to OSD 60-69, since whatever
> >> is needed should be taken care of by the ceph-volume activation.
> >>
> >>
> >> --
> >> Hector Martin (hec...@marcansoft.com)
> >> Public Key: https://mrcn.st/pub
> >
> >
> >
> >
> > --
> > Mami Hayashida
> > Research Computing Associate
> >
> > Research Computing Infrastructure
> > University of Kentucky Information Technology Services
> > 301 Rose Street | 102 James F. Hardymon Building
> > Lexington, KY 40506-0495
> > mami.hayash...@uky.edu
> > (859)323-7521
>



-- 
*Mami Hayashida*

*Research Computing Associate*
Research Computing Infrastructure
University of Kentucky Information Technology Services
301 Rose Street | 102 James F. Hardymon Building
Lexington, KY 40506-0495
mami.hayash...@uky.edu
(859)323-7521
___
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com


Re: [ceph-users] [bug] mount.ceph man description is wrong

2018-11-07 Thread Ilya Dryomov
On Wed, Nov 7, 2018 at 2:25 PM  wrote:
>
> Hi!
>
> I use ceph version 13.2.1 (5533ecdc0fda920179d7ad84e0aa65a127b20d77) mimic 
> (stable) and i want to call `ls -ld` to read whole dir size in cephfs:
>
> When i man mount.ceph:
>
> rbytes Report the recursive size of the directory contents for st_size on 
> directories.  Default: on
>
> But without rbytes like below, "ls -ld" do not work:
>
> mount -t ceph 192.168.0.24:/ /mnt -o 
> name=admin,secretfile=/etc/ceph/admin.secret
>
> [root@test mnt]# ls -ld mongo
> drwxr-xr-x 4 polkitd root 29 11月  6 16:33 mongo
>
> Then i umoun and mount use below cmd, it works:
>
> mount -t ceph 192.168.0.24:/ /mnt -o 
> name=admin,secretfile=/etc/ceph/admin.secret,rbytes
>
>
> [root@test mnt]# ls -ld mongo
> drwxr-xr-x 4 polkitd root 392021518 11月  6 16:33 mongo
>
>
> So the description is wrong, right?

Yes, it's wrong.  Thanks for the PR, if you address the feedback we'll
merge it.

Ilya
___
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com


Re: [ceph-users] ceph 12.2.9 release

2018-11-07 Thread Dietmar Rieder
On 11/7/18 11:59 AM, Konstantin Shalygin wrote:
>> I wonder if there is any release announcement for ceph 12.2.9 that I missed.
>> I just found the new packages on download.ceph.com, is this an official
>> release?
> 
> This is because 12.2.9 have a several bugs. You should avoid to use this
> release and wait for 12.2.10

Thanks a lot!

~Dietmar


-- 
_
D i e t m a r  R i e d e r, Mag.Dr.
Innsbruck Medical University
Biocenter - Division for Bioinformatics
Email: dietmar.rie...@i-med.ac.at
Web:   http://www.icbi.at




signature.asc
Description: OpenPGP digital signature
___
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com


Re: [ceph-users] Packages for debian in Ceph repo

2018-11-07 Thread Kevin Olbrich
Am Mi., 7. Nov. 2018 um 07:40 Uhr schrieb Nicolas Huillard <
nhuill...@dolomede.fr>:

>
> > It lists rbd but still fails with the exact same error.
>
> I stumbled upon the exact same error, and since there was no answer
> anywhere, I figured it was a very simple problem: don't forget to
> install the qemu-block-extra package (Debian stretch) along with qemu-
> utils which contains the qemu-img command.
> This command is actually compiled with rbd support (hence the output
> above), but need this extra package to pull actual support-code and
> dependencies...
>

I have not been able to test this yet but this package was indeed missing
on my system!
Thank you for this hint!


> --
> Nicolas Huillard
>
___
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com


Re: [ceph-users] ceph 12.2.9 release

2018-11-07 Thread Matthew Vernon
On 07/11/2018 14:16, Marc Roos wrote:
>  
> 
> I don't see the problem. I am installing only the ceph updates when 
> others have done this and are running several weeks without problems. I 
> have noticed this 12.2.9 availability also, did not see any release 
> notes, so why install it? Especially with recent issues of other 
> releases.

Relevantly, if you want to upgrade to Luminous in many of the obvious
ways, you'll end up with 12.2.9.

Regards,

Matthew


-- 
 The Wellcome Sanger Institute is operated by Genome Research 
 Limited, a charity registered in England with number 1021457 and a 
 company registered in England with number 2742969, whose registered 
 office is 215 Euston Road, London, NW1 2BE. 
___
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com


Re: [ceph-users] ceph 12.2.9 release

2018-11-07 Thread Thomas White
One of the Ceph clusters my team manages is on 12.2.9 on a Proxmox
environment seems to be running fine with simple x3 replication and RBD.
Would be interesting to know what issues have been encountered so far. All
our OSDs are simple filestore at present and our path to 12.2.9 was 10.2.7
-> 10.2.10 -> 12.2.9 in the past 2 weeks with no issues.

That said, it is disappointing these packages are making their way into
repositories without the proper announcements for an LTS release, especially
given this is enterprise orientated software.

Thomas

-Original Message-
From: ceph-users  On Behalf Of Simon
Ironside
Sent: 07 November 2018 13:58
To: ceph-users@lists.ceph.com
Subject: Re: [ceph-users] ceph 12.2.9 release



On 07/11/2018 10:59, Konstantin Shalygin wrote:
>> I wonder if there is any release announcement for ceph 12.2.9 that I
missed.
>> I just found the new packages on download.ceph.com, is this an 
>> official release?
> 
> This is because 12.2.9 have a several bugs. You should avoid to use 
> this release and wait for 12.2.10

Argh! What's it doing in the repos then?? I've just upgraded to it!
What are the bugs? Is there a thread about them?

Simon
___
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com


___
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com


Re: [ceph-users] ceph 12.2.9 release

2018-11-07 Thread Marc Roos
 

I don't see the problem. I am installing only the ceph updates when 
others have done this and are running several weeks without problems. I 
have noticed this 12.2.9 availability also, did not see any release 
notes, so why install it? Especially with recent issues of other 
releases.

That being said, this ceph is often a major part of a production 
environment. And I am surprised how easily buggy releases are finding 
their way to the public. We had this 12.2.5 now 12.2.9 there was also 
something with upgrading to mimic. I would not expect this from a LTS 
version (or a redhat product).





-Original Message-
From: Matthew Vernon [mailto:m...@sanger.ac.uk] 
Sent: woensdag 7 november 2018 15:05
To: Konstantin Shalygin; ceph-users@lists.ceph.com
Subject: Re: [ceph-users] ceph 12.2.9 release

On 07/11/2018 10:59, Konstantin Shalygin wrote:
>> I wonder if there is any release announcement for ceph 12.2.9 that I 
missed.
>> I just found the new packages on download.ceph.com, is this an 
>> official release?
> 
> This is because 12.2.9 have a several bugs. You should avoid to use 
> this release and wait for 12.2.10

It seems that maybe something isn't quite right in the release 
infrastructure, then? The 12.2.8 packages are still available, but e.g.
debian-luminous's Packages file is pointing to the 12.2.9 (broken) 
packages.

Could the Debian/Ubuntu repos only have their releases updated (as 
opposed to what's in the pool) for safe/official releases? It's one 
thing letting people find pre-release things if they go looking, but 
ISTM that arranging that a mis-timed apt-get update/upgrade might 
install known-broken packages is ... unfortunate.

Regards,

Matthew


--
 The Wellcome Sanger Institute is operated by Genome Research  Limited, 
a charity registered in England with number 1021457 and a  company 
registered in England with number 2742969, whose registered  office is 
215 Euston Road, London, NW1 2BE. 
___
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com


___
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com


Re: [ceph-users] ceph 12.2.9 release

2018-11-07 Thread Matthew Vernon
On 07/11/2018 10:59, Konstantin Shalygin wrote:
>> I wonder if there is any release announcement for ceph 12.2.9 that I missed.
>> I just found the new packages on download.ceph.com, is this an official
>> release?
> 
> This is because 12.2.9 have a several bugs. You should avoid to use this
> release and wait for 12.2.10

It seems that maybe something isn't quite right in the release
infrastructure, then? The 12.2.8 packages are still available, but e.g.
debian-luminous's Packages file is pointing to the 12.2.9 (broken) packages.

Could the Debian/Ubuntu repos only have their releases updated (as
opposed to what's in the pool) for safe/official releases? It's one
thing letting people find pre-release things if they go looking, but
ISTM that arranging that a mis-timed apt-get update/upgrade might
install known-broken packages is ... unfortunate.

Regards,

Matthew


-- 
 The Wellcome Sanger Institute is operated by Genome Research 
 Limited, a charity registered in England with number 1021457 and a 
 company registered in England with number 2742969, whose registered 
 office is 215 Euston Road, London, NW1 2BE. 
___
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com


[ceph-users] osd reweight = pgs stuck unclean

2018-11-07 Thread John Petrini
Hello,

I've got a small development cluster that shows some strange behavior
that I'm trying to understand.

If I reduce the weight of an OSD using ceph osd reweight X 0.9 for
example Ceph will move data but recovery stalls and a few pg's remain
stuck unclean. If I reset them all back to 1 ceph goes healthy again.

This is running an older version 0.94.6.

Here's the OSD tree:

ID WEIGHT  TYPE NAMEUP/DOWN REWEIGHT PRIMARY-AFFINITY
-1 8.24982 root default
-2 2.74994 host node-10
11 0.54999 osd.11up  1.0  1.0
 3 0.54999 osd.3 up  1.0  1.0
12 0.54999 osd.12up  1.0  1.0
 0 0.54999 osd.0 up  1.0  1.0
 6 0.54999 osd.6 up  1.0  1.0
-3 2.74994 host node-11
 8 0.54999 osd.8 up  1.0  1.0
15 0.54999 osd.15up  1.0  1.0
 9 0.54999 osd.9 up  1.0  1.0
 2 0.54999 osd.2 up  1.0  1.0
13 0.54999 osd.13up  1.0  1.0
-4 2.74994 host node-3
 4 0.54999 osd.4 up  1.0  1.0
 5 0.54999 osd.5 up  1.0  1.0
 7 0.54999 osd.7 up  1.0  1.0
 1 0.54999 osd.1 up  1.0  1.0
10 0.54999 osd.10up  1.0  1.0
___
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com


Re: [ceph-users] ceph 12.2.9 release

2018-11-07 Thread Simon Ironside




On 07/11/2018 10:59, Konstantin Shalygin wrote:

I wonder if there is any release announcement for ceph 12.2.9 that I missed.
I just found the new packages on download.ceph.com, is this an official
release?


This is because 12.2.9 have a several bugs. You should avoid to use this 
release and wait for 12.2.10


Argh! What's it doing in the repos then?? I've just upgraded to it!
What are the bugs? Is there a thread about them?

Simon
___
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com


Re: [ceph-users] Move rdb based image from one pool to another

2018-11-07 Thread Uwe Sauter
I'm still on luminous (12.2.8). I'll have a look on the commands. Thanks.

Am 07.11.18 um 14:31 schrieb Jason Dillaman:
> With the Mimic release, you can use "rbd deep-copy" to transfer the
> images (and associated snapshots) to a new pool. Prior to that, you
> could use "rbd export-diff" / "rbd import-diff" to manually transfer
> an image and its associated snapshots.
> On Wed, Nov 7, 2018 at 7:11 AM Uwe Sauter  wrote:
>>
>> Hi,
>>
>> I have several VM images sitting in a Ceph pool which are snapshotted. Is 
>> there a way to move such images from one pool to another
>> and perserve the snapshots?
>>
>> Regards,
>>
>> Uwe
>> ___
>> ceph-users mailing list
>> ceph-users@lists.ceph.com
>> http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com
> 
> 
> 

___
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com


Re: [ceph-users] Move rdb based image from one pool to another

2018-11-07 Thread Jason Dillaman
With the Mimic release, you can use "rbd deep-copy" to transfer the
images (and associated snapshots) to a new pool. Prior to that, you
could use "rbd export-diff" / "rbd import-diff" to manually transfer
an image and its associated snapshots.
On Wed, Nov 7, 2018 at 7:11 AM Uwe Sauter  wrote:
>
> Hi,
>
> I have several VM images sitting in a Ceph pool which are snapshotted. Is 
> there a way to move such images from one pool to another
> and perserve the snapshots?
>
> Regards,
>
> Uwe
> ___
> ceph-users mailing list
> ceph-users@lists.ceph.com
> http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com



-- 
Jason
___
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com


Re: [ceph-users] scrub and deep scrub - not respecting end hour

2018-11-07 Thread Konstantin Shalygin

Or scrub still running until it finish the process on queue?


Yes, this queue thresholds. If u want to finish your scrubs at 11, 
schedule end to 10.




k

___
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com


[ceph-users] scrub and deep scrub - not respecting end hour

2018-11-07 Thread Luiz Gustavo Tonello
Hello guys,

Some days ago I created a time window for scrub execution in my OSDs, and
for 2 days it works perfectly.
Yesterday, I saw a deep scrubbing running out of this period and I though
that maybe osd_scrub_begin_hour and osd_scrub_end_hour are only for scrub
and not for deep scrub (am I right?).

But now, I have both process running out of this time, as follow below:

~# ceph daemon osd.1 config show|grep hour
"osd_scrub_begin_hour": "22",
"osd_scrub_end_hour": "11",


2018-11-07 11:38:31.916391 mon.0 [INF] pgmap v42784228: 904 pgs: 2
active+clean+scrubbing, 3 active+clean+scrubbing+deep, 899 active+clean;
45710 GB data, 135 TB used, 127 TB / 262 TB avail; 33469 kB/s rd, 334 kB/s
wr, 3078 op/s

There is a know issue with this flags? or something that I would
investigate to know more why it happens?
Or scrub still running until it finish the process on queue?

PS.: I'm running CEPH Jewel.
-- 
Luiz Gustavo P Tonello.
___
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com


[ceph-users] Move rdb based image from one pool to another

2018-11-07 Thread Uwe Sauter
Hi,

I have several VM images sitting in a Ceph pool which are snapshotted. Is there 
a way to move such images from one pool to another
and perserve the snapshots?

Regards,

Uwe
___
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com


Re: [ceph-users] ceph 12.2.9 release

2018-11-07 Thread Konstantin Shalygin

I wonder if there is any release announcement for ceph 12.2.9 that I missed.
I just found the new packages on download.ceph.com, is this an official
release?


This is because 12.2.9 have a several bugs. You should avoid to use this 
release and wait for 12.2.10




k

___
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com


Re: [ceph-users] cephfs quota limit

2018-11-07 Thread Zhenshi Zhou
Hi Jan,

Thanks for the explanation. I think I would deploy a mimic cluster and test
it
on a client with kernel version above 4.17. Then I may do some planning on
upgrading my current cluster if everything goes fine :)

Thanks

Jan Fajerski  于2018年11月7日周三 下午4:50写道:

> On Tue, Nov 06, 2018 at 08:57:48PM +0800, Zhenshi Zhou wrote:
> >   Hi,
> >   I'm wondering whether cephfs have quota limit options.
> >   I use kernel client and ceph version is 12.2.8.
> >   Thanks
> CephFS has quota support, see
> http://docs.ceph.com/docs/luminous/cephfs/quota/.
> The kernel has recently gained CephFS quota support too (before only the
> fuse
> client supported it) so it depends on your distro and kernel version.
>
> >___
> >ceph-users mailing list
> >ceph-users@lists.ceph.com
> >http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com
>
>
> --
> Jan Fajerski
> Engineer Enterprise Storage
> SUSE Linux GmbH, GF: Felix Imendörffer, Jane Smithard, Graham Norton,
> HRB 21284 (AG Nürnberg)
> ___
> ceph-users mailing list
> ceph-users@lists.ceph.com
> http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com
>
___
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com


Re: [ceph-users] cephfs quota limit

2018-11-07 Thread Luis Henriques
Jan Fajerski  writes:

> On Tue, Nov 06, 2018 at 08:57:48PM +0800, Zhenshi Zhou wrote:
>>   Hi,
>>   I'm wondering whether cephfs have quota limit options.
>>   I use kernel client and ceph version is 12.2.8.
>>   Thanks
> CephFS has quota support, see 
> http://docs.ceph.com/docs/luminous/cephfs/quota/.
> The kernel has recently gained CephFS quota support too (before only the fuse
> client supported it) so it depends on your distro and kernel version.

Correct, in order to have support for quotas using a kernel client
you'll need to meet 2 requirements:

 - kernel >= 4.17 (or have the relevant patches backported)
 - ceph version >= mimic

i.e. a kernel client 4.17 won't support quotas on a luminous-based
cluster.  The quota documentation for mimic states this too:

  http://docs.ceph.com/docs/mimic/cephfs/quota/

-- 
Luis
___
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com


[ceph-users] ceph 12.2.9 release

2018-11-07 Thread Dietmar Rieder
Hi,

I wonder if there is any release announcement for ceph 12.2.9 that I missed.
I just found the new packages on download.ceph.com, is this an official
release?

~ Dietmar

-- 
_
D i e t m a r  R i e d e r, Mag.Dr.
Innsbruck Medical University
Biocenter - Division for Bioinformatics
Web:   http://www.icbi.at




signature.asc
Description: OpenPGP digital signature
___
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com


Re: [ceph-users] librbd::image::CreateRequest: 0x55e4fc8bf620 handle_create_id_object: error creating RBD id object

2018-11-07 Thread Dengke Du

Thanks!

This problem fixed by your advice:

1. add 3 osd service

2. link  libcls_rbd.so to libcls_rbd.so.1.0.0, because I build ceph from 
source code according to Mykola's advice.


On 2018/11/6 下午4:33, Ashley Merrick wrote:

Is that correct or have you added more than 1 OSD?

CEPH is never going to work or be able to bring up a pool with only 
one OSD, if you really do have more than OSD and have added them 
correctly then there really is something up with your CEPH setup / 
config and may be worth starting from scratch.


On Tue, Nov 6, 2018 at 4:31 PM Dengke Du > wrote:



On 2018/11/6 下午4:29, Ashley Merrick wrote:

What does

"ceph osd tree" show ?

root@node1:~# ceph osd tree
ID CLASS WEIGHT  TYPE NAME  STATUS REWEIGHT PRI-AFF
-2 0 host 0
-1   1.0 root default
-3   1.0 host node1
 0   hdd 1.0 osd.0    down    0 1.0


On Tue, Nov 6, 2018 at 4:27 PM Dengke Du mailto:dengke...@windriver.com>> wrote:


On 2018/11/6 下午4:24, Ashley Merrick wrote:

If I am reading your ceph -s output correctly you only have
1 OSD, and 0 pool's created.

So your be unable to create a RBD till you atleast have a
pool setup and configured to create the RBD within.

root@node1:~# ceph osd lspools
1 libvirt-pool
2 test-pool


I create pools using:

ceph osd pool create libvirt-pool 128 128

following:

http://docs.ceph.com/docs/master/rbd/libvirt/



On Tue, Nov 6, 2018 at 4:21 PM Dengke Du
mailto:dengke...@windriver.com>>
wrote:


On 2018/11/6 下午4:16, Mykola Golub wrote:
> On Tue, Nov 06, 2018 at 09:45:01AM +0800, Dengke Du wrote:
>
>> I reconfigure the osd service from start, the journal
was:
> I am not quite sure I understand what you mean here.
>
>>

--
>>
>> -- Unit ceph-osd@0.service
 has finished starting up.
>> --
>> -- The start-up result is RESULT.
>> Nov 05 18:02:36 node1 ceph-osd[4487]: 2018-11-05
18:02:36.915 7f6a27204e80
>> -1 Public network was set, but cluster network was
not set
>> Nov 05 18:02:36 node1 ceph-osd[4487]: 2018-11-05
18:02:36.915 7f6a27204e80
>> -1 Using public network also for cluster network
>> Nov 05 18:02:36 node1 ceph-osd[4487]: starting osd.0
at - osd_data
>> /var/lib/ceph/osd/ceph-0 /var/lib/ceph/osd/ceph-0/journal
>> Nov 05 18:02:37 node1 ceph-osd[4487]: 2018-11-05
18:02:37.365 7f6a27204e80
>> -1 journal FileJournal::_open: disabling aio for
non-block journal.  Use
>> journal_force_aio to force use of a>
>> Nov 05 18:02:37 node1 ceph-osd[4487]: 2018-11-05
18:02:37.414 7f6a27204e80
>> -1 journal do_read_entry(6930432): bad header magic
>> Nov 05 18:02:37 node1 ceph-osd[4487]: 2018-11-05
18:02:37.729 7f6a27204e80
>> -1 osd.0 21 log_to_monitors {default=true}
>> Nov 05 18:02:47 node1 nagios[3584]: Warning: Return
code of 13 for check of
>> host 'localhost' was out of bounds.
>>
>>

--
> Could you please post the full ceph-osd log somewhere?
/var/log/ceph/ceph-osd.0.log

I don't have the file /var/log/ceph/ceph-osd.o.log

root@node1:~# systemctl status ceph-osd@0
● ceph-osd@0.service  - Ceph
object storage daemon osd.0
    Loaded: loaded
(/lib/systemd/system/ceph-osd@.service; disabled;
vendor preset: enabled)
    Active: active (running) since Mon 2018-11-05
18:02:36 UTC; 6h ago
  Main PID: 4487 (ceph-osd)
 Tasks: 64
    Memory: 27.0M
    CGroup:
/system.slice/system-ceph\x2dosd.slice/ceph-osd@0.service

    └─4487 /usr/bin/ceph-osd -f --cluster ceph
--id 0

Nov 05 18:02:36 node1 systemd[1]: Starting Ceph object
storage daemon
osd.0...
Nov 05 18:02:36 node1 systemd[1]: Started Ceph object
storage daemon osd.0.
Nov 05 18:02:36 node1 ceph-osd[4487]: 2018-11-05
18:02:36.915

[ceph-users] [bug] mount.ceph man description is wrong

2018-11-07 Thread xiang . dai
Hi! 

I use ceph version 13.2.1 (5533ecdc0fda920179d7ad84e0aa65a127b20d77) mimic 
(stable) and i want to call `ls -ld` to read whole dir size in cephfs: 

When i man mount.ceph: 

rbytes Report the recursive size of the directory contents for st_size on 
directories. Default: on 

But without rbytes like below, "ls -ld" do not work: 

mount -t ceph 192.168.0.24:/ /mnt -o 
name=admin,secretfile=/etc/ceph/admin.secret 

[root@test mnt]# ls -ld mongo 
drwxr-xr-x 4 polkitd root 29 11月 6 16:33 mongo 

Then i umoun and mount use below cmd, it works: 

mount -t ceph 192.168.0.24:/ /mnt -o 
name=admin,secretfile=/etc/ceph/admin.secret,rbytes 


[root@test mnt]# ls -ld mongo 
drwxr-xr-x 4 polkitd root 392021518 11月 6 16:33 mongo 


So the description is wrong, right? 

Thanks 
___
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com


Re: [ceph-users] cephfs quota limit

2018-11-07 Thread Jan Fajerski

On Tue, Nov 06, 2018 at 08:57:48PM +0800, Zhenshi Zhou wrote:

  Hi,
  I'm wondering whether cephfs have quota limit options.
  I use kernel client and ceph version is 12.2.8.
  Thanks

CephFS has quota support, see http://docs.ceph.com/docs/luminous/cephfs/quota/.
The kernel has recently gained CephFS quota support too (before only the fuse 
client supported it) so it depends on your distro and kernel version.



___
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com



--
Jan Fajerski
Engineer Enterprise Storage
SUSE Linux GmbH, GF: Felix Imendörffer, Jane Smithard, Graham Norton,
HRB 21284 (AG Nürnberg)
___
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com