Re: [ceph-users] ceph-volume failed after replacing disk

2019-07-05 Thread Erik McCormick
If you create the OSD without specifying an ID it will grab the lowest
available one. Unless you have other gaps somewhere, that ID would probably
be the one you just removed.
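
For reference, the flow that should let you keep the ID (a sketch; the
device paths and OSD number below are just the ones from this thread) is:

# mark the old OSD destroyed; its ID stays allocated so it can be reused
ceph osd destroy 71 --yes-i-really-mean-it
# rebuild it on the replacement disk, reusing the same ID
ceph-volume lvm create --bluestore --data /dev/data/lv01 \
    --osd-id 71 --block.db /dev/db/lv01

If the crush and auth entries for the OSD were already deleted, as happened
here, the ID is effectively gone and ceph-volume will refuse to reuse it.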

-Erik

On Fri, Jul 5, 2019, 9:19 AM Paul Emmerich  wrote:

>
> On Fri, Jul 5, 2019 at 2:17 PM Alfredo Deza  wrote:
>
>> On Fri, Jul 5, 2019 at 6:23 AM ST Wong (ITSC) 
>> wrote:
>> >
>> > Hi,
>> >
>> >
>> >
>> > I intended to run just destroy and re-use the ID as stated in the
>> manual, but it doesn't seem to work.
>> >
>> > It seems I'm unable to re-use the ID?
>>
>> The OSD replacement guide does not mention anything about crush and
>> auth commands. I believe you are now in a situation where the ID is no
>> longer able to be re-used, and ceph-volume
>> will not create one for you when specifying it in the CLI.
>>
>> I don't know why there is so much attachment to these ID numbers, why
>> is it desirable to have that 71 number back again?
>>
>
> it avoids unnecessary rebalances
>
>
>> >
>> >
>> >
>> > Thanks.
>> >
>> > /stwong
>> >
>> >
>> >
>> >
>> >
>> > From: Paul Emmerich 
>> > Sent: Friday, July 5, 2019 5:54 PM
>> > To: ST Wong (ITSC) 
>> > Cc: Eugen Block ; ceph-users@lists.ceph.com
>> > Subject: Re: [ceph-users] ceph-volume failed after replacing disk
>> >
>> >
>> >
>> >
>> >
>> > On Fri, Jul 5, 2019 at 11:25 AM ST Wong (ITSC) 
>> wrote:
>> >
>> > Hi,
>> >
>> > Yes, I run the commands before:
>> >
>> > # ceph osd crush remove osd.71
>> > device 'osd.71' does not appear in the crush map
>> > # ceph auth del osd.71
>> > entity osd.71 does not exist
>> >
>> >
>> >
>> > which is probably the reason why you couldn't recycle the OSD ID.
>> >
>> >
>> >
>> > Either run just destroy and re-use the ID or run purge and not re-use
>> the ID.
>> >
>> > Manually deleting auth and crush entries is no longer needed since
>> purge was introduced.
>> >
>> >
>> >
>> >
>> >
>> > Paul
>> >
>> >
>> > --
>> > Paul Emmerich
>> >
>> > Looking for help with your Ceph cluster? Contact us at https://croit.io
>> >
>> > croit GmbH
>> > Freseniusstr. 31h
>> > 81247 München
>> > www.croit.io
>> > Tel: +49 89 1896585 90
>> >
>> >
>> >
>> >
>> > Thanks.
>> > /stwong
>> >
>> > -Original Message-
>> > From: ceph-users  On Behalf Of
>> Eugen Block
>> > Sent: Friday, July 5, 2019 4:54 PM
>> > To: ceph-users@lists.ceph.com
>> > Subject: Re: [ceph-users] ceph-volume failed after replacing disk
>> >
>> > Hi,
>> >
>> > did you also remove that OSD from crush and also from auth before
>> recreating it?
>> >
>> > ceph osd crush remove osd.71
>> > ceph auth del osd.71
>> >
>> > Regards,
>> > Eugen
>> >
>> >
>> > Zitat von "ST Wong (ITSC)" :
>> >
>> > > Hi all,
>> > >
>> > > We replaced a faulty disk out of N OSDs and tried to follow the steps
>> > > according to "Replacing an OSD" in
>> > > http://docs.ceph.com/docs/nautilus/rados/operations/add-or-rm-osds/,
>> > > but got an error:
>> > >
>> > > # ceph osd destroy 71 --yes-i-really-mean-it
>> > > # ceph-volume lvm create --bluestore --data /dev/data/lv01 --osd-id 71 --block.db /dev/db/lv01
>> > > Running command: /bin/ceph-authtool --gen-print-key
>> > > Running command: /bin/ceph --cluster ceph --name client.bootstrap-osd --keyring
>> > > /var/lib/ceph/bootstrap-osd/ceph.keyring osd tree -f json
>> > > -->  RuntimeError: The osd ID 71 is already in use or does not exist.
>> > >
>> > > ceph -s still shows  N OSDS.   I then remove with "ceph osd rm 71".
>> > >  Now "ceph -s" shows N-1 OSDS and id 71 doesn't appear in "ceph osd
>> > > ls".
>> > >
>> > > However, repeating the ceph-volume command still gives the same error.
>> > > We're running Ceph 14.2.1. I must have missed some steps. Would
>> > > anyone please help? Thanks a lot.
>> > >
>> > > Rgds,
>> > > /stwong
>> >
>> >
>> >
>> > ___
>> > ceph-users mailing list
>> > ceph-users@lists.ceph.com
>> > http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com
>> > ___
>> > ceph-users mailing list
>> > ceph-users@lists.ceph.com
>> > http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com
>> >
>> > ___
>> > ceph-users mailing list
>> > ceph-users@lists.ceph.com
>> > http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com
>>
> ___
> ceph-users mailing list
> ceph-users@lists.ceph.com
> http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com
>
___
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com


Re: [ceph-users] Ceph-volume ignores cluster name from ceph.conf

2019-06-28 Thread Erik McCormick
On Fri, Jun 28, 2019, 10:05 AM Alfredo Deza  wrote:

> On Fri, Jun 28, 2019 at 7:53 AM Stolte, Felix 
> wrote:
> >
> > Thanks for the update Alfredo. What steps need to be done to rename my
> cluster back to "ceph"?
>
> That is a tough one, the ramifications of a custom cluster name are
> wild - it touches everything. I am not sure there is a step-by-step
> guide on how to do this, I would personally recommend re-doing the
> cluster (knowing well this might not be possible in certain cases)
> >
> > The clustername is in several folder- and filenames etc
> >
> > Regards
> > Felix
> >
>
Actually renaming is not really complicated at all. I did it manually
because paranoid, but you could easily enough script or ansiblize it.

Sage suggested a process to keep here:

http://lists.ceph.com/pipermail/ceph-users-ceph.com/2017-June/018521.html

And I finally reported back here:

http://lists.ceph.com/pipermail/ceph-users-ceph.com/2017-November/022202.html
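
In rough outline it came down to something like this (a sketch from memory;
check the threads above and your own layout before touching anything, do it
with the cluster healthy, and go one node at a time). The names assume the
"ceph_stag" cluster from this thread and a systemd/RPM-based install:

# stop the Ceph daemons on the node, then rename the config file
mv /etc/ceph/ceph_stag.conf /etc/ceph/ceph.conf
# drop the custom name so the systemd units default back to "ceph"
sed -i 's/^CLUSTER=.*/CLUSTER=ceph/' /etc/sysconfig/ceph
# rename the data directories that embed the cluster name, e.g. for an OSD:
mv /var/lib/ceph/osd/ceph_stag-12 /var/lib/ceph/osd/ceph-12
# (same idea for the mon/mgr/mds directories), then restart the daemons and
# confirm "ceph -s" is healthy before moving to the next node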

Cheers,
Erik

> Forschungszentrum Juelich GmbH
> > 52425 Juelich
> > Sitz der Gesellschaft: Juelich
> > Eingetragen im Handelsregister des Amtsgerichts Dueren Nr. HR B 3498
> > Vorsitzender des Aufsichtsrats: MinDir Dr. Karl Eugen Huthmacher
> > Geschaeftsfuehrung: Prof. Dr.-Ing. Wolfgang Marquardt (Vorsitzender),
> > Karsten Beneke (stellv. Vorsitzender), Prof. Dr.-Ing. Harald Bolt,
> > Prof. Dr. Sebastian M. Schmidt
> >
> -
> >
> -
> >
> >
> > Am 27.06.19, 15:09 schrieb "Alfredo Deza" :
> >
> > Although ceph-volume does a best-effort to support custom cluster
> > names, the Ceph project does not support custom cluster names anymore
> > even though you can still see settings/options that will allow you to
> > set it.
> >
> > For reference see:
> https://bugzilla.redhat.com/show_bug.cgi?id=1459861
> >
> > On Thu, Jun 27, 2019 at 7:59 AM Stolte, Felix <
> f.sto...@fz-juelich.de> wrote:
> > >
> > > Hi folks,
> > >
> > > I have a nautilus 14.2.1 cluster with a non-default cluster name
> (ceph_stag instead of ceph). I set “cluster = ceph_stag” in
> /etc/ceph/ceph_stag.conf.
> > >
> > > ceph-volume is using the correct config file but does not use the
> specified clustername. Did I hit a bug or do I need to define the
> clustername elsewere?
> > >
> > > Regards
> > > Felix
> > > IT-Services
> > > Telefon 02461 61-9243
> > > E-Mail: f.sto...@fz-juelich.de
> > >
> -
> > >
> -
> > > Forschungszentrum Juelich GmbH
> > > 52425 Juelich
> > > Sitz der Gesellschaft: Juelich
> > > Eingetragen im Handelsregister des Amtsgerichts Dueren Nr. HR B
> 3498
> > > Vorsitzender des Aufsichtsrats: MinDir Dr. Karl Eugen Huthmacher
> > > Geschaeftsfuehrung: Prof. Dr.-Ing. Wolfgang Marquardt
> (Vorsitzender),
> > > Karsten Beneke (stellv. Vorsitzender), Prof. Dr.-Ing. Harald Bolt,
> > > Prof. Dr. Sebastian M. Schmidt
> > >
> -
> > >
> -
> > >
> > >
> > > ___
> > > ceph-users mailing list
> > > ceph-users@lists.ceph.com
> > > http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com
> >
> ___
> ceph-users mailing list
> ceph-users@lists.ceph.com
> http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com
>
___
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com


Re: [ceph-users] IMPORTANT : NEED HELP : Low IOPS on hdd : MAX AVAIL Draining fast

2019-04-27 Thread Erik McCormick
On Sat, Apr 27, 2019, 3:49 PM Nikhil R  wrote:

> We have baremetal nodes 256GB RAM, 36core CPU
> We are on ceph jewel 10.2.9 with leveldb
> The osd’s and journals are on the same hdd.
> We have 1 backfill_max_active, 1 recovery_max_active and 1
> recovery_op_priority
> The OSD crashes and restarts once a PG is backfilled and the next PG tries
> to backfill. This is when we look at iostat and the disk is utilised up to 100%.
>

I would set noout to prevent excess movement in the event of OSD flapping,
and disable scrubbing and deep scrubbing until your backfilling has
completed. I would also bring the new OSDs online a few at a time rather
than all 25 at once if you add more servers.
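
Concretely, those flags are just the following, to be cleared again once the
backfilling finishes:

ceph osd set noout
ceph osd set noscrub
ceph osd set nodeep-scrub
# ...and after recovery completes:
ceph osd unset noout
ceph osd unset noscrub
ceph osd unset nodeep-scrub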


> Appreciate your help David
>
> On Sun, 28 Apr 2019 at 00:46, David C  wrote:
>
>>
>>
>> On Sat, 27 Apr 2019, 18:50 Nikhil R,  wrote:
>>
>>> Guys,
>>> We now have a total of 105 osd’s on 5 baremetal nodes each hosting 21
>>> osd’s on HDD which are 7Tb with journals on HDD too. Each journal is about
>>> 5GB
>>>
>>
>> This would imply you've got a separate HDD partition for journals. I
>> don't think there's any value in that, and it would probably be detrimental
>> to performance.
>>
>>>
>>> We expanded our cluster last week and added 1 more node with 21 HDD and
>>> journals on same disk.
>>> Our client I/O is too heavy and we are not able to backfill even 1
>>> thread during peak hours. If we backfill during peak hours, OSDs
>>> crash, causing undersized PGs, and if we have another OSD crash we won't
>>> be able to use our cluster due to undersized and recovering PGs. During
>>> non-peak hours we can backfill just 8-10 PGs.
>>> Due to this our MAX AVAIL is draining out very fast.
>>>
>>
>> How much ram have you got in your nodes? In my experience that's a common
>> reason for crashing OSDs during recovery ops
>>
>> What does your recovery and backfill tuning look like?
>>
>>
>>
>>> We are thinking of adding 2 more baremetal nodes with 21 x 7TB OSDs on
>>> HDD and adding 50GB SSD journals for these.
>>> We aim to backfill from the 105 OSDs a bit faster and expect the backfill
>>> writes coming to these OSDs to be faster.
>>>
>>
>> Ssd journals would certainly help, just be sure it's a model that
>> performs well with Ceph
>>
>>>
>>> Is this a good viable idea?
>>> Thoughts please?
>>>
>>
>> I'd recommend sharing more detail e.g full spec of the nodes, Ceph
>> version etc.
>>
>>>
>>> -Nikhil
>>> ___
>>> ceph-users mailing list
>>> ceph-users@lists.ceph.com
>>> http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com
>>>
>> --
> Sent from my iPhone
> ___
> ceph-users mailing list
> ceph-users@lists.ceph.com
> http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com
>
___
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com


Re: [ceph-users] Glance client and RBD export checksum mismatch

2019-04-11 Thread Erik McCormick
On Thu, Apr 11, 2019, 8:53 AM Jason Dillaman  wrote:

> On Thu, Apr 11, 2019 at 8:49 AM Erik McCormick
>  wrote:
> >
> >
> >
> > On Thu, Apr 11, 2019, 8:39 AM Erik McCormick 
> wrote:
> >>
> >>
> >>
> >> On Thu, Apr 11, 2019, 12:07 AM Brayan Perera 
> wrote:
> >>>
> >>> Dear Jason,
> >>>
> >>>
> >>> Thanks for the reply.
> >>>
> >>> We are using python 2.7.5
> >>>
> >>> Yes. script is based on openstack code.
> >>>
> >>> As suggested, we have tried chunk_size 32 and 64, and both giving same
> >>> incorrect checksum value.
> >>
> >>
> >> The value of rbd_store_chunk_size in glance is expressed in MB and then
> converted to mb. I think the default is 8, so you would want 8192 if you're
> trying to match what the image was uploaded with.
> >
> >
> > Sorry, that should have been "...converted to KB."
>
> Wouldn't it be converted to bytes since all rbd API methods are in bytes?
> [1]
>

Well, yeah, in the end that's true. I recall older versions just passed a
KB number, but now it's

self.chunk_size = CONF.rbd_store_chunk_size * 1024 * 1024

My main point, though, was just that glance defaults to 8 MB chunks, which
is much larger than what the OP was using.
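
If you want to sanity-check things outside the script, the comparison the
original poster described can be done straight from the CLI (a sketch; the
pool and snapshot names are the ones used in the test script quoted in this
thread):

# stream the image snapshot to stdout and checksum it; this should match
# the checksum glance stored for the image
rbd export images/ffed4088-74e1-4f22-86cb-35e7e97c377c@snap - | md5sum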


> >>
> >>>
> >>> We tried to copy same image in different pool and resulted same
> >>> incorrect checksum.
> >>>
> >>>
> >>> Thanks & Regards,
> >>> Brayan
> >>>
> >>> On Wed, Apr 10, 2019 at 6:21 PM Jason Dillaman 
> wrote:
> >>> >
> >>> > On Wed, Apr 10, 2019 at 1:46 AM Brayan Perera <
> brayan.per...@gmail.com> wrote:
> >>> > >
> >>> > > Dear All,
> >>> > >
> >>> > > Ceph Version : 12.2.5-2.ge988fb6.el7
> >>> > >
> >>> > > We are facing an issue on glance which have backend set to ceph,
> when
> >>> > > we try to create an instance or volume out of an image, it throws
> >>> > > checksum error.
> >>> > > When we use rbd export and use md5sum, value is matching with
> glance checksum.
> >>> > >
> >>> > > When we use following script, it provides same error checksum as
> glance.
> >>> >
> >>> > What version of Python are you using?
> >>> >
> >>> > > We have used below images for testing.
> >>> > > 1. Failing image (checksum mismatch):
> ffed4088-74e1-4f22-86cb-35e7e97c377c
> >>> > > 2. Passing image (checksum identical):
> c048f0f9-973d-4285-9397-939251c80a84
> >>> > >
> >>> > > Output from storage node:
> >>> > >
> >>> > > 1. Failing image: ffed4088-74e1-4f22-86cb-35e7e97c377c
> >>> > > checksum from glance database: 34da2198ec7941174349712c6d2096d8
> >>> > > [root@storage01moc ~]# python test_rbd_format.py
> >>> > > ffed4088-74e1-4f22-86cb-35e7e97c377c admin
> >>> > > Image size: 681181184
> >>> > > checksum from ceph: b82d85ae5160a7b74f52be6b5871f596
> >>> > > Remarks: checksum is different
> >>> > >
> >>> > > 2. Passing image: c048f0f9-973d-4285-9397-939251c80a84
> >>> > > checksum from glance database: 4f977f748c9ac2989cff32732ef740ed
> >>> > > [root@storage01moc ~]# python test_rbd_format.py
> >>> > > c048f0f9-973d-4285-9397-939251c80a84 admin
> >>> > > Image size: 1411121152
> >>> > > checksum from ceph: 4f977f748c9ac2989cff32732ef740ed
> >>> > > Remarks: checksum is identical
> >>> > >
> >>> > > Wondering whether this issue is from ceph python libs or from ceph
> itself.
> >>> > >
> >>> > > Please note that we do not have ceph pool tiering configured.
> >>> > >
> >>> > > Please let us know whether anyone faced similar issue and any
> fixes for this.
> >>> > >
> >>> > > test_rbd_format.py
> >>> > > ===
> >>> > > import rados, sys, rbd
> >>> > >
> >>> > > image_id = sys.argv[1]
> >>> > > try:
> >>> > > rados_id = sys.argv[2]
> >>> > > except:
> &

Re: [ceph-users] Glance client and RBD export checksum mismatch

2019-04-11 Thread Erik McCormick
On Thu, Apr 11, 2019, 8:39 AM Erik McCormick 
wrote:

>
>
> On Thu, Apr 11, 2019, 12:07 AM Brayan Perera 
> wrote:
>
>> Dear Jason,
>>
>>
>> Thanks for the reply.
>>
>> We are using python 2.7.5
>>
>> Yes. script is based on openstack code.
>>
>> As suggested, we have tried chunk_size 32 and 64, and both giving same
>> incorrect checksum value.
>>
>
> The value of rbd_store_chunk_size in glance is expressed in MB and then
> converted to mb. I think the default is 8, so you would want 8192 if you're
> trying to match what the image was uploaded with.
>

Sorry, that should have been "...converted to KB."


>
>> We tried to copy same image in different pool and resulted same
>> incorrect checksum.
>>
>>
>> Thanks & Regards,
>> Brayan
>>
>> On Wed, Apr 10, 2019 at 6:21 PM Jason Dillaman 
>> wrote:
>> >
>> > On Wed, Apr 10, 2019 at 1:46 AM Brayan Perera 
>> wrote:
>> > >
>> > > Dear All,
>> > >
>> > > Ceph Version : 12.2.5-2.ge988fb6.el7
>> > >
>> > > We are facing an issue on glance which have backend set to ceph, when
>> > > we try to create an instance or volume out of an image, it throws
>> > > checksum error.
>> > > When we use rbd export and use md5sum, value is matching with glance
>> checksum.
>> > >
>> > > When we use following script, it provides same error checksum as
>> glance.
>> >
>> > What version of Python are you using?
>> >
>> > > We have used below images for testing.
>> > > 1. Failing image (checksum mismatch):
>> ffed4088-74e1-4f22-86cb-35e7e97c377c
>> > > 2. Passing image (checksum identical):
>> c048f0f9-973d-4285-9397-939251c80a84
>> > >
>> > > Output from storage node:
>> > >
>> > > 1. Failing image: ffed4088-74e1-4f22-86cb-35e7e97c377c
>> > > checksum from glance database: 34da2198ec7941174349712c6d2096d8
>> > > [root@storage01moc ~]# python test_rbd_format.py
>> > > ffed4088-74e1-4f22-86cb-35e7e97c377c admin
>> > > Image size: 681181184
>> > > checksum from ceph: b82d85ae5160a7b74f52be6b5871f596
>> > > Remarks: checksum is different
>> > >
>> > > 2. Passing image: c048f0f9-973d-4285-9397-939251c80a84
>> > > checksum from glance database: 4f977f748c9ac2989cff32732ef740ed
>> > > [root@storage01moc ~]# python test_rbd_format.py
>> > > c048f0f9-973d-4285-9397-939251c80a84 admin
>> > > Image size: 1411121152
>> > > checksum from ceph: 4f977f748c9ac2989cff32732ef740ed
>> > > Remarks: checksum is identical
>> > >
>> > > Wondering whether this issue is from ceph python libs or from ceph
>> itself.
>> > >
>> > > Please note that we do not have ceph pool tiering configured.
>> > >
>> > > Please let us know whether anyone faced similar issue and any fixes
>> for this.
>> > >
>> > > test_rbd_format.py
>> > > ===
>> > > import rados, sys, rbd
>> > >
>> > > image_id = sys.argv[1]
>> > > try:
>> > > rados_id = sys.argv[2]
>> > > except:
>> > > rados_id = 'openstack'
>> > >
>> > >
>> > > class ImageIterator(object):
>> > > """
>> > > Reads data from an RBD image, one chunk at a time.
>> > > """
>> > >
>> > > def __init__(self, conn, pool, name, snapshot, store,
>> chunk_size='8'):
>> >
>> > Am I correct in assuming this was adapted from OpenStack code? That
>> > 8-byte "chunk" is going to be terribly inefficient to compute a CRC.
>> > Not that it should matter, but does it still fail if you increase this
>> > to 32KiB or 64KiB?
>> >
>> > > self.pool = pool
>> > > self.conn = conn
>> > > self.name = name
>> > > self.snapshot = snapshot
>> > > self.chunk_size = chunk_size
>> > > self.store = store
>> > >
>> > > def __iter__(self):
>> > > try:
>> > > with conn.open_ioctx(self.pool) as ioctx:
>> > > with rbd.Image(ioctx, self.name,
>> > >  

Re: [ceph-users] Glance client and RBD export checksum mismatch

2019-04-11 Thread Erik McCormick
On Thu, Apr 11, 2019, 12:07 AM Brayan Perera 
wrote:

> Dear Jason,
>
>
> Thanks for the reply.
>
> We are using python 2.7.5
>
> Yes. script is based on openstack code.
>
> As suggested, we have tried chunk_size 32 and 64, and both giving same
> incorrect checksum value.
>

The value of rbd_store_chunk_size in glance is expressed in MB and then
converted to mb. I think the default is 8, so you would want 8192 if you're
trying to match what the image was uploaded with.


> We tried to copy same image in different pool and resulted same
> incorrect checksum.
>
>
> Thanks & Regards,
> Brayan
>
> On Wed, Apr 10, 2019 at 6:21 PM Jason Dillaman 
> wrote:
> >
> > On Wed, Apr 10, 2019 at 1:46 AM Brayan Perera 
> wrote:
> > >
> > > Dear All,
> > >
> > > Ceph Version : 12.2.5-2.ge988fb6.el7
> > >
> > > We are facing an issue on glance which have backend set to ceph, when
> > > we try to create an instance or volume out of an image, it throws
> > > checksum error.
> > > When we use rbd export and use md5sum, value is matching with glance
> checksum.
> > >
> > > When we use following script, it provides same error checksum as
> glance.
> >
> > What version of Python are you using?
> >
> > > We have used below images for testing.
> > > 1. Failing image (checksum mismatch):
> ffed4088-74e1-4f22-86cb-35e7e97c377c
> > > 2. Passing image (checksum identical):
> c048f0f9-973d-4285-9397-939251c80a84
> > >
> > > Output from storage node:
> > >
> > > 1. Failing image: ffed4088-74e1-4f22-86cb-35e7e97c377c
> > > checksum from glance database: 34da2198ec7941174349712c6d2096d8
> > > [root@storage01moc ~]# python test_rbd_format.py
> > > ffed4088-74e1-4f22-86cb-35e7e97c377c admin
> > > Image size: 681181184
> > > checksum from ceph: b82d85ae5160a7b74f52be6b5871f596
> > > Remarks: checksum is different
> > >
> > > 2. Passing image: c048f0f9-973d-4285-9397-939251c80a84
> > > checksum from glance database: 4f977f748c9ac2989cff32732ef740ed
> > > [root@storage01moc ~]# python test_rbd_format.py
> > > c048f0f9-973d-4285-9397-939251c80a84 admin
> > > Image size: 1411121152
> > > checksum from ceph: 4f977f748c9ac2989cff32732ef740ed
> > > Remarks: checksum is identical
> > >
> > > Wondering whether this issue is from ceph python libs or from ceph
> itself.
> > >
> > > Please note that we do not have ceph pool tiering configured.
> > >
> > > Please let us know whether anyone faced similar issue and any fixes
> for this.
> > >
> > > test_rbd_format.py
> > > ===
> > > import rados, sys, rbd
> > >
> > > image_id = sys.argv[1]
> > > try:
> > > rados_id = sys.argv[2]
> > > except:
> > > rados_id = 'openstack'
> > >
> > >
> > > class ImageIterator(object):
> > > """
> > > Reads data from an RBD image, one chunk at a time.
> > > """
> > >
> > > def __init__(self, conn, pool, name, snapshot, store,
> chunk_size='8'):
> >
> > Am I correct in assuming this was adapted from OpenStack code? That
> > 8-byte "chunk" is going to be terribly inefficient to compute a CRC.
> > Not that it should matter, but does it still fail if you increase this
> > to 32KiB or 64KiB?
> >
> > > self.pool = pool
> > > self.conn = conn
> > > self.name = name
> > > self.snapshot = snapshot
> > > self.chunk_size = chunk_size
> > > self.store = store
> > >
> > > def __iter__(self):
> > > try:
> > > with conn.open_ioctx(self.pool) as ioctx:
> > > with rbd.Image(ioctx, self.name,
> > >snapshot=self.snapshot) as image:
> > > img_info = image.stat()
> > > size = img_info['size']
> > > bytes_left = size
> > > while bytes_left > 0:
> > > length = min(self.chunk_size, bytes_left)
> > > data = image.read(size - bytes_left, length)
> > > bytes_left -= len(data)
> > > yield data
> > > raise StopIteration()
> > > except rbd.ImageNotFound:
> > > raise exceptions.NotFound(
> > > _('RBD image %s does not exist') % self.name)
> > >
> > > conn = rados.Rados(conffile='/etc/ceph/ceph.conf',rados_id=rados_id)
> > > conn.connect()
> > >
> > >
> > > with conn.open_ioctx('images') as ioctx:
> > > try:
> > > with rbd.Image(ioctx, image_id,
> > >snapshot='snap') as image:
> > > img_info = image.stat()
> > > print "Image size: %s " % img_info['size']
> > > iter, size = (ImageIterator(conn, 'images', image_id,
> > > 'snap', 'rbd'), img_info['size'])
> > > import six, hashlib
> > > md5sum = hashlib.md5()
> > > for chunk in iter:
> > > if isinstance(chunk, six.string_types):
> > > chunk = six.b(chunk)
> > > md5sum.update(chunk)
> > >

Re: [ceph-users] Bluestore WAL/DB decisions

2019-03-29 Thread Erik McCormick
On Fri, Mar 29, 2019 at 1:48 AM Christian Balzer  wrote:
>
> On Fri, 29 Mar 2019 01:22:06 -0400 Erik McCormick wrote:
>
> > Hello all,
> >
> > Having dug through the documentation and reading mailing list threads
> > until my eyes rolled back in my head, I am left with a conundrum
> > still. Do I separate the DB / WAL or not.
> >
> You clearly didn't find this thread, most significant post here but read
> it all:
> http://lists.ceph.com/pipermail/ceph-users-ceph.com/2019-March/033799.html
>
> In short, a 30GB DB(and thus WAL) partition should do the trick for many
> use cases and will still be better than nothing.
>

Thanks for the link. I actually had seen it, but since it contained
the mention of the 4%, and my OSDs are larger than those of the
original poster there, I was still concerned that anything I could
throw at it would be insufficient. I have a few OSDs that I've created
with the DB on the device, and this is what they ended up with after
backfilling:

Smallest:
"db_total_bytes": 320063143936,
"db_used_bytes": 1783627776,

Biggest:
"db_total_bytes": 320063143936,
"db_used_bytes": 167883309056,

So given that the biggest is ~160GB in size already, I wasn't certain
if it would be better to have some with only ~20% of it split off onto
an SSD, or to leave it all together on the slower disk. I have a new
cluster I'm building out with the same hardware, so I guess I'll see
how it goes with a small DB unless anyone comes back and says it's a
terrible idea ;).
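
For reference, the numbers above come from the bluefs counters on the OSD's
admin socket; something like this (the OSD id is a placeholder, and it has
to run on the host where that OSD lives) shows how much of the DB is in use:

ceph daemon osd.12 perf dump | grep db_

And on the new cluster, splitting the DB out at creation time is just a
matter of pointing --block.db at an SSD partition or LV, e.g. (device names
are placeholders):

ceph-volume lvm create --bluestore --data /dev/sdb --block.db /dev/sdc1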

-Erik

> Christian
>
> > I had a bunch of nodes running filestore with 8 x 8TB spinning OSDs
> > and 2 x 240 GB SSDs. I had put the OS on the first SSD, and then split
> > the journals on the remaining SSD space.
> >
> > My initial minimal understanding of Bluestore was that one should
> > stick the DB and WAL on an SSD, and if it filled up it would just
> > spill back onto the OSD itself where it otherwise would have been
> > anyway.
> >
> > So now I start digging and see that the minimum recommended size is 4%
> > of OSD size. For me that's ~2.6 TB of SSD. Clearly I do not have that
> > available to me.
> >
> > I've also read that it's not so much the data size that matters but
> > the number of objects and their size. Just looking at my current usage
> > and extrapolating that to my maximum capacity, I get to ~1.44 million
> > objects / OSD.
> >
> > So the question is, do I:
> >
> > 1) Put everything on the OSD and forget the SSDs exist.
> >
> > 2) Put just the WAL on the SSDs
> >
> > 3) Put the DB (and therefore the WAL) on SSD, ignore the size
> > recommendations, and just give each as much space as I can. Maybe 48GB
> > / OSD.
> >
> > 4) Some scenario I haven't considered.
> >
> > Is the penalty for a too small DB on an SSD partition so severe that
> > it's not worth doing?
> >
> > Thanks,
> > Erik
> > ___
> > ceph-users mailing list
> > ceph-users@lists.ceph.com
> > http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com
> >
>
>
> --
> Christian BalzerNetwork/Systems Engineer
> ch...@gol.com   Rakuten Communications
___
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com


[ceph-users] Bluestore WAL/DB decisions

2019-03-28 Thread Erik McCormick
Hello all,

Having dug through the documentation and reading mailing list threads
until my eyes rolled back in my head, I am left with a conundrum
still: do I separate the DB/WAL or not?

I had a bunch of nodes running filestore with 8 x 8TB spinning OSDs
and 2 x 240 GB SSDs. I had put the OS on the first SSD, and then split
the journals on the remaining SSD space.

My initial minimal understanding of Bluestore was that one should
stick the DB and WAL on an SSD, and if it filled up it would just
spill back onto the OSD itself where it otherwise would have been
anyway.

So now I start digging and see that the minimum recommended size is 4%
of OSD size. For me that's ~2.6 TB of SSD. Clearly I do not have that
available to me.

I've also read that it's not so much the data size that matters but
the number of objects and their size. Just looking at my current usage
and extrapolating that to my maximum capacity, I get to ~1.44 million
objects / OSD.

So the question is, do I:

1) Put everything on the OSD and forget the SSDs exist.

2) Put just the WAL on the SSDs

3) Put the DB (and therefore the WAL) on SSD, ignore the size
recommendations, and just give each as much space as I can. Maybe 48GB
/ OSD.

4) Some scenario I haven't considered.

Is the penalty for a too small DB on an SSD partition so severe that
it's not worth doing?

Thanks,
Erik
___
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com


Re: [ceph-users] Commercial support

2019-01-23 Thread Erik McCormick
Suse as well

https://www.suse.com/products/suse-enterprise-storage/


On Wed, Jan 23, 2019, 6:01 PM Alex Gorbachev  wrote:

> On Wed, Jan 23, 2019 at 5:29 PM Ketil Froyn  wrote:
> >
> > Hi,
> >
> > How is the commercial support for Ceph? More specifically, I was
> recently pointed in the direction of the very interesting combination of
> CephFS, Samba and ctdb. Is anyone familiar with companies that provide
> commercial support for in-house solutions like this?
> >
> > Regards, Ketil
>
> Hi Ketil,
>
> We provide a commercial solution based on Ceph, which is geared toward
> a business consumer with 10s or perhaps 100s, not 1000s of machines.
> Full hardware support, monitoring, integration, etc.
> http://storcium.com is the web site and VMWare certification is here:
>
> https://www.vmware.com/resources/compatibility/detail.php?deviceCategory=san&productid=41781&vcl=true
>
> Red Hat, of course, provides commercial support for Ceph as RHES
> (https://redhatstorage.redhat.com/category/enterprise-storage/)
>
> Proxmox supports Ceph integrated with their clusters (we are liking
> that technology as well, more and more due to very good
> thought-through design and quality).
>
> If you provide more information on the specific use cases, it would be
> helpful.
> --
> Alex Gorbachev
> Storcium
>
>
> > ___
> > ceph-users mailing list
> > ceph-users@lists.ceph.com
> > http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com
> ___
> ceph-users mailing list
> ceph-users@lists.ceph.com
> http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com
>
___
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com


Re: [ceph-users] Ceph on Azure ?

2018-12-23 Thread Erik McCormick
Dedicated links are not that difficult to come by anymore. It's mainly done
with SDN. Megaport, for example, lets you provision virtual
circuits to dozens of providers including Azure, AWS, and GCP. You can run
several virtual circuits over a single cross-connect.

I look forward to hearing your performance results running in cloud VMs,
but I'm fairly confident it will be both sub-optimal and expensive.

Cheers,
Erik

On Sun, Dec 23, 2018, 10:46 AM LuD j  wrote:

> Hello Marc,
> Unfortunately we can't move from Azure so easily; we plan to open more and
> more Azure regions in the future, so this strategy leads us to the Ceph
> integration issue.
> Even if we had other datacenters near them, I guess it would require
> dedicated network links between the Ceph clients and the Ceph cluster, and
> we may not have the resources for this kind of architecture.
>
> We are going to try ceph on azure by deploying an small cluster and keep a
> eye on any performances issues.
>
>
>
> On Sun, 23 Dec 2018 at 14:46, Marc Roos  wrote:
>
>>
>> What about putting it in a datacenter near them? Or move everything out
>> to some provider that allows you to have both.
>>
>>
>> -Original Message-
>> From: LuD j [mailto:luds.jer...@gmail.com]
>> Sent: maandag 17 december 2018 21:38
>> To: ceph-users@lists.ceph.com
>> Subject: [ceph-users] Ceph on Azure ?
>>
>> Hello,
>>
>> We are working to integrate the S3 protocol into our web applications. The
>> objective is to stop storing documents in a database or filesystem but to
>> use S3 buckets instead.
>> We already gave Ceph with the RADOS gateway a try on physical nodes; it's
>> working well.
>>
>> But we are also on Azure, and we can't get bare-metal servers from them.
>> We planned for a high volume of ~50TB/year + 20% each year, which makes
>> ~80K Euro/year per Azure region. The storage cost on Azure is high and they
>> don't provide any QoS on the network latency.
>> We found a 2016 post from the gitlab infrastructure's team about the
>> network latency issue on azure which confirms our concern:
>> https://gitlab.com/gitlab-com/gl-infra/infrastructure/issues/678
>>
>>
>> Is there anyone using ceph in production on a cloud provider like Azure?
>>
>>
>> ___
> ceph-users mailing list
> ceph-users@lists.ceph.com
> http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com
>
___
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com


Re: [ceph-users] nfs-ganesha version in Ceph repos

2018-10-09 Thread Erik McCormick
On Tue, Oct 9, 2018 at 2:55 PM Erik McCormick
 wrote:
>
>
>
> On Tue, Oct 9, 2018, 2:17 PM Kevin Olbrich  wrote:
>>
>> I had a similar problem:
>> http://lists.ceph.com/pipermail/ceph-users-ceph.com/2018-September/029698.html
>>
>> But even the recent 2.6.x releases were not working well for me (many many
>> segfaults). I am on the master branch (2.7.x) and that works well with
>> fewer crashes.
>> Cluster is 13.2.1/.2 with nfs-ganesha as standalone VM.
>>
> Yeah, I saw you got lots of responses. I actually came across your
> post over on the ganesha git issues, and it was your bug that led me to
> try building my own. Thanks for pointing it out!
>
> I'm totally fine building 2.7. I've been trying to build 2.6.3 and running 
> into errors running make on it like the following:
>
> /root/rpmbuild/BUILD/nfs-ganesha-2.6.3/include/nfsv41.h:6445:8: error: 
> passing argument 2 of 'xdr_pointer' from incompatible pointer type [-Werror]
> (xdrproc_t) xdr_entry4))
>
> I'm guessing I am missing some newer version of a library somewhere, but not 
> sure. Any tips for successfully getting it to build?
>
The same process worked fine building against master, so we'll see how
that goes.
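
For anyone else fighting with this, the knobs I've been using look roughly
like the following (a sketch, not a canonical recipe; paths are placeholders
and it assumes the libcephfs and librgw development packages from the Ceph
repos are installed so those FSALs actually get enabled):

cmake ../nfs-ganesha/src -DCMAKE_BUILD_TYPE=Release \
    -DUSE_FSAL_CEPH=ON -DUSE_FSAL_RGW=ON
make -j$(nproc)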

> -Erik
>
>>
>> Kevin
>>
>>
>> On Tue, 9 Oct 2018 at 19:39, Erik McCormick 
>> wrote:
>>>
>>> On Tue, Oct 9, 2018 at 1:27 PM Erik McCormick
>>>  wrote:
>>> >
>>> > Hello,
>>> >
>>> > I'm trying to set up an nfs-ganesha server with the Ceph FSAL, and
>>> > running into difficulties getting the current stable release running.
>>> > The versions in the Luminous repo is stuck at 2.6.1, whereas the
>>> > current stable version is 2.6.3. I've seen a couple of HA issues in
>>> > pre 2.6.3 versions that I'd like to avoid.
>>> >
>>>
>>> I should have been more specific that the ones I am looking for are for 
>>> Centos 7
>>>
>>> > I've also been attempting to build my own from source, but banging my
>>> > head against a wall as far as dependencies and config options are
>>> > concerned.
>>> >
>>> > If anyone reading this has the ability to kick off a fresh build of
>>> > the V2.6-stable branch with all the knobs turned properly for Ceph, or
>>> > can point me to a set of cmake configs and scripts that might help me
>>> > do it myself, I would be eternally grateful.
>>> >
>>> > Thanks,
>>> > Erik
>>> ___
>>> ceph-users mailing list
>>> ceph-users@lists.ceph.com
>>> http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com
___
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com


Re: [ceph-users] nfs-ganesha version in Ceph repos

2018-10-09 Thread Erik McCormick
On Tue, Oct 9, 2018, 2:17 PM Kevin Olbrich  wrote:

> I had a similar problem:
>
> http://lists.ceph.com/pipermail/ceph-users-ceph.com/2018-September/029698.html
>
> But even the recent 2.6.x releases were not working well for me (many many
> segfaults). I am on the master branch (2.7.x) and that works well with
> fewer crashes.
> Cluster is 13.2.1/.2 with nfs-ganesha as standalone VM.
>
Yeah, I saw you got lots of responses. I actually came across
your post over on the ganesha git issues, and it was your bug that led me
to try building my own. Thanks for pointing it out!

I'm totally fine building 2.7. I've been trying to build 2.6.3 and running
into errors running make on it like the following:

/root/rpmbuild/BUILD/nfs-ganesha-2.6.3/include/nfsv41.h:6445:8: error:
passing argument 2 of 'xdr_pointer' from incompatible pointer type [-Werror]
(xdrproc_t) xdr_entry4))

I'm guessing I am missing some newer version of a library somewhere, but
not sure. Any tips for successfully getting it to build?

-Erik


> Kevin
>
>
> On Tue, 9 Oct 2018 at 19:39, Erik McCormick <
> emccorm...@cirrusseven.com> wrote:
>
>> On Tue, Oct 9, 2018 at 1:27 PM Erik McCormick
>>  wrote:
>> >
>> > Hello,
>> >
>> > I'm trying to set up an nfs-ganesha server with the Ceph FSAL, and
>> > running into difficulties getting the current stable release running.
>> > The versions in the Luminous repo is stuck at 2.6.1, whereas the
>> > current stable version is 2.6.3. I've seen a couple of HA issues in
>> > pre 2.6.3 versions that I'd like to avoid.
>> >
>>
>> I should have been more specific that the ones I am looking for are for
>> Centos 7
>>
>> > I've also been attempting to build my own from source, but banging my
>> > head against a wall as far as dependencies and config options are
>> > concerned.
>> >
>> > If anyone reading this has the ability to kick off a fresh build of
>> > the V2.6-stable branch with all the knobs turned properly for Ceph, or
>> > can point me to a set of cmake configs and scripts that might help me
>> > do it myself, I would be eternally grateful.
>> >
>> > Thanks,
>> > Erik
>> ___
>> ceph-users mailing list
>> ceph-users@lists.ceph.com
>> http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com
>>
>
___
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com


Re: [ceph-users] nfs-ganesha version in Ceph repos

2018-10-09 Thread Erik McCormick
On Tue, Oct 9, 2018, 1:48 PM Alfredo Deza  wrote:

> On Tue, Oct 9, 2018 at 1:39 PM Erik McCormick
>  wrote:
> >
> > On Tue, Oct 9, 2018 at 1:27 PM Erik McCormick
> >  wrote:
> > >
> > > Hello,
> > >
> > > I'm trying to set up an nfs-ganesha server with the Ceph FSAL, and
> > > running into difficulties getting the current stable release running.
> > > The versions in the Luminous repo is stuck at 2.6.1, whereas the
> > > current stable version is 2.6.3. I've seen a couple of HA issues in
> > > pre 2.6.3 versions that I'd like to avoid.
> > >
> >
> > I should have been more specific that the ones I am looking for are for
> Centos 7
>
> You mean these repos: http://download.ceph.com/nfs-ganesha/ ?
>

Yes. Specifically
http://download.ceph.com/nfs-ganesha/rpm-V2.6-stable/luminous/x86_64/

Which contains 2.6.1 while current stable upstream is 2.6.3


>
> > > I've also been attempting to build my own from source, but banging my
> > > head against a wall as far as dependencies and config options are
> > > concerned.
> > >
> > > If anyone reading this has the ability to kick off a fresh build of
> > > the V2.6-stable branch with all the knobs turned properly for Ceph, or
> > > can point me to a set of cmake configs and scripts that might help me
> > > do it myself, I would be eternally grateful.
> > >
> > > Thanks,
> > > Erik
> > ___
> > ceph-users mailing list
> > ceph-users@lists.ceph.com
> > http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com
>
___
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com


Re: [ceph-users] nfs-ganesha version in Ceph repos

2018-10-09 Thread Erik McCormick
On Tue, Oct 9, 2018 at 1:27 PM Erik McCormick
 wrote:
>
> Hello,
>
> I'm trying to set up an nfs-ganesha server with the Ceph FSAL, and
> running into difficulties getting the current stable release running.
> The versions in the Luminous repo is stuck at 2.6.1, whereas the
> current stable version is 2.6.3. I've seen a couple of HA issues in
> pre 2.6.3 versions that I'd like to avoid.
>

I should have been more specific that the ones I am looking for are for Centos 7

> I've also been attempting to build my own from source, but banging my
> head against a wall as far as dependencies and config options are
> concerned.
>
> If anyone reading this has the ability to kick off a fresh build of
> the V2.6-stable branch with all the knobs turned properly for Ceph, or
> can point me to a set of cmake configs and scripts that might help me
> do it myself, I would be eternally grateful.
>
> Thanks,
> Erik
___
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com


[ceph-users] nfs-ganesha version in Ceph repos

2018-10-09 Thread Erik McCormick
Hello,

I'm trying to set up an nfs-ganesha server with the Ceph FSAL, and
running into difficulties getting the current stable release running.
The version in the Luminous repo is stuck at 2.6.1, whereas the
current stable version is 2.6.3. I've seen a couple of HA issues in
pre 2.6.3 versions that I'd like to avoid.

I've also been attempting to build my own from source, but banging my
head against a wall as far as dependencies and config options are
concerned.

If anyone reading this has the ability to kick off a fresh build of
the V2.6-stable branch with all the knobs turned properly for Ceph, or
can point me to a set of cmake configs and scripts that might help me
do it myself, I would be eternally grateful.

Thanks,
Erik
___
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com


Re: [ceph-users] list admin issues

2018-10-09 Thread Erik McCormick
Without an example of the bounce response itself it's virtually impossible
to troubleshoot. Can someone with mailman access please provide an example
of a bounce response?

All the attachments on those rejected messages are just HTML copies of the
message (which are not on the list of filtered attachments), and they seem
to be on every message anyway. It also looks like mailman is stripping them.

-Erik

On Tue, Oct 9, 2018, 7:04 AM Elias Abacioglu <
elias.abacio...@deltaprojects.com> wrote:

> Maybe there are some advice here that can help remedy the situation a bit?
> https://support.google.com/mail/answer/81126?hl=en
> https://support.google.com/mail/answer/6227174?hl=en
>
> /Elias
>
> On Tue, Oct 9, 2018 at 2:24 AM Alex Gorbachev 
> wrote:
>
>> On Mon, Oct 8, 2018 at 7:48 AM Elias Abacioglu
>>  wrote:
>> >
>> > If it's attachments causing this, perhaps forbid attachments? Force
>> people to use pastebin / imgur type of services?
>> >
>> > /E
>> >
>> > On Mon, Oct 8, 2018 at 1:33 PM Martin Palma  wrote:
>> >>
>> >> Same here also on Gmail with G Suite.
>> >> On Mon, Oct 8, 2018 at 12:31 AM Paul Emmerich 
>> wrote:
>> >> >
>> >> > I'm also seeing this once every few months or so on Gmail with G
>> Suite.
>> >> >
>> >> > Paul
>> >> > Am So., 7. Okt. 2018 um 08:18 Uhr schrieb Joshua Chen
>> >> > :
>> >> > >
>> >> > > I also got removed once, got another warning once (need to
>> re-enable).
>> >> > >
>> >> > > Cheers
>> >> > > Joshua
>> >> > >
>> >> > >
>> >> > > On Sun, Oct 7, 2018 at 5:38 AM Svante Karlsson <
>> svante.karls...@csi.se> wrote:
>> >> > >>
>> >> > >> I'm also getting removed but not only from ceph. I subscribe
>> d...@kafka.apache.org list and the same thing happens there.
>> >> > >>
>> >> > >> On Sat, 6 Oct 2018 at 23:24, Jeff Smith <
>> j...@unbiasedgeek.com> wrote:
>> >> > >>>
>> >> > >>> I have been removed twice.
>> >> > >>> On Sat, Oct 6, 2018 at 7:07 AM Elias Abacioglu
>> >> > >>>  wrote:
>> >> > >>> >
>> >> > >>> > Hi,
>> >> > >>> >
>> >> > >>> > I'm bumping this old thread cause it's getting annoying. My
>> membership get disabled twice a month.
>> >> > >>> > Between my two Gmail accounts I'm in more than 25 mailing
>> lists and I see this behavior only here. Why is only ceph-users only
>> affected? Maybe Christian was on to something, is this intentional?
>> >> > >>> > Reality is that there is a lot of ceph-users with Gmail
>> accounts, perhaps it wouldn't be so bad to actually trying to figure this
>> one out?
>> >> > >>> >
>> >> > >>> > So can the maintainers of this list please investigate what
>> actually gets bounced? Look at my address if you want.
>> >> > >>> > I got disabled 20181006, 20180927, 20180916, 20180725,
>> 20180718 most recently.
>> >> > >>> > Please help!
>> >> > >>> >
>> >> > >>> > Thanks,
>> >> > >>> > Elias
>> >> > >>> >
>> >> > >>> > On Mon, Oct 16, 2017 at 5:41 AM Christian Balzer <
>> ch...@gol.com> wrote:
>> >> > >>> >>
>> >> > >>> >>
>> >> > >>> >> Most mails to this ML score low or negatively with
>> SpamAssassin, however
>> >> > >>> >> once in a while (this is a recent one) we get relatively high
>> scores.
>> >> > >>> >> Note that the forged bits are false positives, but the SA is
>> up to date and
>> >> > >>> >> google will have similar checks:
>> >> > >>> >> ---
>> >> > >>> >> X-Spam-Status: No, score=3.9 required=10.0
>> tests=BAYES_00,DCC_CHECK,
>> >> > >>> >>
>> FORGED_MUA_MOZILLA,FORGED_YAHOO_RCVD,FREEMAIL_FORGED_FROMDOMAIN,FREEMAIL_FROM,
>> >> > >>> >>
>> HEADER_FROM_DIFFERENT_DOMAINS,HTML_MESSAGE,MIME_HTML_MOSTLY,RCVD_IN_MSPIKE_H4,
>> >> > >>> >>  RCVD_IN_MSPIKE_WL,RDNS_NONE,T_DKIM_INVALID shortcircuit=no
>> autolearn=no
>> >> > >>> >> ---
>> >> > >>> >>
>> >> > >>> >> Between attachment mails and some of these and you're well on
>> your way out.
>> >> > >>> >>
>> >> > >>> >> The default mailman settings and logic require 5 bounces to
>> trigger
>> >> > >>> >> unsubscription and 7 days of NO bounces to reset the counter.
>> >> > >>> >>
>> >> > >>> >> Christian
>> >> > >>> >>
>> >> > >>> >> On Mon, 16 Oct 2017 12:23:25 +0900 Christian Balzer wrote:
>> >> > >>> >>
>> >> > >>> >> > On Mon, 16 Oct 2017 14:15:22 +1100 Blair Bethwaite wrote:
>> >> > >>> >> >
>> >> > >>> >> > > Thanks Christian,
>> >> > >>> >> > >
>> >> > >>> >> > > You're no doubt on the right track, but I'd really like
>> to figure out
>> >> > >>> >> > > what it is at my end - I'm unlikely to be the only person
>> subscribed
>> >> > >>> >> > > to ceph-users via a gmail account.
>> >> > >>> >> > >
>> >> > >>> >> > > Re. attachments, I'm surprised mailman would be allowing
>> them in the
>> >> > >>> >> > > first place, and even so gmail's attachment requirements
>> are less
>> >> > >>> >> > > strict than most corporate email setups (those that don't
>> already use
>> >> > >>> >> > > a cloud provider).
>> >> > >>> >> > >
>> >> > >>> >> > Mailman doesn't do anything with this by default AFAIK, but
>> see below.
>> >> > >>> >> > Strict is fine if you're in control, corporate mail can be

Re: [ceph-users] list admin issues

2018-10-06 Thread Erik McCormick
This has happened to me several times as well. This address is hosted on
gmail.

-Erik

On Sat, Oct 6, 2018, 9:06 AM Elias Abacioglu <
elias.abacio...@deltaprojects.com> wrote:

> Hi,
>
> I'm bumping this old thread cause it's getting annoying. My membership get
> disabled twice a month.
> Between my two Gmail accounts I'm in more than 25 mailing lists and I see
> this behavior only here. Why is only ceph-users only affected? Maybe
> Christian was on to something, is this intentional?
> Reality is that there is a lot of ceph-users with Gmail accounts, perhaps
> it wouldn't be so bad to actually trying to figure this one out?
>
> So can the maintainers of this list please investigate what actually gets
> bounced? Look at my address if you want.
> I got disabled 20181006, 20180927, 20180916, 20180725, 20180718 most
> recently.
> Please help!
>
> Thanks,
> Elias
>
> On Mon, Oct 16, 2017 at 5:41 AM Christian Balzer  wrote:
>
>>
>> Most mails to this ML score low or negatively with SpamAssassin, however
>> once in a while (this is a recent one) we get relatively high scores.
>> Note that the forged bits are false positives, but the SA is up to date
>> and
>> google will have similar checks:
>> ---
>> X-Spam-Status: No, score=3.9 required=10.0 tests=BAYES_00,DCC_CHECK,
>>
>>  
>> FORGED_MUA_MOZILLA,FORGED_YAHOO_RCVD,FREEMAIL_FORGED_FROMDOMAIN,FREEMAIL_FROM,
>>
>>  
>> HEADER_FROM_DIFFERENT_DOMAINS,HTML_MESSAGE,MIME_HTML_MOSTLY,RCVD_IN_MSPIKE_H4,
>>  RCVD_IN_MSPIKE_WL,RDNS_NONE,T_DKIM_INVALID shortcircuit=no autolearn=no
>> ---
>>
>> Between attachment mails and some of these and you're well on your way
>> out.
>>
>> The default mailman settings and logic require 5 bounces to trigger
>> unsubscription and 7 days of NO bounces to reset the counter.
>>
>> Christian
>>
>> On Mon, 16 Oct 2017 12:23:25 +0900 Christian Balzer wrote:
>>
>> > On Mon, 16 Oct 2017 14:15:22 +1100 Blair Bethwaite wrote:
>> >
>> > > Thanks Christian,
>> > >
>> > > You're no doubt on the right track, but I'd really like to figure out
>> > > what it is at my end - I'm unlikely to be the only person subscribed
>> > > to ceph-users via a gmail account.
>> > >
>> > > Re. attachments, I'm surprised mailman would be allowing them in the
>> > > first place, and even so gmail's attachment requirements are less
>> > > strict than most corporate email setups (those that don't already use
>> > > a cloud provider).
>> > >
>> > Mailman doesn't do anything with this by default AFAIK, but see below.
>> > Strict is fine if you're in control, corporate mail can be hell, doubly
>> so
>> > if on M$ cloud.
>> >
>> > > This started happening earlier in the year after I turned off digest
>> > > mode. I also have a paid google domain, maybe I'll try setting
>> > > delivery to that address and seeing if anything changes...
>> > >
>> > Don't think google domain is handled differently, but what do I know.
>> >
>> > Though the digest bit confirms my suspicion about attachments:
>> > ---
>> > When a subscriber chooses to receive plain text daily “digests” of list
>> > messages, Mailman sends the digest messages without any original
>> > attachments (in Mailman lingo, it “scrubs” the messages of attachments).
>> > However, Mailman also includes links to the original attachments that
>> the
>> > recipient can click on.
>> > ---
>> >
>> > Christian
>> >
>> > > Cheers,
>> > >
>> > > On 16 October 2017 at 13:54, Christian Balzer 
>> wrote:
>> > > >
>> > > > Hello,
>> > > >
>> > > > You're on gmail.
>> > > >
>> > > > Aside from various potential false positives with regards to spam
>> my bet
>> > > > is that gmail's known dislike for attachments is the cause of these
>> > > > bounces and that setting is beyond your control.
>> > > >
>> > > > Because Google knows best[tm].
>> > > >
>> > > > Christian
>> > > >
>> > > > On Mon, 16 Oct 2017 13:50:43 +1100 Blair Bethwaite wrote:
>> > > >
>> > > >> Hi all,
>> > > >>
>> > > >> This is a mailing-list admin issue - I keep being unsubscribed from
>> > > >> ceph-users with the message:
>> > > >> "Your membership in the mailing list ceph-users has been disabled
>> due
>> > > >> to excessive bounces..."
>> > > >> This seems to be happening on roughly a monthly basis.
>> > > >>
>> > > >> Thing is I have no idea what the bounce is or where it is coming
>> from.
>> > > >> I've tried emailing ceph-users-ow...@lists.ceph.com and the
>> contact
>> > > >> listed in Mailman (l...@redhat.com) to get more info but haven't
>> > > >> received any response despite several attempts.
>> > > >>
>> > > >> Help!
>> > > >>
>> > > >
>> > > >
>> > > > --
>> > > > Christian BalzerNetwork/Systems Engineer
>> > > > ch...@gol.com   Rakuten Communications
>> > >
>> > >
>> > >
>> >
>> >
>>
>>
>> --
>> Christian BalzerNetwork/Systems Engineer
>> ch...@gol.com   Rakuten Communications
>> ___
>> ceph-users mailing list
>> ceph-users@lists.ceph.com
>> http://lists.ceph.com/listinfo.cgi/ceph-user

Re: [ceph-users] network architecture questions

2018-09-18 Thread Erik McCormick
On Tue, Sep 18, 2018, 7:56 PM solarflow99  wrote:

> Thanks for the replies. I don't know that CephFS clients go through the
> MONs; they reach the OSDs directly. When I mentioned NFS, I meant NFS
> clients (i.e. not CephFS clients). This should have been pretty
> straightforward.
> Is anyone doing HA on the MONs? How do you mount the CephFS shares? Surely
> you'd have a VIP?
>

When you mount CephFS or map an RBD (for your NFS case) you provide a list of
monitors. They are, by nature, highly available. They do not rely on any
sort of VIP failover like keepalived or pacemaker.
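
For example (hostnames and credentials are placeholders), a kernel CephFS
mount just lists all of the mons, and the client fails over between them on
its own:

mount -t ceph mon1:6789,mon2:6789,mon3:6789:/ /mnt/cephfs \
    -o name=admin,secretfile=/etc/ceph/admin.secret

The same goes for mapping an RBD to re-export over NFS: the client reads the
mon list from ceph.conf, and no VIP is needed in front of the mons.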

-Erik

>
>
>
> On Tue, Sep 18, 2018 at 12:37 PM Jean-Charles Lopez 
> wrote:
>
>> > On Sep 17, 2018, at 16:13, solarflow99  wrote:
>> >
>> > Hi, I read through the various documentation and had a few questions:
>> >
>> > - From what I understand cephFS clients reach the OSDs directly, does
>> the cluster network need to be opened up as a public network?
>> Client traffic only goes over the public network. Only OSD to OSD traffic
>> (replication, rebalancing, recovery go over the cluster network)
>> >
>> > - Is it still necessary to have a public and cluster network when the
>> using cephFS since the clients all reach the OSD's directly?
>> Separating the network is a plus for troubleshooting and sizing for
>> bandwidth
>> >
>> > - Simplest way to do HA on the mons for providing NFS, etc?
>> Don’t really understand the question (NFS vs CephFS).
>> > ___
>> > ceph-users mailing list
>> > ceph-users@lists.ceph.com
>> > http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com
>>
>> ___
> ceph-users mailing list
> ceph-users@lists.ceph.com
> http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com
>
___
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com


Re: [ceph-users] New Ceph community manager: Mike Perez

2018-08-28 Thread Erik McCormick
Wherever I go, there you are ;). Glad to have you back again!

Cheers,
Erik

On Tue, Aug 28, 2018, 10:25 PM Dan Mick  wrote:

> On 08/28/2018 06:13 PM, Sage Weil wrote:
> > Hi everyone,
> >
> > Please help me welcome Mike Perez, the new Ceph community manager!
> >
> > Mike has a long history with Ceph: he started at DreamHost working on
> > OpenStack and Ceph back in the early days, including work on the
> original
> > RBD integration.  He went on to work in several roles in the OpenStack
> > project, doing a mix of infrastructure, cross-project and community
> > related initiatives, including serving as the Project Technical Lead for
> > Cinder.
> >
> > Mike lives in Pasadena, CA, and can be reached at mpe...@redhat.com, on
> > IRC as thingee, or twitter as @thingee.
> >
> > I am very excited to welcome Mike back to Ceph, and look forward to
> > working together on building the Ceph developer and user communities!
> >
> > sage
> > ___
> > ceph-users mailing list
> > ceph-users@lists.ceph.com
> > http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com
> >
>
> Welcome back Mike!
>
> --
> Dan Mick
> Red Hat, Inc.
> Ceph docs: http://ceph.com/docs
> ___
> ceph-users mailing list
> ceph-users@lists.ceph.com
> http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com
>
___
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com


Re: [ceph-users] [Ceph-deploy] Cluster Name

2018-08-08 Thread Erik McCormick
I'm not using this feature, so maybe I'm missing something, but from
the way I understand cluster naming to work...

I still don't understand why this is blocking for you. Unless you are
attempting to mirror between two clusters running on the same hosts
(why would you do this?) then systemd doesn't come into play. The
--cluster flag on the rbd command will simply set the name of a
configuration file with the FSID and settings of the appropriate
cluster. Cluster name is just a way of telling ceph commands and
systemd units where to find the configs.

So, what you end up with is something like:

/etc/ceph/ceph.conf (your local cluster configuration) on both clusters
/etc/ceph/local.conf (config of the source cluster; just a copy of the
source cluster's ceph.conf)
/etc/ceph/remote.conf (config of the destination peer cluster; just a copy
of the remote cluster's ceph.conf)

Run all your rbd mirror commands against the local and remote names.
However, when starting things like mons, OSDs, MDSes, etc., you need no
cluster name, as they can just use ceph.conf (cluster name "ceph").
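
So, for example, peering a pool for mirroring is just a matter of pointing
each command at the right conf via the cluster name (a sketch; the pool and
user names are placeholders):

rbd --cluster local mirror pool enable images pool
rbd --cluster remote mirror pool enable images pool
rbd --cluster local mirror pool peer add images client.remote@remote
rbd --cluster remote mirror pool peer add images client.local@local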

Am I making sense, or have I completely missed something?

-Erik

On Wed, Aug 8, 2018 at 8:34 AM, Thode Jocelyn  wrote:
> Hi,
>
>
>
> We are still blocked by this problem on our end. Glen did you  or someone
> else figure out something for this ?
>
>
>
> Regards
>
> Jocelyn Thode
>
>
>
> From: Glen Baars [mailto:g...@onsitecomputers.com.au]
> Sent: jeudi, 2 août 2018 05:43
> To: Erik McCormick 
> Cc: Thode Jocelyn ; Vasu Kulkarni
> ; ceph-users@lists.ceph.com
> Subject: RE: [ceph-users] [Ceph-deploy] Cluster Name
>
>
>
> Hello Erik,
>
>
>
> We are going to use RBD-mirror to replicate the clusters. This seems to need
> separate cluster names.
>
> Kind regards,
>
> Glen Baars
>
>
>
> From: Erik McCormick 
> Sent: Thursday, 2 August 2018 9:39 AM
> To: Glen Baars 
> Cc: Thode Jocelyn ; Vasu Kulkarni
> ; ceph-users@lists.ceph.com
> Subject: Re: [ceph-users] [Ceph-deploy] Cluster Name
>
>
>
> Don't set a cluster name. It's no longer supported. It really only matters
> if you're running two or more independent clusters on the same boxes. That's
> generally inadvisable anyway.
>
>
>
> Cheers,
>
> Erik
>
>
>
> On Wed, Aug 1, 2018, 9:17 PM Glen Baars  wrote:
>
> Hello Ceph Users,
>
> Does anyone know how to set the Cluster Name when deploying with
> Ceph-deploy? I have 3 clusters to configure and need to correctly set the
> name.
>
> Kind regards,
> Glen Baars
>
> -Original Message-
> From: ceph-users  On Behalf Of Glen Baars
> Sent: Monday, 23 July 2018 5:59 PM
> To: Thode Jocelyn ; Vasu Kulkarni
> 
> Cc: ceph-users@lists.ceph.com
> Subject: Re: [ceph-users] [Ceph-deploy] Cluster Name
>
> How very timely, I am facing the exact same issue.
>
> Kind regards,
> Glen Baars
>
> -Original Message-
> From: ceph-users  On Behalf Of Thode
> Jocelyn
> Sent: Monday, 23 July 2018 1:42 PM
> To: Vasu Kulkarni 
> Cc: ceph-users@lists.ceph.com
> Subject: Re: [ceph-users] [Ceph-deploy] Cluster Name
>
> Hi,
>
> Yes my rbd-mirror is coloctaed with my mon/osd. It only affects nodes where
> they are collocated as they all use the "/etc/sysconfig/ceph" configuration
> file.
>
> Best
> Jocelyn Thode
>
> -Original Message-
> From: Vasu Kulkarni [mailto:vakul...@redhat.com]
> Sent: vendredi, 20 juillet 2018 17:25
> To: Thode Jocelyn 
> Cc: ceph-users@lists.ceph.com
> Subject: Re: [ceph-users] [Ceph-deploy] Cluster Name
>
> On Fri, Jul 20, 2018 at 7:29 AM, Thode Jocelyn 
> wrote:
>> Hi,
>>
>>
>>
>> I noticed that in commit
>> https://github.com/ceph/ceph-deploy/commit/b1c27b85d524f2553af2487a98023b60efe421f3,
>> the ability to specify a cluster name was removed. Is
>> there a reason for this removal ?
>>
>>
>>
>> Because right now, there is no possibility to create a ceph cluster
>> with a different name with ceph-deploy which is a big problem when
>> having two clusters replicating with rbd-mirror as we need different
>> names.
>>
>>
>>
>> And even when following the doc here:
>> https://access.redhat.com/documentation/en-us/red_hat_ceph_storage/3/html/block_device_guide/block_device_mirroring#rbd-mirroring-clusters-with-the-same-name
>>
>>
>>
>> This is not sufficient as once we change the CLUSTER variable in the
>> sysconfig file, mon,osd, mds etc. all use it and fail to start on a
>> reboot as they then try to load data from a path in /var/lib/ceph
>> containing the cluster name.
>
Is your rbd-mirror client also colocated with mon/osd? This only needs to be
changed on the client side where you are doing mirroring; the rest of the
nodes should not be affected.

Re: [ceph-users] [Ceph-deploy] Cluster Name

2018-08-01 Thread Erik McCormick
Don't set a cluster name. It's no longer supported. It really only matters
if you're running two or more independent clusters on the same boxes.
That's generally inadvisable anyway.

Cheers,
Erik

On Wed, Aug 1, 2018, 9:17 PM Glen Baars  wrote:

> Hello Ceph Users,
>
> Does anyone know how to set the Cluster Name when deploying with
> Ceph-deploy? I have 3 clusters to configure and need to correctly set the
> name.
>
> Kind regards,
> Glen Baars
>
> -Original Message-
> From: ceph-users  On Behalf Of Glen
> Baars
> Sent: Monday, 23 July 2018 5:59 PM
> To: Thode Jocelyn ; Vasu Kulkarni <
> vakul...@redhat.com>
> Cc: ceph-users@lists.ceph.com
> Subject: Re: [ceph-users] [Ceph-deploy] Cluster Name
>
> How very timely, I am facing the exact same issue.
>
> Kind regards,
> Glen Baars
>
> -Original Message-
> From: ceph-users  On Behalf Of Thode
> Jocelyn
> Sent: Monday, 23 July 2018 1:42 PM
> To: Vasu Kulkarni 
> Cc: ceph-users@lists.ceph.com
> Subject: Re: [ceph-users] [Ceph-deploy] Cluster Name
>
> Hi,
>
> Yes, my rbd-mirror is collocated with my mon/osd. It only affects nodes
> where they are collocated as they all use the "/etc/sysconfig/ceph"
> configuration file.
>
> Best
> Jocelyn Thode
>
> -Original Message-
> From: Vasu Kulkarni [mailto:vakul...@redhat.com]
> Sent: vendredi, 20 juillet 2018 17:25
> To: Thode Jocelyn 
> Cc: ceph-users@lists.ceph.com
> Subject: Re: [ceph-users] [Ceph-deploy] Cluster Name
>
> On Fri, Jul 20, 2018 at 7:29 AM, Thode Jocelyn 
> wrote:
> > Hi,
> >
> >
> >
> > I noticed that in commit
> > https://github.com/ceph/ceph-deploy/commit/b1c27b85d524f2553af2487a98023b60efe421f3,
> > the ability to specify a cluster name was removed. Is
> > there a reason for this removal ?
> >
> >
> >
> > Because right now, there is no possibility to create a ceph cluster
> > with a different name with ceph-deploy which is a big problem when
> > having two clusters replicating with rbd-mirror as we need different
> names.
> >
> >
> >
> > And even when following the doc here:
> > https://access.redhat.com/documentation/en-us/red_hat_ceph_storage/3/html/block_device_guide/block_device_mirroring#rbd-mirroring-clusters-with-the-same-name
> >
> >
> >
> > This is not sufficient as once we change the CLUSTER variable in the
> > sysconfig file, mon,osd, mds etc. all use it and fail to start on a
> > reboot as they then try to load data from a path in /var/lib/ceph
> > containing the cluster name.
>
> Is your rbd-mirror client also colocated with mon/osd? This only needs to be
> changed on the client side where you are doing mirroring; the rest of the
> nodes should not be affected.
>
>
> >
> >
> >
> > Is there a solution to this problem ?
> >
> >
> >
> > Best Regards
> >
> > Jocelyn Thode
> >
> >
> > ___
> > ceph-users mailing list
> > ceph-users@lists.ceph.com
> > http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com
> >
> ___
> ceph-users mailing list
> ceph-users@lists.ceph.com
> http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com
> This e-mail is intended solely for the benefit of the addressee(s) and any
> other named recipient. It is confidential and may contain legally
> privileged or confidential information. If you are not the recipient, any
> use, distribution, disclosure or copying of this e-mail is prohibited. The
> confidentiality and legal privilege attached to this communication is not
> waived or lost by reason of the mistaken transmission or delivery to you.
> If you have received this e-mail in error, please notify us immediately.
> ___
> ceph-users mailing list
> ceph-users@lists.ceph.com
> http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com
>
___
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com


[ceph-users] Multiple Rados Gateways with different auth backends

2018-06-12 Thread Erik McCormick
Hello all,

I have recently had need to make use of the S3 API on my Rados
Gateway. We've been running just Swift API backed by Openstack for
some time with no issues.

Upon trying to use the S3 API I discovered that our combination of
Jewel and Keystone renders AWS v4 signatures unusable. Apparently the
only way to make it go at this point is to upgrade to Luminous, which
I'm not yet ready to do.

So that brought me to running multiple Rados Gateways; One with
Keystone, and one without. In theory it sounds simple. In practice,
I'm struggling with how to go about it. I'm currently running with
only a single default zone. I haven't done any sort of Multisite setup
to this point. The only way I can see to split things out is to create
two separate zonegroups, each consisting of only one zone. This seems
like using a backhoe to remove a few weeds from a garden.

So the question is: Is there a way to run two completely independent
Rados Gateways on their own pools with different auth mechanisms on
the same Ceph cluster? Preferably I would run them on the same hosts
as the current ones using different sockets, but I can sacrifice a
couple boxen to the cause of separating them if need be.

Thanks in advance for your advice!

Cheers,
Erik
___
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com


Re: [ceph-users] civetweb: ssl_private_key

2018-05-29 Thread Erik McCormick
On Tue, May 29, 2018, 11:00 AM Marc Roos  wrote:

>
> I guess we will not get this ssl_private_key option unless we upgrade
> from Luminous?
>
>
> http://docs.ceph.com/docs/master/radosgw/frontends/
>
That option is only for Beast. For civetweb you just feed it
ssl_certificate with a combined PEM file.

https://civetweb.github.io/civetweb/UserManual.html

http://civetweb.github.io/civetweb/OpenSSL.html
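
For reference, a minimal sketch of the ceph.conf bit (section name, port and
path are just examples; the PEM is the certificate, any chain, and the key
concatenated into one file):

  [client.rgw.gateway]
  rgw_frontends = civetweb port=443s ssl_certificate=/etc/ceph/private/rgw.pem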

-Erik

___
> ceph-users mailing list
> ceph-users@lists.ceph.com
> http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com
>
___
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com


Re: [ceph-users] Ceph iSCSI is a prank?

2018-02-28 Thread Erik McCormick
On Feb 28, 2018 10:06 AM, "Max Cuttins"  wrote:



On 28/02/2018 15:19, Jason Dillaman wrote:

> On Wed, Feb 28, 2018 at 7:53 AM, Massimiliano Cuttini 
> wrote:
>
>> I was building ceph in order to use with iSCSI.
>> But I just see from the docs that need:
>>
>> CentOS 7.5
>> (which is not available yet, it's still at 7.4)
>> https://wiki.centos.org/Download
>>
>> Kernel 4.17
>> (which is not available yet, it is still at 4.15.7)
>> https://www.kernel.org/
>>
> The necessary kernel changes actually are included as part of 4.16-rc1
> which is available now. We also offer a pre-built test kernel with the
> necessary fixes here [1].
>
This is a release candidate and it's not ready for production.
Does anybody know when the kernel 4.16 will be ready for production?


Release date is late March / early April.





> So I guess, there is no official support and this is just a bad prank.
>>
>> Ceph has been ready to be used with S3 for many years.
>> But it needs the kernel of the next century to work with such an old
>> technology
>> like iSCSI.
>> So sad.
>>
> Unfortunately, kernel vs userspace have very different development
> timelines. We have no interest in maintaining out-of-tree patchsets to
> the kernel.
>

This is true, but having something that just works, in order to have minimum
compatibility and start to retire old disks, is something you should think
about.
You'll have ages to improve and get better performance. But you
should allow users to cut off old solutions as soon as possible while
waiting for a better implementation.



>>
>> ___
>> ceph-users mailing list
>> ceph-users@lists.ceph.com
>> http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com
>>
>> [1] https://shaman.ceph.com/repos/kernel/ceph-iscsi-test/
>
>
___
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com
___
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com


Re: [ceph-users] Luminous v12.2.2 released

2017-12-05 Thread Erik McCormick
On Dec 5, 2017 10:26 AM, "Florent B"  wrote:

On Debian systems, upgrading packages does not restart services !

You really don't want it to restart services. Many small clusters run mons
and osds on the same nodes, and auto restart makes it impossible to order
restarts.
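
For what it's worth, the kind of ordered restart I mean looks roughly like
this (assuming the stock systemd units; do one node at a time and watch
"ceph -s" in between):

  ceph osd set noout                 # optional: avoid rebalancing during the restarts
  systemctl restart ceph-mon.target  # on each mon node, wait for quorum before the next
  systemctl restart ceph-osd.target  # then on each OSD node
  ceph osd unset noout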

-Erik

On 05/12/2017 16:22, Oscar Segarra wrote:

I have executed:

yum upgrade -y ceph

On each node and everything has worked fine...

2017-12-05 16:19 GMT+01:00 Florent B :

> Upgrade procedure is OSD or MON first ?
>
> There was a change on Luminous upgrade about it.
>
>
> On 01/12/2017 18:34, Abhishek Lekshmanan wrote:
> > We're glad to announce the second bugfix release of Luminous v12.2.x
> > stable release series. It contains a range of bug fixes and a few
> > features across Bluestore, CephFS, RBD & RGW. We recommend all the users
> > of 12.2.x series update.
> >
> > For more detailed information, see the blog[1] and the complete
> > changelog[2]
> >
> > A big thank you to everyone for the continual feedback & bug
> > reports we've received over this release cycle
> >
> > Notable Changes
> > ---
> > * Standby ceph-mgr daemons now redirect requests to the active
> messenger, easing
> >   configuration for tools & users accessing the web dashboard, restful
> API, or
> >   other ceph-mgr module services.
> > * The prometheus module has several significant updates and improvements.
> > * The new balancer module enables automatic optimization of CRUSH
> weights to
> >   balance data across the cluster.
> > * The ceph-volume tool has been updated to include support for BlueStore
> as well
> >   as FileStore. The only major missing ceph-volume feature is dm-crypt
> support.
> > * RGW's dynamic bucket index resharding is disabled in multisite
> environments,
> >   as it can cause inconsistencies in replication of bucket indexes to
> remote
> >   sites
> >
> > Other Notable Changes
> > -
> > * build/ops: bump sphinx to 1.6 (issue#21717, pr#18167, Kefu Chai,
> Alfredo Deza)
> > * build/ops: macros expanding in spec file comment (issue#22250,
> pr#19173, Ken Dreyer)
> > * build/ops: python-numpy-devel build dependency for SUSE (issue#21176,
> pr#17692, Nathan Cutler)
> > * build/ops: selinux: Allow getattr on lnk sysfs files (issue#21492,
> pr#18650, Boris Ranto)
> > * build/ops: Ubuntu amd64 client can not discover the ubuntu arm64 ceph
> cluster (issue#19705, pr#18293, Kefu Chai)
> > * core: buffer: fix ABI breakage by removing list _mempool member
> (issue#21573, pr#18491, Sage Weil)
> > * core: Daemons(OSD, Mon…) exit abnormally at injectargs command
> (issue#21365, pr#17864, Yan Jun)
> > * core: Disable messenger logging (debug ms = 0/0) for clients unless
> overridden (issue#21860, pr#18529, Jason Dillaman)
> > * core: Improve OSD startup time by only scanning for omap corruption
> once (issue#21328, pr#17889, Luo Kexue, David Zafman)
> > * core: upmap does not respect osd reweights (issue#21538, pr#18699,
> Theofilos Mouratidis)
> > * dashboard: barfs on nulls where it expects numbers (issue#21570,
> pr#18728, John Spray)
> > * dashboard: OSD list has servers and osds in arbitrary order
> (issue#21572, pr#18736, John Spray)
> > * dashboard: the dashboard uses absolute links for filesystems and
> clients (issue#20568, pr#18737, Nick Erdmann)
> > * filestore: set default readahead and compaction threads for rocksdb
> (issue#21505, pr#18234, Josh Durgin, Mark Nelson)
> > * librbd: object map batch update might cause OSD suicide timeout
> (issue#21797, pr#18416, Jason Dillaman)
> > * librbd: snapshots should be created/removed against data pool
> (issue#21567, pr#18336, Jason Dillaman)
> > * mds: make sure snap inode’s last matches its parent dentry’s last
> (issue#21337, pr#17994, “Yan, Zheng”)
> > * mds: sanitize mdsmap of removed pools (issue#21945, issue#21568,
> pr#18628, Patrick Donnelly)
> > * mgr: bulk backport of ceph-mgr improvements (issue#21594, issue#17460,
> >   issue#21197, issue#21158, issue#21593, pr#18675, Benjeman Meekhof,
> >   Sage Weil, Jan Fajerski, John Spray, Kefu Chai, My Do, Spandan Kumar
> Sahu)
> > * mgr: ceph-mgr gets process called “exe” after respawn (issue#21404,
> pr#18738, John Spray)
> > * mgr: fix crashable DaemonStateIndex::get calls (issue#17737, pr#18412,
> John Spray)
> > * mgr: key mismatch for mgr after upgrade from jewel to luminous(dev)
> (issue#20950, pr#18727, John Spray)
> > * mgr: mgr status module uses base 10 units (issue#21189, issue#21752,
> pr#18257, John Spray, Yanhu Cao)
> > * mgr: mgr[zabbix] float division by zero (issue#21518, pr#18734, John
> Spray)
> > * mgr: Prometheus crash when update (issue#21253, pr#17867, John Spray)
> > * mgr: prometheus module generates invalid output when counter names
> contain non-alphanum characters (issue#20899, pr#17868, John Spray, Jeremy
> H Austin)
> > * mgr: Quieten scary RuntimeError from restful module on startup
> (issue#21292, pr#17866, John Spray)
> > * mgr: Spurious ceph-mgr failovers during mon elections 

Re: [ceph-users] Is the 12.2.1 really stable? Anybody have production cluster with Luminous Bluestore?

2017-11-16 Thread Erik McCormick
I was told at the Openstack Summit that 12.2.2 should drop "In a few days."
That was a week ago yesterday.  If you have a little leeway,  it may be
best to wait. I know I am, but I'm paranoid.

There was also a performance regression mentioned recently that's supposed
to be fixed.

-Erik

On Nov 16, 2017 9:22 AM, "Jack"  wrote:

My cluster (55 OSDs) runs 12.2.x since the release, and bluestore too
All good so far

On 16/11/2017 15:14, Konstantin Shalygin wrote:
> Hi cephers.
> Some thoughts...
> At this time my cluster is on Kraken 11.2.0 - it works smoothly with FileStore
> and RBD only.
> I want to upgrade to Luminous 12.2.1 and go to Bluestore because this
> cluster will soon double in size with new disks, so it is the best
> opportunity to migrate to Bluestore.
>
> In ML I was found two problems:
> 1. Increased memory usage, should be fixed in upstream
> (http://lists.ceph.com/pipermail/ceph-users-ceph.com/2017-October/021676.html).
>
> 2. OSDs drop and the cluster goes offline
> (http://lists.ceph.com/pipermail/ceph-users-ceph.com/2017-November/022494.html).
> I don't know whether these were Bluestore or FileStore OSDs.
>
> The first case I can safely survive - the hosts have enough memory to go
> to Bluestore, and with the growth I can wait until the next stable
> release.
> That second case really scares me. As I understood it, clusters with this
> problem are for now not in production.
>
> By this point I have completed all the preparations for the update and
> now I need to figure out whether I should update to 12.2.1 or wait for
> the next stable release, because my cluster is in production and I can't
> afford a failure. Or I can upgrade and keep using FileStore until the next
> release; this is acceptable for me.
>
> Thanks.
> ___
> ceph-users mailing list
> ceph-users@lists.ceph.com
> http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com

___
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com
___
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com


Re: [ceph-users] removing cluster name support

2017-11-07 Thread Erik McCormick
On Nov 8, 2017 7:33 AM, "Vasu Kulkarni"  wrote:

On Tue, Nov 7, 2017 at 11:38 AM, Sage Weil  wrote:
> On Tue, 7 Nov 2017, Alfredo Deza wrote:
>> On Tue, Nov 7, 2017 at 7:09 AM, kefu chai  wrote:
>> > On Fri, Jun 9, 2017 at 3:37 AM, Sage Weil  wrote:
>> >> At CDM yesterday we talked about removing the ability to name your
ceph
>> >> clusters.  There are a number of hurdles that make it difficult to
fully
>> >> get rid of this functionality, not the least of which is that some
>> >> (many?) deployed clusters make use of it.  We decided that the most
we can
>> >> do at this point is remove support for it in ceph-deploy and
ceph-ansible
>> >> so that no new clusters or deployed nodes use it.
>> >>
>> >> The first PR in this effort:
>> >>
>> >> https://github.com/ceph/ceph-deploy/pull/441
>> >
>> > okay, i am closing https://github.com/ceph/ceph/pull/18638 and
>> > http://tracker.ceph.com/issues/3253
>>
>> This brings us to a limbo were we aren't supporting it in some places
>> but we do in some others.
>>
>> It was disabled for ceph-deploy, but ceph-ansible wants to support it
>> still (see:  https://bugzilla.redhat.com/show_bug.cgi?id=1459861 )
>
> I still haven't seen a case where custom cluster names for *daemons* are
> needed.  Only for client-side $cluster.conf info for connecting.
>
>> Sebastien argues that these reasons are strong enough to keep that
support in:
>>
>> - Ceph cluster on demand with containers
>
> With kubernetes, the cluster will exist in a cluster namespace, and
> daemons live in containers, so inside the container the cluster will be
> 'ceph'.
>
>> - Distributed compute nodes
>
> ?
>
>> - rbd-mirror integration as part of OSPd
>
> This is the client-side $cluster.conf for connecting to the remote
> cluster.
>
>> - Disaster scenario with OpenStack Cinder in OSPd
>
> Ditto.
>
>> The problem is that, as you can see with the ceph-disk PR just closed,
>> there are still other tools that have to implement the juggling of
>> custom cluster names
>> all over the place and they will hit some corner place where the
>> cluster name was not added and things will fail.
>>
>> Just recently ceph-volume hit one of these places:
>> https://bugzilla.redhat.com/show_bug.cgi?id=1507943
>>
>> Are we going to support custom cluster names? In what
>> context/scenarios are we going to allow it?
>
> It seems like we could drop this support in ceph-volume, unless someone
> can present a compelling reason to keep it?
>
> ...
>
> I'd almost want to go a step further and change
>
> /var/lib/ceph/$type/$cluster-$id/
>
> to
>
>  /var/lib/ceph/$type/$id
+1 for custom name support to be disabled from master/stable ansible releases,
and I think rbd-mirror and openstack are mostly configuration issues
that could use different conf files to talk to different clusters.


Agreed on the Openstack part. I actually changed nothing on that side of
things. The clients still run with a custom config name with no issues.

-Erik


>
> In kubernetes, we're planning on bind mounting the host's
> /var/lib/ceph/$namespace/$type/$id to the container's
> /var/lib/ceph/$type/ceph-$id.  It might be a good time to drop some of the
> awkward path names, though.  Or is it useless churn?
>
> sage
>
>
>
>>
>>
>> >
>> >>
>> >> Background:
>> >>
>> >> The cluster name concept was added to allow multiple clusters to have
>> >> daemons coexist on the same host.  At the time it was a hypothetical
>> >> requirement for a user that never actually made use of it, and the
>> >> support is kludgey:
>> >>
>> >>  - default cluster name is 'ceph'
>> >>  - default config is /etc/ceph/$cluster.conf, so that the normal
>> >> 'ceph.conf' still works
>> >>  - daemon data paths include the cluster name,
>> >>  /var/lib/ceph/osd/$cluster-$id
>> >>which is weird (but mostly people are used to it?)
>> >>  - any cli command you want to touch a non-ceph cluster name
>> >> needs -C $name or --cluster $name passed to it.
>> >>
>> >> Also, as of jewel,
>> >>
>> >>  - systemd only supports a single cluster per host, as defined by
$CLUSTER
>> >> in /etc/{sysconfig,default}/ceph
>> >>
>> >> which you'll notice removes support for the original "requirement".
>> >>
>> >> Also note that you can get the same effect by specifying the config
path
>> >> explicitly (-c /etc/ceph/foo.conf) along with the various options that
>> >> substitute $cluster in (e.g., osd_data=/var/lib/ceph/osd/$
cluster-$id).
>> >>
>> >>
>> >> Crap preventing us from removing this entirely:
>> >>
>> >>  - existing daemon directories for existing clusters
>> >>  - various scripts parse the cluster name out of paths
>> >>
>> >>
>> >> Converting an existing cluster "foo" back to "ceph":
>> >>
>> >>  - rename /etc/ceph/foo.conf -> ceph.conf
>> >>  - rename /var/lib/ceph/*/foo-* -> /var/lib/ceph/*/ceph-*
>> >>  - remove the CLUSTER=foo line in /etc/{default,sysconfig}/ceph
>> >>  - reboot
>> >>
>> >>
>> >> Questions:
>> >>
>> >>  - Does anybody on the list use a non-default cluster name?
>> >

Re: [ceph-users] removing cluster name support

2017-11-06 Thread Erik McCormick
On Fri, Jun 9, 2017 at 12:30 PM, Sage Weil  wrote:
> On Fri, 9 Jun 2017, Erik McCormick wrote:
>> On Fri, Jun 9, 2017 at 12:07 PM, Sage Weil  wrote:
>> > On Thu, 8 Jun 2017, Sage Weil wrote:
>> >> Questions:
>> >>
>> >>  - Does anybody on the list use a non-default cluster name?
>> >>  - If so, do you have a reason not to switch back to 'ceph'?
>> >
>> > It sounds like the answer is "yes," but not for daemons. Several users use
>> > it on the client side to connect to multiple clusters from the same host.
>> >
>>
>> I thought some folks said they were running with non-default naming
>> for daemons, but if not, then count me as one who does. This was
>> mainly a relic of the past, where I thought I would be running
>> multiple clusters on one host. Before long I decided it would be a bad
>> idea, but by then the cluster was already in heavy use and I couldn't
>> undo it.
>>
>> I will say that I am not opposed to renaming back to ceph, but it
>> would be great to have a documented process for accomplishing this
>> prior to deprecation. Even going so far as to remove --cluster from
>> deployment tools will leave me unable to add OSDs if I want to upgrade
>> when Luminous is released.
>
> Note that even if the tool doesn't support it, the cluster name is a
> host-local thing, so you can always deploy ceph-named daemons on other
> hosts.
>
> For an existing host, the removal process should be as simple as
>
>  - stop the daemons on the host
>  - rename /etc/ceph/foo.conf -> /etc/ceph/ceph.conf
>  - rename /var/lib/ceph/*/foo-* -> /var/lib/ceph/*/ceph-* (this mainly
> matters for non-osds, since the osd dirs will get dynamically created by
> ceph-disk, but renaming will avoid leaving clutter behind)
>  - comment out the CLUSTER= line in /etc/{sysconfig,default}/ceph (if
> you're on jewel)
>  - reboot
>
> If you wouldn't mind being a guinea pig and verifying that this is
> sufficient that would be really helpful!  We'll definitely want to
> document this process.
>
> Thanks!
> sage
>
Sitting here in a room with you reminded me I dropped the ball on
feeding back on the procedure. I did this a couple weeks ago and it
worked fine. I had a few problems with OSDs not wanting to unmount, so
I had to reboot each node along the way. I just used it as an excuse
to run updates.
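
Roughly, the quoted procedure came down to something like this on each host
(a rough sketch assuming a cluster named "foo" and the jewel-era layout, not
the exact commands I ran):

  systemctl stop ceph.target                       # stop all daemons on this host
  mv /etc/ceph/foo.conf /etc/ceph/ceph.conf
  for d in /var/lib/ceph/*/foo-*; do               # osd/mon/mds data dirs
      mv "$d" "$(dirname "$d")/ceph-$(basename "$d" | sed 's/^foo-//')"
  done
  sed -i 's/^CLUSTER=.*/#&/' /etc/sysconfig/ceph   # or /etc/default/ceph on Debian
  reboot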

-Erik
>
>>
>> > Nobody is colocating multiple daemons from different clusters on the same
>> > host.  Some have in the past but stopped.  If they choose to in the
>> > future, they can customize the systemd units themselves.
>> >
>> > The rbd-mirror daemon has a similar requirement to talk to multiple
>> > clusters as a client.
>> >
>> > This makes me conclude our current path is fine:
>> >
>> >  - leave existing --cluster infrastructure in place in the ceph code, but
>> >  - remove support for deploying daemons with custom cluster names from the
>> > deployment tools.
>> >
>> > This neatly avoids the systemd limitations for all but the most
>> > adventuresome admins and avoid the more common case of an admin falling
>> > into the "oh, I can name my cluster? cool! [...] oh, i have to add
>> > --cluster rover to every command? ick!" trap.
>> >
>>
>> Yeah, that was me in 2012. Oops.
>>
>> -Erik
>>
>> > sage
>> > ___
>> > ceph-users mailing list
>> > ceph-users@lists.ceph.com
>> > http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com
>> --
>> To unsubscribe from this list: send the line "unsubscribe ceph-devel" in
>> the body of a message to majord...@vger.kernel.org
>> More majordomo info at  http://vger.kernel.org/majordomo-info.html
>>
>>
___
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com


Re: [ceph-users] Creating a custom cluster name using ceph-deploy

2017-10-15 Thread Erik McCormick
Do not, under any circumstances, make a custom named cluster. There be pain
and suffering (and dragons) there, and official support for it has been
deprecated.

On Oct 15, 2017 6:29 PM, "Bogdan SOLGA"  wrote:

> Hello, everyone!
>
> We are trying to create a custom cluster name using the latest ceph-deploy
> version (1.5.39), but we keep getting the error:
>
> *'ceph-deploy new: error: subnet must have at least 4 numbers separated by
> dots like x.x.x.x/xx, but got: cluster_name'*
>
> We tried to run the new command using the following orders for the
> parameters:
>
>- *ceph-deploy new --cluster cluster_name ceph-mon-001*
>- *ceph-deploy new ceph-mon-001 --cluster cluster_name*
>
> The output of 'ceph-deploy new -h' no longer lists the '--cluster' option,
> but the 'man ceph-deploy' lists it.
>
> Any help is highly appreciated.
> Thank you,
> Bogdan
>
> ___
> ceph-users mailing list
> ceph-users@lists.ceph.com
> http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com
>
>
___
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com


Re: [ceph-users] Ceph monitoring

2017-10-02 Thread Erik McCormick
On Mon, Oct 2, 2017 at 11:55 AM, Matthew Vernon  wrote:
> On 02/10/17 12:34, Osama Hasebou wrote:
>> Hi Everyone,
>>
>> Is there a guide/tutorial about how to setup Ceph monitoring system
>> using collectd / grafana / graphite ? Other suggestions are welcome as
>> well !
>
> We just installed the collectd plugin for ceph, and pointed it at our
> graphite server; that did most of what we wanted (we also needed a
> script to monitor wear on our SSD devices).
>
> Making a dashboard is rather a matter of personal preference - we plot
> client and s3 i/o, network, server load & CPU use, and have indicator
> plots for numbers of osds up&in, and monitor quorum.
>
> [I could share our dashboard JSON, but it's obviously specific to our
> data sources]
>
> Regards,
>
> Matthew
>
>

I for one would love to see your dashboard. Host and data source names
can be easily replaced :)

-Erik

> --
>  The Wellcome Trust Sanger Institute is operated by Genome Research
>  Limited, a charity registered in England with number 1021457 and a
>  company registered in England with number 2742969, whose registered
>  office is 215 Euston Road, London, NW1 2BE.
> ___
> ceph-users mailing list
> ceph-users@lists.ceph.com
> http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com
___
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com


Re: [ceph-users] removing cluster name support

2017-06-09 Thread Erik McCormick
On Fri, Jun 9, 2017 at 12:07 PM, Sage Weil  wrote:
> On Thu, 8 Jun 2017, Sage Weil wrote:
>> Questions:
>>
>>  - Does anybody on the list use a non-default cluster name?
>>  - If so, do you have a reason not to switch back to 'ceph'?
>
> It sounds like the answer is "yes," but not for daemons. Several users use
> it on the client side to connect to multiple clusters from the same host.
>

I thought some folks said they were running with non-default naming
for daemons, but if not, then count me as one who does. This was
mainly a relic of the past, where I thought I would be running
multiple clusters on one host. Before long I decided it would be a bad
idea, but by then the cluster was already in heavy use and I couldn't
undo it.

I will say that I am not opposed to renaming back to ceph, but it
would be great to have a documented process for accomplishing this
prior to deprecation. Even going so far as to remove --cluster from
deployment tools will leave me unable to add OSDs if I want to upgrade
when Luminous is released.

> Nobody is colocating multiple daemons from different clusters on the same
> host.  Some have in the past but stopped.  If they choose to in the
> future, they can customize the systemd units themselves.
>
> The rbd-mirror daemon has a similar requirement to talk to multiple
> clusters as a client.
>
> This makes me conclude our current path is fine:
>
>  - leave existing --cluster infrastructure in place in the ceph code, but
>  - remove support for deploying daemons with custom cluster names from the
> deployment tools.
>
> This neatly avoids the systemd limitations for all but the most
> adventuresome admins and avoid the more common case of an admin falling
> into the "oh, I can name my cluster? cool! [...] oh, i have to add
> --cluster rover to every command? ick!" trap.
>

Yeah, that was me in 2012. Oops.

-Erik

> sage
> ___
> ceph-users mailing list
> ceph-users@lists.ceph.com
> http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com
___
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com


Re: [ceph-users] Ceph Giant Repo problem

2017-03-30 Thread Erik McCormick
Try setting

obsoletes=0

in /etc/yum.conf and see if that doesn't make it happier. The package is
clearly there and it even shows it available in your log.
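
For reference, that just means the [main] section of /etc/yum.conf ends up
looking something like this (everything else left as your distro ships it):

  [main]
  obsoletes=0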

-Erik

On Thu, Mar 30, 2017 at 8:55 PM, Vlad Blando  wrote:

> Hi Guys,
>
> I encountered some issues with installing the ceph package for giant; was there
> a change somewhere, or was I using the wrong repo information?
>
> ceph.repo
> -
> [Ceph]
> name=Ceph packages for $basearch
> baseurl=http://download.ceph.com/rpm-giant/rhel7/$basearch
> enabled=1
> priority=1
> gpgcheck=1
> type=rpm-md
> gpgkey=https://ceph.com/git/?p=ceph.git;a=blob_plain;f=keys/release.asc
>
> [Ceph-noarch]
> name=Ceph noarch packages
> baseurl=http://download.ceph.com/rpm-giant/rhel7/noarch
> enabled=1
> priority=1
> gpgcheck=1
> type=rpm-md
> gpgkey=https://ceph.com/git/?p=ceph.git;a=blob_plain;f=keys/release.asc
>
> [ceph-source]
> name=Ceph source packages
> baseurl=http://download.ceph.com/rpm-giant/rhel7/SRPMS
> enabled=1
> priority=1
> gpgcheck=1
> type=rpm-md
> gpgkey=https://ceph.com/git/?p=ceph.git;a=blob_plain;f=keys/release.asc
>
> 
>
> installation error
> 
> [root@ceph-test yum.repos.d]# yum install ceph
> Loaded plugins: priorities
> Ceph
>
>   | 2.9 kB  00:00:00
> Ceph-noarch
>
>| 2.9 kB  00:00:00
> ceph-source
>
>| 2.9 kB  00:00:00
> rhel-7-server-optional-rpms
>
>| 3.5 kB  00:00:00
> rhel-7-server-rpms
>
>   | 3.5 kB  00:00:00
> (1/9): ceph-source/primary_db
>
>| 4.4 kB  00:00:00
> (2/9): Ceph-noarch/primary_db
>
>| 4.1 kB  00:00:00
> (3/9): Ceph/x86_64/primary_db
>
>|  61 kB  00:00:01
> (4/9): rhel-7-server-optional-rpms/7Server/x86_64/group
>
>|  25 kB  00:00:01
> (5/9): rhel-7-server-rpms/7Server/x86_64/group
>
> | 701 kB  00:00:05
> (6/9): rhel-7-server-optional-rpms/7Server/x86_64/updateinfo
>
> | 1.3 MB  00:00:07
> (7/9): rhel-7-server-rpms/7Server/x86_64/updateinfo
>
>| 1.8 MB  00:00:06
> (8/9): rhel-7-server-optional-rpms/7Server/x86_64/primary_db
>
> | 5.0 MB  00:00:25
> (9/9): rhel-7-server-rpms/7Server/x86_64/primary_db
>
>|  34 MB  00:00:50
> 9 packages excluded due to repository priority protections
> Resolving Dependencies
> --> Running transaction check
> ---> Package ceph.x86_64 1:0.87.2-0.el7.centos will be installed
> --> Processing Dependency: python-ceph = 1:0.87.2-0.el7.centos for
> package: 1:ceph-0.87.2-0.el7.centos.x86_64
> Package python-ceph is obsoleted by python-rados, but obsoleting package
> does not provide for requirements
> --> Processing Dependency: ceph-common = 1:0.87.2-0.el7.centos for
> package: 1:ceph-0.87.2-0.el7.centos.x86_64
> --> Processing Dependency: libcephfs1 = 1:0.87.2-0.el7.centos for package:
> 1:ceph-0.87.2-0.el7.centos.x86_64
> --> Processing Dependency: librbd1 = 1:0.87.2-0.el7.centos for package:
> 1:ceph-0.87.2-0.el7.centos.x86_64
> --> Processing Dependency: librados2 = 1:0.87.2-0.el7.centos for package:
> 1:ceph-0.87.2-0.el7.centos.x86_64
> --> Processing Dependency: libaio.so.1(LIBAIO_0.4)(64bit) for package:
> 1:ceph-0.87.2-0.el7.centos.x86_64
> --> Processing Dependency: cryptsetup for package:
> 1:ceph-0.87.2-0.el7.centos.x86_64
> --> Processing Dependency: python-flask for package:
> 1:ceph-0.87.2-0.el7.centos.x86_64
> --> Processing Dependency: hdparm for package: 1:ceph-0.87.2-0.el7.centos.
> x86_64
> --> Processing Dependency: libaio.so.1(LIBAIO_0.1)(64bit) for package:
> 1:ceph-0.87.2-0.el7.centos.x86_64
> --> Processing Dependency: libcephfs.so.1()(64bit) for package:
> 1:ceph-0.87.2-0.el7.centos.x86_64
> --> Processing Dependency: libboost_system-mt.so.1.53.0()(64bit) for
> package: 1:ceph-0.87.2-0.el7.centos.x86_64
> --> Processing Dependency: libaio.so.1()(64bit) for package:
> 1:ceph-0.87.2-0.el7.centos.x86_64
> --> Processing Dependency: libboost_thread-mt.so.1.53.0()(64bit) for
> package: 1:ceph-0.87.2-0.el7.centos.x86_64
> --> Processing Dependency: librados.so.2()(64bit) for package:
> 1:ceph-0.87.2-0.el7.centos.x86_64
> --> Running transaction check
> ---> Package boost-system.x86_64 0:1.53.0-26.el7 will be installed
> ---> Package boost-thread.x86_64 0:1.53.0-26.el7 will be installed
> ---> Package ceph.x86_64 1:0.87.2-0.el7.centos will be installed
> --> Processing Dependency: python-ceph = 1:0.87.2-0.el7.centos for
> package: 1:ceph-0.87.2-0.el7.centos.x86_64
> Package python-ceph is obsoleted by python-rados, but obsoleting package
> does not provide for requirements
> --> Processing Dependency: python-flask for package:
> 1:ceph-0.87.2-0.el7.centos.x86_64
> ---> Package ceph-common.x86_64 1:0.87.2-0.el7.centos will be installed
> --> Processing Dependency: python-ceph = 1:0.87.2-0.el7.centos for
> package: 1:ceph-common-0.87.2-0.el7.centos.x86_64
> Package python-ceph is obsoleted by python-rados, but obsoleting package
> does not provide for requirements
> --> Processing Dependency: redhat-lsb-core for package:
> 1:ceph-common-0.87.2-0.el7.centos.x86_64
> ---> Package cryptsetup.x86_64 0:1.7.2-1.el7 will be insta

[ceph-users] Change ownership of objects

2016-12-07 Thread Erik McCormick
Hello everyone,

I am running Ceph (firefly) Radosgw integrated with Openstack
Keystone. Recently we built a whole new Openstack cloud and created
users in that cloud. The names were the same, but the UUIDs are
not. Both clouds are using the same Ceph cluster with their own RGW.

I have managed to change the ownership of the buckets using
radosgw-admin bucket link, and I've managed to update the key for them
by dumping the bucket metadata, changing the key, and reimporting it.
Users can now list all objects, and create / delete new objects.
However, they cannot access any of the old objects.

I can see the owner in radosgw-admin bucket list. I traced its
location down to the keystore for the bucket index. However, I can't
seem to find a good way to update the owner. There doesn't seem to be
any way to export that metadata other than doing a getomapval on the
object. I attempted to use a hex editor to modify the ID and then do a
setomapval piping the file back in, but it doesn't seem to want to
take it.
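
For reference, the kind of metadata round-trip I mean looks roughly like this
(the bucket name and instance ID below are made up, and this is a sketch of
the idea rather than the exact commands I ran):

  radosgw-admin metadata get bucket.instance:mybucket:default.12345.1 > bucket.json
  # edit the owner/user fields in bucket.json, then:
  radosgw-admin metadata put bucket.instance:mybucket:default.12345.1 < bucket.json
  radosgw-admin bucket link --bucket=mybucket --uid=<new-keystone-uuid>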

Thanks in advance for any help you can provide.

Cheers,
Erik
___
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com


Re: [ceph-users] 10Gbit switch advice for small ceph cluster upgrade

2016-10-27 Thread Erik McCormick
On Oct 27, 2016 3:16 PM, "Oliver Dzombic"  wrote:
>
> Hi,
>
> I can recommend
>
> X710-DA2
>
We also use this NIC for everything.

> Our 10G switching goes over our blade infrastructure, so I can't recommend
> something for you there.
>
> I assume that the usual juniper/cisco will do a good job. I think, for
> ceph, the switch is not the major point of a setup.
>
We use Edge-Core 5712-54x running Cumulus Linux. Anything off their
compatibility list would be good though. The switch has 48 10G SFP+ ports.
We just use copper cables with attached SFPs. It also has 6 40G ports. The
switch cost around $4800 and the cumulus license is about 3k for a
perpetual license.

> --
> Mit freundlichen Gruessen / Best regards
>
> Oliver Dzombic
> IP-Interactive
>
> mailto:i...@ip-interactive.de
>
> Anschrift:
>
> IP Interactive UG ( haftungsbeschraenkt )
> Zum Sonnenberg 1-3
> 63571 Gelnhausen
>
> HRB 93402 beim Amtsgericht Hanau
> Geschäftsführung: Oliver Dzombic
>
> Steuer Nr.: 35 236 3622 1
> UST ID: DE274086107
>
>
> Am 27.10.2016 um 15:04 schrieb Jelle de Jong:
> > Hello everybody,
> >
> > I want to upgrade my small ceph cluster to 10Gbit networking and would
> > like some recommendation from your experience.
> >
> > What is your recommended budget 10Gbit switch suitable for Ceph?
> >
> > I would like to use X550-T1 intel adapters in my nodes.
> >
> > Or is fibre recommended?
> > X520-DA2
> > X520-SR1
> >
> > Kind regards,
> >
> > Jelle de Jong
> > GNU/Linux Consultant
> > ___
> > ceph-users mailing list
> > ceph-users@lists.ceph.com
> > http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com
> ___
> ceph-users mailing list
> ceph-users@lists.ceph.com
> http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com

-Erik
___
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com


Re: [ceph-users] hadoop on cephfs

2016-04-30 Thread Erik McCormick
I think what you are thinking of is the driver that was built to actually
replace hdfs with rbd. As far as I know that thing had a very short
lifespan on one version of hadoop. Very sad.

As to what you proposed:

1) Don't use Cephfs in production pre-jewel.

2) running hdfs on top of ceph is a massive waste of disk and fairly
pointless as you make replicas of replicas.

-Erik
On Apr 29, 2016 9:20 PM, "Bill Sharer"  wrote:

> Actually this guy is already a fan of Hadoop.  I was just wondering
> whether anyone has been playing around with it on top of cephfs lately.  It
> seems like the last round of papers were from around cuttlefish.
>
> On 04/28/2016 06:21 AM, Oliver Dzombic wrote:
>
>> Hi,
>>
>> bad idea :-)
>>
>> It's of course nice and important to drag developers towards a
>> new/promising technology/software.
>>
>> But if the technology does not match the individually required
>> specifications, you will just risk showing this developer how bad this
>> new/promising technology is.
>>
>> So you will just reach the opposite of what you want.
>>
>> So before you do something, usually big, like hadoop on
>> unstable software, maybe you should not use it.
>>
>> For the good of the developer, for your good and for the good of the
>> reputation of the new/promising technology/software you wish.
>>
>> To force a penguin to somehow live in the Sahara might be possible ( at
>> least for some time ), but usually not a good idea ;-)
>>
>>
> ___
> ceph-users mailing list
> ceph-users@lists.ceph.com
> http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com
>
___
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com


Re: [ceph-users] Rename Ceph cluster

2015-08-18 Thread Erik McCormick
I've got a custom named cluster integrated with Openstack (Juno) and didn't
run into any hard-coded name issues that I can recall. Where are you seeing
that?

As to the name change itself, I think it's really just a label applying to
a configuration set. The name doesn't actually appear *in* the
configuration files. It stands to reason you should be able to rename the
configuration files on the client side and leave the cluster alone. It'd be
worth trying in a test environment anyway.

-Erik
On Aug 18, 2015 7:59 AM, "Jan Schermer"  wrote:

> This should be simple enough
>
> mv /etc/ceph/ceph-prod.conf /etc/ceph/ceph.conf
>
> No? :-)
>
> Or you could set this in nova.conf:
> images_rbd_ceph_conf=/etc/ceph/ceph-prod.conf
>
> Obviously since different parts of openstack have their own configs, you'd
> have to do something similiar for cinder/glance... so not worth the hassle.
>
> Jan
>
> > On 18 Aug 2015, at 13:50, Vasiliy Angapov  wrote:
> >
> > Hi,
> >
> > Does anyone know what steps should be taken to rename a Ceph cluster?
> > Btw, is it ever possbile without data loss?
> >
> > Background: I have a cluster named "ceph-prod" integrated with
> > OpenStack, however I found out that the default cluster name "ceph" is
> > very much hardcoded into OpenStack so I decided to change it to the
> > default value.
> >
> > Regards, Vasily.
> > ___
> > ceph-users mailing list
> > ceph-users@lists.ceph.com
> > http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com
>
> ___
> ceph-users mailing list
> ceph-users@lists.ceph.com
> http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com
>
___
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com


Re: [ceph-users] QEMU Venom Vulnerability

2015-05-19 Thread Erik McCormick
Sorry, I made the assumption you were on 7. If you're on 6 then I defer to
someone else ;)

If you're on 7, go here.

http://ftp.redhat.com/pub/redhat/linux/enterprise/7Server/en/RHEV/SRPMS/
On May 19, 2015 2:47 PM, "Georgios Dimitrakakis" 
wrote:

> Erik,
>
> are you talking about the ones here :
> http://ftp.redhat.com/redhat/linux/enterprise/6Server/en/RHEV/SRPMS/ ???
>
> From what I see the version is rather "small" 0.12.1.2-2.448
>
> How can one verify that it has been patched against the venom vulnerability?
>
> Additionally I only see the qemu-kvm package and not the qemu-img. Is it
> essential to update both in order to have a working CentOS system or can I
> just proceed with the qemu-kvm?
>
> Robert, any idea where I can find the latest and patched SRPMs... I have
> been building v.2.3.0 from source but I am very reluctant to use it in my
> system :-)
>
> Best,
>
> George
>
>
>  You can also just fetch the rhev SRPMs  and build those. They have
>> rbd enabled already.
>> On May 19, 2015 12:31 PM, "Robert LeBlanc"  wrote:
>>
>>  -BEGIN PGP SIGNED MESSAGE-
>>> Hash: SHA256
>>>
>>> You should be able to get the SRPM, extract the SPEC file and use
>>> that
>>> to build a new package. You should be able to tweak all the compile
>>> options as well. Im still really new to building/rebuilding RPMs
>>> but
>>> Ive been able to do this for a couple of packages.
>>> - 
>>> Robert LeBlanc
>>> GPG Fingerprint 79A2 9CA4 6CC4 45DD A904  C70E E654 3BB2 FA62 B9F1
>>>
>>> On Tue, May 19, 2015 at 12:33 PM, Georgios Dimitrakakis  wrote:
>>> > I am trying to build the packages manually and I was wondering
>>> > is the flag --enable-rbd enough to have full Ceph functionality?
>>> >
>>> > Does anybody know what else flags should I include in order to
>>> have the same
>>> > functionality as the original CentOS package plus the RBD
>>> support?
>>> >
>>> > Regards,
>>> >
>>> > George
>>> >
>>> >
>>> > On Tue, 19 May 2015 13:45:50 +0300, Georgios Dimitrakakis wrote:
>>> >>
>>> >> Hi!
>>> >>
>>> >> The QEMU Venom vulnerability (http://venom.crowdstrike.com/ [1])
>>> got my
>>> >> attention and I would
>>> >> like to know what are you people doing in order to have the
>>> latest
>>> >> patched QEMU version
>>> >> working with Ceph RBD?
>>> >>
>>> >> In my case I am using the qemu-img and qemu-kvm packages
>>> provided by
>>> >> Ceph (http://ceph.com/packages/ceph-extras/rpm/centos6/x86_64/
>>> [2]) in
>>> >> order to have RBD working on CentOS6 since the default
>>> repository
>>> >> packages do not work!
>>> >>
>>> >> If I want to update to the latest QEMU packages which ones are
>>> known
>>> >> to work with Ceph RBD?
>>> >> I have seen some people mentioning that Fedora packages are
>>> working
>>> >> but I am not sure if they have the latest packages available and
>>> if
>>> >> they are going to work eventually.
>>> >>
>>> >> Is building manually the QEMU packages the only way???
>>> >>
>>> >>
>>> >> Best regards,
>>> >>
>>> >>
>>> >> George
>>> >> ___
>>> >> ceph-users mailing list
>>> >> ceph-users@lists.ceph.com [3]
>>> >> http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com [4]
>>> >
>>> > ___
>>> > ceph-users mailing list
>>> > ceph-users@lists.ceph.com [5]
>>> > http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com [6]
>>>
>>> -BEGIN PGP SIGNATURE-
>>> Version: Mailvelope v0.13.1
>>> Comment: https://www.mailvelope.com [7]
>>>
>>> wsFcBAEBCAAQBQJVW4+RCRDmVDuy+mK58QAAg8AP/jqmQFYEwOeGRTJigk9M
>>> pBhr34vyA3mky+BjjW9pt2tydECOH0p5PlYXBfhrQeg2B/yT0uVUKYbYkdBU
>>> fY85UhS5NFdm7VyFyMPSGQwZlXIADF8YJw+Zbj1tpfRvbCi/sntbvGQk+9X8
>>> usVSwBTbWKhYyMW8J5edppv72fMwoVjmoNXuE7wCUoqwxpQBUt0ouap6gDNd
>>> Cu0ZMu+RKq+gfLGcIeSIhsDfV0/LHm2QBO/XjNZtMjyomOWNk9nYHp6HGJxH
>>> MV/EoF4dYoCqHcODPjU2NvesQfYkmqfFoq/n9q/fMEV5JQ+mDfXqc2BcQUsx
>>> 40LDWDs+4BTw0KI+dNT0XUYTw+O0WnXFzgIn1wqXEs8pyOSJy1gCcnOGEavy
>>> 4PqYasm1g+5uzggaIddFPcWHJTw5FuFfjCnHX8Jo3EeQVDM6Vg8FPkkb5JQk
>>> sqxVRQWsF89gGRUbHIQWdkgy3PZN0oTkBvUfflmE/cUq/r40sD4c25D+9Gti
>>> Gj0IKG5uqMaHud3Hln++0ai5roOghoK0KxcDoBTmFLaQSNo9c4CIFCDf2kJ3
>>> idH5tVozDSgvFpgBFLFatb7isctIYf4Luh/XpLXUzdjklGGzo9mhOjXsbm56
>>> WCJZOkQ/OY1UFysMV5+tSSEn7TsF7Np9NagZB7AHhYuTKlOnbv3QJlhATOPp
>>> u4wP
>>> =SsM2
>>> -END PGP SIGNATURE-
>>> ___
>>> ceph-users mailing list
>>> ceph-users@lists.ceph.com [8]
>>> http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com [9]
>>>
>>
>>
>> Links:
>> --
>> [1] http://venom.crowdstrike.com/
>> [2] http://ceph.com/packages/ceph-extras/rpm/centos6/x86_64/
>> [3] mailto:ceph-users@lists.ceph.com
>> [4] http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com
>> [5] mailto:ceph-users@lists.ceph.com
>> [6] http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com
>> [7] https://www.mailvelope.com
>> [8] mailto:ceph-users@lists.ceph.com
>> [9] http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com
>> [10] mailto:rob

Re: [ceph-users] QEMU Venom Vulnerability

2015-05-19 Thread Erik McCormick
You can also just fetch the rhev SRPMs  and build those. They have rbd
enabled already.
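
A rough sketch of that rebuild (package and spec file names are from memory,
so adjust to whatever the SRPM is actually called):

  rpm -ivh qemu-kvm-rhev-*.src.rpm              # unpacks into ~/rpmbuild
  yum-builddep -y ~/rpmbuild/SPECS/qemu-kvm.spec
  rpmbuild -ba ~/rpmbuild/SPECS/qemu-kvm.spec
  ls ~/rpmbuild/RPMS/x86_64/                    # resulting packages land here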
On May 19, 2015 12:31 PM, "Robert LeBlanc"  wrote:

> -BEGIN PGP SIGNED MESSAGE-
> Hash: SHA256
>
> You should be able to get the SRPM, extract the SPEC file and use that
> to build a new package. You should be able to tweak all the compile
> options as well. I'm still really new to building/rebuilding RPMs but
> I've been able to do this for a couple of packages.
> - 
> Robert LeBlanc
> GPG Fingerprint 79A2 9CA4 6CC4 45DD A904  C70E E654 3BB2 FA62 B9F1
>
>
> On Tue, May 19, 2015 at 12:33 PM, Georgios Dimitrakakis  wrote:
> > I am trying to build the packages manually and I was wondering
> > is the flag --enable-rbd enough to have full Ceph functionality?
> >
> > Does anybody know what else flags should I include in order to have the
> same
> > functionality as the original CentOS package plus the RBD support?
> >
> > Regards,
> >
> > George
> >
> >
> > On Tue, 19 May 2015 13:45:50 +0300, Georgios Dimitrakakis wrote:
> >>
> >> Hi!
> >>
> >> The QEMU Venom vulnerability (http://venom.crowdstrike.com/) got my
> >> attention and I would
> >> like to know what are you people doing in order to have the latest
> >> patched QEMU version
> >> working with Ceph RBD?
> >>
> >> In my case I am using the qemu-img and qemu-kvm packages provided by
> >> Ceph (http://ceph.com/packages/ceph-extras/rpm/centos6/x86_64/) in
> >> order to have RBD working on CentOS6 since the default repository
> >> packages do not work!
> >>
> >> If I want to update to the latest QEMU packages which ones are known
> >> to work with Ceph RBD?
> >> I have seen some people mentioning that Fedora packages are working
> >> but I am not sure if they have the latest packages available and if
> >> they are going to work eventually.
> >>
> >> Is building manually the QEMU packages the only way???
> >>
> >>
> >> Best regards,
> >>
> >>
> >> George
> >> ___
> >> ceph-users mailing list
> >> ceph-users@lists.ceph.com
> >> http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com
> >
> > ___
> > ceph-users mailing list
> > ceph-users@lists.ceph.com
> > http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com
>
> -BEGIN PGP SIGNATURE-
> Version: Mailvelope v0.13.1
> Comment: https://www.mailvelope.com
>
> wsFcBAEBCAAQBQJVW4+RCRDmVDuy+mK58QAAg8AP/jqmQFYEwOeGRTJigk9M
> pBhr34vyA3mky+BjjW9pt2tydECOH0p5PlYXBfhrQeg2B/yT0uVUKYbYkdBU
> fY85UhS5NFdm7VyFyMPSGQwZlXIADF8YJw+Zbj1tpfRvbCi/sntbvGQk+9X8
> usVSwBTbWKhYyMW8J5edppv72fMwoVjmoNXuE7wCUoqwxpQBUt0ouap6gDNd
> Cu0ZMu+RKq+gfLGcIeSIhsDfV0/LHm2QBO/XjNZtMjyomOWNk9nYHp6HGJxH
> MV/EoF4dYoCqHcODPjU2NvesQfYkmqfFoq/n9q/fMEV5JQ+mDfXqc2BcQUsx
> 40LDWDs+4BTw0KI+dNT0XUYTw+O0WnXFzgIn1wqXEs8pyOSJy1gCcnOGEavy
> 4PqYasm1g+5uzggaIddFPcWHJTw5FuFfjCnHX8Jo3EeQVDM6Vg8FPkkb5JQk
> sqxVRQWsF89gGRUbHIQWdkgy3PZN0oTkBvUfflmE/cUq/r40sD4c25D+9Gti
> Gj0IKG5uqMaHud3Hln++0ai5roOghoK0KxcDoBTmFLaQSNo9c4CIFCDf2kJ3
> idH5tVozDSgvFpgBFLFatb7isctIYf4Luh/XpLXUzdjklGGzo9mhOjXsbm56
> WCJZOkQ/OY1UFysMV5+tSSEn7TsF7Np9NagZB7AHhYuTKlOnbv3QJlhATOPp
> u4wP
> =SsM2
> -END PGP SIGNATURE-
> ___
> ceph-users mailing list
> ceph-users@lists.ceph.com
> http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com
>
___
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com


Re: [ceph-users] Rados Gateway and keystone

2015-04-13 Thread Erik McCormick
I haven't really used the S3 stuff much, but the credentials should be in
keystone already. If you're in horizon, you can download them under Access
and Security->API Access. Using the CLI you can use the openstack client
like "openstack credential " or with
the keystone client like "keystone ec2-credentials-list", etc.  Then you
should be able to feed those credentials to the rgw like a normal S3 API
call.
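
A quick sketch of that flow (the endpoint and key values are placeholders,
and s3cmd is just one example of an S3 client):

  keystone ec2-credentials-create                # if the user has none yet
  keystone ec2-credentials-list                  # shows the access/secret pair
  s3cmd --access_key=<access> --secret_key=<secret> --host=rgw.example.com ls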

Cheers,
Erik

On Mon, Apr 13, 2015 at 10:16 AM,  wrote:

> Hi all,
>
> Coming back to that issue.
>
> I successfully used keystone users for the rados gateway and the swift API
> but I still don't understand how it can work with S3 API and i.e. S3 users
> (AccessKey/SecretKey)
>
> I found a swift3 initiative but I think It's only compliant in a pure
> OpenStack swift environment  by setting up a specific plug-in.
> https://github.com/stackforge/swift3
>
> A rgw can be, at the same, time under keystone control and  standard
> radosgw-admin if
> - for swift, you use the right authentication service (keystone or
> internal)
> - for S3, you use the internal authentication service
>
> So, my questions are still valid.
> How can a rgw work for S3 users if they are stored in keystone? Which is
> the accesskey and secretkey?
> What is the purpose of "rgw s3 auth use keystone" parameter ?
>
> Best regards
>
> --
> De : ceph-users [mailto:ceph-users-boun...@lists.ceph.com] De la part de
> ghislain.cheval...@orange.com
> Envoyé : lundi 23 mars 2015 14:03
> À : ceph-users
> Objet : [ceph-users] Rados Gateway and keystone
>
> Hi All,
>
> I just would to be sure about keystone configuration for Rados Gateway.
>
> I read the documentation http://ceph.com/docs/master/radosgw/keystone/
> and http://ceph.com/docs/master/radosgw/config-ref/?highlight=keystone
> but I didn't catch if after having configured the rados gateway
> (ceph.conf) in order to use keystone, it becomes mandatory to create all
> the users in it.
>
> In other words, can a rgw be, at the same, time under keystone control
> and  standard radosgw-admin ?
> How does it work for S3 users ?
> What is the purpose of "rgw s3 auth use keystone" parameter ?
>
> Best regards
>
> - - - - - - - - - - - - - - - - -
> Ghislain Chevalier
> +33299124432
> +33788624370
> ghislain.cheval...@orange.com
>
> _
>
> Ce message et ses pieces jointes peuvent contenir des informations
> confidentielles ou privilegiees et ne doivent donc
> pas etre diffuses, exploites ou copies sans autorisation. Si vous avez
> recu ce message par erreur, veuillez le signaler
> a l'expediteur et le detruire ainsi que les pieces jointes. Les messages
> electroniques etant susceptibles d'alteration,
> Orange decline toute responsabilite si ce message a ete altere, deforme ou
> falsifie. Merci.
>
> This message and its attachments may contain confidential or privileged
> information that may be protected by law;
> they should not be distributed, used or copied without authorisation.
> If you have received this email in error, please notify the sender and
> delete this message and its attachments.
> As emails may be altered, Orange is not liable for messages that have been
> modified, changed or falsified.
> Thank you.
>
>
> _
>
> Ce message et ses pieces jointes peuvent contenir des informations
> confidentielles ou privilegiees et ne doivent donc
> pas etre diffuses, exploites ou copies sans autorisation. Si vous avez
> recu ce message par erreur, veuillez le signaler
> a l'expediteur et le detruire ainsi que les pieces jointes. Les messages
> electroniques etant susceptibles d'alteration,
> Orange decline toute responsabilite si ce message a ete altere, deforme ou
> falsifie. Merci.
>
> This message and its attachments may contain confidential or privileged
> information that may be protected by law;
> they should not be distributed, used or copied without authorisation.
> If you have received this email in error, please notify the sender and
> delete this message and its attachments.
> As emails may be altered, Orange is not liable for messages that have been
> modified, changed or falsified.
> Thank you.
>
> ___
> ceph-users mailing list
> ceph-users@lists.ceph.com
> http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com
>
___
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com


Re: [ceph-users] ceph and glance... permission denied??

2015-04-06 Thread Erik McCormick
Glance needs some additional permissions including write access to the pool
you want to add images to. See the docs at:

http://ceph.com/docs/master/rbd/rbd-openstack/
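
The key bit from those docs is a glance key with write caps on the images
pool, something along the lines of (client and pool names assumed):

  ceph auth get-or-create client.glance mon 'allow r' \
      osd 'allow class-read object_prefix rbd_children, allow rwx pool=images'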

Cheers,
Erik
On Apr 6, 2015 7:21 AM,  wrote:

> Hi, first off: long time reader, first time poster :)..
> I have a 4 node ceph cluster (~12TB in total) and an openstack cloud
> (juno) running.
> Everything we have is Suse based and ceph 0.80.8
>
> Now, the cluster works fine.. :
>
> cluster 54636e1e-aeb2-47a3-8cc6-684685264b63
>  health HEALTH_OK
>  monmap e1: 3 mons at
> {ceph01=
> 10.70.0.100:6789/0,ceph03=10.70.0.102:6789/0,ceph04=10.70.0.103:6789/0},
> election epoch 6, quorum 0,1,2 ceph01,ceph03,ceph04
>  osdmap e40: 7 osds: 7 up, 7 in
>   pgmap v78: 447 pgs, 5 pools, 0 bytes data, 0 objects
> 254 MB used, 12986 GB / 12986 GB avail
>  447 active+clean
>
> I also have pools for images and volumes ready:
> ceph04:~ # ceph osd lspools
> 0 data,1 metadata,2 rbd,3 volumes,4 images,
>
> and i have the keyrings and permissions done:
>
> client.admin
> key: X
> caps: [mds] allow
> caps: [mon] allow *
> caps: [osd] allow *
> client.bootstrap-mds
> key: X
> caps: [mon] allow profile bootstrap-mds
> client.bootstrap-osd
> key:X
> caps: [mon] allow profile bootstrap-osd
> client.glance
> key: X
> caps: [mon] allow r
> caps: [osd] allow class-read object_prefix rbd_children, allow rwx
> \
> pool=images
> client.volumes
> key: X
> caps: [mon] allow r
> caps: [osd] allow class-read object_prefix rbd_children, allow rwx
> \
> pool=volumes
>
>
> I have copied the files to the OpenStack glance server and added the
> keyring section to ceph.conf:
>
> mon_initial_members = ceph01, ceph03, ceph04
> mon_host = 10.70.0.100,10.70.0.102,10.70.0.103
> auth_cluster_required = cephx
> auth_service_required = cephx
> auth_client_required = cephx
> filestore_xattr_use_omap = true
>
> [client.glance]
> keyring=/etc/glance/ceph.client.glance.keyring
>
> The glance user has permissions to read the files.
>
> Now,
> when I execute this command:
> glance  image-create --name CIRROS --is-public true --disk-format qcow2
> --container-format bare --file  cirros-0.3.3-x86_64-disk.img
>
> I get this as a response:
>
>  500 Internal Server Error
>   Failed to upload image 3fc9fe83-cc52-4481-b95c-2b5724c1d971
>
> and in /var/log/glance/api.log I get this:
> 2015-04-06 14:15:49.097 15203 TRACE glance.api.v1.upload_utils
> features=rbd.RBD_FEATURE_LAYERING)
> 2015-04-06 14:15:49.097 15203 TRACE glance.api.v1.upload_utils   File
> "/usr/lib64/python2.6/site-packages/rbd.py", line 219, in create
> 2015-04-06 14:15:49.097 15203 TRACE glance.api.v1.upload_utils raise
> make_ex(ret, 'error creating image')
> 2015-04-06 14:15:49.097 15203 TRACE glance.api.v1.upload_utils
> PermissionError: error creating image
> 2015-04-06 14:15:49.097 15203 TRACE glance.api.v1.upload_utils
>
>
> I am a bit stumped... on the ceph cluster I see nothing in the logs. It's
> almost as if the request won't even leave the glance server.
>
> Any ideas here? I would really appreciate it..
> Thanks already,
>
> //f
>
> ___
> ceph-users mailing list
> ceph-users@lists.ceph.com
> http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com
>
___
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com


Re: [ceph-users] Ceph and Openstack

2015-04-02 Thread Erik McCormick
On Thu, Apr 2, 2015 at 12:18 PM, Quentin Hartman <
qhart...@direwolfdigital.com> wrote:

> Hm, even lacking the mentions of rbd in the glance docs, and the lack of
> cephx auth information in the config, glance seems to be working after all.
> So, hooray! It was probably working all along, I just hadn't gotten to
> really testing it since I was getting blocked by my typo on the cinder
> config.
>
>
>
Glance sets defaults for almost everything, so just enabling the default
store will work. I thought you needed to specify a username still, but
maybe that's defaulted now as well. Glad it's working. So Quentin is 100%
working now, and Iain has no Cinder and slow Glance. Right?
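
For reference, a minimal rbd store setup for a Juno glance-api.conf looks
roughly like the sketch below (option names taken from the glance_store
values that show up in the debug logs; on some packagings they are still
read from [DEFAULT] rather than [glance_store]):

[glance_store]
stores = rbd
default_store = rbd
rbd_store_pool = images
rbd_store_user = glance
rbd_store_ceph_conf = /etc/ceph/ceph.conf
rbd_store_chunk_size = 8

With cephx enabled, glance also has to be able to find the client.glance
keyring, e.g. via a keyring= line in a [client.glance] section of ceph.conf
that the glance user can read.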


> Erik -
>
> Here's my output for the requested grep (though I am on Ubuntu, so the
> path was slightly different):
>
> cfg.IntOpt('rbd_store_chunk_size', default=DEFAULT_CHUNKSIZE,
> def __init__(self, name, store, chunk_size=None):
> self.chunk_size = chunk_size or store.READ_CHUNKSIZE
> length = min(self.chunk_size, bytes_left)
> chunk = self.conf.glance_store.rbd_store_chunk_size
> self.chunk_size = chunk * (1024 ** 2)
> self.READ_CHUNKSIZE = self.chunk_size
> def get(self, location, offset=0, chunk_size=None, context=None):
> return (ImageIterator(loc.image, self, chunk_size=chunk_size),
> chunk_size or self.get_size(location))
>
>
> This all looks correct, so any slowness isn't the bug I was thinking of.

>
> QH
>
> On Thu, Apr 2, 2015 at 10:06 AM, Erik McCormick <
> emccorm...@cirrusseven.com> wrote:
>
>> The RDO glance-store package had a bug in it that miscalculated the chunk
>> size. I should hope that it's been patched by Red Hat by now, since the fix was
>> committed upstream before the first Juno release, but perhaps not. The
>> symptom of the bug was horribly slow uploads to glance.
>>
>> Run this and send back the output:
>>
>> grep chunk_size
>> /usr/lib/python2.7/site-packages/glance_store/_drivers/rbd.py
>>
>> -Erik
>>
>> On Thu, Apr 2, 2015 at 7:34 AM, Iain Geddes 
>> wrote:
>>
>>> Oh, apologies, I missed the versions ...
>>>
>>> # glance --version   :   0.14.2
>>> # cinder --version   :   1.1.1
>>> # ceph -v:   ceph version 0.87.1
>>> (283c2e7cfa2457799f534744d7d549f83ea1335e)
>>>
>>> From rpm I can confirm that Cinder and Glance are both of the February
>>> 2015 (2014.2.2, Juno) vintage:
>>>
>>> # rpm -qa |grep -e ceph -e glance -e cinder
>>> ceph-0.87.1-0.el7.x86_64
>>> libcephfs1-0.87.1-0.el7.x86_64
>>> ceph-common-0.87.1-0.el7.x86_64
>>> python-ceph-0.87.1-0.el7.x86_64
>>> openstack-cinder-2014.2.2-1.el7ost.noarch
>>> python-cinder-2014.2.2-1.el7ost.noarch
>>> python-cinderclient-1.1.1-1.el7ost.noarch
>>> python-glanceclient-0.14.2-2.el7ost.noarch
>>> python-glance-2014.2.2-1.el7ost.noarch
>>> python-glance-store-0.1.10-2.el7ost.noarch
>>> openstack-glance-2014.2.2-1.el7ost.noarch
>>>
>>> On Thu, Apr 2, 2015 at 4:24 AM, Iain Geddes 
>>> wrote:
>>>
>>>> Thanks Karan/Quentin/Erik,
>>>>
>>>> I admit up front that this is all new to me as my background is optical
>>>> transport rather than server/storage admin!
>>>>
>>>> I'm reassured to know that it should work and this is why I'm
>>>> completely willing to believe that it's something that I'm doing wrong ...
>>>> but unfortunately I can't see it based on the RDO Havana/Ceph integration
>>>> guide or http://ceph.com/docs/master/rbd/rbd-openstack/. Essentially I
>>>> have extracted everything so that it can be copy/pasted so I am guaranteed
>>>> consistency - and this has the added advantage that it's easy to compare
>>>> what was done with what was documented.
>>>>
>>>> Just to keep everything clean, I've just restarted the Cinder and
>>>> Glance processes and do indeed see them establish with the same responses
>>>> that you showed:
>>>>
>>>> *Cinder*
>>>>
>>>> 2015-04-02 10:50:54.990 16447 INFO cinder.openstack.common.service [-]
>>>> Caught SIGTERM, stopping children
>>>> 2015-04-02 10:50:54.992 16447 INFO cinder.openstack.common.service [-]
>>>> Waiting on 1 children to exit
>>>> 2015-04-02 10:52:25.273 17366 INFO cinder.openstack.common.service [-]
>>>> Starting 1 workers

Re: [ceph-users] Ceph and Openstack

2015-04-02 Thread Erik McCormick
>>> glance_store.rbd_store_pool= images log_opt_values
>>> /usr/lib/python2.7/site-packages/oslo/config/cfg.py:2004
>>> 2015-04-02 10:58:37.142 18302 DEBUG glance.common.config [-]
>>> glance_store.rbd_store_user= glance log_opt_values
>>> /usr/lib/python2.7/site-packages/oslo/config/cfg.py:2004
>>> 2015-04-02 10:58:37.143 18302 DEBUG glance.common.config [-]
>>> glance_store.stores= ['rbd'] log_opt_values
>>> /usr/lib/python2.7/site-packages/oslo/config/cfg.py:2004
>>>
>>>
>>> Debug of the api really doesn't reveal anything either as far as I can
>>> see. Attempting an image-create from the CLI:
>>>
>>> glance image-create --name "cirros-0.3.3-x86_64" --file
>>> cirros-0.3.3-x86_64-disk.raw --disk-format raw --container-format bare
>>> --is-public True --progress
>>> returns log entries that can be seen in the attached which appears to
>>> show that the process has started ... but progress never moves beyond 4%
>>> and I haven't seen any further log messages. openstack-status shows all the
>>> processes to be up, and Glance images as saving. Given that the top one was
>>> through the GUI yesterday I'm guessing it's not going to finish any time
>>> soon!
>>>
>>> == Glance images ==
>>>
>>> +--+-+-+--+--++
>>> | ID   | Name| Disk
>>> Format | Container Format | Size | Status |
>>>
>>> +--+-+-+--+--++
>>> | f77429b2-17fd-4ef6-97a8-f710862182c6 | Cirros Raw  | raw
>>>   | bare | 41126400 | saving |
>>> | 1b12e65a-01cd-4d05-91e8-9e9d86979229 | cirros-0.3.3-x86_64 | raw
>>>   | bare | 41126400 | saving |
>>> | fd23c0f3-54b9-4698-b90b-8cdbd6e152c6 | cirros-0.3.3-x86_64 | raw
>>>   | bare | 41126400 | saving |
>>> | db297a42-5242-4122-968e-33bf4ad3fe1f | cirros-0.3.3-x86_64 | raw
>>>   | bare | 41126400 | saving |
>>>
>>> +--+-+-+--+--++
>>>
>>> Was there a particular document that you referenced to perform your
>>> install Karan? This should be the easy part ... but I've been saying that
>>> about nearly everything for the past month or two!!
>>>
>>> Kind regards
>>>
>>>
>>> Iain
>>>
>>>
>>>
>>> On Thu, Apr 2, 2015 at 3:28 AM, Karan Singh  wrote:
>>>
>>>> Fortunately Ceph Giant + OpenStack Juno works flawlessly for me.
>>>>
>>>> If you have configured cinder / glance correctly , then after
>>>> restarting  cinder and glance services , you should see something like this
>>>> in cinder and glance logs.
>>>>
>>>>
>>>> Cinder logs :
>>>>
>>>> volume.log:2015-04-02 13:20:43.943 2085 INFO cinder.volume.manager
>>>> [req-526cb14e-42ef-4c49-b033-e9bf2096be8f - - - - -] Starting volume driver
>>>> RBDDriver (1.1.0)
>>>>
>>>>
>>>> Glance Logs:
>>>>
>>>> api.log:2015-04-02 13:20:50.448 1266 DEBUG glance.common.config [-]
>>>> glance_store.default_store = rbd log_opt_values
>>>> /usr/lib/python2.7/site-packages/oslo/config/cfg.py:2004
>>>> api.log:2015-04-02 13:20:50.449 1266 DEBUG glance.common.config [-]
>>>> glance_store.rbd_store_ceph_conf = /etc/ceph/ceph.conf log_opt_values
>>>> /usr/lib/python2.7/site-packages/oslo/config/cfg.py:2004
>>>> api.log:2015-04-02 13:20:50.449 1266 DEBUG glance.common.config [-]
>>>> glance_store.rbd_store_chunk_size = 8 log_opt_values
>>>> /usr/lib/python2.7/site-packages/oslo/config/cfg.py:2004
>>>> api.log:2015-04-02 13:20:50.449 1266 DEBUG glance.common.config [-]
>>>> glance_store.rbd_store_pool= images log_opt_values
>>>> /usr/lib/python2.7/site-packages/oslo/config/cfg.py:2004
>>>> api.log:2015-04-02 13:20:50.449 1266 DEBUG glance.common.config [-]
>>>> glance_store.rbd_store_user= glance log_opt_values
>>>> /usr/lib/python2.7/site-packages/oslo/config/cfg.py:2004
>>>> api.log:2015-04-02 13:20:50.451 1266 DEBUG glance.common.config [-]
>>>

Re: [ceph-users] Ceph and Openstack

2015-04-02 Thread Erik McCormick
d 4%
>> and I haven't seen any further log messages. openstack-status shows all the
>> processes to be up, and Glance images as saving. Given that the top one was
>> through the GUI yesterday I'm guessing it's not going to finish any time
>> soon!
>>
>> == Glance images ==
>>
>> +--+-+-+--+--++
>> | ID   | Name| Disk
>> Format | Container Format | Size | Status |
>>
>> +--+-+-+--+--++
>> | f77429b2-17fd-4ef6-97a8-f710862182c6 | Cirros Raw  | raw
>>   | bare | 41126400 | saving |
>> | 1b12e65a-01cd-4d05-91e8-9e9d86979229 | cirros-0.3.3-x86_64 | raw
>>   | bare | 41126400 | saving |
>> | fd23c0f3-54b9-4698-b90b-8cdbd6e152c6 | cirros-0.3.3-x86_64 | raw
>>   | bare | 41126400 | saving |
>> | db297a42-5242-4122-968e-33bf4ad3fe1f | cirros-0.3.3-x86_64 | raw
>>   | bare | 41126400 | saving |
>>
>> +--+-+-+--+--++
>>
>> Was there a particular document that you referenced to perform your
>> install Karan? This should be the easy part ... but I've been saying that
>> about nearly everything for the past month or two!!
>>
>> Kind regards
>>
>>
>> Iain
>>
>>
>>
>> On Thu, Apr 2, 2015 at 3:28 AM, Karan Singh  wrote:
>>
>>> Fortunately Ceph Giant + OpenStack Juno works flawlessly for me.
>>>
>>> If you have configured cinder / glance correctly , then after restarting
>>>  cinder and glance services , you should see something like this in cinder
>>> and glance logs.
>>>
>>>
>>> Cinder logs :
>>>
>>> volume.log:2015-04-02 13:20:43.943 2085 INFO cinder.volume.manager
>>> [req-526cb14e-42ef-4c49-b033-e9bf2096be8f - - - - -] Starting volume driver
>>> RBDDriver (1.1.0)
>>>
>>>
>>> Glance Logs:
>>>
>>> api.log:2015-04-02 13:20:50.448 1266 DEBUG glance.common.config [-]
>>> glance_store.default_store = rbd log_opt_values
>>> /usr/lib/python2.7/site-packages/oslo/config/cfg.py:2004
>>> api.log:2015-04-02 13:20:50.449 1266 DEBUG glance.common.config [-]
>>> glance_store.rbd_store_ceph_conf = /etc/ceph/ceph.conf log_opt_values
>>> /usr/lib/python2.7/site-packages/oslo/config/cfg.py:2004
>>> api.log:2015-04-02 13:20:50.449 1266 DEBUG glance.common.config [-]
>>> glance_store.rbd_store_chunk_size = 8 log_opt_values
>>> /usr/lib/python2.7/site-packages/oslo/config/cfg.py:2004
>>> api.log:2015-04-02 13:20:50.449 1266 DEBUG glance.common.config [-]
>>> glance_store.rbd_store_pool= images log_opt_values
>>> /usr/lib/python2.7/site-packages/oslo/config/cfg.py:2004
>>> api.log:2015-04-02 13:20:50.449 1266 DEBUG glance.common.config [-]
>>> glance_store.rbd_store_user= glance log_opt_values
>>> /usr/lib/python2.7/site-packages/oslo/config/cfg.py:2004
>>> api.log:2015-04-02 13:20:50.451 1266 DEBUG glance.common.config [-]
>>> glance_store.stores= ['rbd'] log_opt_values
>>> /usr/lib/python2.7/site-packages/oslo/config/cfg.py:2004
>>>
>>>
>>> If Cinder and Glance are able to initialize RBD driver , then everything
>>> should work like charm.
>>>
>>>
>>> 
>>> Karan Singh
>>> Systems Specialist , Storage Platforms
>>> CSC - IT Center for Science,
>>> Keilaranta 14, P. O. Box 405, FIN-02101 Espoo, Finland
>>> mobile: +358 503 812758
>>> tel. +358 9 4572001
>>> fax +358 9 4572302
>>> http://www.csc.fi/
>>> 
>>>
>>> On 02 Apr 2015, at 03:10, Erik McCormick 
>>> wrote:
>>>
>>> Can you both set Cinder and / or Glance logging to debug and provide
>>> some logs? There was an issue with the first Juno release of Glance in some
>>> vendor packages, so make sure you're fully updated to 2014.2.2
>>> On Apr 1, 2015 7:12 PM, "Quentin Hartman" 
>>> wrote:
>>>
>>>> I am coincidentally going through the same process right now. The best
>>>> reference I've found is this:
>>>> http://ceph.com/docs/master/rbd/rbd-openstack/
>>>>
>>>> When I did Firefly / icehouse, this (seemingly) same guid

Re: [ceph-users] Ceph and Openstack

2015-04-01 Thread Erik McCormick
Can you both set Cinder and/or Glance logging to debug and provide some
logs? There was an issue with the first Juno release of Glance in some
vendor packages, so make sure you're fully updated to 2014.2.2.
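
In case it saves a round trip: debug logging is just a flag in the [DEFAULT]
section of /etc/cinder/cinder.conf and /etc/glance/glance-api.conf, roughly:

[DEFAULT]
debug = True
verbose = True

followed by a restart of the services (names vary by distro; on RDO/CentOS 7
something like "systemctl restart openstack-cinder-volume
openstack-glance-api", on Ubuntu "service cinder-volume restart" and
"service glance-api restart"). The interesting lines usually land in
/var/log/cinder/volume.log and /var/log/glance/api.log right as the RBD
driver and rbd store initialize.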
On Apr 1, 2015 7:12 PM, "Quentin Hartman" 
wrote:

> I am coincidentally going through the same process right now. The best
> reference I've found is this:
> http://ceph.com/docs/master/rbd/rbd-openstack/
>
> When I did Firefly / Icehouse, this (seemingly) same guide Just
> Worked(tm), but now with Giant / Juno I'm running into similar trouble to
> the one you describe. Everything _seems_ right, but creating volumes via
> openstack just sits and spins forever, never creating anything and (as far
> as i've found so far) not logging anything interesting. Normal Rados
> operations work fine.
>
> Feel free to hit me up off list if you want to confer and then we can
> return here if we come up with anything to be shared with the group.
>
> QH
>
> On Wed, Apr 1, 2015 at 3:43 PM, Iain Geddes 
> wrote:
>
>> All,
>>
>> Apologies for my ignorance but I don't seem to be able to search an
>> archive.
>>
>> I've spent a lot of time trying, but am having difficulty integrating
>> Ceph (Giant) into OpenStack (Juno). I don't appear to be getting any
>> errors anywhere, but nothing seems to be written to the cluster when I
>> try creating a new volume or importing an image. The cluster is healthy and I
>> can create a static rbd mapping so I know the key components are in place.
>> My problem is almost certainly finger trouble on my part but am completely
>> lost and wondered if there was a well thumbed guide to integration?
>>
>> Thanks
>>
>>
>> Iain
>>
>>
>>
>> ___
>> ceph-users mailing list
>> ceph-users@lists.ceph.com
>> http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com
>>
>>
>
> ___
> ceph-users mailing list
> ceph-users@lists.ceph.com
> http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com
>
>
___
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com


[ceph-users] Giant on Centos 7 with custom cluster name

2015-01-17 Thread Erik McCormick
Hello all,

I've got an existing Firefly cluster on CentOS 7 which I deployed with
ceph-deploy. The latest version of ceph-deploy refuses to handle
commands issued with a custom cluster name:

[ceph_deploy.install][ERROR ] custom cluster names are not supported on
sysvinit hosts

This is a production cluster. Small, but still production. Is it safe to
upgrade the packages manually? I'd hate to do the upgrade and
find out I can no longer start the cluster because it can't be called
anything other than "ceph".
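
For what it's worth, the by-hand procedure I have in mind is roughly the
following, one node at a time, monitors before OSDs (only a sketch - the
non-default cluster name handling is exactly the part I'm unsure about):

# yum update ceph ceph-common python-ceph librados2 librbd1
# /etc/init.d/ceph -c /etc/ceph/<clustername>.conf restart mon.$(hostname -s)
# ceph --cluster <clustername> health

and then the same on each OSD node, restarting the osd.N daemons one at a
time and waiting for HEALTH_OK in between.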

Thanks,
Erik
___
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com