Re: [ceph-users] Firefly to Hammer Upgrade -- HEALTH_WARN; too many PGs per OSD (480 > max 300)

2015-09-01 Thread 10 minus
Hi Greg,

Thanks for the update. I think the Ceph documentation should be reworded.

--snip--

http://ceph.com/docs/master/rados/operations/placement-groups/#choosing-the-number-of-placement-groups

* Less than 5 OSDs set pg_num to 128
* Between 5 and 10 OSDs set pg_num to 512
* Between 10 and 50 OSDs set pg_num to 4096
* If you have more than 50 OSDs, you need to understand the tradeoffs
and how to calculate the pg_num value by yourself

--snip--
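
For what it's worth, the arithmetic behind the warning seems to be just the
total number of PG copies spread over the OSDs - my own reading of the numbers
below, not something spelled out in the docs for this cluster:

--snip--
# PGs per OSD = total PGs * replica size / number of OSDs
#             = 1920 * 3 / 12 = 480   -> above the default limit of 300
--snip--

If anyone only wants to quiet the warning while keeping the existing pools, I
believe (untested on my side) the threshold is the mon option below, which
defaults to 300 in Hammer:

--snip--
[mon]
mon pg warn max per osd = 500
--snip--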




On Mon, Aug 31, 2015 at 10:31 AM, Gregory Farnum  wrote:

> On Mon, Aug 31, 2015 at 8:30 AM, 10 minus  wrote:
> > Hi ,
> >
> > I'm in the process of upgrading my Ceph cluster from Firefly to Hammer.
> >
> > The Ceph cluster has 12 OSDs spread across 4 nodes.
> >
> > The mons have been upgraded to Hammer. Since I created the pools with
> > pg_num values of 512 and 256, I'm a bit confused by the warning message.
> >
> > --snip--
> >
> > ceph -s
> > cluster a7160e16-0aaf-4e78-9e7c-7fbec08642f0
> >  health HEALTH_WARN
> > too many PGs per OSD (480 > max 300)
> >  monmap e1: 3 mons at
> > {mon01=
> 172.16.10.5:6789/0,mon02=172.16.10.6:6789/0,mon03=172.16.10.7:6789/0}
> > election epoch 116, quorum 0,1,2 mon01,mon02,mon03
> >  osdmap e6814: 12 osds: 12 up, 12 in
> >   pgmap v2961763: 1920 pgs, 4 pools, 230 GB data, 29600 objects
> > 692 GB used, 21652 GB / 22345 GB avail
> > 1920 active+clean
> >
> >
> >
> > --snip--
> >
> >
> > ## Conf and ceph output
> >
> > --snip--
> >
> > [global]
> > fsid = a7160e16-0aaf-4e78-9e7c-7fbec08642f0
> > public_network = 172.16.10.0/24
> > cluster_network = 172.16.10.0/24
> > mon_initial_members = mon01, mon02, mon03
> > mon_host = 172.16.10.5,172.16.10.6,172.16.10.7
> > auth_cluster_required = cephx
> > auth_service_required = cephx
> > auth_client_required = cephx
> > filestore_xattr_use_omap = true
> > mon_clock_drift_allowed = .15
> > mon_clock_drift_warn_backoff = 30
> > mon_osd_down_out_interval = 300
> > mon_osd_report_timeout = 300
> > mon_osd_full_ratio = .85
> > mon_osd_nearfull_ratio = .75
> > osd_backfill_full_ratio = .75
> > osd_pool_default_size = 3
> > osd_pool_default_min_size = 2
> > osd_pool_default_pg_num = 512
> > osd_pool_default_pgp_num = 512
> > --snip--
> >
> > ceph df
> >
> >
> > POOLS:
> > NAME        ID    USED      %USED    MAX AVAIL    OBJECTS
> > images      3     216G      0.97     7179G        27793
> > vms         4     14181M    0.06     7179G        1804
> > volumes     5     0         0        7179G        1
> > backups     6     0         0        7179G        0
> >
> >
> > ceph osd pool get poolname pg_num
> > images: 256
> > backup: 512
> > vms: 512
> > volumes: 512
> >
> > --snip--
> >
> > Since it is only a warning, can I upgrade the OSDs without destroying the
> > data, or should I roll back?
>
> It's not a problem, just a diagnostic warning that appears to be
> misbehaving. If you can create a bug at tracker.ceph.com listing what
> Ceph versions are involved and exactly what's happened it can get
> investigated, but you should feel free to keep upgrading. :)
> -Greg
>
___
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com


[ceph-users] Firefly to Hammer Upgrade -- HEALTH_WARN; too many PGs per OSD (480 > max 300)

2015-08-31 Thread 10 minus
Hi ,

I'm in the process of upgrading my Ceph cluster from Firefly to Hammer.

The Ceph cluster has 12 OSDs spread across 4 nodes.

The mons have been upgraded to Hammer. Since I created the pools with pg_num
values of 512 and 256, I'm a bit confused by the warning message.

--snip--

ceph -s
cluster a7160e16-0aaf-4e78-9e7c-7fbec08642f0
 health HEALTH_WARN
too many PGs per OSD (480 > max 300)
 monmap e1: 3 mons at {mon01=
172.16.10.5:6789/0,mon02=172.16.10.6:6789/0,mon03=172.16.10.7:6789/0}
election epoch 116, quorum 0,1,2 mon01,mon02,mon03
 osdmap e6814: 12 osds: 12 up, 12 in
  pgmap v2961763: 1920 pgs, 4 pools, 230 GB data, 29600 objects
692 GB used, 21652 GB / 22345 GB avail
1920 active+clean



--snip--


## Conf and ceph output

--snip--

[global]
fsid = a7160e16-0aaf-4e78-9e7c-7fbec08642f0
public_network = 172.16.10.0/24
cluster_network = 172.16.10.0/24
mon_initial_members = mon01, mon02, mon03
mon_host = 172.16.10.5,172.16.10.6,172.16.10.7
auth_cluster_required = cephx
auth_service_required = cephx
auth_client_required = cephx
filestore_xattr_use_omap = true
mon_clock_drift_allowed = .15
mon_clock_drift_warn_backoff = 30
mon_osd_down_out_interval = 300
mon_osd_report_timeout = 300
mon_osd_full_ratio = .85
mon_osd_nearfull_ratio = .75
osd_backfill_full_ratio = .75
osd_pool_default_size = 3
osd_pool_default_min_size = 2
osd_pool_default_pg_num = 512
osd_pool_default_pgp_num = 512
--snip--

ceph df


POOLS:
NAME        ID    USED      %USED    MAX AVAIL    OBJECTS
images      3     216G      0.97     7179G        27793
vms         4     14181M    0.06     7179G        1804
volumes     5     0         0        7179G        1
backups     6     0         0        7179G        0


ceph osd pool get poolname pg_num
images: 256
backup: 512
vms: 512
volumes: 512

--snip--

Since it is only a warning, can I upgrade the OSDs without destroying the
data, or should I roll back?

Thanks in advance
___
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com


Re: [ceph-users] which SSD / experiences with Samsung 843T vs. Intel s3700

2015-08-26 Thread 10 minus
Hi,

We got a good deal on the 843T and we are using them as journals in our
OpenStack setup.
They have been running for the last six months... no issues.
When we compared them with Intel SSDs (I think it was the S3700), they were a
shade slower for our workload and considerably cheaper.
We did not run any synthetic benchmarks since we had a specific use case.
The performance was better than our old setup, so it was good enough.
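
For anyone who wants to repeat the comparison on their own drives, the test
from Sebastien Han's post linked further down boils down to small synchronous
writes with the drive cache bypassed. A sketch of the commands (flags from
memory, so double-check them against the post; the fio run below destroys
data on the target device):

--snip--
# quick check against a file:
dd if=/dev/zero of=/mnt/testfile bs=4k count=100000 oflag=direct,dsync

# or with fio against the raw device (destructive):
fio --filename=/dev/sdX --direct=1 --sync=1 --rw=write --bs=4k \
    --numjobs=1 --iodepth=1 --runtime=60 --time_based --name=journal-test
--snip--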

hth



On Tue, Aug 25, 2015 at 12:07 PM, Andrija Panic 
wrote:

> We have some 850 Pro 256GB SSDs if anyone is interested in buying them :)
>
> And there was also a new 850 Pro firmware that broke people's disks and was
> revoked later, etc... I'm sticking with only vacuum cleaners from Samsung
> for now, maybe... :)
> On Aug 25, 2015 12:02 PM, "Voloshanenko Igor" 
> wrote:
>
>> To be honest, the Samsung 850 PRO is not a 24/7 series drive... it's more of
>> a desktop+ series, but anyway - the results from these drives are very, very
>> bad in any scenario acceptable in real life...
>>
>> Possibly the 845 PRO is better, but we don't want to experiment anymore...
>> So we chose the S3500 240G. Yes, it's cheaper than the S3700 (about 2x), and
>> not as durable for writes, but we think it is better to replace one SSD per
>> year than to pay double the price now.
>>
>> 2015-08-25 12:59 GMT+03:00 Andrija Panic :
>>
>>> And I should mention that in another Ceph installation we had Samsung
>>> 850 Pro 128GB drives, and all 6 SSDs died within a two-month period - they
>>> simply disappeared from the system, so it was not wear-out...
>>>
>>> We will never buy Samsung again :)
>>> On Aug 25, 2015 11:57 AM, "Andrija Panic" 
>>> wrote:
>>>
 First read please:

 http://www.sebastien-han.fr/blog/2014/10/10/ceph-how-to-test-if-your-ssd-is-suitable-as-a-journal-device/

 We are getting 200 IOPS in comparison to the Intel S3500's 18,000 IOPS -
 those are sustained performance numbers, meaning avoiding the drive's cache
 and running for a longer period of time...
 Also, if checking with fio you will get better latencies on the Intel S3500
 (the model tested in our case) along with 20x better IOPS results...

 We observed the original issue as high speed at the beginning of e.g. a
 file transfer inside a VM, which then halts to zero... We moved the journals
 back to HDDs and the performance was acceptable... now we are upgrading to
 the Intel S3500...

 Best
 any details on that ?

 On Tue, 25 Aug 2015 11:42:47 +0200, Andrija Panic
  wrote:

 > Make sure you test whatever you decide. We just learned this the hard way
 > with the Samsung 850 Pro, which is total crap, more than you could imagine...
 >
 > Andrija
 > On Aug 25, 2015 11:25 AM, "Jan Schermer"  wrote:
 >
 > > I would recommend Samsung 845 DC PRO (not EVO, not just PRO).
 > > Very cheap, better than Intel 3610 for sure (and I think it beats even
 > > 3700).
 > >
 > > Jan
 > >
 > > > On 25 Aug 2015, at 11:23, Christopher Kunz wrote:
 > > >
 > > > On 25.08.15 at 11:18, Götz Reinicke - IT Koordinator wrote:
 > > >> Hi,
 > > >>
 > > >> most of the time I get the recommendation from resellers to go with
 > > >> the Intel S3700 for journaling.
 > > >>
 > > > Check out the Intel s3610. 3 drive writes per day for 5 years. Plus, it
 > > > is cheaper than S3700.
 > > >
 > > > Regards,
 > > >
 > > > --ck
 > > > ___
 > > > ceph-users mailing list
 > > > ceph-users@lists.ceph.com
 > > > http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com
 > >
 > > ___
 > > ceph-users mailing list
 > > ceph-users@lists.ceph.com
 > > http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com
 > >



 --
 Mariusz Gronczewski, Administrator

 Efigence S. A.
 ul. Wołoska 9a, 02-583 Warszawa
 T: [+48] 22 380 13 13
 F: [+48] 22 380 13 14
 E: mariusz.gronczew...@efigence.com
 

>>>
>>> ___
>>> ceph-users mailing list
>>> ceph-users@lists.ceph.com
>>> http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com
>>>
>>>
>>
> ___
> ceph-users mailing list
> ceph-users@lists.ceph.com
> http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com
>
>
___
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com


Re: [ceph-users] Is Ceph the right tool for me?

2015-06-26 Thread 10 minus
Hi,

As Christian has mentioned, a bit more detailed information will do us good.
We had explored CephFS, but performance was an issue vis-a-vis ZFS when we
tested it (more than a year back), so we did not get into the details.
I will let the CephFS experts chip in here on the present state of CephFS.

How are you using ZFS on your main site (NFS / CIFS / iSCSI)?
How much data are we talking about?

Yes, one machine is a SPOF, but these are the questions you should ask or answer:
- Is there a business requirement to restore data within a defined time?
- How much data is in play here?
- What are the odds it fails (hardware quality is improving by the day, which
  does not mean it won't fail)?
- How fast can one replace failed hardware (we always have spare hardware
  available)?
- Do you need always-on backup, especially offsite backup?
- Have you explored the tape option?

We are using ZFS on Solaris and FreeBSD as a filer (NFS/CIFS) and we keep
three copies of the snapshots (we have 5 TB of data); a rough sketch of one
rotation step follows the list:
-  local on the filers (snapshot every hour, kept for 2 days)
-  onsite on another machine (snapshot copy every 12 hours, kept for 1 week)
-  offsite (snapshot copy every day, kept for 4 weeks --> then from offsite to
   tape).
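
A minimal sketch of what one such rotation step could look like (the dataset
and host names here are made up for illustration, not our actual setup):

--snip--
zfs snapshot tank/users@2015-06-26-14
zfs send -i tank/users@2015-06-26-02 tank/users@2015-06-26-14 | \
    ssh onsite-box zfs receive backup/users
zfs destroy tank/users@2015-06-24-14   # expire snapshots past the retention window
--snip--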

For DB backups we have a system in place, but it does not rely on ZFS
snapshots. I would love to know how you manage DB backups with ZFS snapshots.
ZFS is a mature technology.


P.S. We use Ceph for OpenStack (ephemeral / Cinder / Glance), with no
backup. (One year on, we are still learning new things and it has just
worked.)




On Fri, Jun 26, 2015 at 9:00 AM, Christian Balzer  wrote:

>
> Hello,
>
> On Fri, 26 Jun 2015 00:28:20 +0200 Cybertinus wrote:
>
> > Hello everybody,
> >
> >
> > I'm looking at Ceph as an alternative for my current storage solution,
> > but I'm wondering if it is the right choice for me. I'm hoping you guys
> > can help me decide.
> >
> > The current setup is a FreeBSD 10.1 machine running entirely on ZFS. The
> > function of the machine is offsite backup for important data. For some
> > (fairly rapidly changing) data this server is the only backup of it. But
> > because the data is changing fairly quickly (every day at least) I'm
> > looking to get this server more HA than it is now.
> > It is just one FreeBSD machine, so this is an enormous SPOF of course.
> >
> But aside from the SPOF part that machine is sufficient for your usage,
> right?
> Care to share the specs of it and what data volume (total space used, daily
> transactions) we're talking about?
>
> > The most used functionality of ZFS that I use is the snapshot technology.
> > I've got multiple users on this server and each user has its own
> > filesystem within the pool. And I just snapshot each filesystem regularly
> > and that way I enable the users to go back in time.
> > I've looked at the snapshot functionality of Ceph, but it's not clear to
> > me what I can snapshot with it exactly.
> >
> > Furthermore: what is the best way to hook Ceph to the application I use
> > to transfer the data from the users to the backup server? Today I'm using
> > OwnCloud, which is (in essence) a WebDAV server. Now I'm thinking about
> > replacing OwnCloud with something custom build. That way I can let PHP
> > talk directly with librados, which makes it easy to store the data.
> > Or I can keep on using OwnCloud and just hook up Ceph via CephFS. This
> > has the added advantage that I don't have to get my head around the
> > concept of object storage :p ;).
> >
> I'm slightly confused here, namely:
> You use owncloud (I got a test installation on a VM here, too), which
> uses a DB (mysql by default) to index the files uploaded.
> How do you make sure that your snapshots are consistent when it comes to
> DB files other than being lucky 99.9% of the time?
>
> I'll let the CephFS experts pipe up, but the usual disclaimers about
> CephFS stability do apply, in particular the latest (beta) version of Ceph
> has this line on top of the changelog:
> ---
> Highlights here include lots of RGW Swift fixes, RBD feature work
> surrounding the new object map feature, more CephFS snapshot fixes, and a
> few important CRUSH fixes.
> ---
>
> Now you could just mount an RBD image (or run a VM) with BTRFS and have
> snapshots again that are known to work.
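>
> A very rough sketch of what that could look like - the names are made up and
> this is untested, but all the pieces are standard tools:
>
> rbd create backup01 --size 102400          # 100 GB image
> rbd map backup01                           # shows up as e.g. /dev/rbd0
> mkfs.btrfs /dev/rbd0 && mount /dev/rbd0 /srv/backup
> btrfs subvolume snapshot /srv/backup /srv/backup/snap-2015-06-26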
>
> However going back to my first question up there, I have a feeling that a
> functional Ceph cluster with at least 3 storage nodes might be both too
> expensive while at the same time less performant than what you have now.
>
> A 2 node DRBD cluster might fit your needs better.
>
> Christian
> --
> Christian Balzer        Network/Systems Engineer
> ch...@gol.com           Global OnLine Japan/Fusion Communications
> http://www.gol.com/
> ___
> ceph-users mailing list
> ceph-users@lists.ceph.com
> http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com
>
___
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com


[ceph-users] radosgw : Cannot set a new region as default

2015-04-30 Thread 10 minus
Hi,

I am in the process of setting up radosgw for a Firefly Ceph cluster.

I have followed the docs for creating an alternate region, region map, and
zones.

Now I want to delete the default region.

Is it possible to do that?

Also, I'm not able to promote my new region region1 as the default region.


--snip--
 radosgw-admin region-map set --infile region-map.json
{ "regions": [
{ "key": "default",
  "val": { "name": "region1",
  "api_name": "",
  "is_master": "true",
  "endpoints": [],
  "master_zone": "",
  "zones": [
{ "name": "zn1",
  "endpoints": [],
  "log_meta": "false",
  "log_data": "false"}],
  "placement_targets": [
{ "name": "default-placement",
  "tags": []}],
  "default_placement": "default-placement"}}],
  "master_region": "region1",
  "bucket_quota": { "enabled": false,
  "max_size_kb": -1,
  "max_objects": -1},
  "user_quota": { "enabled": false,
  "max_size_kb": -1,
  "max_objects": -radosgw-admin regionmap updaten-map.json
{ "regions": [
{ "key": "default",
  "val": { "name": "default",
  "api_name": "",
  "is_master": "false",
  "endpoints": [],
  "master_zone": "",
  "zones": [
{ "name": "default",
  "endpoints": [],
  "log_meta": "false",
  "log_data": "false"}],
  "placement_targets": [
{ "name": "default-placement",
  "tags": []}],
  "default_placement": "default-placement"}},
{ "key": "region1",
  "val": { "name": "region1",
  "api_name": "",
  "is_master": "true",
  "endpoints": [],
  "master_zone": "",
  "zones": [
{ "name": "zn1",
  "endpoints": [],
  "log_meta": "false",
  "log_data": "false"}],
  "placement_targets": [
{ "name": "default-placement",
  "tags": []}],
  "default_placement": "default-placement"}}],
  "master_region": "region1",
  "bucket_quota": { "enabled": false,
  "max_size_kb": -1,
  "max_objects": -1},
  "user_quota": { "enabled": false,
  "max_size_kb": -1,
  "max_objects": -1}}

## region list

radosgw-admin regions list
{ "default_info": { "default_region": "default"},
  "regions": [
"default",
"region1"]}

## set region1 as default

radosgw-admin region default region1
[root@cc03 ceph-rgw]# radosgw-admin regions list
{ "default_info": { "default_region": "default"},
  "regions": [
"default",
"region1"]}


--snip--
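
The only other thing I can think of trying is passing the region name as an
explicit option instead of a positional argument, along the lines below
(untested, and the exact flag is my guess from the federated-config examples):

--snip--
radosgw-admin region default --rgw-region=region1
radosgw-admin regionmap update
--snip--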

Any pointers to a fix would be great.
___
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com


Re: [ceph-users] Powering down a ceph cluster

2015-04-23 Thread 10 minus
Thanks Wido ...

It worked.
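
For the archives, the sequence I ended up running looked roughly like the
below. The noout flag is my own addition to Wido's list, to keep the cluster
from marking OSDs out and rebalancing while everything is down; the service
commands assume the sysvinit scripts on our nodes.

--snip--
ceph osd set noout          # optional: stop OSDs from being marked out
# stop all client I/O, then on every node:
service ceph stop osd
service ceph stop mon
# ... maintenance / power off / power on ...
service ceph start mon      # monitors first
service ceph start osd      # then the OSDs
ceph osd unset noout
--snip--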



On Wed, Apr 22, 2015 at 5:33 PM, Wido den Hollander  wrote:

>
>
> > Op 22 apr. 2015 om 16:54 heeft 10 minus  het
> volgende geschreven:
> >
> > Hi,
> >
> > Is there a recommended way of powering down a ceph cluster and bringing
> it back up ?
> >
> > I have looked through the docs and cannot find anything about it.
> >
>
> Best way would be:
> - Stop all client I/O
> - Shut down the OSDs
> - Shut down the monitors
>
> Afterwards, boot the monitors first, then the OSDs.
>
> Wido
>
> >
> > Thanks in advance
> > ___
> > ceph-users mailing list
> > ceph-users@lists.ceph.com
> > http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com
>
___
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com


[ceph-users] Powering down a ceph cluster

2015-04-22 Thread 10 minus
Hi,

Is there a recommended way of powering down a Ceph cluster and bringing it
back up?

I have looked through the docs and cannot find anything about it.


Thanks in advance
___
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com


[ceph-users] Is it possible to reinitialize the cluster

2015-04-20 Thread 10 minus
Hi,

I have an issue with my Ceph cluster: two nodes were wiped by accident and
have been recreated.


ceph osd tree
# id    weight  type name           up/down reweight
-1      14.56   root default
-6      14.56     datacenter dc1
-7      14.56       row row1
-9      14.56         rack rack2
-2      3.64            host ceph01
0       1.82              osd.0     up      1
1       1.82              osd.1     up      1
2       1.82              osd.2     up      1
-3      3.64            host ceph02
3       1.82              osd.3     up      1
8       1.82              osd.8     up      1
9       1.82              osd.9     up      1
-4      3.64            host ceph03
4       1.82              osd.4     up      1
5       1.82              osd.5     up      1
-5      3.64            host ceph04
6       1.82              osd.6     up      1
7       1.82              osd.7     up      1



The recovery process has been running for the past four hours, and I'm not
sure if it will come back.
What I have noticed is that some OSDs are going down during recovery and
coming back.

--snip--

ceph -s
cluster 23d53990-4458-4faf-a598-9c60036a51f3
 health HEALTH_WARN 18 pgs down; 1814 pgs peering; 1948 pgs stuck
inactive; 1948 pgs stuck unclean; 1 requests are blocked > 32 sec; 1/8 in
osds are down
 monmap e1: 3 mons at {mon01=
172.16.101.5:6789/0,mon02=172.16.101.6:6789/0,mon03=172.16.101.7:6789/0},
election epoch 46, quorum 0,1,2 mon01,mon02,mon03
 osdmap e9077: 10 osds: 7 up, 8 in
  pgmap v127105: 2240 pgs, 7 pools, 0 bytes data, 0 objects
400M used, 14896 GB / 14896 GB avail
 134 creating
1316 peering
 394 creating+peering
 292 active+clean
  86 remapped+peering
  18 down+peering
--snip--


Should I wait, or should I just zap and start from scratch? I don't have
any data on my Ceph cluster.
___
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com


[ceph-users] Hammer release data and a Design question

2015-03-26 Thread 10 minus
Hi ,

I'm just starting on a small Ceph implementation and wanted to know the
release date for Hammer.
Will it coincide with the release of OpenStack?

My configuration (using 10G and jumbo frames on CentOS 7 / RHEL 7):

3x Mons (VMs) :
CPU - 2
Memory - 4G
Storage - 20 GB

4x OSDs :
CPU - Haswell Xeon
Memory - 8 GB
Sata - 3x 2TB (3 OSD per node)
SSD - 2x 480 GB ( Journaling and if possible tiering)


This is a test environment to see how all the components play together. If
all goes well, then we plan to increase the OSDs to 24 per node, the RAM to
32 GB, and move to dual-socket Haswell Xeons.

The storage will primarily be used to provide Cinder and Swift.
I just wanted to know the expert opinion on how to scale:
- Keep the nodes symmetric, or
- Just add the new beefy nodes and grow?

Thanks in advance
___
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com


Re: [ceph-users] Performance data collection for Ceph

2014-11-17 Thread 10 minus
Thanks Dan. If I understand correctly, the perf counters have to be queried
per OSD (I mean, for every OSD I have to run a check).
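
In other words, something along these lines is what I had in mind - a rough
sketch, run on each OSD host since the admin socket is local to the daemon
(the OSD ids and output path are placeholders):

--snip--
for id in 0 1 2; do
    ceph daemon osd.$id perf dump > /tmp/osd.$id.perf.$(date +%s).json
done
--snip--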



On Fri, Nov 14, 2014 at 8:44 PM, Dan Ryder (daryder) 
wrote:

>  Hi,
>
>
>
> Take a look at the built in perf counters -
> http://ceph.com/docs/master/dev/perf_counters/. Through this you can get
> individual daemon performance as well as some cluster level statistics.
>
>
>
> Other (cluster-level) disk space utilization and pool
> utilization/performance is available through “ceph df detail”. Hope this
> helps.
>
>
>
>
>
>
>
> Dan Ryder
>
>
>
> *From:* ceph-users [mailto:ceph-users-boun...@lists.ceph.com] *On Behalf
> Of *10 minus
> *Sent:* Friday, November 14, 2014 10:26 AM
> *To:* ceph-users
> *Subject:* [ceph-users] Performance data collection for Ceph
>
>
>
> Hi,
>
> I'm trying to collect performance data for Ceph.
>
> I'm looking to run some commands at regular intervals to collect the data.
>
> Apart from "ceph osd perf", are there other commands one can use?
>
> Can I also track how much data is being replicated?
>
> Does Ceph maintain performance counters for individual OSDs?
>
>   Something along the lines of zpool iostat.
>
___
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com


[ceph-users] Performance data collection for Ceph

2014-11-14 Thread 10 minus
Hi,

I'm trying to collect performance data for Ceph.

I'm looking to run some commands at regular intervals to collect the data.

Apart from "ceph osd perf", are there other commands one can use?

Can I also track how much data is being replicated?
Does Ceph maintain performance counters for individual OSDs?


Something along the lines of zpool iostat.
___
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com


[ceph-users] Ceph packages being blocked by epel packages on Centos6

2014-10-12 Thread 10 minus
Hi,

I have observed that the latest Ceph packages from the Ceph repository are
being blocked by the Ceph packages from EPEL on CentOS 6. Is it just me, or
are others observing this too?
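
The workaround I'm trying is the priorities plugin, as in the Ceph install
docs - a sketch, assuming the repo file is /etc/yum.repos.d/ceph.repo:

--snip--
yum install yum-plugin-priorities
# then give the ceph repos in ceph.repo a higher priority (lower number)
# than epel, e.g. add under each section:
#   priority=1
--snip--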

Cheers,
___
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com


Re: [ceph-users] firefly osds stuck in state booting

2014-07-29 Thread 10 minus
Hi Karan,

Thanks, that did the trick. The magic word was "in".
Regarding the rep size, I have adjusted it; my settings are:
--snip--
osd pool default size = 2
osd pool default min size = 1
osd pool default pg num = 100
osd pool default pgp num = 100
--snip--

# Also, in the meantime I had a chance to play with the ceph-deploy script.
# Maybe it was me, or probably it is a bug. I have tried twice and every time
# I have hit the following.

As I said before, I'm using a directory, as this is a test installation.

ceph-deploy osd prepare ceph2:/ceph2:/ceph2/journald <=== Works


but

--snip--
ceph-deploy osd activate ceph2:/ceph2:/ceph2/journald

[ceph_deploy.conf][DEBUG ] found configuration file at:
/home/ceph/.cephdeploy.conf
[ceph_deploy.cli][INFO  ] Invoked (1.5.9): /usr/bin/ceph-deploy osd
activate ceph2:/ceph2:/ceph2/journald
[ceph_deploy.osd][DEBUG ] Activating cluster ceph disks
ceph2:/ceph2:/ceph2/journald
[ceph2][DEBUG ] connected to host: ceph2
[ceph2][DEBUG ] detect platform information from remote host
[ceph2][DEBUG ] detect machine type
[ceph_deploy.osd][INFO  ] Distro info: CentOS 6.5 Final
[ceph_deploy.osd][DEBUG ] activating host ceph2 disk /ceph2
[ceph_deploy.osd][DEBUG ] will use init type: sysvinit
[ceph2][INFO  ] Running command: sudo ceph-disk-activate --mark-init
sysvinit --mount /ceph2
[ceph2][WARNIN] got monmap epoch 2
[ceph2][WARNIN] 2014-07-28 11:47:04.733204 7f08d1c667a0 -1 journal
FileJournal::_open: disabling aio for non-block journal.  Use
journal_force_aio to force
 use of aio anyway
[ceph2][WARNIN] 2014-07-28 11:47:04.733400 7f08d1c667a0 -1 journal check:
ondisk fsid ---- doesn't match expected
4795daff-d63f-415b-9824-75f0863eb14f, invalid (someone else's?) journal
[ceph2][WARNIN] 2014-07-28 11:47:04.796835 7f08d1c667a0 -1 journal
FileJournal::_open: disabling aio for non-block journal.  Use
journal_force_aio to force
 use of aio anyway
[ceph2][WARNIN] 2014-07-28 11:47:04.798944 7f08d1c667a0 -1
filestore(/ceph2) could not find 23c2fcde/osd_superblock/0//-1 in index:
(2) No such file or directory
[ceph2][WARNIN] 2014-07-28 11:47:04.874282 7f08d1c667a0 -1 created object
store /ceph2 journal /ceph2/journal for osd.1 fsid
109507ab-adf1-4eb6-aacf-0925494e3882
[ceph2][WARNIN] 2014-07-28 11:47:04.874474 7f08d1c667a0 -1 auth: error
reading file: /ceph2/keyring: can't open /ceph2/keyring: (2) No such file
or directory
[ceph2][WARNIN] 2014-07-28 11:47:04.875209 7f08d1c667a0 -1 created new key
in keyring /ceph2/keyring
[ceph2][WARNIN] added key for osd.1
[ceph2][WARNIN] ceph-disk: Error: unable to create symlink
/var/lib/ceph/osd/ceph-1 -> /ceph2
[ceph2][ERROR ] RuntimeError: command returned non-zero exit status: 1
[ceph_deploy][ERROR ] RuntimeError: Failed to execute command:
ceph-disk-activate --mark-init sysvinit --mount /ceph2
--snip--



It turns out ceph-deploy does not create the directory /var/lib/ceph/osd,
and if I create it manually everything works.
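
In other words, on each OSD host, before re-running activate:

--snip--
mkdir -p /var/lib/ceph/osd
ceph-deploy osd activate ceph2:/ceph2:/ceph2/journald
--snip--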


Cheers


On Mon, Jul 28, 2014 at 9:09 AM, Karan Singh  wrote:

> The output that you have provided says that the OSDs are not IN. Try the below:
>
> ceph osd in osd.0
> ceph osd in osd.1
>
> service ceph start osd.0
> service ceph start osd.1
>
> If you have one more host with one disk, add it; starting with Ceph Firefly,
> the default rep size is 3.
>
>
> - Karan -
>
> On 27 Jul 2014, at 11:17, 10 minus  wrote:
>
> Hi Sage,
>
> I have unset all the flags and even restarted the OSDs.
> No dice, the OSDs are still stuck.
>
>
>
> --snip--
>  ceph daemon osd.0 status
> { "cluster_fsid": "99babb8f-c880-4b32-a227-94aa483d4871",
>   "osd_fsid": "1ad28bde-c23c-44ba-a3b7-0fd3372e",
>   "whoami": 0,
>   "state": "booting",
>   "oldest_map": 1,
>   "newest_map": 24,
>   "num_pgs": 0}
>
> [root@ceph2 ~]#  ceph daemon osd.1 status
> { "cluster_fsid": "99babb8f-c880-4b32-a227-94aa483d4871",
>   "osd_fsid": "becc3252-6977-47d6-87af-7b1337e591d8",
>   "whoami": 1,
>   "state": "booting",
>   "oldest_map": 1,
>   "newest_map": 21,
>   "num_pgs": 0}
>  --snip--
>
> --snip--
> ceph osd tree
>
> # id    weight  type name       up/down reweight
> -1      2       root default
> -3      1 

Re: [ceph-users] firefly osds stuck in state booting

2014-07-27 Thread 10 minus
Hi Sage,

I have unset all the flags and even restarted the OSDs.
No dice, the OSDs are still stuck.



--snip--
 ceph daemon osd.0 status
{ "cluster_fsid": "99babb8f-c880-4b32-a227-94aa483d4871",
  "osd_fsid": "1ad28bde-c23c-44ba-a3b7-0fd3372e",
  "whoami": 0,
  "state": "booting",
  "oldest_map": 1,
  "newest_map": 24,
  "num_pgs": 0}

[root@ceph2 ~]#  ceph daemon osd.1 status
{ "cluster_fsid": "99babb8f-c880-4b32-a227-94aa483d4871",
  "osd_fsid": "becc3252-6977-47d6-87af-7b1337e591d8",
  "whoami": 1,
  "state": "booting",
  "oldest_map": 1,
  "newest_map": 21,
  "num_pgs": 0}
 --snip--

--snip--
ceph osd tree

# id    weight  type name       up/down reweight
-1      2       root default
-3      1         host ceph1
0       1           osd.0       down    0
-2      1         host ceph2
1       1           osd.1       down    0

 --snip--

--snip--
 ceph -s
cluster 2929fa80-0841-4cb6-a133-90b2098fc802
 health HEALTH_WARN 192 pgs stuck inactive; 192 pgs stuck unclean
 monmap e2: 3 mons at {ceph0=
10.0.12.220:6789/0,ceph1=10.0.12.221:6789/0,ceph2=10.0.12.222:6789/0},
election epoch 50, quorum 0,1,2 ceph0,ceph1,ceph2
 osdmap e24: 2 osds: 0 up, 0 in
  pgmap v25: 192 pgs, 3 pools, 0 bytes data, 0 objects
0 kB used, 0 kB / 0 kB avail
 192 creating
--snip--




On Sat, Jul 26, 2014 at 5:57 PM, Sage Weil  wrote:

> On Sat, 26 Jul 2014, 10 minus wrote:
> > Hi,
> >
> > I just set up a test Ceph installation on 3 CentOS 6.5 nodes.
> > Two of the nodes are used for hosting OSDs and the third acts as the mon.
> >
> > Please note I'm using LVM, so I had to set up the OSDs using the manual
> > install guide.
> >
> > --snip--
> > ceph -s
> > cluster 2929fa80-0841-4cb6-a133-90b2098fc802
> >  health HEALTH_WARN 192 pgs stuck inactive; 192 pgs stuck unclean;
> > noup,nodown,noout flag(s) set
> >  monmap e2: 3 mons at{ceph0=
> 10.0.12.220:6789/0,ceph1=10.0.12.221:6789/0,ceph2=10.0.12.222:6789/0
> > }, election epoch 46, quorum 0,1,2 ceph0,ceph1,ceph2
> >  osdmap e21: 2 osds: 0 up, 0 in
> > flags noup,nodown,noout
> 
>
> Do 'ceph osd unset noup' and they should start up.  You likely also want
> to clear nodown and noout as well.
>
> sage
>
>
> >   pgmap v22: 192 pgs, 3 pools, 0 bytes data, 0 objects
> > 0 kB used, 0 kB / 0 kB avail
> >  192 creating
> > --snip--
> >
> > osd tree
> >
> > --snip--
> > ceph osd tree
> > # id    weight  type name       up/down reweight
> > -1  2   root default
> > -3  1   host ceph1
> > 0   1   osd.0   down0
> > -2  1   host ceph2
> > 1   1   osd.1   down0
> > --snip--
> >
> > --snip--
> >  ceph daemon osd.0 status
> > { "cluster_fsid": "99babb8f-c880-4b32-a227-94aa483d4871",
> >   "osd_fsid": "1ad28bde-c23c-44ba-a3b7-0fd3372e",
> >   "whoami": 0,
> >   "state": "booting",
> >   "oldest_map": 1,
> >   "newest_map": 21,
> >   "num_pgs": 0}
> >
> > --snip--
> >
> > --snip--
> >  ceph daemon osd.1 status
> > { "cluster_fsid": "99babb8f-c880-4b32-a227-94aa483d4871",
> >   "osd_fsid": "becc3252-6977-47d6-87af-7b1337e591d8",
> >   "whoami": 1,
> >   "state": "booting",
> >   "oldest_map": 1,
> >   "newest_map": 21,
> >   "num_pgs": 0}
> > --snip--
> >
> > # Cpus are idling
> >
> > # does anybody know what is wrong
> >
> > Thanks in advance
> >
> >
>
___
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com


[ceph-users] firefly osds stuck in state booting

2014-07-26 Thread 10 minus
Hi,

I just set up a test Ceph installation on 3 CentOS 6.5 nodes.
Two of the nodes are used for hosting OSDs and the third acts as the mon.

Please note I'm using LVM, so I had to set up the OSDs using the manual
install guide.

--snip--
ceph -s
cluster 2929fa80-0841-4cb6-a133-90b2098fc802
 health HEALTH_WARN 192 pgs stuck inactive; 192 pgs stuck unclean;
noup,nodown,noout flag(s) set
 monmap e2: 3 mons at {ceph0=
10.0.12.220:6789/0,ceph1=10.0.12.221:6789/0,ceph2=10.0.12.222:6789/0},
election epoch 46, quorum 0,1,2 ceph0,ceph1,ceph2
 osdmap e21: 2 osds: 0 up, 0 in
flags noup,nodown,noout
  pgmap v22: 192 pgs, 3 pools, 0 bytes data, 0 objects
0 kB used, 0 kB / 0 kB avail
 192 creating
--snip--

osd tree

--snip--
ceph osd tree
# id    weight  type name       up/down reweight
-1  2   root default
-3  1   host ceph1
0   1   osd.0   down0
-2  1   host ceph2
1   1   osd.1   down0
--snip--

--snip--
 ceph daemon osd.0 status
{ "cluster_fsid": "99babb8f-c880-4b32-a227-94aa483d4871",
  "osd_fsid": "1ad28bde-c23c-44ba-a3b7-0fd3372e",
  "whoami": 0,
  "state": "booting",
  "oldest_map": 1,
  "newest_map": 21,
  "num_pgs": 0}

--snip--

--snip--
 ceph daemon osd.1 status
{ "cluster_fsid": "99babb8f-c880-4b32-a227-94aa483d4871",
  "osd_fsid": "becc3252-6977-47d6-87af-7b1337e591d8",
  "whoami": 1,
  "state": "booting",
  "oldest_map": 1,
  "newest_map": 21,
  "num_pgs": 0}
--snip--

# CPUs are idling

# Does anybody know what is wrong?

Thanks in advance
___
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com


[ceph-users] ceph cinder compute-nodes

2014-05-29 Thread 10 minus
Hi,

Thanks Travis. I was following the RDO documentation on how to deploy Ceph
instead of the Ceph documentation. Once I read the Ceph documentation on it,
it was clear.


Cheers
___
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com


[ceph-users] openstack volume to image

2014-05-29 Thread 10 minus
Hi,


My Cinder backend storage is Ceph. Is there a mechanism to convert a booted
instance (volume) into an image?

Cheers
___
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com


[ceph-users] ceph cinder compute-nodes

2014-05-24 Thread 10 minus
Hi,

I went through the docs for setting up Cinder with Ceph.

From the docs, I have to perform the following on every compute node:

virsh secret-define --file secret.xml

The issue I see is that I have to perform this on 5 compute nodes, while on
the Cinder side it expects to have only one

rbd_secret_uuid = uuid

and the former command will generate 5 UUIDs. How can I pass 5 UUIDs to
Cinder?
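
The only workaround I can see is to generate one UUID up front and reuse the
same secret.xml on all five compute nodes, roughly as sketched below - though
I'm not sure this is the intended approach, and the file names and key path
here are placeholders:

--snip--
# generate ONE uuid and put it into secret.xml, then copy the same file to
# every compute node
uuidgen

# secret.xml (same file everywhere):
# <secret ephemeral='no' private='no'>
#   <uuid>PASTE-THE-ONE-UUID-HERE</uuid>
#   <usage type='ceph'><name>client.cinder secret</name></usage>
# </secret>

# on every compute node:
virsh secret-define --file secret.xml
virsh secret-set-value --secret PASTE-THE-ONE-UUID-HERE --base64 $(cat client.cinder.key)

# cinder.conf then needs only the single shared value:
# rbd_secret_uuid = PASTE-THE-ONE-UUID-HERE
--snip--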

Cheers
___
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com


Re: [ceph-users] Ceph Firefly on Centos 6.5 cannot deploy osd

2014-05-24 Thread 10 minus
ocks=0,
rtextents=0
[cc02][DEBUG ] The operation has completed successfully.
[cc02][INFO  ] Running command: udevadm trigger --subsystem-match=block
--action=add
[cc02][INFO  ] checking OSD status...
[cc02][INFO  ] Running command: ceph --cluster=ceph osd stat --format=json
[ceph_deploy.osd][DEBUG ] Host cc02 is now ready for osd use.
--snip--

To get the Ceph health status to OK, I had to readjust the pool sizes:


for i in $(rados lspools);do ceph osd pool set $i size  1;done
set pool 0 size to 1
set pool 1 size to 1
set pool 2 size to 1
set pool 3 size to 1
set pool 4 size to 1
set pool 5 size to 1
[root@cc01 ceph]# for i in $(rados lspools);do ceph osd pool set $i size 2;done
set pool 0 size to 2
set pool 1 size to 2
set pool 2 size to 2
set pool 3 size to 2
set pool 4 size to 2
set pool 5 size to 2
[root@cc01 ceph]# ceph -s
cluster 9f951603-0c31-4942-aefd-96f85b5ea908
 health HEALTH_OK
 monmap e1: 3 mons at {cc01=
172.18.1.31:6789/0,cc02=172.18.1.32:6789/0,cc03=172.18.1.33:6789/0},
election epoch 26, quorum 0,1,2 cc01,cc02,cc03
 osdmap e64: 2 osds: 2 up, 2 in
  pgmap v121: 492 pgs, 6 pools, 0 bytes data, 0 objects
80636 kB used, 928 GB / 928 GB avail
 492 active+clean

--snip--

Can I pass these values via ceph.conf?
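
What I have in mind is something like the snippet below, set before the pools
are created - my assumption from the option names, not something I have
verified yet:

--snip--
[global]
osd pool default size = 2
osd pool default min size = 1
--snip--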








On Wed, May 21, 2014 at 4:05 PM, 10 minus  wrote:

> Hi,
>
> I have just started to dabble with Ceph - I went through the docs:
> http://ceph.com/howto/deploying-ceph-with-ceph-deploy/
>
>
> I have a 3-node setup with 2 nodes for OSDs.
>
> I use the ceph-deploy mechanism.
>
> The ceph init scripts expect the cluster conf to be named ceph.conf. If I
> give it any other name the init scripts don't work, so for test purposes I'm
> using ceph.conf.
>
>
> --ceph.conf--
> [global]
> auth_service_required = cephx
> filestore_xattr_use_omap = true
> auth_client_required = cephx
> auth_cluster_required = cephx
> mon_host = 172.18.1.31,172.18.1.32,172.18.1.33
> mon_initial_members = cc01, cc02, cc03
> fsid = b58e50f1-13a3-4b14-9cff-32b6edd851c9
> --snip--
>
> I managed to get the mons deployed, but ceph -s returns a health error.
>
> --snip--
>  ceph -s
> cluster b58e50f1-13a3-4b14-9cff-32b6edd851c9
>  health HEALTH_ERR 192 pgs stuck inactive; 192 pgs stuck unclean; no
> osds
>  monmap e1: 3 mons at {cc01=
> 172.18.1.31:6789/0,cc02=172.18.1.32:6789/0,cc03=172.18.1.33:6789/0},
> election epoch 4, quorum 0,1,2 cc01,cc02,cc03
>  osdmap e1: 0 osds: 0 up, 0 in
>   pgmap v2: 192 pgs, 3 pools, 0 bytes data, 0 objects
> 0 kB used, 0 kB / 0 kB avail
>  192 creating
> --snip--
>
> I tried creating two OSDs. Well, they fail too, probably because of the
> health error message.
>
>  --snip--
>  ceph-deploy osd create cc01:/dev/sdb cc02:/dev/sdb
> [ceph_deploy.conf][DEBUG ] found configuration file at:
> /root/.cephdeploy.conf
> [ceph_deploy.cli][INFO  ] Invoked (1.5.2): /usr/bin/ceph-deploy osd create
> cc01:/dev/sdb cc02:/dev/sdb
> [ceph_deploy.osd][DEBUG ] Preparing cluster ceph disks cc01:/dev/sdb:
> cc02:/dev/sdb:
> [cc01][DEBUG ] connected to host: cc01
> [cc01][DEBUG ] detect platform information from remote host
> [cc01][DEBUG ] detect machine type
> [ceph_deploy.osd][INFO  ] Distro info: CentOS 6.5 Final
> [ceph_deploy.osd][DEBUG ] Deploying osd to cc01
> [cc01][DEBUG ] write cluster configuration to /etc/ceph/{cluster}.conf
> [cc01][INFO  ] Running command: udevadm trigger --subsystem-match=block
> --action=add
> [ceph_deploy.osd][DEBUG ] Preparing host cc01 disk /dev/sdb journal None
> activate True
> [cc01][INFO  ] Running command: ceph-disk-prepare --fs-type xfs --cluster
> ceph -- /dev/sdb
> [cc01][WARNIN] INFO:ceph-disk:Will colocate journal with data on /dev/sdb
> [cc01][WARNIN] Could not create partition 2 from 10485761 to 10485760
> [cc01][WARNIN] Error encountered; not saving changes.
> [cc01][WARNIN] ceph-disk: Error: Command '['/usr/sbin/sgdisk',
> '--new=2:0:5120M', '--change-name=2:ceph journal',
> '--partition-guid=2:d882631c-0069-4238-86df-9762ad478daa',
> '--typecode=2:45b0969e-9b03-4f30-b4c6-b4b80ceff106', '--mbrtogpt', '--',
> '/dev/sdb']' returned non-zero exit status 4
> [cc01][DEBUG ] Setting name!
> [cc01][DEBUG ] partNum is 1
> [cc01][DEBUG ] REALLY setting name!
> [cc01][ERROR ] RuntimeError: command returned non-zero exit status: 1
> [ceph_deploy.osd][ERROR ] Failed to execute command: ceph-disk-prepare
> --fs-type xfs --cluster ceph -- /dev/sdb
> [cc02][DEBUG ] connected to host: cc02
> [cc02][DEBUG ] detect platform information from remote host
> [cc02][DEBUG ] detect machine type
> [ceph_deploy.osd][INFO  ] Distro info: CentOS 6.5 Final
> [ceph_deploy.osd][DEBUG ] Deploying os

[ceph-users] Ceph Firefly on Centos 6.5 cannot deploy osd

2014-05-21 Thread 10 minus
Hi,

I have just started to dabble with Ceph - I went through the docs:
http://ceph.com/howto/deploying-ceph-with-ceph-deploy/


I have a 3-node setup with 2 nodes for OSDs.

I use the ceph-deploy mechanism.

The ceph init scripts expect the cluster conf to be named ceph.conf. If I
give it any other name the init scripts don't work, so for test purposes I'm
using ceph.conf.


--ceph.conf--
[global]
auth_service_required = cephx
filestore_xattr_use_omap = true
auth_client_required = cephx
auth_cluster_required = cephx
mon_host = 172.18.1.31,172.18.1.32,172.18.1.33
mon_initial_members = cc01, cc02, cc03
fsid = b58e50f1-13a3-4b14-9cff-32b6edd851c9
--snip--

I managed to get the mons deployed, but ceph -s returns a health error.

--snip--
 ceph -s
cluster b58e50f1-13a3-4b14-9cff-32b6edd851c9
 health HEALTH_ERR 192 pgs stuck inactive; 192 pgs stuck unclean; no
osds
 monmap e1: 3 mons at {cc01=
172.18.1.31:6789/0,cc02=172.18.1.32:6789/0,cc03=172.18.1.33:6789/0},
election epoch 4, quorum 0,1,2 cc01,cc02,cc03
 osdmap e1: 0 osds: 0 up, 0 in
  pgmap v2: 192 pgs, 3 pools, 0 bytes data, 0 objects
0 kB used, 0 kB / 0 kB avail
 192 creating
--snip--

I tried creating two OSDs. Well, they fail too, probably because of the
health error message.

 --snip--
 ceph-deploy osd create cc01:/dev/sdb cc02:/dev/sdb
[ceph_deploy.conf][DEBUG ] found configuration file at:
/root/.cephdeploy.conf
[ceph_deploy.cli][INFO  ] Invoked (1.5.2): /usr/bin/ceph-deploy osd create
cc01:/dev/sdb cc02:/dev/sdb
[ceph_deploy.osd][DEBUG ] Preparing cluster ceph disks cc01:/dev/sdb:
cc02:/dev/sdb:
[cc01][DEBUG ] connected to host: cc01
[cc01][DEBUG ] detect platform information from remote host
[cc01][DEBUG ] detect machine type
[ceph_deploy.osd][INFO  ] Distro info: CentOS 6.5 Final
[ceph_deploy.osd][DEBUG ] Deploying osd to cc01
[cc01][DEBUG ] write cluster configuration to /etc/ceph/{cluster}.conf
[cc01][INFO  ] Running command: udevadm trigger --subsystem-match=block
--action=add
[ceph_deploy.osd][DEBUG ] Preparing host cc01 disk /dev/sdb journal None
activate True
[cc01][INFO  ] Running command: ceph-disk-prepare --fs-type xfs --cluster
ceph -- /dev/sdb
[cc01][WARNIN] INFO:ceph-disk:Will colocate journal with data on /dev/sdb
[cc01][WARNIN] Could not create partition 2 from 10485761 to 10485760
[cc01][WARNIN] Error encountered; not saving changes.
[cc01][WARNIN] ceph-disk: Error: Command '['/usr/sbin/sgdisk',
'--new=2:0:5120M', '--change-name=2:ceph journal',
'--partition-guid=2:d882631c-0069-4238-86df-9762ad478daa',
'--typecode=2:45b0969e-9b03-4f30-b4c6-b4b80ceff106', '--mbrtogpt', '--',
'/dev/sdb']' returned non-zero exit status 4
[cc01][DEBUG ] Setting name!
[cc01][DEBUG ] partNum is 1
[cc01][DEBUG ] REALLY setting name!
[cc01][ERROR ] RuntimeError: command returned non-zero exit status: 1
[ceph_deploy.osd][ERROR ] Failed to execute command: ceph-disk-prepare
--fs-type xfs --cluster ceph -- /dev/sdb
[cc02][DEBUG ] connected to host: cc02
[cc02][DEBUG ] detect platform information from remote host
[cc02][DEBUG ] detect machine type
[ceph_deploy.osd][INFO  ] Distro info: CentOS 6.5 Final
[ceph_deploy.osd][DEBUG ] Deploying osd to cc02
[cc02][DEBUG ] write cluster configuration to /etc/ceph/{cluster}.conf
[cc02][INFO  ] Running command: udevadm trigger --subsystem-match=block
--action=add
[ceph_deploy.osd][DEBUG ] Preparing host cc02 disk /dev/sdb journal None
activate True
[cc02][INFO  ] Running command: ceph-disk-prepare --fs-type xfs --cluster
ceph -- /dev/sdb
[cc02][WARNIN] INFO:ceph-disk:Will colocate journal with data on /dev/sdb
[cc02][WARNIN] Could not create partition 2 from 10485761 to 10485760
[cc02][WARNIN] Error encountered; not saving changes.
[cc02][WARNIN] ceph-disk: Error: Command '['/usr/sbin/sgdisk',
'--new=2:0:5120M', '--change-name=2:ceph journal',
'--partition-guid=2:486c9081-a73c-4906-b97a-c03458feba26',
'--typecode=2:45b0969e-9b03-4f30-b4c6-b4b80ceff106', '--mbrtogpt', '--',
'/dev/sdb']' returned non-zero exit status 4
[cc02][DEBUG ] Found valid GPT with corrupt MBR; using GPT and will write
new
[cc02][DEBUG ] protective MBR on save.
[cc02][DEBUG ] Setting name!
[cc02][DEBUG ] partNum is 1
[cc02][DEBUG ] REALLY setting name!
[cc02][ERROR ] RuntimeError: command returned non-zero exit status: 1
[ceph_deploy.osd][ERROR ] Failed to execute command: ceph-disk-prepare
--fs-type xfs --cluster ceph -- /dev/sdb
[ceph_deploy][ERROR ] GenericError: Failed to create 2 OSDs
--snip--
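
The "Found valid GPT with corrupt MBR" line makes me suspect there is leftover
partition data on the disks, so the next thing I plan to try is zapping them
first (just a guess on my part):

--snip--
ceph-deploy disk zap cc01:/dev/sdb cc02:/dev/sdb
ceph-deploy osd create cc01:/dev/sdb cc02:/dev/sdb
--snip--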

Any pointers to fix the issue would be appreciated.

Cheers
___
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com