[ceph-users] New Cluster (0.87), Missing Default Pools?

2014-12-18 Thread Dyweni - Ceph-Users

Hi All,


Just set up the monitor for a new cluster based on Giant (0.87), and I 
find that only the 'rbd' pool was created automatically.  I don't see 
the 'data' or 'metadata' pools in 'ceph osd lspools' or in the log files.  
I haven't set up any OSDs or MDSs yet.  I'm following the manual 
deployment guide.


Would you mind looking over the setup details/logs below and letting me 
know where I went wrong?




Here's my /etc/ceph/ceph.conf file:
---
[global]
fsid = xx

public network = xx.xx.xx.xx/xx
cluster network = xx.xx.xx.xx/xx

auth cluster required = cephx
auth service required = cephx
auth client required = cephx

osd pool default size = 2
osd pool default min size = 1

osd pool default pg num = 100
osd pool default pgp num = 100

[mon]
mon initial members = a

[mon.a]
host = xx
mon addr = xx.xx.xx.xx
---


Here are the commands used to set up the monitor:
---
ceph-authtool --create-keyring /tmp/ceph.mon.keyring --gen-key -n mon. 
--cap mon 'allow *'
ceph-authtool --create-keyring /etc/ceph/ceph.client.admin.keyring 
--gen-key -n client.admin --set-uid=0 --cap mon 'allow *' --cap osd 
'allow *' --cap mds 'allow'
ceph-authtool /tmp/ceph.mon.keyring --import-keyring 
/etc/ceph/ceph.client.admin.keyring

monmaptool --create --add xx xx.xx.xx.xx --fsid xx /tmp/monmap
mkdir /var/lib/ceph/mon/ceph-a
ceph-mon --mkfs -i a --monmap /tmp/monmap --keyring 
/tmp/ceph.mon.keyring

/etc/init.d/ceph-mon.a start
---


Here's the ceph-mon.a logfile:
---
2014-12-18 12:35:45.768752 7fb00df94780  0 ceph version 0.87 
(c51c8f9d80fa4e0168aa52685b8de40e42758578), process ceph-mon, pid 3225
2014-12-18 12:35:45.856851 7fb00df94780  0 mon.a does not exist in 
monmap, will attempt to join an existing cluster
2014-12-18 12:35:45.857069 7fb00df94780  0 using public_addr 
xx.xx.xx.xx:0/0 -> xx.xx.xx.xx:6789/0
2014-12-18 12:35:45.857126 7fb00df94780  0 starting mon.a rank -1 at 
xx.xx.xx.xx:6789/0 mon_data /var/lib/ceph/mon/ceph-a fsid xx
2014-12-18 12:35:45.857330 7fb00df94780  1 mon.a@-1(probing) e0 preinit 
fsid xx
2014-12-18 12:35:45.857402 7fb00df94780  1 mon.a@-1(probing) e0  
initial_members a, filtering seed monmap
2014-12-18 12:35:45.858322 7fb00df94780  0 mon.a@-1(probing) e0  my rank 
is now 0 (was -1)
2014-12-18 12:35:45.858360 7fb00df94780  1 mon.a@0(probing) e0 
win_standalone_election
2014-12-18 12:35:45.859803 7fb00df94780  0 log_channel(cluster) log 
[INF] : mon.a@0 won leader election with quorum 0
2014-12-18 12:35:45.863846 7fb008d4b700  1 
mon.a@0(leader).paxosservice(pgmap 0..0) refresh upgraded, format 1 -> 0
2014-12-18 12:35:45.863867 7fb008d4b700  1 mon.a@0(leader).pg v0 
on_upgrade discarding in-core PGMap
2014-12-18 12:35:45.865662 7fb008d4b700  1 
mon.a@0(leader).paxosservice(auth 0..0) refresh upgraded, format 1 -> 0
2014-12-18 12:35:45.865719 7fb008d4b700  1 mon.a@0(probing) e1 
win_standalone_election
2014-12-18 12:35:45.867394 7fb008d4b700  0 log_channel(cluster) log 
[INF] : mon.a@0 won leader election with quorum 0
2014-12-18 12:35:46.003223 7fb008d4b700  0 log_channel(cluster) log 
[INF] : monmap e1: 1 mons at {a=xx.xx.xx.xx:6789/0}
2014-12-18 12:35:46.040555 7fb008d4b700  1 
mon.a@0(leader).paxosservice(auth 0..0) refresh upgraded, format 1 -> 0
2014-12-18 12:35:46.087081 7fb008d4b700  0 log_channel(cluster) log 
[INF] : pgmap v1: 0 pgs: ; 0 bytes data, 0 kB used, 0 kB / 0 kB avail
2014-12-18 12:35:46.141415 7fb008d4b700  0 mon.a@0(leader).mds e1 
print_map

epoch   1
flags   0
created 0.00
modified    2014-12-18 12:35:46.038418
tableserver 0
root    0
session_timeout 0
session_autoclose   0
max_file_size   0
last_failure    0
last_failure_osd_epoch  0
compat  compat={},rocompat={},incompat={}
max_mds 0
in
up  {}
failed
stopped
data_pools
metadata_pool   0
inline_data disabled

2014-12-18 12:35:46.151117 7fb008d4b700  0 log_channel(cluster) log 
[INF] : mdsmap e1: 0/0/0 up
2014-12-18 12:35:46.152873 7fb008d4b700  1 mon.a@0(leader).osd e1 e1: 0 
osds: 0 up, 0 in
2014-12-18 12:35:46.154551 7fb008d4b700  0 mon.a@0(leader).osd e1 crush 
map has features 1107558400, adjusting msgr requires
2014-12-18 12:35:46.154580 7fb008d4b700  0 mon.a@0(leader).osd e1 crush 
map has features 1107558400, adjusting msgr requires
2014-12-18 12:35:46.154588 7fb008d4b700  0 mon.a@0(leader).osd e1 crush 
map has features 1107558400, adjusting msgr requires
2014-12-18 12:35:46.154592 7fb008d4b700  0 mon.a@0(leader).osd e1 crush 
map has features 1107558400, adjusting msgr requires
2014-12-18 12:35:46.157078 7fb008d4b700  0 log_channel(cluster) log 
[INF] : osdmap e1: 0 osds: 0 up, 0 in
2014-12-18 12:35:46.220701 7fb008d4b700  1 
mon.a@0(leader).paxosservice(auth 1..1) refresh upgraded, format 0 -> 1
2014-12-18 12:35:46.334457 7fb008d4b700  0 log_channel(cluster) log 
[INF] : pgmap v2: 64 pgs: 64 creating; 0 bytes data, 0 kB used, 0 kB / 0 
kB avail

Re: [ceph-users] New Cluster (0.87), Missing Default Pools?

2014-12-18 Thread John Spray
No mistake -- the Ceph FS pools are no longer created by default, as
not everybody needs them.  Ceph FS users now create these pools
explicitly:
http://ceph.com/docs/master/cephfs/createfs/
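
For reference, here's roughly what that page describes (a minimal 
sketch; the pool names and PG count are placeholders, not prescriptive):

---
ceph osd pool create cephfs_data 100
ceph osd pool create cephfs_metadata 100
ceph fs new cephfs cephfs_metadata cephfs_data
---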

John


Re: [ceph-users] New Cluster (0.87), Missing Default Pools?

2014-12-18 Thread Thomas Lemarchand
I remember reading somewhere (maybe in the changelogs) that the default
pools are not created automatically anymore.

You can create the pools you need yourself.
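
For example (a minimal sketch; the pool name is illustrative, and 100 
matches the 'osd pool default pg num' in your config):

---
ceph osd pool create mypool 100 100
ceph osd lspools
---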

-- 
Thomas Lemarchand
Cloud Solutions SAS - Responsable des systèmes d'information




Re: [ceph-users] New Cluster (0.87), Missing Default Pools?

2014-12-18 Thread Dyweni - Ceph-Users

Thanks!!

Looks like the manual installation instructions should be updated to 
eliminate future confusion.


Dyweni




Re: [ceph-users] New Cluster (0.87), Missing Default Pools?

2014-12-18 Thread John Spray
Can you point out the specific page that's out of date so that we can update it?

Thanks,
John


Re: [ceph-users] New Cluster (0.87), Missing Default Pools?

2014-12-18 Thread Thomas Lemarchand
No!
It would have been a really bad idea. I upgraded without losing my
default pools, thankfully ;)

-- 
Thomas Lemarchand
Cloud Solutions SAS - Responsable des systèmes d'information



On Thu., 2014-12-18 at 10:10 -0800, JIten Shah wrote:
 So what happens if we upgrade from Firefly to Giant? Do we lose the pools?
 
 —Jiten

Re: [ceph-users] New Cluster (0.87), Missing Default Pools?

2014-12-18 Thread John Spray
On Thu, Dec 18, 2014 at 6:10 PM, JIten Shah jshah2...@me.com wrote:
 So what happens if we upgrade from Firefly to Giant? Do we lose the pools?

Sure, you didn't have any data you wanted to keep, right? :-D

Seriously though, no, we don't delete anything during an upgrade.
It's just newly installed clusters that would never have those pools
created.

John


Re: [ceph-users] New Cluster (0.87), Missing Default Pools?

2014-12-18 Thread Dyweni - Ceph-Users

Hi John,

Yes, no problem!  I have a few items that I noticed.  They are:


1.   The missing 'data' and 'metadata' pools

  http://ceph.com/docs/master/install/manual-deployment/

  Monitor Bootstrapping - Steps #17 & 18


2.   The setting 'mon initial members'

  On page 
'http://ceph.com/docs/master/rados/configuration/mon-config-ref/', 'mon 
initial members' is described as the IDs of the initial monitors in the 
cluster.

  On page 'http://ceph.com/docs/master/install/manual-deployment/' 
(Monitor Bootstrapping - Steps #6 & 14), it lists the members as the 
hostnames instead (see the sketch after this list).




3.   Creating the default data directory for the monitors

  On page 
'http://ceph.com/docs/master/rados/configuration/mon-config-ref/', 'mon 
data' defaults to '/var/lib/ceph/mon/$cluster-$id'.

  On page 'http://ceph.com/docs/master/install/manual-deployment/' 
(Monitor Bootstrapping - Step #12), it uses the hostname instead.





4.   Populating the monitor daemons

  The man page for 'ceph-mon' shows that '-i' is the monitor ID.

  On page 'http://ceph.com/docs/master/install/manual-deployment/' 
(Monitor Bootstrapping - Step #13), it uses the hostname instead.
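
  To illustrate the ID-vs-hostname point with my own setup (a rough 
sketch; 'mon-host-1' is a made-up hostname, not taken from the guide):

---
# In ceph.conf, 'a' is the monitor ID; the hostname lives in a separate key
[mon.a]
host = mon-host-1

# The default 'mon data' path expands $cluster-$id, hence 'ceph-a'
mkdir /var/lib/ceph/mon/ceph-a

# And '-i' takes the monitor ID, not the hostname
ceph-mon --mkfs -i a --monmap /tmp/monmap --keyring /tmp/ceph.mon.keyring
---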




Thanks,
Dyweni




