Re: [ceph-users] Size of RBD images

2013-11-20 Thread Bernhard Glomm
That might be; the man page of
ceph version 0.72.1
tells me it isn't, though.
Anyhow, I am still running kernel 3.8.xx.

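A quick way to check is simply to try it - a minimal sketch (the image name
"probe_img" is made up; mapping format 2 images needs roughly kernel 3.10 or newer):

uname -r
rbd create probe_img --image-format 2 --size 128
rbd map probe_img && echo "kernel can map format 2" \
                  || echo "kernel rbd module cannot map format 2"
rbd showmapped    # lists the /dev/rbdX device, if any
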
Bernhard

On 19.11.2013 20:10:04, Wolfgang Hennerbichler wrote:

> On Nov 19, 2013, at 3:47 PM, Bernhard Glomm <bernhard.gl...@ecologic.eu>
> wrote:
> 
> > Hi Nicolas
> > just fyi
> > rbd format 2 is not supported yet by the linux kernel (module)

> I believe this is wrong. I think linux supports rbd format 2 images since 
> 3.10. 
> 
> wogri



-- 
Bernhard Glomm
IT Administration

Phone: +49 (30) 86880 134
Fax:   +49 (30) 86880 100
Skype: bernhard.glomm.ecologic

Ecologic Institut gemeinnützige GmbH | Pfalzburger Str. 43/44 | 10717 Berlin | Germany
GF: R. Andreas Kraemer | AG: Charlottenburg HRB 57947 | USt/VAT-IdNr.: DE811963464
Ecologic™ is a Trade Mark (TM) of Ecologic Institut gemeinnützige GmbH

___
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com


Re: [ceph-users] Size of RBD images

2013-11-19 Thread Bernhard Glomm
Hi Nicolas,
just FYI:
rbd format 2 is not yet supported by the Linux kernel (module);
it can only be used as a target for virtual machines using librbd.
See: man rbd --> --image-format

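As a rough illustration of that librbd path (pool and image names here are made
up, and qemu must have been built with rbd support):

rbd create --image-format 2 --size 4096 kvm-pool/vm01.img
qemu-system-x86_64 -m 1024 \
    -drive format=raw,file=rbd:kvm-pool/vm01.img:id=admin

The kernel rbd module is never involved; qemu talks to the cluster directly
through librbd.
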
Shrinking time: the same happened to me.
An rbd (v1) device
took about a week to shrink from 1 PB to 10 TB.
The good news: I already had about 5 TB of data on it
and ongoing processes using the device, and
there was neither any data loss nor
a significant performance issue.
(3 mons + 4 machines with a different number of OSDs each)

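A minimal sketch of such a shrink (the image name is made up, sizes are in MB,
and newer rbd releases may additionally require --allow-shrink):

rbd resize --pool rbd --image big_img --size 10485760   # down to ~10 TB
rbd info rbd/big_img
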
Bernhard

> EDIT: sorry about the "No such file" error
> 
> Now, it seems this is a separate issue: the system I was using was 
> apparently unable to map devices to images in format 2. I will be 
> investigating that further before mentioning it again.
> 
> I would still appreciate answers about the 1PB image and the time to 
> shrink it.
> 
> Best regards,
> 
> Nicolas Canceill
> Scalable Storage Systems
> SURFsara (Amsterdam, NL)
> 
> 
> On 11/19/2013 03:20 PM, nicolasc wrote:
> > Hi every one,
> > 
> > In the course of playing with RBD, I noticed a few things:
> > 
> > * The RBD images are so thin-provisioned, you can create arbitrarily 
> > large ones.
> > On my 0.72.1 freshly-installed empty 200TB cluster, I was able to 
> > create a 1PB image:
> > 
> > $ rbd create --image-format 2 --size 1073741824 test_img
> > 
> > This command is successful, and I can check the image status:
> > 
> > $ rbd info test
> > rbd image 'test':
> > size 1024 TB in 268435456 objects
> > order 22 (4096 kB objects)
> > block_name_prefix: rbd_data.19f76b8b4567
> > format: 2
> > features: layering
> > 
> > * Such an oversized image seems unmountable on my 3.2.46 kernel.
> > However the error message is not very explicit:
> > 
> > $ rbd map test_img
> > rbd: add failed: (2) No such file or directory
> > 
> > There is no error or explanation to be seen anywhere in the logs.
> > dmesg reports the connection to the cluster through RBD as usual, 
> > and that's it.
> > Using the exact same commands with image size 32GB will successfully 
> > map the device.
> > 
> > * Such an oversized image takes an awfully long time to shrink or remove.
> > However, the image has just been created and is empty.
> > In RADOS, I only see the corresponding rbd_id and rbd_header, but no 
> > data object at all.
> > Still, removing the 1PB image takes roughly 8 hours.
> > 
> > Cluster config:
> > 3 mons, 8 nodes * 72 OSDs, about 4800 PGs (2400 PGs in pool "rbd")
> > cluster and public network are 10GbE, each node has 8 cores and 64GB mem
> > 
> > Oh, so my questions:
> > - why is it possible to create an image five times the size of the 
> > cluster without warning?
> > - where could this "No such file" error come from?
> > - why does it take long to shrink/delete a 
> > large-but-empty-and-thin-provisioned image?
> > 
> > I know that 1PB is oversized ("No such file" when trying to map), and 
> > 32GB is not, so I am currently looking for the oversize threshold. 
> > More info coming soon.
> > 
> > Best regards,
> > 
> > Nicolas Canceill
> > Scalable Storage Systems
> > SURFsara (Amsterdam, NL)
> > 
> > ___
> > ceph-users mailing list
> > ceph-users@lists.ceph.com
> > http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com

> ___
> ceph-users mailing list
> ceph-users@lists.ceph.com
> http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com





___
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com


Re: [ceph-users] alternative approaches to CEPH-FS

2013-11-19 Thread Bernhard Glomm
Hi all,

I also would like to see cephfs stable, especially with the snapshot function.
I tried to figure out the roadmap but couldn't get a clear picture.
Is there a target date for production-ready snapshot functionality?

Until then, a possible alternative (sorry, without ceph :-/)
is using glusterfs, which can be really fast.
Two years ago I had a setup utilizing RAID6 bricks, each consisting of 7+1-hotspare
disks (1 TB SATA),
several of them in a gluster stripe, connected via 4 Gb FC (it needed to be cheap
;-) to a server (Debian)
that exported the space via Samba (roughly as in the sketch below).
I liked it because it was:
- really fast
- really robust
- as cheap as possible (for huge amounts of production data, IMHO)
- easy to set up and maintain
- smoothly scalable ad infinitum
  (one can start with one server and one RAID array, then grow for volume and
  redundancy/off-site replication)

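A minimal sketch of that kind of setup (host names, brick paths and the share
name are all made up; the samba side is reduced to the bare minimum):

gluster volume create bigvol stripe 2 \
    server1:/bricks/raid6a server2:/bricks/raid6b
gluster volume start bigvol
mount -t glusterfs server1:/bigvol /export/bigvol
cat >> /etc/samba/smb.conf <<'EOF'
[bigvol]
   path = /export/bigvol
   read only = no
EOF
service smbd reload
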
Big drawback: no snapshots, no easy read-only / COW functionality;
that's what I hope cephfs will bring us!
I have been trying it for some days now, and it works; the MDS hasn't crashed (yet ;-)
and it took 2 TB of data with acceptable performance - BUT
erasing that data is a no-go :-(  13 MB/s??

Again, is there any roadmap on cephfs (incl. snaps?)

best regards 
Bernhard





> Actually #3 is a novel idea, I have not thought of it. Thinking about the 
> difference just off the top of my head though, comparatively, #3 will have 


> 1) more overheads (because of the additional VM)
> 
> 2) Can't grow once you reach the hard limit of 14TB, and if you have multiple 
> of such machines, then fragmentation becomes a problem
> > 

> 
> 3) might have the risk of 14TB partition corruption wiping out all your shares
> 
> 4) not as easy as HA. Although I have not worked HA into NFSCEPH yet, it 
> should be doable by drbd-ing the NFS data directory, or any other techniques 
> that people use for redundant NFS servers.
> > 


> - WP
> 
> 
> 
> On Fri, Nov 15, 2013 at 10:26 PM, Gautam Saxena <gsax...@i-a-inc.com>
> wrote:
> > Yip,
> > I went to the link. Where can the script ( nfsceph) be downloaded? How's 
> > the robustness and performance of this technique? (That is, is there are 
> > any reason to believe that it would more/less robust and/or performant than 
> > option #3 mentioned in the original thread?)
> > 
> > On Fri, Nov 15, 2013 at 1:57 AM, YIP Wai Peng <yi...@comp.nus.edu.sg> wrote:
> > > On Fri, Nov 15, 2013 at 12:08 AM, Gautam Saxena <gsax...@i-a-inc.com> wrote:

> > > > 1) nfs over rbd (http://www.sebastien-han.fr/blog/2012/07/06/nfs-over-rbd/)
> > > We are now running this - basically an intermediate/gateway node that
> > > mounts ceph rbd objects and exports them as NFS.
> > > http://waipeng.wordpress.com/2013/11/12/nfsceph/
> > > 
> > > - WP
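
For reference, that gateway approach boils down to something like this (image
name, mount point and export network are made up; /dev/rbd0 assumes it is the
first mapped device):

rbd create nfs_img --size 102400        # 100 GB, format 1 so the kernel can map it
rbd map nfs_img
mkfs.xfs /dev/rbd0
mkdir -p /export/nfs_img && mount /dev/rbd0 /export/nfs_img
echo "/export/nfs_img 192.168.242.0/24(rw,no_root_squash,sync)" >> /etc/exports
exportfs -ra
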
-- 
Bernhard Glomm
Network & System Administrator
Ecologic Institute
bernhard.gl...@ecologic.eu
www.ecologic.eu

___
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com


Re: [ceph-users] ceph-deploy again

2013-09-24 Thread Bernhard Glomm
On 23.09.2013 21:56:56, Alfredo Deza wrote:


> 
> On Mon, Sep 23, 2013 at 11:23 AM, Bernhard Glomm <bernhard.gl...@ecologic.eu> wrote:
> > Hi all,
> > 
> > something with ceph-deploy doesn't work at all anymore.
> > After an upgrade, ceph-deploy failed to roll out a new monitor
> > with "permission denied. are you root?"
> > (obviously there shouldn't be a root login, so I had another user
> > for ceph-deploy before, which worked perfectly - why not now?)
> > 
> > ceph_deploy.install][DEBUG ] Purging host ping ...
> > Traceback (most recent call last):
> > E: Could not open lock file /var/lib/dpkg/lock - open (13: Permission 
> > denied)
> > E: Unable to lock the administration directory (/var/lib/dpkg/), are you 
> > root?
> > 
> > Does this mean I have to let root log into my cluster with a passwordless
> > key?
> > I would rather like to use another login, as before, if possible.
> > 
> Can you paste here the exact command you are running (and with what user) ?


Well, I used to run this script:

##
#!/bin/bash
# initialize the ceph cluster

# our systems
ceph_osds="ping pong"
ceph_mons="ping pong nuke36"
options="-v"

cd /tmp

for i in $ceph_mons; do
    ssh $i "sudo service ntp stop && sudo ntpdate-debian && sudo service ntp 
start && date";echo -e "\n\n"
done

ceph-deploy $options purge $ceph_mons
ceph-deploy $options purgedata $ceph_mons

mkdir /etc/ceph
cd /etc/ceph

# install ceph
ceph-deploy $options install --stable dumpling $ceph_mons

# create cluster
ceph-deploy $options new $ceph_mons

# inject your extra configuration options here
# switch on debugging
echo -e "debug ms = 1
debug mon = 20" >> /etc/ceph/ceph.conf

# create the monitors
ceph-deploy $options --overwrite-conf mon create $ceph_mons

sleep 10
# get the keys
for host in $ceph_mons; do
    ceph-deploy $options gatherkeys $host
done

for host in $ceph_osds;do
    ceph-deploy disk zap $host:/dev/sdb
    ceph-deploy $options osd create $host:/dev/sdb
done

# check
ceph status

exit 0

##
I ran this script as root, with a .ssh/config to switch to the user that I can
log into the cluster nodes with. There is no problem with the ssh nor with the sudo,
since the ntp commands at the beginning work fine.

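For completeness, the non-root setup that ceph-deploy expects is roughly this
(the user name "cephdeploy" is made up; the point is passwordless sudo on every
node plus an ~/.ssh/config entry selecting that user):

for host in ping pong nuke36; do
    ssh $host "sudo useradd -m cephdeploy && \
               echo 'cephdeploy ALL = (root) NOPASSWD:ALL' | sudo tee /etc/sudoers.d/cephdeploy && \
               sudo chmod 0440 /etc/sudoers.d/cephdeploy"
done
cat >> ~/.ssh/config <<'EOF'
Host ping pong nuke36
    User cephdeploy
EOF
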


>  
> > > 
> > The howto on ceph.com doesn't say anything about it,
> > the changelog.Debian.gz isn't very helpful either, and
> > no other changelog (nor a README) is provided.
> > 
> > ceph-deploy is version 1.2.6
> > system is freshly installed raring
> > 
> > I have these two lines in my sources.list:
> > deb http://192.168.242.91:3142/ceph.com/debian/ raring main
> > deb http://192.168.242.91:3142/ceph.com/packages/ceph-extras/debian/ raring main
> > 
> > since these two didn't work:
> > #deb http://gitbuilder.ceph.com/ceph-deb-raring-x86_64-basic/ref/dumpling/ raring main
> > #deb http://gitbuilder.ceph.com/cdep-deb-raring-x86_64-basic/ref/master/ raring main
> > (couldn't find the python-pushy version ceph-deploy depends on)
> > 
> > TIA
> > 
> > Bernhard


___
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com


[ceph-users] ceph-deploy again

2013-09-23 Thread Bernhard Glomm
Hi all,

something with ceph-deploy doesn't work at all anymore.
After an upgrade, ceph-deploy failed to roll out a new monitor
with "permission denied. are you root?"
(obviously there shouldn't be a root login, so I had another user
for ceph-deploy before, which worked perfectly - why not now?)

ceph_deploy.install][DEBUG ] Purging host ping ...
Traceback (most recent call last):
E: Could not open lock file /var/lib/dpkg/lock - open (13: Permission denied)
E: Unable to lock the administration directory (/var/lib/dpkg/), are you root?

Does this mean I have to let root log into my cluster with a passwordless key?
I would rather like to use another login, as before, if possible.

The howto on ceph.com doesn't say anything about it,
the changelog.Debian.gz isn't very helpful either, and
no other changelog (nor a README) is provided.

ceph-deploy is version 1.2.6
system is freshly installed raring

I have these two lines in my sources.list:
deb http://192.168.242.91:3142/ceph.com/debian/ raring main
deb http://192.168.242.91:3142/ceph.com/packages/ceph-extras/debian/ raring main

since these two didn't work:
#deb http://gitbuilder.ceph.com/ceph-deb-raring-x86_64-basic/ref/dumpling/ raring main
#deb http://gitbuilder.ceph.com/cdep-deb-raring-x86_64-basic/ref/master/ raring main
(couldn't find the python-pushy version ceph-deploy depends on)

TIA

Bernhard


___
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com


Re: [ceph-users] newbie question: rebooting the whole cluster, powerfailure

2013-09-06 Thread Bernhard Glomm
Thanks a lot for making this clear!

I thought 4 out of 7 wouldn't be good because it's not an odd number...
but I guess after I had brought up the cluster with 4 MONs I could
have removed one of the MONs to get to an odd number (well, or added one).

Thanks again,

Bernhard

On 06.09.2013 13:26:37, Jens Kristian Søgaard wrote:
> Hi,
> 
> > In order to reach a quorum after reboot, you need to have more than half
> > of yours mons running.

> > with 7 MONs I have to have at least 5 MONS running?

> No. 4 is more than half of 7, so 4 would be a majority and thus would be 
> able to form a quorum.
> 
> > 4 would be more than the half but insufficient to reach a quorum.

> I don't see why you think 4 would be insufficient.
> 
> > But, would the cluster come up at all if I could get only 3 out of the 7
> > initial MONs up and running?

> It wouldn't be able to form a quorum, and thus you would not be able to 
> use the cluster.
> 
> > I.e. you have 2 mons out of 5 running for example - it will not reach a
> > quorum because you need at least 3 mons to do that.

> > Well, as I said, I had 3 MONs running, just the sequence in which they
> > came up after the reboot was odd,

> The number 3 was specifically for the example of having 5 mons in total.
> 
> In your case where you have 7 mons in total, you need to have 4 running 
> to do anything meaningful.
> 
> The order they are started in does not matter as such.
> 
> > And do I need to make sure that a quorum capable number of MONs is up
> > BEFORE I restart the OSDs?

> No, that is not important. The OSDs will wait until the required number 
> of mons become available.
> 
> > - stop MON / one after the other
> > - start MON / in reverse order, last shutdown is first boot

> It is not required to stop or start mons in a specific order.
> 
> > central question remains, do I need 5 out of 7 MONs running or would 3
> > out of 7 be sufficient?

> The magic number here is 4.
> 
> -- 
> Jens Kristian Søgaard, Mermaid Consulting ApS,
> j...@mermaidconsulting.dk,
> http://.mermaidconsulting.com/
> 
> 
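
Spelled out, the majority rule is just floor(n/2) + 1:

for n in 3 5 7; do
    echo "$n mons in total -> at least $(( n / 2 + 1 )) must be up for a quorum"
done
# prints: 3 -> 2, 5 -> 3, 7 -> 4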





___
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com


Re: [ceph-users] newbie question: rebooting the whole cluster, powerfailure

2013-09-06 Thread Bernhard Glomm
Thanks, Jens.

> > I have my testcluster consisting two OSDs that also host MONs plus
> > one to five MONs.

> Are you saying that you have a total of 7 mons?


yepp 

> > down last, not the other MONs though (since - surprise - they
> > are in this test scenario just virtual instances residing on some
> > ceph rbds)

> This seems to be your problem. When you shutdown the cluster, you
> haven't got those extra mons.
> 
> In order to reach a quorum after reboot, you need to have more than half
> of yours mons running.


Wait!
Let's say, in my setup (which might be silly given the MON/OSD ratio, but still),
with 7 MONs I have to have at least 5 MONs running?
4 would be more than half but insufficient to reach a quorum.
But would the cluster come up at all if I could get only 3 of the 7
initial MONs up and running?

> If you have 5 or more mons in total, this means that the two physical
> servers running mons cannot reach quorum by themselves.
> 
> I.e. you have 2 mons out of 5 running for example - it will not reach a
> quorum because you need at least 3 mons to do that.


Well, as I said, I had 3 MONs running; just the sequence in which they came up
after the reboot was odd: first the first MON/OSD combination came to life, then
a second MON, then the second MON/OSD combination.
 
> You need to either move mons to physical machines, or virtual instances
> not depending on the same Ceph cluster, or reduce the number of mons in
> the system to 3 (or 1).


And do I need to make sure that a quorum-capable number of MONs is up BEFORE I
restart the OSDs?

Then the sequence would be (roughly as in the sketch below):

- free the cluster of load/usage
- stop MDS / in any given order
- stop OSD / in any given order
- stop MON / one after the other
- start MON / in reverse order, last shutdown is first boot
- start OSD
- start MDS
- allow load to the cluster again

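A first sketch of what the UPS hook could run, along that sequence (this assumes
the sysvinit-style "service ceph" scripts of this release; adapt for upstart/systemd):

ceph osd set noout        # keep the cluster from rebalancing while everything is down
for host in nuke36 pong ping; do
    ssh $host "service ceph stop mds; service ceph stop osd; service ceph stop mon"
done
# ...power off; on power-up start the mons first, then osds and mds, then:
ceph osd unset noout
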
The central question remains: do I need 5 out of 7 MONs running, or would 3 out of 7
be sufficient?

TIA

Bernhard



___
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com


Re: [ceph-users] newbie question: rebooting the whole cluster, powerfailure

2013-09-06 Thread Bernhard Glomm

> > > > And a second question regarding ceph-deploy:
> > > > How do I specify a second NIC/address to be used as the intercluster 
> > > > communication?

> > > You will not be able to do something like this with ceph-deploy. This
> > > sounds like a very specific (or a bit more advanced)
> > > configuration than what ceph-deploy offers.

> > Actually, you can — when editing the ceph.conf (before creating any
> > daemons) simply set public addr and cluster addr in whatever section
> > is appropriate. :)

> Oh, you are right! I was thinking about a flag in ceph-deploy for some reason 
> :) 


Yepp, a flag or option would be nice,
but doing it via ceph.conf would work for me too.
So immediately after running
ceph-deploy new $my_initial_instances
I inject whatever I need into ceph.conf;
then, with the "--overwrite-conf" option,
ceph.conf is pushed to the nodes on
ceph-deploy mon create $my_initial_instances
- right?
ceph-deploy only creates a basic ceph.conf when called with the "new" command,
and does not edit the conf afterwards anymore?
So if I add additional MONs/OSDs later on, shall I adjust the
"mon_host" list
and the "mon_initial_members" list?
Can I introduce the cluster network later on, after the cluster is deployed and
has started working
(by editing ceph.conf, pushing it to the cluster members and restarting the daemons)?

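Something like the following is what I have in mind, right after "ceph-deploy new"
and before any daemons exist (the networks are made-up examples):

# these two belong in the [global] section of ceph.conf
cat >> /etc/ceph/ceph.conf <<'EOF'
public network  = 192.168.242.0/24
cluster network = 10.0.0.0/24
EOF
ceph-deploy --overwrite-conf mon create $my_initial_instances
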
TIA

Bernhard



http://ceph.com/docs/master/rados/configuration/ceph-conf/

___
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com


[ceph-users] newbie question: rebooting the whole cluster, powerfailure

2013-09-05 Thread Bernhard Glomm
Hi all, 

as a ceph newbie I got another question that is probably solved long ago.
I have my test cluster, consisting of two OSDs that also host MONs,
plus one to five additional MONs.
Now I want to reboot all instances, simulating a power failure.
So I shut down the extra MONs,
then shut down the first OSD/MON instance (call it "ping"),
and after its shutdown is complete, shut down the second OSD/MON
instance (call it "pong").
5 minutes later I restart "pong"; then, after I have checked that all services are
up and running, I restart "ping"; afterwards I restart the MON that I brought
down last, but not the other MONs (since - surprise - they are in
this test scenario just virtual instances residing on some ceph rbds).

I think this is the wrong way to do it, since it breaks the cluster
unrecoverably...
at least that's what it seems; ceph tries to call one of the MONs that isn't
there yet.
How do I shut down and restart the whole cluster in a coordinated way in case
of a power failure (I need a script for our UPS)?

And a second question regarding ceph-deploy:
how do I specify a second NIC/address to be used for the inter-cluster
communication?

TIA

Bernhard




___
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com


Re: [ceph-users] rbd mapping fails - maybe solved

2013-08-30 Thread bernhard glomm
Thanks Sage,

I just tried various versions from gitbuilder and finally found one that worked 
;-)

deb http://gitbuilder.ceph.com/ceph-deb-raring-x86_64-basic/ref/dumpling/   
raring main

looks like it works perfectly, on first glance with much better performance 
than cuttlefish.

Do you need some tests for my problem with 0.67.2-16-gd41cf86?
I could do them on Monday.

I didn't run udevadm settle nor cat /proc/partitions, but I checked
/dev/rbd* -> not present,
and
tree /dev/disk
also did not show any hint of a new device other than my hard disk partitions.

Since the dumpling version now seems to work I would otherwise keep using that
to get more familiar with ceph.

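For the record, the checks I would run now to verify a mapping:

ls -l /dev/rbd* 2>/dev/null      # device nodes, if the map succeeded
grep rbd /proc/partitions        # the kernel's view of mapped rbd devices
rbd showmapped                   # rbd's own view (id, pool, image, device)
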
Bernhard

 Bernhard Glomm
IT Administration

Phone:   +49 (30) 86880 134
Fax: +49 (30) 86880 100
Skype:   bernhard.glomm.ecologic
   
Ecologic Institut gemeinnützige GmbH | Pfalzburger Str. 43/44 | 10717 Berlin | 
Germany
GF: R. Andreas Kraemer | AG: Charlottenburg HRB 57947 | USt/VAT-IdNr.: 
DE811963464
Ecologic™ is a Trade Mark (TM) of Ecologic Institut gemeinnützige GmbH

On Aug 30, 2013, at 5:05 PM, Sage Weil  wrote:

> Hi Bernhard,
> 
> On Fri, 30 Aug 2013, Bernhard Glomm wrote:
>> Hi all,
>> 
>> due to a problem with ceph-deploy I currently use
>> 
>> deb http://gitbuilder.ceph.com/ceph-deb-raring-x86_64-basic/ref/wip-4924/
>> raring main
>> (ceph version 0.67.2-16-gd41cf86 (d41cf866ee028ef7b821a5c37b991e85cbf3637f))
>> 
>> Now the initialization of the cluster works like a charm,
>> ceph health is okay,
> 
> Great; this will get backported to dumpling shortly and will be included 
> in the 0.67.3 release.
> 
>> just the mapping of the created rbd is failing.
>> 
>> -
>> root@ping[/1]:~ # ceph osd pool delete kvm-pool kvm-pool
>> --yes-i-really-really-mean-it
>> pool 'kvm-pool' deleted
>> root@ping[/1]:~ # ceph osd lspools
>> 
>> 0 data,1 metadata,2 rbd,
>> root@ping[/1]:~ #
>> root@ping[/1]:~ # ceph osd pool create kvm-pool 1000
>> pool 'kvm-pool' created
>> root@ping[/1]:~ # ceph osd lspools
>> 0 data,1 metadata,2 rbd,4 kvm-pool,
>> root@ping[/1]:~ # ceph osd pool set kvm-pool min_size 2
>> set pool 4 min_size to 2
>> root@ping[/1]:~ # ceph osd dump | grep 'rep size'
>> pool 0 'data' rep size 2 min_size 1 crush_ruleset 0 object_hash rjenkins
>> pg_num 64 pgp_num 64 last_change 1 owner 0 crash_replay_interval 45
>> pool 1 'metadata' rep size 2 min_size 1 crush_ruleset 1 object_hash rjenkins
>> pg_num 64 pgp_num 64 last_change 1 owner 0
>> pool 2 'rbd' rep size 2 min_size 1 crush_ruleset 2 object_hash rjenkins
>> pg_num 64 pgp_num 64 last_change 1 owner 0
>> pool 4 'kvm-pool' rep size 2 min_size 2 crush_ruleset 0 object_hash rjenkins
>> pg_num 1000 pgp_num 1000 last_change 33 owner 0
>> root@ping[/1]:~ # rbd create atom03.cimg --size 4000 --pool kvm-pool
>> root@ping[/1]:~ # rbd create atom04.cimg --size 4000 --pool kvm-pool
>> root@ping[/1]:~ # rbd ls kvm-pool
>> atom03.cimg
>> atom04.cimg
>> root@ping[/1]:~ # rbd --image atom03.cimg --pool kvm-pool info
>> rbd image 'atom03.cimg':
>> size 4000 MB in 1000 objects
>> order 22 (4096 KB objects)
>> block_name_prefix: rb.0.114d.2ae8944a
>> format: 1
>> root@ping[/1]:~ # rbd --image atom04.cimg --pool kvm-pool info
>> rbd image 'atom04.cimg':
>> size 4000 MB in 1000 objects
>> order 22 (4096 KB objects)
>> block_name_prefix: rb.0.127d.74b0dc51
>> format: 1
>> root@ping[/1]:~ # rbd map atom03.cimg --pool kvm-pool --id admin
>> rbd: '/sbin/udevadm settle' failed! (256)
>> root@ping[/1]:~ # rbd map --pool kvm-pool --image atom03.cimg --id admin
>> --keyring /etc/ceph/ceph.client.admin.keyring
>> ^Crbd: '/sbin/udevadm settle' failed! (2)
>> root@ping[/1]:~ # rbd map kvm-pool/atom03.cimg --id admin --keyring
>> /etc/ceph/ceph.client.admin.keyring
>> rbd: '/sbin/udevadm settle' failed! (256)
>> -----
> 
> What happens if you run '/sbin/udevadm settle' from the command line?
> 
> Also, this the very last step before rbd exits (normally with success), so 
> my guess is that the rbd mapping actually succeeded.  cat /proc/partitions 
> or ls /dev/rbd
> 
> sage
> 
>> 
>> Do I miss something?
>> But I think this set of commands worked perfectly with cuttlefish?
>> 
>> TIA
>> 
>> Bernhard
>> 

Re: [ceph-users] rbd mapping fails

2013-08-30 Thread Bernhard Glomm
,4 atom01,atom02,nuke36,ping,pong
   osdmap e34: 2 osds: 2 up, 2 in
    pgmap v367: 1192 pgs: 1192 active+clean; 9788 bytes data, 94460 KB used, 
3722 GB / 3722 GB avail
   mdsmap e17: 1/1/1 up {0=atom02=up:active}, 2 up:standby

2013-08-30 15:03:18.516793 7f3b7cd18700  1 -- 192.168.242.36:0/1003684 
mark_down 0x7f3b7800e4e0 -- 0x7f3b7800e280
2013-08-30 15:03:18.517204 7f3b7cd18700  1 -- 192.168.242.36:0/1003684 
mark_down_all
2013-08-30 15:03:18.517921 7f3b7cd18700  1 -- 192.168.242.36:0/1003684 shutdown 
complete.
root@nuke36[/1]:/etc/ceph # mount -t ceph 192.168.242.36:6789:/ /mnt/ceph/
mount error 5 = Input/output error
--

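One thing worth ruling out when a cephfs mount fails with cephx enabled is the
key not being handed to the kernel client; a hedged sketch:

ceph auth get-key client.admin > /etc/ceph/admin.secret
mount -t ceph 192.168.242.36:6789:/ /mnt/ceph \
      -o name=admin,secretfile=/etc/ceph/admin.secret
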


On 30.08.2013 11:48:35, Bernhard Glomm wrote:
> Hi all,
> 
> due to a problem with ceph-deploy I currently use
> 
> deb http://gitbuilder.ceph.com/ceph-deb-raring-x86_64-basic/ref/wip-4924/ 
> raring main
> (ceph version 0.67.2-16-gd41cf86 (d41cf866ee028ef7b821a5c37b991e85cbf3637f))
> 
> Now the initialization of the cluster works like a charm,
> ceph health is okay, 
> just the mapping of the created rbd is failing.
> 
> -
> root@ping[/1]:~ # ceph osd pool delete kvm-pool kvm-pool 
> --yes-i-really-really-mean-it
> pool 'kvm-pool' deleted
> root@ping[/1]:~ # ceph osd lspools
> 
> 0 data,1 metadata,2 rbd,
> root@ping[/1]:~ # 
> root@ping[/1]:~ # ceph osd pool create kvm-pool 1000
> pool 'kvm-pool' created
> root@ping[/1]:~ # ceph osd lspools
> 0 data,1 metadata,2 rbd,4 kvm-pool,
> root@ping[/1]:~ # ceph osd pool set kvm-pool min_size 2
> set pool 4 min_size to 2
> root@ping[/1]:~ # ceph osd dump | grep 'rep size'
> pool 0 'data' rep size 2 min_size 1 crush_ruleset 0 object_hash rjenkins 
> pg_num 64 pgp_num 64 last_change 1 owner 0 crash_replay_interval 45
> pool 1 'metadata' rep size 2 min_size 1 crush_ruleset 1 object_hash rjenkins 
> pg_num 64 pgp_num 64 last_change 1 owner 0
> pool 2 'rbd' rep size 2 min_size 1 crush_ruleset 2 object_hash rjenkins 
> pg_num 64 pgp_num 64 last_change 1 owner 0
> pool 4 'kvm-pool' rep size 2 min_size 2 crush_ruleset 0 object_hash rjenkins 
> pg_num 1000 pgp_num 1000 last_change 33 owner 0
> root@ping[/1]:~ # rbd create atom03.cimg --size 4000 --pool kvm-pool
> root@ping[/1]:~ # rbd create atom04.cimg --size 4000 --pool kvm-pool
> root@ping[/1]:~ # rbd ls kvm-pool
> atom03.cimg
> atom04.cimg
> root@ping[/1]:~ # rbd --image atom03.cimg --pool kvm-pool info
> rbd image 'atom03.cimg':
>     size 4000 MB in 1000 objects
>     order 22 (4096 KB objects)
>     block_name_prefix: rb.0.114d.2ae8944a
>     format: 1
> root@ping[/1]:~ # rbd --image atom04.cimg --pool kvm-pool info
> rbd image 'atom04.cimg':
>     size 4000 MB in 1000 objects
>     order 22 (4096 KB objects)
>     block_name_prefix: rb.0.127d.74b0dc51
>     format: 1
> root@ping[/1]:~ # rbd map atom03.cimg --pool kvm-pool --id admin
> rbd: '/sbin/udevadm settle' failed! (256)
> root@ping[/1]:~ # rbd map --pool kvm-pool --image atom03.cimg --id admin 
> --keyring /etc/ceph/ceph.client.admin.keyring 
> ^Crbd: '/sbin/udevadm settle' failed! (2)
> root@ping[/1]:~ # rbd map kvm-pool/atom03.cimg --id admin --keyring 
> /etc/ceph/ceph.client.admin.keyring 
> rbd: '/sbin/udevadm settle' failed! (256)
> -
> 
> Do I miss something? 
> But I think this set of commands worked perfectly with cuttlefish?
> 
> TIA
> 
> Bernhard
> 

[ceph-users] ceph-deploy howto

2013-08-30 Thread Bernhard Glomm
Is there an _actual_ howto, man page or other documentation
about ceph-deploy?
I can't find any documentation about how to specify different
networks (storage/public), or how to use folders or partitions instead of disks...

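What I have pieced together so far (unverified, host names and paths made up):
ceph-deploy of this generation seems to accept a partition or a prepared
directory in place of a whole disk, e.g.

ceph-deploy osd prepare ping:/dev/sdb1             # an existing partition
ceph-deploy osd prepare pong:/var/local/osd0       # a directory on an existing fs
ceph-deploy osd activate ping:/dev/sdb1 pong:/var/local/osd0

and the networks are set via "public network" / "cluster network" in ceph.conf
rather than through a ceph-deploy flag.
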
TIA

Bernhard



___
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com


[ceph-users] rbd mapping fails

2013-08-30 Thread Bernhard Glomm
Hi all,

due to a problem with ceph-deploy I currently use

deb http://gitbuilder.ceph.com/ceph-deb-raring-x86_64-basic/ref/wip-4924/ 
raring main
(ceph version 0.67.2-16-gd41cf86 (d41cf866ee028ef7b821a5c37b991e85cbf3637f))

Now the initialization of the cluster works like a charm,
ceph health is okay, 
just the mapping of the created rbd is failing.

-
root@ping[/1]:~ # ceph osd pool delete kvm-pool kvm-pool 
--yes-i-really-really-mean-it
pool 'kvm-pool' deleted
root@ping[/1]:~ # ceph osd lspools

0 data,1 metadata,2 rbd,
root@ping[/1]:~ # 
root@ping[/1]:~ # ceph osd pool create kvm-pool 1000
pool 'kvm-pool' created
root@ping[/1]:~ # ceph osd lspools
0 data,1 metadata,2 rbd,4 kvm-pool,
root@ping[/1]:~ # ceph osd pool set kvm-pool min_size 2
set pool 4 min_size to 2
root@ping[/1]:~ # ceph osd dump | grep 'rep size'
pool 0 'data' rep size 2 min_size 1 crush_ruleset 0 object_hash rjenkins pg_num 
64 pgp_num 64 last_change 1 owner 0 crash_replay_interval 45
pool 1 'metadata' rep size 2 min_size 1 crush_ruleset 1 object_hash rjenkins 
pg_num 64 pgp_num 64 last_change 1 owner 0
pool 2 'rbd' rep size 2 min_size 1 crush_ruleset 2 object_hash rjenkins pg_num 
64 pgp_num 64 last_change 1 owner 0
pool 4 'kvm-pool' rep size 2 min_size 2 crush_ruleset 0 object_hash rjenkins 
pg_num 1000 pgp_num 1000 last_change 33 owner 0
root@ping[/1]:~ # rbd create atom03.cimg --size 4000 --pool kvm-pool
root@ping[/1]:~ # rbd create atom04.cimg --size 4000 --pool kvm-pool
root@ping[/1]:~ # rbd ls kvm-pool
atom03.cimg
atom04.cimg
root@ping[/1]:~ # rbd --image atom03.cimg --pool kvm-pool info
rbd image 'atom03.cimg':
    size 4000 MB in 1000 objects
    order 22 (4096 KB objects)
    block_name_prefix: rb.0.114d.2ae8944a
    format: 1
root@ping[/1]:~ # rbd --image atom04.cimg --pool kvm-pool info
rbd image 'atom04.cimg':
    size 4000 MB in 1000 objects
    order 22 (4096 KB objects)
    block_name_prefix: rb.0.127d.74b0dc51
    format: 1
root@ping[/1]:~ # rbd map atom03.cimg --pool kvm-pool --id admin
rbd: '/sbin/udevadm settle' failed! (256)
root@ping[/1]:~ # rbd map --pool kvm-pool --image atom03.cimg --id admin 
--keyring /etc/ceph/ceph.client.admin.keyring 
^Crbd: '/sbin/udevadm settle' failed! (2)
root@ping[/1]:~ # rbd map kvm-pool/atom03.cimg --id admin --keyring 
/etc/ceph/ceph.client.admin.keyring 
rbd: '/sbin/udevadm settle' failed! (256)
-

Do I miss something? 
But I think this set of commands worked perfectly with cuttlefish?

TIA

Bernhard


___
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com




[ceph-users] ceph-deploy 1.2.1 ceph-create-keys broken?

2013-08-16 Thread Bernhard Glomm
Hi all,

since ceph-deploy/ceph-create-keys is broken
(see bug 4924)
and mkcephfs is deprecated:

is there a howto for deploying the system without using
either of these tools? (especially not ceph-create-keys,
since that won't stop running without doing anything ;-)

Since I have only 5 instances I would like to set up,
I could do a manual configuration and import that
into my cfengine management, so I wouldn't need to rely on ceph-deploy.

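The kind of manual bootstrap I have in mind for the first monitor, roughly along
the manual-deployment notes in the ceph docs (fsid, host name and address are
illustrative):

ceph-authtool --create-keyring /tmp/ceph.mon.keyring \
    --gen-key -n mon. --cap mon 'allow *'
ceph-authtool --create-keyring /etc/ceph/ceph.client.admin.keyring \
    --gen-key -n client.admin --cap mon 'allow *' --cap osd 'allow *' --cap mds 'allow'
ceph-authtool /tmp/ceph.mon.keyring --import-keyring /etc/ceph/ceph.client.admin.keyring
monmaptool --create --add ping 192.168.242.10 --fsid $(uuidgen) /tmp/monmap
ceph-mon --mkfs -i ping --monmap /tmp/monmap --keyring /tmp/ceph.mon.keyring
service ceph start mon.ping
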
TIA

Bernhard


___
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com


[ceph-users] ceph-deploy missing?

2013-08-15 Thread bernhard glomm
Hi all,

I would like to use ceph in our company and have had some test setups running.
Now, all of a sudden, ceph-deploy is not in the repos anymore.

This is my sources list:

...
deb http://archive.ubuntu.com/ubuntu raring main restricted universe multiverse
deb http://security.ubuntu.com/ubuntu raring-security main restricted universe 
multiverse
deb http://archive.ubuntu.com/ubuntu raring-updates main restricted universe 
multiverse
deb http://ceph.com/debian/ raring main
…

up to last week I had no problem installing ceph-deploy

Any ideas? Why is it not at least in the ceph repo?
Will it come back?

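For what it's worth, this is how I am checking whether any of the configured
repos still carries the package:

apt-get update
apt-cache policy ceph-deploy
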
TIA

Bernhard

 Bernhard Glomm
IT Administration

Phone:   +49 (30) 86880 134
Fax: +49 (30) 86880 100
Skype:   bernhard.glomm.ecologic
   
Ecologic Institut gemeinnützige GmbH | Pfalzburger Str. 43/44 | 10717 Berlin | 
Germany
GF: R. Andreas Kraemer | AG: Charlottenburg HRB 57947 | USt/VAT-IdNr.: 
DE811963464
Ecologic™ is a Trade Mark (TM) of Ecologic Institut gemeinnützige GmbH

___
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com