Re: [ceph-users] Perl Bindings for Ceph

2013-10-20 Thread Michael Lowe
1. How about enabling trim/discard support in virtio-SCSI and using fstrim?  
That might work for you.
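
A minimal sketch of what that can look like (image and pool names are placeholders, 
and it assumes a qemu new enough to support discard=unmap):

  # host side: attach the rbd image via virtio-scsi with discard passed through
  qemu-system-x86_64 ... \
    -device virtio-scsi-pci,id=scsi0 \
    -drive format=raw,file=rbd:rbd/myvm,if=none,id=drive0,discard=unmap \
    -device scsi-hd,drive=drive0,bus=scsi0.0

  # guest side: hand the unused blocks back to the cluster
  fstrim -v /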

4.  Well, you can mount them rw in multiple VMs with predictably bad results, 
so I don't see any reason why you could not specify ro as a mount option and do 
ok.
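
For the read-only case, a rough example (image name made up) is to attach the 
same image to several guests with the drive marked read-only, and mount it ro 
inside each guest:

  qemu-system-x86_64 ... \
    -drive format=raw,file=rbd:rbd/install-media,media=cdrom,readonly=on

  # inside each guest
  mount -o ro /dev/sr0 /mnt/iso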

Sent from my iPad

 On Oct 21, 2013, at 12:09 AM, Jon three1...@gmail.com wrote:
 
 Hello,
 
 Are there any current Perl modules for Ceph?  I found a thread [1] from 2011 
 with a version of Ceph::RADOS, but it only has functions to deal with pools, 
 and the ->list_pools function causes a seg. fault.
 
 I'm interested in controlling Ceph via script / application and I was 
 wondering [hoping] if anyone else had a current module before I go 
 reinventing the wheel.  (My wheel would likely leverage calls to system() and 
 use the rbd/rados/ceph functions directly initially...  I'm not proficient 
 with C/XS)
 
 I've been primarily using OpenNebula, though I've evaluated OpenStack, 
 CloudStack, and even Eucalyptus and they all seem to meet ($x-1)/$x criteria 
 (one project seems to do one thing better than another, but they all are 
 missing one feature that another project has--this is a generalization, but 
 this isn't the OpenNebula mailing list).  What I'm looking to do at the 
 moment is simplify my lab deployments.  My current workflow only takes 10 
 minutes or so to deploy a new vm:  
 
 1) dump xml of existing vm (usually the base vm that the template was 
 created from, I actually have a template that I just copy and modify now)
 2) clone rbd to new vm (usually using vmname)
 3) edit vm template to reflect new values
-- change name of vm to the new vmname
-- remove specific identifiers (MAC, etc. unnecessary when copying 
 template)
-- update disk to reflect new rbd
 4) login to console and pre-provision vm
   -- update system
   -- assign hostname
   -- generate ssh keys (I remove the sshd host keys when sysprepping for 
 cloning; Ubuntu, I know for sure, doesn't regenerate the keys on boot, I 
 _THINK_ RHEL might)
 
 I actually already did this work on automating deployments[2], but that was 
 back when I was primarily using qcow2 images.  It leverages guestfish to do 
 all of the vm management (setting IP, hostname, generating ssh host keys, 
 etc).  But now I want to leverage my Ceph cluster for images.
 
 Couple of tangentially related questions that I don't think warrant a whole 
 thread:
 
 1) Is it possible to zero and compress rbds?  (I like to use virt-sysprep and 
 virt-sparsify to prepare my images; back when I was using qcow images, I 
 would compress them before cloning.)
 2) Has anyone used virt-sysprep|virt-sparsify against rbd images?  I suppose 
 if I'm creating a template image, I could create the qcow image then convert 
 it to an rbd, but qemu-img creates format 1 images.
 3) Anyone know of a way to create format 2 images with qemu-img?  When I 
 specify -O rbd, qemu-img seg faults, and rbd2 is an invalid format (one 
 possible workaround is sketched after this list).
 4) Is it possible to mount an RBD to multiple vms as readonly?  I'm thinking 
 like readonly iso images converted to rbds? (is it even possible to convert 
 an iso to an image?)
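 
 A rough sketch of one way to approach 2)-4) (pool and image names are 
 placeholders, and the exact flags depend on the qemu-img and rbd versions in use):
 
   # write a prepared qcow2 template straight into RADOS via qemu-img's rbd support
   qemu-img convert -f qcow2 -O raw template.qcow2 rbd:rbd/template
 
   # create a format 2 image directly with the rbd tool instead of qemu-img
   # (newer releases spell the flag --image-format, older ones --format)
   rbd create --size 10240 --image-format 2 rbd/template2
 
   # an ISO can be imported as an ordinary image and attached read-only
   rbd import ubuntu-12.04.iso rbd/ubuntu-12.04-iso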
 
 
 Thanks for your help.
 
 Best Regards,
 Jon A
 
 [1]  http://www.spinics.net/lists/ceph-devel/msg04147.html
 [2]  https://github.com/three18ti/PrepVM-App
___
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com


Re: [ceph-users] Full OSD with 29% free

2013-10-14 Thread Michael Lowe
How fragmented is that file system?
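
One quick way to check (a sketch, assuming the OSD filesystem is the /dev/sdc1 
shown below) is xfs_db's fragmentation and free-space reports; xfs_fsr can then 
reorganize extents online if needed:

  xfs_db -r -c frag /dev/sdc1            # file fragmentation factor
  xfs_db -r -c freesp /dev/sdc1          # how badly the free space is fragmented
  xfs_fsr -v /var/lib/ceph/osd/ceph-1    # optional online defragmentation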

Sent from my iPad

 On Oct 14, 2013, at 5:44 PM, Bryan Stillwell bstillw...@photobucket.com 
 wrote:
 
 This appears to be more of an XFS issue than a ceph issue, but I've
 run into a problem where some of my OSDs failed because the filesystem
 was reported as full even though there was 29% free:
 
 [root@den2ceph001 ceph-1]# touch blah
 touch: cannot touch `blah': No space left on device
 [root@den2ceph001 ceph-1]# df .
 Filesystem      1K-blocks      Used Available Use% Mounted on
 /dev/sdc1       486562672 342139340 144423332  71% /var/lib/ceph/osd/ceph-1
 [root@den2ceph001 ceph-1]# df -i .
 Filesystem        Inodes   IUsed    IFree IUse% Mounted on
 /dev/sdc1       60849984 4097408 56752576    7% /var/lib/ceph/osd/ceph-1
 [root@den2ceph001 ceph-1]#
 
 I've tried remounting the filesystem with the inode64 option like a
 few people recommended, but that didn't help (probably because it
 doesn't appear to be running out of inodes).
 
 This happened while I was on vacation and I'm pretty sure it was
 caused by another OSD failing on the same node.  I've been able to
 recover from the situation by bringing the failed OSD back online, but
 it's only a matter of time until I'll be running into this issue again
 since my cluster is still being populated.
 
 Any ideas on things I can try the next time this happens?
 
 Thanks,
 Bryan
___
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com


Re: [ceph-users] kvm live migrate with ceph

2013-10-14 Thread Michael Lowe
I live migrate all the time using the rbd driver in qemu, no problems.  Qemu 
will issue a flush as part of the migration so everything is consistent.  It's 
the right way to use ceph to back VMs.  I would strongly recommend against a 
network file system approach.  You may want to look into format 2 rbd images; 
the cloning and writable snapshots may be what you are looking for.
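
A rough sketch of that format 2 workflow (names are placeholders, and it assumes 
a librbd/qemu recent enough to understand format 2 images):

  rbd create --size 20480 --image-format 2 rbd/golden
  # ... install and prepare the golden image ...
  rbd snap create rbd/golden@base
  rbd snap protect rbd/golden@base
  rbd clone rbd/golden@base rbd/vm01-disk   # copy-on-write child for a new guest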

Sent from my iPad

 On Oct 14, 2013, at 5:37 AM, Jon three1...@gmail.com wrote:
 
 Hello,
 
 I would like to live migrate a VM between two hypervisors.  Is it possible 
 to do this with a rbd disk, or should the vm disks be created as qcow images 
 on a CephFS/NFS share (is it possible to do clvm over rbds? or GlusterFS over 
 rbds?) and point kvm at the network directory?  As I understand it, rbds 
 aren't cluster aware, so you can't mount an rbd on multiple hosts at once, 
 but maybe libvirt has a way to handle the transfer...?  I like the idea of 
 master or golden images where guests write any changes to a new image; I 
 don't think rbds are able to handle copy-on-write in the same way kvm does, so 
 maybe a clustered filesystem approach is the ideal way to go.
 
 Thanks for your input.  I think I'm just missing some piece... I just don't 
 grok it...
 
 Best Regards,
 Jon A
 
___
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com


Re: [ceph-users] monitor failover of ceph

2013-10-11 Thread Michael Lowe
You must have a quorum, i.e. MORE than 50% of your monitors functioning, for the 
cluster to function.  With one of two you only have 50%, which isn't enough, and 
i/o stops.
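
In other words, quorum needs floor(n/2) + 1 monitors: with 2 mons that is 2, so 
losing either one stops the cluster, while with 3 mons it is still 2, so one can 
fail.  You can check the current quorum with, for example:

  ceph quorum_status
  ceph mon stat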

Sent from my iPad

 On Oct 11, 2013, at 11:28 PM, 飞 duron...@qq.com wrote:
 
 hello, I am a new user of ceph.
 I have built a ceph test environment for block storage.
 I have 2 OSDs and 2 monitors.  Apart from the failover test, the other tests 
 are normal.
 When I perform the failover test, if I stop one OSD the cluster is OK,
 but if I stop one monitor the whole cluster dies.  Why?  Thank you.
 
 my configure file :
 ; global
 [global]
  ; enable secure authentication
  ; auth supported = cephx
  
  auth cluster required = none
  auth service required = none
  auth client required = none
  
  mon clock drift allowed = 3
  
 ;  monitors
 ;  You need at least one.  You need at least three if you want to
 ;  tolerate any node failures.  Always create an odd number.
 [mon]
  mon data = /home/ceph/mon$id
  ; some minimal logging (just message traffic) to aid debugging
  debug ms = 1
 [mon.0]
  host = sheepdog1
  mon addr = 192.168.0.19:6789
  
 [mon.1]
  mon data = /var/lib/ceph/mon.$id
 host = sheepdog2
 mon addr = 192.168.0.219:6789
  
 ; mds
 ;  You need at least one.  Define two to get a standby.
 [mds]
  ; where the mds keeps its secret encryption keys
  keyring = /home/ceph/keyring.mds.$id
 [mds.0]
  host = sheepdog1
 ; osd
 ;  You need at least one.  Two if you want data to be replicated.
 ;  Define as many as you like.
 [osd]
 ; This is where the btrfs volume will be mounted.  
  osd data = /home/ceph/osd.$id
  osd journal = /home/ceph/osd.$id/journal
  osd journal size = 512
  ; working with ext4
  filestore xattr use omap = true
  
  ; solve rbd data corruption
  filestore fiemap = false
 
 [osd.0]
 host = sheepdog1
 osd data = /var/lib/ceph/osd/diskb
 osd journal = /var/lib/ceph/osd/diskb/journal
 [osd.2]
  host = sheepdog2
  osd data = /var/lib/ceph/osd/diskc
  osd journal = /var/lib/ceph/osd/diskc/journal
 
___
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com


Re: [ceph-users] Expanding ceph cluster by adding more OSDs

2013-10-09 Thread Michael Lowe
There used to be, can't find it right now.  Something like 'ceph osd set pg_num 
num' then 'ceph osd set pgp_num num' to actually move your data into the 
new pg's.  I successfully did it several months ago, when bobtail was current.
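
For reference, the per-pool form that appears later in this digest (and in the 
docs) is along these lines:

  ceph osd pool set <pool> pg_num <number>
  ceph osd pool set <pool> pgp_num <number>   # makes the new PGs count for placement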

Sent from my iPad

 On Oct 9, 2013, at 10:30 PM, Guang yguan...@yahoo.com wrote:
 
 Thanks Mike.
 
 Is there any documentation for that?
 
 Thanks,
 Guang
 
 On Oct 9, 2013, at 9:58 PM, Mike Lowe wrote:
 
 You can add PGs,  the process is called splitting.  I don't think PG 
 merging, the reduction in the number of PGs, is ready yet.
 
 On Oct 8, 2013, at 11:58 PM, Guang yguan...@yahoo.com wrote:
 
 Hi ceph-users,
 Ceph recommends that the number of PGs for a pool be (100 * OSDs) / Replicas.  Per my 
 understanding, the number of PGs for a pool stays fixed even as we scale 
 out / in the cluster by adding / removing OSDs.  Does that mean that if we double 
 the OSD count, the PG number for the pool is no longer optimal and there 
 is no chance to correct it?
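 
 (As a worked example of that rule of thumb: 20 OSDs with 2 replicas gives 
 100 * 20 / 2 = 1000 PGs, usually rounded up to the next power of two, i.e. 1024.)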
 
 
 Thanks,
 Guang
 
___
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com


Re: [ceph-users] Loss of connectivity when using client caching with libvirt

2013-10-02 Thread Michael Lowe
FWIW: I use a qemu 1.4.2 that I built with a debian package upgrade script and 
the stock libvirt from raring.



 On Oct 2, 2013, at 10:59 PM, Josh Durgin josh.dur...@inktank.com wrote:
 
 On 10/02/2013 06:26 PM, Blair Bethwaite wrote:
 Josh,
 
 On 3 October 2013 10:36, Josh Durgin josh.dur...@inktank.com wrote:
 The version base of qemu in precise has the same problem. It only
 affects writeback caching.
 
 You can get qemu 1.5 (which fixes the issue) for precise from ubuntu's
 cloud archive.
 
 Thanks for the pointer! I had not realised there were newer than 1.0
 qemu-kvm packages available anywhere for Precise. We'll definitely look
 into that for other reasons too, especially better live-migration.
 
 I know it's not specifically Ceph related, but are you aware of any
 problems with these against Grizzly?
 
 I'm not aware of any. libvirt maintains a stable interface so it
 shouldn't be an issue to use newer versions of qemu and libvirt
 with older versions of openstack. If you upgrade qemu, you may
 need the newer libvirt in the cloud archive as well.
 
 Josh
___
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com


[ceph-users] Osd crash and misplaced objects after rapid object deletion

2013-07-23 Thread Michael Lowe
On two different occasions I've had an OSD crash and misplace objects when 
rapid object deletion was triggered by discard/trim operations from the 
qemu rbd driver.  Has anybody else had this kind of trouble?  The objects are 
still on disk, just not in a place the OSD considers valid. 
___
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com


Re: [ceph-users] Problem with data distribution

2013-07-03 Thread Michael Lowe
Did you also set pgp_num?  As I understand it, the newly created PGs aren't 
considered for placement until you increase pgp_num, aka the effective PG number.
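
A quick way to check and fix that (using the data pool from the message below; 
the target of 1800 matches the pg_num chosen there):

  ceph osd pool get data pg_num
  ceph osd pool get data pgp_num    # if this still shows the old value, the new PGs aren't placed yet
  ceph osd pool set data pgp_num 1800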

Sent from my iPad

On Jul 3, 2013, at 11:54 AM, Pierre BLONDEAU pierre.blond...@unicaen.fr wrote:

 On 03/07/2013 11:12, Pierre BLONDEAU wrote:
 On 01/07/2013 19:17, Gregory Farnum wrote:
 On Mon, Jul 1, 2013 at 10:13 AM, Alex Bligh a...@alex.org.uk wrote:
 
 On 1 Jul 2013, at 17:37, Gregory Farnum wrote:
 
 Oh, that's out of date! PG splitting is supported in Cuttlefish:
 ceph osd pool set foo pg_num number
 http://ceph.com/docs/master/rados/operations/control/#osd-subsystem
 
 Ah, so:
   pg_num: The placement group number.
 means
   pg_num: The number of placement groups.
 
 Perhaps worth demystifying for those hard of understanding such as
 myself.
 
 I'm still not quite sure how that relates to pgp_num.
 
 Pools are sharded into placement groups. That's the pg_num. Those
 placement groups can be placed all independently, or as if there were
 a smaller number of placement groups (this is so you can double the
 number of PGs but not move any data until the splitting is done).
 -Greg
 
 Hi,
 
 Thank you very much for your answer.  Sorry for the late reply, but a
 modification of a 67T cluster takes a long time ;)
 
 Actually my PG number was far too low:
 
 ceph osd pool get data pg_num
 pg_num: 48
 
 As I'm not sure what replication level I will set, I changed the
 number of PGs to 1800:
 ceph osd pool set data pg_num 1800
 
 But the placement is still uneven, especially on the machine
 where I had a full OSD.  I now have two OSDs on this machine at the limit
 and I cannot write to the cluster:
 
 jack
 67 - 67% /var/lib/ceph/osd/ceph-6
 86 - 86% /var/lib/ceph/osd/ceph-8
 85 - 77% /var/lib/ceph/osd/ceph-11
 ?  - 66% /var/lib/ceph/osd/ceph-7
 47 - 47% /var/lib/ceph/osd/ceph-10
 29 - 29% /var/lib/ceph/osd/ceph-9
 
 joe
 86 - 77% /var/lib/ceph/osd/ceph-15
 67 - 67% /var/lib/ceph/osd/ceph-13
 95 - 96% /var/lib/ceph/osd/ceph-14
 92 - 95% /var/lib/ceph/osd/ceph-17
 86 - 87% /var/lib/ceph/osd/ceph-12
 20 - 20% /var/lib/ceph/osd/ceph-16
 
 william
 68 - 86% /var/lib/ceph/osd/ceph-0
 86 - 86% /var/lib/ceph/osd/ceph-3
 67 - 61% /var/lib/ceph/osd/ceph-4
 79 - 71% /var/lib/ceph/osd/ceph-1
 58 - 58% /var/lib/ceph/osd/ceph-18
 64 - 50% /var/lib/ceph/osd/ceph-2
 
 ceph -w :
 2013-07-03 10:56:06.610928 mon.0 [INF] pgmap v174071: 1928 pgs: 1816
 active+clean, 84 active+remapped+backfill_toofull, 9
 active+degraded+backfill_toofull, 19
 active+degraded+remapped+backfill_toofull; 300 TB data, 45284 GB used,
 21719 GB / 67004 GB avail; 15EB/s rd, 15EB/s wr, 15Eop/s;
 9975324/165229620 degraded (6.037%);  recovering 15E o/s, 15EB/s
 2013-07-03 10:56:08.404701 osd.14 [WRN] OSD near full (95%)
 2013-07-03 10:56:29.729297 osd.17 [WRN] OSD near full (94%)
 
 And I do not understand why OSDs 16 and 19 are hardly used.
 
 Regards
 Hi,
 
 I made a mistake: when I restored the weight of all OSDs to 1, I forgot 
 osd.15.
 
 After fixing this mistake, only one OSD is full and the ceph 
 message is a little bit different:
 
 joe
 77 - 86% /var/lib/ceph/osd/ceph-15
 95 - 85% /var/lib/ceph/osd/ceph-17
 
 2013-07-03 17:38:16.700846 mon.0 [INF] pgmap v177380: 1928 pgs: 1869 
 active+clean, 28 active+remapped+backfill_toofull, 9 
 active+degraded+backfill_toofull, 19 
 active+degraded+remapped+backfill_toofull, 3 active+clean+scrubbing+deep; 221 
 TB data, 45284 GB used, 21720 GB / 67004 GB avail; 4882468/118972792 degraded 
 (4.104%)
 2013-07-03 17:38:20.813192 osd.14 [WRN] OSD near full (95%)
 
 Can I change the default full ratio for OSDs?
 If so, would that help ceph move some PGs from osd.14 onto another OSD, 
 osd.16 for example?
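 
 For what it's worth, on releases of that era the thresholds could be raised 
 temporarily with something along these lines (only as a stopgap, to regain 
 enough headroom for the data to rebalance):
 
   ceph pg set_nearfull_ratio 0.90
   ceph pg set_full_ratio 0.97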
 
 Regards
 
 -- 
 --
 Pierre BLONDEAU
 Administrateur Systèmes & réseaux
 Université de Caen
 Laboratoire GREYC, Département d'informatique
 
 tel: 02 31 56 75 42
 bureau: Campus 2, Science 3, 406
 --
___
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com


Re: [ceph-users] Drive replacement procedure

2013-06-24 Thread Michael Lowe
That's where 'ceph osd set noout' comes in handy.
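
A minimal sketch of how that fits into a drive swap:

  ceph osd set noout      # keep the cluster from marking the down OSD out and rebalancing
  # ... stop the OSD, replace the drive, bring the OSD back up ...
  ceph osd unset noout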



On Jun 24, 2013, at 7:28 PM, Nigel Williams nigel.willi...@utas.edu.au wrote:

 On 25/06/2013 5:59 AM, Brian Candler wrote:
 On 24/06/2013 20:27, Dave Spano wrote:
 Here's my procedure for manually adding OSDs.
 
 The other thing I discovered is not to wait between steps; some changes 
 result in a new crush map, which then triggers replication.  You want to speed 
 through the steps so the cluster does not waste time moving objects around to 
 meet the replica requirements until you have finished the crush map changes.
 
 
 
___
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com


Re: [ceph-users] RBD vs RADOS benchmark performance

2013-05-11 Thread Michael Lowe
I believe that this is fixed in the most recent versions of libvirt; sheepdog 
and rbd had been erroneously marked as unsafe for live migration.

http://libvirt.org/git/?p=libvirt.git;a=commit;h=78290b1641e95304c862062ee0aca95395c5926c

Sent from my iPad

On May 11, 2013, at 8:36 AM, Mike Kelly pi...@pioto.org wrote:

 (Sorry for sending this twice... Forgot to reply to the list)
 
 Is rbd caching safe to enable when you may need to do a live migration of the 
 guest later on?  It was my understanding that it wasn't, and that libvirt 
 prevented you from doing the migration if it knew about the caching setting.
 
 If it isn't, is there anything else that could help performance? Like, some 
 tuning of block size parameters for the rbd image or the qemu
 
 On May 10, 2013 8:57 PM, Mark Nelson mark.nel...@inktank.com wrote:
 On 05/10/2013 07:21 PM, Yun Mao wrote:
 Hi Mark,
 
 Given the same hardware, optimal configuration (I have no idea what that
 means exactly but feel free to specify), which is supposed to perform
 better, kernel rbd or qemu/kvm? Thanks,
 
 Yun
 
 Hi Yun,
 
 I'm in the process of actually running some tests right now.
 
 In previous testing, it looked like kernel rbd and qemu/kvm performed about 
 the same with cache off.  With cache on (in cuttlefish), small sequential 
 write performance improved pretty dramatically vs without cache.  Large 
 write performance seemed to take more concurrency to reach peak performance, 
 but ultimately aggregate throughput was about the same.
 
 Hopefully I should have some new results published in the near future.
 
 Mark
 
 
 
 On Fri, May 10, 2013 at 6:56 PM, Mark Nelson mark.nel...@inktank.com wrote:
 
 On 05/10/2013 12:16 PM, Greg wrote:
 
 Hello folks,
 
 I'm in the process of testing CEPH and RBD. I have set up a small
 cluster of  hosts, each running a MON and an OSD, with both journal and
 data on the same SSD (ok this is stupid, but this is simple to verify the
 disks are not the bottleneck for 1 client). All nodes are connected on a
 1Gb network (no dedicated network for OSDs, shame on me :).
 
 Summary: the RBD performance is poor compared to the rados benchmark.
 
 A 5-second seq read benchmark shows something like this:
 
   sec Cur ops   started  finished  avg MB/s  cur MB/s  last lat   avg lat
     0       0         0         0         0         0         -         0
     1      16        39        23   91.9586        92  0.966117  0.431249
     2      16        64        48   95.9602       100  0.513435   0.53849
     3      16        90        74   98.6317       104   0.25631   0.55494
     4      11        95        84   83.9735        40   1.80038   0.58712
 Total time run:        4.165747
 Total reads made:      95
 Read size:             4194304
 Bandwidth (MB/sec):    91.220
 
 Average Latency:       0.678901
 Max latency:           1.80038
 Min latency:           0.104719
 
 
 91 MB/s read performance, quite good!
 
 Now the RBD performance:
 
 root@client:~# dd if=/dev/rbd1 of=/dev/null bs=4M count=100
 100+0 records in
 100+0 records out
 419430400 bytes (419 MB) copied, 13.0568 s, 32.1 MB/s
 
 
 There is a 3x performance factor (same for write: ~60M
 benchmark, ~20M
 dd on block device)
 
 The network is ok, the CPU is also ok on all OSDs.
 CEPH is Bobtail 0.56.4, linux is 3.8.1 arm (vanilla release + some
 patches for the SoC being used)
 
 Can you show me the starting point for digging into this ?
 
 
 Hi Greg, First things first, are you doing kernel rbd or qemu/kvm?
 If you are doing qemu/kvm, make sure you are using virtio disks.
 This can have a pretty big performance impact.  Next, are you
 using RBD cache?  With 0.56.4 there are some performance issues with
 large sequential writes if cache is on, but it does provide benefit
 for small sequential writes.  In general RBD cache behaviour has
 improved with Cuttlefish.
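 
 For instance, a hypothetical qemu invocation along these lines (image name made 
 up) gives a virtio disk backed by the qemu rbd driver with caching on:
 
   qemu-system-x86_64 ... \
     -drive format=raw,file=rbd:rbd/testvm,if=virtio,cache=writeback
 
 (Setting rbd cache = true in the [client] section of ceph.conf enables the 
 librbd cache as well.)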
 
 Beyond that, are the pools being targeted by RBD and rados bench
 setup the same way?  Same number of Pgs?  Same replication?
 
 
 
 Thanks!
 
 
 

Re: [ceph-users] Debian Squeeze - Ceph and RBD Kernel Modules Missing

2013-05-05 Thread Michael Lowe
Well, Debian stable is now wheezy, so 



On May 5, 2013, at 7:16 AM, Matt Chipman mrb...@gmail.com wrote:

 
 
 
 
 On Sun, May 5, 2013 at 2:37 PM, Gregory Farnum g...@inktank.com wrote:
 Squeeze is running 2.6.32, and the Ceph filesystem client was first
 merged in 2.6.33 (rbd in 2.6.37 I think). We don't have any backports
 to that far, sorry.
 
 Apart from that, if using the kernel clients you really want to be
 using very up-to-date kernels (eg, new Ubuntu) — somebody else will
 have to speak up about precisely how new. Otherwise I'd try for a
 userspace solution. :)
 -Greg
 Software Engineer #42 @ http://inktank.com | http://ceph.com
 
 
 That's ok, and thanks for the explanation.  I'm thinking it would be a good 
 addition to the 5 minute quickstart to say wheezy is a better bet than 
 squeeze?
 
 cheers,
 
 -Matt
 
 
 
  
___
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com


Re: [ceph-users] HEALTH WARN: clock skew detected

2013-05-05 Thread Michael Lowe
Are you running ntpd?  If so you may need to stop, run ntpdate, and restart 
ntpd.  Sometimes if the clock is too far out of sync ntp won't update the time.
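
A rough sketch of that sequence (service names vary by distro):

  service ntp stop          # or 'service ntpd stop', depending on the distro
  ntpdate pool.ntp.org      # one-time forced sync against your time servers
  service ntp start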

On May 5, 2013, at 8:52 AM, Varun Chandramouli varun@gmail.com wrote:

 Hi All,
 
 I have a cluster of 4 nodes with 1 mds, 3 mons and 4 osds.  Whenever I do ceph 
 health or ceph -s, it shows a health warning saying clock skew is detected on 2 
 of the 3 mons.  When I run a mapreduce application on the cluster, one of the 
 monitors crashes (the one on which the skew is not detected) soon after the 
 application is started.  Sometimes the application completes, sometimes it 
 fails.  I would like to know what this warning means.  Is it responsible for 
 the application failing?  If yes, how do I remove the warning?
 
 Here is my ceph.conf:
 
 [global]
 auth client required = none
 auth cluster required = none
 auth service required = none
 
 [osd]
 osd journal data = 1000
 filestore xattr use omap = true
 
 [mon.a]
 host = lnx147-73
 mon addr = 10.72.147.73:6789
 
 [mon.b]
 host = lnx148-20
 mon addr = 10.72.148.20:6789
 
 [mon.c]
 host = lnx-148-27
 mon addr = 10.72.148.27:6789
 
 [mds.a]
 host = lnx147-73
 
 [osd.0]
 host = lnx147-73
 
 [osd.1]
 host = lnx148-20
 
 [osd.2]
 host = lnx-148-27
 
 [osd.3]
 host = ln148-28
 
 I can mail the mon logs and the output of ceph -w for the duration of the 
 application. 
 
 Regards
 Varun
___
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com


Re: [ceph-users] rbd command error librbd::ImageCtx: error finding header

2013-04-23 Thread Michael Lowe
My initial reaction is that you should use '-p mypool', because rbd defaults to 
the 'rbd' pool.  You are in effect trying to get info about mypool/odm-kvm-img 
from rbd/odm-kvm-img, which doesn't exist.
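
Using the names from the message below, that would be something like:

  rbd -p mypool info odm-kvm-img
  # or, equivalently
  rbd info mypool/odm-kvm-img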

Sent from my iPad

On Apr 23, 2013, at 11:24 PM, Dennis Chen xsc...@tnsoft.com.cn wrote:

 Hi list,
 
 I am using a ceph cluster (version 0.56.4) with all nodes (mon, mds, osd...) 
 deployed on the RHEL 6 distro; the client is based on Ubuntu 12.10.
 Now I am confused by a strange issue; it seems the issue has been asked before 
 (per Google) but without a clear answer.  The specific details are below.
 On the client side, I want to create a rbd image, so I run the commands:
 
 root@~# ceph osd pool create mypool 100 100
 pool 'mypool' created
 
 root@~# rbd ls -p mypool
 odm-kvm-img
 
 root@~# rbd --image odm-kvm-img info
 rbd: error opening image 2013-04-24 10:43:42.800917 7fdb47d76780 -1 
 librbd::ImageCtx: error finding header: (2) No such file or 
 directoryodm-kvm-img:
 (2) No such file or directory
 
 So I tried the following steps according to what I googled:
 
 root@~# rados ls -p mypool
 odm-kvm-img.rbd
 rbd_directory
 root@~# rbd info odm-kvm-img.rbd
 rbd: error opening image 2013-04-24 10:54:19.468770 7f8332dea780 -1 
 librbd::ImageCtx: error finding header: (2) No such file or directory
 odm-kvm-img.rbd: (2) No such file or directory
 
 odm-kvm-img.rbd is shown by the 'rados ls' command, so it's there, but why do I 
 get an error when I run the 'rbd info' command on odm-kvm-img.rbd?  Can anybody 
 help with this?
 
 BRs,
 Dennis
___
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com