[no subject]

2015-11-22 Thread Dong Wu
subscribe ceph-devel


[no subject]

2015-11-13 Thread Guang Yang
Hi Joao,
We hit a problem when trying to add new monitors to an unhealthy
cluster, and I would like to ask for your suggestion.

After adding the new monitor, it started syncing the store and went
into an infinite loop:

2015-11-12 21:02:23.499510 7f1e8030e700 10
mon.mon04c011@2(synchronizing) e5 handle_sync_chunk mon_sync(chunk
cookie 4513071120 lc 14697737 bl 929616 bytes last_key
osdmap,full_22530) v2
2015-11-12 21:02:23.712944 7f1e8030e700 10
mon.mon04c011@2(synchronizing) e5 handle_sync_chunk mon_sync(chunk
cookie 4513071120 lc 14697737 bl 799897 bytes last_key
osdmap,full_3259) v2


We talked earlier this morning on IRC, and at the time I thought it
was because the osdmap epoch kept increasing, which led to this
infinite loop.

I then set the nobackfill/norecovery flags and the osdmap epoch froze;
however, the problem is still there.

While the osdmap epoch is 22531, the switch always happens at
osdmap.full_22530 (as shown by the log above).

Looking at the code on both sides, it looks like this check
(https://github.com/ceph/ceph/blob/master/src/mon/Monitor.cc#L1389) is
always true, and I can confirm from the log that (sp.last_committed <
paxos->get_version()) was false, so presumably sp.synchronizer always
has a next chunk?
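
For reference, here is a toy model of the decision described above; it
is only a sketch reconstructed from the log and the names mentioned in
this mail (last_committed, the paxos version, a has_next_chunk flag),
not the actual Monitor.cc code. The SyncState and next_sync_op names
are made up for the illustration.

  #include <cstdio>

  // Toy model of the provider-side chunk / last-chunk decision.
  struct SyncState {
    unsigned long last_committed;  // provider's paxos version at sync start
    bool has_next_chunk;           // synchronizer still has keys to stream
  };

  enum class SyncOp { CHUNK, LAST_CHUNK };

  static SyncOp next_sync_op(const SyncState &sp, unsigned long paxos_version) {
    // While either condition holds, the provider keeps sending OP_CHUNK.
    if (sp.last_committed < paxos_version || sp.has_next_chunk)
      return SyncOp::CHUNK;
    return SyncOp::LAST_CHUNK;  // only now can the joining monitor finish
  }

  int main() {
    // The situation in the log: paxos is frozen (first condition false), but
    // the synchronizer keeps reporting another chunk, so OP_LAST_CHUNK never
    // gets sent and the joining monitor keeps handling OP_CHUNK messages.
    SyncState sp{14697737, /*has_next_chunk=*/true};
    std::printf("%s\n", next_sync_op(sp, 14697737) == SyncOp::CHUNK
                            ? "OP_CHUNK" : "OP_LAST_CHUNK");
    return 0;
  }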

Does this look familiar to you? Or is there any other troubleshooting
I can try? Thanks very much.

Thanks,
Guang


[no subject]

2015-10-20 Thread maillist_linux
subscribe ceph-devel



[no subject]

2015-10-06 Thread Aakanksha Pudipeddi-SSI
subscribe ceph-devel


[no subject]

2015-09-22 Thread Redynk, Lukasz
subscribe ceph-devel


Intel Technology Poland sp. z o.o.
ul. Slowackiego 173 | 80-298 Gdansk | Sad Rejonowy Gdansk Polnoc | VII Wydzial 
Gospodarczy Krajowego Rejestru Sadowego - KRS 101882 | NIP 957-07-52-316 | 
Kapital zakladowy 200.000 PLN.

This e-mail and any attachments may contain confidential material for the sole 
use of the intended recipient(s). If you are not the intended recipient, please 
contact the sender and delete all copies; any review or distribution by
others is strictly prohibited.


[no subject]

2015-05-16 Thread Haomai Wang
Even if the data comes from /dev/zero, the data crc shouldn't be 0.

I guess the OSD (ARM) isn't doing the crc computation. But from the
code, crc on ARM should be fine.

On Sat, May 16, 2015 at 6:21 PM, huang jun hjwsm1...@gmail.com wrote:
 That always happens; every test hits such errors. And our cluster and
 client running on x86 work fine; we have never seen a bad crc error.


 2015-05-16 17:30 GMT+08:00 Haomai Wang haomaiw...@gmail.com:
 Does this always happen, or only occasionally?

 On Sat, May 16, 2015 at 10:10 AM, huang jun hjwsm1...@gmail.com wrote:
 hi,steve

 2015-05-15 16:36 GMT+08:00 Steve Capper steve.cap...@linaro.org:
 On 15 May 2015 at 00:51, huang jun hjwsm1...@gmail.com wrote:
 hi,all

 Hi HuangJun,


 We run a Ceph cluster on an ARM platform (arm64, Linux kernel 3.14,
 Ubuntu 14.10) and use dd if=/dev/zero of=/mnt/test bs=4M count=125
 to write data. On the OSD side, we get bad data CRC errors.

 The kclient log: (tid=6)
 May 14 17:21:08 node103 kernel: [  180.194312] CPU[0] libceph:
 send_request ffc8d252f000 tid-6 to osd0 flags 36 pg 1.9aae829f req
 data size is 4194304
 May 14 17:21:08 node103 kernel: [  180.194316] CPU[0] libceph: tid-6
 - ffc0702f66c8 to osd0 42=osd_op len 197+0+4194304 -
 libceph: tid-6 front_crc is 388648745 middle_crc is 0 data_crc is 
 3036014994

 The OSD-0 log:
 2015-05-13 08:12:50.049345 7f378d8d8700  0 seq  3 tid 6 front_len 197
 mid_len 0 data_len 4194304
 2015-05-13 08:12:50.049348 7f378d8d8700  0 crc in front 388648745 exp 
 388648745
 2015-05-13 08:12:50.049395 7f378d8d8700  0 crc in middle 0 exp 0
 2015-05-13 08:12:50.049964 7f378d8d8700  0 crc in data 0 exp 3036014994
 2015-05-13 08:12:50.050234 7f378d8d8700  0 bad crc in data 0 != exp 
 3036014994

 Some considerations:
 1) We use the ceph 0.80.7 release version and compiled it on ARM. Does
 this work? Or does ceph's code have a separate ARM branch?

 We did run a Ceph version close to that on 64-bit ARM; I'm checking
 out 0.80.7 now to test.
 In v9.0.0 there is some code to use the optional ARM crc32c
 instructions, but this isn't in 0.80.7.


 2) We wrote 125 objects and only a few of them report a CRC error.
 For the good objects the data_crc is 0 on both the OSD and the
 kclient; for the bad objects the data_crc is non-zero on the kclient,
 but the OSD calculates 0. Since the object data came from /dev/zero,
 I think the data_crc should be 0. Am I right?


 If the initial CRC seed value is non-zero, then the CRC of a buffer
 full of zeros won't be zero.
 So ceph_crc32c(somethingnonzero, zerofilledbuffer, len) will be non-zero.
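
To make the seed point concrete, here is a standalone, bit-at-a-time
CRC32C (Castagnoli) sketch; it only illustrates the seed behaviour and
is not Ceph's ceph_crc32c() implementation (which is table-driven and
may use hardware instructions). The crc32c() helper below is written
just for this example.

  #include <cstddef>
  #include <cstdint>
  #include <cstdio>
  #include <vector>

  // Minimal bit-at-a-time CRC32C (reflected polynomial 0x82F63B78).
  // Illustration only.
  static uint32_t crc32c(uint32_t crc, const uint8_t *data, size_t len) {
    for (size_t i = 0; i < len; ++i) {
      crc ^= data[i];
      for (int b = 0; b < 8; ++b)
        crc = (crc >> 1) ^ (0x82F63B78u & (0u - (crc & 1u)));
    }
    return crc;
  }

  int main() {
    // 4 MiB of zeros, like a block written from /dev/zero.
    std::vector<uint8_t> zeros(4 * 1024 * 1024, 0);
    // With a zero seed, the CRC of all-zero data stays 0 ...
    std::printf("seed 0  : %u\n", crc32c(0, zeros.data(), zeros.size()));
    // ... but with any non-zero seed it is non-zero, as described above.
    std::printf("seed ~0 : %u\n", crc32c(~0u, zeros.data(), zeros.size()));
    return 0;
  }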

 I would like to reproduce this problem here.
 What steps did you take before this error occurred?
 Is this a cephfs filesystem or something on top of an RBD image?
 Which kernel are you running? Is it the one that comes with Ubuntu?
 (If so which package version is it?)

 We use Linux kernel 3.14 (we have only tested on Ubuntu) and Ceph
 v0.80.7. Both cephfs and RBD images show the CRC problem.
 I'm not sure whether it's related to memory, since we tested many
 times but only a few runs reported CRC errors.
 As I mentioned, I suspect a memory fault changed the data, because we
 wrote 125 objects and all the data_crc values are 0 except the
 bad-CRC object's data_crc. Any tips are welcome.

 Cheers,
 --
 Steve



 --
 thanks
 huangjun



 --
 Best Regards,

 Wheat



 --
 thanks
 huangjun



-- 
Best Regards,

Wheat


[no subject]

2015-03-10 Thread Andrew Shewmaker
The following patches are based on the work of Marios Kogias, first
posted in August: http://www.spinics.net/lists/ceph-devel/msg19890.html
This patchset is against HEAD as of March 10th,
commit 5d5b510810e96503b9323b010149f7bd5b45db7c.
It can also be found at https://github.com/agshew/ceph/tree/wip-blkin-v6

Thanks to Josh Durgin again for comments on the V5 patchset.

I think the blkin patchset is looking pretty good at this point.


Outstanding issues:

1) librados will need more general blkin tracing; currently it only has
aio_read_traced() and aio_write_traced() calls

2) some work will need to be done on filtering blkin events/keyvalues


To Do:

 1. push a wip-blkin branch to github.com/ceph and take advantage of gitbuilder 
test/qa
 2. submit a pull request
 3. add Andreas' tracepoints https://github.com/ceph/ceph/pull/2877 using Blkin
and investigate how easy it is to select the level of tracing detail


Changes since V5:

  * put back the MOSDOp encode(features) statement that was accidentally
left out in V5
  * moved OSD daemonize call back to original spot
  * initialized blkin in ceph-mds (and moved all initializations to first patch)
  * updated aio_read_traced() and aio_write_traced() to match non-traced 
versions
  * improved blkin wrapper readability by removing unnecessary stringification

Changes since V4:

  * removed messenger_end trace event
In Pipe::reader(), once the message is enqueued it will be destroyed.
Naive pointer checks don't work here; you can't depend on pointers
being set to null on destruction. It may be possible to wrap the trace
event with m->get() and m->put() to keep the message around, or to put
this trace event in the dispatch paths, but the trace event is simply
removed for now in order to move forward.
  * removed mutex in aio_*_traced() methods
A mutex was carried forward from Marios' original patch while rebasing
when it should have been removed.
  * removed Message::trace_end_after_span
Message::trace_end_after_span did not ever appear to be true, so
it has been removed
  * added asserts for master trace and endpoint checks
Tried to use asserts in more places, but they prevented execution.
Tried to use douts and ldouts instead, but they didn't work.
  * added parens around macro pointer args
parens make it safer to use pointers passed as arguments in a macro

Changes since V2:

  * WITH_BLKIN added to makefile vars when necessary
  * added Blkin build instructions
  * added Zipkin build instructions
  * Blkin wrapper macros do not stringify args any longer.
The macro wrappers will be more flexible/robust if they don't
turn arguments into strings.
  * added missing blkin_trace_info struct prototype to librados.h
  * TrackedOp trace creation methods are virtual, implemented in OpRequest
  * avoid faults due to non-existent traces
Check if osd_trace exists when creating a pg_trace, etc.
Return true only if trace creation was successful.
Use dout() if trace_osd, trace_pg, etc. fail, in order to ease debugging.
  * create trace_osd in ms_fast_dispatch

Changes since V1:
  * split build changes into separate patch
  * conditional build support for blkin (default off)
  * blkin is not a Ceph repo submodule
build and install packages from https://github.com/agshew/blkin.git
Note: rpms don't support babeltrace plugins for use with Zipkin
  * removal of debugging in Message::init_trace_info()

With this patchset Ceph can use Blkin, a library created by
Marios Kogias and others, which enables tracking a specific request
from the time it enters the system at the higher layers until it is
finally served by RADOS.

In general, Blkin implements the tracing semantics described in the Dapper
paper
(http://static.googleusercontent.com/media/research.google.com/el/pubs/archive/36356.pdf)
in order to trace the causal relationships between the different
processing phases that an IO request may trigger. The goal is an end-to-end
visualisation of the request's route through the system, accompanied by
information about the latency of each processing phase. Thanks to LTTng this
can happen with minimal overhead and in real time. To visualize the results,
Blkin was integrated with Twitter's Zipkin (http://twitter.github.io/zipkin/),
a tracing system entirely based on Dapper.
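
As a rough, self-contained illustration of those Dapper-style
semantics: each processing phase opens a span that records timestamped
events and points at its parent span, so the request's path can be
stitched together end to end. The Endpoint and Span names below are
made up for the example; they are not the blkin/LTTng wrappers added
by this patchset.

  #include <cstdio>
  #include <string>

  // Illustrative only: one "span" per processing phase, linked to its parent.
  struct Endpoint { std::string service; };

  struct Span {
    std::string name;
    const Endpoint *ep;
    const Span *parent;
    Span(std::string n, const Endpoint *e, const Span *p = nullptr)
        : name(std::move(n)), ep(e), parent(p) {
      std::printf("start span '%s' on %s (parent: %s)\n", name.c_str(),
                  ep->service.c_str(), parent ? parent->name.c_str() : "none");
    }
    void event(const char *what) const {
      // In a real tracer this would be a timestamped LTTng tracepoint.
      std::printf("  [%s] %s\n", name.c_str(), what);
    }
  };

  int main() {
    Endpoint client{"librados"}, osd{"osd.0"};
    Span io("client write", &client);          // root span for the IO request
    io.event("request issued");
    Span osd_op("osd handle_op", &osd, &io);   // child span on the OSD side
    osd_op.event("op enqueued");
    osd_op.event("op committed");
    io.event("request completed");
    return 0;
  }

In the real patchset the trace context travels with the request, so a
consumer such as Zipkin can join spans emitted by different daemons.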

A short document describing how to test Blkin tracing in Ceph with Zipkin
is in doc/dev/trace.rst


[no subject]

2015-03-09 Thread Joshua Schmid
-BEGIN PGP SIGNED MESSAGE-
Hash: SHA1

subscribe ceph-devel
- -- 
Freundliche Grüße - Kind regards,
Joshua Schmid
Trainee - Storage SAP HANA
SUSE Linux GmbH - Maxfeldstr. 5 - 90409 Nürnberg
-

SUSE Linux GmbH, GF: Felix Imendörffer, Jane Smithard,
Jennifer Guild, Dilip Upmanyu, Graham Norton, HRB 21284 (AG Nürnberg)
-

-BEGIN PGP SIGNATURE-
Version: GnuPG v2

iQIcBAEBAgAGBQJU/c73AAoJEPUnwXO0u5uWxbcP/Rd3Ru9DWHfqBF965/JH7DL8
PQXlrJwPRftFO1guZDEPTAAC8rl0gdDv903Di0R7O0ujbZP/hvKJYCgxzMvpvqWe
opNEkLSKFfsKZgXGNKjuw3IKbpBUnX8EyvxA0NKnmLo/IJ417W6E2GOeO/dd5hgj
xALhatwqeqntVnr1NGmbg+bKejV1Y0iIG6bJ0t9UW1Yx7soAvHmvElg2lzDnA9kb
RkNQIT8PVojnjUfZsKZhgZtHdBKU00qurpojVBVJ+sM1jeZDDB3J5VtTKWDeZPLI
B0WoEHw5b/BNEZGbzEdBsLhJeV6gMKkMM/KKGKQZTZ99FnITt2f2NwlMVlcS3BDn
vSxpvnXgNehhMlm4TxCOVPDZGBFU5Np+R7pgCOi27JDcZm6MnPpAp+e+oI4HRVsj
jLxskhS2srbtXy1w9SHKyNNV4feJ9Pki4q+SKnm8sNbmbyYyTCnCqoB7Ed30GNIT
Xud+K36rIfgRjWz82PC+MkBWPhHmPznwcxd5TdyquGsNKK/XqgdR+Deq3UZ3i5P4
o3dz3J1HN9cw7dIzzXRFPUVXFwjGI3TL7PPLONyFhMAGxK4EGF7VkR4YJFwZHNrH
l2b9Ib2ZbRpe0/Lx/vPCPZsy1x7aDYPh4sOSTevlpB6hiZC2AzGeEDSb7Cjuotfc
h87d2OBSIaRnRL1e2rSl
=D1QL
-END PGP SIGNATURE-


[no subject]

2014-12-12 Thread wanglin

subscribe ceph-devel



[no subject]

2014-10-26 Thread Logan Vig
subscribe ceph-devel


[no subject]

2014-06-13 Thread Mrs Teresa AU
Although you might be nervous about my e-mail, as we have not met
before: my name is Mrs Teresa Au, I work with HSBC Hong Kong; there is
a sum of USD$23,200,000.00 in a business proposal I want to share with
you. It is absolutely risk free; if you are interested, send me a reply
at my private e-mail below: mrs_t...@126.com


Best Regards,
Email: mrs_t...@126.com
Mrs Teresa Au.










[no subject]

2014-05-18 Thread Songjiang Zhao


subscribe ceph-devel
--
___
Songjiang Zhao
songjiangz...@gmail.com


[no subject]

2014-04-16 Thread Ilya Storozhilov
subscribe ceph-devel


[no subject]

2014-01-15 Thread Elite Homes
Apply today for an affordable loan at a 3% interest rate; kindly reply
if interested.


[no subject]

2014-01-11 Thread Songjiang Zhao

subscribe ceph-devel


[no subject]

2013-10-22 Thread COMPANY


UK NATIONAL.pdf
Description: Adobe PDF document


[no subject]

2013-06-17 Thread AFG GTBANK LOAN



Loan Syndication

At AFG Guaranty Trust Bank, we structure credit lines to meet our
clients' specific business requirements and to bring clear added value
to our clients' companies.
A division of AFG Finance and Private Bank plc.

If you are considering a large acquisition or a major project, you may
need a substantial amount of credit. AFG Guaranty Trust Bank can put
together the syndicate that packages the entire loan for you.


As a bank with international reach, we have come to identify loan
syndication as part of our core business, and by pursuing this line
aggressively we have reached a point where we are recognized as a
major player in this area.


Open a current account today with a minimum balance of £500 and get up
to £10,000 as a loan, plus a chance to win the star prize of £500,000
in the save-and-win promo in May. Apply now


with the following information, via lawyer Steven Lee, the account
officer.


FULL NAME;

RESIDENTIAL ADDRESS;

E-MAIL ADDRESS;

PHONE NUMBER;

NEXT OF KIN;

MOTHER'S MAIDEN NAME;

MARITAL STATUS;

OFFICE ADDRESS;

ALTERNATIVE PHONE NUMBER;

TO: bar.stevenlee @ yahoo.com
NOTE: ALL LOANS ARE VALID FOR A 10-YEAR RATE
OFFER ENDS SOON, SO HURRY NOW



[no subject]

2013-06-10 Thread Ta Ba Tuan

subscribe ceph-devel

Hi everyone,

I am TuanTB (full name: Tuan Ta Ba). I'm working on cloud computing;
we are using Ceph, and I'm a new member of the Ceph community, so I
hope to join the ceph-devel mailing list.

Thank you so much,
--TuanTB



[no subject]

2013-05-29 Thread Ta Ba Tuan

subscribe ceph-devel


[no subject]

2012-12-10 Thread Alexandre Maumené
subscribe ceph-devel


[no subject]

2012-11-19 Thread Stefan Priebe
From Stefan Priebe <s.pri...@profihost.ag> # This line is ignored.
From: Stefan Priebe <s.pri...@profihost.ag>
Cc: pve-de...@pve.proxmox.com
Cc: pbonz...@redhat.com
Cc: ceph-devel@vger.kernel.org
Subject: QEMU/PATCH: rbd block driver: fix race between completion and cancel
In-Reply-To:




[no subject]

2012-05-07 Thread Tim Flavin
The new site is great! I like the Ceph documentation; however, I found
a couple of typos. Is this the best place to address them? (Some of the
apparent typos may just be me not understanding what is going on.)



http://ceph.com/docs/master/config-cluster/ceph-conf/

The 'Hardware Recommendations' link near the bottom of the page gives
a 404. Did you mean to point to
http://ceph.com/docs/master/install/hardware-recommendations/ ?


http://ceph.com/docs/master/config-ref/osd-config

For 'osd client message size cap', the default value is 500 MB but
the description lists it as 200 MB.


http://ceph.com/docs/master/api/librbdpy/

The line of code size = 4 * 1024 * 1024  # 4 GiB appears to be
missing a * 1024 (4 * 1024 * 1024 is 4 MiB, not 4 GiB), and the next
line is rbd_inst.create('myimage', 4) when it probably should be
rbd_inst.create('myimage', size). This is repeated several times.


[no subject]

2012-02-03 Thread Masuko Tomoya
Hi, all.

I'm trying to attach an rbd volume to an instance on KVM, but I have
a problem. Could you help me?

---
I tried to attach an rbd volume on ceph01 to an instance on compute1
with the virsh command.

root@compute1:~# virsh attach-device test-ub16 /root/testvolume.xml
error: Failed to attach device from /root/testvolume.xml
error: cannot resolve symlink rbd/testvolume: No such file or directory

/var/log/messages
Feb  3 20:14:48 compute1 libvirtd: 20:14:48.717: 3234: error :
qemuMonitorTextAddDevice:2417 : operation failed: adding
virtio-blk-pci,bus=pci.0,addr=0x9,drive=drive-virtio-disk4,id=virtio-disk4
device failed: Device needs media, but drive is empty#015#012Device
'virtio-blk-pci' could not be initialized#015#012
Feb  3 20:14:48 compute1 libvirtd: 20:14:48.717: 3234: warning :
qemuDomainAttachPciDiskDevice:188 : qemuMonitorAddDevice failed on
file=rbd:rbd/testvolume,if=none,id=drive-virtio-disk4,format=raw
(virtio-blk-pci,bus=pci.0,addr=0x9,drive=drive-virtio-disk4,id=virtio-disk4)
Feb  3 20:14:48 compute1 libvirtd: 20:14:48.717: 3234: error :
virSecurityDACRestoreSecurityFileLabel:143 : cannot resolve symlink
rbd/testvolume: No such file or directory
Feb  3 20:14:48 compute1 libvirtd: 20:14:48.717: 3234: warning :
qemuDomainAttachPciDiskDevice:229 : Unable to restore security label
on rbd/testvolume

There is nothing logged in /var/log/ceph/mon.0.log on host ceph01.
---


My environment is below.
*There are two servers. All servers are Ubuntu 10.10 x86_64.
*ceph01: a single server configured with ceph (version: 0.41-1maverick)
*compute1: KVM hypervisor
 -librados2 and librbd1 packages are installed
 (version: 0.41-1maverick)
 -qemu-kvm is 0.14.0-rc1; I built qemu with rbd enabled.
  The output of 'qemu-img' shows 'rbd' in the supported formats field.
  (I built qemu referring to this page:
  http://ceph.newdream.net/wiki/QEMU-RBD)
 -apparmor is disabled.
 -libvirt is 0.8.8


 -there is a ceph.conf on compute1:
root@compute1:~# ls -l /etc/ceph/
total 20
-rw-r--r-- 1 root root 508 2012-02-03 14:38 ceph.conf
-rw------- 1 root root  63 2012-02-03 17:04 keyring.admin
-rw------- 1 root root  63 2012-02-03 14:38 keyring.bin
-rw------- 1 root root  56 2012-02-03 14:38 keyring.mds.0
-rw------- 1 root root  56 2012-02-03 14:38 keyring.osd.0

=
 -the contents of ceph.conf are below:
root@compute1:~# cat /etc/ceph/ceph.conf
[global]
auth supported = cephx
keyring = /etc/ceph/keyring.bin
[mon]
mon data = /data/data/mon$id
debug ms = 1
[mon.0]
host = ceph01
mon addr = 10.68.119.191:6789
[mds]
keyring = /etc/ceph/keyring.$name
[mds.0]
host = ceph01
[osd]
keyring = /etc/ceph/keyring.$name
osd data = /data/osd$id
osd journal = /data/osd$id/journal
osd journal size = 512
osd class tmp = /var/lib/ceph/tmp
debug osd = 20
debug ms = 1
debug filestore = 20
[osd.0]
host = ceph01
btrfs devs = /dev/sdb1

===
*content of keyring.admin is below
root@compute1:~# cat /etc/ceph/keyring.admin
[client.admin]
key = AQDFeCxPyBlNIRAAxS1DcRHpMXRpcjY/GNMwYg==


===
*output of 'ceph auth list'
root@ceph01:/etc/ceph# ceph auth list
2012-02-03 20:34:59.507451 mon <- [auth,list]
2012-02-03 20:34:59.508785 mon.0 -> 'installed auth entries:
mon.
key: AQDFeCxPiK04IxAAslDBNkrOGKWxcbCh2iysqg==
mds.0
key: AQDFeCxPsJ+LGhAAJ3/rmkAtGXSv/eHh0yXgww==
caps: [mds] allow
caps: [mon] allow rwx
caps: [osd] allow *
osd.0
key: AQDFeCxPoEK+ExAAecD7+tWgpIRoZx2AT7Jwbg==
caps: [mon] allow rwx
caps: [osd] allow *
client.admin
key: AQDFeCxPyBlNIRAAxS1DcRHpMXRpcjY/GNMwYg==
caps: [mds] allow
caps: [mon] allow *
caps: [osd] allow *
' (0)


*xml file is below.
root@compute1:~# cat /root/testvolume.xml
<disk type='network' device='disk'>
  <driver name='qemu' type='raw'/>
  <source protocol='rbd' name='rbd/testvolume'>
    <host name='10.68.119.191' port='6789'/>
  </source>
  <target dev='vde' bus='virtio'/>
</disk>


*testvolume is in the rados pool:
root@compute1:~# qemu-img info rbd:rbd/testvolume
image: rbd:rbd/testvolume
file format: raw
virtual size: 1.0G (1073741824 bytes)
disk size: unavailable


Waiting for reply,

Tomoya.


[no subject]

2011-11-16 Thread Wido den Hollander

When upgrading or installing we do not want to stop or start the init scripts.

This could break upgrades and could also do harmful things we don't want.

Let the sysadmin decide when to (re)start the daemons.