Hi, all:
I am doing some benchmark of rbd.
The platform is on a NAS storage.
CPU: Intel E5640 2.67GHz
Memory: 192 GB
Hard Disk: SATA 250GB x 1, 7200 rpm (H0) + SATA 1TB x 12, 7200 rpm
(H1~H12)
RAID Card: LSI 9260-4i
OS: Ubuntu12.04 with Kernel 3.2.0-24
Network:
Hi everyone,
I did some simple tests in order to figure out whether Ceph has full
direct I/O support.
I don't know why, but when I try to create a 9M file I get an error
from dd, and the same with sg_dd:
$ sudo dd if=/dev/zero of=/mnt/directio bs=9M count=1 oflag=direct
1+0 records in
1+0 records out
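As a side note, one quick way to see which block sizes a given filesystem accepts with O_DIRECT is to probe several sizes in a loop. This is only a sketch, not from the original thread: the scratch file name is made up, it writes to the current directory rather than the mount under test, and some filesystems (tmpfs, some overlays) reject O_DIRECT entirely:

```shell
#!/bin/sh
# Probe which dd block sizes succeed with O_DIRECT on this filesystem.
# 'testfile.direct' is a hypothetical scratch file in the working directory.
for bs in 512 4k 1M 4M 9M; do
    if dd if=/dev/zero of=testfile.direct bs=$bs count=1 oflag=direct 2>/dev/null; then
        echo "bs=$bs: OK"
    else
        echo "bs=$bs: FAILED"
    fi
done
rm -f testfile.direct
```

Running this against the ceph mount point instead of the local directory would show whether the failure is size-dependent or applies to O_DIRECT as a whole.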
Hi Eric!
On 6/13/12 5:06 AM, eric_yh_c...@wiwynn.com wrote:
> Hi, all:
> I am doing some benchmark of rbd.
> The platform is on a NAS storage.
> CPU: Intel E5640 2.67GHz
> Memory: 192 GB
> Hard Disk: SATA 250GB x 1, 7200 rpm (H0) + SATA 1TB x 12, 7200 rpm
> (H1~H12)
> RAID Card:
On Tue, Jun 12, 2012 at 09:46:26PM -0600, Alexandre Oliva wrote:
Hi, Greg,
There's a btrfs regression in 3.4 that's causing a lot of grief to
ceph-on-btrfs users like myself. This small and nice patch cures it.
It's in Linus' master already. I've been running it on top of 3.4.2,
and it
On Wed, 13 Jun 2012, Laszlo Boszormenyi (GCS) wrote:
> On Tue, 2012-06-12 at 10:06 -0700, Sage Weil wrote:
> > On Tue, 12 Jun 2012, Laszlo Boszormenyi (GCS) wrote:
> I can package it if you want. Now my fingers are crossed for libs3 to
> be accepted soon; the freeze for Wheezy is coming soon[1]. Ceph 0.47.2 is
The wip-auth branch has a revamp of the authentication settings.
Currently, there is a single option, 'auth supported', which is an ordered
list of authentication methods (cephx or none) to use. This is somewhat
limiting
This branch replaces that with 3 new settings:
auth cluster required
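For context, a sketch of how those settings would look in ceph.conf — assuming the branch landed with the three names that later shipped (auth cluster required, auth service required, auth client required); the comments are my reading of their roles, not the branch author's wording:

```ini
[global]
    ; daemons must authenticate with each other (daemon-to-daemon traffic)
    auth cluster required = cephx
    ; daemons require clients to authenticate before serving them
    auth service required = cephx
    ; clients require daemons to prove their identity in return
    auth client required = cephx
```

Setting any of these to `none` would relax that leg of the handshake, which is roughly what the single ordered 'auth supported' list could not express.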
We've had some user reports lately on rbd images being broken by
misbehaving clients — namely, rbd image I is mounted on computer A,
computer A starts misbehaving, and so I is mounted on computer B. But
because A is misbehaving it keeps writing to the image, corrupting it
horribly.
To handle this,
Hi Stefan,
to give some hints it would be very helpful if you can describe your
setup a bit.
What hardware is used,
what network cards,
how many interfaces per server,
do you use bonding,
what kind of switching hw
MTU size
network bandwith measurent tool
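On the last point, a common tool for a quick point-to-point bandwidth measurement is iperf. A minimal session between two hosts might look like the following (hostnames and the 10-second duration are made-up examples; it assumes iperf is installed on both ends):

```shell
# On the server node (here called node1):
iperf -s

# On the client node, run a 10-second TCP throughput test against node1:
iperf -c node1 -t 10
```

Running the test in both directions, and once per bonded interface, helps separate NIC, bonding, and switch effects.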
Greetings
Stefan Majer
Greg,
My understanding of Ceph code internals is far too limited to comment on
your specific points, but allow me to ask a naive question.
Couldn't you be stealing a lot of ideas from SCSI-3 Persistent
Reservations? If you had server-side (OSD) persistence of information
that this device is in
On Wed, Jun 13, 2012 at 10:40 AM, Gregory Farnum g...@inktank.com wrote:
> 2) Client fencing. See http://tracker.newdream.net/issues/2531. There
> is an existing blacklist functionality in the OSDs/OSDMap, where you
> can specify an entity_addr_t (consisting of an IP, a port, and a
> nonce — so
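For reference, the blacklist functionality mentioned here can be driven from the ceph CLI. A sketch, assuming a running cluster with an admin keyring (the address and nonce values are made up):

```shell
# Blacklist a client address (IP:port/nonce) so OSDs reject its writes;
# the optional trailing argument is the expiry time in seconds:
ceph osd blacklist add 192.168.1.10:0/3891547619 3600

# List current blacklist entries, and remove one when done:
ceph osd blacklist ls
ceph osd blacklist rm 192.168.1.10:0/3891547619
```

The nonce is what distinguishes two client instances on the same IP and port, which is why fencing by address alone is not enough.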
On Wed, Jun 13, 2012 at 10:40 AM, Gregory Farnum g...@inktank.com wrote:
> 2) Client fencing. See http://tracker.newdream.net/issues/2531. There
You know, I'd be really happy if this could be achieved by means of
removing cephx keys.
On Wednesday, June 13, 2012 at 1:37 PM, Florian Haas wrote:
> Greg,
> My understanding of Ceph code internals is far too limited to comment on
> your specific points, but allow me to ask a naive question.
> Couldn't you be stealing a lot of ideas from SCSI-3 Persistent
> Reservations? If you had
On Thu, 14 Jun 2012, Xiaopong Tran wrote:
> Cool. Will that be merged into master and ready for 0.48 as well?
> cheers
> Xiaopong
Unfortunately this will not make it into 0.48. It needs more careful
testing to make sure we are handling the range of cases correctly, and
there isn't enough
Hi, Mark:
I forgot to mention one thing: I created the rbd image on the same
machine that I test from, so the network latency may be lower than in
a normal setup.
1.
I use ext4 as the backend filesystem, with the following mount options:
data=writeback,noatime,nodiratime,user_xattr
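For illustration, mounting a disk with exactly those options would look like this (the device name and mount point below are hypothetical, not from Eric's setup):

```shell
# Mount an OSD data disk with the options listed above (example paths):
mount -t ext4 -o data=writeback,noatime,nodiratime,user_xattr /dev/sdb1 /srv/osd.0
```

Note that data=writeback weakens ext4's ordering guarantees after a crash, which is worth keeping in mind when comparing benchmark numbers against the default data=ordered.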
2.
I use the default