"fio --size=100m --ioengine=libaio --invalidate=1 --direct=1
--numjobs=10 --rw=read --name=fiojob --blocksize_range=4K-512k
--iodepth=16"
Since size=100m, reads would be entirely cached and, if the hypervisor is
write-back, potentially many writes would never make it to the cluster
either?
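One way to take the page cache (and hypervisor cache) out of the picture is to
make the working set much larger than RAM and use direct random I/O; a rough
sketch along those lines (the device path, size and runtime are placeholders to
adapt):
$ fio --name=fiojob --filename=/dev/vdb --size=20g --direct=1 \
      --invalidate=1 --ioengine=libaio --rw=randread --bs=4k \
      --iodepth=16 --numjobs=10 --runtime=120 --time_based --group_reporting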
Also some of the keys can take some time to appear.
On 2013-12-18 12:54, Alfredo Deza wrote:
On Tue, Dec 17, 2013 at 9:40 PM, Yuri Weinstein
wrote:
Hello all,
I am a new Ceph user and trying to use ceph-deploy, going through all the
steps on this page -
http://ceph.com/docs/master/start/quick-ceph-d
The OSD can be stopped from the host directly,
sudo stop ceph-osd id=3
I don't know if that's the 'proper' way, mind.
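For what it's worth, on Ubuntu/Upstart the sequence I'd expect to work is
something like this (id=3 is just an example; noout stops the cluster
rebalancing while the OSD is down):
$ ceph osd set noout          # don't start re-replicating while we work on it
$ sudo stop ceph-osd id=3     # stop the daemon on the host that carries osd.3
$ sudo start ceph-osd id=3    # bring it back afterwards
$ ceph osd unset noout        # resume normal failure handling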
On 2013-12-16 09:40, david.zhang...@gmail.com wrote:
ceph osd start osd.{num}
What SSDs are you using, and is there any under-provisioning on them?
On 2013-12-09 16:06, Greg Poirier wrote:
On Sun, Dec 8, 2013 at 8:33 PM, Mark Kirkwood
wrote:
I'd suggest testing the components separately - try to rule out NIC
(and switch) issues and SSD performance issues, then when you
I still have trouble believing that fast-but-not-super-fast journals
are the main reason for the poor performance observed. Maybe I am
mistaken?
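To rule the components in or out one at a time, something along these lines
should do (hostnames and devices are placeholders, and the dd run writes to the
journal partition, so only use it on a device you can scribble on):
# raw network throughput between two OSD hosts
$ iperf -s                    # on the first host
$ iperf -c osd-host1 -t 30    # on the second host
# SSD behaviour under small synchronous direct writes (what a journal does)
$ sudo dd if=/dev/zero of=/dev/sdX1 bs=4k count=10000 oflag=direct,dsync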
Best regards,
Nicolas Canceill
Scalable Storage Systems
SURFsara (Amsterdam, NL)
On 12/03/2013 03:01 PM, James Pearce wrote:
I would really appreciate
Most servers also have internal SD card slots. There are SD cards
advertising >90MB/s, though I haven't tried them as OS boot personally.
On 2013-12-06 11:14, Gandalf Corvotempesta wrote:
2013/12/6 Sebastien Han :
@James: I think that Gandalf’s main idea was to save some
costs/space on the se
Native block support is coming for Hyper-V next year, we hope... it would
be great to hear from Inktank about anything that can be shared publicly on
that front :)
On 2013-12-05 22:02, James Harper wrote:
Can someone point me to directions on how to mount a Ceph storage
volume on Linux as well as Wi
Another option is to run journals on individually presented SSDs, in a
5:1 ratio (spinning-disk:ssd) and have the OS somewhere else. Then the
failure domain is smaller.
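For reference, ceph-deploy can lay that out directly; a sketch assuming sdb-sdf
are the spinners and sdg is the journal SSD, already carved into five
partitions:
$ ceph-deploy osd prepare node1:sdb:/dev/sdg1 node1:sdc:/dev/sdg2 \
      node1:sdd:/dev/sdg3 node1:sde:/dev/sdg4 node1:sdf:/dev/sdg5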
Ideally implement some way to monitor SSD write life SMART data - at
least it gives a guide as to device condition compared
Background info here:
http://en.wikipedia.org/wiki/Write_amplification
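For the monitoring, smartctl exposes the raw counters; the attribute names
below are what Samsung 840-class drives report, other models will differ:
$ sudo smartctl -A /dev/sdg | egrep 'Wear_Leveling_Count|Total_LBAs_Written|Reallocated_Sector'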
Hope that helps.
On 2013-12-03 15:10, Loic Dachary wrote:
On 03/12/2013 13:38, James Pearce wrote:
Since the journal partitions are generally small, it shouldn't need
to be.
For example implement with substantial under-provision
I would really appreciate it if someone could:
- explain why the journal setup is way more important than striping
settings;
I'm not sure if it's what you're asking, but any write must be
physically written to the journal before the operation is acknowledged.
So the overall cluster performa
still a good idea - enabling the discard mount
option is generally counter-productive since trim is issued way too
often, destroying performance (in my testing).
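i.e. rather than mounting with discard, batch the trims up; e.g. a weekly cron
job (mount point is just an example):
$ cat /etc/cron.weekly/fstrim-ceph
#!/bin/sh
# trim free space in one weekly pass instead of a discard per delete
fstrim -v /mnt/ceph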
On 2013-12-03 13:12, Emmanuel Lacour wrote:
On Tue, Dec 03, 2013 at 12:48:21PM +0000, James Pearce wrote:
How much (%) is left
How much (%) is left unprovisioned on those (840s?) ? And were they
trim'd/secure erased before deployment?
On 2013-12-03 12:45, Emmanuel Lacour wrote:
On Tue, Dec 03, 2013 at 12:38:54PM +0000, James Pearce wrote:
Since the journal partitions are generally small, it shouldn't
Since the journal partitions are generally small, it shouldn't need to
be.
For example implement with substantial under-provisioning, either via
HPA or simple partitions.
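Roughly, either of these does it (the 80% figure and sector count are only
illustrative, and hdparm -N changes what capacity the drive reports, so check
the man page carefully before using the permanent 'p' form):
# HPA: make the drive report fewer sectors than it really has
$ sudo hdparm -N /dev/sdg              # show current/native max sectors first
$ sudo hdparm -Np187500000 /dev/sdg    # permanently cap at ~80% (example value)
# or simply never partition the last ~20%
$ sudo parted -s /dev/sdg mklabel gpt mkpart journal1 0% 80%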
On 2013-12-03 12:18, Loic Dachary wrote:
Hi Ceph,
When an SSD partition is used to store a journal
https://github.com/c
On a related note, is there any discard/trim support in rbd-fuse?
Apparently so (but not in the kernel module unfortunately).
I will try to look into this issue of device cache flush. Do you have
a tracker link for the bug?
How I wish this were a forum! But here is a link:
http://www.spinics.net/lists/ceph-users/msg05966.html
And this:
https://git.kernel.org/cgit/linux/kernel/git/stable/linux-stable.git/commit/?
the disk.
I'll have to get some to see, but a 3.5" 4TB version would surely be
perfect in this case for Ceph OSD!
On 2013-11-29 07:25, James Pearce wrote:
Expensive at launch and limited to Windows as of now, but very
interesting nevertheless. 120GB SSD and 1TB spinning disk -
Did you try moving the journals to separate SSDs?
It was recently discovered that due to a kernel bug/design, journal
writes are translated into device cache flush commands, so with that in mind
I also wonder whether there would be a performance improvement in the case
that journal and OSD
Expensive at launch and limited to Windows as of now, but very
interesting nevertheless. 120GB SSD and 1TB spinning disk - separately
addressable - all in a 2.5" FF:
http://www.wdc.com/en/products/products.aspx?id=1190
I was going to add something to the bug tracker, but it looks to me as
though contributor email addresses all have public (unauthenticated)
visibility? Can this be set in user preferences?
Many thanks!
Does rbd-fuse support cephx? I can't see any reference in the docs:
http://ceph.com/docs/master/man/8/rbd-fuse/
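As far as I can tell it sits on librbd/librados, so the usual cephx settings
apply; a sketch using only the documented -c/-p switches, with the keyring
supplied through ceph.conf rather than guessing at extra command-line options:
# in /etc/ceph/ceph.conf:
#   [client]
#   keyring = /etc/ceph/ceph.client.admin.keyring
$ sudo mkdir -p /mnt/rbdimages
$ sudo rbd-fuse -c /etc/ceph/ceph.conf -p rbd /mnt/rbdimages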
On 2013-11-26 14:40, nicolasc wrote:
Thank you for that quick answer. I will try the fuse client.
Hi, I created an EXT4 partition on an RBD volume (0.72.1) on Ubuntu
(3.8.0-33) and noticed TRIM seems to be unsupported:
$ sudo fstrim /mnt/ceph
fstrim: /mnt/ceph: FITRIM ioctl failed: Operation not supported
It seems this is an open issue: http://tracker.ceph.com/issues/190
Is there some work
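In the meantime you can at least confirm whether the block layer advertises
discard for the mapped device at all (rbd0 here is whichever device your
mapping got):
$ cat /sys/block/rbd0/queue/discard_granularity
$ cat /sys/block/rbd0/queue/discard_max_bytes    # 0 means no TRIM support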
e_attribute *attr,
sd_print_sense_hdr(sdkp, &sshdr);
return -EINVAL;
}
+out:
revalidate_disk(sdkp->disk);
return count;
}
-- 1.7.10.4
Stefan
Am 25.11.2013 10:59, schrieb James Pearce:
Having a configurable would be ideal. User shoul
On 2013-11-25 21:55, Kyle Bader wrote:
Several people have reported issues with combining OS and OSD
journals
on the same SSD drives/RAID due to contention.
I wonder whether this has been due to the ATA write cache flush command
bug that was found yesterday. It would seem to explain why a
c
Having a configurable would be ideal. The user should be made aware of
the need for super-caps via documentation in that case.
Quickly eye-balling the code... can this be patched via journaller.cc
for testing?
2) Can't grow once you reach the hard limit of 14TB, and if you have
multiple such machines, then fragmentation becomes a problem
3) might have the risk of 14TB partition corruption wiping out all
your shares
Is the 14TB limit due to an EXT3/4 recommendation (or implementation limit)?
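As far as I know it is: without the 64bit feature the ext4 format tops out
around 16TiB, and going past that needs a reasonably recent e2fsprogs; a
sketch (device path is a placeholder, version requirement from memory):
$ mkfs.ext4 -O 64bit /dev/vg0/bigvol    # 64bit feature lifts the ~16TiB ceiling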
Do you have journals on separate disks (SSD, preferably)?
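Quick way to check where they currently live - each OSD's journal is a path
(or symlink) under its data directory:
$ readlink -f /var/lib/ceph/osd/ceph-*/journal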
On 2013-11-15 10:14, Dnsbed Ops wrote:
Hello,
We plan to run Ceph as block storage for OpenStack, but from
testing we found the IOPS are slow.
Our apps primarily use the block storage for saving logs (e.g.,
nginx's access logs).
How
On 2013-11-14 19:59, Dimitri Maziuk wrote:
CephFS is in fact one of Ceph's big selling points,
IMO the issue is more that since it's not supported, the Enterprise
sector won't touch it.
On 2013-11-14 19:42, Alfredo Deza wrote:
- osd activate reports failure, but actually succeeds. On the
target, the
last entry in syslog is an error on fd0, so maybe some VMware rescan
issue
I would need some logs here. How does it report a failure?
Thanks, I'll post the logs in the morning
On 2013-11-14 16:08, Gautam Saxena wrote:
I've recently accepted the fact that CephFS is not stable... Samba no
longer working...
Alternatives
1) nfs over rbd (see the sketch after this list)...
2) nfs-ganesha for ceph...
3) create a large CentOS 6.4 VM (e.g. 15 TB, 1 TB for OS using EXT4,
remaining 14 TB using either EXT4 or XTRFS)
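For option 1 (NFS over RBD, as flagged in the list above), a minimal sketch -
the image name, size, mount point and export network are all just examples:
$ rbd create nfsvol --size 1048576     # 1 TB image in the default 'rbd' pool
$ sudo rbd map nfsvol                  # typically /dev/rbd/rbd/nfsvol (or /dev/rbd0)
$ sudo mkfs.ext4 /dev/rbd/rbd/nfsvol
$ sudo mkdir -p /srv/nfsvol && sudo mount /dev/rbd/rbd/nfsvol /srv/nfsvol
$ echo "/srv/nfsvol 10.0.0.0/24(rw,no_subtree_check)" | sudo tee -a /etc/exports
$ sudo exportfs -ra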
On 2013-11-14 17:14, Alfredo Deza wrote:
I think that is just some leftover mishandled connection from the
library that ceph-deploy uses to connect remotely and can be ignored.
Could you share the complete log that got you to this point? I
believe
this bit is coming at the very end, right?
Sorry, should have included some output as well...,
$ ceph-deploy mon create mon0-0
...
Exception in thread Thread-5 (most likely raised during interpreter
shutdown):
Traceback (most recent call last):
File "/usr/lib/python2.7/threading.py", line 810, in
__bootstrap_inner
File
"/usr/lib/py
On 2013-11-13 13:29, Alfredo Deza wrote:
I'm happy to announce a new release of ceph-deploy, the easy
deployment tool for Ceph.
Hi, is 1.3.2 known to work with the 0.72 release? With Ubuntu 13.04 clean
builds it seems to be failing even to generate keys. I've also seen the
breaks mentioned in
when one head out of ten fails: disks can keep working with the
nine remaining heads...
Some info on this at last in the SATA-IO 3.2 spec... "Rebuild
Assist"...
Some info on the command set (SAS & SATA implementations):
http://www.seagate.com/files/staticfiles/docs/pdf/whitepaper/tp620-1-1110us