Looks like it's just following the warnings from libvirt
https://bugzilla.redhat.com/show_bug.cgi?id=751631 heh, but I found one
of the Inktank guys confirming that RBD was safe to add to the whitelist
last year
http://www.redhat.com/archives/libvir-list/2012-July/msg00021.html which
is good
Hello,
With AWS it is possible to do user browser-based uploads using POST [1].
Is it possible to do this with RadosGW? Is the feature supported?
Cheers,
Valery
[1] http://docs.aws.amazon.com/AmazonS3/latest/dev/UsingHTTPPOST.html
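For reference, a minimal sketch of the policy/signature mechanics that [1] describes, written for a generic S3-compatible endpoint; the bucket, keys and expiry below are made-up placeholders, and whether radosgw accepts such browser POSTs is exactly the open question here.

# Sketch: build the base64 policy document and HMAC-SHA1 signature that the
# AWS doc in [1] describes for browser-based POST uploads.
# Access key, secret key, bucket and expiry are hypothetical placeholders.
import base64, hmac, json
from hashlib import sha1

access_key = "ACCESS_KEY_PLACEHOLDER"
secret_key = b"SECRET_KEY_PLACEHOLDER"

policy = {
    "expiration": "2014-02-01T00:00:00Z",
    "conditions": [
        {"bucket": "uploads"},
        ["starts-with", "$key", "user/"],
        {"acl": "private"},
    ],
}

# The policy JSON is base64-encoded, then signed with HMAC-SHA1 of the secret key.
policy_b64 = base64.b64encode(json.dumps(policy).encode("utf-8"))
signature = base64.b64encode(hmac.new(secret_key, policy_b64, sha1).digest())

# These values go into the hidden AWSAccessKeyId, policy and signature
# fields of the HTML upload form described in [1].
print(access_key, policy_b64.decode(), signature.decode())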
--
SWITCH
--
Valery Tschopp, Software
Hi,
I now did an upgrade to dumpling (ceph version 0.67.5
(a60ac9194718083a4b6a225fc17cad6096c69bd1)), but the osd still fails at startup
with a trace.
Here's the trace:
http://paste.ubuntu.com/6755307/
If you need any more info, I will provide it. Can someone please help?
Thanks
Hi there,
We have a production Ceph cluster with 12 OSDs spread over 6 hosts
running version 0.72.2.
From time to time, we're seeing some nasty multi-second latencies
(typically 1-3 seconds, sometimes as high as 5 seconds) inside QEMU VMs
for both read and write loads.
The VMs are still
Hello List,
I'm going to build an RBD cluster this year, with 5 nodes.
I would like to have this kind of configuration for each node:
- 2U
- 2.5 inch drives
os : 2 x SAS drives
journal : 2 x SSD Intel DC S3700 100GB
osd : 10 or 12 x SAS Seagate Savvio 10K.6 900GB
I see on the
Hi Alexandre,
Are you going with a 10Gb network? It’s not an issue for IOPS but more for the
bandwidth. If so read the following:
I personally won’t go with a ratio of 1:6 for the journal. I guess 1:5 (or even
1:4) is preferable.
SAS 10K gives you around 140MB/sec for sequential writes.
So if
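A rough sketch of the arithmetic behind that ratio advice, assuming the ~140 MB/sec SAS figure above and the ~200 MB/sec write speed reported later in this thread for the DC S3700 100G; every write lands on the journal SSD first, so its bandwidth is shared by the OSDs behind it.

# Sketch of the journal-to-OSD ratio arithmetic. Figures are the ones
# quoted in this thread, not measurements of any particular cluster.
ssd_write_mb_s = 200.0    # Intel DC S3700 100GB sequential write (per this thread)
disk_write_mb_s = 140.0   # SAS 10K sequential write (per this thread)

for osds_per_journal in (4, 5, 6):
    per_osd = ssd_write_mb_s / osds_per_journal
    effective = min(disk_write_mb_s, per_osd)
    print("1:%d ratio -> %.0f MB/s of journal bandwidth per OSD "
          "(effective per-OSD write ~%.0f MB/s)"
          % (osds_per_journal, per_osd, effective))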
Hi Sebastian,
On 15.01.2014 13:55, Sebastien Han wrote:
Hi Alexandre,
Are you going with a 10Gb network? It’s not an issue for IOPS but more for the
bandwidth. If so read the following:
I personally won’t go with a ratio of 1:6 for the journal. I guess 1:5 (or even
1:4) is preferable.
SAS
Hello Sebastien,
thanks for your reply.
Are you going with a 10Gb network? It’s not an issue for IOPS but more for
the bandwidth. If so read the following:
Currently it's planned to use a 1Gb network for the public network (VM <-> RBD
cluster).
Maybe 10GbE for the cluster replication network is
On 15/01/2014 14:15, Stefan Priebe wrote:
The DC S3700 isn't as good at sequential writes, but the 520 or 525 series has the
problem that it doesn't have a capacitor. We've used Intel SSDs since
the 160 series, but for ceph we now go for the Crucial m500 (it has a capacitor).
The Crucial m500 has a capacitor, could
I would like to have this kind of configuration for each node:
- 2U
- 2.5 inch drives
os : 2 x SAS drives
journal : 2 x SSD Intel DC S3700 100GB
osd : 10 or 12 x SAS Seagate Savvio 10K.6 900GB
Another option could be to use a Supermicro server;
they have some 2U, 16-disk chassis + one
On 15.01.2014 14:33, Cedric Lemarchand wrote:
On 15/01/2014 14:15, Stefan Priebe wrote:
The DC S3700 isn't as good at sequential writes, but the 520 or 525 series has the
problem that it doesn't have a capacitor. We've used Intel SSDs since
the 160 series, but for ceph we now go for the Crucial m500 (it has
Hi all,
I have to build a new Ceph storage architecture replicated between two
datacenters (for a Disaster Recovery Plan), so basically 2x30 terabits (2x3.75
terabytes).
I can only buy Dell servers.
I planned to use 2x1Gb (LACP) for the replication network and also 2x1Gb (LACP)
for production
Seems that the S3700 has a supercapacitor too
http://www.thessdreview.com/our-reviews/s3700/
The S3700 has power loss protection to keep a sudden outage from corrupting
data, but if the system detects a fault in the two capacitors powering the
system, it will voluntarily disable the volatile cache
Power-Loss Protection: In the rare event that power fails while the
drive is operating, power-loss protection helps ensure that data isn’t
corrupted.
Seems that not all power protected SSDs are created equal:
http://lkcl.net/reports/ssd_analysis.html
The m500 is not tested but the m4 is.
Up
We are using Supermicro 2uTwin nodes.
These have 2 nodes in 2U with 12 disks each.
We use X9DRT-HF+ mainboards, 2x Intel DC S3500 SSD and 10x 2.5 1TB 7.2k HDD
Seagate Constellation.2
They have SAS2008 controllers on board which can be flashed to be a JBOD
controller.
Thanks Robert!
I
On 15.01.2014 15:03, Robert van Leeuwen wrote:
Power-Loss Protection: In the rare event that power fails while the
drive is operating, power-loss protection helps ensure that data isn’t
corrupted.
Seems that not all power protected SSDs are created equal:
On 01/15/2014 07:52 AM, NEVEU Stephane wrote:
Hi all,
I have to build a new Ceph storage architecture replicated between
two datacenters (for a Disaster Recovery Plan), so basically 2x30 terabits
(2x3.75 terabytes).
I can only buy Dell servers.
I planned to use 2x1Gb (LACP) for the
It's also good to note that the m500 has built-in RAIN protection
(basically, diagonal parity at the NAND level). Should be very good for
journal consistency.
Sent from my mobile device. Please excuse brevity and typographical errors.
On Jan 15, 2014 9:07 AM, Stefan Priebe
On 01/15/2014 08:03 AM, Robert van Leeuwen wrote:
Power-Loss Protection: In the rare event that power fails while the
drive is operating, power-loss protection helps ensure that data isn’t
corrupted.
Seems that not all power protected SSDs are created equal:
Hum the Crucial m500 is pretty slow. The biggest one doesn’t even reach 300MB/s.
Intel DC S3700 100G showed around 200MB/sec for us.
Actually, I don’t know the price difference between the Crucial and the Intel,
but the Intel looks more suitable to me, especially after Mark’s comment.
On 15.01.2014 15:34, Sebastien Han wrote:
Hum the Crucial m500 is pretty slow. The biggest one doesn’t even reach 300MB/s.
Intel DC S3700 100G showed around 200MB/sec for us.
where did you get these values from? I have some 960GB ones and they all have
450MB/s write speed. Also in tests like here
On 01/15/2014 08:29 AM, Derek Yarnell wrote:
On 1/15/14, 9:20 AM, Mark Nelson wrote:
I guess I'd probably look at the R520 in an 8 bay configuration with an
E5-2407 and 4 1TB data disks per chassis (along with whatever OS disk
setup you want). That gives you 4 PCIE slots for the extra network
I use the H700 on Dell R815, 4 nodes. No performance problems.
Configuration:
1 SSD Intel 530 - OS and Journal.
5 OSD HDD 600G: certified DELL - WD/HITACHI/SEAGATE.
Replication size = 2. IOPS ~4k (no VM).
On Jan 15, 2014 at 15:47, Alexandre DERUMIER aderum...@odiso.com
wrote:
Hello List,
On 15.01.2014 15:44, Mark Nelson wrote:
On 01/15/2014 08:39 AM, Stefan Priebe wrote:
On 15.01.2014 15:34, Sebastien Han wrote:
Hum the Crucial m500 is pretty slow. The biggest one doesn’t even
reach 300MB/s.
Intel DC S3700 100G showed around 200MB/sec for us.
where did you get this
On 01/15/2014 08:39 AM, Stefan Priebe wrote:
On 15.01.2014 15:34, Sebastien Han wrote:
Hum the Crucial m500 is pretty slow. The biggest one doesn’t even
reach 300MB/s.
Intel DC S3700 100G showed around 200MB/sec for us.
where did you get these values from? I have some 960GB ones and they all have
Sorry I was only looking at the 4K aligned results.
Sébastien Han
Cloud Engineer
“Always give 100%. Unless you're giving blood.”
Phone: +33 (0)1 49 70 99 72
Mail: sebastien@enovance.com
Address : 10, rue de la Victoire - 75009 Paris
Web : www.enovance.com - Twitter : @enovance
Kernel Patch for Intel S3700, Intel 530...
diff --git a/drivers/scsi/sd.c b/drivers//scsi/sd.c
--- a/drivers/scsi/sd.c 2013-09-14 12:53:21.0 +0400
+++ b/drivers//scsi/sd.c 2013-12-19 21:43:29.0 +0400
@@ -137,6 +137,7 @@
char *buffer_data;
struct
The S3700 does not need stuff like this. It internally ignores flushes.
Also there is an upstream one:
https://git.kernel.org/cgit/linux/kernel/git/stable/linux-stable.git/commit/?id=39c60a0948cc06139e2fbfe084f83cb7e7deae3b
Stefan
On 15.01.2014 15:47, Ирек Фасихов wrote:
Kernel Patch for
On 15.01.2014 15:50, Sebastien Han wrote:
However you have to get 480GB, which is ridiculously large for a journal. I
believe they are pretty expensive too.
that's correct, but I just use them instead of SATA or SAS disks in ceph ;-) so
960GB makes sense.
Sébastien Han
Cloud Engineer
Actually, they're very inexpensive as far as SSDs go. The 960GB m500 can
be had on Amazon for $499 US on Prime (as of yesterday anyway).
Sent from my mobile device. Please excuse brevity and typographical errors.
On Jan 15, 2014 9:50 AM, Sebastien Han sebastien@enovance.com wrote:
On 01/15/2014 08:50 AM, Sebastien Han wrote:
However you have to get 480GB, which is ridiculously large for a journal. I
believe they are pretty expensive too.
Looks like the M500 in 480GB capacity is around $300 on Amazon right now
vs. about $300 for a 200GB DC S3700. The M500 has more
However you have to get 480GB, which is ridiculously large for a journal. I
believe they are pretty expensive too.
Sébastien Han
Cloud Engineer
“Always give 100%. Unless you're giving blood.”
Phone: +33 (0)1 49 70 99 72
Mail: sebastien@enovance.com
Address : 10, rue de la Victoire -
We just got in a test chassis of the 4 node in 4U fattwin setup with 10
spinning disks, 2x DC S3700s, 1 system disk, and dual E5 CPUs per node.
The guys in our data center said the thing weighs about 260lbs and
hangs out the back of the rack. :D
Thanks Mark.
What CPU frequency / number of cores
On 01/15/2014 09:14 AM, Alexandre DERUMIER wrote:
We just got in a test chassis of the 4 node in 4U fattwin setup with 10
spinning disks, 2x DC S3700s, 1 system disk, and dual E5 CPUs per node.
The guys in our data center said the thing weighs about 260lbs and
hangs out the back of the rack. :D
On 01/15/2014 09:22 AM, Cedric Lemarchand wrote:
Hello guys,
What about ARM hardware? Did someone already use something like Viridis?
http://www.boston.co.uk/solutions/viridis/viridis-4u.aspx
AFAIK, the Boston solution was based on Calxeda gear...
Hello guys,
What about ARM hardware? Did someone already use something like Viridis?
http://www.boston.co.uk/solutions/viridis/viridis-4u.aspx
Cheers
On 15/01/2014 16:16, Mark Nelson wrote:
On 01/15/2014 09:14 AM, Alexandre DERUMIER wrote:
We just got in a test chassis of the 4 node in 4U
On 15/01/2014 16:25, Mark Nelson wrote:
On 01/15/2014 09:22 AM, Cedric Lemarchand wrote:
Hello guys,
What about ARM hardware? Did someone already use something like Viridis?
http://www.boston.co.uk/solutions/viridis/viridis-4u.aspx
AFAIK, the Boston solution was based on Calxeda gear...
On 01/15/2014 09:35 AM, Cedric Lemarchand wrote:
On 15/01/2014 16:25, Mark Nelson wrote:
On 01/15/2014 09:22 AM, Cedric Lemarchand wrote:
Hello guys,
What about ARM hardware? Did someone already use something like Viridis?
http://www.boston.co.uk/solutions/viridis/viridis-4u.aspx
Afaik,
On 1/15/2014 9:16 AM, Mark Nelson wrote:
On 01/15/2014 09:14 AM, Alexandre DERUMIER wrote:
For the system disk, do you use some kind of internal flash memory disk?
We probably should have, but ended up with I think just a 500GB 7200rpm
disk, whatever was cheapest. :)
If your system has
Hi Derek,
Thanks for the information about the R720xd.
It seems that a 24-drive chassis is also available.
What is the advantage of using the flexbay for SSDs? Bypassing the backplane?
----- Original Message -----
From: Derek Yarnell de...@umiacs.umd.edu
To: ceph-users@lists.ceph.com
Sent: Wednesday 15 January
On 15/01/2014 17:34, Alexandre DERUMIER wrote:
Hi Derek,
Thanks for the information about the R720xd.
It seems that a 24-drive chassis is also available.
What is the advantage of using the flexbay for SSDs? Bypassing the backplane?
From what I understand the flexbays are inside the box, typically
useful
From what I understand the flexbays are inside the box, typically
useful for OS (SSD) drives; it then lets you use all the front hotplug
slots for larger platter drives.
Yes, it's inside the box.
I asked the question because of Derek's message:
They currently give me a hard time about
On 01/15/2014 10:53 AM, Alexandre DERUMIER wrote:
From what I understand the flexbays are inside the box, typically
useful for OS (SSD) drives; it then lets you use all the front hotplug
slots for larger platter drives.
Yes, it's inside the box.
I asked the question because of Derek's
On 1/15/14, 1:35 PM, Dimitri Maziuk wrote:
On 01/15/2014 10:53 AM, Alexandre DERUMIER wrote:
From what I understand the flexbays are inside the box,
typically useful for OS (SSD) drives; it then lets you use
all the front hotplug slots for larger platter drives.
Yes, it's inside the box.
On 01/15/2014 12:42 PM, Derek Yarnell wrote:
...
I think this is more a configuration that Dell has been
unwilling to sell, is all.
Ah.
Every once in a while they make their BIOS complain when it finds a
non-Dell-approved disk. Once enough customers start screaming, they
release a BIOS update that
Hi,
perhaps the disk has a problem?
Have you looked with smartctl?
(apt-get install smartmontools; smartctl -A /dev/sdX )
Udo
On 15.01.2014 10:49, Rottmann, Jonas (centron GmbH) wrote:
Hi,
I now did an upgrade to dumpling (ceph version 0.67.5
(a60ac9194718083a4b6a225fc17cad6096c69bd1)),
Randy,
Use librados. If you want to test out my latest doc and provide some
feedback, I'd appreciate it:
http://ceph.com/docs/wip-doc-librados-intro/rados/api/librados-intro/
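For a quick start alongside that doc, a minimal sketch using the python-rados bindings; the conffile path and the pool name below are placeholders, not something from Randy's setup.

# Minimal librados sketch via the python-rados bindings: connect, write an
# object to a pool, read it back. Paths and pool name are placeholders.
import rados

cluster = rados.Rados(conffile="/etc/ceph/ceph.conf")
cluster.connect()
try:
    ioctx = cluster.open_ioctx("data")          # pool name is a placeholder
    try:
        ioctx.write_full("hello-object", b"hello from librados")
        print(ioctx.read("hello-object"))
    finally:
        ioctx.close()
finally:
    cluster.shutdown()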
On Mon, Jan 13, 2014 at 11:40 PM, Randy Breunling rbreunl...@gmail.com wrote:
New to CEPH...so I'm on the
Jeff,
First, if you've specified the public and cluster networks in [global], you
don't need to specify them anywhere else. If you do, they get overridden.
That's not the issue here. It appears from your ceph.conf file that you've
specified an address on the cluster network. Specifically, you
If I understand correctly then, I should either not specify mon addr or
set it to an external IP?
Thanks for the clarification,
Jeff
On 01/15/2014 03:58 PM, John Wilkins wrote:
Jeff,
First, if you've specified the public and cluster networks in
[global], you don't need to specify them
Monitors use the public network, not the cluster network. Only OSDs use the
cluster network. The purpose of the cluster network is that OSDs do a lot
of heartbeat checks, data replication, recovery, and rebalancing. So the
cluster network will see more traffic than the front end public network.
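To illustrate, a hedged ceph.conf sketch (the addresses and monitor name are made up): the networks are declared once in [global], and the monitor address sits on the public network, leaving the cluster network to OSD-to-OSD traffic.

[global]
    public network  = 192.168.1.0/24    ; clients and monitors live here
    cluster network = 10.10.10.0/24     ; OSD replication, recovery, heartbeats

[mon.a]
    host = mon-host-a
    mon addr = 192.168.1.10:6789        ; a public-network address, per the above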
I am facing a problem when requesting Ceph radosgw using the Swift API.
The connection is getting closed after reading 512 bytes from the stream. This
problem only occurs if I send a GET object request with a Range header.
Here is the request and response:
Request---
GET
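For comparison, a sketch of that kind of ranged GET using python-requests against a Swift-style radosgw endpoint; the URL, token, container and object names are placeholders rather than the poster's actual values.

# Sketch of a Swift-API ranged GET against radosgw, the kind of request the
# poster describes. Endpoint, token, container and object are placeholders.
import requests

url = "http://rgw.example.com/swift/v1/my-container/my-object"
headers = {
    "X-Auth-Token": "AUTH_TOKEN_PLACEHOLDER",
    "Range": "bytes=0-1023",            # ask for the first 1024 bytes only
}

resp = requests.get(url, headers=headers)
print(resp.status_code)                 # 206 Partial Content is expected
print(len(resp.content))                # should match the requested range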