On 18/07/15 12:53, Steve Thompson wrote:
Ceph newbie (three weeks).
Ceph 0.94.2, CentOS 6.6 x86_64, kernel 2.6.32. Twelve identical OSDs
(1 TB each), three MONs, one active MDS and two standby MDSs. 10GbE
cluster network, 1GbE public network. Using CephFS on a single client
via the
I suggest you look at the current offload settings on the NIC.
There have been quite a few bugs when bonding, VLANs, bridges, etc. are involved
- sometimes you have to set the offload settings on the logical interface (like
bond0.vlan_id) and not on the NIC.
Even then, what I’ve seen is that if
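A minimal sketch of what that check looks like, with hypothetical interface
names (eth0 as the physical NIC, bond0.100 as the VLAN-on-bond interface):

  # show the current offload settings on the physical NIC and the logical interface
  ethtool -k eth0
  ethtool -k bond0.100
  # example: disable generic receive offload on the logical interface, not the NIC
  ethtool -K bond0.100 gro off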
On 20/07/15 10:29, Thomas Lemarchand wrote:
Hi Shane, list,
I have no knowledge of HDFS at all. Why would someone choose CephFS
instead of HDFS for Hadoop ?
HDFS is meant for Hadoop right ?
I may have to work on Hadoop in a few months, so I'd like to be able to
do the right choice.
One
Hello,
Can you please send the output of the commands:
ceph osd tree
ceph osd dump
Regards,
From: ceph-users [mailto:ceph-users-boun...@lists.ceph.com] On Behalf Of
ryan_h...@supercluster.cn
Sent: Monday, 20 July 2015 02:53
To: ceph-users
Cc: vincent_liu
Subject: [ceph-users] HEALTH_WARN
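For reference, a typical first pass on a HEALTH_WARN (nothing cluster-specific
assumed here) would be something like:

  ceph health detail          # which PGs/OSDs the warning actually refers to
  ceph -s                     # overall cluster state
  ceph osd tree               # OSD up/down state and weights per host
  ceph osd dump | grep pool   # pool size, min_size and flags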
Thanks John, I'll keep that in mind.
--
Thomas Lemarchand
Cloud Solutions SAS - Responsable des systèmes d'information
On Mon., 2015-07-20 at 10:40 +0100, John Spray wrote:
On 20/07/15 10:29, Thomas Lemarchand wrote:
Hi Shane, list,
I have no knowledge of HDFS at all. Why would
Reading the SourceForge blog, it seems they experienced Ceph corruption. IMHO it
would be a good idea to know the technical details: version, what happened...
http://sourceforge.net/blog/sourceforge-infrastructure-and-service-restoration/
--
WBR, Dzianis Kahanovich AKA Denis Kaganovich,
I asked them on twitter, let’s hope they elaborate on that.
But yeah, I bet someone optimized it with mount -o nobarrier …
Jan
On 20 Jul 2015, at 14:56, Gregory Farnum g...@gregs42.com wrote:
We responded immediately and confirmed the issue was related to filesystem
corruption on our
http://tracker.ceph.com/issues/11586
On Mon, Jul 20, 2015 at 7:42 AM, Dzianis Kahanovich maha...@bspu.unibel.by
wrote:
Reading the SourceForge blog, it seems they experienced Ceph corruption. IMHO it
would be a good idea to know the technical details: version, what happened...
We responded immediately and confirmed the issue was related to filesystem
corruption on our storage platform. This incident impacted all block
devices on our Ceph cluster.
Just guessing from that, I bet they lost power and discovered their local
filesystems/disks were misconfigured to not be
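A quick way to check for that kind of misconfiguration on the OSD hosts
(a generic sketch, not anything from the SourceForge post itself):

  # any filesystem mounted with write barriers disabled?
  grep -E 'nobarrier|barrier=0' /proc/mounts
  # disabling barriers is only safe with a non-volatile (battery/flash-backed) write cache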
Hi all,
I have a cluster with 28 nodes (all physical, 4Cores, 32GB Ram), each node
has 4 OSDs for a total of 112 OSDs. Each OSD has 106 PGs (counted including
replication). There are 3 MONs on this cluster.
I'm running on Ubuntu trusty with kernel 3.13.0-52-generic, with Hammer
(0.94.2).
This
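(Rough sanity check, assuming 3x replicated pools: 106 PG copies per OSD x 112
OSDs / 3 replicas is roughly 3,957 PGs across all pools, i.e. close to the
commonly recommended ~100 PG copies per OSD.)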
Hi,
As I explained in various previous threads, I'm having a hard time getting the
most out of my test Ceph cluster.
I'm benching things with rados bench.
All Ceph hosts are on the same 10GbE switch.
Basically, I know I can get about 1GB/s of disk write performance per host,
when I bench things
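For anyone reproducing this, the kind of rados bench run I mean (the pool name
and PG count are just examples):

  ceph osd pool create bench 256 256
  rados bench -p bench 60 write -t 32 --no-cleanup   # 60s of 4MB writes, 32 in flight
  rados bench -p bench 60 seq -t 32                  # sequential reads of those objects
  rados -p bench cleanup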
-Original Message-
From: ceph-users [mailto:ceph-users-boun...@lists.ceph.com] On Behalf Of
David Casier
Sent: 20 July 2015 00:27
To: ceph-users@lists.ceph.com
Subject: Re: [ceph-users] osd_agent_max_ops relating to number of OSDs in
the cache pool
Nick Fisk nick@... writes:
On 7/20/15, 11:52 AM, ceph-users on behalf of Campbell, Bill
ceph-users-boun...@lists.ceph.com on behalf of
bcampb...@axcess-financial.com wrote:
We use VMware with Ceph, however we don't use RBD directly (we have an
Hi Bill,
Would you be kind enough to share what your setup looks like today, as we are
planning to use ESXi backed by Ceph storage? When you tested iSCSI, what issues
did you notice? What version of Ceph were you running then? What iSCSI software
did you use for the setup?
Regards,
Hi,
Has anyone implemented Ceph RBD with the VMware ESXi hypervisor? I'm just
looking to use it as a native VMFS datastore to host VMDKs. Please let me know
if there are any documents out there that might point me in the right direction
to get started on this. Thank you.
Regards,
Nikhil Mitra
We use VMware with Ceph, however we don't use RBD directly (we have an NFS
server which has RBD volumes exported as datastores in VMware). We did attempt
iSCSI with RBD to connect to VMware but ran into stability issues (it could have
been the target software we were using), but have found NFS to
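Roughly what such an NFS head looks like, if it helps (the image, mount point
and export options below are hypothetical, not Bill's actual config):

  rbd create rbd/vmware-ds01 --size 2048000      # size in MB on Hammer
  rbd map rbd/vmware-ds01                        # shows up as /dev/rbd/rbd/vmware-ds01
  mkfs.xfs /dev/rbd/rbd/vmware-ds01
  mkdir -p /export/vmware-ds01
  mount /dev/rbd/rbd/vmware-ds01 /export/vmware-ds01
  # /etc/exports:  /export/vmware-ds01  <esxi-host>(rw,no_root_squash,sync)
  exportfs -ra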
Hi Nikhil. We just posted slides from Ceph Day (Los Angeles) about the use of
iSCSI with Ceph at Electronic Arts. The slides can be found here:
http://www.slideshare.net/DaystromTech/ceph-days-la-2015-holmesevans if
you want to review them. (Note: it doesn’t answer your specific question, but
Hi,
I just noticed a strange behavior on one OSD (and only one, other OSDs on the
same server didn’t show that behavior) in a ceph-cluster (all 0.94.2 on Debian
7 with a self-made 4.1 Kernel).
The OSD started to accumulate slow requests, a restart didn’t help.
After a few seconds the log is
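For diagnosing that kind of OSD, the usual starting points are its admin socket
and the disk underneath it (osd.12 below is just a placeholder for the affected
OSD id):

  ceph daemon osd.12 dump_ops_in_flight    # what is currently stuck, and at which stage
  ceph daemon osd.12 dump_historic_ops     # recently completed slow ops with timings
  iostat -x 5                              # is the underlying disk saturated?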
I don't have much of the details (our engineering group handled most of the
testing), however we currently have 10 Dell PowerEdge R720xd systems, each with
24 600GB 10k SAS OSDs (the system has a RAID controller with 2GB NVRAM; in
testing, performance was better with this than with 6 SSD drives
Hi David,
I'm also using Ceph to provide block devices to ESXi datastores.
Currently using tgt with the RBD backend to provide iSCSI.
Also tried SCST, LIO and NFS, here's my take on them.
TGT
Pros: very stable, talks RBD directly, easy to set up, Pacemaker agents, OK
performance
Cons: Can't do
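For reference, the bare-bones tgt-with-RBD setup looks roughly like this (the
target IQN and image name are made up, and tgt must be built with RBD
backing-store support):

  tgtadm --lld iscsi --op new --mode target --tid 1 -T iqn.2015-07.com.example:rbd-ds01
  tgtadm --lld iscsi --op new --mode logicalunit --tid 1 --lun 1 \
         --bstype rbd --backing-store rbd/esx-ds01
  tgtadm --lld iscsi --op bind --mode target --tid 1 -I ALL   # allow all initiators (testing only)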
Hi,
Can someone give some insight into whether it is possible to mix SSDs with HDDs
on the OSDs?
How can we speed up file uploads? For example, in our experience it took around
18 minutes to load a 20 GB image (via Glance) on a 1 Gb network. Or is that just
normal?
Regards,
Mario
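(Rough numbers: 20 GB in ~18 minutes is about 19 MB/s, i.e. roughly 150 Mbit/s.
A 1 Gb link can carry ~110 MB/s, so the network alone should move that image in
3-4 minutes; the rest is likely replication writes, journals sharing the same
spindles, or Glance itself.)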
Hi,
Can someone give some insight into whether it is possible to mix SSDs with HDDs
on the OSDs?
You'll have more or less four options:
- SSDs for the journals of the OSD processes (the SSD must be able to perform
well on synchronous writes)
- an SSD-only pool for "high performance" data
- using SSDs for the
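A minimal sketch of the first two options in Hammer-era syntax, with made-up
device and pool names (the SSD-only CRUSH ruleset, id 1 here, has to exist
already):

  ceph-disk prepare /dev/sdb /dev/sdg          # HDD as data, SSD partition as journal
  ceph osd pool create ssd-pool 128 128
  ceph osd pool set ssd-pool crush_ruleset 1   # ruleset that only selects SSD OSDs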