Yes, I would like to know too… I decided not to update the kernel, as it could
possibly affect XenServer's stability and/or performance.
Cheers,
Mike
> On Jun 30, 2016, at 11:54 PM, Josef Johansson wrote:
>
> Also, is it possible to recompile the rbd kernel module in XenServer? I am
> under th
Thanks, Jason --
Turns out AppArmor was indeed enabled (I was not aware of that).
Disabled it and now I see the socket but it seems to only be there
temporarily while some client app is running.
The original reason I wanted to use this socket was that I am also
using rbd images through KVM and I w
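A sketch of how that socket can be queried while it exists (the .asok path is
illustrative; the actual file is named after the client ID and PID):

    # list the commands the socket understands
    ceph --admin-daemon /var/run/ceph/ceph-client.admin.asok help
    # dump the client's performance counters
    ceph --admin-daemon /var/run/ceph/ceph-client.admin.asok perf dump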
To summarise,
LIO is just not working very well at the moment because of the ABORT Tasks
problem; this will hopefully be fixed at some point. I'm not sure if SUSE works
around this, but see below for other pain points with RBD + ESXi + iSCSI.
TGT is easy to get going, but performance isn't the b
On 01/07/16 12:59, John Spray wrote:
On Fri, Jul 1, 2016 at 11:35 AM, Kenneth Waegeman
wrote:
Hi all,
While syncing a lot of files to CephFS, our MDS cluster went haywire: the
MDSs have a lot of segments behind on trimming (58621/30).
Because of this the MDS cluster gets degraded. RAM usage
On Fri, Jul 1, 2016 at 6:59 PM, John Spray wrote:
> On Fri, Jul 1, 2016 at 11:35 AM, Kenneth Waegeman
> wrote:
>> Hi all,
>>
>> While syncing a lot of files to CephFS, our MDS cluster went haywire: the
>> MDSs have a lot of segments behind on trimming (58621/30).
>> Because of this the MDS cluste
Hello,
>>> I found a performance drop between kernel 3.13.0-88 (the default kernel on
>>> Ubuntu Trusty 14.04) and kernel 4.4.0.24.14 (the default kernel on Ubuntu
>>> Xenial 16.04).
>>>
>>> The Ceph version is Jewel (10.2.2).
>>> All tests have been done under Ubuntu 14.04.
>>
To start safely, you need a replication factor of 3 and at least 4 nodes
(think size + 1), so a node can be taken down for maintenance while all
replicas stay placed.
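A minimal sketch of the matching pool defaults in ceph.conf (values are
illustrative, set before creating pools):

    [global]
    osd_pool_default_size = 3      # three replicas per object
    osd_pool_default_min_size = 2  # keep serving I/O with one replica down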
On Fri, Jul 1, 2016 at 2:31 PM, Ashley Merrick
wrote:
> Hello,
>
> Okay, makes perfect sense.
>
> So if I run Ceph with a replication of 3, is it still required to r
Hello,
Okay, makes perfect sense.
So if I run Ceph with a replication of 3, is it still required to run an odd
number of OSD nodes?
Or could I run 4 OSD nodes to start with, with a replication of 3, with each
replica on a separate server?
Ashley Merrick
-----Original Message-----
From: cep
Still, in case of object corruption you will not be able to determine
which copy is valid. Ceph does not checksum object data with filestore
(checksumming is planned for bluestore).
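A sketch of how such corruption surfaces today (a deep scrub compares the
replicas, but with two copies Ceph cannot tell which one is correct):

    ceph osd deep-scrub 0    # deep-scrub every PG on osd.0
    ceph health detail       # lists any PGs flagged inconsistent
    ceph pg repair <pgid>    # resyncs from the primary, which may be the bad copy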
On 01.07.2016 14:20, David wrote:
> It will work, but be aware that 2x replication is not a good idea if your data
> is important. T
It will work, but be aware that 2x replication is not a good idea if your data
is important. The exception would be if the OSDs are DC-class SSDs that you
monitor closely.
On Fri, Jul 1, 2016 at 1:09 PM, Ashley Merrick
wrote:
> Hello,
>
> Perfect, I want to keep copies on separate nodes, so wanted to make
Hello,
Perfect, I want to keep copies on separate nodes, so wanted to make sure the
expected behaviour was that it would do that.
And no issues with running an odd number of nodes for a replication of 2? I
know you have quorum, just wanted to make sure it would not affect things when
running an even replicati
It will put each object on 2 OSDs, on 2 separate nodes.
All nodes and all OSDs will have (approximately) the same used space.
If you want to allow both copies of an object to be stored on the same
node, you should use osd_crush_chooseleaf_type = 0 (see
http://docs.ceph.com/docs/master/rados/operations/crus
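In ceph.conf terms, a sketch of that setting (the default of 1 makes the host
the failure domain, which is what keeps the two copies on separate nodes):

    [global]
    # 0 = pick leaves at the OSD level (copies may share a node)
    # 1 = pick leaves at the host level (the default)
    osd_crush_chooseleaf_type = 1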
Hello,
Looking at setting up a new Ceph cluster, starting with the following:
3 x CEPH OSD Servers
Each Server:
20Gbps Network
12 OSDs
SSD Journal
Looking at running with a replication of 2: will there be any issues using 3
nodes with a replication of two? This should "technically" give me ½ t
Hi,
> In Infernalis there was this command:
> radosgw-admin regions list
> But this is missing in Jewel.
Ok, I just found out that this was renamed to "zonegroup list":
root@rgw01:~ # radosgw-admin --id radosgw.rgw zonegroup list
read_default_id : -2
{
"default_info": "",
"zonegroups": [
On Fri, Jul 1, 2016 at 11:35 AM, Kenneth Waegeman
wrote:
> Hi all,
>
> While syncing a lot of files to CephFS, our MDS cluster went haywire: the
> MDSs have a lot of segments behind on trimming (58621/30).
> Because of this the MDS cluster gets degraded. RAM usage is about 50GB. The
> MDSs were r
Hi all,
While syncing a lot of files to CephFS, our MDS cluster went haywire: the
MDSs have a lot of segments behind on trimming (58621/30).
Because of this the MDS cluster gets degraded. RAM usage is about 50GB.
The MDSs were respawning and replaying continuously, and I had to stop
all syncs
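For reference, a sketch of the Jewel-era knobs involved in journal trimming
(values are illustrative, not a recommendation):

    # inspect the mds_log section for the current segment count
    ceph daemon mds.<id> perf dump
    # raise the trimming ceilings to give the MDS room to catch up
    ceph daemon mds.<id> config set mds_log_max_segments 200
    ceph daemon mds.<id> config set mds_log_max_expiring 40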
Hi,
1.
2 software iSCSI gateways (deployed on the OSD/monitor nodes), created using
lrbd; the iSCSI target is LIO.
Configuration:
{
    "auth": [
        {
            "target": "iqn.2016-07.org.linux-iscsi.iscsi.x86:testvol",
            "authentication": "none"
        }
    ],
    "targets": [
        {
            "target": "iqn.2016-07.org.l
Hi List,
Sorry if this question was answered before.
I'm new to Ceph and am following the Ceph documentation to set up a cluster.
However, I noticed that the manual install guide says the following:
http://docs.ceph.com/docs/master/install/install-storage-cluster/
> Ensure your YUM ceph.repo entry incl
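For reference, a minimal ceph.repo sketch along the lines that passage
describes (assuming the Jewel el7 packages; the priority line needs the
yum-plugin-priorities plugin installed):

    [ceph]
    name=Ceph packages for $basearch
    baseurl=http://download.ceph.com/rpm-jewel/el7/$basearch
    enabled=1
    priority=2
    gpgcheck=1
    gpgkey=https://download.ceph.com/keys/release.asc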
Hi,
> See this thread:
> https://www.mail-archive.com/ceph-users@lists.ceph.com/msg23852.html
Yes, I found this as well, but I don't think I have configured more than
one region.
I never touched any region settings, and I have to admit I wouldn't
even know how to check which regions I have.
In
Hi,
my experience:
Ceph + iSCSI (multipath) + VMware == the worst.
Better to search for another solution.
VMware + NFS + Ceph might give much better performance.
If you are able to get VMware running with iSCSI and Ceph, I would be
>>very<< interested in what/how you did that.
--
Mit
Hello,
On Fri, 1 Jul 2016 13:04:45 +0800 mq wrote:
> Hi list
> I have tested SUSE Enterprise Storage 3 using 2 iSCSI gateways attached
> to VMware. The performance is bad.
First off, it's somewhat funny that you're testing the repackaged SUSE
Ceph, but asking for help here (with Ceph being ow
Hi,
is there a proven solution to this issue in the meantime?
What can be done to fix the scheduler bug? 1 patch, 3 patches, 20 patches?
Thanks
Christoph
On Wed, Jun 29, 2016 at 12:02:11PM +0200, Stefan Priebe - Profihost AG wrote:
> Hi,
>
> to be precise i've far more patches attached to the s