Infernalis 9.2.1, CentOS 7.2. My cluster is in recovery and I've noticed a
lot of 'waiting for rw locks'. Some of these can last quite a long time.
Any idea what can cause this?
Because this is an RGW bucket index object, this causes things to back up --
since the index can't be updated, S3 updates to ot
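(One way to see which requests are stuck, and on which OSD, is the health output plus the OSD admin socket; the osd id below is a placeholder:

ceph health detail
ceph daemon osd.<id> dump_ops_in_flight
ceph daemon osd.<id> dump_historic_ops

The in-flight dump should show the "waiting for rw locks" state for the affected ops.)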
I've been running a cluster like this for several years serving VM block
devices and CephFS with no difficulty. As previously mentioned, it's a question
of resources and the corresponding expectations. I use a set of original HP
Microservers, each with 16GB RAM, which is more than enough, althoug
Hi,
I am following the documentation on how to prepare and activate an OSD with ceph-disk
and ran into the following problem:
command_check_call: Running command: /usr/bin/ceph-osd --cluster ceph
--mkfs --mkkey -i 8 --monmap /var/lib/ceph/tmp/mnt.RxRUd8/activate.monmap
--osd-data /var/lib/ceph/tmp/mnt.RxRUd8
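(For reference, the documented sequence being followed is roughly the one below; the device names are placeholders, not the ones from the failing run:

ceph-disk prepare /dev/sdb
ceph-disk activate /dev/sdb1

The activate step is what ends up invoking the ceph-osd --mkfs command shown above.)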
On Fri, May 6, 2016 at 2:27 PM, Sage Weil wrote:
> On Fri, 6 May 2016, Yehuda Sadeh-Weinraub wrote:
>> On Fri, May 6, 2016 at 12:41 PM, Sage Weil wrote:
>> > This PR
>> >
>> > https://github.com/ceph/ceph/pull/8975
>> >
>> > removes the 'rados cppool' command. The main problem is that th
On Fri, 6 May 2016, Yehuda Sadeh-Weinraub wrote:
> On Fri, May 6, 2016 at 12:41 PM, Sage Weil wrote:
> > This PR
> >
> > https://github.com/ceph/ceph/pull/8975
> >
> > removes the 'rados cppool' command. The main problem is that the command
> > does not make a faithful copy of all data be
On Fri, May 6, 2016 at 12:41 PM, Sage Weil wrote:
> This PR
>
> https://github.com/ceph/ceph/pull/8975
>
> removes the 'rados cppool' command. The main problem is that the command
> does not make a faithful copy of all data because it doesn't preserve the
> snapshots (and snapshot related
As it should be working, I will increase the logging level in my
smb.conf file and see what info I can get out of the logs, and report
back.
I would like to use Samba's native CephFS VFS interface, but I
could not get Samba ACLs to work when testing it, as it looks like
the Samba vfs_ceph.c
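(For reference, a minimal vfs_ceph share definition, assuming a dedicated cephx user for Samba, looks roughly like this; the share name and user id are placeholders:

[cephfs]
path = /
vfs objects = ceph
ceph:config_file = /etc/ceph/ceph.conf
ceph:user_id = samba

and the logging mentioned above can be raised with "log level = 10" in the [global] section.)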
This PR
https://github.com/ceph/ceph/pull/8975
removes the 'rados cppool' command. The main problem is that the command
does not make a faithful copy of all data because it doesn't preserve the
snapshots (and snapshot related metadata). That means if you copy an RBD
pool it will rend
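(For reference, the command being removed is invoked as follows, with placeholder pool names:

rados cppool <source-pool> <dest-pool>

and, as noted above, the copy does not include snapshots or snapshot-related metadata.)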
Hello,
I encountered the message with Ceph 10.2.0 in the following situation.
My details
[ceph@osd1 ~]$ date;ceph -v; ceph osd crush show-tunables
Fri May 6 22:29:56 MSK 2016
ceph version 10.2.0 (3a9fba20ec743699b69bd0181dd6c54dc01c64b9)
{
"choose_local_tries": 0,
"choose_local_
Thank you Alan,
It works, but I didn't find this option in the Ceph docs. It seems they are not
up to date/complete.
Regards,
Roozbeh
On May 6, 2016 23:25, "Alan Johnson" wrote:
> Try with --release
>
> -Original Message-
> From: ceph-users [mailto:ceph-users-boun...@lists.ceph.com] On Behalf Of
>
Try with --release
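i.e. something along the lines of the following, using the node names from your message:

ceph-deploy install --release hammer node1 node2 node3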
-Original Message-
From: ceph-users [mailto:ceph-users-boun...@lists.ceph.com] On Behalf Of
Roozbeh Shafiee
Sent: Friday, May 06, 2016 2:54 PM
To: ceph-users@lists.ceph.com
Subject: [ceph-users] Installing Ceph Hammer
Hi,
I need to install Ceph Hammer because of some
Hi,
I need to install Ceph Hammer with the ceph-deploy tool because of some kernel
issues with Jewel.
But when I enter “ceph-deploy install node1 node2 node3”, ceph-deploy starts
installing Jewel.
How can I install older versions of Ceph, like Hammer on CentOS 7 ?
Thank you
Roozbeh
On Fri, May 6, 2016 at 9:53 AM, Eric Eastman
wrote:
> I was doing some SAMBA testing and noticed that a kernel-mounted share
> acted differently than a fuse-mounted share with Windows security on
> my windows client. I cut my test down to as simple as possible, and I
> am seeing the kernel mounted
I was doing some SAMBA testing and noticed that a kernel-mounted share
acted differently than a fuse-mounted share with Windows security on
my windows client. I cut my test down to as simple as possible, and I
am seeing the kernel mounted Ceph file system working as expected with
SAMBA and the fuse
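(For reference, the two mounts being compared would have been set up roughly like this; the monitor address, mount points and credentials are placeholders:

mount -t ceph mon1:6789:/ /mnt/cephfs-kernel -o name=admin,secretfile=/etc/ceph/admin.secret
ceph-fuse -m mon1:6789 /mnt/cephfs-fuse

with Samba then exporting a directory under each mount point.)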
Oliver Dzombic writes:
>
> Hi Blade,
>
> you can try to set min_size to 1 to get it back online, and if/when
> the error vanishes (maybe after another repair command) you can set
> min_size back to 2.
>
> you can try to simply out/down/remove(?) the OSD it is on.
>
Hi Oliver
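For reference, the commands that advice maps to would be roughly the following; the pool name, pg id and osd id are placeholders:

ceph osd pool set <pool> min_size 1
ceph pg repair <pgid>
ceph osd pool set <pool> min_size 2
ceph osd out <osd-id>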
It's all about resources.
If you have lots of CPU and memory it is completely doable.
If you're using lower specification hardware, it might be a little
difficult.
-Tu
On Fri, May 6, 2016 at 7:11 AM David Turner
wrote:
> There is potential for locking due to hung processes or such when you have
There is potential for locking due to hung processes and the like when you have OSDs
on Mons. My test cluster has OSDs on Mons and hasn't run into it, but I
have heard of this happening on this mailing list. I don't think you would
ever hear a recommendation for this in a production environme
That's good to know. My plan is to have three nodes initially, each with every
role.
Thanks,
Alex
-Original Message-
From: ceph-users [mailto:ceph-users-boun...@lists.ceph.com] On Behalf Of Oliver
Dzombic
Sent: 06 May 2016 12:06
To: ceph-users@lists.ceph.com
Subject: Re: [ceph-users] Du
Hi Alex,
that's no problem.
It might get harder to troubleshoot problems, because the server
resources will be used for everything. Also, you will have logs for
every different service on the same node.
But in general, it's technically no problem to have all (Mon+OSD+MDS)
on one node.
--
Mit fr
Hi,
I'm looking at standing up a small Ceph cluster, but I'm curious what the
general opinion is on dual-role nodes, e.g. each node being both an OSD
and a MON.
Regards,
Alex
Hey,
you can try adding a few disks at a time, waiting for the rebalance to finish, and
then adding more. Repeat until all disks are added.
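For example, something along these lines (the throttling values are just an illustration):

ceph tell osd.* injectargs '--osd-max-backfills 1 --osd-recovery-max-active 1'
ceph -w

Lowering osd_max_backfills / osd_recovery_max_active keeps the rebalancing traffic down, and watching ceph -w until the cluster is back to HEALTH_OK tells you when to add the next batch.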
On 16-05-06 11:36, M Ranga Swami Reddy wrote:
Hi,
I wanted to add 2 new nodes (21 OSDs per node) to the current ceph
cluster (v 0.80.7).
What is best way doing the same without imp
Hello,
We have been running the RADOS Gateway with the S3 API and did not have
any problems for more than a year.
We recently also enabled the Swift API for our users.
radosgw --version
ceph version 0.94.6 (e832001feaf8c176593e0325c8298e3f16dfb403)
The idea is that each user of the system is free of
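(For context, the usual way to give an existing S3 user Swift credentials is via a subuser; the uid below is a placeholder:

radosgw-admin subuser create --uid=<uid> --subuser=<uid>:swift --access=full
radosgw-admin key create --subuser=<uid>:swift --key-type=swift --gen-secret

which is presumably how the Swift API was enabled per user here.)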
Hello,
On Fri, 6 May 2016 09:58:31 +0200 Peter Kerdisle wrote:
> Hey Mark,
>
> Sorry I missed your message as I'm only subscribed to daily digests.
>
>
> > Date: Tue, 3 May 2016 09:05:02 -0500
> > From: Mark Nelson
> > To: ceph-users@lists.ceph.com
> > Subject: Re: [ceph-users] Erasure pool
Hi,
I wanted to add 2 new nodes (21 OSDs per node) to the current Ceph
cluster (v 0.80.7).
What is the best way to do this without impacting customers, i.e. with
no downtime (and no network impact due to rebalancing activity)?
Thanks
Swami
Hey Mark,
Sorry I missed your message as I'm only subscribed to daily digests.
> Date: Tue, 3 May 2016 09:05:02 -0500
> From: Mark Nelson
> To: ceph-users@lists.ceph.com
> Subject: Re: [ceph-users] Erasure pool performance expectations
> Message-ID:
> Content-Type: text/plain; charset=windows-