Hi Ceph-users,
TL;DR - I can't seem to pin down why an unloaded system with flash-based OSD
journals has higher-than-desired write latencies for RBD devices. Any ideas?
I am developing a storage system based on Ceph and an SCST+pacemaker cluster.
Our initial testing showed promising results
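For reference, a minimal latency probe of the sort that exposes this (a
sketch; the pool and image names are placeholders, not the original setup):

    fio --name=write-lat --ioengine=rbd --clientname=admin --pool=rbd \
        --rbdname=test-img --rw=randwrite --bs=4k --iodepth=1 \
        --runtime=60 --time_based

With iodepth=1 and 4k writes, the reported completion latency is dominated
by the journal write path, which makes it useful for isolating this problem.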
Thanks for your reply.
Server :
[root@ceph-1 ~]# rpm -qa | grep ceph
ceph-mon-0.94.1-13.el7cp.x86_64
ceph-radosgw-0.94.1-13.el7cp.x86_64
ceph-0.94.1-13.el7cp.x86_64
ceph-osd-0.94.1-13.el7cp.x86_64
ceph-deploy-1.5.25-1.el7cp.noarch
ceph-common-0.94.1-13.el7cp.x86_64
[root@ceph-1 ~]# uname -a
Linux
Just out of curiosity, how did you end up with this file? It would
probably help someone who faces a similar issue, I guess.
/var/lib/ceph/osd/ceph-4/current/3.2_head/rb.0.19f2e.238e1f29.0728__head_813E90A3__3
Cheers,
S
On Thu, Mar 3, 2016 at 3:58 PM, Alexander Gubanov wrote:
> Nothing of this happened.
Nothing of this happened. After the OSDs fell I found this file:
/var/lib/ceph/osd/ceph-4/current/3.2_head/rb.0.19f2e.238e1f29.0728__head_813E90A3__3
The location of this file seemed very strange to me, so I just removed it,
and then all OSDs started up.
On Fri, Feb 26, 2016 at 7:03 PM,
Yes.
On Wed, Jan 27, 2016 at 1:10 PM, Dan Mick wrote:
> Is the client.test-admin key in the keyring read by ceph-rest-api?
>
> On 01/22/2016 04:05 PM, Shinobu Kinjo wrote:
>> Does anyone have any idea about that?
>>
>> Rgds,
>> Shinobu
>>
>> - Original Message -
>> From: "Shinobu Kinjo"
On 03/03/16 13:00, Gregory Farnum wrote:
> Yes; it goes through the journal (or whatever the full storage stack
> is on the OSD in question).
Thanks
--
Lindsay Mathieson
On Sun, Feb 28, 2016 at 8:15 PM, Lindsay Mathieson
wrote:
> As per the subject - when using tell to benchmark an OSD, does it go through
> the OSD's journal or just the OSD disk itself?
Yes; it goes through the journal (or whatever the full storage stack
is on the OSD in question).
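For reference, the command in question (a sketch; osd.0 and the two numeric
arguments are placeholders):

    ceph tell osd.0 bench                    # defaults: 1 GB total in 4 MB writes
    ceph tell osd.0 bench 1073741824 4194304 # total bytes, block size

Since it issues writes through the OSD's normal write path, the journal
settings directly affect the result.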
Ceph 9.2.1. Shortly after updating from 9.2.0 to 9.2.1, all radosgws refuse
to start up; they're stuck on this 'notify' object:
[root@sm-cld-mtl-033 ceph]# ceph daemon /var/run/ceph/ceph-client.<>.asok
objecter_requests
{
    "ops": [
        {
            "tid": 13,
            "pg": "4.88aa5c95",
On Wed, Mar 2, 2016 at 9:40 AM, Василий Ангапов wrote:
> Greg,
> Can you give us some examples of that?
Just looking at the header source, one of the examples is
'allow command foo', 'allow command bar with arg1=val1 arg2 prefix val2'.
So you can do things like that. Substitute "auth create" or
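For example, a cap restricted to specific commands might look like this (a
sketch; client.limited is a hypothetical key name):

    ceph auth caps client.limited mon 'allow command "status", allow command "auth create" with entity prefix client.'

The key can then run exactly those monitor commands and nothing else.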
On 02/03/16 02:41, Robert LeBlanc wrote:
With a fresh disk, you will need to remove the old key in ceph (ceph
auth del osd.X) and the old osd (ceph osd rm X), but I think you can
leave the CRUSH map alone (don't do ceph osd crush rm osd.X) so that
there isn't any additional data movement (if ther
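Roughly, the replacement sequence under that assumption would be (a sketch;
X is the OSD id):

    ceph auth del osd.X     # drop the old authentication key
    ceph osd rm X           # remove the OSD from the osdmap
    # deliberately NOT running "ceph osd crush rm osd.X", so the CRUSH
    # entry and its weight stay in place and no extra rebalancing starts

When the new disk is prepared, 'ceph osd create' hands back the lowest free
id, so the replacement can slot into the existing CRUSH position.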
OK, I reduced my recovery I/O with:
ceph tell osd.* injectargs '--osd-max-backfills 1'
ceph tell osd.* injectargs '--osd-recovery-max-active 1'
ceph tell osd.* injectargs '--osd-client-op-priority 63'
Now I can put them back to the default values explicitly (10, 15), but is
there a way to tell ceph
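For what it's worth, putting the values back and verifying them might look
like this (a sketch; osd.0 stands in for any OSD with a local admin socket,
and 10/15 are the defaults mentioned above):

    ceph tell osd.* injectargs '--osd-max-backfills 10 --osd-recovery-max-active 15'
    ceph daemon osd.0 config get osd_max_backfills
    ceph daemon osd.0 config show | grep -E 'osd_max_backfills|osd_recovery_max_active'

'config show' dumps the currently active values, which is the closest thing
to asking ceph what the running settings are.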
Hey cephers,
Just a reminder that tonight is the Ceph Developer Monthly meeting for
March. At 9p EST we will be discussing current Ceph development
efforts. We ask that anyone currently working on Ceph development
please submit a short description on the wiki:
http://tracker.ceph.com/projects/cep
Unfortunately, VSM can manage only pools / clusters created by itself.
Regards
Michał Chybowski
Tiktalik.com
On 02.03.2016 at 20:23, Василий Ангапов wrote:
You may also look at Intel Virtual Storage Manager:
https://github.com/01org/virtual-storage-manager
2016-03-02 13:57 GMT+03:00 Joh
You may also look at Intel Virtual Storage Manager:
https://github.com/01org/virtual-storage-manager
2016-03-02 13:57 GMT+03:00 John Spray :
> On Tue, Mar 1, 2016 at 2:42 AM, Vlad Blando wrote:
>
>> Hi,
>>
>> We already have a user interface that is admin-facing (e.g. calamari,
>> kraken, ceph-d
Hello everyone,
I am trying to repair a cluster that has 74 PGs down. I have seen that
the PGs in question currently have 0 data on the OSDs.
I have exported data from OSDs that were pulled when the client had
thought the disks were bad.
I am using the recovery method described in "
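The export/import step with ceph-objectstore-tool looks roughly like this (a
sketch; the OSD ids, pgid, and file paths are placeholders, and both OSDs
must be stopped first):

    # on the source OSD that still holds the PG's data:
    ceph-objectstore-tool --data-path /var/lib/ceph/osd/ceph-X \
        --journal-path /var/lib/ceph/osd/ceph-X/journal \
        --op export --pgid <pgid> --file /tmp/<pgid>.export

    # on the destination OSD:
    ceph-objectstore-tool --data-path /var/lib/ceph/osd/ceph-Y \
        --journal-path /var/lib/ceph/osd/ceph-Y/journal \
        --op import --file /tmp/<pgid>.export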
Greg,
Can you give us some examples of that?
2016-03-02 19:34 GMT+03:00 Gregory Farnum :
> On Tue, Mar 1, 2016 at 7:37 PM, chris holcombe
> wrote:
>> Hey Ceph Users!
>>
>> I'm wondering if it's possible to restrict the ceph keyring to only
>> being able to run certain commands. I think the answe
Hi,
I also could not find any delete, only a create.
I found this thread, which is basically your situation:
http://lists.ceph.com/pipermail/ceph-users-ceph.com/2013-July/032412.html
--
Mit freundlichen Gruessen / Best regards
Oliver Dzombic
IP-Interactive
mailto:i...@ip-interactive.de
Address:
Thanks for the info, even if it is bad news.
Anyway, I am reading the docs again and I do not see a way to delete PGs.
How can I remove them?
Thanks,
Mario
2016-03-02 17:59 GMT+01:00 Oliver Dzombic :
> Hi,
>
> as I see your situation, somehow these 4 PGs got lost.
>
> They will not recover, because they a
Hi Yan,
Unfortunately the mount as-is is not working, and the issue is not clear to me:
admin@myvm:~$ sudo mount -t ceph 10.10.2.1:6789:/ /mnt/ceph -o
name=,secret=
mount: wrong fs type, bad option, bad superblock on 10.10.2.1:6789:/,
missing codepage or helper program, or other error
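Two things worth trying (a sketch; name=admin and the secretfile path are
assumptions, not your actual credentials):

    dmesg | tail    # the kernel client logs the real mount error here
    sudo mount -t ceph 10.10.2.1:6789:/ /mnt/ceph -o name=admin,secretfile=/etc/ceph/admin.secret

The generic "wrong fs type" message usually hides the actual cause; dmesg
shows it, and secretfile avoids secret-on-command-line quoting problems.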
Hi,
as I see your situation, somehow these 4 PGs got lost.
They will not recover, because they are incomplete, so there is no data
from which they could be recovered.
So all that is left is to delete these PGs.
Since all 3 OSDs are in and up, it does not seem like you can somehow
access this los
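Deleting an incomplete PG generally means removing it from each OSD that
holds a copy with ceph-objectstore-tool and then recreating it empty. A
sketch, with the OSD id and pgid as placeholders, and with the strong caveat
that this permanently discards whatever data was in the PG:

    # stop the OSD, then on each OSD holding a copy:
    ceph-objectstore-tool --data-path /var/lib/ceph/osd/ceph-N \
        --journal-path /var/lib/ceph/osd/ceph-N/journal \
        --op remove --pgid <pgid>
    # then recreate the PG empty:
    ceph pg force_create_pg <pgid>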
Here it is:
    cluster ac7bc476-3a02-453d-8e5c-606ab6f022ca
     health HEALTH_WARN
            4 pgs incomplete
            4 pgs stuck inactive
            4 pgs stuck unclean
            1 requests are blocked > 32 sec
     monmap e8: 3 mons at {0=10.1.0.12:6789/0,1=10.1.0.14:6789/0,2=10.1.0.17:
I tried to set min_size=1 but unfortunately nothing changed.
Thanks for the idea.
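(For reference, the command for that is presumably along these lines, with
the pool name as a placeholder:

    ceph osd pool set <pool> min_size 1

min_size only lets PGs go active when at least one complete copy exists,
which is likely why it changed nothing for these incomplete PGs.)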
2016-02-29 22:56 GMT+01:00 Lionel Bouton :
> On 29/02/2016 at 22:50, Shinobu Kinjo wrote:
>
> the fact that they are optimized for benchmarks and certainly not
> Ceph OSD usage patterns (with or without internal j
On Tue, Mar 1, 2016 at 7:37 PM, chris holcombe
wrote:
> Hey Ceph Users!
>
> I'm wondering if it's possible to restrict the ceph keyring to only
> being able to run certain commands. I think the answer to this is no
> but I just wanted to ask. I haven't seen any documentation indicating
> whether
On Wed, Mar 2, 2016 at 4:21 AM, Fred Rolland wrote:
> Hi,
>
> I am trying to use CephFS in oVirt (RHEV).
> The mount is created OK; however, the hypervisor needs access to the mount
> from different users (e.g. vdsm, sanlock).
> It seems that the sanlock user is having permission issues.
>
> When using
Hi Dan,
On 02/03/2016 19:48, Dan van der Ster wrote:
> Hi Loic,
>
> On Wed, Mar 2, 2016 at 12:32 PM, Loic Dachary wrote:
>>
>>
>> On 02/03/2016 17:15, Odintsov Vladislav wrote:
>>> Hi,
>>>
>>> it looks very strange that an LTS release suddenly stopped supporting one
>>> of the OSes in the middle of
On Wed, Mar 2, 2016 at 1:58 PM, Randy Orr wrote:
> Ilya,
>
> That's great, thank you. I will certainly try the updated kernel when
> available. Do you have pointers to the two bugs in question?
4.5-rc6 is already available. 4.4.4 should come out shortly.
Reports:
1. http://www.spinics.net/list
I would like to know more about the projects that are listed on your ideas
page, like:
RADOS PROXY
RBD DIFF CHECKSUMS
Ilya,
That's great, thank you. I will certainly try the updated kernel when
available. Do you have pointers to the two bugs in question?
Jan,
We have tried the NFS export as both sync and async and have seen the issue
with both options. I have seen this on hosts with 24G of memory and with
128G of memory.
Th
Hi Loic,
On Wed, Mar 2, 2016 at 12:32 PM, Loic Dachary wrote:
>
>
> On 02/03/2016 17:15, Odintsov Vladislav wrote:
>> Hi,
>>
>> it looks very strange that an LTS release suddenly stopped supporting one
>> of the OSes in the middle of its lifecycle, especially when there are no
>> technical problems.
>>
Hi,
can someone report their experiences with the PMC Adaptec HBA 1000
series of controllers?
https://www.adaptec.com/en-us/smartstorage/hba/
Thanks and regards,
Mike
Hi,
I am trying to use CephFS in oVirt (RHEV).
The mount is created OK; however, the hypervisor needs access to the mount
from different users (e.g. vdsm, sanlock).
It seems that the sanlock user is having permission issues.
When using NFS, configuring the export as all_squash and defining
anonuid/ano
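For comparison, the NFS side of that setup would be an exports line along
these lines (a sketch; the path, and the uid/gid 36, commonly vdsm:kvm in
oVirt, are assumptions):

    /export/data  *(rw,sync,all_squash,anonuid=36,anongid=36)

The question is whether CephFS offers an equivalent squash mechanism.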
Are you exporting (or mounting) the NFS as async or sync?
How much memory does the server have?
Jan
> On 02 Mar 2016, at 12:54, Shinobu Kinjo wrote:
>
> Ilya,
>
>> We've recently fixed two major long-standing bugs in this area.
>
> If you could elaborate more, it would be helpful for the
Ilya,
> We've recently fixed two major long-standing bugs in this area.
If you could elaborate more, it would be helpful for the community.
Is there any pointer?
Cheers,
Shinobu
- Original Message -
From: "Ilya Dryomov"
To: "Randy Orr"
Cc: "ceph-users"
Sent: Wednesday, March 2, 2
Hi Christian,
thank you very much for your hint! I usually use the search function of the
mailing list archive and didn't find this.
I installed munin on all nodes to get a better overview of what happens
where at a specific time.
When the problem happens, munin does not receive/show any values
On Tue, Mar 1, 2016 at 10:57 PM, Randy Orr wrote:
> Hello,
>
> I am running the following:
>
> ceph version 9.2.0 (bb2ecea240f3a1d525bcb35670cb07bd1f0ca299)
> ubuntu 14.04 with kernel 3.19.0-49-generic #55~14.04.1-Ubuntu SMP
>
> For this use case I am mapping and mounting an rbd using the kernel c
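The kernel client path being described would look roughly like this (a
sketch; the pool, image, and mount point names are placeholders):

    rbd map rbd/test-img --id admin
    mkfs.ext4 /dev/rbd0
    mount /dev/rbd0 /mnt/test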
On 02/03/2016 17:15, Odintsov Vladislav wrote:
> Hi,
>
> it looks very strange that an LTS release suddenly stopped supporting one
> of the OSes in the middle of its lifecycle, especially when there are no
> technical problems.
> How can we help community with building hammer branch officially?
>
Hi,
On Tue, Mar 1, 2016 at 2:42 AM, Vlad Blando wrote:
> Hi,
>
> We already have user interfaces that are admin-facing (e.g. calamari,
> kraken, ceph-dash); how about a client-facing interface that can cater for
> both block and object store? For object store I can use Swift via Horizon
> dashboard,
Hi,
it looks very strange that an LTS release suddenly stopped supporting one of
the OSes in the middle of its lifecycle, especially when there are no
technical problems.
How can we help the community with building the hammer branch officially?
Regards,
Vladislav Odintsov
Hi,
I've got two questions!
First: we are currently running Hammer in production and are thinking of
upgrading to Infernalis. Should we upgrade now or wait for the next LTS,
Jewel? On the Ceph releases page I can see Hammer's EOL is estimated at
November 2016, while Infernalis is June 2016.
If I follow the
Is "ceph -s" still showing you same output?
> cluster ac7bc476-3a02-453d-8e5c-606ab6f022ca
>  health HEALTH_WARN
>         4 pgs incomplete
>         4 pgs stuck inactive
>         4 pgs stuck unclean
> monmap e8: 3 mons at {0=10.1.0.12:6789/0,1=10.1.0.14:6789/0,2=10.1.0