Hi all,
Same question with Ceph 10.2.3 and 11.2.0.
Is this command only for client.admin?
client.symphony
key: AQD0tdRYjhABEhAAaG49VhVXBTw0MxltAiuvgg==
caps: [mon] allow *
caps: [osd] allow *
Traceback (most recent call last):
File "/usr/bin/ceph-rest-api", line
Thanks to both of you for your replies.
Hi haomai.
If we compare RDMA with the TCP/IP stack, as far as I know, we can use RDMA to
offload the traffic and reduce CPU usage, which means the other
components can use more CPU to improve some performance metrics, such as
IOPS?
Hi Deepak,
I would describe mo
Oh wow, I completely misunderstood your question.
Yes, src/osd/PG.cc and src/osd/PG.h are compiled into the ceph-osd binary, which
is included in the ceph-osd RPM, as you said in your OP.
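If you want to double-check that against the packages you built, something along these lines should do it (a sketch; adjust the glob to where your build drops the RPMs):

# Which installed package owns the binary?
rpm -qf /usr/bin/ceph-osd
# Which of the freshly built rpms ships ceph-osd?
for p in *.rpm; do
    rpm -qpl "$p" | grep -q 'bin/ceph-osd$' && echo "$p"
done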
On Fri, Mar 24, 2017 at 3:10 AM, nokia ceph wrote:
> Hello Piotr,
>
> I didn't understand, could you please el
Hi Nick,
I didn't test with a colocated journal. I figure ceph knows what it's
doing with the journal device, and it has no filesystem, so there's no
xfs journal, file metadata, etc. to cache due to small random sync writes.
I tested the bcache and journals on some SAS SSDs (rados bench was ok
bu
Hello Ceph community!
I would like some help with a new Ceph installation.
I have installed Jewel on CentOS 7, and after a reboot my OSDs are not
mounted automatically; as a consequence, Ceph is not operating
normally...
What can I do?
Could you please help me solve the problem?
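For reference, is something like the following the right way to bring them back after a reboot (just a sketch on my side, assuming the OSDs were deployed with ceph-disk; the device name is an example)?

# Re-activate all ceph-disk prepared OSDs after the reboot:
ceph-disk activate-all
# Make sure the systemd target/units are enabled so OSDs start on boot:
systemctl enable ceph.target
systemctl list-units 'ceph-osd@*'
# If one data partition is still not mounted, activate it by device:
ceph-disk activate /dev/sdb1   # example device, adjust to your layout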
Regards,
G
From: ceph-users [mailto:ceph-users-boun...@lists.ceph.com] On Behalf Of Mike
Lovell
Sent: 20 March 2017 22:31
To: n...@fisk.me.uk
Cc: Webert de Souza Lima ; ceph-users
Subject: Re: [ceph-users] cephfs cache tiering - hitset
On Mon, Mar 20, 2017 at 4:20 PM, Nick Fisk <n...@
Fixing typo
What version of ceph-fuse?
ceph-fuse-10.2.6-0.el7.x86_64
--
Deepak
-Original Message-
From: Deepak Naidu
Sent: Thursday, March 23, 2017 9:49 AM
To: John Spray
Cc: ceph-users
Subject: Re: [ceph-users] How to mount different ceph FS using ceph-fuse or
kernel cephfs mount
Hi Peter,
Interesting graph. Out of interest, when you use bcache, do you then just
leave the journal collocated on the combined bcache device and rely on the
writeback to provide journal performance, or do you still create a separate
partition on whatever SSD/NVME you use, effectively giving t
Hi Alexandro,
As I understand it, you are planning NVMe journals for the SATA HDDs and collocated
journals for the SATA SSDs?
Option 1:
- 24x SATA SSDs per server will hit a bottleneck at the storage
bus/controller. Also, I would consider the network capacity: 24x SSDs will
deliver more performance tha
Nope. This is a theoretical possibility but would take a lot of code change
that nobody has embarked upon yet.
-Greg
On Wed, Mar 22, 2017 at 2:16 PM Sergio A. de Carvalho Jr. <
scarvalh...@gmail.com> wrote:
> Hi all,
>
> Is it possible to create a pool where the minimum number of replicas for
> th
Hi,
Ceph speeds up with more nodes and more OSDs, so go for 6 nodes with
mixed SSD+SATA.
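If you keep SSDs and SATA disks in the same cluster you will probably want separate pools per media type; without device classes that means separate CRUSH roots and rules, roughly like this (untested sketch; bucket, rule and pool names are only examples):

# One CRUSH root per media type, then a rule per root:
ceph osd crush add-bucket ssd-root root
ceph osd crush add-bucket sata-root root
# (move the SSD/SATA hosts or OSDs under the matching root here)
ceph osd crush rule create-simple ssd-rule ssd-root host
ceph osd crush rule create-simple sata-rule sata-root host
# Point each pool at the right rule (Jewel still calls this crush_ruleset):
ceph osd pool set ssd-pool crush_ruleset 1   # example pool and rule id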
Udo
On 23.03.2017 18:55, Alejandro Comisario wrote:
> Hi everyone!
> I have to install a ceph cluster (6 nodes) with two "flavors" of
> disks, 3 servers with SSD and 3 servers with SATA.
>
> I will purchase
RDMA is of interest to me, hence my comment below.
>> What surprised me is that the result of RDMA mode is almost the same as the
>> basic mode: the IOPS, latency, throughput, etc.
Pardon my limited knowledge here, but if I read your ceph.conf and your notes
correctly, it seems that you are using RDMA only for “cluste
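To confirm what is actually in effect, the admin socket of a running daemon should show which messenger each network uses, e.g. (a sketch; osd.0 is just an example daemon):

# Check which messenger type the daemon was started with:
ceph daemon osd.0 config get ms_type
# And list anything RDMA-related (e.g. a cluster-only RDMA messenger):
ceph daemon osd.0 config show | grep -i -E 'ms_.*type|rdma'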
Hi everyone!
I have to install a Ceph cluster (6 nodes) with two "flavors" of
disks: 3 servers with SSDs and 3 servers with SATA.
I will purchase 24-disk servers (the SATA ones with NVMe SSDs for
the SATA journals).
Processors will be 2x E5-2620v4 with HT, and RAM will be 20 GB for the
OS, and 1.
Hey cephers,
Just a reminder that the next Ceph Developer Monthly meeting is coming up:
http://wiki.ceph.com/Planning
If you have work that you are doing that is feature work, significant
backports, or anything you would like to discuss with the core team,
please add it to the following page:
h
Definitely, in our case the OSDs were not the guilty ones, since all OSDs that
were blocking requests (always from the same pool) worked flawlessly (and
still do) after we deleted the pool where we always saw the blocked PGs.
Since the pool was accessed by just one client, and had almost no ops to it,
Hello Piotr,
I didn't understand; could you please elaborate on this procedure as
mentioned in the last update? It would be really helpful if you shared any
useful link/doc to understand what you actually meant. Yes, correct,
normally we do this procedure, but it takes more time. But here my inte
>> What version of ceph-fuse?
I have ceph-common-10.2.6-0 (on CentOS 7.3.1611)
--
Deepak
>> On Mar 23, 2017, at 6:28 AM, John Spray wrote:
>>
>> On Wed, Mar 22, 2017 at 3:30 PM, Deepak Naidu wrote:
>> Hi John,
>>
>>
>>
>> I tried the below option for ceph-fuse & kernel mount. Below is wh
Hey cephers,
Just a reminder that the next Ceph Tech Talk will begin in
approximately 20 minutes. I hope you can all join us:
http://ceph.com/ceph-tech-talks/
--
Best Regards,
Patrick McGarry
Director Ceph Community || Red Hat
http://ceph.com || http://community.redhat.com
@scuttlemonkey |
Hello!
I have installed
ceph-deploy-1.5.36git.1479985814.c561890-6.6.noarch.rpm
on SLES11 SP4.
When I start ceph-deploy, I get an error:
ceph@ldcephadm:~/dlm-lve-cluster> ceph-deploy new ldcephmon1
Traceback (most recent call last):
File "/usr/bin/ceph-deploy", line 18, in
from ceph_deplo
On Wed, Mar 22, 2017 at 3:30 PM, Deepak Naidu wrote:
> Hi John,
>
>
>
> I tried the below option for ceph-fuse & kernel mount. Below is what I
> see/error.
>
>
>
> 1) When trying to use ceph-fuse, the mount command succeeds but I see
> parse error setting 'client_mds_namespace' to 'dataX'. N
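For reference, the invocations I would expect for selecting a non-default filesystem look roughly like this (a sketch; 'dataX', the monitor address and the mount points are examples, and support for these options depends on the client version):

# ceph-fuse, selecting the filesystem by name:
ceph-fuse --client_mds_namespace=dataX /mnt/dataX
# kernel client equivalent:
mount -t ceph mon1:6789:/ /mnt/dataX \
    -o name=admin,secretfile=/etc/ceph/admin.secret,mds_namespace=dataX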
On 03/23/2017 02:02 PM, nokia ceph wrote:
Hello Piotr,
We customize the Ceph code for our testing purposes; it's part of our R&D :)
Recompiling the source code creates 38 RPMs, and out of these I need to find
which one is the RPM that contains the source file I changed. That's
what I'm tr
Hello Piotr,
We customize the Ceph code for our testing purposes; it's part of our R&D
:)
Recompiling the source code creates 38 RPMs, and out of these I need to find
which one is the RPM that contains the source file I changed. That's
what I'm trying to figure out.
Thanks
On Thu, Mar 23, 201
On 03/23/2017 01:41 PM, nokia ceph wrote:
Hey brad,
Thanks for the info.
Yes, we know that these are test RPMs.
The idea behind my question is: if I make any change in the Ceph source
code, I then recompile it, and then I need to find which RPM is
mapped to that changed file. If I
Hey brad,
Thanks for the info.
Yes, we know that these are test RPMs.
The idea behind my question is: if I make any change in the Ceph source
code, I then recompile it, and then I need to find which RPM is
mapped to that changed file. If I find the exact RPM, then I apply that RPM
in o
Hi,
No, I did not enable the journaling feature, since we do not use mirroring.
On Thu, Mar 23, 2017 at 08:10:05PM +0800, Dongsheng Yang wrote:
> Did you enable the journaling feature?
>
> On 03/23/2017 07:44 PM, Christoph Adomeit wrote:
> >Hi Yang,
> >
> >I mean "any write" to this image.
> >
>
Did you enable the journaling feature?
On 03/23/2017 07:44 PM, Christoph Adomeit wrote:
Hi Yang,
I mean "any write" to this image.
I am sure we have a lot of no-longer-used RBD images in our pool, and I am
trying to identify them.
The mtime would be a good hint to show which images might be
Hi Yang,
I mean "any write" to this image.
I am sure we have a lot of no-longer-used RBD images in our pool, and I am
trying to identify them.
The mtime would be a good hint to show which images might be unused.
Christoph
On Thu, Mar 23, 2017 at 07:32:49PM +0800, Dongsheng Yang wrote:
> Hi C
On 03/23/2017 07:32 PM, Dongsheng Yang wrote:
Hi Christoph,
On 03/23/2017 07:16 PM, Christoph Adomeit wrote:
Hello List,
I am wondering whether there is, by now, an easy way in Ceph to find
more information about RBD images.
For example, I am interested in the modification time of an RBD im
On Thu, Mar 23, 2017 at 5:49 AM, Hung-Wei Chiu (邱宏瑋)
wrote:
> Hi,
>
> I use the latest code (master branch, updated 2017/03/22) to build Ceph with
> RDMA and use fio to test its IOPS/latency/throughput.
>
> In my environment, I setup 3 hosts and list the detail of each host below.
>
> OS: ubunt
Hi Christoph,
On 03/23/2017 07:16 PM, Christoph Adomeit wrote:
Hello List,
I am wondering whether there is, by now, an easy way in Ceph to find more
information about RBD images.
For example, I am interested in the modification time of an RBD image.
Do you mean some metadata changing? such as
Hello List,
I am wondering whether there is, by now, an easy way in Ceph to find more
information about RBD images.
For example, I am interested in the modification time of an RBD image.
I found some posts from 2015 saying that we have to go over all the objects of an
RBD image and find the newest
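That brute-force approach can be scripted; a rough sketch (pool and image names are examples, and it stats every object, so it will be slow on large images):

# Find the newest object mtime behind one RBD image:
pool=rbd
image=myimage                                  # example names
prefix=$(rbd info "$pool/$image" | awk '/block_name_prefix/ {print $2}')
rados -p "$pool" ls | grep "$prefix" \
    | xargs -n1 rados -p "$pool" stat          # each line includes the mtime
# ...then take the most recent mtime from the output.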
Hi,
I followed this link
https://www.howtoforge.com/tutorial/how-to-install-a-ceph-cluster-on-ubuntu-16-04/
to install my Ubuntu cluster.
I'm stuck at
*Install Ceph on All Nodes*. This command crashes my VM because of a lack of
resources:
*ceph-deploy install ceph-admin ceph-osd1 ceph-osd2 ceph-osd3 mon
Hi,
I use the latest code (master branch, updated 2017/03/22) to build Ceph with
RDMA, and use fio to test its IOPS/latency/throughput.
In my environment, I setup 3 hosts and list the detail of each host below.
OS: ubuntu 16.04
Storage: SSD * 4 (256G * 4)
Memory: 64GB.
NICs: two NICs, one (inte
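For reference, a typical fio invocation against librbd for this kind of test looks roughly like the following (only a sketch; the pool/image names are examples and not necessarily the exact job used here):

# fio through librbd, 4k random writes; the pool/image must exist first:
rbd create rbd/fio-test --size 10240
fio --name=rbd-4k-randwrite --ioengine=rbd --clientname=admin \
    --pool=rbd --rbdname=fio-test --rw=randwrite --bs=4k \
    --iodepth=32 --numjobs=1 --direct=1 --runtime=60 --time_based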
I think Greg (who appears to be a Ceph committer) basically said he was
interested in looking at it, if only you still had the pool that failed this way.
Why not try to reproduce it, and make a log of your procedure so he can
reproduce it too? What caused the slow requests... copy on write from
snapshot
> Op 22 maart 2017 om 18:05 schreef Patrick McGarry :
>
>
> Hey cephers,
>
> Just wanted to share that the new interactive metrics dashboard is now
> available for tire-kicking.
>
> https://metrics.ceph.com
>
Very nice!
> There are still a few data pipeline issues and other misc cleanup tha