Hi Brad,
You are correct: librados.so has the symbol; what I copied was the wrong file.
Now I can test the striper API with the previous C example and this C++
example:
http://mrbojangles3.github.io/ceph/systems/striping/alignment/2017/05/28/Ceph-Stripe/
Both are working, but I haven't got
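(In case it helps anyone else: since my libraries live under the non-standard
/rados_install/lib prefix, I run the test binaries with LD_LIBRARY_PATH set;
the binary name below is just an example.)

    LD_LIBRARY_PATH=/rados_install/lib:$LD_LIBRARY_PATH ./striper_test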
Hi all,
I'm new to Luminous. When I use ceph-volume create to add a new
filestore OSD, it tells me that the journal's header magic is not
good, but the journal device is a new LV. How do I make it write the new
OSD's header to the journal?
And it seems this error message will not affect the
That could be it. Every time it happens for me, it is indeed from a non-auth
MDS.
From: Yan, Zheng
Sent: Wednesday, 30 May 2018 11:25:59 AM
To: Linh Vu
Cc: Oliver Freyermuth; Ceph Users; Peter Wienemann
Subject: Re: [ceph-users] Ceph-fuse getting stuck with
On Wed, May 30, 2018 at 11:52 AM, Jialin Liu wrote:
> Thanks Brad,
> I ran nm on those .so files; it prints 'no symbols'.
OK, well you need to link to a library that exports that symbol (has
it defined in its Text section). I suspect you'll find it is defined
in libceph-common.so so try linking to
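Something along these lines should show which of the copied .so files actually
defines the symbol (the symbol name below is a placeholder for whatever your
linker reported as undefined); note that plain nm often prints 'no symbols' on
a stripped shared library, whereas nm -D reads the dynamic symbol table:

    for lib in /rados_install/lib/*.so*; do
        nm -D --defined-only "$lib" 2>/dev/null | grep -q 'the_missing_symbol' && echo "$lib"
    done

Then add the matching -L/rados_install/lib -l<name> to your link line.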
Thanks Brad,
I ran nm on those .so files; it prints 'no symbols'.
Then with ldd librados.so, I don't see libceph-common.so; instead:
> jialin@cori12: ldd librados.so
>     linux-vdso.so.1 (0x2aacf000)
>     libboost_thread-mt.so.1.53.0 => /rados_install/lib/libboost_thread-mt.so.1.53.0
It could be http://tracker.ceph.com/issues/24172
On Wed, May 30, 2018 at 9:01 AM, Linh Vu wrote:
> In my case, I have multiple active MDS (with directory pinning at the very
> top level), and there would be "Client xxx failing to respond to capability
> release" health warning every single time
On Wed, May 30, 2018 at 10:42 AM, Jialin Liu wrote:
> Hi,
> I'm trying to use the libradosstriper API, but I'm having some trouble
> linking with -lradosstriper. I copied only the `required' libraries from a
> pre-installed ceph (10.2.10) and put them under my local directory
> /rados_install/lib
In my case, I have multiple active MDS (with directory pinning at the very top
level), and there would be a "Client xxx failing to respond to capability
release" health warning every single time that happens.
From: ceph-users on behalf of Yan, Zheng
Sent:
Hi,
I'm trying to use the libradosstriper API, but I'm having some trouble
linking with -lradosstriper. I copied only the `required' libraries from a
pre-installed ceph (10.2.10), and put them under my local directories
/rados_install/lib and /rados_install/include, on a Linux machine.
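For reference, the link command I'm trying is roughly of this shape (the
source file name is just an example):

    g++ -std=c++11 striper_test.cc -o striper_test \
        -I/rados_install/include -L/rados_install/lib \
        -lradosstriper -lrados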
On Fri, May 25, 2018 at 5:36 PM Jesus Cea wrote:
> I have an Erasure Coded 8+2 pool with 8 PGs.
>
> Each PG is spread on 10 OSDs using Reed-Solomon (the Erasure Code).
>
> When I rebalance the cluster I see two PGs moving:
> "active+remapped+backfilling".
>
> A "pg dump" shows this:
>
> """
>
On Sat, May 26, 2018 at 11:51 AM Bryan Henderson
wrote:
> >> Suppose I lost all monitors in a ceph cluster in my laboratory. I have
> >> all OSDs intact. Is it possible to recover something from Ceph?
> >
> >Yes, there is. Using ceph-objectstore-tool you are able to rebuild the
> >MON database.
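For reference, the rough shape of that procedure as described in the Ceph
disaster-recovery docs (the paths below are only examples):

    ms=/tmp/mon-store; mkdir -p $ms
    # gather cluster map info from every intact OSD
    for osd in /var/lib/ceph/osd/ceph-*; do
        ceph-objectstore-tool --data-path $osd --op update-mon-db --mon-store-path $ms
    done
    # rebuild the monitor store from what was collected
    ceph-monstore-tool $ms rebuild -- --keyring /path/to/admin.keyring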
On Tue, May 29, 2018 at 3:59 AM Steffen Winther Sørensen
wrote:
> (ie. would Jewel be able to connect to both clusters)?
>
Yes; that should work without any issues
You could also update the Hammer cluster, although you'd need to go through
a few intermediate upgrades.
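For moving the images themselves, something along these lines should work from
a host that can reach both clusters (config file, pool and image names below
are just examples):

    rbd -c /etc/ceph/hammer.conf export rbd/myimage - | \
        rbd -c /etc/ceph/mimic.conf import - rbd/myimage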
Thank you all
My goal is to have an SSD-based Ceph (NVMe + SSD) cluster, so I need to
consider performance as well as reliability
(although I do realize that a performant cluster that breaks my VMware is
not ideal ;-)).
It appears that NFS is the safe way to do it, but will it be the bottleneck
For CephFS you can run the following on the server with the currently active
MDS:
ceph daemon mds.$(hostname) session ls
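If you have jq available, something like this pulls out just the client id and
the version each client reports (field names taken from the Luminous
session-ls output; they may differ slightly between releases):

    ceph daemon mds.$(hostname) session ls | \
        jq '.[] | {id: .id, version: .client_metadata.ceph_version}'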
Paul
2018-05-29 21:20 GMT+02:00 Reed Dier :
> Possibly helpful,
>
> If you are able to hit your ceph-mgr dashboard in a web browser, I find it
> possible to see a table of
Possibly helpful,
If you are able to hit your ceph-mgr dashboard in a web browser, you can see
a table of currently connected CephFS clients, hostnames, state, type
(userspace/kernel), and Ceph version.
Assuming that the link is persistent, for me the url is
On Tue, May 29, 2018, 11:00 AM Marc Roos wrote:
>
> I guess we will not get this ssl_private_key option unless we upgrade
> from Luminous?
>
>
> http://docs.ceph.com/docs/master/radosgw/frontends/
>
> That option is only for Beast. For civetweb you just feed it
ssl_certificate with a combined certificate and key file.
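A rough sketch of what that looks like in ceph.conf (section name and path are
just examples):

    [client.rgw.gateway-1]
    rgw_frontends = civetweb port=443s ssl_certificate=/etc/ceph/private/rgw-combined.pem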
Hi,
we use PetaSAN for our VMware cluster. It provides a web interface for
management and does clustered active-active iSCSI. For us, the easy
management was the reason to choose it, so we don't need to think about
how to configure iSCSI...
Regards,
Dennis
Am 28.05.2018 um 21:42 schrieb
I guess we will not get this ssl_private_key option unless we upgrade
from Luminous?
http://docs.ceph.com/docs/master/radosgw/frontends/
On Mon, May 28, 2018 at 3:42 PM, Steven Vacaroaia wrote:
> Hi,
>
> I need to design and build a storage platform that will be "consumed" mainly
> by VMWare
>
> CEPH is my first choice
>
> As far as I can see, there are 3 ways CEPH storage can be made available to
> VMWare
>
> 1. iSCSI
> 2.
Single or multiple active MDS? Were there "Client xxx failing to
respond to capability release" health warnings?
On Mon, May 28, 2018 at 10:38 PM, Oliver Freyermuth
wrote:
> Dear Cephalopodians,
>
> we just had a "lockup" of many MDS requests, and also trimming fell behind,
> for over 2 days.
>
Using the kernel driver to map RBDs on a host with OSDs is known to cause
system locks. The way to avoid this is to use rbd-nbd or rbd-fuse
instead of the kernel driver if you need to map the RBD on the same host as
any OSDs.
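A minimal example of the rbd-nbd route (pool and image names are placeholders):

    rbd-nbd map rbd/myimage          # prints a device such as /dev/nbd0
    mount /dev/nbd0 /mnt/myimage
    # and later:
    umount /mnt/myimage && rbd-nbd unmap /dev/nbd0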
On Tue, May 29, 2018 at 7:34 AM Joshua Collins
wrote:
> Hi
>
>
Hi
I've had a go at setting up a Ceph cluster but I've run into some issues.
I have 3 physical machines to set up a Ceph cluster, and two of these
machines will be part of an HA pair using Corosync and Pacemaker.
I keep running into filesystem lock issues on unmount when I have a
machine
This is not common unless you are using the kernel driver to map an RBD on
a host running OSDs. I've never had a problem unmounting an RBD that
didn't have open file handles. Note that Linux considers an FS to be
active if a terminal, screen, etc. is cd'd into the mounted directory. Do
you need
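To see what is keeping a mountpoint busy, something like this usually helps
(the path is just an example):

    fuser -vm /mnt/rbd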
People run RGW and CephFS together all the time. You use them for different use
cases: sometimes you need an S3 object store, and other times you need a
networked POSIX filesystem. If you don't use one or the other for
production, perhaps he was testing it. But in any case, you can rip one or
the other
List,
Got an old Hammer cluster and would like to migrate its data (rbd images)
to a newly installed Mimic cluster.
Would this be possible if I could upgrade the clients from Hammer to Jewel (ie.
would Jewel be able to connect to both clusters)?
/Steffen
On 24/05/18 19:21, Lionel Bouton wrote:
Has anyone successfully used Ceph with the S4600? If so, could you share whether
you used filestore or bluestore, which firmware was used, and
approximately how much data was written on the most-used SSDs?
I have 4 new OSD nodes which have 480GB S4600s (Firmware
https://github.com/ceph/ceph/pull/17535
It's not in Luminous, though.
Paul
2018-05-29 9:41 GMT+02:00 Linh Vu :
> Ah I remember that one, I still have it on my watch list on
> tracker.ceph.com
>
>
> Thanks
>
>
> Alternatively, is there a way to check on a client node what ceph features
>
We are using the iSCSI gateway in ceph-12.2 with vSphere 6.5 as the client.
It's an active/passive setup, per LUN.
We chose this solution because that's what we could get RH support for, and it
sticks to the "no SPOF" philosophy.
Performance is ~25-30% slower than krbd mounting the same rbd
> On 29 May 2018, at 10.08, Eugen Block wrote:
>
> Hi,
>
>> [root@n1 ~]# ceph osd pool rm mytestpool mytestpool --yes-i-really-mean-it
>> Error EPERM: WARNING: this will *PERMANENTLY DESTROY* all data stored
>
> if the command you posted is complete then you forgot one "really" in the
>
On Mon, May 28, 2018 at 1:50 PM, Fulvio Galeazzi
wrote:
> Hallo,
> I am using 12.2.4 and started using "ceph balancer". Indeed it does a
> great job, thanks!
>
> I have a few comments:
>
> - in the documentation http://docs.ceph.com/docs/master/mgr/balancer/
>I think there is an error,
I get the feeling this is not dependent on the exact Ceph version...
In our case, I know what the user has done (and he'll not do it again). He
misunderstood how our cluster works and started 1100 cluster jobs,
all entering the very same directory on CephFS (mounted via ceph-fuse on 38
Hi,
[root@n1 ~]# ceph osd pool rm mytestpool mytestpool --yes-i-really-mean-it
Error EPERM: WARNING: this will *PERMANENTLY DESTROY* all data stored
If the command you posted is complete, then you are missing one "really":
the option is --yes-i-really-really-mean-it.
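If the monitors also have pool deletion disabled (the default in recent
releases), you would additionally need something like:

    ceph tell mon.* injectargs '--mon-allow-pool-delete=true'
    ceph osd pool rm mytestpool mytestpool --yes-i-really-really-mean-it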
Regards
Quoting Steffen
>>Could you try patch https://github.com/ceph/ceph/pull/22240/files.
>>
>>The leakage of MMDSBeacon messages can explain your issue.
Thanks. I can't test it in production for now, and I can't reproduce it in my
test environment.
I'll wait for the next Luminous release to test it.
Thank you very
List,
I've just installed a new Mimic cluster and wonder why I can't remove an initial
test pool like this:
[root@n1 ~]# ceph -s
  cluster:
    id:     2284bf30-a27e-4543-af8f-b2726207762a
    health: HEALTH_OK

  services:
    mon: 3 daemons, quorum n1,n2,n3
    mgr: n1.ceph(active), standbys:
Ah I remember that one, I still have it on my watch list on tracker.ceph.com
Thanks
Alternatively, is there a way to check on a client node what ceph features
(jewel, luminous etc.) it has? In our case, it's all CephFS clients, and it's a
mix between ceph-fuse (which is Luminous 12.2.5)
As far as I know the status wrt this issue is still the one reported in
this thread:
http://lists.ceph.com/pipermail/ceph-users-ceph.com/2017-September/020585.html
See also:
http://tracker.ceph.com/issues/21315
Cheers, Massimo
On Tue, May 29, 2018 at 8:39 AM, Linh Vu wrote:
> Hi all,
>
>
>
Hi all,
I have a Luminous 12.2.4 cluster. This is what `ceph features` tells me:
...
"client": {
"group": {
"features": "0x7010fb86aa42ada",
"release": "jewel",
"num": 257
},
"group": {
"features":