Using the commands you provided, I actually found some differences:
On my CentOS VM:
```
# sudo find ./lib* -iname '*.so*' | xargs nm -AD 2>&1 | grep _ZTIN13PriorityCache8PriCacheE
./libceph-common.so:0221cc08 V _ZTIN13PriorityCache8PriCacheE
./libceph-common.so.0:0221cc08 V _ZTIN13PriorityCache8PriCacheE
```
Does it define _ZTIN13PriorityCache8PriCacheE? If it does, and all is
as you say, then it should not say that _ZTIN13PriorityCache8PriCacheE
is undefined. Does ldd show that it is finding the libraries you think
it is? Either it is finding a different version of that library
somewhere else or the
It's already in LD_LIBRARY_PATH, in the same directory as
libfio_ceph_objectstore.so:
$ ll lib/ | grep libceph-common
lrwxrwxrwx. 1 root root        19 Apr 17 11:15 libceph-common.so -> libceph-common.so.0
-rwxr-xr-x. 1 root root 211853400 Apr 17 11:15 libceph-common.so.0
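For anyone following along, a quick way to double-check the loader side is something like the following; the ./lib and ./bin paths are just the ones used earlier in this thread, adjust to your build directory:

```
# which libceph-common.so.0 does the engine actually bind to?
$ LD_LIBRARY_PATH=./lib ldd ./lib/libfio_ceph_objectstore.so | grep ceph-common

# are any symbols still unresolved after relocation? (-r makes ldd report them)
$ LD_LIBRARY_PATH=./lib ldd -r ./lib/libfio_ceph_objectstore.so 2>&1 | grep -i undefined
```

If the second command prints the PriorityCache symbol, the copy of libceph-common that the loader resolved is not the one that defines it.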
Best,
Can Zhang
On Wed, Apr 17, 2019 at 1:37 PM Can Zhang wrote:
>
> Thanks for your suggestions.
>
> I tried to build libfio_ceph_objectstore.so, but it fails to load:
>
> ```
> $ LD_LIBRARY_PATH=./lib ./bin/fio --enghelp=libfio_ceph_objectstore.so
>
> fio: engine libfio_ceph_objectstore.so not loadable
> IO eng
I fully rebuilt the libfio_ceph_objectstore file on my Ubuntu VM.
Best,
Can Zhang
On Wed, Apr 17, 2019 at 10:39 PM Igor Fedotov wrote:
>
> Or try full rebuild?
>
> On 4/17/2019 5:37 PM, Igor Fedotov wrote:
> > Could you please check if libfio_ceph_objectstore.so has been rebuilt
> > with your last build?
I have deployed, expanded and upgraded multiple Ceph clusters using
ceph-ansible. Works great.
What information are you looking for?
--
Sinan
> On 17 Apr 2019 at 16:24, Francois Lafont wrote:
>
> Hi,
>
> +1 for ceph-ansible too. ;)
>
> --
> François (flaf)
>
The man page for gwcli indicates:
"Disks exported through the gateways use ALUA attributes to provide
ActiveOptimised and ActiveNonOptimised access to the rbd images. Each disk is
assigned a primary owner at creation/import time"
I am trying to determine whether I can explicitly set which gat
Or try full rebuild?
On 4/17/2019 5:37 PM, Igor Fedotov wrote:
Could you please check if libfio_ceph_objectstore.so has been rebuilt
with your last build?
On 4/17/2019 6:37 AM, Can Zhang wrote:
Thanks for your suggestions.
I tried to build libfio_ceph_objectstore.so, but it fails to load:
Could you please check if libfio_ceph_objectstore.so has been rebuilt
with your last build?
On 4/17/2019 6:37 AM, Can Zhang wrote:
Thanks for your suggestions.
I tried to build libfio_ceph_objectstore.so, but it fails to load:
```
$ LD_LIBRARY_PATH=./lib ./bin/fio --enghelp=libfio_ceph_objectstore.so
```
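If only part of the tree was rebuilt, it may also be worth rebuilding the engine target explicitly. A rough sketch; the target name fio_ceph_objectstore is an assumption here, so confirm it first:

```
$ cd build
$ make help | grep -i fio          # confirm the actual target name
$ make fio_ceph_objectstore        # assumed target; should refresh lib/libfio_ceph_objectstore.so
$ ls -l lib/libfio_ceph_objectstore.so lib/libceph-common.so.0
```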
On Wed, 17 Apr 2019 16:08:34 +0200 Lars Täuber wrote:
> Wed, 17 Apr 2019 20:01:28 +0900
> Christian Balzer ==> Ceph Users :
> > On Wed, 17 Apr 2019 11:22:08 +0200 Lars Täuber wrote:
> >
> > > Wed, 17 Apr 2019 10:47:32 +0200
> > > Paul Emmerich ==> Lars Täuber
> > > :
> > > > The standard argument that it helps prevent recovery traffic from clogging the network and impacting client traffic is misleading:
Hi,
+1 for ceph-ansible too. ;)
--
François (flaf)
Wed, 17 Apr 2019 20:01:28 +0900
Christian Balzer ==> Ceph Users :
> On Wed, 17 Apr 2019 11:22:08 +0200 Lars Täuber wrote:
>
> > Wed, 17 Apr 2019 10:47:32 +0200
> > Paul Emmerich ==> Lars Täuber :
> > > The standard argument that it helps prevent recovery traffic from
> > > clogging the network and impacting client traffic is misleading:
Just for the record:
After recreating the config from scratch (after the upgrade to
ceph-iscsi-3.0) the problem went away. I can use the gateway without
client.admin access now.
thanks
matthias
On 01.04.19 at 17:05, Jason Dillaman wrote:
What happens when you run "rados -p rbd lock list ga
It should not be best effort. As written, exactly
rgw_usage_log_flush_threshold outstanding log entries will be
buffered. The default value for this parameter is 1024, which is
probably not high for a sustained workload, but you could experiment
with reducing it.
Matt
On Fri, Apr 12, 2019 at 11
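For example, a lower threshold can be set in ceph.conf on the radosgw host and the daemon restarted; the section name and the value 128 below are purely illustrative:

```
[client.rgw.myhost]                      # adjust to your rgw instance name
rgw_enable_usage_log = true
rgw_usage_log_flush_threshold = 128      # default is 1024; lower means more frequent flushes
```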
This is just a followup for those who will encounter similar problem.
Originally this was a pool with only 4 nodes, size 3, min_size 2, and a big
node/OSD weight difference (node weights 10, 2, 4, 4; OSD weights from 2.5
down to 0.5). Detailed CRUSH map below (only 3 nodes left, issue persisted
at this point) [1]
On 4/17/19 4:24 AM, John Molefe wrote:
Hi everyone,
I currently have a ceph cluster running on SUSE and I have an expansion
project that I will be starting around June.
Has anybody here deployed (from scratch) or expanded their ceph cluster
via ansible?? I would appreciate it if you'd sha
Hi all,
I have a Nautilus Ceph cluster up and running with radosgw
in a zonegroup. I'm using the Beast web frontend
(the default in Nautilus). Everything seems to work fine,
but in the radosgw log I have this message:
Apr 17 14:02:56 rgw-m-1 ceph-m-rgw.rgw-m-1.rgw0[888]: 2019-04-17 14:02:56.410
7fe659803700
Someone with access to a mon disk can access your whole cluster: it
contains the mon keyring, which has full admin capabilities.
And yes, it also has all the encryption keys for the OSDs stored in it...
Usually disks running mons are just destroyed instead of RMA'd if they
fail on an encrypted cluster.
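To see what is actually kept there, something along these lines should work; the key naming in the comment is how ceph-volume typically stores LUKS secrets, but the exact layout may differ by deployment:

```
# list keys in the mon config-key store; dmcrypt/LUKS secrets usually live
# under names like dm-crypt/osd/<osd-fsid>/luks
$ ceph config-key ls | grep -i dm-crypt
```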
Hello,
after reading the documentation[1], I'm uncertain whether the OSD
encryption keys are stored in a safe way. If I understand correctly,
they are kept on the monitor(s) but not necessarily with extra
protection.
In other words, is the default setup safe against the situation where
one disk g
On Mon, 8 Apr 2019 at 10:33, Iain Buclaw wrote:
>
> On Mon, 8 Apr 2019 at 05:01, Matt Benjamin wrote:
> >
> > Hi Christian,
> >
> > Dynamic bucket-index sharding for multi-site setups is being worked
> > on, and will land in the N release cycle.
> >
>
> What about removing orphaned shards on the
Hi Matt,
On 4/17/19 1:08 AM, Matt Benjamin wrote:
Why is using an explicit unix socket problematic for you? For what it
does, that decision has always seemed sensible.
In fact, I don't understand why the "ops" logs are handled differently
from the logs of the radosgw process itself. Personally,
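For reference, a minimal sketch of sending the ops log to a unix socket via ceph.conf; the socket path and section name are examples, not a recommendation:

```
[client.rgw.myhost]                                   # adjust to your rgw instance name
rgw_enable_ops_log = true
rgw_ops_log_socket_path = /var/run/ceph/rgw-ops.sock  # example path
```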
On Wed, 17 Apr 2019 11:22:08 +0200 Lars Täuber wrote:
> Wed, 17 Apr 2019 10:47:32 +0200
> Paul Emmerich ==> Lars Täuber :
> > The standard argument that it helps prevent recovery traffic from
> > clogging the network and impacting client traffic is misleading:
>
> What do you mean by "it"?
Then I tried to build libfio_ceph_objectstore.so on an Ubuntu 18.04 VM,
and it seems to be working now.
Best,
Can Zhang
On Wed, Apr 17, 2019 at 11:37 AM Can Zhang wrote:
>
> Thanks for your suggestions.
>
> I tried to build libfio_ceph_objectstore.so, but it fails to load:
>
> ```
> $ LD_LIBRARY_PATH=./lib ./bin/fio --enghelp=libfio_ceph_objectstore.so
Wed, 17 Apr 2019 10:47:32 +0200
Paul Emmerich ==> Lars Täuber :
> The standard argument that it helps prevent recovery traffic from
> clogging the network and impacting client traffic is misleading:
What do you mean by "it"? I don't know the standard argument.
Do you mean separating the networks?
Quoting Lars Täuber (taeu...@bbaw.de):
> > > This is something I was told to do, because a reconstruction of failed
> > > OSDs/disks would have a heavy impact on the backend network.
> >
> > Opinions vary on running "public" only versus "public" / "backend".
> > Having a separate "backend" netwo
On Wed, Apr 17, 2019 at 7:56 AM Lars Täuber wrote:
>
> Thanks Paul for the judgement.
>
> Tue, 16 Apr 2019 10:13:03 +0200
> Paul Emmerich ==> Lars Täuber :
> > Seems in line with what I'd expect for the hardware.
> >
> > Your hardware seems to be way overspecced, you'd be fine with half the
> >
25 Gbit/s doesn't have a significant latency advantage over 10 Gbit/s.
For reference: a point-to-point 10 Gbit/s fiber link takes around 300
ns of processing for rx+tx on standard Intel X520 NICs (measured it),
so not much to save here.
Then there's serialization latency, which changes from 0.8 ns/byte at
10 Gbit/s to 0.32 ns/byte at 25 Gbit/s.
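Rough numbers for the serialization part, assuming a 4 KiB payload and ignoring protocol headers:

```
# 10 Gbit/s: 8 bit / 10e9 bit/s = 0.8  ns per byte -> 4096 B ≈ 3.3 µs
# 25 Gbit/s: 8 bit / 25e9 bit/s = 0.32 ns per byte -> 4096 B ≈ 1.3 µs
```

So for a typical small write the difference is a couple of microseconds, which is small compared to the latency of the OSD software stack itself.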
On Wed, 17 Apr 2019 10:39:10 +0200 Lars Täuber wrote:
> Wed, 17 Apr 2019 09:52:29 +0200
> Stefan Kooman ==> Lars Täuber :
> > Quoting Lars Täuber (taeu...@bbaw.de):
> > > > I'd probably only use the 25G network for both networks instead of
> > > > using both. Splitting the network usually doesn't help.
The standard argument that it helps prevent recovery traffic from
clogging the network and impacting client traffic is misleading:
* write client traffic relies on the backend network for replication
operations: your client (write) traffic is impacted anyway if the
backend network is full
* y
Wed, 17 Apr 2019 09:52:29 +0200
Stefan Kooman ==> Lars Täuber :
> Quoting Lars Täuber (taeu...@bbaw.de):
> > > I'd probably only use the 25G network for both networks instead of
> > > using both. Splitting the network usually doesn't help.
> >
> > This is something I was told to do, because a
Hi everyone,
I currently have a ceph cluster running on SUSE and I have an expansion project
that I will be starting around June.
Has anybody here deployed (from scratch) or expanded their ceph cluster via
ansible?? I would appreciate it if you'd share your experiences, challenges,
topolog
Quoting Lars Täuber (taeu...@bbaw.de):
> > I'd probably only use the 25G network for both networks instead of
> > using both. Splitting the network usually doesn't help.
>
> This is something I was told to do, because a reconstruction of failed
> OSDs/disks would have a heavy impact on the backend network.
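For what it's worth, running everything over the single 25G network just means defining only public_network; a minimal ceph.conf sketch, with an example subnet:

```
[global]
public_network = 192.168.10.0/24     # example subnet
# no cluster_network set: replication and recovery traffic share the public network
```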