Re: [ceph-users] RBD features(kernel client) with kernel version

2017-09-26 Thread Jason Dillaman
I have to admit that it's probably buried deep within the backlog [1].
Immediate-term alternative solutions for presenting an RBD-backed block
device that supports journaling are available via rbd-nbd (which creates
/dev/nbdX devices) and via LIO's tcmu-runner with a loopback
target (which creates /dev/sdX devices). There is also a conceptual plan for
tcmu-runner to provide a libtcmu v2 library that could facilitate the
development of something like rbd-tcmu at some point in the future.
These solutions utilize kernel pass-through to userspace so there is a
performance hit -- but journaling would probably already be the larger
bottleneck.

[1] https://trello.com/c/9zeeZsIO
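For anyone who wants to try the rbd-nbd route in the meantime, a minimal
sketch (assuming the rbd-nbd package is installed and an image rbd/myimage
already exists; all names are placeholders) looks roughly like:

  # enable journaling on the image (requires exclusive-lock, which is on
  # by default for newly created images)
  rbd feature enable rbd/myimage journaling
  # map it through the NBD kernel module; prints the /dev/nbdX it creates
  rbd-nbd map rbd/myimage
  # ... use the block device, then unmap it when done ...
  rbd-nbd unmap /dev/nbd0

The LIO/tcmu-runner loopback variant needs targetcli configuration instead
and is not shown here.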

On Tue, Sep 26, 2017 at 10:03 AM, Maged Mokhtar  wrote:
> On 2017-09-25 14:29, Ilya Dryomov wrote:
>
> On Sat, Sep 23, 2017 at 12:07 AM, Muminul Islam Russell
>  wrote:
>
> Hi Ilya,
>
> Hope you are doing great.
> Sorry for bugging you. I did not find enough resources for my question.  I
> would be really helped if you could reply me. My questions are in red
> colour.
>
>  - layering: layering support:
> Kernel: 3.10 and plus, right?
>
>
> Yes.
>
>  - striping: striping v2 support:
> What kernel is supporting this feature?
>
>
> Only the default striping v2 pattern (i.e. stripe unit == object size
> and stripe count == 1) is supported.
>
> - exclusive-lock: exclusive locking support:
> It's supposed to be 4.9. Right?
>
>
> Yes.
>
>
>
> rest the the features below is under development? or any feature is
> available in any latest kernel?
>   - object-map: object map support (requires exclusive-lock):
>   - fast-diff: fast diff calculations (requires object-map):
>   - deep-flatten: snapshot flatten support:
>   - journaling: journaled IO support (requires exclusive-lock):
>
>
> The former, none of these are available in latest kernels.
>
> A separate data pool feature (rbd create --data-pool ) is
> supported since 4.11.
>
> Thanks,
>
> Ilya
> ___
> ceph-users mailing list
> ceph-users@lists.ceph.com
> http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com
>
>
>
> Hello Ilya,
>
> Any rough estimate when rbd journaling will be added to the kernel rbd ? I
> realize it is a lot of work..
>
> Cheers /Maged
>
>
>
> ___
> ceph-users mailing list
> ceph-users@lists.ceph.com
> http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com
>



-- 
Jason


Re: [ceph-users] inconsistent pg will not repair

2017-09-26 Thread David Zafman


The following is based on the discussion in: 
http://tracker.ceph.com/issues/21388


--

There is a particular scenario which, if identified, can be repaired
manually. In this case the automatic repair rejects all copies because
none of them match the selected_object_info, thus setting data_digest_mismatch_oi
on all shards.


Doing the following should produce list-inconsistent-obj information:

$ ceph pg deep-scrub 1.0
(Wait for scrub to finish)
$ rados list-inconsistent-obj 1.0 --format=json-pretty

Requirements:

1. data_digest_mismatch_oi is set on all shards, making the object unrepairable automatically
2. union_shard_errors has only data_digest_mismatch_oi listed, no other
   issues involved
3. Object "errors" is empty { "inconsistent": [ { ..."errors": []}
   ] } which means the data_digest value is the same on all shards
   (0x2d4a11c2 in the example below)
4. No down OSDs which might have different/correct data
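
A quick way to eyeball requirements 1-3 straight from that JSON is a small
jq filter along these lines (a sketch, assuming jq is installed and using
pg 1.0 as in the example above):

  $ rados list-inconsistent-obj 1.0 --format=json | \
      jq '.inconsistents[] | {union: .union_shard_errors,
                              object_errors: .errors,
                              digests: [.shards[].data_digest]}'

If union shows only "data_digest_mismatch_oi", object_errors is empty and
every entry in digests is identical, the scenario described here applies.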

To fix it, use rados get/put followed by a deep-scrub to clear the 
"inconsistent" pg state.  Use the -b option with a value smaller than the 
file size so that the read doesn't compare the digest and return EIO.


1. rados -p pool -b 10240 get mytestobject tempfile
2. rados -p pool put mytestobject tempfile
3. ceph pg deep-scrub 1.0


Here is an example list-inconsistent-obj output of what this scenario 
looks like:


{ "inconsistents": [ { "shards": [ { "data_digest": "0x2d4a11c2", 
"omap_digest": "0xf5fba2c6", "size": 143456, "errors": [ 
"data_digest_mismatch_oi" ], "osd": 0, "primary": true }, { 
"data_digest": "0x2d4a11c2", "omap_digest": "0xf5fba2c6", "size": 
143456, "errors": [ "data_digest_mismatch_oi" ], "osd": 1, "primary": 
false }, { "data_digest": "0x2d4a11c2", "omap_digest": "0xf5fba2c6", 
"size": 143456, "errors": [ "data_digest_mismatch_oi" ], "osd": 2, 
"primary": false } ], "selected_object_info": "3:ce3f1d6a::: 
mytestobject:head(47'54 osd.0.0:53 dirty|omap|data_digest|omap_digest s 
143456 uv 3 dd 2ddbf8f5 od f5fba2c6 alloc_hint [0 0 0])", 
"union_shard_errors": [ "data_digest_mismatch_oi" ], "errors": [ ], 
"object": { "version": 3, "snap": "head", "locator": "", "nspace": "", 
"name": "mytestobject" } } ], "epoch": 103443 }



David

On 9/26/17 10:55 AM, Gregory Farnum wrote:

[ Re-send due to HTML email part]

IIRC, this is because the object info and the actual object disagree
about what the checksum should be. I don't know the best way to fix it
off-hand but it's been discussed on the list (try searching for email
threads involving David Zafman).
-Greg

On Tue, Sep 26, 2017 at 7:03 AM, Wyllys Ingersoll
 wrote:

I have an inconsistent PG that I cannot seem to get to repair cleanly.
I can find the 3 objects in question and they all have the same size
and md5sum, but yet whenever I repair it, it is reported as an error
"failed to pick suitable auth object".

Any suggestions for fixing or working around this issue to resolve the
inconsistency?

Ceph 10.2.9
Ubuntu 16.04.2


2017-09-26 09:54:03.123938 7fd31048e700 -1 log_channel(cluster) log
[ERR] : 1.5b8 shard 7: soid 1:1daab06b:::14d6662.:head
data_digest 0x923deb74 != data_digest 0x23f10be8 from auth oi
1:1daab06b:::14d6662.:head(204442'221517
client.5654254.1:2371279 dirty|data_digest|omap_digest s 1421644 uv
203993 dd 23f10be8 od  alloc_hint [0 0])
2017-09-26 09:54:03.123944 7fd31048e700  0 log_channel(cluster) do_log
log to syslog
2017-09-26 09:54:03.123999 7fd31048e700 -1 log_channel(cluster) log
[ERR] : 1.5b8 shard 26: soid 1:1daab06b:::14d6662.:head
data_digest 0x923deb74 != data_digest 0x23f10be8 from auth oi
1:1daab06b:::14d6662.:head(204442'221517
client.5654254.1:2371279 dirty|data_digest|omap_digest s 1421644 uv
203993 dd 23f10be8 od  alloc_hint [0 0])
2017-09-26 09:54:03.124005 7fd31048e700  0 log_channel(cluster) do_log
log to syslog
2017-09-26 09:54:03.124013 7fd31048e700 -1 log_channel(cluster) log
[ERR] : 1.5b8 shard 44: soid 1:1daab06b:::14d6662.:head
data_digest 0x923deb74 != data_digest 0x23f10be8 from auth oi
1:1daab06b:::14d6662.:head(204442'221517
client.5654254.1:2371279 dirty|data_digest|omap_digest s 1421644 uv
203993 dd 23f10be8 od  alloc_hint [0 0])
2017-09-26 09:54:03.124015 7fd31048e700  0 log_channel(cluster) do_log
log to syslog
2017-09-26 09:54:03.124022 7fd31048e700 -1 log_channel(cluster) log
[ERR] : 1.5b8 soid 1:1daab06b:::14d6662.:head: failed to
pick suitable auth object
2017-09-26 09:54:03.124023 7fd31048e700  0 log_channel(cluster) do_log
log to syslog
2017-09-26 09:56:14.461015 7fd31048e700 -1 log_channel(cluster) log
[ERR] : 1.5b8 deep-scrub 3 errors
2017-09-26 09:56:14.461021 7fd31048e700  0 log_channel(cluster) do_log
log to syslog

[ceph-users] osd max scrubs not honored?

2017-09-26 Thread J David
With “osd max scrubs” set to 1 in ceph.conf, which I believe is also
the default, at almost all times, there are 2-3 deep scrubs running.
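
For reference, the effective value on a running OSD (osd.0 used here as an
example, assuming access to its admin socket) can be double-checked with
something like:

  $ ceph daemon osd.0 config get osd_max_scrubs
  $ ceph tell osd.* injectargs '--osd_max_scrubs 1'   # re-assert it cluster-wide

which shows, and sets, the osd_max_scrubs limit the scrub scheduler is
supposed to honor.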

3 simultaneous deep scrubs is enough to cause a constant stream of:

mon.ceph1 [WRN] Health check update: 69 slow requests are blocked > 32
sec (REQUEST_SLOW)

This seems to correspond with all three deep scrubs hitting the same
OSD at the same time, starving out all other I/O requests for that
OSD.  But it can happen less frequently and less severely with two or
even one deep scrub running.  Nonetheless, consumers of the cluster
are not thrilled with regular instances of 30-60 second disk I/Os.

The cluster is five nodes, 15 OSDs, and there is one pool with 512
placement groups.  The cluster is running:

ceph version 12.2.0 (32ce2a3ae5239ee33d6150705cdb24d43bab910c) luminous (rc)

All of the OSDs are bluestore, with HDD storage and SSD block.db.

Even setting “osd deep scrub interval = 1843200” hasn’t resolved this
issue, though it seems to get the number down from 3 to 2, which at
least cuts down on the frequency of requests stalling out.  With 512
pgs, that should mean that one pg gets deep-scrubbed per hour, and it
seems like a deep-scrub takes about 20 minutes.  So what should be
happening is that 1/3rd of the time there should be one deep scrub,
and 2/3rds of the time there shouldn’t be any.  Yet instead we have
2-3 deep scrubs running at all times.

Looking at “ceph pg dump” shows that about 7 deep scrubs get launched per hour:

$ sudo ceph pg dump | fgrep active | awk '{print $23" "$24" "$1}' |
fgrep 2017-09-26 | sort -rn | head -22
dumped all
2017-09-26 16:42:46.781761 0.181
2017-09-26 16:41:40.056816 0.59
2017-09-26 16:39:26.216566 0.9e
2017-09-26 16:26:43.379806 0.19f
2017-09-26 16:24:16.321075 0.60
2017-09-26 16:08:36.095040 0.134
2017-09-26 16:03:33.478330 0.b5
2017-09-26 15:55:14.205885 0.1e2
2017-09-26 15:54:31.413481 0.98
2017-09-26 15:45:58.329782 0.71
2017-09-26 15:34:51.777681 0.1e5
2017-09-26 15:32:49.669298 0.c7
2017-09-26 15:01:48.590645 0.1f
2017-09-26 15:01:00.082014 0.199
2017-09-26 14:45:52.893951 0.d9
2017-09-26 14:43:39.870689 0.140
2017-09-26 14:28:56.217892 0.fc
2017-09-26 14:28:49.665678 0.e3
2017-09-26 14:11:04.718698 0.1d6
2017-09-26 14:09:44.975028 0.72
2017-09-26 14:06:17.945012 0.8a
2017-09-26 13:54:44.199792 0.ec

What’s going on here?

Why isn’t the limit on scrubs being honored?

It would also be great if scrub I/O were surfaced in “ceph status” the
way recovery I/O is, especially since it can have such a significant
impact on client operations.

Thanks!


Re: [ceph-users] Access to rbd with a user key

2017-09-26 Thread Gregory Farnum
Keep in mind you can also use prefix-based cephx caps. Those were set up
so you can give a keyring access to specific RBD images (although you
can’t do live updates on what the client can access without making it
reconnect).
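
A rough sketch of what such caps can look like (all names and the image id
are placeholders; the object_prefix values must match the prefixes of the
image you want to expose, which "rbd info" reports as block_name_prefix):

  ceph auth get-or-create client.guest1 mon 'profile rbd' \
    osd 'allow rx pool=rbd object_prefix rbd_id.myimage,
         allow rwx pool=rbd object_prefix rbd_header.<image-id>,
         allow rwx pool=rbd object_prefix rbd_data.<image-id>'

The exact set of prefixes a client needs can vary with the image format and
feature set, so treat this as a starting point rather than a recipe.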
On Tue, Sep 26, 2017 at 7:44 AM Jason Dillaman  wrote:

> On Tue, Sep 26, 2017 at 9:36 AM, Yoann Moulin 
> wrote:
> >
> >>> ok, I don't know where I read the -o option to write the key but the
> file was empty I do a ">" and seems to work to list or create rbd now.
> >>>
> >>> and for what I have tested then, the good syntax is « mon 'profile
> rbd' osd 'profile rbd pool=rbd' »
> >>>
>  In the case we give access to those rbd inside the container, how I
> can be sure users in each container do not have access to others rbd ? Is
>  the namespace good to isolate each user ?
> >>>
> >>> The question about namespace is still open, if I have a namespace in
> the osd caps, I can't create rbd volume. How I can isolate each client to
> >>> only his own volumes ?
> >>
> >> Unfortunately, RBD doesn't currently support namespaces, but it's on
> >> our backlog.
> >
> > So if I want to separate data between each container, I need to create a
> pool per user (one user can have multiple containers).
>
> Definitely don't want to create a pool per user assuming you have more
> than a handful of users. Usually the higher level container management
> system handles the user separation since the end-user cannot directly
> access the Ceph storage system and instead the RBD image is mapped
> into the container. That's why RBD support for namespaces has been
> low-priority since there hasn't been a lot of end-user demand.
>
> > I'm gonna give a look to cephfs, it seems possible to allow access only
> to a subdirectory per user, could you confirm it ?
>
> Yes, I believe that is correct.
>
> > Thanks,
> >
> > Best regards,
> >
> > --
> > Yoann Moulin
> > EPFL IC-IT
> > ___
> > ceph-users mailing list
> > ceph-users@lists.ceph.com
> > http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com
>
>
>
> --
> Jason
> ___
> ceph-users mailing list
> ceph-users@lists.ceph.com
> http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com
>


Re: [ceph-users] osd crashes with large object size (>10GB) in luminous Rados

2017-09-26 Thread Alexander Kushnirenko
Nick,

Thanks, I will look into the latest bareos version.  They did mention
libradosstriper on github.
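
(A note mostly to myself: the striper interface can also be exercised from
the command line, which should make it easy to test how striped objects
behave before touching bareos. This is only a sketch, assuming a pool named
"backup" and a rados CLI built with libradosstriper support:)

  rados -p backup --striper put backup-volume-0001 /path/to/volume.file
  rados -p backup --striper stat backup-volume-0001
  rados -p backup --striper get backup-volume-0001 /tmp/restore.file

With --striper the data is spread over many RADOS objects of at most the
configured object size instead of a single huge object.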

There is another question.  On jewel I have 25GB size objects.  Once I
upgrade to luminous those objects will be "out of bounds".
1. Will the OSDs start, and will I be able to read them?
2. Will they be chopped into smaller pieces automatically, or do I need
to rados get them and put them back?

Thank you,
Alexander



On Tue, Sep 26, 2017 at 4:29 PM, Nick Fisk  wrote:

> Bareos needs to be re-written to use libradosstriper or it should
> internally shard the data across multiple objects. Objects shouldn’t be
> stored as large as that and performance will also suffer.
>
>
>
> *From:* ceph-users [mailto:ceph-users-boun...@lists.ceph.com] *On Behalf
> Of *Alexander Kushnirenko
> *Sent:* 26 September 2017 13:50
> *To:* ceph-users@lists.ceph.com
> *Subject:* [ceph-users] osd crashes with large object size (>10GB) in
> luminos Rados
>
>
>
> Hello,
>
>
>
> We successfully use rados to store backup volumes in jewel version of
> CEPH. Typical volume size is 25-50GB.  Backup software (bareos) use Rados
> objects as backup volumes and it works fine.  Recently we tried luminous
> for the same purpose.
>
>
>
> In luminous developers reduced osd_max_object_size from 100G to 128M.  As
> I understood for the performance reasons.  But it broke down interaction
> with bareos backup software.  You can reverse osd_max_object_size to 100G,
> but then the OSD start to crash once you start to put objects of about 4GB
> in size (4,294,951,051).
>
>
>
> Any suggestion how to approach this problem?
>
>
>
> Alexander.
>
>
>
> Sep 26 15:12:58 ceph02 ceph-osd[1417]: 
> /build/ceph-12.2.0/src/os/bluestore/BlueStore.cc:
> In function 'void BlueStore::_txc_add_transaction(BlueStore::TransContext*,
> ObjectStore::Transaction*)' thread 7f04ac2f9700 time 2017-09-26
> 15:12:58.230268
>
> Sep 26 15:12:58 ceph02 ceph-osd[1417]: 
> /build/ceph-12.2.0/src/os/bluestore/BlueStore.cc:
> 9282: FAILED assert(0 == "unexpected error")
>
> Sep 26 15:12:58 ceph02 ceph-osd[1417]: 2017-09-26 15:12:58.229837
> 7f04ac2f9700 -1 bluestore(/var/lib/ceph/osd/ceph-0) _txc_add_transaction
> error (7) Argument list too long not handled on operation 10 (op 1,
> counting from 0)
>
> Sep 26 15:12:58 ceph02 ceph-osd[1417]: 2017-09-26 15:12:58.229869
> 7f04ac2f9700 -1 bluestore(/var/lib/ceph/osd/ceph-0) unexpected error code
>
> Sep 26 15:12:58 ceph02 ceph-osd[1417]:  ceph version 12.2.0
> (32ce2a3ae5239ee33d6150705cdb24d43bab910c) luminous (rc)
>
> Sep 26 15:12:58 ceph02 ceph-osd[1417]:  1: (ceph::__ceph_assert_fail(char
> const*, char const*, int, char const*)+0x102) [0x563c7b5f83a2]
>
> Sep 26 15:12:58 ceph02 ceph-osd[1417]:  2: 
> (BlueStore::_txc_add_transaction(BlueStore::TransContext*,
> ObjectStore::Transaction*)+0x15fa) [0x563c7b4ac2ba]
>
> Sep 26 15:12:58 ceph02 ceph-osd[1417]:  3: 
> (BlueStore::queue_transactions(ObjectStore::Sequencer*,
> std::vector
> >&, boost::intrusive_ptr, ThreadPool::TPHandle*)+0x536)
> [0x563c7b4ad916]
>
> Sep 26 15:12:58 ceph02 ceph-osd[1417]:  4: (PrimaryLogPG::queue_transacti
> ons(std::vector std::allocator
> >&, boost::intrusive_ptr)+0x66) [0x563c7b1d17f6]
>
> Sep 26 15:12:58 ceph02 ceph-osd[1417]:  5: 
> (ReplicatedBackend::submit_transaction(hobject_t
> const&, object_stat_sum_t const&, eversion_t const&,
> std::unique_ptr >&&,
> eversion_t const&, eversion_t const&, std::vector std::allocator > const&, 
> boost::optional&,
> Context*, Context*, Context*, unsigned long, osd_reqid_t,
> boost::intrusive_ptr)+0xcbf) [0x563c7b30436f]
>
> Sep 26 15:12:58 ceph02 ceph-osd[1417]:  6: 
> (PrimaryLogPG::issue_repop(PrimaryLogPG::RepGather*,
> PrimaryLogPG::OpContext*)+0x9fa) [0x563c7b16d68a]
>
> Sep 26 15:12:58 ceph02 ceph-osd[1417]:  7: 
> (PrimaryLogPG::execute_ctx(PrimaryLogPG::OpContext*)+0x131d)
> [0x563c7b1b7a5d]
>
> Sep 26 15:12:58 ceph02 ceph-osd[1417]:  8: (PrimaryLogPG::do_op(boost::in
> trusive_ptr&)+0x2ece) [0x563c7b1bb26e]
>
> Sep 26 15:12:58 ceph02 ceph-osd[1417]:  9: 
> (PrimaryLogPG::do_request(boost::intrusive_ptr&,
> ThreadPool::TPHandle&)+0xea6) [0x563c7b175446]
>
> Sep 26 15:12:58 ceph02 ceph-osd[1417]:  10: 
> (OSD::dequeue_op(boost::intrusive_ptr,
> boost::intrusive_ptr, ThreadPool::TPHandle&)+0x3ab)
> [0x563c7aff919b]
>
> Sep 26 15:12:58 ceph02 ceph-osd[1417]:  11: (PGQueueable::RunVis::operator
> ()(boost::intrusive_ptr const&)+0x5a) [0x563c7b29154a]
>
> Sep 26 15:12:58 ceph02 ceph-osd[1417]:  12: 
> (OSD::ShardedOpWQ::_process(unsigned
> int, ceph::heartbeat_handle_d*)+0x103d) [0x563c7b01fd9d]
>
> Sep 26 15:12:58 ceph02 ceph-osd[1417]:  13: 
> (ShardedThreadPool::shardedthreadpool_worker(unsigned
> int)+0x8ef) [0x563c7b5fd20f]
>
> Sep 26 15:12:58 ceph02 ceph-osd[1417]:  14: 
> (ShardedThreadPool::WorkThreadSharded::entry()+0x10)
> [0x563c7b600510]
>
> Sep 26 15:12:58 ceph02 ceph-osd[1417]:  15: (()+0x7494) [0x7f04c56e2494]
>
> Sep 26 15:12:58 ceph02 ceph-osd[1417]:  16: (clone()+0x3f) [0x7f04c4769aff]
>
>
>
>

Re: [ceph-users] Access to rbd with a user key

2017-09-26 Thread Jason Dillaman
On Tue, Sep 26, 2017 at 9:36 AM, Yoann Moulin  wrote:
>
>>> ok, I don't know where I read the -o option to write the key but the file 
>>> was empty I do a ">" and seems to work to list or create rbd now.
>>>
>>> and for what I have tested then, the good syntax is « mon 'profile rbd' osd 
>>> 'profile rbd pool=rbd' »
>>>
 In the case we give access to those rbd inside the container, how I can be 
 sure users in each container do not have access to others rbd ? Is
 the namespace good to isolate each user ?
>>>
>>> The question about namespace is still open, if I have a namespace in the 
>>> osd caps, I can't create rbd volume. How I can isolate each client to
>>> only his own volumes ?
>>
>> Unfortunately, RBD doesn't currently support namespaces, but it's on
>> our backlog.
>
> So if I want to separate data between each container, I need to create a pool 
> per user (one user can have multiple containers).

Definitely don't want to create a pool per user assuming you have more
than a handful of users. Usually the higher level container management
system handles the user separation since the end-user cannot directly
access the Ceph storage system and instead the RBD image is mapped
into the container. That's why RBD support for namespaces has been
low-priority since there hasn't been a lot of end-user demand.

> I'm gonna give a look to cephfs, it seems possible to allow access only to a 
> subdirectory per user, could you confirm it ?

Yes, I believe that is correct.

> Thanks,
>
> Best regards,
>
> --
> Yoann Moulin
> EPFL IC-IT
> ___
> ceph-users mailing list
> ceph-users@lists.ceph.com
> http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com



-- 
Jason


Re: [ceph-users] Bluestore OSD_DATA, WAL & DB

2017-09-26 Thread Mark Nelson



On 09/26/2017 01:10 AM, Dietmar Rieder wrote:

thanks David,

that's confirming what I was assuming. Too bad that there is no
estimate/method to calculate the db partition size.


It's possible that we might be able to get ranges for certain kinds of 
scenarios.  Maybe if you do lots of small random writes on RBD, you can 
expect a typical metadata size of X per object.  Or maybe if you do lots 
of large sequential object writes in RGW, it's more like Y.  I think 
it's probably going to be tough to make it accurate for everyone though.


Mark



Dietmar

On 09/25/2017 05:10 PM, David Turner wrote:

db/wal partitions are per OSD.  DB partitions need to be made as big as
you need them.  If they run out of space, they will fall back to the
block device.  If the DB and block are on the same device, then there's
no reason to partition them and figure out the best size.  If they are
on separate devices, then you need to make it as big as you need to to
ensure that it won't spill over (or if it does that you're ok with the
degraded performance while the db partition is full).  I haven't come
across an equation to judge what size should be used for either
partition yet.
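
As a concrete illustration of the separate-device case (a sketch only,
assuming ceph-volume on Luminous and an NVMe partition set aside for the
DB; device names are placeholders):

  ceph-volume lvm create --bluestore \
      --data /dev/sdb \
      --block.db /dev/nvme0n1p1

If --block.wal is omitted, the WAL simply lives inside the DB device, which
is usually fine when both would end up on the same SSD/NVMe anyway.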

On Mon, Sep 25, 2017 at 10:53 AM Dietmar Rieder
mailto:dietmar.rie...@i-med.ac.at>> wrote:

On 09/25/2017 02:59 PM, Mark Nelson wrote:
> On 09/25/2017 03:31 AM, TYLin wrote:
>> Hi,
>>
>> To my understand, the bluestore write workflow is
>>
>> For normal big write
>> 1. Write data to block
>> 2. Update metadata to rocksdb
>> 3. Rocksdb write to memory and block.wal
>> 4. Once reach threshold, flush entries in block.wal to block.db
>>
>> For overwrite and small write
>> 1. Write data and metadata to rocksdb
>> 2. Apply the data to block
>>
>> Seems we don’t have a formula or suggestion to the size of block.db.
>> It depends on the object size and number of objects in your pool. You
>> can just give big partition to block.db to ensure all the database
>> files are on that fast partition. If block.db full, it will use block
>> to put db files, however, this will slow down the db performance. So
>> give db size as much as you can.
>
> This is basically correct.  What's more, it's not just the object
size,
> but the number of extents, checksums, RGW bucket indices, and
> potentially other random stuff.  I'm skeptical how well we can
estimate
> all of this in the long run.  I wonder if we would be better served by
> just focusing on making it easy to understand how the DB device is
being
> used, how much is spilling over to the block device, and make it
easy to
> upgrade to a new device once it gets full.
>
>>
>> If you want to put wal and db on same ssd, you don’t need to create
>> block.wal. It will implicitly use block.db to put wal. The only case
>> you need block.wal is that you want to separate wal to another disk.
>
> I always make explicit partitions, but only because I (potentially
> illogically) like it that way.  There may actually be some benefits to
> using a single partition for both if sharing a single device.

is this "Single db/wal partition" then to be used for all OSDs on a node
or do you need to create a seperate "Single  db/wal partition" for each
OSD  on the node?

>
>>
>> I’m also studying bluestore, this is what I know so far. Any
>> correction is welcomed.
>>
>> Thanks
>>
>>
>>> On Sep 22, 2017, at 5:27 PM, Richard Hesketh
>>> mailto:richard.hesk...@rd.bbc.co.uk>> wrote:
>>>
>>> I asked the same question a couple of weeks ago. No response I got
>>> contradicted the documentation but nobody actively confirmed the
>>> documentation was correct on this subject, either; my end state was
>>> that I was relatively confident I wasn't making some horrible
mistake
>>> by simply specifying a big DB partition and letting bluestore work
>>> itself out (in my case, I've just got HDDs and SSDs that were
>>> journals under filestore), but I could not be sure there wasn't some
>>> sort of performance tuning I was missing out on by not specifying
>>> them separately.
>>>
>>> Rich
>>>
>>> On 21/09/17 20:37, Benjeman Meekhof wrote:
 Some of this thread seems to contradict the documentation and
confuses
 me.  Is the statement below correct?

 "The BlueStore journal will always be placed on the fastest device
 available, so using a DB device will provide the same benefit
that the
 WAL device would while also allowing additional metadata to be
stored
 there (if it will fix)."



http://docs.ceph.com/docs/master/rados/configuration/bluestore-config-ref/#devices


  it seems to be saying that there's no reason to create
separate WAL
 and DB partitions

Re: [ceph-users] RBD features(kernel client) with kernel version

2017-09-26 Thread Maged Mokhtar
On 2017-09-25 14:29, Ilya Dryomov wrote:

> On Sat, Sep 23, 2017 at 12:07 AM, Muminul Islam Russell
>  wrote: 
> 
>> Hi Ilya,
>> 
>> Hope you are doing great.
>> Sorry for bugging you. I did not find enough resources for my question.  I
>> would be really helped if you could reply me. My questions are in red
>> colour.
>> 
>> - layering: layering support:
>> Kernel: 3.10 and plus, right?
> 
> Yes.
> 
>> - striping: striping v2 support:
>> What kernel is supporting this feature?
> 
> Only the default striping v2 pattern (i.e. stripe unit == object size
> and stripe count == 1) is supported.
> 
>> - exclusive-lock: exclusive locking support:
>> It's supposed to be 4.9. Right?
> 
> Yes.
> 
>> rest the the features below is under development? or any feature is
>> available in any latest kernel?
>> - object-map: object map support (requires exclusive-lock):
>> - fast-diff: fast diff calculations (requires object-map):
>> - deep-flatten: snapshot flatten support:
>> - journaling: journaled IO support (requires exclusive-lock):
> 
> The former, none of these are available in latest kernels.
> 
> A separate data pool feature (rbd create --data-pool ) is
> supported since 4.11.
> 
> Thanks,
> 
> Ilya
> ___
> ceph-users mailing list
> ceph-users@lists.ceph.com
> http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com

Hello Ilya, 

Any rough estimate when rbd journaling will be added to the kernel rbd ?
I realize it is a lot of work.. 

Cheers /Maged


Re: [ceph-users] Access to rbd with a user key

2017-09-26 Thread Yoann Moulin

>> ok, I don't know where I read the -o option to write the key but the file 
>> was empty I do a ">" and seems to work to list or create rbd now.
>>
>> and for what I have tested then, the good syntax is « mon 'profile rbd' osd 
>> 'profile rbd pool=rbd' »
>>
>>> In the case we give access to those rbd inside the container, how I can be 
>>> sure users in each container do not have access to others rbd ? Is
>>> the namespace good to isolate each user ?
>>
>> The question about namespace is still open, if I have a namespace in the osd 
>> caps, I can't create rbd volume. How I can isolate each client to
>> only his own volumes ?
> 
> Unfortunately, RBD doesn't currently support namespaces, but it's on
> our backlog.

So if I want to separate data between each container, I need to create a pool 
per user (one user can have multiple containers).

I'm going to take a look at cephfs; it seems possible to allow access only to a
subdirectory per user, could you confirm that?

Thanks,

Best regards,

-- 
Yoann Moulin
EPFL IC-IT


Re: [ceph-users] Updating ceph client - what will happen to services like NFS on clients

2017-09-26 Thread David Turner
You can update the server with the mapped rbd and shouldn't see as much as
a blip on your VMs.
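
If you want to double-check how the image is attached before upgrading,
something along these lines (purely illustrative) tells you whether it is a
kernel mapping or a userspace one:

  rbd showmapped        # krbd mappings show up here as /dev/rbdX devices
  ps -C rbd-nbd -f      # any rbd-nbd processes indicate userspace NBD mappings

Only the userspace mappings would need a remap to pick up new librbd code.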

On Tue, Sep 26, 2017, 3:32 AM Götz Reinicke 
wrote:

> Hi Thanks David & David,
>
> we don’t use the fuse code. And may be I was a bit unclear, but your
> feedback clears some other aspects in that context.
>
> I did an update already on our OSD/MONs while a NFS Fileserver still had a
> rbd connected and was exporting files (Virtual disks for XEN server) online.
>
> Now the question is, can I update the ceph packages on the NFS Fileserver
> while the export (and VM Images) are online? Or should I shutdown the VMs
> and unmount the NFS export …?
>
> Lots of nuts and bolts, but I like to screw …. :)
>
> Cheers . Götz
>
>
> Am 25.09.2017 um 17:05 schrieb David Turner :
>
> It depends a bit on how you have the RBDs mapped.  If you're mapping them
> using krbd, then they don't need to be updated to use the new rbd-fuse or
> rbd-nbd code.  If you're using one of the latter, then you should schedule
> a time to restart the mounts so that they're mapped with the new Ceph
> version.
>
> In general RBDs are not affected by upgrades as long as you don't take
> down too much of the cluster at once and are properly doing a rolling
> upgrade.
>
> On Mon, Sep 25, 2017 at 8:07 AM David  wrote:
>
>> Hi Götz
>>
>> If you did a rolling upgrade, RBD clients shouldn't have experienced
>> interrupted IO and therefor IO to NFS exports shouldn't have been affected.
>> However, in the past when using kernel NFS over kernel RBD, I did have some
>> lockups when OSDs went down in the cluster so that's something to watch out
>> for.
>>
>>
>> On Mon, Sep 25, 2017 at 8:38 AM, Götz Reinicke <
>> goetz.reini...@filmakademie.de> wrote:
>>
>>> Hi,
>>>
>>> I updated our ceph OSD/MON Nodes from 10.2.7 to 10.2.9 and everything
>>> looks good so far.
>>>
>>> Now I was wondering (as I may have forgotten how this works) what will
>>> happen to a  NFS server which has the nfs shares on a ceph rbd ? Will the
>>> update interrupt any access to the NFS share or is it that smooth that e.g.
>>> clients accessing the NFS share will not notice?
>>>
>>> Thanks for some lecture on managing ceph and regards . Götz
>>
>>
> <…>
>
>


[ceph-users] osd crashes with large object size (>10GB) in luminous Rados

2017-09-26 Thread Alexander Kushnirenko
Hello,

We successfully use rados to store backup volumes in jewel version of CEPH.
Typical volume size is 25-50GB.  Backup software (bareos) use Rados objects
as backup volumes and it works fine.  Recently we tried luminous for the
same purpose.

In luminous the developers reduced osd_max_object_size from 100G to 128M, as I
understand it for performance reasons.  But it broke the interaction with the
bareos backup software.  You can revert osd_max_object_size to 100G, but
then the OSDs start to crash once you start to put objects of about 4GB in
size (4,294,951,051 bytes).

Any suggestion how to approach this problem?

Alexander.

Sep 26 15:12:58 ceph02 ceph-osd[1417]:
/build/ceph-12.2.0/src/os/bluestore/BlueStore.cc: In function 'void
BlueStore::_txc_add_transaction(BlueStore::TransContext*,
ObjectStore::Transaction*)' thread 7f04ac2f9700 time 2017-09-26
15:12:58.230268
Sep 26 15:12:58 ceph02 ceph-osd[1417]:
/build/ceph-12.2.0/src/os/bluestore/BlueStore.cc: 9282: FAILED assert(0 ==
"unexpected error")
Sep 26 15:12:58 ceph02 ceph-osd[1417]: 2017-09-26 15:12:58.229837
7f04ac2f9700 -1 bluestore(/var/lib/ceph/osd/ceph-0) _txc_add_transaction
error (7) Argument list too long not handled on operation 10 (op 1,
counting from 0)
Sep 26 15:12:58 ceph02 ceph-osd[1417]: 2017-09-26 15:12:58.229869
7f04ac2f9700 -1 bluestore(/var/lib/ceph/osd/ceph-0) unexpected error code
Sep 26 15:12:58 ceph02 ceph-osd[1417]:  ceph version 12.2.0
(32ce2a3ae5239ee33d6150705cdb24d43bab910c) luminous (rc)
Sep 26 15:12:58 ceph02 ceph-osd[1417]:  1: (ceph::__ceph_assert_fail(char
const*, char const*, int, char const*)+0x102) [0x563c7b5f83a2]
Sep 26 15:12:58 ceph02 ceph-osd[1417]:  2:
(BlueStore::_txc_add_transaction(BlueStore::TransContext*,
ObjectStore::Transaction*)+0x15fa) [0x563c7b4ac2ba]
Sep 26 15:12:58 ceph02 ceph-osd[1417]:  3:
(BlueStore::queue_transactions(ObjectStore::Sequencer*,
std::vector >&,
boost::intrusive_ptr, ThreadPool::TPHandle*)+0x536)
[0x563c7b4ad916]
Sep 26 15:12:58 ceph02 ceph-osd[1417]:  4:
(PrimaryLogPG::queue_transactions(std::vector >&,
boost::intrusive_ptr)+0x66) [0x563c7b1d17f6]
Sep 26 15:12:58 ceph02 ceph-osd[1417]:  5:
(ReplicatedBackend::submit_transaction(hobject_t const&, object_stat_sum_t
const&, eversion_t const&, std::unique_ptr >&&, eversion_t const&, eversion_t
const&, std::vector >
const&, boost::optional&, Context*, Context*,
Context*, unsigned long, osd_reqid_t,
boost::intrusive_ptr)+0xcbf) [0x563c7b30436f]
Sep 26 15:12:58 ceph02 ceph-osd[1417]:  6:
(PrimaryLogPG::issue_repop(PrimaryLogPG::RepGather*,
PrimaryLogPG::OpContext*)+0x9fa) [0x563c7b16d68a]
Sep 26 15:12:58 ceph02 ceph-osd[1417]:  7:
(PrimaryLogPG::execute_ctx(PrimaryLogPG::OpContext*)+0x131d)
[0x563c7b1b7a5d]
Sep 26 15:12:58 ceph02 ceph-osd[1417]:  8:
(PrimaryLogPG::do_op(boost::intrusive_ptr&)+0x2ece)
[0x563c7b1bb26e]
Sep 26 15:12:58 ceph02 ceph-osd[1417]:  9:
(PrimaryLogPG::do_request(boost::intrusive_ptr&,
ThreadPool::TPHandle&)+0xea6) [0x563c7b175446]
Sep 26 15:12:58 ceph02 ceph-osd[1417]:  10:
(OSD::dequeue_op(boost::intrusive_ptr, boost::intrusive_ptr,
ThreadPool::TPHandle&)+0x3ab) [0x563c7aff919b]
Sep 26 15:12:58 ceph02 ceph-osd[1417]:  11:
(PGQueueable::RunVis::operator()(boost::intrusive_ptr
const&)+0x5a) [0x563c7b29154a]
Sep 26 15:12:58 ceph02 ceph-osd[1417]:  12:
(OSD::ShardedOpWQ::_process(unsigned int,
ceph::heartbeat_handle_d*)+0x103d) [0x563c7b01fd9d]
Sep 26 15:12:58 ceph02 ceph-osd[1417]:  13:
(ShardedThreadPool::shardedthreadpool_worker(unsigned int)+0x8ef)
[0x563c7b5fd20f]
Sep 26 15:12:58 ceph02 ceph-osd[1417]:  14:
(ShardedThreadPool::WorkThreadSharded::entry()+0x10) [0x563c7b600510]
Sep 26 15:12:58 ceph02 ceph-osd[1417]:  15: (()+0x7494) [0x7f04c56e2494]
Sep 26 15:12:58 ceph02 ceph-osd[1417]:  16: (clone()+0x3f) [0x7f04c4769aff]


Re: [ceph-users] Access to rbd with a user key

2017-09-26 Thread Jason Dillaman
On Tue, Sep 26, 2017 at 4:52 AM, Yoann Moulin  wrote:
> Hello,
>
>> I try to give access to a rbd to a client on a fresh Luminous cluster
>>
>> http://docs.ceph.com/docs/luminous/rados/operations/user-management/
>>
>> first of all, I'd like to know the exact syntax for auth caps
>>
>> the result of "ceph auth ls" give this :
>>
>>> osd.9
>>>  key: AQDjAsVZ+nI7NBAA14X9U5Xjunlk/9ovTht3Og==
>>>  caps: [mgr] allow profile osd
>>>  caps: [mon] allow profile osd
>>>  caps: [osd] allow *
>>
>> but in the documentation, it writes :
>>
>>> osd 'profile {name} [pool={pool-name} [namespace={namespace-name}]]'
>>
>> Does the "allow" needed before "profile" ? it's not clear
>>
>> If I create a user like this :
>>
>>> # ceph --cluster container auth get-or-create client.container001 \
>>>  mon 'allow profile rbd' \
>>>  osd 'allow profile rbd \
>>>  pool=rbd namespace=container001' \
>>>  -o /etc/ceph/container.client.container001.keyring
>
> ok, I don't know where I read the -o option to write the key but the file was 
> empty I do a ">" and seems to work to list or create rbd now.
>
> and for what I have tested then, the good syntax is « mon 'profile rbd' osd 
> 'profile rbd pool=rbd' »
>
>> In the case we give access to those rbd inside the container, how I can be 
>> sure users in each container do not have access to others rbd ? Is
>> the namespace good to isolate each user ?
>
> The question about namespace is still open, if I have a namespace in the osd 
> caps, I can't create rbd volume. How I can isolate each client to
> only his own volumes ?

Unfortunately, RBD doesn't currently support namespaces, but it's on
our backlog.

> Thanks for your help
>
> Best regards,
>
> --
> Yoann Moulin
> EPFL IC-IT
> ___
> ceph-users mailing list
> ceph-users@lists.ceph.com
> http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com



-- 
Jason


Re: [ceph-users] Ceph Luminous release_type "rc"

2017-09-26 Thread Irek Fasikhov
Hi
No cause for concern:
https://github.com/ceph/ceph/pull/17348/commits/2b5f84586ec4d20ebb5aacd6f3c71776c621bf3b

2017-09-26 11:23 GMT+03:00 Stefan Kooman :

> Hi,
>
> I noticed the ceph version still gives "rc" although we are using the
> latest Ceph packages: 12.2.0-1xenial
> (https://download.ceph.com/debian-luminous xenial/main amd64 Packages):
>
> ceph daemon mon.mon5 version
> {"version":"12.2.0","release":"luminous","release_type":"rc"}
>
> Why is this important (to me)? I want to make a monitoring check that
> ensures we
> are running identical, "stable" packages, instead of "beta" / "rc" in
> production.
>
> Gr. Stefan
>
>
>
> --
> | BIT BV  http://www.bit.nl/Kamer van Koophandel 09090351
> | GPG: 0xD14839C6   +31 318 648 688 / i...@bit.nl
> ___
> ceph-users mailing list
> ceph-users@lists.ceph.com
> http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com
>


Re: [ceph-users] Access to rbd with a user key

2017-09-26 Thread Yoann Moulin
Hello,

> I try to give access to a rbd to a client on a fresh Luminous cluster
> 
> http://docs.ceph.com/docs/luminous/rados/operations/user-management/
> 
> first of all, I'd like to know the exact syntax for auth caps
> 
> the result of "ceph auth ls" give this :
> 
>> osd.9
>>  key: AQDjAsVZ+nI7NBAA14X9U5Xjunlk/9ovTht3Og==
>>  caps: [mgr] allow profile osd
>>  caps: [mon] allow profile osd
>>  caps: [osd] allow *
> 
> but in the documentation, it writes :
> 
>> osd 'profile {name} [pool={pool-name} [namespace={namespace-name}]]'
> 
> Does the "allow" needed before "profile" ? it's not clear
> 
> If I create a user like this :
> 
>> # ceph --cluster container auth get-or-create client.container001 \
>>  mon 'allow profile rbd' \
>>  osd 'allow profile rbd \
>>  pool=rbd namespace=container001' \
>>  -o /etc/ceph/container.client.container001.keyring

OK, I don't know where I read that the -o option writes out the key, but the file was
empty; I used a ">" redirect instead and listing or creating rbd images seems to work now.

and from what I have tested, the correct syntax is « mon 'profile rbd' osd
'profile rbd pool=rbd' »
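
For the record, the full sequence that works for me now looks roughly like
this (assuming the keyring file really ends up containing the key):

  ceph --cluster container auth get-or-create client.container001 \
       mon 'profile rbd' osd 'profile rbd pool=rbd' \
       > /etc/ceph/container.client.container001.keyring

  rbd --cluster container --id container001 \
      --keyring /etc/ceph/container.client.container001.keyring \
      create --size 1024 rbd/container004

i.e. --id has to be given without the "client." prefix.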

> In the case we give access to those rbd inside the container, how I can be 
> sure users in each container do not have access to others rbd ? Is
> the namespace good to isolate each user ?

The question about namespaces is still open: if I have a namespace in the osd
caps, I can't create an rbd volume. How can I isolate each client to
only its own volumes?
Thanks for your help

Best regards,

-- 
Yoann Moulin
EPFL IC-IT


[ceph-users] Ceph Luminous release_type "rc"

2017-09-26 Thread Stefan Kooman
Hi,

I noticed the ceph version still gives "rc" although we are using the
latest Ceph packages: 12.2.0-1xenial
(https://download.ceph.com/debian-luminous xenial/main amd64 Packages):

ceph daemon mon.mon5 version
{"version":"12.2.0","release":"luminous","release_type":"rc"}

Why is this important (to me)? I want to make a monitoring check that ensures we
are running identical, "stable" packages, instead of "beta" / "rc" in
production.
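
The check I have in mind would be something as simple as the sketch below
(jq assumed to be available; the mon id is just an example), once the
packages actually report "stable":

  ceph daemon mon.mon5 version | jq -r .release_type   # want "stable", not "rc"
  ceph versions    # Luminous: summary of versions across all daemons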

Gr. Stefan



-- 
| BIT BV  http://www.bit.nl/Kamer van Koophandel 09090351
| GPG: 0xD14839C6   +31 318 648 688 / i...@bit.nl


[ceph-users] Access to rbd with a user key

2017-09-26 Thread Yoann Moulin
Hello,

I am trying to give a client access to an rbd on a fresh Luminous cluster

http://docs.ceph.com/docs/luminous/rados/operations/user-management/

first of all, I'd like to know the exact syntax for auth caps

the result of "ceph auth ls" give this :

> osd.9
>   key: AQDjAsVZ+nI7NBAA14X9U5Xjunlk/9ovTht3Og==
>   caps: [mgr] allow profile osd
>   caps: [mon] allow profile osd
>   caps: [osd] allow *

but in the documentation, it writes :

> osd 'profile {name} [pool={pool-name} [namespace={namespace-name}]]'

Does the "allow" needed before "profile" ? it's not clear

If I create a user like this :

> # ceph --cluster container auth get-or-create client.container001 \
>   mon 'allow profile rbd' \
>   osd 'allow profile rbd \
>   pool=rbd namespace=container001' \
>   -o /etc/ceph/container.client.container001.keyring

Is this user able to create an rbd volume ?

> # rbd --cluster container  create --size 1024 rbd/container003 --id 
> client.container001 --keyring /etc/ceph/container.client.container001.keyring 
> 2017-09-26 09:54:10.158234 7fbda23270c0  0 librados: 
> client.client.container001 authentication error (22) Invalid argument
> rbd: couldn't connect to the cluster!

In that case, client.client.container001 does not exist; I tried without the
"client." prefix but it failed as well, with another error.

> # rbd --cluster container  create --size 1024 rbd/container003 --id 
> container001 --keyring /etc/ceph/container.client.container001.keyring 
> 2017-09-26 09:55:11.869745 7f10de6d30c0  0 librados: client.container001 
> authentication error (22) Invalid argument
> rbd: couldn't connect to the cluster!

it works if I create the rbd volume like :

> # rbd --cluster container  create --size 1024 rbd/container003

Then I can get rbd volume information with the admin key but not with the user 
key.

> # rbd --cluster container info rbd/container003  
> rbd image 'container003':
>   size 1024 MB in 256 objects
>   order 22 (4096 kB objects)
>   block_name_prefix: rbd_data.5f7c74b0dc51
>   format: 2
>   features: layering, exclusive-lock, object-map, fast-diff, deep-flatten
>   flags: 
>   create_timestamp: Tue Sep 26 09:54:50 2017

> # rbd --cluster container info rbd/container003   --keyring 
> /etc/ceph/container.client.container001.keyring 
> 2017-09-26 09:58:29.864348 7f2fe60780c0  0 librados: client.admin 
> authentication error (22) Invalid argument
> rbd: couldn't connect to the cluster!

> # rbd --cluster container info rbd/container003   --keyring 
> /etc/ceph/container.client.container001.keyring  --id client.container001
> 2017-09-26 09:58:38.971827 7fcafa7aa0c0  0 librados: 
> client.client.container001 authentication error (22) Invalid argument
> rbd: couldn't connect to the cluster!

> # rbd --cluster container info rbd/container003   --keyring 
> /etc/ceph/container.client.container001.keyring  --id container001
> 2017-09-26 09:58:45.515253 7fbb0208c0c0  0 librados: client.container001 
> authentication error (22) Invalid argument
> rbd: couldn't connect to the cluster!

I might have missed something somewhere, but I don't know where.

Does the "rbd profile" give the capability to create rbd volumes to the user ? 
or it just gives the access to rbd volume previously create by
the admin ?

In the case where we give access to those rbd images inside the container, how can I be sure
users in each container do not have access to other users' rbd images? Are
namespaces a good way to isolate each user?

I haven't used rbd a lot before and have never used client key capabilities, so it
might be a bit confusing for me.

Thanks for your help

Best regards,

-- 
Yoann Moulin
EPFL IC-IT


Re: [ceph-users] Updating ceph client - what will happen to services like NFS on clients

2017-09-26 Thread Götz Reinicke
Hi Thanks David & David,

we don’t use the fuse code. And maybe I was a bit unclear, but your feedback 
clears up some other aspects in that context.

I did an update already on our OSD/MONs while a NFS Fileserver still had a rbd 
connected and was exporting files (Virtual disks for XEN server) online.

Now the question is, can I update the ceph packages on the NFS Fileserver while 
the export (and VM images) are online? Or should I shut down the VMs and unmount 
the NFS export …?

Lots of nuts and bolts, but I like to screw …. :)

Cheers . Götz


> Am 25.09.2017 um 17:05 schrieb David Turner :
> 
> It depends a bit on how you have the RBDs mapped.  If you're mapping them 
> using krbd, then they don't need to be updated to use the new rbd-fuse or 
> rbd-nbd code.  If you're using one of the latter, then you should schedule a 
> time to restart the mounts so that they're mapped with the new Ceph version.
> 
> In general RBDs are not affected by upgrades as long as you don't take down 
> too much of the cluster at once and are properly doing a rolling upgrade.
> 
> On Mon, Sep 25, 2017 at 8:07 AM David  > wrote:
> Hi Götz 
> 
> If you did a rolling upgrade, RBD clients shouldn't have experienced 
> interrupted IO and therefor IO to NFS exports shouldn't have been affected. 
> However, in the past when using kernel NFS over kernel RBD, I did have some 
> lockups when OSDs went down in the cluster so that's something to watch out 
> for.
> 
> 
> On Mon, Sep 25, 2017 at 8:38 AM, Götz Reinicke 
> mailto:goetz.reini...@filmakademie.de>> 
> wrote:
> Hi,
> 
> I updated our ceph OSD/MON Nodes from 10.2.7 to 10.2.9 and everything looks 
> good so far.
> 
> Now I was wondering (as I may have forgotten how this works) what will happen 
> to a  NFS server which has the nfs shares on a ceph rbd ? Will the update 
> interrupt any access to the NFS share or is it that smooth that e.g. clients 
> accessing the NFS share will not notice?
> 
> Thanks for some lecture on managing ceph and regards . Götz

<…>


