[ceph-users] Re: Building a petabyte cluster from scratch

2019-12-04 Thread Konstantin Shalygin
On 12/4/19 3:06 AM, Fabien Sirjean wrote:
>  * ZFS on RBD, exposed via samba shares (cluster with failover)

Why not use samba vfs_ceph instead? It's scalable direct access.

>  * What about CephFS? We'd like to use RBD diff for backups, but it looks impossible to use snapshot diff with CephFS
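
For reference, the vfs_ceph setup Konstantin is pointing at usually boils down to a share definition along the lines of the sketch below (the share name and the CephX user "samba" are made-up placeholders; the options themselves come from Samba's vfs_ceph documentation):

    # smb.conf share backed directly by CephFS via libcephfs
    [data]
        path = /
        vfs objects = ceph
        ceph:config_file = /etc/ceph/ceph.conf
        ceph:user_id = samba
        # vfs_ceph bypasses the kernel, so kernel share modes must be off
        kernel share modes = no
        read only = no

Each Samba node talks to CephFS directly, so an HA setup only needs clustering at the Samba layer (e.g. CTDB) rather than failing over an RBD + ZFS stack.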

[ceph-users] Re: Building a petabyte cluster from scratch

2019-12-04 Thread Jack
You can snapshot, but you cannot export a diff of snapshots.

On 12/4/19 9:19 AM, Konstantin Shalygin wrote:
> On 12/4/19 3:06 AM, Fabien Sirjean wrote:
>>  * ZFS on RBD, exposed via samba shares (cluster with failover)
> Why not use samba vfs_ceph instead? It's scalable direct access.
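
The distinction Jack is drawing: RBD snapshots can be diffed and shipped incrementally, CephFS snapshots cannot (as of this thread). A rough sketch of the RBD side, with pool, image and snapshot names invented for illustration:

    # initial full export
    rbd snap create rbd/vmdisk@base
    rbd export rbd/vmdisk@base /backup/vmdisk-base.img

    # later: new snapshot, export only the changes since @base
    rbd snap create rbd/vmdisk@daily
    rbd export-diff --from-snap base rbd/vmdisk@daily /backup/vmdisk-daily.diff

    # replay the delta onto a copy that already has the @base snapshot
    rbd import-diff /backup/vmdisk-daily.diff backup/vmdisk

CephFS snapshots (mkdir inside the .snap directory) give point-in-time copies, but there is no export-diff equivalent to ship just the delta.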

[ceph-users] Re: Balancing PGs across OSDs

2019-12-04 Thread Konstantin Shalygin
On 12/3/19 1:30 PM, Lars Täuber wrote:
> here it comes:
>
> $ ceph osd df tree
> ID CLASS WEIGHT    REWEIGHT SIZE    RAW USE DATA    OMAP   META    AVAIL  %USE  VAR  PGS STATUS TYPE NAME
> -1       195.40730        - 195 TiB 130 TiB 128 TiB 58 GiB 476 GiB 66 TiB 66.45 1.00   -        root default

[ceph-users] Re: Error in add new ISCSI gateway

2019-12-04 Thread Gesiel Galvão Bernardes
I updated and restarted rbd-target-api, and this error is apparently resolved. Now I have another problem, with a gateway that I lost:

s> create ceph-iscsi3 192.168.201.3
Adding gateway, sync'ing 3 disk(s) and 2 client(s)
Failed : Gateway creation failed, gateway(s) unavailable: ceph-iscsi2 (UNKNOWN state)

[ceph-users] Re: Balancing PGs across OSDs

2019-12-04 Thread Lars Täuber
Hi Konstantin, thanks for your suggestions.

> Lars, you have too many PGs for these OSDs. I suggest disabling the PG
> autoscaler and:
>
> - reduce the number of PGs for the cephfs_metadata pool to something like 16 PGs.

Done.

> - reduce the number of PGs for cephfs_data to something like 512.

Done.
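
For anyone following along, the changes described above map to commands roughly like these (pool names as used in the thread; pg_autoscale_mode is set per pool on Nautilus and later):

    # keep the autoscaler from undoing the manual sizing
    ceph osd pool set cephfs_metadata pg_autoscale_mode off
    ceph osd pool set cephfs_data pg_autoscale_mode off

    # shrink the pools; PG merging proceeds in the background
    ceph osd pool set cephfs_metadata pg_num 16
    ceph osd pool set cephfs_data pg_num 512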

[ceph-users] Re: Balancing PGs across OSDs

2019-12-04 Thread Konstantin Shalygin
On 12/4/19 4:04 PM, Lars Täuber wrote:
> So I'll just wait for the remapping and merging to be done and see what happens. Thanks so far!

Also don't forget to call `ceph osd crush weight-set rm-compat`, and stop the mgr balancer with `ceph balancer off`. After your rebalance is complete you can try: `ceph
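
Spelled out, the two cleanup steps named above are simply:

    # drop the legacy compat weight-set (left over from crush-compat balancing)
    ceph osd crush weight-set rm-compat

    # keep the mgr balancer out of the way while the cluster settles
    ceph balancer off

Once the rebalance has finished, the balancer can be turned back on in whichever mode is preferred.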

[ceph-users] Re: Building a petabyte cluster from scratch

2019-12-04 Thread Darren Soothill
Hi Fabien,

ZFS on top of RBD really makes me shudder. ZFS expects to have individual disk devices that it can manage. It thinks it has them with this config, but Ceph is masking the real data behind it. As has been said before, why not just use Samba directly from CephFS and remove that layer of
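
A minimal sketch of the "Samba directly from CephFS" layout, using a kernel CephFS mount plus a plain path-based share (hostnames, the CephX user, mount point and share name are all illustrative); Samba's vfs_ceph module, as suggested earlier in the thread, is the mount-less alternative:

    # mount CephFS with the kernel client on each Samba node
    mount -t ceph mon1,mon2,mon3:/ /mnt/cephfs \
        -o name=samba,secretfile=/etc/ceph/samba.secret

    # /etc/samba/smb.conf
    [data]
        path = /mnt/cephfs/data
        read only = no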

[ceph-users] Re: Building a petabyte cluster from scratch

2019-12-04 Thread Phil Regnauld
Darren Soothill (darren.soothill) writes:
> Hi Fabien,
>
> ZFS on top of RBD really makes me shudder. ZFS expects to have individual disk
> devices that it can manage. It thinks it has them with this config but Ceph
> is masking the real data behind it.
>
> As has been said before why not just u

[ceph-users] Re: Error in add new ISCSI gateway

2019-12-04 Thread Jason Dillaman
See the thread "iSCSI Gateway reboots and permanent loss" from yesterday -- it's not currently possible to remove a gateway that no longer exists.

On Wed, Dec 4, 2019 at 3:49 AM Gesiel Galvão Bernardes wrote:
> I updated and restarted rbd-target-api, and this error is apparently resolved.
> Now, I

[ceph-users] Re: iSCSI Gateway reboots and permanent loss

2019-12-04 Thread Gesiel Galvão Bernardes
Hi,

On Wed, Dec 4, 2019 at 00:31, Mike Christie wrote:
> On 12/03/2019 04:19 PM, Wesley Dillingham wrote:
>> Thanks. If I am reading this correctly, the ability to remove an iSCSI
>> gateway would allow the remaining iSCSI gateways to take over for the
>> removed gateway's LUNs as o

[ceph-users] Re: iSCSI Gateway reboots and permanent loss

2019-12-04 Thread Jason Dillaman
On Wed, Dec 4, 2019 at 9:27 AM Gesiel Galvão Bernardes wrote:
> Hi,
>
> On Wed, Dec 4, 2019 at 00:31, Mike Christie wrote:
>> On 12/03/2019 04:19 PM, Wesley Dillingham wrote:
>>> Thanks. If I am reading this correctly, the ability to remove an iSCSI
>>> gateway would allow the

[ceph-users] bluestore rocksdb behavior

2019-12-04 Thread Frank R
Hi all,

How is the following situation handled with BlueStore:

1. You have a 200GB OSD (no separate DB/WAL devices).
2. The metadata grows past 30G for some reason and wants to create a 300GB level but can't.

Where is the metadata over 30G stored?

[ceph-users] Re: bluestore rocksdb behavior

2019-12-04 Thread Igor Fedotov
Hi Frank,

No spillover happens/applies for the main device, hence data beyond 30G is written to the main device as well.

Thanks,
Igor

On 12/4/2019 6:13 PM, Frank R wrote:
> Hi all, how is the following situation handled with BlueStore:
> 1. You have a 200GB OSD (no separate DB/WAL devices)
> 2. The
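
To see where that metadata actually sits on a single-device OSD, the BlueFS counters and the OSD utilization report can be checked (the OSD id below is illustrative):

    # on the OSD host: the "bluefs" section reports how much space
    # BlueFS/RocksDB is using on the OSD's devices
    ceph daemon osd.0 perf dump bluefs

    # cluster-wide: the META column shows per-OSD metadata usage
    ceph osd df

With no separate DB device there is nothing to spill to, so all of it is accounted against the main device, matching Igor's answer.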

[ceph-users] Re: iSCSI Gateway reboots and permanent loss

2019-12-04 Thread Mike Christie
On 12/04/2019 08:26 AM, Gesiel Galvão Bernardes wrote:
> Hi,
>
> On Wed, Dec 4, 2019 at 00:31, Mike Christie wrote:
>> On 12/03/2019 04:19 PM, Wesley Dillingham wrote:
>>> Thanks. If I am reading this correctly, the ability to remove an iSCSI

[ceph-users] Re: bluestore rocksdb behavior

2019-12-04 Thread Frank R
Thanks. Can you recommend any docs for understanding the BlueStore on-disk format/behavior when there is no separate device for the WAL/DB?

On Wed, Dec 4, 2019 at 10:19 AM Igor Fedotov wrote:
> Hi Frank,
>
> No spillover happens/applies for the main device, hence data beyond 30G is
> written to

[ceph-users] Re: iSCSI Gateway reboots and permanent loss

2019-12-04 Thread Wesley Dillingham
I have never had a permanent loss of a gateway, but I'm a believer in Murphy's law and want to have a plan. Glad to hear that there is a solution in the works; curious, when might that be available in a release? If sooner rather than later, I'll plan to upgrade immediately; otherwise, if it's far down the qu