[ceph-users] SSD sizing for Bluestore

2018-11-12 Thread Brendan Moloney
Hi,

I have been reading up on this a bit, and found one particularly useful mailing 
list thread [1].

The fact that there is such a large jump when your DB fits into 3 levels (30GB) 
vs 4 levels (300GB) makes it hard to choose SSDs of an appropriate size. My 
workload is all RBD, so objects should be large, but I am also looking at 
purchasing rather large HDDs (12TB).  It seems wasteful to spec out 300GB per 
OSD, but I am worried that I will barely cross the 30GB threshold when the 
disks get close to full.

It would be nice if we could either enable "dynamic level sizing" (done here 
[2] for monitors, but not bluestore?), or allow changing 
"max_bytes_for_level_base" to something that better suits our use case. For 
example, if it were set to 25% of the default (75MB L0 and L1, 750MB L2, 
7.5GB L3, 75GB L4) then I could allocate ~85GB per OSD and feel confident there 
wouldn't be any spillover onto the slow HDDs. I am far from an expert on 
RocksDB, so I might be overlooking something important here.
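
To make the arithmetic explicit, here is a rough sketch (not authoritative; it 
just assumes RocksDB's default 10x size multiplier between levels and a 75MB 
base, i.e. 25% of the ~300MB default implied by the numbers above; presumably 
the override would be appended to bluestore_rocksdb_options, but I have not 
tested that):

base_mb=75
for levels in 2 3 4; do
  total_mb=$(( base_mb * (10**levels - 1) / 9 ))
  echo "DB space needed to hold L1..L${levels}: ~${total_mb} MB"
done
# prints ~825 MB, ~8325 MB and ~83325 MB, which is where the ~85GB per OSD
# figure above comes from.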

[1] 
https://ceph-users.ceph.narkive.com/tGcDsnAB/slow-used-bytes-slowdb-being-used-despite-lots-of-space-free-in-blockdb-on-ssd
[2] https://tracker.ceph.com/issues/24361

Thanks,
Brendan
___
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com


Re: [ceph-users] searching mailing list archives

2018-11-12 Thread Marc Roos


This one I am using:

https://www.mail-archive.com/ceph-users@lists.ceph.com/

On Nov 12, 2018 10:32 PM, Bryan Henderson  wrote:
>
> Is it possible to search the mailing list archives? 
>
> http://lists.ceph.com/pipermail/ceph-users-ceph.com/ 
>
> seems to have a search function, but in my experience never finds anything. 
>
> -- 
> Bryan Henderson   San Jose, California 
> ___ 
> ceph-users mailing list 
> ceph-users@lists.ceph.com 
> http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com 
___
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com


[ceph-users] Ceph BoF at SC18

2018-11-12 Thread Douglas Fuller
Hi ceph-users,

If you’re in Dallas for SC18, please join us for the Ceph Community BoF, Ceph 
Applications in HPC Environments.

It’s tomorrow night, from 5:15-6:45 PM Central. See below for all the details!
https://sc18.supercomputing.org/presentation/?id=bof103&sess=sess364

Cheers,
—Doug
___
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com


[ceph-users] searching mailing list archives

2018-11-12 Thread Bryan Henderson
Is it possible to search the mailing list archives?

http://lists.ceph.com/pipermail/ceph-users-ceph.com/

seems to have a search function, but in my experience never finds anything.

-- 
Bryan Henderson   San Jose, California
___
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com


Re: [ceph-users] Ensure Hammer client compatibility

2018-11-12 Thread Kees Meijs
Hi again,

I just read (and reread, and read again) the Ceph Cookbook chapter on
upgrades and
http://docs.ceph.com/docs/jewel/rados/operations/crush-map/#tunables and
figured there's a way back if needed.

The sortbitwise flag is set (re-peering was almost instant) and the tunables
are set to "hammer".

There's a lot of data shuffling going on now, so fingers crossed.
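
For the record, the commands involved were roughly the following (please
double-check against the docs for your exact versions before copying):

ceph osd set sortbitwise              # clears the sortbitwise warning
ceph osd crush tunables hammer        # this is what triggers the data movement

# and the way back, should clients misbehave:
ceph osd crush tunables legacy        # or whatever the previous profile was
ceph osd unset sortbitwise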

Cheers,
Kees

On 12-11-18 09:14, Kees Meijs wrote:
> However, what about sortbitwise and tunables? 

___
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com


[ceph-users] RGW and keystone integration requiring admin credentials

2018-11-12 Thread Ronnie Lazar
Hello,

The documentation mentions that in order to integrate RGW with Keystone, we
need to supply an admin user.
We are using the S3 APIs only and don't require OpenStack integration, except
for Keystone.

We can make authentication requests to Keystone without requiring an admin
token (POST v3/s3tokens).  Why does RGW require an admin user when
configuring the service?
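
For reference, this is roughly the configuration block in question
(Luminous-era option names as I understand them; all values below are
placeholders rather than our real settings):

[client.rgw.gateway]
rgw keystone url = http://keystone.example.com:5000
rgw keystone api version = 3
rgw keystone admin user = rgw-service
rgw keystone admin password = secret
rgw keystone admin domain = default
rgw keystone admin project = service
rgw keystone accepted roles = admin,_member_
rgw s3 auth use keystone = true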

Thank you,
*Ronnie Lazar*
*R&D*

T: +972 77 556-1727
E: ron...@stratoscale.com


___
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com


Re: [ceph-users] Automated Deep Scrub always inconsistent

2018-11-12 Thread Ashley Merrick
Thanks, it does look like it ticks all the boxes.

As it's been merged I'll hold off for the next release rather than rebuilding
from source, since from what I can see it won't cause an issue beyond just
re-running the deep-scrub manually, which is essentially what the fix does
(but isolated to just the failed read).
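
For anyone hitting the same thing, the manual workaround looks roughly like
this (<pgid> is a placeholder for the PG reported in the health output):

ceph health detail                    # shows which PGs are inconsistent
rados list-inconsistent-obj <pgid>    # optionally inspect what the scrub flagged
ceph pg deep-scrub <pgid>             # re-run the deep scrub on that PG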

Thanks!

On Mon, 12 Nov 2018 at 11:56 PM, Jonas Jelten  wrote:

> Maybe you are hitting the kernel bug worked around by
> https://github.com/ceph/ceph/pull/23273
>
> -- Jonas
>
>
> On 12/11/2018 16.39, Ashley Merrick wrote:
> > Is anyone else seeing this?
> >
> > I have just setup another cluster to check on completely different
> hardware and everything running EC still.
> >
> > And getting inconsistent PG’s flagged after an auto deep scrub which can
> be fixed by just running another deep-scrub.
> >
> > On Thu, 8 Nov 2018 at 4:23 PM, Ashley Merrick  > wrote:
> >
> > Have in the past few days noticed that every single automated deep
> scrub comes back as inconsistent, once I run a
> > manual deep-scrub it finishes fine and the PG is marked as clean.
> >
> > I am running the latest mimic but have noticed someone else under
> luminous is facing the same issue
> > :
> http://lists.ceph.com/pipermail/ceph-users-ceph.com/2018-November/031166.html
> >
> > I don't believe this is any form of hardware failure as every time
> it is different OSD's and every time a manual
> > started deep-scrub finishes without issue.
> >
> > Is there something that was released in the most recent mimic and
> luminous patches that could be linked to this? Is
> > it somehow linked with the main issue with the 12.2.9 release?
>
___
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com


Re: [ceph-users] Automated Deep Scrub always inconsistent

2018-11-12 Thread Jonas Jelten
Maybe you are hitting the kernel bug worked around by 
https://github.com/ceph/ceph/pull/23273

-- Jonas


On 12/11/2018 16.39, Ashley Merrick wrote:
> Is anyone else seeing this?
> 
> I have just setup another cluster to check on completely different hardware 
> and everything running EC still.
> 
> And getting inconsistent PG’s flagged after an auto deep scrub which can be 
> fixed by just running another deep-scrub.
> 
> On Thu, 8 Nov 2018 at 4:23 PM, Ashley Merrick  > wrote:
> 
> Have in the past few days noticed that every single automated deep scrub 
> comes back as inconsistent, once I run a
> manual deep-scrub it finishes fine and the PG is marked as clean.
> 
> I am running the latest mimic but have noticed someone else under 
> luminous is facing the same issue
> : 
> http://lists.ceph.com/pipermail/ceph-users-ceph.com/2018-November/031166.html
> 
> I don't believe this is any form of hardware failure as every time it is 
> different OSD's and every time a manual
> started deep-scrub finishes without issue.
> 
> Is there something that was released in the most recent mimic and 
> luminous patches that could be linked to this? Is
> it somehow linked with the main issue with the 12.2.9 release?
___
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com


Re: [ceph-users] Automated Deep Scrub always inconsistent

2018-11-12 Thread Ashley Merrick
Is anyone else seeing this?

I have just set up another cluster to check, on completely different hardware
and still running EC on everything.

And I am getting inconsistent PGs flagged after an auto deep scrub, which can
be fixed by just running another deep-scrub.

On Thu, 8 Nov 2018 at 4:23 PM, Ashley Merrick 
wrote:

> Have in the past few days noticed that every single automated deep scrub
> comes back as inconsistent, once I run a manual deep-scrub it finishes fine
> and the PG is marked as clean.
>
> I am running the latest mimic but have noticed someone else under luminous
> is facing the same issue :
> http://lists.ceph.com/pipermail/ceph-users-ceph.com/2018-November/031166.html
>
> I don't believe this is any form of hardware failure as every time it is
> different OSD's and every time a manual started deep-scrub finishes without
> issue.
>
> Is there something that was released in the most recent mimic and luminous
> patches that could be linked to this? Is it somehow linked with the main
> issue with the 12.2.9 release?
>
___
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com


Re: [ceph-users] Ceph or Gluster for implementing big NAS

2018-11-12 Thread Dan van der Ster
Hi,
Is it identical?
In the places we use sync=disabled (e.g. analysis scratch areas),
we're totally content with losing the last X seconds/minutes of writes,
and understood that on-disk consistency is not impacted.
Cheers, Dan

On Mon, Nov 12, 2018 at 3:16 PM Kevin Olbrich  wrote:
>
> Hi Dan,
>
> ZFS without sync would be very much identical to ext2/ext4 without journals 
> or XFS with barriers disabled.
> The ARC cache in ZFS is awesome but disbaling sync on ZFS is a very high risk 
> (using ext4 with kvm-mode unsafe would be similar I think).
>
> Also, ZFS only works as expected with scheduler set to noop as it is 
> optimized to consume whole, non-shared devices.
>
> Just my 2 cents ;-)
>
> Kevin
>
>
> Am Mo., 12. Nov. 2018 um 15:08 Uhr schrieb Dan van der Ster 
> :
>>
>> We've done ZFS on RBD in a VM, exported via NFS, for a couple years.
>> It's very stable and if your use-case permits you can set zfs
>> sync=disabled to get very fast write performance that's tough to beat.
>>
>> But if you're building something new today and have *only* the NAS
>> use-case then it would make better sense to try CephFS first and see
>> if it works for you.
>>
>> -- Dan
>>
>> On Mon, Nov 12, 2018 at 3:01 PM Kevin Olbrich  wrote:
>> >
>> > Hi!
>> >
>> > ZFS won't play nice on ceph. Best would be to mount CephFS directly with 
>> > the ceph-fuse driver on the endpoint.
>> > If you definitely want to put a storage gateway between the data and the 
>> > compute nodes, then go with nfs-ganesha which can export CephFS directly 
>> > without local ("proxy") mount.
>> >
>> > I had such a setup with nfs and switched to mount CephFS directly. If 
>> > using NFS with the same data, you must make sure your HA works well to 
>> > avoid data corruption.
>> > With ceph-fuse you directly connect to the cluster, one component less 
>> > that breaks.
>> >
>> > Kevin
>> >
>> > Am Mo., 12. Nov. 2018 um 12:44 Uhr schrieb Premysl Kouril 
>> > :
>> >>
>> >> Hi,
>> >>
>> >>
>> >> We are planning to build NAS solution which will be primarily used via 
>> >> NFS and CIFS and workloads ranging from various archival application to 
>> >> more “real-time processing”. The NAS will not be used as a block storage 
>> >> for virtual machines, so the access really will always be file oriented.
>> >>
>> >>
>> >> We are considering primarily two designs and I’d like to kindly ask for 
>> >> any thoughts, views, insights, experiences.
>> >>
>> >>
>> >> Both designs utilize “distributed storage software at some level”. Both 
>> >> designs would be built from commodity servers and should scale as we 
>> >> grow. Both designs involve virtualization for instantiating "access 
>> >> virtual machines" which will be serving the NFS and CIFS protocol - so in 
>> >> this sense the access layer is decoupled from the data layer itself.
>> >>
>> >>
>> >> First design is based on a distributed filesystem like Gluster or CephFS. 
>> >> We would deploy this software on those commodity servers and mount the 
>> >> resultant filesystem on the “access virtual machines” and they would be 
>> >> serving the mounted filesystem via NFS/CIFS.
>> >>
>> >>
>> >> Second design is based on distributed block storage using CEPH. So we 
>> >> would build distributed block storage on those commodity servers, and 
>> >> then, via virtualization (like OpenStack Cinder) we would allocate the 
>> >> block storage into the access VM. Inside the access VM we would deploy 
>> >> ZFS which would aggregate block storage into a single filesystem. And 
>> >> this filesystem would be served via NFS/CIFS from the very same VM.
>> >>
>> >>
>> >> Any advices and insights highly appreciated
>> >>
>> >>
>> >> Cheers,
>> >>
>> >> Prema
>> >>
>> >> ___
>> >> ceph-users mailing list
>> >> ceph-users@lists.ceph.com
>> >> http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com
>> >
>> > ___
>> > ceph-users mailing list
>> > ceph-users@lists.ceph.com
>> > http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com
___
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com


Re: [ceph-users] Ceph or Gluster for implementing big NAS

2018-11-12 Thread Premysl Kouril
Yes, the access VM layer is there because of multi-tenancy - we need to
provide parts of the storage to different private environments (potentially
on private IP addresses). And we need both - NFS as well as CIFS.

On Mon, Nov 12, 2018 at 3:54 PM Ashley Merrick 
wrote:

> Does your use case mean you need something like nfs/cifs and can’t use
> CephFS mount directly?
>
> Has been quite a few advances in that area with quotas and user management
> in recent versions.
>
> But obviously all depends on your use case at client end.
>
> On Mon, 12 Nov 2018 at 10:51 PM, Premysl Kouril 
> wrote:
>
>> Some kind of single point will always be there I guess. Because even if
>> we go with the distributed filesystem, it will be mounted to the access VM
>> and this access VM will be providing NFS/CIFS protocol access. So this
>> machine is single point of failure (indeed we would be running two of them
>> for active-passive HA setup. In case of distributed filesystem approach the
>> failure of the access VM would mean re-mounting the filesystem on the
>> passive access VM. In case of "monster VM" approach, in case of the VM
>> failure it would mean reattaching all block volumes to a new VM.
>>
>> On Mon, Nov 12, 2018 at 3:40 PM Ashley Merrick 
>> wrote:
>>
>>> My 2 cents would be depends how H/A you need.
>>>
>>> Going with the monster VM you have a single point of failure and a
>>> single point of network congestion.
>>>
>>> If you go the CephFS route you remove that single point of failure if
>>> you mount to clients directly. And also can remove that single point of
>>> network congestion.
>>>
>>> Guess depends on the performance and uptime required , as I’d say that
>>> could factory into your decisions.
>>>
>>> On Mon, 12 Nov 2018 at 10:36 PM, Premysl Kouril <
>>> premysl.kou...@gmail.com> wrote:
>>>
 Hi Kevin,

 I should have also said, that we are internally inclined towards the
 "monster VM" approach due to seemingly simpler architecture (data
 distribution on block layer rather than on file system layer). So my
 original question is more about comparing the two approaches (distribution
 on block layer vs distribution on filesystem layer). "Monster VM" approach
 being the one where we just keep mounting block volumes to a single VM
 with normal non-distributed filesystem and then exporting via NFS/CIFS.

 Regards,
 Prema

 On Mon, Nov 12, 2018 at 3:17 PM Kevin Olbrich  wrote:

> Hi Dan,
>
> ZFS without sync would be very much identical to ext2/ext4 without
> journals or XFS with barriers disabled.
> The ARC cache in ZFS is awesome but disbaling sync on ZFS is a very
> high risk (using ext4 with kvm-mode unsafe would be similar I think).
>
> Also, ZFS only works as expected with scheduler set to noop as it is
> optimized to consume whole, non-shared devices.
>
> Just my 2 cents ;-)
>
> Kevin
>
>
> Am Mo., 12. Nov. 2018 um 15:08 Uhr schrieb Dan van der Ster <
> d...@vanderster.com>:
>
>> We've done ZFS on RBD in a VM, exported via NFS, for a couple years.
>> It's very stable and if your use-case permits you can set zfs
>> sync=disabled to get very fast write performance that's tough to beat.
>>
>> But if you're building something new today and have *only* the NAS
>> use-case then it would make better sense to try CephFS first and see
>> if it works for you.
>>
>> -- Dan
>>
>> On Mon, Nov 12, 2018 at 3:01 PM Kevin Olbrich  wrote:
>> >
>> > Hi!
>> >
>> > ZFS won't play nice on ceph. Best would be to mount CephFS directly
>> with the ceph-fuse driver on the endpoint.
>> > If you definitely want to put a storage gateway between the data
>> and the compute nodes, then go with nfs-ganesha which can export CephFS
>> directly without local ("proxy") mount.
>> >
>> > I had such a setup with nfs and switched to mount CephFS directly.
>> If using NFS with the same data, you must make sure your HA works well to
>> avoid data corruption.
>> > With ceph-fuse you directly connect to the cluster, one component
>> less that breaks.
>> >
>> > Kevin
>> >
>> > Am Mo., 12. Nov. 2018 um 12:44 Uhr schrieb Premysl Kouril <
>> premysl.kou...@gmail.com>:
>> >>
>> >> Hi,
>> >>
>> >>
>> >> We are planning to build NAS solution which will be primarily used
>> via NFS and CIFS and workloads ranging from various archival application 
>> to
>> more “real-time processing”. The NAS will not be used as a block storage
>> for virtual machines, so the access really will always be file oriented.
>> >>
>> >>
>> >> We are considering primarily two designs and I’d like to kindly
>> ask for any thoughts, views, insights, experiences.
>> >>
>> >>
>> >> Both designs utilize “distributed storage software at some level”.
>> Both designs 

Re: [ceph-users] Ceph or Gluster for implementing big NAS

2018-11-12 Thread Ashley Merrick
Does your use case mean you need something like NFS/CIFS and can’t use a
CephFS mount directly?

There have been quite a few advances in that area with quotas and user
management in recent versions.

But obviously all depends on your use case at client end.

On Mon, 12 Nov 2018 at 10:51 PM, Premysl Kouril 
wrote:

> Some kind of single point will always be there I guess. Because even if we
> go with the distributed filesystem, it will be mounted to the access VM and
> this access VM will be providing NFS/CIFS protocol access. So this machine
> is single point of failure (indeed we would be running two of them for
> active-passive HA setup. In case of distributed filesystem approach the
> failure of the access VM would mean re-mounting the filesystem on the
> passive access VM. In case of "monster VM" approach, in case of the VM
> failure it would mean reattaching all block volumes to a new VM.
>
> On Mon, Nov 12, 2018 at 3:40 PM Ashley Merrick 
> wrote:
>
>> My 2 cents would be depends how H/A you need.
>>
>> Going with the monster VM you have a single point of failure and a single
>> point of network congestion.
>>
>> If you go the CephFS route you remove that single point of failure if you
>> mount to clients directly. And also can remove that single point of network
>> congestion.
>>
>> Guess depends on the performance and uptime required , as I’d say that
>> could factory into your decisions.
>>
>> On Mon, 12 Nov 2018 at 10:36 PM, Premysl Kouril 
>> wrote:
>>
>>> Hi Kevin,
>>>
>>> I should have also said, that we are internally inclined towards the
>>> "monster VM" approach due to seemingly simpler architecture (data
>>> distribution on block layer rather than on file system layer). So my
>>> original question is more about comparing the two approaches (distribution
>>> on block layer vs distribution on filesystem layer). "Monster VM" approach
>>> being the one where we just keep mounting block volumes to a single VM
>>> with normal non-distributed filesystem and then exporting via NFS/CIFS.
>>>
>>> Regards,
>>> Prema
>>>
>>> On Mon, Nov 12, 2018 at 3:17 PM Kevin Olbrich  wrote:
>>>
 Hi Dan,

 ZFS without sync would be very much identical to ext2/ext4 without
 journals or XFS with barriers disabled.
 The ARC cache in ZFS is awesome but disbaling sync on ZFS is a very
 high risk (using ext4 with kvm-mode unsafe would be similar I think).

 Also, ZFS only works as expected with scheduler set to noop as it is
 optimized to consume whole, non-shared devices.

 Just my 2 cents ;-)

 Kevin


 Am Mo., 12. Nov. 2018 um 15:08 Uhr schrieb Dan van der Ster <
 d...@vanderster.com>:

> We've done ZFS on RBD in a VM, exported via NFS, for a couple years.
> It's very stable and if your use-case permits you can set zfs
> sync=disabled to get very fast write performance that's tough to beat.
>
> But if you're building something new today and have *only* the NAS
> use-case then it would make better sense to try CephFS first and see
> if it works for you.
>
> -- Dan
>
> On Mon, Nov 12, 2018 at 3:01 PM Kevin Olbrich  wrote:
> >
> > Hi!
> >
> > ZFS won't play nice on ceph. Best would be to mount CephFS directly
> with the ceph-fuse driver on the endpoint.
> > If you definitely want to put a storage gateway between the data and
> the compute nodes, then go with nfs-ganesha which can export CephFS
> directly without local ("proxy") mount.
> >
> > I had such a setup with nfs and switched to mount CephFS directly.
> If using NFS with the same data, you must make sure your HA works well to
> avoid data corruption.
> > With ceph-fuse you directly connect to the cluster, one component
> less that breaks.
> >
> > Kevin
> >
> > Am Mo., 12. Nov. 2018 um 12:44 Uhr schrieb Premysl Kouril <
> premysl.kou...@gmail.com>:
> >>
> >> Hi,
> >>
> >>
> >> We are planning to build NAS solution which will be primarily used
> via NFS and CIFS and workloads ranging from various archival application 
> to
> more “real-time processing”. The NAS will not be used as a block storage
> for virtual machines, so the access really will always be file oriented.
> >>
> >>
> >> We are considering primarily two designs and I’d like to kindly ask
> for any thoughts, views, insights, experiences.
> >>
> >>
> >> Both designs utilize “distributed storage software at some level”.
> Both designs would be built from commodity servers and should scale as we
> grow. Both designs involve virtualization for instantiating "access 
> virtual
> machines" which will be serving the NFS and CIFS protocol - so in this
> sense the access layer is decoupled from the data layer itself.
> >>
> >>
> >> First design is based on a distributed filesystem like Gluster or
> CephFS. We would deploy this softwar

Re: [ceph-users] Ceph or Gluster for implementing big NAS

2018-11-12 Thread Premysl Kouril
Some kind of single point will always be there I guess. Because even if we
go with the distributed filesystem, it will be mounted to the access VM and
this access VM will be providing NFS/CIFS protocol access. So this machine
is a single point of failure (indeed we would be running two of them for an
active-passive HA setup). In the case of the distributed filesystem approach,
a failure of the access VM would mean re-mounting the filesystem on the
passive access VM. In the case of the "monster VM" approach, a VM failure
would mean reattaching all block volumes to a new VM.

On Mon, Nov 12, 2018 at 3:40 PM Ashley Merrick 
wrote:

> My 2 cents would be depends how H/A you need.
>
> Going with the monster VM you have a single point of failure and a single
> point of network congestion.
>
> If you go the CephFS route you remove that single point of failure if you
> mount to clients directly. And also can remove that single point of network
> congestion.
>
> Guess depends on the performance and uptime required , as I’d say that
> could factory into your decisions.
>
> On Mon, 12 Nov 2018 at 10:36 PM, Premysl Kouril 
> wrote:
>
>> Hi Kevin,
>>
>> I should have also said, that we are internally inclined towards the
>> "monster VM" approach due to seemingly simpler architecture (data
>> distribution on block layer rather than on file system layer). So my
>> original question is more about comparing the two approaches (distribution
>> on block layer vs distribution on filesystem layer). "Monster VM" approach
>> being the one where we just keep mounting block volumes to a single VM
>> with normal non-distributed filesystem and then exporting via NFS/CIFS.
>>
>> Regards,
>> Prema
>>
>> On Mon, Nov 12, 2018 at 3:17 PM Kevin Olbrich  wrote:
>>
>>> Hi Dan,
>>>
>>> ZFS without sync would be very much identical to ext2/ext4 without
>>> journals or XFS with barriers disabled.
>>> The ARC cache in ZFS is awesome but disbaling sync on ZFS is a very high
>>> risk (using ext4 with kvm-mode unsafe would be similar I think).
>>>
>>> Also, ZFS only works as expected with scheduler set to noop as it is
>>> optimized to consume whole, non-shared devices.
>>>
>>> Just my 2 cents ;-)
>>>
>>> Kevin
>>>
>>>
>>> Am Mo., 12. Nov. 2018 um 15:08 Uhr schrieb Dan van der Ster <
>>> d...@vanderster.com>:
>>>
 We've done ZFS on RBD in a VM, exported via NFS, for a couple years.
 It's very stable and if your use-case permits you can set zfs
 sync=disabled to get very fast write performance that's tough to beat.

 But if you're building something new today and have *only* the NAS
 use-case then it would make better sense to try CephFS first and see
 if it works for you.

 -- Dan

 On Mon, Nov 12, 2018 at 3:01 PM Kevin Olbrich  wrote:
 >
 > Hi!
 >
 > ZFS won't play nice on ceph. Best would be to mount CephFS directly
 with the ceph-fuse driver on the endpoint.
 > If you definitely want to put a storage gateway between the data and
 the compute nodes, then go with nfs-ganesha which can export CephFS
 directly without local ("proxy") mount.
 >
 > I had such a setup with nfs and switched to mount CephFS directly. If
 using NFS with the same data, you must make sure your HA works well to
 avoid data corruption.
 > With ceph-fuse you directly connect to the cluster, one component
 less that breaks.
 >
 > Kevin
 >
 > Am Mo., 12. Nov. 2018 um 12:44 Uhr schrieb Premysl Kouril <
 premysl.kou...@gmail.com>:
 >>
 >> Hi,
 >>
 >>
 >> We are planning to build NAS solution which will be primarily used
 via NFS and CIFS and workloads ranging from various archival application to
 more “real-time processing”. The NAS will not be used as a block storage
 for virtual machines, so the access really will always be file oriented.
 >>
 >>
 >> We are considering primarily two designs and I’d like to kindly ask
 for any thoughts, views, insights, experiences.
 >>
 >>
 >> Both designs utilize “distributed storage software at some level”.
 Both designs would be built from commodity servers and should scale as we
 grow. Both designs involve virtualization for instantiating "access virtual
 machines" which will be serving the NFS and CIFS protocol - so in this
 sense the access layer is decoupled from the data layer itself.
 >>
 >>
 >> First design is based on a distributed filesystem like Gluster or
 CephFS. We would deploy this software on those commodity servers and mount
 the resultant filesystem on the “access virtual machines” and they would be
 serving the mounted filesystem via NFS/CIFS.
 >>
 >>
 >> Second design is based on distributed block storage using CEPH. So
 we would build distributed block storage on those commodity servers, and
 then, via virtualization (like OpenStack Cinder) we would allocate the
 block storage into the a

Re: [ceph-users] Ceph or Gluster for implementing big NAS

2018-11-12 Thread Ashley Merrick
My 2 cents would be depends how H/A you need.

Going with the monster VM you have a single point of failure and a single
point of network congestion.

If you go the CephFS route you remove that single point of failure if you
mount to clients directly. And also can remove that single point of network
congestion.

Guess it depends on the performance and uptime required, as I’d say that
could factor into your decisions.

On Mon, 12 Nov 2018 at 10:36 PM, Premysl Kouril 
wrote:

> Hi Kevin,
>
> I should have also said, that we are internally inclined towards the
> "monster VM" approach due to seemingly simpler architecture (data
> distribution on block layer rather than on file system layer). So my
> original question is more about comparing the two approaches (distribution
> on block layer vs distribution on filesystem layer). "Monster VM" approach
> being the one where we just keep mounting block volumes to a single VM
> with normal non-distributed filesystem and then exporting via NFS/CIFS.
>
> Regards,
> Prema
>
> On Mon, Nov 12, 2018 at 3:17 PM Kevin Olbrich  wrote:
>
>> Hi Dan,
>>
>> ZFS without sync would be very much identical to ext2/ext4 without
>> journals or XFS with barriers disabled.
>> The ARC cache in ZFS is awesome but disbaling sync on ZFS is a very high
>> risk (using ext4 with kvm-mode unsafe would be similar I think).
>>
>> Also, ZFS only works as expected with scheduler set to noop as it is
>> optimized to consume whole, non-shared devices.
>>
>> Just my 2 cents ;-)
>>
>> Kevin
>>
>>
>> Am Mo., 12. Nov. 2018 um 15:08 Uhr schrieb Dan van der Ster <
>> d...@vanderster.com>:
>>
>>> We've done ZFS on RBD in a VM, exported via NFS, for a couple years.
>>> It's very stable and if your use-case permits you can set zfs
>>> sync=disabled to get very fast write performance that's tough to beat.
>>>
>>> But if you're building something new today and have *only* the NAS
>>> use-case then it would make better sense to try CephFS first and see
>>> if it works for you.
>>>
>>> -- Dan
>>>
>>> On Mon, Nov 12, 2018 at 3:01 PM Kevin Olbrich  wrote:
>>> >
>>> > Hi!
>>> >
>>> > ZFS won't play nice on ceph. Best would be to mount CephFS directly
>>> with the ceph-fuse driver on the endpoint.
>>> > If you definitely want to put a storage gateway between the data and
>>> the compute nodes, then go with nfs-ganesha which can export CephFS
>>> directly without local ("proxy") mount.
>>> >
>>> > I had such a setup with nfs and switched to mount CephFS directly. If
>>> using NFS with the same data, you must make sure your HA works well to
>>> avoid data corruption.
>>> > With ceph-fuse you directly connect to the cluster, one component less
>>> that breaks.
>>> >
>>> > Kevin
>>> >
>>> > Am Mo., 12. Nov. 2018 um 12:44 Uhr schrieb Premysl Kouril <
>>> premysl.kou...@gmail.com>:
>>> >>
>>> >> Hi,
>>> >>
>>> >>
>>> >> We are planning to build NAS solution which will be primarily used
>>> via NFS and CIFS and workloads ranging from various archival application to
>>> more “real-time processing”. The NAS will not be used as a block storage
>>> for virtual machines, so the access really will always be file oriented.
>>> >>
>>> >>
>>> >> We are considering primarily two designs and I’d like to kindly ask
>>> for any thoughts, views, insights, experiences.
>>> >>
>>> >>
>>> >> Both designs utilize “distributed storage software at some level”.
>>> Both designs would be built from commodity servers and should scale as we
>>> grow. Both designs involve virtualization for instantiating "access virtual
>>> machines" which will be serving the NFS and CIFS protocol - so in this
>>> sense the access layer is decoupled from the data layer itself.
>>> >>
>>> >>
>>> >> First design is based on a distributed filesystem like Gluster or
>>> CephFS. We would deploy this software on those commodity servers and mount
>>> the resultant filesystem on the “access virtual machines” and they would be
>>> serving the mounted filesystem via NFS/CIFS.
>>> >>
>>> >>
>>> >> Second design is based on distributed block storage using CEPH. So we
>>> would build distributed block storage on those commodity servers, and then,
>>> via virtualization (like OpenStack Cinder) we would allocate the block
>>> storage into the access VM. Inside the access VM we would deploy ZFS which
>>> would aggregate block storage into a single filesystem. And this filesystem
>>> would be served via NFS/CIFS from the very same VM.
>>> >>
>>> >>
>>> >> Any advices and insights highly appreciated
>>> >>
>>> >>
>>> >> Cheers,
>>> >>
>>> >> Prema
>>> >>
>>> >> ___
>>> >> ceph-users mailing list
>>> >> ceph-users@lists.ceph.com
>>> >> http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com
>>> >
>>> > ___
>>> > ceph-users mailing list
>>> > ceph-users@lists.ceph.com
>>> > http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com
>>>
>> ___
>> ceph-users mail

Re: [ceph-users] Ceph or Gluster for implementing big NAS

2018-11-12 Thread Premysl Kouril
Hi Kevin,

I should have also said that we are internally inclined towards the
"monster VM" approach due to its seemingly simpler architecture (data
distribution on the block layer rather than on the file system layer). So my
original question is more about comparing the two approaches (distribution
on the block layer vs distribution on the filesystem layer), the "monster VM"
approach being the one where we just keep mounting block volumes to a single
VM with a normal non-distributed filesystem and then exporting via NFS/CIFS.

Regards,
Prema

On Mon, Nov 12, 2018 at 3:17 PM Kevin Olbrich  wrote:

> Hi Dan,
>
> ZFS without sync would be very much identical to ext2/ext4 without
> journals or XFS with barriers disabled.
> The ARC cache in ZFS is awesome but disbaling sync on ZFS is a very high
> risk (using ext4 with kvm-mode unsafe would be similar I think).
>
> Also, ZFS only works as expected with scheduler set to noop as it is
> optimized to consume whole, non-shared devices.
>
> Just my 2 cents ;-)
>
> Kevin
>
>
> Am Mo., 12. Nov. 2018 um 15:08 Uhr schrieb Dan van der Ster <
> d...@vanderster.com>:
>
>> We've done ZFS on RBD in a VM, exported via NFS, for a couple years.
>> It's very stable and if your use-case permits you can set zfs
>> sync=disabled to get very fast write performance that's tough to beat.
>>
>> But if you're building something new today and have *only* the NAS
>> use-case then it would make better sense to try CephFS first and see
>> if it works for you.
>>
>> -- Dan
>>
>> On Mon, Nov 12, 2018 at 3:01 PM Kevin Olbrich  wrote:
>> >
>> > Hi!
>> >
>> > ZFS won't play nice on ceph. Best would be to mount CephFS directly
>> with the ceph-fuse driver on the endpoint.
>> > If you definitely want to put a storage gateway between the data and
>> the compute nodes, then go with nfs-ganesha which can export CephFS
>> directly without local ("proxy") mount.
>> >
>> > I had such a setup with nfs and switched to mount CephFS directly. If
>> using NFS with the same data, you must make sure your HA works well to
>> avoid data corruption.
>> > With ceph-fuse you directly connect to the cluster, one component less
>> that breaks.
>> >
>> > Kevin
>> >
>> > Am Mo., 12. Nov. 2018 um 12:44 Uhr schrieb Premysl Kouril <
>> premysl.kou...@gmail.com>:
>> >>
>> >> Hi,
>> >>
>> >>
>> >> We are planning to build NAS solution which will be primarily used via
>> NFS and CIFS and workloads ranging from various archival application to
>> more “real-time processing”. The NAS will not be used as a block storage
>> for virtual machines, so the access really will always be file oriented.
>> >>
>> >>
>> >> We are considering primarily two designs and I’d like to kindly ask
>> for any thoughts, views, insights, experiences.
>> >>
>> >>
>> >> Both designs utilize “distributed storage software at some level”.
>> Both designs would be built from commodity servers and should scale as we
>> grow. Both designs involve virtualization for instantiating "access virtual
>> machines" which will be serving the NFS and CIFS protocol - so in this
>> sense the access layer is decoupled from the data layer itself.
>> >>
>> >>
>> >> First design is based on a distributed filesystem like Gluster or
>> CephFS. We would deploy this software on those commodity servers and mount
>> the resultant filesystem on the “access virtual machines” and they would be
>> serving the mounted filesystem via NFS/CIFS.
>> >>
>> >>
>> >> Second design is based on distributed block storage using CEPH. So we
>> would build distributed block storage on those commodity servers, and then,
>> via virtualization (like OpenStack Cinder) we would allocate the block
>> storage into the access VM. Inside the access VM we would deploy ZFS which
>> would aggregate block storage into a single filesystem. And this filesystem
>> would be served via NFS/CIFS from the very same VM.
>> >>
>> >>
>> >> Any advices and insights highly appreciated
>> >>
>> >>
>> >> Cheers,
>> >>
>> >> Prema
>> >>
>> >> ___
>> >> ceph-users mailing list
>> >> ceph-users@lists.ceph.com
>> >> http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com
>> >
>> > ___
>> > ceph-users mailing list
>> > ceph-users@lists.ceph.com
>> > http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com
>>
> ___
> ceph-users mailing list
> ceph-users@lists.ceph.com
> http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com
>
___
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com


Re: [ceph-users] Using Cephfs Snapshots in Luminous

2018-11-12 Thread Marc Roos


>> is anybody using cephfs with snapshots on luminous? Cephfs snapshots
>> are declared stable in mimic, but I'd like to know about the risks
>> using them on luminous. Do I risk a complete cephfs failure or just
>> some not working snapshots? It is one namespace, one fs, one data and
>> one metadata pool.
>
> For luminous, snapshot in single mds setup basically works. But
> snapshot is complete broken in multiple setup.

With a single active MDS only, right? And hardlinks are not supported with snapshots?
___
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com


Re: [ceph-users] Using Cephfs Snapshots in Luminous

2018-11-12 Thread Yan, Zheng
On Mon, Nov 12, 2018 at 3:53 PM Felix Stolte  wrote:
>
> Hi folks,
>
> is anybody using cephfs with snapshots on luminous? Cephfs snapshots are
> declared stable in mimic, but I'd like to know about the risks using
> them on luminous. Do I risk a complete cephfs failure or just some not
> working snapshots? It is one namespace, one fs, one data and one
> metadata pool.
>

For luminous, snapshots in a single-MDS setup basically work. But
snapshots are completely broken in multi-MDS setups.

> Also I'd like to know if there is a way to list all current snapshot
> (folders) in cephfs. Since a snaphot is created by the mkdir command,
> snapshot creation can be done by every user and I would like to monitor
> snapshotted folders and snaphshot size (if possible).
>

In Mimic, there is the command "ceph daemon mds.x dump snaps".
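
On luminous, a rough way to at least list snapshots from a client mount is to
use the virtual ".snap" directory (sketch only; /mnt/cephfs/projects is an
example path, and I am not sure how well per-snapshot sizes are exposed there):

ls /mnt/cephfs/projects/.snap                     # snapshots taken on that folder
getfattr -n ceph.dir.rbytes /mnt/cephfs/projects  # recursive size of the live folder

Note that ".snap" is virtual and does not show up in readdir, so a plain
"find -name .snap" will not discover it; each directory has to be probed
explicitly.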

> Best regards
>
> Felix
>
> Forschungszentrum Jülich GmbH
> 52425 Jülich
> Sitz der Gesellschaft: Jülich
> Eingetragen im Handelsregister des Amtsgerichts Düren Nr. HR B 3498
> Vorsitzender des Aufsichtsrats: MinDir. Dr. Karl Eugen Huthmacher
> Geschäftsführung: Prof. Dr.-Ing. Wolfgang Marquardt (Vorsitzender),
> Karsten Beneke (stellv. Vorsitzender), Prof. Dr.-Ing. Harald Bolt,
> Prof. Dr. Sebastian M. Schmidt
>
>
> ___
> ceph-users mailing list
> ceph-users@lists.ceph.com
> http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com
___
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com


Re: [ceph-users] Ceph or Gluster for implementing big NAS

2018-11-12 Thread Kevin Olbrich
Hi Dan,

ZFS without sync would be pretty much identical to ext2/ext4 without journals
or XFS with barriers disabled.
The ARC cache in ZFS is awesome, but disabling sync on ZFS is a very high
risk (using ext4 with KVM cache mode "unsafe" would be similar, I think).

Also, ZFS only works as expected with the scheduler set to noop, as it is
optimized to consume whole, non-shared devices.

Just my 2 cents ;-)

Kevin


Am Mo., 12. Nov. 2018 um 15:08 Uhr schrieb Dan van der Ster <
d...@vanderster.com>:

> We've done ZFS on RBD in a VM, exported via NFS, for a couple years.
> It's very stable and if your use-case permits you can set zfs
> sync=disabled to get very fast write performance that's tough to beat.
>
> But if you're building something new today and have *only* the NAS
> use-case then it would make better sense to try CephFS first and see
> if it works for you.
>
> -- Dan
>
> On Mon, Nov 12, 2018 at 3:01 PM Kevin Olbrich  wrote:
> >
> > Hi!
> >
> > ZFS won't play nice on ceph. Best would be to mount CephFS directly with
> the ceph-fuse driver on the endpoint.
> > If you definitely want to put a storage gateway between the data and the
> compute nodes, then go with nfs-ganesha which can export CephFS directly
> without local ("proxy") mount.
> >
> > I had such a setup with nfs and switched to mount CephFS directly. If
> using NFS with the same data, you must make sure your HA works well to
> avoid data corruption.
> > With ceph-fuse you directly connect to the cluster, one component less
> that breaks.
> >
> > Kevin
> >
> > Am Mo., 12. Nov. 2018 um 12:44 Uhr schrieb Premysl Kouril <
> premysl.kou...@gmail.com>:
> >>
> >> Hi,
> >>
> >>
> >> We are planning to build NAS solution which will be primarily used via
> NFS and CIFS and workloads ranging from various archival application to
> more “real-time processing”. The NAS will not be used as a block storage
> for virtual machines, so the access really will always be file oriented.
> >>
> >>
> >> We are considering primarily two designs and I’d like to kindly ask for
> any thoughts, views, insights, experiences.
> >>
> >>
> >> Both designs utilize “distributed storage software at some level”. Both
> designs would be built from commodity servers and should scale as we grow.
> Both designs involve virtualization for instantiating "access virtual
> machines" which will be serving the NFS and CIFS protocol - so in this
> sense the access layer is decoupled from the data layer itself.
> >>
> >>
> >> First design is based on a distributed filesystem like Gluster or
> CephFS. We would deploy this software on those commodity servers and mount
> the resultant filesystem on the “access virtual machines” and they would be
> serving the mounted filesystem via NFS/CIFS.
> >>
> >>
> >> Second design is based on distributed block storage using CEPH. So we
> would build distributed block storage on those commodity servers, and then,
> via virtualization (like OpenStack Cinder) we would allocate the block
> storage into the access VM. Inside the access VM we would deploy ZFS which
> would aggregate block storage into a single filesystem. And this filesystem
> would be served via NFS/CIFS from the very same VM.
> >>
> >>
> >> Any advices and insights highly appreciated
> >>
> >>
> >> Cheers,
> >>
> >> Prema
> >>
> >> ___
> >> ceph-users mailing list
> >> ceph-users@lists.ceph.com
> >> http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com
> >
> > ___
> > ceph-users mailing list
> > ceph-users@lists.ceph.com
> > http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com
>
___
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com


Re: [ceph-users] Ceph or Gluster for implementing big NAS

2018-11-12 Thread Dan van der Ster
We've done ZFS on RBD in a VM, exported via NFS, for a couple years.
It's very stable and if your use-case permits you can set zfs
sync=disabled to get very fast write performance that's tough to beat.
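
The knob in question, in case it's useful (dataset name is just an example):

zfs set sync=disabled tank/nfs    # trade the last few seconds of writes for speed
zfs get sync tank/nfs             # verify the current setting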

But if you're building something new today and have *only* the NAS
use-case then it would make better sense to try CephFS first and see
if it works for you.

-- Dan

On Mon, Nov 12, 2018 at 3:01 PM Kevin Olbrich  wrote:
>
> Hi!
>
> ZFS won't play nice on ceph. Best would be to mount CephFS directly with the 
> ceph-fuse driver on the endpoint.
> If you definitely want to put a storage gateway between the data and the 
> compute nodes, then go with nfs-ganesha which can export CephFS directly 
> without local ("proxy") mount.
>
> I had such a setup with nfs and switched to mount CephFS directly. If using 
> NFS with the same data, you must make sure your HA works well to avoid data 
> corruption.
> With ceph-fuse you directly connect to the cluster, one component less that 
> breaks.
>
> Kevin
>
> Am Mo., 12. Nov. 2018 um 12:44 Uhr schrieb Premysl Kouril 
> :
>>
>> Hi,
>>
>>
>> We are planning to build NAS solution which will be primarily used via NFS 
>> and CIFS and workloads ranging from various archival application to more 
>> “real-time processing”. The NAS will not be used as a block storage for 
>> virtual machines, so the access really will always be file oriented.
>>
>>
>> We are considering primarily two designs and I’d like to kindly ask for any 
>> thoughts, views, insights, experiences.
>>
>>
>> Both designs utilize “distributed storage software at some level”. Both 
>> designs would be built from commodity servers and should scale as we grow. 
>> Both designs involve virtualization for instantiating "access virtual 
>> machines" which will be serving the NFS and CIFS protocol - so in this sense 
>> the access layer is decoupled from the data layer itself.
>>
>>
>> First design is based on a distributed filesystem like Gluster or CephFS. We 
>> would deploy this software on those commodity servers and mount the 
>> resultant filesystem on the “access virtual machines” and they would be 
>> serving the mounted filesystem via NFS/CIFS.
>>
>>
>> Second design is based on distributed block storage using CEPH. So we would 
>> build distributed block storage on those commodity servers, and then, via 
>> virtualization (like OpenStack Cinder) we would allocate the block storage 
>> into the access VM. Inside the access VM we would deploy ZFS which would 
>> aggregate block storage into a single filesystem. And this filesystem would 
>> be served via NFS/CIFS from the very same VM.
>>
>>
>> Any advices and insights highly appreciated
>>
>>
>> Cheers,
>>
>> Prema
>>
>> ___
>> ceph-users mailing list
>> ceph-users@lists.ceph.com
>> http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com
>
> ___
> ceph-users mailing list
> ceph-users@lists.ceph.com
> http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com
___
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com


Re: [ceph-users] Ceph or Gluster for implementing big NAS

2018-11-12 Thread Kevin Olbrich
Hi!

ZFS won't play nicely on Ceph. Best would be to mount CephFS directly with
the ceph-fuse driver on the endpoint.
If you definitely want to put a storage gateway between the data and the
compute nodes, then go with nfs-ganesha, which can export CephFS directly
without a local ("proxy") mount.

I had such a setup with NFS and switched to mounting CephFS directly. If using
NFS with the same data, you must make sure your HA works well to avoid data
corruption.
With ceph-fuse you connect directly to the cluster - one less component that
can break.
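
For completeness, a direct client mount is a one-liner (monitor address,
client id and mount point below are examples only):

ceph-fuse -m mon1.example.com:6789 --id nasclient /mnt/cephfs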

Kevin

Am Mo., 12. Nov. 2018 um 12:44 Uhr schrieb Premysl Kouril <
premysl.kou...@gmail.com>:

> Hi,
>
> We are planning to build NAS solution which will be primarily used via NFS
> and CIFS and workloads ranging from various archival application to more
> “real-time processing”. The NAS will not be used as a block storage for
> virtual machines, so the access really will always be file oriented.
>
> We are considering primarily two designs and I’d like to kindly ask for
> any thoughts, views, insights, experiences.
>
> Both designs utilize “distributed storage software at some level”. Both
> designs would be built from commodity servers and should scale as we grow.
> Both designs involve virtualization for instantiating "access virtual
> machines" which will be serving the NFS and CIFS protocol - so in this
> sense the access layer is decoupled from the data layer itself.
>
> First design is based on a distributed filesystem like Gluster or CephFS.
> We would deploy this software on those commodity servers and mount the
> resultant filesystem on the “access virtual machines” and they would be
> serving the mounted filesystem via NFS/CIFS.
>
> Second design is based on distributed block storage using CEPH. So we
> would build distributed block storage on those commodity servers, and then,
> via virtualization (like OpenStack Cinder) we would allocate the block
> storage into the access VM. Inside the access VM we would deploy ZFS which
> would aggregate block storage into a single filesystem. And this filesystem
> would be served via NFS/CIFS from the very same VM.
>
>
> Any advices and insights highly appreciated
>
>
> Cheers,
>
> Prema
> ___
> ceph-users mailing list
> ceph-users@lists.ceph.com
> http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com
>
___
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com


Re: [ceph-users] Ceph Influx Plugin in luminous

2018-11-12 Thread Wido den Hollander


On 11/12/18 12:54 PM, mart.v wrote:
> Hi,
> 
> I'm trying to set up a Influx plugin
> (http://docs.ceph.com/docs/mimic/mgr/influx/). The docs says that it
> will be available in Mimic release, but I can see it (and enable) in
> current Luminous. It seems that someone else acutally used it in
> Luminous
> (http://lists.ceph.com/pipermail/ceph-users-ceph.com/2018-January/023864.html):
> 
> # ceph mgr module ls
> {
>     "enabled_modules": [
>         "balancer",
>         "dashboard",
>         "influx",
>         "restful",
>         "status"
>     ],
>     "disabled_modules": [
>         "localpool",
>         "prometheus",
>         "selftest",
>         "zabbix"
>     ]
> }
> 
> I tried the most simple setup (local influxdb without SSL) and
> configured plugin this way:
> 
> # ceph influx config-show
> {"username": "ceph", "database": "ceph", "hostname": "localhost", "ssl":
> false, "verify_ssl": false, "password": "*", "port": 8086}
> 
> Influx is accessible and running. There are no messages in logs but also
> no measurements in influxdb. Running "ceph influx self-test" produces
> reasonable output.
> 
> (I also tried to connect the plugin to our remote influxdb, same result.
> 
> Any ideas?

Did you install the Python InfluxDB package? That is required for this
module to run.

It's called python-influxdb on most distributions.
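
A quick way to check (sketch; package names vary per distribution):

python -c 'import influxdb; print(influxdb.__version__)'   # should not fail
apt-get install python-influxdb   # Debian/Ubuntu; "yum install python-influxdb" on EL

Afterwards restart the active ceph-mgr and run "ceph influx self-test" again.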

Wido

> 
> Thanks! 
> Martin
> 
> ___
> ceph-users mailing list
> ceph-users@lists.ceph.com
> http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com
> 
___
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com


[ceph-users] Ceph Influx Plugin in luminous

2018-11-12 Thread mart.v
Hi,

I'm trying to set up an Influx plugin
(http://docs.ceph.com/docs/mimic/mgr/influx/). The docs say that it will be
available in the Mimic release, but I can see (and enable) it in current
Luminous. It seems that someone else actually used it in Luminous
(http://lists.ceph.com/pipermail/ceph-users-ceph.com/2018-January/023864.html):

# ceph mgr module ls
{
    "enabled_modules": [
        "balancer",
        "dashboard",
        "influx",
        "restful",
        "status"
    ],
    "disabled_modules": [
        "localpool",
        "prometheus",
        "selftest",
        "zabbix"
    ]
}

I tried the most simple setup (local influxdb without SSL) and configured the
plugin this way:

# ceph influx config-show
{"username": "ceph", "database": "ceph", "hostname": "localhost", "ssl":
false, "verify_ssl": false, "password": "*", "port": 8086}

Influx is accessible and running. There are no messages in the logs but also no
measurements in influxdb. Running "ceph influx self-test" produces
reasonable output.

(I also tried to connect the plugin to our remote influxdb, same result.)

Any ideas?

Thanks!

Martin
___
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com


[ceph-users] Ceph or Gluster for implementing big NAS

2018-11-12 Thread Premysl Kouril
Hi,

We are planning to build a NAS solution which will be primarily used via NFS
and CIFS, with workloads ranging from various archival applications to more
“real-time processing”. The NAS will not be used as block storage for
virtual machines, so the access really will always be file oriented.

We are considering primarily two designs and I’d like to kindly ask for any
thoughts, views, insights, experiences.

Both designs utilize “distributed storage software at some level”. Both
designs would be built from commodity servers and should scale as we grow.
Both designs involve virtualization for instantiating "access virtual
machines" which will be serving the NFS and CIFS protocol - so in this
sense the access layer is decoupled from the data layer itself.

First design is based on a distributed filesystem like Gluster or CephFS.
We would deploy this software on those commodity servers and mount the
resultant filesystem on the “access virtual machines” and they would be
serving the mounted filesystem via NFS/CIFS.

Second design is based on distributed block storage using CEPH. So we would
build distributed block storage on those commodity servers, and then, via
virtualization (like OpenStack Cinder) we would allocate the block storage
into the access VM. Inside the access VM we would deploy ZFS which would
aggregate block storage into a single filesystem. And this filesystem would
be served via NFS/CIFS from the very same VM.


Any advice and insights are highly appreciated.


Cheers,

Prema
___
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com


Re: [ceph-users] Effects of restoring a cluster's mon from an older backup

2018-11-12 Thread Hector Martin

On 10/11/2018 06:35, Gregory Farnum wrote:
> Yes, do that, don't try and back up your monitor. If you restore a
> monitor from backup then the monitor — your authoritative data source —
> will warp back in time on what the OSD peering intervals look like,
> which snapshots have been deleted and created, etc. It would be a huge
> disaster and probably every running daemon or client would have to pause
> IO until the monitor generated enough map epochs to "catch up" — and
> then the rest of the cluster would start applying those changes and
> nothing would work right.


Thanks, I suspected this might be the case. Is there any reasonably safe 
"backwards warp" time window (that would permit asynchronous replication 
of mon storage to be good enough for disaster recovery), e.g. on the 
order of seconds? I assume synchronous replication is fine (e.g. RAID or 
DRBD configured correctly) since that's largely equivalent to local 
storage. I'll probably go with something like that for mon durability.


> Unlike the OSDMap, the MDSMap doesn't really keep track of any
> persistent data so it's much safer to rebuild or reset from scratch.
>
> -Greg


Good to know. I'll see if I can do some DR tests when I set this up, to 
prove to myself that it all works out :-)


--
Hector Martin (hec...@marcansoft.com)
Public Key: https://marcan.st/marcan.asc
___
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com


Re: [ceph-users] Ensure Hammer client compatibility

2018-11-12 Thread Kees Meijs

Hi list,

Having finished our adventures with Infernalis we're now finally running
Jewel (10.2.11) on all Ceph nodes. Woohoo!

However, there are still KVM production boxes with block-rbd.so linked to
librados 0.94.10, which is Hammer.

Current relevant status parts:

 health HEALTH_WARN
        crush map has legacy tunables (require bobtail, min is firefly)
        no legacy OSD present but 'sortbitwise' flag is not set

Obviously we would like to go to HEALTH_OK again without the warnings
mentioned, while maintaining Hammer client support.

Running "ceph osd set require_jewel_osds" seemed harmless in terms of client
compatibility, so that's done already.

However, what about sortbitwise and tunables?

Thanks,
Kees

On 21-08-18 03:47, Kees Meijs wrote:
> We're looking at (now existing) RBD support using KVM/QEMU, so this is
> an upgrade path.

___
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com


Re: [ceph-users] I can't find the configuration of user connection log in RADOSGW

2018-11-12 Thread Janne Johansson
On Mon, 12 Nov 2018 at 06:19, 대무무  wrote:
>
> Hello.
> I installed ceph framework in 6 servers and I want to manage the user access 
> log. So I configured ceph.conf in the server which installing the rgw.
>
> ceph.conf
> [client.rgw.~~~]
> ...
> rgw enable usage log = True
>
> However, I cannot find the connection log of each user.
> I want to know the method of user connection log like the linux command 
> 'last'.
>
> I’m looking forward to hearing from you.
>

That config setting will produce summary statistics on bucket creation,
file ops and so on.

For simpler "login"/"logout" style information you should look at the
webserver log (civetweb or
whatever frontend you have in front of radosgw).
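
Concretely, and hedged (check the docs for your version; the uid and log path
below are placeholders):

# per-user summary statistics collected by "rgw enable usage log":
radosgw-admin usage show --uid=someuser --show-log-entries=true

# per-request access log, enabled on the civetweb frontend in ceph.conf:
rgw frontends = civetweb port=7480 access_log_file=/var/log/ceph/rgw.access.log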


-- 
May the most significant bit of your life be positive.
___
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com