Re: [ceph-users] Using CephFS in LXD containers

2017-12-13 Thread Bogdan SOLGA
throughput than 1 mount point). It wasn't until > we got up to about 100 concurrent mount points that we capped our > throughput, but our total throughput just kept going up the more mount > points we had of ceph-fuse for cephfs. > > On Tue, Dec 12, 2017 at 12:06 PM Bogdan SOLGA
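For reference, a minimal sketch of what multiple ceph-fuse mount points on one client could look like; the monitor address and mount paths are made up:

    # hypothetical example: two independent ceph-fuse mounts on the same host
    sudo mkdir -p /mnt/cephfs-a /mnt/cephfs-b
    sudo ceph-fuse -m 10.0.0.1:6789 /mnt/cephfs-a
    sudo ceph-fuse -m 10.0.0.1:6789 /mnt/cephfs-b
    # each mount is a separate client session, which is why aggregate throughput can keep scaling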

[ceph-users] Using CephFS in LXD containers

2017-12-12 Thread Bogdan SOLGA
Hello, everyone! We have recently started to use CephFS (Luminous, v12.2.1) from a few LXD containers. We have mounted it on the host servers and then exposed it to the LXD containers. Do you have any recommendations (dos and don'ts) on this way of using CephFS? Thank you in advance! Kind regards
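A rough sketch of the setup described above, assuming a kernel mount on the host at /mnt/cephfs and a container named 'web01' (both names, the monitor address and the secret file are placeholders):

    # on the host: kernel-mount CephFS
    sudo mount -t ceph 10.0.0.1:6789:/ /mnt/cephfs -o name=admin,secretfile=/etc/ceph/admin.secret
    # expose the host mount to the container as an LXD disk device
    lxc config device add web01 cephfs disk source=/mnt/cephfs path=/mnt/cephfs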

Re: [ceph-users] Kernel version recommendation

2017-10-28 Thread Bogdan SOLGA
in 5ms. Please don't be an OS bigot while touting >> that you should learn the proper tool. Redhat is not the only distribution >> with a large support structure. >> >> The OS is a tool, but you should actually figure out and use the proper >> tool for your job. If

Re: [ceph-users] Kernel version recommendation

2017-10-28 Thread Bogdan SOLGA
people working at redhat to produce a stable os. I am > very pleased with the level of knowledge here and what redhat is doing > general. > > I just have to finish with; You people working on Ceph are doing a great > job and are working on a great project! > > > > >

Re: [ceph-users] Kernel version recommendation

2017-10-27 Thread Bogdan SOLGA
recommendation. We'll continue to use 4.10, then. Thanks a lot! Kind regards, Bogdan On Fri, Oct 27, 2017 at 8:04 PM, Ilya Dryomov wrote: > On Fri, Oct 27, 2017 at 6:33 PM, Bogdan SOLGA > wrote: > > Hello, everyone! > > > > We have recently upgraded our Ceph pool t

Re: [ceph-users] Kernel version recommendation

2017-10-27 Thread Bogdan SOLGA
> disable RBD features until the RBD is compatible to be mapped by that > kernel. > > On Fri, Oct 27, 2017 at 12:34 PM Bogdan SOLGA > wrote: > >> Hello, everyone! >> >> We have recently upgraded our Ceph pool to the latest Luminous release. >> On one of the
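For context, disabling image features a given kernel cannot handle usually looks roughly like this; the pool and image names are placeholders, and exactly which features must go depends on the kernel in use:

    # inspect which features the image was created with
    rbd info rbd/myimage
    # disable the features that older kernel clients typically reject
    rbd feature disable rbd/myimage object-map fast-diff deep-flatten
    sudo rbd map rbd/myimage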

[ceph-users] Kernel version recommendation

2017-10-27 Thread Bogdan SOLGA
Hello, everyone! We have recently upgraded our Ceph pool to the latest Luminous release. On one of the servers that we used as Ceph clients we had several freeze issues, which we empirically linked to running certain I/O operations concurrently - writing in an LXD container (backed by Ceph) while

Re: [ceph-users] Creating a custom cluster name using ceph-deploy

2017-10-15 Thread Bogdan SOLGA
om/2017-June/018520.html > > On Mon, Oct 16, 2017 at 11:42 AM, Erik McCormick > wrote: > > Do not, under any circumstances, make a custom named cluster. There be > pain > > and suffering (and dragons) there, and official support for it has been > > deprecated. >

[ceph-users] Creating a custom cluster name using ceph-deploy

2017-10-15 Thread Bogdan SOLGA
Hello, everyone! We are trying to create a custom cluster name using the latest ceph-deploy version (1.5.39), but we keep getting the error: *'ceph-deploy new: error: subnet must have at least 4 numbers separated by dots like x.x.x.x/xx, but got: cluster_name'* We tried to run the new command us
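For comparison, the default-named invocation that does work looks like this (the monitor host names are placeholders); given that custom cluster names are deprecated, as the reply above warns, sticking to the default 'ceph' name is the safer path:

    # default cluster name ("ceph"); the positional arguments are the initial monitor hosts
    ceph-deploy new ceph-mon-01 ceph-mon-02 ceph-mon-03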

[ceph-users] CephFS vs RBD

2017-06-23 Thread Bogdan SOLGA
Hello, everyone! We are working on a project which uses RBD images (formatted with XFS) as home folders for the project's users. The access speed and the overall reliability have been pretty good, so far. From the architectural perspective, our main focus is on providing a seamless user experien
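A minimal sketch of the per-user RBD layout described above; the pool, image and mount names are hypothetical, and the pool is assumed to exist already:

    # create, map and format a per-user image, then mount it as the home folder
    rbd create homes/alice --size 10240        # size in MB
    sudo rbd map homes/alice
    sudo mkfs.xfs /dev/rbd/homes/alice
    sudo mount /dev/rbd/homes/alice /home/alice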

Re: [ceph-users] Java librados issue

2016-12-27 Thread Bogdan SOLGA
Thank you, Wido! It was indeed the keyring; the connection works after setting it. Thanks a lot for your help! Bogdan On Tue, Dec 27, 2016 at 3:43 PM, Wido den Hollander wrote: > > > On 27 December 2016 at 14:25, Bogdan SOLGA < > bogdan.so...@gmail.com> wrote: > >
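For anyone hitting the same issue, the shell-side setup that the librados client relies on is roughly this; the client name and paths are made up, and the caps are only an example:

    # create a dedicated client key and write it to a keyring file
    ceph auth get-or-create client.java mon 'allow r' osd 'allow rwx pool=rbd' \
        -o /etc/ceph/ceph.client.java.keyring
    # then point the client at it in the ceph.conf it reads, e.g.:
    #   [client.java]
    #   keyring = /etc/ceph/ceph.client.java.keyring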

Re: [ceph-users] Java librados issue

2016-12-27 Thread Bogdan SOLGA
relative and absolute paths, but to no avail. Any further recommendations are highly welcome. Thanks, Bogdan On Tue, Dec 27, 2016 at 3:11 PM, Wido den Hollander wrote: > > > On 26 December 2016 at 19:24, Bogdan SOLGA < > bogdan.so...@gmail.com> wrote: > > > > >

[ceph-users] Advised Ceph release

2015-11-18 Thread Bogdan SOLGA
Hello, everyone! We have recently set up a Ceph cluster running on the Hammer release (v0.94.5), and we would like to know which release is advised for preparing a production-ready cluster - the LTS version (Hammer) or the latest stable version (Infernalis)? The cluster works properly (so far),

Re: [ceph-users] RBD - 'attempt to access beyond end of device'

2015-11-12 Thread Bogdan SOLGA
2015 at 11:00 PM, Jan Schermer wrote: > Can you post the output of: > > blockdev --getsz --getss --getbsz /dev/rbd5 > and > xfs_info /dev/rbd5 > > rbd resize can actually (?) shrink the image as well - is it possible that > the device was actually larger and you shrunk

Re: [ceph-users] RBD - 'attempt to access beyond end of device'

2015-11-12 Thread Bogdan SOLGA
By running rbd resize <http://docs.ceph.com/docs/master/rbd/rados-rbd-cmds/> and then 'xfs_growfs -d' on the filesystem. Is there a better way to resize an RBD image and the filesystem? On Thu, Nov 12, 2015 at 10:35 PM, Jan Schermer wrote: > > On 12 Nov 2015, at 20:4
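The resize sequence being discussed, spelled out with placeholder names and sizes (the /dev/rbd5 device is the one from this thread):

    # grow the image (size in MB), then grow the XFS filesystem to match
    rbd resize rbd/myimage --size 20480
    sudo xfs_growfs -d /mnt/myimage        # run against the mount point; XFS cannot be shrunk
    # sanity-check that the block device and the filesystem agree on the size
    sudo blockdev --getsz /dev/rbd5
    sudo xfs_info /mnt/myimage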

Re: [ceph-users] RBD - 'attempt to access beyond end of device'

2015-11-12 Thread Bogdan SOLGA
> > Is this just one machine or RBD image or is there more? > > I'd first create a snapshot and then try running fsck on it, it should > hopefully tell you if there's a problem in setup or a corruption. > > If it's not important data and it's just one instance

[ceph-users] RBD - 'attempt to access beyond end of device'

2015-11-12 Thread Bogdan SOLGA
Hello everyone! We have a recently installed Ceph cluster (v 0.94.5, Ubuntu 14.04), and today I noticed a lot of 'attempt to access beyond end of device' messages in the /var/log/syslog file. They are related to a mounted RBD image, and have the following format: *Nov 12 21:06:44 ceph-client-01

Re: [ceph-users] Erasure coded pools and 'feature set mismatch' issue

2015-11-09 Thread Bogdan SOLGA
thing > wrong. > > Any further advice on how to be able to use EC pools is highly welcomed. > > > > Thank you! > > > > Regards, > > Bogdan > > > > > > On Mon, Nov 9, 2015 at 12:20 AM, Gregory Farnum > wrote: > >> > >> Wit

Re: [ceph-users] Erasure coded pools and 'feature set mismatch' issue

2015-11-08 Thread Bogdan SOLGA
> With that release it shouldn't be the EC pool causing trouble; it's the > CRUSH tunables also mentioned in that thread. Instructions should be > available in the docs for using older tunables that are compatible with > kernel 3.13. > -Greg > > > On Saturday, November
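For reference, switching to an older tunables profile is a single command; bobtail is only a guess at what a 3.13 kernel accepts, so the kernel/tunables compatibility table in the docs should be checked first:

    # show the current profile, then fall back to an older one (this triggers data movement)
    ceph osd crush show-tunables
    ceph osd crush tunables bobtail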

[ceph-users] Erasure coded pools and 'feature set mismatch' issue

2015-11-07 Thread Bogdan SOLGA
Hello, everyone! I have recently created a Ceph cluster (v 0.94.5) on Ubuntu 14.04.3, with an erasure coded pool which has a caching pool in front of it. When trying to map RBD images, regardless of whether they are created in the rbd or in the erasure coded pool, the operation fails with '
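A rough sketch of the pool layout described above, under made-up pool names and PG counts:

    # erasure-coded base pool plus a replicated cache pool in front of it
    ceph osd pool create ecpool 128 128 erasure
    ceph osd pool create cachepool 128 128
    ceph osd tier add ecpool cachepool
    ceph osd tier cache-mode cachepool writeback
    ceph osd tier set-overlay ecpool cachepool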

[ceph-users] ceph-deploy on lxc container - 'initctl: Event failed'

2015-11-06 Thread Bogdan SOLGA
Hello, everyone! I just tried to create a new Ceph cluster, using 3 LXC containers as monitors, and the 'ceph-deploy mon create-initial' command fails for each of the monitors with an 'initctl: Event failed' error when running the following command: [ceph-mon-01][INFO ] Running command: sudo initctl
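If it helps anyone debugging the same thing: when the upstart event emitted by ceph-deploy fails inside the container, starting the monitor job (or the daemon itself) by hand is one way to check whether the monitor is otherwise healthy; the id below is a placeholder:

    # inside the container: start the upstart job directly ...
    sudo start ceph-mon id=ceph-mon-01
    # ... or bypass upstart and run the daemon in the foreground for debugging
    sudo ceph-mon --cluster ceph -i ceph-mon-01 -d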

Re: [ceph-users] ceph-deploy - default release

2015-11-04 Thread Bogdan SOLGA
Hello! Retrying this question, as I'm still stuck at the install step due to the old-version issue. Any help is highly appreciated. Regards, Bogdan On Sat, Oct 31, 2015 at 9:22 AM, Bogdan SOLGA wrote: > Hello everyone! > > I'm struggling to get a new Ceph cluster

[ceph-users] ceph-deploy - default release

2015-10-31 Thread Bogdan SOLGA
Hello everyone! I'm struggling to get a new Ceph cluster installed, and I'm wondering why I always get version 0.80.10 installed, regardless of whether I run just 'ceph-deploy install' or 'ceph-deploy install --release hammer'. Trying a 'ceph-deploy install -h', on the --release command
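Two things worth checking when the wrong version keeps getting installed; the node name is a placeholder, and the first two commands are run on the target node:

    # see which repository apt would actually install ceph from
    apt-cache policy ceph
    cat /etc/apt/sources.list.d/ceph.list
    # then retry the install with an explicit release
    ceph-deploy install --release hammer ceph-osd-01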

Re: [ceph-users] CephFS questions

2015-03-23 Thread Bogdan SOLGA
any bugs to the bug tracker. Thanks again! Kind regards, Bogdan On Mon, Mar 23, 2015 at 12:47 PM, John Spray wrote: > On 22/03/2015 08:29, Bogdan SOLGA wrote: > >> Hello, everyone! >> >> I have a few questions related to the CephFS part of Ceph: >> >> * is it

[ceph-users] CephFS questions

2015-03-22 Thread Bogdan SOLGA
Hello, everyone! I have a few questions related to the CephFS part of Ceph: - is it production ready? - can multiple CephFS be created on the same cluster? The CephFS creation page describes how to create a CephFS using (at least)
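The creation steps referenced above boil down to two pools plus 'fs new'; the pool names and PG counts below are only examples:

    # a data pool and a metadata pool, then the filesystem on top of them
    ceph osd pool create cephfs_data 64
    ceph osd pool create cephfs_metadata 64
    ceph fs new cephfs cephfs_metadata cephfs_data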

Re: [ceph-users] PGs issue

2015-03-20 Thread Bogdan SOLGA
t is the sum total of the leaf items aggregated by > the bucket. > > Thanks > > Sahana > > On Fri, Mar 20, 2015 at 2:08 PM, Bogdan SOLGA > wrote: > >> Thank you for your suggestion, Nick! I have re-weighted the OSDs and the >> status has changed to '256 active

Re: [ceph-users] PGs issue

2015-03-20 Thread Bogdan SOLGA
Thank you for your suggestion, Nick! I have re-weighted the OSDs and the status has changed to '256 active+clean'. Is this information clearly stated in the documentation and I have just missed it? If it isn't, I think it would be worth adding it, as the issue might be encountered by other
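For reference, the re-weighting in question looks roughly like this; the OSD ids and the weight are examples, and the underlying assumption is that the auto-assigned CRUSH weight of an 8 GB disk is close to zero, which is what leaves PGs unplaced:

    # give each tiny OSD a non-zero CRUSH weight, then watch the PGs settle
    ceph osd crush reweight osd.0 1.0
    ceph osd crush reweight osd.1 1.0
    ceph -w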

Re: [ceph-users] PGs issue

2015-03-20 Thread Bogdan SOLGA
Please paste the output of `ceph osd dump` and `ceph osd tree` > > Thanks > Sahana > > On Fri, Mar 20, 2015 at 11:47 AM, Bogdan SOLGA > wrote: > >> Hello, Nick! >> >> Thank you for your reply! I have tested both with setting the replicas >> number to 2 an

Re: [ceph-users] PGs issue

2015-03-19 Thread Bogdan SOLGA
rs-boun...@lists.ceph.com] On Behalf Of > > Bogdan SOLGA > > Sent: 19 March 2015 20:51 > > To: ceph-users@lists.ceph.com > > Subject: [ceph-users] PGs issue > > > > Hello, everyone! > > I have created a Ceph cluster (v0.87.1-1) using the info from the 'Quick

[ceph-users] PGs issue

2015-03-19 Thread Bogdan SOLGA
Hello, everyone! I have created a Ceph cluster (v0.87.1-1) using the info from the 'Quick deploy' page, with the following setup: - 1 x admin / deploy node; - 3 x OSD and MON nodes; - each OSD node has 2 x 8 GB HDDs; The set