[ceph-users] performance exporting RBD over NFS

2018-06-18 Thread Marc Boisis
Hi,

I want to export an RBD image over NFS on a 10Gb network. Server and client are Dell
R620s with 10Gb NICs.
The RBD cache is disabled on the server.

NFS server write bandwidth on its RBD is 1196 MB/s.

NFS client write bandwidth on the RBD export is only 233 MB/s.
NFS client write bandwidth on a "local-server-disk" export is 839 MB/s.

My benchmark is:
fio --time_based --name=benchmark --size=20G --runtime=30 --filename=/video1/fiobench --ioengine=libaio --randrepeat=0 --iodepth=128 --direct=1 --invalidate=1 --verify=0 --verify_fatal=0 --numjobs=4 --rw=write --blocksize=256k --group_reporting
My export: /video1 X.X.X.X(rw,sync,no_root_squash)
My mount: type nfs (rw,noatime,nodiratime,vers=3,rsize=1048576,wsize=1048576,namlen=255,hard,proto=tcp,timeo=600,retrans=2,sec=sys,mountaddr=X.X.X.X,mountvers=3,mountport=20048,mountproto=tcp,local_lock=none,addr=X.X.X.X)
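Something I still want to double-check is what the server actually exports and what the
client really negotiated (both commands come from nfs-utils; output abbreviated):

    # on the NFS server: show the effective export options (sync, wdelay, ...)
    exportfs -v

    # on the NFS client: show the options actually negotiated for the mount
    nfsstat -m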
My RBD image:
rbd image 'video1':
        size 5120 GB in 1310720 objects
        order 22 (4096 kB objects)
        block_name_prefix: rbd_data.1c9dc674b0dc51
        format: 2
        features: layering
        flags:


My conclusion:
- RBD write performance is good
- NFS write performance is good
- NFS write on RBD performance is bad
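One test I may try next, purely as a diagnostic (async can lose data on a server crash,
so I would not keep it in production), is re-exporting with async to see whether the
sync export is what serializes the 256k writes:

    # temporarily re-export /video1 with async, for a test run only
    exportfs -o rw,async,no_root_squash X.X.X.X:/video1
    # then re-run the same fio job from the client and compare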

Do you encounter the same problem?

Marc



[ceph-users] a big cluster or several small

2018-05-14 Thread Marc Boisis

Hi,

We currently have a 294-OSD cluster (21 hosts / 3 racks) with RBD clients only and a
single pool (size=3).

We want to divide this cluster into several to minimize the risk in case of a
failure or crash.
For example, one cluster for mail, another for the file servers, a test cluster, and so on.
Do you think it's a good idea?

Do you have experience feedback on running multiple clusters in production on the same
hardware:
- containers (LXD or Docker)
- multiple clusters on the same host without virtualization (with ceph-deploy
... --cluster ...; see the sketch below)
- multiple pools
...

Do you have any advice?
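For the second option, the kind of thing I have in mind is an untested sketch like this
(cluster name and hostnames are placeholders):

    # deploy a second cluster named "mail" next to the default "ceph" cluster
    ceph-deploy --cluster mail new mon1 mon2 mon3
    ceph-deploy --cluster mail mon create-initial
    # the named cluster would then read /etc/ceph/mail.conf instead of /etc/ceph/ceph.conf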





[ceph-users] Infernalis: best practices to start/stop

2015-11-26 Thread Marc Boisis

Hi,

I would like to know the best practices for starting or stopping all the OSDs of a node
with Infernalis.
Before, with init, we used « /etc/init.d/ceph start »; now, with systemd, I have a unit
per OSD: "systemctl start ceph-osd@171.service".
Where is the global one?
Thanks in advance!


Re: [ceph-users] Infernalis: best practices to start/stop

2015-11-26 Thread Marc Boisis
The documentation is good, but it doesn't work on my CentOS 7:
root@cephrr1n8:/root > systemctl status "ceph*"
ceph\x2a.service
   Loaded: not-found (Reason: No such file or directory)
   Active: inactive (dead)

Maybe it's a bug in CentOS's systemd release. Is anybody else running CentOS 7 +
Infernalis?
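
As a workaround I will try the target units that the Infernalis packages are supposed to
ship (untested on my side, so treat this as a sketch):

    systemctl start ceph.target        # should start every Ceph daemon on the node
    systemctl stop ceph-osd.target     # should stop only the OSDs, if that target exists
    systemctl status ceph-osd@171      # per-OSD units still work individually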


> On 26 Nov 2015, at 13:15, Daniel Swarbrick <daniel.swarbr...@profitbricks.com> wrote:
> 
> SUSE has pretty good documentation about interacting with Ceph using
> systemctl -
> https://www.suse.com/documentation/ses-1/book_storage_admin/data/ceph_operating_services.html
> 
> The following should work:
> 
> systemctl start ceph-osd*
> 
> On 26/11/15 12:46, Marc Boisis wrote:
>> 
>> Hi,
>> 
>> I would like to know the best practices for starting or stopping all the OSDs
>> of a node with Infernalis.
>> Before, with init, we used « /etc/init.d/ceph start »; now, with systemd, I
>> have a unit per OSD: "systemctl start ceph-osd@171.service".
>> Where is the global one?
>> Thanks in advance!



[ceph-users] problem with rbd map

2015-03-12 Thread Marc Boisis
I’m trying to create my first Ceph disk from a client named bjorn:

[ceph@bjorn ~]$ rbd create foo --size 512000 -m helga -k 
/etc/ceph/ceph.client.admin.keyring
[ceph@bjorn ~]$ sudo rbd map foo --pool pool_ulr_1 --name client.admin -m 
helga.univ-lr.fr -k /etc/ceph/ceph.client.admin.keyring
rbd: sysfs write failed
rbd: map failed: (2) No such file or directory

Can you help me find the problem?




[ceph@bjorn ~]$ ceph -v
ceph version 0.87.1 (283c2e7cfa2457799f534744d7d549f83ea1335e)

[ceph@bjorn ~]$ ceph -s
cluster cd7dd0a4-075c-4317-8aed-0758085ea9d2
 health HEALTH_OK
 monmap e5: 5 mons at 
{borg=10.10.10.58:6789/0,floki=10.10.10.57:6789/0,helga=10.10.10.64:6789/0,horik=10.10.10.60:6789/0,siggy=10.10.10.59:6789/0},
 election epoch 88, quorum 0,1,2,3,4 floki,borg,siggy,horik,helga
 osdmap e732: 60 osds: 60 up, 60 in
  pgmap v2352: 4160 pgs, 2 pools, 131 bytes data, 2 objects
4145 MB used, 218 TB / 218 TB avail
4160 active+clean

[ceph@bjorn ~]$ lsmod | grep rbd
rbd                    73133  0
libceph               235953  1 rbd


with strace: 

open("/sys/bus/rbd/add_single_major", O_WRONLY) = 4
write(4, "10.10.10.64:6789 name=admin,key=...", 61) = -1 ENOENT (No such file or directory)
close(4)                                = 0
write(2, "rbd: sysfs write failed", 23rbd: sysfs write failed) = 23
write(2, "\n", 1
)                                       = 1
close(3)                                = 0
write(2, "rbd: map failed: ", 17rbd: map failed: )   = 17
write(2, "(2) No such file or directory", 29(2) No such file or directory) = 29
write(2, "\n", 1
)                                       = 1
exit_group(2)                           = ?
+++ exited with 2 +++



[ceph@bjorn ~]$ ll /sys/bus/rbd/add_single_major
--w------- 1 root root 4096 Mar 12 12:01 /sys/bus/rbd/add_single_major

thanks


Re: [ceph-users] problem with rbd map

2015-03-12 Thread Marc Boisis
In dmesg:
[ 5981.113104] libceph: client14929 fsid cd7dd0a4-075c-4317-8aed-0758085ea9d2
[ 5981.115853] libceph: mon0 10.10.10.64:6789 session established

My systems are RHEL 7 with 3.10.0-229.el7.x86_64 kernel



 
 On Thu, Mar 12, 2015 at 3:33 PM, Marc Boisis marc.boi...@univ-lr.fr wrote:
 I’m trying to create my first ceph disk from a client named bjorn :
 
 [ceph@bjorn ~]$ rbd create foo --size 512000 -m helga -k 
 /etc/ceph/ceph.client.admin.keyring
 [ceph@bjorn ~]$ sudo rbd map foo --pool pool_ulr_1 --name client.admin -m 
 helga.univ-lr.fr -k /etc/ceph/ceph.client.admin.keyring
 rbd: sysfs write failed
 rbd: map failed: (2) No such file or directory
 
 Can you help me to find the problem ?
 
 Which kernel is this?  Is there anything in dmesg?
 
 Thanks,
 
Ilya



Re: [ceph-users] problem with rbd map

2015-03-12 Thread Marc Boisis
Thanks a lot, it works now:

ROOT:bjorn:/root  rbd create foo --pool pool_ulr_1 --size 512000 -m 
helga.univ-lr.fr -k /etc/ceph/ceph.client.admin.keyring
ROOT:bjorn:/root  rbd map foo --pool pool_ulr_1 --name client.admin -m 
helga.univ-lr.fr -k /etc/ceph/ceph.client.admin.keyring
/dev/rbd0
ROOT:bjorn:/root  
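
For the record, two quick checks that should have shown the mismatch (both are standard
rbd subcommands, output omitted here):

    rbd ls                  # lists images in the default pool (rbd) - foo ended up here
    rbd ls pool_ulr_1       # lists images in the custom pool - foo was missing
    rbd showmapped          # shows which images are currently mapped and on which device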


 On 12 Mar 2015, at 13:42, Ilya Dryomov idryo...@gmail.com wrote:
 
 On Thu, Mar 12, 2015 at 3:33 PM, Marc Boisis marc.boi...@univ-lr.fr wrote:
 I’m trying to create my first ceph disk from a client named bjorn :
 
 [ceph@bjorn ~]$ rbd create foo --size 512000 -m helga -k 
 /etc/ceph/ceph.client.admin.keyring
 [ceph@bjorn ~]$ sudo rbd map foo --pool pool_ulr_1 --name client.admin -m 
 helga.univ-lr.fr -k /etc/ceph/ceph.client.admin.keyring
 rbd: sysfs write failed
 rbd: map failed: (2) No such file or directory
 
 Can you help me to find the problem ?
 
 Ah, you are creating the image in the default pool (rbd), but trying to
 map it from a custom pool (pool_ulr_1) - hence the -ENOENT.
 
 Thanks,
 
Ilya
