Re: [ceph-users] ceph nautilus cluster name

2020-01-16 Thread Ignazio Cassano
Hello Stefan, but if I want to use rbd mirroring I must have site-a.conf
and site-b.conf on one of my nodes, probably one of the mon nodes. Is it
only a configuration on the ceph client side?
Thanks
Ignazio
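
For reference, a minimal sketch of the client-side layout on the rbd-mirror
node (pool name and client id are placeholders; both clusters keep their
internal name "ceph"):

# /etc/ceph/ on the rbd-mirror host:
#   site-a.conf  site-a.client.mirror.keyring
#   site-b.conf  site-b.client.mirror.keyring
# --cluster only selects which conf/keyring pair the client loads;
# it does not rename either cluster:
rbd --cluster site-a --id mirror mirror pool info rbd
rbd --cluster site-b --id mirror mirror pool info rbd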


On Thu, 16 Jan 2020, 22:13 Stefan Kooman  wrote:

> Quoting Ignazio Cassano (ignaziocass...@gmail.com):
> > Hello, I just deployed nautilus with ceph-deploy.
> > I did not find any option to give a cluster name to my ceph, so its name
> > is "ceph".
> > Please, how can I change my cluster name without reinstalling?
> >
> > Please, how can I set the cluster name in the installation phase?
>
> TL;DR: You don't want to name it anything else. This feature was hardly
> used and therefore deprecated. You can find some historic info here:
>
> http://lists.ceph.com/pipermail/ceph-users-ceph.com/2017-November/022202.html
>
> I'm not sure if naming support has already been removed from the code, but
> in any case don't try to name it anything else.
>
> Gr. Stefan
>
>
> --
> | BIT BV  https://www.bit.nl/  Kamer van Koophandel 09090351
> | GPG: 0xD14839C6   +31 318 648 688 / i...@bit.nl
>
___
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com


[ceph-users] ceph nautilus cluster name

2020-01-16 Thread Ignazio Cassano
Hello, I just deployed nautilus with ceph-deploy.
I did not find any option to give a cluster name to my ceph, so its name is
"ceph".
Please, how can I change my cluster name without reinstalling?

Please, how can I set the cluster name in the installation phase?

Many thanks for help
Ignazio
___
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com


Re: [ceph-users] ceph luminous bluestore poor random write performances

2020-01-03 Thread Ignazio Cassano
Actually, performance is not so bad, but it drops a lot if you run the same
test with 2-3 instances at the same time.
With iSCSI on an EMC Unity with SAS disks, performance is a little higher,
but it does not drop as much when you run the same test with 2-3 instances
at the same time.
Ignazio

On Thu, 2 Jan 2020, 11:19 Sinan Polat  wrote:

> Hi,
>
> Your performance is not that bad, is it? What performance do you expect?
>
> I just ran the same test.
> 12 Node, SATA SSD Only:
>READ: bw=63.8MiB/s (66.9MB/s), 63.8MiB/s-63.8MiB/s (66.9MB/s-66.9MB/s),
> io=3070MiB (3219MB), run=48097-48097msec
>   WRITE: bw=21.3MiB/s (22.4MB/s), 21.3MiB/s-21.3MiB/s (22.4MB/s-22.4MB/s),
> io=1026MiB (1076MB), run=48097-48097msec
>
> 6 Node, SAS Only:
>READ: bw=22.1MiB/s (23.2MB/s), 22.1MiB/s-22.1MiB/s (23.2MB/s-23.2MB/s),
> io=3070MiB (3219MB), run=138650-138650msec
>   WRITE: bw=7578KiB/s (7759kB/s), 7578KiB/s-7578KiB/s (7759kB/s-7759kB/s),
> io=1026MiB (1076MB), run=138650-138650msec
>
> This is OpenStack Queens with Ceph FileStore (Luminous).
>
> Kind regards,
> Sinan Polat
>
> > On 2 January 2020 at 10:59, Stefan Kooman  wrote:
> >
> >
> > Quoting Ignazio Cassano (ignaziocass...@gmail.com):
> > > Hello All,
> > > I installed ceph luminous with openstack, and using fio in a virtual
> machine
> > > I got slow random writes:
> > >
> > > fio --randrepeat=1 --ioengine=libaio --direct=1 --gtod_reduce=1
> --name=test
> > > --filename=random_read_write.fio --bs=4k --iodepth=64 --size=4G
> > > --readwrite=randrw --rwmixread=75
> >
> > Do you use virtio-scsi with a SCSI queue per virtual CPU core? How many
> > cores do you have? I suspect that the queue depth is hampering
> > throughput here ... but is throughput performance really interesting
> > anyway for your use case? Low latency generally matters most.
> >
> > Gr. Stefan
> >
> >
> > --
> > | BIT BV  https://www.bit.nl/  Kamer van Koophandel 09090351
> > | GPG: 0xD14839C6   +31 318 648 688 / i...@bit.nl
> > ___
> > ceph-users mailing list
> > ceph-users@lists.ceph.com
> > http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com
>
___
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com


Re: [ceph-users] ceph luminous bluestore poor random write performances

2020-01-02 Thread Ignazio Cassano
Hi Stefan, using fio with bs=64k I got very good performance.
I am not skilled on storage, but the Linux file system block size is 4k.
So, how can I tune the Ceph configuration to get the best performance
with bs=4k?
Regards
Ignazio
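
For what it's worth, a quick way to see whether the 4k results are limited by
a single queue rather than by the cluster is to run the same fio job with more
parallel jobs (a sketch; file name and sizes as in the original test):

fio --randrepeat=1 --ioengine=libaio --direct=1 --gtod_reduce=1 --name=test \
    --filename=random_read_write.fio --bs=4k --iodepth=64 --size=4G \
    --readwrite=randrw --rwmixread=75 --numjobs=4 --group_reporting
# with 4k blocks the interesting number is IOPS rather than MB/s; if IOPS
# scale with numjobs, the bottleneck is queueing in the guest, not the OSDs.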



On Thu, 2 Jan 2020 at 10:59, Stefan Kooman  wrote:

> Quoting Ignazio Cassano (ignaziocass...@gmail.com):
> > Hello All,
> > I installed ceph luminous with openstack, and using fio in a virtual
> machine
> > I got slow random writes:
> >
> > fio --randrepeat=1 --ioengine=libaio --direct=1 --gtod_reduce=1
> --name=test
> > --filename=random_read_write.fio --bs=4k --iodepth=64 --size=4G
> > --readwrite=randrw --rwmixread=75
>
> Do you use virtio-scsi with a SCSI queue per virtual CPU core? How many
> cores do you have? I suspect that the queue depth is hampering
> throughput here ... but is throughput performance really interesting
> anyway for your use case? Low latency generally matters most.
>
> Gr. Stefan
>
>
> --
> | BIT BV  https://www.bit.nl/  Kamer van Koophandel 09090351
> | GPG: 0xD14839C6   +31 318 648 688 / i...@bit.nl
>
___
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com


Re: [ceph-users] ceph luminous bluestore poor random write performances

2020-01-02 Thread Ignazio Cassano
Hi Stefan,
I did not understand your question, but that's my fault.
I am using virtio-scsi on my virtual machine.
The virtual machine has two cores.

Or do you mean cores on the OSD servers?


Regards
Ignazio
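
For reference, one way to check from the hypervisor whether the guest really
has a multi-queue virtio-scsi controller (the domain name is a placeholder; in
OpenStack the controller model usually comes from the image properties
hw_scsi_model=virtio-scsi and hw_disk_bus=scsi):

virsh dumpxml instance-0000001a | grep -A2 "controller type='scsi'"
# look for something like:
#   <controller type='scsi' index='0' model='virtio-scsi'>
#     <driver queues='2'/>
# if the <driver queues='...'/> line is missing, the controller has a single
# request queue regardless of the number of vCPUs.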

On Thu, 2 Jan 2020 at 10:59, Stefan Kooman  wrote:

> Quoting Ignazio Cassano (ignaziocass...@gmail.com):
> > Hello All,
> > I installed ceph luminous with openstack, and using fio in a virtual
> machine
> > I got slow random writes:
> >
> > fio --randrepeat=1 --ioengine=libaio --direct=1 --gtod_reduce=1
> --name=test
> > --filename=random_read_write.fio --bs=4k --iodepth=64 --size=4G
> > --readwrite=randrw --rwmixread=75
>
> Do you use virtio-scsi with a SCSI queue per virtual CPU core? How many
> cores do you have? I suspect that the queue depth is hampering
> throughput here ... but is throughput performance really interesting
> anyway for your use case? Low latency generally matters most.
>
> Gr. Stefan
>
>
> --
> | BIT BV  https://www.bit.nl/  Kamer van Koophandel 09090351
> | GPG: 0xD14839C6   +31 318 648 688 / i...@bit.nl
>
___
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com


[ceph-users] ceph luminous bluestore poor random write performances

2020-01-02 Thread Ignazio Cassano
Hello All,
I installed ceph luminous with openstack, and using fio in a virtual machine
I got slow random writes:

fio --randrepeat=1 --ioengine=libaio --direct=1 --gtod_reduce=1 --name=test
--filename=random_read_write.fio --bs=4k --iodepth=64 --size=4G
--readwrite=randrw --rwmixread=75

Run status group 0 (all jobs):
   READ: bw=52.7MiB/s (55.3MB/s), 52.7MiB/s-52.7MiB/s (55.3MB/s-55.3MB/s),
io=3070MiB (3219MB), run=58211-58211msec
  WRITE: bw=17.6MiB/s (18.5MB/s), 17.6MiB/s-17.6MiB/s (18.5MB/s-18.5MB/s),
io=1026MiB (1076MB), run=58211-58211msec

[root@tst2-osctrl01 ansible]# ceph osd tree
ID CLASS WEIGHT    TYPE NAME           STATUS REWEIGHT PRI-AFF
-1       108.26358 root default
-3        36.08786     host p2-ceph-01
 0   hdd   3.62279 osd.0   up  1.0 1.0
 1   hdd   3.60529 osd.1   up  1.0 1.0
 2   hdd   3.60529 osd.2   up  1.0 1.0
 3   hdd   3.60529 osd.3   up  1.0 1.0
 4   hdd   3.60529 osd.4   up  1.0 1.0
 5   hdd   3.62279 osd.5   up  1.0 1.0
 6   hdd   3.60529 osd.6   up  1.0 1.0
 7   hdd   3.60529 osd.7   up  1.0 1.0
 8   hdd   3.60529 osd.8   up  1.0 1.0
 9   hdd   3.60529 osd.9   up  1.0 1.0
-5        36.08786     host p2-ceph-02
10   hdd   3.62279 osd.10  up  1.0 1.0
11   hdd   3.60529 osd.11  up  1.0 1.0
12   hdd   3.60529 osd.12  up  1.0 1.0
13   hdd   3.60529 osd.13  up  1.0 1.0
14   hdd   3.60529 osd.14  up  1.0 1.0
15   hdd   3.62279 osd.15  up  1.0 1.0
16   hdd   3.60529 osd.16  up  1.0 1.0
17   hdd   3.60529 osd.17  up  1.0 1.0
18   hdd   3.60529 osd.18  up  1.0 1.0
19   hdd   3.60529 osd.19  up  1.0 1.0
-7        36.08786     host p2-ceph-03
20   hdd   3.62279 osd.20  up  1.0 1.0
21   hdd   3.60529 osd.21  up  1.0 1.0
22   hdd   3.60529 osd.22  up  1.0 1.0
23   hdd   3.60529 osd.23  up  1.0 1.0
24   hdd   3.60529 osd.24  up  1.0 1.0
25   hdd   3.62279 osd.25  up  1.0 1.0
26   hdd   3.60529 osd.26  up  1.0 1.0
27   hdd   3.60529 osd.27  up  1.0 1.0
28   hdd   3.60529 osd.28  up  1.0 1.0
29   hdd   3.60529 osd.29  up  1.0 1.0

Each OSD server has 10 x 4TB OSDs and two SSDs (2 x 2TB).

Each SSD is partitioned into 5 partitions (each partition is 384 GB) for
the bluestore DB and WAL.
Each OSD and mon host has two 10 Gb NICs for the ceph public and cluster
networks.

OSD servers are PowerEdge R7425 with 256 GB RAM and a MegaRAID SAS-3 3108.
No NVMe disks are present.


Ceph.conf is the following:
[global]
fsid = 9a33214b-86df-4ef0-9199-5f7637cff1cd
public_network = 10.102.189.128/25
cluster_network = 10.102.143.16/28
mon_initial_members = tst2-osctrl01, tst2-osctrl02, tst2-osctrl03
mon_host = 10.102.189.200,10.102.189.201,10.102.189.202
auth_cluster_required = cephx
auth_service_required = cephx
auth_client_required = cephx
osd pool default size = 3
osd pool default min size = 2
mon_max_pg_per_osd = 1024
osd max pg per osd hard ratio = 20

[mon]
mon compact on start = true

[osd]
bluestore cache autotune = 0
#bluestore cache kv ratio = 0.2
#bluestore cache meta ratio = 0.8
bluestore cache size ssd = 8G
bluestore csum type = none
bluestore extent map shard max size = 200
bluestore extent map shard min size = 50
bluestore extent map shard target size = 100
bluestore rocksdb options =
compression=kNoCompression,max_write_buffer_number=32,min_write_buffer_number_to_merge=2,recycle_log_file_num=32,compaction_style=kCompactionStyleLevel,write_buffer_size=67108864,target_file_size_base=67108864,max_background_compactions=31,level0_file_num_compaction_trigger=8,level0_slowdown_writes_trigger=32,level0_stop_writes_trigger=64,max_bytes_for_level_base=536870912,compaction_threads=32,max_bytes_for_level_multiplier=8,flusher_threads=8,compaction_readahead_size=2MB
osd map share max epochs = 100
osd max backfills = 5
osd memory target = 4294967296
osd op num shards = 8
osd op num threads per shard = 2


Any help, please ?

Ignazio
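
For comparison it can also help to run the same workload against an RBD image
directly with fio's rbd engine (requires fio built with rbd support), which
takes the guest and its virtio queueing out of the picture (a sketch; pool,
image and client names are placeholders):

# rbd create --size 4096 volumes/fio-test   # create a throwaway test image
fio --name=rbd-4k --ioengine=rbd --clientname=admin --pool=volumes \
    --rbdname=fio-test --direct=1 --bs=4k --iodepth=64 --size=4G \
    --readwrite=randrw --rwmixread=75
# if these numbers are much better than inside the VM, the limit is in the
# guest I/O path; if they are similar, the limit is on the OSD side (4k random
# writes on HDD OSDs are still bound by the HDDs, even with DB/WAL on SSD).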
___
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com


Re: [ceph-users] Ceph site is very slow

2015-04-15 Thread Ignazio Cassano
Many thanks

2015-04-15 10:44 GMT+02:00 Wido den Hollander w...@42on.com:

 On 04/15/2015 10:20 AM, Ignazio Cassano wrote:
  Hi all,
  why is ceph.com so slow?

 Not known right now. But you can try eu.ceph.com for your packages and
 downloads.

  It is impossible to download the files needed to install ceph.
  Regards
  Ignazio
 
 
 
  ___
  ceph-users mailing list
  ceph-users@lists.ceph.com
  http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com
 


 --
 Wido den Hollander
 42on B.V.
 Ceph trainer and consultant

 Phone: +31 (0)20 700 9902
 Skype: contact42on
 ___
 ceph-users mailing list
 ceph-users@lists.ceph.com
 http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com

___
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com


[ceph-users] Ceph site is very slow

2015-04-15 Thread Ignazio Cassano
Hi all,
why is ceph.com so slow?
It is impossible to download the files needed to install ceph.
Regards
Ignazio
___
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com


Re: [ceph-users] Centos 7 qemu

2014-10-06 Thread Ignazio Cassano
Hi all, thank you for your answers and your effort.
I'd also like to know if there is a difference between mapping rbd devices
and using them as normal block devices (/dev/rbdX) in KVM, versus using the
qemu/libvirt support for rbd.
Are there performance issues in the first case?
Regards
Ignazio
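
For reference, the two attachment paths being compared look roughly like this
(pool/image names are placeholders; the second form is what the qemu-kvm
command line quoted below uses):

# 1) kernel rbd: map on the host, then hand the block device to the guest
rbd map libvirt-pool/image2          # creates /dev/rbdN on the host
# the guest disk then points at /dev/rbdN like any local block device

# 2) librbd via qemu/libvirt: no /dev/rbd device, qemu talks to the cluster
qemu-img info rbd:libvirt-pool/image2
# in libvirt this becomes a <disk type='network'> source with protocol='rbd'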

2014-10-06 7:16 GMT+02:00 Vladislav Gorbunov vadi...@gmail.com:

 Try to disable selinux or run
 setsebool -P virt_use_execmem 1


 2014-10-06 8:38 GMT+12:00 Nathan Stratton nat...@robotics.net:

 I did the same thing, build the RPMs and now show rbd support, however
 when I try to start an image I get:

 2014-10-05 19:48:08.058+: 4524: error :
 qemuProcessWaitForMonitor:1889 : internal error: process exited while
 connecting to monitor: Warning: option deprecated, use lost_tick_policy
 property of kvm-pit instead.
 qemu-kvm: -drive
 file=rbd:volumes/volume-205e6cb4-15c1-4f8d-8bf4-aedcc1549968:id=volumes:key=AQCMrSlUSJvTLxAAO9U+3IZQSkLU8a3iWj7T5Q==:auth_supported=cephx\;none:mon_host=10.71.0.75\:6789\;10.71.0.76\:6789\;10.71.0.77\:6789\;10.71.0.78\:6789,if=none,id=drive-virtio-disk0,format=raw,serial=205e6cb4-15c1-4f8d-8bf4-aedcc1549968,cache=none:
 could not open disk image
 rbd:volumes/volume-205e6cb4-15c1-4f8d-8bf4-aedcc1549968:id=volumes:key=AQCMrSlUSJvTLxAAO9U+3IZQSkLU8a3iWj7T5Q==:auth_supported=cephx\;none:mon_host=10.71.0.75\:6789\;10.71.0.76\:6789\;10.71.0.77\:6789\;10.71.0.78\:6789:
 Driver 'rbd' is not whitelisted

 I tried with and without auth.


 
 nathan stratton | vp technology | broadsoft, inc | +1-240-404-6580 |
 www.broadsoft.com

 On Sun, Oct 5, 2014 at 3:51 PM, Henrik Korkuc li...@kirneh.eu wrote:

  Hi,
 Centos 7 qemu out of the box does not support rbd.

 I had to build the package with rbd support manually, with %define rhev 1
 in the qemu-kvm spec file. I also had to salvage some files from the src.rpm
 file which were missing from the CentOS git.


 On 2014.10.04 11:31, Ignazio Cassano wrote:

 Hi all,
 I'd like to know if centos 7 qemu and libvirt support rbd or if there
 are some extra packages.
 Regards

 Ignazio


 ___
 ceph-users mailing list
 ceph-us...@lists.ceph.com
 http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com



 ___
 ceph-users mailing list
 ceph-users@lists.ceph.com
 http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com



 ___
 ceph-users mailing list
 ceph-users@lists.ceph.com
 http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com



 ___
 ceph-users mailing list
 ceph-users@lists.ceph.com
 http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com


___
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com


Re: [ceph-users] Centos 7 qemu

2014-10-06 Thread Ignazio Cassano
Hi,
but what kernel version are you using?
I think the rbd kernel module is not in the CentOS 7 kernel.
Have you built it from source?
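
A quick check for the kernel side of it (regardless of the qemu/librbd
question):

modprobe rbd && lsmod | grep rbd
# note: qemu uses librbd in userspace, so the kernel module is only needed
# for 'rbd map', not for attaching rbd disks to VMs through qemu/libvirt.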


2014-10-06 14:08 GMT+02:00 Nathan Stratton nat...@robotics.net:

 SELinux is already disabled

 [root@virt01a /]# setsebool -P virt_use_execmem 1
 setsebool:  SELinux is disabled.



 
 nathan stratton | vp technology | broadsoft, inc | +1-240-404-6580 |
 www.broadsoft.com

 On Mon, Oct 6, 2014 at 1:16 AM, Vladislav Gorbunov vadi...@gmail.com
 wrote:

 Try to disable selinux or run
 setsebool -P virt_use_execmem 1


 2014-10-06 8:38 GMT+12:00 Nathan Stratton nat...@robotics.net:

 I did the same thing, build the RPMs and now show rbd support, however
 when I try to start an image I get:

 2014-10-05 19:48:08.058+: 4524: error :
 qemuProcessWaitForMonitor:1889 : internal error: process exited while
 connecting to monitor: Warning: option deprecated, use lost_tick_policy
 property of kvm-pit instead.
 qemu-kvm: -drive
 file=rbd:volumes/volume-205e6cb4-15c1-4f8d-8bf4-aedcc1549968:id=volumes:key=AQCMrSlUSJvTLxAAO9U+3IZQSkLU8a3iWj7T5Q==:auth_supported=cephx\;none:mon_host=10.71.0.75\:6789\;10.71.0.76\:6789\;10.71.0.77\:6789\;10.71.0.78\:6789,if=none,id=drive-virtio-disk0,format=raw,serial=205e6cb4-15c1-4f8d-8bf4-aedcc1549968,cache=none:
 could not open disk image
 rbd:volumes/volume-205e6cb4-15c1-4f8d-8bf4-aedcc1549968:id=volumes:key=AQCMrSlUSJvTLxAAO9U+3IZQSkLU8a3iWj7T5Q==:auth_supported=cephx\;none:mon_host=10.71.0.75\:6789\;10.71.0.76\:6789\;10.71.0.77\:6789\;10.71.0.78\:6789:
 Driver 'rbd' is not whitelisted

 I tried with and without auth.


 
 nathan stratton | vp technology | broadsoft, inc | +1-240-404-6580 |
 www.broadsoft.com

 On Sun, Oct 5, 2014 at 3:51 PM, Henrik Korkuc li...@kirneh.eu wrote:

  Hi,
 Centos 7 qemu out of the box does not support rbd.

 I had to build the package with rbd support manually, with %define rhev 1
 in the qemu-kvm spec file. I also had to salvage some files from the src.rpm
 file which were missing from the CentOS git.


 On 2014.10.04 11:31, Ignazio Cassano wrote:

 Hi all,
 I'd like to know if centos 7 qemu and libvirt support rbd or if there
 are some extra packages.
 Regards

 Ignazio


 ___
 ceph-users mailing list
 ceph-us...@lists.ceph.com
 http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com



 ___
 ceph-users mailing list
 ceph-users@lists.ceph.com
 http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com



 ___
 ceph-users mailing list
 ceph-users@lists.ceph.com
 http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com




 ___
 ceph-users mailing list
 ceph-users@lists.ceph.com
 http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com


___
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com


[ceph-users] Centos 7 qemu

2014-10-04 Thread Ignazio Cassano
Hi all,
I'd like to know if centos 7 qemu and libvirt support rbd or if there are
some extra packages.
Regards

Ignazio
___
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com


[ceph-users] rbd snap protect error

2014-06-09 Thread Ignazio Cassano
Hi all,
I installed Ceph Firefly and now I am playing with rbd snapshots.
I created a pool (libvirt-pool) with two images:

libvirtimage1 (format 1)
image2 (format 2).

When I try to protect the first image:

rbd --pool libvirt-pool snap protect --image libvirtimage1 --snap
libvirt-snap

it gives me an error because the image is in format 1:

image must support layering.

This is correct because libvirtimage1 is in format 1.

But if I try with the second image:
rbd --pool libvirt-pool snap protect --image image2  --snap image2-snap

it gives the following:

snap failed (2) No such file or directory


Image2 exists, in fact I can see it:

rbd -p libvirt-pool ls

libvirtimage1
image2


Could someone help me, please ?

Regards
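
A quick way to double-check the image format and features before protecting a
snapshot (same pool and image names as above):

rbd --pool libvirt-pool info image2
# the output includes "format: 2" and, for format 2 images, a "features:" line
# that should list "layering".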
___
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com


Re: [ceph-users] rbd snap protect error

2014-06-09 Thread Ignazio Cassano
Many thanks...
Can I create a format 2 image (with support for layering) using the
qemu-img command?


2014-06-09 13:05 GMT+02:00 Ilya Dryomov ilya.dryo...@inktank.com:

 On Mon, Jun 9, 2014 at 3:01 PM, Ignazio Cassano
 ignaziocass...@gmail.com wrote:
  Hi all,
  I installed Ceph Firefly and now I am playing with rbd snapshots.
  I created a pool (libvirt-pool) with two images:
 
  libvirtimage1 (format 1)
  image2 (format 2).
 
  When I try to protect the first image:
 
  rbd --pool libvirt-pool snap protect --image libvirtimage1 --snap
  libvirt-snap
 
  it gives me an error because the image is in format 1:
 
  image must support layering.
 
  This is correct because libvirtimage1 is in format 1.
 
  But If I try with the second image:
  rbd --pool libvirt-pool snap protect --image image2  --snap image2-snap
 
  it gives the following:
 
  snap failed (2) No such file or directory
 
 
  Image2 exists, in fact I can see it:
 
  rbd -p libvirt-pool ls
 
  libvirtimage1
  image2
 
 
  Could someone help me, please ?

 You have to create the snapshot first:

 rbd --pool libvirt-pool snap create --image image2  --snap image2-snap
 rbd --pool libvirt-pool snap protect --image image2  --snap image2-snap

 Thanks,

 Ilya

___
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com


Re: [ceph-users] rbd snap protect error

2014-06-09 Thread Ignazio Cassano
Many thanks


2014-06-09 14:04 GMT+02:00 Wido den Hollander w...@42on.com:

 On 06/09/2014 02:00 PM, Ignazio Cassano wrote:

 Many thanks...
 Can I create a format 2 image (with support for layering) using the
 qemu-img command?


 Yes:

 qemu-img create -f raw rbd:rbd/image1:rbd_default_format=2 10G

 'rbd_default_format' is a Ceph setting which is passed down to librbd
 directly.

 Wido
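
The same can also be done with the rbd CLI, or made the default per client,
for example (pool/image name and size in MB are placeholders):

rbd create --image-format 2 --size 10240 libvirt-pool/image3
# or in ceph.conf on the client:
# [client]
# rbd default format = 2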



 2014-06-09 13:05 GMT+02:00 Ilya Dryomov ilya.dryo...@inktank.com
 mailto:ilya.dryo...@inktank.com:


 On Mon, Jun 9, 2014 at 3:01 PM, Ignazio Cassano
 ignaziocass...@gmail.com mailto:ignaziocass...@gmail.com wrote:
   Hi all,
   I installed Ceph Firefly and now I am playing with rbd snapshots.
   I created a pool (libvirt-pool) with two images:
  
   libvirtimage1 (format 1)
   image2 (format 2).
  
   When I try to protect the first image:
  
   rbd --pool libvirt-pool snap protect --image libvirtimage1 --snap
   libvirt-snap
  
   it gives me an error because the image is in format 1:
  
   image must support layering.
  
   This is correct because libvirtimage1 is in format 1.
  
   But If I try with the second image:
   rbd --pool libvirt-pool snap protect --image image2  --snap
 image2-snap
  
   it gives the following:
  
   snap failed (2) No such file or directory
  
  
   Image2 exists, in fact I can see it:
  
   rbd -p libvirt-pool ls
  
   libvirtimage1
   image2
  
  
   Could someone help me, please ?

 You have to create the snapshot first:

 rbd --pool libvirt-pool snap create --image image2  --snap image2-snap
 rbd --pool libvirt-pool snap protect --image image2  --snap
 image2-snap

 Thanks,

  Ilya




 ___
 ceph-users mailing list
 ceph-users@lists.ceph.com
 http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com



 --
 Wido den Hollander
 42on B.V.
 Ceph trainer and consultant

 Phone: +31 (0)20 700 9902
 Skype: contact42on
 ___
 ceph-users mailing list
 ceph-users@lists.ceph.com
 http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com

___
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com


[ceph-users] rbdmap issue

2014-06-06 Thread Ignazio Cassano
Hi all,
I configured a Ceph Firefly cluster on Ubuntu 12.04.
I also configured a CentOS 6.5 client with ceph-0.80.1-2.el6.x86_64
and kernel 3.14.2-1.el6.elrepo.x86_64.
On CentOS I am able to use rbd remote block devices, but if I try to map
them with rbdmap no links are generated.
Last week, before the updates, when I mapped an rbd some links were created:

/dev/rbd/poolname/imagename linked to /dev/rbd0

Now only /dev/rbd0 is created, and if I restart rbdmap the maps are duplicated.
The following is the output of rbd showmapped after 2 rbdmap restarts:

id pool image   snap device
0  libvirt-pool ubuntu2 -/dev/rbd0
1  libvirt-pool ubuntu2 -/dev/rbd1

Regards
Ignazio
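
For reference, the mappings themselves come from /etc/ceph/rbdmap, one image
per line, roughly like this (client name and keyring path are placeholders);
the /dev/rbd/<pool>/<image> symlink is created by a udev rule rather than by
the rbdmap script itself:

# /etc/ceph/rbdmap
libvirt-pool/ubuntu2  id=admin,keyring=/etc/ceph/ceph.client.admin.keyring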
___
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com


Re: [ceph-users] rbdmap issue

2014-06-06 Thread Ignazio Cassano
I am sorry for my mistake, I meant:
service rbdmap restart
After the restart no links are created and the rbd mapping is duplicated.


2014-06-06 16:07 GMT+02:00 Ilya Dryomov ilya.dryo...@inktank.com:

 On Fri, Jun 6, 2014 at 4:47 PM, Ignazio Cassano
 ignaziocass...@gmail.com wrote:
  Hi all,
  I configured a Ceph Firefly cluster on Ubuntu 12.04.
  I also configured a CentOS 6.5 client with ceph-0.80.1-2.el6.x86_64
  and kernel 3.14.2-1.el6.elrepo.x86_64.
  On CentOS I am able to use rbd remote block devices, but if I try to map
  them with rbdmap no links are generated.
  Last week, before the updates, when I mapped an rbd some links were created:

  /dev/rbd/poolname/imagename linked to /dev/rbd0

  Now only /dev/rbd0 is created, and if I restart rbdmap the maps are
  duplicated.
  The following is the output of rbd showmapped after 2 rbdmap restarts:
 
  id pool image   snap device
  0  libvirt-pool ubuntu2 -/dev/rbd0
  1  libvirt-pool ubuntu2 -/dev/rbd1

 Hi Ignazio,

 What do you mean by a 'rbd restart' ?

 Can you try locating a file named 50-rbd.rules on your system?  It
 should be in /usr/lib/udev/rules.d/ or similar.

 If that file is present, what's the output of

 udevadm test /sys/devices/virtual/block/rbd0

 provided /dev/rbd0 exists, i.e. you have an image mapped?

 Thanks,

 Ilya

___
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com


Re: [ceph-users] rbdmap issue

2014-06-06 Thread Ignazio Cassano
Many thanks


2014-06-06 16:25 GMT+02:00 Ilya Dryomov ilya.dryo...@inktank.com:

 On Fri, Jun 6, 2014 at 6:15 PM, Ignazio Cassano
 ignaziocass...@gmail.com wrote:
  Hi Ilya, no file named 50-rbd.rules exists on my system.

 My guess would be that the upgrade went south.  In order for the symlinks
 to be created, that file should exist on the client (i.e. the system you run
 'rbd map' on).  As a temporary fix, you can grab it from [1] and stick it
 into /lib/udev/rules.d.

 [1] https://raw.githubusercontent.com/ceph/ceph/master/udev/50-rbd.rules

 Thanks,

 Ilya
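
In practice that temporary fix boils down to something like this (a sketch;
the rules directory may differ per distro):

curl -o /lib/udev/rules.d/50-rbd.rules \
    https://raw.githubusercontent.com/ceph/ceph/master/udev/50-rbd.rules
udevadm control --reload-rules
# then re-map the image (or trigger udev) so the /dev/rbd/<pool>/<image>
# symlinks are created again
udevadm trigger --subsystem-match=block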

___
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com


[ceph-users] ceph networks explanation

2014-06-05 Thread Ignazio Cassano
Hi all,
could someone explain what happens when I create an rbd image in a cluster
that uses both a public and a cluster network?

The client is on the public network and creates the image with the rbd command.
I think the client contacts the monitor on the public network.



What is the network used for replication ?

Regards
Ignazio
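
For reference, the split is configured in ceph.conf roughly like this (subnets
are placeholders); clients talk to the monitors and OSDs over the public
network, while replication and recovery between OSDs use the cluster network:

[global]
public network  = 192.168.10.0/24   # client <-> MON/OSD traffic
cluster network = 192.168.20.0/24   # OSD <-> OSD replication and recovery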
___
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com


Re: [ceph-users] ceph networks explanation

2014-06-05 Thread Ignazio Cassano
many thanks


2014-06-05 13:33 GMT+02:00 Wido den Hollander w...@42on.com:

 On 06/05/2014 12:37 PM, Ignazio Cassano wrote:

 Hi all,
 could someone explain what happens when I create an rbd image in a cluster
 that uses both a public and a cluster network?

 The client is on the public network and creates the image with the rbd command.
 I think the client contacts the monitor on the public network.



 What is the network used for replication ?


 The cluster network. Clients communicate with the OSDs over the public
 network, but replication (and thus recovery) happens over the cluster
 network (if available).


 Regards
 Ignazio


 ___
 ceph-users mailing list
 ceph-users@lists.ceph.com
 http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com



 --
 Wido den Hollander
 42on B.V.
 Ceph trainer and consultant

 Phone: +31 (0)20 700 9902
 Skype: contact42on
 ___
 ceph-users mailing list
 ceph-users@lists.ceph.com
 http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com

___
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com


[ceph-users] why use hadoop with ceph ?

2014-05-30 Thread Ignazio Cassano
Hi all,
I am testing ceph because I find it very interesting as far as remote block
devices are concerned.
But my company is very interested in big data,
so I read something about Hadoop and Ceph integration.
Can anyone suggest some documentation explaining the purpose of the
Ceph/Hadoop integration?
Why not use only Hadoop for big data?

Ignazio
___
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com


[ceph-users] ceph hostnames

2014-05-29 Thread Ignazio Cassano
Hi all, I am planning to install a ceph cluster and I have 3 nodes with 2
NICs each.
I read the documentation and it suggests setting up a public network and a
cluster network.
First, I need to know whether the public network is the one used by clients
to mount the ceph file system or to access rbd volumes, and whether the
cluster network is used by the nodes to exchange all the cluster-internal
traffic (replication etc.).
If that assumption is correct, should the node and monitor hostnames refer to
the public network address or to the cluster network address when I use the
ceph-deploy command?

Regards
Ignazio
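
For what it's worth, my understanding is that the hostnames given to
ceph-deploy should resolve to addresses on the public network, since that is
where the monitors listen; a quick sanity check (hostnames are placeholders):

getent hosts ceph-node1          # should return the public-network address
ceph-deploy new ceph-node1 ceph-node2 ceph-node3
grep -E 'mon_host|public' ceph.conf
# mon_host should contain public-network IPs; the cluster network is only
# used between OSDs.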
___
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com


[ceph-users] ceph nodes operanting system suggested

2014-05-29 Thread Ignazio Cassano
Hi all,
I am planning to install a ceph cluster for testing but, in the future, I'd
like to use it in production for KVM virtualization.
I'd like to use rbd and not the ceph file system.
Which Linux operating system would you suggest for the ceph nodes, and which
is the most stable ceph version for this kind of environment?

Regards
Ignazio
___
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com


[ceph-users] ceph.conf public network

2014-05-27 Thread Ignazio Cassano
Hi all,
I read a lot of email messages and I am confused, because in some of them the
public network in /etc/ceph/ceph.conf is written as:
public_network = a.b.c.d/netmask
and in others as:

public network = a.b.c.d/netmask

Does it depend on the ceph version?

Regards
Ignazio
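
For what it's worth, both spellings should be parsed the same way: Ceph treats
spaces and underscores in option names as interchangeable, so these two lines
set the same option (subnet is a placeholder):

[global]
# either spelling works; the parser normalizes spaces and underscores
public network = 10.0.0.0/24
# public_network = 10.0.0.0/24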
___
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com


[ceph-users] is cephfs ready for production ?

2014-05-19 Thread Ignazio Cassano
Hi all, I'd like to know whether cephfs is still under heavy development or
is ready for production.
The documentation reports that it is not production-ready, but I think the
documentation on ceph.com is not recent enough.

Regards
Ignazio
___
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com


[ceph-users] active+degraded cluster

2014-05-16 Thread Ignazio Cassano
Hi all, I successfully installed a ceph cluster (Firefly version) made up
of 3 OSDs and one monitor host.
After that I created a pool and one rbd image for KVM.
It works fine.
I verified that my pool has a replica size of 3, but I read the default
should be 2.
When I shut down an OSD and mark it out, ceph health displays an
active+degraded state and remains in this state until I add an OSD again.
Is this the correct behaviour?
Reading the documentation I understood that the cluster should repair itself
and return to the active+clean state.
Is it possible that it remains in the degraded state because I have a replica
size of 3 and only 2 OSDs?

Sorry for my bad English.

Ignazio
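
For reference, the replica count can be checked and changed per pool like this
(pool name is a placeholder); with size = 3 and only 2 OSDs left, the cluster
cannot place the third replica, so active+degraded is expected:

ceph osd pool get libvirt-pool size      # e.g. size: 3
ceph osd pool set libvirt-pool size 2    # only if 2 replicas are acceptable
ceph -s                                  # should go back to active+clean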
___
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com