Re: [ceph-users] data size less than 4 mb

2015-11-02 Thread mad Engineer
Thanks Gregory. On Sun, Nov 1, 2015 at 12:06 AM, Gregory Farnum <gfar...@redhat.com> wrote: > On Friday, October 30, 2015, mad Engineer <themadengin...@gmail.com> > wrote: > >> I am learning Ceph block storage and read that each object size is 4 MB. I >> am not c

[ceph-users] data size less than 4 mb

2015-10-31 Thread mad Engineer
I am learning Ceph block storage and read that each object size is 4 MB. I am not clear about the concepts of object storage yet: what will happen if the actual size of data written to a block is less than 4 MB, let's say 1 MB? Will it still create an object with 4 MB size and keep the rest of the space
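
For what it's worth, RBD only creates a backing RADOS object when the corresponding 4 MB stripe is first written, and objects are thin-provisioned, so a 1 MB write ends up consuming roughly 1 MB on disk rather than 4 MB. A minimal sketch to see this yourself (pool, image and object names are only examples; the exact object prefix is whatever rbd info reports):

    rbd create rbd/testimg --size 1024          # 1 GiB image, 4 MiB objects by default
    rbd info rbd/testimg                        # note the block_name_prefix, e.g. rbd_data.<id>
    rbd map rbd/testimg                         # maps to e.g. /dev/rbd0 (needs the kernel rbd module)
    dd if=/dev/urandom of=/dev/rbd0 bs=1M count=1 oflag=direct   # write only 1 MiB
    rados -p rbd ls | grep rbd_data             # only the stripes that were touched exist as objects
    rados -p rbd stat rbd_data.<id>.0000000000000000   # reported size is ~1 MiB, not 4 MiB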

Re: [ceph-users] Extreme slowness in SSD cluster with 3 nodes and 9 OSD with 3.16-3 kernel

2015-03-09 Thread mad Engineer
challenges. Nick -Original Message- From: ceph-users [mailto:ceph-users-boun...@lists.ceph.com] On Behalf Of mad Engineer Sent: 07 March 2015 10:55 To: Somnath Roy Cc: ceph-users Subject: Re: [ceph-users] Extreme slowness in SSD cluster with 3 nodes and 9 OSD with 3.16-3 kernel

Re: [ceph-users] Extreme slowness in SSD cluster with 3 nodes and 9 OSD with 3.16-3 kernel

2015-03-07 Thread mad Engineer
, February 28, 2015 12:59 PM *To:* 'mad Engineer'; Alexandre DERUMIER *Cc:* ceph-users *Subject:* RE: [ceph-users] Extreme slowness in SSD cluster with 3 nodes and 9 OSD with 3.16-3 kernel I would say check with a rados tool like ceph_smalliobench or rados bench first to see how much performance
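
Since the suggestion above is to baseline with rados bench before blaming RBD, a minimal sketch of that (pool name and durations are just examples; --no-cleanup keeps the objects so the read phases have something to read):

    rados bench -p rbd 60 write -b 4096 -t 16 --no-cleanup   # 4k writes, 16 in flight
    rados bench -p rbd 60 seq -t 16                          # sequential reads of the same objects
    rados bench -p rbd 60 rand -t 16                         # random reads
    rados -p rbd cleanup                                     # remove the benchmark objects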

Re: [ceph-users] Extreme slowness in SSD cluster with 3 nodes and 9 OSD with 3.16-3 kernel

2015-02-28 Thread mad Engineer
. In my last test with Giant, I was able to reach around 12iops with 6 OSDs / Intel S3500 SSDs, but I was CPU limited. - Original Mail - From: mad Engineer themadengin...@gmail.com To: ceph-users ceph-users@lists.ceph.com Sent: Saturday 28 February 2015 12:19:56 Subject: [ceph-users] Extreme
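
If you suspect you are CPU limited in the same way, it is worth watching the OSD processes while the benchmark runs; a quick sketch (assumes the sysstat package is installed):

    pidstat -u 1 -C ceph-osd     # per-OSD CPU usage, one sample per second
    sar -u 1                     # overall CPU; look for %idle dropping towards zero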

[ceph-users] Extreme slowness in SSD cluster with 3 nodes and 9 OSD with 3.16-3 kernel

2015-02-28 Thread mad Engineer
Hello All, I am trying ceph-firefly 0.80.8 (69eaad7f8308f21573c604f121956e64679a52a7) with 9 OSDs, all Samsung SSD 850 EVO, on 3 servers with 24 GB RAM, 16 cores @ 2.27 GHz, Ubuntu 14.04 LTS with the 3.16-3 kernel. All are connected to 10G ports with maximum MTU. There are no extra disks for

Re: [ceph-users] Extreme slowness in SSD cluster with 3 nodes and 9 OSD with 3.16-3 kernel

2015-02-28 Thread mad Engineer
Thanks for the reply Philippe, we were using these disks in our NAS; now it looks like I am in big trouble :-( On Sat, Feb 28, 2015 at 5:02 PM, Philippe Schwarz p...@schwarz-fr.net wrote: -BEGIN PGP SIGNED MESSAGE- Hash: SHA1 On 28/02/2015 12:19, mad Engineer wrote: Hello All, I
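
The usual concern with consumer drives like the 850 EVO (the quoted reply is truncated here) is that they handle small synchronous writes poorly, and the Ceph journal issues exactly that kind of write. A common way to check a drive before trusting it as an OSD/journal is a single-threaded sync write test with fio; this destroys whatever it writes to, so point it at a spare device or a scratch file:

    # /dev/sdX is a placeholder for a spare device -- writing to it is destructive
    fio --name=journal-test --filename=/dev/sdX --direct=1 --sync=1 \
        --rw=write --bs=4k --numjobs=1 --iodepth=1 \
        --runtime=60 --time_based --group_reporting
    # Good journal SSDs sustain thousands of IOPS here; many consumer drives drop to a few hundred.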

Re: [ceph-users] Extreme slowness in SSD cluster with 3 nodes and 9 OSD with 3.16-3 kernel

2015-02-28 Thread mad Engineer
helping for sequential writes) but your results seem quite low, 926 kB/s with 4k, it's only 200 IO/s. Check if you don't have any big network latencies, or an MTU fragmentation problem. Maybe also try to bench with fio, with more parallel jobs. - Original Mail - From: mad Engineer
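
Two of the checks suggested above, sketched out (host, device, pool and image names are placeholders):

    # MTU/fragmentation: with jumbo frames an 8972-byte payload (9000 minus 28 bytes
    # of IP+ICMP header) must pass unfragmented; use 1472 for a standard 1500 MTU.
    ping -M do -s 8972 -c 5 osd-node1
    # fio with more parallel jobs against the image, using the rbd engine
    # (only available if fio was built with rbd support):
    fio --name=rbd-4k --ioengine=rbd --pool=rbd --rbdname=testimg \
        --rw=randwrite --bs=4k --iodepth=32 --numjobs=4 --direct=1 \
        --runtime=60 --time_based --group_reporting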

Re: [ceph-users] Extreme slowness in SSD cluster with 3 nodes and 9 OSD with 3.16-3 kernel

2015-02-28 Thread mad Engineer
reinstalled ceph packages and now with the memstore backend [osd objectstore = memstore] it's giving 400 Kbps. No idea where the problem is. On Sun, Mar 1, 2015 at 12:30 AM, mad Engineer themadengin...@gmail.com wrote: tried changing the scheduler from deadline to noop, also upgraded to Giant and btrfs
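
For reference, the scheduler change and the memstore test mentioned above look roughly like this (sdb is an example device; memstore keeps data in RAM, so it is only useful for ruling the disks out, not for real use):

    cat /sys/block/sdb/queue/scheduler           # e.g. "noop [deadline] cfq"
    echo noop > /sys/block/sdb/queue/scheduler   # as root; noop is usually fine for SSDs
    # ceph.conf snippet for the memstore objectstore:
    #   [osd]
    #   osd objectstore = memstore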

Re: [ceph-users] Extreme slowness in SSD cluster with 3 nodes and 9 OSD with 3.16-3 kernel

2015-02-28 Thread mad Engineer
io is going on. Thanks Regards Somnath *From:* Somnath Roy *Sent:* Saturday, February 28, 2015 12:59 PM *To:* 'mad Engineer'; Alexandre DERUMIER *Cc:* ceph-users *Subject:* RE: [ceph-users] Extreme slowness in SSD cluster with 3 nodes and 9 OSD with 3.16-3 kernel I would say

Re: [ceph-users] ssd OSD and disk controller limitation

2015-02-03 Thread mad Engineer
- or SSD SAS 12 Gbit/s (more expensive) Florent Monthel On 2 Feb 2015 at 18:29, mad Engineer themadengin...@gmail.com wrote: Thanks Florent, can Ceph distribute writes to multiple hosts? On Mon, Feb 2, 2015 at 10:17 PM, Florent MONTHEL fmont...@flox-arts.net
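
On the question of whether Ceph distributes writes across hosts: an RBD image is striped over many RADOS objects, and the default CRUSH rule places each object's replicas on different hosts, so client writes do fan out across the cluster. A quick way to confirm on a running cluster (the pool name rbd is an example):

    ceph osd tree                 # shows the host buckets and the OSDs under them
    ceph osd crush rule dump      # the chooseleaf step should show "type": "host"
    ceph osd pool get rbd size    # number of replicas written for each object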

Re: [ceph-users] features of the next stable release

2015-02-03 Thread mad Engineer
I am also planning to create an SSD-only cluster using multiple OSDs on a few hosts. What's the best way to get maximum performance out of SSD disks? I don't have the cluster running, but seeing this thread makes me worry that RBD will not be able to extract the full capability of SSD disks. I am a beginner

Re: [ceph-users] ssd OSD and disk controller limitation

2015-02-02 Thread mad Engineer
Thanks Sent from my iPhone On 2 Feb 2015, at 09:27, mad Engineer themadengin...@gmail.com wrote: I am trying to create a 5-node cluster using 1 TB SSD disks with 2 OSDs on each server. Each server will have a 10G NIC. SSD disks are of good quality and as per the label can support ~300 MBps

[ceph-users] ssd OSD and disk controller limitation

2015-02-02 Thread mad Engineer
I am trying to create a 5-node cluster using 1 TB SSD disks with 2 OSDs on each server. Each server will have a 10G NIC. SSD disks are of good quality and as per the label can support ~300 MBps. What are the limiting factors that prevent utilizing the full speed of SSD disks? Disk controllers are 3
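
A 3 Gbit/s SATA link is a plausible bottleneck here: after 8b/10b encoding and protocol overhead it delivers roughly 270-280 MB/s, so a ~300 MB/s SSD can be capped by the controller alone. A quick sketch to check what the drives actually negotiated (sdb is an example device):

    smartctl -i /dev/sdb | grep -i sata   # e.g. "SATA Version is: ... 6.0 Gb/s (current: 3.0 Gb/s)"
    dmesg | grep -i 'sata link'           # e.g. "SATA link up 3.0 Gbps"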