Thanks, Gregory.
On Sun, Nov 1, 2015 at 12:06 AM, Gregory Farnum <gfar...@redhat.com> wrote:
> On Friday, October 30, 2015, mad Engineer <themadengin...@gmail.com>
> wrote:
I am learning Ceph block storage, and read that each object is 4 MB in size. I
am still not clear about the concepts of object storage: what will happen if
the actual size of the data written to the block is less than 4 MB, let's say 1
MB? Will it still create a 4 MB object and keep the rest of the space unused?
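For concreteness, a toy sketch in Python of the striping arithmetic (hypothetical names, not Ceph's actual code). RADOS objects are created lazily and store only the bytes actually written to them, so a 1 MB write should consume about 1 MB of backing storage rather than a preallocated 4 MB:

OBJECT_SIZE = 4 * 1024 * 1024   # default RBD object size: 4 MiB

def locate(offset: int) -> tuple[int, int]:
    # Map a byte offset in the block device to (object number, offset within object).
    return offset // OBJECT_SIZE, offset % OBJECT_SIZE

print(locate(0))                # (0, 0): a 1 MiB write here touches only object 0
print(locate(5 * 1024 * 1024))  # (1, 1048576): 1 MiB into the second object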
[…] challenges.
Nick
-----Original Message-----
From: ceph-users [mailto:ceph-users-boun...@lists.ceph.com] On Behalf Of
mad Engineer
Sent: 07 March 2015 10:55
To: Somnath Roy
Cc: ceph-users
Subject: Re: [ceph-users] Extreme slowness in SSD cluster with 3 nodes and 9 OSD with 3.16-3 kernel
*From:* Somnath Roy
*Sent:* Saturday, February 28, 2015 12:59 PM
*To:* 'mad Engineer'; Alexandre DERUMIER
*Cc:* ceph-users
*Subject:* RE: [ceph-users] Extreme slowness in SSD cluster with 3 nodes
and 9 OSD with 3.16-3 kernel
I would say check with a rados tool like ceph_smalliobench / rados bench first
to see how much performance you can get out of the cluster.
In my last test with Giant, I was able to reach around 12 iops with
6 OSDs / Intel S3500 SSDs, but I was CPU limited.
----- Original Message -----
From: mad Engineer themadengin...@gmail.com
To: ceph-users ceph-users@lists.ceph.com
Sent: Saturday, 28 February 2015 12:19:56
Subject: [ceph-users] Extreme slowness in SSD cluster with 3 nodes and 9 OSD with 3.16-3 kernel
Hello All,
I am trying ceph-firefly 0.80.8
(69eaad7f8308f21573c604f121956e64679a52a7) with 9 OSDs, all Samsung SSD
850 EVO, on 3 servers with 24 GB RAM and 16 cores @ 2.27 GHz, running Ubuntu
14.04 LTS with the 3.16-3 kernel. All are connected to 10G ports with maximum
MTU. There are no extra disks for journaling.
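For later readers, a back-of-envelope sketch of why the raw SSD spec does not translate into client throughput in a setup like this (all figures below are assumptions for illustration, not measurements from this cluster). With a replicated pool of size 3 and the filestore journal co-located on each SSD, every client byte is written roughly six times across the cluster:

# Back-of-envelope write amplification (assumed numbers, not measurements):
# replicated pool of size 3 -> every client byte lands on 3 OSDs;
# filestore journal on the same SSD -> each OSD writes the byte twice.
replicas = 3
journal_factor = 2
osds = 9
per_ssd_sync_mb_s = 5        # hypothetical O_DSYNC rate; consumer SSDs vary wildly

cluster_device_mb_s = osds * per_ssd_sync_mb_s
client_ceiling = cluster_device_mb_s / (replicas * journal_factor)
print(f"client write ceiling ~ {client_ceiling:.1f} MB/s")   # ~7.5 MB/s here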
Thanks for the reply Philippe. We were using these disks in our NAS; now
it looks like I am in big trouble :-(
On Sat, Feb 28, 2015 at 5:02 PM, Philippe Schwarz p...@schwarz-fr.net wrote:
-----BEGIN PGP SIGNED MESSAGE-----
Hash: SHA1
On 28/02/2015 12:19, mad Engineer wrote:
[…] helping for sequential writes)
But your results seem quite low: 926 kB/s with 4k is only ~200 IO/s.
Check whether you have any big network latencies or MTU fragmentation
problems.
Maybe also try to benchmark with fio, with more parallel jobs.
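As a rough sanity check on those numbers, a toy latency-to-IOPS calculation in Python (the 5 ms per-op figure is an assumption for illustration, not a measurement from this cluster): at queue depth 1, IOPS is bounded by 1 / per-op latency, which is why more parallel fio jobs should raise throughput until a real bottleneck saturates.

# Single synchronous stream: IOPS <= 1 / per-op latency.
per_op_latency_s = 0.005           # assumed ~5 ms per 4k sync write
qd1_iops = 1 / per_op_latency_s    # 200 IOPS
# 200 IOPS * 4 KiB = ~800 KiB/s, the same ballpark as the ~926 kB/s above.
print(qd1_iops * 4, "KiB/s at queue depth 1")

# With N independent in-flight ops, the theoretical ceiling scales linearly:
for jobs in (1, 4, 16):
    print(jobs, "jobs ->", jobs / per_op_latency_s, "IOPS upper bound")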
----- Original Message -----
From: mad Engineer
Reinstalled the ceph packages, and now with the memstore backend (osd
objectstore = memstore) it's giving 400 Kbps. No idea where the problem is.
On Sun, Mar 1, 2015 at 12:30 AM, mad Engineer themadengin...@gmail.com
wrote:
Tried changing the scheduler from deadline to noop, and also upgraded to
Giant and btrfs.
[…] IO is going on.
Thanks & Regards
Somnath
- or SSD SAS 12 Gbps (more expensive)
Florent Monthel
On 2 Feb 2015, at 18:29, mad Engineer themadengin...@gmail.com wrote:
Thanks Florent,
Can Ceph distribute writes across multiple hosts?
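For intuition, a toy placement sketch in Python (emphatically not the real CRUSH algorithm; the host list, PG count, and object names are illustrative assumptions). The point is that consecutive 4 MiB chunks of one image hash to different placement groups, which map across hosts, so writes fan out over the cluster:

import hashlib

HOSTS = ["node1", "node2", "node3", "node4", "node5"]  # hypothetical hosts
PG_NUM = 128                                           # hypothetical PG count

def primary_host(object_name: str) -> str:
    # Hash the object name to a placement group, then map the PG to a host.
    # Ceph uses rjenkins hashing plus CRUSH; md5 here is only a stand-in.
    h = int(hashlib.md5(object_name.encode()).hexdigest(), 16)
    pg = h % PG_NUM
    return HOSTS[pg % len(HOSTS)]

for i in range(4):
    name = f"rbd_data.1234.{i:016x}"   # hypothetical image object names
    print(name, "->", primary_host(name))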
On Mon, Feb 2, 2015 at 10:17 PM, Florent MONTHEL fmont...@flox-arts.net wrote:
I am also planning to create an SSD-only cluster using multiple OSDs on
a few hosts. What's the best way to get maximum performance out of
SSD disks?
I don't have the cluster running yet, but seeing this thread makes me worry
that RBD will not be able to extract the full capability of SSD disks. I am a
beginner.
Thanks
Sent from my iPhone
On 2 Feb 2015, at 09:27, mad Engineer themadengin...@gmail.com wrote:
I am trying to create a 5-node cluster using 1 TB SSD disks, with 2 OSDs
on each server. Each server will have a 10G NIC.
The SSD disks are of good quality, and as per the label they can support ~300 MB/s.
What are the limiting factors that prevent utilizing the full speed
of the SSD disks?
The disk controllers are 3 Gbps.
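A rough back-of-envelope on the possible ceilings, in Python (the replica count and the controller/NIC figures are assumptions based on the numbers quoted in this thread, not measurements):

# Per-host bottleneck sketch: 2 OSDs/host, ~300 MB/s SSDs behind 3 Gbps
# SATA ports, one 10 GbE NIC; assume a replicated pool of size 3.
sata_port_mb_s = 300        # ~3 Gbps SATA ceiling, roughly the SSD label speed
nic_mb_s = 1250             # 10 GbE
osds_per_host = 2
replicas = 3                # assumed pool size

host_disk_bw = osds_per_host * sata_port_mb_s        # 600 MB/s of raw disk
# Every client byte is written to `replicas` OSDs, so client-visible write
# bandwidth is roughly the tighter of disk and network, divided by replicas:
print(min(host_disk_bw, nic_mb_s) / replicas, "MB/s client writes per host, best case")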