Thanks, Gregory.
On Sun, Nov 1, 2015 at 12:06 AM, Gregory Farnum wrote:
> On Friday, October 30, 2015, mad Engineer wrote:
I am learning Ceph block storage and read that each object size is 4 MB. I
am still not clear on the concepts of object storage: what will happen if
the actual size of the data written to the block is less than 4 MB, let's
say 1 MB? Will it still create a 4 MB object and keep the rest of the
space free?
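As I understand RADOS, objects are thin-provisioned: a 1 MB write creates an object holding only the bytes actually written, not a pre-allocated 4 MB. A minimal sketch of how a librbd-style client maps a write onto fixed-size backing objects (the helper name is hypothetical; the 4 MiB default object size matches the thread):

```python
# Sketch (not Ceph source): map a write at a byte offset onto 4 MiB objects.
OBJECT_SIZE = 4 * 1024 * 1024

def touched_objects(offset, length, object_size=OBJECT_SIZE):
    """Return (object_index, bytes_stored_in_that_object) pairs for a write."""
    out = []
    end = offset + length
    while offset < end:
        idx = offset // object_size
        # Bytes remaining in this object, capped by the end of the write.
        chunk = min(end, (idx + 1) * object_size) - offset
        out.append((idx, chunk))
        offset += chunk
    return out

# A 1 MiB write touches one object and stores only 1 MiB:
print(touched_objects(0, 1024 * 1024))        # [(0, 1048576)]
# A 6 MiB write at offset 3 MiB spans three objects:
print(touched_objects(3 * 2**20, 6 * 2**20))  # [(0, 1048576), (1, 4194304), (2, 1048576)]
```

Objects that are never written simply don't exist, which is why a mostly-empty RBD image consumes almost no cluster space.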
…have the SSD local to it using some sort of caching software
> running on the client, although this can bring its own challenges.
>
> Nick
>
> > -----Original Message-----
> > From: ceph-users [mailto:ceph-users-boun...@lists.ceph.com] On Behalf Of
> > mad Engineer
>
> debug_heartbeatmap = 0/0
>
> debug_perfcounter = 0/0
>
> debug_asok = 0/0
>
> debug_throttle = 0/0
>
> debug_mon = 0/0
>
> debug_paxos = 0/0
>
> debug_rgw = 0/0
>
>
>
> 5. Give us the ceph -s output and the iostat output while IO is going on.
>
>
>
> Thanks
Reinstalled the ceph packages, and now with the memstore backend [osd
objectstore = memstore] it's giving 400 Kbps. No idea where the problem is.
On Sun, Mar 1, 2015 at 12:30 AM, mad Engineer wrote:
> tried changing the scheduler from deadline to noop, also upgraded to Giant
> and a btrfs filesystem; down…
…ally helping for sequential
> writes)
>
> but your results seem quite low: 926 KB/s with 4k is only ~230 IO/s.
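The bandwidth-to-IOPS conversion used here is easy to sanity-check:

```python
# Throughput divided by block size gives the effective I/O rate.
def iops(bandwidth_bytes_per_s, block_bytes):
    return bandwidth_bytes_per_s / block_bytes

# 926 KB/s at a 4 KiB block size is on the order of 230 IOPS:
print(int(iops(926 * 1024, 4096)))  # 231
```

A few hundred IOPS is indeed very low for SSD-backed OSDs, which points at per-request latency (network round trips, journaling) rather than raw disk bandwidth.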
>
> check whether you have any big network latencies or MTU fragmentation
> problems.
>
> Maybe also try to bench with fio, with more parallel jobs.
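The fio suggestion can be made concrete. A hypothetical job file for this kind of test (the device path, runtime, and queue depth are assumptions, not from the thread):

```ini
; Hypothetical fio job: 4k random writes with several parallel jobs,
; to see whether queue depth, rather than the SSDs, is the bottleneck.
[global]
ioengine=libaio
direct=1
bs=4k
rw=randwrite
runtime=60
time_based=1
iodepth=32
group_reporting=1

[rbd-test]
; assumed path of the mapped RBD device
filename=/dev/rbd0
numjobs=4
```

Comparing `numjobs=1` against `numjobs=4` quickly shows whether the low result is a single-queue latency limit or a real throughput ceiling.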
…reads sharding.
>
> In my last test with Giant, I was able to reach around 12 iops with
> 6 OSDs / Intel S3500 SSDs, but I was CPU-limited.
>
> - Original message -
> From: "mad Engineer"
> To: "ceph-users"
> Sent: Saturday, 28 February 2015 12:19:56
>
Thanks for the reply, Philippe. We were using these disks in our NAS; now
it looks like I am in big trouble :-(
On Sat, Feb 28, 2015 at 5:02 PM, Philippe Schwarz wrote:
>
> On 28/02/2015 12:19, mad Engineer wrote:
Hello All,
I am trying ceph-firefly 0.80.8
(69eaad7f8308f21573c604f121956e64679a52a7) with 9 OSDs, all Samsung SSD
850 EVO, on 3 servers with 24 GB RAM and 16 cores @ 2.27 GHz, running
Ubuntu 14.04 LTS with the 3.16-3 kernel. All are connected to 10G ports
with maximum MTU. There are no extra disks for journaling…
> …SATA 6 Gbit/s
> - or SSD SAS 12 Gbit/s (more expensive)
>
>
>
> Florent Monthel
>
>
>
>
>
> On 2 Feb 2015, at 18:29, mad Engineer wrote:
>
> Thanks, Florent.
> Can Ceph distribute writes to multiple hosts?
>
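Regarding the question above: yes. An RBD image is striped across many objects, and CRUSH maps those objects' placement groups onto OSDs on different hosts, so writes to a single image fan out across the cluster. A toy sketch of the idea (NOT the real CRUSH algorithm; host names and PG count are made up):

```python
# Toy placement sketch: objects hash to placement groups, and each PG is
# mapped to a host, so one image's objects spread over the cluster.
import hashlib

HOSTS = ["host-a", "host-b", "host-c"]
PG_NUM = 64

def pg_for(object_name):
    """Deterministically hash an object name to a placement group."""
    h = int(hashlib.md5(object_name.encode()).hexdigest(), 16)
    return h % PG_NUM

def primary_host(pg):
    """Stand-in for CRUSH: pick a host for a PG."""
    return HOSTS[pg % len(HOSTS)]

# Object names follow the rbd_data.<image-id>.<block-index> pattern.
objs = [f"rbd_data.1234.{i:016x}" for i in range(8)]
placement = {o: primary_host(pg_for(o)) for o in objs}
print(placement)
```

The real CRUSH computation is a weighted, topology-aware pseudo-random mapping with replication, but the consequence is the same: no single host serves all of an image's writes.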
> On Mon, Feb 2, 2015
I am also planning to create an SSD-only cluster using multiple OSDs on
a few hosts. What's the best way to get maximum performance out of
SSD disks?
I don't have the cluster running yet, but seeing this thread makes me
worry that RBD will not be able to extract the full capability of the
SSD disks. I am a beginner i…
> Thanks
>
> Sent from my iPhone
>
>> On 2 Feb 2015, at 09:27, mad Engineer wrote:
I am trying to create a 5-node cluster using 1 TB SSD disks, with 2 OSDs
on each server. Each server will have a 10G NIC.
The SSD disks are of good quality and, per the label, can support ~300 MB/s.
What are the limiting factors that prevent utilizing the full speed
of the SSD disks?
The disk controllers are 3 G…