Re: [ceph-users] xfs corruption

2016-03-07 Thread Ferhat Ozkasgarli
rom being able to reuse those blocks after a > trim command (even without a raid card of any kind). > > Regards, > > Ric > > > On 03/07/2016 12:58 PM, Ferhat Ozkasgarli wrote: > >> Rick; you mean Raid 0 environment right? >> >> If you use raid 5 or ra

Re: [ceph-users] xfs corruption

2016-03-06 Thread Ferhat Ozkasgarli
the others are in write-back mode. I will set all raid cards >> to pass-through mode and observe for a period of time. >> >> >> Best Regards >> sunspot >> >> >> 2016-02-25 20:07 GMT+08:00 Ferhat Ozkasgarli <ozkasga...@gmail.com >> <mailto:o

Re: [ceph-users] xfs corruption

2016-02-25 Thread Ferhat Ozkasgarli
This has happened to me before, but in a virtual machine environment. The VM was KVM and the storage was RBD. My problem was a bad network cable. You should check the following details: 1-) Do you use any kind of hardware RAID configuration? (RAID 0, 5 or 10) Ceph does not work well on hardware RAID
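A quick, non-destructive way to confirm whether XFS itself is reporting damage, before digging into the RAID layer (a sketch; /dev/sdX is a placeholder for the affected OSD partition, which must be unmounted for the check):

    # look for XFS complaints in the kernel log
    dmesg | grep -i xfs
    # dry run only: -n reports problems without modifying the filesystem
    xfs_repair -n /dev/sdX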

Re: [ceph-users] List of SSDs

2016-02-25 Thread Ferhat Ozkasgarli
Hello, I have also had some good experience with the Micron M510DC. The disk has pretty solid performance scores and works well with Ceph. P.S.: Do not forget: if you are going to use a raid controller, make sure your raid card is in HBA (non-RAID) mode. On Thu, Feb 25, 2016 at 8:23 AM, Shinobu Kinjo
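A simple sanity check that the controller really is passing the disks straight through (a sketch; device names are placeholders): the OS should report the SSD's own model string rather than a RAID virtual-disk product name.

    # real drive models should appear, not "Virtual Disk" / MegaRAID-style entries
    lsblk -o NAME,MODEL,ROTA,SIZE
    # SMART identity should come directly from the SSD as well
    smartctl -i /dev/sdX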

Re: [ceph-users] Ceph mirrors wanted!

2016-02-15 Thread Ferhat Ozkasgarli
Hello Wido, Then let me solve the IPv6 problem and get back to you. Thx On Mon, Feb 15, 2016 at 2:16 PM, Wido den Hollander <w...@42on.com> wrote: > > > On 15 February 2016 at 11:41, Ferhat Ozkasgarli < > ozkasga...@gmail.com> wrote: > > > > > > Hello

Re: [ceph-users] Ceph mirrors wanted!

2016-02-15 Thread Ferhat Ozkasgarli
Hello Wido, I have just talked with our network admin. He said we are not ready for IPv6 yet. So, if it is ok with IPv4 only, I will start the process. On Mon, Feb 15, 2016 at 12:28 PM, Wido den Hollander <w...@42on.com> wrote: > Hi, > > > Op 15 februari 2016 om 11:00 schreef

Re: [ceph-users] Ceph mirrors wanted!

2016-02-15 Thread Ferhat Ozkasgarli
Hello Wido, As Radore Datacenter, we would also like to become a mirror for the Ceph project. Our URL will be http://ceph-mirros.radore.com and we would be happy to become tr.ceph.com. The server will be ready tomorrow or the day after. On Sun, Feb 7, 2016 at 6:03 PM, Tyler Bishop

Re: [ceph-users] Help: pool not responding

2016-02-14 Thread Ferhat Ozkasgarli
Hello Mario, This kind of problem usually happens for the following reasons: 1-) One of the OSD nodes has a network problem. 2-) Disk failure 3-) Not enough resources on the OSD nodes 4-) Slow OSD disks This has happened to me before; the problem was a bad network cable. As soon as I replaced the cable,
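A few standard commands help narrow down which of those four causes applies (a sketch, run from a monitor node):

    ceph health detail   # lists blocked/slow requests and the OSDs involved
    ceph osd tree        # spot OSDs that are down or out
    ceph osd perf        # per-OSD commit/apply latency; a failing or slow disk stands out here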

Re: [ceph-users] ceph 9.2.0 SAMSUNG ssd performance issue?

2016-02-12 Thread Ferhat Ozkasgarli
Hello Huan, If you look at the comments section of Sebastien Han's blog post ( https://www.sebastien-han.fr/blog/2014/10/10/ceph-how-to-test-if-your-ssd-is-suitable-as-a-journal-device/), you can see that the Samsung SSD behaves very poorly in these tests: Samsung SSD 850 PRO 256GB 40960 bytes (410 MB)
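For reference, the test from that blog post issues small synchronous writes straight to the device, which is what a Ceph journal does. A sketch of the dd variant (the target device is a placeholder and the command destroys data on it):

    # 4k writes with O_DIRECT + O_DSYNC; a journal-suitable SSD sustains this at high throughput
    dd if=/dev/zero of=/dev/sdX bs=4k count=100000 oflag=direct,dsync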

Re: [ceph-users] K is for Kraken

2016-02-09 Thread Ferhat Ozkasgarli
Release the Kraken! (Please...) On Feb 9, 2016 1:05 PM, "Dan van der Ster" wrote: > On Mon, Feb 8, 2016 at 8:10 PM, Sage Weil wrote: > > On Mon, 8 Feb 2016, Karol Mroz wrote: > >> On Mon, Feb 08, 2016 at 01:36:57PM -0500, Sage Weil wrote: > >> > I didn't

Re: [ceph-users] Same SSD-Cache-Pool for multiple Spinning-Disks-Pools?

2016-02-03 Thread Ferhat Ozkasgarli
Hello Udo, You cannot use one cache pool for multiple back-end pools. You must create a new cache pool for every back-end pool. On Wed, Feb 3, 2016 at 12:32 PM, Udo Waechter wrote: > Hello everyone, > > I'm using ceph version 0.94.5 (9764da52395923e0b32908d83a9f7304401fee43) >
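For reference, the tiering commands bind exactly one cache pool in front of one storage pool, which is why each back-end pool needs its own cache pool. A minimal sketch (pool names are placeholders):

    ceph osd tier add coldpool hotpool
    ceph osd tier cache-mode hotpool writeback
    ceph osd tier set-overlay coldpool hotpool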

Re: [ceph-users] Fwd: HEALTH_WARN pool vol has too few pgs

2016-02-03 Thread Ferhat Ozkasgarli
As the message states, you must increase the placement group number for the pool, because 108 TB of data requires a larger pg number. On Feb 3, 2016 8:09 PM, "M Ranga Swami Reddy" wrote: > Hi, > > I am using ceph for my storage cluster and health shows as WARN state > with too few pgs. >
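A sketch of the change (the value 2048 is only an example; pg_num can be raised but not lowered, so size it first, roughly OSD count x 100 / replica count, rounded up to a power of two):

    ceph osd pool set vol pg_num 2048
    ceph osd pool set vol pgp_num 2048   # rebalancing only starts once pgp_num is raised as well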

[ceph-users] High IOWAIT On OpenStack Instance

2016-02-01 Thread Ferhat Ozkasgarli
Hi All, We are testing Ceph with OpenStack. We have installed 3 monitor nodes (these three monitor nodes are also the OpenStack controller and network nodes) and 6 OSD nodes (3 of the OSD nodes are also Nova compute nodes). There are 24 OSDs in total (21 SAS, 3 SSD, and all journals are on the SSDs). There is no cache tiering for now.
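A reasonable first step is to check whether the shared journal SSDs or the co-located compute/OSD nodes are saturated (a sketch, run on the OSD hosts):

    iostat -x 1      # watch %util and await on the journal SSDs and SAS data disks
    ceph osd perf    # high commit/apply latency points at the slow OSDs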

[ceph-users] Ceph Cache Tiering Error error listing images

2016-01-27 Thread Ferhat Ozkasgarli
Hello, I have installed an OpenStack cluster with Mirantis Fuel 7.0. The back-end storage is Ceph and it works great, but when I try to activate the SSD cache tier, all my running VMs suddenly stop working and I cannot create new instances. If I disable the cache tier, everything returns to normal. My Ceph
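One common cause of "error listing images" after adding a cache tier is that the cephx keys used by the OpenStack services were restricted to the original pools and have no capabilities on the new cache pool. A hedged check (the client name below is only an example; Fuel may create differently named keys):

    ceph auth list                  # review which client keys exist and their pool caps
    ceph auth get client.volumes    # hypothetical name; its caps must also cover the cache pool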