from being able to reuse those blocks after a
> trim command (even without a raid card of any kind).
>
> Regards,
>
> Ric
>
>
> On 03/07/2016 12:58 PM, Ferhat Ozkasgarli wrote:
>
>> Ric, you mean a RAID 0 environment, right?
>>
>> If you use raid 5 or ra
the others are in write-back mode. I will set all raid cards to
>> pass-through mode and observe for a period of time.
>>
>>
>> Best Regards
>> sunspot
>>
>>
>> 2016-02-25 20:07 GMT+08:00 Ferhat Ozkasgarli <ozkasga...@gmail.com
This has happened to me before, but in a virtual machine environment.
The VM was KVM and the storage was RBD. My problem was a bad network cable.
You should check the following details:
1-) Do you use any kind of hardware RAID configuration (RAID 0, 5, or 10)?
Ceph does not work well on hardware RAID.
Hello,
I have also had some good experience with the Micron M510DC. The disk has
pretty solid performance scores and works well with Ceph.
P.S.: Do not forget: if you are going to use a RAID controller, make sure
your card is in HBA (non-RAID) mode.
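A quick sanity check for this (a sketch, assuming a Linux host; the model
strings named in the comment are just common examples) is to list the block
devices as the kernel sees them:

```shell
# In HBA/pass-through mode the OS sees the drives' real model strings.
# A virtual RAID volume instead (model names like "MegaRAID" or "PERC")
# means the card is still hiding the disks behind a RAID layer.
lsblk -d -o NAME,MODEL,SIZE,ROTA
```

ROTA=0 also confirms the kernel recognizes a device as non-rotational (SSD).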
On Thu, Feb 25, 2016 at 8:23 AM, Shinobu Kinjo
Hello Wido,
Then let me solve the IPv6 problem and get back to you.
Thx
On Mon, Feb 15, 2016 at 2:16 PM, Wido den Hollander <w...@42on.com> wrote:
>
> > Op 15 februari 2016 om 11:41 schreef Ferhat Ozkasgarli <
> ozkasga...@gmail.com>:
> >
> >
> > Hello
Hello Wido,
I have just talked with our network admin. He said we are not ready for
IPv6 yet.
So, if IPv4-only is OK, I will start the process.
On Mon, Feb 15, 2016 at 12:28 PM, Wido den Hollander <w...@42on.com> wrote:
> Hi,
>
> > Op 15 februari 2016 om 11:00 schreef
Hello Wido,
As Radore Datacenter, we also want to become a mirror for the Ceph project.
Our URL will be http://ceph-mirros.radore.com
We would be happy to become tr.ceph.com
The server will be ready tomorrow or the day after.
On Sun, Feb 7, 2016 at 6:03 PM, Tyler Bishop
Hello Mario,
This kind of problem usually happens for the following reasons:
1-) One of the OSD nodes has a network problem.
2-) Disk failure
3-) Not enough resources on the OSD nodes
4-) Slow OSD disks
This happened to me before. The problem was a bad network cable. As soon as
I replaced the cable,
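A quick way to narrow those four causes down (a sketch, assuming admin
access to a MON node and the suspect OSD host; it needs a live cluster, so
it is an operational fragment rather than a runnable script):

```shell
# Which OSDs are reporting slow/blocked requests?
ceph health detail
# Is any OSD down, or do the slow ones share a host?
ceph osd tree
# Per-OSD commit/apply latency; a single outlier usually means a bad disk.
ceph osd perf
# On the suspect node: NIC errors/drops are the classic bad-cable symptom.
ip -s link show
```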
Hello Huan,
If you look at Sebastien Han's blog (
https://www.sebastien-han.fr/blog/2014/10/10/ceph-how-to-test-if-your-ssd-is-suitable-as-a-journal-device/)
in the comment section, you can see that Samsung SSDs behave very
poorly in these tests:
Samsung SSD 850 PRO 256GB
40960 bytes (410 MB)
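The test in that post measures small synchronous writes, which is what a
Ceph journal does. A minimal sketch with dd (the `TARGET` path is a
placeholder; put it on the filesystem of the SSD under test — pointing dd
at the raw device instead destroys data):

```shell
# 4 KiB O_DSYNC writes approximate Ceph journal I/O. A journal-worthy SSD
# sustains tens of MB/s here; many consumer drives drop to ~1 MB/s.
TARGET=./journal-test.bin          # place on the SSD being tested
dd if=/dev/zero of="$TARGET" bs=4k count=1000 oflag=dsync 2>&1 | tail -n 1
stat -c %s "$TARGET"               # 4096000 bytes written
rm -f "$TARGET"
```

When testing a raw device, add `direct` to the flags (`oflag=direct,dsync`)
to bypass the page cache as the blog post does.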
Release the Kraken! (Please...)
On Feb 9, 2016 1:05 PM, "Dan van der Ster" wrote:
> On Mon, Feb 8, 2016 at 8:10 PM, Sage Weil wrote:
> > On Mon, 8 Feb 2016, Karol Mroz wrote:
> >> On Mon, Feb 08, 2016 at 01:36:57PM -0500, Sage Weil wrote:
> >> > I didn't
Hello Udo,
You cannot use one cache pool for multiple back-end pools. You must create
a new cache pool for every back-end pool.
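The fix follows directly: create and attach one cache pool per back-end
pool. A sketch using the standard Ceph tiering commands (the pool names
`cold-pool` / `cold-pool-cache` and the PG count are hypothetical; repeat
once for each back-end pool):

```shell
# Create a dedicated cache pool (128 PGs is only a placeholder).
ceph osd pool create cold-pool-cache 128 128
# Attach it in front of exactly one back-end pool.
ceph osd tier add cold-pool cold-pool-cache
ceph osd tier cache-mode cold-pool-cache writeback
# Route client I/O for cold-pool through the cache.
ceph osd tier set-overlay cold-pool cold-pool-cache
```

These commands need a running cluster with admin keys, so they are shown as
an operational fragment rather than a runnable script.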
On Wed, Feb 3, 2016 at 12:32 PM, Udo Waechter wrote:
> Hello everyone,
>
> I'm using ceph version 0.94.5 (9764da52395923e0b32908d83a9f7304401fee43)
>
As the message states, you must increase the placement group number for the
pool, because 108T of data requires a larger pg number.
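The usual rule of thumb from the Ceph docs is pg_num ≈ (OSDs × 100) /
replicas, rounded up to the next power of two. A small sketch (the OSD and
replica counts are example values; substitute your own):

```shell
# Rule-of-thumb pg_num calculation.
osds=24 replicas=3 target=100
raw=$(( osds * target / replicas ))
pg=1
while [ "$pg" -lt "$raw" ]; do pg=$(( pg * 2 )); done
echo "$pg"                         # → 1024 for 24 OSDs, 3x replication
# Then raise it on the pool (pg_num can only be increased, never lowered):
# ceph osd pool set <pool> pg_num  "$pg"
# ceph osd pool set <pool> pgp_num "$pg"
```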
On Feb 3, 2016 8:09 PM, "M Ranga Swami Reddy" wrote:
> Hi,
>
> I am using ceph for my storage cluster and health shows as WARN state
> with too few pgs.
>
Hi All,
We are testing Ceph with OpenStack and have installed 3 MONs (these three
monitor nodes are also the OpenStack controller and network nodes) and 6
OSD nodes (3 of the OSD nodes are also Nova compute nodes).
There are 24 OSDs in total (21 SAS, 3 SSD, and all journals are on SSD).
There is no cache tiering for now.
Hello,
I have installed an OpenStack cluster with Mirantis Fuel 7.0. The back-end
storage is Ceph and it works great.
But when I try to activate an SSD caching tier, all my running VMs suddenly
stop working and I cannot create new instances.
If I disable the caching tier, everything returns to normal.
My Ceph