I have some experience with Kingstons - which model do you plan to use?
Shorter version: don't use Kingstons. For anything. Ever.
Jan
> On 30 Sep 2015, at 11:24, Andrija Panic wrote:
>
> Make sure to check this blog page
>
Thanks to all for responses. Great thread with a lot of info.
I will go with the 3 partitions on the Kingston SSD for 3 OSDs on each node.
Thanks
Jiri
On 30/09/2015 00:38, Lionel Bouton wrote:
Hi,
On 29/09/2015 13:32, Jiri Kanicky wrote:
Hi Lionel.
Thank you for your reply. In this case I am considering creating separate partitions for each disk on the SSD drive.
Make sure to check this blog page
http://www.sebastien-han.fr/blog/2014/10/10/ceph-how-to-test-if-your-ssd-is-suitable-as-a-journal-device/
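The kind of test that blog post describes can be sketched roughly as follows. This is a from-memory approximation, not the post's exact command line; the device path is a placeholder:

```shell
# WARNING: this writes directly to the device. Only run it against an
# SSD that holds no data you care about. /dev/sdX is a placeholder.
#
# Sequential 4k writes with O_DIRECT + O_SYNC approximate the Ceph
# journal's synchronous write pattern. Many consumer SSDs perform far
# worse here than their datasheet numbers suggest.
fio --filename=/dev/sdX \
    --direct=1 --sync=1 \
    --rw=write --bs=4k \
    --numjobs=1 --iodepth=1 \
    --runtime=60 --time_based \
    --group_reporting --name=journal-test
```

A drive that sustains only a few hundred IOPS on this pattern is a poor journal candidate, regardless of how it benchmarks with buffered writes.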
since I'm not sure if you are playing around with Ceph, or planning it for
production and good performance.
My experience with SSD as journal: SSD Samsung 850 PRO =
On Tue, Sep 29, 2015 at 7:32 AM, Jiri Kanicky wrote:
> Thank you for your reply. In this case I am considering creating separate
> partitions for each disk on the SSD drive. It would be good to know the
> performance difference, because creating partitions is kind of a waste of space.
Jiri,
if you colocate multiple journals on 1 SSD (we do...), make sure you understand
the following:
- if the SSD dies, all OSDs that had their journals on it are lost...
- the more journals you put on a single SSD (1 journal being 1 partition),
the worse the performance, since total SSD performance is not
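A minimal sketch of carving one journal partition per OSD out of a shared SSD. The device name and sizes are examples, not taken from the thread; the typecode is the GPT partition type GUID Ceph uses to mark journal partitions:

```shell
# Create three 10 GiB journal partitions on the shared SSD /dev/sdX
# (device name and sizes are placeholders -- adjust to your hardware).
# The typecode marks each partition as a Ceph journal so the ceph-disk
# udev rules can recognize it.
for i in 1 2 3; do
    sgdisk --new=${i}:0:+10G \
           --typecode=${i}:45b0969e-9b03-4f30-b4c6-b4b80ceff106 \
           --change-name=${i}:"ceph journal" \
           /dev/sdX
done
partprobe /dev/sdX   # make the kernel re-read the partition table
```

Each OSD then gets its own partition (/dev/sdX1, /dev/sdX2, ...) rather than all OSDs pointing at one shared partition.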
Hi Lionel.
Thank you for your reply. In this case I am considering creating
separate partitions for each disk on the SSD drive. It would be good to
know the performance difference, because creating partitions is
kind of a waste of space.
One more question: is it a good idea to move
Hi,
On 29/09/2015 13:32, Jiri Kanicky wrote:
> Hi Lionel.
>
> Thank you for your reply. In this case I am considering creating
> separate partitions for each disk on the SSD drive. It would be good to
> know the performance difference, because creating partitions
> is kind of a waste of space.
I think I got over 10% improvement when I changed from a cooked journal
file on a btrfs-based system SSD to a raw partition on the system SSD.
The cluster I've been testing with is all consumer-grade stuff running
on top of AMD Piledriver- and Kaveri-based mobos with the on-board
SATA. My SSDs
Thank you Lionel,
This was very helpful. I actually chose to split the partition and then
recreated the OSDs. Everything is up and running now.
Rimma
On 7/13/15 6:34 PM, Lionel Bouton wrote:
On 07/14/15 00:08, Rimma Iontel wrote:
Hi all,
[...]
Is there something that needed to be done
Hi all,
I am trying to set up a three-node Ceph cluster. Each node is running
RHEL 7.1 and has three 1TB HDDs for OSDs (sdb, sdc, sdd) and an
SSD partition (/dev/sda6) for the journal.
I zapped the HDDs and used the following to create OSDs:
# ceph-deploy --overwrite-conf osd create
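For reference, the ceph-deploy syntax of that era accepted an optional journal after each data disk as `host:disk:journal`. A sketch with placeholder hostnames and partitions (these are assumptions, not the command actually used above):

```shell
# Each OSD data disk paired with its own SSD journal partition.
# node1 and the sda5/sda6/sda7 layout are hypothetical examples --
# the point is one journal partition per OSD, not one shared by all.
ceph-deploy --overwrite-conf osd create \
    node1:sdb:/dev/sda5 \
    node1:sdc:/dev/sda6 \
    node1:sdd:/dev/sda7
```

Passing the same journal partition for every OSD is what fails; each OSD must own its journal device or partition exclusively.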
On 07/14/15 00:08, Rimma Iontel wrote:
Hi all,
[...]
Is there something that needed to be done to the journal partition to
enable sharing between multiple OSDs? Or is there something else
that's causing the issue?
IIRC you can't share a volume between multiple OSDs. What you could do
if