[ceph-users] Mixing cache-mode writeback with read-proxy

2017-05-18 Thread Guillaume Comte
Hi list,

Does it make sense to split an SSD into two parts, one carrying a cache
tier in writeback mode and the other one in read-proxy mode, in order
to benefit from both modes?
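
What I have in mind is roughly the following (an untested sketch: the
pool names are made up, and as far as I understand a base pool takes
only one overlay tier, so the two cache modes would have to front two
different base pools):

ceph osd tier add pool-a hot-wb
ceph osd tier cache-mode hot-wb writeback
ceph osd tier set-overlay pool-a hot-wb

ceph osd tier add pool-b hot-rp
ceph osd tier cache-mode hot-rp readproxy
ceph osd tier set-overlay pool-b hot-rp

Here hot-wb and hot-rp would each be mapped, via their own CRUSH rules,
to one of the two SSD partitions.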

Thanks

-- 
*Guillaume Comte*
06 25 85 02 02  | guillaume.co...@blade-group.com

90 avenue des Ternes, 75 017 Paris


Re: [ceph-users] question about ceph-deploy osd create

2016-08-04 Thread Guillaume Comte
OK, I will try without creating the partitions myself.

Nevertheless, thanks a lot Christian for your patience; I will ask
smarter questions when I'm ready for them.
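
For the record, the two approaches as I understand them (untested on my
side; the journal type-code GUID below is taken from ceph-disk's udev
rules as I understand them, so treat it as an assumption):

# let ceph-disk create the journal partition on the SSD by itself:
ceph-disk prepare /dev/sdc /dev/sdb

# or pre-create the partition with the Ceph journal type code, so that
# udev hands it over to ceph:ceph, then point ceph-disk at it:
sgdisk --new=7:0:+5G --typecode=7:45b0969e-9b03-4f30-b4c6-b4b80ceff106 /dev/sdb
ceph-disk prepare /dev/sdc /dev/sdb7

Without the type code, the second form reportedly fails with exactly the
"wrong permission" error from the tracker issue quoted below.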

On 5 Aug 2016 02:44, "Christian Balzer"  wrote:

Hello,

On Fri, 5 Aug 2016 02:41:47 +0200 Guillaume Comte wrote:

> Maybe you are misspelling it, but in the docs they don't use whitespace
> but ":", which is quite misleading if it works.
>
I'm quoting/showing "ceph-disk", which is called by ceph-deploy, which
indeed uses a ":".

Christian
> On 5 Aug 2016 02:30, "Christian Balzer"  wrote:
>
> >
> > Hello,
> >
> > On Fri, 5 Aug 2016 02:11:31 +0200 Guillaume Comte wrote:
> >
> > > I have only read half of your answer so far.
> > >
> > > Do you mean that Ceph will create the journal partitions by itself?
> > >
> > Yes, "man ceph-disk".
> >
> > > If so, it's cool and weird...
> > >
> > It can be very weird indeed.
> > If sdc is your data (OSD) disk and sdb your journal device then:
> >
> > "ceph-disk prepare /dev/sdc /dev/sdb1"
> > will not work, but:
> >
> > "ceph-disk prepare /dev/sdc /dev/sdb"
> > will work, and create a journal partition on sdb.
> > However, you have no control over numbering or positioning this way.
> >
> > Christian
> >
> > > On 5 Aug 2016 02:01, "Christian Balzer"  wrote:
> > >
> > > >
> > > > Hello,
> > > >
> > > > you need to work on your google skills. ^_-
> > > >
> > > > I wrote about this just yesterday, and if you search for "ceph-deploy
> > > > wrong permission" the second link is the issue description:
> > > > http://tracker.ceph.com/issues/13833
> > > >
> > > > So I assume your journal partitions are either pre-made or non-GPT.
> > > >
> > > > Christian
> > > >
> > > > On Thu, 4 Aug 2016 15:34:44 +0200 Guillaume Comte wrote:
> > > >
> > > > > Hi All,
> > > > >
> > > > > With ceph jewel,
> > > > >
> > > > > I'm pretty stuck with
> > > > >
> > > > >
> > > > > ceph-deploy osd create {node-name}:{disk}[:{path/to/journal}]
> > > > >
> > > > > Because when I specify a journal path like this:
> > > > > ceph-deploy osd prepare ceph-osd1:sdd:sdf7
> > > > > And then:
> > > > > ceph-deploy osd activate ceph-osd1:sdd:sdf7
> > > > > I end up with "wrong permission" errors on the OSD when
> > > > > activating, complaining about a "tmp" directory where the files
> > > > > are owned by root; it seems it tries to do things as the ceph
> > > > > user.
> > > > >
> > > > > It works when I don't specify a separate journal.
> > > > >
> > > > > Any idea what I'm doing wrong?
> > > > >
> > > > > Thanks
> > > >
> > > >
> > > > --
> > > > Christian Balzer        Network/Systems Engineer
> > > > ch...@gol.com   Global OnLine Japan/Rakuten Communications
> > > > http://www.gol.com/
> > > >
> >
> >
> > --
> > Christian Balzer        Network/Systems Engineer
> > ch...@gol.com   Global OnLine Japan/Rakuten Communications
> > http://www.gol.com/
> >


--
Christian Balzer        Network/Systems Engineer
ch...@gol.com   Global OnLine Japan/Rakuten Communications
http://www.gol.com/


Re: [ceph-users] question about ceph-deploy osd create

2016-08-04 Thread Guillaume Comte
Maybe you are misspelling it, but in the docs they don't use whitespace
but ":", which is quite misleading if it works.

On 5 Aug 2016 02:30, "Christian Balzer"  wrote:

>
> Hello,
>
> On Fri, 5 Aug 2016 02:11:31 +0200 Guillaume Comte wrote:
>
> > I have only read half of your answer so far.
> >
> > Do you mean that Ceph will create the journal partitions by itself?
> >
> Yes, "man ceph-disk".
>
> > If so, it's cool and weird...
> >
> It can be very weird indeed.
> If sdc is your data (OSD) disk and sdb your journal device then:
>
> "ceph-disk prepare /dev/sdc /dev/sdb1"
> will not work, but:
>
> "ceph-disk prepare /dev/sdc /dev/sdb"
> will work, and create a journal partition on sdb.
> However, you have no control over numbering or positioning this way.
>
> Christian
>
> > On 5 Aug 2016 02:01, "Christian Balzer"  wrote:
> >
> > >
> > > Hello,
> > >
> > > you need to work on your google skills. ^_-
> > >
> > > I wrote about this just yesterday, and if you search for "ceph-deploy
> > > wrong permission" the second link is the issue description:
> > > http://tracker.ceph.com/issues/13833
> > >
> > > So I assume your journal partitions are either pre-made or non-GPT.
> > >
> > > Christian
> > >
> > > On Thu, 4 Aug 2016 15:34:44 +0200 Guillaume Comte wrote:
> > >
> > > > Hi All,
> > > >
> > > > With ceph jewel,
> > > >
> > > > I'm pretty stuck with
> > > >
> > > >
> > > > ceph-deploy osd create {node-name}:{disk}[:{path/to/journal}]
> > > >
> > > > Because when I specify a journal path like this:
> > > > ceph-deploy osd prepare ceph-osd1:sdd:sdf7
> > > > And then:
> > > > ceph-deploy osd activate ceph-osd1:sdd:sdf7
> > > > I end up with "wrong permission" errors on the OSD when activating,
> > > > complaining about a "tmp" directory where the files are owned by
> > > > root; it seems it tries to do things as the ceph user.
> > > >
> > > > It works when I don't specify a separate journal.
> > > >
> > > > Any idea what I'm doing wrong?
> > > >
> > > > Thanks
> > >
> > >
> > > --
> > > Christian Balzer        Network/Systems Engineer
> > > ch...@gol.com   Global OnLine Japan/Rakuten Communications
> > > http://www.gol.com/
> > >
>
>
> --
> Christian Balzer        Network/Systems Engineer
> ch...@gol.com   Global OnLine Japan/Rakuten Communications
> http://www.gol.com/
>


Re: [ceph-users] question about ceph-deploy osd create

2016-08-04 Thread Guillaume Comte
I have only read half of your answer so far.

Do you mean that Ceph will create the journal partitions by itself?

If so, it's cool and weird...

On 5 Aug 2016 02:01, "Christian Balzer"  wrote:

>
> Hello,
>
> you need to work on your google skills. ^_-
>
> I wrote about this just yesterday, and if you search for "ceph-deploy wrong
> permission" the second link is the issue description:
> http://tracker.ceph.com/issues/13833
>
> So I assume your journal partitions are either pre-made or non-GPT.
>
> Christian
>
> On Thu, 4 Aug 2016 15:34:44 +0200 Guillaume Comte wrote:
>
> > Hi All,
> >
> > With ceph jewel,
> >
> > I'm pretty stuck with
> >
> >
> > ceph-deploy osd create {node-name}:{disk}[:{path/to/journal}]
> >
> > Because when I specify a journal path like this:
> > ceph-deploy osd prepare ceph-osd1:sdd:sdf7
> > And then:
> > ceph-deploy osd activate ceph-osd1:sdd:sdf7
> > I end up with "wrong permission" errors on the OSD when activating,
> > complaining about a "tmp" directory where the files are owned by root;
> > it seems it tries to do things as the ceph user.
> >
> > It works when I don't specify a separate journal.
> >
> > Any idea what I'm doing wrong?
> >
> > Thanks
>
>
> --
> Christian Balzer        Network/Systems Engineer
> ch...@gol.com   Global OnLine Japan/Rakuten Communications
> http://www.gol.com/
>


Re: [ceph-users] question about ceph-deploy osd create

2016-08-04 Thread Guillaume Comte
Yeah, you are right.

From what I understand, using a dedicated ceph user is a good idea.

But the fact is that it doesn't work.

So I worked around that by configuring ceph-deploy to use root.

Was that the intent? I don't think so.
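
Concretely, what I did was roughly this (same host and disks as in my
original mail):

ceph-deploy --username root osd prepare ceph-osd1:sdd:sdf7
ceph-deploy --username root osd activate ceph-osd1:sdd:sdf7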

Thanks for your answer

On 5 Aug 2016 02:01, "Christian Balzer"  wrote:

>
> Hello,
>
> you need to work on your google skills. ^_-
>
> I wrote about this just yesterday, and if you search for "ceph-deploy wrong
> permission" the second link is the issue description:
> http://tracker.ceph.com/issues/13833
>
> So I assume your journal partitions are either pre-made or non-GPT.
>
> Christian
>
> On Thu, 4 Aug 2016 15:34:44 +0200 Guillaume Comte wrote:
>
> > Hi All,
> >
> > With ceph jewel,
> >
> > I'm pretty stuck with
> >
> >
> > ceph-deploy osd create {node-name}:{disk}[:{path/to/journal}]
> >
> > Because when I specify a journal path like this:
> > ceph-deploy osd prepare ceph-osd1:sdd:sdf7
> > And then:
> > ceph-deploy osd activate ceph-osd1:sdd:sdf7
> > I end up with "wrong permission" errors on the OSD when activating,
> > complaining about a "tmp" directory where the files are owned by root;
> > it seems it tries to do things as the ceph user.
> >
> > It works when I don't specify a separate journal.
> >
> > Any idea what I'm doing wrong?
> >
> > Thanks
>
>
> --
> Christian Balzer        Network/Systems Engineer
> ch...@gol.com   Global OnLine Japan/Rakuten Communications
> http://www.gol.com/
>


[ceph-users] question about ceph-deploy osd create

2016-08-04 Thread Guillaume Comte
Hi All,

With ceph jewel,

I'm pretty stuck with


ceph-deploy osd create {node-name}:{disk}[:{path/to/journal}]

Because when I specify a journal path like this:
ceph-deploy osd prepare ceph-osd1:sdd:sdf7
And then:
ceph-deploy osd activate ceph-osd1:sdd:sdf7
I end up with "wrong permission" errors on the OSD when activating,
complaining about a "tmp" directory where the files are owned by root;
it seems it tries to do things as the ceph user.

It works when I don't specify a separate journal.

Any idea what I'm doing wrong?

Thanks
-- 
*Guillaume Comte*
06 25 85 02 02  | guillaume.co...@blade-group.com

90 avenue des Ternes, 75 017 Paris


Re: [ceph-users] osd inside LXC

2016-07-14 Thread Guillaume Comte
Thanks for all your answers,

Today people dedicate servers to act as Ceph OSD nodes, serving the data
they store to other dedicated servers that run applications or VMs. Can
we think about squashing the two into one?

On 14 Jul 2016 18:15, "Daniel Gryniewicz"  wrote:

> This is fairly standard for container deployment: one app per container
> instance.  This is how we're deploying docker in our upstream ceph-docker /
> ceph-ansible as well.
>
> Daniel
>
> On 07/13/2016 08:41 PM, Łukasz Jagiełło wrote:
>
>> Hi,
>>
>> Just wondering why you want each OSD inside a separate LXC container?
>> Just to pin them to specific CPUs?
>>
>> On Tue, Jul 12, 2016 at 6:33 AM, Guillaume Comte
>> <guillaume.co...@blade-group.com> wrote:
>>
>> Hi,
>>
>> I am currently defining a storage architecture based on Ceph, and I
>> wish to check that I haven't misunderstood some things.
>>
>> So, I plan to deploy one OSD per free hard drive on each server, each
>> OSD inside an LXC container.
>>
>> Then, I wish to turn the server itself into an RBD client for objects
>> created in the pools; I also wish to have an SSD for caching (and to
>> store OSD logs as well).
>>
>> The idea behind this is to create CRUSH rules that keep a set of
>> objects within a couple of servers connected to the same pair of
>> switches, in order to have the best proximity between where I store
>> the objects and where I use them (I don't mind weaker insurance
>> against losing data if my whole rack powers down).
>>
>> Am I already on the wrong track? Is there a way to guarantee data
>> proximity with Ceph without the twisted configuration I am prepared
>> to make?
>>
>> Thanks in advance,
>>
>> Regards
>> --
>> *Guillaume Comte*
>> 06 25 85 02 02  | guillaume.co...@blade-group.com
>> 90 avenue des Ternes, 75 017 Paris
>>
>>
>>
>>
>>
>>
>> --
>> Łukasz Jagiełło
>> lukaszjagielloorg
>>
>>
>>
>>
>


[ceph-users] osd inside LXC

2016-07-12 Thread Guillaume Comte
Hi,

I am currently defining a storage architecture based on Ceph, and I wish
to check that I haven't misunderstood some things.

So, I plan to deploy one OSD per free hard drive on each server, each
OSD inside an LXC container.
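
As an illustration of what I mean, a hypothetical per-OSD container
config (the container name, device and CPU numbers are all made up;
LXC 1.x/2.x syntax):

# /var/lib/lxc/osd-0/config
lxc.utsname = osd-0
# pin this OSD's container to two dedicated cores
lxc.cgroup.cpuset.cpus = 0,1
# allow and mount the OSD's data partition (8:33 is /dev/sdc1)
lxc.cgroup.devices.allow = b 8:33 rwm
lxc.mount.entry = /dev/sdc1 srv/ceph/osd.0 xfs defaults 0 0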

Then, I wish to turn the server itself into an RBD client for objects
created in the pools; I also wish to have an SSD for caching (and to
store OSD logs as well).

The idea behind this is to create CRUSH rules that keep a set of objects
within a couple of servers connected to the same pair of switches, in
order to have the best proximity between where I store the objects and
where I use them (I don't mind weaker insurance against losing data if
my whole rack powers down).
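
The rule I have in mind would look something like this sketch (I reuse
the stock "chassis" bucket type to stand for a switch pair; the numbers
are arbitrary and I have not tested it; it would be edited into the
decompiled CRUSH map and loaded back with crushtool and "ceph osd
setcrushmap"):

rule switch_pair_local {
        ruleset 1
        type replicated
        min_size 2
        max_size 3
        step take default
        # keep all replicas under a single chassis (= one switch pair)
        step choose firstn 1 type chassis
        step chooseleaf firstn 0 type host
        step emit
}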

Am I already on the wrong track? Is there a way to guarantee data
proximity with Ceph without the twisted configuration I am prepared to
make?

Thanks in advance,

Regards
-- 
*Guillaume Comte*
06 25 85 02 02  | guillaume.co...@blade-group.com

90 avenue des Ternes, 75 017 Paris


Re: [ceph-users] multiple journals on SSD

2016-07-12 Thread Guillaume Comte
2016-07-12 15:03 GMT+02:00 Vincent Godin :

> Hello.
>
> I've been testing an Intel 3500 as a journal store for a few HDD-based
> OSDs. I stumbled on issues with multiple partitions (>4) and udev (sda5,
> sda6, etc. sometimes do not appear after partition creation). And I'm
> thinking that partitions are not that useful for OSD management, because
> Linux does not allow re-reading the partition table while the disk
> contains in-use volumes.
>
On my side, if I run "partprobe" after creating a partition with fdisk
on a disk that has mounted partitions, it works.
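
That is, roughly (device name made up):

fdisk /dev/sdb        # create the new partition interactively
partprobe /dev/sdb    # ask the kernel to re-read the partition table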



>
> So my question: how do you store many journals on an SSD? My initial thoughts:
>
> 1) a filesystem with file-based journals
> 2) LVM with volumes
>
> Anything else? Best practice?
>
> P.S. I've done benchmarking: the 3500 can support up to 16 10k-RPM HDDs.
>
> Hello,
>
> I would advise you against using 1 SSD for 16 HDDs. The Ceph journal is
> not only a journal but also a write cache during operation. I had that
> kind of configuration with 1 SSD for 20 SATA HDDs. With a Ceph bench, I
> noticed that my rate was limited to between 350 and 400 MB/s; indeed,
> iostat showed my SSD 100% utilised at a rate of 350-400 MB/s.
>
> If you consider that a SATA HDD can sustain a maximum average rate of
> about 100 MB/s, you should configure one SSD (which can sustain up to
> about 400 MB/s) for every 4 SATA HDDs.
>
> Vincent
>
>
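
For option 2) above, a rough sketch (names made up; I have not checked
whether ceph-disk accepts an LV path as the journal device):

pvcreate /dev/sdb
vgcreate vg_ssd /dev/sdb
lvcreate -L 10G -n journal-osd0 vg_ssd
ceph-disk prepare /dev/sdc /dev/vg_ssd/journal-osd0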


-- 
*Guillaume Comte*
06 25 85 02 02  | guillaume.co...@blade-group.com

90 avenue des Ternes, 75 017 Paris