Re: [ceph-users] Ceph expansion/deploy via ansible

2019-06-03 Thread Shawn Iverson
The cephfs_metadata pool makes sense on SSD, but it won't need a lot of
space.  Chances are that you'll have plenty of SSD storage to spare for
other uses.

Personally, I'm migrating away from a cache tier and rebuilding my OSDs. I
am finding that Bluestore OSDs with block.db on SSD perform better in most
cases than a cache tier, and it is a simpler design.  There are some good
notes here on good and bad use cases for cache tiering:
http://docs.ceph.com/docs/master/rados/operations/cache-tiering/
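
If you want to go the device-class route Daniele mentions, a rough sketch
of the commands (the rule name and device names below are just examples --
check them against your own cluster and the docs for your release):

  # confirm the SSD OSDs came up with the right device class
  $ ceph osd crush class ls
  $ ceph osd crush tree --show-shadow

  # replicated rule restricted to the ssd class, host failure domain
  $ ceph osd crush rule create-replicated ssd_rule default host ssd

  # point the metadata pool at it (this will trigger data movement)
  $ ceph osd pool set cephfs_metadata crush_rule ssd_rule

  # and for the block.db-on-SSD layout, one way to build an OSD with
  # ceph-volume (device names are hypothetical)
  $ ceph-volume lvm create --data /dev/sdb --block.db /dev/nvme0n1p1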


On Mon, Jun 3, 2019 at 3:35 PM Daniele Riccucci  wrote:

> Hello,
> sorry to jump in.
> I'm looking to expand with SSDs on an HDD cluster.
> I'm thinking about moving cephfs_metadata to the SSDs (maybe with device
> class?) or to use them as cache layer in front of the cluster.
> Any tips on how to do it with ceph-ansible?
> I can share the config I currently have if necessary.
> Thank you.
>
> Daniele
>
> On 17/04/19 17:01, Sinan Polat wrote:
> > I have deployed, expanded and upgraded multiple Ceph clusters using
> ceph-ansible. Works great.
> >
> > What information are you looking for?
> >
> > --
> > Sinan
> >
> >> On 17 Apr 2019 at 16:24, Francois Lafont <francois.lafont.1...@gmail.com> wrote:
> >>
> >> Hi,
> >>
> >> +1 for ceph-ansible too. ;)
> >>
> >> --
> >> François (flaf)


-- 
Shawn Iverson, CETL
Director of Technology
Rush County Schools
765-932-3901 option 7
ivers...@rushville.k12.in.us



Re: [ceph-users] Ceph expansion/deploy via ansible

2019-06-03 Thread Daniele Riccucci

Hello,
sorry to jump in.
I'm looking to expand with SSDs on an HDD cluster.
I'm thinking about moving cephfs_metadata to the SSDs (maybe with device 
class?) or to use them as cache layer in front of the cluster.

Any tips on how to do it with ceph-ansible?
I can share the config I currently have if necessary.
Thank you.

Daniele

On 17/04/19 17:01, Sinan Polat wrote:

I have deployed, expanded and upgraded multiple Ceph clusters using 
ceph-ansible. Works great.

What information are you looking for?

--
Sinan


On 17 Apr 2019 at 16:24, Francois Lafont wrote:

Hi,

+1 for ceph-ansible too. ;)

--
François (flaf)




Re: [ceph-users] Ceph expansion/deploy via ansible

2019-04-17 Thread Sinan Polat
I have deployed, expanded and upgraded multiple Ceph clusters using 
ceph-ansible. Works great.

What information are you looking for?
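
In case a concrete starting point helps, the workflow I follow is roughly
the one below. Hostnames and group names are only an example inventory
layout based on ceph-ansible's sample files, so check the docs for the
branch you are on.

  # hosts (inventory)
  [mons]
  mon1
  mon2
  mon3

  [mgrs]
  mon1

  [osds]
  osd1
  osd2
  osd3   # expansion = add the new OSD node here and re-run the playbook

  # initial deploy or expansion
  $ ansible-playbook -i hosts site.yml
  # or limit the run to the OSD nodes
  $ ansible-playbook -i hosts site.yml --limit osds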

--
Sinan

> On 17 Apr 2019 at 16:24, Francois Lafont wrote:
> 
> Hi,
> 
> +1 for ceph-ansible too. ;)
> 
> -- 
> François (flaf)



Re: [ceph-users] Ceph expansion/deploy via ansible

2019-04-17 Thread Francois Lafont

Hi,

+1 for ceph-ansible too. ;)

--
François (flaf)


Re: [ceph-users] Ceph expansion/deploy via ansible

2019-04-17 Thread Daniel Gryniewicz

On 4/17/19 4:24 AM, John Molefe wrote:

Hi everyone,

I currently have a Ceph cluster running on SUSE and I have an expansion 
project that I will be starting around June.
Has anybody here deployed (from scratch) or expanded their Ceph cluster 
via Ansible? I would appreciate it if you'd share your experiences, 
challenges, topology, etc.




I've done both, many times, although only with test clusters in the 3-8 
machine range.  My experience with ceph-ansible is that the initial config 
is finicky and needs to be exactly right, but once you have a working 
config, it just works for installs and expansions.  I've added machines 
one at a time and several at a time, and it works well.
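
To give a flavour of what that config looks like, below is a minimal sketch
of the group_vars I start from. The variable names follow the ceph-ansible
documentation, but the release, networks, interface and device paths are
placeholders, and the exact set of required variables depends on the branch
you deploy from.

  # group_vars/all.yml
  ceph_origin: repository
  ceph_repository: community
  ceph_stable_release: nautilus      # whichever release you are targeting
  monitor_interface: eth0
  public_network: 192.168.10.0/24
  cluster_network: 192.168.20.0/24

  # group_vars/osds.yml
  osd_objectstore: bluestore
  devices:
    - /dev/sdb
    - /dev/sdc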


Daniel


Re: [ceph-users] CEPH Expansion

2015-01-25 Thread Georgios Dimitrakakis

Hi Craig!

Indeed I had reduced the replicated size to 2 instead of 3 while the 
minimum size is 1.


I hadn't touched the crushmap though.

I would like to keep going with a replicated size of 2. Do you 
think this would be a problem?


Please find below the output of the command:

$ ceph osd dump | grep ^pool
pool 3 'data' replicated size 2 min_size 1 crush_ruleset 0 object_hash 
rjenkins pg_num 512 pgp_num 512 last_change 524 flags hashpspool 
stripe_width 0
pool 4 'metadata' replicated size 2 min_size 1 crush_ruleset 0 
object_hash rjenkins pg_num 512 pgp_num 512 last_change 526 flags 
hashpspool stripe_width 0
pool 5 'rbd' replicated size 2 min_size 1 crush_ruleset 0 object_hash 
rjenkins pg_num 512 pgp_num 512 last_change 528 flags hashpspool 
stripe_width 0
pool 6 '.rgw' replicated size 2 min_size 1 crush_ruleset 0 object_hash 
rjenkins pg_num 512 pgp_num 512 last_change 618 flags hashpspool 
stripe_width 0
pool 7 '.rgw.control' replicated size 2 min_size 1 crush_ruleset 0 
object_hash rjenkins pg_num 512 pgp_num 512 last_change 616 flags 
hashpspool stripe_width 0
pool 8 '.rgw.gc' replicated size 2 min_size 1 crush_ruleset 0 
object_hash rjenkins pg_num 512 pgp_num 512 last_change 614 flags 
hashpspool stripe_width 0
pool 9 '.log' replicated size 2 min_size 1 crush_ruleset 0 object_hash 
rjenkins pg_num 512 pgp_num 512 last_change 612 flags hashpspool 
stripe_width 0
pool 10 '.intent-log' replicated size 2 min_size 1 crush_ruleset 0 
object_hash rjenkins pg_num 512 pgp_num 512 last_change 610 flags 
hashpspool stripe_width 0
pool 11 '.usage' replicated size 2 min_size 1 crush_ruleset 0 
object_hash rjenkins pg_num 512 pgp_num 512 last_change 608 flags 
hashpspool stripe_width 0
pool 12 '.users' replicated size 2 min_size 1 crush_ruleset 0 
object_hash rjenkins pg_num 512 pgp_num 512 last_change 606 flags 
hashpspool stripe_width 0
pool 13 '.users.email' replicated size 2 min_size 1 crush_ruleset 0 
object_hash rjenkins pg_num 512 pgp_num 512 last_change 604 flags 
hashpspool stripe_width 0
pool 14 '.users.swift' replicated size 2 min_size 1 crush_ruleset 0 
object_hash rjenkins pg_num 512 pgp_num 512 last_change 602 flags 
hashpspool stripe_width 0
pool 15 '.users.uid' replicated size 2 min_size 1 crush_ruleset 0 
object_hash rjenkins pg_num 512 pgp_num 512 last_change 600 flags 
hashpspool stripe_width 0
pool 16 '.rgw.root' replicated size 2 min_size 1 crush_ruleset 0 
object_hash rjenkins pg_num 512 pgp_num 512 last_change 598 flags 
hashpspool stripe_width 0
pool 17 '.rgw.buckets.index' replicated size 2 min_size 1 crush_ruleset 
0 object_hash rjenkins pg_num 512 pgp_num 512 last_change 596 flags 
hashpspool stripe_width 0
pool 18 '.rgw.buckets' replicated size 2 min_size 1 crush_ruleset 0 
object_hash rjenkins pg_num 1024 pgp_num 1024 last_change 826 flags 
hashpspool stripe_width 0
pool 19 '.rgw.buckets.extra' replicated size 2 min_size 1 crush_ruleset 
0 object_hash rjenkins pg_num 512 pgp_num 512 last_change 722 owner 
18446744073709551615 flags hashpspool stripe_width 0




Warmest regards,


George


You've either modified the crushmap, or changed the pool size to 1. 
The defaults create 3 replicas on different hosts.

What does `ceph osd dump | grep ^pool` output?  If the size param is
1, then you reduced the replica count.  If the size param is > 1, you
must've adjusted the crushmap.

Either way, after you add the second node would be the ideal time to
change that back to the default.

Given that you only have 40GB of data in the cluster, you shouldn't
have a problem adding the 2nd node.

On Fri, Jan 23, 2015 at 3:58 PM, Georgios Dimitrakakis  wrote:


Hi Craig!

For the moment I have only one node with 10 OSDs.
I want to add a second one with 10 more OSDs.

Each OSD in every node is a 4TB SATA drive. No SSD disks!

The data are approximately 40GB and I will do my best to have zero
or at least very, very low load during the expansion process.

To be honest I haven't touched the crushmap. I wasn't aware that I
should have changed it. Therefore, it is still the default one.
Is that OK? Where can I read about host-level replication in the
CRUSH map in order to make sure that it's applied, or how can I
find out if this is already enabled?

Any other things that I should be aware of?

All the best,

George


It depends.  There are a lot of variables, like how many nodes and
disks you currently have.  Are you using journals on SSD.  How much
data is already in the cluster.  What the client load is on the
cluster.

Since you only have 40 GB in the cluster, it shouldn't take long to
backfill.  You may find that it finishes backfilling faster than you
can format the new disks.

Since you only have a single OSD node, you must've changed the crushmap
to allow replication over OSDs instead of hosts.  After you get the
new node in would be the best time to switch back to host level
replication.  The more data you have, the more painful that change
will become.

On Sun, Jan 18, 2015 at 10:09 AM, Georgios Dimitrakakis wrote:

Re: [ceph-users] CEPH Expansion

2015-01-23 Thread Craig Lewis
You've either modified the crushmap, or changed the pool size to 1.  The
defaults create 3 replicas on different hosts.

What does `ceph osd dump | grep ^pool` output?  If the size param is 1,
then you reduced the replica count.  If the size param is > 1, you must've
adjusted the crushmap.

Either way, after you add the second node would be the ideal time to change
that back to the default.


Given that you only have 40GB of data in the cluster, you shouldn't have a
problem adding the 2nd node.
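
Roughly, and assuming the pool names a default install creates -- adjust
to whatever your `ceph osd dump` shows (the rule parameter is the
crush_ruleset field you can see in that output, and the rule id 1 below is
only an example):

  # put the replica count back to the default of 3, per pool
  $ ceph osd pool set rbd size 3
  $ ceph osd pool set rbd min_size 2

  # inspect the existing rules, then add one that replicates across hosts
  $ ceph osd crush rule dump
  $ ceph osd crush rule create-simple replicated_host default host

  # point each pool at the new rule, using the rule id from the dump above
  $ ceph osd pool set rbd crush_ruleset 1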


On Fri, Jan 23, 2015 at 3:58 PM, Georgios Dimitrakakis  wrote:

> Hi Craig!
>
>
> For the moment I have only one node with 10 OSDs.
> I want to add a second one with 10 more OSDs.
>
> Each OSD in every node is a 4TB SATA drive. No SSD disks!
>
> The data are approximately 40GB and I will do my best to have zero
> or at least very very low load during the expansion process.
>
> To be honest I haven't touched the crushmap. I wasn't aware that I
> should have changed it. Therefore, it still is with the default one.
> Is that OK? Where can I read about the host level replication in CRUSH map
> in order
> to make sure that it's applied or how can I find if this is already
> enabled?
>
> Any other things that I should be aware of?
>
> All the best,
>
>
> George
>
>
>  It depends.  There are a lot of variables, like how many nodes and
>> disks you currently have.  Are you using journals on SSD.  How much
>> data is already in the cluster.  What the client load is on the
>> cluster.
>>
>> Since you only have 40 GB in the cluster, it shouldn't take long to
>> backfill.  You may find that it finishes backfilling faster than you
>> can format the new disks.
>>
>> Since you only have a single OSD node, you must've changed the crushmap
>> to allow replication over OSDs instead of hosts.  After you get the
>> new node in would be the best time to switch back to host level
>> replication.  The more data you have, the more painful that change
>> will become.
>>
>> On Sun, Jan 18, 2015 at 10:09 AM, Georgios Dimitrakakis  wrote:
>>
>>  Hi Jiri,
>>>
>>> thanks for the feedback.
>>>
>>> My main concern is if it's better to add each OSD one-by-one and
>>> wait for the cluster to rebalance every time or do it all-together
>>> at once.
>>>
>>> Furthermore an estimate of the time to rebalance would be great!
>>>
>>> Regards,
>>>
>>
>>


Re: [ceph-users] CEPH Expansion

2015-01-23 Thread Georgios Dimitrakakis

Hi Craig!


For the moment I have only one node with 10 OSDs.
I want to add a second one with 10 more OSDs.

Each OSD in every node is a 4TB SATA drive. No SSD disks!

The data are approximately 40GB and I will do my best to have zero
or at least very very low load during the expansion process.

To be honest I haven't touched the crushmap. I wasn't aware that I
should have changed it. Therefore, it still is with the default one.
Is that OK? Where can I read about the host level replication in CRUSH 
map in order
to make sure that it's applied or how can I find if this is already 
enabled?


Any other things that I should be aware of?

All the best,


George



It depends.  There are a lot of variables, like how many nodes and
disks you currently have.  Are you using journals on SSD.  How much
data is already in the cluster.  What the client load is on the
cluster.

Since you only have 40 GB in the cluster, it shouldn't take long to
backfill.  You may find that it finishes backfilling faster than you
can format the new disks.

Since you only have a single OSD node, you must've changed the crushmap
to allow replication over OSDs instead of hosts.  After you get the
new node in would be the best time to switch back to host level
replication.  The more data you have, the more painful that change
will become.

On Sun, Jan 18, 2015 at 10:09 AM, Georgios Dimitrakakis  wrote:


Hi Jiri,

thanks for the feedback.

My main concern is if it's better to add each OSD one-by-one and
wait for the cluster to rebalance every time or do it all-together
at once.

Furthermore an estimate of the time to rebalance would be great!

Regards,





Re: [ceph-users] CEPH Expansion

2015-01-23 Thread Craig Lewis
It depends.  There are a lot of variables, like how many nodes and disks
you currently have.  Are you using journals on SSD.  How much data is
already in the cluster.  What the client load is on the cluster.

Since you only have 40 GB in the cluster, it shouldn't take long to
backfill.  You may find that it finishes backfilling faster than you can
format the new disks.


Since you only have a single OSD node, you must've changed the crushmap to
allow replication over OSDs instead of hosts.  After you get the new node
in would be the best time to switch back to host level replication.  The
more data you have, the more painful that change will become.
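
If client impact during the move is a worry, one common approach is to
throttle backfill and just watch it run. The values below are examples,
and the option names and defaults vary a bit between releases:

  # slow backfill/recovery down so client I/O stays responsive
  $ ceph tell osd.* injectargs '--osd-max-backfills 1 --osd-recovery-max-active 1'

  # watch progress
  $ ceph -s
  $ ceph -w
  $ ceph health detail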






On Sun, Jan 18, 2015 at 10:09 AM, Georgios Dimitrakakis <
gior...@acmac.uoc.gr> wrote:

> Hi Jiri,
>
> thanks for the feedback.
>
> My main concern is if it's better to add each OSD one-by-one and wait for
> the cluster to rebalance every time or do it all-together at once.
>
> Furthermore an estimate of the time to rebalance would be great!
>
> Regards,
>


Re: [ceph-users] CEPH Expansion

2015-01-18 Thread Georgios Dimitrakakis

Hi Jiri,

thanks for the feedback.

My main concern is if it's better to add each OSD one-by-one and wait 
for the cluster to rebalance every time or do it all-together at once.


Furthermore an estimate of the time to rebalance would be great!

Regards,


George


Hi George,

 List disks available:
 # $ ceph-deploy disk list {node-name [node-name]...}

 Add OSD using osd create:
 # $ ceph-deploy osd create {node-name}:{disk}[:{path/to/journal}]

 Or you can use the manual steps to prepare and activate disk
described at

http://ceph.com/docs/master/start/quick-ceph-deploy/#expanding-your-cluster

 Jiri

On 15/01/2015 06:36, Georgios Dimitrakakis wrote:


Hi all!

I would like to expand our CEPH Cluster and add a second OSD node.

In this node I will have ten 4TB disks dedicated to CEPH.

What is the proper way of putting them in the already available
CEPH node?

I guess that the first thing to do is to prepare them with
ceph-deploy and mark them as out at preparation.

I should then restart the services and add (mark as in) one of
them. Afterwards, I have to wait for the rebalance
to occur and upon finishing I will add the second and so on. Is
this safe enough?

How long do you expect the rebalancing procedure to take?

I already have ten more 4TB disks at another node and the amount of
data is around 40GB with 2x replication factor.
The connection is over Gigabit.

Best,

George






Re: [ceph-users] CEPH Expansion

2015-01-18 Thread Jiri Kanicky

Hi George,

List disks available:
# $ ceph-deploy disk list {node-name [node-name]...}

Add OSD using osd create:
# $ ceph-deploy osd create {node-name}:{disk}[:{path/to/journal}]

Or you can use the manual steps to prepare and activate disk described 
at 
http://ceph.com/docs/master/start/quick-ceph-deploy/#expanding-your-cluster
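
For example, with a hypothetical new node called node2 and a data disk sdb
(add the optional journal path only if you have an SSD partition for it):

  # list what the new node exposes
  $ ceph-deploy disk list node2

  # wipe the disk and create the OSD on it
  $ ceph-deploy disk zap node2:sdb
  $ ceph-deploy osd create node2:sdb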


Jiri

On 15/01/2015 06:36, Georgios Dimitrakakis wrote:

Hi all!

I would like to expand our CEPH Cluster and add a second OSD node.

In this node I will have ten 4TB disks dedicated to CEPH.

What is the proper way of putting them in the already available CEPH 
node?


I guess that the first thing to do is to prepare them with ceph-deploy 
and mark them as out at preparation.


I should then restart the services and add (mark as in) one of them. 
Afterwards, I have to wait for the rebalance
to occur and upon finishing I will add the second and so on. Is this 
safe enough?



How long do you expect the rebalancing procedure to take?


I already have ten more 4TB disks at another node and the amount of 
data is around 40GB with 2x replication factor.

The connection is over Gigabit.


Best,


George

