Re: [ceph-users] Should I use different pool?

2016-06-28 Thread EM - SC
Thanks for the answers.

SSD could be an option, but the idea is to grow (if business goes well)
beyond those 18TB.
I'm having second thoughts, however, after reading some negative comments
saying that CephFS doesn't perform very well with very large directories
containing many subdirectories (which is our case).

The big picture here is that we are moving to a new datacenter.
Currently our NAS is on ZFS and has the 18TB of content for our application.
We would like to move away from NFS and use the Ceph object gateway, but
this will require dev time, which we will only have after the DC migration.

So the idea was to go to CephFS just to migrate off our current ZFS
NAS and then, eventually, migrate that data to the object gateway. But
I'm starting to believe it is better to have a ZFS NAS in the new
DC and migrate directly from ZFS to the object gateway once we are there.

Brian :: wrote:
> +1 for 18TB and all SSD - If you need any decent IOPS with a cluster
> this size then all SSDs are the way to go.
>
>
> On Mon, Jun 27, 2016 at 11:47 AM, David  wrote:
>> Yes, you should definitely create different pools for different HDD types.
>> Another decision you need to make is whether you want dedicated nodes for
>> SSD or want to mix them in the same node. You need to ensure you have
>> sufficient CPU and fat enough network links to get the most out of your
>> SSDs.
>>
>> You can add multiple data pools to CephFS so if you can identify the hot and
>> cold data in your dataset you could do "manual" tiering as an alternative to
>> using a cache tier.
>>
>> 18TB is a relatively small capacity, have you considered an all-SSD cluster?
>>
>> On Sun, Jun 26, 2016 at 10:18 AM, EM - SC 
>> wrote:
>>> Hi,
>>>
>>> I'm new to ceph and to the mailing list, so hello all!
>>>
>>> I'm testing ceph and the plan is to migrate our current 18TB storage
>>> (zfs/nfs) to ceph. This will be using CephFS and mounted in our backend
>>> application.
>>> We are also planning on using virtualisation (opennebula) with rbd for
>>> images and, if it makes sense, use rbd for our oracle server.
>>>
>>> My question is about pools.
>>> From what I've read, I should create different pools for different drive
>>> speeds (SAS, SSD, etc.).
>>> - What else should I consider for creating pools?
>>> - should I create different pools for rbd, cephfs, etc?
>>>
>>> thanks in advance,
>>> em
>>>

___
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com


Re: [ceph-users] Should I use different pool?

2016-06-28 Thread Brian ::
+1 for 18TB and all SSD - If you need any decent IOPS with a cluster
this size then all SSDs are the way to go.


On Mon, Jun 27, 2016 at 11:47 AM, David  wrote:
> Yes, you should definitely create different pools for different HDD types.
> Another decision you need to make is whether you want dedicated nodes for
> SSD or want to mix them in the same node. You need to ensure you have
> sufficient CPU and fat enough network links to get the most out of your
> SSDs.
>
> You can add multiple data pools to CephFS so if you can identify the hot and
> cold data in your dataset you could do "manual" tiering as an alternative to
> using a cache tier.
>
> 18TB is a relatively small capacity, have you considered an all-SSD cluster?
>
> On Sun, Jun 26, 2016 at 10:18 AM, EM - SC 
> wrote:
>>
>> Hi,
>>
>> I'm new to ceph and to the mailing list, so hello all!
>>
>> I'm testing ceph and the plan is to migrate our current 18TB storage
>> (zfs/nfs) to ceph. This will be using CephFS and mounted in our backend
>> application.
>> We are also planning on using virtualisation (opennebula) with rbd for
>> images and, if it makes sense, use rbd for our oracle server.
>>
>> My question is about pools.
>> From what I've read, I should create different pools for different drive
>> speeds (SAS, SSD, etc.).
>> - What else should I consider for creating pools?
>> - should I create different pools for rbd, cephfs, etc?
>>
>> thanks in advance,
>> em
>>
___
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com


Re: [ceph-users] Should I use different pool?

2016-06-27 Thread Kanchana. P
The Calamari URL displays the error below:

New Calamari Installation
This appears to be the first time you have started Calamari and there are
no clusters currently configured.
3 Ceph servers are connected to Calamari, but no Ceph cluster has been
created yet. Please use ceph-deploy to create a cluster; please see the
Inktank Ceph Enterprise documentation for more details.

When I executed ceph-deploy calamari connect again, the calamari.conf file
changed to "master: None".

My cluster has 4 nodes:
AMCNode: admin + mon + calamari
siteAosd
siteBosd
siteCosd

ceph version 10.2.2 on Ubuntu 14.04
salt version 2014.7.5+ds-1ubuntu1
diamond 3.4.67_all.deb

1. Installed the ceph-deploy deb package on the admin/calamari server node

wget
http://download.ceph.com/debian-jewel/pool/main/c/ceph-deploy/ceph-deploy_1.5.34_all.deb
sudo dpkg -i ceph-deploy_1.5.34_all.deb

2. Downloaded calamari deb packages on admin/calamari server node

sudo wget
http://download.ceph.com/calamari/1.3.1/ubuntu/trusty/pool/main/c/calamari/calamari-server_1.3.1.1-1trusty_amd64.deb
sudo wget
http://download.ceph.com/calamari/1.3.1/ubuntu/trusty/pool/main/c/calamari-clients/calamari-clients_1.3.1.1-1trusty_all.deb
sudo wget
http://download.ceph.com/calamari/1.3.1/ubuntu/trusty/pool/main/d/diamond/diamond_3.4.67_all.deb


3. Added the Salt 2014.7 PPA on the admin/calamari server node

sudo add-apt-repository ppa:saltstack/salt2014-7

4. Then ran the commands below on the calamari server / admin node

sudo apt-get update
sudo apt-get install salt-master
sudo apt-get install salt-minion
sudo apt-get install -y apache2 libapache2-mod-wsgi libcairo2
supervisor python-cairo libpq5 postgresql
sudo apt-get -f install
sudo dpkg -i calamari-server*.deb calamari-clients*.deb
sudo calamari-ctl initialize

5. Edited the ceph.conf file and added:

[ceph-deploy-calamari]
master = amcnode

Pushed the config file to all other nodes:

ceph-deploy --overwrite-conf config push amcnode siteAosd siteBosd siteCosd

6. Installed the Salt packages on the other nodes; copied the Diamond package
to all other nodes and installed it

sudo add-apt-repository ppa:saltstack/salt2014-7
sudo dpkg -i diamond_3.4.67_all.deb
sudo apt-get install  python-support

7. Executed the command below from the calamari server / admin node

ceph-deploy calamari connect siteAosd siteBosd siteCosd

8. The URL shows all 3 nodes and prompts to "Add" them. Adding the nodes
failed.

9. The calamari.conf file on the Calamari client nodes had "master: None";
modified it to "master: amcnode" and restarted the salt-minion.

sudo vi /etc/salt/minion.d/calamari.conf
master: amcnode

sudo service salt-minion restart
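
(For reference, the standard Salt checks below, run on amcnode, should show
whether the minions are actually registered with and reachable from the
master -- just a way to narrow this down, not part of the install guides.)

sudo salt-key -L            # list accepted/unaccepted minion keys on the master
sudo salt-key -A            # accept any pending minion keys
sudo salt '*' test.ping     # check that the master can reach every minion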

10. The URL still shows the error below:

New Calamari Installation
This appears to be the first time you have started Calamari and there are
no clusters currently configured.
3 Ceph servers are connected to Calamari, but no Ceph cluster has been
created yet. Please use ceph-deploy to create a cluster; please see the
Inktank Ceph Enterprise documentation for more details.

11. Executed ceph-deploy calamari connect again; now the calamari.conf file has
changed back to "master: None".

Thanks for your help in advance.

On Sun, Jun 26, 2016 at 2:48 PM, EM - SC 
wrote:

> Hi,
>
> I'm new to ceph and to the mailing list, so hello all!
>
> I'm testing ceph and the plan is to migrate our current 18TB storage
> (zfs/nfs) to ceph. This will be using CephFS and mounted in our backend
> application.
> We are also planning on using virtualisation (opennebula) with rbd for
> images and, if it makes sense, use rbd for our oracle server.
>
> My question is about pools.
> From what I've read, I should create different pools for different drive
> speeds (SAS, SSD, etc.).
> - What else should I consider for creating pools?
> - should I create different pools for rbd, cephfs, etc?
>
> thanks in advance,
> em
>
___
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com


Re: [ceph-users] Should I use different pool?

2016-06-27 Thread David
Yes, you should definitely create different pools for different HDD types.
Another decision you need to make is whether you want dedicated nodes for
SSD or want to mix them in the same node. You need to ensure you have
sufficient CPU and fat enough network links to get the most out of your
SSDs.

You can add multiple data pools to CephFS, so if you can identify the hot
and cold data in your dataset you could do "manual" tiering as an
alternative to using a cache tier.
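
(Roughly, and only as a sketch -- the filesystem name "cephfs", the pool
"cephfs_ssd" and the path "/mnt/cephfs/hot" below are made-up examples:)

ceph osd pool create cephfs_ssd 128                              # extra data pool, e.g. on SSD OSDs via a CRUSH rule
ceph fs add_data_pool cephfs cephfs_ssd                          # let CephFS use it as an additional data pool
setfattr -n ceph.dir.layout.pool -v cephfs_ssd /mnt/cephfs/hot   # new files under this dir land in that pool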

18TB is a relatively small capacity, have you considered an all-SSD cluster?

On Sun, Jun 26, 2016 at 10:18 AM, EM - SC 
wrote:

> Hi,
>
> I'm new to ceph and to the mailing list, so hello all!
>
> I'm testing ceph and the plan is to migrate our current 18TB storage
> (zfs/nfs) to ceph. This will be using CephFS and mounted in our backend
> application.
> We are also planning on using virtualisation (opennebula) with rbd for
> images and, if it makes sense, use rbd for our oracle server.
>
> My question is about pools.
> From what I've read, I should create different pools for different drive
> speeds (SAS, SSD, etc.).
> - What else should I consider for creating pools?
> - should I create different pools for rbd, cephfs, etc?
>
> thanks in advance,
> em
>
___
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com


Re: [ceph-users] Should I use different pool?

2016-06-26 Thread Oliver Dzombic
Hi Em,

It's highly recommended to put the journals on SSDs,

considering

https://www.sebastien-han.fr/blog/2014/10/10/ceph-how-to-test-if-your-ssd-is-suitable-as-a-journal-device/
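
(For reference, the test from that post is roughly the fio run below; /dev/sdX
is a placeholder for the SSD under test, and the run overwrites data on it:)

fio --filename=/dev/sdX --direct=1 --sync=1 --rw=write --bs=4k --numjobs=1 \
    --iodepth=1 --runtime=60 --time_based --group_reporting --name=journal-test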

---

Also, if you want speed, it's highly recommended to use a cache tier.
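
(A minimal sketch of the commands involved, assuming an existing backing pool
"rbd" and a hypothetical SSD-backed pool "cache":)

ceph osd tier add rbd cache                 # attach the cache pool to the backing pool
ceph osd tier cache-mode cache writeback    # serve reads/writes from the cache tier
ceph osd tier set-overlay rbd cache         # route client traffic through the cache
ceph osd pool set cache hit_set_type bloom  # needed so the tier can track object hits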

---

Create the pool with a pg_num value that is not too high. You can increase
it at any time, but you cannot decrease it just like that. Also, your
cluster will stop working if the PG / OSD ratio is too high.
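
(For example, with a made-up pool name and numbers:)

ceph osd pool create mypool 128 128    # initial pg_num / pgp_num
ceph osd pool set mypool pg_num 256    # can be raised later...
ceph osd pool set mypool pgp_num 256   # ...but not lowered again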

---

You must create different pools for RBD and CephFS, but the
documentation will inform you about that anyway.
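
(Roughly, with hypothetical pool names and PG counts:)

ceph osd pool create rbd 128                      # pool for RBD images (a default "rbd" pool may already exist)
ceph osd pool create cephfs_data 128              # CephFS data pool
ceph osd pool create cephfs_metadata 32           # CephFS metadata pool
ceph fs new cephfs cephfs_metadata cephfs_data    # create the filesystem on those pools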

http://docs.ceph.com/docs/jewel/rados/

is in general a very good starting point.

I suggest you read it, and I mean read it, not just skim over it.

And after you have read it, read it again, because you will surely have
missed some useful information.

Good luck!


-- 
Mit freundlichen Gruessen / Best regards

Oliver Dzombic
IP-Interactive

mailto:i...@ip-interactive.de

Address:

IP Interactive UG ( haftungsbeschraenkt )
Zum Sonnenberg 1-3
63571 Gelnhausen

HRB 93402, Hanau District Court
Management: Oliver Dzombic

Tax No.: 35 236 3622 1
VAT ID: DE274086107


On 26.06.2016 at 11:18, EM - SC wrote:
> Hi,
> 
> I'm new to ceph and to the mailing list, so hello all!
> 
> I'm testing ceph and the plan is to migrate our current 18TB storage
> (zfs/nfs) to ceph. This will be using CephFS and mounted in our backend
> application.
> We are also planning on using virtualisation (opennebula) with rbd for
> images and, if it makes sense, use rbd for our oracle server.
> 
> My question is about pools.
> From what I've read, I should create different pools for different drive
> speeds (SAS, SSD, etc.).
> - What else should I consider for creating pools?
> - should I create different pools for rbd, cephfs, etc?
> 
> thanks in advance,
> em
> 
___
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com