On 9/2/16, 8:48 AM, "Paul Bourke" <paul.bou...@oracle.com> wrote:

>Hi Kolla,
>
>We have been experiencing a long running issue with Kolla that I have 
>brought up briefly a few times, but not made too much noise about.
>
>I'm taking the time to write about it in the hopes that a) as a 
>community we may be able to solve it, and b) if other operators start 
>complaining down the line we'll have discussed it.
>
>The issue is that right now all data is stored using named volumes under 
>/var/lib/docker. We have encountered users who are not happy with this. 
>As well as raising issues of scale, even a large /var/ partition is 
>liable to fill up fast, as (in a default setup) it has to store Docker 
>images, Glance images, Nova instance data, and other potentially 
>large files such as logs. Operators don't typically expect to store all 
>of this on one partition, and doing so doesn't offer the choice of layout 
>expected from a standard (non-containerised) deploy.
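>
>(A quick way to confirm where a named volume actually lives; "nova_data" 
>is just an illustrative volume name here:)
>
>    docker volume inspect nova_data
>    # output includes something like:
>    # "Mountpoint": "/var/lib/docker/volumes/nova_data/_data"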
>
>In our Kilo based solution we were solving this using host bind mounts, 
>e.g. -v /var/lib/nova:/var/lib/nova, where the directory on the left 
>hand side can be mounted wherever you like. Two major issues with this 
>approach are:
>
>1) Kolla tasks have to be refactored in many places to replace 
>"nova_data:/var/lib/nova" with "/var/lib/nova:/var/lib/nova" (easily 
>solvable)
>
>2) This appears to be incompatible with the 'drop root' work done, as 
>even though /var/lib/nova is created and chowned during the build 
>process, its permissions are replaced with those of root when bind 
>mounted from the host (illustrated in the sketch below).
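>
>(A minimal sketch of the two mount styles; image names and other flags 
>are elided, and the host path and ownership shown are assumptions:)
>
>    # named volume (current Kolla approach):
>    docker run -v nova_data:/var/lib/nova ...
>
>    # host bind mount (our Kilo-era approach); the host-side
>    # /var/lib/nova can itself sit on any filesystem the operator likes:
>    docker run -v /var/lib/nova:/var/lib/nova ...
>
>    # but the bind mount takes on the host directory's ownership, so
>    # the chown done at image build time is effectively lost:
>    ls -ld /var/lib/nova    # root:root on a freshly created host dir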
>
>Other avenues I've explored are seeing whether the docker volume driver 
>can be configured to place data elsewhere (it appears not), and 
>symlinking the location of the volume to another filesystem, as 
>suggested by Michał. Symlinking unfortunately also appears not to play 
>well with the Docker volume mechanisms.
>
>Do people see this as a potential limitation of Kolla (or maybe 
>Docker?), or are we (and our users) being unreasonable in expecting to 
>be able to place data on more than one filesystem?

Paul,

I don’t think it is an unreasonable request, but at present our architecture 
depends heavily on named volumes.  I think this is more of a problem for Docker 
to solve (how to distribute /var/lib/docker over the host OS, or over many 
host OSes).

I think for the second case (many host OSes) to work well, we need to stick 
with named volumes and let the Docker community sort it out, though see the 
sketch below for one way a named volume can already be pointed at another 
filesystem.
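
(If the local volume driver in the deployed Docker release supports mount 
options, as I believe recent releases do, a named volume can be backed by an 
arbitrary directory or device while keeping the named-volume interface Kolla 
relies on. A sketch, with an illustrative path:)

    # back the named volume "nova_data" with a directory on another
    # filesystem, using the local driver's bind-mount options:
    docker volume create --name nova_data -d local \
        -o type=none -o o=bind -o device=/mnt/bigdisk/nova

    # containers then consume it exactly as before:
    docker run -v nova_data:/var/lib/nova ...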

The next step would be to file an issue in Docker's GitHub repository and 
essentially write down our requirements.

It is possible they are already working on this problem.

The storage plugins available for Docker are one possible solution (storing 
/var/lib/docker in some third-party storage such as Swift, AWS, or Ceph), and 
Docker Inc could possibly accelerate that work to meet the specific problem 
you have.
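
(As an illustration of the plugin route: with a third-party volume plugin 
installed, e.g. REX-Ray backed by Ceph, a named volume can live off-host 
entirely. The driver name and setup here are assumptions, not something 
Kolla ships today:)

    # create a named volume through the plugin instead of the local driver:
    docker volume create --name nova_data --driver rexray

    # container-side usage is unchanged:
    docker run -v nova_data:/var/lib/nova ...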

Meanwhile, I think it makes good sense to document that our "40gb disk space" 
requirement is for compute nodes when used with Ceph, and to document the disk 
space requirements for control nodes separately (which can be 30-50GB/day with 
debug logging enabled).

Also, we need to sort out log rotation for Elasticsearch.  I think this is a 
pretty easy problem to solve, possibly even using the logrotate tool and cron.  
This would permit operators to specify the maximum disk space they want 
consumed on the control node, and we could use that to drive log 
rotation/compression and, eventually, removal of old data from Elasticsearch's 
file-based store (a rough sketch follows).
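
(A rough sketch of what that could look like; the log path, size cap, index 
naming scheme, and retention window are all assumptions:)

    # /etc/logrotate.d/kolla -- cap and compress service logs
    /var/lib/docker/volumes/kolla_logs/_data/*/*.log {
        size 100M
        rotate 4
        compress
        missingok
        notifempty
    }

    # crontab entry: each night, delete the Elasticsearch index for the
    # day that just fell outside a 14-day retention window
    0 2 * * * curl -s -XDELETE "http://localhost:9200/log-$(date -d '14 days ago' +\%Y.\%m.\%d)"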

Regards
-steve
>
>Thanks,
>-Paul
__________________________________________________________________________
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
