Re: [Openstack] Swift accounts and container replication

2016-04-06 Thread Pete Zaitcev
On Mon, 04 Apr 2016 18:13:17 +0100
Carlos Rodrigues  wrote:

> I have a Swift cluster with nodes in two geographically dispersed regions,
> and I have storage policies for both regions.
> 
> How can I replicate the accounts and containers between the two
> regions?

Policies do not apply to accounts and containers (although a container
records which policy applies to the objects in it, the container itself
is not subject to any policy). So all of your accounts and containers
are going to be replicated across the distributed cluster.
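If the goal is to pin a container's objects to one region's policy, the
policy is chosen when the container is created and cannot be changed
afterwards. A minimal sketch with the swift client, assuming a policy
named "region-two" is defined in swift.conf with its own ring (the name
is just illustrative):

$ swift post -H "X-Storage-Policy: region-two" mycontainer
$ swift stat mycontainer | grep -i policy

The account and container databases themselves are still placed by the
account and container rings, independently of any object storage policy.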

-- Pete



Re: [Openstack] swift ringbuilder and disk size/capacity relationship

2016-04-06 Thread Pete Zaitcev
On Wed, 16 Mar 2016 13:23:31 +1300
Mark Kirkwood  wrote:

> So integrating swift-recon into regular monitoring/alerting
> (collectd/nagios or whatever) is one approach (mind you, most folks
> already monitor disk usage data... and there is nothing overly special
> about ensuring you don't run out of space)!

So the overall conclusion is that the operator must monitor the cluster's
state and not let it run out of space. If you do in fact run out,
second-order trouble starts happening; in particular, pending processing
will not run correctly.
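For the monitoring itself, swift-recon can pull per-device usage from the
storage nodes, which slots neatly into collectd/nagios as suggested above.
A quick sketch, assuming the recon middleware is enabled on the object
servers:

$ swift-recon -d    # disk usage distribution across object-server devices
$ swift-recon -u    # report any unmounted drives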

If one or a few nodes run out of space due to a bug or some unrelated
problem, Swift can maintain the desired durability by writing to so-called
"handoff" devices. Once you restore the primaries, replication relocates
the affected partitions back from the handoffs. That keeps the cluster
functional while the recovery is carried out.

But overall there's no magic. The general idea is that you make your
customers pay, and if the business is profitable, they pay you enough to
buy new storage just fast enough to stay ahead of them filling it. For
operators of private clouds, there are quotas.
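For completeness, account quotas in Swift are middleware-based. A minimal
sketch, assuming the account_quotas middleware is in the proxy pipeline
and you are authenticated as a reseller admin (the byte value is just an
example):

$ swift post -m quota-bytes:10737418240     # cap the account at 10 GiB
$ swift stat | grep -i quota                # confirm the quota metadata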

-- Pete



Re: [Openstack] all projects status?

2016-04-06 Thread Francisco Araya
The Project Navigator http://www.openstack.org/software/project-navigator/
should address that, but it seems a little bit outdated =(



On Wed, Apr 6, 2016 at 6:29 PM, CHOW Anthony  wrote:

> Adam,
>
>
>
> This one lists the projects:
> http://governance.openstack.org/reference/projects/index.html
>
>
>
> Under the “Big Tent”, I think there are no longer incubated and integrated
> statuses; all projects are tagged, and I think the existing tags can be
> found here: http://governance.openstack.org/reference/tags/
>
>
>
> I think there used to be an “integrated-release” tag for the existing
> projects transitioning into the “Big Tent”, but I do not see it now (
> http://governance.openstack.org/reference/tags/integrated-release.html).
>
>
>
> Anthony.
>
>
>
> *From:* Adam Lawson [mailto:alaw...@aqorn.com]
> *Sent:* Wednesday, April 06, 2016 3:42 PM
> *To:* openstack
> *Subject:* [Openstack] all projects status?
>
>
>
> Of all of the projects under the big tent on their respective flight paths,
> is there a central repo where their integration status is
> listed/maintained (i.e. incubated/integrated/core, etc.)? I can't find it,
> but I know it exists...
>
>
>
> //adam
>
>
> * Adam Lawson*
>
>
>
> AQORN, Inc.
>
> 427 North Tatnall Street
>
> Ste. 58461
>
> Wilmington, Delaware 19801-2230
>
> Toll-free: (844) 4-AQORN-NOW ext. 101
>
> International: +1 302-387-4660
>
> Direct: +1 916-246-2072
>


-- 
Francisco J Araya
Co-founder & CEO | sentinel.la
Enjoy a better OpenStack Experience


Re: [Openstack] all projects status?

2016-04-06 Thread CHOW Anthony
Adam,

This one lists the projects:
http://governance.openstack.org/reference/projects/index.html

Under the “Big Tent”, I think there are no longer incubated and integrated
statuses; all projects are tagged, and I think the existing tags can be found
here: http://governance.openstack.org/reference/tags/

I think there used to be an “integrated-release” tag for the existing projects
transitioning into the “Big Tent”, but I do not see it now
(http://governance.openstack.org/reference/tags/integrated-release.html).

Anthony.

From: Adam Lawson [mailto:alaw...@aqorn.com]
Sent: Wednesday, April 06, 2016 3:42 PM
To: openstack
Subject: [Openstack] all projects status?

Of all of the projects under the big tent on their respective flight paths, is
there a central repo where their integration status is listed/maintained
(i.e. incubated/integrated/core, etc.)? I can't find it, but I know it exists...

//adam

Adam Lawson

AQORN, Inc.
427 North Tatnall Street
Ste. 58461
Wilmington, Delaware 19801-2230
Toll-free: (844) 4-AQORN-NOW ext. 101
International: +1 302-387-4660
Direct: +1 916-246-2072


[Openstack] all projects status?

2016-04-06 Thread Adam Lawson
Of all of the projects under the big tent on their respective flight paths,
is there a central repo where their integration status is
listed/maintained (i.e. incubated/integrated/core, etc.)? I can't find it,
but I know it exists...

//adam

*Adam Lawson*

AQORN, Inc.
427 North Tatnall Street
Ste. 58461
Wilmington, Delaware 19801-2230
Toll-free: (844) 4-AQORN-NOW ext. 101
International: +1 302-387-4660
Direct: +1 916-246-2072


[Openstack] OpenStack Austin Summit Pass

2016-04-06 Thread Aqsa Malik
Hi All,

I am willing to sell my Austin Summit full-access pass, as I won't be able
to attend the summit myself. I got it when it was $600, so I am willing to
give it away for the same amount. Anyone interested, please let me know.

Thanks
--
Aqsa Malik
SEECS, NUST


Re: [Openstack] Fwd: How should I spawn an instance using ceph

2016-04-06 Thread Tom Walsh
Yang,

> what do you mean by "usernames for CephX don't match"

I meant that the Ceph documentation on the page I linked doesn't use
the same usernames as the OpenStack documentation (cinder versus
volumes). You are using your own usernames, so the point is moot.

Raw is the correct format. It is required in order to allow for
copy-on-write (CoW) cloned volumes, which is what you are trying to do.
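If an image isn't raw yet, it can be converted before uploading to Glance.
A minimal sketch, assuming a qcow2 source (file and image names are
illustrative):

$ qemu-img convert -f qcow2 -O raw centos7.qcow2 centos7.raw
$ glance image-create --name centosraw --disk-format raw \
    --container-format bare --file centos7.raw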

Your configuration looks correct and I don't see anything that might
indicate a problem. There are a few configuration directives that I
don't have in my configuration:

cinder.conf:

volume_backend_name = rbd

glance-api.conf:

stores = glance.store.rbd.Store

I have:
stores = rbd

(I am not sure which is correct.)

My next suggestion is to turn on debug logging in the configs of both
services and watch the logs carefully while you spawn a new instance from
an image. You should see log entries that indicate where the problem is.
That is how I realized that the permissions on my images store were wrong,
so the volume user could not access it.
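A minimal sketch of that setup — the [DEFAULT] settings go into both
cinder.conf and glance-api.conf, and the log paths below are typical
packaged defaults, so adjust them for your install:

[DEFAULT]
debug = True
verbose = True

$ tail -f /var/log/cinder/volume.log /var/log/glance/api.log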

I hope that helps.


On Mon, Apr 4, 2016 at 9:22 AM, yang sheng  wrote:
>
> Hi Tom
>
> thanks for your reply!
>
> what do you mean by "usernames for CephX don't match" ?
>
> I checked my configuration files and updated ceph user permission.
>
> In my deployment, I defined 2 pools in my ceph cluster, imagesliberty and
> volumesliberty. Now I have granted full access for both users (glanceliberty
> and cinderliberty) to both pools.
>
> $ ceph auth list
>
> client.cinderliberty
> key: AQA4v7tW823HLhAAmqf/rbxCbQgyfrfFJMTxDQ==
> caps: [mon] allow r
> caps: [osd] allow *
> client.glanceliberty
> key: AQBHv7tWY5ofNxAAIueTUXRUs2lJWkfjiJkLKw==
> caps: [mon] allow r
> caps: [osd] allow *
>
> When I run the "glance image-show" command, I can see my image URL, and the
> format is raw (I read somewhere that the format has to be raw):
>
> $ glance image-show 595ee912-993c-4878-a833-7bdffda1f692
> +------------------+-----------------------------------------------------------------------+
> | Property         | Value                                                                 |
> +------------------+-----------------------------------------------------------------------+
> | checksum         | 1ee004d7fd75fd518ab5c8dba589ba73                                      |
> | container_format | bare                                                                  |
> | created_at       | 2016-03-29T19:44:39Z                                                  |
> | direct_url       | rbd://2e906379-f211-4329-8faf-                                        |
> |                  | a8e7600b8418/imagesliberty/595ee912-993c-4878-a833-7bdffda1f692/snap  |
> | disk_format      | raw                                                                   |
> | id               | 595ee912-993c-4878-a833-7bdffda1f692                                  |
> | min_disk         | 0                                                                     |
> | min_ram          | 0                                                                     |
> | name             | centosraw                                                             |
> | owner            | 0f861e423bc248f3896dc17b5bc3f140                                      |
> | protected        | False                                                                 |
> | size             | 10737418240                                                           |
> | status           | active                                                                |
> | tags             | []                                                                    |
> | updated_at       | 2016-03-29T19:52:05Z                                                  |
> | virtual_size     | None                                                                  |
> | visibility       | public                                                                |
> +------------------+-----------------------------------------------------------------------+
>
> But cinder still has to download and upload the image. Just wondering, is
> there anything I missed or misconfigured?
>
>
> Here is my glance-api.conf:
>
> [database]
>
> connection =
> mysql://glanceliberty:b7828017cd0e939c3625@vsusnjhhdiosdbwvip/glanceliberty
>
> [keystone_authtoken]
>
> auth_uri = http://vsusnjhhdiosconvip:5000
> auth_url = http://vsusnjhhdiosconvip:35357
> auth_plugin = password
> project_domain_id = default
> user_domain_id = default
> project_name = service
> username =glanceliberty
> password =91f0bffdb95a11432eeb
>
> [paste_deploy]
>
> flavor = keystone
>
>
> [DEFAULT]
>
> notification_driver = noop
> verbose = True
> registry_host=vsusnjhhdiosconvip
> show_image_direct_url = True
>
> [glance_store]
> stores = glance.store.rbd.Store
> default_store = rbd
> rbd_store_pool = imagesliberty
> rbd_store_user = glanceliberty
> rbd_store_ceph_conf = /etc/ceph/ceph_glance.conf
> rbd_store_chunk_size = 8
>
> [oslo_messaging_rabbit]
> rabbit_hosts=psusnjhhdlc7ioscon001:5672,psusnjhhdlc7ioscon002:5672
> rabbit_retry_interval=1
> rabbit_retry_backoff=2
> rabbit_max_retries=0
> rabbit_durable_queues=true
> rabbit_ha_queues=true
> rabbit_userid = osliberty
> rabbit_password = 8854da21c3881e45a269
>
> and my cinder.conf file:
>
> [database]
>
> connection =
> mysql://cinderliberty:a679ac3149ead0562135@vsusnjhhdiosdbwvip/cinderliberty
>
> [DEFAULT]
>
> rpc_backend = rabbit
> auth_strategy = keystone
> my_ip = 192.168.2.12
> verbose = True
>
> [paste_deploy]
>
> flavor = keystone
>
> [oslo_messaging_rabbit]
> rabbit_hosts=psusnjhhdlc7ioscon001:5672,psusnjhhdlc7ioscon002:5672
> rabbit_retry_interval=1
> rabbit_retry_backoff=2
> rabbit_max_retries=0
> rabbit_durable_queues=true
> rabbit_ha_queues=true
> rabbit_userid = osliberty
> rabbit_password = 8854da21c3881e45a269
>
> [keystone_authtoken]
>
> auth_uri = http://vsusnjhhdiosconvip:5000
> auth_url = http://vsusnjhhdiosconvip:35357
>

[Openstack] OpenStack Austin Summit Pass

2016-04-06 Thread shzeb75
Hi
I have a full access pass for the Austin Summit. I am not attending the
summit, so I am willing to give away the pass. If anyone is interested, let
me know.
The pass is $1200 now, but I got it when it was $600, so I will give it
away for $600 :)
Thanks