Re: [Openstack] what is the difference between 2013.1 and grizzly?

2013-03-27 Thread Oleg Gelbukh
Hello, Liu

Generally, grizzly-X is a milestone tag inside the release cycle codenamed
'Grizzly'.
Note that the tagging scheme changed between milestones 2 and 3 of the
'Grizzly' release cycle, so you see 'grizzly-1' and 'grizzly-2' tags but no
'grizzly-3'; milestone 3 of 'Grizzly' is tagged '2013.1.g3' instead. It looks
like codenames won't appear in tags in future development cycles.
'2013.1.rc1' is the tag for release candidate 1, and you can expect
2013.1.rc2 and so on as well.
Finally, '2013.1' is the official release version; it reflects that this is
the first release made in 2013.

Hope this helps; if I'm mistaken, someone will correct me.

--
Best regards,
Oleg Gelbukh
Sr. IT Engineer
Mirantis, Inc.


On Wed, Mar 27, 2013 at 8:07 AM, heckj  wrote:

> 2013.1 is the release, grizzly-1 is a release candidate
>
> -joe
>
> On Mar 26, 2013, at 8:12 PM, Liu Wenmao  wrote:
>
> > I notice that openstack components have two different development code
> names, for example, openstack grizzly has 2013.1 and grizzly, so what is
> the difference between the two?
> >
> > There is an rc version of 2013.1 but none of grizzly, so I think they are
> not equivalent to the developers.
> >
> > root@controller:/usr/src/nova# git tag
> > 0.9.0
> > 2011.1rc1
> > 2011.2
> > 2011.2gamma1
> > 2011.2rc1
> > 2011.3
> > 2011.3.1
> > 2012.1
> > 2012.1.1
> > 2012.1.2
> > 2012.1.3
> > 2012.2
> > 2012.2.1
> > 2012.2.2
> > 2012.2.3
> > 2013.1.g3
> > 2013.1.rc1
> > diablo-1
> > diablo-2
> > diablo-3
> > diablo-4
> > essex-1
> > essex-2
> > essex-3
> > essex-4
> > essex-rc1
> > essex-rc2
> > essex-rc3
> > essex-rc4
> > folsom-1
> > folsom-2
> > folsom-3
> > folsom-rc1
> > folsom-rc2
> > folsom-rc3
> > grizzly-1
> > grizzly-2
> > ___
> > Mailing list: https://launchpad.net/~openstack
> > Post to : openstack@lists.launchpad.net
> > Unsubscribe : https://launchpad.net/~openstack
> > More help   : https://help.launchpad.net/ListHelp
>
>
> ___
> Mailing list: https://launchpad.net/~openstack
> Post to : openstack@lists.launchpad.net
> Unsubscribe : https://launchpad.net/~openstack
> More help   : https://help.launchpad.net/ListHelp
>
___
Mailing list: https://launchpad.net/~openstack
Post to : openstack@lists.launchpad.net
Unsubscribe : https://launchpad.net/~openstack
More help   : https://help.launchpad.net/ListHelp


Re: [Openstack] [Swift] account-level and container-level usage information

2012-11-13 Thread Oleg Gelbukh
Hello, Ning,

On the second question: Keystone 'tenant' maps to 'account' in Swift.
Keystone 'user' directly corresponds to Swift 'user'.
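
To illustrate the mapping (and the usage accounting discussed below), here is a
minimal sketch using python-swiftclient with Keystone v2 auth; the endpoint,
tenant, user and key are just the placeholders from the example in this thread:

from swiftclient import client

# Authenticate via Keystone v2; the tenant maps to the Swift account.
conn = client.Connection(authurl='http://127.0.0.1:5000/v2.0',
                         user='swift_user',
                         key='19561212',
                         tenant_name='tenant1',
                         auth_version='2')

# Account-level usage is read from the account metadata headers.
acct = conn.head_account()
print 'account bytes used:', acct['x-account-bytes-used']
print 'containers:', acct['x-account-container-count']

# Summing the per-container listing gives a figure computed from the
# container databases instead.
headers, containers = conn.get_account(full_listing=True)
print 'sum of container bytes:', sum(c['bytes'] for c in containers)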

--
Best regards,
Oleg Gelbukh
Mirantis IT


On Tue, Nov 13, 2012 at 12:19 PM, ning2008wisc wrote:

> Thanks Alex!
>
> But it seems that "Swift -V2 -A" cannot show
> separate usage stats for different users in the
> same account (tenant)? Is there any technical
> difficulty/reason why the usage stats for a certain
> user (under a tenant name) cannot be shown by
> "Swift -V2 -A", or does it just not make sense to do that?
>
> One more question: what does "tenant" mean to
> Swift? What is the relationship between "tenant"
> and "user" in Keystone and Swift?
>
> Thanks,
>
> Ning
>
>
>
> On Mon, Nov 12, 2012 at 10:01 PM, Alex Yang  wrote:
>
>> Hi,
>> The python-swiftclient (https://github.com/openstack/python-swiftclient)
>> can retrieve the account-level and container-level usage information.
>>
>> To retrieve the account usage information:
>> $ swift -V 2 -A http://127.0.0.1:5000/v2.0 -U tenant1:swift_user -K
>> 19561212 stat
>>Account: AUTH_70b51a6d180f4f1da78d80316c69e85c
>> Containers: 10
>>Objects: 0
>>  Bytes: 0
>> Meta Quota: L1
>> Accept-Ranges: bytes
>> X-Timestamp: 1352735608.55267
>> X-Trans-Id: tx72878dfcba9948298a6f4efb4e51e569
>>
>> To retrieve the container usage information:
>> $ swift -V 2 -A http://127.0.0.1:5000/v2.0 -U tenant1:swift_user -K
>> 19561212 stat c1
>>  No handlers could be found for logger "keystoneclient.v2_0.client"
>>   Account: AUTH_70b51a6d180f4f1da78d80316c69e85c
>> Container: c1
>>   Objects: 0
>> Bytes: 0
>>  Read ACL:
>> Write ACL:
>>   Sync To:
>>  Sync Key:
>> Accept-Ranges: bytes
>> X-Timestamp: 1352735637.83676
>> X-Trans-Id: tx7968942b927b4f1fba0c40fb1372adba
>>
>> You can also use the REST API,
>> http://docs.openstack.org/cli/quick-start/content/swift_client_commands.html
>>
>> But the bytes and object counts of the account are not always accurate.
>> You can get an accurate result by retrieving all the containers and adding
>> them up.
>>
>>
>>  2012/11/13 Ning Zhang 
>>
>>> Hello All,
>>>
>>> Is there any Swift (GUI or command line) tool that can
>>> retrieve the account-level and
>>> container-level usage information (e.g. how much space
>>> has been used under an account, how much space has been
>>> used under a tenant) and also works with keystone?
>>>
>>> Thanks,
>>>
>>> Ning
>>>
>>>
>>> ___
>>> Mailing list: https://launchpad.net/~openstack
>>> Post to : openstack@lists.launchpad.net
>>> Unsubscribe : https://launchpad.net/~openstack
>>> More help   : https://help.launchpad.net/ListHelp
>>>
>>
>>
>>
>> --
>>   杨雨
>>   Email:   alex890...@gmail.com
>> GitHub:   https://github.com/AlexYangYu
>> Blog:http://alexyang.sinaapp.com
>>  Weibo:   http://www.weibo.com/alexyangyu
>>
>>
>> ___
>> Mailing list: https://launchpad.net/~openstack
>> Post to : openstack@lists.launchpad.net
>> Unsubscribe : https://launchpad.net/~openstack
>> More help   : https://help.launchpad.net/ListHelp
>>
>>
>
> ___
> Mailing list: https://launchpad.net/~openstack
> Post to : openstack@lists.launchpad.net
> Unsubscribe : https://launchpad.net/~openstack
> More help   : https://help.launchpad.net/ListHelp
>
>
___
Mailing list: https://launchpad.net/~openstack
Post to : openstack@lists.launchpad.net
Unsubscribe : https://launchpad.net/~openstack
More help   : https://help.launchpad.net/ListHelp


Re: [Openstack] [SWIFT] Rack-awareness

2012-11-01 Thread Oleg Gelbukh
John,

Could you please share a link to the materials of the discussion you've
mentioned, or maybe the place where the discussion takes place (if it's
online)? We're currently working on an R&D project that involves a prototype
implementation of regions, and I'm eager to see suggestions for replication
optimizations and possibly get some feedback about our approach.

--
Best regards,
Oleg Gelbukh
Mirantis, Inc.

On Thu, Nov 1, 2012 at 7:55 PM, John Dickinson  wrote:

> This is already supported in Swift with the concept of availability zones.
> Swift will place each replica in different availability zones, if possible.
> If you only have one zone, Swift will place the replicas on different
> machines. If you only have one machine, Swift will place the replicas on
> different drives.
>
> There are active discussions right now about how Swift can support a tier
> above these availability zones: "regions". A region would be defined by a
> higher latency link and can provide additional data durability, and,
> depending on your deployment details, better availability.
> http://swiftstack.com/blog/2012/09/16/globally-distributed-openstack-swift-cluster/
> has more info on the ideas we're talking about.
>
> --John
>
>
>
>
> On Nov 1, 2012, at 8:45 AM, Leandro Reox  wrote:
>
> > Hi guys,
> >
> > Any plans to implement something like Hadoop rack-awareness, where we can
> > define "rack" spaces to guarantee that a copy of an object is stored, for
> > example, in another datacenter on another coast? Or should this be managed
> > by container sync to the other datacenter?
> >
> > I think that this can be a nice-to-have feature; I don't know if it's on
> > the dev roadmap.
> >
> > Best
> > Lean
> > ___
> > Mailing list: https://launchpad.net/~openstack
> > Post to : openstack@lists.launchpad.net
> > Unsubscribe : https://launchpad.net/~openstack
> > More help   : https://help.launchpad.net/ListHelp
>
>
> ___
> Mailing list: https://launchpad.net/~openstack
> Post to : openstack@lists.launchpad.net
> Unsubscribe : https://launchpad.net/~openstack
> More help   : https://help.launchpad.net/ListHelp
>
>
___
Mailing list: https://launchpad.net/~openstack
Post to : openstack@lists.launchpad.net
Unsubscribe : https://launchpad.net/~openstack
More help   : https://help.launchpad.net/ListHelp


Re: [Openstack] Quantum -. Multi-Plugin and support for provisioning of other devices.

2012-09-12 Thread Oleg Gelbukh
Tim,

It's possible that a SAN appliance used to provide storage to VMs under
Cinder management will need to directly plug a logical port into a tenant
network. In this case, it seems that it should be Quantum actually
performing the plug, probably through some specialized agent.

--
Best regards,
Oleg

On Mon, Sep 10, 2012 at 9:56 AM, Tim Bell  wrote:

> There has been some load balancing discussion and more is due at the
> summit. The various current activities are summarised in
> http://wiki.openstack.org/Quantum/LBaaS
>
> Can you explain what you mean by storage devices with respect to Quantum?
> The storage activities are underway as part of Cinder.
>
> Tim
>
> From: openstack-bounces+tim.bell=cern...@lists.launchpad.net On Behalf Of Endre Karlson
> Sent: 09 September 2012 19:57
> To: openstack@lists.launchpad.net
> Subject: [Openstack] Quantum -. Multi-Plugin and support for
> provisioning of other devices.
>
> Hi, I am wondering if there are any community plans to support using
> Quantum to provision interfaces on other devices, in areas of say
> LoadBalancers, Storage devices etc., and if there are plans to support
> multi-active plugins in Quantum for this?
>
> Is there a timeframe maybe?
>
> Endre.
>
> ___
> Mailing list: https://launchpad.net/~openstack
> Post to : openstack@lists.launchpad.net
> Unsubscribe : https://launchpad.net/~openstack
> More help   : https://help.launchpad.net/ListHelp
>
>
___
Mailing list: https://launchpad.net/~openstack
Post to : openstack@lists.launchpad.net
Unsubscribe : https://launchpad.net/~openstack
More help   : https://help.launchpad.net/ListHelp


Re: [Openstack] VM can't ping self floating IP after a snapshot is taken

2012-08-22 Thread Oleg Gelbukh
Hello,

Is it possible that, during snapshotting, libvirt just tears down the virtual
interface at some point and then re-creates it, with hairpin_mode disabled
again?
This bugfix [https://bugs.launchpad.net/nova/+bug/933640] implies that the fix
works on instance spawn. This means that upon resume after a snapshot,
hairpin mode is not restored. Maybe if we insert the _enable_hairpin() call
into the snapshot procedure, it will help.
We're currently investigating this issue in one of our environments and hope
to come up with an answer by tomorrow.
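
For reference, this is roughly what such a call boils down to at the sysfs
level (a standalone sketch, not the actual nova code; the bridge and VIF names
are placeholders matching the workaround quoted further down):

def enable_hairpin(bridge, vif):
    # Reflect a VM's own traffic back on its bridge port so the instance
    # can reach its own floating IP (same effect as the echo workaround).
    path = '/sys/class/net/%s/brif/%s/hairpin_mode' % (bridge, vif)
    with open(path, 'w') as f:
        f.write('1')

# Example: re-enable hairpin mode for a VIF after the domain is re-created.
# enable_hairpin('br1000', 'vnet0')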

--
Best regards,
Oleg

On Wed, Aug 22, 2012 at 11:29 PM, Sam Su  wrote:

> My friend has found a way to let the VM ping itself when this problem
> happens, but not why it happens:
> sudo echo "1" >
> /sys/class/net/br1000/brif//hairpin_mode
>
> I file a ticket to report this problem:
> https://bugs.launchpad.net/nova/+bug/1040255
>
> Hopefully someone can find why this happens and solve it.
>
> Thanks,
> Sam
>
>
> On Fri, Jul 20, 2012 at 3:50 PM, Gabriel Hurley  > wrote:
>
>>  I ran into some similar issues with the _enable_hairpin() call. The
>> call is allowed to fail silently and (in my case) was failing. I couldn't
>> for the life of me figure out why, though, and since I'm really not a
>> networking person I didn't trace it along too far.
>>
>> Just thought I'd share my similar pain.
>>
>> - Gabriel
>>
>> From: openstack-bounces+gabriel.hurley=nebula@lists.launchpad.net On Behalf Of Sam Su
>> Sent: Thursday, July 19, 2012 11:50 AM
>> To: Brian Haley
>> Cc: openstack
>> Subject: Re: [Openstack] VM can't ping self floating IP after a
>> snapshot is taken
>>
>> Thank you for your support.
>>
>> I checked the file nova/virt/libvirt/connection.py; the statement
>> self._enable_hairpin(instance) is already added to the
>> function _hard_reboot().
>>
>> It looks like there is some difference between taking a snapshot and
>> rebooting an instance. I tried to figure out how to fix this bug but failed.
>>
>> It will be much appreciated if anyone can give some hints.
>>
>> Thanks,
>>
>> Sam
>>
>> On Thu, Jul 19, 2012 at 8:37 AM, Brian Haley  wrote:
>>
>> On 07/17/2012 05:56 PM, Sam Su wrote:
>> > Hi,
>> >
>> > This always happens in the Essex release. After I take a snapshot of my VM
>> > (I tried Ubuntu 12.04 or CentOS 5.8), the VM can't ping its self floating IP;
>> > before I take a snapshot, though, the VM can ping its self floating IP.
>> >
>> > This looks closely related to
>> https://bugs.launchpad.net/nova/+bug/933640, but
>> > still a little different. In 933640, it sounds like VM can't ping its
>> self
>> > floating IP regardless whether we take a snapshot or not.
>> >
>> > Any suggestion to make an easy fix? And what is the root cause of the
>> problem?
>>
>> It might be because there's a missing _enable_hairpin() call in the
>> reboot()
>> function.  Try something like this...
>>
>> nova/virt/libvirt/connection.py, _hard_reboot():
>>
>>      self._create_new_domain(xml)
>> +    self._enable_hairpin(instance)
>>      self.firewall_driver.apply_instance_filter(instance, network_info)
>>
>> At least that's what I remember doing myself recently when testing after a
>> reboot, don't know about snapshot.
>>
>> Folsom has changed enough that something different would need to be done
>> there.
>>
>> -Brian
>>
>>
>
>
> ___
> Mailing list: https://launchpad.net/~openstack
> Post to : openstack@lists.launchpad.net
> Unsubscribe : https://launchpad.net/~openstack
> More help   : https://help.launchpad.net/ListHelp
>
>
___
Mailing list: https://launchpad.net/~openstack
Post to : openstack@lists.launchpad.net
Unsubscribe : https://launchpad.net/~openstack
More help   : https://help.launchpad.net/ListHelp


Re: [Openstack] Running multiple Glance instances?

2012-08-16 Thread Oleg Gelbukh
Lars,

I think it's possible if you set the glance_api_servers option in nova.conf on
each compute node to something like:

glance_api_servers = localhost:9292

You don't have to run the glance-registry service on every compute node. Just
make sure that the glance-registry host is configured properly in the
glance-api.conf files on all compute nodes.

--
Best regards,
Oleg Gelbukh
Mirantis Inc

On Thu, Aug 16, 2012 at 10:42 PM, Lars Kellogg-Stedman <
l...@seas.harvard.edu> wrote:

> Assuming some sort of shared filesystem, can I run multiple glance
> indexes in order to distribute the i/o load across multiple systems?
> Do I need to run both the registry and API service in each location?
>
> We're running with an NFS-backed data store, and it seems that we
> could eliminate some network I/O if we were to have each compute node
> run the glance service locally (but all managing the same directory).
>
> Does this make any sense?
>
> --
> Lars Kellogg-Stedman|
> Senior Technologist|
> http://ac.seas.harvard.edu/
> Academic Computing |
> http://code.seas.harvard.edu/
> Harvard School of Engineering and Applied Sciences |
>
>
> ___
> Mailing list: https://launchpad.net/~openstack
> Post to : openstack@lists.launchpad.net
> Unsubscribe : https://launchpad.net/~openstack
> More help   : https://help.launchpad.net/ListHelp
>
___
Mailing list: https://launchpad.net/~openstack
Post to : openstack@lists.launchpad.net
Unsubscribe : https://launchpad.net/~openstack
More help   : https://help.launchpad.net/ListHelp


Re: [Openstack] High Available queues in rabbitmq

2012-07-23 Thread Oleg Gelbukh
Eugene,

I suggest just adding an option 'rabbit_servers' that overrides the
'rabbit_host'/'rabbit_port' pair if present. This won't break anything, in
my understanding.
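
To make the idea concrete, here is a rough sketch of the fallback logic (plain
Python over a dict of options; 'rabbit_servers' is the proposed new option, not
an existing flag):

def rabbit_endpoints(conf):
    """Return a list of (host, port) broker endpoints.

    If the proposed 'rabbit_servers' option is set, e.g.
    'rmq-host1:5672,rmq-host2:5672', it overrides the single
    rabbit_host/rabbit_port pair, so existing configs keep working.
    """
    servers = conf.get('rabbit_servers')
    if servers:
        endpoints = []
        for item in servers.split(','):
            host, _, port = item.strip().partition(':')
            endpoints.append((host, int(port) if port else 5672))
        return endpoints
    return [(conf.get('rabbit_host', 'localhost'),
             int(conf.get('rabbit_port', 5672)))]

# rabbit_endpoints({'rabbit_servers': 'rmq-host1:5672,rmq-host2:5672'})
# -> [('rmq-host1', 5672), ('rmq-host2', 5672)]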

--
Best regards,
Oleg Gelbukh
Mirantis, Inc.

On Mon, Jul 23, 2012 at 10:58 PM, Eugene Kirpichov wrote:

> Hi,
>
> I'm working on a RabbitMQ H/A patch right now.
>
> It actually involves more than just using H/A queues (unless you're
> willing to add a TCP load balancer on top of your RMQ cluster).
> You also need to add support for multiple RabbitMQ's directly to nova.
> This is not hard at all, and I have the patch ready and tested in
> production.
>
> Alessandro, if you need this urgently, I can send you the patch right
> now before the discussion codereview for inclusion in core nova.
>
> The only problem is, it breaks backward compatibility a bit: my patch
> assumes you have a flag "rabbit_addresses" which should look like
> "rmq-host1:5672,rmq-host2:5672" instead of the prior rabbit_host and
> rabbit_port flags.
>
> Guys, can you advise on a way to do this without being ugly and
> without breaking compatibility?
> Maybe have "rabbit_host", "rabbit_port" be ListOpt's? But that sounds
> weird, as their names are in singular.
> Maybe have "rabbit_host", "rabbit_port" and also "rabbit_host2",
> "rabbit_port2" (assuming we only have clusters of 2 nodes)?
> Something else?
>
> On Mon, Jul 23, 2012 at 11:27 AM, Jay Pipes  wrote:
> > On 07/23/2012 09:02 AM, Alessandro Tagliapietra wrote:
> >> Hi guys,
> >>
> >> just an idea, i'm deploying Openstack trying to make it HA.
> >> The missing thing is rabbitmq, which can be easily started in
> >> active/active mode, but it needs to declare the queues adding an
> >> x-ha-policy entry.
> >> http://www.rabbitmq.com/ha.html
> >> It would be nice to add a config entry to be able to declare the queues
> >> in that way.
> >> If someone know where to edit the openstack code, else i'll try to do
> >> that in the next weeks maybe.
> >
> >
> https://github.com/openstack/openstack-common/blob/master/openstack/common/rpc/impl_kombu.py
> >
> > You'll need to add the config options there and the queue is declared
> > here with the options supplied to the ConsumerBase constructor:
> >
> >
> https://github.com/openstack/openstack-common/blob/master/openstack/common/rpc/impl_kombu.py#L114
> >
> > Best,
> > -jay
> >
> > ___
> > Mailing list: https://launchpad.net/~openstack
> > Post to : openstack@lists.launchpad.net
> > Unsubscribe : https://launchpad.net/~openstack
> > More help   : https://help.launchpad.net/ListHelp
>
>
>
> --
> Eugene Kirpichov
> http://www.linkedin.com/in/eugenekirpichov
>
> ___
> Mailing list: https://launchpad.net/~openstack
> Post to : openstack@lists.launchpad.net
> Unsubscribe : https://launchpad.net/~openstack
> More help   : https://help.launchpad.net/ListHelp
>
___
Mailing list: https://launchpad.net/~openstack
Post to : openstack@lists.launchpad.net
Unsubscribe : https://launchpad.net/~openstack
More help   : https://help.launchpad.net/ListHelp


Re: [Openstack] Why is an image required when booting from volume

2012-05-28 Thread Oleg Gelbukh
Gabriel,

There is a folsom-targeted blueprint for #2 at least:
https://blueprints.launchpad.net/nova/+spec/auto-create-boot-volumes

--
Oleg

On Sun, May 27, 2012 at 10:51 PM, Gabriel Hurley
wrote:

>  To the best of my understanding there are two parts to this, neither of
> which is fully where it ought to be:
>
> 1. It shouldn't be a required parameter. If you give it a
> volume with everything it needs and not an image, it should try to boot from
> that without throwing an exception. Horizon only enforces the requirement
> because Nova does.
>
> 2. As an optional parameter, specifying an image should allow
> you to create the instance with that image loaded onto the volume, which is
> a very important part of the workflow for creating your own bootable
> volumes. This feature doesn't currently exist in Nova, however.
>
> I also don't know offhand if there are blueprints or bug reports for
> either of those…
>
> - Gabriel
>
> From: openstack-bounces+gabriel.hurley=nebula@lists.launchpad.net On Behalf Of Vishvananda Ishaya
> Sent: Saturday, May 26, 2012 8:35 PM
> To: Lorin Hochstein
> Cc: openstack@lists.launchpad.net (openstack@lists.launchpad.net)
> Subject: Re: [Openstack] Why is an image required when booting from volume
>
> If there is a separate kernel and ramdisk needed for the boot from volume,
> it is pulled from image properties. Otherwise it is basically useless.
>
> Vish
>
> On May 26, 2012, at 8:22 AM, Lorin Hochstein wrote:
>
> I'm trying to figure out boot from volume, both so I can use it and so I
> can add it to the docs.
>
>  It seems that when calling "nova boot" or using Horizon, you need to
> specify an image. Why is that?
>
> I naively tried to create a volume image by creating a volume and then
> doing on my volume server:
>
> dd if=/tmp/precise-server-cloudimg-amd64-disk1.img
> of=/dev/nova-volumes/volume-000d
>
> Then I tried this:
>
> $ nova boot --flavor 2 --key_name lorin --block_device_mapping
> /dev/vda=13:::0 test
>
> Which generated an error:
>
> Invalid imageRef provided. (HTTP 400)
>
> If I try to specify an image, it at least attempts to boot:
>
> $ nova boot --flavor 2 --key_name lorin --block_device_mapping
> /dev/vda=13:::0 --image 7d6923d9-1c13-4405-ba0c-41c7487dd6bc test
>
> I noticed that the devstack example specifies an image:
> https://github.com/openstack-dev/devstack/blob/master/exercises/boot_from_volume.sh
> :
>
> VOL_VM_UUID=`nova boot --flavor $INSTANCE_TYPE --image $IMAGE
> --block_device_mapping vda=$VOLUME_ID:::0 --security_groups=$SECGROUP
> --key_name $KEY_NAME $VOL_INSTANCE_NAME | grep ' id ' | get_field 2`
>
> Looking at nova/api/openstack/compute/servers.py, it does look
> like _image_uuid_from_href() is called regardless of whether we are booting
> from volume or not. What is "--image" used for when booting from volume?
>
> Take care,
>
> Lorin
>
> --
>
> Lorin Hochstein
>
> Lead Architect - Cloud Services
>
> Nimbis Services, Inc.
>
> www.nimbisservices.com
>
>
> ___
> Mailing list: https://launchpad.net/~openstack
> Post to : openstack@lists.launchpad.net
> Unsubscribe : https://launchpad.net/~openstack
> More help   : https://help.launchpad.net/ListHelp
>
> ** **
>
> ___
> Mailing list: https://launchpad.net/~openstack
> Post to : openstack@lists.launchpad.net
> Unsubscribe : https://launchpad.net/~openstack
> More help   : https://help.launchpad.net/ListHelp
>
>
___
Mailing list: https://launchpad.net/~openstack
Post to : openstack@lists.launchpad.net
Unsubscribe : https://launchpad.net/~openstack
More help   : https://help.launchpad.net/ListHelp


Re: [Openstack] [Swift] Swift3 Github pages

2012-05-21 Thread Oleg Gelbukh
Gentlemen,

We have a feature for the swift3 middleware that we'd like to propose for
merge. How can we do this now that it has been split out into an associated
project? How has the procedure changed?

--
Best regards,
Oleg Gelbukh
Mirantis Inc.

On Mon, May 21, 2012 at 8:01 PM, Chmouel Boudjnah wrote:

> Hello Fujita,
>
> I sent you a documentation pull request that should generate the Sphinx
> doc. It would be nice to generate some static HTML pages of the
> documentation, as Greg has done for his projects, for example:
>
> http://gholt.github.com/swauth/
>
> This is something done by the github project owner see
>
> http://pages.github.com or http://help.github.com/pages/
>
> for how to do that.
>
> I have already sent the Gerrit review to remove swift3 from swift:
>
> https://review.openstack.org/#/c/7628/
>
> and referenced your repository http://github.com/fujita/swift3 as an
> associated project.
>
> Thanks,
> Chmouel.
>
> ___
> Mailing list: https://launchpad.net/~openstack
> Post to : openstack@lists.launchpad.net
> Unsubscribe : https://launchpad.net/~openstack
> More help   : https://help.launchpad.net/ListHelp
>
___
Mailing list: https://launchpad.net/~openstack
Post to : openstack@lists.launchpad.net
Unsubscribe : https://launchpad.net/~openstack
More help   : https://help.launchpad.net/ListHelp


Re: [Openstack] problem with swift upload when using swift

2012-05-03 Thread Oleg Gelbukh
Hello,

You need to do exactly what is written in the last paragraph of your mail: use
-k with curl to turn off certificate verification.
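
The same thing from Python, for reference (a sketch using the requests library;
the token, endpoint and path are the placeholders from your original message,
and verify=False is the equivalent of curl's -k, so use it only for testing):

import requests

# Skip SSL certificate verification, like 'curl -k'.
resp = requests.put(
    'https://192.168.1.13:8080/v1/AUTH_MyTenant/images',
    headers={'X-Auth-Token': '012345SECRET99TOKEN012345'},
    verify=False)
print resp.status_code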

--
Best regards,
Oleg

On Thu, May 3, 2012 at 1:46 PM, khabou imen  wrote:

> Hi everybody,
> when trying to upload images using Keystone for authentication I got:
>  curl -X PUT -D - \
> >  -H "X-Auth-Token: 012345SECRET99TOKEN012345" \
> >  https://192.168.1.13:8080/v1/AUTH_MyTenant/images
> curl: (60) SSL certificate problem, verify that the CA cert is OK. Details:
> error:14090086:SSL routines:SSL3_GET_SERVER_CERTIFICATE:certificate verify
> failed
> More details here: http://curl.haxx.se/docs/sslcerts.html
>
> curl performs SSL certificate verification by default, using a "bundle"
>  of Certificate Authority (CA) public keys (CA certs). If the default
>  bundle file isn't adequate, you can specify an alternate file
>  using the --cacert option.
> If this HTTPS server uses a certificate signed by a CA represented in
>  the bundle, the certificate verification probably failed due to a
>  problem with the certificate (it might be expired, or the name might
>  not match the domain name in the URL).
> If you'd like to turn off curl's verification of the certificate, use
>  the -k (or --insecure) option.
>
>
> Can anyone help me?
>
>
> --
> Regards,
>  Imen Khabou,
> Computer Engineering Student
>
>
>
> ___
> Mailing list: https://launchpad.net/~openstack
> Post to : openstack@lists.launchpad.net
> Unsubscribe : https://launchpad.net/~openstack
> More help   : https://help.launchpad.net/ListHelp
>
>
___
Mailing list: https://launchpad.net/~openstack
Post to : openstack@lists.launchpad.net
Unsubscribe : https://launchpad.net/~openstack
More help   : https://help.launchpad.net/ListHelp


Re: [Openstack] [nova-volumes] relationship between nova-volumes and swift?

2012-04-28 Thread Oleg Gelbukh
Hello, Eric

Swift is actually an object store rather than a volume store. It is used for
storing any type of object as files in the underlying file system. These files
can be anything, including binary images of block volumes. HTTP is used for
transporting objects to and from the store.
The nova-volume service is totally different. It allows attaching virtual block
devices to VMs by providing the correct parameters for nova-compute's
'attach_volume' method. It has a number of drivers that can manage
different storage back-ends (Linux LVs, SAN virtual disks, distributed
storage systems), but all these back-ends must be able to provide a virtual
block device to VMs, which Swift is not capable of.
The primary integration point between Swift and nova-volume is a way to store
snapshots of virtual volumes in the Swift store as files:
https://blueprints.launchpad.net/nova/+spec/store-snapshots-to-swift.
The status of work on this blueprint is unknown to me.
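
To make the contrast concrete, here is a small sketch of the object-store
access model (python-swiftclient against a Keystone-authenticated Swift
endpoint; the URL, credentials and names are placeholders):

from swiftclient import client

# Objects are whole files pushed and pulled over HTTP; there is no block
# device and no partial, in-place write.
conn = client.Connection(authurl='http://127.0.0.1:5000/v2.0',
                         user='user', key='secret',
                         tenant_name='tenant', auth_version='2')
conn.put_container('backups')
conn.put_object('backups', 'volume-0001.img',
                contents=open('/tmp/volume-0001.img', 'rb'))
headers, body = conn.get_object('backups', 'volume-0001.img')

# A nova-volume back-end, by contrast, exposes a block device that gets
# attached to a running instance and shows up inside the guest as a disk
# (for example /dev/vdb).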

--
Best regards,
Oleg

On Sat, Apr 28, 2012 at 10:15 AM, Eric Luo  wrote:

> Hello ,all
> I am a little confused about the nova-volumes service and the swift.I know
> both are about the volume store,but
> what's the relationship between them ?Can some one please explain it to me
> please :)
> Thanks!
>
> Eric Luo
> 2012-04-28
>
>
>
> ___
> Mailing list: https://launchpad.net/~openstack
> Post to : openstack@lists.launchpad.net
> Unsubscribe : https://launchpad.net/~openstack
> More help   : https://help.launchpad.net/ListHelp
>
>
___
Mailing list: https://launchpad.net/~openstack
Post to : openstack@lists.launchpad.net
Unsubscribe : https://launchpad.net/~openstack
More help   : https://help.launchpad.net/ListHelp


Re: [Openstack] Swift replication analysis and questions (high CPU usage)

2012-04-18 Thread Oleg Gelbukh
Hello, Julien

Thanks for the insightful numbers!

It really seems that having one node per zone in a Swift cluster is
inefficient, and it may be better to have more storage nodes that are less
CPU-packed, with fewer devices per node, than a few high-performance nodes
with disk shelves.

--
Best regards,
Oleg

On Wed, Apr 18, 2012 at 12:56 PM, Julien Danjou
wrote:

> On Mon, Apr 16 2012, Julien Danjou wrote:
>
> I'm adding a note about where the numbers come from.
>
> > Volumetry:
> > - 450 GB of storage used (each node has 19 TB)
> This has been measured with:
>  df -h /srv/node/c0d0p1
>
> > - 57 accounts
> > - 7929 containers in 7870 partitions
> > - 58158 objects partitions used so far
>
> This is what is reported in log:
>  Attempted to replicate 60 dbs in 0.29485 seconds (203.49118/s)
>  Attempted to replicate 7930 dbs in 27.41175 seconds (289.29204/s)
>  58158/58158 (100.00%) partitions replicated in 122.97s (472.95/sec, 0s
> remaining)
>
> Regards,
> --
> Julien Danjou
> // eNovance  http://enovance.com
> // ✉ julien.dan...@enovance.com  ☎ +33 1 49 70 99 81
>
> ___
> Mailing list: https://launchpad.net/~openstack
> Post to : openstack@lists.launchpad.net
> Unsubscribe : https://launchpad.net/~openstack
> More help   : https://help.launchpad.net/ListHelp
>
___
Mailing list: https://launchpad.net/~openstack
Post to : openstack@lists.launchpad.net
Unsubscribe : https://launchpad.net/~openstack
More help   : https://help.launchpad.net/ListHelp


Re: [Openstack] [Swift] Does there any exist blueprint or sub-project of user's storage space quota or counting method for Swift ?

2012-04-13 Thread Oleg Gelbukh
Alex,

Thank you for the important point and the interesting information on
large-scale Swift performance!

Can you please explain a little what these times stand for? Is this the
runtime of a single process, the time needed for the cluster to converge in
case of a device failure, or something else?

--
Best regards,
Oleg

On Fri, Apr 13, 2012 at 7:23 AM, Alex Yang  wrote:

> In my view, the biggest problem of swift is not the new features but the
> improvement of performance.
>
> First, we know that the container-updater, *-auditor and *-replicator will
> loop over all the files on disk during each run interval. When the number
> of files is very large, the replicator, updater and auditor processes will
> take a long time. So, the time to eventual consistency is very long.
>
> Our practice of  Sina Web Service Team https://launchpad.net/~sws:
>
> total accounts:  121,961;
>
> total containers:160,703;
>
> total objects: 14,291,519;
>
> total storage usage:  1.3T
>
> account replication time:10 hours;
>
> container replication time:  10 hours;
>
> object replication time:   48 hours;
>
> account audit time:   2 hours;
>
> container audit time: 9 hours;
>
> container update time:19 hours;
>
> This is terrible if we develop quota upon the account DB; there is a long
> time to eventual consistency.
>
> Secondly, there is a vicious circle during replication. The replicator
> will query the account-server, container-server and object-server to
> compare the metadata, and determine whether to sync. When the number of
> files is very large, the frequent queries make the account-server,
> container-server and object-server become a bottleneck. This
> interferes with the proxy-server's work with the back-end servers. There are
> a lot of Timeout(10s) ERRORs in the proxy-server's log and the load average is
> very high.
> So, some PUT and POST operations fail, the replicator tries to sync, more
> operations fail, and it syncs more...
> In my opinion, we need to improve the process of replication and container
> update by using an event-driven framework or something else.
> My leader may discuss this topic at Design Summit,
> http://openstackconferencespring2012.sched.org/speaker/huicheng
>
>
> 2012/4/13 John Dickinson 
>
>> Swift keeps total bytes, container, and object count (eventually)
>> up-to-date in the account metadata. There are also log processing tools
>> (like slogging - http://github.com/notmyname/slogging) that can provide
>> usage information (including bandwidth) based on swift logs.
>>
>> While I think that it's appropriate for swift to generate the usage
>> information (via internal processes or log processing), the appropriate
>> place for quotas is in whatever system handles the concept of a user
>> (normally the auth system). This way quotas are enforced by revoking or
>> limiting access of the auth token.
>>
>> --John
>>
>>
>> On Apr 12, 2012, at 11:53 AM, Frederik Van Hecke wrote:
>>
>> > Hi Kuo,
>> >
>> > One option would be to keep the usage information (num files, num
>> bytes, etc) per container / account in an sqlite DB, just like it is done
>> for account and container info.
>> >
>> > To avoid having to loop through all data at regular intervals (to
>> update the info), additional logic could be added to the api methods to
>> update the sqlite DB's when new files are added, files are deleted, etc.
>> Such approach will require more lines of code, but will be far less
>> stressful on performance.
>> >
>> > (the brute-force approach to loop through it at regular intervals will
>> be hell on performance on large deployments..)
>> >
>> >
>> > For data transfer billing based on download / upload amounts, a similar
>> approach could be used.
>> >
>> > If no one else is looking into this, I would certainly be willing to
>> help to help get this started.
>> >
>> >
>> > Kind regards,
>> > Frederik Van Hecke
>> >
>> > T:  +32487733713
>> > E:  frede...@cluttr.be
>> > W: www.cluttr.be
>> >
>> >
>> > On Thu, Apr 12, 2012 at 17:45, Kuo Hugo  wrote:
>> > Hi folks ,
>> >
>> > I'm thinking about the better approach to manage "an user" or "an
>> account" space usage quota in swift.
>> > Is  there any related blueprint or sub-project even an idea around ?
>> > Any suggestion of benefits to be an external service or to be a
>> middle-ware in swi

Re: [Openstack] Any block storage folks interested in getting together?

2012-02-13 Thread Oleg Gelbukh
Hello,

We are interested in participating. Looking forward to talking to all the Nova
block storage developers.

--
Best regards,
Oleg

On Tue, Feb 14, 2012 at 2:31 AM, John Griffith
wrote:

> Hi Bob,
> Just pop into IRC: #openstack-meeting
>
> John
>
> On Mon, Feb 13, 2012 at 3:17 PM, Bob Van Zant  wrote:
> > I'm interested in joining in. I've never joined one of the calls before,
> > where do I get more information on how to join?
> >
> >
> > On Mon, Feb 13, 2012 at 12:06 PM, Diego Parrilla
> >  wrote:
> >>
> >> Sounds great. We will try to join the meeting.
> >>
> >> Enviado desde mi iPad
> >>
> >> El 13/02/2012, a las 19:06, John Griffith 
> >> escribió:
> >>
> >> > There's been a lot of new work going on specific to Nova Volumes the
> >> > past month or so.  I was thinking that it's been a long time since
> >> > we've had a Nova-Volume team meeting and thought I'd see if there was
> >> > any interest in trying to get together next week?  I'm open to
> >> > suggestions regarding time slots but thought I'd propose our old slot,
> >> > Thursday Feb 23, 18:00 - 19:00 UTC.
> >> >
> >> > Here's a proposed agenda:
> >> >
> >> >* Quick summary of new blueprints you have submitted and completed
> >> > (or targeting for completion) in Essex
> >> >* Any place folks might need some help with items they've targeted
> >> > for Essex (see if we have any volunteers to help out if needed)
> >> >* Any updates regarding BSaaS
> >> >* Gauge interest in resurrecting a standing meeting, perhaps every
> 2
> >> > weeks?
> >> >
> >> > If you have specific items that you'd be interested in
> >> > sharing/discussing let me know.
> >> >
> >> > Thanks,
> >> > John
> >> >
> >> > ___
> >> > Mailing list: https://launchpad.net/~openstack
> >> > Post to : openstack@lists.launchpad.net
> >> > Unsubscribe : https://launchpad.net/~openstack
> >> > More help   : https://help.launchpad.net/ListHelp
> >>
> >> ___
> >> Mailing list: https://launchpad.net/~openstack
> >> Post to : openstack@lists.launchpad.net
> >> Unsubscribe : https://launchpad.net/~openstack
> >> More help   : https://help.launchpad.net/ListHelp
> >
> >
>
> ___
> Mailing list: https://launchpad.net/~openstack
> Post to : openstack@lists.launchpad.net
> Unsubscribe : https://launchpad.net/~openstack
> More help   : https://help.launchpad.net/ListHelp
>
___
Mailing list: https://launchpad.net/~openstack
Post to : openstack@lists.launchpad.net
Unsubscribe : https://launchpad.net/~openstack
More help   : https://help.launchpad.net/ListHelp


Re: [Openstack] Atlas-LB - what's the project status?

2012-02-06 Thread Oleg Gelbukh
Jesse,

Thank you for the quick answer and the interesting information. Personally, I
like the idea of multiple projects forming an ecosystem around the OpenStack
core.
--
Best regards,
Oleg Gelbukh
Mirantis Inc.

On Mon, Feb 6, 2012 at 10:13 PM, Jesse Andrews wrote:

> Hi Oleg,
>
> NOTE: this is my opinion - I do not speak for all of OpenStack!
>
> While our focus is on a successful Essex, the RCB team has started
> thinking about Folsom.  Our current thought is to focus on enabling
> an eco-system **around** core.  OpenStack shouldn't try to be IaaS,
> PaaS and SaaS - instead a solid base to build these other systems on.
> [1]
>
> OpenStack is about "Essential Infrastructure Services" (currently
> compute, storage, network) and supporting tools/apis/docs.
> Determining if LB is considered Infrastructure (vs. platform) and if
> it is Essential (a fuzzy word - what is essential to one isn't
> essential to another)
>
> That said - regardless of whether Atlas lands in core [2], my team wants to
> add:
>
>  * documentation/tutorials/examples about how to add a new (iaas or
> paas) services to a cloud
>  * simple integration of LB service (for instance an optional devstack
> component).
>  * an opensource backend for the LB service (haproxy, pound, ...)
>
> The thought is that an entire eco-system of components that plug into
> a cloud is more powerful than having OpenStack "choose winners" that
> become "core". [3]
>
> I look forward to conversations about LBaaS and the definition of
> OpenStack.
>
> Jesse Andrews
> Rackspace Cloud Builders
>
> [1] the analogy I use is that the Apache Web Server doesn't try to be
> Django or Rails, but instead be a great web server to run rails on top
> of.
> [2] in addition to the question about if lbaas belongs in core, the
> incubation process would need to be gone through
> http://wiki.openstack.org/Governance/Approved/Incubation
> [3] rather than blessing project X to be an official platform
> component, enable many projects to run on top and let open source /
> market dynamics determine winners.
>
> On Mon, Feb 6, 2012 at 9:48 AM, Oleg Gelbukh 
> wrote:
> > Hello, everyone
> >
> > What is the status of this LBaaS project for the OpenStack? As far as I
> > know, the open-source version is compatible with OpenStack. But is it
> > possible to merge the Java code in the OpenStack ecosystem? Is someone
> > working on re-implementing Atlas-LB in Python and eventually adding to
> the
> > projects incubator, or there are some other lbaas projects out there?
> >
> > Thanks in advance,
> >
> > --
> > Oleg Gelbukh
> > Mirantis Inc.
> >
> > ___
> > Mailing list: https://launchpad.net/~openstack
> > Post to : openstack@lists.launchpad.net
> > Unsubscribe : https://launchpad.net/~openstack
> > More help   : https://help.launchpad.net/ListHelp
> >
>
___
Mailing list: https://launchpad.net/~openstack
Post to : openstack@lists.launchpad.net
Unsubscribe : https://launchpad.net/~openstack
More help   : https://help.launchpad.net/ListHelp


[Openstack] Atlas-LB - what's the project status?

2012-02-06 Thread Oleg Gelbukh
Hello, everyone

What is the status of this LBaaS project for OpenStack? As far as I
know, the open-source version is compatible with OpenStack. But is it
possible to merge the Java code into the OpenStack ecosystem? Is someone
working on re-implementing Atlas-LB in Python and eventually adding it to the
project incubator, or are there other LBaaS projects out there?

Thanks in advance,

--
Oleg Gelbukh
Mirantis Inc.
___
Mailing list: https://launchpad.net/~openstack
Post to : openstack@lists.launchpad.net
Unsubscribe : https://launchpad.net/~openstack
More help   : https://help.launchpad.net/ListHelp


[Openstack] Nexenta SAN volume driver -- FF exception proposal

2012-01-25 Thread Oleg Gelbukh
Hello,

We propose inclusion of the Nexenta SAN driver for nova-volume in trunk at the
Essex-4 milestone. The blueprint in question is:
https://blueprints.launchpad.net/nova/+spec/nexenta-volume-driver

There is already a fully functional version of the driver for the Diablo
release, shipped separately from trunk. We started work on this code this
week to make it compatible with Essex. The tentative delivery date is Feb
10.
This code is purely driver-related; it does not affect core code.

The benefit of this driver is support for the NexentaStor software SAN
appliance as a block volume storage back-end. NexentaStor is based on
OpenSolaris and uses the ZFS file system for storage. Volumes are exported via
iSCSI.
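
For readers unfamiliar with nova-volume drivers, here is a very rough
illustration of the shape such a driver takes (this is a sketch, not the
submitted code; the appliance client and its method names are invented for
illustration only):

class NexentaDriverSketch(object):
    """Illustrative skeleton of a nova-volume driver: volume operations are
    translated into calls against the NexentaStor appliance, which backs
    each volume with a ZFS zvol exported over iSCSI."""

    def __init__(self, appliance_client):
        # Hypothetical client for the appliance's management API.
        self.nms = appliance_client

    def create_volume(self, volume):
        # Create a zvol of the requested size on the appliance.
        self.nms.create_zvol(volume['name'], '%sG' % volume['size'])

    def delete_volume(self, volume):
        self.nms.destroy_zvol(volume['name'])

    def create_export(self, context, volume):
        # Expose the zvol as an iSCSI target so nova-compute can attach it.
        return {'provider_location': self.nms.share_iscsi(volume['name'])}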

--
Best regards,
Oleg Gelbukh
Sr. Engineer
Mirantis Inc.
___
Mailing list: https://launchpad.net/~openstack
Post to : openstack@lists.launchpad.net
Unsubscribe : https://launchpad.net/~openstack
More help   : https://help.launchpad.net/ListHelp


[Openstack] BSaaS project status

2012-01-16 Thread Oleg Gelbukh
Hello,

I have read about the BSaaS project codenamed 'Lunr' in the OpenStack wiki and
in this list archive. But it seems that its development was abandoned by
the team and the focus switched to developing the integrated nova-volume
service. Was that a governance decision not to use an external block storage
service, or was it decided to focus on the integrated service first and then
expand it into a dedicated project? Or does Lunr development continue
somewhere?

--
Best regards,
Oleg Gelbukh
Sr. Engineer
Mirantis Inc.
___
Mailing list: https://launchpad.net/~openstack
Post to : openstack@lists.launchpad.net
Unsubscribe : https://launchpad.net/~openstack
More help   : https://help.launchpad.net/ListHelp


Re: [Openstack] HPC with Openstack?

2011-12-02 Thread Oleg Gelbukh
Hello,

Here at Mirantis we are working on a deployment of OpenStack that is intended
to manage an HPC cluster eventually. There are a few features that we are going
to incorporate, and we are still researching. The general idea is to use LXC
as a lightweight virtualization engine, and to make use of a faster I/O system
than one based on a disk image file.

--
Oleg Gelbukh,
Sr. IT Engineer
Mirantis Inc.

On Fri, Dec 2, 2011 at 4:17 PM, Sandy Walsh wrote:

> I've recently had inquiries about High Performance Computing (HPC) on
> Openstack. As opposed to the Service Provider (SP) model, HPC is interested
> in fast provisioning, potentially short lifetime instances with precision
> metrics and scheduling. Real-time vs. Eventually.
>
> Anyone planning on using Openstack in that way?
>
> If so, I'll direct those inquires to this thread.
>
> Thanks in advance,
> Sandy
>
> ___
> Mailing list: https://launchpad.net/~openstack
> Post to : openstack@lists.launchpad.net
> Unsubscribe : https://launchpad.net/~openstack
> More help   : https://help.launchpad.net/ListHelp
>
___
Mailing list: https://launchpad.net/~openstack
Post to : openstack@lists.launchpad.net
Unsubscribe : https://launchpad.net/~openstack
More help   : https://help.launchpad.net/ListHelp


Re: [Openstack] DRBD storage for Openstack installations

2011-06-09 Thread Oleg Gelbukh
Hello everyone,

A bit of follow-up on the subject, concerning clustered locking for LVM on the
DRBD resource:
http://mirantis.blogspot.com/2011/06/clustered-lvm-on-drbd-resource-in.html

On Mon, May 30, 2011 at 11:04 AM, Oleg Gelbukh wrote:

> The current OpenStack paradigm seems to be built around external storage,
> which contains user data on attached volumes. However, we wanted to create
> distributed storage on the same nodes we are running nova-compute on.
>
> 2011/5/26 Peter J. Pouliot 
>
> Greetings Programs,
>>
>> We to have been toying with a similar idea in our lab.   We are using the
>> same model as Oleg, for existing clouds.   The current OpenStack paradigm is
>> a bit different.   Having not read all his info yet, I hope they include
>> service resources for the openstacks bits configured into his CIB.
>>
>> We have been toying with the idea, of doing linux-ha clusters under the
>> openstack services for service availability across the cloud.
>>
>> p
>>
>>
>>
>>
>> On Thu, May 26, 2011 at 03:11:34PM +0200, Diego Parrilla Santamaría wrote:
>> >Hi Oleg,
>> >thank you very much for your post, it's really didactic. We are
>> taking a
>> >different approach for HA at storage level, but I have worked
>> formerly
>> >with DRBD and I think it's a very good choice.
>> >I'm curious about how you have deployed nova-volume nodes in your
>> >architecture. You don't specify if the two nodes of the DRBD cluster
>> run
>> >one or two instances of nova-volume. If you run one instance probably
>> you
>> >have implemented some kind of fault-tolerant active-passive service
>> if the
>> >nova-volume process fails in the active node, but I would like to
>> know if
>> >you can run an active-active two nova-volume instances on two
>> different
>> >physical nodes on top of the DRBD shared resource.
>> >Regards
>> >Diego�
>> >--
>> >Diego Parrilla
>> >CEO
>> >[1]www.stackops.com |� [2]diego.parri...@stackops.com | +34 649 94
>> 43 29 |
>> >skype:diegoparrilla
>> >
>> >On Thu, May 26, 2011 at 1:29 PM, Oleg Gelbukh <[3]
>> ogelb...@mirantis.com>
>> >wrote:
>> >
>> >  Hi,
>> >  We were researching Openstack for our private cloud, and want to
>> share
>> >  experience and get tips from community as we go on.�
>> >  We have settled on DRBD as shared storage platform for our
>> installation.
>> >  LVM is used over the drbd device to mange logical volumes. OCFS2
>> file
>> >  system is created on one of volumes, mounted and set up as
>> >  image_path�and�instance_path in the nova.conf, other space is
>> reserved
>> >  for storage volumes (managed by nova-volume).�
>> >  As a result, we have shared storage suitable for features such as
>> live
>> >  migration and snapshots. We also have some level of
>> fault-tolerance,
>> >  with DRBD I/O error handling, which automatically redirects I/O
>> requests
>> >  to peer node over network in case of primary node failure. We
>> created
>> >  [4]script for bootstrapping lost VMs in two crash scenarios:
>> >  * dom0 host restart/domU failure: restore VMs on the same host
>> >  * dom0 host failure: restore VMs on peer node
>> >  We are considering such pair of servers with shared storage as a
>> basic
>> >  block for the cloud structure.
>> >  For whom it may interest, the details of DRBD installation are
>> [5]here.
>> >  I'll be glad to answer any questions and highly appreciate feedback
>> on
>> >  this.
>> >  Oleg S. Gelbukh,
>> >  Mirantis Inc.
>> >  [6]www.mirantis.com
>> >  ___
>> >  Mailing list: [7]https://launchpad.net/~openstack
>> >  Post to � � : [8]openstack@lists.launchpad.net
>> >  Unsubscribe : [9]https://launchpad.net/~openstack
>> >  More help � : [10]https://help.launchpad.net/ListHelp
>> >
>> > References
>> >
>> >Visible links
>> >1. http://www.stackops.com/
>> >2. mailto:diego.parri...@stackops.com
>> >3. mailto:ogelb...@mirantis.com
>> >4.
>> https://github.com/Mirantis/openstack-utils/blob/master/recovery_instance_by_

Re: [Openstack] DRBD storage for Openstack installations

2011-05-30 Thread Oleg Gelbukh
The current OpenStack paradigm seems to be built around external storage,
which contains user data on attached volumes. However, we wanted to create
distributed storage on the same nodes we are running nova-compute on.

2011/5/26 Peter J. Pouliot 

> Greetings Programs,
>
> We to have been toying with a similar idea in our lab.   We are using the
> same model as Oleg, for existing clouds.   The current OpenStack paradigm is
> a bit different.   Having not read all his info yet, I hope they include
> service resources for the openstacks bits configured into his CIB.
>
> We have been toying with the idea, of doing linux-ha clusters under the
> openstack services for service availability across the cloud.
>
> p
>
>
>
>
> On Thu, May 26, 2011 at 03:11:34PM +0200, Diego Parrilla Santamaría wrote:
> >Hi Oleg,
> >thank you very much for your post, it's really didactic. We are taking
> a
> >different approach for HA at storage level, but I have worked formerly
> >with DRBD and I think it's a very good choice.
> >I'm curious about how you have deployed nova-volume nodes in your
> >architecture. You don't specify if the two nodes of the DRBD cluster
> run
> >one or two instances of nova-volume. If you run one instance probably
> you
> >have implemented some kind of fault-tolerant active-passive service if
> the
> >nova-volume process fails in the active node, but I would like to know
> if
> >you can run an active-active two nova-volume instances on two
> different
> >physical nodes on top of the DRBD shared resource.
> >Regards
> >Diego�
> >--
> >Diego Parrilla
> >CEO
> >[1]www.stackops.com |� [2]diego.parri...@stackops.com | +34 649 94 43
> 29 |
> >skype:diegoparrilla
> >
> >On Thu, May 26, 2011 at 1:29 PM, Oleg Gelbukh <[3]
> ogelb...@mirantis.com>
> >wrote:
> >
> >  Hi,
> >  We were researching Openstack for our private cloud, and want to
> share
> >  experience and get tips from community as we go on.�
> >  We have settled on DRBD as shared storage platform for our
> installation.
> >  LVM is used over the drbd device to mange logical volumes. OCFS2
> file
> >  system is created on one of volumes, mounted and set up as
> >  image_path�and�instance_path in the nova.conf, other space is
> reserved
> >  for storage volumes (managed by nova-volume).�
> >  As a result, we have shared storage suitable for features such as
> live
> >  migration and snapshots. We also have some level of fault-tolerance,
> >  with DRBD I/O error handling, which automatically redirects I/O
> requests
> >  to peer node over network in case of primary node failure. We
> created
> >  [4]script for bootstrapping lost VMs in two crash scenarios:
> >  * dom0 host restart/domU failure: restore VMs on the same host
> >  * dom0 host failure: restore VMs on peer node
> >  We are considering such pair of servers with shared storage as a
> basic
> >  block for the cloud structure.
> >  For whom it may interest, the details of DRBD installation are
> [5]here.
> >  I'll be glad to answer any questions and highly appreciate feedback
> on
> >  this.
> >  Oleg S. Gelbukh,
> >  Mirantis Inc.
> >  [6]www.mirantis.com
> >  ___
> >  Mailing list: [7]https://launchpad.net/~openstack
> >  Post to � � : [8]openstack@lists.launchpad.net
> >  Unsubscribe : [9]https://launchpad.net/~openstack
> >  More help � : [10]https://help.launchpad.net/ListHelp
> >
> > References
> >
> >Visible links
> >1. http://www.stackops.com/
> >2. mailto:diego.parri...@stackops.com
> >3. mailto:ogelb...@mirantis.com
> >4.
> https://github.com/Mirantis/openstack-utils/blob/master/recovery_instance_by_id.py
> >5.
> http://mirantis.blogspot.com/2011/05/shared-storage-for-openstack-based-on.html
> >6. http://www.mirantis.com/
> >7. https://launchpad.net/~openstack
> >8. mailto:openstack@lists.launchpad.net
> >9. https://launchpad.net/~openstack
> >   10. https://help.launchpad.net/ListHelp
>
> > ___
> > Mailing list: https://launchpad.net/~openstack
> > Post to : openstack@lists.launchpad.net
> > Unsubscribe : https://launchpad.net/~openstack
> > More help   : https://help.launchpad.net/ListHelp
>
>
___
Mailing list: https://launchpad.net/~openstack
Post to : openstack@lists.launchpad.net
Unsubscribe : https://launchpad.net/~openstack
More help   : https://help.launchpad.net/ListHelp


Re: [Openstack] DRBD storage for Openstack installations

2011-05-29 Thread Oleg Gelbukh
Hello, Nelson

Sure, it will be very interesting and useful to read a few words about your
solutions.

On Sat, May 28, 2011 at 12:30 AM, Nelson Nahum wrote:

> At Zadara Storage, we are working on a block storage system for the cloud.
> We haven't published much info yet, but if somebody is interested I will be
> happy to get on a call and explain what we are doing.
>
> Nelson Nahum
> CTO
> nel...@zadarastorage.com
>
>
>
> 2011/5/27 Oleg Gelbukh 
>
>> Hi
>> Our approach was defined by need to combine storage and compute on the
>> same hosts.
>> Our configuration is dual-primary, so we can run nova-compute and virtual
>> servers on both nodes and have them with write access to volumes. DRBD
>> allows this mode out-of-box now, but it requires clustered file system or
>> great caution when runnning LVM on it.
>> But nova-volume must run on one node of drbd-connected pair, while the
>> second gets copy of lvm data via drbd. The tricky part is that it seems we
>> must activate volumes and volume groups on the peer node, but automation of
>> this is relatively easy.
>> For now, we are not going to share volumes outside of drbd-peers pair for
>> live migration or as attachable volumes, except some special cases like
>> migrating VMs between drbd pairs.
>> Looking forward to read couple of words on your approach.
>>
>> 2011/5/26 Diego Parrilla Santamaría 
>>
>>> Hi Oleg,
>>>
>>> thank you very much for your post, it's really didactic. We are taking a
>>> different approach for HA at storage level, but I have worked formerly with
>>> DRBD and I think it's a very good choice.
>>>
>>> I'm curious about how you have deployed nova-volume nodes in your
>>> architecture. You don't specify if the two nodes of the DRBD cluster run one
>>> or two instances of nova-volume. If you run one instance probably you have
>>> implemented some kind of fault-tolerant active-passive service if the
>>> nova-volume process fails in the active node, but I would like to know if
>>> you can run an active-active two nova-volume instances on two different
>>> physical nodes on top of the DRBD shared resource.
>>>
>>> Regards
>>> Diego
>>>
>>> --
>>> Diego Parrilla
>>> <http://www.stackops.com>*CEO*
>>> *www.stackops.com | * diego.parri...@stackops.com** | +34 649 94 43 29 |
>>> skype:diegoparrilla*
>>> * <http://www.stackops.com>
>>>
>>>
>>>
>>> On Thu, May 26, 2011 at 1:29 PM, Oleg Gelbukh wrote:
>>>
>>>> Hi,
>>>> We were researching Openstack for our private cloud, and want to share
>>>> experience and get tips from community as we go on.
>>>>
>>>> We have settled on DRBD as shared storage platform for our installation.
>>>> LVM is used over the drbd device to mange logical volumes. OCFS2 file 
>>>> system
>>>> is created on one of volumes, mounted and set up as *image_path* and *
>>>> instance_path* in the *nova.conf*, other space is reserved for storage
>>>> volumes (managed by nova-volume).
>>>>
>>>> As a result, we have shared storage suitable for features such as live
>>>> migration and snapshots. We also have some level of fault-tolerance, with
>>>> DRBD I/O error handling, which automatically redirects I/O requests to peer
>>>> node over network in case of primary node failure. We created 
>>>> script<https://github.com/Mirantis/openstack-utils/blob/master/recovery_instance_by_id.py>for
>>>>  bootstrapping lost VMs in two crash scenarios:
>>>> * dom0 host restart/domU failure: restore VMs on the same host
>>>> * dom0 host failure: restore VMs on peer node
>>>> We are considering such pair of servers with shared storage as a basic
>>>> block for the cloud structure.
>>>>
>>>> For whom it may interest, the details of DRBD installation are 
>>>> here<http://mirantis.blogspot.com/2011/05/shared-storage-for-openstack-based-on.html>.
>>>> I'll be glad to answer any questions and highly appreciate feedback on 
>>>> this.
>>>>
>>>> Oleg S. Gelbukh,
>>>> Mirantis Inc.
>>>> www.mirantis.com
>>>>
>>>> ___
>>>> Mailing list: https://launchpad.net/~openstack
>>>> Post to : openstack@lists.launchpad.net
>>>> Unsubscribe : https://launchpad.net/~openstack
>>>> More help   : https://help.launchpad.net/ListHelp
>>>>
>>>>
>>>
>>
>> ___
>> Mailing list: https://launchpad.net/~openstack
>> Post to : openstack@lists.launchpad.net
>> Unsubscribe : https://launchpad.net/~openstack
>> More help   : https://help.launchpad.net/ListHelp
>>
>>
>
___
Mailing list: https://launchpad.net/~openstack
Post to : openstack@lists.launchpad.net
Unsubscribe : https://launchpad.net/~openstack
More help   : https://help.launchpad.net/ListHelp


Re: [Openstack] DRBD storage for Openstack installations

2011-05-27 Thread Oleg Gelbukh
Hi,
Our approach was defined by the need to combine storage and compute on the same
hosts.
Our configuration is dual-primary, so we can run nova-compute and virtual
servers on both nodes and give them write access to volumes. DRBD
allows this mode out of the box now, but it requires a clustered file system or
great caution when running LVM on it.
However, nova-volume must run on one node of the DRBD-connected pair, while the
second gets a copy of the LVM data via DRBD. The tricky part is that it seems we
must activate volumes and volume groups on the peer node, but automating
this is relatively easy.
For now, we are not going to share volumes outside of the DRBD peer pair for
live migration or as attachable volumes, except in some special cases like
migrating VMs between DRBD pairs.
Looking forward to reading a couple of words on your approach.

2011/5/26 Diego Parrilla Santamaría 

> Hi Oleg,
>
> thank you very much for your post, it's really didactic. We are taking a
> different approach for HA at storage level, but I have worked formerly with
> DRBD and I think it's a very good choice.
>
> I'm curious about how you have deployed nova-volume nodes in your
> architecture. You don't specify if the two nodes of the DRBD cluster run one
> or two instances of nova-volume. If you run one instance probably you have
> implemented some kind of fault-tolerant active-passive service if the
> nova-volume process fails in the active node, but I would like to know if
> you can run an active-active two nova-volume instances on two different
> physical nodes on top of the DRBD shared resource.
>
> Regards
> Diego
>
> --
> Diego Parrilla
> <http://www.stackops.com>*CEO*
> *www.stackops.com | * diego.parri...@stackops.com** | +34 649 94 43 29 |
> skype:diegoparrilla*
> * <http://www.stackops.com>
>
>
>
> On Thu, May 26, 2011 at 1:29 PM, Oleg Gelbukh wrote:
>
>> Hi,
>> We were researching Openstack for our private cloud, and want to share
>> experience and get tips from community as we go on.
>>
>> We have settled on DRBD as shared storage platform for our installation.
>> LVM is used over the drbd device to mange logical volumes. OCFS2 file system
>> is created on one of volumes, mounted and set up as *image_path* and *
>> instance_path* in the *nova.conf*, other space is reserved for storage
>> volumes (managed by nova-volume).
>>
>> As a result, we have shared storage suitable for features such as live
>> migration and snapshots. We also have some level of fault-tolerance, with
>> DRBD I/O error handling, which automatically redirects I/O requests to peer
>> node over network in case of primary node failure. We created 
>> script<https://github.com/Mirantis/openstack-utils/blob/master/recovery_instance_by_id.py>for
>>  bootstrapping lost VMs in two crash scenarios:
>> * dom0 host restart/domU failure: restore VMs on the same host
>> * dom0 host failure: restore VMs on peer node
>> We are considering such pair of servers with shared storage as a basic
>> block for the cloud structure.
>>
>> For whom it may interest, the details of DRBD installation are 
>> here<http://mirantis.blogspot.com/2011/05/shared-storage-for-openstack-based-on.html>.
>> I'll be glad to answer any questions and highly appreciate feedback on this.
>>
>> Oleg S. Gelbukh,
>> Mirantis Inc.
>> www.mirantis.com
>>
>> ___
>> Mailing list: https://launchpad.net/~openstack
>> Post to : openstack@lists.launchpad.net
>> Unsubscribe : https://launchpad.net/~openstack
>> More help   : https://help.launchpad.net/ListHelp
>>
>>
>
___
Mailing list: https://launchpad.net/~openstack
Post to : openstack@lists.launchpad.net
Unsubscribe : https://launchpad.net/~openstack
More help   : https://help.launchpad.net/ListHelp


[Openstack] DRBD storage for Openstack installations

2011-05-26 Thread Oleg Gelbukh
Hi,
We were researching Openstack for our private cloud, and want to share our
experience and get tips from the community as we go on.

We have settled on DRBD as the shared storage platform for our installation.
LVM is used over the drbd device to manage logical volumes. An OCFS2 file
system is created on one of the volumes, mounted and set up as *image_path*
and *instance_path* in *nova.conf*; the other space is reserved for storage
volumes (managed by nova-volume).

As a result, we have shared storage suitable for features such as live
migration and snapshots. We also have some level of fault-tolerance, with
DRBD I/O error handling, which automatically redirects I/O requests to the peer
node over the network in case of primary node failure. We created a script
(https://github.com/Mirantis/openstack-utils/blob/master/recovery_instance_by_id.py)
for bootstrapping lost VMs in two crash scenarios:
* dom0 host restart/domU failure: restore VMs on the same host
* dom0 host failure: restore VMs on the peer node
We are considering such a pair of servers with shared storage as the basic
block for the cloud structure.

For whom it may interest, the details of the DRBD installation are here:
http://mirantis.blogspot.com/2011/05/shared-storage-for-openstack-based-on.html
I'll be glad to answer any questions and highly appreciate feedback on this.
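
As an illustration of what the recovery amounts to (a sketch only, not the
linked script; it assumes libvirt, that each instance directory under the
shared instances path keeps its domain XML, and that the connection URI is a
placeholder for whatever hypervisor is in use):

import os

import libvirt


def recover_instances(instances_path, uri='qemu:///system'):
    # Re-define and start any instance whose libvirt XML sits on the shared
    # storage but which is not running (e.g. after a dom0 restart, or on the
    # DRBD peer after the primary node fails).
    conn = libvirt.open(uri)
    running = set(conn.lookupByID(i).name() for i in conn.listDomainsID())
    for name in os.listdir(instances_path):
        xml_path = os.path.join(instances_path, name, 'libvirt.xml')
        if name in running or not os.path.exists(xml_path):
            continue
        # defineXML() (re)registers the domain; create() then boots it from
        # the disk files on the shared OCFS2 volume.
        dom = conn.defineXML(open(xml_path).read())
        dom.create()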

Oleg S. Gelbukh,
Mirantis Inc.
www.mirantis.com
___
Mailing list: https://launchpad.net/~openstack
Post to : openstack@lists.launchpad.net
Unsubscribe : https://launchpad.net/~openstack
More help   : https://help.launchpad.net/ListHelp