Re: [Openstack] Install from ISO in OpenStack

2013-05-17 Thread Sam Stoelinga
Are you trying to create an Openstack instance based on Ubuntu 12.04 or are
you trying to install openstack on ubuntu?

If you're just trying to launch an Ubuntu image, you can use the pre-made
qcow2 images published by Ubuntu; I didn't have any issues with those.
See:
http://docs.openstack.org/trunk/openstack-compute/admin/content/starting-images.html
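
For reference, this is roughly how I register one of those images (a sketch
from memory; the image URL and the glance client syntax may differ slightly
for your release):

wget http://cloud-images.ubuntu.com/precise/current/precise-server-cloudimg-amd64-disk1.img
glance image-create --name "ubuntu-12.04-cloudimg" --disk-format qcow2 \
    --container-format bare --is-public True \
    < precise-server-cloudimg-amd64-disk1.img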


What iso are you using?



On Fri, May 17, 2013 at 4:16 PM, Ray Sun  wrote:

> I'm trying to install Ubuntu 12.04 on OpenStack. Here are my steps:
> 1. Upload the Ubuntu ISO into OpenStack
> 2. Launch a new VM and start the installation
> 3. The network can't be detected
> 4. No disk can be found
>
> Has anyone run into this problem before? Thanks.
>
> Best Regards
> -- Ray
>
___
Mailing list: https://launchpad.net/~openstack
Post to : openstack@lists.launchpad.net
Unsubscribe : https://launchpad.net/~openstack
More help   : https://help.launchpad.net/ListHelp


Re: [Openstack] Use IOMMU even when not doing device pass-through?

2013-05-16 Thread Sam Stoelinga
Libvirt USB passthrough also doesn't need this.


On Fri, May 17, 2013 at 1:29 PM, Matthew Thode wrote:

> On 05/16/13 22:43, Blair Bethwaite wrote:
> > Hi all,
> >
> > We're running a KVM-based OpenStack cloud. I recently realised we don't
> > have the IOMMU turned on in our hypervisors. All the indications I know
> > about and can find suggest it's only really useful if you want guests
> > accessing host devices directly, e.g., PCI pass-through. But I wonder if
> > there are any other performance advantages to be gained...? Virtio, for
> > one, doesn't seem to use or need this.
> >
> No, it's generally only for device pass-through.
>
> --
> -- Matthew Thode (prometheanfire)
>
>
___
Mailing list: https://launchpad.net/~openstack
Post to : openstack@lists.launchpad.net
Unsubscribe : https://launchpad.net/~openstack
More help   : https://help.launchpad.net/ListHelp


Re: [Openstack] Grizzly official packages

2013-04-07 Thread Sam Stoelinga
Checked this morning and there is no official 2013.1 out yet. The latest
version is a build of 2013.1 RC1.

See http://ubuntu-cloud.archive.canonical.com/ubuntu/pool/main/n/nova/

Taken from the archive's Packages file:

Package: glance
Version: 1:2013.1~rc1-0ubuntu2~cloud0
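
A quick way to keep an eye on what the cloud archive currently ships (a
sketch, assuming the precise-updates/grizzly pocket is already enabled in
your apt sources):

apt-get update
apt-cache policy nova-common glance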



On Sun, Apr 7, 2013 at 12:34 PM, Martinx - ジェームズ
wrote:

> I just figured this out too... I think that Ubuntu Cloud Archive for
> Grizzly isn't ready for production yet...
>
>
> On 6 April 2013 23:28, Jason Ford  wrote:
>
>> Dave,
>>
>> Can you point to where the cloud archive has updated 2013.1 Grizzly
>> packages for 12.04? I don't see it when I look at the packages listed here:
>>
>> http://ubuntu-cloud.archive.canonical.com
>>
>> Is there somewhere else we should be pointing to get the 2013.1 release
>> instead of seeing the RC packages? If they are indeed not out yet, when
>> will they be?
>>
>> Thanks!
>>
>> jason
>>
>> - Original Message -
>> From: "Daviey Walker" 
>> To: "Filipe Manco" 
>> Cc: openstack@lists.launchpad.net
>> Sent: Saturday, April 6, 2013 4:16:42 PM
>> Subject: Re: [Openstack] Grizzly official packages
>>
>>
>>
>>
>>
>> On 6 April 2013 19:56, Filipe Manco < filipe.ma...@gmail.com > wrote:
>>
>>
>>
>> Is there any way to use OpenStack Grizzly in Ubuntu 12.10?
>> Can we use the cloud archive repos?
>>
>>
>>
>> Filipe Manco
>>
>> http://about.me/fmanco
>>
>>
>>
>>
>>
>>
>>
>> Hi Filipe,
>>
>>
>> We have made Grizzly available in both the current Ubuntu development
>> series, which is 13.04 (Raring Ringtail), and also made it available to the
>> most recent LTS (Long Term Support) version which is 12.04 (Precise
>> Pangolin), via the Ubuntu Cloud Archive. At this time, there are
>> no packages available for 12.10.
>>
>>
>> Our focus is currently on the current development version, and the most
>> recent LTS. May I ask what makes 12.10 interesting to you, for Grizzly?
>>
>>
>> Thanks.
>>
>> --
>> Kind Regards,
>>
>> Dave Walker < dave.wal...@canonical.com >
>> Engineering Manager,
>> Ubuntu Server
>
>
___
Mailing list: https://launchpad.net/~openstack
Post to : openstack@lists.launchpad.net
Unsubscribe : https://launchpad.net/~openstack
More help   : https://help.launchpad.net/ListHelp


Re: [Openstack] Security concern with vncserver_listen 0.0.0.0 and multi_host

2013-04-03 Thread Sam Stoelinga
No, you aren't missing something; a firewall would probably be enough if
we hadn't changed nova :P I also feel that #2 is too drastic now, but #1
should still be done, I guess.

Something I didn't mention before about why we can't use a firewall for
this: we made some dirty changes to enable SPICE and disabled auto_port for
both VNC and SPICE, so people can access their virtual machines over SPICE
with a password on a specific port. The company I work for has been using
this since the Essex release, and in our next version we will switch to the
official SPICE implementation in OpenStack. Our current version possibly has
bugs as well.

Disabling all ports isn't an option in our current state because we still
want to enable SPICE. We currently have a fixed range of ports reserved
for SPICE (3 to 4) that should be accessible from the outside. Those
ports may currently be used by VNC and/or SPICE (we have disabled autoport
for VNC and SPICE and let them use that fixed range).




On Wed, Apr 3, 2013 at 6:11 PM, Mac Innes, Kiall  wrote:

> On 03/04/13 11:03, Sam Stoelinga wrote:
> > To prevent this happening to somebody else we could do the following:
> > 1. In the documentation explicitly tell the user that when you enable
> > multi_host that you can't use vncserver_listen=0.0.0.0
> > 2. Do some sanity checks on nova.conf options, if we notice that
> > vncserver_listen: 0.0.0.0 and multi_host true, we don't allow starting
> > the nova-compute service and give a clear error message saying that it's
> > stupid to do something like that and what the user should do instead.
>
> I'm probably missing something here, but would a simple firewall not work?
>
> #2 seems drastic to me, and #1 could be amended to mention the need for
> a firewall instead.
>
> Kiall Mac Innes
> HP Cloud Services - DNSaaS
>
> Mobile:   +353 86 345 9333
> Landline: +353 1 524 2177
> GPG:  E9498407
>
___
Mailing list: https://launchpad.net/~openstack
Post to : openstack@lists.launchpad.net
Unsubscribe : https://launchpad.net/~openstack
More help   : https://help.launchpad.net/ListHelp


[Openstack] Security concern with vncserver_listen 0.0.0.0 and multi_host

2013-04-03 Thread Sam Stoelinga
Hi,

We are using Folsom with nova-network multi_host=True, which means that
every host has direct access to the internet. In our environment that also
meant that every host had its own public IP (an office IP).

We had set vncserver_listen to 0.0.0.0 because we needed to support live
migration, and we switched to multi_host later, so the config was still there.

Related documentation:
http://docs.openstack.org/trunk/openstack-compute/admin/content/important-nova-compute-options.html

But this is a big security problem, because it makes the instances' VNC
consoles accessible to everybody who can reach a compute node.

We solved it by running nova-novncproxy on every compute node and setting
vncserver_listen to 127.0.0.1. How did other people solve this problem?
Is this OK? I didn't see any documentation about this.
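
Roughly, the per-compute-node nova.conf we ended up with looks like this
(a sketch from memory; the option names are the Folsom ones and the base URL
is a placeholder for each node's own address):

vncserver_listen=127.0.0.1
vncserver_proxyclient_address=127.0.0.1
novncproxy_base_url=http://<this-compute-node>:6080/vnc_auto.html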

I think this is an obvious problem that people should notice themselves,
but we were just switching to multi_host mode, so we overlooked this small
piece of configuration.

To prevent this happening to somebody else we could do the following:
1. In the documentation, explicitly tell the user that when multi_host is
enabled you can't use vncserver_listen=0.0.0.0.
2. Do some sanity checks on the nova.conf options: if we notice
vncserver_listen=0.0.0.0 together with multi_host=True, don't allow the
nova-compute service to start and give a clear error message explaining why
that combination is dangerous and what the user should do instead.

Regards,
Sam Stoelinga
___
Mailing list: https://launchpad.net/~openstack
Post to : openstack@lists.launchpad.net
Unsubscribe : https://launchpad.net/~openstack
More help   : https://help.launchpad.net/ListHelp


Re: [Openstack] nova client support for restore from soft delete ?

2013-01-30 Thread Sam Stoelinga
It seems we're using start to restore the instance, which also works.

novaclient(request).servers.start(instance_id)

Sam

On Thu, Jan 31, 2013 at 10:43 AM, Vishvananda Ishaya
wrote:

>
> Yes I noticed the other day that the restore and force_delete admin
> commands are not in novaclient. I was planning on adding them at some point
> soon, but it should be a really easy addition if someone wants to tackle it
> before I get to it.
>
> Vish
>
> On Jan 30, 2013, at 2:24 AM, "Day, Phil"  wrote:
>
> Hi Vish,
>
> Sorry, I wasn't very clear in my original post. I have
> reclaim_instance_interval set, and the instance does go to "SOFT_DELETED".
> I can see that the API extension adds a "restore" verb to the list of
> actions on an instance.
>
> What I was trying to find out was whether that additional action is
> available from the nova client, e.g. is there a "nova restore " command?
> Looking through the client code I can't see one, but I thought I might be
> missing something.
>
> Thanks
> Phil
>
> *From:* Vishvananda Ishaya [mailto:vishvana...@gmail.com]
> *Sent:* 30 January 2013 00:32
> *To:* Day, Phil
> *Cc:* openstack@lists.launchpad.net (openstack@lists.launchpad.net) (
> openstack@lists.launchpad.net)
> *Subject:* Re: [Openstack] nova client support for restore from soft
> delete ?
>
> On Jan 29, 2013, at 8:55 AM, "Day, Phil"  wrote:
>
>
> 
> Hi Folks,
>  
> Does the nova client provide support to restore a soft deleted instance
> (and if not, what is the process for pulling an instance back from the
> brink) ?
>
> If you have reclaim_instance_interval set then you can restore instances
> via an admin API command. If not, then you are not going to have much luck
> reclaiming the instance because the drive will be deleted. If by some chance
> you still have the backing files, then you should be able to fix the db and
> do a hard reboot on the instance to get it to come back up. Fixing the db
> is mostly about setting deleted=False, but keep in mind that you will also
> have to manually restore the vif and reassociate the fixed ip, which
> hopefully hasn't been associated to a new instance.
>
> Vish
>
>
>
___
Mailing list: https://launchpad.net/~openstack
Post to : openstack@lists.launchpad.net
Unsubscribe : https://launchpad.net/~openstack
More help   : https://help.launchpad.net/ListHelp


Re: [Openstack] Why my vm often change into shut off status by itself?

2012-12-10 Thread Sam Stoelinga
Maybe you need to enable the following flag in nova.conf:

resume_guests_state_on_host_boot=True

The default seems to be False (I didn't confirm it), so if you expect the
machines to be in the running state when you reboot the host, you should
enable that flag. However, your problem seems to occur even when the host is
not rebooted, so it may not help in your case.
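
To follow up on Wangpan's suggestion below about finding out who asked
libvirt to stop the guests, this is roughly where I would look (a sketch;
the log paths assume a stock Ubuntu/libvirt install, and pid 1957 is taken
from your qemu log):

grep -i "signal 15" /var/log/libvirt/qemu/instance-*.log
grep -iE "power|stop|shutdown" /var/log/nova/nova-compute.log
ps -p 1957 -o pid,comm,args   # the process that sent SIGTERM, if it is still running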

On Thu, Dec 6, 2012 at 5:49 PM, Wangpan  wrote:

>  qemu: terminating on signal 15 from pid 1957
> This means the VM was shut off via libvirtd / the libvirt API; the log of my
> VM is the same as this,
> so you should check who calls libvirt to shut down your VMs.
> I have no other ideas now, good luck, guy!
>
> 2012-12-06
>  --
>  Wangpan
>  --
>  *From:* pyw
> *Sent:* 2012-12-06 17:34
> *Subject:* Re: Re: [Openstack] Why my vm often change into shut off status by
> itself?
> *To:* "Wangpan"
> *Cc:* "openstack"
>
> Automatic shutoff of individual virtual machines occurs frequently; this
> time all the virtual machines shut off automatically at the same time.
>
> If nova fails to delete a virtual machine, will that cause the virtual
> machine to be shut down?
>
>
> 2012/12/6 Wangpan 
>
>> Are all the VMs shutting down at the same time,
>> such as at '2012-12-04 06:54:27.150+: shutting down' or near that point?
>> If this is true, I guess it may be the host's problem.
>>
>> 2012-12-06
>>  --
>>  Wangpan
>>  --
>>  *From:* pyw
>> *Sent:* 2012-12-06 17:10
>>  *Subject:* Re: [Openstack] Why my vm often change into shut off status by
>> itself?
>> *To:* "Veera Reddy"
>> *Cc:* "openstack"
>>
>>   Generally, if you use virsh to restart the virtual machine, it seems to
>> run for some time before shutting off again.
>>
>> $ date
>> Thu Dec  6 17:04:41 CST 2012
>>
>> $ virsh start instance-006e
>> Domain instance-006e started
>>
>> $ virsh list
>>  Id Name State
>> --
>> 158 instance-006erunning
>>
>> /var/log/libvirt/qemu$ sudo tail -f instance-006e.log
>> 2012-12-03 06:14:13.488+: shutting down
>> qemu: terminating on signal 15 from pid 1957
>> 2012-12-03 06:14:59.819+: starting up
>> LC_ALL=C
>> PATH=/usr/local/sbin:/usr/local/bin:/usr/bin:/usr/sbin:/sbin:/bin
>> QEMU_AUDIO_DRV=none /usr/bin/kvm -S -M pc-1.0 -cpu
>> core2duo,+lahf_lm,+aes,+popcnt,+sse4.2,+sse4.1,+cx16,-monitor,-vme
>> -enable-kvm -m 512 -smp 1,sockets=1,cores=1,threads=1 -name
>> instance-006e -uuid d7798df8-e225-4178-9d0b-f6691d78ce18 -nodefconfig
>> -nodefaults -chardev
>> socket,id=charmonitor,path=/var/lib/libvirt/qemu/instance-006e.monitor,server,nowait
>> -mon chardev=charmonitor,id=monitor,mode=control -rtc
>> base=utc,driftfix=slew -no-kvm-pit-reinjection -no-shutdown -drive
>> file=/data0/instances/instance-006e/disk,if=none,id=drive-virtio-disk0,format=qcow2,cache=none
>> -device
>> virtio-blk-pci,bus=pci.0,addr=0x4,drive=drive-virtio-disk0,id=virtio-disk0,bootindex=1
>> -netdev tap,fd=25,id=hostnet0 -device
>> rtl8139,netdev=hostnet0,id=net0,mac=fa:16:3e:17:ca:dd,bus=pci.0,addr=0x3
>> -chardev
>> file,id=charserial0,path=/data0/instances/instance-006e/console.log
>> -device isa-serial,chardev=charserial0,id=serial0 -chardev
>> pty,id=charserial1 -device isa-serial,chardev=charserial1,id=serial1 -usb
>> -device usb-tablet,id=input0 -vnc 0.0.0.0:2 -k en-us -vga cirrus
>> -incoming fd:23 -device virtio-balloon-pci,id=balloon0,bus=pci.0,addr=0x5
>> char device redirected to /dev/pts/27
>> qemu: terminating on signal 15 from pid 1957
>> 2012-12-04 06:54:27.150+: shutting down
>> 2012-12-06 09:02:46.343+: starting up
>> LC_ALL=C
>> PATH=/usr/local/sbin:/usr/local/bin:/usr/bin:/usr/sbin:/sbin:/bin
>> QEMU_AUDIO_DRV=none /usr/bin/kvm -S -M pc-1.0 -cpu
>> core2duo,+lahf_lm,+aes,+popcnt,+sse4.2,+sse4.1,+cx16,-monitor,-vme
>> -enable-kvm -m 512 -smp 1,sockets=1,cores=1,threads=1 -name
>> instance-006e -uuid d7798df8-e225-4178-9d0b-f6691d78ce18 -nodefconfig
>> -nodefaults -chardev
>> socket,id=charmonitor,path=/var/lib/libvirt/qemu/instance-006e.monitor,server,nowait
>> -mon chardev=charmonitor,id=monitor,mode=control -rtc
>> base=utc,driftfix=slew -no-kvm-pit-reinjection -no-shutdown -drive
>> file=/data0/instances/instance-006e/disk,if=none,id=drive-virtio-disk0,format=qcow2,cache=none
>> -device
>> virtio-blk-pci,bus=pci.0,addr=0x4,drive=drive-virtio-disk0,id=virtio-disk0,bootindex=1
>> -netdev tap,fd=23,id=hostnet0 -device
>> rtl8139,netdev=hostnet0,id=net0,mac=fa:16:3e:17:ca:dd,bus=pci.0,addr=0x3
>> -chardev
>> file,id=charserial0,path=/data0/instances/instance-006e/console.log
>> -device isa-serial,chardev=charserial0,id=serial0 -chardev
>> pty,id=charserial1 -device isa-serial,chardev=charserial1,id=serial1 -usb
>> -device usb-tablet,id=input0 -vnc 0.0.0.0:2 -k en-us -vga cirrus -device
>> virtio-balloon-pci,id=balloon0,bus=pci.0,addr=0x5
>> char device redirected to /d

Re: [Openstack] Handling of adminPass is arguably broken (essex)

2012-11-27 Thread Sam Stoelinga
Hi,

Just noticed the following two projects:
https://github.com/rackspace/openstack-guest-agents-windows-xenserver
https://github.com/rackspace/openstack-guest-agents-unix

Would those be useful in creating an agent like Vish described?
It seems they currently only support Xen? I haven't taken a deep look yet.

> a) put a public key on the instance via metadata or config drive (for ease
> of use this could actually just be the ssh public key you normally use for
> logging into the vm).
> b) have a daemon in the windows instance that:
>  * generates a random password
>  * sets the administrator password to the random password
>  * encrypts it with the public key
>  * serves the encrypted password over https on a known port (say )
> c) open up port () in the instance's security group
> d) retrieve the encrypted password and decrypt it
> e) close port () in the instances security group
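
For step (d), something along these lines is what I picture on the client
side (just a sketch: the port number is a made-up placeholder, and it
assumes the agent serves the RSA-encrypted password as raw bytes and that
your ssh keypair is a traditional PEM RSA key):

curl -sk https://<instance-ip>:8899/ -o adminpass.enc   # 8899 is a placeholder port
openssl rsautl -decrypt -inkey ~/.ssh/id_rsa -in adminpass.enc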


I was wondering whether a way to change the password for libvirt/KVM guests
(Unix and Windows) is planned for Grizzly.
Is there any blueprint available?

Sam

On Sat, Nov 3, 2012 at 3:15 AM, Pádraig Brady  wrote:

> On 11/02/2012 07:03 PM, Lars Kellogg-Stedman wrote:
>
>> On Thu, Nov 01, 2012 at 11:03:14AM -0700, Vishvananda Ishaya wrote:
>>
>>> The new config drive code defaults to iso-9660, so that should work. The
>>> vfat version should probably create a partition table.
>>>
>>
>> Is that what Folsom is using?  Or is it new-er than that?
>>
>
> That's in Folsom
>
>
>
>
___
Mailing list: https://launchpad.net/~openstack
Post to : openstack@lists.launchpad.net
Unsubscribe : https://launchpad.net/~openstack
More help   : https://help.launchpad.net/ListHelp


Re: [Openstack] new mailing list for bare-metal provisioning

2012-10-29 Thread Sam Stoelinga
If the mailing list gets separated, it would be good to have an aggregate
mailing list we can subscribe to which carries all nova-related mailing lists.

On Mon, Oct 29, 2012 at 3:53 PM, Gary Kotton  wrote:

>  On 10/29/2012 02:59 AM, Asher Newcomer wrote:
>
> +1
>
> On Sun, Oct 28, 2012 at 8:39 PM, Russell Bryant wrote:
>
>> On 10/28/2012 08:19 PM, David Kang wrote:
>> >
>> >  I agree that a subject prefix is one way.
>> > There are pros and cons of either approach.
>> > However, when I asked a few of the people who showed interest in the
>> bare-metal discussion,
>> > a new mailing list was preferred by them.
>> > And we thought a separate mailing list makes it easier for people to
>> participate and to manage the discussion.
>> >
>> >  We can discuss this issue again among the people who signed up the new
>> mailing list.
>>
>>  There are quite a few people, like myself, who are interested in *all*
>> nova development.  Signing up for a new mailing list for every new
>> development effort would be a nightmare to keep up with.  I *really,
>> really* think the list should be dropped and all discussions should be
>> on openstack-dev.
>>
>
> I agree.
>
>
>> --
>> Russell Bryant
>>
>
>
>
___
Mailing list: https://launchpad.net/~openstack
Post to : openstack@lists.launchpad.net
Unsubscribe : https://launchpad.net/~openstack
More help   : https://help.launchpad.net/ListHelp


Re: [Openstack] API Credentials

2012-10-22 Thread Sam Stoelinga
No, I think what Vish is saying is that it's possible to get the OpenStack
access key and secret by doing the following
(based on Folsom, but I think it's the same in Essex):
1. Log in to the OpenStack dashboard (Horizon) with your account
2. Go to the Settings page
3. Click on EC2 Credentials
4. Click on Download EC2 Credentials
The access key and secret seem to be in the file ec2rc.sh.
Description:

Clicking "Download EC2 Credentials" will download a zip file which includes
an rc file with your access/secret keys, as well as your x509 private key
and certificate.
That's what you want, right? Hope it helps.
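
If you'd rather do it from the command line, something like this should give
the same keys (a sketch, assuming the Folsom keystone client; the IDs are
placeholders):

keystone ec2-credentials-create --user-id <user-id> --tenant-id <tenant-id>
keystone ec2-credentials-list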

Sam
On Tue, Oct 23, 2012 at 12:50 PM, Tummala Pradeep <
pradeep.tumm...@ericsson.com> wrote:

> Actually, I am trying to integrate PaaS with OpenStack, so I require an
> access key and secret access key for that, and I don't think EC2
> credentials will work. Are you saying it is not possible to set up
> OpenStack's access key and secret access key?
>
> Pradeep
>
>
> On 10/22/2012 10:26 PM, Vishvananda Ishaya wrote:
>
>> access and secret keys are ec2 credentials and they can be retrieved
>> using download ec2 credentials from the settings page in horizon.
>>
>> Vish
>>
>> On Oct 22, 2012, at 4:56 AM, Tummala Pradeep <
>> pradeep.tumm...@ericsson.com> wrote:
>>
>>  I deployed OpenStack Essex on my server using the documentation
>>> provided. Now, I need help with getting API credentials similar to what HP
>>> OpenStack has.
>>>
>>> For example, users having an account in HP OpenStack can retrieve an access
>>> key and secret access key from the API keys section. In my deployment, I can
>>> download OpenStack credentials from the settings tab in .pem format, but it
>>> does not contain an access key and secret access key. Therefore I want to
>>> set up API keys so that users can view their credentials, similar to HP
>>> OpenStack.
>>>
>>> Someone please guide me to get started on this.
>>>
>>> Thanks
>>> Pradeep
>>>
___
Mailing list: https://launchpad.net/~openstack
Post to : openstack@lists.launchpad.net
Unsubscribe : https://launchpad.net/~openstack
More help   : https://help.launchpad.net/ListHelp


Re: [Openstack] Glance snapshots of VMs are invisble in horizon and glance image-list

2012-10-21 Thread Sam Stoelinga
Great catch! That totally fixed it; the section was missing in
glance-registry.conf but present in glance-api.conf:
[paste_deploy]
flavor = keystone
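
For anyone hitting the same thing, this is roughly what I did after adding
that section (service names as packaged on Ubuntu; they may differ on other
distributions):

sudo service glance-api restart
sudo service glance-registry restart
glance image-list   # the snapshots showed up again after the restart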

Sam

On Sat, Oct 20, 2012 at 12:07 AM, Brian Waldon  wrote:

> It looks like you aren't deploying Glance with Keystone authentication
> enabled. Add a [paste_deploy] section to the glance-api and glance-registry
> configs with a single entry: flavor=keystone.
>
>
> On Oct 19, 2012, at 12:19 AM, Sam Stoelinga wrote:
>
> Hi all,
>
> When I create a snapshot of a VM, the snapshot just vanishes or is hidden.
>
> *Scenario:*
> 1. Create a vm with local storage
> 2. Create a snapshot of the VM after it's running successfully
> 3. In Horizon create a snapshot of the VM
> 4. You get redirected to the Images & Snapshots page but there is no sign of
> the snapshot.
>
> *Some more debugging:*
>
>
> glance image-list
>
> +--------------------------------------+--------------------------+-------------+------------------+---------+--------+
> | ID                                   | Name                     | Disk Format | Container Format | Size    | Status |
> +--------------------------------------+--------------------------+-------------+------------------+---------+--------+
> | 6d196c6a-b210-45f7-a4ab-4d98e5b2a31b | cirros-0.3.0-x86_64-disk | qcow2       | bare             | 9761280 | active |
> +--------------------------------------+--------------------------+-------------+------------------+---------+--------+
>
> nova image-list
>
> +--------------------------------------+--------------------------+--------+--------------------------------------+
> | ID                                   | Name                     | Status | Server                               |
> +--------------------------------------+--------------------------+--------+--------------------------------------+
> | 6d196c6a-b210-45f7-a4ab-4d98e5b2a31b | cirros-0.3.0-x86_64-disk | ACTIVE |                                      |
> | 33d37a0b-0c4d-4976-8580-9e7bf8b53776 | test snapshot            | ACTIVE | 526e1738-1b44-4509-900e-b29beed7e0f7 |
> +--------------------------------------+--------------------------+--------+--------------------------------------+
>
> As you can see glance image-list and nova image-list return different
> results. The snapshot has in fact
> been created correctly as you can see here:
> ls /var/lib/glance/images/6d196c6a-b210-45f7-a4ab-4d98e5b2a31b -lh
> -rw-r- 1 glance glance 9.4M Oct 19 14:24
> /var/lib/glance/images/6d196c6a-b210-45f7-a4ab-4d98e5b2a31b
>
>
> *This is the snapshot image detail: *
> glance image-show 33d37a0b-0c4d-4976-8580-9e7bf8b53776
> +---+--+
> | Property  | Value|
> +---+--+
> | Property 'base_image_ref' | 6d196c6a-b210-45f7-a4ab-4d98e5b2a31b |
> | Property 'image_location' | snapshot |
> | Property 'image_state'| available|
> | Property 'image_type' | snapshot |
> | Property 'instance_uuid'  | 526e1738-1b44-4509-900e-b29beed7e0f7 |
> | Property 'owner_id'   | c21b7e53480b497aac6683d618a6b3ce |
> | Property 'user_id'| 4899e879f62846f1a4926b781a7489f6 |
> | checksum  | 46742031d20be7eabf52e55c9e7bf345 |
> | container_format  | bare |
> | created_at| 2012-10-19T06:59:45  |
> | deleted   | False|
> | disk_format   | qcow2|
> | id| 33d37a0b-0c4d-4976-8580-9e7bf8b53776 |
> | is_public | False|
> | min_disk  | 0|
> | min_ram   | 0|
> | name  | test snapshot|
> | protected | False|
> | size  | 14352384 |
> | status| active   |
> | updated_at| 2012-10-19T06:59:55  |
> +---+--+
> nova image-show 33d37a0b-0c4d-4976-8580-9e7bf8b53776
> +-+--+
> | Property| Value|
> +-+--

[Openstack] Glance snapshots of VMs are invisble in horizon and glance image-list

2012-10-19 Thread Sam Stoelinga
Hi all,

When I create a snapshot of a VM, the snapshot just vanishes or is hidden.

*Scenario:*
1. Create a vm with local storage
2. Create a snapshot of the VM after it's running successfully
3. In Horizon create a snapshot of the VM
4. You get redirected to the Images & Snapshots page but there is no sign of
the snapshot.

*Some more debugging:*


glance image-list
+--------------------------------------+--------------------------+-------------+------------------+---------+--------+
| ID                                   | Name                     | Disk Format | Container Format | Size    | Status |
+--------------------------------------+--------------------------+-------------+------------------+---------+--------+
| 6d196c6a-b210-45f7-a4ab-4d98e5b2a31b | cirros-0.3.0-x86_64-disk | qcow2       | bare             | 9761280 | active |
+--------------------------------------+--------------------------+-------------+------------------+---------+--------+

nova image-list
+--------------------------------------+--------------------------+--------+--------------------------------------+
| ID                                   | Name                     | Status | Server                               |
+--------------------------------------+--------------------------+--------+--------------------------------------+
| 6d196c6a-b210-45f7-a4ab-4d98e5b2a31b | cirros-0.3.0-x86_64-disk | ACTIVE |                                      |
| 33d37a0b-0c4d-4976-8580-9e7bf8b53776 | test snapshot            | ACTIVE | 526e1738-1b44-4509-900e-b29beed7e0f7 |
+--------------------------------------+--------------------------+--------+--------------------------------------+

As you can see glance image-list and nova image-list return different
results. The snapshot has in fact
been created correctly as you can see here:
ls /var/lib/glance/images/6d196c6a-b210-45f7-a4ab-4d98e5b2a31b -lh
-rw-r- 1 glance glance 9.4M Oct 19 14:24
/var/lib/glance/images/6d196c6a-b210-45f7-a4ab-4d98e5b2a31b


*This is the snapshot image detail: *
glance image-show 33d37a0b-0c4d-4976-8580-9e7bf8b53776
+---+--+
| Property  | Value|
+---+--+
| Property 'base_image_ref' | 6d196c6a-b210-45f7-a4ab-4d98e5b2a31b |
| Property 'image_location' | snapshot |
| Property 'image_state'| available|
| Property 'image_type' | snapshot |
| Property 'instance_uuid'  | 526e1738-1b44-4509-900e-b29beed7e0f7 |
| Property 'owner_id'   | c21b7e53480b497aac6683d618a6b3ce |
| Property 'user_id'| 4899e879f62846f1a4926b781a7489f6 |
| checksum  | 46742031d20be7eabf52e55c9e7bf345 |
| container_format  | bare |
| created_at| 2012-10-19T06:59:45  |
| deleted   | False|
| disk_format   | qcow2|
| id| 33d37a0b-0c4d-4976-8580-9e7bf8b53776 |
| is_public | False|
| min_disk  | 0|
| min_ram   | 0|
| name  | test snapshot|
| protected | False|
| size  | 14352384 |
| status| active   |
| updated_at| 2012-10-19T06:59:55  |
+---+--+
nova image-show 33d37a0b-0c4d-4976-8580-9e7bf8b53776
+-+--+
| Property| Value|
+-+--+
| created | 2012-10-19T06:59:45Z |
| id  | 33d37a0b-0c4d-4976-8580-9e7bf8b53776 |
| metadata base_image_ref | 6d196c6a-b210-45f7-a4ab-4d98e5b2a31b |
| metadata image_location | snapshot |
| metadata image_state| available|
| metadata image_type | snapshot |
| metadata instance_uuid  | 526e1738-1b44-4509-900e-b29beed7e0f7 |
| metadata owner_id   | c21b7e53480b497aac6683d618a6b3ce |
| metadata user_id| 4899e879f62846f1a4926b781a7489f6 |
| minDisk | 0|
| minRam  | 0|
| name| test snapshot|
| progress| 100  |
| server  | 526e1738-1b44-4509-900e-b29bee

Re: [Openstack] Folsom Horizon Error

2012-10-18 Thread Sam Stoelinga
I had the same issue; you can remove the Ubuntu theme with the following
command:
rm /etc/openstack-dashboard/ubuntu_theme.py
Double-check that this is the correct path though, I'm writing this from
memory.
Or comment out the lines in /etc/openstack-dashboard/local_settings.py which
load the Ubuntu theme.

Or, if you like the Ubuntu theme, it seems you have to install the following
package:
apt-get install openstack-dashboard-ubuntu-theme

I think the theme is missing some layout files which are inside the above
package, but I'm not sure if that's the real issue.

Sam

On Thu, Oct 18, 2012 at 7:28 PM, Jasper Aikema  wrote:

>  Hello,
>
> I also have the layout problem.
>
> I didn't have time to find out why this occurred. If you remove the Ubuntu
> theme for Horizon, the layout will be fine.
>
> Kind regards,
>
> Jasper Aikema
>
> Hi-
>
>  I have installed Folsom version of Openstack using the installation
> guide https://github.com/EmilienM/openstack-folsom-guide
>
>  The Horizon GUI looks awkward  and the login gives me this error in the
> apache error logs.
>
>  I have installed Horizon the way given in the Folsom guide given above.
>
>  [image: Inline image 1]
>
>
>  The Error Log:
>
>  [Thu Oct 18 10:18:19 2012] [error] unable to retrieve service catalog
> with token
> [Thu Oct 18 10:18:19 2012] [error] Traceback (most recent call last):
> [Thu Oct 18 10:18:19 2012] [error]   File
> "/usr/lib/python2.7/dist-packages/keystoneclient/v2_0/client.py", line 135,
> in _extract_service_catalog
> [Thu Oct 18 10:18:19 2012] [error] endpoint_type='adminURL')
> [Thu Oct 18 10:18:19 2012] [error]   File
> "/usr/lib/python2.7/dist-packages/keystoneclient/service_catalog.py", line
> 73, in url_for
> [Thu Oct 18 10:18:19 2012] [error] raise
> exceptions.EndpointNotFound('Endpoint not found.')
> [Thu Oct 18 10:18:19 2012] [error] EndpointNotFound: Endpoint not found.
>
>
>  Please help me resolve the issue.
>
>  Thanking you.
>
>
>  --
> Regards,
> --
> Trinath Somanchi,
> +91 9866 235 130
>
>
>
___
Mailing list: https://launchpad.net/~openstack
Post to : openstack@lists.launchpad.net
Unsubscribe : https://launchpad.net/~openstack
More help   : https://help.launchpad.net/ListHelp


[Openstack] Bug in documentation where to file it?

2012-10-16 Thread Sam Stoelinga
Hi all,

Is there a specific documentation project to file bugs against, or does this
just go to the nova project?

It seems the following documentation is incorrect:
http://docs.openstack.org/trunk/openstack-compute/admin/content/configuring-vlan-networking.html

The command is incorrect:
nova-manage network create --label=example-net --fixed_range_v4=
172.16.169.0/24 --vlan=169 --bridge=br169
--project_id=127cdfa47e544df080d2ede5c38797d1

Returns:
2012-10-17 13:13:55 CRITICAL nova [req-aa135dcb-1c99-45e1-9bb4-37820958377d
None None] 'num_networks'
2012-10-17 13:13:55 TRACE nova Traceback (most recent call last):
2012-10-17 13:13:55 TRACE nova   File "/usr/bin/nova-manage", line 1401, in
<module>
2012-10-17 13:13:55 TRACE nova main()
2012-10-17 13:13:55 TRACE nova   File "/usr/bin/nova-manage", line 1388, in
main
2012-10-17 13:13:55 TRACE nova fn(*fn_args, **fn_kwargs)
2012-10-17 13:13:55 TRACE nova   File "/usr/bin/nova-manage", line 477, in
create
2012-10-17 13:13:55 TRACE nova
net_manager.create_networks(context.get_admin_context(), **kwargs)
2012-10-17 13:13:55 TRACE nova   File
"/usr/lib/python2.7/dist-packages/nova/network/manager.py", line 2040, in
create_networks
2012-10-17 13:13:55 TRACE nova if kwargs['num_networks'] +
kwargs['vlan_start'] > 4094:
2012-10-17 13:13:55 TRACE nova KeyError: 'num_networks'
2012-10-17 13:13:55 TRACE nova


So it should probably be:
nova-manage network create --label=example-net --fixed_range_v4=
172.16.169.0/24 --vlan=169 --bridge=br169
--project_id=127cdfa47e544df080d2ede5c38797d1 --num_networks=1

I previously filed a bug here, which was about Glance documentation:
https://bugs.launchpad.net/glance/+bug/1066822
I'm not sure if that was the right place.

I would like to fix the bug myself if it gets confirmed, but that's the
next step; I'm not even sure it's a valid bug yet.
From what I understood, the trunk version of the documentation == the Folsom
documentation?

Sam
___
Mailing list: https://launchpad.net/~openstack
Post to : openstack@lists.launchpad.net
Unsubscribe : https://launchpad.net/~openstack
More help   : https://help.launchpad.net/ListHelp


Re: [Openstack] Compute Node Down!

2012-09-19 Thread Sam Stoelinga
Hi Ale,

It's the first time I've seen nova rescue; maybe this should also be
mentioned somewhere else in the documentation, perhaps in the part related to
migration, because that's where I looked and tried first.

I first tried to do a migration of the VM on a dead host, but that didn't
work. After searching, I stumbled upon this patch, which enables the
functionality to move VMs off a dead host:
https://review.openstack.org/#/c/11086/12
But it wasn't available for Essex, and won't be in Folsom either, so I
thought this functionality wasn't there yet. I searched through the
documentation and Google, but never saw anything about rescue.
Is the functionality the same as the above patch? I looked at the code, and
the rescue code seems much smaller; the above patch seems to do more cleaning
up as well.

Because I didn't know about nova rescue, I had already backported the patch
to Essex successfully, but maybe it's safer to use nova rescue.
This is the evacuate patch for stable Essex:
https://review.openstack.org/#/c/13282/

Do I understand correctly that this is the workflow:
nova rescue instance1
Check whether the state changed to RESCUED;
if it's rescued, do a nova unrescue instance1,
which cleans up the resources used for rescuing and changes the state on the
VM back to ACTIVE?
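
In other words, roughly this (a sketch; the exact status string shown by
nova list may be RESCUE rather than RESCUED):

nova rescue instance1
nova list | grep instance1   # wait until the status column shows the rescue state
nova unrescue instance1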

Thanks a lot, sure is helpful.

Sam

On Wed, Sep 19, 2012 at 9:12 PM, Alejandro Comisario <
alejandro.comisa...@mercadolibre.com> wrote:

> If you are on Essex, you can issue a "nova rescue"; if on Cactus, you have
> to manipulate the "instances" table to tell it where the new instance will
> be running, and then from the new compute node issue:
>
> virsh define /path/to/XML
> virsh start instance_name
>
> From that moment, you can manage the instance using euca / nova
> *
> *
> *Ale*
>
> On Wed, Sep 19, 2012 at 4:03 AM, Wolfgang Hennerbichler <
> wolfgang.hennerbich...@risc-software.at> wrote:
>
>> Hello Folks,
>>
>> Although it seems a pretty straightforward scenario I have a hard time
>> finding documentation on this.
>> One of my compute nodes broke down. All the instances are on shared
>> storage, so no troubles here, but I don't know how to tell openstack that
>> the VM should be deployed on another compute node. I tried fiddling around
>> in the mysql-db with no success.
>> Any help is really appreciated.
>>
>> Wolfgang
___
Mailing list: https://launchpad.net/~openstack
Post to : openstack@lists.launchpad.net
Unsubscribe : https://launchpad.net/~openstack
More help   : https://help.launchpad.net/ListHelp


Re: [Openstack] Unable to start nova-scheduler : duplicate option: scheduler_host_manager

2012-09-17 Thread Sam Stoelinga
I just encountered the same problem when I added a decorator to a method,
but the decorator did not exist.

Just putting it here as it took me quite a while to find this silly
problem; I hope it helps somebody.

On Thu, Aug 23, 2012 at 12:10 AM, Ben  wrote:

> Hmm, OK, but why? I'm running Essex out of the box as packaged in Ubuntu
> 12.04. I didn't modify the code (except commenting out the raise line in
> cfg.py for this problem), but I did a lot of reinstalling, modifying the
> configuration, dropping the nova db, etc.
>
> What would cause this circular import? Can I tweak the code to avoid this
> circular import (as a workaround to validate this hypothesis)?
>
> Thanks to all for your help.
>
> Ben
>
>
> On 22/08/2012 18:00, Vishvananda Ishaya wrote:
>
>  You have a circular import somewhere that is causing scheduler/driver.py
>> to be imported twice.
>>
>> Vish
>> On Aug 22, 2012, at 8:33 AM, Ben  wrote:
>>
>>> # grep -R scheduler_host_manager /usr/lib/python2.7/dist-packages/nova
>>> /usr/lib/python2.7/dist-packages/nova/scheduler/driver.py:
>>> cfg.StrOpt('scheduler_host_manager',
>>> /usr/lib/python2.7/dist-packages/nova/scheduler/driver.py:
>>> FLAGS.scheduler_host_manager)
>>> Binary file
>>> /usr/lib/python2.7/dist-packages/nova/scheduler/driver.pyc
>>> matches
>>> grep: /usr/lib/python2.7/dist-packages/nova/CA/reqs/.gitignore:
>>> No such file or directory
>>> grep: /usr/lib/python2.7/dist-packages/nova/CA/.gitignore: No such
>>> file or directory
>>> grep: /usr/lib/python2.7/dist-packages/nova/CA/projects/.gitignore:
>>> No such file or directory
>>>
>>> If I comment out the portion of code that raises the error (in
>>> _is_opt_registered(opts, opt) in cfg.py), I get the following error:
>>>
>>> ClassNotFound: Class SimpleScheduler could not be found: cannot import
>>> name vnc
>>>
>>> I have the same error if I replace --scheduler_driver by
>>> --scheduler_manager in nova.conf.
>>>
>>> novnc is not installed because it gives a configure error, but
>>> nova-vncproxy is installed correctly.
>>>
>>> I suspect the error raised is not the original error, only a side effect...
>>> I attached the trace of nova-scheduler before and after commenting out the code.
>>>
>>> What can I try now ?
>>>
>>> Ben
>>>
>>> On 22/08/2012 17:00, Joseph Suh wrote:
>>>
 Ben,

 It is possible to have the option specified in the code. Try grep -r on
 the whole code tree.

 Thanks,

 Joseph

 - Original Message -
 From: "Ben" 
 To: "Joseph Suh" 
 Cc: openstack@lists.launchpad.net
 Sent: Wednesday, August 22, 2012 10:47:35 AM
 Subject: Re: [Openstack] Unable to start nova-scheduler : duplicate
 option: scheduler_host_manager

 Hi Joseph,

 Thank you for your answer. Yes, but where could that option be? It's
 not duplicated in my nova.conf file, and this is the file that
 nova-scheduler uses.
 My nova.conf file is attached.

 Regards,

 Ben


 On 22/08/2012 16:37, Joseph Suh wrote:

> Ben,
>
> As the error message suggests, it is due to a duplicated option of
> scheduler_host_manager. It is specified more than once somewhere.
>
> Thanks,
>
> Joseph
>
> - Original Message -
> From: "Ben" 
> To: openstack@lists.launchpad.net
> Sent: Wednesday, August 22, 2012 8:27:39 AM
> Subject: Re: [Openstack] Unable to start nova-scheduler : duplicate
> option: scheduler_host_manager
>
> Hi,
>
> No ideas on my scheduler problem? It was working at the beginning (I
> assume so, because I could launch instances), but I did a lot of
> modifications: modifying networks, deleting the nova db, rebuilding it, etc.
>
> Any idea what I can do to identify the problem? Where can I find the
> mentioned option, other than in nova.conf?
>
> Ben
>
> On 22/08/2012 00:46, Ben wrote:
>
>> Hi,
>>
>> I'm trying to set up a little nova cluster with 3 nodes:
>>
>> - 1 controller node running all services but compute
>> - 2 compute nodes running compute and network only
>>
>> I have faced a lot of issues, but I can't understand this one. When I
>> start nova-scheduler on the controller node, the process dies instantly
>> with this error:
>>
>> CRITICAL nova [-] duplicate option: scheduler_host_manager
>>
>> So I can't start an instance; it remains stuck in the building state. I
>> have checked my nova.conf file, and I only see this line:
>>
>> --scheduler_driver=nova.scheduler.simple.SimpleScheduler
>>
>> What does this error mean, and how can I solve it?
>>
>> Thanks,
>>
>> Ben
>>
>

Re: [Openstack] KeyStone service is not responding while installing thorough DevStack !!

2012-08-23 Thread Sam Stoelinga
Hi,

That may mean your internet connection is too slow and it's still
downloading and hasn't finished yet; at least that's what I have experienced
in the past.
Maybe your HTTP requests sometimes get malformed and the download isn't
continuing; then you may just need to try again.

Are you behind a firewall like me (the Chinese firewall)? Maybe the resource
is being blocked?
You may just have to wait longer, or try a VPN to download everything.
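
If you end up going the proxy route, exporting the proxy variables before
running stack.sh is usually enough (a sketch; the proxy address is a
placeholder):

export http_proxy=http://proxy.example.com:3128
export https_proxy=http://proxy.example.com:3128
./stack.sh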

Sam

On Thu, Aug 23, 2012 at 4:39 PM, Trinath Somanchi <
trinath.soman...@gmail.com> wrote:

> Hi-
>
> I'm experiencing the same thing.
>
> I was stuck at this point:
>
> "Downloading/unpacking prettytable (from -r
> python_keystoneclient.egg.info/requires.txt (line 2))"
>
> It was just stuck here... and not moving forward.
>
> Can anyone guide me in troubleshooting the issue?
>
> -
> Trinath
>
>
>
> On Thu, Aug 23, 2012 at 1:45 PM, hitesh wadekar 
> wrote:
>
>> Guys,
>>
>> I am installing DevStack script. I stucked here.
>>
>> + screen -S stack -p key -X stuff 'cd /opt/stack/keystone &&
>> /opt/stack/keystone/bin/keystone-all --config-file
>> /etc/keystone/keystone.conf --log-config /etc/ke'stone/logging.conf -d
>> --debug
>>
>> echo 'Waiting for keystone to start...'
>>   Waiting for keystone to start...
>>   + timeout 60 sh -c 'while ! http_proxy= curl -s
>> http://192.168.1.100:5000/v2.0/ >/dev/null; do sleep 1; done'
>>   + echo 'keystone did not start'
>>   keystone did not start
>>
>> Looking at the message, it is clear that the Keystone service is not
>> responding. I installed it manually, but no luck; I still see the same issue.
>>
>> Any suggestion or pointers for this?
>>
>> Thanks,
>> Hitesh Wadekar
>>
>>
>
>
> --
> Regards,
> --
> Trinath Somanchi,
> +91 9866 235 130
>
>
___
Mailing list: https://launchpad.net/~openstack
Post to : openstack@lists.launchpad.net
Unsubscribe : https://launchpad.net/~openstack
More help   : https://help.launchpad.net/ListHelp