Hi Vish,
Is it possible to assign a direct/static IP address to an instance, something
like bridging rather than NATing? Similar to vSphere.
Best Regards,
Umar
On Thu, Jan 31, 2013 at 8:37 AM, Vishvananda Ishaya
vishvana...@gmail.com wrote:
On Jan 30, 2013, at 11:35 AM, Umar Draz unix...@gmail.com wrote:
Perhaps you can use the multiple flat network model shown here:
http://docs.openstack.org/trunk/openstack-network/admin/content/use_cases_multi_flat.html
The TenantC VM1 gets two IPs, one from each net. But in this case each net
is in a different subnet.
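If it helps, the multi-flat use case could be wired up roughly like this (a hedged sketch; the net names, physnets, and CIDRs below are made-up examples, not from the doc):

```shell
# Two flat provider nets on different physical networks / subnets:
quantum net-create flat-net-1 --provider:network_type flat \
    --provider:physical_network physnet1 --shared
quantum subnet-create flat-net-1 192.168.100.0/24
quantum net-create flat-net-2 --provider:network_type flat \
    --provider:physical_network physnet2 --shared
quantum subnet-create flat-net-2 192.168.101.0/24

# Boot the VM with a NIC on each net so it gets one IP from each subnet:
nova boot --image <image-id> --flavor m1.small \
    --nic net-id=<flat-net-1-id> --nic net-id=<flat-net-2-id> TenantC-VM1
```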
Maybe you can create the nets for the
Hello,
Thank you for your answers. Effectively, I didn't see this point ...
I must now think about it ... I hope that copying the Glance image to a
Cinder bootable volume will solve this problem.
That's because in our hand-made cloud, we use cobbler/kickstart to
deploy every VM ... so no resize.
openstack-bounces+avishay=il.ibm@lists.launchpad.net wrote on
01/31/2013 12:37:07 AM:
From: Tom Fifield fifie...@unimelb.edu.au
To: openstack@lists.launchpad.net,
Date: 01/31/2013 12:38 AM
Subject: Re: [Openstack] List of Cinder compatible devices
Just added some stuff about RBD where E refers to Essex.
--
Regards,
Sébastien Han.
On Thu, Jan 31, 2013 at 11:20 AM, Avishay Traeger avis...@il.ibm.com wrote:
openstack-bounces+avishay=il.ibm@lists.launchpad.net wrote on
01/31/2013 12:37:07 AM:
From: Tom Fifield fifie...@unimelb.edu.au
I think we need to add the vendor storage series,
since not all EMC storage supports Cinder.
On Thu, Jan 31, 2013 at 11:19 PM, Sébastien Han han.sebast...@gmail.com wrote:
Just added some stuff about RBD where E refers to Essex.
--
Regards,
Sébastien Han.
On Thu, Jan 31, 2013 at 11:20
In that case, it is probably best to transpose the table, with series
included; otherwise the number of products will yield too many columns to be workable.
Also: do blank spaces indicate "not supported" or "unknown"?
koert
On 01/31/2013 04:47 PM, Shake Chen wrote:
I think we need to add the vendor storage series.
On Thu, Jan 31, 2013 at 8:56 AM, Koert van der Veer ko...@cloudvps.com wrote:
In that case, it is probably best to transpose the table, with series
included; otherwise the number of products will yield too many columns to be workable.
Also: do blank spaces indicate "not supported" or "unknown"?
koert
The doc cited explains how this works with Quantum, but not with OpenStack.
VM1 is shown as being accessible via two VNICs, which implies that there is a
hypervisor that is reachable via two NICs. How does Nova know that this is in
fact one hypervisor and not two?
Similar question must be
Hello,
I am trying to implement a scenario where a group of instances each have
multiple public IPs on the same network. The reasons for this are
legacy-related; other tenants should operate as with regular floating IPs. I've
tried to create multiple VIFs on an instance with the same network, but
Try to have a look at the boot-from-volume feature. Basically, the disk
base of your instance is an RBD volume from Ceph. Something will
remain in /var/lib/nova/instances, but it's only the KVM XML file.
http://ceph.com/docs/master/rbd/rbd-openstack/?highlight=openstack
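Very roughly, and with placeholder names and IDs (this is a sketch, not the doc's exact procedure), the boot-from-volume flow looks like:

```shell
# Copy the Glance image into a new (RBD-backed) Cinder volume, 10 GB here:
cinder create --image-id <glance-image-id> --display-name boot-vol 10

# Boot an instance whose root disk is that volume:
nova boot --flavor m1.small \
    --block-device-mapping vda=<volume-id>:::0 my-instance
```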
Cheers!
--
Regards,
On Thu, Jan 31, 2013 at 11:43 AM, Sébastien Han han.sebast...@gmail.com wrote:
Try to have a look at the boot-from-volume feature. Basically, the disk
base of your instance is an RBD volume from Ceph. Something will
remain in /var/lib/nova/instances, but it's only the KVM XML file.
Thank you Bob.
I expanded the RAM of my Virtual Machine, but the execution of the script
shows me this error:
++ dirname
/root/devstack/openstack-dev-devstack-f49c410/tools/xen/scripts/install-os-vpx.sh
+ thisdir=/root/devstack/openstack-dev-devstack-f49c410/tools/xen/scripts
+ '[' '' ']'
+ '['
Even though I don't experience this problem (and prefer nginx to apache), I can
help diagnose:
Connections ending up in CLOSE_WAIT means that the socket isn't being fully
closed, which is controlled by the client lib (in this case
python-keystoneclient) which uses httplib2 under the hood.
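A quick way to confirm the diagnosis on a Linux box is to count sockets sitting in CLOSE_WAIT (state code 08 in /proc/net/tcp); a number that only grows points at the client never closing:

```shell
# Count CLOSE_WAIT sockets by parsing /proc/net/tcp (column 4 is the state):
awk 'NR > 1 && $4 == "08"' /proc/net/tcp | wc -l

# Equivalent with a state filter, where ss supports it:
ss -tan state close-wait | tail -n +2 | wc -l
```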
+1 John
Since CephFS is not production-ready, what you can do is map an RBD device
to each of your compute nodes and then mount it in
/var/lib/nova/instances. The downside of this is that you need way more
IOPS, since you only have one RBD per compute node for every VM that
will end up on this
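As a hedged sketch of that setup (the pool and image names are examples; the pool must already exist and Ceph must be configured on the node):

```shell
# One RBD image per compute node, mounted as the instances directory:
rbd create nova-pool/compute01-instances --size 102400   # size in MB
rbd map nova-pool/compute01-instances                    # udev exposes /dev/rbd/...
mkfs.ext4 /dev/rbd/nova-pool/compute01-instances
mount /dev/rbd/nova-pool/compute01-instances /var/lib/nova/instances
chown nova:nova /var/lib/nova/instances
```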
Hi Yaguang,
Could you restore the review, please?
I cannot push the rebased code onto an abandoned review, and only the
review owner is authorized to do that.
Regards,
Édouard.
On Thu, Jan 24, 2013 at 4:05 PM, Yaguang Tang yagu...@canonical.com wrote:
Hi,
I am glad you have made this
On 01/31/2013 11:59 AM, Gabriel Hurley wrote:
Even though I don't experience this problem (and prefer nginx to
apache), I can help diagnose:
Connections ending up in CLOSE_WAIT means that the socket isn't being
fully closed, which is controlled by the client lib (in this case
Speaking of which, guys, is there anything in particular stability-wise regarding Ceph within OpenStack? It's officially not production-ready, yet it's often the solution that comes up when we are looking for data clustering. GlusterFS... yea or nay?
Razique Mahroua - Nuage & Co
razique.mahr...@gmail.com
Tel: +33 9
Hey guys, I'm having the same problem on RHEL 6.3. Did a search on
openvswitch at RHN and it came up with nothing. Where is it? Fedora's
core repos?
On Sun, Nov 18, 2012 at 4:15 PM, George Lekatsas glekats...@gmail.com wrote:
Hello,
following the installation instruction and yum install
Disco: https://github.com/homework/openvswitch/blob/master/INSTALL.RHEL.
And then, in my case, I'm just adding it to a local mrepo configuration.
That really sucks, though. C'mon EPEL. Seriously, this is one of the
hardest application deployments I've ever done. It's actually worse than
Ceph has been officially production ready for block (rbd) and object
storage (radosgw) for a while. It's just the file system that isn't
ready yet:
http://ceph.com/docs/master/faq/#is-ceph-production-quality
Josh
On 01/31/2013 01:23 PM, Razique Mahroua wrote:
Speaking of which guys,
anything
Hi all,
How does one delete netns?
I followed the link below to create the required net, subnet, and router, in
which DHCP is disabled:
https://raw.github.com/EmilienM/openstack-folsom-guide/master/scripts/quantum-networking.sh
but I still see:
root@folsom-network:~# ip netns
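For what it's worth, each namespace that `ip netns` lists can be deleted by name (the qdhcp id below is a placeholder; if anything such as dnsmasq is still running inside, kill it first):

```shell
ip netns                                          # lists e.g. qdhcp-<network-id>
ip netns pids qdhcp-<network-id> | xargs -r kill  # newer iproute2 only
ip netns delete qdhcp-<network-id>
```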
Hi,
As of now, the v3 validateToken response has tokens, service catalog, users,
project, roles, and domains; i.e., except for groups we are returning
everything. We also discussed the possibility of 100s of endpoints.
ValidateToken is supposed to be a high-frequency call. This is
+1000
On 01/31/2013 07:44 PM, Ali, Haneef wrote:
Hi,
As of now, the v3 validateToken response has tokens, service catalog,
users, project, roles, and domains; i.e., except for groups we are
returning everything. We also discussed the possibility of 100s
of endpoints.
On 01/31/2013 07:44 PM, Ali, Haneef wrote:
Hi,
As of now, the v3 validateToken response has tokens, service catalog,
users, project, roles, and domains; i.e., except for groups we are
returning everything. We also discussed the possibility of 100s
of endpoints. ValidateToken is
Isn't signed token an optional feature? If so, validateToken is going to be a
high-frequency call. Also, "Service Catalog" is a constant; the services can
cache it. It doesn't need to be part of validateToken.
Thanks
Haneef
From: openstack-bounces+haneef.ali=hp@lists.launchpad.net
On Jan 31, 2013, at 6:37 PM, Ali, Haneef haneef@hp.com wrote:
Isn't signed token an optional feature? If so, validateToken is going to be
a high-frequency call. Also, "Service Catalog" is a constant; the services
can cache it. It doesn't need to be part of validateToken.
Service
On 01/31/2013 10:57 PM, Vishvananda Ishaya wrote:
On Jan 31, 2013, at 6:37 PM, Ali, Haneef haneef@hp.com wrote:
Isn't signed token an optional feature? If so, validateToken is
going to be a high-frequency call. Also, "Service Catalog" is a
constant, the
The doc cited explains how this works with Quantum, but not with
OpenStack.
- As of Folsom, you have the option of using Quantum for networking or Nova
Network, and I believe Nova Network will eventually be deprecated.
VM1 is shown as being accessible via two VNICs, which implies that there is
a
On RHEL 6.3, with EPEL repos, we have openstack-quantum-openvswitch.noarch.
It requires openvswitch.x86_64, which isn't provided by either the RHEL
channels or EPEL!
So I tracked down the openvswitch source (not that hard), and after a few
hours of battling various compilation errors, ended up with
Hi Greg,
I can install openvswitch-1.7.3-1.x86_64.rpm on CentOS 6.3 if installing it
together with kmod-openvswitch-1.7.3-1.el6.x86_64.rpm:
# rpm -ivh openvswitch-1.7.3-1.x86_64.rpm
kmod-openvswitch-1.7.3-1.el6.x86_64.rpm
Preparing...###
Ken'ich,
Arigato gozaimasu (thank you very much). Where did you get that rpm?
On Fri, Feb 1, 2013 at 1:19 AM, Ken'ichi Ohmichi
oomi...@mxs.nes.nec.co.jp wrote:
Hi Greg,
I can install openvswitch-1.7.3-1.x86_64.rpm on CentOS 6.3 if installing it
together with kmod-openvswitch-1.7.3-1.el6.x86_64.rpm:
# rpm -ivh
Hi Greg,
On Fri, 1 Feb 2013 02:25:00 -0500
Greg Chavez greg.cha...@gmail.com wrote:
Arigato gozaimasu. Where did you get that rpm?
Dou itashimashite (you're welcome),
I built the RPM files from the source tarball in the following way:
# wget http://openvswitch.org/releases/openvswitch-1.7.3.tar.gz
# tar -zxvf
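For completeness, one way the full build could go (a hedged sketch from the released tarball; the spec file paths are those shipped in 1.7.3 and may differ in other versions):

```shell
wget http://openvswitch.org/releases/openvswitch-1.7.3.tar.gz
mkdir -p ~/rpmbuild/SOURCES
cp openvswitch-1.7.3.tar.gz ~/rpmbuild/SOURCES/
tar -zxvf openvswitch-1.7.3.tar.gz
cd openvswitch-1.7.3
rpmbuild -bb rhel/openvswitch.spec              # userspace RPM
rpmbuild -bb rhel/openvswitch-kmod-rhel6.spec   # kernel module RPM
```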
[Jenkins build notifications, Thu, 31 Jan 2013]
- precise_grizzly_nova_trunk #579: BUILD FAILURE (05:31:13 -0500, 3 min 32 sec; started by an SCM change / by user Chuck Short; built on pkg-builder; build stability: 1 out of the last 5 builds failed (80); console: finished at 20130131-0736, build needed 00:01:06, 1660k disc space, INFO:root:Uploading package)
  https://jenkins.qa.ubuntu.com/job/precise_grizzly_nova_trunk/579/
- precise_grizzly_nova_trunk #580: BUILD FAILURE (09:31:11 -0500, 4 min 21 sec; started by an SCM change)
  https://jenkins.qa.ubuntu.com/job/precise_grizzly_nova_trunk/580/
- precise_grizzly_nova_trunk #581: BUILD FAILURE (11:01:13 -0500, 3 min 40 sec; started by an SCM change; finished at 20130131-1210, build needed 00:07:27, 20012k disc space; ERROR:root:Error occurred during package creation/build: Command '['sbuild', '-d', 'precise
  https://jenkins.qa.ubuntu.com/job/precise_grizzly_nova_trunk/581/
- raring_grizzly_glance_trunk #112: BUILD SUCCESS (12:01:09 -0500, 14 min; started by an SCM change; finished at 20130131-1603)
  https://jenkins.qa.ubuntu.com/job/raring_grizzly_glance_trunk/112/
- precise_grizzly_nova_trunk #582: BUILD FAILURE (16:04:14 -0500, 3 min 44 sec; started by an SCM change)
  https://jenkins.qa.ubuntu.com/job/precise_grizzly_nova_trunk/582/
- precise_grizzly_nova_trunk #583: BUILD FAILURE (18:20:25 -0500, 4 min 46 sec; started by an SCM change)
  https://jenkins.qa.ubuntu.com/job/precise_grizzly_nova_trunk/583/
- precise_grizzly_swift_trunk #101: BUILD SUCCESS (19:01:16 -0500, 4 min 4 sec; started by an SCM change; finished at 20130131-2037, build needed 00:02:59, 23324k)
  https://jenkins.qa.ubuntu.com/job/precise_grizzly_swift_trunk/101/
- raring_grizzly_swift_trunk #98: BUILD SUCCESS (19:01:15 -0500, 4 min 25 sec; started by an SCM change; finished at 20130131-2043, build needed 00:04:05)
  https://jenkins.qa.ubuntu.com/job/raring_grizzly_swift_trunk/98/