Have to check that this doesn't mean we're forced to move to the Disqus
with ads/promoted discovery...
-- Forwarded message --
From: Disqus friendly-ema...@disqus.com
Date: Sat, Feb 2, 2013 at 10:42 AM
Subject: Moving from Disqus Classic to the all new Disqus
ip netns del NS i believe
On Feb 1, 2013 at 00:44, Paras pradhan pradhanpa...@gmail.com wrote:
Hi all,
How does one delete netns?
I followed the following link to create the required net, subnet, and router in
which DHCP is disabled
Setup:
- Deployed with Juju
- Folsom
- Three nodes, all on the same network:
- Quantum gateway running GRE tunnels (default supported by juju charm)
- Cloud controller
- Compute
The gateway has another interface to a public network.
I created a private logical network in quantum and booted a
Hi all again:
Finally I got the logs dumped to the file 'console.log'.
I needed to add 'console=tty0 console=ttyS0,115200' to the kernel line in
/etc/grub.conf.
Regards,
JuanFra.
2013/1/30 JuanFra Rodriguez Cardoso juanfra.rodriguez.card...@gmail.com
Hi all:
I've tested default image 'cirros' and
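For reference, the fix JuanFra describes amounts to a grub entry along these lines (the kernel image, version, and root device below are placeholders, not from the thread; only the two console= parameters come from his message):

```
title Linux
    root (hd0,0)
    kernel /vmlinuz-<version> ro root=/dev/vda1 console=tty0 console=ttyS0,115200
    initrd /initramfs-<version>.img
```

With both console= entries present, the kernel logs to the local VGA console and to the serial port that nova captures into console.log.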
Thanks for that clarification Josh, I had a small doubt you just alleviated :)
Razique Mahroua - Nuage & Co
razique.mahr...@gmail.com
Tel: +33 9 72 37 94 15
On Feb 1, 2013 at 00:36, Josh Durgin josh.dur...@inktank.com wrote: Ceph has been officially production ready for block (rbd) and object storage
Hello Lei,
When I use the command nova secgroup-list-rules, it returns:
usage: nova secgroup-list-rules secgroup
error: too few arguments
What must I insert with the command, the secgroup name? But I haven't
created an secgroup.
Regards.
Guilherme.
2013/1/31 Lei Zhang
If you haven't created any secgroup, it should be:
nova secgroup-list-rules default
Cheerios
On Fri, Feb 1, 2013 at 8:35 AM, Guilherme Russi
luisguilherme...@gmail.comwrote:
Hello Lei,
When I use the command nova secgroup-list-rules, it returns:
usage: nova secgroup-list-rules secgroup
2013/2/1 JuanFra Rodriguez Cardoso juanfra.rodriguez.card...@gmail.com:
Hi all again:
Finally I got the logs dumped to the file 'console.log'.
I needed to add 'console=tty0 console=ttyS0,115200' to the kernel line in
/etc/grub.conf.
Hi JuanFra,
Thanks for this hint. I was facing the same issue and this
Hello Leandro,
It's looking like that:
nova secgroup-list-rules default
+-------------+-----------+---------+----------+--------------+
| IP Protocol | From Port | To Port | IP Range | Source Group |
+-------------+-----------+---------+----------+--------------+
| icmp |
No, that does not work.
Cannot remove /var/run/netns/qdhcp-20554b0b-dc5f-48c5-87fa-47b90dc9242f:
Device or resource busy
Thanks
Paras.
On Fri, Feb 1, 2013 at 2:41 AM, Endre Karlson endre.karl...@gmail.com wrote:
ip netns del NS i believe
Den 1. feb. 2013 00:44 skrev Paras pradhan
Ken'ichi, thank you again. You actually had the answer to my problem in
your first reply, but I did not listen.
If you install openvswitch first, it fails dependency checking looking for
openvswitch-kmod. However, if you install kmod-openvswitch *first*,
the dependency checks for the
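A sketch of that install order on an RPM-based system (yum is an assumption here; the thread only names the two packages):

```shell
# install the kernel module package first so that openvswitch's
# dependency on openvswitch-kmod is already satisfied
yum install -y kmod-openvswitch
yum install -y openvswitch
```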
Evan,
Where are you pinging from? Displaying the ifconfig -a output on all nodes
could be useful.
Cheers,
__
Richard Whitney
Federal SE Team Lead
JNCIE-SP, BCNE, BCFP
Arista Networks, Inc.
Mobile: 703-627-6092
Support: 866-476-
On Thu, Jan 31, 2013 at 4:29
Highlights of the week
H stands for Havana
https://launchpad.net/%7Eopenstack/+poll/h-release-naming
The polls closed, *Havana* will be the code name for the OpenStack
release following Grizzly.
Contributing to OpenStack http://www.icchasethi.com/?p=18
Rackspace’s Iccha
Hi Paras,
If your goal is to delete namespaces, have you tried the
quantum-netns-cleanup utility?
If the quantum network 20554b0b-dc5f-48c5-87fa-47b90dc9242f and the
quantum router 39d5fa21-c604-4d3b-a37b-90457c9b11fe have not been
deleted, however, it's unlikely the namespaces will be deleted.
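If quantum-netns-cleanup doesn't help, a minimal sketch of forcing the deletion by hand, assuming a leftover process inside the namespace (typically dnsmasq) is what makes it busy; the namespace name is the one from this thread, and root is required:

```shell
NS=qdhcp-20554b0b-dc5f-48c5-87fa-47b90dc9242f
# kill any processes still running inside the namespace
ip netns pids "$NS" | xargs -r kill
# once nothing holds it, deletion should succeed
ip netns del "$NS"
```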
Hi Vish,
Yes, I did it: I assigned 45 IP addresses and then configured my HTTP load
balancer according to the incoming domains.
Thanks, it works for me.
Best Regards,
Umar
On Thu, Jan 31, 2013 at 8:37 AM, Vishvananda Ishaya
vishvana...@gmail.comwrote:
On Jan 30, 2013, at 11:35 AM, Umar Draz
Hi All,
while testing the latest code under devstack on a multi-node setup, I'm
having problems starting up VMs, with the following symptoms.
Any hints as to causes or possible fixes?
Thanks,
Woj.
controller:~/devstack$ nova show test1
Hi All,
I have 3 Tenant (admin, rebel, penguin). Also have 3 different users for
these Tenants
I have /25 network pool from my datacenter. I have created my default pool
using this name
nova-manage floating create --pool mypool --ip_range 73.63.93.128/25
Now the problem is I can only see this
Hello!
I've installed swift + keystone and have incomprehensible problem
First of all I get auth tokens
curl -d '{"auth": {"tenantName": "service",
"passwordCredentials": {"username": "swift", "password": "swiftpass"}}}' -H
"Content-type: application/json" http://host:5000/v2.0/tokens | python
-mjson.tool
This
Sounds like swift isn't listening on that port. What is the bind_port in your
proxy-server.conf?
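For comparison, the relevant lines of a proxy-server.conf usually look like this (8080 is the common default bind port, not necessarily what this deployment uses):

```
[DEFAULT]
bind_port = 8080
bind_ip = 0.0.0.0
```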
On Feb 1, 2013, at 12:53 PM, Andrey V. Romanchev wrote:
Hello!
I've installed swift + keystone and have incomprehensible problem
First of all I get auth tokens
curl -d '{"auth": {"tenantName":
I'm sorry- I didn't read that part about the proxy restart :) The proxy may
not log if it gets hung up in some middleware. What middleware do you have
running? You can try adding in some log messages into the middleware you have
running to find out where.
On Feb 1, 2013, at 1:37 PM, David
Hello Everyone,
I am following a guide to deploy OpenStack. It says that at the stage I am at, I
should be able to run nova flavor-list; however I am getting this error:
root@CloudController:/home/esarias# nova --os_username=admin
--os_password=admin --os_auth_url=http://localhost:35357/v2.0
Guys,
At my Cinder logs, I'm seeing this when I run /etc/init.d/cinder-volume
start:
2013-02-01 18:16:20 1075 AUDIT cinder.service [-] Starting cinder-volume
node (version 2012.2.1-LOCALBRANCH:LOCALREVISION)
2013-02-01 18:16:21 DEBUG cinder.utils
[req-1ebb9638-2300-456a-995e-382c96f6632d
BTW,
Just for the record, 1 cinder volume that I created yesterday just
DISAPPEARED!!!
lvdisplay shows nothing, cinder-volumes is empty...
Can you guys imagine the cinder deleting my client's persistent storage
without any command???
I'm glad that this is just a PoC.
Tks,
Thiago
On 1
Guys, forget about my previous message:
On 1 February 2013 19:25, Martinx - ジェームズ thiagocmarti...@gmail.com wrote:
BTW,
Just for the record, 1 cinder volume that I created yesterday just
DISAPPEARED!!!
lvdisplay shows nothing, cinder-volumes is empty...
Can you guys imagine the cinder
Hi Antonio,
That error is most likely due to the SRs not being set up correctly on
XenServer.
Could you check that you have an ext based SR as the pool default?
Use xe pool-param-get uuid=tabtab param-name=default-SR
Then check that against xe sr-list params=all
See
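Put together, that check is roughly the following (the pool UUID is a placeholder; using xe sr-param-get to read just the type field is an assumption, since the thread suggests eyeballing the full sr-list output instead):

```shell
# get the pool's default SR, then confirm its type is "ext"
DEFAULT_SR=$(xe pool-param-get uuid=<pool-uuid> param-name=default-SR)
xe sr-param-get uuid="$DEFAULT_SR" param-name=type
```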
What do you mean it isn't visible?
you should be able to do:
nova floating-ip-create mypool
as any user.
Vish
On Feb 1, 2013, at 10:29 AM, Umar Draz unix...@gmail.com wrote:
Hi All,
I have 3 Tenant (admin, rebel, penguin). Also have 3 different users for
these Tenants
I have /25
I suspect you are suffering from this recently fixed bug:
https://bugs.launchpad.net/nova/+bug/1103436
If you update your nova code and run everything you should be ok.
Vish
On Feb 1, 2013, at 10:20 AM, Wojciech Dec wdec.i...@gmail.com wrote:
Hi All,
while testing the latest code
Folks,
I'm working on building a pilot OpenStack cluster using the Basic
Installation Guide for Ubuntu 12.04/12.10 and I have a quick question
about Quantum networking configuration.
I have a 10.10.10.0/24 data network with interfaces configured for my
network node (10.10.10.1) and compute node
Hi Vish,
I always connect my Controller or Compute node with root user for nova
commands and here is the .bashrc of root user
export OS_NO_CACHE=1
export OS_TENANT_NAME=admin
export OS_USERNAME=admin
export OS_PASSWORD="plainJet b0j@n1ca"
export OS_AUTH_URL="http://172.168.1.2:5000/v2.0/"
export
Floating-ip-create is a user command that allocates a floating ip to a
tenant. It pulls it out of the pool so other tenants cannot use it.
Floating IPS are available for all projects. Any user can allocate an IP
and then associate it.
Vish
On Feb 1, 2013 7:35 PM, Umar Draz unix...@gmail.com
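The flow Vish describes, sketched as the commands a tenant user would run (the instance name and resulting IP here are placeholders; the pool name is the one from this thread):

```shell
# allocate an address out of the shared pool into this tenant
nova floating-ip-create mypool
# attach the allocated address to one of the tenant's instances
nova add-floating-ip my-instance 73.63.93.130
```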
So it is not possible to create a dedicated floating IP pool that is
shared by all tenants?
I have a pool of 128 IPs and different tenants; I don't want a tenant to hold an IP
even if it's not needed. I want a central pool, and every tenant should acquire
IP addresses from that pool.
Br.
Umar
On Sat, Feb 2,
Hi,
Is it possible to do migration in KVM by notifying other VMs to stop
communicating with the migrating VM ?
--
Apurva Shirish Patil
B.Tech(I.T.)
CoEP
___
Mailing list: https://launchpad.net/~openstack
Post to : openstack@lists.launchpad.net
Hello,
could you please elaborate on what exactly you want to do with the
migration of VMs?
You mean, 4 VMs are communicating with each other, and you want to tell the
other 3 VMs to stop communicating with the 1st VM while it's being migrated? Is it
like this?
Thanks,
Hitesh
On Sat, Feb 2, 2013 at 12:54
You probably want to paste your proxy-server.conf; that may make it easier for us
to help you.
Chmouel.
On Fri, Feb 1, 2013 at 8:49 PM, David Goetz david.go...@rackspace.comwrote:
I'm sorry- I didn't read that part about the proxy restart :) The proxy
may not log if it gets hung up in some
Title: precise_grizzly_keystone_trunk
General Information: BUILD FAILURE
Build URL: https://jenkins.qa.ubuntu.com/job/precise_grizzly_keystone_trunk/105/
Project: precise_grizzly_keystone_trunk
Date of build: Fri, 01 Feb 2013 04:31:09 -0500
Build duration: 1 min 31 sec
Build cause: Started by an SCM

Title: precise_grizzly_nova_trunk
General Information: BUILD FAILURE
Build URL: https://jenkins.qa.ubuntu.com/job/precise_grizzly_nova_trunk/588/
Project: precise_grizzly_nova_trunk
Date of build: Fri, 01 Feb 2013 14:36:31 -0500
Build duration: 10 min
Build cause: Started by user Chuck Short
Built at 20130201-1502. Build needed 00:00:00, 0k disc space. E: Package build dependencies not satisfied; skipping. ERROR:root:Error occurred during
at 20130201-1505. Build needed 00:00:00, 0k disc space. E: Package build dependencies not satisfied; skipping. ERROR:root:Error occurred during