Re: [Openstack] DHCP Replies

2013-08-21 Thread Édouard Thuleau
It's not a dnsmasq problem.
I think the problem is https://bugs.launchpad.net/neutron/+bug/1185916:
updates from the Neutron server to the DHCP agent get lost when the load is
too high.

Édouard.


On Thu, Aug 22, 2013 at 6:47 AM, Linus Nova  wrote:

>
> Hi,
>
> Sometimes dnsmasq (which serves the DHCP lease offers) does not function
> correctly.
>
> Try this:
>
> killall dnsmasq
>
> service quantum-dhcp restart
>
> Hard-reboot your VM and start tcpdump on the tap interface next to dnsmasq to
> check the traffic.
>
> Best.
>
> Linus
>
>
>
> On Wednesday, 21 August 2013, Mina Nagy Zaki wrote:
>
>> Hello,
>> I have a working network configuration: VMs have access to the external
>> network, and hosts have access to VMs. But DHCP replies are not making it
>> back into the VMs.
>>
>> tcpdump and iptables tracing show me that the requests make it through
>> just fine, but the replies don't make it out of the qdhcp-
>> namespace (they go out the tap interface there, but I'm not sure what
>> happens to them next).
>>
>> How should I go about debugging this?
>>
>> Thanks!
>> --
>> Mina Nagy Zaki
>>
>> ___
>> Mailing list:
>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack
>> Post to : openstack@lists.openstack.org
>> Unsubscribe :
>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack
>>
>
> ___
> Mailing list:
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack
> Post to : openstack@lists.openstack.org
> Unsubscribe :
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack
>
>
___
Mailing list: http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack
Post to : openstack@lists.openstack.org
Unsubscribe : http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack


Re: [Openstack] [swift] ERROR Insufficient Storage

2013-08-21 Thread pangj

I have found the issues from this thread,
https://answers.launchpad.net/swift/+question/161796

because my second disk is just a soft link at /srv/node/sdb1, not a
filesystem actually mounted under that path.


So I added this statement:

mount_check = false

to the storage nodes' config files.
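
For reference, that option goes in the [DEFAULT] section of
/etc/swift/account-server.conf, container-server.conf and object-server.conf
on each storage node (standard package paths assumed):

[DEFAULT]
mount_check = false

The stricter alternative would be to really mount the disk there instead of
disabling the check, e.g. mount /dev/sdb1 /srv/node/sdb1 (device name is an
example).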

Now it works fine:

$ swift -A http://172.17.6.32:8080/auth/v1.0 -U system:root -K testpass stat
   Account: AUTH_system
Containers: 0
   Objects: 0
 Bytes: 0
Content-Type: text/plain; charset=utf-8
X-Timestamp: 1377151983.03609
X-Put-Timestamp: 1377151983.03609


After adding the mount_check option I restarted all five storage services
with:


# swift-init all restart

But one of the servers got disconnected; I can't even ping it:
C:\Documents and Settings\Administrator>ping 172.17.6.24

Pinging 172.17.6.24 with 32 bytes of data:

Request timed out.
Request timed out.
Request timed out.
Request timed out.

Ping statistics for 172.17.6.24:
Packets: Sent = 4, Received = 0, Lost = 4 (100% loss),


Can you tell what happened? Thanks.



On 2013-8-22 13:57, pangj wrote:

It got a 503 error. Then I checked the syslog and found:

Aug 22 13:51:48 default proxy-server ERROR Insufficient Storage
172.17.6.22:6002/sdb1 (txn: tx78a964e974714cd7b5832-005215a6f4)
(client_ip: 172.17.6.32)
Aug 22 13:51:48 default proxy-server ERROR Insufficient Storage
172.17.6.25:6002/sdb1 (txn: tx78a964e974714cd7b5832-005215a6f4)
(client_ip: 172.17.6.32)
Aug 22 13:51:48 default proxy-server ERROR Insufficient Storage
172.17.6.21:6002/sdb1 (txn: tx78a964e974714cd7b5832-005215a6f4)
(client_ip: 172.17.6.32)



___
Mailing list: http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack
Post to : openstack@lists.openstack.org
Unsubscribe : http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack


Re: [Openstack] [swift] ERROR Insufficient Storage

2013-08-21 Thread Hugo
1) Are all disks mounted properly? The default Swift drive mount point is
/srv/node.
$> ls /srv/node

2) What does the ring look like now?
$> swift-ring-builder /etc/swift/object.builder
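
If the rings were only created but no devices ever added, that would also
explain it; the usual sequence per ring is roughly this (IP, port, device
and weight here are examples):

$> swift-ring-builder object.builder add z1-172.17.6.21:6000/sdb1 100
$> swift-ring-builder object.builder rebalance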



Sent from my iPhone

pangj wrote on 2013/8/21 at 10:57 PM:

> Hi,
> 
> After installing the Swift I run the test command:
> 
> $ curl -k -v -H 'X-Auth-Token: AUTH_tk02863583400d4141a522fd185432a5f3' 
> http://172.17.6.32:8080/v1/AUTH_system
> * About to connect() to 172.17.6.32 port 8080 (#0)
> *   Trying 172.17.6.32... connected
> > GET /v1/AUTH_system HTTP/1.1
> > User-Agent: curl/7.22.0 (x86_64-pc-linux-gnu) libcurl/7.22.0 OpenSSL/1.0.1 
> > zlib/1.2.3.4 libidn/1.23 librtmp/2.3
> > Host: 172.17.6.32:8080
> > Accept: */*
> > X-Auth-Token: AUTH_tk02863583400d4141a522fd185432a5f3
> >
> < HTTP/1.1 503 Internal Server Error
> < Content-Length: 118
> < Content-Type: text/html; charset=UTF-8
> < Date: Thu, 22 Aug 2013 05:51:50 GMT
> <
> * Connection #0 to host 172.17.6.32 left intact
> * Closing connection #0
> Service Unavailable: The server is currently unavailable.
> Please try again at a later time.
> 
> 
> It got a 503 error. Then I checked the syslog and found:
> 
> Aug 22 13:51:48 default proxy-server ERROR Insufficient Storage 
> 172.17.6.22:6002/sdb1 (txn: tx78a964e974714cd7b5832-005215a6f4) (client_ip: 
> 172.17.6.32)
> Aug 22 13:51:48 default proxy-server ERROR Insufficient Storage 
> 172.17.6.25:6002/sdb1 (txn: tx78a964e974714cd7b5832-005215a6f4) (client_ip: 
> 172.17.6.32)
> Aug 22 13:51:48 default proxy-server ERROR Insufficient Storage 
> 172.17.6.21:6002/sdb1 (txn: tx78a964e974714cd7b5832-005215a6f4) (client_ip: 
> 172.17.6.32)
> 
> 
> It says Insufficient Storage. All my five test storage nodes have 10GB of
> disk each. And I built the rings like this:
> 
> swift-ring-builder account.builder create 18 3 1
> swift-ring-builder container.builder create 18 3 1
> swift-ring-builder object.builder create 18 3 1
> 
> Can you tell me where I went wrong? Thanks.
> 
> ___
> Mailing list: http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack
> Post to : openstack@lists.openstack.org
> Unsubscribe : http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack

___
Mailing list: http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack
Post to : openstack@lists.openstack.org
Unsubscribe : http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack


Re: [Openstack] DHCP Replies

2013-08-21 Thread Linus Nova
Hi,

Sometimes dnsmasq (which serves the DHCP lease offers) does not function
correctly.

Try this:

killall dnsmasq

service quantum-dhcp restart

Hard-reboot your VM and start tcpdump on the tap interface next to dnsmasq to
check the traffic.
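
For example, something along these lines (the network UUID and tap name are
placeholders; take them from the ip netns / ip addr output on your node):

ip netns list
ip netns exec qdhcp-<network-uuid> ip addr
ip netns exec qdhcp-<network-uuid> tcpdump -n -i <tap-device> port 67 or port 68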

Best.

Linus



> On Wednesday, 21 August 2013, Mina Nagy Zaki wrote:
>
>> Hello,
>> I have a working network configuration: VMs have access to the external
>> network, and hosts have access to VMs. But DHCP replies are not making it
>> back into the VMs.
>>
>> tcpdump and iptables tracing show me that the requests make it through
>> just fine, but the replies don't make it out of the qdhcp-
>> namespace (they go out the tap interface there, but I'm not sure what
>> happens to them next).
>>
>> How should I go about debugging this?
>>
>> Thanks!
>> --
>> Mina Nagy Zaki
>>
>> ___
>> Mailing list:
>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack
>> Post to : openstack@lists.openstack.org
>> Unsubscribe :
>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack
>>
>
___
Mailing list: http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack
Post to : openstack@lists.openstack.org
Unsubscribe : http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack


[Openstack] [swift] ERROR Insufficient Storage

2013-08-21 Thread pangj

Hi,

After installing the Swift I run the test command:

$ curl -k -v -H 'X-Auth-Token: AUTH_tk02863583400d4141a522fd185432a5f3' 
http://172.17.6.32:8080/v1/AUTH_system

* About to connect() to 172.17.6.32 port 8080 (#0)
*   Trying 172.17.6.32... connected
> GET /v1/AUTH_system HTTP/1.1
> User-Agent: curl/7.22.0 (x86_64-pc-linux-gnu) libcurl/7.22.0 
OpenSSL/1.0.1 zlib/1.2.3.4 libidn/1.23 librtmp/2.3

> Host: 172.17.6.32:8080
> Accept: */*
> X-Auth-Token: AUTH_tk02863583400d4141a522fd185432a5f3
>
< HTTP/1.1 503 Internal Server Error
< Content-Length: 118
< Content-Type: text/html; charset=UTF-8
< Date: Thu, 22 Aug 2013 05:51:50 GMT
<
* Connection #0 to host 172.17.6.32 left intact
* Closing connection #0
Service Unavailable: The server is currently
unavailable. Please try again at a later time.



It got a 503 error. Then I checked the syslog and found:

Aug 22 13:51:48 default proxy-server ERROR Insufficient Storage 
172.17.6.22:6002/sdb1 (txn: tx78a964e974714cd7b5832-005215a6f4) 
(client_ip: 172.17.6.32)
Aug 22 13:51:48 default proxy-server ERROR Insufficient Storage 
172.17.6.25:6002/sdb1 (txn: tx78a964e974714cd7b5832-005215a6f4) 
(client_ip: 172.17.6.32)
Aug 22 13:51:48 default proxy-server ERROR Insufficient Storage 
172.17.6.21:6002/sdb1 (txn: tx78a964e974714cd7b5832-005215a6f4) 
(client_ip: 172.17.6.32)



It says Insufficient Storage. All my five test storage nodes have 10GB
of disk each. And I built the rings like this:


swift-ring-builder account.builder create 18 3 1
swift-ring-builder container.builder create 18 3 1
swift-ring-builder object.builder create 18 3 1

Can you tell me where I went wrong? Thanks.

___
Mailing list: http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack
Post to : openstack@lists.openstack.org
Unsubscribe : http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack


Re: [Openstack] oversubscribe node

2013-08-21 Thread Alex Glikson
Steve Heistand  wrote on 22/08/2013 01:45:08 AM:

> From: Steve Heistand 
> To: James R Penick , 
> Cc: "openstack@lists.openstack.org" 
> Date: 22/08/2013 02:10 AM
> Subject: Re: [Openstack] oversubscribe node
> 
> that would probably have been useful info yes :)
> 
> latest grizzly on ubuntu 12.04 and 
> 
> compute_scheduler_driver=nova.scheduler.simple.SimpleScheduler
> scheduler_default_filters=AggregateInstanceExtraSpecsFilter,AvailabilityZoneFilter,RamFilter,ComputeFilter

In order to use cpu_allocation_ratio and ram_allocation_ratio, you need
to use the FilterScheduler with the RamFilter and CoreFilter enabled.
Also, as already mentioned, these parameters need to be configured in the
nova.conf of the machine running the scheduler.
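
A minimal sketch of the relevant nova.conf lines on the scheduler host
(Grizzly option names assumed; adjust the filter list to your needs):

scheduler_driver=nova.scheduler.filter_scheduler.FilterScheduler
scheduler_default_filters=AggregateInstanceExtraSpecsFilter,AvailabilityZoneFilter,RamFilter,CoreFilter,ComputeFilter
cpu_allocation_ratio=1.0
ram_allocation_ratio=1.0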

Regards,
Alex

> 
> 
> On 08/21/2013 03:40 PM, James R Penick wrote:
> > Which scheduler are you using? Also, which version of Openstack?
> > 
> > :)=
> > 
> > 
> > 
> > 
> > 
> > 
> > On 8/21/13 2:56 PM, "Steve Heistand"  wrote:
> > 
> >> Hi folks,
> >>
> >> so I'm a little confused: on my compute nodes I have the allocation ratio
> >> set:
> >>
> >> root@node004:~# cat /etc/nova/nova-compute.conf
> >> [DEFAULT]
> >> ..
> >> cpu_allocation_ratio=1.0
> >> ram_allocation_ratio=1.0
> >> ...
> >>
> >> but I got into a state where the node is oversubscribed.  From nova
> >> hypervisor-show node004:
> >>
> >> node004
> >> ..
> >> vcpus_used  36
> >> memory_mb_used  72837
> >> memory_mb       64379
> >> vcpus   32
> >> ..
> >>
> >> I have zones/aggregates set up, and of the instances, one was started in
> >> just 'launch it anywhere' mode and another was told to start on
> >> a zone that is this node.
> >>
> >> In looking back I would have expected the zone-specified instance to be
> >> rejected.
> >>
> >> is this to be expected or a bug?
> >>
> >> thanks
> >>
> >> s
> >>
> >>
> >> -- 
> >> 

> >> Steve Heistand   NASA Ames Research Center
> >> SciCon Group Mail Stop 258-6
> >> steve.heist...@nasa.gov  (650) 604-4369  Moffett Field, CA 94035-1000
> >> 

> >> "Any opinions expressed are those of our alien overlords, not my 
own."
> >>
> >> # For Remedy#
> >> #Action: Resolve#
> >> #Resolution: Resolved   #
> >> #Reason: No Further Action Required #
> >> #Tier1: User Code   #
> >> #Tier2: Other   #
> >> #Tier3: Assistance  #
> >> #Notification: None #
> >>
> >>
> > 
> 
> -- 
> 
>  Steve Heistand   NASA Ames Research Center
>  SciCon Group Mail Stop 258-6
>  steve.heist...@nasa.gov  (650) 604-4369  Moffett Field, CA 94035-1000
> 
>  "Any opinions expressed are those of our alien overlords, not my own."
> 
> # For Remedy#
> #Action: Resolve#
> #Resolution: Resolved   #
> #Reason: No Further Action Required #
> #Tier1: User Code   #
> #Tier2: Other   #
> #Tier3: Assistance  #
> #Notification: None #
> 
> 
> [attachment "signature.asc" deleted by Alex Glikson/Haifa/IBM] 
> ___
> Mailing list: 
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack
> Post to : openstack@lists.openstack.org
> Unsubscribe : 
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack
___
Mailing list: http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack
Post to : openstack@lists.openstack.org
Unsubscribe : http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack


Re: [Openstack] DHCP Replies

2013-08-21 Thread Linus Nova
Hi,

Sometimes dnsmasq (which serves the DHCP lease offers) does not function
correctly.

Try this:

killall dnsmasq

service quantum-dhcp restart

Hard-reboot your VM and start tcpdump on the tap interface next to dnsmasq to
check the traffic.

Best.

Linus



On Wednesday, 21 August 2013, Mina Nagy Zaki wrote:

> Hello,
> I have a working network configuration: VMs have access to the external
> network, and hosts have access to VMs. But DHCP replies are not making it
> back into the VMs.
>
> tcpdump and iptables tracing show me that the requests make it through
> just fine, but the replies don't make it out of the qdhcp-
> namespace (they go out the tap interface there, but I'm not sure what
> happens to them next).
>
> How should I go about debugging this?
>
> Thanks!
> --
> Mina Nagy Zaki
>
> ___
> Mailing list:
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack
> Post to : openstack@lists.openstack.org 
> Unsubscribe :
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack
>
___
Mailing list: http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack
Post to : openstack@lists.openstack.org
Unsubscribe : http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack


Re: [Openstack] object-oriented design in nova--room for improvement?

2013-08-21 Thread Joshua Harlow
There is always room for improvement I hope ;)

+openstack-dev (I think that's where you wanted this to go).

A question: are you thinking about organizing the 'metadata' associated with
resources?

If so, it might be interesting to see if there could be a grand unification
around something like 'ResourceTracker' & 'Stats' that exposes 'metadata'
(different types) via an API that all the other classes could use? Is that
in line with what you are thinking? Sort of like a resource + metadata
'database' that everyone uses (and accesses and updates via a single set
of APIs).

I might have misread your idea though.

On 8/21/13 7:55 PM, "Chris Friesen"  wrote:

>
>Hi,
>
>I'm pretty new to OpenStack, so maybe I'm still not grokking the overall
>design.  Feel free to tell me I'm totally full of it. :)
>
>Anyways, I've been poking around in the code with an eye towards maybe
>extending the set of information exported by the compute nodes for use
>in scheduler filters.
>
>I started putting together a list of areas that would need to be
>updated, and it seems like there are quite a few separate chunks of code
>all over the codebase that are aware of the details of what is exported:
>
>
>LibvirtDriver class
>Claim class
>ComputeNode class
>compute_node_statistics() in sqlalchemy/api.py
>ServiceCommands class (to show host resources)
>ResourceTracker class (to track used/free resources)
>Stats class
>FakeDriver class
>HostState class in libvirt/driver.py
>json/xml stuff in nova/doc/api_samples
>HostController class
>make_hypervisor() in compute/plugins/v3/hypervisors.py
>HypervisorStatisticsTemplate API class
>HypervisorsController API class
>HostController API class
>SchedulerManager class
>
>
>I've probably missed some, the above was generated looking for cases of
>"vcpus_used".
>
>Maybe I'm dreaming, but it seems like there should be a way to do this
>more efficiently rather than manually copying knowledge into different
>parts of the code.
>
>Thoughts?
>
>Chris
>
>___
>Mailing list: 
>http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack
>Post to : openstack@lists.openstack.org
>Unsubscribe : 
>http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack


___
Mailing list: http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack
Post to : openstack@lists.openstack.org
Unsubscribe : http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack


[Openstack] object-oriented design in nova--room for improvement?

2013-08-21 Thread Chris Friesen


Hi,

I'm pretty new to OpenStack, so maybe I'm still not grokking the overall 
design.  Feel free to tell me I'm totally full of it. :)


Anyways, I've been poking around in the code with an eye towards maybe 
extending the set of information exported by the compute nodes for use 
in scheduler filters.


I started putting together a list of areas that would need to be 
updated, and it seems like there are quite a few separate chunks of code 
all over the codebase that are aware of the details of what is exported:



LibvirtDriver class
Claim class
ComputeNode class
compute_node_statistics() in sqlalchemy/api.py
ServiceCommands class (to show host resources)
ResourceTracker class (to track used/free resources)
Stats class
FakeDriver class
HostState class in libvirt/driver.py
json/xml stuff in nova/doc/api_samples
HostController class
make_hypervisor() in compute/plugins/v3/hypervisors.py
HypervisorStatisticsTemplate API class
HypervisorsController API class
HostController API class
SchedulerManager class


I've probably missed some, the above was generated looking for cases of 
"vcpus_used".


Maybe I'm dreaming, but it seems like there should be a way to do this 
more efficiently rather than manually copying knowledge into different 
parts of the code.


Thoughts?

Chris

___
Mailing list: http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack
Post to : openstack@lists.openstack.org
Unsubscribe : http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack


[Openstack] [trove] - Discussion on Clustering and Replication API

2013-08-21 Thread McReynolds, Auston
Blueprint:

https://wiki.openstack.org/wiki/Trove-Replication-And-Clustering-API

Questions:

* Today, /instance/{instance_id}/action is the single endpoint for all
actions on an instance (where the action is parsed from the payload).
I see in the newly proposed /clusters api that there's
/clusters/{cluster_id}/restart, etc. Is this a purposeful move from
"field of a resource" to sub-resources? If so, is there a plan to
retrofit the /instance api?

* For "Promote a Slave Node to Master", where is the request
indicating the promote action (explicitly or implicitly)? I don't see
it in the uri or the payload.

* "Create Replication Set" is a POST to /clusters, but "Add Node" is a
PUT to /clusters/{cluster_id}/nodes. This seems inconsistent given
both are essentially doing the same thing: adding nodes to a cluster.
What's the reasoning behind the divergence?

* What is the expected result of a resize action request on
/instance/{instance_id} for an instance that's a part of a cluster
(meaning the request could have alternatively been executed against
/cluster/{cluster_id}/nodes/{node_id})? Will it return an error?
Redirect the request to the /clusters internals?

Discussion:

Although it's common and often advised that the same flavor be used
for every node in a cluster, there are many situations in which you'd
purposefully buck the tradition. One example would be choosing a
beefier flavor for a slave to support ad-hoc queries from a tertiary
web application (analytics, monitoring, etc.).

Therefore,

{
  "cluster":{
    "nodes":3,
    "flavorRef":"https://service/v1.0/1234/flavors/1",
    "name":"replication_set_1",
    "volume":{
      "size":2
    },
    "clusterConfig":{
      "type":"https://service/v1.0/1234/clustertypes/1234"
    }
  }
}

is not quite expressive enough. One "out" is that you could force the
user to resize the slave(s) after the cluster has been completely
provisioned, but that seems a bit egregious.

Something like the following seems to fit the bill:

{
  "cluster":{
    "clusterConfig":{
      "type":"https://service/v1.0/1234/clustertypes/1234"
    },
    "nodes":[
      {
        "flavorRef":"https://service/v1.0/1234/flavors/1",
        "volume":{
          "size":2
        }
      },
      {
        "flavorRef":"https://service/v1.0/1234/flavors/3",
        "volume":{
          "size":2
        }
      }
    ]
  }
}

but, which node is arbitrarily elected the master if the clusterConfig
is set to MySQL Master/Slave? When region awareness is supported in
Trove, how would you pin a specifically configured node to its
earmarked region/datacenter? What will the names of the nodes of the
cluster be?

{
  "cluster":{
    "clusterConfig":{
      "type":"https://service/v1.0/1234/clustertypes/1234"
    },
    "nodes":[
      {
        "name":"usecase-master",
        "flavorRef":"https://service/v1.0/1234/flavors/1",
        "volume":{
          "size":2
        },
        "region":"us-west",
        "nodeConfig":{
          "type":"master"
        }
      },
      {
        "name":"usecase-slave-us-east",
        "flavorRef":"https://service/v1.0/1234/flavors/3",
        "volume":{
          "size":2
        },
        "region":"us-east",
        "nodeConfig":{
          "type":"slave"
        }
      },
      {
        "name":"usecase-slave-eu-de",
        "flavorRef":"https://service/v1.0/1234/flavors/3",
        "volume":{
          "size":2
        },
        "region":"eu-de",
        "nodeConfig":{
          "type":"slave"
        }
      }
    ]
  }
}

This works decently enough, but it assumes a simple master/slave
architecture. What about MySQL multi-master with replication?
See /doc/refman/5.5/en/mysql-cluster-replication-multi-master.html.
Now, a 'slaveof' or 'primary'/'parent' field is necessary to be more
specific (either that, or nesting of JSON to indicate relationships).

From above, it's clear that a "nodeConfig" of sorts is needed to
indicate whether the node is a slave or master, and to whom. Thus far,
a RDBMS has been assumed, but consider other offerings in the space:
How will you designate if the node is a seed in the case of Cassandra?
The endpoint snitch for a Cassandra node? The cluster name for
Cassandra or the replica-set for Mongo? Whether a slave should be
daisy-chained to another slave or attached to directly to master in
the case of Redis?

Preventing service type specifics from bleeding into what should be a
generic (as possible) schema is paramount. Unfortunately, "nodeConfig"
as you can see starts to become an amalgamation of fields that are
only applicable in certain situations, making documentation, codegen
for clients, and ease of use, a bit challenging. Fast-forward to when
editable parameter groups become a priority (a.k.a. being able to set
name-value-pairs in the service type's CONF). If users/customers
demand the ability to set things like buffer-pool-size while
provisioning, these fields would likely be placed in "nodeConfig",
making the situation worse.

Here's an attempt with a slightly different approach:
https://gist.github

Re: [Openstack] oversubscribe node

2013-08-21 Thread Gangur, Hrushikesh (HP Converged Cloud - R&D - Sunnyvale)
The Filter Scheduler (nova.scheduler.filter_scheduler.FilterScheduler) is the 
default scheduler for scheduling virtual machine instances. It supports 
filtering and weighting to make informed decisions on where a new instance 
should be created. This Scheduler can only be used for scheduling compute 
requests, not volume requests; i.e. it can only be used with the
compute_scheduler_driver configuration option.

--



Here is how it picks a node (in this exact order):

1. Check the disk space - in your case it is a tiny flavor, so no size check
is done.

2. Check the memory - it allows you to provision up to 1.5x (default) of
physical memory. For example, if you have 72 GB physical memory, and each
tiny flavor takes 512 MB (0.5 GB), the number of instances that can be
provisioned must not be more than (72*1.5)/0.5 = 216 instances.

3. Check the CPU - it does not allow you to provision vCPUs beyond 16x
(default) of the physical core count. For example, if you have 24 cores, the
scheduler will not consider a host that already has more than 16x24 = 384
vCPUs. In the case of the tiny flavor, that maps to 384 instances (1 vCPU each).
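
In nova.conf terms, those checks correspond to these defaults (set on the
scheduler host; the values shown are the defaults mentioned above):

ram_allocation_ratio=1.5
cpu_allocation_ratio=16.0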



For further reading: 
http://docs.openstack.org/developer/nova/devref/filter_scheduler.html


From: Diego Parrilla Santamaría [mailto:diego.parrilla.santama...@gmail.com]
Sent: Wednesday, August 21, 2013 3:43 PM
To: Steve Heistand
Cc: openstack@lists.openstack.org
Subject: Re: [Openstack] oversubscribe node

Try to put those options in the nova.conf file that the nova-scheduler process 
reads, not in the nova-compute servers.

Cheers
Diego

 --
Diego Parrilla
CEO
www.stackops.com | diego.parri...@stackops.com | +34 649 94 43 29 | skype:diegoparrilla




On Wed, Aug 21, 2013 at 11:56 PM, Steve Heistand <steve.heist...@nasa.gov> wrote:
Hi folks,

so I'm a little confused: on my compute nodes I have the allocation ratio set:

root@node004:~# cat /etc/nova/nova-compute.conf
[DEFAULT]
..
cpu_allocation_ratio=1.0
ram_allocation_ratio=1.0
...

but I got into a state where the node is oversubscribed.  From nova 
hypervisor-show node004:

node004
..
vcpus_used  36
memory_mb_used  72837
memory_mb       64379
vcpus   32
..

I have zones/aggregates set up and of the instances, one was started in just
'launch it anywhere' mode and another was told to start on
a zone that is this node.

In looking back I would have expected the zone-specified instance to be rejected.

is this to be expected or a bug?

thanks

s


--

 Steve Heistand   NASA Ames Research Center
 SciCon Group Mail Stop 258-6
 steve.heist...@nasa.gov  (650) 604-4369  Moffett Field, CA 94035-1000

 "Any opinions expressed are those of our alien overlords, not my own."

# For Remedy#
#Action: Resolve#
#Resolution: Resolved   #
#Reason: No Further Action Required #
#Tier1: User Code   #
#Tier2: Other   #
#Tier3: Assistance  #
#Notification: None #



___
Mailing list: http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack
Post to : 
openstack@lists.openstack.org
Unsubscribe : http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack

___
Mailing list: http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack
Post to : openstack@lists.openstack.org
Unsubscribe : http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack


Re: [Openstack] oversubscribe node

2013-08-21 Thread James R Penick
Which scheduler are you using? Also, which version of Openstack?

:)=






On 8/21/13 2:56 PM, "Steve Heistand"  wrote:

>Hi folks,
>
>so I'm a little confused: on my compute nodes I have the allocation ratio
>set:
>
>root@node004:~# cat /etc/nova/nova-compute.conf
>[DEFAULT]
>..
>cpu_allocation_ratio=1.0
>ram_allocation_ratio=1.0
>...
>
>but I got into a state where the node is oversubscribed.  From nova
>hypervisor-show node004:
>
>node004
>..
>vcpus_used  36
>memory_mb_used  72837
>memory_mb       64379
>vcpus   32
>..
>
>I have zones/aggregates set up and of the instances, one was started in
>just 'launch it anywhere' mode and another was told to start on
>a zone that is this node.
>
>In looking back I would have expected the zone-specified instance to be
>rejected.
>
>is this to be expected or a bug?
>
>thanks
>
>s
>
>
>-- 
>
> Steve Heistand   NASA Ames Research Center
> SciCon Group Mail Stop 258-6
> steve.heist...@nasa.gov  (650) 604-4369  Moffett Field, CA 94035-1000
>
> "Any opinions expressed are those of our alien overlords, not my own."
>
># For Remedy#
>#Action: Resolve#
>#Resolution: Resolved   #
>#Reason: No Further Action Required #
>#Tier1: User Code   #
>#Tier2: Other   #
>#Tier3: Assistance  #
>#Notification: None #
>
>


___
Mailing list: http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack
Post to : openstack@lists.openstack.org
Unsubscribe : http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack


Re: [Openstack] oversubscribe node

2013-08-21 Thread Steve Heistand
that would probably have been useful info yes :)

latest grizzly on ubuntu 12.04 and 

compute_scheduler_driver=nova.scheduler.simple.SimpleScheduler
scheduler_default_filters=AggregateInstanceExtraSpecsFilter,AvailabilityZoneFilter,RamFilter,ComputeFilter


On 08/21/2013 03:40 PM, James R Penick wrote:
> Which scheduler are you using? Also, which version of Openstack?
> 
> :)=
> 
> 
> 
> 
> 
> 
> On 8/21/13 2:56 PM, "Steve Heistand"  wrote:
> 
>> Hi folks,
>>
>> so I'm a little confused: on my compute nodes I have the allocation ratio
>> set:
>>
>> root@node004:~# cat /etc/nova/nova-compute.conf
>> [DEFAULT]
>> ..
>> cpu_allocation_ratio=1.0
>> ram_allocation_ratio=1.0
>> ...
>>
>> but I got into a state where the node is oversubscribed.  From nova
>> hypervisor-show node004:
>>
>> node004
>> ..
>> vcpus_used  36
>> memory_mb_used  72837
>> memory_mb       64379
>> vcpus   32
>> ..
>>
>> I have zones/aggregates set up and of the instances, one was started in
>> just 'launch it anywhere' mode and another was told to start on
>> a zone that is this node.
>>
>> In looking back I would have expected the zone-specified instance to be
>> rejected.
>>
>> is this to be expected or a bug?
>>
>> thanks
>>
>> s
>>
>>
>> -- 
>> 
>> Steve Heistand   NASA Ames Research Center
>> SciCon Group Mail Stop 258-6
>> steve.heist...@nasa.gov  (650) 604-4369  Moffett Field, CA 94035-1000
>> 
>> "Any opinions expressed are those of our alien overlords, not my own."
>>
>> # For Remedy#
>> #Action: Resolve#
>> #Resolution: Resolved   #
>> #Reason: No Further Action Required #
>> #Tier1: User Code   #
>> #Tier2: Other   #
>> #Tier3: Assistance  #
>> #Notification: None #
>>
>>
> 

-- 

 Steve Heistand   NASA Ames Research Center
 SciCon Group Mail Stop 258-6
 steve.heist...@nasa.gov  (650) 604-4369  Moffett Field, CA 94035-1000

 "Any opinions expressed are those of our alien overlords, not my own."

# For Remedy#
#Action: Resolve#
#Resolution: Resolved   #
#Reason: No Further Action Required #
#Tier1: User Code   #
#Tier2: Other   #
#Tier3: Assistance  #
#Notification: None #




___
Mailing list: http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack
Post to : openstack@lists.openstack.org
Unsubscribe : http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack


Re: [Openstack] oversubscribe node

2013-08-21 Thread Steve Heistand
Does that imply that you can't have different allocation values for different
physical nodes?
That seems a little wrong if that's the case.

steve


On 08/21/2013 03:42 PM, Diego Parrilla Santamaría wrote:
> Try to put those options in the nova.conf file that the nova-scheduler
> process reads, not in the nova-compute servers.
> 
> Cheers
> Diego
> 
>  --
> Diego Parrilla
> CEO
> www.stackops.com | diego.parri...@stackops.com | +34 649 94 43 29 |
> skype:diegoparrilla
> 
> 
> 
> On Wed, Aug 21, 2013 at 11:56 PM, Steve Heistand 
> wrote:
> 
>> Hi folks,
>>
>> so I'm a little confused: on my compute nodes I have the allocation ratio
>> set:
>>
>> root@node004:~# cat /etc/nova/nova-compute.conf
>> [DEFAULT]
>> ..
>> cpu_allocation_ratio=1.0
>> ram_allocation_ratio=1.0
>> ...
>>
>> but I got into a state where the node is oversubscribed.  From nova
>> hypervisor-show node004:
>>
>> node004
>> ..
>> vcpus_used  36
>> memory_mb_used  72837
>> memory_mb       64379
>> vcpus   32
>> ..
>>
>> I have zones/aggregates set up and of the instances, one was started in
>> just 'launch it anywhere' mode and another was told to start on
>> a zone that is this node.
>>
>> In looking back I would have expected the zone-specified instance to be
>> rejected.
>>
>> is this to be expected or a bug?
>>
>> thanks
>>
>> s
>>
>>
>> --
>> 
>>  Steve Heistand   NASA Ames Research Center
>>  SciCon Group Mail Stop 258-6
>>  steve.heist...@nasa.gov  (650) 604-4369  Moffett Field, CA 94035-1000
>> 
>>  "Any opinions expressed are those of our alien overlords, not my own."
>>
>> # For Remedy#
>> #Action: Resolve#
>> #Resolution: Resolved   #
>> #Reason: No Further Action Required #
>> #Tier1: User Code   #
>> #Tier2: Other   #
>> #Tier3: Assistance  #
>> #Notification: None #
>>
>>
>>
>> ___
>> Mailing list:
>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack
>> Post to : openstack@lists.openstack.org
>> Unsubscribe :
>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack
>>
>>
> 

-- 

 Steve Heistand   NASA Ames Research Center
 SciCon Group Mail Stop 258-6
 steve.heist...@nasa.gov  (650) 604-4369  Moffett Field, CA 94035-1000

 "Any opinions expressed are those of our alien overlords, not my own."

# For Remedy#
#Action: Resolve#
#Resolution: Resolved   #
#Reason: No Further Action Required #
#Tier1: User Code   #
#Tier2: Other   #
#Tier3: Assistance  #
#Notification: None #




___
Mailing list: http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack
Post to : openstack@lists.openstack.org
Unsubscribe : http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack


Re: [Openstack] [cinder] CreateVolumeFromSpecTask._create_from_image

2013-08-21 Thread Joshua Harlow
Is there any way for the volume driver to have some kind of mechanism to say
'export volume before' instead of 'after'?

That would be one way to handle this.

From: Victor Rodionov <victor.rodio...@nexenta.com>
Date: Tuesday, August 20, 2013 11:31 AM
To: openstack <openstack@lists.openstack.org>
Subject: [Openstack] [cinder] CreateVolumeFromSpecTask._create_from_image

Hello,

It seems there is an error in CreateVolumeFromSpecTask when creating a volume
from an image (the _create_from_image method):

if not cloned:
    # TODO(harlowja): what needs to be rolled back in the clone if this
    # volume create fails?? Likely this should be a subflow or broken
    # out task in the future. That will bring up the question of how
    # do we make said subflow/task which is only triggered in the
    # clone image 'path' resumable and revertable in the correct
    # manner.
    #
    # Create the volume and then download the image onto the volume.
    model_update = self.driver.create_volume(volume_ref)
    updates = dict(model_update or dict(), status='downloading')
    try:
        volume_ref = self.db.volume_update(context,
                                           volume_ref['id'], updates)
    except exception.CinderException:
        LOG.exception(_("Failed updating volume %(volume_id)s with "
                        "%(updates)s") %
                      {'volume_id': volume_ref['id'],
                       'updates': updates})
    self._copy_image_to_volume(context, volume_ref,
                               image_id, image_location, image_service)
    make_bootable = True

As you can see, after the volume is created this task calls the driver's
_copy_image_to_volume method. The problem is that for some iSCSI drivers this
operation may require the volume to be exported before the image is copied to
it.

One solution could be to create the export after the volume is created, and
then remove the export once the image has been copied.

Thanks,
Victor Rodionov
___
Mailing list: http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack
Post to : openstack@lists.openstack.org
Unsubscribe : http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack


Re: [Openstack] oversubscribe node

2013-08-21 Thread Diego Parrilla Santamaría
Try to put those options in the nova.conf file that the nova-scheduler
process reads, not in the nova-compute servers.

Cheers
Diego

 --
Diego Parrilla
CEO
www.stackops.com | diego.parri...@stackops.com | +34 649 94 43 29 |
skype:diegoparrilla



On Wed, Aug 21, 2013 at 11:56 PM, Steve Heistand wrote:

> Hi folks,
>
> so I'm a little confused: on my compute nodes I have the allocation ratio
> set:
>
> root@node004:~# cat /etc/nova/nova-compute.conf
> [DEFAULT]
> ..
> cpu_allocation_ratio=1.0
> ram_allocation_ratio=1.0
> ...
>
> but I got into a state where the node is oversubscribed.  From nova
> hypervisor-show node004:
>
> node004
> ..
> vcpus_used  36
> memory_mb_used  72837
> memory_mb       64379
> vcpus   32
> ..
>
> I have zones/aggregates set up and of the instances, one was started in
> just 'launch it anywhere' mode and another was told to start on
> a zone that is this node.
>
> In looking back I would have expected the zone-specified instance to be
> rejected.
>
> is this to be expected or a bug?
>
> thanks
>
> s
>
>
> --
> 
>  Steve Heistand   NASA Ames Research Center
>  SciCon Group Mail Stop 258-6
>  steve.heist...@nasa.gov  (650) 604-4369  Moffett Field, CA 94035-1000
> 
>  "Any opinions expressed are those of our alien overlords, not my own."
>
> # For Remedy#
> #Action: Resolve#
> #Resolution: Resolved   #
> #Reason: No Further Action Required #
> #Tier1: User Code   #
> #Tier2: Other   #
> #Tier3: Assistance  #
> #Notification: None #
>
>
>
> ___
> Mailing list:
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack
> Post to : openstack@lists.openstack.org
> Unsubscribe :
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack
>
>
___
Mailing list: http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack
Post to : openstack@lists.openstack.org
Unsubscribe : http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack


[Openstack] oversubscribe node

2013-08-21 Thread Steve Heistand
Hi folks,

so I'm a little confused: on my compute nodes I have the allocation ratio set:

root@node004:~# cat /etc/nova/nova-compute.conf 
[DEFAULT]
..
cpu_allocation_ratio=1.0
ram_allocation_ratio=1.0 
...

but I got into a state where the node is oversubscribed.  From nova 
hypervisor-show node004:

node004
..
vcpus_used  36
memory_mb_used  72837
memory_mb       64379
vcpus   32  
..

I have zones/aggregates set up and of the instances, one was started in just
'launch it anywhere' mode and another was told to start on
a zone that is this node.

In looking back I would have expected the zone-specified instance to be rejected.

is this to be expected or a bug?

thanks

s


-- 

 Steve Heistand   NASA Ames Research Center
 SciCon Group Mail Stop 258-6
 steve.heist...@nasa.gov  (650) 604-4369  Moffett Field, CA 94035-1000

 "Any opinions expressed are those of our alien overlords, not my own."

# For Remedy#
#Action: Resolve#
#Resolution: Resolved   #
#Reason: No Further Action Required #
#Tier1: User Code   #
#Tier2: Other   #
#Tier3: Assistance  #
#Notification: None #




___
Mailing list: http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack
Post to : openstack@lists.openstack.org
Unsubscribe : http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack


Re: [Openstack] problem in creating a VM

2013-08-21 Thread Aaron Rosen
Looks like the request to glance timed out. Can you do glance image-list to
see if that is working?

Aaron


On Wed, Aug 21, 2013 at 6:16 AM, Zhengguang Ou wrote:

> Hi all,
> I have installed Grizzly on Ubuntu 13.04 in a VMware virtual machine. When
> I create a VM, I have a problem.
> nova-compute.log:
>
> 2013-08-21 05:21:03.592 17518 TRACE nova.compute.manager [instance:
> 24f9d999-b53f-47c9-adab-716779b0edf4] Traceback (most recent call last):
>
> 2013-08-21 05:21:03.592 17518 TRACE nova.compute.manager [instance:
> 24f9d999-b53f-47c9-adab-716779b0edf4]   File
> "/usr/lib/python2.7/dist-packages/nova/compute/manager.py", line 1103, in
> _spawn
>
> 2013-08-21 05:21:03.592 17518 TRACE nova.compute.manager [instance:
> 24f9d999-b53f-47c9-adab-716779b0edf4] block_device_info)
>
> 2013-08-21 05:21:03.592 17518 TRACE nova.compute.manager [instance:
> 24f9d999-b53f-47c9-adab-716779b0edf4]   File
> "/usr/lib/python2.7/dist-packages/nova/virt/libvirt/driver.py", line 1520,
> in spawn
>
> 2013-08-21 05:21:03.592 17518 TRACE nova.compute.manager [instance:
> 24f9d999-b53f-47c9-adab-716779b0edf4] admin_pass=admin_password)
>
> 2013-08-21 05:21:03.592 17518 TRACE nova.compute.manager [instance:
> 24f9d999-b53f-47c9-adab-716779b0edf4]   File
> "/usr/lib/python2.7/dist-packages/nova/virt/libvirt/driver.py", line 1806,
> in _create_image
>
> 2013-08-21 05:21:03.592 17518 TRACE nova.compute.manager [instance:
> 24f9d999-b53f-47c9-adab-716779b0edf4] project_id=instance['project_id'])
>
> 2013-08-21 05:21:03.592 17518 TRACE nova.compute.manager [instance:
> 24f9d999-b53f-47c9-adab-716779b0edf4]   File
> "/usr/lib/python2.7/dist-packages/nova/virt/libvirt/imagebackend.py", line
> 158, in cache
>
> 2013-08-21 05:21:03.592 17518 TRACE nova.compute.manager [instance:
> 24f9d999-b53f-47c9-adab-716779b0edf4] *args, **kwargs)
>
> 2013-08-21 05:21:03.592 17518 TRACE nova.compute.manager [instance:
> 24f9d999-b53f-47c9-adab-716779b0edf4]   File
> "/usr/lib/python2.7/dist-packages/nova/virt/libvirt/imagebackend.py", line
> 258, in create_image
>
> 2013-08-21 05:21:03.592 17518 TRACE nova.compute.manager [instance:
> 24f9d999-b53f-47c9-adab-716779b0edf4] prepare_template(target=base,
> *args, **kwargs)
>
> 2013-08-21 05:21:03.592 17518 TRACE nova.compute.manager [instance:
> 24f9d999-b53f-47c9-adab-716779b0edf4]   File
> "/usr/lib/python2.7/dist-packages/nova/openstack/common/lockutils.py", line
> 228, in inner
>
> 2013-08-21 05:21:03.592 17518 TRACE nova.compute.manager [instance:
> 24f9d999-b53f-47c9-adab-716779b0edf4] retval = f(*args, **kwargs)
>
> 2013-08-21 05:21:03.592 17518 TRACE nova.compute.manager [instance:
> 24f9d999-b53f-47c9-adab-716779b0edf4]   File
> "/usr/lib/python2.7/dist-packages/nova/virt/libvirt/imagebackend.py", line
> 146, in call_if_not_exists
>
> 2013-08-21 05:21:03.592 17518 TRACE nova.compute.manager [instance:
> 24f9d999-b53f-47c9-adab-716779b0edf4] fetch_func(target=target, *args,
> **kwargs)
>
> 2013-08-21 05:21:03.592 17518 TRACE nova.compute.manager [instance:
> 24f9d999-b53f-47c9-adab-716779b0edf4]   File
> "/usr/lib/python2.7/dist-packages/nova/virt/libvirt/utils.py", line 593, in
> fetch_image
>
> 2013-08-21 05:21:03.592 17518 TRACE nova.compute.manager [instance:
> 24f9d999-b53f-47c9-adab-716779b0edf4] images.fetch_to_raw(context,
> image_id, target, user_id, project_id)
>
> 2013-08-21 05:21:03.592 17518 TRACE nova.compute.manager [instance:
> 24f9d999-b53f-47c9-adab-716779b0edf4]   File
> "/usr/lib/python2.7/dist-packages/nova/virt/images.py", line 207, in
> fetch_to_raw
>
> 2013-08-21 05:21:03.592 17518 TRACE nova.compute.manager [instance:
> 24f9d999-b53f-47c9-adab-716779b0edf4] fetch(context, image_href,
> path_tmp, user_id, project_id)
>
> 2013-08-21 05:21:03.592 17518 TRACE nova.compute.manager [instance:
> 24f9d999-b53f-47c9-adab-716779b0edf4]   File
> "/usr/lib/python2.7/dist-packages/nova/virt/images.py", line 202, in fetch
>
> 2013-08-21 05:21:03.592 17518 TRACE nova.compute.manager [instance:
> 24f9d999-b53f-47c9-adab-716779b0edf4] image_service.download(context,
> image_id, image_file)
>
> 2013-08-21 05:21:03.592 17518 TRACE nova.compute.manager [instance:
> 24f9d999-b53f-47c9-adab-716779b0edf4]   File
> "/usr/lib/python2.7/dist-packages/nova/image/glance.py", line 278, in
> download
>
> 2013-08-21 05:21:03.592 17518 TRACE nova.compute.manager [instance:
> 24f9d999-b53f-47c9-adab-716779b0edf4]
> _reraise_translated_image_exception(image_id)
>
> 2013-08-21 05:21:03.592 17518 TRACE nova.compute.manager [instance:
> 24f9d999-b53f-47c9-adab-716779b0edf4]   File
> "/usr/lib/python2.7/dist-packages/nova/image/glance.py", line 276, in
> download
>
> 2013-08-21 05:21:03.592 17518 TRACE nova.compute.manager [instance:
> 24f9d999-b53f-47c9-adab-716779b0edf4] image_chunks =
> self._client.call(context, 1, 'data', image_id)
>
> 2013-08-21 05:21:03.592 17518 TRACE nova.compute.manager [instance:
> 24f9d999-b53f-47c9-adab-716779b0edf4]   File
> "/usr/li

Re: [Openstack] Use of VIPs in Openstack

2013-08-21 Thread Martinx - ジェームズ
Hi!

I believe you need this:

http://www.linuxvirtualserver.org/VS-DRouting.html

With it, you can have, for example, a Dashboard with a VIP...

Best,
Thiago


On 21 August 2013 11:05, Jake G.  wrote:

>
>
> On 2013/08/21, at 19:30, JuanFra Rodriguez Cardoso <
> juanfra.rodriguez.card...@gmail.com> wrote:
>
> Hi:
>
> Can the LbaaS extension meet your need?
> http://docs.openstack.org/api/openstack-network/2.0/content/lbaas_ext.html
>
>
>
> ---
> JuanFra
>
>
> 2013/8/21 Jake G. 
>
>> Great job! That looks awesome. Can't wait for the patch to be released.
>>
>>   --
>> From: Aaron Rosen
>> To: Jake G.
>> Cc: "openstack@lists.openstack.org"
>> Sent: Wednesday, August 21, 2013 12:04 PM
>> Subject: Re: [Openstack] Use of VIPs in Openstack
>>
>> Hi Jake,
>>
>> This patch implements that exact usecase:
>> https://review.openstack.org/#/c/38230/ . Hopefully, we'll get this in
>> by the end of the week.
>>
>> Best,
>>
>> Aaron
>>
>>
>> On Tue, Aug 20, 2013 at 7:47 PM, Jake G. wrote:
>>
>> Hi!
>>
>> I was wondering if it is possible to use a Virtual IP or VIP for
>> clustering services (LVS, SQL, etc...) or for 3rd party load balancers?
>> I dont see a way to assign an IP address to multiple instances for this
>> purpose.
>>
>> Thanks,
>> Jake
>>
>>
>>
>> ___
>> Mailing list:
>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack
>> Post to : openstack@lists.openstack.org
>> Unsubscribe :
>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack
>>
>>
>>
>>
>>
>> ___
>> Mailing list:
>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack
>> Post to : openstack@lists.openstack.org
>> Unsubscribe :
>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack
>>
>>
>
>
> No, I don't think that will work, because I need a way for two servers
> to share a single IP.
>
> This is commonly used for cluster services, LVS, VRRP etc...
>
> Plus I already have the LBaas service enabled and do not see a possible
> way to do this.
>
> Thanks
>
> ___
> Mailing list:
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack
> Post to : openstack@lists.openstack.org
> Unsubscribe :
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack
>
>
___
Mailing list: http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack
Post to : openstack@lists.openstack.org
Unsubscribe : http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack


[Openstack] Minutes from the Technical Committee meeting (August 20)

2013-08-21 Thread Thierry Carrez
The OpenStack Technical Committee ("TC") met in #openstack-meeting at
20:00 UTC yesterday.

Here is a quick summary of the outcome of this meeting:

* We approved a motion to start using Gerrit in the near future to track
motions and record TC votes. We will still require that motions are
discussed on the development mailing-list for a minimum of 4 business
days, and during at least one Technical Committee IRC meeting.

* Jaromir Coufal and Liz Blanchard were granted exceptional ATC status
for their contributions to the OpenStack Dashboard UX.

See details and full logs at:
http://eavesdrop.openstack.org/meetings/tc/2013/tc.2013-08-20-20.01.html

More information on the Technical Committee at:
http://wiki.openstack.org/Governance/TechnicalCommittee

-- 
Thierry Carrez (ttx)
Chair, OpenStack Technical Committee

___
Mailing list: http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack
Post to : openstack@lists.openstack.org
Unsubscribe : http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack


Re: [Openstack] tap/qvb/qvo interfaces are not being deleted on compute node

2013-08-21 Thread Kyle Mestery (kmestery)
On Aug 20, 2013, at 11:37 PM, Nick Maslov  wrote:
> Hi,
> 
> In my test installation of OpenStack, I see that too many tap/qvb/qvo
> interfaces are still there - far more than my few running instances could
> use.
> 
> It looks like:
> 
> root@nova02:~# virsh list --all
> IdName   State


Hi Nick:

They should be cleaned up automatically. What version of OpenStack are you 
using? Also, look in the OVS agent logs to see if there is an indication of a 
problem occurring.
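
For example (log path is a guess based on the usual Ubuntu packaging; yours
may differ):

grep -iE 'error|trace' /var/log/quantum/openvswitch-agent.log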

Thanks,
Kyle
___
Mailing list: http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack
Post to : openstack@lists.openstack.org
Unsubscribe : http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack


[Openstack] Quantum L3 Performance Problems

2013-08-21 Thread Justin Brown
Hello,

I'm having some severe network performance problems with OpenStack Quantum.

I have a pretty normal Open vSwitch Quantum configuration using GRE
tunnels. One thing to note is that I have limited hardware at this
point. Rather than having dedicated controller and Quantum hosts, they
are running on one host each as a separate libvirt VM.

Let me be clear memory and CPU resources are not scarce at the host or
VM level for Quantum. The host has 8 CPUs and load is ~2.3 and never
spikes above 2.7. Quantum has 4 vCPUs and load is ~1.3 and doesn't
spike above 2.

This controller host has a single 1 Gbps NIC with trunked VLANs, and the
same goes for the compute hosts.

I have six systems for testing: controller host (CH1), Quantum server
VM (Q), compute node 1 (N1), compute node 2 (N2), instance 1 (IN1),
and instance 2 (IN2).

The instances are running on separate compute nodes.

Here are some iperf results.

CH1 <--> Q: 6.3 Gbps
This communication happens over a Linux Bridge.

CH1 <--> N1: 937Mbps
This happens over the 1Gbps physical ethernet network.

Q (GRE) <--> IN1: 451Mbps
I ran iperf on Q inside the qrouter Linux network namespace to test the
performance impact of the GRE tunnel.
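
Roughly like this (the router UUID and addresses are placeholders):

ip netns exec qrouter-<router-uuid> iperf -s    # on Q, inside the router namespace
iperf -c <qrouter-interface-ip>                 # on IN1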

IN1 <--> IN2: 682Mbps
Again testing GRE tunneling. The discrepancy from the previous test is
interesting since it's the same basic test.

The results above are not too bad. This is where things get interesting.

Quantum is configured with one external (192.168.27.0/24) and one
private network (10.10.1.0/24).

IN1 has address 10.10.1.2 and floating IP 192.168.27.11 (the first few
IPs are outside the allocation pool).

I connected my laptop (1 Gbps) directly to the switch and assigned IP
192.168.27.2, so there wouldn't be any routing from the physical
switch.

Laptop <--> N1: 935Mbps

Laptop <--> IN1: 26.7Mbps
That is not a typo. Traffic going through the L3 agent slows by almost
17x (from the Q GRE to IN1 result). I regularly see results below
10Mbps.

I'm having a real tough time troubleshooting the last test. I ran
tcpdump from the host, CH1, and I don't see any errors causing TCP
retransmission or duplicate packets.
Both CH1 and Quantum server have plenty of CPU available.  It's like
the L3 iptables rules are massively decreasing performance, but I've
used iptables for years in other capacities and haven't seen this sort
of problem.

The various Quantum logs don't indicate any problems.

Has anyone else seen large performance decreases when using the
Quantum L3 agent?
Any ideas on how to troubleshoot this?

Sincerely,
Justin

___
Mailing list: http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack
Post to : openstack@lists.openstack.org
Unsubscribe : http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack


Re: [Openstack] Use of VIPs in Openstack

2013-08-21 Thread Jake G.


On 2013/08/21, at 19:30, JuanFra Rodriguez Cardoso 
 wrote:

> Hi:
> 
> Can the LbaaS extension meet your need?
> http://docs.openstack.org/api/openstack-network/2.0/content/lbaas_ext.html 
> 
> 
> 
> ---
> JuanFra
> 
> 
> 2013/8/21 Jake G. 
>> Great job! That looks awesome. Can't wait for the patch to be released.
>> 
>> From: Aaron Rosen 
>> To: Jake G.  
>> Cc: "openstack@lists.openstack.org"  
>> Sent: Wednesday, August 21, 2013 12:04 PM
>> Subject: Re: [Openstack] Use of VIPs in Openstack
>> 
>> Hi Jake, 
>> 
>> This patch implements that exact usecase: 
>> https://review.openstack.org/#/c/38230/ . Hopefully, we'll get this in by 
>> the end of the week. 
>> 
>> Best, 
>> 
>> Aaron
>> 
>> 
>> On Tue, Aug 20, 2013 at 7:47 PM, Jake G.  wrote:
>> Hi!
>> 
>> I was wondering if it is possible to use a Virtual IP or VIP for clustering 
>> services (LVS, SQL, etc...) or for 3rd party load balancers?
>> I dont see a way to assign an IP address to multiple instances for this 
>> purpose.
>> 
>> Thanks,
>> Jake
>> 
>> 
>> 
>> ___
>> Mailing list: http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack
>> Post to : openstack@lists.openstack.org
>> Unsubscribe : http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack
>> 
>> 
>> 
>> 
>> 
>> ___
>> Mailing list: http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack
>> Post to : openstack@lists.openstack.org
>> Unsubscribe : http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack
> 


No, I don't think that will work, because I need a way for two servers to
share a single IP.

This is commonly used for cluster services, LVS, VRRP etc... 

Plus I already have the LBaas service enabled and do not see a possible way to 
do this.

Thanks
___
Mailing list: http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack
Post to : openstack@lists.openstack.org
Unsubscribe : http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack


[Openstack] problem in creating a VM

2013-08-21 Thread Zhengguang Ou
Hi all,
I have installed Grizzly on Ubuntu 13.04 in a VMware virtual machine. When I
create a VM, I have a problem.
nova-compute.log:

2013-08-21 05:21:03.592 17518 TRACE nova.compute.manager [instance:
24f9d999-b53f-47c9-adab-716779b0edf4] Traceback (most recent call last):

2013-08-21 05:21:03.592 17518 TRACE nova.compute.manager [instance:
24f9d999-b53f-47c9-adab-716779b0edf4]   File
"/usr/lib/python2.7/dist-packages/nova/compute/manager.py", line 1103, in
_spawn

2013-08-21 05:21:03.592 17518 TRACE nova.compute.manager [instance:
24f9d999-b53f-47c9-adab-716779b0edf4] block_device_info)

2013-08-21 05:21:03.592 17518 TRACE nova.compute.manager [instance:
24f9d999-b53f-47c9-adab-716779b0edf4]   File
"/usr/lib/python2.7/dist-packages/nova/virt/libvirt/driver.py", line 1520,
in spawn

2013-08-21 05:21:03.592 17518 TRACE nova.compute.manager [instance:
24f9d999-b53f-47c9-adab-716779b0edf4] admin_pass=admin_password)

2013-08-21 05:21:03.592 17518 TRACE nova.compute.manager [instance: 24f9d999-b53f-47c9-adab-716779b0edf4]   File "/usr/lib/python2.7/dist-packages/nova/virt/libvirt/driver.py", line 1806, in _create_image
2013-08-21 05:21:03.592 17518 TRACE nova.compute.manager [instance: 24f9d999-b53f-47c9-adab-716779b0edf4]     project_id=instance['project_id'])
2013-08-21 05:21:03.592 17518 TRACE nova.compute.manager [instance: 24f9d999-b53f-47c9-adab-716779b0edf4]   File "/usr/lib/python2.7/dist-packages/nova/virt/libvirt/imagebackend.py", line 158, in cache
2013-08-21 05:21:03.592 17518 TRACE nova.compute.manager [instance: 24f9d999-b53f-47c9-adab-716779b0edf4]     *args, **kwargs)
2013-08-21 05:21:03.592 17518 TRACE nova.compute.manager [instance: 24f9d999-b53f-47c9-adab-716779b0edf4]   File "/usr/lib/python2.7/dist-packages/nova/virt/libvirt/imagebackend.py", line 258, in create_image
2013-08-21 05:21:03.592 17518 TRACE nova.compute.manager [instance: 24f9d999-b53f-47c9-adab-716779b0edf4]     prepare_template(target=base, *args, **kwargs)
2013-08-21 05:21:03.592 17518 TRACE nova.compute.manager [instance: 24f9d999-b53f-47c9-adab-716779b0edf4]   File "/usr/lib/python2.7/dist-packages/nova/openstack/common/lockutils.py", line 228, in inner
2013-08-21 05:21:03.592 17518 TRACE nova.compute.manager [instance: 24f9d999-b53f-47c9-adab-716779b0edf4]     retval = f(*args, **kwargs)
2013-08-21 05:21:03.592 17518 TRACE nova.compute.manager [instance: 24f9d999-b53f-47c9-adab-716779b0edf4]   File "/usr/lib/python2.7/dist-packages/nova/virt/libvirt/imagebackend.py", line 146, in call_if_not_exists
2013-08-21 05:21:03.592 17518 TRACE nova.compute.manager [instance: 24f9d999-b53f-47c9-adab-716779b0edf4]     fetch_func(target=target, *args, **kwargs)
2013-08-21 05:21:03.592 17518 TRACE nova.compute.manager [instance: 24f9d999-b53f-47c9-adab-716779b0edf4]   File "/usr/lib/python2.7/dist-packages/nova/virt/libvirt/utils.py", line 593, in fetch_image
2013-08-21 05:21:03.592 17518 TRACE nova.compute.manager [instance: 24f9d999-b53f-47c9-adab-716779b0edf4]     images.fetch_to_raw(context, image_id, target, user_id, project_id)
2013-08-21 05:21:03.592 17518 TRACE nova.compute.manager [instance: 24f9d999-b53f-47c9-adab-716779b0edf4]   File "/usr/lib/python2.7/dist-packages/nova/virt/images.py", line 207, in fetch_to_raw
2013-08-21 05:21:03.592 17518 TRACE nova.compute.manager [instance: 24f9d999-b53f-47c9-adab-716779b0edf4]     fetch(context, image_href, path_tmp, user_id, project_id)
2013-08-21 05:21:03.592 17518 TRACE nova.compute.manager [instance: 24f9d999-b53f-47c9-adab-716779b0edf4]   File "/usr/lib/python2.7/dist-packages/nova/virt/images.py", line 202, in fetch
2013-08-21 05:21:03.592 17518 TRACE nova.compute.manager [instance: 24f9d999-b53f-47c9-adab-716779b0edf4]     image_service.download(context, image_id, image_file)
2013-08-21 05:21:03.592 17518 TRACE nova.compute.manager [instance: 24f9d999-b53f-47c9-adab-716779b0edf4]   File "/usr/lib/python2.7/dist-packages/nova/image/glance.py", line 278, in download
2013-08-21 05:21:03.592 17518 TRACE nova.compute.manager [instance: 24f9d999-b53f-47c9-adab-716779b0edf4]     _reraise_translated_image_exception(image_id)
2013-08-21 05:21:03.592 17518 TRACE nova.compute.manager [instance: 24f9d999-b53f-47c9-adab-716779b0edf4]   File "/usr/lib/python2.7/dist-packages/nova/image/glance.py", line 276, in download
2013-08-21 05:21:03.592 17518 TRACE nova.compute.manager [instance: 24f9d999-b53f-47c9-adab-716779b0edf4]     image_chunks = self._client.call(context, 1, 'data', image_id)
2013-08-21 05:21:03.592 17518 TRACE nova.compute.manager [instance: 24f9d999-b53f-47c9-adab-716779b0edf4]   File "/usr/lib/python2.7/dist-packages/nova/image/glance.py", line 182, in call
2013-08-21 05:21:03.592 17518 TRACE nova.compute.manager [instance: 24f9d999-b53f-47c9-adab-716779b0edf4]     return getattr(client.images, method)(*args, **kwargs)
2013-08-21 05:21:03.592 17518 TRACE nova.compute.manager [instance: 24f9d999-b53f-47c9-adab-716779b0edf4]   File "/usr/lib/python2

[Openstack] Host capabilities are not getting updated.

2013-08-21 Thread Peeyush Gupta
Hi,

I have been trying to use some additional host capabilities,
as shown in the following link:

http://russellbryantnet.wordpress.com/2013/05/21/availability-zones-and-host-aggregates-in-openstack-compute-nova/


I created a flavor and an aggregate, but when I try to
boot an instance using the flavor, it fails because
ComputeCapabilitiesFilter returns 0 hosts. I investigated further
and found that the host's capabilities weren't being updated.

How can I rectify this?
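For reference, a minimal sketch of the aggregate/flavor setup being described, following the linked post (names and IDs here are placeholders, not from this report). One thing worth checking: unscoped flavor extra specs are matched by ComputeCapabilitiesFilter as well, so if the hosts do not advertise a matching capability, that filter can return 0 hosts even when the aggregate metadata is correct. Dropping ComputeCapabilitiesFilter from scheduler_default_filters and relying on AggregateInstanceExtraSpecsFilter is one workaround to try:

nova aggregate-create fast-io nova
nova aggregate-set-metadata 1 ssd=true
nova aggregate-add-host 1 node1
nova flavor-create ssd.large 6 8192 80 4
nova flavor-key ssd.large set ssd=true

# nova.conf on the scheduler host, without ComputeCapabilitiesFilter:
scheduler_default_filters=AggregateInstanceExtraSpecsFilter,AvailabilityZoneFilter,RamFilter,ComputeFilter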
 
~Peeyush Gupta

___
Mailing list: http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack
Post to : openstack@lists.openstack.org
Unsubscribe : http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack


Re: [Openstack] DHCP Replies

2013-08-21 Thread Mina Nagy Zaki
I have a single-node setup so far. I manually removed all the OVS tag
properties on the ports and it worked. I'm not using VLANs, so I'm not
sure why the quantum OVS plugin is adding tags.
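A sketch of that manual workaround, using the port names from the ovs-vsctl output below (tag 4095 is the "dead VLAN" the OVS agent assigns to ports it has not wired up yet, so a stopped or wedged quantum-plugin-openvswitch-agent is worth checking first):

ovs-vsctl clear port tap32f910cf-c8 tag
ovs-vsctl clear port qvof0b52659-8c tag
# or set the tag to the network's local VLAN instead of clearing it:
ovs-vsctl set port tap32f910cf-c8 tag=1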

tcpdump of br-int and dhcp tap:
I started ping from inside the dhcp namespace. I get destination
unreachable for a while, then after a few seconds the packets start
making it through.

tcpdump br-int shows me arp requests for 10.0.0.1 from 10.0.0.5 (dhcp tap)
the arp replies do not show up, and the ping icmp packets do not show up

tcpdump of dhcp tap captures all packets as expected (arp and icmp
echo requests and replies)


OVS plugin config:
[OVS]
integration_bridge = br-int
tenant_network_type = gre

enable_tunneling = True
network_vlan_ranges =
tunnel_bridge = br-tun
tunnel_id_ranges = 1:1000
local_ip = xxx.xxx.xxx.xxx

root@box1 ~ # ovs-vsctl show
6846b700-476a-4995-b770-c9d37f11f92d
    Bridge br-tun
        Port br-tun
            Interface br-tun
                type: internal
        Port "gre-1"
            Interface "gre-1"
                type: gre
                options: {in_key=flow, out_key=flow, remote_ip="111.111.111.111"}
        Port patch-int
            Interface patch-int
                type: patch
                options: {peer=patch-tun}
    Bridge br-int
        Port "qvof0b52659-8c"
            tag: 4095
            Interface "qvof0b52659-8c"
        Port patch-tun
            Interface patch-tun
                type: patch
                options: {peer=patch-int}
        Port "qvo9a42732d-56"
            tag: 1
            Interface "qvo9a42732d-56"
        Port "qr-7f13795a-6c"
            tag: 1
            Interface "qr-7f13795a-6c"
                type: internal
        Port "tap32f910cf-c8"
            tag: 4095
            Interface "tap32f910cf-c8"
                type: internal
        Port int-br-ex
            Interface int-br-ex
        Port br-int
            Interface br-int
                type: internal
        Port "tapef814634-b1"
            tag: 1
            Interface "tapef814634-b1"
                type: internal
    Bridge br-ex
        Port "qg-22f43415-49"
            Interface "qg-22f43415-49"
                type: internal
        Port br-ex
            Interface br-ex
                type: internal
        Port "eth0"
            Interface "eth0"
    ovs_version: "1.10.1"

root@box1 ~ # ovs-ofctl show br-int
OFPT_FEATURES_REPLY (xid=0x2): dpid:064d98b2b243
n_tables:254, n_buffers:256
capabilities: FLOW_STATS TABLE_STATS PORT_STATS QUEUE_STATS ARP_MATCH_IP
actions: OUTPUT SET_VLAN_VID SET_VLAN_PCP STRIP_VLAN SET_DL_SRC
SET_DL_DST SET_NW_SRC SET_NW_DST SET_NW_TOS SET_TP_SRC SET_TP_DST
ENQUEUE
 1(qvo9a42732d-56): addr:3e:71:71:5c:08:fe
 config: 0
 state:  0
 current:10GB-FD COPPER
 speed: 10000 Mbps now, 0 Mbps max
 2(tap32f910cf-c8): addr:5e:70:61:0d:4b:f8
 config: 0
 state:  0
 speed: 0 Mbps now, 0 Mbps max
 4(patch-tun): addr:22:f5:35:76:2a:fc
 config: 0
 state:  0
 speed: 0 Mbps now, 0 Mbps max
 10(tapef814634-b1): addr:00:00:00:00:00:00
 config: PORT_DOWN
 state:  LINK_DOWN
 speed: 0 Mbps now, 0 Mbps max
 14(qr-7f13795a-6c): addr:00:00:00:00:00:00
 config: PORT_DOWN
 state:  LINK_DOWN
 speed: 0 Mbps now, 0 Mbps max
 LOCAL(br-int): addr:06:4d:98:b2:b2:43
 config: 0
 state:  0
 speed: 0 Mbps now, 0 Mbps max
OFPT_GET_CONFIG_REPLY (xid=0x4): frags=normal miss_send_len=0

root@box1 ~ # ovs-ofctl dump-flows br-int
NXST_FLOW reply (xid=0x4):
 cookie=0x0, duration=2829.897s, table=0, n_packets=0, n_bytes=0,
idle_age=2829, priority=2,in_port=2 actions=drop
 cookie=0x0, duration=2896.855s, table=0, n_packets=577,
n_bytes=51006, idle_age=286, priority=1 actions=NORMAL

On Wed, Aug 21, 2013 at 10:56 AM, Sushma Korati
 wrote:
> Hi Mina Nagy,
>
> We faced a similar issue with VLAN + Open vSwitch.
> Are you using VLAN?
> Can you please paste your OVS-configuration and output of below commands in 
> compute and quantum node.
>  tcpdump and ovs-ofctl dump-flows on dhcp tap, int-br-int, br-int
>
> Regards,
> Sushma Korati
> sushma_kor...@persistent.co.in
> Persistent Systems Ltd. |  Partners in Innovation | www.persistentsys.com
> Please consider your environmental responsibility: Before printing this 
> e-mail or any other document, ask yourself whether you need a hard copy.
>
>
>
> 
> From: Mina Nagy Zaki [mnz...@gmail.com]
> Sent: Wednesday, August 21, 2013 1:37 PM
> To: openstack@lists.openstack.org
> Subject: [Openstack] DHCP Replies
>
> Hello,
> I have working network configuration, VMs have access to the external
> network, hosts have access to VMs. But DHCP replies are not making it
> back into the VMs.
>
> tcpdump and iptables tracing show me that the requests make it through
> just fine, but the replies don't make it out of the qdhcp-
> namespace (they go out the tap interface there but I'm not sur

Re: [Openstack] Raw Devices Through Cinder?

2013-08-21 Thread Steven Carter (stevenca)
Are you thinking Icehouse timeframe?  Are there some blueprint type docs that 
you can point me to that would give me an idea of how it will work?

Thanks,

Steven.

From: John Griffith [mailto:john.griff...@solidfire.com]
Sent: Tuesday, August 20, 2013 7:22 PM
To: Steven Carter (stevenca)
Cc: openstack@lists.openstack.org
Subject: Re: [Openstack] Raw Devices Through Cinder?



On Tue, Aug 20, 2013 at 7:40 PM, Steven Carter (stevenca)
<steve...@cisco.com> wrote:
Is there a way to present a raw device to a VM through Cinder?  It seems like I 
can do it with KVM specifically, but I would like to stick within the OpenStack 
framework if possible.

Thanks,

Steven.

___
Mailing list: http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack
Post to : openstack@lists.openstack.org
Unsubscribe : http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack
Nope, not yet.  We have a raw/disk driver but it's not really useful at this 
point, and there are some other things in progress to get what you're looking 
for in the future but it looks like it will be another release before it's all 
implemented.
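For the KVM-specific route mentioned above, a sketch using libvirt directly (the domain name, source device, and target name are placeholders; Nova will not know about or track a device attached this way):

virsh attach-disk instance-00000001 /dev/sdb vdb --persistent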

___
Mailing list: http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack
Post to : openstack@lists.openstack.org
Unsubscribe : http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack


Re: [Openstack] [keystone][ssl] Help for configure keystone with ssl.

2013-08-21 Thread Hua ZZ Zhang
Qing Long,

Here's a document in keystone FYI.
https://github.com/openstack/keystone/blob/master/doc/source/apache-httpd.rst

Meanwhile, I'm submitting a patch into devstack to enable apache and ssl
for keystone service:
https://review.openstack.org/#/c/36474/
Please help me to test it if you want. :-)
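In the meantime, a minimal sketch of a manual setup on stable/grizzly (the paths are examples; the certificates have to be generated separately, e.g. with openssl). In keystone.conf:

[ssl]
enable = True
certfile = /etc/keystone/ssl/certs/keystone.pem
keyfile = /etc/keystone/ssl/private/keystonekey.pem
ca_certs = /etc/keystone/ssl/certs/ca.pem
cert_required = False

Then restart keystone and check that it answers over TLS:

curl --cacert /etc/keystone/ssl/certs/ca.pem https://127.0.0.1:5000/v2.0/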

Best Regards,

 
Edward Zhang(张华)

From: "Qinglong.Meng"
Date: 08/21/2013 06:13 PM
To: openstack@lists.openstack.org, "openstack-...@lists.openstack.org"
Subject: [Openstack] [keystone][ssl] Help for configure keystone with ssl.

Hi All,
OS: Ubuntu 12.04 LTS
keystone version: stable/grizzly

I have seen "keystone-manage ssl_setup" in keystone tag 2013.2.b1,
but I can't use it in my version.
So I want to know:
* how to configure SSL with keystone manually?
* how to test that the configuration is OK?

Thanks for your help.

Best Regards,

--

Lawrency Meng
mail: mengql112...@gmail.com
___
Mailing list: https://launchpad.net/~openstack
Post to : openst...@lists.launchpad.net
Unsubscribe : https://launchpad.net/~openstack
More help   : https://help.launchpad.net/ListHelp
___
Mailing list: http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack
Post to : openstack@lists.openstack.org
Unsubscribe : http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack
___
Mailing list: http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack
Post to : openstack@lists.openstack.org
Unsubscribe : http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack


[Openstack] Unsubscribe : http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack

2013-08-21 Thread Raghuram P

  ___
Mailing list: http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack
Post to : openstack@lists.openstack.org
Unsubscribe : http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack


Re: [Openstack] Use of VIPs in Openstack

2013-08-21 Thread JuanFra Rodriguez Cardoso
Hi:

Can the LbaaS extension meet your need?
http://docs.openstack.org/api/openstack-network/2.0/content/lbaas_ext.html
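For example, creating a pool with one member and a VIP in front of it looks roughly like this (a sketch; the names, member address, and subnet ID are placeholders):

quantum lb-pool-create --name mypool --lb-method ROUND_ROBIN --protocol HTTP --subnet-id <subnet-id>
quantum lb-member-create --address 10.0.0.5 --protocol-port 80 mypool
quantum lb-vip-create --name myvip --protocol-port 80 --protocol HTTP --subnet-id <subnet-id> mypool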



---
JuanFra


2013/8/21 Jake G. 

> Great job! That looks awesome. Can't wait for the patch to be released.
>
>   --
>  *From:* Aaron Rosen 
> *To:* Jake G. 
> *Cc:* "openstack@lists.openstack.org" 
> *Sent:* Wednesday, August 21, 2013 12:04 PM
> *Subject:* Re: [Openstack] Use of VIPs in Openstack
>
> Hi Jake,
>
> This patch implements that exact usecase:
> https://review.openstack.org/#/c/38230/ . Hopefully, we'll get this in by
> the end of the week.
>
> Best,
>
> Aaron
>
>
> On Tue, Aug 20, 2013 at 7:47 PM, Jake G. wrote:
>
> Hi!
>
> I was wondering if it is possible to use a Virtual IP or VIP for
> clustering services (LVS, SQL, etc...) or for 3rd party load balancers?
> I don't see a way to assign an IP address to multiple instances for this
> purpose.
>
> Thanks,
> Jake
>
>
>
> ___
> Mailing list:
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack
> Post to : openstack@lists.openstack.org
> Unsubscribe :
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack
>
>
>
>
>
> ___
> Mailing list:
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack
> Post to : openstack@lists.openstack.org
> Unsubscribe :
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack
>
>
___
Mailing list: http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack
Post to : openstack@lists.openstack.org
Unsubscribe : http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack


[Openstack] [keystone][ssl] Help for configure keystone with ssl.

2013-08-21 Thread Qinglong.Meng
Hi All,
OS: Ubuntu 12.04 LTS
keystone version: stable/grizzly

I have seen "keystone-manage ssl_setup" in keystone tag 2013.2.b1, but
I can't use it in my version.
So I want to know:
* how to configure SSL with keystone manually?
* how to test that the configuration is OK?

Thanks for your help.

Best Regards,

-- 

Lawrency Meng
mail: mengql112...@gmail.com
___
Mailing list: https://launchpad.net/~openstack
Post to : openst...@lists.launchpad.net
Unsubscribe : https://launchpad.net/~openstack
More help   : https://help.launchpad.net/ListHelp
___
Mailing list: http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack
Post to : openstack@lists.openstack.org
Unsubscribe : http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack


Re: [Openstack] How to restore existing vms after host reboot in devstack

2013-08-21 Thread Qinglong.Meng
+1


2013/8/21 Swapnil Kulkarni 

> Hello,
>
> I think the cause is that you are running stack.sh every time you restart
> devstack, which clears the configuration. rejoin-stack.sh can help in your
> case.
>
> Before running the script, please associate the VG backing file with
> a loop device, using the following command as root:
>
> losetup /dev/loop2 /opt/stack/data/stack-volumes-backing-file
>
> Then use the rejoin-stack.sh script, located in your devstack installation
> directory (e.g. /opt/stack/devstack).
>
> Best Regards,
> Swapnil Kulkarni
> swapnilkulkarni2...@gmail.com
>
>
>
> On Wed, Aug 21, 2013 at 12:37 PM, Batsayan Das wrote:
>
>> Hi,
>>
>> I launch OpenStack by running stack.sh as mentioned in
>> http://devstack.org/guides/single-machine.html
>>
>> I created some images and instances in devstack using the dashboard. When I
>> reboot the host machine and run stack.sh again, those images and instances got
>> lost, and I am not able to see those images and instances that I already
>> created using the dashboard.
>>
>> My question is
>>
>> 1. Is this what is expected from a host reboot?
>> 2. What do I need to do to retain images and instances across multiple host
>> reboots?
>>
>> The thread
>> http://www.mail-archive.com/openstack@lists.launchpad.net/msg08453.html says
>> to use the switches --start_guests_on_host_boot and
>> --resume_guests_state_on_host_boot. I am not sure how I should use these,
>> or which config files should have those switches enabled when I launch
>> devstack by simply running stack.sh
>>
>> I guess I need to pass some parameters or do **something**
>> before running stack.sh so that it does not destroy already created
>> instances and images. As I understand it, stack.sh reinitializes everything.
>>
>> Looking for any help.
>>
>> Regards,
>> Batsayan Das
>> Tata Consultancy Services
>> Mailto: batsayan@tcs.com
>> Website: http://www.tcs.com
>> 
>> Experience certainty.IT Services
>>Business Solutions
>>Consulting
>> 
>>
>> =-=-=
>> Notice: The information contained in this e-mail
>> message and/or attachments to it may contain
>> confidential or privileged information. If you are
>> not the intended recipient, any dissemination, use,
>> review, distribution, printing or copying of the
>> information contained in this e-mail message
>> and/or attachments to it are strictly prohibited. If
>> you have received this communication in error,
>> please notify us by reply e-mail or telephone and
>> immediately and permanently delete the message
>> and any attachments. Thank you
>>
>>
>> ___
>> Mailing list:
>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack
>> Post to : openstack@lists.openstack.org
>> Unsubscribe :
>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack
>>
>>
>
> ___
> Mailing list:
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack
> Post to : openstack@lists.openstack.org
> Unsubscribe :
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack
>
>


-- 

Lawrency Meng
mail: mengql112...@gmail.com
___
Mailing list: https://launchpad.net/~openstack
Post to : openst...@lists.launchpad.net
Unsubscribe : https://launchpad.net/~openstack
More help   : https://help.launchpad.net/ListHelp
___
Mailing list: http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack
Post to : openstack@lists.openstack.org
Unsubscribe : http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack


Re: [Openstack] How to restore existing vms after host reboot in devstack

2013-08-21 Thread Batsayan Das
 
Hello,
It solves the problem. 
Thanks for the help.

Regards,
Batsayan Das
Tata Consultancy Services
Mailto: batsayan@tcs.com
Website: http://www.tcs.com

Experience certainty.   IT Services
Business Solutions
Consulting


-Swapnil Kulkarni  wrote: -
To: Batsayan Das 
From: Swapnil Kulkarni 
Date: 08/21/2013 12:51PM
Cc: openstack@lists.openstack.org
Subject: Re: [Openstack] How to restore existing vms after host reboot in 
devstack

Hello,

I think the cause is that you are running stack.sh every time you restart devstack, 
which clears the configuration. rejoin-stack.sh can help in your case.

Before running the script, please associate the VG backing file with a 
loop device, using the following command as root:
 
losetup /dev/loop2 /opt/stack/data/stack-volumes-backing-file

Then use the rejoin-stack.sh script, located in your devstack installation 
directory (e.g. /opt/stack/devstack).

 
Best Regards,
Swapnil Kulkarni
swapnilkulkarni2...@gmail.com

 

On Wed, Aug 21, 2013 at 12:37 PM, Batsayan Das  wrote:
 Hi, 
 
I launch OpenStack by running stack.sh as mentioned in 
http://devstack.org/guides/single-machine.html 
 
I created some images and instances in devstack using the dashboard. When I reboot 
the host machine and run stack.sh again, those images and instances got lost, and I 
am not able to see those images and instances that I already created using 
the dashboard. 
 
My question is 
 
1. Is this what is expected from a host reboot? 
2. What do I need to do to retain images and instances across multiple host 
reboots? 
 
The thread 
http://www.mail-archive.com/openstack@lists.launchpad.net/msg08453.html says to 
use the switches --start_guests_on_host_boot and --resume_guests_state_on_host_boot. 
I am not sure how I should use these, or which config files should have those 
switches enabled when I launch devstack by simply running stack.sh 
 
I guess I need to pass some parameters or do **something** before 
running stack.sh so that it does not destroy already created instances and 
images. As I understand it, stack.sh reinitializes everything. 
 
Looking for any help. 
 
Regards, 
Batsayan Das
 Tata Consultancy Services
 Mailto: batsayan@tcs.com
 Website: http://www.tcs.com
 
 Experience certainty.        IT Services
                         Business Solutions
                         Consulting
 =-=-=
 Notice: The information contained in this e-mail
 message and/or attachments to it may contain 
 confidential or privileged information. If you are 
 not the intended recipient, any dissemination, use, 
 review, distribution, printing or copying of the 
 information contained in this e-mail message 
 and/or attachments to it are strictly prohibited. If 
 you have received this communication in error, 
 please notify us by reply e-mail or telephone and 
 immediately and permanently delete the message 
 and any attachments. Thank you
  

___
 Mailing list: http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack
 Post to     : openstack@lists.openstack.org
 Unsubscribe : http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack
 

 ___
Mailing list: http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack
Post to : openstack@lists.openstack.org
Unsubscribe : http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack


Re: [Openstack] Baremetal OS image support

2013-08-21 Thread Jake G.




 From: Robert Collins 
To: Jake G.  
Sent: Wednesday, August 21, 2013 6:24 PM
Subject: Re: [Openstack] Baremetal OS image support
 

On 21 August 2013 15:03, Jake G.  wrote:
>
> From: Clint Byrum 
> To: openstack 
> Sent: Wednesday, August 21, 2013 12:49 AM
>
> Subject: Re: [Openstack] Baremetal OS image support


> I built a separate Ubuntu server to run diskimage-builder, however now when I run
> # bin/ramdisk-image-create deploy -k $KERNEL -o my-deploy-ramdisk
> I get the error ramdisk-image-create: invalid option -- 'k'
>
> Have you seen this error?

You may be following old documentation - where did you get the sample
command line? -k is no longer needed, and hasn't been accepted for a
month or so.
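In other words, with a current diskimage-builder the same command should simply be (a sketch based on the note above, with the deprecated flag dropped):

bin/ramdisk-image-create deploy -o my-deploy-ramdisk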

> Also, do you happen to know of any resources where I can download pre-made
> images?
> All the sites I have seen are for "cloud" images only. Could these possibly
> work as well?

We'll make some available again soon, but for the moment you should
use diskimage-builder to setup the deploy ramdisk.

-Rob

-- 
Robert Collins 
Distinguished Technologist
HP Converged Cloud






Hi Rob,
Thank you for the info. I look forward to those images.

I am using this document for baremetal deployment, which has the -k in it -> 
https://wiki.openstack.org/wiki/Baremetal
This seems to be the only documentation available on how to implement the 
baremetal feature. Would you know of a better doc?

Thanks again!

___
Mailing list: http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack
Post to : openstack@lists.openstack.org
Unsubscribe : http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack


[Openstack] DHCP Replies

2013-08-21 Thread Mina Nagy Zaki
Hello,
I have a working network configuration: VMs have access to the external
network, hosts have access to VMs. But DHCP replies are not making it
back into the VMs.

tcpdump and iptables tracing show me that the requests make it through
just fine, but the replies don't make it out of the qdhcp-
namespace (they go out the tap interface there but I'm not sure what
happens to them next)

How should I go about debugging this?
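One way to trace it, as a sketch (the namespace and interface names are placeholders; read the real ones off "ip netns" and "ip netns exec <ns> ip link"):

ip netns
ip netns exec qdhcp-<net-id> tcpdump -ln -i <tap-interface> port 67 or port 68
tcpdump -ln -i br-int port 67 or port 68
ovs-ofctl dump-flows br-int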

Thanks!
-- 
Mina Nagy Zaki

___
Mailing list: http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack
Post to : openstack@lists.openstack.org
Unsubscribe : http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack


Re: [Openstack] How to restore existing vms after host reboot in devstack

2013-08-21 Thread Swapnil Kulkarni
Hello,

I think the cause is that you are running stack.sh every time you restart
devstack, which clears the configuration. rejoin-stack.sh can help in your
case.

Before running the script, please associate the VG backing file with
a loop device, using the following command as root:

losetup /dev/loop2 /opt/stack/data/stack-volumes-backing-file

Then use the rejoin-stack.sh script, located in your devstack installation
directory (e.g. /opt/stack/devstack).
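Putting it together, assuming the default devstack paths:

losetup /dev/loop2 /opt/stack/data/stack-volumes-backing-file
cd /opt/stack/devstack
./rejoin-stack.sh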

Best Regards,
Swapnil Kulkarni
swapnilkulkarni2...@gmail.com



On Wed, Aug 21, 2013 at 12:37 PM, Batsayan Das  wrote:

> Hi,
>
> I launch OpenStack by running stack.sh as mentioned in
> http://devstack.org/guides/single-machine.html
>
> I created some images and instances in devstack using the dashboard. When I
> reboot the host machine and run stack.sh again, those images and instances got
> lost, and I am not able to see those images and instances that I already
> created using the dashboard.
>
> My question is
>
> 1. Is this what is expected from a host reboot?
> 2. What do I need to do to retain images and instances across multiple host
> reboots?
>
> The thread
> http://www.mail-archive.com/openstack@lists.launchpad.net/msg08453.html says
> to use the switches --start_guests_on_host_boot and
> --resume_guests_state_on_host_boot. I am not sure how I should use these,
> or which config files should have those switches enabled when I launch
> devstack by simply running stack.sh
>
> I guess I need to pass some parameters or do **something** before
> running stack.sh so that it does not destroy already created instances
> and images. As I understand it, stack.sh reinitializes everything.
>
> Looking for any help.
>
> Regards,
> Batsayan Das
> Tata Consultancy Services
> Mailto: batsayan@tcs.com
> Website: http://www.tcs.com
> 
> Experience certainty.IT Services
>Business Solutions
>Consulting
> 
>
> =-=-=
> Notice: The information contained in this e-mail
> message and/or attachments to it may contain
> confidential or privileged information. If you are
> not the intended recipient, any dissemination, use,
> review, distribution, printing or copying of the
> information contained in this e-mail message
> and/or attachments to it are strictly prohibited. If
> you have received this communication in error,
> please notify us by reply e-mail or telephone and
> immediately and permanently delete the message
> and any attachments. Thank you
>
>
> ___
> Mailing list:
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack
> Post to : openstack@lists.openstack.org
> Unsubscribe :
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack
>
>
___
Mailing list: http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack
Post to : openstack@lists.openstack.org
Unsubscribe : http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack


[Openstack] How to restore existing vms after host reboot in devstack

2013-08-21 Thread Batsayan Das
Hi,

I launch OpenStack by running stack.sh as mentioned in 
http://devstack.org/guides/single-machine.html

I created some images and instances in devstack using the dashboard. When I 
reboot the host machine and run stack.sh again, those images and instances got 
lost, and I am not able to see those images and instances that I already 
created using the dashboard.

My question is

1. Is this what is expected from a host reboot?
2. What do I need to do to retain images and instances across multiple host 
reboots?

The thread 
http://www.mail-archive.com/openstack@lists.launchpad.net/msg08453.html 
says to use the switches --start_guests_on_host_boot and 
--resume_guests_state_on_host_boot. I am not sure how I should use these, 
or which config files should have those switches enabled when I launch 
devstack by simply running stack.sh 

I guess I need to pass some parameters or do **something** before 
running stack.sh so that it does not destroy already created instances 
and images. As I understand it, stack.sh reinitializes everything.

Looking for any help.
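For what it's worth, those two switches appear to be nova configuration options rather than stack.sh arguments, so they would go into nova.conf on the compute host, roughly like this (a sketch, assuming defaults otherwise):

[DEFAULT]
start_guests_on_host_boot = False
resume_guests_state_on_host_boot = True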

Regards,
Batsayan Das
Tata Consultancy Services
Mailto: batsayan@tcs.com
Website: http://www.tcs.com

Experience certainty.   IT Services
Business Solutions
Consulting

=-=-=
Notice: The information contained in this e-mail
message and/or attachments to it may contain 
confidential or privileged information. If you are 
not the intended recipient, any dissemination, use, 
review, distribution, printing or copying of the 
information contained in this e-mail message 
and/or attachments to it are strictly prohibited. If 
you have received this communication in error, 
please notify us by reply e-mail or telephone and 
immediately and permanently delete the message 
and any attachments. Thank you


___
Mailing list: http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack
Post to : openstack@lists.openstack.org
Unsubscribe : http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack


[Openstack] What Action take if VM/Instance Broken

2013-08-21 Thread Mahardhika
Hi, I would like some guidance or tips on what to do if a VM/instance 
cannot boot into the operating system and is stuck at the boot/BIOS screen.
In particular, I would like to know how to make a bootable CD from an ISO 
and attach it to the VM.
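One possible approach, as a sketch (the image, flavor, instance, and volume names are placeholders): upload a rescue ISO to glance, boot a helper instance from it, and attach the broken instance's volume to it for repair:

glance image-create --name rescue-iso --disk-format iso --container-format bare < systemrescue.iso
nova boot --image rescue-iso --flavor m1.small rescue-vm
nova volume-attach rescue-vm <volume-id> /dev/vdb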


thanks
--
Regards,
Mahardhika Gilang

___
Mailing list: http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack
Post to : openstack@lists.openstack.org
Unsubscribe : http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack