Thanks for the link! By running the OpenHiddenSystemDrive exe, I am able to
see the injected file.
Regards,
Balu
On Thu, Apr 25, 2013 at 10:30 AM, Wangpan hzwang...@corp.netease.com wrote:
Have you opened and checked the 'System Reserved' partition? See the
reference below:
Is there a way to inject into the regular filesystem (C: drive) in
Windows 7/Windows 8?
Regards,
Balu
On Thu, Apr 25, 2013 at 11:46 AM, Balamurugan V G
balamuruga...@gmail.com wrote:
Thanks for the link! By running the OpenHiddenSystemDrive exe, I am able to
see the injected file.
Regards,
I checked the master code of nova several days ago and found that the first
logical partition of the root disk is chosen for file injection, so you may
have to change the nova code to implement what you want.
I will also try to fix this issue in the Havana release.
2013-04-25
Wangpan
Balu wrote:
Have you tried looking in the /var/lib/dhcp directory of the Ubuntu image
(the directory might depend on the DHCP client you are using)?
As this isn't a clean image but has been connected to another network,
maybe a previous DHCP server told it to add the route? And now the client
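One way to check for a stale lease carried over from a previous network is to inspect the client's lease files; a sketch, assuming Ubuntu's dhclient and its default lease path:

```shell
# List the lease files the DHCP client has recorded
ls /var/lib/dhcp/
# Routes pushed by a DHCP server show up as rfc3442/static-routes options
grep -i -E 'lease|routers|rfc3442|static-routes' /var/lib/dhcp/dhclient*.leases
```

If a lease from the old network is present, deleting the lease file and renewing the lease should stop the route from coming back.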
Hello Giuseppe,
I am not sure which Hypervisor you are using, but it seems that it is not
KVM which would be the reason why only partial information is collected.
At this time, KVM is the only fully instrumented Hypervisor, even though
we would really welcome patches from users of other
Waiting for Daniel's reply, but AFAIK the new NBD server has been available
since qemu 1.4 and libvirt 1.0.2.
Daniel, feel free to confirm or blame me.
Btw, KVM capabilities have been available in upstream qemu since qemu 1.3,
which works like a charm with Nova (Folsom).
-Sylvain
On 24/04/2013 23:23, Lorin
On Thu, Apr 25, 2013 at 12:45:03PM +0530, Balamurugan V G wrote:
Hi Leen,
I do not have any other DHCP server which can do this other than the one
created by quantum. In fact, if I delete the route manually and restart the
network (interface down and up), the route gets added back. Please
Hi,
I am really stuck, any help will be highly appreciated.
I installed a two node openstack deployment on debian wheezy:
# nova-manage service list
Binary        Host     Zone   Status   State   Updated_At
nova-network  aopcach
Hi Mballo,
Looks good to me. Please check your keystone endpoints too. You need to
check the other services as well to verify that all endpoints are correct.
Do you see your images, networks and volumes on your project page? That is
usually a good indicator that your horizon can communicate with these APIs.
Hi,
Another place to look for error messages is /var/log/messages. Look
for dnsmasq-related messages and post them here.
Regards,
Gabriel
From: Arindam Choudhury arin...@live.com
To: openstack openstack@lists.launchpad.net
Sent: Thursday, April 25,
Community,
That did the trick. Just install repoze.lru and keystone can start.
--
Viktor
On Thu, Apr 25, 2013 at 12:09 AM, Viktor Viking
viktor.viking...@gmail.com wrote:
Hi Dolph,
Now I got an exception. It seems like I am missing repoze.lru. I will
download and install it. I will let you
Hi,
Thanks for your reply.
Here is the logs from /var/log/messages:
Apr 25 11:14:40 aopcso1 kernel: [589256.953753] device vnet0 entered
promiscuous mode
Apr 25 11:14:40 aopcso1 kernel: [589257.014414] br100: port 2(vnet0) entering
forwarding state
Apr 25 11:14:40 aopcso1 kernel:
On Wed, Apr 24, 2013 at 05:23:11PM -0400, Lorin Hochstein wrote:
On Wed, Apr 24, 2013 at 11:59 AM, Daniel P. Berrange d...@berrange.com wrote:
On Wed, Apr 24, 2013 at 11:48:35AM -0400, Lorin Hochstein wrote:
In the docs, we describe how to configure KVM block-based live migration,
and it
Hi community,
I created a port which contains two subnets,
and quantum returned the following JSON:
{
  "port": {
    "status": "DOWN",
    "name": "",
    "admin_state_up": true,
    "network_id": "f9d3bd8e-377b-4f21-bfc6-64ae4257e44d",
    "tenant_id": "82da519b676d400ab24e9ee38d138c3c",
I have encountered other problems too.
First of all, when starting the Central Agent I got Glance endpoint
404 Not Found errors. As Julien pointed out
(https://bugs.launchpad.net/ceilometer/+bug/1083104), I removed the
v1 from the Glance URLs and it worked well.
Secondly, when
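For reference, one way to recreate the Glance endpoint without the /v1 suffix using the keystone CLI; a sketch, where the host and the IDs are placeholders:

```shell
# Find the current Glance endpoint and the glance service ID
keystone endpoint-list
keystone service-list
# Delete the endpoint that still carries the /v1 suffix
keystone endpoint-delete OLD_ENDPOINT_ID
# Recreate it pointing at the bare Glance URL
keystone endpoint-create --region RegionOne --service-id GLANCE_SERVICE_ID \
  --publicurl http://controller:9292 \
  --internalurl http://controller:9292 \
  --adminurl http://controller:9292
```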
For the sake of completeness, the whole thing was a false alarm; it has
nothing to do with db.pending files and it has been documented here:
https://bugs.launchpad.net/swift/+bug/1172358
On Tue, Apr 23, 2013 at 11:39 AM, Sergio Rubio rubi...@frameos.org wrote:
Howdy folks,
While populating an
Hi,
Does anybody know where to find all the presentation material for the
sessions held during the Havana summit?
I'm especially interested in the Nova-related ones.
/Anton
___
Mailing list: https://launchpad.net/~openstack
Post to :
http://www.openstack.org/summit/portland-2013/session-videos
Jarret
From: Openstack
[mailto:openstack-bounces+jarret.raim=rackspace@lists.launchpad.net] On
Behalf Of Anton Massoud
Sent: Thursday, April 25, 2013 6:14 AM
To: openstack@lists.launchpad.net
Subject: [Openstack] Havana summit
Dear all,
I am running openstack grizzly on 2 nodes, with multi-host options and
without conductor.
Node 1 has compute, api, network, etc
Node 2 has compute, network
When launching new instances, those launched on the 1st node come up
fine. Those on node 2 get stuck in BUILD.
Hi,
The problem was the nbd module. I installed it and now it works.
Date: Thu, 25 Apr 2013 02:47:39 -0700
From: gabriel_sta...@yahoo.com
Subject: Re: [Openstack] nova-network no ip for vm
To: arin...@live.com; openstack@lists.launchpad.net
Hi,
What I would do now is to run a tcpdump on the interface
On Thursday, April 25, 2013, Riki Arslan wrote:
I have encountered other problems too.
First of all, when starting the Central Agent I got Glance endpoint
404 Not Found errors. As Julien pointed out
(https://bugs.launchpad.net/ceilometer/+bug/1083104), I removed the
v1 from the
I thought Ceilometer did not set a dependency on any DB drivers. I have
installed the Mongo driver using `sudo pip install pymongo`.
Regarding the current problem; the traceback is as follows:
Traceback (most recent call last):
File "/usr/local/bin/ceilometer-api", line 5, in <module>
Hi all, especially our friendly neighborhood Horizon developers -
Can you better explain to Jamie how keypair injection works and what
Compute API commands correspond with the Dashboard creation and association
of keypairs?
Thanks,
Anne
-- Forwarded message --
From: Jamie
On Thu, Apr 25, 2013 at 8:34 AM, Anne Gentle a...@openstack.org wrote:
Can you better explain to Jamie how keypair injection works and what Compute
API commands correspond with the Dashboard creation and association of
keypairs?
I'll take a stab at the basic use of keypairs from the Nova CLI.
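A minimal sketch of that workflow with the Nova CLI; the key, image, flavor, and server names are illustrative:

```shell
# Generate a keypair; Nova keeps the public key, we save the private key
nova keypair-add mykey > mykey.pem
chmod 600 mykey.pem
# List the keypairs registered for this user
nova keypair-list
# Boot an instance with the keypair; the public key is injected at boot
# (typically via the metadata service / cloud-init)
nova boot --flavor m1.small --image ubuntu-12.04 --key-name mykey myserver
# Once ACTIVE, log in with the private key
ssh -i mykey.pem ubuntu@INSTANCE_IP
```

The Dashboard's "Create Keypair" and "Launch Instance" forms map onto keypair-add and boot --key-name respectively.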
Hi,
today there will be the regular Savanna community meeting at 18:00 UTC in
IRC channel #openstack-meeting-alt on freenode.
Come along.
Sergey Lukjanov
Hi,
Was anyone able to install the kvm package on Ubuntu Server 13.04?
It keeps switching to another package named qemu-system-x86.
Message returned when I do `apt-get install kvm`: Note, selecting
'qemu-system-x86' instead of 'kvm'
Without KVM, I can't create VMs!
Any help or advice is highly
Hi Bilel,
qemu-system-x86 is a virtual package which includes the required kvm
libraries...
Regards,
Jaren
On Thu, Apr 25, 2013 at 9:27 AM, skible.openst...@gmail.com
skible.openst...@gmail.com wrote:
Hi,
Was anyone able to install the kvm package on Ubuntu Server 13.04?
It keeps on
Hi,
If your CPU doesn't support virtualization, kvm won't run properly;
maybe that's why qemu-system-x86 is selected.
Other than that, I have no idea why.
Cheers,
Heling
On Thu, Apr 25, 2013 at 11:27 PM, skible.openst...@gmail.com
skible.openst...@gmail.com wrote:
Hi,
Was anyone able
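A quick way to verify hardware-virtualization support before blaming the package; a sketch, assuming Ubuntu:

```shell
# A non-zero count means the CPU advertises VT-x (vmx) or AMD-V (svm)
egrep -c '(vmx|svm)' /proc/cpuinfo
# kvm-ok from the cpu-checker package gives a definitive yes/no,
# including whether virtualization is disabled in the BIOS
sudo apt-get install -y cpu-checker
kvm-ok
```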
Hi,
I mistakenly installed nova-network on a compute node and it became the
default one. Though I can start nova-network on the controller, it does not
start dnsmasq. How do I disable nova-network on the compute node and
enable the one on the controller?
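One way to do this on Debian/Ubuntu; a minimal sketch, assuming sysvinit-style service management:

```shell
# On the compute node: stop nova-network and keep it from starting at boot
sudo service nova-network stop
sudo update-rc.d -f nova-network remove
# On the controller: restart nova-network so it spawns dnsmasq again
sudo service nova-network restart
```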
Hi,
Actually the problem was solved by installing the nbd module. Now I have one
weird problem. I mistakenly installed nova-network on a compute node and it
became the default one. Can you tell me how to fix it?
To: arin...@live.com
Subject: Re: [Openstack] nova-network no ip for vm
From:
When I try to snapshot multiple instances at the same time, weird things
happen. Should I be able to do this, or do I really need to do only one at a
time?
thanks
s
--
Steve
Arindam,
You need to make sure that nova-network is not started and that quantum is
started on your control node. Also ensure that your /etc/nova/nova.conf
is configured to use quantum and a plugin.
For my setup I have:
network_api_class = nova.network.quantumv2.api.API
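For context, a fuller sketch of the quantum-related section of nova.conf; the hostnames and credentials are illustrative, and option names follow Grizzly-era nova:

```
network_api_class = nova.network.quantumv2.api.API
quantum_url = http://controller:9696
quantum_auth_strategy = keystone
quantum_admin_auth_url = http://controller:35357/v2.0
quantum_admin_tenant_name = service
quantum_admin_username = quantum
quantum_admin_password = QUANTUM_PASS
```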
Thanks.
To: arin...@live.com
CC: openstack@lists.launchpad.net
Subject: RE: [Openstack] nova-network no ip for vm
From: jsbry...@us.ibm.com
Date: Thu, 25 Apr 2013 11:59:10 -0500
Arindam,
You need to make sure that nova-network
is not started and that quantum is started on your control node.
On Apr 25, 2013, at 7:48 AM, Daniel Ellison dan...@syrinx.net wrote:
I've come across a situation that has stumped me. I've searched the archives
here but can find no solution. I /did/ find a bug filed in Launchpad
(https://bugs.launchpad.net/nova/+bug/1169439) that may be what's happening,
I'd like to be able to
1. checkpoint a running virtual machine
2. run a test
3. rollback to the checkpoint from step 1
Has anyone had experience of doing this using OpenStack (such as with
snapshots) ?
Tim
On Thu, Apr 25, 2013 at 2:06 PM, Tim Bell tim.b...@cern.ch wrote:
I'd like to be able to
1. checkpoint a running virtual machine
2. run a test
3. rollback to the checkpoint from step 1
Has anyone had experience of doing this using OpenStack (such as with
snapshots) ?
For slow cycling
Hi,
So Ubuntu 13.04 has just been released. Everything is fine except
for virtualization using KVM.
After installing nova-compute-kvm, nova-compute does not start, and this
is what I found in my log file:
Connection to libvirt failed: Failed to connect socket to
On Apr 23, 2013, at 8:44 AM, Daniel Ellison dan...@syrinx.net wrote:
I've slowly been configuring a single server with OpenStack for a
proof-of-concept I want to present to my managers. This single server is
co-located and directly exposed to the Internet. It has one active Ethernet
port
So I had a tenant that was assigned a floating IP. I deleted the project
before I freed up the floating IP. Now I can't clean up a subnet, as it
still thinks there is an IP attached to it.
Any ideas how to get rid of a subnet when it thinks there is a
Hi all,
We are participating again in the GNOME Outreach Program for Women and are
also in the early planning stages for a mentor program for OpenStack. I'd
like to find out who in the community is interested in becoming a mentor
within the OpenStack projects.
If you're interested in being a
On Thu, 25 Apr 2013, skible.openst...@gmail.com wrote:
Hi,
Was anyone able to install the kvm package on Ubuntu Server 13.04?
It keeps switching to another package named qemu-system-x86.
Message returned when I do `apt-get install kvm`: Note, selecting
'qemu-system-x86' instead of 'kvm'
Hi,
My experience with virtualization has been limited to the ProxmoxVE
platform, and with it I'm quite adept after working with it for a few years.
However, I'm looking forward to migrating to OpenStack and would like to
find a way to learn as much about it as I can, in as efficient a manner as
On Thu, Apr 25, 2013 at 2:15 PM, Chris Bartels ch...@christopherbartels.com
wrote:
Hi,
My experience with virtualization has been limited to the ProxmoxVE
platform, and with it I’m quite adept after working with it for a few
years, however I’m looking forward to migrating to
On Thu, Apr 25, 2013 at 8:37 AM, Riki Arslan riki.ars...@cloudturk.net wrote:
I thought Ceilometer did not set a dependency on any DB drivers. I have
installed the Mongo driver using `sudo pip install pymongo`.
Ceilometer does use a database. You have to install the right driver. If
you want
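For reference, a minimal sketch of pointing Ceilometer at MongoDB; the connection string is illustrative, and the option name follows Grizzly-era configs:

```
# /etc/ceilometer/ceilometer.conf
[DEFAULT]
database_connection = mongodb://localhost:27017/ceilometer
```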
On 04/23/2013 10:15 AM, Steven Hardy wrote:
Repost to correctly include openstack-dev on Cc
On Tue, Apr 23, 2013 at 02:45:31PM +0100, Steven Hardy wrote:
Hi!
I'd like to propose myself as a candidate for the Heat PTL role, ref
Thierry's nominations email [1]
I've been professionally involved
Hi community,
I am trying to list all volumes attached to a VM, given a VM ID.
I am not sure whether there is any command-line filter to achieve this,
so I use the API instead.
Here is the reference:
http://api.openstack.org/api-ref.html
The invoked API is
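For what it's worth, a sketch of both routes; the server ID, token, and endpoint variables are placeholders:

```shell
# CLI: filter the volume list by the attached server's ID
nova volume-list | grep "$SERVER_ID"
# API: the os-volume_attachments resource lists attachments for one server
curl -s -H "X-Auth-Token: $TOKEN" \
  "$COMPUTE_ENDPOINT/servers/$SERVER_ID/os-volume_attachments"
```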
I don't know of any elegant solution, but one option may be to go into the
database and delete the individual records along with all the
interconnected records.
On Thu, Apr 25, 2013 at 12:13 PM, Steve Heistand steve.heist...@nasa.gov wrote:
so I had a
You should be able to delete the floating ip via an admin user and then
delete the subnet.
Aaron
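A sketch of that cleanup with the quantum CLI, run as an admin user; the IDs are placeholders:

```shell
# Find the orphaned floating IP left behind by the deleted tenant
quantum floatingip-list
# Release it, after which the subnet can be deleted
quantum floatingip-delete FLOATINGIP_ID
quantum subnet-delete SUBNET_ID
```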
On Thu, Apr 25, 2013 at 12:13 PM, Steve Heistand steve.heist...@nasa.gov wrote:
so I had a tenant that was assigned a floating IP. I deleted the
See http://10.189.74.7:8080/job/cloud-archive_folsom_version-drift/14641/
--
Started by timer
Building remotely on pkg-builder in workspace
http://10.189.74.7:8080/job/cloud-archive_folsom_version-drift/ws/
[cloud-archive_folsom_version-drift] $ /bin/bash
Title: precise_havana_keystone_trunk
General Information: BUILD FAILURE
Build URL: https://jenkins.qa.ubuntu.com/job/precise_havana_keystone_trunk/35/
Project: precise_havana_keystone_trunk
Date of build: Thu, 25 Apr 2013 16:31:36 -0400
Build duration: 2 min 15 sec
Build cause: Started by an SCM change
See http://10.189.74.7:8080/job/cloud-archive_grizzly_version-drift/4/
--
Started by timer
Building remotely on pkg-builder in workspace
http://10.189.74.7:8080/job/cloud-archive_grizzly_version-drift/ws/
[cloud-archive_grizzly_version-drift] $ /bin/bash
Title: precise_havana_quantum_trunk
General Information: BUILD FAILURE
Build URL: https://jenkins.qa.ubuntu.com/job/precise_havana_quantum_trunk/60/
Project: precise_havana_quantum_trunk
Date of build: Fri, 26 Apr 2013 01:01:36 -0400
Build duration: 1 min 16 sec
Build cause: Started by an SCM change