[Openstack] Can I configure a cluster to do this?

2013-06-01 Thread Chris Bartels
Hi,

 

I have a server & a laptop I'd like to use to make a cluster with OpenStack,
so I can run a Windows 7 VM on either one. That way, when I'm home I can run
the VM on the fast server & RDP to it from another desktop at the house, and
when I'm out I can migrate it to the laptop & RDP to it locally from the
laptop's KDE desktop.

 

I'd be using the Grizzly install guide on GitHub to install them.

 

My concern is that I don't know if the OpenStack node will function when it
gets disconnected from the other node of the cluster. Can I connect &
disconnect them at will & still run them as if they were standalone nodes?

 

Another thing I'd like to do with this cluster is to have the storage on
each node configured such that a copy of the data stored in the system is
available locally on each node, so that wherever the Windows 7 VM is located
it is always accessing a local copy of the data. Somehow, when the laptop
comes & goes, the copy on the server would have to be updated to reflect the
changes made on the laptop's copy while it was out, & vice-versa if changes
are made to the server node's copy while the laptop node is disconnected.

 

Can this be done?

 

Thanks in advance for your advice.

 

-Chris

___
Mailing list: https://launchpad.net/~openstack
Post to : openstack@lists.launchpad.net
Unsubscribe : https://launchpad.net/~openstack
More help   : https://help.launchpad.net/ListHelp


Re: [Openstack] VM disk affinity during live migration

2013-06-01 Thread Chris Bartels
Thanks. This approach looks promising.

 

From: Alex Glikson [mailto:glik...@il.ibm.com] 
Sent: Saturday, June 01, 2013 2:18 AM
To: openstack@lists.launchpad.net; ch...@christopherbartels.com
Subject: Re: [Openstack] VM disk affinity during live migration

 

Right. A slightly different approach (requiring admin effort) would be to
define two host aggregates -- one reporting SSD as one of the capabilities
of its hosts, and another one reporting SAS. Then the admin can attach the
corresponding capability as an extra spec of an instance flavor, and use
the Filter Scheduler with AggregateInstanceExtraSpecsFilter to make sure
instances would not be placed on hosts that belong to the wrong aggregate.
All this can be done already (see
http://docs.openstack.org/trunk/openstack-compute/admin/content/host-aggregates.html).
The missing piece (which is, I believe, going to be resolved in
Havana) would be to prevent the admin from live-migrating an instance to the
wrong location manually (but this wouldn't be an issue if the admin
live-migrates without explicitly specifying destination, as Jay pointed out).
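
As a rough sketch with the Grizzly-era nova CLI, the setup could look like
the following (the aggregate, host, and flavor names are illustrative, not
anything defined in this thread):

# Create one aggregate per disk type and tag it with a capability key
nova aggregate-create ssd-hosts
nova aggregate-set-metadata ssd-hosts disktype=ssd
nova aggregate-create sas-hosts
nova aggregate-set-metadata sas-hosts disktype=sas
# Put each compute host into the matching aggregate
nova aggregate-add-host ssd-hosts compute-ssd-01
nova aggregate-add-host sas-hosts compute-sas-01
# Attach the capability as a flavor extra spec; instances of this flavor
# should then only land on hosts of the matching aggregate
nova flavor-key ssd.small set disktype=ssd
# And in nova.conf on the scheduler node, enable the filter, e.g.:
# scheduler_default_filters=AggregateInstanceExtraSpecsFilter,AvailabilityZoneFilter,RamFilter,ComputeFilter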

Regards, 
Alex 




From: Lau Jay
To: ch...@christopherbartels.com
Cc: Alex Glikson/Haifa/IBM@IBMIL, openstack@lists.launchpad.net
Date: 01/06/2013 07:39 AM
Subject: Re: [Openstack] VM disk affinity during live migration




Hi Chris, 

I think that you are using live migration without specifying a target host,
right? OpenStack cannot handle your case for now, but it has a very flexible
framework that lets you implement your own migration logic.

1) Make sure the disk type (SSD or SAS) is reported by nova compute; you may
need to update the nova compute driver to report those metrics.
2) Add a new scheduler filter that checks your SSD/SAS placement logic, as in
the sketch below.
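
A sketch of wiring in such a custom filter via nova.conf (DiskTypeFilter and
its module path are hypothetical names for your own filter class, not an
existing nova filter):

# /etc/nova/nova.conf on the scheduler node
# keep the built-in filters available, plus your custom one
scheduler_available_filters = nova.scheduler.filters.all_filters
scheduler_available_filters = mysite.scheduler.disk_type_filter.DiskTypeFilter
# put it into the active filter chain
scheduler_default_filters = DiskTypeFilter,AvailabilityZoneFilter,RamFilter,ComputeFilter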

Thanks,

Jay



2013/6/1 Chris Bartels  
Thanks for your reply. 

  

Your reply implies that it's possible to ensure that the disks stay on the
right target manually. What would you have to do to make sure this happens?


  

The SAS space is 228GB & the SSD space is only 64GB. 

  

So the SAS disk image wouldn't fit on the SSD, but the SSD image would fit
on the SAS, so the migration system I imagine wouldn't be able to screw it
up since it would have to keep the large SAS image on the SAS target, and
would then only be able to place the smaller SSD image on the SSD. 

  

But you say it's a work in progress, so anything could happen.

  

What does the actual process look like when I would migrate a VM from one
server to another? What exactly would I have to do to make sure it went
right? 

  

Thanks. 

  

From: Alex Glikson [mailto:glik...@il.ibm.com]
Sent: Friday, May 31, 2013 7:34 AM
To: ch...@christopherbartels.com
Cc: openstack@lists.launchpad.net
Subject: Re: [Openstack] VM disk affinity during live migration

  

There is ongoing work to refactor the live migration code, including use of
the scheduler to find/validate placement. At the moment the admin needs to
make sure he/she is doing the right thing.

Regards, 
Alex 



From:"Chris Bartels" < 
ch...@christopherbartels.com> 
To:< 
openstack@lists.launchpad.net>, 
Date:31/05/2013 02:12 PM 
Subject:[Openstack] VM disk affinity during live migration 
Sent by:"Openstack" <

openstack-bounces+glikson=il.ibm@lists.launchpad.net> 

  _  




Hi, 
  
Please forgive me if I've asked this already here on the list; I didn't get a
reply & I really need an answer, so I'm asking again in simpler terms this
time.
  
If I have a cluster of servers, each with spindle drives & SSDs, how can I
be sure VM disks which reside on spindle drives migrate to spindle drives &
those which reside on SSDs stay on SSDs as they migrate between servers? 
  
Thanks, 
Chris






Re: [Openstack] Benefits for moving live migration/resize/code migration/provision to conductor

2013-06-01 Thread Alex Glikson
One of the goals was to separate the instance placement calculation
logic from the orchestration logic, having each in a separate runtime (see
https://blueprints.launchpad.net/nova/+spec/query-scheduler). Scheduler
and conductor (respectively) seemed like a reasonable choice.

Regards,
Alex




From: Lau Jay
To: Michael Still
Cc: OpenStack general mailing list
Date: 01/06/2013 06:19 PM
Subject: Re: [Openstack] Benefits for moving live migration/resize/code migration/provision to conductor
Sent by: "Openstack"




Hi Michael and other Stackers,

Sorry, one more question: for provisioning a VM instance there is no
interaction between compute nodes, so why move the provisioning logic to the
conductor as well?

Thanks,
Jay


2013/6/1 Lau Jay 
Thanks Michael for the answer; I just want to dig a bit more.

From your answer, it seems that we do not want libvirt on one node to open
a connection to the other. But from the Gerrit code diff, I did not notice
any change in nova compute, only the move of the live
migration/resize/code migration logic from the scheduler to the conductor.
The conductor still calls nova compute directly, and once the request is
cast to nova compute, libvirt on one node still opens a connection to the
other, so what is the difference?

Thanks,
Jay



2013/6/1 Michael Still 
IIRC the discussion from the summit, there was concern about compute
nodes talking directly to each other. The way live migration works in
libvirt is that the libvirt on one node opens up a connection to the
other and then streams the instance across. If this is bounced off a
conductor, then it makes firewall rules much easier to construct.

Cheers,
Michael

On Sat, Jun 1, 2013 at 2:53 PM, Lau Jay  wrote:
> Hi Stackers,
>
> I noticed that there are some blueprints trying to move the logic of live
> migration/resize/code migration/provision from nova scheduler to nova
> conductor, but the blueprint did not describe clearly the benefits of doing
> so, can some experts give some explanation on this?
>
> I know the original design for nova conductor is for a non-db nova compute,
> but what's the reason of moving scheduling logic to nova conductor?
>
> Thanks,
>
> Jay
>




Re: [Openstack] Benefits for moving live migration/resize/code migration/provision to conductor

2013-06-01 Thread Lau Jay
Hi Michael and other Stackers,

Sorry, one more question: for provisioning a VM instance there is no
interaction between compute nodes, so why move the provisioning logic to the
conductor as well?

Thanks,
Jay


2013/6/1 Lau Jay 

> Thanks Michael for the answer; I just want to dig a bit more.
>
> From your answer, it seems that we do not want libvirt on one node to open
> a connection to the other. But from the Gerrit code diff, I did not notice
> any change in nova compute, only the move of the live
> migration/resize/code migration logic from the scheduler to the conductor.
> The conductor still calls nova compute directly, and once the request is
> cast to nova compute, libvirt on one node still opens a connection to the
> other, so what is the difference?
>
> Thanks,
> Jay
>
>
>
> 2013/6/1 Michael Still 
>
>> IIRC the discussion from the summit, there was concern about compute
>> nodes talking directly to each other. The way live migration works in
>> libvirt is that the libvirt on one node opens up a connection to the
>> other and then streams the instance across. If this is bounced off a
>> conductor, then it makes firewall rules much easier to construct.
>>
>> Cheers,
>> Michael
>>
>> On Sat, Jun 1, 2013 at 2:53 PM, Lau Jay  wrote:
>> > Hi Stackers,
>> >
>> > I noticed that there are some blueprints trying to move the logic of live
>> > migration/resize/code migration/provision from nova scheduler to nova
>> > conductor, but the blueprint did not describe clearly the benefits of doing
>> > so, can some experts give some explanation on this?
>> >
>> > I know the original design for nova conductor is for a non-db nova compute,
>> > but what's the reason of moving scheduling logic to nova conductor?
>> >
>> > Thanks,
>> >
>> > Jay
>> >
>>
>
>


Re: [Openstack] Benefits for moving live migration/resize/code migration/provision to conductor

2013-06-01 Thread Lau Jay
Thanks Michael for the answer; I just want to dig a bit more.

From your answer, it seems that we do not want libvirt on one node to open
a connection to the other. But from the Gerrit code diff, I did not notice
any change in nova compute, only the move of the live
migration/resize/code migration logic from the scheduler to the conductor.
The conductor still calls nova compute directly, and once the request is
cast to nova compute, libvirt on one node still opens a connection to the
other, so what is the difference?

Thanks,
Jay



2013/6/1 Michael Still 

> IIRC the discussion from the summit, there was concern about compute
> nodes talking directly to each other. The way live migration works in
> libvirt is that the libvirt on one node opens up a connection to the
> other and then streams the instance across. If this is bounced off a
> conductor, then it makes firewall rules much easier to construct.
>
> Cheers,
> Michael
>
> On Sat, Jun 1, 2013 at 2:53 PM, Lau Jay  wrote:
> > Hi Stackers,
> >
> > I noticed that there are some blueprints trying to move the logic of live
> > migration/resize/code migration/provision from nova scheduler to nova
> > conductor, but the blueprint did not describe clearly the benefits of doing
> > so, can some experts give some explanation on this?
> >
> > I know the original design for nova conductor is for a non-db nova compute,
> > but what's the reason of moving scheduling logic to nova conductor?
> >
> > Thanks,
> >
> > Jay
> >


Re: [Openstack] VM disk affinity during live migration

2013-06-01 Thread Lau Jay
Cool, Alex! I think that this is the best way for Chris.

Thanks,

Jay



2013/6/1 Alex Glikson 

> Right. A slightly different approach (requiring admin effort) would be to
> define two host aggregates -- one reporting SSD as one of the capabilities
> of its hosts, and another one reporting SAS. Then the admin can attach the
> corresponding capability as an extra spec of an instance flavor, and use
> the Filter Scheduler with AggregateInstanceExtraSpecsFilter to make sure
> instances would not be placed on hosts that belong to the wrong aggregate.
> All this can be done already (see
> http://docs.openstack.org/trunk/openstack-compute/admin/content/host-aggregates.html).
> The missing piece (which is, I believe, going to be resolved in Havana)
> would be to prevent the admin from live-migrating an instance to the wrong
> location manually (but this wouldn't be an issue if the admin live-migrates
> without explicitly specifying destination, as Jay pointed out).
>
> Regards,
> Alex
>
>
>
>
> From: Lau Jay
> To: ch...@christopherbartels.com
> Cc: Alex Glikson/Haifa/IBM@IBMIL, openstack@lists.launchpad.net
> Date: 01/06/2013 07:39 AM
> Subject: Re: [Openstack] VM disk affinity during live migration
>
>
>
> Hi Chris,
>
> I think that you are using live migration without specifying a target host,
> right? OpenStack cannot handle your case for now, but it has a very flexible
> framework that lets you implement your own migration logic.
>
> 1) Make sure the disk type (SSD or SAS) is reported by nova compute; you may
> need to update the nova compute driver to report those metrics.
> 2) Add a new scheduler filter that checks your SSD/SAS placement logic.
>
> Thanks,
>
> Jay
>
>
>
> 2013/6/1 Chris Bartels <ch...@christopherbartels.com>
> Thanks for your reply.
>
>
>
> Your reply implies that it's possible to ensure that the disks stay on the
> right target manually. What would you have to do to make sure this happens?
>
>
>
> The SAS space is 228GB & the SSD space is only 64GB.
>
>
>
> So the SAS disk image wouldn’t fit on the SSD, but the SSD image would fit
> on the SAS, so the migration system I imagine wouldn’t be able to screw it
> up since it would have to keep the large SAS image on the SAS target, and
> would then only be able to place the smaller SSD image on the SSD.
>
>
>
> But you say it’s a work in progress, so anything could happen.
>
>
>
> What does the actual process look like when I would migrate a VM from one
> server to another? What exactly would I have to do to make sure it went
> right?
>
>
>
> Thanks.
>
>
>
> From: Alex Glikson [mailto:glik...@il.ibm.com]
> Sent: Friday, May 31, 2013 7:34 AM
> To: ch...@christopherbartels.com
> Cc: openstack@lists.launchpad.net
> Subject: Re: [Openstack] VM disk affinity during live migration
>
>
>
> There is ongoing work to refactor the live migration code, including use of
> the scheduler to find/validate placement. At the moment the admin needs to
> make sure he/she is doing the right thing.
>
> Regards,
> Alex
>
>
>
> From:"Chris Bartels" 
> <*ch...@christopherbartels.com*
> >
> To:<*openstack@lists.launchpad.net*>,
>
> Date:31/05/2013 02:12 PM
> Subject:[Openstack] VM disk affinity during live migration
> Sent by:"Openstack" <*
> openstack-bounces+glikson=il.ibm@lists.launchpad.net*
> >
> --
>
>
>
>
> Hi,
>
> Please forgive me if I’ve asked this already here on the list; I didn’t get a
> reply & I really need an answer, so I’m asking again in simpler terms this
> time.
>
> If I have a cluster of servers, each with spindle drives & SSDs, how can I
> be sure VM disks which reside on spindle drives migrate to spindle drives &
> those which reside on SSDs stay on SSDs as they migrate between servers?
>
> Thanks,
> Chris
>

[Openstack] l3-agent iptables-restore: line 23 failed

2013-06-01 Thread Martin Mailand
Hi List,

if I add my router's gateway to an external network, I get an error in
the l3-agent.log about a failure in iptables-restore.
As far as I know iptables-restore gets its input on stdin, so how
can I see the iptables rules which failed to apply?
How can I debug this further?
The full log is attached.

-martin

Command:
root@controller:~# quantum router-gateway-set
ac1a85c9-d5e1-4976-a16b-14ccdac49c17 61bf1c06-aea7-4966-9718-2be029abc18d
Set gateway for router ac1a85c9-d5e1-4976-a16b-14ccdac49c17
root@controller:~#

Log:

2013-06-01 16:07:35 DEBUG [quantum.agent.linux.utils] Running
command: ['sudo', 'quantum-rootwrap', '/etc/quantum/rootwrap.conf',
'ip', 'netns', 'exec', 'qrouter-ac1a85c9-d5e1-4976-a16b-14ccdac49c17',
'iptables-restore']
2013-06-01 16:07:35 DEBUG [quantum.agent.linux.utils]
Command: ['sudo', 'quantum-rootwrap', '/etc/quantum/rootwrap.conf',
'ip', 'netns', 'exec', 'qrouter-ac1a85c9-d5e1-4976-a16b-14ccdac49c17',
'iptables-restore']
Exit code: 1
Stdout: ''
Stderr: 'iptables-restore: line 23 failed\n'
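
One way to dig into this by hand (a sketch; the namespace name is taken from
the log above, and iptables-restore's --test flag depends on the iptables
version your distribution ships): the agent builds the rule set in memory and
pipes it to iptables-restore over stdin, so you can dump the rules currently
loaded in the namespace with iptables-save, and replay a candidate rules file
with --test to find the failing line without applying anything:

# dump what is currently loaded in the router's namespace
ip netns exec qrouter-ac1a85c9-d5e1-4976-a16b-14ccdac49c17 iptables-save > /tmp/rules.txt
# parse-check a rules file; --test reports the failing line without applying
ip netns exec qrouter-ac1a85c9-d5e1-4976-a16b-14ccdac49c17 iptables-restore --test < /tmp/rules.txt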


quantum router-show ac1a85c9-d5e1-4976-a16b-14ccdac49c17
+-----------------------+--------------------------------------------------------+
| Field                 | Value                                                  |
+-----------------------+--------------------------------------------------------+
| admin_state_up        | True                                                   |
| external_gateway_info | {"network_id": "61bf1c06-aea7-4966-9718-2be029abc18d"} |
| id                    | ac1a85c9-d5e1-4976-a16b-14ccdac49c17                   |
| name                  | router1                                                |
| routes                |                                                        |
| status                | ACTIVE                                                 |
| tenant_id             | b5e5af3504964760ad51c4980d30f89a                       |
+-----------------------+--------------------------------------------------------+


quantum net-show 61bf1c06-aea7-4966-9718-2be029abc18d
+---+--+
| Field | Value|
+---+--+
| admin_state_up| True |
| id| 61bf1c06-aea7-4966-9718-2be029abc18d |
| name  | ext_net  |
| provider:network_type | gre  |
| provider:physical_network |  |
| provider:segmentation_id  | 2|
| router:external   | True |
| shared| False|
| status| ACTIVE   |
| subnets   | ccde4243-5857-4ee2-957e-a11304366f85 |
| tenant_id | 43b2bbbf5daf4badb15d67d87ed2f3dc |
+---+--+
2013-06-01 16:07:03 DEBUG [quantum.openstack.common.rpc.amqp] Making synchronous call on q-plugin ...
2013-06-01 16:07:03 DEBUG [quantum.openstack.common.rpc.amqp] MSG_ID is 5eded77d48f9461aa029b6dfe3a72a2f
2013-06-01 16:07:03 DEBUG [quantum.openstack.common.rpc.amqp] UNIQUE_ID is a97579c5bb304cf2b8e99099bfdfeca6.
2013-06-01 16:07:07 DEBUG [quantum.openstack.common.rpc.amqp] Making synchronous call on q-plugin ...
2013-06-01 16:07:07 DEBUG [quantum.openstack.common.rpc.amqp] MSG_ID is cc0672ba1cc749059d6c8190a18d4721
2013-06-01 16:07:07 DEBUG [quantum.openstack.common.rpc.amqp] UNIQUE_ID is 7534a2f4def14829b35186215e2aa146.
2013-06-01 16:07:11 DEBUG [quantum.openstack.common.rpc.amqp] Making synchronous call on q-plugin ...
2013-06-01 16:07:11 DEBUG [quantum.openstack.common.rpc.amqp] MSG_ID is 2d5cde249edd4905868c401bb5075e60
2013-06-01 16:07:11 DEBUG [quantum.openstack.common.rpc.amqp] UNIQUE_ID is 5d0d2d2554cf4ae1b4d2485b181fe9ad.
2013-06-01 16:07:15 DEBUG [quantum.openstack.common.rpc.amqp] Making synchronous call on q-plugin ...
2013-06-01 16:07:15 DEBUG [quantum.openstack.common.rpc.amqp] MSG_ID is e456789e89c843b8b098549f0a6dffd8
2013-06-01 16:07:15 DEBUG [quantum.openstack.common.rpc.amqp] UNIQUE_ID is cd6d6fe271534ea4b0bed159238b9aa9.
2013-06-01 16:07:19 DEBUG [quantum.openstack.common.rpc.amqp] Making synchronous call on q-plugin ...
2013-06-01 16:07:19 DEBUG [quantum.openstack.common.rpc.amqp] MSG_ID is 99f0c003e361442baa9acd04f1f22cd2
2013-06-01 16:07:19 DEBUG [quantum.openstack.common.rpc.amqp] UNIQUE_ID is 076d0a178ddc4b97af53ad4bd979c9bc.
2013-06-01 16:07:23 DEBUG [quantum.openstack.common.rpc.amqp] Making synchronous call on q-plugin ...
2013-06-01 16:07:23 DEBUG [quantum.openstack.common.rpc.amqp] MSG_ID is 822f072b44874a7b93cd2d222574343f
2013-06-01 16:07:23 DEBUG [quantum.openstack.common.rpc.amqp] UNIQUE_ID is 7513ed78f63c43a8950d6837856bcad8.
2013-06-01 16:07:27 DEBUG [quantum.openstack.common.periodic_task] Running periodic task L3NATAgentWithStateReport._sync_routers_task
2013-06-01 16:07:27 DEBUG [qu

Re: [Openstack] Benefits for moving live migration/resize/code migration/provision to conductor

2013-06-01 Thread Michael Still
IIRC the discussion from the summit, there was concern about compute
nodes talking directly to each other. The way live migration works in
libvirt is that the libvirt on one node opens up a connection to the
other and then streams the instance across. If this is bounced off a
conductor, then it makes firewall rules much easier to construct.
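
To illustrate the firewall side: with direct node-to-node migration, every
compute node needs inbound rules like the following from every peer (standard
libvirt/QEMU default ports; the source subnet is a placeholder):

# allow peer compute nodes to reach libvirtd (TCP 16509 unencrypted, 16514 TLS)
iptables -A INPUT -s 192.0.2.0/24 -p tcp --dport 16509 -j ACCEPT
# allow the default QEMU live-migration port range
iptables -A INPUT -s 192.0.2.0/24 -p tcp --dport 49152:49215 -j ACCEPT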

Cheers,
Michael

On Sat, Jun 1, 2013 at 2:53 PM, Lau Jay  wrote:
> Hi Stackers,
>
> I noticed that there are some blueprints trying to move the logic of live
> migration/resize/code migration/provision from nova scheduler to nova
> conductor, but the blueprint did not describe clearly the benefits of doing
> so, can some experts give some explanation on this?
>
> I know the original design for nova conductor is for a non-db nova compute,
> but what's the reason of moving scheduling logic to nova conductor?
>
> Thanks,
>
> Jay
>



Re: [Openstack] Remote access to the windows virtual machine

2013-06-01 Thread Narayanan, Krishnaprasad
Currently, we are trying to deploy a Windows image on our OpenStack cloud for 
testing biological software for the students at our university. Is it possible 
to extend the license?
How can we get a licensed version of the Windows 2012 Server image with the 
cloud-init package?

From: Peter Pouliot [mailto:ppoul...@microsoft.com]
Sent: Saturday, 1 June 2013 06:54
To: Brian Schott
Cc: openstack@lists.launchpad.net; Narayanan, Krishnaprasad
Subject: RE: [Openstack] Remote access to the windows virtual machine

Actually, images for those hypervisors should already be on the Cloudbase site, 
and I encourage you to use those.

I have some puppet modules, still a work in progress, for automating the 
Windows ADK and deployment process. But you are more than welcome to check 
them out.


https://github.com/ppouliot/ppouliot-petools

This builds a fresh WinPE image and the files necessary to PXE-boot a Windows 
instance via the ADK dynamically.

Also

https://github.com/ppouliot/ppouliot-quartermaster

is my PXE infrastructure, including Windows unattended-install puppet templates.

Feel free to check them out. Also, I'm more than happy to discuss further if 
interested.

Please don't hesitate to contact me directly.



Sent from my Verizon Wireless 4G LTE Smartphone



-------- Original message --------
From: Brian Schott <brian.sch...@nimbisservices.com>
Date: 05/31/2013 7:21 PM (GMT-08:00)
To: Peter Pouliot <ppoul...@microsoft.com>
Cc: openstack@lists.launchpad.net, Krishnaprasad Narayanan <naray...@uni-mainz.de>
Subject: RE: [Openstack] Remote access to the windows virtual machine

Good catch. Is the license associated with the CloudInit port, or the 
particular image build, or an issue redistributing Windows?
While testing our GRID K2 GPU under different hypervisors, we recently built 
OpenStack Windows images for KVM, Xen, XenServer, and Hyper-V using the 
straight Windows installer from Windows evaluation ISOs, but they lacked the 
CloudInit package. We didn't bother with CloudInit for our testing, but down 
the road we will need to do that.

What is the appropriate way to build a Windows guest image for 
bring-your-own-license deployments?
—
Sent from Mailbox for iPad


On Fri, May 31, 2013 at 5:51 PM, Peter Pouliot <ppoul...@microsoft.com> wrote:
Hi All,


Just to be clear, the Windows 2012 OpenStack edition available via 
Cloudbase.it has a very specific license. Please make sure you are aware of it 
prior to use, as it is meant for “OpenStack Testing” and not production 
workloads.


p


From: Openstack 
[mailto:openstack-bounces+ppouliot=microsoft@lists.launchpad.net] On Behalf 
Of Narayanan, Krishnaprasad
Sent: Thursday, May 30, 2013 7:53 PM
To: Brian Schott
Cc: openstack@lists.launchpad.net
Subject: Re: [Openstack] Remote access to the windows virtual machine


Hi Brian,


The Windows Server 2012 OpenStack edition comes with TCP port 3389 enabled. I 
don’t think we should specify it in the security groups, as this is already 
taken care of in the Windows firewall.


Thanks
Krishnaprasad
From: Brian Schott [mailto:brian.sch...@nimbisservices.com]
Sent: Donnerstag, 30. Mai 2013 23:03
To: Narayanan, Krishnaprasad
Cc: openstack@lists.launchpad.net
Subject: Re: [Openstack] Remote access to the windows virtual machine


For Windows, you could add TCP port 3389 to your security group and enable 
remote desktop access in Windows. The VNC console access in Horizon is really 
intended for administrative/management access rather than production usage.
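
With the nova CLI of that era, opening RDP in the default security group might 
look like this (a sketch; tighten the CIDR in practice):

# allow RDP (TCP 3389) into instances of the 'default' security group
nova secgroup-add-rule default tcp 3389 3389 0.0.0.0/0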


Brian


-
Brian Schott, CTO
Nimbis Services, Inc.
brian.sch...@nimbisservices.com
ph: 443-274-6064  fx: 443-274-6060






On May 30, 2013, at 3:58 PM, "Narayanan, Krishnaprasad" 
<naray...@uni-mainz.de> wrote:


Hello All,

Currently, I use VNC to access the Windows virtual machine deployed in 
OpenStack, but this gives me a smaller view of the Windows GUI or desktop. Is 
there any way from the Horizon GUI to have an enlarged view of the Windows 
desktop?

Can I get any suggestions for connecting to the Windows virtual machine 
remotely, apart from using VNC?

Thanks
Krishnaprasad
From: Narayanan, Krishnaprasad
Sent: Tuesday, 28 May 2013 02:42
To: 'JuanFra Rodriguez Cardoso'
Cc: openstack@lists.launchpad.net
Subject: RE: [Openstack] Windows Image 2008 in OpenStack

Hi JuanFra,

Thanks for the suggestion regarding the usage of cloud-init for Windows 
instances.

For all Stackers - I found this URI useful 
where there is a Windows Server 2012 Evaluation image available for download, 
and it can be directly deployed to OpenStack. I was able to download and deploy 
the image in our ESSEX cloud and create a VM successfully.
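
For reference, registering such a downloaded image with Glance might look like 
this (a sketch; the file and image names are illustrative, and an Essex 
installation may still use the older 'glance add' syntax):

# upload a Windows qcow2 image to Glance
glance image-create --name windows-server-2012-eval \
  --disk-format qcow2 --container-format bare \
  --file windows_server_2012_eval.qcow2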