RE: Disk IO on Cloudstack VMs

2014-03-04 Thread Suresh Sadhu
Hi Nishan,

It's an NFS mount, so the traffic has to pass through your network.

I see the QoS limit for the guest VMs is set to 25600 KBytes/s; increasing this
value should give better throughput.

Also, when you mount the NFS storage, you can set the rsize and wsize parameters
for better I/O.
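
For reference, a minimal sketch of such a mount from the host side (server,
export path and mount point are placeholders; the rsize/wsize values should be
tuned and tested against your own network and NFS server):

# larger read/write sizes usually help throughput on a fast link
mount -t nfs -o rsize=65536,wsize=65536,hard storage.example.com:/export/primary /mnt/primary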

Regards
Sadhu


-Original Message-
From: Nishan Sanjeewa Gunasekara [mailto:nishan.sanje...@gmail.com] 
Sent: 05 March 2014 12:30
To: users
Subject: Re: Disk IO on Cloudstack VMs

How does it relate to the network rate of the vm or the VR for that matter?
My vm is running on primary storage which is an nfs mount  on my host on a 10G 
link.
On 05/03/2014 4:38 PM, "Carlos Reategui"  wrote:

> Check with xencenter on the network tab for that vm and see what the 
> rate limit is there.
>
>
> > On Mar 4, 2014, at 8:31 PM, Nishan Sanjeewa Gunasekara <
> nishan.sanje...@gmail.com> wrote:
> >
> > Hi,
> >
> > I'm running CS 4.2.1 on XenServer 6.2 hosts.
> >
> > My primary storage is a ZFS file system provided via NFS.
> >
> > When I do a dd test directly from one of the hosts on the NFS mount I get a
> > write speed of about 150MBps
> >
> > dd if=/dev/zero of=test.dmp bs=1M conv=fdatasync count=1000
> > 1000+0 records in
> > 1000+0 records out
> > 1048576000 bytes (1.0 GB) copied, 6.99829 seconds, 150 MB/s
> >
> > But when I do the same test on a Cloudstack VM running on the same 
> > host (root disk on the same NFS mount, of course) I get a very low write 
> > speed.
> > 20MBps.
> >
> > dd if=/dev/zero of=test.dmp bs=1M conv=fdatasync count=1000
> > 1000+0 records in
> > 1000+0 records out
> > 1048576000 bytes (1.0 GB) copied, 44.6171 s, 23.5 MB/s
> >
> > Any ideas how I can improve this ?
> >
> > Regards,
> > Nishan
>


Re: Disk IO on Cloudstack VMs

2014-03-04 Thread Nishan Sanjeewa Gunasekara
How does it relate to the network rate of the vm or the VR for that matter?
My vm is running on primary storage which is an nfs mount  on my host on a
10G link.
On 05/03/2014 4:38 PM, "Carlos Reategui"  wrote:

> Check with xencenter on the network tab for that vm and see what the rate
> limit is there.
>
>
> > On Mar 4, 2014, at 8:31 PM, Nishan Sanjeewa Gunasekara <
> nishan.sanje...@gmail.com> wrote:
> >
> > Hi,
> >
> > I'm running CS 4.2.1 on XenServer 6.2 hosts.
> >
> > My primary storage is a ZFS file system provided via NFS.
> >
> > When I do a dd test directly from one of the hosts on the NFS mount I get a
> > write speed of about 150MBps
> >
> > dd if=/dev/zero of=test.dmp bs=1M conv=fdatasync count=1000
> > 1000+0 records in
> > 1000+0 records out
> > 1048576000 bytes (1.0 GB) copied, 6.99829 seconds, 150 MB/s
> >
> > But when I do the same test on a Cloudstack VM running on the same host
> > (root disk on the same NFS mount, of course) I get a very low write speed.
> > 20MBps.
> >
> > dd if=/dev/zero of=test.dmp bs=1M conv=fdatasync count=1000
> > 1000+0 records in
> > 1000+0 records out
> > 1048576000 bytes (1.0 GB) copied, 44.6171 s, 23.5 MB/s
> >
> > Any ideas how I can improve this ?
> >
> > Regards,
> > Nishan
>


Re: Disk IO on Cloudstack VMs

2014-03-04 Thread Carlos Reategui
Check with XenCenter on the Network tab for that VM and see what the rate limit
is there.
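
If you prefer the command line, a minimal sketch of checking the same thing
from the XenServer host (the VM name label below is a placeholder; a rate
limit, if set, shows up in qos_algorithm_params as a kbps value):

# list the VM's virtual interfaces
xe vif-list vm-name-label=my-vm params=uuid,device
# inspect the QoS settings on a given VIF
xe vif-param-get uuid=<vif-uuid> param-name=qos_algorithm_type
xe vif-param-get uuid=<vif-uuid> param-name=qos_algorithm_params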


> On Mar 4, 2014, at 8:31 PM, Nishan Sanjeewa Gunasekara 
>  wrote:
> 
> Hi,
> 
> I'm running CS 4.2.1 on XenServer 6.2 hosts.
> 
> My primary storage is a ZFS file system provided via NFS.
> 
> When I do a dd test directly from one of the hosts on the NFS mount I get a
> write speed of about 150MBps
> 
> dd if=/dev/zero of=test.dmp bs=1M conv=fdatasync count=1000
> 1000+0 records in
> 1000+0 records out
> 1048576000 bytes (1.0 GB) copied, 6.99829 seconds, 150 MB/s
> 
> But when I do the same test on a Cloudstack VM running on the same host
> (root disk on the same NFS mount, of course) I get a very low write speed.
> 20MBps.
> 
> dd if=/dev/zero of=test.dmp bs=1M conv=fdatasync count=1000
> 1000+0 records in
> 1000+0 records out
> 1048576000 bytes (1.0 GB) copied, 44.6171 s, 23.5 MB/s
> 
> Any ideas how I can improve this ?
> 
> Regards,
> Nishan


Re: Disk IO on Cloudstack VMs

2014-03-04 Thread Shanker Balan
Comments inline.

On 05-Mar-2014, at 10:45 am, Sanjeev Neelarapu  
wrote:

> Hi Nishan,
>
> If your vm is deployed in an isolated network all the traffic goes
> via VR. So please check the rate limiting on the VR.
> You may have to increase the rate limit value to get more BW.


Hi Sanjeev,

Does primary storage traffic also go over the VR? I thought primary storage
traffic goes over the host's interface and not over any VR.

Could you please clarify?



--
@shankerbalan

M: +91 98860 60539 | O: +91 (80) 67935867
shanker.ba...@shapeblue.com | www.shapeblue.com | Twitter:@shapeblue
ShapeBlue Services India LLP, 22nd floor, Unit 2201A, World Trade Centre, 
Bangalore - 560 055



RE: Disk IO on Cloudstack VMs

2014-03-04 Thread Sanjeev Neelarapu
Hi Nishan,

If your VM is deployed in an isolated network, all of its network traffic goes
via the VR, so please check the rate limiting on the VR.
You may have to increase the rate limit value to get more bandwidth.
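
For reference, a minimal sketch of checking the default throttling values with
cloudmonkey (this assumes the globals network.throttling.rate and
vm.network.throttling.rate govern the defaults in your version; values set on
compute/network offerings override them):

cloudmonkey list configurations name=network.throttling.rate
cloudmonkey list configurations name=vm.network.throttling.rate
# raise the default guest VM rate (in Mbit/s); applies to newly started VMs
cloudmonkey update configuration name=vm.network.throttling.rate value=1000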

Thanks,
Sanjeev

-Original Message-
From: Nishan Sanjeewa Gunasekara [mailto:nishan.sanje...@gmail.com] 
Sent: Wednesday, March 05, 2014 10:02 AM
To: users
Subject: Disk IO on Cloudstack VMs

Hi,

I'm running CS 4.2.1 on XenServer 6.2 hosts.

My primary storage is a ZFS file system provided via NFS.

When I do a dd test directly from one of the hosts on the NFS mount I get a 
write speed of about 150MBps

 dd if=/dev/zero of=test.dmp bs=1M conv=fdatasync count=1000
1000+0 records in
1000+0 records out
1048576000 bytes (1.0 GB) copied, 6.99829 seconds, 150 MB/s

But when I do the same test on a Cloudstack VM running on the same host (root 
disk on the same NFS mount, of course) I get a very low write speed.
20MBps.

 dd if=/dev/zero of=test.dmp bs=1M conv=fdatasync count=1000
1000+0 records in
1000+0 records out
1048576000 bytes (1.0 GB) copied, 44.6171 s, 23.5 MB/s

Any ideas how I can improve this ?

Regards,
Nishan


Re: Disk IO on Cloudstack VMs

2014-03-04 Thread Shanker Balan
Comments inline.

On 05-Mar-2014, at 10:01 am, Nishan Sanjeewa Gunasekara 
 wrote:

> Hi,
>
> I'm running CS 4.2.1 on XenServer 6.2 hosts.
>
> My primary storage is a ZFS file system provided via NFS.
>
> When I do a dd test directly from one of the hosts on the NFS mount I get a
> write speed of about 150MBps
>
> dd if=/dev/zero of=test.dmp bs=1M conv=fdatasync count=1000
> 1000+0 records in
> 1000+0 records out
> 1048576000 bytes (1.0 GB) copied, 6.99829 seconds, 150 MB/s
>
> But when I do the same test on a Cloudstack VM running on the same host
> (root disk on the same NFS mount, of course) I get a very low write speed.
> 20MBps.
>
> dd if=/dev/zero of=test.dmp bs=1M conv=fdatasync count=1000
> 1000+0 records in
> 1000+0 records out
> 1048576000 bytes (1.0 GB) copied, 44.6171 s, 23.5 MB/s
>
> Any ideas how I can improve this ?


Is this a PV VM?

I just did an unscientific test in my lab on a PV VM. Results below:

On the XenServer:

[root@vxen1-1 ebb66062-d46f-7b3a-07be-b9ec583ec1a9]# dd if=/dev/zero 
of=test.dmp bs=1M conv=fdatasync count=1000
1000+0 records in
1000+0 records out
1048576000 bytes (1.0 GB) copied, 26.1396 seconds, 40.1 MB/s

On a VM:

[root@scan1 ~]# dd if=/dev/zero of=test.dmp bs=1M conv=fdatasync count=1000
1000+0 records in
1000+0 records out
1048576000 bytes (1.0 GB) copied, 29.4688 s, 35.6 MB/s


I guess 10% is the virtual disk overhead.




--
@shankerbalan

M: +91 98860 60539 | O: +91 (80) 67935867
shanker.ba...@shapeblue.com | www.shapeblue.com | Twitter:@shapeblue
ShapeBlue Services India LLP, 22nd floor, Unit 2201A, World Trade Centre, 
Bangalore - 560 055



Disk IO on Cloudstack VMs

2014-03-04 Thread Nishan Sanjeewa Gunasekara
Hi,

I'm running CS 4.2.1 on XenServer 6.2 hosts.

My primary storage is a ZFS file system provided via NFS.

When I do a dd test directly from one of the hosts on the NFS mount I get a
write speed of about 150MBps

 dd if=/dev/zero of=test.dmp bs=1M conv=fdatasync count=1000
1000+0 records in
1000+0 records out
1048576000 bytes (1.0 GB) copied, 6.99829 seconds, 150 MB/s

But when I do the same test on a Cloudstack VM running on the same host
(root disk on the same nfs mount ofcourse) I get a very low write speed.
20MBps.

 dd if=/dev/zero of=test.dmp bs=1M conv=fdatasync count=1000
1000+0 records in
1000+0 records out
1048576000 bytes (1.0 GB) copied, 44.6171 s, 23.5 MB/s

Any ideas how I can improve this ?

Regards,
Nishan


Re: Adding VM created outside of CS

2014-03-04 Thread Shanker Balan
Comments inline.

On 05-Mar-2014, at 12:25 am, Michael Phillips  wrote:

> So I've been pondering something. Has anyone ever added a VM that was created 
> outside of the CS UI? For example what if I created a VM using the vcenter 
> interface, but I then wanted that VM to be managed inside of CS?
> I could see this being a use case if I wanted to P2V a machine...


That's pretty much how I created an Ubuntu 10.04 PV instance on a
standalone XenServer. The template was later imported into CS.

Regards.


--
@shankerbalan

M: +91 98860 60539 | O: +91 (80) 67935867
shanker.ba...@shapeblue.com | www.shapeblue.com | Twitter:@shapeblue
ShapeBlue Services India LLP, 22nd floor, Unit 2201A, World Trade Centre, 
Bangalore - 560 055



Re: Can we boot a vm with a ubuntu 10.04 iso

2014-03-04 Thread Shanker Balan
On 04-Mar-2014, at 1:29 pm, Deepak Sihag  wrote:

> Hi Team,
>
> I am trying to boot a VM using an Ubuntu 10.04 ISO provided by the Ubuntu
> community. I have registered the ISO, but when I try to launch an instance
> it gives an error.
> I have tried this with all CloudStack versions from 3.0 to 4.2, with the same
> result every time, so I need your help here. First, is it possible to have a
> VM with a GUI? If yes, how can I use the Ubuntu ISO?

Hi Deepak,

I am assuming you are using Xen as the hypervisor. If so, then:

The Ubuntu 10.04 ISO does not include a Xen PV kernel and will fail to boot on
XenServer hypervisors. You have two options:

1) Set the OS type to “other” to boot from the ISO (see the sketch below)
2) Prepare an Ubuntu 10.04 template on a standalone XenServer host using the
network install method.

See http://shankerbalan.net/blog/cloudstack-ubuntu-10-04-xenserver-template/ 
also.
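
For option 1, a minimal sketch of registering the ISO against a generic OS type
with cloudmonkey (the URL, zone ID and OS-type ID are placeholders; look up a
suitable "Other" entry with list ostypes first):

cloudmonkey list ostypes keyword=Other
cloudmonkey register iso name=ubuntu-10.04 displaytext="Ubuntu 10.04 ISO" \
  url=http://releases.ubuntu.com/10.04/ubuntu-10.04.4-server-amd64.iso \
  zoneid=<zone-uuid> ostypeid=<ostype-uuid> bootable=true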

Hth.


--
@shankerbalan

M: +91 98860 60539 | O: +91 (80) 67935867
shanker.ba...@shapeblue.com | www.shapeblue.com | Twitter:@shapeblue
ShapeBlue Services India LLP, 22nd floor, Unit 2201A, World Trade Centre, 
Bangalore - 560 055



Re: Customise XenServer ISO?

2014-03-04 Thread Tim Mackey
That took a bit of digging.  Again, I've tried neither, but there are two
other options:

1. Modify XS-REPOSITORY-LIST in the ISO to contain your supplemental packs
(aka drivers) and re-seal it
2. Use an answerfile and add in the line(s):
<driver-source type="url">ftp://172.16.0.249/ftp/xs62/driver</driver-source>

If you do the answerfile, you can have multiple driver-source lines listed.
 That would be my preferred option.
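
For reference, a minimal sketch of what that answerfile fragment could look
like, assuming the standard XenServer unattended-install answerfile syntax
(the element name and the second URL are assumptions worth verifying against
the installation guide for your XenServer version):

cat >> answerfile.xml <<'EOF'
<driver-source type="url">ftp://172.16.0.249/ftp/xs62/driver</driver-source>
<driver-source type="url">http://172.16.0.249/xs62/extra-drivers</driver-source>
EOF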

I don't have any servers which require custom drivers, so if this works for
you, I'd love to capture the info and blog it on xenserver.org.

-tim


On Tue, Mar 4, 2014 at 7:13 PM, Nux!  wrote:

> On 05.03.2014 00:02, Tim Mackey wrote:
>
>> I've not tried this, but it looks like this should work for you with
>> modifications for a fresh ISO.
>> http://maufderheiden.wordpress.com/2013/05/13/
>> slipstream-supplemental-packs-and-install-xenserver-6-1-from-usb-drive/
>>
>> I'm going to add this topic to my list of guides for xenserver.org.
>>
>>
> Hello Tim,
>
> Your document says "Drivers are installed after XenServer Base
> Installation and cannot be integrated using this method if needed for the
> Installation itself!" and that's exactly what I need.
> Any other options?
>
>
>
> Lucian
>
> --
> Sent from the Delta quadrant using Borg technology!
>
> Nux!
> www.nux.ro
>


Re: Customise XenServer ISO?

2014-03-04 Thread Nux!

On 05.03.2014 00:02, Tim Mackey wrote:

I've not tried this, but it looks like this should work for you with
modifications for a fresh ISO.
http://maufderheiden.wordpress.com/2013/05/13/slipstream-supplemental-packs-and-install-xenserver-6-1-from-usb-drive/

I'm going to add this topic to my list of guides for xenserver.org.



Hello Tim,

Your document says "Drivers are installed after XenServer Base 
Installation and cannot be integrated using this method if needed for 
the Installation itself!" and that's exactly what I need.

Any other options?


Lucian

--
Sent from the Delta quadrant using Borg technology!

Nux!
www.nux.ro


Re: Customise XenServer ISO?

2014-03-04 Thread Tim Mackey
I've not tried this, but it looks like this should work for you with
modifications for a fresh ISO.
http://maufderheiden.wordpress.com/2013/05/13/slipstream-supplemental-packs-and-install-xenserver-6-1-from-usb-drive/

I'm going to add this topic to my list of guides for xenserver.org.

Please let me know if this helps, and if you run into any issues.

-tim


On Tue, Mar 4, 2014 at 5:40 PM, Nux!  wrote:

> Hi,
>
> Does anyone know if it's possible to customise the XenServer ISO installer?
> I need to add some newer LSI and Intel NIC drivers and I can't seem to find
> any documentation on this.
>
> Lucian
>
> --
> Sent from the Delta quadrant using Borg technology!
>
> Nux!
> www.nux.ro
>


Re: Can we boot a vm with a ubuntu 10.04 iso

2014-03-04 Thread Nitin Mehta
Indeed, logs will help. But apart from that, do also Google a bit, as I
recall seeing these issues reported before.
Generally it's to do with the guest OS and hypervisor capabilities, if my
memory serves me right.

Thanks,
-Nitin

On 04/03/14 1:50 PM, "Marty Sweet"  wrote:

>Hi,
>
>This is possible, could you let us know what error is being returned?
>A copy + paste of the cloudstack management server log will also be ideal.
>
>Thanks,
>Marty Sweet
>
>On 4 March 2014 07:59, Deepak Sihag  wrote:
>> Hi Team,
>>
>> I am trying to boot a VM using an Ubuntu 10.04 ISO provided by the Ubuntu
>> community. I have registered the ISO, but when I try to launch an instance
>> it gives an error.
>> I have tried this with all CloudStack versions from 3.0 to 4.2, with the
>> same result every time, so I need your help here. First, is it possible to
>> have a VM with a GUI? If yes, how can I use the Ubuntu ISO?
>>
>> Regards,
>> Deepak Sihag
>>
>>
>>



Customise XenServer ISO?

2014-03-04 Thread Nux!

Hi,

Does anyone know if it's possible to customise the XenServer ISO installer?
I need to add some newer LSI and Intel NIC drivers and I can't seem to
find any documentation on this.


Lucian

--
Sent from the Delta quadrant using Borg technology!

Nux!
www.nux.ro


Re: Can we boot a vm with a ubuntu 10.04 iso

2014-03-04 Thread Marty Sweet
Hi,

This is possible, could you let us know what error is being returned?
A copy + paste of the cloudstack management server log will also be ideal.

Thanks,
Marty Sweet

On 4 March 2014 07:59, Deepak Sihag  wrote:
> Hi Team,
>
> I am trying to boot a VM using an Ubuntu 10.04 ISO provided by the Ubuntu
> community. I have registered the ISO, but when I try to launch an instance
> it gives an error.
> I have tried this with all CloudStack versions from 3.0 to 4.2, with the same
> result every time, so I need your help here. First, is it possible to have a
> VM with a GUI? If yes, how can I use the Ubuntu ISO?
>
> Regards,
> Deepak Sihag
>
>
>


Re: SSO

2014-03-04 Thread Erdősi Péter

Hello!

I filed a JIRA feature request about this, but I also made a workaround
(until SSO is implemented), because I wanted to authenticate users with eduID.
My solution is a PHP-based registration page, which can be opened (from the
main login page) after a successful login on the IdP.
After that, the script splits the eppn and checks the domain part, and if the
domain does not exist, it makes an API call to add it.
The first part of the eppn becomes the username, which is registered by an API
call, and the script generates a random password, which is sent by email.

If somebody already has an account, they can also ask for a new password, and
with a few more API calls you can set up limits.

I know it's not SSO, but it's enough for me. :)
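
For reference, a minimal sketch of the two API calls the script makes, expressed
with cloudmonkey (all names and values below are placeholders; the real script
does the equivalent over the API from PHP):

# create the domain derived from the eppn's domain part, if it does not exist
cloudmonkey create domain name=example.org
# create the user account with a generated password in that domain
cloudmonkey create account username=jdoe password=<random-password> \
  email=jdoe@example.org firstname=John lastname=Doe \
  accounttype=0 domainid=<domain-uuid>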

Regards,
 Peter


On 2014.03.04 20:34, María Noelia Gil wrote:

Hello, I am studying the operation of single sign-on in CloudStack. Can anyone
provide me with information about this topic?

I want to know whether SAML, OpenID, etc. can be used, and how it should be done.

Thank you.




Can we boot a vm with a ubuntu 10.04 iso

2014-03-04 Thread Deepak Sihag
Hi Team,

I am trying to boot a VM using an Ubuntu 10.04 ISO provided by the Ubuntu
community. I have registered the ISO, but when I try to launch an instance
it gives an error.
I have tried this with all CloudStack versions from 3.0 to 4.2, with the same
result every time, so I need your help here. First, is it possible to have a
VM with a GUI? If yes, how can I use the Ubuntu ISO?

Regards,
Deepak Sihag





Cannot add data drive if my VM has a snapshot?

2014-03-04 Thread Francois Gaudreault

Hi,

In 4.2.1, is it expected behavior that you cannot attach a data drive to a
server that has a VM snapshot? It's a little odd that you can't do that,
especially since I can do it in XenCenter (for instance). Is it the same
in 4.3.0?


--
Francois Gaudreault
Architecte de Solution Cloud | Cloud Solutions Architect
fgaudrea...@cloudops.com
514-629-6775
- - -
CloudOps
420 rue Guy
Montréal QC  H3J 1S6
www.cloudops.com
@CloudOps_



SSO

2014-03-04 Thread María Noelia Gil
Hello, I am studying the operation of single sign-on in CloudStack. Can anyone
provide me with information about this topic?

I want to know whether SAML, OpenID, etc. can be used, and how it should be done.

Thank you.

RE: Adding VM created outside of CS

2014-03-04 Thread Michael Phillips
I could see that working for sure... I'll have to try that.

> Date: Tue, 4 Mar 2014 19:05:56 +
> From: n...@li.nux.ro
> To: users@cloudstack.apache.org
> Subject: Re: Adding VM created outside of CS
> 
> On 04.03.2014 18:55, Michael Phillips wrote:
> > So I've been pondering something. Has anyone ever added a VM that was
> > created outside of the CS UI? For example what if I created a VM using
> > the vcenter interface, but I then wanted that VM to be managed inside
> > of CS?
> > I could see this being a use case if I wanted to P2V a machine...
> 
> I would make that VM a template and add it to CS.
> 
> HTH
> Lucian
> 
> -- 
> Sent from the Delta quadrant using Borg technology!
> 
> Nux!
> www.nux.ro
  

Re: Local Storage

2014-03-04 Thread Nux!

On 04.03.2014 16:33, Brent Clark wrote:

Again, many thanks for all the help this community/list has provided.

Question about local storage. We want to use it. I turned it on in the
Global Settings and restarted. What path is used to store the VMs on the
host? My thinking is I need to create a FS to store the VMs, but I don't know
where CloudStack will put them.


If it's KVM/libvirt, the path is /var/lib/libvirt/images.
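
A quick way to confirm this on the host itself, as a minimal sketch assuming the
agent has registered the local pool with libvirt (the pool name is a placeholder):

virsh pool-list --all
virsh pool-dumpxml <pool-name> | grep -A1 '<target>'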

HTH
Lucian

--
Sent from the Delta quadrant using Borg technology!

Nux!
www.nux.ro


Re: Adding VM created outside of CS

2014-03-04 Thread Nux!

On 04.03.2014 18:55, Michael Phillips wrote:

So I've been pondering something. Has anyone ever added a VM that was
created outside of the CS UI? For example what if I created a VM using
the vcenter interface, but I then wanted that VM to be managed inside
of CS?
I could see this being a use case if I wanted to P2V a machine...


I would make that VM a template and add it to CS.
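
For example, a minimal sketch of registering such a template once its disk
image has been exported and published over HTTP (every name, URL and ID below
is a placeholder; format and hypervisor must match your environment):

cloudmonkey register template name=imported-vm displaytext="VM imported from vCenter" \
  url=http://fileserver.example.com/templates/imported-vm.ova \
  zoneid=<zone-uuid> ostypeid=<ostype-uuid> format=OVA hypervisor=VMware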

HTH
Lucian

--
Sent from the Delta quadrant using Borg technology!

Nux!
www.nux.ro


Adding VM created outside of CS

2014-03-04 Thread Michael Phillips
So I've been pondering something. Has anyone ever added a VM that was created
outside of the CS UI? For example, what if I created a VM using the vCenter
interface, but then wanted that VM to be managed inside of CS?
I could see this being a use case if I wanted to P2V a machine...   
  

Re: Adding a host with running VM's issu

2014-03-04 Thread Daan Hoogland
thirsty!

On Tue, Mar 4, 2014 at 6:01 PM, John Kinsella  wrote:
> This would be a super-cool feature to add to ACS. It's sorta the 
> cloud-orchestration equivalent of having to add 1000 nodes to nagios.  Would 
> be interesting to discuss with folks over a tasty beverage in Denver...
>
> On Mar 4, 2014, at 2:44 AM, Badi  wrote:
>
>> hello cloudstack users,
>>
>> Can anyone tell me why CloudStack doesn't allow us to add hosts running VMs?
>>
>> thx
>>
>>
>>
>
>



-- 
Daan


Re: Local Storage

2014-03-04 Thread Michael Little
Which hypervisor are you using? I don't know the default for KVM, but am
pretty sure XenServer will use any LVM SRs that are preset. VMware will
likely use any existing local storage.

--Mike


On Tue, Mar 4, 2014 at 8:33 AM, Brent Clark  wrote:

> Again, many thanks for all the help this community/list has provided.
>
> Question about local storage. We want to use it. I turned it on in the
> Global Settings and restarted. What path is used to store the VMs on the
> host? My thinking is I need to create a FS to store the VMs, but I don't know
> where CloudStack will put them.
>
>
> --
> Brent S. Clark
> NOC Engineer
>
> 2580 55th St.  |  Boulder, Colorado 80301
> www.tendrilinc.com  |  blog 
> 
>
>
>



-- 
Mike Little | Senior Cloud Solutions Engineer

Redapt, Inc.

T. 425 605 7135

E. mlit...@redapt.com




Ubuntu Package builds with noredist

2014-03-04 Thread David Bierce
This is more of a sanity check. When I build an Ubuntu package with noredist,
the systemvm.iso doesn't include the VMware jars, even though Maven reports
success for building VMware and systemvm.iso. I have the VMware packages in
Maven and it builds RPMs fine; it just appears the files aren't being added to
the systemvm.iso. My build process is pretty straightforward and isn't
working correctly for 4.2, 4.3 or master.

I’m building the package from the root of the source directory with:

mvn clean
mvn install -P deps -Dnoredist && export ACS_BUILD_OPTS="-Dnoredist"; dpkg-buildpackage
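
For comparison, a minimal sketch of the invocation from the upstream
building-from-source docs (the systemvm profile is what builds systemvm.iso;
whether it picks up the VMware jars together with -Dnoredist is exactly the
question here):

mvn clean install -P developer,systemvm -Dnoredist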


Thanks,
David Bierce


Re: Adding a host with running VM's issu

2014-03-04 Thread John Kinsella
This would be a super-cool feature to add to ACS. It’s sorta the 
cloud-orchestration equivalent of having to add 1000 nodes to nagios.  Would be 
interesting to discuss with folks over a tasty beverage in Denver...

On Mar 4, 2014, at 2:44 AM, Badi  wrote:

> hello cloudstack users,
> 
> Can anyone tell me why CloudStack doesn't allow us to add hosts running VMs?
> 
> thx
> 
> 
> 




Local Storage

2014-03-04 Thread Brent Clark
Again, many thanks for all the help this community/list has provided.

Question about local storage. We want to use it. I turned it on in the
Global Settings and restarted. What path is used to store the VMs on the
host? My thinking is I need to create a FS to store the VMs, but I don't know
where CloudStack will put them.


-- 
Brent S. Clark
NOC Engineer

2580 55th St.  |  Boulder, Colorado 80301
www.tendrilinc.com  |  blog 


 


Re: ALARM - ACS reboots host servers!!!

2014-03-04 Thread France

On Mar 4, 2014, at 3:38 PM, Marcus wrote:

> On Tue, Mar 4, 2014 at 3:34 AM, France  wrote:
>> Hi Marcus and others.
>> 
>> There is no need to kill off the entire hypervisor if one of the primary
>> storages fails.
>> You just need to kill the VMs and probably disable the SR on XenServer, because
>> all other SRs and VMs have no problems.
>> If you kill those, then you can safely start them elsewhere. On XenServer
>> 6.2 you can destroy the VMs which lost access to NFS without any problems.
> 
> That's a great idea, but as already mentioned, it doesn't work in
> practice. You can't kill a VM that is hanging in D state, waiting on
> storage. I also mentioned that it causes problems for libvirt and much
> of the other system not using the storage.

You can on XS 6.2, as tried in real life and reported by others as well.

> 
>> 
>> If you still really want to kill the entire host and its VMs in one go, I
>> would suggest live migrating the VMs which have not lost their storage
>> off first, and then killing those VMs on the stale NFS by doing a hard reboot.
>> Additional time, while migrating working VMs, would even give some grace
>> time for NFS to maybe recover. :-)
> 
> You won't be able to live migrate a VM that is stuck in D state, or
> use libvirt to do so if one of its storage pools is unresponsive,
> anyway.
> 

I don't want to live migrate VMs in D state, just the working VMs. Those that
are stuck can die with the hypervisor reboot.


>> 
>> Hard reboot to recover from D state of NFS client can also be avoided by
>> using soft mount options.
> 
> As mentioned, soft and intr very rarely actually work, in my
> experience. I wish they did as I truly have come to loathe NFS for it.
> 
>> 
>> I run a bunch of Pacemaker/Corosync/Cman/Heartbeat/etc clusters and we don't
>> just kill whole nodes but fence services from specific nodes. STONITH is
>> implemented only when the node loses the quorum.
> 
> Sure, but how do you fence a KVM host from an NFS server? I don't
> think we've written a firewall plugin that works to fence hosts from
> any NFS server. Regardless, what CloudStack does is more of a poor
> man's clustering, the mgmt server is the locking in the sense that it
> is managing what's going on, but it's not a real clustering service.
> Heck, it doesn't even STONITH, it tries to clean shutdown, which fails
> as well due to hanging NFS (per the mentioned bug, to fix it they'll
> need IPMI fencing or something like that).

In my case, as well as in the case of the OP, the hypervisor got rebooted
successfully.

> 
> I didn't write the code, I'm just saying that I can completely
> understand why it kills nodes when it deems that their storage has
> gone belly-up. It's dangerous to leave that D state VM hanging around,
> and it will until the NFS storage comes back. In a perfect world you'd
> just stop the VMs that were having the issue, or if there were no VMs
> you'd just de-register the storage from libvirt, I agree.

As previously stated, on XS 6.2 you can "destroy" VMs with inaccessible NFS
storage. I do not remember whether the processes were in the D state, because
I used the GUI, if I remember correctly. I am sure you can test it yourself
too.


> 
>> 
>> Regards,
>> F.
>> 
>> 
>> On 3/3/14 5:35 PM, Marcus wrote:
>>> 
>>> It's the standard clustering problem. Any software that does any sort
>>> of active clustering is going to fence nodes that have problems, or
>>> should if it cares about your data. If the risk of losing a host due
>>> to a storage pool outage is too great, you could perhaps look at
>>> rearranging your pool-to-host correlations (certain hosts run vms from
>>> certain pools) via clusters. Note that if you register a storage pool
>>> with a cluster, it will register the pool with libvirt when the pool
>>> is not in maintenance, which, when the storage pool goes down will
>>> cause problems for the host even if no VMs from that storage are
>>> running (fetching storage stats for example will cause agent threads
>>> to hang if its NFS), so you'd need to put ceph in its own cluster and
>>> NFS in its own cluster.
>>> 
>>> It's far more dangerous to leave a host in an unknown/bad state. If a
>>> host loses contact with one of your storage nodes, with HA, cloudstack
>>> will want to start the affected VMs elsewhere. If it does so, and your
>>> original host wakes up from its NFS hang, you suddenly have a VM
>>> running in two locations, corruption ensues. You might think we could
>>> just stop the affected VMs, but NFS tends to make things that touch it
>>> go into D state, even with 'intr' and other parameters, which affects
>>> libvirt and the agent.
>>> 
>>> We could perhaps open a feature request to disable all HA and just
>>> leave things as-is, disallowing operations when there are outages. If
>>> that sounds useful you can create the feature request on
>>> https://issues.apache.org/jira.
>>> 
>>> 
>>> On Mon, Mar 3, 2014 at 5:37 AM, Andrei Mikhailovsky 
>>> wrote:
 
 Koushik, I und

[Events] CloudStack Bangalore March Meetup @ InMobi

2014-03-04 Thread iliyas shirol
Greetings!


We are happy to announce the Apache CloudStack Bangalore March MeetUp.

RSVP @ http://www.meetup.com/CloudStack-Bangalore-Group/events/169340552/

PFB the agenda of the meetup:

   4.00 - 4.30 PM
   Networking with CloudStackians (have fun!)

   4.30 - 5.15 PM
   Apache CloudStack 4.2 on Different Hypervisors (KVM/Xen/ESXi/Hyper-V)
   with One Management Server
   - Shivaprasad Katta, Dell

   5.15 - 6.00 PM
   Adding Hyper-V in CloudStack
   - Devdeep Singh, Citrix

   6.00 - 6.45 PM
   Creating a CentOS Template for CloudStack
   - Shanker Balan, ShapeBlue

Thanks.

-- 
-
Md. Iliyas Shirol
Mobile : +91 9902 977 800
Google : iliyas.shirol@ gmail.com


Re: ALARM - ACS reboots host servers!!!

2014-03-04 Thread Marcus
On Tue, Mar 4, 2014 at 3:34 AM, France  wrote:
> Hi Marcus and others.
>
> There is no need to kill off the entire hypervisor if one of the primary
> storages fails.
> You just need to kill the VMs and probably disable the SR on XenServer, because
> all other SRs and VMs have no problems.
> If you kill those, then you can safely start them elsewhere. On XenServer
> 6.2 you can destroy the VMs which lost access to NFS without any problems.

That's a great idea, but as already mentioned, it doesn't work in
practice. You can't kill a VM that is hanging in D state, waiting on
storage. I also mentioned that it causes problems for libvirt and much
of the rest of the system that is not using the storage.

>
> If you still really want to kill the entire host and its VMs in one go, I
> would suggest live migrating the VMs which have not lost their storage
> off first, and then killing those VMs on the stale NFS by doing a hard reboot.
> Additional time, while migrating working VMs, would even give some grace
> time for NFS to maybe recover. :-)

You won't be able to live migrate a VM that is stuck in D state, or
use libvirt to do so if one of its storage pools is unresponsive,
anyway.

>
> Hard reboot to recover from D state of NFS client can also be avoided by
> using soft mount options.

As mentioned, soft and intr very rarely actually work, in my
experience. I wish they did as I truly have come to loathe NFS for it.

>
> I run a bunch of Pacemaker/Corosync/Cman/Heartbeat/etc clusters and we don't
> just kill whole nodes but fence services from specific nodes. STONITH is
> implemented only when the node loses the quorum.

Sure, but how do you fence a KVM host from an NFS server? I don't
think we've written a firewall plugin that works to fence hosts from
any NFS server. Regardless, what CloudStack does is more of a poor
man's clustering, the mgmt server is the locking in the sense that it
is managing what's going on, but it's not a real clustering service.
Heck, it doesn't even STONITH, it tries to clean shutdown, which fails
as well due to hanging NFS (per the mentioned bug, to fix it they'll
need IPMI fencing or something like that).

I didn't write the code, I'm just saying that I can completely
understand why it kills nodes when it deems that their storage has
gone belly-up. It's dangerous to leave that D state VM hanging around,
and it will until the NFS storage comes back. In a perfect world you'd
just stop the VMs that were having the issue, or if there were no VMs
you'd just de-register the storage from libvirt, I agree.

>
> Regards,
> F.
>
>
> On 3/3/14 5:35 PM, Marcus wrote:
>>
>> It's the standard clustering problem. Any software that does any sort
>> of active clustering is going to fence nodes that have problems, or
>> should if it cares about your data. If the risk of losing a host due
>> to a storage pool outage is too great, you could perhaps look at
>> rearranging your pool-to-host correlations (certain hosts run vms from
>> certain pools) via clusters. Note that if you register a storage pool
>> with a cluster, it will register the pool with libvirt when the pool
>> is not in maintenance, which, when the storage pool goes down will
>> cause problems for the host even if no VMs from that storage are
>> running (fetching storage stats for example will cause agent threads
>> to hang if its NFS), so you'd need to put ceph in its own cluster and
>> NFS in its own cluster.
>>
>> It's far more dangerous to leave a host in an unknown/bad state. If a
>> host loses contact with one of your storage nodes, with HA, cloudstack
>> will want to start the affected VMs elsewhere. If it does so, and your
>> original host wakes up from its NFS hang, you suddenly have a VM
>> running in two locations, corruption ensues. You might think we could
>> just stop the affected VMs, but NFS tends to make things that touch it
>> go into D state, even with 'intr' and other parameters, which affects
>> libvirt and the agent.
>>
>> We could perhaps open a feature request to disable all HA and just
>> leave things as-is, disallowing operations when there are outages. If
>> that sounds useful you can create the feature request on
>> https://issues.apache.org/jira.
>>
>>
>> On Mon, Mar 3, 2014 at 5:37 AM, Andrei Mikhailovsky 
>> wrote:
>>>
>>> Koushik, I understand that and I will put the storage into the
>>> maintenance mode next time. However, things happen and servers crash from
>>> time to time, which is not the reason to reboot all host servers, even those
>>> which do not have any running vms with volumes on the nfs storage. The
>>> bloody agent just rebooted every single host server regardless if they were
>>> running vms with volumes on the rebooted nfs server. 95% of my vms are
>>> running from ceph and those should have never been affected in the first
>>> place.
>>> - Original Message -
>>>
>>> From: "Koushik Das" 
>>> To: "" 
>>> Cc: d...@cloudstack.apache.org
>>> Sent: Monday, 3 March, 2014 5:55:34 AM
>>> Subject: Re: ALARM -

Re: ALARM - ACS reboots host servers!!!

2014-03-04 Thread Nux!

On 04.03.2014 12:55, Andrei Mikhailovsky wrote:


Regarding having nfs and ceph storage in different clusters - sounds
like a good idea for majority of cases, however, my setup will not
allow me to do that just yet. I am using ceph for my root and data
volumes and NFS for backup volumes.


Having tiered storage is one of the stronger features that drew
me towards CloudStack; it should work better.
I do plan to have a second, slower tier for backups and other more 
passive applications.



I do currently need the backup
volumes as snapshotting with KVM is somewhat broken / not fully
working in 4.2.1. It has been improved from version 4.2.0 as it was
completely broken. I am waiting for 4.3.0 where, hopefully, I would be
able to keep snapshots on the primary storage (currently this feature
is broken) which will make the snapshots with KVM usable.


KVM volume snapshots worked well in 4.2.1 AFAIK and they still work 
well in 4.3, but VM snapshots are still not supported and I don't think 
they will be any time soon. We might get somewhere with it if we opt for 
LVM thin storage and snapshots, that'd be cool.


Lucian

--
Sent from the Delta quadrant using Borg technology!

Nux!
www.nux.ro


Re: ALARM - ACS reboots host servers!!!

2014-03-04 Thread Andrei Mikhailovsky

I agree with France; that sounds like a more sensible idea than killing hosts left,
right and centre with live VMs. I now understand the reasons behind killing the
troubled host server; however, this should be done without killing live VMs
with fully working volumes.


Regarding having NFS and Ceph storage in different clusters - that sounds like a
good idea for the majority of cases; however, my setup will not allow me to do that
just yet. I am using ceph for my root and data volumes and NFS for backup 
volumes. I do currently need the backup volumes as snapshotting with KVM is 
somewhat broken / not fully working in 4.2.1. It has been improved from version 
4.2.0 as it was completely broken. I am waiting for 4.3.0 where, hopefully, I 
would be able to keep snapshots on the primary storage (currently this feature 
is broken) which will make the snapshots with KVM usable. 


Cheers for your help guys 
- Original Message -

From: "France"  
To: users@cloudstack.apache.org, d...@cloudstack.apache.org 
Sent: Tuesday, 4 March, 2014 10:34:36 AM 
Subject: Re: ALARM - ACS reboots host servers!!! 

Hi Marcus and others. 

There is no need to kill off the entire hypervisor if one of the primary
storages fails.
You just need to kill the VMs and probably disable the SR on XenServer,
because all other SRs and VMs have no problems.
If you kill those, then you can safely start them elsewhere. On
XenServer 6.2 you can destroy the VMs which lost access to NFS without
any problems.

If you still really want to kill the entire host and its VMs in one go,
I would suggest live migrating the VMs which have not lost their
storage off first, and then killing those VMs on the stale NFS by doing a hard
reboot. Additional time, while migrating working VMs, would even give
some grace time for NFS to maybe recover. :-)

A hard reboot to recover from the D state of the NFS client can also be avoided by
using soft mount options.

I run a bunch of Pacemaker/Corosync/Cman/Heartbeat/etc clusters and we
don't just kill whole nodes but fence services from specific nodes.
STONITH is implemented only when the node loses the quorum.

Regards, 
F. 

On 3/3/14 5:35 PM, Marcus wrote: 
> It's the standard clustering problem. Any software that does any sort 
> of active clustering is going to fence nodes that have problems, or
> should if it cares about your data. If the risk of losing a host due 
> to a storage pool outage is too great, you could perhaps look at 
> rearranging your pool-to-host correlations (certain hosts run vms from 
> certain pools) via clusters. Note that if you register a storage pool 
> with a cluster, it will register the pool with libvirt when the pool 
> is not in maintenance, which, when the storage pool goes down will 
> cause problems for the host even if no VMs from that storage are 
> running (fetching storage stats for example will cause agent threads 
> to hang if its NFS), so you'd need to put ceph in its own cluster and 
> NFS in its own cluster. 
> 
> It's far more dangerous to leave a host in an unknown/bad state. If a 
> host loses contact with one of your storage nodes, with HA, cloudstack 
> will want to start the affected VMs elsewhere. If it does so, and your 
> original host wakes up from its NFS hang, you suddenly have a VM
> running in two locations, corruption ensues. You might think we could 
> just stop the affected VMs, but NFS tends to make things that touch it 
> go into D state, even with 'intr' and other parameters, which affects 
> libvirt and the agent. 
> 
> We could perhaps open a feature request to disable all HA and just 
> leave things as-is, disallowing operations when there are outages. If 
> that sounds useful you can create the feature request on 
> https://issues.apache.org/jira. 
> 
> 
> On Mon, Mar 3, 2014 at 5:37 AM, Andrei Mikhailovsky  
> wrote: 
>> Koushik, I understand that and I will put the storage into the maintenance 
>> mode next time. However, things happen and servers crash from time to time, 
>> which is not the reason to reboot all host servers, even those which do not 
>> have any running vms with volumes on the nfs storage. The bloody agent just 
>> rebooted every single host server regardless if they were running vms with 
>> volumes on the rebooted nfs server. 95% of my vms are running from ceph and 
>> those should have never been affected in the first place.
>> - Original Message - 
>> 
>> From: "Koushik Das"  
>> To: ""  
>> Cc: d...@cloudstack.apache.org 
>> Sent: Monday, 3 March, 2014 5:55:34 AM 
>> Subject: Re: ALARM - ACS reboots host servers!!! 
>> 
>> The primary storage needs to be put in maintenance before doing any 
>> upgrade/reboot as mentioned in the previous mails. 
>> 
>> -Koushik 
>> 
>> On 03-Mar-2014, at 6:07 AM, Marcus  wrote: 
>> 
>>> Also, please note that in the bug you referenced it doesn't have a 
>>> problem with the reboot being triggered, but with the fact that reboot 
>>> never completes due to hanging NFS mount (which i

RE: Adding a host with running VM's issu

2014-03-04 Thread Dubravko Sever
Hi, 

I have the same problem. Most of my clients are outside the organisation, so
downtime is not an option. Is there any kind of procedure that can be
created (or already exists) in case we want to migrate existing infrastructure into
CloudStack (convert it to IaaS)? Something like a script that scans all running
VMs and imports them into the cloud. I believe the VM volumes can be handled by
using storage migration (in the case of XenServer or VMware).

In my case that is the critical issue, so we are looking for a solution that can
convert our XenServer infrastructure into IaaS.

Tnx
Dubravko 



-- 
Dubravko Sever
Sektor za računalne sustave
Sveučilište u Zagrebu, Sveučilišni računski centar (Srce), www.srce.unizg.hr
dubravko.se...@srce.hr, tel: +385 1 616 5807, fax: +385 1 616 5559


> -Original Message-
> From: Daan Hoogland [mailto:daan.hoogl...@gmail.com]
> Sent: Tuesday, March 04, 2014 12:53 PM
> To: users@cloudstack.apache.org
> Subject: Re: Adding a host with running VM's issu
> 
> Hi Badi,
> 
> CloudStack needs to be in control of what is running on the hosts. As a
> workaround, create templates from the VMs and then deploy them from
> CloudStack after adding the host.
> 
> regards,
> Daan
> 
> On Tue, Mar 4, 2014 at 11:44 AM, Badi  wrote:
> > hello cloudstack users,
> >
> > Can anyone tell me why CloudStack doesn't allow us to add hosts running
> > VMs?
> >
> > thx
> >
> >
> >
> 
> 
> 
> --
> Daan


Re: Adding a host with running VM's issu

2014-03-04 Thread Daan Hoogland
Hi Badi,

CloudStack needs to be in control of what is running on the hosts. As a
workaround, create templates from the VMs and then deploy them
from CloudStack after adding the host.

regards,
Daan

On Tue, Mar 4, 2014 at 11:44 AM, Badi  wrote:
> hello cloudstack users,
>
> Can anyone tell me why CloudStack doesn't allow us to add hosts running VMs?
>
> thx
>
>
>



-- 
Daan


Adding a host with running VM's issu

2014-03-04 Thread Badi
hello cloudstack users,

Can anyone tell me why CloudStack doesn't allow us to add hosts running VMs?

thx





Re: ALARM - ACS reboots host servers!!!

2014-03-04 Thread France

Hi Marcus and others.

There is no need to kill off the entire hypervisor if one of the primary
storages fails.
You just need to kill the VMs and probably disable the SR on XenServer,
because all other SRs and VMs have no problems.
If you kill those, then you can safely start them elsewhere. On
XenServer 6.2 you can destroy the VMs which lost access to NFS without
any problems.


If you still really want to kill the entire host and its VMs in one go,
I would suggest live migrating the VMs which have not lost their
storage off first, and then killing those VMs on the stale NFS by doing a hard
reboot. Additional time, while migrating working VMs, would even give
some grace time for NFS to maybe recover. :-)


A hard reboot to recover from the D state of the NFS client can also be avoided by
using soft mount options.
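
For reference, a minimal sketch of such a mount (server, export and mount point
are placeholders; as noted elsewhere in the thread, soft mounts are no silver
bullet and trade hangs for possible I/O errors on the guests):

mount -t nfs -o soft,intr,timeo=100,retrans=3 storage.example.com:/export/primary /mnt/primary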


I run a bunch of Pacemaker/Corosync/Cman/Heartbeat/etc clusters and we
don't just kill whole nodes but fence services from specific nodes.
STONITH is implemented only when the node loses the quorum.


Regards,
F.

On 3/3/14 5:35 PM, Marcus wrote:

It's the standard clustering problem. Any software that does any sort
of active clustering is going to fence nodes that have problems, or
should if it cares about your data. If the risk of losing a host due
to a storage pool outage is too great, you could perhaps look at
rearranging your pool-to-host correlations (certain hosts run vms from
certain pools) via clusters. Note that if you register a storage pool
with a cluster, it will register the pool with libvirt when the pool
is not in maintenance, which, when the storage pool goes down will
cause problems for the host even if no VMs from that storage are
running (fetching storage stats for example will cause agent threads
to hang if its NFS), so you'd need to put ceph in its own cluster and
NFS in its own cluster.

It's far more dangerous to leave a host in an unknown/bad state. If a
host loses contact with one of your storage nodes, with HA, cloudstack
will want to start the affected VMs elsewhere. If it does so, and your
original host wakes up from its NFS hang, you suddenly have a VM
running in two locations, corruption ensues. You might think we could
just stop the affected VMs, but NFS tends to make things that touch it
go into D state, even with 'intr' and other parameters, which affects
libvirt and the agent.

We could perhaps open a feature request to disable all HA and just
leave things as-is, disallowing operations when there are outages. If
that sounds useful you can create the feature request on
https://issues.apache.org/jira.


On Mon, Mar 3, 2014 at 5:37 AM, Andrei Mikhailovsky  wrote:

Koushik, I understand that and I will put the storage into the maintenance mode 
next time. However, things happen and servers crash from time to time, which is 
not the reason to reboot all host servers, even those which do not have any 
running vms with volumes on the nfs storage. The bloody agent just rebooted 
every single host server regardless if they were running vms with volumes on 
the rebooted nfs server. 95% of my vms are running from ceph and those should 
have never been affected in the first place.
- Original Message -

From: "Koushik Das" 
To: "" 
Cc: d...@cloudstack.apache.org
Sent: Monday, 3 March, 2014 5:55:34 AM
Subject: Re: ALARM - ACS reboots host servers!!!

The primary storage needs to be put in maintenance before doing any 
upgrade/reboot as mentioned in the previous mails.

-Koushik

On 03-Mar-2014, at 6:07 AM, Marcus  wrote:


Also, please note that in the bug you referenced it doesn't have a
problem with the reboot being triggered, but with the fact that reboot
never completes due to hanging NFS mount (which is why the reboot
occurs, inaccessible primary storage).

On Sun, Mar 2, 2014 at 5:26 PM, Marcus  wrote:

Or do you mean you have multiple primary storages and this one was not
in use and put into maintenance?

On Sun, Mar 2, 2014 at 5:25 PM, Marcus  wrote:

I'm not sure I understand. How do you expect to reboot your primary
storage while vms are running? It sounds like the host is being
fenced since it cannot contact the resources it depends on.

On Sun, Mar 2, 2014 at 3:24 PM, Nux!  wrote:

On 02.03.2014 21:17, Andrei Mikhailovsky wrote:

Hello guys,


I've recently come across the bug CLOUDSTACK-5429, which has rebooted
all of my host servers without properly shutting down the guest vms.
I've simply upgraded and rebooted one of the nfs primary storage
servers and a few minutes later, to my horror, i've found out that all
of my host servers have been rebooted. Is it just me thinking so, or
is this bug should be fixed ASAP and should be a blocker for any new
ACS release. I mean not only does it cause downtime, but also possible
data loss and server corruption.


Hi Andrei,

Do you have HA enabled and did you put that primary storage in maintenance
mode before rebooting it?
It's my understanding that ACS relies on the shared storage to perform HA so
if the storage goe