Re: [DISCUSS/PROPOSAL] CCC13 Hackfest: Storage Architecture Summary

2013-07-09 Thread Daan Hoogland
Thanks John,

I am still studying it, but most of it adequately describes our session.

On Mon, Jul 8, 2013 at 8:47 PM, John Burwell  wrote:

> 5. Backup/Storage Snapshots:  Support transfer of storage snapshots from
> device to device (e.g. from a SAN to an object store).  Dependent on the
> flexibility of the streamlined storage driver enhancements, this capability
> may be able to be implemented completely in the orchestration layer.  If the
> Storage/Hypervisor Decoupling work does not split the notions of storage
> and hypervisor snapshots, this enhancement would likely require it.


I do not understand the last half of the sentence you write here. Does this
enhancement need the decoupling, or an unsplit notion of storage and
hypervisor snapshots? I would think both, but the start of the sentence with
'if' confuses me.

Does 'We must make sure that the Storage/Hypervisor Decoupling work does
not split the notions of storage and hypervisor snapshots, as this
enhancement would likely require it' describe what you mean?

regards,
Daan


Re: cloudstack 4.1 QinQ vlan behaviour

2013-07-09 Thread Valery Ciareszka
So, nobody uses QinQ with CloudStack 4.1?

On Mon, Jul 8, 2013 at 3:13 PM, Valery Ciareszka
wrote:

> Hi all,
>
> I use the following environment: CS 4.1, KVM, CentOS 6.4
> (management+node1+node2), OpenIndiana NFS server as primary and secondary
> storage.
> I have advanced networking in the zone. I split management/public/guest
> traffic into different VLANs, and use KVM network labels (bridge names):
> # cat /etc/cloud/agent/agent.properties |grep device
> guest.network.device=cloudbrguest
> private.network.device=cloudbrmanage
> public.network.device=cloudbrpublic
>
> I have following network configuration:
> eth0+eth1=bond0
> eth2+eth3=bond1
>
> I use VLAN id 211 on the bond1 interface for guest traffic:
>
> # brctl show
> bridge name     bridge id           STP enabled     interfaces
> cloudbrguest    8000.90e2ba317614   yes             vlan211
> cloudbrmanage   8000.90e2ba317614   yes             bond1.210
> cloudbrpublic   8000.90e2ba317614   yes             bond1.221
> cloudbrstor     8000.0025908814a4   yes             bond0
>
>
> The problem appeared after I upgraded CS from 4.0.2 to 4.1.
>
> How it works in 4.0.2:
> - a bridge interface cloudVirBr#VLANID is created on the hypervisor; #VLANID is
> a value from 1024 to 4096 (specified when creating the zone), e.g. cloudVirBr1224
> - a VLAN interface vlan211.#VLANID is created on the hypervisor and is plugged
> into cloudVirBr#VLANID
> I only had to permit VLAN id 211 on the switch ports, and all guest traffic
> (VLANs 1024-4096) was encapsulated.
>
> How it works in 4.1:
> - a bridge interface br#ETHNAME-#VLANID is created on the hypervisor, where
> #VLANID is a value from 1024 to 4096 (specified when creating the zone) and
> #ETHNAME is the name of the device on top of which the VLAN will be created,
> e.g. brbond1-1224
> - a VLAN interface bond1.#VLANID is created on the hypervisor and is plugged into
> br#ETHNAME-#VLANID
> However, the VLAN interface is created on top of the bond1 interface, while I
> would like it to be created on top of vlan211 (bond1.211).
> Now I have to permit VLAN ids 1024-4096 on the switch ports, which is not
> convenient.
>
> How do I configure CS 4.1 so that it works with guest VLANs the same way as
> it did in CS 4.0?
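Editor's note: for readers unfamiliar with the layout being discussed, a hedged
sketch of the 4.0.2-style QinQ stacking, using the device and bridge names from
the posts above (assumes the 8021q module is loaded; CloudStack 4.1 would still
have to be told to build guest VLANs on top of vlan211 rather than bond1, which
is exactly the poster's question):

---
# outer tag 211, carried once on the switch ports
ip link add link bond1 name vlan211 type vlan id 211
ip link set vlan211 up
# what 4.0.2 effectively did for guest VLAN 1224 (inner tag):
ip link add link vlan211 name vlan211.1224 type vlan id 1224
ip link set vlan211.1224 up
brctl addbr cloudVirBr1224
brctl addif cloudVirBr1224 vlan211.1224
---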
>
> --
> Regards,
> Valery
>
> http://protocol.by/slayer
>



-- 
Regards,
Valery

http://protocol.by/slayer


RE: CloudStack Network architecture for VPC...

2013-07-09 Thread COCHE Sébastien
Hello

 

Has nobody deployed the VPC feature in a large-scale deployment?

 

 

From: COCHE Sébastien [mailto:sco...@sigma.fr]
Sent: Monday, July 8, 2013 11:49
To: users@cloudstack.apache.org
Subject: RE: CloudStack Network architecture for VPC...

 

It seems the VPC feature works fine in a small-scale deployment (when the
CloudStack management server is on the same network as the hypervisors).

Has anyone already used VPC in a large-scale deployment?

My configuration looks like the schema taken from the CloudStack Installation
Guide 4.0.0, chapter 9.2.



 

-Original Message-
From: COCHE Sébastien [mailto:sco...@sigma.fr]
Sent: Monday, July 8, 2013 11:24
To: users@cloudstack.apache.org
Subject: CloudStack Network architecture for VPC...

 

Hello all,

I want to test the VPC feature on CloudStack.

When I deploy a new VPC, communication fails between the VPC's vRouter and the
CloudStack manager.

After some investigation, it seems that the vRouter's default gateway is set on
the public subnet, and there is no static route configured to reach the
CloudStack manager on the management network.
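Editor's note: a hedged way to confirm that symptom from the VR console;
addresses and interface names are placeholders, and the route add is for
testing only, since CloudStack manages VR routes itself:

---
# on the VPC virtual router console:
ip route show                       # expect: default via <public-gateway>
ping -c 3 <management-server-ip>    # fails if there is no route back
# temporary test only:
ip route add <management-subnet> via <pod-gateway> dev <mgmt-nic>
---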

 

I configured a subnet for cloud management (CloudStack manager, vCenter server,
...) and a subnet for each pod (VMware hypervisors and KVM hypervisors).

Can you tell me what is wrong in my design?

Should I add a CloudStack manager NIC in each pod, or should I put the
hypervisors in the same subnet as the CloudStack manager?

The standard vRouter worked fine in that design...

Thank you

Best regards

Sébastien Coché, Architecte Infrastructure Direction Veille & Méthodes

(+33) 2.53.48.92.57 - poste : 92.57
(+33) 6.22.25.03.74

SIGMA Informatique - http://www.sigma.fr/

3 rue Newton - BP 4127
44241 La Chapelle sur Erdre Cedex


How to let vm instance display the processor info of its host server?

2013-07-09 Thread WXR
I use KVM as the hypervisor; CloudStack is installed on a Dell server. The CPU
is an E5-2620 and the HDD is a Seagate SAS drive.

By default the VM's processor info is the QEMU virtual processor, and the hard
disk shows as a QEMU or virtio device.

I want the VM to show the hardware info of its host server, e.g. E5-2620 and
SAS HDD.
Is there any global setting or other method to achieve this?
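Editor's note: CloudStack had no such global setting at the time, as far as I
know; with plain libvirt/KVM the usual mechanism is CPU passthrough. A hedged
sketch (the agent.properties key shown was added in later 4.x releases, so
verify it exists in your version):

---
# plain libvirt: expose the host CPU model to the guest via the domain XML
#   virsh edit <vm>    then add:    <cpu mode='host-passthrough'/>
# later CloudStack 4.x KVM agents expose a similar knob in
# /etc/cloudstack/agent/agent.properties:
#   guest.cpu.mode=host-passthrough
# note: the virtual disk will still report as a QEMU/virtio device; it does
# not inherit the physical controller's identity.
---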

question about migrate VMs from xenserver 6.1 to cloudstack 3.0.2/4.1

2013-07-09 Thread William Jiang
Hi,
We have a XenServer 6.1 pool with 6 hosts and about 90 VMs in total, some
with one disk and others with 2 disks, on NFS shared storage.
Meanwhile, we have a CloudStack 3.0.2 with XenServer 6.0.2 hosts (we plan to
upgrade to CloudStack 4.1 & XenServer 6.1 soon). The storage is on iSCSI.

My question is:
If I want to move all the VMs from the XenServer pool to CloudStack, is there
a way to do a fast migration, or do I have to migrate them one by one?
To my understanding, I need to export each VM from XenServer, import it into
CloudStack as a template, and create an instance from the imported template.
But that process only works for one-disk VMs; what about the VMs with 2
disks?
Any comments or suggestions will be greatly appreciated.

Thanks,
William
This e-mail may be privileged and/or confidential, and the sender does not
waive any related rights and obligations. Any distribution, use or copying of
this e-mail or the information it contains by other than an intended recipient
is unauthorized. If you received this e-mail in error, please advise me (by
return e-mail or otherwise) immediately.


RE: Advanced Physical Networking query

2013-07-09 Thread Geoff Higginbottom
Unfortunately the blog article referenced has a few errors in it and could be
confusing; you might want to take a look at the following:

http://www.shapeblue.com/citrix/cloudstack-networking-considerations/
http://www.shapeblue.com/cloudstack/understanding-cloudstacks-physical-networking-architecture/
http://blog.remibergsma.com/2012/08/30/going-beyond-cloudstack-advanced-networking-how-i-replaced-the-virtual-router-with-my-own-physical-linux-router/

Regards

Geoff Higginbottom

D: +44 20 3603 0542 | S: +44 20 3603 0540 | M: +447968161581

geoff.higginbot...@shapeblue.com


-Original Message-
From: Jayapal Reddy Uradi [mailto:jayapalreddy.ur...@citrix.com]
Sent: 06 July 2013 16:31
To: 
Cc: Musayev, Ilya
Subject: Re: Advanced Physical Networking query

Hi,

Create an advanced isolated network.
In an advanced isolated network, VMs get their internal IP addresses from the
virtual router's DHCP service. On the network you can acquire a public IP
address, which gets configured on the VR.
To reach VMs from the public side, you can configure either port forwarding or
static NAT rules, along with firewall rules, on the public IP address.

Please refer to the following:
http://blogs.clogeny.com/citrixs-cloudstack-3-0-advanced-zone-setup/
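As a hedged illustration of that flow with cloudmonkey (IDs are placeholders;
the calls map to the standard associateIpAddress, createFirewallRule and
createPortForwardingRule APIs):

---
# acquire a public IP on the isolated network, then open and forward port 22
cloudmonkey associate ipaddress networkid=<network-id>
cloudmonkey create firewallrule ipaddressid=<ip-id> protocol=tcp \
    startport=22 endport=22 cidrlist=0.0.0.0/0
cloudmonkey create portforwardingrule ipaddressid=<ip-id> protocol=tcp \
    publicport=22 privateport=22 virtualmachineid=<vm-id>
---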

Thanks,
Jayapal

On 06-Jul-2013, at 7:51 AM, Abhinandan Prateek  wrote:

> Hi Ian,
>
>  You are looking for a basic zone.
>
> Probably go through the admin guide here:
> http://cloudstack.apache.org/docs/en-US/Apache_CloudStack/4.0.0-incubating/html-single/Admin_Guide/#basic-zone-configuration
>
> -abhi
>
>
> On 06/07/13 6:56 AM, "Ian Duffy"  wrote:
>
>> Hi Ilya/List
>>
>> I was reading the post over at
>> https://cwiki.apache.org/CLOUDSTACK/cloudstack-advanced-network-tutorial-step-by-step.html
>> and was wondering if I could get some information from you (or anybody
>> else who can contribute).
>>
>> I want a setup whereby instances are brought up with a public IP and
>> an internal IP for communication with other instances, both obtained from
>> DHCP running on a physical gateway.
>>
>> In terms of networking with Xen (preferably) or vCenter, what
>> networking is required?
>>
>> I'm assuming I'll need the following:
>>
>> Management network: CloudStack manager, hypervisor, storage
>>
>> Guest network (instance gets some private IP supplied by DHCP on the
>> physical gateway): hypervisor
>>
>> Public network (instance gets some public IP supplied by DHCP on the
>> physical gateway): hypervisor
>>
>> Is it just a matter of:
>> 1) Creating a network offering as described here:
>> http://blog.remibergsma.com/2012/03/10/howto-create-a-network-in-cloudstack-without-a-virtual-router/
>> 2) Creating a public and guest network within the zone
>> 3) Creating matching labels for the public and guest networks in Xen,
>> pointing to the UUIDs of the network cards?
>>
>> I think what is tripping me up the most is the IP address space
>> required for a pod. I understand a pod contains hosts and primary
>> storage, so am I correct in thinking that my pod address space in the
>> configuration outlined above would just be some addresses within the
>> address space given to the management network?
>>
>> Thanks,
>> Ian
>
>


This email and any attachments to it may be confidential and are intended 
solely for the use of the individual to whom it is addressed. Any views or 
opinions expressed are solely those of the author and do not necessarily 
represent those of Shape Blue Ltd or related companies. If you are not the 
intended recipient of this email, you must neither take any action based upon 
its contents, nor copy or show it to anyone. Please contact the sender if you 
believe you have received this email in error. Shape Blue Ltd is a company 
incorporated in England & Wales. ShapeBlue Services India LLP is operated under 
license from Shape Blue Ltd. ShapeBlue is a registered trademark.



RE: question about migrate VMs from xenserver 6.1 to cloudstack 3.0.2/4.1

2013-07-09 Thread Brian Galura
The method you describe is the only way I know how to do it. We have done a
similar migration with some scripts around xe on XenServer and cloudmonkey for
importing into CloudStack.

I'm interested to know if anyone else has tried this. We have thousands of VMs
still to migrate from legacy XenServer into CloudStack. Maybe we can publish a
tool for this that the community can maintain?
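A hedged sketch of that xe + cloudmonkey flow for a single VM (UUIDs, URLs and
IDs are placeholders; CloudStack XenServer templates must be VHD, and extra
data disks would have to be exported and attached separately):

---
# on the XenServer pool: find the root disk and export it as VHD
xe vbd-list vm-uuid=<vm-uuid> type=Disk params=vdi-uuid
xe vdi-export uuid=<root-vdi-uuid> filename=/mnt/export/vm01-root.vhd format=vhd

# host the file on an HTTP server the SSVM can reach, then register it:
cloudmonkey register template name=vm01 displaytext=vm01 format=VHD \
    hypervisor=XenServer ostypeid=<os-type-id> zoneid=<zone-id> \
    url=http://fileserver/exports/vm01-root.vhd
---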

-Original Message-
From: William Jiang [mailto:william.ji...@manwin.com] 
Sent: Tuesday, July 09, 2013 7:30 AM
To: users@cloudstack.apache.org
Subject: question about migrate VMs from xenserver 6.1 to cloudstack 3.0.2/4.1

[quoted text of William's original message above trimmed]


Re: outage feedback and questions

2013-07-09 Thread Laurent Steff
Hi Dean,

And thanks for your answer.

Yes, the network troubles led to issues with the main storage on the clusters
(iSCSI).

So is it a fact that if the main storage is lost on KVM, VMs are stopped and
their domains destroyed?

It was a hypothesis, as I found traces in

apache-cloudstack-4.0.2-src/plugins/hypervisors/kvm/src/com/cloud/hypervisor/kvm/resource/KVMHABase.java

which "kill -9" the qemu processes if the main storage is not found, but I was
not sure when the function was called.

It is the function checkingMountPoint, which calls destroyVMs if the mount
point is not found.
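Editor's note: a rough bash rendering of the behaviour described, for readers
following along (an illustration only, not the actual KVMHABase.java logic):

---
# if the primary-storage mount point disappears, the HA check destroys
# the VMs using it (the "kill -9 qemu" mentioned above)
if ! mountpoint -q /mnt/<primary-storage-uuid>; then
    for dom in $(virsh list --name); do
        virsh destroy "$dom"    # hard-stops the backing qemu process
    done
fi
---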

Regards,

- Original Message -
> From: "Dean Kamali" 
> To: users@cloudstack.apache.org
> Sent: Monday, July 8, 2013 16:34:04
> Subject: Re: outage feedback and questions
> 
> The surviving VMs are on the same KVM/GFS2 cluster.
> The SSVM is one of them. Messages on the console indicate it was temporarily
> in read-only mode.
> 
> Do you have an issue with storage?
> 
> I wouldn't expect a failure in a switch to cause all of this; it will
> cause loss of network connectivity, but it shouldn't cause your VMs to go
> down.
> 
> This behavior usually happens when you lose your primary storage.
> 
> 
> 
> 
> On Mon, Jul 8, 2013 at 8:39 AM, Laurent Steff
> wrote:
> 
> > Hello,
> >
> > Cloudstack is used in our company as a core component of a "Continuous
> > Integration" service.
> >
> > We are mainly happy with it, for a lot of reasons too long to
> > describe. :)
> >
> > We recently encountered a major service outage on Cloudstack, mainly linked
> > to bad practices on our side, and the aims of this post are to:
> >
> > - ask questions about things we haven't understood yet
> > - gather some practical best practices we missed
> > - if the problems detected are still present in Cloudstack 4.x, help
> > make Cloudstack more robust with our feedback
> >
> > We know that the 3.x version is not supported and plan to move to 4.x ASAP.
> >
> > It's quite a long mail, and it may be badly directed (dev mailing list?
> > multiple bugs?)
> >
> > Any response is appreciated ;)
> >
> > Regards,
> >
> >
> > long part
> >
> > Architecture :
> > --
> >
> > Old and non Apache CloudStack 3.0.2 release
> > 1 Zone, 1 physical network, 1 pod
> > 1 Virtual Router VM, 1 SSVM
> > 4 CentOS 6.3 KVM clusters, primary storage GFS2 on iSCSI storage
> > Management Server on a VMware virtual machine
> >
> >
> >
> > Incidents :
> > ---
> >
> > Day 1: Management Server DoSed by internal synchronization scripts (ldap
> > to Cloudstack)
> > Day 3: DoS corrected; Management Server RAM and CPU upgraded, and rebooted
> > (it had not been rebooted in more than a year). Cloudstack
> > is running again normally (vm creation/stop/start/console/...)
> > Day 4: (week-end) Network outage on the core datacenter switch. Network
> > unstable for 2 days.
> >
> > Symptoms :
> > --
> >
> > Day 7: The network is operational but most VMs have been down (250 of 300)
> > since Day 4.
> > The libvirt configurations (/etc/libvirt.d/qemu/VMuid.xml) were erased.
> >
> > The VirtualRouter VM was one of them. Filesystem corruption prevented
> > it from rebooting normally.
> >
> > The surviving VMs are on the same KVM/GFS2 cluster.
> > The SSVM is one of them. Messages on the console indicate it was temporarily
> > in read-only mode.
> >
> > Hard way to revival (actions):
> > -
> >
> > 1. VirtualRouter VM destroyed by an administrator, to let CloudStack
> > recreate it from the template.
> >
> > BUT :)
> >
> > the SystemVM KVM template is not available. Its status in the GUI is
> > "CONNECTION REFUSED".
> > The url it was downloaded from during the install is no longer valid (an
> > old and unavailable internal mirror server instead of
> > http://download.cloud.com)
> >
> > => we are unable to restart the stopped VMs or create new ones
> >
> > 2. Manual download of the template on the Management Server, as in a
> > fresh install
> >
> > ---
> > /usr/lib64/cloud/agent/scripts/storage/secondary/cloud-install-sys-tmplt \
> >   -m /mnt/secondary/ \
> >   -u http://ourworkingmirror/repository/cloudstack-downloads/acton-systemvm-02062012.qcow2.bz2 \
> >   -h kvm -F
> > ---
> >
> > It's not sufficient: the mysql table template_host_ref does not change,
> > even when changing the url in the mysql tables.
> > We still have "CONNECTION REFUSED" as the template status in mysql and in
> > the GUI.
> >
> > 3. After analysis, we needed to manually alter the mysql tables (the
> > template_id of the KVM systemVM was x):
> >
> > ---
> > update template_host_ref set download_state='DOWNLOADED' where template_id=x;
> > update template_host_ref set job_id='NULL' where template_id=x;  <= may be useless
> > ---
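(Editor's note: anyone repeating this may want a sanity check around those
updates; a hedged query using the same table and placeholder id as above:)

---
select template_id, download_state, job_id
  from template_host_ref where template_id=x;
---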
> >
> > 4. As in MySQL, the status in the GUI is DOWNLOADED
> >
> > 5. Power-on of a stopped VM; Cloudstack builds a new VirtualRouter

Re: outage feedback and questions

2013-07-09 Thread Dean Kamali
Well, I asked on the mailing list some time ago about CloudStack's behaviour
when I lose connectivity to primary storage and the hypervisors start
rebooting randomly.

I believe this is very similar to what happened in your case.

This is actually 'by design'.  The logic is that if the storage goes
offline, then all VMs must have also failed, and a 'forced' reboot of the
Host 'might' automatically fix things.

This is great if you only have one Primary Storage, but typically you have
more than one, so whilst the reboot might fix the failed storage, it will
also kill off all the perfectly good VMs which were still happily running.

The answer I got was for XenServer, not KVM; it involved removing the
reboot -f command from a script.



The fix for XenServer Hosts is to:

1. Modify /opt/xensource/bin/xenheartbeat.sh on all your Hosts, commenting
out the two entries which have "reboot -f"

2. Identify the PID of the script  - pidof -x xenheartbeat.sh

3. Restart the script - kill <PID>

4. Force reconnect the Host from the UI; the script will then re-launch on
reconnect
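The same steps as shell commands, hedged (paths as given in the post; test on
a non-production host first):

---
# comment out the lines containing "reboot -f" in the heartbeat script
sed -i '/reboot -f/s/^/#/' /opt/xensource/bin/xenheartbeat.sh
# stop the running script; it re-launches when the host reconnects
kill $(pidof -x xenheartbeat.sh)
# then force-reconnect the host from the CloudStack UI
---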



On Tue, Jul 9, 2013 at 7:08 PM, Laurent Steff wrote:

> [quoted text of Laurent's message, shown in full above, trimmed]

Re: outage feedback and questions

2013-07-09 Thread Dean Kamali
Courtesy to geoff.higginbottom@shapeblue.com for answering this question first.


On Tue, Jul 9, 2013 at 7:33 PM, Dean Kamali  wrote:

> [quoted text of Dean's message, shown in full above, trimmed]

CloudStack Mirrors

2013-07-09 Thread Maurice Lawler
Greetings,

Is there any plan to make use of mirrors for folks downloading / updating from
the repo? Or is there one in existence now?

- Maurice


Re: CloudStack Mirrors

2013-07-09 Thread Matthew E. Porter
If there is a need, we (Contegix) are happy to host one.


Cheers,
  Matthew 


---
Matthew E. Porter
Contegix
E-mail: matthew.por...@contegix.com
Twitter: @meporter | http://twitter.com/meporter

On Jul 9, 2013, at 7:24 PM, Maurice Lawler  wrote:

> Greetings,
> 
> Is there any plan to make use of mirrors for folks downloading / updating 
> from the repo. Or is there one in existence now?
> 
> 
> - Maurice


System VM

2013-07-09 Thread Maurice Lawler
Hello,

I'm curious: is this the most recent, up-to-date system VM download for KVM?

http://download.cloud.com/templates/acton/acton-systemvm-02062012.qcow2.bz2


Re: CloudStack News - Moving to Wednesday

2013-07-09 Thread Ryan Lei
Hi, all. According to the Apache infra status
http://monitoring.apache.org/status/ , the ASF Blog system has been in the
status of CRITICAL or WARNING for many days, and I'm still not able to
browse the newsletter website. It's close to Wednesday now. Does anyone
know what's going on there?

---
Yu-Heng (Ryan) Lei, Associate Researcher
Chunghwa Telecom Laboratories / Cloud Computing Laboratory
ryan...@cht.com.tw
or
ryanlei750...@gmail.com



On Tue, Jul 9, 2013 at 2:49 AM, Mathias Mullins
wrote:

> - Multi-list send since this is a community wide announcement
>
> Just a reminder, the community news blog is moving to being published on
> Wednesdays starting this week on July 10! This is to make it more timely
> with the data that we are working on throughout the week.
>
> Please make sure to have any information, events (including dates), or
> topics that you would like to see covered or focused on posted to the
> marketing@c.a.o mailing list by Monday EOD so we have time to get it in.
> We'll try to get stuff happening in on Tuesday as well. If you would like
> to add it directly to the news feed wiki, please add it at:
> https://cwiki.apache.org/confluence/display/CLOUDSTACK/CloudStack+Weekly+News
>
> Thanks,
> Matt Mullins
>


Re: CloudStack News - Moving to Wednesday

2013-07-09 Thread David Nalley
Yes, there are some problems with the Roller instance (the underlying
software that blogs.a.o runs on). Infrastructure has been discussing the
situation.
There is no immediate timeline on having the issue resolved to my knowledge.

--David

On Tue, Jul 9, 2013 at 10:03 PM, Ryan Lei  wrote:
> [quoted text of Ryan's message, shown in full above, trimmed]


Re: CloudStack News - Moving to Wednesday

2013-07-09 Thread Mathias Mullins
We'll try to get it up as soon as the issues clear up.

Matt 



On 7/9/13 9:06 PM, "David Nalley"  wrote:

>[quoted text of David's reply, shown in full above, trimmed]