Re: CS Agent stuck in "bootloop" & maintenance mode after OS updates

2022-09-29 Thread vas...@gmx.de
Short update.

Was able to 'solve' this problem in the DB by changing the host's state from
'Maintenance' to 'Enabled'.
Afterwards the host came back online like a charm.
Nevertheless I had the same problem on all our hosts. It will be interesting
to see what happens on the next CS upgrade then...
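
For anyone hitting the same thing, the workaround above might look roughly like this (table and column names are from the standard `cloud` schema and assumed here; verify against your 4.17 DB and take a backup before touching state by hand):

```shell
# Hedged sketch of the DB workaround described above. 'resource_state' is
# the column that holds Maintenance/Enabled in the cloud.host table; check
# your schema and back up the DB before editing state by hand.
mysql -u cloud -p cloud -e "
  UPDATE host
     SET resource_state = 'Enabled'
   WHERE resource_state = 'Maintenance'
     AND removed IS NULL;"
# then restart the agent on the affected host:
systemctl restart cloudstack-agent
```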

Regards,
Chris

On Mon, 26 Sept 2022 at 23:22, vas...@gmx.de wrote:

> Hi everyone,
>
> after performing the regular security updates provided for Ubuntu Server
> 20.04.5 and a proper restart of the host, the CS Agent doesn't come up
> properly and is now in a "bootloop".
> Current CS version: 4.17.0.1
>
> Process for upgrading the host:
> 1. Enable "maintenance mode" in CS Management for the host -> Successful
> 2. Stop CS Agent on the host -> Successful
> 3. Update Ubuntu with the newest available security updates
> 4. Restart the host
>
> On reboot the CS Agent starts as expected, but the state shown in the GUI
> only cycles between 'Connecting' and 'Alert'.
>
> The logs from the management server:
>
> 2022-09-26 20:02:32,282 DEBUG [o.a.c.c.p.RootCACustomTrustManager]
> (pool-12-thread-1:null) (logid:) Client/agent connection from ip=172.17.0.2
> has been validated and trusted.
> 2022-09-26 20:02:32,405 DEBUG [o.a.c.h.HAManagerImpl]
> (BackgroundTaskPollManager-3:ctx-47ba4f29) (logid:829bd5fc) HA health check
> task is running...
> 2022-09-26 20:02:32,423 DEBUG [c.c.a.t.Request]
> (AgentManager-Handler-7:null) (logid:) Seq 0-0: Scheduling the first
> command  { Cmd , MgmtId: -1, via: 0, Ver: v1, Flags: 1,
> [{"com.cloud.agent.api.StartupRoutingCommand":{"cpuSockets":"1","cpus":"48","speed":"2650","memory":"268725788672","dom0MinMemory":"1073741824","poolSync":"false","supportsClonedVolumes":"false","caps":"hvm,snapshot","pool":"/root","hypervisorType":"KVM","hostDetails":{"Host.OS.Kernel.Version":"5.4.0-126-generic","com.cloud.network.Networks.RouterPrivateIpStrategy":"HostLocal","Host.OS.Version":"20.04","secured":"true","Host.OS":"Ubuntu"},"hostTags":[],"groupDetails":{},"type":"Routing","dataCenter":"1","pod":"1","cluster":"1","guid":"dcb7e9d3-b26a-3da5-b91b-10dd1e28d97a-LibvirtComputingResource","name":"srv-xx.x","id":"0","version":"4.17.0.1","iqn":"iqn.1993-08.org.debian:01:e0741deca62","privateIpAddress":"172.17.0.2","privateMacAddress":"b0:7b:25:c0:1a:8b","privateNetmask":"255.255.255.192","storageIpAddress":"172.17.0.2","storageNetmask":"255.255.255.192","storageMacAddress":"b0:7b:25:c0:1a:8b","resourceName":"LibvirtComputingResource","gatewayIpAddress":"172.16.2.1","msHostList":"172.17.1.2@static","wait":"0","bypassHostMaintenance":"false"}},{"com.cloud.agent.api.StartupStorageCommand":{"totalSize":"(0
> bytes)
> 0","poolInfo":{"uuid":"5603c980-a676-4b0f-8e00-b8bbc7ef740a","host":"172.17.0.2","localPath":"/var/lib/libvirt/images","hostPath":"/var/lib/libvirt/images","poolType":"Filesystem","capacityBytes":"(438.55
> GB) 470886195200","availableBytes":"(423.36 GB)
> 454582964224"},"resourceType":"STORAGE_POOL","hostDetails":{},"type":"Storage","dataCenter":"1","pod":"1","guid":"dcb7e9d3-b26a-3da5-b91b-10dd1e28d97a-LibvirtComputingResource","name":"srv-xx.x","id":"0","version":"4.17.0.1","resourceName":"LibvirtComputingResource","msHostList":"172.17.1.2@static","wait":"0","bypassHostMaintenance":"false"}}]
> }
> 2022-09-26 20:02:32,424 DEBUG [c.c.a.t.Request]
> (AgentConnectTaskPool-6:ctx-5ce58719) (logid:929b8bd0) Seq 0-0: Processing
> the first command  { Cmd , MgmtId: -1, via: 0, Ver: v1, Flags: 1,
> [{"com.cloud.agent.api.StartupRoutingCommand":{"cpuSockets":"1","cpus":"48","speed":"2650","memory":"268725788672","dom0MinMemory":"1073741824","poolSync":"false","supportsClonedVolumes":"false","caps":"hvm,snapshot","pool":"/root","hypervisorType":"KVM","hostDetails":{"Host.OS.Kernel.Version":"5.4.0-126-generic","com.cloud.network.Networks.RouterPrivateIpStrategy":"HostLocal","Host.OS.Version":"20.04","secured":"true","Host.OS":"Ubuntu"},"hostTags":[],"groupDetails":{},"type":"Routing","dataCenter":"1","pod":"1","cluster":"1","guid":"dcb7e9d3-b26a-3da5-b91b-10dd1e28d97a-LibvirtComputingResource","name":"srv-xx.x","id":"0","version":"4.17.0.1","iqn":"iqn.1993-08.org.debian:01:e0741deca62","privateIpAddress":"172.17.0.2","privateMacAddress":"b0:7b:25:c0:1a:8b","privateNetmask":"255.255.255.192","storageIpAddress":"172.17.0.2","storageNetmask":"255.255.255.192","storageMacAddress":"b0:7b:25:c0:1a:8b","resourceName":"LibvirtComputingResource","gatewayIpAddress":"172.16.2.1","msHostList":"172.17.1.2@static","wait":"0","bypassHostMaintenance":"false"}},{"com.cloud.agent.api.StartupStorageCommand":{"totalSize":"(0
> bytes)
> 0","poolInfo":{"uuid":"5603c980-a676-4b0f-8e00-b8bbc7ef740a","host":"172.17.0.2","localPath":"/var/lib/libvirt/images","hostPath":"/var/lib/libvirt/images","poolType":"Filesystem","capacityBytes":"(438.55
> GB) 470886195200","availableBytes":"(423.36 GB)
> 454582964224"},"resourceType":"STORAGE_POOL","hostDetails":{},"type":"Sto

Re: Remote Access VPN

2022-09-29 Thread Ricardo Pertuz
1. Make sure you have the latest updates for Windows 10 (KB5010342) or use
Windows 11.

2. Configure a registry value and reboot your laptop:

Create AssumeUDPEncapsulationContextOnSendRule as a DWORD under
"HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Services\PolicyAgent" and set
its decimal value to 2.
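
For reference, step 2 can be done from an elevated command prompt (this is the standard fix for L2TP/IPsec clients behind NAT; reboot afterwards):

```bat
:: Create the registry value described above, then reboot.
reg add "HKLM\SYSTEM\CurrentControlSet\Services\PolicyAgent" ^
    /v AssumeUDPEncapsulationContextOnSendRule /t REG_DWORD /d 2 /f
shutdown /r /t 0
```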



From: Christian Reichert 
Sent: Thursday, September 29, 2022, 11:11 AM
To: 'users@cloudstack.apache.org' 
Subject: Remote Access VPN


Hi All,



we set up Remote Access VPN on a VPC as described in the current documentation
for 4.16.1.0, but our Windows test client is not connecting.

Is there any way to debug the VPN configuration?



Thanks and best regards,



Christian



Remote Access VPN

2022-09-29 Thread Christian Reichert
Hi All,

we set up Remote Access VPN on a VPC as described in the current documentation
for 4.16.1.0, but our Windows test client is not connecting.
Is there any way to debug the VPN configuration?

Thanks and best regards,

Christian




Proper procedure to delete orphan volumes from a dead Primary Storage

2022-09-29 Thread Antoine Boucher
I have a few VM-less volumes in the Destroy state on a no-longer-existing
(dead) primary storage.

Since I'm unable to remove the volumes from the GUI, what would be the proper
procedure to remove the entries from the DB?

Would I just delete the appropriate rows of the "volumes" and "volume_view"
tables? Or alternatively change the value of one of the fields?
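
A possible sketch, assuming the standard `cloud` schema (back up the DB first): rather than DELETE-ing rows, the CloudStack convention is to soft-delete by setting the `removed` timestamp, and `volume_view` is a view over `volumes`, so it should not be edited directly. `<dead-pool-id>` below is a placeholder for the id of your dead primary storage:

```shell
# Hedged sketch: soft-delete the orphan rows instead of dropping them.
# Column names assumed from the 'cloud' schema -- verify before running.
mysql -u cloud -p cloud -e "
  UPDATE volumes
     SET state = 'Expunged', removed = NOW()
   WHERE state = 'Destroy'
     AND pool_id = <dead-pool-id>;"
```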

Regards,
Antoine




Increase Virtual Router Disk Size, Cannot start instances

2022-09-29 Thread Bs Serge
Cloudstack: 4.17.0.1
Hypervisor: KVM
OS: Centos 8

I couldn't start instances; I checked the management logs and found these
errors: "Unable to apply dhcp entry on router" and "No space left on device".

I also noticed a red flag in the virtual router health checks, "Insufficient
free space is 0 MB", so I SSHed into the router and found that the root (/)
filesystem has 3.5G and is 100% used.

So I'm wondering: how do I resize the virtual router disk to increase it,
preferably without destroying the router?
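
Before resizing, it may be worth checking what is actually filling the root filesystem; on system VMs it is usually logs. A minimal inspection sketch (run inside the router; the paths are the usual suspects, not anything CloudStack-specific):

```shell
# Inspect usage on the root filesystem, then drill into /var/log,
# which is the usual culprit on a full virtual router.
df -h /
du -xh /var/log 2>/dev/null | sort -rh | head -10
```

If old logs are to blame, truncating or vacuuming them frees space without destroying the router; an actual disk resize would otherwise mean recreating the VR from a larger system VM template.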

Any thoughts or comments would be appreciated!

Best regards,


Re: Cannot find storage pool

2022-09-29 Thread Nicolas Vazquez
Hi Groete,

If the problem still persists, can you please increase the log level to debug
on the KVM host, reproduce the issue, and share anything relevant from the host
agent logs?
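
For reference, on the KVM host that usually means something like the following (file locations per the standard cloudstack-agent packaging; the exact log4j syntax varies between versions, so adjust accordingly):

```shell
# Hedged sketch: switch the agent's log level from INFO to DEBUG (keeping
# a backup of the config), restart the agent, then watch the log while
# reproducing the problem.
sed -i.bak 's/INFO/DEBUG/g' /etc/cloudstack/agent/log4j-cloud.xml
systemctl restart cloudstack-agent
tail -f /var/log/cloudstack/agent/agent.log
```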

Regards,
Nicolas Vazquez


From: Nux 
Date: Thursday, 29 September 2022 at 11:08
To: users@cloudstack.apache.org 
Cc: Granwille Strauss 
Subject: Re: Cannot find storage pool

If you have a firewall between hosts and NFS server turn it off and try again.

Also try to force NFS protocol v3 or 4, see if it makes a difference.
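
A quick manual test of both protocol versions from the KVM host might look like this (IP and export path taken from the error in this thread):

```shell
# Hedged sketch: mount the export by hand, pinning the NFS version,
# to see whether one protocol behaves better than the other.
mkdir -p /mnt/nfstest
mount -t nfs -o vers=3 156.38.173.122:/mnt/primary /mnt/nfstest \
  || mount -t nfs -o vers=4.1 156.38.173.122:/mnt/primary /mnt/nfstest
df -h /mnt/nfstest
umount /mnt/nfstest
```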
---
Nux
www.nux.ro



On 2022-09-29 13:28, Granwille Strauss wrote:

The problem is back again. I tried creating a new volume snapshot, and once
again the process has gotten stuck.

Everything is up and running, NFS is also active. Maybe this is a bug for KVM?
On 9/29/22 13:41, Granwille Strauss wrote:

Hi

I am not sure what happened, but during the volume snap I ran, my NFS server
magically stopped working. This caused the snap process to hang and also
caused the mount error below.

I had to comment out the lines in /etc/exports and reload the daemon, and only
then could I start NFS again. I then uncommented the lines in /etc/exports and
reloaded the daemon again; storage was back up and the host can connect to CM
again.

Not sure why NFS randomly dropped for no reason.
On 9/29/22 12:36, Slavka Peleva wrote:

Hi Groete,

Can you share the end of this message?

internal error: Child process (/usr/bin/mount -o nodev,nosuid,noexec
156.38.173.122:/mnt/primary /mnt/81ffca3a-9775-375d-a1c0-9504c0ec3d89)
unexpected exit st>

Best regards,
Slavka

On Thu, Sep 29, 2022 at 1:17 PM Granwille Strauss wrote:



Hi Guys

I tried making volume snapshots and the first 4 VMs were successful. When
I tried the 5th VM volume snapshot, the process just "hung" and got
stuck in the "Snapshotting" and "Creating" states.

I rebooted both my KVM server and CM server and now the host cannot
connect to CM. I am getting this error:

2022-09-29 12:11:37,104 INFO  [kvm.storage.LibvirtStorageAdaptor]
(agentRequest-Handler-2:null) (logid:c2cf2636) Didn't find an existing
storage pool 81ffca3a-9775-375d-a1c0-9504c0ec3d89 by UUID, checking for
pools with duplicate paths

I updated the snapshots and volumes tables in the database, set status and
state to "Ready" and "BackedUp", and restarted CM and agent with no luck. I
restarted libvirtd too, but it is still looking for the storage pool and I
have no idea what happened to it:

internal error: Child process (/usr/bin/mount -o nodev,nosuid,noexec
156.38.173.122:/mnt/primary /mnt/81ffca3a-9775-375d-a1c0-9504c0ec3d89)
unexpected exit st>

My primary storage is on the CM server at /mnt/primary, but I also have local
storage enabled at /var/lib/libvirt/images.

Any help please? My host is not connecting any more and all production VMs
are down.

--
Regards / Groete



Granwille Strauss  //  Senior Systems Admin

e: granwi...@namhost.com
m: +264 81 323 1260
w: www.namhost.com

Namhost Internet Services (Pty) Ltd,
24 Black Eagle Rd, Hermanus, 7210, RSA

The content of this message is confidential. If you have received it by
mistake, please inform us by email reply and then delete the message. It is
forbidden to copy, forward, or in any way reveal the contents of this
message to anyone without our explicit consent. The integrity and security
of this email cannot be guaranteed over the Internet. Therefore, the sender
will not be held liable for any damage caused by the message. For our full
privacy policy and disclaimers, please go to
https://www.namhost.com/privacy-policy


--
Regards / Groete

Granwille Strauss  //  Senior Systems Admin

e: granwi...@namhost.com
m: +264 81 323 1260
w: www.namhost.com

Re: Cannot find storage pool

2022-09-29 Thread Nux



If you have a firewall between hosts and NFS server turn it off and try 
again.


Also try to force NFS protocol v3 or 4, see if it makes a difference.

---
Nux
www.nux.ro


Re: Cannot find storage pool

2022-09-29 Thread Granwille Strauss

Hi

I am not sure what happened, but during the volume snap I ran, my NFS server
magically stopped working. This caused the snap process to hang and also
caused the mount error below.

I had to comment out the lines in /etc/exports and reload the daemon, and only
then could I start NFS again. I then uncommented the lines in /etc/exports and
reloaded the daemon again; storage was back up and the host can connect to CM
again.

Not sure why NFS randomly dropped for no reason.
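
As an aside, commenting and uncommenting /etc/exports should not normally be necessary; re-exporting and checking the server without a restart can be sketched with the standard nfs-utils commands:

```shell
# Re-read /etc/exports and show what is actually being exported.
exportfs -ra
exportfs -v
systemctl status nfs-server --no-pager
showmount -e localhost
```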

On 9/29/22 12:36, Slavka Peleva wrote:

Hi Groete,

Can you share the end of this message?


internal error: Child process (/usr/bin/mount -o nodev,nosuid,noexec
156.38.173.122:/mnt/primary /mnt/81ffca3a-9775-375d-a1c0-9504c0ec3d89)
unexpected exit st>

Best regards,

Slavka

--
Regards / Groete

Granwille Strauss  //  Senior Systems Admin

e: granwi...@namhost.com
m: +264 81 323 1260
w: www.namhost.com

Namhost Internet Services (Pty) Ltd,
24 Black Eagle Rd, Hermanus, 7210, RSA




Re: Cannot find storage pool

2022-09-29 Thread Slavka Peleva
Hi Groete,

Can you share the end of this message?

> internal error: Child process (/usr/bin/mount -o nodev,nosuid,noexec
> 156.38.173.122:/mnt/primary /mnt/81ffca3a-9775-375d-a1c0-9504c0ec3d89)
> unexpected exit st>

Best regards,
Slavka

On Thu, Sep 29, 2022 at 1:17 PM Granwille Strauss
 wrote:

> Hi Guys
>
> I tried making volume snapshots and the first 4 VMs were succesfull. When
> I tried the 5th VM volume snapshot, the process just "hung" and managed to
> get stuck in "snapshotting" and in "Creating" states.
>
> I rebooted both my KVM server and CM server and now the host cannot
> connect to CM. I am getting this error:
>
> 2022-09-29 12:11:37,104 INFO  [kvm.storage.LibvirtStorageAdaptor]
> (agentRequest-Handler-2:null) (logid:c2cf2636) Didn't find an existing
> storage pool 81ffca3a-9775-375d-a1c0-9504c0ec3d89 by UUID, checking for
> pools with duplicate paths
>
> I updated the snapshots and volumes table in database and set status and
> state to "Ready" and "BackedUp" restarted CM and agent with no luck. I
> rebooted libvrtd too but its looking for the storage pool and I have no
> idea where or what happened with it:
>
> internal error: Child process (/usr/bin/mount -o nodev,nosuid,noexec
> 156.38.173.122:/mnt/primary /mnt/81ffca3a-9775-375d-a1c0-9504c0ec3d89)
> unexpected exit st>
>
> My primary storage is on CM server at /mnt/primary but I also have local
> storage enabled at /var/lib/libvirt/images
>
> Any help please? My host is not connecting any more and all production VMs
> are down.
> --
> Regards / Groete


Cannot find storage pool

2022-09-29 Thread Granwille Strauss

Hi Guys

I tried making volume snapshots and the first 4 VMs were successful.
When I tried the 5th VM volume snapshot, the process just "hung" and
got stuck in the "Snapshotting" and "Creating" states.

I rebooted both my KVM server and CM server and now the host cannot
connect to CM. I am getting this error:

2022-09-29 12:11:37,104 INFO [kvm.storage.LibvirtStorageAdaptor]
(agentRequest-Handler-2:null) (logid:c2cf2636) Didn't find an existing
storage pool 81ffca3a-9775-375d-a1c0-9504c0ec3d89 by UUID, checking
for pools with duplicate paths

I updated the snapshots and volumes tables in the database, set status and
state to "Ready" and "BackedUp", and restarted CM and agent with no luck. I
restarted libvirtd too, but it is still looking for the storage pool and I
have no idea what happened to it:

internal error: Child process (/usr/bin/mount -o nodev,nosuid,noexec
156.38.173.122:/mnt/primary /mnt/81ffca3a-9775-375d-a1c0-9504c0ec3d89)
unexpected exit st>

My primary storage is on the CM server at /mnt/primary, but I also have local
storage enabled at /var/lib/libvirt/images.

Any help please? My host is not connecting any more and all production
VMs are down.


--
Regards / Groete

Granwille Strauss  //  Senior Systems Admin

e: granwi...@namhost.com
m: +264 81 323 1260
w: www.namhost.com

Namhost Internet Services (Pty) Ltd,
24 Black Eagle Rd, Hermanus, 7210, RSA

