After I upgraded from 4.2 to 4.4 the global setting 'router.template.kvm'
still reads 'systemvm-kvm-4.3'; should I change that to 'systemvm-kvm-4.4'?
Also, in the templates view systemvm-kvm-4.4 is showing up as Type = User;
how can I change that?
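For reference, one commonly suggested fix is a direct update in the cloud
database plus the global setting change (a sketch only, assuming the default
'cloud' database and user and that the 4.4 template was registered under the
name shown; take a backup first):
mysql -u cloud -p cloud -e "UPDATE vm_template SET type='SYSTEM' WHERE name='systemvm-kvm-4.4' AND removed IS NULL;"
Then point router.template.kvm at 'systemvm-kvm-4.4' under Global Settings and
restart the management server so the change takes effect.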
The ssvm and console server are not starting up, pr
Thanks Erik, that appears to have worked. VR is up now, SSVM and Console
proxy are stuck in starting mode... will leave for a while before fixing in
the DB.
On 30 September 2014 04:38, Erik Weber wrote:
> On Tue, Sep 30, 2014 at 2:28 AM, Nick Wales wrote:
>
> > I'm having
I'm having the same issue as
https://issues.apache.org/jira/browse/CLOUDSTACK-7217
Can anyone provide me with access to the updated packages mentioned? The
Dropbox links 404 now.
Thanks
Nick
So the emergency is currently over; I'll wait for the console server to
hopefully give up eventually, or kill it in the database.
Thanks for the guidance. Still a little perturbed that it broke and then
fixed itself many hours later.
On 8 July 2014 09:15, Nick Wales wrote:
> I have CentOS 6 ev
stem vms. Then, you just need to start MS, and destroy
> the system vms that were already created.
> I was using Xen.
>
>
>
> On Tue, Jul 8, 2014 at 12:56 AM, Carlos Reátegui
> wrote:
>
> > What is your environment? i.e. what management server os and hypervisors
>
I upgraded following the guide exactly.
https://cloudstack.apache.org/docs/en-US/Apache_CloudStack/4.2.1/html/Release_Notes/upgrade-instructions.html#upgrade-from-4.1-to-4.2.1
Everything went fine until running the script to restart the system VMs.
cloudstack-sysvmadm -u cloud -p -a
The cons
If primary NFS storage gets added to a cluster with insufficient privileges
for the host to write, then the host will reboot.
This could be a problem! Is there a method to test 100% that it won't
happen?
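One low-tech sanity check (just a sketch; the export address and mount point
here are made up, so adjust for your setup) is to mount the export on the host
by hand and prove a write succeeds before adding it as primary storage:
mkdir -p /mnt/nfstest
mount -t nfs 192.168.0.10:/primary /mnt/nfstest
touch /mnt/nfstest/.writecheck && rm /mnt/nfstest/.writecheck && echo "write OK"
umount /mnt/nfstest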
I'm getting a huge number of similar errors in libvirtd.log, ~1500 per hour
at the moment.
We're running 4.2.1 with NetApp NFS for primary and secondary storage.
2014-01-31 03:00:04.704+: 4949: warning :
virStorageFileGetMetadataInternal:782 : Backing file
'/mnt/05648bab-b05a-377e-a3a7-52e616d
listen_tls=0
listen_tcp=1
tcp_port="16509"
auth_tcp="none"
mdns_adv = 0
is all I have in libvirtd.conf (partly regarding the question on IRC)
On 24 January 2014 15:22, Nick Wales wrote:
> Cloudstack - 4.2
>
> libvirt-0.10.2-18
>
> Running on cent
CloudStack - 4.2
libvirt-0.10.2-18
Running on CentOS 6.2 with NetApp NFS.
On 24 January 2014 15:20, David Nalley wrote:
> What version?
>
>
>
> On Fri, Jan 24, 2014 at 3:37 PM, Nick Wales wrote:
> > Hey, I'm having awful performance spinning up machines, wi
Hey, I'm having awful performance spinning up machines, with some taking
ages to eventually fail.
The majority of the error messages seem to stem from this:
==> cloudstack-agent.out <==
2014-01-24 14:14:30,779{GMT} WARN [cloud.agent.Agent]
(agentRequest-Handler-1:) Caught:
java.lang.NullPointer
We've got issues deploying successfully to either of our hosts in our 4.2.0
cluster. Instances either fail immediately or take a long time to get
started.
Most of the errors in our logs point to secondary storage, but I can mount
and unmount successfully. In both cases I have stopped libvirt,
clou
Make sure your interface definitions are set to use DHCP.
The foolproof way to get there is to install the ISO to CloudStack, start
up an instance using the ISO, set up your networking and take a snapshot.
Then you can turn that into a template.
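For what it's worth, the pre-template cleanup looks roughly like this (a
sketch only, assuming a CentOS 6 style guest; other distros keep these files
elsewhere):
# switch eth0 to DHCP and drop the pinned MAC so clones aren't tied to it
sed -i -e 's/^BOOTPROTO=.*/BOOTPROTO=dhcp/' -e '/^HWADDR=/d' /etc/sysconfig/network-scripts/ifcfg-eth0
# remove persistent udev rules so the NIC comes back as eth0 on the clone
rm -f /etc/udev/rules.d/70-persistent-net.rules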
There are a few things like removing udev rules and
s from the instances at
start up which seems to work ok. How do you approach this problem?
Nick
On 11 November 2013 17:44, Lennert den Teuling wrote:
> > On 12 November 2013 at 0:07, Nick Wales wrote:
> >
> >
> > I have a couple issues with the current setup involv
nfs://192.168.169.202:/secondary is not a valid nfs address.
You may have better luck with
nfs://192.168.169.202/secondary
On 11 November 2013 10:17, Andrei Mikhailovsky wrote:
> Hello,
>
> I was wondering if anyone else is experiencing this issue? I am having
> identical problem with Ubuntu
I have a couple of issues with the current setup involving the virtual router.
1. I'm not using the VR for port forwarding / VPN / routing or anything
traffic-related, so it would seem to me to be relatively trivial to have a
secondary virtual router that just provides DNS, userdata & metadata. This
w
Does anyone have advice for how to deal with linux hosts when the virtual
router gets destroyed and comes back on a different IP address?
I'm no dhclient expert, having run everything in a traditional datacenter
up until now with static IPs, so I'm looking for some useful advice / best
practice ideas.
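One blunt option (only a sketch, and it assumes you have some way of pushing
a command to the guests, e.g. cron or config management) is simply to force a
lease renewal so they pick up the new router address:
dhclient -r eth0 && dhclient eth0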
Having recently upgraded my hosts to 4.1, I am encountering this issue
again.
To confirm, the zone has no security groups; however, I am unable to get
access to guests until I have flushed iptables.
I have tried removing ebtables as with 4.0.x, but that has not helped.
Now when I restart iptables on
> > >> > Sounds like your SSVM is not mounting your NFS storage. If you look at
> > >> > your SSVM, I believe you'll find the hard drive to be 2 gigs, which is
> > >> > what the dashboard is reporting. Log in, check that the
> Thanks
> -min
>
> On 6/19/13 3:37 PM, "Nick Wales" wrote:
>
> >In 4.0.1 I was able to make a call to the API similar to the below which
> >would return only the instances tagged: app=my_app_name
> >
> >{"tags[0].value"=>&quo
I have this very same problem, except mine shows 1.92 GB when it is ~1 TB.
I can make snapshots just fine, though, and I uploaded an ISO yesterday.
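To follow the SSVM suggestion above, I believe you can SSH into the SSVM from
the hypervisor and look at what's mounted (a sketch, assuming KVM; the
link-local IP placeholder below comes from the SSVM's details in the UI):
ssh -i /root/.ssh/id_rsa.cloud -p 3922 root@<ssvm-link-local-ip>
df -h   # secondary storage should show up under /mnt/SecStorage/ with its real size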
On 18 June 2013 09:40, Enric Muñoz wrote:
> Hi,
>
> I have a secondary storage, which is a whole hard disk of 3 TB. I do a NFS
> share with it but the
In 4.0.1 I was able to make a call to the API similar to the below which
would return only the instances tagged: app=my_app_name
{"tags[0].value"=>"my_app_name",
"tags[0].key"=>"app",
:command=>"listVirtualMachines",
:response=>"json"}
(I'm using the cloudstack_helper ruby gem which explains t
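For comparison, roughly the same query as a raw call (a sketch; it assumes
the unauthenticated integration port is enabled via the integration.api.port
global setting, which it often isn't in production):
curl -g 'http://localhost:8096/client/api?command=listVirtualMachines&response=json&tags[0].key=app&tags[0].value=my_app_name'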
The times I've had problems with this have been due to not having sudo
access for the cloud user.
Make sure you have the following in sudoers:
cloud ALL = NOPASSWD : ALL
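A quick way to confirm it took (run as root; just a sanity check):
sudo -l -U cloud   # should report (ALL) NOPASSWD: ALL for the cloud user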
On 8 June 2013 10:18, Er Krishna wrote:
> On 8 Jun 2013 20:45, "Er Krishna" wrote:
> >
> > Plz search on net . It may be a
all the volumes on sec. storage once the
> > copy operations are done.
> >
> > The only cases where its persistent on sec. storage is during upload
> > volume. Even for download volume the link expires after some time
> > and after that the volume is delet
We're in the middle of migrating our storage. I have moved all the VMs and
the associated volumes to the new storage, and this appears to have been a
two-part copy job on the part of CloudStack:
Primary Storage 1 -> Secondary Storage -> Primary Storage 2
Right now two- and three-day-old copies of
You could look into third-party solutions such as Scalr.
http://scalr.com/features/multi_cloud_support/
I haven't used it myself so can't comment on its effectiveness, but it's out
there!
On 19 April 2013 03:29, Oliver Leach wrote:
> From a CloudStack perspective it is only the NetScalers that autos
I have had similar issues when I was rebuilding hosts regularly while
starting out with CloudStack. I wrote a script that cleaned them up which
might help. Entries in /etc/mtab were the things I forgot about most often!
Here's the script, hope it helps:
https://github.com/nickwales/cloudstack-scri
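The gist of that kind of cleanup, sketched from memory rather than copied from
the linked script (the /mnt/<uuid> mount point pattern is assumed, so adjust
to your layout):
# lazily unmount any stale CloudStack NFS mounts still listed on the host
for m in $(awk '$2 ~ /^\/mnt\/[0-9a-f-]+$/ {print $2}' /etc/mtab); do
  umount -l "$m"
done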
New instances I'm creating are taking an indeterminate amount of time to
return an FQDN when I run 'hostname -f', and I can't find what change
causes it to eventually work. 'hostname' returns the
The new instances have a resolv.conf populated by dhclient with a search
string and appropriate D
He should be able to see all the resources belonging to the subdomains as
> well. Please file a bug if you don't see the behavior.
> As a workaround you can try passing the domain id of the ROOT domain along
> with isRecursive = true. See if that works for you.
>
> Thanks,
> -Niti
If I set 'listall' to true while querying as the admin user in the ROOT
domain, I can see the vast majority of volumes but not those belonging to
other users in subdomains of ROOT.
Is that the intended behaviour?
If so, how could I go about giving the ROOT admin user or another admin user
such per
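The workaround mentioned above, spelled out as a raw call (a sketch; it
assumes the unauthenticated integration port is enabled and that
<root-domain-id> is filled in from listDomains):
curl 'http://localhost:8096/client/api?command=listVolumes&response=json&listall=true&isrecursive=true&domainid=<root-domain-id>'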