Hi,
Finally, after replacing the machines' NICs/cables, I have figured out the
exact issue: it is related to NFS requests timing out. This issue is
mentioned in the 6.0 release notes, but I am using XenServer 6.2 and am
still having the same problem. Can somebody help me out?
Mar 11 02:49:05 xenserve
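For what it's worth, a quick way to confirm the NFS server is answering at
all from the XenServer host is something like the following (just a sketch;
STORAGE_IP stands in for the storage server address, which isn't shown here):

    rpcinfo -p STORAGE_IP      # portmapper, mountd and nfs should all be listed
    showmount -e STORAGE_IP    # the primary/secondary export paths should appear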
Hi Carlos,
I see the following management server logs; what could be the possible
reasons for these exceptions?
2014-03-11 08:05:47,985 DEBUG [xen.resource.CitrixResourceBase]
(DirectAgent-1:null) Unable to create destination path:
/etc/xapi.d/plugins on 10.11.17.32 but trying anyway
2014-03-11
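That "Unable to create destination path: /etc/xapi.d/plugins ... but trying
anyway" line suggests the management server could not create the XAPI plugin
directory on the host. A quick way to check it by hand (a sketch; it assumes
root SSH from the management server to the host is available):

    ssh root@10.11.17.32 'ls -ld /etc/xapi.d/plugins'
    ssh root@10.11.17.32 'touch /etc/xapi.d/plugins/.cs_write_test && rm /etc/xapi.d/plugins/.cs_write_test'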
I have been so frustrated; I couldn't find any solution and have been
unable to resolve the problem. I can mount the primary and secondary
storage manually and copy data as well, but the hypervisor logs show the
NFS server not responding and timing out:
heartbeat: Problem with heartbeat, no iSCSI or NFS mount defined in
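If mounting and copying by hand works but the SR still logs "nfs server not
responding, timed out", the client-side retransmission counters and the mount
options the SR actually got can be worth a look. A rough check on the
XenServer host (a sketch, nothing specific to this setup):

    nfsstat -rc            # a high retrans count relative to calls points at packet loss or a slow server
    grep nfs /proc/mounts  # shows the timeo/retrans options the SR mount is really using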
If you had it on the management server then it would have been copied over to
the hosts when you added them and there is no need for you to do it now.
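One way to confirm vhd-util really did get copied over to the hosts (a
sketch; the paths are the ones mentioned in this thread):

    # on each XenServer host
    ls -l /opt/xensource/bin/vhd-util
    md5sum /opt/xensource/bin/vhd-util
    # compare against the management server copy
    md5sum /usr/share/cloudstack-common/scripts/vm/hypervisor/xenserver/vhd-util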
> On Feb 12, 2014, at 5:46 PM, Umair Azam wrote:
>
> Hi,
>
> Carlos, thanks for the follow-up, I really appreciate that. Well, I have got the
> v
Hi,
Carlos, thanks for the follow-up, I really appreciate that. Well, I have
got the vhd-util placed in
/usr/share/cloudstack-common/scripts/vm/hypervisor/xenserver/vhd-util on
the management server, but missed placing it on the XS host in
/opt/xensource/bin. I probably missed or couldn't find anything i
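If it does turn out to be missing on the host, copying it over by hand is
straightforward (a sketch; 10.11.17.32 is the host IP that appears in the
management server log above, and root SSH access is assumed):

    # from the management server
    scp /usr/share/cloudstack-common/scripts/vm/hypervisor/xenserver/vhd-util root@10.11.17.32:/opt/xensource/bin/
    ssh root@10.11.17.32 'chmod 755 /opt/xensource/bin/vhd-util'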
Hi Umair (sorry missed the 'i' last time)
I haven't been able to digest your logs yet, but one common problem with XS
is forgetting to get the specific vhd-util mentioned in the docs:
section 4.5.3.3 of
https://cloudstack.apache.org/docs/en-US/Apache_CloudStack/4.2.0/html/Installation_Guide/management-se
Hi Carlos,
I have changed switch ports, cables, etc., and I am now successfully able
to mount and copy data. The logs below in the thread are the latest, from
after this fix.
Umair Azam
On 2/13/2014 6:03 AM, Carlos Reategui wrote:
Hi Umar,
Did you verify your network per Shanker's suggestion? You need to make
sure
Hi Umar,
Did you verify your network per Shanker's suggestion? You need to make
sure the NIC flapping is fixed first before you can expect anything to run
stably. When you say you are able to mount it manually, did you also try
copying data over that mount? I had a similar problem once and it en
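For reference, the kind of copy test Carlos describes could look like this
on the XenServer host (a sketch; STORAGE_IP, /export/primary and /mnt/nfstest
are placeholders, not values from this thread):

    mkdir -p /mnt/nfstest
    mount -t nfs STORAGE_IP:/export/primary /mnt/nfstest
    dd if=/dev/zero of=/mnt/nfstest/ddtest bs=1M count=1024 conv=fsync   # sustained write, flushed to the server
    dd if=/mnt/nfstest/ddtest of=/dev/null bs=1M                         # read it back
    rm /mnt/nfstest/ddtest && umount /mnt/nfstest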
Thanks for the follow-up, Shanker. Please go through the details below and
help me dig out the issue.
I am using CloudStack 4.2.1 installed through yum and XenServer 6.2. I am
getting very strange errors and I have no idea what's going on.
Management server IP: 10.11.17.30
Primary/secondary Storage I
Comments inline.
On 12-Feb-2014, at 9:09 am, Umair Azam wrote:
> I am also getting the following in the hypervisor logs. Do you think it's an
> issue with the NIC of the primary storage? Well, I can mount that filesystem manually
>
> /var/log/messages:Feb 11 08:16:40 xenserver-Host1 kernel: [ 3103.098811]
> e1
I am also getting the following in the hypervisor logs. Do you think it's
an issue with the NIC of the primary storage? Well, I can mount that
filesystem manually:
/var/log/messages:Feb 11 08:16:40 xenserver-Host1 kernel: [ 3103.098811]
e1000e: eth0 NIC Link is Down
/var/log/messages:Feb 11 08:21:31 xenserver-Ho
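Those e1000e "NIC Link is Down" messages point at the physical layer rather
than at NFS itself. Two quick checks on the host (a sketch; eth0 is the
interface named in the log):

    ethtool eth0 | grep -E 'Speed|Duplex|Link detected'   # link should stay up at the expected speed
    dmesg | grep -i e1000e                                # repeated up/down lines mean the link is flapping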
Abhisek,
Yes, I am able to mount it manually. I don't know what the problem is; I
have been stuck here for the last 2 weeks :/
Umair Azam
On 2/12/2014 7:22 AM, abhisek basu wrote:
Are we able to mount that NFS manually on the Xen Server?
Sent from my iPhone
On 12 Feb 2014, at 6:31 am, "Umair Azam" wrote:
I am f
Are we able to mount that NFS manually on the Xen Server?
Sent from my iPhone
> On 12 Feb 2014, at 6:31 am, "Umair Azam" wrote:
>
> I am facing an error while setting up the cloud system; the following issue
> arises when the management server machine requests the hypervisor to launch
> the CP/SS VMs. After thi
I am facing an error while setting up the cloud system; the following
issue arises when the management server machine requests the hypervisor to
launch the CP/SS VMs. After this request the hypervisor tries to contact
the primary storage, but the primary storage server doesn't respond.
Following are th
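When the system VMs fail like this, it can also be worth checking whether
XenServer still has the primary storage SR plugged, for example with xe (a
sketch; the name-label and UUIDs are placeholders):

    xe sr-list name-label=<primary-SR-name> params=uuid,name-label
    xe pbd-list sr-uuid=<sr-uuid> params=uuid,host-uuid,currently-attached
    xe pbd-plug uuid=<pbd-uuid>    # re-plug if currently-attached is false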