Ilya,
thanks for the ideas.
I decided to do a clean install with ACS4.1 and it worked using the same
config (filled it in via copy paste). There was probably a bug that got
addressed in the new release.

I'll keep in touch

Best regards / Mit freundlichen Grüßen,
Pablo


On Wed, Jun 5, 2013 at 10:46 PM, Musayev, Ilya <imusa...@webmd.net> wrote:

> Pablo,
>
> You did not mention anything about firewall complexity on your side - if
> any.
>
> Execute "ip route show" and see if you have an explicit route over an
> interface that may not have access to NFS.
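>
> For example (a rough sketch; the gateway and interface in the route below
> are placeholders, not from your setup):
>
>   # inside the SSVM
>   ip route show
>   # a problematic entry would be a host route pinning the NFS server
>   # to an interface that cannot reach it, e.g.:
>   #   172.77.1.2 via 10.1.1.1 dev eth1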
>
> Try removing the route via "route del -host <ip>" and see if it mounts
> successfully.
> If it does work, revisit your current network layout for the SSVM; perhaps
> you are using the wrong interface.
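>
> Roughly, using the NFS server address from your log excerpt (the mount
> point is just an example):
>
>   route del -host 172.77.1.2
>   mkdir -p /mnt/nfstest
>   mount -t nfs 172.77.1.2:/ss-nfs /mnt/nfstest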
>
> I would also try to execute
> watch -n 1  'netstat -antup'
>
> Look for connections stuck in SYN_SENT that never become ESTABLISHED. If
> found, note the IP you've used to connect to NFS and inspect your routing
> table.
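>
> To narrow the output down, something like this should work:
>
>   watch -n 1 "netstat -antup | grep SYN_SENT"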
>
> The explicit routing setup is triggered either by bash scripts or, most
> likely, by the storage.resource.NfsSecondaryStorageResource Java class.
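>
> If you want to read that code, something like this should locate it in a
> CloudStack source checkout:
>
>   find . -name 'NfsSecondaryStorageResource.java'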
>
> Review /var/log/cloud/systemvm.log for more details.
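>
> For example, to pull out the relevant lines (the grep pattern is just a
> suggestion):
>
>   grep -iE 'nfs|mount|route' /var/log/cloud/systemvm.log | tail -n 50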
>
> This is definitely firewall-, network-, or routing-related.
>
> There are two ways to resolve this: either confirm that the interfaces are
> properly configured and have access to storage, or hack the
> storage.resource.NfsSecondaryStorageResource class (worst case).
>
> Regards
> ilya
>
> PS: We are also a VMware shop, and we've customized and backported a lot of
> features from ACS4.2 into the just-released ACS4.1 for better VMware
> functionality (the major issue was dvSwitch support). If you need help on
> that front, let me know.
>
>
>
>
> > -----Original Message-----
> > From: Pablo Endres [mailto:pablo.end...@innovo-cloud.de]
> > Sent: Wednesday, June 05, 2013 10:29 AM
> > To: users@cloudstack.apache.org
> > Subject: SSVM not mounting the secondary storage
> >
> > Hi all,
> >
> > I've been fighting a little with my CS 4.0.2 installation on CentOS 6.4.
> > I'm using ESX as the hypervisor and advanced zones.
> >
> > I created a new zone, defining a physical network with all traffic types
> > (Management, Public, Guest, Storage) on vSwitch1 using VLANs for each
> > traffic type, and enabled the zone.
> >
> > After a while I get a recurrent message in the management server log
> > files:
> > 2013-06-05 16:01:35,791 DEBUG [cloud.server.StatsCollector]
> > (StatsCollector-3:null) StorageCollector is running...
> > 2013-06-05 16:01:35,797 DEBUG [cloud.server.StatsCollector]
> > (StatsCollector-3:null) There is no secondary storage VM for secondary
> > storage host nfs://172.77.1.2/ss-nfs
> >
> > The ssvm-check.sh says:
> >
> > ERROR: NFS is not currently mounted
> > Try manually mounting from inside the VM
> > NFS server is  eth2
> > ping: unknown host
> > WARNING: cannot ping NFS server
> >
> >
> > But I can ping and manually mount the share from inside the SSVM, so on
> > the network side of both the SSVM and NFS server everything seems to be
> > OK.
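> > (Roughly what I ran inside the SSVM, for reference:
> >   ping 172.77.1.2
> >   mount -t nfs 172.77.1.2:/ss-nfs /mnt
> > and both succeed.)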
> >
> > Can anyone give me a pointer on what else I can check? This case does not
> > seem to be covered in the SSVM troubleshooting guide on the wiki.
> >
> > I would also like to know where the script or command that triggers the
> > mount lives, so I can troubleshoot a little deeper.
> >
> > Thanks in advance.
> >
> > Best regards / Mit freundlichen Grüßen,
> > Pablo Endres
>
>
