Hi Eric,

The SSVM can access my NFS and I can mount it manually :(
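
For reference, the manual mount test I ran was along these lines (server and export path are from my image_store row below; the mount point is an arbitrary choice):

```shell
# Mount the secondary storage export by hand and verify it is writable.
mkdir -p /mnt/nfstest
mount -t nfs 172.17.101.253:/share_smb/export/secondary /mnt/nfstest
touch /mnt/nfstest/.rwtest && rm /mnt/nfstest/.rwtest
umount /mnt/nfstest
```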

Running "s-397-VM:/# grep com.cloud.agent.api.SecStorageSetupCommand
/var/log/cloud.log" produced no output, but I found the error below.

From the VM's /var/log/cloud.log:
ERROR [cloud.agent.AgentShell] (main:null) Unable to start agent: Resource
class not found: com.cloud.storage.resource.PremiumSecondaryStorageResource
due to: java.lang.ClassNotFoundException:
com.cloud.storage.resource.PremiumSecondaryStorageResource
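
One rough way to check whether that class is present on the SSVM at all (the /usr/local/cloud path is an assumption based on the system VM template layout; zip archives store entry names uncompressed, so a plain grep over the jars can find them):

```shell
# Search all jars under the assumed agent install path for the missing class.
grep -rl --include='*.jar' 'PremiumSecondaryStorageResource' /usr/local/cloud 2>/dev/null \
  || echo "class not found in any jar"
```

If nothing turns up, the jars in the template probably don't contain what the management server expects, which would fit a template version mismatch.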


I used CloudStack from http://www.shapeblue.com/packages/ and the SSVM
template from
http://cloudstack.apt-get.eu/systemvm/4.6/systemvm64template-4.6.0-vmware.ova

Do you think I need to use http://packages.shapeblue.com/systemvmtemplate/
instead?

If so, how can I replace an existing SSVM template?

Sorry for asking these questions; I am new to this setup.

Thank you and Best Regards

Asanka

On 18 August 2017 at 03:38, Eric Lee Green <eric.lee.gr...@gmail.com> wrote:

> On 08/17/2017 11:17 AM, Asanka Gunasekara wrote:
>
>> Hi Dag, the ip 172.17.101.1 which it is looking for is my gateway IP.
>> Below
>> are the urls for the the requested query output files
>>
>> SELECT * FROM cloud.image_store;
>>
> Interesting. The only row with a NULL 'removed' column looks good, so it
> looks like your database configuration is correct:
>
> 5 | NFS_Secondary | NFS | nfs | nfs://172.17.101.253/share_smb/export/secondary | 1 | ZONE | Image | a4e17aca-dc16-494e-b696-8f8fae58a391 | 2017-08-14 19:12:50.0 | | | |
>
> Compare with my own query of my own image store, which is basically
> identical:
>
> MariaDB [cloud]> select * from cloud.image_store;
> | id | name       | image_provider_name | protocol | url                                 | data_center_id | scope | role  | uuid                                 | parent                               | created             | removed | total_size | used_bytes |
> |  1 | secondary1 | NFS                 | nfs      | nfs://10.100.255.1/export/secondary |              1 | ZONE  | Image | fdaab425-a102-484b-b746-c07c4b564edd | 50d77b6b-4d99-3695-b830-24ed10d0155c | 2017-07-31 00:55:06 | NULL    |       NULL |       NULL |
> 1 row in set (0.00 sec)
>
> Note that 10.100.255.1 is on my management network, which is also my
> storage network (I have everything coming in on VLANs on a 10Gbit bond;
> the 10.100.x.x network is on VLAN 100). Getting into my secondary storage
> VM and asking it for a list of addresses, here is what I see:
>
> root@s-397-VM:~# ip addr list | grep inet
>     inet 127.0.0.1/8 scope host lo
>     inet 169.254.0.95/16 brd 169.254.255.255 scope global eth0
>     inet 10.100.196.66/16 brd 10.100.255.255 scope global eth1
>     inet 10.101.199.255/16 brd 10.101.255.255 scope global eth2
>     inet 10.100.250.159/16 brd 10.100.255.255 scope global eth3
>
> So as you can see, it definitely has access to the management network
> (10.100.x.x), my public IP pool (10.101.x.x), and my storage pool (the
> second 10.100 address), as well as the local agent-visible IP
> (169.254.0.95) that ssh listens on so the agent can configure the VM via
> its shared keys.
>
> Here is what my ssvm-check says:
>
> root@s-397-VM:/opt# /usr/local/cloud/systemvm/ssvm-check.sh
> ================================================
> First DNS server is  10.100.255.2
> PING 10.100.255.2 (10.100.255.2): 48 data bytes
> 56 bytes from 10.100.255.2: icmp_seq=0 ttl=64 time=0.189 ms
> 56 bytes from 10.100.255.2: icmp_seq=1 ttl=64 time=0.438 ms
> --- 10.100.255.2 ping statistics ---
> 2 packets transmitted, 2 packets received, 0% packet loss
> round-trip min/avg/max/stddev = 0.189/0.314/0.438/0.125 ms
> Good: Can ping DNS server
> ================================================
> Good: DNS resolves download.cloud.com
> ================================================
> nfs is currently mounted
> Mount point is /mnt/SecStorage/50d77b6b-4d99-3695-b830-24ed10d0155c
> Good: Can write to mount point
> ================================================
> Management server is 10.100.255.2. Checking connectivity.
> Good: Can connect to management server port 8250
> ================================================
> Good: Java process is running
> ================================================
> Tests Complete. Look for ERROR or WARNING above.
>
>
> Yours says 'Java process not running'. I wonder if your 169.254 address is
> working? Let's check your cloud.log to see if you ever got a setup command:
>
> s-397-VM:/# grep com.cloud.agent.api.SecStorageSetupCommand
> /var/log/cloud.log
>
> Mine replies with:
>
> 2017-08-17 02:21:12,975 DEBUG [cloud.agent.Agent] (agentRequest-Handler-1:null) Request:Seq 9-231372430856159233:  { Cmd , MgmtId: 11967559506, via: 9, Ver: v1, Flags: 100111, [{"com.cloud.agent.api.SecStorageSetupCommand":{"store":{"com.cloud.agent.api.to.NfsTO":{"_url":"nfs://10.100.255.1/export/secondary","_role":"Image"}},"secUrl":"nfs://10.100.255.1/export/secondary","postUploadKey":"KZQd8G06ABN3D_CGAJiKBmhLe3e5dim5hfA7ouuZnvQtZNoHxE3T4WiqTxOdVPBh5hHhNtvX8e9Gac0Tw7gM5g","wait":0}}] }
>
> See if you got a similar command with your own NFS server's address.
>
>
