Hi Prasanna,

On Mon, Jul 15, 2013 at 12:20 PM, Prasanna Santhanam <t...@apache.org> wrote:

> On Sat, Jul 13, 2013 at 02:25:35PM +0800, Indra Pramana wrote:
> > Hi Prasanna,
> >
> > Good day to you, and thank you for your e-mail.
> >
> Were you able to get beyond this error?
>

Yes. I managed to get around the network bridge issue by setting this up
in /etc/network/interfaces on my KVM host:

===
auto eth0.5
iface eth0.5 inet static
        address X.X.X.22
        netmask 255.255.255.240
        network X.X.X.16
        broadcast X.X.X.31
        gateway X.X.X.17
        # dns-* options are implemented by the resolvconf package, if installed
        dns-nameservers 8.8.8.8 8.8.4.4
        dns-search xxx.com

# Public network
auto cloudbr0
iface cloudbr0 inet manual
    bridge_ports eth1.201
    bridge_fd 5
    bridge_stp off
    bridge_maxwait 1

# Guest network
auto cloudbr1
iface cloudbr1 inet manual
    bridge_ports eth1.41
    bridge_fd 5
    bridge_stp off
    bridge_maxwait 1

# Management network
auto cloudbr2
iface cloudbr2 inet manual
    bridge_ports eth0.6
    bridge_fd 5
    bridge_stp off
    bridge_maxwait 1

# The 10G network interface
auto eth2
iface eth2 inet static
        address 10.237.11.22
        netmask 255.255.255.0
        network 10.237.11.0
        broadcast 10.237.11.255
===
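
For reference, the same bridges can also be created at runtime with brctl,
as you suggested earlier. A minimal sketch, assuming the vlan and
bridge-utils packages are installed (the interface and VLAN names are the
ones from my config above):

===
# create the tagged sub-interface for the public VLAN
ip link add link eth1 name eth1.201 type vlan id 201
ip link set eth1.201 up

# create the bridge and enslave the VLAN sub-interface
brctl addbr cloudbr0
brctl addif cloudbr0 eth1.201
brctl stp cloudbr0 off
ip link set cloudbr0 up
===

The stanzas in /etc/network/interfaces above just make the same setup
persistent across reboots.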

On the CloudStack GUI, I created 3 physical networks:

cloudbr0 - public network
cloudbr1 - guest network
cloudbr2 - management (and storage) network

However, when I tried to set up the public network using VLAN 201, it
failed because CloudStack tried to create a new bridge (i.e. breth1-201)
instead of using the existing bridge I had set up for the public network
(cloudbr0, mapped to eth1.201). So I set up the public network using a
different VLAN (VLAN 21) and changed the settings on my router's gateway
to use VLAN 21 instead of VLAN 201.
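
For completeness, the traffic labels I entered in the GUI should end up in
the agent configuration on the host roughly like this (an illustrative
excerpt - these are the standard agent property names, not a paste of my
actual file):

===
# /etc/cloudstack/agent/agent.properties (excerpt, illustrative)
public.network.device=cloudbr0
guest.network.device=cloudbr1
private.network.device=cloudbr2
===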

===
root@hv-kvm-02:~# brctl show
bridge name     bridge id               STP enabled     interfaces
breth1-21               8000.002590c29d7d       no              eth1.21
                                                        vnet2
                                                        vnet5
cloud0          8000.fe00a9fe0165       no              vnet0
                                                        vnet3
cloudbr0                8000.002590c29d7d       no              eth1.201
cloudbr1                8000.002590c29d7d       no              eth1.41
cloudbr2                8000.002590c29d7c       no              eth0.6
                                                        vnet1
                                                        vnet4
                                                        vnet6
virbr0          8000.000000000000       yes
===
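
To trace which bridge each system VM NIC is attached to, I compared MAC
addresses from brctl show, but libvirt can also be asked directly. A small
sketch - the domain name s-1-VM is just an example, use whatever virsh
list prints:

===
virsh list                  # find the system VM's domain name
virsh domiflist s-1-VM      # one row per vNIC: type, source bridge, MAC
virsh dumpxml s-1-VM | grep -B2 -A4 'interface type'
===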

The above setup works, but I am not sure whether this is the correct way
to do it. Can you advise?

Looking forward to your reply, thank you.

Cheers.

> > Yesterday I troubleshot further and found out that the mapping of the
> > bridges is incorrect. For example, eth1 of the SSVM, which is supposed
> > to be the private network interface for communication with the
> > management server, is being mapped to the cloudbr0 bridge, which is
> > used for public traffic. Meanwhile eth2 of the SSVM, which is supposed
> > to be public, is being mapped to the cloudbr1 bridge, which is meant
> > for guest traffic. I checked with brctl show on the KVM host and
> > traced the MAC addresses of the SSVM's eth interfaces.
> >
> > I have tried different types of configuration in the KVM hosts'
> > /etc/network/interfaces (I am using Ubuntu) and different ways of
> > configuring the advanced network zone (physical network interface and
> > KVM label for each traffic type), but I am still not able to get it
> > working.
> >
> > Can you please advise on the best way to set up the network bridges?
>
> Use brctl to add bridges. That's how I've done it.
>
> >
> > My /etc/network/interfaces on KVM host:
> >
> > http://pastebin.com/nx8xJ1L2
> >
> > I used 3 physical NICs:
> >
> > eth0 --> for management and secondary storage traffic
> > eth1 --> for public and guest traffic
> > eth2 --> for primary storage traffic to the Ceph RBD (separate NIC not
> > configured by CloudStack)
> >
> > On the CloudStack GUI, while creating the zone, I used 3 physical
> > networks:
> >
> > eth0.6 --> Management traffic (KVM label: eth0.6)
> > cloudbr0 --> Public traffic (KVM label: cloudbr0)
> > cloudbr1 --> Guest traffic (KVM label: cloudbr1)
> >
> Is eth0.6 on a tagged VLAN?
>
> > I didn't specify storage traffic since I presume it will use the
> > management VLAN.
>
> That's correct.
>
> >
> > However, I always failed to add the KVM host using the above
> > configuration. I tried to run cloudstack-setup-agent manually on the
> > KVM host and got this error message:
> >
> > root@hv-kvm-02:/etc/cloudstack/agent# cloudstack-setup-agent \
> >     -m x.x.x.18 -z 3 -p 3 -c 3 -g 00dd5dba-7419-3689-bc51-1671035c0d8f \
> >     -a --pubNic=cloudbr0 --prvNic=eth0.6 --guestNic=cloudbr1
> > Starting to configure your system:
> > Configure Apparmor ...        [OK]
> > Configure Network ...         [Failed]
> > eth0.6 is not a bridge
> > Try to restore your system:
> > Restore Apparmor ...          [OK]
> > Restore Network ...           [OK]
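
[Inline note: this was exactly the symptom - the setup script expects each
traffic label to be a bridge. Presumably the invocation matching the
working setup described at the top of this mail would be the same command
with only --prvNic changed to the cloudbr2 bridge:

===
cloudstack-setup-agent -m x.x.x.18 -z 3 -p 3 -c 3 \
    -g 00dd5dba-7419-3689-bc51-1671035c0d8f -a \
    --pubNic=cloudbr0 --prvNic=cloudbr2 --guestNic=cloudbr1
===
]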
> >
> > When I tried to create another bridge called cloudbr2 with
> > bridge_ports set to eth0.6, the KVM host's network didn't work.
> >
> > Can you advise on the best practice for configuring the network
> > bridges in an advanced network zone setup?
> >
> > Looking forward to your reply, thank you.
> >
> > Cheers.
> >
> >
> >
> > On Sat, Jul 13, 2013 at 12:28 PM, Prasanna Santhanam <t...@apache.org> wrote:
> >
> > > See Inline,
> > >
> > > On Fri, Jul 12, 2013 at 05:12:25PM +0800, Indra Pramana wrote:
> > > > Hi Wido,
> > > >
> > > > Noted, can't wait for 4.2 to be released. :)
> > > >
> > > > Dear Prasanna, Wido and all,
> > > >
> > > > I just realised that while the system VMs are running, they are
> > > > still not accessible through the public IPs assigned to them. I
> > > > have been waiting for the SSVM to download the default CentOS
> > > > template and it doesn't appear on the template list.
> > > >
> > > > I tried to SSH into the SSVM via the link-local address from the
> > > > KVM host, and running the health check
> > > > /usr/local/cloud/systemvm/ssvm-check.sh shows that the VM cannot
> > > > reach anywhere. It cannot reach the public DNS server (I used
> > > > Google's 8.8.8.8), cannot reach the management server, and cannot
> > > > even reach the public IP gateway.
> > >
> > > Ok - we're not there yet. But there are a few troubleshooting steps
> > > you can try:
> > >
> > > 1. Which of the checks fail apart from the DNS ping?
> > > 2. Are you able to have those tests pass from the KVM host itself?
> > > 3. Can you paste the output of $ route -n on your SSVM and on your
> > >    host?
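
[Inline note: for anyone following along, the way into the SSVM for these
checks is its link-local address from the KVM host. A sketch using what I
understand to be the CloudStack defaults for system VMs (port 3922 and the
host's /root/.ssh/id_rsa.cloud key); the 169.254.x.x address is whatever
the UI or virsh shows for the SSVM:

===
ssh -i /root/.ssh/id_rsa.cloud -p 3922 root@169.254.x.x
/usr/local/cloud/systemvm/ssvm-check.sh
===
]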
> > >
> > > >
> > > > Is it due to misconfiguration of the KVM network bridges? How can
> > > > I see the mapping between the NIC interfaces of the SSVM (eth0,
> > > > eth1, eth2 and eth3) and the actual physical NIC interfaces on the
> > > > KVM hosts (eth0) and the network bridges (cloudbr0, cloudbr1)? Are
> > > > there any logs I can check to verify that the VLAN and network
> > > > bridging is working?
> > > >
> > >
> > > You can see this from the dumpxml:
> > >
> > > $ virsh list
> > > (gives you all the domains; find the SSVM, for example)
> > >
> > > $ virsh dumpxml my-domain > mydomain.xml
> > >
> > > > Appreciate any advice.
> > > >
> > > > Thank you.
> > > >
> > > >
> > > >
> > > > On Fri, Jul 12, 2013 at 4:19 PM, Wido den Hollander <w...@widodh.nl> wrote:
> > > >
> > > > > On 07/12/2013 10:14 AM, Indra Pramana wrote:
> > > > >
> > > > >> Hi Prasanna,
> > > > >>
> > > > >> I managed to fix the problem, thanks for your advice to turn the
> agent
> > > > >> log level to debug:
> > > > >>
> > > > >> https://cwiki.apache.org/**confluence/display/CLOUDSTACK/**
> > > > >> KVM+agent+debug<
> > > https://cwiki.apache.org/confluence/display/CLOUDSTACK/KVM+agent+debug
> >
> > > > >>
> > > > >> From the log, I found out that the agent on the KVM host tried
> > > > >> to NFS-mount 103.25.200.19:/mnt/vol1/sec-storage/template/tmpl/1/3
> > > > >> directly, which was not allowed by the NFS server due to its
> > > > >> default configuration of only allowing mounts of
> > > > >> /mnt/vol1/sec-storage (the root of the NFS share).
> > > > >>
> > > > >>
> > > > > Ah, that's odd!
> > > > >
> > > > > Btw, in 4.2 you'll be able to deploy SSVMs on RBD as well, so that
> > > > > limitation will be gone.
> > > > >
> > > > > Wido
> > > > >
> > > > >> After I changed the NFS server configuration to allow mounting
> > > > >> all sub-directories and re-exported the NFS share, voila, the
> > > > >> system was able to download the template, and now both system
> > > > >> VMs (CPVM and SSVM) are running!
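
[Inline note: on a FreeBSD/FreeNAS-style storage box like the cs-nas-01
host shown further down, the usual way to allow clients to mount
sub-directories of a share is the -alldirs flag in /etc/exports. A sketch
under that assumption - the path is the one from this thread, the client
network is illustrative:

===
# /etc/exports on the NFS server
/mnt/vol1/sec-storage -alldirs -maproot=root -network 103.25.200.0 -mask 255.255.255.0

# re-read the exports file
service mountd reload
===
]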
> > > > >>
> > > > >> Many thanks for your help! :)
> > > > >>
> > > > >> Cheers.
> > > > >>
> > > > >>
> > > > >>
> > > > >> On Fri, Jul 12, 2013 at 3:31 PM, Indra Pramana <in...@sg.or.id> wrote:
> > > > >>
> > > > >>     Hi Prasanna,
> > > > >>
> > > > >>     Good day to you, and thank you for your e-mail.
> > > > >>
> > > > >>     Yes, the cloudstack-agent service is running on both the
> > > > >>     KVM hosts. There is no "cloud" user being created though,
> > > > >>     when I installed the agent. I installed the agent as root.
> > > > >>
> > > > >>     root@hv-kvm-01:/home/indra# service cloudstack-agent status
> > > > >>       * cloud-agent is running
> > > > >>
> > > > >>     root@hv-kvm-01:/home/indra# su - cloud
> > > > >>     Unknown id: cloud
> > > > >>
> > > > >>     Please advise how I can resolve this problem - shall I
> > > > >>     create the Unix "cloud" user manually? Basically I followed
> > > > >>     this instruction to prepare the KVM host and install the
> > > > >>     CloudStack agent:
> > > > >>
> > > > >>     http://cloudstack.apache.org/docs/en-US/Apache_CloudStack/4.1.0/html/Installation_Guide/hypervisor-kvm-install-flow.html
> > > > >>
> > > > >>     with this instruction from Wido on how to prepare libvirt
> > > > >>     with Ceph RBD storage pool support:
> > > > >>
> > > > >>     http://blog.widodh.nl/2013/06/a-quick-note-on-running-cloudstack-with-rbd-on-ubuntu-12-04/
> > > > >>
> > > > >>     I also have checked /var/log/cloud/agent/agent.log and I
> > > > >>     don't see any error messages, except this error message
> > > > >>     which shows up every time I restart the agent:
> > > > >>
> > > > >>     2013-07-12 15:22:47,454 ERROR [cloud.resource.ServerResourceBase]
> > > > >>     (main:null) Nics are not configured!
> > > > >>     2013-07-12 15:22:47,459 INFO  [cloud.resource.ServerResourceBase]
> > > > >>     (main:null) Designating private to be nic eth0.5
> > > > >>
> > > > >>     More logs can be found here: http://pastebin.com/yeNmCt7S
> > > > >>
> > > > >>     I have configured the network bridges on the NIC interfaces
> > > > >>     as per this instruction:
> > > > >>
> > > > >>     http://cloudstack.apache.org/docs/en-US/Apache_CloudStack/4.1.0/html/Installation_Guide/hypervisor-kvm-install-flow.html#hypervisor-host-install-network
> > > > >>
> > > > >>     On the zone, I used the advanced network configuration with
> > > > >>     just one physical network for management, public and
> > > > >>     guest/private traffic. I didn't include storage, whose
> > > > >>     traffic will by default use the management VLAN network.
> > > > >>
> > > > >>     Please advise if there's anything else I might have missed.
> > > > >>
> > > > >>     Looking forward to your reply, thank you.
> > > > >>
> > > > >>     Cheers.
> > > > >>
> > > > >>
> > > > >>
> > > > >>     On Fri, Jul 12, 2013 at 2:56 PM, Prasanna Santhanam <t...@apache.org> wrote:
> > > > >>
> > > > >>         Indeed, cloudstack will go through the allocation to
> > > > >>         start up the system VMs too. So that process is failing
> > > > >>         to recognize the volume (.qcow2) present on your NFS
> > > > >>         storage.
> > > > >>
> > > > >>         Can you check if your cloudstack agent service is
> > > > >>         running on the KVM host? It should've created the user
> > > > >>         cloud - run $ id cloud to check if the user is there.
> > > > >>
> > > > >>         Did you see what's happening in the agent logs? These
> > > > >>         are under /var/log/cloud/ on your host when the
> > > > >>         systemVMs are coming up. If the logs are not showing
> > > > >>         any useful information you can turn on debug level for
> > > > >>         more verbosity.
> > > > >>
> > > > >>         See here: https://cwiki.apache.org/confluence/x/FgPMAQ
> > > > >>
> > > > >>         On Fri, Jul 12, 2013 at 02:40:35PM +0800, Indra Pramana wrote:
> > > > >>          > Hi Prasanna,
> > > > >>          >
> > > > >>          > Good day to you, and thank you for your e-mail.
> > > > >>          >
> > > > >>          > Yes, when I exported the NFS share, I set the
> > > > >>          > permissions so that a normal user is able to have
> > > > >>          > read/write access to the files (no_root_squash).
> > > > >>          >
> > > > >>          > I have tested and I can have read/write access from
> > > > >>          > my KVM hosts using a normal user. BTW, there's no
> > > > >>          > "cloud" user on the hosts; I believe it's not created
> > > > >>          > during cloudstack-agent installation?
> > > > >>          >
> > > > >>          > In any case, do you think the template issue and the
> > > > >>          > storage pool allocation issue might be related, or
> > > > >>          > are they two different problems altogether?
> > > > >>          >
> > > > >>          > Looking forward to your reply, thank you.
> > > > >>          >
> > > > >>          > Cheers.
> > > > >>          >
> > > > >>          >
> > > > >>          >
> > > > >>          > On Fri, Jul 12, 2013 at 2:26 PM, Prasanna Santhanam <t...@apache.org> wrote:
> > > > >>          >
> > > > >>          > > Can you access the file as user root? Or user
> > > > >>          > > cloud? The cloudstack agent on your KVM host runs
> > > > >>          > > as user cloud and the NFS permissions might be
> > > > >>          > > disallowing the volume (.qcow2) from being
> > > > >>          > > accessed.
> > > > >>          > >
> > > > >>          > > On Fri, Jul 12, 2013 at 02:16:41PM +0800, Indra Pramana wrote:
> > > > >>          > > > Hi Prasanna,
> > > > >>          > > >
> > > > >>          > > > Good day to you, and thank you for your e-mail.
> > > > >>          > > >
> > > > >>          > > > Yes, the file exists. I can access the file from
> > > > >>          > > > the management server and the two hypervisor
> > > > >>          > > > hosts if I mount manually.
> > > > >>          > > >
> > > > >>          > > > [root@cs-nas-01 /mnt/vol1/sec-storage/template/tmpl/1/3]# ls -la
> > > > >>          > > > total 1418787
> > > > >>          > > > drwxr-xr-x  2 root  wheel          4 Jul 11 20:21 .
> > > > >>          > > > drwxr-xr-x  3 root  wheel          3 Jul 11 20:17 ..
> > > > >>          > > > -rw-r--r--  1 root  wheel  725811200 Jul 11 20:21 425b9e5a-fbc7-4637-a33a-fe9d0ed4fa98.qcow2
> > > > >>          > > > -rw-r--r--  1 root  wheel        295 Jul 11 20:21 template.properties
> > > > >>          > > > [root@cs-nas-01 /mnt/vol1/sec-storage/template/tmpl/1/3]# pwd
> > > > >>          > > > /mnt/vol1/sec-storage/template/tmpl/1/3
> > > > >>          > > >
> > > > >>          > > >
> > > > >>          > > > Any advice?
> > > > >>          > > >
> > > > >>          > > > Looking forward to your reply, thank you.
> > > > >>          > > >
> > > > >>          > > > Cheers.
> > > > >>          > > >
> > > > >>          > > >
> > > > >>          > > >
> > > > >>          > > > On Fri, Jul 12, 2013 at 2:07 PM, Prasanna Santhanam <t...@apache.org> wrote:
> > > > >>          > > >
> > > > >>          > > > > Can you check whether there is a file at:
> > > > >>          > > > > nfs://103.25.200.19/mnt/vol1/sec-storage/template/tmpl/1/3/
> > > > >>
> > > > >>          > > > >
> > > > >>          > > > > On Fri, Jul 12, 2013 at 01:59:34PM +0800, Indra Pramana wrote:
> > > > >>          > > > > > Hi Prasanna,
> > > > >>          > > > > >
> > > > >>          > > > > > Thanks for your e-mail.
> > > > >>          > > > > >
> > > > >>          > > > > > I have tried restarting the management
> > > > >>          > > > > > server, and the problem still persists. I
> > > > >>          > > > > > even tried to re-do the installation and
> > > > >>          > > > > > configuration from scratch last night, but
> > > > >>          > > > > > the problem is still there.
> > > > >>          > > > > >
> > > > >>          > > > > > I also noted that at the beginning of the
> > > > >>          > > > > > logs, I found some error messages saying that
> > > > >>          > > > > > the template cannot be downloaded to the
> > > > >>          > > > > > pool. See these logs:
> > > > >>          > > > > >
> > > > >>          > > > > > http://pastebin.com/BY1AVJ08
> > > > >>          > > > > >
> > > > >>          > > > > > It says it failed because it cannot get the
> > > > >>          > > > > > volume from the pool. Could it be related,
> > > > >>          > > > > > i.e. did the absence of the template cause
> > > > >>          > > > > > the system VMs to fail to be created and
> > > > >>          > > > > > started?
> > > > >>          > > > > >
> > > > >>          > > > > > I have ensured that I downloaded the system
> > > > >>          > > > > > VM template using cloud-install-sys-tmplt and
> > > > >>          > > > > > verified that the template is already there
> > > > >>          > > > > > in the secondary storage server.
> > > > >>          > > > > >
> > > > >>          > > > > > Any advice is appreciated.
> > > > >>          > > > > >
> > > > >>          > > > > > Looking forward to your reply, thank you.
> > > > >>          > > > > >
> > > > >>          > > > > > Cheers.
> > > > >>          > > > > >
> > > > >>          > > > > >
> > > > >>          > > > > >
> > > > >>          > > > > > On Fri, Jul 12, 2013 at 1:21 PM, Prasanna Santhanam <t...@apache.org> wrote:
> > > > >>          > > > > >
> > > > >>          > > > > > > It looks like a previous attempt to start
> > > > >>          > > > > > > the systemVMs has failed, putting the NFS
> > > > >>          > > > > > > storage in the avoid set. Did you try
> > > > >>          > > > > > > restarting your management server?
> > > > >>          > > > > > >
> > > > >>          > > > > > > This line leads me to the above mentioned:
> > > > >>          > > > > > > 2013-07-12 13:10:48,236 DEBUG
> > > > >>          > > > > > > [storage.allocator.AbstractStoragePoolAllocator]
> > > > >>          > > > > > > (secstorage-1:null) StoragePool is in avoid
> > > > >>          > > > > > > set, skipping this pool
> > > > >>          > > > > > >
> > > > >>          > > > > > >
> > > > >>          > > > > > > On Fri, Jul 12, 2013 at 01:16:53PM +0800, Indra Pramana wrote:
> > > > >>          > > > > > > > Dear Wido and all,
> > > > >>          > > > > > > >
> > > > >>          > > > > > > > I have managed to get the hosts, primary
> > > > >>          > > > > > > > and secondary storage running:
> > > > >>          > > > > > > >
> > > > >>          > > > > > > > - 2 KVM hypervisor hosts
> > > > >>          > > > > > > > - One RBD primary storage
> > > > >>          > > > > > > > - One NFS primary storage (for system
> > > > >>          > > > > > > >   VMs, since I understand that system VMs
> > > > >>          > > > > > > >   cannot use RBD)
> > > > >>          > > > > > > > - One NFS secondary storage
> > > > >>          > > > > > > >
> > > > >>          > > > > > > > However, now I am having a problem with
> > > > >>          > > > > > > > the system VMs: the CPVM and SSVM are
> > > > >>          > > > > > > > unable to start.
> > > > >>          > > > > > > >
> > > > >>          > > > > > > > An excerpt from the management-server.log
> > > > >>          > > > > > > > file is here:
> > > > >>          > > > > > > > http://pastebin.com/ENkpCALY
> > > > >>          > > > > > > >
> > > > >>          > > > > > > > It seems that the VMs could not be
> > > > >>          > > > > > > > created because no suitable StoragePools
> > > > >>          > > > > > > > could be found.
> > > > >>          > > > > > > >
> > > > >>          > > > > > > > I understand that system VMs will be
> > > > >>          > > > > > > > using the NFS primary storage instead of
> > > > >>          > > > > > > > RBD, so I have confirmed that I am able
> > > > >>          > > > > > > > to mount the primary storage via NFS and
> > > > >>          > > > > > > > have read and write access, from both the
> > > > >>          > > > > > > > hypervisor and the management server.
> > > > >>          > > > > > > >
> > > > >>          > > > > > > > Any advice on how I can resolve the
> > > > >>          > > > > > > > problem so that both system VMs get
> > > > >>          > > > > > > > created and started?
> > > > >>          > > > > > > >
> > > > >>          > > > > > > > Looking forward to your reply, thank you.
> > > > >>          > > > > > > >
> > > > >>          > > > > > > > Cheers.
> > > > >>          > > > > > > >
> > > > >>          > > > > > > >
> > > > >>          > > > > > > > On Fri, Jul 12, 2013 at 9:43 AM, Indra Pramana <in...@sg.or.id> wrote:
> > > > >>          > > > > > > >
> > > > >>          > > > > > > > > Hi Wido,
> > > > >>          > > > > > > > >
> > > > >>          > > > > > > > > Thanks for the advice, I'm now able to
> > > > >>          > > > > > > > > add the RBD pool as primary storage.
> > > > >>          > > > > > > > >
> > > > >>          > > > > > > > > Many thanks! :)
> > > > >>          > > > > > > > >
> > > > >>          > > > > > > > > Cheers.
> > > > >>          > > > > > > > >
> > > > >>          > > > > > > > >
> > > > >>          > > > > > > > > On Thursday, July 11, 2013, Wido den Hollander wrote:
> > > > >>          > > > > > > > >
> > > > >>          > > > > > > > >> Hi,
> > > > >>          > > > > > > > >>
> > > > >>          > > > > > > > >> On 07/10/2013 03:42 PM, Chip Childers wrote:
> > > > >>          > > > > > > > >>
> > > > >>          > > > > > > > >>> Cc'ing Wido, our resident Ceph expert. ;-)
> > > > >>          > > > > > > > >>>
> > > > >>          > > > > > > > >>>
> > > > >>          > > > > > > > >> Hehe ;)
> > > > >>          > > > > > > > >>
> > > > >>          > > > > > > > >> On Wed, Jul 10, 2013 at 05:45:25PM +0800, Indra Pramana wrote:
> > > > >>          > > > > > > > >>>
> > > > >>          > > > > > > > >>>> Dear all,
> > > > >>          > > > > > > > >>>>
> > > > >>          > > > > > > > >>>> I am installing CloudStack 4.1.0
> > > > >>          > > > > > > > >>>> (upgraded from 4.0.2) and I also
> > > > >>          > > > > > > > >>>> have a Ceph cluster running. However,
> > > > >>          > > > > > > > >>>> I am having issues in adding the RBD
> > > > >>          > > > > > > > >>>> as primary storage. I tried to follow
> > > > >>          > > > > > > > >>>> the instruction here, but was unable
> > > > >>          > > > > > > > >>>> to make it work:
> > > > >>          > > > > > > > >>>>
> > > > >>          > > > > > > > >>>> http://ceph.com/docs/master/rbd/rbd-cloudstack/
> > > > >>          > > > > > > > >>>>
> > > > >>          > > > > > > > >>>> I have set up a pool on the Ceph
> > > > >>          > > > > > > > >>>> cluster. The status of the cluster is
> > > > >>          > > > > > > > >>>> healthy. Since I am using Ubuntu
> > > > >>          > > > > > > > >>>> 12.04.2 LTS (Precise) for the
> > > > >>          > > > > > > > >>>> hypervisors, I also have compiled
> > > > >>          > > > > > > > >>>> libvirt manually to ensure that
> > > > >>          > > > > > > > >>>> version 0.9.13 is installed
> > > > >>          > > > > > > > >>>> (previously it was 0.9.8).
> > > > >>          > > > > > > > >>>>
> > > > >>          > > > > > > > >>>>
> > > > >>          > > > > > > > >> You can also use the Ubuntu Cloud
> > > > >>          > > > > > > > >> Archive, I still need to get the docs
> > > > >>          > > > > > > > >> updated for that.
> > > > >>          > > > > > > > >>
> > > > >>          > > > > > > > >> I described the process in a blogpost:
> > > > >>          > > > > > > > >> http://blog.widodh.nl/2013/06/a-quick-note-on-running-cloudstack-with-rbd-on-ubuntu-12-04/
> > > > >>          > > > > > > > >>
> > > > >>          > > > > > > > >>  indra@hv-kvm-01:~/rbd$ ceph
> > > > >>          > > > > > > > >>>> ceph> health
> > > > >>          > > > > > > > >>>> HEALTH_OK
> > > > >>          > > > > > > > >>>>
> > > > >>          > > > > > > > >>>> indra@hv-kvm-01:~$ ceph osd lspools
> > > > >>          > > > > > > > >>>> 0 data,1 metadata,2 rbd,3 sc1,
> > > > >>          > > > > > > > >>>>
> > > > >>          > > > > > > > >>>> root@hv-kvm-01:/home/indra# libvirtd --version
> > > > >>          > > > > > > > >>>> libvirtd (libvirt) 0.9.13
> > > > >>          > > > > > > > >>>>
> > > > >>          > > > > > > > >>>> I tried to add Primary Storage into
> > > > >>          > > > > > > > >>>> the CloudStack zone which I have
> > > > >>          > > > > > > > >>>> created:
> > > > >>          > > > > > > > >>>>
> > > > >>          > > > > > > > >>>> Add Primary Storage:
> > > > >>          > > > > > > > >>>>
> > > > >>          > > > > > > > >>>> Zone: my zone name
> > > > >>          > > > > > > > >>>> Pod: my pod name
> > > > >>          > > > > > > > >>>> Cluster: my cluster name
> > > > >>          > > > > > > > >>>> Name: ceph-rbd-pri-storage
> > > > >>          > > > > > > > >>>> Protocol: RBD
> > > > >>          > > > > > > > >>>> RADOS Monitor: my first Ceph monitor IP address
> > > > >>          > > > > > > > >>>> RADOS Pool: sc1 (the pool name on the Ceph cluster)
> > > > >>          > > > > > > > >>>> RADOS User: client.admin
> > > > >>          > > > > > > > >>>> RADOS Secret: /etc/ceph/ceph.client.admin.keyring (keyring file location)
> > > > >>          > > > > > > > >>>>
> > > > >>          > > > > > > > >>>
> > > > >>          > > > > > > > >> This is your problem. That shouldn't
> > > > >>          > > > > > > > >> be the location of the file, but the
> > > > >>          > > > > > > > >> secret, which is a base64 encoded
> > > > >>          > > > > > > > >> string.
> > > > >>          > > > > > > > >>
> > > > >>          > > > > > > > >> $ ceph auth list
> > > > >>          > > > > > > > >>
> > > > >>          > > > > > > > >> That should tell you what the secret is.
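
[Inline note: ceph auth list prints the key among other details; there is
also a command that prints just the base64 key for a given user, which is
handy for pasting into the RADOS Secret field:

===
ceph auth get-key client.admin
===
]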
> > > > >>          > > > > > > > >>
> > > > >>          > > > > > > > >> Storage Tags: rbd
> > > > >>          > > > > > > > >>
> > > > >>          > > > > > > > >> This is the error message when I tried to add the primary storage by clicking OK:
> > > > >>          > > > > > > > >>
> > > > >>          > > > > > > > >> DB Exception on: com.mysql.jdbc.JDBC4PreparedStatement@4b2eb56: INSERT INTO
> > > > >>          > > > > > > > >> storage_pool (storage_pool.id, storage_pool.name, storage_pool.uuid,
> > > > >>          > > > > > > > >> storage_pool.pool_type, storage_pool.created, storage_pool.update_time,
> > > > >>          > > > > > > > >> storage_pool.data_center_id, storage_pool.pod_id,
> > > > >>          > > > > > > > >> storage_pool.available_bytes, storage_pool.capacity_bytes,
> > > > >>          > > > > > > > >> storage_pool.status, storage_pool.scope, storage_pool.storage_provider_id,
> > > > >>          > > > > > > > >> storage_pool.host_address, storage_pool.path, storage_pool.port,
> > > > >>          > > > > > > > >> storage_pool.user_info, storage_pool.cluster_id) VALUES (217,
> > > > >>          > > > > > > > >> _binary'ceph-rbd-pri-storage',
> > > > >>          > > > > > > > >> _binary'a226c9a1-da78-3f3a-b5ac-e18b925c9634', 'RBD', '2013-07-10
> > > > >>          > > > > > > > >> 09:08:28', null, 2, 2, 0, 0, 'Up', null, null, null,
> > > > >>          > > > > > > > >> _binary'ceph/ceph.client.admin.keyring@10.237.11.2/sc1', 6789, null, 2)
> > > > >>          > > > > > > > >>
> > > > >>          > > > > > > > >> On the management-server.log file:
> > > > >>          > > > > > > > >>
> > > > >>          > > > > > > > >> 2013-07-10 17:08:28,845 DEBUG [cloud.api.ApiServlet]
> > > > >>          > > > > > > > >> (catalina-exec-2:null) ===START===  192.168.0.100 -- GET
> > > > >>          > > > > > > > >> command=createStoragePool&zoneid=c116950e-e4ae-4f23-a7e7-74a75c4ee638&podId=a748b063-3a83-4175-a0e9-de39118fe5ce&clusterid=1f87eb09-324d-4d49-83c2-88d84d7a15df&name=ceph-rbd-pri-storage&url=rbd%3A%2F%2Fclient.admin%3A_etc%2Fceph%2Fceph.client.admin.keyring%4010.237.11.2%2Fsc1&tags=rbd&response=json&sessionkey=rDRfWpqeKfQKbKZtHr398ULV%2F8k%3D&_=1373447307839
> > > > >>          > > > > > > > >> 2013-07-10 17:08:28,862 DEBUG [cloud.storage.StorageManagerImpl]
> > > > >>          > > > > > > > >> (catalina-exec-2:null) createPool Params @ scheme - rbd storageHost - null
> > > > >>          > > > > > > > >> hostPath - /ceph/ceph.client.admin.keyring@10.237.11.2/sc1 port - -1
> > > > >>          > > > > > > > >> 2013-07-10 17:08:28,918 DEBUG [cloud.storage.StorageManagerImpl]
> > > > >>          > > > > > > > >> (catalina-exec-2:null) In createPool Setting poolId - 217 uuid -
> > > > >>          > > > > > > > >> a226c9a1-da78-3f3a-b5ac-e18b925c9634 zoneId - 2 podId - 2 poolName -
> > > > >>          > > > > > > > >> ceph-rbd-pri-storage
> > > > >>          > > > > > > > >> 2013-07-10 17:08:28,921 DEBUG [db.Transaction.Transaction]
> > > > >>          > > > > > > > >> (catalina-exec-2:null) Rolling back the transaction: Time = 3 Name =
> > > > >>          > > > > > > > >> persist; called by -Transaction.rollback:890-Transaction.removeUpTo:833-Transaction.close:657-TransactionContextBuilder.interceptException:63-ComponentInstantiationPostProcessor$InterceptorDispatcher.intercept:133-StorageManagerImpl.createPool:1378-StorageManagerImpl.createPool:147-CreateStoragePoolCmd.execute:123-ApiDispatcher.dispatch:162-ApiServer.queueCommand:505-ApiServer.handleRequest:355-ApiServlet.processRequest:302
> > > > >>          > > > > > > > >> 2013-07-10 17:08:28,923 ERROR [cloud.api.ApiServer]
> > > > >>          > > > > > > > >> (catalina-exec-2:null) unhandled exception executing api command:
> > > > >>          > > > > > > > >> createStoragePool
> > > > >>          > > > > > > > >> com.cloud.utils.exception.CloudRuntimeException: DB Exception on:
> > > > >>          > > > > > > > >> com.mysql.jdbc.JDBC4PreparedStatement@4b2eb56: INSERT INTO
> > > > >>          > > > > > > > >> storage_pool (storage_pool.id, storage_pool.name, storage_pool.uuid,
> > > > >>          > > > > > > > >> storage_pool.pool_type, storage_pool.created, storage_pool.update_time,
> > > > >>          > > > > > > > >> storage_pool.data_center_id, storage_pool.pod_id,
> > > > >>          > > > > > > > >> storage_pool.available_bytes, storage_pool.capacity_bytes,
> > > > >>          > > > > > > > >> storage_pool.status, storage_pool.scope, storage_pool.storage_provider_id,
> > > > >>          > > > > > > > >> storage_pool.host_address, storage_pool.path, storage_pool.port,
> > > > >>          > > > > > > > >> storage_pool.user_info, storage_pool.cluster_id) VALUES (217,
> > > > >>          > > > > > > > >> _binary'ceph-rbd-pri-storage',
> > > > >>          > > > > > > > >> _binary'a226c9a1-da78-3f3a-b5ac-e18b925c9634', 'RBD', '2013-07-10
> > > > >>          > > > > > > > >> 09:08:28', null, 2, 2, 0, 0, 'Up', null, null, null,
> > > > >>          > > > > > > > >> _binary'ceph/ceph.client.admin.keyring@10.237.11.2/sc1', 6789, null, 2)
> > > > >>          > > > > > > > >>         at com.cloud.utils.db.GenericDaoBase.persist(GenericDaoBase.java:1342)
> > > > >>          > > > > > > > >>         at com.cloud.storage.dao.StoragePoolDaoImpl.persist(StoragePoolDaoImpl.java:232)
> > > > >>          > > > > > > > >>         at com.cloud.utils.component.ComponentInstantiationPostProcessor$InterceptorDispatcher.intercept(ComponentInstantiationPostProces
> > > > >>          > > > > > > > >>
> > > > >>          > > > > > > > >>
>
> --
> Prasanna.,
>
> ------------------------
> Powered by BigRock.com
>
>
