Somesh - I'm missing something here - I just initiated a new Volume Snapshot -
and I see that the HOST does the qemu-img convert operation with 2 mount
points, one primary and one secondary... so I was wrong...
I know the same happens when deploying a new VM from a template - the HOST
mounts primary and secondary and does the conversion...
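
(For reference, on KVM the host-side conversion looks roughly like the
following - the mount paths here are just illustrative placeholders:

  # both pools are mounted on the hypervisor host itself, not on the SSVM
  qemu-img convert -O qcow2 \
      /mnt/<primary-pool-uuid>/<volume-uuid> \
      /mnt/<secondary-pool-uuid>/snapshots/<account-id>/<volume-id>/<snapshot-uuid>
)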

So the SSVM is "only" used to download templates/ISOs from URLs, and for
copying templates/ISOs across zones?

At the end - I guess what you said is important for me - the SSVM doesn't have
access to Primary Storage...
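
(A quick way to confirm this, as a sketch: SSH into the SSVM and check its
mounts - on KVM hosts the system VM key is typically /root/.ssh/id_rsa.cloud
and system VMs listen for SSH on port 3922 on their link-local address:

  ssh -i /root/.ssh/id_rsa.cloud -p 3922 root@169.254.x.x 'mount | grep /mnt'

Only the secondary storage export should show up there - no primary pool.)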

Thx

On 30 December 2014 at 20:29, Somesh Naidu <somesh.na...@citrix.com> wrote:

> Interaction with primary storage would be via the hypervisor. SSVM doesn't
> access the primary storage directly.
>
> -----Original Message-----
> From: Andrija Panic [mailto:andrija.pa...@gmail.com]
> Sent: Tuesday, December 30, 2014 1:52 PM
> To: us...@cloudstack.apache.org
> Cc: dev@cloudstack.apache.org
> Subject: Re: Physical network design options - which crime to commit
>
> Somesh, thx - I understand that - one more question, since you guys are
> around :)
>
> The primary storage network - I understand how to separate that from the
> management network on the host (having a separate NIC/VLAN/IP inside each
> hypervisor host, etc.) - but as far as I know, the SSVM doesn't have a NIC
> that goes directly into the Primary storage network, just into management -
> does this mean that the Primary storage network needs to be
> reachable/routable from the Management network (so the SSVM uses the
> "Reserved System Gateway" to actually reach the Primary storage network,
> through the Management network)?
> The SSVM needs to reach Primary storage somehow...
>
> Thx
>
> On 30 December 2014 at 19:29, Somesh Naidu <somesh.na...@citrix.com>
> wrote:
>
> > Sorry, I was out on holidays :)
> >
> > I guess that should work. Just know that Primary traffic is hypervisor to
> > storage and Secondary traffic is SSVM/Mgmt to storage. CloudStack
> > generally doesn't consider primary storage in its architecture design, as
> > it mostly relies on recommendations from the hypervisor vendors.
> >
> > -----Original Message-----
> > From: Andrija Panic [mailto:andrija.pa...@gmail.com]
> > Sent: Friday, December 26, 2014 5:59 PM
> > To: us...@cloudstack.apache.org
> > Cc: dev@cloudstack.apache.org
> > Subject: RE: Physical network design options - which crime to commit
> >
> > On storage nodes - yes, definitely will do it.
> >
> > One final piece of advice/opinion, please...?
> >
> > On compute nodes, since one 10G will be shared by both primary and
> > secondary traffic - would you separate that onto 2 different VLANs and
> > then implement some QoS, e.g. guarantee 8Gb/s for the primary traffic
> > VLAN, or limit the secondary storage VLAN to, say, 2Gb/s? Or just simply
> > let them compete for the bandwidth? I'm afraid secondary traffic may
> > interfere with or completely drown out primary traffic if no QoS is
> > implemented...
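> >
> > For example, a rough sketch of the cap I have in mind, done on the host
> > with tc (the VLAN sub-interface name eth1.300 is hypothetical):
> >
> >   # hard-cap the secondary-storage VLAN sub-interface at 2Gb/s
> >   tc qdisc add dev eth1.300 root tbf rate 2gbit burst 4m latency 50ms
> >
> > (An HTB hierarchy on the parent NIC could instead guarantee 8Gb/s to the
> > primary VLAN while letting secondary borrow the rest.)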
> >
> > Sorry for boring you with details.
> >
> > Thanks
> >
> > Sent from Google Nexus 4
> > On Dec 26, 2014 11:51 PM, "Somesh Naidu" <somesh.na...@citrix.com> wrote:
> >
> > > Actually, I would highly consider NIC bonding for the storage network if
> > > possible.
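> > >
> > > Something along these lines with iproute2 (NIC and VLAN names are
> > > hypothetical; the switch ports would need to be configured for LACP):
> > >
> > >   # LACP bond of the two 10G NICs, storage VLANs on top
> > >   ip link add bond0 type bond mode 802.3ad
> > >   ip link set eth0 down; ip link set eth0 master bond0
> > >   ip link set eth1 down; ip link set eth1 master bond0
> > >   ip link set bond0 up
> > >   ip link add link bond0 name bond0.100 type vlan id 100  # primary storage
> > >   ip link add link bond0 name bond0.101 type vlan id 101  # secondary storage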
> > >
> > > -----Original Message-----
> > > From: Andrija Panic [mailto:andrija.pa...@gmail.com]
> > > Sent: Friday, December 26, 2014 4:42 PM
> > > To: dev@cloudstack.apache.org
> > > Cc: us...@cloudstack.apache.org
> > > Subject: RE: Physical network design options - which crime to commit
> > >
> > > Thanks Somesh, the first option also seems the most logical to me.
> > >
> > > I guess you wouldn't consider doing NIC bonding and then VLANs with some
> > > QoS based on VLANs at the switch level?
> > >
> > > Thx again
> > >
> > > Sent from Google Nexus 4
> > > On Dec 26, 2014 9:48 PM, "Somesh Naidu" <somesh.na...@citrix.com> wrote:
> > >
> > > > I generally prefer to keep the storage traffic separate. The reason is
> > > > that storage performance (provisioning templates to primary, snapshots,
> > > > copying templates, etc.) significantly impacts the end user experience.
> > > > In addition, it also helps isolate network issues when troubleshooting.
> > > >
> > > > So I'd go for one of the following in that order:
> > > > Case I
> > > > 1G = mgmt network (only mgmt)
> > > > 10G = Primary and Secondary storage traffic
> > > > 10G = Guest and Public traffic
> > > >
> > > > Case II
> > > > 10G = Primary and Secondary storage traffic
> > > > 10G = mgmt network, Guest and Public traffic
> > > >
> > > > Case III
> > > > 10G = mgmt network, Primary and Secondary storage traffic
> > > > 10G = Guest and Public traffic
> > > >
> > > > -----Original Message-----
> > > > From: Andrija Panic [mailto:andrija.pa...@gmail.com]
> > > > Sent: Friday, December 26, 2014 10:06 AM
> > > > To: us...@cloudstack.apache.org; dev@cloudstack.apache.org
> > > > Subject: Physical network design options - which crime to commit
> > > >
> > > > Hi folks,
> > > >
> > > > I'm designing some stuff - and wondering which crime to commit - I have
> > > > 2 possible scenarios in my head.
> > > > I have the following NICs available on compute nodes:
> > > > 1 x 1G NIC
> > > > 2 x 10G NIC
> > > >
> > > > I was wondering which approach would be better, as I'm thinking about 2
> > > > possible solutions at the moment, maybe 3.
> > > >
> > > > *First scenario:*
> > > >
> > > > 1G = mgmt network (only mgmt)
> > > > 10G = Primary and Secondary storage traffic
> > > > 10G = Guest and Public traffic
> > > >
> > > >
> > > > *Second scenario*
> > > >
> > > > 1G = not used at all
> > > > 10G = mgmt,primary,secondary storage
> > > > 10G = Guest and Public
> > > >
> > > >
> > > > And possibly a 3rd scenario:
> > > >
> > > > 1G = not used at all
> > > > 10G = mgmt+primary storage
> > > > 10G = secondary storage, guest,public network
> > > >
> > > >
> > > > I could continue here with different scenarios - but I'm wondering if a
> > > > dedicated 1G for mgmt would make sense - I know it is "better" to have
> > > > it dedicated if possible, but following "KISS", and knowing mgmt is
> > > > extremely lightweight traffic, I was thinking of putting everything on
> > > > the 2 x 10G interfaces.
> > > >
> > > > Any opinions are most welcome.
> > > > Thanks,
> > > >
> > > >
> > > > --
> > > >
> > > > Andrija Panić
> > > >
> > >
> >
>
>
>
> --
>
> Andrija Panić
>



-- 

Andrija Panić
