On Thu, 2014-01-09 at 09:30 +0100, squadra wrote:
> Try it; I bet you will get better latency results with a properly
> configured iSCSI target/initiator.
> 
> 
> Btw, FreeBSD 10 includes a kernel-based iSCSI target now, which has
> been working well for me for some time: easy to set up and performing
> well (ZFS not to forget ;) )
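For reference, the kernel target is configured through /etc/ctl.conf and the
ctld daemon. A minimal sketch (the target name, zvol path and lack of
authentication are just placeholders, not a recommended production setup):

    portal-group pg0 {
        discovery-auth-group no-authentication
        listen 0.0.0.0
    }

    target iqn.2014-01.org.example:vmstore {
        auth-group no-authentication
        portal-group pg0
        lun 0 {
            # LUN backed by a ZFS zvol; the size comes from the volume
            path /dev/zvol/tank/vmstore0
        }
    }

Then ctld_enable="YES" in /etc/rc.conf and a "service ctld start".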

Yeah, I see 10's reached RC4 now, so it'll probably be out for real soon,
and then a while more to wait for 10.1, which will have longer support :)

Have you compared the new kernel iSCSI target with "ports/istgt", btw?

/K

> 
> 
> On Thu, Jan 9, 2014 at 9:20 AM, Markus Stockhausen
> <stockhau...@collogia.de> wrote:
>         > From: Karli Sjöberg [karli.sjob...@slu.se]
>         > Sent: Thursday, 9 January 2014 08:48
>         > To: squa...@gmail.com
>         > Cc: users@ovirt.org; Markus Stockhausen
>         > Subject: Re: [Users] Experience with low cost NFS-Storage as
>         > VM-Storage?
>         >
>         
>         > On Thu, 2014-01-09 at 08:35 +0100, squadra wrote:
>         > Right, try multipathing with NFS :)
>         >
>         > Yes, that's what I meant, maybe I could have been clearer
>         > about that, sorry. Multipathing (and the load balancing it
>         > brings) is what really separates iSCSI from NFS.
>         >
>         > What I'd be interested in knowing is at what breaking point
>         > not having multipathing becomes an issue. I mean, we might
>         > not have such a big VM park, about 300-400 VMs, but so far
>         > we have been running without multipathing, using good ole'
>         > NFS, with no performance issues. It would be good to know
>         > beforehand whether we're headed for a wall of some sort, and
>         > about "when" we'll hit it...
>         >
>         > /K
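(To make concrete what multipathing buys on a Linux hypervisor: with two
portals on separate storage networks, the initiator logs in over both and
dm-multipath presents a single device with two paths. The addresses and IQN
below are made-up examples:

    # discover and log in over both storage networks
    iscsiadm -m discovery -t sendtargets -p 10.0.1.10
    iscsiadm -m discovery -t sendtargets -p 10.0.2.10
    iscsiadm -m node -T iqn.2014-01.org.example:vmstore -p 10.0.1.10 --login
    iscsiadm -m node -T iqn.2014-01.org.example:vmstore -p 10.0.2.10 --login

    # dm-multipath then shows one device with two active paths
    multipath -ll

Load balancing and failover happen per path, which is what plain NFSv3 on its
own cannot do.)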
>         
>         
>         If that is really a concern for the initial question about a
>         "low cost NFS solution", LACP on the NFS filer side will
>         mitigate the bottleneck from too many hypervisors.
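(A sketch of what that could look like on a FreeBSD/ZFS filer, assuming two
interfaces joined in an LACP lagg; interface names and the address are made
up, and the switch ports have to be configured for LACP as well:

    # /etc/rc.conf
    cloned_interfaces="lagg0"
    ifconfig_igb0="up"
    ifconfig_igb1="up"
    ifconfig_lagg0="laggproto lacp laggport igb0 laggport igb1 10.0.1.10 netmask 255.255.255.0"

Keep in mind that LACP balances per flow, so a single hypervisor still tops
out at one link's bandwidth; it helps with many clients, not with one fat
stream.)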
>         
>         My personal headache is the I/O performance of QEMU. More
>         details here:
>         
>         http://lists.nongnu.org/archive/html/qemu-discuss/2013-12/msg00028.html
>         
>         Or, to make it short: each I/O in a VM gets a penalty of 370us.
>         That is much more than in ESX environments.
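(The simplest way to see that kind of per-I/O penalty is a queue-depth-1
random read test run inside the guest and then on the host against comparable
storage; the difference in average completion latency is the overhead.
Roughly, with fio, the device name being just an example:

    fio --name=lat --filename=/dev/vdb --rw=randread --bs=4k \
        --iodepth=1 --direct=1 --runtime=30 --time_based

Compare the reported clat averages between guest and host.)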
>         
>         I would be interested whether it is the same in iSCSI setups.
>         
>         Markus
> 
> 
> 
> 
> -- 
> Sent from the Delta quadrant using Borg technology!

_______________________________________________
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users
