I was thinking that a "virtual SAN" would primarily be for the local virtual
machines, and if it were implemented as a new flavor of SFS then the
cross-system use would just be icing on the cake. What I don't like about NFS
is the number of hops the data has to take even when all of the processes are
on the same physical platform (in this sense LPARs are different platforms).

DASD -> CP -> SFS -> NFS -> TCPIP -> communications implementation -> Linux
 
And I don't know how many hops the data takes inside SFS, NFS or Linux. The
kind of path I was thinking a "virtual SAN" in a single platform might
have is:

DASD -> CP -> SFS -> CP -> LINUX

I understand that extending the connectivity beyond a single platform will
add hops to the path, but that is part of the price you pay. When I REALLY
need to extend data from one of my Linux servers beyond the platform, then
NFS is reasonable and the extra hops are the price you pay.
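
For what it's worth, a quick way to put a number on those extra hops is to
time the same amount of I/O against a local disk and against an NFS mount
from the same Linux guest. Below is a rough Python sketch of that kind of
comparison. The LOCAL_DIR and NFS_DIR paths are only placeholders (point them
at your own minidisk filesystem and NFS mount point), and it is nowhere near
a proper benchmark (single run, no cache control), just a first impression.

#!/usr/bin/env python3
# Rough sketch: compare sequential write/read times on a local path vs an
# NFS-mounted path, to put a number on the "extra hops" cost discussed above.

import os
import time

LOCAL_DIR = "/tmp"        # placeholder: some locally attached filesystem
NFS_DIR   = "/mnt/vmnfs"  # placeholder: an NFS mount point on this guest
SIZE_MB   = 64            # amount of data to write/read per test
CHUNK     = 1024 * 1024   # 1 MiB per write

def time_io(directory):
    """Write SIZE_MB of data to a scratch file, read it back; return seconds."""
    path = os.path.join(directory, "hoptest.tmp")
    buf = b"\0" * CHUNK

    start = time.time()
    with open(path, "wb") as f:
        for _ in range(SIZE_MB):
            f.write(buf)
        f.flush()
        os.fsync(f.fileno())   # force the data out so we time real I/O
    write_secs = time.time() - start

    start = time.time()
    with open(path, "rb") as f:
        while f.read(CHUNK):
            pass
    read_secs = time.time() - start

    os.remove(path)
    return write_secs, read_secs

for label, directory in (("local", LOCAL_DIR), ("nfs", NFS_DIR)):
    w, r = time_io(directory)
    print(f"{label:>5}: write {SIZE_MB / w:6.1f} MB/s, read {SIZE_MB / r:6.1f} MB/s")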

/Tom Kern

On Wed, 24 Jan 2007 09:46:33 +0100, Rob van der Heij <[EMAIL PROTECTED]> wrote:
>On 1/23/07, Thomas Kern <[EMAIL PROTECTED]> wrote:
>> Could this "Virtual San" be some modification of the Shared/Byte File
>> System server? With IPGATE, that could even be used across LPARs via
>> hypersockets and across physical machines via other TCPIP connections.
>
>Well, that's basically NFS to serve BFS space. While I admit we never
>had the need to study it in more detail, so far I have not been
>impressed by the performance of the VM NFS server. If you're going NFS
>then I suppose it would be more realistic to have a Linux server host
>it. But each Linux virtual machine would still need its own local disk
>space to hold the basic materials to get the network going (or maybe
>do something with an NSS or implement booting from network).
>Have not looked at iSCSI in detail yet. Some folks I respect tell me
>not to do that...
>  ...snipped...
