Out of interest, and reasonably on-topic, can anyone predict
performance comparison (CIFS) between these two setups?
1) Dedicated Windows 2003 Server, Intel hardware SATA RAID controller
(single raid 5 array, 8 disks)
2) OpenSolaris+ZFS+CIFS, 8 drives with a SuperMicro controller
Hi all,
I have built out an 8TB SAN at home using OpenSolaris + ZFS. I have
yet to put it into 'production', as many of the issues raised on this
mailing list are putting me off trusting my data to the platform
right now.
Over the years I have stored my personal data on NetWare and now NT
On Wed, Oct 15, 2008 at 9:13 PM, Al Hopper [EMAIL PROTECTED] wrote:
The exception to the rule of multiple 12V output sections is PC
Power & Cooling - who claim that there is no technical advantage to
having multiple 12V outputs (and that the feature is only a marketing
gimmick). But now that they
2008/10/6 mike [EMAIL PROTECTED]:
I am trying to finish building a system and I need to pick a
working NIC and onboard SATA chipset (video is not a big deal - I can
get a silent PCIe card for that, and I already know one that works great).
I need 8 onboard SATA ports. I would prefer an Intel CPU.
2008/9/30 Jean Dion [EMAIL PROTECTED]:
iSCSI requires a dedicated network, not a shared network or even a VLAN.
Backups cause large I/O that fills your network quickly, like any SAN today.
Could you clarify why it is not suitable to use VLANs for iSCSI?
2008/9/30 Jean Dion [EMAIL PROTECTED]:
Simple. You cannot go faster than the slowest link.
That is indeed correct, but what is the slowest link when using a
Layer 2 VLAN? You made a broad statement that iSCSI 'requires' a
dedicated, standalone network. I do not believe this is the case.
Any
2008/9/30 Jean Dion [EMAIL PROTECTED]:
If you want performance you do not put all your I/O across the same physical
wire. Once again, you cannot go faster than the physical wire can support
(CAT5E, CAT6, fibre), no matter if it is layer 2 or not. Using a VLAN on a
single port you share the
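Putting rough numbers on the shared-wire point - a sketch with assumed figures, since a saturated 1 Gbit/s port moves at most about 125 MB/s no matter how many VLANs ride on it:

```shell
# A VLAN separates broadcast domains, but every VLAN on a port still
# shares that port's physical bandwidth. 125 MB/s approximates a
# saturated 1 Gbit/s link; the figure is illustrative only.
per_stream() {
    echo $(( 125 / $1 ))   # MB/s left for each of $1 concurrent streams
}

per_stream 1   # dedicated link: 125 MB/s
per_stream 4   # four streams (e.g. backup + iSCSI) sharing: 31 MB/s each
```

This is why both sides have a point: a VLAN isolates traffic logically, but only a separate physical link (or port) isolates it in bandwidth terms.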
Am I right in thinking though that for every raidz1/2 vdev, you're
effectively losing the storage of one/two disks in that vdev?
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss
2008/9/17 Peter Tribble:
On Wed, Sep 17, 2008 at 8:40 AM, gm_sjo [EMAIL PROTECTED] wrote:
Am I right in thinking though that for every raidz1/2 vdev, you're
effectively losing the storage of one/two disks in that vdev?
Well yeah - you've got to have some allowance for redundancy
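To put numbers on that trade-off - a minimal sketch, assuming equal-size disks and ignoring metadata overhead (the helper name is mine, not a ZFS command):

```shell
# Usable capacity of a single raidz vdev: (disks - parity) * disk size.
# raidz1 carries 1 parity disk per vdev, raidz2 carries 2.
raidz_usable() {
    disks=$1; parity=$2; size_tb=$3
    echo $(( (disks - parity) * size_tb ))
}

raidz_usable 8 1 1   # 8 x 1 TB in one raidz1 vdev: 7 TB usable
raidz_usable 8 2 1   # 8 x 1 TB in one raidz2 vdev: 6 TB usable
```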
2008/9/15 gm_sjo:
2008/9/15 Ben Rockwood:
On Thumpers I've created single pools of 44 disks, in 11-disk RAIDZ2 vdevs.
I've come to regret this. I recommend keeping pools reasonably sized
and keeping stripes narrower than this.
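One likely reason (my reading, not necessarily Ben's): random-read IOPS of a raidz pool scales with the number of vdevs, since each raidz vdev delivers roughly one disk's worth of random IOPS, so a few wide stripes read slowly. A rough sketch assuming 100 IOPS per disk:

```shell
# Each raidz vdev behaves like roughly one disk for random reads, so
# pool IOPS ~ vdev count * per-disk IOPS (100 is an assumed figure).
pool_iops() {
    echo $(( $1 * 100 ))
}

pool_iops 4    # 44 disks as 4 x 11-disk raidz2:  ~400 IOPS
pool_iops 11   # same disks as 11 x 4-disk raidz1: ~1100 IOPS
```

The narrower layout trades capacity (more disks spent on parity) for markedly better random-read performance and faster resilvers.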
Could you clarify why you came to regret it? I was intending
Hi all,
I'm about to embark on my first voyage into ZFS (and Solaris, frankly) as it
seems very appealing for a low-cost SAN/NAS solution. I am in the process of
building up a HCL-compliant whitebox server which ultimately will contain
8x1TB SATA disks.
I would appreciate some advice and
2008/9/12 Malachi de Ælfweald:
Currently, you can mirror your boot but not raidz2 it. I'd recommend using 2
of the drives for a mirrored boot and the other 6 drives for raidz2. I used
2x Addonics AE5RCS35NSA to hold the drives to give me hot swappability.
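The layout Malachi describes might look like this - a sketch only, with hypothetical device names (check `format` output for yours), and in practice the Solaris installer creates the mirrored root pool for you:

```shell
# Two drives as a mirrored boot/root pool (boot cannot live on raidz):
zpool create rpool mirror c1t0d0 c1t1d0
# Remaining six drives as a raidz2 data pool
# (with 1 TB disks: 6 disks - 2 parity = ~4 TB usable):
zpool create tank raidz2 c1t2d0 c1t3d0 c1t4d0 c1t5d0 c1t6d0 c1t7d0
```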
Sorry, forgot to mention - I have
2008/9/12 Michael Schuster:
Solaris provides CIFS support natively too - maybe you can save yourself the
hassle of going through the vmware + windows combo.
There will be approx. 20 vmware guests running on this infrastructure,
so having a windows guest there for serving files isn't a problem.
2008/9/12 Malachi de Ælfweald:
I'd say that if you are planning on using Windows to host the VMs, then
either vmware or virtualbox is your best bet. If you are looking to have the
OpenSolaris box host the VMs, xVM might be a better choice.
I'm not - as per my original post, the vmware host