Suppose I want to build a 100-drive storage system. Are there any
disadvantages to setting up 20 arrays of HW RAID0 (5 drives each), then putting
a ZFS file system on those 20 virtual drives and configuring them as RAIDZ?
I understand people often say ZFS doesn't play well with HW RAID.
"warning: cannot send 'pent@wdFailuresAndSol11Migrate': I/O error"
I have tried using "zfs set checksum=off", but that doesn't change anything.
Any tips on how I can get these filesystems over to the new machine, please?
Thanks,
Tom.
On Wed, Mar 9, 2011 at 10:37 PM, Peter Jeremy
wrote:
> On 2011-Mar-10 05:50:53 +0800, Tom Fanning wrote:
>>I have a FreeNAS 0.7.2 box, based on FreeBSD 7.3-RELEASE-p1, running
>>ZFS with 4x1TB SATA drives in RAIDz1.
>>
>>I appear to have lost 1TB of usable space afte
back to FreeBSD and I don't have spare storage.
Any help whatsoever would be much appreciated - something's not right here.
Many thanks
Tom
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss
Thanks, James, for reporting this, and thanks, Matt, for the analysis. I filed
7002362 to track this.
Tom
On 11/23/10 10:43 AM, Matthew Ahrens wrote:
I verified that this bug exists in OpenSolaris as well. The problem is that we
can't destroy the old filesystem "a" (which has
ilesystem?
>
> I believe it will just work.
Sorry, what exactly will just work? Moving to a distro with more recent
ZFS support and upgrading my pool?
Thanks
Tom
esystem with data on it, so I'm 1TB down and not sure
what I can do.
Thanks in advance.
Tom
ears to be in ZFS version 14, and my FreeNAS distro is at
version 13. Is this the issue?
I would really appreciate some help with this. The FreeNAS forums and
documentation haven't been any help.
Thanks in advance.
--
Tom Fanning
Thanks a lot for that. I'm not experienced in reading dtrace output, but
I'm pretty sure that dedup was the cause here: disabling it during the
transfer immediately raised the transfer speed to ~100MB/s.
Thanks for the article you linked to — it seems my system would need about
16GB R
On 18/09/10 15:25, George Wilson wrote:
Tom Bird wrote:
In my case, other than an hourly snapshot, the data is not
significantly changing.
It'd be nice to see a response other than "you're doing it wrong";
rebuilding 5x the data on a drive relative to its capacity
On 18/09/10 13:06, Edho P Arief wrote:
On Sat, Sep 18, 2010 at 7:01 PM, Tom Bird wrote:
All said and done though, we will have to live with snv_134's bugs from now
on, or perhaps I could try Sol 10.
or OpenIllumos. Or Nexenta. Or FreeBSD. Or.
... none of which will receive ZFS code up
lly happening.
All said and done though, we will have to live with snv_134's bugs from
now on, or perhaps I could try Sol 10.
Tom
Bob Friesenhahn wrote:
On Fri, 17 Sep 2010, Tom Bird wrote:
Morning,
c7t5000CCA221F4EC54d0 is a 2T disk, how can it resilver 5.63T of it?
This is actually an old capture of the status output, it got to nearly
10T before deciding that there was an error and not completing, reseat
disk and
d0  ONLINE       0     0     0
errors: No known data errors
--
Tom
// www.portfast.co.uk
// hosted services, domains, virtual machines, consultancy
> On Wed, Aug 25, 2010 at 12:29 PM, Dr. Martin
> Mundschenk
> wrote:
> > Well, I wonder what are the components to build a
> stable system without having an enterprise solution:
> eSATA, USB, FireWire, FibreChannel?
>
> If possible to get a card to fit into a MacMini,
> eSATA would be a lot
> better
> I'm not sure I didn't have dedup enabled. I might
> have.
> As it happens, the system rebooted and is now in
> single user mode.
> I'm trying another import. Most services are not
> running which should free ram.
>
> If it crashes again, I'll try the live CD while I see
> about more RAM.
Succ
> Tom,
>
> If you freshly installed the root pool, then those
> devices
> should be okay so that wasn't a good test. The other
> pools
> should remain unaffected by the install, and I hope,
> from
> the power failure.
Yes. I was able to import them and have since
> Hi Tom,
>
> Did you boot from the OpenSolaris LiveCD and attempt
> to manually
> mount the data3 pool? The import might take some
> time.
I haven't tried that. I am booting from a new install to the hard drive though.
>
> I'm also curious whether the d
My power supply failed. After I replaced it, I had issues staying up after
doing zpool import -f.
I reinstalled OpenSolaris 134 on my rpool and still had issues.
I have 5 pools:
rpool - 1*37GB
data - RAIDZ, 4*500GB
data1 - RAID1 2*750GB
data2 - RAID1 2*750GB
data3 - RAID1 2*2TB - WD20EARS
The s
subsequent incremental receives should
leave the mountpoint alone (after build 128).
Tom
: to...@heavy[14]; zfs inherit -S compress tank/b/c
: to...@heavy[15]; zfs get compress tank/b/c
NAME      PROPERTY     VALUE  SOURCE
tank/b/c  compression  on     received
: to...@heavy[16];
I don't remember this being an issue. I'll let you know if I find out more.
Tom
herit, but it
would be best if zfs receive handled failures more gracefully, and
attempted to set as many properties as possible.
Yes, that was fixed in build 128.
Thanks to Cindy and Tom for their help.
Glad to hear we identified the problem. Sorry for the trouble.
Tom
e
to clear the explicit mountpoint and prevent it from being included in
the send stream. Later set it back the way it was. (Soon there will be
an option to take care of that; see CR 6883722 want 'zfs recv -o
prop=value' to set initial property value
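The workaround described above can be sketched as a short shell sequence.
This is a hedged sketch only: the dataset names (tank/b/c, tank2/b/c), the
target host, and the snapshot name are all illustrative assumptions, not
taken from the original thread.

```shell
# Sketch: keep an explicit mountpoint out of a send stream by reverting
# it to inherited before sending, then restoring it afterward.
# All names here (tank/b/c, otherhost, tank2/b/c) are assumptions.
MP=$(zfs get -H -o value mountpoint tank/b/c)   # remember the current value
zfs inherit mountpoint tank/b/c                 # clear the explicit setting
zfs snapshot tank/b/c@migrate
zfs send tank/b/c@migrate | ssh otherhost zfs receive tank2/b/c
zfs set mountpoint="$MP" tank/b/c               # set it back the way it was
```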
Redirect 'zdb -vvv poolname' to a file and search it for
"compression" to check the value in the ZAP.
I assume you have permission to set the compression property on the
receive side, but I'd check anyway.
Tom
On Tue, Apr 6, 2010 at 10:57 PM, Tom Erickson
mailto:thomas.
compression with 'zfs
allow'. You could pipe the send stream to zstreamdump to verify that
compression=gzip is in the send stream, but I think before build 125 you
will not have zstreamdump.
Tom
;: I/O error
Ideas, anyone?
--
Tom
// www.portfast.co.uk -- internet services and consultancy
// hosting from 1.65 per domain
table? The blogs I have read so far don't specify.
Re DDT size: is (data in use)/(avg blocksize) * 256 bits right as a worst
case (i.e. all blocks non-identical)?
What are average block sizes?
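As a back-of-envelope version of that worst case (all blocks unique, one DDT
entry per block): note the 256-bit figure is only the checksum; the in-core
DDT entry is usually quoted at roughly 320 bytes. Both the entry size and the
example inputs below are assumptions, not figures from the thread.

```shell
# Worst-case DDT size: one entry per block, all blocks unique.
data_bytes=$((1024 * 1024 * 1024 * 1024))   # 1 TiB of data in use (assumption)
avg_block=$((64 * 1024))                    # 64 KiB average block size (assumption)
entry_bytes=320                             # approx in-core bytes per DDT entry (assumption)
entries=$((data_bytes / avg_block))
ddt_mib=$((entries * entry_bytes / 1024 / 1024))
echo "entries=$entries ddt_mib=$ddt_mib"    # prints: entries=16777216 ddt_mib=5120
```

So even a modest 1 TiB of fully unique data implies ~5 GiB of DDT, which is
why dedup wants so much RAM.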
Cheers,
Tom
PS: sorry this isn't a specifically ZFS question, but ZFS is the
reason I use OpenSolaris, so there's a link in there somewhere.
Tom
staller that gives you
only one option - install everything.
Am I just doing it wrong or is there another way to get OpenSolaris
installed in a sane manner other than just sticking with community
edition at snv_129?
--
Tom
// www.portfast.co.uk -- internet services and consultancy
// hosting fr
CD wrote:
On 01/18/2010 06:36 PM, Tom Haynes wrote:
CD wrote:
Greetings.
I've got two pools, but can only access one of them from my
Linux machine. Both pools have the same settings and ACLs.
Both pools have sharenfs=on. Also, every filesystem has
aclinherit=passthrough
NAME PROPERTY
Interesting,
I had assumed the cause of my problem was de-dup because the symptoms are
similar to what others have reported destroying their deduped datasets, but
their system hangs didn't happen for hours while my system hard locks in 3 to 4
minutes. But now you have me thinking because the d
Fast is a relative term; even after the first write to the end, they are
still really fast for a small server, and the latency is still low (<1ms),
which is often more important than throughput. The topic said poor man's slog.
The Vertexes can be had for $100 and the Vertex Turbo a little mo
Myself and others had good luck with the OCZ vertex. I use two 30GB versions
and they have very high write and read throughputs for such a cheap MLC.
--
This message posted from opensolaris.org
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
h
Did you happen to set dedup on that zvol that you destroyed? Your symptoms
sound just like mine. Check out the threads concerning losing a pool after
destroying a deduped dataset. There are 3 or 4 of them. I get heavy read
activity for about 4 minutes and then the systems just hangs and I can'
Bump.
Any devs want to take him up on his offer? Obviously this is affecting a few
users and, judging from the view counts of the other threads about this problem,
many more. This would probably affect the 7000 series as well.
Thanks.
I'm also curious, but for b130 of Opensolaris. Any way to try to import a pool
without the log device? Seems like the ability to rollback of the pool
recovery import should help with this scenario if you are willing to take data
loss to get to a consistent state with a failed or physically rem
In this last iteration, I switched to a completely different box with twice the
resources. Somehow, from the symptoms, I don't think trying it on one of the
48 or 128GB servers at work is going to change the outcome. The hang happens
too fast. It seems like something in the destroy is causing
> If pool isnt rpool you might to want to boot into
> singleuser mode (-s after kernel parameters on boot)
> remove /etc/zfs/zpool.cache and then reboot.
> after that you can merely ssh into box and watch
> iostat while import.
>
> Yours
> Markus Kovero
>
> ___
You might want to check out another thread that some of the others and I
started on this topic. Some of the guys in that thread got their pool back, but
I haven't been able to. I have SSDs for my log and cache and it hasn't helped
me because my system hangs hard on import the way you are describ
That's the thing: the drive lights aren't blinking, but I was thinking maybe
the writes are going so slowly that it's possible they aren't registering. And
since I can't keep a running iostat, I can't tell if anything is going on. I
can however get into the KMDB. Is there something in there that
Yeah, still no joy. I moved the disks to another machine altogether with 8gb
and a quad core intel versus the dual core amd I was using and it still just
hangs the box on import. This time I did a nohup zpool import -fFX vault after
booting off the b130 live dvd on this machine into single user
Yeah, still no joy on getting my pool back. I think I might have to try
grabbing another server with a lot more memory and slapping the HBA and the
drives in that. Can ZFS deal with a controller change?
> I booted the snv_130 live cd and ran zpool import
> -fFX and it took a day, but it imported my pool and
> rolled it back to a previous version. I haven't
> looked to see what was missing, but I didn't need any
> of the changes over the last few weeks.
>
> Scott
I'll give it a shot. Hope this
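For reference, a hedged sketch of that recovery attempt as discussed in the
thread, run from a live CD so no stale /etc/zfs/zpool.cache interferes. The
pool name 'vault' comes from an earlier message; everything else here is an
assumption, and -X can discard recent writes, so treat it as a last resort.

```shell
# Sketch of the zpool recovery import discussed above (live CD, single user).
# -f forces import, -F rolls back to the last consistent txg,
# -X searches further back in txg history (may lose recent data).
zpool import                      # list importable pools without importing
nohup zpool import -fFX vault &   # run the recovery import in the background
iostat -xn 5                      # watch disk activity while it runs
```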
4 Gigabytes. The hang on my system happens much faster. I can watch the drives
light up and run iostat but 3 minutes in like clockwork everything gets hung
and I'm left with a blinking cursor at the console that newlines but doesn't do
anything. Although if I run kmdb and hit f1-a I can get int
I am having the exact same problem after destroying a dataset with a few
gigabytes of data and dedup. I type zfs destroy vault/virtualmachines which
was a zvol with dedup turned on and the server hung, couldn't ping, couldn't
get on the console. Next bootup same thing just hangs when importing
All,
After long searching I found the reason: the IPS package
SUNWnfsskr was missing. Thanks for all your replies and help.
Regards,
Tom.
Tom de Waal wrote:
Hi,
I'm trying to identify why my nfs server does not work. I'm using a more
or less core install of OSOL 2009.0
e (and fills
sharetab)
Any suggestion how to resolve this? Am I missing an ips package or a file?
Regards,
Tom de Waal
Ross wrote:
Yup, that one was down to a known (and fixed) bug though, so it isn't
the normal story of ZFS problems.
Got a bug ID or anything for that, just out of interest?
As an update on my storage situation, I've got some JBODs now, see how
that goes.
--
Tom
// www.port
Victor Latushkin wrote:
This issue (and previous one reported by Tom) has got some publicity
recently - see here
http://www.uknof.org.uk/uknof13/Bird-Redux.pdf
So i feel like i need to provide a little bit more information about the
outcome (sorry that it is delayed and not as full as
                           377     63  47.1M  6.40M
content4   3.85T  11.2T    944      0   118M      0
content5   3.84T  11.2T    243     61  30.4M  5.97M
content6   19.0T  1.05T      0      0      0      0
content7   14.0T   991G    209      0  26.1M      0
---------  -----  -----  -----  -----  -----  -----
I'm running ClearCase on a Solaris 10u4 system. Views & vobs.
I lock the vob, snapshot /var/adm/rational, vobs, views, then unlock the vobs.
We've been able to copy the snapshot to another server & restore.
I believe ClearCase is supported by Rational on ZFS also. We would not have
done it oth
I'm running OpenSolaris 10/08 snv_101b with the auto snapshot packages.
I'm getting this error:
/usr/lib/time-slider-cleanup -y
Traceback (most recent call last):
File "/usr/lib/time-slider-cleanup", line 10, in
main(abspath(__file__))
File "/usr/lib/../share/time-slider/lib/time_slider/
Toby Thain wrote:
> On 18-Jan-09, at 6:12 PM, Nathan Kroenert wrote:
>
>> Hey, Tom -
>>
>> Correct me if I'm wrong here, but it seems you are not allowing ZFS any
>> sort of redundancy to manage.
Every other file system out there runs fine on a single LUN, w
Tim wrote:
> On Sun, Jan 18, 2009 at 8:02 AM, Tom Bird <mailto:t...@marmot.org.uk>> wrote:
> errors: Permanent errors have been detected in the following files:
>
>content:<0x0>
>content:<0x2c898>
>
> r...@cs4:~# f
content:<0x2c898>
r...@cs4:~# find /content
/content
r...@cs4:~# (yes that really is it)
r...@cs4:~# uname -a
SunOS cs4.kw 5.11 snv_99 sun4v sparc SUNW,Sun-Fire-T200
from format:
2. c2t8d0
/p...@7c0/p...@0/p...@8/LSILogic,s...@0/s...@8,0
Also, "content" does not show in df o
hen copying binaries. A pure source based distribution
> like Gentoo has hardly any issues at all.
Nobody in their right mind is using Gentoo.
If you want it in Linux then it has to be a proper GPL compliant effort.
I for one would like this to happen.
Tom
_
What, no VirtualBox image?
This VMware image won't run on VMware Workstation 5.5 either :-(
I've found that SFU NFS is pretty poor in general. I set up Samba on the host
system. Let the client stay native & have the server adapt.
>> How can I diagnose why a resilver appears to be hanging at a certain
>> percentage, seemingly doing nothing for quite a while, even though the
>> HDD LED is lit up permanently (no apparent head seeking)?
>>
>> The drives in the pool are WD Raid Editions, thus have TLER and should
>> time out on
Victor Latushkin wrote:
> Hi Tom and all,
>> [EMAIL PROTECTED]:~# uname -a
>> SunOS cs3.kw 5.10 Generic_127127-11 sun4v sparc SUNW,Sun-Fire-T200
>
> Btw, have you considered opening support call for this issue?
As a follow up to the whole story, with the fantastic help
Victor Latushkin wrote:
> Hi Tom and all,
>
> Tom Bird wrote:
>> Hi,
>>
>> Have a problem with a ZFS on a single device, this device is 48 1T SATA
>> drives presented as a 42T LUN via hardware RAID 6 on a SAS bus which had
>> a ZFS on it as a single device.
it up to the point where I
can mount it and then get some data off or run a scrub.
--
Tom
// www.portfast.co.uk -- internet services and consultancy
// hosting from 1.65 per domain
nnot import 'content': I/O error
[EMAIL PROTECTED]:~# uname -a
SunOS cs3.kw 5.10 Generic_127127-11 sun4v sparc SUNW,Sun-Fire-T200
Thanks
--
Tom
// www.portfast.co.uk -- internet services and consultancy
// hosting from 1.65 per domain
>
> time gdd if=/dev/zero bs=1048576 count=10240
> of=/data/video/x
>
> real 0m13.503s
> user 0m0.016s
> sys 0m8.981s
As someone pointed out, this is a compressed file system :-)
I'll have to get a copy of Bonnie++ or some such to get more accurate numbers
> On Fri, Jun 6, 2008 at 16:23, Tom Buskey
> <[EMAIL PROTECTED]> wrote:
> > I have an AMD 939 MB w/ Nvidea on the motherboard
> and 4 500GB SATA II drives in a RAIDZ.
> ...
> > I get 550 MB/s
> I doubt this number a lot. That's almost 200
> (550/N-1 = 1
>**pci or pci-x. Yes, you might see
> *SOME* loss in speed from a pci interface, but
> let's be honest, there aren't a whole lot of
> users on this list that have the infrastructure to
> use greater than 100MB/sec who are asking this sort
> of question. A PCI bus should have no issues
> pushing t
> (2) You want a 64-bit CPU. So that probably rules
> out your P4 machines,
> unless they were extremely late-model P4s with the
> EM64T features.
> Given that file-serving alone is relatively low-CPU,
> you can get away
> with practically any 64-bit capable CPU made in the
> last 4 years.
A
> Justin,
>
> Thanks for the reply
>
> In the environment I currently work in, the "powers
> that be" are almost
> completely anti unix. Installing the nfs client on
> all machines would take
> a real good sales pitch. None the less I am still
I'm pro unix & I'm against putting NFS on all the P
I've always done a DiskSuite mirror of the boot disk. It's been easy to do
after the install in Solaris. With Linux I had to do it during the install.
OpenSolaris 2008.05 didn't give me an option.
How do I add my 2nd drive to the boot zpool to make it a mirror?
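A minimal sketch of one way to do it, assuming the usual Solaris device
naming (c0t0d0s0 for the existing boot disk, c0t1d0s0 for the new one, both
names assumptions) and that the new disk already carries an SMI label with a
suitably sized slice 0:

```shell
# Sketch: turn the single-disk root pool into a mirror.
zpool attach rpool c0t0d0s0 c0t1d0s0   # attach the 2nd disk to the existing vdev
zpool status rpool                     # watch until the resilver completes
# Make the 2nd disk bootable too (x86 GRUB case):
installgrub /boot/grub/stage1 /boot/grub/stage2 /dev/rdsk/c0t1d0s0
```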
Spencer Shepler wrote:
> On May 21, 2008, at 1:43 PM, Will Murnane wrote:
>
>
>> Okay, all is well. Try the same thing on a Solaris client, though,
>> and it doesn't work:
>> # mount -o vers=4 ds3:/export/local-space/test /mnt/
>> # cd mnt
>> # ls
>> foo
>> # ls foo
>>
>>
>
> This behavio
> > On May 18, 2008, at 14:01, Mario Goebbels wrote:
> > ZFS on Linux on
> > humper would actually be very interesting to many
> of
> > them. I think
> > that's good for Sun. Of course, ZFS on Linux on
>
> Umm, how many Linux shops buy support and/or HW from
> Sun ?
>
> It it's a Linux sho
Are you using the Supermicro in Solaris or OpenSolaris? Which version?
64-bit or 32-bit?
I'm asking because I recently went through a number of SCSI cards that are in
the HCL as supported, but do not have 64 bit drivers. So they only work in 32
bit mode.
Where do you get an 8 port SATA card that works with Solaris for around $100?
I never said I was a typical consumer. After all, I bought a $1600 DSLR.
If you look around photo forums, you'll see an interest in the digital workflow,
which includes long-term storage and archiving. A chunk of these users will
opt for an external RAID box (10%? 20%?). I suspect ZFS will change
> Getting back to 'consumer' use for a moment, though,
> given that something like 90% of consumers entrust
> their PC data to the tender mercies of Windows, and a
> large percentage of those neither back up their data,
> nor use RAID to guard against media failures, nor
> protect it effectively fr
settle down after
that.
Tom Mooney
Dan Pritts wrote:
On Fri, Nov 16, 2007 at 11:31:00AM +0100, Paul Boven wrote:
Thanks for your reply. The SCSI-card in the X4200 is a Sun Single
Channel U320 card that came with the system, but the PCB artwork does
sport a nice 'LSI LOGIC
Say, for example, old custom 32-bit Perl scripts. Can they work with
128-bit ZFS?
If you have disks to experiment on & corrupt (and you will!) try this:
System A mounts the SAN [b]disk[/b] and format w/ UFS
System A umounts [b]disk[/b]
System B mounts [b]disk[/b]
B runs [i]touch x[/i] on [b]disk[/b].
System A mounts [b]disk[/b]
System A and B umount [b]disk[/b]
System B [i]fsck
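The steps above could be sketched roughly as follows. The device path and
mountpoint are placeholder assumptions; the point of the exercise is that UFS
has no cluster awareness, so host A's cached view of the filesystem goes
stale the moment host B writes behind its back.

```shell
# Illustrative only: why two hosts must never mount the same
# non-clustered UFS at once. Device path is an assumption.
DISK=/dev/dsk/c1t0d0s0
newfs /dev/rdsk/c1t0d0s0        # on A: put UFS on the shared SAN disk
mount $DISK /mnt                # on A
umount /mnt                     # on A
mount $DISK /mnt                # on B
touch /mnt/x                    # on B: modify the fs behind A's back
mount $DISK /mnt                # on A again: A's cached metadata is now stale
umount /mnt                     # on A and on B
fsck /dev/rdsk/c1t0d0s0         # on B: expect inconsistencies
```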
We just had an article published on SDN about how different changes to the way
shares are handled have an impact on the boot-up time for large numbers of ZFS
filesystems.
For me, one of the neat things about it was it being a topic at several points
on OpenSolaris discussion boards.
You can vi
I am currently using 6 drives and a 550W power supply, so I'm not pushing the
hardware at this point. I do understand your point. However, if you are
willing to mod the case, there is room for a second power supply above where
the primary p/s mounts. The case modification should be fairly s
Here's a start for a suggested equipment list:
Lian Li case with 17 drive bays (12 3.5" , 5 5.25")
http://www.newegg.com/Product/Product.aspx?Item=N82E1682064
Asus M2N32-WS motherboard has PCI-X and PCI-E slots. I'm using Nevada b64 for
iSCSI targets:
http://www.newegg.com/Product/Produc
> On Wed, May 23, 2007 at 08:03:41AM -0700, Tom Buskey
> wrote:
> >
> > Solaris is 64 bits with support for 32 bits. I've
> been running 64 bit Solaris since Solaris 7 as I
> imagine most Solaris users have. I don't think any
> other major 64 bit OS h
> Sorry about that, the specific processor in question
> is the Pentium D 930 which supports 64 bit computing
> through the Extended Memory 64 Technology. It was my
> initial reaction to say I'd go with 32 bit computing
> because my general experience with 64-bit is Windows,
> Linux, and some Free
I did this on Solaris 10u3. 4x 120GB -> 4x 500GB drives. Replace, resilver;
repeat until all drives are replaced.
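That replace-and-resilver upgrade can be sketched as a loop. Pool and device
names are assumptions; the pool only grows once the last (smallest remaining)
disk has been replaced and resilvered, and on newer ZFS versions you may also
need autoexpand=on.

```shell
# Sketch: grow a pool by swapping each disk in turn for a larger one.
for old in c1t0d0 c1t1d0 c1t2d0 c1t3d0; do
    # physically swap the drive in the same slot first, then:
    zpool replace tank $old
    while zpool status tank | grep -q 'in progress'; do
        sleep 60                 # wait for this resilver to complete
    done
done
zpool list tank                  # capacity reflects the larger disks
```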
On 5/14/07, Alec Muffett <[EMAIL PROTECTED]> wrote:
Hi All,
My mate Chris posed me the following; rather than flail about with
engineering friends trying to get a "definitive-de-
>
> Doug has been doing some performance optimization to
> the sharemgr to allow faster boot up in loading
>
Doug has blogged about his performance numbers here:
http://blogs.sun.com/dougm/entry/recent_performance_improvement_in_zfs
I've been using long SATA cables routed out through the case to a home built
chassis with its own power supply for a year now. Not even eSATA. That part
works well.
Substitute this for USB/Firewire/SCSI/USB thumb drives. It's really the same
problem.
Ok, now you want to deal with a ZFS zpoo
We've got some work going on in the NFS group to alleviate this problem. Doug
McCallum has introduced the sharemgr (see http://blogs.sun.com/dougm) and I'm
about to putback the In-Kernel Sharetab bits (look in http://blogs.sun.com/tdh
- especially http://blogs.sun.com/tdh/entry/in_kernel_shareta
> No 'home user' needs shrink.
> Every professional datacenter needs shrink.
I can think of a scenario. I have an n-disk RAID that I built with n newly
purchased disks of m GB. One dies. I buy a replacement disk, also m GB,
but when I put it in, it's really (m - x) GB. I need to shrink
Sorry, that's dd from /dev/zero to /dev/null.
I think there's an issue with my SATA card.
On 2/7/07, Bart Smaalders <[EMAIL PROTECTED]> wrote:
Tom Buskey wrote:
>> Tom Buskey wrote:
>>> As a followup, the system I'm trying to use this on
>> is a
> Tom Buskey wrote:
> > As a followup, the system I'm trying to use this on
> is a dual PII 400 with 512MB. Real low budget.
>
> Hmm... that's lower than I would have expected.
> Something is
> ikely wrong. These machines do have very limited
> memor
[i]
I got an Addonics eSata card. Sata 3.0. PCI *or* PCI-X. Works right off the bat
w/ 10u3. No firmware update needed. It was $130. But I don't pull out my hair
and I can use it if I upgrade my server for pci-x
[/i]
And I'm finding the throughput isn't there. < 2MB/s in ZFS RAIDZ and worse
wi
That's good to know.
It's a new Addonics 4 port card. Specifically:
ADS3GX4R5-ERAID5/JBOD 4-port ext. SATA II PCI-X
prtconf -v output:
pci1095,7124, instance #0
Driver properties:
name='sata' type=int items=1 dev=none
.
name='compatible' type
As a followup, the system I'm trying to use this on is a dual PII 400 with
512MB. Real low budget.
2 500 GB drives with 2 120 GB in a RAIDZ. The idea is that I can get 2 more
500 GB drives later to get full capacity. I tested going from a 20GB to a
120GB and that worked well.
I'm finding th
>However, I don't think OpenSolaris/Solaris support these unless the
>Addonics eSATA PCI-X adapter supports them. I have not figured that
>one out yet. All I know is I want ZFS.
I'm not using the multiplier, but I am using the 4 port Addonics eSATA PCI-X
card in a PCI slot,
btw - eSATA == SATA w
I've been using the syba 4port card on linux and it works well. I bricked
another one trying to downgrade the bios so it was just disks, no RAID. Ah,
$20 gone.
So I got an Addonics eSata card. Sata 3.0. PCI *or* PCI-X. Works right off
the bat w/ 10u3. No firmware update needed. It was $1
[i]I think the original poster, was thinking that non-enterprise users
would be most interested in only having to *purchase* one drive at a time.
Enterprise users aren't likely to balk at purchasing 6-10 drives at a
time, so for them adding an additional *new* RaidZ to stripe across is
easier.
[/i
[i]Enterprise feature questions), but it's possible now to expand a pool
containing raidz devs-- and this is the more likely case with
enterprise users:
# ls -lh /var/tmp/fakedisk/
total 1229568
-rw--T 1 root root 100M Jan 9 20:22 disk1
-rw--T 1 root root 100M Jan 9 20:22 disk2
-rw--T
[i]* Maximizing the use of different disk sizes[/i]
[i]If such capabilities exist, you could start with a single disk vdev and grow
it to consume a large disk farm with any number of parity drives, all while the
system is fully available.[/i]
Now you're just teasing me ;-)
I want to set up a ZFS server with RAID-Z. Right now I have 3 disks. In 6
months, I want to add a 4th drive and still have everything under RAID-Z
without a backup/wipe/restore scenario. Is this possible?
I've used NetApps in the past (1996 even!) and they do it. I think they're
using RAID4.
Thanks, Neil, for the assistance.
Tom
Neil Perrin wrote On 12/12/06 19:59,:
>Tom Duell wrote On 12/12/06 17:11,:
>
>
>>Group,
>>
>>We are running a benchmark with 4000 users
>>simulating a hospital management system
>>running on Solaris 10 6/06 on USIV+