On Tue, Dec 30, 2008 at 12:18 AM, Brandon High wrote:
> Use a USB enclosure for the new drive, and do:
> zpool replace <pool> bad_disk new_disk
>
> You should be able to export the volume and physically replace the
> disk at that point.
It was late when I wrote that, so let me clarify a few things.
> ...internal SATA connectors. I have a dying drive in the array (hereafter
> "drive N"). Obviously I should replace it. But how?
Use a USB enclosure for the new drive, and do:
zpool replace <pool> bad_disk new_disk
You should be able to export the volume and physically replace the
disk at that point.
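Roughly, with made-up pool and device names (a sketch, not a tested recipe):

  # new disk sitting in the USB enclosure, showing up as c5t0d0
  zpool replace tank c1t3d0 c5t0d0
  zpool status tank       # wait for the resilver to finish
  zpool export tank
  # power down, move the new disk onto drive N's internal SATA port
  zpool import tank       # ZFS finds the disk by its label, not its path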
5. Import the zpool. It should come up as degraded, since one of its
vdevs is missing.
6. Copy your files onto the zpool.
7. Replace the file vdev with the 5th disk.
Like I said, I haven't tried this but it might work. I'd love to hear
if it does.
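For what it's worth, the whole sequence would look something like this
(device names and the 500GB size are invented, and I haven't run it):

  mkfile -n 500g /var/tmp/fake        # sparse file, same size as the real disks
  zpool create -f tank raidz c1t0d0 c1t1d0 c1t2d0 c1t3d0 /var/tmp/fake
  zpool export tank
  rm /var/tmp/fake                    # the pool is now missing one vdev
  zpool import tank                   # should come up DEGRADED
  # ... copy the files onto the pool ...
  zpool replace tank /var/tmp/fake c1t4d0   # swap the 5th disk in for the file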
-B
...be available to the open source
community as well...
-B
...redundant e-mail
attachments, document header pages and common user files.
-B
...number of drives in the 2510, etc)?
-B
me at the moment. The SB750 is too
new to know but may be an improvement. I'd recommend an LSI 1068e
based HBA like the Supermicro AOC-USAS-L8i.
You may want to put an Intel NIC into the AMD system, since support
with other ethernet solutions seems spotty at best.
-B
> ...AMD chip, upgrade the BIOS, then install the new chip. I would not
The BIOS needs to know about the chip. The same thing happened on the
Intel side when the 65nm Core 2 came out (E6xxx and Q6xxx), and again
with the 45nm Core 2 (E8xxx and Q8xxx).
-B
as the quad-core stuff.
Socket 939 has been phased out for 2-3 years now, so it's unlikely new
motherboards will be available.
-B
> I'm starting to think the
> combination doesn't exist.
The AMD 790GX boards are starting to show up:
http://www.newegg.com/Product/Product.aspx?Item=N82E16813128352
Dual 8x PCIe slots, integrated video and 6 AHCI SATA ports.
-B
I'm disappointed that there is no support for power management on the
K8, which is a bit of a shock since Sun's been selling K8 based
systems for a few years now. The cost of an X3 ($125) and AM2+ mobo
($80) is about the same as an Intel chip ($80) and motherboard ($150)
that supports ECC.
...expectation that they'll, well, work, I assume
that the drivers in Solaris should be relatively stable.
If that's not the case, then I'd think Sun would want to address it.
-B
> ...if there are enough VDEVs then ZFS can still
> proceed with writing.
It would have to wait on an fsync() call, since that won't return
until both halves of the mirror have completed. If the cards you're
using have NVRAM, then they could return faster.
-B
...I'm sure the actual rate of incidence is lower since people are more
likely to report an error than success.
One of the Sun guys could probably set the record straight.
-B
If the Areca cards have a BBU or NVRAM they should give you good
performance in any mode.
-B
...provide an increase on writes,
since the system needs to wait for both halves of the mirror to
finish. It could be slightly slower than a single raid5.
-B
On Fri, Jul 25, 2008 at 9:17 AM, David Collier-Brown <[EMAIL PROTECTED]> wrote:
> And do you really have 4-sided raid 1 mirrors, not 4-wide raid-0 stripes???
Or perhaps 4 RAID1 mirrors concatenated?
-B
...Internal Mini Serial Attached SCSI x4 (SFF-8087) to (4) x1 Serial ATA
(controller based) fan-out cable with SFF-8448 sideband signals.
-B
...to modify the case to plug the
drives directly to the motherboard.
http://blog.flowbuzz.com/search/label/NAS
-B
Have you tried exporting the individual drives and using zfs to handle
the mirroring? It might have better performance in your situation.
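i.e. export each disk as its own LUN and let ZFS pair them up, something
like (device names invented):

  zpool create tank mirror c2t0d0 c2t1d0 mirror c2t2d0 c2t3d0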
-B
On Thu, Jul 24, 2008 at 1:28 AM, Steve <[EMAIL PROTECTED]> wrote:
> And interesting of booting from CF, but it seems is possible to boot from the
> zraid and I would go for it!
It's not possible to boot from a raidz volume yet. You can only boot
from a single drive or a mirror.
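Mirroring the root disk is just an attach plus installing the boot blocks
on the second disk; on x86 that's roughly (slice names are placeholders):

  zpool attach rpool c0t0d0s0 c0t1d0s0
  installgrub /boot/grub/stage1 /boot/grub/stage2 /dev/rdsk/c0t1d0s0

(On SPARC you'd use installboot instead of installgrub.)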
>> ...disks to not all fail simultaneously.
>>
> Has anyone ever seen this happen for real? I seriously doubt it will happen
> with new drives.
My new workstation in the office had its (sole) 400GB drive die after
about 2 months. It does happen. Production lots share failure...
I could use a Sil3132 based card instead of the LSI, which would give
me exactly 8 SATA ports and save about $250. I may still go this route
but given the overall cost it's not that big of a deal.
-B
> ...chose if good ECC, but
> the rest?)
2GB or more of ECC should do it. I believe all the AMD CPUs support
ECC, but you should verify this before buying.
-B
> ...UFS.
> Even with UFS, there was a peak every 5 seconds due to fsflush invocation.
>
> However each peak is about ~5ms.
> Our application can not recover from such higher latency.
Is the pool using raidz, raidz2, or mirroring? How many drives are you using?
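If you're not sure, something like this will show it:

  zpool status            # vdev layout (mirror/raidz/raidz2) and disk count
  zpool iostat -v 1       # per-vdev activity while the latency spikes happen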
-B
...of less importance to enterprise users, who are the initial target for ZFS.
Most enterprise users would just attach a new drive tray and add that
as another raid-z to the zpool.
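e.g. something like (made-up device names for a new four-disk tray):

  zpool add tank raidz c3t0d0 c3t1d0 c3t2d0 c3t3d0

Writes then stripe across both raidz vdevs.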
That being said, there is an RFE for expanding the width of a raidz:
http://bugs.opensolaris.org/view_bug.do?bug_id=6718209
> ...will it contain the full
> solaris root?
> How do you manage redundancy (e.g. mirror) for that boot device?
4GB is enough to hold a minimal system install. /var will go to a file
system on the raidz pool.
ZFS mirroring can be used on boot devices for redundancy.
-B
> ...I am not
> inclined to deviate from it 8^)
You might want to look at a 4 or 8 port SATA adapter rather than wait
for the southbridge fixes.
-B
at the usual places like Newegg, etc.
It looks like the LSI SAS3081E-R, but probably at 1/2 the cost.
-B
> ...come up by
> itself...why is that if this encl can assume 0 and the other assume 1 and
> the zfs pool will come up that way?
Are you doing a zpool export / zpool import between taking the enclosures
down and bringing them back up?
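i.e.:

  zpool export tank       # before powering the enclosures down
  # ... power them back up, target IDs may change ...
  zpool import tank       # devices are found by their labels, not their names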
-B
al drives.
-B
keep the cost low.
I don't know of an inexpensive 4 port PCIe card, and PCI will easily
be saturated by one drive, let alone 4. If you don't care, then the
Supermicro 8-port card is a steal.
-B
...has more than that!) I'm not suggesting that you
upgrade, but it could explain things a little.
How much of the memory is in use, and how much of that is used by the ARC cache?
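Both are easy to check on a live system:

  echo ::memstat | mdb -k           # overall memory breakdown
  kstat -p zfs:0:arcstats:size      # current ARC size in bytes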
-B
http://www.newegg.com/Product/Product.aspx?Item=N82E16813128335
-B
-disk-controllers-for-zfs.html
Joe --
What about the LSISAS3081E-R? Does it use the same drivers as the
other LSI controllers?
http://www.lsi.com/storage_home/products_home/host_bus_adapters/sas_hbas/lsisas3081er/index.html
-B
...it at 800MHz. I have an MSI P35 Platinum for my Windows gaming
system and after trying to get my 1066 memory to run stably at speed,
I gave up and now run it at 800. You should try reducing the memory speed
and relaxing the timing to 5-5-5-15 to see if it helps.
-B
If it is bad memory that has somehow passed memtest, swapping the
memory for known good (preferably ECC) memory is one option to
diagnose it.
-B
On Mon, Jun 9, 2008 at 10:44 PM, Robert Thurlow <[EMAIL PROTECTED]> wrote:
> Brandon High wrote:
>> AFAIK, you're doing the best that you can while playing in the
>> constraints of ZFS. If you want to use nfs v3 with your clients,
>> you'll need to use UFS as t
...the caches to see if that's the problem. If
you just want to take a shot in the dark and if this is the only
filesystem in your zpool, either reduce the size of the zfs ARC cache,
or reduce the size of the UFS cache.
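Capping the ARC is a one-line /etc/system change; the value below is only
an example, and it takes effect at the next boot:

  set zfs:zfs_arc_max = 0x40000000    # limit the ARC to 1GB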
-B
...builds that were
meant to address the performance problems that can be caused by the
ARC cache. Limiting the cache size can also help, but shouldn't be
needed in recent builds. I'm not sure if the write throttling has been
put back to Solaris 10u5 or if it's scheduled for 10u6 though.
...several years ago and
things may have improved. (I think it was Legato's product running
under Linux, but I'm not certain.)
I can't think of any reason that something like this wouldn't work
with ZFS, though the ACLs may not get saved.
-B
...The point I didn't really make is that Ghost and Drive Snapshot
can create images of known filesystems (NTFS, FAT, ext2/3, reiserfs)
that aren't raw images. zfs send is probably closest to that, except
both of the imaging tools allow you to mount images and browse them.
-B
...stream, and the shell redirects
it to the file "root" in the fs at /mnt. Provided your shell has large
file support, it should work just fine.
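i.e. the usual pattern is something along these lines (dataset and snapshot
names made up):

  zfs snapshot tank/fs@backup
  zfs send tank/fs@backup > /mnt/root          # stream saved as an ordinary file
  zfs receive tank/restored < /mnt/root        # later, turn it back into a filesystem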
-B
...Drive Snapshot to image Windows and Linux systems. It
might work for Opensolaris as well. It would create a block level
backup, and the restore might not work on a system which isn't
identical. http://www.drivesnapshot.de/en/
-B
> the GUI installer.)
There was some discussion about it recently; I think the reason is
that the GUI for SXDE is not open sourced so it was more
difficult/political to add. The 2008.05 installer should be able to do
it when they sync up to b90 or beyond.
-B
...6 SATA drives.
It would depend on the case.
-B
On Mon, Jun 2, 2008 at 2:17 PM, Scott L. Burson <[EMAIL PROTECTED]> wrote:
> Would still like advice on the 1420SA.
It's been mentioned before. The 1420SA does not work.
-B
...I have a
Thermaltake Matrix case that came with a PSU and it's been reliable
for 2 years. I believe the case and PSU were about $100.
For my most recent build I looked at Silent PC Review and went with a
Corsair 520W PSU based on their testing.
-B
...Likewise, the nVidia ASUS M2N-VM (7050) is $70.
I believe both have only 4 SATA ports, but that should be ok for your
build.
-B
> ...6645543), and 2 from a
> SiI3132 chip (driver: si3124).
I had hoped to get a system with on board ports, but hadn't found one
with more than 6. Thanks for the pointer!
-B
> ...in a consumer PC, the drive's
The same feature can be enabled on WD's consumer SATA drives. Google
for wdtler.zip.
-B
could possibly fall into my car's trunk as I
leave work one day, but that's not something I'd consider either.
-B
The least expensive Socket 940 board with a PCI-X slot is the TYAN
S2881UG2NR at $419.
Call it $960 (with a single 285 cpu) vs. $399 for the AM2 pieces.
I'd check prices on a single socket 939 Opteron with a suitable
motherboard, but neither appear to be available anymore.
-B
...IT25672AA667
8 Western Digital Caviar GP WD10EACS 1TB 5400 to 7200 RPM SATA
3.0Gb/s Hard Drive
Subtotal: $2,386.88
I may get another drive for the OS as well, or boot off of a
CF-card/IDE adapter like this one:
http://www.newegg.com/Product/Product.aspx?Item=N82E16812186038
-B
> ...all s10 releases use the new stream format.
>
> More details (and instructions on how to resurrect any pre build 36 streams)
> can be found here:
>
> http://opensolaris.org/os/community/on/flag-days/pages/2008042301
-B
There's a blurb here: http://news.bbc.co.uk/2/hi/technology/6376021.stm
Full results here: http://research.google.com/archive/disk_failures.pdf
-B
...which is a PCI-X based 4-port.
-B
I think I've seen reports of similar problems on the zfs list, but I
don't know if there was any resolution.
One suggestion was that a SATA drive could be attempting to correct a
read error and that was causing ZFS to block on i/o.
-B
...or Hadoop, both
of which are only supported on Linux.
I remember there being an application in the Windows 95/98 timeframe
that did what you want, but I have no idea what it was called, how well it
worked, or if it still exists.
-B
> c1t1d0 and c1t2d0.
Using a separate boot volume with a swap slice on it might be a good
idea. You'll be able to upgrade or reinstall your OS without touching
the zpool.
-B
On Wed, Apr 16, 2008 at 3:19 PM, Richard Elling <[EMAIL PROTECTED]> wrote:
> Brandon High wrote:
> > The stripe size will be across all vdevs that have space. For each
> > stripe written, more data will land on the empty vdev. Once the
> > previously existing vdevs fil
ahead of time. Running a scrub will
verify the blocks that are actually in use and works with the i/o
scheduler, so should have a lower impact on performance.
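i.e.:

  zpool scrub tank
  zpool status -v tank     # shows scrub progress and any errors it turned up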
-B
On Tue, Apr 15, 2008 at 12:12 PM, Bob Friesenhahn
<[EMAIL PROTECTED]> wrote:
> On Tue, 15 Apr 2008, Brandon High wrote:
> > I think RAID-Z is different, since the stripe needs to spread across
> > all devices for protection. I'm not sure how it's done.
...the stripe needs to spread across
all devices for protection. I'm not sure how it's done.
> 6] Still I don't see how each block becoming its own stripe unless there
> is byte level striping with each byte on a different disk block which
> would be very wasteful.
See above.
The other thought that I had if ZFS would have worked for him, but it
sounds like he's a Windows guy.
... and to threadjack, has there been any talk of a Windows ZFS driver?
-B
400 MB written to the existing
members and the rest written to the new device.
I did a quick search for references and couldn't find any, so take this
with a grain of salt.
-B
a fork of the IET code to work with SCST. The SCST
project *claims* their code is better. I haven't used either, and it
may very well be a better solution, but I'd recommend testing both to
see.
-B
...in the ZFS stack,
or from a cron job that reads the config doesn't matter, but having
the configuration tied to the filesystem would be nice. You would
inherit a snapshot schedule and retention policy, just like other
filesystem properties.
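A rough sketch of the cron-job half, with a made-up user property carrying
the policy (untested):

  # tag the filesystems you want snapshotted; children inherit the property
  zfs set com.example:autosnap=on tank/home

  # cron job: snapshot everything that has the property turned on
  for fs in `zfs list -H -o name -t filesystem`; do
      [ "`zfs get -H -o value com.example:autosnap $fs`" = "on" ] && \
          zfs snapshot $fs@auto-`date +%Y%m%d-%H%M`
  done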
-B
...There may be micro Linux images
that fit on a USB key and allow this.
-B
> ...much activity on the ZFS project
> at macosforge.com so I'm guessing support for v9 isn't right around the
> corner.
I'm not sure if it would work, but did you try to do zfs send / zfs
recv? If it's just sending the filesystem data, you may be able to get
around the zpool
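Something like this, with invented pool and snapshot names (pipe through
ssh if the pools live on different machines):

  zfs snapshot oldpool/fs@migrate
  zfs send oldpool/fs@migrate | zfs receive newpool/fs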
...the problem could easily be that the SiI3132 chip
is causing the hiccup on a larger payload, not the RS690 PCIe
controller.
Of course, without more detailed specs on either component this is pure
conjecture but it seems to match the behavior you observed.
-B
...for the PCIe max
payload size. The default value is 4096 bytes.
This doesn't explain why increasing max_payload_size over 512 causes a
drop in throughput, but at least
you can safely run the card with a payload greater than 128.
-B
On Thu, Mar 13, 2008 at 1:50 AM, Marc Bevand <[EMAIL PROTECTED]> wrote:
> PCI-X card...). The rest is also dirty cheap: $65 Asus M2A-VM motherboard,
> $60
> dual-core Athlon 64 X2 4000+, with 1GB of DDR2 800, and a 400W PSU.
Apologies for the threadjack (um, again) but did you know that the
RS690...
On Mon, Mar 17, 2008 at 2:09 PM, Tim <[EMAIL PROTECTED]> wrote:
> On 3/17/08, Brandon High <[EMAIL PROTECTED]> wrote:
> > easier to use an external disk box like the CFI 8-drive eSATA tower
> > than find a reasonable server case that can hold that many drives.
On Thu, Mar 13, 2008 at 1:50 AM, Marc Bevand <[EMAIL PROTECTED]> wrote:
> integrated AHCI controller (SB600 chipset), 2 disks on a 2-port $20 PCI-E 1x
> SiI3132 controller, and the 7th disk on a $65 4-port PCI-X SiI3124 controller
Do you have access to a SiI3726 port multiplier? I'd like to see...