Holy crap! That sounds cool. Firmware-based VPN connectivity!
At Intel we're getting better too I suppose.
Anyway... I don't know where you're at in the company but you should rattle
some cages about my idea :)
W. Wayne Liauh wrote:
> As to cases, our experience is: unless you have good air-conditioning or
> a means to nicely enclose your machine (like the BlackBox :-) ), get a box
> as big as your space will allow. We had enough bad experiences with mini
> cases, especially those Shuttle-type
> "mp" == Mattias Pantzare <[EMAIL PROTECTED]> writes:
>> This is a big one: ZFS can continue writing to an unavailable
>> pool. It doesn't always generate errors (I've seen it copy
>> over 100MB before erroring), and if not spotted, this *will*
>> cause data loss after you re
I would love to go back to using Shuttles.
Actually, my ideal setup would be:
Shuttle XPC w/ 2x PCI-e x8 or x16 lanes
2x PCI-e eSATA cards (each with 4 eSATA port-multiplier ports)
Then I could chain up to 8 enclosures off a single small, nearly silent host
machine.
8 enclosures x 5 drives = 40 drives.
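For reference, a rough sketch of how a 40-drive pool like that might be laid
out, one raidz vdev per 5-drive enclosure (pool and device names are
hypothetical):
    # start with two enclosures; each 5-drive enclosure becomes one raidz vdev
    zpool create tank \
        raidz c2t0d0 c2t1d0 c2t2d0 c2t3d0 c2t4d0 \
        raidz c3t0d0 c3t1d0 c3t2d0 c3t3d0 c3t4d0
    # grow the pool one enclosure at a time as they are attached
    zpool add tank raidz c4t0d0 c4t1d0 c4t2d0 c4t3d0 c4t4d0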
> I have built mine over the last few days, and it seems to
> be running fine right now.
>
> Originally I wanted Solaris 10, but switched to using
> SXCE (Nevada build 94, the latest right now) because
> I wanted the new CIFS support and some additional ZFS
> features.
>
> Here's my setup. These were
On Mon, Jul 28, 2008 at 04:13:54PM -0700, Steve wrote:
> From the information gathered, it seems that the better choice is the ASUS
> M2A-VM: tested "happily", cheap enough (47€), not bad performance, 4 SATA, GB
> Ethernet, DVI, FireWire, etc. The only concern was a possible DMA bug of the
> south bri
I have built mine over the last few days, and it seems to be running fine right now.
Originally I wanted Solaris 10, but switched to using SXCE (Nevada build 94,
the latest right now) because I wanted the new CIFS support and some additional
ZFS features.
Here's my setup. These were my goals:
- Quie
I'd like to extend my ZFS root pool by adding the old swap and root slice
left over from the previous LU BE.
Are there any known issues with concatenating slices from the same drive?
Cheers,
Ian.
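A minimal sketch of the attempt, assuming the leftover slices are c0t0d0s1
(old swap) and c0t0d0s3 (old root) - both names hypothetical. Note that many
releases restrict a root pool to a single top-level vdev or a mirror, so
zpool add may simply refuse this:
    # check the current root pool layout first
    zpool status rpool
    # try to add the old slices as extra vdevs (may be rejected on root pools)
    zpool add rpool c0t0d0s1 c0t0d0s3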
Mario Goebbels wrote:
>> We already have memory scrubbers which check memory. Actually,
>> we've had these for about 10 years, but it only works for ECC
>> memory... if you have only parity memory, then you can't fix anything
> at the hardware level, and the best you can hope for is that FMA will do
From the information gathered, it seems that the better choice is the ASUS M2A-VM:
tested "happily", cheap enough (47€), not bad performance, 4 SATA, GB Ethernet,
DVI, FireWire, etc. The only concern was a possible DMA bug of the south bridge,
but it seems not so important. (!)
Now the options will
> We already have memory scrubbers which check memory. Actually,
> we've had these for about 10 years, but it only works for ECC
> memory... if you have only parity memory, then you can't fix anything
> at the hardware level, and the best you can hope for is that FMA will do
> the right thing.
In Sol
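For what it's worth, a hedged example of checking what FMA has diagnosed on a
box with ECC memory (standard Solaris fault-management commands; output varies
by release):
    # list faults FMA has diagnosed; retired pages/DIMMs show up here
    fmadm faulty
    # per-module statistics, including the memory retire agent
    fmstat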
> mainboard is:
>
> KFN4-DRE
> more info you can find here:
> http://www.asus.com/products.aspx?l1=9&l2=39&l3=174&l4=0&model=1844&modelmenu=2
>
> cpu:
> 2x AMD Opteron 2350, 2.0GHz, HT, 4MB, SF
You'll be fine with that. Just had to make sure.
Regards,
-mg
On Mon, 28 Jul 2008, Richard Elling wrote:
> It is not clear to me where ARC validation occurs. Perhaps someone
> who deals with the ARC code could shed some light.
More than likely, ARC data is not stored using original filesystem
blocks so the existing filesystem block checksums are not usefu
Bob Friesenhahn wrote:
> On Mon, 28 Jul 2008, Richard Elling wrote:
>>
>> But ZFS can do better. I filed CR6674679 which basically says
>> that if redundant copies of data have the same, wrong checksum,
>> then ZFS should issue an e-report to that effect. This will allow
>> you to move suspicion
mainboard is:
KFN4-DRE
more info you can find here:
http://www.asus.com/products.aspx?l1=9&l2=39&l3=174&l4=0&model=1844&modelmenu=2
cpu:
2x AMD Opteron 2350, 2.0GHz, HT, 4MB, SF
The memory was cheap non-ECC stuff; I replaced it with Kingston ECC memory (KVR667D2D8P5/2G).
In the meantime we have 4x 500GB
On Mon, 28 Jul 2008, Richard Elling wrote:
>
> But ZFS can do better. I filed CR6674679 which basically says
> that if redundant copies of data have the same, wrong checksum,
> then ZFS should issue an e-report to that effect. This will allow
> you to move suspicion away from the disks as a root
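A hedged example of inspecting the underlying ereports such a change would
build on (standard FMA tooling; ZFS checksum ereports are logged as
ereport.fs.zfs.checksum):
    # dump the FMA error-report log verbosely and page through it
    fmdump -eV | less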
Charles Emery wrote:
> New server build with Solaris-10 u5/08,
>
Can you try on a later release? The enhanced FMA for disks did not
make the Solaris 10 5/08 release.
http://www.opensolaris.org/os/community/on/flag-days/pages/2007080901/
-- richard
> on a SunFire t5220, and this is our firs
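A quick, hedged way to confirm what a box is actually running before
retesting:
    # prints the release string, e.g. the Solaris 10 update or SXCE build
    cat /etc/release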
Heh, sounds like there are a few problems with that tool then. I guess that's
one of the benefits of me being so new to Solaris. I'm still learning all the
command line tools so I'm playing with the graphical stuff as much as possible.
:)
Regarding the delay, I plan to have a go tomorrow an
Bob Friesenhahn wrote:
> On Mon, 28 Jul 2008, BG wrote:
>
>
>> indeed that's one of the nice things about ZFS: it's picky about data and
>> alerts you immediately. Before, some files became corrupt and one was
>> wondering what happened and how this was possible, since everything
>> seemed fine for mont
New server build with Solaris-10 u5/08,
on a SunFire t5220, and this is our first rollout of ZFS and Zpools.
Have 8 disks, boot disk is hardware mirrored (c1t0d0 + c1t1d0)
Created Zpool my_pool as RaidZ using 5 disks + 1 spare:
c1t2d0, c1t3d0, c1t4d0, c1t5d0, c1t6d0, and spare c1t7d0
I am work
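For reference, a sketch of the zpool create invocation that would give the
layout described above (device names taken from the post; the exact command
actually used may have differed):
    zpool create my_pool raidz c1t2d0 c1t3d0 c1t4d0 c1t5d0 c1t6d0 spare c1t7d0
    zpool status my_pool    # verify the raidz vdev and the hot spare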
snv_91. I downloaded snv_94 today so I'll be testing with that tomorrow.
> Date: Mon, 28 Jul 2008 09:58:43 -0700
> From: [EMAIL PROTECTED]
> Subject: Re: [zfs-discuss] Supermicro AOC-SAT2-MV8 hang when drive removed
> To: [EMAIL PROTECTED]
>
> Which OS and revision?
> -- richard
>
> Ross wrote:
"File Browser" is the name of the program that Solaris opens when you open
"Computer" on the desktop. It's the default graphical file manager.
It does eventually stop copying with an error, but it takes a good long while
for ZFS to throw up that error, and even when it does, the pool doesn't
On Mon, 28 Jul 2008, Ross wrote:
>
> TEST1: Opened File Browser, copied the test data to the pool.
> Half way through the copy I pulled the drive. THE COPY COMPLETED
> WITHOUT ERROR. Zpool list reports the pool as online, however zpool
> status hung as expected.
Are you sure that this refere
> 4. While reading an offline disk causes errors, writing does not!
>*** CAUSES DATA LOSS ***
>
> This is a big one: ZFS can continue writing to an unavailable pool. It
> doesn't always generate errors (I've seen it copy over 100MB
> before erroring), and if not spotted, this *will* cause da
Ok, after doing a lot more testing of this I've found it's not the Supermicro
controller causing problems. It's purely ZFS, and it causes some major
problems! I've even found one scenario that appears to cause huge data loss
without any warning from ZFS - up to 30,000 files and 100MB of data m
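A hedged sketch of checking whether writes really reached stable storage
after a drive is pulled (pool name hypothetical):
    # flush outstanding writes, then ask ZFS which pools are unhealthy
    sync
    zpool status -x
    # once the device is back, verify the on-disk data end to end
    zpool scrub tank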
On Mon, 28 Jul 2008, Tharindu Rukshan Bamunuarachchi wrote:
>
> I have tried your pdf but did not get good latency numbers even after array
> tuning...
Right. And since I observed only slightly lower performance
from a mirrored pair of USB drives, it seems that your requirement is not
chal
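For measuring latency at the device level, a hedged example with standard
Solaris iostat (asvc_t is the average service time in milliseconds):
    # extended statistics, logical device names, one-second samples
    iostat -xn 1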
On Mon, 28 Jul 2008, BG wrote:
> indeed that's one of the nice things about ZFS: it's picky about data and
> alerts you immediately. Before, some files became corrupt and one was
> wondering what happened and how this was possible, since everything
> seemed fine for months :)
Unfortunately, ZFS does not
Marc,
Thanks - you were right - I had two identical drives and I mixed them
up. It's going through the resilver process now... I expect it will
run all night.
Breandan
On Jul 27, 2008, at 11:20 PM, Marc Bevand wrote:
> It looks like you *think* you are trying to add the new drive, when
>
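A hedged example of watching the resilver from the command line (pool name
hypothetical):
    # shows "resilver in progress", percent complete and an estimated time to go
    zpool status -v tank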
Ron Warner II wrote:
> I am trying to find any statistics on the amount of time it takes to do an upgrade
> from one version of ZFS to another. I recently updated my system and my zpool is
> showing that I need to upgrade it. I have a large pool, around 3 TB divided
> into 3 x 1 TB LUNs in a raidz configu
Dear All,
I will try to post the DTool source code ASAP.
DTool depends on our patented middleware; I need one or two days to
clarify :-P
Very sorry.
Bob,
I have tried your PDF but did not get good latency numbers even after
array tuning...
cheers
tharindu
Bob Friesenhahn wrote:
On Sat
I am trying to find any statistics on the amount of time it takes to do an upgrade from
one version of ZFS to another. I recently updated my system and my zpool is
showing that I need to upgrade it. I have a large pool, around 3 TB divided
into 3 x 1 TB LUNs in a raidz configuration. I want to do the upgr
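For what it's worth, a hedged sketch of the upgrade itself. It only updates
pool metadata (the on-disk version number), not the data, so it normally
completes in seconds regardless of pool size (pool name hypothetical):
    zpool upgrade          # list pools not running the latest on-disk version
    zpool upgrade -v       # show the versions this release supports
    zpool upgrade my_pool  # one-way: older releases can no longer import the pool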
> Knowing this, I will never put non-ECC memory in my boxes again.
What's your mainboard and CPU? I've looked up the thread on the forum
and there's no hardware information. Don't be fooled just because the
RAM's ECC. The mainboard (and the CPU, in the case of AMD) have to support that.
There are two fa
Indeed, that's one of the nice things about ZFS: it's picky about data and alerts you
immediately. Before, some files became corrupt and one was wondering what happened
and how this was possible, since everything seemed fine for months :)
The more I use Solaris, the more I love it :)
Hi,
a
zfs create -V 1M pool/foo
dd if=/dev/random of=/dev/zvol/rdsk/pool/foo bs=1k count=1k
(using Nevada b94) yields
zfs get all pool/foo
pool/foo  used        1,09M  -
pool/foo  referenced  1,09M  -
pool/foo  volsize     1M     -
poo
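If anyone wants to dig into where the extra ~90K goes, a hedged starting
point (standard ZFS properties; metadata counts against used but not volsize):
    zfs get volblocksize,refreservation,copies pool/foo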
Trevor Watson wrote:
> I have had the same problem too, but managed to work around it by
> setting the mountpoint to none before performing the ZFS send. But that
> only works on file-systems you can quiesce.
Yeah, and / is always going to be a bit of a problem ;-)
> How about making a clone o
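A minimal sketch of the workaround described above, with hypothetical dataset
and host names:
    # set the mountpoint to none before the send, as described
    zfs set mountpoint=none rpool/ROOT/be1
    zfs snapshot rpool/ROOT/be1@backup
    zfs send rpool/ROOT/be1@backup | ssh backuphost zfs receive backuppool/be1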
Heh, yup, memory errors are among the worst to diagnose. Glad you got to the
bottom of it, and it's good to see ZFS again catching faults that otherwise
seem to be missed.
So we finally got around the problem: after replacing almost everything, it
seems that the memory was the devil. I pulled it out and replaced it with ECC
memory, and now everything has been working fine for 14 days already.
Knowing this, I will never put non-ECC memory in my boxes again.
Thanks for all the