On 17 January 2008, Bill Moloney sent me these 0.7K bytes:
> Thanks Marion and richard,
> but I've run these tests with much larger data sets
> and have never had this kind of problem when no
> cache device was involved
>
> In fact, if I remove the SSD cache device from my
> pool and run the te
Definitely a hardware problem (possibly compounded by a bug). Some key phrases
and routines:
ATA UDMA data parity error
This one actually looks like a misnomer. At least, I'd normally expect "data
parity error" not to crash the system! (It should result in a retry or EIO.)
PCI(-X) Expre
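When chasing this kind of fault on Solaris, the FMA telemetry is usually the
first place to look; a minimal example (output and fault classes will vary
by system):
# fmdump -eV | less    # raw error reports, verbose, including any parity errors
# fmadm faulty         # anything FMA has already diagnosed as faulted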
> Pardon my ignorance, but is ZFS with compression safe to use in a
> production environment?
I'd say, as safe as ZFS in general. ZFS has been well-tested by Sun, but it's
not as mature as UFS, say. There is not yet a fsck equivalent for ZFS, so if a
bug results in damage to your ZFS data pool
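For what it's worth, compression is just a per-dataset property, and a scrub is
the closest thing to an fsck-style integrity check today. A minimal sketch, with
a hypothetical pool/dataset name:
# zfs set compression=on tank/data   # enable compression (lzjb by default)
# zpool scrub tank                   # re-verify checksums on every block in the pool
# zpool status -v tank               # scrub progress and any errors it turned up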
Hi guys,
a new article is available explaining how enterprise-like upgrades are
integrated with Nexenta Core Platform, starting from RC2, using ZFS
capabilities and Debian APT:
http://www.nexenta.org/os/TransactionalZFSUpgrades
What is NexentaCP?
NexentaCP is a minimal (core) foundation that ca
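The underlying idea (sketched here with hypothetical dataset names; the article
describes the actual NexentaCP integration) is to checkpoint the root filesystem
before APT touches it, so a bad upgrade can be rolled back:
# zfs snapshot syspool/rootfs@pre-upgrade   # checkpoint root before upgrading
# apt-get update && apt-get dist-upgrade    # run the Debian-style upgrade
# zfs rollback syspool/rootfs@pre-upgrade   # only if the upgrade misbehaves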
On Thu, 2008-01-17 at 09:29 -0800, Richard Elling wrote:
> You don't say which version of ZFS you are running, but what you
> want is the -R option for zfs send. See also the example of send
> usage in the zfs(1m) man page.
Sorry, I'm running SXCE nv75. I can't see any mention of send -R in the
m
Hey Y'all,
I've posted the program (SnapBack) my company developed internally for
backing up production MySQL servers using ZFS snapshots:
http://blogs.digitar.com/jjww/?itemid=56
Hopefully, it'll save other folks some time. We use it a lot for
standing up new MySQL slaves as well.
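The ZFS half of seeding a new slave from a snapshot looks roughly like this
(dataset, snapshot and host names are hypothetical; SnapBack itself handles the
MySQL locking and consistency details):
# zfs snapshot tank/mysql@slave-seed
# zfs send tank/mysql@slave-seed | ssh newslave zfs receive tank/mysql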
Best Regards,
Kent Watsen wrote:
>
> Thanks Richard and Al,
>
> I'll refrain from expressing how disturbing this is, as I'm trying to
> help the Internet be kid-safe ;)
>
> As for the PSU, I'd be very surprised if that were it, as it is a
> 3+1 redundant PSU that came with this system, built by a reputable
Thanks Richard and Al,
I'll refrain from expressing how disturbing this is, as I'm trying to help
the Internet be kid-safe ;)
As for the PSU, I'd be very surprised if that were it, as it is a
3+1 redundant PSU that came with this system, built by a reputable
integrator. Also, the PSU is
On Thu, 17 Jan 2008, Richard Elling wrote:
> Looks like flaky or broken hardware to me. It could be a
> power supply issue; those tend to rear their ugly heads when
> workloads get heavy and they are usually the easiest to
> replace.
+1 PSU or memory (run memtest86)
> -- richard
>
> Kent Wats
Thanks Marion and richard,
but I've run these tests with much larger data sets
and have never had this kind of problem when no
cache device was involved
In fact, if I remove the SSD cache device from my
pool and run the tests, they seem to run with no issues
(except for some reduced performance as
Thanks. I'll give that a shot. I neglected to notice what forum it was in since
the question morphed into "when will Solaris support port multipliers?"
Thanks again.
Right. I can confirm that using port-multiplier-capable cards works
on the Mac. I've got an 8-disk zpool with only two SATA ports. Works
like a charm.
bri
On Jan 17, 2008, at 12:22 PM, Richard Elling wrote:
> Patrick O'Sullivan wrote:
>> At the risk of groveling, I'd like to add one more
Patrick O'Sullivan wrote:
> At the risk of groveling, I'd like to add one more to the set of people
> wishing for this to be completed. Any hint on a timeframe? I see reference to
> this bug back in 2006
> (http://bugs.opensolaris.org/bugdatabase/view_bug.do?bug_id=6409327), so I
> was wonderin
The solution here was to upgrade to snv_78. By "upgrade" I mean
re-jumpstart the system.
I tested snv_67 via net-boot but the pool panicked just as below. I also
attempted using zfs_recover without success.
I then tested snv_78 via net-boot, used both "aok=1" and
"zfs:zfs_recover=1" and was a
Looks like flaky or broken hardware to me. It could be a
power supply issue; those tend to rear their ugly heads when
workloads get heavy and they are usually the easiest to
replace.
-- richard
Kent Watsen wrote:
>
>
> Below I create zpools isolating one card at a time
> - when just card#1 - it
At the risk of groveling, I'd like to add one more to the set of people wishing
for this to be completed. Any hint on a timeframe? I see reference to this bug
back in 2006
(http://bugs.opensolaris.org/bugdatabase/view_bug.do?bug_id=6409327), so I was
wondering if there was any progress.
Thanks
Marion Hakanson wrote:
> [EMAIL PROTECTED] said:
>
>> I have a set of threads each doing random reads to about 25% of its own,
>> previously written, large file ... a test run will read in about 20GB on a
>> server with 2GB of RAM
>> . . .
>> after several successful runs of my test applicatio
[EMAIL PROTECTED] said:
> I have a set of threads each doing random reads to about 25% of its own,
> previously written, large file ... a test run will read in about 20GB on a
> server with 2GB of RAM
> . . .
> after several successful runs of my test application, some run of my test
> will be ru
You don't say which version of ZFS you are running, but what you
want is the -R option for zfs send. See also the example of send
usage in the zfs(1m) man page.
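As an illustration of that option (pool, snapshot and host names are
hypothetical, and this assumes a build new enough to have send -R and a pool
named space already existing on the target host):
# zfs snapshot -r space@migrate
# zfs send -R space@migrate | ssh newhost zfs receive -d -F space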
-- richard
James Andrewartha wrote:
> Hi,
>
> I have a zfs filesystem that I'd like to move to another host. It's part
> of a pool call
Hello Agile,
Comments in-between
Thursday, January 17, 2008, 2:20:42 AM, you wrote:
AA> Hi - I'm new to ZFS but not to Solaris.
AA> Is there a searchable interface to the zfs-discuss mail archives?
http://opensolaris.org/os/discussions/
and look for the zfs-discuss list.
AA> We have a Windows
Hi,
I have a zfs filesystem that I'd like to move to another host. It's part
of a pool called space, which is mounted at /space and has several child
filesystems. The first hurdle I came across was that zfs send only works
on snapshots, so I created one:
# zfs snapshot -r [EMAIL PROTECTED]
# zfs li
I'm using a FC flash drive as a cache device to one of my pools:
zpool add pool-name cache device-name
and I'm running random IO tests to assess performance on a
snv-78 x86 system
I have a set of threads each doing random reads to about 25% of
its own, previously written, large file
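If it helps anyone reproduce this, the cache device and its activity can be
watched while the test runs, e.g.:
# zpool status pool-name        # confirm the cache vdev is present and ONLINE
# zpool iostat -v pool-name 5   # per-vdev I/O, including the cache device, every 5s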
Below I create zpools isolating one card at a time
- when just card #1 - it works
- when just card #2 - it fails
- when just card #3 - it works
And then again using the two cards that seem to work:
- when cards #1 and #3 - it fails
So, at first I thought I narrowed it down to a card, but my
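For anyone following along, isolating a card just means building a scratch pool
from only that controller's disks, along these lines (device names hypothetical):
# zpool create -f testpool raidz c4t0d0 c4t1d0 c4t2d0 c4t3d0
# zpool destroy testpool    # tear it down before testing the next card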
On a lark, I decided to create a new pool not including any devices
connected to card #3 (i.e. "c5")
It crashes again, but this time with a slightly different dump (see below)
- actually, there are two dumps below: the first is from the xVM
kernel and the second is not
Any ideas?
Kent
[
Hey all,
I'm not sure if this is a ZFS bug or a hardware issue I'm having - any
pointers would be great!
Following contents include:
- high-level info about my system
- my first thought to debugging this
- stack trace
- format output
- zpool status output
- dmesg output
High-Lev
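For anyone gathering the same data on their own box, the usual commands are
along these lines:
# format < /dev/null    # list disks without entering the interactive menu
# zpool status -v       # pool layout plus read/write/checksum error counters
# dmesg | tail -100     # recent kernel messages, including driver complaints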
Agile Aspect wrote:
> Hi - I'm new to ZFS but not to Solaris.
>
> Is there a searchable interface to the zfs-discuss mail archives?
Use google against the mailman archives:
http://mail.opensolaris.org/pipermail/zfs-discuss/
> Pardon my ignorance, but is ZFS with compression safe to use in a
>