Any chance the disks are being powered down, and you are waiting for
them to power back up?
Nathan. :)
Neal Pollack wrote:
> I'm running Nevada build 81 on x86 on an Ultra 40.
> # uname -a
> SunOS zbit 5.11 snv_81 i86pc i386 i86pc
> Memory size: 8191 Megabytes
>
> I sta
I see a business opportunity for someone...
Backups for the masses... of Unix / VMS and the other OSes out there.
Any takers? :)
Nathan.
Jonathan Loran wrote:
>
>
> eric kustarz wrote:
>> On Jan 14, 2008, at 11:08 AM, Tim Cook wrote:
>>
>>
>>> www.mozy.c
format -e
then from there, re-label using SMI label, versus EFI.
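If format -e is fussy about the old label, another approach is to zero the label area directly with dd. A sketch only: the device path is a placeholder, and an ordinary file stands in for the raw disk here so the commands are safe to try:

```shell
# Sketch: zero the label area at the start of a disk.
# /dev/rdsk/c1t0d0p0 is a placeholder device name; a plain file
# stands in for it here so this is harmless to run.
DISK=/tmp/fake_disk                     # in real use: DISK=/dev/rdsk/c1t0d0p0
dd if=/dev/urandom of="$DISK" bs=512 count=64 2>/dev/null  # simulate stale label data
dd if=/dev/zero of="$DISK" bs=512 count=32 conv=notrunc 2>/dev/null
# Note: an EFI label keeps a backup copy at the END of the disk, so if the
# old label still shows up, zero the last few sectors as well.
```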
Cheers
Al Slater wrote:
> -BEGIN PGP SIGNED MESSAGE-
> Hash: SHA1
>
> Hi,
>
> What is the quickest way of clearing the label information on a disk
> that has been previously used in a zpool?
>
> regards
>
> - --
> Al Sl
I was interested in that one till I read:
One 240-pin DDR2 SDRAM Dual Inline Memory Module (DIMM) socket
Support for DDR2 667 MHz, DDR2 533 MHz and DDR2 400 MHz DIMMs (DDR 667
MHz validated to run at 533 MHz only)
Support for up to 1 GB of system memory
Boo!!!
:)
Nathan.
Vincent Fox wrote:
ourse, both of these would require non-sparse file creation for the
DB etc, but would it be plausible?
For very read-intensive and position-sensitive applications, I guess
this sort of capability might make a difference?
Just some stabs in the dark...
Cheers!
Nathan.
Louwtjie Burger wrote:
>
se guys and the way they treat the ufs buffers versus the
zfs buffers?
Cheers!
Nathan.
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss
and failover testing with ZFS and VCS.
Furthermore, if anyone has implemented ZFS on SRDF, I would also be
interested in hearing about those implementation experiences.
Any and all input would be most appreciated.
Kind Regards,
Nathan Dietsch
occasion...
Maybe it's not just me... Unfortunately, I'm still running old nv and
xen bits, so I can't speak to the 'current' situation...
Cheers.
Nathan.
Martin wrote:
> Hello
>
> I've got Solaris Express Community Edition build 75 (75a) installed on an
You have not mentioned if you have swapped the 3114 based HBA itself...?
Have you tried a different HBA? :)
Nathan.
Ed Saipetch wrote:
> Hello,
>
> I'm experiencing major checksum errors when using a Syba Silicon Image
> 3114-based PCI SATA controller w/ non-RAID firmware
Hey all -
Time for my silly question of the day, and before I bust out vi and
dtrace...
Is there a simple, existing way I can observe the read / write / IOPS on
a per-zvol basis?
If not, is there interest in having one?
Cheers!
Nathan.
step. :)
Cheers.
Nathan.
Eric Schrock wrote:
> On Fri, Oct 05, 2007 at 08:20:13AM +1000, Nathan Kroenert wrote:
>> Erik -
>>
>> Thanks for that, but I know the pool is corrupted - That was kind of the
>> point of the exercise.
>>
>> The bug (at least to me)
Erik -
Thanks for that, but I know the pool is corrupted - That was kind of the
point of the exercise.
The bug (at least to me) is ZFS panicking Solaris just trying to import
the dud pool.
But, maybe I'm missing your point?
Nathan.
eric kustarz wrote:
>>
>> Client A
&
ecause I tried to import a dud pool...
I'm OK(ish) with the panic on a failed write to non-redundant storage.
I expect it by now...
Cheers!
Nathan.
Victor Engle wrote:
> Wouldn't this be the known feature where a write error to zfs forces a panic?
>
> Vic
>
>
this by accident and panic
a big box for what I see as no good reason. (though I'm happy to be
educated... ;)
Oh - and also - Kudos to the ZFS team and the others involved in the
whole iSCSI thing. So easy and funky. Great work guys...
Cheers!
Nathan.
I think I can offer a straightforward explanation to the following:
I like the error-correction quality of ZFS; however, the ZFS
> Administration Guide states: "A non-redundant pool configuration is
> not recommended for production environments even if the single storage
> object is presented from
And if there is a rubbish file somewhere, I *think* you should be able
to cat /dev/null > thatfile
which would free up its blocks.
Assuming you don't have snapshots... ;)
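A minimal illustration of that truncate-in-place trick, on an ordinary throwaway file (on ZFS the freed blocks only return to the pool once no snapshot still references them):

```shell
# Create a throwaway 1 MB file, then truncate it in place.
dd if=/dev/zero of=/tmp/bigfile bs=1024 count=1024 2>/dev/null
cat /dev/null > /tmp/bigfile   # truncates to zero length without unlinking
ls -l /tmp/bigfile             # size is now 0; the blocks are freed
```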
Nathan.
Anton B. Rang wrote:
> At least three alternatives --
>
> 1. If you don't have t
options, instance #0 (driver name: options)
agpgart, instance #0 (driver name: agpgart)
xsvc, instance #0 (driver name: xsvc)
used-resources
cpus
cpu, instance #0
cpu, instance #1
Nathan.
Ben Middleton wrote:
> I've just purchased an Asus P5K WS, which
take a look at this box
and see if it's a new bug or just me being a bonehead and not
understanding what I'm seeing, please respond to me directly, and I can
provide access. (I'll make an effort not to reboot the box just in case
it's only this boot that sees the problems.)
s
working anyways...
My 2c...
Nathan.
Blake wrote:
> I have re-flashed the BIOS.
>
> Blake
>
> On 8/7/07, *Ian Collins* <[EMAIL PROTECTED] <mailto:[EMAIL PROTECTED]>> wrote:
>
> Blake wrote:
> > Hi.
> >
> > I'm running snv 65
hat I have only observed this with my super cheap adapters at
home. I'm yet to see it (though also yet to try really hard) on the more
expensive ones at work...
Again - Likely nothing to do with your problem, but hey. It has made a
difference for me before...
Cheers.
Nathan.
George wrot
= PROBLEM
To create a disk storage system that will act as an archive point for
user data (Non-recoverable data), and also act as a back end storage
unit for virtual machines at a block level.
= BUDGET
Currently I have about 25-30k to start the project, more could be
allocated in the ne
Which has little benefit if the HBA or the array internals change
the meaning of the message...
That's the whole point of ZFS's checksumming - It's end to end...
Nathan.
Torrey McMahon wrote:
Toby Thain wrote:
On 22-May-07, at 11:01 AM, Louwtjie Burger wrot
Simple test - mkfile 8gb now and see where the data goes... :)
Victor Latushkin wrote:
Robert Milkowski wrote:
Hello Leon,
Thursday, May 10, 2007, 10:43:27 AM, you wrote:
LM> Hello,
LM> I've got some weird problem: ZFS does not seem to be utilizing
LM> all disks in my pool properly. For some
.
A salvage / undelete would have been gold.
Nathan.
James Dickens wrote:
Yes - Snapshots are great, but how often do you run a snapshot? Every 60
seconds? That's going to get real ugly if you have a filesystem per
user...
I'm sure every 15 minutes is sufficient, if the worker doesn
that provided dumb dumb protection
would be very cool. I was saved a number of times by the hackery above...
cheers!
Nathan.
Robert Milkowski wrote:
Hello Jeremy,
Monday, February 19, 2007, 1:58:18 PM, you wrote:
Something similar was proposed here before and IIRC someone even has a
worki
Thank You, so that means that even if I use something that writes raw i/o to a
zfs emulated volume, I still get the checksum protection, and hence data
corruption protection.
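To illustrate the per-block idea with ordinary tools (an analogy only, not ZFS's actual mechanism: ZFS stores a checksum in each block pointer, covering up to the dataset's recordsize, 128K by default):

```shell
# Analogy: cut a file into fixed-size "blocks" and checksum each one,
# versus one checksum over the whole file. ZFS does the former, per block,
# so changing one block only means recomputing that block's checksum.
printf 'block-one-data.' >  /tmp/demo
printf 'block-two-data.' >> /tmp/demo
split -b 15 /tmp/demo /tmp/demo.blk.   # two 15-byte "blocks"
cksum /tmp/demo.blk.*                  # one checksum line per block
cksum /tmp/demo                        # single whole-file checksum
```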
This message posted from opensolaris.org
I am trying to understand if zfs checksums apply at a file or a block level.
We know that zfs provides end to end checksum integrity, and I assumed that
when I write a file to a zfs filesystem, the checksum was calculated at a file
level, as opposed to say, a block level. However, I have notic
Urk!
Where is this documented? And - is it something you can do nothing
about, or are we ultimately trying to address it somewhere / somehow?
Thanks!!
Nathan.
Bill Moore wrote:
On Wed, Jan 31, 2007 at 05:01:19AM -0800, Tom Buskey wrote:
As a followup, the system I'm trying to use th
h, if all disks are rotated, we
end up with a whole bunch of disks that are evenly worn out again, which
is just what we are really trying to avoid! ;)
Nathan.
Wee Yeh Tan wrote:
On 1/30/07, David Magda <[EMAIL PROTECTED]> wrote:
What about a rotating spare?
When setting up a pool a lot
warranties these days. If the disk is not super old, you might be able
to get it replaced under warranty if you send it directly to the
manufacturer...
Hope this helps at least provide some ideas. :)
Oh - and get a new disk. ;)
Nathan.
Patrick P Korsnick wrote:
i have a machine with a
Hm. If the disk has no label, why would it have an s0?
Or, did you mean p0?
Nathan.
On Wed, 2006-12-06 at 04:45, Krzys wrote:
> Does not work :(
>
> dd if=/dev/zero of=/dev/rdsk/c3t6d0s0 bs=1024k count=1024
> dd: opening `/dev/rdsk/c3t6d0s0': I/O error
>
> That is so s
c/817-1985/6mhm8o5q5?a=view
And booting from grub into kmdb:
http://docs.sun.com/app/docs/doc/817-1985/6mhm8o5q2?a=view
I'm not sure how the serial console is going to impact you. I'm
expecting it'll still be f1-a to drop to the debugger...
That's assuming it's not a
feature that excites me.
As far as whiz-bang things that would excite you, only you will know
that for sure. :)
Cheers!
Nathan.
On Thu, 2006-11-09 at 08:58, Wes Williams wrote:
> I'm in the process of building a Solaris NFS server with ZFS and was
> wondering if any gurus here have a
0 issue).
http://src.opensolaris.org/source/xref/onnv/onnv-gate/usr/src/uts/common/io/lvm/mirror/mirror_ioctl.c#887
Or, perhaps I need more coffee...
Cheers!
Nathan. ;)
so be helpful...
Cheers,
Nathan.
On Mon, 2006-10-30 at 14:50, Pavan Reddy wrote:
> 'mv' command took very long time to copy a large file from one ZFS directory
> to another. The directories share the same pool and file system. I had a 385
> MB file in one directory and wanted to
won't need to do
anything. It should just work, as ZFS will be able to just import the zpool.
I hope I understood your question. (And I hope I'm telling no lies... ;)
Nathan.
Sergey wrote:
+ a little addition to the original question:
Imagine that you have a RAID attached to S
I might be wrong here, but I think it's telling you that there are no
errors.
Something like:
errors: none
or
errors: None that we know of, but we'll let you know if there are any.
At least that is how I'd read it.
:)
Do you have an actual problem other than the text?
... :)
Nathan.
On Tue, 2006-08-15 at 01:38, James C. McPherson wrote:
> Bob Evans wrote:
> > Just getting my feet wet with zfs. I set up a test system (Sunblade
> > 1000, dual channel scsi card, disk array with 14x18GB 15K RPM SCSI
> > disks) and was trying to write a large file (1
me pressure.
If, at the end when it exits, you have lots of memory free, and nothing
swapped out, it's all good. :)
quick, dirty, possibly even smelly, with no error checking at all...
:)
Nathan.
On Fri, 2006-07-21 at 09:28, Eric Schrock wrote:
> There two things to note here:
>
>
disks to a raidz or something
like that (if it's even possible) and announce the reduction in
reliability.
Thoughts? :)
Nathan.
On Mon, 2006-07-17 at 18:35, Jeff Bonwick wrote:
> > I have a 10 disk raidz pool running Solaris 10 U2, and after a reboot
> > the whole po
I think the other part of information that's missing is that we COW at
the block level, NOT at the file level.
So, the extra blocks are in use only during the update, and it's only
blocks, not whole files...
Hope this helps..
Nathan.
On Thu, 2006-07-13 at 14:10, Chad Lewis wrote:
filesystem 100% full messages in them...
It will be interesting to see how the current S10_u2 bits go. :)
Nathan.
On Tue, 2006-07-04 at 02:19, Eric Schrock wrote:
> You don't need to grow the pool. You should always be able to truncate the
> file without consuming more space, pr
best way we can
approach this?
Also - When dding the raw slice that zfs is using, I noticed that my IO
rate also seesawed up and down between 31MB/s and 28MB/s, over a 5
second interval... I was not expecting that... Thoughts?
Thanks! :)
Nathan.
Here is the iostat example -
On Thu, 2006-06-29 at 03:40, Nicolas Williams wrote:
> But Joe makes a good point about RAID-Z and iSCSI.
>
> It'd be nice if RAID HW could assist RAID-Z, and it wouldn't take much
> to do that: parity computation on write, checksum verification on read
> and, if the checksum verification fails, c
- SAN-based hardware products allow sharing of storage among
> multiple hosts. This allows storage to be utilized more effectively.
How would ZFS self heal in this case?
Nathan.
perhaps we could look to address that...
Personally, I'd prefer to read a manpage than scour the web for a
tutorial that may or may not be current.
hm... man zfs_tutorial? :)
Nathan.
On Mon, 2006-06-26 at 10:34, Nathanael Burton wrote:
> > Currently the Genunix facility, includi
when there is actually data?
:)
Nathan.
On Wed, 2006-06-21 at 06:25, Eric Schrock wrote:
> On Tue, Jun 20, 2006 at 02:18:34PM -0600, Gregory Shaw wrote:
> > Wouldn't that be:
> >
> > 5 seconds per write = 86400/5 = 17280 writes per day
> > 256 rotated locations f
Not X86?
:(
(Yes - I know there are lots of other things that need to happen first,
but :( nonetheless... )
Nathan.
On Wed, 2006-05-31 at 01:51, Lori Alt wrote:
> Roland Mainz wrote:
> > Hi!
> >
> >
> It is our intention to support system suspend on SPARC
>
use!)
"ah. :)"
Wow. Even thinking about how the ZFS guys might implement that breaks my
head...
Nathan.
Cool -
I can see my old favorites from NetWare 3.12 making a comeback.
It was always great to be able to salvage things from a disk that
someone did not mean to kill. :)
ah - salvage - my old friend...
Does this also usher in the return of purge too? :)
Nathan.
Erik Trimble wrote:
O
?
Nathan. :)
On Fri, 2006-05-19 at 05:12, Eric Schrock wrote:
> On Thu, May 18, 2006 at 11:42:58AM -0700, Charlie wrote:
> > Sorry to revive such an old thread.. but I'm struggling here.
> >
> > I really want to use zfs. Fssnap, SVM, etc all have drawbacks. But I
> &