this by accident and panic
a big box for what I see as no good reason. (though I'm happy to be
educated... ;)
Oh - and also - Kudos to the ZFS team and the others involved in the
whole iSCSI thing. So easy and funky. Great work guys...
Cheers!
Nathan.
And if there is a rubbish file somewhere, I *think* you should be able
to cat /dev/null > thatfile
which would free up its blocks.
Assuming you don't have snapshots... ;)
Nathan.
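The trick above can be sketched portably. This is only a sketch on a
throwaway temp file rather than a real rubbish file; the zero-length
result is what frees the blocks (again, only if no snapshot still
references them).

```shell
# Create a 64 KB throwaway file, then truncate it in place.
f=$(mktemp)
dd if=/dev/zero of="$f" bs=1024 count=64 2>/dev/null
cat /dev/null > "$f"    # the redirect truncates the file to zero length
wc -c < "$f"            # prints 0: the blocks are now free
rm -f "$f"
```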
Anton B. Rang wrote:
At least three alternatives --
1. If you don't have the latest patches installed, apply
or just me being a bonehead and not
understanding what I'm seeing, please respond to me directly, and I can
provide access. (I'll make an effort not to reboot the box just in case
it's only this boot that sees the problems.)
Nathan. :)
)
agpgart, instance #0 (driver name: agpgart)
xsvc, instance #0 (driver name: xsvc)
used-resources
cpus
cpu, instance #0
cpu, instance #1
Nathan.
Ben Middleton wrote:
I've just purchased an Asus P5K WS, which seems to work OK. I had to download
the Marvell Yukon
anyways...
My 2c...
Nathan.
Blake wrote:
I have re-flashed the BIOS.
Blake
On 8/7/07, Ian Collins [EMAIL PROTECTED] wrote:
Blake wrote:
Hi.
I'm running snv 65 and having an issue much like this:
http://osdir.com/ml
only observed this with my super cheap adapters at
home. I've yet to see it (though I've also yet to try really hard) on the more
expensive ones at work...
Again - Likely nothing to do with your problem, but hey. It has made a
difference for me before...
Cheers.
Nathan.
George wrote:
I have set up
= PROBLEM
To create a disk storage system that will act as an archive point for
user data (Non-recoverable data), and also act as a back end storage
unit for virtual machines at a block level.
= BUDGET
Currently I have about 25-30k to start the project, more could be
allocated in the
Simple test - mkfile 8gb now and see where the data goes... :)
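A hedged sketch of that test: mkfile(1M) is Solaris-only, so dd is used
here as the portable stand-in, and the size is shrunk to 8 MB so the
sketch is cheap to run. The pool path is an assumption.

```shell
# Portable stand-in for: mkfile 8g /pool/testfile
dd if=/dev/zero of=/tmp/testfile bs=1024k count=8 2>/dev/null
wc -c < /tmp/testfile      # 8388608 bytes, i.e. 8 MB of real zeroes
rm -f /tmp/testfile
# On the real system you would then watch where the writes land with:
#   zpool iostat -v <pool> 5
```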
Victor Latushkin wrote:
Robert Milkowski wrote:
Hello Leon,
Thursday, May 10, 2007, 10:43:27 AM, you wrote:
LM Hello,
LM I've got some weird problem: ZFS does not seem to be utilizing
LM all disks in my pool properly. For some
that provided dumb-dumb protection
would be very cool. I was saved a number of times by the hackery above...
cheers!
Nathan.
Robert Milkowski wrote:
Hello Jeremy,
Monday, February 19, 2007, 1:58:18 PM, you wrote:
Something similar was proposed here before and IIRC someone even has a
working
...
A salvage / undelete would have been gold.
Nathan.
James Dickens wrote:
Yes - Snapshots are great, but how often do you run a snapshot? Every 60
seconds? That's going to get real ugly if you have a filesystem per
user...
I'm sure every 15 minutes is sufficient, if the worker doesn't have a
slight
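A 15-minute rotation is easy to drive from cron. This is only a sketch,
and the dataset name tank/home is an assumption; pruning old snapshots
is left out entirely.

```shell
# Build a timestamped snapshot name like tank/home@auto-20070219-1345.
SNAP="tank/home@auto-$(date +%Y%m%d-%H%M)"
echo "$SNAP"
# crontab entry (sketch; note % must be escaped in crontab command fields):
#   0,15,30,45 * * * * zfs snapshot tank/home@auto-`date +\%Y\%m\%d-\%H\%M`
```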
I am trying to understand if zfs checksums apply at a file or a block level.
We know that zfs provides end to end checksum integrity, and I assumed that
when I write a file to a zfs filesystem, the checksum was calculated at a file
level, as opposed to say, a block level. However, I have
Urk!
Where is this documented? And - is it something you can do nothing
about, or are we ultimately trying to address it somewhere / somehow?
Thanks!!
Nathan.
Bill Moore wrote:
On Wed, Jan 31, 2007 at 05:01:19AM -0800, Tom Buskey wrote:
As a followup, the system I'm trying to use
are rotated, we
end up with a whole bunch of disks that are evenly worn out again, which
is just what we are really trying to avoid! ;)
Nathan.
Wee Yeh Tan wrote:
On 1/30/07, David Magda [EMAIL PROTECTED] wrote:
What about a rotating spare?
When setting up a pool a lot of people would (say
these days. If the disk is not super old, you might be able
to get it replaced under warranty if you send it directly to the
manufacturer...
Hope this helps at least provide some ideas. :)
Oh - and get a new disk. ;)
Nathan.
Patrick P Korsnick wrote:
i have a machine with a disk
Hm. If the disk has no label, why would it have an s0?
Or, did you mean p0?
Nathan.
On Wed, 2006-12-06 at 04:45, Krzys wrote:
Does not work :(
dd if=/dev/zero of=/dev/rdsk/c3t6d0s0 bs=1024k count=1024
dd: opening `/dev/rdsk/c3t6d0s0': I/O error
That is so strange... it seems like I
-a to drop to the debugger...
That's assuming it's not a hard hang. :)
Cheers.
Nathan.
On Wed, 2006-11-15 at 14:16, Sean Ye wrote:
Hi, Chris,
You may force a panic by reboot -d.
Thanks,
Sean
On Tue, Nov 14, 2006 at 09:11:58PM -0600, Chris Csanady wrote:
I have experienced two hangs so
/onnv/onnv-gate/usr/src/uts/common/io/lvm/mirror/mirror_ioctl.c#887
Or, perhaps I need more coffee...
Cheers!
Nathan. ;)
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss
feature that excites me.
As far as whiz-bang things that would excite you, only you will know
that for sure. :)
Cheers!
Nathan.
On Thu, 2006-11-09 at 08:58, Wes Williams wrote:
I'm in the process of building a Solaris NFS server with ZFS and was
wondering if any gurus here have any comments
. It should just work, as ZFS will be able to just import the zpool.
I hope I understood your question. (And I hope I'm telling no lies... ;)
Nathan.
Sergey wrote:
+ a little addition to the original quesion:
Imagine that you have a RAID attached to Solaris server. There's ZFS on RAID
I might be wrong here, but I think it's telling you that there are no
errors.
Something like:
errors: none
or
errors: None that we know of, but we'll let you know if there are any.
At least that is how I'd read it.
:)
Do you have an actual problem other than the text?
Nathan.
On Tue
... :)
Nathan.
On Tue, 2006-08-15 at 01:38, James C. McPherson wrote:
Bob Evans wrote:
Just getting my feet wet with zfs. I set up a test system (Sunblade
1000, dual channel scsi card, disk array with 14x18GB 15K RPM SCSI
disks) and was trying to write a large file (10 GB) to the array to
see
when it exits, you have lots of memory free, and nothing
swapped out, it's all good. :)
quick, dirty, possibly even smelly, with no error checking at all...
:)
Nathan.
On Fri, 2006-07-21 at 09:28, Eric Schrock wrote:
There are two things to note here:
1. The vast majority of the memory
to a raidz or something
like that (if it's even possible) and announce the reduction in
reliability.
Thoughts? :)
Nathan.
On Mon, 2006-07-17 at 18:35, Jeff Bonwick wrote:
I have a 10 disk raidz pool running Solaris 10 U2, and after a reboot
the whole pool became unavailable after apparently
this?
Also - When dding the raw slice that zfs is using, I noticed that my IO
rate also seesawed up and down between 31MB/s and 28MB/s, over a 5
second interval... I was not expecting that... Thoughts?
Thanks! :)
Nathan.
Here is the iostat example -
extended device
could look to address that...
Personally, I'd prefer to read a manpage than scour the web for a
tutorial that may or may not be current.
hm... man zfs_tutorial? :)
Nathan.
On Mon, 2006-06-26 at 10:34, Nathanael Burton wrote:
Currently the Genunix facility, including the wiki,
is a resource
there is actually data?
:)
Nathan.
On Wed, 2006-06-21 at 06:25, Eric Schrock wrote:
On Tue, Jun 20, 2006 at 02:18:34PM -0600, Gregory Shaw wrote:
Wouldn't that be:
5 seconds per write = 86400/5 = 17280 writes per day
256 rotated locations for 17280/256 = ~67 writes per location per day
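The arithmetic above checks out; the second division leaves a remainder
(17280/256 is 67.5 exactly), hence roughly 67:

```shell
# One write every 5 seconds over a day, spread across 256 locations.
echo $((86400 / 5))     # 17280 writes per day
echo $((17280 / 256))   # 67 (67.5 exactly) writes per location per day
```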
Not X86?
:(
(Yes - I know there are lots of other things that need to happen first,
but :( nonetheless... )
Nathan.
On Wed, 2006-05-31 at 01:51, Lori Alt wrote:
Roland Mainz wrote:
Hi!
It is our intention to support system suspend on SPARC
when booted off a zfs root file system
?
Nathan. :)
On Fri, 2006-05-19 at 05:12, Eric Schrock wrote:
On Thu, May 18, 2006 at 11:42:58AM -0700, Charlie wrote:
Sorry to revive such an old thread.. but I'm struggling here.
I really want to use zfs. Fssnap, SVM, etc all have drawbacks. But I
work for a University, where everyone has