Marcel Gschwandl schrieb:
Hi all!
I'm running a Solaris 10 Update 6 (10/08) system and had to resilver a zpool.
It's now showing
snip
scrub: resilver completed after 9h0m with 21 errors on Wed Nov 4 22:07:49 2009
/snip
but I haven't found an option to see which files were affected.
Hi folks,
I'm seeing an odd problem and wondered whether others had encountered it.
When I try to write to a Nevada NFS share from a Mac OS X (10.5) client via
the Mac's GUI, I get a permissions error - the file is 0 bytes, the date is
set to Jan 1, 1970, and the perms are set to 000. Writing to the
I wonder whether this is related to an iTunes update - it tends to fiddle
about in the library for a bit when you install it?
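A reasonable first diagnostic (the dataset name tank/export and the interface
e1000g0 below are placeholders, not from the original posts) is to check how
the shared dataset handles ACLs and to watch the wire while reproducing the
failure from the Mac:

  # server side: how does the dataset translate and inherit ACLs?
  $ zfs get aclmode,aclinherit tank/export
  # full ACL of the 0-byte file the Finder leaves behind
  $ ls -V /tank/export/badfile
  # capture the NFS traffic while reproducing the failure (needs root)
  $ snoop -d e1000g0 rpc nfs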
Darren J Moffat darr...@opensolaris.org writes:
Mauricio Tavares wrote:
If I have a machine with two drives, could I create equal-size slices
on the two disks, set them up as a boot pool (mirror), and then use the
remaining space as a striped pool for other, more wasteful
applications?
You
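(Darren's reply is truncated above.) The short answer is that the layout
works; a rough sketch with invented device names, assuming slices 0 and 1
have already been created on each disk with format(1M):

  # the installer normally creates the mirrored root pool on the s0 slices:
  #   zpool create rpool mirror c0t0d0s0 c0t1d0s0
  # the leftover s1 slices become a plain stripe (no redundancy):
  $ zpool create datapool c0t0d0s1 c0t1d0s1
  $ zpool status datapool

One caveat: ZFS won't enable the disks' write caches when it is given slices
rather than whole disks, so there is some performance cost to this layout.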
Tim Cook wrote:
On Sun, Nov 8, 2009 at 2:03 AM, besson3c j...@netmusician.org wrote:
I'm entertaining something which might be a little wacky, and I'm
wondering what your general reaction to this scheme might be :)
I would like to invest in some sort of
Erik Ableson wrote:
Uhhh - for an unmanaged server you can use ESXi for free. Identical
server functionality; it just requires licenses if you need multiserver
features (i.e. vMotion)
How does ESXi w/o vMotion, vSphere, and vCenter Server stack up against
VMware Server? My impression was that you
On 09/11/2009 09:57, Thomas Maier-Komor tho...@maier-komor.de wrote:
Marcel Gschwandl wrote:
Hi all!
I'm running a Solaris 10 Update 6 (10/08) system and had to resilver a zpool.
It's now showing
snip
scrub: resilver completed after 9h0m with 21 errors on Wed Nov 4 22:07:49
No, Chris, I didn't export the pool because I didn't expect this to
happen. It's an excellent suggestion, so I'll try it when I get my
hands on the machine.
Thank you.
Leandro.
From: Chris Murray chrismurra...@gmail.com
To: Leandro Vanden Bosch
This new PSARC putback that allows rolling back to an earlier valid uberblock
is good.
This immediately raises a question: could we use this PSARC functionality to
recover deleted files? Or some variation? I don't need that functionality now,
but I am just curious...
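For reference, the recovery support in that putback surfaces as new pool-rewind
options on zpool import; a sketch (pool name invented):

  # try to open the pool by rewinding to the most recent good txg
  $ zpool import -F tank
  # dry run: report whether the rewind would succeed, without doing it
  $ zpool import -Fn tank

As far as I understand it, the rewind only steps back a handful of recent
transaction groups, so it is a pool-recovery tool rather than a general
undelete.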
On Mon, 9 Nov 2009, Gschwandl Marcel HSLU TA wrote:
zpool status -v poolname
I already tried that, it only gives me
snip
errors: No known data errors
/snip
Errors do not necessarily cause data loss. For example, there may
have been sufficient redundancy that the error was able to be repaired.
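For contrast, when damage is unrecoverable, zpool status -v names the
casualties directly; a mocked-up illustration (pool and path invented):

  $ zpool status -v tank
    ...
  errors: Permanent errors have been detected in the following files:

          /tank/data/somefile.dat

  # after restoring or deleting the affected files, reset the error counts
  $ zpool clear tank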
OpenSolaris 2009.06
I have an ST2540 Fiber Array directly attached to an X4150. There is a
zpool on the fiber device. The zpool went into a faulted state, but I
can't seem to get it back via a scrub, or even delete it. Do I have to
re-install the entire OS if I want to use that device again?
Frequent snapshots offer outstanding oops protection.
Rob
I had the same problem recently on b125. I had a single-disk zpool, Movies,
and shut down the computer, removed the Movies disk and inserted another
single-disk zpool, Misc. I booted and imported the Misc zpool, but the Movies
zpool showed exactly the same behaviour as you report. The Movies zpool would
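As Chris suggested earlier in the thread, an export before pulling the disk
avoids this state; using the pool names above:

  # before removing the disk, cleanly detach the pool from the system
  $ zpool export Movies
  # after swapping disks, bring the other pool in
  $ zpool import Misc
  # if a pool was moved without an export, a forced import usually works
  $ zpool import -f Movies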
Maybe to create snapshots after the fact as a part of some larger disaster
recovery effort.
(What did my pool/file-system look like at 10am?... Say 30 minutes before the
database barfed on itself...)
With some enhancements, might this functionality be extendable into a poor
man's CDP offering?
I'm not sure if this is exactly what you're looking for, but check out the
workaround in this bug:
http://bugs.opensolaris.org/view_bug.do?bug_id=6559281
Basically, look through the output of cfgadm -al and run the following
command on the unusable attachment
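In outline, the workaround looks like this (the exact command is in the bug
report; the ap_id below is invented):

  # find the attachment point that shows up as unusable
  $ cfgadm -al
  # then reconfigure it (run as root; ap_id is an example only)
  # cfgadm -c configure c2::dsk/c2t0d0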
Maybe to create snapshots after the fact
How does one quiesce a drive after the fact?
On 8-Nov-09, at 12:20 PM, Joe Auty wrote:
Tim Cook wrote:
On Sun, Nov 8, 2009 at 2:03 AM, besson3c j...@netmusician.org wrote:
...
Why not just convert the VMs to run in VirtualBox and run Solaris
directly on the hardware?
That's another possibility, but it depends on how VirtualBox
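If the conversion route is taken, VirtualBox can attach existing VMware VMDK
images directly, or clone them into its native format; roughly (file names
invented):

  # clone a VMware disk image into a native VDI
  $ VBoxManage clonehd guest.vmdk guest.vdi --format VDI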
+--
| On 2009-11-09 12:18:04, Ellis, Mike wrote:
|
| Maybe to create snapshots after the fact as a part of some larger
| disaster recovery effort.
| (What did my pool/file-system look like at 10am?... Say 30-minutes before
More ZFS goodness putback before close of play for snv_128.
http://mail.opensolaris.org/pipermail/onnv-notify/2009-November/010768.html
http://hg.genunix.org/onnv-gate.hg/rev/216d8396182e
Regards
Nigel Smith
Hi,
I can't find any bug-related issues with marvell88sx2 in b126.
I looked over Dave Hollister's shoulder while he searched for
marvell in his webrevs of this putback and nothing came up:
Driver change with build 126?
Not for the SATA framework, but for HBAs there is:
On Mon, Nov 9, 2009 at 12:45 PM, Nigel Smith
nwsm...@wilusa.freeserve.co.uk wrote:
More ZFS goodness putback before close of play for snv_128.
http://mail.opensolaris.org/pipermail/onnv-notify/2009-November/010768.html
http://hg.genunix.org/onnv-gate.hg/rev/216d8396182e
Regards
Nigel
On 11/09/09 12:58, Brent Jones wrote:
Are these recent developments due to help/support from Oracle?
No.
Or is it business as usual for ZFS developments?
Yes.
- Eric
--
Eric Schrock, Fishworks            http://blogs.sun.com/eschrock
Interesting stuff.
By the way, is there a place to watch the latest news like this on
zfs/opensolaris? RSS maybe?
--
Roman
Roman Naumenko wrote:
Interesting stuff.
By the way, is there a place to watch the latest news like this on
zfs/opensolaris? RSS maybe?
You could subscribe to onnv-not...@opensolaris.org...
James C. McPherson
--
Senior Kernel Software Engineer, Solaris
Sun Microsystems
I'd hoped this script would work for me as a snapshot diff script, but it
seems that bart doesn't play well with large filesystems (I don't know the
cutoff, but my zfs pools (other than rpool) are all well over 4TB).
'bart create' fails immediately with a "Value too large for defined data type"
error.
Roman, I like to check here for recent putbacks:
http://hg.genunix.org/onnv-gate.hg/shortlog
To see new cases: http://arc.opensolaris.org/caselog/PSARC/
Also, to see what should appear in upcoming builds (although not recently
updated):
Andrew Daugherity wrote:
If I invoke bart via truss, I see it calls statvfs() and fails. Way to keep up
with the times, Sun!
% file /bin/truss /bin/amd64/truss
/bin/truss: ELF 32-bit LSB executable 80386 Version 1 [FPU],
dynamically linked, not stripped, no debugging information
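That error is the classic largefile symptom: a 32-bit program calling plain
statvfs() on a filesystem too big for the 32-bit structure gets EOVERFLOW,
which strerror() renders as "Value too large for defined data type". Easy to
confirm (path invented, output approximate):

  # trace just the statvfs calls bart makes against the big pool
  $ truss -t statvfs bart create -R /tank 2>&1 | grep statvfs
  statvfs("/tank", 0x08047A80)   Err#79 EOVERFLOW

The fix on bart's side would be to use the largefile interfaces
(statvfs64(), or build with -D_FILE_OFFSET_BITS=64).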
Craig S. Bell wrote:
Roman, I like to check here for recent putbacks:
http://hg.genunix.org/onnv-gate.hg/shortlog
To see new cases: http://arc.opensolaris.org/caselog/PSARC/
Also, to see what should appear in upcoming builds (although not recently
updated):
Roman Naumenko wrote:
James C. McPherson wrote, On 09-11-09 04:40 PM:
Roman Naumenko wrote:
Interesting stuff.
By the way, is there a place to watch the latest news like this on
zfs/opensolaris?
RSS maybe?
You could subscribe to onnv-not...@opensolaris.org...
James C. McPherson
--
On Fri, 6 Nov 2009, James Andrewartha wrote:
How about attacking it the other way? Sign the SCA, get a sponsor, and put
the fix into OpenSolaris; then Sustaining just has to backport it.
http://hub.opensolaris.org/bin/view/Main/participate
Do you mean the samba bug or the NFS bug?
For the
On Mon, Nov 09, 2009 at 03:25:02PM -0700, Robert Thurlow wrote:
Andrew Daugherity wrote:
If I invoke bart via truss, I see it calls statvfs() and fails. Way to
keep up with the times, Sun!
% file /bin/truss /bin/amd64/truss
/bin/truss: ELF 32-bit LSB executable 80386 Version 1
On Nov 9, 2009, at 2:06 PM, Andrew Daugherity wrote:
I'd hoped this script would work for me as a snapshot diff script,
but it seems that bart doesn't play well with large filesystems
(don't know the cutoff, but my zfs pools (other than rpool) are all
well over 4TB).
'bart create' fails
On Thu Nov 5 14:38:13 PST 2009, Gary Mills wrote:
It would be nice to see this information at:
http://hub.opensolaris.org/bin/view/Community+Group+on/126-130
but it hasn't changed since 23 October.
Well it seems we have an answer:
Nigel Smith wrote:
On Thu Nov 5 14:38:13 PST 2009, Gary Mills wrote:
It would be nice to see this information at:
http://hub.opensolaris.org/bin/view/Community+Group+on/126-130
but it hasn't changed since 23 October.
Well it seems we have an answer:
Robert Thurlow robert.thur...@sun.com 11/9/2009 4:25 PM
% file /bin/truss /bin/amd64/truss
/bin/truss: ELF 32-bit LSB executable 80386 Version 1 [FPU],
dynamically linked, not stripped, no debugging information available
/bin/amd64/truss: ELF 64-bit LSB executable AMD64 Version 1
Seems to me that you really want auditing. You can configure the audit
system to only record the events you are interested in.
http://docs.sun.com/app/docs/doc/816-4557/auditov-1?l=en&a=view
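On Solaris 10 that boils down to enabling BSM and narrowing the preselection
mask; a sketch (the fw class, file writes, is picked here just for
illustration):

  # enable auditing, then reboot (run as root)
  /etc/security/bsmconv

  # /etc/security/audit_control - record only file-write events:
  dir:/var/audit
  flags:fw
  naflags:lo

  # inspect the trail afterwards
  praudit /var/audit/*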
-- richard
On Nov 9, 2009, at 4:55 PM, Andrew Daugherity wrote:
Robert Thurlow
1. Is it true that, because block sizes vary (in powers of 2, of course) on
each write, there will be very little internal fragmentation?
2. I came upon this statement in a forum post:
"ZFS uses 128K data blocks by default whereas other filesystems typically
use 4K or 8K blocks. This
On Mon, 9 Nov 2009, Ilya wrote:
2. I came upon this statement in a forum post:
"ZFS uses 128K data blocks by default whereas other filesystems
typically use 4K or 8K blocks. This naturally reduces the potential
for fragmentation by 32X over 4k blocks."
How is this true? I mean, if you
I have a repeatable test case for this incident. Every time I access my ZFS
CIFS-shared file system with Adobe Photoshop Elements 6.0 from my Vista
workstation, the OpenSolaris server stops serving CIFS. The share functions as
expected for all other CIFS operations.
-Begin
So, I had a fun ZFS learning experience a few months ago. A server of mine
suddenly dropped off the network, or so it seemed. It was an OpenSolaris
2008.05 box serving up samba shares from a ZFS pool, but it noticed too many
checksum errors and so decided it was time to take the pool down so
On Nov 9, 2009, at 6:42 PM, Ilya wrote:
1. Is it true that because block sizes vary (in powers of 2 of
course) on each write that there will be very little internal
fragmentation?
The block size limit (aka recordsize) is a power of 2; actual block sizes are
whatever is needed.
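You can see the cap per dataset (dataset name invented):

  # recordsize is the per-dataset upper bound, tunable from 512 up to 128K
  $ zfs get recordsize tank/fs
  $ zfs set recordsize=8k tank/fs    # common for database-style workloads

Files smaller than the recordsize get a single block just big enough to hold
them; only files spanning multiple records use full 128K blocks.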
2. I came upon this
Wow, this forum is great and uber-fast in response; appreciate the responses,
makes sense.
Only, what does ZFS do to write data? Let's say that you want to write x
blocks somewhere: is ZFS going to find a pointer to the space map of some
metaslab and then write there? Is it going to find a
On Nov 9, 2009, at 9:15 PM, Ilya wrote:
Wow, this forum is great and uber-fast in response, appreciate the
responses, makes sense.
Nothing on TV tonight and all of my stress tests are passing :-)
Only, what does ZFS do to write to data? Let's say that you want to
write x blocks somewhere,
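(The archive cuts the reply off here. For anyone who wants to look at the
allocator directly, zdb will dump the metaslab layer; pool name invented,
best run against a quiet pool:

  # per-vdev metaslab summary, including how full each space map is
  $ zdb -m tank

Repeating the flag, e.g. zdb -mm, should print progressively more space-map
detail on recent builds.)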