On Thu, Sep 10, 2009 at 01:11:49PM -0400, Eric Sproul wrote:
> I would not use the Caviar Black drives, regardless of TLER settings. The RE3
> or RE4 drives would be a better choice, since they also have better vibration
> tolerance. This will be a significant factor in a chassis with 20 spinning disks.
On Fri, Sep 11, 2009 at 12:26 AM, Richard Elling wrote:
> On Sep 10, 2009, at 1:03 PM, Peter Tribble wrote:
>>
>> On Thu, Sep 10, 2009 at 8:52 PM, Richard Elling wrote:
>>>
>>> Enrico,
>>> Could you compare and contrast your effort with the existing libzfs_jni?
>>>
>>> http://src.opensolaris.org/source/xref/onnv/onnv-gate/usr/src/lib/libzfs_jni/common/
On Sep 10, 2009, at 1:03 PM, Peter Tribble wrote:
On Thu, Sep 10, 2009 at 8:52 PM, Richard Elling wrote:
Enrico,
Could you compare and contrast your effort with the existing
libzfs_jni?
http://src.opensolaris.org/source/xref/onnv/onnv-gate/usr/src/lib/libzfs_jni/common/
Where's the source for the java code that uses that library?
On Thu, 10 Sep 2009, Rich Morris wrote:
Excellent. What level of read improvement are you seeing? Is the prefetch
rate improved, or does the fix simply avoid losing the prefetch?
This fix avoids using a prefetch stream when it is no longer valid. BTW, ZFS
prefetch appears to work well for
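For anyone who wants to watch the prefetcher while testing this, the zfetch counters
are exposed as a kstat on recent builds (exact counter names vary a bit between builds),
so a rough check is to sample them before and after a streaming read:
# kstat -m zfs -n zfetchstats
# <run the streaming read>
# kstat -m zfs -n zfetchstats
Comparing the hits/misses deltas between the two samples gives an idea of whether a
prefetch stream is actually being used.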
Can anyone say whether we will get ZFS de-duplication before SXCE EOL? If
possible, please also answer the same question about encryption.
Thanks
Thanks for pointing it out, Richard. I missed libzfs_jni. I'll have a
look at it and see where we're overlapping.
As far as I can see from a quick glance, libzfs_jni includes functionality that
we'd like to build on top of the libzfs wrapper (that's why I was studying the
zfs and zpool commands). Maybe
On 09/10/09 16:17, Bob Friesenhahn wrote:
On Thu, 10 Sep 2009, Rich Morris wrote:
On 07/28/09 17:13, Rich Morris wrote:
On Mon, Jul 20, 2009 at 7:52 PM, Bob Friesenhahn wrote:
Sun has opened internal CR 6859997. It is now in Dispatched state
at High priority.
CR 6859997 has recently been fixed in Nevada.
Hello Rich,
On Sep 10, 2009, at 9:12 PM, Rich Morris wrote:
On 07/28/09 17:13, Rich Morris wrote:
On Mon, Jul 20, 2009 at 7:52 PM, Bob Friesenhahn wrote:
Sun has opened internal CR 6859997. It is now in Dispatched state
at High priority.
CR 6859997 has recently been fixed in Nevada. This fix will also be in Solaris 10 Update 9.
Quoting Bob Friesenhahn:
On Thu, 10 Sep 2009, Rich Morris wrote:
On 07/28/09 17:13, Rich Morris wrote:
On Mon, Jul 20, 2009 at 7:52 PM, Bob Friesenhahn wrote:
Sun has opened internal CR 6859997. It is now in Dispatched
state at High priority.
CR 6859997 has recently been fixed in Nevada.
On Thu, 10 Sep 2009, Rich Morris wrote:
On 07/28/09 17:13, Rich Morris wrote:
On Mon, Jul 20, 2009 at 7:52 PM, Bob Friesenhahn wrote:
Sun has opened internal CR 6859997. It is now in Dispatched state at High
priority.
CR 6859997 has recently been fixed in Nevada. This fix will also be in Solaris 10 Update 9.
Ah, fantastic. Henrik also pointed out that b124 is about a month out?
I wonder if b119 is worth moving to in the meantime?
-brian
On Thu, Sep 10, 2009 at 01:59:23PM -0600, cindy.swearin...@sun.com wrote:
> Hi Brian,
>
> I'm tracking this issue and expected resolution, here:
>
> http://www.solarisinternals.com/wiki/index.php/ZFS_Troubleshooting_Guide#RAID-Z_Checksum_Errors_in_Nevada_Builds.2C_120-123
On Thu, Sep 10, 2009 at 8:52 PM, Richard Elling wrote:
> Enrico,
> Could you compare and contrast your effort with the existing libzfs_jni?
> http://src.opensolaris.org/source/xref/onnv/onnv-gate/usr/src/lib/libzfs_jni/common/
Where's the source for the java code that uses that library?
--
-Pet
Hi Brian,
I'm tracking this issue and expected resolution, here:
http://www.solarisinternals.com/wiki/index.php/ZFS_Troubleshooting_Guide#RAID-Z_Checksum_Errors_in_Nevada_Builds.2C_120-123
Thanks,
Cindy
On 09/10/09 13:21, Brian Hechinger wrote:
I've hit google and it looks like this is still an issue in b122.
Hello Brian,
On Sep 10, 2009, at 9:21 PM, Brian Hechinger wrote:
I've hit google and it looks like this is still an issue in b122.
Does this
look like it will be fixed any time soon? If so, what build will it
be fixed
in and is there an ETA for the build to be "released"?
Adam has inte
Enrico,
Could you compare and contrast your effort with the existing libzfs_jni?
http://src.opensolaris.org/source/xref/onnv/onnv-gate/usr/src/lib/libzfs_jni/common/
Perhaps it would be worthwhile to try and un-privatize libzfs_jni?
-- richard
On Sep 10, 2009, at 12:20 PM, Enrico Maria Crisosto wrote:
Alex Li wrote:
We finally resolved this issue by changing the LSI driver. For details, please see
http://enginesmith.wordpress.com/2009/08/28/ssd-faults-finally-resolved/
Anyone from Sun have any knowledge of when the open source mpt driver will be
less broken? Things improved greatly for
We finally resolved this issue by changing the LSI driver. For details, please see
http://enginesmith.wordpress.com/2009/08/28/ssd-faults-finally-resolved/
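For anyone comparing notes on driver revisions, a quick generic check of which mpt
driver a system is actually running (not specific to the blog post above) is:
# modinfo | grep -i mpt
modinfo lists the loaded kernel modules along with their description and version
strings, so you can usually see at a glance whether the bundled mpt driver or the
LSI-supplied one is in use.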
I've hit google and it looks like this is still an issue in b122. Does this
look like it will be fixed any time soon? If so, what build will it be fixed
in and is there an ETA for the build to be "released"?
Thanks.
-brian
--
"Coding in C is like sending a 3 year old to do groceries. You gotta
Hi.
I'm willing to maintain a project hosted on java.net
(https://zfs.dev.java.net/) that aims to provide a Java wrapper for libzfs.
I've already wrapped, though not yet committed, the latest libzfs.h I found on
OpenSolaris.org (v. 10342:108f0058f837), and the
first problem I want to address is libr
On 07/28/09 17:13, Rich Morris wrote:
On Mon, Jul 20, 2009 at 7:52 PM, Bob Friesenhahn wrote:
Sun has opened internal CR 6859997. It is now in Dispatched state at
High priority.
CR 6859997 has recently been fixed in Nevada. This fix will also be in
Solaris 10 Update 9.
This fix speeds up
Why do you need 3x LSI SAS3081E-R? The backplane has an LSI SAS x36 expander, so
you only need 1x 3081E. If you want multipathing, you need the E2 model.
Second, I'd suggest the Seagate ES.2 1TB SAS disks, especially if you want
multipathing. I believe the E2 only supports SAS disks.
I have a Supermicro 936E1 (LS
On Sep 10, 2009, at 7:07 AM, Brandon Mercer wrote:
On Thu, Sep 10, 2009 at 5:11 AM, wrote:
Hello all, I'm running 2009.06 and I've got a "random" kernel panic
that keeps killing my system under high IO loads. It happens almost
every time I start loading up the writes on that pool. Memory has been tested
extensively and I'm relatively certain this is not a hardware-related issue.
Eugen Leitl wrote:
> Inspired by
> http://www.webhostingtalk.com/showpost.php?p=6334764&postcount=14
> I'm considering taking the Supermicro chassis like
> http://www.supermicro.com/products/chassis/4U/846/SC846E1-R900.cfm
> populating it with 1 TByte WD Caviar Black WD1001FALS with TLER
> set to
On Wed, Sep 9, 2009 at 21:29, Bill Sommerfeld wrote:
>> Any suggestions?
>
> Let it run for another day.
I'll let it keep running as long as it wants this time.
> I suspect the combination of frequent time-based snapshots and a pretty
> active set of users causes the progress estimate to be off..
On Thu, Sep 10, 2009 at 11:11, Jonathan Edwards wrote:
> out of curiosity - do you have a lot of small files in the filesystem?
Most of the space in the filesystem is taken by a few large files, but
most of the files in the filesystem are small. For example, I have my
recorded TV collection on t
On Sep 9, 2009, at 9:29 PM, Bill Sommerfeld wrote:
On Wed, 2009-09-09 at 21:30 +, Will Murnane wrote:
Some hours later, here I am again:
scrub: scrub in progress for 18h24m, 100.00% done, 0h0m to go
Any suggestions?
Let it run for another day.
A pool on a build server I manage takes ab
Francois, you're right!
We just found that it happens only with files >100GB on S10U7.
We have no problem with snv_101a.
gino
> Actually there is great chance that you are hitting
> this bug :
>
> "6792701 Removing large holey file does not free
> space"
>
>
> To check, run:
>
> # zdb -d
Actually there is great chance that you are hitting this bug :
"6792701 Removing large holey file does not free space"
To check, run:
# zdb -dddd <pool>/<filesystem>
If you find object(s) without a pathname, you are in ...
it should look like this :
...
Object lvl iblk dblk lsize asize type
6
> On Thu, September 10, 2009 04:27, Gino wrote:
>
> > # cd /dr/netapp11bkpVOL34
> > # rm -r *
> > # ls -la
> >
> > Now there are no files in /dr/netapp11bkpVOL34, but
> >
> > # zfs list|egrep netapp11bkpVOL34
> > dr/netapp11bkpVOL34  1.34T  158G  1.34T  /dr/netapp11bkpVOL34
>On Thu, September 10, 2009 04:27, Gino wrote:
>
>> # cd /dr/netapp11bkpVOL34
>> # rm -r *
>> # ls -la
>>
>> Now there are no files in /dr/netapp11bkpVOL34, but
>>
>> # zfs list|egrep netapp11bkpVOL34
>> dr/netapp11bkpVOL34  1.34T  158G  1.34T
>> /dr/netapp11bkpVOL34
>>
On Thu, September 10, 2009 04:27, Gino wrote:
> # cd /dr/netapp11bkpVOL34
> # rm -r *
> # ls -la
>
> Now there are no files in /dr/netapp11bkpVOL34, but
>
> # zfs list|egrep netapp11bkpVOL34
> dr/netapp11bkpVOL34  1.34T  158G  1.34T
> /dr/netapp11bkpVOL34
>
> Space has
On Thu, Sep 10, 2009 at 8:29 PM, Maurilio Longo wrote:
>> Neither.
>> It'll send all necessary data (without having to
>> promote anything) so
>> that the receiving zvol has a working vol1, and it's
>> not a clone.
>
> Fajar,
>
> thanks for clarifying, this is what I was calling 'promotion'.
>
> I
> Neither.
> It'll send all necessary data (without having to
> promote anything) so
> that the receiving zvol has a working vol1, and it's
> not a clone.
Fajar,
thanks for clarifying, this is what I was calling 'promotion'.
It is like a "promotion" happening on the receiving side.
Maurilio.
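A quick way to convince yourself of this (dataset names here are made up; vol1
stands for the clone from the original question):
# zfs snapshot tank/vol1@copy
# zfs send tank/vol1@copy | zfs receive otherpool/vol1
# zfs get origin otherpool/vol1
On the receiving pool the origin property should come back as "-", i.e.
otherpool/vol1 is an ordinary, self-contained zvol rather than a clone, which is
exactly the "promotion on the receiving side" effect described above.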
On Thu, Sep 10, 2009 at 8:03 PM, Maurilio Longo wrote:
> Hi,
>
> I have a question, let's say I have a zvol named vol1 which is a clone of a
> snapshot of another zvol (its origin property is tank/my...@mysnap).
>
> If I send this zvol to a different zpool through a zfs send does it send the
> o
On Thu, Sep 10, 2009 at 9:09 AM, Chris Kirby wrote:
> On Sep 10, 2009, at 7:07 AM, Brandon Mercer wrote:
>
>> On Thu, Sep 10, 2009 at 5:11 AM, wrote:
>>>
Hello all, I'm running 2009.06 and I've got a "random" kernel panic
that keeps killing my system under high IO loads. It happens al
Hi,
I have a question, let's say I have a zvol named vol1 which is a clone of a
snapshot of another zvol (its origin property is tank/my...@mysnap).
If I send this zvol to a different zpool through a zfs send, does it send the
origin too? That is, does an automatic promotion happen, or do I end up
P. Anil Kumar wrote:
Hi,
I've compiled /export/testws/usr/src/lib/crypt_modules/sha256/test.c and tried
to use it to calculate the checksum of the uberblock. I did this because the sha256
executable that comes with Solaris is not giving me the correct values for the
uberblock (the output is 64 chars w
Hi,
I've compiled /export/testws/usr/src/lib/crypt_modules/sha256/test.c and tried
to use it to calculate the checksum of the uberblock. I did this because the sha256
executable that comes with Solaris is not giving me the correct values for the
uberblock (the output is 64 chars whereas the zfs output is on
On Thu, Sep 10, 2009 at 5:11 AM, wrote:
>
>>Hello all, I'm running 2009.06 and I've got a "random" kernel panic
>>that keeps killing my system under high IO loads. It happens almost
>>every time I start loading up the writes on that pool. Memory has been
>>tested extensively and I'm relatively certain this is not a hardware-related issue.
Already done "ls -la". No hidden files here.
Import/export doesn't change anything.
Done a "zfs destroy dr/netapp11bkpVOL34" and is running since 7 minutes
with very high I/O
gino
Fajar A. Nugraha wrote:
On Thu, Sep 10, 2009 at 3:27 PM, Gino wrote:
# cd /dr/netapp11bkpVOL34
# rm -r *
#
>Hello all, I'm running 2009.06 and I've got a "random" kernel panic
>that keeps killing my system under high IO loads. It happens almost
>every time I start loading up the writes on that pool. Memory has been
>tested extensively and I'm relatively certain this is not a hardware
>related issue. h
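Not a fix, but to attach the actual panic string and stack to a bug report, something
along these lines usually works, assuming crash dumps are enabled and savecore has
written one out (dump numbers and paths will vary):
# dumpadm
# cd /var/crash/`hostname`
# mdb unix.0 vmcore.0
> ::status
> ::stack
::status prints the panic message and ::stack the panicking thread's stack, which is
usually the first thing people will ask for.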
> >> # cd /dr/netapp11bkpVOL34
> >> # rm -r *
> >> # ls -la
> >> #
> >>
> >> Now there are no files in /dr/netapp11bkpVOL34,
> but
> >>
> >> # zfs list|egrep netapp11bkpVOL34
> >> dr/netapp11bkpVOL34  1.34T  158G  1.34T  /dr/netapp11bkpVOL34
> >>
> >> Space has no
On 10 Sep 2009, at 09:38, Fajar A. Nugraha wrote:
On Thu, Sep 10, 2009 at 3:27 PM, Gino wrote:
# cd /dr/netapp11bkpVOL34
# rm -r *
# ls -la
#
Now there are no files in /dr/netapp11bkpVOL34, but
# zfs list|egrep netapp11bkpVOL34
dr/netapp11bkpVOL34 1.34T 158G
On Thu, Sep 10, 2009 at 3:27 PM, Gino wrote:
> # cd /dr/netapp11bkpVOL34
> # rm -r *
> # ls -la
> #
>
> Now there are no files in /dr/netapp11bkpVOL34, but
>
> # zfs list|egrep netapp11bkpVOL34
> dr/netapp11bkpVOL34 1.34T 158G 1.34T
> /dr/netapp11bkpVOL34
>
> Sp
Hi all,
we have some problems with ZFS.
Our configuration: X4100 + dual 3510 JBOD, 2 zpools, Solaris 10U7
# zfs create dr/netapp11bkpVOL34
# cd /dr/netapp11bkpVOL34
# rsync -av --numeric-ids --delete /netapp11/vol/vol34/* .
# zfs list|egrep netapp11bkpVOL34
dr/netapp11bkpVOL34
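Two quick checks that usually narrow this kind of problem down (dataset name as in
the thread; zdb output format varies between builds):
# zfs list -t snapshot | egrep netapp11bkpVOL34
# zdb -dddd dr/netapp11bkpVOL34
The first shows whether snapshots are still pinning the blocks; in the second, large
"ZFS plain file" objects with no path line are the leaked objects described for bug
6792701 earlier in the thread.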