>> Perhaps I meant to say that the box itself [cpu/ram/bus/nic/io, except disk]
>> is assumed to handle data with integrity. So say netcat is used as transport,
>> zfs is using sha256 on disk, but only fletcher4 over the wire with send/recv,
>> and your wire takes some undetected/uncorrected hits,
On Feb 5, 2010, at 8:09 PM, grarpamp wrote:
>>> Hmm, is that configurable? Say to match the checksums being
>>> used on the filesystem itself... ie: sha256? It would seem odd to
>>> send with less bits than what is used on disk.
>
>>> Was thinking that plaintext ethernet/wan and even some of the
> Intel's RAM is faster because it needs to be.
I'm confused how AMD's dual channel, two way interleaved
128-bit DDR2-667 into an on-cpu controller is faster than
Intel's Lynnfield dual channel, Rank and Channel interleaved
DDR3-1333 into an on-cpu controller.
http://www.anandtech.com/printarti
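For what it's worth, the back-of-envelope peak numbers (theoretical transfer rate only, not sustained throughput; assumes 64-bit channels) do favour the DDR3 setup:

```python
def peak_gb_s(mega_transfers, channels, bus_bytes=8):
    # theoretical peak: transfers/s x 8 bytes per 64-bit transfer x channels
    return mega_transfers * 1e6 * bus_bytes * channels / 1e9

amd_ddr2 = peak_gb_s(667, 2)     # dual-channel DDR2-667  -> ~10.7 GB/s
intel_ddr3 = peak_gb_s(1333, 2)  # dual-channel DDR3-1333 -> ~21.3 GB/s
```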
>> Hmm, is that configurable? Say to match the checksums being
>> used on the filesystem itself... ie: sha256? It would seem odd to
>> send with less bits than what is used on disk.
>> Was thinking that plaintext ethernet/wan and even some of the 'weaker'
>> ssl algorithms
> Do you expect the sam
On Feb 5, 2010, at 7:20 PM, grarpamp wrote:
>> No. Checksums are made on the records, and there could be a different
>> record size for the sending and receiving file systems.
>
> Oh. So there's a zfs read to ram somewhere, which checks the sums on disk.
> And then entirely new stream checksums are made while sending it all off
> to the pipe.
> No. Checksums are made on the records, and there could be a different
> record size for the sending and receiving file systems.
Oh. So there's a zfs read to ram somewhere, which checks the sums on disk.
And then entirely new stream checksums are made while sending it all off
to the pipe.
I see.
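Until something stronger rides in the stream itself, one can bolt a sha256 on outside zfs, hashing while forwarding (the moral equivalent of `zfs send | tee >(sha256sum) | nc ...`). A toy sketch of that tee-and-hash idea — illustrative only, not a zfs feature:

```python
import hashlib
import io

def forward_with_digest(src, dst, chunk=1 << 16):
    """Copy a stream src -> dst, returning a sha256 hex digest over
    everything forwarded, so sender and receiver can compare digests
    end to end regardless of what the transport checksums."""
    h = hashlib.sha256()
    while True:
        buf = src.read(chunk)
        if not buf:
            break
        h.update(buf)
        dst.write(buf)
    return h.hexdigest()

# stand-in for `zfs send | transport`: hash while forwarding
src, dst = io.BytesIO(b"stream payload"), io.BytesIO()
digest = forward_with_digest(src, dst)
```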
You might also want to note that with traditional filesystems, the
'shred' utility will securely erase data, but no tools like that
will work for zfs.
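Why shred-style overwriting can't work on a copy-on-write filesystem can be shown with a toy model (purely illustrative, not how ZFS allocates blocks):

```python
class ToyCowStore:
    """Toy copy-on-write store: every write goes to a fresh block, so an
    in-place shred-style overwrite never touches the old data blocks."""
    def __init__(self):
        self.blocks = {}    # block id -> contents (the "disk")
        self.current = {}   # file name -> live block id
        self.next_id = 0

    def write(self, name, data):
        self.blocks[self.next_id] = data     # old block left intact
        self.current[name] = self.next_id
        self.next_id += 1

store = ToyCowStore()
store.write("secret.txt", b"password")
store.write("secret.txt", b"\x00" * 8)  # what shred would do in place
# block 0 still holds the original payload, merely unreferenced
```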
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss
On Feb 5, 2010, at 10:49 AM, Robert Milkowski wrote:
Actually, there is.
One difference is that when writing to a raid-z{1|2} pool compared
to raid-10 pool you should get better throughput if at least 4
drives are used. Basically it is due to the fact that in RAID-10 the
maximum you can g
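The usual throughput argument, as a sketch (idealized large sequential writes; real pools differ, and raidz stripes are variable-width):

```python
def streaming_write_disks(n_disks, layout):
    """Effective number of data disks absorbing a large sequential write,
    under an idealized model (every disk equally fast, full stripes)."""
    if layout == "raid10":
        return n_disks // 2   # every byte is written to both mirror halves
    if layout == "raidz1":
        return n_disks - 1    # one disk's worth of parity per stripe
    if layout == "raidz2":
        return n_disks - 2    # two disks' worth of parity per stripe
    raise ValueError(layout)
```

So with 4 drives, raidz1 streams writes onto 3 disks' worth of spindles versus 2 for raid-10, which is presumably the point being made above.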
On Feb 5, 2010, at 3:11 AM, grarpamp wrote:
> Are the sha256/fletcher[x]/etc checksums sent to the receiver along
> with the other data/metadata?
No. Checksums are made on the records, and there could be a different
record size for the sending and receiving file systems. The stream itself
is checksummed.
> I saw this in /. and thought I'd point it out to this list. It appears
> to act as a L2 cache for a single drive, in theory providing better
> performance.
>
> http://www.silverstonetek.com/products/p_contents.php?pno=HDDBOOST&area
It's a neat device, but the notion of a hybrid drive is nothing new.
On Feb 5, 2010, at 5:19 PM, Nicolas Williams wrote:
>> ZFS crypto will be nice when we get either NFSv4 or NFSv3 w/krb5 for
>> over the wire encryption. Until then, not much point.
>
> You can use NFS with krb5 over the wire encryption _now_.
>
> Nico
> --
I know, that's just something I'm wo
On Fri, Feb 05, 2010 at 05:08:02PM -0500, c.hanover wrote:
> In our particular case, there won't be snapshots of destroyed
> filesystems (I create the snapshots, and destroy them with the
> filesystem).
OK.
> I'm not too sure on the particulars of NFS/ZFS, but would it be
> possible to create a 1GB file without writing any data to it, and then
> use a hex editor to access the data stored on those blocks previously?
On 2/5/10 5:08 PM -0500 c.hanover wrote:
would it be possible to
create a 1GB file without writing any data to it, and then use a hex
editor to access the data stored on those blocks previously?
No, not over NFS and also not locally. You'd be creating a sparse file,
which doesn't allocate space.
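A quick demonstration of the sparse-file behaviour (works on any POSIX filesystem, local or NFS-backed: reads of a hole are defined to return zeros, never stale on-disk data):

```python
import os
import tempfile

# A "1 GB" file created without writing data is a sparse file: truncate()
# just records the size. Reads of the hole return zeros, not whatever
# previously lived on those disk blocks.
fd, path = tempfile.mkstemp()
os.close(fd)
with open(path, "r+b") as f:
    f.truncate(1 << 30)             # logical size 1 GiB, no data written
size = os.stat(path).st_size
with open(path, "rb") as f:
    f.seek(512 << 20)               # read from the middle of the hole
    chunk = f.read(4096)
os.unlink(path)
```

(`st_blocks` on such a file stays near zero on filesystems with sparse support, confirming no blocks were allocated.)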
I saw this in /. and thought I'd point it out to this list. It appears
to act as a L2 cache for a single drive, in theory providing better
performance.
http://www.silverstonetek.com/products/p_contents.php?pno=HDDBOOST&area
-B
--
Brandon High : bh...@freaks.com
Indecision is the key to flexibility.
In our particular case, there won't be snapshots of destroyed filesystems (I
create the snapshots, and destroy them with the filesystem).
I'm not too sure on the particulars of NFS/ZFS, but would it be possible to
create a 1GB file without writing any data to it, and then use a hex editor to
access the data stored on those blocks previously?
On Fri, Feb 05, 2010 at 04:41:08PM -0500, Miles Nordin wrote:
> > "ch" == c hanover writes:
>
> ch> is there a way to a) securely destroy a filesystem,
>
> AIUI zfs crypto will include this, some day, by forgetting the key.
Right.
> but for SSD, zfs above a zvol, or zfs above a SAN tha
On Fri, Feb 05, 2010 at 03:49:15PM -0500, c.hanover wrote:
> Two things, mostly related, that I'm trying to find answers to for our
> security team.
>
> Does this scenario make sense:
> * Create a filesystem at /users/nfsshare1, user uses it for a while,
> asks for the filesystem to be deleted
> * New user asks for a filesystem and is given /users/nfsshare2.
> "ch" == c hanover writes:
ch> is there a way to a) securely destroy a filesystem,
AIUI zfs crypto will include this, some day, by forgetting the key.
but for SSD, zfs above a zvol, or zfs above a SAN that may do
snapshots without your consent, I think it's just logically not a
solvable problem.
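The key-forgetting idea ("crypto-shredding") in toy form — deliberately NOT real cryptography, just an illustration; zfs crypto would use real ciphers and key wrapping:

```python
import hashlib

def xor_keystream(key: bytes, data: bytes) -> bytes:
    """Toy cipher (NOT real crypto): keystream from sha256(key || counter),
    XORed with the data. Same call encrypts and decrypts. Only meant to
    illustrate that destroying the key destroys access to the data."""
    stream = bytearray()
    ctr = 0
    while len(stream) < len(data):
        stream += hashlib.sha256(key + ctr.to_bytes(8, "big")).digest()
        ctr += 1
    return bytes(x ^ k for x, k in zip(data, stream))

key = b"per-dataset wrapping key"
ciphertext = xor_keystream(key, b"sensitive records")
recovered = xor_keystream(key, ciphertext)
# "secure destroy" = forget `key`; the ciphertext left on SSD blocks,
# zvol snapshots, or SAN copies then reveals nothing
```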
> "rvd" == Ray Van Dolson writes:
> "ak" == Andrey Kuzmin writes:
rvd> I missed out on this thread. How would these dropped flushed
rvd> writes manifest themselves?
presumably corrupted databases, lost mail, or strange NFS behavior
after the server reboots when the clients do n
On 2/5/10 3:49 PM -0500 c.hanover wrote:
Two things, mostly related, that I'm trying to find answers to for our
security team.
Does this scenario make sense:
* Create a filesystem at /users/nfsshare1, user uses it for a while, asks
for the filesystem to be deleted
* New user asks for a filesystem and is given /users/nfsshare2. What ar
rvandol...@esri.com said:
> I'm trying to figure out where I can find the firmware on the LSI
> controller... are the bootup messages the only place I could expect to see
> this? prtconf and prtdiag both don't appear to give firmware information.
> . . .
> Solaris 10 U8 x86.
The "raidctl" command
Two things, mostly related, that I'm trying to find answers to for our security
team.
Does this scenario make sense:
* Create a filesystem at /users/nfsshare1, user uses it for a while, asks for
the filesystem to be deleted
* New user asks for a filesystem and is given /users/nfsshare2. What ar
On Fri, Feb 5, 2010 at 12:20 PM, Miles Nordin wrote:
> for time machine you will probably find yourself using COMSTAR and the
> GlobalSAN iSCSI initiator because Time Machine does not seem willing
> to work over NFS. Otherwise, for Macs you should definitely use NFS,
Slightly off-topic ...
You
Trying to track down why our two Intel X-25E's are spewing out
Write/Retryable errors when being used as a ZIL (mirrored). The
system is running a LSI1068E controller with LSISASx36 expander
(box built by Silicon Mechanics).
The drives are fairly new, and it seems odd that both of the pair would
On Fri, Feb 05, 2010 at 11:55:12AM -0800, Bob Friesenhahn wrote:
> On Fri, 5 Feb 2010, Miles Nordin wrote:
> >
> >ls> r...@nexenta:/volumes# hdadm write_cache off c3t5
> >
> >ls> c3t5 write_cache> disabled
> >
> > You might want to repeat his test with X25-E. If the X25-E is also
> > drop
On 5-Feb-10, at 11:35 AM, J wrote:
Hi all,
I'm building a whole new server system for my employer, and I
really want to use OpenSolaris as the OS for the new file server.
One thing is keeping me back, though: is it possible to recover a
ZFS Raid Array after the OS crashes? I've spent h
> "b" == Brian writes:
b> (4) Hold backups from windows machines, mac (time machine),
b> linux.
for time machine you will probably find yourself using COMSTAR and the
GlobalSAN iSCSI initiator because Time Machine does not seem willing
to work over NFS. Otherwise, for Macs you sh
On Fri, Feb 5, 2010 at 10:55 PM, Bob Friesenhahn
wrote:
> On Fri, 5 Feb 2010, Miles Nordin wrote:
>>
>> ls> r...@nexenta:/volumes# hdadm write_cache off c3t5
>>
>> ls> c3t5 write_cache> disabled
>>
>> You might want to repeat his test with X25-E. If the X25-E is also
>> dropping cache flush
On Fri, 5 Feb 2010, Miles Nordin wrote:
ls> r...@nexenta:/volumes# hdadm write_cache off c3t5
ls> c3t5 write_cache> disabled
You might want to repeat his test with X25-E. If the X25-E is also
dropping cache flush commands (it might!), you may be, compared to
disabling the ZIL, slowing
> "pr" == Peter Radig writes:
> "ls" == Lutz Schumann writes:
pr> I was expecting a good performance from the X25-E, but was
pr> really surprised that it is that good (only 1.7 times slower
pr> than it takes with ZIL completely disabled). So I will use the
pr> X25-E as ZIL
Ah, I see!
Simple, easy, and saves me hundreds on HW-based RAID controllers ^_^
Thanks!
--
This message posted from opensolaris.org
On Fri, Feb 5, 2010 at 12:11 PM, Cindy Swearingen
wrote:
> Hi Francois,
>
> The autoreplace property works independently of the spare
> feature.
>
> Spares are activated automatically when a device in the main
> pool fails.
>
> Thanks,
>
> Cindy
>
>
> On 02/05/10 09:43, Francois wrote:
>
>> Hi lis
Hi Francois,
The autoreplace property works independently of the spare
feature.
Spares are activated automatically when a device in the main
pool fails.
Thanks,
Cindy
On 02/05/10 09:43, Francois wrote:
Hi list,
I've a strange behaviour with autoreplace property. It is set to off by
default
> if zfs overlaps mirror reads across devices.
it does... I have one very old disk in this mirror and
when I attach another element one can see more reads going
to the faster disks... this pattern didn't start right after the attach
but after the reboot, and one can still see the reads are
load balanced d
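The balancing described can be modelled crudely (purely illustrative scheduling, not the actual zfs mirror code): hand each read to the side whose queue would drain soonest, and faster sides naturally absorb a proportionally larger share.

```python
def balance_reads(n_reads, speeds):
    """Toy mirror read balancing: each read goes to the side with the
    smallest pending-work-to-speed ratio, so a disk twice as fast ends
    up serving roughly twice as many reads."""
    pending = [0.0] * len(speeds)
    served = [0] * len(speeds)
    for _ in range(n_reads):
        side = min(range(len(speeds)), key=lambda i: pending[i] / speeds[i])
        pending[side] += 1.0
        served[side] += 1
    return served
```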
Hi list,
I've a strange behaviour with autoreplace property. It is set to off by
default, ok. I want to manually manage disk replacement so default "off"
matches my need.
# zpool get autoreplace mypool
NAME    PROPERTY     VALUE  SOURCE
mypool  autoreplace  off    default
Then I added 2 s
On Fri, Feb 05, 2010 at 08:35:15AM -0800, J wrote:
> To be more descriptive, I plan to have a Raid 1 array for the OS, and
> then I will need 3 additional Raid5/RaidZ/etc arrays for data
> archiving, backups and other purposes. There is plenty of
> documentation on how to recover an array if one o
On Fri, 5 Feb 2010, Rob Logan wrote:
well, let's look at Intel's offerings... RAM is faster than AMD's
at 1333MHz DDR3, and one gets ECC and a thermal sensor for $10 over non-ECC
Intel's RAM is faster because it needs to be. It is wise to see the
role that architecture plays in total performance
Hi all,
I'm building a whole new server system for my employer, and I really want to
use OpenSolaris as the OS for the new file server. One thing is keeping me
back, though: is it possible to recover a ZFS Raid Array after the OS crashes?
I've spent hours with Google to no avail.
To be more
On 05/02/2010 04:11, Edward Ned Harvey wrote:
Data in raidz2 is striped so that it is split across multiple disks.
Partial truth.
Yes, the data is on more than one disk, but there is also parity, requiring
computation overhead and a write operation on each and every disk. It's not
simply st
>> Was my raidz2 performance comment above correct?
>> That the write speed is that of the slowest disk?
>> That is what I believe I have
>> read.
> You are
> sort-of-correct that its the write speed of the
> slowest disk.
My experience is not in line with that statement. RAIDZ will write a co
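The parity side of this in miniature (single XOR parity; heavily simplified relative to real raidz, which uses variable-width stripes and more than plain XOR for double parity):

```python
def xor_parity(cols):
    """XOR of equal-length columns. With single parity, any one missing
    column equals the XOR of all the surviving columns plus parity."""
    out = bytes(len(cols[0]))
    for col in cols:
        out = bytes(a ^ b for a, b in zip(out, col))
    return out

data = [b"AAAA", b"BBBB", b"CCCC"]   # one stripe, three data columns
parity = xor_parity(data)
# the disk holding column 1 dies; rebuild it from survivors + parity
rebuilt = xor_parity([data[0], data[2], parity])
```

Every full-stripe write touches each disk once (data or parity), which is where the per-disk write plus computation overhead mentioned above comes from.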
On Fri, Feb 05, 2010 at 02:41:35PM +0100, Jesus Cea wrote:
> -BEGIN PGP SIGNED MESSAGE-
> Hash: SHA1
>
> When a scrub/resilver finishes, you see the date and time in "zpool
> status". But this information doesn't persist across reboots.
>
> Would be nice being able to see the date and tim
On Fri, 5 Feb 2010, Alexander M. Stetsenko wrote:
        NAME        STATE     READ WRITE CKSUM
        mypool      DEGRADED     0     0     0
          mirror    DEGRADED     0     0     0
            c1t4d0  DEGRADED     0     0    28  too many errors
            c1t5d0  ONLINE       0     0     0
I
-BEGIN PGP SIGNED MESSAGE-
Hash: SHA1
When a scrub/resilver finishes, you see the date and time in "zpool
status". But this information doesn't persist across reboots.
Would be nice to be able to see the date, time, and duration of the last
scrub of the pool, even if you reboot your machine :).
PS: I a
-BEGIN PGP SIGNED MESSAGE-
Hash: SHA1
On 02/04/2010 05:10 AM, Matthew Ahrens wrote:
> This is RFE 6425091 "want 'zfs diff' to list files that have changed
> between snapshots", which covers both file & directory changes, and file
> removal/creation/renaming. We actually have a prototype o
-BEGIN PGP SIGNED MESSAGE-
Hash: SHA1
On 02/03/2010 04:35 PM, Andrey Kuzmin wrote:
> At zfs_send level there are no files, just DMU objects (modified in
> some txg which is the basis for changed/unchanged decision).
Would be awesome if "zfs send" would have an option to show files
changed
Are the sha256/fletcher[x]/etc checksums sent to the receiver along
with the other data/metadata? And checked upon receipt of course.
Do they chain all the way back to the uberblock or to some calculated
transfer specific checksum value?
The idea is to carry through the integrity checks wherever possible.
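The chaining being asked about is a Merkle tree: block checksums are stored in their parent (indirect) blocks, level by level, up to a root the uberblock vouches for. A toy sketch of the principle (illustrative only, not the actual ZFS on-disk layout):

```python
import hashlib

def sha(data: bytes) -> bytes:
    return hashlib.sha256(data).digest()

def build_root(blocks):
    """Toy Merkle tree: each level stores the checksums of the level
    below, pairwise, until a single root remains (the uberblock's role).
    Changing any leaf changes the root."""
    level = [sha(b) for b in blocks]
    while len(level) > 1:
        level = [sha(level[i] + (level[i + 1] if i + 1 < len(level) else b""))
                 for i in range(0, len(level), 2)]
    return level[0]

root = build_root([b"block0", b"block1", b"block2", b"block3"])
tampered = build_root([b"block0", b"blockX", b"block2", b"block3"])
```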
Nicolas Williams wrote:
> There's no unionfs for Solaris.
>
> (For those of you who don't know, unionfs is a BSDism and is a
> pseudo-filesystem which presents the union of two underlying
> filesystems, but with all changes being made only to one of the two
> filesystems. The idea is that one of
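The upper/lower lookup behaviour described can be modelled in a few lines (a toy dictionary model of a union mount, not a real VFS; "whiteout" is the usual term for hiding a lower-layer entry after deletion):

```python
class ToyUnion:
    """Toy union mount: lookups fall through from a writable upper layer
    to a read-only lower layer; all changes land in the upper layer."""
    def __init__(self, lower):
        self.lower = dict(lower)   # read-only layer (e.g. golden image)
        self.upper = {}            # all writes go here
        self.whiteout = set()      # deletions that hide lower entries

    def read(self, name):
        if name in self.whiteout:
            raise FileNotFoundError(name)
        if name in self.upper:
            return self.upper[name]
        return self.lower[name]

    def write(self, name, data):
        self.whiteout.discard(name)
        self.upper[name] = data

    def unlink(self, name):
        self.upper.pop(name, None)
        self.whiteout.add(name)

u = ToyUnion({"etc/motd": b"golden image"})
u.write("etc/motd", b"local change")   # the lower copy stays untouched
```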
On 4 Feb 2010, at 16:35, Bob Friesenhahn wrote:
> On Thu, 4 Feb 2010, Darren J Moffat wrote:
>>> Thanks - IBM basically hasn't tested ClearCase with ZFS compression;
>>> therefore, they don't currently support it. That may change in the future.
>>> As such my customer cannot use compression. I have asked IBM for road