Kwang Whee Lee would like to recall the message, "[zfs-discuss] Repairing
Faulted ZFS pool and missing disks".
On 07/11/2012 02:18 AM, John Martin wrote:
> On 07/10/12 19:56, Sašo Kiselkov wrote:
>> Hi guys,
>>
>> I'm contemplating implementing a new fast hash algorithm in Illumos' ZFS
>> implementation to supplant the currently utilized sha256. On modern
>> 64-bit CPUs SHA-256 is actually much slower than SHA-512 …
> From: zfs-discuss-boun...@opensolaris.org [mailto:zfs-discuss-
> boun...@opensolaris.org] On Behalf Of Sašo Kiselkov
>
> I'm contemplating implementing a new fast hash algorithm in Illumos' ZFS
> implementation to supplant the currently utilized sha256. On modern
> 64-bit CPUs SHA-256 is actually …
On 07/10/12 19:56, Sašo Kiselkov wrote:
Hi guys,
I'm contemplating implementing a new fast hash algorithm in Illumos' ZFS
implementation to supplant the currently utilized sha256. On modern
64-bit CPUs SHA-256 is actually much slower than SHA-512 and indeed much
slower than many of the SHA-3 candidates …
To amplify what Mike says...
On Jul 10, 2012, at 5:54 AM, Mike Gerdts wrote:
> ls(1) tells you how much data is in the file - that is, how many bytes
> of data that an application will see if it reads the whole file.
> du(1) tells you how many disk blocks are used. If you look at the
> stat struct …
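The distinction can be demonstrated with a sparse file on any POSIX filesystem, no ZFS required; a minimal sketch, assuming GNU coreutils (`truncate`, `stat -c`) and a filesystem with sparse-file support. ZFS compression produces the same visible split: st_size stays the logical size while st_blocks shrinks.

```shell
# A 1 MiB sparse file: ls/stat report the logical size (st_size),
# while du reports the blocks actually allocated (st_blocks).
truncate -s 1M sparse.dat

ls -l sparse.dat                          # shows 1048576 bytes
stat -c '%s bytes, %b blocks' sparse.dat  # logical size vs allocated blocks
du -k sparse.dat                          # near zero: nothing allocated

rm -f sparse.dat
```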
Hi guys,
I'm contemplating implementing a new fast hash algorithm in Illumos' ZFS
implementation to supplant the currently utilized sha256. On modern
64-bit CPUs SHA-256 is actually much slower than SHA-512 and indeed much
slower than many of the SHA-3 candidates, so I went out and did some
testing …
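The speed gap is easy to spot-check with the coreutils digest tools (a rough sketch; `openssl speed` gives more careful numbers). The underlying reason: SHA-512 uses 64-bit words and 128-byte blocks, so a 64-bit CPU without SHA-NI acceleration pushes more data per compression-function call than with SHA-256's 32-bit words.

```shell
# Rough throughput comparison: hash the same 256 MiB of zeros with
# both algorithms. On most 64-bit CPUs sha512sum finishes faster.
dd if=/dev/zero of=bench.dat bs=1M count=256 2>/dev/null

time sha256sum bench.dat
time sha512sum bench.dat

rm -f bench.dat
```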
2012-07-10 15:49, Edward Ned Harvey wrote:
If you use compression=on, or lzjb, then you're using very fast compression.
Should not hurt performance, in fact, may gain performance for highly
compressible data.
If you use compression=gzip (or any gzip level 1 thru 9) then you're using a
fairly expensive …
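Since ZFS's gzip-1 through gzip-9 settings map to the standard gzip levels, the CPU/size trade-off can be sketched with the gzip tool itself (the file name here is illustrative):

```shell
# Generate a few MB of moderately compressible text.
seq 1 1000000 > data.txt

# Fastest level: least CPU, larger output.
time gzip -1 -c data.txt | wc -c
# Slowest level: much more CPU, smaller output.
time gzip -9 -c data.txt | wc -c

rm -f data.txt
```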
I am toying with Phil Brown's zrep script.
Does anyone have an Oracle BugID for this crashdump?
#!/bin/ksh
# Reproducer: rebuild a small zvol with a user property and a
# replica from scratch.
srcfs=rpool/testvol
destfs=rpool/destvol
snap="${srcfs}@zrep_00"
# Start clean: destroy both datasets (and any snapshots) if present.
zfs destroy -r $srcfs
zfs destroy -r $destfs
# Recreate the source as a 100M zvol and tag it with a user property.
zfs create -V 100M $srcfs
zfs set foo:bar=foobar $srcfs
zfs create -o readonl…
On Tue, Jul 10, 2012 at 6:29 AM, Jordi Espasa Clofent
wrote:
> Thanks for your explanation Fajar. However, take a look at the next lines:
>
> # available ZFS in the system
>
> root@sct-caszonesrv-07:~# zfs list
>
> NAME USED AVAIL REFER MOUNTPOINT
> opt …
On 2012-07-10 13:45, Ferenc-Levente Juhos wrote:
Of course you don't see any difference, this is how it should work.
'ls' will never report the compressed size, because it's not aware of
it. Nothing is aware of the compression and decompression that takes
place on-the-fly, except of course zfs.
On 07/10/12 12:45, Ferenc-Levente Juhos wrote:
Of course you don't see any difference, this is how it should work.
'ls' will never report the compressed size, because it's not aware of
it. Nothing is aware of the compression and decompression that takes
place on-the-fly, except of course zfs.
That's …
> From: zfs-discuss-boun...@opensolaris.org [mailto:zfs-discuss-
> boun...@opensolaris.org] On Behalf Of Jordi Espasa Clofent
>
> root@sct-caszonesrv-07:~# zfs set compression=on opt/zones/sct-scw02-
> shared
If you use compression=on, or lzjb, then you're using very fast compression.
Should not hurt performance …
Of course you don't see any difference, this is how it should work.
'ls' will never report the compressed size, because it's not aware of it.
Nothing is aware of the compression and decompression that takes place
on-the-fly, except of course zfs.
That's the reason why you could gain in write and read …
Thanks for your explanation Fajar. However, take a look at the next lines:
# available ZFS in the system
root@sct-caszonesrv-07:~# zfs list
NAME       USED  AVAIL  REFER  MOUNTPOINT
opt        532M  34.7G  290M   /opt
opt/zones  243M  34.7G  …
On Tue, Jul 10, 2012 at 4:40 PM, Jordi Espasa Clofent
wrote:
> On 2012-07-10 11:34, Fajar A. Nugraha wrote:
>
>> compression = possibly less data to write (depending on the data) =
>> possibly faster writes
>>
>> Some data is not compressible (e.g. mpeg4 movies), so in that case you
>> won't see any improvements.
On 07/10/12 09:25 PM, Jordi Espasa Clofent wrote:
Hi all,
By default I'm using ZFS for all the zones:
admjoresp@cyd-caszonesrv-15:~$ zfs list
NAME       USED   AVAIL  REFER  MOUNTPOINT
opt        4.77G  45.9G  285M   /opt
opt/zones  4.49G  45.9G  …
On 2012-07-10 11:34, Fajar A. Nugraha wrote:
compression = possibly less data to write (depending on the data) =
possibly faster writes
Some data is not compressible (e.g. mpeg4 movies), so in that case you
won't see any improvements.
Thanks for your answer Fajar.
As I said in my initial mail …
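Whether compression helps depends entirely on the data; a quick sketch with gzip standing in for the ZFS compressors (lzjb and gzip behave the same way in this respect):

```shell
# Repetitive text compresses massively; random bytes (which is what
# already-compressed data such as mpeg4 looks like) barely compress.
yes "hello zfs" | head -c 1000000 > text.dat
head -c 1000000 /dev/urandom > rand.dat

gzip -c text.dat | wc -c   # a few KB
gzip -c rand.dat | wc -c   # roughly 1000000, can even grow slightly

rm -f text.dat rand.dat
```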
On Tue, Jul 10, 2012 at 4:25 PM, Jordi Espasa Clofent
wrote:
> Hi all,
>
> By default I'm using ZFS for all the zones:
>
> admjoresp@cyd-caszonesrv-15:~$ zfs list
> NAME       USED   AVAIL  REFER  MOUNTPOINT
> opt        4.77G  45.9G  285M   /opt
> opt/zones  …
Hi all,
By default I'm using ZFS for all the zones:
admjoresp@cyd-caszonesrv-15:~$ zfs list
NAME                        USED   AVAIL  REFER  MOUNTPOINT
opt                         4.77G  45.9G  285M   /opt
opt/zones                   4.49G  45.9G  29K    /opt/zones
opt/zones/glad-gm02-ftcl01  3…
19 matches