On Jan 30, 2011, at 5:01 PM, Stuart Anderson wrote:
> On Jan 30, 2011, at 2:29 PM, Richard Elling wrote:
>
>> On Jan 30, 2011, at 12:21 PM, stuart anderson wrote:
>>
>>> Is it possible to partition the global setting for the maximum ARC size
>>> with finer grained controls? Ideally, I would like to do this on a per
>>> zvol basis but a setting per zpool would be interesting as well?
On Jan 30, 2011, at 2:29 PM, Richard Elling wrote:
> On Jan 30, 2011, at 12:21 PM, stuart anderson wrote:
>
>> Is it possible to partition the global setting for the maximum ARC size
>> with finer grained controls? Ideally, I would like to do this on a per
>> zvol basis but a setting per zpool would be interesting as well?
On Jan 30, 2011, at 1:49 PM, Richard Elling wrote:
> On Jan 30, 2011, at 11:19 AM, Stuart Anderson wrote:
>>
>> On Jan 29, 2011, at 10:00 PM, Richard Elling wrote:
>>
>>> On Jan 29, 2011, at 5:48 PM, stuart anderson wrote:
>>>
Is there a simple way to query zfs send binary objects for basic information such as:
> From: Peter Jeremy [mailto:peter.jer...@alcatel-lucent.com]
> Sent: Sunday, January 30, 2011 3:48 PM
>
> >2- When you want to restore, it's all or nothing. If a single bit is
> >corrupt in the data stream, the whole stream is lost.
> >
> OTOH, it renders ZFS send useless for backup or archival
> From: zfs-discuss-boun...@opensolaris.org [mailto:zfs-discuss-
> boun...@opensolaris.org] On Behalf Of Roy Sigurd Karlsbakk
>
> We're getting down to 10-20MB/s on
Oh, one more thing. How are you measuring the speed? Because if you have data
which is highly compressible, or highly duplicated,
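The message is cut off here, but the underlying point is worth spelling out:
a benchmark fed from /dev/zero dedups and compresses almost perfectly, so it
can report far higher MB/s than the pool would sustain on real data. A rough
illustration (file names are made up), comparing against incompressible input:

    # all-zero data: collapses under dedup/compression, inflates apparent speed
    dd if=/dev/zero of=/tank/zerofile bs=1M count=1024
    # pseudo-random data: cannot be deduped or compressed; a more honest test
    dd if=/dev/urandom of=/tank/randfile bs=1M count=1024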
I'm not personally familiar with VDI, but it feels like the VDI
bits are trying to run pkginfo on a NexentaStor target, which is a
syntax error.
I'm not sure what the fix for that would be.
- Garrett
On Sun, 2011-01-30 at 09:37 +, Thierry Delaitre wrote:
> Hello,
>
> I've got VDI 3.2.1
I'm not sure about the *docs*, but here is my rough estimate:
Assume 1TB of actual used storage. Assume 64K block/slab size. (Not
sure how realistic that is -- it depends totally on your data set.)
Assume 300 bytes per DDT entry.
So we have (1024^4 / 65536) * 300 = 5033164800, or about 5 GB of RAM for one TB.
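The same arithmetic at other block sizes shows how sensitive the estimate is
(still assuming 300 bytes per DDT entry; these figures are not from the
original message):

    echo '(1024^4 / 131072) * 300' | bc   # 128K blocks:  2516582400, ~2.5 GB/TB
    echo '(1024^4 / 65536) * 300' | bc    # 64K blocks:   5033164800, ~5 GB/TB
    echo '(1024^4 / 8192) * 300' | bc     # 8K blocks:   40265318400, ~40 GB/TB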
> From: zfs-discuss-boun...@opensolaris.org [mailto:zfs-discuss-
> boun...@opensolaris.org] On Behalf Of Roy Sigurd Karlsbakk
>
> The test box is a supermicro thing with a Core2duo CPU, 8 gigs of RAM, 4 gigs
> of mirrored SLOG and some 150 gigs of L2ARC on 80GB x25-M drives. The
> data drives are
On 1/30/2011 5:26 PM, Joerg Schilling wrote:
Richard Elling wrote:
ufsdump is the problem, not ufsrestore. If you ufsdump an active
file system, there is no guarantee you can ufsrestore it. The only way
to guarantee this is to keep the file system quiesced during the entire
ufsdump. Needless to say, this renders ufsdump useless for backing up active file systems.
On Jan 30, 2011, at 12:21 PM, stuart anderson wrote:
> Is it possible to partition the global setting for the maximum ARC size
> with finer grained controls? Ideally, I would like to do this on a per
> zvol basis but a setting per zpool would be interesting as well?
While perhaps not perfect, see
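The reference above is cut off, so it is not clear what is being pointed to.
One per-dataset control that does exist is the primarycache property, which
decides whether a dataset's data is admitted to the ARC at all; it is a blunt
on/off switch rather than a size limit (dataset names are illustrative):

    # cache both data and metadata for the hot zvol (the default)
    zfs set primarycache=all tank/hot-zvol
    # keep only metadata in the ARC for zvols that shouldn't compete for DRAM
    zfs set primarycache=metadata tank/cold-zvol
    # secondarycache controls L2ARC admission the same way
    zfs set secondarycache=none tank/cold-zvol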
Richard Elling wrote:
> ufsdump is the problem, not ufsrestore. If you ufsdump an active
> file system, there is no guarantee you can ufsrestore it. The only way
> to guarantee this is to keep the file system quiesced during the entire
> ufsdump. Needless to say, this renders ufsdump useless for backing up active file systems.
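For what it's worth, the usual Solaris workaround is to dump from a locked
fssnap snapshot instead of the live file system (a sketch; the device paths
and mount point are illustrative):

    # create a UFS snapshot, with backing store for copied-on-write blocks
    fssnap -F ufs -o bs=/var/tmp /export/home
    # fssnap prints the snapshot device, e.g. /dev/fssnap/0;
    # dump its raw counterpart instead of the live file system
    ufsdump 0uf /dev/rmt/0 /dev/rfssnap/0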
On Jan 30, 2011, at 11:08 AM, Thierry Delaitre wrote:
> Would you recommend a particular distribution to implement a persistent iscsi
> server compatible with VDI ?
Of course, I will recommend NexentaStor! :-) I would also recommend NFS
over iSCSI, but that is
fodder for another forum...
On Jan 30, 2011, at 12:47 PM, Peter Jeremy wrote:
> On 2011-Jan-28 21:37:50 +0800, Edward Ned Harvey
> wrote:
>> 2- When you want to restore, it's all or nothing. If a single bit is
>> corrupt in the data stream, the whole stream is lost.
>>
>> Regarding point #2, I contend that zfs send is better than ufsdump.
On Jan 30, 2011, at 1:09 PM, Peter Jeremy wrote:
> On 2011-Jan-30 13:39:22 +0800, Richard Elling
> wrote:
>> I'm not sure of the way BSD enumerates devices. Some clever person thought
>> that hiding the partition or slice would be useful.
>
> No, there's no hiding. /dev/ada0 always refers to the entire physical disk.
Hi all
As I've said here on the list a few times, most recently in the thread 'ZFS
not usable (was ZFS Dedup question)', I've been doing some rather thorough
testing of zfs dedup, and as you can see from those posts, the results weren't
very satisfactory. The docs claim 1-2GB of memory usage per terabyte stored
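Rather than arguing from the docs, the DDT's actual footprint can be read off
a test pool with zdb (pool name is illustrative):

    # prints DDT statistics: entry counts, on-disk and in-core bytes per
    # entry, and a histogram of reference counts
    zdb -DD tank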
On Jan 30, 2011, at 11:19 AM, Stuart Anderson wrote:
>
> On Jan 29, 2011, at 10:00 PM, Richard Elling wrote:
>
>> On Jan 29, 2011, at 5:48 PM, stuart anderson wrote:
>>
>>> Is there a simple way to query zfs send binary objects for basic
>>> information such as:
>>>
>>> 1) What snapshot they represent?
On Mon, Jan 31, 2011 at 3:47 AM, Peter Jeremy
wrote:
> On 2011-Jan-28 21:37:50 +0800, Edward Ned Harvey
> wrote:
>>2- When you want to restore, it's all or nothing. If a single bit is
>>corrupt in the data stream, the whole stream is lost.
>>
>>Regarding point #2, I contend that zfs send is better than ufsdump.
On 2011-Jan-30 13:39:22 +0800, Richard Elling wrote:
>I'm not sure of the way BSD enumerates devices. Some clever person thought
>that hiding the partition or slice would be useful.
No, there's no hiding. /dev/ada0 always refers to the entire physical disk.
If it had PC-style fdisk slices, ther
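For anyone unfamiliar with the FreeBSD convention: the names nest rather than
hide anything (an illustrative listing, assuming one MBR slice carrying BSD
labels):

    ls /dev/ada0*
    /dev/ada0      <- the whole physical disk
    /dev/ada0s1    <- PC-style (fdisk/MBR) slice 1
    /dev/ada0s1a   <- BSD partition 'a' inside slice 1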
On 2011-Jan-28 21:37:50 +0800, Edward Ned Harvey
wrote:
>2- When you want to restore, it's all or nothing. If a single bit is
>corrupt in the data stream, the whole stream is lost.
>
>Regarding point #2, I contend that zfs send is better than ufsdump. I would
>prefer to discover corruption in t
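The message is truncated here. If send streams are going to be stored anyway,
one common mitigation for the all-or-nothing failure mode is to keep a
checksum alongside the stream and verify it before restoring (a sketch using
GNU sha256sum; Solaris digest(1) works similarly, and all names are made up):

    # at backup time
    zfs send tank/fs@monday > /backup/fs-monday.zfs
    sha256sum /backup/fs-monday.zfs > /backup/fs-monday.zfs.sha256
    # before restoring, confirm the stream is still intact
    sha256sum -c /backup/fs-monday.zfs.sha256 && \
        zfs receive tank/restored < /backup/fs-monday.zfs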
- Original Message -
> Is it possible to partition the global setting for the maximum ARC size
> with finer grained controls? Ideally, I would like to do this on a per
> zvol basis but a setting per zpool would be interesting as well?
>
> The use case is to prioritize which zvol devices should be fully cached in DRAM
Is it possible to partition the global setting for the maximum ARC size
with finer grained controls? Ideally, I would like to do this on a per
zvol basis but a setting per zpool would be interesting as well?
The use case is to prioritize which zvol devices should be fully cached
in DRAM on a server.
On Jan 29, 2011, at 10:00 PM, Richard Elling wrote:
> On Jan 29, 2011, at 5:48 PM, stuart anderson wrote:
>
>> Is there a simple way to query zfs send binary objects for basic information
>> such as:
>>
>> 1) What snapshot they represent?
>> 2) When they were created?
>> 3) Whether they are t
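The third item is cut off above, but at least the first two can be answered
with zstreamdump from recent OpenSolaris builds, which decodes the stream's
BEGIN record (snapshot name, creation time, and a fromguid field that marks
incremental streams):

    # inspect a stream without receiving it
    zfs send tank/fs@snap | zstreamdump | head -20
    # or inspect a stream previously saved to a file
    zstreamdump < /backup/fs.zfs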
Would you recommend a particular distribution to implement a persistent
iscsi server compatible with VDI ?
Thierry.
From: Richard Elling [mailto:richard.ell...@gmail.com]
Sent: 30 January 2011 16:28
To: Thierry Delaitre
Cc: zfs-discuss@opensolaris.org
Subject: Re: [zfs-discuss] VDI, ZFS an
On Jan 30, 2011, at 1:37 AM, Thierry Delaitre wrote:
> Hello,
>
> I've got VDI 3.2.1 and I'm experiencing ZFS iSCSI persistence problems after
> rebooting the ZFS Solaris 10 (s9/10 s10x_u9wos_14a X86) server, so I tried to
> use NexentaOS_134f since, according
> to http://sun.systemnews.com/articles/145/5/Virtualization/22991, VDI 3.1.1 supports COMSTAR.
On Jan 30, 2011, at 4:31 AM, Mike Tancsa wrote:
> On 1/30/2011 12:39 AM, Richard Elling wrote:
>>> Hmmm, doesn't look good on any of the drives.
>>
>> I'm not sure of the way BSD enumerates devices. Some clever person thought
>> that hiding the partition or slice would be useful. I don't find it useful.
On 1/30/2011 12:39 AM, Richard Elling wrote:
>> Hmmm, doesn't look good on any of the drives.
>
> I'm not sure of the way BSD enumerates devices. Some clever person thought
> that hiding the partition or slice would be useful. I don't find it useful.
> On a Solaris
> system, ZFS can show a disk
Hello,
I've got VDI 3.2.1 and I'm experiencing ZFS iSCSI persistence problems after
rebooting the ZFS Solaris 10 (s9/10 s10x_u9wos_14a X86) server, so I tried to
use NexentaOS_134f since, according
to http://sun.systemnews.com/articles/145/5/Virtualization/22991, VDI 3.1.1
supports COMSTAR.
However, with nexent