Actually it does if you have compression turned on
and the blocks
compress away to 0 bytes.
See
http://src.opensolaris.org/source/xref/onnv/onnv-gate/usr/src/uts/common/fs/zfs/zio.c#zio_write_bp_init
Specifically line 1005:
1005 if (psize == 0) {
1006
Because of that I'm thinking that I should try
to change the hostid when booted from the CD to be
the same as the previously installed system to see if
that helps - unless that's likely to confuse it at
all...?
I've now tried changing the hostid using the code from
so has anyone done it successfully on Solaris 10 sparc?
On Fri, Jul 2, 2010 at 2:42 PM, Darren J Moffat darr...@opensolaris.org wrote:
On 02/07/2010 17:57, Cindy Swearingen wrote:
I think the answer is no, you cannot rename the root pool and expect
that any other O/S-related boot operation
To summarise, putting 28 disks in a single vdev is nothing you would do if you
want performance. You'll end up with as many IOPS as a single drive can do.
Split it up into smaller (10 disk) vdevs and try again. If you need high
performance, put them in a striped mirror (aka RAID1+0).
I am sorry you feel that way. I will look at your issue as soon as I am able,
but I should say that it is almost certain that whatever the problem is, it
probably is inherited from OpenSolaris and the build of NCP you were testing
was indeed not the final release so some issues are not
R. Eulenberg ron2105 at web.de writes:
I was setting up a new system (osol 2009.06 and updating to the latest
version of osol/dev - snv_134 - with deduplication) and then I tried to
import my backup zpool, but it does not work.
# zpool
On Wed, Jun 30, 2010 at 12:54:19PM -0400, Edward Ned Harvey wrote:
If you're talking about streaming to a bunch of separate tape drives (or
whatever) on a bunch of separate systems because the recipient storage is
the bottleneck instead of the network ... then split probably isn't the
most
Victor,
The zpool import succeeded on the next attempt following the crash that I
reported to you by private e-mail!
For completeness, this is the final status of the pool:
  pool: tank
 state: ONLINE
  scan: resilvered 1.50K in 165h28m with 0 errors on Sat Jul 3 08:02:30 2010
config:
Hello,
I finally got the new drive and I am in the process of moving the data. The
problem I have now is that I can't mount the NTFS partition. I followed the
directions here:
http://sun.drydog.com/faq/9.html
and tried both methods, but the problem is that when I run fdisk on the ntfs
drive,
On 7/3/2010 2:22 PM, Roy Sigurd Karlsbakk wrote:
To summarise, putting 28 disks in a single vdev is nothing you
would do if you want performance. You'll end up with as many
IOPS as a single drive
Hello,
I'm using opensolaris b134 and I'm trying to mount a ntfs partition. I followed
the instructions located here:
http://sun.drydog.com/faq/9.html
and tried both methods, but the problem is that when I run fdisk on the ntfs
drive, it does not detect the partitions. In all the tutorials,
On 07/ 4/10 02:54 PM, zfsnoob4 wrote:
Hello,
I'm using opensolaris b134 and I'm trying to mount a ntfs partition. I followed
the instructions located here:
http://sun.drydog.com/faq/9.html
You have posted to the wrong list, opensolaris-help would be more
appropriate so I've copied that