Hi,
Anyone who has experience with 3TB HDDs in ZFS? Can Solaris recognize these new
drives?

I haven't tested them, but we're using multi-terabyte iSCSI volumes now, so I
don't really see what could be different. The only possible issue I know of is
that 3TB drives use 4k sectors, which might not be optimal in all environments.
Kind regards / Best regards
Hi,
I have some space-allocation output which I can't explain. I hope someone
can point me in the right direction.
The allocation of my home filesystem looks like this:
jo...@onix$ zfs list -o space p0/home
NAME     AVAIL  USED  USEDSNAP  USEDDS  USEDREFRESERV  USEDCHILD
p0/home  31.0G  156G
On 6 December 2010 21:43, Fred Liu fred_...@issi.com wrote:
3TB HDD needs UEFI not the traditional BIOS and OS support.
Fred
Fred:
http://www.anandtech.com/show/3858/the-worlds-first-3tb-hdd-seagate-goflex-desk-3tb-review/2
Namely:
a feature of GPT is 64-bit LBA support. With 64-bit LBAs
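The arithmetic behind that distinction is worth spelling out: the MBR partition scheme uses 32-bit LBAs, so with traditional 512-byte logical sectors it tops out at 2 TiB, which is why a 3TB disk needs GPT for its full capacity. A quick sketch of the numbers (512 B is the traditional logical sector size; the "3 TB" figure is the decimal marketing size):

```python
# Why 3 TB drives need GPT: MBR's 32-bit LBA limit with 512 B logical sectors.
SECTOR = 512

mbr_limit = (2 ** 32) * SECTOR          # bytes addressable with 32-bit LBAs
print(mbr_limit // 2 ** 40, "TiB")      # 2 TiB

three_tb = 3 * 10 ** 12                 # a "3 TB" drive (decimal terabytes)
print(three_tb > mbr_limit)             # True: beyond MBR's reach

gpt_limit = (2 ** 64) * SECTOR          # GPT's 64-bit LBAs
print(gpt_limit // 2 ** 70, "ZiB")      # 8 ZiB
```

Note that UEFI is only required for *booting* from a GPT disk; a GPT-labelled data disk works fine under legacy BIOS, which is why the OS-support question and the firmware question are separate.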
I'm trying to individually upgrade drives in my raidz configuration, but I
accidentally added my replacement drive as a top-level vdev instead of into the
raidz1 under it..
Right now things look like this:

  NAME        STATE     READ WRITE CKSUM
  tank        DEGRADED     0     0     0
    ad4s1d    ONLINE       0     0     0
    raidz1    DEGRADED     0     0     0
      ad6s1d  ONLINE       0     0     0
      ad8s1d  UNAVAIL      0     0     0  cannot open
Alas, you are hosed. There is at the moment no way to shrink a pool, which is
what you would now need to be able to do.
Back up and restore, I'm afraid.
--
This message posted from opensolaris.org
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
On 05 December, 2010 - Chris Gerhard sent me these 0,3K bytes:
Alas you are hosed. There is at the moment no way to shrink a pool which is
what you now need to be able to do.
back up and restore I am afraid.
.. or add a mirror to that drive, to keep some redundancy.
/Tomas
--
Tomas
From: zfs-discuss-boun...@opensolaris.org [mailto:zfs-discuss-
boun...@opensolaris.org] On Behalf Of Joost Mulders
This tells me that *86,7G* is used by *snapshots* of this filesystem.
However, when I look at the space allocation of the snapshots themselves, I
don't see the 86,7G accounted for!
jo...@onix$
Thanks for the pointer. AFAIK there are no clones involved. The output
of zdb -d p0 is below. I found no differences between that and the output
of the zfs list command.
r...@onix# zdb -d p0 | egrep 'p0/home' | sort
Dataset p0/home [ZPL], ID 33, cr_txg 432, 69.7G, 192681 objects
Dataset p0/h...@s1
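A common explanation for this kind of gap is ZFS's snapshot space accounting: the USED column of an individual snapshot counts only blocks unique to that snapshot, while the filesystem's USEDSNAP counts everything that would be freed if *all* snapshots were destroyed, including blocks shared between two or more snapshots, which are charged to no single snapshot. A minimal sketch of that accounting (the snapshot names and block sizes below are made up, not from this pool):

```python
# Hedged sketch of ZFS snapshot space accounting. Each snapshot references
# a set of (block_id, size_in_GB) pairs no longer referenced by the live fs.
snapshots = {
    "s1": {("a", 10), ("b", 20)},   # hypothetical blocks
    "s2": {("b", 20), ("c", 30)},   # block "b" is shared with s1
}

def used(snap):
    """Per-snapshot USED: only blocks unique to this snapshot."""
    others = set().union(*(s for n, s in snapshots.items() if n != snap))
    return sum(size for blk, size in snapshots[snap] - others)

def usedsnap():
    """Filesystem USEDSNAP: every block held only by snapshots."""
    return sum(size for blk, size in set().union(*snapshots.values()))

print(used("s1"), used("s2"))   # 10 30 -> per-snapshot USED columns
print(usedsnap())               # 60    -> includes the shared block "b"
```

So USEDSNAP can legitimately exceed the sum of the snapshots' individual USED values; on newer ZFS versions a dry-run destroy (`zfs destroy -nv`) over a range of snapshots is one way to see how much a group would actually free together.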
Folks,
Command "zpool get all poolName" does not provide any option to generate
parsable output. The returned output contains 4 fields: name, property,
value and source. These fields seem to be separated by spaces. I am
wondering if it is safe to assume that there are no spaces in the field
values.
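As it turns out later in the thread, spaces can appear in the value field, so splitting each line on every run of whitespace is not safe. Since the output is column-aligned, one workable approach is to take the field boundaries from the header line's column positions. A sketch under that assumption (the sample text below is illustrative, not captured from a real pool, and this breaks if a long value disturbs the alignment):

```python
# Hedged sketch: parse column-aligned `zpool get` output by slicing at the
# header's column offsets, so a VALUE containing spaces survives intact.
sample = """\
NAME  PROPERTY  VALUE           SOURCE
tank  size      1.81T           -
tank  comment   my backup pool  local
"""

def parse(text):
    header, *lines = text.splitlines()
    # Column start offsets, taken from where each title sits in the header.
    cols = [header.index(h) for h in ("NAME", "PROPERTY", "VALUE", "SOURCE")]
    rows = []
    for line in lines:
        fields = [line[s:e].strip() for s, e in zip(cols, cols[1:] + [None])]
        rows.append(tuple(fields))
    return rows

for row in parse(sample):
    print(row)
```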
Hi all
The numbers I've heard say the iops for a raidzN volume should be about
the iops of the slowest drive in the set. While this might sound like a
reasonable baseline, I tend to disagree. I've been doing some testing
on some raidz2 volumes with various sizes and similar
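For context, the usual reasoning behind that rule of thumb: for small random reads, every data disk in a raidzN stripe participates in reading each logical block, so the vdev services roughly one random I/O at a time and its IOPS floor is that of a single (slowest) drive, whereas each side of a mirror can serve independent reads. A back-of-the-envelope sketch (the drive counts and per-drive IOPS figures are made-up numbers, not measurements):

```python
# Hedged back-of-the-envelope model of random-read IOPS per vdev layout.
def raidz_read_iops(drive_iops):
    # All data disks seek together for each block: ~single-drive IOPS,
    # bounded by the slowest member.
    return min(drive_iops)

def mirror_read_iops(drive_iops):
    # Each mirror side can service independent reads concurrently.
    return sum(drive_iops)

drives = [100, 100, 100, 90]    # hypothetical per-drive random-read IOPS

print(raidz_read_iops(drives))        # 90  -> 4-disk raidz estimate
print(mirror_read_iops(drives[:2]))   # 200 -> 2-way mirror estimate
```

Real measurements diverge from this model because recordsize, ARC caching, prefetch, and queue depth all intervene, which is presumably what the testing referred to above is probing.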
Hi,
Thank you for your help.
I actually had the script working. However, I just wanted to make sure that
spaces are not permitted within the field value itself. Otherwise, the regular
expression would break.
Regards,
Peter
On Mon, Dec 6 at 23:22, Roy Sigurd Karlsbakk wrote:
Hi all
The numbers I've heard say the number of iops for a raidzn volume
should be about the number of iops for the slowest drive in the
set. While this might sound like a good base point, I tend to
disagree. I've been doing some testing on
Spaces are permitted in the value field. We (myself and Nexenta) use them
extensively.
-- richard
On Dec 6, 2010, at 1:40 PM, Peter Taps wrote:
Folks,
Command zpool get all poolName does not provide any option to generate
parsable output. The returned output contains 4 fields - name,
As is altogether far too common an occurrence, we were having a problem
where a file was not inheriting the correct ACL, but rather a horribly
munged one, resulting in incorrect permissions and security problems.
It appeared something was chmod'ing the file after creation, but despite
best efforts
On Sun, Dec 5, 2010 at 9:35 PM, Fred Liu fred_...@issi.com wrote:
Anyone who has experience with 3TB HDD in ZFS? Can solaris recognize this
new HDD?
There shouldn't be any problems using a 3TB drive with Solaris, so
long as you're using a 64-bit kernel. Recent versions of zfs should
properly recognize the 4k sector size as well.
On 7 December 2010 13:25, Brandon High bh...@freaks.com wrote:
There shouldn't be any problems using a 3TB drive with Solaris, so
long as you're using a 64-bit kernel. Recent versions of zfs should
properly recognize the 4k sector size as well.
I think you'll find that these 3TB, 4KiB physical sector drives are still
exporting logical sectors of 512B (this is what Anandtech has indicated, anyway).
It's based on a jumper on most new drives.
On Dec 6, 2010 8:41 PM, taemun tae...@gmail.com wrote:
On 7 December 2010 13:25, Brandon High bh...@freaks.com wrote:
There shouldn't be any problems using a 3TB drive with Solaris, so
long as you're using a 64-bit kernel. Recent versions of zfs
On Mon, Dec 6, 2010 at 6:40 PM, taemun tae...@gmail.com wrote:
I think you'll find that these 3TB, 4KiB physical sector drives are still
exporting logical sectors of 512B (this is what Anandtech has indicated,
anyway). ZFS assumes that the drive's logical sectors are directly mapped to
physical
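The practical consequence of 512B-logical / 4KiB-physical ("Advanced Format") drives is read-modify-write: any write that is not 4 KiB aligned and 4 KiB sized touches a physical sector only partially, so the drive must read that sector, merge, and rewrite it. A small sketch of the arithmetic (the sector sizes are the standard 512/4096; the write patterns are made-up examples):

```python
# Hedged sketch: count the 4 KiB physical sectors touched by a write that
# is issued in 512 B logical sectors, and whether it forces read-modify-write.
LOGICAL, PHYSICAL = 512, 4096

def physical_sectors_touched(lba, nblocks):
    """lba and nblocks are in 512 B logical sectors."""
    start = lba * LOGICAL
    end = start + nblocks * LOGICAL
    return (end - 1) // PHYSICAL - start // PHYSICAL + 1

def needs_rmw(lba, nblocks):
    # An aligned start plus a whole-multiple length avoids read-modify-write.
    return (lba * LOGICAL) % PHYSICAL != 0 or (nblocks * LOGICAL) % PHYSICAL != 0

print(physical_sectors_touched(0, 8), needs_rmw(0, 8))  # 1 False (aligned 4 KiB)
print(physical_sectors_touched(1, 8), needs_rmw(1, 8))  # 2 True  (misaligned)
```

This is why the sector-size question matters to ZFS: if the pool is created believing sectors are 512 B, it will happily issue misaligned writes and pay the read-modify-write penalty on every one.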
On 7 December 2010 13:55, Tim Cook t...@cook.ms wrote:
It's based on a jumper on most new drives.
Can you back that up with anything? I've never seen anything but requests
for a jumper that forces the firmware to export 4KiB sectors.
WD EARS at launch provided the ability to force the
Hi Mark,
I've tried running zpool attach media ad24 ad12 (ad12 being the new disk)
and I get no response. I tried leaving the command running for an extended
period of time and nothing happens.
Thoughts?
On Fri, Dec 3, 2010 at 2:09 PM, Mark J Musante mark.musa...@oracle.comwrote:
On Fri, 3 Dec