The zdb output for the zpool is as below:

bash-3.00# zdb ttt
    version=15
    name='ttt'
    state=0
    txg=4
    pool_guid=4724975198934143337
    hostid=69113181
    hostname='cdc-x4100s8'
    vdev_tree
        type='root'
        id=0
        guid=4724975198934143337
        children[0]
On Wed, Apr 14, 2010 at 09:58:50AM -0700, Richard Elling wrote:
On Apr 14, 2010, at 8:57 AM, Yariv Graf wrote:
From my experience dealing with 4 TB, you stop writing after 80% of zpool
utilization
YMMV. I have routinely completely filled zpools. There have been some
improvements in
Eight hot-swap bays is not too much. The rest looks like a cakewalk for OSol. But
with this HW you can't go for 2009.06 anyhow, as the ICH10 won't be recognized. (I
tried this on X58.)
I have a 2U enclosure as well (12-bay), but I'd opt for at least 3U next time,
as there are too many restrictions
hello
if you want to compare it against Openfiler, i would suggest not using
OpenSolaris itself (too much desktop stuff) but a more server-like OpenSolaris
distribution like EON (minimal OpenSolaris + napp-it) or NexentaStor Community
Edition (free version of their commercial storage server
I would be really interested how you got past this
http://defect.opensolaris.org/bz/show_bug.cgi?id=11371
which I was so badly bitten by that I considered
giving up on OpenSolaris.
I don't get random hangs in normal use; so I haven't
done anything to get
past this.
I DO get hangs
After attempting unsuccessfully to replace a failed drive in a 10-drive raidz2
array, and after reading as many forum entries as I could find, I followed a
suggestion to export and import the pool.
In another attempt to import the pool I reinstalled the OS, but I have so far
been unable to import the
And for the same zpool, the same issue was observed when I tried to import it,
and I also got a core dump:
bash-3.00# zpool import ttt
internal error: Value too large for defined data type
Abort (core dumped)
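The error string here is errno EOVERFLOW. One hedged way to narrow it down (a sketch, assuming a Solaris build with truss available) is to trace the import and see which system call returns that errno:

```shell
# Sketch: trace the failing import; EOVERFLOW in the truss output points at
# the system call (often a stat/ioctl on one device) that trips zpool.
truss -f zpool import ttt 2>&1 | grep EOVERFLOW
```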
From: zfs-discuss-boun...@opensolaris.org [mailto:zfs-discuss-
boun...@opensolaris.org] On Behalf Of John
Just to add more details: the issue only occurred on the first direct
access to the file.
From a Windows client that has never accessed the file, you can issue:
dir
I had a zpool of 3 x 1 TB disks in a raidz1 configuration. After installing a new
version of FreeNAS (the OS is not important) I accidentally created a new zpool
over the existing one (it took just a few seconds). Now I can see the empty
raidz1 (only a few KB occupied). I didn't write any file to disk
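A hedged triage sketch for this situation (pool and device names below are examples; recovery is unlikely where the new pool's labels overwrote the old ones, but it costs nothing to look):

```shell
# Stop all writes first: export the accidentally created pool.
zpool export tank            # "tank" is an example name for the new pool
# Inspect each member disk for surviving labels from the old pool:
zdb -l /dev/rdsk/c1t0d0s0    # example device path; repeat per disk
# Destroyed-but-intact pools can sometimes be listed for import:
zpool import -D
```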
Thank you for the corrections.
Also I forgot about using an SSD to assist. My bad. =)
--
This message posted from opensolaris.org
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss
Hi,
After a little bit more digging I found in /var/adm/messages:-
Mar 25 13:13:08 brszfs02 scsi: [ID 107833 kern.warning] WARNING:
/p...@0,0/pci-...@1f,2/i...@1 (ata1):
Mar 25 13:13:08 brszfs02 timeout: early timeout, target=1 lun=0
Mar 25 13:13:08 brszfs02 gda: [ID 107833 kern.warning]
My understanding of passthrough disk from the Areca documentation is that
single drives are exempted from the RAID controller regime and that the port
will behave just like a plain HBA port.
Now, on my Areca controller (R.I.P.), that mode always created the biggest havoc
with ZFS/OpenSolaris,
Just to clarify: there is only one file on the OpenSolaris CIFS server, called
myfile.txt. myfile.TXT does not really exist, but the CIFS server should be
case-insensitive and return myfile.txt.
I did sniff the traffic, and OpenSolaris returns 'file not found' when accessing
myfile.TXT.
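What the server is expected to do can be sketched in plain shell. ci_lookup below is a hypothetical helper (not part of the CIFS server) that resolves a name the way a case-insensitive dataset would:

```shell
# Hypothetical helper: resolve a filename case-insensitively, the way a
# casesensitivity=insensitive dataset would for CIFS clients.
ci_lookup() {
  dir=$(dirname "$1")
  want=$(basename "$1" | tr '[:upper:]' '[:lower:]')
  for f in "$dir"/*; do
    if [ "$(basename "$f" | tr '[:upper:]' '[:lower:]')" = "$want" ]; then
      echo "$f"       # print the real on-disk name
      return 0
    fi
  done
  return 1            # no match under any casing
}
```

So `ci_lookup /share/myfile.TXT` would print `/share/myfile.txt` when only the lowercase file exists.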
At 3:08pm, Daniel Carosone wrote:
On Wed, Apr 14, 2010 at 08:48:42AM -0500, Paul Archer wrote:
So I turned deduplication on on my staging FS (the one that gets mounted
on the database servers) yesterday, and since then I've been seeing the
mount hang for short periods of time off and on. (It
Yesterday, Erik Trimble wrote:
Daniel Carosone wrote:
On Wed, Apr 14, 2010 at 08:48:42AM -0500, Paul Archer wrote:
So I turned deduplication on on my staging FS (the one that gets mounted
on the database servers) yesterday, and since then I've been seeing the
mount hang for short periods of
At 3:26pm, Daniel Carosone wrote:
On Wed, Apr 14, 2010 at 09:04:50PM -0500, Paul Archer wrote:
I realize that I did things in the wrong order. I should have removed the
oldest snapshot first, on to the newest, and then removed the data in the
FS itself.
For the problem in question, this is
On Fri, Mar 26, 2010 at 4:29 PM, Slack-Moehrle mailingli...@mailnewsrss.com
wrote:
OK, so I made progress today. FreeBSD sees all of my drives, and ZFS is acting
correctly.
Now for my confusion.
RAIDz3
# zpool create datastore raidz3 da0 da1 da2 da3 da4 da5 da6 da7
Gives: 'raidz3' no such
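raidz3 (triple-parity RAID-Z) was added in zpool version 17, and the FreeBSD releases of that era shipped an older pool version, which would explain the error. A sketch of checking support and falling back (disk names taken from the command above):

```shell
# Check whether this ZFS knows about triple-parity RAID-Z (pool version 17):
zpool upgrade -v | grep -i 'triple'
# If not, raidz2 with the same disks is the closest supported layout:
zpool create datastore raidz2 da0 da1 da2 da3 da4 da5 da6 da7
```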
On Wed, 14 Apr 2010 17:54:02 +0200, Olga Kryzhanovska wrote:
Can I use getconf to test if a ZFS file system is mounted in case
insensitive mode?
we would have to put in the zfs query (hopefully more generic than just for zfs)
the only currently working case-insensitive checks
On 15/04/10 06:29 PM, Günther wrote:
hello
if you want to compare it against Openfiler, i would suggest
not to use opensolaris itself (too much desktop stuff) but a
more server like opensolaris distribution like eon (minimal
opensolaris + napp-it) or nexentastor community edition (free
gea wrote:
if you want to compare it against Openfiler, i would suggest
not to use opensolaris itself (too much desktop stuff) but a
more server like opensolaris distribution like eon (minimal
opensolaris + napp-it) or nexentastor community edition (free
version of their commercial
Thanks for the tips.
I tried EON, but it is too minimalistic; I plan to use this server for other
things as well (a monitoring server, etc.)
Nexenta is a strange hybrid, and whether to use the non-commercial version,
without its extra abilities, i don't know...
napp-it i'll try for sure
hello dr245
free NexentaStor Community Edition = commercial edition without support,
without additions like high availability or VMware/Xen management,
and limited to 12 TB
Nexenta (Core) is just the same system (OpenSolaris b134+ kernel with unix tools
and handling); the software will be the same
jcm == James C McPherson james.mcpher...@oracle.com writes:
ga == Günther Alka a...@hfg-gmuend.de writes:
jcm I am amazed that you believe OpenSolaris binary distro has too
jcm much desktop stuff. Most people I have come across are firmly
jcm of the belief that it does not have enough.
Hi Richard,
Hm, I guess I misunderstand the function of uberblocks. I thought uberblocks
contained pointers (to...?) which the system then uses to retrieve the files.
If I'm incorrect in thinking that I could use an older uberblock to retrieve
the data, what am I missing?
I've tried to find
On 04/14/10 11:48 AM, Glenn Fowler wrote:
On Wed, 14 Apr 2010 17:54:02 +0200, Olga Kryzhanovska wrote:
Can I use getconf to test if a ZFS file system is mounted in case
insensitive mode?
we would have to put in the zfs query (hopefully more generic than just for zfs)
the
/usr/bin/getconf _PC_CASE_BEHAVIOR /tmp
getconf: Invalid argument (_PC_CASE_BEHAVIOR)
I'm sure this is a bug, right?
Olga
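Until getconf (or pathconf()) grows _PC_CASE_BEHAVIOR support, a crude empirical probe is possible. This is a sketch, not a real pathconf query; probe_case is a hypothetical helper name:

```shell
# Sketch: probe a directory empirically by creating a lowercase-named file
# and looking it up under a different case. Requires write access.
probe_case() {
  probe="$1/.caseprobe$$"
  : > "$probe" || { echo "unwritable"; return 2; }
  upper="$1/$(basename "$probe" | tr '[:lower:]' '[:upper:]')"
  if [ -e "$upper" ]; then
    echo "case-insensitive"
  else
    echo "case-sensitive"
  fi
  rm -f "$probe"
}
probe_case /tmp     # a default (casesensitivity=sensitive) fs reports case-sensitive
```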
On Thu, Apr 15, 2010 at 9:52 PM, Tim Haley tim.ha...@oracle.com wrote:
On 04/14/10 11:48 AM, Glenn Fowler wrote:
On Wed, 14 Apr 2010 17:54:02 +0200
free nexentastor community edition = commercial edition without support,
You have opened my eyes :)
I've started the download and will take a look tomorrow
Is it? I don't really understand the nexenta license, which is why I
don't bother with it.
In the simplest terms,
NCP (nexenta.org) = Free as in speech/beer
NexentaStor Community Edition (nexentastor.org) = Free as in beer
- NCP underneath + closed WebGUI + FOSS plugins
NexentaStor
I'm looking to move our file storage from Windows to Opensolaris/zfs. The
windows box will be connected through 10g for iscsi to the storage. The windows
box will continue to serve the windows clients and will be hosting
approximately 4TB of data.
The physical box is a Sun Fire X4240, single
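For the iSCSI side, a hedged COMSTAR sketch (build-134-era commands; the pool and zvol names are examples, and the LU GUID comes from sbdadm's own output):

```shell
# Sketch: carve a zvol for the Windows box and export it over iSCSI.
zfs create -V 4t tank/winlun                   # 4 TB zvol backing the LUN
sbdadm create-lu /dev/zvol/rdsk/tank/winlun    # register it as a SCSI LU
stmfadm add-view "$LU_GUID"                    # GUID printed by sbdadm create-lu
itadm create-target                            # bring up an iSCSI target
```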
I've got a Supermicro AOC-USAS-L8i on the way because I gather from these
forums that it works well. I'll just wait for that, then try 8 disks on that and
4 on the motherboard SATA ports.
Tim, did ZFS or CIFS add other pathconf() values which need to be
implemented in getconf or ksh93?
Olga
2010/4/15 ольга крыжановская olga.kryzhanov...@gmail.com:
/usr/bin/getconf _PC_CASE_BEHAVIOR /tmp
getconf: Invalid argument (_PC_CASE_BEHAVIOR)
I'm sure this is a bug, right?
Olga
On
On Apr 15, 2010, at 12:39 PM, fred pam wrote:
Hi Richard,
Hm, I guess I misunderstand the function of uberblocks. I thought uberblocks
contained pointers (to...?) which the system then uses to retrieve the files.
uberblocks are the trunk of the tree.
If I'm incorrect in thinking that I
hello
do you want to use it as an smb fileserver, or do you want to have other
windows services? if you want to use it as a file server only, i would suggest
using the built-in cifs server.
iscsi will always be slower than the native cifs server, and you get snapshots
via the windows "previous versions" property
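The suggestion above can be sketched with the OpenSolaris-era service and dataset commands. The dataset name is an example, and note that casesensitivity can only be set when the dataset is created:

```shell
# Sketch (assumes an OpenSolaris build with the in-kernel CIFS server).
svcadm enable -r smb/server            # start the kernel CIFS service
smbadm join -w WORKGROUP               # workgroup mode; AD join differs
# casesensitivity=mixed suits Windows clients; set at creation time only:
zfs create -o casesensitivity=mixed -o sharesmb=on tank/files
```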
zpool import can be a little pessimistic about corrupted labels.
First, try physically removing the problem disk and try to import again.
If that doesn't work, then verify the labels on each disk using:
zdb -l /dev/rdsk/c5d1s0
each disk should have 4 readable labels.
-- richard
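The per-disk check above can be scripted. A sketch assuming Solaris /dev/rdsk paths (adjust the c5d* pattern for your controller):

```shell
# Sketch: count readable labels on each member disk (paths are examples).
for dev in /dev/rdsk/c5d*s0; do
  n=$(zdb -l "$dev" | grep -c '^LABEL')   # zdb prints LABEL 0 .. LABEL 3
  echo "$dev: $n of 4 labels readable"
done
```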
On Apr
On Tue, Apr 13 at 9:52, Bob Friesenhahn wrote:
On Mon, 12 Apr 2010, Eric D. Mudama wrote:
The advantage of TRIM, even in high-end SSDs, is that it effectively gives
the device considerable extra space for garbage collection and wear
management when not all
hi all
i'm brand new to opensolaris ... feel free to call me a noob :)
i need to build a home server for media and general storage
zfs sounds like the perfect solution
but i need to buy an 8-port (or more) SATA controller
any suggestions for products compatible with opensolaris will be really