looking for maximum performance with availability, so that narrows it down to a mirrored pool, unless your PostgreSQL workload is so specific that raidz would fit, but beware of the performance hit.
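Something along these lines (an untested sketch; the pool name, device names, and the 8K recordsize are my assumptions, not from this thread):

  zpool create pgpool mirror c0t0d0 c0t1d0 mirror c0t2d0 c0t3d0
  zfs create -o recordsize=8k pgpool/pgdata   # match PostgreSQL's 8KB page size

Mirrors keep random reads fast because either side of each mirror can service them.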
Regards,
--
Giovanni Tirloni
/datasets from each one
independently.
http://download.oracle.com/docs/cd/E19963-01/html/821-1448/index.html
http://download.oracle.com/docs/cd/E18752_01/html/819-5461/index.html
--
Giovanni Tirloni
The system shouldn't panic just because it can't import a pool.
Try booting with the kernel debugger on (add -kv to the grub kernel
line). Take a look at dumpadm.
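For reference, roughly what that looks like (assuming the usual OpenSolaris menu.lst layout; paths vary by build):

  # In /rpool/boot/grub/menu.lst, append -kv to the kernel$ line:
  kernel$ /platform/i86pc/kernel/$ISADIR/unix -kv
  # Then review the crash dump configuration:
  dumpadm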
--
Giovanni Tirloni
the "that is not a problem" comments. With the 7000s appliance, I've heard that the 900-hour estimated resilver time was normal and everything was working as expected. I can't help but think there is some walled-garden syndrome floating around.
--
Giovanni Tirloni
On Wed, May 4, 2011 at 9:04 PM, Brandon High bh...@freaks.com wrote:
On Wed, May 4, 2011 at 2:25 PM, Giovanni Tirloni gtirl...@sysdroid.com wrote:
The problem we've started seeing is that a zfs send -i is taking hours to send a very small amount of data (e.g. 20GB in 6 hours) while a zfs
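A crude way to time the send side alone, in case others want to compare (snapshot names are hypothetical):

  /usr/bin/time zfs send -i tank/fs@snap1 tank/fs@snap2 > /dev/null

If that completes quickly, the bottleneck is the network or the receiving side instead.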
--
Giovanni Tirloni
Anyone else with over 600 hours of resilver time? :-)
Thank you,
Giovanni Tirloni (gtirl...@sysdroid.com)
for this type of situation?
http://bugs.opensolaris.org/bugdatabase/view_bug.do?bug_id=6899970
--
Giovanni Tirloni
gtirl...@sysdroid.com
the LSI 2004/2008 HBAs connected to the backplane (both 3Gb/s
and 6Gb/s).
The MegaRAID ELP, when connected to the same backplane, doesn't exhibit
that behavior.
--
Giovanni Tirloni
gtirl...@sysdroid.com
Solaris and refuses to investigate it.
--
Giovanni Tirloni
gtirl...@sysdroid.com
basis. It's not a question of if they'll happen but how often.
--
Giovanni Tirloni
gtirl...@sysdroid.com
builds.
--
Giovanni Tirloni
gtirl...@sysdroid.com
in a mirror. I've always wondered what exactly it was doing, since it was supposed to be 30 seconds' worth of data. It also generates lots of checksum errors.
--
Giovanni Tirloni
gtirl...@sysdroid.com
Was the autoreplace code supposed to replace the faulty disk and release the spare when the resilver is done?
Thank you,
--
Giovanni Tirloni
gtirl...@sysdroid.com
, then just
detach it manually.
Yes, that's working as expected (spare detaches after resilver).
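For the archives, the manual step is just (pool and device names hypothetical):

  zpool detach tank c7t3d0   # releases the hot spare once the resilver has completed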
--
Giovanni Tirloni
gtirl...@sysdroid.com
have reported works OK.
If you want to try something in between b111 and b134, see the
following instructions:
http://blogs.sun.com/observatory/entry/updating_to_a_specific_build
--
Giovanni Tirloni
gtirl...@sysdroid.com
a vdev is degraded).
Thank you,
--
Giovanni Tirloni
gtirl...@sysdroid.com
On Fri, Jul 23, 2010 at 11:59 AM, Richard Elling rich...@nexenta.com wrote:
On Jul 23, 2010, at 2:31 AM, Giovanni Tirloni wrote:
Hello,
We've seen some resilvers on idle servers that are taking ages. Is it
possible to speed up resilver operations somehow?
E.g. iostat shows 5MB/s writes
On Fri, Jul 23, 2010 at 12:50 PM, Bill Sommerfeld
bill.sommerf...@oracle.com wrote:
On 07/23/10 02:31, Giovanni Tirloni wrote:
We've seen some resilvers on idle servers that are taking ages. Is it
possible to speed up resilver operations somehow?
E.g. iostat shows 5MB/s writes
these issues.
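One data point for future readers: on builds with the rewritten scan code (roughly b142 onward), resilver pacing is controlled by kernel tunables that can be poked live with mdb. The names below are from that code and may differ on your build, so verify them before writing anything:

  echo "zfs_resilver_delay/W0t0" | mdb -kw           # remove the inter-I/O delay
  echo "zfs_resilver_min_time_ms/W0t3000" | mdb -kw  # give resilver more time per txg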
--
Giovanni Tirloni
gtirl...@sysdroid.com
On Mon, Jul 19, 2010 at 7:12 AM, Joerg Schilling
joerg.schill...@fokus.fraunhofer.de wrote:
Giovanni Tirloni gtirl...@sysdroid.com wrote:
On Sun, Jul 18, 2010 at 10:19 PM, Miles Nordin car...@ivy.net wrote:
IMHO it's important we don't get stuck running Nexenta in the same spot we're in now with version 134.
Have you enabled compression or deduplication?
Check the disks with `iostat -XCn 1` (look for high asvc_t times) and
`iostat -En` (hard and soft errors).
--
Giovanni Tirloni
gtirl...@sysdroid.com
see/change these thresholds?
--
Giovanni Tirloni
gtirl...@sysdroid.com
I can't
get zpool status to show my pool.
vdev_path = /dev/dsk/c9t0d0s0
vdev_devid = id1,s...@ahitachi_hds7225scsun250g_0719bn9e3k=vfa100r1dn9e3k/a
parent_guid = 0xb89f3c5a72a22939
Does format(1M) show the devices where they once were?
--
Giovanni Tirloni
gtirl...@sysdroid.com
/messages
Perhaps with the additional information someone here can help you better. I don't have any experience with Windows 7, so I can't guarantee that it hasn't messed with the disk contents.
--
Giovanni Tirloni
gtirl...@sysdroid.com
the opposite. Some companies are successfully doing the opposite of what you describe: they are using standard parts and a competent staff that knows how to create solutions out of them, without having to pay for GUI-powered systems and a 4-hour on-site part-swapping service.
--
Giovanni Tirloni
gtirl...@sysdroid.com
and packages, so I can't envision that.
Would anyone have any ideas what may cause this?
It could be a disk failing and dragging I/O down with it. Try checking for high asvc_t with `iostat -XCn 1` and errors in `iostat -En`. Any timeouts or retries in /var/adm/messages?
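Roughly what I'd run, nothing specific to your box:

  iostat -XCn 1                                # a disk whose asvc_t sits far above its peers is suspect
  iostat -En                                   # look for non-zero Hard/Soft/Transport error counters
  egrep -i 'timeout|retry' /var/adm/messages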
--
Giovanni Tirloni
gtirl...@sysdroid.com
release.
--
Giovanni Tirloni
gtirl...@sysdroid.com
http://www.nexenta.com/corp/documentation/nexentastor-changelog
Is there a bug tracker where one can objectively list all the bugs (with details) that went into a release?
"Many bug fixes" is a bit too general.
--
Giovanni Tirloni
gtirl...@sysdroid.com
driver by default uses scsi_vhci, and scsi_vhci by
default does load-balance round-robin. Have you tried setting
load-balance=none in scsi_vhci.conf?
That didn't help.
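For anyone searching later, the setting in question lives in /kernel/drv/scsi_vhci.conf and takes effect after a reboot:

  load-balance="none";

As noted, it made no difference in this case.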
--
Giovanni Tirloni
gtirl...@sysdroid.com
anymore so I'm
guessing it's something related to the mpt_sas driver.
I submitted bug #6963321 a few minutes ago (not available yet).
--
Giovanni Tirloni
gtirl...@sysdroid.com
On Tue, Jun 15, 2010 at 1:56 PM, Scott Squires ssqui...@gmail.com wrote:
Is ZFS dependent on the order of the drives? Will this cause any issue down the road? Thank you all.
No. In your case the logical names changed but ZFS managed to order
the disks correctly as they were before.
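ZFS identifies pool members by the GUIDs stored in their on-disk labels, not by their c#t#d# paths, so a reshuffle like this is normally harmless. A sketch with a made-up pool name:

  zpool export tank
  # ...recable or reorder the drives...
  zpool import tank   # members are matched by label, not by device path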
--
On Thu, May 27, 2010 at 2:39 AM, Marc Bevand m.bev...@gmail.com wrote:
Hi,
Brandon High bhigh at freaks.com writes:
I only looked at the MegaRAID that he mentioned, which has a PCIe 1.0 x4 interface, or 1000MB/s.
You mean x8 interface (theoretically plugged into that x4 slot below...)
You mean x8 interface (theoretically plugged into that x4 slot below...)
On Thu, May 20, 2010 at 2:19 AM, Marc Bevand m.bev...@gmail.com wrote:
Deon Cui deon.cui at gmail.com writes:
So I had a bunch of them lying around. We've bought a 16x SAS hotswap
case and I've put in an AMD X4 955 BE with an ASUS M4A89GTD Pro as
the mobo.
In the two 16x PCI-E slots I've
On Wed, May 26, 2010 at 9:22 PM, Brandon High bh...@freaks.com wrote:
On Wed, May 26, 2010 at 4:27 PM, Giovanni Tirloni gtirl...@sysdroid.com
wrote:
SuperMicro X8DTi motherboard
SuperMicro SC846E1 chassis (3Gb/s backplane)
LSI 9211-4i (PCIe x4) connected to backplane with an SFF-8087 cable
On Fri, May 7, 2010 at 8:07 AM, Emily Grettel emilygrettelis...@hotmail.com
wrote:
Hi,
I've had my RAIDz volume working well on SNV_131 but it has come to my attention that there have been some read issues with the drives. Previously I thought this was a CIFS problem but I'm noticing that
On Thu, May 6, 2010 at 1:18 AM, Edward Ned Harvey solar...@nedharvey.com wrote:
From the information I've been reading about the loss of a ZIL device,
What the heck? Didn't I just answer that question?
I know I said this is answered in the ZFS Best Practices Guide.
On Sat, Mar 27, 2010 at 6:02 PM, Harry Putnam rea...@newsguy.com wrote:
Bob Friesenhahn bfrie...@simple.dallas.tx.us writes:
On Sat, 27 Mar 2010, Harry Putnam wrote:
What to do with a status report like the one included below?
What does it mean to have an unrecoverable error but no
On Tue, Mar 23, 2010 at 2:00 PM, Ray Van Dolson rvandol...@esri.com wrote:
Kind of a newbie question here -- or I haven't been able to find great
search terms for this...
Does ZFS recognize zpool members based on drive serial number or some
other unique, drive-associated ID? Or is it based
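For reference, those IDs live in each member's vdev label, which can be inspected directly (device name is just an example):

  zdb -l /dev/dsk/c9t0d0s0 | egrep 'guid|path|devid'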
On Sat, Mar 20, 2010 at 4:07 PM, Svein Skogen sv...@stillbilde.net wrote:
We all know that data corruption may happen, even on the most reliable of hardware. That's why zfs has pool scrubbing.
Could we introduce a zpool option (as in zpool set optionname pool) for
scrub period, in number of
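Until such a property exists, the usual stand-in is a cron entry (pool name hypothetical; this runs Sundays at 03:00):

  0 3 * * 0 /usr/sbin/zpool scrub tank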
On Thu, Mar 18, 2010 at 1:19 AM, Chris Paul chris.p...@rexconsulting.net wrote:
OK I have a very large zfs snapshot I want to destroy. When I do this, the
system nearly freezes during the zfs destroy. This is a Sun Fire X4600 with
128GB of memory. Now this may be more of a function of the IO
On Wed, Mar 17, 2010 at 9:34 AM, David Dyer-Bennet d...@dd-b.net wrote:
On 3/16/2010 23:21, Erik Trimble wrote:
On 3/16/2010 8:29 PM, David Dyer-Bennet wrote:
On 3/16/2010 17:45, Erik Trimble wrote:
David Dyer-Bennet wrote:
On Tue, March 16, 2010 14:59, Erik Trimble wrote:
Has there
On Wed, Mar 17, 2010 at 6:43 AM, wensheng liu vincent.li...@gmail.com wrote:
Hi all,
How to reserve space on a zfs filesystem? mkfile or dd will write data to the blocks, which is time-consuming, while mkfile -n will not really hold the space. And zfs's set reservation only works on
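A sketch of the dataset-level approach (names hypothetical): a reservation holds space without writing any blocks, but it applies to the dataset as a whole rather than to an individual file:

  zfs create tank/reserved
  zfs set reservation=10G tank/reserved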
On Wed, Mar 17, 2010 at 11:23 AM, casper@sun.com wrote:
IMHO, what matters is that pretty much everything from the disk controller
to the CPU and network interface is advertised in power-of-2 terms and
disks
sit alone using power-of-10. And students are taught that computers work
with
On Wed, Mar 17, 2010 at 7:09 PM, Bill Sommerfeld sommerf...@sun.com wrote:
On 03/17/10 14:03, Ian Collins wrote:
I ran a scrub on a Solaris 10 update 8 system yesterday and it is 100%
done, but not complete:
scrub: scrub in progress for 23h57m, 100.00% done, 0h0m to go
Don't panic. If
On Mon, Mar 15, 2010 at 5:39 PM, Abdullah Al-Dahlawi dahl...@ieee.org wrote:
Greeting ALL
I understand that L2ARC is still under enhancement. Does anyone know if ZFS can be upgraded to include a persistent L2ARC, i.e. L2ARC will not lose its contents after a system reboot?
There is a bug
You can try to import the pool again and see if `fmdump -e` lists any errors afterwards.
You use the spare with `zpool replace`.
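That is, something like this (pool and device names are placeholders):

  zpool replace tank c7t5d0 c7t3d0   # swap the failed disk for the hot spare
  fmdump -e | tail                   # any new ereports after the import?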
--
Giovanni Tirloni
sysdroid.com
of
encouragement or wisdom?
What does `iostat -En` say?
My suggestion is to replace the cable that's connecting the c3t3d0 disk.
IMHO, the cable is much more likely to be faulty than a single port on the
disk controller.
--
Giovanni Tirloni
sysdroid.com
files being created on the
SSD disk?
You can check device usage with `zpool iostat -v hdd`. Please also send the
output of `zpool status hdd`.
Thank you,
--
Giovanni Tirloni
sysdroid.com
there is nothing to read back from later when a read()
misses the ARC cache and checks L2ARC.
I don't know what your OLTP benchmark does but my advice is to check if it's
really writing files in the 'hdd' zpool mount point.
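A quick sanity check (assuming the pool is mounted at /hdd):

  df -h /hdd             # is the benchmark directory really on this pool?
  zpool iostat -v hdd 5  # per-vdev traffic while the benchmark runs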
--
Giovanni Tirloni
sysdroid.com
of
3.7TByte.
Please check the ZFS FAQ:
http://hub.opensolaris.org/bin/view/Community+Group+zfs/faq
There is a question regarding the difference between du, df and zfs list.
--
Giovanni Tirloni
sysdroid.com
is incredible in terms of resilience and
performance, no doubt. Which makes me think the pretty interface becomes an
annoyance sometimes. Let's wait for 2010.Q1 :)
--
Giovanni Tirloni
sysdroid.com
For small datasets/snapshots that doesn't happen, or it is harder to notice.
Does ZFS have to do something special when it's done releasing the data blocks at the end of the destroy operation?
--
Giovanni Tirloni
sysdroid.com
together.
AFAIK, RAID0+1 is not supported since a vdev can only be of type disk, mirror or raidz, and all vdevs are striped together. Someone more experienced in ZFS can probably confirm/deny this.
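In other words, what ZFS gives you is RAID1+0: mirrors as vdevs, striped at the pool level. A sketch with made-up devices:

  zpool create tank mirror c0t0d0 c0t1d0 mirror c0t2d0 c0t3d0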
--
Giovanni Tirloni
sysdroid.com
(zpool status excerpt)
    c7t1d0   ONLINE   0  0  0
    c7t2d0   ONLINE   0  0  0
  cache
    c7t22d0  ONLINE   0  0  0
  spares
    c7t3d0   AVAIL
Any ideas?
Thank you,
--
Giovanni Tirloni
sysdroid.com
noted, it doesn't seem possible.
You could create a new zpool with this larger LUN and use zfs send/receive
to migrate your data.
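A migration sketch (pool and snapshot names made up; -R replicates descendant datasets and their properties):

  zfs snapshot -r oldpool@migrate
  zfs send -R oldpool@migrate | zfs receive -d -F newpool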
--
Giovanni Tirloni
sysdroid.com
and that makes us all very nervous.
Any ideas?
Is it possible that your users are now deleting everything before starting
to write the backup data?
--
Giovanni Tirloni
sysdroid.com
On Tue, Feb 9, 2010 at 2:04 AM, Thomas Burgess wonsl...@gmail.com wrote:
On Mon, Feb 08, 2010 at 09:33:12PM -0500, Thomas Burgess wrote:
This is a far cry from an apples-to-apples comparison though.
As much as I'm no fan of Apple, it's a pity they dropped ZFS because
that would have
On Tue, Feb 2, 2010 at 1:58 PM, Tim Cook t...@cook.ms wrote:
It's called spreading the costs around. Would you really rather pay 10x
the price on everything else besides the drives? This is essentially Sun's
way of tiered pricing. Rather than charge you a software fee based on how
much
On Tue, Feb 2, 2010 at 9:07 PM, Marc Nicholas geekyth...@gmail.com wrote:
I believe magical unicorn controllers and drives are both bug-free and
100% spec-compliant. The leprechauns sell them if you're trying to find them ;)
Well, perfect and bug-free sure don't exist in our industry.
The
On Sat, Jan 2, 2010 at 4:07 PM, R.G. Keen k...@geofex.com wrote:
OK. From the above suppositions, if we had a desktop (infinitely
long retry on fail) disk and a soft-fail error in a sector, then the
disk would effectively hang each time the sector was accessed.
This would lead to
(1) ZFS-SD-
On Mon, Jan 4, 2010 at 3:51 PM, Joerg Schilling
joerg.schill...@fokus.fraunhofer.de wrote:
Giovanni Tirloni tirl...@gmail.com wrote:
We use Seagate Barracuda ES.2 1TB disks and every time the OS starts
to bang on a region of the disk with bad blocks (which essentially
degrades the performance