Re: [zfs-discuss] System started crashing hard after zpool reconfigure and OI upgrade

2013-03-20 Thread Michael Schuster
How about crash dumps? Michael On Wed, Mar 20, 2013 at 4:50 PM, Peter Wood wrote: > I'm sorry. I should have mentioned it that I can't find any errors in the > logs. The last entry in /var/adm/messages is that I removed the keyboard > after the last reboot and then it sh

Re: [zfs-discuss] System started crashing hard after zpool reconfigure and OI upgrade

2013-03-20 Thread Michael Schuster
Peter, sorry if this is so obvious that you didn't mention it: Have you checked /var/adm/messages and other diagnostic tool output? regards Michael On Wed, Mar 20, 2013 at 4:34 PM, Peter Wood wrote: > I have two identical Supermicro boxes with 32GB ram. Hardware details at > the

Re: [zfs-discuss] help zfs pool with duplicated and missing entry of hdd

2013-01-10 Thread Michael Hase
hardware raid controller not in jbod mode, or even an external san. jbods normally show up as lun 0 (d0) with different target numbers (t1, t2, ...). Maybe something wrong with lun numbering on your box? -- Michael

Re: [zfs-discuss] Using L2ARC on an AdHoc basis.

2012-10-13 Thread Michael Armstrong
whether or not it risked integrity. Sent from my iPhone On 13 Oct 2012, at 23:02, Ian Collins wrote: > On 10/14/12 10:02, Michael Armstrong wrote: >> Hi Guys, >> >> I have a "portable pool" i.e. one that I carry around in an enclosure. >> However, any SSD

[zfs-discuss] Using L2ARC on an AdHoc basis.

2012-10-13 Thread Michael Armstrong
Hi Guys, I have a "portable pool", i.e. one that I carry around in an enclosure. However, any SSD I add for L2ARC will not be carried around... meaning the cache drive will become unavailable from time to time. My question is: will random removal of the cache drive put the pool into a "degr
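
For context, an L2ARC device holds only clean copies of pool data, so a missing cache device costs performance, not integrity; it can also be detached cleanly before transport. A minimal sketch, assuming a hypothetical pool "tank" and SSD "c2t0d0":

    # attach the SSD as L2ARC while it is present
    zpool add tank cache c2t0d0
    # cleanly drop it before carrying the enclosure away
    zpool remove tank c2t0d0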

Re: [zfs-discuss] what have you been buying for slog and l2arc?

2012-08-07 Thread Michael Hase
without accelerator (gnu dd with oflag=sync). Not bad at all. This could be just good enough for small businesses and moderate sized pools. Michael -- Michael Hase edition-software GmbH http://edition-software.de
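
The sync-write measurement mentioned can be reproduced with GNU dd; a minimal sketch, with a hypothetical pool mountpoint and block size:

    # oflag=sync commits every block synchronously, exercising the ZIL/slog path
    dd if=/dev/zero of=/tank/synctest bs=8k count=10000 oflag=sync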

[zfs-discuss] Very poor small-block random write performance

2012-07-18 Thread Michael Traffanstead
I have an 8 drive ZFS array (RAIDZ2 - 1 Spare) using 5900rpm 2TB SATA drives with an hpt27xx controller under FreeBSD 10 (but I've seen the same issue with FreeBSD 9). The system has 8gigs and I'm letting FreeBSD auto-size the ARC. Running iozone (from ports), everything is fine for file sizes

Re: [zfs-discuss] zfs sata mirror slower than single disk

2012-07-17 Thread Michael Hase
On Tue, 17 Jul 2012, Bob Friesenhahn wrote: On Tue, 17 Jul 2012, Michael Hase wrote: If you were to add a second vdev (i.e. stripe) then you should see very close to 200% due to the default round-robin scheduling of the writes. My expectation would be > 200%, as 4 disks are involved.

Re: [zfs-discuss] zfs sata mirror slower than single disk

2012-07-17 Thread Michael Hase
sorry to insist, but still no real answer... On Mon, 16 Jul 2012, Bob Friesenhahn wrote: On Tue, 17 Jul 2012, Michael Hase wrote: So only one thing left: mirror should read 2x I don't think that mirror should necessarily read 2x faster even though the potential is there to do so. L

Re: [zfs-discuss] zfs sata mirror slower than single disk

2012-07-16 Thread Michael Hase
On Mon, 16 Jul 2012, Edward Ned Harvey wrote: From: zfs-discuss-boun...@opensolaris.org [mailto:zfs-discuss- boun...@opensolaris.org] On Behalf Of Michael Hase got some strange results, please see attachments for exact numbers and pool config: seq write factor seq read factor

Re: [zfs-discuss] zfs sata mirror slower than single disk

2012-07-16 Thread Michael Hase
On Mon, 16 Jul 2012, Bob Friesenhahn wrote: On Mon, 16 Jul 2012, Michael Hase wrote: This is my understanding of zfs: it should load balance read requests even for a single sequential reader. zfs_prefetch_disable is the default 0. And I can see exactly this scaling behaviour with sas disks

Re: [zfs-discuss] zfs sata mirror slower than single disk

2012-07-16 Thread Michael Hase
when going from one disk to a mirrored configuration. It's just the sequential read/write case that's different for sata and sas disks. Michael Bob -- Bob Friesenhahn bfrie...@simple.dallas.tx.us, http://www.simplesystems.org/users/bfriesen/ GraphicsMagick Maintainer,h

[zfs-discuss] zfs sata mirror slower than single disk

2012-07-16 Thread Michael Hase
than expected, especially for a simple mirror. Any ideas? Thanks, Michael -- Michael Hase http://edition-software.de pool: ptest state: ONLINE scan: none requested config: NAME STATE READ WRITE CKSUM ptest ONLINE 0 0 0 c13t4d0 O

Re: [zfs-discuss] Drive upgrades

2012-04-13 Thread Michael Armstrong
at 9:35 AM, Edward Ned Harvey > wrote: > > From: zfs-discuss-boun...@opensolaris.org [mailto:zfs-discuss- > > boun...@opensolaris.org] On Behalf Of Michael Armstrong > > > > Is there a way to quickly ascertain if my seagate/hitachi drives are as > large as

[zfs-discuss] Drive upgrades

2012-04-13 Thread Michael Armstrong
being able to grow the pool... Thanks, Michael
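
For reference, the usual way to grow a pool in place is to replace each disk with a larger one and let the vdev expand; a sketch, assuming a hypothetical pool "tank":

    zpool set autoexpand=on tank        # grow automatically once all disks are bigger
    zpool replace tank c0t1d0 c0t5d0    # repeat for every disk in the vdev
    # with autoexpand=off, expansion can be triggered per device instead:
    zpool online -e tank c0t5d0

The extra capacity only shows up after the last disk in the vdev has been upsized.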

Re: [zfs-discuss] S11 vs illumos zfs compatiblity

2012-01-04 Thread Michael Sullivan
just fine. You can't enable dedup on a dataset > and any writes won't dedup they will "rehydrate". > > So it is more like partial dedup support rather than it not being there at > all. "rehydrate"??? Is it instant or freeze dried? Mike - --- Michael

Re: [zfs-discuss] about btrfs and zfs

2011-11-14 Thread Michael Schuster
e that the name came after the TLA. > "zfs" came first and "zettabyte" later. as Jeff told it (IIRC), the "expanded" version of zfs underwent several changes during the development phase, until it was decided one day to attach none of them to "zfs" and just

Re: [zfs-discuss] Remove corrupt files from snapshot

2011-11-03 Thread Michael Schuster
Hi, snapshots are read-only by design; you can clone them and manipulate the clone, but the snapshot itself remains r/o. HTH Michael On Thu, Nov 3, 2011 at 13:35, wrote: > > Hello, > > I have got a bunch of corrupted files in various snapshots on my ZFS file > backing store.
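
A minimal sketch of the clone route described above, with hypothetical dataset and snapshot names:

    zfs clone tank/data@snap1 tank/data_edit    # writable view of the snapshot
    rm /tank/data_edit/corrupt-file             # fix up the clone, not the snapshot
    zfs promote tank/data_edit                  # optionally decouple it from its origin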

Re: [zfs-discuss] about btrfs and zfs

2011-10-17 Thread Michael DeMan
Or, if you absolutely must run linux for the operating system, see: http://zfsonlinux.org/ On Oct 17, 2011, at 8:55 AM, Freddie Cash wrote: > If you absolutely must run Linux on your storage server, for whatever reason, > then you probably won't be running ZFS. For the next year or two, it wou

Re: [zfs-discuss] commercial zfs-based storage replication software?

2011-10-01 Thread Michael Sullivan
ing with your Oracle Sales Rep. I think his requirements are being driven by a PHB who wants to see a "GUI". crontab, ssh - functionality already there, simple and not many "moving parts" but obviously too obfuscated for the PHB to understand. Good luck. Mike ---

Re: [zfs-discuss] commercial zfs-based storage replication software?

2011-09-30 Thread Michael Sullivan
when combined with just plain old crontab. If it's a graphical interface you're looking for, I'm sure someone has hacked together something in Tcl/Tk or Perl/Tk as an interface to cron which you could probably hack to have it construct your particular crontab entry. Just a thought
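
A minimal sketch of the crontab-plus-ssh replication being described, all names hypothetical (assumes the previous snapshot already exists on the target):

    #!/bin/sh
    # snapshot, then send the delta since the previous snapshot to a remote pool
    FS=tank/data
    LAST=$(zfs list -H -t snapshot -o name -s creation -d 1 $FS | tail -1)
    NEW=$FS@$(date +%Y%m%d%H%M)
    zfs snapshot "$NEW"
    zfs send -i "$LAST" "$NEW" | ssh backuphost zfs recv backup/data

Run it from cron at whatever interval the replication lag budget allows.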

Re: [zfs-discuss] I'm back!

2011-09-02 Thread Michael DeMan
Warm welcomes back. So what's neXt? - Mike DeMan On Sep 2, 2011, at 6:30 PM, Erik Trimble wrote: > Hi folks. > > I'm now no longer at Oracle, and the past couple of weeks have been a bit of > a mess for me as I disentangle myself from it. > > I apologize to those who may have tried to contac

Re: [zfs-discuss] Advice with SSD, ZIL and L2ARC

2011-08-29 Thread Michael DeMan
Are you truly new to ZFS? Or do you work for NetApp or EMC or somebody else that is curious? - Mike On Aug 29, 2011, at 9:15 PM, Jesus Cea wrote: > -BEGIN PGP SIGNED MESSAGE- > Hash: SHA1 > > Hi all. Sorry if I am asking a FAQ, but I haven't found a really > authorizative answer to t

Re: [zfs-discuss] Kernel panic on zpool import. 200G of data inaccessible!

2011-08-22 Thread Michael DeMan
I cannot help but agree with Tim's comment below. If you want a free version of ZFS, in which case you are still responsible for things yourself - like having backups, then maybe: www.freenas.org zfsonlinux.org www.openindiana.org Meanwhile, it is grossly inappropriate to be complaining ab

Re: [zfs-discuss] Disable ZIL - persistent

2011-08-05 Thread Michael Sullivan
other than rpool. Which feels kludgy. Is there a better way? > > echo "set zfs:zil_disable = 1" > /etc/system echo "set zfs:zil_disable = 1" >> /etc/system Mike --- Michael Sullivan m...@axsh.us http://www.axsh.us/ Phone: +1-662-259- Mob
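
The one-character correction above is the whole point: ">" truncates /etc/system, throwing away any existing tunables, while ">>" appends.

    # wrong - replaces the entire file:
    echo "set zfs:zil_disable = 1" >  /etc/system
    # right - appends the tunable:
    echo "set zfs:zil_disable = 1" >> /etc/system

(On later builds the per-dataset "sync" property is the preferred, less global knob.)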

Re: [zfs-discuss] Zil on multiple usb keys

2011-07-22 Thread Michael DeMan
+1 on the below, and in addition... ...compact flash, like that in USB sticks, is not designed to deal with very many writes. Commonly it is used to store a bootable image that maybe once a year will have an upgrade on it. Basically, trying to use those devices for a ZIL, even if they are mir

Re: [zfs-discuss] Resizing ZFS partition, shrinking NTFS?

2011-06-17 Thread Michael Sullivan
e using an external USB drive which was appropriately sized and turn on autoexpand. Mike --- Michael Sullivan m...@axsh.us http://www.axsh.us/ Phone: +1-662-259- Mobile: +1-662-202-7716

Re: [zfs-discuss] question about COW and snapshots

2011-06-17 Thread Michael Sullivan
in use at a time and operations would need to be transaction based with commits and rollbacks. Way off-topic, but Smalltalk and its variants do this by maintaining the state of everything in an operating environment image. But then again, I could be wrong. Mike --- Michael Sullivan

Re: [zfs-discuss] Resizing ZFS partition, shrinking NTFS?

2011-06-16 Thread Michael Schuster
1) use something like parted to shrink the NTFS partition, 2) create a new partition without a FS in the space now freed from NTFS, 3) boot OpenSolaris, add the partition from 2) as a vdev to your zpool. HTH Michael -- Michael Schuster http://recursiveramblings.wordpress.com/
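
A sketch of those three steps with hypothetical device names (the NTFS shrink is often done with ntfsresize or GParted, and exact syntax varies by tool version):

    # 1) shrink the NTFS filesystem from a Linux live environment
    ntfsresize --size 80G /dev/sda1
    # 2) carve a new partition out of the freed space
    parted /dev/sda mkpart primary 80GB 100%
    # 3) from OpenSolaris, add the new partition to the pool
    #    (note: this adds a top-level vdev with no redundancy of its own)
    zpool add mypool c0t0d0p2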

Re: [zfs-discuss] question about COW and snapshots

2011-06-15 Thread Michael Schuster
happens between two writes (even from a single user), it will be consistent from the POV of the FS, but may not be from the POV of the application. HTH Michael -- Michael Schuster http://recursiveramblings.wordpress.com/

Re: [zfs-discuss] Have my RMA... Now what??

2011-05-28 Thread Michael DeMan
good idea; another thing to keep in mind: > technology changes so fast that by the time you want a replacement, maybe the HDD > doesn't exist any more > or the supplier changed, so the drives are not exactly like your original > drive > > > > > On 5/28/2011 6:05 PM, Mich

Re: [zfs-discuss] Have my RMA... Now what??

2011-05-28 Thread Michael DeMan
Always pre-purchase one extra drive to have on hand. When you get it, confirm it was not dead-on-arrival by hooking it up to a workstation via external USB and running whatever your favorite tools are to validate it is okay. Then put it back in its original packaging, and put a label on it abou

Re: [zfs-discuss] best migration path from Solaris 10

2011-03-23 Thread Michael DeMan
I think on this, the big question is going to be whether Oracle continues to release ZFS updates under CDDL after their commercial releases. Overall, in the past it has obviously and necessarily been the case that FreeBSD has been a '2nd class citizen'. Moving forward, that 2nd class idea becom

Re: [zfs-discuss] best migration path from Solaris 10

2011-03-18 Thread Michael DeMan
ZFSv28 is in HEAD now and will be out in 8.3. ZFS + HAST in 9.x means being able to cluster off different hardware. In regards to OpenSolaris and Indiana - can somebody clarify the relationship there? It was clear with OpenSolaris that the latest/greatest ZFS would always be available since it

Re: [zfs-discuss] [OpenIndiana-discuss] best migration path from Solaris 10

2011-03-18 Thread Michael DeMan
Hi David, Caught your note about bonnie, actually did some testing myself over the weekend. All on older hardware for fun - dual Opteron 285 with 16GB RAM. Disk system is off a pair of SuperMicro SATA cards, with a combination of WD enterprise and Seagate ES 1TB drives. No ZIL, no L2ARC, no t

Re: [zfs-discuss] best migration path from Solaris 10

2011-03-18 Thread Michael DeMan
I think we all feel the same pain with Oracle's purchase of Sun. FreeBSD that has commercial support for ZFS maybe? Not here quite yet, but it is something being looked at by an F500 that I am currently on contract with. www.freenas.org, www.ixsystems.com. Not saying this would be the right so

Re: [zfs-discuss] zfs-discuss Digest, Vol 64, Issue 21

2011-02-07 Thread Michael Armstrong
I obtained smartmontools (which includes smartctl) from the standard apt repository (I'm using Nexenta, however); in addition it's necessary to use the device type of sat,12 with smartctl to get it to read attributes correctly in OS afaik. Also regarding dev id's on the system, from what I've see
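
A sketch of that smartctl invocation, with a hypothetical Solaris device path:

    # sat,12 selects 12-byte SAT pass-through commands for SATA disks
    smartctl -a -d sat,12 /dev/rdsk/c0t0d0
    # the reported Serial Number ties the OS device name to the physical drive label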

[zfs-discuss] deduplication requirements

2011-02-07 Thread Michael
Hi guys, I'm currently running 2 zpools each in a raidz1 configuration, totalling around 16TB usable data. I'm running it all on an OpenSolaris based box with 2GB memory and an old Athlon 64 3700 CPU. I understand this is very poor and underpowered for deduplication, so I'm looking at building a new

Re: [zfs-discuss] zfs-discuss Digest, Vol 64, Issue 13

2011-02-06 Thread Michael Armstrong
Additionally, the way I do it is to draw a diagram of the drives in the system, labelled with the drive serial numbers. Then when a drive fails, I can find out from smartctl which drive it is and remove/replace without trial and error. On 5 Feb 2011, at 21:54, zfs-discuss-requ...@opensolaris.org

[zfs-discuss] NFS slow for small files: idle disks

2011-01-20 Thread Michael Hase
80%-100% busy. Just for small files the array sits almost idle; the array can do way more. I discovered this on different Solaris versions, not only this test system. Is there any explanation for this behaviour? Thanks, Michael -- This message posted from opensolaris.org

Re: [zfs-discuss] Troubleshooting help on ZFS

2011-01-20 Thread Michael Schuster
can't figure out how this happened all of a sudden and how best to > troubleshoot it. > > If you have any help or technical wisdom to offer, I'd appreciate it as this > has been frustrating. look in /var/adm/messages (.*) to see whether there's anything interesting aro

Re: [zfs-discuss] Is my bottleneck RAM?

2011-01-18 Thread Michael Armstrong
cache table of what's in the L2ARC. Using 2GB of RAM >with an SSD-based L2ARC (even without Dedup) likely won't help you too >much vs not having the SSD. > >If you're going to turn on Dedup, you need at least 8GB of RAM to go >with the SSD. > >-Erik >

Re: [zfs-discuss] Is my bottleneck RAM?

2011-01-18 Thread Michael Armstrong
Thanks everyone, I think over time I'm gonna update the system to include an SSD for sure. Memory may come later though. Thanks for everyone's responses. Erik Trimble wrote: >On Tue, 2011-01-18 at 15:11 +, Michael Armstrong wrote: >> I've since turned off dedup, ad

Re: [zfs-discuss] Is my bottleneck RAM?

2011-01-18 Thread Michael Armstrong
I've since turned off dedup, added another 3 drives, and results have improved to around 148388K/sec on average. Would turning on compression make things more CPU bound and improve performance further? On 18 Jan 2011, at 15:07, Richard Elling wrote: > On Jan 15, 2011, at 4:21 PM,
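
Compression only affects blocks written after it is enabled, and on a v23 pool the cheap choice is lzjb; a sketch with a hypothetical pool name:

    zfs set compression=lzjb tank    # lz4 arrived later than pool v23
    zfs get compressratio tank       # check the achieved ratio once data is rewritten

Whether it helps depends on the workload: if the disks are the bottleneck and the CPU has headroom, compressible data usually gets faster, while incompressible data mostly just costs cycles.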

[zfs-discuss] Is my bottleneck RAM?

2011-01-18 Thread Michael Armstrong
Hi guys, sorry in advance if this is somewhat a lowly question. I've recently built a zfs test box based on nexentastor with 4x Samsung 2TB drives connected via SATA-II in a raidz1 configuration, with dedup enabled, compression off, and pool version 23. From running bonnie++ I get the following res

Re: [zfs-discuss] A few questions

2011-01-09 Thread Michael Sullivan
SuperMicro is one of the brands of choice, but even then one must adhere to a fairly tight HCL. The same holds true for Solaris/OpenSolaris with third-party hardware. SATA Controllers and multiplexers are also another example of the drivers being written by the manufacturer and Solaris/OpenSolaris

Re: [zfs-discuss] (Fletcher+Verification) versus (Sha256+No Verification)

2011-01-07 Thread Michael DeMan
On Jan 7, 2011, at 6:13 AM, David Magda wrote: > On Fri, January 7, 2011 01:42, Michael DeMan wrote: >> Then - there is the other side of things. The 'black swan' event. At >> some point, given percentages on a scenario like the example case above, >> one

Re: [zfs-discuss] (Fletcher+Verification) versus (Sha256+No Verification)

2011-01-06 Thread Michael DeMan
At the end of the day this issue essentially is about mathematical improbability versus certainty? To be quite honest, I too am skeptical about using de-dupe just based on SHA256. In prior posts it was asked that the potential adopter of the technology provide the mathematical reason to
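
For scale, a back-of-the-envelope birthday bound, assuming a hypothetical 2^32 unique 128K blocks (roughly 512 TiB of unique data):

    P(any collision) <= n^2 / 2^257 = (2^32)^2 / 2^257 = 2^-193

which is far below the undetected error rates of the surrounding hardware; the thread's disagreement is precisely about accepting that improbability versus demanding verify-on-write certainty.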

Re: [zfs-discuss] (Fletcher+Verification) versus (Sha256+No Verification)

2011-01-06 Thread Michael Sullivan
, not me. For my home media server, maybe, but even then I'd hate to lose any of my family photos or video due to a hash collision. I'll play it safe if I dedup. Mike --- Michael Sullivan michael.p.sulli...@me.com http://www.kamiogi.net/ Mobile: +1-662-202-7716 US Phon

Re: [zfs-discuss] A few questions

2011-01-05 Thread Michael Schuster
argue that that should have already happened with S11 express... I don't know it has, but that's not *the* release of S11, is it? And once the code is released, even if after the fact, it's not reverse-engineering anymore, is it? Michael PS: just in case: even while at Oracle, I

Re: [zfs-discuss] A couple of quick questions

2010-12-22 Thread Michael Schuster
HTH -- regards/mit freundlichen Grüssen Michael Schuster

Re: [zfs-discuss] Ideas for ghetto file server data reliability?

2010-11-16 Thread michael . p . sullivan
Ummm… there's a difference between data integrity and data corruption. Integrity is enforced programmatically by something like a DBMS. This sets up basic rules that ensure the programmer, program or algorithm adhere to a level of sanity and bounds. Corruption is where cosmic rays, bit rot, ma

Re: [zfs-discuss] Running on Dell hardware?

2010-11-01 Thread Michael Sullivan
Linux on any hardware as well. Then your hardware and software issues would probably be multiplied even more. Cheers, Mike --- Michael Sullivan michael.p.sulli...@me.com http://www.kamiogi.net/ Japan Mobile: +81-80-3202-2599 US Phone: +1-561-283-2034 On 23 Oct 2010, at 12:53

Re: [zfs-discuss] [RFC] Backup solution

2010-10-08 Thread Michael DeMan
On Oct 8, 2010, at 4:33 AM, Edward Ned Harvey wrote: >> From: Peter Jeremy [mailto:peter.jer...@alcatel-lucent.com] >> Sent: Thursday, October 07, 2010 10:02 PM >> >> On 2010-Oct-08 09:07:34 +0800, Edward Ned Harvey >> wrote: >>> If you're going raidz3, with 7 disks, then you might as well just

Re: [zfs-discuss] TLER and ZFS

2010-10-06 Thread Michael DeMan
Can you give us release numbers that confirm that this is 'automatic'? It is my understanding that the last available public release of OpenSolaris does not do this. On Oct 5, 2010, at 8:52 PM, Richard Elling wrote: > ZFS already aligns the beginning of data areas to 4KB offsets from the lab
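
The alignment a pool actually uses can be inspected rather than assumed; a sketch with a hypothetical pool name:

    zdb -C tank | grep ashift    # ashift: 9 = 512-byte alignment, 12 = 4K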

Re: [zfs-discuss] TLER and ZFS

2010-10-05 Thread Michael DeMan
Hi upfront, and thanks for the valuable information. On Oct 5, 2010, at 4:12 PM, Peter Jeremy wrote: >> Another annoying thing with the whole 4K sector size, is what happens >> when you need to replace drives next year, or the year after? > > About the only mitigation needed is to ensure that a

Re: [zfs-discuss] TLER and ZFS

2010-10-05 Thread Michael DeMan
On Oct 5, 2010, at 2:47 PM, casper@sun.com wrote: > > > I've seen several important features when selecting a drive for > a mirror: > > TLER (the ability of the drive to timeout a command) > sector size (native vs virtual) > power use (specifically at home) > perform

Re: [zfs-discuss] TLER and ZFS

2010-10-05 Thread Michael DeMan
On Oct 5, 2010, at 1:47 PM, Roy Sigurd Karlsbakk wrote: >> Western Digital RE3 WD1002FBYS 1TB 7200 RPM SATA 3.0Gb/s 3.5" Internal >> Hard Drive -Bare Drive >> >> are only $129. >> >> vs. $89 for the 'regular' black drives. >> >> 45% higher price, but it is my understanding that the 'RAID Editi

Re: [zfs-discuss] TLER and ZFS

2010-10-05 Thread Michael DeMan
I'm not sure on the TLER issues by themselves, but after the nightmares I have gone through dealing with the 'green drives', which have both the TLER issue and the IntelliPower head parking issues, I would just stay away from it all entirely and pay extra for the 'RAID Edition' drives. Just ou

Re: [zfs-discuss] "zfs unmount" versus "umount"?

2010-09-30 Thread Michael Schuster
here's the relevant code from main(): Mark, I think that wasn't the question, rather, "what's the difference between 'zfs u[n]mount' and '/usr/bin/umount'?" HTH Michael -- michael.schus...@oracle.com http://blogs.sun.com/recursion Recurs

Re: [zfs-discuss] file recovery on lost RAIDZ array

2010-09-28 Thread Michael Eskowitz
I'm sorry to say that I am quite the newbie to ZFS. When you say zfs send/receive what exactly are you referring to? I had the zfs array mounted to a specific location in my file system (/mnt/Share) and I was sharing that location over the network with a samba server. The directory had read-

Re: [zfs-discuss] file recovery on lost RAIDZ array

2010-09-13 Thread Michael Eskowitz
Oh and yes, raidz1.

Re: [zfs-discuss] file recovery on lost RAIDZ array

2010-09-13 Thread Michael Eskowitz
I don't know what happened. I was in the process of copying files onto my new file server when the copy process from the other machine failed. I turned on the monitor for the fileserver and found that it had rebooted by itself at some point (machine fault maybe?) and when I remounted the drive

[zfs-discuss] file recovery on lost RAIDZ array

2010-09-12 Thread Michael Eskowitz
I recently lost all of the data on my single parity raid z array. Each of the drives was encrypted with the zfs array built within the encrypted volumes. I am not exactly sure what happened. The files were there and accessible and then they were all gone. The server apparently crashed and reb

Re: [zfs-discuss] ZFS with SAN's and HA

2010-08-26 Thread Michael Dodwell
Lao, I had a look at the HAStoragePlus etc. and from what I understand that's to mirror local storage across 2 nodes for services to be able to access 'DRBD style'. Having a read through the documentation on the Oracle site, the cluster software from what I gather is how to cluster services togeth

[zfs-discuss] ZFS with SAN's and HA

2010-08-26 Thread Michael Dodwell
Hey all, I currently work for a company that has purchased a number of different SAN solutions (whatever was cheap at the time!) and I want to set up an HA ZFS file store over Fibre Channel. Basically I've taken slices from each of the SANs and added them to a ZFS pool on this box (which I'm cal

[zfs-discuss] zfs/iSCSI: 0000 = SNS Error Type: Current Error (0x70)

2010-08-26 Thread Michael W Lucas
Hi, I'm trying to track down an error with a 64bit x86 OpenSolaris 2009.06 ZFS shared via iSCSI and an Ubuntu 10.04 client. The client can successfully log in, but no device node appears. I captured a session with wireshark. When the client attempts a "SCSI: Inquiry LUN: 0x00", OpenSolaris s

Re: [zfs-discuss] 64-bit vs 32-bit applications

2010-08-16 Thread Michael Schuster
What OS are you using? Michael -- michael.schus...@oracle.com http://blogs.sun.com/recursion Recursion, n.: see 'Recursion'

[zfs-discuss] Degraded Pool, Spontaneous Reboots

2010-08-12 Thread Michael Anderson
Hello, I've been getting warnings that my zfs pool is degraded. At first it was complaining about a few corrupt files, which were listed as hex numbers instead of filenames, i.e. VOL1:<0x0> After a scrub, a couple of the filenames appeared - turns out they were in snapshots I don't really nee

Re: [zfs-discuss] ZFS p[erformance drop with new Xeon 55xx and 56xx cpus

2010-08-11 Thread michael schuster
- provide measurements (lockstat, iostat, maybe some DTrace) before and during test, add some timestamps so people can correlate data to events. - anything else you can think of that might be relevant. HTH Michael

Re: [zfs-discuss] core dumps eating space in snapshots

2010-07-27 Thread Michael Schuster
them is to destroy snapshots. Or have I still misunderstood the question? yes, I think so. Here's how I read it: the snapshots contain lots more than the core files, and OP wants to remove only the core files (I'm assuming they weren't discovered before the snapshot

Re: [zfs-discuss] Help identify failed drive

2010-07-19 Thread Michael Shadle
On Mon, Jul 19, 2010 at 4:35 PM, Richard Elling wrote: > It depends on whether the problem was fixed or not.  What says >        zpool status -xv > >  -- richard [r...@nas01 ~]# zpool status -xv pool: tank state: DEGRADED status: One or more devices has experienced an unrecoverable error. An

Re: [zfs-discuss] Help identify failed drive

2010-07-19 Thread Michael Shadle
On Mon, Jul 19, 2010 at 4:26 PM, Richard Elling wrote: > Aren't you assuming the I/O error comes from the drive? > fmdump -eV okay - I guess I am. Is this just telling me "hey stupid, a checksum failed"? In which case why did this never resolve itself and the specific device get marked as degra
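
The usual sequence here is to read the telemetry, address the cause, then reset the counters and verify; a sketch with hypothetical pool and device names:

    zpool status -xv          # which vdev and which files are affected
    fmdump -eV | less         # underlying ereports: checksum vs. transport vs. device
    zpool clear tank c0t5d0   # reset error counters once the cause is addressed
    zpool scrub tank          # confirm the pool comes back clean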

Re: [zfs-discuss] Help identify failed drive

2010-07-19 Thread Michael Shadle
On Mon, Jul 19, 2010 at 4:16 PM, Marty Scholes wrote: > Start a scrub or do an obscure find, e.g. "find /tank_mountpoint -name core" > and watch the drive activity lights.  The drive in the pool which isn't > blinking like crazy is a faulted/offlined drive. Actually I guess my real question is

Re: [zfs-discuss] Help identify failed drive

2010-07-19 Thread Michael Shadle
On Mon, Jul 19, 2010 at 4:16 PM, Marty Scholes wrote: > Start a scrub or do an obscure find, e.g. "find /tank_mountpoint -name core" > and watch the drive activity lights.  The drive in the pool which isn't > blinking like crazy is a faulted/offlined drive. > > Ugly and oh-so-hackerish, but it

Re: [zfs-discuss] Help identify failed drive

2010-07-19 Thread Michael Shadle
On Mon, Jul 19, 2010 at 3:11 PM, Haudy Kazemi wrote: > ' iostat -Eni ' indeed outputs Device ID on some of the drives, but I still > can't understand how it helps me to identify model of specific drive. Curious: [r...@nas01 ~]# zpool status -x pool: tank state: DEGRADED status: One or more de
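
The identity fields iostat prints can be filtered to map device names to models; a sketch (field labels as printed by Solaris iostat):

    iostat -En | egrep 'c[0-9]|Vendor|Serial'
    # each device block carries Vendor, Product (model) and Serial No. lines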

Re: [zfs-discuss] Recommended RAM for ZFS on various platforms

2010-07-16 Thread Michael Johnson
Garrett D'Amore wrote: >On Fri, 2010-07-16 at 10:24 -0700, Michael Johnson wrote: >> I'm currently planning on running FreeBSD with ZFS, but I wanted to >> double-check how much memory I'd need for it to be stable. The ZFS wiki currently says you

[zfs-discuss] Recommended RAM for ZFS on various platforms

2010-07-16 Thread Michael Johnson
I just don't need more than 1 TB of available storage right now, or for the next several years.) This is on an AMD64 system, and the OS in question will be running inside of VirtualBox, with raw access to the drives. Thanks, Michael

Re: [zfs-discuss] Encryption?

2010-07-12 Thread Michael Johnson
saying that you employ enough kernel hackers to keep up even without Oracle? (I am admittedly ignorant about the OpenSolaris developer community; this is all based on others' statements and opinions that I've read.) Michael ___

Re: [zfs-discuss] Encryption?

2010-07-12 Thread Michael Johnson
Nikola M wrote: >Freddie Cash wrote: >> You definitely want to do the ZFS bits from within FreeBSD. >Why not using ZFS in OpenSolaris? At least it has most stable/tested >implementation and also the newest one if needed? I'd love to use OpenSolaris for exactly those reasons, but I'm wary of using

Re: [zfs-discuss] Encryption?

2010-07-11 Thread Michael Johnson
on 11/07/2010 15:54 Andriy Gapon said the following: >on 11/07/2010 14:21 Roy Sigurd Karlsbakk said the following: >> >> I'm planning on running FreeBSD in VirtualBox (with a Linux host) >> and giving it raw disk access to four drives, which I plan to >> configure as a raidz2 volume.

[zfs-discuss] Encryption?

2010-07-10 Thread Michael Johnson
storing backups of my personal files on this), so if there's a chance that ZFS wouldn't handle errors well when on top of encryption, I'll just go without it. Thanks, Michael

[zfs-discuss] Consequences of resilvering failure

2010-07-06 Thread Michael Johnson
detected in the middle of resilvering.) I will of course have a backup of the pool, but I may opt for additional backup if the entire pool could be lost due to data corruption (as opposed to just a few files potentially being lost). Thanks, Michael [1] http://dlc.sun.com/osol/docs/co

Re: [zfs-discuss] b134 pool borked!

2010-06-30 Thread Michael Mattsson
Just in case any stray searches find their way here, this is what happened to my pool: http://phrenetic.to/zfs -- This message posted from opensolaris.org

Re: [zfs-discuss] Native ZFS for Linux

2010-06-11 Thread Michael Shadle
On Fri, Jun 11, 2010 at 2:50 AM, Alex Blewitt wrote: > You are sadly mistaken. > > From GNU.org on license compatibilities: > > http://www.gnu.org/licenses/license-list.html > >        Common Development and Distribution License (CDDL), version 1.0 >        This is a free software license. It has

Re: [zfs-discuss] zpool replace lockup / replace process now stalled, how to fix?

2010-05-21 Thread Michael Donaghy
For the record, in case anyone else experiences this behaviour: I tried various things which failed, and finally, as a last ditch effort, upgraded my FreeBSD, giving me zpool v14 rather than v13 - and now it's resilvering as it should. Michael On Monday 17 May 2010 09:26:23 Michael Do

Re: [zfs-discuss] zfs mount -a kernel panic

2010-05-19 Thread Michael Schuster
On 19.05.10 17:53, John Andrunas wrote: Not to my knowledge, how would I go about getting one? (CC'ing discuss) man savecore and dumpadm. Michael On Wed, May 19, 2010 at 8:46 AM, Mark J Musante wrote: Do you have a coredump? Or a stack trace of the panic? On Wed, 19 May 2010,
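
A short sketch of the crash-dump workflow those man pages cover (dump numbers and paths hypothetical):

    dumpadm                    # confirm the dump device and savecore directory
    dumpadm -y                 # have savecore run automatically on reboot
    savecore -L                # or capture a live dump without waiting for a panic
    mdb unix.0 vmcore.0        # then inspect with ::status, ::stack, ::msgbuf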

[zfs-discuss] zpool replace lockup / replace process now stalled, how to fix?

2010-05-17 Thread Michael Donaghy
er a proper replace of the failed partitions? Many thanks, Michael

Re: [zfs-discuss] Opteron 6100? Does it work with opensolaris?

2010-05-11 Thread Michael DeMan
I agree on the motherboard and peripheral chipset issue. This, and the last generation AMD quad/six core motherboards all seem to use the AMD SP56x0/SP5100 chipset, which I can't find much information about support on for either OpenSolaris or FreeBSD. Another issue is the LSI SAS2008 chipset f

Re: [zfs-discuss] osol monitoring question

2010-05-10 Thread Michael Schuster
standard monitoring tools? If not, what other tools exist that can do the same? "zpool iostat" for one. Michael -- michael.schus...@oracle.com http://blogs.sun.com/recursion Recursion, n.: see 'Recursion'
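
A sketch of the two usual views, assuming a hypothetical pool "tank":

    zpool iostat -v tank 5    # per-vdev bandwidth and IOPS at 5-second intervals
    iostat -xn 5              # per-device service times (asvc_t) and %busy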

Re: [zfs-discuss] why both dedup and compression?

2010-05-06 Thread Michael Sullivan
works really well. > > -- > -Peter Tribble > http://www.petertribble.co.uk/ - http://ptribble.blogspot.com/ Mi

Re: [zfs-discuss] Loss of L2ARC SSD Behaviour

2010-05-06 Thread Michael Sullivan
read a block from more devices simultaneously, it will cut the latency of the overall read. On 7 May 2010, at 02:57, Marc Nicholas wrote: > Hi Michael, > > What makes you think striping the SSDs would be faster than round-robin? > > -marc > > On Thu, May 6, 2010 at 1:09 PM,

Re: [zfs-discuss] Loss of L2ARC SSD Behaviour

2010-05-06 Thread Michael Sullivan
anything to come close in its approach to disk data management. Let's just hope it keeps moving forward, it is truly a unique way to view disk storage. Anyway, sorry for the ramble, but to everyone, thanks again for the answers. Mike --- Michael Sullivan michael.p.sulli...@m

Re: [zfs-discuss] Loss of L2ARC SSD Behaviour

2010-05-05 Thread Michael Sullivan
On 6 May 2010, at 13:18, Edward Ned Harvey wrote: >> From: Michael Sullivan [mailto:michael.p.sulli...@mac.com] >> >> While it explains how to implement these, there is no information >> regarding failure of a device in a striped L2ARC set of SSD's. I have

Re: [zfs-discuss] Loss of L2ARC SSD Behaviour

2010-05-05 Thread Michael Sullivan
Hi Ed, Thanks for your answers. They seem to make sense, sort of… On 6 May 2010, at 12:21, Edward Ned Harvey wrote: >> From: zfs-discuss-boun...@opensolaris.org [mailto:zfs-discuss- >> boun...@opensolaris.org] On Behalf Of Michael Sullivan >> >> I have a question I canno

Re: [zfs-discuss] b134 pool borked!

2010-05-05 Thread Michael Mattsson
I got a suggestion to check what fmdump -eV gave to look for PCI errors if the controller might be broken. Attached you'll find the last panic's fmdump -eV. It indicates that ZFS can't open the drives. That might suggest a broken controller, but my slog is on the motherboard's internal controll

Re: [zfs-discuss] b134 pool borked!

2010-05-05 Thread Michael Mattsson
Thanks for your reply! I ran memtest86 and it did not report any errors. The disk controller I've not replaced yet. The server is up in multi-user mode with the broken pool in an un-imported state. Format now works and properly lists all my devices without panicking. zpool import panics the b

Re: [zfs-discuss] b134 pool borked!

2010-05-05 Thread Michael Mattsson
This is what my zpool import command looks like: Attached you'll find the output of zdb -l for each device. pool: tank id: 10904371515657913150 state: ONLINE action: The pool can be imported using its name or numeric identifier. config: tank ONLINE raidz1-0 ONLIN

Re: [zfs-discuss] b134 pool borked!

2010-05-04 Thread Michael Mattsson
90 reads and not a single comment? Not the slightest hint of what's going on?

Re: [zfs-discuss] Loss of L2ARC SSD Behaviour

2010-05-04 Thread Michael Sullivan
Ok, thanks. So, if I understand correctly, it will just remove the device from the VDEV and continue to use the good ones in the stripe. Mike --- Michael Sullivan michael.p.sulli...@me.com http://www.kamiogi.net/ Japan Mobile: +81-80-3202-2599 US Phone: +1-561-283-2034 On 5

  1   2   3   4   >