[zfs-discuss] Backing up ZFS metadata

2012-08-24 Thread Scott Aitken
pool by overwriting the start and end of two member disks (and possibly some data). I assume that if I could have restored the lost metadata I could have recovered most of the real data. Thanks Scott

Re: [zfs-discuss] Recovering lost labels on raidz member

2012-08-13 Thread Scott
On Mon, Aug 13, 2012 at 10:40:45AM -0700, Richard Elling wrote: > > On Aug 13, 2012, at 2:24 AM, Sašo Kiselkov wrote: > > > On 08/13/2012 10:45 AM, Scott wrote: > >> Hi Saso, > >> > >> thanks for your reply. > >> > >> If all d

Re: [zfs-discuss] Recovering lost labels on raidz member

2012-08-13 Thread Scott
Thanks again Saso, at least I have closure :) Scott On Mon, Aug 13, 2012 at 11:24:55AM +0200, Sašo Kiselkov wrote: > On 08/13/2012 10:45 AM, Scott wrote: > > Hi Saso, > > > > thanks for your reply. > > > > If all disks are the same, is the root pointer the sam

Re: [zfs-discuss] Recovering lost labels on raidz member

2012-08-13 Thread Scott
Hi Saso, thanks for your reply. If all disks are the same, is the root pointer the same? Also, is there a "signature" or something unique to the root block that I can search for on the disk? I'm going through the On-disk specification at the moment. Scott On Mon, Aug 13, 201

[zfs-discuss] Recovering lost labels on raidz member

2012-08-12 Thread Scott
t the labels using the information from the 3 valid disks? Thanks Scott

[zfs-discuss] Corrupted pool: I/O error and Bad exchange descriptor

2012-07-16 Thread Scott Aitken
e to get some of my data back. Any recovery is a bonus. If anyone is keen, I have enabled SSH into the OpenIndiana box which I'm using to try and recover the pool, so if you'd like to take a shot please let me know. Thanks in advance, Scott

Re: [zfs-discuss] Recovery of RAIDZ with broken label(s)

2012-06-16 Thread Scott Aitken
On Sat, Jun 16, 2012 at 09:58:40AM -0500, Gregg Wonderly wrote: > > On Jun 16, 2012, at 9:49 AM, Scott Aitken wrote: > > > On Sat, Jun 16, 2012 at 09:09:53AM -0500, Gregg Wonderly wrote: > >> Use 'dd' to replicate as much of lofi/2 as you can onto another devic

Re: [zfs-discuss] Recovery of RAIDZ with broken label(s)

2012-06-16 Thread Scott Aitken
in that slot so that it will import and then you can 'zpool replace' > the > new disk into the pool perhaps? > > Gregg Wonderly > > On 6/16/2012 2:02 AM, Scott Aitken wrote: > > On Sat, Jun 16, 2012 at 08:54:05AM +0200, Stefan Ring wrote: > >

Re: [zfs-discuss] Recovery of RAIDZ with broken label(s)

2012-06-16 Thread Scott Aitken
was /dev/lofi/2
            /dev/lofi/5  ONLINE  0  0  0
            /dev/lofi/4  ONLINE  0  0  0
            /dev/lofi/3  ONLINE  0  0  0
            /dev/lofi/1  ONLINE  0  0  0
root@openindiana-01:/mnt# zpool sc

Re: [zfs-discuss] Recovery of RAIDZ with broken label(s)

2012-06-15 Thread Scott Aitken
in the second import, it complains that it can't open the device, rather than saying it has corrupted data. It's interesting that even though 4 of the 5 disks are available, it still can't import it as DEGRADED. Thanks again. Scott

Re: [zfs-discuss] Recovery of RAIDZ with broken label(s)

2012-06-15 Thread Scott Aitken
disk with an incorrect label. But how I can reconstruct that label is a problem. Also, there are four drives of the five-drive RAIDZ available. Based on what criteria does ZFS decide that it is FAULTED and not DEGRADED? Odd. Thanks, Scott ps I'm downloading OpenIndiana now. > > Whe

Re: [zfs-discuss] Recovery of RAIDZ with broken label(s)

2012-06-14 Thread Scott Aitken
On Thu, Jun 14, 2012 at 09:56:43AM +1000, Daniel Carosone wrote: > On Tue, Jun 12, 2012 at 03:46:00PM +1000, Scott Aitken wrote: > > Hi all, > > Hi Scott. :-) > > > I have a 5 drive RAIDZ volume with data that I'd like to recover. > > Yeah, still.. > >

[zfs-discuss] Recovery of RAIDZ with broken label(s)

2012-06-11 Thread Scott Aitken
so make the solaris machine available via SSH if some wonderful person wants to poke around. If I lose the data that's ok, but it'd be nice to know all avenues were tried before I delete the 9TB of images (I need the space...) Many thanks, Scott zfs-list at thismonkey dot com

Re: [zfs-discuss] Sudden drop in disk performance - WD20EURS & 4k sectors to blame?

2011-08-15 Thread chris scott
Did you 4k align your partition table and is ashift=12?
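One way to check, as a sketch (zdb with no arguments dumps the cached pool config, so no pool name is needed):

    $ zdb | grep ashift    # ashift: 12 = 4 KiB sectors, ashift: 9 = 512 B sectors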

Re: [zfs-discuss] ZFS Hard link space savings

2011-06-12 Thread Scott Lawson
boxes we have left. M$ also heavily discounts Exchange CALs to Edu, and Oracle is not very friendly the way Sun was with their JES licensing. So it is bye-bye Sun Messaging Server for us. 2011-06-13 1:14, Scott Lawson writes: Hi All, I have an interesting question that may or may not be an

Re: [zfs-discuss] ZFS Hard link space savings

2011-06-12 Thread Scott Lawson
On 13/06/11 10:28 AM, Nico Williams wrote: On Sun, Jun 12, 2011 at 4:14 PM, Scott Lawson wrote: I have an interesting question that may or may not be answerable from some internal ZFS semantics. This is really standard Unix filesystem semantics. I understand this, just wanting

[zfs-discuss] ZFS Hard link space savings

2011-06-12 Thread Scott Lawson
gards, Scott.

Re: [zfs-discuss] Best choice - file system for system

2011-01-27 Thread Tristram Scott
I don't disagree that zfs is the better choice, but... > Seriously though. UFS is dead. It has no advantage over ZFS that I'm aware of. When it comes to dumping and restoring filesystems, there is still no official replacement for ufsdump and ufsrestore. The discussion has been had
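The nearest ZFS-native analogue is a snapshot piped through zfs send/receive; a minimal sketch, with hypothetical dataset and file names:

    $ zfs snapshot tank/home@dump                          # point-in-time source, like a level-0 dump
    $ zfs send tank/home@dump > /backup/home.zsnd          # dump side
    $ zfs receive tank/home_restored < /backup/home.zsnd   # restore side

The thread's caveat applies: the stream is not an archive format, and a single corrupted bit makes the whole stream unreceivable.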

Re: [zfs-discuss] OT: anyone aware how to obtain 1.8.0 for X2100M2?

2010-12-19 Thread Scott Lawson
Hi, Took me a couple of minutes to find the download for this in My Oracle Support. Search for the patch like this: Patches and Updates Panel -> Patch Search -> Patch Name or Number is: 10275731 Pretty easy really. Scott. PS. I found that patch by using product or family equals

Re: [zfs-discuss] how to replace failed vdev on non redundant pool?

2010-10-15 Thread Scott Meilicke

Re: [zfs-discuss] Optimal raidz3 configuration

2010-10-13 Thread Scott Meilicke
Hello Peter, Read the ZFS Best Practices Guide to start. If you still have questions, post back to the list. http://www.solarisinternals.com/wiki/index.php/ZFS_Best_Practices_Guide#Storage_Pool_Performance_Considerations -Scott On Oct 13, 2010, at 3:21 PM, Peter Taps wrote: > Folks, >

Re: [zfs-discuss] Bursty writes - why?

2010-10-12 Thread Scott Meilicke
sk designed to be sequential, while writes to the ZIL/SLOG will be more random (in order to commit quickly)? Scott Meilicke

Re: [zfs-discuss] [RFC] Backup solution

2010-10-08 Thread Scott Meilicke
r reliability), and which may have bugs. At some point you have to rely on your backups for the unexpected and unforeseen. Make sure they are good! Michael, nice reliability write up! -- Scott Meilicke

Re: [zfs-discuss] [RFC] Backup solution

2010-10-07 Thread Scott Meilicke

Re: [zfs-discuss] Finding corrupted files

2010-10-06 Thread Scott Meilicke

Re: [zfs-discuss] When is it okay to turn off the "verify" option.

2010-10-04 Thread Scott Meilicke

Re: [zfs-discuss] Resilver making the system unresponsive

2010-09-29 Thread Scott Meilicke
I should add I have 477 snapshots across all file systems. Most of them are hourly snaps (225 of them anyway). On Sep 29, 2010, at 3:16 PM, Scott Meilicke wrote: > This must be resilver day :) > > I just had a drive failure. The hot spare kicked in, and access to the pool >

[zfs-discuss] Resilver making the system unresponsive

2010-09-29 Thread Scott Meilicke
insights. -Scott

Re: [zfs-discuss] Fwd: Is there any way to stop a resilver?

2010-09-29 Thread Scott Meilicke
you do it :). Although stopping a scrub is pretty innocuous. -Scott On 9/29/10 9:22 AM, "LIC mesh" wrote: > You almost have it - each iSCSI target is made up of 4 of the raidz vdevs - 4 > * 6 = 24 disks. > > 16 targets total. > > We have one LUN with status of
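For reference, cancelling a scrub (unlike a resilver) is a single command; a sketch with a placeholder pool name:

    $ zpool scrub -s tank    # -s stops an in-progress scrub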

Re: [zfs-discuss] Is there any way to stop a resilver?

2010-09-29 Thread Scott Meilicke
llions (about 30 mins in) and restarts. > > Never gets past 0.00% completion, and K resilvered on any LUN. > > 64 LUNs, 32x5.44T, 32x10.88T in 8 vdevs. > > On Wed, Sep 29, 2010 at 11:40 AM, Scott Meilicke > wrote: >> Has it been running long? Initially

Re: [zfs-discuss] Is there any way to stop a resilver?

2010-09-29 Thread Scott Meilicke
Has it been running long? Initially the numbers are way off. After a while it settles down into something reasonable. How many disks, and what size, are in your raidz2? -Scott On 9/29/10 8:36 AM, "LIC mesh" wrote: > Is there any way to stop a resilver? > > We gotta s

Re: [zfs-discuss] Kernel panic on ZFS import - how do I recover?

2010-09-28 Thread Meilicke, Scott
Brilliant. I set those parameters via /etc/system, rebooted, and the pool imported with just the -f switch. I had seen this as an option earlier, although not that thread, but was not sure it applied to my case. Scrub is running now. Thank you very much! -Scott On 9/23/10 7:07 PM, "
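The excerpt does not name the parameters; the recovery tunables commonly suggested in import-panic threads of this era were the following (an assumption on my part, not confirmed by this excerpt):

    * /etc/system -- hypothetical reconstruction of the settings referred to
    set zfs:zfs_recover=1
    set aok=1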

Re: [zfs-discuss] When Zpool has no space left and no snapshots

2010-09-28 Thread Scott Meilicke

Re: [zfs-discuss] My filesystem turned from a directory into a special character device

2010-09-27 Thread Scott Meilicke
On 9/27/10 9:56 AM, "Victor Latushkin" wrote: > > On Sep 27, 2010, at 8:30 PM, Scott Meilicke wrote: > >> I am running nexenta CE 3.0.3. >> >> I have a file system that at some point in the last week went from a >> directory per 'ls -l'

[zfs-discuss] My filesystem turned from a directory into a special character device

2010-09-27 Thread Scott Meilicke
st created, as seen by ls -l:
drwxr-xr-x  4 root root       4 Sep 27 09:14 scott
crwxr-xr-x  9 root root    0, 0 Sep 20 11:51 scott2
Notice the 'c' vs. 'd' at the beginning of the permissions list. I had been fiddling with permissions last week, then had problems with a kernel panic.

Re: [zfs-discuss] Kernel panic on ZFS import - how do I recover?

2010-09-27 Thread Scott Meilicke
, although not that thread, but was not sure it applied to my case. Scrub is running now. Thank you very much! -Scott Update: The scrub finished with zero errors.

Re: [zfs-discuss] Dedup relationship between pool and filesystem

2010-09-25 Thread Scott Meilicke
When I do the calculations, assuming 300 bytes per block to be conservative, with 128K blocks, I get 2.34G of cache (RAM, L2ARC) per terabyte of deduped data. But block size is dynamic, so you will need more than this. Scott
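The arithmetic behind that figure, as a quick sketch:

    $ echo $(( (1024 ** 4 / (128 * 1024)) * 300 ))   # 128 KiB blocks per TiB x 300 bytes per DDT entry
    2516582400                                       # ~2.34 GiB of table per TiB of deduped data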

Re: [zfs-discuss] Data transfer taking a longer time than expected (Possibly dedup related)

2010-09-24 Thread Scott Meilicke
ss. Maybe stop the process, delete the deduped file system (your copy target), and create a new file system without dedupe to see if that is any better? Scott

Re: [zfs-discuss] Dedup relationship between pool and filesystem

2010-09-23 Thread Scott Meilicke
will dedupe. I am not sure why reporting is not done at the file system level. It may be an accounting issue, i.e. which file system owns the dedupe blocks. But it seems some fair estimate could be made. Maybe the overhead to keep a file system updated with these stats is too high? -Scott

Re: [zfs-discuss] Configuration questions for Home File Server (CPU cores, dedup, checksum)?

2010-09-07 Thread Scott Meilicke
CPU penalty as well. My four-core (1.86 GHz Xeons, 4 yrs old) box nearly maxes out when putting a lot of data into a deduped file system. -Scott

Re: [zfs-discuss] ZFS development moving behind closed doors

2010-08-16 Thread Scott Meilicke
"I had already begun the process of migrating my 134 boxes over to Nexenta before Oracle's cunning plans became known. This just reaffirms my decision. " Us too. :) -- This message posted from opensolaris.org ___ zfs-discuss mailing list zfs-discuss@op

Re: [zfs-discuss] snapshot space - miscalculation?

2010-08-04 Thread Scott Meilicke
Are there other file systems underneath daten/backups that have snapshots?
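One way to answer that, as a sketch using the dataset name from the post:

    $ zfs list -r -t snapshot daten/backups    # snapshots of the dataset and all descendants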

Re: [zfs-discuss] slog/L2ARC on a hard drive and not SSD?

2010-07-21 Thread Scott Meilicke
Another data point - I used three 15K disks striped using my RAID controller as a slog for the ZIL, and performance went down. I had three raidz SATA vdevs holding the data, and my load was VMs, i.e. a fair amount of small, random IO (60% random, 50% write, ~16k in size). Scott

Re: [zfs-discuss] Deleting large amounts of files

2010-07-19 Thread Scott Meilicke
If these files are deduped, and there is not a lot of RAM on the machine, it can take a long, long time to work through the dedupe portion. I don't know enough to know if that is what you are experiencing, but it could be the problem. How much RAM do you have? Scott

Re: [zfs-discuss] Announce: zfsdump

2010-07-05 Thread Tristram Scott
> At this point, I will repeat my recommendation about using zpool-in-files as a backup (staging) target. Depending where you host, and how you combine the files, you can achieve these scenarios without clunkery, and with all the benefits a zpool provides.
This is another good sch

Re: [zfs-discuss] Announce: zfsdump

2010-06-29 Thread Tristram Scott
evik wrote: Reading this list for a while made it clear that zfs send is not a backup solution, it can be used for cloning the filesystem to a backup array if you are consuming the stream with zfs receive so you get notified immediately about errors. Even one bitflip will render the stream unusa

Re: [zfs-discuss] Announce: zfsdump

2010-06-29 Thread Tristram Scott
> if, for example, the network pipe is bigger than one unsplit stream of zfs send | zfs recv then splitting it to multiple streams should optimize the network bandwidth, shouldn't it?
Well, I guess so. But I wonder what the bottleneck is here. If it is the rate at which zfs

Re: [zfs-discuss] Announce: zfsdump

2010-06-29 Thread Tristram Scott
> would be nice if i could pipe the zfs send stream to a split and then send those split streams over the network to a remote system. it would help sending it over to the remote system quicker. can your tool do that? something like this: [ASCII diagram lost in the archive preview]
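For contrast, the conventional single-stream pipe that splitting would aim to beat (a sketch; host and dataset names are placeholders):

    $ zfs send tank/fs@snap | ssh remotehost zfs receive backup/fs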

Re: [zfs-discuss] Announce: zfsdump

2010-06-28 Thread Tristram Scott
o. The only way I know of achieving that is by using zfs send etc. On 6/28/2010 11:26 AM, Tristram Scott wrote: [snip] Tristram

[zfs-discuss] Announce: zfsdump

2010-06-28 Thread Tristram Scott
For quite some time I have been using zfs send -R fsname@snapname | dd of=/dev/rmt/1ln to make a tape backup of my zfs file system. A few weeks back the size of the file system grew larger than would fit on a single DAT72 tape, and I once again searched for a simple solution to allow dumping
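The pipeline from the post, with a hypothetical restore counterpart for completeness ("fsname" is a placeholder):

    $ zfs send -R fsname@snapname | dd of=/dev/rmt/1ln    # dump to tape
    $ dd if=/dev/rmt/1ln | zfs receive -dF fsname         # read back and restore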

Re: [zfs-discuss] raid-z - not even iops distribution

2010-06-23 Thread Scott Meilicke
assertion, so I may be completely wrong. I assume your hardware is recent, the controllers are on PCIe x4 buses, etc. -Scott

Re: [zfs-discuss] COMSTAR iSCSI and two Windows computers

2010-06-23 Thread Scott Meilicke
Look again at how XenServer does storage. I think you will find it already has a solution, both for iSCSI and NFS.

Re: [zfs-discuss] Complete Linux Noob

2010-06-16 Thread Scott Kaelin
need certain kernel features turned on. -- Scott Kaelin 0x6BE43783

Re: [zfs-discuss] OCZ Devena line of enterprise SSD

2010-06-15 Thread Scott Meilicke
Price? I cannot find it.

[zfs-discuss] zpool export / import discrepancy

2010-06-15 Thread Scott Squires
0t3d5  ONLINE  0  0  0
c10t3d6  ONLINE  0  0  0
spares
  c10t3d7  AVAIL
Is ZFS dependent on the order of the drives? Will this cause any issue down the road? Thank you all; Scott

Re: [zfs-discuss] combining series of snapshots

2010-06-08 Thread Scott Meilicke
your live data, another to access the historical data. -Scott

Re: [zfs-discuss] iScsi slow

2010-05-26 Thread Scott Meilicke
iSCSI writes require a sync to disk for every write. SMB writes get cached in memory, and therefore are much faster. I am not sure why it is so slow for reads. Have you tried COMSTAR iSCSI? I have read in these forums that it is faster. -Scott

Re: [zfs-discuss] iSCSI confusion

2010-05-24 Thread Scott Meilicke
VMware will properly handle sharing a single iSCSI volume across multiple ESX hosts. We have six ESX hosts sharing the same iSCSI volumes - no problems. -Scott

Re: [zfs-discuss] Consolidating a huge stack of DVDs using ZFS dedup: automation?

2010-05-04 Thread Scott Steagall
sn't much control over which one it keeps - for > backups you may really want to keep the earliest (or latest?) backup the > file appeared in. I've used "Dirvish" http://www.dirvish.org/ and rsync to do just that...worked great! Scott > > Using ZFS Dedup is an

Re: [zfs-discuss] Benchmarking Methodologies

2010-04-23 Thread Scott Meilicke

Re: [zfs-discuss] ZFS for iSCSI NTFS backing store.

2010-04-23 Thread Scott Meilicke
At the time we had it set up as 3 x 5-disk raidz, plus a hot spare. These 16 disks were in a SAS cabinet, and the slog was on the server itself. We are now running 2 x 7 raidz2 plus a hot spare and slog, all inside the cabinet. Since the disks are 1.5T, I was concerned about resilver times fo
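The newer layout described, reconstructed as a sketch (pool and device names hypothetical):

    $ zpool create tank \
        raidz2 c0t0d0 c0t1d0 c0t2d0 c0t3d0 c0t4d0 c0t5d0 c0t6d0 \
        raidz2 c0t7d0 c0t8d0 c0t9d0 c0t10d0 c0t11d0 c0t12d0 c0t13d0 \
        spare c0t14d0 \
        log c0t15d0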

Re: [zfs-discuss] ZFS for iSCSI NTFS backing store.

2010-04-16 Thread Scott Meilicke
as a target for Doubletake, so it only saw write IO, with very little read. My load testing using iometer was very positive, and I would not have hesitated to use it as the primary node serving about 1000 users, maybe 200-300 active at a time. Scott

Re: [zfs-discuss] sharing a ssd between rpool and l2arc

2010-03-30 Thread Scott Duckworth
ied to use a ZVOL from rpool (on fast 15k rpm drives) as a cache device for another pool (on slower 7.2k rpm drives). It worked great up until it hit the race condition and hung the system. It would have been nice if zfs had issued a warning, or at least if this fact was better documented. Scott

Re: [zfs-discuss] Is this a sensible spec for an iSCSI storage box?

2010-03-19 Thread Scott Meilicke
> One of the reasons I am investigating solaris for this is sparse volumes and dedupe could really help here. Currently we use direct attached storage on the dom0s and allocate an LVM to the domU on creation. Just like your example above, we have lots of those "80G to start with please"

Re: [zfs-discuss] Rethinking my zpool

2010-03-19 Thread Scott Meilicke
at kind of performance do you need? Maybe raidz2 will give you the performance you need. Maybe not. Measure the performance of each configuration and decide for yourself. I am a big fan of iometer for this type of work. -Scott

Re: [zfs-discuss] ZFS/OSOL/Firewire...

2010-03-18 Thread Scott Meilicke
>Apple users have different expectations regarding data loss than Solaris and >Linux users do. Come on, no Apple user bashing. Not true, not fair. Scott

Re: [zfs-discuss] Is this a sensible spec for an iSCSI storage box?

2010-03-18 Thread Scott Meilicke
>I was planning to mirror them - mainly in the hope that I could hot swap a new >one in the event that an existing one started to degrade. I suppose I could >start with one of each and convert to a mirror later although the prospect of >losing either disk fills me with dread. You do not need to

Re: [zfs-discuss] Is this a sensible spec for an iSCSI storage box?

2010-03-18 Thread Scott Meilicke
disks? Hopefully your switches support NIC aggregation? The only issue I have had on 2009.06 using iSCSI (I had a Windows VM directly attaching to an iSCSI 4T volume) was solved and backported to 2009.06 (bug 6794994). -Scott

Re: [zfs-discuss] Can we get some documentation on iSCSI sharing after comstar took over?

2010-03-16 Thread Scott Meilicke
volume, no security. Not quite a one-liner. After you create the target once (step 3), you do not have to do that again for the next volume. So three lines. -Scott
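The three lines might look like this (a sketch; names and sizes are placeholders, and the view is added against the GUID that sbdadm prints):

    $ zfs create -V 100G tank/vol1
    $ pfexec sbdadm create-lu /dev/zvol/rdsk/tank/vol1
    $ pfexec stmfadm add-view 600144f0deadbeef    # GUID from the create-lu output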

Re: [zfs-discuss] backup zpool to tape

2010-03-15 Thread Scott Meilicke
Greg, I am using NetBackup 6.5.3.1 (7.x is out) with fine results. Nice and fast. -Scott

Re: [zfs-discuss] [osol-discuss] Moving Storage to opensolaris+zfs. What a

2010-03-04 Thread Scott Meilicke
ave a downloaded copy of whichever main backup software you use. >That's it. You backup data using Amanda/Bacula/et al onto tape. You >backup your boot/root filesystem using 'zfs send' onto the USB key. Erik, great! I never thought of the USB key to store an rpool copy. I

Re: [zfs-discuss] raidz2 array FAULTED with only 1 drive down

2010-02-25 Thread Scott Meilicke
You might have to force the import with -f. Scott

Re: [zfs-discuss] SSD and ZFS

2010-02-12 Thread Scott Meilicke
d up reads. Here is the ZFS best practices guide, which should help with this decision: http://www.solarisinternals.com/wiki/index.php/ZFS_Best_Practices_Guide Read that, then come back with more questions. Best, Scott
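The two SSD roles in question, as a sketch (pool and device names hypothetical):

    $ zpool add tank cache c1t5d0   # L2ARC: speeds up reads
    $ zpool add tank log c1t6d0     # slog: speeds up synchronous (ZIL) writes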

Re: [zfs-discuss] Mounting a snapshot of an iSCSI volume using Windows

2010-02-08 Thread Scott Meilicke
I plan on filing a support request with Sun, and will try to post back with any results. Scott

Re: [zfs-discuss] Mounting a snapshot of an iSCSI volume using Windows

2010-02-08 Thread Scott Meilicke
That is likely it. I created the volume using 2009.06, then later upgraded to 124. I just now created a new zvol, connected it to my Windows server, formatted, and added some data. Then I snapped the zvol, cloned the snap, and used 'pfexec sbdadm create-lu'. When presented to the Windows server,
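The snapshot-clone-LU workflow described, reconstructed as a sketch (dataset names are placeholders):

    $ zfs snapshot tank/vol1@snap1
    $ zfs clone tank/vol1@snap1 tank/vol1clone
    $ pfexec sbdadm create-lu /dev/zvol/rdsk/tank/vol1clone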

Re: [zfs-discuss] Mounting a snapshot of an iSCSI volume using Windows

2010-02-08 Thread Scott Meilicke
Sure, but that will put me back into the original situation. -Scott

Re: [zfs-discuss] Mounting a snapshot of an iSCSI volume using Windows

2010-02-08 Thread Scott Meilicke

[zfs-discuss] Mounting a snapshot of an iSCSI volume using Windows

2010-02-04 Thread Scott Meilicke
Gallardo can see the LUN, but like I said, it looks blank to the OS. I suspect the 'sbdadm create-lu' phase. Any help to get Windows to see it as a LUN with NTFS data would be appreciated. Thanks, Scott

Re: [zfs-discuss] ZFS configuration suggestion with 24 drives

2010-01-29 Thread Scott Meilicke
'conversation', but the LAG setup will determine how a conversation is defined. Scott

Re: [zfs-discuss] ZFS configuration suggestion with 24 drives

2010-01-28 Thread Scott Meilicke
It looks like there is not a free slot for a hot spare? If that is the case, then it is one more factor to push towards raidz2, as you will need time to remove the failed disk and insert a new one. During that time you don't want to be left unprotected.

Re: [zfs-discuss] ZFS/NFS/LDOM performance issues

2010-01-19 Thread Scott Duckworth
> Thus far there is no evidence that there is anything wrong with your storage arrays, or even with zfs. The problem seems likely to be somewhere else in the kernel.
Agreed. And I tend to think that the problem lies somewhere in the LDOM software. I mainly just wanted to get some experience

Re: [zfs-discuss] ZFS/NFS/LDOM performance issues

2010-01-19 Thread Scott Duckworth
No errors reported on any disks.
$ iostat -xe
                  extended device statistics        ---- errors ----
device     r/s   w/s  kr/s  kw/s  wait actv svc_t  %w  %b  s/w h/w trn tot
vdc0       0.6   5.6  25.0  33.5   0.0  0.1  17.3   0   2    0   0   0   0
vdc1      78.1  24.4

[zfs-discuss] ZFS/NFS/LDOM performance issues

2010-01-19 Thread Scott Duckworth
[Cross-posting to ldoms-discuss] We are occasionally seeing massive time-to-completions for I/O requests on ZFS file systems on a Sun T5220 attached to a Sun StorageTek 2540 and a Sun J4200, and using a SSD drive as a ZIL device. Primary access to this system is via NFS, and with NFS COMMITs b

Re: [zfs-discuss] ZIL to disk

2010-01-15 Thread Scott Meilicke
sequential writes. That same server can only consume about 22 MBps using an artificial load designed to simulate my VM activity (using iometer). So it varies greatly depending upon Y. -Scott

Re: [zfs-discuss] raidz data loss stories?

2009-12-21 Thread Scott Meilicke
protection. Scott

Re: [zfs-discuss] Using iSCSI on ZFS with non-native FS - How to backup.

2009-12-07 Thread Scott Meilicke
It does 'just work'; however, you may have some file and/or file system corruption if the snapshot was taken at the moment that your Mac is updating some files. So use the time slider function and take a lot of snaps. :)

Re: [zfs-discuss] X45xx storage vs 7xxx Unified storage

2009-11-23 Thread Scott Meilicke
If the 7310s can meet your performance expectations, they sound much better than a pair of x4540s. Auto-fail over, SSD performance (although these can be added to the 4540s), ease of management, and a great front end. I haven't seen if you can use your backup software with the 7310s, but from

Re: [zfs-discuss] mirroring ZIL device

2009-11-23 Thread Scott Meilicke
ncern about losing power and having the X25 RAM cache disappear during a write. -Scott

Re: [zfs-discuss] ZFS ZIL/log on SSD weirdness

2009-11-18 Thread Scott Meilicke
iSCSI volume has nothing to do with ZFS' zil usage. -Scott

Re: [zfs-discuss] ZFS ZIL/log on SSD weirdness

2009-11-17 Thread Scott Meilicke
ivity. Same for NFS. I see no ZIL activity using rsync, for an example of a network file transfer that does not require sync. Scott

Re: [zfs-discuss] CIFS crashes when accessed with Adobe Photoshop Elements 6.0 via Vista

2009-11-10 Thread scott smallie
Upgrading to the latest dev release fixed the problem for me.

[zfs-discuss] CIFS crashes when accessed with Adobe Photoshop Elements 6.0 via Vista

2009-11-09 Thread scott smallie
I have a repeatable test case for this incident. Every time I access my ZFS CIFS-shared file system with Adobe Photoshop Elements 6.0 via my Vista workstation, the OpenSolaris server stops serving CIFS. The share functions as expected for all other CIFS operations. -Begin Configuration

Re: [zfs-discuss] Difficulty testing an SSD as a ZIL

2009-10-30 Thread Scott Meilicke
Excellent! That worked just fine. Thank you, Victor. -Scott

[zfs-discuss] Difficulty testing an SSD as a ZIL

2009-10-29 Thread Scott Meilicke
the SSD to my production pool. Any ideas why I am getting the import error? Thanks, Scott

Re: [zfs-discuss] zpool getting in a stuck state?

2009-10-28 Thread Scott Meilicke
ago, I have had no problems. Again, I don't know if this would fix your problem, but it may be worth a try. Just don't upgrade your ZFS version, and you will be able to roll back to 2009.06 at any time. -Scott

Re: [zfs-discuss] File level cloning

2009-10-28 Thread Scott Meilicke
I don't think so. But, you can clone at the ZFS level, and then just use the vmdk(s) that you need. As long as you don't muck about with the other stuff in the clone, the space usage should be the same. -Scott

Re: [zfs-discuss] Setting up an SSD ZIL - Need A Reality Check

2009-10-22 Thread Meilicke, Scott
Interesting. We must have different setups with our PERCs. Mine have always auto rebuilt. -- Scott Meilicke On Oct 22, 2009, at 6:14 AM, "Edward Ned Harvey" wrote: Replacing failed disks is easy when PERC is doing the RAID. Just remove the failed drive and replace with a goo

Re: [zfs-discuss] Setting up an SSD ZIL - Need A Reality Check

2009-10-21 Thread Scott Meilicke
lace with a good one, and the PERC will rebuild automatically. But are you talking about OpenSolaris managed RAID? I am pretty sure, but not tested, that in pseudo JBOD mode (each disk a raid 0 or 1), the PERC would still present a replaced disk to the OS without reconfiguring the PERC BIO

Re: [zfs-discuss] Setting up an SSD ZIL - Need A Reality Check

2009-10-21 Thread Scott Meilicke
the regular pool for the ZIL, correct? Assuming this is correct, a mirror would be to preserve performance during a failure? Thanks everyone, this has been really helpful. -Scott -- This message posted from opensolaris.org ___ zfs-discuss mailing list

Re: [zfs-discuss] Setting up an SSD ZIL - Need A Reality Check

2009-10-21 Thread Meilicke, Scott
Thanks Ed. It sounds like you have run in this mode? No issues with the perc? -- Scott Meilicke On Oct 20, 2009, at 9:59 PM, "Edward Ned Harvey" wrote: System: Dell 2950 16G RAM 16 1.5T SATA disks in a SAS chassis hanging off of an LSI 3801e, no extra drive slots, a si
