can power down to replace
any drives or do maintenance. It's mainly for cheap, quiet enclosures
that can export JBOD...
Thanks,
mike
--
Mike Gerdts
http://mgerdts.blogspot.com/
in a panic loop. A
support call is open and it is a known problem that (I'm told) is
being worked on.
I only mention this to say that this type of problem is not restricted
to zfs boot.
Mike
--
Mike Gerdts
http://mgerdts.blogspot.com/
One last question: when it comes to patching these zones, is it better to patch them normally, or to destroy all the local zones, patch only the global zone, and use a shell script to recreate all the zones?
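If you do go the recreate route, the usual building blocks are zonecfg export plus a scripted reinstall. A minimal sketch (zone name and config path are placeholders):
  zonecfg -z myzone export > /var/tmp/myzone.cfg   # capture the config before destroying the zone
  # after patching the global zone, recreate and reinstall
  zonecfg -z myzone -f /var/tmp/myzone.cfg
  zoneadm -z myzone install
  zoneadm -z myzone boot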
Sorry, my question is not clear enough. These pools contain a zone each.
Greetings,
Given ZFS pools, how does one import these pools on another node in the cluster?
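A minimal sketch of the manual, non-clustered sequence (the pool name is a placeholder; -f is only needed if the pool was not cleanly exported):
  zpool export tank        # on the node that currently owns the pool
  zpool import tank        # on the other node
  zpool import -f tank     # if the first node died without exporting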
Mike
differences between finding the differences from a clone's origin to a snapshot of the clone, and finding the differences between two snapshots of the same file system.
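(In later releases this kind of comparison surfaced as zfs diff; a usage sketch with placeholder dataset and snapshot names:)
  zfs diff tank/home@monday tank/home@tuesday   # between two snapshots
  zfs diff tank/home@monday tank/home           # snapshot vs. current state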
I'm also glad to see this is in the works. Most of my use cases
On 7/11/07, Darren J Moffat <[EMAIL PROTECTED]> wrote:
> Mike Gerdts wrote:
> > Perhaps a better approach is to create a pseudo file system that looks like:
> >
> > /pool
> >/@@
> >/@today
> &
a particular path would have the same effect as zfs receive.
Is this something that is maybe worth spending a few more cycles on,
or is it likely broken from the beginning?
Mike
--
Mike Gerdts
http://mgerdts.blogspot.com/
y member).
> In any case that is absolutely unaccepted practice.
The past week of inactivity is likely related to most of Sun in the US
being on mandatory vacation. Sun typically shuts down for the week
that contains July 4 and (I think) the week between Christmas and Jan
1.
Mike
--
Mike Gerdts
I had a similar situation between x86 and SPARC with the pool version number. When I
created the pool on the LOWER rev machine, it was seen by the HIGHER rev
machine. This was a USB HDD, not a stick. I can now move the drive
between boxes.
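(A quick way to compare what each host supports with what the pool is using, as a sketch:)
  zpool upgrade -v    # pool versions this host's ZFS supports
  zpool upgrade       # lists pools still on older on-disk versions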
HTH,
Mike
Dick Davies wrote:
Thanks to everyone for the sanity
At what Solaris10 level (patch/update) was the "single-threaded
compression" situation resolved?
Could you be hitting that one?
-- MikeE
-----Original Message-----
From: [EMAIL PROTECTED]
[mailto:[EMAIL PROTECTED] On Behalf Of Roch - PAE
Sent: Tuesday, June 26, 2007 12:26 PM
To: Roshan Perera
On 6/20/07, Paul Fisher <[EMAIL PROTECTED]> wrote:
I would not risk raidz on that many disks. A nice compromise may be 14+2
raidz2, which should perform nicely for your workload and be pretty reliable
when the disks start to fail.
Would anyone on the list not recommend this setup? I could li
. If I really do need room for two drives to fail, then I suppose I can look for a setup with 14 drives' worth of usable space and use raidz2.
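For reference, a 14+2 raidz2 vdev would be created along these lines (a sketch; pool and device names are placeholders):
  zpool create tank raidz2 c1t0d0 c1t1d0 c1t2d0 c1t3d0 c1t4d0 c1t5d0 c1t6d0 c1t7d0 \
      c1t8d0 c1t9d0 c1t10d0 c1t11d0 c1t12d0 c1t13d0 c1t14d0 c1t15d0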
Thanks,
mike
On 6/15/07, Brian Hechinger <[EMAIL PROTECTED]> wrote:
Hmmm, that's an interesting point. I remember the old days of having to
stagger startup for large drives (physically large, not capacity large).
Can that be done with SATA?
I had to link 2 600w power supplies together to be able to power
On 6/14/07, Frank Cusack <[EMAIL PROTECTED]> wrote:
Yes, but there are many ways to get transactions, e.g. journalling.
ext3 is journaled. it doesn't seem to always be able to recover data.
it also takes forever to fsck. i thought COW might alleviate some of
the fsck needs... it just seems like
times (FAT32, NTFS, XFS, JFS) it is encouraging
to see more options that put emphasis on integrity...
On 6/14/07, Frank Cusack <[EMAIL PROTECTED]> wrote:
On June 14, 2007 3:57:55 PM -0700 mike <[EMAIL PROTECTED]> wrote:
> as a more SOHO user I like ZFS mainly for its COW and
it's about time. this hopefully won't spark another license debate,
etc... ZFS may never get into linux officially, but there's no reason
a lot of the same features and ideologies can't make it into a
linux-approved-with-no-arguments filesystem...
as a more SOHO user i like ZFS mainly for it's CO
looks like you used 3 for a total of 15 disks, right?
I have a CM stacker too - I used the CM 4-disks-in-3-5.25"-slots
though. I am currently trying to sell it too, as it is bulky and I
would prefer using eSATA/maybe Firewire/USB enclosures and a small
"controller" machine (like a Shuttle) so it
> > can someone belt me with a cluestick please?
> >
> >
ght try to
use a CF-boot-option in their environment.
Good thread, lets bat this around some more.
-- MikeE
-----Original Message-----
From: [EMAIL PROTECTED] [mailto:[EMAIL PROTECTED]
Sent: Tuesday, May 29, 2007 9:48 PM
To: Ellis, Mike
Cc: Carson Gaspar; zfs-discuss@opensolaris.or
Also the "unmirrored memory" for the rest of the system has ECC and
ChipKill, which provides at least SOME protection against random
bit-flips.
--
Question: It appears that CF and friends would make a decent live-boot
(but don't run on me like I'm a disk) type of boot-media due to the
limited wr
On Fri, 2007-05-25 at 15:46 -0700, Eric Schrock wrote:
> On Fri, May 25, 2007 at 03:39:11PM -0700, Mike Dotson wrote:
> >
> > In fact the console-login depends on filesystem/minimal which to me
> > means minimal file systems not all file systems and there is no software
>
first accessed.
Agreed but there's still the issue with console-login being dependent on
all file systems instead of minimal file systems.
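(A quick way to see the dependency graph in question on a given box, as a sketch:)
  svcs -d svc:/system/console-login:default       # what console-login waits for
  svcs -D svc:/system/filesystem/minimal:default  # what depends on filesystem/minimal
  svcs -D svc:/system/filesystem/local:default    # vs. what depends on filesystem/local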
>
> - Eric
>
>
> --
> Eric Schrock, Solaris Kernel Development http://blogs.sun.com/eschrock
--
Mike Dotson
On Fri, 2007-05-25 at 15:50 -0600, Lori Alt wrote:
> Mike Dotson wrote:
> > On Fri, 2007-05-25 at 14:29 -0600, Lori Alt wrote:
>
> > Would help in many cases where an admin needs to work on a system but
> > doesn't need, say 20k users home directories mounted,
k users home directories mounted, to do this work.
>
> Lori
--
Mike Dotson
This is probably a good place to start.
http://blogs.sun.com/realneel/entry/zfs_and_databases
Please post back to the group with your results, I'm sure many of us are
interested.
Thanks,
-- MikeE
-----Original Message-----
From: [EMAIL PROTECTED]
[mailto:[EMAIL PROTECTED] On Behalf Of homeru
e lofi driver on top of
zfs. If you have enough RAM, try copying the iso file to /tmp, lofi
mount it from there, then try again.
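A sketch of that workaround (the ISO path is a placeholder):
  cp /path/to/image.iso /tmp/
  lofiadm -a /tmp/image.iso              # prints the lofi device, e.g. /dev/lofi/1
  mount -F hsfs -o ro /dev/lofi/1 /mnt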
Mike
list a files for cpio.
Mike
--
Mike Gerdts
http://mgerdts.blogspot.com/
thanks for the reply.
On 5/10/07, Al Hopper <[EMAIL PROTECTED]> wrote:
My personal opinion is that USB is not robust enough under (Open)Solaris
to provide the reliability that someone considering ZFS is looking for.
I base this on experience with two 7 port powered USB hubs, each with 4 *
2Gb K
HO use, sharing files over samba to a couple
Windows machines + a media player.
Side note: Is this right? "ditto" blocks are extra parity blocks
stored on the same disk (won't prevent total disk failures, but could
provide data recovery if enough pari
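(For what it's worth, ditto blocks are extra replicated copies of a block rather than parity. For user data, on builds that expose it, the number of copies is controlled by the copies property; a sketch with a placeholder dataset:)
  zfs set copies=2 tank/photos
  zfs get copies tank/photos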
ly PCI-e
adapters... Marvell or SI or anything as long as it's PCI-e and has 4
or 5 eSATA ports that can work with a port multiplier (for 4-5 disks
per port) ... I don't think there is a clear fully supported option
yet or I'd be using it right now.
- mike
efore it's completely failed...
- mike
On 5/4/07, Al Hopper <[EMAIL PROTECTED]> wrote:
On Fri, 4 May 2007, Lee Fyock wrote:
> Hi--
>
> I'm looking forward to using zfs on my Mac at some point. My desktop
> server (a dual-1.25GHz G4) has a motley collection of discs that h
i am attempting to install b62 from the b62_zfsboot.iso that was posted last
week.
>
> Mike makes a good point. We have some severe
> problems
> with build 63. I've been hoping to get an answer for
> what's
> going on with it, but so far, I don't have one.
ike
> 'Solaris nv_b62'. Is it possible there
> were any errors while it was installing?
> If it generates a log during install, maybe you
> can ftp it away before the
> reboot.
Mal
On 4/30/07, Mike Walker <
way I can see what it's doing when it pauses before the
reboot? I'm kinda new at this OpenSolaris stuff, so any debugging tips/tricks
would be greatly appreciated.
Mike
On 4/28/07, Mike Dotson <[EMAIL PROTECTED]> wrote:
And this changes the scenario how? I've actually been pondering this
for quite some time now. Why do we backup the root disk? With many of
the tools out now, it makes far more sense to do a flar/incremental
flars of the systems an
once I've got large numbers of filesystems and snapshots
> and clones thereof, and the odd zvol, it can be a devil of
> a job to work out what's going on.
No more difficult than doing ufs/vxfs snapshots and quick I/O, etc.
Only thing that really changes is the specific command for each and if
you're doing that, then you've already got the infrastructure for it
setup.
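(For example, a quick way to see which clones hang off which snapshots, as a sketch with a placeholder pool name:)
  zfs list -r -o name,origin,used tank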
But that's just my viewpoint...
--
Mike Dotson
I'm building a system with two Apple RAIDs attached. I have hardware RAID5
configured so no RAIDZ or RAIDZ2, just a basic zpool pointing at the four LUNs
representing the four RAID controllers. For on-going maintenance, will a zpool
scrub be of any benefit? From what I've read with this layer of
on that seems not to be high on the list of coming
features. This would give the benefits of sparse zones (more
efficient use of memory, etc.) without the drawback of not being able
to even create mount points for other file systems.
Mike
--
Mike Gerdts
http://mgerdts.blogspot.com/
> Peter Tribble wrote:
> > On 4/24/07, Darren J Moffat <[EMAIL PROTECTED]>
> wrote:
> >> With reference to Lori's blog posting[1] I'd like
> to throw out a few of
> >> my thoughts on spliting up the namespace.
> >
> > Just a plea with my sysadmin hat on - please don't
> go overboard
> > and make ne
we aren't
> able to dump into a zvol yet.)
>
Will we need to use this kit for further builds or will it be updated
for new builds as they arrive?
> Lori
>
using port multiplier eSATA with
FreeBSD (perhaps I will hunt down people on a FreeBSD list, to clarify
#3)
I'd like it to be PCI express based. PCI-x is only on normal-sized
motherboards, and I'd love to be using a smaller form factor machine
as the &quo
Could it be an order problem? NFS trying to start before zfs is mounted?
Just a guess, of course. I'm not real savvy in either realm.
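If the guess is right and the filesystem is being shared over NFS, one common workaround (a sketch; the dataset name is a placeholder) is to let ZFS manage the share so sharing happens at mount time rather than at a fixed point in the boot order:
  zfs set sharenfs=on tank/export
  zfs get sharenfs tank/export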
HTH,
Mike
Ben Miller wrote:
I have an Ultra 10 client running Sol10 U3 that has a zfs pool set up on the
extra space of the internal ide disk. There
ul in the cases where the devices to be split
are on the same controller but with just a different target or LUN
range.
--
Mike Gerdts
http://mgerdts.blogspot.com/
w huge blocks to be read. I forget what the
cut-off is, but 512 bytes at a time should be fine.
Mike
--
Mike Gerdts
http://mgerdts.blogspot.com/
. There are a couple of folks out here still running SPARC. Is there any news to report related to the SPARC variant of ZFS boot?
--
Mike Gerdts
http://mgerdts.blogspot.com/
I noticed that there is still an open bug regarding removing devices
from a zpool:
http://bugs.opensolaris.org/bugdatabase/view_bug.do?bug_id=4852783
Does anyone know if or when this feature will be implemented?
Cindy Swearingen wrote:
Hi Mike,
Yes, outside of the hot-spares feature, you can
Would the system be able to halt if something was unplugged/some
massive failure happened?
That way if something got tripped, I could fix it before any
corruption or issue occurred.
That would be my safety net, I suppose.
On 3/20/07, Sanjeev Bagewadi <[EMAIL PROTECTED]> wrote:
Mike,
W
okay so since this is fixed, Chris, would you consider using USB/FW now?
I am desperate to replace a server that is failing and I want to
replace it with a proper, quiet ZFS-based solution. I hate being held captive by NTFS issues (it may have corrupted my data for a second time now).
ZFS's checksummi
Crair <[EMAIL PROTECTED]> wrote:
Mike,
Take a look at
http://video.google.com/videoplay?docid=8100808442979626078&q=CSI%3Amunich
Granted, this was for demo purposes, but the team in Munich is clearly
leveraging USB sticks for their purposes.
HTH,
Bev.
mike wrote:
> I still haven't
able. That would be my only design
constraint.
Thanks a ton. Again, any input (good, bad, ugly, personal experiences
or opinions) is appreciated A LOT!
- mike
/* Every 10 seconds, print ARC sizing in MB: c_min, c_max, current size, target (c). */
tick-10s
{
        printf("%6lld %6lld %6lld %6lld\n",
            (long long) zfs`arc.c_min / 1024 / 1024,
            (long long) zfs`arc.c_max / 1024 / 1024,
            (long long) zfs`arc.size / 1024 / 1024,
            (long long) zfs`arc.c / 1024 / 1024);
}
While the snapshot isn't RW, the clone is and would certainly be helpful
in this case
Isn't the whole idea to:
0) boot into single-user/boot-archive if you're paranoid (or just quiesce
and clone if you feel lucky)
1) "clone" the primary OS instance+relevant-slices & boot into the
primary OS
2)
I haven't tested this scenario, but I would expect that you would be
able to use the parameters above to achieve what you are trying to do
regardless of which UNIXy file system is being used.
Mike
--
Mike Gerdts
http://mgerdts.blogspot.com/
t
of my knowledge, it shares no heritage with SAM-QFS.
http://www.oracle.com/technology/products/database/asm/index.html
Mike
--
Mike Gerdts
http://mgerdts.blogspot.com/
I have a 100 GB SAN LUN in a pool; it had been running OK for about 6 months, then it panicked the system this morning. The system was running S10U2. In the course of troubleshooting I've installed the latest recommended bundle, including KJP 118833-36 and ZFS patch 124204-03.
created as:
zpool create zfspool01 /dev
I've used this to track down the filename and other tidbits using the object ID
from zpool status -v:
errors: The following persistent errors have been detected:
          DATASET              OBJECT  RANGE
          zfspool01/nb60openv  292     1835008-1966080
          zfspool01/nb60openv  292
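For reference, one way to dig further into an object flagged this way is zdb with the dataset and object number from the output above (a sketch):
  zdb -ddddd zfspool01/nb60openv 292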
r /opt is a separate file system, you may have
issues with this dependency.
This is based upon 5 minutes of looking, not a careful read of all the
parts involved.
Mike
--
Mike Gerdts
http://mgerdts.blogspot.com/
nix based machines.
Thanks in advance! When I saw ZFS and the planned crypto support, I realized it would truly meet all my needs. I have been telling all my
friends about ZFS, we're all excited but none of us have had a use or
equipment that we could use for it yet.
- mike
On 2/5/07, Richa
My two (everyman's) cents - could something like this be modeled after
MySQL replication or even something like DRBD (drbd.org) ? Seems like
possibly the same idea.
On 1/26/07, Jim Dunham <[EMAIL PROTECTED]> wrote:
Project Overview:
...
er a zonepath, I would also look for mounts and
NFS shares in subdirectories of the fs that won't unmount.
Mike
--
Mike Gerdts
http://mgerdts.blogspot.com/
ooh. they support it? cool. i'll have to explore that option now.
however i still really want eSATA.
On 1/23/07, Samuel Hexter <[EMAIL PROTECTED]> wrote:
We've got two Areca ARC-1261ML cards (PCI-E x8, up to 16 SATA disks each)
running a 12TB zpool on snv54 and Areca's arcmsr driver. They're a
On 1/23/07, Darren J Moffat <[EMAIL PROTECTED]> wrote:
For the "clone another system" zfs send/recv might be useful
Having support for this directly in flarcreate would be nice. It
would make differential flars very quick and efficient.
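A rough sketch of the send/recv equivalent (host and dataset names are placeholders):
  zfs snapshot tank/zones/web@golden
  zfs send tank/zones/web@golden | ssh newhost zfs receive tank/zones/web
  # later, ship only the delta -- roughly what a differential flar would carry
  zfs snapshot tank/zones/web@update1
  zfs send -i @golden tank/zones/web@update1 | ssh newhost zfs receive tank/zones/web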
Mike
--
Mike Gerdts
http://mger
I would suggest using a CompactFlash card for the OS. I believe it
works exactly like IDE, but is more reliable, sucks less power, and
frees up a slot for the larger drive...
On 1/22/07, Elm, Rob <[EMAIL PROTECTED]> wrote:
Hello ZFS Discussion Members,
I'm looking for help or advice on a proje
I'm dying here - does anyone know when or even if they will support these?
I had this whole setup planned out but it requires eSATA + port multipliers.
I want to use ZFS, but currently cannot in that fashion. I'd still
have to buy some [more expensive, noisier, bulky internal drive]
solution for
Areca makes excellent PCI express cards - but probably have zero
support in Solaris/OpenSolaris. I use them in both Windows and Linux.
Works natively in FreeBSD too. I believe they're still the fastest cards on the market.
However probably not very appropriate for this since it's a Solaris-based
, PCI
express preferred) would be great. Assuming it works with any eSATA
multiplier-aware enclosures (such as the one above)
I think that would open up a LOT of users to ZFS. Most definitely this one <-
- mike
On 1/21/07, Moazam Raja <[EMAIL PROTECTED]> wrote:
Hi all,
I'm thinki
e patches ?
If you have (or download) the latest installation DVD, look in the
/UpgradePatches (or similarly named) directory.
Mike
--
Mike Gerdts
http://mgerdts.blogspot.com/
for a conversion from RAIDZ
to RAIDZ2, or vice-versa then, correct?
On 1/18/07, Erik Trimble <[EMAIL PROTECTED]> wrote:
Mike,
I think you are missing the point. What we are talking about is
removing a drive from a zpool, that is, reducing the zpool's total
capacity by a drive. Say you
Couldn't this be considered a compatibility list that we can trust for
OpenSolaris and ZFS?
http://www.sun.com/io_technologies/
I've been looking at it for the past few days. I am looking for eSATA
support options - more details below.
Only 2 devices on the list show support for eSATA, both are
what is the technical difference between forcing a removal and an
actual failure?
isn't it the same process? except one is manually triggered? i would
assume the same resilvering process happens when a usable drive is put
back in...
On 1/18/07, Wee Yeh Tan <[EMAIL PROTECTED]> wrote:
Not quite.
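(For comparison, the manually triggered counterparts look roughly like this; a sketch with placeholder pool and device names:)
  zpool offline tank c1t2d0          # administratively take a device out of service
  zpool online tank c1t2d0           # bring it back; only changed data is resilvered
  zpool replace tank c1t2d0 c1t3d0   # swap in a different device; full resilver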
Would this be the same as failing a drive on purpose to remove it?
I was under the impression that was supported, but I wasn't sure whether shrinking a ZFS pool would work.
On 1/18/07, [EMAIL PROTECTED] <[EMAIL PROTECTED]> wrote:
> > This is a pretty high priority. We are working on it.
t have to explain that the file system has a 1-in-2^512 chance of silent data corruption. As slim a chance as that is, ZFS promises not to corrupt my data and to tell on others that do. ZFS cannot break that promise.
Mike
--
Mike Gerdts
http://mgerdts.blogspot.com/
1) Is a hardware-based RAID behind the scenes needed? Can ZFS safely
be considered a replacement for that? I assume that anything below the
filesystem level in regards to redundancy could be an added bonus, but
is it necessary at all?
2) I am looking into building a 10-drive system using 750GB or
mounting the same FS by several
different machines? Is there a way around this?
Mike
Wee Yeh Tan wrote:
On 1/15/07, Torrey McMahon <[EMAIL PROTECTED]> wrote:
Mike Papper wrote:
>
> The alternative I am considering is to have a single filesystem
> available to many clients using
would see and be able to
read this new file?
Does this apply to soft-link files as well?
Does anyone have experience with such a configuration?
Mike
Anton Rang wrote:
On Dec 19, 2006, at 7:14 AM, Mike Seda wrote:
Anton B. Rang wrote:
I have a Sun SE 3511 array with 5 x 500 GB SATA-I disks in a RAID
5. This
2 TB logical drive is partitioned into 10 x 200GB slices. I gave 4
of these slices to a Solaris 10 U2 machine and added each of them
Anton B. Rang wrote:
I have a Sun SE 3511 array with 5 x 500 GB SATA-I disks in a RAID 5. This
2 TB logical drive is partitioned into 10 x 200GB slices. I gave 4 of these slices to a
Solaris 10 U2 machine and added each of them to a concat (non-raid) zpool as listed below:
This is certain
0 0
errors: No known data errors
Basically, is this a supported zfs configuration? You are gonna laugh,
but do you think my zfs configuration caused the drive failure?
Cheers,
Mike
The following is output from getfacl on a ufs filesytem:
[EMAIL PROTECTED] maseda]$ getfacl /home/users/ahege/incoming
# file: /home/users/ahege/incoming
# owner: ahege
# group: uncmd
user::rwx
user:nobody:rwx #effective:rwx
group::r-x #effective:r-x
mask:rwx
other:r-x
I want
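Assuming the goal is to reproduce something like this on a ZFS filesystem (which uses NFSv4-style ACLs rather than the POSIX-draft ACLs shown by getfacl), a sketch with a placeholder path:
  chmod A+user:nobody:read_data/write_data/execute:allow /tank/incoming
  ls -dV /tank/incoming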
I use ZFS in a SAN. I have two Sun V440s running Solaris 10 U2, which have LUNs assigned to them from my Sun SE 3511. So far, it has worked flawlessly.
Robert Milkowski wrote:
Hello Dave,
Friday, December 15, 2006, 9:02:31 PM, you wrote:
DB> Does anyone have a document that describes ZFS in
ular spot
Take a lock
Seek
Write 64 bytes
seek
Write 5408 bytes
close
The rrd file in question is 8.6 MB. There was 8KB of reads and 5472
bytes of write. This is one of the big wins of the current binary rrd format over the original ASCII version that came with MRTG.
Mi
own data
layout.
This may be a good place to look:
http://www.oracle.com/technology/deploy/availability/htdocs/xtts.htm
--
Mike Gerdts
http://mgerdts.blogspot.com/
active. Does anyone know of
relevant RFE's that are in the works to improve the situation, or
should I file one and stop complaining. :)
Mike
--
Mike Gerdts
http://mgerdts.blogspot.com/
Hi All,
From reading the docs, it seems that you can add devices (non-spares)
to a zpool, but you cannot take them away, right?
Best,
Mike
Victor Latushkin wrote:
Maybe something like the "slow" parameter of VxVM?
slow[=iodelay]
Reduces
I don't see any problems with
this procedure.
However, I waited until someone else announced the features or lack
thereof found in S10 11/06. :)
Mike
--
Mike Gerdts
http://mgerdts.blogspot.com/
mounted and little interest in creating very
complex command lines with many -x options.
Mike
--
Mike Gerdts
http://mgerdts.blogspot.com/
format that and use it for swap or whatever.
The original question was about using ZFS root on a T1000. /grub
looks suspiciously incompatible with the T1000 because it isn't x86.
I've heard rumors of bringing grub to SPARC, but...
Mike
--
Mike Gerdts
http://mger
pport calls help the most?
Mike
--
Mike Gerdts
http://mgerdts.blogspot.com/
s the
replies.
If your server has multiple network interfaces, it's more likely that
the server is routing the replies back on a different interface. We've
run into that problem many times with the NFS server that has my home
directory on it. If that is what's going on, you n
>>>>> "Chad" == Chad Leigh <-- Shire.Net LLC" <[EMAIL PROTECTED]>> writes:
Chad> so -t a should show wall clock time
The capture file always records absolute time. So you (just) need to
use "-t a" when you decode the captu
>>>>> "Chad" == Chad Leigh <-- Shire.Net LLC" <[EMAIL PROTECTED]>> writes:
Chad> There seems to be no packet headers or time stamps or anything --
Chad> just a lot of binary data. What am I looking for?
Use "
the two hosts.
If you notice a problem in the logs, you can find the corresponding
capture file and extract from it what you need.
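For example, decoding one of those capture files with absolute timestamps (a sketch; the file name is a placeholder):
  snoop -i /var/tmp/bgsnoop-20070101.cap -t a | head -20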
mike
bgsnoop
Description: bgsnoop script
Hey Tony...
When (properly) doing Array-based snapshots/BCVs with
EMC/Hitachi/what-have-you arrays, you create "LUN groups" out of the LUNs you're interested in snappin'. You then perform snapshot/clone
operations on that "lun group" which will make it atomic across all
members of that group.
Wh
:)
2) One of the copies of the data needs to be portable to another
system while the original stays put. This could be done to refresh
non-production instances from production, to perform backups in such a
way that it doesn't put load on the prod
It's a valid use case in the high-end enterprise space.
While it probably makes good sense to use ZFS for snapshot creation,
there are still cases where array-based snapshots/clones/BCVs make
sense. (DR/Array-based replication, data-verification, separate
spindle-pool, legacy/migration reasons, an
ing that
it is up to ZFS to generate or manage the signature.
The nice thing about it is that so long as the private key is secret,
the signature stays with the file as it is moved, taken to tape, other
file systems, etc. so long as the file manipulation mechanisms support
extended-attributes.
Mi