Boot from the other root drive, mount up the "bad" one at /mnt. Then:
# mv /mnt/etc/zfs/zpool.cache /mnt/etc/zpool.cache.bad
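A rough sketch of the follow-up, which the reply above doesn't spell out: with
the cache file moved aside, ZFS has nothing to auto-import at boot, so the
machine should come up cleanly and the damaged pool can then be examined by
hand. "mypool" below is a placeholder name:
# zpool import
(lists importable pools without actually importing anything)
# zpool import -f mypool
(attempts the import explicitly, once you decide to)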
On Tue, Nov 25, 2008 at 8:18 AM, Mike DeMarco <[EMAIL PROTECTED]> wrote:
> My root drive is ufs. I have corrupted my zpool which is on a different drive
My root drive is ufs. I have corrupted my zpool which is on a different drive
than the root drive.
My system panicked, and now it core dumps when it boots up and ZFS starts. I
have an alternate root drive that I can boot the system with, but how can I
disable ZFS from starting when booted from that other drive?
or clustered storage as well.
--
Mike Gerdts
http://mgerdts.blogspot.com/
I think you'll need to get device support first. Last I checked there
was still no device support for PMPs, sadly.
On Thu, Nov 20, 2008 at 4:52 PM, Krenz von Leiberman
<[EMAIL PROTECTED]> wrote:
> Does ZFS support pooled, mirrored, and raidz storage with
> SATA-port-multipliers (http://www.serial
what is wrong, etc.?
Thanks in advance for any advice,
Mike
Hi
> [Default] On Sat, 15 Nov 2008 11:37:50 +0200, Mike Futerko
> <[EMAIL PROTECTED]> wrote:
>
>> Hello
>>
>> Is there any way to list all snapshots of a particular file system
>> without listing the snapshots of its child file systems?
>
> fs
Hello
Is there any way to list all snapshots of a particular file system without
listing the snapshots of its child file systems?
Thanks,
Mike
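A minimal sketch of one way to answer this (the actual reply is cut off in the
archive), assuming a filesystem named tank/fs:
# zfs list -H -o name -t snapshot -r tank/fs | grep '^tank/fs@'
The grep keeps only snapshots of tank/fs itself and drops those of its
children. Newer builds also have a depth limit, e.g. zfs list -d 1 -t snapshot
-r tank/fs, but check whether your release supports -d.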
On Fri, Nov 14, 2008 at 3:18 PM, Al Hopper <[EMAIL PROTECTED]> wrote:
>> No clue. My friend also upgraded to b101. Said it was working awesome
>> - improved network performance, etc. Then he said after a few days,
>> he's decided to downgrade too - too many other weird side effects.
>
> Any more d
erent Solaris versions:
http://blogs.sun.com/weber/entry/solaris_opensolaris_nevada_indiana_sxde
On Fri, Nov 14, 2008 at 2:15 AM, Vincent Boisard <[EMAIL PROTECTED]> wrote:
> Do you have an idea if your problem is due to live upgrade or b101 itself ?
>
> Vincent
>
> On Thu, Nov
Depends on your hardware. I've been stable for the most part on b98.
Live upgrade to b101 messed up my networking, slowing it to nearly a standstill.
The problem stuck around even after I nuked the upgrade; I had to reinstall b98.
On Nov 13, 2008, at 10:01 AM, "Vincent Boisard" <[EMAIL PROTECTED]>
wrote:
Thanks for
There will probably be a 10_Recommended u6 patch bundle sometime in December...
For now, to get to u6 (and ZFS) you must do a Live Upgrade (i.e. u5 to u6).
Just FYI
On Wed, Nov 12, 2008 at 12:48 PM, Johan Hartzenberg <[EMAIL PROTECTED]>wrote:
>
>
> On Wed, Nov 12, 2008 at 8:15 PM, Vincent Fox <[EMAIL PROTECTED]>w
svcadm disable cde-login
I'd also recommend disabling some other unnecessary services, for example:
svcs | egrep 'webco|wbem|avahi|print|font|cde|sendm|name-service-cache|opengl' | \
    awk '{print $3}' | xargs -n1 svcadm disable
This chmod 0755 $foo fixes it
- the ACL inheritance doesn't seem to be remembered, or I'm not
understanding it properly...
The user 'mike' should have -all- the privileges, period, no matter
what the client machine is etc. I am mounting it -as- mike from both
clients...
By "better" I meant the best practice for a server running the NetBackup
application.
I am not seeing how using raidz would be a performance hit; usually stripes
perform faster than mirrors.
Hi all,
I have been asked to build a new server and would like to get some opinions on
how to set up a ZFS pool for the application running on it. The server
will be used exclusively for running the NetBackup application.
Now, which would be better: setting up a raidz pool with 6 x 146 GB drives, or
> and try moving a pool between them to see what happens...
It would be interesting to know how it will work if you move the whole zpool
rather than just syncing with send/recv. But I think all will be fine there,
as it seems the problem is in the send/recv part on the file system itself
between different architectures.
Thanks
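An aside, not from the thread: since the on-disk format is endian-adaptive,
physically moving a pool between SPARC and x86 should just be an export and
import (pool name hypothetical):
# zpool export tank
(on the old host)
# zpool import tank
(on the new host; add -f if the pool wasn't exported cleanly)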
On Sun, Oct 26, 2008 at 12:47 AM, Peter Bridge <[EMAIL PROTECTED]> wrote:
> Well for a home NAS I'm looking at noise as a big factor. Also for a 24x7
> box, power consumption, that's why the northbridge is putting me off slightly.
That's why I built a full-sized tower using a Lian-Li case with
I'm running ZFS on Nevada (b94 and b98) on two machines at home, both
with 4 GB of RAM. One has a quad-core Intel Core 2 with ECC RAM; the other
has normal RAM and a low-power dual-core Athlon 64. Both seem to be
working great.
On Thu, Oct 23, 2008 at 2:04 PM, Peter Bridge <[EMAIL PROTECTED]> wrote:
>
VMware will remember it now somewhere
in its definitions. You need to remove the second datastore from VMware and
delete the target definition and the ZFS backing store.
Once you recreate the backing store and target, you should have a new GUID and
iqn, which should cure the issue.
Regards,
Mike
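A rough sketch of that recreate step, assuming the backing store is a zvol
shared via the shareiscsi property (dataset name and size are made up, and
this of course discards the old LUN's contents):
# zfs set shareiscsi=off tank/vmware-lun
# zfs destroy tank/vmware-lun
# zfs create -V 200G tank/vmware-lun
# zfs set shareiscsi=on tank/vmware-lun
Recreating the volume and share is what produces the fresh GUID and iqn.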
Hello Tano,
The issue here is not the target or VMware but a missing GUID on the target.
Observe the target's SMF properties using:
iscsitadm list target -v
You have
iSCSI Name: iqn.1986-03.com.sun:02:35ec26d8-f173-6dd5-b239-93a9690ffe46.vscsi
Connections: 0
ACL list:
TPGT list:
TPG
OK, I managed to get my GRUB menu (and splashimage) back by following:
http://www.genunix.org/wiki/index.php/ZFS_rpool_Upgrade_and_GRUB
Initially, I just did it for the boot environment I wanted to use, but it didn't
seem to work, so I also did it for the previous boot environment. I'm not sure
why.
Hi
Just checked with snv_99 on x86 (VMware install) - same result :(
Regards
Mike
[EMAIL PROTECTED] wrote:
>> Hello
>>
>>
>> Today I've suddenly noticed that symlinks (at least) are corrupted when
>> sync ZFS from SPARC to x86 (zfs send | ssh | zfs re
I haven't had a chance yet to test on the
latest OpenSolaris.
Any suggestions?
Thanks
Mike
On Wed, Oct 15, 2008 at 9:13 PM, Al Hopper <[EMAIL PROTECTED]> wrote:
> The exception to the "rule" of multiple 12v output sections is PC
> Power & Cooling - who claim that there is no technical advantage to
> having multiple 12v outputs (and this "feature" is only a marketing
> gimmick). But now
Yeah, for this plan I needed one with 8 onboard SATA ports or another 8-port SATA
controller, so I opted just to get two of the PCI-X ones.
The Supermicro 5-in-3's don't have a fan alarm so you could remove it
or find a quieter fan. I think most of them have quite noisy fans (the
main goal for this besides l
>> I'm going to pick up a couple of Supermicro's 5-in-3 enclosures for mine:
>>
>> http://www.newegg.com/Product/Product.aspx?Item=N82E16817121405
>>
>>
>> Scott
>>
>> On Wed, Oct 15, 2008 at 12:26 AM, mike <[EMAIL PROTECTED]> wrote:
>
l of it thanks to Newegg. I will need to pick up some
4-in-3 enclosures and a better CPU heatsink/fan - this is supposed to
be quiet but it has an annoying hum. Weird. Anyway, so far so good.
Hopefully the power supply can handle all 16 disks too...
On Thu, Oct 9, 2008 at 12:46 PM, mike &
On Thu, Oct 9, 2008 at 10:33 PM, Mike Gerdts <[EMAIL PROTECTED]> wrote:
> On Thu, Oct 9, 2008 at 10:18 AM, Mike Gerdts <[EMAIL PROTECTED]> wrote:
>> On Thu, Oct 9, 2008 at 10:10 AM, Greg Shaw <[EMAIL PROTECTED]> wrote:
>>> Nevada isn't production co
ld be used to deal with cases that prevent
your normal (>4 GB) boot environment from booting.
--
Mike Gerdts
http://mgerdts.blogspot.com/
On Thu, Oct 9, 2008 at 10:18 AM, Mike Gerdts <[EMAIL PROTECTED]> wrote:
> On Thu, Oct 9, 2008 at 10:10 AM, Greg Shaw <[EMAIL PROTECTED]> wrote:
>> Nevada isn't production code. For real ZFS testing, you must use a
>> production release, currently Solaris 10 (updat
I seem to be having the same problem as well. Has anyone found out what the
cause is, and how to fix it?
> supports ECC RAM. Coincidentally, it's also the chipset used in the
> Sun Ultra 24 workstation
> (http://www.sun.com/desktop/workstation/ultra24/index.xml).
>
>
> On Mon, Oct 6, 2008 at 1:41 PM, mike <[EMAIL PROTECTED]> wrote:
>> I posted a thread here...
>
I pushed for and got a fix. However, that
pool was still lost.
--
Mike Gerdts
http://mgerdts.blogspot.com/
- core developers of dtrace
were quite interested in the kernel crash dump.
http://mail.opensolaris.org/pipermail/zfs-discuss/2008-September/051109.html
Panic during ON build. Pool was lost, no response from list.
--
Mike Gerdts
http://mgerdts.blogspot.com/
Over the last year I've lost more ZFS file systems than I have any other
type of file system in the past 5 years. With other file systems I
can almost always get some data back. With ZFS I can't get any back.
--
Mike Gerdts
http://mgerdts.blogspot.com/
incompatible, and have to return it online...
On Tue, Oct 7, 2008 at 1:33 AM, gm_sjo <[EMAIL PROTECTED]> wrote:
> 2008/10/6 mike <[EMAIL PROTECTED]>:
>> I am trying to finish building a system and I kind of need to pick
>> working NIC and onboard SATA chipsets (video is not a
I posted a thread here...
http://forums.opensolaris.com/thread.jspa?threadID=596
I am trying to finish building a system and I kind of need to pick
working NIC and onboard SATA chipsets (video is not a big deal - I can
get a silent PCIe card for that, I already know one which works great)
I need
9 0 0 0 0 0 0 0 0 0 543 972 518 0 0 100
From a free memory standpoint, the current state of the system is very
different than the typical state since boot.
--
Mike Gerdts
http://mgerdts.blogspot.com/
On Wed, Sep 24, 2008 at 9:37 PM, James Andrewartha <[EMAIL PROTECTED]> wrote:
> Can you post the java error to the list? Do you have gzip compressed or
> aclinherit properties on your filesystems, hitting bug 6715550?
> http://mail.opensolaris.org/pipermail/zfs-discuss/2008-June/048457.html
> http
200807/
See "Flash Storage Memory" by Adam Leventhal, page 47.
--
Mike Gerdts
http://mgerdts.blogspot.com/
On Sun, Sep 21, 2008 at 11:49 PM, Volker A. Brandt <[EMAIL PROTECTED]> wrote:
> Hmmm... I run Solaris 10/sparc U4. My /usr/java points to
> jdk/jdk1.5.0_16. I am using Firefox 2.0.0.16. Works For Me(TM) ;-)
> Sorry, can't help you any further. Maybe a question for desktop-discuss?
it's a jav
On Sun, Sep 21, 2008 at 1:31 PM, Volker A. Brandt <[EMAIL PROTECTED]> wrote:
> Yes, you need to set the corresponding SMF property. Check
> for the value of "options/tcp_listen":
>
> # svcprop -p options/tcp_listen webconsole
> true
>
> If it says "false", you need to set it to "true". Here's
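The quoted message is cut off at this point; for reference, a sketch of the
usual way to flip that property (this may not be exactly what followed):
# svccfg -s svc:/system/webconsole setprop options/tcp_listen = true
# svcadm refresh svc:/system/webconsole:console
# svcadm restart svc:/system/webconsole:console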
ive on another system, but can be imported using
the '-f' flag.
see: http://www.sun.com/msg/ZFS-8000-5E
config:
        export      FAULTED   corrupted data
          c6t0d0    UNAVAIL   corrupted data
--
Mike Gerdts
http://mgerdts.blogspot.com/
On Fri, Sep 19, 2008 at 10:16 AM, Volker A. Brandt <[EMAIL PROTECTED]> wrote:
> You need to check if the SMF service is running:
> # svcadm -v enable webconsole
> svc:/system/webconsole:console enabled.
> # svcs webconsole
> STATE          STIME    FMRI
> online         19:07:24 svc:/system/webconsole:console
On Tue, Sep 16, 2008 at 2:28 PM, Peter Tribble <[EMAIL PROTECTED]> wrote:
> For what it's worth, we put all the disks on our thumpers into a single pool -
> mostly it's 5x 8+1 raidz1 vdevs with a hot spare and 2 drives for the OS and
> would happily go much bigger.
so you have 9 drive raidz1 (8 d
Okay, well I am running snv_94 already. So I guess I'm good :)
On Fri, Sep 5, 2008 at 10:23 AM, Mark Shellenbaum
<[EMAIL PROTECTED]> wrote:
> mike wrote:
>>
>> I have a weekly scrub setup, and I've seen at least once now where it
>> says "don't
I have a weekly scrub setup, and I've seen at least once now where it
says "don't snapshot while scrubbing"
Is this a data integrity issue, or will it make one or both of the
processes take longer?
Thanks
Yeah, I'm looking at using 10 disks or 16 disks (depending on which
chassis I get) - and I would like reasonable redundancy (not HA-crazy
redundancy where I can suffer tons of failures, I can power this down
and replace disks, it's a home server) and maximize the amount of
usable space.
Putting up
Ok, I've managed to get around the kernel panic.
[EMAIL PROTECTED]:~/Download$ pfexec mdb -kw
Loading modules: [ unix genunix specfs dtrace cpu.generic uppc pcplusmp
scsi_vhci zfs sd ip hook neti sctp arp usba uhci s1394 fctl md lofs random sppp
ipc ptm fcip fcp cpc crypto logindmux ii nsctl sdb
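The entry is truncated here, so the actual mdb writes are missing. The knobs
usually mentioned for limping past this kind of ZFS assertion panic (an
assumption about what was done here, and unsupported debug settings in any
case) are, typed at the mdb -kw prompt:
aok/W 1
zfs_recover/W 1
(or the equivalent "set aok = 1" and "set zfs:zfs_recover = 1" lines in
/etc/system). They disable the failing assertions long enough to import and
evacuate the pool; they are not a fix.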
G'day,
I've got an OpenSolaris server (snv_95) that I use for media serving. It uses a
DQ35JOE motherboard, dual core, and I have my rpool mirrored on two 40 GB IDE
drives, and my media mirrored on 2 x 500 GB SATA drives.
I've got a few CIFS shares on the media drive, and I'm using MediaTomb to s
On 8/26/08, Cyril Plisko <[EMAIL PROTECTED]> wrote:
> that's very interesting ! Can you share more info on what these
> bugs/issues are ? Since it is LU related I guess we'll never see these
> via opensolaris.org, right ? So I would appreciate if community will
> be updated when these fixes will
http://mail.opensolaris.org/pipermail/zfs-code/2007-March/000448.html
--
Mike Gerdts
http://mgerdts.blogspot.com/
Yeah, I am on gigabit, but the clients are things like an Xbox, which is
only 10/100, etc. Right now the setup works fine. I'm thinking the new
CIFS implementation should make it run even cleaner too.
On 8/22/08, Ross Smith <[EMAIL PROTECTED]> wrote:
> Yup, you got it, and an 8 disk raid-z2 array sh
On 8/22/08, Ross <[EMAIL PROTECTED]> wrote:
> Yes, that looks pretty good mike. There are a few limitations to that as you
> add the 2nd raidz2 set, but nothing major. When you add the extra disks,
> your original data will still be stored on the first set of disks, if you
On 8/22/08, Kyle McDonald <[EMAIL PROTECTED]> wrote:
> You only need 1 disk to use ZFS root. You won't have any redundancy, but as
> Darren said in another email, you can convert single device vDevs to
> Mirror'd vDevs later without any hassle.
I'd just get some 80 gig disks and mirror them. Migh
On 8/22/08, Rich Teer <[EMAIL PROTECTED]> wrote:
> ZFS boot works fine; it only recently integrated into Nevada, but it
> has been in use for quite some time now.
Yeah I got the install option when I installed snv_94 but wound up not
having enough disks to use it.
> Even better: just use ZFS roo
It looks like this will be the way I do it:
initially:
zpool create mypool raidz2 disk0 disk1 disk2 disk3 disk4 disk5 disk6 disk7
when I need more space and buy 8 more disks:
zpool add mypool raidz2 disk8 disk9 disk10 disk11 disk12 disk13 disk14 disk15
Correct?
> Enable compression, and set up
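One small addition (mine, not from the thread): zpool add has a dry-run flag,
so the layout of the second vdev can be previewed before committing to it:
# zpool add -n mypool raidz2 disk8 disk9 disk10 disk11 disk12 disk13 disk14 disk15
With -n the command only prints the configuration that would result; nothing
is changed.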
On 8/22/08, Kyle McDonald <[EMAIL PROTECTED]> wrote:
> Another note, as someone said earlier, if you can go to 16 drives, you
> should consider 2 8disk RAIDZ2 vDevs, over 2 7disk RAIDZ vDevs with a spare,
> or (I would think) even a 14disk RAIDZ2 vDev with a spare.
>
> If you can (now or later) ge
On 8/22/08, Darren J Moffat <[EMAIL PROTECTED]> wrote:
> I could if I wanted to add another vdev to this pool but it doesn't
> have to be raidz it could be raidz2 or mirror.
> If they did they are wrong, hope the above clarifies.
I get it now. If you add more disks they have to be in their own
m
> No that isn't correct.
> One or more vdevs create a pool. Each vdev in a pool can be a
> different type, e.g. a mix of mirror, raidz, raidz2.
> There is no such thing as zdev.
Sorry :)
Okay, so you can create a zpool from multiple vdevs. But you cannot
add more vdevs to a zpool once the zpool
Oh sorry - for boot I don't care if it's redundant or anything.
Worst case the drive fails, I replace it and reinstall, and just re-mount the
ZFS stuff.
If I have the space in the case and the ports I could get a pair of 80 gig
drives or something and mirror them using SVM (which was recommende
I hear everyone's concerns about multiple parity disks.
Are there any benchmarks or numbers showing the performance difference using a
15 disk raidz2 zpool? I am fine sacrificing some performance but obviously
don't want to make the machine crawl.
It sounds like I could go with 15 disks evenly
Likewise I could also do something like
zpool create tank raidz1 disk1 disk2 disk3 disk4 disk5 disk6 disk7 \
raidz1 disk8 disk9 disk10 disk11 disk12 disk13 disk14 disk15
and I'd have a 7-disk raidz1 and an 8-disk raidz1... and I'd have 15 disks
still broken up into not-too-horrible pool sizes an
See, originally when I read about ZFS it said it could expand to petabytes or
something. But really, that's not as a single "filesystem"? That could only be
accomplished through combinations of pools?
I don't really want to have to even think about managing two separate
"partitions" - I'd like
I could probably do 16 disks and maybe do a raidz on both for 14 disks
usable combined... that's probably as redundant as I'd need, I think.
Can you combine two zpools together? Or will I have two separate
"partitions" (i.e. I'll have "tank" for example and "tank2" instead of
making one single lar
Question #1:
I've seen that 5-6 disk zpools are the most recommended setup.
In traditional RAID terms, I would like to do RAID5 + hot spare (13 disks
usable) out of the 15 disks (like raidz2, I suppose). What would make the most
sense to set up 15 disks with ~13 disks of usable space? This is for a h
53-02 this week. In a separate thread last week
(?) Enda said that it should be out within a couple weeks.
Mike
--
Mike Gerdts
http://mgerdts.blogspot.com/
I must pose the question then:
Is ECC required?
I am running non-ECC RAM right now on my machine (it's AMD and it would support
ECC; I'd just have to buy it online and wait for it),
but will it have any negative effects on ZFS integrity/checksumming if ECC RAM
is not used? Obviously it's nice t
I've attached a screenshot if it may be useful.
Any help would be appreciated...
Thanks,
Mike
Yeah, but 2.5" drives aren't that big yet. What, they max out at ~320 GB, right?
I want 1 TB+ disks :)
Exactly.
That's why I'm trying to get an account on that site (looks like open
registration for the forums is disabled) so I can shoot the breeze and talk
about all this stuff too.
ZFS would be perfect for this, as most of these guys are trying to find hardware
RAID cards that will fit, etc... wit
That Mashie link might be exactly what I wanted...
That mini-ITX board with 6 SATA ports. Use CF maybe for boot (might need an
IDE-to-CF converter), a 5-drive holder (hot-swap as a bonus), and you get 4 GB
RAM, a Core 2-based chip (64-bit), onboard graphics, and 5 SATA2 drives... that
is cool. However, it would need to
I'd say some good places to look are silentpcreview.com and mini-itx.com.
I found this tasty morsel on an ad at mini-itx...
http://www.american-portwell.com/product.php?productid=16133
6x onboard SATA. 4 gig support. core2duo support. which means 64 bit = yes, 4
gig = yes, 6x sata is nice.
now
I didn't use any.
That would be my -ideal- setup :)
I waited and waited, and there is still no eSATA/Port Multiplier support out there, or
it isn't stable enough. So I scrapped it.
Holy crap! That sounds cool. Firmware-based-VPN connectivity!
At Intel we're getting better too I suppose.
Anyway... I don't know where you're at in the company but you should rattle
some cages about my idea :)
I would love to go back to using shuttles.
Actually, my ideal setup would be:
Shuttle XPC w/ 2x PCI-e x8 or x16 lanes
2x PCI-e eSATA cards (each with 4 eSATA port multiplier ports)
then I could chain up to 8 enclosures off a single small, nearly silent host
machine.
8 enclosures x 5 drives = 40
I have built mine over the last few days, and it seems to be running fine right now.
Originally I wanted Solaris 10, but switched to using SXCE (Nevada build 94,
the latest right now) because I wanted the new CIFS support and some additional
ZFS features.
Here's my setup. These were my goals:
- Quiet
Bob Says:
"But a better solution is to assign a processor set to run only
the application -- a good idea any time you need a predictable
response."
Bob's suggestion above along with "no interrupts on that pset", and a
fixed scheduling class for the application/processes in question could
Yeah, I have not been pleased with the quality of the HCL.
There's plenty of hardware discussed on the forums, and if you search the bugs
DB there is hardware that has been confirmed and/or fixed to work on various
builds of OpenSolaris and Solaris 10.
I wound up buying an AMD-based machine (I wanted Intel) with 6 onboa
Don't take my opinion; I am a newbie to everything Solaris.
From what it looks like in the HCL, some of the VIA stuff is supported. Like I
said, I tried some Nexenta CD...
They don't make 64-bit, first off, and I am not sure if any of their mini-ITX
boards support more than 2 GB of RAM. ZFS love
I have that chassis too. Did Solaris install for you? What version/build?
I think I tried a Nexenta build and it crapped out on install.
I also only have 2 GB of RAM in it and a CF card to boot off of...
4 drives is too small for what I want; 5 drives would be my minimum. I was
hoping this wo
dynamic data that
needs to survive a reboot, it would seem to make a lot of sense to
enable write cache on such disks. This assumes that ZFS does the
flush no matter whether it thinks the write cache is enabled or not.
Am I wrong about this somehow?
--
Mike Gerdts
http://mgerdts.blogspot.com/
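For anyone wanting to experiment along those lines: the per-disk write cache
can usually be toggled from format's expert mode (a sketch; the exact menus
vary with the driver and disk type):
# format -e
(select the disk, then)
format> cache
cache> write_cache
write_cache> display
write_cache> enable
Whether this is safe rests on the assumption stated above: that ZFS issues the
flush regardless of what it thinks the cache state is.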
Did you have success?
What version of Solaris? OpenSolaris? etc?
I'd want to use this card with the latest Solaris 10 (update 5?)
The connector on the adapter itself is "IPASS" and the Supermicro part number
for cables from the adapter to standard SATA drives is CBL-0118L-02 "IPASS to 4
SATA C
Prior to build , bug 6668666 causes the following
platform-dependent steps to also be needed:
On sparc systems:
# installboot -F zfs /usr/`uname -i`/lib/fs/zfs/bootblk /dev/rdsk/c1t1d0s0
On x86 systems:
# ...
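The x86 command is cut off in the archive. The usual counterpart (from memory,
so verify against the bug notes) installs the GRUB stages instead of a
bootblk; the slice name below is carried over from the SPARC example and is
only illustrative:
# installgrub /boot/grub/stage1 /boot/grub/stage2 /dev/rdsk/c1t1d0s0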
--
Mike Gerdts
http://mgerdts.blogspot.com/
Would adding a dedicated ZIL/SLOG (what is the difference between those 2
exactly? Is there one?) help meet your requirement?
The idea would be to use some sort of relatively large SSD drive of some
variety to absorb the initial write-hit. After hours, when things quiet down
(or perhaps during
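For concreteness, a sketch with hypothetical device names: a separate log
device is attached with the log vdev keyword,
# zpool add tank log c4t0d0
or mirrored with: zpool add tank log mirror c4t0d0 c5t0d0. Keep in mind the
point raised elsewhere in this archive that, at the time, a log device could
not be removed once added.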
nd" to be a stable format and get
integration with enterprise backup software that can perform restores
in a way that maintains space efficiency.
--
Mike Gerdts
http://mgerdts.blogspot.com/
plication under a wide variety of
> circumstances.
The key thing here is that distributed applications will not play
nicely. In my best use case, Solaris zones and LDoms are the
"application". I don't expect or want Solaris to form some sort of
P2P storage system across my data
It seems as though there is no way to remove a log device once it is
added. Is this correct?
Assuming this is correct, is there any reason that adding the ability
to remove the log device would be particularly tricky?
--
Mike Gerdts
http://mgerdts.blogspot.com
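For what it's worth, zpool remove at that point only handled hot spares (and,
on newer Nevada builds, cache devices); pointing it at a log vdev, e.g.
# zpool remove tank c4t0d0
(hypothetical names), simply returned an error instead of evacuating the
device, which matches the behaviour the question describes.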
her than operations per
second. This was with several (<100) processes contending for reading
directory contents, file creations, and file deletions. This is where
I found the script that thought that "touch $dir/test.$$" (followed by
rm) was the right way to check to see if a
complexity that will turn into a long-term management
problem as sysadmins split or merge pools, change pool naming schemes,
reorganize dataset hierarchies, etc.
--
Mike Gerdts
http://mgerdts.blogspot.com/
On Thu, Jul 10, 2008 at 11:31 AM, Darren J Moffat <[EMAIL PROTECTED]> wrote:
> Mike Gerdts wrote:
>>
>> On Thu, Jul 10, 2008 at 5:42 AM, Darren J Moffat <[EMAIL PROTECTED]>
>> wrote:
>>>
>>> Thoughts ? Is this useful for anyone else ? My above e
It seems like the following
should work unambiguously:
# zfs snapshot ./[EMAIL PROTECTED]
# zfs snapshot `pwd`/[EMAIL PROTECTED]
--
Mike Gerdts
http://mgerdts.blogspot.com/
-Server.html
2. http://www.sun.com/servers/x64/x4540/specs.xml
--
Mike Gerdts
http://mgerdts.blogspot.com/
r that I connect to 10
gigabit Ethernet or the SAN (FC tape drives).
--
Mike Gerdts
http://mgerdts.blogspot.com/
as a result all of the deduped copies would be sequential as well.
What's more - it is quite likely to be in the ARC or L2ARC.
--
Mike Gerdts
http://mgerdts.blogspot.com/
tion of more storage, because of efficiencies of the storage devices,
makes it the same cost as less storage, then perhaps allocating more per
student is feasible. Or maybe tuition could drop by a few bucks.
--
Mike Gerdts
http://mgerdts.blogspot.com/
r my operations. Yes, teaching the user the
> "right thing" is useful, but that user isn't there to know how to "manage
> data" for my benefit. They're there to learn how to be filmmakers,
> journalists, speech pathologists, etc.
Well said.
--
Mike Gerdts
h
On Mon, Jul 7, 2008 at 9:24 PM, Bob Friesenhahn
<[EMAIL PROTECTED]> wrote:
> On Mon, 7 Jul 2008, Mike Gerdts wrote:
>> There tend to be organizational walls between those that manage
>> storage and those that consume it. As storage is distributed across
>> a netw
the
patches remains per-server used space. Additionally the other space
used by the installed patches remains used. Deduplication can
reclaim the majority of the space.
--
Mike Gerdts
http://mgerdts.blogspot.com/
rage - each server is a dataless FRU. If Vendor X
supports deduplication of live data (hint) I only need about 25% of
space that I would need if I weren't using clones + deduplication.
--
Mike Gerdts
http://mgerdts.blogspot.com/