> On 2010-Sep-24 00:58:47 +0800, "R.G. Keen"
> wrote:
> > But for me, the likelihood of
> >making a setup or operating mistake in a virtual machine
>setup server far outweighs the hardware cost to put
> >another physical machine on the ground.
>
> The downsides are generally that it'll be
Have you tried setting zfs_recover & aok in /etc/system, or setting them
with mdb?
How to set them via /etc/system:
http://opensolaris.org/jive/thread.jspa?threadID=114906
How to set them with the mdb debugger:
http://www.listware.net/201009/opensolaris-zfs/46706-re-zfs-discuss-how-to-set-zfszfsrecover1-and-aok1-in-grub
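For reference, a minimal sketch of both approaches (double-check the variable
names against your build before trusting a pool to them):

  # in /etc/system (takes effect on the next boot)
  set zfs:zfs_recover = 1
  set aok = 1

  # or on the live kernel with mdb, no reboot needed
  echo "aok/W 1" | mdb -kw
  echo "zfs_recover/W 1" | mdb -kw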
> From: zfs-discuss-boun...@opensolaris.org [mailto:zfs-discuss-
> boun...@opensolaris.org] On Behalf Of Peter Taps
>
> The dedup property is set on a filesystem, not on the pool.
>
> However, the dedup ratio is reported on the pool and not on the
> filesystem.
As with most other ZFS concepts, the property is set per file system, but the
deduplication machinery (and therefore the ratio it reports) operates pool-wide.
Timing is everything. Lori's is the authoritative answer, and it makes sense
given the limitations at boot. Thanks Lori! :-)
-- richard
--
OpenStorage Summit, October 25-27, Palo Alto, CA
http://nexenta-summit2010.eventbrite.com
Richard Elling
rich...@nexenta.com +1-760-896-4422
On Sep 23, 2010, at 3:40 PM, Frank Middleton wrote:
> Bumping this because no one responded. Could this be because
> it's such a stupid question no one wants to stoop to answering it,
> or because no one knows the answer? Trying to picture, say, what
> could happen in /var (say /var/adm/messages),
On 09/23/10 04:40 PM, Frank Middleton wrote:
Bumping this because no one responded. Could this be because
it's such a stupid question no one wants to stoop to answering it,
or because no one knows the answer? Trying to picture, say, what
could happen in /var (say /var/adm/messages), let alone a
Hi Charles,
There are quite a few bugs in b134 that can lead to this. Alas, due to the new
regime, there was a period of time when the distributions were not being
delivered. If I were in your shoes, I would upgrade to OpenIndiana b147 which
has 26 weeks of maturity and bug fixes over b134.
http:
Hi Peter,
dedupe is pool-wide. File systems can opt in or out of dedupe. So if multiple
file systems are set to dedupe, they all benefit from using the same pool of
deduped blocks. In this way, if two files share some of the same blocks,
even if they are in different file systems, they will still be stored only once.
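A quick illustration of that split (pool and file system names here are made up):

  zfs set dedup=on tank/fs1
  zfs set dedup=on tank/fs2
  zfs get dedup tank/fs1 tank/fs2   # the property is per file system
  zpool get dedupratio tank         # the ratio is reported for the whole pool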
On 2010-Sep-24 00:58:47 +0800, "R.G. Keen" wrote:
>That may not be the best of all possible things to do
>on a number of levels. But for me, the likelihood of
>making a setup or operating mistake in a virtual machine
>setup server far outweighs the hardware cost to put
>another physical machine on the ground.
I believe it goes something like this -
ZFS filesystems with dedupe turned on can be thought of as hippie/socialist
filesystems, wanting to "share", etc. Filesystems with dedupe turned off are
a grey Randian landscape where sharing blocks between files is seen as a
weakness/defect. They all
On 09/23/10 15:36, Peter Taps wrote:
I am a bit confused on the dedup relationship between the filesystem and its
pool.
The dedup property is set on a filesystem, not on the pool.
Dedup is a pool-wide concept; blocks from multiple filesystems
may be deduplicated against each other.
However, the dedup ratio is reported on the pool and not on the filesystem.
Bumping this because no one responded. Could this be because
it's such a stupid question no one wants to stoop to answering it,
or because no one knows the answer? Trying to picture, say, what
could happen in /var (say /var/adm/messages), let alone a swap
zvol, is giving me a headache...
On 07/09
Folks,
I am a bit confused on the dedup relationship between the filesystem and its
pool.
The dedup property is set on a filesystem, not on the pool.
However, the dedup ratio is reported on the pool and not on the filesystem.
Why is it this way?
Thank you in advance for your help.
Regards,
Peter
Hi!
2010/9/23 Gary Mills
>
> On Tue, Sep 21, 2010 at 05:48:09PM +0200, Alexander Skwar wrote:
> >
> > We're using ZFS via iSCSI on a S10U8 system. As the ZFS Best
> > Practices Guide http://j.mp/zfs-bp states, it's advisable to use
> > redundancy (ie. RAIDZ, mirroring or whatnot), even if the und
So, I'm still having problems with intermittent hangs on write with my ZFS
pool. Details from my original post are below. Since posting that, I've gone
back and forth with a number of you, and gotten a lot of useful advice, but I'm
still trying to get to the root of the problem so I can correct it.
On 9/23/2010 at 12:38 PM Erik Trimble wrote:
| [snip]
| If you don't really care about ultra-low-power, then there's absolutely
| no excuse not to buy a USED server-class machine which is 1- or 2-
| generations back. They're dirt cheap, readily available,
| [snip]
Anyone have
On Thu, Sep 23, 2010 at 06:58:29AM +, Markus Kovero wrote:
> > What is an example of where a checksummed outside pool would not be able
> > to protect a non-checksummed inside pool? Would an intermittent
> > RAM/motherboard/CPU failure that only corrupted the inner pool's block
> > before it was passed to the outer pool
[I'm deleting the whole thread, since this is a rehash of several
discussions on this list previously - check out the archives, and search
for "ECC RAM"]
These days, for a "home" server, you really have only one choice to make:
"How much power do I care that this thing uses?"
If you are s
> On 23/09/2010 11:06 PM, casper@sun.com wrote:
> >
> >> Ok, that doesn't seem to have worked so well ...
> >>
> >> I took one of the drives offline, rebooted and it just hangs at the
> >> splash screen after prompting for which BE to boot into.
> >> It gets to
> >> hostname: blah
> >> and
I should clarify. I was addressing just the issue of
virtualizing, not the complete set of things to do to prevent data loss.
> 2010/9/19 R.G. Keen
> > and last-generation hardware is very, very cheap.
> Yes, of course, it is. But, actually, is that a true
> statement?
Yes, it is. Last
On Thu, September 23, 2010 01:33, Alexander Skwar wrote:
> Hi.
>
> 2010/9/19 R.G. Keen
>
>> and last-generation hardware is very, very cheap.
>
> Yes, of course, it is. But, actually, is that a true statement? I've read
> that it's *NOT* advisable to run ZFS on systems which do NOT have ECC
> RAM
On Sep 23, 2010, at 9:08 AM, Dick Hoogendijk wrote:
> On 23-9-2010 16:34, Frank Middleton wrote:
>
> > For home use, used Suns are available at ridiculously low prices and
> > they seem to be much better engineered than your typical PC. Memory
> > failures are much more likely than winning the pick 6 lotto...
On 23-9-2010 16:34, Frank Middleton wrote:
For home use, used Suns are available at ridiculously low prices and
they seem to be much better engineered than your typical PC. Memory
failures are much more likely than winning the pick 6 lotto...
And which SUN systems are you thinking of for home use?
@ kebabber:
> There was a guy doing that: Windows as host and
> OpenSolaris as guest with raw access to his disks. He
> lost his 12 TB of data. It turned out that VirtualBox
> doesn't honor the write flush flag (or something
> similar).
That story is in the link I provided, and as has been pointed out
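If I remember the VirtualBox knob correctly, it is the per-LUN IgnoreFlush
setting; a sketch, with the VM name and LUN number as placeholders (the device
key differs for IDE vs AHCI disks, so check the VirtualBox manual):

  VBoxManage setextradata "MyVM" \
    "VBoxInternal/Devices/piix3ide/0/LUN#0/Config/IgnoreFlush" 0
  # 0 = honor the guest's flush requests; for SATA/AHCI disks the key is
  # "VBoxInternal/Devices/ahci/0/LUN#0/Config/IgnoreFlush"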
On Tue, Sep 21, 2010 at 05:48:09PM +0200, Alexander Skwar wrote:
>
> We're using ZFS via iSCSI on a S10U8 system. As the ZFS Best
> Practices Guide http://j.mp/zfs-bp states, it's advisable to use
> redundancy (ie. RAIDZ, mirroring or whatnot), even if the underlying
> storage does its own RAID th
On Wed, Sep 22, 2010 at 8:13 PM, Richard Elling wrote:
> On Sep 22, 2010, at 1:46 PM, LIC mesh wrote:
>
> Something else is probably causing the slow I/O. What is the output of
> "iostat -en" ? The best answer is "all balls" (balls == zeros)
>
> Found a number of LUNs with errors this way, lo
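"All balls" means every error counter reads zero; roughly what healthy output
looks like (device names will of course differ):

  $ iostat -en
    ---- errors ---
    s/w h/w trn tot device
      0   0   0   0 c0t0d0
      0   0   0   0 c0t1d0
  # non-zero s/w, h/w or trn counts point at a suspect device/LUN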
On 09/23/10 03:01, Ian Collins wrote:
So, I wonder - what's the recommendation, or rather, experience as far
as home users are concerned? Is it "safe enough" now to use ZFS on
non-ECC-RAM systems (if backups are around)?
It's as safe as running any other OS.
The big difference is ZFS will tell you when your data has been corrupted.
On 23/09/2010 11:06 PM, casper@sun.com wrote:
>
>> Ok, that doesn't seem to have worked so well ...
>>
>> I took one of the drives offline, rebooted and it just hangs at the
>> splash screen after prompting for which BE to boot into.
>> It gets to
>> hostname: blah
>> and just sits there.
>
>
>
>Ok, that doesn't seem to have worked so well ...
>
>I took one of the drives offline, rebooted and it just hangs at the
>splash screen after prompting for which BE to boot into.
>It gets to
>hostname: blah
>and just sits there.
When you say "offline", did you:
- remove the drive physically
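The distinction matters because the two paths behave differently at boot. A
sketch of the software route, with a hypothetical device name:

  zpool offline rpool c0t1d0s0   # cleanly mark one side of the mirror offline
  zpool status rpool             # mirror should show DEGRADED, not FAULTED
  # physically pulling the disk instead leaves the pool to discover the
  # missing device at boot/import time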
swapping the boot order in the PC's BIOS doesn't help
it is responding to pings, btw, so *something's* running. Not ssh though
Ok, that doesn't seem to have worked so well ...
I took one of the drives offline, rebooted and it just hangs at the splash
screen after prompting for which BE to boot into.
It gets to
hostname: blah
and just sits there.
Um ...
I read some doco that says :
The boot process can be slow if t
> On 23-9-2010 10:25, casper@sun.com wrote:
>> I'm using ZFS on a system w/o ECC; it works (it's an Atom 230).
>
>I've been using ZFS on a non-ECC machine for years now without any issues.
>Never had errors. Plus, like others said, other OSes have the same
>problems and also run quite well. If
On 23-9-2010 10:25, casper@sun.com wrote:
I'm using ZFS on a system w/o ECC; it works (it's an Atom 230).
I've been using ZFS on a non-ECC machine for years now without any issues.
Never had errors. Plus, like others said, other OSes have the same
problems and also run quite well. If not, yo
I'm using ZFS on a system w/o ECC; it works (it's an Atom 230).
Note that this is not different from using another OS; the difference is
that ZFS will complain when memory leads to disk corruption; without ZFS
you will still have memory corruption but you wouldn't know.
Is it helpful not to know?
On Thu, Sep 23, 2010 at 08:48, Haudy Kazemi wrote:
> Mattias Pantzare wrote:
>>
>> ZFS needs free memory for writes. If you fill your memory with dirty
>> data zfs has to flush that data to disk. If that disk is a virtual
>> disk in zfs on the same computer those writes need more memory from
>> th
Markus Kovero wrote:
What is an example of where a checksummed outside pool would not be able
to protect a non-checksummed inside pool? Would an intermittent
RAM/motherboard/CPU failure that only corrupted the inner pool's block
before it was passed to the outer pool (and did not corrupt the o
On 09/23/10 05:00 PM, Carl Brewer wrote:
G'day,
My OpenSolaris (b134) box is low on space and has a ZFS mirror for root :
uname -a
SunOS wattage 5.11 snv_134 i86pc i386 i86pc
rpool   696G   639G   56.7G   91%   1.09x   ONLINE   -
It's currently a pair of 750GB drives. In my bag I have a pair of larger drives.
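For what it's worth, the usual dance for growing a mirrored rpool by swapping
in bigger disks, sketched with hypothetical device names (slice numbers and
installgrub paths may differ on your box):

  zpool attach rpool c0t0d0s0 c0t2d0s0   # attach a new, larger disk to the mirror
  zpool status rpool                     # wait for the resilver to finish
  installgrub /boot/grub/stage1 /boot/grub/stage2 /dev/rdsk/c0t2d0s0
  zpool detach rpool c0t0d0s0            # retire the old disk
  # repeat for the other side, then let the pool claim the extra space:
  zpool set autoexpand=on rpool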
On 09/23/10 06:33 PM, Alexander Skwar wrote:
Hi.
2010/9/19 R.G. Keen
and last-generation hardware is very, very cheap.
Yes, of course, it is. But, actually, is that a true statement? I've read
that it's *NOT* advisable to run ZFS on systems which do NOT have ECC
RAM. And those cheap
> What is an example of where a checksummed outside pool would not be able
> to protect a non-checksummed inside pool? Would an intermittent
> RAM/motherboard/CPU failure that only corrupted the inner pool's block
> before it was passed to the outer pool (and did not corrupt the outer
> pool's