[no one speaking? want my spam?]
Ok, folks, I don't know how open this is now. I turned on some public music,
the songs are very wholesome today.
I guess whatever I say now would be misleading, so, here is a joke.
Zhou will always end something with happy -
I told some bullshit, not
What does the 'verbose information' reported by zfs send -v snapshot contain?
Also on Solaris 10u6 I don't get any output at all - is this a bug?
Regards,
Nick
--
This message posted from opensolaris.org
___
zfs-discuss mailing list
Jim Klimov wrote:
Is it possible to create a (degraded) zpool with placeholders specified
instead of actual disks (parity or mirrors)? This is possible in Linux mdadm
(the 'missing' keyword), so I kinda hoped this could be done in Solaris, but
didn't manage to.
Create sparse files with the size
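The sparse-file suggestion can be sketched as follows; the pool name, device names, and file path are hypothetical, and the approach assumes ZFS will accept a file vdev of the right size as a stand-in for the missing disk:

```shell
# Create a sparse placeholder file the same size as the real disk
mkfile -n 1t /var/tmp/fakedisk

# Build the raidz pool with the placeholder standing in for the missing disk
zpool create tank raidz c0t0d0 c0t1d0 /var/tmp/fakedisk

# Offline the placeholder right away so ZFS never actually writes to it;
# the pool then runs degraded until the real disk is available
zpool offline tank /var/tmp/fakedisk

# Later, resilver onto the real disk and drop the placeholder
zpool replace tank /var/tmp/fakedisk c0t2d0
```

Because the file is sparse, it consumes almost no space on the hosting filesystem as long as it stays offline.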
meh
- Original Message -
From: Nick Smith nick.sm...@techop.ch
To: zfs-discuss@opensolaris.org
Sent: Friday, January 16, 2009 4:28 AM
Subject: [zfs-discuss] Verbose Information from zfs send -v snapshot
What does the 'verbose information' reported by zfs send -v snapshot
contain?
JZ wrote:
Beloved Jonny,
I am just like you.
There was a day, I was hungry, and went for a job interview for sysadmin.
They asked me - what is a protocol?
I could not give a definition, and they said, no, not qualified.
But they did not ask me about CICS and mainframe. Too bad.
On Jan 16, 2009, at 4:47 AM, Nick Smith wrote:
When I use the command 'zfs send -v snapshot-name' I expect to see,
as the manpage states, some verbose information printed to stderr
(probably), but I don't see anything on either Solaris 10u6 or
OpenSolaris 2008.11. I am doing something
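One quick check, since the manpage says the verbose output goes to stderr: capture stderr separately so it is not swallowed by whatever the stream is piped into. The pool and snapshot names below are hypothetical:

```shell
# The replication stream itself goes to stdout; any -v chatter goes to
# stderr. Discard the stream and keep only stderr to see what -v prints:
zfs send -v tank/home@snap1 >/dev/null 2>send-verbose.log
cat send-verbose.log
```

If the log file is empty even with this separation, the output really is missing rather than merely hidden.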
This seems to have worked, but it is showing an abnormal amount of data.
r...@fsk-backup:~# zpool list
NAME   SIZE   USED  AVAIL  CAP  HEALTH  ALTROOT
ambry  3.62T  132K  3.62T   0%  ONLINE  -
r...@fsk-backup:~# df -h | grep ambry
ambry 2.7T 27K 2.7T 1% /ambry
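If ambry is a raidz pool, the two numbers are actually consistent: zpool list reports raw capacity across all disks, parity included, while df (like zfs list) reports usable space after parity. A quick sanity check, assuming ambry is a four-disk single-parity raidz of "1 TB" (~931 GiB) disks:

```shell
# zpool list counts every disk, parity included; df subtracts parity.
disks=4
gib_per_disk=931                          # a marketing "1 TB" is ~931 GiB
raw=$((disks * gib_per_disk))             # ~3724 GiB ~= 3.62 TiB (zpool list)
usable=$(((disks - 1) * gib_per_disk))    # ~2793 GiB ~= 2.7 TiB (df)
echo "raw=${raw}GiB usable=${usable}GiB"
```

Both reported figures match the arithmetic, so this looks like expected raidz accounting rather than a bug.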
This
I tested with zfs_vdev_max_pending=8.
I hoped this would make the error messages
arcmsr0: too many outstanding commands (257 256)
go away, but it did not.
With zfs_vdev_max_pending=8, only 128 commands total should be outstanding,
I would think (16 drives * 8 = 128).
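That arithmetic can be checked directly, assuming (as the reasoning above does) that the tunable caps queue depth per leaf vdev:

```shell
# zfs_vdev_max_pending limits outstanding I/Os per leaf vdev; with 16
# drives the expected total is 16 * 8 = 128, well under arcmsr's
# 256-command limit. (On Solaris of this era the tunable is set in
# /etc/system, e.g.:
#   set zfs:zfs_vdev_max_pending = 8
# followed by a reboot.)
drives=16
max_pending=8
arcmsr_limit=256
total=$((drives * max_pending))
echo "expected outstanding: $total (controller limit: $arcmsr_limit)"
```

If the errors persist at 128 expected outstanding commands, something other than the per-vdev queue (scrub/resilver traffic, or a non-ZFS consumer of the controller) may be adding to the count.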
However
On Fri, 16 Jan 2009, Matt Harrison wrote:
Is this guy seriously for real? It's getting hard to stay on the list
with all this going on. No list etiquette, completely irrelevant
ramblings, need I go on?
The ZFS discussion list has produced its first candidate for the
rubber room that I
JZ wrote:
[...]
Is this guy seriously for real? It's getting hard to stay on the list
with all this going on. No list etiquette, completely irrelevant
ramblings, need I go on?
He probably has nothing better to do. Just ignore him; that's what
they dislike most. He will go away eventually.
Meh, this is retarded. It looks like zpool list shows an incorrect
calculation? Can anyone agree that this looks like a bug?
r...@fsk-backup:~# df -h | grep ambry
ambry 2.7T 27K 2.7T 1% /ambry
r...@fsk-backup:~# zpool list
NAME   SIZE   USED  AVAIL  CAP  HEALTH  ALTROOT
Jonny Gerold wrote:
Meh, this is retarded. It looks like zpool list shows an incorrect
calculation? Can anyone agree that this looks like a bug?
r...@fsk-backup:~# df -h | grep ambry
ambry 2.7T 27K 2.7T 1% /ambry
r...@fsk-backup:~# zpool list
NAME   SIZE   USED
On Thu, Jan 15, 2009 at 10:20 PM, Jonny Gerold j...@thermeon.com wrote:
Meh, this is retarded. It looks like zpool list shows an incorrect
calculation? Can anyone agree that this looks like a bug?
r...@fsk-backup:~# df -h | grep ambry
ambry 2.7T 27K 2.7T 1% /ambry
BTW, is there any difference between raidz and raidz1 (is raidz1 the one
with single-disk parity), or does raidz have a parity disk too?
Thanks, Jonny
Tim wrote:
On Thu, Jan 15, 2009 at 10:20 PM, Jonny Gerold j...@thermeon.com
mailto:j...@thermeon.com wrote:
Meh, this is retarded. It looks like
On Thu, Jan 15, 2009 at 10:36 PM, Jonny Gerold j...@thermeon.com wrote:
BTW, is there any difference between raidz and raidz1 (is raidz1 the one
with single-disk parity), or does raidz have a parity disk too?
Thanks, Jonny
It depends on who you're talking to I suppose.
I would expect generally raidz is
That's what I figured: raidz and raidz1 are the same thing. The '1'
is just put there to collect confusion ;)
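For the record, a short sketch showing the two spellings create the same layout (device names hypothetical; raidz2, by contrast, is double parity):

```shell
# Both commands build a single-parity raidz vdev
zpool create demo raidz  c1t0d0 c1t1d0 c1t2d0
zpool status demo       # the vdev shows up as raidz1 either way
zpool destroy demo
zpool create demo raidz1 c1t0d0 c1t1d0 c1t2d0   # identical layout
```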
Thanks, Jonny
Tim wrote:
On Thu, Jan 15, 2009 at 10:36 PM, Jonny Gerold j...@thermeon.com
mailto:j...@thermeon.com wrote:
BTW, is there any difference between raidz and raidz1
I don't believe that iozone does any synchronous calls (fsync/O_DSYNC/O_SYNC),
so the ZIL and separate logs (slogs) would be unused.
I'd recommend performance testing by configuring filebench to
do synchronous writes:
http://opensolaris.org/os/community/performance/filebench/
Neil.
On 01/15/09
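As a sketch of Neil's suggestion: filebench's stock varmail personality issues fsync() per operation, so it actually exercises the ZIL and any separate log device, unlike a purely asynchronous iozone run. The target directory is hypothetical and the exact invocation syntax is worth checking against the filebench docs for your version:

```shell
# Run the fsync-heavy varmail workload against a ZFS filesystem for 60s
filebench << 'EOF'
load varmail
set $dir=/tank/fbtest
run 60
EOF
```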
I've installed an s10u6 machine with no UFS partitions at all. I've created a
dataset for zones and one for a zone named default. I then do an lucreate
and luactivate and a subsequent boot off the new BE. All of that appears to
go just fine (though I've found that I MUST call the zone dataset
Hi Amy,
This is a known problem with ZFS and live upgrade. I believe the docs for
s10u6 discourage the config you show here. A patch should be ready some
time next month with a fix for this.
On Fri, 16 Jan 2009, amy.r...@tufts.edu wrote:
I've installed an s10u6 machine with no UFS
mmusante This is a known problem with ZFS and live upgrade. I believe the
mmusante docs for s10u6 discourage the config you show here. A patch should
mmusante be ready some time next month with a fix for this.
Do you happen to have a bugid handy?
I had done various searches to try and
On Fri, 16 Jan 2009, amy.r...@tufts.edu wrote:
mmusante This is a known problem with ZFS and live upgrade. I believe the
mmusante docs for s10u6 discourage the config you show here. A patch should
mmusante be ready some time next month with a fix for this.
Do you happen to have a bugid
Hi Amy,
You can review the ZFS/LU/zones issues here:
http://www.solarisinternals.com/wiki/index.php/ZFS_Troubleshooting_Guide#Live_Upgrade_with_Zones
The entire Solaris 10 10/08 UFS to ZFS with zones migration is described
here:
http://docs.sun.com/app/docs/doc/819-5461/zfsboot-1?a=view
Let
cindy.swearingen
http://www.solarisinternals.com/wiki/index.php/ZFS_Troubleshooting_Guide#Live_Upgrade_with_Zones
Thanks, Cindy, that was in fact the page I had been originally referencing
when I set up my datasets, and it was very helpful. I found it by reading a
comp.unix.solaris post in
On Fri, Jan 16, 2009 at 2:47 AM, Nick Smith nick.sm...@techop.ch wrote:
meh
meh
You should ignore JZ, he seems to just be trolling the list.
-B
--
Brandon High : bh...@freaks.com
This is what I discovered:
You can't have subdirectories of the zone root file system that are part of
the BE filesystem tree with zfs and lu (no separate /var etc).
Zone roots must be on the root pool for lu to work.
Extra file systems must come from a non-BE zfs file system tree (I use
datasets).
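Under those constraints, a layout along these lines should satisfy lu (pool, zone, and dataset names are all hypothetical):

```shell
# Zone roots live in the root pool, inside the BE tree, so lucreate and
# luactivate can snapshot/clone them with the boot environment
zfs create -o mountpoint=/zones rpool/zones
zonecfg -z web 'create; set zonepath=/zones/web'

# Application data comes from a separate, non-BE tree and is delegated
# into the zone as a dataset rather than nested under the zone root
zfs create tank/zonedata
zfs create tank/zonedata/web
zonecfg -z web 'add dataset; set name=tank/zonedata/web; end'
```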
Tim wrote:
On Thu, Jan 15, 2009 at 10:20 PM, Jonny Gerold j...@thermeon.com
mailto:j...@thermeon.com wrote:
Meh, this is retarded. It looks like zpool list shows an incorrect
calculation? Can anyone agree that this looks like a bug?
r...@fsk-backup:~# df -h | grep ambry
On Fri, 16 Jan 2009, Bob Friesenhahn wrote:
On Fri, 16 Jan 2009, Matt Harrison wrote:
Is this guy seriously for real? It's getting hard to stay on the list
with all this going on. No list etiquette, completely irrelevant
ramblings, need I go on?
The ZFS discussion list has produced its first
Hi Wes,
I now have a real question.
How do you define silly, and artificial intelligence, and script?
And halfway inclined to believe to me means 25%.
(believe is 100%, inclined is 50%, and halfway is 25% in crystal math, and
maybe even less in storage math, including the RAID and HA and DR and
Solaris 10 5/08
Customer migrated to a new EMC array with a snapshot and did a send and
receive.
He is now trying to set quotas on the zfs file system and getting the
following error.
[r...@osprey /] # zfs set quota=800g target/u05
cannot set property for 'target/u05': size is less
Gregory Edwards - Software Support wrote:
[r...@osprey /] # zfs set quota=800g target/u05
cannot set property for 'target/u05': size is less than current used or
reserved space
...
target/u05 1.06T 206G
target/u...@1 671G -
...
He was able to set them all
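A likely cause: quota counts space used by snapshots, and target/u...@1 alone references 671G, so total used (1.06T) already exceeds the requested 800G. If the goal is to cap only the live data, refquota (where the ZFS version on the pool supports it) excludes snapshot space; a hedged sketch:

```shell
# quota includes snapshot and descendant usage, so this fails while the
# snapshot still holds ~671G:
#   zfs set quota=800g target/u05
# refquota limits only the data the dataset itself references:
zfs set refquota=800g target/u05
zfs get used,referenced,refquota target/u05
```

The alternative is to destroy the migration snapshot first, after which the original quota=800g should apply cleanly.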
Hi Rich,
This is the best summary I have seen. [china folks say, older ginger more
satisfying, true]
Just one thing I would like to add -
It also depends on the encryption technique and algorithm. Today we are
doing private-key encryption, such that without the key you cannot read the data.
Carson Gaspar wrote:
Gregory Edwards - Software Support wrote:
[r...@osprey /] # zfs set quota=800g target/u05
cannot set property for 'target/u05': size is less than current used or
reserved space
...
target/u05 1.06T 206G
target/u...@1 671G -
...
He was able
I'm looking at the newly-orderable (via Sun) STEC Zeus SSDs, and they're
outrageously priced.
http://www.stec-inc.com/product/zeusssd.php
I just looked at the Intel X25-E series, and they look comparable in
performance. At about 20% of the cost.
The Intel part does about a fourth as many synchronous write IOPS at
best.
Adam
On Jan 16, 2009, at 5:34 PM, Erik Trimble wrote:
I'm looking at the newly-orderable (via Sun) STEC Zeus SSDs, and
they're
outrageously priced.
http://www.stec-inc.com/product/zeusssd.php
I just looked at
Thank you very, very much! For love
Respectfully
Best,
z
BTW,
THANK YOU
WITH LOVE
:-)
Best,
z
- Original Message -
From: JZ j...@excelsioritsolutions.com
To: ZFS Discussions zfs-discuss@opensolaris.org
Sent: Friday, January 16, 2009 10:59 PM
Subject: Re: [zfs-discuss] (no subject)
Thank you very, very much! For love
Respectfully
Best,
z