> "ag" == Andrew Gabriel writes:
ag> I can't speak for qmail, which I've never used, but MTAs
ag> should sync data to disk before acknowledging receipt,
Yeah, I saw a talk by one of the Postfix developers. They've taken
pains to limit the amount of syncing so it's only one or two c
Jens Elkner wrote:
On Tue, Oct 13, 2009 at 09:20:23AM -0700, Paul B. Henson wrote:
We're currently using the Sun bundled Samba to provide CIFS access to our
ZFS user/group directories.
...
Evidently the samba engineering group is in Prague. I don't know if it is a
language problem, or where th
On 14/10/2009, at 2:27 AM, casper@sun.com wrote:
So why not the built-in CIFS support in OpenSolaris? Probably has a
similar issue, but still.
In my case, there are at least two reasons:
* Crossing mountpoints requires separate shares - Samba can share an
entire hierarchy regardless of ZF
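A minimal sketch of the difference, assuming a hypothetical tank/home hierarchy: with the in-kernel CIFS server every ZFS file system ends up as its own SMB share, whereas Samba can export the whole tree from a single smb.conf share.

  # in-kernel CIFS: each file system is a separate share, so a client cannot
  # traverse from tank/home into tank/home/alice within one share
  zfs set sharesmb=on tank/home
  zfs set sharesmb=on tank/home/alice
  # Samba, by contrast, can export all of /tank/home with one smb.conf share,
  # and clients cross the underlying ZFS mountpoints transparently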
On Tue, Oct 13, 2009 at 09:20:23AM -0700, Paul B. Henson wrote:
>
> We're currently using the Sun bundled Samba to provide CIFS access to our
> ZFS user/group directories.
...
> Evidently the samba engineering group is in Prague. I don't know if it is a
> language problem, or where the confusion i
On Tue, Oct 13, 2009 at 01:00:35PM -0400, Frank Middleton wrote:
> After a recent upgrade to b124, decided to switch to COMSTAR for iscsi
> targets for VirtualBox hosted on AMD64 Fedora C10. Both target and
> initiator are running zfs under b124. This combination seems
> unbelievably slow compared
On Tue, 13 Oct 2009, Drew Balfour wrote:
> Ah. No. If you're using idmap and are mapping to an AD server, the
> windows SIDs (which are both users and groups) are stored in a cred
> struct (in cr_ksid) which allows more than 32 groups, up to 64k iirc.
Ah, yes, I neglected to consider that given t
Well, your storage plan will suit the 1% of users who don't need
reliability or a roomy media back-end. So it can work out well - but
unfortunately it is not a silver bullet.
--
Roman
Mike DeMarco wrote:
Any reason why ZFS would not work on an FDE (Full Disk Encryption) hard drive?
None, provided the drive is available to the OS by normal means.
--
Darren J Moffat
On Tue, 13 Oct 2009, Nils Goroll wrote:
Regarding my bonus question: I haven't yet found a definite answer on whether there
is a way to read the currently active controller setting. I still assume that
the nvsram settings which can be read with
service -d -c read -q nvsram region=0xf2 host=
On Tue, 13 Oct 2009 casper@sun.com wrote:
> That's not entirely true; the issue is similar to having more than 16 groups,
> as it breaks AUTH_SYS over-the-wire "authentication", but we already have
> that now.
[...]
> For now, we're aiming for 1024 groups but also make sure that the
> userland will
Paul B. Henson wrote:
So why not the built-in CIFS support in OpenSolaris? Probably has a
similar issue, but still.
I wouldn't think it has this same issue; presumably it won't support more
than the kernel limit of 32 groups, but I can't imagine that in the case
when a user is in more than 32
>Regarding Solaris 10, my understanding was that the current 32 group limit
>could only be changed by modifying internal kernel structures that would
>break backwards compatibility, which wouldn't happen because Solaris
>guarantees backwards binary compatibility. I could most definitely be
>mista
Hi Bob and all,
So this sounds like we need to wait for someone to come up with a definite
answer.
I've received some helpful information on this:
> Byte 17 is for "Ignore Force Unit Access".
> Byte 18 is for Ignore Disable Write Cache.
> Byte 21 is for Ignore Cache Sync.
>
> Change ALL settings
On Tue, 13 Oct 2009 casper@sun.com wrote:
> So why not the built-in CIFS support in OpenSolaris? Probably has a
> similar issue, but still.
I wouldn't think it has this same issue; presumably it won't support more
than the kernel limit of 32 groups, but I can't imagine that in the case
when
Every ZFS filesystem uses system memory, but is this also true for
-NOT- mounted filesystems (with the canmount=noauto option set)?
Second question: would it make much difference to have 12 or 22 ZFS
filesystems? What's the memory footprint of a ZFS filesystem
--
Dick Hoogendijk -- PGP/GnuPG key
Second question: would it make much difference to have 12 or 22 ZFS
filesystems? What's the memory footprint of a ZFS filesystem
I remember a figure of 64KB kernel memory per file system.
-mg
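As a rough sanity check (just a sketch, assuming the ~64KB-per-file-system figure above), you can count the file systems and look at where kernel memory actually goes:

  zfs list -H -o name -t filesystem | wc -l   # number of file systems on the box
  echo ::memstat | mdb -k                     # kernel memory breakdown, run as root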
Any reason why ZFS would not work on an FDE (Full Disk Encryption) hard drive?
It seems to be around the 33:32 mark when they start talking about dedup.
On Tue, Oct 13, 2009 at 10:05 AM, Paul Archer wrote:
> Someone posted this link: https://slx.sun.com/1179275620 for a video on
> ZFS deduplication. But the site isn't responding (which is typical of Sun,
> since I've been dealing with t
> Before you do a dd test try first to do:
> echo zfs_vdev_max_pending/W0t1 | mdb -kw
I did actually try this about a month ago when I first made an attempt at
figuring this out. Changing the pending values did make some small difference,
but even the best was far, far short of acceptable perfo
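For anyone repeating this, a small sketch of how to inspect the value before and after the change, and how to make it persistent:

  echo zfs_vdev_max_pending/D | mdb -k      # read the current value
  echo zfs_vdev_max_pending/W0t1 | mdb -kw  # set it to 1 on the running kernel
  # to persist across reboots, add this line to /etc/system:
  #   set zfs:zfs_vdev_max_pending = 1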
On Tue, Oct 13, 2009 at 05:32:35AM -0700, Julio wrote:
> Hi,
>
> I have the following partitions on my laptop, Inspiron 6000, from fdisk:
>
> 1           Other OS          0    11      12    0
> 2           EXT LBA          12  2561    2550   26
> 3
Also, ZFS does things like putting the ZIL data (when not on a dedicated
device) at the outer edge of disks, that being faster.
No, ZFS does not do that. It will chain the intent log from blocks allocated
from the same metaslabs that the pool is allocating from.
This actually works out well be
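If you do want the intent log on a dedicated device, the usual form is (a sketch; pool and device names are hypothetical):

  zpool add tank log c3t0d0
  # or, for redundancy:
  # zpool add tank log mirror c3t0d0 c4t0d0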
My answer is incomplete.
You can use the zpool attach command to attach another disk
slice to a root pool's disk slice to expand the pool size after
the smaller disk is detached.
On Julio's laptop, I don't think think he can attach another
fdisk partition to his root pool. I think he needs to ba
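For completeness, the usual attach/detach sequence for growing a root pool onto a larger slice looks roughly like this (a sketch; device names are hypothetical, and it assumes x86 with GRUB):

  zpool attach rpool c1t0d0s0 c1t1d0s0    # mirror onto the larger slice
  zpool status rpool                      # wait for the resilver to complete
  installgrub /boot/grub/stage1 /boot/grub/stage2 /dev/rdsk/c1t1d0s0
  zpool detach rpool c1t0d0s0             # drop the smaller slice so the pool can grow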
After a recent upgrade to b124, decided to switch to COMSTAR
for iscsi targets for VirtualBox hosted on AMD64 Fedora C10. Both
target and initiator are running zfs under b124. This combination
seems unbelievably slow compared to the old iscsi subsystem.
A scrub of a local 20GB disk on the target
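For reference, the COMSTAR side of a setup like this is roughly (a sketch; pool, zvol name and GUID are hypothetical):

  zfs create -V 20G tank/vbox01                       # zvol backing the LUN
  svcadm enable stmf
  svcadm enable -r svc:/network/iscsi/target:default
  sbdadm create-lu /dev/zvol/rdsk/tank/vbox01         # prints the LU GUID
  stmfadm add-view 600144f0deadbeef                   # GUID from the sbdadm output
  itadm create-target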
On 10/12/2009 04:38 PM, Paul B. Henson wrote:
I only have ZFS filesystems exported right now, but I assume it would
behave the same for ufs. The underlying issue seems to be the Sun NFS
server expects the NFS client to apply the sgid bit itself and create the
new directory with the parent directo
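A quick way to reproduce the behaviour (a sketch; paths and group name are hypothetical): set the sgid bit on an exported directory on the server, create a directory from the Linux client, and check which group it received.

  # on the Solaris server
  chgrp staff /export/home/shared
  chmod g+s /export/home/shared
  # on the Linux NFS client
  mkdir /mnt/shared/newdir
  ls -ld /mnt/shared/newdir   # group should be staff, not the user's primary group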
Except that you can't add a disk or partition to a root pool:
# zpool add rpool c1t1d0s0
cannot add to 'rpool': root pool can not have multiple vdevs or separate
logs
He could try to attach the partition to his existing pool, I'm not sure
how, and this would only create a mirrored root pool, i
I think zpool add should work for you. Google it:
zpool add rpool yourNTFSpartition
On Tue, October 13, 2009 08:24, Derek Anderson wrote:
> The only bad part is I cannot estimate how much life the old disks have
> left because in a few months, I am going to have a handful of the
> fastest SSDs around and not sure if I would trust them for much of
> anything.
In the long
>
>We're currently using the Sun bundled Samba to provide CIFS access to our
>ZFS user/group directories.
So why not the built-in CIFS support in OpenSolaris? Probably has a
similar issue, but still.
>I found a bug in active directory integration mode, where if a user is in
>more than 32 activ
We're currently using the Sun bundled Samba to provide CIFS access to our
ZFS user/group directories.
I found a bug in active directory integration mode, where if a user is in
more than 32 active directory groups, samba calls setgroups with a group
list of greater than 32, which fails, resulting
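For what it's worth, a sketch of checking and raising the limit on Solaris 10, where the ceiling is 32:

  getconf NGROUPS_MAX                        # current per-process group limit
  echo "set ngroups_max=32" >> /etc/system   # raise it to the maximum, then reboot
  # users in more than 32 AD groups still exceed the limit, which is what
  # trips up the setgroups() call here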
Hi Bob and all,
I should update this paper since the performance is now radically
different and the StorageTek 2540 CAM configurables have changed.
That would be great, I think you'd do the community (and Sun, probably) a big
favor.
Is this information still current for F/W 07.35.44.10 ?
On Tue, 13 Oct 2009, Joerg Schilling wrote:
> The correct behavior would be to assign the group ownership of the parent
> directory to a new directory (instead of using the current process
> credentials) in case that the sgid bit is set in the parent directory.
> Is this your problem?
Yes, that i
On Tue, 13 Oct 2009 casper@sun.com wrote:
> If you look at the code in ufs and zfs, you'll see that they both create
> the mode correctly and the same code is used through NFS.
>
> There's another scenario: the Linux client updates the attributes after
> creating the file/directory/
I don't t
On Tue, 13 Oct 2009, Nils Goroll wrote:
I am trying to find out some definite answers on what needs to be done on an
STK 2540 to set the Ignore Cache Sync option. The best I could find is Bob's
"Sun StorageTek 2540 / ZFS Performance Summary" (Dated Feb 28, 2008, thank
you, Bob), in which he q
I just upgraded my machine from 2008.11 to 2009.06 with pkg install
image-update, and that all seemed to go fine.
Now, however, my 5 disk raidz is complaining about corrupted metadata. However,
if I reboot back into 2008.11, it still works fine. I even can do things which
I think might check co
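If it were my pool, a first sketch at narrowing this down (pool name hypothetical) would be to compare what each build reports and let a scrub verify the on-disk state from the build that can still import it:

  zpool status -v tank   # run from both the 2008.11 and 2009.06 boot environments
  zpool scrub tank       # from the build that still imports the pool cleanly
  zpool import           # from 2009.06, to see how the devices are being found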
Bob Friesenhahn wrote:
My own reliability concerns regarding a "SAN" are due to the big-LUN
that SAN hardware usually emulates and not due to communications in the
"SAN". A big-LUN is comprised of multiple disk drives. If the SAN
storage array has an error, then it is possible that the data o
On Tue, Oct 13, 2009 at 9:42 AM, Aaron Brady wrote:
> I did, but as tcook suggests running a later build, I'll try an
> image-update (though, 111 > 2008.11, right?)
>
It should be, yes. b111 was released in April of 2009.
--Tim
Hi,
I am trying to find out some definite answers on what needs to be done on an STK
2540 to set the Ignore Cache Sync option. The best I could find is Bob's "Sun
StorageTek 2540 / ZFS Performance Summary" (Dated Feb 28, 2008, thank you, Bob),
in which he quotes a posting of Joel Miller:
To
On Tue, Oct 13, 2009 at 8:24 AM, Derek Anderson wrote:
> Before you all start taking bets, I am having a difficult time
> understanding why you would. If you think I am nuts because SSDs have a
> limited lifespan, I would agree with you, however we all know that SSDs are
> going to get cheaper
I did, but as tcook suggests running a later build, I'll try an image-update
(though, 111 > 2008.11, right?)
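For the record, the update itself is just (a sketch):

  pfexec pkg refresh
  pfexec pkg image-update -v
  beadm list   # the update lands in a new boot environment you can boot back out of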
Hi Tim, that doesn't help in this case - it's a complete lockup apparently
caused by driver issues.
However, the good news for Insom is that the bug is closed because the problem
now appears fixed. I tested it and found that it's no longer occurring in
OpenSolaris 2008.11 or 2009.06.
If you mo
On Tue, 13 Oct 2009, Shawn Joy wrote:
The ZFS file systems are what is different here. If an HBA, a fibre
cable, or a redundant controller fails, or firmware issues occur on an
array's redundant controller, then SSTM (MPxIO) will see the issue and
try to fail things over to the other controller. Of
On Tue, Oct 13, 2009 at 8:54 AM, Aaron Brady wrote:
> All's gone quiet on this issue, and the bug is closed, but I'm having
> exactly the same problem; pulling a disk on this card, under OpenSolaris
> 111, is pausing all IO (including, weirdly, network IO), and using the ZFS
> utilities (zfs list
I did bad math, I meant 25,000 in labor dollars saved over 6 months. There is
one application called FRx, a reporting engine for their accounting. Even if
their executives save 10 minutes a day running just that bloated application,
then this plan has paid for itself in just a few weeks.
ZFS
Hi--
Unfortunately, you cannot change the partitioning underneath your pool.
I don't see any way of resizing this partition except for backing up
your data, repartitioning the disk, and reinstalling OpenSolaris
2009.06.
Maybe someone else has a better idea...
Cindy
On 10/13/09 06:32, Julio wr
Someone posted this link: https://slx.sun.com/1179275620 for a video on ZFS
deduplication. But the site isn't responding (which is typical of Sun, since
I've been dealing with them for the last 12 years).
Does anyone know of a mirror site, or if the video is on YouTube?
Paul
On Tue, 13 Oct 2009, Ren Pillay wrote:
I'm wondering how to interpret what ZFS is telling me in regard to
the errors being reported. 1 of my disks (in a 5 disc raidZ array)
reports about 4-5 write/read errors every few days. All 5 are
directly connected to the motherboard SATA ports, no raid co
All's gone quiet on this issue, and the bug is closed, but I'm having exactly
the same problem; pulling a disk on this card, under OpenSolaris 111, is
pausing all IO (including, weirdly, network IO), and using the ZFS utilities
(zfs list, zpool list, zpool status) causes a hang until I replace t
according to this page :
http://www.opensolaris.org/os/community/zfs/ztest/
its supposed to be in /usr/bin
i run snv_124
Thanks,
Dirk
On 13 oct. 2009, at 15:24, Derek Anderson wrote:
Simple answer: Man hour math. I have 150 virtual machines on these
disks for shared storage. They hold no actual data so who really
cares if they get lost. However 150 users of these virtual machines
will save 5 minutes or so every day of
Mike DeMarco wrote:
Does anyone know when this will be available? Project says Q4 2009 but does not
give a build.
Yes. Not giving a build is deliberate because builds are very narrow
windows and there has been much flux in the build schedule for what may
or may not be restricted content bui
Before you all start taking bets, I am having a difficult time understanding
why you would. If you think I am nuts because SSDs have a limited lifespan,
I would agree with you, however we all know that SSDs are going to get cheaper
and cheaper as the days go by. The Intels I bought in April
Does anyone know when this will be available? Project says Q4 2009 but does not
give a build.
I think after some time we're gonna see Derek screaming about f... ZFS that toasted
the data on his SSD array :)
Hopefully this setup was not for production.
--
Roman Naumenko
PS
Hi,
I have the following partitions on my laptop, Inspiron 6000, from fdisk:
Partition   Status   Type          Start    End   Length    %
    1                Other OS          0     11       12    0
    2                EXT LBA          12   2561     2550   26
    3       Active   Solaris2       2562   9728     7167   74
First one is f
>In life there are many things that we "should do" (but often don't).
>There are always trade-offs. If you need your pool to be able to
>operate with a device missing, then the pool needs to have sufficient
>redundancy to keep working. If you want your pool to survive if a
>disk gets crushed by a w
> Is this minutes:seconds.millisecs ? if so, you're looking at 3-4MB/s ..
> I would say something is wrong.
Ack, you're right.
I was concentrating so much on the WTFOMG problem that I completely missed the
WTF problem.
In other news, with the Poweredge put into "SCSI" mode instead of "RAID" mod
"Paul B. Henson" wrote:
>
> We're running Solaris 10 with ZFS to provide home and group directory file
> space over NFSv4. We've run into an interoperability issue between the
> Solaris NFS server and the Linux NFS client regarding the sgid bit on
> directories and assigning appropriate group own
I'm wondering how to interpret what ZFS is telling me in regard to the errors
being reported. 1 of my disks (in a 5 disc raidZ array) reports about 4-5
write/read errors every few days. All 5 are directly connected to the
motherboard SATA ports, no raid controller card in between.
How bad is it?
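Before deciding, it's worth checking whether the errors are also surfacing at the driver level (a sketch; pool name is hypothetical):

  zpool status -v tank   # per-vdev read/write/checksum counters and affected files
  iostat -En             # per-device soft/hard/transport error totals from the driver
  fmdump -e | tail       # recent FMA error telemetry for the suspect disk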
>I only have ZFS filesystems exported right now, but I assume it would
>behave the same for ufs. The underlying issue seems to be the Sun NFS
>server expects the NFS client to apply the sgid bit itself and create the
>new directory with the parent directory's group, while the Linux NFS client
>ex