Re: [OmniOS-discuss] Fragmentation

2017-06-23 Thread Richard Elling
ZIL pre-allocates at the block level, so think along the lines of 12k or 132k.
 — richard

> On Jun 23, 2017, at 11:30 AM, Günther Alka  wrote:
> 
> hello Richard
> 
> I can follow that the ZIL does not add more fragmentation to the free space,
> but is this effect relevant if a ZIL pre-allocates, say, 4G while the
> remaining fragmented pool space for regular writes is 12T?
> 
> Gea

___
OmniOS-discuss mailing list
OmniOS-discuss@lists.omniti.com
http://lists.omniti.com/mailman/listinfo/omnios-discuss


Re: [OmniOS-discuss] Fragmentation

2017-06-23 Thread Günther Alka

hello Richard

I can follow that the ZIL does not add more fragmentation to the free space,
but is this effect relevant if a ZIL pre-allocates, say, 4G while the
remaining fragmented pool space for regular writes is 12T?


Gea




Re: [OmniOS-discuss] Fragmentation

2017-06-23 Thread Richard Elling
A slog helps with fragmentation because the space for the ZIL is pre-allocated
based on a prediction of how big the write will be. The pre-allocated space
includes a physical-block-sized chain block for the ZIL. An 8k write can
allocate 12k for the ZIL entry, which is freed when the txg commits. Thus, a
slog can help decrease free-space fragmentation in the pool.
 — richard
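A minimal sketch of that allocation arithmetic. The 4k physical block size is my assumption for illustration, not something stated in the thread; with it, Richard's numbers (12k for an 8k write, 132k presumably for a 128k write) fall out directly.

```shell
# ZIL entry ~= write rounded up to whole physical blocks, plus one
# physical-block-sized chain block (4k block size is an assumption).
blk=4096
for write in 8192 131072; do
  entry=$(( ((write + blk - 1) / blk) * blk + blk ))
  echo "${write} -> ${entry}"
done
# prints: 8192 -> 12288   (12k)
#         131072 -> 135168 (132k)
```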





Re: [OmniOS-discuss] Fragmentation

2017-06-23 Thread Guenther Alka
A ZIL, or better a dedicated slog device, will not help, as it is not a
write cache but a log device. It is only there to commit every written
data block and put it onto stable storage. It is read only after a
crash, to replay a committed write that would otherwise be missing.


All writes, whether sync or not, go through the RAM-based write cache
(by default up to 4GB). This is flushed from time to time as a large
sequential write. Writes are then fragmented depending on the
fragmentation of the free space.


Gea







Re: [OmniOS-discuss] Fragmentation

2017-06-23 Thread Jim Klimov
On June 23, 2017 4:13:52 PM GMT+02:00, Artyom Zhandarovsky  
wrote:
> Is there any way to decrease fragmentation of dr_tank ?
>
>--
>
>zpool list (Sum of RAW disk capacity without redundancy counted)
>
>--
>
>NAME     SIZE   ALLOC  FREE   EXPANDSZ  FRAG  CAP  DEDUP  HEALTH  ALTROOT
>dr_slow  9.06T  77.6M  9.06T  -         0%    0%   1.00x  ONLINE  -
>dr_tank  48.9T  35.1T  13.9T  -         23%   71%  1.00x  ONLINE  -
>rpool    272G   42.1G  230G   -         10%   15%  1.00x  ONLINE  -
>
>
>
>Real Pool capacity from zfs list
>
>--
>
>NAME     USED   AVAIL  MOUNTPOINT  %
>dr_slow  7.69T  1.26T  /dr_slow    14%!
>dr_tank  41.6T  6.33T  /dr_tank    13%!
>rpool    45.6G  218G   /rpool      83%

The issue with ZFS fragmentation is that at some point it becomes hard to find
free spots to write into, as well as to do large writes contiguously, so
performance suddenly and noticeably drops. This can impact reads as well,
especially if atime=on is left as the default.

To recover from existing fragmentation you must free up space: perhaps zfs-send
datasets to another pool, empty as much as you can on this one, and send the
data back, so it lands as large contiguous writes.
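A sketch of that evacuate-and-restore cycle. Pool and dataset names are placeholders, and this assumes a scratch pool with enough space; verify the copy before destroying anything.

```shell
# tank = fragmented pool, scratch = helper pool (both hypothetical names)
zfs snapshot -r tank/data@evacuate
zfs send -R tank/data@evacuate | zfs receive -u scratch/data
zfs destroy -r tank/data                  # frees space on the fragmented pool
zfs snapshot -r scratch/data@restore
zfs send -R scratch/data@restore | zfs receive -u tank/data
```

The rewrite only helps because the restored stream lands in the now-larger contiguous free regions.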

To prevent it, a ZIL caching all writes (including sync ones, e.g. NFS) can
help; perhaps a DDR drive (or a mirror of these) with battery and flash
protection against power-offs, so it does not wear out like flash would. In
this case, however random the incoming writes are, ZFS does not have to put
them on media ASAP, so it can do larger writes later. This can also protect
SSD arrays from excessive small writes and wear-out, though there a badly
sized ZIL can become a bottleneck.

Hope this helps,
Jim
--
Typos courtesy of K-9 Mail on my Redmi Android


Re: [OmniOS-discuss] Fragmentation

2017-06-23 Thread Guenther Alka

Yes, but:
if you increase your pool by adding a new vdev, your current data is not
auto-rebalanced. That only happens over time, as data is newly written or
modified.


If you then want the best performance, you must copy the current data over,
e.g. by renaming a filesystem, replicating it back to the former name, and
then deleting the renamed copy.
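The rename-and-replicate dance might look like this (dataset names are placeholders; only destroy the renamed copy after checking the new one):

```shell
zfs rename tank/data tank/data.old        # free the original name
zfs snapshot -r tank/data.old@rebalance
zfs send -R tank/data.old@rebalance | zfs receive tank/data
zfs destroy -r tank/data.old              # only after verifying tank/data
```

Because the receive is one large sequential write, the copied data spreads across all vdevs, including the new one.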


Gea


On 23.06.2017 at 17:19, Artyom Zhandarovsky wrote:

So basically I just need to add more drives?



Re: [OmniOS-discuss] Fragmentation

2017-06-23 Thread Artyom Zhandarovsky
So basically I just need to add more drives?



Re: [OmniOS-discuss] Fragmentation

2017-06-23 Thread Guenther Alka
The fragmentation info does not describe the fragmentation of the data
on the pool but the fragmentation of the free space. A high fragmentation
value will result in high data fragmentation only when you write or
modify data.


https://utcc.utoronto.ca/~cks/space/blog/solaris/ZFSZpoolFragmentationMeaning
So the best and only way to reduce data fragmentation is not to fill a
pool beyond, say, 70-80%.


You should also know that copy-on-write filesystems, where a complete
data block (e.g. 128k) is written anew even if you only change a "house"
to a "mouse" in a text file, are more vulnerable to fragmentation than
older filesystems. This is the price for crash resiliency: a power
outage during a write cannot lead to a corrupted filesystem, unlike
older filesystems, where the data may be modified in place while the
corresponding metadata update never happens. ZFS compensates for this
with its advanced RAM-based read and write caches. A "defrag tool" is
not available for ZFS.
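To watch the free-space fragmentation Gea describes, the FRAG column of zpool list is the number to check; something like the following should work on OmniOS/OpenZFS (dr_tank is the pool from this thread):

```shell
zpool list -o name,size,allocated,free,fragmentation,capacity dr_tank
# FRAG is free-space fragmentation; CAP is how full the pool is.
# Keeping CAP under ~70-80%, as suggested above, keeps FRAG from climbing.
```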


Gea


On 23.06.2017 at 16:13, Artyom Zhandarovsky wrote:

Is there any way to decrease fragmentation of dr_tank?




Re: [OmniOS-discuss] Fragmentation

2017-06-23 Thread Chris Siebenmann
>  Is there any way to decrease fragmentation of dr_tank ?
> 
> --
> 
> zpool list (Sum of RAW disk capacity without redundancy counted)
> 
> --
> 
> NAME     SIZE   ALLOC  FREE   EXPANDSZ  FRAG  CAP  DEDUP  HEALTH  ALTROOT
> dr_slow  9.06T  77.6M  9.06T  -         0%    0%   1.00x  ONLINE  -
> dr_tank  48.9T  35.1T  13.9T  -         23%   71%  1.00x  ONLINE  -
> rpool    272G   42.1G  230G   -         10%   15%  1.00x  ONLINE  -

 Note that 'FRAG' probably doesn't mean what you expect it to mean,
and you probably don't need to worry about it.

 The ZFS FRAG percentage here is how fragmented *free space* is, not
how fragmented your data is, and the details are arcane. A pool with
low FRAG has most of its free space in large contiguous segments; a
pool with high FRAG has most of the free space broken up into small
pieces. FRAG is essentially a measure of how hard ZFS will have to
work to find space for new data.

 For more details, you can read:

 https://utcc.utoronto.ca/~cks/space/blog/solaris/ZFSZpoolFragmentationMeaning
 https://utcc.utoronto.ca/~cks/space/blog/solaris/ZFSZpoolFragmentationDetails

(These were current as of late 2015, but the details might have changed
slightly since then.)

- cks


[OmniOS-discuss] Fragmentation

2017-06-23 Thread Artyom Zhandarovsky
disk errors: none

-
CAP Alert
-

Is there any way to decrease fragmentation of dr_tank?

--

zpool list (Sum of RAW disk capacity without redundancy counted)

--

NAME     SIZE   ALLOC  FREE   EXPANDSZ  FRAG  CAP  DEDUP  HEALTH  ALTROOT
dr_slow  9.06T  77.6M  9.06T  -         0%    0%   1.00x  ONLINE  -
dr_tank  48.9T  35.1T  13.9T  -         23%   71%  1.00x  ONLINE  -
rpool    272G   42.1G  230G   -         10%   15%  1.00x  ONLINE  -



Real Pool capacity from zfs list

--

NAME     USED   AVAIL  MOUNTPOINT  %
dr_slow  7.69T  1.26T  /dr_slow    14%!
dr_tank  41.6T  6.33T  /dr_tank    13%!
rpool    45.6G  218G   /rpool      83%


Re: [OmniOS-discuss] Losing NFS shares

2017-06-23 Thread Guenther Alka
With ESXi 6.0 I have had NFS problems as well. You should use at least
6.0U2 (or 6.5.0d on the 6.5 line).


Another problem may be timeouts: ZFS will wait longer for a disk than
ESXi will wait for NFS. What you can do is reduce the disk timeout in
/etc/system with set sd:sd_io_time=0xF (= 15s; the default is 60s). Check
the system logs for disk-related problems (/var/adm/messages).
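For reference, a sketch of that /etc/system change, using the values given above; a reboot is required for /etc/system edits to take effect.

```shell
# Shorten the sd driver per-I/O timeout from the 60s default to 15s,
# so a failing disk stalls ZFS for less time than ESXi's NFS timeout.
echo 'set sd:sd_io_time=0xF' >> /etc/system   # 0xF = 15 seconds
# default is 0x3C (60 seconds); reboot to apply
```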


You should also use vmxnet3s for OmniOS, with increased buffer settings
for vmxnet3, TCP, and NFS, for 10G networking. Snaps and replication
themselves are uncritical as long as you do not have disk or network
problems; e.g. MTU 9000 can cause problems. Enough RAM may be another
item: you should give 8GB or more to OmniOS.



Gea
napp-it




Re: [OmniOS-discuss] Losing NFS shares

2017-06-23 Thread Oliver Weinmann
Hi,

We have a hyperconverged setup.

ESXi 6.0 -> OmniOS VM -> storage passthrough. The 10GbE NICs are configured
active/standby. For NFS I use a dedicated /24 VLAN.

I wonder if ZFS replication or snapshotting could be the reason for the
problems we are seeing. The system fails at night. We have several auto-sync
jobs from a Nexenta system to this system; I have now disabled these jobs. I
just wonder where (logs etc.) I can find any information on what is going on
on the system when it fails.

The symptoms are always the same: the VMware NFS shares disappear, so I just
re-share them (zfs set sharenfs=… /hgst4u60/vmware-ds-1). Then the root ZFS
folders of all local drives that I think have snapshots enabled are unmounted,
so I have to unmount them all hard and remount them. The root ZFS folders of
replicated ZFS folders are unmounted as well. This is why I assume there is a
link between the problems and auto-sync or snapshotting.





Oliver Weinmann
Senior Unix VMWare, Storage Engineer

Telespazio VEGA Deutschland GmbH
Europaplatz 5 - 64293 Darmstadt - Germany
Ph: + 49 (0)6151 8257 744 | Fax: +49 (0)6151 8257 799
oliver.weinm...@telespazio-vega.de
http://www.telespazio-vega.de

Registered office/Sitz: Darmstadt, Register court/Registergericht: Darmstadt, 
HRB 89231; Managing Director/Geschäftsführer: Sigmar Keller
From: Lawrence Giam [mailto:paladinemisha...@gmail.com]
Sent: Freitag, 23. Juni 2017 05:00
To: Oliver Weinmann 
Subject: Re: [OmniOS-discuss] Losing NFS shares

Hi Oliver,

What is your network setup in relation to the NFS file server and the VM host?
Is your backbone using 10GbE networking, and is it set up with redundancy
(e.g. MLAG)?

Regards.

On Thu, Jun 22, 2017 at 2:45 PM, Oliver Weinmann
<oliver.weinm...@telespazio-vega.de> wrote:
Hi,

we have been using OmniOS for a few months now and have big trouble with
stability. We mainly use it for VMware NFS datastores. The last three nights
we lost all NFS datastores and VMs stopped running. I noticed that even though
zfs get sharenfs shows folders as shared, they become inaccessible. Setting
sharenfs to off and sharing again solves the issue. I have no clue where to
start; I'm fairly new to OmniOS.

Any help would be highly appreciated.

Thanks and Best Regards,
Oliver




