After no luck with a pair of Syba SD-SATA-4P PCI-X SATA II controllers
(Sil3114 chipset), I've now successfully used a Tekram TR-834A 4-port SATA-II
controller (Sil3124-2 chipset) at the full PCI-X 133MHz bus speed in b50.
Since my disk mirror on the previous SATA controller (built-in W1100
> I believe there is a write limit (commonly around 100,000
> writes) on CF and
> similar storage devices, but I don't know for sure.
> Apart from that
> I think it's a good idea.
>
>
> James C. McPherson
As a consequence, /tmp, /var, and swap could eventually be moved to the ZFS
hard drives to gre
> In short - make sure your UID on the Mac is sufficient to
> access the files on
> nfs (as it would be if you would try to access those
> files locally).
> Or perhaps you tried from user with uid=0 in which
> case it's mapped to
> nobody user by default.
>
> --
> Best regards,
> Robert
Exactly as Robert
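Robert's point can be checked from the shell. A minimal sketch, assuming a Solaris server exporting a filesystem `tank/home` to a Mac client (the pool, filesystem, and hostname are illustrative, not from the thread):

```shell
# On the client: check which UID you actually present to the server.
id -u

# On the server: root (uid 0) from clients is mapped to "nobody" by
# default; grant a specific client root access only if really needed.
zfs set sharenfs='rw,root=macclient' tank/home

# Verify how the filesystem is currently shared.
zfs get sharenfs tank/home
```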
> On 1/18/07, . <[EMAIL PROTECTED]> wrote:
>
> SYBA SD-SATA-4P PCI SATA Controller Card (
> http://www.newegg.com/product/Product.asp?item=N82E16815124020 )
>
>
From my home ZFS server setup, I had tried two Syba SD-SATA2-2E2I PCI-X SATA
>II Controller Cards without any luck; both cards' BI
> Hi..
> After searching hi & low, I cannot find the answer
> for what I want to do (or at least
> understand how to do it). I am hopeful somebody can
> point me in the right direction.
> I have (2) non global zones (samba & www) I want to
> be able to have all user home
> dir's served from zone s
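One common approach for serving the same home directories into multiple non-global zones is a loopback (lofs) mount of a global-zone directory. A sketch, assuming the home directories live under `/tank/home` in the global zone (all names are illustrative):

```shell
# Repeat for each zone (samba, www):
zonecfg -z samba
zonecfg:samba> add fs
zonecfg:samba:fs> set dir=/export/home
zonecfg:samba:fs> set special=/tank/home
zonecfg:samba:fs> set type=lofs
zonecfg:samba:fs> end
zonecfg:samba> commit
```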
> Hi there!
>
> I want to build el cheapo ZFS NFS/Samba server for
> storing user files and NFS mail
> storage.
>
> I'm planning to have one 0.5Tb SATA2 ZFS RAID10 pool
> with several filesystems:
> 1) 200 Gb filesystem with ~300K user files, shared
> with Samba, about 10 clients, very light loa
> What happens is that /home/thomas/zfs gets mounted
> and then the
> automounter starts. (Or /home/thomas is found
> missing and then
> the zfs mount is not completed)
>
> Probably requires legacy mount point.
>
>
> Casper
I'm experiencing thi
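Casper's legacy-mount-point suggestion looks roughly like this in practice; a sketch, assuming the filesystem is `tank/thomas` (the name is illustrative):

```shell
# Stop ZFS from auto-mounting the filesystem at boot.
zfs set mountpoint=legacy tank/thomas

# Then mount it explicitly, after the automounter is settled,
# either by hand or via an /etc/vfstab entry such as:
#   tank/thomas  -  /home/thomas  zfs  -  yes  -
mount -F zfs tank/thomas /home/thomas
```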
> [ Hi Wes from [EMAIL PROTECTED] ]
>
> And the smart-ask answer is:
>
> By Definition: the OpenSolaris/Solaris Express
> feature that is *your*
> "must-have" feature, probably won't be in Update 3!
> :)
>
Exactly, that's why I used quotes, as I'm sure I'd be happy with S10u3, assuming
ignorance
I'm in the process of building a Solaris NFS server with ZFS and was wondering
if any gurus here have any comments as to choosing the upcoming Solaris 10
11/06 [presumably] or OpenSolaris bXX/Solaris Express for this use. Even with
my use of OpenSolaris I maintain a service contract to show my
Thanks Richard, this seems to be exactly what I was looking for.
This message posted from opensolaris.org
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss
Okay, so now that I'm planning to build my NAS using ZFS, I now need to devise
or learn of a preexisting method to receive notification of ZFS handled errors
on a remote machine.
For example, if a disk fails and I don't regularly login or SSH into the ZFS
server, I'd like an email or some oth
> I use the smartmontools smartd daemon to email me
> when disk drives are
> about to fail. If you are interested in configuring
> smartd to send
> email notifications prior to a disk failing, check
> out the following
> blog post:
>
> http://prefetch.net/blog/index.php/2006/01/05/using-smartd-
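For reference, smartd notification is driven by directives in smartd.conf; a sketch (the device path and address are illustrative assumptions, not from the blog post):

```shell
# Example /etc/smartd.conf directives:
#
#   # Monitor overall SMART health and mail on failure:
#   /dev/rdsk/c1t0d0s0 -H -m admin@example.com
#
#   # Send a single test mail at startup to confirm delivery works:
#   /dev/rdsk/c1t0d0s0 -m admin@example.com -M test
```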
Thanks again for your input Gents, I was able to get a W1100z inexpensively
with 1Gb RAM and a 2.4 GHz Opteron...now I'll just have to manufacture my own
drive slide rails since Sun won't sell the darn things [no, I don't want an 80Gb
IDE drive and apple pie with that!] and I'm not paying $100 fo
> Though there isn't a Sun "tower server" that fits
> your description, the Ultra-40
> can hold 4 3.5" drives (80, 250, or 500 GBytes). You
> might actually prefer
> something designed for office use at home, rather
> than something designed for a
> data center.
> http://www.sun.com/desktop/
Thanks gents for your replies. I've used a very large config W2100z and
ZFS for a while but didn't know "how low can you go" for ZFS to shine, though a
64-bit CPU seems to be the minimum performance threshold.
Now that Sun's store is [sort of] working again, I can see some X2100's with
the
I could use the list's help.
My goal: Build a cheap ZFS file server with OpenSolaris on a UFS boot (for now)
10,000 rpm U320 SCSI drive while having a ZFS pool in the same machine. The
ZFS pool will either be a mirror or raidz setup consisting of either two or
three 500Gb 7,200 rpm SATA II driv
>
>
> >Saturating 100Mbit with a 64-bit CPU and redundant
> > disks for £300-400 may be tough.
>
> Anything in the market can saturate 100Mbit easily;
> even with a single
> cheap IDE disk. The disks are generally a factor
> 5-10 faster than the
> 100Mbit network.
>
> Casper
>
Indeed, I
> I was wondering if anyone could recommend hardware
> for a ZFS-based NAS for home use.
>
> setup, so I'm looking at sparc or opteron. Ideally it
> would:
>
> a) run quiet (blade 100/150 is ok, x4100 ain't :) )
Not much space in a Blade 100/150 for multiple disks, but it is quiet and cheap.
Fo
Try the directions in the previously posted link using the Solaris 'format'
command.
> Hi
>
> I had a one-disk pool that I want to use as a boot
> disk, but I can't
> seem to get rid of the EFI label. When I use format
> -e on it and try to
> relabel, format bitches that it can't set disk
> geometry or write the
> new label.
>
> any one have any clues how to fix this?
>
> james
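The usual route back to an SMI (VTOC) label is format's expert mode; a sketch, assuming the disk is c1t0d0 and is no longer part of any pool (device names are illustrative):

```shell
# In expert mode, "label" offers a choice between label types:
format -e c1t0d0
# format> label
#   [0] SMI Label
#   [1] EFI Label
# Specify Label type[1]: 0

# If format still refuses, a blunt fallback is to zero the start of the
# disk, destroying the EFI label (and everything else on the disk).
# Note EFI also keeps a backup label at the end of the disk.
dd if=/dev/zero of=/dev/rdsk/c1t0d0p0 bs=512 count=1024
```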
> Over coffee with a colleague (cc'd) we were talking
> about the problem of
> taking advantage of ZFS over NFS (or CIFS) from a
> non-Solaris machine.
>
> We already have the .zfs/snapshot dir and this is
> great. One of the
> other areas was knowing what the settings on your data set
> are. So ente
> is anyone else seeing this? I couldn't find any
> references to this in
> the bug database.
>
I'm also seeing this behavior on occasion with b36 and b38...from the b36 box...
Sun Microsystems Inc. SunOS 5.11 snv_36 October 2007
-bash-3.00$ iostat 1
***snip***
   tty          sd0
Thank you gentlemen for your quick replies.
The ZFS upgrade process sounds like it'll be a snap since I'll simply use the
native ZFS version 2 on my next install/upgrade and import my data from
the existing backup pools (prior to ZFS v2).
Keep up the great work!
Could anyone confirm that, with the recent additions to ZFS, most notably ZFS
version 2, a ZFS pool created in b37 or older will still be
readable/importable in b38 ZFS version 2 and newer?
If so, is there any serious negative impact on using an existing ZFS pool
or should the older poo
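For what it's worth, importing an older pool and upgrading its on-disk version are separate steps, and the upgrade is explicit and one-way; a sketch (the pool name is illustrative):

```shell
zpool upgrade -v     # list the ZFS versions this build supports
zpool upgrade tank   # explicitly upgrade pool "tank" to the newest version
```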
[u][b]Snapshot Management:[/b][/u]
With all the talk of snapshots as of late, is there an interest for a ZFS
discuss sub-group for Snapshots?
Perhaps this may prevent anything further from getting "lost in the shuffle".
Of interest to this thread, Tim Foster has just created an interesting blog
entry at:
http://blogs.sun.com/roller/page/timf#zfs_automatic_snapshots_prototype_1
> Yes, I did run that command but it was quite a few
> days ago... :( Would it take
> that long to complete? I would never imagine it
> would... Is there any way to
> stop it?
>
> Chris
>
The really odd part, Chris, is that the scrub indicates it's at 35.37% with 1 hour
and 1 minute left to finish
> A couple of questions:
> - Do you think that zfs (and other subsystems) should
> use the ISO 8601 time
> formatting standards?
>
> Regards,
>
> Al Hopper Logical Approach Inc, Plano, TX.
I wish that were the only time standard!
No matter what country you live in, the ISO 8601 time/date sta
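For reference, an ISO 8601 UTC timestamp can be produced with the ordinary strftime-style conversions that /usr/bin/date understands:

```shell
# Print the current time as an ISO 8601 UTC timestamp,
# e.g. 2007-01-18T14:05:09Z
date -u '+%Y-%m-%dT%H:%M:%SZ'
```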
Interesting idea Constantin.
However, perhaps instead of or in addition to your idea, I'd like to have a
mechanism or script that would overwrite the older snapshots [u]only if[/u]
some more current snapshot were created. Ideally this mechanism would prevent
your idea of expired snapshots bein
> I believe RAID-Z in a two-disk configuration is
> almost completely identical (in terms of space and failure resistance)
> to mirroring, but not an optimal implementation of it.
>
> If you want mirroring, you should just use mirror
> vdevs. Any ZFS folk want to chime in?
>
> Cheers,
> - jonatha
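The explicit mirror-vdev form Jonathan suggests is just as simple to create; a sketch (pool and device names are illustrative):

```shell
# A two-way mirror vdev, rather than a two-disk raidz:
zpool create tank mirror c1t0d0 c1t1d0
zpool status tank
```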