How about SPARC - can it do zfs install+root yet, or if not, when?
Just got a couple of nice 1TB SAS drives, and I think I'd prefer to
have a mirrored pool where zfs owns the entire drives, if possible.
(I'd also eventually like to have multiple bootable zfs filesystems in
that pool, corresponding
Richard Elling wrote:
> James C. McPherson wrote:
>> Will Murnane wrote:
>>
>>> On Tue, Jun 3, 2008 at 4:35 PM, Benjamin Ellison <[EMAIL PROTECTED]> wrote:
>>>
>>>> My question: Where/how in the heck does one get a list of which devices
>>>> are valid targets?
>>> Run "format"
I'm pretty sure that this bug is fixed in Solaris 10U5, patch 127127-11 and
127128-11 (note: 6462690 sd driver should set SYNC_NV bit when issuing
SYNCHRONIZE CACHE to SBC-2 devices). However, a test system with new 6140
arrays still seems to be suffering from lots of cache flushes. This is veri
On Jun 3, 2008, at 18:34, Paulo Soeiro wrote:
> This test was done without the hub:
FWIW, I bought 9 microSDs and 9 USB controller units for them from
NewEgg to replicate the famous ZFS demo video, and I had problems
getting them working with OpenSolaris (on VMware on OS X, in this case).
Af
This test was done without the hub:
On Tue, Jun 3, 2008 at 11:33 PM, Paulo Soeiro <[EMAIL PROTECTED]> wrote:
> Did the same test again and here is the result:
>
> 1)
>
> zpool create myPool mirror c6t0d0p0 c7t0d0p0
>
> 2)
>
> -bash-3.2# zfs create myPool/myfs
>
> -bash-3.2# zpool status
>
> pool:
Did the same test again and here is the result:
1)
zpool create myPool mirror c6t0d0p0 c7t0d0p0
2)
-bash-3.2# zfs create myPool/myfs
-bash-3.2# zpool status
 pool: myPool
 state: ONLINE
 scrub: none requested
config:

        NAME          STATE     READ WRITE CKSUM
        myPool        ONLINE       0     0     0
          mirror      ONLINE       0     0     0
            c6t0d0p0  ONLINE       0     0     0
            c7t0d0p0  ONLINE       0     0     0
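(If you're stress-testing a mirror like this, a scrub is a quick sanity check
that both halves still agree; 'myPool' is the pool created above:)

   -bash-3.2# zpool scrub myPool
   -bash-3.2# zpool status myPool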
James C. McPherson wrote:
> Will Murnane wrote:
>
>> On Tue, Jun 3, 2008 at 4:35 PM, Benjamin Ellison <[EMAIL PROTECTED]> wrote:
>>
>>> My question: Where/how in the heck does one get a list of which devices
>>> are valid targets?
>>>
>> Run "format" and it'll list the devices that
>Try running iostat in another ssh window; you'll see it can't even gather
>stats every 5 seconds (below is iostat every 5 seconds):
>Tue May 27 09:26:41 2008
>Tue May 27 09:26:57 2008
>Tue May 27 09:27:34 2008
That should not happen!
I'd call that a bug!
How does vmstat behave with lzjb compr
your "Type" "sata-port" will change to "disk" when you put
a disk on it. like:
1 % cfgadm
Ap_Id Type Receptacle Occupant Condition
sata0/0::dsk/c2t0d0disk connectedconfigured ok
sata0/1::dsk/c2t1d0cd/dvd connected
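(When a port shows a device but the Occupant column stays "unconfigured", it
can usually be attached by hand; the port name here is only an example:)

   # cfgadm -c configure sata0/1
   # devfsadm -c disk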
On Jun 3, 2008, at 16:50, Benjamin Ellison wrote:
> "cfgadm" shows the following:
> Ap_Id    Type       Receptacle  Occupant      Condition
> sata0/0  sata-port  empty       unconfigured  ok
> sata0/1  sata-port  empty       unconfigured  ok
> sata0/2  sata-port  empt
Will Murnane wrote:
> On Tue, Jun 3, 2008 at 4:35 PM, Benjamin Ellison <[EMAIL PROTECTED]> wrote:
>> My question: Where/how in the heck does one get a list of which devices are
>> valid targets?
> Run "format" and it'll list the devices that are available. If you
> hot-plug a drive, you may need
Thanks for the reply (Will's too - although I see it hasn't posted to the
thread).
"format" gets me a list of just my system drive.
"devfsadm -c disk" just spins for a second, but doesn't change anything found
under format.
"cfgadm" shows the following:
Ap_Id Type Receptacle
On Tue, Jun 3, 2008 at 4:35 PM, Benjamin Ellison <[EMAIL PROTECTED]> wrote:
> My question: Where/how in the heck does one get a list of which devices are
> valid targets?
Run "format" and it'll list the devices that are available. If you
hot-plug a drive, you may need to run "devfsadm -c disk" f
On Jun 3, 2008, at 16:35, Benjamin Ellison wrote:
> I've got 4 SATA drives in a hot-swap backplane hooked to my
> motherboard... where do I look or what command should I use to
> see what I should put after the "zpool create [poolname] raidz" bit?
See if 'cfgadm' gives you a good list. You
Sorry for the newbie-sounding post (well, I *am* a zfs newb), but I've googled
for a couple of hours now and have yet to find a clear or specific answer...
The zfs documentation indicates that it is very easy to make storage pools,
using commands such as:
zpool create tank raidz c1t0d0 c2t0d0 c3t0d0
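(A non-interactive way to get candidate device names and then build the pool;
the device names below are illustrative only:)

   # echo | format          # prints AVAILABLE DISK SELECTIONS, then exits
   # zpool create tank raidz c1t0d0 c2t0d0 c3t0d0
   # zpool status tank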
| On Nevada, use the 'cachefile' property. On S10 releases, use '-R /'
| when creating/importing the pool.
The drawback of '-R /' appears to be that it requires forcing the
import after a system reboot *all* the time (unless you explicitly
export the pool during reboot).
- cks
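(A sketch of both approaches, with an illustrative pool name and device;
'cachefile' and '-R' are the zpool options being discussed above:)

   # Nevada: keep the pool out of /etc/zfs/zpool.cache so it is not auto-imported
   zpool create -o cachefile=none sanpool c2t0d0
   # S10: an alternate root likewise keeps the pool out of the cache file
   zpool create -R / sanpool c2t0d0
   zpool import -R / sanpool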
From what I can understand (prtconf is gibberish to me :)), the other sys
checks seem to recognize everything else, so it seems to be all good with this
board.
Thank you for confirming this!
PS: Not sure if this is OK, but I came across a deal I thought I should pass on..
a 1 TB Samsung drive f
On Jun 3, 2008, at 11:16 AM, Chris Siebenmann wrote:
> Is there any way to set ZFS on a system so that it will not
> automatically import all of the ZFS pools it had active when it was
> last
> running?
>
> The problem with automatic importation is preventing disasters in a
> failover situation
On Nevada, use the 'cachefile' property. On S10 releases, use '-R /'
when creating/importing the pool.
- Eric
On Tue, Jun 03, 2008 at 02:16:03PM -0400, Chris Siebenmann wrote:
> Is there any way to set ZFS on a system so that it will not
> automatically import all of the ZFS pools it had active
Is there any way to set ZFS on a system so that it will not
automatically import all of the ZFS pools it had active when it was last
running?
The problem with automatic importation is preventing disasters in a
failover situation. Assume that you have a SAN environment with the same
disks visible
On Tue, Jun 3, 2008 at 9:38 AM, Rich Teer <[EMAIL PROTECTED]> wrote:
> Hmm, why's that? Also, is there an ETA for when ZFS root will be
> offered in the GUI installer? (I recently installed Build 89 and
> was surprised by the lack of ZFS pools after installation using
> the GUI installer.)
There
Nathan Galvin wrote:
> When on leased equipment and previously using VxVM we were able to migrate
> even a lowly UFS filesystem from one storage array to another storage array
> via the evacuate process. I guess this makes us only the 3rd customer
> waiting for this feature.
>
UFS cannot b
On Tue, 3 Jun 2008, Mark J Musante wrote:
> On Tue, 3 Jun 2008, Gordon Ross wrote:
>
> > I'd really like to know: What are the conditions under which the
> > installer will offer ZFS root?
>
> Only the text-based installer will offer it - not the GUI.
Hmm, why's that? Also, is there an ETA fo
When on leased equipment and previously using VxVM, we were able to migrate even
a lowly UFS filesystem from one storage array to another storage array via the
evacuate process. I guess this makes us only the 3rd customer waiting for this
feature.
It would be interesting to ask other users of
Hi Darryl!
> VAB: Did you have any issues with this board, or was everything detected? I
> believe I read in the HCL that the sound card was not detected...
I did not have any issues. Here's what prtconf says about the sound:
prtconf -vpc /dev/sound/0
pci1043,8249, instance #0
System so
As part of testing for our planned iSCSI + ZFS NFS server environment,
I wanted to see what would happen if I imported a ZFS pool on two
machines at once (as might happen someday in, for example, a failover
scenario gone horribly wrong).
What I expected was something between a pool with damage a
On Tue, 3 Jun 2008, Gordon Ross wrote:
> I'd really like to know: What are the conditions under which the
> installer will offer ZFS root?
Only the text-based installer will offer it - not the GUI.
Regards,
markm
I'm trying a new install, creating a Solaris partition in free space
(p3) which follows Windows XP (p1) and Ubuntu Linux (p2).
I want ZFS root, but the "interactive" installer (choice 1) on the
snv_90 DVD does not offer me ZFS root. Anyone know why?
Or how to work around the issue?
When I did a ne
No, weird situation. I unplugged the disks from the controller (I have them
labeled) before upgrading to snv89. After the upgrade, the controller names
changed.
Hi,
I can't seem to delete a file in my zpool that has permanent errors:
zpool status -vx
pool: rpool
state: ONLINE
status: One or more devices has experienced an error resulting in data
corruption. Applications may be affected.
action: Restore the file in question if possible. Otherwise restore the
entire pool from backup.
I am trying to get a commitment to get this fixed: if you have a server with a
whole bunch of SAN-attached disks and then use the internals for some sort of
temp space, if one of those el cheapo SAS disks dies it takes down the whole
lot, which is not good. This problem is enough to prevent a roll out of
Darryl wrote:
> This thread really messed me up, posts don't follow a chronological order...
> so sorry for all the extra posts!
That's what you get when you don't use working tools like usenet news.
nntp for ever!!!
> Timely discussion. I too am trying to build a stable yet inexpensive storage
> server for my home lab
[...]
> Other options are that I build a whitebox or buy a new PowerEdge or Sun
> X2200 etc
If this is really just a lab storage server then an X2100M2 will be
enough. Just get the minimum spe