Hello!
Before installing OpenSolaris I tried Nexenta and installed it in a ZFS mirror pool,
syspool, with two HDDs: c3d0s0 and c6d1s0.
After one month I came back to OpenSolaris, but this syspool holds
data that I want to keep. I inserted a new HDD at c3d0s0 and installed this simple
configuration:
Nico Sabbi wrote:
> On Friday 19 December 2008 03:32:01 Ian Collins wrote:
>
>> On Fri 19/12/08 14:52 , Shawn Joy shawn@sun.com sent:
>>
>>> I have read the ZFS best practice guide located at
>>> http://www.solarisinternals.com/wiki/index.php/ZFS_Best_Practices_Guide
>>> However I have
Iman,
Sure, just select both disks during the install, like the screen below.
If you don't see all the disks on the system during the initial install,
then either there is an underlying configuration problem or you just
need to scroll down to see all the disks.
Cindy
Select Disks
On thi
Ross wrote:
> Well, I really like the idea of an automatic service to manage send/receives
> to backup devices, so if you guys don't mind, I'm going to share some other
> ideas for features I think would be useful.
>
cool.
> One of the first is that you need some kind of capacity management
Iman,
Yes, you can do either of the following:
o Select two disks for creating a mirrored root pool during an initial
installation
o Attach a second disk after the initial installation, like this:
# zpool attach rpool old-disk new-disk
In the attach disk scenario, you will also need to add the
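The attach step described above can be sketched like this. The device names
are hypothetical examples, and the boot-block step is the usual follow-up for
a Solaris 10 ZFS root mirror (installboot on SPARC, installgrub on x86):

```shell
# Attach a second disk to the existing root pool to form a mirror
# (c0t0d0s0 / c0t1d0s0 are hypothetical device names)
zpool attach rpool c0t0d0s0 c0t1d0s0

# Wait for the resilver to finish before relying on the new side
zpool status rpool

# SPARC: install the ZFS boot block on the newly attached disk
installboot -F zfs /usr/platform/`uname -i`/lib/fs/zfs/bootblk /dev/rdsk/c0t1d0s0

# x86: install GRUB on the new disk instead
# installgrub /boot/grub/stage1 /boot/grub/stage2 /dev/rdsk/c0t1d0s0
```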
Dear All
Thanks for your guide.
I don't remember whether the installation process, after selecting the ZFS file
system, let me select additional disks or not.
Can I select another c0txdy for creating a mirror, or must it be done after
installation?
Regards
iman
On Fri, Dec 19, 2008 at 9:11 AM, Tomas Ögre
Hello All
Can I add a second hard disk to build a mirrored boot disk after installation?
Or is it better to reinstall the Solaris operating system and set up mirroring
during installation?
Regards
On Fri, Dec 19, 2008 at 8:11 AM, iman habibi wrote:
> Hello All
> Im new in solaris 10 zfs structure.my machine is
On 19 December, 2008 - Nico Sabbi sent me these 0,8K bytes:
> On Friday 19 December 2008 03:32:01 Ian Collins wrote:
> > On Fri 19/12/08 14:52 , Shawn Joy shawn@sun.com sent:
> > > I have read the ZFS best practice guide located at
> > > http://www.solarisinternals.com/wiki/index.php/ZFS_Best_
On 19 December, 2008 - Andrew Gabriel sent me these 1,7K bytes:
> Just to add -- this is a boot disk restriction.
> Solaris 10 supports RAIDZ just fine, but not as a boot disk. Boot disks
> can only be mirrored. So you could use a couple of disks for a pair of
> mirrored boot disks, and then cre
On Friday 19 December 2008 03:32:01 Ian Collins wrote:
> On Fri 19/12/08 14:52 , Shawn Joy shawn@sun.com sent:
> > I have read the ZFS best practice guide located at
> > http://www.solarisinternals.com/wiki/index.php/ZFS_Best_Practices_Guide
> > However I have questions whether we support using
Just to add -- this is a boot disk restriction.
Solaris 10 supports RAIDZ just fine, but not as a boot disk. Boot disks
can only be mirrored. So you could use a couple of disks for a pair of
mirrored boot disks, and then create a new RAIDZ zpool from the
remaining 3 disks to use for data.
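A sketch of the layout described above, assuming a five-disk machine with
hypothetical device names:

```shell
# Two disks mirrored for boot: selected together during the install
# (or attached to rpool afterwards), e.g. c0t0d0s0 + c0t1d0s0

# Remaining three disks as a RAIDZ data pool
zpool create tank raidz c0t2d0 c0t3d0 c0t4d0

# Verify the resulting layout
zpool status tank
```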
The best you can do right now is mirroring. During the install, choose
more than one hard drive and zfs will create a mirror configuration.
Support for raidz and/or striping is for a future project.
On Fri, 19 Dec 2008, iman habibi wrote:
> Hello All
> Im new in solaris 10 zfs structure.my m
Hello All
I'm new to the Solaris 10 ZFS structure. My machine is an UltraSPARC server with 5
SCSI hard disks.
Data protection for my Solaris operating system is important to me, so I
want all of my disks to participate in one pool in a RAIDZ configuration.
In the Solaris 10 installation process, when it shows the z
Mark Wiederspahn wrote:
> I'm exporting/importing a zpool from a sun 4200 running Solaris 10 10/08
> s10x_u6wos_07b X86
> to a t2000 running Solaris 10 10/08 s10s_u6wos_07b SPARC. Neither one is yet
> patched,
> but I didn't see anything obvious on sunsolve for recent updates.
>
> The filesystem
The pool used to consist of z6d0, c7d0, c4t0d0, and c4t1d0. Now I get the
following when doing "zpool import"
j...@opensolaris:~# zpool import
pool: tank
id: 12465835398523411309
state: DEGRADED
status: The pool was last accessed by another system.
action: The pool can be imported despite
I'm exporting/importing a zpool from a sun 4200 running Solaris 10 10/08
s10x_u6wos_07b X86
to a t2000 running Solaris 10 10/08 s10s_u6wos_07b SPARC. Neither one is yet
patched,
but I didn't see anything obvious on sunsolve for recent updates.
The filesystem contains symbolic links. I made a cop
Hi
My computer reboots when I try to import my degraded pool.
I'm running the OpenSolaris 2008.11 RC1b LiveCD and I have (had) a raidz pool
with four drives. I formatted one of the disks, so the pool is now in a degraded
state.
When I do "zpool import -f tank" the computer just reboots without
Hello all,
I'm getting many OpenSolaris kernel panics while sending/receiving data. I tried
creating another pool and another host to test, and got the same error. The
send side can be any server (I tested with four different servers, all build
89).
The panic message:
--- cut here ---
> There are times when a cluster might try a simple
> import, and there
> are times when a cluster will force the import. We
> do know that if
> two nodes have simultaneously imported the pool then
> chances are
> very good that the pool will be corrupted. So it
> seems prudent for
> the cluster