Re: [OpenIndiana-discuss] [OmniOS-discuss] Shopping for an all-in-one server

2014-06-03 Thread ken mays via openindiana-discuss
I like what Ian said, although 4U is a bit daunting.

You can opt for 1U-2U, with 1-2 processors.

You can have a chassis with 8 drive bays, good for roughly 18TB of storage.

From there you can get solid state HDs and regular drives of your choice, and a
simple controller that can handle them - one that has a driver for it.
Your HD vendor/OEM usually knows the controller/HD compatibility details.

You can also go for a separate NAS storage unit. 

Point is, there are a few options, from custom servers to the cheap ones on eBay
for ~$500 (e.g. a Dell R710 or similar)...

The HP N54L is well supported by the community.

~ Ken Mays
 




On Monday, June 2, 2014 1:56 AM, Ian Collins i...@ianshome.com wrote:
 


Jim Klimov wrote:
 Thus the box we'd build should be good with storage (including responsive 
 read-write NFS) and VM hosting. I am not sure whether OI, OmniOS or ESX(i?) 
 with HBA passthrough onto an illumos-based storage/infrastructure services VM 
 would be a better fit. Also, I was away from shopping for new server gear for 
 a while and its compatibility with illumos in particular, so I'd kindly ask 
 for suggestions for a server like that ;)

SmartOS would be a good fit if you are combining storage with KVM. USB 
booting also saves a couple of drive slots!

 The company's preference is to deal with HP, so while it is not an 
 impenetrable barrier, buying whatever is available under that brand is much 
 simpler for the department. Cost seems a much lesser constraint ;)

HP's bundled RAID controllers can be a problem; make sure you can get 
something with IT firmware, or at least JBOD support.

 I am less certain about HBAs (IT mode, without HW-RAID crap), and the 
 practically recommended redundancy (raidzN? raid10? how many extra disks in 
 modern size ranges are recommended - 3?)

That all depends on the number of drives and the workload.

 Also i am not sure about modern considerations of multiple PCI buses - 
 especially with regard to separation of ssd's onto a separate HBA (or 
 several?) to avoid bottlenecks in performance and/or failures.

SSDs can be SATA and the hard drives SAS.

 Finally, are departmental all-in-one combines following the Thumper ideology 
 of data quickly accessible to applications living on the same host without 
 uncertainties and delays of remote networking still at all 'fashionable'? ;)

They are with Joyent!

 Buying a single purchase initially may be easier to justify than multiple 
 boxes with separate roles, but there are other considerations too. In 
 particular, their corporate network is crappy and slow, so splitting into 
 storage+server nodes would need either direct cabling for data, or new 
 switching gear which i don't know yet if it would be a problem; localhost 
 data transfers are likely to be a lot faster. I am also not convinced about 
 higher reliability of split-head solutions, though for high loads i am eager 
 to believe that separating the tasks can lead to higher performance. I am 
 uncertain if this setup and its tasks would qualify for that; but it might be 
 expanded later on, including role-separation, if a practical need is found 
 after all.

You can easily get all you are after in a 4U all in one.  Keep the 
system simple if you can.

 PS: how do you go about backing up such a thing? Would some N54L's suffice to 
 receive zfs-send's of select datasets? :)

Another, low-spec illumos box with plenty of storage.  Performance won't 
be an issue, so you can use wider raidzN vdevs to boost capacity.

-- 
Ian.



___
openindiana-discuss mailing list
openindiana-discuss@openindiana.org
http://openindiana.org/mailman/listinfo/openindiana-discuss


[OpenIndiana-discuss] Root access to home directories

2014-06-03 Thread david boutcher
The other day my hard disk became completely full due to a home directory with 
some massive files. 

This caused the server to fail to boot properly and only allowed me into 
maintenance mode as root

I was unable to navigate to the home directories as root to delete stuff. How 
could I achieve this?


David



Re: [OpenIndiana-discuss] Root access to home directories

2014-06-03 Thread Jim Klimov
On 3 June 2014 at 19:26:05 CEST, david boutcher davidboutc...@me.com wrote:
The other day my hard disk became completely full due to a home
directory with some massive files. 

This caused the server to fail to boot properly and only allowed me
into maintenance mode as root

I was unable to navigate to the home directories as root to delete
stuff. How could I achieve this?


David


Were you unable to navigate (cd) or delete (rm)?
It seems like you have a single-pool machine whose rpool has overflowed. You likely 
have regular auto-snapshots, so deleting files from live datasets does not really 
free up space - to the extent that zfs refuses to borrow some bits from its 
system-reserved space in order to mark blocks from the deleted files as 
last-referenced by a snapshot. And so 'rm' fails even as root.
Does this guess match? ;)

Kill a snapshot and further deletions should then proceed, although they won't 
free up space until you remove all snapshots that reference the deleted data.
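A rough sketch of that recovery, with a hypothetical snapshot name (check `zfs list` on your own system for the real names):

```shell
# See which snapshots are holding space on the root pool, biggest last
zfs list -t snapshot -o name,used,referenced -s used -r rpool

# Destroy the snapshot(s) referencing the bulk of the deleted data
# (example name only -- use one reported by the list above)
zfs destroy rpool/export/home@zfs-auto-snap_daily-2014-05-01-00h00

# Now 'rm' should succeed, and the space is actually released
rm /export/home/someuser/huge-file
zfs list -o name,avail rpool
```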

Other failures to delete may be due to read-only and/or overlay mounts, attempts to 
delete from a snapshot (directory representation), an immutable file/dir attribute, or 
access over NFS to a host that does not trust you as root and maps you to nobody. 
These are the most likely secondary reasons...

Hth,
Jim
--
Typos courtesy of K-9 Mail on my Samsung Android



Re: [OpenIndiana-discuss] Root access to home directories

2014-06-03 Thread James Carlson
On 06/03/14 13:26, david boutcher wrote:
 The other day my hard disk became completely full due to a home directory 
 with some massive files. 
 
 This caused the server to fail to boot properly and only allowed me into 
 maintenance mode as root
 
 I was unable to navigate to the home directories as root to delete stuff. How 
 could I achieve this?

There's a lot of detail missing here -- error messages, commands used,
and so on -- but 'zfs mount -a' might well be part of the answer to the
specific question that you're asking.

-- 
James Carlson 42.703N 71.076W carls...@workingcode.com



Re: [OpenIndiana-discuss] Root access to home directories

2014-06-03 Thread Jim Klimov
On 3 June 2014 at 19:37:07 CEST, James Carlson carls...@workingcode.com wrote:
On 06/03/14 13:26, david boutcher wrote:
 The other day my hard disk became completely full due to a home
directory with some massive files. 
 
 This caused the server to fail to boot properly and only allowed me
into maintenance mode as root
 
 I was unable to navigate to the home directories as root to delete
stuff. How could I achieve this?

There's a lot of detail missing here -- error messages, commands used,
and so on -- but zfs mount -a might possibly be part of the specific
question that you're asking.

Yes, that is also a likely explanation - without the home datasets mounted (or 
perhaps even without their secondary pool imported) it is hard to navigate into 
them ;)
Still, the zfs and svc errors would be welcome.
An overflowed rpool might fail to update the boot-archive upon reboot, for 
example...
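For reference, a sketch of recovering from a stale boot-archive once some space has been freed (standard illumos tooling, not something the thread itself spells out):

```shell
# After freeing space on rpool, rebuild the boot archive so the next
# reboot does not drop back into maintenance mode
bootadm update-archive

# Verify the root pool now has free space
zpool list rpool
```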

Hth,
//Jim
--
Typos courtesy of K-9 Mail on my Samsung Android



Re: [OpenIndiana-discuss] Root access to home directories

2014-06-03 Thread david boutcher
Thank you. I'm a bit of a novice with Solaris, but from what I understand it 
would make sense to use the mount command.

To explain the symptoms and what led up to the failure...

I have OpenIndiana as an OS replicated across two 500 GB drives.

A ZFS pool for storage, made up of three 1 TB drives and a spare, which holds my 
media and a virtual machine.

VirtualBox is on the OS disk but its VMs are on that separate pool.

Sometimes, VirtualBox catches me out and installs a new/cloned machine onto 
the OS drives.

The day of the failure, everything slowed down. I tried deleting content from 
the media pool via VirtualBox. I thought this was working but it all came to a 
grinding halt.

Then, the screen saver of the host reported an error message and I was eventually 
completely unable to log in. Sadly I don't remember the errors.

Eventually I had to give in to a hard reboot.

Upon reboot it would only go into maintenance mode. The boot process reported 
something about not being able to make a boot-archive copy due to no HD space or 
similar.

Could the deletion of files from the separate media pool have spilled over to 
the operating system causing it to crash?

The deletion of some poorly placed VirtualBox files freed a lot of space when I was 
eventually able to boot into it. I know they shouldn't have been there, but it 
seems odd that it should suddenly run out of room.

I hope that provides a little more detail.  

 On 3 Jun 2014, at 18:41, Jim Klimov jimkli...@cos.ru wrote:
 
 On 3 June 2014 at 19:37:07 CEST, James Carlson carls...@workingcode.com wrote:
 On 06/03/14 13:26, david boutcher wrote:
 The other day my hard disk became completely full due to a home
 directory with some massive files. 
 
 This caused the server to fail to boot properly and only allowed me
 into maintenance mode as root
 
 I was unable to navigate to the home directories as root to delete
 stuff. How could I achieve this?
 
 There's a lot of detail missing here -- error messages, commands used,
 and so on -- but zfs mount -a might possibly be part of the specific
 question that you're asking.
 
 Yes, that is also a likely explanation - without the home datasets mounted 
 (or perhaps even without their secondary pool imported) it is hard to 
 navigate into them ;)
 Still, zfs and svc errors would be welcome.
 An overflown rpool might fail to update the boot-archive upon reboot, for 
 example...
 
 Hth,
 //Jim
 --
 Typos courtesy of K-9 Mail on my Samsung Android
 



[OpenIndiana-discuss] ZFS replacement problem

2014-06-03 Thread Michelle Knight
Hi Folks,

I've got the following...

mich@jaguar:~# cfgadm -al
Ap_Id                Type       Receptacle   Occupant      Condition
sata0/0::dsk/c3t0d0  disk       connected    configured    ok
sata0/1              sata-port  empty        unconfigured  ok
sata0/2::dsk/c3t2d0  disk       connected    configured    ok
sata0/3::dsk/c3t3d0  disk       connected    configured    ok
sata0/4::dsk/c3t4d0  disk       connected    configured    ok
sata0/5::dsk/c3t5d0  disk       connected    configured    ok

mich@jaguar:~# zpool status
  pool: rpool1
 state: ONLINE
  scan: none requested
config:

NAMESTATE READ WRITE CKSUM
rpool1  ONLINE   0 0 0
  c3t5d0s0  ONLINE   0 0 0

errors: No known data errors

  pool: tank
 state: ONLINE
  scan: resilvered 1.70M in 0h0m with 0 errors on Tue Jun  3 18:19:48 2014
config:

NAME  STATE READ WRITE CKSUM
tank  ONLINE   0 0 0
  raidz1-0ONLINE   0 0 0
c3t4d0s1  ONLINE   0 0 0
c3t2d0s1  ONLINE   0 0 0
c3t3d0s1  ONLINE   0 0 0

errors: No known data errors


I try ... 

mich@jaguar:~# zpool replace -f tank c3t4d0  c3t0d0
cannot replace c3t4d0 with c3t0d0: no such device in pool

... which is what I am reading in countless articles on-line to do, but
it isn't working. I don't know why I'm getting this error, or which of
the two it is complaining about, because if I try...

mich@jaguar:~# zpool offline tank c3t4d0
cannot offline c3t4d0: no such device in pool

So I can't get the drive replaced.

Any help gratefully appreciated.

Michelle



Re: [OpenIndiana-discuss] Root access to home directories

2014-06-03 Thread John D Groenveld
In message 3a87db63-f231-4ddc-b97d-1f6a3fc7b...@me.com, david boutcher writes:
Virtual box is on the os disk but it's vms are on that separate pool. 

Sometimes, virtual box catches me out and installs a new/cloned machine onto 
the os drives 

My WAG is that while you have your VBox hard drives on a separate
pool, the VBox clones, snapshots, and various log files
and crash detritus are stored in $HOME/.VirtualBox.

But boot your OI installation media, import your rpool, mount
your filesystems, and start looking at your various filesystems'
space usage.
Also look for any ZFS snapshots under your rpool.
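That procedure might look roughly like this from the installation media (pool and mount-point names are the usual defaults; the home-directory path is an assumption, adjust as needed):

```shell
# Import the root pool under an alternate root so it doesn't clash
# with the live-media environment
zpool import -f -R /a rpool

# Mount the datasets and see where the space went
zfs mount -a
zfs list -o space -r rpool

# Check for snapshots quietly holding space
zfs list -t snapshot -r rpool

# And inspect VirtualBox's config/log directory in the affected home
du -sh /a/export/home/*/.VirtualBox
```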

John
groenv...@acm.org



Re: [OpenIndiana-discuss] ZFS replacement problem

2014-06-03 Thread Andreas Wacknitz
Michelle,

Am 03.06.2014 um 20:39 schrieb Michelle Knight miche...@msknight.com:

 H Folks,
 
 I've got the following...
 
 mich@jaguar:~# cfgadm -al
 Ap_Id                Type       Receptacle   Occupant      Condition
 sata0/0::dsk/c3t0d0  disk       connected    configured    ok
 sata0/1              sata-port  empty        unconfigured  ok
 sata0/2::dsk/c3t2d0  disk       connected    configured    ok
 sata0/3::dsk/c3t3d0  disk       connected    configured    ok
 sata0/4::dsk/c3t4d0  disk       connected    configured    ok
 sata0/5::dsk/c3t5d0  disk       connected    configured    ok
 
 mich@jaguar:~# zpool status
  pool: rpool1
 state: ONLINE
  scan: none requested
 config:
 
NAMESTATE READ WRITE CKSUM
rpool1  ONLINE   0 0 0
  c3t5d0s0  ONLINE   0 0 0
 
 errors: No known data errors
 
  pool: tank
 state: ONLINE
   scan: resilvered 1.70M in 0h0m with 0 errors on Tue Jun  3 18:19:48 2014
  config:
 
NAME  STATE READ WRITE CKSUM
tank  ONLINE   0 0 0
  raidz1-0ONLINE   0 0 0
c3t4d0s1  ONLINE   0 0 0
c3t2d0s1  ONLINE   0 0 0
c3t3d0s1  ONLINE   0 0 0
 
Your zpool „tank“ consists of sliced disks (note the s1 at the end of the 
device names)!
This is uncommon (only the disk that the system boots from needs to have a 
slice, because grub needs it).

 errors: No known data errors
 
 
 I try ... 
 
 mich@jaguar:~# zpool replace -f tank c3t4d0  c3t0d0
 cannot replace c3t4d0 with c3t0d0: no such device in pool
 
 ... which is what I am reading in countless articles on-line to do, but
 it isn't working. I don't know why I'm getting this error, or which of
 the two it is complaining about, because if I try...
 
 mich@jaguar:~# zpool offline tank c3t4d0
 cannot offline c3t4d0: no such device in pool
 
The error message is correct. You have c3t4d0s1 in your zpool, not c3t4d0!
When you created your „tank“ zpool you were using sliced disks; this is 
unneeded but possible…


 So I can't get the drive replaced.
 
I am not that experienced with this, but I guess you could issue
zpool replace tank c3t4d0s1 c3t0d0
(using the whole unsliced disk c3t0d0).

Regards,
Andreas

 Any help gratefully appreciated.
 
 Michelle
 




Re: [OpenIndiana-discuss] ZFS replacement problem

2014-06-03 Thread Tim Mooney

In regard to: [OpenIndiana-discuss] ZFS replacement problem, Michelle...:


 pool: tank
state: ONLINE
 scan: resilvered 1.70M in 0h0m with 0 errors on Tue Jun  3 18:19:48 2014
config:

    NAME          STATE READ WRITE CKSUM
    tank          ONLINE   0 0 0
      raidz1-0    ONLINE   0 0 0
        c3t4d0s1  ONLINE   0 0 0
        c3t2d0s1  ONLINE   0 0 0
        c3t3d0s1  ONLINE   0 0 0

errors: No known data errors


I try ...

mich@jaguar:~# zpool replace -f tank c3t4d0  c3t0d0
cannot replace c3t4d0 with c3t0d0: no such device in pool


Your devices have partitions (aka slices, the s1 at the end), but you're
not using that with the replace.  Try adding s1 to the end of both and
see if that makes a difference.

Note that that means there actually will need to be an s1 on the
replacement drive, which means you probably need to use format.
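One common way to get a matching s1 onto the replacement, assuming the old disk's label is still readable (a standard Solaris trick, not something the thread confirms for this exact setup):

```shell
# Copy the partition table (VTOC) from the old disk to the new one;
# s2 is the conventional whole-disk slice used for labeling
prtvtoc /dev/rdsk/c3t4d0s2 | fmthard -s - /dev/rdsk/c3t0d0s2

# Then replace slice-for-slice
zpool replace tank c3t4d0s1 c3t0d0s1
```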

Tim
--
Tim Mooney tim.moo...@ndsu.edu
Enterprise Computing & Infrastructure  701-231-1076 (Voice)
Room 242-J6, Quentin Burdick Building  701-231-8541 (Fax)
North Dakota State University, Fargo, ND 58105-5164



Re: [OpenIndiana-discuss] ZFS replacement problem

2014-06-03 Thread Michelle Knight
Thanks Andreas,

Good suggestion, however, it didn't work...

mich@jaguar:~# zpool replace -f tank c3t4d0s1  c3t0d0
cannot replace c3t4d0s1 with c3t0d0: no such device in pool

I have also tried exporting and importing the pool, as suggested in
other forums.

Michelle.

On Tue, 3 Jun 2014 21:01:49 +0200
Andreas Wacknitz a.wackn...@gmx.de wrote:

 Michelle,
 
 Am 03.06.2014 um 20:39 schrieb Michelle Knight
 miche...@msknight.com:
 
  H Folks,
  
  I've got the following...
  
  mich@jaguar:~# cfgadm -al
  Ap_Id                Type       Receptacle   Occupant      Condition
  sata0/0::dsk/c3t0d0  disk       connected    configured    ok
  sata0/1              sata-port  empty        unconfigured  ok
  sata0/2::dsk/c3t2d0  disk       connected    configured    ok
  sata0/3::dsk/c3t3d0  disk       connected    configured    ok
  sata0/4::dsk/c3t4d0  disk       connected    configured    ok
  sata0/5::dsk/c3t5d0  disk       connected    configured    ok
  
  mich@jaguar:~# zpool status
   pool: rpool1
  state: ONLINE
   scan: none requested
  config:

      NAME        STATE READ WRITE CKSUM
      rpool1      ONLINE   0 0 0
        c3t5d0s0  ONLINE   0 0 0

  errors: No known data errors

   pool: tank
  state: ONLINE
   scan: resilvered 1.70M in 0h0m with 0 errors on Tue Jun  3 18:19:48 2014
  config:

      NAME          STATE READ WRITE CKSUM
      tank          ONLINE   0 0 0
        raidz1-0    ONLINE   0 0 0
          c3t4d0s1  ONLINE   0 0 0
          c3t2d0s1  ONLINE   0 0 0
          c3t3d0s1  ONLINE   0 0 0
  
 Your zpool „tank“ consists of sliced disks (note the s1 at the end of
 the device names)! This is uncommon (only the disk that the system is
 booted from needs to have a slice because grub needs it).
 
  errors: No known data errors
  
  
  I try ... 
  
  mich@jaguar:~# zpool replace -f tank c3t4d0  c3t0d0
  cannot replace c3t4d0 with c3t0d0: no such device in pool
  
  ... which is what I am reading in countless articles on-line to do,
  but it isn't working. I don't know why I'm getting this error, or
  which of the two it is complaining about, because if I try...
  
  mich@jaguar:~# zpool offline tank c3t4d0
  cannot offline c3t4d0: no such device in pool
  
 The error message is correct. You have c3t4d0s1 in your zpool, not
 c3t4d0! When you created your „tank“ zpool you have been using sliced
 disks, this is unneeded but possible…
 
 
  So I can't get the drive replaced.
  
 I am not that experienced with it but I guess you could issue
 zpool replace tank c3t4d0s1 c3t0d0
 (using the whole unsliced disk c3t0d0).
 
 Regards,
 Andreas
 
  Any help gratefully appreciated.
  
  Michelle
  
 
 



Re: [OpenIndiana-discuss] ZFS replacement problem

2014-06-03 Thread Bryan N Iotti
You created the pool using slices and not whole disks (that's why it's cXtXdXsX 
instead of just cXtXdX).

That's necessary only for the root pool, it's better not to do that for data 
pools.

If that pool is empty, destroy it and rebuild it using whole disks (just the 
cXtXdX part, no sX), then your commands will work as expected.
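If the data can be copied off first, a sketch of that rebuild with whole disks (device names taken from the thread):

```shell
# DANGER: destroys the pool and everything on it -- only after backing up
zpool destroy tank

# Re-create the raidz1 from whole disks; ZFS writes its own (EFI) labels
zpool create tank raidz1 c3t2d0 c3t3d0 c3t4d0
zpool add tank spare c3t0d0

zpool status tank
```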

Bryan

Sent from my BlackBerry 10 smartphone.
  Original Message  
From: Michelle Knight
Sent: Tuesday, June 3, 2014 20:41
To: Discussion list for OpenIndiana
Reply To: Discussion list for OpenIndiana
Subject: [OpenIndiana-discuss] ZFS replacement problem

H Folks,

I've got the following...

mich@jaguar:~# cfgadm -al
Ap_Id                Type       Receptacle   Occupant      Condition
sata0/0::dsk/c3t0d0  disk       connected    configured    ok
sata0/1              sata-port  empty        unconfigured  ok
sata0/2::dsk/c3t2d0  disk       connected    configured    ok
sata0/3::dsk/c3t3d0  disk       connected    configured    ok
sata0/4::dsk/c3t4d0  disk       connected    configured    ok
sata0/5::dsk/c3t5d0  disk       connected    configured    ok

mich@jaguar:~# zpool status
pool: rpool1
state: ONLINE
scan: none requested
config:

NAME STATE READ WRITE CKSUM
rpool1 ONLINE 0 0 0
c3t5d0s0 ONLINE 0 0 0

errors: No known data errors

pool: tank
state: ONLINE
scan: resilvered 1.70M in 0h0m with 0 errors on Tue Jun 3 18:19:48 2014
config:

NAME STATE READ WRITE CKSUM
tank ONLINE 0 0 0
raidz1-0 ONLINE 0 0 0
c3t4d0s1 ONLINE 0 0 0
c3t2d0s1 ONLINE 0 0 0
c3t3d0s1 ONLINE 0 0 0

errors: No known data errors


I try ... 

mich@jaguar:~# zpool replace -f tank c3t4d0 c3t0d0
cannot replace c3t4d0 with c3t0d0: no such device in pool

... which is what I am reading in countless articles on-line to do, but
it isn't working. I don't know why I'm getting this error, or which of
the two it is complaining about, because if I try...

mich@jaguar:~# zpool offline tank c3t4d0
cannot offline c3t4d0: no such device in pool

So I can't get the drive replaced.

Any help gratefully appreciated.

Michelle





Re: [OpenIndiana-discuss] ZFS replacement problem

2014-06-03 Thread Damo
After formatting c3t0d0, run:

 zpool replace tank c3t4d0s1 c3t0d0s1


On Tue, Jun 3, 2014 at 8:05 PM, Michelle Knight miche...@msknight.com
wrote:

 Thanks Andreas,

 Good suggestion, however, it didn't work...

 mich@jaguar:~# zpool replace -f tank c3t4d0s1  c3t0d0
 cannot replace c3t4d0s1 with c3t0d0: no such device in pool

 I have also tried exporting and importing the pool, as suggested in
 other forums.

 Michelle.

 On Tue, 3 Jun 2014 21:01:49 +0200
 Andreas Wacknitz a.wackn...@gmx.de wrote:

  Michelle,
 
  Am 03.06.2014 um 20:39 schrieb Michelle Knight
  miche...@msknight.com:
 
   H Folks,
  
   I've got the following...
  
   mich@jaguar:~# cfgadm -al
   Ap_Id                Type       Receptacle   Occupant      Condition
   sata0/0::dsk/c3t0d0  disk       connected    configured    ok
   sata0/1              sata-port  empty        unconfigured  ok
   sata0/2::dsk/c3t2d0  disk       connected    configured    ok
   sata0/3::dsk/c3t3d0  disk       connected    configured    ok
   sata0/4::dsk/c3t4d0  disk       connected    configured    ok
   sata0/5::dsk/c3t5d0  disk       connected    configured    ok

   mich@jaguar:~# zpool status
    pool: rpool1
   state: ONLINE
    scan: none requested
   config:

       NAME        STATE READ WRITE CKSUM
       rpool1      ONLINE   0 0 0
         c3t5d0s0  ONLINE   0 0 0

   errors: No known data errors

    pool: tank
   state: ONLINE
    scan: resilvered 1.70M in 0h0m with 0 errors on Tue Jun  3 18:19:48 2014
   config:

       NAME          STATE READ WRITE CKSUM
       tank          ONLINE   0 0 0
         raidz1-0    ONLINE   0 0 0
           c3t4d0s1  ONLINE   0 0 0
           c3t2d0s1  ONLINE   0 0 0
           c3t3d0s1  ONLINE   0 0 0
  
  Your zpool „tank“ consists of sliced disks (note the s1 at the end of
  the device names)! This is uncommon (only the disk that the system is
  booted from needs to have a slice because grub needs it).
 
   errors: No known data errors
  
  
   I try ...
  
   mich@jaguar:~# zpool replace -f tank c3t4d0  c3t0d0
   cannot replace c3t4d0 with c3t0d0: no such device in pool
  
   ... which is what I am reading in countless articles on-line to do,
   but it isn't working. I don't know why I'm getting this error, or
   which of the two it is complaining about, because if I try...
  
   mich@jaguar:~# zpool offline tank c3t4d0
   cannot offline c3t4d0: no such device in pool
  
  The error message is correct. You have c3t4d0s1 in your zpool, not
  c3t4d0! When you created your „tank“ zpool you have been using sliced
  disks, this is unneeded but possible…
 
 
   So I can't get the drive replaced.
  
  I am not that experienced with it but I guess you could issue
  zpool replace tank c3t4d0s1 c3t0d0
  (using the whole unsliced disk c3t0d0).
 
  Regards,
  Andreas
 
   Any help gratefully appreciated.
  
   Michelle
  
 
 




Re: [OpenIndiana-discuss] ZFS replacement problem

2014-06-03 Thread Michelle Knight
Hi Tim,

I don't recall creating them with slices; like you say, only the root
pool needs that, which was done because originally it had two SSD units
for root in another machine.

There was a bit of kerfuffle with other versions, but I do believe I
copied everything off, then destroyed the set and re-created it ...

... but given what you're saying, I think I'm better off copying all the
data off again, blasting the pool and re-creating it using whole drives.

Many thanks,

Michelle.

On Tue, 3 Jun 2014 14:06:14 -0500 (CDT)
Tim Mooney tim.moo...@ndsu.edu wrote:

 In regard to: [OpenIndiana-discuss] ZFS replacement problem,
 Michelle...:
 
   pool: tank
  state: ONLINE
   scan: resilvered 1.70M in 0h0m with 0 errors on Tue Jun  3 18:19:48 2014
  config:

      NAME          STATE READ WRITE CKSUM
      tank          ONLINE   0 0 0
        raidz1-0    ONLINE   0 0 0
          c3t4d0s1  ONLINE   0 0 0
          c3t2d0s1  ONLINE   0 0 0
          c3t3d0s1  ONLINE   0 0 0
 
  errors: No known data errors
 
 
  I try ...
 
  mich@jaguar:~# zpool replace -f tank c3t4d0  c3t0d0
  cannot replace c3t4d0 with c3t0d0: no such device in pool
 
 Your devices have partitions (aka slices, the s1 at the end), but
 you're not using that with the replace.  Try adding s1 to the end
 of both and see if that makes a difference.
 
 Note that that means there actually will need to be an s1 on the
 replacement drive, which means you probably need to use format.
 
 Tim
