Justin,
More than likely c3d1 contains a label from an old pool. If you run 'zdb
-l /dev/dsk/c3d1s0' you should be able to tell if one exists. If it does contain
information for an old pool, then using the '-f' option when attaching
will solve your problem and relabel the device for the new pool.
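For example (c3d1s0 is from your mail; the pool and existing-device names
below are just placeholders):
# zdb -l /dev/dsk/c3d1s0
# zpool attach -f <pool> <existing-device> c3d1s0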
Ram,
Would it be possible to get ssh access to the box? If so, then I might
be able to help to figure out why you can't import.
Thanks,
George
On 2/7/13 7:49 AM, Ram Chander wrote:
The drives are in Coraid storage, connected to the server via a Coraid 10G HBA card.
It's exported and imported on the
Jim,
I had done some testing when I put together my blog post on booting a 4K
sector rpool and didn't encounter any issues. Here's the post if you would like
to do some more digging:
http://blog.delphix.com/gwilson/2012/11/15/4k-sectors-and-zfs/
Thanks,
George
On 2/3/13 7:55 AM, Jim Klimov wrote:
Matt,
You should do something like this:
1). zpool create newpool newdisk
2). zfs get -o name,value -p volsize oldpool/oldvolume
3). zfs get -o name,value -p volblocksize oldpool/oldvolume
4). zfs create -V <value from step #2> -b <value from step #3>
newpool/newvolume
5). dd the contents of the old zvol over to the new one (see the sketch below)
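A concrete version of the same steps, with illustrative names (newpool, c5t0d0,
oldpool/oldvolume) and example values for volsize (10737418240) and volblocksize
(8192) as read from the 'zfs get' output:
# zpool create newpool c5t0d0
# zfs get -o name,value -p volsize oldpool/oldvolume
# zfs get -o name,value -p volblocksize oldpool/oldvolume
# zfs create -V 10737418240 -b 8192 newpool/newvolume
# dd if=/dev/zvol/rdsk/oldpool/oldvolume of=/dev/zvol/rdsk/newpool/newvolume bs=1024k
Since dd copies the raw blocks, make sure nothing is writing to the old zvol while
the copy runs.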
Matt,
Are you able to create a pool on another disk? If so, then you could try
to create a new zvol on the new pool and copy over the contents from the
old zvol using 'dd'. Then you could try to share out the LU from the new
pool.
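If the old LU was a plain zvol-backed COMSTAR LU, re-creating it on the new pool
would look roughly like this (the GUID comes from the sbdadm output, and any
host-group/target-group views would need to match your existing setup):
# sbdadm create-lu /dev/zvol/rdsk/newpool/newvolume
# stmfadm add-view <guid-from-sbdadm-output>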
- George
On 11/8/12 7:21 AM, m...@focusedmobile.com wrote:
Comments inline...
On 10/23/12 8:29 AM, Robin Axelsson wrote:
Hi,
I've been using zfs for a while but still there are some questions
that have remained unanswered even after reading the documentation so
I thought I would ask them here.
I have learned that zfs datasets can be expanded by
It looks like this disk has a Solaris VTOC on it so ZFS will only use
the partition size that was manually created (or came shipped with the
drive).
Can you try running 'zpool create pool diskname_without_sliceinfo'?
Example: zpool create test c4t0d0
- George
On 9/28/12 7:27 AM, Rainer
and tried most everything. :-(
Rainer
On 9/28/2012 5:40 AM, George Wilson wrote:
It looks like this disk has a Solaris VTOC on it so ZFS will only use
the partition size that was manually created (or came shipped with
the drive).
Can you try running 'zpool create pool diskname_without_sliceinfo
Ray,
It looks like it's trying to allocate a block that is larger than
what ZFS supports. You may want to use the 32-bit version of zdb
(/usr/sbin/i86/zdb) so that the corefile will display all the arguments.
Then can you post the stack trace by running 'mdb core' and at the mdb
prompt
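A minimal sketch of pulling the stack out of the core file, assuming it is
named 'core':
# mdb core
> ::status
> $C
::status shows why the process dumped core, and $C prints the stack trace along
with the frame pointers and arguments.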
George Wilson wrote:
Ray,
It looks like it's trying to allocate a block that is larger than
what ZFS supports. You may want to use the 32-bit version of zdb
(/usr/sbin/i86/zdb) so that the corefile will display all the
arguments. Then can you post the stack trace by running 'mdb core
Yes it will. The only way to do this is to create a secondary pool and
send/receive your root pool to the new pool.
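Roughly like this (pool and device names are illustrative; the new pool also needs
boot blocks installed and the bootfs property set before it will actually boot):
# zpool create newpool c6t0d0s0
# zfs snapshot -r rpool@migrate
# zfs send -R rpool@migrate | zfs receive -Fdu newpool
# zpool set bootfs=newpool/ROOT/<your-BE> newpool
# installgrub /boot/grub/stage1 /boot/grub/stage2 /dev/rdsk/c6t0d0s0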
- George
On Jun 11, 2012, at 7:16 PM, Rich wrote:
Won't zpool replace fail b/c the new disks require ashift=12 and his
existing pool devices have ashift=9?
- Rich
On Mon,
On Jun 12, 2012, at 12:58 AM, Rich wrote:
On Tue, Jun 12, 2012 at 12:50 AM, Richard Elling
richard.ell...@richardelling.com wrote:
On Jun 11, 2012, at 6:08 PM, Bob Friesenhahn wrote:
On Mon, 11 Jun 2012, Jim Klimov wrote:
ashift=12 (2^12 = 4096). For disks which do not lie, it
works
On Jun 12, 2012, at 11:00 AM, Bob Friesenhahn wrote:
On Tue, 12 Jun 2012, George Wilson wrote:
Illumos has a way to override the physical block size of a given disk by
using the sd.conf file. Here's an example:
sd-config-list =
    "DGC     RAID", "physical-block-size:4096",
NETAPP
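For reference, the entries in /kernel/drv/sd.conf are quoted vendor/product pairs
(the vendor ID is space-padded to 8 characters). A generic sketch with a made-up
vendor/product string:
sd-config-list =
    "VENDOR  PRODUCT", "physical-block-size:4096";
On reasonably current illumos you should be able to reload the file with
'update_drv -vf sd'; otherwise a reboot picks it up.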
On Apr 29, 2012, at 1:28 PM, Roy Sigurd Karlsbakk wrote:
Also, I posted a bug report for it here
https://www.illumos.org/issues/2663
Thanks :-). We can now track the progress of the OI-specific
discussion about this issue.
Seems the old post about the initial patch is here
On Apr 29, 2012, at 2:53 PM, Gary Mills wrote:
On Sun, Apr 29, 2012 at 02:45:05PM -0400, George Wilson wrote:
On Apr 29, 2012, at 1:28 PM, Roy Sigurd Karlsbakk wrote:
Also, I posted a bug report for it here
https://www.illumos.org/issues/2663
Seems the old post about the initial patch
Take a look at 'fmdump -eV' as it should give you more information about
each of the checksum errors.
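If the error log is busy, the output can usually be narrowed down to just the
ZFS checksum ereports, something like:
# fmdump -eV -c ereport.fs.zfs.checksum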
- George
On Feb 19, 2012, at 3:23 PM, Richard Lowe wrote:
Vague recollection that the pool-level count is errors that weren't
recovered, so possibly two of them on the device were ditto'd
Tommy,
If you get this again, can you generate a crash dump? The easiest way would be
to do the following from the console:
# mdb -K
$<systemdump
I'm afraid if you try to do a 'reboot -d' that it will hang too.
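If the dump does get written, savecore should pick it up on the next boot;
'dumpadm' shows the configured dump device and savecore directory (typically
/var/crash/<hostname>):
# dumpadm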
- George
On Nov 20, 2011, at 1:14 PM, Tommy Eriksen wrote:
Hi,
Just saw this
', it listed two
'tank' pools, one dead and one good - I specified the good ID and the pool
came back and is fine. There must be something else going on, no?
-Original Message-
From: George Wilson [mailto:george.wil...@delphix.com]
Sent: Monday, October 24, 2011 9:27 AM
To: Discussion list
Dan,
I suspect that the problem is that your original pool was built using a couple
of p0 devices. Can you do a 'zdb -l /dev/rdsk/c0t50014EE204411A53d0p0' and send
that output?
Thanks,
George
On Oct 24, 2011, at 9:51 AM, Dan Swartzendruber wrote:
To be (maybe) clearer: these disks were in
re-attach it to your
mirror.
- George
On Oct 24, 2011, at 10:26 AM, Dan Swartzendruber wrote:
George Wilson wrote:
Dan,
I suspect that the problem is that your original pool was built using a
couple of p0 devices. Can you do a 'zdb -l
/dev/rdsk/c0t50014EE204411A53d0p0' and send that output
Dan,
Actually you'll need to 'dd' the end of the disk since it's labels 2 and 3 that
are still visible to the system. I would start by dd-ing over the last megabyte or
so of the p0 device.
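Something along these lines should do it; the sector count below is just an
example, so take the real size in 512-byte sectors from 'prtvtoc' or 'format'
first, and double-check the device name before writing zeros to it:
# dd if=/dev/zero of=/dev/rdsk/c0t50014EE204411A53d0p0 bs=512 \
    seek=3907027120 count=2048
(3907027120 here is the example disk size of 3907029168 sectors minus 2048.)
Afterwards 'zdb -l /dev/rdsk/c0t50014EE204411A53d0p0' should no longer show the
old labels 2 and 3.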
Good luck,
George
On Oct 24, 2011, at 11:42 AM, Dan Swartzendruber wrote:
George Wilson wrote:
Since
Bryan,
I believe you're hitting a kernel fragmentation issue which causes the system
to hang waiting for memory. I have a fix which may help you. I'll be submitting
this for review in the next day or two.
Thanks,
George
On Oct 21, 2011, at 9:06 AM, Bryan S. Leaman wrote:
On 10/21/2011 4:16
If you encounter this problem again can you run the following command and
supply the data:
# echo "::walk spa | ::print spa_t spa_name spa_suspended" | mdb -k
Thanks,
George
On Oct 21, 2011, at 4:16 AM, Tommy Eriksen wrote:
Hi guys,
I've got a bit of a ZFS problem:
All of a sudden, and it
It would be good to get a crash dump of this so that we can figure out what is
really happening.
- George
On Oct 21, 2011, at 12:38 PM, Michael Stapleton wrote:
Hi,
I had similar hard lockups when I accidentally tried to delete a ZFS
Volume while doing a ZFS send at the same time.
There