Re: [zfs-discuss] permanent error in 'metadata'

2010-05-16 Thread Germano Caronni
Looks like it does. Mount / unmount and then scrub again made the error 'go 
away'.


Re: [zfs-discuss] Moved disks to new controller - cannot import pool even after moving back

2010-05-16 Thread Haudy Kazemi
I don't really have an explanation.  Perhaps flaky second controller 
hardware that only works sometimes and can corrupt pools?  Have you seen 
any other strangeness/instability on this computer? 

Did you use zpool export before moving the disks to the second controller the
first time, or did you just move them without exporting?


If you zero-wipe the disks that made up this test pool with dd and then
recreate the test pool, does it behave the same way the second time?
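
To be concrete about what I mean, something like this (a rough sketch only -- the
pool and disk names are taken from your test, so double-check them against your
own system before wiping anything):

zpool destroy vault2                              # release the disks from the test pool
dd if=/dev/zero of=/dev/rdsk/c10d1p0 bs=1024k     # destructive: zero each disk; a full
dd if=/dev/zero of=/dev/rdsk/c11d0p0 bs=1024k     # pass also clears the ZFS labels kept
dd if=/dev/zero of=/dev/rdsk/c12d0p0 bs=1024k     # at the end of the disk
dd if=/dev/zero of=/dev/rdsk/c12d1p0 bs=1024k
zpool create vault2 raidz c10d1 c11d0 c12d0 c12d1 # recreate the test pool
zpool export vault2                               # export cleanly before powering off to move the disks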




Jan Hellevik wrote:

Ok - this is really strange. I did a test. Wiped my second pool (4 disks like
the other pool), and used them to create a pool similar to the one I have
problems with.

Then I powered off, moved the disks and powered on. Same error message as
before. Moved the disks back to the original controller. Pool is OK. Moved the
disks to the new controller. At first it is exactly like my original problem,
but when I did a second zpool import, the pool is imported OK.

zpool status reports the same as before. I ran the same commands as I did the
first time:
zpool status
zpool import
zpool export
format
cfgadm
zpool status
zpool import ---> now it imports the pool!

How can this be? The only difference (as far as I can tell) is that the
cache/log is on a 2.5" Samsung disk instead of a 2.5" OCZ SSD.


Re: [zfs-discuss] Opteron 6100? Does it work with opensolaris?

2010-05-16 Thread Brandon High
On Sun, May 16, 2010 at 5:08 PM, Thomas Burgess  wrote:
> well, i haven't had a lot of time to work with this...but i'm having trouble
> getting the onboard sata to work in anything but NATIVE IDE mode.

Have you tried going straight from the motherboard to a drive too?
Take as many pieces out of the mix as you can to see what helps. Is the backplane
just a backplane, or a SAS expander?

Norco has a "Discrete SATA to SFF-8087 Mini SAS Reverse breakout
cable" listed on the 4220 page, so you've probably got the right
thing.

-B

-- 
Brandon High : bh...@freaks.com


Re: [zfs-discuss] Opteron 6100? Does it work with opensolaris?

2010-05-16 Thread Ian Collins

On 05/17/10 12:08 PM, Thomas Burgess wrote:
Well, I haven't had a lot of time to work with this... but I'm having
trouble getting the onboard SATA to work in anything but native IDE mode.

I'm not sure exactly what the problem is... I'm wondering if I bought
the wrong cable (I have a Norco 4220 case, so the drives connect via a
SAS SFF-8087 connector on the backplane).

I thought this required a "reverse breakout cable" but maybe I was
wrong... this is the first time I've worked with SAS.

On the other hand, I was able to flash my Intel SASUC8I cards
with the LSI SAS3081E IT firmware from the LSI site.  These seem to
work fine.  I think I'm just going to order a 3rd card and put it in
the PCIe x4 slot.  I don't want 16 drives running as SATA and 4
running in IDE mode.  Is there any way I can tell if the drive I
installed OpenSolaris to is in IDE or SATA mode?



Does it show up in cfgadm?
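
For example, something along these lines (just a sketch; the exact output depends
on which drivers are bound on your system):

cfgadm -al          # drives on the AHCI/SATA framework show up as sataN/M
                    # attachment points; a disk running in legacy IDE mode under
                    # the ata driver will not appear as a sata attachment point
prtconf -D | grep -i ahci    # quick check whether the ahci driver is attached at all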

--
Ian.



Re: [zfs-discuss] Opteron 6100? Does it work with opensolaris?

2010-05-16 Thread Thomas Burgess
Well, I haven't had a lot of time to work with this... but I'm having trouble
getting the onboard SATA to work in anything but native IDE mode.

I'm not sure exactly what the problem is... I'm wondering if I bought the
wrong cable (I have a Norco 4220 case, so the drives connect via a SAS
SFF-8087 connector on the backplane).

I thought this required a "reverse breakout cable" but maybe I was
wrong... this is the first time I've worked with SAS.

On the other hand, I was able to flash my Intel SASUC8I cards with the
LSI SAS3081E IT firmware from the LSI site.  These seem to work fine.  I
think I'm just going to order a 3rd card and put it in the PCIe x4 slot.  I
don't want 16 drives running as SATA and 4 running in IDE mode.  Is there
any way I can tell if the drive I installed OpenSolaris to is in IDE or SATA
mode?



On Thu, May 13, 2010 at 4:43 AM, Orvar Korvar <
knatte_fnatte_tja...@yahoo.com> wrote:

> Great! Please report here so we can read about your impressions.


[zfs-discuss] can you recover a pool if you lose the zil (b134+)

2010-05-16 Thread Geoff Nordli
I was messing around with a ramdisk as a log device on a pool and I forgot to
remove it before I shut down the server.  Now I am not able to import the pool.
I am not concerned with the data in this pool, but I would like to try to figure
out how to recover it.

I am running Nexenta 3.0 NCP (b134+).

I have tried a couple of commands (zpool import -f and zpool import -FX
llift):

r...@zfs1:/export/home/gnordli# zpool import -f
  pool: llift
id: 15946357767934802606
 state: UNAVAIL
status: One or more devices are missing from the system.
action: The pool cannot be imported. Attach the missing
devices and try again.
   see: http://www.sun.com/msg/ZFS-8000-6X
config:

llift        UNAVAIL  missing device
  mirror-0   ONLINE
    c4t8d0   ONLINE
    c4t9d0   ONLINE
  mirror-1   ONLINE
    c4t10d0  ONLINE
    c4t11d0  ONLINE

Additional devices are known to be part of this pool, though their
exact configuration cannot be determined.


r...@zfs1:/export/home/gnordli# zpool import -FX llift
cannot import 'llift': no such pool or dataset
Destroy and re-create the pool from
a backup source.



I do not have a copy of the "zpool.cache" file.
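
For what it is worth, I can also dump the ZFS labels from the disks that are still
present; zdb -l prints the pool configuration as seen from that disk, which may
help confirm what the pool thinks is missing (the s0 slice here assumes the pool
was built on whole disks -- adjust if not):

zdb -l /dev/dsk/c4t8d0s0     # labels from one side of mirror-0
zdb -l /dev/dsk/c4t10d0s0    # labels from one side of mirror-1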

Any other commands I could try to recover it, or is it just unrecoverable?

Thanks,

Geoff 





Re: [zfs-discuss] Is the J4200 SAS array suitable for Sun Cluster?

2010-05-16 Thread Charles Hedrick
We use this configuration. It works fine. However, I don't know enough about the
details to answer all of your questions.

The disks are accessible from both systems at the same time. Of course with ZFS 
you had better not actually use them from both systems.

Actually, let me be clear about what we do. We have two J4200s and one J4400.
One J4200 uses SAS disks, the others SATA. The two with SATA disks are used in
Sun Cluster configurations as NFS servers. They fail over just fine, losing no
state. The one with SAS is not used with Sun Cluster. Rather, it's a MySQL
server with two systems, one of them acting as a hot spare. (It also acts as a
MySQL slave server, but it uses different storage for that.) That means that our
actual failover experience is with the SATA configuration. I will say from
experience that in the SAS configuration both systems see the disks at the same
time. I even managed to get ZFS to mount the same pool from both systems, which
shouldn't be possible. Behavior was very strange until we realized what was
going on.

I get the impression that they have special hardware in the SATA version that 
simulates SAS dual interface drives. That's what lets you use SATA drives in a 
two-node configuration. There's also some additional software setup for that 
configuration.

Note, however, that they do not support SSDs in the J4000. That means that a Sun
Cluster configuration is going to have slow write performance in any
application that uses synchronous writes (e.g., an NFS server). The recommended
approach is to put the ZIL on an SSD. But in Sun Cluster it would have to be an
SSD that's shared between the two systems, or you'd lose the contents of the ZIL
when you do a failover. Since you can't put an SSD in the J4200, it's not clear
how you'd set that up.

Personally, I consider this a very serious disadvantage of the J4000 series. I
kind of wish we had gotten a higher-end storage system with some non-volatile
cache. Of course, when we got the hardware, Sun claimed they were going to
support SSDs in it.


Re: [zfs-discuss] Moved disks to new controller - cannot import pool even after moving back

2010-05-16 Thread Jan Hellevik
Ok - this is really strange. I did a test. Wiped my second pool (4 disks like
the other pool), and used them to create a pool similar to the one I have
problems with.

Then I powered off, moved the disks and powered on. Same error message as
before. Moved the disks back to the original controller. Pool is OK. Moved the
disks to the new controller. At first it is exactly like my original problem,
but when I did a second zpool import, the pool is imported OK.

zpool status reports the same as before. I ran the same commands as I did the
first time:
zpool status
zpool import
zpool export
format
cfgadm
zpool status
zpool import ---> now it imports the pool!

How can this be? The only difference (as far as I can tell) is that the
cache/log is on a 2.5" Samsung disk instead of a 2.5" OCZ SSD.

Details follow (it is long - sorry):

Also note below - I did a zpool destroy mpool before poweroff - when I powered
on and did a zpool status it showed the pool as UNAVAIL. It should not be there
at all, if I understand correctly?

- create the partitions for log and cache

             Total disk size is 30401 cylinders
             Cylinder size is 16065 (512 byte) blocks

                                                    Cylinders
      Partition   Status    Type          Start   End   Length    %
      =========   ======    ============  =====   ===   ======   ===
          1                 Solaris2          1    608      608    2
          2                 Solaris2        609   3040     2432    8

format> quit
j...@opensolaris:~# zpool destroy mpool
j...@opensolaris:~# poweroff

Last login: Sun May 16 17:07:15 2010 from macpro.janhelle
Sun Microsystems Inc.   SunOS 5.11  snv_134 February 2010
j...@opensolaris:~$ pfexec bash
j...@opensolaris:~# format
Searching for disks...done

AVAILABLE DISK SELECTIONS:
   0. c8d0 
  /p...@0,0/pci-...@14,1/i...@0/c...@0,0
   1. c10d0 
  /p...@0,0/pci-...@11/i...@0/c...@0,0
   2. c10d1 
  /p...@0,0/pci-...@11/i...@0/c...@1,0
   3. c11d0 
  /p...@0,0/pci-...@11/i...@1/c...@0,0
   4. c12d0 
  /p...@0,0/pci-...@14,1/i...@1/c...@0,0
   5. c12d1 
  /p...@0,0/pci-...@14,1/i...@1/c...@1,0
Specify disk (enter its number): ^C
j...@opensolaris:~# zpool create vault2 raidz c10d1 c11d0 c12d0 c12d1
j...@opensolaris:~# zpool status

-- this pool is the one I destroyed - why is it here now?

  pool: mpool
 state: UNAVAIL
status: One or more devices could not be opened.  There are insufficient
replicas for the pool to continue functioning.
action: Attach the missing device and online it using 'zpool online'.
   see: http://www.sun.com/msg/ZFS-8000-3C
 scrub: none requested
config:

NAME         STATE     READ WRITE CKSUM
mpool        UNAVAIL      0     0     0  insufficient replicas
  mirror-0   UNAVAIL      0     0     0  insufficient replicas
    c13t2d0  UNAVAIL      0     0     0  cannot open
    c13t0d0  UNAVAIL      0     0     0  cannot open
  mirror-1   UNAVAIL      0     0     0  insufficient replicas
    c13t3d0  UNAVAIL      0     0     0  cannot open
    c13t1d0  UNAVAIL      0     0     0  cannot open

  pool: rpool
 state: ONLINE
 scrub: none requested
config:

NAME        STATE     READ WRITE CKSUM
rpool       ONLINE       0     0     0
  c8d0s0    ONLINE       0     0     0

errors: No known data errors

  pool: vault2
 state: ONLINE
 scrub: none requested
config:

NAME        STATE     READ WRITE CKSUM
vault2      ONLINE       0     0     0
  raidz1-0  ONLINE       0     0     0
    c10d1   ONLINE       0     0     0
    c11d0   ONLINE       0     0     0
    c12d0   ONLINE       0     0     0
    c12d1   ONLINE       0     0     0

errors: No known data errors
j...@opensolaris:~# zpool destroy mpool
cannot open 'mpool': I/O error
j...@opensolaris:~# zpool status -x
all pools are healthy
j...@opensolaris:~# 
j...@opensolaris:~# 
j...@opensolaris:~# zpool status 


-- and now the pool has vanished

  pool: rpool
 state: ONLINE
 scrub: none requested
config:

NAME        STATE     READ WRITE CKSUM
rpool       ONLINE       0     0     0
  c8d0s0    ONLINE       0     0     0

errors: No known data errors

  pool: vault2
 state: ONLINE
 scrub: none requested
config:

NAME        STATE     READ WRITE CKSUM
vault2      ONLINE       0     0     0
  raidz1-0  ONLINE       0     0     0
    c10d1   ONLINE       0     0     0
    c11d0   ONLINE       0     0     0
    c12d0   ONLINE       0     0     0
    c12d1   ONLINE       0     0     0

errors: No known data errors
j...@opensolaris:~# 

dmesg
These messages appear 4 times:
May 16 20:36:19 opensolaris fmd: [ID 377184 daemon.error] SUNW-MSG-ID: 
ZFS-8000-D3, TYPE: Fault, VER: 1, SEVERITY: Major
May 16 20:36:19 opensolaris EVENT-T

Re: [zfs-discuss] dedup status

2010-05-16 Thread Erik Trimble

Roy Sigurd Karlsbakk wrote:

- "Haudy Kazemi"  skrev:


In this file system, 2.75 million blocks are allocated. The in-core size
of a DDT entry is approximately 250 bytes.  So the math is pretty simple:
in-core size = 2.63M * 250 = 657.5 MB

If your dedup ratio is 1.0, then this number will scale linearly with size.
If the dedup rate > 1.0, then this number will not scale linearly, it will be
less. So you can use the linear scale as a worst-case approximation.


How large was this filesystem?

Are there any good ways of planning memory or SSDs for this?

roy
If you mean figuring out how big memory should be BEFORE you write any
data, you need to guesstimate the average block size for the files you
are storing in the zpool, which is highly data-dependent.  In general,
consider that zfs will write a file of size X using a block size of Y,
where Y is a power of 2 and the smallest such value with X <= Y, up
to a maximum of Y=128k (larger files are split into 128k blocks).  So,
look at your (potential) data, and consider how big the files are.


DDT requirements for RAM/L2ARC would be:  250 bytes * # blocks


So, let's say I'm considering a 1TB pool, where I think I'm going to be 
storing 200GB worth of MP3s, 200GB of source code, 200GB of misc Office 
docs, 200GB of various JPEG image files from my 8 megapixel camera.  
(don't want more than 80% full!)


Assumed block sizes & thus number of blocks for:
   Data          Block Size   # Blocks per 200GB
   MP3           128k         ~1.6 million
   Source Code   1k           ~200 million
   Office docs   32k          ~6.5 million
   Pictures      4k           ~52 million

Thus, total number of blocks you'll need = ~260 million

DDT tables size = 260 million * 250 bytes = 65GB
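
Spelled out, each line above is just (bytes stored / assumed block size) * 250 bytes
per DDT entry.  A quick shell sketch of the arithmetic for the source-code line,
using the same assumed numbers (swap in your own data mix):

bytes=$(( 200 * 1024 * 1024 * 1024 ))   # 200GB of source code
blocks=$(( bytes / 1024 ))              # assumed 1k average block size
echo "DDT bytes: $(( blocks * 250 ))"   # ~50GB for this category alone
# repeat per category and sum; the mix above lands around 65GB total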


Note that the source code takes up 20% of the space, but requires 80% of 
the DDT entries.



Given that the above is the worst case for that file mix (actual
dedup/compression will lower the total block count), I would use it as
the maximum L2ARC size you want.

RAM sizing depends on the size of your *active* working set of
files; I'd want enough RAM to cache all my writes and my most
commonly-read files at once.




--
Erik Trimble
Java System Support
Mailstop:  usca22-123
Phone:  x17195
Santa Clara, CA
Timezone: US/Pacific (GMT-0800)



Re: [zfs-discuss] dedup status

2010-05-16 Thread Roy Sigurd Karlsbakk
- "Haudy Kazemi"  skrev:

> In this file system, 2.75 million blocks are allocated. The in-core size
> of a DDT entry is approximately 250 bytes.  So the math is pretty simple:
>   in-core size = 2.63M * 250 = 657.5 MB
> 
> If your dedup ratio is 1.0, then this number will scale linearly with size.
> If the dedup rate > 1.0, then this number will not scale linearly, it will be
> less. So you can use the linear scale as a worst-case approximation.

How large was this filesystem?

Are there any good ways of planning memory or SSDs for this?

roy
--
Roy Sigurd Karlsbakk
(+47) 97542685
r...@karlsbakk.net
http://blogg.karlsbakk.net/
--
In all pedagogy it is essential that the curriculum be presented intelligibly. It
is an elementary imperative for all pedagogues to avoid excessive use of idioms
of foreign origin. In most cases, adequate and relevant synonyms exist in Norwegian.


Re: [zfs-discuss] dedup status

2010-05-16 Thread Haudy Kazemi

Erik Trimble wrote:

Roy Sigurd Karlsbakk wrote:

Hi all

I've been doing a lot of testing with dedup and concluded it's not
really ready for production. If something fails, it can render the
pool unusable for hours or maybe days, perhaps due to single-threaded
stuff in zfs. There is also very little data available in the docs
(beyond what I've got from this list) on how much memory one
should have for deduping an xTiB dataset.
  
I think it was Richard a month or so ago who had a good post about
how much space a Dedup Table entry would be (it was in some
discussion where I asked about it).  I can't remember what it was (a
hundred bytes?) per DDT entry, but one has to remember that each entry
is for a slab, which can vary in size (512 bytes to 128k).  So,
there's no good generic formula for X bytes in RAM per Y TB of space.
You can compute a rough guess if you know what kind of data and the
general usage pattern is for the pool (basically, you need to take a
stab at how big you think the average slab size is).  Also, remember
that if you have a /very/ good dedup ratio, then you will have a
smaller DDT for a given size-X pool than a pool with a poor dedup ratio.
Unfortunately, there's no magic bullet, though if you can dig up
Richard's post, you should be able to take a guess and not be off by
more than 2x or so.
Also, remember you only need to hold the DDT in L2ARC, not in actual
RAM, so buy that SSD, young man!


As far as failures, well, I can't speak to that specifically. Though,
do realize that not having sufficient L2ARC/RAM to hold the DDT does
mean that you spend an awful lot of time reading pool metadata,
which really hurts performance (not to mention that it can cripple
deletes of any sort...)


Here's Richard Elling's post in the "dedup and memory/l2arc 
requirements" thread where he presents a worst case DDT size upper bound:

http://mail.opensolaris.org/pipermail/zfs-discuss/2010-April/039516.html

--start of copy--

You can estimate the amount of disk space needed for the deduplication table

and the expected deduplication ratio by using "zdb -S poolname" on your existing
pool.  Be patient, for an existing pool with lots of objects, this can take 
some time to run.

# ptime zdb -S zwimming
Simulated DDT histogram:

bucket              allocated                       referenced
______   ______________________________   ______________________________
refcnt   blocks   LSIZE   PSIZE   DSIZE   blocks   LSIZE   PSIZE   DSIZE
------   ------   -----   -----   -----   ------   -----   -----   -----
     1    2.27M    239G    188G    194G    2.27M    239G    188G    194G
     2     327K   34.3G   27.8G   28.1G     698K   73.3G   59.2G   59.9G
     4    30.1K   2.91G   2.10G   2.11G     152K   14.9G   10.6G   10.6G
     8    7.73K    691M    529M    529M    74.5K   6.25G   4.79G   4.80G
    16      673   43.7M   25.8M   25.9M    13.1K    822M    492M    494M
    32      197   12.3M   7.02M   7.03M    7.66K    480M    269M    270M
    64       47   1.27M    626K    626K    3.86K    103M   51.2M   51.2M
   128       22    908K    250K    251K    3.71K    150M   40.3M   40.3M
   256        7    302K     48K   53.7K    2.27K   88.6M   17.3M   19.5M
   512        4    131K   7.50K   7.75K    2.74K    102M   5.62M   5.79M
    2K        1      2K      2K      2K    3.23K   6.47M   6.47M   6.47M
    8K        1    128K      5K      5K    13.9K   1.74G   69.5M   69.5M
 Total    2.63M    277G    218G    225G    3.22M    337G    263G    270G

dedup = 1.20, compress = 1.28, copies = 1.03, dedup * compress / copies = 1.50


real 8:02.391932786
user 1:24.231855093
sys        15.193256108

In this file system, 2.75 million blocks are allocated. The in-core size
of a DDT entry is approximately 250 bytes.  So the math is pretty simple:
in-core size = 2.63M * 250 = 657.5 MB

If your dedup ratio is 1.0, then this number will scale linearly with size.
If the dedup rate > 1.0, then this number will not scale linearly, it will be
less. So you can use the linear scale as a worst-case approximation.
-- richard

--end of copy--




Re: [zfs-discuss] Moved disks to new controller - cannot import pool even after moving back

2010-05-16 Thread Jan Hellevik
I am making a second backup of my other pool - then I'll use those disks and
recreate the problem pool. The only difference will be the SSD - I only have one
of those. I'll use a disk in the same slot, so it will be close.

The backup will be finished in 2 hours' time.


Re: [zfs-discuss] Unable to Destroy One Particular Snapshot

2010-05-16 Thread John J Balestrini
Finally, it's been destroyed! 

Last night I turned dedup off, sent the destroy command, and simply let it
run overnight. I also let zpool iostat 30 run. It showed no activity
for the first 6-1/2 hours and then a flurry of activity for 13 minutes. That
snapshot is finally gone and the system seems to be behaving now.

Thanks for all your help!

John 




On May 16, 2010, at 2:46 AM, Roy Sigurd Karlsbakk wrote:

> - "Roy Sigurd Karlsbakk"  skrev:
> 
>> - "John Balestrini"  skrev:
>> 
>>> Yep. Dedup is on. A zpool list shows a 1.50x dedup ratio. I was
>>> imagining that the large ratio was tied to that particular snapshot.
>>> 
>>> basie@/root# zpool list pool1
>>> NAME    SIZE  ALLOC   FREE    CAP  DEDUP  HEALTH  ALTROOT
>>> pool1  2.72T  1.55T  1.17T    57%  1.50x  ONLINE  -
>>> 
>>> So, it is possible to turn dedup off? More importantly, what happens
>>> when I try?
>> 
>> # zfs set dedup=off pool1
>> 
>> This will not dedup your data, though, unless you do something like
>> copying all the files again, since dedup is done on write. Some seems
>> to be fixed in 135, and it was said here on the list that all known
>> bugs should be fixed before the next release (see my thread 'dedup
>> status')
> 
> I meant, this will not de-dedup the deduped data...
> 
> roy
> --
> Roy Sigurd Karlsbakk
> (+47) 97542685
> r...@karlsbakk.net
> http://blogg.karlsbakk.net/
> --
> In all pedagogy it is essential that the curriculum be presented intelligibly. It
> is an elementary imperative for all pedagogues to avoid excessive use of idioms
> of foreign origin. In most cases, adequate and relevant synonyms exist in Norwegian.



Re: [zfs-discuss] Unable to Destroy One Particular Snapshot

2010-05-16 Thread Roy Sigurd Karlsbakk
- "Roy Sigurd Karlsbakk"  skrev:

> - "John Balestrini"  skrev:
> 
> > Yep. Dedup is on. A zpool list shows a 1.50x dedup ratio. I was
> > imagining that the large ratio was tied to that particular snapshot.
> >
> > basie@/root# zpool list pool1
> > NAME    SIZE  ALLOC   FREE    CAP  DEDUP  HEALTH  ALTROOT
> > pool1  2.72T  1.55T  1.17T    57%  1.50x  ONLINE  -
> >
> > So, it is possible to turn dedup off? More importantly, what happens
> > when I try?
> 
> # zfs set dedup=off pool1
> 
> This will not dedup your data, though, unless you do something like
> copying all the files again, since dedup is done on write. Some seems
> to be fixed in 135, and it was said here on the list that all known
> bugs should be fixed before the next release (see my thread 'dedup
> status')

I meant, this will not de-dedup the deduped data...

roy
--
Roy Sigurd Karlsbakk
(+47) 97542685
r...@karlsbakk.net
http://blogg.karlsbakk.net/
--
In all pedagogy it is essential that the curriculum be presented intelligibly. It
is an elementary imperative for all pedagogues to avoid excessive use of idioms
of foreign origin. In most cases, adequate and relevant synonyms exist in Norwegian.


Re: [zfs-discuss] Unable to Destroy One Particular Snapshot

2010-05-16 Thread Roy Sigurd Karlsbakk
- "John Balestrini"  skrev:

> Yep. Dedup is on. A zpool list shows a 1.50x dedup ratio. I was
> imagining that the large ratio was tied to that particular snapshot.
> 
> basie@/root# zpool list pool1
> NAME    SIZE  ALLOC   FREE    CAP  DEDUP  HEALTH  ALTROOT
> pool1  2.72T  1.55T  1.17T    57%  1.50x  ONLINE  -
> 
> So, it is possible to turn dedup off? More importantly, what happens
> when I try?

# zfs set dedup=off pool1

This will not dedup your data, though, unless you do something like copying all
the files again, since dedup is done on write. Some of this seems to be fixed in
b135, and it was said here on the list that all known bugs should be fixed before
the next release (see my thread 'dedup status').
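
If you do want the already-written blocks rewritten without dedup after turning
it off, rewriting them into a fresh dataset is one way to do it. A rough sketch
only -- the dataset names here are just examples:

zfs set dedup=off pool1
zfs snapshot pool1/data@rewrite
zfs send pool1/data@rewrite | zfs receive pool1/data_new   # blocks are rewritten,
                                                           # now with dedup off
# verify pool1/data_new, then drop the old dataset and rename the new one into place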

roy
--
Roy Sigurd Karlsbakk
(+47) 97542685
r...@karlsbakk.net
http://blogg.karlsbakk.net/
--
In all pedagogy it is essential that the curriculum be presented intelligibly. It
is an elementary imperative for all pedagogues to avoid excessive use of idioms
of foreign origin. In most cases, adequate and relevant synonyms exist in Norwegian.


Re: [zfs-discuss] dedup status

2010-05-16 Thread Roy Sigurd Karlsbakk
- "Erik Trimble"  skrev:

> Roy Sigurd Karlsbakk wrote:
> > Hi all
> >
> > I've been doing a lot of testing with dedup and concluded it's not
> really ready for production. If something fails, it can render the
> pool unusable for hours or maybe days, perhaps due to single-threaded
> stuff in zfs. There is also very little data available in the docs
> (beyond what I've got from this list) on how much memory one
> should have for deduping an xTiB dataset.
> >   
> I think it was Richard a month or so ago who had a good post about
> how much space a Dedup Table entry would be (it was in some
> discussion where I asked about it).  I can't remember what it was (a
> hundred bytes?) per DDT entry, but one had to remember that each entry

150 bytes per block IIRC, but still, it'd be nice to have this in the official 
ZFS docs. Let's hope this is added soon

roy
--
Roy Sigurd Karlsbakk
(+47) 97542685
r...@karlsbakk.net
http://blogg.karlsbakk.net/
--
In all pedagogy it is essential that the curriculum be presented intelligibly. It
is an elementary imperative for all pedagogues to avoid excessive use of idioms
of foreign origin. In most cases, adequate and relevant synonyms exist in Norwegian.


Re: [zfs-discuss] dedup status

2010-05-16 Thread Markus Kovero
Hi, it's getting better - I believe it's no longer single-threaded after b135
(http://bugs.opensolaris.org/bugdatabase/view_bug.do?bug_id=6922161),
but we are still waiting for a major bug fix:
http://bugs.opensolaris.org/bugdatabase/view_bug.do?bug_id=6924824

It should be fixed before the release, afaik.

Yours
Markus Kovero