Re: [zfs-discuss] Download illumos build image

2011-12-20 Thread Alexander Lesle
Hi,

and here is the list address disc...@lists.illumos.org for your
question.

On December 20, 2011, 21:27 Tomas Forsman wrote in [1]:

 On 20 December, 2011 - Henry Lau sent me these 1,6K bytes:

 Hi
  
 When I run the following script from my illumos build 151,
  
 ./usr/src/tools/scripts/onu -t nightly -d ./packages/i386/nightly
  
 It also downloads nvidia, java ... How can I avoid downloading these packages
 so that the image can be copied to the system faster?

 You should probably ask on some forum/list that's related to illumos..
 This is about the ZFS filesystem..

 /Tomas

-- 
Best Regards
Alexander
December 20, 2011

[1] mid:20111220202716.ga11...@suiko.acc.umu.se


___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] Kernel panic on zpool import. 200G of data inaccessible!

2011-08-15 Thread Alexander Lesle
Hello Stu Whitefish and List,

On August 15, 2011, 21:17 Stu Whitefish wrote in [1]:

 7. cannot import old rpool (c0t2d0s0 c0t3d0s0), any attempt causes a
 kernel panic, even when booted from different OS versions

 Right. I have tried OpenIndiana 151 and Solaris 11 Express (latest
 from Oracle) several times each as well as 2 new installs of Update 8.

If I understand you right, your primary interest is to recover the
data on your tank pool.

Have you considered booting from a Live-DVD, mounting your safe place,
and copying the data to another machine?

-- 
Best Regards
Alexander
August 15, 2011

[1] mid:1313435871.14520.yahoomail...@web121919.mail.ne1.yahoo.com




Re: [zfs-discuss] Large scale performance query

2011-08-07 Thread Alexander Lesle
Hello Bob Friesenhahn and List,

On August 6, 2011, 20:41 Bob Friesenhahn wrote in [1]:

 I think that this depends on the type of hardware you have, how much
 new data is written over a period of time, the typical I/O load on the
 server (i.e. does scrubbing impact usability?), and how critical the 
 data is to you.  Even power consumption and air conditioning can be a 
 factor since scrubbing is an intensive operation which will increase 
 power consumption.

Thx Bob for answering.

The hardware is a Supermicro board, a Xeon, 16 GB registered RAM, an LSI 9211-8i HBA,
and 6 x Hitachi 2TB Deskstar 5K3000 HDS5C3020ALA632.
The server stands in the basement at 32°C.
The HDs are filled to 80% and the workload is mostly reads.

What's best? Scrubbing every week, every second week, or once a month?

-- 
Best Regards
Alexander
August 7, 2011

[1] mid:alpine.gso.2.01.1108061337570.1...@freddy.simplesystems.org




Re: [zfs-discuss] Large scale performance query

2011-08-07 Thread Alexander Lesle
Hello Roy Sigurd Karlsbakk and List,

On August 7, 2011, 19:27 Roy Sigurd Karlsbakk wrote in [1]:

 Generally, you can't scrub too often. If you have a set of striped
 mirrors, the scrub shouldn't take too long. The extra stress on the
 drives during scrub shouldn't matter much, drives are made to be
 used. By the way, 32˚C is a bit high for most servers. Could you
 check the drive temperature with smartctl or ipmi tools?

Thx Roy for answering.
The temperatures are between 27°C and 31°C, checked with smartctl.
At the moment I scrub every Sunday.
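A weekly scrub is easy to automate; a root crontab entry along these lines would do it (pool name taken from my setup, the schedule is just an example):

```shell
# Run a scrub of archepool every Sunday at 02:00 (root's crontab).
# Pool name and schedule are examples; adjust to taste.
0 2 * * 0 /usr/sbin/zpool scrub archepool
```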
-- 
Best Regards
Alexander
August 8, 2011

[1] mid:8906420.12.1312738051044.JavaMail.root@zimbra




Re: [zfs-discuss] Large scale performance query

2011-08-06 Thread Alexander Lesle
Hello Bob Friesenhahn and List,

On August 6, 2011, 18:34 Bob Friesenhahn wrote in [1]:

 Those using mirrors or raidz1 are best advised to perform periodic
 scrubs.  This helps avoid future media read errors and also helps 
 flush out failing hardware.

And what is your suggestion for scrubbing a mirror pool?
Once per month, every two weeks, or every week?

 Regardless, it is true that two hard failures can take out your whole 
 pool.

In RAIDZ1, but not in a mirror?

-- 
Best Regards
Alexander
August 6, 2011

[1] mid:alpine.gso.2.01.1108061131170.1...@freddy.simplesystems.org




Re: [zfs-discuss] Large scale performance query

2011-08-06 Thread Alexander Lesle
Hello Rob Cohen and List,

On August 6, 2011, 17:32 Rob Cohen wrote in [1]:

 In this case, RAIDZ is at least 8x slower to resilver (assuming CPU
 and writing happen in parallel).  In the mean time, performance for
 the array is severely degraded for RAIDZ, but not for mirrors.

 Aside from resilvering, for many workloads, I have seen over 10x
 (!) better performance from mirrors.

Horrible.
My little pool needs more than 8 hours for a scrub with no workload.
The pool has 6 Hitachi 2 TB drives:

# zpool status archepool
pool: archepool
state: ONLINE
scan: scrub repaired 0 in 8h14m with 0 errors on Sun Jul 31 19:14:47 2011
config:

NAME   STATE READ WRITE CKSUM
archepool  ONLINE   0 0 0
  mirror-0 ONLINE   0 0 0
c1t50024E9003CE0317d0  ONLINE   0 0 0
c1t50024E9003CF7685d0  ONLINE   0 0 0
  mirror-1 ONLINE   0 0 0
c1t50024E9003CE031Bd0  ONLINE   0 0 0
c1t50024E9003CE0368d0  ONLINE   0 0 0
  mirror-2 ONLINE   0 0 0
c1t5000CCA369CA262Bd0  ONLINE   0 0 0
c1t5000CCA369CBF60Cd0  ONLINE   0 0 0

errors: No known data errors

How much time would the thread opener need with his config?
 Technical Specs:
 216x 3TB 7k3000 HDDs
 24x 9 drive RAIDZ3

I suspect a resilver would need weeks, and the chance that a second or
third HD crashes in that time is high. Murphy's Law.

-- 
Best Regards
Alexander
August 6, 2011

[1] mid:1688088365.31312644757091.JavaMail.Twebapp@sf-app1




[zfs-discuss] locate any file

2011-07-28 Thread Alexander Lesle
Hello All,

is there a way to locate where a file on my pool is stored?
I mean, is there a way to see where/on which HD ZFS is saving a
file?

root@ripley:~# zpool list
NAMESIZE  ALLOC   FREECAP  DEDUP  HEALTH  ALTROOT
archepool  5,44T  4,40T  1,04T80%  1.00x  ONLINE  -

root@ripley:~# zpool status archepool
  pool: archepool
 state: ONLINE
 scan: scrub repaired 0 in 8h5m with 0 errors on Sun Jul 24 19:05:28 2011
config:

NAME   STATE READ WRITE CKSUM
archepool  ONLINE   0 0 0
  mirror-0 ONLINE   0 0 0
c1t50024E9003CE0317d0  ONLINE   0 0 0
c1t50024E9003CF7685d0  ONLINE   0 0 0
  mirror-1 ONLINE   0 0 0
c1t50024E9003CE031Bd0  ONLINE   0 0 0
c1t50024E9003CE0368d0  ONLINE   0 0 0
  mirror-2 ONLINE   0 0 0
c1t5000CCA369CA262Bd0  ONLINE   0 0 0
c1t5000CCA369CBF60Cd0  ONLINE   0 0 0

errors: No known data errors
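As far as I know, one way to inspect this is zdb: it can dump a file's block pointers, whose DVAs name the top-level vdev the blocks live on. A sketch, assuming the file sits in the pool's root dataset (file name is a placeholder):

```shell
# The inode number reported by ls -i is the ZFS object number of the file.
ls -i /archepool/somefile

# Dump the block pointers for that object; each DVA reads vdev:offset:size,
# where vdev is the index of the top-level mirror holding that block.
zdb -ddddd archepool <object-number>
```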


-- 
Best Regards
Alexander
July 28, 2011



[zfs-discuss] Resilvering - Scrubing whats the different

2010-12-20 Thread Alexander Lesle
Hello All

I have been reading the thread Resilver/scrub times? for a few minutes
and I have realized that I don't know the difference between
resilvering and scrubbing. Shame on me. :-(

I can't find an explanation in the man pages. I know the command to
start scrubbing is zpool scrub tank,
but what is the command to start a resilver, and what is the difference?

-- 
Best Regards
Alexander
December 20, 2010



Re: [zfs-discuss] Resilvering - Scrubing whats the different

2010-12-20 Thread Alexander Lesle
Hello Erik Trimble and Ian Collins,

thx for quick answering.

My confusion is resolved and I am glad.

-- 
Best Regards
Alexander
December 20, 2010

[1] mid:4d0fb4a4.1090...@oracle.com




Re: [zfs-discuss] A few questions

2010-12-17 Thread Alexander Lesle
On December 17, 2010, 17:48 Lanky Doodle wrote in [1]:

 By single drive mirrors, I assume, in a 14 disk setup, you mean 7
 sets of 2 disk mirrors - I am thinking of traditional RAID1 here.

 Or do you mean 1 massive mirror with all 14 disks?

Edward means a set of two-way mirrors.

Do you remember what he wrote:
 Also, in the event of a resilver, the 3x5 radiz will be faster.  In rough
 numbers, suppose you have 1TB drives, 70% full.  Then your resilver might be
 8 days instead of 12 days.  That's important when you consider the fact that
 during that window, you have degraded redundancy.  Another failed disk in
 the same vdev would destroy the entire pool.

 Also if a 2nd disk fails during resilver, it's more likely to be in the same
 vdev, if you have only 2 vdev's.  Your odds are better with smaller vdev's,
 both because the resilver completes faster, and the probability of a 2nd
 failure in the same vdev is smaller.

And this scenario is a horrible notion. While the resilver is running
you have to hope that nothing else fails. In his example that is
192 to 288 hours, a long, a very long time.
And be aware that a disk will break at some point.

 This is always a tough one for me. I too prefer RAID1 where
 redundancy is king, but the trade off for me would be 5GB of
 'wasted' space - total of 7GB in mirror and 12GB in 3x RAIDZ.

You lose the most space when you make a pool of mirrors, BUT
the I/O is much faster, it is more secure, and you have
all the features of ZFS too.
http://blogs.sun.com/relling/entry/zfs_raid_recommendations_space_performance

 Decisions, decisions.

My suggestion is:
make a two-way mirror of small disks or SSDs for the OS. This is not
easy to do after installation; you have to look for a howto.
Sorry, I can't find the link at the moment.

For Solaris 11 Express, Oracle announced that in the text installer you
can set the root pool to a mirror during installation. At the moment I
am trying it out in a VM, but I haven't found this option. :-(

zpool create lankyserver mirror vdev1 vdev2 mirror vdev3 vdev4

When you need more space you can add another pair of disks to your
lankyserver pool. Each pair should use disks of the same capacity to be
effective.

zpool add lankyserver mirror vdev5 vdev6 mirror vdev7 vdev8  ...

Consider that it is a good decision to plan for one spare disk.
You can use the zpool add command when you want to add a
spare disk at a later time.
http://docs.sun.com/app/docs/doc/819-2240/zpool-1m?a=view
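Adding a hot spare later could look like this (vdev9 is just a placeholder device name):

```shell
# Add a hot spare to the pool; ZFS pulls it in automatically when a
# mirror member fails. vdev9 is a placeholder device name.
zpool add lankyserver spare vdev9

# Verify the spare shows up in the pool layout.
zpool status lankyserver
```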

When you build a raidz pool, raidz uses only as much of each disk as
the smallest disk provides; the rest of any bigger disk is wasted.
In a mirrored pool only the disks within a pair must match, so you can
use one pair of 1 TB disks and one pair of 2 TB disks in the same pool.
In this scenario your spare disk _must have_ the biggest capacity.

Read this for your decision:
http://constantin.glez.de/blog/2010/01/home-server-raid-greed-and-why-mirroring-still-best

-- 
Best Regards
Alexander
December 17, 2010

[1] mid:382802084.111292604519623.javamail.tweb...@sf-app1




[zfs-discuss] AHCI or IDE?

2010-12-16 Thread Alexander Lesle
Hello All,

I want to build a home file and media server now. After experimenting
with an Asus board and running into unsolvable problems, I have bought this
Supermicro board X8SIA-F with an Intel i3-560 and 8 GB RAM
http://www.supermicro.com/products/motherboard/Xeon3000/3400/X8SIA.cfm?IPMI=Y
as well as the LSI HBA SAS 9211-8i
http://lsi.com/storage_home/products_home/host_bus_adapters/sas_hbas/internal/sas9211-8i/index.html

rpool = one 2-disk mirror
tank = 2 x 2-disk mirrors. For the future I want to have the option to
expand up to 12 x 2-disk mirrors.

After reading the board manual, I found on page 4-9 that I can switch
SATA#1 from IDE to AHCI.

Can zfs handle AHCI for rpool?
Can zfs handle AHCI for tank?

Thx for helping.
-- 
Regards
Alexander



Re: [zfs-discuss] AHCI or IDE?

2010-12-16 Thread Alexander Lesle
Hello Pasi,

thx for the quick answer.

That sounds good, because with the Asus board under Nexenta I always
got a message that my rpool was degraded when AHCI was set.
Additionally, with the LSI HBA the board doesn't boot when AHCI is set.

On Thursday, 16 December 2010 at 20:53, Pasi Kärkkäinen wrote
in mid:20101216195305.gz2...@reaktio.net:

 You definitely want to use AHCI and not the legacy IDE.

 AHCI enables:
 - disk hotswap.
 - NCQ (Native Command Queuing) to execute multiple commands at the 
 same time.


 -- Pasi


-- 
Schoenen Abend wuenscht
Alexander
mailto:gro...@tierarzt-mueller.de
eMail geschrieben mit: The Bat! 4.0.38
unter Windows Pro 



Re: [zfs-discuss] Supermicro AOC-USAS2-L8i

2010-10-16 Thread Alexander Lesle
Hello all,

now I have ordered this controller card from LSI:
http://lsi.com/storage_home/products_home/host_bus_adapters/sas_hbas/internal/sas9211-8i/index.html

It has the same controller chip onboard as the Supermicro card.
The card plugs into a PCI Express 2.0 x8 slot and the bracket fits
normal cases.
But it is _not_ MegaRAID. MegaRAID is not necessary when using ZFS, and
I won't use a hardware RAID system here. ;-)

-- 
Regards
Alexander



[zfs-discuss] Supermicro AOC-USAS2-L8i

2010-10-12 Thread Alexander Lesle
Hello guys,

I want to build a new NAS and I am searching for a controller.
At Supermicro I found this new one with the LSI 2008 controller:
http://www.supermicro.com/products/accessories/addon/AOC-USAS2-L8i.cfm?TYP=I

Can anyone confirm that this card runs under OSOL build 134 or Solaris 10?
Why this card? Because it supports 6.0 Gb/s SATA.

-- 
Regards
Alexander

