[zfs-discuss] Resilver misleading output

2010-12-13 Thread Bruno Sousa
Hello everyone, I have a pool consisting of 28 1TB SATA disks configured in 15*2 raid1 vdevs (2 disks per mirror), 2 SSDs in mirror for the ZIL and 3 SSDs for L2ARC, and recently i added two more disks. For some reason the resilver process kicked in, and the system is noticeably slower, but i'm
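
A pool shaped like the one described (two-way data mirrors, a mirrored SSD log, SSD cache devices) would be built and grown along these lines; the pool and device names below are illustrative, not taken from the post:

  # first two of the fifteen data mirrors, plus mirrored ZIL and L2ARC SSDs
  zpool create tank \
      mirror c1t0d0 c2t0d0 \
      mirror c1t1d0 c2t1d0 \
      log mirror c3t0d0 c3t1d0 \
      cache c3t2d0 c3t3d0 c3t4d0
  # adding two more disks as a new mirror vdev later:
  zpool add tank mirror c1t14d0 c2t14d0
  # resilver progress is reported by:
  zpool status tank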

Re: [zfs-discuss] Faster than 1G Ether... ESX to ZFS

2010-11-18 Thread Bruno Sousa
I confirm that from the fileserver and storage point of view, i had more network connections used. Bruno On Wed, 17 Nov 2010 22:00:21 +0200, Pasi Kärkkäinen pa...@iki.fi wrote: On Wed, Nov 17, 2010 at 10:14:10AM +, Bruno Sousa wrote: Hi all, Let me tell you all that the MC/S

Re: [zfs-discuss] Faster than 1G Ether... ESX to ZFS

2010-11-18 Thread Bruno Sousa
On Wed, 17 Nov 2010 16:31:32 -0500, Ross Walker rswwal...@gmail.com wrote: On Wed, Nov 17, 2010 at 3:00 PM, Pasi Kärkkäinen pa...@iki.fi wrote: On Wed, Nov 17, 2010 at 10:14:10AM +, Bruno Sousa wrote: Hi all, Let me tell you all that the MC/S *does* make a difference...I had

Re: [zfs-discuss] Faster than 1G Ether... ESX to ZFS

2010-11-17 Thread Bruno Sousa

Re: [zfs-discuss] Running on Dell hardware?

2010-10-13 Thread Bruno Sousa

Re: [zfs-discuss] Running on Dell hardware?

2010-10-13 Thread Bruno Sousa

Re: [zfs-discuss] zvol recordsize for backing a zpool over iSCSI

2010-08-02 Thread Bruno Sousa
On 2-8-2010 2:53, Richard Elling wrote: On Jul 30, 2010, at 11:35 AM, Andrew Gabriel wrote: Just wondering if anyone has experimented with working out the best zvol recordsize for a zvol which is backing a zpool over iSCSI? This is an interesting question. Today, most ZFS

[zfs-discuss] COMSTAR iscsi replication - dataset busy

2010-07-28 Thread Bruno Sousa
Hi all, I have two servers in the lab running snv_134, and while doing some experiments with iscsi volumes and replication i ran into a road-block that i would like to ask for your help with. So on server A i have a lun created in COMSTAR without any views attached to it, and i can zfs send it to server B
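
For reference, the replication step being described reduces to a snapshot plus a zfs send over the network; pool, zvol and host names here are made up:

  # snapshot the zvol that backs the COMSTAR lun, then stream it to server B
  zfs snapshot tank/lun0@repl1
  zfs send tank/lun0@repl1 | ssh serverB zfs receive tank/lun0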

[zfs-discuss] zfs destroy - weird output ( cannot destroy '': dataset already exists )

2010-07-27 Thread Bruno Sousa
Hi all, I'm running snv_134 and i'm testing the COMSTAR framework, and during those tests i've created an iSCSI zvol and exported it to a server. Now that the tests are done i have renamed the zvol, and so far so good.. things get really weird (at least to me) when i try to destroy this zvol.
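
Spelled out, the sequence that triggers the odd error is roughly the following (dataset names are illustrative):

  zfs rename tank/testlun tank/oldlun   # rename the zvol after testing
  zfs destroy tank/oldlun               # here the "dataset already exists"
                                        # message shows up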

Re: [zfs-discuss] zfs destroy - weird output ( cannot destroy '': dataset already exists )

2010-07-27 Thread Bruno Sousa
On 27-7-2010 19:36, Bruno Sousa wrote: Hi all, I'm running snv_134 and i'm testing the COMSTAR framework, and during those tests i've created an iSCSI zvol and exported it to a server. Now that the tests are done i have renamed the zvol, and so far so good.. things get really weird (at least to me) when

Re: [zfs-discuss] Optimal Disk configuration

2010-07-22 Thread Bruno Sousa
Hi all, That's what i have, so i'm probably on the right track :) Basically i have a Sun X4240 with 2 Sun HBAs attached to 2 Sun J4400, each of them with 12 SATA 1TB disks. The configuration is - ZFS mirrored pool with 22x2 + 2 spares, with 1 disk on JBOD A attached to HBA A and the other disk

Re: [zfs-discuss] zfs send to remote any ideas for a faster way than ssh?

2010-07-19 Thread Bruno Sousa
Hi, If you can share those scripts that make use of mbuffer, please feel free to do so ;) Bruno On 19-7-2010 20:02, Brent Jones wrote: On Mon, Jul 19, 2010 at 9:06 AM, Richard Jahnel rich...@ellipseinc.com wrote: I've tried ssh blowfish and scp arcfour. both are CPU limited long before
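
The scripts themselves are not shown in the archive; the typical mbuffer pipeline they would be built around looks like this (port, buffer sizes and dataset names are illustrative):

  # on the receiving host: listen on a TCP port and feed zfs receive
  mbuffer -I 9090 -s 128k -m 1G | zfs receive -F tank/backup
  # on the sending host: stream the snapshot into mbuffer over the network
  zfs send tank/data@snap | mbuffer -O receiver:9090 -s 128k -m 1G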

Re: [zfs-discuss] zfs send to remote any ideas for a faster way than ssh?

2010-07-19 Thread Bruno Sousa
On 19-7-2010 20:36, Brent Jones wrote: On Mon, Jul 19, 2010 at 11:14 AM, Bruno Sousa bso...@epinfante.com wrote: Hi, If you can share those scripts that make use of mbuffer, please feel free to do so ;) Bruno On 19-7-2010 20:02, Brent Jones wrote: On Mon, Jul 19, 2010 at 9:06

[zfs-discuss] fmadm warnings about media errors

2010-07-17 Thread Bruno Sousa
Hi all, Today i noticed that one of the ZFS based servers within my company is complaining about disk errors, but i would like to know if this is a real physical error or something like a transport error. The server in question runs snv_134 attached to 2 J4400 jbods, and the head-node
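
One generic way to tell a real media error from a transport problem is to read the FMA telemetry rather than just the console summary; for example:

  fmadm faulty   # summarize active faults and the suspected components
  fmdump -eV     # dump the underlying error reports in full detail, where
                 # the SCSI sense data distinguishes media from transport errors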

Re: [zfs-discuss] fmadm warnings about media errors

2010-07-17 Thread Bruno Sousa
On 17-7-2010 15:49, Bob Friesenhahn wrote: On Sat, 17 Jul 2010, Bruno Sousa wrote: Jul 15 12:30:48 storage01 SOURCE: eft, REV: 1.16 Jul 15 12:30:48 storage01 EVENT-ID: 859b9d9c-1214-4302-8089-b9447619a2a1 Jul 15 12:30:48 storage01 DESC: The command was terminated with a non-recovered error

Re: [zfs-discuss] COMSTAR ISCSI - configuration export/import

2010-06-29 Thread Bruno Sousa
Hmm...that easy? ;) Thanks for the tip, i will see if that works out. Bruno On 29-6-2010 2:29, Mike Devlin wrote: I haven't tried it yet, but supposedly this will backup/restore the comstar config: $ svccfg export -a stmf > comstar.bak.${DATE} If you ever need to restore the configuration,
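
The matching restore step, untested at this point in the thread and with an illustrative file name, would be an import of the saved manifest followed by a service restart:

  svccfg import comstar.bak.20100629   # re-import the exported stmf config
  svcadm restart stmf                  # restart the service so it is applied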

Re: [zfs-discuss] COMSTAR ISCSI - configuration export/import

2010-06-29 Thread Bruno Sousa
/LUN_10GB Thanks for all the tips. Bruno On 29-6-2010 14:10, Preston Connors wrote: On Tue, 2010-06-29 at 08:58 +0200, Bruno Sousa wrote: Hmm...that easy? ;) Thanks for the tip, i will see if that works out. Bruno Be aware of the Important Note in http://wikis.sun.com/display

[zfs-discuss] ZFS - USB 3.0 SSD disk

2010-05-06 Thread Bruno Sousa
Hi all, It seems like the market has yet another type of SSD device, this time a USB 3.0 portable SSD by OCZ. Going on the specs, it seems to me that if this device has a good price it might be quite useful for caching purposes on ZFS based storage. Take a look at
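
For context, attaching such a device as an L2ARC is a single command, and cache devices can be detached again without risk to pool data (pool and device names illustrative):

  zpool add tank cache c4t0d0    # attach the SSD as an L2ARC device
  zpool remove tank c4t0d0       # cache devices may be removed later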

[zfs-discuss] Another MPT issue - kernel crash

2010-05-05 Thread Bruno Sousa
Hi all, I have faced yet another kernel panic that seems to be related to the mpt driver. This time i was trying to add a new disk to a running system (snv_134) and this new disk was not being detected... following a tip i ran lsiutil to reset the bus and this led to a system panic. MPT driver :

Re: [zfs-discuss] Another MPT issue - kernel crash

2010-05-05 Thread Bruno Sousa
Hi James, Thanks for the information; if there's any test/command to be run on this server, just let me know. Regards, Bruno On 5-5-2010 15:38, James C. McPherson wrote: On 5/05/10 10:42 PM, Bruno Sousa wrote: Hi all, I have faced yet another kernel panic that seems to be related

Re: [zfs-discuss] MPT issues strikes back

2010-04-29 Thread Bruno Sousa
, and i will try to understand what's wrong with this machine. Bruno On 27-4-2010 16:41, Mark Ogden wrote: Bruno Sousa on Tue, Apr 27, 2010 at 09:16:08AM +0200 wrote: Hi all, Yet another story regarding mpt issues, and to make a long story short, every time a Dell R710 running

Re: [zfs-discuss] Performance drop during scrub?

2010-04-29 Thread Bruno Sousa
Indeed the scrub seems to take too many resources from a live system. For instance, i have a server with 24 disks (SATA 1TB) serving as an NFS store for a linux machine holding user mailboxes. I have around 200 users, with maybe 30-40% of them active at the same time. As soon as the scrub process

[zfs-discuss] MPT issues strikes back

2010-04-27 Thread Bruno Sousa
Hi all, Yet another story regarding mpt issues, and to make a long story short: every time a Dell R710 running snv_134 logs the message scsi: [ID 107833 kern.warning] WARNING: /p...@0,0/pci8086,3...@4/pci1028,1...@0 (mpt0): , the system freezes and only a hard-reset fixes the

Re: [zfs-discuss] dedup causing problems with NFS?(was Re: snapshots taking too much space)

2010-04-14 Thread Bruno Sousa
Hi, Maybe your zfs box used for dedup is under heavy load, and is therefore causing timeouts in the nagios checks? I ask because i also suffer from that effect on a system with 2 Intel Xeon 3.0GHz ;) Bruno On 14-4-2010 15:48, Paul Archer wrote: So I turned deduplication on on my staging FS (the one

Re: [zfs-discuss] Post crash - what to do - update

2010-04-13 Thread Bruno Sousa
On 13-4-2010 11:42, Bruno Sousa wrote: Hi all, Recently one of the servers, a Dell R710 attached to 2 J4400, started to crash quite often. Finally i got a message in /var/adm/messages that might point to something useful, but i don't have the expertise to start troubleshooting

Re: [zfs-discuss] Are there (non-Sun/Oracle) vendors selling OpenSolaris/ZFS based NAS Hardware?

2010-04-06 Thread Bruno Sousa
Hi, I also ran into the Dell+Broadcom problem. I fixed it by downgrading the firmware to version 4.xxx instead of running version 5.xxx. You may want to try that as well. Bruno On 6-4-2010 16:54, Eric D. Mudama wrote: On Tue, Apr 6 at 13:03, Markus Kovero wrote: Install nexenta on a dell

Re: [zfs-discuss] can't destroy snapshot

2010-03-31 Thread Bruno Sousa
On 31-3-2010 14:52, Charles Hedrick wrote: Incidentally, this is on Solaris 10, but I've seen identical reports from Opensolaris. You probably need to delete any existing views on the lun you want to destroy. Example : stmfadm list-lu LU Name: 600144F0B67340004BB31F060001 stmfadm
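
Spelled out, the suggested cleanup for the LU shown above would be something like this (the zvol name is illustrative):

  stmfadm list-view -l 600144F0B67340004BB31F060001       # show its views
  stmfadm remove-view -l 600144F0B67340004BB31F060001 -a  # remove them all
  stmfadm delete-lu 600144F0B67340004BB31F060001          # drop the LU itself
  zfs destroy tank/thelun                                 # now the zvol can go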

Re: [zfs-discuss] RAIDZ2 configuration

2010-03-31 Thread Bruno Sousa
Hmm... it might be completely wrong, but the idea of a raidz2 vdev with 3 disks came from

Re: [zfs-discuss] Adaptec AAC driver

2010-03-30 Thread Bruno Sousa
Thanks.. it was what i had to do. Bruno On 29-3-2010 19:12, Cyril Plisko wrote: On Mon, Mar 29, 2010 at 4:57 PM, Bruno Sousa bso...@epinfante.com wrote: pkg uninstall aac Creating Plan pkg: Cannot remove 'pkg://opensolaris.org/driver/storage/a...@0.5.11,5.11-0.134:20100302T021758Z' due

[zfs-discuss] Adaptec AAC driver

2010-03-29 Thread Bruno Sousa
Hello all, Currently i'm evaluating a system with an Adaptec 52445 RAID HBA, and the driver supplied by Opensolaris doesn't support JBOD drives. I'm running snv_134, but when i try to uninstall the SUNWaac driver i get the following error : pkgrm SUNWaac The following package is currently

Re: [zfs-discuss] Adaptec AAC driver

2010-03-29 Thread Bruno Sousa
:25 PM, Bruno Sousa bso...@epinfante.com wrote: Hello all, Currently i'm evaluating a system with an Adaptec 52445 RAID HBA, and the driver supplied by Opensolaris doesn't support JBOD drives. I'm running snv_134, but when i try to uninstall the SUNWaac driver i get the following error

Re: [zfs-discuss] zfs diff

2010-03-29 Thread Bruno Sousa
On 30-3-2010 0:39, Nicolas Williams wrote: One really good use for zfs diff would be: as a way to index zfs send backups by contents. Nico Any estimate of the release target? snv_13x? Bruno

Re: [zfs-discuss] zfs send/receive - actual performance

2010-03-26 Thread Bruno Sousa
Hi, I think that in this case the cpu is not the bottleneck, since i'm not using ssh. However, my 1Gb network link probably is. Bruno On 26-3-2010 9:25, Erik Ableson wrote: On 25 mars 2010, at 22:00, Bruno Sousa bso...@epinfante.com wrote: Hi, Indeed the 3 disks per vdev

Re: [zfs-discuss] zfs send/receive - actual performance

2010-03-26 Thread Bruno Sousa
deliver good performance. And what a relief to know that i'm not alone when i say that storage management is part science, part art and part voodoo magic ;) Cheers, Bruno On 25-3-2010 23:22, Ian Collins wrote: On 03/26/10 10:00 AM, Bruno Sousa wrote: [Boy top-posting sure mucks up threads!] Hi

Re: [zfs-discuss] Pool vdev imbalance - getting worse?

2010-03-25 Thread Bruno Sousa
Hi, As far as i know this is normal behaviour in ZFS... So what we need is some sort of rebalance task that moves data across multiple vdevs in order to achieve the best performance possible... Take a look at http://bugs.opensolaris.org/bugdatabase/view_bug.do?bug_id=6855425 Bruno On

Re: [zfs-discuss] ZFS on a 11TB HW RAID-5 controller

2010-03-25 Thread Bruno Sousa
Hi, Actually the idea of having the ZFS code inside a HW raid controller does seem quite interesting. Imagine the possibility of having any OS with raid volumes backed by all the good aspects of ZFS, especially the checksums and raidz vs the raid5-write-hole thing... I also consider the

Re: [zfs-discuss] Pool vdev imbalance - getting worse?

2010-03-25 Thread Bruno Sousa
Hi, You never experienced any faulted drives, or something similar? So far i have only seen imbalance if the set of vdevs changed, if a hotspare is used, and i think even during the replacement of one disk in a raidz2 group. I Bruno On 25-3-2010 9:46, Ian Collins wrote: On 03/25/10 09:32 PM, Bruno Sousa

Re: [zfs-discuss] Pool vdev imbalance - getting worse?

2010-03-25 Thread Bruno Sousa
a huge mistake. If someone with more knowledge about zfs would like to comment, please do so.. It's always a learning experience. Bruno On 25-3-2010 11:53, Ian Collins wrote: On 03/25/10 11:23 PM, Bruno Sousa wrote: On 25-3-2010 9:46, Ian Collins wrote: On 03/25/10 09:32 PM, Bruno Sousa wrote

[zfs-discuss] RAIDZ2 configuration

2010-03-25 Thread Bruno Sousa
Hi all, Yet another question regarding raidz configuration.. Assuming a system with 24 disks available, with reliability as the crucial factor, usable space second, and performance as the last criterion, what would be the preferable configuration? Should it be :
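
For a concrete picture of the trade-off, two layouts commonly weighed for 24 disks are sketched below; device names are illustrative and spares are left out:

  # four 6-disk raidz2 vdevs: 16 data disks, more vdevs and therefore more IOPS
  zpool create tank \
      raidz2 c1t0d0 c1t1d0 c1t2d0 c1t3d0 c1t4d0 c1t5d0 \
      raidz2 c1t6d0 c1t7d0 c1t8d0 c1t9d0 c1t10d0 c1t11d0 \
      raidz2 c2t0d0 c2t1d0 c2t2d0 c2t3d0 c2t4d0 c2t5d0 \
      raidz2 c2t6d0 c2t7d0 c2t8d0 c2t9d0 c2t10d0 c2t11d0
  # alternative: two 12-disk raidz2 vdevs give 20 data disks, but fewer
  # vdevs (less IOPS) and longer resilvers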

Re: [zfs-discuss] RAIDZ2 configuration

2010-03-25 Thread Bruno Sousa
seems to behave quite nicely... but then again we are just starting with it. Thanks for the input, Bruno On 25-3-2010 16:46, Freddie Cash wrote: On Thu, Mar 25, 2010 at 6:28 AM, Bruno Sousa bso...@epinfante.com wrote: Assuming a system with 24 disks available, having

Re: [zfs-discuss] RAIDZ2 configuration

2010-03-25 Thread Bruno Sousa
On 25-3-2010 15:28, Richard Jahnel wrote: I think I would do 3xraidz3 with 8 disks and 0 hotspares. That way you have a better chance of resolving bit rot issues that might become apparent during a rebuild. Indeed raidz3...i didn't consider it. In short, a raidz3 could sustain 3 broken

Re: [zfs-discuss] RAIDZ2 configuration

2010-03-25 Thread Bruno Sousa
c3t0d0 ONLINE 0 0 0 So... what am i missing here? Just a bad example in the Sun documentation regarding zfs? Bruno On 25-3-2010 20:10, Freddie Cash wrote: On Thu, Mar 25, 2010 at 11:47 AM, Bruno Sousa bso...@epinfante.com wrote: What do you

[zfs-discuss] zfs send/receive - actual performance

2010-03-25 Thread Bruno Sousa
Hi all, The more reading and experimenting i do with ZFS, the more i like this stack of technologies. Since we all like to see real figures from real environments, i might as well share some of my numbers.. The replication has been achieved with zfs send / zfs receive piped through mbuffer

Re: [zfs-discuss] zfs send/receive - actual performance

2010-03-25 Thread Bruno Sousa
Thanks for the tip.. btw is there any advantage to jbod vs simple volumes? Bruno On 25-3-2010 21:08, Richard Jahnel wrote: BTW, if you download the solaris drivers for the 52445 from adaptec, you can use jbod instead of simple volumes.

Re: [zfs-discuss] zfs send/receive - actual performance

2010-03-25 Thread Bruno Sousa
good. However, like i said, i would like to see results from other people... Thanks for your time. Bruno On 25-3-2010 21:52, Ian Collins wrote: On 03/26/10 08:47 AM, Bruno Sousa wrote: Hi all, The more reading and experimenting i do with ZFS, the more i like this stack of technologies

Re: [zfs-discuss] How to manage scrub priority or defer scrub?

2010-03-16 Thread Bruno Sousa
Well... i can only say: well said. BTW i have a pool of 9 raidz2 vdevs with 4 disks each (enterprise SATA disks) and a scrub of the pool takes between 12 and 39 hours, depending on the workload of the server. So far it's acceptable, but every case is different i think... Bruno On 16-3-2010 14:04, Khyron

[zfs-discuss] snv_133 mpt_sas driver

2010-03-08 Thread Bruno Sousa
Hi all, Today a new message appeared on my system and another freeze happened. The message is : Mar 9 06:20:01 zfs01 failed to configure smp w50016360001e06bf Mar 9 06:20:01 zfs01 mpt: [ID 201859 kern.warning] WARNING: smp_start do passthru error 16 Mar 9 06:20:01 zfs01

[zfs-discuss] snv_133 mpt0 freezing machine

2010-03-05 Thread Bruno Sousa
Hi all, Recently i got myself a new machine (Dell R710) with 1 internal Dell SAS/i and 2 Sun HBAs (non-raid). From time to time this system just freezes, and i noticed that it always freezes after this message (shown in /var/adm/messages) : scsi: [ID 107833 kern.warning] WARNING:

Re: [zfs-discuss] snv_133 mpt0 freezing machine

2010-03-05 Thread Bruno Sousa

Re: [zfs-discuss] snv_133 mpt0 freezing machine

2010-03-05 Thread Bruno Sousa
Seems like it... and the workaround doesn't help. Bruno On 5-3-2010 16:52, Mark Ogden wrote: Bruno Sousa on Fri, Mar 05, 2010 at 09:34:19AM +0100 wrote: Hi all, Recently i got myself a new machine (Dell R710) with 1 internal Dell SAS/i and 2 Sun HBAs (non-raid). From time to time

[zfs-discuss] snv_133 - high cpu - update

2010-02-24 Thread Bruno Sousa
Hi all, I still haven't found the problem, but it seems to be related to interrupt sharing between the onboard network cards (Broadcom) and the Intel 10GbE PCI-e card. Running a simple iperf from a linux box to my zfs box, if i use bnx2 or bnx3 i get performance over 100 mbs, but if i use bnx0,
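
To check whether devices really share interrupts and CPUs (a generic Solaris diagnosis step, not something confirmed in the thread), the interrupt layout can be inspected with:

  intrstat 5                    # per-device interrupt rates per CPU
  echo ::interrupts | mdb -k    # static view of vector-to-device assignments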

Re: [zfs-discuss] snv_133 - high cpu

2010-02-24 Thread Bruno Sousa
Yes, i'm using the mpt driver. In total this system has 3 HBA's: 1 internal (Dell PERC) and 2 Sun non-raid HBA's. I'm also using multipath, but if i disable multipath i get pretty much the same results.. Bruno On 24-2-2010 19:42, Andy Bowers wrote: Hi Bart, yep, I got Bruno to run a

Re: [zfs-discuss] snv_133 - high cpu

2010-02-24 Thread Bruno Sousa
Hi, Until it's fixed, should build 132 be used instead of 133? Bruno On 25-2-2010 3:22, Bart Smaalders wrote: On 02/24/10 12:57, Bruno Sousa wrote: Yes, i'm using the mpt driver. In total this system has 3 HBA's: 1 internal (Dell PERC) and 2 Sun non-raid HBA's. I'm also using

[zfs-discuss] snv_133 - high cpu

2010-02-23 Thread Bruno Sousa
Hi all, I'm currently evaluating the possibility of migrating an NFS server (Linux Centos 5.4 / RHEL 5.4 x64-32) to an opensolaris box, and i'm seeing huge cpu usage on the opensolaris box. The zfs box is a Dell R710 with 2 Quad-Cores (Intel E5506 @ 2.13GHz), 16Gb ram, 2 Sun non-Raid
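
A standard first step when system CPU is unexpectedly high on a Solaris-family box (a generic technique, not taken from the thread) is to sample where the kernel spends its time:

  mpstat 5                        # watch the usr/sys split per CPU
  lockstat -kIW -D 20 sleep 30    # profile kernel call sites for 30 seconds
                                  # and print the top 20 entries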

Re: [zfs-discuss] snv_133 - high cpu

2010-02-23 Thread Bruno Sousa
Hi, I don't have compression or deduplication enabled, but checksums are. However, disabling checksums gives only a 0.5 load reduction... Bruno On 23-2-2010 20:27, Eugen Leitl wrote: On Tue, Feb 23, 2010 at 01:03:04PM -0600, Bob Friesenhahn wrote: Zfs can consume appreciable CPU if

Re: [zfs-discuss] snv_133 - high cpu

2010-02-23 Thread Bruno Sousa
Hi Bob, I have neither deduplication nor compression enabled. Checksums are enabled, but if i try to disable them i gain around 0.5 less load on the box, so it still seems to be too much. Bruno On 23-2-2010 20:03, Bob Friesenhahn wrote: On Tue, 23 Feb 2010, Bruno Sousa wrote: Could the fact

Re: [zfs-discuss] snv_133 - high cpu

2010-02-23 Thread Bruno Sousa
also affect the performance of the system? Regards, Bruno On 23-2-2010 20:47, Bob Friesenhahn wrote: On Tue, 23 Feb 2010, Bruno Sousa wrote: I don't have compression or deduplication enabled, but checksums are. However, disabling checksums gives only a 0.5 load reduction... Since high CPU

Re: [zfs-discuss] [storage-discuss] Horribly bad luck with Unified Storage 7210 - hardware or software?

2010-02-23 Thread Bruno Sousa
Hi, Just some comments on your situation; please take a look at the following things : * Sometimes the hw looks the same (i'm talking specifically about the SSD's), but they can be somehow different and that may lead to some problems in the

Re: [zfs-discuss] ZFS dedup report tool

2009-12-10 Thread Bruno Sousa
Hi, Couldn't agree more..but i just asked if there was such a tool :) Bruno Richard Elling wrote: On Dec 9, 2009, at 11:07 AM, Bruno Sousa wrote: Hi, Despite the fact that i agree in general with your comments, in reality it all comes to money.. So in this case, if i could prove that ZFS

[zfs-discuss] ZFS dedup report tool

2009-12-09 Thread Bruno Sousa
Hi all, Is there any way to generate a report on the de-duplication feature of ZFS within a zpool/zfs pool? I mean, it's nice to have the dedup ratio, but i think it would also be good to have a report where we could see which directories/files have been found to be duplicates and therefore
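
No such report tool existed at the time; the closest built-in views, assuming a pool named tank, are the pool-wide ratio and the dedup-table histogram:

  zpool list tank   # the DEDUP column shows the pool-wide ratio
  zdb -DD tank      # histogram of dedup-table entries by reference count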

Re: [zfs-discuss] ZFS dedup report tool

2009-12-09 Thread Bruno Sousa
at 2:26 PM, Bruno Sousa bso...@epinfante.com wrote: Hi all, Is there any way to generate a report on the de-duplication feature of ZFS within a zpool/zfs pool? I mean, it's nice to have the dedup ratio, but i think it would also be good to have a report where we could see which

Re: [zfs-discuss] ZFS dedup report tool

2009-12-09 Thread Bruno Sousa
: On Wed, Dec 9, 2009 at 2:47 PM, Bruno Sousa bso...@epinfante.com wrote: Hi Andrey, For instance, i talked about deduplication to my manager and he was happy because less data = less storage, and therefore lower costs. However, now the IT group of my company needs to provide to management

Re: [zfs-discuss] ZFS dedup report tool

2009-12-09 Thread Bruno Sousa
of a cost centre. But indeed you're right: in my case a possible technical solution is trying to answer a managerial question.. however, isn't that the way IT was invented? i believe that's why i get my paycheck each month :) Bruno Richard Elling wrote: On Dec 9, 2009, at 3:47 AM, Bruno Sousa wrote

Re: [zfs-discuss] ZFS dedup report tool

2009-12-09 Thread Bruno Sousa
, but in order to do that, there has to be a way to measure those costs/savings. But yes, these costs probably represent less than 20% of the total cost, but it's a cost no matter what. However, maybe i'm driving down the wrong road... Bruno Bob Friesenhahn wrote: On Wed, 9 Dec 2009, Bruno Sousa wrote

Re: [zfs-discuss] Update - mpt errors on snv 101b

2009-12-08 Thread Bruno Sousa
Bruno Sousa wrote: Hi all, During this problem i did a power-off/power-on of the server and the bus reset/scsi timeout issue persisted. After that i decided to power-off/power-on the jbod array, and after that everything became normal. No scsi timeouts, normal performance, everything

Re: [zfs-discuss] Changing ZFS drive pathing

2009-12-08 Thread Bruno Sousa
But don't forget that the unknown is what makes life interesting :) Bruno Cindy Swearingen wrote: Hi Mike, In theory this should work, but I don't have any experience with this particular software; maybe someone else does. One way to determine if it might work is by using the zdb -l
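
The label check mentioned here prints what ZFS last wrote to the device, which is enough to see whether a pool would still recognize the disk; the device path is illustrative:

  zdb -l /dev/rdsk/c1t0d0s0   # dump the four ZFS labels: pool name, guid,
                              # and the vdev path recorded for this disk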

[zfs-discuss] mpt errors on snv 101b

2009-12-07 Thread Bruno Sousa
/power on and see how it goes * replace HBA/disk ? * other ? Thanks for the time, and if any other information is required (even ssh access can be granted) please feel free to ask. Best regards, Bruno Sousa System specs : * OpenSolaris snv_101b, with two Dual-Core AMD

Re: [zfs-discuss] Opensolaris with J4400 - Experiences

2009-11-30 Thread Bruno Sousa
Katzke Systems Analyst II TAMU - RGS On 11/25/2009 at 11:13 AM, in message 4b0d65d6.4020...@epinfante.com, Bruno Sousa bso...@epinfante.com wrote: Hello ! I'm currently using an X2200 with an LSI HBA connected to a Supermicro JBOD chassis, however i want to have more

Re: [zfs-discuss] heads-up: dedup=fletcher4,verify was broken

2009-11-25 Thread Bruno Sousa
Maybe 11/30/2009? According to http://hub.opensolaris.org/bin/view/Community+Group+on/schedule we have onnv_129 11/23/2009 11/30/2009. But.. as far as i know those release dates are on a best-effort basis. Bruno Karl Rossing wrote: When will SXCE 129 be released, since 128 was passed over?

[zfs-discuss] Opensolaris with J4400 - Experiences

2009-11-25 Thread Bruno Sousa
Hello ! I'm currently using an X2200 with an LSI HBA connected to a Supermicro JBOD chassis, however i want to have more redundancy in the JBOD. So i have looked into the market, and into the wallet, and i think that the Sun J4400 suits my goals nicely. However i have some concerns, and if

Re: [zfs-discuss] ZFS storage server hardware

2009-11-20 Thread Bruno Sousa
, that only has disks, disk backplane, JBOD power interface and power supplies. Hope this helps... Bruno Sriram Narayanan wrote: On Wed, Nov 18, 2009 at 3:24 AM, Bruno Sousa bso...@epinfante.com wrote: Hi Ian, I use the Supermicro SuperChassis 846E1-R710B, and i added the JBOD kit

Re: [zfs-discuss] Data balance across vdevs

2009-11-20 Thread Bruno Sousa
Interesting, at least to me, the part where "this storage node is very small (~100TB)" :) Anyway, how are you using your ZFS? Are you creating volumes and presenting them to end-nodes over iscsi/fiber, nfs, or other? It could be helpful to use some sort of cluster filesystem to have some more control

Re: [zfs-discuss] ZFS storage server hardware

2009-11-17 Thread Bruno Sousa
Hi, I currently have a 1U server (Sun X2200) with 2 LSI HBAs attached to Supermicro JBOD chassis, each one with 24 disks, SATA 1TB, and so far so good.. So i have 48 TB raw capacity, with a mirror configuration for NFS usage (Xen VMs), and i feel that for the price i paid i have a very nice

Re: [zfs-discuss] ZFS storage server hardware

2009-11-17 Thread Bruno Sousa
: Hi Bruno, Bruno Sousa wrote: Hi, I currently have a 1U server (Sun X2200) with 2 LSI HBA attached to a Supermicro JBOD chassis each one with 24 disks , SATA 1TB, and so far so good.. So i have a 48 TB raw capacity, with a mirror configuration for NFS usage (Xen VMs) and i feel

Re: [zfs-discuss] zfs code and fishworks fork

2009-10-27 Thread Bruno Sousa
Hi all, I fully understand that from a cost-effectiveness point of view, developing fishworks for a reduced set of hardware makes a lot of sense. However, i think that Sun/Oracle would increase their user base if they made available a Fishworks framework certified only for a reduced set of

Re: [zfs-discuss] zfs code and fishworks fork

2009-10-27 Thread Bruno Sousa
the best approach. Bruno Tim Cook wrote: On Tue, Oct 27, 2009 at 2:35 AM, Bruno Sousa bso...@epinfante.com wrote: Hi all, I fully understand that from a cost-effectiveness point of view, developing fishworks for a reduced set of hardware makes a lot

Re: [zfs-discuss] zfs code and fishworks fork

2009-10-27 Thread Bruno Sousa
series. Regarding Apple... well, they have marketing gurus Bruno On Wed, 28 Oct 2009 09:47:31 +1300, Trevor Pretty wrote: Bruno Sousa wrote: Hi, I can agree that the software is what really has the added value, but in my opinion allowing a stack like Fishworks to run outside

Re: [zfs-discuss] zfs code and fishworks fork

2009-10-27 Thread Bruno Sousa
I'm just curious to see how much effort it would take to get the FISH software running on a Sun X4275... Anyway.. let's wait and see. Bruno On Tue, 27 Oct 2009 13:29:24 -0500 (CDT), Bob Friesenhahn bfrie...@simple.dallas.tx.us wrote: On Tue, 27 Oct 2009, Bruno Sousa wrote: I can agree

Re: [zfs-discuss] zfs code and fishworks fork

2009-10-27 Thread Bruno Sousa

Re: [zfs-discuss] zfs code and fishworks fork

2009-10-27 Thread Bruno Sousa

Re: [zfs-discuss] SNV_125 MPT warning in logfile

2009-10-23 Thread Bruno Sousa
errors to display. According to the 6694909 comments, this issue is documented in the release notes. As they are harmless, I wouldn't worry about them. Maybe someone from the driver group can comment further. Cindy On 10/22/09 05:40, Bruno Sousa wrote: Hi all, Recently i upgraded from

Re: [zfs-discuss] SNV_125 MPT warning in logfile

2009-10-23 Thread Bruno Sousa
Hi Adam, How many disks and zpool/zfs's do you have behind that LSI? I have a system with 22 disks and 4 zpools with around 30 zfs's, and so far it works like a charm, even under heavy load. The opensolaris release is snv_101b. Bruno Adam Cheal wrote: Cindy: How can I view the bug report

Re: [zfs-discuss] SNV_125 MPT warning in logfile

2009-10-23 Thread Bruno Sousa
Could the reason for Sun's X4540 Thumper having 6 LSI's be some sort of hidden problem found by Sun where the HBA resets, and due to time-to-market pressure the quick and dirty solution was to spread the load over multiple HBA's instead of a software fix? Just my 2 cents.. Bruno Adam Cheal wrote: Just

Re: [zfs-discuss] SNV_125 MPT warning in logfile

2009-10-23 Thread Bruno Sousa
, I wouldn't worry about them. Maybe someone from the driver group can comment further. Cindy On 10/22/09 05:40, Bruno Sousa wrote: Hi all, Recently i upgraded from snv_118 to snv_125, and suddenly i started to see these messages in /var/adm/messages : Oct 22 12:54:37 SAN02 scsi: [ID 243001

Re: [zfs-discuss] Disk locating in OpenSolaris/Solaris 10

2009-10-22 Thread Bruno Sousa
If you use an LSI, maybe you can install the LSI Logic MPT Configuration Utility. Example of the usage :

  lsiutil
  LSI Logic MPT Configuration Utility, Version 1.61, September 18, 2008
  1 MPT Port found
       Port Name      Chip Vendor/Type/Rev    MPT Rev    Firmware Rev    IOC
   1.  mpt0

[zfs-discuss] SNV_125 MPT warning in logfile

2009-10-22 Thread Bruno Sousa
Hi all, Recently i upgraded from snv_118 to snv_125, and suddenly i started to see these messages in /var/adm/messages : Oct 22 12:54:37 SAN02 scsi: [ID 243001 kern.warning] WARNING: /p...@0,0/pci10de,3...@a/pci1000,3...@0 (mpt0): Oct 22 12:54:37 SAN02 mpt_handle_event: IOCStatus=0x8000,

Re: [zfs-discuss] Adding another mirror to storage pool

2009-10-20 Thread Bruno Sousa
Hi, Something like http://bugs.opensolaris.org/bugdatabase/view_bug.do?bug_id=6855425 ? Bruno Matthias Appel wrote: You will see more IOPS/bandwidth, but if your existing disks are very full, then more traffic may be sent to the new disks, which results in less benefit. OK, that

[zfs-discuss] ZPOOL Metadata / Data Error - Help

2009-10-04 Thread Bruno Sousa
Hi all ! I have a serious problem with a server, and i'm hoping that someone could help me understand what's wrong. So basically i have a server with a pool of 6 disks, and after a zpool scrub i got the message : errors: Permanent errors have been detected in the following files:
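
That error listing comes from zpool status; the usual way it surfaces (pool name illustrative) is:

  zpool scrub tank       # re-read and verify every block in the pool
  zpool status -v tank   # -v lists the files affected by permanent errors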

[zfs-discuss] ZPOOL data corruption - Help

2009-10-04 Thread Bruno Sousa
Hi all ! I have a serious problem with a server, and i'm hoping that someone could help me understand what's wrong. So basically i have a server with a pool of 6 disks, and after a zpool scrub i got the message : errors: Permanent errors have been detected in the following files: