Re: [zfs-discuss] Any HP Servers recommendation for Openindiana (Capacity Server) ?

2012-01-04 Thread Svavar Örn Eysteinsson

Thanks for your replies and time, everyone.

The only reason I asked about HP is that I have good support through an HP
reseller here in Iceland. IBM also has a good reseller here. DELL, not so
good, and Oracle from the same reseller is hell here in Iceland.

I also have a pretty good reseller for SuperMicro, from whom I have bought
many times before. I actually have 5 Supermicro servers in production here,
and they are damn good.


The actual question about HP, or some other vendor, was about hardware
support: if a hard disk, controller, or motherboard were to fail, I have a
pretty good connection (and support) for getting them replaced within a few
hours. So hardware support and hardware compatibility would be great.
Software support is not what I'm actually looking for.

I'm aware of ZFS's appetite for RAM, so I would buy a RAM-heavy server.

A closed software solution is not my type of thing.
Firstly, they cost huge money in our currency.
Secondly, I would have no access to the core of the operating system (as
with Nexenta, for example). I have had great success with OpenIndiana and
Napp-IT, and with using Linux in our environment.
Thirdly, we have a limited budget. Time will tell how much, as our currency
rocks like a bad yo-yo. :s


I think a good solution would be:

* 1U/2U server with lots of RAM, good Xeons, and the right SAS controllers
* A large external storage chassis connected via SAS (6Gb/s, right?)


Any recommendations on external chassis? Supermicro?


Thanks a lot, people.

Best regards,

Svavar - Reykjavik - Iceland






Gary Driggs gdri...@gmail.com
4 January 2012 07:26

They might use the same chipset, but their firmware usually doesn't
support JBOD. Unless they've changed in the last couple of years...
The best you can do is try, but if you don't see each drive individually
you'll know it's by design and not a lack of skill on your part.
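
For a quick check from a live OpenIndiana environment, you can see whether
each drive shows up individually (a sketch using stock Solaris commands):

format </dev/null    # lists every disk the OS can see, then exits
cfgadm -al           # shows attachment points for SAS/SATA targets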

-Gary
Eric D. Mudama edmud...@bounceswoosh.org
4 January 2012 06:36


What got us with the HP boxes was the unsupported RAID cards.
We ended up getting Dell T610 boxes with SAS6i/R cards, which are
properly supported in Solaris/OI.

Supposedly the H200/H700 cards are just Dell's name for the 6Gb/s LSI
SAS cards, but I haven't tested them personally.
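
For what it's worth, one way to check what silicon a rebadged card really is
(a sketch; run on the target box and look for LSI's PCI vendor-id 1000):

prtconf -pv | egrep -i 'vendor-id|device-id|model'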

--eric

Gary Driggs gdri...@gmail.com
3 January 2012 16:03
I can't comment on their 4U servers, but the SAS controllers included in
HP's 1U/2U boxes rarely allow JBOD discovery of drives. So I'd recommend
an LSI card and an external storage chassis like those available from
Promise and others.

-Gary
Frank Lahm frankl...@googlemail.com
3 January 2012 15:22

One of them being:
http://www.racktopsystems.com/products/brickstor/brickstor-models/

AFAIK they're in the process of announcing official AFP support (with
Netatalk) soon.

-f
Christopher Hearn christopher.he...@cchmc.org
3 January 2012 15:08

I haven't really seen any 100% commercially supported solutions for 
this. If you put this configuration on an HP server and call them to 
report a problem, they're most likely going to deny your request 
because you're using an unsupported OS. You may be able to get away 
with using Solaris 11, but you'll pay for it. I suggest looking at 
NexentaStor and its partners: http://www.nexenta.com. Talk to them 
about what you want to do; they'd most likely be the most receptive to 
unique setups. I was able to get AFP working on NexentaStor 
Community Edition. I was not able to get it working with Active 
Directory (I'd love advice if anyone has it). If anything, their 
partners would most likely be receptive to supporting the hardware 
configuration itself; the software would be up to you. Better than 
nothing at all.


Chris





Re: [zfs-discuss] S11 vs illumos zfs compatibility

2012-01-04 Thread Michael Sullivan

On 3 Jan 12, at 04:22, Darren J Moffat wrote:

 On 12/28/11 06:27, Richard Elling wrote:
 On Dec 27, 2011, at 7:46 PM, Tim Cook wrote:
 On Tue, Dec 27, 2011 at 9:34 PM, Nico Williams n...@cryptonector.com
 wrote:
 On Tue, Dec 27, 2011 at 8:44 PM, Frank Cusack fr...@linetwo.net wrote:
 So with a de facto fork (illumos) now in place, is it possible that two
 zpools will report the same version yet be incompatible across
 implementations?
 
 This was already broken by Sun/Oracle when the deduplication feature was
 not backported to Solaris 10. If you are running Solaris 10, then zpool
 version 29 features are not implemented.
 
 Solaris 10 does have some deduplication support: it can import and read
 datasets in a deduped pool just fine. You can't enable dedup on a dataset,
 and any writes won't dedup; they will rehydrate.
 
 So it is more like partial dedup support rather than it not being there
 at all.
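
A concrete sketch of the behavior described above (pool and dataset names
are hypothetical):

zpool import tank          # a pool containing deduped datasets imports fine
zfs get dedup tank/fs      # the property is visible on existing datasets
zfs set dedup=on tank/fs   # but cannot be enabled on Solaris 10; new
                           # writes land fully rehydrated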

rehydrate???


Is it instant or freeze-dried?


Mike

---
Michael Sullivan   
m...@axsh.us
http://www.axsh.us/
Phone: +1-662-259-
Mobile: +1-662-202-7716



Re: [zfs-discuss] arc_no_grow is set to 1 and never set back to 0

2012-01-04 Thread David Blasingame

Well, it looks like the only place this gets changed is in
arc_reclaim_thread for OpenSolaris. I suppose you could dtrace it to see
what is going on, and investigate what is happening to the return code of
arc_reclaim_needed.


http://src.opensolaris.org/source/xref/onnv/onnv-gate/usr/src/uts/common/fs/zfs/arc.c#2089

maybe

dtrace -n 'fbt:zfs:arc_reclaim_needed:return { trace(arg1) }'
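
Or, to summarize instead of tracing every call (a sketch along the same
lines; on an fbt return probe, arg1 is the function's return value):

dtrace -n 'fbt:zfs:arc_reclaim_needed:return { @ret[arg1] = count(); } tick-60s { exit(0); }'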

Dave



 Original Message  
Subject: Re: [zfs-discuss] arc_no_grow is set to 1 and never set back to 0 
Date: Tue, 3 Jan 2012 23:50:04 + 
From: Peter Radig pe...@radig.de 
To: Tomas Forsman st...@acc.umu.se 
CC: zfs-discuss@opensolaris.org zfs-discuss@opensolaris.org 


Tomas,

Yup, same here.

free                              0
mem_inuse               16162750464
mem_total               17171480576

Page Summary                Pages                MB  %Tot
------------     ----------------  ----------------  ----
Kernel                     842305              3290   20%
ZFS File Data                2930                11    0%
Anon                        44038               172    1%
Exec and libs                3731                14    0%
Page cache                   8580                33    0%
Free (cachelist)             5504                21    0%
Free (freelist)           3284924             12831   78%

Total                     4192012             16375
Physical                  4192011             16375

I will create an SR with Oracle.

Thanks,
Peter

-Original Message-
From: Tomas Forsman [mailto:st...@acc.umu.se]
Sent: Wednesday, 4 January 2012 00:39
To: Peter Radig
Cc: zfs-discuss@opensolaris.org
Subject: Re: [zfs-discuss] arc_no_grow is set to 1 and never set back to 0

On 03 January, 2012 - Peter Radig sent me these 3,5K bytes:

 Hello.
 
 I have a Solaris 11/11 x86 box (which I migrated from SolEx 11/10 a couple
 of weeks ago).
 
 Without any obvious reason (at least to me), after an uptime of 1 to 2 days
 (observed 3 times now) Solaris sets arc_no_grow to 1 and then never sets it
 back to 0. The ARC is being shrunk to less than 1 GB -- needless to say,
 performance is terrible. There is not much load on this system.
 
 Memory seems not to be an issue (see below).
 
 I looked at the old Nevada code base of onnv_147 and can't find a reason
 for this happening.
 
 How can I find out what's causing this?

New code that seems to be counting wrong.. I was planning on filing a bug,
but am currently struggling to convince Oracle that we bought support..

Try this:
kstat -n zfs_file_data

In my case, I get:
free                          15322
mem_inuse               24324866048
mem_total               25753026560
.. where ::memstat says:
Kernel                    2638984             10308   42%
ZFS File Data               39260               153    1%
Anon                       873549              3412   14%
Exec and libs                5199                20    0%
Page cache                  20019                78    0%
Free (cachelist)             6608                25    0%
Free (freelist)           2703509             10560   43%

On another reboot, it refused to go over 130MB on a 24GB system..

 BTW: I was not seeing this on SolEx 11/10.

Ditto.

 Thanks,
 Peter
 
 
 
 *** ::memstat ***
 Page Summary                Pages                MB  %Tot
 ------------     ----------------  ----------------  ----
 Kernel                     860254              3360   21%
 ZFS File Data                3047                11    0%
 Anon                        38246               149    1%
 Exec and libs                3765                14    0%
 Page cache                   8517                33    0%
 Free (cachelist)             5866                22    0%
 Free (freelist)           3272317             12782   78%
 Total                     4192012             16375
 Physical                  4192011             16375
 
 mem_inuse            4145901568
 mem_total         1077466365952
 
 *** ::arc ***
 hits                      = 186279921
 misses                    =  14366462
 demand_data_hits          =   4648464
 demand_data_misses        =   8605873
 demand_metadata_hits      = 171803126
 demand_metadata_misses    =   3805675
 prefetch_data_hits        =    772678
 prefetch_data_misses      =   1464457
 prefetch_metadata_hits    =   9055653
 prefetch_metadata_misses  =    490457
 mru_hits                  =  12295087
 mru_ghost_hits            =         0
 mfu_hits                  = 175281066
 mfu_ghost_hits            =         0
 deleted                   =  14462192
 mutex_miss                =        30
 hash_elements             =   3752768
 hash_elements_max         =   3752770
 hash_collisions           =  11409790
 hash_chains               =      8256
 hash_chain_max            =        20
 p                         =     48 MB
 c                         =    781 MB
 c_min
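
A small watcher loop could capture the transition described in this thread
(a sketch; it assumes the zfs_file_data kstat shown above and that the
arc_no_grow kernel variable is visible to mdb -k):

while true; do
  date
  kstat -p zfs:0:arcstats:size zfs:0:arcstats:c   # ARC size and target size
  echo 'arc_no_grow/D' | mdb -k                   # 1 = ARC growth disabled
  kstat -p -n zfs_file_data | egrep 'free|mem_inuse|mem_total'
  sleep 60
done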

Re: [zfs-discuss] arc_no_grow is set to 1 and never set back to 0

2012-01-04 Thread Peter Radig
Thanks. The guys from Oracle are currently looking at some new code that was
introduced in arc_reclaim_thread() between b151a and b175.

Peter Radig, Ahornstrasse 34, 85774 Unterföhring, Germany
tel: +49 89 99536751 - fax: +49 89 99536754 - mobile: +49 171 2652977
email: pe...@radig.de


Re: [zfs-discuss] arc_no_grow is set to 1 and never set back to 0

2012-01-04 Thread Richard Elling
On Jan 4, 2012, at 8:49 AM, Peter Radig wrote:

 Thanks. The guys from Oracle are currently looking at some new code that was 
 introduced  in arc_reclaim_thread() between b151a and b175.

Closed source strategy loses again!
 -- richard


Re: [zfs-discuss] arc_no_grow is set to 1 and never set back to 0

2012-01-04 Thread Steve Gonczi
The interesting bit is what happens inside arc_reclaim_needed(), 
that is, how it arrives at the conclusion that there is memory pressure. 

Maybe we could trace arg0, which gives the location where 
we left the function. That would pinpoint which return path 
arc_reclaim_needed() took. 
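
Something like this might do it (a sketch; on an fbt return probe, arg0 is
the offset of the return instruction within the function, so the
aggregation keys distinguish the return sites):

dtrace -n 'fbt:zfs:arc_reclaim_needed:return { @sites[arg0] = count(); }'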

Steve 


- Original Message -

 Well, it looks like the only place this gets changed is in
 arc_reclaim_thread for OpenSolaris. I suppose you could dtrace it to see
 what is going on, and investigate what is happening to the return code of
 arc_reclaim_needed.

 dtrace -n 'fbt:zfs:arc_reclaim_needed:return { trace(arg1) }'

 Dave






[zfs-discuss] Stress test zfs

2012-01-04 Thread grant lowe
Hi all,

I've got Solaris 10 9/10 running on a T3. It's an Oracle box with 128GB of
memory. I've been trying to load test the box with bonnie++. I can get 80
to 90K writes, but can't seem to get more than a couple K reads. Any
suggestions? Or should I take this to a bonnie++ mailing list? Any help is
appreciated. I'm kinda new to load testing. Thanks.


Re: [zfs-discuss] Stress test zfs

2012-01-04 Thread Hung-Sheng Tsao (laoTsao)
What is your storage?
Internal SAS or an external array?
What is your ZFS setup?


Sent from my iPad

On Jan 4, 2012, at 17:59, grant lowe glow...@gmail.com wrote:

 Hi all,
 
 I've got Solaris 10 9/10 running on a T3. It's an Oracle box with 128GB of 
 memory. I've been trying to load test the box with bonnie++. I can get 80 
 to 90K writes, but can't seem to get more than a couple K reads. Any 
 suggestions? Or should I take this to a bonnie++ mailing list? Any help is 
 appreciated. I'm kinda new to load testing. Thanks.


Re: [zfs-discuss] Stress test zfs

2012-01-04 Thread Erik Trimble

On 1/4/2012 2:59 PM, grant lowe wrote:

Hi all,

I've got Solaris 10 9/10 running on a T3. It's an Oracle box with 128GB 
of memory. I've been trying to load test the box with bonnie++. I can 
get 80 to 90K writes, but can't seem to get more than a couple K reads. 
Any suggestions? Or should I take this to a bonnie++ mailing list? Any 
help is appreciated. I'm kinda new to load testing. Thanks.


Also, note that bonnie++ is single-threaded, and a T3's single-thread 
performance isn't stellar, by any means. It's entirely possible you're 
CPU-bound during the test.
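
One way to rule that out is to run several bonnie++ instances in parallel
and see whether the aggregate throughput scales (a sketch; the dataset
paths, file size, and user are placeholders):

mkdir -p /pool/bench/1 /pool/bench/2 /pool/bench/3 /pool/bench/4
for i in 1 2 3 4; do
  bonnie++ -d /pool/bench/$i -s 262144 -u nobody &   # -s: file size in MB
done
wait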


Though, a list of your ZFS config would be nice, as previously mentioned...

-Erik
