Can the problem come from the SAS/SATA controller? I have an IBM M1015 (SAS,
for the first vdev) and an LSI (a cheap one, SATA, for the second).
On 23/10/2013 18:16, Richard Elling wrote:
On Oct 22, 2013, at 11:46 PM, Clement BRIZARD clem...@brizou.fr wrote:
I cleared the degraded disk. We will see what happens in 131 hours
Yes, clearing is the proper procedure.
The predicted time to complete is usually wildly inaccurate until you get near
the end of resilvering or scrubbing.
On Oct 23, 2013, at 2:01 AM, Clement BRIZARD clem...@brizou.fr wrote:
The disks are in a Fractal XL case, which has little rubber pads to damp
vibrations.
As long as it works I will leave it. When I have some money I will build
a proper server
Once the resilvering is done, should
On Oct 20, 2013, at 6:52 AM, Chris Murray chrismurra...@gmail.com wrote:
Hi all,
I'm hoping for some troubleshooting advice. I have an OpenIndiana
oi_151a8 virtual machine which was functioning correctly on vSphere 5.1
but now isn't on vSphere 5.5 (ESXi-5.5.0-1331820-standard)
A small
On Sep 9, 2013, at 11:09 AM, Simon Toedt simon.to...@gmail.com wrote:
On Mon, Sep 9, 2013 at 7:52 PM, Peter Tribble peter.trib...@gmail.com wrote:
Hi,
topic says it all. I want to install OpenIndiana on a UFS filesystem. No
typo. I do not want to use ZFS on my boot disk. Can you choose what
On Aug 19, 2013, at 4:02 AM, Edward Ned Harvey (openindiana)
openindi...@nedharvey.com wrote:
From: Steve Goldthorpe [mailto:openindi...@waistcoat.org.uk]
Sent: Sunday, August 18, 2013 12:23 PM
No matter what I try I can't seem to get a 4K aligned root pool using the
OpenIndiana installer
Hi Willem,
On Aug 14, 2013, at 10:49 AM, w...@vandenberge.us wrote:
Good morning,
Last week we put three identical oi_151a7 systems into pre-production. Each
system has 240 drives in 9-drive RAIDZ1 vdevs (I'm aware of the potential DR
issues with this configuration and I'm ok with them in
On Aug 7, 2013, at 2:50 PM, Jason Lawrence jjlaw...@gmail.com wrote:
This might be a better question for the Illumos group, so please let me know.
I have a zvol for a KVM instance which I felt was taking up too much space.
After doing a little research, I stumbled upon
On Aug 5, 2013, at 3:58 AM, Gary Gendel g...@genashor.com wrote:
When I reboot my machine, fmstat always shows 12 counts for zfs-* categories.
fmdump and fmdump -e don't report anything and I don't see anything in the
logs of the current or previous BE (when applicable). I'm at a bit of a
a little weak in the snapshot area, and throw big sighs when
asked to scrub a disk hehe.
My question here was about the various ways of using a zfs box.
Richard Elling did some comparisons of vdev layouts, calculating mean time
to data loss:
Yes, on my todo list is to update with more
On Jul 25, 2013, at 3:21 PM, James Relph ja...@themacplace.co.uk wrote:
Hi Karl,
I think we need more information to be able to help.
Have you enabled mpxio? Have a look at the stmsboot command.
mpxio is enabled.
What kind of QLogic card do you have? OEM or original QLogic, and which model?
On Jul 11, 2013, at 9:30 AM, Laurent Blume laurent...@elanor.org wrote:
On 2013-07-11 6:56 PM, James Carlson wrote:
I've been using it for a while, first on OpenSolaris.
Yes, me too, on and off until S11.1, when I dumped it for good because
it annoyed me one time too many. I do know the
On Jun 18, 2013, at 5:34 AM, Sebastian Gabler sequoiamo...@gmx.net wrote:
On 18.06.2013 06:15, openindiana-discuss-requ...@openindiana.org wrote:
Message: 7
Date: Mon, 17 Jun 2013 17:00:37 -0700
From: Richard Elling richard.ell...@richardelling.com
To: Discussion list for OpenIndiana
On Jun 17, 2013, at 7:12 AM, Sebastian Gabler sequoiamo...@gmx.net wrote:
Hi,
it occurred to me that obviously some ZFS storage systems only feature a
single SAS HBA, including the ZFSSA 7320. At least, as far as I understand.
From what I saw in the 7320 documentation, each of the two HBA
On Jun 17, 2013, at 1:36 PM, Sebastian Gabler sequoiamo...@gmx.net wrote:
Dear Bill, Peter, Richard, and Saso.
Thanks for the great comments.
Now, changing to reverse gear, isn't it more likely to lose data by having a
pool that spans across multiple HBAs than if you connect all drives
On Jun 16, 2013, at 3:11 PM, Alberto Picón Couselo alpic...@gmail.com wrote:
Hi, Saso
I don't think there's any in-kernel support. But before you go out on a
software digging expedition into clustered filesystems, have you made sure
that you *really* need it? High Availability does not
On Apr 21, 2013, at 3:47 AM, Jim Klimov jimkli...@cos.ru wrote:
On 2013-04-21 06:13, Richard Elling wrote:
Terminology warning below…
BER is the term most often used in networks, where the corruption is
transient. For permanent
data faults, the equivalent is unrecoverable read error
mirrors, which is a reason for raid-z3, but this may
already be a less likely case.
Richard Elling wrote a blog post about mean time to data loss [1]. A
few years later he graphed out a few cases for typical values of
resilver times [2].
[1]https://blogs.oracle.com/relling/entry
comment below…
On Apr 18, 2013, at 5:17 AM, Edward Ned Harvey (openindiana)
openindi...@nedharvey.com wrote:
From: Timothy Coalson [mailto:tsc...@mst.edu]
Did you also compare the probability of bit errors causing data loss
without a complete pool failure? 2-way mirrors, when one device
/entry/a_story_of_two_mttdl
Thanks for pointing at that. I stand corrected with my previous statement
about Richard's MTTDL model excluding BER/UER. I ask Richard Elling to
accept my apology.
No worries.
Unfortunately, Oracle totally hosed the older Sun blogs. I do have on my todo
list
[catching up... comment below]
On Apr 18, 2013, at 2:03 PM, Timothy Coalson tsc...@mst.edu wrote:
On Thu, Apr 18, 2013 at 10:24 AM, Sebastian Gabler
sequoiamo...@gmx.net wrote:
On 18.04.2013 16:28, wrote
clarification below...
On Apr 16, 2013, at 2:44 PM, Sašo Kiselkov skiselkov...@gmail.com wrote:
On 04/16/2013 11:37 PM, Timothy Coalson wrote:
On Tue, Apr 16, 2013 at 4:29 PM, Sašo Kiselkov skiselkov...@gmail.com wrote:
If you are IOPS constrained, then yes, raid-zn will be slower, simply
For the context of ZPL, easy answer below :-) ...
On Apr 16, 2013, at 4:12 PM, Timothy Coalson tsc...@mst.edu wrote:
On Tue, Apr 16, 2013 at 6:01 PM, Jim Klimov jimkli...@cos.ru wrote:
On 2013-04-16 23:56, Jay Heyl wrote:
result in more devices being hit for both read and write. Or am I
Julien,
Good idea. Please file an RFE at illumos.org, thanks
-- richard
On Apr 14, 2013, at 7:44 AM, Julien Ramseier j.ramse...@gmail.com wrote:
Hi there,
I was playing with OI 151a7, and I noticed a strange noise
coming from my hard drive each time the system was shut down.
After some
On Apr 14, 2013, at 8:15 AM, Wim van den Berge w...@vandenberge.us wrote:
Hello,
We have been running OpenIndiana (and its various predecessors) as storage
servers in production for the last couple of years. Over that time the
majority of our storage infrastructure has been moved to Open
On Mar 28, 2013, at 11:04 PM, Shvayakov A. a.shvaya...@gmail.com wrote:
I found this: https://www.illumos.org/issues/1437
But I am not sure that it will be without problems.
Does anybody know the reason for this limitation?
WAG. The old limit was 32 because nobody would ever need more
On Mar 16, 2013, at 5:02 PM, Richard Elling richard.ell...@richardelling.com
wrote:
there is a way to get this info from mdb... I added a knowledge base article
on this at Nexenta a few years ago, lemme see if I can dig it up from my
archives…
And the winner is:
echo ::mptsas -t
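The full invocation is elided in the snippet; a minimal sketch, assuming the usual mdb convention for piping a dcmd into the live kernel target:

```shell
# Sketch only: run the mpt_sas target dcmd against the running kernel.
# Requires root; output format varies by illumos release.
echo "::mptsas -t" | mdb -k
```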
-- richard
On Mar 15, 2013, at 11:22 PM, Richard L. Hamilton rlha...@smart.net wrote:
Running on something older (SXCE snv_97 on
On Feb 16, 2013, at 3:59 PM, Sašo Kiselkov skiselkov...@gmail.com wrote:
On 02/17/2013 12:52 AM, Grant Albitz wrote:
Yes Jim, I actually used something similar to enable the 9000 MTU; that's why
I wasn't familiar with the config file method.
dladm set-linkprop -p mtu=9000 InterfaceName
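To confirm the property took effect, something like the following should work (the interface name is an example, not from the original post):

```shell
# Show current, persistent, default, and possible MTU values for the link.
dladm show-linkprop -p mtu e1000g0
```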
On Feb 8, 2013, at 6:33 AM, real-men-dont-cl...@gmx.net wrote:
Hello,
given the lack of encryption in current open-source ZFS I came across the
so-called self-encrypting disks (e.g. HGST UltraStar A7K2000 BDE 1000GB).
Did anybody try to use them under OI so far?
Soon, many, if not all,
This is a bug in the mpt_sas driver. I'm not sure of the RTI date, but I
believe it was
scheduled to be fixed soon. I've CC'ed Dan McDonald who has been working in this
area. He'll know for sure :-)
-- richard
On Feb 7, 2013, at 1:20 AM, Randy S sim@live.nl wrote:
Hi,
Thanks for the
On Nov 16, 2012, at 4:19 PM, Jim Klimov jimkli...@cos.ru wrote:
On 2012-11-17 00:46, Roel_D wrote:
How about teaming? Is it supported under OI?
My memory serves me not worse than google: teaming is one of the
umbrella terms to describe what is implemented by LACP - a means
of
On Nov 1, 2012, at 1:24 AM, Jim Klimov jimkli...@cos.ru wrote:
On 2012-11-01 01:47, Richard Elling wrote:
Finally, a data point: using MTU of 1500 with ixgbe you can hit wire speed
on a
modern CPU.
There is no CSMA/CD on gigabit and faster available from any vendor today.
Everything
On Oct 31, 2012, at 5:53 AM, Roy Sigurd Karlsbakk r...@karlsbakk.net wrote:
2012-10-30 19:21, Sebastian Gabler wrote:
Whereas that's relative: performance is still at a quite miserable
62
MB/s through a gigabit link. Apparently, my environment has room for
improvement.
Does your gigabit
On Oct 31, 2012, at 3:37 AM, Jim Klimov jimkli...@cos.ru wrote:
2012-10-31 13:58, Sebastian Gabler wrote:
2012-10-30 19:21, Sebastian Gabler wrote:
Whereas that's relative: performance is still at a quite miserable 62
MB/s through a gigabit link. Apparently, my environment has room for
On Oct 28, 2012, at 5:10 AM, Robin Axelsson gu99r...@student.chalmers.se
wrote:
On 2012-10-24 21:58, Timothy Coalson wrote:
On Wed, Oct 24, 2012 at 6:17 AM, Robin Axelsson
gu99r...@student.chalmers.se wrote:
It would be interesting to know how you convert a raidz2 stripe to say a
raidz3
On Oct 22, 2012, at 9:13 AM, James Carlson carls...@workingcode.com wrote:
Daniel Kjar wrote:
I have this problem with any VM running on either Sol10 Nevada,
Opensolaris, openindiana. I have the ARC restricted now but for some
reason, and 'people at sun that know these things' have mentioned
On Oct 19, 2012, at 3:51 PM, Dan Swartzendruber dswa...@druber.com wrote:
Hi, all. I've got an issue that is bugging me. I've got an OI 151a7 VM and
ssh to it takes 15 seconds or so, then I get a prompt. It's not the usual
reverse dns or gssapi stuff, since my backup node is also OI 151a7
On Oct 15, 2012, at 3:00 PM, heinrich.vanr...@gmail.com wrote:
Most of my storage background is with EMC CX and VNX and that is used in a
vast amount of datacenters.
They run a process called sniffer that runs in the background and requests a
read of all blocks on each disk individually
On Oct 8, 2012, at 4:07 PM, Martin Bochnig mar...@martux.org wrote:
Marilio,
at first a reminder: never ever detach a disk before you have a third
disk that already completed resilvering.
The term detach is misleading, because it detaches the disk from the
pool. Afterwards you cannot
On Oct 8, 2012, at 2:07 PM, Roel_D openindi...@out-side.nl wrote:
I still think this whole discussion is like renting a 40 meter long truck to
move your garden hose.
We all know that it is possible to rent such a truck but nobody tries to roll
up the hose
SSD's are good for fast
On Sep 29, 2012, at 6:46 AM, Bryan N Iotti ironsides.med...@gmail.com wrote:
Hi all,
thought you'd like to know the following...
I have my rpool on a 146GB SCSI 15K rpm disk.
I regularly back it up with the following sequence of commands:
- zfs snapshot -r rpool@DATE
- cd to backup
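The sequence above is cut off; a sketch of how such a backup typically continues, under assumed names (the /backup path and stream filename are illustrative, not from the original post):

```shell
# Recursive snapshot of the root pool, then a full replication stream to a file.
DATE=$(date +%Y%m%d)
zfs snapshot -r rpool@"$DATE"
zfs send -R rpool@"$DATE" > /backup/rpool-"$DATE".zfs
```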
On Sep 27, 2012, at 5:15 PM, Reginald Beardsley pulask...@yahoo.com wrote:
--- On Thu, 9/27/12, Richard Elling richard.ell...@richardelling.com wrote:
zfs_scrub_delay = 100
a bit extreme, but probably ok
zfs_scan_idle = 1000
no, you'll want to make this smaller.
OK, Thanks
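For reference, a hedged sketch of how tunables like these are inspected and changed on a live illumos kernel (the value is illustrative; the in-kernel change does not survive a reboot unless also set in /etc/system):

```shell
# Read the current value (decimal), then lower it in the running kernel.
echo "zfs_scan_idle/D" | mdb -k
echo "zfs_scan_idle/W 0t50" | mdb -kw   # 0t prefix means decimal
# Persistent form, in /etc/system:
#   set zfs:zfs_scan_idle = 50
```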
On Sep 27, 2012, at 8:44 AM, Reginald Beardsley pulask...@yahoo.com wrote:
The only thing google turned up was stop the scrub if it impacts performance
too badly which is not really all that helpful. Or ways to speed up scrubs
resilvers.
On modern ZFS implementations, scrub I/O is
--- On Thu, 9/27/12, Richard Elling richard.ell...@richardelling.com wrote:
From: Richard Elling richard.ell...@richardelling.com
Subject: Re: [OpenIndiana-discuss] Mitigating the performance impact of scrub
To: Discussion list for OpenIndiana openindiana-discuss@openindiana.org
Date: Thursday
On Sep 27, 2012, at 3:24 PM, Reginald Beardsley pulask...@yahoo.com wrote:
--- On Thu, 9/27/12, Richard Elling richard.ell...@richardelling.com wrote:
From: Richard Elling richard.ell...@richardelling.com
Subject: Re: [OpenIndiana-discuss] Mitigating the performance impact of scrub
On Sep 25, 2012, at 4:19 AM, Jim Klimov jimkli...@cos.ru wrote:
2012-09-25 11:52, Armin Maier wrote:
Hello, is there an easy way to find out when the last update occurred to
a ZFS filesystem; my goal is to only make a backup of a filesystem when
something has changed. At this time I make it
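One way to answer this, assuming a ZFS build new enough to have the `written` property (dataset and snapshot names are examples):

```shell
# Bytes written since the most recent snapshot; 0 means nothing has changed.
zfs get -H -o value written tank/fs
# Bytes written since a specific snapshot, e.g. the one the last backup used.
zfs get -H -o value written@last-backup tank/fs
```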
On Sep 24, 2012, at 10:29 PM, Jaco Schoonen j...@macuser.nl wrote:
After studying all the information about 4K-disks I figured out that to
get more space in my server I need to create a new pool, consisting of
4K-disks and then moving everything from the old 512-byte pool to the new
one.
On Sep 24, 2012, at 2:22 AM, Gabriele Bulfon gbul...@sonicle.com wrote:
Hi,
I noticed that I usually have to grow the default swap installed by OI or
XStreamOS, because the default text installer sets it up following some rules
(stated inside the Python sources):
memorytype
On Sep 25, 2012, at 11:41 AM, Peter Tribble peter.trib...@gmail.com wrote:
On Tue, Sep 25, 2012 at 1:50 PM, Richard Elling
richard.ell...@richardelling.com wrote:
Use what you need. Most people don't need or want to use swap. Why?
Because...
if you have to swap, performance will suck
On Sep 24, 2012, at 2:30 PM, Jaco Schoonen j...@macuser.nl wrote:
Dear all,
After studying all the information about 4K-disks I figured out that to get
more space in my server I need to create a new pool, consisting of 4K-disks
and then moving everything from the old 512-byte pool to the
On Sep 11, 2012, at 10:46 AM, Ray Arachelian r...@arachelian.com wrote:
On 09/10/2012 09:14 AM, Sašo Kiselkov wrote:
I recommend losing some large unused app blobs that nobody needs on a
Live CD. I don't know what you've got in there, but I recommend you
throw out stuff like image editing
Thanks for the update, Bryan! Well done!
-- richard
On Aug 28, 2012, at 10:06 AM, Bryan N Iotti ironsides.med...@gmail.com wrote:
Folks,
just thought you'd like to know that the Veterinary Sciences Faculty of
the University Of Torino, Italy, is now running an open source PACS
DICOM
On Sep 6, 2012, at 8:08 AM, Roel_D openindi...@out-side.nl wrote:
Reading this it reminds me of the old days where IRQ's were important to
systems.
Those days my serial mouse could interfere with my modem.
But I thought those days were way back..
Interrupt conflicts are syslogged at
On Aug 6, 2012, at 5:15 AM, James Carlson wrote:
It's never been possible to mount NFS at boot.
Well, some of us old farts remember nd, and later, NFS-based diskless
workstations :-)
The current lack of support for diskless leaves an empty feeling in my heart :-P
-- richard
--
ZFS
On Jul 24, 2012, at 9:11 AM, Jason Matthews wrote:
are you missing a zero to the left of the decimal place?
Been there, done that, wrote a whitepaper. Add 2 zeros.
-- richard
Sent from Jason's handheld
On Jul 23, 2012, at 8:57 PM, John T. Bittner j...@xaccel.net wrote:
Subject: ZFS and
On Jul 20, 2012, at 12:01 PM, Bob Friesenhahn wrote:
On Fri, 20 Jul 2012, Ichiko Sakamoto wrote:
Hi, all
I have a disk that has many bad sectors.
I created a zpool with this disk and expected that
zpool would tell me the disk has many errors.
But zpool told me everything was fine until I
On Jul 5, 2012, at 10:13 AM, Reginald Beardsley wrote:
I had a power failure last night. The UPS alarms woke me up and I powered
down the systems. (some day I really will automate shutdowns) It's also been
quite hot (90 F) in the room where the computer is.
At boot the BIOS on the HP
On Jul 6, 2012, at 12:59 AM, Richard L. Hamilton wrote:
It's been my impression on a few occasions that a disk with very limited
damage might have any bad areas discovered and effectively repaired by a
scan; even on a semi-modern (e.g. older Fibre Channel) disk, the
manufacturer's and
On Jul 2, 2012, at 2:49 PM, Rich wrote:
Hm, we appear to have been discussing a different problem, which is
fascinating.
I have a number of devices which are in the Supermicro SC846A-R1200
chassis - which has no expanders, just 6 SFF-8087 ports on it, running
into LSI 9201-16i
On Jun 25, 2012, at 2:06 PM, Ray Arachelian wrote:
On 06/25/2012 03:31 PM, michelle wrote:
I did a hard reset and moved the drive to another channel.
The fault followed the drive so I'm certain it is the drive, as people
have said.
The thing that bugs me is that this ZFS fault locked up
UFS root certainly works, but not sure if the OI installer makes it easy?
-- richard
On Jun 25, 2012, at 7:37 PM, Gordon Ross wrote:
UFS root should still work, also NFS root (convenient for ZFS debug work:)
On Mon, Jun 25, 2012 at 9:00 PM, Jan Owoc jso...@gmail.com wrote:
On Mon, Jun 25,
On Jun 11, 2012, at 6:08 PM, Bob Friesenhahn wrote:
On Mon, 11 Jun 2012, Jim Klimov wrote:
ashift=12 (2^12 = 4096). For disks which do not lie, it
works properly out of the box. The patched zpool binary
forced ashift=12 at the user's discretion.
It seems like new pools should provide the
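A sketch of how to check what a pool actually got (ashift 9 = 512-byte, 12 = 4K); the explicit `-o ashift=12` creation form is only available on builds that accept it, such as the patched zpool binary mentioned above or later OpenZFS:

```shell
# Inspect the ashift recorded in the pool's cached configuration.
zdb -C tank | grep ashift
# On builds that accept it, request 4K alignment explicitly at creation:
#   zpool create -o ashift=12 tank mirror c1t0d0 c1t1d0
```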
On Jun 10, 2012, at 7:45 AM, michelle wrote:
The system seems to have hung again, any query to the ZFS system hangs the
session.
Nothing in /var/adm/messages.
Try fmdump -eV
-- richard
I'm wondering whether the combination of having a mirrored zfs set with one
drive on e-sata and
On Jun 4, 2012, at 8:24 AM, Nick Hall wrote:
I'm considering buying a separate SSD drive for my ZIL as I do quite a bit
over NFS and would like the latency to improve. But first I'm trying to
understand exactly how the ZIL works and what happens in case of a problem.
I'll list my
On Jun 4, 2012, at 10:06 AM, Dan Swartzendruber wrote:
On 6/4/2012 11:56 AM, Richard Elling wrote:
On Jun 4, 2012, at 8:24 AM, Nick Hall wrote:
For NFS workloads, the ZIL implements the synchronous semantics between
the NFS server and client. The best way to get better performance is to have
On Jun 1, 2012, at 10:45 PM, Richard L. Hamilton wrote:
In a non-COW filesystem, one would expect that rewriting an already allocated
block would never fail for out-of-space (ENOSPC).
This seems like a rather broad assumption. It may hold for FAT or UFS, but
might not
hold for some of the
idea at the bottom...
On May 29, 2012, at 12:56 PM, Jason Cox wrote:
Let me start by saying that I am very new to OpenIndiana and Solaris
10/11 in general. I normally deal with Red Hat Linux. I wanted to use
OI for ZFS support for a vmware shared storage server to mount LUNs on
my SAN.
On May 23, 2012, at 8:31 PM, Jim Klimov wrote:
2012-05-24 3:50, Richard Elling wrote:
As a side note, it is then possible to augment GRUB to be
able to import and export an rpool and thus help IDE-SATA
migrations?
Go for it.
Huh... wait a couple of years, please. I'm better
On May 22, 2012, at 1:41 PM, Robbie Crash wrote:
Gaming iperf you can get close to theoretical maximums on wired connections,
but if you're just on a 10/100 network it looks like you've got everything
working properly. Real world performance (for me) sits at around 400Mb/sec
for medium (4-100MB)
On May 23, 2012, at 2:37 AM, Jim Klimov wrote:
2012-05-23 8:00, Richard Elling wrote:
This procedure is far too complex. Let's edit it...
Thanks... that seemed far too easy ;)
As a side note, it is then possible to augment GRUB to be
able to import and export an rpool and thus help IDE
On May 22, 2012, at 2:40 PM, Jason Matthews wrote:
Let me get this straight...
You installed the OS on the disk with the BIOS set to IDE. Later, you
changed the BIOS to AHCI and the system crashes when booting. Is that about
right?
Since the OS is not yet running, I don't consider it to
On May 22, 2012, at 12:36 PM, Jim Klimov wrote:
2012-05-22 23:29, Jim Klimov wrote:
There are workarounds, likely posted in archives of zfs-discuss
list and many other sources. If I google anything good up, I'll
post a link here :)
What do you know? I posted some myself, and found those
On May 1, 2012, at 8:41 PM, Tim Dunphy wrote:
hello list
I have attempted to enable link aggregation on my oi 151 box using the
command dladm create-aggr -d e1000g0 -d e1000g1 1 then I plumbed it
with an address of 192.168.1.200 and echoed 192.168.1.1
defaultrouter
I noticed that my
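For comparison, a sketch of the aggregation setup the post describes, using the same example addresses and the legacy ifconfig/defaultrouter style (names and the aggregation key are illustrative):

```shell
# Create aggregation key 1 over both e1000g NICs, then plumb and address it.
dladm create-aggr -d e1000g0 -d e1000g1 1
ifconfig aggr1 plumb 192.168.1.200 netmask 255.255.255.0 up
echo 192.168.1.1 > /etc/defaultrouter   # default route, picked up at boot
```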
On May 2, 2012, at 12:25 AM, Mark wrote:
There are two issues.
The first is correct partition alignment, the second ashift value.
In theory, I haven't tested this yet, manually creating the slices with a
start position to sector 64 and using slices instead of whole disks for the
zpool
On Apr 29, 2012, at 7:38 PM, Gordon Ross wrote:
On Sun, Apr 29, 2012 at 8:46 PM, Richard Elling
richard.ell...@richardelling.com wrote:
On Apr 29, 2012, at 11:45 AM, George Wilson wrote:
[...]
Speaking of 4K sectors, I've taken a slightly different approach that fixes
this outside
On Apr 29, 2012, at 11:45 AM, George Wilson wrote:
On Apr 29, 2012, at 1:28 PM, Roy Sigurd Karlsbakk wrote:
Also, I posted a bug report for it here
https://www.illumos.org/issues/2663
Thanks :-). We can now track the progress of the OI-specific
discussion about this issue.
Seems
On Apr 24, 2012, at 12:35 PM, Roy Sigurd Karlsbakk wrote:
Hi all
There was a discussion some time back about some (or most?) SSDs not honoring
cache flushes, that is, something is written to, say, the SLOG, and ZFS sends
a flush(), the SSD issues a NOP and falsely acknowledges the flush.
On Apr 23, 2012, at 6:27 AM, paolo marcheschi wrote:
HI
I see that there is a variant of opensolaris known as Omnios:
No, it is an illumos distribution.
http://omnios.omniti.com/
Is that related with Openindiana ?, Are there any advantages with it ?
It is designed for the server
On Apr 9, 2012, at 2:20 PM, Martin Frost wrote:
Is there some issue with sharing via both SMB/CIFS and NFS?
NFSv3 does not have ACLs. NFSv4 does have ACLs. So there is not a
problem with NFS, per se, but the version your clients use might not
understand ACLs.
-- richard
--
ZFS Performance
On Apr 1, 2012, at 12:48 PM, Hugh McIntyre wrote:
On 3/30/12 8:41 AM, Richard Elling wrote:
On Mar 30, 2012, at 2:01 AM, Harry Putnam wrote:
USB drives tend to ignore cache flush commands, which can appear as
unreliable disks. Shouldn't be much of a problem if you rarely plug them
On Mar 30, 2012, at 2:01 AM, Harry Putnam wrote:
Richard Elling richard.ell...@richardelling.com writes:
On Mar 26, 2012, at 12:34 PM, Jonathan Adams wrote:
Probably not the most reliable, but definitely the easiest, way to get
access to your data is to use USB disks because VirtualBox
On Mar 28, 2012, at 9:52 AM, Dan Swartzendruber wrote:
So I have an M1015 and it works fine. I noticed the other day I hotplugged a
crucial M4 into the last free port on the HBA, and later noticed in the dmesg
output:
Mar 27 17:55:40 nas genunix: [ID 483743 kern.info]
On Mar 28, 2012, at 11:24 AM, Dan Swartzendruber wrote:
On 3/28/2012 1:38 PM, Richard Elling wrote:
On Mar 28, 2012, at 9:52 AM, Dan Swartzendruber wrote:
So I have an M1015 and it works fine. I noticed the other day I hotplugged
a crucial M4 into the last free port on the HBA
On Mar 26, 2012, at 12:34 PM, Jonathan Adams wrote:
Probably not the most reliable, but definitely the easiest, way to get
access to your data is to use USB disks because VirtualBox allows
direct USB passthrough.
My boss uses OpenSolaris with USB drives to get access to the ZFS pool
running
On Mar 12, 2012, at 2:36 AM, Hans Joergensen wrote:
Hello,
I've been having this problem with several storage servers running
mostly NFS clients.. (ESXi). And I've seen it both on nexenta and
OI.
I/O seems to freeze/pause when write IOPS are high..
There are many possible reasons for
On Mar 6, 2012, at 6:18 AM, Jonathan Adams wrote:
/etc/passwd still exists for local users (root should always exist as
a local user) ... ldap is additional to it (and likewise should never
have root in it)
Actually, it is very useful to have an LDAP entry for root, that way you can
track
On Feb 25, 2012, at 8:49 PM, Anil Jangity wrote:
What I am really trying to do is, isolate the zone, but also at the same time
have it be able to talk the outside world. Going over the real link means
it would see all the wire traffic (broadcasts etc... from the rest of the
network, it
On Feb 20, 2012, at 6:38 AM, Robin Axelsson wrote:
Maybe the iostat behavior depends on the controller it monitors. Some
controllers such as the AMD SB950 in my case may not be as transparent with
errors as the LSI 1068e operating in IT mode.
Still, I find this to be too much of a