On Wed, Oct 27, 2010 at 3:56 PM, David Magda wrote:
> If the OP doesn't have a test system available, it may be possible to try
> this multi-replace experiment using plain files as the backing store
> (created with mkfile).
... or via a VirtualBox, VMware, or other virtualization instance.
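For reference, a minimal sketch of such a file-backed test pool. The pool name tanktest and the /var/tmp paths are made up for illustration; mkfile is Solaris-specific, and this needs root plus a ZFS-capable system:

```shell
# Create backing files for a throwaway test pool.
cd /var/tmp
mkfile 256m disk0 disk1 disk2 spare0 spare1   # Linux/FreeBSD: truncate -s 256m ...

# File vdevs must be given as absolute paths.
zpool create tanktest raidz /var/tmp/disk0 /var/tmp/disk1 /var/tmp/disk2

# Try the multi-replace: issue two replaces back to back.
zpool replace tanktest /var/tmp/disk0 /var/tmp/spare0
zpool replace tanktest /var/tmp/disk1 /var/tmp/spare1
zpool status tanktest    # watch the resilvers

zpool destroy tanktest   # clean up the experiment
```

This lets you rehearse the whole procedure without risking real disks.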
-B
--
Finding PCIe x1 cards with more than 2 SATA ports is difficult so you
might want to make sure that either your chosen motherboard has lots
of PCIe slots or has some wider slots. If you plan on using on-board
video and re-using the x16 slot for something else, you should verify
that the BIOS wil
Hi!
I'm interested in the ZIL's behavior.
My understanding is that the ZIL is used only for synchronous write
requests (O_SYNC, O_DSYNC, and direct I/O), and that after a crash the
contents written to the ZIL are replayed to recover those system calls.
Is this correct?
It seems a large amount of io for system cal
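A hedged sketch of the distinction the question describes. It assumes an existing dataset tank/fs mounted at /tank/fs; oflag=sync is the GNU dd spelling, and the per-dataset sync property only exists on newer builds:

```shell
# Synchronous writes: each write(2) must be stable before returning,
# so the data is logged in the ZIL (and replayed from it after a crash).
dd if=/dev/zero of=/tank/fs/syncfile bs=8k count=1000 oflag=sync

# Asynchronous writes: no ZIL record; the data reaches disk with the
# next transaction-group commit instead.
dd if=/dev/zero of=/tank/fs/asyncfile bs=8k count=1000

# Per-dataset override (newer builds only):
zfs set sync=always tank/fs     # log every write in the ZIL
zfs set sync=disabled tank/fs   # never log; unsafe for apps on crash
```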
On Oct 27, 2010, at 21:17, Brandon High wrote:
You may be able to replace more than one drive at the same time this
way. I've never tried it, and you should test before attempting to do
so.
If the OP doesn't have a test system available, it may be possible to
try this multi-replace experiment using plain files as the backing
store (created with mkfile).
On Thu, Oct 7, 2010 at 12:22 AM, Kevin Walker wrote:
> I would like to know if it is viable to add larger disks to zfs pool to grow
> the pool size and then remove the smaller disks?
>
> I would assume this would degrade the pool and require it to resilver?
You can do a zfs replace without removi
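A hedged sketch of that replace-to-grow procedure on a two-way mirror. The device names are hypothetical, and autoexpand requires a reasonably recent zpool version:

```shell
# Let the pool grow automatically once every disk in the vdev is larger.
zpool set autoexpand=on tank

# Replace one small disk with a larger one; the pool stays online but
# runs with reduced redundancy until the resilver completes.
zpool replace tank c0t0d0 c0t2d0
zpool status tank                # wait for "resilver completed"

# Then replace the other side of the mirror.
zpool replace tank c0t1d0 c0t3d0
zpool list tank                  # capacity grows after the last resilver
```

Replacing one disk at a time keeps at least one intact copy of the data throughout.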
Mike Gerdts writes:
[...]
Thanks for suggestions and I have closed it all up to see if there was
a difference.
> Perhaps this belongs somewhere other than zfs-discuss - it has nothing
> to do with zfs.
Yes, it does. It started out much nearer to belonging here.
Not sure now how to switch to
Peter Jeremy writes:
> See the archives for lots more discussion on suggested systems for ZFS.
Any suggested search strings? Maybe at search.gmane.org
It would be too lucky to expect that someone has a list of good (up to
date) setups a home NAS builder could be inspired by, eh?
I know there is a
On Wed, Oct 27, 2010 at 3:41 PM, Harry Putnam wrote:
> I'm guessing it was probably more like 60 to 62 C under load. The
> temperature I posted was after something like 5 minutes of being
> totally shut down and the case being open for a long while (months
> if not years).
What happens if the case is
Peter Jeremy writes:
>>It seems there ought to be something, some kind of evidence and clues
>>if I only knew how to look for them, in the logs.
>
> Serious hardware problems are unlikely to be in the logs because the
> system will die before it can write the error to disk and sync the
> disks.
On 2010-Oct-28 04:54:00 +0800, Harry Putnam wrote:
>If I were to decide my current setup is too problem-beset to continue
>using it, is there a guide or some good advice I might employ to scrap
>it out and build something newer and better in the old roomy midtower?
I'd scrap the existing PSU as w
On 2010-Oct-28 04:45:16 +0800, Harry Putnam wrote:
>Short of doing such a test, I have evidence already that the machine
>will predictably shut down after 15 to 20 minutes of uptime.
My initial guess is thermal issues. Check that the fans are running
correctly and there's no dust/fluff buildup on the
On Wed, 27 Oct 2010, Harry Putnam wrote:
I have been having some trouble with corrupted data in one pool but
I thought I'd gotten it cleared up and posted to that effect in
another thread.
zpool status on all pools shows thumbs up.
What are some key words I should be looking for in /var/adm/mes
Krunal Desai writes:
> With an A64, I think a thermal shutdown would instantly halt CPU
> execution, removing the chance to write any kind of log message.
> memtest will report any errors in RAM; perhaps when the ARC expands to
> the upper-stick of memory it hits the bad bytes and crashes.
>
> Ca
* Harry Putnam (rea...@newsguy.com) wrote:
> Toby Thain writes:
>
> > On 27/10/10 4:21 PM, Krunal Desai wrote:
> >> I believe he meant a memory stress test, i.e. booting with a
> >> memtest86+ CD and seeing if it passed.
> >
> > Correct. The POST tests are not adequate.
>
> Got it. Thank you.
With an A64, I think a thermal shutdown would instantly halt CPU
execution, removing the chance to write any kind of log message.
memtest will report any errors in RAM; perhaps when the ARC expands to
the upper-stick of memory it hits the bad bytes and crashes.
Can you try switching power supplies
If I were to decide my current setup is too problem-beset to continue
using it, is there a guide or some good advice I might employ to scrap
it out and build something newer and better in the old roomy midtower?
I don't mean the hardware part, although I no doubt will need advice
right through tha
Toby Thain writes:
> On 27/10/10 4:21 PM, Krunal Desai wrote:
>> I believe he meant a memory stress test, i.e. booting with a
>> memtest86+ CD and seeing if it passed.
>
> Correct. The POST tests are not adequate.
Got it. Thank you.
Short of doing such a test, I have evidence already that the machine
will predictably shut down after 15 to 20 minutes of uptime.
I created a 1TB file on my new FreeNAS 0.7.2 Sabanda (revision 5226)
box recently using dd, in order to get an idea of write performance,
and when I deleted it the space was not freed.
Snapshots are not enabled:
bunker:~# zfs list -t all
NAME   USED  AVAIL  REFER  MOUNTPOINT
tank0 1.
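Some hedged checks for the usual causes of "deleted but space not freed" (the mountpoint /mnt/tank0 is an assumption; fuser -c is the Solaris form, FreeBSD would use fstat):

```shell
# 1. A snapshot taken before the delete pins the blocks.
zfs list -t snapshot -r tank0

# 2. A process still holding the deleted file open pins the space
#    until it closes the descriptor.
fuser -c /mnt/tank0              # FreeBSD: fstat -f /mnt/tank0

# 3. Freeing can lag the delete slightly; compare again after a minute.
zfs list tank0
df -h /mnt/tank0
```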
Krunal Desai writes:
> I believe he meant a memory stress test, i.e. booting with a
> memtest86+ CD and seeing if it passed. Even if the memory is OK, the
> stress from that test may expose defects in the power supply or other
> components.
>
> Your CPU temperature is 56C, which is not out-of-line for most
> modern CPUs.
On 27/10/10 4:21 PM, Krunal Desai wrote:
> I believe he meant a memory stress test, i.e. booting with a
> memtest86+ CD and seeing if it passed.
Correct. The POST tests are not adequate.
--Toby
> Even if the memory is OK, the
> stress from that test may expose defects in the power supply or other
> components.
I believe he meant a memory stress test, i.e. booting with a
memtest86+ CD and seeing if it passed. Even if the memory is OK, the
stress from that test may expose defects in the power supply or other
components.
Your CPU temperature is 56C, which is not out-of-line for most modern
CPUs (you didn't
Toby Thain writes:
> On 27/10/10 3:14 PM, Harry Putnam wrote:
>> It seems my hardware is getting bad, and I can't keep the os running
>> for more than a few minutes until the machine shuts down.
>>
>> It will run 15 or 20 minutes and then shut down.
>> I haven't found the exact reason for it.
>>
On Wed, October 27, 2010 15:07, Roy Sigurd Karlsbakk wrote:
> - Original Message -
>> Ok, so I did it again... I moved my disks around without doing export
>> first.
>> I promise - after this I will always export before messing with the
>> disks. :-)
>>
>> Anyway - the problem. I decided to
On 27/10/10 3:14 PM, Harry Putnam wrote:
> It seems my hardware is getting bad, and I can't keep the os running
> for more than a few minutes until the machine shuts down.
>
> It will run 15 or 20 minutes and then shut down.
> I haven't found the exact reason for it.
>
One thing to try is a thorou
On Mon, Oct 25, 2010 at 2:46 AM, Cuyler Dingwell wrote:
> I have a zpool that once it hit 96% full the performance degraded horribly.
> So, in order to get things better I'm trying to clear out some space. The
> problem I have is after I've deleted a directory it no longer shows on the
> file
It seems my hardware is getting bad, and I can't keep the OS running
for more than a few minutes until the machine shuts down.
It will run 15 or 20 minutes and then shut down.
I haven't found the exact reason for it, or really anything in the
logs that seems like a reason.
It may be because I don't k
- Original Message -
> Ok, so I did it again... I moved my disks around without doing export
> first.
> I promise - after this I will always export before messing with the
> disks. :-)
>
> Anyway - the problem. I decided to rearrange the disks due to cable
> lengths and case layout. I disc
On Wed, Oct 27, 2010 at 9:27 AM, bhanu prakash wrote:
> Hi Mike,
>
>
> Thanks for the information...
>
> Actually the requirement is like this. Please let me know whether it
> matches the requirement below.
>
> Question:
>
> The SAN team will assign the new LUNs on EMC DMX4 (currently
Hi Mike,
Thanks for the information...
Actually the requirement is like this. Please let me know whether it
matches the requirement below.
*Question*:
The SAN team will assign the new LUNs on EMC DMX4 (currently IBM
Hitachi is there). We need to move the 17 containers which are exi
Ok, so I did it again... I moved my disks around without doing export first.
I promise - after this I will always export before messing with the disks. :-)
Anyway - the problem. I decided to rearrange the disks due to cable lengths and
case layout. I disconnected the disks and moved them around.
On Tue, Oct 26, 2010 at 5:21 PM, Matthieu Fecteau
wrote:
> My question : in the event that there's no more common snapshot between Site
> A and Site B, how can we replicate again ? (example : Site B has a power
> failure and then Site A cleanup his snapshots before Site B is brought back,
> so