Re: rdiff-backup memory problems

2008-05-08 Thread David
  If you are looking for a replacement, I don't know of any that do rdiffs
  besides rdiff-backup. I think that a good incremental backup would be your
  best option.

All incrementals (that I know of) waste space when there are large
files where only a small part of the file changes. This is a problem
for me - many of our users have 1-2 GB Outlook files. I don't want to
use up an extra 1GB+ of storage for those users every time I make a
backup.


  It looks like all the stuff with making the hardlinks and temp directory is
  to avoid a potential conflict between the existing rdiff-backup-data
  directory on backup1 and the other rdiff-backup-data directory that gets
  written to on backup2. If backup1 and backup2 both have rdiff-backup
  installed then you can do something like


The hardlinks and temp directories have nothing (specific) to do with
the interaction between backup1 and backup2. Their only purpose is to
allow me to combine rsync and rdiff-backup.

i.e.:

1) rsync from the remote location to a local path (without breaking
the rdiff-backup store).

2) Tell rdiff-backup to pull from this local path into its store, so
I get rdiff-backup history.

The reason for using hard links is to conserve space: if 'files' is
100 GB, I don't want the 'temp' directory to also be 100 GB. Also, it's
much faster to make hardlinks than to physically copy all the bytes
over from 'files' to 'temp'.
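
Roughly, the sequence looks like this (paths and hostnames here are
only illustrative, not my actual setup):

  # 1) hardlink-copy the current tree so rsync has a cheap work area;
  #    unchanged files stay hardlinked and use no extra space
  cp -al /backup/files /backup/temp

  # 2) refresh the work area from the remote machine; rsync writes
  #    changed files as new inodes, so only those break their links
  rsync -a --delete remotehost:/export/data/ /backup/temp/

  # 3) pull the refreshed tree into the rdiff-backup store for history
  rdiff-backup /backup/temp /backup/files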

I exclude rdiff-backup metadata when I sync from 'files' to 'new' for
a few reasons (example command after the list):

1) The source (where I am rsyncing from) won't (shouldn't) have it, so
it will get erased anyway when I run the rsync.

2) In the event that the source does have a rdiff-backup-data
directory for some reason, I need to exclude it anyway, because it
will cause problems with rdiff-backup.
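
That exclude is just an rsync filter rule, e.g. (paths illustrative):

  rsync -a --delete --exclude='rdiff-backup-data/' /backup/files/ /backup/new/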

I use the same method on backup2 (to back up backup1) because I have
the same needs as on backup1 (to back up all the other servers,
workstations, etc.):

- I want history of the source
- I want to conserve space

The main difference is that backup2 contains the sum of all files on
backup1, plus their rdiff-backup metadata (in sub-dirs, not under the
root of backup1 where it would cause problems for backup2). In theory
there shouldn't be any major problem with this approach.

rdiff-backup's docs say that a large number of files shouldn't cause
excessive memory usage, but that hardlinks can - if you're hard
linking thousands of files together.

This fact and our discussion give me an idea: I think that
rdiff-backup *might* have a bug, where if the source and dest files
are all hardlinked to each other, it will use extra memory.

  rdiff-backup backup1::/backup/files /backup/path/on/backup2 \
    --exclude '**rdiff-backup-data**'

  on backup2. This avoids making hardlinks and a temp directory and also avoids
  your problem of having the two rdiff-backup-data directories conflicting.


This is worth testing. If my non-standard usage (all those hardlinks
between source and dest files) is causing excessive memory usage on
backup2 (due to the huge number of files involved), this should fix
the problem. backup1 will also have this problem, but to a lesser
degree (it has 90-odd backups, instead of 1 huge single backup).

I'll look into it.

(It will be a bit of a hack: currently my backup logic is split into
2 separate steps for each backup, regardless of type - files, db, etc.
First get the data from the source, then compress/make history/etc.
I'll have to add a new 'source' type for 'rdiff-backup', where there
is no applicable 'compress' logic.)

Thanks for your feedback :-)

David.





rdiff-backup memory problems

2008-05-07 Thread David
Hi list.

I've posted this problem to a few other lists over the past few days
(rdiff-backup, my local LUG), but I haven't had a reply yet, so I
thought I'd try here.

Short version: rdiff-backup is using 2 GB of memory (1 GB RAM, 1 GB
swap) on one of my backup servers. I'm using the latest Etch versions,
so there shouldn't be a memory leak in librsync.

Here is my (somewhat long) mail to my LUG:

http://lists.clug.org.za/pipermail/clug-tech/2008-May/040532.html

Does anyone on this list have suggestions?

Thanks in advance,

David.





Re: rdiff-backup memory problems

2008-05-07 Thread Matthew Dale Moore
I read your CLUG post. It seems like you should be able to do everything that
you want using rdiff-backup, without your temp work directory and rsync
(which look to be messing things up).

Also, if you are using rdiff-backup on backup1, why do you need to preserve 
file history on backup2? Shouldn't the copy of backup1 on backup2 also 
contain the rdiff-backup-data directory? If this is the case then you can 
just use rsync to move the backup from backup1 to backup2.

MM





Re: rdiff-backup memory problems

2008-05-07 Thread David
Hi there and thanks for your reply.

On Wed, May 7, 2008 at 6:39 PM, Matthew Dale Moore
[EMAIL PROTECTED] wrote:
 I read your CLUG post. It seems like you should be able to do everything that
  you want using rdiff-backup, without your temp work directory and rsync
  (which look to be messing things up).

Last time I checked, rdiff-backup only works over a network if you
have rdiff-backup on the other side. This means that for Windows boxes
we would need to install Cygwin etc. If there was a simple Windows
installer for rdiff-backup (similar to DeltaCopy for rsync) it would
be another story.

Also, I don't trust rdiff-backup as much as I do rsync. It seems a bit
too complicated/fragile by comparison. Rsync is very robust, simple,
and works every time. The only reason I use rdiff-backup is because of
its reverse delta support. I would prefer to replace rdiff-backup if
possible, rather than rsync.

And finally, we already have rsync on most of the workstations (after
a long period of phasing it in, to enable faster backups than with SMB
shares). There would need to be a strong reason to change from rsync
(on the machines being backed up) to rdiff-backup.


  Also, if you are using rdiff-backup on backup1, why do you need to preserve
  file history on backup2? Shouldn't the copy of backup1 on backup2 also
  contain the rdiff-backup-data directory? If this is the case then you can
  just use rsync to move the backup from backup1 to backup2.


This is for a few reasons:

1) I'm using the same backup script on both servers (with different
config). It would be extra work to disable the rdiff-backup part.
2) If backup1 loses data, and backup2's backup runs, I don't want to
lose the data from backup1 at that time.
3) I also want to keep history for the entire backup1 (not just the
backups). This is so I can restore the entire backup1 server as it
was X days ago if there are problems.

David





Re: rdiff-backup memory problems

2008-05-07 Thread Matthew Dale Moore
On Wednesday 07 May 2008 11:58:20 am David wrote:
 Also, I don't trust rdiff-backup as much as I do rsync. It seems a bit
 too complicated/fragile by comparison. Rsync is very robust, simple,
 and works every time. The only reason I use rdiff-backup is because of
 its reverse delta support. I would prefer to replace rdiff-backup if
 possible, rather than rsync.

If you are looking for a replacement, I don't know of any that do rdiffs 
besides rdiff-backup. I think that a good incremental backup would be your 
best option.

It looks like all the stuff with making the hardlinks and temp directory is
to avoid a potential conflict between the existing rdiff-backup-data 
directory on backup1 and the other rdiff-backup-data directory that gets 
written to on backup2. If backup1 and backup2 both have rdiff-backup 
installed then you can do something like

rdiff-backup backup1::/backup/files /backup/path/on/backup2 \
  --exclude '**rdiff-backup-data**'

on backup2. This avoids making hardlinks and a temp directory and also avoids 
your problem of having the two rdiff-backup-data directories conflicting.

MM





Memory problems, zombie processes

2001-09-30 Thread Dope on Plaztic,,,
Hi

Please forgive me as this is my first post, but I think I have the format
right ;)

I have been experiencing problems on my Debian box (2.2r2, sid dist, PIII
500MHz, 128MB RAM). The problems include 'zombie processes', i.e. issuing a
'cat' of a file, and it does not happen, so I have to Ctrl+C, and I see
'cat defunct' in `ps` output. I've experienced other problems too,
concerning 'wait' and 'no child process' errors.

The following example is from `make`, when making a new kernel, although
I've experienced the same errors with `tar`, among others.

make[1]: Entering directory `/usr/src/linux/arch/i386/boot'
rm -f tools/build
Putting child 0x08074140 (clean) PID 745 on the chain.
Live child 0x08074140 (clean) PID 745
make[1]: *** wait: No child processes.  Stop.
make[1]: *** Waiting for unfinished jobs
Live child 0x08074140 (clean) PID 745
make[1]: *** wait: No child processes.  Stop.
Got a SIGCHLD; 1 unreaped children.
Reaping losing child 0x0808a928 PID 744
make: *** [archclean] Error 2
Removing child 0x0808a928 PID 744  from chain.
([EMAIL PROTECTED]):/usr/src/linux#

That is a `make` run with the debug flag, for a better understanding of what
is going on (I don't get it!). Me and a few friends have tried to figure out
what the problem is, but to no avail. Trying new RAM, etc. has not worked.
The thing is, I've experienced these errors with two Debian computers I
have, so I get the idea it's something I have done -- I have NO clue what!
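
For reference, one way to list the defunct processes (standard ps
options; output columns may vary slightly):

  # zombies show state 'Z' in the STAT column
  ps -eo pid,ppid,stat,comm | awk '$3 ~ /^Z/'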

Please email me if you need any more info; I will keep a sharp eye on the
list over the week to see replies.

Thanks,
pip



Re: Memory problems, zombie processes

2001-09-30 Thread Dmitriy
On Sun, Sep 30, 2001 at 02:04:12PM +, Dope on Plaztic,,, wrote:
 Hi
 
 Please forgive me as this is my first post, but I think I have the format
 right ;)
 
 I have been experiencing problems on my Debian box (2.2r2, sid dist, PIII
 500MHz, 128MB RAM). The problems include 'zombie processes' [...] Trying
 new RAM, etc. has not worked.

Hmmm... if you tried changing the RAM, perhaps it is something on the
motherboard that makes the RAM malfunction?

I hope you are not overclocking.

I suggest you try the memtest86 package, which puts a memtest86 image in
your /boot, which you can boot and let run for a while.

Maybe that will help.
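
Roughly like this (the lilo.conf stanza is from memory, so check the
package docs for the exact image name):

  apt-get install memtest86
  # add a boot entry to /etc/lilo.conf, something like:
  #   image=/boot/memtest86.bin
  #   label=memtest86
  # then re-run lilo so the new entry appears at the boot prompt
  lilo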

 

-- 
GPG key-id: 1024D/5BE3DCFD Dmitriy
CCAB 5F17 A099 9E43 1DBE  295C 9A21 2F1C 5BE3 DCFD

Free Dmitry Sklyarov!  http://www.freesklyarov.org




grep and memory problems with kernel 2.4.1

2001-02-19 Thread Thomas Braun
Hello group, does anyone know of this problem with kernel 2.4.1?

I do:  cd /

grep -r hallo *

and then memory is exhausted and the network goes down.


cu thomas.



Re: grep and memory problems with kernel 2.4.1

2001-02-19 Thread David B . Harris
To quote Thomas Braun [EMAIL PROTECTED],
# I do:  cd /
#
# grep -r hallo *
#
# and then memory is exhausted and the network goes down.

Well, since you're specifying -r, it's going recursively through
subdirectories ... I don't know for sure, but maybe it's running into
some problems with some of the devices in /dev? :) Remember, most of
them are just like files, you can 'grep' them all you want, even if it's
not always a good idea ;)

I doubt this would be specific to 2.4.1, though. Have you tried it on
2.2.x? Do you get the same errors (or similar ones) there?
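
A safer way to search from the root while skipping the device files,
for instance (one possible incantation):

  # prune /dev and /proc so grep never touches device nodes
  find / -path /dev -prune -o -path /proc -prune -o \
    -type f -exec grep -l hallo {} \;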

David Barclay Harris, Clan Barclay
Aut agere, aut mori. (Either action, or death.)



RE: grep and memory problems with kernel 2.4.1

2001-02-19 Thread Joris Lambrecht
the /dev directory indeed just lists a list of names wich are linked to
device driver files through the inode table

so in fact you're grep-in the output of the /dev, if this contains some
control chars it might hang your grep command, you should* be able to kill
this from another console

greetings,

joris




Re: grep and memory problems with kernel 2.4.1

2001-02-19 Thread Moritz Schulte
David B. Harris [EMAIL PROTECTED] writes:

 # I do:  cd /
 #
 # grep -r hallo *
 #
 # and then memory is exhausted and the network goes down.
 
 Well, since you're specifying -r, it's going recursively through
 subdirectories ... I don't know for sure, but maybe it's running into
 some problems with some of the devices in /dev? :)

Yes, /dev/zero for example.

I guess your system is running out of memory because you don't have user
limits set up? By specifying limits you can decide how much memory
(processes, number of open files, CPU time, etc.) a user is allowed to
use. You can specify these limits in /etc/security/limits.conf; don't
forget to activate this feature in /etc/pam.d/login (and /etc/pam.d/su?).
Btw: you can get information about your current limits via 'ulimit -a'.
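
For example (the values are only illustrative):

  # /etc/security/limits.conf: cap each user's address space at ~128 MB
  *       hard    as      131072

  # /etc/pam.d/login: enable the limits module
  session required        pam_limits.so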

hth,
moritz
-- 
Moritz Schulte [EMAIL PROTECTED] http://www.chaosdorf.de/moritz/
Debian/GNU supporter - http://www.debian.org/ http://www.gnu.org/
GPG fingerprint = 3A14 3923 15BE FD57 FC06  B501 0841 2D7B 6F98 4199



Re: grep and memory problems with kernel 2.4.1

2001-02-19 Thread David B . Harris
To quote Joris Lambrecht [EMAIL PROTECTED],
# the /dev directory indeed just lists a list of names which are linked
# to device driver files through the inode table
#
# so in fact you're grepping the output of /dev; if this contains some
# control chars it might hang your grep command. you should* be able to
# kill this from another console

Umm... more than that; since -r is passed to grep, and those devices
are, for the most part, regular files, they themselves are grepped.

For instance, 'grep -r hello /*' will eventually lead to grepping
/dev/hda. You'll be grepping your entire bloody harddrive. :)

David Barclay Harris, Clan Barclay
Aut agere, aut mori. (Either action, or death.)



Memory Problems on Compaq Prosignia w/Potato

2001-02-07 Thread Ian Smith
Hi,

I've just recently installed Potato on a Compaq Prosignia 300 
Server with 64M of RAM.  Checking free, however, shows only 
13M.  The Bios reports the correct amount on startup, and the 
previous OS (NT) also had no problems with the memory.  I tried 
adding an append=mem=64M line to Lilo.conf, but that has not 
helped.  I thought that the kernel was supposed to autodetect up to 
64M anyway.  Should I just do a clean reinstall, or does anyone 
have an idea of how to fix this?

thanks,

-Ian
Ian W. Smith
Systems Engineer
Systems Integration Group
Fourth Phase New Jersey
 Westside Ave.
North Bergen, NJ 07047
(201) 758-4315
[EMAIL PROTECTED]



Re: Memory Problems on Compaq Prosignia w/Potato

2001-02-07 Thread ktb
On Wed, Feb 07, 2001 at 01:33:17PM -0500, Ian Smith wrote:
 Hi,
 
 I've just recently installed Potato on a Compaq Prosignia 300
 Server with 64M of RAM.  Checking free, however, shows only
 13M. [...] I tried adding an append=mem=64M line to Lilo.conf, but
 that has not helped. [...]

The append line should work.  Just to make sure: did you run lilo after
making the changes in /etc/lilo.conf?  That is the only thing I can
think of.  What do dmesg and /proc/meminfo say in terms of memory?
Maybe there is some possibility of free not reporting correctly?
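
i.e., something along these lines (the grep pattern is just one way to
spot the kernel's memory line):

  # in /etc/lilo.conf the append line is usually written with quotes:
  #   append="mem=64M"
  lilo                      # re-run so the change takes effect at next boot
  dmesg | grep -i memory    # what the kernel detected at boot
  head -2 /proc/meminfo     # totals as the kernel currently sees them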
hth,
kent

-- 
From seeing and seeing the seeing has become so exhausted
First line of The Panther - R. M. Rilke




Re: Memory Problems on Compaq Prosignia w/Potato

2001-02-07 Thread brian moore
On Wed, Feb 07, 2001 at 01:33:17PM -0500, Ian Smith wrote:
 Hi,
 
 I've just recently installed Potato on a Compaq Prosignia 300
 Server with 64M of RAM.  Checking free, however, shows only
 13M. [...] Should I just do a clean reinstall, or does anyone
 have an idea of how to fix this?

Check your BIOS configuration for something like 'Memory hole at 15M' or
similar.  This was needed for some old ISA cards that needed to fit
their memory mapped addresses in without conflicting with system memory.
It's highly unlikely you actually have such cards in your system, so you
should disable that setting.

That, of course, assumes that what you're talking about in 'free' is the
'Mem: total' value.   That should be equal to the size of the kernel +
RAM in the system.  If you're talking about 'Mem: free', well, that
depends entirely on what you've been running lately, and will usually
work its way down to only a few megs regardless of how much RAM you
have.

-- 
CueCat decoder .signature by Larry Wall:
#!/usr/bin/perl -n
printf Serial: %s Type: %s Code: %s\n, map { tr/a-zA-Z0-9+-/ -_/; $_ = unpack
'u', chr(32 + length()*3/4) . $_; s/\0+$//; $_ ^= C x length; } /\.([^.]+)/g; 



Re: Memory Problems on Compaq Prosignia w/Potato

2001-02-07 Thread Mark Lamers
Ian Smith wrote:
 
 Hi,
 
 I've just recently installed Potato on a Compaq Prosignia 300
 Server with 64M of RAM.  Checking free, however, shows only
 13M.  The Bios reports the correct amount on startup, and the
 previous OS (NT) also had no problems with the memory.  I tried
 adding an append=mem=64M line to Lilo.conf, but that has not
 helped.

I had the same problem with SuSE on my Prosignia and found that there is
a special program on Compaq's site; it's called the Compaq System
Configuration Utility. You have to run this from a DOS partition to
upgrade your BIOS with the new memory amount.

http://www5.compaq.com/support/files/server/us/locate/8_1138.html#System
ROMPaqs/BIOS

good luck
 Mark Lamers



Memory problems after kernel compilation

2000-08-12 Thread Ronald Castillo
Hi...  Several days ago I recompiled my kernel with support for apmd, sound
and some other things.  But since I did that, I've had several problems with
my computer's memory.  For example, the Myth II game won't run and my
just-installed xmms MP3 player skips a lot.  Did I do anything wrong?  Thanks
for your help!!!



Memory problems after kernel compilation SOLVED

2000-08-12 Thread Ronald Castillo
I found out it wasn't a compilation problem but a problem with Loadlin.
Now I have to boot from floppy because I don't want to mess up LILO.




Memory problems

1998-08-26 Thread Simon Holgate
I'm using a laptop which only has 4Mb of RAM (640k standard, 3Mb
extended). Despite this I've been having lots of fun with it. It swaps
a lot, and then today I noticed that top and free only show 2Mb of RAM!
The BIOS check is fine and it used to run Windoze 3.1 OK. Any idea why
the extra memory isn't found by Linux?

Thanks,

Simon

-- 

Simon Holgate,Tel: (+1) (250) 721 6080
Centre for Earth and Ocean Research,(FAX) 721 6200 
University of Victoria,
P.O. Box 3055,   E-mail: [EMAIL PROTECTED]
Victoria, B.C.  
CANADA. V8W 3P6 http://george.seos.uvic.ca/people/simon/simon.html




memory problems (sort of :) (was: iso9660 in 2.0.34)

1998-07-16 Thread E.L. Meijer (Eric)
Matthew Collins wrote:

[...]
 I have a hard enough time remembering the name of things I've
 installed. :)

I know the feeling.  Try typing `dpkg -l' and you will see what you
installed.
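
For instance (the pattern is arbitrary):

  dpkg -l            # every installed package, with version and description
  dpkg -l 'xmms*'    # or only the packages matching a glob pattern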

Eric

-- 
 E.L. Meijer ([EMAIL PROTECTED])  | tel. office +31 40 2472189
 Eindhoven Univ. of Technology | tel. lab.   +31 40 2475032
 Lab. for Catalysis and Inorg. Chem. (TAK) | tel. fax+31 40 2455054




Memory problems web page?

1998-01-20 Thread Ben Pfaff
Can someone point me to the webpage that gives a list of things to
check when you get sporadic segfaults from kernel compiles and the
like?  I think I might have a defective RAM chip (ECC RAM at that) and
wanted to check out the possibilities.




Re: Memory problems web page?

1998-01-20 Thread jdassen
On Tue, Jan 20, 1998 at 08:32:30AM -0500, Ben Pfaff wrote:
 Can someone point me to the webpage that gives a list of things to check
 when you get sporadic segfaults from kernel compiles and the like?  I
 think I might have a defective RAM chip (ECC RAM at that) and wanted to
 check out the possibilities.

Do you mean http://www.bitwizard.nl/sig11/ ?

Ray
-- 
Furthermore, I am of the opinion that the Netherlands ought to be covered over.




Re: Memory problems web page?

1998-01-20 Thread Ben Pfaff
   On Tue, Jan 20, 1998 at 08:32:30AM -0500, Ben Pfaff wrote:
Can someone point me to the webpage that gives a list of things to check
when you get sporadic segfaults from kernel compiles and the like?  I
think I might have a defective RAM chip (ECC RAM at that) and wanted to
check out the possibilities.

   Do you mean http://www.bitwizard.nl/sig11/ ?

Yes.  Thanks for the URL.




Re: Memory problems

1997-07-28 Thread Franck LE GALL - STAGIAIRE A FT.BD/CNET/DTD/PIH
-  I had problems too adding 2*16MB (32MB) and my solution was to
- take out the two 16MB cards.
-  I set a little cron job to run memtest every 5 minutes, and every
- time I got a disk problem (couldn't get a free inode, or something
- like that) I also got a memory error.
-  It's now 4 days w/o any problems both in disk and memory, and w/o
- my 32 MB too ;(
-  I don't know if the problem is with memory physically corrupted or
- some bad configuration in the BIOS.



Is it 60 ns EDO memory?
Did you try configuring a different access time in the BIOS?

Franck




Memory problems

1997-07-25 Thread Franck LE GALL - STAGIAIRE A FT.BD/CNET/DTD/PIH
Hello,

I am using Debian 1.3 with a P166+.

I used to have 16 MB (2*8) of 60 ns EDO memory and I had no problems.

I bought 32 MB (2*16) of 60 ns EDO memory (not the same as above),
so I now have 48 MB of memory.

There seem to be no problems with Windows NT 4.0, but when I run Linux
I get disk problems (too many inodes...).

If I configure my BIOS for 70 ns memory, the problem disappears.


Does anyone have any idea about it?

Thanks
Franck




Re: Memory problems

1997-07-25 Thread Mario Olimpio de Menezes
On Fri, 25 Jul 1997, Franck LE GALL - STAGIAIRE A FT.BD/CNET/DTD/PIH wrote:

 Hello,

Hi,

 There seem to be no problems with Windows NT 4.0, but when I run Linux
 I get disk problems (too many inodes...).

I had problems too adding 2*16MB (32MB), and my solution was to
take out the two 16MB cards.

I set a little cron job to run memtest every 5 minutes, and every
time I got a disk problem (couldn't get a free inode, or something
like that) I also got a memory error.

It's now 4 days w/o any problems both in disk and memory, and w/o
my 32 MB too ;(

I don't know if the problem is with memory physically corrupted or
some bad configuration in the BIOS.
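
The cron entry looks something like this (the memtest binary and its
options here are hypothetical; use whatever your local memory tester
provides):

  # run a quick memory test every 5 minutes, logging any errors
  */5 * * * *  /usr/local/bin/memtest 2>&1 | grep -i error >> /var/log/memtest.log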


[]s,
   Mario O.de Menezes mailto:[EMAIL PROTECTED] 
 | Nuclear and Energetic Research Institute - IPEN-CNEN/SP  BRAZIL | 
 | http://curiango.ipen.br/~mario  |
 There will be a day when every PC on the world will be a
   host, not a 'MyComputer'! - mom
 




Re: Memory problems

1997-07-25 Thread Ben Gertzfield
Franck LE GALL - STAGIAIRE A FT.BD/CNET/DTD/PIH [EMAIL PROTECTED] writes:

 I used to have 16 MB (2*8) of 60 ns EDO memory and I had no problems.
 I bought 32 MB (2*16) of 60 ns EDO memory (not the same as above),
 so I now have 48 MB of memory.
 There seem to be no problems with Windows NT 4.0, but when I run Linux
 I get disk problems (too many inodes...).
 If I configure my BIOS for 70 ns memory, the problem disappears.

Windows is notorious for not caring about bad memory; Linux is not
very good at dealing with bad hardware.

I'll bet you have some bad SIMMs.

-- 
Brought to you by the letters J and F and the number 16.
Mmm.. Soylent Green.. -- Homer Simpson
Ben Gertzfield http://www.imsa.edu/~wilwonka/ Finger me for my public
PGP key. I'm on FurryMUCK as Che, and EFNet and YiffNet IRC as Che_Fox.

