[issue39321] AMD64 FreeBSD Non-Debug 3.x: out of swap space (test process killed by signal 9)

2020-05-26 Thread STINNER Victor


STINNER Victor  added the comment:

Oh. I'm not sure that it works as expected :-(

AMD64 FreeBSD Non-Debug 3.x build 804, Finished 18 minutes ago:

https://buildbot.python.org/all/#/builders/214/builds/804
0:33:28 load avg: 4.64 [329/425/1] test_code_module passed -- running: 
test_multiprocessing_forkserver (2 min 13 sec)
*** Signal 9

--

___
Python tracker 

___
___
Python-bugs-list mailing list
Unsubscribe: 
https://mail.python.org/mailman/options/python-bugs-list/archive%40mail-archive.com



[issue39321] AMD64 FreeBSD Non-Debug 3.x: out of swap space (test process killed by signal 9)

2020-05-26 Thread STINNER Victor


STINNER Victor  added the comment:

> Added 8gb swap disk to each BB worker

Great! Thank you very much!

--




[issue39321] AMD64 FreeBSD Non-Debug 3.x: out of swap space (test process killed by signal 9)

2020-05-26 Thread Kubilay Kocak


Kubilay Kocak  added the comment:

Added 8gb swap disk to each BB worker

--
resolution:  -> fixed
status: open -> closed




[issue39321] AMD64 FreeBSD Non-Debug 3.x: out of swap space (test process killed by signal 9)

2020-04-30 Thread Kubilay Kocak


Kubilay Kocak  added the comment:

Provisioning new/additional swap to both FreeBSD BB workers in the next few 
days. Apologies for the delay.

--




[issue39321] AMD64 FreeBSD Non-Debug 3.x: out of swap space (test process killed by signal 9)

2020-04-30 Thread STINNER Victor


STINNER Victor  added the comment:

The worker still has the same issue:
https://buildbot.python.org/all/#/builders/214/builds/674
0:19:08 load avg: 3.11 running: test_multiprocessing_forkserver (1 min 38 sec)
*** Signal 9

Any update on this issue, Koobs?

--




[issue39321] AMD64 FreeBSD Non-Debug 3.x: out of swap space (test process killed by signal 9)

2020-04-13 Thread STINNER Victor


STINNER Victor  added the comment:

koobs: Any update on this swap issue?

--




[issue39321] AMD64 FreeBSD Non-Debug 3.x: out of swap space (test process killed by signal 9)

2020-03-25 Thread STINNER Victor


STINNER Victor  added the comment:

New failure: https://buildbot.python.org/all/#/builders/214/builds/512

test.pythoninfo says:

datetime.datetime.now: 2020-03-25 18:59:08.424147
socket.hostname: 121-RELEASE-p2-amd64

/var/log/messages says:

Mar 25 18:41:13 121-RELEASE-p2-amd64 kernel: pid 65447 (python), jid 0, uid 
1002, was killed: out of swap space

121-RELEASE-p2-amd64% sysctl hw | egrep 'hw.(phys|user|real)'
hw.physmem: 1033416704
hw.usermem: 745279488
hw.realmem: 1073676288

=> 985.5 MB of memory

121-RELEASE-p2-amd64% sysctl vm|grep swap  
vm.swap_enabled: 1
vm.domain.0.stats.unswappable: 0
vm.swap_idle_threshold2: 10
vm.swap_idle_threshold1: 2
vm.swap_idle_enabled: 0
vm.disable_swapspace_pageouts: 0
vm.stats.vm.v_swappgsout: 5793651
vm.stats.vm.v_swappgsin: 3322252
vm.stats.vm.v_swapout: 1390626
vm.stats.vm.v_swapin: 875591
vm.nswapdev: 1
vm.swap_fragmentation: 
vm.swap_async_max: 4
vm.swap_maxpages: 1964112
vm.swap_total: 4294864896
vm.swap_reserved: 7942307840

=> 4095.9 MB of swap (total)

121-RELEASE-p2-amd64% swapinfo -h
Device          1K-blocks     Used    Avail Capacity
/dev/da0p3        4194204      70M     3.9G       2%

121-RELEASE-p2-amd64% swapinfo 
Device          1K-blocks     Used    Avail Capacity
/dev/da0p3        4194204    72164  4122040       2%
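For reference, the byte counts above convert as follows (a quick sketch; the numbers are hardcoded from this worker's sysctl output, not fetched live):

```python
# Quick sketch: convert the byte counts quoted above to MiB (values are
# hardcoded from this worker's sysctl output, not fetched live).
MIB = 1024 * 1024

physmem = 1033416704        # hw.physmem
swap_total = 4294864896     # vm.swap_total
swap_reserved = 7942307840  # vm.swap_reserved

print(f"RAM:           {physmem / MIB:.1f} MiB")        # 985.5 MiB
print(f"swap total:    {swap_total / MIB:.1f} MiB")     # 4095.9 MiB
print(f"swap reserved: {swap_reserved / MIB:.1f} MiB")  # 7574.4 MiB

# Reservations exceed the configured swap, i.e. the system is
# overcommitted -- consistent with the "out of swap space" kills.
print(swap_reserved > swap_total)  # True
```

Note that vm.swap_reserved is well above vm.swap_total, which fits the kernel killing processes for lack of swap.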

--
title: AMD64 FreeBSD Non-Debug 3.x: main regrtest process killed by SIGKILL 
(Signal 9) -> AMD64 FreeBSD Non-Debug 3.x: out of swap space (test process 
killed by signal 9)

___
Python tracker 
<https://bugs.python.org/issue39321>
___



[issue39321] AMD64 FreeBSD Non-Debug 3.x: main regrtest process killed by SIGKILL (Signal 9)

2020-03-19 Thread STINNER Victor


STINNER Victor  added the comment:

The bug still occurs from time to time. AMD64 FreeBSD Non-Debug 3.x:
https://buildbot.python.org/all/#/builders/214/builds/475

--




[issue39321] AMD64 FreeBSD Non-Debug 3.x: main regrtest process killed by SIGKILL (Signal 9)

2020-03-10 Thread Kubilay Kocak


Kubilay Kocak  added the comment:

Investigating

--




[issue39321] AMD64 FreeBSD Non-Debug 3.x: main regrtest process killed by SIGKILL (Signal 9)

2020-03-09 Thread STINNER Victor


STINNER Victor  added the comment:

> If it fails again in the same manner, please re-open

The issue is back. Two examples.

--

Today: https://buildbot.python.org/all/#/builders/214/builds/405

(...)
0:15:40 load avg: 3.11 [366/420] test_pickletools passed -- running: 
test_multiprocessing_forkserver (2 min 43 sec), test_multiprocessing_fork (1 
min 17 sec)
0:15:40 load avg: 3.11 [367/420] test_webbrowser passed -- running: 
test_multiprocessing_forkserver (2 min 43 sec), test_multiprocessing_fork (1 
min 18 sec)
0:15:42 load avg: 3.11 [368/420] test_codecmaps_hk passed -- running: 
test_multiprocessing_forkserver (2 min 45 sec), test_multiprocessing_fork (1 
min 19 sec)
fetching http://www.pythontest.net/unicode/BIG5HKSCS-2004.TXT ...
0:15:43 load avg: 3.11 [369/420] test_pprint passed -- running: 
test_multiprocessing_forkserver (2 min 45 sec), test_multiprocessing_fork (1 
min 20 sec)
*** Signal 9

--

1 day ago: https://buildbot.python.org/all/#/builders/214/builds/395

(...)
0:14:53 load avg: 3.29 [269/420/1] test_keywordonlyarg passed -- running: 
test_multiprocessing_forkserver (2 min 25 sec)
0:15:00 load avg: 2.94 [270/420/1] test_pprint passed -- running: 
test_multiprocessing_forkserver (2 min 31 sec)
0:15:00 load avg: 2.94 [271/420/2] test_io crashed (Exit code -9) -- running: 
test_multiprocessing_forkserver (2 min 31 sec)
0:15:05 load avg: 2.87 [272/420/2] test_positional_only_arg passed -- running: 
test_multiprocessing_forkserver (2 min 37 sec)
*** Signal 9

--
resolution: fixed -> 
status: closed -> open




[issue39321] AMD64 FreeBSD Non-Debug 3.x: main regrtest process killed by SIGKILL (Signal 9)

2020-01-13 Thread STINNER Victor


STINNER Victor  added the comment:

> Identified a kernel/userland mismatch which may have caused this. Have 
> restarted the server and worker, and will rebuild 
> https://buildbot.python.org/all/#/builders/214/builds/152

Aha, interesting bug. Thanks for fixing it ;-)

--
stage:  -> resolved
status: open -> closed




[issue39321] AMD64 FreeBSD Non-Debug 3.x: main regrtest process killed by SIGKILL (Signal 9)

2020-01-13 Thread Kubilay Kocak


Kubilay Kocak  added the comment:

Looks OK now: https://buildbot.python.org/all/#/builders/214

If it fails again in the same manner, please re-open

--
assignee:  -> koobs
resolution:  -> fixed




[issue39321] AMD64 FreeBSD Non-Debug 3.x: main regrtest process killed by SIGKILL (Signal 9)

2020-01-13 Thread Kubilay Kocak


Kubilay Kocak  added the comment:

Rebuilding now

--




[issue39321] AMD64 FreeBSD Non-Debug 3.x: main regrtest process killed by SIGKILL (Signal 9)

2020-01-13 Thread Kubilay Kocak


Kubilay Kocak  added the comment:

Identified a kernel/userland mismatch which may have caused this. Have 
restarted the server and worker, and will rebuild 
https://buildbot.python.org/all/#/builders/214/builds/152

--




[issue39321] AMD64 FreeBSD Non-Debug 3.x: main regrtest process killed by SIGKILL (Signal 9)

2020-01-13 Thread STINNER Victor


STINNER Victor  added the comment:

Same error https://buildbot.python.org/all/#builders/214/builds/138

--




[issue39321] AMD64 FreeBSD Non-Debug 3.x: main regrtest process killed by SIGKILL (Signal 9)

2020-01-13 Thread STINNER Victor


Change by STINNER Victor :


--
nosy: +koobs




[issue39321] AMD64 FreeBSD Non-Debug 3.x: main regrtest process killed by Signal 9

2020-01-13 Thread STINNER Victor


New submission from STINNER Victor :

https://buildbot.python.org/all/#/builders/214/builds/152

...
0:08:21 load avg: 3.66 [240/420] test_wait3 passed -- running: 
test_multiprocessing_forkserver (1 min 51 sec)
0:08:22 load avg: 3.66 [241/420] test_uuid passed -- running: 
test_multiprocessing_forkserver (1 min 53 sec)
0:08:25 load avg: 3.53 [242/420] test_tuple passed -- running: 
test_multiprocessing_forkserver (1 min 55 sec)
0:08:32 load avg: 3.56 [243/420] test___all__ passed -- running: 
test_multiprocessing_forkserver (2 min 3 sec)
*** Signal 9
Stop.
make: stopped in /usr/home/buildbot/python/3.x.koobs-freebsd-9e36.nondebug/build
program finished with exit code 1
elapsedTime=519.823452

--
components: Tests
messages: 359904
nosy: pablogsal, vstinner
priority: normal
severity: normal
status: open
title: AMD64 FreeBSD Non-Debug 3.x: main regrtest process killed by Signal 9
versions: Python 3.9




[issue39321] AMD64 FreeBSD Non-Debug 3.x: main regrtest process killed by SIGKILL (Signal 9)

2020-01-13 Thread STINNER Victor


STINNER Victor  added the comment:

It seems like Signal 9 is SIGKILL.
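A quick check confirms this (a POSIX-only sketch; the child command is just an illustration of how a SIGKILLed process shows up as "Exit code -9" in regrtest logs):

```python
# Quick check (POSIX-only sketch): signal 9 is SIGKILL, and a child
# killed by a signal shows up as a negative returncode, which is why
# regrtest logs lines like "test_io crashed (Exit code -9)".
import signal
import subprocess
import sys

print(signal.Signals(9).name)  # SIGKILL

proc = subprocess.run(
    [sys.executable, "-c",
     "import os, signal; os.kill(os.getpid(), signal.SIGKILL)"]
)
print(proc.returncode)  # -9
```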

--
title: AMD64 FreeBSD Non-Debug 3.x: main regrtest process killed by Signal 9 -> 
AMD64 FreeBSD Non-Debug 3.x: main regrtest process killed by SIGKILL (Signal 9)




[issue39321] AMD64 FreeBSD Non-Debug 3.x: main regrtest process killed by SIGKILL (Signal 9)

2020-01-13 Thread STINNER Victor


STINNER Victor  added the comment:

Same error on https://buildbot.python.org/all/#builders/214/builds/148

--




Re: Process Killed

2008-09-01 Thread dieter h
On Sat, Aug 30, 2008 at 11:07 AM, Eric Wertman [EMAIL PROTECTED] wrote:
>> I'm doing some simple file manipulation work and the process gets
>> "Killed" every time I run it. No traceback, no segfault... just the
>> word "Killed" in the bash shell and the process ends. The first few
>> batch runs would only succeed with one or two files being processed
>> (out of 60) before the process was "Killed". Now it makes no
>> successful progress at all. Just a little processing, then "Killed".
>
> This is the behavior you'll see when your OS has run out of some
> memory resource.  The kernel sends a signal 9 (SIGKILL).  I'm pretty
> sure that if you exceed a soft limit, your program will abort with an
> out-of-memory error.
>
> Eric


Eric, thank you very much for your response.
--
http://mail.python.org/mailman/listinfo/python-list


Re: Process Killed

2008-08-30 Thread Eric Wertman
> I'm doing some simple file manipulation work and the process gets
> "Killed" every time I run it. No traceback, no segfault... just the
> word "Killed" in the bash shell and the process ends. The first few
> batch runs would only succeed with one or two files being processed
> (out of 60) before the process was "Killed". Now it makes no
> successful progress at all. Just a little processing, then "Killed".

This is the behavior you'll see when your OS has run out of some
memory resource.  The kernel sends a signal 9 (SIGKILL).  I'm pretty
sure that if you exceed a soft limit, your program will abort with an
out-of-memory error.

Eric
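The soft-limit behaviour described above can be sketched like this (POSIX-only; the 512 MiB cap and the 2 GiB allocation are arbitrary illustration values -- with a capped address-space limit, a big allocation raises MemoryError instead of the kernel SIGKILLing the process):

```python
# Sketch of the soft-limit behaviour described above (POSIX-only; the
# 512 MiB cap and the 2 GiB allocation are arbitrary illustration values).
import resource

soft, hard = resource.getrlimit(resource.RLIMIT_AS)
resource.setrlimit(resource.RLIMIT_AS, (512 * 1024 * 1024, hard))

try:
    big = bytearray(2 * 1024 * 1024 * 1024)  # 2 GiB: exceeds the soft cap
    hit_limit = False
except MemoryError:
    hit_limit = True

print(hit_limit)  # True: the allocation fails instead of the process dying
```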


Re: Process Killed

2008-08-29 Thread Glenn Hutchings
dieter [EMAIL PROTECTED] writes:

> I'm doing some simple file manipulation work and the process gets
> "Killed" every time I run it. No traceback, no segfault... just the
> word "Killed" in the bash shell and the process ends. The first few
> batch runs would only succeed with one or two files being processed
> (out of 60) before the process was "Killed". Now it makes no
> successful progress at all. Just a little processing, then "Killed".
>
> Any Ideas? Is there a buffer limitation? Do you think it could be the
> filesystem?
> Any suggestions appreciated. Thanks.
>
> The code I'm running:
> ==
>
> from glob import glob
>
> def manipFiles():
>     filePathList = glob('/data/ascii/*.dat')
>     for filePath in filePathList:
>         f = open(filePath, 'r')
>         lines = f.readlines()[2:]
>         f.close()
>         f = open(filePath, 'w')
>         f.writelines(lines)
>         f.close()
>         print file

Have you checked memory usage while your program is running?  Your

    lines = f.readlines()[2:]

statement will need almost twice the memory of your largest file.  This
might be a problem, depending on your RAM and what else is running at the
same time.

If you want to reduce memory usage to almost zero, try reading lines from
the file and writing all but the first two to a temporary file, then
renaming the temp file to the original:

import os

infile = open(filePath, 'r')
outfile = open(filePath + '.bak', 'w')

for num, line in enumerate(infile):
    if num >= 2:
        outfile.write(line)

infile.close()
outfile.close()
os.rename(filePath + '.bak', filePath)

Glenn


Re: Process Killed

2008-08-29 Thread Paul Boddie
On 28 Aug, 07:30, dieter [EMAIL PROTECTED] wrote:

> I'm doing some simple file manipulation work and the process gets
> "Killed" every time I run it. No traceback, no segfault... just the
> word "Killed" in the bash shell and the process ends. The first few
> batch runs would only succeed with one or two files being processed
> (out of 60) before the process was "Killed". Now it makes no
> successful progress at all. Just a little processing, then "Killed".

It might be interesting to check the various limits in your shell. Try
this command:

  ulimit -a

Documentation can be found in the bash manual page. The limits include
memory size, CPU time, open file descriptors, and a few other things.

Paul
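The same limits that "ulimit -a" reports can also be read from within Python via the resource module (a sketch, POSIX-only; RLIM_INFINITY means unlimited):

```python
# Sketch (POSIX-only): the limits "ulimit -a" shows are also readable
# from Python through the resource module; RLIM_INFINITY means unlimited.
import resource

for name in ("RLIMIT_AS", "RLIMIT_DATA", "RLIMIT_CPU", "RLIMIT_NOFILE"):
    soft, hard = resource.getrlimit(getattr(resource, name))
    print(name, soft, hard)
```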


Re: Process Killed

2008-08-29 Thread Fredrik Lundh

dieter wrote:

> Any Ideas? Is there a buffer limitation? Do you think it could be the
> filesystem?

what does "ulimit -a" say?

/F

--
http://mail.python.org/mailman/listinfo/python-list


Re: Process Killed

2008-08-29 Thread Fredrik Lundh

Glenn Hutchings wrote:

> Have you checked memory usage while your program is running?  Your
>
>     lines = f.readlines()[2:]
>
> statement will need almost twice the memory of your largest file.


footnote: list objects contain references to string objects, not the 
strings themselves.  the above temporarily creates two list objects, but 
the actual file content is only stored once.


/F
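The footnote can be checked directly (a sketch; the sizes are illustrative):

```python
# Sketch of the point above: a list holds references, so a slice copies
# only the (small) pointer array, never the string data itself.
import sys

lines = ["x" * 1000 for _ in range(1000)]  # ~1 MB of string data total
tail = lines[2:]

print(sys.getsizeof(lines))  # just the list header + pointers (a few KB)
print(tail[0] is lines[2])   # True: both lists reference the same strings
```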



Re: Process Killed

2008-08-28 Thread Matt Nordhoff
dieter wrote:
> Hi,
>
> Overview
> ===
>
> I'm doing some simple file manipulation work and the process gets
> "Killed" every time I run it. No traceback, no segfault... just the
> word "Killed" in the bash shell and the process ends. The first few
> batch runs would only succeed with one or two files being processed
> (out of 60) before the process was "Killed". Now it makes no
> successful progress at all. Just a little processing, then "Killed".

That isn't a Python thing. Run "sleep 60" in one shell, then "kill -9"
the process in another shell, and you'll get the same message.

I know my shared web host has a daemon that does that to processes that
consume too many resources.

Wait a minute. If you ran this multiple times, won't it have removed the
first two lines from the first files multiple times, deleting some data
you actually care about? I hope you have backups...

> Question
> ===
>
> Any Ideas? Is there a buffer limitation? Do you think it could be the
> filesystem?
> Any suggestions appreciated. Thanks.
>
> The code I'm running:
> ==
>
> from glob import glob
>
> def manipFiles():
>     filePathList = glob('/data/ascii/*.dat')

If that dir is very large, that could be slow. Both because glob will
run a regexp over every filename, and because it will return a list of
every file that matches.

If you have Python 2.5, you could use glob.iglob() instead of
glob.glob(), which returns an iterator instead of a list.

>     for filePath in filePathList:
>         f = open(filePath, 'r')
>         lines = f.readlines()[2:]

This reads the entire file into memory. Even better, I bet slicing
copies the list object temporarily, before the first one is destroyed.

>         f.close()
>         f = open(filePath, 'w')
>         f.writelines(lines)
>         f.close()
>         print file

This is unrelated, but "print file" will just print "<type 'file'>",
because "file" is the name of a built-in object, and you didn't assign
to it (which you shouldn't anyway).


Actually, if you *only* ran that exact code, it should exit almost
instantly, since it does one import, defines a function, but doesn't
actually call anything. ;-)

> Sample lines in File:
>
> # time, ap, bp, as, bs, price, vol, size, seq, isUpLast, isUpVol,
> isCancel
>
> 1062993789 0 0 0 0 1022.75 1 1 0 1 0 0
> 1073883668 1120 1119.75 28 33 0 0 0 0 0 0 0
>
> Other Info
>
> - The file sizes range from 76 Kb to 146 Mb
> - I'm running on a Gentoo Linux OS
> - The filesystem is partitioned and using: XFS for the data
>   repository, Reiser3 for all else.

How about this version? (note: untested)

import glob
import os

def manipFiles():
    # If you don't have Python 2.5, use glob.glob instead.
    filePaths = glob.iglob('/data/ascii/*.dat')
    for filePath in filePaths:
        print filePath
        fin = open(filePath, 'rb')
        fout = open(filePath + '.out', 'wb')
        # Discard two lines
        fin.next(); fin.next()
        fout.writelines(fin)
        fin.close()
        fout.close()
        os.rename(filePath + '.out', filePath)

I don't know how light it will be on CPU, but it should use very little
memory (unless you have some extremely long lines, I guess). You could
write a version that just used .read() and .write() in chunks.
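Such a chunked variant might look like this (a sketch in modern Python 3, since the thread's code is Python 2; the helper name is made up):

```python
# Sketch (Python 3; the thread predates it): skip the first two lines,
# then stream the rest in fixed-size chunks so memory use stays flat
# regardless of file size.
import os

def strip_first_two_lines(path, chunk_size=64 * 1024):
    tmp = path + ".out"
    with open(path, "rb") as fin, open(tmp, "wb") as fout:
        fin.readline()          # discard line 1
        fin.readline()          # discard line 2
        while True:
            chunk = fin.read(chunk_size)
            if not chunk:
                break
            fout.write(chunk)
    os.rename(tmp, path)        # replace the original with the stripped copy
```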

Also, it temporarily duplicates whatever.dat to whatever.dat.out,
and if whatever.dat.out already exists, it will blindly overwrite it.

Also, if this is anything but a one-shot script, you should use
try...finally statements to make sure the file objects get closed (or,
in Python 2.5, the with statement).


Process Killed

2008-08-27 Thread dieter
Hi,

Overview
===

I'm doing some simple file manipulation work and the process gets
"Killed" every time I run it. No traceback, no segfault... just the
word "Killed" in the bash shell and the process ends. The first few
batch runs would only succeed with one or two files being processed
(out of 60) before the process was "Killed". Now it makes no
successful progress at all. Just a little processing, then "Killed".


Question
===

Any Ideas? Is there a buffer limitation? Do you think it could be the
filesystem?
Any suggestions appreciated. Thanks.


The code I'm running:
==

from glob import glob

def manipFiles():
    filePathList = glob('/data/ascii/*.dat')
    for filePath in filePathList:
        f = open(filePath, 'r')
        lines = f.readlines()[2:]
        f.close()
        f = open(filePath, 'w')
        f.writelines(lines)
        f.close()
        print file


Sample lines in File:


# time, ap, bp, as, bs, price, vol, size, seq, isUpLast, isUpVol,
isCancel

1062993789 0 0 0 0 1022.75 1 1 0 1 0 0
1073883668 1120 1119.75 28 33 0 0 0 0 0 0 0


Other Info


- The file sizes range from 76 Kb to 146 Mb
- I'm running on a Gentoo Linux OS
- The filesystem is partitioned and using: XFS for the data
repository, Reiser3 for all else.