[issue20584] On FreeBSD, signal.NSIG is smaller than biggest signal value

2014-05-23 Thread Steffen Daode Nurpmeso

Steffen Daode Nurpmeso added the comment:

Hello, French friends!  (And greetings to everyone else as well, of course.)

POSIX has recently standardized a NSIG_MAX constant in limits.h [1]:

  The value of {NSIG_MAX} shall be no greater than the number of signals that 
the sigset_t type (see [cross-ref to signal.h]) is capable of representing, 
ignoring any restrictions imposed by sigfillset() or sigaddset().

In the meantime I'm personally following advice from Rich Felker:

  #ifdef NSIG_MAX
  # undef NSIG
  # define NSIG   NSIG_MAX
  #elif !defined NSIG
  # define NSIG   ((sizeof(sigset_t) * 8) - 1)
  #endif

That is for the old signals only; maybe reducing this to

  #undef NSIG
  #ifdef NSIG_MAX
  # define NSIG  NSIG_MAX
  #else
  # define NSIG  ((sizeof(sigset_t) * 8) - 1)
  #endif 

should do the trick for Python on any POSIX system?
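
For illustration, the mismatch can be inspected from Python itself (a sketch;
signal.valid_signals() needs Python 3.8+, which postdates this report):

  import signal

  # Signals whose number is >= signal.NSIG; on FreeBSD the realtime signals
  # (SIGRTMIN..SIGRTMAX) can fall into this range, which is this very issue.
  too_big = [s for s in signal.valid_signals() if int(s) >= signal.NSIG]
  print("NSIG =", signal.NSIG)
  print("valid signals at or above NSIG:", sorted(int(s) for s in too_big))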
Ciao from, eh, gray, Germany :)

[1] http://austingroupbugs.net/view.php?id=741#c1834

--

___
Python tracker rep...@bugs.python.org
http://bugs.python.org/issue20584
___



[issue11686] Update of some email/ __all__ lists

2012-03-19 Thread Steffen Daode Nurpmeso

Steffen Daode Nurpmeso sdao...@googlemail.com added the comment:

R. David Murray wrote [2012-03-17 03:51+0100]:
> Thanks for the patch, Steffen.

Warm wink to the cleanroom-squatters!
(I count this as the promised petit glass of red wine.)

--steffen
Forza Figa!

--

___
Python tracker rep...@bugs.python.org
http://bugs.python.org/issue11686
___



[issue11935] MMDF/MBOX mailbox need utime

2011-09-17 Thread Steffen Daode Nurpmeso

Steffen Daode Nurpmeso sdao...@googlemail.com added the comment:

Let me close this!
I've just recently removed the real patch from my postman's next
branch, because even that real implementation doesn't work reliably.
I.e., please forget msg135791.  It was true, but in the long run
mutt(1) sometimes sees all boxes, sometimes only some (really nuts), and
most of the time it simply does not see any box with new mail at all.
That is, that plugged-in filesystem is simply treated as a pendant.

Remarks: because that stdlib MBOX whispered
  Where Art Thou, Brother
to me all the time, I've written my own, also just recently:

== postman:
  - test: 321 messages (5083760 bytes) [action=hunky-dory]
  = Dispatched 321 tickets to 1 box.
  [69853 refs] real 0m35.538s user 0m6.760s sys 0m0.904s
..
  = Dispatched 1963 tickets to 1 box.
  [93552 refs] real 0m38.860s user 0m8.697s sys 0m0.985s
== stdlib:
  [83010 refs] real 1m3.862s user 0m10.151s sys 0m7.500s
  [93217 refs] real 7m24.958s user 2m0.174s sys 1m35.163s

Was worth it.
Have a good time!

--
status: open - closed

___
Python tracker rep...@bugs.python.org
http://bugs.python.org/issue11935
___



[issue11686] Update of some email/ __all__ lists

2011-09-17 Thread Steffen Daode Nurpmeso

Steffen Daode Nurpmeso sdao...@googlemail.com added the comment:

Closing this...

--
status: open - closed

___
Python tracker rep...@bugs.python.org
http://bugs.python.org/issue11686
___



[issue11780] email.encoders are broken

2011-09-17 Thread Steffen Daode Nurpmeso

Steffen Daode Nurpmeso sdao...@googlemail.com added the comment:

I think this one is historic as well?
As far as I remember you solved it as part of another issue...

--
status: open - closed

___
Python tracker rep...@bugs.python.org
http://bugs.python.org/issue11780
___



[issue11701] email.parser.BytesParser().parse() closes file argument

2011-09-17 Thread Steffen Daode Nurpmeso

Steffen Daode Nurpmeso sdao...@googlemail.com added the comment:

Right, and that is considered a non-issue because close() may be
called multiple times without causing harm.
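
A minimal illustration of that point (using io.BytesIO only because it is
self-contained; file objects behave the same way):

  import io

  buf = io.BytesIO(b"Subject: hi\n\nbody\n")
  buf.close()
  buf.close()          # a second close() is a no-op, no exception is raised
  print(buf.closed)    # True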

--
status: open - closed

___
Python tracker rep...@bugs.python.org
http://bugs.python.org/issue11701
___



[issue11466] getpass.getpass doesn't close tty file

2011-09-17 Thread Steffen Daode Nurpmeso

Changes by Steffen Daode Nurpmeso sdao...@googlemail.com:


--
nosy:  -sdaoden

___
Python tracker rep...@bugs.python.org
http://bugs.python.org/issue11466
___



[issue12868] test_faulthandler.test_stack_overflow() failed on OpenBSD

2011-09-01 Thread Steffen Daode Nurpmeso

Steffen Daode Nurpmeso sdao...@googlemail.com added the comment:

Heya.
OpenBSD has supported 1:1 threads via the RThread library
since 2005, thanks to tedu@ and other fantastic guys!
It seems to be super-stable by now (one commit in 2011, last real
fix in April 2010).
(There is a tech talk by tedu@ (Ted Unangst) about this library
on YouTube, btw.)

--
nosy: +sdaoden

___
Python tracker rep...@bugs.python.org
http://bugs.python.org/issue12868
___



[issue12868] test_faulthandler.test_stack_overflow() failed on OpenBSD

2011-09-01 Thread Steffen Daode Nurpmeso

Steffen Daode Nurpmeso sdao...@googlemail.com added the comment:

> In Python 3.3, you can use sys.thread_info to check which threading
> library is used.

Great!  I didn't know that!
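
For reference, a quick peek at it (sys.thread_info exists since Python 3.3;
the printed values below are only an example):

  import sys

  # A named tuple with the threading implementation, the lock implementation
  # and the library version (version may be None).
  info = sys.thread_info
  print(info.name, info.lock, info.version)
  # e.g. on a pthread-based build: pthread semaphore NPTL 2.17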

--

___
Python tracker rep...@bugs.python.org
http://bugs.python.org/issue12868
___



[issue12730] Python's casemapping functions are untrustworthy due to narrow/wide build issues

2011-08-11 Thread Steffen Daode Nurpmeso

Steffen Daode Nurpmeso sdao...@googlemail.com added the comment:

A sign!
A sign!

Someone with a name-name-name!!

(Not a useful comment, i'm afraid.)

--
nosy: +sdaoden

___
Python tracker rep...@bugs.python.org
http://bugs.python.org/issue12730
___



[issue12730] Python's casemapping functions are untrustworthy due to narrow/wide build issues

2011-08-11 Thread Steffen Daode Nurpmeso

Changes by Steffen Daode Nurpmeso sdao...@googlemail.com:


--
nosy:  -sdaoden

___
Python tracker rep...@bugs.python.org
http://bugs.python.org/issue12730
___



[issue11877] Change os.fsync() to support physical backing store syncs

2011-07-25 Thread Steffen Daode Nurpmeso

Steffen Daode Nurpmeso sdao...@googlemail.com added the comment:

> Are you willing to update your patch accordingly?

I'm a vain rooster!  I've used fullfsync() in 11877-standalone.1.diff!
Sorry for that, haypo.  :-/

11877.fullsync-1.diff uses fullsync() instead, which will now always be
provided when fsync() is available and which falls back to fsync() if
no special operating-system functionality is available.

I really think this is the cleanest solution, because this way
a user can state "i want the strongest guarantees available on
data integrity", and Python does just that.

--Steffen
Ciao, sdaoden(*)(gmail.com)
ASCII ribbon campaign   ( ) More nuclear fission plants
  against HTML e-mailXcan serve more coloured
and proprietary attachments / \ and sounding animations

--
Added file: http://bugs.python.org/file22759/11877.fullsync-1.diff

___
Python tracker rep...@bugs.python.org
http://bugs.python.org/issue11877
___
diff --git a/Doc/library/os.rst b/Doc/library/os.rst
--- a/Doc/library/os.rst
+++ b/Doc/library/os.rst
@@ -706,6 +706,31 @@
Availability: Unix, and Windows starting in 2.2.3.
 
 
+.. function:: fullsync(fd)
+
+   The POSIX standard requires that :c:func:`fsync` must transfer the buffered
+   data to the storage device, not that the data is actually written by the
+   device itself.  It explicitly leaves it up to operating system implementors
+   whether users are given stronger guarantees on data integrity or not.  Some
+   systems also offer special functions which take over the job of making such
+   stronger guarantees, e.g. Mac OS X and NetBSD.  :func:`fullsync` is
+   identical to :func:`fsync` unless there is such special functionality
+   available, in which case that will be used.
+   To strive for best-possible data integrity, the following can be done::
+
+  # Force writeout of local buffer modifications
+  f.flush()
+  # Then synchronize the changes to physical backing store
+  if hasattr(os, 'fsync'):
+ os.fullsync(f.fileno())
+
+   .. note::
+  Calling this function may take a long time, since it may block
+  until the disk reports that the transfer has been completed.
+
+   Availability: See :func:`fsync`.
+
+
 .. function:: ftruncate(fd, length)
 
Truncate the file corresponding to file descriptor *fd*, so that it is at 
most
diff --git a/Lib/test/test_os.py b/Lib/test/test_os.py
--- a/Lib/test/test_os.py
+++ b/Lib/test/test_os.py
@@ -554,7 +554,7 @@
 
 class TestInvalidFD(unittest.TestCase):
 singles = [fchdir, fdopen, dup, fdatasync, fstat,
-   fstatvfs, fsync, tcgetpgrp, ttyname]
+   fstatvfs, fsync, fullsync, tcgetpgrp, ttyname]
 #singles.append(close)
 #We omit close because it doesn'r raise an exception on some platforms
 def get_single(f):
diff --git a/Modules/posixmodule.c b/Modules/posixmodule.c
--- a/Modules/posixmodule.c
+++ b/Modules/posixmodule.c
@@ -1855,6 +1855,42 @@
 {
 return posix_fildes(fdobj, fsync);
 }
+
+PyDoc_STRVAR(fullsync__doc__,
+fullsync(fd)\n\n
+force write of file buffers to disk, and the flush of disk caches\n
+of the file given by file descriptor fd.);
+
+static PyObject *
+fullsync(PyObject *self, PyObject *fdobj)
+{
+/* See issue 11877 discussion */
+int res, fd = PyObject_AsFileDescriptor(fdobj);
+if (fd < 0)
+return NULL;
+if (!_PyVerify_fd(fd))
+return posix_error();
+
+Py_BEGIN_ALLOW_THREADS
+# if defined __APPLE__
+/* F_FULLFSYNC is not supported for all types of FDs/FSYSs;
+ * be on the safe side and test for inappropriate ioctl errors.
+ * Because plain fsync() may succeed even then, let it decide about error 
*/
+res = fcntl(fd, F_FULLFSYNC);
+if (res < 0 && errno == ENOTTY)
+res = fsync(fd);
+# elif defined __NetBSD__
+res = fsync_range(fd, FFILESYNC | FDISKSYNC, 0, 0);
+# else
+res = fsync(fd);
+# endif
+Py_END_ALLOW_THREADS
+
+if (res < 0)
+return posix_error();
+Py_INCREF(Py_None);
+return Py_None;
+}
 #endif /* HAVE_FSYNC */
 
 #ifdef HAVE_FDATASYNC
@@ -8953,6 +8989,7 @@
 #endif
 #ifdef HAVE_FSYNC
 {fsync,   posix_fsync, METH_O, posix_fsync__doc__},
+{fullsync,fullsync, METH_O, fullsync__doc__},
 #endif
 #ifdef HAVE_FDATASYNC
 {fdatasync,   posix_fdatasync,  METH_O, posix_fdatasync__doc__},



[issue11877] Change os.fsync() to support physical backing store syncs

2011-07-23 Thread Steffen Daode Nurpmeso

Steffen Daode Nurpmeso sdao...@googlemail.com added the comment:

>> Even PEP 3151 won't help.
>
> I don't understand. If the syscall supposed to flush the disk's buffer
> cache fails - be it fcntl() or sync_file_range() - I think the error
> should be propagated, not silently ignored and replaced with another
> syscall which doesn't have the same semantic. That's all.

I'm with you theoretically - of course errors should be propagated
to users so that they can act accordingly.
And this is exactly what the patches do, *unless* it is an error
which is not produced by the native fsync(2) call:

-- 8< --
?0%0[steffen@sherwood tmp]$ cat t.c 
#include <errno.h>
#include <fcntl.h>
#include <stdio.h>
#include <string.h>
#include <unistd.h>
int main(void) {
    int r = fcntl(2, F_FULLFSYNC);
    fprintf(stderr, "1. %d: %d, %s\n", r, errno, strerror(errno));
    errno = 0;
    r = fsync(2);
    fprintf(stderr, "2. %d: %d, %s\n", r, errno, strerror(errno));
    return 0;
}
?0%0[steffen@sherwood tmp]$ gcc -o t t.c && ./t
1. -1: 25, Inappropriate ioctl for device
2. 0: 0, Unknown error: 0
?0%0[steffen@sherwood tmp]$ grep -F 25 /usr/include/sys/errno.h 
#define ENOTTY  25  /* Inappropriate ioctl for device */
-- 8< --

So in fact the patches do what is necessary to make the changed
version act just like the plain system call.
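
For completeness, the same probe can be written in Python; a sketch of the
fallback the patches implement, assuming the fcntl module exposes F_FULLFSYNC
on the platform (the hasattr() guard covers the case where it does not):

  import errno, fcntl, os

  def best_effort_full_sync(fd):
      # Prefer the full flush to the storage device; if the descriptor does
      # not support it (ENOTTY, as in the t.c run above), fall back to fsync().
      if hasattr(fcntl, "F_FULLFSYNC"):
          try:
              fcntl.fcntl(fd, fcntl.F_FULLFSYNC)
              return
          except OSError as e:
              if e.errno != errno.ENOTTY:
                  raise
      os.fsync(fd)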

>> - I favour haypo's fullsync() approach
> Are you willing to update your patch accordingly?

Both patches still apply onto the tip of friday noon:
http://bugs.python.org/file22016/11877.9.diff,
http://bugs.python.org/file22046/11877-standalone.1.diff.

Again: I personally would favour os.fsync(fd, fullsync=True), because
that is the only way to put reliability onto unaware facilities
(e.g. my S-Postman replaces os.fsync() with a special function
so that reliability is injected into Python's mailbox.py,
which calls plain os.fsync()), but i've been convinced that this is
impossible to do.  It seems to be impossible to change os.fsync()
at all, because it has a standardized function prototype.

So what do you mean?  Shall I rewrite 11877-standalone.1.diff to
always offer fullsync() whenever there is fsync()?  This sounds like
a useful change, because testing hasattr() of the one would
imply availability of the other.

>> + Aaarrg!  I'm a liar!!  I lie about - data integrity!!!
> Well, actually, some hard disks lie about this too :-)

Yeah.  But hey:
I feel safe in New York City.  I feel safe in New York City.
Nice weekend - and may the juice be with you!

--Steffen
Ciao, sdaoden(*)(gmail.com)
ASCII ribbon campaign   ( ) More nuclear fission plants
  against HTML e-mailXcan serve more coloured
and proprietary attachments / \ and sounding animations

--

___
Python tracker rep...@bugs.python.org
http://bugs.python.org/issue11877
___



[issue11877] Change os.fsync() to support physical backing store syncs

2011-07-19 Thread Steffen Daode Nurpmeso

Steffen Daode Nurpmeso sdao...@googlemail.com added the comment:

Here is something unsorted and loose:

- @neologix:
  One could argue that something had happened before the fsync(2),
  so that code which blindly did so is too dumb to make any right
  decision anyway.  Even PEP 3151 won't help.

- I favour haypo's fullsync() approach, because that could probably
  make it in.  Yet Python doesn't offer any possibility to
  access the NetBSD DISKSYNC stuff so far.

- Currently the programmer must be aware of any platform-specific
  problems.  I, for example, am not aware of Windows.  How can
  I give any guarantee to users who (will) use my S-Postman on
  Windows?  I in turn need to put trust into the framework I am
  using - Python.  And that makes me feel pretty breathless.

- Fortunately Python is dynamic, so that one can simply replace
  os.fsync() (a sketch of this follows after this list).  Works once
  only though (think signal handlers :=).

  + That is indeed the solution I'm using for my S-Postman,
    because *only* like this can I actually make Python's
    mailbox.py module reliable on Mac OS X!  I can't give any
    guarantee for NetBSD, though I document it!

  + Aaarrg!  I'm a liar!!  I lie about - data integrity!!!
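
A sketch of that replacement trick; the wrapper body is a placeholder, the
point is only that mailbox.py calls os.fsync() by name and therefore picks up
whatever is bound to it:

  import os

  _plain_fsync = os.fsync

  def _strict_fsync(fd):
      # Placeholder: do whatever "full" synchronization the platform offers
      # here (e.g. fcntl F_FULLFSYNC on Mac OS X), then fall back.
      _plain_fsync(fd)

  os.fsync = _strict_fsync   # mailbox.py and friends now end up in the wrapper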

--Steffen
Ciao, sdaoden(*)(gmail.com)
() ascii ribbon campaign - against html e-mail
/\ www.asciiribbon.org - against proprietary attachments

--

___
Python tracker rep...@bugs.python.org
http://bugs.python.org/issue11877
___



[issue6721] Locks in python standard library should be sanitized on fork

2011-07-19 Thread Steffen Daode Nurpmeso

Steffen Daode Nurpmeso sdao...@googlemail.com added the comment:

If Nir's analysis is right - and Antoine's comment pushes me in
that direction (i personally have not looked at that code) -
then multiprocessing is completely brain-damaged and has been
implemented by a moron.
And yes, I know this is a bug tracker, and Python's at that.

Nir should merge his last two messages into a single mail to
python-dev, and those guys should give Nir or Thomas or a group of
people who have the time and mental power a hg(1) repo clone plus
committer access to it, and multiprocessing should be rewritten,
maybe even from scratch, but i dunno.

In the extremely unlikely case that none of that happens, maybe
neologix's patch should make it in?

--Steffen
Ciao, sdaoden(*)(gmail.com)
() ascii ribbon campaign - against html e-mail
/\ www.asciiribbon.org - against proprietary attachments

--

___
Python tracker rep...@bugs.python.org
http://bugs.python.org/issue6721
___



[issue6721] Locks in python standard library should be sanitized on fork

2011-07-19 Thread Steffen Daode Nurpmeso

Steffen Daode Nurpmeso sdao...@googlemail.com added the comment:

Um, and just to add: i'm not watching out for anything, and it won't
and it can't be me:

?0%0[steffen@sherwood sys]$ grep -F smp CHANGELOG.svn -B3 | grep -E 
'^r[[:digit:]]+' | tail -n 1
r162 | steffen | 2006-01-18 18:29:58 +0100 (Wed, 18 Jan 2006) | 35 lines

--Steffen
Ciao, sdaoden(*)(gmail.com)
() ascii ribbon campaign - against html e-mail
/\ www.asciiribbon.org - against proprietary attachments

--

___
Python tracker rep...@bugs.python.org
http://bugs.python.org/issue6721
___



[issue6721] Locks in python standard library should be sanitized on fork

2011-07-19 Thread Steffen Daode Nurpmeso

Steffen Daode Nurpmeso sdao...@googlemail.com added the comment:

P.S.:
I have to apologize, it's Tomaž, not Thomas.
(And unless i'm mistaken this is pronounced TomAsch rather than
the english Tommes, so i was just plain wrong.)

--Steffen
Ciao, sdaoden(*)(gmail.com)
() ascii ribbon campaign - against html e-mail
/\ www.asciiribbon.org - against proprietary attachments

--

___
Python tracker rep...@bugs.python.org
http://bugs.python.org/issue6721
___



[issue6721] Locks in python standard library should be sanitized on fork

2011-07-19 Thread Steffen Daode Nurpmeso

Steffen Daode Nurpmeso sdao...@googlemail.com added the comment:

>> then multiprocessing is completely brain-damaged and has been
>> implemented by a moron.
>
> Please do not use this kind of language.
> Being disrespectful to other people hurts the discussion.

So i apologize once again.
'Still i think this should go to python-dev in the mentioned case.

(BTW: there are religions without god, so whom shall e.g. i praise
for the GIL?)

--Steffen
Ciao, sdaoden(*)(gmail.com)
() ascii ribbon campaign - against html e-mail
/\ www.asciiribbon.org - against proprietary attachments

--

___
Python tracker rep...@bugs.python.org
http://bugs.python.org/issue6721
___



[issue11277] Crash with mmap and sparse files on Mac OS X

2011-07-06 Thread Steffen Daode Nurpmeso

Steffen Daode Nurpmeso sdao...@googlemail.com added the comment:

So sorry that I'm stressing this; hopefully it's the final message.
Apple's iterative kernel-update strategy resulted in these versions:

14:02 ~/tmp $ /usr/sbin/sysctl kern.version
kern.version: Darwin Kernel Version 10.8.0: Tue Jun  7 16:33:36 PDT 2011; 
root:xnu-1504.15.3~1/RELEASE_I386
14:02 ~/tmp $ gcc -o zt osxversion.c -framework CoreServices
14:03 ~/tmp $ ./zt 
OS X version: 10.6.8
apple_osx_needs_fullsync: -1

I.e. the new patch uses 10.7.0 or >= 10.6.8 to avoid that
FULLFSYNC disaster (even slower than the Macrohard memory
allocator during the Wintel partnership!), and we end up with:

14:03 ~/src/cpython $ ./python.exe -E -Wd -m test -r -w -uall test_mmap
Using random seed 8466468
[1/1] test_mmap
1 test OK.

P.S.: I still have no idea how to do '-framework CoreServices'
properly.  Like I've said in #11046 I have never used GNU Autoconf/M4,
sorry.  You know.  Maybe the version check should be moved
somewhere else and simply be exported, even replacing the stuff
from platform.py?  I don't know.  Bye.
--
Ciao, Steffen
sdaoden(*)(gmail.com)
() ascii ribbon campaign - against html e-mail
/\ www.asciiribbon.org - against proprietary attachments

--
Added file: http://bugs.python.org/file22593/11277.apple-fix-3.diff

___
Python tracker rep...@bugs.python.org
http://bugs.python.org/issue11277
___
diff --git a/Doc/library/mmap.rst b/Doc/library/mmap.rst
--- a/Doc/library/mmap.rst
+++ b/Doc/library/mmap.rst
@@ -88,7 +88,8 @@
 
To ensure validity of the created memory mapping the file specified
by the descriptor *fileno* is internally automatically synchronized
-   with physical backing store on Mac OS X and OpenVMS.
+   with physical backing store on operating systems where this is
+   necessary, e.g. OpenVMS, and some buggy versions of Mac OS X.
 
This example shows a simple way of using :class:`mmap`::
 
diff --git a/Modules/mmapmodule.c b/Modules/mmapmodule.c
--- a/Modules/mmapmodule.c
+++ b/Modules/mmapmodule.c
@@ -25,6 +25,8 @@
 #define UNIX
 # ifdef __APPLE__
 #  include fcntl.h
+
+#  include CoreServices/CoreServices.h
 # endif
 #endif
 
@@ -65,6 +67,44 @@
 #define my_getpagesize getpagesize
 #endif
 
+# ifdef __APPLE__
+static void
+apple_osx_needs_fullsync(long *use_fullsync)
+{
+/* Issue #11277: mmap(2) bug with 32 bit sparse files.
+ * Apple fixed the bug before announcement of OS X Lion, but since we
+ * need to handle buggy versions, perform a once-only check to see if the
+ * running kernel requires the expensive sync.  Fixed in 10.6.8, 10.7++.
+ * >0: F_FULLFSYNC is required, <0: kernel has mmap(2) bug fixed */
+SInt32 ver;
+*use_fullsync = 1;
+
+if (Gestalt(gestaltSystemVersion, &ver) != noErr)
+goto jleave;
+/* SystemVersion(Major|Minor|BugFix) available at all? */
+if (ver < 0x1040)
+goto jleave;
+if (Gestalt(gestaltSystemVersionMajor, &ver) != noErr)
+goto jleave;
+if (ver > 10)
+goto jgood;
+if (Gestalt(gestaltSystemVersionMinor, &ver) != noErr)
+goto jleave;
+if (ver >= 7)
+goto jgood;
+if (ver < 6)
+goto jleave;
+if (Gestalt(gestaltSystemVersionBugFix, &ver) != noErr)
+goto jleave;
+if (ver < 8)
+goto jleave;
+jgood:
+*use_fullsync = -1;
+jleave:
+return;
+}
+# endif /* __APPLE__ */
+
 #endif /* UNIX */
 
 #include string.h
@@ -1150,8 +1190,14 @@
 #ifdef __APPLE__
 /* Issue #11277: fsync(2) is not enough on OS X - a special, OS X specific
fcntl(2) is necessary to force DISKSYNC and get around mmap(2) bug */
-if (fd != -1)
-(void)fcntl(fd, F_FULLFSYNC);
+if (fd != -1) {
+/* (GIL protected) */
+static long use_fullsync /*= 0*/;
+if (!use_fullsync)
+apple_osx_needs_fullsync(&use_fullsync);
+if (use_fullsync > 0)
+(void)fcntl(fd, F_FULLFSYNC);
+}
 #endif
 #ifdef HAVE_FSTAT
 #  ifdef __VMS



[issue11277] Crash with mmap and sparse files on Mac OS X

2011-07-06 Thread Steffen Daode Nurpmeso

Changes by Steffen Daode Nurpmeso sdao...@googlemail.com:


Removed file: http://bugs.python.org/file22281/11277.apple-fix-2.diff

___
Python tracker rep...@bugs.python.org
http://bugs.python.org/issue11277
___



[issue6721] Locks in python standard library should be sanitized on fork

2011-06-30 Thread Steffen Daode Nurpmeso

Steffen Daode Nurpmeso sdao...@googlemail.com added the comment:

My suggestion would be to deprecate it in the
same way that Georg Brandl has suggested for changing the default
encoding on python-dev [1], and to add clear documentation on that,
also with respect to the transition phase ..

> The problem is that multiprocessing itself, by construction,
> uses fork() with multiple threads.

.. and maybe add some switches which allow usage of fork() for the
time being.

Today a '$ grep -Fir fork' does not turn up threading.rst at all,
which seems very little in view of the size of the problem.
I would add a big fat note that multiprocessing.Process should be
used instead today, because how about those of us who are not
sophisticated enough to be appointed to standards committees?

But anyway we should be lucky: fork(2) is UNIX specific, and thus
it can be expected that all thread-safe libraries etc. are aware of
the fact that they may be cloned by it.  Except mine, of course.  ,~)

[1] http://mail.python.org/pipermail/python-dev/2011-June/112126.html
--
Ciao, Steffen
sdaoden(*)(gmail.com)
() ascii ribbon campaign - against html e-mail
/\ www.asciiribbon.org - against proprietary attachments

--

___
Python tracker rep...@bugs.python.org
http://bugs.python.org/issue6721
___



[issue6721] Locks in python standard library should be sanitized on fork

2011-06-30 Thread Steffen Daode Nurpmeso

Steffen Daode Nurpmeso sdao...@googlemail.com added the comment:

> How do you think multiprocessing.Process launches a new process?

But it's a single piece of code under control and even under
multi-OS/multi-architecture test coverage, not a general-purpose
"Joe, you may just go that way and Python will handle it
correctly"?

What I mean is: ten years ago (or so), Java did not offer true
selection on sockets (unless I'm mistaken) - servers needed a 1:1
mapping of threads:sockets to handle connections?!
But then, a "this thread has finished the I/O, let's use it for
something different" seems to be pretty obvious.
This is ok if it's your professor who is forcefully misleading
you into the wrong direction, but otherwise you will have
problems, maybe sooner, maybe later (, maybe never).  And
currently there is not a single piece of documentation which
points you onto the problems.  (And there *are* really people
without Google.)

The problem is that it looks so simple and easy - but it's not.
In my eyes it's an unsolvable problem.  And for the sake of
resource usage, simplicity and execution speed i appreciate all
solutions which don't try to do the impossible.

I want to add that all this does not really help as long as
*any* facility which is used by Python *itself* is not under the control
of atfork.  Solaris e.g. uses atfork for its memory allocator,
because that is surely needed if anything but async-signal-safe
facilities is used in the newly forked process.  Can Python give
that guarantee for all POSIX systems it supports?
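
As an aside, the hazard under discussion can be shown in a few lines
(POSIX only; a deliberately simplified sketch, not production code):

  import os, threading, time

  lock = threading.Lock()

  def holder():
      with lock:
          time.sleep(2)           # hold the lock while the main thread forks

  threading.Thread(target=holder, daemon=True).start()
  time.sleep(0.1)                 # make sure the lock has been taken
  pid = os.fork()
  if pid == 0:
      # The child inherits the lock in its locked state, but not the thread
      # that would release it, so this acquire can only time out.
      got_it = lock.acquire(timeout=1)
      os._exit(0 if got_it else 1)
  status = os.waitpid(pid, 0)[1]
  print("child could acquire the lock:", status == 0)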

Good night.
--
Ciao, Steffen
sdaoden(*)(gmail.com)
() ascii ribbon campaign - against html e-mail
/\ www.asciiribbon.org - against proprietary attachments

--

___
Python tracker rep...@bugs.python.org
http://bugs.python.org/issue6721
___



[issue11728] mbox parser incorrect behaviour

2011-06-13 Thread Steffen Daode Nurpmeso

Steffen Daode Nurpmeso sdao...@googlemail.com added the comment:

Hello Valery Masiutsin, I recently stumbled over this while searching
for the link to the standard I've stored in another issue.
(Without being logged in, say.)
The de-facto standard (http://qmail.org/man/man5/mbox.html) says:

HOW A MESSAGE IS READ
  A reader scans through an mbox file looking for From_ lines.
  Any From_ line marks the beginning of a message.  The reader
  should not attempt to take advantage of the fact that every
  From_ line (past the beginning of the file) is preceded by a
  blank line.

This is however the recent version.  The mbox manpage of my up-to-date
Mac OS X 10.6.7 does not state this, for example.  It's from 2002.
However, all known MBOX standards, i.e. MBOXO, MBOXRD and MBOXCL, require
proper quoting of non-From_ "From " lines (by preceding them with '>').
So your example should not fail in Python.
(But hey - are you sure *that* has been produced by Perl?)

You're right however that Python seems to only support the old MBOXO
way of un-escaping only a plain ">From " to/from "From ", which is not
even mentioned anymore in the current standard - that only describes
MBOXRD (escaping any line matching ">*From " by prepending one more '>').
(Lucky me: i own Mac OS X, otherwise i wouldn't even know.)
Thus you're in trouble if the unescaping is performed before the split..
This is another issue, though: MBOX parser uses MBOXO algorithm.
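
To make the difference concrete, a small sketch of the two un-escaping rules
(the helper names are mine, not mailbox.py API):

  import re

  def unescape_mboxo(line):
      # MBOXO: only a plain ">From " was escaped, so only that is reverted.
      return line[1:] if line.startswith(">From ") else line

  def unescape_mboxrd(line):
      # MBOXRD: any ">"*n + "From " line had one ">" added, so strip one.
      return line[1:] if re.match(r">+From ", line) else line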

; - Ciao, Steffen

--
nosy: +sdaoden

___
Python tracker rep...@bugs.python.org
http://bugs.python.org/issue11728
___



[issue11277] Crash with mmap and sparse files on Mac OS X

2011-06-09 Thread Steffen Daode Nurpmeso

Steffen Daode Nurpmeso sdao...@googlemail.com added the comment:

@ Ronald Oussoren wrote:
if major <= 10:
   # We're on OSX 10.6 or earlier
   enableWorkaround()

(You sound as if you are taking part in an interesting audiophonic
event.  27" iMacs are indeed great recording-studio hardware.
But no Coffee Shops in California - brr.)
--
Ciao, Steffen
sdaoden(*)(gmail.com)
() ascii ribbon campaign - against html e-mail
/\ www.asciiribbon.org - against proprietary attachments

--

___
Python tracker rep...@bugs.python.org
http://bugs.python.org/issue11277
___



[issue11277] Crash with mmap and sparse files on Mac OS X

2011-06-08 Thread Steffen Daode Nurpmeso

Steffen Daode Nurpmeso sdao...@googlemail.com added the comment:

Ok, this patch could be used.
*Unless* the code is not protected by the GIL.

- Gestalt usage is a bit more complicated according to

http://www.cocoadev.com/index.pl?DeterminingOSVersion

  unless Python only supports OS X 10.4 and later.
  (And platform.py correctly states that in _mac_ver_gestalt(),
  but see below.)

- Due to usage of Gestalt, '-framework CoreServices' must be
  linked against mmapmodule.c.
  The Python configuration stuff is interesting for me, i managed
  compilation by adding the line

mmap mmapmodule.c -framework CoreServices

  to Modules/Setup, but i guess it's only OS X which is happy
  about that.

platform.py: _mac_ver_xml() should be dropped entirely according
to one of Ned Deily's links (never officially supported), and
_mac_ver_gestalt() obviously never executed because i guess it
would fail due to versioninfo.  Unless i missed something.

By the way: where do you get the info from?  sys1, sys2,
sys3?  Cannot find it anywhere, only the long names, e.g.
gestaltSystemVersionXy.

Note that i've mailed Apple.  I did not pay 99$ or even 249$, so
i don't know if there will be a response.
--
Ciao, Steffen
sdaoden(*)(gmail.com)
() ascii ribbon campaign - against html e-mail
/\ www.asciiribbon.org - against proprietary attachments

--
Added file: http://bugs.python.org/file22281/11277.apple-fix-2.diff

___
Python tracker rep...@bugs.python.org
http://bugs.python.org/issue11277
___
diff --git a/Doc/library/mmap.rst b/Doc/library/mmap.rst
--- a/Doc/library/mmap.rst
+++ b/Doc/library/mmap.rst
@@ -88,7 +88,8 @@
 
To ensure validity of the created memory mapping the file specified
by the descriptor *fileno* is internally automatically synchronized
-   with physical backing store on Mac OS X and OpenVMS.
+   with physical backing store on operating systems where this is
+   necessary, e.g. OpenVMS, and some buggy versions of Mac OS X.
 
This example shows a simple way of using :class:`mmap`::
 
diff --git a/Modules/mmapmodule.c b/Modules/mmapmodule.c
--- a/Modules/mmapmodule.c
+++ b/Modules/mmapmodule.c
@@ -25,6 +25,8 @@
 #define UNIX
 # ifdef __APPLE__
 #  include fcntl.h
+
+#  include CoreServices/CoreServices.h
 # endif
 #endif
 
@@ -65,6 +67,39 @@
 #define my_getpagesize getpagesize
 #endif
 
+# ifdef __APPLE__
+static void
+apple_osx_needs_fullsync(long *use_fullsync)
+{
+/* Issue #11277: mmap(2) bug with 32 bit sparse files.
+ * Apple fixed the bug before announcement of OS X Lion, but since we
+ * need to handle buggy versions, perform a once-only check to see if the
+ * running kernel requires the expensive sync.
+ * 0: F_FULLSYNC is required, 0: kernel has mmap(2) bug fixed */
+SInt32 ver;
+
+*use_fullsync = 1;
+if (Gestalt(gestaltSystemVersion, ver) != noErr  ver = 0x1040) {
+if (Gestalt(gestaltSystemVersionMajor, ver) != noErr)
+goto jleave;
+if (ver  10) {
+*use_fullsync = -1;
+goto jleave;
+}
+
+if (Gestalt(gestaltSystemVersionMinor, ver) != noErr)
+goto jleave;
+if (ver = 7) {
+*use_fullsync = -1;
+goto jleave;
+}
+}
+
+jleave:
+return;
+}
+# endif /* __APPLE__ */
+
 #endif /* UNIX */
 
 #include string.h
@@ -1128,8 +1163,14 @@
 #ifdef __APPLE__
 /* Issue #11277: fsync(2) is not enough on OS X - a special, OS X specific
fcntl(2) is necessary to force DISKSYNC and get around mmap(2) bug */
-if (fd != -1)
-(void)fcntl(fd, F_FULLFSYNC);
+if (fd != -1) {
+/* (GIL protected) */
+static long use_fullsync /*= 0*/;
+if (!use_fullsync)
+apple_osx_needs_fullsync(use_fullsync);
+if (use_fullsync  0)
+(void)fcntl(fd, F_FULLFSYNC);
+}
 #endif
 #ifdef HAVE_FSTAT
 #  ifdef __VMS



[issue11277] Crash with mmap and sparse files on Mac OS X

2011-06-08 Thread Steffen Daode Nurpmeso

Changes by Steffen Daode Nurpmeso sdao...@googlemail.com:


Removed file: http://bugs.python.org/file22273/11277.apple-fix.diff

___
Python tracker rep...@bugs.python.org
http://bugs.python.org/issue11277
___



[issue11277] Crash with mmap and sparse files on Mac OS X

2011-06-07 Thread Steffen Daode Nurpmeso

Steffen Daode Nurpmeso sdao...@googlemail.com added the comment:

Ahem, note that Apple has fixed the mmap(2) bug!!
I'm still surprised and can't really believe it, but it's true!
Just in case you're interested, I'll attach an updated patch.

Maybe Ned Deily should have a look at the version check, which
does not apply yet, but I don't know any other way to perform exact
version checking.  (Using 10.6.7 is not enough, it must be 10.7.0;
uname -a does report that throughout, but no CPP symbol does!?)

--
Added file: http://bugs.python.org/file22273/11277.apple-fix.diff

___
Python tracker rep...@bugs.python.org
http://bugs.python.org/issue11277
___



[issue11277] Crash with mmap and sparse files on Mac OS X

2011-06-07 Thread Steffen Daode Nurpmeso

Steffen Daode Nurpmeso sdao...@googlemail.com added the comment:

@ Ned Deily rep...@bugs.python.org wrote (2011-06-07 19:43+0200):
> Thanks for the update.  Since the fix will be in a future
> version of OS X 10.7 Lion, and which has not been released yet,
> so it is not appropriate to change mmap until there has been an
> opportunity to test it.

It's really working fine.  That i see that day!
(Not that they start to fix the CoreAudio crashes...)

> But even then, we would need to be careful about adding
> a compile-time test as OS X binaries are often built to be
> compatible for a range of operating system version so avoid
> adding compilation conditionals unless really necessary.
> If after 10.7 is released and someone is able to test that it
> works as expected, the standard way to support it would be to
> use the Apple-supplied availability macros to test for the
> minimum supported OS level of the build assuming it makes enough
> of a performance difference to bother to do so

Of course it only moves the delay from before mmap(2) to after
close(2).  Well, i don't know, if hardcoding is not an option,
a dynamic sysctl(2) lookup may do:

kern.version = Darwin Kernel Version 10.7.0: Sat Jan 29 15:17:16 PST 2011

This is obviously not the right one.  :)
--
Ciao, Steffen
sdaoden(*)(gmail.com)
() ascii ribbon campaign - against html e-mail
/\ www.asciiribbon.org - against proprietary attachments

--

___
Python tracker rep...@bugs.python.org
http://bugs.python.org/issue11277
___



[issue11700] mailbox.py proxy updates

2011-05-24 Thread Steffen Daode Nurpmeso

Steffen Daode Nurpmeso sdao...@googlemail.com added the comment:

Hello, David, I'd like to remind you that this issue is still
open (and the next release is coming soon).
I think we do agree at least on the fact that this is a bug :).
Well, your mailbox_close_twice.patch no. 2 still imports cleanly on the
current tip.

(I'm still a bit disappointed that you don't want to -a-r-m-
upgrade the proxies to the full implementation i've posted.  But
it's ok.  By the way: you're the first american i know who doesn't
want to upgrade his arms!  And i do have had an ex-uncle who is
a fellow countryman of yours.)

Regards from Germany during kitschy pink sunset

--
Added file: http://bugs.python.org/file22095/11700.yeah-review.diff

___
Python tracker rep...@bugs.python.org
http://bugs.python.org/issue11700
___
diff --git a/Lib/mailbox.py b/Lib/mailbox.py
--- a/Lib/mailbox.py
+++ b/Lib/mailbox.py
@@ -1864,97 +1864,142 @@
 Message with MMDF-specific properties.
 
 
-class _ProxyFile:
-A read-only wrapper of a file.
+class _ProxyFile(io.BufferedIOBase):
+A io.BufferedIOBase inheriting read-only wrapper for a seekable file.
+It supports __iter__() and the context-manager protocol.
+
+def __init__(self, file, pos=None):
+io.BufferedIOBase.__init__(self)
+self._file = file
+self._pos = file.tell() if pos is None else pos
+self._close = True
+self._is_open = True
 
-def __init__(self, f, pos=None):
-Initialize a _ProxyFile.
-self._file = f
-if pos is None:
-self._pos = f.tell()
+def _set_noclose(self):
+Subclass hook - use to avoid closing internal file object.
+self._close = False
+
+def _closed_check(self):
+Raise ValueError if not open.
+if not self._is_open:
+raise ValueError('I/O operation on closed file')
+
+def close(self):
+if self._close:
+self._close = False
+self._file.close()
+del self._file
+self._is_open = False
+
+@property
+def closed(self):
+return not self._is_open
+
+def flush(self):
+# Not possible because it gets falsely called (issue 11700)
+#raise io.UnsupportedOperation('flush')
+pass
+
+def _read(self, size, read_method, readinto_arg=None):
+if size is None or size < 0:
+size = -1
+self._file.seek(self._pos)
+if not readinto_arg:
+result = read_method(size)
 else:
-self._pos = pos
+result = read_method(readinto_arg)
+if result < len(readinto_arg):
+del readinto_arg[result:]
+self._pos = self._file.tell()
+return result
 
-def read(self, size=None):
-Read bytes.
+def readable(self):
+self._closed_check()
+return True
+
+def read(self, size=-1):
+self._closed_check()
+if size is None or size < 0:
+return self.readall()
 return self._read(size, self._file.read)
 
-def read1(self, size=None):
-Read bytes.
+def read1(self, size=-1):
+self._closed_check()
+if size is None or size < 0:
+return b''
 return self._read(size, self._file.read1)
 
-def readline(self, size=None):
-Read a line.
+def readinto(self, by_arr):
+self._closed_check()
+return self._read(len(by_arr), self._file.readinto, by_arr)
+
+def readall(self):
+self._closed_check()
+self._file.seek(self._pos)
+if hasattr(self._file, 'readall'):
+result = self._file.readall()
+else:
+dl = []
+while 1:
+i = self._file.read(8192)
+if len(i) == 0:
+break
+dl.append(i)
+result = b''.join(dl)
+self._pos = self._file.tell()
+return result
+
+def readline(self, size=-1):
+self._closed_check()
 return self._read(size, self._file.readline)
 
-def readlines(self, sizehint=None):
-Read multiple lines.
+def readlines(self, sizehint=-1):
 result = []
 for line in self:
 result.append(line)
-if sizehint is not None:
+if sizehint >= 0:
 sizehint -= len(line)
 if sizehint <= 0:
 break
 return result
 
+def seekable(self):
+self._closed_check()
+return True
+
+def seek(self, offset, whence=0):
+self._closed_check()
+if whence == 1:
+self._file.seek(self._pos)
+self._pos = self._file.seek(offset, whence)
+return self._pos
+
+def tell(self):
+self._closed_check()
+return self._pos
+
+def writable(self):
+self._closed_check()
+return False

[issue11700] mailbox.py proxy updates

2011-05-24 Thread Steffen Daode Nurpmeso

Changes by Steffen Daode Nurpmeso sdao...@googlemail.com:


Removed file: http://bugs.python.org/file22095/11700.yeah-review.diff

___
Python tracker rep...@bugs.python.org
http://bugs.python.org/issue11700
___



[issue12102] mmap requires file to be synced

2011-05-22 Thread Steffen Daode Nurpmeso

Steffen Daode Nurpmeso sdao...@googlemail.com added the comment:

> that doesn't make me any good

Well - I can only be better than myself, so I'll take that as a yes :)

--

___
Python tracker rep...@bugs.python.org
http://bugs.python.org/issue12102
___



[issue12102] mmap requires file to be synced

2011-05-22 Thread Steffen Daode Nurpmeso

Changes by Steffen Daode Nurpmeso sdao...@googlemail.com:


Removed file: http://bugs.python.org/file22020/12102.1.diff

___
Python tracker rep...@bugs.python.org
http://bugs.python.org/issue12102
___



[issue11877] Change os.fsync() to support physical backing store syncs

2011-05-21 Thread Steffen Daode Nurpmeso

Steffen Daode Nurpmeso sdao...@googlemail.com added the comment:

diff --git a/Doc/library/os.rst b/Doc/library/os.rst
--- a/Doc/library/os.rst
+++ b/Doc/library/os.rst
@@ -810,6 +810,35 @@
Availability: Unix, and Windows.

+.. function:: fullfsync(fd)
+
+   The POSIX standard requires that :c:func:`fsync` must transfer the buffered
+   data to the storage device, not that the data is actually written by the
+   device itself.  It explicitly leaves it up to operating system implementors
+   whether users are given stronger guarantees on data integrity or not.  Some
+   systems also offer special functions which take over the job of making such
+   stronger guarantees, e.g. Mac OS X and NetBSD.
+
+   This non-standard function is *optionally* made available to access such
+   special functionality when feasible.  It will force write of file buffers to
+   disk and the flush of disk caches of the file given by file descriptor *fd*.
+   To strive for best-possible data integrity, the following can be done::
+
+  # Force writeout of local buffer modifications
+  f.flush()
+  # Then synchronize the changes to physical backing store
+  if hasattr(os, 'fullfsync'):
+ os.fullfsync(f.fileno())
+  elif hasattr(os, 'fsync'):
+ os.fsync(f.fileno())
+
+   .. note::
+  Calling this function may take a long time, since it may block
+  until the disk reports that the transfer has been completed.
+
+   Availability: Unix.
+
+
 .. function:: ftruncate(fd, length)

Truncate the file corresponding to file descriptor *fd*, so that it is at 
most
diff --git a/Lib/test/test_os.py b/Lib/test/test_os.py
--- a/Lib/test/test_os.py
+++ b/Lib/test/test_os.py
@@ -835,12 +835,12 @@

 class TestInvalidFD(unittest.TestCase):
 singles = [fchdir, dup, fdopen, fdatasync, fstat,
-   fstatvfs, fsync, tcgetpgrp, ttyname]
+   fstatvfs, fsync, fullfsync, tcgetpgrp, ttyname]
 #singles.append(close)
-#We omit close because it doesn'r raise an exception on some platforms
+# We omit close because it doesn't raise an exception on some platforms
 def get_single(f):
 def helper(self):
-if  hasattr(os, f):
+if hasattr(os, f):
 self.check(getattr(os, f))
 return helper
 for f in singles:
diff --git a/Modules/posixmodule.c b/Modules/posixmodule.c
--- a/Modules/posixmodule.c
+++ b/Modules/posixmodule.c
@@ -174,6 +174,11 @@
 #endif /* ! __IBMC__ */

 #ifndef _MSC_VER
+  /* os.fullfsync()? */
+# if (defined HAVE_FSYNC && ((defined __APPLE__ && defined F_FULLFSYNC) || \
+ (defined __NetBSD__ && defined FDISKSYNC)))
+#  define PROVIDE_FULLFSYNC
+# endif

 #if defined(__sgi)&&_COMPILER_VERSION>=700
 /* declare ctermid_r if compiling with MIPSPro 7.x in ANSI C mode
@@ -2129,6 +2134,41 @@
 {
 return posix_fildes(fdobj, fsync);
 }
+
+# ifdef PROVIDE_FULLFSYNC
+PyDoc_STRVAR(fullfsync__doc__,
+fullfsync(fd)\n\n
+force write of file buffers to disk, and the flush of disk caches\n
+of the file given by file descriptor fd.);
+
+static PyObject *
+fullfsync(PyObject *self, PyObject *fdobj)
+{
+/* See issue 11877 discussion */
+int res, fd = PyObject_AsFileDescriptor(fdobj);
+if (fd < 0)
+return NULL;
+if (!_PyVerify_fd(fd))
+return posix_error();
+
+Py_BEGIN_ALLOW_THREADS
+#  if defined __APPLE__
+/* F_FULLFSYNC is not supported for all types of FDs/FSYSs;
+ * be on the safe side and test for inappropriate ioctl errors */
+res = fcntl(fd, F_FULLFSYNC);
+if (res < 0 && errno == ENOTTY)
+res = fsync(fd);
+#  elif defined __NetBSD__
+res = fsync_range(fd, FFILESYNC | FDISKSYNC, 0, 0);
+#  endif
+Py_END_ALLOW_THREADS
+
+if (res < 0)
+return posix_error();
+Py_INCREF(Py_None);
+return Py_None;
+}
+# endif /* PROVIDE_FULLFSYNC */
 #endif /* HAVE_FSYNC */

 #ifdef HAVE_SYNC
@@ -9473,6 +9513,9 @@
 #endif
 #ifdef HAVE_FSYNC
 {fsync,   posix_fsync, METH_O, posix_fsync__doc__},
+# ifdef PROVIDE_FULLFSYNC
+{fullfsync,   fullfsync, METH_O, fullfsync__doc__},
+# endif
 #endif
 #ifdef HAVE_SYNC
 {sync,posix_sync, METH_NOARGS, posix_sync__doc__},

--

___
Python tracker rep...@bugs.python.org
http://bugs.python.org/issue11877
___



[issue11877] Change os.fsync() to support physical backing store syncs

2011-05-21 Thread Steffen Daode Nurpmeso

Steffen Daode Nurpmeso sdao...@googlemail.com added the comment:

(This was an attachment to an empty mail message.)

--
Added file: http://bugs.python.org/file22046/11877-standalone.1.diff

___
Python tracker rep...@bugs.python.org
http://bugs.python.org/issue11877
___



[issue12102] mmap requires file to be synced

2011-05-21 Thread Steffen Daode Nurpmeso

Steffen Daode Nurpmeso sdao...@googlemail.com added the comment:

Looked at it again and I think it's much better English with an
additional "that": "..to ensure that local..".
@Ross, aren't you a native English speaker?  What do you say?

--
Added file: http://bugs.python.org/file22048/12102.2.diff

___
Python tracker rep...@bugs.python.org
http://bugs.python.org/issue12102
___
diff --git a/Doc/library/mmap.rst b/Doc/library/mmap.rst
--- a/Doc/library/mmap.rst
+++ b/Doc/library/mmap.rst
@@ -21,6 +21,11 @@
 :func:`os.open` function, which returns a file descriptor directly (the file
 still needs to be closed when done).
 
+..note::
+   If you want to create a memory-mapping for a writable, buffered file, you
+   should :func:`flush` the file first.  This is necessary to ensure that local
+   modifications to the buffers are actually available to the mapping.
+
 For both the Unix and Windows versions of the constructor, *access* may be
 specified as an optional keyword parameter. *access* accepts one of three
 values: :const:`ACCESS_READ`, :const:`ACCESS_WRITE`, or :const:`ACCESS_COPY`
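
For illustration, the pattern this note describes looks like this in user code
(the file name is made up; the file is assumed to exist already):

  import mmap

  with open("data.bin", "r+b") as f:        # hypothetical, pre-existing file
      f.write(b"hello")
      f.flush()                             # push buffered writes down to the OS
      with mmap.mmap(f.fileno(), 0) as m:   # now the mapping sees the bytes
          assert m[:5] == b"hello"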



[issue11877] Change os.fsync() to support physical backing store syncs

2011-05-18 Thread Steffen Daode Nurpmeso

Steffen Daode Nurpmeso sdao...@googlemail.com added the comment:

Excusing myself seems to be the only proven remedy.
@Antoine Pitrou: It was a real shame to read your mail.
(It's sometimes so loud that I don't even hear what I write.)

--

___
Python tracker rep...@bugs.python.org
http://bugs.python.org/issue11877
___



[issue12102] mmap requires file to be synced

2011-05-18 Thread Steffen Daode Nurpmeso

Steffen Daode Nurpmeso sdao...@googlemail.com added the comment:

@ rion wrote (2011-05-18 12:39+0200):
> just document it or fix.

Hello, rion, Victor, I'm proposing a documentation patch.
It applies to 2.7 and 3.3 (from yesterday).

--
keywords: +patch
Added file: http://bugs.python.org/file22020/12102.1.diff

___
Python tracker rep...@bugs.python.org
http://bugs.python.org/issue12102
___
diff --git a/Doc/library/mmap.rst b/Doc/library/mmap.rst
--- a/Doc/library/mmap.rst
+++ b/Doc/library/mmap.rst
@@ -21,6 +21,11 @@
 :func:`os.open` function, which returns a file descriptor directly (the file
 still needs to be closed when done).
 
+..note::
+   If you want to create a memory-mapping for a writable, buffered file, you
+   should :func:`flush` the file first.  This is necessary to ensure local
+   modifications to the buffers are actually available to the mapping.
+
 For both the Unix and Windows versions of the constructor, *access* may be
 specified as an optional keyword parameter. *access* accepts one of three
 values: :const:`ACCESS_READ`, :const:`ACCESS_WRITE`, or :const:`ACCESS_COPY`



[issue12102] mmap requires file to be synced

2011-05-18 Thread Steffen Daode Nurpmeso

Steffen Daode Nurpmeso sdao...@googlemail.com added the comment:

@ STINNER Victor wrote (2011-05-18 14:33+0200):
> I don't think that Python should guess what the user expects
> (i.e. Python should not sync the file *implicitly*).

Before I found F_FULLFSYNC I indeed had a solution which
tracked the open() of all files, so that mmap() could check whether
a file had been opened in write mode or not.  ;|
Python does not do that by default (AFAIK).

--

___
Python tracker rep...@bugs.python.org
http://bugs.python.org/issue12102
___



[issue6721] Locks in python standard library should be sanitized on fork

2011-05-17 Thread Steffen Daode Nurpmeso

Steffen Daode Nurpmeso sdao...@googlemail.com added the comment:

@ Nir Aides wrote (2011-05-16 20:57+0200):
> Steffen, can you explain in layman's terms?

I am the layman here.
Charles-François has written a patch for Python which contradicted
his own proposal from msg135079, but he seems to have tested a lot,
so that he was then even able to prove that his own proposal was
correct.  His new patch does implement that, with a nice
introductory note.

He has also noticed that the only really safe solution is to
simply disallow multi-threading in programs which fork().  And
this either-or is exactly the conclusion we have taken and
implemented in our C++ library - which is not an embeddable
programming language that needs to integrate nicely in whatever
environment it is thrown into, but even replaces main().
And i don't know any application which cannot be implemented
regardless of fork()-or-threads instead of fork()-and-threads.
(You *can* have fork()+exec()-and-threads at any time!)

So what I tried to say is that it is extremely error-prone and
resource-intensive to try to implement anything that tries to
achieve both.  E.g. on Solaris they do have a forkall() and it
seems they have atfork handlers for everything (and even document
that in the system manual).  atfork handlers for everything!!
And for what?  To implement a standard which is obviously
brain-dead because it is *impossible* to handle - as your link
has shown, this is even confessed by members of the committee.

And writing memory in the child causes page-faults.
That's all i wanted to say.
(Writing this mail required more than 20 minutes, the mentioned
one was out in less than one.  And it is much more meaningful
AFAIK.)

--

___
Python tracker rep...@bugs.python.org
http://bugs.python.org/issue6721
___



[issue11877] Change os.fsync() to support physical backing store syncs

2011-05-17 Thread Steffen Daode Nurpmeso

Steffen Daode Nurpmeso sdao...@googlemail.com added the comment:

Thank you, thank you, thank you.
I'm a bit irritated that a french man treats a wet-her as a typo!
What if *i* like it??  In fact it is a fantastic physical backing
store.  Unbeatable.

Well and about dropping the fsync() in case the fcntl() fails with
ENOTTY.  This is Esc2dd, which shouldn't hurt a committer.
I'm convinced that full_fsync=False is optional and false by
default, but i don't trust Apple.  I've seen a reference to an
atomic file somewhere on bugs.python.org and that does fsync()
first followed by fcntl() if FULLFSYNC is available.  Thus, if
someone knows about that, she may do so, but otherwise i would
guess he doesn't, and in that case i would not expect ENOTTY from
an fsync() - still i want a full flush!
This is what NetBSD describes:

NOTES
For optimal efficiency, the fsync_range() call requires that
the file system containing the file referenced by fd support
partial synchronization of file data.  For file systems which
do not support partial synchronization, the entire file will
be synchronized and the call will be the equivalent of calling
fsync().

But Apple is *special* again.  Happy birthday.

--

___
Python tracker rep...@bugs.python.org
http://bugs.python.org/issue11877
___



[issue11877] Change os.fsync() to support physical backing store syncs

2011-05-17 Thread Steffen Daode Nurpmeso

Steffen Daode Nurpmeso sdao...@googlemail.com added the comment:

I've dropped wet-her!
I hope now you're satisfied!
So the buffer cache is all which remains hot.
How deserted!

> And you could also add a test (I guess that just calling fsync
> with full_sync=True on a valid FD would be enough).

I was able to add two tests as an extension to what is already tested
about os.fsync(), but that uses an invalid fd.
(At least it enters the conditional and fails as expected.)

> I'm not sure static is necessary, I'd rather make it const.

Yes..

> This code is correct as it is, see other extension modules in
> the stdlib for other examples of this pattern

..but i've used copy+paste here.

> And you could also add a test (I guess that just calling fsync
> with full_sync=True on a valid FD would be enough.

> The alternative would be that full_sync

Ok, i've renamed full_fsync to full_sync.
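
For clarity, the calling convention proposed by 11877.9.diff would look like
this from user code (full_sync is only added by the patch; it is not a
parameter of the released os.fsync()):

  import os

  def sync_strong(f):
      f.flush()
      try:
          os.fsync(f.fileno(), full_sync=True)   # interpreter with the patch
      except TypeError:
          os.fsync(f.fileno())                   # stock interpreter: no keyword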

--
Added file: http://bugs.python.org/file22016/11877.9.diff

___
Python tracker rep...@bugs.python.org
http://bugs.python.org/issue11877
___
diff --git a/Doc/library/os.rst b/Doc/library/os.rst
--- a/Doc/library/os.rst
+++ b/Doc/library/os.rst
@@ -798,7 +798,7 @@
Availability: Unix.
 
 
-.. function:: fsync(fd)
+.. function:: fsync(fd, full_sync=False)
 
Force write of file with filedescriptor *fd* to disk.  On Unix, this calls 
the
native :c:func:`fsync` function; on Windows, the MS :c:func:`_commit` 
function.
@@ -807,6 +807,15 @@
``f.flush()``, and then do ``os.fsync(f.fileno())``, to ensure that all 
internal
buffers associated with *f* are written to disk.
 
+   The POSIX standard requires that :c:func:`fsync` must transfer the buffered
+   data to the storage device, not that the data is actually written by the
+   device itself.  It explicitly leaves it up to operating system implementors
+   whether users are given stronger guarantees on data integrity or not.  Some
+   systems also offer special functions which take over the job of making such
+   stronger guarantees, e.g. Mac OS X and NetBSD.  The optional *full_sync*
+   argument can be used to enforce usage of these special functions if that is
+   appropriate for the *fd* in question.
+
Availability: Unix, and Windows.
 
 
diff --git a/Lib/test/test_os.py b/Lib/test/test_os.py
--- a/Lib/test/test_os.py
+++ b/Lib/test/test_os.py
@@ -837,10 +837,10 @@
     singles = ["fchdir", "dup", "fdopen", "fdatasync", "fstat",
                "fstatvfs", "fsync", "tcgetpgrp", "ttyname"]
     #singles.append("close")
-    #We omit close because it doesn'r raise an exception on some platforms
+    # We omit close because it doesn't raise an exception on some platforms
     def get_single(f):
         def helper(self):
-            if  hasattr(os, f):
+            if hasattr(os, f):
                 self.check(getattr(os, f))
         return helper
     for f in singles:
@@ -855,6 +855,11 @@
             self.fail("%r didn't raise a OSError with a bad file descriptor"
                       % f)
 
+    def test_fsync_arg(self):
+        if hasattr(os, "fsync"):
+            self.check(os.fsync, True)
+            self.check(os.fsync, False)
+
 def test_isatty(self):
 if hasattr(os, isatty):
 self.assertEqual(os.isatty(support.make_bad_fd()), False)
diff --git a/Modules/posixmodule.c b/Modules/posixmodule.c
--- a/Modules/posixmodule.c
+++ b/Modules/posixmodule.c
@@ -2121,13 +2121,50 @@
 
 #ifdef HAVE_FSYNC
 PyDoc_STRVAR(posix_fsync__doc__,
-"fsync(fildes)\n\n\
-force write of file with filedescriptor to disk.");
-
-static PyObject *
-posix_fsync(PyObject *self, PyObject *fdobj)
-{
-    return posix_fildes(fdobj, "fsync");
+"fsync(fildes, full_sync=False)\n\n"
+"force write of file buffers with fildes to disk;\n"
+"full_sync forces flush of disk caches in case fsync() alone is not enough.");
+
+static PyObject *
+posix_fsync(PyObject *self, PyObject *args, PyObject *kwargs)
+{
+    PyObject *fdobj;
+    int full_sync = 0;
+    static char *keywords[] = {"fd", "full_sync", NULL };
+
+    if (!PyArg_ParseTupleAndKeywords(args, kwargs, "O|i", keywords,
+                                     &fdobj, &full_sync))
+        return NULL;
+
+    /* See issue 11877 discussion */
+# if ((defined __APPLE__ && defined F_FULLFSYNC) || \
+     (defined __NetBSD__ && defined FDISKSYNC))
+    if (full_sync != 0) {
+        int res, fd = PyObject_AsFileDescriptor(fdobj);
+        if (fd < 0)
+            return NULL;
+        if (!_PyVerify_fd(fd))
+            return posix_error();
+
+        Py_BEGIN_ALLOW_THREADS
+#  if defined __APPLE__
+        /* F_FULLFSYNC is not supported for all types of FDs/FSYSs;
+         * be on the safe side and test for inappropriate ioctl errors */
+        res = fcntl(fd, F_FULLFSYNC);
+        if (res < 0 && errno == ENOTTY)
+            res = fsync(fd);
+#  elif defined __NetBSD__
+        res = fsync_range(fd, FFILESYNC|FDISKSYNC, 0, 0);
+#  endif
+        Py_END_ALLOW_THREADS

[issue11877] Change os.fsync() to support physical backing store syncs

2011-05-17 Thread Steffen Daode Nurpmeso

Changes by Steffen Daode Nurpmeso sdao...@googlemail.com:


Removed file: http://bugs.python.org/file21986/11877.8.diff

___
Python tracker rep...@bugs.python.org
http://bugs.python.org/issue11877
___
___
Python-bugs-list mailing list
Unsubscribe: 
http://mail.python.org/mailman/options/python-bugs-list/archive%40mail-archive.com



[issue11877] Change os.fsync() to support physical backing store syncs

2011-05-15 Thread Steffen Daode Nurpmeso

Steffen Daode Nurpmeso sdao...@googlemail.com added the comment:

 Finally, depending on the workload, it could have a significant
 performance impact.

Oh yes (first replaces os.fsync() with an in-python wrapper):

18:12 ~/tmp $ ll mail
ls: mail: No such file or directory
18:12 ~/tmp $ ll X-MAIL
312 -rw-r-  1 steffen  staff  315963 15 May 17:49   X-MAIL
18:12 ~/tmp $ time s-postman.py --folder=~/tmp/mail --dispatch=X-MAIL
Dispatched 37 tickets to 4 targets.

real0m4.638s
user0m0.974s
sys 0m0.160s
18:13 ~/tmp $ rm -rf mail
18:13 ~/tmp $ time s-postman.py --folder=~/tmp/mail --dispatch=X-MAIL
Dispatched 37 tickets to 4 targets.

real0m1.228s
user0m0.976s
sys 0m0.122s

(I'm using the first one.)
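(For reference, a hypothetical sketch of such an in-python wrapper; the
actual s-postman.py is not shown here, and fcntl.F_FULLFSYNC only exists
on Mac OS X, so the name timed_fsync and the bookkeeping are assumptions:

import os, time

try:
    import fcntl
    _HAVE_FULLFSYNC = hasattr(fcntl, "F_FULLFSYNC")
except ImportError:
    _HAVE_FULLFSYNC = False

_plain_fsync = os.fsync
_spent = 0.0

def timed_fsync(fd, full_sync=False):
    """Replacement for os.fsync() that records the time spent syncing."""
    global _spent
    t0 = time.time()
    try:
        if full_sync and _HAVE_FULLFSYNC:
            fcntl.fcntl(fd, fcntl.F_FULLFSYNC)   # ask the drive to flush too
        else:
            _plain_fsync(fd)
    finally:
        _spent += time.time() - t0

os.fsync = timed_fsync   # modules imported afterwards pick this up

Comparing _spent with and without the full flush gives numbers like the
ones above.)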

--

___
Python tracker rep...@bugs.python.org
http://bugs.python.org/issue11877
___
___
Python-bugs-list mailing list
Unsubscribe: 
http://mail.python.org/mailman/options/python-bugs-list/archive%40mail-archive.com



[issue6721] Locks in python standard library should be sanitized on fork

2011-05-15 Thread Steffen Daode Nurpmeso

Steffen Daode Nurpmeso sdao...@googlemail.com added the comment:

@ Charles-François Natali wrote (2011-05-15 01:14+0200):
 So if we really wanted to be safe, the only solution would be to
 forbid fork() in a multi-threaded program.
 Since it's not really a reasonable option

But now - why this?  The only really acceptable thing, if you have
control over what you are doing, is the following:

class SMP::Process
/*!
* \brief Daemonize process.
*[.]
* \note
* The implementation of this function is not trivial.
* To avoid portability no-goes and other such problems,
* you may \e not call this function after you have initialized
* Thread::enableSMP(),
* nor may there (have) be(en) Child objects,
* nor may you have used an EventLoop!
* I.e., the process has to be a single threaded, synchronous one.
* [.]
*/
pub static si32 daemonize(ui32 _daemon_flags=df_default);

namespace SMP::POSIX
/*!
* \brief \fn fork(2).
*[.]
* Be aware that this passes by all \SMP and Child related code,
* i.e., this simply \e is the system-call.
* Signal::resetAllSignalStates() and Child::killAll() are thus if
* particular interest; thread handling is still entirely up to you.
*/
pub static sir fork(void);

What kind of programs cannot be written with this restriction?
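(Expressed with the stdlib, the restriction amounts to nothing more than
an ordering rule; a minimal sketch, all names here illustrative:

import os
import threading

def daemonize():
    # classic double-fork; must run while the process is still
    # single-threaded, otherwise locks held by other threads are
    # copied into the child in a locked state
    if os.fork() > 0:
        os._exit(0)
    os.setsid()
    if os.fork() > 0:
        os._exit(0)

def worker():
    print("worker running in pid", os.getpid())

if __name__ == "__main__":
    daemonize()                       # single-threaded: safe
    t = threading.Thread(target=worker)
    t.start()                         # threads only after the last fork()
    t.join()
)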

--

___
Python tracker rep...@bugs.python.org
http://bugs.python.org/issue6721
___
___
Python-bugs-list mailing list
Unsubscribe: 
http://mail.python.org/mailman/options/python-bugs-list/archive%40mail-archive.com



[issue6721] Locks in python standard library should be sanitized on fork

2011-05-13 Thread Steffen Daode Nurpmeso

Steffen Daode Nurpmeso sdao...@googlemail.com added the comment:

@ Charles-François Natali rep...@bugs.python.org wrote (2011-05-13 
13:24+0200):
 I happily posted a reinit patch

I must say in advance that we have implemented our own thread
support 2003-2005 and i'm thus lucky not to need to use anything
else ever since.  So.  And of course i have no overview about
Python.  But i looked and saw no errors in the default path and
the tests run without errors.
Then i started to try your semaphore path which is a bit
problematic because Mac OS X doesn't offer anon sems ;).
(
By the way, in PyThread_acquire_lock_timed() these lines

if (microseconds > 0)
    MICROSECONDS_TO_TIMESPEC(microseconds, ts);

result in these compiler warnings.

python/thread_pthread.h: In function ‘PyThread_acquire_lock_timed’:
Python/thread_pthread.h:424: warning: ‘ts.tv_sec’ may be used
uninitialized in this function
Python/thread_pthread.h:424: warning: ‘ts.tv_nsec’ may be used
uninitialized in this function
)

#ifdef USE_SEMAPHORES
#define broken_sem_init broken_sem_init
static int broken_sem_init(sem_t **sem, int shared, unsigned int value) {
    int ret;
    auto char buffer[32];
    static long counter = 3000;
    sprintf(buffer, "%016ld", ++counter);
    *sem = sem_open(buffer, O_CREAT, (mode_t)0600, (unsigned int)value);
    ret = (*sem == SEM_FAILED) ? -1 : 0;
    //printf("BROKEN_SEM_INIT WILL RETURN %d (value=%u)\n", ret, value);
    return ret;
}
static int sem_timedwait(sem_t *sem, struct timespec *ts) {
    int success = -1, iters = 1000;
    struct timespec now, wait;
    printf("STARTING LOOP\n");
    for (;;) {
        if (sem_trywait(sem) == 0) {
            printf("TRYWAIT OK\n");
            success = 0;
            break;
        }
        wait.tv_sec = 0, wait.tv_nsec = 200 * 1000;
        //printf("DOWN "); fflush(stdout);
        nanosleep(&wait, NULL);
        MICROSECONDS_TO_TIMESPEC(0, now);
        //printf("WOKE UP NOW=%ld:%ld END=%ld:%ld\n", now.tv_sec, now.tv_nsec,
        //    ts->tv_sec, ts->tv_nsec);
        if (now.tv_sec > ts->tv_sec ||
            (now.tv_sec == ts->tv_sec && now.tv_nsec >= ts->tv_nsec))
            break;
        if (--iters < 0) {
            printf("BREAKING OFF LOOP, 1000 iterations\n");
            errno = ETIMEDOUT;
            break;
        }
    }
    return success;
}
#define sem_destroy sem_close

typedef struct _pthread_lock {
    sem_t                   *sem;
    struct _pthread_lock    *next;
    sem_t                   sem_buf;
} pthread_lock;
#endif

plus all the changes the struct change implies, say.
Yes it's silly, but i wanted to test.  And this is the result:

== CPython 3.3a0 (default:804abc2c60de+, May 14 2011, 01:09:53) [GCC 4.2.1 
(Apple Inc. build 5666) (dot 3)]
==   Darwin-10.7.0-i386-64bit little-endian
==   /Users/steffen/src/cpython/build/test_python_19230
Testing with flags: sys.flags(debug=0, inspect=0, interactive=0, optimize=0, 
dont_write_bytecode=0, no_user_site=0, no_site=0, ignore_environment=1, 
verbose=0, bytes_warning=0, quiet=0)
Using random seed 1362049
[1/1] test_threading
STARTING LOOP
test_acquire_contended (test.test_threading.LockTests) ... ok
test_acquire_destroy (test.test_threading.LockTests) ... ok
test_acquire_release (test.test_threading.LockTests) ... ok
test_constructor (test.test_threading.LockTests) ... ok
test_different_thread (test.test_threading.LockTests) ... ok
test_reacquire (test.test_threading.LockTests) ... ok
test_state_after_timeout (test.test_threading.LockTests) ... ok
test_thread_leak (test.test_threading.LockTests) ... ok
test_timeout (test.test_threading.LockTests) ... STARTING LOOP
TRYWAIT OK
FAIL
test_try_acquire (test.test_threading.LockTests) ... ok
test_try_acquire_contended (test.test_threading.LockTests) ... ok
test_with (test.test_threading.LockTests) ... ok
test__is_owned (test.test_threading.PyRLockTests) ... ok
test_acquire_contended (test.test_threading.PyRLockTests) ... ok
test_acquire_destroy (test.test_threading.PyRLockTests) ... ok
test_acquire_release (test.test_threading.PyRLockTests) ... ok
test_constructor (test.test_threading.PyRLockTests) ... ok
test_different_thread (test.test_threading.PyRLockTests) ... ok
test_reacquire (test.test_threading.PyRLockTests) ... ok
test_release_unacquired (test.test_threading.PyRLockTests) ... ok
test_thread_leak (test.test_threading.PyRLockTests) ... ok
test_timeout (test.test_threading.PyRLockTests) ... STARTING LOOP
TRYWAIT OK
FAIL
test_try_acquire (test.test_threading.PyRLockTests) ... ok
test_try_acquire_contended (test.test_threading.PyRLockTests) ... ok
test_with (test.test_threading.PyRLockTests) ... ok
test__is_owned (test.test_threading.CRLockTests) ... ok
test_acquire_contended (test.test_threading.CRLockTests) ... ok
test_acquire_destroy (test.test_threading.CRLockTests) ... ok
test_acquire_release (test.test_threading.CRLockTests) ... ok
test_constructor (test.test_threading.CRLockTests) ... ok
test_different_thread (test.test_threading.CRLockTests) ... ok
test_reacquire

[issue11877] Change os.fsync() to support physical backing store syncs

2011-05-12 Thread Steffen Daode Nurpmeso

Steffen Daode Nurpmeso sdao...@googlemail.com added the comment:

Just adding more notes on that by reactivating one of haypo's
links from #8604.  (And: maybe some Linux documentation should be
updated?)  From Theodore Ts'o,
http://www.linuxfoundation.org/news-media/blogs/browse/2009/03/don’t-fear-fsync:

As the Eat My Data presentation points out very clearly, the only
safe way according that POSIX allows for requesting data written
to a particular file descriptor be safely stored on stable storage
is via the fsync() call.  Linux’s close(2) man page makes this
point very clearly:

A successful close does not guarantee that the data has been
successfully saved to disk, as the kernel defers writes. It is not
common for a file system to flush the buffers when the stream is
closed. If you need to be sure that the data is physically stored
use fsync(2).

Why don’t application programmers follow these sage words?  These
three reasons are most often given as excuses:

- (Perceived) performance problems with fsync()
- The application only needs atomicity, but not durability
- The fsync() causing the hard drive to spin up unnecessarily in
  laptop_mode

(Don't ask me why i'm adding this note though.
I should have searched for it once i've opened that issue?
Bah!!!  Ts'o did not write that article for me.  He'd better hacked.)

--

___
Python tracker rep...@bugs.python.org
http://bugs.python.org/issue11877
___
___
Python-bugs-list mailing list
Unsubscribe: 
http://mail.python.org/mailman/options/python-bugs-list/archive%40mail-archive.com



[issue12060] Python doesn't support real time signals

2011-05-12 Thread Steffen Daode Nurpmeso

Steffen Daode Nurpmeso sdao...@googlemail.com added the comment:

Dunno.

 The patch is not completely safe.

Yeah it will not work without atomic ops.
Unfortunately the C standard seems to go into a direction
no one understands - as if an atomic_compare_and_swap() would
not suffice!  Do you know any machine language which reflects
what that standard draft describes?  I don't.

The NSIG detection of Modules/signalmodule.c uses 64 as a fallback.
32 seems to be more reasonable.
And you test against it instead of RTMAX in the patch.
(signalmodule.c also exports Python constants RTMIN and RTMAX
even though the standard explicitly allows these values to
be non-constants;
 http://pubs.opengroup.org/onlinepubs/9699919799/basedefs/signal.h.html; last 
time i've done anything on signals - in 2005 - that was
used nowhere - Linux, FreeBSD - though.)

Often there is a huge hole in between NSIG and RTMIN, but
struct Handlers is 8 or 12 bytes (unless the compiler does the
alignment - ouuh), so 32 unused members in Handlers[] will not
cost the world anyway; on Mac OS X (no RTSIG support?!? ;)
Python is at least 6 megabytes of memory anyway.

And does anyone actually know why the last time i looked after this
(on Linux, then) realtime signals had a default action EQ SIGABRT?
Armchair crouchers...
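(For experimenting, a hedged sketch of what one can inspect from Python;
SIGRTMIN/SIGRTMAX are platform dependent and, per POSIX, need not be
compile-time constants, hence the defensive lookup:

import signal

print("NSIG     =", signal.NSIG)
rtmin = getattr(signal, "SIGRTMIN", None)
rtmax = getattr(signal, "SIGRTMAX", None)
print("SIGRTMIN =", rtmin)
print("SIGRTMAX =", rtmax)

if rtmin is not None:
    # the default action for unhandled realtime signals terminates the
    # process, so install a handler before playing with them
    signal.signal(rtmin, lambda signo, frame: print("got", signo))
)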

--
nosy: +sdaoden

___
Python tracker rep...@bugs.python.org
http://bugs.python.org/issue12060
___
___
Python-bugs-list mailing list
Unsubscribe: 
http://mail.python.org/mailman/options/python-bugs-list/archive%40mail-archive.com



[issue12060] Python doesn't support real time signals

2011-05-12 Thread Steffen Daode Nurpmeso

Steffen Daode Nurpmeso sdao...@googlemail.com added the comment:

 On my Linux box, Python 3.3 says that signal.NSIG is equal to 65
 which looks correct.

On FreeBSD NSIG only counts old signals (32, one 32 bit mask),
SIGRTMIN is 65 and SIGRTMAX is 126.
Our internal old signal.h states

* If we do have realtime signals, #rtmin is 35 (i.e.,
* #nsig, FreeBSD+) or something like 38 or even 40 (Linux),
* and #rtmax is most likely 64 (Linux) or 128 (FreeBSD+).

so that this seems to be somewhat constant in time.
(#rtmin: we take some of those RT sigs for internal purposes if
possible.  This was maybe a bad and expensive design decision.)

 Why do you care about the default action?

* \brief Hooking program crashes (\psa crash.h crash.h\epsa).
* \note
* Installed hooks (normally) execute from within an internal
* signal handler!

So many syscalls for things which don't matter almost ever.
And that may even cost context-switches sometimes.

 I don't understand: I don't use RTMAX in my patch.

+    for (signum = 1; signum < NSIG; signum++) {

This will not catch the extended signal range on FreeBSD.

--

___
Python tracker rep...@bugs.python.org
http://bugs.python.org/issue12060
___
___
Python-bugs-list mailing list
Unsubscribe: 
http://mail.python.org/mailman/options/python-bugs-list/archive%40mail-archive.com



[issue11877] Change os.fsync() to support physical backing store syncs

2011-05-12 Thread Steffen Daode Nurpmeso

Steffen Daode Nurpmeso sdao...@googlemail.com added the comment:

[.]
 OSError: [Errno 22] Invalid argument

Sorry, i didn't know that.  Mac OS X (2.5 and 2.6 Apple shipped):

21:43 ~/tmp $ python2.5 -c 'import os; os.fsync(1)'; echo $?
0
21:43 ~/tmp $ python2.6 -c 'import os; os.fsync(1)'; echo $?
0
21:43 ~/tmp $ python2.7 -c 'import os; os.fsync(1)'; echo $?
0
21:43 ~/tmp $ python3 -c 'import os; os.fsync(1)'; echo $?
0

--

___
Python tracker rep...@bugs.python.org
http://bugs.python.org/issue11877
___
___
Python-bugs-list mailing list
Unsubscribe: 
http://mail.python.org/mailman/options/python-bugs-list/archive%40mail-archive.com



[issue11877] Change os.fsync() to support physical backing store syncs

2011-05-12 Thread Steffen Daode Nurpmeso

Steffen Daode Nurpmeso sdao...@googlemail.com added the comment:

 So I think you should stick with the previous version (well, if the
 full sync fails on other FDs, then it's another story, but in that
 case it should just be dropped altogether if it's not reliable...).

Strong stuff.
*This* is the version which should have been implemented from the
beginning, but Apple states

 F_FULLFSYNC Does the same thing as fsync(2) then asks the drive to
 flush all buffered data to the permanent storage
 device (arg is ignored).  This is currently
 implemented on HFS, MS-DOS (FAT), and Universal Disk
 Format (UDF) file systems.

and i thought
- fsync (maybe move buffers to Queue; do reorder Queue as approbiate)
- do call fsys impl. to .. whatever
That's why i had a version of the patch which did 'fsync();fcntl();'
because it would have been an additional syscall but the fsync()
part would possibly have been essentially skipped over ..unless..
Linux RC scripts had 'sync  sync  sync' but it does not seem
to be necessary any more (was it ever - i don't know).

But who knows if that fcntl will fail on some non-noted fsys?
I think it's better to be on the safe side.
Quoting you, Charles-François
 People requiring write durability will probably manage to find
 this full_sync parameter
and if they do they thus really strive for data integrity, so call
fsync() as a fallback for the security which Apple provides.
Also: we cannot let os.fsync() fail with ENOTTY!?
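(A user-level sketch of exactly that behaviour, written against today's
stdlib rather than the patch: try the platform's full flush, fall back to
a plain fsync() if the fcntl is rejected for this kind of descriptor.
fcntl.F_FULLFSYNC exists only on Mac OS X.

import os
import errno

def full_fsync(fd):
    try:
        import fcntl
        fcntl.fcntl(fd, fcntl.F_FULLFSYNC)
    except (ImportError, AttributeError):
        os.fsync(fd)                      # no F_FULLFSYNC on this platform
    except EnvironmentError as e:
        if e.errno == errno.ENOTTY:       # inappropriate ioctl for device
            os.fsync(fd)                  # still request an ordinary flush
        else:
            raise
)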

 By the way, it's appropriate, not approbiate. You made the same
 typo in your patch.

8~I  That was not a typo.  Thanks.
I'll change that.

--
Added file: http://bugs.python.org/file21986/11877.8.diff

___
Python tracker rep...@bugs.python.org
http://bugs.python.org/issue11877
___diff --git a/Doc/library/os.rst b/Doc/library/os.rst
--- a/Doc/library/os.rst
+++ b/Doc/library/os.rst
@@ -798,7 +798,7 @@
Availability: Unix.
 
 
-.. function:: fsync(fd)
+.. function:: fsync(fd, full_fsync=False)
 
Force write of file with filedescriptor *fd* to disk.  On Unix, this calls 
the
native :c:func:`fsync` function; on Windows, the MS :c:func:`_commit` 
function.
@@ -807,6 +807,15 @@
``f.flush()``, and then do ``os.fsync(f.fileno())``, to ensure that all 
internal
buffers associated with *f* are written to disk.
 
+   The POSIX standart requires that :c:func:`fsync` must transfer the buffered
+   data to the storage device, not that the data is actually written by the
+   device itself.  It explicitely leaves it up to operating system implementors
+   wether users are given stronger guarantees on data integrity or not.  Some
+   systems also offer special functions which overtake the part of making such
+   stronger guarantees, i.e., Mac OS X and NetBSD.  The optional *full_fsync*
+   argument can be used to enforce usage of these special functions if that is
+   appropriate for the *fd* in question.
+
Availability: Unix, and Windows.
 
 
diff --git a/Modules/posixmodule.c b/Modules/posixmodule.c
--- a/Modules/posixmodule.c
+++ b/Modules/posixmodule.c
@@ -2121,13 +2121,50 @@
 
 #ifdef HAVE_FSYNC
 PyDoc_STRVAR(posix_fsync__doc__,
-"fsync(fildes)\n\n\
-force write of file with filedescriptor to disk.");
-
-static PyObject *
-posix_fsync(PyObject *self, PyObject *fdobj)
-{
-    return posix_fildes(fdobj, "fsync");
+"fsync(fildes, full_fsync=False)\n\n"
+"force write of file buffers with fildes to disk;\n"
+"full_fsync forces flush of disk caches in case fsync() alone is not enough.");
+
+static PyObject *
+posix_fsync(PyObject *self, PyObject *args, PyObject *kwargs)
+{
+    PyObject *fdobj;
+    int full_fsync = 0;
+    static char *keywords[] = {"fd", "full_fsync", NULL };
+
+    if (!PyArg_ParseTupleAndKeywords(args, kwargs, "O|i", keywords,
+                                     &fdobj, &full_fsync))
+        return NULL;
+
+    /* See issue 11877 discussion */
+# if ((defined __APPLE__ && defined F_FULLFSYNC) || \
+     (defined __NetBSD__ && defined FDISKSYNC))
+    if (full_fsync != 0) {
+        int res, fd = PyObject_AsFileDescriptor(fdobj);
+        if (fd < 0)
+            return NULL;
+        if (!_PyVerify_fd(fd))
+            return posix_error();
+
+        Py_BEGIN_ALLOW_THREADS
+#  if defined __APPLE__
+        /* F_FULLFSYNC is not supported for all types of FDs/FSYSs;
+         * be on the safe side and test for inappropriate ioctl errors */
+        res = fcntl(fd, F_FULLFSYNC);
+        if (res < 0 && errno == ENOTTY)
+            res = fsync(fd);
+#  elif defined __NetBSD__
+        res = fsync_range(fd, FFILESYNC|FDISKSYNC, 0, 0);
+#  endif
+        Py_END_ALLOW_THREADS
+
+        if (res < 0)
+            return posix_error();
+        Py_INCREF(Py_None);
+        return Py_None;
+    } else
+# endif
+        return posix_fildes(fdobj, "fsync");
 }
 #endif /* HAVE_FSYNC */
 
@@ -9472,7 +9509,8 @@
 {fchdir

[issue11877] Change os.fsync() to support physical backing store syncs

2011-05-12 Thread Steffen Daode Nurpmeso

Changes by Steffen Daode Nurpmeso sdao...@googlemail.com:


Removed file: http://bugs.python.org/file21973/11877.7.diff

___
Python tracker rep...@bugs.python.org
http://bugs.python.org/issue11877
___
___
Python-bugs-list mailing list
Unsubscribe: 
http://mail.python.org/mailman/options/python-bugs-list/archive%40mail-archive.com



[issue6721] Locks in python standard library should be sanitized on fork

2011-05-12 Thread Steffen Daode Nurpmeso

Steffen Daode Nurpmeso sdao...@googlemail.com added the comment:

@Nir Aides: *thanks* for this link:
   http://groups.google.com/group/comp.programming.threads/msg/3a43122820983fde
You made my day!

--
nosy: +sdaoden

___
Python tracker rep...@bugs.python.org
http://bugs.python.org/issue6721
___
___
Python-bugs-list mailing list
Unsubscribe: 
http://mail.python.org/mailman/options/python-bugs-list/archive%40mail-archive.com



[issue11935] MMDF/MBOX mailbox need utime

2011-05-11 Thread Steffen Daode Nurpmeso

Steffen Daode Nurpmeso sdao...@googlemail.com added the comment:

For the record:
On Mac OS X 10.6.7, 'HFS, case sensitive' updates st_atime by
itself *once only*.  It does so ~0.75 seconds after os.utime() (+)
was called.  A time.sleep(0.8) can be used to detect this automatic
update reliably (about 50 tests with changing load all succeeded).
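(A rough reproduction of that observation, path and values illustrative:

import os, time

path = "probe.mbox"                      # hypothetical probe file
open(path, "a").close()
then = time.time() - 10
os.utime(path, (then, then))             # date atime/mtime back 10 seconds
before = os.stat(path).st_atime
time.sleep(0.8)                          # the window seen in the tests above
after = os.stat(path).st_atime
print("atime changed behind our back:", after != before)
)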

--

___
Python tracker rep...@bugs.python.org
http://bugs.python.org/issue11935
___
___
Python-bugs-list mailing list
Unsubscribe: 
http://mail.python.org/mailman/options/python-bugs-list/archive%40mail-archive.com



[issue11877] Change os.fsync() to support physical backing store syncs

2011-05-11 Thread Steffen Daode Nurpmeso

Steffen Daode Nurpmeso sdao...@googlemail.com added the comment:

Ouch, ouch, ouch!!
I'll have to send 11877.7.diff which extends 11877.6.diff.
This is necessary because using fcntl(2) with F_FULLFSYNC may fail
with ENOTTY (inapprobiate ioctl for device) in situations where
a normal fsync(2) succeeds (e.g. STDOUT_FILENO).
By the way - i have no idea of Redmoondian Horror at all
(except for http://msdn.microsoft.com/en-us/sync/bb887623.aspx).

Dropping .5 and .6 - and sorry for the noise.
Good night, Europe.

--
Added file: http://bugs.python.org/file21973/11877.7.diff

___
Python tracker rep...@bugs.python.org
http://bugs.python.org/issue11877
___diff --git a/Doc/library/os.rst b/Doc/library/os.rst
--- a/Doc/library/os.rst
+++ b/Doc/library/os.rst
@@ -798,7 +798,7 @@
Availability: Unix.
 
 
-.. function:: fsync(fd)
+.. function:: fsync(fd, full_fsync=False)
 
Force write of file with filedescriptor *fd* to disk.  On Unix, this calls 
the
native :c:func:`fsync` function; on Windows, the MS :c:func:`_commit` 
function.
@@ -807,6 +807,15 @@
``f.flush()``, and then do ``os.fsync(f.fileno())``, to ensure that all 
internal
buffers associated with *f* are written to disk.
 
+   The POSIX standart requires that :c:func:`fsync` must transfer the buffered
+   data to the storage device, not that the data is actually written by the
+   device itself.  It explicitely leaves it up to operating system implementors
+   wether users are given stronger guarantees on data integrity or not.  Some
+   systems also offer special functions which overtake the part of making such
+   stronger guarantees, i.e., Mac OS X and NetBSD.  The optional *full_fsync*
+   argument can be used to enforce usage of these special functions if that is
+   approbiate for the *fd* in question.
+
Availability: Unix, and Windows.
 
 
diff --git a/Modules/posixmodule.c b/Modules/posixmodule.c
--- a/Modules/posixmodule.c
+++ b/Modules/posixmodule.c
@@ -2121,13 +2121,50 @@
 
 #ifdef HAVE_FSYNC
 PyDoc_STRVAR(posix_fsync__doc__,
-"fsync(fildes)\n\n\
-force write of file with filedescriptor to disk.");
-
-static PyObject *
-posix_fsync(PyObject *self, PyObject *fdobj)
-{
-    return posix_fildes(fdobj, "fsync");
+"fsync(fildes, full_fsync=False)\n\n"
+"force write of file buffers with fildes to disk;\n"
+"full_fsync forces flush of disk caches in case fsync() alone is not enough.");
+
+static PyObject *
+posix_fsync(PyObject *self, PyObject *args, PyObject *kwargs)
+{
+    PyObject *fdobj;
+    int full_fsync = 0;
+    static char *keywords[] = {"fd", "full_fsync", NULL };
+
+    if (!PyArg_ParseTupleAndKeywords(args, kwargs, "O|i", keywords,
+                                     &fdobj, &full_fsync))
+        return NULL;
+
+    /* See issue 11877 discussion */
+# if ((defined __APPLE__ && defined F_FULLFSYNC) || \
+     (defined __NetBSD__ && defined FDISKSYNC))
+    if (full_fsync != 0) {
+        int res, fd = PyObject_AsFileDescriptor(fdobj);
+        if (fd < 0)
+            return NULL;
+        if (!_PyVerify_fd(fd))
+            return posix_error();
+
+        Py_BEGIN_ALLOW_THREADS
+#  if defined __APPLE__
+        /* F_FULLFSYNC is not supported for all types of descriptors, be on the
+         * safe side and test for inapprobiate ioctl errors */
+        res = fcntl(fd, F_FULLFSYNC);
+        if (res < 0 && errno == ENOTTY)
+            res = fsync(fd);
+#  elif defined __NetBSD__
+        res = fsync_range(fd, FFILESYNC|FDISKSYNC, 0, 0);
+#  endif
+        Py_END_ALLOW_THREADS
+
+        if (res < 0)
+            return posix_error();
+        Py_INCREF(Py_None);
+        return Py_None;
+    } else
+# endif
+        return posix_fildes(fdobj, "fsync");
 #endif /* HAVE_FSYNC */
 
@@ -9472,7 +9509,8 @@
     {"fchdir",          posix_fchdir, METH_O, posix_fchdir__doc__},
 #endif
 #ifdef HAVE_FSYNC
-    {"fsync",           posix_fsync, METH_O, posix_fsync__doc__},
+    {"fsync",           (PyCFunction)posix_fsync, METH_VARARGS|METH_KEYWORDS,
+                        posix_fsync__doc__},
 #endif
 #ifdef HAVE_SYNC
     {"sync",            posix_sync, METH_NOARGS, posix_sync__doc__},
___
Python-bugs-list mailing list
Unsubscribe: 
http://mail.python.org/mailman/options/python-bugs-list/archive%40mail-archive.com



[issue11877] Change os.fsync() to support physical backing store syncs

2011-05-11 Thread Steffen Daode Nurpmeso

Changes by Steffen Daode Nurpmeso sdao...@googlemail.com:


Removed file: http://bugs.python.org/file21924/11877.5.diff

___
Python tracker rep...@bugs.python.org
http://bugs.python.org/issue11877
___
___
Python-bugs-list mailing list
Unsubscribe: 
http://mail.python.org/mailman/options/python-bugs-list/archive%40mail-archive.com



[issue11877] Change os.fsync() to support physical backing store syncs

2011-05-11 Thread Steffen Daode Nurpmeso

Changes by Steffen Daode Nurpmeso sdao...@googlemail.com:


Removed file: http://bugs.python.org/file21953/11877.6.diff

___
Python tracker rep...@bugs.python.org
http://bugs.python.org/issue11877
___
___
Python-bugs-list mailing list
Unsubscribe: 
http://mail.python.org/mailman/options/python-bugs-list/archive%40mail-archive.com



[issue11877] Change os.fsync() to support physical backing store syncs

2011-05-10 Thread Steffen Daode Nurpmeso

Steffen Daode Nurpmeso sdao...@googlemail.com added the comment:

I don't agree with you and i don't believe it is implemented like
that.  But it seems i am the only one on this issue who sees it
like that.  Thus i apply 11877.6.diff.

 Declaring variables as auto is not necessary in C code and not
 used anywhere else in Python's source code

Changed.

 Steffen, you changed the default to doing a full sync in your
 last patch

Changed all through.

 The reason being that Apple and NetBSD folks should know what
 they're doing [.]
 People requiring write durability will probably manage to find
 this full_sync parameter.

This sounds logical.  I've changed the doc in os.rst so that it
includes a note on *which* operating systems actually do something
depending on that argument.
I should have done that from the very beginning.

 Finally, depending on the workload, it could have a significant
 performance impact.

:-)

--
Added file: http://bugs.python.org/file21953/11877.6.diff

___
Python tracker rep...@bugs.python.org
http://bugs.python.org/issue11877
___diff --git a/Doc/library/os.rst b/Doc/library/os.rst
--- a/Doc/library/os.rst
+++ b/Doc/library/os.rst
@@ -798,7 +798,7 @@
Availability: Unix.
 
 
-.. function:: fsync(fd)
+.. function:: fsync(fd, full_fsync=False)
 
Force write of file with filedescriptor *fd* to disk.  On Unix, this calls 
the
native :c:func:`fsync` function; on Windows, the MS :c:func:`_commit` 
function.
@@ -807,6 +807,13 @@
``f.flush()``, and then do ``os.fsync(f.fileno())``, to ensure that all 
internal
buffers associated with *f* are written to disk.
 
+   The POSIX standart only requires that :c:func:`fsync` must transfer the
+   buffered data to the storage device, not that the data is actually
+   written by the device itself.  On operating systems where it is
+   necessary and possible the optional *full_fsync* argument can be used to
+   initiate additional steps to synchronize the physical backing store.
+   At the time of this writing this affects Apple Mac OS X and NetBSD.
+
Availability: Unix, and Windows.
 
 
diff --git a/Modules/posixmodule.c b/Modules/posixmodule.c
--- a/Modules/posixmodule.c
+++ b/Modules/posixmodule.c
@@ -2121,13 +2121,46 @@
 
 #ifdef HAVE_FSYNC
 PyDoc_STRVAR(posix_fsync__doc__,
-fsync(fildes)\n\n\
-force write of file with filedescriptor to disk.);
-
-static PyObject *
-posix_fsync(PyObject *self, PyObject *fdobj)
-{
-return posix_fildes(fdobj, fsync);
+fsync(fildes, full_fsync=False)\n\n
+force write of file buffers with fildes to disk;\n
+full_fsync forces flush of disk caches in case fsync() alone is not enough.);
+
+static PyObject *
+posix_fsync(PyObject *self, PyObject *args, PyObject *kwargs)
+{
+PyObject *fdobj;
+int full_fsync = 0;
+static char *keywords[] = {fd, full_fsync, NULL };
+
+if (!PyArg_ParseTupleAndKeywords(args, kwargs, O|i, keywords,
+ fdobj, full_fsync))
+return NULL;
+
+/* See issue 11877 discussion */
+# if ((defined __APPLE__  defined F_FULLFSYNC) || \
+  (defined __NetBSD__  defined FDISKSYNC))
+if (full_fsync != 0) {
+int res, fd = PyObject_AsFileDescriptor(fdobj);
+if (fd  0)
+return NULL;
+if (!_PyVerify_fd(fd))
+return posix_error();
+
+Py_BEGIN_ALLOW_THREADS
+#  if defined __APPLE__
+res = fcntl(fd, F_FULLFSYNC);
+#  elif defined __NetBSD__
+res = fsync_range(fd, FFILESYNC|FDISKSYNC, 0, 0);
+#  endif
+Py_END_ALLOW_THREADS
+
+if (res  0)
+return posix_error();
+Py_INCREF(Py_None);
+return Py_None;
+} else
+# endif
+return posix_fildes(fdobj, fsync);
 }
 #endif /* HAVE_FSYNC */
 
@@ -9472,7 +9505,8 @@
     {"fchdir",          posix_fchdir, METH_O, posix_fchdir__doc__},
 #endif
 #ifdef HAVE_FSYNC
-    {"fsync",           posix_fsync, METH_O, posix_fsync__doc__},
+    {"fsync",           (PyCFunction)posix_fsync, METH_VARARGS|METH_KEYWORDS,
+                        posix_fsync__doc__},
 #endif
 #ifdef HAVE_SYNC
     {"sync",            posix_sync, METH_NOARGS, posix_sync__doc__},
___
Python-bugs-list mailing list
Unsubscribe: 
http://mail.python.org/mailman/options/python-bugs-list/archive%40mail-archive.com



[issue11877] Change os.fsync() to support physical backing store syncs

2011-05-09 Thread Steffen Daode Nurpmeso

Steffen Daode Nurpmeso sdao...@googlemail.com added the comment:

Ronald Oussoren wrote (2011-05-08 10:33+0200):
 Steffen, I don't understand your comment about auto. Declaring
 variables as auto is not necessary in C code and not used
 anywhere else in Python's source code.

Well, as long as i can keep my underwear all is fine.
(I also looked at Google translate because i first wanted to start
the reply with croak .. pip .. twist .. wrench .. groan .. ugh.)

--

___
Python tracker rep...@bugs.python.org
http://bugs.python.org/issue11877
___
___
Python-bugs-list mailing list
Unsubscribe: 
http://mail.python.org/mailman/options/python-bugs-list/archive%40mail-archive.com



[issue11277] Crash with mmap and sparse files on Mac OS X

2011-05-07 Thread Steffen Daode Nurpmeso

Steffen Daode Nurpmeso sdao...@googlemail.com added the comment:

@Nadeem: note that the committed versions of the tests would not
show up the Mac OS X mmap() bug AFAIK, because there is an
intermediate .close() of the file to be mmapped.  The OS X bug is
that the VMS/VFS interaction fails to provide a valid memory
region for pages which are not yet physically present on disc
- i.e. there is no true sparse file support as on Linux, which
simply uses references to a single COW zero page.
(I've not tried it out for real yet, but i'm foolish like a proud
cock, so i've looked at the changeset :)
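(For illustration, a hedged sketch of the access pattern in question:
seek far past EOF to create a sparse file, then mmap it *without* closing
and reopening in between; the size and file name are arbitrary, and on the
affected OS X versions reading the never-materialized pages is what
triggered the crash, while on Linux the holes simply read back as zeroes.

import mmap, os

SIZE = 1 << 24                            # 16 MiB hole, kept small on purpose
with open("sparse.bin", "w+b") as f:
    f.seek(SIZE - 1)
    f.write(b"\0")
    f.flush()
    m = mmap.mmap(f.fileno(), 0, access=mmap.ACCESS_READ)
    try:
        print(m[0], m[SIZE - 1])          # touches pages inside the hole
    finally:
        m.close()
os.unlink("sparse.bin")
)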

--

___
Python tracker rep...@bugs.python.org
http://bugs.python.org/issue11277
___
___
Python-bugs-list mailing list
Unsubscribe: 
http://mail.python.org/mailman/options/python-bugs-list/archive%40mail-archive.com



[issue11277] Crash with mmap and sparse files on Mac OS X

2011-05-07 Thread Steffen Daode Nurpmeso

Steffen Daode Nurpmeso sdao...@googlemail.com added the comment:

(Of course this may also be intentional, say.
But then i would vote against it :), because it's better the
tests bring out errors than end-user apps.)

--

___
Python tracker rep...@bugs.python.org
http://bugs.python.org/issue11277
___
___
Python-bugs-list mailing list
Unsubscribe: 
http://mail.python.org/mailman/options/python-bugs-list/archive%40mail-archive.com



[issue11999] sporadic failure in test_mailbox on FreeBSD

2011-05-07 Thread Steffen Daode Nurpmeso

Steffen Daode Nurpmeso sdao...@googlemail.com added the comment:

That's really a good one.
(In my eyes.)

--
title: sporadic failure in test_mailbox - sporadic failure in test_mailbox on 
FreeBSD

___
Python tracker rep...@bugs.python.org
http://bugs.python.org/issue11999
___
___
Python-bugs-list mailing list
Unsubscribe: 
http://mail.python.org/mailman/options/python-bugs-list/archive%40mail-archive.com



[issue11877] Change os.fsync() to support physical backing store syncs

2011-05-07 Thread Steffen Daode Nurpmeso

Steffen Daode Nurpmeso sdao...@googlemail.com added the comment:

On Sat,  7 May 2011 14:20:51 +0200, Charles-François Natali wrote:
 I really can't see a good reason for using it here (and
 anywhere, see http://c-faq.com/decl/auto.html).

You're right.

 Why are you using the auto storage class specifier? I know
 that explicit is better than implicit

Yup.  I'm doing what is happening for real in (x86) assembler.
Thus auto means (at a glance!) that this one will need space on
the stack.
(Conversely i'm not using register because who knows if that is
really true?  ;))

 Do you really need to use goto for such a simple code?

Yup.  Ok, this is more complicated.  The reason is that my funs
have exactly one entry point and exactly one place where they're
left.  This is because we here do manual instrumentalisation as in

ret
fun(args)
{
locals
s_NYD_ENTER;

assertions

...

jleave:
s_NYD_LEAVE;
return;

[maybe only, if large: possibly predict-false code blocks]

[possible error jumps here
goto jleave;]
}

We're proud of that.  N(ot)Y(et)D(ead) is actually pretty cool,
i've used debuggers exactly 5 times in the past about ten years!
I don't even know exactly *how to use debuggers*.  8---} (NYD can
do backtracing, or, with NDEBUG and optional, profiling.)
A really good optimizing compiler will produce the very same code!
And i love nasm, but it's pretty non-portable.
But C is also nice.

But of course i can change this (in C) to simply use return, this
is easy here, no resources to be freed.

Thanks for looking at this, by the way.  :)

--

___
Python tracker rep...@bugs.python.org
http://bugs.python.org/issue11877
___
___
Python-bugs-list mailing list
Unsubscribe: 
http://mail.python.org/mailman/options/python-bugs-list/archive%40mail-archive.com



[issue11877] Change os.fsync() to support physical backing store syncs

2011-05-07 Thread Steffen Daode Nurpmeso

Steffen Daode Nurpmeso sdao...@googlemail.com added the comment:

On Sat,  7 May 2011 14:20:51 +0200, Charles-François Natali wrote:
 # ifdef __APPLE__
 res = fcntl(fd, F_FULLFSYNC);
 # endif
 # ifdef __NetBSD__
 res = fsync_range(fd, FFILESYNC|FDISKSYNC, 0, 0);
 # endif
 
 Since __APPLE__ and __NetBSD__ are exclusive, you could use something like
 # if defined(__APPLE__)
 res = fcntl(fd, F_FULLFSYNC);
 # elif defined(__NetBSD__)
 res = fsync_range(fd, FFILESYNC|FDISKSYNC, 0, 0);
 # endif

Yes, you're right, i'll update the patch accordingly.

--
Steffen, sdaoden(*)(gmail.com)

--

___
Python tracker rep...@bugs.python.org
http://bugs.python.org/issue11877
___
___
Python-bugs-list mailing list
Unsubscribe: 
http://mail.python.org/mailman/options/python-bugs-list/archive%40mail-archive.com



[issue11999] sporadic failure in test_mailbox

2011-05-07 Thread Steffen Daode Nurpmeso

Steffen Daode Nurpmeso sdao...@googlemail.com added the comment:

This tracker - sorry!
(I surely will try to write and offer a patch for the tracker,
but first i need to understand that mailbox.py jungle for
#11935, next week.)

--
title: sporadic failure in test_mailbox on FreeBSD - sporadic failure in 
test_mailbox

___
Python tracker rep...@bugs.python.org
http://bugs.python.org/issue11999
___
___
Python-bugs-list mailing list
Unsubscribe: 
http://mail.python.org/mailman/options/python-bugs-list/archive%40mail-archive.com



[issue11877] Change os.fsync() to support physical backing store syncs

2011-05-07 Thread Steffen Daode Nurpmeso

Steffen Daode Nurpmeso sdao...@googlemail.com added the comment:

11877.5.diff incorporates all changes suggested by
Charles-François except for the 'auto' keyword, which is extremely
important and which would need to be invented if it did not
yet exist.

I'm dropping the old stuff.  And i think this is the final version
of the patch.  I've changed the default argument to 'True' as
i really think it's better to be on the safe side here.  (The
French are better off doing some crazy and dangerous sports to
discover the edges of life!)  I'm still of the opinion that this
should be completely hidden, but since it's completely transparent
whether a Python function gets yet another named argument or not...

So, thanks to Ronald, i detected that also NetBSD introduced
a FDISKSYNC flag in 2005 and that you really need fsync_range()
there (at least by definition)!  How could they do that?  But i'm
also happy to see that all other systems try hard to achieve
security transparently and by default and unless i missed
something once again.

--
Added file: http://bugs.python.org/file21924/11877.5.diff

___
Python tracker rep...@bugs.python.org
http://bugs.python.org/issue11877
___diff --git a/Doc/library/os.rst b/Doc/library/os.rst
--- a/Doc/library/os.rst
+++ b/Doc/library/os.rst
@@ -798,7 +798,7 @@
Availability: Unix.
 
 
-.. function:: fsync(fd)
+.. function:: fsync(fd, full_fsync=True)
 
Force write of file with filedescriptor *fd* to disk.  On Unix, this calls 
the
native :c:func:`fsync` function; on Windows, the MS :c:func:`_commit` 
function.
@@ -807,6 +807,12 @@
``f.flush()``, and then do ``os.fsync(f.fileno())``, to ensure that all 
internal
buffers associated with *f* are written to disk.
 
+   The POSIX standart only requires that :c:func:`fsync` must transfer the
+   buffered data to the storage device, not that the data is actually
+   written by the device itself.  On operating systems where it is
+   necessary and possible the optional *full_fsync* argument can be used to
+   initiate additional steps to synchronize the physical backing store.
+
Availability: Unix, and Windows.
 
 
diff --git a/Modules/posixmodule.c b/Modules/posixmodule.c
--- a/Modules/posixmodule.c
+++ b/Modules/posixmodule.c
@@ -2121,13 +2121,46 @@
 
 #ifdef HAVE_FSYNC
 PyDoc_STRVAR(posix_fsync__doc__,
-"fsync(fildes)\n\n\
-force write of file with filedescriptor to disk.");
-
-static PyObject *
-posix_fsync(PyObject *self, PyObject *fdobj)
-{
-    return posix_fildes(fdobj, "fsync");
+"fsync(fildes, full_fsync=True)\n\n"
+"force write of file buffers with fildes to disk;\n"
+"full_fsync forces flush of disk caches in case fsync() alone is not enough.");
+
+static PyObject *
+posix_fsync(PyObject *self, PyObject *args, PyObject *kwargs)
+{
+    auto PyObject *fdobj;
+    auto int full_fsync = 1;
+    static char *keywords[] = {"fd", "full_fsync", NULL };
+
+    if (!PyArg_ParseTupleAndKeywords(args, kwargs, "O|i", keywords,
+                                     &fdobj, &full_fsync))
+        return NULL;
+
+    /* See issue 11877 discussion */
+# if ((defined __APPLE__ && defined F_FULLFSYNC) || \
+     (defined __NetBSD__ && defined FDISKSYNC))
+    if (full_fsync != 0) {
+        int res, fd = PyObject_AsFileDescriptor(fdobj);
+        if (fd < 0)
+            return NULL;
+        if (!_PyVerify_fd(fd))
+            return posix_error();
+
+        Py_BEGIN_ALLOW_THREADS
+#  if defined __APPLE__
+        res = fcntl(fd, F_FULLFSYNC);
+#  elif defined __NetBSD__
+        res = fsync_range(fd, FFILESYNC|FDISKSYNC, 0, 0);
+#  endif
+        Py_END_ALLOW_THREADS
+
+        if (res < 0)
+            return posix_error();
+        Py_INCREF(Py_None);
+        return Py_None;
+    } else
+# endif
+        return posix_fildes(fdobj, "fsync");
 }
 #endif /* HAVE_FSYNC */
 
@@ -9472,7 +9505,8 @@
     {"fchdir",          posix_fchdir, METH_O, posix_fchdir__doc__},
 #endif
 #ifdef HAVE_FSYNC
-    {"fsync",           posix_fsync, METH_O, posix_fsync__doc__},
+    {"fsync",           (PyCFunction)posix_fsync, METH_VARARGS|METH_KEYWORDS,
+                        posix_fsync__doc__},
 #endif
 #ifdef HAVE_SYNC
     {"sync",            posix_sync, METH_NOARGS, posix_sync__doc__},
___
Python-bugs-list mailing list
Unsubscribe: 
http://mail.python.org/mailman/options/python-bugs-list/archive%40mail-archive.com



[issue11877] Change os.fsync() to support physical backing store syncs

2011-05-07 Thread Steffen Daode Nurpmeso

Changes by Steffen Daode Nurpmeso sdao...@googlemail.com:


Removed file: http://bugs.python.org/file21749/11877.3.diff

___
Python tracker rep...@bugs.python.org
http://bugs.python.org/issue11877
___
___
Python-bugs-list mailing list
Unsubscribe: 
http://mail.python.org/mailman/options/python-bugs-list/archive%40mail-archive.com



[issue11877] Change os.fsync() to support physical backing store syncs

2011-05-07 Thread Steffen Daode Nurpmeso

Changes by Steffen Daode Nurpmeso sdao...@googlemail.com:


Removed file: http://bugs.python.org/file21771/11877.4.diff

___
Python tracker rep...@bugs.python.org
http://bugs.python.org/issue11877
___
___
Python-bugs-list mailing list
Unsubscribe: 
http://mail.python.org/mailman/options/python-bugs-list/archive%40mail-archive.com



[issue11999] sporadic failure in test_mailbox on FreeBSD

2011-05-06 Thread Steffen Daode Nurpmeso

Steffen Daode Nurpmeso sdao...@googlemail.com added the comment:

On Fri,  6 May 2011 04:44:00 +0200, R. David Murray wrote:
 [.] the mtime only has a resolution of one second.

You always say that!  But i'm pretty sure from somewhen long ago
that there are filesystems which have a two second time resolution.
And unless i'm mistaken that filesystem is still used widely.

 Attached is a patch implementing the fix.
 It undoes the 6896 patch

I've not yet tried your code but from looking at the patch it
seems to target towards a real approach.

 I also added an additional delta in case the file system clock
 is skewing relative to the system clock.  I made this a class
 attribute so that it is adjustable; perhaps it should be made
 public and documented.

On the other hand, if it shows up after almost five years that the
one second resolution solution doesn't work, and that simply
adjusting to a two second resolution is not smart enough to get
this fixed, then i would not go for something half-automatic which
a user needs to adjust manually, because how could a user do that?

Thus, in my view, if you *really* want to make
mailbox.py a *good* and *beautiful* thing then the timedelta
between the filesystem and the host's time() must of course be
tracked, and the fuzziness should be adjusted automatically.
E.g. similar to http://linuxcommand.org/man_pages/adjtimex8.html.

--

___
Python tracker rep...@bugs.python.org
http://bugs.python.org/issue11999
___
___
Python-bugs-list mailing list
Unsubscribe: 
http://mail.python.org/mailman/options/python-bugs-list/archive%40mail-archive.com



[issue11999] sporadic failure in test_mailbox on FreeBSD

2011-05-06 Thread Steffen Daode Nurpmeso

Steffen Daode Nurpmeso sdao...@googlemail.com added the comment:

 I also added an additional delta in case the file system clock
 is skewing relative to the system clock. 

In fact this idea could even be made public e.g. like this

class ClockDrifter:
def add_snapshot(self, exttime, loctime=None):
if loctime is None:
loctime = time.time()
...
def drift_tendency(self):
...
def drift_distance(self):
...

I could think of usages thereof.
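One possible way to fill in that sketch - a minimal sketch only, the
method names follow the outline above and the naive statistics (mean
offset, first-to-last trend) are my own assumption:

import time

class ClockDrifter:
    def __init__(self):
        self._samples = []                  # list of (loctime, offset) pairs

    def add_snapshot(self, exttime, loctime=None):
        if loctime is None:
            loctime = time.time()
        self._samples.append((loctime, exttime - loctime))

    def drift_distance(self):
        """Average offset of the external clock against the local one."""
        if not self._samples:
            return 0.0
        return sum(off for _, off in self._samples) / len(self._samples)

    def drift_tendency(self):
        """Offset change per second between first and last snapshot."""
        if len(self._samples) < 2:
            return 0.0
        (t0, o0), (t1, o1) = self._samples[0], self._samples[-1]
        return (o1 - o0) / (t1 - t0) if t1 != t0 else 0.0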

--

___
Python tracker rep...@bugs.python.org
http://bugs.python.org/issue11999
___
___
Python-bugs-list mailing list
Unsubscribe: 
http://mail.python.org/mailman/options/python-bugs-list/archive%40mail-archive.com



[issue11999] sporadic failure in test_mailbox on FreeBSD

2011-05-06 Thread Steffen Daode Nurpmeso

Steffen Daode Nurpmeso sdao...@googlemail.com added the comment:

I like all kinds of single-screw solutions.
And i feel a bit uncomfortable with your usual tightness.
(0.1 seconds - i mean, come on)

--

___
Python tracker rep...@bugs.python.org
http://bugs.python.org/issue11999
___
___
Python-bugs-list mailing list
Unsubscribe: 
http://mail.python.org/mailman/options/python-bugs-list/archive%40mail-archive.com



[issue11999] sporadic failure in test_mailbox on FreeBSD

2011-05-06 Thread Steffen Daode Nurpmeso

Steffen Daode Nurpmeso sdao...@googlemail.com added the comment:

 how many people are going to use maildir on FAT?

Dunno.
But it's maybe the lowest common denominator of mountable
readwrite filesystems.  Except for that MacBook i always had
a shared FAT partition on my private PCs.

--

___
Python tracker rep...@bugs.python.org
http://bugs.python.org/issue11999
___
___
Python-bugs-list mailing list
Unsubscribe: 
http://mail.python.org/mailman/options/python-bugs-list/archive%40mail-archive.com



[issue11935] MMDF/MBOX mailbox need utime

2011-05-06 Thread Steffen Daode Nurpmeso

Steffen Daode Nurpmeso sdao...@googlemail.com added the comment:

@david: note i got stuck on updating my patch for mailbox.py and
switched to do test_mmap.py instead, so that i don't know whether
i will be able to finish it today.  Is it really true that
mailbox.py even writes mailboxes without locking in case of an
appending write?  So i really have to look at that before i will
proceed and write the patch.

--

___
Python tracker rep...@bugs.python.org
http://bugs.python.org/issue11935
___
___
Python-bugs-list mailing list
Unsubscribe: 
http://mail.python.org/mailman/options/python-bugs-list/archive%40mail-archive.com



[issue11999] sporadic failure in test_mailbox on FreeBSD

2011-05-06 Thread Steffen Daode Nurpmeso

Steffen Daode Nurpmeso sdao...@googlemail.com added the comment:

I agree that it's fun to see pieces of code interacting like gears
in a transmission, but often it gets ugly due to the noise from
the outside which requires to add ugly fuzziness or honour stupid
design decisions.
Maybe an environment variable like PYMAILBOX_SKEW and then one
could run test_mailbox.py and see if it works for 0.01?? :)

--

___
Python tracker rep...@bugs.python.org
http://bugs.python.org/issue11999
___
___
Python-bugs-list mailing list
Unsubscribe: 
http://mail.python.org/mailman/options/python-bugs-list/archive%40mail-archive.com



[issue11277] Crash with mmap and sparse files on Mac OS X

2011-05-06 Thread Steffen Daode Nurpmeso

Steffen Daode Nurpmeso sdao...@googlemail.com added the comment:

On Fri,  6 May 2011 02:54:07 +0200, Nadeem Vawda wrote:
 I think so. [.]
 it turns out that the OS X sparsefile crash is also covered by
 LargeMmapTests.test_large_offset() in test_mmap [!!!]. [.]

So i followed your suggestion and did not do anything with zlib any
more.  Even if that means that there is no test which checksums an
entire superlarge mmap() region.
Instead i've changed/added test cases in test_mmap.py:

- Removed all context-manager usage from LargeMmapTests().
  This feature has been introduced in 3.2 and is already tested
  elsewhere.  Like this the test is almost identical on 2.7 and 3.x.
- I've dropped _working_largefile().  This creates a useless large
  file only to unlink it directly.  Instead the necessary try:catch:
  is done directly in the tests.
- (Directly testing after .flush() without reopening the file.)
- These new tests don't run on 32 bit.

May the juice be with you

--
Added file: http://bugs.python.org/file21909/11277-test_mmap.1.py
Added file: http://bugs.python.org/file21910/11277-test_mmap-27.1.py

___
Python tracker rep...@bugs.python.org
http://bugs.python.org/issue11277
___diff --git a/Lib/test/test_mmap.py b/Lib/test/test_mmap.py
--- a/Lib/test/test_mmap.py
+++ b/Lib/test/test_mmap.py
@@ -1,4 +1,5 @@
-from test.support import TESTFN, run_unittest, import_module, unlink, requires
+from test.support import TESTFN, run_unittest, import_module, unlink
+from test.support import requires, _4G
 import unittest
 import os
 import re
@@ -662,44 +663,87 @@
 def tearDown(self):
 unlink(TESTFN)
 
-    def _working_largefile(self):
-        # Only run if the current filesystem supports large files.
-        f = open(TESTFN, 'wb', buffering=0)
+    def _test_splice(self, f, i):
+        # Test splicing with pages around critical values in respect to
+        # memory management
+        # Issue 11277: does mmap() force materialization of backing store?
+        m = mmap.mmap(f.fileno(), 0, access=mmap.ACCESS_READ)
         try:
-            f.seek(0x80000001)
-            f.write(b'x')
-            f.flush()
-        except (IOError, OverflowError):
-            raise unittest.SkipTest("filesystem does not have largefile support")
+            # Memory page before xy
+            self.assertEqual(m[i+0:i+2], b'  ')
+            # Memory page after xy
+            self.assertEqual(m[i+10:i+12], b'  ')
+            # Cross pages
+            self.assertEqual(m[i+2:i+10], b'DEARdear')
         finally:
-            f.close()
-            unlink(TESTFN)
+            m.close()
 
-    def test_large_offset(self):
+    def _test_subscr(self, f, idx, expect):
+        # Test subscript for critical values like INT32_MAX, UINT32_MAX
+        m = mmap.mmap(f.fileno(), 0, access=mmap.ACCESS_READ)
+        try:
+            self.assertEqual(m[idx], expect)
+        finally:
+            m.close()
+
+    @unittest.skipUnless(sys.maxsize > _4G, "Can't run on a 32-bit system.")
+    def test_around_32bit_sbitflip(self):
+        start = 0x7FFFFFFA
         if sys.platform[:3] == 'win' or sys.platform == 'darwin':
             requires('largefile',
-                'test requires %s bytes and a long time to run' % str(0x180000000))
-        self._working_largefile()
-        with open(TESTFN, 'wb') as f:
-            f.seek(0x14FFFFFFF)
-            f.write(b" ")
+                     'test requires %s bytes and a long time to run' %
+                     str(start+12))
+        with open(TESTFN, 'w+b') as f:
+            try:
+                f.seek(start)
+                f.write(b'  DEARdear  ')
+                f.flush()
+            except (IOError, OverflowError):
+                raise unittest.SkipTest('filesystem does not have largefile '
+                                        'support')
+            self._test_splice(f, start)
+            self._test_subscr(f, start+len(b'  DEA'), ord(b'R'))
+            self._test_subscr(f, start+len(b'  DEARdea'), ord(b'r'))
+        unlink(TESTFN)
 
-        with open(TESTFN, 'rb') as f:
-            with mmap.mmap(f.fileno(), 0, offset=0x140000000,
-                           access=mmap.ACCESS_READ) as m:
-                self.assertEqual(m[0xFFFFFFF], 32)
+    @unittest.skipUnless(sys.maxsize > _4G, "Can't run on a 32-bit system.")
+    def test_around_32bit_excess(self):
+        start = 0xFFFFFFFA
+        if sys.platform[:3] == 'win' or sys.platform == 'darwin':
+            requires('largefile',
+                     'test requires %s bytes and a long time to run' %
+                     str(start+12))
+        with open(TESTFN, 'w+b') as f:
+            try:
+                f.seek(start)
+                f.write(b'  DEARdear  ')
+                f.flush()
+            except (IOError, OverflowError):
+                raise unittest.SkipTest('filesystem does not have largefile

[issue11277] Crash with mmap and sparse files on Mac OS X

2011-05-06 Thread Steffen Daode Nurpmeso

Changes by Steffen Daode Nurpmeso sdao...@googlemail.com:


Removed file: http://bugs.python.org/file21869/11277-27.2.diff

___
Python tracker rep...@bugs.python.org
http://bugs.python.org/issue11277
___
___
Python-bugs-list mailing list
Unsubscribe: 
http://mail.python.org/mailman/options/python-bugs-list/archive%40mail-archive.com



[issue11277] Crash with mmap and sparse files on Mac OS X

2011-05-06 Thread Steffen Daode Nurpmeso

Changes by Steffen Daode Nurpmeso sdao...@googlemail.com:


Removed file: http://bugs.python.org/file21885/11277-27.3.diff

___
Python tracker rep...@bugs.python.org
http://bugs.python.org/issue11277
___
___
Python-bugs-list mailing list
Unsubscribe: 
http://mail.python.org/mailman/options/python-bugs-list/archive%40mail-archive.com



[issue11935] MMDF/MBOX mailbox need utime

2011-05-05 Thread Steffen Daode Nurpmeso

Steffen Daode Nurpmeso sdao...@googlemail.com added the comment:

On Thu,  5 May 2011 03:52:29 +0200, R. David Murray wrote:
 [..] the shell [..] I believe it just looks at the mtime/atime.

/* check_mail () is useful for more than just checking mail.  Since it has
   the paranoids dream ability of telling you when someone has read your
   mail, it can just as easily be used to tell you when someones .profile
   file has been read, thus letting one know when someone else has logged
   in.  Pretty good, huh? */

  /* If the user has just run a program which manipulates the
 mail file, then don't bother explaining that the mail
 file has been manipulated.  Since some systems don't change
 the access time to be equal to the modification time when
 the mail in the file is manipulated, check the size also.  If
 the file has not grown, continue. */

 /* If the mod time is later than the access time and the file
 has grown, note the fact that this is *new* mail. */
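(A rough Python equivalent of that heuristic - a sketch for illustration
only, not part of the patch; the helper name and the last_size bookkeeping
are made up:

import os

def looks_like_new_mail(path, last_size):
    st = os.stat(path)
    grown = st.st_size > last_size
    # new mail: modified after it was last read, and the file has grown
    return grown and st.st_mtime > st.st_atime, st.st_size
)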

 Not all system mail spools are mode 1777.  Mutt needs to be
 setgid mail on systems that aren't, if I understand correctly.
 Making a python program setgid mail is a bit more of security
 issue than making a well-tested C program setgid, since it is
 easier to break out of the box in a python program.

Ok, maybe set-group-ID on /var/mail isn't even necessary;
0 drwxrwxx-x3 root  mail   102  5 May 11:30 mail
is enough as long as
$ groups $USER
states you are member of group mail.  On my system mailbox.py
doesn't have any problems with modifying the mail directory.
If this is not true on your box go and stress your admin, he's not
worth his money - is he?
I.e., whereas it is possible to rewrite mailbox.py to handle issue
#7359 i would not do so because it is unnecessary on correctly
setup boxes.  Maybe mailbox.py has used so much copy-and-paste
from mutt(1)'s mbox.c because that code works well for many years.
And Jason seems to work as root all of the time.

 mailbox is an mbox manipulation program, not a mail delivery
 agent.  If you are using it to write a mail delivery agent,
 I think perhaps the mtime setting code belongs in your
 application, not the mailbox module.

I really don't understand your point now.
Of course the standard is soft like butter in that it seems to
assume that the spool mailbox is then locally processed and
truncated to zero length, so that mailbox has grown==new mail
arrived, whereas it is also possible to use that spool file as
a real local mailbox, including resorting, partial deletion etc..

This issue is about fixing mailbox.py to adhere to the MMDF and MBOX
standards, which is what the patch implements.
This patch works for me locally in that mutt(1) will mention that
new mail has arrived in the boxes.

The patch uses a safe approach by dating back the access time
instead of pointing modification time into the future, which
however will make the patch fail on Pear OS X if the mailbox is on
HFS, case sensitive, because that is buggy and *always* updates
atime; maybe this is because Apple only provides a shallow wrapper
around UFS to integrate that in the Microkernel/IOKit structure,
just in case HFS, case sensitive is really UFS, but i'm guessing
in all directions here.  I would not adjust the patch to fix this,
but the problem exists and it has been noted in this issue.
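(The "date back the access time" trick, reduced to a hedged sketch rather
than the actual mailbox.py patch; the helper name is illustrative:

import os

def mark_new_mail(mbox_path):
    st = os.stat(mbox_path)
    # atime one second before mtime => readers see mtime > atime = new mail
    os.utime(mbox_path, (st.st_mtime - 1, st.st_mtime))
)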

--
Steffen
sdao...@gmail.com

--

___
Python tracker rep...@bugs.python.org
http://bugs.python.org/issue11935
___
___
Python-bugs-list mailing list
Unsubscribe: 
http://mail.python.org/mailman/options/python-bugs-list/archive%40mail-archive.com



[issue11935] MMDF/MBOX mailbox need utime

2011-05-05 Thread Steffen Daode Nurpmeso

Steffen Daode Nurpmeso sdao...@googlemail.com added the comment:

On Thu,  5 May 2011 03:52:29 +0200, R. David Murray wrote:
 [..] the shell [..] I believe it just looks at the mtime/atime.

   Pretty good, huh?

Mr. Mojo says:

Prowd to be a part of this number.
Successful hills are here to stay.
Everything must be this way.

--

___
Python tracker rep...@bugs.python.org
http://bugs.python.org/issue11935
___
___
Python-bugs-list mailing list
Unsubscribe: 
http://mail.python.org/mailman/options/python-bugs-list/archive%40mail-archive.com



[issue11277] Crash with mmap and sparse files on Mac OS X

2011-05-05 Thread Steffen Daode Nurpmeso

Steffen Daode Nurpmeso sdao...@googlemail.com added the comment:

@haypo: trouble, trouble on the dev-list, as i've seen currently.
Sorry, sorry.  (Cannot subscribe, my DynIP's are often blacklisted ;)
Of course my comments were completely wrong, as Ethan has pointed
out correctly.

This is all s**t.  These are mmap(2) related issues and should be
tested in Lib/test/test_mmap.py.  However, that does not yet use a
    with open:
        create sparse file
        materialize
round trip, so the Pear OS X sparse-file bug doesn't show up.  In fact
it doesn't do a full beam-me-up test at all yet?
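
What i mean is a round trip along these lines (a sketch only; size
and file name are made up for illustration):

import mmap, os

PATH = "sparse.bin"
SIZE = 1 << 20                      # keep it small here

with open(PATH, "wb") as f:
    f.seek(SIZE - 1)                # leave a hole ...
    f.write(b"\0")                  # ... and one real byte at the end

with open(PATH, "rb") as f:
    m = mmap.mmap(f.fileno(), 0, access=mmap.ACCESS_READ)
    try:
        assert len(m) == SIZE
        assert m[0:1] == b"\0"      # reading the hole must not crash
        assert m[SIZE - 1:SIZE] == b"\0"
    finally:
        m.close()

os.unlink(PATH)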

 Is the test useful or not? What do we test?

We do test that mmap.mmap materializes a buffer which can be
accessed (readonly) from [0]..[len-1],
and that the checksums that zlib produces for that buffer are
correct.  Unfortunately we cannot test 0x80000000+ any more because
Python prevents such a buffer from being used - that's a shame.
Maybe we could test 0x7FFFFFFF*2 aka 0xFFFFFFFE in two iterations.

 Can you check if the test crashs on Mac OS X on a 32 bits system
 (1 GB buffer) if you disable F_FULLFSYNC in mmapmodule.c? Same
 question on a 64 bits system (2 GB-1 byte buffer)?

Aeh - F_FULLFSYNC was not yet committed at that time in 2.7.

 Can we just remove the test?

If i - choke! - need to write tests, i try to catch corner cases.
The corner cases would be 0,MAX_LEN(-1) and some (rather pseudo)
random values around these and maybe some in the middle.
(Plus some invalid inputs.)
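
To make that concrete, such a corner-case table could look like this
(a sketch; the MAX_LEN value is an assumption about a 32 bit signed
length limit of the API under test):

import random

MAX_LEN = 0x7FFFFFFF                     # assumed limit of the API under test

corner_cases = [0, 1, MAX_LEN - 1, MAX_LEN]
corner_cases += sorted(random.randint(2, MAX_LEN - 2) for _ in range(3))
invalid_inputs = [-1, MAX_LEN + 1]       # these must be rejected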

Can we remove it?  I would keep it, Apple is preparing the next
major release (uname -a yet states 10.7.0 even though it's
10.6.7), and maybe then mmap() will fail for 0xDEADBEEF.
Who will be the one which detects that otherwise??

--

___
Python tracker rep...@bugs.python.org
http://bugs.python.org/issue11277
___
___
Python-bugs-list mailing list
Unsubscribe: 
http://mail.python.org/mailman/options/python-bugs-list/archive%40mail-archive.com



[issue12000] SSL certificate verification failed if no dNSName entry in subjectAltName

2011-05-05 Thread Steffen Daode Nurpmeso

Steffen Daode Nurpmeso sdao...@googlemail.com added the comment:

P.S.: if you're really right ('have those RFC's, but didn't read
them yet), you could also open an issue for Mercurial at
http://mercurial.selenic.com/bts - i think those guys do the very
same.

Thanks, Steffen!

--
nosy: +sdaoden

___
Python tracker rep...@bugs.python.org
http://bugs.python.org/issue12000
___
___
Python-bugs-list mailing list
Unsubscribe: 
http://mail.python.org/mailman/options/python-bugs-list/archive%40mail-archive.com



[issue11935] MMDF/MBOX mailbox need utime

2011-05-05 Thread Steffen Daode Nurpmeso

Steffen Daode Nurpmeso sdao...@googlemail.com added the comment:

On Thu,  5 May 2011 13:42:29 +0200, R. David Murray wrote:
 what does mutt do in the case you are talking about?

16 -rwxr-s---  1 steffen  mail  14832 23 Jan 19:13 usr/bin/mutt_bitlock
set bitlock_program=~/usr/bin/mutt_bitlock -p

I see.  Unfortunately the world is not even almost perfect.
So should f?truncate(2) be used if the resulting file is empty?

 what does mutt do in the case you are talking about?

Otherwise there is only one solution: a mailbox-is-readonly policy
has to be introduced.
That will surely drive users insane who see that they in fact
have write access to the file.
Python has been dealt a bad hand here.

--

___
Python tracker rep...@bugs.python.org
http://bugs.python.org/issue11935
___
___
Python-bugs-list mailing list
Unsubscribe: 
http://mail.python.org/mailman/options/python-bugs-list/archive%40mail-archive.com



[issue11935] MMDF/MBOX mailbox need utime

2011-05-05 Thread Steffen Daode Nurpmeso

Steffen Daode Nurpmeso sdao...@googlemail.com added the comment:

 The problem report in question was submitted by one of the
 Debian maintainers.

Yeah, a documentainer at least.
I've used Debian (Woody, i think that was 3.0).
Actually great, because Lehmanns, Heidelberg, Germany did not
include the sources, but they sent me the sources (on seven CDs
as far as i remember) for free after i complained.

Linux is really great.  You don't need internet access at all
because of that fantastic documentation, everywhere.  You look
into /dev and /sys and /proc and it's all so translucent!!  And
the GNU tools and libraries - they are so nicely designed.
The source code is so clean.  It's really an enlightened system.

Then i discovered FreeBSD 4.8 which released me from all that.
\|/
_ .
 |
 -
 |
(I still had hairs at that time.  But that was long ago.)

--

___
Python tracker rep...@bugs.python.org
http://bugs.python.org/issue11935
___
___
Python-bugs-list mailing list
Unsubscribe: 
http://mail.python.org/mailman/options/python-bugs-list/archive%40mail-archive.com



[issue11935] MMDF/MBOX mailbox need utime

2011-05-05 Thread Steffen Daode Nurpmeso

Steffen Daode Nurpmeso sdao...@googlemail.com added the comment:

After half an hour of shallow inspection.

mutt really modifies mailbox files in place (mbox_sync_mailbox())
after creating (all the new tail in) a temporary file, then
seek()/write()/truncate() etc.  It however has mutt_dotlock(1),
it does block signals, and it is a standalone program, and thus
i don't think this behaviour can be used by Python.

In respect to our issue here i must really admit that mutt does:

prepare new tail
stat box
modify box to incorporate tail
close box
utime box with stat result times
reopen box

So actually the result looks as if it had never been modified.
But maybe that is because this way it stays in sync with the standard,
since strictly speaking there is no *new* mail in the box.
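
As a sketch the sequence boils down to this (hypothetical helper
name, not mutt's nor mailbox.py's actual code):

import os

def rewrite_box(path, append_messages):
    st = os.stat(path)                  # remember atime/mtime
    with open(path, "rb+") as f:
        append_messages(f)              # rewrite/extend the box in place
    # Restore the recorded times, so the box does not look as if it
    # had received *new* mail just because existing messages moved.
    os.utime(path, (st.st_atime, st.st_mtime))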

Unless you vote against it i'll write a patch tomorrow which will
use a state machine that only triggers the utime if some kind of
setitem has occurred.  I can't help you overcome your malaise
about soiling an atime's pureness.
'Really want a future date??

--

___
Python tracker rep...@bugs.python.org
http://bugs.python.org/issue11935
___
___
Python-bugs-list mailing list
Unsubscribe: 
http://mail.python.org/mailman/options/python-bugs-list/archive%40mail-archive.com



[issue11277] Crash with mmap and sparse files on Mac OS X

2011-05-05 Thread Steffen Daode Nurpmeso

Steffen Daode Nurpmeso sdao...@googlemail.com added the comment:

In fact i like my idea of using iterations.
I have some time tomorrow, so if nobody complains until then,
i'll write diffs for the tests of 3.x and 2.7 with these updates:

- Two different target sizes:
  1. 0xFFFFFFFF + x (7)
  2. 0x7FFFFFFF + x (7)
- On 32 bit systems, use iterations on a potentially safe buffer
  size (see the sketch below).  I think 0x40000000 a.k.a.
  1024*1024*1024 is affordable, but 512 MB are probably safer?
  I'll make that a variable.
- The string will be 'DeadAffe' (8).
- The last 4 bytes of the string will always be read on their own
  (just in case the large buffer sizes irritated something down
  the path).
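
A rough sketch of the iteration idea (the chunk size and the plain
file access are illustrative only, not the planned diff):

import zlib

CHUNK = 512 * 1024 * 1024           # assumed "safe" per-call buffer size

def checksums(path):
    crc = 0
    adler = 1                       # adler32 of the empty string
    with open(path, "rb") as f:
        while True:
            data = f.read(CHUNK)
            if not data:
                break
            crc = zlib.crc32(data, crc)
            adler = zlib.adler32(data, adler)
    return crc & 0xFFFFFFFF, adler & 0xFFFFFFFF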

--

___
Python tracker rep...@bugs.python.org
http://bugs.python.org/issue11277
___
___
Python-bugs-list mailing list
Unsubscribe: 
http://mail.python.org/mailman/options/python-bugs-list/archive%40mail-archive.com



[issue7359] mailbox cannot modify mailboxes in system mail spool

2011-05-05 Thread Steffen Daode Nurpmeso

Steffen Daode Nurpmeso sdao...@googlemail.com added the comment:

Issue #11935 becomes gigantic and may even swamp this one over!

--
nosy: +sdaoden

___
Python tracker rep...@bugs.python.org
http://bugs.python.org/issue7359
___
___
Python-bugs-list mailing list
Unsubscribe: 
http://mail.python.org/mailman/options/python-bugs-list/archive%40mail-archive.com



[issue11935] MMDF/MBOX mailbox need utime

2011-05-05 Thread Steffen Daode Nurpmeso

Steffen Daode Nurpmeso sdao...@googlemail.com added the comment:

On Thu,  5 May 2011 19:04:16 +0200, R. David Murray wrote:
 prepare new tail means all of the text from the first modified
 line to the end?  (As opposed to just the new mail?) mailbox
 does locking.  I see no reason in principle it couldn't
 stat/restore, it would just be setting the times on the new file
 rather than on a truncated/rewritten old file.  How hard that
 would be to incorporate into the existing logic I have no idea.
 Of course there may be issues about this I haven't thought of.

Me too, and even more.
Clearly mailbox.py cannot do any dotlocking due to missing
permissions, so this is silently ignored to be able to proceed at
all.  Therefore only fcntl/flock locking is used for
a /var/{spool/}mail box by mailbox.py.  This is fine as long as
all programs agree on locking such a file in the usual way, that
is, use both dotlocking *and* fcntl/flock, and restart from the
beginning if one of them fails because another program holds that
very lock.  mutt does that, but i wouldn't bet on it in general.
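
Just to illustrate the fcntl half of that dance (a sketch; the
dotlock half is left out exactly because of the permission problem):

import errno, fcntl, time

def lock_mbox(f, retries=5, delay=1.0):
    # Try a non-blocking exclusive lock and restart from the beginning
    # if somebody else holds it, instead of blocking forever.
    for _ in range(retries):
        try:
            fcntl.lockf(f, fcntl.LOCK_EX | fcntl.LOCK_NB)
            return True
        except (IOError, OSError) as e:
            if e.errno not in (errno.EACCES, errno.EAGAIN):
                raise
            time.sleep(delay)
    return False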

And then the signal handling, and Python even supports threading,
and it is embeddable and there may be third-party modules also
involved.  This is the Death Valley of programming.

$PYP mb.flush()
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
  File "/Users/steffen/usr/opt/py3k/lib/python3.3/mailbox.py", line 659, in flush
    new_file = _create_temporary(self._path)
  File "/Users/steffen/usr/opt/py3k/lib/python3.3/mailbox.py", line 2061, in _create_temporary
    os.getpid()))
  File "/Users/steffen/usr/opt/py3k/lib/python3.3/mailbox.py", line 2051, in _create_carefully
    fd = os.open(path, os.O_CREAT | os.O_EXCL | os.O_RDWR, 0o666)
OSError: [Errno 13] Permission denied: '/var/mail/steffen.1304628960.sherwood.local.37135'

So this seems to be the safest and most useful approach in this
context, because i do not want to imagine what happens if
something weird occurs in the middle of writing the tail
otherwise.  So i stop thinking about issue #7359.

 Do you think the mutt model is a good one to follow?

You mean resetting atime/mtime back to before the rename?
I don't like that and i don't understand it because the file has
been modified, so i think i would do (now,now) in that case
instead (because of the MMDF/MBOX newer==new mail case).
And in case new mail has been inserted, (now-2.42, now).

--

___
Python tracker rep...@bugs.python.org
http://bugs.python.org/issue11935
___
___
Python-bugs-list mailing list
Unsubscribe: 
http://mail.python.org/mailman/options/python-bugs-list/archive%40mail-archive.com



[issue11997] One typo in Doc/c-api/init.rst

2011-05-04 Thread Steffen Daode Nurpmeso

New submission from Steffen Daode Nurpmeso sdao...@googlemail.com:

Yes, i really found a typo.
I'll send two patches: one with just the typo fixed,
and one with the typo fixed plus a reflow for which
i've run :setlocal tw=80 and {gq}.

--
assignee: docs@python
components: Documentation
messages: 135107
nosy: docs@python, sdaoden
priority: normal
severity: normal
status: open
title: One typo in Doc/c-api/init.rst
versions: Python 2.7, Python 3.1, Python 3.2, Python 3.3

___
Python tracker rep...@bugs.python.org
http://bugs.python.org/issue11997
___
___
Python-bugs-list mailing list
Unsubscribe: 
http://mail.python.org/mailman/options/python-bugs-list/archive%40mail-archive.com



[issue11997] One typo in Doc/c-api/init.rst

2011-05-04 Thread Steffen Daode Nurpmeso

Steffen Daode Nurpmeso sdao...@googlemail.com added the comment:

11997.1.diff only corrects the typo, 11997.2.diff also
reformats.  (Note that all of init.rst is hard to read on an 80
column terminal.)

--
keywords: +patch
Added file: http://bugs.python.org/file21880/11997.1.diff
Added file: http://bugs.python.org/file21881/11997.2.diff

___
Python tracker rep...@bugs.python.org
http://bugs.python.org/issue11997
___
diff --git a/Doc/c-api/init.rst b/Doc/c-api/init.rst
--- a/Doc/c-api/init.rst
+++ b/Doc/c-api/init.rst
@@ -883,7 +883,7 @@
 modules.
 
 Also note that combining this functionality with :c:func:`PyGILState_\*` APIs
-is delicate, become these APIs assume a bijection between Python thread states
+is delicate, because these APIs assume a bijection between Python thread states
 and OS-level threads, an assumption broken by the presence of sub-interpreters.
 It is highly recommended that you don't switch sub-interpreters between a pair
 of matching :c:func:`PyGILState_Ensure` and :c:func:`PyGILState_Release` calls.
diff --git a/Doc/c-api/init.rst b/Doc/c-api/init.rst
--- a/Doc/c-api/init.rst
+++ b/Doc/c-api/init.rst
@@ -882,14 +882,14 @@
 by such objects may affect the wrong (sub-)interpreter's dictionary of loaded
 modules.
 
-Also note that combining this functionality with :c:func:`PyGILState_\*` APIs
-is delicate, become these APIs assume a bijection between Python thread states
-and OS-level threads, an assumption broken by the presence of sub-interpreters.
-It is highly recommended that you don't switch sub-interpreters between a pair
-of matching :c:func:`PyGILState_Ensure` and :c:func:`PyGILState_Release` calls.
-Furthermore, extensions (such as :mod:`ctypes`) using these APIs to allow calling
-of Python code from non-Python created threads will probably be broken when using
-sub-interpreters.
+Also note that combining this functionality with :c:func:`PyGILState_\*` APIs is
+delicate, because these APIs assume a bijection between Python thread states and
+OS-level threads, an assumption broken by the presence of sub-interpreters.  It
+is highly recommended that you don't switch sub-interpreters between a pair of
+matching :c:func:`PyGILState_Ensure` and :c:func:`PyGILState_Release` calls.
+Furthermore, extensions (such as :mod:`ctypes`) using these APIs to allow
+calling of Python code from non-Python created threads will probably be broken
+when using sub-interpreters.
 
 
 Asynchronous Notifications
___
Python-bugs-list mailing list
Unsubscribe: 
http://mail.python.org/mailman/options/python-bugs-list/archive%40mail-archive.com



[issue11999] sporadic failure in test_mailbox on FreeBSD

2011-05-04 Thread Steffen Daode Nurpmeso

Steffen Daode Nurpmeso sdao...@googlemail.com added the comment:

I think this relates to #6896.
Maybe a two second resolution should be tried?

--
keywords: +patch
nosy: +sdaoden
Added file: http://bugs.python.org/file21884/11999.1.diff

___
Python tracker rep...@bugs.python.org
http://bugs.python.org/issue11999
___
diff --git a/Lib/mailbox.py b/Lib/mailbox.py
--- a/Lib/mailbox.py
+++ b/Lib/mailbox.py
@@ -514,13 +514,11 @@
 else:
 return
 
-# We record the current time - 1sec so that, if _refresh() is called
-# again in the same second, we will always re-read the mailbox
-# just in case it's been modified.  (os.path.mtime() only has
-# 1sec resolution.)  This results in a few unnecessary re-reads
-# when _refresh() is called multiple times in the same second,
-# but once the clock ticks over, we will only re-read as needed.
-now = time.time() - 1
+# Try to be fancy by using a date in the past for our _last_read mtime
+# checks (see issues #6896, #11999).
+# Using a two second resolution should be enough to overcome any
+# fuzziness which may be introduced by the different filesystems.
+now = time.time() - 2
 
 self._toc = {}
 def update_dir (subdir):
___
Python-bugs-list mailing list
Unsubscribe: 
http://mail.python.org/mailman/options/python-bugs-list/archive%40mail-archive.com



[issue11277] Crash with mmap and sparse files on Mac OS X

2011-05-04 Thread Steffen Daode Nurpmeso

Steffen Daode Nurpmeso sdao...@googlemail.com added the comment:

 error: [Errno 12] Cannot allocate memory

@haypo: Well i told you i have no idea.  These bots are 32 bit?

I'll attach 11277-27.3.diff which does @skipUnless(not 32 bit).
Note i'll test against _4G - does this work (on 32 bit and in
Python)?  A pity that Python does not offer a 'condition is
always true due to datatype storage restriction' check?!

And i don't think it makes sense to test a _1GB mmap on 32 bit at
all (but at least address space shouldn't exhaust for that).
So, sorry, also for the two bugs in that two-liner, but very
especially the 'm' case.

--
Added file: http://bugs.python.org/file21885/11277-27.3.diff

___
Python tracker rep...@bugs.python.org
http://bugs.python.org/issue11277
___
diff --git a/Lib/test/test_zlib.py b/Lib/test/test_zlib.py
--- a/Lib/test/test_zlib.py
+++ b/Lib/test/test_zlib.py
@@ -2,7 +2,7 @@
 from test.test_support import TESTFN, run_unittest, import_module, unlink, requires
 import binascii
 import random
-from test.test_support import precisionbigmemtest, _1G
+from test.test_support import precisionbigmemtest, _1G, _2G, _4G
 import sys
 
 try:
@@ -75,17 +75,16 @@
 # Issue #11277 - check that inputs of 2 GB are handled correctly.
 # Be aware of issues #1202, #8650, #8651 and #10276
 class ChecksumBigBufferTestCase(unittest.TestCase):
-    int_max = 0x7FFFFFFF
-
+    @unittest.skipUnless(sys.maxsize > _4G, "Can't run on a 32-bit system.")
     @unittest.skipUnless(mmap, "mmap() is not available.")
     def test_big_buffer(self):
         if sys.platform[:3] == 'win' or sys.platform == 'darwin':
             requires('largefile',
                      'test requires %s bytes and a long time to run' %
-                     str(self.int_max))
+                     str(_2G -1))
         try:
             with open(TESTFN, "wb+") as f:
-                f.seek(self.int_max-4)
+                f.seek(_2G -1-4)
                 f.write("asdf")
                 f.flush()
                 m = mmap.mmap(f.fileno(), 0, access=mmap.ACCESS_READ)
___
Python-bugs-list mailing list
Unsubscribe: 
http://mail.python.org/mailman/options/python-bugs-list/archive%40mail-archive.com



[issue11277] Crash with mmap and sparse files on Mac OS X

2011-05-04 Thread Steffen Daode Nurpmeso

Steffen Daode Nurpmeso sdao...@googlemail.com added the comment:

@haypo: Oh. Not:

if sys.maxsize > _4G:
    # (64 bits system) crc32() and adler32() stores the buffer size into an
    # int, the maximum filesize is INT_MAX (0x7FFFFFFF)
    filesize = 0x7FFFFFFF
    crc_res = 0x709418e7
    adler_res = -2072837729
else:
    # (32 bits system) On a 32 bits OS, a process cannot usually address
    # more than 2 GB, so test only 1 GB
    filesize = _1G
    crc_res = 0x2b09ee11
    adler_res = -1002962529

self.assertEqual(zlib.crc32(m), self.crc_res)
self.assertEqual(zlib.adler32(m), self.adler_res)

I'm not that fast.

--

___
Python tracker rep...@bugs.python.org
http://bugs.python.org/issue11277
___
___
Python-bugs-list mailing list
Unsubscribe: 
http://mail.python.org/mailman/options/python-bugs-list/archive%40mail-archive.com



[issue10044] small int optimization

2011-05-03 Thread Steffen Daode Nurpmeso

Steffen Daode Nurpmeso sdao...@googlemail.com added the comment:

Now me.

(http://gcc.gnu.org/onlinedocs/gcc/Arrays-and-pointers-implementation.html#Arrays-and-pointers-implementation)
 When casting from pointer to integer and back again, the resulting
 pointer must reference the same object as the original pointer,
 otherwise the behavior is undefined. That is, one may not use
 integer arithmetic to avoid the undefined behavior of pointer
 arithmetic as proscribed in C99 6.5.6/8.

Say - isn't that a joke about farts of armchair crouchers.
All it says is that if you dereference garbage you get a crash.
If you're concerned, use volatile, and go shoot the compiler
programmer if she dares to optimize just anything.
And on ARM, isn't the interrupt table at ((void*)(char[])0x0)??
La vie est une rose beside that.
And all i need is atomic_compare_and_swap().

--
nosy: +sdaoden

___
Python tracker rep...@bugs.python.org
http://bugs.python.org/issue10044
___
___
Python-bugs-list mailing list
Unsubscribe: 
http://mail.python.org/mailman/options/python-bugs-list/archive%40mail-archive.com



[issue11277] Crash with mmap and sparse files on Mac OS X

2011-05-03 Thread Steffen Daode Nurpmeso

Steffen Daode Nurpmeso sdao...@googlemail.com added the comment:

 Should we fix Python 2.7?
  - backport issue #8651
  - use PY_SSIZE_T_CLEAN in zlibmodule.c

I really thought about this overnight.
I'm a C programmer and thus:
- Produce no bugs
- If you've produced a bug, fix it at once
- If you've fixed a bug, scream out loud BUGFIX! -
  or at least incorporate the patch in the very next patch release

But i have no experience with maintaining a scripting language.
My survey of something like this spans about three months now.
And if even such a heavy known bug as #1202 survives at least two
minor releases (2.6 and 2.7) without being fixed, then maybe no
more effort should be put into 2.7 at all.

 11277-27.1.diff contains # Issue #10276 - check that inputs
 >=4GB are handled correctly.. I don't understand this comment
 because the test uses a buffer of 2 GB + 2 bytes.
 How is it possible to pass a buffer of 2 GB+2 bytes to crc32(),
 whereas it stores the size into an int. The maximum size is
 INT_MAX which is 2 GB-1 byte. It looks like the "i" format of
 PyArg_ParseTuple() doesn't check for integer overflow => issue
 #8651. This issue was fixed in 3.1, 3.2 and 3.3, but not in
 Python 2

11277-27.2.diff uses INT_MAX and thus avoids any such pitfall.
Maybe it brings up memory mapping errors somewhere, which i surely
would try to fix wherever i can.

--
Added file: http://bugs.python.org/file21869/11277-27.2.diff

___
Python tracker rep...@bugs.python.org
http://bugs.python.org/issue11277
___
diff --git a/Lib/test/test_zlib.py b/Lib/test/test_zlib.py
--- a/Lib/test/test_zlib.py
+++ b/Lib/test/test_zlib.py
@@ -1,10 +1,16 @@
 import unittest
-from test import test_support
+from test.test_support import TESTFN, run_unittest, import_module, unlink, requires
 import binascii
 import random
 from test.test_support import precisionbigmemtest, _1G
+import sys
 
-zlib = test_support.import_module('zlib')
+try:
+    import mmap
+except ImportError:
+    mmap = None
+
+zlib = import_module('zlib')
 
 
 class ChecksumTestCase(unittest.TestCase):
@@ -66,6 +72,34 @@
  zlib.crc32('spam',  (2**31)))
 
 
+# Backport to 2.7 due to Issue #11277: why not also verify INT32_MAX on 2.7?
+# Be aware of issues #1202, #8650, #8651 and #10276
+class ChecksumBigBufferTestCase(unittest.TestCase):
+    int_max = 0x7FFFFFFF
+
+    @unittest.skipUnless(mmap, "mmap() is not available.")
+    def test_big_buffer(self):
+        if sys.platform[:3] == 'win' or sys.platform == 'darwin':
+            requires('largefile',
+                     'test requires %s bytes and a long time to run' %
+                     str(self.int_max))
+        try:
+            with open(TESTFN, "wb+") as f:
+                f.seek(self.int_max-4)
+                f.write("asdf")
+                f.flush()
+                try:
+                    m = mmap.mmap(f.fileno(), 0, access=mmap.ACCESS_READ)
+                    self.assertEqual(zlib.crc32(m), 0x709418e7)
+                    self.assertEqual(zlib.adler32(m), -2072837729)
+                finally:
+                    m.close()
+        except (IOError, OverflowError):
+            raise unittest.SkipTest("filesystem doesn't have largefile support")
+        finally:
+            unlink(TESTFN)
+
+
 class ExceptionTestCase(unittest.TestCase):
 # make sure we generate some expected errors
 def test_badlevel(self):
@@ -546,8 +580,9 @@
 
 
 def test_main():
-    test_support.run_unittest(
+    run_unittest(
         ChecksumTestCase,
+        ChecksumBigBufferTestCase,
         ExceptionTestCase,
         CompressTestCase,
         CompressObjectTestCase
___
Python-bugs-list mailing list
Unsubscribe: 
http://mail.python.org/mailman/options/python-bugs-list/archive%40mail-archive.com



[issue11277] Crash with mmap and sparse files on Mac OS X

2011-05-03 Thread Steffen Daode Nurpmeso

Changes by Steffen Daode Nurpmeso sdao...@googlemail.com:


Removed file: http://bugs.python.org/file21855/11277-27.1.diff

___
Python tracker rep...@bugs.python.org
http://bugs.python.org/issue11277
___
___
Python-bugs-list mailing list
Unsubscribe: 
http://mail.python.org/mailman/options/python-bugs-list/archive%40mail-archive.com



[issue10276] zlib crc32/adler32 buffer length truncation (64-bit)

2011-05-03 Thread Steffen Daode Nurpmeso

Steffen Daode Nurpmeso sdao...@googlemail.com added the comment:

Ha!  I always knew it!  <wink>

--

___
Python tracker rep...@bugs.python.org
http://bugs.python.org/issue10276
___
___
Python-bugs-list mailing list
Unsubscribe: 
http://mail.python.org/mailman/options/python-bugs-list/archive%40mail-archive.com



[issue11935] MMDF/MBOX mailbox need utime

2011-05-02 Thread Steffen Daode Nurpmeso

Steffen Daode Nurpmeso sdao...@googlemail.com added the comment:

On Sun,  1 May 2011 00:15:11 +0200, R. David Murray rep...@bugs.python.org wrote:
 The problem with this patch is that it would also show 'new
 mail' if what had in fact happened was that a message had been
 *deleted* (see the comments at the beginning of the flush
 method).  So actually fixing this is a bit more complicated.

Well i don't think so, because MUAs do some further checks,
like checking the size and of course the status of each mail;
some indeed use the mtime as an entry-gate for further inspection
(see the sketch below).
And deleting an entry should surely pass that gate, too.
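
A sketch of such an entry-gate, close to the C comment cited earlier
in this thread (purely illustrative):

import os

def looks_like_new_mail(path, last_known_size):
    st = os.stat(path)
    grown = st.st_size > last_known_size
    touched = st.st_mtime > st.st_atime
    # Only if both hold does the MUA bother with the expensive
    # per-message Status:/X-Status: header inspection.
    return grown and touched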

Please do see the file mbox.c of the mutt(1) source repository,
which in fact seems to have been used as an almost copy-and-paste
template for the implementation of large parts of mailbox.py.

But note that i searched less than five minutes in mailbox.py
to find a place where i could add the code of the patch (after
i had added an identical workaround for my S-Postman), so of course
it may not catch all cases.  One error is obvious: it also sets the
mtime for that Babylon format.  I don't use emacs and i'm
a buddhist so i don't care about that Babylon mess anyway.
Right?

 A proper fix for this should also consider fixing issue 7359.

Hm.  #7359 refers to misconfiguration, not to Python or
mailbox.py.  Maybe Doc/library/mailbox.rst should be adjusted to
give users who are new to UNIX a hint about ,group mail` and the
set-group-ID bit on directories?  I think this would really be a good
thing?!?!  Should i open an issue on that?

But again, mailbox.py reflects mutt(1)'s mbox.c almost one-to-one
(except for the comparatively naive file lock handling, AFAIK),
and i think that if mutt(1) does
create-temp-work-work-work-rename then this should be ok for
mailbox.py, too.

Did you know that ,bin Laden` means ,am loading` in German?

--

___
Python tracker rep...@bugs.python.org
http://bugs.python.org/issue11935
___
___
Python-bugs-list mailing list
Unsubscribe: 
http://mail.python.org/mailman/options/python-bugs-list/archive%40mail-archive.com



[issue11935] MMDF/MBOX mailbox need utime

2011-05-02 Thread Steffen Daode Nurpmeso

Steffen Daode Nurpmeso sdao...@googlemail.com added the comment:

I'll attach a patch with a clearer comment (entry-gate instead of
new mail), i.e. the comment now reflects what MUAs really do.

--
Added file: http://bugs.python.org/file21850/11935.2.diff

___
Python tracker rep...@bugs.python.org
http://bugs.python.org/issue11935
___
diff --git a/Lib/mailbox.py b/Lib/mailbox.py
--- a/Lib/mailbox.py
+++ b/Lib/mailbox.py
@@ -692,6 +692,13 @@
         self._file = open(self._path, 'rb+')
         self._toc = new_toc
         self._pending = False
+        # Set modification time to be after access time so that MMDF and MBOX
+        # mail readers detect changes (or perform further inspection to do so)
+        try:
+            currtime = time.time()
+            os.utime(self._path, (currtime-3, currtime))
+        except:
+            pass
         if self._locked:
             _lock_file(self._file, dotlock=False)
 
___
Python-bugs-list mailing list
Unsubscribe: 
http://mail.python.org/mailman/options/python-bugs-list/archive%40mail-archive.com



[issue11935] MMDF/MBOX mailbox need utime

2011-05-02 Thread Steffen Daode Nurpmeso

Changes by Steffen Daode Nurpmeso sdao...@googlemail.com:


Removed file: http://bugs.python.org/file21795/mailbox.diff

___
Python tracker rep...@bugs.python.org
http://bugs.python.org/issue11935
___
___
Python-bugs-list mailing list
Unsubscribe: 
http://mail.python.org/mailman/options/python-bugs-list/archive%40mail-archive.com



[issue11277] Crash with mmap and sparse files on Mac OS X

2011-05-02 Thread Steffen Daode Nurpmeso

Steffen Daode Nurpmeso sdao...@googlemail.com added the comment:

On Mon,  2 May 2011 01:22:41 +0200, STINNER Victor rep...@bugs.python.org wrote:
 @sdaoden: Can you try on Python 2.7?

@haypo: Python 2.7 is absolute horror.
But i tried and produced a (terrible - i don't know the test
framework, and that test_support stuff seems to have been changed
a lot since 2.7) 2 gigabyte+ big buffer test for 2.7.
(Of course: even though Python uses int, ZLib uses uInt.)
It took some time because i stumbled over #1202 from 2007 unprepared.

The (nasty) test works quite well on Apple, which is not such
a big surprise, because Apple's OS X is especially designed for
artists who need to work on large files, like video cutters,
sound designers with sample databases etc., so i would be
terribly disappointed if that didn't work!  Apple even
advertises OS X for, and makes money with, that very application
task - i really couldn't understand your doubts here.

--
Added file: http://bugs.python.org/file21855/11277-27.1.diff

___
Python tracker rep...@bugs.python.org
http://bugs.python.org/issue11277
___
diff --git a/Lib/test/test_zlib.py b/Lib/test/test_zlib.py
--- a/Lib/test/test_zlib.py
+++ b/Lib/test/test_zlib.py
@@ -1,10 +1,16 @@
 import unittest
-from test import test_support
+from test.test_support import TESTFN, run_unittest, import_module, unlink, requires
+from test.test_support import precisionbigmemtest, _1G, _2G
 import binascii
 import random
-from test.test_support import precisionbigmemtest, _1G
+import sys
 
-zlib = test_support.import_module('zlib')
+try:
+    import mmap
+except ImportError:
+    mmap = None
+
+zlib = import_module('zlib')
 
 
 class ChecksumTestCase(unittest.TestCase):
@@ -66,6 +72,32 @@
  zlib.crc32('spam',  (2**31)))
 
 
+# Issue #10276 - check that inputs >=4GB are handled correctly.
+# Backport to 2.7 due to Issue #11277: why not verify INT32_MAX on 2.7?
+# Also take care of Issue #1202 here
+class ChecksumBigBufferTestCase(unittest.TestCase):
+    @unittest.skipUnless(mmap, "mmap() is not available.")
+    def test_big_buffer(self):
+        if sys.platform[:3] == 'win' or sys.platform == 'darwin':
+            requires('largefile',
+                'test requires %s bytes and a long time to run' % str(_2G+2))
+        try:
+            with open(TESTFN, "wb+") as f:
+                f.seek(_2G-2)
+                f.write("asdf")
+                f.flush()
+                try:
+                    m = mmap.mmap(f.fileno(), 0, access=mmap.ACCESS_READ)
+                    self.assertEqual(zlib.crc32(m), -2072986226)
+                    self.assertEqual(zlib.adler32(m), -2072641121)
+                finally:
+                    m.close()
+        except (IOError, OverflowError):
+            raise unittest.SkipTest("filesystem doesn't have largefile support")
+        finally:
+            unlink(TESTFN)
+
+
 class ExceptionTestCase(unittest.TestCase):
 # make sure we generate some expected errors
 def test_badlevel(self):
@@ -546,8 +578,9 @@
 
 
 def test_main():
-    test_support.run_unittest(
+    run_unittest(
         ChecksumTestCase,
+        ChecksumBigBufferTestCase,
         ExceptionTestCase,
         CompressTestCase,
         CompressObjectTestCase
___
Python-bugs-list mailing list
Unsubscribe: 
http://mail.python.org/mailman/options/python-bugs-list/archive%40mail-archive.com



[issue11277] Crash with mmap and sparse files on Mac OS X

2011-05-02 Thread Steffen Daode Nurpmeso

Changes by Steffen Daode Nurpmeso sdao...@googlemail.com:


Removed file: http://bugs.python.org/file21673/11277.zsum32.c

___
Python tracker rep...@bugs.python.org
http://bugs.python.org/issue11277
___
___
Python-bugs-list mailing list
Unsubscribe: 
http://mail.python.org/mailman/options/python-bugs-list/archive%40mail-archive.com


