Re: [Python-Dev] Counting collisions for the win

2012-01-21 Thread Paul McMillan
> I may have a terminology problem here. I expect that a random seed must
> change every time it is used, otherwise the pseudorandom number generator
> using it just returns the same value each time. Should we be talking about a
> salt rather than a seed?

You should read the several other threads, the bug, as well as the
implementation and patch under discussion. Briefly, Python string
hashes are calculated once per string, and then used in many places.
You can't change the hash value for a string during program execution
without breaking everything. The proposed change modifies the starting
value of the hash function to include a process-wide randomly
generated seed. This seed is chosen randomly at runtime, but cannot
change once chosen. Using the seed changes the final output of the
hash to be unpredictable to an attacker, solving the underlying
problem.

Salt could also be an appropriate term here, but salt is generally
changed on a per-use basis (a single process may use many different
salts), whereas this value is chosen only once per process, so seed is
the more accurate term.
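The scheme described above can be sketched with a toy string hash. This is an
illustration only, not CPython's actual hash function: the multiplier 1000003
echoes the constant CPython used at the time, but the mixing here is
simplified.

```python
import os

# A seed drawn once at startup, then fixed for the life of the process.
_SEED = int.from_bytes(os.urandom(8), "big")

def toy_hash(s, seed=_SEED):
    """Illustrative seeded string hash; not CPython's real algorithm."""
    h = seed
    for ch in s:
        h = ((h * 1000003) ^ ord(ch)) & 0xFFFFFFFFFFFFFFFF
    return h

# Within one process, a string's hash never changes...
assert toy_hash("spam") == toy_hash("spam")
# ...but a different seed (i.e. a different process) gives a different,
# attacker-unpredictable value.
assert toy_hash("spam", seed=1) != toy_hash("spam", seed=2)
```

Each per-character step is a bijection on 64-bit values (odd multiplier, xor
with a constant), so distinct seeds necessarily produce distinct hashes for
the same string.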

-Paul
___
Python-Dev mailing list
Python-Dev@python.org
http://mail.python.org/mailman/listinfo/python-dev
Unsubscribe: 
http://mail.python.org/mailman/options/python-dev/archive%40mail-archive.com


Re: [Python-Dev] Coroutines and PEP 380

2012-01-21 Thread Matt Joiner
> My concern is that you will end up with vastly more 'yield from's
> than places that require locks, so most of them are just noise.
> If you bite your nails over whether a lock is needed every time
> you see one, they will cause you a lot more anxiety than they
> alleviate.

Not necessarily. The 'yield from's follow the blocking control flow,
which is less common than you might think. Parts of your code
naturally emerge as not requiring blocking behaviour, much as in
Haskell, where parts of your code are identified as requiring the IO
monad.
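The point about 'yield from' marking only the blocking path can be sketched
with a trivially hand-driven PEP 380 coroutine (Python 3.3+ for `return` with
a value in a generator). All names here are invented for illustration; a real
scheduler would resume fake_read when I/O is actually ready.

```python
def fake_read(sock):
    """Pretend to suspend until data arrives; here it just yields once."""
    yield                     # a scheduler would resume us when sock is readable
    return "request-line"     # becomes the value of 'yield from' (PEP 380)

def parse(line):
    # Pure computation: no 'yield from', so a reader knows at a glance
    # that no task switch can happen inside this call.
    return line.upper()

def handle(sock):
    line = yield from fake_read(sock)   # blocking point, explicitly marked
    return parse(line)                  # non-blocking, unmarked

# Drive the coroutine to completion by hand (a one-line "scheduler").
task = handle(None)
try:
    while True:
        next(task)
except StopIteration as stop:
    result = stop.value
assert result == "REQUEST-LINE"
```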

>> Sometimes there's no alternative, but wherever I can, I avoid thinking,
>> especially hard thinking.  This maxim has served me very well throughout my
>> programming career ;-).

I'd replace "hard thinking" with "future confusion" here.

> There are already well-known techniques for dealing with
> concurrency that minimise the amount of hard thinking required.
> You devise some well-behaved abstractions, such as queues, and
> put all your hard thinking into implementing them. Then you
> build the rest of your code around those abstractions. That
> way you don't have to rely on crutches such as explicitly
> marking everything that might cause a task switch, because
> it doesn't matter.

It's my firm belief that this isn't sufficient. If this were true,
then the Python internals could be improved by replacing the GIL with
a series of channels/queues or what have you. State is complex, and
without guarantees of immutability, it's just not practical to try to
wrap every state object in some protocol to be passed back and forth
on queues.
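For concreteness, the queue-based style under discussion looks roughly like
this in Python: the hard thinking lives in the already-written thread-safe
queue.Queue, and the surrounding code is built around it. The worker function
and the None sentinel protocol are illustrative choices, not anything from
the thread above.

```python
import queue
import threading

def worker(tasks, results):
    """Consume items until the None sentinel; share state only via queues."""
    while True:
        item = tasks.get()
        if item is None:          # sentinel: shut down cleanly
            break
        results.put(item * 2)     # all cross-thread sharing goes through queues

tasks, results = queue.Queue(), queue.Queue()
t = threading.Thread(target=worker, args=(tasks, results))
t.start()
for n in (1, 2, 3):
    tasks.put(n)
tasks.put(None)
t.join()

out = sorted(results.get() for _ in range(3))
assert out == [2, 4, 6]
```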


Re: [Python-Dev] cpython (3.2): Avoid the compiler warning about the unused return value.

2012-01-21 Thread Stephen J. Turnbull
Benjamin Peterson writes:
 > 2012/1/21 Stefan Krah :

 > > Do you mean (void)write(...)? Many people think this is good practice,
 > > since it indicates to the reader that the return value is deliberately
 > > ignored.
 > 
 > Not doing anything with it seems fairly deliberate to me.

It may be deliberate, but then again it may not be.  EIBTI applies.



Re: [Python-Dev] Counting collisions for the win

2012-01-21 Thread Steven D'Aprano

Paul McMillan wrote:

> On Sat, Jan 21, 2012 at 4:19 PM, Jared Grubb  wrote:
>> I agree; it sounds really odd to throw an exception since nothing is actually
>> wrong and there's nothing the caller would do about it to recover anyway.
>> Rather than throwing an exception, maybe you just reseed the random value for
>> the hash
>
> This is nonsense. You have to determine the random seed at startup,
> and it has to be uniform for the entire life of the process. You can't
> change it after Python has started.


I may have a terminology problem here. I expect that a random seed must change 
every time it is used, otherwise the pseudorandom number generator using it 
just returns the same value each time. Should we be talking about a salt 
rather than a seed?




--
Steven



Re: [Python-Dev] Counting collisions for the win

2012-01-21 Thread Paul McMillan
On Sat, Jan 21, 2012 at 4:19 PM, Jared Grubb  wrote:
> I agree; it sounds really odd to throw an exception since nothing is actually 
> wrong and there's nothing the caller would do about it to recover anyway. 
> Rather than throwing an exception, maybe you just reseed the random value for 
> the hash

This is nonsense. You have to determine the random seed at startup,
and it has to be uniform for the entire life of the process. You can't
change it after Python has started.

-Paul


Re: [Python-Dev] [Python-checkins] cpython (3.2): Fixes issue #8052: The posix subprocess module's close_fds behavior was

2012-01-21 Thread Benjamin Peterson
2012/1/21 gregory.p.smith :
...
> +/* Convert ASCII to a positive int, no libc call. no overflow. -1 on error. */

Is no libc call important?

> +static int _pos_int_from_ascii(char *name)

To be consistent with the rest of posixmodule.c, "static int" should
be on a different line from the signature. This also applies to all
other function declarations added by this patch.

> +{
> +    int num = 0;
> +    while (*name >= '0' && *name <= '9') {
> +        num = num * 10 + (*name - '0');
> +        ++name;
> +    }
> +    if (*name)
> +        return -1;  /* Non digit found, not a number. */
> +    return num;
> +}
> +
> +
> +/* Returns 1 if there is a problem with fd_sequence, 0 otherwise. */
> +static int _sanity_check_python_fd_sequence(PyObject *fd_sequence)
> +{
> +    Py_ssize_t seq_idx, seq_len = PySequence_Length(fd_sequence);

PySequence_Length can fail.

> +    long prev_fd = -1;
> +    for (seq_idx = 0; seq_idx < seq_len; ++seq_idx) {
> +        PyObject* py_fd = PySequence_Fast_GET_ITEM(fd_sequence, seq_idx);
> +        long iter_fd = PyLong_AsLong(py_fd);
> +        if (iter_fd < 0 || iter_fd < prev_fd || iter_fd > INT_MAX) {
> +            /* Negative, overflow, not a Long, unsorted, too big for a fd. */
> +            return 1;
> +        }
> +    }
> +    return 0;
> +}
> +
> +
> +/* Is fd found in the sorted Python Sequence? */
> +static int _is_fd_in_sorted_fd_sequence(int fd, PyObject *fd_sequence)
> +{
> +    /* Binary search. */
> +    Py_ssize_t search_min = 0;
> +    Py_ssize_t search_max = PySequence_Length(fd_sequence) - 1;
> +    if (search_max < 0)
> +        return 0;
> +    do {
> +        long middle = (search_min + search_max) / 2;
> +        long middle_fd = PyLong_AsLong(
> +                PySequence_Fast_GET_ITEM(fd_sequence, middle));

No check for error?

> +        if (fd == middle_fd)
> +            return 1;
> +        if (fd > middle_fd)
> +            search_min = middle + 1;
> +        else
> +            search_max = middle - 1;
> +    } while (search_min <= search_max);
> +    return 0;
> +}
> +
> +
> +/* Close all file descriptors in the range start_fd inclusive to
> + * end_fd exclusive except for those in py_fds_to_keep.  If the
> + * range defined by [start_fd, end_fd) is large this will take a
> + * long time as it calls close() on EVERY possible fd.
> + */
> +static void _close_fds_by_brute_force(int start_fd, int end_fd,
> +                                      PyObject *py_fds_to_keep)
> +{
> +    Py_ssize_t num_fds_to_keep = PySequence_Length(py_fds_to_keep);
> +    Py_ssize_t keep_seq_idx;
> +    int fd_num;
> +    /* As py_fds_to_keep is sorted we can loop through the list closing
> +     * fds inbetween any in the keep list falling within our range. */
> +    for (keep_seq_idx = 0; keep_seq_idx < num_fds_to_keep; ++keep_seq_idx) {
> +        PyObject* py_keep_fd = PySequence_Fast_GET_ITEM(py_fds_to_keep,
> +                                                        keep_seq_idx);
> +        int keep_fd = PyLong_AsLong(py_keep_fd);
> +        if (keep_fd < start_fd)
> +            continue;
> +        for (fd_num = start_fd; fd_num < keep_fd; ++fd_num) {
> +            while (close(fd_num) < 0 && errno == EINTR);
> +        }
> +        start_fd = keep_fd + 1;
> +    }
> +    if (start_fd <= end_fd) {
> +        for (fd_num = start_fd; fd_num < end_fd; ++fd_num) {
> +            while (close(fd_num) < 0 && errno == EINTR);
> +        }
> +    }
> +}
> +
> +
> +#if defined(__linux__) && defined(HAVE_SYS_SYSCALL_H)
> +/* It doesn't matter if d_name has room for NAME_MAX chars; we're using this
> + * only to read a directory of short file descriptor number names.  The kernel
> + * will return an error if we didn't give it enough space.  Highly Unlikely.
> + * This structure is very old and stable: It will not change unless the kernel
> + * chooses to break compatibility with all existing binaries.  Highly Unlikely.
> + */
> +struct linux_dirent {
> +   unsigned long  d_ino;        /* Inode number */
> +   unsigned long  d_off;        /* Offset to next linux_dirent */
> +   unsigned short d_reclen;     /* Length of this linux_dirent */
> +   char           d_name[256];  /* Filename (null-terminated) */
> +};
> +
> +/* Close all open file descriptors in the range start_fd inclusive to end_fd
> + * exclusive. Do not close any in the sorted py_fds_to_keep list.
> + *
> + * This version is async signal safe as it does not make any unsafe C library
> + * calls, malloc calls or handle any locks.  It is _unfortunate_ to be forced
> + * to resort to making a kernel system call directly but this is the ONLY api
> + * available that does no harm.  opendir/readdir/closedir perform memory
> + * allocation and locking so while they usually work they are not guaranteed
> + * to (especially if you have replaced your malloc implementation).  A version
> + * of this function that uses those can be found in the _maybe_unsafe variant.
> + *
> + * This is Linux specific

Re: [Python-Dev] Counting collisions for the win

2012-01-21 Thread Jared Grubb
On 20 Jan 2012, at 10:49, Brett Cannon wrote:
> Why can't we have our cake and eat it too?
> 
> Can we do hash randomization in 3.3 and use the hash count solution for 
> bugfix releases? That way we get a basic fix into the bugfix releases that 
> won't break people's code (hopefully) but we go with a more thorough (and IMO 
> correct) solution of hash randomization starting with 3.3 and moving forward. 
> We aren't breaking compatibility in any way by doing this since it's a 
> feature release anyway where we change tactics. And it can't be that much 
> work since we seem to have patches for both solutions. At worst it will
> complicate merging commits for those files affected by the patches, but that
> will most likely be isolated and not a common occurrence (and less of an issue
> once 3.3 is released later this year).
> 
> I understand the desire to keep backwards-compatibility, but collision 
> counting could cause an error in some random input that someone didn't expect 
> to cause issues whether they were under a DoS attack or just had some 
> unfortunate input from private data. The hash randomization, though, is only 
> weak if someone is attacked, not if they are just using Python with their own 
> private data.

I agree; it sounds really odd to throw an exception since nothing is actually 
wrong and there's nothing the caller would do about it to recover anyway. 
Rather than throwing an exception, maybe you just reseed the random value for 
the hash:
 * this would solve the security issue that someone mentioned about being able 
to deduce the hash because if they keep being mean it'll change anyway
 * for bugfix, start off without randomization (seed==0) and start to use it 
only when the collision count hits the threshold
 * for release, reseeding when you hit a certain threshold still seems like a 
good idea as it will make lookups/insertions better in the long-run

AFAIUI, Python already doesn't guarantee order stability when you insert 
something into a dictionary: in the worst case the dictionary has to resize 
its hash table, and then the order is freshly jumbled again.

Just my two cents.

Jared


Re: [Python-Dev] cpython (3.2): Avoid the compiler warning about the unused return value.

2012-01-21 Thread Benjamin Peterson
2012/1/21 Gregory P. Smith :
> On Sat, Jan 21, 2012 at 2:33 PM, Stefan Krah  wrote:
>> Benjamin Peterson  wrote:
>>> > Can't that give you another warning about the ssize_t being truncated to
>>> > int?
>>> > How about the following instead?
>>> >
>>> >   (void) write(...);
>>>
>>> Also, if you use a recent enough version of gcc, ./configure will
>>> disable the warning. I would prefer if we stopped using these kinds
>>> of hacks.
>>
>> Do you mean (void)write(...)? Many people think this is good practice,
>> since it indicates to the reader that the return value is deliberately
>> ignored.
>
> Unfortunately (void) write(...) does not get rid of the warning.
>
> Asking me to change the version of the compiler I'm using is
> unfortunately not helpful.  I don't want to see this warning on any
> common default compiler versions today.

I'm not asking you to. I'm just saying this annoyance (which is all it
is) is fixed when the infrastructure is new enough to support it.

>
> I am not going to use a different gcc/g++ version than what my distro
> provides to build python unless we start making that demand of all
> CPython users as well.
>
> It is normally a useful warning message, just not in this specific case.

-- 
Regards,
Benjamin


Re: [Python-Dev] cpython (3.2): Fixes issue #8052: The posix subprocess module's close_fds behavior was

2012-01-21 Thread Gregory P. Smith
On Sat, Jan 21, 2012 at 2:52 PM, Antoine Pitrou  wrote:
> On Sat, 21 Jan 2012 23:39:41 +0100
> gregory.p.smith  wrote:
>> http://hg.python.org/cpython/rev/61aa484a3e54
>> changeset:   74563:61aa484a3e54
>> branch:      3.2
>> parent:      74561:d01fecadf3ea
>> user:        Gregory P. Smith 
>> date:        Sat Jan 21 14:01:08 2012 -0800
>> summary:
>>   Fixes issue #8052: The posix subprocess module's close_fds behavior was
>> suboptimal by closing all possible file descriptors rather than just
>> the open ones in the child process before exec().
>
> For what it's worth, I'm not really confident with so much new low-level
> code in a bugfix release.
> IMHO it's more of a new feature, since it's a performance improvement.

No APIs change and it makes the subprocess module usable on systems
running with high file descriptor limits where it was painfully slow
to use in the past.

This was a regression in behavior introduced with 3.2's change to make
close_fds=True be the (quite sane) default so I do consider it a fix
rather than a performance improvement.

Obviously the final decision rests with the 3.2.3 release manager.

For anyone uncomfortable with the code itself: The equivalent of that
code has been in use in production at work continuously in
multithreaded processes across a massive number of machines running a
variety of versions of Linux for many years now.  And the non-Linux
code is effectively what the Java VM's Process module does.

-gps


Re: [Python-Dev] Coroutines and PEP 380

2012-01-21 Thread Greg Ewing

Glyph wrote:

> Yes, but you /can/ look at a 'yield' and conclude that you /might/ need
> a lock, and that you have to think about it.


My concern is that you will end up with vastly more 'yield from's
than places that require locks, so most of them are just noise.
If you bite your nails over whether a lock is needed every time
you see one, they will cause you a lot more anxiety than they
alleviate.

> Sometimes there's no alternative, but wherever I can, I avoid thinking,
> especially hard thinking.  This maxim has served me very well throughout
> my programming career ;-).


There are already well-known techniques for dealing with
concurrency that minimise the amount of hard thinking required.
You devise some well-behaved abstractions, such as queues, and
put all your hard thinking into implementing them. Then you
build the rest of your code around those abstractions. That
way you don't have to rely on crutches such as explicitly
marking everything that might cause a task switch, because
it doesn't matter.

--
Greg



Re: [Python-Dev] cpython (3.2): Avoid the compiler warning about the unused return value.

2012-01-21 Thread Gregory P. Smith
On Sat, Jan 21, 2012 at 2:33 PM, Stefan Krah  wrote:
> Benjamin Peterson  wrote:
>> > Can't that give you another warning about the ssize_t being truncated to
>> > int?
>> > How about the following instead?
>> >
>> >   (void) write(...);
>>
>> Also, if you use a recent enough version of gcc, ./configure will
>> disable the warning. I would prefer if we stopped using these kinds
>> of hacks.
>
> Do you mean (void)write(...)? Many people think this is good practice,
> since it indicates to the reader that the return value is deliberately
> ignored.

Unfortunately (void) write(...) does not get rid of the warning.

Asking me to change the version of the compiler I'm using is
unfortunately not helpful.  I don't want to see this warning on any
common default compiler versions today.

I am not going to use a different gcc/g++ version than what my distro
provides to build python unless we start making that demand of all
CPython users as well.

It is normally a useful warning message, just not in this specific case.

-gps


Re: [Python-Dev] cpython (3.2): Fixes issue #8052: The posix subprocess module's close_fds behavior was

2012-01-21 Thread Antoine Pitrou
On Sat, 21 Jan 2012 23:39:41 +0100
gregory.p.smith  wrote:
> http://hg.python.org/cpython/rev/61aa484a3e54
> changeset:   74563:61aa484a3e54
> branch:  3.2
> parent:  74561:d01fecadf3ea
> user:Gregory P. Smith 
> date:Sat Jan 21 14:01:08 2012 -0800
> summary:
>   Fixes issue #8052: The posix subprocess module's close_fds behavior was
> suboptimal by closing all possible file descriptors rather than just
> the open ones in the child process before exec().

For what it's worth, I'm not really confident with so much new low-level
code in a bugfix release.
IMHO it's more of a new feature, since it's a performance improvement.

Regards

Antoine.




Re: [Python-Dev] cpython (3.2): Avoid the compiler warning about the unused return value.

2012-01-21 Thread Benjamin Peterson
2012/1/21 Stefan Krah :
> Benjamin Peterson  wrote:
>> > Can't that give you another warning about the ssize_t being truncated to
>> > int?
>> > How about the following instead?
>> >
>> >   (void) write(...);
>>
>> Also, if you use a recent enough version of gcc, ./configure will
>> disable the warning. I would prefer if we stopped using these kinds
>> of hacks.
>
> Do you mean (void)write(...)? Many people think this is good practice,
> since it indicates to the reader that the return value is deliberately
> ignored.

Not doing anything with it seems fairly deliberate to me.



-- 
Regards,
Benjamin


Re: [Python-Dev] cpython (3.2): Avoid the compiler warning about the unused return value.

2012-01-21 Thread Stefan Krah
Benjamin Peterson  wrote:
> > Can't that give you another warning about the ssize_t being truncated to
> > int?
> > How about the following instead?
> >
> >   (void) write(...);
> 
> Also, if you use a recent enough version of gcc, ./configure will
> disable the warning. I would prefer if we stopped using these kinds
> of hacks.

Do you mean (void)write(...)? Many people think this is good practice,
since it indicates to the reader that the return value is deliberately
ignored.



Stefan Krah




Re: [Python-Dev] cpython (3.2): Avoid the compiler warning about the unused return value.

2012-01-21 Thread Benjamin Peterson
2012/1/21 Antoine Pitrou :
> On Sat, 21 Jan 2012 21:51:43 +0100
> gregory.p.smith  wrote:
>> http://hg.python.org/cpython/rev/d01fecadf3ea
>> changeset:   74561:d01fecadf3ea
>> branch:      3.2
>> parent:      74558:03e61104f7a2
>> user:        Gregory P. Smith 
>> date:        Sat Jan 21 12:31:25 2012 -0800
>> summary:
>>   Avoid the compiler warning about the unused return value.
>
> Can't that give you another warning about the ssize_t being truncated to
> int?
> How about the following instead?
>
>    (void) write(...);

Also, if you use a recent enough version of gcc, ./configure will
disable the warning. I would prefer if we stopped using these kinds of
hacks.


-- 
Regards,
Benjamin


Re: [Python-Dev] cpython (3.2): Avoid the compiler warning about the unused return value.

2012-01-21 Thread Antoine Pitrou
On Sat, 21 Jan 2012 21:51:43 +0100
gregory.p.smith  wrote:
> http://hg.python.org/cpython/rev/d01fecadf3ea
> changeset:   74561:d01fecadf3ea
> branch:  3.2
> parent:  74558:03e61104f7a2
> user:Gregory P. Smith 
> date:Sat Jan 21 12:31:25 2012 -0800
> summary:
>   Avoid the compiler warning about the unused return value.

Can't that give you another warning about the ssize_t being truncated to
int?
How about the following instead?

(void) write(...);

cheers

Antoine.




Re: [Python-Dev] Counting collisions for the win

2012-01-21 Thread David Malcolm
On Fri, 2012-01-20 at 16:55 +0100, Frank Sievertsen wrote:
> Hello,
> 
> I still see at least two ways to create a DOS attack even with the
> collison-counting-patch.

[snip description of two types of attack on the collision counting
approach]

> What to do now?
> I think it's not smart to reduce the number of allowed collisions 
> dramatically
> AND count all slot-collisions at the same time.

Frank: did you see the new approach I proposed in:
http://bugs.python.org/issue13703#msg151735
http://bugs.python.org/file24289/amortized-probe-counting-dmalcolm-2012-01-21-003.patch

(repurposes the ma_smalltable region of large dictionaries to add
tracking of each such dict's average iterations taken per modification,
and raise an exception when it exceeds a particular ratio)
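For readers who haven't followed the links, the amortized idea can be
caricatured in a few lines of Python. Everything here, including the class
name and the threshold, is a simplification invented for illustration; the
real patch does this bookkeeping inside CPython's dict implementation.

```python
class ProbeCountingTable:
    """Toy table that raises when average probes per modification is too high."""
    RATIO_LIMIT = 32          # illustrative threshold, not the patch's value

    def __init__(self):
        self._slots = {}      # stand-in for the real open-addressed table
        self._probes = 0      # total probe iterations across all modifications
        self._mods = 0        # number of modifications so far

    def insert(self, key, value, probes_taken):
        # probes_taken would come from the real probe loop; the caller
        # supplies it here so the bookkeeping is easy to see.
        self._probes += probes_taken
        self._mods += 1
        if self._probes / self._mods > self.RATIO_LIMIT:
            raise RuntimeError("suspiciously many collisions")
        self._slots[key] = value

table = ProbeCountingTable()
for i in range(100):
    table.insert(i, i, probes_taken=1)          # well-behaved keys: fine
try:
    for i in range(100, 200):
        table.insert(i, i, probes_taken=1000)   # attack-like behaviour
    raised = False
except RuntimeError:
    raised = True
assert raised
```

Because the check is on the running average, a handful of genuinely unlucky
keys do not trip it; only sustained pathological probing does.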

I'm interested in hearing how it holds up against your various test
cases, or what flaws there are in it.

Thanks!
Dave



Re: [Python-Dev] python build failed on mac

2012-01-21 Thread Hynek Schlawack
Am Freitag, 20. Januar 2012 um 23:40 schrieb Vijay Majagaonkar:
> > > I am trying to build Python 3 on Mac and the build is failing with the
> > > following error; can somebody help me with this?
> > 
> > It is a known bug that Apple's latest gcc-llvm (that comes with Xcode 4.1 
> > by default as gcc) miscompiles Python: http://bugs.python.org/issue13241 
> > 
> > make clean
> > CC=clang ./configure && make -s
> 
> Thanks for the help, but the above command needs to be run in a different way
> 
> ./configure CC=clang
> make


I'm not sure why you think it "needs" to be that way, but it's fine by me, as 
both ways work.

> this allowed me to build the code, but when I ran the tests I got the
> following error message
> 
> [363/364/3] test_io
> python.exe(11411) malloc: *** mmap(size=9223372036854775808) failed (error 
> code=12)
> *** error: can't allocate region
> *** set a breakpoint in malloc_error_break to debug
> python.exe(11411,0x7fff7a8ba960) malloc: *** mmap(size=9223372036854775808) 
> failed (error code=12)
> *** error: can't allocate region
> *** set a breakpoint in malloc_error_break to debug
> python.exe(11411,0x7fff7a8ba960) malloc: *** mmap(size=9223372036854775808) 
> failed (error code=12)
> *** error: can't allocate region
> *** set a breakpoint in malloc_error_break to debug
> 
> I am using Mac OS X 10.7.2 and installed Xcode 4.2.1

Please ensure there aren't any gcc-created objects left by running "make 
distclean" first.

-h


Re: [Python-Dev] Counting collisions for the win

2012-01-21 Thread Guido van Rossum
On Sat, Jan 21, 2012 at 7:50 AM, Matthew Woodcraft
 wrote:
> Victor Stinner   wrote:
>> I propose to solve the hash collision vulnerability by counting
>> collisions [...]
>
>> We now know all issues of the randomized hash solution, and I
>> think that there are more drawbacks than advantages. IMO the
>> randomized hash is overkill to fix the hash collision issue.
>
>
> For web frameworks, forcing an exception is less harmful than forcing a
> many-second delay, but I think it's hard to be confident that there
> aren't other vulnerable applications where it's the other way round.
>
>
> Web frameworks like the exception because they already have backstop
> exception handlers, and anyway they use short-lived processes and keep
> valuable data in databases rather than process memory.
>
> Web frameworks don't like the delay because they allow unauthenticated
> users to submit many requests (including multiple requests in parallel),
> and they normally expect each response to take little cpu time.
>
>
> But many programs are not like this.
>
> What about a log analyser or a mailing list archiver or a web crawler or
> a game server or some other kind of program we haven't considered?

If my log crawler ended up taking minutes per log entry instead of
milliseconds I'd have to kill it anyway. Web crawlers are huge
multi-process systems that are as robust as web servers, or more. Game
servers are just web apps.

-- 
--Guido van Rossum (python.org/~guido)


Re: [Python-Dev] Counting collisions for the win

2012-01-21 Thread Matthew Woodcraft
Victor Stinner   wrote:
> I propose to solve the hash collision vulnerability by counting
> collisions [...]

> We now know all issues of the randomized hash solution, and I
> think that there are more drawbacks than advantages. IMO the
> randomized hash is overkill to fix the hash collision issue.


For web frameworks, forcing an exception is less harmful than forcing a
many-second delay, but I think it's hard to be confident that there
aren't other vulnerable applications where it's the other way round.


Web frameworks like the exception because they already have backstop
exception handlers, and anyway they use short-lived processes and keep
valuable data in databases rather than process memory.

Web frameworks don't like the delay because they allow unauthenticated
users to submit many requests (including multiple requests in parallel),
and they normally expect each response to take little cpu time.


But many programs are not like this.

What about a log analyser or a mailing list archiver or a web crawler or
a game server or some other kind of program we haven't considered?

-M-
