New submission from wang xuancong <xuancon...@gmail.com>:

As expected,
[False, True, False].count(True) gives 1, and
eval('[False, True, False].count(True)') also gives 1.

However, in Python 2,
eval('[False, True, False].count(True)', {}, Counter()) gives 3, while
eval('[False, True, False].count(True)', {}, {}) gives 1.
Take note that Counter is a subclass of dict (it merely returns 0 for missing 
keys, much like defaultdict(int)), so passing one as the locals mapping should 
not alter the result of eval().
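
For reference, a quick check that can be run under either Python 2 or 3 to confirm the class relationship and Counter's handling of missing keys (the expected outputs are noted in the comments):

    from collections import Counter, defaultdict

    print(issubclass(Counter, dict))         # True: Counter subclasses dict directly
    print(issubclass(Counter, defaultdict))  # False: it is not a defaultdict subclass
    print(Counter()['missing'])              # 0: missing keys yield 0 instead of KeyError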

In Python 3, both calls give 1, so the behaviour is correct there.
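
A minimal reproduction sketch under Python 2.7 (the result for the Counter case is the one reported above; the trailing comment gives the likely explanation, since in Python 2 True and False are ordinary names rather than keywords):

    from collections import Counter

    expr = '[False, True, False].count(True)'

    print(eval(expr))                 # 1
    print(eval(expr, {}, {}))         # 1, with a plain dict as locals
    print(eval(expr, {}, Counter()))  # 3 under Python 2.7, as reported

    # Likely cause: in Python 2, True and False are ordinary names, so the
    # name lookup consults the Counter locals, whose missing keys yield 0;
    # the expression then effectively becomes [0, 0, 0].count(0) == 3.
    # In Python 3, True and False are keywords and are never looked up.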

----------
components: Library (Lib)
messages: 349146
nosy: xuancong84
priority: normal
severity: normal
status: open
title: A strange bug in eval() not present in Python 3
type: behavior
versions: Python 2.7

_______________________________________
Python tracker <rep...@bugs.python.org>
<https://bugs.python.org/issue37780>
_______________________________________