[Python-Dev] bz2.BZ2File doesn't support name?

2021-04-26 Thread roy
I was surprised recently to discover that BZ2File (at least in 3.7) doesn't 
have a name attribute.  Is there some fundamental reason name couldn't be 
supported, or is it just a bug that it wasn't implemented?
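
For concreteness, here's a minimal sketch of the surprise (the filenames are
made up; the AttributeError is the behavior I'm describing, observed on 3.7):

import bz2
import gzip

# gzip.GzipFile exposes the underlying filename...
with gzip.open('data.txt.gz', 'wb') as f:
    print(f.name)   # data.txt.gz

# ...but BZ2File, at least in 3.7, does not.
with bz2.open('data.txt.bz2', 'wb') as f:
    print(f.name)   # AttributeError: 'BZ2File' object has no attribute 'name'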


[Python-Dev] Making SocketHandler safe for use with non-pickleable objects

2022-06-20 Thread roy
I've been working on a bug in django where if you configure a 
logging.handlers.SocketHandler in the django logging config, it gets a 
TypeError because it tries to log a HttpRequest object, which can't be pickled. 
 I'm slowly coming to the conclusion that this isn't a django bug, and the 
right fix is to make SocketHandler (and DatagramHandler) be able to deal with 
this internally.

QueueHandler already does this, or at least the documentation says it does:

prepare(record) ... removes unpickleable items from the record in-place.

It seems like SocketHandler should do the same thing.  Thoughts?
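
For the sake of discussion, here's a rough sketch of the kind of thing I mean
(the subclass name is made up, and probing with pickle.dumps is just one
possible strategy, not what QueueHandler.prepare() actually does):

import logging.handlers
import pickle

class SafeSocketHandler(logging.handlers.SocketHandler):
    """Replace unpickleable record attributes before the record is sent."""

    def makePickle(self, record):
        for key, value in list(record.__dict__.items()):
            try:
                pickle.dumps(value)
            except Exception:
                # Keep a readable stand-in rather than losing the whole record.
                record.__dict__[key] = repr(value)
        return super().makePickle(record)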


[Python-Dev] async/await behavior on multiple calls

2015-12-15 Thread Roy Williams
Howdy,

I'm experimenting with async/await in Python 3, and one very surprising
behavior has been what happens when calling `await` twice on an Awaitable.
In C#, Hack/HHVM, and the new async/await spec in ECMAScript 7, awaiting the
same Awaitable more than once simply returns the result each time.  In Python,
every `await` after the first gets back `None`.  Here's a small example
program:


import asyncio

async def echo_hi():
    result = ''
    echo_proc = await asyncio.create_subprocess_exec(
        'echo', 'hello', 'world',
        stdout=asyncio.subprocess.PIPE,
        stderr=asyncio.subprocess.DEVNULL)
    result = await echo_proc.stdout.read()
    await echo_proc.wait()
    return result

async def await_twice(awaitable):
    print('first time is {}'.format(await awaitable))
    print('second time is {}'.format(await awaitable))

loop = asyncio.get_event_loop()
loop.run_until_complete(await_twice(echo_hi()))

This makes writing composable APIs using async/await in Python very
difficult, since anything that takes an `awaitable` has to know that it
wasn't already awaited.  Also, since the behavior is radically different
from that of the other programming languages implementing async/await, it
makes adopting Python's flavor of async/await difficult for folks coming
from a language where it's already implemented.

In C#/Hack/JS calls to `await` return a Task/AwaitableHandle/Promise that
can be awaited multiple times and either returns the result or throws any
thrown exceptions.  It doesn't appear that the Awaitable class in Python
has a `result` or `exception` field but `asyncio.Future` does.

Would it make sense to shift from having `async` functions return a
*Future-like* object to returning a Future?

Thanks,
Roy


Re: [Python-Dev] async/await behavior on multiple calls

2015-12-15 Thread Roy Williams
Thanks for the insight Guido.

I've mostly used async/await inside of HHVM/Hack, and used Guava/Java
Futures extensively in the past so I found this behavior to be quite
surprising.  I'd like to use Awaitables to represent a DAG of work that
needs to get done.  For example, I used to be one of the maintainers of
Buck (a build tool similar to Bazel) and we used a collection of futures
for building all of our dependencies.  For each rule, we'd effectively:

dependency_results = await asyncio.gather(*dependencies)
# Proceed with building.

Rules were free to depend on the same dependency and since the Future would
just return the same result when resolved more than one time things just
worked.

Similarly when building up the results for say a web request, I effectively
want to construct a DAG of work that needs to get done and then just await
on that DAG in a similar manner without having to enforce that the DAG is
actually a tree.  I can of course write a function to wrap everything in
Futures, but this seems to be against the spirit of async/await.
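
Concretely, the kind of wrapping I mean looks something like this (just a
sketch; the dag/tasks structures and the names are invented for illustration):

import asyncio

async def build(name, dag, tasks):
    """Build `name` after its dependencies, sharing work through Tasks."""
    deps = []
    for dep in dag.get(name, ()):
        if dep not in tasks:
            # ensure_future wraps the coroutine in a Task, and a finished Task
            # re-yields its result, so many rules can await the same dependency.
            tasks[dep] = asyncio.ensure_future(build(dep, dag, tasks))
        deps.append(tasks[dep])
    dependency_results = await asyncio.gather(*deps)
    # Proceed with building, using dependency_results.
    return 'built {}'.format(name)

# e.g. asyncio.get_event_loop().run_until_complete(build('app', {'app': ['lib']}, {}))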

Thanks,
Roy

On Tue, Dec 15, 2015 at 12:08 PM, Guido van Rossum  wrote:

> I think this goes back all the way to a debate we had when we were
> discussing PEP 380 (which introduced 'yield from', on which 'await' is
> built). In fact I believe that the reason PEP 380 didn't make it into
> Python 2.7 was that this issue was unresolved at the time (the PEP author
> and I preferred the current approach, but there was one vocal opponent who
> disagreed -- although my memory is only about 60% reliable on this :-).
>
> In any case, problem is that in order to implement the behavior you're
> asking for, the generator object would have to somehow hold on to its
> return value so that each time __next__ is called after it has already
> terminated it can raise StopIteration with the saved return value. This
> would extend the lifetime of the returned object indefinitely (until the
> generator object itself is GC'ed) in order to handle a pretty obscure
> corner case.
>
> I don't know how long you have been using async/await, but I wonder if
> it's possible that you just haven't gotten used to the typical usage
> patterns? In particular, your claim "anything that takes an `awaitable` has
> to know that it wasn't already awaited" makes me sound that you're just
> using it in an atypical way (perhaps because your model is based on other
> languages). In typical asyncio code, one does not usually take an
> awaitable, wait for it, and then return it -- one either awaits it and then
> extracts the result, or one returns it without awaiting it.
>
> On Tue, Dec 15, 2015 at 11:56 AM, Roy Williams  wrote:
>
>> Howdy,
>>
>> I'm experimenting with async/await in Python 3, and one very surprising
>> behavior has been what happens when calling `await` twice on an Awaitable.
>> In C#, Hack/HHVM, and the new async/await spec in ECMAScript 7, awaiting the
>> same Awaitable more than once simply returns the result each time.  In Python,
>> every `await` after the first gets back `None`.  Here's a small example
>> program:
>>
>>
>> import asyncio
>>
>> async def echo_hi():
>>     result = ''
>>     echo_proc = await asyncio.create_subprocess_exec(
>>         'echo', 'hello', 'world',
>>         stdout=asyncio.subprocess.PIPE,
>>         stderr=asyncio.subprocess.DEVNULL)
>>     result = await echo_proc.stdout.read()
>>     await echo_proc.wait()
>>     return result
>>
>> async def await_twice(awaitable):
>>     print('first time is {}'.format(await awaitable))
>>     print('second time is {}'.format(await awaitable))
>>
>> loop = asyncio.get_event_loop()
>> loop.run_until_complete(await_twice(echo_hi()))
>>
>> This makes writing composable APIs using async/await in Python very
>> difficult since anything that takes an `awaitable` has to know that it
>> wasn't already awaited.  Also, since the behavior is radically different
>> than in the other programming languages implementing async/await it makes
>> adopting Python's flavor of async/await difficult for folks coming from a
>> language where it's already implemented.
>>
>> In C#/Hack/JS calls to `await` return a Task/AwaitableHandle/Promise that
>> can be awaited multiple times and either returns the result or throws any
>> thrown exceptions.  It doesn't appear that the Awaitable class in Python
>> has a `result` or `exception` field but `asyncio.Future` does.
>>
>> Would it make sense to shift from having `async` functions return a
>> *Future-like* object to returning a Future?
>>

Re: [Python-Dev] async/await behavior on multiple calls

2015-12-15 Thread Roy Williams
@Kevin correct, that's the point I'd like to discuss.  Most other
mainstream languages that implement async/await expose the programming
model with Tasks/Futures/Promises as opposed to coroutines.  PEP 492 states
'Objects with __await__ method are called Future-like objects in the rest
of this PEP.', but their behavior differs from that of Futures in this core
way.  Given that most other languages have standardized around async
functions returning a Future as opposed to a coroutine, I think it's worth
exploring why Python differs.

There are, without a doubt, a lot of benefits to making the programming
model coroutines.  It's absolutely brilliant that I can just call code
annotated with @asyncio.coroutine and have it just work.  Code using the
old @asyncio.coroutine/yield from syntax should absolutely stay the same.
Similarly, since ES7 async/await is backed by Promises, it'll just work with
any existing code out there using Promises.

My proposal would be to automatically wrap the return value from an `async`
function or any object implementing `__await__` in a future with
`asyncio.ensure_future()`.  This would allow async/await code to behave in
a similar manner to other languages implementing async/await and would
remain compatible with existing code using asyncio.
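
For illustration, the semantics I'm after can be sketched today as a decorator
(the name is made up, and it assumes an asyncio event loop is available when
the decorated function is called):

import asyncio
import functools

def returns_future(coro_func):
    """Make an async function hand back a Future (Task) instead of a bare
    coroutine, so the result can be awaited any number of times."""
    @functools.wraps(coro_func)
    def wrapper(*args, **kwargs):
        return asyncio.ensure_future(coro_func(*args, **kwargs))
    return wrapper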

What are your thoughts?

Thanks,
Roy

On Tue, Dec 15, 2015 at 3:35 PM, Kevin Conway 
wrote:

> I think there may be somewhat of a language barrier here. OP appears to be
> mixing the terms of coroutines and futures. The behavior OP describes is
> that of promises or async tasks in other languages.
>
> Consider a JS promise that has been resolved:
>
> promise.then(function (value) {...});
>
> promise.then(function (value) {...});
>
> Both of the above will execute the callback function with the resolved
> value regardless of how much earlier the promise was resolved. This is not
> entirely different from how Futures work in Python when using
> 'add_done_callback'.
>
> The code example from OP, however, is showing the behaviour of awaiting a
> coroutine twice rather than awaiting a Future twice. Both objects are
> awaitable but both exhibit different behaviour when awaited multiple times.
>
> A scenario I believe deserves a test is what happens in the asyncio
> coroutine scheduler when a promise is awaited multiple times. The current
> __await__ behaviour is to return self only when not done and then to return
> the value after resolution for each subsequent await. The Task, however,
> requires that it must be a Future emitted from the coroutine and not a
> primitive value. Awaiting a resolved future should result
>
> On Tue, Dec 15, 2015, 14:44 Guido van Rossum  wrote:
>
>> Agreed. (But let's hear from the OP first.)
>>
>> On Tue, Dec 15, 2015 at 12:27 PM, Andrew Svetlov <
>> andrew.svet...@gmail.com> wrote:
>>
>>> Both Yury's suggestions sounds reasonable.
>>>
>>> On Tue, Dec 15, 2015 at 10:24 PM, Yury Selivanov
>>>  wrote:
>>> > Hi Roy and Guido,
>>> >
>>> > On 2015-12-15 3:08 PM, Guido van Rossum wrote:
>>> > [..]
>>> >>
>>> >>
>>> >> I don't know how long you have been using async/await, but I wonder if
>>> >> it's possible that you just haven't gotten used to the typical usage
>>> >> patterns? In particular, your claim "anything that takes an
>>> `awaitable` has
>>> >> to know that it wasn't already awaited" makes me sound that you're
>>> just
>>> >> using it in an atypical way (perhaps because your model is based on
>>> other
>>> >> languages). In typical asyncio code, one does not usually take an
>>> awaitable,
>>> >> wait for it, and then return it -- one either awaits it and then
>>> extracts
>>> >> the result, or one returns it without awaiting it.
>>> >
>>> >
>>> > I agree.  Holding a return value just so that coroutine can return it
>>> again
>>> > seems wrong to me.
>>> >
>>> > However, since coroutines are now a separate type (although they share
>>> a lot
>>> > of code with generators internally), maybe we can change them to throw
>>> an
>>> > error when they are awaited on more than one time?
>>> >
>>> > That should be better than letting them return `None`:
>>> >
>>> > coro = coroutine()
>>> > await coro
>>> > await coro  # <- will raise RuntimeError
>>> >
>>> >
>>> > I'd also add a check that the coro

Re: [Python-Dev] async/await behavior on multiple calls

2015-12-16 Thread Roy Williams
I totally agree that async/await should not be tied to any underlying
message pump/event loop.  Ensuring that async/await works with existing
systems like Tornado is great.

As for the two options, option 1 is the expected behavior from developers
coming from other languages implementing async/await, which is why I found
the existing behavior to be so unintuitive.  To Barry and Kevin's point,
this problem is exacerbated by a lack of documentation and examples that
one can follow to learn about the Pythonic approach to async/await.

Thanks,
Roy

On Tue, Dec 15, 2015 at 7:33 PM, Yury Selivanov 
wrote:

> Roy,
>
> On 2015-12-15 8:29 PM, Roy Williams wrote:
> [..]
>
>>
>> My proposal would be to automatically wrap the return value from an
>> `async` function or any object implementing `__await__` in a future with
>> `asyncio.ensure_future()`.  This would allow async/await code to behave in
>> a similar manner to other languages implementing async/await and would
>> remain compatible with existing code using asyncio.
>>
>> What are your thoughts?
>>
>
> Other languages, such as JavaScript, have a notion of event loop
> integrated on a very deep level.  In Python, there is no centralized event
> loop, and asyncio is just one way of implementing one.
>
> In asyncio, Future objects are designed to inter-operate with an event
> loop (that's also true for JS Promises), which means that in order to
> automatically wrap Python coroutines in Futures, we'd have to define the
> event loop deep in Python core.  Otherwise it's impossible to implement
> 'Future.add_done_callback', since there would be nothing that calls the
> callbacks on completion.
>
> To avoid adding a built-in event loop, PEP 492 introduced coroutines as an
> abstract language concept.  David Beazley, for instance, doesn't like
> Futures, and his new framework 'curio' does not have them at all.
>
> I highly doubt that we want to add a generalized event loop in Python
> core, define a generalized Future interface, and make coroutines return
> it.  It's simply too much work with no clear wins.
>
> Now, your initial email highlights another problem:
>
>coro = coroutine()
>print(await coro)  # will print the result of coroutine
>await coro  # prints None
>
> This is a bug that needs to be fixed.  We have two options:
>
> 1. Cache the result when the coroutine object is awaited first time.
> Return the cached result when the coroutine object is awaited again.
>
> 2. Raise an error if the coroutine object is awaited more than once.
>
> The (1) option would solve your problem.  But it also introduces new
> complexity: the GC of result will be delayed; more importantly, some users
> will wonder if we cache the result or run the coroutine again.  It's just
> not obvious.
>
> The (2) option is Pythonic and simple to understand/debug, IMHO.  In this
> case, the best way for you to solve your initial problem, would be to have
> a decorator around your tasks.  The decorator should wrap coroutines with
> Futures (with asyncio.ensure_future) and everything will work as you expect.
>
> Thanks,
> Yury
>


[Python-Dev] Expose environment variable for Py_Py3kWarningFlag (Issue 28288)

2016-11-23 Thread Roy Williams
Howdy all,

I'd love to be able to get http://bugs.python.org/issue28288 into 2.7.12.
I've found the -3 flag to be immensely useful when porting Python code to
Python 3, but unfortunately it can be difficult to use with services that
run Python in subprocesses (like Gunicorn or xdist with py.test).  With
this patch I'd be able to set the `PYTHON3WARNINGS` environment variable to
ensure I get warnings everywhere.
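
To make the use case concrete, here's roughly what it would enable (the
variable name is from the patch; treating any non-empty value as "on", and the
gunicorn command line, are just assumptions for the example):

import os
import subprocess

# With the patch, every Python 2.7 interpreter started with this variable set
# would behave as if it had been launched with -3, including workers spawned
# indirectly by Gunicorn or pytest-xdist.
env = dict(os.environ, PYTHON3WARNINGS='1')
subprocess.call(['gunicorn', 'myapp.wsgi:application'], env=env)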

Thanks, let me know what you think,

Roy


[Python-Dev] New functionality for unittest.mock side_effect

2022-04-26 Thread Roy Smith
I often want a side_effect that provides "if called with foo, return bar" functionality.
This is really useful when the order of calls to your mock is indeterminate, so 
you can't just use an iterable.  What I end up doing is writing a little 
function:

def f(x):
    data = {
        'foo': 'bar',
        'baz': 'blah',
    }
    return data[x]

my_mock.side_effect = f

Sometimes I wrap that up in a lambda, but either way it's fidgety boilerplate
that obfuscates the intent.  It would be nice if side_effect could accept a
dict and provide the above functionality for free:

my_mock.side_effect = {
    'foo': 'bar',
    'baz': 'blah',
}

It could handle DEFAULT just like side_effect does now, and it could raise
something more useful than KeyError if the key isn't found.

I could see the argument that "dicts are themselves iterable, so this could
break some existing mocks which depend on side_effect iterating over the dict's
keys".  That would be bizarre, but possible.  The fix for that would be to
invent a new attribute, say return_value_map, which does this, and to say that
it's an error to set more than one of {return_value, return_value_map,
side_effect}.
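
For what it's worth, a workaround that gets close with the current API is to
pass the dict's own lookup method as the callable (a sketch, not the proposed
feature):

from unittest import mock

# dict.__getitem__ already has the "if called with foo, return bar" shape,
# so it can stand in as the side_effect callable today.
my_mock = mock.Mock()
my_mock.side_effect = {'foo': 'bar', 'baz': 'blah'}.__getitem__

assert my_mock('foo') == 'bar'
assert my_mock('baz') == 'blah'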

Anyway, if I were to take a whack at coding this up, is it something that would 
be considered for a future release?






Re: [Python-Dev] mkdir -p in python

2010-07-21 Thread Clinton Roy
Hey folks,

On 21 July 2010 10:37, Greg Ewing  wrote:
> Steven D'Aprano wrote:
>>
>> Perhaps all we need is a recipe in the docs:
>>
>> try:
>>    os.makedirs(path)
>> except OSError, e:
>>    if e.errno != 17:
>>        raise
>
> I don't like writing code that depends on particular
> errno values, because I don't trust it to work cross-
> platform.

I use errno.EEXIST instead of a magic number.
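
i.e. the recipe above becomes (a sketch; the example path is made up):

import errno
import os

path = 'some/nested/dir'  # any directory you want to ensure exists

# Same recipe as quoted above, with the symbolic constant instead of 17.
try:
    os.makedirs(path)
except OSError as e:
    if e.errno != errno.EEXIST:
        raise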

later,
-- 
Clinton Roy
Software Engineer
Global Change Institute
University of Queensland

humbug.org.au  - Brisbane Unix Group
clintonroy.wordpress.com - Blog
flickr.com/photos/croy/ - Photos


[Python-Dev] fileinput module tests

2006-12-09 Thread Anthony Roy
Hi all,

I have a patch for the fileinput.FileInput class, adding a parameter
to the __init__ method called write_mode in order to specify the write
mode when using the class with the inplace parameter set to True.

Before I submit the patch, I've added a test to the test module, and
noticed that the module is pretty messy, with half of the tests being
run in the module body, and the rest in a large function body.

I propose to refactor the module, moving the tests into a
unittest.TestCase subclass (essentially unchanged, bar changing verify
calls to self.assert_ calls, raise TestFailed(...) to self.fail(...)
etc). I think this will add clarity and modularity to the tests, and
bring them into line with the unittest based test suite used by the
test_file module amongst others (which I'm guessing are substantially
more up to date than test_fileinput).

Any thoughts?

Cheers,

-- 
Ant...



Re: [Python-Dev] Python developers are in demand

2007-10-25 Thread Anthony Roy

> Interesting to see discussion on supply and demand issues for
> Python programmers.  You might be interested to learn that,
> after a few years of flirting with Python in various ways, the
> School of Computing at the University of Leeds has recently
> switched to teaching Python as the first and primary programming
> language for undergraduates on all of our degree programmes.
>
> I know we're not the only ones doing this, so perhaps the
> supply will rise to meet the demand in a few years!

I was a researcher in the School of Computing at Leeds Uni about 4 years
ago. Good to see them pushing Python!

I keep my eyes open for Python developer roles in the UK (particularly the
North), since I would far prefer to develop in Python than in Java.  However,
in the UK Python jobs seem to be few and far between, and most of the ones
that do exist are either low-paid sysadmin-type roles or are based in London.

Cheers,

-- 
Anthony Roy



[Python-Dev] beginning developer: fastest way to learn how Python 3.0 works

2008-12-13 Thread Roy Lowrance
I'd like to learn how Python 3.0 works.  I've checked out the source from svn.

I am wondering what the best way to learn is:
- Just jump in?
- Or perhaps learn A before B?
- Or maybe there is a tutorial for those new to the internals?

What's the best way to learn how Python 3.0 works?

Roy


Re: [Python-Dev] beginning developer: fastest way to learn how Python 3.0 works

2008-12-13 Thread Roy Lowrance
Maybe this is the correct list, as my inquiry is about how to learn
how the current implementation works so that I could consider how to
implement new features.

So, here's a modified question: if you want to learn how Python works
(not how to program in the Python language), what's a productive way
to proceed?

Roy

On Sat, Dec 13, 2008 at 1:18 PM, Aahz  wrote:
> On Sat, Dec 13, 2008, Roy Lowrance wrote:
>>
>> What's the best way to learn how Python 3.0 works?
>
> Post to the correct mailing list.  ;-)
>
> Use comp.lang.python or python-tutor or python-help
>
> python-dev is for people creating new versions of Python
> --
> Aahz (a...@pythoncraft.com)   <*> http://www.pythoncraft.com/
>
> "It is easier to optimize correct code than to correct optimized code."
> --Bill Harlan
>



-- 
Roy Lowrance
home: 212 674 9777
mobile: 347 255 2544


Re: [Python-Dev] PyCon 2009: Call for sprint projects

2009-02-25 Thread Roy H. Han
Hi Jacob,

Will there be a GeoDjango/OpenLayers subsprint in the Django sprint?

Thanks,
RHH

On Mon, Feb 9, 2009 at 8:27 PM, Jacob Kaplan-Moss  wrote:
> Python-related projects: join the PyCon Development Sprints!
>
> The development sprints are a key part of PyCon, a chance for the contributors
> to open-source projects to get together face-to-face for up to four days of
> intensive learning and development. Newbies sit at the same table as the 
> gurus,
> go out for lunch and dinner together, and have a great time while advancing
> their project. Sprints are one of the best parts of PyCon; in 2008 over 250
> sprinters came for at least one day!
>
> If your project would like to sprint at PyCon, now is the time to let us know.
> We'll collect your info and publish it so that participants will have time to
> make plans. We'll need to get the word out early so that folks who want to
> sprint have time to make travel arrangements.
>
> In the past, some have been reluctant to commit to sprinting: some may not 
> know
> what sprinting is all about; others may think that they're not "qualified" to
> sprint. We're on an ongoing mission to change that perception:
>
> * We'll help promote your sprint. The PyCon website, the PyCon blog, the PyCon
>  podcast, and press releases will be there for you.
>
> * PyCon attendees will be asked to commit to sprints on the registration form,
>  which will include a list of sprints with links to further info.
>
> * We will be featuring a "How To Sprint" session on Sunday afternoon, followed
>  by sprint-related tutorials, all for free. This is a great opportunity to
>  introduce your project to prospective contributors. We'll have more details
>  about this later.
>
> * Some sponsors are helping out with the sprints as well.
>
> There's also cost. Although the sprinting itself is free, sprints have
> associated time and hotel costs. We can't do anything about the time cost, but
> we may have some complimentary rooms and funding available for sprinters. We
> will have more to say on financial aid later.
>
> Those who want to lead a sprint should send the following information
> to pycon-organiz...@python.org:
>
> * Project/sprint name
>
> * Project URL
>
> * The name and contact info (email and/or telephone) for the sprint leader(s)
>  and other contributors who will attend the sprint
>
> * Instructions for accessing the project's code repository and documentation 
> (or
>  a URL)
>
> * Pointers to new contributor information (setup, etc.)
>
> * Any special requirements (projector? whiteboard? flux capacitor?)
>
> We will add this information to the PyCon website and set up a wiki page for 
> you
> (or we can link to yours). Projects should provide a list of goals (bugs to 
> fix,
> features to add, docs to write, etc.), especially some goals for beginners, to
> attract new sprinters. The more detail you put there, the more prepared your
> sprinters will be, and the more results you'll get.
>
> In 2008 there were sprints for Python, TurboGears, Pylons, Django, Jython, 
> WSGI,
> PyGame, Bazaar, Trac, PyCon-Tech, OLPC, Orbited, PyPy, Grok, and many others. 
> We
> would like to see all these and more!
>
> Sprints will start with an introductory session on Sunday, March 29; this
> session will begin immediately after PyCon's scheduled portion ends. The 
> sprints
> themselves will run from Monday, March 30 through Thursday, April 2, 2009.
>
> You can find all these details and more at http://us.pycon.org/2009/sprints/.
>
> If you have any questions, feel free to contact me directly.
> Thank you very much, and happy coding!
>
> Jacob Kaplan-Moss
> 


[Python-Dev] About adding a new iterator method called "shuffled"

2009-03-24 Thread Roy Hyunjin Han
I know that Python has built-in functions called "sorted" and "reversed", and
these are handy shortcuts.


Why not add a similar built-in called "shuffled"?

>>> for x in shuffled(range(5)):
...     print x
3
1
2
0
4


Currently, a person has to do the following because random.shuffle() does
not return the actual shuffled list.  It is verbose.

>>> import random
>>> x = range(5)
>>> random.shuffle(x)
>>> for x in x:
...     print x
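
A minimal sketch of what I have in mind (illustrative only; it mirrors sorted()
by returning a new list and leaving the input untouched):

import random

def shuffled(iterable):
    items = list(iterable)
    random.shuffle(items)
    return items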


[Python-Dev] Splitting something into two steps produces different behavior from doing it in one fell swoop in Python 2.6.2

2009-12-11 Thread Roy Hyunjin Han
While debugging a network algorithm in Python 2.6.2, I encountered
some strange behavior and was wondering whether it has to do with some
sort of code optimization that Python does behind the scenes.



After initialization: defaultdict(<type 'set'>, {1: set([1])})
Popping and updating in two steps: defaultdict(<type 'set'>, {1: set([1])})

After initialization: defaultdict(<type 'set'>, {1: set([1])})
Popping and updating in one step: defaultdict(<type 'set'>, {})


import collections
print ''
x = collections.defaultdict(set)
x[1].update([1])
print 'After initialization: %s' % x
items = x.pop(1)
x[1].update(items)
print 'Popping and updating in two steps: %s' % x
print ''
y = collections.defaultdict(set)
y[1].update([1])
print 'After initialization: %s' % y
y[1].update(y.pop(1))
print 'Popping and updating in one step: %s' % y




Re: [Python-Dev] Splitting something into two steps produces different behavior from doing it in one fell swoop in Python 2.6.2

2009-12-11 Thread Roy Hyunjin Han
On Fri, Dec 11, 2009 at 2:43 PM, MRAB  wrote:
> John Arbash Meinel wrote:
>>
>> Roy Hyunjin Han wrote:
>>>
>>> While debugging a network algorithm in Python 2.6.2, I encountered
>>> some strange behavior and was wondering whether it has to do with some
>>> sort of code optimization that Python does behind the scenes.
>>>
>>>
>>> 
>>> After initialization: defaultdict(<type 'set'>, {1: set([1])})
>>> Popping and updating in two steps: defaultdict(<type 'set'>, {1: set([1])})
>>> 
>>> After initialization: defaultdict(<type 'set'>, {1: set([1])})
>>> Popping and updating in one step: defaultdict(<type 'set'>, {})
>>>
>>>
>>> import collections
>>> print ''
>>> x = collections.defaultdict(set)
>>> x[1].update([1])
>>> print 'After initialization: %s' % x
>>> items = x.pop(1)
>>> x[1].update(items)
>>> print 'Popping and updating in two steps: %s' % x
>>> print ''
>>> y = collections.defaultdict(set)
>>> y[1].update([1])
>>> print 'After initialization: %s' % y
>>> y[1].update(y.pop(1))
>>> print 'Popping and updating in one step: %s' % y
>>>
>>
>> y[1].update(y.pop(1))
>>
>> is going to be evaluating y[1] before it evaluates y.pop(1).
>> Which means that it has the original set returned, which is then removed
>> by y.pop, and updated.
>>
>> You probably get the same behavior without using a defaultdict:
>>  y.setdefault(1, set()).update(y.pop(1))
>>  ^^- evaluated first
>>
>>
>> Oh and I should probably give the standard: "This list is for the
>> development *of* python, not development *with* python."
>>
> To me the question was whether Python was behaving correctly; the
> behaviour was more subtle than the legendary mutable default argument.

Thanks, John and MRAB.  I was pointing it out on this list because I
felt like it was counterintuitive and that the result should be the
same whether the developer decides to do it in two steps or in one
step.


Re: [Python-Dev] Splitting something into two steps produces different behavior from doing it in one fell swoop in Python 2.6.2

2009-12-14 Thread Roy Hyunjin Han
On Fri, Dec 11, 2009 at 7:59 PM, Nick Coghlan  wrote:
> It follows the standard left-to-right evaluation order within an expression:
>
> <resolve y[1].update>(<evaluate y.pop(1)>)
>
> (i.e. a function call always determines which function is going to be
> called before determining any arguments to be passed)
>
> Splitting it into two lines then clearly changes the evaluation order:
>
> temp = <evaluate y.pop(1)>
> <resolve y[1].update>(temp)
>
> I'm not sure what behaviour could be suggested as being more intuitive -
> the problem in this case arose due to both referencing and mutating the
> same object in a single statement, which is always going to be
> problematic from an ordering point of view, since it depends on subtle
> details of statement definitions that people often won't know. Better to
> split the mutation and the reference into separate statements so the
> intended order is clear regardless of how well the reader knows the
> subtleties of Python's evaluation order.
>
> Cheers,
> Nick.
>

Thanks for the explanation, Nick.  I understand what is happening now.
 y[1].update resolves to the update() method of the old set referenced
by y[1], but this set is then popped so that the update() manages to
change the old set, but not the new one that is returned by any
subsequent reference to y[1].  I suppose I will have to keep this in
two statements.  A similar thing happened with the SWIG extensions for
GDAL, in which one had to separate statements so that SWIG objects
were not destroyed prematurely, i.e. no more one-liners.


[Python-Dev] What if replacing items in a dictionary returns the new dictionary?

2011-04-29 Thread Roy Hyunjin Han
It would be convenient if replacing items in a dictionary returns the
new dictionary, in a manner analogous to str.replace().  What do you
think?
::

# Current behavior
x = {'key1': 1}
x.update(key1=3) == None
x == {'key1': 3} # Original variable has changed

# Possible behavior
x = {'key1': 1}
x.replace(key1=3) == {'key1': 3}
x == {'key1': 1} # Original variable is unchanged


Re: [Python-Dev] What if replacing items in a dictionary returns the new dictionary?

2011-04-29 Thread Roy Hyunjin Han
2011/4/29 R. David Murray :
> 2011/4/29 Roy Hyunjin Han :
>> It would be convenient if replacing items in a dictionary returns the
>> new dictionary, in a manner analogous to str.replace()
>
> This belongs on python-ideas, but the short answer is no.  The
> general language design principle (as I understand it) is that
> mutable object do not return themselves upon mutation, while
> immutable objects do return the new object.

Thanks for the responses.  Sorry for the mispost, I'll post things
like this on python-ideas from now on.

RHH


Re: [Python-Dev] What if replacing items in a dictionary returns the new dictionary?

2011-04-29 Thread Roy Hyunjin Han
>   You can implement this in your own subclass of dict, no?

Yes, I just thought it would be convenient to have in the language
itself, but the responses to my post seem to indicate that [not
returning the updated object] is an intended language feature for
mutable types like dict or list.

class ReplaceableDict(dict):
    def replace(self, **kw):
        'Works for replacing string-based keys'
        return dict(self.items() + kw.items())


Re: [Python-Dev] What if replacing items in a dictionary returns the new dictionary?

2011-05-05 Thread Roy Hyunjin Han
>> 2011/4/29 Roy Hyunjin Han :
>> It would be convenient if replacing items in a dictionary returns the
>> new dictionary, in a manner analogous to str.replace().  What do you
>> think?
>>
>># Current behavior
>>x = {'key1': 1}
>>x.update(key1=3) == None
>>x == {'key1': 3} # Original variable has changed
>>
>># Possible behavior
>>x = {'key1': 1}
>>x.replace(key1=3) == {'key1': 3}
>>x == {'key1': 1} # Original variable is unchanged
>>
> 2011/5/5 Giuseppe Ottaviano :
> In general nothing stops you to use a proxy object that returns itself
> after each method call, something like
>
> class using(object):
>     def __init__(self, obj):
>         self._wrappee = obj
>
>     def unwrap(self):
>         return self._wrappee
>
>     def __getattr__(self, attr):
>         def wrapper(*args, **kwargs):
>             getattr(self._wrappee, attr)(*args, **kwargs)
>             return self
>         return wrapper
>
>
> d = dict()
> print using(d).update(dict(a=1)).update(dict(b=2)).unwrap()
> # prints {'a': 1, 'b': 2}
> l = list()
> print using(l).append(1).append(2).unwrap()
> # prints [1, 2]

Cool!  I never thought of that.  That's a great snippet.

I'll forward this to the python-ideas list.  I don't think the
python-dev people want this discussion to continue on their mailing
list.