Re: Sqlite pragma statement "locking_mode" set to "EXCLUSIVE" by default

2015-09-22 Thread Ryan Stuart
On Tue, Sep 22, 2015 at 5:12 PM, Sol T  wrote:

> I know I can do this per connection, however how can I have it set to
> default? Does this need to be compiled into python?
>

If you need to compile sqlite you might want to look at apsw:
https://github.com/rogerbinns/apsw

Cheers


>
> On Tue, Sep 22, 2015 at 2:04 PM, Ryan Stuart 
> wrote:
>
>> On Thu, Sep 17, 2015 at 2:24 PM, sol433tt  wrote:
>>
>>> I would like to have the Sqlite pragma statement "locking_mode" set to
>>> "EXCLUSIVE" by default (RO database). Does this need to be compiled in? How
>>> might this be achieved?
>>>
>>
>> You can issue any PRAGMA statement you like using the execute method on a
>> connection as per the documentation (
>> https://docs.python.org/2/library/sqlite3.html#sqlite3.Connection.execute).
>> For information on the specific PRAGMA option you need, see
>> https://www.sqlite.org/pragma.html#pragma_locking_mode. I can't see any
>> mention of it being a compile time PRAGMA.
>>
>> Cheers
>>
>>
>>>
>>> There is some performance to be gained. I have a number of python
>>> scripts, and don't want to alter the pragma statement for every script.
>>> This computer never writes to the databases.
>>>
>>> thank you
>>>
>>> --
>>> https://mail.python.org/mailman/listinfo/python-list
>>>
>>>
>>
>>
>> --
>> Ryan Stuart, B.Eng
>> Software Engineer
>>
>> W: http://www.kapiche.com/
>> M: +61-431-299-036
>>
>
>


-- 
Ryan Stuart, B.Eng
Software Engineer

W: http://www.kapiche.com/
M: +61-431-299-036
-- 
https://mail.python.org/mailman/listinfo/python-list


Re: Sqlite pragma statement "locking_mode" set to "EXCLUSIVE" by default

2015-09-21 Thread Ryan Stuart
On Thu, Sep 17, 2015 at 2:24 PM, sol433tt  wrote:

> I would like to have the Sqlite pragma statement "locking_mode" set to
> "EXCLUSIVE" by default (RO database). Does this need to be compiled in? How
> might this be achieved?
>

You can issue any PRAGMA statement you like using the execute method on a
connection as per the documentation (
https://docs.python.org/2/library/sqlite3.html#sqlite3.Connection.execute).
For information on the specific PRAGMA option you need, see
https://www.sqlite.org/pragma.html#pragma_locking_mode. I can't see any
mention of it being a compile time PRAGMA.
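A minimal sketch of that, with the PRAGMA wrapped in a small helper so your
individual scripts don't each need to repeat it (the helper module and
function name are just an illustration, not anything sqlite3 provides):

```python
import sqlite3

def connect(path):
    # Shared helper: every script imports this instead of calling
    # sqlite3.connect() directly, so the PRAGMA is set in one place.
    conn = sqlite3.connect(path)
    conn.execute("PRAGMA locking_mode = EXCLUSIVE")
    return conn

conn = connect(":memory:")
# Querying the pragma with no argument returns the current mode.
mode, = conn.execute("PRAGMA locking_mode").fetchone()
print(mode)  # exclusive
```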

Cheers


>
> There is some performance to be gained. I have a number of python scripts,
> and don't want to alter the pragma statement for every script. This
> computer never writes to the databases.
>
> thank you
>
> --
> https://mail.python.org/mailman/listinfo/python-list
>
>


-- 
Ryan Stuart, B.Eng
Software Engineer

W: http://www.kapiche.com/
M: +61-431-299-036
-- 
https://mail.python.org/mailman/listinfo/python-list


Re: Parser needed.

2015-06-09 Thread Ryan Stuart
On Tue, Jun 9, 2015 at 12:30 PM, Skybuck Flying 
wrote:

> Anyway... I am trying a more robust parser... because my own parser right
> now didn't work out for new inputs.
>

You should take a look at lrparsing:
https://pypi.python.org/pypi/lrparsing/1.0.11
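If you do end up rolling your own, the "find all recognised token keywords
in one go" idea from your post maps nicely onto a single regex scan with
re.finditer, which already yields matches in position order. A rough sketch
(the token set here is made up):

```python
import re

# Hypothetical token set -- swap in your real keywords.
TOKEN_SPEC = [
    ("NUMBER", r"\d+"),
    ("NAME",   r"[A-Za-z_]\w*"),
    ("LBRACE", r"\{"),
    ("RBRACE", r"\}"),
]
PATTERN = "|".join("(?P<%s>%s)" % spec for spec in TOKEN_SPEC)

def tokenize(text):
    # One pass over the input; whitespace is skipped for free because
    # it never matches any alternative, and matches come back in order.
    for m in re.finditer(PATTERN, text):
        yield m.lastgroup, m.group()

print(list(tokenize("begin x1 { 42 }")))
# [('NAME', 'begin'), ('NAME', 'x1'), ('LBRACE', '{'), ('NUMBER', '42'), ('RBRACE', '}')]
```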

Cheers


>
> It almost worked except for first item... don't know what problem was
> maybe just this...
>
> But I'll try and do it the usually way.
>
> "Tokenize", "Parse" etc.
>
> It's significantly slower though.
>
> Maybe an idea for a new kind of parser could be:
>
> Find all recognized token keywords in one go... and stuff all find indexes
> into a list... and then perhaps sort the list...
>
> and then use the sorted list... to return token found at that position...
>
> and then start processing like that...
>
> Benefit of this idea is less characters to compare... and could be done
> with just integer compare... or even lookup table processing.
>
> Added benefit is still in-order which is nice.
>
> For now I won't do it... cause I am not sure it would be an improvement.
>
> Another idea which might be an improvement is.
>
> Parallel searching... of course that might not be possible... though...
> multi core does it exist and threading too.
>
> But to mimic parallel searching and to prevent problems.
>
> A sliding window approach could be taken.
>
> And perhaps items found that way in a certain buffer or so and then still
> added to processing list or so...
>
> which kinda mimics parallel search... but just uses data cache nicely.
>
> Though it's somewhat of a complex method... I will avoid it for now.
>
> The split() routine is real nice... to get rid of fudd/white space... and
> just tokens which is nice.
>
> So for now I will use that as my tokenizer ;) =D
>
> and bracket level counting and sections and stuff like that yeah...
>
>
> Bye,
>  Skybuck.
> --
> https://mail.python.org/mailman/listinfo/python-list
>



-- 
Ryan Stuart, B.Eng
Software Engineer

ABN: 81-206-082-133
W: http://www.textisbeautiful.net/
M: +61-431-299-036
-- 
https://mail.python.org/mailman/listinfo/python-list


Re: test2

2015-03-24 Thread Ryan Stuart
et_tally
> from django.http import HttpResponsePermanentRedirect
> from django.shortcuts import render_to_response
> from django.contrib import admin
> admin.autodiscover()
>
> import os
>
> PROJECT_PATH = os.path.realpath(os.path.dirname(__file__))
>
> # Uncomment the next two lines to enable the admin:
> from django.contrib import admin
> admin.autodiscover()
>
> urlpatterns = patterns('asset.views',
> # Examples:
> # url(r'^$', 'ipdb.views.home', name='home'),
> # url(r'^ipdb/', include('ipdb.foo.urls')),
>
> # Uncomment the admin/doc line below to enable admin documentation:
>
> # Uncomment the next line to enable the admin:
> url(r'^admin/', include(admin.site.urls)),
> #(r'^accounts/', include('registration.urls')),
> (r'^$', lambda request:
> HttpResponsePermanentRedirect('/asset/overview')),
> (r'^heatmap/', country_tally),
> (r'^grid/',grid),
> (r'^nto/',newest_to_oldest),
> (r'^ctwtt/',country_tally_web_top10),
> (r'^ctw/',country_tally_web),
> (r'^at/',asset_tally),
>
>  #(r'^/$', ipdb_overview), #Start
>  #(r'^$', ipdb_overview), #Start
> (r'^asset/overview/$', ipdb_overview), #Start
> (r'^input/$', gui_add),
> (r'^search/$', gui_search),
> (r'^search$', gui_search),
> # (r'^delete/$', ipdb_input_delete),
> (r'^api/add/$', api_add), #check sensornet urls.py and
> views/sensors.py for cool tricks.
> (r'^api/search/$', api_search), #check sensornet urls.py and
> views/sensors.py for cool tricks.
> (r'^api/file/$', api_file),
>
> #(r'^asset/media/(?P.*)$', 'django.views.static.serve',
> {'document_root':  os.path.join(PROJECT_PATH, '../asset/media')}),
>
> #(r'^ip_db/media/(?P.*)$', 'django.views.static.serve',
> {'document_root':  os.path.join(PROJECT_PATH, 'ip_db/media')}),
> #(r'^media/(?P.*)$', 'django.views.static.serve',
> {'document_root':  os.path.join(PROJECT_PATH, 'media')}),
> #(r'^static/(?P.*)$', 'django.views.static.serve',
> {'document_root':  '/home/sensornet/static'}),
>
> # extjs4 stuff
> #(r'^asset/overview/extjs/(?P.*)$', 'serve', {'document_root':
> os.path.join(PROJECT_PATH, '../asset/extjs')}),
> #(r'^extjs/(?P.*)$', 'django.views.static.serve',
> {'document_root':  os.path.join(PROJECT_PATH, '../ip_db/extjs')}),
> #(r'^extjs/(?P.*)$', 'django.views.static.serve',
> {'document_root':  '/home/totis/tree/ipdb/django/ipdb/asset/static/extjs'}),
> ) + static(settings.MEDIA_URL, document_root=settings.MEDIA_ROOT),
> 'ipdb.asset': {
> 'handlers': ['console', 'logfile'],
> 'level': 'DEBUG',
> },
> }
> }
>
> ===
>
>
>
> --
> https://mail.python.org/mailman/listinfo/python-list
>



-- 
Ryan Stuart, B.Eng
Software Engineer

ABN: 81-206-082-133
W: http://www.textisbeautiful.net/
M: +61-431-299-036
-- 
https://mail.python.org/mailman/listinfo/python-list


Re: HELP!! How to ask a girl out with a simple witty Python code??

2015-03-04 Thread Ryan Stuart
On Thu, 5 Mar 2015 at 11:35 Xrrific  wrote:

> Could you please come up with something witty incorporating a simple
> python line like If...then... but..etc.
>

Send her this:

import base64
print(base64.b64decode('CkhpLCAKVGhpcyBpcyBzb21lb25lIGZyb20gdGhlIHB5dGhvbiBtYWlsaW5nIGxpc3QuIFRoaXMgZ3V5IChoZSBwb3N0ZWQgdW5kZXIgdGhlIG5hbWUgYFhycmlmaWNgKSBwb3N0ZWQgaGVyZSBhc2tpbmcgZm9yIHNvbWUgY29kZSB0aGF0IHdvdWxkIGltcHJlc3MgeW91LiBJdCB3YXMgdmVyeSB0ZW1wdGluZyB0byBwdXQgc29tZXRoaW5nIHZlcnkgb2ZmZW5zaXZlIGluIHRoaXMgbWVzc2FnZSEgUmF0aGVyLCBoZXJlIGlzIHNvbWUgYXNjaWkgYXJ0LgoKICAgICAgICAgICAsCiAgICAgICwsICAgJSUgICAsICAgLAogICAgIDolJSU7IDslJSUgOyUgICUlOgogICAgICA6JSUlICUlJSUgJSUgJSU6ICAgCiAgICAgOiUlJSUlICUlJSUgJSAlJSU6IAogICAgIDolJSUlJSUgJSUlJSAlJSUlOgogICAgICA6JSUlJSUgJSUlICUlJSU6CiAgICAgICA6JSUlJSAlJSAlJSUlOgogICAgICAgIDolJSUlJSAlJSUlOgogICAgICAgICA6JSUlJSAlJSU6IAogICAgICAgICAgJiU6JSY6JSYKICAgICAgICAgICAmJjomOiYKICAgICAgICAgICAgJiYmJgogICAgICAgICAgICA7JiY7ICAgICA7CiAgICAgICAgICAgICA6OiAgICA7Ozs7CiAgICAgICAgICAgICA6OiAgIDs7Ozs7CiAgICAgICAgICAgICA6OiAgOzs7Ozs7CiAgICAgICAgICAgICA6OiA7Ozs7OzsKICAgICAgIDsgICAgIDo6IDs7Ozs7CiAgICAgIDs7ICAgICA6Ojs7OzsgCiAgICAgOzs7OyAgICA6OjsgCiAgICAgOzs7OzsgICA6OgogICAgIDs7Ozs7OyAgOjoKICAgICAgOzs7OzsgIDo6CiAgICAgICAgOzs7OyA6OgogICAgICAgICAgIDs7OjoKICAgICAgICAgICAgOzo6CiAgICAgICAgICAgICA6OgogICAgICAgICAgICAgOjoKICAgICAgICAgICAgIDo6CiAgICAgICAgICAgICA6OgogICAgICAgICAgICAgOjoKICAgICAgICAgICAgIDo6CiAgICAgICAgICAgICA6OgogICAgICAgICAgICAgOjoKClAuUy4gWWVzIGl0IGlzIGEgbGl0dGxlIGxhbWUgdGhhdCBoZSBwb3N0ZWQgb24gdGhlIG1haWxpbmcgbGlzdCBmb3IgdGhpcywgYnV0IGl0J3MgdGhlIHRob3VnaHQgdGhhdCBjb3VudHMhCg=='))

Cheers


>
> You will make me a very happy man!!!
>
> Thank you very much!!!
> --
> https://mail.python.org/mailman/listinfo/python-list
>
-- 
https://mail.python.org/mailman/listinfo/python-list


Re: Are threads bad? - was: Future of Pypy?

2015-02-24 Thread Ryan Stuart
On Tue Feb 24 2015 at 3:32:47 PM Paul Rubin  wrote:

> Ryan Stuart  writes:
> Sure, the shared memory introduces the possibility of some bad errors,
> I'm just saying that I've found that by staying with a certain
> straightforward style, it doesn't seem difficult in practice to avoid
> those errors.
>

And all I'm saying is that it doesn't matter how careful you are, all it
takes is someone up the stack to not be as careful and you will be bitten.
And even if they are careful, it's still very easy to be bitten because
everything runs in shared memory.


> I don't understand what you mean about malloc.
>

My point is malloc, something further up (down?) the stack, is making
modifications to shared state when threads are involved. Modifying shared
state makes it infinitely more difficult to reason about the correctness of
your software.
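A tiny demonstration of why: even this toy counter, with no libraries
involved at all, can silently lose updates because the read and the write
of the shared value are separate steps:

```python
import threading

counter = 0

def bump(n):
    global counter
    for _ in range(n):
        tmp = counter      # read shared state...
        counter = tmp + 1  # ...then write it back: not atomic

threads = [threading.Thread(target=bump, args=(100_000,)) for _ in range(4)]
for t in threads:
    t.start()
for t in threads:
    t.join()

# A thread switch between the read and the write loses an update, so
# this can print anything up to (and often well short of) 400000.
print(counter)
```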


> That article was interesting in some ways but confused in others.  One
> way it was interesting is it said various non-thread approaches (such as
> coroutines) had about the same problems as threads.  Some ways it
> was confused were:
>

We clearly got completely different things from the article. My
interpretation was that it was making *the exact opposite* point to what
you stated mainly because non-threading approaches don't share state. It
states that quite clearly. For example "it is – literally – exponentially
more difficult to reason about a routine that may be executed from an
arbitrary number of threads concurrently".


>   1) thinking Haskell threads were like processes with separate address
>   spaces.  In fact they are in the same address space and programming
>   with them isn't all that different from Python threads, though the
>   synchronization primitives are a bit different.  There is also an STM
>   library available that is ingenious though apparently somewhat slow.
>

I don't know Haskell, so I can't comment too much, except to say that by
default Haskell looks to use lightweight threads where only 1 thread can be
executing at a time [1] (sounds similar to the GIL to me). That doesn't
seem to be shared state multithreading, which is what the article is
referring to. But again, there will undoubtedly be someone sharing state
higher up the stack.


>   2) it has a weird story about the brass cockroach, that basically
>   signified that they didn't have a robust enough testing system to be
>   able to reproduce the bug.  That is what they should have worked on.
>

The point was that it wasn't feasible to have a robust testing suite
because, you guessed it, shared state and concurrent threads producing n*n
complexity.


>   3) It goes into various hazards of the balance transfer example not
>   mentioning that STM (available in Haskell and Clojure) completely
>   solves it.
>

This is probably correct. Are there any STM implementations out there that
don't significantly compromise performance?

> Anyway, I got one thing out of this, which is that the multiprocessing
> module looks pretty nice and I should try it even when I don't need
> multicore parallelism, so thanks for that.
>

Its one real advantage is that it side-steps the GIL. So, if you need to
utilise multiple cores for CPU-bound tasks, then it might well be the only
option.
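For example, a CPU-bound function farmed out across cores (a minimal
sketch; the function is just a stand-in):

```python
from multiprocessing import Pool

def cpu_bound(n):
    # Each worker runs in its own process with its own GIL, so these
    # sums really can execute in parallel on multiple cores.
    return sum(i * i for i in range(n))

if __name__ == "__main__":
    with Pool(2) as pool:
        results = pool.map(cpu_bound, [10, 100, 1000])
    print(results)  # [285, 328350, 332833500]
```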

Cheers


> In reality though, Python is still my most productive language for
> throwaway, non-concurrent scripting, but for more complicated concurrent
> programs, alternatives like Haskell, Erlang, and Go all have significant
> attractions.
> --
> https://mail.python.org/mailman/listinfo/python-list
>
-- 
https://mail.python.org/mailman/listinfo/python-list


Re: Future of Pypy?

2015-02-22 Thread Ryan Stuart
On Mon Feb 23 2015 at 4:15:42 PM Paul Rubin  wrote:
>
> What do you mean about Queues working with processes?  I meant
> Queue.Queue.  There is multiprocessing.Queue but that's much less
> capable, and it uses cumbersome IPC like pipes or sockets instead of a
> lighter weight lock.  Threads can also share read-only data and you can
> pass arbitrary objects (such as code callables that you want the other
> thread to execute--this is quite useful) through Queue.Queue.  I don't
> think you can do that with the multiprocessing module.
>

These things might be convenient but they are error prone for the reasons
pointed out. Also, the majority can be achieved via the process approach.
For example, using fork to take a copy of the current process (including
the heap) you want to use will give you access to any callables on the
heap.
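A sketch of that (POSIX only, since it relies on the "fork" start method;
the task table and names are made up):

```python
from multiprocessing import get_context

# A callable sitting on the heap before the fork.
tasks = {"square": lambda n: n * n}

def worker(name, arg, q):
    # Under "fork" the child starts with a copy of the parent's heap,
    # so `tasks` (including the lambda) is visible here -- but it is a
    # copy in a separate address space, not shared memory.
    q.put(tasks[name](arg))

if __name__ == "__main__":
    ctx = get_context("fork")  # POSIX only
    q = ctx.Queue()
    p = ctx.Process(target=worker, args=("square", 7, q))
    p.start()
    print(q.get())  # 49
    p.join()
```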

The vital point here is that fork takes a *copy* of the process and runs in
a *separate* memory space. This means there can be no accidents here. If it
were to run in the same memory space like a thread then bugs anywhere in
the code you run could cause very nasty problems. This includes not only
bugs in your code, but also bugs in any other library code. Not only are
they nasty, they could be close to invisible.

And when I say any code you run, I literally mean any. Even if you are
extra careful to not touch any shared state in your code, you can almost be
guaranteed that code higher up the stack, like malloc for example, *will*
be using shared state.

The risk of unintended and difficult to track issues when using threads is
very high because of shared state. Even if you aren't sharing state in your
code directly, code higher up the stack will be sharing state. That is the
whole point of a thread, that's what they were invented for. Using threads
safely might well be impossible, let alone verifiable. So when there are
other options that are just as viable/functional, result in far less risk
and are often much quicker to implement correctly, why wouldn't you use
them? If it were easy to use threads in a verifiably safe manner, then
there probably wouldn't be a GIL.

Cheers


> --
> https://mail.python.org/mailman/listinfo/python-list
>
-- 
https://mail.python.org/mailman/listinfo/python-list


Re: Future of Pypy?

2015-02-22 Thread Ryan Stuart
On Mon Feb 23 2015 at 1:50:40 PM Paul Rubin  wrote:

> That article is about the hazards of mutable state shared between
> threads.  The key to using threads safely is to not do that.  So the
> "transfer" example in the article would instead be a message handler in
> the thread holding the account data, and it would do the transfer in the
> usual sequential way.  You'd start a transfer by sending a message
> through a Queue, and get back a reply through another queue.
>

I think that is a pretty accurate summary. In fact, the article even says
that. So, just to iterate its point, if you are using non-blocking Queues
to communicate to these threads, then you just have a communicating event
loop. Given that Queues work perfectly with processes as well, what is
the point of using a thread? Using a process/fork is far safer in that
someone can't "accidentally" decide to alter mutable state in the future.
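That message-handler style works unchanged with processes, e.g. (a minimal
sketch of the "transfer" example; the account names and balances are made
up):

```python
from multiprocessing import Process, Queue

def account_worker(requests, replies):
    # The balances live only inside this process; the only way to touch
    # them is by sending a message, so no locking is needed.
    balances = {"a": 100, "b": 0}
    for src, dst, amount in iter(requests.get, None):
        balances[src] -= amount
        balances[dst] += amount
        replies.put(dict(balances))

if __name__ == "__main__":
    requests, replies = Queue(), Queue()
    p = Process(target=account_worker, args=(requests, replies))
    p.start()
    requests.put(("a", "b", 30))
    print(replies.get())  # {'a': 70, 'b': 30}
    requests.put(None)    # sentinel shuts the worker down
    p.join()
```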


> You might like this:
>
> http://jlouisramblings.blogspot.com/2012/08/getting-
> 25-megalines-of-code-to-behave.html


Thanks for this, I'll take a look.

Cheers


>
> --
> https://mail.python.org/mailman/listinfo/python-list
>
-- 
https://mail.python.org/mailman/listinfo/python-list


Re: Future of Pypy?

2015-02-22 Thread Ryan Stuart
On Mon Feb 23 2015 at 12:05:46 PM Paul Rubin 
wrote:

> I don't see what the big deal is.  I hear tons of horror stories about
> threads and I believe them, but the thing is, they almost always revolve
> around acquiring and releasing locks in the wrong order, forgetting to
> lock things, stuff like that.  So I've generally respected the terror
> and avoided programming in that style, staying with a message passing
> style that may take an efficiency hit but seems to avoid almost all
> those problems.  TM also helps with lock hazards and it's a beautiful
> idea--I just haven't had to use it yet.  The Python IRC channel seems to
> rage against threads and promote Twisted for concurrency, but Twisted
> has always reminded me of Java.  I use threads in Python all the time
> and haven't gotten bitten yet.
>
>
Many people have written at length about why it's bad. The most recent
example I have come across is here ->
https://glyph.twistedmatrix.com/2014/02/unyielding.html

It's not a specific Python problem. I must be in the limited crowd that
believes that the GIL is a *feature* of Python. Then again, maybe it isn't
so limited since the GIL-less python implementations haven't really taken
off.

I have yet to come across a scenario I couldn't solve with either
Processes, NumPy or event loops. Yes, when using processes, the passing of
data can be annoying sometimes. But that is far less annoying than trying
to debug something that shares state across threads.

It's great that you haven't been bitten yet. But the evidence seems to
suggest that either you *will* be bitten at some point or you already have
been and just don't know it.

Cheers


> > Writing multithreaded code is *hard*. It is not a programming model
> > which comes naturally to most human beings. Very few programs are
> > inherently parallelizable, although many programs have *parts* which
> > can be successfully parallelized.
>
> Parallel algorithms are complicated and specialized but tons of problems
> amount to "do the same thing with N different pieces of data", so-called
> embarassingly parallel.  The model is you have a bunch of worker threads
> reading off a queue and processing the items concurrently.  Sometimes
> separate processes works equally well, other times it's good to have
> some shared data in memory instead of communicating through sockets.  If
> the data is mutable then have one thread own it and access it only with
> message passing, Erlang style.  If it's immutable after initialization
> (protect it with a condition variable til initialization finishes) then
> you can have read-only access from anywhere.
>
> > if you're running single-thread code on a single-core machine and
> > still complaining about the GIL, you have no clue.
>
> Even the Raspberry Pi has 4 cores now, and fancy smartphones have had
> them for years.  Single core cpu's are specialized and/or historical.
>
> > for some programs, the simplest way to speed it up is to vectorize the
> > core parts of your code by using numpy.  No threads needed.
>
> Nice for numerical codes, not so hot for anything else.
>
> > Where are the people flocking to use Jython and IronPython?
>
> Shrug, who knows, those implementations were pretty deficient from what
> heard.
>
> > For removal of the GIL to really make a difference: ...
> > - you must be performing a task which is parallelizable and not
> inherently
> > sequential (no point using multiple threads if each thread spends all its
> > time waiting for the previous thread);
>
> That's most things involving concurrency these days.
>
> > - the task must be one that moving to some other multi-processing model
> > (such as greenlets, multiprocess, etc.) is infeasible;
>
> I don't understand this--there can be multiple ways to solve a problem.
>
> > - your threading bottleneck must be primarily CPU-bound, not I/O bound
> > (CPython's threads are already very effective at parallelising I/O
> tasks);
>
> If your concurrent program's workload makes it cpu-bound even 1% of the
> time, then you gain something by having it use your extra cores at those
> moments, instead of having those cores always do nothing.
>
> > - and you must be using libraries and tools which prevent you moving to
> > Jython or IronPython or some other alternative.
>
> I don't get this at all.  Why should I not want Python to have the same
> capabilities?
> --
> https://mail.python.org/mailman/listinfo/python-list
>
-- 
https://mail.python.org/mailman/listinfo/python-list


Re: TypeError: list indices must be integers, not tuple

2015-02-10 Thread Ryan Stuart
Hi,

There are a lot of issues with this code. First, ("1") is just the string
"1" in redundant parentheses (it is not a tuple, and not an integer), so
setting fav to it probably isn't what you want. What you probably mean is:

if restraunt == "Pizza":
    fav = 1

Second, when you try to look up items in Menu, you are using the wrong
kind of value for fav. List indices must be integers (just like the error
points out); strings like "1" aren't.

Thirdly, Menu is a list of lists. To fetch "Barbeque pizza" from Menu, you
need to do Menu[0][0], not Menu[0, 0].

Finally, Python comes with a style guide, PEP 8. Your code violates that
guide in many places. It might also be worth working through the official
Python Tutorial.
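Putting those fixes together, a corrected sketch (written for Python 3,
with raw_input() replaced by a fixed choice so it runs as-is; note that
randint(0, 7) could also index past the end of a 3-item menu, so the range
needs shrinking too):

```python
import random

rand_num = random.randint(0, 2)  # each menu has 3 items, so 0-2 inclusive
restraunt = "Pizza"              # stand-in for the raw_input() call

if restraunt == "Pizza":
    fav = 0                      # a plain int, usable as a list index
elif restraunt == "Chinease":
    fav = 1
elif restraunt == "Indian":
    fav = 2
else:
    fav = None
    print("Try using a capital letter, eg; 'Chinease'")

menu = [["Barbeque pizza", "Peparoni", "Hawain"],
        ["Curry", "Noodles", "Rice"],
        ["Tika Masala", "Special Rice", "Onion Bargees"]]

if fav is not None:
    print(menu[fav][rand_num])   # chained indexing, not menu[fav, rand_num]
```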

Cheers

On Tue Feb 10 2015 at 9:55:40 AM  wrote:

> import random
> RandomNum = random.randint(0,7)
> restraunt = raw_input("What's your favourite takeaway?Pizza, Chinease or
> Indian?")
> if restraunt == ("Pizza"):
> fav = ("1")
>
> elif restraunt == ("Chinease"):
> fav = ("2")
>
> elif restraunt == ("Indian"):
> fav = ("3")
>
> else:
> print("Try using a capital letter, eg; 'Chinease'")
>
> Menu = [["Barbeque pizza","Peparoni","Hawain"],["
> Curry","Noodles","Rice"],["Tika Masala","Special Rice","Onion Bargees"]]
>
> print Menu[fav,RandomNum]
>^
> TypeError: list indices must be integers, not tuple
>
> How do I set a variable to a random number then use it as a list indece,
> (I'm only a student in his first 6 months of using python)
> --
> https://mail.python.org/mailman/listinfo/python-list
>
-- 
https://mail.python.org/mailman/listinfo/python-list


Re: Profiler for long-running application

2015-02-09 Thread Ryan Stuart
Hi Asad,

Is there any reason why you can't just use profile/cProfile? In particular,
you could use the api of that module to save out the profile stats to an
external file with a unique name and then inspect them later with a tool
like snakeviz. The code to save
profile stats might look like the following:

import cProfile
import pstats

pr = cProfile.Profile()
pr.runcall(your_celery_task_without_async)  # the plain function, not .delay()
ps = pstats.Stats(pr)
ps.dump_stats("my_task.profile")

Obviously you need to call your celery task function directly, not via
Celery using delay() or any derivative. Alternatively, you could try
something like line_profiler by,
again, calling the task directly.

If, using this method, it turns out your actual task code runs quite
fast, then I'd suggest the majority of the time is being lost in
transferring the task to the Celery node.

Cheers

On Mon Feb 09 2015 at 5:20:43 AM Asad Dhamani  wrote:

> I have a Flask application where I run a specific task asynchronously
> using Celery. Its basically parsing some HTML and inserting data into a
> Postgres database(using SQLAlchemy). However, the task seems to be running
> very slowly, at 1 insert per second.
> I'd like to find out where the bottleneck is, and I've been looking for a
> good profiler that'd let me do this, however, I couldn't find anything. Any
> recommendations would be great.
> --
> https://mail.python.org/mailman/listinfo/python-list
>
-- 
https://mail.python.org/mailman/listinfo/python-list