OT: Accessibility: Jana Schroeder's Holman Prize Application

2021-05-10 Thread Jonathan Fine
Perhaps Off Topic, but for a good cause.

This year I met Jana Schroeder, a blind person forced to change jobs by 
the economic fallout of Covid. Her outsider's experience of computer 
coding training grew into a wish to make things better. She has applied 
for a Holman Prize ($25,000 over a year) to fund this work, and has set 
up a survey to reach, and get to know better, others with similar wishes.

One simple way for many of us to help is to volunteer as a sighted 
helper for a code and math variant of BeMyEyes.org. I encourage you to 
listen to Jana's pitch for a Holman Prize and, if you want to help, to 
complete the survey (whether you're blind or sighted, into code or math, 
young or old). I've learnt a lot about accessibility from Jana.

Jana Schroeder's Holman pitch (90 seconds):
https://www.youtube.com/watch?v=3ywl5d162vU

Jana Schroeder's survey (15 minutes):
https://tinyurl.com/blindcodersurvey

Finally, The Holman Prize:
https://holman.lighthouse-sf.org/

best regards

Jonathan
-- 
https://mail.python.org/mailman/listinfo/python-list


Re: A Python script to put CTAN into git (from DVDs)

2011-11-07 Thread Jonathan Fine

On 07/11/11 21:49, Jakub Narebski wrote:

[snip]


But now I understand that you are just building tree objects, and
creating references to them (with implicit ordering given by names,
I guess).  This is to be a start of further work, isn't it?


Yes, that's exactly the point, and my apologies if I was not clear enough.

I'll post again when I've finished the script and placed several years 
of DVDs into git.  Then the discussion will be more concrete: we have 
this tree, how do we make it more useful?


Thank you for your contributions, particularly telling me about gitpan.

--
Jonathan


Re: A Python script to put CTAN into git (from DVDs)

2011-11-07 Thread Jonathan Fine

On 06/11/11 20:28, Jakub Narebski wrote:


Note that for gitPAN each "distribution" (usually but not always
corresponding to a single Perl module) is in a separate repository.
The dependencies are handled by the CPAN / CPANPLUS / cpanm client
(i.e. during install).


Thank you for your interest, Jakub, and also for this information.  With 
TeX there's a difficulty which Perl, I think, does not have.  With TeX we 
process documents, which may demand specific versions of packages. 
LaTeX users are concerned that moving on to a later version will cause 
documents to break.



Putting a whole DVD (is it the "TeX Live" DVD, by the way?) into a single
repository would put quite a bit of stress on git; it was created for
software development (although admittedly of large projects like the Linux
kernel), not 4GB+ trees.


I'm impressed by how well git manages it.  It took about 15 minutes to 
build the 4GB tree, and it was disk speed rather than CPU which was the 
bottleneck.



Once you've done that, it is then possible and sensible to select
suitable interesting subsets, such as releases of a particular
package. Users could even define their own subsets, such as "all
resources needed to process this file, exactly as it processes on my
machine".


This could be handled using submodules, by having superrepository that
consist solely of references to other repositories by the way of
submodules... plus perhaps some administrativa files (like README for
whole CTAN, or search tool, or DVD install, etc.)

This could be the used to get for example contents of DVD from 2010.


We may be at cross purposes.  My first task is to get the DVD tree into 
git, performing necessary transformations such as expanding zip files 
along the way.  Breaking the content into submodules can, I believe, be 
done afterwards.


With DVDs from several years it could take several hours to load 
everything into git.  For myself, I'd like to do that once, more or less 
as a batch process, and then move on to the more interesting topics. 
Getting the DVD contents into git is already a significant piece of work.


Once done, I can then move on to what you're interested in, which is 
organising the material.  And I hope that others in the TeX community 
will get involved with that, because I'm not building this repository 
just for myself.



But even though submodules (cf. Subversion svn:external, Mercurial
forest extension, etc.) have been in Git for quite some time, they
don't have the best user interface.


In addition, many TeX users have a TeX DVD.  If they import it into a
git repository (using for example my script) then the update from 2011
to 2012 would require much less bandwidth.


???


A quick way to bring your TeX distribution up to date is to do a delta 
with a later distribution, and download the difference.  That's what git 
does, and it does it well.  So I'm keen to convert a TeX DVD into a git 
repository, and then differences can be downloaded.



Finally, I'd rather be working within git than a modified copy of the
ISO when doing the subsetting.  I'm pretty sure that I can manage to
pull the small repositories from the big git-CTAN repository.


No, you cannot.  It is all or nothing; there is no support for partial
_clone_ (yet), and it looks like it is a hard problem.

Nb. there is support for partial _checkout_, but this is something
different.


From what I know, I'm confident that I can achieve what I want using 
git.  I'm also confident that my approach is not closing off any 
possible approaches.  But if I'm wrong, you'll be able to say: I told you so.



Commit = tree + parent + metadata.


Actually, any number of parents, including none.  What metadata do I 
have to provide?  At this time nothing, I think, beyond that provided by 
the name of a reference (to the root of a tree).
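Jakub's equation can be made concrete.  Git names every object by the 
SHA-1 of its serialized form, and a commit serializes as a tree id, zero 
or more parent ids, and metadata.  Here is a minimal sketch in Python; 
the object layout is git's documented format, but the helper names are 
my own:

```python
import hashlib

def git_object_id(kind, body):
    # Git names every object by the SHA-1 of "<kind> <size>\0<body>".
    header = kind.encode() + b" " + str(len(body)).encode() + b"\0"
    return hashlib.sha1(header + body).hexdigest()

def commit_id(tree, parents, author, committer, message):
    # Commit = tree + any number of parents (none for a root commit)
    #          + metadata (author, committer, message).
    lines = [b"tree " + tree.encode()]
    for parent in parents:
        lines.append(b"parent " + parent.encode())
    lines.append(b"author " + author.encode())
    lines.append(b"committer " + committer.encode())
    body = b"\n".join(lines) + b"\n\n" + message.encode()
    return git_object_id("commit", body)

# Two well-known ids: the blob for "hello\n" and the empty tree.
assert git_object_id("blob", b"hello\n") == \
    "ce013625030ba8dba906f756967f9e9ca394464a"
assert git_object_id("tree", b"") == \
    "4b825dc642cb6eb9a060e54bf8d69288fbee4904"
```

A root commit simply has an empty parents list, which is why "any number 
of parents, including none" fits the model.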



I think you would very much want to have linear sequence of trees,
ordered via DAG of commits.  "Naked" trees are rather bad idea, I think.


As I recall the first 'commit' to the git repository for the Linux
kernel was just a tree, with a reference to that tree as a tag.  But
no commit.


That was a bad accident: there is a tag that points directly to the
tree of the _initial import_.  It is not something to copy.


Because git is a distributed version control system, anyone who wants to 
can create such a directed acyclic graph of commits.  And if it's useful 
I'll gladly add it to my copy of the repository.


best regards


Jonathan



Re: A Python script to put CTAN into git (from DVDs)

2011-11-06 Thread Jonathan Fine

On 06/11/11 16:42, Jakub Narebski wrote:

Jonathan Fine  writes:


Hi

This is to let you know that I'm writing (in Python) a script that
places the content of CTAN into a git repository.
  https://bitbucket.org/jfine/python-ctantools


I hope that you meant "repositories" (plural) here, one per tool,
rather than putting all of CTAN into single Git repository.


There are complex dependencies among LaTeX macro packages, and TeX is 
often distributed and installed from a DVD.  So it makes sense here to 
put *all* the content of a DVD into a repository.


Once you've done that, it is then possible and sensible to select 
suitable interesting subsets, such as releases of a particular package. 
Users could even define their own subsets, such as "all resources needed 
to process this file, exactly as it processes on my machine".


In addition, many TeX users have a TeX DVD.  If they import it into a 
git repository (using for example my script) then the update from 2011 
to 2012 would require much less bandwidth.


Finally, I'd rather be working within git that modified copy of the ISO 
when doing the subsetting.  I'm pretty sure that I can manage to pull 
the small repositories from the big git-CTAN repository.


But as I proceed, perhaps I'll change my mind (smile).


I'm working from the TeX Collection DVDs that are published each year
by the TeX user groups, which contain a snapshot of CTAN (about
100,000 files occupying 4Gb), which means I have to unzip folders and
do a few other things.


There is 'contrib/fast-import/import-zips.py' in the git.git repository.
If you are not using it, or its equivalent, it might be worth checking
out.


Well, I didn't know about that.  I took a look, and it doesn't do what I 
want.  I need to walk the tree (on a mounted ISO) and unpack some (but 
not all) zip files as I come across them.  For details see:


https://bitbucket.org/jfine/python-ctantools/src/tip/ctantools/filetools.py

In addition, I don't want to make a commit.  I just want to make a ref 
at the end of building the tree.  This is because I want the import of a 
TeX DVD to give effectively identical results for all users, and so any 
commit information would be effectively constant.
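That workflow can be sketched with git plumbing commands (assuming git 
is installed and the unpacked DVD contents are in the working directory 
of a repository; the ref name is made up):

```shell
# Stage the unpacked DVD contents and write the index as a tree object.
git add .
TREE=$(git write-tree)

# Point a ref directly at the tree, with no commit in between.  Every
# user who imports the same DVD gets the identical tree id, since no
# commit metadata (author, date, message) is involved.
git update-ref refs/ctan/texlive-2011 "$TREE"

# The ref resolves to a tree object, not a commit.
git cat-file -t "$(git rev-parse refs/ctan/texlive-2011)"
```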



CTAN is the Comprehensive TeX Archive Network.  CTAN keeps only the
latest version of each file, but old CTAN snapshots will provide many
earlier versions.


There was a similar effort to put CPAN (the Comprehensive _Perl_
Archive Network) into Git, hosting the repositories on GitHub[1],
under the name of gitPAN, see e.g.:

   "The gitPAN Import is Complete"
   http://perlisalive.com/articles/36

[1]: https://github.com/gitpan


This is really good to know!!!  Not only has this been done already, for 
similar reasons, but github is hosting it.  Life is easier when there is 
a good example to follow.



I'm working on putting old CTAN files into modern version
control. Martin Scharrer is working in the other direction.  He's
putting new files added to CTAN into Mercurial.
  http://ctanhg.scharrer-online.de/


Nb. thanks to tools such as git-hg and fast-import / fast-export
we have quite good interoperability and convertibility between
Git and Mercurial.

P.S. I'd point to the reposurgeon tool, which can be used to do fixups
after import, but it probably won't work on such a large (set of)
repositories.


Thank you for the pointer to reposurgeon.  My approach is a bit 
different.  First, get all the files into git, and then 'edit the tree' 
to create new trees.  And then commit worthwhile new trees.


As I recall the first 'commit' to the git repository for the Linux 
kernel was just a tree, with a reference to that tree as a tag.  But no 
commit.



P.P.S. Can you forward it to comp.text.tex?


Done.

--
Jonathan



A Python script to put CTAN into git (from DVDs)

2011-11-06 Thread Jonathan Fine

Hi

This is to let you know that I'm writing (in Python) a script that 
places the content of CTAN into a git repository.

https://bitbucket.org/jfine/python-ctantools

I'm working from the TeX Collection DVDs that are published each year by 
the TeX user groups, which contain a snapshot of CTAN (about 100,000 
files occupying 4Gb), which means I have to unzip folders and do a few 
other things.


CTAN is the Comprehensive TeX Archive Network.  CTAN keeps only the 
latest version of each file, but old CTAN snapshots will provide many 
earlier versions.


I'm working on putting old CTAN files into modern version control. 
Martin Scharrer is working in the other direction.  He's putting new 
files added to CTAN into Mercurial.

http://ctanhg.scharrer-online.de/

My script works already as a proof of concept, but needs more work (and 
documentation) before it becomes useful.  I've requested that follow up 
goes to comp.text.tex.


Longer-term goals are git as
* http://en.wikipedia.org/wiki/Content-addressable_storage
* a resource editing and linking system

If you didn't know, a git tree is much like an immutable JSON object, 
except that it does not have arrays or numbers.
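That analogy can be played out in a few lines.  The sketch below is a 
toy content-addressable store, not git's actual on-disk format: blobs 
are strings, and trees are JSON-object-like mappings from names to keys.

```python
import hashlib
import json

STORE = {}  # key (hex digest) -> stored object

def put_blob(text):
    # A blob is just a string; its key is a hash of its content.
    key = hashlib.sha1(b"blob:" + text.encode()).hexdigest()
    STORE[key] = text
    return key

def put_tree(entries):
    # A tree maps names to keys of blobs or other trees -- like a JSON
    # object whose values are only strings or further objects.  Sorting
    # the names gives a canonical form, so equal trees get equal keys.
    canonical = json.dumps(sorted(entries.items())).encode()
    key = hashlib.sha1(b"tree:" + canonical).hexdigest()
    STORE[key] = dict(entries)
    return key

readme = put_blob("Comprehensive TeX Archive Network")
macros = put_tree({"plain.tex": put_blob(r"\def\TeX{...}")})
root   = put_tree({"README": readme, "macros": macros})

# Content-addressable: identical content always gets the identical key.
assert put_tree({"README": readme, "macros": macros}) == root
```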


If my project interests you, reply to this message or contact me 
directly (or both).


--
Jonathan


A new syntax for writing tests

2010-08-04 Thread Jonathan Fine

Hi

I just discovered today a new syntax for writing tests.  The basic idea 
is to write a function that contains some statements, and run it via a 
decorator.  I wonder if anyone had seen this pattern before, and how you 
feel about it.  For myself, I quite like it.


Let's suppose we want to test this trivial (of course) class.
class Adder(object):

def __init__(self):
self.value = 0

def plus(self, delta):
self.value += delta

To test the class you need a runner.  In this case it is quite simple.

def runner(script, expect):
'''Create an adder, run script, expect value.'''

adder = Adder()
script(adder)
return adder.value

We can now create (and run if we wish) a test.  To do this we write

@testit(runner, 4)
def whatever(a):
'''Two plus two is four.'''

a.plus(2)
a.plus(2)

Depending on the exact value of the testit decorator (which in the end 
is up to you) we can store the test, or execute it immediately, or do 
something else.


The simplest implementation prints:
OK: Two plus two is four.
for this passing test, and
Fail: Two plus four is five.
  expect 5
  actual 6
for a test that fails.

Here is the testit decorator used to produce the above output:

def testit(runner, expect):
'''Test statements decorator.'''

def next(script):
actual = runner(script, expect)
if actual == expect:
print 'OK:', script.__doc__
else:
print 'Fail:', script.__doc__
print '  expect', expect
print '  actual', actual

return next


You can pick up this code, for at least the next 30 days, at
http://dpaste.com/hold/225056/

For me the key benefit is that writing the test is really easy.  Here's 
a test I wrote earlier today.


@testit(runner, '')
def whatever(tb):
tb.start('a', {'att': 'value'})
tb.start('b')
tb.end('b')
tb.end('a')

If the test has a set-up and tear-down, this can be handled in the 
runner, as can the test script raising an expected or unexpected exception.


--
Jonathan


Re: Is there any module/utility like 'rsync' in python

2010-06-15 Thread Jonathan Fine

hiral wrote:

Hi,

Is there any module/utility like 'rsync' in python.

Thank you in advance.


Not exactly what you asked for, but Mercurial provides a Python 
interface.  You might find this URL a good starting point:

   http://mercurial.selenic.com/wiki/MercurialApi

--
Jonathan


Re: Wanted: Python solution for ordering dependencies

2010-04-25 Thread Jonathan Fine

Eduardo Schettino wrote:

On Sun, Apr 25, 2010 at 11:44 PM, Jonathan Fine  wrote:

Eduardo Schettino wrote:

On Sun, Apr 25, 2010 at 4:53 AM, Jonathan Fine  wrote:

Hi

I'm hoping to avoid reinventing a wheel (or other rolling device).  I've
got
a number of dependencies and, if possible, I want to order them so that
each
item has its dependencies met before it is processed.

I think I could get what I want by writing and running a suitable
makefile,
but that seems to be such a kludge.

Does anyone know of an easily available Python solution?

http://pypi.python.org/pypi/doit


Thank you for this, Eduardo. However, all I require is a means of ordering
the items that respects the dependencies.  This rest I can, and pretty much
have to, manage myself.

So probably something more lightweight would suit me.


you just want a function?

def order_tasks(tasks):
ADDING, ADDED = 0, 1
status = {} # key task-name, value: ADDING, ADDED
task_order = []
def add_task(task_name):
if task_name in status:
# check whether task was already added
if status[task_name] == ADDED:
return
# detect cyclic/recursive dependencies
if status[task_name] == ADDING:
msg = "Cyclic/recursive dependencies for task %s"
raise Exception(msg % task_name)

status[task_name] = ADDING
# add dependencies first
for dependency in tasks[task_name]:
add_task(dependency)

# add itself
task_order.append(task_name)
status[task_name] = ADDED

for name in tasks.keys():
add_task(name)
return task_order

if __name__ == '__main__':
task_list = {'a':['b','c'],
 'b':['c'],
 'c':[]}
print order_tasks(task_list)



Yes, this is good, and pretty much what I'd have written if I had to do 
it myself.  Thank you, Eduardo.


But the links posted by Chris Rebert suggest that this solution can be 
quadratic in its running time, and give a Python implementation of 
Tarjan's linear solution.  Here are the links:

http://pypi.python.org/pypi/topsort/0.9
http://www.bitformation.com/art/python_toposort.html

I don't know if the quadratic running time is an issue for my purpose.
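With hindsight, Python's standard library later grew a linear-time 
answer: since Python 3.9, graphlib.TopologicalSorter accepts exactly 
the dependency mapping used above.

```python
from graphlib import TopologicalSorter  # Python 3.9+

task_list = {'a': ['b', 'c'],   # 'a' depends on 'b' and 'c'
             'b': ['c'],
             'c': []}

# static_order() yields each task after all of its dependencies,
# and raises CycleError on cyclic input.
order = list(TopologicalSorter(task_list).static_order())
assert order.index('c') < order.index('b') < order.index('a')
print(order)
```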

--
Jonathan



Re: ANN: Oktest-0.2.2 - a new-style testing library

2010-04-25 Thread Jonathan Fine

Makoto Kuwata wrote:

Hi,
I released Oktest 0.2.2.

http://packages.python.org/Oktest/
http://pypi.python.org/pypi/Oktest/


Overview


Oktest is a new-style testing library for Python.
::

from oktest import ok
ok (x) > 0                 # same as assert_(x > 0)
ok (s) == 'foo'            # same as assertEqual(s, 'foo')
ok (s) != 'foo'            # same as assertNotEqual(s, 'foo')
ok (f).raises(ValueError)  # same as assertRaises(ValueError, f)
ok (u'foo').is_a(unicode)  # same as assert_(isinstance(u'foo', unicode))
not_ok (u'foo').is_a(int)  # same as assert_(not isinstance(u'foo', int))
ok ('A.txt').is_file() # same as assert_(os.path.isfile('A.txt'))
not_ok ('A.txt').is_dir()  # same as assert_(not os.path.isdir('A.txt'))

You can use ok() instead of 'assertXxx()' in unittest.

Oktest requires Python 2.3 or later. Oktest is ready for Python 3.

NOTICE!! Oktest is a young project and specification may change in the future.

See http://packages.python.org/Oktest/ for details.


This reminds me a bit of my own in-progress work
http://bitbucket.org/jfine/python-testutil/

Here's an example of how it works:

>>> def plus(a, b): return a + b
>>> def minus(a, b): return a - b
>>> def square(a): return a * a

>>> x = TestScript(
... plus,
... (
... f(2, 2) == 5,
... )
... )

>>> x.run()
[WrongValue(5, 4)]

>>> y = TestScript(
... dict(f=plus, g=square, h=map),
... (
... f(2, 2) == 5,
... h(g, (1, 2, 3)) == [1, 3, 6],
... )
... )

>>> y.run()
[WrongValue(5, 4), WrongValue([1, 3, 6], [1, 4, 9])]


But it's not yet ready for use.
--
Jonathan


Re: Wanted: Python solution for ordering dependencies

2010-04-25 Thread Jonathan Fine

Eduardo Schettino wrote:

On Sun, Apr 25, 2010 at 4:53 AM, Jonathan Fine  wrote:

Hi

I'm hoping to avoid reinventing a wheel (or other rolling device).  I've got
a number of dependencies and, if possible, I want to order them so that each
item has its dependencies met before it is processed.

I think I could get what I want by writing and running a suitable makefile,
but that seems to be such a kludge.

Does anyone know of an easily available Python solution?


http://pypi.python.org/pypi/doit



Thank you for this, Eduardo. However, all I require is a means of 
ordering the items that respects the dependencies.  This rest I can, and 
pretty much have to, manage myself.


So probably something more lightweight would suit me.

--
Jonathan


Re: Wanted: Python solution for ordering dependencies

2010-04-25 Thread Jonathan Fine

Makoto Kuwata wrote:

On Sun, Apr 25, 2010 at 5:53 AM, Jonathan Fine  wrote:

I'm hoping to avoid reinventing a wheel (or other rolling device).  I've got
a number of dependencies and, if possible, I want to order them so that each
item has its dependencies met before it is processed.

I think I could get what I want by writing and running a suitable makefile,
but that seems to be such a kludge.

Does anyone know of an easily available Python solution?


If you are looking for alternatives of Make or Ant, try pyKook.
pyKook is a pure-Python tool similar to Make, Ant, or Rake.
http://www.kuwata-lab.com/kook/pykook-users-guide.html
http://pypi.python.org/pypi/Kook/


Thank you for this, Makoto. However, all I require is a means of 
ordering the items that respects the dependencies.  This rest I can, and 
pretty much have to, manage myself.


So probably something more lightweight would suit me.

--
Jonathan


Re: Wanted: Python solution for ordering dependencies

2010-04-25 Thread Jonathan Fine

Aahz wrote:

In article ,
Jonathan Fine   wrote:
I'm hoping to avoid reinventing a wheel (or other rolling device).  I've 
got a number of dependencies and, if possible, I want to order them so 
that each item has its dependencies met before it is processed.


I think I could get what I want by writing and running a suitable 
makefile, but that seems to be such a kludge.


scons?


Thanks for the pointer.  Looks interesting, but it may be a bit 
heavyweight for what I need.


--
Jonathan


Re: Wanted: Python solution for ordering dependencies

2010-04-25 Thread Jonathan Fine

Chris Rebert wrote:

On Sat, Apr 24, 2010 at 1:53 PM, Jonathan Fine  wrote:

Hi

I'm hoping to avoid reinventing a wheel (or other rolling device).  I've got
a number of dependencies and, if possible, I want to order them so that each
item has its dependencies met before it is processed.

I think I could get what I want by writing and running a suitable makefile,
but that seems to be such a kludge.

Does anyone know of an easily available Python solution?


http://pypi.python.org/pypi/topsort/0.9
http://www.bitformation.com/art/python_toposort.html


Thanks Chris. Most helpful.  I think I'll use that code (which will save 
me at least half a day, and my own solution wouldn't have run in linear 
time anyway).


--
Jonathan


Wanted: Python solution for ordering dependencies

2010-04-24 Thread Jonathan Fine

Hi

I'm hoping to avoid reinventing a wheel (or other rolling device).  I've 
got a number of dependencies and, if possible, I want to order them so 
that each item has its dependencies met before it is processed.


I think I could get what I want by writing and running a suitable 
makefile, but that seems to be such a kludge.


Does anyone know of an easily available Python solution?

--
Jonathan


Re: Q: We have *args and **kwargs. Woud ***allargs be useful?

2010-04-01 Thread Jonathan Fine

Jon Clements wrote:


I'm not sure this'll catch on, it'll be interesting to see other
comments.
However, I believe you can get the behaviour you desire something
like:

import inspect

class AllArgs(object):
def __init__(self, func):
self._func = func
self._spec = inspect.getargspec(func)
self._nposargs = len(self._spec.args)
def __call__(self, *args, **kwdargs):
self._func.func_globals['Args'] = (args[self._nposargs:],
kwdargs)
return self._func(*args[:self._nposargs])

@AllArgs
def test():
print Args

@AllArgs
def test2(a, b):
print a, b, Args

test(1, 2, 3, 4, 5, a=3, b=5)
test2(1, 2, 3, 4, 5, c=7)

Done quickly, probably buggy, but does provide 'Args', but without
further work
swallows any *'s and **'s and might ignore defaults (hideously
untested)


Thank you for your interest, Jon.

According to http://docs.python.org/library/inspect.html we have that 
func_globals is the global namespace in which this function was defined.


Hence we have
>>> def f(): pass
...
>>> f.func_globals is globals()
True
>>> f.func_globals ['x'] = 3
>>> x
3
>>>

I don't yet understand what your code is intended to do, but I'm fairly 
sure you're not wishing to 'monkey-patch' the global namespace of a module.


--
Jonathan


Q: We have *args and **kwargs. Woud ***allargs be useful?

2010-04-01 Thread Jonathan Fine

The idioms
def f(*args, **kwargs):
# Do something.
and
args = (1, 2, 3)
kwargs = dict(a=4, b=5)
g(*args, **kwargs)
are often useful in Python.

I'm finding myself picking up /all/ the arguments and storing them for 
later use (as part of a testing framework).  So for me it would be nice 
if I could write

def f(***allargs):
 args, kwargs = allargs
 # Continue as before.

However, if we do this then 'args' in '*args' is misleading.  So I'll 
use 'sargs' (for sequence arguments) instead.


I can now write, for a suitable class Args
args = Args(1, 2, 3, a=4, b=5)
g(***args)   # Same as before.
sargs, kwargs = args
g(*sargs, **kwargs)  # Same as before.

Even better, now that Args is a class we can give it a method 'call' so that
args.call(g)
is equivalent to
g(***args)
which removes the need for the *** construct.

This reminds me of functools.partial except, of course, we've fixed all 
the arguments and left the passing of the function for later, whereas in 
partial we fix the function and some of the arguments.

http://docs.python.org/library/functools.html#functools.partial

My view are that
1.  Conceptually ***allargs is useful, but an Args class would be more 
useful (not that it need be either-or).


2.  If Args were built in, there could be performance benefits.

3.  It's clearer to write
def(*seqargs, **kwargs):
than
def(*args, **kwargs):

4.  When the Args class is used a lot, one might welcome
def(***args):
# Do something with args.
as a shortcut (and minor speedup) for
def(*seqargs, **kwargs):
args = Args(*seqargs, **kwargs)
# Do something with args.
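A hypothetical sketch of such an Args class (names as proposed above; 
no new *** syntax required):

```python
class Args(object):
    '''Capture all arguments of a call for later use.'''

    def __init__(self, *sargs, **kwargs):
        self.sargs = sargs     # sequence arguments
        self.kwargs = kwargs   # keyword arguments

    def __iter__(self):
        # Allows: sargs, kwargs = args
        return iter((self.sargs, self.kwargs))

    def call(self, fn):
        # Equivalent to the proposed g(***args).
        return fn(*self.sargs, **self.kwargs)

def g(x, y, z, a=0, b=0):
    return (x, y, z, a, b)

args = Args(1, 2, 3, a=4, b=5)
sargs, kwargs = args
assert g(*sargs, **kwargs) == args.call(g) == (1, 2, 3, 4, 5)
```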

I look forward to your comments on this.

--
Jonathan


Re: WANTED: A good name for the pair (args, kwargs)

2010-03-05 Thread Jonathan Fine

Jonathan Fine wrote:

Hi

We can call a function fn using
val = fn(*args, **kwargs)

I'm looking for a good name for the pair (args, kwargs).  Any suggestions?

Here's my use case:
def doit(fn, wibble, expect):
args, kwargs = wibble
actual = fn(*args, **kwargs)
if actual != expect:
# Something has gone wrong.
pass

This is part of a test runner.

For now I'll use argpair, but if anyone has a better idea, I'll use it.


Thank you, Tim, Paul, Steve and Aahz for your suggestions.

I'm now preferring:

def test_apply(object, argv, valv):

args, kwargs = argv
expect, exceptions = valv

# Inside try: except:
actual = object(*args, **kwargs)

# Check against expect, exceptions.

best regards


Jonathan





WANTED: A good name for the pair (args, kwargs)

2010-03-04 Thread Jonathan Fine

Hi

We can call a function fn using
val = fn(*args, **kwargs)

I'm looking for a good name for the pair (args, kwargs).  Any suggestions?

Here's my use case:
def doit(fn, wibble, expect):
args, kwargs = wibble
actual = fn(*args, **kwargs)
if actual != expect:
# Something has gone wrong.
pass

This is part of a test runner.

For now I'll use argpair, but if anyone has a better idea, I'll use it.

--
Jonathan


Re: WANTED: Regular expressions for breaking TeX/LaTeX document into tokens

2010-02-26 Thread Jonathan Fine

Wes James wrote:

On Wed, Feb 24, 2010 at 5:03 AM, Jonathan Fine  wrote:

Hi

Does anyone know of a collection of regular expressions that will break a
TeX/LaTeX document into tokens?  Assume that there is no verbatim or other
category code changes.


I'm not sure how this does it, but it might help:

http://plastex.sourceforge.net/plastex/sect0025.html


Thanks, Wes.  I'm already using PlasTeX

It handles changes of category codes, which makes it over the top for 
what I want to do.  In addition, it is a fairly large, complex 
application, and sadly it's not at all easy to use just a part of the 
code base.


There's been more discussion of this thread on comp.text.tex (which is 
where I set the follow-up to).


--
Jonathan


WANTED: Regular expressions for breaking TeX/LaTeX document into tokens

2010-02-24 Thread Jonathan Fine

Hi

Does anyone know of a collection of regular expressions that will break 
a TeX/LaTeX document into tokens?  Assume that there is no verbatim or 
other category code changes.
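Under that assumption (fixed category codes, no verbatim), a single 
alternation covers the main token kinds.  This is a rough sketch of my 
own, not a full TeX lexer:

```python
import re

# One alternation, tried in order: control words, control symbols,
# comments, TeX special characters, then runs of ordinary text.
TOKEN = re.compile(r"""
      \\[a-zA-Z]+ \s?     # control word; TeX skips one following space
    | \\.                 # control symbol, e.g. \% or \$
    | %[^\n]*             # comment, to end of line
    | [{}$&\#^_~]         # special category-code characters
    | [^\\{}$&\#^_~%]+    # run of ordinary characters
""", re.VERBOSE)

def tokenize(tex):
    return TOKEN.findall(tex)

tokens = tokenize(r"\textbf{hi} % note")
assert tokens[0] == '\\textbf'
assert tokens[1:4] == ['{', 'hi', '}']
assert tokens[-1] == '% note'
```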


Thanks


Jonathan


Re: Author of a Python Success Story Needs a Job!

2009-12-28 Thread Andrew Jonathan Fine
On Dec 28, 6:21 am, Steve Holden  wrote:
> Andrew Jonathan Fine wrote:
> > To whom it may concern,
>
> > I am the author of "Honeywell Avoids Documentation Costs with Python
> > and other Open Standards!"
>
> > I was laid off by Honeywell several months after I had made my
> > presentation in the 2005 Python Conference.
>
> > Since then I have been unable to find work either as a software
> > engineer or in any other capacity, even at service jobs.  I've sent
> > resumes and have been consistently ignored.
>
> > What I have been doing in the meantime is to be a full time homemaker
> > and parent.   As a hobby to keep me sane, I am attempting to retrain
> > part time at home as a jeweler and silversmith, and I sometimes used
> > Python for generating and manipulating code for CNC machines.
>
> > For my own peace of mind, however, I very much want to be doing
> > software work again because I feel so greatly ashamed to have
> > dedicated my life to learning and working in the field only to now
> > find myself on the scrap heap.
>
> > I find it highly ironic that my solution is still being advertised on
> > the Python web site but that I, the author of that solution, am now a
> > long term unemployment statistic.
>
> > Please, if there is anyone out there who needs a highly creative and
> > highly skilled software designer for new and completely original work,
> > then for the love of God I implore you to contact me.
>
> > A mind is a terrible thing to waste.
>
> > Sincerely,
>
> > Andrew Jonathan Fine
> > BEE, MSCS, 15 years experience, 5 in Python, the rest in C/C++,
> > about 1/3 embedded design and device drivers, and 2/3 in applications.
>
> Andrew:
>
> I am sorry to hear about your predicament. Unfortunately Holden Web
> isn't hiring, so I can't offer you a job, but I wanted to at least thank
> you for your support of Python and commiserate with you. These are
> difficult times to be looking for work in the USA.
>
> Do you follow the Python Job Board? It's a resource that not everyone
> knows about, where employers are allowed to post free for the benefit of
> Python community members who may be looking for a job.
>
>  http://www.python.org/community/jobs/
>
> Hope this helps.
>
> regards
>  Steve
> --
> Steve Holden           +1 571 484 6266   +1 800 494 3119
> PyCon is coming! Atlanta, Feb 2010  http://us.pycon.org/
> Holden Web LLC                http://www.holdenweb.com/
> UPCOMING EVENTS:        http://holdenweb.eventbrite.com/

Yes, I have been following that board for years.


Author of a Python Success Story Needs a Job!

2009-12-27 Thread Andrew Jonathan Fine
To whom it may concern,

I am the author of "Honeywell Avoids Documentation Costs with Python
and other Open Standards!"

I was laid off by Honeywell several months after I had made my
presentation in the 2005 Python Conference.

Since then I have been unable to find work either as a software
engineer or in any other capacity, even at service jobs.  I've sent
resumes and have been consistently ignored.

What I have been doing in the meantime is to be a full time homemaker
and parent.   As a hobby to keep me sane, I am attempting to retrain
part time at home as a jeweler and silversmith, and I sometimes used
Python for generating and manipulating code for CNC machines.

For my own peace of mind, however, I very much want to be doing
software work again because I feel so greatly ashamed to have
dedicated my life to learning and working in the field only to now
find myself on the scrap heap.

I find it highly ironic that my solution is still being advertised on
the Python web site but that I, the author of that solution, am now a
long term unemployment statistic.

Please, if there is anyone out there who needs a highly creative and
highly skilled software designer for new and completely original work,
then for the love of God I implore you to contact me.

A mind is a terrible thing to waste.

Sincerely,

Andrew Jonathan Fine
BEE, MSCS, 15 years experience, 5 in Python, the rest in C/C++,
about 1/3 embedded design and device drivers, and 2/3 in applications.
-- 
http://mail.python.org/mailman/listinfo/python-list


FYI: ConfigParser, ordered options, PEP 372 and OrderedDict + big thank you

2009-11-17 Thread Jonathan Fine

Hi

A big thanks to Armin Ronacher and Raymond Hettinger for
   PEP 372: Adding an ordered dictionary to collections

I'm using ConfigParser and I just assumed that the options in a section 
were returned in the order they were given.  In fact, I relied on this fact.

http://docs.python.org/library/configparser.html

And then when I came to test the code it went wrong.  After some anguish 
I looked at ConfigParser and saw I could pass it a dict_type.  So I 
could fix the problem myself by writing an OrderedDict.  Which I duly 
prototyped (in about an hour).


I then thought - maybe someone has been down this path before.  A Google 
search quickly led me to PEP 372 and hence to

http://docs.python.org/dev/py3k/library/configparser.html
which says
class configparser.RawConfigParser(defaults=None,
   dict_type=collections.OrderedDict)

So all that I want has been done already, and will be waiting for me 
when I move to Python3.
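In Python 3 the behaviour can be checked directly; a minimal sketch (the section and option names are made up for illustration):

```python
import configparser

# configparser in Python 3 preserves the order in which options appear,
# because its default dict_type is ordered.
cfg = configparser.RawConfigParser()
cfg.read_string("""
[section]
alpha = 1
gamma = 2
beta = 3
""")
print(cfg.options('section'))
```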


So a big thank you is in order.

--
Jonathan
--
http://mail.python.org/mailman/listinfo/python-list


Re: Using 'apply' as a decorator, to define constants

2009-08-22 Thread Jonathan Fine

Steven D'Aprano wrote:

On Sat, 22 Aug 2009 10:51:27 +0100, Jonathan Fine wrote:


Steven D'Aprano wrote:


There's a standard idiom for that, using the property() built-in, for
Python 2.6 or better.

Here's an example including a getter, setter, deleter and doc string,
with no namespace pollution, imports, or helper functions or deprecated
built-ins:

class ColourThing(object):
    @property
    def rgb(self):
        """Get and set the (red, green, blue) colours."""
        return (self.r, self.g, self.b)
    @rgb.setter
    def rgb(self, rgb):
        self.r, self.g, self.b = rgb
    @rgb.deleter
    def rgb(self):
        del self.r, self.g, self.b


Sorry, Steve, but I don't understand this.  In fact, I don't even see
how it can be made to work.


Nevertheless, it does work, and it's not even magic. It's described (very 
briefly) in the docstring for property: help(property) will show it to 
you. More detail is here:


http://docs.python.org/library/functions.html#property


My apologies.  I wasn't up to date with my Python versions:

| Changed in version 2.6: The getter, setter, and deleter
| attributes were added.

I was still thinking Python2.5 (or perhaps earlier?).  I still don't 
like it.  All those repetitions of 'rgb'.
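For the record, the quoted idiom does run as advertised under Python 2.6+ and 3: each @rgb.setter or @rgb.deleter call returns a new property object that rebinds the name rgb, so the repeated definitions accumulate accessors rather than clobbering one another. A small check:

```python
# Runnable check of the getter/setter/deleter idiom from the thread.
class ColourThing(object):
    @property
    def rgb(self):
        """Get and set the (red, green, blue) colours."""
        return (self.r, self.g, self.b)
    @rgb.setter
    def rgb(self, rgb):
        self.r, self.g, self.b = rgb
    @rgb.deleter
    def rgb(self):
        del self.r, self.g, self.b

t = ColourThing()
t.rgb = (10, 20, 30)
print(t.rgb)  # (10, 20, 30)
```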


--
Jonathan
--
http://mail.python.org/mailman/listinfo/python-list


Re: Using 'apply' as a decorator, to define constants

2009-08-22 Thread Jonathan Fine

Steven D'Aprano wrote:

There's a standard idiom for that, using the property() built-in, for 
Python 2.6 or better.


Here's an example including a getter, setter, deleter and doc string, 
with no namespace pollution, imports, or helper functions or deprecated 
built-ins:


class ColourThing(object):
    @property
    def rgb(self):
        """Get and set the (red, green, blue) colours."""
        return (self.r, self.g, self.b)
    @rgb.setter
    def rgb(self, rgb):
        self.r, self.g, self.b = rgb
    @rgb.deleter
    def rgb(self):
        del self.r, self.g, self.b



Sorry, Steve, but I don't understand this.  In fact, I don't even see 
how it can be made to work.


Unless an exception is raised,
@wibble
def wobble():
    pass
will make an assignment to wobble, namely the return value of wibble. 
So in your example above, there will be /three/ assignments to rgb. 
Unless you do some complicated introspection (and perhaps not even then) 
surely they will clobber each other.
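The assignment semantics in question can be seen with a toy decorator (the names wibble/wobble are the thread's placeholders):

```python
# A decorated def is exactly an assignment of the decorator's return
# value to the function's name.
def wibble(fn):
    return 'replaced by wibble'

@wibble
def wobble():
    pass

print(wobble)  # the string, not a function
```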


I still prefer:
@wibble.property
def rgb():
    '''Red Green Blue color settings (property)'''

    def fset(self, rgb):
        self.r, self.g, self.b = rgb
    return locals()

--
Jonathan
--
http://mail.python.org/mailman/listinfo/python-list


Re: Using 'apply' as a decorator, to define constants

2009-08-22 Thread Jonathan Fine

Jonathan Gardner wrote:

On Aug 21, 9:09 am, alex23  wrote:

On Aug 21, 11:36 pm, Jonathan Fine  wrote:

class ColourThing(object):
    @apply
    def rgb():
        def fset(self, rgb):
            self.r, self.g, self.b = rgb
        def fget(self):
            return (self.r, self.g, self.b)
        return property(**locals())



This is brilliant. I am going to use this more often. I've all but
given up on property() since defining "get_foo", "get_bar", etc... has
been a pain and polluted the namespace.



I think we can do better, with a little work.  And also solve the 
problem that 'apply' is no longer a builtin in Python3.


Here's my suggestion:
===
import wibble # Informs reader we're doing something special here.

class ColourThing(object):

    @wibble.property
    def rgb():
        '''This is the docstring for the property.'''
        def fset(self, rgb):
            self.r, self.g, self.b = rgb
        def fget(self):
            return (self.r, self.g, self.b)
        return locals()
===

And here's wibble.property()
===
_property = property  # Avoid collision.
def property(fn):
    return _property(doc=fn.__doc__, **fn())
===
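Putting the two pieces together gives a runnable Python 3 sketch; here the decorator is inlined and renamed make_property rather than living in a separate wibble module:

```python
_property = property  # avoid clobbering the builtin

def make_property(fn):
    # fn() returns a dict with fget/fset entries; the docstring rides along.
    return _property(doc=fn.__doc__, **fn())

class ColourThing(object):
    @make_property
    def rgb():
        '''This is the docstring for the property.'''
        def fset(self, rgb):
            self.r, self.g, self.b = rgb
        def fget(self):
            return (self.r, self.g, self.b)
        return locals()

c = ColourThing()
c.rgb = (1, 2, 3)
print(c.rgb, ColourThing.rgb.__doc__)
```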

We can add apply() to the wibble module.  By the way, 'wibble' is a 
placeholder to the real name.  Any suggestions?


--
Jonathan
--
http://mail.python.org/mailman/listinfo/python-list


Re: Using 'apply' as a decorator, to define constants

2009-08-21 Thread Jonathan Fine

alex23 wrote:


Unfortunately, apply() has been removed as a built-in in 3.x. I'm not
sure if it has been relocated to a module somewhere, there's no
mention of such in the docs.


The old use of apply()


You can save yourself the tidy up by using the same name for the
function & the label:

def tags():
value = []
# ...
return value
tags = tags()


I don't like that because there's no hint at
def tags():
that this is /not/ the value of tags.


Like all uses of decorators, it is simply syntactic sugar.  It allows
you to see, up front, what is going to happen.  I think, once I get used
to it, I'll get to like it.


The question is, is it really that useful, or is it just a slight
aesthetic variation? Given that apply(f, args, kwargs) is identical to
f(*args, **kwargs), it's understandable that's apply() isn't seen as
worth keeping in the language.


Yes, I agree with that, completely.


Why I've personally stopped using it: I've always had the impression
that decorators were intended to provide a convenient and obvious way
of augmenting functions. 


Yes, that was the intended use case.


Having one that automatically executes the
function at definition just runs counter to the behaviour I expect
from a decorator. 


I'd expect the name of the decorator to explain what is going on.  If 
apply() were a well known part of the language, that would be fine.



Especially when direct assignment... foo = foo() ...is a far
more direct and clear way of expressing exactly what is
happening.


Actually, I think the decorator approach is clearer.  But that's just my 
opinion, and not with the benefit of a lot of experience.



But that's all IMO, if you feel it makes your code cleaner and don't
plan on moving to 3.x any time soon (come on in! the water's great!),
go for it :)


Thank you for your comments, Alex.  And particularly for telling me that 
apply() is no longer a builtin for Python 3.


--
Jonathan
--
http://mail.python.org/mailman/listinfo/python-list


Using 'apply' as a decorator, to define constants

2009-08-21 Thread Jonathan Fine

Hi

It might seem odd to use 'apply' as a decorator, but it can make sense.

I want to write:
# Top level in module.
tags =  
where the list is most easily constructed using a function.

And so I write:
@apply
def tags():
    value = []
    # complicated code
    return value

And now 'tags' has the result of running the complicated code.
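Since apply() is gone in Python 3, the same idiom needs a one-line stand-in there; apply_ below is a made-up name, and the list contents stand in for the complicated code:

```python
def apply_(fn):
    # Python 3 stand-in for the removed apply() builtin, used as a decorator.
    return fn()

@apply_
def tags():
    value = []
    value.append('example')  # stands in for the complicated code
    return value

print(tags)  # the list, not the function
```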


Without using 'apply' as a decorator one alternative is
def tmp():
    value = []
    # complicated code
    return value
tags = tmp()
del tmp


Like all uses of decorators, it is simply syntactic sugar.  It allows 
you to see, up front, what is going to happen.  I think, once I get used 
to it, I'll get to like it.


The way to read
@apply
def name():
    # code
is that we are defining 'name' to be the return value of the effectively 
anonymous function that follows.


--
Jonathan
--
http://mail.python.org/mailman/listinfo/python-list


Re: Do anyone here use Python *embedded* in a database?

2009-08-05 Thread Jonathan Fine

Jon Clements wrote:

On 5 Aug, 13:17, Jonathan Fine  wrote:

Hi

I'm writing a talk that compares embed and extend, and wondered if
anyone here ever used the Python *embedded* in a database server.
 http://twistedmatrix.com/users/glyph/rant/extendit.html

Web frameworks (such as Django) use extend, to import an extension
module that makes a connection to a database.

If you have used embed, you might have consulted a page such as:
 http://www.postgresql.org/docs/8.3/interactive/plpython-funcs.html

--
Jonathan


Yup, have used plpythonu within postgres without too many problems.
Although I do recall once that using the CSV module to load and filter
external data, did consume a massive amount of memory. Even though the
entire procedure was iterator based -- never did work out if it was
postgres caching, or some other weird stuff. [That was about 2 years
ago though]


Thanks, Jon, for that.

--
Jonathan
--
http://mail.python.org/mailman/listinfo/python-list


Do anyone here use Python *embedded* in a database?

2009-08-05 Thread Jonathan Fine

Hi

I'm writing a talk that compares embed and extend, and wondered if 
anyone here ever used the Python *embedded* in a database server.

http://twistedmatrix.com/users/glyph/rant/extendit.html

Web frameworks (such as Django) use extend, to import an extension 
module that makes a connection to a database.


If you have used embed, you might have consulted a page such as:
http://www.postgresql.org/docs/8.3/interactive/plpython-funcs.html

--
Jonathan
--
http://mail.python.org/mailman/listinfo/python-list


Wanted: something more Pythonic than _winreg.

2008-10-10 Thread Jonathan Fine

Hello

I'm using the _winreg module to change Windows registry settings, but 
its rather low level, and I'd prefer to be working with something more 
Pythonic.


Does anyone have any recommendations?


Jonathan
--
http://mail.python.org/mailman/listinfo/python-list


Re: Generating test data from an XML Schema

2008-09-17 Thread Jonathan Fine

Mark Thomas wrote:


Has anyone seen anything that might help generate test data from a schema?


I'm unaware of anything in Python, but Eclipse can generate sample
documents from a schema:
http://help.eclipse.org/help32/index.jsp?topic=/org.eclipse.wst.xmleditor.doc.user/topics/tcrexxsd.html

As can XML IDEs such as Stylus Studio and XML Spy.


Mark - thank you for this.  The problem is that I'd like to sit quite 
close to the schema and manipulate the generation of the test data. 
(This is because a straight representation won't suit my needs.)


However, if I get the chance I'll look at these tools, in case they can 
help me.  Thank you for the suggestion.


--
Jonathan
--
http://mail.python.org/mailman/listinfo/python-list


Generating test data from an XML Schema

2008-09-17 Thread Jonathan Fine

Hello

I want to generate test data from an XML schema.  I've had a quick look 
at existing tools (such as minixsv and amara) but from what I've seen 
they don't seem to help.


It is of course easy to extract all the element names from a schema, but 
I want more than that.


Motivation:  I wish to create a subset of TeX/LaTeX that can be easily 
translated into XML that validates against an existing schema.  I also 
want the input TeX to be fairly easy to author (so a simple mechanical 
translation will not work).


A tool that provides a nice Python interface to navigating the schema 
would probably be quite helpful.


Has anyone seen anything that might help generate test data from a schema?

--
Jonathan
--
http://mail.python.org/mailman/listinfo/python-list


Re: Metatest 0.1.0

2007-09-19 Thread Jonathan Fine
Kay Schluehr wrote:
> On 19 Sep., 01:30, Jonathan Fine <[EMAIL PROTECTED]> wrote:
> 
> 
>>>there is no fundamental reason why it can't be separated from
>>>eeconsole.py.
>>
>>OK.  That might be a good idea.
> 
> 
> Ironically, I liked the idea of having more expressive assert
> statements - a discussion you brought up. But this requires either
> syntactical analysis in the general case ( EE and assert magic ) some
> particular API ( Ben Finney ) or a somewhat restricted, but fully
> embedded, domain specific language ( Metatest ).
> 
> Here is what I got after an hour of hacking and filtering assert.
> Additional remarks are in line comments:



Thank you for doing this coding, and providing an example of its use.

It seems to me that /writing the tests/ and /running the tests/ are two 
distinct, although linked, matters.  When programming, we are interested 
in running the tests.  When designing, we are interested in writing the 
tests.  And when looking at someone else's modules, we are interested in 
/reading the tests/.

Both Metatest and EasyExtend show that we have some freedom in how we 
choose to write our tests.

What I am interested in doing is developing and promoting a /language 
for writing tests/.  This has both a formal side and also conventions 
and examples of its use.  We would, of course, like that language to be 
Pythonic (if not exactly Python, although that would be a distinct 
advantage).

I think we can do this without having to think too much about 
implementation (although it would be useful to have experience of using 
the language).

I also think that some sort of 'filter' between the user and the Python 
commmand line would be useful.  GNU readline is a simple, and effective, 
example of the sort of thing I have in mind.

Thank you for discussing this with me, Kay and Ben.

-- 
Jonathan

-- 
http://mail.python.org/mailman/listinfo/python-list


Re: [ANN] Metatest 0.1.0

2007-09-18 Thread Jonathan Fine
Ben Finney wrote:
> [Jonathan, please don't send me copies of messages sent to the
> discussion thread. I follow comp.lang.python via a non-mail interface,
> and it's irritating to get unwanted copies of messages via email.]

[Thank you for letting me know your preference.  For myself, I often 
appreciate it when people send me a copy directly.]

>>The line
>> plus(2, '', _ex=TypeError)
>>causes something to be recorded,



> That's confusing, then, for two reasons:
> 
> It looks like '_ex' is an argument to the 'plus' function, which
> otherwise (probably by design) looks exactly like a call to the 'plus'
> function the programmer is testing. 

Actually, 'plus' above is an instance of a class (which for now we will 
call PyUnknown) that has a __call__ method.  We know this because we 
imported it from metatest.py.mymod.

It is, if you like, a /test surrogate/ or /instrumented wrapper/ for the 
function 'plus' in mymod.  And as such it has different properties.  For 
example, the '_ex' parameter has a special significance.

> Since this is, instead, an
> assertion *about* that function, it is misleading to see it as an
> argument *to* the function.

Although, by design, it looks like an argument to 'plus' in mymod, it is 
  as I said an argument to 'plus' in metatest.py.mymod, which is 
something completely different, namely a PyUnknown object.

We can think of test-first programming as
1.  Stating the problem to be solved.
2.  Solving that problem.

In mathematics, we often use unknowns when stating problems.  We can 
think of a PyUnknown as being analogous, in programming, to the unknowns 
we use in mathematics.

However, if the difference confuses one (and it can in some situations), 
then instead do
 from metatest.py.mymod import plus as mt_plus
and then the confusing line becomes
 mt_plus(2, '', _ex=TypeError)
which I think you will find much clearer.

> It uses the "leading-underscore" convention which means "this is not
> part of the public interface, and you'd better know what you're doing
> if you use this externally".

This convention is exactly that, a convention.  Conventions allow us to 
communicate efficiently, without spelling everything out.  Implicit in 
metatest are some other conventions or the like.  Oh, and this use of 
metatest does conform to the convention, and extra knowledge is required 
to use leading underscore parameters.

>>Finally, if you can think of a better way of saying, in Python, "The
>>function call plus(2, '') raises a TypeError", please let me know,
>>and I'll consider using it in the next version of Metatest.
>  
> I would think an explicit function call that says what it's doing
> would be better. The unittest module implements this as:
> 
> self.failUnlessRaises(TypeError, plus, 2, '')

> which has the benefit of being explicit about what it's doing.

Well, you did not tell me what self is (although I guessed it is an 
instance of a subclass of unittest.TestCase).  Nor did you tell me that 
this statement is part of a class method.

To my eye, your example puts the focus on failUnlessRaises and on 
TypeError.  I think the metatest way puts an equal focus on calling 
plus(2, '') and the result of this call.

I give a comparison of three ways of writing tests in my slides:
http://metatest.sourceforge.net/doc/pyconuk2007/metatest.html#slide13

To summarise:  both metatest and unittest require the user to know 
something.  Once that something is known, the choices for the crucial 
test line are
 plus(2, '', _ex=TypeError)
 self.failUnlessRaises(TypeError, plus, 2, '')

I hope you now find the first option less confusing.  (It certainly is 
shorter.)

-- 
Jonathan

-- 
http://mail.python.org/mailman/listinfo/python-list


Re: [ANN] Metatest 0.1.0

2007-09-18 Thread Jonathan Fine
Ben Finney wrote:
> Jonathan Fine <[EMAIL PROTECTED]> writes:
> 
>>Here's how to write some tests using Metatest. We can think of the
>>tests as an executable specification.
>>
>>from metatest.py.mymod import plus, Point
>>
>># Function plus adds two numbers.
>>plus(2, 2) == 4
>>plus(2, '', _ex=TypeError)
> 
> This second example seems counterintuitive. Is '_ex' part of the
> public interface? If so, why does it follow the convention for "not
> part of the public interface" (i.e. it is named with an underscore)?

Hello Ben

(Well, I'm glad you seem to find the first example intuitive.)

No, the function we are testing here is
 def plus(a, b):
     return a + b

However, the line
plus(2, '', _ex=TypeError)
refers only indirectly to the function plus, to be imported from mymod.

Read again the line
from metatest.py.mymod import plus, Point

We do some metapath magic and some __method__ tricks to ensure that here 
plus is what I have called a 'stub object', although 'unknown' would be 
a better term.  See
http://metatest.sourceforge.net/doc/pyconuk2007/metatest.html#slide11
http://www.python.org/dev/peps/pep-0302/

The line
  plus(2, '', _ex=TypeError)
causes something to be recorded, and when the test is run the _ex 
argument is filtered off, and the remaining arguments passed to the plus 
function, as imported from mymod.

This is done by the function split_kwargs in the module
http://metatest.cvs.sourceforge.net/metatest/metatest/py/metatest/player.py?view=markup

Hope this helps.  If not, maybe try downloading and running it.

Finally, if you can think of a better way of saying, in Python, "The 
function call plus(2, '') raises a TypeError", please let me know, and 
I'll consider using it in the next version of Metatest.
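The filtering that split_kwargs performs can be sketched in a few lines; the helper names below are illustrative, not Metatest's real internals:

```python
def split_kwargs(kwargs):
    # Separate underscore-prefixed test directives (such as _ex) from
    # the arguments destined for the function under test.
    test = {k: v for k, v in kwargs.items() if k.startswith('_')}
    call = {k: v for k, v in kwargs.items() if not k.startswith('_')}
    return test, call

def check(fn, *args, **kwargs):
    # Run one recorded test: pass if the expected exception (if any) occurs.
    test, call = split_kwargs(kwargs)
    expected_ex = test.get('_ex')
    try:
        fn(*args, **call)
    except Exception as ex:
        return expected_ex is not None and isinstance(ex, expected_ex)
    return expected_ex is None

def plus(a, b):
    return a + b

print(check(plus, 2, '', _ex=TypeError), check(plus, 2, 2))
```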

-- 
Jonathan




-- 
http://mail.python.org/mailman/listinfo/python-list


Re: Metatest 0.1.0

2007-09-18 Thread Jonathan Fine
Kay Schluehr wrote:

>>Sounds interesting.  Is this code, or examples of its use, available?
> 
> 
> Sure, it's part of EasyExtend. See also www.fiber-space.de

OK.  So the ULR for the documentation of consoletest is:
http://www.fiber-space.de/EasyExtend/doc/consoletest/consoletest.html

It has a recorder and a player.  As does Metatest (developed later and 
independently).  I think that is a good concept.  Other frameworks use
   def test_something(...):
       # assertions go here
as their recorder.  Such a recorder cannot do very much at all.

> Checkout the documentation for consoletest. I guess in the current
> release recording and replaying can't be done in the same
> run. I've got a corrected version on my disk but I haven't uploaded it
> yet or corrected the docs.



> there is no fundamental reason why it can't be separated from
> eeconsole.py.

OK.  That might be a good idea.



> How does metatest analyze the tested expression? Lets say I try to
> check this expression
> 
plus( f(1) == True, f(2) != False) == plus( f(2) == True, f(1) != False)
>>>

This won't work.  In fact, even though
 plus(2, 2) == 4
works
 4 == plus(2, 2)
won't.  (Not so odd.  We have (a+b) != (b+a) when a and b are distinct 
strings.  Metatest works by overloading the == or __eq__ operator.)

However, in practice I don't think this is a problem.  And if it is, 
then there should be a nice solution.  I hope that the tests we want to 
write are the same as the ones Metatest allows us to write.

My goal is to find the simplest way, using Python syntax, to express 
or state the test we wish to run, and then to code Metatest to give the 
required meaning to the statement.
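The __eq__ overloading mentioned above can be sketched with a toy call-recording proxy; the class and method names here are illustrative, not Metatest's real internals:

```python
class Unknown(object):
    # Calling the proxy stores the arguments; __eq__ records an expected
    # value instead of comparing, which is why 4 == plus(2, 2) would
    # bypass the recorder.
    def __init__(self, fn):
        self.fn = fn
        self.tests = []
    def __call__(self, *args):
        self._last = args
        return self
    def __eq__(self, expected):
        self.tests.append((self._last, expected))
        return NotImplemented  # we are recording, not comparing
    def run(self):
        # Replay the recorded calls against the real function.
        return [self.fn(*args) == expected for args, expected in self.tests]

plus = Unknown(lambda a, b: a + b)
plus(2, 2) == 4   # recorded; passes on replay
plus(2, 3) == 6   # recorded; fails on replay
print(plus.run())  # [True, False]
```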



-- 
Jonathan


-- 
http://mail.python.org/mailman/listinfo/python-list


Re: can Python be useful as functional? (off topic)

2007-09-18 Thread Jonathan Fine
Steve Holden wrote:

> You remind me of the conversation between the philosopher and an 
> attractive lady whom he was seated next to at dinner. He asked her if 
> she would sleep with him for a million dollars, to which she readily 
> agreed. So he followed this by asking her if she'd sleep with him for a 
> dollar. She replied: "No. Do you take me for a prostitutte?", to which 
> his riposte was "We have already established that fact, and are now 
> merely haggling about the price".

I've seen this before, and it is witty.

However, it is perhaps unfair towards the woman.  The man, after all, is 
someone who has offered a woman money in return for sex.

The whole story reads differently if we replace 'philosopher' by 'man' 
and 'attractive lady' by 'woman'.

-- 
Jonathan




-- 
http://mail.python.org/mailman/listinfo/python-list


Re: cvs module

2007-09-18 Thread Jonathan Fine
Tim Arnold wrote:
> Hi, I need to do some scripting that interacts with CVS. I've been just 
> doing system calls and parsing the output to figure out what's going on, but 
> it would be nice to deal with CVS directly.
> 
> Does anyone know of a python module I can use to interface with CVS?
> thanks,

Hello Tim

Not exactly what you asked for, but I've integrated Tortoise CVS with 
WinEdt.  This gives users menu commands in WinEdt for invoking 
Tortoise.  Works well for my users, but might not meet your need.

Have you looked at
 http://rhaptos.org/downloads/python/pycvs/

Or for Subversion
 http://pysvn.tigris.org/

If you use Subversion, you call then also use Trac (which is written in 
Python)
 http://trac.edgewall.org/

Please let us know how you get on.

-- 
Jonathan



-- 
http://mail.python.org/mailman/listinfo/python-list


Re: Metatest 0.1.0

2007-09-18 Thread Jonathan Fine
Kay Schluehr wrote:

>> http://metatest.sourceforge.net/doc/pyconuk2007/metatest.html

> From the HTML slides:
> 
>Assertion tests are easy to write but report and run poorly.
> 
> I tend to think this is a prejudice that leads to ever more ways to
> write tests, perform test discoveries, invent test frameworks etc.
> 
> When I thought about organizing my tests for EasyExtend I was seeking
> for a strategy being adapted to my own behaviour. The very first step
> I made when creating a new language, modifying a grammar rule,
> changing the Transformer etc. was starting an interactive shell and
> typing some commands. 

Yes, Python is really good for that.  I do it a lot also.

> This is just checking out or smoke testing my
> application and I wanted to reproduce it. So I recorded the
> interactive shell session and replayed it as another shell session. I
> enabled to set breakpoints in the logged output and implemented a
> command for proceeding the replay.

This is similar, I think, to what Metatest does.  See 
http://metatest.sourceforge.net/doc/pyconuk2007/metatest.html#slide5

> Then I discovered I actually wrote an interactive test runner. I don't
> have to care for all the exceptions being thrown in the session and
> don't care for cleanups but just assign parts of the session as tests
> using assert for getting an overview at the end. The only additonal
> command I implemented was raises to capture the side-effect of raising
> an exception.

Sounds interesting.  Is this code, or examples of its use, available?

> So I'm going to argue here that there isn't anything particular about
> writing/coding a test ( planning tests, specifying tests, reviewing a
> testspecification etc. are another issue ). Instead you can keep a
> seemingly unrelated practice and turn it into a test by a tiny
> signification.

I'm all in favour of making tests easier to write, easier to run, and 
the outputs easier to understand.

I've submitted a feature request for command line use of Metatest 
(something I've been thinking of a while):
http://sourceforge.net/tracker/index.php?func=detail&aid=1797187&group_id=204046&atid=988038

Here's how I'd like the feature to look (from the above URL):
===
Python 2.4.1a0 (#2, Feb 9 2005, 12:50:04)
[GCC 3.3.5 (Debian 1:3.3.5-8)] on linux2
Type "help", "copyright", "credits" or "license" for more information.
 >>> from metatest.py.mymod import plus
 >>> import metatest
 >>> plus(2, 3) == 6

 >>> metatest.immediate = True # New command
 >>> plus(2, 3) == 6
TestError: Expected '6', got '5'
 >>>
===

OK.  So now here's a problem.  We can create stuff like the above that 
states clearly (I hope) what is required.  How can we write a test for 
this sort of behaviour?  So that's another feature request.

While I'm in favour of using Metatest to write tests for Metatest 
(eating your own dogfood), I'm more interested in real world examples. 
But I've added second feature request, that Metatest be able to test 
Metatest.
http://sourceforge.net/tracker/index.php?func=detail&aid=1797202&group_id=204046&atid=988038

-- 
Jonathan

-- 
http://mail.python.org/mailman/listinfo/python-list


[ANN] Metatest 0.1.0

2007-09-18 Thread Jonathan Fine
Hello

This announcement also appears on the Metatest web site 
http://metatest.sourceforge.net

===
*** Metatest - a Python test framework

Metatest is a simple and elegant Python framework for writing tests.

Metatest is mostly about writing tests and by design is not tied to any 
particular test runner. Hence the name.

Here's how to write some tests using Metatest. We can think of the tests 
as an executable specification.

 from metatest.py.mymod import plus, Point

 # Function plus adds two numbers.
 plus(2, 2) == 4
 plus(2, '', _ex=TypeError)

 # Class Point represent a point in the plane.
 p = Point(2, 5)
 p.x == 2
 p.y == 5
 p.area == 10

And here's one way to run them.

 if __name__ == '__main__':
     import metatest
     metatest.run()

It's not hard to write an adapter that will run these tests in a 
different test runner framework.

We gave a talk about Metatest at PyCon UK 2007 and here are the slides 
(HTML).
 http://www.pyconuk.org
 http://metatest.sourceforge.net/doc/pyconuk2007/metatest.html

Please visit Sourceforge to download Metatest.
 http://sourceforge.net/project/showfiles.php?group_id=204046

-- 
Jonathan Fine

-- 
http://mail.python.org/mailman/listinfo/python-list


Re: Packing a simple dictionary into a string - extending struct?

2007-06-22 Thread Jonathan Fine
John Machin wrote:

> def unpack(bytes, unpack_entry=unpack_entry):
>     '''Return dictionary gotten by unpacking supplied bytes.
>     Both keys and values in the returned dictionary are byte-strings.
>     '''
>     bytedict = {}
>     ptr = 0
>     while 1:
>         key, val, ptr = unpack_entry(bytes, ptr)
>         bytedict[key] = val
>         if ptr == len(bytes):
>             break
>     # That's beautiful code -- as pretty as a cane-toad.

Well, it's nearly right.  It has a transposition error.

> # Well-behaved too, a very elegant response to unpack(pack({}))

Yes, you're right.  An attempt to read bytes that aren't there.

> # Try this:
>     blen = len(bytes)
>     while ptr < blen:
>         key, val, ptr = unpack_entry(bytes, ptr)
>         bytedict[key] = val
> 
>     return bytedict

I've committed such a change.  Thank you.
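The corrected loop handles the empty-input case cleanly. A self-contained sketch with a toy unpack_entry (the real one lives in the poster's bytedict.py, and this entry layout is made up for illustration):

```python
def unpack_entry(data, ptr):
    # Toy entry format: 1-byte key length, key, 1-byte value length, value.
    klen = data[ptr]; ptr += 1
    key = data[ptr:ptr + klen]; ptr += klen
    vlen = data[ptr]; ptr += 1
    val = data[ptr:ptr + vlen]; ptr += vlen
    return key, val, ptr

def unpack(data):
    bytedict = {}
    ptr = 0
    blen = len(data)
    while ptr < blen:  # the corrected test: no read past the end on b''
        key, val, ptr = unpack_entry(data, ptr)
        bytedict[key] = val
    return bytedict

print(unpack(b''), unpack(b'\x01a\x02hi'))
```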

-- 
Jonathan



-- 
http://mail.python.org/mailman/listinfo/python-list


Re: Packing a simple dictionary into a string - extending struct?

2007-06-22 Thread Jonathan Fine
Jonathan Fine wrote:

> Thank you for this suggestion.  The growing adoption of JSON in Ajax
> programming is a strong argument for my using it in my application, although
> I think I'd prefer something a little more binary.
> 
> So it looks like I'll be using JSON.

Well, I tried.  But I came across two problems (see below).

First, there's bloat.  For binary byte data, one average one
character becomes just over 4.

Second, there's the inconvenience.  I can't simple take a
sequence of bytes and encode them using JSON.  I have to
turn them into Unicode first.  And I guess there's a similar
problem at the other end.

So I'm going with my own solution: 
http://mathtran.cvs.sourceforge.net/mathtran/py/bytedict.py?revision=1.1&view=markup

It seems to be related to Cerealizer:
http://home.gna.org/oomadness/en/cerealizer/index.html

It seems to me that JSON works well for Unicode text, but not
with binary data.  Indeed, Unicode hides the binary form of
the stored data, presenting only the code points.  But I don't
have Unicode strings!

Here's my test script, which is why I'm not using JSON:
===
import simplejson

x = u''
for i in range(256):
     x += unichr(i)

print len(simplejson.dumps(x)), '\n'

simplejson.dumps(chr(128))
===

Here's the output
===
1046  # 256 bytes => 256 * 4 + 34 bytes

Traceback (most recent call last):
  
   File "/usr/lib/python2.4/encodings/utf_8.py", line 16, in decode
 return codecs.utf_8_decode(input, errors, True)
UnicodeDecodeError: 'utf8' codec can't decode byte 0x80 in position 0: 
unexpected code byte
===
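In passing, one standard way round the binary-vs-JSON problem is to base64-encode byte values before JSON-encoding them, trading the roughly 4x escape bloat above for base64's fixed 4/3 overhead. A Python 3 sketch (json and base64 are stdlib; simplejson is not needed):

```python
import base64
import json

data = bytes(range(256))  # all 256 byte values, as in the test above
encoded = json.dumps(base64.b64encode(data).decode('ascii'))
decoded = base64.b64decode(json.loads(encoded))
print(len(encoded), decoded == data)  # 346 True
```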

-- 
Jonathan

-- 
http://mail.python.org/mailman/listinfo/python-list


Re: Packing a simple dictionary into a string - extending struct?

2007-06-20 Thread Jonathan Fine
"Sridhar Ratna" <[EMAIL PROTECTED]> wrote in message

> What about JSON? You can serialize your dictionary, for example, in
> JSON format and then unserialize it in any language that has a JSON
> parser (unless it is Javascript).

Thank you for this suggestion.  The growing adoption of JSON in Ajax
programming is a strong argument for my using it in my application, although
I think I'd prefer something a little more binary.

So it looks like I'll be using JSON.

Thanks.


Jonathan


-- 
http://mail.python.org/mailman/listinfo/python-list


Packing a simple dictionary into a string - extending struct?

2007-06-20 Thread Jonathan Fine
Hello

I want to serialise a dictionary, whose keys and values are ordinary strings
(i.e. a sequence of bytes).

I can of course use pickle, but it has two big faults for me.
1.  It should not be used with untrusted data.
2.  I want non-Python programs to be able to read and write these
dictionaries.

I don't want to use XML because:
1.  It is verbose.
2.  It forces other applications to load an XML parser.

I've written, in about 80 lines, Python code that will pack and unpack (to
use the language of the struct module) such a dictionary.  And then I
thought I might be reinventing the wheel.  But so far I've not found
anything much like this out there.  (The closest is work related to 'binary
XML' - http://en.wikipedia.org/wiki/Binary_XML.)

So, what I'm looking for is something like and extension of struct that
allows dictionaries to be stored.  Does anyone know of any related work?

-- 
Jonathan Fine


-- 
http://mail.python.org/mailman/listinfo/python-list


Re: Wanted: a python24 package for Python 2.3

2007-03-21 Thread Jonathan Fine
Gerald Klix wrote:
> Hi,
> You can't import subproces from future, only syntactic and semantic 
> changes that will become standard feature in future python version can 
> be activated that way.
> 
> You can copy the subprocess module from python 2.4 somewhere where it 
> will be found from python 2.3. At least subporcess is importable after 
> that:
> 
> --- snip ---
> [EMAIL PROTECTED]:~/ttt> cp -av /usr/local/lib/python2.4/subprocess.py .
> »/usr/local/lib/python2.4/subprocess.py« -> »./subprocess.py«
> [EMAIL PROTECTED]:~/ttt> python2.3
> Python 2.3.3 (#1, Jun 29 2004, 14:43:40)
> [GCC 3.3 20030226 (prerelease) (SuSE Linux)] on linux2
> Type "help", "copyright", "credits" or "license" for more information.
>  >>> import subprocess
>  >>>

You're quite right about the use of __future__.  I decided to
put subprocess in a package, so that my system can choose
which one to find, whether running Python 2.3 or 2.4.

(Well, in 2.3 there's no choice, but in 2.4 I don't want
the "just for 2.3" module to hide the real 2.4 module.)

The responses I've had indicate that my approach might
be a good idea, and might be useful to others.  For me,
that's enough for now.

-- 
Jonathan

-- 
http://mail.python.org/mailman/listinfo/python-list


Re: Wanted: a python24 package for Python 2.3

2007-03-20 Thread Jonathan Fine
Alex Martelli wrote:
> Jonathan Fine <[EMAIL PROTECTED]> wrote:
>...
> 
>>In other words, I'm asking for a python24 package that
>>contains all (or most) of the modules that are new to
>>Python 2.4.
> 
> 
> For subprocess specifically, see
> <http://www.lysator.liu.se/~astrand/popen5/> .  

Thanks for the URL.

> I don't think anybody's
> ever packaged up ALL the new stuff as you require.

Actually, all I require (for now) is subprocess.  So
I went and made a python24 module.  I'll change this
if people think something else would be better.  (It's
easier to ask forgiveness than to ask permission.)

I can show you what I've done:
 http://texd.cvs.sourceforge.net/texd/py/python24/
http://texd.cvs.sourceforge.net/texd/py/tex/util.py?revision=1.4&view=markup

My idea is that python24 should contain the Python 2.4
modules that those who are still on Python 2.3 might
want to use.

Similarly, python26 would be modules that are new
for Python 2.6 (which has not been released yet).

I doubt that I'm the only one with this problem, and
this is my suggestion for making it easier to solve.

-- 
Jonathan


-- 
http://mail.python.org/mailman/listinfo/python-list


Re: Wanted: a python24 package for Python 2.3

2007-03-20 Thread Jonathan Fine
[EMAIL PROTECTED] wrote:
> On Mar 20, 10:33 am, Jonathan Fine <[EMAIL PROTECTED]> wrote:

>>My problem is that I want a Python 2.4 module on
>>a server that is running Python 2.3.  I definitely
>>want to use the 2.4 module, and I don't want to
>>require the server to move to Python 2.4.

> You might be able to use the "from future import SomeModule" syntax to
> accomplish this, but I am not sure. Other than that, I would just
> recommend using the os.popen calls that are native to 2.3

I've already made my mind up.  I want to use subprocess on
both 2.3 and 2.4.  To do this, 2.3 sites have to have a copy
of the subprocess module.
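The intended idiom, sketched with the python24 compatibility package described above (the package name is the assumption here; the fallback branch only runs where the stdlib module is missing):

```python
try:
    from subprocess import Popen
except ImportError:
    # Python 2.3 and earlier: fall back to a bundled copy in the
    # hypothetical python24 compatibility package.  On any modern
    # interpreter this branch never executes.
    from python24.subprocess import Popen

# From here on, code uses Popen the same way on either version.
```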

My question (snipped) is how best to package this up.

best regards


Jonathan

-- 
http://mail.python.org/mailman/listinfo/python-list


Wanted: a python24 package for Python 2.3

2007-03-20 Thread Jonathan Fine
Hello

My problem is that I want a Python 2.4 module on
a server that is running Python 2.3.  I definitely
want to use the 2.4 module, and I don't want to
require the server to move to Python 2.4.

More exactly, I am using subprocess, which is
new in Python 2.4.  What I am writing is something
like
===
from subprocess import Popen
===

This will fail in Python 2.3, in which case I
would like to write something like
===
try:
 from subprocess import Popen
except ImportError:
 from somewhere_else import Popen
===

Put this way, it is clear (to me) that somewhere_else
should be python24.

In other words, I'm asking for a python24 package that
contains all (or most) of the modules that are new to
Python 2.4.

I've looked around a bit, and it seems that this
formulation of the solution is new.  I wonder if
anyone else has encountered this problem, or has
comments on my solution.

-- 
Jonathan

-- 
http://mail.python.org/mailman/listinfo/python-list


distutils - how to get more flexible configuration

2007-03-07 Thread Jonathan Fine
Hello

I'm writing a package that has cgi-bin scripts, html
files and data files (templates used by cgi scripts).
I find that using distutils in the standard way
does not give me enough flexibilty, even if I use
a setup.cfg.

For example, I want certain data files to go to
markedly different locations.

However, I have come up with a solution, that looks
like it will work for me, and I'd welcome comments.

Here's the MANIFEST file
===
setup.py
myproj_cfg.py
data/wibble.txt
data/wobble.txt
===

And here's the setup.py file I've written
===
from distutils.core import setup
import myproj_cfg

data_files = [
(myproj_cfg.wibble, ['data/wibble.txt']),
(myproj_cfg.wobble, ['data/wobble.txt']),
]

setup(data_files=data_files)
===

The user is asked to create a myproj_cfg.py file,
which might look like
===
wibble = '/wibble'
wobble = '/wobble'
===

And when a distribution is created and installed
we get
===
$ python setup.py install
running install
running build
running install_data
creating /wibble
copying data/wibble.txt -> /wibble
creating /wobble
copying data/wobble.txt -> /wobble
===

This is an example of what I want.  I'd welcome
your comments.

-- 
Jonathan Fine
The Open University, Milton Keynes, England


-- 
http://mail.python.org/mailman/listinfo/python-list


[ANN] MathTran project

2007-02-22 Thread Jonathan Fine
MathTran is a JISC funded project to provide translation of mathematical
content as a web service.  MathTran will be using TeX to provide
mathematical typography, and will use Python as its main programming
language.

http://www.open.ac.uk/mathtran/
http://www.jisc.ac.uk/

http://www.jisc.ac.uk/whatwedo/programmes/elearning_framework/toolkit_mathtran.aspx

-- 
Jonathan Fine
The Open University, Milton Keynes, England


-- 
http://mail.python.org/mailman/listinfo/python-list


struct.pack bug?

2007-02-08 Thread Jonathan Fine
Hello

I find the following inconsistent:
===
 >>> sys.version
'2.4.1a0 (#2, Feb  9 2005, 12:50:04) \n[GCC 3.3.5 (Debian 1:3.3.5-8)]'
 >>> pack('>B', 256)
'\x00'
 >>> pack('<B', 256)
'\x00'
 >>> pack('B', 256)
Traceback (most recent call last):
   File "<stdin>", line 1, in ?
struct.error: ubyte format requires 0<=number<=255
 >>>
===

I was hoping that '>B' and '<B' would, like 'B', raise an error for an
out-of-range value.

-- 
Jonathan

-- 
http://mail.python.org/mailman/listinfo/python-list
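For the record, this inconsistency was later resolved; on a modern interpreter all three unsigned-byte formats range-check the same way:

```python
import struct

# Each format should now reject an out-of-range unsigned byte.
for fmt in ('B', '>B', '<B'):
    try:
        struct.pack(fmt, 256)
        raised = False
    except struct.error:
        raised = True
    assert raised, "expected struct.error for %r" % fmt
```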


Re: Two mappings inverse to each other: f, g = biject()

2007-02-07 Thread Jonathan Fine
[EMAIL PROTECTED] wrote:

>>A google search for biject.py and bijection.py
>>produced no hits, so I suspect that this may not
>>have been done before.
> 
> 
> There are few (good too) implementations around, but they are called
> bidict or bidirectional dicts. Sometimes I use this implementation,
> with few changes:
> http://www.radlogic.com.au/releases/two_way_dict.py

Thank you for this.  You are quite right, a dictionary
is a particular type of mapping.  A mapping with an
inverse is called (at least by me) a bijection.  Therefore,
as you say, bidict or something similar is correct for
a bijection that is based on dictionaries.

I had a look at the code in radlogic.  There, the
design philosophy is to add 'inverse operations' to
a dictionary.  Thus, it adds a method reversed_items.

In my code, I used a different philosophy, which
comes down to this.  If a mapping is by design a
bijection, then it should have an inverse method
that gives the inverse mapping.  This preserves the
symmetry between a mapping and its inverse.  (The
inverse has an inverse, which is the original mapping.)

Therefore, my semantics comes down to
   f, g = bidict()  # Use the better name.
   assert f is g.inverse
   assert g is f.inverse
and also
   f[a] = b if and only if g[b] = a

By the way, it turns out that a bidict is not what
my application needs.  But I find it an interesting
problem, and time spent on it I do not consider
wasted.

Best regards

Jonathan



-- 
http://mail.python.org/mailman/listinfo/python-list


Re: Two mappings inverse to each other: f, g = biject()

2007-02-06 Thread Jonathan Fine
Nick Vatamaniuc wrote:

> If you need to get a short name, given a long name or vice-verse _and_
> the set of short names and long names is distinct (it would be
> confusing if it wasn't!) then you can just have one dictionary, no
> need to complicate things too much:
> f[a]=b
> f[b]=a
> You won't know which is a short and which is a long based just on
> this, so you need to keep track of it. But it will give you the
> mapping.

Thank you for this suggestion, Nick.  It's not
something I thought of.  And I'm sure that in some
cases it might be just the right thing.  It would
hold something like 'other-name' values.  (Every
cat should have at least two names ...)

But for my application, I think it complicates the
code that uses the bijection.

For example, I want to say:
   # Write the font dictionary to a file
   for key in font_dict:
   # write the font

   # Does value already exist in the font dictionary?
   # If not, add it to the font dictionary.
   key = font_dict.inverse.get(value)
   if key is None:
  key = len(font_dict)
  font_dict[key] = value

Following your suggestion, ingenious though it is,
would make the above code much more complicated and
error prone.

Perhaps it helps to think of
   f, g = biject()
as establishing a database, that has a consistency
condition, and which has two views.

There we are:  biject() gives two views on a
mapping (of a particular type).  Thank you for
you suggestion - it has clarified my thinking.

-- 
Jonathan


-- 
http://mail.python.org/mailman/listinfo/python-list


Two mappings inverse to each other: f, g = biject()

2007-02-06 Thread Jonathan Fine
Hello

As part of the MathTran project I found myself
wanting to maintain a bijection between long
names and short names.
   http://www.open.ac.uk/mathtran

In other words, I wanted to have two dictionaries
f and g such that
   f[a] == b
   g[b] == a
are equivalent statements.

A google search for biject.py and bijection.py
produced no hits, so I suspect that this may not
have been done before.

I've written a partial implementation of this,
and would appreciate comments.

http://texd.cvs.sourceforge.net/texd/tex/util.py?revision=1.1&view=markup
http://texd.cvs.sourceforge.net/texd/test_tex/test_util.py?revision=1.1&view=markup

Here's the code from test_util.py, that shows how it
works.  The weakref stuff is so that there isn't a
circular reference f to g to f.
===
from tex.util import biject

f, g = biject()
assert f.inverse is g
assert g.inverse is f

f[1] = 2
assert f[1] == 2
assert g[2] == 1
assert f.has_value(2)

import weakref

wr_g = weakref.ref(g)
del g
assert wr_g() == None
assert f.inverse == None
===
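A minimal sketch of one way biject() might be implemented (a reconstruction, not the original util.py; the behavior of del plus weakref assumes CPython's prompt reference counting):

```python
import weakref

class _Half(dict):
    """One direction of the bijection; writes through to its inverse."""
    _inv = None  # weakref to the inverse mapping, set by biject()

    @property
    def inverse(self):
        return self._inv() if self._inv is not None else None

    def __setitem__(self, key, value):
        dict.__setitem__(self, key, value)
        other = self.inverse
        if other is not None:
            dict.__setitem__(other, value, key)

    def has_value(self, value):
        other = self.inverse
        return other is not None and value in other

def biject():
    f, g = _Half(), _Half()
    f._inv = weakref.ref(g)  # weak, so neither side keeps the other alive
    g._inv = weakref.ref(f)
    return f, g

f, g = biject()
f[1] = 2
assert g[2] == 1 and f.has_value(2)
```

Using weak references for the inverse link is what avoids the circular reference mentioned above.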

best regards


Jonathan

-- 
http://mail.python.org/mailman/listinfo/python-list


Converting TeX tokens into characters

2005-06-28 Thread Jonathan Fine
I'm sort of wishing to convert TeX tokens into characters.

We can assume the standard (i.e. plain) category codes.
And that the characters are to be written to a file.

This proceess to take place outside of TeX.
Say in a Python program.

Think of a pretty-printer.
* Read the TeX in as tokens.
* Write the TeX out as characters.

My present interest is in the writer part.

And I'd very much prefer to have human-editable output.

So the writer should have methods for
* Writing a string (not containing control sequences).
* Writing a control sequence (or control symbol).

And, like humans, it should also have a line length limit.

Does anyone know of such a writer?  Or something close?

Or any projects that could use such a writer?


-- 
Jonathan

-- 
http://mail.python.org/mailman/listinfo/python-list


Writing a bytecode interpreter (for TeX dvi files)

2005-05-26 Thread Jonathan Fine
I'm writing some routines for handling dvi files.
In case you didn't know, these are TeX's typeset output.

These are binary files containing opcodes.
I wish to write one or more dvi opcode interpreters.

Are there any tools or good examples to follow for
writing a bytecode interpreter?

I am already aware of opcode.py and other modules in
Python Language Services, in the Library Reference.

Thanks.
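For illustration, a table-driven dispatch loop of the kind a dvi reader needs might look like this; the opcodes below are made up for the example, not real dvi opcodes:

```python
def interpret(data, table):
    """Table-driven dispatch: each opcode maps to (handler, operand
    size); the handler receives the operand bytes that follow it."""
    out = []
    pos = 0
    while pos < len(data):
        op = data[pos]
        handler, size = table[op]
        operand = data[pos + 1:pos + 1 + size]
        out.append(handler(operand))
        pos += 1 + size
    return out

# Two made-up opcodes: 0x01 takes a one-byte operand, 0x02 takes none.
table = {
    0x01: (lambda b: ('set', b[0]), 1),
    0x02: (lambda b: ('pop',), 0),
}
assert interpret(bytes([0x01, 0x07, 0x02]), table) == [('set', 7), ('pop',)]
```

Real dvi opcodes vary in operand width, so a table keyed by opcode with per-entry sizes scales naturally.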

-- 
Jonathan
http://texd.sourceforge.net

-- 
http://mail.python.org/mailman/listinfo/python-list


Re: Simple Python + Tk text editor

2005-04-14 Thread Jonathan Fine
Eric Brunel wrote:

Do you know the (apparently dead) project named e:doc? You can find it 
here:
http://members.nextra.at/hfbuch/edoc/
It's a kind of word processor that can produce final documents to 
various formats using backends, and one of the backends is for LaTeX.

It's written in Perl, but with Perl::Tk as a tool-kit, so it is quite 
close to Tkinter. There may be some ideas to steal from it.

Thanks for this.  I've not seen it before.
There are quite a few GUI semi-wysiwyg front ends to (La)TeX.
Interesting that there are so many, and that besides LyX few
seem to have succeeded.  Guess it's an important problem that
is also difficult.
My approach is a rather different - it is to exploit running
TeX as a daemon
  http://www.pytex.org/texd
This allows for Instant Preview.  My application is simply
a means of show-casing this capability.  And making it
useful in simple contexts.
So what I'm really wanting to do is provide a component for
projects such as e:doc.
Any, this might be a bit off-topic.
And thanks again for the link.
--
Jonathan
http://qatex.sourceforge.net

--
http://mail.python.org/mailman/listinfo/python-list


Re: Simple Python + Tk text editor

2005-04-13 Thread Jonathan Fine
Paul Rubin wrote:
Jonathan Fine <[EMAIL PROTECTED]> writes:
I'm looking for a simple Python + Tk text editor.
I want it as a building block/starting point.

Something wrong with IDLE?

Thanks for this suggestion.
For some reason, I did not think of IDLE as an editor.
Must have been a blind spot.
Though not simple, IDLE is a good starting point for me.
And for my project (integration of Python and TeX) there
is most unlikely to be a better one.
However, learning IDLE details might be quite a detour
from my immediate goals.
Some code follows, in case anyone else is interested.
IDLE is not yet a package, it seems.  But the idea is there.
===
[EMAIL PROTECTED]:/usr/lib/idle-python2.1$ cat __init__.py
# Dummy file to make this a potential package.
===
/usr/bin/idle hacked to produce an EditorWindow.
===
#! /usr/bin/python
import os
import sys
sys.path[:0] = ['/usr/lib/idle-python2.1']
import IdleConf
idle_dir = os.path.dirname(IdleConf.__file__)
IdleConf.load(idle_dir)
# new code
import Tkinter
import EditorWindow
root = Tkinter.Tk()
EditorWindow.EditorWindow(root=root)
EditorWindow.mainloop()
sys.exit()
# end of new code
# defer importing Pyshell until IdleConf is loaded
import PyShell
PyShell.main()
===
--
Jonathan
http://qatex.souceforge.org
--
http://mail.python.org/mailman/listinfo/python-list


Simple Python + Tk text editor

2005-04-13 Thread Jonathan Fine
Hi
I'm looking for a simple Python + Tk text editor.
I want it as a building block/starting point.
I need basic functions only:
  open a file, save a file, new file etc.
It has to be open source.
Anyone know of a candidate?
--
Jonathan
http://qatex.sourceforge.org
--
http://mail.python.org/mailman/listinfo/python-list


Re: Non-blocking input on windows, like select in Unix

2005-03-02 Thread Jonathan Fine
fraca7 wrote:
Jonathan Fine wrote:
Paul Rubin wrote:

As I recall, some posts to this list say that Windows provides
non-blocking i/o for sockets but not for files.

No, Windows does provide non-blocking I/O for regular files, but it's a 
completely different mechanism than the one used by winsock. You'll have 
to use win32all and enter the Dark Side, that is Windows APIs.

You don't want to do that if you're not already familiar with 
CreateProcess, CreatePipe, overlapped structures, WaitForSingleObject & 
al...

Thank you for this.
My application will, I think, become much more complicated if I cannot
use non-blocking input.  (As already explained, it is not a problem
that can be solved by threads.  Basically, I _need_  either to read
all available data, _or_ to read very carefully a byte at a time.)
Knowing that non-blocking input can be done under Windows, I would
like to use it.  In the longer run, that will be easier than
rewriting my application.  Or so it seems to me.
I did a google search, on the web, for
  CreateProcess, CreatePipe, overlapped structures, WaitForSingleObject
This is the first page is produced
  http://www.codeproject.com/threads/anonpipe1.asp
Seems to be the right sort of thing.  But I don't have time to read it
now.
I'm not really a Windows programmer.  Don't know the system calls.
But I do want my application to run on Windows.
I'll get back to this in a couple of weeks.  (Busy right now.)
--
Jonathan
--
http://mail.python.org/mailman/listinfo/python-list


Re: Non-blocking input on windows, like select in Unix

2005-03-01 Thread Jonathan Fine
Paul Rubin wrote:
Jonathan Fine <[EMAIL PROTECTED]> writes:
My question is this: Under Windows, is it possible
to read as many bytes as are available from stdout,
without blocking?

I think Windows implements non-blocking i/o calls.  However the
traditional (to some) Python or Java approach to this problem is
to use separate threads for the reader and writer, and let them block
as needed.
Thank you for this.
As I recall, some posts to this list say that Windows provides
non-blocking i/o for sockets but not for files.
However, if non-blocking i/o for files were available, that
would be great.  Actually, all I care about is input.
Can anyone provide a definite answer to this question?
And please, if the answer is YES (hope it is), with
working sample code.
The threaded approach does not help me.  If the read blocks,
I do not know what to write.  (I am responding to a command
line prompt - I have to read it first.)
--
Jonathan
--
http://mail.python.org/mailman/listinfo/python-list


Non-blocking input on windows, like select in Unix

2005-03-01 Thread Jonathan Fine
Hello
I have written a program that interacts with a
command line program.
Roughly speaking, it mimics human interaction.
(With more speed and accuracy, less intelligence.)
It works fine under Linux, using select().
But Windows does not support select for files.
Only for sockets.
Here's a google search on this topic.

I'd like my program to work under Windows also.
I'm exploring the following approach.
   stdin, stdout = os.popen2('tex')
Now read a byte at a time from stdout.
Whenever a prompt string from TeX appears,
write the response to stdin.
An experiment at the Python command line shows
that this works.  At each step either
a) we can read one byte from stdout w/o blocking
or
b) we can usefully write to stdin.
By the way, this experiment is rather tedious.
Still, I hope to have my computer do this for me.
So, unless I'm mistaken, I can get the program to
work under Windows.  Though reading a byte at a
time might give a performance penalty.  Can't say
yet on  that.
My question is this: Under Windows, is it possible
to read as many bytes as are available from stdout,
without blocking?
This is one way to improve performance.  (If needed.)
Another way, is to use _longer_ prompt strings.
If prompt strings are at least 64 bytes long, then
we can safely read 64 bytes -- unless we are in
the process of reading what might be a prompt
string.
This will of course increase performance in the
limiting case of when there are zero prompt
strings, and expensive system calls.
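The byte-at-a-time prompt scan described above can be sketched as follows; read_byte stands in for whatever blocking single-byte read the platform provides:

```python
def read_until_prompt(read_byte, prompts):
    """Accumulate one byte at a time; return once the buffer's tail
    matches a known prompt string (or the stream ends)."""
    buf = b''
    while True:
        c = read_byte()
        if not c:
            return buf  # end of stream
        buf += c
        for p in prompts:
            if buf.endswith(p):
                return buf

# Drive it with a canned stream instead of a live TeX process.
stream = iter([bytes([b]) for b in b'This is TeX\n*'])
out = read_until_prompt(lambda: next(stream, b''), (b'*', b'? '))
assert out == b'This is TeX\n*'
```

Because it never reads past a prompt, this works even where select() is unavailable, at the cost of one system call per byte.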
This problem of non-blocking input on Windows seems
to arise often.  I hope my remarks might be helpful
to others.  Certainly, I've found it helpful to
write them.
--
Jonathan
http://www.pytex.org
--
http://mail.python.org/mailman/listinfo/python-list


Re: best way to do a series of regexp checks with groups

2005-01-24 Thread Jonathan Fine
Mark Fanty wrote:
In perl, I might do (made up example just to illustrate the point):
if(/add (\d+) (\d+)/) {
  do_add($1, $2);
} elsif (/mult (\d+) (\d+)/) {
  do_mult($1,$2);
} elsif(/help (\w+)/) {
  show_help($1);
}
or even
do_add($1,$2) if /add (\d+) (\d+)/;
do_mult($1,$2) if /mult (\d+) (\d+)/;
show_help($1) if /help (\w+)/;

Here's some Python code (tested).
It is not as concise as the Perl code.
Which might or might not be a disadvantage.
Sometimes, regular expressions are not the right thing.
For example, a simple str.startswith() might be better.
What about "add 9 999"?
Maybe we want to catch the error before we get to the do_add.
Can't easily do that with regular expressions.
And what about a variable number of arguments.
If regular expressions are no longer used, the Perl code seems
to loose some of its elegance.
I've been arguing for writing small, simple functions that do something.
This should make testing much easier.
These functions might happen to use regular expressions.
The code below is clearly more flexible.
It's easy, for example, to add a new command.
Just add an entry to dispatch.
The thing I like best about it is the passing of a dict.
===
#!/usr/bin/python
import re
# here we know about functions and patterns
def do_add(arg1, arg2): print "+ %s %s" % (arg1, arg2)
def do_times(arg1, arg2): print "* %s %s" % (arg1, arg2)
add_re = re.compile(r'add (?P<arg1>.*) (?P<arg2>.*)')
times_re = re.compile(r'times (?P<arg1>.*) (?P<arg2>.*)')
def find_add(str):
match = add_re.match(str)
if match is None:
return match
return match.groupdict()
def find_times(str):
match = times_re.match(str)
if match is None:
return match
return match.groupdict()
# here we bind everything together
dispatch = [
(find_add, do_add),
(find_times, do_times),
]
def doit(str):
for (find, do) in dispatch:
d = find(str)
if d is not None:
return do(**d)
return None # or error
if __name__ == '__main__':
doit('add this that')
doit('times this that')
===
Jonathan
--
http://mail.python.org/mailman/listinfo/python-list


Re: What could 'f(this:that=other):' mean?

2005-01-11 Thread Jonathan Fine
Nick Coghlan wrote:
If the caller is meant to supply a namespace, get them to supply a 
namespace.

def f(ns1, ns2):
  print ns1['a'], ns1['b'], ns2['a'], ns2['b']
f(ns1 = dict(a=1, b=2), ns2 = dict(a=3, b=4))
Hey, where's Steve? Maybe his generic objects should be called 
namespaces instead of bunches. . .

def f(ns1, ns2):
  print ns1.a, ns1.b, ns2.a, ns2.b
f(ns1 = namespace(a=1, b=2), ns2 = namespace(a=3, b=4))

Basically, there are three main possibilities.
  f_1(ns1=dict(a=1, b=2), ns2=dict(a=3, b=4))
  f_2(ns1_a=1, ns1_b=2, ns2_a=3, ns2_b=4)
  f_3(ns1:a=1, ns1:b=2, ns2:a=3, ns2:b=4)
f_3 is the suggested extension to Python.
f_3 is similar to f_2 for the caller of f_3.
f_3 is similar to f_1 for the implementor of f_3.
Nick points out that a key issue is this:  Is the user meant
to supply arguments belonging to a namespace?
I'm not, at this time, wishing to promote my suggestion.
If I were, I would be well advised to find usage cases.
Rather, I simply wish to point out that the
  f(this:that=other)
syntax may have uses, other than optional static typing.
And this I've done.  So for me the thread is closed.
Jonathan

--
http://mail.python.org/mailman/listinfo/python-list


Re: What could 'f(this:that=other):' mean?

2005-01-07 Thread Jonathan Fine
Jeff Shannon wrote:
Jonathan Fine wrote:

The use of *args and **kwargs allows functions to take a variable number 
of arguments.  The addition of ***nsargs does not add significantly.  
I've posted usage examples elsewhere in this thread.
I think they show that ***nsargs do provide a benefit.
At least in these cases.
How significant is another matter.  And how often do they occur.

Now, though, we've lost the ability to specify *only* the argname and 
not the namespace as well -- that is, you *cannot* call f4 with keywords 
but not namespaces.  From the caller's vantage point, this means that 
they need to know the full namespace spec of the function, which makes 
it no different than simply using longer (but unique) keyword names.
This is a major point.
From the caller's point of view:
  fn_1(x_aa=1, x_bb=2, y_aa=3)
  fn_2(x:aa=1, x:bb=2, y:aa=3)
are very similar.  Replace '_' by ':'.
So you are saying, I think, 'what does this buy the caller'.
Well, by using ':' the namespacing is explicit.
  fn_3(my_path, my_file, any_file)
Is this an example of implicit namespacing? (Rhetorical question.)
Explicit is better than implicit, I'm told (smile).

So, we can see that allowing namespaces and ***nsargs doesn't add any 
utility from the caller's perspective.  How much does the callee gain 
from it?

Well, the following functions would be equivalent:
def g1(arg1, arg2, arg3, arg4):
ns1 = {'arg1':arg1, 'arg2':arg2, 'arg3':arg3, 'arg4':arg4}
return ns1
def g2(ns1:arg1, ns1:arg2, ns1:arg3, ns1:arg4):
return ns1
You might say "Wow, look at all that repetetive typing I'm saving!" But 
that *only* applies if you need to stuff all of your arguments into 
dictionaries.  
This is a key point.  But there's more to it.
Namespace arguments are good if you have to divide the arguments into
two or more dictionaries.  Based on the namespace prefix.
I suspect that this is a rather small subset of 
functions.  In most cases, it will be more convenient to use your 
arguments as local variables than as dictionary entries.
This is a matter of usage.  If the usage is very, very small, then
what's the point.  If the usage is very, very large, then the case
is very strong.
def gcd1(a, b):
while a:
a, b = b%a, a
return b
def gcd2(ns1:a, ns1:b):
while ns1['a']:
ns1['a'], ns1['b'] = ns1['b']%ns1['a'], ns1['a']
return ns1['b']
Speaking of repetetive typing :P
Besides, another function equivalent to the above two would be:
def g3(arg1, arg2, arg3, arg4):
ns1 = locals()
return ns1
which is quite a bit less repetetive typing than the 'namespace' 
version.
These are not good examples for the use of namespacing.
And the results are horrible.
So this is an argument _in favour_ of namespacing.
Namely, that it _passes_ "there should be one (and preferably only one) 
obvious way to do things" test (quoted from your earlier message).


So, you're looking at making the function-calling protocol significantly 
more complex, both for caller and callee, for the rather marginal gain 
of being able to get arguments prepackaged in a dictionary or two, when 
there already exist easy ways of packaging function arguments into a 
dict.  Given the deliberate bias against adding lots of new features to 
Python, one needs a *much* better cost/benefit ratio for a feature to be 
considered worthwhile.
I believe that we _agree_, that it is a matter of cost/benefit ratio.
My opinion is that studying usage examples is a good way of evaluating
this ratio.  Which is why I have posted some favourable usage examples.

I'd note also that the usage you're drawing from, in XML/XSLT, isn't 
really comparable to function parameters.  It's a much closer parallel 
to object attributes.  Python *does* have this concept, but it's spelled 
differently, using a '.' instead of a ':'.  In other words, the XML 
fragment you give,

<element this:that='other'/>

would be more appropriate to render in Python as
e = Element()
e.this.that = 'other'
It's quite reasonable to suppose that some object of type Element may 
have a set of font attributes and a set of image attributes, and that 
some of these may have the same name.  Python would use font objects and 
image objects instead of 'namespaces' --

e.font.size = '14pt'
e.image.size = (640, 480)
So while these namespaces are probably a great thing for XML, XSLT, 
they're not very useful in Python.  Which, given the rather different 
goals and design philosophies behind the languages, shouldn't really be 
much of a surprise.
It may be better, in some cases, to write
  fp = FontParams()
  fp.size = 14

Re: What could 'f(this:that=other):' mean?

2005-01-07 Thread Jonathan Fine
Jonathan Fine wrote:

I'll post some usage examples later today, I hope.

Well, here are some examples.  A day later, I'm afraid.
** Pipelines and other composites
This is arising for me at work.
I produce Postscript by running TeX on a document.
And then running dvips on the output of TeX.
TeX as a program has parameters (command line options).
As does dvips.
For various reasons I wish to wrap TeX and dvips.
  def tex_fn(filename, input_path=None, eps_path=None):
'''Run tex on filename.
Search for input files on input_path.
Search for EPS files on eps_path. '''
pass
  def dvips_fn(filename, page_size=None, color_profile=None):
'''Run dvips on filename.  etc.'''
pass
In reality, these tex and dvips have many options.
More parameters will be added later.
So now I wish to produce a composite function, that runs
both tex and dvips.  And does other glue things.
  def tex_and_dvips_fn(filename,
tex:input_path=xxx,
dvips:color_profile=yyy):
# glueing stuff
tex_fn(filename, **tex)
# more glueing stuff
dvips_fn(filename, **dvips)
To avoid a name clash, we use 'tex' for the parameter
space, and 'tex_fn' for the function that takes 'tex'
as parameter.
The point is that parameters can be added to tex_fn and
dvips_fn without our having to change tex_and_dvips_fn
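Pending any such syntax, the same grouping can be emulated today by splitting **kwargs on a name prefix; split_by_prefix is a hypothetical helper, not part of the proposal:

```python
def split_by_prefix(kwargs):
    """Group keyword arguments into per-namespace dicts by the
    prefix before the first underscore in each name."""
    spaces = {}
    for key, value in kwargs.items():
        ns, _, name = key.partition('_')
        spaces.setdefault(ns, {})[name] = value
    return spaces

def tex_and_dvips(filename, **kwargs):
    ns = split_by_prefix(kwargs)
    tex, dvips = ns.get('tex', {}), ns.get('dvips', {})
    # glueing stuff would go here; return the split for inspection
    return tex, dvips

tex, dvips = tex_and_dvips('doc.tex', tex_input_path='a', dvips_color_profile='b')
assert tex == {'input_path': 'a'} and dvips == {'color_profile': 'b'}
```

The underscore convention works, but unlike the proposed ':' syntax it is invisible in the function signature, which is the point under discussion.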
**  Wrapping functions
This is the problem that originally motivated my
suggestion.
We have coded a new function.
  def adder(i, j):  return i + j
We wish to test 'adder'.
But unittest is too verbose for us.
We wish to define a decorator (?) test_wrap to
work as follows.
  orig_adder = adder
  adder = test_wrap(adder)
  new_adder = adder
orig_adder(i, j) and new_adder(i, j) to be
effectively identical - same return, same side
effects, raise same exceptions.
So far,
  def test_wrap(fn): return fn
does the job.
But now we want
  new_adder(2, 2, returns=4)
  new_adder(2, '', raises=TypeError)
to be same as
  orig_adder(2, 2)
  orig_adder(2, '')
(which can be achieved by ignoring 'returns' and 'raises').
The idea here is that we can call
  adder = test_wrap(adder)
early on, and not break any working code.
And we also want
  new_adder(2, 2, returns=5)
  new_adder('', '', raises=TypeError)
to raise something like an AssertionError.
OK - so I have an informal specification of test_wrap.
Its clear, I think, that test_wrap must be something like
  def test_wrap(fn):
      def wrapped_fn(*args, **kwargs):
          test_args = {}
          # transfer entries from one dict to another
          for key in ('returns', 'raises'):
              if kwargs.has_key(key):
                  test_args[key] = kwargs[key]
                  del kwargs[key]
          result = fn(*args, **kwargs)
          if test_args.has_key('returns'):
              assert test_args['returns'] == result
          return result
      return wrapped_fn
(I've not coded testing for 'raises'.)
Now, the more parameters added by the test_wrap function,
the more the chance of a name clash.
So why not reduce the chances by using name spaces.
One possible namespace syntax is:
  new_adder(2, 3, test=dict(returns=5))
Another such syntax is:
  new_adder(2, 3, test:returns=5)
Each has its merits.
The first works with Python 2.4.
The second is, in my opinion, easier on the eye.
Anyway, that's my suggestion.
Jonathan
--
http://mail.python.org/mailman/listinfo/python-list


Re: What could 'f(this:that=other):' mean?

2005-01-05 Thread Jonathan Fine
Jeff Shannon wrote:
Jonathan Fine wrote:
Giudo has suggested adding optional static typing to Python.
(I hope suggested is the correct word.)
  http://www.artima.com/weblogs/viewpost.jsp?thread=85551
An example of the syntax he proposes is:
 > def f(this:that=other):
 > print this

I'm going to suggest a different use for a similar syntax.
In XML the syntax
 > <element this:that="other">
is used for name spaces.
Name spaces allow independent attributes to be applied to an
element.  For example, 'fo' attributes for fonts and layout.
XSLT is of course a big user of namespaces in XML.
Namespaces seem to be a key idea in allowing independent
applications to apply attributes to the same element.
[...]
Here's an example of how it might work.  With f as above:
 > f(this:that='value')
{'that': 'value'}

I fail to see how this is a significant advantage over simply using 
**kwargs.  It allows you to have multiple dictionaries instead of just 
one, that's all.  And as you point out, it's trivial to construct your 
own nested dicts.
This argument could be applied to **kwargs (and to *args).  In other
words, **kwargs can be avoided using a trivial construction.
>>> def f_1(**kwargs): print kwargs
...
>>> f_1(a=3, b=4)
{'a': 3, 'b': 4}
>>>
>>> def f_2(kwargs): print kwargs
...
>>> f_2({'a':3, 'b':4})
{'a': 3, 'b': 4}
(and in Python 2.3)
>>> f_2(dict(a=3, b=4))
{'a': 3, 'b': 4}
f_1() is internally the same as f_2(), but the argument passing is
different.
Jeff, are you in favour of kwargs as a language feature?  If so, you
may wish to refine your argument.
(One can be in favour of kwargs and against my proposal.  That kwargs
is widely used, and my proposal would not be, is a good argument, IMO.)
I'll post some usage examples later today, I hope.

Besides, Python already uses the concept of namespaces by mapping them 
to object attributes.  Module references are a namespace, exposed via 
the attribute-lookup mechanism.  This (IMO) fails the "there should be 
one (and preferably only one) obvious way to do things" test.  The 
functionality already exists, so having yet-another way to spell it will 
only result in more confusion.  (The fact that we're borrowing the 
spelling from XML does little to mollify that confusion.)

Here, I don't understand.  Could you give an example of two obvious ways
of doing the same thing, should my suggestion be adopted?
--
Jonathan
http://www.pytex.org
--
http://mail.python.org/mailman/listinfo/python-list


What could 'f(this:that=other):' mean?

2005-01-05 Thread Jonathan Fine
Guido has suggested adding optional static typing to Python.
(I hope suggested is the correct word.)
  http://www.artima.com/weblogs/viewpost.jsp?thread=85551
An example of the syntax he proposes is:
> def f(this:that=other):
> print this
This means that f() has a 'this' parameter, of type 'that'.
And 'other' is the default value.
I'm going to suggest a different use for a similar syntax.
In XML the syntax
>  <element ns:name="value">
is used for name spaces.
Name spaces allow independent attributes to be applied to an
element.  For example, 'fo' attributes for fonts and layout.
XSLT is of course a big user of namespaces in XML.
Namespaces seem to be a key idea in allowing independent
applications to apply attributes to the same element.
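For comparison, this is how namespaced attributes already surface in Python's standard xml.etree: each attribute name carries its namespace URI, so two vocabularies can decorate one element without colliding. (The 'fo' URI below is the real XSL-FO namespace; the 'my' URI is invented for the example.)

```python
import xml.etree.ElementTree as ET

# One element carrying a 'size' attribute from two different namespaces.
doc = ('<block xmlns:fo="http://www.w3.org/1999/XSL/Format"'
       '       xmlns:my="http://example.org/my"'
       '       fo:size="12pt" my:size="large"/>')
elt = ET.fromstring(doc)
# ElementTree expands each prefix to {uri}localname, keeping the two apart:
for name, value in sorted(elt.attrib.items()):
    print(name, '=', value)
```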
For various reasons, I am interested in wrapping functions,
and supplying additional arguments.  Namespaces would be
useful here.  Decorators, by the way, are ways of wrapping
functions.  Namespaces might make decorators a bit easier
to use.
Here's an example of how it might work.  With f as above:
> f(this:that='value')
{'that': 'value'}
Do you see?  The syntax of def makes 'this' a dictionary.
And the syntax of the call adds an item to 'this'.
This aligns nicely with XML syntax and semantics.
One could extend **kwargs similarly.
> def g(***nsargs):
> print nsargs
>
> g(this:that='other', that:this='more')
{'this': {'that': 'other'}, 'that': {'this': 'more'}}
All the namespace args are gathered into a dict of dicts.
Thus, this suggestion is mostly syntactic sugar for
g(this=dict(that='other'), that=dict(this='more'))
(Have I got this right? - I'm only up to Python 2.2 at
home.  This is how I remember 2.4.)
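One way to get the grouping today, with no new syntax, is a double-underscore convention (the style Django's ORM later popularised for field lookups): a helper splits 'ns__name' keywords into a dict of dicts. A rough sketch, with made-up names:

```python
def split_namespaces(**kwargs):
    """Group 'ns__name' keywords into {'ns': {'name': value}}."""
    grouped = {}
    for key, value in kwargs.items():
        ns, sep, name = key.partition('__')
        if sep:                       # 'this__that' -> grouped['this']['that']
            grouped.setdefault(ns, {})[name] = value
        else:                         # plain keyword: keep at top level
            grouped[key] = value
    return grouped

print(split_namespaces(this__that='other', that__this='more'))
# -> {'this': {'that': 'other'}, 'that': {'this': 'more'}}
```

This trades a reserved character pair ('__') for the proposed ':' syntax, but needs nothing beyond **kwargs.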
Back to optional static typing.  A common idiom is
>  def return_dict(data=None):
>      if data is None:
>          data = {}
>      # etc
This avoids the newbie gotcha in
>  def return_dict(data={}):
>      # etc
So to write this using the suggested syntax one has:
>  def return_dict(data:dict=None):
>   # oops!
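The gotcha in the data={} form is that a default value is evaluated once, at definition time, so the same dict is shared across calls. A quick demonstration of both forms:

```python
def bad(data={}):          # one dict, created at def time, shared by all calls
    data['n'] = data.get('n', 0) + 1
    return data

def good(data=None):       # fresh dict per call
    if data is None:
        data = {}
    data['n'] = data.get('n', 0) + 1
    return data

print(bad(), bad())        # the same dict both times: {'n': 2} {'n': 2}
print(good(), good())      # independent dicts: {'n': 1} {'n': 1}
```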
So now some final comments.
1.  I have no opinion yet about optional static typing.
2.  Calls of function f() should outnumber definitions of f().
(I'm not totally convinced of this - for example __eq__ may
be defined in many classes, but called by few functions.
Frameworks often have functions as parameters.)
3.  Granted (2), perhaps function calls are first in the
queue for syntactic sugar.
--
Jonathan
http://www.pytex.org
--
http://mail.python.org/mailman/listinfo/python-list