[Python-announce] Django-Compat-Patcher 0.12 released

2023-08-03 Thread Pascal Chambon

Hello everyone,

I'm pleased to announce the release of Django-Compat-Patcher 0.12, which 
now includes 86 compatibility shims ranging from Django 1.6 to 4.2.


If your Django project is incompatible with handy pluggable apps, or if 
you reach the depths of dependency hell when attempting to mass-upgrade 
your packages, don't panic!


Just drop this Django compatibility patcher into your project and keep 
your developments moving forward, while dependency conflicts are slowly 
sorted out in bugtrackers.
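
For reference, activation is a one-liner early in the project's entry 
points; a minimal sketch (check the project README for the exact 
invocation, since the selection of fixers is configurable via settings):

    # In manage.py / wsgi.py, before Django itself gets set up.
    import django_compat_patcher
    django_compat_patcher.patch()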


As a proof-of-concept, the Pychronia alternate reality portal 
(https://github.com/ChrysalisTeam/pychronia) is kept perfectly 
functional on Django 4.2, while still using pre-Django-1.10 constructs 
like "views as dotted strings" (but don't do that for your own projects 
of course; alternative tools like django-compat or django-codemod will 
help you migrate your own codebase).


Enjoy your decade-long backward compatibility, and get in touch if some 
compatibility shims are missing for you.


Pascal

Repository : https://github.com/pakal/django-compat-patcher
Download : https://pypi.org/project/django-compat-patcher/

PS: Pip might block on theoretical dependency conflicts even though 
Django-Compat-Patcher would solve them anyway; so you might have to 
bypass the Pip dependency resolver, until some escape hatches are 
implemented in it.


___
Python-announce-list mailing list -- python-announce-list@python.org
To unsubscribe send an email to python-announce-list-le...@python.org
https://mail.python.org/mailman3/lists/python-announce-list.python.org/
Member address: arch...@mail-archive.com


[Python-announce] RSFile v2.2 released

2023-08-03 Thread Pascal Chambon

Dear pythoneers,

I'm pleased to announce a little update of the RSFile I/O Library, 
bringing support for recent Python versions.


RSFile provides drop-in replacements for io classes and for the open() 
builtin.


Its goal is to provide a cross-platform, reliable, and comprehensive 
synchronous file I/O API, with advanced
features like fine-grained opening modes, shared/exclusive file record 
locking, thread-safety, cache synchronization,
file descriptor inheritability, and handy stat getters (size, inode, 
times...).


Possible use cases for this library: write to logs concurrently without 
ending up with garbled data, manipulate sensitive data like disk-based 
databases, synchronize heterogeneous producer/consumer processes when 
multiprocessing semaphores aren't an option...

Unix users might particularly be interested by the workaround that this 
library provides, concerning the weird semantics of fcntl() locks (when 
any descriptor to a disk file is closed, the process loses ALL locks 
acquired on this file through any descriptor).
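
To illustrate the underlying pitfall with nothing but the standard 
library (this shows the OS/stdlib behaviour that RSFile works around, 
not RSFile's own API; Unix-only, scratch path is hypothetical):

    import fcntl
    import os
    import sys
    import time

    PATH = "/tmp/fcntl_lock_demo.txt"   # hypothetical scratch file
    open(PATH, "w").close()

    pid = os.fork()
    if pid == 0:
        # Child process: wait a bit, then try to grab the lock without blocking.
        time.sleep(1)
        with open(PATH, "r+") as f:
            try:
                fcntl.lockf(f, fcntl.LOCK_EX | fcntl.LOCK_NB)
                print("child: got the lock -> the parent's lock was silently dropped")
            except OSError:
                print("child: lock is still held by the parent")
        sys.exit(0)

    # Parent process: take an exclusive lock through one descriptor...
    holder = open(PATH, "r+")
    fcntl.lockf(holder, fcntl.LOCK_EX)

    # ...then merely open and close a *second* descriptor to the same file.
    # Per POSIX fcntl() semantics, this releases ALL of the process's locks on it.
    other = open(PATH, "r")
    other.close()

    os.waitpid(pid, 0)
    holder.close()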

RSFile has been tested with CPython 3.7+, on Windows/Linux/Mac systems, 
and should work on other Python implementations too.

The technical documentation of RSFile includes a comprehensive 
description of the concepts and gotchas encountered while developing 
this library, which could be useful to anyone interested in the gory 
details of file I/O.


Downloads:
https://pypi.python.org/pypi/RSFile

Documentation:
http://rsfile.readthedocs.io/en/latest/

Repository:
https://github.com/pakal/rsfile

PS: The implementation is currently pure-python, so if you need high 
performance, using standard Python streams in parallel will remain 
necessary. Also, do not use non-blocking streams with this library, or 
with the io module in general: lots of things could go wrong...



___
Python-announce-list mailing list -- python-announce-list@python.org
To unsubscribe send an email to python-announce-list-le...@python.org
https://mail.python.org/mailman3/lists/python-announce-list.python.org/
Member address: arch...@mail-archive.com


[Python-announce] RsFile 3.0 released

2022-06-27 Thread Pascal Chambon

Dear pythoneers,

I'm pleased to announce version 3.0 of RSFile I/O Library, which adds 
support for python 3.8, 3.9, and 3.10, drops support for Python<=3.5, 
and strengthens testing on OSX.


RSFile provides cross-platform drop-in replacements for the classes of 
the io module, and for the open() builtin.


Its goal is to provide a cross-platform, reliable, and comprehensive 
synchronous file I/O API, with advanced features like fine-grained 
opening modes, shared/exclusive file record locking, thread-safety, disk 
cache synchronization, file descriptor inheritability, and handy stat 
getters (size, inode, times…).


Locking is performed using actual file record locking capabilities of 
the OS, not by using separate files/directories as locking markers, or 
other fragile gimmicks. Unix users might particularly be interested by 
the workaround that this library provides, concerning the weird semantic 
of fcntl() locks (when any descriptor to a disk file is closed, the 
process loses ALL locks acquired on this file through any descriptor). 
Possible use cases for this library: concurrently writing to logs 
without ending up with garbled data, manipulating sensitive data like 
disk-based databases, synchronizing heterogeneous producer/consumer 
processes when multiprocessing semaphores aren’t an option…
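
As a rough illustration of the "garbled logs" use case, here is what 
the pattern looks like with only the standard library on Unix (RSFile 
packages this kind of thing portably; the helper name and log path 
below are hypothetical):

    import fcntl
    import os

    def append_log_line(path, line):
        # Hold an exclusive record lock for the duration of the write, so that
        # concurrent writers never interleave partial lines.
        with open(path, "a") as f:
            fcntl.lockf(f, fcntl.LOCK_EX)
            try:
                f.write(line + "\n")
                f.flush()
                os.fsync(f.fileno())
            finally:
                fcntl.lockf(f, fcntl.LOCK_UN)

    append_log_line("/tmp/app.log", "worker 42 finished")   # hypothetical usage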


https://pypi.org/project/RSFile/
https://rsfile.readthedocs.io/en/latest/
https://github.com/pakal/rsfile/

regards,

Pakal

___
Python-announce-list mailing list -- python-announce-list@python.org
To unsubscribe send an email to python-announce-list-le...@python.org
https://mail.python.org/mailman3/lists/python-announce-list.python.org/
Member address: arch...@mail-archive.com


[Python-announce] ANN: django-compat-patcher 0.11 released

2022-06-17 Thread Pascal Chambon

Hello,

It's with great pleasure that I announce the release of 
*django-compat-patcher v0.11*.


This release extends compatibility fixers so that you can painlessly 
upgrade your project to *Django 4.0*, without breaking your existing 
pluggable-apps ecosystem.


--

DCP is a companion package which adds backwards/forwards compatibility 
patches to Django, so that your dependencies don't get broken by trivial 
changes made to the core of the framework.


It injects compatibility shims like function/attribute aliases, restores 
data structures which were replaced by stdlib ones, extends the 
behaviour of callables (eg. referring to a view by object, by name, or 
by dotted path), and can even preserve deprecated modules as “import 
aliases”.


This allows you to upgrade your dependencies one at a time, to 
fork/patch them when you have a proper opportunity, and most importantly 
to not get stuck when deadlines are tight.


Technically, DCP manages a set of “fixers”, small utilities which 
advertise the change that they make, the versions of Django that they 
support, and which monkey-patch the Django framework on demand.
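
To give an idea of what such a shim amounts to, here is a toy, 
hypothetical fixer (illustration only, not taken from DCP's sources; 
real fixers also carry metadata about the Django versions they apply to):

    # Hypothetical illustration only -- not DCP's actual code.
    def fix_deletion_utils_encoding_force_text():
        """Re-add django.utils.encoding.force_text (removed in Django 4.0)
        as an alias of force_str, for third-party apps still importing it."""
        from django.utils import encoding
        if not hasattr(encoding, "force_text"):
            encoding.force_text = encoding.force_str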


Beware, DCP is aimed at project maintainers. If you are developing a 
reusable Django application, you can’t expect all your users to 
integrate DCP as well. In that case, to support a wide range of Django 
versions, you should rather use a toolkit like Django-compat. You may 
think of DCP as a “runtime 2to3 for Django”, whereas Django-Compat is 
rather a “six module for Django”.


Feel free to contribute new fixers, for backwards or forwards 
compatibility, depending on the compatibility troubles you encounter in 
your projects.


https://pypi.org/project/django-compat-patcher/
https://github.com/pakal/django-compat-patcher

regards,
Pakal
___
Python-announce-list mailing list -- python-announce-list@python.org
To unsubscribe send an email to python-announce-list-le...@python.org
https://mail.python.org/mailman3/lists/python-announce-list.python.org/
Member address: arch...@mail-archive.com


python list files and folder using tkinter

2021-12-05 Thread Pascal B via Python-list
Hello,
I have already posted a message some time ago about this app. Since then, I 
didn't code in Python or make any changes. I think that before getting further 
with functionalities, a few things (or the whole thing) need to be changed.
For example, it would need a button to pick folders, and maybe it should ask 
whether the resulting CSV file can be saved in the same folder.
More importantly, files are listed OK on Windows but not on Linux after 
running it a few times.
https://github.com/barpasc/listfiles
Code extract (see the full source script on GitHub):

elif vvchkboxF == 1:
    # *** FOLDERS AND FILES ONLY ***
    for root, dirs, files in os.walk(Lpath, topdown=False):

        ### compute folder size
        f_size = 0   # reset per folder (the accumulator used below is f_size)
        for x, y, z in os.walk(root):
            for i in z:
                ftmp_che = x + os.sep + i
                f_size += os.path.getsize(ftmp_che)

        ### write folder size
        counter = root.count(os.path.sep) - counterPath
        vfile_name = root
        vfile_name = vfile_name + os.path.sep
        vfile_name = os.path.split(os.path.dirname(vfile_name))[1]
        vfile_name += os.path.sep
        if counter <= f_vscale:
            csv_contents += "%s;%s;%.0f;%.2f;%.2f;%.2f;%s\n" % (
                root, vfile_name, f_size, f_size / 1024,
                f_size / 1048576, f_size / 1073741824, "folder")

    ### compute + write file sizes
    for f in os.listdir(Lpath):
        path = os.path.join(Lpath, f)
        if os.path.isfile(path):
            f_size = 0
            f_size = os.path.getsize(path)
            csv_contents += "%s;%s;%.0f;%.2f;%.2f;%.2f;%s\n" % (
                path, f, f_size, f_size / 1024,
                f_size / 1048576, f_size / 1073741824, "file")

    fx_writeCSV_str(csv_contents)

print("job adv listing files ok")
-- 
https://mail.python.org/mailman/listinfo/python-list


Php vs Python gui (tkinter...) for small remote database app

2021-06-14 Thread Pascal B via Python-list
Hi,
I would like to know whether, for a small app that requires a connection to a 
remote database server, PHP is more suitable than Python, mainly regarding 
security.
PHP needs one port open for HTTP and one port open for the connection to the 
database. If using Python with a tkinter GUI, I understand a small app can 
connect to the database directly, so only one port (the database one) would 
need to be open and listening for connections. So I would need to worry less 
about security using Python rather than PHP for something small, like a small 
Python app that I hand over to users.

Am I missing something in this assertion?
-- 
https://mail.python.org/mailman/listinfo/python-list


Django-Compat-Patcher 0.10 Released, with Django 3.1 support

2021-01-22 Thread Pascal Chambon

Hello,

I'm pleased to announce the release of Django-Compat-Patcher 0.10, which 
now includes 67 compatibility shims ranging from Django 1.6 to 3.1


If your Django project is incompatible with a useful pluggable app, or 
if you encounter the depths of dependency hell when attempting to 
mass-upgrade your packages, don't panic!


Just drop the django compatibility patcher into your project, and keep 
your developments going forward, while dependency conflicts are slowly 
sorted out in bugtrackers.


As a proof-of-concept, the Pychronia alternate reality portal is kept 
perfectly functional on Django 3.1.5, while still using pre-Django-1.10 
constructs like "views as dotted strings" (but don't do that for your 
own projects; alternative tools like django-compat or django-codemod can 
help you migrate your own codebase).
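
For context, the "views as dotted strings" idiom looks like this 
(hypothetical app/view names; the string form was removed in Django 
1.10, the second form is the modern equivalent):

    # Old style (pre-Django 1.10): the view is given as a dotted string.
    from django.conf.urls import url

    urlpatterns = [
        url(r'^profile/$', 'myapp.views.profile'),
    ]

    # Modern equivalent: import the callable and reference it directly.
    from django.urls import path
    from myapp import views

    urlpatterns = [
        path('profile/', views.profile),
    ]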


Enjoy your decade-long backwards compatibility (and get in touch if some 
compatibility shims are missing for you)


Pascal Chambon

Project homepage : https://github.com/pakal/django-compat-patcher
Download : https://pypi.org/project/django-compat-patcher/0.10/

PS: The latest versions of Pip, with the improved dependency resolver, might 
block on theoretical dependency conflicts even though Django-Compat-Patcher 
solves them; in that case you might have to keep using an older Pip version, 
or bypass the Pip dependency resolver, until some escape hatches are 
implemented in it.

___
Python-announce-list mailing list -- python-announce-list@python.org
To unsubscribe send an email to python-announce-list-le...@python.org
https://mail.python.org/mailman3/lists/python-announce-list.python.org/
Member address: arch...@mail-archive.com


Re: learning python building 2nd app, need advices

2021-01-11 Thread pascal z via Python-list
On Monday, January 11, 2021 at 2:07:03 PM UTC, Chris Angelico wrote:
> On Tue, Jan 12, 2021 at 1:01 AM pascal z via Python-list 
>  wrote: 
> > 
> > On Monday, January 11, 2021 at 1:45:31 PM UTC, Greg Ewing wrote: 
> > > On 12/01/21 1:12 am, pascal z wrote: 
> > > > As alternative, I pasted it into github and pasted it back into this 
> > > > page, it's ok when pasting but when posting it fails keeping spaces... 
> > > The indentation in your last three posts looks fine here in 
> > > the comp.lang.python newsgroup. If it doesn't look right to 
> > > you, it may be the fault of whatever you're using to read 
> > > the group. 
> > > 
> > > -- 
> > > Greg 
> > 
> > @Greg, then if you want, you can post these from what you're using. I tried 
> > sending a few times and it seemed it didn't work when refreshing the google 
> > group discussion page. However, just looking now at the discussion through 
> > emails, shows indentation right. I'm using firefox. I'll try using chromium 
> > for later posts if that makes things easier. 
> >
> Easy fix: stop looking at the Google Group page. 
> 
> ChrisA
OK, the posts show up fine by email. Ten years ago or so, I was quite involved 
in comp.lang.c and I don't remember having this issue. Of course, only Python 
has such formatting requirements, but from what I can remember, indentation 
worked fine back then. One more thing: there were a lot fewer ads than now, 
and comp.lang.python seems to show more ads than other groups like comp.lang.c.
-- 
https://mail.python.org/mailman/listinfo/python-list


Re: learning python building 2nd app, need advices

2021-01-11 Thread pascal z via Python-list
On Monday, January 11, 2021 at 1:45:31 PM UTC, Greg Ewing wrote:
> On 12/01/21 1:12 am, pascal z wrote: 
> > As alternative, I pasted it into github and pasted it back into this page, 
> > it's ok when pasting but when posting it fails keeping spaces...
> The indentation in your last three posts looks fine here in 
> the comp.lang.python newsgroup. If it doesn't look right to 
> you, it may be the fault of whatever you're using to read 
> the group. 
> 
> -- 
> Greg

@Greg, then if you want, you can post these from what you're using. I tried 
sending a few times and it seemed it didn't work when refreshing the google 
group discussion page. However, just looking now at the discussion through 
emails, shows indentation right. I'm using firefox. I'll try using chromium for 
later posts if that makes things easier.
-- 
https://mail.python.org/mailman/listinfo/python-list


Re: learning python building 2nd app, need advices

2021-01-11 Thread pascal z via Python-list
On Monday, January 11, 2021 at 12:00:28 PM UTC, Loris Bennett wrote:
> pascal z  writes: 
> 
> > tab to space on linux is not something easy to do 
> 
> I would argue that you are mistaken, although that depends somewhat on 
> your definition of 'easy'. 
> 
> > , I had to launch windows and use notepad++. 
> 
> There is the Linux command 'expand' , which I have never used, but which 
> sounds like it will do what you want: 
> 
> $ expand --help 
> Usage: expand [OPTION]... [FILE]... 
> Convert tabs in each FILE to spaces, writing to standard output. 
> 
> As an Emacs user, personally I would use the command 
> 
> M-x untabify 
> 
> within Emacs. I assume that Vim has something similar. 
> 
> Cheers, 
> 
> Loris 
> 
> -- 
> This signature is currently under construction.


Thanks, I'm going to try

As an alternative, I pasted it into GitHub and pasted it back into this page; 
it's OK when pasting, but when posting it fails to keep the spaces... Until I 
can find a way to do it, here is the GitHub link:

https://github.com/barpasc/listfiles/blob/main/pyFilesGest_6B18.py
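
For what it's worth, the tab-to-space conversion can also be done from 
Python itself; a minimal sketch using str.expandtabs() (the output file 
name is hypothetical):

    # Convert every tab to 4-column tab stops (i.e. 4-space indents here).
    with open("pyFilesGest_6B18.py") as src:
        text = src.read()

    with open("pyFilesGest_6B18_spaces.py", "w") as dst:
        dst.write(text.expandtabs(4))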
-- 
https://mail.python.org/mailman/listinfo/python-list



Re: learning python building 2nd app, need advices

2021-01-11 Thread pascal z via Python-list
tab to space on linux is not something easy to do, I had to launch windows and 
use notepad++. Anyway, indentation should all be converted to spaces below 

#!/usr/bin/env python3
# -*- coding: utf-8 -*-

import locale
import os
import csv
from tkinter import messagebox as msg

try:
    from tkinter import *
    import ttk
except ImportError:
    import tkinter as tk  # GUI package
    from tkinter import ttk


def fx_BasicListing():
    # argx mode = 1 for basic listing
    # argx mode = 2 for advanced listing
    # "txt" for the csv record type (txt/csv)
    # tree.delete(*tree.get_children())
    fx_browseFoldersZ(1)
    return

def fx_AdvancedListing():
    # argx mode = 1 for basic listing
    # argx mode = 2 for advanced listing
    # fx_browseFoldersZ(2, "txt")
    # tree.destroy()
    # tree.delete(*tree.get_children())
    fx_browseFoldersZ(2)
    return

def fx_browseFoldersZ(argy):
    # argx mode = 1 for basic listing
    # argx mode = 2 for advanced listing
    # "txt" for the csv record type (txt/csv)
    tree.delete(*tree.get_children())
    fx_browseFolders(argy, "txt")

###
###
###

def fx_writeCSV(*arr):

    csv_file_title = 'csv_1_baselisting.csv'
    # csv path entry box
    CSV_FILE = vcsv_path.get()

    if not os.path.exists(CSV_FILE):
        os.makedirs(CSV_FILE)

    CSV_FILE += csv_file_title
    print('%s' % CSV_FILE)

    with open(CSV_FILE, 'w', newline='\n') as f:
        write = csv.writer(f, doublequote=True, delimiter=';')
        for row in arr:
            write.writerows(row)

def fx_writeCSV_str(txt_str):
    csv_file_title = 'csvtxt_1_baselisting.csv'
    # csv path entry box
    CSV_FILE = vcsv_path.get()

    if not os.path.exists(CSV_FILE):
        os.makedirs(CSV_FILE)

    CSV_FILE += csv_file_title
    print('%s' % CSV_FILE)

    with open(CSV_FILE, 'w') as f:
        f.write(txt_str)

    # fx_LoadCSV(CSV_FILE)

    with open(CSV_FILE, 'r') as f:
        reader = csv.DictReader(f, delimiter=';')
        for row in reader:
            col1 = row['Path']
            col2 = row['Folder-file']
            col3 = row['Size in Byte']
            col4 = row['Size in Kb']
            col5 = row['Size in Mb']
            col6 = row['Size in Gb']
            col7 = row['type']

            tree.insert('', 'end', values=(col1, col2, col3, col4, col5,
                                           col6, col7))

    return

###
###

def fx_chkPath(xzPath):
    isxFile = os.path.isfile(xzPath)
    isxDir = os.path.isdir(xzPath)
    print("DOSSIER OUI", isxDir)   # debug: folder found?
    if isxDir:
        return
    elif not isxDir:
        msg.showwarning("Folder path", "WD Path entered not found")
        return


###
###
###


def fx_browseFolders(argz, tycsv):
    tree.delete(*tree.get_children())
    # /// /// ///
    csv_txt = ""
    csv_contents = ""
    counterPath = 0
    size = 0
    f_size = 0
    f_vscale = 0
    # /// /// ///

    # path WD
    Lpath = vtxt_path.get()
    print('%s' % Lpath)

    # include files
    vvchkboxF = vchkboxF.get()
    # print("include files:::", vchkboxF.get())

    # include modification date
    print(vchkboxD.get())

    # include creation date
    print(vchkboxC.get())

    # scale
    f_vscale = int(var_scale.get())
    print(f_vscale)

    # path WD 2
    if Lpath.endswith(os.path.sep):
        Lpath = Lpath[:-1]

    # isFile = os.path.isfile(Lpath)
    # print("fichier?", isFile)
    fx_chkPath(Lpath)

    counterPath = Lpath.count(os.path.sep)

    csv_contents = "Path;Folder-file;Size in Byte;Size in Kb;Size in Mb;Size in Gb;type\n"

    csv_txt = csv_contents

    # csv_contents
    # 1-FOLDER PATH
    # 2-FILENAME
    # 3-FOLDER PATH FULL
    # 4-Size in Byte
    # 5-Size in Kb
    # 6-Size in Mb
    # 7-Size in Gb
    # 8-type\n

    ### BASIC LISTING #
    if argz == 1:
        print("basic listing")
        file_paths = []
        file_paths.append([csv_contents])
        for root, dirs, files in os.walk(Lpath, topdown=True):
            for file in files:
                if tycsv == "csv":
                    vfolder_path = root + os.sep
                    vfile_name = "'" + file + "'"
                    vfolder_path_full = root + os.sep + file
                    csv_contents = "%s;%s;%s;%s;%s;%s;%s" % (vfolder_path,
                        vfile_name, 'na', 'na', 'na', 'na', "folder")
                    file_paths.append([csv_contents])
                elif tycsv == "txt":
                    vfolder_path = root


Re: learning python building 2nd app, need advices

2021-01-08 Thread pascal z via Python-list
And something important for this app is the file listing: how to avoid listing 
the small files that make up an application (.ini and binary files), so that if 
it's an application, it would report the size of the application's folder and 
not list its contents (or make that optional)?
-- 
https://mail.python.org/mailman/listinfo/python-list


Re: learning python building 2nd app, need advices

2021-01-08 Thread pascal z via Python-list
Is there any way to attach a file, because I lose the indentation?
-- 
https://mail.python.org/mailman/listinfo/python-list


learning python building 2nd app, need advices

2021-01-08 Thread pascal z via Python-list
Hi,

This is a Python app I was working on; can you help me make it a 
beautiful-looking app like BleachBit or CCleaner?

The whole code is below. What it does: it lists all folders and files from a 
specified path, reports some info like size in MB or GB, and exports it to a 
CSV file for further processing (maybe with a customized dashboard). The 
listing will also be used to rename multiple files, to help with ordering and 
finding files, because I find the current renaming tools difficult to use. For 
now it just gives info about folders and files, and renames. Maybe a backup 
tool would be nice, please advise. But the code is the opposite of 
bullet-proof; if it could be made more robust, that would be a way to start 
and continue.

The messy code:

#!/usr/bin/env python3
# -*- coding: utf-8 -*-

import locale
import os
import csv
from tkinter import messagebox as msg

try:
    from tkinter import *
    import ttk
except ImportError:
    import tkinter as tk  # GUI package
    from tkinter import ttk


def fx_BasicListing():
    # argx mode = 1 for basic listing
    # argx mode = 2 for advanced listing
    # "txt" for the csv record type (txt/csv)
    # tree.delete(*tree.get_children())
    fx_browseFoldersZ(1)
    return

def fx_AdvancedListing():
    # argx mode = 1 for basic listing
    # argx mode = 2 for advanced listing
    # fx_browseFoldersZ(2, "txt")
    # tree.destroy()
    # tree.delete(*tree.get_children())
    fx_browseFoldersZ(2)
    return

def fx_browseFoldersZ(argy):
    # argx mode = 1 for basic listing
    # argx mode = 2 for advanced listing
    # "txt" for the csv record type (txt/csv)
    tree.delete(*tree.get_children())
    fx_browseFolders(argy, "txt")

###
###
###

def fx_writeCSV(*arr):

    csv_file_title = 'csv_1_baselisting.csv'
    # csv path entry box
    CSV_FILE = vcsv_path.get()

    if not os.path.exists(CSV_FILE):
        os.makedirs(CSV_FILE)

    CSV_FILE += csv_file_title
    print('%s' % CSV_FILE)

    with open(CSV_FILE, 'w', newline='\n') as f:
        write = csv.writer(f, doublequote=True, delimiter=';')
        for row in arr:
            write.writerows(row)

def fx_writeCSV_str(txt_str):
    csv_file_title = 'csvtxt_1_baselisting.csv'
    # csv path entry box
    CSV_FILE = vcsv_path.get()

    if not os.path.exists(CSV_FILE):
        os.makedirs(CSV_FILE)

    CSV_FILE += csv_file_title
    print('%s' % CSV_FILE)

    with open(CSV_FILE, 'w') as f:
        f.write(txt_str)

    # fx_LoadCSV(CSV_FILE)

    with open(CSV_FILE, 'r') as f:
        reader = csv.DictReader(f, delimiter=';')
        for row in reader:
            col1 = row['Path']
            col2 = row['Folder-file']
            col3 = row['Size in Byte']
            col4 = row['Size in Kb']
            col5 = row['Size in Mb']
            col6 = row['Size in Gb']
            col7 = row['type']

            tree.insert('', 'end', values=(col1, col2, col3, col4, col5,
                                           col6, col7))

    return

###
###

def fx_chkPath(xzPath):
    isxFile = os.path.isfile(xzPath)
    isxDir = os.path.isdir(xzPath)
    print("DOSSIER OUI", isxDir)   # debug: folder found?
    if isxDir:
        return
    elif not isxDir:
        msg.showwarning("Folder path", "WD Path entered not found")
        return


###
###
###


def fx_browseFolders(argz, tycsv):
    tree.delete(*tree.get_children())
    # /// /// ///
    csv_txt = ""
    csv_contents = ""
    counterPath = 0
    size = 0
    f_size = 0
    f_vscale = 0
    # /// /// ///

    # path WD
    Lpath = vtxt_path.get()
    print('%s' % Lpath)

    # include files
    vvchkboxF = vchkboxF.get()
    # print("include files:::", vchkboxF.get())

    # include modification date
    print(vchkboxD.get())

    # include creation date
    print(vchkboxC.get())

    # scale
    f_vscale = int(var_scale.get())
    print(f_vscale)

    # path WD 2
    if Lpath.endswith(os.path.sep):
        Lpath = Lpath[:-1]

    # isFile = os.path.isfile(Lpath)
    # print("fichier?", isFile)
    fx_chkPath(Lpath)

    counterPath = Lpath.count(os.path.sep)

    csv_contents = "Path;Folder-file;Size in Byte;Size in Kb;Size in Mb;Size in Gb;type\n"

    csv_txt = csv_contents

    # csv_contents
    # 1-FOLDER PATH
    # 2-FILENAME
    # 3-FOLDER PATH FULL
    # 4-Size in Byte
    # 5-Size in Kb
    # 6-Size in Mb
    # 7-Size in Gb
    # 8-type\n

    ### BASIC LISTING #
    if argz == 1:
        print("basic listing")

Re: sudo python PermissionError [Errno 13] Permission denied

2020-12-17 Thread Pascal
you are right !

the "sticky bit" set to /tmp/ prevents the root user from altering the file
belonging to the simple user !

$ ls -ld /tmp/
drwxrwxrwt 13 root root 320 Dec 17 13:22 /tmp/

$ ls -l /tmp/test
-rw-r--r-- 1 user 0 Dec 17 13:24 /tmp/test

$ echo test | sudo tee -a /tmp/test
tee: /tmp/test: Permission denied
test

but it does not prevent its deletion !

$ sudo rm -v /tmp/test
removed '/tmp/test'.

which misled me : sorry for the waste of time.

happy end of year 2020, lacsaP.

Le jeu. 17 déc. 2020 à 13:09, <2qdxy4rzwzuui...@potatochowder.com> a écrit :

> On 2020-12-17 at 11:17:37 +0100,
> Pascal  wrote:
>
> > hi,
> >
> > here, I have this simple script that tests if the /tmp/test file can be
> > opened in write mode :
> >
> > $ cat /tmp/append
> > #!/usr/bin/python
> > with open('/tmp/test', 'a'): pass
> >
> > the file does not exist yet :
> >
> > $ chmod +x /tmp/append
> > $ ls -l /tmp/test
> > ls: cannot access '/tmp/test': No such file or directory
> >
> > the script is launched as a simple user :
> >
> > $ /tmp/append
> > $ ls -l /tmp/test
> > -rw-r--r-- 1 user user 0 Dec 17 10:30 /tmp/test
> >
> > everything is ok.
> > now, the script fails if it is replayed as root user with the sudo
> command :
> >
> > $ sudo /tmp/append
> > [sudo] password for user:
> > Traceback (most recent call last):
> >   File "/tmp/append", line 2, in 
> > with open('/tmp/test', 'a'):
> > PermissionError: [Errno 13] Permission denied: '/tmp/test'
> >
> > the problem is the same if the opening mode is 'w' or if "sudo -i" or
> "su -"
> > are used.
> >
> > why can't root user under python manipulate the simple user file ?
>
> This has to do with the idiosyncratic permissions of the /tmp directory
> and not your code.  In my shell on my Linux box:
>
> $ rm -f /tmp/x
> $ echo x >/tmp/x
> $ echo x | sudo tee /tmp/x
> tee: /tmp/x: Permission denied
> x
>
> $ ls -ld /tmp
> drwxrwxrwt 13 root root 380 Dec 17 06:03 /tmp
>
> Try your experiment in a different directory, one without the sticky bit
> set.
> --
> https://mail.python.org/mailman/listinfo/python-list
>
-- 
https://mail.python.org/mailman/listinfo/python-list


sudo python PermissionError [Errno 13] Permission denied

2020-12-17 Thread Pascal
hi,

here, I have this simple script that tests if the /tmp/test file can be
opened in write mode :

$ cat /tmp/append
#!/usr/bin/python
with open('/tmp/test', 'a'): pass

the file does not exist yet :

$ chmod +x /tmp/append
$ ls -l /tmp/test
ls: cannot access '/tmp/test': No such file or directory

the script is launched as a simple user :

$ /tmp/append
$ ls -l /tmp/test
-rw-r--r-- 1 user user 0 Dec 17 10:30 /tmp/test

everything is ok.
now, the script fails if it is replayed as root user with the sudo command :

$ sudo /tmp/append
[sudo] password for user:
Traceback (most recent call last):
  File "/tmp/append", line 2, in 
with open('/tmp/test', 'a'):
PermissionError: [Errno 13] Permission denied: '/tmp/test'

the problem is the same if the opening mode is 'w' or if "sudo -i" or "su -"
are used.

why can't root user under python manipulate the simple user file ?

regards, lacsaP.
-- 
https://mail.python.org/mailman/listinfo/python-list


[issue5906] Risk of confusion in multiprocessing module - daemonic processes

2020-10-26 Thread Pascal Chambon


Pascal Chambon  added the comment:

The latest doc now has a quick mention of the fact that "daemon" is not used 
in the Unix sense, so it seems fine now  B-)

https://docs.python.org/3/library/multiprocessing.html?#multiprocessing.Process.daemon

"""Additionally, these are not Unix daemons or services, they are normal 
processes that will be terminated (and not joined) if non-daemonic processes 
have exited."""

My paragraph was just my one attempt at distinguishing concepts, it was never 
part of the official docs

--

___
Python tracker 
<https://bugs.python.org/issue5906>
___
___
Python-bugs-list mailing list
Unsubscribe: 
https://mail.python.org/mailman/options/python-bugs-list/archive%40mail-archive.com



Re: python show folder files and not subfolder files

2020-10-04 Thread pascal z via Python-list
On Thursday, September 24, 2020 at 4:37:07 PM UTC+2, Terry Reedy wrote:
> On 9/23/2020 7:24 PM, pascal z via Python-list wrote:
> > Please advise if the following is ok (i don't think it is)
> > 
> > #!/usr/bin/env python3
> > # -*- coding: utf-8 -*-
> > 
> > import os
> > 
> > csv_contents = ""
> > output_file = '/home/user/Documents/csv/output3csv.csv'
> > Lpath = '/home/user/Documents/'
> > 
> > csv_contents = "FOLDER PATH;Size in Byte;Size in Kb;Size in Mb;Size in Gb\n"
> > 
> > d_size = 0
> > for root, dirs, files in os.walk(Lpath, topdown=False):
> >  for i in files:
> >  d_size += os.path.getsize(root + os.sep + i)
> >  csv_contents += "%s   ;%.2f   ;%.2f   ;%.2f   ;%.2f  \n" % (root, 
> > d_size, d_size/1024, d_size/1048576, d_size/1073741824)
> > 
> >  counter = Lpath.count(os.path.sep)
> >  if counter < 5:
> >  for f in os.listdir(Lpath):
> >  path = os.path.join(Lpath, f)
> >  f_size = 0
> >  f_size = os.path.getsize(path)
> >  csv_contents += "%s   ;%.2f   ;%.2f   ;%.2f   ;%.2f  \n" % 
> > (path, f_size, f_size/1024, f_size/1048576, f_size/1073741824)
> > 
> > fp = open(output_file, "w")
> > fp.write(csv_contents)
> > fp.close()
> 
> 
> Read
> https://docs.python.org/3/faq/programming.html#what-is-the-most-efficient-way-to-concatenate-many-strings-together
> -- 
> Terry Jan Reedy

Thanks for this tip. I do think it's better to use lists than to concatenate 
into a string variable. However, writing a list to a CSV file is not so easy. 
If the strings stored in the list contain commas and single quotes (like song 
titles), it messes up the whole CSV the first time it meets one. Example with 
arr as the list:


import csv
import io

(...)

csv_contents = "%s;%s;%s;%.2f;%.2f;%.2f;%.2f;%s" % (vfolder_path, vfile_name, 
vfolder_path_full, 0.00, 0.00, 0.00,0.00, "folder")
arr.append([csv_contents])

b = io.BytesIO()
with open(CSV_FILE,'w', newline ='\n') as f:
#write = csv.writer(f,delimiter=';')
#write = csv.writer(f,quotechar='\'', 
quoting=csv.QUOTE_NONNUMERIC,delimiter=',')
write = csv.writer(f,b)
for row in arr:
write.writerows(row)

(...)

string samples: ;'Forgotten Dreams' Mix.mp3;'Awakening of a Dream' Ambient 
Mix.mp3;Best of Trip-Hop & Downtempo & Hip-Hop Instrumental.mp3;2-Hour _ Space 
Trance.mp3

For the titles above, the easiest way to write the CSV, for me, is:


(...)
csv_contents += "%s;%s;%s;%.2f;%.2f;%.2f;%.2f;%s" % (vfolder_path, vfile_name,
    vfolder_path_full, 0.00, 0.00, 0.00, 0.00, "folder")

with open(CSV_FILE, 'w') as f:
    f.write(csv_contents)


csv_contents can be very large and it seems to work; it can concatenate 30k 
items and it's OK. Also, with the above I get the expected result, with each 
of the 8 fields holding the corresponding data. This is not always the case 
with csv writerows: if it meets a character it can't process, from there on 
everything goes into a single-cell row, and the split on semicolons doesn't 
work anymore.

I am not allowed to change the name of the files (it could be used later 
somewhere else, making side effects...).
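
(For reference, the usual way to keep awkward characters from breaking 
the file is to hand csv.writer one list of fields per record and let it 
do the quoting; a minimal sketch with made-up field values:)

    import csv

    rows = [
        ["/music/", "'Forgotten Dreams' Mix.mp3", 0.00, 0.00, 0.00, 0.00, "file"],
        ["/music/", "Best of Trip-Hop & Downtempo.mp3", 0.00, 0.00, 0.00, 0.00, "file"],
    ]

    with open("output.csv", "w", newline="") as f:
        writer = csv.writer(f, delimiter=";", quoting=csv.QUOTE_MINIMAL)
        writer.writerow(["Path", "Folder-file", "Size in Byte", "Size in Kb",
                         "Size in Mb", "Size in Gb", "type"])
        writer.writerows(rows)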
-- 
https://mail.python.org/mailman/listinfo/python-list


Re: python if and same instruction line not working

2020-10-04 Thread pascal z via Python-list
On Tuesday, September 29, 2020 at 5:28:22 PM UTC+2, MRAB wrote:
> On 2020-09-29 15:42, pascal z via Python-list wrote:
> > I need to change the script commented out to the one not commented out. Why?
> > 
> >  # for x in sorted (fr, key=str.lower):
> >  # tmpstr = x.rpartition(';')[2]
> >  # if x != csv_contents and tmpstr == "folder\n":
> >  # csv_contentsB += x
> >  # elif x != csv_contents and tmpstr == "files\n":
> >  # csv_contentsC += x
> > 
> >  for x in sorted (fr, key=str.lower):
> >  if x != csv_contents:
> >  tmpstr = x.rpartition(';')[2]
> >  if tmpstr == "folder\n":
> >  csv_contentsB += x
> >  elif tmpstr == "file\n":
> >  csv_contentsC += x
> > 
> You haven't defined what you mean by "not working" for any test values 
> to try, but I notice that the commented code has "files\n" whereas the 
> uncommented code has "file\n".

Very good point, that is probably what caused the issue.

By the way, it seems it's OK to check for \n as the end of line; it will work 
on Windows, Linux and Mac platforms, even though Windows uses \r\n.
-- 
https://mail.python.org/mailman/listinfo/python-list


python if and same instruction line not working

2020-09-29 Thread pascal z via Python-list
I need to change the script commented out to the one not commented out. Why?

# for x in sorted (fr, key=str.lower):
# tmpstr = x.rpartition(';')[2]
# if x != csv_contents and tmpstr == "folder\n":
# csv_contentsB += x
# elif x != csv_contents and tmpstr == "files\n":
# csv_contentsC += x

for x in sorted (fr, key=str.lower):
if x != csv_contents:
tmpstr = x.rpartition(';')[2]
if tmpstr == "folder\n":
csv_contentsB += x
elif tmpstr == "file\n":
csv_contentsC += x
-- 
https://mail.python.org/mailman/listinfo/python-list


Re: python show folder files and not subfolder files

2020-09-23 Thread pascal z via Python-list
OK, I came up with

if os.path.isfile(path)

following

path = os.path.join(Lpath, f)

and it seems to be OK, no duplicates or wrong sizes...

thanks

-- 
https://mail.python.org/mailman/listinfo/python-list


Re: python show folder files and not subfolder files

2020-09-23 Thread pascal z via Python-list
Please advise if the following is OK (I don't think it is):

#!/usr/bin/env python3
# -*- coding: utf-8 -*-

import os

csv_contents = ""
output_file = '/home/user/Documents/csv/output3csv.csv'
Lpath = '/home/user/Documents/'

csv_contents = "FOLDER PATH;Size in Byte;Size in Kb;Size in Mb;Size in Gb\n"

d_size = 0
for root, dirs, files in os.walk(Lpath, topdown=False):
    for i in files:
        d_size += os.path.getsize(root + os.sep + i)
    csv_contents += "%s   ;%.2f   ;%.2f   ;%.2f   ;%.2f  \n" % (root, d_size,
        d_size/1024, d_size/1048576, d_size/1073741824)

    counter = Lpath.count(os.path.sep)
    if counter < 5:
        for f in os.listdir(Lpath):
            path = os.path.join(Lpath, f)
            f_size = 0
            f_size = os.path.getsize(path)
            csv_contents += "%s   ;%.2f   ;%.2f   ;%.2f   ;%.2f  \n" % (path,
                f_size, f_size/1024, f_size/1048576, f_size/1073741824)

fp = open(output_file, "w")
fp.write(csv_contents)
fp.close()
-- 
https://mail.python.org/mailman/listinfo/python-list


python show .

2020-09-23 Thread pascal z via Python-list
Hello, I'm working on a script where I want to loop through folders somewhat 
recursively to get information, but I want to limit the file info to a certain 
folder depth, for example:

/home/user/Documents/folder1
/home/user/Documents/folder2
/home/user/Documents/folder3/folder1/file1
/home/user/Documents/folder4/file1
/home/user/Documents/file1
/home/user/Documents/file2
/home/user/Documents/file3

I only want file1, 2, 3 at the root of Documents to show (write to a csv) and 
I'm using the script below

### SCRIPT###
#!/usr/bin/env python3
# -*- coding: utf-8 -*-

import os

csv_contents = ""
output_file = '/home/user/Documents/csv/output2csv.csv'
Lpath = '/home/user/Documents/'

csv_contents = "FOLDER PATH;Size in Byte;Size in Kb;Size in Mb;Size in Gb\n"

for root, dirs, files in os.walk(Lpath, topdown=False):
    counter = Lpath.count(os.path.sep)
    if counter < 5:
        for f in os.listdir(root):
            path = os.path.join(root, f)
            f_size = 0
            f_size = os.path.getsize(path)
            csv_contents += "%s   ;%.2f   ;%.2f   ;%.2f   ;%.2f  \n" % (path,
                f_size, f_size/1024, f_size/1048576, f_size/1073741824)

fp = open(output_file, "w")
fp.write(csv_contents)
fp.close()
### END OF SCRIPT###

When I run this script, I get files in subfolders too. For now, I need to keep 
using "os.walk" because the script includes functions that I didn't include 
here, to keep things simple.
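
(A minimal sketch of the intended behaviour, keeping os.walk but pruning 
the descent; the helper name is hypothetical:)

    import os

    def list_root_files(Lpath):
        rows = []
        for root, dirs, files in os.walk(Lpath):
            dirs[:] = []          # prune: do not descend into subfolders
            for f in files:
                path = os.path.join(root, f)
                f_size = os.path.getsize(path)
                rows.append("%s   ;%.2f   ;%.2f   ;%.2f   ;%.2f  " % (
                    path, f_size, f_size/1024, f_size/1048576, f_size/1073741824))
        return rows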

Pascal
-- 
https://mail.python.org/mailman/listinfo/python-list


python oop learning communication between function inside a class

2020-09-17 Thread pascal z via Python-list
Hello,
I would like to know whether it is possible to call a member function of a 
class and pass it a variable.

Example:

class Application(tk.Frame):
    """docstring for ."""

    def __init__(self, parent):
        super(Application, self).__init__(parent)
        self.parent = parent
        parent.title('Courses selections')
        # parent.geometry("700x350")
        parent.geometry('500x320+0+0')  # Width x Height
        # Create widgets/grid
        self.create_widgets()
        self.selected_item = 0

    def create_widgets(self):
        ### FIRST NAME LABEL + ENTRY
        self.firstName_txt = tk.StringVar()
        self.firstName_lbl = tk.Label(self.parent, text='First Name',
                                      font=('bold'))
        self.firstName_lbl.place(x=20, y=10)
        self.firstName_entry = tk.Entry(self.parent,
                                        textvariable=self.firstName_txt)
        self.firstName_entry.place(x=120, y=10)

        ...

    def prereq(self):
        self.boo = 1

        if self.firstName_txt.get() == "":
            msg.showwarning("Missing information", "First name info missing")
            self.boo = 0   # must be stored on self for the check below
        elif self.lastName_txt.get() == "":
            msg.showwarning("Missing information", "Last name info missing")
            self.boo = 0
        elif self.age_txt.get() == "":
            msg.showwarning("Missing information", "Age info missing")
            self.boo = 0
        elif self.rBtnGender.get() == 0:
            msg.showwarning("Missing information", "Gender info missing")
            self.boo = 0

        if self.boo == 1:
            self.fname = self.firstName_txt.get()
            self.lname = self.lastName_txt.get()
            self.age = int(self.age_txt.get())

            self.selectedCourse = self.coursesLBX.get(self.coursesLBX.curselection())

            if self.age < 21:
                msg.showwarning("Invalid Age", "Invalid Age, you are not eligible")
                return
            elif self.age >= 21:
                pass

            ### SELECTED COURSE
            if self.selectedCourse == "Quality Management (Adv.)":
                self.prereq = "The prereq for this course is Quality Management (Int)."
                self.flag = 1
            elif self.selectedCourse == "Financial Management (Adv.)":
                self.prereq = "The prereq for this course is Financial Management (Bas)."
                self.flag = 1
            elif self.selectedCourse == "Project Management (Adv.)":
                self.prereq = "The prereq for this course is Project Management (Int)."
                self.flag = 0
            else:
                self.prereq = "The prereq for this course is Project Management (Bas)."
                self.flag = 0

            ### PART TIME
            if self.chkBtnPTime.get() == 1 and self.flag == 0:
                self.str2 = "\nThis course is not available part time."
            elif self.chkBtnPTime.get() == 1 and self.flag == 1:
                self.str2 = "\nThis course is available part time."
            else:
                self.str2 = ""

            self.result = self.prereq + self.str2
            msg.showinfo('Form info', self.result)


    def save2db(self):
        try:
            db.insert(self.fname, self.lname, self.age)
            msg.showinfo('DB action', "Selection inserted into db")
        except Exception:
            msg.showinfo("Form submission failed", "Plz check ur input")

##

all script available on github 
https://github.com/barpasc/python_tuto_400_oop

In function save2db, I would like to know if there is any alternative to using 
try/except. The alternative I'm thinking of is something like:

def save2db(self, boo):
    if boo == 1:
        do something
    else:
        do something like return to the previous step...

Is this possible?




-- 
https://mail.python.org/mailman/listinfo/python-list


fileinput

2019-10-26 Thread Pascal
I have a small python (3.7.4) script that should open a log file and
display its content but as you can see, an encoding error occurs :

---

import fileinput
import sys
try:
source = sys.argv[1:]
except IndexError:
source = None
for line in fileinput.input(source):
print(line.strip())

---

python3.7.4 myscript.py myfile.log
Traceback (most recent call last):
...
UnicodeDecodeError: 'utf-8' codec can't decode byte 0xe8 in position 799:
invalid continuation byte

python3.7.4 myscript.py < myfile.log
Traceback (most recent call last):
...
UnicodeDecodeError: 'utf-8' codec can't decode byte 0xe8 in position 799:
invalid continuation byte

---

I add the encoding hook to overcome the error but this time, the script
reacts differently depending on the input used :

---

import fileinput
import sys
try:
source = sys.argv[1:]
except IndexError:
source = None
for line in fileinput.input(source,
openhook=fileinput.hook_encoded("utf-8", "ignore")):
print(line.strip())

---

python3.7.4 myscript.py myfile.log
first line of myfile.log
...
last line of myfile.log

python3.7.4 myscript.py < myfile.log
Traceback (most recent call last):
...
UnicodeDecodeError: 'utf-8' codec can't decode byte 0xe8 in position 799:
invalid continuation byte

python3.7.4 myscript.py /dev/stdin < myfile.log
first line of myfile.log
...
last line of myfile.log

python3.7.4 myscript.py - < myfile.log
Traceback (most recent call last):
...
UnicodeDecodeError: 'utf-8' codec can't decode byte 0xe8 in position 799:
invalid continuation byte

---

does anyone have an explanation and/or solution ?
-- 
https://mail.python.org/mailman/listinfo/python-list


[issue37408] [DOC] Precise that Tarfile "format" argument only concerns writing.

2019-06-26 Thread Pascal Chambon


Pascal Chambon  added the comment:

Looking at tarfile.py, "format" seems only used in addfile() indeed.

--

___
Python tracker 
<https://bugs.python.org/issue37408>
___
___
Python-bugs-list mailing list
Unsubscribe: 
https://mail.python.org/mailman/options/python-bugs-list/archive%40mail-archive.com



[issue37408] [DOC] Precise that Tarfile "format" argument only concerns writing.

2019-06-26 Thread Pascal Chambon


Pascal Chambon  added the comment:

My bad, this was a wrongly targeted PR, the real one is here: 
https://github.com/python/cpython/pull/14389

--

___
Python tracker 
<https://bugs.python.org/issue37408>
___
___
Python-bugs-list mailing list
Unsubscribe: 
https://mail.python.org/mailman/options/python-bugs-list/archive%40mail-archive.com



[issue37408] [DOC] Precise that Tarfile "format" argument only concerns writing.

2019-06-26 Thread Pascal Chambon


Pascal Chambon  added the comment:

PR is on https://github.com/pakal/cpython/pull/1

--

___
Python tracker 
<https://bugs.python.org/issue37408>
___
___
Python-bugs-list mailing list
Unsubscribe: 
https://mail.python.org/mailman/options/python-bugs-list/archive%40mail-archive.com



[issue37408] [DOC] Precise that Tarfile "format" argument only concerns writing.

2019-06-26 Thread Pascal Chambon


New submission from Pascal Chambon :

According to https://bugs.python.org/issue30661#msg339300 , "format" argument 
of Tarfile.open() only concerns the writing of files. It's worth mentioning it 
in the doc, if it's True (confirmation from core maintainers is welcome).

--
components: Library (Lib)
messages: 346586
nosy: pakal
priority: normal
severity: normal
status: open
title: [DOC] Precise that Tarfile "format" argument only concerns writing.

___
Python tracker 
<https://bugs.python.org/issue37408>
___
___
Python-bugs-list mailing list
Unsubscribe: 
https://mail.python.org/mailman/options/python-bugs-list/archive%40mail-archive.com



[issue19645] decouple unittest assertions from the TestCase class

2019-06-13 Thread Pascal Chambon


Pascal Chambon  added the comment:

"Suppose failureException is set to TypeError on that TestCase class, how would 
your assertEquals signal failure to the test runner?"

failureException is an artefact from unittest.TestCase. It's only supposed to 
be used in a TestCase context, with an unittest-compatible runner. If people 
corrupt it, I guess it's their problem?

The point of decoupling is imho that other test runners might use the separate 
set of assertions. These assertions should raise a sensible default (i.e. 
AssertionError) when encountering troubles, and accepting an alternate 
exception class as parameter would allow each test framework to customize the 
way these assertions behave for it.

--

___
Python tracker 
<https://bugs.python.org/issue19645>
___
___
Python-bugs-list mailing list
Unsubscribe: 
https://mail.python.org/mailman/options/python-bugs-list/archive%40mail-archive.com



[issue19645] decouple unittest assertions from the TestCase class

2019-06-13 Thread Pascal Chambon


Pascal Chambon  added the comment:

I don't get it, why would failureException block anything ? The 
unittest.TestCase API must remain the same anyway, but it could become just a 
wrapper towards external assertions.

For example :

class TestCase:

   assertEqual = wrap(assertions.assert_equal)

Where "wrap" for example is some kind of functools.partial() injecting into 
external assertions a parameter "failure_exception_class". Having all these 
external assertions take such a parameter (defaulting to AssertionError) would 
be a great plus for adaptability anyway.
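
To make the idea concrete, a rough sketch of that shape (hypothetical 
names, not actual stdlib code):

    def assert_equal(first, second, msg=None, *,
                     failure_exception_class=AssertionError):
        # Module-level assertion, usable by any test framework.
        if first != second:
            raise failure_exception_class(msg or "%r != %r" % (first, second))

    class TestCase:
        failureException = AssertionError

        # The method API stays the same; it just delegates to the external
        # assertion, injecting the class's failureException.
        def assertEqual(self, first, second, msg=None):
            return assert_equal(first, second, msg,
                                failure_exception_class=self.failureException)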

--
versions: +Python 3.5 -Python 3.9

___
Python tracker 
<https://bugs.python.org/issue19645>
___
___
Python-bugs-list mailing list
Unsubscribe: 
https://mail.python.org/mailman/options/python-bugs-list/archive%40mail-archive.com



[issue19645] decouple unittest assertions from the TestCase class

2019-06-13 Thread Pascal Chambon


Pascal Chambon  added the comment:

(Redirected here from https://bugs.python.org/issue37262)

I haven't dug into the assertThat() idea, but why not, as a first step, turn 
the assertion methods of TestCase into staticmethods/classmethods instead of 
instance methods?

Since they (to my knowledge) don't need to access an instance dict, they could 
be turned into such instance-less methods, and thus be usable from other 
testing frameworks (like pytest, for those who want to use pytest fixtures and 
yet benefit from advanced assertions like Django's TestCase's assertions).

"failureException" and others are meant to be (sub)class attributes, so no 
backwards incompatible change should occur (unless someone did really weird 
things with manually instantiated TestCases).

--
nosy: +pakal

___
Python tracker 
<https://bugs.python.org/issue19645>
___
___
Python-bugs-list mailing list
Unsubscribe: 
https://mail.python.org/mailman/options/python-bugs-list/archive%40mail-archive.com



[issue37262] Make unittest assertions staticmethods/classmethods

2019-06-13 Thread Pascal Chambon


Pascal Chambon  added the comment:

Indeed I missed this ticket, thanks

--

___
Python tracker 
<https://bugs.python.org/issue37262>
___
___
Python-bugs-list mailing list
Unsubscribe: 
https://mail.python.org/mailman/options/python-bugs-list/archive%40mail-archive.com



[issue37262] Make unittest assertions staticmethods/classmethods

2019-06-13 Thread Pascal Chambon


New submission from Pascal Chambon :

Is there any reasons why assertXXX methods in TestCase are instance methods and 
not staticmethods/classmethods?

Since they (to my knowledge) don't need to access an instance dict, they could 
be turned into instance-less methods, and thus be usable from other testing 
frameworks (like pytest, for those who want to use all the power of fixtures 
and yet benefit from advanced assertions, like Django's TestCase's assertXXX).

Am I missing something here?

--
components: Tests
messages: 345463
nosy: pakal
priority: normal
severity: normal
status: open
title: Make unittest assertions staticmethods/classmethods
type: enhancement
versions: Python 2.7, Python 3.5, Python 3.6, Python 3.7, Python 3.8, Python 3.9

___
Python tracker 
<https://bugs.python.org/issue37262>
___
___
Python-bugs-list mailing list
Unsubscribe: 
https://mail.python.org/mailman/options/python-bugs-list/archive%40mail-archive.com



[issue20767] Some python extensions can't be compiled with clang 3.4

2019-01-30 Thread Pascal van der Donck


Change by Pascal van der Donck :


--
assignee:  -> docs@python
components: +2to3 (2.x to 3.x conversion tool), Argument Clinic, Build, 
Cross-Build, Demos and Tools, Documentation, Extension Modules, FreeBSD, IDLE, 
IO, Installation, Interpreter Core, Library (Lib), Regular Expressions, SSL, 
Tests, Tkinter, Unicode, XML, asyncio, ctypes, email
nosy: +Alex.Willmer, asvetlov, barry, docs@python, larry, mrabarnett, 
r.david.murray, terry.reedy, yselivanov
type: compile error -> resource usage
versions: +Python 3.4

___
Python tracker 
<https://bugs.python.org/issue20767>
___
___
Python-bugs-list mailing list
Unsubscribe: 
https://mail.python.org/mailman/options/python-bugs-list/archive%40mail-archive.com



[issue35846] Incomplete documentation for re.sub

2019-01-29 Thread Pascal Bugnion


New submission from Pascal Bugnion :

The documentation for `re.sub` states that "Unknown escapes such as ``\&`` are 
left alone.". This is only true for escapes which are not ASCII letters, as 
far as I can tell (c.f. source on 
https://github.com/python/cpython/blob/master/Lib/sre_parse.py#L1047).

Would there be value in amending that documentation to either remove that 
sentence or to clarify it? If so, I'm happy to submit a PR on GitHub.
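
(Quick illustration of the current behaviour on Python 3.7+:)

    import re

    re.sub(r"x", r"\&", "x")   # returns '\\&' -- unknown non-letter escape left alone
    re.sub(r"x", r"\q", "x")   # raises re.error: bad escape \q (ASCII-letter escape)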

--
components: Regular Expressions
messages: 334504
nosy: ezio.melotti, mrabarnett, pbugnion
priority: normal
severity: normal
status: open
title: Incomplete documentation for re.sub
versions: Python 3.7, Python 3.8

___
Python tracker 
<https://bugs.python.org/issue35846>
___
___
Python-bugs-list mailing list
Unsubscribe: 
https://mail.python.org/mailman/options/python-bugs-list/archive%40mail-archive.com



ANN: Django-Compat-Patcher0.4

2019-01-23 Thread Pascal Chambon

Dear pythoneers,

I'm pleased to announce version 0.4 of Django-Compat-Patcher, which adds 
forwards/backwards compatibility shims for Django 1.10 and Django 1.11: 
https://pypi.org/manage/project/django-compat-patcher/release/0.4/


The goal of this project is to vastly improve the compatibility of 
different Django versions, so that different applications and plugins of 
its ecosystem can be used together without creating dependency hells.


Django-Compat-Patcher also showcases a shim management system which 
could be ported to other frameworks, to separate compatibility shims 
from (clean) codebase, and apply them automatically only when needed.


It has been used successfully on the Pychronia roleplay portal 
(https://github.com/ChrysalisTeam/pychronia).


Homepage of the DCP project : https://github.com/pakal/django-compat-patcher

regards,
Pascal Chambon

--
https://mail.python.org/mailman/listinfo/python-announce-list

   Support the Python Software Foundation:
   http://www.python.org/psf/donations/


ANN: RsFile 2.2 released

2019-01-23 Thread Pascal Chambon

Dear pythoneers,

I'm pleased to announce version 2.2 of RSFile I/O Library, which adds 
support for python3.6 and python3.7, and fixes some corner cases when 
using PIPEs.


RSFile provides pure-python drop-in replacements for the classes of the 
io module, and for the open() builtin.


Its goal is to provide a cross-platform, reliable, and comprehensive 
synchronous file I/O API, with advanced features like fine-grained 
opening modes, shared/exclusive file record locking, thread-safety, 
cache synchronization, file descriptor inheritability, and handy stat 
getters (size, inode, times…).


Locking is performed using actual file record locking capabilities of 
the OS, not by using separate files/directories as locking markers, or 
other fragile gimmicks. Unix users might particularly be interested by 
the workaround that this library provides, concerning the weird semantic 
of fcntl() locks (when any descriptor to a disk file is closed, the 
process loses ALL locks acquired on this file through any descriptor).


Possible use cases for this library: concurrently writing to logs 
without ending up with garbled data, manipulating sensitive data like 
disk-based databases, synchronizing heterogeneous producer/consumer 
processes when multiprocessing semaphores aren’t an option…
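
A minimal usage sketch (simplified; see the documentation for the exact names and 
parameters of the locking API - the lock_file() call below is an approximation):

    import rsfile

    with rsfile.rsopen("app.log", "a") as stream:     # drop-in replacement for open()
        with stream.lock_file():                      # whole-file record lock (see docs)
            stream.write("one log line, written without interleaving\n")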


Tested on python2.7 and python3.5+, on windows and unix-like systems. 
Should work with IronPython/Jython/PyPy too, since it uses stdlib 
utilities and ctypes bridges.


regards,
Pascal Chambon

--
https://mail.python.org/mailman/listinfo/python-announce-list

   Support the Python Software Foundation:
   http://www.python.org/psf/donations/


Announcing django-compat-patcher 0.2

2017-02-02 Thread Pascal Chambon

*** django-compat-patcher 0.2 has been released. ***

DCP is a “magic” package which adds backwards/forwards compatibility 
patches to Django, so that your app ecosystem doesn’t get broken by 
trivial changes made to the core of the framework.


It injects compatibility shims like function/attribute aliases, restores 
data structures which were replaced by stdlib ones, extends the 
behaviour of callables (eg. referring to a view by object, by name, or 
by dotted path), and can even preserve deprecated modules as “import 
aliases” (ex. keep importing from “django.contrib.comments” instead of 
the now external “django_comments”).


This allows you to upgrade your dependencies one at a time, to 
fork/patch them when you have a proper opportunity, and most importantly 
not to get stuck when deadlines are tight and your dependencies 
suddenly have conflicting requirements. DCP somewhat relaxes the 
deprecation policy of Django, so that it comes closer to semantic 
versioning.


Technically, DCP manages a set of “fixers”, small utilities which 
advertise the change that they make, the versions of Django that they 
support, and which monkey-patch the Django framework on demand. By 
applying these fixers in a proper order (sometimes before, sometimes 
after django.setup()), DCP can work around multiple breaking changes 
which target the same part of the code (eg. a tag library being added 
and then removed).
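
In practice the wiring boils down to a couple of lines, to be placed as early as 
possible in manage.py / wsgi.py (a sketch; double-check the entry point name against 
the README):

    import django_compat_patcher
    django_compat_patcher.patch()   # activates the selected fixers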


Beware, DCP is aimed at project maintainers. If you are developing a 
reusable Django application, you can’t expect all your users to 
integrate DCP as well. In this case, to support a wide range of Django 
versions, you should rather use a toolkit like Django-compat. You may 
think of DCP as a “runtime 2to3 for Django”, whereas Django-Compat is 
rather a “six module for Django”.


Feel free to contribute new fixers, for backwards or forwards 
compatibility, depending on the compatibility troubles you encounter on 
your projects


https://pypi.python.org/pypi/django-compat-patcher
https://github.com/pakal/django-compat-patcher
--
https://mail.python.org/mailman/listinfo/python-announce-list

   Support the Python Software Foundation:
   http://www.python.org/psf/donations/


[issue7659] Attribute assignment on object() instances raises wrong exception

2016-11-17 Thread Pascal Chambon

Pascal Chambon added the comment:

I guess it can, since backward compatibility prevents some normalization here.

Or is it worth updating the doc about "AttributeError", which is misleading regarding 
the type of exception expected in this case?

--
status: pending -> open

___
Python tracker <rep...@bugs.python.org>
<http://bugs.python.org/issue7659>
___
___
Python-bugs-list mailing list
Unsubscribe: 
https://mail.python.org/mailman/options/python-bugs-list/archive%40mail-archive.com



Rstransaction 0.1 released

2016-08-11 Thread Pascal Chambon

Hello,

I'm pleased to announce the release of "rstransaction" package.

This is a python2/python3 toolbox to create transactional systems, for 
any kind of operations: in-memory, on filesystems, on remote storages...


It supports commits/rollbacks and savepoints.

It was never used in production, but is well tested, and easily 
extendable to support different kinds of behaviour: immediate or lazy 
actions, recording of operations to disk files or DBs in case of crash, 
auto-rollback on error or not...


More information here:

https://github.com/pakal/rstransaction
https://pypi.python.org/pypi/RSTransaction

regards,
Pascal Chambon

--
https://mail.python.org/mailman/listinfo/python-announce-list

   Support the Python Software Foundation:
   http://www.python.org/psf/donations/


[issue27141] Fix collections.UserList shallow copy

2016-07-02 Thread Pascal Chambon

Changes by Pascal Chambon <chambon.pas...@gmail.com>:


--
nosy: +pakal

___
Python tracker <rep...@bugs.python.org>
<http://bugs.python.org/issue27141>
___
___
Python-bugs-list mailing list
Unsubscribe: 
https://mail.python.org/mailman/options/python-bugs-list/archive%40mail-archive.com



RSFile v2.1 released

2016-06-24 Thread Pascal Chambon

Dear pythoneers,

I'm pleased to announce a major update of the RSFile I/O Library: 
version 2.1.


RSFile provides drop-in replacements for the classes of the *io* module, 
and for the *open()* builtin.


Its goal is to provide a cross-platform, reliable, and comprehensive 
synchronous file I/O API, with advanced
features like fine-grained opening modes, shared/exclusive file record 
locking, thread-safety, cache synchronization,
file descriptor inheritability, and handy stat getters (size, inode, 
times...).


Possible use cases for this library: concurrently writing to logs 
without ending up with garbled data, manipulating sensitive data like 
disk-based databases, synchronizing heterogeneous producer/consumer 
processes when multiprocessing semaphores aren't an option...

Unix users might particularly be interested in the workaround that this 
library provides, concerning the weird semantics of fcntl() locks (when 
any descriptor to a disk file is closed, the process loses ALL locks 
acquired on this file through any descriptor).
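
For those who have never hit it, the pitfall looks like this (plain stdlib, Unix-only, 
the path being arbitrary for the sketch):

    import fcntl

    f1 = open("/tmp/demo.lock", "w")
    f2 = open("/tmp/demo.lock", "w")      # a second descriptor to the very same file
    fcntl.lockf(f1, fcntl.LOCK_EX)        # exclusive lock taken through f1
    f2.close()                            # closing ANY descriptor to that file...
    # ...also releases the lock held through f1 (POSIX fcntl behaviour),
    # which is the trap that RSFile works around.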

RSFile has been tested with py2.7 and py3.3+, on Windows/Linux systems,
and should theoretically work on other *nix systems and python 
implementations


The technical documentation of RSFile includes a comprehensive description
of concepts and gotchas encountered while developing this library, which 
could
be useful to anyone interested in getting in touch with gory file I/O 
details.


The implementation is currently pure-python, so if you need high 
performance, using standard python streams in parallel will remain 
necessary.

Why v2.1 and not v2.0, you ask? Just some PyPI constraints I wasn't 
aware of; that'll teach me not to erase a just-released version to add 
some cleanup commits to it...



Downloads:
https://pypi.python.org/pypi/RSFile/2.1

Documentation:
http://rsfile.readthedocs.io/en/latest/

Repository:
https://github.com/pakal/rsfile

CHANGELOG:
* Switch from Mercurial to Git
* Remove python2.6 support and its polyfills
* Move backends and test suites inside rsfile package
* Conform rsfile to the behaviour of latest "io" module and "open" builtin
* Make rsfile work against py33, py34 and py35, by leveraging their stdlib test suites
* Rename "win32" to "windows" everywhere (even pywin32 extensions actually handle x64 systems)
* Improve the I/O benchmark runner
* Cache decorated methods to boost performance
* Add support for the new "x" mode flag in rsopen()
* Fix the corner case of uninitialized streams
* Tweak the excessive verbosity of locking tests
* Handle exceptions when closing raw streams (stream is marked as closed anyway)
* Normalize the naming of backend modules
* Fix bugs with __getattr__() lookup forwarding
* Use C/N flags for file existence on opening (-/+ supported but deprecated)
* Automatically compare the behaviour of all possible open modes, between stdlib/io and rsfile
* Autogenerate equivalence matrix for file opening modes, using python-tabulate
* Switch from distutils to setuptools for setup.py
* Add support for the new "opener" parameter of open() builtin
* Strengthen tests around fileno/handle reuse and duplication
* Fix bug regarding improper value of file "modification_time" on windows
* Add implicit flush() before every sync()
* Remove heavy star imports from pywin32 backend
* Roughly test sync() parameters, via performance measurements
* Rename file "uid()" to "unique_id()", to prevent confusion with "user id" (but an alias remains)
* Fix nasty bug where file unique_id could be None in internal registries
* Add lots of defensive assertions
* Make FileTimes repr() more friendly
* Add support for the wrapping of [non-blocking] pipes/fifos
* Reject the opening of directories properly
* Reorganize and cleanup sphinx docs
* Improve docstrings of added/updated methods/attributes
* Explain the file locking semantics better
* Update and correct typos in the "I/O Overview" article
* Document lots of corner cases: thread safety, reentrancy, sync parameters, file-share-delete semantics...
* Remove the now obsolete "multi_syscall_lock" (thread-safe interface does better)
* Integrate tests and doc building with Tox
* Fix bug with windows/ctypes backend on python3.5 (OVERLAPPED structure was broken)
* Add tests for the behaviour of streams after a fork()
* Add optimizations for systems without fork (no need for multiprocessing locks then)
* Normalize "__future__" imports and code formatting
* Review and document the exception types used
* Cleanup/DRY tons of obsolete TODOs and comments
* Better document the CAVEATS of rsfile, regarding fcntl and interoperability with other I/O libs
* Add standard files to the repository (readme, contributing, changelog etc.)
* Integrate with Travis CI
* Add some tweaks to mimic the more tolerant behaviour of python2.7 open(), regarding the mixing of 

Code with random module faster on the vm than the vm host...

2013-11-08 Thread Pascal Bit

Here's the code:

from random import random
from time import clock

s = clock()

for i in (1, 2, 3, 6, 8):
    M = 0
    N = 10**i

    for n in xrange(N):
        r = random()
        if 0.5 < r < 0.6:
            M += 1

    k = (N, float(M)/N)

print (clock()-s)

Running on win7, python 2.7 32 bit, it takes around 30 seconds on average.
Running on xubuntu 32 bit, in VMware on that same Windows 7 host: 20 seconds!
The code runs faster in the VM than on the computer itself...
The python version in this case is 1.5 times faster...
I don't understand.

What causes this?
--
https://mail.python.org/mailman/listinfo/python-list


[issue18171] os.path.expanduser does not use the system encoding

2013-06-09 Thread Pascal Garcia

New submission from Pascal Garcia:

The name of the user contains accents under Windows.

This error occurs when using the function expanduser("~"):

UnicodeDecodeError: 'ascii' codec can't decode byte 0xe9 in position 10: 
ordinal not in range(128)

ascii is the default encoding, as per sys.getdefaultencoding().
If in site.py I enable the locale support, then the default encoding becomes 
cp1252 and the function works.

expanduser should use the encoding used by the system (maybe 
locale.getdefaultlocale()) to decode the paths given by the system, instead of 
the default encoding, which should only be the target encoding.

I believe some other functions may be affected by this problem.
I detected the problem on Windows (XP and 7), but I believe the problem may 
happen on Linux also.

--
components: Library (Lib)
messages: 190850
nosy: plgarcia
priority: normal
severity: normal
status: open
title: os.path.expanduser does not use the system encoding
versions: Python 2.7

___
Python tracker rep...@bugs.python.org
http://bugs.python.org/issue18171
___
___
Python-bugs-list mailing list
Unsubscribe: 
http://mail.python.org/mailman/options/python-bugs-list/archive%40mail-archive.com



[issue18171] os.path.expanduser does not use the system encoding

2013-06-09 Thread Pascal Garcia

Pascal Garcia added the comment:

Here are 2 logs, one with the default site.py forcing defaultencoding to ascii, 
and the other to utf8.
You can see that the home dir includes accents: Pépé (not an insult to anybody 
but this stupid computer :))

When I force using locale.getdefaultlocale() as the encoding then the function 
works, but, after having called expanduser, I need to make an explicit 
decode(locale.getdefaultlocale()), or else the string cannot be used to build 
paths to files.

== with ASCII

C:\Users\pépé> D:\DevelopmentWorkspaces\SCOLASYNC\ScolaSyncNG\scolasync-ng\src\scolasync.py
Traceback (most recent call last):
  File "D:\DevelopmentWorkspaces\SCOLASYNC\ScolaSyncNG\scolasync-ng\src\scolasync.py", line 329, in <module>
    run()
  File "D:\DevelopmentWorkspaces\SCOLASYNC\ScolaSyncNG\scolasync-ng\src\scolasync.py", line 206, in run
    globaldef.initDefs(wd, force)
  File "D:\DevelopmentWorkspaces\SCOLASYNC\ScolaSyncNG\scolasync-ng\src\globaldef.py", line 80, in initDefs
    wrkdir= os.path.expanduser(u"~"+os.sep)
  File "C:\Python27\lib\ntpath.py", line 301, in expanduser
    return userhome + path[i:]
UnicodeDecodeError: 'ascii' codec can't decode byte 0xe9 in position 10: 
ordinal not in range(128)

== with UTF8

C:\Users\pépé> D:\DevelopmentWorkspaces\SCOLASYNC\ScolaSyncNG\scolasync-ng\src\scolasync.py
Traceback (most recent call last):
  File "D:\DevelopmentWorkspaces\SCOLASYNC\ScolaSyncNG\scolasync-ng\src\scolasync.py", line 329, in <module>
    run()
  File "D:\DevelopmentWorkspaces\SCOLASYNC\ScolaSyncNG\scolasync-ng\src\scolasync.py", line 206, in run
    globaldef.initDefs(wd, force)
  File "D:\DevelopmentWorkspaces\SCOLASYNC\ScolaSyncNG\scolasync-ng\src\globaldef.py", line 80, in initDefs
    wrkdir= os.path.expanduser(u"~"+os.sep)
  File "C:\Python27\lib\ntpath.py", line 301, in expanduser
    return userhome + path[i:]
  File "C:\Python27\lib\encodings\utf_8.py", line 16, in decode
    return codecs.utf_8_decode(input, errors, True)
UnicodeDecodeError: 'utf8' codec can't decode byte 0xe9 in position 10: invalid 
continuation byte

--

___
Python tracker rep...@bugs.python.org
http://bugs.python.org/issue18171
___
___
Python-bugs-list mailing list
Unsubscribe: 
http://mail.python.org/mailman/options/python-bugs-list/archive%40mail-archive.com



[issue18171] os.path.expanduser does not use the system encoding

2013-06-09 Thread Pascal Garcia

Pascal Garcia added the comment:

Sorry for this error.
Thanks for the solution.

Here is the code as I modified it:

wrkdir = os.path.expanduser("~" + os.sep)
loc = locale.getdefaultlocale()
if loc[1]:
    encoding = loc[1]
    wrkdir = wrkdir.decode(encoding)

I need to explicitly decode the string if I want to use it and have the next 
statement, a bit further down, working:

os.path.join(wrkdir, u"Tango\\")

Encoding is a very good motivation to go to python3, and if I didn't have 
other constraints it would have been done ages ago.

For this special case I think that the function should return strings in the 
default encoding, and the programmer should not have to know about the 
internals to make the right decode.

But it works, thanks again.
Pascal

--
resolution: invalid - 
status: closed - open
type: behavior - 

___
Python tracker rep...@bugs.python.org
http://bugs.python.org/issue18171
___
___
Python-bugs-list mailing list
Unsubscribe: 
http://mail.python.org/mailman/options/python-bugs-list/archive%40mail-archive.com



[issue17716] From ... import fails when parent package failed but child module succeeded, yet works in std import case

2013-05-21 Thread Pascal Chambon

Pascal Chambon added the comment:

Well, since it's a tough decision to make (erasing all child modules when 
rolling back the parent, or instead reconnecting with children on a 2nd import of 
the parent), I guess it should be discussed on python-dev first, shouldn't it?

--

___
Python tracker rep...@bugs.python.org
http://bugs.python.org/issue17716
___
___
Python-bugs-list mailing list
Unsubscribe: 
http://mail.python.org/mailman/options/python-bugs-list/archive%40mail-archive.com



[issue17636] Modify IMPORT_FROM to fallback on sys.modules

2013-04-15 Thread Pascal Chambon

Changes by Pascal Chambon chambon.pas...@gmail.com:


--
nosy: +Pascal.Chambon

___
Python tracker rep...@bugs.python.org
http://bugs.python.org/issue17636
___
___
Python-bugs-list mailing list
Unsubscribe: 
http://mail.python.org/mailman/options/python-bugs-list/archive%40mail-archive.com



[issue17716] From ... import fails when parent package failed but child module succeeded, yet works in std import case

2013-04-15 Thread Pascal Chambon

Pascal Chambon added the comment:

(sorry for the long post, but it's a complex issue I guess)

I forgot to mention that I observe this behaviour with the latest python2.7, as 
well as python3.3 (I guess other versions behave the same).

I agree that having side effects in script imports looks dangerous, but on the 
other hand it's incredibly handy to use the "script behaviour" of modules so 
that each one initializes/checks itself, rather than relying on the calling of 
initialization methods from somewhere else (many web frameworks don't even 
provide for such setup scripts actually, I have a django ticket open on that 
subject at the moment).

Loads of python modules perform such inits (registration of atexit handlers, 
setup of loggers, of working/temp directories, or even modifying process-level 
settings), so even though we're currently adding protection via exception 
handlers (and checking the idempotency of our imports, crucial points!), I 
cannot guarantee that none of the modules/packages we use will have such 
temporary failures (failures that can't be fixed by the web server, because 
module trees become forever unimportable).

With the video and the importlib code, I'm beginning to have a better 
understanding of the from..import, and I noticed that actually both "import 
mypkg.module_a" and "from mypkg import module_a" get broken when mypkg raised 
an exception after successfully loading module_a. 
It's just that the second form breaks loudly, whereas the first one remains 
silently corrupted (i.e. the variable mypkg.module_a does NOT exist in both 
cases, so there are pending AttributeErrors anyway).

It all comes from the fact that - to speak in importlib/_bootstrap.py terms - 
_gcd_import() assumes everything is loaded and bound when a chain of modules 
(eg. mypkg.module_a) is in sys.modules, whereas intermediary bindings 
(setattr(mypkg, "module_a", module_a)) might have been lost due to an import 
failure (and the removal of the mypkg module).
Hum, I wonder, could we just recheck all bindings inside that _gcd_import()? I 
guess there would be annoying corner cases with circular imports, i.e. we could 
end up creating these bindings while they are still pending to be done in 
parent frames...

Issue 17636 might provide a workaround for some cases, but it doesn't fix the 
root problem of the rolled back import (eg. here the absence of binding 
between mypkg and module_a, whatever the import form that was used). Imagine a 
tree mypkg/mypkg2/module.py: if module.py gets loaded fine but mypkg and 
mypkg2 fail, then later, somewhere else in the code, it seems an "import 
mypkg.mypkg2.module" will SUCCEED even though the module tree is broken, and 
AttributeErrors are pending.

I guess Nick was right (and me wrong): the cleanest solution seems to be to enforce 
an invariant saying that a submodule can NOT fully be in sys.modules if its 
parent is not either loaded or in the process of being loaded (thus if a 
binding between parent and child is missing, we're simply in the case of 
circular dependencies). Said another way, the import system should delete all 
child modules from sys.modules when aborting the import of a parent package. 
What do you think about it?

--

___
Python tracker rep...@bugs.python.org
http://bugs.python.org/issue17716
___
___
Python-bugs-list mailing list
Unsubscribe: 
http://mail.python.org/mailman/options/python-bugs-list/archive%40mail-archive.com



[issue17716] From ... import fails when parent package failed but child module succeeded, yet works in std import case

2013-04-14 Thread Pascal Chambon

Pascal Chambon added the comment:

Thanks for the feedback, I'm gonna read those docs and related issues asap, and 
check that the planned evolutions will actually fix this.

Just as a side note in the meantime: I don't think that the problem here is the 
purge of sys.modules; the failure is actually located in the semantic 
difference between the two forms of import statements, which should basically 
behave the same but do not (hence the interest of the related issues noted 
above).

--

___
Python tracker rep...@bugs.python.org
http://bugs.python.org/issue17716
___
___
Python-bugs-list mailing list
Unsubscribe: 
http://mail.python.org/mailman/options/python-bugs-list/archive%40mail-archive.com



[issue17716] IMPORTANT - Process corruption on partly failed imports

2013-04-13 Thread Pascal Chambon

New submission from Pascal Chambon:

Hello,

we've encountered several times a very nasty bug on our framework: several 
times, tests or even production code (served by mod_wsgi) ended up in a broken 
state, where imports like "from . import processing_exceptions", which were NOT 
part of circular imports and were 100% existing submodules, raised exceptions like 
"ImportError: cannot import name processing_exceptions". Restarting the 
test/server fixed it, and we never knew what happened.

I've crossed several forum threads on similar issues, only recently did I find 
one which gave a way to reproduce the bug:
http://stackoverflow.com/questions/12830901/why-does-import-error-change-to-cannot-import-name-on-the-second-import

So here attached is a python2 sample (python3 has the same pb), showing the bug 
(just run their test_import.py)

What happens here is that a package "mypkg" fails to get imported due to an 
exception (eg. temporary failure of a DB), but only AFTER successfully 
importing a submodule "mypkg.module_a".
Thus, "mypkg.module_a" IS loaded and stays in sys.modules, but "mypkg" is 
erased from sys.modules (like the doc on python imports describes it).

The next time we try, from within the same application, to import "mypkg", and 
we cross "from mypkg import module_a" in mypkg's __init__.py code, it SEEMS 
that the import system checks sys.modules, and seeing "mypkg.module_a" in it, 
it THINKS that mypkg is necessarily already initialized and contains a name 
"module_a" in its global namespace. Thus the "cannot import name 
processing_exceptions" error.
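
A stripped-down sketch of that scenario (hypothetical file contents, the real 
reproduction being in the attached archive):

    # mypkg/module_a.py
    VALUE = 42

    # mypkg/__init__.py
    from mypkg import module_a              # succeeds, "mypkg.module_a" enters sys.modules
    raise RuntimeError("temporary failure")  # eg. DB unreachable -> package import aborted

    # main application code
    try:
        import mypkg                         # first attempt fails, "mypkg" leaves sys.modules
    except RuntimeError:
        pass
    import mypkg                             # ImportError: cannot import name module_a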

Importing module_a as an absolute or relative import changes nothing, however 
doing "import mypkg.module_a" solves the problem (dunno why).

Another workaround is to clean up sys.modules in mypkg/__init__.py, to ensure 
that a previously failed attempt at importing the package modules doesn't 
hinder us.

# on top of mypkg/__init__.py
import sys
exceeding_modules = [k for k in sys.modules.keys() if k.startswith("mypkg.")]
for k in exceeding_modules:
    del sys.modules[k]

Anyway, I don't know enough about python's import internals to understand why, 
exactly, on the second import attempt, the system tries a kind of faulty 
getattr(mypkg, "module_a"), instead of simply returning 
sys.modules["mypkg.module_a"], which exists.
Could anyone help with that? 
That's a very damaging issue, imo, since webserver workers can reach a 
completely broken state because of that.

PS: more generally, I guess python users lack insight into the behaviour of "from 
xxx import yyy", especially when yyy is both a real submodule of xxx and a 
variable initialized in xxx/__init__.py (it seems the real module overrides the 
variable), or when the __all__ list of xxx could prevent the import of a 
submodule of xxx by not including it.
Provided I better understand the workflow of all this stuff - which has changed 
quite a bit recently, I heard - I'd be willing to summarize it for the python docs.

--
components: Interpreter Core
files: ImportFailPy2.zip
messages: 186738
nosy: Pascal.Chambon
priority: normal
severity: normal
status: open
title: IMPORTANT - Process corruption on partly failed imports
type: behavior
versions: Python 2.7, Python 3.1, Python 3.2, Python 3.3, Python 3.4, Python 3.5
Added file: http://bugs.python.org/file29798/ImportFailPy2.zip

___
Python tracker rep...@bugs.python.org
http://bugs.python.org/issue17716
___
___
Python-bugs-list mailing list
Unsubscribe: 
http://mail.python.org/mailman/options/python-bugs-list/archive%40mail-archive.com



Re: Open Source: you're doing it wrong - the Pyjamas hijack

2012-05-15 Thread Pascal Chambon

Hi,

cool down, people; if anything gave FOSS a bad reputation, it's surely 
the old pyjamas website (all broken, because the wheel must be reinvented 
here), and most of all the terror management that occurred on its 
mailing list.
Previously I had always considered open-source as a benevolent state of 
mind, until I got, there, the evidence that it could also be, for some 
people, an irrational and harmful cult (did you know github were 
freaking evildoers?).


Blatantly, the pyjs ownership change turned out to be an awkward 
operation (as reactions on that ML show), but a fork could also have 
very harmfully split pyjs-interested people, so all in all I don't 
think there was a perfect solution - dictatorships never fall harmlessly.


The egos of some might have been hurt, the legal sense of others might 
have been questioned, but believe me all this fuss is pitiful compared 
to the real harm that was done numerous times to willing newcomers, on 
pyjs' old ML, when they weren't aware of the heavy dogmas lying around.


A demo sample (I quote it each time the subject arises, sorry for 
duplicates):


| Please get this absolutely clear in your head: that  |
| you do not understand my reasoning is completely and utterly   |
| irrelevant.  i understand *your* reasoning; i'm the one making the   |
| decisions, that's my role to understand the pros and cons.  i make a |
| decision: that's the end of it.  |
| You present reasoning to me: i weight it up, against the other   |
| reasoning, and i make a decision.  you don't have to understand that |
| decision, you do not have to like that decision, you do not have to  |
| accept that decision.|


Long live pyjs,
++
PKL



Le 08/05/2012 07:37, alex23 a écrit :

On May 8, 1:54 pm, Steven D'Apranosteve
+comp.lang.pyt...@pearwood.info  wrote:

Seriously, this was a remarkably ham-fisted and foolish way to resolve
a dispute over the direction of an open source project. That's the sort
of thing that gives open source a bad reputation.

The arrogance and sense of entitlement was so thick you could choke on
it. Here's a sampling from the circle jerk of self-justification that
flooded my inbox over the weekend:

i did not need to consult Luke, nor would that have be productive

No, it's generally _not_ productive to ask someone if you can steal
their project from them.

i have retired Luke of the management duties, particularly, *above*
the source

Who is this C Anthony Risinger asshole and in what way did he _hire_
the lead developer?

What I have wondered is, what are effects of having the project
hostage to the whims of an individuals often illogically radical
software libre beliefs which are absolutely not up for discussion at
all with anyone.

What I'm wondering is: how is the new set up any different? Why were
Luke Leighton's philosophies/whims any more right or wrong than
those held by the new Gang of Dicks?

Further more, the reason I think it's a bad idea to have this drawn
out discussion is that pretty much the main reason for this fork is
because of Luke leadership and project management decisions and
actions. To have discussions of why the fork was done would invariably
lead to quite a bit of personal attacks and petty arguments.

Apparently it's nicer to steal someone's work than be mean to them.

I agree, Lex - this is all about moving on.  This is a software
project, not a cult of personality.

Because recognising the effort of the lead developer is cult-like.

My only quibble is with the term fork.  A fork is created when you
disagree with the technical direction of a project.  That's not the
issue here.  This is a reassignment of the project administration only
- a shuffling of responsibility among *current leaders* of the
community.  There is no divine right of kings here.

My quibble is over the term fork too, as this is outright theft. I
don't remember the community acknowledging _any other leadership_ over
Luke Leighton's.

I suspect Luke will be busy with other projects and not do much more
for Pyjamas/pyjs, Luke correct me if you see this and I am wrong.

How about letting the man make his own fucking decisions?

All of you spamming the list with your unsubscribe attempts: Anthony
mentioned in a previous email that he's using mailman now

Apparently it's the responsibility of the person who was subscribed
without their permission to find out the correct mechanism for
unsubscribing from that list.

apparantly a bunch of people were marked as POSTING in the DB, but
not receiving mail (?)

Oh I see, the sudden rush of email I received was due to an error in
the data they stole...

Nobody wins if we spend any amount of time debating the details of
this transition, what's done is done.

Truly the 

Re: John Carmack glorifying functional programing in 3k words

2012-05-03 Thread Pascal J. Bourguignon
Tim Bradshaw t...@tfeb.org writes:

 On 2012-05-02 14:44:36 +, jaialai.technol...@gmail.com said:

 He may be nuts

 But he's right: programmers are pretty much fuckwits[*]: if you think
 that's not true you are not old enough.

 [*] including me, especially.

You need to watch: 
http://blog.ted.com/2012/02/29/the-only-way-to-learn-to-fly-is-to-fly-regina-dugan-at-ted2012/

-- 
__Pascal Bourguignon__ http://www.informatimago.com/
A bad day in () is better than a good day in {}.
-- 
http://mail.python.org/mailman/listinfo/python-list


Re: f python?

2012-04-11 Thread Pascal J. Bourguignon
Shmuel (Seymour J.) Metz spamt...@library.lspace.org.invalid writes:

 In 87wr5nl54w@sapphire.mobileactivedefense.com, on 04/10/2012
at 09:10 PM, Rainer Weikusat rweiku...@mssgmbh.com said:

'car' and 'cdr' refer to cons cells in Lisp, not to strings. How the
first/rest terminology can be sensibly applied to 'C strings' (which
are similar to linked-lists in the sense that there's a 'special
termination value' instead of an explicit length)

 A syringe is similar to a sturgeon in the sense that they both start
 with S. LISP doesn't have arrays, and C doesn't allow you to insert
 into the middle of an array.

You're confused. C doesn't have arrays.  Lisp has arrays.
C only has vectors (Lisp has vectors too).

That C calls its vectors array, or its bytes char doesn't change the
fact that C has no array and no character.


cl-user (make-array '(3 4 5) :initial-element 42)
#3A(((42 42 42 42 42) (42 42 42 42 42) (42 42 42 42 42) (42 42 42 42 42))
((42 42 42 42 42) (42 42 42 42 42) (42 42 42 42 42) (42 42 42 42 42))
((42 42 42 42 42) (42 42 42 42 42) (42 42 42 42 42) (42 42 42 42 42)))

cl-user (make-array 10 :initial-element 42)
#(42 42 42 42 42 42 42 42 42 42)



-- 
__Pascal Bourguignon__ http://www.informatimago.com/
A bad day in () is better than a good day in {}.
-- 
http://mail.python.org/mailman/listinfo/python-list


Re: Is Programing Art or Science?

2012-04-03 Thread Pascal J. Bourguignon
ccc31807 carte...@gmail.com writes:

 On Apr 2, 5:48 pm, Pascal J. Bourguignon
 This is a narrow-minded definition of programming.

 Well, that's the point.

 If we make a list and include things like:
 computer science
 software engineering
 computer engineering
 discrete math
 logic
 formal methods
 web development
 computer graphics
 information technology
 information management
 data processing
 database management
 database administration
 network administration
 artificial intelligence
 ... and so on and so forth ...

 Some of these involve real art. Some of these involve real science.
 Even engineering can be considered as science, in a way, and perhaps
 art in a way. All these include programming! HOWEVER, 'programming'
 seen as 'talking to a computer' is neither an art nor a science, but
 simply a learned skill, like plumbing or cabinet making, or even
 medicine or law.

 I was a lawyer for 14 years, so I know what I'm talking about: the
 practice of law in the ordinary sense is simply that, the practice of
 law, and as such it's not an art nor a science, but simply a trade,
 albeit a highly skilled and abstract trade. And yes, lawyers can be
 artists and scientists, but neither one of these is basic to the
 practice of law.

 I'm not saying that artists and scientists can't be programmers. Many
 of them are. What I'm saying is that you can program a computer (i.e.,
 practice programming) without being either an artist or a scientist.


Well, of course.  Those words designate different categories that are
not exclusive.  So it's meaningless to say that programming is or is not
art or science.

Art is something that comes from a quality of the would-be artist.

Science is something that comes from a methodology applied by the
would-be scientist.

Program is something that comes from the work applied by the would-be
programmer.

You can be both a programmer and artist and produce a program
arstistically (like a torero), or an artistic program (like a painter).

You can be both a programmer and scientist, and produce a program
scientifically (like a mathematician), or a science program (like a
physicist). 

You can be both a scientist and artist and produce science artistically,
or art scientifically.

You can be all three, producing programs artistically and
scientifically, or producing artistic programs scientifically, or
producing scientific programs artistically, etc.

When you produce programs scientifically and artistically you're a 
hacker.

It could be nice to produce scientific programs scientifically, and even
better if your scientific programs are also artistic (so that you can
show the science in an interesting way to the public).
http://www.ted.com/talks/joann_kuchera_morin_tours_the_allosphere.html

You can also produce art programmatically.  For that you need to be both
an artist or a programmer. http://animusic.com/ Or you may try to split
the qualities among a team like at Pixar producing artistic movies
programmatically and scientifically like
http://www.pixar.com/featurefilms/index.html
http://graphics.pixar.com/library/UntanglingCloth/paper.pdf


And the best is to produce scientific programs that are artistic,
scientifically and artistically.  
Then you're a scientifico-artistico-hacker.


-- 
__Pascal Bourguignon__ http://www.informatimago.com/
A bad day in () is better than a good day in {}.
-- 
http://mail.python.org/mailman/listinfo/python-list


Re: Is Programing Art or Science?

2012-04-02 Thread Pascal J. Bourguignon
ccc31807 carte...@gmail.com writes:

 Programming is neither an art nor a science, but a trade.

 It's not an art in the sense of painting, music, dance, poetry, etc.,
 because the objective isn't to make a beautiful something, but to give
 instructions to a machine to accomplish some useful task.

 It's not a science in the sense of either physics and chemistry
 (experimental) or geology or astronomy (observational) or cosmology or
 psychology (theoretical) because the objective isn't to test
 hypotheses against data, but to give instructions to a machine to
 accomplish some useful task.

 Obviously, it's very much connected with art (e.g., user interface
 design) and science (e.g., artificial intelligence) but the practice
 of giving instructions to a machine is more like assembling machines
 in a factory than the pursuit of an art or the practice of a science.

This is a narrow-minded definition of programming.


Watch:  http://www.infoq.com/presentations/We-Really-Dont-Know-How-To-Compute


Read:Structure and Interpretation of Computer Programs
 http://mitpress.mit.edu/sicp/full-text/book/book-Z-H-4.html
 http://swiss.csail.mit.edu/classes/6.001/abelson-sussman-lectures/


-- 
__Pascal Bourguignon__ http://www.informatimago.com/
A bad day in () is better than a good day in {}.
-- 
http://mail.python.org/mailman/listinfo/python-list


Re: Lisp refactoring puzzle

2011-07-12 Thread Pascal J. Bourguignon
Neil Cerutti ne...@norwich.edu writes:

 What's the rationale for providing them? Are the definitions
 obvious for collections that a not sets?

The rationale is to prove that Xah is dumb.

-- 
__Pascal Bourguignon__ http://www.informatimago.com/
A bad day in () is better than a good day in {}.
-- 
http://mail.python.org/mailman/listinfo/python-list


Re: Keyboard Layout: Dvorak vs Colemak: is it Worthwhile to Improve the Dvorak Layout?

2011-06-13 Thread Pascal J. Bourguignon
Steven D'Aprano steve+comp.lang.pyt...@pearwood.info writes:

 The actual physical cost of typing is a small part of coding. 
 Productivity-wise, optimizing the distance your hands move is worthwhile 
 for typists who do nothing but type, e.g. if you spend their day 
 mechanically copying text or doing data entry, then increasing your 
 typing speed from 30 words per minute (the average for untrained computer 
 users) to 90 wpm (the average for typists) means your productivity 
 increases by 200% (three times more work done).

 I don't know if there are any studies that indicate how much of a 
 programmer's work is actual mechanical typing but I'd be surprised if it 
 were as much as 20% of the work day.

I'd agree that while programming, typing speed is not usually a problem
(but it has been reported that some star programmers could issue bug
free code faster than they could type, and they could type fast!).


Now, where the gain lies, is in typing flames on IRC or usenet.

If they can do it faster, then it's more time left for programming.

-- 
__Pascal Bourguignon__ http://www.informatimago.com/
A bad day in () is better than a good day in {}.
-- 
http://mail.python.org/mailman/listinfo/python-list


[issue5906] Risk of confusion in multiprocessing module - daemonic processes

2011-06-07 Thread Pascal Chambon

Pascal Chambon chambon.pas...@gmail.com added the comment:

I've just come across the doc of the daemon flag again 
(http://docs.python.org/library/multiprocessing.html), and it's still quite 
confusing to newcomers.

daemon
The process’s daemon flag, a Boolean value. This must be set before start() 
is called.
The initial value is inherited from the creating process. [1]
When a process exits, it attempts to terminate all of its daemonic child 
processes.

[1] this sentence is weird: since daemonic processes are not allowed to have 
children, isn't this flag always False in a new Process instance?
[2] typo, it meant "all of its NON-daemonic child processes" instead, didn't it?
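
For reference, the flag under discussion is the one used like this (standard 
multiprocessing API):

    import multiprocessing, time

    def worker():
        time.sleep(60)

    if __name__ == "__main__":
        p = multiprocessing.Process(target=worker)
        p.daemon = True          # must be set before start()
        p.start()
        # when this parent process exits, the daemonic child gets terminated
        # instead of being joined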

--
resolution: fixed - 
status: closed - open

___
Python tracker rep...@bugs.python.org
http://bugs.python.org/issue5906
___
___
Python-bugs-list mailing list
Unsubscribe: 
http://mail.python.org/mailman/options/python-bugs-list/archive%40mail-archive.com



Re: GIL in alternative implementations

2011-05-30 Thread Pascal Chambon

Thanks for the details on IronPython's implementation B-)

Hopefully PyPy will eventually get rid of its own GIL, since it doesn't 
do refcounting either.


Regards,
Pascal

Le 28/05/2011 00:52, Dino Viehland a écrit :


In IronPython we have fine grained locking on our mutable data 
structures.  In particular we have a custom dictionary type which is 
designed to allow lock-free readers on common operations while writers 
take a lock.  Our list implementation is similar but in some ways 
that's trickier to pull off due to features like slicing so if I 
recall correctly we only have lock-free reads when accessing a single 
element.


For .NET data structures they follow the .NET convention which is up 
to the data structure.  So if you wanted to get every last bit of 
performance out of your app you could handle thread safety yourself 
and switch to using the .NET dictionary or list types (although 
they're a lot less friendly to Python developers).


Because of these locks on micro-benchmarks that involve simple 
list/dict manipulations you do see noticeably worse performance in 
IronPython vs. CPython. 
http://ironpython.codeplex.com/wikipage?title=IP27A1VsCPy27PerfreferringTitle=IronPython%20Performance 
 - See the SimpleListManipulation and SimpleDictManipulation as the 
core examples here.  Also CPython's dictionary is so heavily tuned 
it's hard to beat anyway, but this is a big factor.


Finally one of the big differences with both Jython and IronPython is 
that we have good garbage collectors which don't rely upon reference 
counting.  So one area where CPython gains from having a GIL is a 
non-issue for us as we don't need to protect ref counts or use 
interlocked operations for ref counting.


*From:* python-list-bounces+dinov=exchange.microsoft@python.org 
[mailto:python-list-bounces+dinov=exchange.microsoft@python.org] 
*On Behalf Of *Pascal Chambon

*Sent:* Friday, May 27, 2011 2:22 PM
*To:* python-list@python.org  Python List
*Subject:* GIL in alternative implementations

Hello everyone,

I've already read quite a bit about the reasons for the GIL in 
CPython, i.e., to summarize, that more fine-grained locking, allowing 
real concurrency in multithreaded applications, would bring too much 
overhead for single-threaded python applications.


However, I've also heard that other python implementations 
(ironpython, jython...) have no GIL, and yet nobody blames them for 
performance penalties that would be caused by that lack (I especially 
think about IronPython, whose performance compares quite well to CPython's).


So I'd like to know: how do these other implementations handle 
concurrency matters for their primitive types, and prevent them from 
getting corrupted in multithreaded programs (if they do)? I'm not 
only thinking about python types, but also primitive containers and 
types used in .Net and Java VMs, which aren't atomic elements either, 
from an assembly-level point of view.


Do these VMs have some GIL-like limitations that aren't spoken about? 
Are their inner workings so completely different from the CPython VM 
that the question is not relevant? Do people consider that these VMs 
always target multithreaded applications, and so accept performance 
penalties that they wouldn't allow in their CPython scripts?


I thank you in advance for your insights on these questions.

Regards,

Pkl

[[ Important Note: this is a serious question, trolls and emotionally 
disturbed persons had better go on their way. ]]




-- 
http://mail.python.org/mailman/listinfo/python-list


Re: Functional Programing: stop using recursion, cons. Use map vectors

2011-05-23 Thread Pascal J. Bourguignon
torb...@diku.dk (Torben Ægidius Mogensen) writes:

 Xah Lee xah...@gmail.com writes:


 Functional Programing: stop using recursion, cons. Use map  vectors.

 〈Guy Steele on Parallel Programing〉
 http://xahlee.org/comp/Guy_Steele_parallel_computing.html

 This is more or less what Backus said in his Turing Award lecture about
 FP.

Stop inflating his ego!  Next he'll quote Nobel prize winners...

-- 
__Pascal Bourguignon__ http://www.informatimago.com/
A bad day in () is better than a good day in {}.
-- 
http://mail.python.org/mailman/listinfo/python-list


Re: English Idiom in Unix: Directory Recursively

2011-05-19 Thread Pascal J. Bourguignon
t...@sevak.isi.edu (Thomas A. Russ) writes:

 Pascal J. Bourguignon p...@informatimago.com writes:

 t...@sevak.isi.edu (Thomas A. Russ) writes:
 
  This will only work if there is a backpointer to the parent.
 
 No, you don't need backpointers; some cases have been mentionned in the
 other answer, but in general:
 
 (defun parent (tree node)
    (if (member node (children tree))
        tree
        (some (lambda (child) (parent child node)) (children tree))))
 
 Yes, the question wasn't about time complexity.

  :-p

 Um, this is a recursive function.  Inside PARENT, there is another call
 to PARENT.

Feel free to derecursive it.

-- 
__Pascal Bourguignon__ http://www.informatimago.com/
A bad day in () is better than a good day in {}.
-- 
http://mail.python.org/mailman/listinfo/python-list


Re: English Idiom in Unix: Directory Recursively

2011-05-18 Thread Pascal J. Bourguignon
t...@sevak.isi.edu (Thomas A. Russ) writes:

 Well, unless you have a tree with backpointers, you have to keep the
 entire parent chain of nodes visited.  Otherwise, you won't be able to
 find the parent node when you need to backtrack.  A standard tree
 representation has only directional links.

 Consider:

 A--+---B--+---D
    |      |
    |      +---E
    |      |
    |      +---F
    |
    +---C

 If all you keep is the current and previous node, then the only thing
 you have reference do when doing the depth-first traverse is:
   1.  Current = A,  Previous = null
   2.  Current = B.  Previous = A
   3.  Current = D   Previous = B
   4.  Current = E   Previous = D
   5.  now what?  You can't get from E or D back to B.

 By comparing the previous node (pointer or ID) to the
 current node's parent and children one will know wherefrom the
 current node was entered, and can choose the next child in the
 list as the next node, or the parent if all children have been
 visited.  A visit action may be added in any or all times the
 node is visited.
 
 This node requires no stack.  The only state space is constant,
 regardless of the size of the tree, requiring just the two pointers
 to previous and current.

 This will only work if there is a backpointer to the parent.

No, you don't need backpointers; some cases have been mentionned in the
other answer, but in general:

(defun parent (tree node)
   (if (member node (children tree))
       tree
       (some (lambda (child) (parent child node)) (children tree))))

Yes, the question wasn't about time complexity.

-- 
__Pascal Bourguignon__ http://www.informatimago.com/
A bad day in () is better than a good day in {}.
-- 
http://mail.python.org/mailman/listinfo/python-list


Re: English Idiom in Unix: Directory Recursively

2011-05-17 Thread Pascal J. Bourguignon
Roland Hutchinson my.spamt...@verizon.net writes:

 Sorry to have to contradict you, 

Don't be sorry.


 but it really is a textbook example of 
 recursion.  Try this psuedo-code on for size:  

 FUNCTION DIR-DELETE (directory)
   FOR EACH entry IN directory
   IF entry IS-A-DIRECTORY THEN DIR-DELETE (entry).

 Well, now that's not just recursion; it's tail recursion.  

It's not tail recursion.  If you had indented your code properly, you'd
see why it's not:

(defun dir-delete (directory)
  (loop for entry in directory
        do (if (is-a-directory entry)
               (dir-delete entry))))

(I put parentheses, so my editor knows what I mean and can do the
indentation for me).


That's why walking a directory is done with a recursive procedure,
instead of an iterative one: it's much simpler.  To implement an
iterative procedure, you would have to manage a stack yourself, instead
of using the implicit stack of the recursive procedure.


 Tail recursion  can always be turned into an iteration when it is
 executed.  

All recursions can be turned into iterations, before execution.


 Reasonably  designed compilers are required to do so, in fact--have
 been for decades  now.  That doesn't mean that recursion isn't the
 best way of describing  the algorithm.



-- 
__Pascal Bourguignon__ http://www.informatimago.com/
A bad day in () is better than a good day in {}.
-- 
http://mail.python.org/mailman/listinfo/python-list


Re: A question about Python Classes

2011-04-21 Thread Pascal J. Bourguignon
chad cdal...@gmail.com writes:

 Let's say I have the following

 class BaseHandler:
     def foo(self):
         print "Hello"

 class HomeHandler(BaseHandler):
     pass


 Then I do the following...

 test = HomeHandler()
 test.foo()

 How can HomeHandler call foo() when I never created an instance of
 BaseHandler?

But you created one!

test is an instance of HomeHandler, which is a subclass of BaseHandler,
so test is also an instance of BaseHandler.

A subclass represents a subset of the instances of its super class.
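
In code terms, reusing the classes above:

    test = HomeHandler()
    print(isinstance(test, BaseHandler))   # True: every HomeHandler is also a BaseHandler
    test.foo()                             # foo() is found on BaseHandler via inheritance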

-- 
__Pascal Bourguignon__ http://www.informatimago.com/
A bad day in () is better than a good day in {}.
-- 
http://mail.python.org/mailman/listinfo/python-list


RSFile 1.1 released

2011-04-15 Thread Pascal Chambon


I'm pleased to announce the first bugfix release of the RSFile package.

Issues addressed:
- rejection of unicode keys in kwargs arguments, in some versions of py2.6
- indentation bug swallowing some errors on file opening



RSFile aims at providing python with a cross-platform, reliable, and 
comprehensive file
I/O API. It's actually a partial reimplementation of the io module, as 
compatible as possible
(it passes the latest stdlib io tests), which offers a set of new - and 
possibly very useful - features:
shared/exclusive file record locking, cache synchronization, advanced 
opening flags, handy stat

getters (size, inode...), shortcut I/O functions etc.

Unix users might particularly be interested in the workaround that this 
library provides, concerning the catastrophic fcntl() lock semantics (when 
any descriptor to a file is closed, your process loses ALL locks acquired 
on it through other streams).

RSFile has been tested with py2.6, py2.7, and py3.2, on win32, linux and 
freebsd systems,

and should theoretically work with IronPython/Jython/PyPy (on Mac OS X too).

The technical documentation of RSFile includes a comprehensive description
of concepts and gotchas encountered while setting up this library, which 
could prove useful to anyone interested in learning about gory file I/O 
details.


The implementation is currently pure-python, as integration with the C 
implementation of the io module raises lots of issues. So if you need 
heavy performance, standard python streams will remain necessary. But for 
most programs and scripts, which just care about data integrity, RSFile 
should be a proper choice.

Downloads:
http://pypi.python.org/pypi/RSFile/1.1

Documentation:
http://bytebucket.org/pchambon/python-rock-solid-tools/wiki/index.html


Regards,
Pascal Chambon

PS : Due to miscellaneous bugs of python core and stdlib io modules 
which have been fixed relatively recently,
it's advised to have an up-to-date minor version of python (be it 2.6, 
2.7 or 3.2) to benefit from RSFile.

--
http://mail.python.org/mailman/listinfo/python-announce-list

   Support the Python Software Foundation:
   http://www.python.org/psf/donations/


Re: Directly calling python's function arguments dispatcher

2010-12-13 Thread Pascal Chambon

Le 12/12/2010 23:41, Peter Otten a écrit :

Pascal Chambon wrote:


   

I've encountered several times, when dealing with adaptation of function
signatures, the need for explicitly resolving complex argument sets into
a simple variable mapping. Explanations.


Consider that function:

def foo(a1, a2, *args, **kwargs):
  pass

calling foo(1, a2=2, a3=3)

will map these arguments to local variables like these:
{
'a1': 1,
'a2': 2,
'args': tuple(),
'kwargs': {'a3': 3}
}

That's a quite complex resolution mechanism, which must handle
positional and keyword arguments, and deal with both collision and
missing argument cases.
 
   

Is that routine exposed to python, somewhere ? Does anybody know a
working implementation here or there ?
 

http://docs.python.org/library/inspect.html#inspect.getcallargs

   

Too sweet  \o/
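
For the record, applied to the example above (inspect.getcallargs is available 
since Python 2.7 / 3.2):

    import inspect

    def foo(a1, a2, *args, **kwargs):
        pass

    print(inspect.getcallargs(foo, 1, a2=2, a3=3))
    # {'a1': 1, 'a2': 2, 'args': (), 'kwargs': {'a3': 3}}  (key order may vary)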

Thanks a lot,
regards,
Pakal
--
http://mail.python.org/mailman/listinfo/python-list


Directly calling python's function arguments dispatcher

2010-12-12 Thread Pascal Chambon

Hello

I've encountered several times, when dealing with adaptation of function 
signatures, the need for explicitly resolving complex argument sets into 
a simple variable mapping. Explanations.



Consider that function:

def foo(a1, a2, *args, **kwargs):
    pass

calling foo(1, a2=2, a3=3)

will map these arguments to local variables like these:
{
    'a1': 1,
    'a2': 2,
    'args': tuple(),
    'kwargs': {'a3': 3}
}

That's a quite complex resolution mechanism, which must handle 
positional and keyword arguments, and deal with both collision and 
missing argument cases.


Normally, the simplest way to invoke this mechanism is to define a 
function with the proper signature, and then call it (like, here, foo()).


But there are cases where a more meta approach would suit me well.

For example when adapting xmlrpc methods: due to the limitations of 
xmlrpc (no keyword arguments), we use a trick, i.e. our xmlrpc functions 
only accept a single argument, a struct (python dict) which gets 
unpacked on arrival, when calling the real functions exposed by the 
xmlrpc server.


But on client side, I'd like to offer a more native interface (allowing 
both positional and keyword arguments), without having to manually 
define an adapter function for each xmlrpc method.


To summarize, I'd like to implement a magic method like this one (please 
don't care about performance isues for now):


class XmlrpcAdapter:
    def __getattr__(self, funcname):
        # we create an on-the-fly adapter
        def adapter(*args, **kwargs):
            xmlrpc_kwargs = _resolve_func_signature(funcname, *args, **kwargs)
            # we call the remote function with a unique dict argument
            self.xmlrpc_server.call(funcname, xmlrpc_kwargs)
        return adapter

As you see, all I need is _resolve_func_signature(), which is actually 
the routine (internal to the python runtime) which transforms complex 
function calls into a simple mapping of variables to be added to the 
function local namespace. Of course this routine would need information 
about the target functions' signature, but I have that info available 
(for example, via a set of functions that are a mockup of the real 
xmlrpc API).


Is that routine exposed to python, somewhere ? Does anybody know a 
working implementation here or there ?


Thanks for the help,
regards,
Pakal



--
http://mail.python.org/mailman/listinfo/python-list


[issue1553375] Add traceback.print_full_exception()

2010-11-15 Thread Pascal Chambon

Pascal Chambon chambon.pas...@gmail.com added the comment:

I don't understand: if we use traceback.print_stack(), it's the stack at the 
exception handling point which will be displayed.

In my view, the interesting thing was not the stack trace at the point where 
the exception is being handled, but where the unwinding stopped (i.e., a 
snapshot of the stack at the moment the exception was caught).

I agree that most of the time these stacks are quite close, but if you happen 
to move the traceback object all around, in misc. processing functions (or even, 
if it has been returned by functions to their caller - let's be foolish), it can 
be handy to still be able to output a full exception stack, as if the 
exception had flowed up to the root of the program. At least that's what'd 
interest me for debugging.

try:
    myfunction()  # <- that's the point of which I'd like a stack trace
except Exception, e:
    handle_my_exception(e)  # <- not of that point, some recursion levels deeper

Am I the only one viewing it like this?
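
(A minimal sketch of the kind of helper I have in mind, assuming the caught 
traceback object is still at hand; it glues the not-yet-unwound outer stack 
to the frames recorded in the traceback itself:)

import sys
import traceback

def format_full_exception(exc_type, exc_value, tb):
    # Frames above the point where the exception was caught
    # (the part of the stack that was NOT unwound)...
    outer = tb.tb_frame.f_back
    lines = traceback.format_stack(outer) if outer is not None else []
    # ...followed by the frames the exception actually travelled through,
    # and the exception itself.
    lines += traceback.format_tb(tb)
    lines += traceback.format_exception_only(exc_type, exc_value)
    return "".join(lines)

def myfunction():
    raise ValueError("dummy error")

try:
    myfunction()
except Exception:
    print(format_full_exception(*sys.exc_info()))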

--

___
Python tracker rep...@bugs.python.org
http://bugs.python.org/issue1553375
___
___
Python-bugs-list mailing list
Unsubscribe: 
http://mail.python.org/mailman/options/python-bugs-list/archive%40mail-archive.com



[issue10327] Abnormal SSL timeouts when using socket timeouts - once again

2010-11-08 Thread Pascal Chambon

Pascal Chambon chambon.pas...@gmail.com added the comment:

Alright, it actually looks more like a pathological latency behaviour of my 
target platforms than an ssl bug... 
I was misled by the heavy history of socket.settimeout(), sorry.

--
status: open -> closed

___
Python tracker rep...@bugs.python.org
http://bugs.python.org/issue10327
___
___
Python-bugs-list mailing list
Unsubscribe: 
http://mail.python.org/mailman/options/python-bugs-list/archive%40mail-archive.com



[issue10327] Abnormal SSL timeouts when using socket timeouts - once again

2010-11-05 Thread Pascal Chambon

New submission from Pascal Chambon chambon.pas...@gmail.com:

On freebsd 8, using python 2.6.6, I've run into the bug already widely dealt 
with in these reports :
http://bugs.python.org/issue1380952
http://bugs.python.org/issue1153016

When using socket timeouts (eg. with socket.setdefaulttimeout()), whatever the 
timeout I use (eg. 10 seconds), I begin having random "SSLError: The read 
operation timed out" exceptions in my http calls, via urlopen or 3rd party 
libraries.

Here is an example of traceback ending:

...
  File 
/usr/local/lib/python2.6/site-packages/ZSI-2.0-py2.6.egg/ZSI/client.py, line 
349, in ReceiveRaw
response = self.h.getresponse()
  File /usr/local/lib/python2.6/httplib.py, line 990, in getresponse
response.begin()
  File /usr/local/lib/python2.6/httplib.py, line 391, in begin
version, status, reason = self._read_status()
  File /usr/local/lib/python2.6/httplib.py, line 349, in _read_status
line = self.fp.readline()
  File /usr/local/lib/python2.6/socket.py, line 427, in readline
data = recv(1)
  File /usr/local/lib/python2.6/ssl.py, line 215, in recv
return self.read(buflen)
  File /usr/local/lib/python2.6/ssl.py, line 136, in read
return self._sslobj.read(len)
SSLError: The read operation timed out

I've checked the py2.6.6 sources, and the patches described in previous reports are 
still applied (eg. SSL_pending() checks etc.); I have no idea how such long 
socket timeouts might interfere with ssl operations...

--
components: IO
messages: 120498
nosy: arekm, georg.brandl, maltehelmert, pakal, pristine777, tarek-ziade, 
twouters
priority: normal
severity: normal
status: open
title: Abnormal SSL timeouts when using socket timeouts - once again
type: behavior
versions: Python 2.6

___
Python tracker rep...@bugs.python.org
http://bugs.python.org/issue10327
___
___
Python-bugs-list mailing list
Unsubscribe: 
http://mail.python.org/mailman/options/python-bugs-list/archive%40mail-archive.com



[issue10327] Abnormal SSL timeouts when using socket timeouts - once again

2010-11-05 Thread Pascal Chambon

Pascal Chambon chambon.pas...@gmail.com added the comment:

The exception is raised too early: none of my calls takes more than 1-2 seconds, 
and I have a default timeout set at 10s or more.

This occurs rather rarely, one or two times in some hundreds of calls. I'll 
make a little script to try to isolate the problem.
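
(Something along these lines, probably; a rough sketch, with the URL being a 
placeholder rather than the real webservice:)

import socket
import time

socket.setdefaulttimeout(10)   # same default timeout as in production

try:
    from urllib2 import urlopen            # Python 2
except ImportError:
    from urllib.request import urlopen     # Python 3

URL = "https://example.com/"   # placeholder, not the actual webservice endpoint

for i in range(200):
    start = time.time()
    try:
        urlopen(URL).read()
    except Exception as exc:   # includes the spurious SSLError timeouts
        print("call %d failed after %.2fs: %r" % (i, time.time() - start, exc))
    else:
        print("call %d ok in %.2fs" % (i, time.time() - start))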

--

___
Python tracker rep...@bugs.python.org
http://bugs.python.org/issue10327
___
___
Python-bugs-list mailing list
Unsubscribe: 
http://mail.python.org/mailman/options/python-bugs-list/archive%40mail-archive.com



[issue10327] Abnormal SSL timeouts when using socket timeouts - once again

2010-11-05 Thread Pascal Chambon

Pascal Chambon chambon.pas...@gmail.com added the comment:

Hum, on second thought you may be right; now I have some trouble reproducing 
the bugs (which have been there since the beginning, though), so it may be that 
the webservice I call seldom takes 10+ seconds to answer (weird anyway).

I've placed timers in the codebase, so the problem will eventually surface again.

--

___
Python tracker rep...@bugs.python.org
http://bugs.python.org/issue10327
___
___
Python-bugs-list mailing list
Unsubscribe: 
http://mail.python.org/mailman/options/python-bugs-list/archive%40mail-archive.com



openmp do loops

2010-11-04 Thread Pascal

Hi,

I would like to parallelize this loop:
do i=1,hklsize
fcalctable(i)=structfact(hkltable(1,i),hkltable(2,i),hkltable(3,i))
end do


I thought I would do this:
!$OMP PARALLEL DO default(private) shared(hkltable, fcalctable,hklsize)
do i=1,hklsize
fcalctable(i)=structfact(hkltable(1,i),hkltable(2,i),hkltable(3,i))
end do
!$OMP END PARALLEL DO

However it seems that the order of the final table is not guaranteed 
compared to the serial version. I need the j-th element of the table to stay 
in place, because I have another table and I am using the index to match 
the data.


Regards,
Pascal
--
http://mail.python.org/mailman/listinfo/python-list


Re: openmp do loops

2010-11-04 Thread Pascal

On 11/04/2010 11:13 AM, Pascal wrote:

Hi,



Oops, wrong group, sorry...

Pascal
--
http://mail.python.org/mailman/listinfo/python-list


Re: Land Of Lisp is out

2010-10-28 Thread Pascal J. Bourguignon
sthueb...@googlemail.com (Stefan Hübner) writes:

 Would it be right to say that the only Lisp still in common use is the Elisp 
 built into Emacs?

 Clojure (http://clojure.org) is a Lisp on the JVM. It's gaining more and
 more traction.

There are actually 2 REAL Lisps on the JVM: 

- abcl http://common-lisp.net/project/armedbear/ and

- CLforJava http://www.clforjava.org


-- 
__Pascal Bourguignon__ http://www.informatimago.com/
-- 
http://mail.python.org/mailman/listinfo/python-list


Re: Land Of Lisp is out

2010-10-28 Thread Pascal J. Bourguignon
Alain Ketterlin al...@dpt-info.u-strasbg.fr writes:

 Lawrence D'Oliveiro l...@geek-central.gen.new_zealand writes:

 Would it be right to say that the only Lisp still in common use is the
 Elisp built into Emacs?
 
 There is a new version of Lisp called Clojure that runs on the Java
 Virtual Machine (JVM) that is on the upswing.

 Now is not exactly a good time to build new systems crucially dependent on 
 the continuing good health of Java though, is it?

 Nonsense. See
 http://blogs.sun.com/theaquarium/entry/ibm_and_oracle_to_collaborate

Last time I remember a corporation having developed a nice piece of software
(NeXTSTEP), Sun joined it to make an OpenStep, and a few years later
it was over, bought by Apple, and morphed into MacOSX, and none of my
NeXTSTEP (or even OpenStep) programs compile anymore on my computers.

In the meantime, I switched to Linux.

So now IBM and Oracle join to make an OpenJDK?
I bet Lawrence is right.


-- 
__Pascal Bourguignon__ http://www.informatimago.com/
-- 
http://mail.python.org/mailman/listinfo/python-list


Re: Land Of Lisp is out

2010-10-28 Thread Pascal J. Bourguignon
kodifik kodi...@eurogaran.com writes:

 On Oct 28, 1:55 am, Lawrence D'Oliveiro l...@geek-
 central.gen.new_zealand wrote:
 Would it be right to say that the only Lisp still in common use is the Elisp
 built into Emacs?

 Surely surpassed by autolisp (a xlisp derivative inside the Autocad
 engineering software).

I wouldn't bet.  With emacs, you don't have a choice: you need to use
emacs lisp to customize it (even if theoretically you could do it with
emacs-cl or some other language implementation written in emacs lisp).

On the other hand, AutoCAD allows people to customize it using other
programming languages than AutoLisp, so I wouldn't expect AutoLisp to be
in the majority.

-- 
__Pascal Bourguignon__ http://www.informatimago.com/
-- 
http://mail.python.org/mailman/listinfo/python-list


Re: Land Of Lisp is out

2010-10-27 Thread Pascal J. Bourguignon
Lawrence D'Oliveiro l...@geek-central.gen.new_zealand writes:

 Would it be right to say that the only Lisp still in common use is the Elisp 
 built into Emacs?

The lisps in common use nowadays are emacs lisp, Common Lisp, and the
various schemes, from R4RS to R6RS.

Some other lisps are in use in niches too.  Eg. guile (a kind of scheme)
in gimp, autolisp in AutoCad, etc.


-- 
__Pascal Bourguignon__ http://www.informatimago.com/
-- 
http://mail.python.org/mailman/listinfo/python-list


Re: Scheme as a virtual machine?

2010-10-14 Thread Pascal J. Bourguignon
namekuseijin namekusei...@gmail.com writes:

 On 13 out, 19:41, p...@informatimago.com (Pascal J. Bourguignon)
 wrote:
 namekuseijin namekusei...@gmail.com writes:
  On 11 out, 08:49, Oleg  Parashchenko ole...@gmail.com wrote:
  Hello,

  I'd like to try the idea that Scheme can be considered as a new
  portable assembler. We could code something in Scheme and then compile
  it to PHP or Python or Java or whatever.

  Any suggestions and pointers to existing and related work are welcome.
  Thanks!

  My current approach is to take an existing Scheme implementation and
  hijack into its backend. At this moment Scheme code is converted to
  some representation with a minimal set of bytecodes, and it should be
  quite easy to compile this representation to a target language. After
  some research, the main candidates are Gambit, Chicken and CPSCM:

 http://uucode.com/blog/2010/09/28/r5rs-scheme-as-a-virtual-machine-i/...

  If there is an interest in this work, I could publish progress
  reports.

  --
  Oleg Parashchenko  o...@http://uucode.com/http://uucode.com/blog/ XML, 
  TeX, Python, Mac, Chess

  it may be assembler, too bad scheme libs are scattered around written
  in far too many different flavors of assembler...

  It warms my heart though to realize that Scheme's usual small size and
  footprint has allowed for many quality implementations targetting many
  different backends, be it x86 assembly, C, javascript or .NET.  Take
  python and you have a slow c bytecode interpreter and a slow
  bytecode .NET compiler.  Take haskell and its so friggin' huge and
  complex that its got its very own scary monolithic gcc.  When you
  think of it, Scheme is the one true high-level language with many
  quality perfomant backends -- CL has a few scary compilers for native
  code, but not one to java,

 Yep, it only has two for java.

 I hope those are not Clojure and Qi... :p

No, they're CLforJava and ABCL.

-- 
__Pascal Bourguignon__ http://www.informatimago.com/
-- 
http://mail.python.org/mailman/listinfo/python-list


Re: Strong typing vs. strong testing

2010-10-13 Thread Pascal J. Bourguignon
Steven D'Aprano st...@remove-this-cybersource.com.au writes:

 Hmmm, my ISP's news software really doesn't like it when I cross-post to
 more than three newsgroups. So, trying again without comp.lang.c.

 On Wed, 13 Oct 2010 02:00:46 +0100, BartC wrote:

 RG rnospa...@flownet.com wrote in message
 news:rnospamon-20651e.17410012102...@news.albasani.net...
 In article i92dvd$ad...@news.eternal-september.org, BartC
 b...@freeuk.com wrote:

 Thomas A. Russ t...@sevak.isi.edu wrote in message

  But radians are dimensionless.

 But they are still units

 No, they aren't.

 so that you can choose to use radians, degrees or gradians

 Those aren't units either, any more than a percentage is a unit.  They
 are just different ways of writing numbers.

 All of the following are the same number written in different
 notations:

 0.5
 1/2
 50%

 Likewise, all of the following are the same number written in different
 notations:

 pi/2
 pi/2 radians
 90 degrees
 100 gradians
 1/4 circle
 0.25 circle
 25% of a circle
 25% of 2pi

 See?

 But what exactly *is* this number? Is it 0.25, 1.57 or 90?

 That's the wrong question. It's like asking, what exactly is the number
 twenty-one -- is it one and twenty, or 21, or 0x15, or 0o25, or 21.0, or
 20.999... recurring, or 63/3, or XXI, or 0b10101, or vingt et un, or any
 one of many other representations.

This is not the wrong question.  These are two different things.

In the case of 0.25, 1.57 or 90, you have elements of the same set of
real numbers ℝ, which are used to represent the same entity, which IS NOT
a number, but an angle.  Angles are not in the ℝ set, but in ℝ/2π, which
is an entirely different set with entirely different properties.



In the other case, we have strings 21, 0x15, 0o25, 21.0,
20.999..., 63/3, XXI, 0b10101, vingt et un, that represent the
same number in ℝ.




So you have different pairs of sets and different representationnal
mapping.  There's very little in common between an angle of 90 degree,
and the number 21.




 Likewise, it doesn't matter whether you write 45° or π/4 radians, the
 angle you are describing -- the number -- is the same.

No.  The numbers ARE different.  One number is 45, the other is π/4.
What is the same, is the angle that is represented.

I cannot fathom how you can arrive at such a misunderstanding.  It's
rather easy to picture out:

ℝ .............            ℝ/2π ...................................
:             :            :                                      :
:   degree    :            :   full turn                          :
:  45 ------------\        :                                      :
:             :    \       :                                      :
:             :     `-------->  angle of an eighth of a turn      :
:   radian    :     /      :                                      :
: π/4 ------------/        :                                      :
:             :            :   quarter turn                       :
:.............:            :......................................:


--
__Pascal Bourguignon__ http://www.informatimago.com/
-- 
http://mail.python.org/mailman/listinfo/python-list


Re: Strong typing vs. strong testing

2010-10-13 Thread Pascal J. Bourguignon
Steven D'Aprano st...@remove-this-cybersource.com.au writes:

 On Wed, 13 Oct 2010 17:28:42 +0200, Pascal J. Bourguignon wrote:

 But what exactly *is* this number? Is it 0.25, 1.57 or 90?

 That's the wrong question. It's like asking, what exactly is the
 number twenty-one -- is it one and twenty, or 21, or 0x15, or 0o25,
 or 21.0, or 20.999... recurring, or 63/3, or XXI, or 0b10101, or vingt
 et un, or any one of many other representations.
 
 This is not the wrong question.  These are two different things.

 Which is why I said it was LIKE asking the second.


 In the case of 0.25, 1.57 or 90, you have elements of the same set of
 real numbers ℝ, which are used to represent the same entity, which IS
 NOT a number, but an angle.  Angles are not in the ℝ set, but in ℝ/2π,
 which is an entirely different set with entirely different properties.

 It's quite standard to discuss (say) sin(theta) where theta is an element 
 of ℝ. The fact that angles can extend to infinity in both directions is 
 kind of fundamental to the idea of saying that the trig functions are 
 periodic.

You're falling into a trap.  It's customary in mathematics to elide the
trivial isomorphisms.  But they're still there.

When you're writing:  2+3.4 you have to use the trivial isomorphism
between ℕ and the subset of ℝ called 1.0ℕ, let's call it t, so that when
you write:  2+3.4
you actually mean t(2)+3.4
with t(2) ∈ 1.0ℕ ⊂ ℝ
 3.4  ∈ ℝ
and + being the additive operator on ℝ with 0.0 as neutral element.


Similarly, when you write sin(θ) with θ ∈ ℝ
what you actually mean is sin( angle-equivalence-class-of(θ) )
with angle-equivalence-class-of(θ) ∈ ℝ/2π.


As a programmer, it should be obvious to you.

(defstruct angle
  (representant 0.0 :type real))

(defun sinus (angle)
   ...)


(sinus 0.2) --> error

(sinus (make-angle :representant 0.2)) --> 0.19866933079506122


It just happens that

   (defun cl:sin (representant)
     (sinus (make-angle :representant representant)))


But this should not confuse you, nor the type checking.
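
(The same sketch transposed to Python, for what it's worth; Angle and sinus
are just illustrative names:)

import math

class Angle(object):
    """An element of ℝ/2π, wrapping the real number chosen to represent it."""
    def __init__(self, representant):
        self.representant = float(representant)

def sinus(angle):
    # Only accepts Angle instances, not bare numbers.
    if not isinstance(angle, Angle):
        raise TypeError("sinus() expects an Angle, not a bare number")
    return math.sin(angle.representant)

def sin(representant):
    # The usual math.sin-style entry point just wraps its argument.
    return sinus(Angle(representant))

# sinus(0.2)         --> TypeError
# sinus(Angle(0.2))  --> 0.19866933079506122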



 So you have different pairs of sets and different representationnal
 mapping.  There's very little in common between an angle of 90 degree,
 and the number 21.

 Would it have been easier to understand if I had made the analogy between 
 angles and (say) time? A time of 1 minute and a time of 60 seconds are 
 the same time, regardless of what representation you use for it.

Yes, but time has a dimension (time), so you don't confuse it with
random numbers.


 Likewise, it doesn't matter whether you write 45° or π/4 radians, the
 angle you are describing -- the number -- is the same.
 
 No.  The numbers ARE different.  One number is 45, the other is π/4.
 What is the same, is the angle that is represented.

 Fair enough. I worded that badly.

-- 
__Pascal Bourguignon__ http://www.informatimago.com/
-- 
http://mail.python.org/mailman/listinfo/python-list


Re: Scheme as a virtual machine?

2010-10-13 Thread Pascal J. Bourguignon
namekuseijin namekusei...@gmail.com writes:

 On 11 out, 08:49, Oleg  Parashchenko ole...@gmail.com wrote:
 Hello,

 I'd like to try the idea that Scheme can be considered as a new
 portable assembler. We could code something in Scheme and then compile
 it to PHP or Python or Java or whatever.

 Any suggestions and pointers to existing and related work are welcome.
 Thanks!

 My current approach is to take an existing Scheme implementation and
 hijack into its backend. At this moment Scheme code is converted to
 some representation with a minimal set of bytecodes, and it should be
 quite easy to compile this representation to a target language. After
 some research, the main candidates are Gambit, Chicken and CPSCM:

 http://uucode.com/blog/2010/09/28/r5rs-scheme-as-a-virtual-machine-i/
 http://uucode.com/blog/2010/09/28/r5rs-scheme-as-a-virtual-machine-ii/

 If there is an interest in this work, I could publish progress
 reports.

 --
 Oleg Parashchenko  o...@http://uucode.com/http://uucode.com/blog/ XML, TeX, 
 Python, Mac, Chess

 it may be assembler, too bad scheme libs are scattered around written
 in far too many different flavors of assembler...

 It warms my heart though to realize that Scheme's usual small size and
 footprint has allowed for many quality implementations targetting many
 different backends, be it x86 assembly, C, javascript or .NET.  Take
 python and you have a slow c bytecode interpreter and a slow
 bytecode .NET compiler.  Take haskell and its so friggin' huge and
 complex that its got its very own scary monolithic gcc.  When you
 think of it, Scheme is the one true high-level language with many
 quality perfomant backends -- CL has a few scary compilers for native
 code, but not one to java,

Yep, it only has two for java.


  .NET or javascript that I know of...

-- 
__Pascal Bourguignon__ http://www.informatimago.com/
-- 
http://mail.python.org/mailman/listinfo/python-list


Re: Strong typing vs. strong testing

2010-10-12 Thread Pascal J. Bourguignon
Tim Bradshaw t...@tfeb.org writes:

 On 2010-10-12 20:46:26 +0100, BartC said:

 You can't do all that if angles are just numbers.

 I think that the discussion of percentages is relevant here: angles
 //are// just numbers, but you're choosing a particular way of
 displaying them (or reading them). 100% //is// 1, and 360° //is// 2π.
 So really, like, for instance, number base, they're things that exist
 for I/O but not inside the system.  At least for the purposes of doing
 maths: computer type systems often don't have very much to do with
 maths (for instance floating-point numbers are obviously a very
 important type, but don't map onto anything that would be interesting
 to a theoretical physicist).

Units are really the product of two things: a dimension, and a scale.

You can add values that have the same dimension, even if they don't have
the same unit (this doesn't necessarily make the addition meaningful,
because having the same dimension still doesn't mean they've got the
same semantics, but that's another question).

So for example, you can add meters and inches.  Both have the dimension
of length.  But meters have the scale of 1/299792458 while inches have
the scale of 254/299792458.  Scales are not absolute, they're given
in relation to some other scale, so you could also say that:
1 inch = 0.0254 meter, or that the scale of inches with respect to
meters is 1/254.


So the interesting thing is that some pseudo-units don't have
dimensions.  They only have the scale.  Radian and Degrees have no
dimension, but they still have scale, with 1 degree = Π/180 radian.
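
(A toy illustration in Python; Quantity, meters and inches are made-up names,
and scales are expressed relative to the meter:)

class Quantity(object):
    # value * scale is the magnitude expressed in the reference unit
    # of the given dimension (here: the meter for lengths).
    def __init__(self, value, dimension, scale):
        self.value, self.dimension, self.scale = value, dimension, scale

    def __add__(self, other):
        if self.dimension != other.dimension:
            raise TypeError("cannot add %s to %s"
                            % (self.dimension, other.dimension))
        # Convert the other operand into our own scale before adding.
        return Quantity(self.value + other.value * other.scale / self.scale,
                        self.dimension, self.scale)

def meters(v):
    return Quantity(v, "length", 1.0)

def inches(v):
    return Quantity(v, "length", 0.0254)

total = meters(1) + inches(1)
print(total.value)   # 1.0254 (meters)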



I would argue that angles are not just numbers.  There's a notion of
angle that is different from the notion of interest rate.  (I have also
vague memories of a mathematical presentation of angles that clearly
distinguished angles from numbers used to represent them).

-- 
__Pascal Bourguignon__ http://www.informatimago.com/
-- 
http://mail.python.org/mailman/listinfo/python-list


[issue1553375] Add traceback.print_full_exception()

2010-10-09 Thread Pascal Chambon

Pascal Chambon chambon.pas...@gmail.com added the comment:

Indeed I don't understand the following part:

+Traceback (most recent call last):
+  File testmod.py, line 16, in module
+{exception_action}
+  File testmod.py, line 6, in foo
+bar()
+  File testmod.py, line 11, in bar
+raise Exception
+Exception

Why does the f_back of the first exception, when chain=True, lead back to the 
{exception_action} part, in the except: block, instead of the initial foo() 
call inside the try: block?

--

___
Python tracker rep...@bugs.python.org
http://bugs.python.org/issue1553375
___
___
Python-bugs-list mailing list
Unsubscribe: 
http://mail.python.org/mailman/options/python-bugs-list/archive%40mail-archive.com



[issue1553375] Add traceback.print_full_exception()

2010-10-08 Thread Pascal Chambon

Pascal Chambon chambon.pas...@gmail.com added the comment:

Is it normal to have two methods test_full_traceback_is_full at the same 
place, in full_traceback.patch / r.david.murray / 2010-08-04 02:32?

format_exception should have the same semantic as print_exception indeed.

--

___
Python tracker rep...@bugs.python.org
http://bugs.python.org/issue1553375
___
___
Python-bugs-list mailing list
Unsubscribe: 
http://mail.python.org/mailman/options/python-bugs-list/archive%40mail-archive.com



Re: Strong typing vs. strong testing

2010-10-06 Thread Pascal J. Bourguignon
Keith H Duggar dug...@alum.mit.edu writes:

 On Sep 29, 9:01 pm, RG rnospa...@flownet.com wrote:
 That the problem is elsewhere in the program ought to be small
 comfort.  But very well, try this instead:

 [...@mighty:~]$ cat foo.c
 #include <stdio.h>

 int maximum(int a, int b) { return a > b ? a : b; }

 int main() {
   long x = 8589934592;
   printf("Max of %ld and 1 is %d\n", x, maximum(x,1));
   return 0;}

 [...@mighty:~]$ gcc -Wall foo.c
 [...@mighty:~]$ ./a.out
 Max of 8589934592 and 1 is 1

 $ gcc -Wconversion -Werror foo.c
 cc1: warnings being treated as errors
 foo.c: In function 'main':
 foo.c:5: warning: passing argument 1 of 'maximum' with different width
 due to prototype

 It's called learning to compile. And, yes, those warnings (and
 nearly
 every other one) should be enabled and treated as errors if a shop
 wants
 maximum protection. I only wish more (library) vendors did so.

So you're wishing that they'd be active by default.
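
(For contrast, a trivial sketch of the equivalent in Python, where integers
have arbitrary precision, so the same call gives the expected answer without
any declaration:)

def maximum(a, b):
    # Python ints never silently truncate, so no width mismatch can occur.
    return a if a > b else b

print("Max of %d and 1 is %d" % (8589934592, maximum(8589934592, 1)))
# Max of 8589934592 and 1 is 8589934592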


-- 
__Pascal Bourguignon__ http://www.informatimago.com/
-- 
http://mail.python.org/mailman/listinfo/python-list


Re: Problem installing psycopg2 in virtualenv (Ubuntu 10.04, Python 2.5)

2010-10-05 Thread Pascal Polleunus

On 05/10/10 00:11, Diez B. Roggisch wrote:

Pascal Polleunusp...@especific.be  writes:


Hi,

I'm having problems installing psycopg2 in a virtualenv on Ubuntu 10.04.


My problem is also explained on stackoverflow:
http://stackoverflow.com/questions/3847536/installing-psycopg2-in-virtualenv-ubuntu-10-04-python-2-5


I tried different things explained there:
http://www.saltycrane.com/blog/2009/07/using-psycopg2-virtualenv-ubuntu-jaunty/

The last thing I tried is this...
I created a virtualenv with -p python2.5 --no-site-packages
I installed libpq-dev: apt-get install libpq-dev

In the virtualenv, I did this: easy_install -i
http://downloads.egenix.com/python/index/ucs4/ egenix-mx-base

Then when I tried pip install psycopg2==2.0.7, I got this error:

Installing collected packages: psycopg2
Running setup.py install for psycopg2
building 'psycopg2._psycopg' extension
gcc -pthread -fno-strict-aliasing -DNDEBUG -g -fwrapv -O2 -Wall
-Wstrict-prototypes -fPIC -DPSYCOPG_DEFAULT_PYDATETIME=1
-DPSYCOPG_VERSION=2.2.2 (dt dec ext pq3) -DPG_VERSION_HEX=0x080404
-DPSYCOPG_EXTENSIONS=1 -DPSYCOPG_NEW_BOOLEAN=1 -DHAVE_PQFREEMEM=1
-DHAVE_PQPROTOCOL3=1 -I/usr/include/python2.5
-I. -I/usr/include/postgresql -I/usr/include/postgresql/8.4/server -c
psycopg/psycopgmodule.c -o
build/temp.linux-i686-2.5/psycopg/psycopgmodule.o
-Wdeclaration-after-statement
psycopg/psycopgmodule.c:27:20: error: Python.h: No such file or directory
In file included from psycopg/psycopgmodule.c:31:
./psycopg/python.h:31:26: error: structmember.h: No such file or directory
./psycopg/python.h:34:4: error: #error psycopg requires Python >= 2.4
In file included from psycopg/psycopgmodule.c:32:


Does anyone have any idea how to solve that?


Install the python-dev-package. It contains the Python.h file, which the
above error message pretty clearly says. Usually, it's a good idea to
search package descriptions of debian/ubuntu packages for missing header
files to know what to install.


It's already installed, at least for 2.6; not sure that's correct for 2.5.
python2.5-dev is not available, but python-old-doctools replaces it.

Here's what is installed:

ii  python2.52.5.4-1ubuntu6.1
ii  python2.5-minimal2.5.4-1ubuntu6.1
ii  python-old-doctools  2.5.5-1
ii  python2.62.6.5-1ubuntu6
ii  python2.6-dev2.6.5-1ubuntu6
ii  python2.6-minimal2.6.5-1ubuntu6
ii  python-dev   2.6.5-0ubuntu1
--
http://mail.python.org/mailman/listinfo/python-list


Re: Strong typing vs. strong testing

2010-10-05 Thread Pascal Costanza

On 05/10/2010 05:36, salil wrote:

On Sep 30, 1:38 pm, Lie Ryanlie.1...@gmail.com  wrote:

The /most/ correct version of maximum() function is probably one written
in Haskell as:

 maximum :: Integer -> Integer -> Integer
 maximum a b = if a > b then a else b

Integer in Haskell has infinite precision (like python's int, only
bounded by memory), but Haskell also have static type checking, so you
can't pass just any arbitrary objects.

But even then, it's still not 100% correct. If you pass a really large
values that exhaust the memory, the maximum() could still produce
unwanted result.

Second problem is that Haskell has Int, the bounded integer, and if you
have a calculation in Int that overflowed in some previous calculation,
then you can still get an incorrect result. In practice, the
type-agnostic language with *mandatory* infinite precision arithmetic
wins in terms of correctness. Any language which only has optional
infinite precision arithmetic can always produce erroneous result.



I have not programmed in Haskell that much, but I think Haskell
infers the type Integer (the infinite precision one) by default, and not
the Int (finite precision) type, for integers. So, the programmer who
specifically mentions Int in the signature of the function is
basically overriding this default behavior for specific reasons
relevant to the application, for example performance. I think
Haskell's way is the right one. It provides safe behavior as the
default and at the same time treats programmers as adults, at least
in this case.

I think dynamic languages are attractive because they make programs
less verbose. But, statically typed languages with type inference
(Haskell, OCaML, Scala, F#) is a very good compromise because they
offer both type safety and succinctness. And when we need algorithms
that should work the same independent of types, Haskell has
typeclasses which are pretty intuitive, unlike the horrible C++
templates.


Static typing still doesn't mesh well with certain kinds of reflection.


Pascal

--
My website: http://p-cos.net
Common Lisp Document Repository: http://cdr.eurolisp.org
Closer to MOP  ContextL: http://common-lisp.net/project/closer/
--
http://mail.python.org/mailman/listinfo/python-list


Re: Problem installing psycopg2 in virtualenv (Ubuntu 10.04, Python 2.5)

2010-10-05 Thread Pascal Polleunus

On 05/10/10 10:18, Alex Willmer wrote:

On Oct 5, 7:41 am, Pascal Polleunusp...@especific.be  wrote:

On 05/10/10 00:11, Diez B. Roggisch wrote:

Install the python-dev-package. It contains the Python.h file, which the
above error message pretty clearly says. Usually, it's a good idea to
search package descriptions of debian/ubuntu packages for missing header
files to know what to install.


It's already installed; at least for 2.6, nor sure it's correct for 2.5.
python2.5-dev is not available but python-old-doctools replaces it.


Ubuntu 10.04 doesn't have a full Python 2.5 packaged, as evidenced by
the lack of python2.5-dev. You need to use Python 2.6 or if you
absolutely must use Python 2.5 build it from source, try a Debian
package or switch distro. python-old-doctools does not replace python-
dev, it looks like it was bodged to keep some latex tools working.



Thanks Diez and Alex for you quick answers.

I finally used Python 2.6 and everything went fine.
--
http://mail.python.org/mailman/listinfo/python-list


Problem installing psycopg2 in virtualenv (Ubuntu 10.04, Python 2.5)

2010-10-04 Thread Pascal Polleunus

Hi,

I'm having problems installing psycopg2 in a virtualenv on Ubuntu 10.04.


My problem is also explained on stackoverflow:
http://stackoverflow.com/questions/3847536/installing-psycopg2-in-virtualenv-ubuntu-10-04-python-2-5


I tried different things explained there: 
http://www.saltycrane.com/blog/2009/07/using-psycopg2-virtualenv-ubuntu-jaunty/


The last thing I tried is this...
I created a virtualenv with -p python2.5 --no-site-packages
I installed libpq-dev: apt-get install libpq-dev

In the virtualenv, I did this: easy_install -i 
http://downloads.egenix.com/python/index/ucs4/ egenix-mx-base


Then when I tried pip install psycopg2==2.0.7, I got this error:

Installing collected packages: psycopg2
Running setup.py install for psycopg2
building 'psycopg2._psycopg' extension
gcc -pthread -fno-strict-aliasing -DNDEBUG -g -fwrapv -O2 -Wall 
-Wstrict-prototypes -fPIC -DPSYCOPG_DEFAULT_PYDATETIME=1 
-DPSYCOPG_VERSION=2.2.2 (dt dec ext pq3) -DPG_VERSION_HEX=0x080404 
-DPSYCOPG_EXTENSIONS=1 -DPSYCOPG_NEW_BOOLEAN=1 -DHAVE_PQFREEMEM=1 
-DHAVE_PQPROTOCOL3=1 -I/usr/include/python2.5 -I. 
-I/usr/include/postgresql -I/usr/include/postgresql/8.4/server -c 
psycopg/psycopgmodule.c -o 
build/temp.linux-i686-2.5/psycopg/psycopgmodule.o 
-Wdeclaration-after-statement

psycopg/psycopgmodule.c:27:20: error: Python.h: No such file or directory
In file included from psycopg/psycopgmodule.c:31:
./psycopg/python.h:31:26: error: structmember.h: No such file or directory
./psycopg/python.h:34:4: error: #error psycopg requires Python >= 2.4
In file included from psycopg/psycopgmodule.c:32:


Does anyone have any idea how to solve that?

Thanks in advance,
Pascal
--
http://mail.python.org/mailman/listinfo/python-list


Re: Strong typing vs. strong testing

2010-10-01 Thread Pascal J. Bourguignon
Gene gene.ress...@gmail.com writes:

 The FA or TM dichotomy is more painful to contemplate than you say.
 Making appropriate simplifications for input, any modern computer is a
 FA with 2^(a few trillion) states.  Consequently, the gestalt of
 computer science seems to be to take it on faith that at some very
 large number of states, the FA behavior makes a transition to TM
 behavior for all possible practical purposes (and I mean all).  So
 what is it--really--that's trivial to analyze?  And what is
 impossible?  I'm sorry this is drifting OT and will stop here.


Don't worry, this thread is becoming interesting at least.

-- 
__Pascal Bourguignon__ http://www.informatimago.com/
-- 
http://mail.python.org/mailman/listinfo/python-list


Re: Strong typing vs. strong testing

2010-10-01 Thread Pascal J. Bourguignon
Seebs usenet-nos...@seebs.net writes:

 On 2010-10-01, Don Geddis d...@geddis.org wrote:
 in C I can have a function maximum(int a, int b) that will always
 work. Never blow up, and never give an invalid answer. If someone
 tries to call it incorrectly it is a compile error.

 I would agree that the third sentence is arguably wrong, simply
 because there's no such thing (outside #error) of a mandate to stop
 compiling.  However, my understanding was that the dispute was over
 the second sentence, and that's certainly correct.

 The obvious simple maximum() in C will not raise an exception nor return
 something which isn't an int in any program which is not on its face
 invalid in the call.  This is by definite contrast with several of the
 interpreted languages, 

This has nothing to do with the fact that these languages have
implementations using the interpreter pattern instead of a compiler.

Matter of fact, most Common Lisp implementations just do not have any
interpreter!  (Which doesn't prevent them to have a REPL).


 where a function or subroutine like that cannot
 specify that its argument must be some kind of integer.

This is correct, but this is not a characteristic of dynamic programming
languages.  There are dynamic programming languages where you can
declare the type of the parameters, to allow for static checking.  For
example in Common Lisp (where these declarations are advisory, so the
compiler is free to take them into account or not, depending on what it
can infer from its own side, so any problem here is just a warning: the
program can always handle the problems at run-time).



-- 
__Pascal Bourguignon__ http://www.informatimago.com/
-- 
http://mail.python.org/mailman/listinfo/python-list


Re: Strong typing vs. strong testing

2010-10-01 Thread Pascal J. Bourguignon
rustom rustompm...@gmail.com writes:

 Some points that seem to be missed (or Ive missed them?)

 1. A dichotomy is being made between 'static' languages like C and
 'dynamic' languages like python/lisp. This dichotomy was valid 30
 years ago, not today.  In Haskell for example

 - static checking is stronger than in C/C++  -- its very hard if not
 impossible to core dump haskell except through memory exhaustion

 - dynamic-ness is almost that of python/lisp -- on can write
 significant haskell programs without type-declaring a single variable/
 function

You're conflating type strength with the requirement that the
programmer should declare the types, and with the time when the types
are checked.

http://en.wikipedia.org/wiki/Comparison_of_programming_languages#Type_systems

 
type strong   static    explicit    Ada
type strong   static    implicit    Haskell
type weak     static    explicit    C
type weak     static    implicit    ?
type strong   dynamic   explicit    (*)
type strong   dynamic   implicit    Common Lisp
type weak     dynamic   explicit    Objective-C
type weak     dynamic   implicit    JavaScript


(*) Usually languages provide explicit typing as an option, but can
deal with implicit typing, when they're dynamic.  


There are also a few languages with no type checking, such as assembler
or Forth.
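
(Python, incidentally, sits in the same type strong / dynamic / implicit cell
as Common Lisp; a quick illustration at the prompt:)

>>> 1 + "2"        # strong typing: no implicit coercion between unrelated types
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
TypeError: unsupported operand type(s) for +: 'int' and 'str'
>>> 1 + 2.5        # numeric promotion, on the other hand, is well-defined
3.5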



 Much more mainstream, C# is almost as 'managed' as dynamic languages
 and has efficiency comparable to C.

Nothing extraordinary here.  Common Lisp is more efficient than C.
http://www.lrde.epita.fr/~didier/research/verna.06.ecoop.pdf
http://portal.acm.org/citation.cfm?id=1144168

Actually, it's hard to find a language that has no compiler generating
faster code than C... 


 2. The dichotomy above misses a more pervasive dichotomy -- hardware
 vs software -- as real today as 30 years ago.

 To see this let us lift the discussion from that of *languages* C vs
 Python/Lisp  to philosophies:
 -- C-philosophy: the purpose of type-checking is to maximize (runtime)
 efficiency
 -- Lisp-philosophy: the purpose of type-checking is zero-errors (aka
 seg-faults) via continuous checks at all levels.

 If one is honest (and not polemical :-) ) it would admitted that both
 sides are needed in different contexts.

 Now Dijkstra pointed (40 years ago) in Discipline of Programming that
 this unfortunate dilemma arises due to lack of hardware support. I am
 unable to reproduce the elegance and succinctness of his language but
 the argument is as follows:

 Let us say that for a typical profile of a computer we have for every
 one instruction of the pathological one typified by the maximum
 function, a trillion 'normal' instructions.  This is what he calls a
 very-skew test -- an if-then-else that checks this would go the if-way
 way one trillion times for one else-way.  It is natural for a
 programmer to feel the pinch of these trillion checks and (be inclined
 to) throw them away.

 If however the check was put into hardware there would be no such
 dilemma. If every arithmetic operation was always checked for overflow
 *by hardware* even languages committed to efficiency like C could trap
 on errors with no extra cost.
 Likewise Lisp/python-like languages could easily be made more
 efficient.

 The diff arises from the fact that software costs per use whereas
 hardware costs per installation -- a transistor, unlike an if, does
 not cost any more if its used once or a trillion times.

 In short the problem is not C vs Lisp/Python but architectures like
 Intel wherein:

 1. an overflow bit harmlessly set by a compare operation is
 indistinguishable from one set by a signed arithmetic operation --
 almost certainly a problem

 2. An into instruction (interrupt on overflow) must be inserted into
 the software stream rather than raised as a hardware interrupt.

Hence the use of virtual machine: when your machine doesn't do what you
want, you have to write your own.

When Intel will realize that 99% of its users are running VM, perhaps
they'll start to wonder what they're making wrong...

-- 
__Pascal Bourguignon__ http://www.informatimago.com/
-- 
http://mail.python.org/mailman/listinfo/python-list


Re: Strong typing vs. strong testing

2010-10-01 Thread Pascal J. Bourguignon
Seebs usenet-nos...@seebs.net writes:

 On 2010-09-30, Ian Collins ian-n...@hotmail.com wrote:
 Which is why agile practices such as TDD have an edge.  If it compiles 
 *and* passes all its tests, it must be right.

 So far as I know, that actually just means that the test suite is
 insufficient.  :)

 Based on my experience thus far, anyway, I am pretty sure it's essentially
 not what happens that the tests and code are both correct, and it is usually
 the case either that the tests fail or that there are not enough tests.

It also shows that for languages such as C, you cannot limit the unit tests
to the types declared for the function, but that you should try all the
possible values of the language.

Which, basically, is the same as with a dynamically typed programming
language, only now some unit tests will fail early, when trying to
compile them, while others will give wrong results later.


                               static               dynamic

compiler detects wrong type    fails at compile     fails at run-time
                                                    (with exception
                                                    explaining this is
                                                    the wrong type)

compiler passes wrong type     wrong result         fails at run-time
                               (the programmer      (with exception
                               spends hours         explaining this is
                               finding the          the wrong type)
                               problem)

compiler passes correct type   wrong result         wrong result
                               (normal bug to be corrected)

compiler passes correct type   correct result       correct result



-- 
__Pascal Bourguignon__ http://www.informatimago.com/
-- 
http://mail.python.org/mailman/listinfo/python-list


Re: Strong typing vs. strong testing

2010-10-01 Thread Pascal J. Bourguignon
BartC b...@freeuk.com writes:

 Pascal J. Bourguignon p...@informatimago.com wrote in message
 news:87sk0qkzhz@kuiper.lan.informatimago.com...
 rustom rustompm...@gmail.com writes:

 Much more mainstream, C# is almost as 'managed' as dynamic languages
 and has efficiency comparable to C.

 Nothing extraordinary here.  Common Lisp is more efficient than C.
 http://www.lrde.epita.fr/~didier/research/verna.06.ecoop.pdf
 http://portal.acm.org/citation.cfm?id=1144168

 It seems that to make Lisp fast, you have to introduce static
 typing. Which is not much different to writing in C, other than the
 syntax.

 Actually, it's hard to find a language that has no compiler generating
 faster code than C...

 But those implementers have to try very hard to beat C. Meanwhile C
 can be plenty fast without doing anything special.

 When Intel will realize that 99% of its users are running VM

 Which one?

Any implementation of a controlled environment is a virtual machine.
Sometimes it is explicitly defined, such as in clisp, Parrot or the JVM, but
more often it is implicit, such as in sbcl, or worse, developed in an
ad-hoc way in applications (eg. written in C++).


-- 
__Pascal Bourguignon__ http://www.informatimago.com/
-- 
http://mail.python.org/mailman/listinfo/python-list


Re: Strong typing vs. strong testing

2010-10-01 Thread Pascal J. Bourguignon
BartC b...@freeuk.com writes:

 Pascal J. Bourguignon p...@informatimago.com wrote in message
 news:87zkuyjawh@kuiper.lan.informatimago.com...
 BartC b...@freeuk.com writes:

 Pascal J. Bourguignon p...@informatimago.com wrote in message

 When Intel will realize that 99% of its users are running VM

 Which one?

 Any implementation of a controlled environment is a virtual machine.
 Sometimes it is explicitely defined, such as in clisp, parot or jvm, but
 more often it is implicit, such as in sbcl, or worse, developed in an
 ad-hoc way in applications (eg. written in C++).

 But if you had to implement a VM directly in hardware, which one (of
 the several varieties) would you choose?

 And having chosen one, how would that impact the performance of a
 language with an incompatible VM?

Indeed.  C running on a LispMachine wasn't so fast.  All this bit
twiddling and pointer computing...  But if that could be construed as a
reason to use dynamic languages (they run faster!) rather than C, it'd be
all for the best!


Otherwise we need to go further down the road of VMs (cf. the hardware
virtualization stream), down to the microcode.  Again, it's because of
the cheapness of microprocessor foundries that we forgot for a long time
the notion of microcode, which was found more often on big iron.
Nowadays the biggest microprocessors are back on the track of microcode;
this should be open, and virtual machines should be more routinely
implemented in microcode.


 Perhaps processors executing native code as it is now, aren't such a
 bad idea.

Perhaps if they had a more controlled execution model it would be a
better idea.  Remember that processors are the way they are because C (and
unix) is the way it is!


-- 
__Pascal Bourguignon__ http://www.informatimago.com/
-- 
http://mail.python.org/mailman/listinfo/python-list

