Re: Calling of GetVolumeInformation returns empty serial number
Hello! Eryk, your solution is the best one for me: os.stat(drive).st_dev. I don't need the real hardware ID. I am writing a Recycle Bin manager in Python, for my machine only. It simply registers all Recycle Bin files in an SQLite DB (only the new ones, with a date), and after 30 days it deletes the registered Recycle Bin files that are too old. Without the serial number, external drives produce multiple records for one file (E:\, F:\, G:\), and because the drive letters change I may search the wrong drive (at deletion time the registered E:\...\x.dcu is actually on G:\, so I can't find it). But with this code I can substitute the drive + Recycle Bin folder with the serial, and later search on the correct drive. Thank you! dd

2017-11-07 13:10 GMT+01:00 eryk sun :

> On Tue, Nov 7, 2017 at 7:58 AM, Durumdara wrote:
> >
> > I want to get the serial number of the drives (without external modules
> > like Win32 or WMI).
>
> The volume serial number is more easily available as
> os.stat(drive).st_dev, which comes from calling
> GetFileInformationByHandle. Note that despite using the volume serial
> number (VSN) as the nearest equivalent of POSIX st_dev, there is no
> requirement that the VSN is unique or even non-zero. The same applies
> to the file index number that's used for POSIX st_ino. For example,
> both values are 0 on a WebDav drive, for which Python's implementation
> of os.path.samefile is useless. Practically speaking, however, it's
> good enough in most cases, especially for mounted disk volumes.
>
> That said, maybe what you really want is the hardware (disk) serial
> number -- not a volume serial number. The easiest way to get that is
> via WMI. You can use subprocess to run wmic.exe if you don't want an
> external dependency. You can also get the disk serial number by
> calling DeviceIoControl via ctypes. This is a fairly complex
> IOCTL_STORAGE_QUERY_PROPERTY request, with an input
> STORAGE_PROPERTY_QUERY structure requesting the StorageDeviceProperty.
> The result is a STORAGE_DEVICE_DESCRIPTOR structure that has a
> SerialNumberOffset field that's the byte offset from the beginning of
> the buffer of the serial number as a null-terminated string.
>
> Getting back to the VSN, note that the mount-point manager doesn't
> rely on it as a unique identifier. For associating volume devices with
> logical DOS drives and volume GUID names (i.e. names like
> "Volume{12345678----123456789abc}", which are used to
> mount volumes as NTFS junctions), the mount-point manager queries a
> unique ID via IOCTL_MOUNTDEV_QUERY_UNIQUE_ID. Sometimes the volume
> driver returns a unique ID that's very long -- over 200 bytes. This
> doesn't matter because it's only used to uniquely associate a GUID
> name (and maybe a DOS drive) with the given volume when the system
> boots. This association is persisted in HKLM\System\MountedDevices.

-- https://mail.python.org/mailman/listinfo/python-list
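A minimal sketch of the suggestion above: on Windows, os.stat().st_dev carries the volume serial number; on other platforms it is the POSIX device id. Either way, the value is only meaningful for comparing locations on the same machine, not as a guaranteed-unique hardware id.

```python
import os

# Two paths on the same volume report the same st_dev, which is all
# a Recycle Bin manager needs to match records to the right drive.
a = os.stat('.').st_dev
b = os.stat(os.getcwd()).st_dev
print(a == b)  # -> True: same volume, same st_dev
```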
Calling of GetVolumeInformation returns empty serial number
Hi! Windows 10, Python 3.6. I want to get the serial number of the drives (without external modules like Win32 or WMI). It is needed to identify removable devices (like USB external drives). Somewhere I saw this code:

def GetVolumeID(Drive):
    import ctypes
    kernel32 = ctypes.windll.kernel32
    volumeNameBuffer = ctypes.create_unicode_buffer(1024)
    fileSystemNameBuffer = ctypes.create_unicode_buffer(1024)
    serial_number = None
    max_component_length = None
    file_system_flags = None
    rc = kernel32.GetVolumeInformationW(
        ctypes.c_wchar_p(Drive),
        volumeNameBuffer,
        ctypes.sizeof(volumeNameBuffer),
        serial_number,
        max_component_length,
        file_system_flags,
        fileSystemNameBuffer,
        ctypes.sizeof(fileSystemNameBuffer)
    )
    return serial_number

print(GetVolumeID('c:\\'))

This function works for the other values (volumeNameBuffer), but for the serial it returns None. The serial number is empty. How do I pass this parameter so that I get the value? The doc says it's an LPDWORD (pointer to DWORD): https://msdn.microsoft.com/en-us/library/windows/desktop/aa364993(v=vs.85).aspx

_Out_opt_ LPDWORD lpVolumeSerialNumber,

Thank you for any advice on this theme! Best wishes, dd
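A hedged sketch of the direct fix the LPDWORD note points at: lpVolumeSerialNumber is an out parameter, so a ctypes.c_ulong must be passed with ctypes.byref(); passing None tells the API the caller does not want the value. The function name and buffer sizes here are illustrative, not from the original post.

```python
import ctypes
import sys

def get_volume_serial(drive):
    """Hypothetical corrected version: pass the DWORD out-params by reference."""
    if sys.platform != "win32":
        return None  # GetVolumeInformationW only exists on Windows
    kernel32 = ctypes.windll.kernel32
    vol_name = ctypes.create_unicode_buffer(261)
    fs_name = ctypes.create_unicode_buffer(261)
    serial = ctypes.c_ulong(0)       # receives lpVolumeSerialNumber
    max_len = ctypes.c_ulong(0)
    flags = ctypes.c_ulong(0)
    ok = kernel32.GetVolumeInformationW(
        ctypes.c_wchar_p(drive),
        vol_name, ctypes.sizeof(vol_name),
        ctypes.byref(serial),        # the fix: byref(...), not None
        ctypes.byref(max_len),
        ctypes.byref(flags),
        fs_name, ctypes.sizeof(fs_name))
    return serial.value if ok else None

print(get_volume_serial('c:\\'))
```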
Re: Python 3 - xml - crlf handling problem
Dear Stefan! So: maybe I don't understand things well, but I thought the parser drops the "non-data" CRLFs and other such characters rather than preserving them. Then it wouldn't matter whether I read the XML from a file or create it from code, because both would generate the SAME RESULT. But Python doesn't do that. If I build the XML from code, there are no extra characters; but Python preserves the parsed CRLF characters somewhere, and they are flushed into the result too.

Example:

original = ''' AnyText '''

If I parse this and write it with toxml(), the CRLFs remain in the output; but if I create the same document node by node, there are no CRLFs, and toxml() writes a single-line XML. This also means that if I use the toprettyxml() call to pretty-print the XML, the file size grows. If there is a multi-step processing queue -- two Python programs communicating through XML files -- the size can grow every time:

Py1 reads Py2's file, processes it, and writes a result file; Py2 reads Py1's result file, processes it, and passes it back to Py1. This grows the file with each call, because the "pretty" CRLFs are not normalized out of the document.

original = ''' AnyText '''

def main():
    f = open('test.0.xml', 'w')
    f.write(original.strip())
    f.close()
    for i in range(1, 10 + 1):
        xo = parse('test.%d.xml' % (i - 1))
        de = xo.documentElement
        de.setAttribute('c', str(i))
        t = de.getElementsByTagName('element')[0]
        tn = t.childNodes[0]
        print(dir(t))
        print(tn)
        print(tn.nodeValue)
        tn.nodeValue = str(i) + '\t' + '\n'
        #s = xo.toxml()
        s = xo.toprettyxml()
        f = open('test.%d.xml' % i, 'w')
        f.write(s)
        f.close()
    sys.exit()

And: because Python does not convert CRLF to &#13;, I cannot distinguish the "prettied source's CRLFs" (loaded from the template file), my own toprettyxml() CRLFs, and really contained CRLFs (for example, a memo field's value).
My case is that the processor application (to which I pass the XML from Python) is sensitive to extra CRLFs in text nodes, so I must do something about these "extra" items to avoid errors in the external program. I get these templates and input files in prettied format (with CRLFs), but I must "eat" them to produce an XML that is single-lined if possible. I hope you understand my problem. Thanks: dd
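A sketch of one common workaround for the accumulation described above: strip whitespace-only text nodes after parsing, so pretty-printing indentation does not pile up across repeated parse/pretty-print round trips. The tiny document here is a stand-in, since the original example XML did not survive the list archive.

```python
from xml.dom import minidom

def strip_whitespace_nodes(node):
    # Remove text nodes that contain nothing but whitespace; real text
    # (like a memo field's value) has non-whitespace content and is kept.
    for child in list(node.childNodes):
        if child.nodeType == child.TEXT_NODE and not child.data.strip():
            node.removeChild(child)
        elif child.hasChildNodes():
            strip_whitespace_nodes(child)

doc = minidom.parseString('<doc>\n  <element>AnyText</element>\n</doc>')
strip_whitespace_nodes(doc.documentElement)
print(doc.toxml())  # -> <?xml version="1.0" ?><doc><element>AnyText</element></doc>
```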
Python 3 - xml - crlf handling problem
Hi! As I see it, XML parsing is "wrong" in Python. I must use predefined XML files: parse them, extend them, and produce a result. But on Windows this works wrongly. When the predefined XMLs are "formatted" (prettied) with CRLFs, the parser keeps these extra LF characters (it does not apply the logic that CR = LF = CRLF), and they appear in the new result too.

xo = parse('test_original.xml')
de = xo.documentElement
de.setAttribute('b', "2")
b = xo.toxml('utf-8')
f = open('test_original2.xml', 'wb')
f.write(b)
f.close()

And: if I use text elements, this can extend the information with extra characters and make the XML wrong... Because of this problem I can use only my own generated XMLs, not prettied ones! Is this normal? Thanks for reading: dd
No module named Pwd - under Apache 2.2
Hi! Win7/x64, Python 3.2, PyPGSQL for 3.2, and Apache 2.2. I created a script working in CGI mode: it reads some table and returns an XML. It worked in normal mode, under PyScripter, and under the command line. But! When I try to use it from Apache 2.2 as CGI, I get the error in the subject: "No module named pwd". I checked the code. Everything is fine up to these lines:

import postgresql          # this is working
con = postgresql.open()    # this failed

Traceback (most recent call last):
  File "C:/web/Apache2.2/cgi-bin/testpg.py", line 20, in Session
    Function()
  File "C:/web/Apache2.2/cgi-bin/testpg.py", line 38, in WebFunction
    db = postgresql.open("pq://postgres:m@localhost/webdbdb")
  File "C:\python32\lib\site-packages\postgresql\__init__.py", line 76, in open
    std_params = _pg_param.collect(prompt_title = None)
  File "C:\python32\lib\site-packages\postgresql\clientparameters.py", line 620, in collect
    cpd = normalize(extrapolate(chain(*d_parameters)))
  File "C:\python32\lib\site-packages\postgresql\clientparameters.py", line 563, in normalize
    for (k, v) in iter:
  File "C:\python32\lib\site-packages\postgresql\clientparameters.py", line 524, in extrapolate
    for item in iter:
  File "C:\python32\lib\site-packages\postgresql\clientparameters.py", line 130, in defaults
    user = getuser() or 'postgres'
  File "C:\python32\lib\getpass.py", line 156, in getuser
    import pwd
ImportError: No module named pwd

Apache runs under my account (a normal user). The sys.path is OK:

['C:\\web\\Apache2.2\\cgi-bin', 'C:\\Windows\\system32\\python32.zip', 'C:\\python32\\DLLs', 'C:\\python32\\lib', 'C:\\python32', 'C:\\python32\\lib\\site-packages']

So we (me and the postgresql package's author) don't understand why this happens. Any idea? Thanks for your help: dd
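For the record, getpass.getuser() only falls back to the POSIX-only pwd module when none of the LOGNAME/USER/LNAME/USERNAME environment variables is set, which can easily be the case for a process spawned by Apache CGI on Windows. A hedged workaround sketch (the 'postgres' value is purely an illustration):

```python
import os

# Make sure one of the variables getpass.getuser() checks exists
# before the database driver triggers its getuser() call.
os.environ.setdefault('USERNAME', 'postgres')  # illustrative value

import getpass
print(getpass.getuser())
```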
Re: Linux drives me crazy... Rights, listdir...
Hi! Sorry, the path was wrong! "backup_redmine" <> "redmine_backup"... :-( Thanks: dd
Linux drives me crazy... Rights, listdir...
Dear Everybody! We have a Redmine server on Linux. I wrote a Python tool that backs up Redmine (files and database) and puts the backup on an FTP server. This worked fine. But today I checked and saw it had been failing for the past week. Looking closer, I saw the partition is out of space: I stored the files, logs, and dumps under /home, and that device is full. So I tried to move everything to the /var/www/redmine_backup folder. And then I got a result I don't understand. As root I set the owner on every file and directory, set the mode to 777, and set the sticky bit and the setuid/setgid bits. Everything looks the same, but when I call os.listdir() on this directory, I see only two folders. There are 5 elements here:

Files_Zipped (dir)
MySQL_Dump (dir)
Log (dir)
Log2 (dir)
BackupRedmine.py (the script)

listdir shows only:

Files_Zipped
MySQL_Dump

(It doesn't even see the script itself!) Because it doesn't see the Log dir, the script tries to create it, and that fails... I checked everything in MC too, but I cannot find the difference... Could this be a Python bug?
xuser@h2182:~$ /var/www/backup_redmine/BackupRedMine.py /log
RedMine Backup V1.0
Log mode
Make and get backup path
Try to make /var/www/redmine_backup/Log
Traceback (most recent call last):
  File "/var/www/backup_redmine/BackupRedMine.py", line 239, in
    logfilename = MakeBackupPath(_SubPath_Log) + '/log_' + NowStr
  File "/var/www/backup_redmine/BackupRedMine.py", line 78, in MakeBackupPath
    os.makedirs(path)
  File "/usr/lib/python2.5/os.py", line 171, in makedirs
    mkdir(name, mode)
OSError: [Errno 13] Permission denied: '/var/www/redmine_backup/Log'

xuser@h2182:/var/www/backup_redmine$ ./BackupRedMine.py /log
RedMine Backup V1.0
Log mode
Make and get backup path
(os.listdir) ['Files_Zipped', 'MySQL_Dump']

xuser@h2182:/var/www/backup_redmine$ ls -l
total 24
-rwxrwxrwt 1 xuser xuser 7350 2011-03-29 10:47 BackupRedMine.py
drwsrwsrwt 2 xuser xuser 4096 2011-03-29 09:46 Files_Zipped
drwsrwsrwt 2 xuser xuser 4096 2011-03-29 10:36 Log
drwsrwsrwt 2 xuser xuser 4096 2011-03-29 10:42 Log2
drwsrwsrwt 2 xuser xuser 4096 2011-03-29 09:46 MySQL_Dump

But the interesting thing is that everything works fine under /home. So it seems to be a rights problem, but what is the difference? What can I do? Thanks: dd
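A small diagnostic sketch of the kind that helps in situations like this: print the canonical path actually being inspected and the permission bits the process sees before acting on it (the '.' argument is just a placeholder for the backup directory).

```python
import os
import stat

def probe(path):
    # Resolve symlinks, show the mode bits, and check read+search access
    # as the current process actually experiences them.
    st = os.stat(path)
    return (os.path.realpath(path),
            oct(stat.S_IMODE(st.st_mode)),
            os.access(path, os.R_OK | os.X_OK))

print(probe('.'))
```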
Re: ftplib limitations?
Hi! On Aug. 25, 08:07, Stefan Schwarzer wrote:

> The file is 2 GB in size and is fully transferred, without
> blocking or an error message. The status message from the
> server is '226-File successfully transferred\n226 31.760
> seconds (measured here), 64.48 Mbytes per second', so this
> looks ok, too.
>
> I think your problem is related to the FTP server or its
> configuration.
>
> Have you been able to reproduce the problem?

Yes. I tried saving the file, but I still got this error. But: Total Commander CAN download the file, and ncftpget can also download it without problem... Hm... :-( Thanks: dd
Re: ftplib limitations?
Hi!

> So if I understand correctly, the script works well on
> smaller files but not on the large one?

Yes. 500-800 MB is OK; over 1 GB is not OK.

> > It down all of the file (100%) but the next line never reached.
>
> _Which_ line is never reached? The `print` statement after
> the `retrbinary` call?

Yes, the print.

> > Some error I got, but this was in yesterday, I don't remember the text
> > of the error.
>
> Can't you reproduce the error by executing the script once
> more? Can you copy the file to another server and see if the
> problem shows up there, too?

I get it every time, but I don't have another server to test with.

> I can imagine the error message (a full traceback if
> possible) would help to say a bit more about the cause of
> the problem and maybe what to do about it.

This was:

Filename: "Repositories 20100824_101805 (Teljes).zip" Size: 1530296127
..download: 10% 20% 30% 40% 50% 60% 70% 80% 90% 100%
Traceback (most recent call last):
  File "C:\D\LocalBackup\ftpdown.py", line 31, in
    ftp.retrbinary("retr " + s, CallBack)
  File "C:\Python26\lib\ftplib.py", line 401, in retrbinary
    return self.voidresp()
  File "C:\Python26\lib\ftplib.py", line 223, in voidresp
    resp = self.getresp()
  File "C:\Python26\lib\ftplib.py", line 209, in getresp
    resp = self.getmultiline()
  File "C:\Python26\lib\ftplib.py", line 195, in getmultiline
    line = self.getline()
  File "C:\Python26\lib\ftplib.py", line 182, in getline
    line = self.file.readline()
  File "C:\Python26\lib\socket.py", line 406, in readline
    data = self._sock.recv(self._rbufsize)
socket.error: [Errno 10054] An existing connection was forcibly closed by the remote host (translated from the Hungarian system message)

So this message means the remote station forcibly closed the existing connection. Now I'm trying to save the file into a temporary file instead of holding it in memory. Thanks: dd
ftplib limitations?
Hi! See this code:

import os, sys, ftplib
from ftplib import FTP

ftp = FTP()
ftp.connect('ftp.anything.hu', 2121)
ftp.login('?', '?')
print ftp.getwelcome()
ftp.set_pasv(False)
ls = ftp.nlst()
for s in ls:
    print "\nFilename:", '"%s"' % s,
    fsize = ftp.size(s)
    print "Size:", fsize
    print "..download:",
    d = {}
    d['buffer'] = []
    d['size'] = 0
    d['lastpercentp10'] = 0

    def CallBack(Data):
        d['size'] = d['size'] + len(Data)
        d['buffer'].append(Data)
        percent = (d['size'] / float(fsize)) * 100
        percentp10 = int(percent / 10)
        if percentp10 > d['lastpercentp10']:
            d['lastpercentp10'] = percentp10
            print str(percentp10 * 10) + "%",

    ftp.retrbinary("retr " + s, CallBack)
    print ""
    print "..downloaded, joining"
    dbuffer = "".join(d['buffer'])
    adir = os.path.abspath("b:\\_BACKUP_")
    newfilename = os.path.join(adir, s)
    print "..saving into", newfilename
    f = open(newfilename, "wb")
    f.write(dbuffer)
    f.close()
    print "..saved"
    print "..delete from the server"
    ftp.delete(s)
    print "..deleted"
    #sys.exit()

print "\nFinished"

This code logs into a site and downloads and deletes all files. I experienced a problem. The server is Windows with FileZilla; the client is Win7 with Python 2.6. When I get a file of size 1 303 318 662 bytes, Python halts on the "retrbinary" line every time: it downloads all of the file (100%), but the next line is never reached. I got some error, but that was yesterday and I don't remember its text. I want to ask whether Py2.6's ftplib has some limitations. I remember that zipfile had a 2 GB limitation, where a bigger archive caused an infinite loop. Maybe ftplib also has this, and this causes the problem... Or do I need to send a "NOOP" command in the callback? Thanks for your help: dd
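Independent of the server issue, accumulating 1.3 GB of chunks in a list is risky; the usual pattern streams each retrbinary block straight to disk. A minimal sketch of that pattern; the FakeFTP stub below is only there so it can be exercised without a live server:

```python
import io

def download(ftp, name, out):
    """Stream 'name' from the server into the writable binary file 'out'."""
    ftp.retrbinary('RETR ' + name, out.write)

class FakeFTP:
    # Offline stand-in for ftplib.FTP: feeds two chunks to the callback,
    # the same way retrbinary delivers blocks of the real transfer.
    def retrbinary(self, cmd, callback, blocksize=8192):
        for chunk in (b'abc', b'def'):
            callback(chunk)

buf = io.BytesIO()
download(FakeFTP(), 'x.zip', buf)
print(buf.getvalue())  # -> b'abcdef'
```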
Python 3 - Is PIL/wxPython/PyWin32 supported?
Hi! I have an environment under Python 2.6 (WinXP) that is based on PIL, wxPython, and PyWin32. On the projects' pages I see an official installer only for PyWin32. I don't know whether PIL or wxPython supports Python 3 or not; maybe with some trick these packages work. Does anybody know about it? Can I replace my Py2.6 without losing PIL/wxPython? Thanks for your help: dd
KinterBasDB - how to change the mode: embedded/server
Hi! I want to use KInterbasDB in mixed mode: sometimes embedded, sometimes a real local/remote server. How can I set up the connection so that KInterbasDB can determine which mode I want to use? Thanks for your help: dd
Re: Decimal problem
On Jun. 10, 23:01, Mark Dickinson wrote:

> On Jun 10, 8:45 pm, durumdara wrote:
> > ne 91, in fixed_conv_out_precise
> > from decimal import Decimal
> > ImportError: cannot import name Decimal
>
> Is it possible that you've got another file called decimal.py
> somewhere in Python's path? What happens if you start Python manually
> and type 'from decimal import Decimal' at the prompt?
>
> --
> Mark

Hi! I found the problem. But before that, I destroyed my machine completely... :-( The problem was the CURRENT PATH. My script's name was copy.py. If I started Python there and typed "import decimal", Python crashed, because decimal uses the copy and numbers modules, and decimal imported my module, not the system's one... Ouch. I need to reinstall all the tools that were on this machine, because I uninstalled/deleted everything while hunting for the source of a problem that was right in front of my eyes... :-( Thanks for your help: dd
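The shadowing described above can be confirmed in one line: check where Python actually found the module. A stdlib path is expected; a path pointing into the script's own directory means a local file (here copy.py) is shadowing it.

```python
import copy  # stdlib module that a local copy.py would shadow

# The __file__ attribute reveals which file was actually imported.
print(copy.__file__)
```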
Re: SQLite3 - How to set page size?
On Jun. 10, 20:39, Ian Kelly wrote:

> On Thu, Jun 10, 2010 at 12:25 PM, durumdara wrote:
> > Hi!
> >
> > I tried with this:
> >
> > import sqlite3
> > pdb = sqlite3.connect("./copied4.sqlite")
> > pcur = pdb.cursor()
> > pcur.execute("PRAGMA page_size = 65536;")
> > pdb.commit()
> > pcur.execute('VACUUM;')
> > pdb.commit()
> > pcur.execute("PRAGMA page_size")
> > rec = pcur.fetchone()
> > print rec
> > pdb.close()
> >
> > But never I got bigger page size.
> >
> > What I do wrong?
>
> According to the documentation, "The page size must be a power of two
> greater than or equal to 512 and less than or equal to
> SQLITE_MAX_PAGE_SIZE. The maximum value for SQLITE_MAX_PAGE_SIZE is
> 32768."
>
> Cheers,
> Ian

Thanks! That was it!!! dd
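Following the documented limit quoted above, the same sequence succeeds with an allowed power of two (32768 instead of 65536):

```python
import sqlite3

conn = sqlite3.connect(':memory:')
cur = conn.cursor()
cur.execute('PRAGMA page_size = 32768')  # must be <= SQLITE_MAX_PAGE_SIZE
cur.execute('VACUUM')                    # rebuilds the db with the new size
cur.execute('PRAGMA page_size')
page = cur.fetchone()[0]
print(page)
```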
Decimal problem
Hi! Last week I tested my home Python projects with KInterbasDB embedded, psycopg2, and SQLite. All of them worked well and everything was good. But blob-table deletion was slow in SQLite, so I thought I would try FireBird and PGSQL. Today I tried to copy the SQLite database into FireBird and PGSQL. psycopg2 raised this error:

import psycopg2
pdb = psycopg2.connect("dbname=testx user=postgres password=x")

C:\Python26\lib\site-packages\psycopg2\__init__.py:62: RuntimeWarning: can't import decimal module probably needed by _psycopg
  RuntimeWarning)

I was surprised, because this worked before... Then I tried to start KInterbasDB:

  File "C:\Python26\lib\site-packages\kinterbasdb\__init__.py", line 478, in connect
    return Connection(*args, **keywords_args)
  File "C:\Python26\lib\site-packages\kinterbasdb\__init__.py", line 644, in __init__
    self._normalize_type_trans()
  File "C:\Python26\lib\site-packages\kinterbasdb\__init__.py", line 1047, in _normalize_type_trans
    self.set_type_trans_out(_NORMAL_TYPE_TRANS_OUT)
  File "C:\Python26\lib\site-packages\kinterbasdb\__init__.py", line 1073, in set_type_trans_out
    return _k.set_Connection_type_trans_out(self._C_con, trans_dict)
  File "C:\Python26\lib\site-packages\kinterbasdb\__init__.py", line 1907, in _make_output_translator_return_type_dict_from_trans_dict
    return_val = translator(sample_arg)
  File "C:\Python26\lib\site-packages\kinterbasdb\typeconv_fixed_decimal.py", line 91, in fixed_conv_out_precise
    from decimal import Decimal
ImportError: cannot import name Decimal

I deleted my complete Python with all packages and reinstalled it. But the problem still appeared... What is this? And what the hell happened on this machine that destroyed my working Python apps...? Thanks: dd
SQLite3 - How to set page size?
Hi! I tried with this:

import sqlite3
pdb = sqlite3.connect("./copied4.sqlite")
pcur = pdb.cursor()
pcur.execute("PRAGMA page_size = 65536;")
pdb.commit()
pcur.execute('VACUUM;')
pdb.commit()
pcur.execute("PRAGMA page_size")
rec = pcur.fetchone()
print rec
pdb.close()

But I never got a bigger page size. What am I doing wrong? Thanks: dd
Re: MySQLDB - server has gone on blob insertion...
Hi! No, there is no such program. I got this reference: http://dev.mysql.com/doc/refman/5.0/en/gone-away.html I will try it. Thanks: dd
Re: MySQLDB - server has gone on blob insertion...
Hi!

> drop table blobs
> create table blobs (whatever definition it had originally)

It was a test. In PGSQL and PySQLite it was:

delete from blobs where (file_id in (select file_id from pics where dir_id=?))

So I need to delete only the blobs that were processed.

> > When I tried to start this, I got error:
> > _mysql_exceptions.OperationalError: (2006, 'MySQL server has gone away')
> > I read that server have some parameter, that limit the Query length.
> > Then I decreased the blob size to 1M, and then it is working.
>
> What is the table definition? In MySQL 4 (and likely not changed in
> v5 -- I've got the old brown tree book handy, hence the mention of v4)
> field type BLOB is limited to a length of 2^16 (64kB), MEDIUMBLOB is
> 2^24, and LONGBLOB is 2^32 (if the system is using unsigned integers
> internally, that should support 4GB...

I used the latest community server. I tried LONGBLOB too, and I got the same error... :-(

> But do you have enough memory to pass such an argument?

I have enough memory (4 GB); I want to insert only 1-8 MB pictures, which works under PySQLite/PGSQL. I saw that the packet size needs configuring -- maybe MySQLdb doesn't handle this with the right error message, and reports the "gone away" error instead. I set the packet and other parameters in my.ini and restarted MySQL, but that didn't solve the problem. Maybe I need to set this from the client, but I don't know how... Thanks for every help: dd
MySQLDB - server has gone on blob insertion...
Hi! I want to test my program that is coded against PGSQL and PySQLite. With these DBs I have a problem: deleting many blobs is slow (2 hours), and compact/vacuum too (1 hour)... So I'm trying to port my program, and before that I'm making a test to check how much time is needed to delete 1 GB of blobs. I installed MySQLdb from the exe (Py2.6, the version from stackoverflow), set all parameters, etc.

import MySQLdb
conn = MySQLdb.connect(host="localhost", user="root", passwd="", db="db")
cursor = conn.cursor()
cursor.execute("SELECT VERSION()")
cursor.execute('delete from blobs;')
s = time.time()
for i in range(200):
    k = str(i)
    xbuffer = chr(65 + (i % 26))
    xbuffer = xbuffer * 1024 * 1024
    b = MySQLdb.escape_string(xbuffer)
    print len(b)
    cursor.execute('''insert into blobs (blob_id, file_id, size, ext, data)
                      values (%s, %s, %s, %s, %s)''', (i, i, -1, 'org', b))
conn.commit()
e = time.time()
t = e - s
print t
sys.exit()

When I tried to start this, I got this error:

_mysql_exceptions.OperationalError: (2006, 'MySQL server has gone away')

I read that the server has some parameter that limits the query length. When I decreased the blob size to 1 MB, it worked. But: I can insert only 800 kB - 1.9 MB blobs. I tried to set this parameter, but nothing changed. My.ini:

# SERVER SECTION
# --
#
# The following options will be read by the MySQL Server. Make sure that
# you have installed the server correctly (see above) so it reads this
# file.
#
[mysqld]
max_allowed_packet = 16M
# The TCP/IP Port the MySQL Server will listen on
port=3306

What is the problem? What am I doing wrong? Thanks for your help: dd
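Editing my.ini is one way; the limit can also be raised from the client side with SET GLOBAL (it needs the SUPER privilege and only affects connections opened afterwards). A sketch of that call sequence; the stub cursor is only there so it can be shown without a live MySQL server:

```python
def raise_packet_limit(cursor, size=64 * 1024 * 1024):
    # SET GLOBAL changes the server-wide default for NEW connections;
    # the SHOW VARIABLES query reads the value back for verification.
    cursor.execute("SET GLOBAL max_allowed_packet = %d" % size)
    cursor.execute("SHOW VARIABLES LIKE 'max_allowed_packet'")
    return cursor.fetchone()

class FakeCursor:
    # Offline stand-in for a MySQLdb cursor, recording executed SQL.
    def __init__(self):
        self.seen = []
    def execute(self, sql):
        self.seen.append(sql)
    def fetchone(self):
        return ('max_allowed_packet', 67108864)

cur = FakeCursor()
print(raise_packet_limit(cur))  # -> ('max_allowed_packet', 67108864)
```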
Python and read archives?
Hi! I want to make a multi-OS (Win/Lin) Python-based media catalog program, but I have a little problem with reading archives. OK, Python supports zip and tar, but the world has more archive formats, like "rar", "7z", "tar.gz", "gz", etc. First I was happy with 7z.exe, because it knows many formats, and the 7z l "filename" command can retrieve the list of files. Great! - I thought... But later I realized that password protection halts 7z with a password prompt, and as far as I can see I cannot suppress this with options. So: I am searching for a solution to read archives (only the filenames) if possible. I saw that 7z has a callable DLL, but I don't know how to use it from Python. Do you know about a tool, or code, to read these archives, or...? Thanks for your help: dd

-- Using Opera's revolutionary e-mail client: http://www.opera.com/mail/
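One commonly suggested trick (an assumption, not verified against every 7z version): supply a throwaway password with the -p switch, plus -y, so 7z fails fast on protected archives instead of prompting interactively. A sketch; 'files.rar' and the '7z' executable name are placeholders:

```python
import subprocess

def build_list_cmd(path, seven_zip='7z'):
    # -pdummy: a throwaway password so 7z never blocks on a prompt
    # -y:      assume "yes" on remaining interactive queries
    return [seven_zip, 'l', '-pdummy', '-y', path]

def list_archive(path):
    proc = subprocess.run(build_list_cmd(path), capture_output=True, text=True)
    return proc.returncode, proc.stdout

print(build_list_cmd('files.rar'))
```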
Data exchange between Delphi and Python (Win)
Hi! I have an exotic DB with exotic drivers, and it has a buggy ODBC driver. But I have a native driver -- under Delphi. I need to access this DB under Pylons (or mod_python). I wrote one solution that works with XML, but I am searching for an easier way to transform and move data between the apps. I saw Python for Delphi, but its installer shows only Python 2.3 as a selectable engine. I am thinking of COM/OLE, because it is accessible from every program, and of a DLL (but a DLL has problematic parameter passing). The input data (what Delphi gets) are SQL commands, and the output is the rows (if any). What do you think about it? Does anyone have experience in this area? Thanks for it! dd
Pygresql, and query meta informations
Hi! PyGreSQL, DB-API. I am searching for a way to get meta information about the last query, because I must export this info to Delphi. Delphi has TDataSet, and its meta structure must be defined before it is created: for char/varchar fields I must define their sizes! PyGreSQL does not retrieve the field sizes. OK, CHAR field values have the full size, but varchars do not. OK, secondarily I could calculate all field lengths before exporting and set THIS CALCULATED size on the field, but that is data dependent: if I have only NULLs, the field size = 0; the next query gets 71 for the field length; the next query 234... So I want to ask: a.) is there some special way in PyGreSQL to retrieve a char/varchar field's length? b.) if not, how do I achieve this (other ways)? Thanks for your help: dd
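For reference, DB-API cursors expose per-column metadata in cursor.description as 7-tuples: (name, type_code, display_size, internal_size, precision, scale, null_ok). Many PostgreSQL drivers report a varchar's internal_size as -1, meaning "variable", which is exactly the limitation described above. A sketch with a stand-in cursor:

```python
def column_sizes(cursor):
    # Pull (name, internal_size) from the DB-API description tuples.
    return [(d[0], d[3]) for d in cursor.description]

class FakeCursor:
    # Offline stand-in for a pgdb cursor after a query: one varchar
    # column (size reported as -1) and one fixed-size char column.
    description = [('name', 'varchar', None, -1, None, None, True),
                   ('code', 'bpchar', None, 10, None, None, False)]

print(column_sizes(FakeCursor()))  # -> [('name', -1), ('code', 10)]
```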
Bug or feature: double strings as one
Hi! I found an interesting thing in Python. Today one of my "def"s returned a wrong result. When I checked the code I saw that I had missed a "," in the list:

l = ['ó' 'Ó']

Interestingly, Python handles them as one string:

print ['ó' 'Ó']
['\xf3\xd3']

I want to ask: is this a bug or a feature? In other languages, like Delphi (Pascal), JavaScript, SQL, etc., I must concatenate strings with some sign, like "+" or "||". This technique guards against mistyping, like today's. But in Python I can omit the concatenation sign and get a wrong result... Thanks for your help and for your answer: dd
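It is a documented feature: adjacent string literals are concatenated at compile time, just as in C. That is why the missing comma silently produced one element instead of two:

```python
l = ['ó' 'Ó']     # one element: the two literals are joined at compile time
l2 = ['ó', 'Ó']   # two elements, as intended
print(len(l), l[0], len(l2))  # -> 1 óÓ 2
```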
Re: Deletion/record visibility error in PG with Python...
Hi! Sorry for the RTFM mail... I forgot to remove the max_usage param in my real application... This parameter limits the number of cursor usages, and when the maximum is reached, DBUtils automatically opens a new cursor in the background! This breaks out of the actual transaction context... Uh, I wasted 2 hours looking for the bug in another source... :-( dd

2009/5/28 Durumdara

> Hi!
>
> PGSQL makes me crazy...
>
> I am porting my apps to PGSQL, and I am near to finishing - but I got this problem...
>
> Params: PGSQL 8.3, Windows, Pylons, PGDB, DBUTILS...
>
> What happened? How can I avoid the cursor changing? How do I fix it in my
> transaction? I never ask for a new cursor; I use the same variable in all of my
> contexts (self.Cur)... :-(
>
> So what is the solution? Drop DBUtils? Or what?
>
> Thanks for your help:
> dd
Deletion/record visibility error in PG with Python...
Hi! PGSQL makes me crazy... I am porting my apps to PGSQL, and I am near to finishing - but I got this problem... Params: PGSQL 8.3, Windows, Pylons, PGDB, DBUTILS... I opened the connection with DBUtils. I have one thread (the test thread); possibly there are more in the background, I don't know... See this pseudocode:

start trs ("rollback; begin;")
delete old recs
insert new recs
commit

I delete all of the old records with:

"delete from levelfo where level = :level"

I do this in one transaction that also protects the next steps. Later I want to insert the records, BUT:

def GetCodes(reks):
    l = []
    for rek in reks:
        l.append(str(rek['KOD']))
    return str(l)

def LogInsertedReks(Index):
    csql = "select * from levelfo where level=%d" % self.LevelRek['KOD']
    self.Cur.execute(csql)
    reks = dwdb.FetchAll(self.Cur)
    self.log.info(Index + ' INSERTED REKS ')
    self.log.info('%s' % GetCodes(reks))

for levelforek in self.LevelFoReks:
    LogInsertedReks('Start1')
    LogInsertedReks('Start2')
    LogInsertedReks('Start3')
    LogInsertedReks('Start4')
    LogInsertedReks('Start5')
    LogInsertedReks('Start6')
    kod = levelforek['KOD']
    self.log.info(' INSERT ')
    self.log.info('%s' % levelforek['KOD'])
    LogInsertedReks('Start7')
    LogInsertedReks('Start8')
    LogInsertedReks('Start9')

See this log:

18:07:02,276 INFO [xxx] Start1 INSERTED REKS
18:07:02,276 INFO [xxx] []
18:07:02,292 INFO [xxx] Start2 INSERTED REKS
18:07:02,292 INFO [xxx] []
18:07:02,292 INFO [xxx] Start3 INSERTED REKS
18:07:02,292 INFO [xxx] []
18:07:02,306 INFO [xxx] Start4 INSERTED REKS
18:07:02,306 INFO [xxx] []
18:07:02,306 INFO [xxx] Start5 INSERTED REKS
18:07:02,306 INFO [xxx] []
18:07:02,306 INFO [xxx] Start6 INSERTED REKS
18:07:02,306 INFO [xxx] []
18:07:02,306 INFO [xxx] INSERT
18:07:02,306 INFO [xxx] 11551
18:07:02,306 INFO [xxx] Start7 INSERTED REKS
18:07:02,306 INFO [xxx] []
18:07:02,619 INFO [xxx] Start8 INSERTED REKS
18:07:02,619 INFO [xxx] ['11555', '11556', '11557', '11558']
18:07:02,634 INFO [xxx] Start9 INSERTED REKS
18:07:02,634 INFO [xxx] ['11555', '11556', '11557', '11558']
18:07:02,697 INFO [xxx] After UID INSERTED REKS
18:07:02,697 INFO [xxx] ['11555', '11556', '11557', '11558']

As you see, I don't do anything (no db operations), yet deleted records are reappearing... Hm... could it be a cursor change? When I changed my logger to show the object ids, I could see the cursor changing (in the background):

18:21:29,134 INFO [xxx] Start7 INSERTED REKS
18:21:29,134 INFO [xxx] []
18:21:29,134 INFO [xxx] Start7 CURSOR INFO
18:21:29,134 INFO [xxx] []
18:21:29,134 INFO [xxx] [] **
18:21:29,134 INFO [xxx] []
18:21:29,431 INFO [xxx] Start8 INSERTED REKS
18:21:29,431 INFO [xxx] ['11555', '11556', '11557', '11558']
18:21:29,431 INFO [xxx] Start8 CURSOR INFO
18:21:29,431 INFO [xxx] []
18:21:29,431 INFO [xxx] [] **
18:21:29,431 INFO [xxx] []

What happened? How can I avoid the cursor changing? How do I fix it in my transaction? I never ask for a new cursor; I use the same variable in all of my contexts (self.Cur)... :-( So what is the solution? Drop DBUtils? Or what? Thanks for your help: dd
Re: Cheetah and hungarian charset...
Hi! OK, I forgot to set the encoding.

from Cheetah.Template import Template
d = {'a': 'almás'}
tp = Template("# encoding: iso-8859-2\nhello world éááá ${d['a']}!", searchList={'d': d})
print tp
sys.exit()

This code works for me on Windows. dd

On 2009.02.05. 16:21, durumdara wrote:

> Hi! I wanna ask that have anyone some exp. with Cheetah and the non-ascii
> chars? I have a site. The html template documents are saved in ansi format,
> psp liked them. But the cheetah parser makes ParseError on hungarian
> characters, like "á", "é", "í", etc. When I remove them, I got good result,
> but I don't want to remove them all... I cannot parse them. Please help me,
> how to force cheetah to eat them all? Thanks for your help: dd
SMTP, mail, sender-from, to-bcc-addr
Hi!

I want to ask whether anyone has experience with email.Message and smtplib. The email message and smtp.sendmail() both already take a "sender/from" and a "recipient/to" value. Is it possible that the SMTP component will not use the addresses from the email headers if I define them only in the message? Can I do the same thing as in Gmail, where the mail has no To recipients, only BCC ones, so that none of the recipients knows about the others?

Thanks:
dd
--
http://mail.python.org/mailman/listinfo/python-list
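For what it's worth, the behaviour being asked about can be sketched with the standard library: smtplib's sendmail() takes the envelope recipients as an explicit argument, independent of the message headers, so a message built without any To:/Cc: header hides every recipient from the others. A minimal sketch; the host and addresses below are made up, and the actual send call is shown only as a comment:

```python
from email.message import Message

def build_hidden_copy_message(sender, subject, body):
    """Build a message with no To:/Cc: header at all.

    The real recipients go only into the SMTP *envelope* (the to_addrs
    argument of smtplib.SMTP.sendmail), so none of them sees the
    others, just like a BCC-only mail in Gmail.
    """
    msg = Message()
    msg['From'] = sender
    msg['Subject'] = subject
    msg.set_payload(body)
    return msg

# Sending would then look like this (hypothetical host and addresses):
#   import smtplib
#   msg = build_hidden_copy_message('me@example.com', 'hello', 'hi all')
#   smtp = smtplib.SMTP('mail.example.com')
#   smtp.sendmail('me@example.com',
#                 ['a@example.com', 'b@example.com'],  # envelope only
#                 msg.as_string())
#   smtp.quit()
```

The key point is that the envelope and the headers are separate layers: the server delivers to the envelope list regardless of what the visible headers say.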
Cheetah and hungarian charset...
Hi!

I want to ask whether anyone has experience with Cheetah and non-ASCII characters. I have a site. The HTML template documents are saved in ANSI format; PSP liked them. But the Cheetah parser raises a ParseError on Hungarian characters like "á", "é", "í", etc. When I remove them, I get a good result, but I don't want to remove them all... I cannot parse them. Please help me: how do I force Cheetah to accept them all?

Thanks for your help:
dd
--
http://mail.python.org/mailman/listinfo/python-list
Lamaizm... XML problem...
Hi!

Something is driving me crazy!!! I want to read some XML, but every time I get "None" for the value of the prop tag. The code is:

    from xml.dom import minidom
    import sys

    ResultList = []

    def LoadProps(PropTag):
        print PropTag
        t_forms = PropTag.getElementsByTagName('form')
        for t_form in t_forms:
            t_comps = t_form.getElementsByTagName('component')
            for t_comp in t_comps:
                t_props = t_comp.getElementsByTagName('prop')
                for t_prop in t_props:
                    attrs = t_prop.attributes.keys()
                    print attrs
                    print t_prop.nodeName
                    print t_prop.nodeType
                    print [t_prop.nodeValue]
                    sys.exit()

    doc = minidom.parse('c:\\teszt3.xml')
    print doc
    t_langfile = doc.documentElement
    t_props = doc.getElementsByTagName('properties')[0]
    t_constants = doc.getElementsByTagName('constants')[0]
    LoadProps(t_props)

The result is:

    >>>
    [u'id', u'name']
    prop
    1
    [None]
    >>>

The source file is:

---

I can get the attrs, but I can't get nodeValue; I get None for it. Why??? Please help me a little!!! I'm sure I'm missing something, but I don't know what... :-(

Thanks for it:
dd
--
http://mail.python.org/mailman/listinfo/python-list
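The likely explanation can be shown on a tiny made-up document of the same shape (the original teszt3.xml was lost): in the DOM, Element nodes always have nodeValue None by definition; the text lives in a child Text node, and attribute values are read with getAttribute().

```python
from xml.dom import minidom

# A small stand-in document with the same shape as the lost teszt3.xml
doc = minidom.parseString(
    '<form><component><prop id="1" name="color">red</prop>'
    '</component></form>')
prop = doc.getElementsByTagName('prop')[0]

# An Element's nodeValue is always None -- that is not an error
assert prop.nodeValue is None

# The text content is a child Text node...
text = prop.firstChild.data       # 'red'
# ...and attributes have their own accessor
name = prop.getAttribute('name')  # 'color'
```

If the prop elements in the real file are empty (all data in attributes), then there is no Text child at all and the attributes are the only place the values live.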
[half-off] LAMA - how do I use the news server with Thunderbird
Hi!

Does anyone know about an NNTP server that carries newsgroups for these lists:

- python-win32
- mod_python
- gimp-python
- gimp

I want to set these NNTP servers up in Thunderbird, so I need the correct addresses.

Thanks for your help:
dd
--
http://mail.python.org/mailman/listinfo/python-list
Re: Where can I suggest an enhancement for the Python zip lib?
> On my 3 year old 3Ghz Pentium III it takes about 8 seconds to zip 20Mb file.
> So what is the problem? Not updating the process for 8-10 seconds
> should be just fine for most applications.
>
> -Larry

The problems are that:

- I want to process 100-200 MB zip files.
- I want to be able to abort mid-process.
- I want to know the actual position.
- I want to slow the operation down sometimes!

Why would I want to slow it down? A big archiving job is a slow operation. While it runs, I want to work with other apps. So I want to throttle the zipping with time.sleep(); then the background thread does not use the full CPU and I can work with other apps.

dd
--
http://mail.python.org/mailman/listinfo/python-list
Re: Where can I suggest an enhancement for the Python zip lib?
Hi Larry!

> durumdara wrote:
> You can easily find out roughly how many bytes are in your .ZIP archive
> by using the following:
>
>     zipbytes = Zobj.fp.tell()

That is not the main problem. I want to write a backup program, and I want to:

- see the progress while the actual file is being processed
- abort the process if I need to

If I compress small files, I don't have problems. But with larger files (10-20 MB) I get problems, because the zipfile method is uninterruptible. I have only one way to control this: modifying the ZipFile module.

dd

On Jun 7, 8:26 pm, Larry Bates <[EMAIL PROTECTED]> wrote:
> Where Zobj is your zipfile instance. You don't need a callback.
>
> The problem is ill defined for a better solution. You don't know how much
> the "next" file will compress. It may compress a lot, not at all or
> in some situations actually grow. So it is difficult (impossible?) to
> know how many bytes are remaining. I have a rough calculation where
> I limit the files to 2Gb, but you must set aside some space for the
> table of contents that gets added at the end (whose size you don't
> actually know either). So I use:
>
>     maxzipbytesupperlimit = int((1L << 31) - (8 * (1 << 20)))
>
> That is 2Gb - 8Mb maximum TOC limit of a zip file.
>
> I look at zipbytes, add the uncompressed size of the next file; if it
> exceeds maxzipbytesupperlimit, I close the file and move to the next
> zip archive. If it is smaller, I add the file to the archive.
>
> Hope this helps.
>
> -Larry
--
http://mail.python.org/mailman/listinfo/python-list
Where can I suggest an enhancement for the Python zip lib?
Hi!

Where can I ask this? I want to ask the developers to change Python's zip lib in the next versions. The zip lib has no callback procedure. When I zip something, I don't know the actual position of the processing or how many bytes remain. It is easy to patch, but whenever I get a new Python, it forgets this change again... So some callback is needed, if possible. With it I could abort the processing and show the actual state while processing a large file. See this thread:

http://groups.google.com.kh/group/comp.lang.python/browse_thread/thread/c6069d12273025bf/18b9b1c286d9af7b?lnk=st&q=python+zip+callback&rnum=1#18b9b1c286d9af7b

Thanks for your help:
dd
--
http://mail.python.org/mailman/listinfo/python-list
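For later readers: since Python 3.6 the standard zipfile can do this without patching, because ZipFile.open(name, mode='w') returns a writable stream, and copying the source in chunks gives a natural place for a progress callback and an abort check. A sketch; the callback protocol here is my own invention, not a stdlib API:

```python
import os
import zipfile

def zip_with_progress(zf, src_path, arcname, chunk_size=1 << 16,
                      callback=None):
    """Copy src_path into an already-open ZipFile chunk by chunk.

    callback(done, total) runs after every chunk; returning False
    aborts the rest of the copy. A time.sleep() inside the callback
    would throttle CPU usage in the same place.
    """
    total = os.path.getsize(src_path)
    done = 0
    with open(src_path, 'rb') as src, zf.open(arcname, mode='w') as dst:
        while True:
            chunk = src.read(chunk_size)
            if not chunk:
                break
            dst.write(chunk)
            done += len(chunk)
            if callback is not None and callback(done, total) is False:
                break
```

Usage: open a ZipFile in 'w' mode, then call `zip_with_progress(zf, path, arcname, callback=lambda d, t: update_progressbar(d, t))`.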
Python not giving free memory back to the OS gets me in real problems ...
Hi!

I had the same problem with Lotus Domino + Python. The Lotus COM objects are not freed, so I ran out of memory. I solved this problem with a little trick: I made two Python applications. The first launches the second as a separate application and puts some work into the second one's data directory. When the second finishes with it, it closes itself. Because the memory operations exist only in the second script, all its memory is freed after it finishes, and the first (controller) script can start the next session without any problem.

dd

25 Apr 2007 07:08:42 -0700, [EMAIL PROTECTED] <[EMAIL PROTECTED]>:

So I read quite a few things about this phenomenon in Python 2.4.x, but I can hardly believe that there is really no solution to my problem. We use a commercial tool that has a macro functionality. These macros are written in Python. So far nothing extraordinary.

Our (Python) macro uses massively nested loops which are unfortunately necessary. These loops perform complex calculations in this commercial tool. To give you a quick overview of how long this macro runs: the outer loop takes 5-7 hours for one cycle. Each cycle creates one output file. So we would like to perform 3-5 outer cycles en bloc. Unfortunately one of our computers (768MB RAM) crashes after just ~10% of the first cycle with the following error message:

http://img2.freeimagehosting.net/uploads/7157b1dd7e.jpg

while another computer (1GB RAM) crashes after ~10% of the fourth loop. While the virtual memory on the 1GB machine was full to the limit when it crashed, the memory usage of the 768MB machine looked like this:

http://img2.freeimagehosting.net/uploads/dd15127b7a.jpg

The moment I close the application that launched the macro, my resources get freed. So is there a way to free my memory inside my nested loops?

thanks in advance, tim
--
http://mail.python.org/mailman/listinfo/python-list
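The controller/worker split described above can be sketched with the subprocess module. The inline worker script and the JSON-over-stdio protocol are just illustrative stand-ins for the real memory-heavy job:

```python
import json
import subprocess
import sys

def run_job_in_subprocess(payload):
    """Run one memory-heavy job in a fresh interpreter.

    The child allocates whatever it needs, prints its result as JSON
    and exits; the OS then reclaims every byte the child used, so the
    controller stays small no matter how many jobs it dispatches.
    """
    worker = (
        "import sys, json\n"
        "data = json.load(sys.stdin)\n"
        "result = sum(range(data['n']))  # stand-in for the heavy work\n"
        "json.dump(result, sys.stdout)\n"
    )
    proc = subprocess.run([sys.executable, "-c", worker],
                          input=json.dumps(payload),
                          capture_output=True, text=True, check=True)
    return json.loads(proc.stdout)
```

The same pattern works with a work file in a shared directory instead of stdio, as the post describes; the essential part is only that the allocation happens in a process that exits.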
PIL font formatting...
Hi!

I wrote a web visitor counter with mod_python and PIL. It must work on a half-static page, so I have to write this counter with JPEG image output. Everything works well, but I need to change this counter a little; its style isn't the same as its web page's style. I want to put bold and italic text into the image. How do I do it?

    font = ImageFont.truetype('arial.ttf', 15)
    .text('a', font=font, ...)

Please help me!!!
dd
--
http://mail.python.org/mailman/listinfo/python-list
Is there any way to split a zip archive into sections?
Hi!

I want to create some backup archives with Python (I want to write a backup application in Python). Some archivers (7z, arj, WinZip) can create split archives (1 MB, 650 MB, 700 MB sections, etc.). Because I want to FTP the results to an FTP server, I want to split large volumes into 15 MB sections. Can I do it with some Python wrapper automatically (like in Cobian), or do I need to create the large volume first and then split it with another tool? Or does anybody know about a command line tool (like 7z or arj) that can expand the split archive (and that lets me add files to it from Python one by one)? So what is the solution?

Thanks for your help:
dd
--
http://mail.python.org/mailman/listinfo/python-list
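The "create the large volume, then split it" route needs no external tool; plain Python can cut the finished archive into fixed-size parts. The .001/.002 naming below just mimics common splitters, and rejoining the parts is simple concatenation:

```python
def split_file(path, part_size=15 * 1024 * 1024):
    """Split a file into numbered parts of at most part_size bytes.

    Returns the list of part filenames (path.001, path.002, ...).
    Joining them back is plain concatenation, e.g. "copy /b" on
    Windows or "cat" on Unix.
    """
    parts = []
    with open(path, 'rb') as src:
        index = 1
        while True:
            chunk = src.read(part_size)
            if not chunk:
                break
            part_name = '%s.%03d' % (path, index)
            with open(part_name, 'wb') as dst:
                dst.write(chunk)
            parts.append(part_name)
            index += 1
    return parts
```

With a 15 MB part_size, each part can then be uploaded to the FTP server independently.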
Zip file writing progress (callback proc)
Hi!

I want to monitor my zip file writes. I need some callback procedure to show a progress bar. Can I do that? I don't want to modify the stdlib module to extend it, because if I install another Python, the changes are lost. The same happens if I copy the zip module and modify the copy. Any solution?

Thanks for it:
dd
--
http://mail.python.org/mailman/listinfo/python-list
Unicode zipping from Python code?
Hi!

As I experienced back in 2006, Python's zip module is not Unicode-safe. With Hungarian filenames I got wrong results; I had to convert from the iso-8859-2 to the cp852 character set to get a good result. As far as I can see, this module is "a command line tool" imported as an extension. Now I am searching for something that handles the characters well, or handles Unicode filenames. Does anyone know about a Python project that can do this? Or another tool I can use to zip files with international characters?

Thanks for your help!
dd
--
http://mail.python.org/mailman/listinfo/python-list
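The iso-8859-2 to cp852 conversion mentioned above is one line with the codecs machinery; a sketch, assuming the names really arrive as ISO-8859-2 bytes:

```python
def latin2_to_oem(raw_bytes):
    """Re-encode an ISO-8859-2 (Latin-2) filename into cp852, the DOS
    OEM code page that legacy zip tools expect for Central European
    characters. A character missing from cp852 would raise
    UnicodeEncodeError rather than silently corrupt the name."""
    return raw_bytes.decode('iso-8859-2').encode('cp852')
```

Both encodings cover the full Hungarian alphabet, so a round trip through cp852 loses nothing for these filenames.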
[Half-off] How to get textboxes (text blocks) from ps/pdf files?
Hi!

I need to get text boxes/blocks out of PDF files. I can convert them into PS. Does anyone know a method, trick, or routine to get the text boxes from PS or PDF? (Pythonic, COM, or command line solutions are all welcome.) I need to redraw them in my application, where the user can reorder them; then I concatenate all the text to process it. I need this information: x, y, w, h, text. Example:

    page1
        textbox1{x:100,y:100;w:600;h:27;text:"TextBox1 /xfc /xfa"}
        textbox2{x:100,y:180;w:600;h:27;text:"TextBox2"}
    page2
        textbox1{x:100,y:100;w:600;h:27;text:"TextBox1"}
        textbox2{x:100,y:180;w:600;h:27;text:"TextBox2"}
    ...

Any solution? Thanks for it!
dd

ps1: I tried every pdf2text and pdf2html application. All failed the test. Only one provided good information, pdftohtml, because it makes divs with absolute position and size plus the texts. But this program does not handle the iso-8859-2 characters, so I lose them.

ps2: The program must run under Windows XP, so the solution may be OS-specific.
--
http://mail.python.org/mailman/listinfo/python-list
[Off] WXP Fingerprint + Python...
Hi!

I have an application (Python + wx) somewhere. The users use their fingerprints to log in/out. But we have a problem: currently the fingerprint logon does a REAL Windows logon, so every user needs a Windows user too, and many times the application, and Windows, have to be closed and reopened. We need a better solution. We tried the MS Fingerprint Reader, but it works the same way: it stores the fingerprints per Windows user... I need something better.

Does anyone know about fingerprint software and a device that can recognize many, many fingerprints and associate the actual fingerprint with a VIRTUAL user (not a Windows user! a virtual one!) stored in a DB? This DB could store other info, or an ID that we can use in another app... Or a software hook is needed (a DLL hook) that adds the possibility to catch the logins/logouts, so I can do whatever else I need. Does anyone know about such a solution or product?

Thanks for help:
dd
--
http://mail.python.org/mailman/listinfo/python-list
Re: Regexp Neg. set of chars HowTo?
Hi!

Thanks for this! I'll use that! I also found a solution to my question the regexp way:

    import re

    testtext = " minion battalion nation dion sion wion alion"
    # inside a character class each listed character is excluded on its
    # own, so [^tl] rejects both "t" and "l" (no extra ^ is needed)
    m = re.compile("[^tl]ion")
    print m.findall(testtext)

This searches for all "...ion" text that is not "lion" or "tion".

dd

Paul McGuire wrote:
> It looks like you are trying to de-hyphenate words that have been
> broken across line breaks.
>
> Well, this isn't a regexp solution, it uses pyparsing instead. But
> I've added a number of other test cases which may be problematic for an
> re.
>
> -- Paul
--
http://mail.python.org/mailman/listinfo/python-list
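A variant worth noting: a negative lookbehind excludes the same matches without consuming the preceding character, so findall() returns just the "ion" part instead of "nion", "dion", etc.:

```python
import re

testtext = " minion battalion nation dion sion wion alion"
# (?<![tl]) asserts that the previous character is neither "t" nor
# "l", without including that character in the match itself
hits = re.findall(r"(?<![tl])ion", testtext)
# four hits: from "minion", "dion", "sion" and "wion";
# "battalion", "nation" and "alion" are rejected
```

This matters when the match is fed to sub(), because the untouched preceding character no longer needs to be captured and re-inserted.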
Regexp Neg. set of chars HowTo?
Hi!

I want to replace some sequences in an HTML document. Let:

    a-
    b

become "ab", but:

    xxx -
    b

must stay unchanged, because that hyphen is not a word split. I want to search and replace with re, but I don't know how to negate this set of characters: [' \n\t']. For now I use the full character set without those characters, but a negated set would be better and shorter. OK, I could use [^\s], but I want to know how to negate a set of characters. This does not work:

    sNorm1 = '([^[\ \t\n]]{1})\-\\n'

Thanks for the help:
dd

My current workaround:

    sNorm1 = '([%s]{1})\-\\n'
    c = range(0, 256)
    c.remove(32)
    c.remove(13)
    c.remove(10)
    c.remove(9)
    s = ["\\%s" % (hex(v).replace('00x', '')) for v in c]
    sNorm1 = sNorm1 % ("".join(s))
    print sNorm1

    def Normalize(Text):
        rx = re.compile(sNorm1)
        def replacer(match):
            return match.group(1)
        return rx.sub(replacer, Text)

    print Normalize('a -\nb')
    print Normalize('a-\nb')
    sys.exit()
--
http://mail.python.org/mailman/listinfo/python-list
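For the record, \S is exactly the negated whitespace class, so the whole byte-listing workaround above collapses to a single short pattern; a sketch:

```python
import re

def dehyphenate(text):
    """Join words hyphenated across a line break.

    A non-whitespace character (\\S), a hyphen, then a newline
    collapses to just the captured character, while a spaced
    hyphen (" -\\n") is left untouched.
    """
    return re.sub(r'(\S)-\n', r'\1', text)
```

Inside a character class the same negation is written with a leading caret, e.g. `[^ \t\n]` (one level of brackets only); `[^\s]` and `\S` are equivalent spellings of it plus the remaining whitespace characters.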
Localization - set to default on Windows
Hi !

I want to set the default locale on Windows XP. I want to create a formatting tool that sets up the locale during init; the formatter functions then use those settings. Example (fmtunit):

    SetLocaleToDefault()

    def ToUnicode(text, encoding=None):
        if encoding == None:
            encoding = defaultencoding
        ...

This is my test code, which fails at several points:

    import locale

    # Get default
    print locale.getdefaultlocale('LANG')
    # Get actual
    print locale.getlocale(locale.LC_ALL)
    # Get default into "loc"
    loc, enc = locale.getdefaultlocale('LANG')
    try:
        # Try to set it
        locale.setlocale(locale.LC_ALL, loc)
    except Exception, msg:
        # An error
        print msg
    # Get actual
    print locale.getlocale(locale.LC_ALL)
    # Set manually
    locale.setlocale(locale.LC_ALL, "HU")
    # Get the actual locale
    print locale.getlocale(locale.LC_ALL)
    print locale.getdefaultlocale('LANG')

The result was:

    > Executing: C:\Program Files\ConTEXT\ConExec.exe "c:\python24\python" "c:\loc.py"
    ('hu_HU', 'cp1250')
    (None, None)
    unsupported locale setting
    (None, None)
    ('Hungarian_Hungary', '1250')
    ('hu_HU', 'cp1250')
    > Execution finished.

I see two interesting things:

1. The default locale (hu_HU) is not a valid parameter for setlocale. Why? I want to determine and set the locale automatically, from the script!
2. When I set the locale with "HU", I cannot compare the default locale with the current locale to check its state. These values are not comparable: "HU", "hu_HU", "Hungarian_Hungary"...

Please help me: how do I do this localization?

Thanks for it:
dd
--
http://mail.python.org/mailman/listinfo/python-list
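One detail that sidesteps the naming mismatch entirely: setlocale() with an empty string asks the C runtime for the user's default locale under the platform's own name, so the script never has to spell out "hu_HU" versus "Hungarian_Hungary" itself:

```python
import locale

# "" means "the user's default locale, however this platform names
# it"; the return value is the name the C library actually accepted,
# which is the string to compare against later, instead of comparing
# getdefaultlocale()'s POSIX-style name with Windows' own spelling
name = locale.setlocale(locale.LC_ALL, '')
```

After this call, the formatting functions (locale.format, strftime, etc.) follow the user's settings on both Windows and POSIX.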
Localized (international) informations on WXP
Hi !

Windows XP, Py2.4.3. I want to get localized information such as month names, format parameters, etc. But nl_langinfo does not exist there. Does Python have a way to get this information in a uniform way (like PHP does) to avoid multi-platform problems?

Thanks for help:
dd
--
http://mail.python.org/mailman/listinfo/python-list
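One portable trick covering part of this: strftime() exists on every platform, including Windows where nl_langinfo does not, and %B/%b produce the month names of whatever LC_TIME locale is active. A sketch:

```python
import time

def localized_month_names():
    """Full month names in the current LC_TIME locale, built by
    formatting the first day of each month with %B; unlike
    locale.nl_langinfo, time.strftime is available on Windows too.
    The time tuple is (year, month, day, h, m, s, wday, yday, isdst).
    """
    return [time.strftime('%B', (2006, month, 1, 0, 0, 0, 0, 1, -1))
            for month in range(1, 13)]
```

Call `locale.setlocale(locale.LC_ALL, '')` first to get the names in the user's language rather than the default C locale's English.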
How to increase the buffer size of a file?
Hello !

How can I increase the buffer size of a file? I want to use a bigger buffer, but I don't want to replace every file object with my own class. Is there a constant in some module for this?

Thanks for your help:
dd
--
http://mail.python.org/mailman/listinfo/python-list
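There is no global constant for this, but open() itself takes the buffer size as its third argument, so only the open() calls need touching, not the file objects. A sketch with a 1 MiB buffer chosen arbitrarily:

```python
def open_buffered(path, mode='rb', bufsize=1024 * 1024):
    """open()'s third argument is the buffering size in bytes
    (0 = unbuffered, binary mode only; 1 = line buffered; larger
    values = a buffer of roughly that many bytes), so no custom
    file class is needed."""
    return open(path, mode, bufsize)
```

A thin helper like this also gives one place to tune the size later instead of editing every call site.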
Re: Freeware link/html checker (validator) for mod_python site
Hi !

Very good !!! Thank you very much! If I can "link it" with a W3C HTML checker, it will be very good... Interesting that it also says "bad link" for the site http://www.druk-ker.hu/. The HTML Link Validator does the same.

dd

Fredrik Lundh wrote:
> "durumdara" <[EMAIL PROTECTED]> wrote:
>> Sorry for the non-pythonic subject, but if I don't find a good solution,
>> I will write a routine for this in Python... :-)))
>
> in your Python installation (or source) directory, do
>
>     $ cd Tools/webchecker
>
> and then write
>
>     $ python webchecker.py -x http://yoursite
>
> (use --help to get a man page)
--
http://mail.python.org/mailman/listinfo/python-list
Freeware link/html checker (validator) for mod_python site
Hi !

Sorry for the non-pythonic subject, but if I don't find a good solution, I will write a routine for this in Python... :-)))

My mod_python site is growing quickly. It has many pages, and many of them are dynamic. I want to check the links and, if possible, validate the HTML. I searched for freeware tools on the net, but I found only "HTML Link Validator" that meets the requirements. This utility can check the whole site and it handles circular references, but it's not free, and it does not check the whole site... :-( Some other utilities can check one URL (CSE HTML Validator Lite, HTML-Kit, Firefox check page links, Firefox validate HTML, the Firefox Web Developer extension), but not an entire site...

Do you know about a good local site checker application that can do validation and link checking in one? Does anybody have experience in this area? An important thing: the OS is Windows XP.

Thanks for your help:
dd
--
http://mail.python.org/mailman/listinfo/python-list
PyDoc and mod_python
Hi !

I need to write documentation for my mod_python website: for the base classes, functions, and modules. The problem is that mod_python imports "apache", which does not exist in a normal Python environment (only inside Apache). If my source contains an import that uses this module or one of its submodules, pydoc does not work:

    ### test.py ###
    from mod_python import apache
    ...

The result is:

    c:\>c:\python24\Lib\pydoc.py c:\test.py
    problem in c:\test.py - ImportError: No module named _apache

So I need a cheat, or I need to force pydoc to skip parsing these modules... Does anybody have experience with how to do this?

Thanks for it:
dd
--
http://mail.python.org/mailman/listinfo/python-list
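One cheat that works: register stub modules in sys.modules before pydoc imports the file, so "from mod_python import apache" resolves without Apache being present. The OK constant below is hypothetical; add whatever names your sources actually touch at import time:

```python
import sys
import types

# Build fake mod_python / mod_python.apache modules and register them,
# so later imports of the site code succeed outside Apache
stub_pkg = types.ModuleType('mod_python')
stub_apache = types.ModuleType('mod_python.apache')
stub_apache.OK = 0  # hypothetical: mirror the constants your code reads
stub_pkg.apache = stub_apache
sys.modules['mod_python'] = stub_pkg
sys.modules['mod_python.apache'] = stub_apache

from mod_python import apache  # resolves to the stub now
```

Put this at the top of a small driver script and then invoke pydoc programmatically (e.g. pydoc.writedoc) from the same process, so the stubs are in place when the site modules get imported.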
Re: Best way to handle large lists?
Hi !

> Thanks Jeremy. I am in the process of converting my stuff to use sets! I
> wouldn't have thought it would have made that big a deal! I guess it is
> live and learn.

If you have simplified records in big amounts, you can try dbhash. With it you won't run out of memory...

dd

    import dbhash
    import time
    import random
    import gc
    import sys

    itemcount = 25
    db = dbhash.open('test.dbh', 'w')
    for i in range(itemcount):
        db[str(i)] = str(i)

    littlelist = []
    littleset = set()
    # guard with itemcount so the loop cannot spin forever when there
    # are fewer than 1000 distinct keys to draw
    while len(littlelist) < min(1000, itemcount):
        x = str(random.randint(0, itemcount - 1))
        if not (x in littlelist):
            littlelist.append(x)
            littleset.add(x)

    def DBHash():
        gc.collect()
        hk = db.has_key
        st = time.time()
        newlist = []
        for val in littlelist:
            if hk(val):
                newlist.append(val)
        et = time.time()
        print "Size", len(newlist)
        newlist.sort()
        print "Hash", hash(str(newlist))
        print "Time", "%04f" % (et - st)
        print

    def Set():
        gc.collect()
        largeset = set()
        for i in range(itemcount):
            largeset.add(str(i))
        st = time.time()
        newset = largeset.intersection(littleset)
        newsetlist = []
        while newset:
            newsetlist.append(newset.pop())
        et = time.time()
        print "Size", len(newsetlist)
        newsetlist.sort()
        print "Hash", hash(str(newsetlist))
        print "Time", "%04f" % (et - st)

    DBHash()
    Set()
--
http://mail.python.org/mailman/listinfo/python-list
Re: Best way to handle large lists?
Chaz Ginger wrote:
> I have a system that has a few lists that are very large (thousands or
> tens of thousands of entries) and some that are rather small. Many times
> I have to produce the difference between a large list and a small one,
> without destroying the integrity of either list. I was wondering if
> anyone has any recommendations on how to do this and keep performance
> high? Is there a better way than
>
>     [ i for i in bigList if i not in smallList ]
>
> Thanks.
> Chaz

Hi !

If you have big lists, you can use dbm-like databases. They are very quick: bsddb, flashdb, etc. See SleepyCat, or see the Python help. `in` is very slow on a large list, but bsddb uses hash values, so it is very quick. The SleepyCat database has many extras; you can set the cache size and many other parameters.

Or, if you don't like dbm-style databases, you can use SQLite. It is also quick, and you can use SQL commands. It is a little slower than bsddb, but it is like an SQL server. You can improve the speed with special parameters.

dd
--
http://mail.python.org/mailman/listinfo/python-list
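For completeness, the pure in-memory fix adopted earlier in the thread: converting only the small list to a set makes each membership test an O(1) hash lookup, so the difference becomes a single linear pass over the big list. A sketch with made-up data standing in for the poster's lists:

```python
# Hypothetical stand-ins for the real lists
bigList = [str(i) for i in range(50000)]
smallList = [str(i) for i in range(0, 50000, 1000)]

smallSet = set(smallList)  # one-off conversion, O(len(smallList))
# each "not in smallSet" is a hash lookup, not a scan of smallList
diff = [i for i in bigList if i not in smallSet]
```

Neither original list is modified, which satisfies the "integrity" requirement, and the change from `not in smallList` is a single extra line.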
Re: non-blocking PIPE read on Windows
Hi !

Sorry, but I want to share my experiences. I hope this helps you. I think the specialized MS Windows based services are too complicated; they have too many bug possibilities. So I am trying with normal, "in Python accessible" pipes. I see that with flush() and some binary-to-text tricks I can use subprocess/masterprocess communication. OK, this is not asynchronous: I can send some jobs to the subprocess(es) and receive the reports from them. But with threading I can create non-blocking communication.

You can see it in the example: PipeBPPThr defines a pipe-based process-pool thread. It can communicate with a subprocess, send/receive jobs, etc. If you collect these threads and write a process-pool object, you can handle all of the communication with one object.

I hope these examples can help you. If not, you can try WM_COPYDATA messages on Windows (see MSDN): http://msdn.microsoft.com/library/default.asp?url=
That way of data exchange is based on message handling/sending.

dd

CustPPThread.py:

    import os, sys, threading, Queue, time

    pp_SPTHR_READY     = 1
    pp_SPTHR_SENDING   = 2
    pp_SPTHR_RECEIVING = 3

    class CustomProcessThread(threading.Thread):
        def __init__(self, UniqID, ThreadID, Params):
            threading.Thread.__init__(self)
            self.UniqID = UniqID
            self.ThreadID = ThreadID
            self.Params = Params
            self.IPCObj = self._CreateIPCObj(UniqID, ThreadID)
            self.ProcessObj = self._CreateProcess(UniqID, ThreadID, self.IPCObj)
            self.State = pp_SPTHR_READY
            self._Input = Queue.Queue()
            self._Output = Queue.Queue()

        def _CreateIPCObj(self, UniqID, ThreadID):
            pass

        def _CreateProcess(self, UniqID, ThreadID, IPCObj):
            pass

        def SendJob(self, JobID, Data):
            self._Input.put([JobID, Data])

        def HaveFinishedJob(self):
            return not self._Output.empty()

        def ReceiveJob(self):
            return self._Output.get()

        def Abort(self):
            self._Input.put(None)

        def _SendToSP(self, JobID, Data):
            pass

        def _ReceiveFromSP(self):
            pass

        def run(self):
            while 1:
                inp = self._Input.get()
                if inp == None:
                    break
                jobid, data = inp
                self._SendToSP(jobid, data)
                rdata = self._ReceiveFromSP()
                self._Output.put([jobid, rdata])

        def Wait(self):
            while self.isAlive():
                time.sleep(0.001)

ppbpt_sub.py (the subprocess):

    import PipeBPPThr, sys

    while 1:
        jobid, data = PipeBPPThr.ReadBinPacket(sys.stdin)
        if jobid == -1:
            PipeBPPThr.WriteBinPacket(sys.stdout, None)
            break
        PipeBPPThr.WriteBinPacket(sys.stdout, [jobid, [1, data]])

PipeBPPThr.py:

    import os, sys, threading, Queue, subprocess, CustPPThread
    from cPickle import dumps, loads
    from binascii import hexlify, unhexlify

    def ReadTextPacket(SourceStream):
        packet = SourceStream.read(6)
        psize = int(packet)
        packet = SourceStream.read(psize)
        return packet

    def WriteTextPacket(DestStream, Packet):
        Packet = str(Packet)
        DestStream.write('%06d' % len(Packet))
        DestStream.write(Packet)
        DestStream.flush()

    def ReadBinPacket(SourceStream):
        txtpacket = ReadTextPacket(SourceStream)
        obj = loads(unhexlify(txtpacket))
        return obj

    def WriteBinPacket(DestStream, Obj):
        pckpacket = hexlify(dumps(Obj, 1))
        WriteTextPacket(DestStream, pckpacket)

    class PIPEBasedProcessThread(CustPPThread.CustomProcessThread):
        def _CreateIPCObj(self, UniqID, ThreadID):
            return None

        def _CreateProcess(self, UniqID, ThreadID, IPCObj):
            spfn = self.Params['scriptfilename']
            cmd = [r'c:\python24\python.exe', spfn]
            p = subprocess.Popen(cmd, stdin=subprocess.PIPE,
                                 stdout=subprocess.PIPE)
            return p

        def _SendToSP(self, JobID, Data):
            WriteBinPacket(self.ProcessObj.stdin, [JobID, Data])

        def _ReceiveFromSP(self):
            return ReadBinPacket(self.ProcessObj.stdout)

    if __name__ == '__main__':
        print "Start"
        '''
        spfn = 'ppbpt_sub.py'
        cmd = [r'c:\python24\python.exe', spfn]
        p = subprocess.Popen(cmd, stdin=subprocess.PIPE,
                             stdout=subprocess.PIPE)
        for i in range(10):
            WriteBinPacket(p.stdin, [1, 'AAA'])
            print ReadBinPacket(p.stdout)
        WriteBinPacket(p.stdin, [-1, None])
        print ReadBinPacket(p.stdout)
        sys.exit()
        '''
        import time
        st = time.time()
        thr = PIPEBasedProcessThread(1, 1, {'scriptfilename': 'ppbpt_sub.py'})
        thr.start()
        for i in range(100):
            thr.SendJob(i, 'AAABBB')
            print thr.ReceiveJob()
        thr.SendJob(-1, None)
        print thr.ReceiveJob()
        thr.Abort()
        thr.Wait()
        print "End"
        st = time.time() - st
        print st
--
http://mail.python.org/mailman/listinfo/python-list
Re: non-blocking PIPE read on Windows
Hi !

A new version with binary data handling: 103 seconds for 1000 data exchanges.

    import os, sys, time, binascii, cPickle

    bpath, bname = os.path.split(sys.argv[0])

    def Log(Msg, IsMaster, First=False):
        fn = sys.argv[0] + '.' + ['c', 'm'][int(IsMaster)] + '.log'
        mode = 'aw'[int(First)]
        f = open(fn, mode)
        f.write('\n%s:\n' % time.time())
        f.write('%s\n' % Msg)
        f.flush()
        f.close()

    def ReadTextPacket(SourceStream):
        packet = SourceStream.read(6)
        psize = int(packet)
        packet = SourceStream.read(psize)
        return packet

    def WriteTextPacket(DestStream, Packet):
        Packet = str(Packet)
        DestStream.write('%06d' % len(Packet))
        DestStream.write(Packet)
        DestStream.flush()

    import base64

    def PackObj(Obj):
        pckpacket = cPickle.dumps(Obj)
        enstr = base64.encodestring(pckpacket)
        return enstr

    def UnpackObj(Packet):
        pckpacket = base64.decodestring(Packet)
        obj = cPickle.loads(pckpacket)
        return obj

    #s = PackObj([1, None, 'A'] * 10)
    #print s
    #print UnpackObj(s)
    #sys.exit()

    def ReadBinPacket(SourceStream):
        txtpacket = ReadTextPacket(SourceStream)
        obj = UnpackObj(txtpacket)
        return obj

    def WriteBinPacket(DestStream, Obj):
        txtpacket = PackObj(Obj)
        WriteTextPacket(DestStream, txtpacket)

    if 'C' in sys.argv:
        Log('Client started', 0, 1)
        try:
            while 1:
                #Log('Waiting for packet', 0, 0)
                data = ReadBinPacket(sys.stdin)
                #Log('Packet received', 0, 0)
                #Log('The packet is: %s' % ([data]), 0, 0)
                #Log('Print the result', 0, 0)
                WriteBinPacket(sys.stdout, "Master wrote: %s" % ([data]))
                if str(data).strip() == 'quit':
                    Log('Quit packet received', 0, 0)
                    break
        except Exception, E:
            Log(str(E), 0, 0)
        Log('Client finished', 0, 0)
    else:
        Log('Master started', 1, 1)
        try:
            Log('Start subprocess', 1, 0)
            import time
            st = time.time()
            child_stdin, child_stdout = os.popen2(
                r'c:\python24\python.exe %s C' % (bname))
            for i in range(1000):
                #Log('Send packet', 1, 0)
                WriteBinPacket(child_stdin, ['Alma' * 100, i])
                #Log('Waiting for packet', 1, 0)
                s = ReadBinPacket(child_stdout)
                #Log('Packet is: %s' % ([s]), 1, 0)
                #Log('Print packet', 1, 0)
                #print "Client's answer", [s]
            import time
            time.sleep(0.1)
            #Log('Send packet', 1, 0)
            WriteBinPacket(child_stdin, 'quit')
            #Log('Waiting for packet', 1, 0)
            s = ReadBinPacket(child_stdout)
            #Log('Packet is: %s' % ([s]), 1, 0)
            #Log('Print packet', 1, 0)
            #print "Client's answer", [s]
            Log('Master finished', 1, 0)
        except Exception, E:
            Log(str(E), 1, 0)
        print time.time() - st

dd
--
http://mail.python.org/mailman/listinfo/python-list
Re: non-blocking PIPE read on Windows
Hi !

If you don't want to use MS-specific things, you can use normal pipes. See this code. If you want a non-blocking version, you need to create a thread that handles the reads/writes.

    import os, sys, time, binascii, cPickle

    bpath, bname = os.path.split(sys.argv[0])

    def Log(Msg, IsMaster, First=False):
        fn = sys.argv[0] + '.' + ['c', 'm'][int(IsMaster)] + '.log'
        mode = 'aw'[int(First)]
        f = open(fn, mode)
        f.write('\n%s:\n' % time.time())
        f.write('%s\n' % Msg)
        f.flush()
        f.close()

    def ReadTextPacket(SourceStream):
        packet = SourceStream.read(6)
        psize = int(packet)
        packet = SourceStream.read(psize)
        return packet

    def WriteTextPacket(DestStream, Packet):
        Packet = str(Packet)
        DestStream.write('%06d' % len(Packet))
        DestStream.write(Packet)
        DestStream.flush()

    '''
    def ReadBinPacket(SourceStream):
        txtpacket = ReadTextPacket(SourceStream)
        pckpacket = binascii.unhexlify(txtpacket)
        obj = cPickle.loads(pckpacket)
        return obj

    def WriteBinPacket(DestStream, Obj):
        pckpacket = cPickle.dumps(Obj)
        txtpacket = binascii.hexlify(pckpacket)
        WriteTextPacket(DestStream, txtpacket)
    '''

    if 'C' in sys.argv:
        #Log('Client started', 0, 1)
        while 1:
            #Log('Waiting for packet', 0, 0)
            data = ReadTextPacket(sys.stdin)
            #Log('Packet received', 0, 0)
            #Log('The packet is: %s' % ([data]), 0, 0)
            #Log('Print the result', 0, 0)
            WriteTextPacket(sys.stdout, "Master wrote: %s" % ([data]))
            if data.strip() == 'quit':
                #Log('Quit packet received', 0, 0)
                break
        #Log('Client finished', 0, 0)
    else:
        #Log('Master started', 1, 1)
        #Log('Start subprocess', 1, 0)
        import time
        st = time.time()
        child_stdin, child_stdout = os.popen2(
            r'c:\python24\python.exe %s C' % (bname))
        for i in range(1000):
            #Log('Send packet', 1, 0)
            WriteTextPacket(child_stdin, ['Alma' * 100, i])
            #Log('Waiting for packet', 1, 0)
            s = ReadTextPacket(child_stdout)
            #Log('Packet is: %s' % ([s]), 1, 0)
            #Log('Print packet', 1, 0)
            #print "Client's answer", [s]
        import time
        time.sleep(0.1)
        #Log('Send packet', 1, 0)
        WriteTextPacket(child_stdin, 'quit')
        #Log('Waiting for packet', 1, 0)
        s = ReadTextPacket(child_stdout)
        #Log('Packet is: %s' % ([s]), 1, 0)
        #Log('Print packet', 1, 0)
        #print "Client's answer", [s]
        #Log('Master finished', 1, 0)
        print time.time() - st

dd

2006/7/28, Dennis Lee Bieber <[EMAIL PROTECTED]>:

On 27 Jul 2006 22:26:25 -0700, "placid" <[EMAIL PROTECTED]> declaimed the following in comp.lang.python:

> readline() blocks until the newline character is read, but when i use
> read(X) where X is a number of bytes then it doesnt block (expected
> functionality) but i dont know how many bytes the line will be and its
> not constant so i cant use this too.
>
> Any ideas of solving this problem?

Use a thread that reads one character at a time; when it sees whatever signals "end of line" (it sounds like you're reading a progress bar implemented via overwrite), combine the characters into a string and return the string to the main program via a queue. If there is no such "end of line" character, but there IS a noticeable delay between "writes", a more complex method might suffice -- one in which one thread does the byte reads, setting a time value on each read; a related thread then does a sleep() loop, checking the "last read time" against the pause length -- if close enough to the pause duration, combine and return...

Alternatively, take a good old style terminal keyboard (a VT100 Tempest-rated model should be ideal), and use it to beat Bill Gates over the head until he agrees to push a high-priority upgrade to the command line I/O system... or makes files work with select() (so you can combine the time-out with the byte read)

--
Wulfraed Dennis Lee Bieber KD6MOG
[EMAIL PROTECTED] [EMAIL PROTECTED]
HTTP://wlfraed.home.netcom.com/
(Bestiaria Support Staff: [EMAIL PROTECTED])
HTTP://www.bestiaria.com/
--
http://mail.python.org/mailman/listinfo/python-list
Does anybody know about a linkable, quick MD5/SHA1 calculator library?
Hi !

I need to speed up my MD5/SHA1 calculator app, which works on filesystem files. I use the Python standard modules, but I think it could be faster if I used C or some other module for it. I used FSUM before, but I got problems, because I had to "move" into "DOS area", and parameterizing the outer process made me very angry (it did not work). You can see this here:

http://mail.python.org/pipermail/python-win32/2006-May/004697.html

So: I must handle Unicode filenames. I think that if I find a library that works with Python's Unicode characters, and I can load it and use it to hash files, the code will be better and faster. Does anybody know about such code?

Py2.4, Windows, py2exe, wxPython... that is the specification.

Thanks for your help:
dd
--
http://mail.python.org/mailman/listinfo/python-list
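Worth noting for later readers: since Python 2.5 the standard hashlib module may already be fast enough, because the digest loop runs in C (OpenSSL), Python only feeds it chunks, and open() accepts Unicode filenames directly. A sketch computing both digests in one pass over the file:

```python
import hashlib

def file_digests(path, chunk_size=1 << 20):
    """MD5 and SHA-1 hex digests of one file in a single pass.

    Reading in large chunks keeps the Python-level overhead small;
    the hashing itself happens inside hashlib's C code.
    """
    md5 = hashlib.md5()
    sha1 = hashlib.sha1()
    with open(path, 'rb') as f:
        while True:
            chunk = f.read(chunk_size)
            if not chunk:
                break
            md5.update(chunk)
            sha1.update(chunk)
    return md5.hexdigest(), sha1.hexdigest()
```

Hashing both algorithms in the same pass also halves the disk I/O compared to two separate runs, which usually dominates the runtime anyway.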
How to calculate the "long" file size from nFileSizeLow and nFileSizeHigh
Hi !

I get the file data with FindFilesW. I want to calculate the file size from nFileSizeLow and nFileSizeHigh as easily as possible, without calling os.path.getsize() again. How do I do it? I need a correct result!

Thanks for help:
dd
--
http://mail.python.org/mailman/listinfo/python-list
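The two DWORDs combine with a shift and an add: the high part counts multiples of 2**32. A sketch; the mask guards against a low half that arrives as a negative 32-bit integer from some wrappers:

```python
def win32_file_size(n_file_size_high, n_file_size_low):
    """Rebuild the 64-bit size from WIN32_FIND_DATA's two 32-bit
    halves: size = high * 2**32 + low. Masking the low DWORD makes
    the result correct even if it was delivered as a signed value."""
    return (n_file_size_high << 32) + (n_file_size_low & 0xFFFFFFFF)
```

Since Python integers grow past 32 bits transparently, the result is exact for any file size.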
Re: Unicode to DOS filenames (to call FSUM.exe)
John Machin írta:
> Looks like you need a GetShortPathNameW() but it's not implemented.
> Raise it as an issue on the pywin32 sourceforge bug register. Tell Mark
> I sent you :-)
> Another thought: try using ctypes.

Hi!

It seems I found a solution. A little tricky, but it works:

#
import sys, os

UFN = u'%s\\xA\xff' % os.getcwd()
if os.path.exists(UFN):
    os.remove(UFN)
f = open(UFN, 'w')
f.write('%s\n' % ('=' * 80))
f.close()

from ctypes import windll, create_unicode_buffer, sizeof, WinError

buf = create_unicode_buffer(512)
if windll.kernel32.GetShortPathNameW(UFN, buf, sizeof(buf)):
    fname = buf.value
    #import win32api
    #dfn = win32api.GetShortPathName(name)
    #print dfn
else:
    raise WinError()

shortpath, filename = os.path.split(fname)

import win32file
filedatas = win32file.FindFilesW(fname)
fd = filedatas[0]
shortfilename = fd[9] or fd[8]
shortfilepath = os.path.join(shortpath, shortfilename)

print [UFN]
print shortfilepath
f = open(shortfilepath, 'r')
print f.read()
sys.exit()

But I don't understand: why does GetShortPathNameW not convert the filename too (like the directory)?

Thanx for help:
dd
--
http://mail.python.org/mailman/listinfo/python-list
Re: Unicode to DOS filenames (to call FSUM.exe)
John Machin írta:
> According to my reading of the source, the function you have called
> expects an 8-bit string.
>
> static PyObject *
> PyGetShortPathName(PyObject * self, PyObject * args)
> {
>     char *path;
>     if (!PyArg_ParseTuple(args, "s:GetShortPathName", &path))
>
> If it is given Unicode, PyArg_ParseTuple will attempt to encode it
> using the default encoding (ascii). Splat.
>
> Looks like you need a GetShortPathNameW() but it's not implemented.
> Raise it as an issue on the pywin32 sourceforge bug register. Tell Mark
> I sent you :-)
>
> It may be possible to fake up your default encoding to say cp1252 BUT
> take the advice of anyone who screams "Don't do that!" and in any case
> this wouldn't help you with a Russian, Chinese, etc etc filename.
>
> Another thought: try using ctypes.

Hi!

I tried that, but I get an error, because the result is Unicode too... :-(((

from ctypes import windll, create_unicode_buffer, sizeof, WinError

buf = create_unicode_buffer(512)
if windll.kernel32.GetShortPathNameW(UFN, buf, sizeof(buf)):
    name = buf.value
    print [name]

##
Commandline: C:\Python24\python.exe G:\SPEEDT~1\Module1.py
Workingdirectory: G:\speedtest
Timeout: 0 ms
[u'G:\\SPEEDT~1\\xA\xff']
Process "Pyhton Interpeter" terminated, ExitCode:
##

Can I do anything with this Unicode filename? My code must be universal!

Thanx for help:
dd
--
http://mail.python.org/mailman/listinfo/python-list
Re: Use subprocesses in a simple way...
10 May 2006 04:57:17 -0700, Serge Orlov <[EMAIL PROTECTED]>:
> I thought md5 algorithm is pretty light, so you'll be I/O-bound, then
> why bother with multi-processor algorithm?

This is an assessor utility. The program's architecture must be flexible, because I don't know where it will need to run (I can only fix this by writing it into the user's guide). But I want to speed up my algorithm with native code and multi-process code. I have not tested it yet, but I think that 4 subprocesses are quicker than one large process.

> > 2.)
> > Do you know command line to just like FSUM that can compute file
> > hashes (MD5/SHA1), and don't have any problems with unicode alt. file
> > names ?
>
> I believe you can wrap the broken program with a simple python wrapper.
> Use win32api.GetShortPathName to convert non-ascii file names to DOS
> filenames.

I use the 8th (?) field of the FindFilesW result. This is the alternative name of the file, but I have already found a file that is not handled by the FSUM utility...

Thanx for help:
dd

Ps: I wrote some code to test pipes and subprocesses. The name of the module is testpipe.py.
The code is:

import sys, random, subprocess, time, os, popen2, threading, thread

IsMaster = len(sys.argv) == 1

class ProcessThread(threading.Thread):
    def __init__(self, Param):
        threading.Thread.__init__(self)
        self.Param = Param
        self.RetVal = None
        self.start()

    def run(self):
        param = self.Param
        print "New thread with param", param
        po = os.popen2('c:\\python24\\python.exe testpipe.py 1')
        child_stdin, child_stdout = po
        child_stdin.write(str(param) + '\n')
        retval = child_stdout.readlines()
        child_stdin.close()
        child_stdout.close()
        self.RetVal = retval

if IsMaster:
    print "M:", time.time()
    print "M: Start"
    print "M: Open subprocess"
    cnt = 1
    fcnt = 0
    pths = []
    ress = [None] * 9
    while True:
        if cnt < 10:
            pt = ProcessThread(cnt)
            pths.append(pt)
            cnt += 1
        pcnt = 0
        for pt in pths:
            if pt:
                pcnt += 1
        if pcnt:
            for i in range(len(pths)):
                pt = pths[i]
                if pt and pt.RetVal:
                    pths[i] = None
                    ress[i] = pt.RetVal
                    print [pt.RetVal]
        else:
            break
    print "\nM: The results are:"
    for s in ress:
        print s
    print "M: End"
else:
    print "S:", time.time()
    print "S: Start"
    print "S: Data"
    print "S: End"
    s = sys.stdin.readline()
    print "S: %s" % s
    time.sleep(1)
    print "Echo: %s" % s
    print "Finished"
--
http://mail.python.org/mailman/listinfo/python-list
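For comparison, the same master/slave round trip can be sketched with the subprocess module alone (modern Python shown; text=True assumes Python 3.7+):

```python
import subprocess, sys

# Start a child interpreter, send it one line on stdin, read its reply.
child = subprocess.Popen(
    [sys.executable, "-c", "print(input().upper())"],
    stdin=subprocess.PIPE, stdout=subprocess.PIPE, text=True)
out, _ = child.communicate("hello\n")
print(out.strip())  # HELLO
```

communicate() handles the write/close/read dance in one call, which avoids most of the deadlock pitfalls of hand-managed popen2 pipes.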
Re: How to encode html and xml tag data with standard python modules ?
Hi!

I tried this function, but it does not encode the Hungarian-specific characters, like áéíóüóöőúüű: so, the chars above chr(127). Does Python have a function that can encode these chars too, like in Zope?

Thanx for help:
dd

Fredrik Lundh írta:
> DurumDara wrote:
>
>> Have the python standard mod. lib. a html/xml encoder functions/procedures ?
>
> you can use
>
>     cgi.escape(s)        # escapes < > &
>
> for CDATA sections, and
>
>     cgi.escape(s, True)  # escapes < > & "
>
> for attributes.
>
> to output ASCII, use
>
>     cgi.escape(s).encode("ascii", "xmlcharrefreplace")
>     cgi.escape(s, True).encode("ascii", "xmlcharrefreplace")
>
>> if b<32 or b>127 or c in ['<','>','"',';','&','@','%','#']:
>>     c="&#%03d;"%b
>
> that's a rather odd set of reserved characters...
--
http://mail.python.org/mailman/listinfo/python-list
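A sketch of Fredrik's suggestion applied to the Hungarian characters, shown with html.escape (which replaced cgi.escape in Python 3; the behaviour for these characters is assumed equivalent). The "xmlcharrefreplace" error handler is what turns the chars above chr(127) into numeric references:

```python
import html

s = u'árvíztűrő <tükörfúrógép> & "idézet"'
# escape the markup characters, then force everything above ASCII
# into &#NNN; character references:
ascii_safe = html.escape(s, quote=True).encode(
    "ascii", "xmlcharrefreplace").decode("ascii")
print(ascii_safe)
```

The result is pure ASCII, so it survives any output encoding.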
How to encode html and xml tag data with standard python modules ?
Hi!

Does the Python standard library have HTML/XML encoder functions/procedures, like this:

def ToSafeHTM(Text):
    s = ToHuStrSafe(Text)
    l = []
    for c in s:
        b = ord(c)
        if c == "\n":
            c = "<br>"
        else:
            if b < 32 or b > 127 or c in ['<', '>', '"', ';', '&', '@', '%', '#']:
                c = "&#%03d;" % b
        l.append(c)
    s = "".join(l)
    return s

Thanx for help:
dd
--
http://mail.python.org/mailman/listinfo/python-list
DB Interface for SQLite
Hi!

I have developed some tools for better/easier usage of SQLite wrapper(s). It has two parts:

1.) A custom interface with common methods.
2.) An APSW-oriented class.

The code is absolutely free, no copyright; the licence is freeware. Use it as you need. I publish it because I did not find the same thing on the net, and I want to spare other programmers from creating it painfully... I hope this is "usable" for others.

dd

dbif_apsw.py
Description: application/python

dbiface.py
Description: application/python
--
http://mail.python.org/mailman/listinfo/python-list
Re: SQLite (with APSW) and transaction separate
Hi!

Dennis Lee Bieber írta:
> On Wed, 19 Apr 2006 17:29:28 +0200, Christian Stooker
> <[EMAIL PROTECTED]> declaimed the following in comp.lang.python:
>
>> Please answer me: is what I wrote wrong, or does it really seem
>> not to work in SQLite?
>
> I don't think the feature you want to use is available -- heck, it
> may ONLY be a firebird feature... (I've not seen anything similar in any
> of the MySQL back-ends, or if there, I've not encountered it; and there
> aren't enough easily read documents for MaxDB [aka SAP-DB] in print to
> see if it has such a feature. My MSDE books don't mention it either, so
> I suspect M$ SQL Server may not support such).

Sorry, but I don't think this feature exists only in Firebird. Good RDBMS systems ***must have*** multiple transaction isolation levels. A higher isolation level guarantees a stable view for the current user.

The READ COMMITTED and REPEATABLE READ isolation levels make me sure that if I have a snapshot of the database, I get the same data *every time* while my transaction is open. When the transaction is closed and reopened, I can see the new modifications created by other user(s). But while I keep my transaction alive, I never see anyone else's modifications, only my own. In this mode I don't see any other user's updates, no matter how many commits he or she does. (As I see it in SQLite, the other user's committed records are visible to me -- this corrupts my view!)

That is the only way I can get good reports from an invoice manager application, because I see one consistent state of the database: sum(subtotals) is always equal to the total. At a lower isolation level I can see the newly committed records, and that is a bad thing, because then sum(subtotals) is sometimes not equal to the total.

This problem is just like a thread synchronization problem, where I need to protect my shared resources with locks/semaphores to avoid corrupting the data. In Delphi I must protect string variables, because when several threads want to write/read this string, they easily corrupt it...

> SQLite is a "file server" style database; Firebird is a
> client/server model.
>
> In Firebird, the server can track independent connections and
> maintain state for each. SQLite runs "locally"; each connection
> considers itself to be the only user of the database file (and locking
> for updates is on a file basis, not table or record). Once you commit a
> transaction in SQLite, the file itself is fully modified, and any other
> connections see that modified file -- there is no temporary session
> journal for a connection that holds a snapshot of that connection's
> data.

So I cannot use a transaction number that identifies transactions, and get the same isolation effect I described above?

Thanx for help:
dd
--
http://mail.python.org/mailman/listinfo/python-list
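For illustration, the behaviour complained about can be shown in a few lines (a sketch using today's standard sqlite3 module rather than APSW, default rollback-journal mode): an open write transaction on one connection is invisible to a second connection, but the moment it commits, the second connection sees it immediately, even "mid-session".

```python
import os, sqlite3, tempfile

path = os.path.join(tempfile.mkdtemp(), "demo.db")
a = sqlite3.connect(path)   # "writer" connection
b = sqlite3.connect(path)   # "reader" connection
a.execute("create table t (v integer)")
a.commit()

counts = []
counts.append(b.execute("select count(*) from t").fetchone()[0])  # before insert
a.execute("insert into t values (1)")      # opens a write transaction on a
counts.append(b.execute("select count(*) from t").fetchone()[0])  # uncommitted: invisible
a.commit()
counts.append(b.execute("select count(*) from t").fetchone()[0])  # committed: visible at once
print(counts)  # [0, 0, 1]
```

So SQLite does give READ COMMITTED behaviour, but the reader has no way to pin an older snapshot across its own statements unless it holds its own transaction open.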
Insertion (sql) bug in Py2.4 pySQLite 2.2
Hi!

I have this code in my program. Before this I used APSW, but that project's connection object doesn't have a close method...

...
crs.execute('''create table files (
    f_id integer not null primary key,
    f_name varchar(255),
    f_size long,
    f_attr integer,
    f_crtime varchar(20),
    f_mdtime varchar(20),
    f_hash long
)''')
...
crs = connection.cursor()
crs.execute('insert into files (f_crtime,f_mdtime,f_attr,f_id,f_hash,f_size,f_name)'
            ' values (?,?,?,?,?,?,?)',
            ('1', '1', 1, 1, 1, 1, 'a'))
...

So: first I create a table, and later I want to push some elements into it. Before this example code I used a special method to create the insert SQL from a tuple of values. But every time it failed with this message:

SQL error or inaccessible database

Then I simplified the code to hand-made SQL. And I got the same error message. The database file is removed and rebuilt on every execution. When I got tired of this error, I returned to APSW, and then I did not get error messages.

What is the problem with this package?
http://initd.org/pub/software/pysqlite/releases/2.2/2.2.0/pysqlite-2.2.0.win32-py2.4.exe

With Py2.3 I used the pysqlite package in my CD organizer program, and I did not experience the same problems.

Please help me:
dd
--
http://mail.python.org/mailman/listinfo/python-list
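For comparison, the same schema and parameterized insert run cleanly against the sqlite3 module that later absorbed pysqlite (a sketch; this does not reproduce the pysqlite 2.2.0 failure, it only shows the intended behaviour):

```python
import sqlite3  # pysqlite became the standard sqlite3 module in Python 2.5

conn = sqlite3.connect(":memory:")
crs = conn.cursor()
crs.execute("""create table files (
    f_id integer not null primary key,
    f_name varchar(255), f_size long, f_attr integer,
    f_crtime varchar(20), f_mdtime varchar(20), f_hash long)""")
crs.execute("insert into files (f_crtime,f_mdtime,f_attr,f_id,f_hash,f_size,f_name)"
            " values (?,?,?,?,?,?,?)", ('1', '1', 1, 1, 1, 1, 'a'))
conn.commit()
row = crs.execute("select f_id, f_name from files").fetchone()
print(row)  # (1, 'a')
```

Note that SQLite happily accepts the nonstandard "long" column type; it only influences type affinity.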
Py+SQLite or other (big output) ?
Hi!

I want to process a lot of data with Python, and I want to store it in a database.

In the prior version of my code I created a simple thing that deletes the old results, recreates the database and fills it up. But that is not too flexible, because when a power supply or hardware problem occurs, all of the processed items are lost.

So I am thinking about a solution that can continue the work. This is like a diff or sync-directories problem. Every time I need to compare the existing data with the inputs and get a result about the database: either it is finished, or I need to drop/add some elements.

That is not too hard to code, but I see that very large files are very vulnerable. Example: an older version of my code used zip to export files... When I processed many files, I hit the physical limit of zip (4 GB), and every result was destroyed in the crash... Or when the database file gets into an inconsistent state, the only way to get a result is to drop the db and recreate it.

So I am thinking about splitting the data into smaller sections and using those. If any of them is destroyed or injured, I need to recreate only that one. But this solution has problems too:

1.) I need a header file to "join" them logically. When this file is injured, all data must be dropped.
2.) The sync operation is harder, because some files are not needed (when the input data amount is less than before), or some records are not needed; or I need to add some records to some of the files.
3.) I need a global db to store global values.

If I use this concept, I pay with harder coding and many, many chances for bugs. So I want to use one database file -- but I want to protect it. How do I do that with SQLite?

I see these solutions:
- 1. I use transactions.
- 2. I create a full copy of the database after every bigger transaction.
- 3. Shadow file ???
- 4. Mirror database (this is problematic to sync).

Transactions are very good things, but they do not protect the database file from injury. The copy operation is better, but it greatly decreases the processing speed, because the result db grows fast, and copying 1/2 - 2/3 GB is slow and not too good.

Does SQLite have any solution to this problem? Or do you have any solution? Hash-DB? Pickled elements?

Thanx for the help:
dd
--
http://mail.python.org/mailman/listinfo/python-list
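One option that exists today (assuming Python 3.7+, where the standard sqlite3 module wraps SQLite's online backup API) is to copy the database page by page while it is live, instead of a raw file copy; a sketch:

```python
import sqlite3

src = sqlite3.connect(":memory:")  # stands in for the big working database
src.execute("create table results (v integer)")
src.execute("insert into results values (42)")
src.commit()

dst = sqlite3.connect(":memory:")  # in practice: a file on another disk
src.backup(dst)                    # consistent, transactional online copy

copied = dst.execute("select v from results").fetchone()[0]
print(copied)  # 42
```

Unlike copying the file, backup() always produces a consistent snapshot, even if writes happen during the copy.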
Can I export my data in pickle format safely ?
Hi!

I want to create a database from my data. I want to store the data in lists/dicts/normal variables, and I am thinking about using pickle to serialize/load the data from the file.

But: I remember that in 2004(?) I tried this. I stored my CD information in pickled objects (in files). And when I changed my Python version from ??? to 2.3(?), I got some error messages...

So: I want to store the data as simply as possible, but I don't want to get error messages in the future when I upgrade to a new Python version.

I see that the Gnosis project has pickle tools that can dump objects to XML. XML is compatible with any future version; I can read it, etc.

So, can anyone help me: does the pickle module have problems loading older dumped objects, or can I use it for developing my application? Or is there another tool I should use?

Thanks in advance:
dd
--
http://mail.python.org/mailman/listinfo/python-list
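For what it's worth, a minimal round-trip sketch: newer interpreters can always *read* older pickle protocols, so plain builtins (dicts, lists, strings, numbers) survive version upgrades -- the historical breakages usually came from pickled class instances whose defining modules changed, not from builtins:

```python
import io, pickle

data = {"title": u"My CD", "tracks": [1, 2, 3], "year": 2004}
buf = io.BytesIO()
# protocol 2 was introduced in Python 2.3 and is readable ever since
pickle.dump(data, buf, protocol=2)
buf.seek(0)
restored = pickle.load(buf)
print(restored == data)  # True
```

Sticking to builtin types (or exporting to XML/JSON, as with the Gnosis tools) is the safe long-term strategy.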
Graphs from Python
Hi!

I want to create graphs (edges, nodes) from Python. In my last project I used graph.py, which creates dot files -- and then I can convert these dot files with the ATT neato command line tool.

But the ATT code is unstable, sometimes needs reinstalling, and it uses horizontal layouts. This makes the reports unprintable (27 meters long), and the JPEG export fails.

So I am searching for another solution. I see that the GML language is very good, but I don't see any viewer for it except yEd. yEd is a GUI app, so I cannot use it to batch-convert GML files to another format (SVG, JPEG, etc.).

Does anybody have a solution for big graph creation? Please help me!

Thanks in advance:
dd
--
http://mail.python.org/mailman/listinfo/python-list
Py2Exe - how to set BDist (Build) directory ?
Hi!

I want to set the temporary build directory so that it is not placed in the Python source directory. I want this layout:

\bin\  compiled version
\src\  python source
\tmp\  temp build dir

How do I do it?

Thanx for help:
dd
--
http://mail.python.org/mailman/listinfo/python-list
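Assuming py2exe honours the standard distutils "build" options (the option names below -- build_base from distutils, dist_dir from py2exe -- are taken from that era's documentation and not verified against every release), a setup() for the wanted layout might look like:

```python
# setup.py -- assumed layout: sources in src\, build temp in tmp\, exe in bin\
from distutils.core import setup
import py2exe  # registers the "py2exe" command with distutils

setup(
    windows=[{"script": "src/main.py"}],   # hypothetical script name
    options={
        "build": {"build_base": "tmp"},    # temporary build directory
        "py2exe": {"dist_dir": "bin"},     # where the compiled exe lands
    },
)
```

The same options can also be given on the command line: python setup.py build --build-base=tmp py2exe --dist-dir=bin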
wxPython and Py2Exe... 2 questions
Hi!

I have an application that I compile to an exe.

1.) I want to compile main.ico into the exe, or into the zip. Can I do that?
2.) Can I compile the result into a directory I specify, not into "dist"?

I want to structure my projects like this:

\src\  python codes, etc.
\res\  icons, etc.
\bin\  the compiled version (exe, bins)

Can I do this with py2exe?

Thanx for the help:
dd

Ps: The setup file is:

# A setup script showing advanced features.
#
# Note that for the NT service to build correctly, you need at least
# win32all build 161, for the COM samples, you need build 163.
# Requires wxPython, and Tim Golden's WMI module.

# Note: WMI is probably NOT a good example for demonstrating how to
# include a pywin32 typelib wrapper into the exe: wmi uses different
# typelib versions on win2k and winXP. The resulting exe will only
# run on the same windows version as the one used to build the exe.
# So, the newest version of wmi.py doesn't use any typelib anymore.

from distutils.core import setup
import py2exe
import sys

# If run without args, build executables, in quiet mode.
if len(sys.argv) == 1:
    sys.argv.append("py2exe")
    sys.argv.append("-q")

class Target:
    def __init__(self, **kw):
        self.__dict__.update(kw)
        # for the versioninfo resources
        self.version = "1.5.0"
        self.company_name = "DurumDara Ltd."
        self.copyright = "Copyright by DurumDara 2006."
        self.name = "wxPyHDDirList"

# A program using wxPython

# The manifest will be inserted as resource into test_wx.exe. This
# gives the controls the Windows XP appearance (if run on XP ;-)
#
# Another option would be to store it in a file named
# test_wx.exe.manifest, and copy it with the data_files option into
# the dist-dir.
#

wxPyHDDirList = Target(
    # used for the versioninfo resource
    description = "wxPyHDDirList",
    # what to build
    script = "wxPyHDDirList.py",
    icon_resources = [(1, "main.ico")],
    dest_base = "wxPyHDDirList")

setup(
    options = {"py2exe": {"bundle_files": 1,
                          "compressed": 1,
                          "optimize": 2}},
    # The lib directory contains everything except the executables and the python dll.
    # Can include a subdirectory name.
    windows = [wxPyHDDirList],
)

Ps2: A BUG!

setup(
    options = {"py2exe": {"bundle_files": 1,
                          "compressed": 3,   # !
                          "optimize": 4}},   # !
    # The lib directory contains everything except the executables and the python dll.
    # Can include a subdirectory name.
    windows = [wxPyHDDirList],
)

These values make the compiled exe unable to do encodings (example: iso-8859-2) :-(
--
http://mail.python.org/mailman/listinfo/python-list
Path (graph) shower utility
Hi!

I need to create a program that reads eml file headers, analyzes the Received tags and creates a path database. I am finished with this program section.

But I want to show a graphical summary about the paths. What I want to show is like a graph -- ways, stations, etc. -- and I want to show the strength of the lines (how many of the mails use that way).

Does anyone know about a freeware tool, software, or Python module that can show graphs with good layouts?

Please help me. Thanx for it:
dd

Ps: The OS is Windows XP. I tried pydot, but this code created many 0-length files:

import pydot

print pydot.find_graphviz()

edges = [(1,2), (1,3), (1,4), (3,4)]
g = pydot.graph_from_edges(edges)
g.write_svg('graph_from_edges_dot.svg', prog='dot')
g.write_svg('graph_from_edges_neato.svg', prog='neato')
g.write_jpeg('graph_from_edges_dot.jpg', prog='dot')
g.write_jpeg('graph_from_edges_neato.jpg', prog='neato')

>>> {'fdp': 'c:\\Program Files\\ATT\\Graphviz\\bin\\fdp.exe',
     'twopi': 'c:\\Program Files\\ATT\\Graphviz\\bin\\twopi.exe',
     'neato': 'c:\\Program Files\\ATT\\Graphviz\\bin\\neato.exe',
     'dot': 'c:\\Program Files\\ATT\\Graphviz\\bin\\dot.exe',
     'circo': 'c:\\Program Files\\ATT\\Graphviz\\bin\\circo.exe'}

Thanx for help:
dd
--
http://mail.python.org/mailman/listinfo/python-list
Re: Little tool - but very big size... :-(
Hi!

> I've snipped out the relatively small files above. Yes, true, some of
> them consume about 0.5MB each.
>
> It seems to me that your choice of GUI framework is a major cost here. I
> have never used wxpython. Instead my GUIs are based on tkinter. What I
> typically end up with is roughly 7MB. My last example ended up in 7.5MB.
> Zipping the whole thing reduces that to 2.6MB. Is it completely out of
> the question to have a compressed version of the tool on your memory
> stick, and to decompress it on the examined computer before actually
> running the tool?
>
> /MiO

Yes, wxPython uses the big files, and win32api does too... I need them, but wxPython could be changed to tkinter, because I use only one special thing from it: wx.GenericDirCtrl (the user can choose a file or a directory with this control in the same way).

The last (compressed) version is 5 MB. That is better!

Thanx:
dd
--
http://mail.python.org/mailman/listinfo/python-list
Re: Little tool - but very big size... :-(
Dear Martin!

Thanx for it:

setup(
    options = {"py2exe": {"bundle_files": 1,  # < this helped me
                          "compressed": 1,
                          "optimize": 2}},
    # The lib directory contains everything except the executables and the python dll.
    # Can include a subdirectory name.
    windows = [wxPyHDDirList],
)

Can I increase the level of the optimization? Can I increase the level of the zip packing?

Please help me:
dd

Martin Franklin wrote:
> Durumdara wrote:
>
>> Hi!
>>
>> I have a problem. I have a little tool that can get data about
>> filesystems, and I wrote it in Python.
>>
>> The main user asked me for a GUI for this software.
>>
>> This user needs a portable program, so I created this kind of
>> software with Py2Exe.
>>
>> But it has a very big size: 11 MB... :-(
>>
>> I need a more compressed result. Can I compress dll-s and pyd-s with
>> Py2Exe? Can I decrease the total size with something?
>
> you could try UPX http://upx.sourceforge.net/ I think it will handle
> the .pyd and dll files (it did the last time I tested it)
>
>> If not, how do I create a self-unpacker and self-starter program that
>> uses a temporary directory on the disk? With WinRar?
>
> I may be wrong (havn't used py2exe in a while) but I think it can now
> (again) create a single exe file? Otherwise something like Inno Setup
> http://www.jrsoftware.org/isinfo.php or a simple self extracting zip
> file
--
http://mail.python.org/mailman/listinfo/python-list
Re: Little tool - but very big size... :-(
Hi!

Yes, it is. But this tool is designed for USB pen drive usage. The assessor collects all the tools he needs to check the machine(s) at the inspected corporation. He/she needs small programs, because he also needs to store the results of the checks, not only the tools. So I need to minimize the code size.

Thanx for help:
dd

[EMAIL PROTECTED] wrote:
> 11MB is seldom a concern for today's machine.
>
> Durumdara wrote:
>
>> Hi!
>>
>> I have a problem. I have a little tool that can get data about
>> filesystems, and I wrote it in Python.
>>
>> The main user asked me for a GUI for this software.
>>
>> This user needs a portable program, so I created this kind of
>> software with Py2Exe.
>>
>> But it has a very big size: 11 MB... :-(
>>
>> The dist directory:
>> 2006.02.21. 10:09                  .
>> 2006.02.21. 10:09                  ..
>> 2005.09.28. 12:41          77 824  bz2.pyd
>> 2006.02.21. 10:09               0  dirlist.txt
>> 2006.02.20. 12:51         611 384  library.zip
>> 2006.02.15. 16:22          23 558  main.ico
>> 2004.12.16. 17:22         348 160  MSVCR71.dll
>> 2005.09.28. 12:41       1 867 776  python24.dll
>> 2006.01.11. 12:19         102 400  pywintypes24.dll
>> 2005.09.28. 12:41         405 504  unicodedata.pyd
>> 2005.09.28. 12:41           4 608  w9xpopen.exe
>> 2006.01.11. 12:19          73 728  win32api.pyd
>> 2006.01.11. 12:20          81 920  win32file.pyd
>> 2006.01.11. 12:26         106 496  win32security.pyd
>> 2006.01.10. 19:09       4 943 872  wxmsw26uh_vc.dll
>> 2006.02.20. 12:51          40 960  wxPyHDDirList.exe
>> 2005.09.28. 12:41          69 632  zlib.pyd
>> 2006.01.10. 19:13         626 688  _controls_.pyd
>> 2006.01.10. 19:12         696 320  _core_.pyd
>> 2006.01.10. 19:13         364 544  _gdi_.pyd
>> 2006.01.10. 19:13         491 520  _misc_.pyd
>> 2006.01.10. 19:13         548 864  _windows_.pyd
>>           20 file      11 485 758  byte
>>
>> I need a more compressed result. Can I compress dll-s and pyd-s with
>> Py2Exe? Can I decrease the total size with something?
>>
>> If not, how do I create a self-unpacker and self-starter program that
>> uses a temporary directory on the disk? With WinRar?
>>
>> Thanx for help:
>> dd
--
http://mail.python.org/mailman/listinfo/python-list
Little tool - but very big size... :-(
Hi!

I have a problem. I have a little tool that can get data about filesystems, and I wrote it in Python.

The main user asked me for a GUI for this software.

This user needs a portable program, so I created this kind of software with Py2Exe.

But it has a very big size: 11 MB... :-(

The dist directory:

2006.02.21. 10:09                  .
2006.02.21. 10:09                  ..
2005.09.28. 12:41          77 824  bz2.pyd
2006.02.21. 10:09               0  dirlist.txt
2006.02.20. 12:51         611 384  library.zip
2006.02.15. 16:22          23 558  main.ico
2004.12.16. 17:22         348 160  MSVCR71.dll
2005.09.28. 12:41       1 867 776  python24.dll
2006.01.11. 12:19         102 400  pywintypes24.dll
2005.09.28. 12:41         405 504  unicodedata.pyd
2005.09.28. 12:41           4 608  w9xpopen.exe
2006.01.11. 12:19          73 728  win32api.pyd
2006.01.11. 12:20          81 920  win32file.pyd
2006.01.11. 12:26         106 496  win32security.pyd
2006.01.10. 19:09       4 943 872  wxmsw26uh_vc.dll
2006.02.20. 12:51          40 960  wxPyHDDirList.exe
2005.09.28. 12:41          69 632  zlib.pyd
2006.01.10. 19:13         626 688  _controls_.pyd
2006.01.10. 19:12         696 320  _core_.pyd
2006.01.10. 19:13         364 544  _gdi_.pyd
2006.01.10. 19:13         491 520  _misc_.pyd
2006.01.10. 19:13         548 864  _windows_.pyd
          20 file      11 485 758  byte

I need a more compressed result. Can I compress dll-s and pyd-s with Py2Exe? Can I decrease the total size with something?

If not, how do I create a self-unpacker and self-starter program that uses a temporary directory on the disk? With WinRar?

Thanx for help:
dd
--
http://mail.python.org/mailman/listinfo/python-list
How do I write DBASE files without ODBC
Hi!

I have a text processor code, and I want to put the results into standard files. HTML, XML, SQLite -- they are OK. But I want to put this data into DBF too, because when there are many records, it does not fit in an Excel table (loaded from HTML). XML and SQLite need some development to get the data out.

I need some memo fields, not only the standard field types. Fixed DBASE tables are simple to write, but for memo fields I need some tool. ODBC is good for this, but I want to create a standalone exe, without installation, and ODBC must be set up manually...

Does anybody know about a DBASE handler module for Python?

Thanx for help:
dd
--
http://mail.python.org/mailman/listinfo/python-list
Copy files to Linux server through ssh tunnel
Hi!

I have some backup files on a server farm. I want to store these local backup files on a backup file server "for safety's sake". These files are compressed zip files with 12-character passwords.

But my system admin asked me how I can improve the safety of the copy operation and of the storage (now I use a Samba share to store these files: I map the SMB share on the client, copy the files, and unmap the SMB share). So I am thinking about the ssh protocol to improve protection. The backup script is a py script.

I see that WinSCP can copy files through an ssh tunnel. Can I do that too? How? How do I do it in a pythonic way?

Please help me with some examples, urls or other infos!

Thanks * 1000:
dd
--
1 Gbyte Ingyenes E-Mail Tárhely a MailPont-tól http://www.mailpont.hu/
--
http://mail.python.org/mailman/listinfo/python-list