Re: scanf in python
Thanks Fredrik, very nice examples.

> AMD wrote:
>
>>> For reading delimited fields in Python, you can use the .split
>>> string method.
>>
>> Yes, that is what I use right now, but I still have to do the
>> conversion to integers, floats and dates as several separate steps.
>> What is nice about the scanf function is that it is all done in the
>> same step. Exactly like when you use % to format a string and pass it
>> a dictionary, it does all the conversions to string for you.
>
> You're confusing surface syntax with processing steps. If you want to
> do things on one line, just add a suitable helper to take care of the
> processing. E.g. for whitespace-separated data:
>
>     >>> def scan(s, *types):
>     ...     return tuple(f(v) for (f, v) in zip(types, s.split()))
>     ...
>     >>> scan("1 2 3", int, int, float)
>     (1, 2, 3.0)
>
> This has the additional advantage that it works with any data type that
> provides a way to convert from string to that type, not just a small
> number of built-in types. And you can even pass in your own local
> helper, of course:
>
>     >>> def myfactory(n):
>     ...     return int(n) * "!"
>     ...
>     >>> scan("1 2 3", int, float, myfactory)
>     (1, 2.0, '!!!')
>
> If you're reading multiple columns of the same type, you might as well
> inline the whole thing:
>
>     data = map(int, line.split())
>
> For other formats, replace the split with slicing or a regexp. Or use a
> ready-made module; there's hardly ever any reason to read standard CSV
> files by hand when you can just do "import csv", for example.
>
> Also note that function *creation* is relatively cheap in Python, and
> since "def" is an executable statement, you can create functions pretty
> much anywhere; if you find that you need a helper somewhere in your
> code, just put it there. The following is a perfectly valid pattern:
>
>     def myfunc(...):
>         def myhelper(...):
>             ...
>         myhelper(...)
>         myhelper(...)
>         for line in open(file):
>             myhelper(...)
> (I'd say knowing when and how to abstract things away into a local
> helper is an important step towards full Python fluency -- that is, the
> point where you're able to pack "a lot of action in a small amount of
> clear code" most of the time.)

--
http://mail.python.org/mailman/listinfo/python-list
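Fredrik's helper generalizes beyond whitespace splitting: the same converter-zipping idea works, for example, for fixed-width records. A sketch, not from the thread; `scan_fixed` and its `(start, end, converter)` field triples are hypothetical names of my own:

```python
# Fredrik's whitespace helper, as posted above:
def scan(s, *types):
    return tuple(f(v) for (f, v) in zip(types, s.split()))

# A hypothetical fixed-width variant: each field is a (start, end,
# converter) triple, sliced out of the line and converted in one pass.
def scan_fixed(s, *fields):
    return tuple(conv(s[start:end].strip()) for start, end, conv in fields)

print(scan("1 2 3", int, int, float))   # (1, 2, 3.0)
print(scan_fixed("  42  3.14px",
                 (0, 4, int), (4, 10, float), (10, 12, str)))
# (42, 3.14, 'px')
```

The converter slot accepts any callable from string to value, so dates, enums or user-defined classes plug in the same way.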
Re: scanf in python
> In message <[EMAIL PROTECTED]>, AMD wrote:
>
>> Actually it is quite common; it is used for processing files, not for
>> reading parameters. You can use it whenever you need to read a simple
>> CSV file or a fixed-format file which contains many lines with several
>> fields per line.
>
> I do that all the time, in Python and C++, but I've never felt the need
> for a scanf-type function.

I agree scanf is not a must-have function but rather a nice-to-have
function.

> For reading delimited fields in Python, you can use the .split string
> method.

Yes, that is what I use right now, but I still have to do the conversion
to integers, floats and dates as several separate steps. What is nice
about the scanf function is that it is all done in the same step.
Exactly like when you use % to format a string and pass it a dictionary,
it does all the conversions to string for you.

Cheers,

André
Re: scanf in python
> AMD wrote:
>
>> I had seen this pure Python implementation, but it is not as fast or
>> as elegant as an implementation written in C directly within Python,
>> with no need for an import.
>
> maybe you should wait with disparaging comments about how Python is not
> what you want it to be until you've learned the language?

Hello Fredrik,

I didn't think my comment would offend anyone; I'm sorry if it did. I
have been developing in Python for about 5 years, and my company uses
Python as a scripting language for all of its products. We use Jython
for our server products. I think I know the language pretty well by now.
So I think I have earned the right to suggest improvements to the
language, or at least an intelligent discussion of new features, without
the need for offensive comments, don't you think?

André
Re: scanf in python
Robert Kern wrote:
> AMD wrote:
>
>> Hello, I often need to parse strings which contain a mix of
>> characters, integers and floats; the C-language scanf function is very
>> practical for this purpose. I've been looking for such a feature and I
>> have been quite surprised to find that it has been discussed as far
>> back as 2001 but never implemented.
>
> The second Google hit is a pure Python implementation of scanf.
>
> http://hkn.eecs.berkeley.edu/~dyoo/python/scanf/

Hi Robert,

I had seen this pure Python implementation, but it is not as fast or as
elegant as an implementation written in C directly within Python, with
no need for an import.

Cheers,

André
Re: scanf in python
> I'm pretty certain Python won't grow an additional operator for this.
> You are free to create a scanf implementation as a third-party module,
> though. IMHO the usability of the approach is very limited. First of
> all, the need to capture more than one input token is *very* rare;
> nearly all command-line tools I know that require interactive user
> input (like the Linux kernel config tool) do so by providing either
> line-by-line value entry (including defaults, something you can't do
> with your approach) or dialog-centric value entry with curses. So I
> doubt you will gather much momentum on this. Good luck though.
>
> Diez

Actually it is quite common; it is used for processing files, not for
reading parameters. You can use it whenever you need to read a simple
CSV file or a fixed-format file which contains many lines with several
fields per line. The advantage of the approach is that it combines the
parsing and the conversion of the fields into one operation. Another
advantage of using simple formatting strings is that it allows for easy
translation of these lines, just like you have with the % operator for
output. I don't see why Python can have an operator for output but not
one for input; it's just not symmetrical.

I don't see why you couldn't use this method for line-by-line value
entry either: just add \n between your %s or %d. The method is quite
versatile and much simpler than regular expressions plus conversion
afterwards.

André
scanf in python
Hello,

I often need to parse strings which contain a mix of characters,
integers and floats; the C-language scanf function is very practical for
this purpose. I've been looking for such a feature and I have been quite
surprised to find that it has been discussed as far back as 2001 but
never implemented. The recommended approach seems to be to use split and
then atoi or atof, or to use a regex and then atoi and atof. Both
approaches seem a lot less natural and much more cumbersome than scanf.

If Python already has a % string operator that behaves like printf, why
not implement either a %% or a << string operator to behave like scanf?
Use could be like the following:

    a, b, c = "%d %f %5c" %% "1 2.0 abcde"

or

    a, b, c = "%d %f %5c" << "1 2.0 abcde"

%% is closer to the % operator; << seems more intuitive to me. Either of
these methods seems to me much simpler than:

    lst = "1 2.0 abcde".split()
    a = int(lst[0])
    b = float(lst[1])
    c = lst[2]

or, even worse, using regular expressions to parse such simple input. I
like Python because it is concise and easy to read, and I really think
it could use such an operator. I know this has been discussed before,
many times, but all the previous threads I found seem to be dead and I
would like to invite further investigation of this topic.

Cheers,

André M. Descombes
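For comparison, the proposed one-liner can already be approximated in a single expression by zipping one converter per field over the split line. A sketch, not André's code:

```python
# Approximating  a, b, c = "%d %f %5c" %% "1 2.0 abcde"  with plain
# Python: pair each field with its converter and unpack the result.
line = "1 2.0 abcde"
a, b, c = (f(v) for f, v in zip((int, float, str), line.split()))
print(a, b, c)  # 1 2.0 abcde
```

The %5c width semantics (exactly five characters) are not reproduced here; that would need slicing rather than split.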
Re: Too many open files
Thank you everyone, I ended up using a solution similar to what Gary
Herron suggested: caching the output in a list of lists, one per file,
and only doing the IO when a list reaches a certain threshold. After
playing around with the list threshold I ended up with faster execution
times than originally, while having a maximum of two files open at a
time! It's only a matter of trading memory for open files. It could be
that using this strategy with asynchronous IO or threads would yield
even faster times, but I haven't tested it.

Again, much appreciated, thanks for all your suggestions.

Andre M. Descombes

> Hello,
>
> I need to split a very big file (10 gigabytes) into several thousand
> smaller files according to a hash algorithm; I do this one line at a
> time. The problem I have is that opening a file using append, writing
> the line and closing the file is very time consuming. I'd rather have
> the files all open for the duration, do all the writes and then close
> them all at the end. The problem I have under Windows is that as soon
> as I get to 500 files I get the "Too many open files" message. I tried
> the same thing in Delphi and I can get to 3000 files. How can I
> increase the number of open files in Python?
>
> Thanks in advance for any answers!
>
> Andre M. Descombes
Too many open files
Hello,

I need to split a very big file (10 gigabytes) into several thousand
smaller files according to a hash algorithm; I do this one line at a
time. The problem I have is that opening a file using append, writing
the line and closing the file is very time consuming. I'd rather have
the files all open for the duration, do all the writes and then close
them all at the end. The problem I have under Windows is that as soon as
I get to 500 files I get the "Too many open files" message. I tried the
same thing in Delphi and I can get to 3000 files. How can I increase the
number of open files in Python?

Thanks in advance for any answers!

Andre M. Descombes
Re: Memory Problems in Windows 2003 Server
Thanks Marc,

I just tried shelve, but it is very slow :( I haven't tried the
databases yet.

Andre

Marc 'BlackJack' Rintsch wrote:
> On Mon, 15 Oct 2007 11:31:59 +0200, amdescombes wrote:
>
>> Are there any classes that implement disk based dictionaries?
>
> Take a look at the `shelve` module from the standard library.
>
> Or object databases like ZODB or Durus.
>
> Ciao,
> Marc 'BlackJack' Rintsch
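Where shelve is too slow, another standard-library option for a disk-based mapping is sqlite3, which lets writes be batched in a single transaction. A sketch with hypothetical names (`DiskCounter`); the ON CONFLICT upsert syntax needs SQLite 3.24 or later:

```python
import sqlite3

class DiskCounter:
    """A minimal disk-backed key counter on top of sqlite3."""

    def __init__(self, path):
        self.db = sqlite3.connect(path)
        self.db.execute("CREATE TABLE IF NOT EXISTS counts "
                        "(key TEXT PRIMARY KEY, n INTEGER)")

    def add(self, keys):
        with self.db:                        # one transaction per batch
            self.db.executemany(
                "INSERT INTO counts (key, n) VALUES (?, 1) "
                "ON CONFLICT(key) DO UPDATE SET n = n + 1",
                ((k,) for k in keys))

    def get(self, key):
        row = self.db.execute("SELECT n FROM counts WHERE key = ?",
                              (key,)).fetchone()
        return row[0] if row else 0
```

Batching many keys per `add` call is what makes this competitive: per-key commits would be as slow as shelve or slower.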
Re: Memory Problems in Windows 2003 Server
Hi Brad,

I do the reading one line at a time; the problem seems to be with the
dictionary I am creating.

Andre

> amdescombes wrote:
>> Hi,
>>
>> I am using Python 2.5.1. I have an application that reads a file and
>> generates a key in a dictionary for each line it reads. I have managed
>> to read a 1GB file and generate more than 8 million keys on a Windows
>> XP machine with only 1GB of memory, and all works as expected. When I
>> use the same program on a Windows 2003 Server with 2GB of RAM I start
>> getting MemoryError exceptions!
>> I have tried setting IMAGE_FILE_LARGE_ADDRESS_AWARE on both Python.exe
>> and Python25.dll, and setting the /3GB flag in the boot.ini file, to
>> no avail. I still get the MemoryError exceptions.
>>
>> Has anybody encountered this problem before?
>>
>> Thanks in advance for any ideas/suggestions.
>>
>> Best Regards,
>>
>> André M. Descombes
>
> I forgot to mention that the OS itself or other processes may be using
> a lot of memory. So, just because you have 2GB, that does not mean you
> can access all of it at once. I would guess that 25% of memory is in
> constant use by the OS. So, do your IO/reads in smaller chunks, similar
> to the example I gave earlier.
>
> Brad

--
http://mail.python.org/mailman/listinfo/python-list
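If the dictionary itself is what exhausts memory, chunking the reads does not help; what can help is partitioning the keys into temporary bucket files by hash, then building a dictionary for one bucket at a time, so only a fraction of the keys is in memory at once. A sketch with hypothetical names (`count_keys`, `NBUCKETS`), not from the thread:

```python
import os
import tempfile

NBUCKETS = 16  # each in-memory dictionary holds ~1/16 of the keys

def count_keys(lines):
    """Count distinct keys using external hash partitioning."""
    tmpdir = tempfile.mkdtemp()
    buckets = [open(os.path.join(tmpdir, "b%d" % i), "w+")
               for i in range(NBUCKETS)]
    # Pass 1: scatter each key to its bucket file by hash.
    for line in lines:
        key = line.strip()
        buckets[hash(key) % NBUCKETS].write(key + "\n")

    # Pass 2: deduplicate one bucket at a time with a small dictionary.
    total = 0
    for f in buckets:
        f.seek(0)
        seen = {}
        for line in f:
            seen[line.rstrip("\n")] = True
        total += len(seen)
        f.close()
    return total
```

The same two-pass pattern works for counting occurrences or any other per-key aggregate, at the cost of writing the data out once.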