Re: Financial time series data
On 03-Sep-10 1:48 PM, Frederic Rentsch wrote: And do let us know if you get an answer from Yahoo. Hacks like this are unreliable. They fail almost certainly the next time a page gets redesigned, which can be any time. Indeed -- see my other post (regarding ystockquote.py). There's a CSV HTTP API that should be used if you want to obtain any Yahoo! Finance data programmatically. Trent. -- http://mail.python.org/mailman/listinfo/python-list
Re: Financial time series data
On 03-Sep-10 7:29 AM, Virgil Stokes wrote:

A more direct question on accessing stock information from Yahoo. First, use your browser to go to: http://finance.yahoo.com/q/cp?s=%5EGSPC+Components Now, you see the first 50 rows of a 500-row table of information on the S&P 500 index. You can LM click on 1-50 of 500 |First|Previous|Next|Last below the table to position to any of the 10 pages. I would like to use Python to do the following. *Loop on each of the 10 pages and for each page extract information for each row --- How can this be accomplished automatically in Python?* Let's take the first page (as shown by default). It is easy to see the link to the data for "A" is http://finance.yahoo.com/q?s=A. That is, I can just move my cursor over the "A" and I see this URL in the message at the bottom of my browser (Explorer 8). If I LM click on "A" then I will go to this link --- Do this! You should now see a table which shows information on this stock and *this is the information that I would like to extract*. I would like to do this for all 500 stocks without the need to enter the symbols for them (e.g. "A", "AA", etc.). It seems clear that this should be possible since all the symbols are in the first column of each of the 50-row tables --- but it is not at all clear how to extract these automatically in Python. Hopefully, you understand my problem. Again, I would like Python to cycle through these 10 pages and extract this information for each symbol in this table. --V

You want the 'get_historical_prices' method of the (beautifully elegant) 'ystockquote.py': http://www.goldb.org/ystockquote.html. Just specify a start date and end date and voilà, you get an array of historical price data for any symbol you pass in. I used this module with great success to download ten years of historical data for every symbol I've ever traded. Regards, Trent. -- http://mail.python.org/mailman/listinfo/python-list
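[Editor's note] For anyone wanting the CSV route mentioned elsewhere in the thread without going through ystockquote, here is a rough sketch of how that old download endpoint was driven. The URL format reflects Yahoo's long-retired iChart CSV service, so the host and the parameter names (`s`, `a`…`f`, `g`) should be treated as historical assumptions; the parser works on any CSV text of that shape.

```python
import csv
from io import StringIO

def build_history_url(symbol, start, end):
    """Build the URL of Yahoo's old CSV download endpoint (defunct).

    start/end are (year, month, day) tuples; the endpoint's 'a' and 'd'
    month parameters were zero-based.  Historical illustration only.
    """
    (y1, m1, d1), (y2, m2, d2) = start, end
    return ("http://ichart.finance.yahoo.com/table.csv"
            "?s=%s&a=%d&b=%d&c=%d&d=%d&e=%d&f=%d&g=d&ignore=.csv"
            % (symbol, m1 - 1, d1, y1, m2 - 1, d2, y2))

def parse_history_csv(text):
    """Parse a CSV body (Date,Open,High,... header) into row dicts."""
    return list(csv.DictReader(StringIO(text)))
```

Fetching the URL (with urllib) and feeding the response body to `parse_history_csv` gives one dict per trading day, which covers the "ten years of historical data" use described above.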
Re: IDLE / Black frame on Mac
In article , Kristoffer Follesdal wrote: > *Forgot to tell that I am using a Mac with Snow Leopard. Which version of Python 3.1.2? From the python.org installer? MacPorts? Built from source - if so, which version of Tk? -- Ned Deily, n...@acm.org -- http://mail.python.org/mailman/listinfo/python-list
Re: Queue cleanup
Lawrence D'Oliveiro writes: >> GC's for large systems generally don't free (or even examine) individual >> garbage objects. They copy the live objects to a new contiguous heap >> without ever touching the garbage, and then they release the old heap. > > And suddenly you’ve doubled the memory requirements. And on top of that, > since you’re moving the valid objects into different memory, you’re forcing > cache misses on all of them as well. A minimal naive implementation indeed doubles the memory requirements, but from a Python perspective where every integer takes something like 24 bytes already, even that doesn't seem so terrible. More sophisticated implementations use multiple small heaps or other tricks. It still costs something in memory footprint, but less than the minimal implementation's 2x cost. The new heap is filled sequentially so accesses to it will have good locality. You do have to jump around within the old heap, but again, with generational schemes, in the more frequent collections, the old heap fits entirely in cache. For example, GHC's minor heap size is 256kB. For major collections, GHC switches (or used to) from copying to a mark/compact scheme once the amount of live data in the heap goes over some amount, giving the best of both worlds. It's also the case that programs with very large memory consumption tend to use most of the memory for large arrays that don't contain pointers (think of a database server with a huge cache). That means the gc doesn't really have to think about all that much of the memory. > This is the continuing problem with garbage collection: all the attempts to > make it cheaper just end up moving the costs somewhere else. Same thing with manual allocation. That moves the costs off the computer and onto the programmer. Not good, most of the time. Really, I'm no gc expert, but the stuff you're saying about gc is quite ill-informed. You might want to check out some current literature. 
-- http://mail.python.org/mailman/listinfo/python-list
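[Editor's note] The point that a copying collector never touches the garbage can be made concrete with a toy semispace collector (Cheney's algorithm) in Python. This is purely illustrative: the `Cell` heap model and index-based references are invented for the sketch, not how any real GC represents memory.

```python
class Cell:
    """A heap cell: a value plus references (indices into the heap)."""
    def __init__(self, value, refs=()):
        self.value = value
        self.refs = list(refs)

def collect(heap, roots):
    """Copy live cells into a fresh heap (Cheney's algorithm).

    Only cells reachable from `roots` are ever examined; garbage cells
    are simply left behind when the old heap is discarded.
    Returns (new_heap, new_roots).
    """
    new_heap = []
    forward = {}  # old index -> new index (forwarding table)

    def copy(i):
        if i not in forward:
            forward[i] = len(new_heap)
            new_heap.append(Cell(heap[i].value, heap[i].refs))
        return forward[i]

    new_roots = [copy(r) for r in roots]
    scan = 0
    while scan < len(new_heap):          # breadth-first scan of to-space
        cell = new_heap[scan]
        cell.refs = [copy(j) for j in cell.refs]  # rewrite to new indices
        scan += 1
    return new_heap, new_roots
```

Note how the new heap is filled strictly sequentially, which is where the locality argument in the post comes from.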
Re: Queue cleanup
On 04/09/2010 03:21, Paul Rubin wrote: Lawrence D'Oliveiro writes: Java has considerably greater reputation for reliability than C or C++. Wonder why Sun’s licence explicitly forbade its use in danger-critical areas like nuclear power plants and the like, then? Probably because Sun lawyers demanded it. Is there a Sun C or C++ compiler with a license that doesn't have that restriction? Even if there is, it just means those languages are so unreliable that the lawyers felt confident that any meltdown could be blamed on a bug in the user's rather than the compiler ;-). Let’s put it this way: the life-support system on the International Space Station is written in Ada. Would you trust your life to code written in Java? The scary thing is I don't know whether I'm already doing that. Life support systems have hard real-time requirements (Ada's forte) but I'd expect lots of military decision-support systems are written in Java. Maybe one of them will raise a false alert and somebody will launch a war. I thought it was just that if it wasn't explicitly forbidden then someone might try to use it and then sue if something went wrong, even though common sense would have said that it was a bad idea in the first place! :-) -- http://mail.python.org/mailman/listinfo/python-list
Re: Queue cleanup
Lawrence D'Oliveiro writes: >> Java has considerably greater reputation for reliability than C or C++. > > Wonder why Sun’s licence explicitly forbade its use in danger-critical > areas like nuclear power plants and the like, then? Probably because Sun lawyers demanded it. Is there a Sun C or C++ compiler with a license that doesn't have that restriction? Even if there is, it just means those languages are so unreliable that the lawyers felt confident that any meltdown could be blamed on a bug in the user's rather than the compiler ;-). > Let’s put it this way: the life-support system on the International Space > Station is written in Ada. Would you trust your life to code written in > Java? The scary thing is I don't know whether I'm already doing that. Life support systems have hard real-time requirements (Ada's forte) but I'd expect lots of military decision-support systems are written in Java. Maybe one of them will raise a false alert and somebody will launch a war. -- http://mail.python.org/mailman/listinfo/python-list
Re: Queue cleanup
In message <7xr5heufhb@ruckus.brouhaha.com>, Paul Rubin wrote: > Java has considerably greater reputation for reliability than C or C++. Wonder why Sun’s licence explicitly forbade its use in danger-critical areas like nuclear power plants and the like, then? > Ada is a different story, but Ada programs (because of the application > area Ada is used in) tend not to use a lot of dynamic memory allocation > in the first place. A little googling shows there are GC extensions > available for Ada, though I don't know if they are used much. Let’s put it this way: the life-support system on the International Space Station is written in Ada. Would you trust your life to code written in Java? -- http://mail.python.org/mailman/listinfo/python-list
Re: Queue cleanup
In message <7xmxs2uez1@ruckus.brouhaha.com>, Paul Rubin wrote: > Lawrence D'Oliveiro writes: > >> Whereas garbage collection will happen at some indeterminate time long >> after the last access to the object, when it very likely will no longer >> be in the cache, and have to be brought back in just to be freed, > > GC's for large systems generally don't free (or even examine) individual > garbage objects. They copy the live objects to a new contiguous heap > without ever touching the garbage, and then they release the old heap. And suddenly you’ve doubled the memory requirements. And on top of that, since you’re moving the valid objects into different memory, you’re forcing cache misses on all of them as well. This is the continuing problem with garbage collection: all the attempts to make it cheaper just end up moving the costs somewhere else. -- http://mail.python.org/mailman/listinfo/python-list
Re: Queue cleanup
In message <7xiq2que93@ruckus.brouhaha.com>, Paul Rubin wrote:

> Lawrence D'Oliveiro writes:
>>> Refcounting is susceptible to the same pauses for reasons already
>>> discussed.
>>
>> Doesn’t seem to happen in the real world, though.
>
> def f(n):
>     from time import time
>     a = [1] * n
>     t0 = time()
>     del a
>     t1 = time()
>     return t1 - t0
>
> for i in range(9):
>     print i, f(10**i)
>
> on my system prints:
>
> 0 2.86102294922e-06
> 1 2.14576721191e-06
> 2 3.09944152832e-06
> 3 1.00135803223e-05
> 4 0.000104904174805
> 5 0.00098991394043
> 6 0.00413608551025
> 7 0.037693977356
> 8 0.362598896027
>
> Looks pretty linear as n gets large. 0.36 seconds (the last line) is a
> noticeable pause.

Which just proves the point. You had to deliberately set up the situation to make that happen. And it remains just as easy to pinpoint where it is happening, so you can control it. With a garbage collector, you don’t have that control. Even if you try to avoid freeing a single large structure at once, it’s still liable to batch up a lot of small objects to free at once, so the problem can still happen. -- http://mail.python.org/mailman/listinfo/python-list
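[Editor's note] The "you can control it" claim is easy to demonstrate: instead of dropping a huge list in one statement, release it a slice at a time, so each deallocation pause is bounded by the chunk size. A hypothetical sketch (chunk size and list contents are arbitrary):

```python
from time import time

def drop_in_chunks(lst, chunk=100000):
    """Release a big list a slice at a time, bounding each pause to
    roughly `chunk` deallocations instead of one len(lst)-sized one.
    Returns the time taken by each chunk's release."""
    pauses = []
    while lst:
        t0 = time()
        del lst[-chunk:]        # frees at most `chunk` references
        pauses.append(time() - t0)
    return pauses
```

Each entry in the returned list is one small pause; the single 0.36 s pause from the quoted benchmark becomes many short ones, spread wherever the programmer chooses to put them.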
Re: Queue cleanup
In message <7xvd6sv0n4@ruckus.brouhaha.com>, Paul Rubin wrote: > Lawrence D'Oliveiro writes: >>> AddrObj = PyTuple_GetItem(TheBufferInfo, 0); >>> LenObj = PyTuple_GetItem(TheBufferInfo, 1); >>> >>> the first PyTuple_GetItem succeeds and the second one fails. >> >> Admittedly, I did take a shortcut here: array.buffer_info returns a tuple >> of two items, so I’m not expecting one GetItem to succeed and the other >> to fail. > > FromArray is a parameter to the function, with no type check to make > sure it's really an array. In fact your code allows for the possibility > that it doesn't support the buffer_info operation (if I understand the > purpose of the null return check after the PyObject_CallMethod) which > means it's prepared for the argument to -not- be an array. That reinforces my point, about how easy it was to check the correctness of the code. In this case one simple fix, like this diff --git a/spuhelper.c b/spuhelper.c index 83fd4eb..2ba8197 100644 --- a/spuhelper.c +++ b/spuhelper.c @@ -151,10 +151,12 @@ static void GetBufferInfo if (TheBufferInfo == 0) break; AddrObj = PyTuple_GetItem(TheBufferInfo, 0); - LenObj = PyTuple_GetItem(TheBufferInfo, 1); if (PyErr_Occurred()) break; Py_INCREF(AddrObj); + LenObj = PyTuple_GetItem(TheBufferInfo, 1); + if (PyErr_Occurred()) + break; Py_INCREF(LenObj); *addr = PyInt_AsUnsignedLongMask(AddrObj); *len = PyInt_AsUnsignedLongMask(LenObj); would render the code watertight. See how easy it is? -- http://mail.python.org/mailman/listinfo/python-list
Re: New to python - parse data into Google Charts
On Sep 3, 12:51 pm, Mike Kent wrote: > On Sep 3, 1:52 pm, alistair wrote: > > > I'm new to python and my programming years are a ways behind me, so I > > was looking for some help in parsing a file into a chart using the > > Google Charts API. > > Try this:http://pygooglechart.slowchop.com/ Thanks for that. I guess I should have noted that I am new enough that I'm not even really sure how I would go about using this. -- http://mail.python.org/mailman/listinfo/python-list
Re: Date Parsing Question
Gavin writes:
> python-dateutil seems to work very well if everything is in English,
> however, it does not seem to work for other languages and the
> documentation does not seem to have any information about locale
> support.

Probably because I don't think there is much built in. You'll want to supply your own parserinfo to the parse function. I haven't had to parse non-English localized date strings myself yet, but yes, the default parserinfo used by the module is in English. Looks like the docs don't get into it too much, but if you review the parser.py source in the package you can see the default parserinfo definition. I would think defining your own (or a subclass of the default) and replacing the WEEKDAYS and MONTHS values would work (you can get localized lists directly from the calendar module), and maybe adding to the jump table if you want to parse longer phrases. At first glance, the lexer within the module seems like it may have some possible issues with more esoteric encodings or unicode, but that's just something to stay aware of. If you already have an i18n/l10n setup in your application (or need to have finer-grained control than a global locale setting), you could instead override the lookup methods, though there's a bit more work to do since the initial lookup tables will probably need to be created in each of the locales you may wish to switch between. -- David -- http://mail.python.org/mailman/listinfo/python-list
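[Editor's note] The suggestion above — swap in localized month tables — can be sketched even without dateutil installed. The table below is the kind of MONTHS mapping you would hand a custom `parserinfo` subclass; `parse_german` is a hypothetical stand-in for dateutil's parser that handles only the one fixed layout from the original question.

```python
from datetime import datetime

# German month names -- the same data you would feed a parserinfo
# subclass's MONTHS attribute (here mapped straight to month numbers).
GERMAN_MONTHS = {name: i + 1 for i, name in enumerate(
    ["Januar", "Februar", "März", "April", "Mai", "Juni",
     "Juli", "August", "September", "Oktober", "November", "Dezember"])}

def parse_german(s):
    """Parse 'YYYY-Monat-DD HH:MM:SS' with German month names.

    A toy stand-in for dateutil's parse(); it only understands this
    one layout, which matches the failing example in the thread.
    """
    date_part, time_part = s.split()
    year, month_name, day = date_part.split("-")
    hour, minute, sec = time_part.split(":")
    return datetime(int(year), GERMAN_MONTHS[month_name], int(day),
                    int(hour), int(minute), int(sec))
```

With real dateutil, the analogous move is a `parserinfo` subclass whose MONTHS list carries these names, passed to `parse()`.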
Re: IDLE / Black frame on Mac
*Forgot to tell that I am using a Mac with Snow Leopard. -- http://mail.python.org/mailman/listinfo/python-list
Re: Does MySQLdb rollback on control-C? Maybe not.
On 9/3/2010 12:41 PM, Mike Kent wrote: On Sep 3, 12:22 am, John Nagle wrote: I would expect MySQLdb to rollback on a control-C, but it doesn't seem to have done so. Something is broken. I wouldn't expect it to, I'd expect to roll back on an exception, or commit if not. MySQL does rollback properly on connection failure or program exit. It turns out I had a commit on that database handle in logging code. John Nagle -- http://mail.python.org/mailman/listinfo/python-list
Re: Help needed with Windows Service in Python
On 02/09/2010 20:55, Edward Kozlowski wrote: On Sep 2, 2:38 pm, Ian wrote: On 02/09/2010 20:06, Edward Kozlowski wrote: On Sep 2, 10:22 am, Ian Hobson wrote: Hi All, I am attempting to create a Windows Service in Python. I have the framework (from Mark Hammond and Andy Robinson's book) running - see below. It starts fine - but it will not stop. :( net stop "Python Service" and using the services GUI both leave the service showing as "stopping". I guess this means SvcStop is called but it is not enough to get it out of the machine. Does anyone know why not? Python 2.7 with win32 extensions, running on Windows 7. Many thanks Ian The (complete) source code is:

#!/usr/bin/env python
# coding=utf8
# service.py = testing services and Named pipes
#
import win32serviceutil
import win32service
import win32event

class PythonService(win32serviceutil.ServiceFramework):
    _svc_name_ = "Python Service"
    _svc_display_name_ = "Test Service in Python"

    def __init__(self, args):
        win32serviceutil.ServiceFramework.__init__(self, args)
        self.hWaitStop = win32event.CreateEvent(None, 0, 0, None)

    def SvcStop(self):
        self.ReportServiceStatus(win32service.SERVICE_STOP_PENDING)
        wind32event.SetEvent(self.hWaitStop)

    def SvcDoRun(self):
        win32event.WaitForSingleObject(self.hWaitStop, win32event.INFINITE)

if __name__ == '__main__':
    win32serviceutil.HandleCommandLine(PythonService)

Looks to me like there may be a typo in your code. You probably meant win32event.SetEvent(self.hWaitStop), not wind32event. Regards, -Edward Kozlowski

A huge big thank you Edward. That was the problem. Regards Ian

You're most welcome. If you're looking at running services in Windows using Python, one other hangup I ran into was that my services would freeze for no reason. At Pycon '09, I learned that there were buffers for stdout and stderr that were filling. I wish I could remember who gave the talk that included that jewel of knowledge, because I'd love to give credit where it's due...
After I redirected stdout and stderr to files, my problems with the services freezing went away. Regards, -Edward Kozlowski Hi Edward, Thanks for the heads up. That is really worth knowing. Ian -- http://mail.python.org/mailman/listinfo/python-list
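[Editor's note] A minimal sketch of the stdout/stderr redirection described above — pointing both streams at real files so a console-less service can't stall on a full buffer. The log paths are placeholders.

```python
import sys

def redirect_std_streams(out_path, err_path):
    """Send stdout/stderr to log files.

    In a service (or pythonw) process the console streams may be
    absent or have bounded buffers that fill up and block; real files
    avoid that.  Line buffering keeps the logs current.
    """
    sys.stdout = open(out_path, "a", buffering=1)   # line-buffered text
    sys.stderr = open(err_path, "a", buffering=1)
    return sys.stdout, sys.stderr
```

In the service above, calling this at the top of SvcDoRun (with paths the service account can write to) would be the natural place for it.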
Date Parsing Question
Hi, I'm using the python-dateutil package: http://labix.org/python-dateutil to parse a set of randomly formatted strings into dates. Because the formats are varied, I can't use time.strptime() because I don't know what the format is upfront. python-dateutil seems to work very well if everything is in English; however, it does not seem to work for other languages and the documentation does not seem to have any information about locale support. Here's an example showing my problem:

Python 2.6.5 (r265:79096, Mar 19 2010, 21:48:26) [MSC v.1500 32 bit (Intel)] on win32
Type "help", "copyright", "credits" or "license" for more information.
>>> from dateutil.parser import *
>>> import datetime
>>> import time
>>> date_string1 = time.strftime("%Y-%B-%d %H:%M:%S",(2010,10,3,1,1,1,1,1,1))
>>> print date_string1
2010-October-03 01:01:01
>>> parse(date_string1)
datetime.datetime(2010, 10, 3, 1, 1, 1)

Everything is ok so far; now retry with a date in German:

>>> import locale
>>> locale.setlocale(locale.LC_ALL, "german")
'German_Germany.1252'
>>> locale.getlocale()
('de_DE', 'cp1252')
>>> date_string1 = time.strftime("%Y-%B-%d %H:%M:%S",(2010,10,3,1,1,1,1,1,1))
>>> print date_string1
2010-Oktober-03 01:01:01
>>> parse(date_string1)
Traceback (most recent call last):
  File "", line 1, in
NameError: name 'date_string' is not defined
>>> parse(date_string1)
Traceback (most recent call last):
  File "", line 1, in
  File "c:\python26\lib\site-packages\python_dateutil-1.5-py2.6.egg\dateutil\parser.py", line 697, in parse
    return DEFAULTPARSER.parse(timestr, **kwargs)
  File "c:\python26\lib\site-packages\python_dateutil-1.5-py2.6.egg\dateutil\parser.py", line 303, in parse
    raise ValueError, "unknown string format"
ValueError: unknown string format

Am I out of luck with this package? Just wondering if anyone has used this to work with non-English dates. I'm also open to other ideas to handle this. Appreciate the assistance, Gavin -- http://mail.python.org/mailman/listinfo/python-list
Re: Help needed with Windows Service in Python
On 03/09/2010 01:38, Mark Hammond wrote: I expect that the Windows Event Log might have some clues, as would attempting to use it in "debug" mode. Thanks Mark. The error log holds the trackback - it identified the line with the typo. Now the typo is fixed, the service starts and stops properly. Regards Ian -- http://mail.python.org/mailman/listinfo/python-list
Re: Speed-up for loops
BartC, 03.09.2010 22:17:

> for i in range(N):
>
> which (if I understood correctly) actually created a list of N objects, populated it with the values 0, 1, 2...N-1 (presumably using a more sensible loop), then iterated between the values of the list!

I guess what applies here is "special cases aren't special enough to break the rules". The performance is good enough in most cases; it only hurts when the range is large and the loop body is small in comparison, such as in the most obvious stupid benchmarks. Also, xrange() is a pretty old addition to the language and now replaces range() in Python 3. Stefan -- http://mail.python.org/mailman/listinfo/python-list
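[Editor's note] The difference described above is easy to see from object sizes: a lazy range (Python 3's `range`, Python 2's `xrange`) stays constant-size while the materialized list grows with n. A small sketch:

```python
import sys

def footprint(n):
    """Compare the size of a lazy range object with the list it
    expands to.  Python 3 semantics; in Python 2 the same contrast
    was xrange (lazy) versus range (a real list)."""
    lazy = range(n)          # constant-size object, however big n is
    eager = list(lazy)       # n separate element slots
    return sys.getsizeof(lazy), sys.getsizeof(eager)
```

Iterating the lazy object still yields all n values, so the loop itself behaves identically in both cases.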
Re: Speed-up for loops
"Michael Kreim" wrote in message news:mailman.362.1283422325.29448.python-l...@python.org... I was comparing the speed of a simple loop program between Matlab and Python. My Codes: $ cat addition.py imax = 10 a = 0 for i in xrange(imax): a = a + 10 print a Unfortunately my Python Code was much slower and I do not understand why. Are there any ways to speed up the for/xrange loop? Or do I have to live with the fact that Matlab beats Python in this example? I'm not sure the Python developers were interested in getting fast loops. For-loops which iterate between two numbers are amongst the easiest things to make fast in a language. Yet originally you had to use: for i in range(N): which (if I understood correctly) actually created a list of N objects, populated it with the values 0, 1, 2...N-1 (presumably using a more sensible loop), then iterated between the values of the list! So Python had the distinction of being one of the slowest languages in which to do nothing (ie. running an empty loop). -- Bartc -- http://mail.python.org/mailman/listinfo/python-list
Re: what should __iter__ return?
On 03/09/2010 20:35, ernest wrote: Hi, What is better:

def __iter__(self):
    for i in len(self):
        yield self[i]

or

def __iter__(self):
    return iter([self[i] for i in range(len(self))])

The first one, I would say is more correct, however what if in a middle of an iteration the object changes in length? Then, the iterator will fail with IndexError (if items have been removed), or it will fail to iterate over the whole sequence (if items have been added). What do you think?

I'd say the first one is less correct because you can't iterate over an int. :-) -- http://mail.python.org/mailman/listinfo/python-list
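[Editor's note] For completeness, the corrected form of the first version — iterating over `range(len(self))` rather than the bare `len(self)` — wrapped in a minimal hypothetical sequence class:

```python
class Seq:
    """Minimal sequence showing the fixed __iter__ from the thread."""
    def __init__(self, items):
        self._items = list(items)

    def __len__(self):
        return len(self._items)

    def __getitem__(self, i):
        return self._items[i]

    def __iter__(self):
        for i in range(len(self)):   # range(...), not the bare int
            yield self[i]
```

This keeps the lazy, copy-free behaviour of the first version while actually being iterable.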
Re: what should __iter__ return?
On Friday 03 September 2010, it occurred to ernest to exclaim:
> Hi,
>
> What is better:
>
> def __iter__(self):
>     for i in len(self):
>         yield self[i]
>
> or
>
> def __iter__(self):
>     return iter([self[i] for i in range(len(self))])
>
> The first one, I would say is more correct,
> however what if in a middle of an iteration
> the object changes in length? Then, the
> iterator will fail with IndexError (if items
> have been removed), or it will fail to iterate
> over the whole sequence (if items have
> been added).
>
> What do you think?

Hmm. Modifying an object while iterating over it isn't a great idea, ever:

>>> L = [1,2,3,4]
>>> i = iter(L)
>>> next(i)
1
>>> next(i)
2
>>> del L[0]
>>> next(i)
4

Your second version is wasteful. It creates a copy of the object just for iteration. I don't think that's something you should be doing. If you want "correct" behaviour as with lists, you might want something like this:

def __iter__(self):
    class _Iter:
        def __init__(it):
            it.i = -1
        def __next__(it):
            it.i += 1
            try:
                return self[it.i]
            except IndexError:
                raise StopIteration
    return _Iter()

> Cheers.
> Ernest
-- http://mail.python.org/mailman/listinfo/python-list
Re: Speed-up for loops
On Fri, 03 Sep 2010 11:21:36 +0200, Michael Kreim wrote:

> An anonymous Nobody suggested to use Numpy. I did not do this, because I
> am very very new to Numpy and I did not figure out a Numpy specific way
> to do this. Maybe a Numpy expert has something for me?

The problem with giving examples is that your original example is too contrived. Taken literally, it can be optimised to

print imax * 10

A less contrived example would actually do something within the loop, in order to justify the existence of the loop. NumPy provides predefined loops which correspond to map, reduce, accumulate and zip, for all of the standard arithmetic operators and common mathematical functions. E.g. if your loop was:

a = 0
for i in xrange(imax):
    a += i**2
print a

the NumPy version would be:

print numpy.sum(numpy.arange(imax)**2)

The arange() function is similar to range() but generates an array. The ** operator is implemented for arrays; sum() sums the elements of the array. The main downside is that the intermediate array(s) must be constructed in memory, which rules out its use for very long sequences. -- http://mail.python.org/mailman/listinfo/python-list
Re: New to python - parse data into Google Charts
On Sep 3, 1:52 pm, alistair wrote: > I'm new to python and my programming years are a ways behind me, so I > was looking for some help in parsing a file into a chart using the > Google Charts API. > Try this: http://pygooglechart.slowchop.com/ -- http://mail.python.org/mailman/listinfo/python-list
Re: MySQL Problem
On 03/09/2010 14:29, Victor Subervi wrote: This is an addendum to my last post. Please observe the following:

mysql> select * from spreadsheets where Temp=1;
+-----+--------------------+-------+---------+--------+------+
| ID  | Client             | Multi | Item    | Markup | Temp |
+-----+--------------------+-------+---------+--------+------+
| 611 | Lincoln_Properties |     0 | 2030572 |   0.00 |    1 |
| 621 | Lincoln_Properties |     0 | 2030572 |   0.00 |    1 |
+-----+--------------------+-------+---------+--------+------+
2 rows in set (0.00 sec)

mysql> describe spreadsheets;
+--------+------------------+------+-----+---------+----------------+
| Field  | Type             | Null | Key | Default | Extra          |
+--------+------------------+------+-----+---------+----------------+
| ID     | int(11) unsigned | NO   | PRI | NULL    | auto_increment |
| Client | varchar(40)      | YES  |     | NULL    |                |
| Multi  | tinyint(1)       | YES  |     | NULL    |                |
| Item   | varchar(40)      | YES  |     | NULL    |                |
| Markup | float(6,2)       | YES  |     | NULL    |                |
| Temp   | tinyint(1)       | YES  |     | NULL    |                |
+--------+------------------+------+-----+---------+----------------+
6 rows in set (0.00 sec)

Yet from my script:

cursor.execute('select * from spreadsheets where Temp=1')
print cursor.fetchall()

prints nothing but an empty set: () Why?? TIA, beno

Hi Victor, Find out exactly what character encoding you are using in each of the following places when using the MySQL client:

The MySQL installation
The database definition
The table definition
The field definition
The link between Python and MySQL
The Python source / the MySQL client

And then find out what encoding is being forced/used by the code you have written in Python in each of the above situations. You may have to go to the source of the library routine to find this out. What I suspect may be happening is this. Say you have a field containing a character/code point that is in UTF-8 but not in the ISO-8859-1 set. If such a field was written using UTF-8 throughout, but then read using ISO-8859-1 or similar, then the read will generate an error. That error may be being ignored or suppressed, causing the code to drop your data rows. IIRC, MySQL calls UTF-8 by the (incorrect) name of utf8. My recommendation is for you to use UTF-8 for everything.
UTF-8 can store any character in any language(1), is really efficient for English text, and acceptable for other languages. Performance is excellent, because it involves no encoding/decoding as the data moves between disk, the MySQL link, or Python. (1) There are some minor human languages that cannot be encoded - usually because no written form has yet been devised or the code points have not been agreed. These languages will not be met in practice. Regards Ian -- http://mail.python.org/mailman/listinfo/python-list
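[Editor's note] The mixed-encoding failure mode described above can be reproduced in plain Python: bytes stored as UTF-8 but decoded as ISO-8859-1 don't necessarily raise — they silently become mojibake, which is just as damaging as an error. (With MySQLdb the usual way to keep everything consistent is, as far as I recall, passing `charset='utf8'` and `use_unicode=True` to `connect()`; check the driver's docs rather than taking that on faith.) A sketch:

```python
# Bytes written as UTF-8 but read back under the wrong codec.
text = "Übergrösse"                    # code points beyond ASCII
stored = text.encode("utf-8")          # what a UTF-8 column holds

mangled = stored.decode("iso-8859-1")  # wrong codec: no error, garbage
roundtrip = stored.decode("utf-8")     # consistent codec: intact text
```

Latin-1 maps every byte to some character, so the wrong decode "works" and the corruption only shows up downstream — which is why a consistent UTF-8 pipeline end to end is the safe choice.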
Re: Does MySQLdb rollback on control-C? Maybe not.
On Sep 3, 12:22 am, John Nagle wrote:
> I would expect MySQLdb to rollback on a control-C, but it doesn't
> seem to have done so.
> Something is broken.

I wouldn't expect it to, I'd expect to roll back on an exception, or commit if not. Perhaps this will help you. I use it in production code.

##
# This is a transaction context manager. It will ensure that the code in
# the context block will be executed inside a transaction. If any exception
# occurs, the transaction will be rolled back, and the exception reraised.
# If no exception occurs, the transaction will be committed.
# db is a database connection object.

from contextlib import contextmanager

@contextmanager
def transaction(db):
    db.begin()
    try:
        yield None
    except:
        db.rollback()
        raise
    else:
        db.commit()

-- http://mail.python.org/mailman/listinfo/python-list
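[Editor's note] The manager's commit/rollback behaviour can be seen without a real database by exercising it against a stub connection. The manager is restated here so the sketch stands alone, and `FakeDB` is an invented stand-in, not a real driver.

```python
from contextlib import contextmanager

@contextmanager
def transaction(db):          # same shape as the manager in the post
    db.begin()
    try:
        yield None
    except:
        db.rollback()
        raise
    else:
        db.commit()

class FakeDB:
    """Stub connection that just records which methods were called."""
    def __init__(self):
        self.calls = []
    def begin(self):    self.calls.append("begin")
    def commit(self):   self.calls.append("commit")
    def rollback(self): self.calls.append("rollback")

ok = FakeDB()
with transaction(ok):
    pass                       # a clean block commits

failing = FakeDB()
try:
    with transaction(failing):
        raise ValueError("boom")   # a failing block rolls back, re-raises
except ValueError:
    pass
```

The bare `except:` mirrors the original; it deliberately catches everything (including KeyboardInterrupt) so a control-C inside the block also rolls back, which is exactly the case John hit.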
what should __iter__ return?
Hi, What is better:

def __iter__(self):
    for i in len(self):
        yield self[i]

or

def __iter__(self):
    return iter([self[i] for i in range(len(self))])

The first one, I would say, is more correct; however, what if in the middle of an iteration the object changes in length? Then the iterator will fail with IndexError (if items have been removed), or it will fail to iterate over the whole sequence (if items have been added). What do you think? Cheers. Ernest -- http://mail.python.org/mailman/listinfo/python-list
Re: bisection method: Simulating a retirement fund
On Sep 2, 1:37 pm, Baba wrote:
> level: beginner

In this economy, simulating the value of retirement funds with bisection is easy. Look:

def retirement_fund_value(n_years, initial_value):
    value = initial_value
    for i in xrange(n_years):
        value /= 2    # <- bisect value of fund
    return value

Carl Banks -- http://mail.python.org/mailman/listinfo/python-list
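[Editor's note] Joking aside, the homework-style use of bisection here is searching for the deposit that reaches a target fund value. A hypothetical sketch — the growth model, the 5% rate, and the bracketing interval are assumptions, not from the thread:

```python
def fund_value(annual_saving, years, rate=0.05):
    """Fund value after depositing annual_saving each year, with the
    whole balance growing at a fixed rate (toy model)."""
    value = 0.0
    for _ in range(years):
        value = (value + annual_saving) * (1 + rate)
    return value

def required_saving(target, years, lo=0.0, hi=1e6, tol=0.01):
    """Bisect on the annual saving until the final value hits target.

    fund_value is increasing in annual_saving, so halving the
    [lo, hi] bracket converges in ~log2((hi-lo)/tol) steps.
    """
    while hi - lo > tol:
        mid = (lo + hi) / 2
        if fund_value(mid, years) < target:
            lo = mid
        else:
            hi = mid
    return (lo + hi) / 2
```

Because the final value is monotone in the saving, bisection is guaranteed to converge here; that monotonicity is what the method needs, not anything finance-specific.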
Re: Performance: sets vs dicts.
On 9/3/10 12:08 PM, John Bokma wrote: Terry Reedy writes: On 9/1/2010 8:11 PM, John Bokma wrote: [...] Right. And if 'small values of n' include all possible values, then rejecting a particular O(log n) algorithm as 'unacceptable' relative to all O(1) algorithms is pretty absurd. I have little time, but want to reply to this one: anyone using big-Oh and claiming that an O(log n) algorithm is 'unacceptable' relative to all O(1) algorithms has no clue what he/she is talking about. big-Oh says something about the order of /growth/ of the running time of an algorithm. And since 1 is a constant O(1) means that the order of /growth/ of the running time is constant (independend of the input size. That's an ambiguous wording that is likely to confuse people. It seems like you are saying that the O() behavior is the order of the first derivative of the running time as a function of some interesting parameter of the problem, which it is not. O() notation *describes* the growth, but it *is not* the order of the growth itself. -- Robert Kern "I have come to believe that the whole world is an enigma, a harmless enigma that is made terrible by our own mad attempt to interpret it as though it had an underlying truth." -- Umberto Eco -- http://mail.python.org/mailman/listinfo/python-list
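[Editor's note] The practical upshot of the growth-rate argument is easy to measure: membership tests against a list scale with n, while against a set (hash lookup) they effectively don't. A small timing sketch (absolute numbers are machine-dependent; only the comparison matters):

```python
from timeit import timeit

def membership_time(container, probe, number=200):
    """Average seconds per `probe in container` test."""
    return timeit(lambda: probe in container, number=number) / number
```

Probing for an absent element is the worst case for the list (a full O(n) scan) and a typical case for the set, so the gap between the two measurements grows with n — which is the growth-of-running-time point being argued above.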
New to python - parse data into Google Charts
I'm new to python and my programming years are a ways behind me, so I was looking for some help in parsing a file into a chart using the Google Charts API. The file is simply a text file containing: Date, Total Accesses, Unique Accesses. I'd like date across the bottom, access numbers on the vertical axis and lines for the amounts. I'm not even really sure how to get started if someone wants to give me some pointers or throw together a short example, I would greatly appreciate it. Here is a sample of the data: 2010-08-01, 2324, 1800 2010-08-02, 3832, 2857 2010-08-03, 7916, 4875 2010-08-04, 7004, 4247 2010-08-05, 6392, 4026 2010-08-06, 5396, 3513 2010-08-07, 3238, 2285 2010-08-08, 3579, 2588 2010-08-09, 7710, 4867 2010-08-10, 6662, 4123 2010-08-11, 6524, 4045 2010-08-12, 6438, 3965 2010-08-13, 5472, 3543 2010-08-14, 3059, 2193 2010-08-15, 3255, 2379 2010-08-16, 7149, 4482 2010-08-17, 6727, 4247 2010-08-18, 6989, 4328 2010-08-19, 6738, 4192 2010-08-20, 5929, 3816 2010-08-21, 3245, 2302 2010-08-22, 4091, 2900 2010-08-23, 8237, 4857 2010-08-24, 7895, 4575 2010-08-25, 7788, 4564 2010-08-26, 7616, 4527 2010-08-27, 6671, 4159 2010-08-28, 3595, 2484 2010-08-29, 4377, 2991 2010-08-30, 9238, 5427 2010-08-31, 9274, 5406 -- http://mail.python.org/mailman/listinfo/python-list
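One way to get started, sketched below under stated assumptions: parse the three-column file and build a URL for the classic Google Image Charts service (the parameter tags cht, chs, chd, chds, chxt, chxl, chxr and chdl are the documented ones for that API; the helper names parse_stats and chart_url are invented here). Requesting the resulting URL returns a PNG line chart.

```python
def parse_stats(lines):
    """Parse lines of the form 'YYYY-MM-DD, total, unique'."""
    rows = []
    for line in lines:
        if not line.strip():
            continue
        date, total, uniq = [field.strip() for field in line.split(',')]
        rows.append((date, int(total), int(uniq)))
    return rows

def chart_url(rows, size='600x300'):
    """Build a Google Image Charts line-chart URL with two series."""
    dates = [r[0] for r in rows]
    totals = [str(r[1]) for r in rows]
    uniques = [str(r[2]) for r in rows]
    top = max(r[1] for r in rows)
    params = '&'.join([
        'cht=lc',                                 # line chart
        'chs=' + size,                            # width x height
        'chds=0,%d' % top,                        # data scaling range
        'chd=t:%s|%s' % (','.join(totals),        # series 1: totals
                         ','.join(uniques)),      # series 2: uniques
        'chxt=x,y',                               # label both axes
        'chxl=0:|%s|%s' % (dates[0], dates[-1]),  # first/last date on x
        'chxr=1,0,%d' % top,                      # y-axis range
        'chdl=Total|Unique',                      # legend
    ])
    return 'http://chart.apis.google.com/chart?' + params

sample = ['2010-08-01, 2324, 1800', '2010-08-02, 3832, 2857']
url = chart_url(parse_stats(sample))
```

In a real script you would read the lines with open('stats.txt') and fetch or embed the URL; the sketch stops at URL construction so it works offline.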
Re: Financial time series data
On Fri, 2010-09-03 at 16:48 +0200, Virgil Stokes wrote: > On 03-Sep-2010 15:45, Frederic Rentsch wrote: > > On Fri, 2010-09-03 at 13:29 +0200, Virgil Stokes wrote: > >> A more direct question on accessing stock information from Yahoo. > >> > >> First, use your browser to go to: http://finance.yahoo.com/q/cp?s=% > >> 5EGSPC+Components > >> > >> Now, you see the first 50 rows of a 500 row table of information on > >> S&P 500 index. You can LM click on > >> > >>1 -50 of 500 |First|Previous|Next|Last > >> > >> below the table to position to any of the 10 pages. > >> > >> I would like to use Python to do the following. > >> > >> Loop on each of the 10 pages and for each page extract information for > >> each row --- How can this be accomplished automatically in Python? > >> > >> Let's take the first page (as shown by default). It is easy to see the > >> link to the data for "A" is http://finance.yahoo.com/q?s=A. That is, I > >> can just move > >> my cursor over the "A" and I see this URL in the message at the bottom > >> of my browser (Explorer 8). If I LM click on "A" then I will go to > >> this > >> link --- Do this! > >> > >> You should now see a table which shows information on this stock and > >> this is the information that I would like to extract. I would like to > >> do this for all 500 stocks without the need to enter the symbols for > >> them (e.g. "A", "AA", etc.). It seems clear that this should be > >> possible since all the symbols are in the first column of each of the > >> 50 tables --- but it is not at all clear how to extract these > >> automatically in Python. > >> > >> Hopefully, you understand my problem. Again, I would like Python to > >> cycle through these 10 pages and extract this information for each > >> symbol in this table. > >> > >> --V > >> > >> > >> > > Here's a quick hack to get the SP500 symbols from the visual page with > > the index letters. From this collection you can then order fifty at a > > time from the download facility. 
> > (If you get a better idea from Yahoo, you'll post it of course.)
> >
> > def get_SP500_symbols ():
> >     import urllib
> >     symbols = []
> >     url = 'http://finance.yahoo.com/q/cp?s=^GSPC&alpha=%c'
> >     for c in [chr(n) for n in range(ord('A'), ord('Z') + 1)]:
> >         print url % c
> >         f = urllib.urlopen(url % c)
> >         html = f.readlines()
> >         f.close()
> >         for line in html:
> >             if line.lstrip().startswith('<span id="yfs_params_vcr"'):
> >                 line_split = line.split(':')
> >                 s = [item.strip().upper() for item in
> >                      line_split[5].replace('"', '').split(',')]
> >                 symbols.extend(s[:-3])
> >     return symbols
> >     # Not quite 500 (!?)
> >
> > Frederic
>
> I made a few modifications --- very minor. But, I believe that it is a
> little faster.
>
> import urllib2
>
> def get_SP500_symbolsX ():
>     symbols = []
>     for page in range(0, 9):
>         url = 'http://finance.yahoo.com/q/cp?s=%5EGSPC&c=' + str(page)
>         print url
>         f = urllib2.urlopen(url)
>         html = f.readlines()
>         f.close()
>         for line in html:
>             if line.lstrip().startswith('<span id="yfs_params_vcr"'):
>                 line_split = line.split(':')
>                 s = [item.strip().upper() for item in
>                      line_split[5].replace('"', '').split(',')]
>                 symbols.extend(s[:-3])
>     return symbols
>     # Not quite 500 -- which is correct (for example p. 2 has only 49
>     # symbols!)
>     # Actually the S&P 500 as shown does not contain 500 stocks (symbols)
>
> symbols = get_SP500_symbolsX()
> pass

Oh, yes, and there's no use reading lines to the end once the symbols are in the bag. The symbol-line-finder conditional section should end with "break". And do let us know if you get an answer from Yahoo. Hacks like this are unreliable. They will almost certainly fail the next time the page gets redesigned, which can be any time.

Frederic
--
http://mail.python.org/mailman/listinfo/python-list
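Rather than scraping the HTML, the same data was available through Yahoo's CSV quote interface mentioned elsewhere in the thread. A sketch (the field tags s, l1, d1, t1 are from the classic quotes.csv API; that service has since been retired, so treat this as historical, and the helper name is invented):

```python
try:
    from urllib import urlopen          # Python 2
except ImportError:
    from urllib.request import urlopen  # Python 3

def quotes_csv_url(symbols, fields='sl1d1t1'):
    """URL for the classic Yahoo CSV quote download.

    Tags: s = symbol, l1 = last trade, d1 = date, t1 = time.
    Many symbols could be requested at once, '+'-separated."""
    return ('http://download.finance.yahoo.com/d/quotes.csv?s=%s&f=%s'
            % ('+'.join(symbols), fields))

url = quotes_csv_url(['AAPL', 'GOOG', 'MSFT'])
# Fetching and parsing would have looked like (network call, so commented):
# rows = [line.split(',') for line in urlopen(url).read().splitlines()]
```

Because the CSV endpoint was a documented-by-convention interface rather than a rendered page, it was far less fragile than parsing the HTML for a <span> id.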
Re: bisection method: Simulating a retirement fund
On 03/09/2010 09:06, Baba wrote:
> On Sep 2, 11:10 pm, MRAB wrote:
>> Why are you saving 'fund' in SavingsRecord if you're returning just
>> the last and discarding others? Basically you're returning the final
>> value of fund.
>
> Hi MRAB
>
> ok i agree that this is not ideal. I should shorten this to ONLY
> return SavingsRecord[-1]
>
>> When performing this type of 'search' make sure that the interval
>> (high - low) reduces at every step. (integer division) and if the
>> 'if' condition happens to be false then the value of 'low' won't
>> change for the next iteration, leading to an infinite loop.
>
> If you look at the output you will see that the interval DOES seem to
> reduce at each step as expenses and fundsize reduce gradually. The
> computation does not lead to an infinite loop.

It doesn't in that particular case, but it might in some other cases.
--
http://mail.python.org/mailman/listinfo/python-list
Re: Performance: sets vs dicts.
Terry Reedy writes:
> On 9/1/2010 8:11 PM, John Bokma wrote: [...]
> Right. And if 'small values of n' include all possible values, then
> rejecting a particular O(log n) algorithm as 'unacceptable' relative
> to all O(1) algorithms is pretty absurd.

I have little time, but want to reply to this one: anyone using big-Oh and claiming that an O(log n) algorithm is 'unacceptable' relative to all O(1) algorithms has no clue what he/she is talking about. big-Oh says something about the order of /growth/ of the running time of an algorithm. And since 1 is a constant, O(1) means that the order of /growth/ of the running time is constant (independent of the input size). Since "the growth of the running time is constant" is quite a mouthful, it's often shortened to 'constant time', since from the context it's clear what's meant. But this doesn't mean that if the algorithm gets an input size of 10 versus 1 that it takes the same number of seconds for the latter to process.

--
John Bokma
j3b

Blog: http://johnbokma.com/
Facebook: http://www.facebook.com/j.j.j.bokma
Freelance Perl & Python Development: http://castleamber.com/
--
http://mail.python.org/mailman/listinfo/python-list
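One way to see "order of growth" rather than wall-clock speed is to count the comparisons a binary search actually performs as n grows. A small sketch (the CountingKey wrapper is invented here for illustration; bisect_left on a sorted list is the standard O(log n) lookup being discussed):

```python
import bisect

class CountingKey(object):
    """Wraps an int and counts how often it is compared with '<'."""
    comparisons = [0]

    def __init__(self, v):
        self.v = v

    def __lt__(self, other):
        CountingKey.comparisons[0] += 1
        return self.v < other.v

def count_comparisons(n):
    """Comparisons bisect_left makes searching a sorted list of n keys."""
    CountingKey.comparisons[0] = 0
    keys = [CountingKey(i) for i in range(n)]
    bisect.bisect_left(keys, CountingKey(n - 1))
    return CountingKey.comparisons[0]

small = count_comparisons(1024)         # about log2(1024)  = 10
large = count_comparisons(1024 * 1024)  # about log2(2**20) = 20
```

Squaring n only roughly doubles the comparison count, which is exactly the "order of growth" claim; it says nothing about how many microseconds either search takes on a given machine.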
Re: Performance: sets vs dicts.
On 09/02/2010 02:47 PM, Terry Reedy wrote:
> On 9/1/2010 10:57 PM, ru...@yahoo.com wrote:
>> So while you may "think" most people rarely read the docs for basic
>> language features and objects (I presume you don't mean to restrict
>> your statement to only sets), I and most people I know *do* read
>> them. And when we read them I expect them, as any good reference
>> documentation does, to completely and accurately describe the
>> behavior of the item I am reading about. If big-O performance is
>> deemed an intrinsic behavior of an (operation of) an object, it
>> should be described in the documentation for that object.
>
> However, big-O performance is intentionally NOT so deemed.

The discussion, as I understood it, was about whether or not it *should* be so deemed.

> And I have and would continue to argue that it should not be, for
> multiple reasons.

Yes, you have. And others have argued the opposite. Personally, I did not find your arguments very convincing, particularly that it would be misleading or that the limits necessarily imposed by a real implementation somehow invalidate the usefulness of O() documentation. But I acknowledged that there was not universal agreement that O() behavior should be documented in the reference docs by qualifying my statement with the word "if". But mostly my comments were directed towards some of the side comments in Raymond's post that I thought should not pass unchallenged. I think that some of the attitudes expressed (and shared by others) are likely the direct cause of many of the faults I find in the current documentation.
--
http://mail.python.org/mailman/listinfo/python-list
pyqt signals
Hello, I have written this, but I am not able to connect the emit of the class Socket to the Form class. Can you help me?

class Socket(QtNetwork.QTcpSocket):
    def __init__(self, parent=None):
        super(Socket, self).__init__(parent)
        self.connect(self, QtCore.SIGNAL("readyRead()"),
                     self.leggi_richiesta)

    def leggi_richiesta(self):
        messaggio = self.readData(self.bytesAvailable())
        print 'dati_letti = ', messaggio
        self.emit(QtCore.SIGNAL("messaggio"), messaggio)
        print 'segnale emesso'

class Form(QWidget, Ui_Form):
    """
    Class documentation goes here.
    """
    def __init__(self, parent=None):
        """
        Constructor
        """
        connesso_in_arr = False
        self.coda = False
        QWidget.__init__(self, parent)
        self.setupUi(self)
        self.dati = False
        self.lu = TcpServer()
        self.lu
        self.connect(Socket(self), QtCore.SIGNAL('messaggio'),
                     self.leggo_risposta)

    def leggo_risposta(self, messaggio):
        self.plainTextEdit_2.appendPlainText(messaggio)

Thanks
Luca
--
http://mail.python.org/mailman/listinfo/python-list
Re: Where do I report a bug to the pythonware PIL
On 9/3/10 4:35 AM, jc.lopes wrote: Does anyone knows what is the proper way to submit a bug report to pythonware PIL? http://mail.python.org/mailman/listinfo/image-sig -- Robert Kern "I have come to believe that the whole world is an enigma, a harmless enigma that is made terrible by our own mad attempt to interpret it as though it had an underlying truth." -- Umberto Eco -- http://mail.python.org/mailman/listinfo/python-list
Re: Where do I report a bug to the pythonware PIL
On Sep 3, 10:35 am, "jc.lopes" wrote: > Does anyone knows what is the proper way to submit a bug report to > pythonware PIL? > > thanks > JC Lopes The Python Image SIG list http://mail.python.org/mailman/listinfo/image-sig "Free Support: If you don't have a support contract, please send your question to the Python Image SIG mailing list. The same applies for bug reports and patches." -- http://www.pythonware.com/products/pil/ They don't appear to have a dedicated mailing list or public bug tracker. -- http://mail.python.org/mailman/listinfo/python-list
IDLE / Black frame on Mac
Hi,

I am new to Python and have installed Python 3.1.2. I have begun using IDLE and like it very much. But when an IDLE window is active, there is a thick black frame around the white text field. Is there some way I can get rid of this frame? It is very distracting when I write.

Kristoffer
--
http://mail.python.org/mailman/listinfo/python-list
Re: Financial time series data
On Fri, 2010-09-03 at 13:29 +0200, Virgil Stokes wrote: > A more direct question on accessing stock information from Yahoo. > > First, use your browser to go to: http://finance.yahoo.com/q/cp?s=% > 5EGSPC+Components > > Now, you see the first 50 rows of a 500 row table of information on > S&P 500 index. You can LM click on > > 1 -50 of 500 |First|Previous|Next|Last > > below the table to position to any of the 10 pages. > > I would like to use Python to do the following. > > Loop on each of the 10 pages and for each page extract information for > each row --- How can this be accomplished automatically in Python? > > Let's take the first page (as shown by default). It is easy to see the > link to the data for "A" is http://finance.yahoo.com/q?s=A. That is, I > can just move > my cursor over the "A" and I see this URL in the message at the bottom > of my browser (Explorer 8). If I LM click on "A" then I will go to > this > link --- Do this! > > You should now see a table which shows information on this stock and > this is the information that I would like to extract. I would like to > do this for all 500 stocks without the need to enter the symbols for > them (e.g. "A", "AA", etc.). It seems clear that this should be > possible since all the symbols are in the first column of each of the > 50 tables --- but it is not at all clear how to extract these > automatically in Python. > > Hopefully, you understand my problem. Again, I would like Python to > cycle through these 10 pages and extract this information for each > symbol in this table. > > --V > > > Here's a quick hack to get the SP500 symbols from the visual page with the index letters. From this collection you can then order fifty at a time from the download facility. (If you get a better idea from Yahoo, you'll post it of course.) 
def get_SP500_symbols ():
    import urllib
    symbols = []
    url = 'http://finance.yahoo.com/q/cp?s=^GSPC&alpha=%c'
    for c in [chr(n) for n in range(ord('A'), ord('Z') + 1)]:
        print url % c
        f = urllib.urlopen(url % c)
        html = f.readlines()
        f.close()
        for line in html:
            if line.lstrip().startswith('<span id="yfs_params_vcr"'):
                line_split = line.split(':')
                s = [item.strip().upper() for item in
                     line_split[5].replace('"', '').split(',')]
                symbols.extend(s[:-3])
    return symbols
    # Not quite 500 (!?)

Frederic
--
http://mail.python.org/mailman/listinfo/python-list
Re: MySQL Problem
On Fri, Sep 3, 2010 at 9:25 AM, Richard Arts wrote: > These are also mere suggestions. > > The statements you use in your print statement and the one you use to > feed the cursor differ slightly. The latter is missing quotes around > your search criterium. > > Isn't it possible to fetch results row by row and see if the missing > row is in the set? That way you can get a better feeling about the > nature of the error. > Well, I just tried, out of curiosity, inserting 0 instead of 1 and guess what? It shows up. So by simply changing that my problem is solved...but I sure as heck would like to know why! > > > cursor.execute('insert into spreadsheets values (Null, %s, 0, %s, 0, > Null)', (client, prod)) > > Out of curiosity, why would you want to insert null values in id > fields? That's a disaster waiting to happen. > Hardly. This is the standard way of inserting into auto_increment fields. That triggers the auto_increment! That makes it much easier to insert the correct value of the field. beno -- http://mail.python.org/mailman/listinfo/python-list
Re: MySQL Problem
This is an addendum to my last post. Please observe the following:

mysql> select * from spreadsheets where Temp=1;
+-----+--------------------+-------+---------+--------+------+
| ID  | Client             | Multi | Item    | Markup | Temp |
+-----+--------------------+-------+---------+--------+------+
| 611 | Lincoln_Properties |     0 | 2030572 |   0.00 |    1 |
| 621 | Lincoln_Properties |     0 | 2030572 |   0.00 |    1 |
+-----+--------------------+-------+---------+--------+------+
2 rows in set (0.00 sec)

mysql> describe spreadsheets;
+--------+------------------+------+-----+---------+----------------+
| Field  | Type             | Null | Key | Default | Extra          |
+--------+------------------+------+-----+---------+----------------+
| ID     | int(11) unsigned | NO   | PRI | NULL    | auto_increment |
| Client | varchar(40)      | YES  |     | NULL    |                |
| Multi  | tinyint(1)       | YES  |     | NULL    |                |
| Item   | varchar(40)      | YES  |     | NULL    |                |
| Markup | float(6,2)       | YES  |     | NULL    |                |
| Temp   | tinyint(1)       | YES  |     | NULL    |                |
+--------+------------------+------+-----+---------+----------------+
6 rows in set (0.00 sec)

Yet from my script:

cursor.execute('select * from spreadsheets where Temp=1')
print cursor.fetchall()

prints nothing but an empty set: ()

Why??
TIA,
beno
--
http://mail.python.org/mailman/listinfo/python-list
Re: Speed-up for loops
On Fri, 2010-09-03 at 08:52 +0200, Ulrich Eckhardt wrote: > Tim Wintle wrote: > > [..] under the hood, cpython does something like this (in psudo-code) > > > > itterator = xrange(imax) > > while 1: > > next_attribute = itterator.next > > try: > > i = next_attribute() > > except: > > break > > a = a + 10 > > There is one thing that strikes me here: The code claims that each iteration > there is a lookup of the 'next' field in the iterator. I would expect that > this is looked up once before the loop only. > > Can you confirm that or am I misinterpreting your intention here? As Stefan and Hrvoje have posted, there is a lookup - but in 2.4 and above it's straight off the C structure and compiled efficiently. (I've been looking at 2.3's source recently and had forgotten the optimisation) Tim -- http://mail.python.org/mailman/listinfo/python-list
Re: MySQL Problem
These are also mere suggestions.

The statements you use in your print statement and the one you use to feed the cursor differ slightly. The latter is missing quotes around your search criterion.

Isn't it possible to fetch results row by row and see if the missing row is in the set? That way you can get a better feeling about the nature of the error.

> cursor.execute('insert into spreadsheets values (Null, %s, 0, %s, 0, Null)',
>(client, prod))

Out of curiosity, why would you want to insert null values in id fields? That's a disaster waiting to happen.

Regards,
Richard
--
http://mail.python.org/mailman/listinfo/python-list
Re: python database
On Sep 3, 2:36 am, shai garcia wrote: > can you pls help me to make a database program in python? It's better if you do your homework yourself. You learn more that way. Now, if you have a specific question about some detail of your assignment, and can show us that you've really tried to do the work yourself, that's another matter. -- http://mail.python.org/mailman/listinfo/python-list
Re: Financial time series data
On Sep 2, 1:12 pm, Virgil Stokes wrote: > Has anyone written code or worked with Python software for downloading > financial time series data (e.g. from Yahoo financial)? If yes, would you > please contact me. > > --Thanks, > V. Stokes matplotlib has a finance module you can refer to. (matplotlib.finance.fetch_historical_yahoo) see the example: http://matplotlib.sourceforge.net/examples/pylab_examples/finance_work2.html -- http://mail.python.org/mailman/listinfo/python-list
Re: MySQL Problem
On Thu, Sep 2, 2010 at 3:02 PM, Ian wrote:
> On 02/09/2010 19:34, Victor Subervi wrote:
>> for some reason running the command through python *omits* this one
>> data!! The only difference is that a flag in spreadsheets (Temp) is
>> set to 1. Why on earth doesn't it work in python??
>
> Some ideas to follow up. (These are only guesses).
>
> 1) One of the enum type fields contains an invalid value (perhaps a
> value removed from the column definition).

There are no enum type fields.

> 2) The second id field (products.id?) appears to be very large. I
> wonder what would happen if it was larger than the auto-increment
> value?

It's not an ID field. It doesn't auto_increment.

> 3) A field in one of the rows in the missing data contains bytes that
> are invalid in the character encoding you are using in python.

I changed the only bytes I thought might affect it. Furthermore, I successfully added the blasted data to that field so it would show up in the spreadsheet through another form. More on that later.

> 4) The python field type used for some column in the missing row
> contains a value that cannot be held in the python variable assigned.

If that were so, none of the data would show up. Please look at this comparison:

FIELD         BAD DATA                      GOOD DATA
ID            609                           161
Client        Lincoln_Properties            Lincoln_Properties
Multi         0                             0
Item          2030572                       40x48Green
Markup        0.00                          99.32
Temp          1                             Null

ID            343                           37
Item          2030572                       40x48Green
Description   Americo 20" Beige Floor Pad   Green Can Liners
UOM           5/cs                          1000/cs
Cost          15.88                         17.56

ID            335                           37
ProductsID    343                           37
CategoryID    49                            23

ID            49                            23
Category      Mats                          Can Liners
Parent        Restaurant Paper/Pla          Bags

I have changed the value of Temp to Null and Markup to 11.11 to see if that would somehow make a difference. It didn't. Then I used my TTW form for adding data "regularly" to spreadsheets and it worked. The form I'm testing enables the client to add data himself.
The code is the same in both cases:

"regular":
cursor.execute('insert into spreadsheets values (Null, %s, 0, %s, 0, Null)',
               (client, prod))

"special":
cursor.execute('insert into spreadsheets values (Null, %s, 0, %s, 0, Null)',
               (client, product[1]))

I checked permissions and changed ownership to make the two scripts identical. Again, the data gets entered into MySQL correctly... it just doesn't show up with the rest of the data in the TTW form!! Why??

TIA,
beno

> Regards
> Ian
>
> --
> http://mail.python.org/mailman/listinfo/python-list
--
http://mail.python.org/mailman/listinfo/python-list
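A plausible explanation, not confirmed in this thread: MySQLdb disables autocommit by default, so a row INSERTed on one connection stays invisible to other connections (and to REPEATABLE READ snapshots taken earlier) until connection.commit() runs. The sketch below demonstrates the same visibility effect with sqlite3 standing in for MySQL, since the mechanics are identical:

```python
import os
import sqlite3
import tempfile

# A file-backed database that two independent connections can share.
path = os.path.join(tempfile.mkdtemp(), 'demo.db')

writer = sqlite3.connect(path)
writer.execute('create table spreadsheets (id integer primary key,'
               ' temp integer)')
writer.commit()

# INSERT without COMMIT: the row exists only inside writer's transaction.
writer.execute('insert into spreadsheets values (null, 1)')

reader = sqlite3.connect(path)
missing = reader.execute(
    'select * from spreadsheets where temp=1').fetchall()  # sees nothing

writer.commit()  # now the row becomes visible to other connections
visible = reader.execute(
    'select * from spreadsheets where temp=1').fetchall()
```

With MySQLdb the equivalent fix is to call connection.commit() after the INSERT (or enable autocommit) before any other connection, such as the one behind the TTW form, queries the table.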
Re: Speed-up for loops
Ulrich Eckhardt writes:
> Tim Wintle wrote:
>> [..] under the hood, cpython does something like this (in psudo-code)
>>
>> itterator = xrange(imax)
>> while 1:
>>     next_attribute = itterator.next
>>     try:
>>         i = next_attribute()
>>     except:
>>         break
>>     a = a + 10
>
> There is one thing that strikes me here: The code claims that each
> iteration there is a lookup of the 'next' field in the iterator. I
> would expect that this is looked up once before the loop only.

It is looked up every time, but the lookup is efficient because "next" is one of the special methods that have a slot in the C struct that defines a Python type. Closer code would be something like:

next_function = iterator->ob_type->tp_iternext;

...which is as fast as it gets. CPython implements this in Python/ceval.c; just look for FOR_ITER.
--
http://mail.python.org/mailman/listinfo/python-list
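The difference is easy to reproduce in pure Python: caching the bound method does the attribute lookup once, which is roughly what a for loop gets for free through the tp_iternext slot. (Python 3 spells the method __next__; in Python 2 it was plain next.)

```python
it = iter([1, 2, 3])
next_method = it.__next__  # one attribute lookup, reused every iteration

out = []
while True:
    try:
        out.append(next_method())
    except StopIteration:
        break
```

Timing this against an uncached `it.__next__()` inside the loop shows only a small win precisely because, as noted above, the slot-based lookup the interpreter performs is already cheap.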
Re: Financial time series data
A more direct question on accessing stock information from Yahoo. First, use your browser to go to: http://finance.yahoo.com/q/cp?s=%5EGSPC+Components Now, you see the first 50 rows of a 500 row table of information on S&P 500 index. You can LM click on 1 -50 of 500 |First|Previous|Next|Last below the table to position to any of the 10 pages. I would like to use Python to do the following. *Loop on each of the 10 pages and for each page extract information for each row --- How can this be accomplished automatically in Python?* Let's take the first page (as shown by default). It is easy to see the link to the data for "A" is http://finance.yahoo.com/q?s=A. That is, I can just move my cursor over the "A" and I see this URL in the message at the bottom of my browser (Explorer 8). If I LM click on "A" then I will go to this link --- Do this! You should now see a table which shows information on this stock and *this is the information that I would like to extract*. I would like to do this for all 500 stocks without the need to enter the symbols for them (e.g. "A", "AA", etc.). It seems clear that this should be possible since all the symbols are in the first column of each of the 50 tables --- but it is not at all clear how to extract these automatically in Python. Hopefully, you understand my problem. Again, I would like Python to cycle through these 10 pages and extract this information for each symbol in this table. --V -- http://mail.python.org/mailman/listinfo/python-list
Re: Speed-up for loops
Michael, Thanks for summarizing and sharing your results. Very interesting. Regards, Malcolm -- http://mail.python.org/mailman/listinfo/python-list
Python for embedded systems development
Hi,

Is there anyone using Python for embedded systems development? I have no idea where to start, and Google was of little help. I would appreciate it if someone could guide me on where to start.

Regards
Vgnu
--
http://mail.python.org/mailman/listinfo/python-list
Where do I report a bug to the pythonware PIL
Does anyone knows what is the proper way to submit a bug report to pythonware PIL? thanks JC Lopes -- http://mail.python.org/mailman/listinfo/python-list
Re: Speed-up for loops
Michael Kreim, 03.09.2010 11:21:
> So finally I followed the recommendation of Tim Wintle to use cython.
> I did not know this before, but I figured out the following:
>
> additionWintle2.pyx:
>
> def addition():
>     cdef long imax = 10**9
>     cdef long a = 0
>     cdef long i
>     for i in xrange(imax):
>         a = a + 10
>     print a
>
> => runs (wall clock time): 0:00.04

Note that this isn't the "real" runtime. If you look up the binary code that the C compiler spits out, you'll most likely find the final result for "a" written down as a literal that gets returned from the function. C compilers do these things to benchmarks these days.

Stefan
--
http://mail.python.org/mailman/listinfo/python-list
Re: Speed-up for loops
Hi,

thanks a lot for your answers. I learned a lot, and I would like to sum up your suggestions and show you the results of the time command on my machine:

Original code by me:

imax = 10**9
a = 0
for i in xrange(imax):
    a = a + 10
print a

=> runs (wall clock time): 1:55.14

Peter Otten suggested to put the code into a function:

def f():
    imax = 10**9
    a = 0
    for i in xrange(imax):
        a = a + 10
    print a
f()

=> runs (wall clock time): 0:47.69

Tim Wintle and Philip Bloom posted some code using a while loop:

imax = 10**9
a = 0
i = 0
while 1:
    i = i + 1
    if (i > imax):
        break
    a = a + 10
print a

=> runs (wall clock time): 3:28.05

imax = 10**9
a = 0
i = 0
while (i < imax):
    i = i + 1
    a = a + 10
print a

=> runs (wall clock time): 3:27.74

Hrvoje Niksic suggested the usage of itertools:

from itertools import repeat
imax = 10**9
a = 0
for i in repeat(None, imax):
    a = a + 10
print a

=> runs (wall clock time): 1:58.25

I wrote a code combining these:

def f():
    from itertools import repeat
    imax = 10**9
    a = 0
    for i in repeat(None, imax):
        a = a + 10
    print a
f()

=> runs (wall clock time): 0:43.08

Then Roland Koebler suggested psyco, but I am sitting on a 64-bit machine, so I could not test it (although it looks promising). An anonymous Nobody suggested to use Numpy. I did not do this, because I am very, very new to Numpy and I did not figure out a Numpy-specific way to do this. Maybe a Numpy expert has something for me?

So finally I followed the recommendation of Tim Wintle to use cython. I did not know this before, but I figured out the following:

additionWintle2.pyx:

def addition():
    cdef long imax = 10**9
    cdef long a = 0
    cdef long i
    for i in xrange(imax):
        a = a + 10
    print a

setup.py:

from distutils.core import setup
from distutils.extension import Extension
from Cython.Distutils import build_ext

ext_modules = [Extension("additionWintle2", ["additionWintle2.pyx"])]
setup(
    name = 'Cython test',
    cmdclass = {'build_ext': build_ext},
    ext_modules = ext_modules
)

$ python setup.py build_ext --inplace
running build_ext

run.py:

from additionWintle2 import addition
addition()

=> runs (wall clock time): 0:00.04

And to compare this, I wrote something similar in Matlab and C++ (although some authors pointed out that it is not that easy to compare "for" loops in these three languages):

addition.cpp:

#include <iostream>
using namespace std;

int main()
{
    long imax = 1e9;
    long a = 0;
    long i;
    for (i = 0; i < imax; i++)
    {
        a = a + 10;
    }
    cout << a << endl;
    return 0;
}

=> Elapsed (wall clock) time (h:mm:ss or m:ss): 0:02.32

addition.m:

imax = 1e9;
a = 0;
for i=0:imax-1
    a = a + 10;
end
disp(a);
exit;

=> Elapsed (wall clock) time (h:mm:ss or m:ss): 0:08.39

With best regards,
Michael
--
http://mail.python.org/mailman/listinfo/python-list
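On the open NumPy question: this particular benchmark collapses to a single vectorized reduction, so there is nothing left to iterate over in Python. A sketch (imax is shrunk here from the benchmark's 1e9, since a 10**9-element int64 array would need about 8 GB of memory):

```python
import numpy as np

imax = 10**6  # smaller than the benchmark's 10**9 to keep the array small
# One vectorized operation replaces the whole Python-level loop:
a = int(np.full(imax, 10, dtype=np.int64).sum())
```

Of course, for this trivially reducible loop the honest answer is a = 10 * imax with no loop at all, which is essentially what Stefan points out the C compiler does to the Cython version too; NumPy pays off on loops whose body does real element-wise work.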
Re: PyPy and RPython
Well then, wouldn't it make sense for PyPy to use Shedskin and its definition of Restricted Python? I have heard repeatedly that PyPy RPython is very difficult to use. Then why isn't PyPy using Shedskin to compile its PyPy-Jit? Sarvi On Sep 2, 11:59 pm, John Nagle wrote: > On 9/2/2010 10:30 PM, sarvi wrote: > > > > > > > On Sep 2, 2:19 pm, John Nagle wrote: > >> On 9/2/2010 1:29 AM, sarvi wrote: > > >>> When I think about it these restrictions below seem a very reasonable > >>> tradeoff for performance. > > >> Yes. > > >>> And I can use this for just the modules/sections that are performance > >>> critical. > > >> Not quite. Neither Shed Skin nor RPython let you call from > >> restricted code to unrestricted code. That tends to happen > >> implicitly as objects are passed around. It's the global > >> analysis that makes this work; when you call something, you > >> need to know more about it than how to call it. > > > It should technically be possible to allow Python to call a module > > written in RPython? > > The problem is that, in a language where everything is an object, > everything you call calls you back. > > The basic performance problem with CPython comes from the fact > that it uses the worst-case code for almost everything. Avoiding > that requires global analysis to detect the places where the code > clearly isn't doing anything weird and simpler code can be used. > Again, look at Shed Skin, which represents considerable progress > made by one guy. With more resources, that could be a very > good system. > > John Nagle -- http://mail.python.org/mailman/listinfo/python-list
Re: bisection method: Simulating a retirement fund
On Sep 2, 11:10 pm, MRAB wrote:
> Why are you saving 'fund' in SavingsRecord if you're returning just
> the last and discarding others? Basically you're returning the final
> value of fund.

Hi MRAB

ok i agree that this is not ideal. I should shorten this to ONLY return SavingsRecord[-1]

> When performing this type of 'search' make sure that the interval
> (high - low) reduces at every step. (integer division) and if the 'if'
> condition happens to be false then the value of 'low' won't change for
> the next iteration, leading to an infinite loop.

If you look at the output you will see that the interval DOES seem to reduce at each step as expenses and fundsize reduce gradually. The computation does not lead to an infinite loop.

tnx
Baba
--
http://mail.python.org/mailman/listinfo/python-list
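MRAB's point about the interval is worth pinning down: with integer division, a bisection only terminates for *every* input if each branch provably shrinks [low, high]. A generic sketch (the function name and example predicate are invented for illustration):

```python
def first_true(pred, low, high):
    """Smallest x in [low, high] with pred(x) true.

    pred is assumed to be false-then-true (monotone) over the range."""
    while low < high:
        mid = (low + high) // 2   # integer division rounds down
        if pred(mid):
            high = mid            # keep the candidate, shrink from above
        else:
            low = mid + 1         # the '+ 1' is what guarantees progress
    return low

# Example: the smallest integer whose square reaches 1000.
root = first_true(lambda x: x * x >= 1000, 0, 1000)
```

Writing low = mid in the else branch would loop forever whenever high == low + 1 and pred(mid) is false, since mid == low and nothing changes; that it happens to terminate on one input, as in the retirement-fund run above, does not make the loop correct.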
pyqt unhandled RuntimeError \"no access to protected functions or signals for objects not created from Python
Hello, I have also written to the PyQt mailing list, but maybe you can help me solve this question: I have written a TCP server, but when I try to read via the readyRead() signal I get the error. This is the part of the program:

def nuova_connessione(self):
    print 'Nuova connessione'
    self.nuovo_socket = self.mio_server.nextPendingConnection()
    print self.nuovo_socket
    self.connect(self.nuovo_socket, QtCore.SIGNAL("connected()"),
                 self.connessione)
    self.connect(self.nuovo_socket, QtCore.SIGNAL("readyRead()"),
                 self.leggo_risposta)
    self.connect(self.nuovo_socket, QtCore.SIGNAL("disconnected"),
                 self.sconnesso)
    self.connect(self.nuovo_socket,
                 QtCore.SIGNAL("error(Qtcore.QAbsatctSocket::SocketError)"),
                 self.server_errore)

def leggo_risposta(self):
    cosa_leggo = self.nuovo_socket.readData(
        self.nuovo_socket.bytesAvailable())

Can you help me?
Thanks
--
http://mail.python.org/mailman/listinfo/python-list
Re: Speed-up for loops
Ulrich Eckhardt, 03.09.2010 08:52: Tim Wintle wrote: [..] under the hood, cpython does something like this (in psudo-code) itterator = xrange(imax) while 1: next_attribute = itterator.next try: i = next_attribute() except: break a = a + 10 There is one thing that strikes me here: The code claims that each iteration there is a lookup of the 'next' field in the iterator. I would expect that this is looked up once before the loop only. Can you confirm that or am I misinterpreting your intention here? It needs to do that. Nothing keeps you from redefining "next" in each call. That's even a well known way to implement state machines. However, as usual, the details are a bit different in CPython, which has a C level slot for the "next" method. So the lookup isn't as heavy as it looks. Stefan -- http://mail.python.org/mailman/listinfo/python-list
Re: PyPy and RPython
On 9/2/2010 10:30 PM, sarvi wrote: On Sep 2, 2:19 pm, John Nagle wrote: On 9/2/2010 1:29 AM, sarvi wrote: When I think about it these restrictions below seem a very reasonable tradeoff for performance. Yes. And I can use this for just the modules/sections that are performance critical. Not quite. Neither Shed Skin nor RPython let you call from restricted code to unrestricted code. That tends to happen implicitly as objects are passed around. It's the global analysis that makes this work; when you call something, you need to know more about it than how to call it. It should technically be possible to allow Python to call a module written in RPython? The problem is that, in a language where everything is an object, everything you call calls you back. The basic performance problem with CPython comes from the fact that it uses the worst-case code for almost everything. Avoiding that requires global analysis to detect the places where the code clearly isn't doing anything weird and simpler code can be used. Again, look at Shed Skin, which represents considerable progress made by one guy. With more resources, that could be a very good system. John Nagle -- http://mail.python.org/mailman/listinfo/python-list
Re: python database
shai garcia writes:
> can you pls help me to make a database program in python?

Thank you for your interest in Python, and I hope you will have much success as you learn about it.

Your best way of using this forum to help you is to ask *specific* questions about problems as you go along. Do your research of the documentation for Python <http://docs.python.org/>, the Python Package Index <http://pypi.python.org/>, the Python Wiki <http://wiki.python.org/> and so on; then start. Having started, work through the problems that come along as you go, and come here to ask about problems in *your* work, showing the code which confuses you and explaining the resources you've already researched. This will be much more instructive for you, and a much better use of the time of the volunteers here.

> Submitted via [something that spews adverts on the end of messages]

Please don't use these advertising services to post your messages. Instead, post them using a normal mail service that won't advertise at us.

--
 \     “The difference between a moral man and a man of honor is that |
  `\    the latter regrets a discreditable act, even when it has worked |
_o__)   and he has not been caught.” —Henry L. Mencken |
Ben Finney
--
http://mail.python.org/mailman/listinfo/python-list
Re: PyPy and RPython
sarvi, 03.09.2010 07:30: It should technically be possible to allow Python to call a module written in RPython? What's "Python" here? CPython? Then likely yes. I don't see a benefit, though. It should also compile RPython to a python module.so right? Why (and how) would CPython do that? If you want a binary extension module for CPython, you can try to push the RPython module through Cython. However, in that case, you wouldn't be restricted to RPython in the first place. Stefan -- http://mail.python.org/mailman/listinfo/python-list