Re: [web2py] Re: Speed of rendering html data (10,000+ rows)
try

    def get_data():
        custdata = db.executesql(qry, as_dict=True)
        return response.json(custdata.as_list())

Also go to this link: http://yoururl/app/controller/get_data
You should see a JSON response.
Re: [web2py] Re: Speed of rendering html data (10,000+ rows)
On Saturday, June 25, 2011 1:07:39 PM UTC-4, elffikk wrote:

    try

        def get_data():
            custdata = db.executesql(qry, as_dict=True)
            return response.json(custdata.as_list())

He's already using as_dict=True in the executesql call, so doing custdata.as_list() shouldn't be necessary.
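For reference, here is a minimal, self-contained sketch of the corrected pattern. It uses Python's stdlib sqlite3 and json in place of web2py's db connection and response.json (those only exist inside a running web2py app); the customer table and its columns are invented for illustration. The point is that executesql(..., as_dict=True) already yields a list of dicts, which serializes straight to JSON with no as_list() step:

```python
import json
import sqlite3

# Throwaway in-memory database standing in for the web2py connection.
conn = sqlite3.connect(":memory:")
conn.row_factory = sqlite3.Row  # rows act dict-like, as as_dict=True does
conn.execute("CREATE TABLE customer (id INTEGER, name TEXT)")
conn.executemany("INSERT INTO customer VALUES (?, ?)",
                 [(1, "Alice"), (2, "Bob")])

def get_data():
    # Analogue of db.executesql(qry, as_dict=True): a list of dicts.
    rows = conn.execute("SELECT id, name FROM customer").fetchall()
    custdata = [dict(r) for r in rows]
    # Analogue of response.json(custdata): serialize the list directly,
    # with no intermediate as_list() call.
    return json.dumps(custdata)
```

In a real web2py controller the body would simply be `return response.json(db.executesql(qry, as_dict=True))`.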
Re: [web2py] Re: Speed of rendering html data (10,000+ rows)
Why do you need to return 10k records in one go?

-- Sebastian E. Ovide
Re: [web2py] Re: Speed of rendering html data (10,000+ rows)
Or print it. I have had to do massive amounts of data in HTML and PDF format.

BR,
Jason Brower

On 06/21/2011 08:26 PM, Vineet wrote:

    @Sebastian, @pbreit, I understand what you mean. If I get you right, I should fetch only a limited number of rows through pagination. That makes sense in most situations. But in some cases, rendering all the records on a single page is required. For example, consider a big automobile workshop where a spare-parts price list report (with 12,000 parts) is to be viewed. For the customer, paginating by clicking next, next, ... is unsuitable. Once all the parts are rendered on a single page, he/she can apply a desired filter and analyze the data.

    On Jun 21, 9:50 pm, Sebastian E. Ovide sebastian.ov...@gmail.com wrote:

        why you need to return 10k records in one go ? -- Sebastian E. Ovide
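Since pagination keeps coming up as the alternative, here is a hedged sketch of server-side paging using plain SQL LIMIT/OFFSET (web2py's DAL expresses the same idea with limitby=(start, stop) on a select). The part table, its 12,000 rows, and the page size are all invented to match the workshop example above:

```python
import sqlite3

# Fake parts catalogue mirroring the 12,000-part price list in the thread.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE part (id INTEGER, name TEXT)")
conn.executemany("INSERT INTO part VALUES (?, ?)",
                 [(i, "part-%d" % i) for i in range(1, 12001)])

PAGE_SIZE = 50  # arbitrary page size for the sketch

def get_page(page):
    """Return one page of rows; pages are numbered from 1."""
    offset = (page - 1) * PAGE_SIZE
    return conn.execute(
        "SELECT id, name FROM part ORDER BY id LIMIT ? OFFSET ?",
        (PAGE_SIZE, offset)).fetchall()
```

The web2py equivalent would be something like `db(db.part).select(orderby=db.part.id, limitby=(offset, offset + PAGE_SIZE))`. This keeps each response small, but, as Vineet notes, it does prevent the user from filtering over the whole data set in the browser at once.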
Re: [web2py] Re: Speed of rendering html data (10,000+ rows)
Can it paginate data?

*Gilson Filho*
*Web Developer*
http://gilsondev.com