Splitting an application across multiple systems leaves it with a
downtime that is roughly the sum of the individual systems' downtimes,
since the application needs every system up at once. I wouldn't like
to do that. I would hope Google would lift the file limit so we could
get the extra speed within the same system.
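
A quick back-of-the-envelope check of that claim (the 99.9% figures
below are made-up illustrations, not measurements from either system):

    # If the app needs both systems up at once, availabilities
    # multiply, so downtimes roughly add.
    app_up = 0.999  # assumed availability of App Engine
    cdn_up = 0.999  # assumed availability of the external file host

    both_up = app_up * cdn_up
    print("combined availability: %.4f" % both_up)  # ~0.9980
    print("downtime per year: %.1f hours" % ((1 - both_up) * 365 * 24))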

On Jan 28, 7:15 am, Prem <playofwo...@gmail.com> wrote:
> Keeping static files on external storage such as Amazon S3, Amazon
> CloudFront, or any CDN might help (a sketch of this appears below the
> quoted thread). I have not hit the file limit yet, but I did this to
> speed up page response times. Maybe this will help?
>
> On Jan 27, 3:30 am, phtq <pher...@typequick.com.au> wrote:
>
> > Our application error log for the 26th showed around 160 failed HTTP
> > requests due to timeouts. That's 160 users being forced to hit the
> > refresh button on their browser to get a normal response. A more
> > typical day has 20 to 60 timeouts. We have been waiting over a year
> > for this bug to get fixed with no progress at all. It's beginning to
> > look like it's unfixable, so perhaps Google could provide some
> > workaround. In our case, the issue arises because of the 1,000-file
> > limit. We are forced to hold all our .js, .css, .png, .mp3, etc.
> > files in the database and serve them from there. The application is
> > quite large and there are well over 10,000 files. The Python code
> > serving up the files does just one DB fetch and has about 9 lines of
> > code (roughly as sketched below the quoted thread), so there is no
> > way it can be magically restructured to make the timeout go away.
> > However, putting all the files on App Engine as real files would
> > avoid the DB access and make the problem go away. Could Google work
> > towards removing that file limit?
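
For anyone curious, a handler of the sort phtq describes might look
roughly like the sketch below. The model and property names are my
guesses, not the actual code; the point is that there is almost nothing
to optimize around the single datastore get:

    from google.appengine.ext import db, webapp
    from google.appengine.ext.webapp.util import run_wsgi_app

    class StaticFile(db.Model):
        # key_name is the request path, e.g. '/static/css/site.css'
        content = db.BlobProperty()
        mime_type = db.StringProperty()

    class ServeFile(webapp.RequestHandler):
        def get(self, path):
            f = StaticFile.get_by_key_name(path)  # the single DB fetch
            if f is None:
                self.error(404)
                return
            self.response.headers['Content-Type'] = str(f.mime_type)
            self.response.out.write(f.content)

    application = webapp.WSGIApplication([('(/.*)', ServeFile)])

    def main():
        run_wsgi_app(application)

    if __name__ == '__main__':
        main()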
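
And Prem's suggestion, sketched with the boto library. The bucket name
and the local 'static' directory are invented for the example; the idea
is one upload per asset, made public so a CDN or browser can fetch it:

    import os
    import boto

    conn = boto.connect_s3()  # reads AWS credentials from env/boto config
    bucket = conn.create_bucket('example-static-assets')

    for root, _dirs, files in os.walk('static'):
        for name in files:
            path = os.path.join(root, name)
            key = bucket.new_key(path.replace(os.sep, '/'))
            key.set_contents_from_filename(path)  # one PUT per file
            key.set_acl('public-read')  # so the CDN/browsers can fetch it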
