[google-appengine] Re: Problems when downloading data from the app

2011-02-08 Thread Ricardo Bánffy
I would suggest placing a strategic pdb.set_trace() around the error
lines right before the bulkloader checks if the error is in the
non_fatal_error_codes set. This way, you could check what the error
code is and add it in the same fashion I did in my patch.
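A hypothetical sketch of that approach (the variable names `error_code` and `non_fatal_error_codes`, and the values here, are illustrative stand-ins for what bulkloader.py actually uses):

```python
import pdb

# Illustrative stand-ins for the real variables on bulkloader.py's
# error-handling path; these names and values are assumptions.
non_fatal_error_codes = set([-2])
error_code = -3  # whatever code the failing request actually produced

# Uncomment to drop into the debugger right before the membership
# check, so the code can be inspected interactively ("p error_code"):
# pdb.set_trace()
if error_code in non_fatal_error_codes:
    status = 'non-fatal'
else:
    status = 'fatal'
print(status)
```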

On Feb 2, 3:48 am, antichrist  wrote:
> I have a similar problem, but a slightly different error message.
>
> [ERROR    2011-02-02 14:40:50,286 adaptive_thread_pool.py] Error in
> Thread-1: 
> [DEBUG    2011-02-02 14:40:50,301 adaptive_thread_pool.py] Traceback
> (most recent call last):
>   File "C:\Documents and Settings\Administrator\My Documents
> \pointmarket\google_appengine\google\appengine\tools\bulkloader.py",
> line 693, in PerformWork
>     transfer_time = self._TransferItem(thread_pool)
>   File "C:\Documents and Settings\Administrator\My Documents
> \pointmarket\google_appengine\google\appengine\tools\bulkloader.py",
> line 1081, in _TransferItem
>     self, retry_parallel=self.first)
>   File "C:\Documents and Settings\Administrator\My Documents
> \pointmarket\google_appengine\google\appengine\tools\bulkloader.py",
> line 1358, in GetEntities
>     results = self._QueryForPbs(query)
>   File "C:\Documents and Settings\Administrator\My Documents
> \pointmarket\google_appengine\google\appengine\tools\bulkloader.py",
> line 1308, in _QueryForPbs
>     result_pb)
>   File "D:\Program Files\Google\google_appengine\google\appengine\api
> \apiproxy_stub_map.py", line 78, in MakeSyncCall
>     return apiproxy.MakeSyncCall(service, call, request, response)
>   File "D:\Program Files\Google\google_appengine\google\appengine\api
> \apiproxy_stub_map.py", line 278, in MakeSyncCall
>     rpc.CheckSuccess()
>   File "D:\Program Files\Google\google_appengine\google\appengine\api
> \apiproxy_rpc.py", line 149, in _WaitImpl
>     self.request, self.response)
>   File "D:\Program Files\Google\google_appengine\google\appengine\ext
> \remote_api\remote_api_stub.py", line 223, in MakeSyncCall
>     handler(request, response)
>   File "D:\Program Files\Google\google_appengine\google\appengine\ext
> \remote_api\remote_api_stub.py", line 232, in _Dynamic_RunQuery
>     'datastore_v3', 'RunQuery', query, query_result)
>   File "D:\Program Files\Google\google_appengine\google\appengine\ext
> \remote_api\remote_api_stub.py", line 155, in MakeSyncCall
>     self._MakeRealSyncCall(service, call, request, response)
>   File "D:\Program Files\Google\google_appengine\google\appengine\ext
> \remote_api\remote_api_stub.py", line 167, in _MakeRealSyncCall
>     encoded_response = self._server.Send(self._path, encoded_request)
>   File "D:\Program Files\Google\google_appengine\google\appengine\tools
> \appengine_rpc.py", line 346, in Send
>     f = self.opener.open(req)
>   File "D:\Python25\lib\urllib2.py", line 381, in open
>     response = self._open(req, data)
>   File "D:\Python25\lib\urllib2.py", line 399, in _open
>     '_open', req)
>   File "D:\Python25\lib\urllib2.py", line 360, in _call_chain
>     result = func(*args)
>   File "D:\Python25\lib\urllib2.py", line 1107, in http_open
>     return self.do_open(httplib.HTTPConnection, req)
>   File "D:\Python25\lib\urllib2.py", line 1082, in do_open
>     raise URLError(err)
> URLError: 
>
> [DEBUG    2011-02-02 14:40:50,301 bulkloader.py] Waiting for
> progress_thread to terminate...
> [DEBUG    2011-02-02 14:40:50,316 bulkloader.py] [Thread-11]
> ExportProgressThread: exiting
> [DEBUG    2011-02-02 14:40:50,316 bulkloader.py] ... done.
> [INFO     2011-02-02 14:40:50,332 bulkloader.py] Have 892 entities, 0
> previously transferred
> [INFO     2011-02-02 14:40:50,332 bulkloader.py] 892 entities (5134783
> bytes) transferred in 63.6 seconds
>
> ---
>
> This "URLError: " error occurs
> every time I try to download data, after some time.
>
> Does anyone know about this problem?
>
> On Jan 24, 5:23 pm, Ricardo Bánffy wrote:
>
> > in case it affects anyone else, one line fixes it:
>
> > Index: google/appengine/tools/bulkloader.py
> > ===================================================================
> > --- google/appengine/tools/bulkloader.py        (revision 142)
> > +++ google/appengine/tools/bulkloader.py        (working copy)
> > @@ -698,6 +698,8 @@
> >                         transfer_time)
> >            sys.stdout.write('.')
> >            sys.stdout.flush()
> > +          # Since we had at least one successful transfer, we could assume DNS errors are transient
> > +          non_fatal_error_codes.add(-2)
> >            status = adaptive_thread_pool.WorkItem.SUCCESS
> >            if transfer_time <= MAXIMUM_INCREASE_DURATION:
> >              instruction = adaptive_thread_pool.ThreadGate.INCREASE
>
> > BTW, is there a nice constant for this kind of error? Adding a -2 to
> > the set seems dirty, to say the least.
>
> > 2011/1/22 Ricardo Bánffy :
>
> > > Hi.
>
> > > I have been trying, for the past couple of days, to download data from the live
> > > app to my local development copy. Every ti

[google-appengine] Re: Problems when downloading data from the app

2011-02-01 Thread antichrist
I have a similar problem, but a slightly different error message.

[ERROR    2011-02-02 14:40:50,286 adaptive_thread_pool.py] Error in
Thread-1: 
[DEBUG    2011-02-02 14:40:50,301 adaptive_thread_pool.py] Traceback
(most recent call last):
  File "C:\Documents and Settings\Administrator\My Documents
\pointmarket\google_appengine\google\appengine\tools\bulkloader.py",
line 693, in PerformWork
    transfer_time = self._TransferItem(thread_pool)
  File "C:\Documents and Settings\Administrator\My Documents
\pointmarket\google_appengine\google\appengine\tools\bulkloader.py",
line 1081, in _TransferItem
    self, retry_parallel=self.first)
  File "C:\Documents and Settings\Administrator\My Documents
\pointmarket\google_appengine\google\appengine\tools\bulkloader.py",
line 1358, in GetEntities
    results = self._QueryForPbs(query)
  File "C:\Documents and Settings\Administrator\My Documents
\pointmarket\google_appengine\google\appengine\tools\bulkloader.py",
line 1308, in _QueryForPbs
    result_pb)
  File "D:\Program Files\Google\google_appengine\google\appengine\api
\apiproxy_stub_map.py", line 78, in MakeSyncCall
    return apiproxy.MakeSyncCall(service, call, request, response)
  File "D:\Program Files\Google\google_appengine\google\appengine\api
\apiproxy_stub_map.py", line 278, in MakeSyncCall
    rpc.CheckSuccess()
  File "D:\Program Files\Google\google_appengine\google\appengine\api
\apiproxy_rpc.py", line 149, in _WaitImpl
    self.request, self.response)
  File "D:\Program Files\Google\google_appengine\google\appengine\ext
\remote_api\remote_api_stub.py", line 223, in MakeSyncCall
    handler(request, response)
  File "D:\Program Files\Google\google_appengine\google\appengine\ext
\remote_api\remote_api_stub.py", line 232, in _Dynamic_RunQuery
    'datastore_v3', 'RunQuery', query, query_result)
  File "D:\Program Files\Google\google_appengine\google\appengine\ext
\remote_api\remote_api_stub.py", line 155, in MakeSyncCall
    self._MakeRealSyncCall(service, call, request, response)
  File "D:\Program Files\Google\google_appengine\google\appengine\ext
\remote_api\remote_api_stub.py", line 167, in _MakeRealSyncCall
    encoded_response = self._server.Send(self._path, encoded_request)
  File "D:\Program Files\Google\google_appengine\google\appengine\tools
\appengine_rpc.py", line 346, in Send
    f = self.opener.open(req)
  File "D:\Python25\lib\urllib2.py", line 381, in open
    response = self._open(req, data)
  File "D:\Python25\lib\urllib2.py", line 399, in _open
    '_open', req)
  File "D:\Python25\lib\urllib2.py", line 360, in _call_chain
    result = func(*args)
  File "D:\Python25\lib\urllib2.py", line 1107, in http_open
    return self.do_open(httplib.HTTPConnection, req)
  File "D:\Python25\lib\urllib2.py", line 1082, in do_open
    raise URLError(err)
URLError: 

[DEBUG    2011-02-02 14:40:50,301 bulkloader.py] Waiting for
progress_thread to terminate...
[DEBUG    2011-02-02 14:40:50,316 bulkloader.py] [Thread-11]
ExportProgressThread: exiting
[DEBUG    2011-02-02 14:40:50,316 bulkloader.py] ... done.
[INFO     2011-02-02 14:40:50,332 bulkloader.py] Have 892 entities, 0
previously transferred
[INFO     2011-02-02 14:40:50,332 bulkloader.py] 892 entities (5134783
bytes) transferred in 63.6 seconds

---

This "URLError: " error occurs
every time I try to download data, after some time.

Does anyone know about this problem?

On Jan 24, 5:23 pm, Ricardo Bánffy wrote:
> in case it affects anyone else, one line fixes it:
>
> Index: google/appengine/tools/bulkloader.py
> ===================================================================
> --- google/appengine/tools/bulkloader.py        (revision 142)
> +++ google/appengine/tools/bulkloader.py        (working copy)
> @@ -698,6 +698,8 @@
>                         transfer_time)
>            sys.stdout.write('.')
>            sys.stdout.flush()
> +          # Since we had at least one successful transfer, we could assume DNS errors are transient
> +          non_fatal_error_codes.add(-2)
>            status = adaptive_thread_pool.WorkItem.SUCCESS
>            if transfer_time <= MAXIMUM_INCREASE_DURATION:
>              instruction = adaptive_thread_pool.ThreadGate.INCREASE
>
> BTW, is there a nice constant for this kind of error? Adding a -2 to
> the set seems dirty, to say the least.
>
> 2011/1/22 Ricardo Bánffy :
>
> > Hi.
>
> > I have been trying, for the past couple of days, to download data from the live
> > app to my local development copy. Every time, sometimes a couple hours
> > and gigabytes into the download, I get a
>
> > .[INFO    ] An error occurred. Shutting down...
> > ..[ERROR   ] Error in Thread-8: <urlopen error (-2, 'Name or service not known')>
>
> > [INFO    ] Have 210 entities, 0 previously transferred
> > [INFO    ] 210 entities (2145028 bytes) transferred in 54.4 seconds
>
> > message.
>
> > I assume a try/except with a couple retries around it would fix the
> > problem - as it looks like a transient DNS failure - an

[google-appengine] Re: Problems when downloading data from the app

2011-01-24 Thread Ricardo Bánffy
In case it affects anyone else, one line fixes it:

Index: google/appengine/tools/bulkloader.py
===================================================================
--- google/appengine/tools/bulkloader.py        (revision 142)
+++ google/appengine/tools/bulkloader.py        (working copy)
@@ -698,6 +698,8 @@
                        transfer_time)
           sys.stdout.write('.')
           sys.stdout.flush()
+          # Since we had at least one successful transfer, we could assume DNS errors are transient
+          non_fatal_error_codes.add(-2)
           status = adaptive_thread_pool.WorkItem.SUCCESS
           if transfer_time <= MAXIMUM_INCREASE_DURATION:
             instruction = adaptive_thread_pool.ThreadGate.INCREASE

BTW, is there a nice constant for this kind of error? Adding a -2 to
the set seems dirty, to say the least.
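For what it's worth, -2 appears to be getaddrinfo()'s EAI_NONAME failure ("Name or service not known"), which Python exposes on the socket module, at least on Linux/glibc. Something like this might read less dirty, though the numeric value is platform-dependent, so this is a sketch rather than a drop-in fix:

```python
import socket

# On Linux/glibc, socket.EAI_NONAME is -2 -- the getaddrinfo() error
# behind "Name or service not known". Using the named constant instead
# of a bare -2 at least documents the intent.
non_fatal_error_codes = set()
non_fatal_error_codes.add(socket.EAI_NONAME)
print(socket.EAI_NONAME)
```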

2011/1/22 Ricardo Bánffy :
> Hi.
>
> I have been trying, for the past couple of days, to download data from the live
> app to my local development copy. Every time, sometimes a couple hours
> and gigabytes into the download, I get a
>
> .[INFO    ] An error occurred. Shutting down...
> ..[ERROR   ] Error in Thread-8: <urlopen error (-2, 'Name or service not known')>
>
> [INFO    ] Have 210 entities, 0 previously transferred
> [INFO    ] 210 entities (2145028 bytes) transferred in 54.4 seconds
>
> message.
>
> I assume a try/except with a couple retries around it would fix the
> problem - as it looks like a transient DNS failure - and I'll dig
> deeper into appcfg.py's code tomorrow in order to fix this, but,
> before I do,  has anyone fixed this before?
>
> BTW, I'm running 1.4.1 on Ubuntu 10.10 with a built-from-sources
> Python 2.5.5 (Ubuntu doesn't package it anymore).
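The try/except-with-retries idea above could be sketched like this. It is a hypothetical wrapper, not bulkloader code: `opener` stands for anything with an .open(req) method, such as the opener appengine_rpc builds, and where this uses urllib, the Python 2.5 in the thread would use urllib2:

```python
import time
from urllib.error import URLError  # urllib2.URLError on Python 2.5


def open_with_retries(opener, req, attempts=3, delay=1.0):
    """Retry transient failures (e.g. DNS hiccups) a few times.

    Hypothetical sketch of the try/except idea: retry the request up
    to `attempts` times, sleeping `delay` seconds between tries, and
    re-raise only once every attempt has failed.
    """
    for attempt in range(attempts):
        try:
            return opener.open(req)
        except URLError:
            if attempt == attempts - 1:
                raise  # still failing after all retries
            time.sleep(delay)
```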



-- 
Ricardo Bánffy
http://www.dieblinkenlights.com
http://twitter.com/rbanffy

-- 
You received this message because you are subscribed to the Google Groups 
"Google App Engine" group.
To post to this group, send email to google-appengine@googlegroups.com.
To unsubscribe from this group, send email to 
google-appengine+unsubscr...@googlegroups.com.
For more options, visit this group at 
http://groups.google.com/group/google-appengine?hl=en.