[appengine-java] Re: Is it possible to implement Open EntityManager in View to avoid JDODetachedFieldAccessException?

2009-11-28 Thread datanucleus
 I'm using Spring, so I tried to configure the
 OpenEntityManagerInViewInterceptor. If this works as it's supposed to,
 I shouldn't get JDODetachedFieldAccessException anymore, but I still
 get it.

And what state are the objects in when you access the field? Detached? And what PersistenceContext is used?
As per
http://www.datanucleus.org/products/accessplatform_2_0/jpa/object_lifecycle.html
DataNucleus 1.x supports the Transaction persistence context out of the box (whereas
DataNucleus 2.x supports both Transaction and Extended).
You could easily use the persistence properties
datanucleus.DetachAllOnCommit = false
datanucleus.DetachOnClose = true
if you wanted Extended.
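
For anyone wiring this up, here is a minimal sketch of passing those two properties when the EntityManagerFactory is created (this assumes the GAE default persistence-unit name "transactions-optional"; the property names are the ones above, and the EMF holder class is just an illustration):

import java.util.HashMap;
import java.util.Map;

import javax.persistence.EntityManagerFactory;
import javax.persistence.Persistence;

public final class EMF {
    private static final EntityManagerFactory FACTORY = create();

    private static EntityManagerFactory create() {
        Map<String, String> props = new HashMap<String, String>();
        // Keep objects attached until the EntityManager closes instead of
        // detaching them on commit - the Extended-style behaviour described above.
        props.put("datanucleus.DetachAllOnCommit", "false");
        props.put("datanucleus.DetachOnClose", "true");
        return Persistence.createEntityManagerFactory("transactions-optional", props);
    }

    public static EntityManagerFactory get() {
        return FACTORY;
    }

    private EMF() {
    }
}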

--

You received this message because you are subscribed to the Google Groups 
Google App Engine for Java group.
To post to this group, send email to google-appengine-j...@googlegroups.com.
To unsubscribe from this group, send email to 
google-appengine-java+unsubscr...@googlegroups.com.
For more options, visit this group at 
http://groups.google.com/group/google-appengine-java?hl=en.




[appengine-java] [ANN] Gaelyk 0.3.2, a lightweight Groovy toolkit for App Engine

2009-11-28 Thread Guillaume Laforge
Hi all,

This is the first time I'm posting about Gaelyk in this group, I hope it's
okay according to the netiquette of the Group.
If not, please advise me the best place to give some news on that topic, on
an occasional basis.

I wanted to share with you the release of *Gaelyk 0.3.2, a lightweight
Groovy toolkit for App Engine*.

Gaelyk is a thin layer on top of the Google App Engine SDK that is aimed at
simplifying the creation of applications written in the *Groovy dynamic
language* to be deployed on GAE.
Thanks to Groovlets (basically servlet scripts), templates, and entities,
you can follow an *MVC paradigm* for creating your applications.
Several nice *syntax shortcuts simplify the usage of the low-level SDK APIs*,
such as an email.send to: recipi...@gmail.com shorthand and more.

This new release adds some handy *URL routing system* to have nice RESTful
URLs.

If you want to learn more about this little framework, please have a look at
the extensive Tutorial:
http://gaelyk.appspot.com/tutorial

Gaelyk was also presented recently at the Devoxx conference in Belgium,
along with Patrick Chanezon describing the GAE platform.
You can view the slides of the presentation here:
http://glaforge.free.fr/blog/groovy/286

You can download Gaelyk and a template project in the download section of
the website: http://gaelyk.appspot.com/download

And get involved in the community: http://gaelyk.appspot.com/community
Particularly, you can join the Gaelyk Google Group for posting questions and
chatting with the users and developers:
http://groups.google.com/group/gaelyk

Gaelyk is an *Open Source project* licensed under the terms of the *Apache
Software License 2.0*.
The project is hosted on GitHub: http://github.com/glaforge/gaelyk

A number of applications online already use this lightweight toolkit.

   - For example, the Groovy Web Console lets you execute and share Groovy
   scripts online: http://groovyconsole.appspot.com/
   - The Gaelyk website itself is of course a Gaelyk application:
   http://gaelyk.appspot.com/
   - The iUI iPhone JavaScript toolkit website is also powered by Gaelyk:
   http://iui-js.appspot.com/ (best viewed with an iPhone)
   - Averone's company website is developed with Gaelyk:
   http://www.averone.com.br/

In the future, we're looking forward to simplifying other parts of the GAE
SDK, for instance by providing some nice DSLs (Domain-Specific Languages) to
improve and enrich the low-level datastore APIs to further simplify querying
the datastore, or by creating a RESTful language on top of the URL Fetch
service to interact with REST backends.

We're looking forward to your feedback and having you on board!

-- 
Guillaume Laforge
Groovy Project Manager
Head of Groovy Development at SpringSource
http://www.springsource.com/g2one





[appengine-java] Re: Is there a recommended way to differentiate between production and dev GAE environments?

2009-11-28 Thread Guillaume Laforge
Another approach I've just found is doing something like:

ApiProxy.getCurrentEnvironment().getClass().getName().contains("LocalHttpRequestEnvironment")

Not sure in the end which of them all is the best approach.
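
Pulling the heuristics from this thread together, a small helper could look like the sketch below (the class is made up, and the class-name check is a heuristic rather than an official API):

import com.google.apphosting.api.ApiProxy;

public final class RuntimeEnvironment {

    private RuntimeEnvironment() {
    }

    // Heuristic: the local dev server backs requests with an environment class
    // whose name contains "LocalHttpRequestEnvironment"; production uses a
    // different implementation class.
    public static boolean isLocalDevelopment() {
        ApiProxy.Environment env = ApiProxy.getCurrentEnvironment();
        return env != null
                && env.getClass().getName().contains("LocalHttpRequestEnvironment");
    }
}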

On 24 nov, 16:29, Marcel Overdijk marceloverd...@gmail.com wrote:
 Or use a Listener as described here:
 http://marceloverdijk.blogspot.com/2009/10/determining-runtime-enviro...

 On 23 nov, 15:58, Nacho Coloma icol...@gmail.com wrote:



  To answer my own question, this has been my best shot this far:

   SecurityManager sm = System.getSecurityManager();
   localDevelopmentEnvironment = sm == null ||
       "com.google.appengine.tools.development.DevAppServerFactory$CustomSecurityManager"
           .equals(sm.getClass().getName());

  If anyone has a better way, I will be glad to hear.

  On Nov 23, 1:17 pm, Nacho Coloma icol...@gmail.com wrote:

   Hi all,

   I was considering options, but I first wanted to ask: is there a
   recommended way to differentiate between my local development
   environment and the real GAE server? This far, the only options I can
   think of are:

   * adding a -Dtest=true to my eclipse launcher

   * looking up any test environment classes (Class.forName), but it's
   not reliable as they could get included by mistake in any WAR release.

   * I have been searching for instanceof alternatives, e.g.:
   DatastoreServiceFactory.getService() instanceof LocalDatastoreService
   but I could not find any such expression that could possibly work.

   Ideas? What are people using out there?

   Nacho.





[appengine-java] problem in XMPP sendMessage()

2009-11-28 Thread sahil mahajan
I am using XMPP and getting the following error when I try:
/******* CODE *******/
Message msg = new MessageBuilder()
    .withRecipientJids(receiverJid)
    .withFromJid(new JID(recipientJid[0].getId()))
    .withMessageType(MessageType.NORMAL)
    .withBody(msgBody)
    .build();

SendResponse status = xmpp.sendMessage(msg);

My JIDs are correct. msgBody is not null.
The problem occurs at xmpp.sendMessage(msg);
I don't understand what is null?

/******* ERROR DETAILS *******/

Uncaught exception from servlet
java.lang.NullPointerException
at 
com.google.appengine.api.xmpp.XMPPServiceImpl.createMessageRequest(XMPPServiceImpl.java:120)
at 
com.google.appengine.api.xmpp.XMPPServiceImpl.sendMessage(XMPPServiceImpl.java:105)
at 
com.ChatRoom.server.XMPPReceiverServlet.doPost(XMPPReceiverServlet.java:165)
at javax.servlet.http.HttpServlet.service(HttpServlet.java:713)
at javax.servlet.http.HttpServlet.service(HttpServlet.java:806)
at 
org.mortbay.jetty.servlet.ServletHolder.handle(ServletHolder.java:487)
at 
org.mortbay.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1093)
at 
com.google.apphosting.runtime.jetty.SaveSessionFilter.doFilter(SaveSessionFilter.java:35)
at 
org.mortbay.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1084)
at 
com.google.apphosting.utils.servlet.TransactionCleanupFilter.doFilter(TransactionCleanupFilter.java:43)
at 
org.mortbay.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1084)
at 
org.mortbay.jetty.servlet.ServletHandler.handle(ServletHandler.java:360)
at 
org.mortbay.jetty.security.SecurityHandler.handle(SecurityHandler.java:216)
at 
org.mortbay.jetty.servlet.SessionHandler.handle(SessionHandler.java:181)
at 
org.mortbay.jetty.handler.ContextHandler.handle(ContextHandler.java:712)
at org.mortbay.jetty.webapp.WebAppContext.handle(WebAppContext.java:405)
at 
com.google.apphosting.runtime.jetty.AppVersionHandlerMap.handle(AppVersionHandlerMap.java:238)
at 
org.mortbay.jetty.handler.HandlerWrapper.handle(HandlerWrapper.java:139)
at org.mortbay.jetty.Server.handle(Server.java:313)
at 
org.mortbay.jetty.HttpConnection.handleRequest(HttpConnection.java:506)
at 
org.mortbay.jetty.HttpConnection$RequestHandler.headerComplete(HttpConnection.java:830)
at 
com.google.apphosting.runtime.jetty.RpcRequestParser.parseAvailable(RpcRequestParser.java:76)
at org.mortbay.jetty.HttpConnection.handle(HttpConnection.java:381)
at 
com.google.apphosting.runtime.jetty.JettyServletEngineAdapter.serviceRequest(JettyServletEngineAdapter.java:139)
at 
com.google.apphosting.runtime.JavaRuntime.handleRequest(JavaRuntime.java:239)
at 
com.google.apphosting.base.RuntimePb$EvaluationRuntime$6.handleBlockingRequest(RuntimePb.java:5135)
at 
com.google.apphosting.base.RuntimePb$EvaluationRuntime$6.handleBlockingRequest(RuntimePb.java:5133)
at 
com.google.net.rpc.impl.BlockingApplicationHandler.handleRequest(BlockingApplicationHandler.java:24)
at com.google.net.rpc.impl.RpcUtil.runRpcInApplication(RpcUtil.java:363)
at com.google.net.rpc.impl.Server$2.run(Server.java:814)
at 
com.google.tracing.LocalTraceSpanRunnable.run(LocalTraceSpanRunnable.java:56)
at 
com.google.tracing.LocalTraceSpanBuilder.internalContinueSpan(LocalTraceSpanBuilder.java:516)
at com.google.net.rpc.impl.Server.startRpc(Server.java:769)
at com.google.net.rpc.impl.Server.processRequest(Server.java:351)
at 
com.google.net.rpc.impl.ServerConnection.messageReceived(ServerConnection.java:437)
at 
com.google.net.rpc.impl.RpcConnection.parseMessages(RpcConnection.java:319)
at 
com.google.net.rpc.impl.RpcConnection.dataReceived(RpcConnection.java:290)
at com.google.net.async.Connection.handleReadEvent(Connection.java:436)
at 
com.google.net.async.EventDispatcher.processNetworkEvents(EventDispatcher.java:762)
at 
com.google.net.async.EventDispatcher.internalLoop(EventDispatcher.java:207)
at com.google.net.async.EventDispatcher.loop(EventDispatcher.java:101)
at 
com.google.net.rpc.RpcService.runUntilServerShutdown(RpcService.java:251)
at 
com.google.apphosting.runtime.JavaRuntime$RpcRunnable.run(JavaRuntime.java:396)
at java.lang.Thread.run(Unknown Source)
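
One way to narrow this down (a sketch only; the helper below and its checks are not from this thread) is to validate the values handed to MessageBuilder before calling sendMessage, since a null recipient JID or a null body is the most likely trigger for a NullPointerException inside createMessageRequest:

import com.google.appengine.api.xmpp.JID;
import com.google.appengine.api.xmpp.Message;
import com.google.appengine.api.xmpp.MessageBuilder;
import com.google.appengine.api.xmpp.MessageType;
import com.google.appengine.api.xmpp.SendResponse;
import com.google.appengine.api.xmpp.XMPPService;

public class XmppSendHelper {

    // Hypothetical helper; receiverJid and msgBody are the same values used above.
    public static SendResponse sendChecked(XMPPService xmpp, JID receiverJid, String msgBody) {
        if (receiverJid == null || receiverJid.getId() == null) {
            throw new IllegalArgumentException("recipient JID is null");
        }
        if (msgBody == null) {
            throw new IllegalArgumentException("message body is null");
        }
        Message msg = new MessageBuilder()
                .withRecipientJids(receiverJid)
                .withMessageType(MessageType.NORMAL)
                .withBody(msgBody)
                .build();
        return xmpp.sendMessage(msg);
    }
}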



[appengine-java] Re: xmlHttp request status is 0 for google maps http geocoder

2009-11-28 Thread shaz
Realized the issue - I didn't know there was a cross-domain
restriction on AJAX. Problem solved, thanks.
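
For anyone hitting the same wall, one common workaround (a sketch only, not necessarily what was done here; the servlet name and URL shape are illustrative) is to proxy the geocoding call through the app with URLFetch, so the browser only ever talks to your own domain:

import java.io.IOException;
import java.net.URL;
import java.net.URLEncoder;

import javax.servlet.http.HttpServlet;
import javax.servlet.http.HttpServletRequest;
import javax.servlet.http.HttpServletResponse;

import com.google.appengine.api.urlfetch.HTTPResponse;
import com.google.appengine.api.urlfetch.URLFetchService;
import com.google.appengine.api.urlfetch.URLFetchServiceFactory;

public class GeocodeProxyServlet extends HttpServlet {

    @Override
    protected void doGet(HttpServletRequest req, HttpServletResponse resp) throws IOException {
        // The browser calls /geocode?q=... on the app's own domain; the app then
        // fetches the geocoding URL server-side, avoiding the cross-domain restriction.
        String address = URLEncoder.encode(req.getParameter("q"), "UTF-8");
        // URL shape is illustrative; reuse whatever geocoding URL already worked in the browser.
        URL url = new URL("http://maps.google.com/maps/geo?output=json&q=" + address);

        URLFetchService fetcher = URLFetchServiceFactory.getURLFetchService();
        HTTPResponse remote = fetcher.fetch(url);

        resp.setContentType("application/json");
        resp.getOutputStream().write(remote.getContent());
    }
}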

On Nov 27, 5:09 pm, shaz ssh...@gmail.com wrote:
 Hi,

 I am trying to submit an HTTP request via AJAX from the client to the
 Google Maps Geocoding service. I keep getting a status of 0.

 I know the request is valid because when I enter the URL right into
 the browser address bar I get a valid result. Here is the code (assume
 'url_string' has a valid url to the geocoding service - I have tested
 this already as mentioned above):

 var request = GXmlHttp.create();
 request.open("GET", url_string, true);
 request.onreadystatechange = function() {
   if (request.readyState == 4) {
     alert("STATUS IS " + request.status);
   }
 }

 request.send(null);

 My app is running on Google appengine and I get the error when I try
 it locally but also when I deploy and try it.

 Any help would be appreciated.





[appengine-java] Re: Why is it called Google App Engine for Java ?

2009-11-28 Thread ted stockwell


On Nov 27, 7:19 pm, Diana Cruise diana.l.cru...@gmail.com wrote:

 Ted... java.lang.Thread, you want to launch new processes from within
 your app server...that's a job for URLFetch.


Unlike Thread, I can't use URLFetch to perform a task asynchronously
and return a result to the calling thread.






Re: [appengine-java] problem in XMPP sendMessage()

2009-11-28 Thread Ravi Sharma
I am not sure, but I think you don't need to (should not) set fromJid, as
the message will be sent from your application's JID.
I am running the following code and it's working.

JID jid = new JID(responseJid);
Message msg = new MessageBuilder()
    .withRecipientJids(jid)
    .withBody(msgBody)
    .build();

boolean messageSent = false;
XMPPService xmpp = XMPPServiceFactory.getXMPPService();
if (xmpp.getPresence(jid).isAvailable()) {
    SendResponse status = xmpp.sendMessage(msg);
    messageSent = (status.getStatusMap().get(jid) == SendResponse.Status.SUCCESS);
}

On Sat, Nov 28, 2009 at 2:51 PM, sahil mahajan sahilm2...@gmail.com wrote:

 I am using XMPP and getting following error when I try
 /CODE*/
 Message msg = new MessageBuilder()
 .withRecipientJids(receiverJid)
  .withFromJid(new JID(recipientJid[0].getId()) )
   .withMessageType(MessageType.NORMAL)
 .withBody(msgBody)
 .build();

   SendResponse status =xmpp.sendMessage(msg);

 My JID's are correct. msgBody is not null
 Problem occures at xmpp.sendMessage(msg);
 I don't understand what is null?

 /***ERROR
 DETAILS/

 Uncaught exception from servlet
 java.lang.NullPointerException
   at 
 com.google.appengine.api.xmpp.XMPPServiceImpl.createMessageRequest(XMPPServiceImpl.java:120)
   at 
 com.google.appengine.api.xmpp.XMPPServiceImpl.sendMessage(XMPPServiceImpl.java:105)
   at 
 com.ChatRoom.server.XMPPReceiverServlet.doPost(XMPPReceiverServlet.java:165)
   at javax.servlet.http.HttpServlet.service(HttpServlet.java:713)
   at javax.servlet.http.HttpServlet.service(HttpServlet.java:806)
   at 
 org.mortbay.jetty.servlet.ServletHolder.handle(ServletHolder.java:487)
   at 
 org.mortbay.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1093)
   at 
 com.google.apphosting.runtime.jetty.SaveSessionFilter.doFilter(SaveSessionFilter.java:35)
   at 
 org.mortbay.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1084)
   at 
 com.google.apphosting.utils.servlet.TransactionCleanupFilter.doFilter(TransactionCleanupFilter.java:43)
   at 
 org.mortbay.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1084)
   at 
 org.mortbay.jetty.servlet.ServletHandler.handle(ServletHandler.java:360)
   at 
 org.mortbay.jetty.security.SecurityHandler.handle(SecurityHandler.java:216)
   at 
 org.mortbay.jetty.servlet.SessionHandler.handle(SessionHandler.java:181)
   at 
 org.mortbay.jetty.handler.ContextHandler.handle(ContextHandler.java:712)
   at org.mortbay.jetty.webapp.WebAppContext.handle(WebAppContext.java:405)
   at 
 com.google.apphosting.runtime.jetty.AppVersionHandlerMap.handle(AppVersionHandlerMap.java:238)
   at 
 org.mortbay.jetty.handler.HandlerWrapper.handle(HandlerWrapper.java:139)
   at org.mortbay.jetty.Server.handle(Server.java:313)
   at 
 org.mortbay.jetty.HttpConnection.handleRequest(HttpConnection.java:506)
   at 
 org.mortbay.jetty.HttpConnection$RequestHandler.headerComplete(HttpConnection.java:830)
   at 
 com.google.apphosting.runtime.jetty.RpcRequestParser.parseAvailable(RpcRequestParser.java:76)
   at org.mortbay.jetty.HttpConnection.handle(HttpConnection.java:381)
   at 
 com.google.apphosting.runtime.jetty.JettyServletEngineAdapter.serviceRequest(JettyServletEngineAdapter.java:139)
   at 
 com.google.apphosting.runtime.JavaRuntime.handleRequest(JavaRuntime.java:239)
   at 
 com.google.apphosting.base.RuntimePb$EvaluationRuntime$6.handleBlockingRequest(RuntimePb.java:5135)
   at 
 com.google.apphosting.base.RuntimePb$EvaluationRuntime$6.handleBlockingRequest(RuntimePb.java:5133)
   at 
 com.google.net.rpc.impl.BlockingApplicationHandler.handleRequest(BlockingApplicationHandler.java:24)
   at com.google.net.rpc.impl.RpcUtil.runRpcInApplication(RpcUtil.java:363)
   at com.google.net.rpc.impl.Server$2.run(Server.java:814)
   at 
 com.google.tracing.LocalTraceSpanRunnable.run(LocalTraceSpanRunnable.java:56)
   at 
 com.google.tracing.LocalTraceSpanBuilder.internalContinueSpan(LocalTraceSpanBuilder.java:516)
   at com.google.net.rpc.impl.Server.startRpc(Server.java:769)
   at com.google.net.rpc.impl.Server.processRequest(Server.java:351)
   at 
 com.google.net.rpc.impl.ServerConnection.messageReceived(ServerConnection.java:437)
   at 
 com.google.net.rpc.impl.RpcConnection.parseMessages(RpcConnection.java:319)
   at 
 com.google.net.rpc.impl.RpcConnection.dataReceived(RpcConnection.java:290)
   at com.google.net.async.Connection.handleReadEvent(Connection.java:436)
   at 
 com.google.net.async.EventDispatcher.processNetworkEvents(EventDispatcher.java:762)
   at 
 

[appengine-java] Re: Problem in uploading jsp file

2009-11-28 Thread Sahil Mahajan
Problem solved.
I found the solution at comment 16 of Issue 1226:
http://groups.google.com/group/google-appengine-java/browse_thread/thread/24aadd04f3ae0245

Regards
Sahil

On Nov 25, 11:14 pm, Sahil Mahajan sahilm2...@gmail.com wrote:
 Hello Stephan

 I am new to GAE. My JAVA_HOME variable has the value
 C:\Program Files\Java\jdk1.6.0_01
 I also checked build.xml, but I could not understand where I need to
 point to the JDK instead of the JRE.

 Can you give me some more details

 Regards
 Sahil

 On Nov 24, 11:00 pm, Stephan Hartmann hartm...@metamesh.de wrote:

  You are using a Java Runtime Environment (JRE) which does not include a
  compiler.

  You have to use a JDK instead.

  Regards,
  Stephan

  sahil mahajan schrieb:

   Hello

   I am working on java google app engine. When I try to upload my
   application, I receive following error

   Error Details:

    Nov 24, 2009 10:18:11 PM org.apache.jasper.JspC processFile

    INFO: Built File: \addressbook.jsp

    java.lang.IllegalStateException: cannot find javac executable based on java.home,
    tried C:\Program Files\Java\jre1.6.0_04\bin\javac.exe and C:\Program Files\Java\bin\javac.exe

    Unable to upload app: cannot find javac executable based on java.home,
    tried C:\Program Files\Java\jre1.6.0_04\bin\javac.exe and C:\Program Files\Java\bin\javac.exe

   If I remove addressbook.jsp file, error does not occur

   What could be the reason?








[appengine-java] Re: Why is it called Google App Engine for Java ?

2009-11-28 Thread ted stockwell

Actually, many people had the same reaction when GAE/J was released.
See for instance, 
http://weblogs.java.net/blog/2009/04/16/google-app-engine-java-sucks

Without a doubt if some smaller player created such an incompatible
implementation they would not be allowed to call it 'Java'.

On Nov 27, 7:19 pm, Diana Cruise diana.l.cru...@gmail.com wrote:
As far
 as the naming goes, you may be the first to raise this concern in
 GAE's existence (2 years or so).






[appengine-java] need comment on using @transactional on service layer

2009-11-28 Thread asianCoolz
Hi, can anyone comment on whether my architecture is OK for using
@Transactional with Spring on the service layer?

http://tinyurl.com/ybmev2b





[appengine-java] Re: problem in XMPP sendMessage()

2009-11-28 Thread Sahil Mahajan
I removed .withFromJid(new JID(recipientJid[0].getId()))
but I am still facing the problem.

The servlet works correctly for the first two messages, but the problem starts
when the servlet receives the third message.

I find this strange. Initially it works fine, but it fails from the
third message on.

Regards
Sahil Mahajan



On Nov 28, 8:19 pm, Ravi Sharma ping2r...@gmail.com wrote:
 I am not sure, but i think you dont need to(should not)  set fromJid, as
 message will be sent from your application JID.
 I am running following code and its working .

 JID jid = new JID(responseJid);
         Message msg = new MessageBuilder()
             .withRecipientJids(jid)
             .withBody(msgBody)
             .build();

         boolean messageSent = false;
         XMPPService xmpp = XMPPServiceFactory.getXMPPService();
         if (xmpp.getPresence(jid).isAvailable()) {
             SendResponse status = xmpp.sendMessage(msg);
             messageSent = (status.getStatusMap().get(jid) ==
 SendResponse.Status.SUCCESS);
         }

 On Sat, Nov 28, 2009 at 2:51 PM, sahil mahajan sahilm2...@gmail.com wrote:
  I am using XMPP and getting following error when I try
  /CODE*/
  Message msg = new MessageBuilder()
                  .withRecipientJids(receiverJid)
           .withFromJid(new JID(recipientJid[0].getId()) )
    .withMessageType(MessageType.NORMAL)
                  .withBody(msgBody)
                  .build();

        SendResponse status =xmpp.sendMessage(msg);

  My JID's are correct. msgBody is not null
  Problem occures at xmpp.sendMessage(msg);
  I don't understand what is null?

  /***ERROR
  DETAILS/

  Uncaught exception from servlet
  java.lang.NullPointerException
     at 
  com.google.appengine.api.xmpp.XMPPServiceImpl.createMessageRequest(XMPPServiceImpl.java:120)
     at 
  com.google.appengine.api.xmpp.XMPPServiceImpl.sendMessage(XMPPServiceImpl.java:105)
     at 
  com.ChatRoom.server.XMPPReceiverServlet.doPost(XMPPReceiverServlet.java:165)
     at javax.servlet.http.HttpServlet.service(HttpServlet.java:713)
     at javax.servlet.http.HttpServlet.service(HttpServlet.java:806)
     at org.mortbay.jetty.servlet.ServletHolder.handle(ServletHolder.java:487)
     at 
  org.mortbay.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1093)
     at 
  com.google.apphosting.runtime.jetty.SaveSessionFilter.doFilter(SaveSessionFilter.java:35)
     at 
  org.mortbay.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1084)
     at 
  com.google.apphosting.utils.servlet.TransactionCleanupFilter.doFilter(TransactionCleanupFilter.java:43)
     at 
  org.mortbay.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1084)
     at 
  org.mortbay.jetty.servlet.ServletHandler.handle(ServletHandler.java:360)
     at 
  org.mortbay.jetty.security.SecurityHandler.handle(SecurityHandler.java:216)
     at 
  org.mortbay.jetty.servlet.SessionHandler.handle(SessionHandler.java:181)
     at 
  org.mortbay.jetty.handler.ContextHandler.handle(ContextHandler.java:712)
     at org.mortbay.jetty.webapp.WebAppContext.handle(WebAppContext.java:405)
     at 
  com.google.apphosting.runtime.jetty.AppVersionHandlerMap.handle(AppVersionHandlerMap.java:238)
     at 
  org.mortbay.jetty.handler.HandlerWrapper.handle(HandlerWrapper.java:139)
     at org.mortbay.jetty.Server.handle(Server.java:313)
     at 
  org.mortbay.jetty.HttpConnection.handleRequest(HttpConnection.java:506)
     at 
  org.mortbay.jetty.HttpConnection$RequestHandler.headerComplete(HttpConnection.java:830)
     at 
  com.google.apphosting.runtime.jetty.RpcRequestParser.parseAvailable(RpcRequestParser.java:76)
     at org.mortbay.jetty.HttpConnection.handle(HttpConnection.java:381)
     at 
  com.google.apphosting.runtime.jetty.JettyServletEngineAdapter.serviceRequest(JettyServletEngineAdapter.java:139)
     at 
  com.google.apphosting.runtime.JavaRuntime.handleRequest(JavaRuntime.java:239)
     at 
  com.google.apphosting.base.RuntimePb$EvaluationRuntime$6.handleBlockingRequest(RuntimePb.java:5135)
     at 
  com.google.apphosting.base.RuntimePb$EvaluationRuntime$6.handleBlockingRequest(RuntimePb.java:5133)
     at 
  com.google.net.rpc.impl.BlockingApplicationHandler.handleRequest(BlockingApplicationHandler.java:24)
     at com.google.net.rpc.impl.RpcUtil.runRpcInApplication(RpcUtil.java:363)
     at com.google.net.rpc.impl.Server$2.run(Server.java:814)
     at 
  com.google.tracing.LocalTraceSpanRunnable.run(LocalTraceSpanRunnable.java:56)
     at 
  com.google.tracing.LocalTraceSpanBuilder.internalContinueSpan(LocalTraceSpanBuilder.java:516)
     at com.google.net.rpc.impl.Server.startRpc(Server.java:769)
     at com.google.net.rpc.impl.Server.processRequest(Server.java:351)
     at 
  com.google.net.rpc.impl.ServerConnection.messageReceived(ServerConnection.java:437)
     at 
  

[appengine-java] Re: Web Service deployment using Axis in Google app engine for java

2009-11-28 Thread Andrey
It's interesting for me too.

This is not the answer but may be useful:
http://blog.cloudwhiz.com/2009/09/exposing-soap-service-on-gae-part-1.html

-Andrey





[appengine-java] JPA Like query

2009-11-28 Thread Wouter
Hi all,

I have a problem with a fairly basic JPA query that contains a LIKE
clause.

My query looks like this:

SELECT FROM package.Object q WHERE q.property LIKE 'w%'

According to the discussions in this forum there should be limited
support for this. However, I can't get this limited query (which is all
I need) to work.

What am I doing wrong?

regards
Wouter
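
For reference, a minimal sketch of issuing such a prefix query through a plain EntityManager ("transactions-optional" is the GAE default persistence-unit name, MyEntity and property stand in for the real class and field, and only starts-with patterns such as 'w%' are expected to work against the datastore):

import java.util.List;

import javax.persistence.EntityManager;
import javax.persistence.EntityManagerFactory;
import javax.persistence.Persistence;

public class LikeQueryExample {

    public static List<?> findByPrefix() {
        EntityManagerFactory emf =
                Persistence.createEntityManagerFactory("transactions-optional");
        EntityManager em = emf.createEntityManager();
        try {
            // Only prefix patterns are supported; the datastore evaluates them
            // as a range scan on the indexed property.
            return em.createQuery(
                    "SELECT e FROM MyEntity e WHERE e.property LIKE 'w%'")
                    .getResultList();
        } finally {
            em.close();
        }
    }
}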





[appengine-java] import deployed project into eclipse

2009-11-28 Thread IlyaE
Is there a way to import into Eclipse the source code of a previously
deployed project from the App Engine cloud? I want to work on the site
from another PC that does not share the workspace.





[appengine-java] Spring MVC with annotations

2009-11-28 Thread Rafael Reuber
I developed a program with Spring MVC + annotations. It works on my
computer, but when I deploy it and try to access the application, GAE
throws an error. Has anyone gotten annotations working with Spring MVC?

-- 
Rafael Reuber
MSN/Gtalk: psico.in...@gmail.com





[appengine-java] Re: JPA Like query

2009-11-28 Thread Wouter
Hi,

Found the problem: I was still using an old GAE version. When I switched
to GAE 1.2.6, the query described above works.

regards
Wouter

On Nov 28, 8:53 pm, Wouter wouter.nie...@gmail.com wrote:
 Hi all,

 I have a problem with a fairly basic JPA query that contains a Like
 clause

 My query looks like this

 SELECT FROM package.Object q WHERE q.property LIKE 'w%'

 According to the discussions in this forum there should be limited
 support for this. However I can´t get this limited query (which is all
 I need) to work.

 What am I doing wrong ?

 regards
 Wouter





Re: [appengine-java] Re: JSF 2.0.1 java.lang.ClassNotFoundException: void

2009-11-28 Thread Derek Berube
A simple project (where the JSF2 framework initializes and a default page is
displayed) is available at Google Code: http://code.google.com/p/jsf2template/.
This is the source code for the application I wrote while developing the tutorial
on configuring a JSF 2 project to run on Google App Engine -
http://jsf2template.appspot.com/.

Derek


On Nov 23, 2009, at 1:55 PM, Alexandre Calvão wrote:

 Does anyone have an archetype project for GAE with JSF working?

 I could not make this thing work.

 Thanks
 
 
  
 2009/11/20 addy.bhardwaj addy.bhard...@gmail.com
 I had a similar issue. The way I fixed it was to configure state
 saving to client rather than server. For details check my blog
 http://consultingblogs.emc.com/jaddy/archive/2009/11/20/jsf2-in-google-app-engine.aspx
 
 Let me know if this fixes your problem.
 
 On Nov 12, 6:15 pm, Mirco Attocchi ami...@gmail.com wrote:
  Problem using Mojarra 2.0.1 and GAE 1.2.6, but sometimes pages render
  correctly.
 
  javax.servlet.ServletException: java.lang.RuntimeException:
  java.lang.ClassNotFoundException: void
  at com.google.apphosting.runtime.jetty.AppVersionHandlerMap.handle
  (AppVersionHandlerMap.java:240)
  at org.mortbay.jetty.handler.HandlerWrapper.handle
  (HandlerWrapper.java:139)
  at org.mortbay.jetty.Server.handle(Server.java:313)
  at 
  org.mortbay.jetty.HttpConnection.handleRequest(HttpConnection.java:
  506)
  at org.mortbay.jetty.HttpConnection$RequestHandler.headerComplete
  (HttpConnection.java:830)
  at 
  com.google.apphosting.runtime.jetty.RpcRequestParser.parseAvailable
  (RpcRequestParser.java:76)
  at org.mortbay.jetty.HttpConnection.handle(HttpConnection.java:381)
  at com.google.apphosting.runtime.jetty.JettyServletEngineAdapter.serviceRequest(JettyServletEngineAdapter.java:139)
  at com.google.apphosting.runtime.JavaRuntime.handleRequest
  (JavaRuntime.java:239)
  at com.google.apphosting.base.RuntimePb$EvaluationRuntime
  $6.handleBlockingRequest(RuntimePb.java:5135)
  at com.google.apphosting.base.RuntimePb$EvaluationRuntime
  $6.handleBlockingRequest(RuntimePb.java:5133)
  at com.google.net.rpc.impl.BlockingApplicationHandler.handleRequest
  (BlockingApplicationHandler.java:24)
  at com.google.net.rpc.impl.RpcUtil.runRpcInApplication(RpcUtil.java:
  363)
  at com.google.net.rpc.impl.Server$2.run(Server.java:814)
  at com.google.tracing.LocalTraceSpanRunnable.run
  (LocalTraceSpanRunnable.java:56)
  at com.google.tracing.LocalTraceSpanBuilder.internalContinueSpan
  (LocalTraceSpanBuilder.java:516)
  at com.google.net.rpc.impl.Server.startRpc(Server.java:769)
  at com.google.net.rpc.impl.Server.processRequest(Server.java:351)
  at com.google.net.rpc.impl.ServerConnection.messageReceived
  (ServerConnection.java:437)
  at com.google.net.rpc.impl.RpcConnection.parseMessages
  (RpcConnection.java:319)
  at com.google.net.rpc.impl.RpcConnection.dataReceived
  (RpcConnection.java:290)
  at com.google.net.async.Connection.handleReadEvent(Connection.java:
  436)
  at com.google.net.async.EventDispatcher.processNetworkEvents
  (EventDispatcher.java:762)
  at com.google.net.async.EventDispatcher.internalLoop
  (EventDispatcher.java:207)
  at com.google.net.async.EventDispatcher.loop(EventDispatcher.java:
  101)
  at com.google.net.rpc.RpcService.runUntilServerShutdown
  (RpcService.java:251)
  at com.google.apphosting.runtime.JavaRuntime$RpcRunnable.run
  (JavaRuntime.java:396)
  at java.lang.Thread.run(Unknown Source)
  Caused by: java.lang.RuntimeException:
  java.lang.ClassNotFoundException: void
  at com.google.apphosting.runtime.jetty.SessionManager.deserialize
  (SessionManager.java:389)
  at com.google.apphosting.runtime.jetty.SessionManager.loadSession
  (SessionManager.java:307)
  at com.google.apphosting.runtime.jetty.SessionManager.getSession
  (SessionManager.java:282)
  at org.mortbay.jetty.servlet.AbstractSessionManager.getHttpSession
  (AbstractSessionManager.java:237)
  at org.mortbay.jetty.Request.getSession(Request.java:998)
  at
  com.sun.faces.application.WebappLifecycleListener.syncSessionScopedBeans
  (WebappLifecycleListener.java:393)
  at 
  com.sun.faces.application.WebappLifecycleListener.requestDestroyed
  (WebappLifecycleListener.java:117)
  at com.sun.faces.config.ConfigureListener.requestDestroyed
  (ConfigureListener.java:341)
  at org.mortbay.jetty.handler.ContextHandler.handle
  (ContextHandler.java:725)
  at org.mortbay.jetty.webapp.WebAppContext.handle(WebAppContext.java:
  405)
  at com.google.apphosting.runtime.jetty.AppVersionHandlerMap.handle
  (AppVersionHandlerMap.java:238)
  ... 27 more
  Caused by: java.lang.ClassNotFoundException: void
  at 

[appengine-java] Re: redirect /file.jsp

2009-11-28 Thread Jeune
You might want to use a framework like Struts or Spring. I also have a
redirect.jsp in my app like Rusty, except that any of my Spring MVC
controllers can use it, packaging into the request the URL the page
will be redirected to.

Anyway, the benefit I see here is that, for one, you don't have to
declare it as static in appengine-web.xml (it works for me), and
of course there are many other benefits the framework can give.

On Nov 28, 2:49 am, Don lydonchan...@gmail.com wrote:
 Hi,

 Trivial question for the gurus here,

 if I do:
     response.sendRedirect("bla.jsp")
 I get
 "WARNING: Can not serve /bla.jsp directly.  You need to include it in
 static-files in your appengine-web.xml." on the development server
 (localhost).

 Everything is OK when it is run on the cloud.
 Why??

 I know I have to do this below so it works on localhost...
 <servlet>
     <servlet-name>blajsp</servlet-name>
     <jsp-file>/bla.jsp</jsp-file>
 </servlet>

 <servlet-mapping>
     <servlet-name>blajsp</servlet-name>
     <url-pattern>/blajsp</url-pattern>
 </servlet-mapping>
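
A small related note (assuming the mapping above is in place): redirecting to the mapped URL rather than to the raw .jsp keeps the request inside the servlet container on both the dev server and production, e.g.

response.sendRedirect("/blajsp");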
 





[appengine-java] Re: problem in XMPP sendMessage()

2009-11-28 Thread m seleron
Hi.

I tested execution with the following source:

JID jid = new JID("x...@gmail.com");  // set your destination Gmail address
Message msg = new MessageBuilder()
    .withRecipientJids(jid)
    .withFromJid(new JID("x...@appspot.com"))  // set your ap...@appspot.com
    .withMessageType(MessageType.NORMAL)
    .withBody("send-message")  // set the message String
    .build();
boolean messageSent = false;
XMPPService xmpp = XMPPServiceFactory.getXMPPService();
if (xmpp.getPresence(jid).isAvailable()) {
    SendResponse status = xmpp.sendMessage(msg);
    messageSent = (status.getStatusMap().get(jid) == SendResponse.Status.SUCCESS);
}

I executed it about ten times.
My Gtalk seems to receive the messages normally.

Various possibilities come to mind, though.
If it is possible,
please execute it with fixed values
as much as possible, to simplify the problem.

I hope something here is useful.

thanks.


On Nov 29, 2:52 am, Sahil Mahajan sahilm2...@gmail.com wrote:
 I removed .withFromJid(new JID(recipientJid[0].getId()) )
 but I am still facing problem.

 The servlet works correctly for first two messages. But problem starts
 when servlet receives third message.

 I find this strange. Initially it works fine, but gives problem from
 third message.

 Regards
 Sahil Mahajan

 On Nov 28, 8:19 pm, Ravi Sharma ping2r...@gmail.com wrote:

  I am not sure, but i think you dont need to(should not)  set fromJid, as
  message will be sent from your application JID.
  I am running following code and its working .

  JID jid = new JID(responseJid);
          Message msg = new MessageBuilder()
              .withRecipientJids(jid)
              .withBody(msgBody)
              .build();

          boolean messageSent = false;
          XMPPService xmpp = XMPPServiceFactory.getXMPPService();
          if (xmpp.getPresence(jid).isAvailable()) {
              SendResponse status = xmpp.sendMessage(msg);
              messageSent = (status.getStatusMap().get(jid) ==
  SendResponse.Status.SUCCESS);
          }

  On Sat, Nov 28, 2009 at 2:51 PM, sahil mahajan sahilm2...@gmail.com wrote:
   I am using XMPP and getting following error when I try
   /CODE*/
   Message msg = new MessageBuilder()
                   .withRecipientJids(receiverJid)
            .withFromJid(new JID(recipientJid[0].getId()) )
     .withMessageType(MessageType.NORMAL)
                   .withBody(msgBody)
                   .build();

         SendResponse status =xmpp.sendMessage(msg);

   My JID's are correct. msgBody is not null
   Problem occures at xmpp.sendMessage(msg);
   I don't understand what is null?

   /***ERROR
   DETAILS/

   Uncaught exception from servlet
   java.lang.NullPointerException
      at 
   com.google.appengine.api.xmpp.XMPPServiceImpl.createMessageRequest(XMPPServiceImpl.java:120)
      at 
   com.google.appengine.api.xmpp.XMPPServiceImpl.sendMessage(XMPPServiceImpl.java:105)
      at 
   com.ChatRoom.server.XMPPReceiverServlet.doPost(XMPPReceiverServlet.java:165)
      at javax.servlet.http.HttpServlet.service(HttpServlet.java:713)
      at javax.servlet.http.HttpServlet.service(HttpServlet.java:806)
      at 
   org.mortbay.jetty.servlet.ServletHolder.handle(ServletHolder.java:487)
      at 
   org.mortbay.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1093)
      at 
   com.google.apphosting.runtime.jetty.SaveSessionFilter.doFilter(SaveSessionFilter.java:35)
      at 
   org.mortbay.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1084)
      at 
   com.google.apphosting.utils.servlet.TransactionCleanupFilter.doFilter(TransactionCleanupFilter.java:43)
      at 
   org.mortbay.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1084)
      at 
   org.mortbay.jetty.servlet.ServletHandler.handle(ServletHandler.java:360)
      at 
   org.mortbay.jetty.security.SecurityHandler.handle(SecurityHandler.java:216)
      at 
   org.mortbay.jetty.servlet.SessionHandler.handle(SessionHandler.java:181)
      at 
   org.mortbay.jetty.handler.ContextHandler.handle(ContextHandler.java:712)
      at 
   org.mortbay.jetty.webapp.WebAppContext.handle(WebAppContext.java:405)
      at 
   com.google.apphosting.runtime.jetty.AppVersionHandlerMap.handle(AppVersionHandlerMap.java:238)
      at 
   org.mortbay.jetty.handler.HandlerWrapper.handle(HandlerWrapper.java:139)
      at org.mortbay.jetty.Server.handle(Server.java:313)
      at 
   org.mortbay.jetty.HttpConnection.handleRequest(HttpConnection.java:506)
      at 
   org.mortbay.jetty.HttpConnection$RequestHandler.headerComplete(HttpConnection.java:830)
      at 
   com.google.apphosting.runtime.jetty.RpcRequestParser.parseAvailable(RpcRequestParser.java:76)
      at org.mortbay.jetty.HttpConnection.handle(HttpConnection.java:381)
      at 
   com.google.apphosting.runtime.jetty.JettyServletEngineAdapter.serviceRequest(JettyServletEngineAdapter.java:139)
      at 
   

[google-appengine] Re: Can a bank system be transactional and low contentious?

2009-11-28 Thread niklasro.appspot.com


On Nov 27, 2:42 pm, 风笑雪 kea...@gmail.com wrote:
 Actually, I just want to know how to avoid a big entity group - is it
 possible or not.
 It obviously has some other parts that need to be figured out, but I'm not
 really building a bank system, just concerned about it.
The main anomaly is so-called arbitrage (no such thing...), while floats and
transaction costs are testable: a stop-loss method can reweight a Markowitz
portfolio with cron, and you win. Handling currencies requires external
verification. One approach was to keep a table of exchange rates, which
is not a modern way. You can implement your own currency amount, register the
app as a computer game, or use an external certified party you trust. The tech
we can verify ourselves; the economics is more a matter of acceptance or psychology.

 2009/11/27 Niklas Rosencrantz teknik...@gmail.com:



  On Fri, Nov 27, 2009 at 10:22 AM, 风笑雪 kea...@gmail.com wrote:
  If so, why did Google's engineers waste so much time supporting the
  useless transaction system?
  Would no Google folks like to tell me the reason?
  For the future, the tech team and investment banking are really two systems, you
  know: one is more legislative (de jure) and the technology side more de facto.
  Ask a technical physicist whether matter can tunnel through... the answer is yes.
  Ask an investment banker and the answer is the opposite (normative): that your
  vault is sealed.

  I don't think only using the app layer is a good choice, since I need to
  connect to the outside DB via urlfetch, which is not efficient and safe
  enough.
  It's most naturally and obviously a very good choice to avoid storage whenever you
  can and just pass data on.






[google-appengine] Re: Issue 777: Officially Support Naked Domains for GAE Apps

2009-11-28 Thread Greg
Why can you not set up forwarding from the naked domain to www? It is
a DNS function, not anything to do with your browser.





[google-appengine] Re: Issue 777: Officially Support Naked Domains for GAE Apps

2009-11-28 Thread niklasro.appspot.com
On Nov 28, 9:38 am, Greg g.fawc...@gmail.com wrote:
 Why can you not set up forwarding from the naked domain to www? It is
 a DNS function, not anything to do with your browser.
Onway supplierdependent got enom records, godaddy records, offsite,
onsite, looks {Fridge.koolbusiness,www.koolbusiness,koolbusiness}.com,
godaddy roundabout number this number that
A record for ___.com has been deleted which is keeping the domain from
forwarding properly. You will need to create a new A record and point
it to our forwarding IP address;___
First, log into your account:
• Go to the Go Daddy Account Login Page
• Log in using your account username (which may be the same as your
customer number) and password
If you have trouble logging in, our password reset form may help you.
You can find this form through the following link:
Account Retrieval Page
Once logged in, follow these steps:
• In the My Products section, click Domain Manager.
• Click the domain name for which you want to create an A record.
• In the Total DNS section, click Total DNS Control.
• Click the Add New A Record option.
• Complete the following:
Host Name
The host name the A record links to. You can enter @ to map the record
directly to your domain.
•OK.
Sincerely,
-- Forwarded message --
From:
Date: Thu, Nov 19, 2009 at 4:34 PM
Subject: Re: Issue 777 in googleappengine: Officially Support Naked
Domains
for GAE Apps
To: nikla...@gmail.com
Comment #41 on issue 777 by jason.a.collins: Officially Support Naked
Domains for GAE Apps
http://code.google.com/p/googleappengine/issues/detail?id=777
GoDaddy does support maintaining the path and query string on a
redirect (we
have it configured this way).
You just have to navigate through





[google-appengine] Waiting already for 6 hours for 300 records to index

2009-11-28 Thread HC
Hi,

I added an index to a table with approx 300 records almost 6 hours ago
and the index is still building. I am starting to worry, in particular
since I will have to add indexes in the future when there are many
more records. Is this kind of delay the norm? I don't see any quota
related errors, as some of the other posts here report. The app-id is
wavedirectory.

Regards,
-HC





[google-appengine] Re: preserve data during dev stage

2009-11-28 Thread dburns
By default, the development server should preserve data between runs.
Are you sure you're not launching dev_appserver with the -c or --
clear_datastore flags?

On Nov 27, 11:38 pm, james_027 cai.hai...@gmail.com wrote:
 hi,

 How do i preserve my data during the development stage. Every time I
 start my application all my previous inputted data are lost.

 thanks,





[google-appengine] Re: Waiting already for 6 hours for 300 records to index

2009-11-28 Thread mably
Same problem here with a table containing approx 100 records.  Already 30
minutes and still building.

On 28 nov, 14:07, HC hc...@gmx.net wrote:
 Hi,

 I added an index to a table with approx 300 records almost 6 hours ago
 and the index is still building. I am starting to worry, in particular
 since I will have to add indexes in the future when there are many
 more records. Is this kind of delay the norm? I don't see any quota
 related errors, as some of the other posts here report. The app-id is
 wavedirectory.

 Regards,
 -HC





[google-appengine] Re: Storing Intervals (datetime.timedelta) in the datastore.

2009-11-28 Thread Stephen


On Nov 28, 3:19 pm, JDT john.david@googlemail.com wrote:
 Hello All,

 I have an application where I need to store the difference in time
 between two events. Unfortunately the datastore only supports storing
 'time' as a datetime.datetime object.

 In an attempt to circumvent this constraint, I am adding the interval
 to the smallest possible datetime.datetime object and saving that to the
 datastore, then doing the reverse when recalling the interval.

 Before I implement this I thought I would check that I wasn't
 reinventing the wheel.

 Does anyone know if there is a better way to do this?
 Will datetime.min ever change?
 Am I stupidly creating a CPU-intensive process?

 All comments/suggestions most appreciated.

 Thanks,

 David

 For example:

 Model:
 class Item(db.Model):
     interval = db.DateTimeProperty()

 To save the interval:
 days = int(self.request.get("days"))
 hours = int(self.request.get("hours"))
 minutes = int(self.request.get("minutes"))
 item = Item()

 delay = timedelta(days=days, hours=hours, minutes=minutes)
 item.interval = datetime.min + delay
 item.put()

 To recall the interval:
 item = Item.get(key)
 delay = item.interval - datetime.min


You could create your own IntervalProperty, which would store the
interval as integer seconds and convert to a python timedelta when you
access it:


http://googleappengine.blogspot.com/2009/07/writing-custom-property-classes.html





Re: [google-appengine] Re: Waiting already for 6 hours for 300 records to index

2009-11-28 Thread Prashant
same here. i was testing a new class design and indexes are stuck for 5
records :(





Re: [google-appengine] Re: gql not giving full result set

2009-11-28 Thread Prashant
same problem here...

following is my JDO class:

@PersistenceCapable(identityType = IdentityType.APPLICATION)
public class _Contact {

    @Persistent(primaryKey = true)
    private String EmailID;

    @Persistent
    private String Name;

    @Persistent
    private List<String> Groups;
}


following is my test case:

PersistenceManager pm = pmf.getPersistenceManager();

Query query = pm.newQuery(_Contact.class);

query.setOrdering("EmailID");
query.setFilter("Groups.contains(\"mygroup\")");

int i = 1;
for (_Contact cont : (List<_Contact>) query.execute()) {
    resp.getWriter().print(i++ + " " + cont.getID() + "<br>");
}

pm.close();


The above code printed 23 contacts, and when I replaced
query.setOrdering("EmailID"); with query.setOrdering("EmailID desc"); it
printed only 18 contacts.

This proves that the indexes are not working properly. I am stuck in the middle
of development because of this bug, and nobody seems to be listening to this
problem.
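
As a debugging step (a sketch only, reusing pm and _Contact from the code above, and not a fix for the index behaviour itself), the group name can be bound as a declared parameter instead of an inline, escaped string literal, which rules out quoting or escaping problems:

Query query = pm.newQuery(_Contact.class);
query.setFilter("Groups.contains(groupParam)");
query.declareParameters("String groupParam");
query.setOrdering("EmailID desc");

@SuppressWarnings("unchecked")
List<_Contact> contacts = (List<_Contact>) query.execute("mygroup");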





[google-appengine] Re: GAE for CPU intensive time important application

2009-11-28 Thread Eric

Thanks for the response.

Maybe I don't understand something, but why should the 5-second setup
of a new instance bother me? A new instance should be created when the
other instances are near capacity, not when they exceed it, right?
So once initialized it can be dummy-run internally and only made
available 5 seconds later, while the existing instances continue to take
care of the incoming queries.

Also, do you think the latency requirements are realistic with GAE?
That is, in the ordinary case, could the response be consistently
served back to the querying user with a delay of at most 3 seconds?

On Nov 28, 8:35 am, 风笑雪 kea...@gmail.com wrote:
 The White House hosted an online town hall meeting on GAE with GWT,
 and received 700 hits per second at its peak:
 http://google-code-updates.blogspot.com/2009/04/google-developer-prod...

 But more than 1000 queries a second has never been tested.

 I think Java is not a good choice in your case. When your users
 suddenly increase, starting a new Java instance may cost more than 5
 seconds, while Python needs less than 1 second.

 2009/11/27 Eric shel...@gmail.com:



  Hi,

  I wish to set up a CPU-intensive time-important query service for
  users on the internet.
  Is GAE with Java the right choice? (as compared to other clouds, or
  non-cloud architecture)
  Specifically, in terms of:
  1) pricing
  2) latency resulting from slow CPU, JIT compiles, etc..
  3) latency resulting from communication of processes inside the cloud
  (e.g. a queuing process and a calculation process)
  4) latency of communication between cloud and end user

  A usage scenario I am expecting is:
  - a typical user sends a query (XML of size around 1K) once every 30
  seconds on average,
  - Each query requires a numerical computation of average time 0.2 sec
  and max time 1 sec (on a 1 GHz Pentium). The computation requires no
  data other than the query itself.
  - The delay a user experiences between sending a query and receiving a
  response should be on average no more than 2 seconds and in general no
  more than 5 seconds.
  - A background save to a DB of the response should occur (not time
  critical)
  - There can be up to 3 simultaneous users - i.e., on average 1000
  queries a second, each requiring an average 0.2 sec calculation, so
  that would necessitate around 200 CPUs.

  Is this feasible on GAE Java?
  If so, where can I learn about the correct design methodology for such
  a project on GAE?

  If this is the wrong forum to ask this, I'd appreciate redirection.

  Thanks,

  Eric






[google-appengine] Re: GAE for CPU intensive time important application

2009-11-28 Thread Eric
Another question, you both recommended Python for some of its
features, but isn't Python much slower than Java? So wouldn't that
necessitate many more instances/CPUs to keep up with the query load?

On Nov 28, 9:45 am, Niklas Rosencrantz teknik...@gmail.com wrote:
  1) pricing

 absolutely seems so. gae apps cost about 1/20 of previous hosting
 methods (servers)

  2) latency resulting from slow CPU, JIT compiles, etc.

 there is a latency-oriented group we don't focus on here:
 http://groups.google.com/group/make-the-web-faster
 in the long run, yes. you can compare to a dedicated physical server:
 much more difficult to configure, compiling modules specific to the
 physical architecture, getting superior response time with C++ server
 pages outputting hello world, while for most projects security and
 convenience are kings. latency is the least priority, still important.

 python is good, the same thing in python is 1/10 the code compared to
 java, no XML, yaml very neat. java strong point: more ways to solve the
 same problem.

 On Sat, Nov 28, 2009 at 6:35 AM, 风笑雪 kea...@gmail.com wrote:
  The White House hosted an online town hall meeting on GAE with GWT,
  and received 700 hits per second at its peak.
 http://google-code-updates.blogspot.com/2009/04/google-developer-prod...

  But more than 1000 queries a second has never been tested.

  I think Java is not a good choice in your case. When your user count
  suddenly increases, starting a new Java instance may take more than 5
  seconds, while Python needs less than 1 second.

  2009/11/27 Eric shel...@gmail.com:

  Hi,

  I wish to set up a CPU-intensive time-important query service for
  users on the internet.
  Is GAE with Java the right choice? (as compared to other clouds, or
  non-cloud architecture)
  Specifically, in terms of:
  1) pricing
  2) latency resulting from slow CPU, JIT compiles, etc..
  3) latency resulting from communication of processes inside the cloud
  (e.g. a queuing process and a calculation process)
  4) latency of communication between cloud and end user

  A usage scenario I am expecting is:
  - a typical user sends a query (XML of size around 1K) once every 30
  seconds on average,
  - Each query requires a numerical computation of average time 0.2 sec
  and max time 1 sec (on a 1 GHz Pentium). The computation requires no
  data other than the query itself.
  - The delay a user experiences between sending a query and receiving a
  response should be on average no more than 2 seconds and in general no
  more than 5 seconds.
  - A background save to a DB of the response should occur (not time
  critical)
  - There can be up to 30,000 simultaneous users - i.e., on average 1000
  queries a second, each requiring an average 0.2 sec calculation, so
  that would necessitate around 200 CPUs.

  Is this feasible on GAE Java?
  If so, where can I learn about the correct design methodology for such
  a project on GAE?

  If this is the wrong forum to ask this, I'd appreciate redirection.

  Thanks,

  Eric


--

You received this message because you are subscribed to the Google Groups 
Google App Engine group.
To post to this group, send email to google-appeng...@googlegroups.com.
To unsubscribe from this group, send email to 
google-appengine+unsubscr...@googlegroups.com.
For more options, visit this group at 
http://groups.google.com/group/google-appengine?hl=en.




[google-appengine] Re: preserve data during dev stage

2009-11-28 Thread Sylvain
Are you using a Mac?

On Nov 28, 3:21 pm, dburns drrnb...@gmail.com wrote:
 By default, the development server should preserve data between runs.
 Are you sure you're not launching dev_appserver with the -c or
 --clear_datastore flags?

 On Nov 27, 11:38 pm, james_027 cai.hai...@gmail.com wrote:

  hi,

  How do I preserve my data during the development stage? Every time I
  start my application, all my previously entered data is lost.

  thanks,
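
For reference, a rough illustration of the flags being discussed (the app
directory and the datastore path below are made up for the example). The
local datastore is kept between runs unless it is cleared explicitly, and it
can be pointed at a stable file if your temp directory gets cleaned:

dev_appserver.py myapp/                      # keeps the datastore between runs
dev_appserver.py --clear_datastore myapp/    # wipes the datastore at startup
dev_appserver.py --datastore_path=/some/stable/path/myapp.datastore myapp/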

--

You received this message because you are subscribed to the Google Groups 
Google App Engine group.
To post to this group, send email to google-appeng...@googlegroups.com.
To unsubscribe from this group, send email to 
google-appengine+unsubscr...@googlegroups.com.
For more options, visit this group at 
http://groups.google.com/group/google-appengine?hl=en.




Re: [google-appengine] Re: GAE for CPU intensive time important application

2009-11-28 Thread 风笑雪
An app instance cannot serve 2 requests at the same time.
Suppose you have 100 requests/sec, and GAE offers 100 app instances to
serve these requests.
At some point traffic suddenly rises to 200 requests/sec, so GAE tries
to create 100 more app instances to serve it.
But creating a new Java instance may take more than 5 seconds; during
these 5 seconds, 500 requests have to wait, and new requests are still
coming.
So GAE creates even more app instances, maybe 50, 100 or 200, and kills
them when the 500 waiting requests have been finished.
Thus I think Java will waste much more time than Python.

Queries are almost the same speed in Python and Java, but you need the
low-level API for batch queries:
http://gaejava.appspot.com/

2009/11/29 Eric shel...@gmail.com:
 Another question, you both recommended Python for some of its
 features, but isn't Python much slower than Java? So wouldn't that
 necessitate many more instances/CPUs to keep with the query load?

 On Nov 28, 9:45 am, Niklas Rosencrantz teknik...@gmail.com wrote:
  1) pricing

 absolutely seems so. gae apps cost about 1/20 of previous hosting
 methods (servers)

  2) latency resulting from slow CPU, JIT compiles, etc.

 there is a latency-oriented group we don't focus on here:
 http://groups.google.com/group/make-the-web-faster
 in the long run, yes. you can compare to a dedicated physical server:
 much more difficult to configure, compiling modules specific to the
 physical architecture, getting superior response time with C++ server
 pages outputting hello world, while for most projects security and
 convenience are kings. latency is the least priority, still important.

 python is good, the same thing in python is 1/10 the code compared to
 java, no XML, yaml very neat. java strong point: more ways to solve the
 same problem.

 On Sat, Nov 28, 2009 at 6:35 AM, 风笑雪 kea...@gmail.com wrote:
  The White House hosted an online town hall meeting on GAE with GWT,
  and received 700 hits per second at its peak.
 http://google-code-updates.blogspot.com/2009/04/google-developer-prod...

  But more than 1000 queries a second has never been tested.

  I think Java is not a good choice in your case. When your user count
  suddenly increases, starting a new Java instance may take more than 5
  seconds, while Python needs less than 1 second.

  2009/11/27 Eric shel...@gmail.com:

  Hi,

  I wish to set up a CPU-intensive time-important query service for
  users on the internet.
  Is GAE with Java the right choice? (as compared to other clouds, or
  non-cloud architecture)
  Specifically, in terms of:
  1) pricing
  2) latency resulting from slow CPU, JIT compiles, etc..
  3) latency resulting from communication of processes inside the cloud
  (e.g. a queuing process and a calculation process)
  4) latency of communication between cloud and end user

  A usage scenario I am expecting is:
  - a typical user sends a query (XML of size around 1K) once every 30
  seconds on average,
  - Each query requires a numerical computation of average time 0.2 sec
  and max time 1 sec (on a 1 GHz Pentium). The computation requires no
  data other than the query itself.
  - The delay a user experiences between sending a query and receiving a
  response should be on average no more than 2 seconds and in general no
  more than 5 seconds.
  - A background save to a DB of the response should occur (not time
  critical)
  - There can be up to 30,000 simultaneous users - i.e., on average 1000
  queries a second, each requiring an average 0.2 sec calculation, so
  that would necessitate around 200 CPUs.

  Is this feasible on GAE Java?
  If so, where can I learn about the correct design methodology for such
  a project on GAE?

  If this is the wrong forum to ask this, I'd appreciate redirection.

  Thanks,

  Eric





--

You received this message because you are subscribed to the Google Groups 
Google App Engine group.
To post to this group, send email to google-appeng...@googlegroups.com.
To unsubscribe from this group, send email to 
google-appengine+unsubscr...@googlegroups.com.
For more options, visit this group at 
http://groups.google.com/group/google-appengine?hl=en.

[google-appengine] Re: How can I pre-fill the email address when deploying my apps?

2009-11-28 Thread dburns
On the command line, you can use:

appcfg.py --email=a...@b.com update [dir]

You still have to enter your password.  But I find it only prompts me
for my password every 24 hours.

On Nov 27, 8:44 pm, samwyse samw...@gmail.com wrote:
 *sigh*

 Thanks for letting me know it isn't just me.  Maybe someone from
 Google will fix this.

 On Nov 27, 8:35 am, Brian br...@semo.net wrote:

  I got tired of that and went back to the command line. After I run it
  once, then it is up arrow followed by Enter to start the upload,
  another up arrow + Enter to auto-fill the email field. And then I
  type in the password. It is much faster than the launcher.

  From your command prompt move to \program files\google
  \google_appengine on pre-Win7 or \program files (x86)\google
  \google_appengine on Win7. Then issue the command

  appcfg.py update [your app folder]

  Brian

--

You received this message because you are subscribed to the Google Groups 
Google App Engine group.
To post to this group, send email to google-appeng...@googlegroups.com.
To unsubscribe from this group, send email to 
google-appengine+unsubscr...@googlegroups.com.
For more options, visit this group at 
http://groups.google.com/group/google-appengine?hl=en.




[google-appengine] Re: GAE for CPU intensive time important application

2009-11-28 Thread Andy Freeman
 Another question, you both recommended Python for some of its
 features, but isn't Python much slower than Java?

Maybe, maybe not, but it may not matter.  What fraction of your
runtime actually depends on language speed?

On Nov 28, 9:13 am, Eric shel...@gmail.com wrote:
 Another question, you both recommended Python for some of its
 features, but isn't Python much slower than Java? So wouldn't that
 necessitate many more instances/CPUs to keep with the query load?

 On Nov 28, 9:45 am, Niklas Rosencrantz teknik...@gmail.com wrote:



   1) pricing

  absolutely seems so. gae apps cost about 1/20 of previous hosting
  methods (servers)

   2) latency resulting from slow CPU, JIT compiles, etc.

  there is a latency-oriented group we don't focus on here:
  http://groups.google.com/group/make-the-web-faster
  in the long run, yes. you can compare to a dedicated physical server:
  much more difficult to configure, compiling modules specific to the
  physical architecture, getting superior response time with C++ server
  pages outputting hello world, while for most projects security and
  convenience are kings. latency is the least priority, still important.

  python is good, the same thing in python is 1/10 the code compared to
  java, no XML, yaml very neat. java strong point: more ways to solve the
  same problem.

  On Sat, Nov 28, 2009 at 6:35 AM, 风笑雪 kea...@gmail.com wrote:
   The White House hosted an online town hall meeting on GAE with GWT,
   and received 700 hits per second at its peak.
  http://google-code-updates.blogspot.com/2009/04/google-developer-prod...

   But more than 1000 queries a second has never been tested.

   I think Java is not a good choice in your case. When your user count
   suddenly increases, starting a new Java instance may take more than 5
   seconds, while Python needs less than 1 second.

   2009/11/27 Eric shel...@gmail.com:

   Hi,

   I wish to set up a CPU-intensive time-important query service for
   users on the internet.
   Is GAE with Java the right choice? (as compared to other clouds, or
   non-cloud architecture)
   Specifically, in terms of:
   1) pricing
   2) latency resulting from slow CPU, JIT compiles, etc..
   3) latency resulting from communication of processes inside the cloud
   (e.g. a queuing process and a calculation process)
   4) latency of communication between cloud and end user

   A usage scenario I am expecting is:
   - a typical user sends a query (XML of size around 1K) once every 30
   seconds on average,
   - Each query requires a numerical computation of average time 0.2 sec
   and max time 1 sec (on a 1 GHz Pentium). The computation requires no
   data other than the query itself.
   - The delay a user experiences between sending a query and receiving a
   response should be on average no more than 2 seconds and in general no
   more than 5 seconds.
   - A background save to a DB of the response should occur (not time
   critical)
   - There can be up to 30,000 simultaneous users - i.e., on average 1000
   queries a second, each requiring an average 0.2 sec calculation, so
   that would necessitate around 200 CPUs.

   Is this feasible on GAE Java?
   If so, where can I learn about the correct design methodology for such
   a project on GAE?

   If this is the wrong forum to ask this, I'd appreciate redirection.

   Thanks,

   Eric


--

You received this message because you are subscribed to the Google Groups 
Google App Engine group.
To post to this group, send email to google-appeng...@googlegroups.com.
To unsubscribe from this group, send email to 
google-appengine+unsubscr...@googlegroups.com.
For more options, visit this group at 
http://groups.google.com/group/google-appengine?hl=en.




[google-appengine] Re: GAE for CPU intensive time important application

2009-11-28 Thread Andy Freeman
 Maybe I don't understand something, but why should the 5 second setup
 on a new instance bother me? A new instance should be created when
 other instances are near capacity, and not when they exceed it, right?
 So once initialized it can be dummy-run internally and only
 available 5 seconds later while the existing instance continue to take
 care of the incoming queries.

What makes you think that the request that causes the creation of a
new instance doesn't wait for the creation of said instance?  (The
scheme you suggest is plausible, but why do you think that it's how
appengine works?)

On Nov 28, 9:03 am, Eric shel...@gmail.com wrote:
 Thanks for the response.

 Maybe I don't understand something, but why should the 5 second setup
 on a new instance bother me? A new instance should be created when
 other instances are near capacity, and not when they exceed it, right?
 So once initialized it can be dummy-run internally and only
 available 5 seconds later while the existing instance continue to take
 care of the incoming queries.

 Also, do you think the latency requirements are realistic with GAE?
 That is, in the ordinary case, could the response be consistently
 served back to the querying user with delay of max 3 seconds?

 On Nov 28, 8:35 am, 风笑雪 kea...@gmail.com wrote:



  The White House hosted an online town hall meeting on GAE with GWT,
  and received 700 hits per second at its peak:
  http://google-code-updates.blogspot.com/2009/04/google-developer-prod...

  But more than 1000 queries a second has never been tested.

  I think Java is not a good choice in your case. When your user count
  suddenly increases, starting a new Java instance may take more than 5
  seconds, while Python needs less than 1 second.

  2009/11/27 Eric shel...@gmail.com:

   Hi,

   I wish to set up a CPU-intensive time-important query service for
   users on the internet.
   Is GAE with Java the right choice? (as compared to other clouds, or
   non-cloud architecture)
   Specifically, in terms of:
   1) pricing
   2) latency resulting from slow CPU, JIT compiles, etc..
   3) latency resulting from communication of processes inside the cloud
   (e.g. a queuing process and a calculation process)
   4) latency of communication between cloud and end user

   A usage scenario I am expecting is:
   - a typical user sends a query (XML of size around 1K) once every 30
   seconds on average,
   - Each query requires a numerical computation of average time 0.2 sec
   and max time 1 sec (on a 1 GHz Pentium). The computation requires no
   data other than the query itself.
   - The delay a user experiences between sending a query and receiving a
   response should be on average no more than 2 seconds and in general no
   more than 5 seconds.
   - A background save to a DB of the response should occur (not time
   critical)
   - There can be up to 30,000 simultaneous users - i.e., on average 1000
   queries a second, each requiring an average 0.2 sec calculation, so
   that would necessitate around 200 CPUs.

   Is this feasible on GAE Java?
   If so, where can I learn about the correct design methodology for such
   a project on GAE?

   If this is the wrong forum to ask this, I'd appreciate redirection.

   Thanks,

   Eric


--

You received this message because you are subscribed to the Google Groups 
Google App Engine group.
To post to this group, send email to google-appeng...@googlegroups.com.
To unsubscribe from this group, send email to 
google-appengine+unsubscr...@googlegroups.com.
For more options, visit this group at 
http://groups.google.com/group/google-appengine?hl=en.




[google-appengine] Re: Waiting already for 6 hours for 300 records to index

2009-11-28 Thread mably
Yes! My index is serving :-)

On 28 nov, 17:04, Prashant antsh...@gmail.com wrote:
 Same here. I was testing a new class design and indexes are stuck for 5
 records :(

--

You received this message because you are subscribed to the Google Groups 
Google App Engine group.
To post to this group, send email to google-appeng...@googlegroups.com.
To unsubscribe from this group, send email to 
google-appengine+unsubscr...@googlegroups.com.
For more options, visit this group at 
http://groups.google.com/group/google-appengine?hl=en.




Re: [google-appengine] Re: Waiting already for 6 hours for 300 records to index

2009-11-28 Thread Prashant
mine too...

--

You received this message because you are subscribed to the Google Groups 
Google App Engine group.
To post to this group, send email to google-appeng...@googlegroups.com.
To unsubscribe from this group, send email to 
google-appengine+unsubscr...@googlegroups.com.
For more options, visit this group at 
http://groups.google.com/group/google-appengine?hl=en.




[google-appengine] Help with an accurate counter with no contention...?

2009-11-28 Thread peterk
Hey all,

I've been looking at the Task Queue API and counter example. In my
app, each user will have a couple of counters maintained for them,
counting various things.

Thing is, these counters need to be accurate. So I'm not sure if the
example given for the Task Queue API using memcache would be
appropriate for me - it would not be good, really, if my counters were
to be inaccurate. My users would expect accurate counts here.

So I was thinking about a sort of modified version whereby each change
to the counter would be stored in the DS in its own entity. E.g. an
entity called 'counter_delta' or some such, which holds the delta to
apply to a counter, and the key to the counter that the delta is to be
applied to.

Then, using the Task Queue I guess I could hoover up all these delta
entities, aggregate them, and apply them in one go (or in batches) to
the counter. And then delete the delta entries.

Thus the task queue is the only thing accessing the counter entity,
and it does so in a controllable fashion - so no real contention. Each
change to the counter gets written to the store in its own
counter_delta entity, so no contention there either. And because the
deltas are stored in DS and not in memcache, it should be much more
reliable.

However, I'm not entirely sure how I should actually go about
implementing this, or specifically, the task queue end of things.

I'm thinking if there is a change to a counter to be made, I should
check if there's a task already running for this counter, and if so,
not insert any new task, and let the currently running task take
care of it. If there is no running task for this counter, I would
instead create one, and set it to run in - say - 60 seconds, allowing
time for further deltas for this counter to accumulate so the task can
take care of more than just one delta. This would mean the counter
might be inaccurate for up to 60 seconds, but I can live with that.

But what I'm wondering is, how can I implement this 'don't insert a
new task if one for this counter is already in the queue or running'
behaviour?

I was thinking initially that I could give the task a name based on
the counter, so that only one such task can exist at any one time.
However, I believe we have no control over when that name is freed up
- it isn't necessarily freed up when the task ends, I believe names
can be reserved for up to 7 days (?) So that wouldn't work. If a name
could be freed up when a task was really finished then this could
work, I think.

I was thinking also I could store a flag so that when a counter_delta
is created, I'd look to see if a flag for this counter was present,
and if so, do nothing. If not, create the task, and create the flag.
Then when the task was all done and didn't see any more
counter_deltas, it'd delete the flag. But I'm worried that there could
be race conditions here, and some deltas might get overlooked as a
result? And if I were to use transactions on such a flag, would I not
fall into the same contention trap I'm trying to avoid in the first
place?

Help? :| Thanks for any advice/insight...
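
For what it's worth, a minimal sketch of the delta-entity scheme described
above in the Python db API (model names and the batch size are made up for
illustration; scheduling the aggregation task without duplicates is exactly
the open question):

from google.appengine.ext import db

class Counter(db.Model):
    count = db.IntegerProperty(default=0)

class CounterDelta(db.Model):
    counter = db.ReferenceProperty(Counter)
    delta = db.IntegerProperty(required=True)

def record_change(counter_entity, amount):
    # One new entity per change, so there is no contention on the Counter itself.
    CounterDelta(counter=counter_entity, delta=amount).put()

def aggregate(counter_key):
    # Body of the aggregation task: fold pending deltas into the counter,
    # then delete them.
    deltas = CounterDelta.all().filter('counter =', counter_key).fetch(500)
    if deltas:
        counter = db.get(counter_key)
        counter.count += sum(d.delta for d in deltas)
        counter.put()
        db.delete(deltas)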

--

You received this message because you are subscribed to the Google Groups 
Google App Engine group.
To post to this group, send email to google-appeng...@googlegroups.com.
To unsubscribe from this group, send email to 
google-appengine+unsubscr...@googlegroups.com.
For more options, visit this group at 
http://groups.google.com/group/google-appengine?hl=en.




[google-appengine] Re: GAE for CPU intensive time important application

2009-11-28 Thread Eric
Thanks for all your comments.
Regarding Python/Java speed, 99% of the runtime is spent iterating in
an attempt to converge to some numerical solution:
loops, arithmetic, and memory updates. I would guess an interpreted
language like Python (am I right?) would be much slower.

About the instance set up time, I agree that if query frequency
suddenly rises from 100/sec to 200/sec the system I suggested would
stall, but I don't think I'm expecting such drastic changes in a
small amount of time.
Regarding my suggestion on new instance creation, don't I have a say
on when and how that happens? Or is it totally up to AppEngine to
define when a new instance is created? Can I, for example, have 10
instances that are simply waiting for situations like Keakon
described?
I'm trying to form some intuitive picture of how GAE works by guessing
and  hoping to be corrected by someone when/if I'm wrong.
Maybe I require a cloud platform that allows tinkering at lower
levels?

On Nov 28, 9:54 pm, Andy Freeman ana...@earthlink.net wrote:
  Maybe I don't understand something, but why should the 5 second setup
  on a new instance bother me? A new instance should be created when
  other instances are near capacity, and not when they exceed it, right?
  So once initialized it can be dummy-run internally and only
  available 5 seconds later while the existing instance continue to take
  care of the incoming queries.

 What makes you think that the request that causes the creation of a
 new instance doesn't wait for the creation of said instance?  (The
 scheme you suggest is plausible, but why do you think that it's how
 appengine works?)

 On Nov 28, 9:03 am, Eric shel...@gmail.com wrote:

  Thanks for the response.

  Maybe I don't understand something, but why should the 5 second setup
  on a new instance bother me? A new instance should be created when
  other instances are near capacity, and not when they exceed it, right?
  So once initialized it can be dummy-run internally and only
  available 5 seconds later while the existing instance continue to take
  care of the incoming queries.

  Also, do you think the latency requirements are realistic with GAE?
  That is, in the ordinary case, could the response be consistently
  served back to the querying user with delay of max 3 seconds?

  On Nov 28, 8:35 am, 风笑雪 kea...@gmail.com wrote:

   The White House hosted an online town hall meeting on GAE with GWT,
   and received 700 hits per second at its peak:
   http://google-code-updates.blogspot.com/2009/04/google-developer-prod...

   But more than 1000 queries a second has never been tested.

   I think Java is not a good choice in your case. When your user count
   suddenly increases, starting a new Java instance may take more than 5
   seconds, while Python needs less than 1 second.

   2009/11/27 Eric shel...@gmail.com:

Hi,

I wish to set up a CPU-intensive time-important query service for
users on the internet.
Is GAE with Java the right choice? (as compared to other clouds, or
non-cloud architecture)
Specifically, in terms of:
1) pricing
2) latency resulting from slow CPU, JIT compiles, etc..
3) latency resulting from communication of processes inside the cloud
(e.g. a queuing process and a calculation process)
4) latency of communication between cloud and end user

A usage scenario I am expecting is:
- a typical user sends a query (XML of size around 1K) once every 30
seconds on average,
- Each query requires a numerical computation of average time 0.2 sec
and max time 1 sec (on a 1 GHz Pentium). The computation requires no
data other than the query itself.
- The delay a user experiences between sending a query and receiving a
response should be on average no more than 2 seconds and in general no
more than 5 seconds.
- A background save to a DB of the response should occur (not time
critical)
- There can be up to 30,000 simultaneous users - i.e., on average 1000
queries a second, each requiring an average 0.2 sec calculation, so
that would necessitate around 200 CPUs.

Is this feasible on GAE Java?
If so, where can I learn about the correct design methodology for such
a project on GAE?

If this is the wrong forum to ask this, I'd appreciate redirection.

Thanks,

Eric


--

You received this message because you are subscribed to the Google Groups 
Google App Engine group.
To post to this group, send email to google-appeng...@googlegroups.com.
To unsubscribe from this group, send email to 
google-appengine+unsubscr...@googlegroups.com.
For more options, visit this group at 
http://groups.google.com/group/google-appengine?hl=en.

Re: [google-appengine] Help with an accurate counter with no contention...?

2009-11-28 Thread Eli Jones
I think there would have to be some divergence between what the counter
should be and what the user will actually see at any given time.. since if
you have a high rate of counts happening for a user.. you'd get contention
when trying to update the count each time the event being counted
happened.

Of course, you know that part.. since they have all these sharding examples.

So, you gotta decide how stale the count can be...

Once you decide that, since you don't seem to want any potential loss of
counts... you'd probably need two Models to do counting for each user.
(memcache would be out since that allows potential lost counts)

One for each individual count inserted (call it UserCounter) and one for the
count that the user sees (UserTotalCount).

So, if a count-event happens you insert a new row into UserCounter.

Then you should have a task that runs, selects __key__ from UserCounter,
finds out how many entities were returned, updates the UserTotalCount model
with the additional counts, and once that update is successful, deletes
the keys/entities for those counts that it selected.  AND then, once all of
that is successful, have the Task essentially schedule itself to run again
in N seconds or however long you've decided to give it.

Presumably, doing it this way would allow you to make sure that the
counterupdate task is running one at a time for each user (since it can only
start itself up again after it is done counting).. and you would avoid write
contention since the task is the only thing updating the user's counter.

Probably, you could set up two Expando models to do this for all users.. and
each time a new user was created, you'd add a new Column to the Expando
models for that user.

so, you'd have these initial definitions:

class UserCounter(db.Expando):
    BobCountEvent = db.BooleanProperty(required=True)

class UserTotalCount(db.Expando):
    BobTotalCount = db.IntegerProperty(required=True)


Then, each time user Bob has a count event you do:

bobCount = UserCounter(BobCountEvent = True)
bobCount.put()

And when you want to update Bob's Total Count, you do (I have to do this
quasi-pseudo since it isn't trivial to do):

results = db.GqlQuery("SELECT __key__ FROM UserCounter "
                      "WHERE BobCountEvent = True").fetch(1000)
if len(results) > 0:
    countResult = db.GqlQuery("SELECT * FROM UserTotalCount "
                              "WHERE BobTotalCount >= 0").fetch(1)
    if len(countResult) > 0:
        countResult[0].BobTotalCount += len(results)
        db.put(countResult[0])
    else:
        newCount = UserTotalCount(BobTotalCount=len(results))
        newCount.put()
    db.delete(results)

Now, you might wonder... how do I do puts for variable user names? (You
can't just create new put functions for each new user.)  In Python, you can
use exec to do that..

I have not tested how any of this performs... having an expando model may
hurt performance.. but, I don't think so, and I know the method works for
other things (not sure how it'd do on this counter method).

See here for Google's sharded counts example:
http://code.google.com/appengine/articles/sharding_counters.html

On Sat, Nov 28, 2009 at 5:17 PM, peterk peter.ke...@gmail.com wrote:

 Hey all,

 I've been looking at the Task Queue API and counter example. In my
 app, each user will have a couple of counters maintained for them,
 counting various things.

 Thing is, these counters need to be accurate. So I'm not sure if the
 example given for the Task Queue API using memcache would be
 appropriate for me - it would not be good, really, if my counters were
 to be inaccurate. My users would expect accurate counts here.

 So I was thinking about a sort of modified version whereby each change
 to the counter would be stored in the DS in its own entity. E.g. an
 entity called 'counter_delta' or some such, which holds the delta to
 apply to a counter, and the key to the counter that the delta is to be
 applied to.

 Then, using the Task Queue I guess I could hoover up all these delta
 entities, aggregate them, and apply them in one go (or in batches) to
 the counter. And then delete the delta entries.

 Thus the task queue is the only thing accessing the counter entity,
 and it does so in a controllable fashion - so no real contention. Each
 change to the counter gets written to the store in its own
 counter_delta entity, so no contention there either. And because the
 deltas are stored in DS and not in memcache, it should be much more
 reliable.

 However, I'm not entirely sure how I should actually go about
 implementing this, or specifically, the task queue end of things.

 I'm thinking if there is a change to a counter to be made, I should
 check if there's a task already running for this counter, and if so,
 not to do insert any new task, and let the currently running task take
 care of it. If there is no running task for this counter, I would
 instead create one, and set it to run in - say - 60 seconds, allowing
 time for further deltas for this counter to accumulate so the task can
 take care of more than just one delta. This 

[google-appengine] Reference Properties

2009-11-28 Thread MajorProgamming
I was wondering: isn't using reference properties a waste of space?
Wouldn't it make more sense to store the id (assuming not using
key_name) of the entity? After all, if the Kind is known, one can
easily generate the full key based on that. And with a reference
property it actually stores the _entire_ key (very long!).

Why?
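
A sketch of the idea being described, using the db API (the kind and
property names here are made up): store just the numeric id and rebuild the
full key from the known kind when you need it. What ReferenceProperty adds
on top of this, such as automatic dereferencing and back-references, is a
separate question.

from google.appengine.ext import db

class Author(db.Model):
    name = db.StringProperty()

class Book(db.Model):
    # Instead of db.ReferenceProperty(Author), keep only the numeric id.
    author_id = db.IntegerProperty()

def get_author(book):
    # The kind is known, so the full key can be rebuilt from the stored id.
    return db.get(db.Key.from_path('Author', book.author_id))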

--

You received this message because you are subscribed to the Google Groups 
Google App Engine group.
To post to this group, send email to google-appeng...@googlegroups.com.
To unsubscribe from this group, send email to 
google-appengine+unsubscr...@googlegroups.com.
For more options, visit this group at 
http://groups.google.com/group/google-appengine?hl=en.




Re: [google-appengine] Help with an accurate counter with no contention...?

2009-11-28 Thread Eli Jones
To be fair, this method I described may be overkill.

I developed it when I was thinking about how to lighten up insert costs to
the datastore.

I figured that, if one could store some of the relevant information in the
Column name (specifically, string info like whose count is this?), that
would reduce the total size of the entity.. and thus speed up writes.

It was suggested that the performance wouldn't be much different than just
having models like this:

class UserCounter(db.Model):
  Username = db.StringProperty(required = True)

class UserTotalCount(db.Model):
  Username = db.StringProperty(required = True)
  Count = db.IntegerProperty(required = True)

Then, you'd just
Select __key__ from UserCounter Where Username = 'Bob'
and
Select * from UserTotalCount Where Username = 'Bob'

To do your counting and updating..

Though, my intuition is that doing it this way would take more processing
power (and maybe lead to some contention) since you're inserting
StringProperties into one big column when putting UserCounter events.

Here is the initial thread covering what I was trying to figure out:
Expando and Index Partitioning:
http://groups.google.com/group/google-appengine/browse_thread/thread/6210ba5158ec0971/25b09b70ef9b82ff
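
A runnable version of those two queries with the db API looks roughly like
the following (a sketch only; the batch size is arbitrary, error handling is
omitted, and UserCounter / UserTotalCount are the models defined above):

from google.appengine.ext import db

def aggregate_for(username):
    # Keys-only query for the pending count events of one user.
    event_keys = UserCounter.all(keys_only=True).filter('Username =', username).fetch(1000)
    if not event_keys:
        return
    total = UserTotalCount.all().filter('Username =', username).get()
    if total is None:
        total = UserTotalCount(Username=username, Count=0)
    total.Count += len(event_keys)
    total.put()
    db.delete(event_keys)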


On Sat, Nov 28, 2009 at 6:46 PM, Eli Jones eli.jo...@gmail.com wrote:

 I think there would have to be some divergence between what the counter
 should be and what the user will actually see at any given time.. since if
 you have a high rate of counts happening for a user.. you'd get contention
 when trying to update all the count each time the event being counted
 happened.

 Of course, you know that part.. since they have all these sharding
 examples.

 So, you gotta decide how stale the count can be...

 Once you decide that, since you don't seem to want any potential loss of
 counts... you'd probably need two Models to do counting for each user.
 (memcache would be out since that allows potential lost counts)

 One for each individual count inserted (call it UserCounter) and one for
 the count that the user sees (UserTotalCount).

 So, if a count-event happens you insert a new row into UserCounter.

 Then you should have a task that runs that selects __key__ from
 UserCounter, finds out how many entities were returned, updates the
 UserTotalCount model with the additional counts, and once that update is
 successful, it deletes the keys, entities for those counts that it selected.
  AND then, once all of that is successful, have the Task essentially
 schedule itself to run again in N seconds or however long you've decided to
 give it.

 Presumably, doing it this way would allow you to make sure that the
 counterupdate task is running one at a time for each user (since it can only
 start itself up again after it is done counting).. and you would avoid write
 contention since the task is the only thing updating the user's counter.

 Probably, you could set up two Expando models to do this for all users..
 and each time a new user was created, you'd add a new Column to the Expando
 models for that user.

 so, you'd have these initial definitions:

 class UserCounter(db.Expando):
 BobCountEvent = db.BooleanProperty(required=True)

 class UserTotalCount(db.Expando):
 BobTotalCount = db.IntegerProperty(required=True)


 Then, each time user Bob has a count event you do:

 bobCount = UserCounter(BobCountEvent = True)
 bobCount.put()

 And when you want to update Bob's Total Count, you do (I have to do this
 quasi-pseudo since it isn't trivial to do):

 results = Select __key__ from UserCounter Where BobCountEvent = True
 If len(results) > 0:
   countResult = Select * from UserTotalCount Where BobTotalCount >= 0
   if len(countResult) > 0:
 countResult.BobTotalCount += len(results)
 db.put(countResults)
   else:
 newCount = UserTotalCount(BobTotalCount = len(results))
 newCount.put()
   db.delete(results)

 Now, you might wonder... how do I do puts for variable user names? (You
 can' t just create new put functions for each new user)..  In Python, you
 can use exec to do that..

 I have not tested how any of this performs... having an expando model may
 hurt performance.. but, I don't think so, and I know the method works for
 other things (not sure how it'd do on this counter method).

 See here for Google's sharded counts example:
 http://code.google.com/appengine/articles/sharding_counters.html

 On Sat, Nov 28, 2009 at 5:17 PM, peterk peter.ke...@gmail.com wrote:

 Hey all,

 I've been looking at the Task Queue API and counter example. In my
 app, each user will have a couple of counters maintained for them,
 counting various things.

 Thing is, these counters need to be accurate. So I'm not sure if the
 example given for the Task Queue API using memcache would be
 appropriate for me - it would not be good, really, if my counters were
 to be inaccurate. My users would expect accurate counts here.

 So I was thinking about a sort of modified version whereby each 

[google-appengine] Re: Help with an accurate counter with no contention...?

2009-11-28 Thread peterk
Hey Eli,

Thanks very much for your replies.

You're thinking along the same lines as me, although I wasn't
considering using Expandos to store the data.

My concern is sort of independent of this anyway - I'm worried that
you can actually have more than one task aggregating changes to a
counter running simultaneously.

For example, an update is recorded for Bob's counter.

How do we know if a task is already running to aggregate Bob's
updates? If it's not we want to create one, but if there is already
one we don't, because we want to try and avoid multiple tasks running
simultaneously for one counter.

So we could use  a flag to indicate if a task is already running. So
before starting a task you could look to see if a flag is set. But
there is a window there where two updates could see no flag, and
create two tasks, and then create their flag. We could maybe use a
transaction to get around this, but then I think (?) we have a point
of contention for updates as if we were updating a single counter
entity, and we're losing the benefits of all our other work.

So then I was thinking we could move that flag to memcache. Which I
think might solve our contention issue on the flag using add() etc.
However there's then the possibility that the flag could be dropped by
memcache prematurely. In that case, a second or third concurrent task
for a given counter could be started up. But at least we won't be
starting a task up for every update.

I was thinking... maybe it isn't a problem to have a few tasks
running concurrently for one counter if we put all our updates
for a given counter into a single entity group. Then we could read the
update, add it to the aggregation, and delete it in a transaction. So
I think then, with a transaction with a delete in it, if another task
concurrently tries to process that update, it'll fail. So our updates
will only get hoovered up once by one task.

I'm not entirely sure if this will be the case though. Will deletion
of an entity in a transaction cause another task trying to do the same
thing to fail? Obviously in this case we would want that behaviour so
we could lock access to a given counter update to one task.
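
On the transaction question, a sketch of the read-and-delete pattern
(assuming the delta entities share an entity group with their counter,
which a single datastore transaction requires; the names are made up).
db.run_in_transaction retries the function if another transaction on the
same group commits first, and the retry then finds the delta already
deleted, so a given delta is applied at most once:

from google.appengine.ext import db

def apply_one_delta(counter_key, delta_key):
    def txn():
        delta = db.get(delta_key)
        if delta is None:
            return False  # another task already consumed this delta
        counter = db.get(counter_key)
        counter.count += delta.delta
        counter.put()
        db.delete(delta)
        return True
    try:
        return db.run_in_transaction(txn)
    except db.TransactionFailedError:
        return False  # gave up after repeated collisions; safe to retry later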

On Nov 29, 12:19 am, Eli Jones eli.jo...@gmail.com wrote:
 To be fair, this method I described may be overkill.

 I developed it when I was thinking about how to lighten up insert costs to
 the datastore.

 I figured that, if one could store some of the relevant information in the
 Column name (specifically, string info like who's count is this?), that
 would reduce the total size of the entity.. and thus speed up writes.

 It was suggested that the performance wouldn't be much different than just
 having models like this:

 class UserCounter(db.Model):
   Username = db.StringProperty(required = True)

 class UserTotalCount(db.Model):
   Username = db.StringProperty(required = True)
   Count = db.IntegerProperty(required = True)

 Then, you'd just
 Select __key__ from UserCounter Where Username = 'Bob'
 and
 Select * from UserTotalCount Where Username = 'Bob'

 To do your counting and updating..

 Though, my intuition is that doing it this way would take more processing
 power (and maybe lead to some contention) since you're inserting
 StringProperties into one big column when putting UserCounter events.

 Here is the initial thread covering what I was trying to figure out:
  Expando and Index Partitioning:
  http://groups.google.com/group/google-appengine/browse_thread/thread/...

 On Sat, Nov 28, 2009 at 6:46 PM, Eli Jones eli.jo...@gmail.com wrote:
  I think there would have to be some divergence between what the counter
  should be and what the user will actually see at any given time.. since if
  you have a high rate of counts happening for a user.. you'd get contention
  when trying to update all the count each time the event being counted
  happened.

  Of course, you know that part.. since they have all these sharding
  examples.

  So, you gotta decide how stale the count can be...

  Once you decide that, since you don't seem to want any potential loss of
  counts... you'd probably need two Models to do counting for each user.
  (memcache would be out since that allows potential lost counts)

  One for each individual count inserted (call it UserCounter) and one for
  the count that the user sees (UserTotalCount).

  So, if a count-event happens you insert a new row into UserCounter.

  Then you should have a task that runs that selects __key__ from
  UserCounter, finds out how many entities were returned, updates the
  UserTotalCount model with the additional counts, and once that update is
  successful, it deletes the keys, entities for those counts that it selected.
   AND then, once all of that is successful, have the Task essentially
  schedule itself to run again in N seconds or however long you've decided to
  give it.

  Presumably, doing it this way would allow you to make sure that the
  counterupdate task is running one at a time for each 

Re: [google-appengine] Re: Help with an accurate counter with no contention...?

2009-11-28 Thread Eli Jones
Well, it sounds like you're wanting to have the aggregation task fired off
when a count event happens for a user.  So, then, as you mention, you'd need
a way to check if there wasn't already an aggregation task running.  And, in
the worst case scenario, you could have two tasks get fired off at once.. to
aggregate counts.. before either of the tasks had a chance to set the
flag to indicate they were running.

You can give the task a name when you add it to the queue.. and once a task
exists in the Queue with a given name.. you cannot insert a new task using
that same name.

So, the question becomes, how do you figure out what this Task Name should
be..?

A quick and dirty guess leads me to think you could do something like.

Task Name = AggregateBobCount_ + BobsCurrentTotalCount

Thus, you would ensure that no additional tasks were fired off until the
current AggregateBobCount_1208 task was done updating the count..

But, then .. as you mention, what about that window between the time that
the Aggregator updates the totalCount and flags/deletes the counted
counters?

If you lock it up with a transaction, will that affect the insertion of
other counts for Bob?  It might not.. and using a transaction along with
Task Names could solve this issue.

Another approach is to have the Task NOT be generally fired off anytime a
count event is inserted for a User.

I think having the Task be configured to be recursive might be the most
simple to manage.

At the beginning of the day, the initial Aggregator task runs (doing
aggregation for all users), once it is done, it adds a new Task to the Queue
with a Task Name like I mentioned above (this would cover the extremely
random chance that the Task ended up getting created twice somehow), and it
sets the delay to 60 seconds or whatever it is that you want.

So, the task is chained.. and a new one only runs once an old one is
finished.

The problem with this approach is... will you be wasting a lot of CPU time
by having tasks running for all Users trying to aggregate counts if the
users have no counts to aggregate?  That's something you'd just have to test
to see.. (were you to attempt this method).
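
A sketch of the task-name idea with the labs taskqueue API (the name scheme,
URL and countdown are made up): adding a task under a name that is already
queued, or that ran recently, raises an error you can simply swallow, which
is what gives the one-pending-aggregator-per-user behaviour. The tombstone
case is the "name reserved for up to 7 days" issue mentioned earlier.

from google.appengine.api.labs import taskqueue

def schedule_aggregator(username, current_total):
    # e.g. 'AggregateBobCount-1208'; task names are restricted to a limited
    # character set, so keep them simple.
    name = 'Aggregate%sCount-%d' % (username, current_total)
    try:
        taskqueue.add(name=name, url='/tasks/aggregate',
                      params={'user': username}, countdown=60)
    except (taskqueue.TaskAlreadyExistsError, taskqueue.TombstonedTaskError):
        # A task with this name is already queued or ran recently; do nothing.
        pass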




On Sat, Nov 28, 2009 at 7:45 PM, peterk peter.ke...@gmail.com wrote:

 Hey Eli,

 Thanks very much for your replies.

 You're thinking along the same lines as me, although I wasn't
 considering using Expandos to store the data.

 My concern is sort of independent of this anyway - i'm worried that
 you can actually have more than one task aggregating changes to a
 counter running simultaneously.

 For example, an update is recorded for Bob's counter.

 How do we know if a task is already running to aggregate Bob's
 updates? If it's not we want to create one, but if there is already
 one we don't, because we want to try and avoid multiple tasks running
 simultaneously for one counter.

 So we could use  a flag to indicate if a task is already running. So
 before starting a task you could look to see if a flag is set. But
 there is a window there where two updates could see no flag, and
 create two tasks, and then create their flag. We could maybe use a
 transaction to get around this, but then I think (?) we have a point
 of contention for updates as if we were updating a single counter
 entity, and we're losing the benefits of all our other work.

 So then I was thinking we could move that flag to memcache. Which I
 think might solve our contention issue on the flag using add() etc.
 However there's then the possibility that the flag could be dropped by
 memcache prematurely. In that case, a second or third concurrent task
 for a given counter could be started up. But at least we won't be
 starting a task up for every update.

 I was thinking...maybe this isn't a problem to have a few tasks
 perhaps running concurrently for one counter if we put all our updates
 for a given counter into a single entity group. Then we could read the
 update, add it to the aggregation, and delete it in a transaction. So
 I think then, with a transaction with a delete in it, if another task
 concurrently tries to process that update, it'll fail. So our updates
 will only get hoovered up once by one task.

 I'm not entirely sure if this will be the case though. Will deletion
 of an entity in a transaction cause another task trying to do the same
 thing to fail? Obviously in this case we would want that behaviour so
 we could lock access to a given counter update to one task.

 On Nov 29, 12:19 am, Eli Jones eli.jo...@gmail.com wrote:
  To be fair, this method I described may be overkill.
 
  I developed it when I was thinking about how to lighten up insert costs
 to
  the datastore.
 
  I figured that, if one could store some of the relevant information in
 the
  Column name (specifically, string info like who's count is this?), that
  would reduce the total size of the entity.. and thus speed up writes.
 
  It was suggested that the performance wouldn't be much different than
 just
  

Re: [google-appengine] Re: Help with an accurate counter with no contention...?

2009-11-28 Thread Eli Jones
Though, let me re-iterate... all the round-about stuff I'm talking about is
something to consider if you don't want to try and modify the sharded
counter technique mentioned in this article:
http://code.google.com/appengine/articles/sharding_counters.html

You'd need some db.Expando model that you used to replace the Memcached
counting.  And each user could have 20 counters in the model..

And each time a count event happened, you would increment a random counter
for that User.. and when you wanted to aggregate, you would get all the
counters for that user, add them up and then change the TotalCount to
whatever it was you got.

I think that might be a worthwhile approach too.
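
A sketch of that per-user variant, loosely following the datastore-only part
of the sharding article (the shard count and the kind name are made up):

import random
from google.appengine.ext import db

NUM_SHARDS = 20

class UserCounterShard(db.Model):
    username = db.StringProperty(required=True)
    count = db.IntegerProperty(default=0)

def increment(username):
    # Bump one of the user's shards at random, inside a transaction.
    shard_name = '%s-%d' % (username, random.randint(0, NUM_SHARDS - 1))
    def txn():
        shard = UserCounterShard.get_by_key_name(shard_name)
        if shard is None:
            shard = UserCounterShard(key_name=shard_name, username=username)
        shard.count += 1
        shard.put()
    db.run_in_transaction(txn)

def get_count(username):
    # Contention is spread across NUM_SHARDS entities; reads just sum them.
    return sum(s.count for s in UserCounterShard.all().filter('username =', username))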

On Sat, Nov 28, 2009 at 8:20 PM, Eli Jones eli.jo...@gmail.com wrote:

 Well, it sounds like you're wanting to have the aggregation task fired off
 when a count event happens for a user.  So, then, as you mention, you'd need
 a way to check if there wasn't already an aggregation task running.  And, in
 the worst case scenario, you could have two tasks get fired off at once.. to
 aggregate counts..  before the either of the tasks had a chance to set the
 flag to indicate they were running.

 You can give the task a name when you add it to the queue.. and once a task
 exists in the Queue with given name.. you cannot insert a new task using
 that same name.

 So, the question becomes, how do you figure out what this Task Name should
 be..?

 A quick and dirty guess leads me to think you could do something like.

 Task Name = AggregateBobCount_ + BobsCurrentTotalCount

 Thus, you would ensure that no additional tasks were fired off until the
 current AggregateBobCount_1208 task was done updating the count..

 But, then .. as you mention, what about that window between the time that
 the Aggregator updates the totalCount and flags,deletes the counted
 counters?

 If you lock it up with a transaction, will that effect the insertion of
 other counts for Bob?  It might not.. and using a transaction along with
 Task Names could solve this issue.

 Another approach is to have the Task NOT be generally fired off anytime a
 count event is inserted for a User.

 I think having the Task be configured to be recursive might be the most
 simple to manage.

 At the beginning of the day, the initial Aggregator task runs (doing
 aggregation for all users), once it is done, it adds a new Task to the Queue
 with a Task Name like I mentioned above (this would cover the extremely
 random chance that the Task ended up getting created twice somehow), and it
 sets the delay to 60 seconds or whatever it is that you want.

 So, the task is chained.. and a new one only runs once an old one is
 finished.

 The problem with this approach is... will you be wasting a lot of CPU time
 by having tasks running for all Users trying to aggregate counts if the
 users have no counts to aggregate?  That's something you'd just have to test
 to see.. (were you to attempt this method).




 On Sat, Nov 28, 2009 at 7:45 PM, peterk peter.ke...@gmail.com wrote:

 Hey Eli,

 Thanks very much for your replies.

 You're thinking along the same lines as me, although I wasn't
 considering using Expandos to store the data.

 My concern is sort of independent of this anyway - i'm worried that
 you can actually have more than one task aggregating changes to a
 counter running simultaneously.

 For example, an update is recorded for Bob's counter.

 How do we know if a task is already running to aggregate Bob's
 updates? If it's not we want to create one, but if there is already
 one we don't, because we want to try and avoid multiple tasks running
 simultaneously for one counter.

 So we could use  a flag to indicate if a task is already running. So
 before starting a task you could look to see if a flag is set. But
 there is a window there where two updates could see no flag, and
 create two tasks, and then create their flag. We could maybe use a
 transaction to get around this, but then I think (?) we have a point
 of contention for updates as if we were updating a single counter
 entity, and we're losing the benefits of all our other work.

 So then I was thinking we could move that flag to memcache. Which I
 think might solve our contention issue on the flag using add() etc.
 However there's then the possibility that the flag could be dropped by
 memcache prematurely. In that case, a second or third concurrent task
 for a given counter could be started up. But at least we won't be
 starting a task up for every update.

 I was thinking...maybe this isn't a problem to have a few tasks
 perhaps running concurrently for one counter if we put all our updates
 for a given counter into a single entity group. Then we could read the
 update, add it to the aggregation, and delete it in a transaction. So
 I think then, with a transaction with a delete in it, if another task
 concurrently tries to process that update, it'll fail. So 

Re: [google-appengine] Re: Help with an accurate counter with no contention...?

2009-11-28 Thread Eli Jones
oh, and duh.. the first part of that article does the sharded counting..
using transactions without memcached.

So, presumably, it should do exactly what you want.

you just need to modify that to allow counting for an arbitrary number of
users.

On Sat, Nov 28, 2009 at 8:30 PM, Eli Jones eli.jo...@gmail.com wrote:

 Though, let me re-iterate... all the round-about stuff I'm talking about is
 something to consider if you don't want to try and modify the sharded
 counter technique mentioned in this article:
 http://code.google.com/appengine/articles/sharding_counters.html

 You'd need some db.Expando model that you used to replace the Memcached
 counting.  And each user could have 20 counters in the model..

 And each time a count event happened, you would increment a random counter
 for that User.. and when you wanted to aggregate, you would get all the
 counters for that user, add them up and then change the TotalCount to
 whatever it was you got.

 I think that might be a worthwhile approach too.
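
One way to read that suggestion (using a plain db.Model per shard rather
than an Expando; the model name and the shard count of 20 are assumptions):

  import random
  from google.appengine.ext import db

  NUM_SHARDS = 20

  class UserCountShard(db.Model):
      user_id = db.StringProperty(required=True)
      count = db.IntegerProperty(default=0)

  def increment(user_id):
      # Spread writes for one user over NUM_SHARDS entities so concurrent
      # count events rarely contend on the same entity group.
      index = random.randint(0, NUM_SHARDS - 1)
      key_name = '%s-shard-%d' % (user_id, index)
      def txn():
          shard = UserCountShard.get_by_key_name(key_name)
          if shard is None:
              shard = UserCountShard(key_name=key_name, user_id=user_id)
          shard.count += 1
          shard.put()
      db.run_in_transaction(txn)

  def current_total(user_id):
      # The aggregator simply sums the user's shards.
      return sum(s.count for s in
                 UserCountShard.all().filter('user_id =', user_id))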


 On Sat, Nov 28, 2009 at 8:20 PM, Eli Jones eli.jo...@gmail.com wrote:

 Well, it sounds like you're wanting to have the aggregation task fired off
 when a count event happens for a user.  So, then, as you mention, you'd need
 a way to check if there wasn't already an aggregation task running.  And, in
 the worst case scenario, you could have two tasks get fired off at once.. to
 aggregate counts.. before either of the tasks had a chance to set the
 flag to indicate they were running.

 You can give the task a name when you add it to the queue.. and once a
 task exists in the Queue with a given name.. you cannot insert a new task
 using that same name.

 So, the question becomes, how do you figure out what this Task Name should
 be..?

 A quick and dirty guess leads me to think you could do something like.

 Task Name = AggregateBobCount_ + BobsCurrentTotalCount

 Thus, you would ensure that no additional tasks were fired off until the
 current AggregateBobCount_1208 task was done updating the count..
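
A quick sketch of enqueueing with a task name, relying on the fact that a
given name can only be inserted once per queue - the name scheme and the
handler URL are just the guesses from above:

  from google.appengine.api.labs import taskqueue

  def enqueue_aggregator(user_id, current_total):
      # Assumes user_id only contains characters allowed in task names.
      name = 'Aggregate%sCount-%d' % (user_id, current_total)
      try:
          taskqueue.add(name=name, url='/tasks/aggregate',
                        params={'user': user_id})
      except (taskqueue.TaskAlreadyExistsError,
              taskqueue.TombstonedTaskError):
          # A task for this total is already queued, or ran recently enough
          # that its name is still tombstoned, so there is nothing to do.
          pass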

 But, then .. as you mention, what about that window between the time that
 the Aggregator updates the totalCount and the time it flags/deletes the
 counted counters?

 If you lock it up with a transaction, will that affect the insertion of
 other counts for Bob?  It might not.. and using a transaction along with
 Task Names could solve this issue.

 Another approach is to have the Task NOT be generally fired off anytime a
 count event is inserted for a User.

 I think having the Task be configured to be recursive might be the
 simplest to manage.

 At the beginning of the day, the initial Aggregator task runs (doing
 aggregation for all users); once it is done, it adds a new Task to the Queue
 with a Task Name like I mentioned above (this would cover the extremely
 rare chance that the Task ended up getting created twice somehow), and it
 sets the delay to 60 seconds or whatever it is that you want.

 So, the task is chained.. and a new one only runs once an old one is
 finished.
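
A sketch of that chained setup - the handler does its aggregation pass and
then re-enqueues itself with a delay; the URL, the 60 second delay and the
aggregate_all_users() helper are placeholders:

  import time
  from google.appengine.api.labs import taskqueue
  from google.appengine.ext import webapp

  def aggregate_all_users():
      pass  # placeholder for the real aggregation work described above

  class AggregateWorker(webapp.RequestHandler):
      def post(self):
          aggregate_all_users()
          # Chain the next run.  The timestamped name guards against the
          # rare case where this run somehow gets enqueued twice.
          taskqueue.add(name='aggregate-%d' % int(time.time()),
                        url='/tasks/aggregate', countdown=60)

  application = webapp.WSGIApplication([('/tasks/aggregate', AggregateWorker)])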

 The problem with this approach is... will you be wasting a lot of CPU time
 by having tasks running for all Users trying to aggregate counts if the
 users have no counts to aggregate?  That's something you'd just have to test
 to see.. (were you to attempt this method).




 On Sat, Nov 28, 2009 at 7:45 PM, peterk peter.ke...@gmail.com wrote:

 Hey Eli,

 Thanks very much for your replies.

 You're thinking along the same lines as me, although I wasn't
 considering using Expandos to store the data.

 My concern is sort of independent of this anyway - I'm worried that
 you can actually have more than one task aggregating changes to a
 counter running simultaneously.

 For example, an update is recorded for Bob's counter.

 How do we know if a task is already running to aggregate Bob's
 updates? If it's not we want to create one, but if there is already
 one we don't, because we want to try and avoid multiple tasks running
 simultaneously for one counter.

 So we could use a flag to indicate if a task is already running. So
 before starting a task you could look to see if a flag is set. But
 there is a window there where two updates could see no flag, and
 create two tasks, and then create their flag. We could maybe use a
 transaction to get around this, but then I think (?) we have a point
 of contention for updates as if we were updating a single counter
 entity, and we're losing the benefits of all our other work.

 So then I was thinking we could move that flag to memcache. Which I
 think might solve our contention issue on the flag using add() etc.
 However there's then the possibility that the flag could be dropped by
 memcache prematurely. In that case, a second or third concurrent task
 for a given counter could be started up. But at least we won't be
 starting a task up for every update.

 I was thinking...maybe this isn't a problem to have a few tasks
 perhaps 

[google-appengine] Re: Newly registered App doesn't appear in App Engine Administrator Console

2009-11-28 Thread Roy
Are you an Apps user (i.e. /a/mydomain.com) as opposed to a gmail.com
user?


On Nov 27, 6:55 pm, mika-vienna mka...@deepsec.net wrote:
 Hi all,

 I'm new to Google Apps and Wave, experimenting with Wave Bots

 I registered my first App with SMS verification and it doesn't appear
 in my Administrator Console (https://appengine.google.com/start)
 as described.
 When I try to repeat the steps (with the same App ID) I get an error
 message "App or User already exists".

 Using Firefox 3.0

 Any clue?

 regards,

 MiKa





Re: [google-appengine] Re: GAE for CPU intensive time important application

2009-11-28 Thread 风笑雪
2009/11/29 Eric shel...@gmail.com:
 Thanks for all your comments.
 Regarding Python/Java speed, 99% of the runtime is spent iterating in
 an attempt to converge to some numerical solution .
 Loops, arithmetic and memory updates. I would guess an interpreted
 language like Python (am I right?) would be much slower.


You can try a simulation test before making your decision.

 About the instance setup time, I agree that if query frequency
 suddenly rises from 100/sec to 200/sec the system I suggested would
 stall, but I don't think I'm expecting such drastic changes in a
 small amount of time.
 Regarding my suggestion on new instance creation, don't I have a say
 on when and how that happens? Or is it totally up to AppEngine to
 define when a new instance is created? Can I, for example, have 10
 instances that are simply waiting for situations like Keakon
 described?

At least not right now; it's up to the app master.
http://code.google.com/events/io/2009/sessions/FromSparkPlugToDriveTrain.html

 I'm trying to form some intuitive picture of how GAE works by guessing
 and  hoping to be corrected by someone when/if I'm wrong.
 Maybe I require a cloud platform that allows tinkering at lower
 levels?

 On Nov 28, 9:54 pm, Andy Freeman ana...@earthlink.net wrote:
  Maybe I don't understand something, but why should the 5 second setup
  on a new instance bother me? A new instance should be created when
  other instances are near capacity, and not when they exceed it, right?
  So once initialized it can be dummy-run internally and only be made
  available 5 seconds later, while the existing instances continue to take
  care of the incoming queries.

 What makes you think that the request that causes the creation of a
 new instance doesn't wait for the creation of said instance?  (The
 scheme you suggest is plausible, but why do you think that it's how
 appengine works?)

 On Nov 28, 9:03 am, Eric shel...@gmail.com wrote:

  Thanks for the response.

  Maybe I don't understand something, but why should the 5 second setup
  on a new instance bother me? A new instance should be created when
  other instances are near capacity, and not when they exceed it, right?
  So once initialized it can be dummy-run internally and only be made
  available 5 seconds later, while the existing instances continue to take
  care of the incoming queries.

  Also, do you think the latency requirements are realistic with GAE?
  That is, in the ordinary case, could the response be consistently
   served back to the querying user with a delay of at most 3 seconds?

  On Nov 28, 8:35 am, 风笑雪 kea...@gmail.com wrote:

   The White House hosted an online town hall meeting on GAE with GWT,
    and received 700 hits per second at its peak:
    http://google-code-updates.blogspot.com/2009/04/google-developer-prod...

    But more than 1000 queries a second has never been tested.

    I think Java is not a good choice in your case. When your users
    suddenly increase, starting a new Java instance may cost more than 5
    seconds, while Python needs less than 1 second.

   2009/11/27 Eric shel...@gmail.com:

Hi,

I wish to set up a CPU-intensive time-important query service for
users on the internet.
Is GAE with Java the right choice? (as compared to other clouds, or
non-cloud architecture)
Specifically, in terms of:
1) pricing
2) latency resulting from slow CPU, JIT compiles, etc..
3) latency resulting from communication of processes inside the cloud
(e.g. a queuing process and a calculation process)
4) latency of communication between cloud and end user

A usage scenario I am expecting is:
- a typical user sends a query (XML of size around 1K) once every 30
seconds on average,
- Each query requires a numerical computation of average time 0.2 sec
and max time 1 sec (on a 1 GHz Pentium). The computation requires no
data other than the query itself.
- The delay a user experiences between sending a query and receiving a
response should be on average no more than 2 seconds and in general no
more than 5 seconds.
- A background save to a DB of the response should occur (not time
critical)
- There can be up to 30,000 simultaneous users - i.e., on average 1000
queries a second, each requiring an average 0.2 sec calculation, so
that would necessitate around 200 CPUs.

Is this feasible on GAE Java?
If so, where can I learn about the correct design methodology for such
a project on GAE?

If this is the wrong forum to ask this, I'd appreciate redirection.

Thanks,

Eric


Re: [google-appengine] Re: GAE for CPU intensive time important application

2009-11-28 Thread Niklas Rosencrantz
On Sat, Nov 28, 2009 at 5:13 PM, Eric shel...@gmail.com wrote:
 Another question, you both recommended Python for some of its
 features, but isn't Python much slower than Java? So wouldn't that
 necessitate many more instances/CPUs to keep up with the query load?
Yes, the Python VM is slower than the JVM, but Python dev and prototyping
time is amazingly faster and less frustrating. I recommend anybody
avoid XML and use beautiful YAML whenever able. Both Python and Java can
use plain functions or static methods, which don't create instances and are
very fast; nothing stops you from doing everything in Java with static
methods, or likewise in Python with no classes, just functions. Example: an
advanced Python phonetic algorithm, just a function, no class:
http://atomboy.isa-geek.com/plone/Members/acoil/programing/double-metaphone/metaphone.py/at_download/file
Or a snippet I posted in Python that would be a mess in Java - 23 lines
of Python, refactorable to half that:
http://www.djangosnippets.org/snippets/1821/
good luck
Niklas (montao.googlecode.com)





Re: [google-appengine] Reference Properties

2009-11-28 Thread Niklas Rosencrantz
On Sun, Nov 29, 2009 at 12:09 AM, MajorProgamming sefira...@gmail.com wrote:
 I was wondering: isn't using reference properties a waste of space?
 Wouldn't it make more sense to store the id (assuming not using
 key_name) of the entity? After all, if the Kind is known, one can
 easily generate the full key based on that. And with a reference
 property it actually stores the _entire_ key (very long!).

 Why?
We do waste space to quicken dev time and rapid prototyping. I call it
a tradeoff: there are known ways to cut storage use by halves later without
the views reacting to it (making 100 created objects 50, then 25, etc.),
prioritizing security, stability and views at the top, optimizations last.
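
For comparison, a sketch of the two layouts the original question describes -
a ReferenceProperty that stores the full key versus storing only the numeric
id and rebuilding the key from the known Kind; Author and the Book models are
hypothetical:

  from google.appengine.ext import db

  class Author(db.Model):
      name = db.StringProperty()

  class BookWithRef(db.Model):
      author = db.ReferenceProperty(Author)  # stores the entire key

  class BookWithId(db.Model):
      author_id = db.IntegerProperty()  # stores only the numeric id

  def author_of(book):
      # Rebuild the full key from the known kind plus the stored id.
      return db.get(db.Key.from_path('Author', book.author_id))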
