Re: [appengine-java] Re: Is the native API really so much faster than JDO and slim3?
I don't understand this code at all. You're querying the data and then iterating over a separate call? And what does the extra call to list.size() have to do with this? The code linked from http://slim3demo.appspot.com/performance/ has just gone back to calling list.size() without iterating the results, which once again goes back to a bogus benchmark.

Jeff

On Wed, Jun 8, 2011 at 9:30 PM, Yasuo Higa higaya...@gmail.com wrote:

Slim3 uses the LL API. To resolve the strange issue that Slim3 is faster than LL, I tried the following samples:

One:

AsyncDatastoreService ds = DatastoreServiceFactory.getAsyncDatastoreService();
Query q = new Query("Bar");
PreparedQuery pq = ds.prepare(q);
List<Entity> list = pq.asList(FetchOptions.Builder.withDefaults().limit(Integer.MAX_VALUE));
for (Entity e : service.getBarListUsingLL()) {
    e.getKey();
    e.getProperty("sortValue");
}

Two:

AsyncDatastoreService ds = DatastoreServiceFactory.getAsyncDatastoreService();
Query q = new Query("Bar");
PreparedQuery pq = ds.prepare(q);
List<Entity> list = pq.asList(FetchOptions.Builder.withDefaults().limit(Integer.MAX_VALUE));
// VERY IMPORTANT
list.size();
for (Entity e : service.getBarListUsingLL()) {
    e.getKey();
    e.getProperty("sortValue");
}

The second one is much faster than the first one. I fixed the samples to call list.size(). http://slim3demo.appspot.com/performance/ As a result, LL is as fast as Slim3 (^^;

Yasuo Higa

On Thu, Jun 9, 2011 at 10:17 AM, Jeff Schnitzer j...@infohazard.org wrote:

Thank you for fixing the benchmark. I am very curious. According to this new benchmark - it's hard to tell without pushing the buttons a lot of times, but there seems to be a trend - Slim3 is somewhat faster than the Low-Level API. Doesn't Slim3 use the Low-Level API underneath? How can it possibly be faster?

Jeff

On Wed, Jun 8, 2011 at 4:27 PM, Yasuo Higa higaya...@gmail.com wrote:

What I want to provide is a fair and casual benchmark.
As Jeff advised, I modified the samples as follows:

for (Entity e : service.getBarListUsingLL()) {
    e.getKey();
    e.getProperty("sortValue");
}
for (Bar bar : service.getBarListUsingSlim3()) {
    bar.getKey();
    bar.getSortValue();
}
for (BarObjectify bar : service.getBarListUsingObjectify()) {
    bar.getKey();
    bar.getSortValue();
}
for (BarJDO bar : service.getBarListUsingJDO()) {
    bar.getKey();
    bar.getSortValue();
}

The LL API is much slower than before. http://slim3demo.appspot.com/performance/

Yasuo Higa

On Thu, Jun 9, 2011 at 7:45 AM, Jeff Schnitzer j...@infohazard.org wrote:

Slim3 may be a nice piece of software, but it has not been demonstrated to be faster than anything (including JDO). It might or might not be faster - I don't know - but based on the sloppy benchmarking, I'm pretty certain that the people making this claim don't know either. There's another ill-conceived performance claim on the Slim3 website: "You may worry about the overhead of global transactions. Don't worry. It is not very expensive." There are three problems with their little performance test:

1) It only measures wall-clock time, not cost.
2) It does not measure what happens under contention.
3) The numbers are obviously wrong - they don't even pass a smoke test.

Look at these numbers (from the Slim3 home page):

Entity Groups | Local Transaction (millis) | Global Transaction (millis)
            1 |                         67 |                          70
            2 |                        450 |                         415
            3 |                        213 |                         539
            4 |                       1498 |                         981
            5 |                        447 |                         781

Just like the 1 ms low-level API query performance in the benchmark that spawned this thread, even a casual observer should be able to recognize the obvious flaw - the numbers don't show any expected relationship to the number of entity groups or the use of global transactions. Interpreted literally, you would conclude that local transactions are much faster for 5 entity groups, but global transactions are much faster for 4 entity groups. It's pretty obvious that the benchmark author just ran one pass and took the numbers as-is.
If you actually run multiple passes, you'll find that there is enormous variance in the timing. The only way you can realistically measure results like this on appengine is to execute the test 100 times and take a median. FWIW, I deliberately haven't made any performance claims for Objectify because I haven't put the necessary due diligence into creating a proper set of benchmarks. It is not easy to measure performance, especially in a dynamic environment like appengine. I only hope that the Slim3 authors have put more care and thought into crafting their library than they have into their benchmarks.

Jeff

On Wed, Jun 8, 2011 at 2:34 PM, Gal Dolber gal.dol...@gmail.com wrote:

Slim3 is not only fast, the API is completely awesome. It has been my choice for a year now for all GAE projects. It includes name safety and amazing querying
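Jeff's suggested methodology - run the test many times and take the median - can be sketched in plain Java. The workload below is a hypothetical stand-in for a datastore call, and MedianBenchmark/medianMillis are illustrative names, not part of any App Engine API:

```java
import java.util.Arrays;

public class MedianBenchmark {

    // Runs the task `passes` times and returns the median elapsed time in
    // milliseconds. A median is far more robust against appengine's timing
    // variance than any single pass.
    static long medianMillis(Runnable task, int passes) {
        long[] samples = new long[passes];
        for (int i = 0; i < passes; i++) {
            long start = System.nanoTime();
            task.run();
            samples[i] = (System.nanoTime() - start) / 1_000_000;
        }
        Arrays.sort(samples);
        return samples[passes / 2];
    }

    public static void main(String[] args) {
        // Hypothetical CPU-bound workload standing in for the datastore query.
        Runnable workload = () -> {
            long sum = 0;
            for (int i = 0; i < 1_000_000; i++) sum += i;
        };
        System.out.println("median millis: " + medianMillis(workload, 100));
    }
}
```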
Re: [appengine-java] Re: Is the native API really so much faster than JDO and slim3?
It's *NO* bogus benchmark, because the sample iterates the results. http://code.google.com/p/slim3/source/browse/trunk/slim3demo/src/slim3/demo/controller/performance/GetLLController.java

Yasuo Higa

The code linked from http://slim3demo.appspot.com/performance/ has just gone back to calling list.size() without iterating the results, which once again goes back to a bogus benchmark.

Jeff
Re: [appengine-java] Re: Is the native API really so much faster than JDO and slim3?
Ok - so what you're saying is that the extra call to list.size() before iterating through the list makes list iteration faster? Oddly enough, this does seem to make a difference. This looks like some sort of performance bug in the Low-Level API. It's clearly not related to Slim3... except inasmuch as Slim3 inadvertently works around this bug. It's also very peculiar to working with List results - you obviously can't call size() when you're iterating through an unbounded result set. The benchmark is bogus... which is to say, it does not show what you think it shows.

Jeff

On Thu, Jun 9, 2011 at 12:09 AM, Yasuo Higa higaya...@gmail.com wrote:

It's *NO* bogus benchmark, because the sample iterates the results. http://code.google.com/p/slim3/source/browse/trunk/slim3demo/src/slim3/demo/controller/performance/GetLLController.java

Yasuo Higa
Re: [appengine-java] Re: Is the native API really so much faster than JDO and slim3?
On Thu, Jun 9, 2011 at 1:32 AM, Gal Dolber gal.dol...@gmail.com wrote:

I am not comparing reflection vs. byte-code generation or anything like that; apt generates code, it is not a runtime technology. Like it or not, reflection is known to be slower than actually writing the code.

This is entirely irrelevant. We've now established that the issue at hand is a strange quirk in the Low-Level API and has nothing to do with reflection. 100% (or close to it) of the performance gain of Slim3 is because it calls .size() on a List before iterating it.

Slim3 generates the minimal possible code you need to talk with the low-level API; if the low-level bench is faster, it is just because it's not converting the Entity to the POJO.

You missed the point - in the benchmark, Slim3 was *faster* than the Low-Level API. This clearly indicated something was amiss, and now we've uncovered the actual cause. It turns out to be something quite interesting indeed. This is a classic case of what Feynman called "Cargo Cult Science". You all believed that Slim3 should be faster than other APIs because code generation is faster than reflection, so when someone produced an ill-conceived benchmark that seemed to confirm your preconceived notion, you just accepted the entire narrative: "Slim3 is fast because it doesn't use reflection!" This is sloppy thinking (see: Confirmation Bias). If you haven't read this, it's a gem (I make a habit of re-reading it at least once a year): http://www.lhup.edu/~DSIMANEK/cargocul.htm

Jeff

--
You received this message because you are subscribed to the Google Groups "Google App Engine for Java" group. To post to this group, send email to google-appengine-java@googlegroups.com. To unsubscribe from this group, send email to google-appengine-java+unsubscr...@googlegroups.com. For more options, visit this group at http://groups.google.com/group/google-appengine-java?hl=en.
Re: [appengine-java] Re: Is the native API really so much faster than JDO and slim3?
Star this issue: http://code.google.com/p/googleappengine/issues/detail?id=5167

I'm willing to bet I know exactly what's going on. When you call size() first, the backing ArrayList is created initially with the proper size. If you don't call size() first, the backing ArrayList is initialized small and reallocates organically up to the 10,000-element size - thus the performance penalty. The GAE internals obviously can obtain the size information ahead of time; they just aren't initializing the ArrayList to the proper size in this one case. I could probably find the exact line of code in the SDK and submit a patch.

Jeff
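Jeff's ArrayList hypothesis can be illustrated in plain Java. This only demonstrates the allocation pattern he describes; whether the GAE SDK actually behaves this way is his conjecture, and PresizeDemo is an illustrative name, not SDK code:

```java
import java.util.ArrayList;
import java.util.List;

public class PresizeDemo {

    // Default construction: the backing array starts small and is reallocated
    // (with a full copy) repeatedly as the list grows toward n elements.
    static List<Integer> grown(int n) {
        List<Integer> list = new ArrayList<>();
        for (int i = 0; i < n; i++) list.add(i);
        return list;
    }

    // Pre-sized construction: the backing array is allocated once at the final
    // size, so no reallocation or copying happens while filling it.
    static List<Integer> presized(int n) {
        List<Integer> list = new ArrayList<>(n);
        for (int i = 0; i < n; i++) list.add(i);
        return list;
    }

    public static void main(String[] args) {
        // The contents are identical either way; only the allocation pattern differs.
        System.out.println(grown(10_000).equals(presized(10_000)));  // prints "true"
    }
}
```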
Re: [appengine-java] Re: Is the native API really so much faster than JDO and slim3?
Haha, excellent. I studied cargo cults a bit in anthropology classes, long ago, and never suspected how relevant they would be. You would probably enjoy this: http://www.fanfiction.net/s/5782108/1/Harry_Potter_and_the_Methods_of_Rationality

Until Google makes a change, maybe the other frameworks should try the same trick?
Re: [appengine-java] Re: Is the native API really so much faster than JDO and slim3?
Ok, you actually got me to waste an hour on benchmarking, and I hate to say that you are right. The use of reflection seems not to be heavy enough to make the difference. Some extra features of Slim3 make it slower when you don't make use of them. I almost have a working patch to avoid this problem, but even with the patch, both frameworks give almost the same performance.
Re: [appengine-java] Re: Is the native API really so much faster than JDO and slim3?
Don't feel bad - I've now wasted over a day on this! Sorry if I've come across as grumpy in previous emails... but I've wasted over a day on this :-(

Jeff
Re: [appengine-java] Re: Is the native API really so much faster than JDO and slim3?
When frameworks compete, everyone wins!
Re: [appengine-java] Re: Is the native API really so much faster than JDO and slim3?
No problem... I say what I think, but I have no shame in admitting when I'm wrong.

On Thu, Jun 9, 2011 at 8:55 PM, Jay Young jayyoung9...@gmail.com wrote: When frameworks compete, everyone wins!
[appengine-java] Re: Is the native API really so much faster than JDO and slim3?
Indeed, Dennis's measurements are very suspicious. First you should do a couple of warm-up runs on each of the implementations to prevent pollution like the JDO classpath scan for enhanced classes (which is one of the reasons for the high initial run). Then do a couple of runs to determine a range of measurements and spot outliers; your low-level API 2 millis is definitely one. When I did the measurements I got the following results:

low-level: 1150-1550
Slim3: 1150-1600
Objectify: 1950-2400
JDO: 2100-2700

These measurements confirm that the GAE-designed implementations are faster than the GAE implementation of a generic data access layer (JDO), but not so extreme as initially posted. The slow initial response using JDO is a known issue; low-traffic websites especially should not use it, or should use the Always On feature (maybe this will change in the new pricing model).

Regards, Erwin

On Jun 7, 11:00 am, Ian Marshall ianmarshall...@gmail.com wrote: The low-level API does indeed look very fast. Just a comment on JDO: repeat runs roughly halve the JDO run time. I presume that this is because for repeat runs the JDO persistence manager factory has already been constructed.

On Jun 6, 8:44 pm, DennisP dennisbpeter...@gmail.com wrote: I'm looking at this online demo: http://slim3demo.appspot.com/performance/ Sample run: The number of entities: 1; low-level API get: 2 millis; Slim3: 2490 millis; JDO: 6030 millis. Is the low-level API really that much faster?
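Erwin's advice (warm up first, take several samples, report something robust to outliers) can be sketched as a small timing helper. This is a hedged illustration, not code from the benchmark; the class and method names are my own.

```java
import java.util.Arrays;

// Sketch of Erwin's methodology: discard warm-up runs (JIT, classpath scans),
// then time several runs and report the median, which resists outliers
// like the bogus 2ms low-level sample.
public class MedianBenchSketch {

    static long medianMillis(Runnable work, int warmups, int runs) {
        for (int i = 0; i < warmups; i++) {
            work.run();                                  // warm-ups: timings discarded
        }
        long[] samples = new long[runs];
        for (int i = 0; i < runs; i++) {
            long t0 = System.nanoTime();
            work.run();
            samples[i] = (System.nanoTime() - t0) / 1_000_000;
        }
        Arrays.sort(samples);                            // median, not mean: one slow
        return samples[runs / 2];                        // outlier cannot skew it
    }
}
```

A single cold measurement, as the demo page originally reported, mixes one-time startup costs into the number; the range Erwin quotes (e.g. low-level 1150-1550ms) is what repeated runs of a helper like this would produce.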
Re: [appengine-java] Re: Is the native API really so much faster than JDO and slim3?
It's not my benchmark, it's Slim3's :) ...but you're right, it's bogus. I asked on the main appengine group too, and it turns out the low-level benchmark is doing lazy loading. With that fixed, their numbers come out like yours. I found this one too, which also gets results like yours: http://gaejava.appspot.com/

On Wed, Jun 8, 2011 at 4:44 AM, Erwin Streur erwin.str...@gmail.com wrote: Indeed, Dennis's measurements are very suspicious. [snip]
Re: [appengine-java] Re: Is the native API really so much faster than JDO and slim3?
It is not bogus. LazyList#size() fetches all data, as follows:

    public int size() {
        resolveAllData();
        return results.size();
    }

Yasuo Higa

On Wed, Jun 8, 2011 at 11:32 PM, Dennis Peterson dennisbpeter...@gmail.com wrote: It's not my benchmark, it's Slim3's :) ...but you're right, it's bogus. [snip]
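The lazy-loading behavior at the heart of this dispute is easy to demonstrate in isolation. Below is a minimal sketch of a lazy result list, assuming (this is not Slim3's or the SDK's actual class, and the names are mine) that the list wraps an unconsumed source standing in for the datastore query results; nothing is fetched until size() or element access forces resolution, which is why a benchmark that never calls size() or iterates measures almost nothing.

```java
import java.util.AbstractList;
import java.util.ArrayList;
import java.util.Arrays;
import java.util.Iterator;
import java.util.List;

// Sketch of a lazily-resolved result list: the expensive fetch happens on
// first size() or get(), not at construction time.
public class LazyListSketch {

    static class LazyList<T> extends AbstractList<T> {
        private final Iterator<T> source;          // stands in for the query cursor
        private final List<T> results = new ArrayList<>();
        private boolean resolved = false;

        LazyList(Iterator<T> source) {
            this.source = source;
        }

        private void resolveAllData() {            // the "RPC" work, done once
            if (resolved) return;
            while (source.hasNext()) {
                results.add(source.next());
            }
            resolved = true;
        }

        @Override
        public int size() {
            resolveAllData();                      // size() forces the full fetch
            return results.size();
        }

        @Override
        public T get(int i) {
            resolveAllData();
            return results.get(i);
        }

        boolean isResolved() {
            return resolved;
        }
    }

    static LazyList<String> sampleList() {
        return new LazyList<>(Arrays.asList("a", "b", "c").iterator());
    }
}
```

Constructing the list is nearly free; the cost only appears once something touches it. That is exactly why the original "2 millis" low-level number was meaningless, and why adding the `list.size()` call made the benchmarks comparable.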
Re: [appengine-java] Re: Is the native API really so much faster than JDO and slim3?
Apologies, no offense meant. My impression was that if you wanted to, say, display all that data, it's going to take around 1000 ms to get it, not 1 ms.

On Wed, Jun 8, 2011 at 10:55 AM, Yasuo Higa higaya...@gmail.com wrote: It is not bogus. LazyList#size() fetches all data as follows: public int size() { resolveAllData(); return results.size(); } [snip]
[appengine-java] Re: Is the native API really so much faster than JDO and slim3?
I get... The number of entities: 1; low level: 1717 millis; slim3: 1502 millis; objectify: 2970 millis; jdo: 3485 millis. We should probably modify this example to report an average of several runs. One important thing to note is that slim3 allows you to update multiple entity types in a single transaction; that's not possible with the other three APIs.

On Jun 7, 4:00 am, Ian Marshall ianmarshall...@gmail.com wrote: The low-level API does indeed look very fast. Just a comment on JDO: repeat runs roughly halve the JDO run time. I presume that this is because for repeat runs the JDO persistence manager factory has already been constructed. [snip]
Re: [appengine-java] Re: Is the native API really so much faster than JDO and slim3?
Hi Dennis, You can see all the sources: http://slim3demo.appspot.com/performance/ Java runtime reflection is very, very slow. If you don't think so, please try it yourself.

Yasuo Higa

On Thu, Jun 9, 2011 at 12:00 AM, Dennis Peterson dennisbpeter...@gmail.com wrote: Apologies, no offense meant. My impression was that if you wanted to, say, display all that data, it's going to take around 1000 ms to get it, not 1 ms. [snip]
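The reflection-versus-generated-code question Yasuo raises can be tried in a few lines. Below is an illustrative sketch (the `Bar`/`sortValue` names echo the benchmark's entity but the class itself is hypothetical, not from Slim3): a reflective property read goes through a `Field` lookup and access check on every call, which is the per-property cost that generated accessors avoid. Whether that cost matters here is exactly what the thread goes on to dispute.

```java
import java.lang.reflect.Field;

// Direct field access vs. reflective access to the same property.
// The two paths return the same value; they differ only in overhead.
public class ReflectionSketch {

    public static class Bar {
        public String sortValue = "x";
    }

    // What generated code effectively compiles down to.
    static String readDirect(Bar b) {
        return b.sortValue;
    }

    // What a reflection-based mapper does per property: lookup + access check.
    static String readReflectively(Bar b) {
        try {
            Field f = Bar.class.getField("sortValue");
            return (String) f.get(b);
        } catch (ReflectiveOperationException e) {
            throw new RuntimeException(e);
        }
    }
}
```

Timing a loop over each path (with a proper warm-up) shows the reflective path is slower per call; Jeff's later measurements argue that, against the wall-clock cost of datastore RPCs, this overhead is a small fraction of total request time.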
Re: [appengine-java] Re: Is the native API really so much faster than JDO and slim3?
Hi Dennis, The following document will help you with global transactions: http://sites.google.com/site/slim3appengine/#gtx

Yasuo Higa

On Thu, Jun 9, 2011 at 12:33 AM, Dennis Peterson dennisbpeter...@gmail.com wrote: Those multi-entity transactions are definitely interesting to me. [snip]
Re: [appengine-java] Re: Is the native API really so much faster than JDO and slim3?
Those multi-entity transactions are definitely interesting to me. There's some overhead, but there's no getting around that. A while back I was playing around with some ad hoc methods to do it in a specific case, but I suspect Slim3 is more solid and maybe faster than what I was doing. Definitely easier. When I get a chance I want to dig in and find out how it works.

On Wed, Jun 8, 2011 at 10:26 AM, Mike Lawrence m...@systemsplanet.com wrote: I get... The number of entities: 1; low level: 1717 millis; slim3: 1502 millis; objectify: 2970 millis; jdo: 3485 millis. [snip]
Re: [appengine-java] Re: Is the native API really so much faster than JDO and slim3?
No, I agree, and slim3 looks very interesting to me. It was just the very fast low-level times I was wondering about, but it looks like normally slim3 and the low-level API will be about the same speed.

On Wed, Jun 8, 2011 at 11:42 AM, Yasuo Higa higaya...@gmail.com wrote: Hi Dennis, You can see all the sources: http://slim3demo.appspot.com/performance/ Java runtime reflection is very, very slow. If you don't think so, please try it yourself. [snip]
Re: [appengine-java] Re: Is the native API really so much faster than JDO and slim3?
You are wrong. Try adding getProperty() calls to your LL performance test, and the speed advantage of the LL API goes away. I don't know what to say about Slim3, but here's my test case: http://code.google.com/p/scratchmonkey/source/browse/#svn%2Fappengine%2Fperformance-test I created 10,000 entities in the datastore that have the same format as your test case - a single string property. Here's what happens (try it - and remember to reload the urls several times to get a realistic median value): ## Low Level API with just .size() http://voodoodyne.appspot.com/fetchLLSize The code: ListEntity things = DatastoreServiceFactory.getDatastoreService() .prepare(new Query(Thing.class.getAnnotation(javax.persistence.Entity.class).name())) .asList(FetchOptions.Builder.withDefaults()); things.size(); Note that results are almost always under 2000ms. Wild guess I'd say the median elapsed is ~1900, just like your example. ## Low Level API with actual fetch of the data http://voodoodyne.appspot.com/fetchLL The code: ListEntity things = DatastoreServiceFactory.getDatastoreService() .prepare(new Query(Thing.class.getAnnotation(javax.persistence.Entity.class).name())) .asList(FetchOptions.Builder.withDefaults()); for (Entity ent: things) { ent.getKey(); ent.getProperty(value); } Note that the duration is now considerably longer. Eyeballing the median elapsed time, I'd say somewhere around 3000ms. ## Objectify fetching from datastore http://voodoodyne.appspot.com/fetch Objectify ofy = ObjectifyService.begin(); ListThing things = ofy.query(Thing.class).list(); for (Thing thing: things) { thing.getId(); thing.getValue(); } Note that the timing is pretty much the same as the LL API when it includes actual fetches of the entity values. It is, no doubt, just a little higher. ## A pure measurement of Objectify's overhead http://voodoodyne.appspot.com/fakeFetch This causes Objectify to translate 10,000 statically-created Entity objects to POJOs. 
You can see the code here: http://code.google.com/p/scratchmonkey/source/browse/appengine/performance-test/src/test/FakeFetchServlet.java You'll notice (after you hit the URL a couple times to warm up the JIT) that elapsed time converges to somewhere around 120ms. --- Conclusion: The numbers in the original benchmark are a result of improper measurements. The actual wall-clock overhead for Objectify in this test is ~4% (120ms out of 3000ms). Further speculation on my part, but probably correct: The overhead of reflection is unlikely to be a significant part of that 4%. Sloppy work. Jeff On Wed, Jun 8, 2011 at 7:55 AM, Yasuo Higa higaya...@gmail.com wrote: It is not bogus. LazyList#size() fetches all data as follows: public int size() { resolveAllData(); return results.size(); } Yasuo Higa On Wed, Jun 8, 2011 at 11:32 PM, Dennis Peterson dennisbpeter...@gmail.com wrote: It's not my benchmark, it's Slim3's :) ...but you're right, it's bogus. I asked on the main appengine group too, and it turns out the low-level benchmark is doing lazy loading. With that fixed, their numbers come out like yours. I found this one too, which also gets results like yours: http://gaejava.appspot.com/ On Wed, Jun 8, 2011 at 4:44 AM, Erwin Streur erwin.str...@gmail.com wrote: Indeed Dennis's measurements are very suspicious. First you should do a couple of warming ups on each of the implementations to prevent pollution like the JDO classpath scan for enhanced classes (which is one of the reasons for the high initial run). Then do a couple of run to determine a range of measurements to spot outlyers. your low-level API 2millis is definately one. When I did the measurements I got the following results low-level: 1150-1550 Slim3: 1150-1600 Objectify: 1950-2400 JDO: 2100-2700 These measurements confirm that GAE designed implementations are faster then the GAE implementation of a generic data access layer (JDO), but not so extrem as initially posted. 
The initial response time using JDO is a known issue; low-traffic websites especially should not use it, or should use the Always On feature (maybe this will change in the new pricing model).

Regards,
Erwin

On Jun 7, 11:00 am, Ian Marshall ianmarshall...@gmail.com wrote:

The low-level API does indeed look very fast. Just a comment on JDO: repeat runs roughly halve the JDO run time. I presume that this is because for repeat runs the JDO persistence manager factory has already been constructed.

On Jun 6, 8:44 pm, DennisP dennisbpeter...@gmail.com wrote:

I'm looking at this online demo: http://slim3demo.appspot.com/performance/

Sample run:
The number of entities: 1
low-level API: get: 2 millis
Slim3: 2490 millis
JDO: 6030 millis

Is the low-level API really that much faster?
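The lazy-loading pitfall discussed above is easy to reproduce outside of App Engine. The following is a hypothetical sketch (not Slim3's actual source; `LazyResultList` and its fetcher are invented for illustration) of a result list that defers its expensive fetch until first use, which is why a benchmark that stops before calling size() or iterating measures almost nothing:

```java
import java.util.AbstractList;
import java.util.ArrayList;
import java.util.List;
import java.util.function.Supplier;

public class LazyDemo {
    // Hypothetical stand-in for a datastore result list: the expensive
    // fetch runs only when the list is first touched.
    static class LazyResultList<T> extends AbstractList<T> {
        private final Supplier<List<T>> fetcher;
        private List<T> results; // null until resolved

        LazyResultList(Supplier<List<T>> fetcher) { this.fetcher = fetcher; }

        private void resolveAllData() {
            if (results == null) results = fetcher.get();
        }

        @Override public T get(int i) { resolveAllData(); return results.get(i); }

        // Like the LazyList#size() quoted above: forces the full fetch.
        @Override public int size() { resolveAllData(); return results.size(); }
    }

    public static void main(String[] args) {
        int[] fetchCount = {0};
        LazyResultList<String> list = new LazyResultList<>(() -> {
            fetchCount[0]++; // simulate the datastore RPC firing
            List<String> data = new ArrayList<>();
            for (int i = 0; i < 10_000; i++) data.add("entity-" + i);
            return data;
        });
        // A benchmark that stops here has measured no fetch at all.
        System.out.println("fetches before size(): " + fetchCount[0]);
        list.size(); // the cost of the whole fetch lands here
        System.out.println("fetches after size(): " + fetchCount[0]);
    }
}
```

This is why the corrected benchmarks in this thread touch every entity's key and property: it forces all implementations, lazy or eager, to pay the same fetch cost inside the timed region.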
Re: [appengine-java] Re: Is the native API really so much faster than JDO and slim3?
Slim3 is not only fast, the API is completely awesome. It has been my choice for a year now for all GAE projects. It includes name safety and amazing querying utils. Very recommendable!

On Wed, Jun 8, 2011 at 3:41 PM, Jeff Schnitzer j...@infohazard.org wrote:

You are wrong. Try adding getProperty() calls to your LL performance test, and the speed advantage of the LL API goes away.
Re: [appengine-java] Re: Is the native API really so much faster than JDO and slim3?
Slim3 may be a nice piece of software, but it has not been demonstrated to be faster than anything (including JDO). It might or might not be faster - I don't know - but based on the sloppy benchmarking, I'm pretty certain that the people making this claim don't know either.

There's another ill-conceived performance claim on the Slim3 website: "You may worry about the overhead of global transactions. Don't worry. It is not very expensive." There are three problems with their little performance test:

1) It only measures wall-clock time, not cost.
2) It does not measure what happens under contention.
3) The numbers are obviously wrong - they don't even pass a smoke test.

Look at these numbers (from the Slim3 home page):

Entity Groups   Local Transaction (millis)   Global Transaction (millis)
1               67                           70
2               450                          415
3               213                          539
4               1498                         981
5               447                          781

Just like the 1ms low-level API query performance in the benchmark that spawned this thread, even a casual observer should be able to recognize the obvious flaw - the numbers don't show any expected relationship to the number of entity groups or the use of global transactions. Interpreted literally, you would assume that local transactions are much faster for 5 entity groups, but global transactions are much faster for 4 entity groups. It's pretty obvious that the benchmark author just ran one pass and took the numbers as-is.

If you actually run multiple passes, you'll find that there is enormous variance in the timing. The only way you can realistically measure results like this on appengine is to execute the test 100 times and take a median.

FWIW, I deliberately haven't made any performance claims for Objectify because I haven't put the necessary amount of due diligence into creating a proper set of benchmarks. It is not easy to measure performance, especially in a dynamic environment like appengine. I only hope that the Slim3 authors have put more care and thought into crafting their library than they have their benchmarks.
Jeff

On Wed, Jun 8, 2011 at 2:34 PM, Gal Dolber gal.dol...@gmail.com wrote:

Slim3 is not only fast, the api is completely awesome. It has been my choice for a year now for all gae projects.
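Jeff's "run it many times and take a median" advice can be sketched as a small harness. This is illustrative only (the class and workload names are invented, and a naive loop like this is still vulnerable to JIT effects; on appengine you would hit a servlet repeatedly instead):

```java
import java.util.Arrays;

public class MedianBench {
    // Run the task `passes` times and return the median elapsed millis.
    // The median is far more robust to appengine's run-to-run variance
    // than a single pass, which is the core complaint about these benchmarks.
    static long medianMillis(Runnable task, int passes) {
        long[] samples = new long[passes];
        for (int i = 0; i < passes; i++) {
            long t0 = System.nanoTime();
            task.run();
            samples[i] = (System.nanoTime() - t0) / 1_000_000;
        }
        Arrays.sort(samples);
        return samples[passes / 2];
    }

    public static void main(String[] args) {
        // Stand-in workload; replace with the datastore fetch under test.
        long median = medianMillis(() -> {
            long sum = 0;
            for (int i = 0; i < 1_000_000; i++) sum += i;
        }, 11);
        System.out.println("median >= 0: " + (median >= 0));
    }
}
```

Note the first few samples will include warm-up cost (JIT, class loading, the JDO classpath scan mentioned earlier in the thread); the median naturally discounts those outliers, where a single pass or a mean would not.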
Re: [appengine-java] Re: Is the native API really so much faster than JDO and slim3?
Jeff, Objectify is a great project; I used it a lot before Slim3. I haven't used Slim3's global transactions yet, but I don't understand why you attack them when that is the one feature that JDO/JPA/Objectify/Twig cannot deliver.

Slim3 is indeed faster than any other for the simple reason that it uses APT (code generation) instead of reflection; the generated code is almost the same as what you'd write by hand to wrap the low-level API. I have made several small projects with the low-level API, and I needed to write the utility classes to do Entity-to-POJO and POJO-to-Entity conversion anyway, so it is understandable that Slim3 has the same performance as using the low-level API, because in the end the code is the same... you just don't need to write it.

I like to have many libraries to choose from... you should not try to defeat Slim3; there are many ideas you can use to make Objectify much better. I am not sure if you have tried it yet, but it is really worth it - they did a great job.

Regards

On Wed, Jun 8, 2011 at 6:45 PM, Jeff Schnitzer j...@infohazard.org wrote:

Slim3 may be a nice piece of software, but it has not been demonstrated to be faster than anything (including JDO).
Re: [appengine-java] Re: Is the native API really so much faster than JDO and slim3?
What I want to provide is a fair and casual benchmark. As Jeff advised, I modified the samples as follows:

for (Entity e : service.getBarListUsingLL()) {
    e.getKey();
    e.getProperty("sortValue");
}
for (Bar bar : service.getBarListUsingSlim3()) {
    bar.getKey();
    bar.getSortValue();
}
for (BarObjectify bar : service.getBarListUsingObjectify()) {
    bar.getKey();
    bar.getSortValue();
}
for (BarJDO bar : service.getBarListUsingJDO()) {
    bar.getKey();
    bar.getSortValue();
}

The LL API is much slower than before. http://slim3demo.appspot.com/performance/

Yasuo Higa

On Thu, Jun 9, 2011 at 7:45 AM, Jeff Schnitzer j...@infohazard.org wrote:

Slim3 may be a nice piece of software, but it has not been demonstrated to be faster than anything (including JDO).
Re: [appengine-java] Re: Is the native API really so much faster than JDO and slim3?
On Wed, Jun 8, 2011 at 4:26 PM, Gal Dolber gal.dol...@gmail.com wrote:

Slim3 is indeed faster than any other for the simple reason that it uses APT (code generation) instead of reflection; the generated code is almost the same as what you'd write by hand to wrap the low-level API.

I'm really not trying to attack Slim3 the product - I'm sure it's solid, and it clearly has quite a community behind it. And I'm sure the global transactions feature is great news for those who need it. I'm annoyed by two things:

1) People who post benchmarks that purport to show something they do not.
2) People who repeat memes uncritically.

For example, you repeat that Slim3 is faster because it uses code generation instead of reflection. Have you profiled these systems? Have the Slim3 developers? Even if Slim3 is faster, how do you/they know it's because of reflection and not some other characteristic of their code? This has the same sound as people who blindly proclaimed "garbage collection is slow!" and designed synchronization schemes (a la early EJB) that actually reduced performance.

I am deeply skeptical that reflection is the issue, considering that there are exactly three reflection calls (construction and two setters) per entity, and a helluva lot of other processing involved. Sure, it's possible, but I won't believe it without profiling.

Another problem with the reflection theory: according to the corrected Slim3 benchmark, Slim3 is now considerably faster than the Low-Level API. Which presumably doesn't use reflection.

Jeff

--
You received this message because you are subscribed to the Google Groups Google App Engine for Java group.
To post to this group, send email to google-appengine-java@googlegroups.com.
To unsubscribe from this group, send email to google-appengine-java+unsubscr...@googlegroups.com.
For more options, visit this group at http://groups.google.com/group/google-appengine-java?hl=en.
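The "reflection is the bottleneck" claim is straightforward to sanity-check. Below is a rough micro-comparison (illustrative only; the `Thing` class is invented, and a naive loop like this is no substitute for a real profiler, since HotSpot can optimize reflective calls after warm-up). The point is simply that a few reflective setter calls per entity are measured in nanoseconds, against a multi-second datastore fetch:

```java
import java.lang.reflect.Method;

public class ReflectionCost {
    // Invented POJO mirroring the single-string-property entity in the benchmark.
    public static class Thing {
        private String value;
        public void setValue(String v) { this.value = v; }
        public String getValue() { return value; }
    }

    public static void main(String[] args) throws Exception {
        int n = 100_000; // far more calls than the ~3 reflective ops per entity discussed above
        Method setter = Thing.class.getMethod("setValue", String.class);

        // Direct setter calls.
        Thing direct = new Thing();
        long t0 = System.nanoTime();
        for (int i = 0; i < n; i++) direct.setValue("v" + i);
        long directNs = System.nanoTime() - t0;

        // The same work via Method.invoke().
        Thing reflected = new Thing();
        t0 = System.nanoTime();
        for (int i = 0; i < n; i++) setter.invoke(reflected, "v" + i);
        long reflectNs = System.nanoTime() - t0;

        // Both paths end in the same state; print sanity checks rather than
        // raw timings, which vary wildly between runs and machines.
        System.out.println("same result: " + direct.getValue().equals(reflected.getValue()));
        System.out.println("both loops timed: " + (directNs > 0 && reflectNs > 0));
    }
}
```

Even if the reflective loop runs several times slower than the direct one, the absolute difference over 10,000 entities is a small fraction of the ~120ms Objectify overhead measured earlier, which supports Jeff's skepticism that reflection explains much of anything here.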
Re: [appengine-java] Re: Is the native API really so much faster than JDO and slim3?
Thank you for fixing the benchmark.

I am very curious. According to this new benchmark - it's hard to tell without pushing the buttons a lot of times, but there seems to be a trend - Slim3 is somewhat faster than the Low Level API. Doesn't Slim3 use the Low Level API underneath? How can it possibly be faster?

Jeff

On Wed, Jun 8, 2011 at 4:27 PM, Yasuo Higa higaya...@gmail.com wrote:

What I want to provide is a fair and casual benchmark.
Re: [appengine-java] Re: Is the native API really so much faster than JDO and slim3?
Slim3 uses the LL API. To investigate the strange result that Slim3 is faster than LL, I tried the following samples:

One:

AsyncDatastoreService ds = DatastoreServiceFactory.getAsyncDatastoreService();
Query q = new Query("Bar");
PreparedQuery pq = ds.prepare(q);
List<Entity> list = pq.asList(FetchOptions.Builder.withDefaults().limit(Integer.MAX_VALUE));
for (Entity e : service.getBarListUsingLL()) {
    e.getKey();
    e.getProperty("sortValue");
}

Two:

AsyncDatastoreService ds = DatastoreServiceFactory.getAsyncDatastoreService();
Query q = new Query("Bar");
PreparedQuery pq = ds.prepare(q);
List<Entity> list = pq.asList(FetchOptions.Builder.withDefaults().limit(Integer.MAX_VALUE));
// VERY IMPORTANT
list.size();
for (Entity e : service.getBarListUsingLL()) {
    e.getKey();
    e.getProperty("sortValue");
}

The second one is much faster than the first one. I fixed the samples to call list.size(). http://slim3demo.appspot.com/performance/ As a result, LL is as fast as slim3 (^^;

Yasuo Higa

On Thu, Jun 9, 2011 at 10:17 AM, Jeff Schnitzer j...@infohazard.org wrote:

Thank you for fixing the benchmark.
[appengine-java] Re: Is the native API really so much faster than JDO and slim3?
The low-level API does indeed look very fast. Just a comment on JDO: repeat runs roughly halve the JDO run time. I presume that this is because for repeat runs the JDO persistence manager factory has already been constructed.

On Jun 6, 8:44 pm, DennisP dennisbpeter...@gmail.com wrote:

I'm looking at this online demo: http://slim3demo.appspot.com/performance/

Sample run:
The number of entities: 1
low-level API: get: 2 millis
Slim3: 2490 millis
JDO: 6030 millis

Is the low-level API really that much faster?