[google-appengine] Re: Can't include javascript file....need help

2010-09-01 Thread Joseph Letness
I think the problem might be that you need a leading slash in your
href (I don't think a relative URL works in the context of static
media).

Instead of "static/abc.css", try "/static/abc.css".
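
A minimal sketch of the idea (the abc.js filename is just
illustrative) -- with the static_dir handler from your app.yaml
unchanged, the template tags become:

<link rel="stylesheet" type="text/css" href="/static/abc.css">
<script type="text/javascript" src="/static/abc.js"></script>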

On Sep 1, 11:08 am, salehin  wrote:
> Hi!
>
> I am new to App Engine development.
>
> I'm having a problem including javascript files.
>
> My app.yaml code is as follows:
>
> handlers:
> - url: /main
>   script: main.html
>
> - url: /static
>   static_dir: static
>
> - url: /.*
>   script: demo.py
>
> Now in my Django main template, if I include the CSS as follows:
>
>
>
> It works, but if I add a javascript file as follows:
>
>
>
> It cannot load the file (error: 404).
>
> To avoid confusion: both files are in the same static/ folder.
>
> Can anybody reply ASAP?
>
> Thanks.




[google-appengine] Re: Odd behaviour with referenceproperty and collections :\

2010-09-03 Thread Joseph Letness
Are you saying that it works on your development server but fails on
your production app, or that your local app begins to fail after you
update?

If the former, it might be an indexing problem that is not
being explicitly identified by the error message.  I've encountered a
similar problem in the past (I've never used search.SearchableModel,
but it sounds like something that might rely on a defined index which,
with an otherwise mundane property class, would work with
an autogenerated index).  A good way to test for this is to restart
your dev server with "--require_indexes" and see if you get the same
failure.
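
For example (the app directory path is illustrative):

dev_appserver.py --require_indexes /path/to/your_app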

If your problem is that your reference suddenly begins to fail on your
development server, then there must be something going on with your
update script and your local datastore.  You might try using the
Datastore Viewer in your SDK console to inspect your entities before and
after an update to see if you are somehow losing data.  Perhaps
search.SearchableModel (again, pure speculation here) has some sort of
caching property that is getting cleared on update?
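
For reference, the back-reference collection you describe, in plain db
terms (a minimal sketch; the model names are illustrative):

from google.appengine.ext import db

class ModelA(db.Model):
    name = db.StringProperty()

class ModelB(db.Model):
    # the reverse collection on ModelA defaults to 'modelb_set'
    ref = db.ReferenceProperty(ModelA)

a = ModelA(name='a')
a.put()
ModelB(ref=a).put()
children = list(a.modelb_set)  # every ModelB that references a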

Good luck,

Joe


On Sep 3, 6:49 am, peterk  wrote:
> Hello,
>
> I hadn't used referenceproperty much before in my applications, but
> have cause to use it now, and the behaviour of it is really confusing
> me.
>
> I have an entity with a reference to a second entity.
>
> From the documentation I believe I should be able to say something
> like:
>
> x.entityb_set
>
> to retrieve all instances of model entityb that reference x.
>
> This works, but as soon as I do an appcfg.py update of my application,
> it stops working. It's like for the existing data, the application has
> forgotten the references. I get the error:
>
> AttributeError: 'ModelA' object has no attribute 'modelb_set'
>
> (I have replaced the model names here)
>
> If I delete the data in my datastore and recreate these entities, it
> works. Until I do another update, when again I get this error.
>
> Why would it behave like this? It's frustrating me no end. If it makes
> a difference, these models are extending from search.SearchableModel.
>
> Thanks for any help!




[google-appengine] High-Performance Image Serving Cache-Control

2010-09-14 Thread Joseph Letness
Hi everybody, I would like to allow browser caching of images served
from get_serving_url().  I've had success using get_serving_url() for
generating images and thumbnails, but the Cache-Control is set to
"no-cache" and the expiration dates are in the past (I've only
implemented this functionality on the development server; I have not
tried to deploy yet).
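
For context, a minimal sketch of the calls in question (blob_key
stands in for a valid blobstore BlobKey):

from google.appengine.api import images

url = images.get_serving_url(blob_key)
thumb = images.get_serving_url(blob_key, size=160, crop=True)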

Is there any way of setting the Cache-Control?  I can't seem to find
any info in the documentation or with a general search, other than a
reference to High-Performance Image Serving:  "It also handles setting
proper Cache-Control headers so you don't have to worry about that."

I'm using django (appenginepatch).

Thanks, in advance!




[google-appengine] Re: High-Performance Image Serving Cache-Control

2010-09-15 Thread Joseph Letness
Thanks Peter, you are correct.  When I deployed my app, the images
were set to cache for 1 day.

On Sep 14, 7:14 pm, Peter Liu  wrote:
> You have to test it in production. The headers on dev on all requests are
> very different than in live.
>
> I believe all the images from the live image server have a 1-day cache
> expiration.
>
> On Sep 14, 4:25 pm, Joseph Letness  wrote:
>
>
>
> > Hi everybody,  I would like to allow browser caching of images served
> > from get_serving_url().  I've had success using get_serving_url() for
> > generating images and thumbnails but the Cache-Control is set to "no-
> > cache" and the expiration dates are in the past (I've only implemented
> > this functionality on the development server, I have not tried to
> > deploy yet).
>
> > Is there any way of setting the Cache-Control?  I can't seem to find
> > any info in the documentation or with a general search, other than a
> > reference to High-Performance Image Serving:  "It also handles setting
> > proper Cache-Control headers so you don't have to worry about that."
>
> > I'm using django (appenginepatch).
>
> > Thanks, in advance!




[google-appengine] Re: Unable to add a developer to my app

2010-09-15 Thread Joseph Letness
I'm having the same trouble trying to send mail from my app as
well.  Every time I try to invite a developer (another email address
in my Google Apps domain), it just defaults to my original email
address for that domain.  Does anybody have any ideas on how to solve
this?

Thanks!

On Sep 14, 3:46 pm, KWaves  wrote:
> Hi,
>
> I am hosting yyy.com with google apps.  In addition, xxx.com is mapped
> to yyy.com as a domain alias so email to a...@xxx.com will show up in
> a...@yyy.com's email box.  When I access my app engine apps, I go to
> appengine.google.com/a/yyy.com/  I want my app to send email from
> a...@xxx.com.  So I invite a...@xxx.com as a developer for my app.
> However, after I complete the process, the developer that shows up is
> a...@yyy.com.  This is fine except when I use the send mail service, I
> cannot send emails from a...@xxx.com.  I must be able to send email
> from within xxx.com.  How can I accomplish this?
>
> Thanks.




[google-appengine] Re: short-term quota limit on Total Stored Data

2010-09-16 Thread Joseph Letness
Hi Chris, I've noticed a similar problem with an erroneous amount of
Total Stored Data reflected in the Quota Details.

I have about 2 or 3 GB of data in the Blobstore, yet Total Stored Data
shows that I have 20 GB (which is still under quota for my billing
settings).  I first noticed this discrepancy last Friday (9-10).
Every day, my Total Stored Data shows around 10 times higher than
it actually is.  However, when I check my Billing History for the same
days, the reports show the actual amount, so I am assuming that I am
being billed properly; at least I hope...

I can't imagine that my indexes would be the cause of this problem.

If anybody has any ideas...

Thanks,

Joe

On Sep 15, 7:34 am, Chris  wrote:
> Hi all
>
> My application is currently limited due to a short-term quota limit on
> Total Stored Data.
> I'm limited to 1 GB and the Quota details tell me I'm using 1 GB, so
> far so good.
>
> But when I go to the datastore statistics, my datastore only uses
> 164 MB.
> Are my indexes really eating 800+ MB of space? (That's a 4:1
> ratio on data.)
>
> How can I find out?
>
> Thanks
>
> Chris




[google-appengine] SEO and High Performance Image API get_serving_url(). Use a GET parameter?

2010-09-17 Thread Joseph Letness
I'm no SEO expert, but I've always considered it Best Practice to use
semantically meaningful URLs for both the "src" and "alt" attributes
in my HTML.  However, using get_serving_url() takes away any SEO
relevance in the "src" attribute.  But after testing performance with
get_serving_url(), I think the improved user experience is worth the
loss in SEO.

A possible work-around for this would be to append a GET parameter
containing meaningful data to the URL in the markup.  For example:

http://lh5.ggpht.com/V53SofI9tmIdjz28H7=s160?filename=keyword-friendly-filename.jpg
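
A sketch of how that markup URL gets built (the filename parameter is
my own convention, not part of the Images API; blob_key stands in for
the image's BlobKey):

from google.appengine.api import images

url = images.get_serving_url(blob_key, size=160)
seo_url = '%s?filename=%s' % (url, 'keyword-friendly-filename.jpg')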

I've tested this and it seems to work just fine.  However, I am
**assuming** that Google's image server is just ignoring the GET
parameter.  Does anybody know if this is the case, or could I be
negatively affecting performance with the extra parameter?

Also, if there are any SEO experts in this group, does this sound like
a worthwhile approach?

Thanks!

--Joe




[google-appengine] Re: blobstore and content type via swfupload

2010-10-01 Thread Joseph Letness
I would hope that the blobstore will get its own method of reporting
progress status.  Something like using AJAX at timed intervals during
the upload to hit the upload url (probably appended with a GET
parameter?) and get back how many bytes have been transferred.

Right now I just throw up an HTML modal box that says
"Please wait for your upload to complete..." with a spinning gif.
Some feedback is better than nothing.

GAE developers, how about it?

:-)

--Joe

On Sep 29, 1:40 am, msmart  wrote:
> Hi,
>
> has anyone managed to set the correct content type of an upload to the
> blobstore when using a flash-based solution like SWFUpload? SWFUpload
> is used to indicate a progress bar during the upload.
>
> I've set up everything correctly, but all my blobs have the content
> type application/octet-stream. As the blobstore is read-only, this cannot be
> changed afterwards.
>
> thanks for any tips
>
> Michael




[google-appengine] Re: How to share data stored in Datastore service to my other applications.

2010-10-07 Thread Joseph Letness
Hi Ikai, I've had a similar need come up as well.

My case is this:

I supply 3D images of consumer packaged goods to my clients, who use
them in their marketing materials.  I've developed a GAE app that
handles all of the production flow on my end as well as providing my
clients with deliverables via a searchable asset-management service.

Now, one of my clients likes the asset-management process of my app so
much that they wish for me to develop a similar system for dealing
with all of their internal graphics as well as those supplied by other
vendors (these would include the source files that I use to create the
3D images).

Ideally, my deliverables (referenced blobstore objects) should be
available within my client's app and behave just like any other asset
that belongs to their app. Also, I would like to integrate my client's
source files into my production flow (eliminating an admin task for my
client).  This would essentially entail sharing datastore objects
between apps.

I have not had much time to think about a solution to this (my client
made the request just yesterday...).  My first thought was to write a
service that would use URL Fetch to create new objects in my client's
app, duplicating my own, or just to upload the data twice (once to my
app and once to my client's).  Both of those solutions sound
problematic, both for redundancy of resources and for maintaining
consistency.
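
The URL Fetch idea would look something like this (a rough sketch; the
endpoint and payload are hypothetical):

from google.appengine.api import urlfetch

payload = 'serialized asset bytes'  # placeholder for the asset data
result = urlfetch.fetch(
    url='https://client-app.appspot.com/api/import_asset',
    payload=payload,
    method=urlfetch.POST,
    headers={'Content-Type': 'application/octet-stream'})
if result.status_code != 200:
    pass  # retry, or queue for later reconciliation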

If there is a specific way of sharing datastore entities between apps
it would be great for my situation.

Thanks!

--Joe


On Oct 7, 12:29 pm, "Ikai Lan (Google)" wrote:
> It's probably easier for you to keep the data together in one app unless you
> really need it apart. What's your use case?
>
> --
> Ikai Lan
> Developer Programs Engineer, Google App Engine
> Blogger:http://googleappengine.blogspot.com
> Reddit:http://www.reddit.com/r/appengine
> Twitter:http://twitter.com/app_engine
>
>
>
> On Thu, Oct 7, 2010 at 9:21 AM, Robert Kluin  wrote:
> > Yes it is possible.  Write a service API within app1 that makes the
> > data accessible to the other apps.
>
> > Robert
>
> > On Thu, Oct 7, 2010 at 04:07, imlangzi  wrote:
> > > I have some applications in GAE.
> > > App1
> > > App2
> > > App3
>
> > > we stored some data in app1.
>
> > > I want to share them to app2 and app3? Is it possible?  And how?
>
> > > Anybody has some suggestion?
>




[google-appengine] Re: SDK Upgrade 1.5.2: --datastore_path ignored?

2011-07-22 Thread Joseph Letness
I tried the --default_partition="" flag but now validation.py is
throwing an exception:

Traceback (most recent call last):
  File "/Applications/GoogleAppEngineLauncher.app/Contents/Resources/GoogleAppEngine-default.bundle/Contents/Resources/google_appengine/google/appengine/tools/dev_appserver.py", line 4099, in _HandleRequest
    default_partition)
  File "/Applications/GoogleAppEngineLauncher.app/Contents/Resources/GoogleAppEngine-default.bundle/Contents/Resources/google_appengine/google/appengine/api/validation.py", line 360, in __setattr__
    value = self.GetValidator(key)(value, key)
  File "/Applications/GoogleAppEngineLauncher.app/Contents/Resources/GoogleAppEngine-default.bundle/Contents/Resources/google_appengine/google/appengine/api/validation.py", line 598, in __call__
    return self.Validate(value, key)
  File "/Applications/GoogleAppEngineLauncher.app/Contents/Resources/GoogleAppEngine-default.bundle/Contents/Resources/google_appengine/google/appengine/api/validation.py", line 923, in Validate
    '\'%s\'' % (value, key, self.re.pattern))
ValidationError: Value '""~tagassetspro' for application does not match expression '^(?:[a-z\d\-]{1,100}\~)?(?:(?!\-)[a-z\d\-\.]{1,100}:)?(?!-)[a-z\d\-]{1,100}$'

This has got to be a bug. It's not like using stored test data with
the dev server is some kind of oddball edge case ;-)

--Joe



On Jul 22, 4:57 pm, Chris Copeland  wrote:
> Thanks, Matthew.
>
> I was able to update to 1.5.2 and use my existing datastore by adding that
> flag.
>
> It would have been useful if the release notes had mentioned that this would
> be necessary.
>
> -Chris
>
> On Fri, Jul 22, 2011 at 2:39 PM, Matthew Blain wrote:
>
>
>
> > That's a clever way to update the appid. I do not know if it works for
> > all cases (e.g. it may not work for all reference properties (stored
> > keys)), but it is a neat trick.
>
> > Another way to deal with it is to use the  --default_partition="" flag
> > rather than using an older version of the sdk.
>
> > --Matthew
>
> > On Jul 22, 12:17 pm, c h  wrote:
> > > hi all,
>
> > > I *think* that it is honoring your datastore location (though the log
> > > message is incorrect), but the change to rename your application to
> > > dev~appname in development has just rendered all of our test data
> > > useless.
>
> > > After re-importing my test data, it does look like it is stored where I
> > > ask it to be, but under the new application name.
>
> > > if you are lucky enough to be using sqlite you can connect to the db and
> > > rename some tables to get it to work:
>
> > > sqlite3 local_appname_dev_sqlite.datastore
> > > sqlite> .tables
> > > Apps
> > > IdSeq
> > > Namespaces
> > > appname!!Entities
> > > appname!!EntitiesByProperty
> > > appname!namespace!Entities
> > > appname!namespace!EntitiesByProperty
> > > sqlite> alter table `appname!!Entities` rename to `dev~appname!!Entities`;
> > > sqlite> alter table `appname!!EntitiesByProperty` rename to `dev~appname!!EntitiesByProperty`;
> > > sqlite> alter table `appname!namespace!Entities` rename to `dev~appname!namespace!Entities`;
> > > sqlite> alter table `appname!namespace!EntitiesByProperty` rename to `dev~appname!namespace!EntitiesByProperty`;
>
> > > where you substitute 'appname' for your application's name, and 'namespace'
> > > for your data namespace.
>
> > > cfh
>




[google-appengine] Re: SDK Upgrade 1.5.2: --datastore_path ignored?

2011-07-23 Thread Joseph Letness
That did the trick, Cat.  Thanks!

On Jul 23, 2:46 am, Cat  wrote:
> 
> SOLUTION
> 
>
> 1. USE --default_partition= BUT DO NOT INCLUDE THE QUOTES as mentioned
> in Matthew's post.
> 2. IGNORE THE INCORRECT LOG MESSAGE ... rdbms_sqlite.py:58] Connecting
> to SQLite database ...
>
> Launcher Flags:
> --datastore_path=/Users/cat/repositories/appengine/my.datastore --default_partition=
>
> Console Flags:
> dev_appserver.py --datastore_path=/Users/cat/repositories/appengine/my.datastore --default_partition= -p 8080 .
>
> That's it.
>
> On 23 Jul., 09:25, Cat  wrote:
>
>
>
> > One thing I know for sure now is that the following log message is
> > bogus; it appears even if dev_appserver.py successfully connects and
> > uses a store at a different location than the TMP directory.
> > INFO     2011-07-23 07:22:11,762 rdbms_sqlite.py:58] Connecting to SQLite database '' with file '/var/folders/u5/u5xmrm5gHPGXhfjlyv98u++++TI/-Tmp-/dev_appserver.rdbms'




[google-appengine] Re: Filter query by key name

2011-08-30 Thread Joseph Letness
Hi, the model class already has a built-in method for retrieving an
entity by key name:

my_entity = MyModel.get_by_key_name(key_name)

Also, key names are unique within an entity kind. Your example shows an
iteration over multiple entities, which will never happen if you are
querying on a single key name.

Hope this helps.

--Joe

On Aug 29, 2:33 pm, "S.Prymak"  wrote:
> I have to query the datastore for entities with key names which are greater
> than or equal to a specified string. Something like this:
>
> def get_pages_by_key_name(key):
>   p = models.Page(key_name=key)
>   query = GqlQuery("SELECT * FROM Page WHERE __key__ >= :1", p.key())
>   return [i for i in query]
>
> How can I do that?




[google-appengine] Re: "500 Server Error" for over 24 hours: python 2.5, django, appenginepatch

2011-12-08 Thread Joseph Letness
Hi John, I'm experiencing the same with my appenginepatch app on M/S:
a marked increase in DeadlineExceeded errors. It's usable, but the user
experience has got to be pretty sucky.  I have a similar app (Python
2.5, appenginepatch) running on HRD and it is solid.

--Joe




[google-appengine] Re: SWFUpload with Python in App Engine

2011-12-19 Thread Joseph Letness
Hi Nick, I implemented a progress indicator using HTML5 a few months
ago.  If I remember correctly, the main issue I had with using the
blobstore was getting my AJAX request headers correct.  Take a look at
this blog post by Nick Johnson, where he uses Plupload (which uses SWF)
as his client for the blobstore:

http://blog.notdot.net/2010/04/Implementing-a-dropbox-service-with-the-Blobstore-API-part-2

By looking at how he wrote his upload handlers I was able to adapt it
to my use case.
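
For what it's worth, the basic shape of a blobstore upload handler
looks roughly like this (a minimal sketch; the paths and form field
name are illustrative):

from google.appengine.ext import blobstore
from google.appengine.ext.webapp import blobstore_handlers

class UploadHandler(blobstore_handlers.BlobstoreUploadHandler):
    def post(self):
        # 'file' must match the name of the upload field in the POST
        blob_info = self.get_uploads('file')[0]
        self.redirect('/serve/%s' % blob_info.key())

# the client (SWF or a plain form) must POST to a one-time URL:
upload_url = blobstore.create_upload_url('/upload')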

Hope this helps, good luck!

--Joe



On Dec 17, 12:34 pm, Nick  wrote:
> Hi all. I'm trying to upload video files to blobstore with SWFUpload.
> I've tested my code with PHP and it's working fine. I've also tested a
> regular HTML multipart/form-data upload with App Engine in Python and
> that also works fine.  But for some reason beyond me, when I use
> SWFUpload and submit to a Python handler in App Engine, the upload
> simply fails, with no error or progress feedback.
>
> What's going on? Is there something special I need to do to get
> SWFUpload to work?  Maybe it has something to do with GAE/Python
> needing the multipart/form-data header?
>
> Can't find any discussion on this topic with a google search. Any
> advice would be very appreciated.
>
> -Nick




[google-appengine] Re: Blobstore Downloads using IE seeing errors connecting to appspot

2011-12-22 Thread Joseph Letness
Hi Will, you're probably sending your blobs without a properly formatted
response header.  I had this same problem when I was testing content
types and I inadvertently left the content_type header undeclared.
Every other browser worked fine but IE was broken (apparently IE is
the only browser that cares to know beforehand what the content type
is ;).  After I fixed that, all was good with IE again.
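
A minimal sketch of the fix (the handler and the content type are
illustrative):

from google.appengine.ext.webapp import blobstore_handlers

class ServeHandler(blobstore_handlers.BlobstoreDownloadHandler):
    def get(self, blob_key):
        # declare the type explicitly instead of leaving it unset
        self.send_blob(blob_key, content_type='application/zip')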

Hope this helps and good luck!

--Joe

On Dec 21, 12:20 pm, Will Reiher  wrote:
> I've heard some support issues come up when some of our customers have
> tried to download a file from the blobstore using Internet Explorer and get
> an error. The download dialog comes up and everything seems to be acting
> properly, but when the download begins they see some type of "cannot connect
> to *.appspot.com" error. Has anyone else seen this and, more importantly, is
> there a work-around?
>
> We all work in Firefox or Chrome browsers and have never seen this issue.




[google-appengine] Do images served with get_serving_url() affect my bandwidth quota?

2011-12-28 Thread Joseph Letness
Hi everybody, I'm hoping someone might know what the billing policy is
concerning image requests served directly from the High Performance
Image Service API (Picasa).

I've been searching the docs for this but I can't seem to come up with
a definitive answer.  My GAE app generates a url with
get_serving_url(), such as  "https://lh6.ggpht.com/hash...";.  When
that url is requested, does it get billed to my outgoing bandwidth
quota?  If so, is there anyway of tracking it?

The reason I'm asking is that I have been supplying a client with
images for his e-commerce site which he downloads from my asset
manager that I built on GAE.  Now he thinks it would be great idea if
he just used my urls directly, as a content delivery network, instead
of having them stored and served from his host.  I just need to know
what costs I might incur and how I can bill my client.

Thanks in advance!

--Joe




[google-appengine] Re: Do images served with get_serving_url() affect my bandwidth quota?

2011-12-28 Thread Joseph Letness
Thanks Barry, I had assumed it was included but it was hard to be
sure.  When I examined my billing history, the outgoing bandwidth on a
particular app that uses the high performance image urls seemed to be
lower than expected.  Perhaps there is some edge caching going on (or,
most probably, I'm just misinterpreting my billing histories ;)

Thanks again,

--Joe




[google-appengine] Re: Migration to HRD datastore, do I get to transfer my $50 in billing credits?

2012-01-08 Thread Joseph Letness
Hi Marzia, could you please transfer the remaining credit for my
recently migrated app as well?

ideation3d to ideation3d-hrd

Thank you very much!

On Jan 8, 3:58 am, Aurelian  wrote:
> Hi Marzia,
> I have the same problem, could you transfer the credit from app
> 'riparautonline' to app 'riparautonline-hrd'?
>
> Regards,
> Aurelian
>
> On 4 Dic 2011, 22:30, Igor  wrote:
>
>
>
>
>
>
>
> > Hi Marzia,
> > I have the same problem, could you transfer the credit from app 'shop-
> > gallery' to app 'shop-gallery-hrd'?
>
> > On Nov 7, 4:13 am, Marzia Niccolai  wrote:
>
> > > Hi Jason,
>
> > > Yes, we can transfer the credit between apps. I will do this now.
>
> > > -Marzia
>
> > > On Sat, Nov 5, 2011 at 10:39 PM, Jason  wrote:
> > > > I migrated my MS app (glutenfreemeapp) to HRD (j-findmegf), but in doing
> > > > so, I lost my $50 in billing credits that I had on my old app id.  Is there
> > > > any way to get the credit transferred to my new app id?
>




[google-appengine] Re: images.GetUrlBase deadline problems since yesterday morning

2012-02-14 Thread Joseph Letness
Hi Amy, I'm running 1.6.2 and I've tried with both a 4k PNG and a 500k
JPEG; both hit the deadline exceeded error after 5 sec.  Earlier in
this thread, Stuart Langley stated that the deadline for
images.GetUrlBase() had been increased to 15 sec, but that does not
seem to be the case.

Also, I'm encountering this on two apps, one Python 2.5 and the other
2.7.

Thanks.

--Joe

On Feb 14, 2:05 pm, Amy Unruh  wrote:
> For those of you seeing this deadline issue, do you see any pattern w.r.t.
> it occurring with large images?  Approximately how large?
> Are you running 1.6.2?
>
>
>
>
>
>
>
> On Wed, Feb 15, 2012 at 1:17 AM, Andreas  wrote:
> > got a few of them too.
>
> > On Feb 14, 2012, at 7:46 AM, Ubaldo Huerta wrote:
>
> > I'm observing the issue right now.
>
> > The API call images.GetUrlBase() took too long to respond and was cancelled.
> > Traceback (most recent call last):
> >   File "/base/data/home/apps/s~yagruma-site/433.356791994696982198/handlers/project.py", line 177, in get
> >     project.set_image(image_blob)
> >   File "/base/data/home/apps/s~yagruma-site/433.356791994696982198/models/__init__.py", line 214, in set_image
> >     self.lowres_image_url = get_serving_url(lowres_image_blob_key)
> >   File "/base/python_runtime/python_lib/versions/1/google/appengine/api/images/__init__.py", line 1273, in get_serving_url
> >     response)
> >   File "/base/python_runtime/python_lib/versions/1/google/appengine/api/apiproxy_stub_map.py", line 94, in MakeSyncCall
> >     return stubmap.MakeSyncCall(service, call, request, response)
> >   File "/base/python_runtime/python_lib/versions/1/google/appengine/api/apiproxy_stub_map.py", line 308, in MakeSyncCall
> >     rpc.CheckSuccess()
> >   File "/base/python_runtime/python_lib/versions/1/google/appengine/api/apiproxy_rpc.py", line 133, in CheckSuccess
> >     raise self.exception
> > DeadlineExceededError: The API call images.GetUrlBase() took too long to respond and was cancelled.
>




[google-appengine] Re: django_setup

2012-02-16 Thread Joseph Letness
If you're using Django-Nonrel, you can use it "right out of the box"
without any special configuration.  It uses its own supplied version
of Django that resides in your app's codebase and ignores GAE's built-
in version, so you can ignore any references to "django_setup".

--Joe


On Feb 16, 7:26 am, anatoly techtonik  wrote:
> Yes, I am also interested to know.




[google-appengine] Re: images.GetUrlBase deadline problems since yesterday morning

2012-02-19 Thread Joseph Letness
Hi Stuart, the pattern that I use involves calling get_serving_url
once, without size or crop arguments, and then storing the result as a
string in the entity that references the blob_info.
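
Roughly (a minimal sketch; the model is illustrative):

from google.appengine.api import images
from google.appengine.ext import blobstore, db

class Asset(db.Model):
    blob = blobstore.BlobReferenceProperty()
    serving_url = db.StringProperty()

def assign_serving_url(asset):
    # called once, right after upload; size/crop arguments get
    # appended in the markup rather than passed here
    asset.serving_url = images.get_serving_url(asset.blob.key())
    asset.put()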

Thanks for any help getting this resolved.

On Feb 18, 3:19 am, Stuart Langley  wrote:
> Does your app typically call get_serving_url on the same set of images over
> and over, or do you typically only call it once per blob?
>
> Just trying to understand your application pattern and how you use this API.




[google-appengine] Re: Getting Access-Control-Allow-Origin header for images stored in Appengine Blobstore for use in canvas tag.

2012-02-26 Thread Joseph Letness
Hi Vinuth,

I think the only way to do this right now would be to forgo the
getServingUrl method from the Blobstore Images API and generate a
private url for the image within your app, much the same way that
you would serve a non-image blobstore entity such as a .zip or some
other binary.
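
Something along these lines (a minimal sketch; the route and the
wildcard origin are illustrative):

from google.appengine.ext.webapp import blobstore_handlers

class ImageHandler(blobstore_handlers.BlobstoreDownloadHandler):
    def get(self, blob_key):
        # set the header the canvas needs before streaming the blob
        self.response.headers['Access-Control-Allow-Origin'] = '*'
        self.send_blob(blob_key, content_type='image/png')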

However, there are drawbacks: if you rely on the sizing and cropping
arguments (appended to the API-generated url) to render your images,
you will need to build your own functionality for storing and serving
de-normalized results of the transformations.  Also, your app will
incur the overhead of both a request and a few RPCs for each
image served, which can add up if you are displaying many thumbnails
on a single page, noticeably impacting latency and/or forcing your
app to scale up more resources.

On the other hand, it does allow for setting the headers any way you
want and rendering readable urls (which *some* SEO auditors consider
critical), as well as giving you a mechanism for monitoring the usage of
each image asset.  That last part is helpful if you don't want some
user consuming your app's bandwidth for their own CDN purposes, since,
AFAIK, GAE does not have any specific logging for individual Blobstore
images served. (If it does, someone please let me know ;)

Maybe this will fit your use case, good luck!

--Joe

On Feb 25, 3:49 am, Vinuth Madinur  wrote:
> Had posted this on Stackoverflow, since I didn't see any response there,
> retrying to start the conversation here.
>
> I hit upon this problem while trying to display an image in a canvas to fetch
> its color properties. However, since the image from blobstore is served
> from a different domain, the canvas gets tainted. Is there a workaround for
> this problem? Shouldn't blobstore be sending the
> Access-Control-Allow-Origin header based on the requesting domain and
> whether the domain owns the content?




[google-appengine] Re: 503 errors uploading to GCS via blobstore API

2015-07-09 Thread Joseph Letness
Hi Jeff,

Does your app-id have permission to write to the bucket? I ran into this
issue just a few days ago and I needed to explicitly give my app write
access to the bucket by adding the app's service account
(your-app...@appspot.gserviceaccount.com) in the bucket's Permissions
tab. That did the trick, no more 503 errors.
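
If you prefer the command line, I believe the same grant can be made
with gsutil (the bucket name is illustrative):

gsutil acl ch -u your-app-id@appspot.gserviceaccount.com:W gs://your-bucket.appspot.com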

Hope this helps.

Joe

On Wednesday, July 8, 2015 at 7:05:36 PM UTC-5, Jeff Schnitzer wrote:
>
> Billing is enabled with a valid card. There's also plenty of the startup
> credit. With a little hunting I found the Daily Budget setting, which was 0,
> so I upped that. Still getting 503 - should it recognize the change
> immediately?
>
> This budget setting has not been necessary in the other two test 
> environments that I have deployed this code to, which are still working 
> fine. What changed?
>
> Thanks,
> Jeff
>
> On Wed, Jul 8, 2015 at 1:41 PM, Patrice (Cloud Platform Support) <pvout...@google.com> wrote:
>
>> Hi Jeff,
>>
>> Just doing the normal checks: Is billing enabled on the project? GCS 
>> needs billing, so it's possible you get 503 if it isn't (or if your daily 
>> budget is still at 0, the default value).
>>
>> Cheers
>>
>>
>> On Wednesday, July 8, 2015 at 4:09:09 PM UTC-4, Jeff Schnitzer wrote:
>>>
>>> I'm setting up a demo environment on a new appid and running into some 
>>> problems with GCS. Unless otherwise mentioned, all of this is using the new 
>>> console.
>>>
>>> The first strange thing is that a default bucket was not created. When 
>>> making appids before, the default bucket was created automatically. With a 
>>> little digging I found a button in the old console App Settings that added 
>>> a default GCS bucket, and that seemed to work. I have a default bucket 
>>> named gearlaunch-hub-demo.appspot.com.
>>>
>>> Now when I try to upload (using the same code that works on other 
>>> appids), I get a 503 error on the client. There's nothing in the logs about 
>>> this error.
>>>
>>> Any idea what's up, or how I can get uploads working?
>>>
>>> My appid is gearlaunch-hub-demo. The options passed 
>>> to createUploadUrl are:
>>>
>>>
>>> UploadOptions.Builder.withGoogleStorageBucketName(AppIdentityServiceFactory.getAppIdentityService().getDefaultGcsBucketName())
>>>
>>> Thanks,
>>> Jeff
>>>
>>
>



[google-appengine] Re: App Engine Serious Trouble Started 10 minutes ago

2010-05-20 Thread Joseph Letness
Something is going on, my apps are all working correctly but I can't
deploy or log in to my dashboard.  It's been this way for the last
20-30 mins or so.  I guess it's time to pack it in for the day...




[google-appengine] Re: How import datas to BigTable from my local mysql ?

2010-11-17 Thread Joseph Letness
When you use the development server from the SDK, it stores your local
datastore in a temp file that will get flushed when you restart your
system (at least on OS X it does).  Add the flag
"--datastore_path=/path_to_your_datastore.datastore" when you launch
your development server to create a persistent file for your data.
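
For example (the paths are illustrative):

dev_appserver.py --datastore_path=/path/to/my_app.datastore /path/to/my_app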

Hope that helps.

On Nov 16, 8:42 pm, YF CAO  wrote:
> thanks.
> After closing the IDE, all test data disappeared.
> Is it in memory?
> How do I create a local BigTable?
>
> 2010/11/16 Robert Kluin 
>
>
>
> > You've got a few good options.  Check out:
> >http://code.google.com/appengine/docs/python/tools/uploadingdata.html
>
> > Although, I generally find it easier to write my own 'services' for
> > importing data.
>
> > Robert
>
> > On Mon, Nov 15, 2010 at 22:31, YF CAO  wrote:
> > > hi all,
> > > How import datas to BigTable from my local mysql ?
>




[google-appengine] Re: Two equality filters on the same property + others equalities failing

2010-11-24 Thread Joseph Letness
If 'a' is not a ListProperty, you will never get a matching result for
"a=x AND a=y".  What it sounds like is that you want something like
this: "b=z AND c=t AND (a=x OR a=y)". Unfortunately, AFAIK, this will
not work with the current implementation of the datastore due to the
query generating an exploding index (although I think this will be
covered as part of the "Next-Gen Queries" when they are rolled out).

A work-around would be to de-normalize the 'a' property in your
model:

Create two distinct properties to represent the value 'a', as 'a' and
'ab', and populate both with the same value. Then your query would be
"a=x AND ab=y AND b=z AND c=t", which shouldn't need a custom index.

You will incur the overhead of storing the extra property.



On Nov 7, 1:13 am, ZS  wrote:
> I have a query like
> a=x AND a=y AND b=z AND c=t
> and it is failing saying there is no suitable index. I thought all
> equalities is always allowed? Does that not apply if there are two
> equalities on the same property? Yet a=x AND a=y AND b=z  works ok so
> what is the rule?
>
> Thanks




[google-appengine] Re: Two equality filters on the same property + others equalities failing

2010-11-24 Thread Joseph Letness
It just occurred to me that my previous post is incorrect.  I should
probably wait until I've had more coffee in the morning before
offering lame advice ;-)

In my example, the equality still fails for either 'a' or 'ab',
which will prevent getting a result.

On Nov 24, 10:27 am, Joseph Letness  wrote:
> If 'a' is not a ListProperty you will never get a matching result for
> "a=x AND a=y".  What is sounds like is that you want something like
> this "b=z AND c=t AND(a=x OR a=y)". Unfortunately, AFAIK, this will
> not work with current implementation of the datastore due to the query
> generating an exploding index (although, I think this will be covered
> as part of the "Next-Gen Queries" when they are rolled out).
>
> A work-around would be to de-normalize the 'a' property in your
> model :
>
> create two distinct properties to represent the value 'a' as 'a' and
> 'ab' and populate both with the same value. Then your query would be
> "a=x AND ab=y AND b=z AND c=t" which shouldn't need an custom index.
>
> You will incur the overhead of storing the extra property.
>
> On Nov 7, 1:13 am, ZS  wrote:
>
>
>
> > I have a query like
> > a=x AND a=y AND b=z AND c=t
> > and it is failing saying there is no suitable index. I thought all
> > equalities is always allowed? Does that not apply if there are two
> > equalities on the same property? Yet a=x AND a=y AND b=z  works ok so
> > what is the rule?
>
> > Thanks




[google-appengine] Re: Two equality filters on the same property + others equalities failing

2010-11-24 Thread Joseph Letness
Ok, _now_ I remember how I did this...

This work-around will only be practical if your result set is
relatively small (mine were fewer than 500 entities, but maybe you could
get away with more).

You perform two separate queries, one matching each value for 'a'.  Be
sure to use "keys_only=True" in your queries.  Then loop
through the results, eliminate the duplicates, and store the keys in a
list.  At that point, simply do a batch get of the key list:

final_results = db.get(list_of_keys)
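
Spelled out, it looks something like this (a minimal sketch; the model
and property names are illustrative):

from google.appengine.ext import db

q1 = MyModel.all(keys_only=True).filter('a =', x).filter('b =', z).filter('c =', t)
q2 = MyModel.all(keys_only=True).filter('a =', y).filter('b =', z).filter('c =', t)

# a set both merges the two result lists and eliminates duplicates
keys = set(q1.fetch(500)) | set(q2.fetch(500))
final_results = db.get(list(keys))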

Sorry about the previous post.


On Nov 24, 10:27 am, Joseph Letness  wrote:
> If 'a' is not a ListProperty you will never get a matching result for
> "a=x AND a=y".  What is sounds like is that you want something like
> this "b=z AND c=t AND(a=x OR a=y)". Unfortunately, AFAIK, this will
> not work with current implementation of the datastore due to the query
> generating an exploding index (although, I think this will be covered
> as part of the "Next-Gen Queries" when they are rolled out).
>
> A work-around would be to de-normalize the 'a' property in your
> model :
>
> create two distinct properties to represent the value 'a' as 'a' and
> 'ab' and populate both with the same value. Then your query would be
> "a=x AND ab=y AND b=z AND c=t" which shouldn't need an custom index.
>
> You will incur the overhead of storing the extra property.
>
> On Nov 7, 1:13 am, ZS  wrote:
>
>
>
> > I have a query like
> > a=x AND a=y AND b=z AND c=t
> > and it is failing saying there is no suitable index. I thought all
> > equalities is always allowed? Does that not apply if there are two
> > equalities on the same property? Yet a=x AND a=y AND b=z  works ok so
> > what is the rule?
>
> > Thanks




[google-appengine] LIFO key name strategy?

2011-02-12 Thread Joseph Letness
Hi everybody, I was wondering if anybody has any good ideas for
generating LIFO (Last In, First Out) key names.  I can't use a
composite index since it would explode with my use case.

Currently, I can think of two methods:

Use the auto-generated id (which, I believe, is accumulative), query
for keys only, and reverse the list in memory.  This would be fine if I
could guarantee that my entire result set can be handled within a
single request.

OR

Create a de-accumulator entity in the datastore and have it count down
from some reasonably high integer, and create my key name with that (a
composite of the de-accumulation and the entity name).  The drawback
for this method is that I'm incurring an additional read-write every
time a new LIFO entity is created, and possible contention on the
de-accumulator if I run it in a transaction (I haven't decided if
consistency of the de-accumulation is imperative for my use case yet).

I'm using Python.  If anybody has any better ideas it would be much
appreciated!

Thanks,

--Joe





[google-appengine] Re: LIFO key name strategy?

2011-02-13 Thread Joseph Letness
Hi Calvin and Robert, thanks for your replies.  I should have been
clearer about what I am doing; here is some more info:

Calvin, thanks for the link to Ikai's blog post, I haven't seen that
one and it was very interesting.

Robert, here are specific answers to your questions:

>>Why do you say: " I can't use a composite index since it would explode with 
>>my use case"?

I'm using Brett Slatkin's "Relation Index" method of building and
querying set memberships (Google I/O 2009 - Building Scalable, Complex
Apps on App Engine).  According to Brett, using a composite index on
this kind would cause explosion, so any ordering of results will need
to be done in-memory during the request. If the sort order is
immutable, sorted key names can be used to order results based on
their lexicographical position.

Since a creation timestamp is "immutable" data, I figured that using
lexicographic key names would be the way to go.

>>What would be fine if you could handle your entire result set in one request?

Ordering the result set in-memory.

>>What are you trying to do?

The app is a digital-asset manager.  Users need to be able to query a
set (using the relation index method) and have the results return the
most recent additions first.  The result set could easily be a few
thousand, so I want to use cursor-pagination to display the results
which would preclude any in-memory ordering.

(I'm actually refactoring my existing app that I use to manage/deliver
graphic assets to my clients so that they can add their own data.)

>>Is there a single global LIFO stack, or are there multiple stacks?

The entities are all of the same kind, however, LIFO behavior is
localized to individual user groups.

>>How are new items added to the stack(s)?,  What is the addition rate?

Just one item per user request.  User groups would be just a few
individual users, probably fewer than twenty. The rate per group would
be so low that chances of contention on any sort of accumulator would
be almost nonexistent.

>>Is there a requirement that the items are precisely ordered or are some (or 
>>small) mis-orderings acceptable?

Precision is NOT critical.  Close approximation of chronology is just
fine.


--The auto-generated ids are not strictly increasing

I did not know that.  Thanks!

--Using the current time may also be problematic since the machines
will have slight variations, and in some cases significant variations.

I was aware of that, but since absolute precision is not necessary I
could still use the timestamp as an accumulator if there is such a
thing as an "inverse-timestamp algorithm"!?!?

So...

After spending some more time thinking about this, here is what I plan
to do:

Create a counter model kind with an IntegerProperty
starting value of ten billion (I'd like to see somebody reach the
bottom of that!). Give each user group its own counter and de-count
the values in a transaction (or not; it might be simpler to dismiss
contention and write a handler that ensures uniqueness of the key name
but maintains approximate lexicographic position).  When the counter
value is read, convert the value to a padded string and concatenate it
with the user group name and a leading lowercase letter
(k999836/usergroupname), and use that as the key name for the new asset.
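
Roughly what I have in mind (a minimal sketch; the names and padding
width are illustrative):

from google.appengine.ext import db

class GroupCounter(db.Model):
    # counts down from ten billion; keyed by user group name
    value = db.IntegerProperty(default=10000000000)

def next_key_name(group_name):
    def txn():
        counter = (GroupCounter.get_by_key_name(group_name) or
                   GroupCounter(key_name=group_name))
        counter.value -= 1
        counter.put()
        return counter.value
    value = db.run_in_transaction(txn)
    return 'k%011d/%s' % (value, group_name)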

Furthermore, it occurred to me that as a result set is reduced to a
manageable in-memory size, I could test for the length of the results
and offer the user the ability to custom-order their results (asset
name alphanumeric or asset kind, for example).  Just a thought.

Thanks again for the replies. If anyone thinks there is a better
approach, please let me know; I kind of make this stuff up as I go
along...

--Joe


On Feb 12, 10:52 pm, Robert Kluin  wrote:
> Hi Joe,
>   What are you actually trying to do?  Is there a single global LIFO
> stack, or are there multiple stacks?  How are new items added to the
> stack(s)?  In batches to one stack at a time, batches across stacks?
> What is the addition rate?  How are items removed / processed from the
> stack(s)?  Is there a requirement that the items are precisely ordered
> or are some (or small) mis-orderings acceptable?
>
>   Why do you say: " I can't use a composite index since it would
> explode with my use case"?
>
>   The auto-generated ids are not strictly increasing.  What would be
> fine if you could handle your entire result set in one request?
>
>   Using the current time may also be problematic since the machines
> will have slight variations, and in some cases significant variations.
>
> Robert
>
>
>
> On Sat, Feb 12, 2011 at 14:38, Joseph Letness  wrote:
> > Hi everybody, I was wondering if anybody has any good ideas for
> > generating LIFO (Last In FIrst Out) key names.  I 

[google-appengine] Re: LIFO key name strategy?

2011-02-15 Thread Joseph Letness
Julian, that is _exactly_ what I was looking for.  Counting up to a
future time to create descending values...  It seems so obvious now!
It's a much better solution than the de-accumulator handler that I
wrote.
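
For anyone who finds this thread later, here is a sketch of the idea
as I understand it (the far-future date and the padding width are
arbitrary choices on my part):

import datetime
import uuid

FAR_FUTURE = datetime.datetime(2200, 1, 1)  # a "ridiculously future time"

def descending_key_name():
    # the seconds remaining until FAR_FUTURE shrink as real time
    # advances, so newer entities get lexicographically smaller
    # (earlier-sorting) key names
    delta = FAR_FUTURE - datetime.datetime.utcnow()
    seconds = delta.days * 86400 + delta.seconds
    # fixed-width padding keeps string order aligned with numeric
    # order; the random suffix guards against same-second collisions
    return 'k%011d_%s' % (seconds, uuid.uuid4().hex[:8])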

Thanks!

On Feb 14, 11:57 pm, Julian Namaro  wrote:
> I am not sure about the mathematics of it, but intuitively there is no
> perfect algorithm for constructing timestamps in a reverse
> lexicographical ordering, because adding a character to a string will
> always make it lexicographically greater.
>
> But I noticed the mapreduce library just picks a "ridiculously future
> time" and goes in reverse order from there:
> http://code.google.com/p/appengine-mapreduce/source/browse/trunk/pyth...
>
> The library also adds a random string to reduce the chance of
> duplicates; maybe that can be replaced by a UUID if you're really
> concerned about uniqueness.
>
> On Feb 14, 5:57 am, Joseph Letness  wrote:
>
>
>
> > Hi Calvin and Robert, thanks for your replies.  I should have been
> > more clear about what I am doing, here is some more info:
>
> > Calvin, thanks for the link to Ikai's blog post, I haven't seen that
> > one and it was very interesting.
>
> > Robert, here are specific answers to your questions:
>
> > >>Why do you say: " I can't use a composite index since it would explode 
> > >>with my use case"?
>
> > I'm using Brett Slatkin's "Relation Index" method of building and
> > querying set memberships (Google I/O 2009 - Building Scalable, Complex
> > Apps on App Engine).  According to Brett, using a composite index on
> > this kind would cause explosion, so any ordering of results will need
> > to be done in-memory during the request. If the sort order is
> > immutable, sorted key names can be used to order results based on
> > their lexicographical position.
>
> > Since a creation timestamp is "immutable" data, I figured that using
> > lexicographic key names would be the way to go.
>
> > >>What would be fine if you could handle your entire result set in one 
> > >>request?
>
> > Ordering the result set in-memory.
>
> > >>What are you trying to do?
>
> > The app is a digital-asset manager.  Users need to be able to query a
> > set (using the relation index method) and have the results return the
> > most recent additions first.  The result set could easily be a few
> > thousand, so I want to use cursor-pagination to display the results
> > which would preclude any in-memory ordering.
>
> > (I'm actually refactoring my existing app that I use to manage/deliver
> > graphic assets to my clients so that they can add their own data.)
>
> > >>Is there a single global LIFO stack, or are there multiple stacks?
>
> > The entities are all of the same kind, however, LIFO behavior is
> > localized to individual user groups.
>
> > >>How are new items added to the stack(s)?,  What is the addition rate?
>
> > Just one item per user request.  User groups would be just a few
> > individual users, probably less than twenty. The rate per group would
> > be so low that chances of contention on any sort of accumulator would
> > be almost nonexistent.
>
> > >>Is there a requirement that the items are precisely ordered or are some 
> > >>(or small) mis-orderings acceptable?
>
> > Precision is NOT critical.  Close approximation of chronology is just
> > fine.
>
> > --The auto-generated ids are not strictly increasing
>
> > I did not know that.  Thanks!
>
> > --Using the current time may also be problematic since the machines
> > will have slight variations, and in some cases significant variations.
>
> > I was aware of that, but since absolute precision is not necessary I
> > could still use the timestamp as an accumulator if there is such a
> > thing as an "inverse-timestamp algorithm"!?!?
>
> > So...
>
> > After spending some more time thinking about this, here is what I plan
> > to do:
>
> > Create a counter model kind with an IntegerProperty whose starting
> > value is ten billion (I'd like to see somebody reach the bottom of
> > that!). Give each user group its own counter and decrement the value
> > in a transaction (or not; it might be simpler to dismiss contention
> > and write a handler that ensures uniqueness of the key name while
> > maintaining approximate lexicographic position).  When the counter
> > value is read, convert the value to a padded string and concatenate it
> > with the user gr

[google-appengine] Re: LIFO key name strategy?

2011-02-16 Thread Joseph Letness
Hi Nick, my query uses "zig-zag merge-join" with an arbitrary number
of equality filters on a StringListProperty to get result sets, so I
can't use a composite index (unless the exploding-index problem has
been solved or I've fundamentally misunderstood exploding indexes).

Also Nick, maybe you could let me know if this is correct:

It occurred to me that using lexicographically descending keys would
be an optimization for certain use cases like mine.  My users are
primarily interested in recently added items.  Since I can predict
that the majority of my queries are going to be for recently added
items, it makes sense to position those items, lexicographically, so
that table scans will find matches at the beginning of the scan,
especially with a zig-zag merge-join.  Is this a correct assumption?
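
To make the query shape concrete, here is roughly what I'm doing
(Asset and its properties are placeholder names for my real models):

from google.appengine.ext import db

class Asset(db.Model):
    # key_name is the descending, group-prefixed name described earlier
    tags = db.StringListProperty()

def recent_matching(tag_list, limit=20):
    q = Asset.all(keys_only=True)
    for tag in tag_list:
        # each equality filter is served by the built-in single-property
        # index; the datastore zig-zag merge-joins them in key order
        q.filter('tags =', tag)
    # no .order() needed: default key order plus descending key names
    # means the newest assets come back first
    return q.fetch(limit)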

Thanks again to everyone who has replied.  I've used the method from
the mapreduce _get_descending_keys handler that Julian pointed me to
and have modified it for my use case.  It's not an "ideal" inverse-
timestamp algorithm, but it is definitely practical.

--Joe




On Feb 15, 5:54 pm, "Nick Johnson (Google)" 
wrote:
> Why not just use regular timestamps, and sort descending?
>
> -Nick
>
> On Wed, Feb 16, 2011 at 1:42 AM, Joseph Letness wrote:
>
>
>
>
>
> > Julian, that is _exactly_ what I was looking for.  Counting up to a
> > future time to create descending values...  It seems so obvious now!
> > It's a much better solution than the de-accumulator handler that I
> > wrote.
>
> > Thanks!
>
> > On Feb 14, 11:57 pm, Julian Namaro  wrote:
> > > I am not sure about the mathematics of it, but intuitively there is no
> > > perfect algorithm for constructing timestamps in a reverse
> > > lexicographical ordering, because adding a character to a string will
> > > always make it lexicographically greater.
>
> > > But I noticed the mapreduce library just picks a "ridiculously future
> > > time" and goes in reverse order from there:
> > http://code.google.com/p/appengine-mapreduce/source/browse/trunk/pyth...
>
> > > The library also adds a random string to reduce the chance of
> > > duplicates; maybe that can be replaced by a UUID if you're really
> > > concerned about uniqueness.
>
> > > On Feb 14, 5:57 am, Joseph Letness  wrote:
>
> > > > Hi Calvin and Robert, thanks for your replies.  I should have been
> > > > more clear about what I am doing, here is some more info:
>
> > > > Calvin, thanks for the link to Ikai's blog post, I haven't seen that
> > > > one and it was very interesting.
>
> > > > Robert, here are specific answers to your questions:
>
> > > > >>Why do you say: " I can't use a composite index since it would
> > explode with my use case"?
>
> > > > I'm using Brett Slatkin's "Relation Index" method of building and
> > > > querying set memberships (Google I/O 2009 - Building Scalable, Complex
> > > > Apps on App Engine).  According to Brett, using a composite index on
> > > > this kind would cause explosion, so any ordering of results will need
> > > > to be done in-memory during the request. If the sort order is
> > > > immutable, sorted key names can be used to order results based on
> > > > their lexicographical position.
>
> > > > Since a creation timestamp is "immutable" data, I figured that using
> > > > lexicographic key names would be the way to go.
>
> > > > >>What would be fine if you could handle your entire result set in one
> > request?
>
> > > > Ordering the result set in-memory.
>
> > > > >>What are you trying to do?
>
> > > > The app is a digital-asset manager.  Users need to be able to query a
> > > > set (using the relation index method) and have the results return the
> > > > most recent additions first.  The result set could easily be a few
> > > > thousand, so I want to use cursor-pagination to display the results
> > > > which would preclude any in-memory ordering.
>
> > > > (I'm actually refactoring my existing app that I use to manage/deliver
> > > > graphic assets to my clients so that they can add their own data.)
>
> > > > >>Is there a single global LIFO stack, or are there multiple stacks?
>
> > > > The entities are all of the same kind, however, LIFO behavior is
> > > > localized to individua

[google-appengine] Re: AppEngine seems slow to me. Is it normal?

2011-02-23 Thread Joseph Letness
You could try this:

http://code.google.com/p/he3-appengine-lib/wiki/PagedQuery

It's a complete module that uses cursors for a paging abstraction
similar to Django's built-in pagination.  It's just as easy to
implement, and it lets you jump to any page in your result set.
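
If you'd rather roll your own, the raw-cursor version is only a few
lines (Entry here is a stand-in model of mine, not part of the
library):

from google.appengine.ext import db

class Entry(db.Model):
    created = db.DateTimeProperty(auto_now_add=True)

def fetch_page(cursor=None, page_size=20):
    q = Entry.all().order('-created')
    if cursor:
        q.with_cursor(cursor)  # resume where the previous page left off
    items = q.fetch(page_size)
    # hand the new cursor to the client for the "next" link
    return items, q.cursor()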

On Feb 23, 3:59 am, tobik  wrote:
> Independent? Now I'm confused. Does it or does it not matter whether I
> select first page or 100th using offset?
>
> I checked the cursors; they look nice, but it seems to me that they're
> good only for "next next next" type pagination. If the user jumped
> from the first to the 100th page and then back to the 50th page, I
> would still need to go through all the previous entries, and it would
> be slow. Or am I wrong? Naturally, I know that "next next" pagination
> would be suitable for most cases.
>
> Ok, one really last question, and I hope I'm not bothering you too
> much. What about pagination based on time periods, for example "all
> entries created today, yesterday, last week, last month"?  It would
> use simple filtering (greater than, less than), so it should be
> fast... or not?
>
> On Feb 23, 06:25, Robert Kluin  wrote:
>
>
>
> > Performance is independent of the number of entities you have.
> > Namespaces are great for segregating data, but you can't query across
> > them -- you'll only get results from one.
>
> > Like Chris mentioned, use cursors instead of offsets.  And if you can,
> > then yes a well thought out caching strategy is a good way to improve
> > performance.
>
> > Robert
>
> > On Tue, Feb 22, 2011 at 18:57, tobik  wrote:
> > > Sorry, one more tricky question. What about namespaces. If I had 10
> > > namespaces and in each namespace 10 entries, would the performance be
> > > the same as if I had only one namespace but with 100 entries?
>
> > > On Feb 22, 23:31, Chris Copeland  wrote:
> > >> Django's Paginator is not going to be efficient on GAE and is definitely 
> > >> not
> > >> going to scale.
>
> > >> GAE provides cursors which are a very efficient way to page through query
> > >> results:http://code.google.com/appengine/docs/python/datastore/queries.html#Q...
>
> > >> On Tue, Feb 22, 2011 at 3:57 PM, tobik  wrote:
> > >> > I built a page using Django's Paginator which displays a simple table
> > >> > with 20 items from around 1000 total stored in database. I don't know
> > >> > how the Paginator works from the inside, but according to the
> > >> > appstats, it makes two queries (first counts items, second selects
> > >> > given page) and each one of them takes around 130ms of cpu time. Is it
> > >> > a normal value? The truth is that the page loads noticeably slower
> > >> > than a page without any queries. I also calculated that, with the
> > >> > 6.5 CPU-hour limit, I can afford around 3000 such queries every day,
> > >> > which is quite a small number. And that's with only 1000 entries in
> > >> > the database.
>
> > >> > So far I've been using PHP+MySQL, where I am used to such simple
> > >> > queries being really fast, even on poor free hosting. I tried to apply
> > >> > caching on every single page generated by Paginator and it naturally
> > >> > reduced the loading time to minimum. So is it the right approach to
> > >> > AppEngine? Cache everything as much as possible?
>
> > >> > --
> > >> > You received this message because you are subscribed to the Google 
> > >> > Groups
> > >> > "Google App Engine" group.
> > >> > To post to this group, send email to google-appengine@googlegroups.com.
> > >> > To unsubscribe from this group, send email to
> > >> > google-appengine+unsubscr...@googlegroups.com.
> > >> > For more options, visit this group at
> > >> >http://groups.google.com/group/google-appengine?hl=en.
>
> > > --
> > > You received this message because you are subscribed to the Google Groups 
> > > "Google App Engine" group.
> > > To post to this group, send email to google-appengine@googlegroups.com.
> > > To unsubscribe from this group, send email to 
> > > google-appengine+unsubscr...@googlegroups.com.
> > > For more options, visit this group 
> > > athttp://groups.google.com/group/google-appengine?hl=en.

-- 
You received this message because you are subscribed to the Google Groups 
"Google App Engine" group.
To post to this group, send email to google-appengine@googlegroups.com.
To unsubscribe from this group, send email to 
google-appengine+unsubscr...@googlegroups.com.
For more options, visit this group at 
http://groups.google.com/group/google-appengine?hl=en.



[google-appengine] Re: Solutions to an Exploding Index Problem

2011-03-12 Thread Joseph Letness
Hi Aaron, there are a number of ways of getting around the "exploding-
index" problem.  It's hard to offer a good solution without knowing
what the end-result is that you are trying to accomplish.

If your sort order is immutable and can be determined when your
entities are created, the best solution is to create a key_name that
lexicographically represents the sort order (no need to add a .order()
to your query object).

However, if your sort order is dynamic, I would suggest that you
create a special model kind that denormalizes the sort order along
with either the owners or the topics (basically, this would act as an
index), and write some code that generates these index entities when
your keywords are written.  This will be an expensive write operation,
both in executing the code and in writing to multiple entities (always
remember to use batch put when writing multiple entities), but your
reads (which I am assuming are going to be significantly more frequent
than writes) will be efficient.
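
Here is a rough sketch of that index-entity idea, using made-up names
based on your model (untested, just to show the shape):

from google.appengine.ext import db

class KeywordIndex(db.Model):
    # one index entity per (keyword, topic) pair, parented to the keyword
    topic = db.ReferenceProperty()
    owners_saved = db.ListProperty(db.Key)
    sort1property = db.IntegerProperty()

def write_indexes(keyword, topic_keys, owner_keys, sort_value):
    # regenerate one index entity per topic whenever the keyword changes
    indexes = [KeywordIndex(parent=keyword,
                            key_name='topic_%s' % t.id_or_name(),
                            topic=t,
                            owners_saved=owner_keys,
                            sort1property=sort_value)
               for t in topic_keys]
    db.put(indexes)  # batch put: one RPC for all the index entities

def saved_in_topic(user_key, topic_key, limit=20):
    # only one list property appears in this composite index, so it
    # grows linearly with owners_saved instead of exploding
    q = (KeywordIndex.all(keys_only=True)
         .filter('topic =', topic_key)
         .filter('owners_saved =', user_key)
         .order('sort1property'))
    # each result's parent() is the key of the Keywords entity itself
    return db.get([k.parent() for k in q.fetch(limit)])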

Obviously, the simple solution, if you can guarantee that your result
set is small (< 1000 for a simple model like the one you described),
would be to retrieve the entities and perform the sort in your code.
Just remember that this will be an expensive request that happens on
every read (you could use memcache to mitigate it somewhat).
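
As a sketch of that fallback (using your Keywords model; the cache key
scheme and the five-minute expiry are arbitrary):

from google.appengine.api import memcache

def sorted_keywords(user_key, topic_key):
    cache_key = 'kw_%s_%s' % (user_key.id_or_name(), topic_key.id_or_name())
    results = memcache.get(cache_key)
    if results is None:
        q = (Keywords.all()
             .filter('owners_saved =', user_key)
             .filter('topics =', topic_key))
        # fetch everything and sort in memory
        results = sorted(q.fetch(1000), key=lambda k: k.sort1property)
        memcache.set(cache_key, results, time=300)  # five minutes
    return results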

Curious, what is the integer sort for?

Good luck,

--Joe



Also, check out Brett Slatkin's "Building Scalable, Complex Apps on
App Engine" http://www.youtube.com/watch?v=AgaL6NGpkB8

It's got a lot of great insights about data-modeling the GAE way.


On Mar 11, 2:58 am, Aaron  wrote:
> Hi, I'm currently running into an exploding index problem with the
> following model:
> class Keywords(db.Model):
>     # list of user keys who saved this keyword
>     owners_saved = db.ListProperty(db.Key)
>     # list of keys of topic objects
>     topics = db.ListProperty(db.Key)
>     # need to sort by this (note: IntegerProperty must be instantiated)
>     sort1property = db.IntegerProperty()
>
> Keywords can be marked as saved, and I need to be able to query for
> keywords a user has saved that are in a specific topic area.  I'm
> running into an exploding index problem because I need to be able to
> query by a composite index on two list properties (owners_saved and
> topics)...both list properties can potentially become really long.
>
> Can anyone suggest a way for me to be able to avoid the exploding
> index problem AND be able to sort by property?
>
> Thanks in advance!

-- 
You received this message because you are subscribed to the Google Groups 
"Google App Engine" group.
To post to this group, send email to google-appengine@googlegroups.com.
To unsubscribe from this group, send email to 
google-appengine+unsubscr...@googlegroups.com.
For more options, visit this group at 
http://groups.google.com/group/google-appengine?hl=en.