Re: heroku restart inside app for clearing http cache

2010-04-13 Thread Carl Fyffe
There are much easier ways to expire a cache. The docs that explained
how to create the cache more than likely will tell you how to expire
it. Start there.

On Tue, Apr 13, 2010 at 2:23 PM, Chap chapambr...@gmail.com wrote:
 Need a button for a client to clear the cached version of a resource.

 As I understand it, redeploying and potentially heroku restart will
 cause this to happen.

 Is it possible for the app to restart itself? I wonder how people are
 handling this immediate cache expire problem.

 --
 You received this message because you are subscribed to the Google Groups 
 Heroku group.
 To post to this group, send email to her...@googlegroups.com.
 To unsubscribe from this group, send email to 
 heroku+unsubscr...@googlegroups.com.
 For more options, visit this group at 
 http://groups.google.com/group/heroku?hl=en.






Re: heroku restart inside app for clearing http cache

2010-04-13 Thread Carl Fyffe
That isn't a way to force an expire; that's just Heroku cleaning up
the cache for you when you push.

Please read this: http://tomayko.com/writings/things-caches-do
And then this: http://guides.rubyonrails.org/caching_with_rails.html

If you still can't figure it out after that, come back; you'll have a
better understanding and can ask a more pointed question.
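For what it's worth, the usual answer in those two links is to stop relying on restarts and set an explicit Cache-Control header, so cached copies expire on their own. A minimal sketch, not taken from either doc; in a Rails controller this is what `expires_in 300, :public => true` produces:

```ruby
# Sketch: instead of purging by restart, serve the resource with a short
# max-age so HTTP caches re-fetch it on their own. The helper below only
# illustrates the header that Rails' expires_in generates.
def cache_control_header(max_age_seconds, public_cache = true)
  scope = public_cache ? "public" : "private"
  "#{scope}, max-age=#{max_age_seconds}"
end

puts cache_control_header(300)        # => "public, max-age=300"
puts cache_control_header(60, false)  # => "private, max-age=60"
```

With a five-minute max-age, the "clear the cache" button becomes unnecessary for most cases: stale copies simply age out.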

On Tue, Apr 13, 2010 at 2:32 PM, Chap chapambr...@gmail.com wrote:
 Thanks for responding Carl,

 I've been going over the docs and the only way it mentions forcing an
 expire is deploying:
 http://docs.heroku.com/http-caching#cache-purge-on-deploy

 On Apr 13, 2:29 pm, Carl Fyffe carl.fy...@gmail.com wrote:
 There are much easier ways to expire a cache. The docs that explained
 how to create the cache more than likely will tell you how to expire
 it. Start there.



 On Tue, Apr 13, 2010 at 2:23 PM, Chap chapambr...@gmail.com wrote:
  Need a button for a client to clear the cached version of a resource.

  As I understand it, redeploying and potentially heroku restart will
  cause this to happen.

  Is it possible for the app to restart itself? I wonder how people are
  handling this immediate cache expire problem.







Re: Streaming files without locking up dynos

2010-04-06 Thread Carl Fyffe
Here's a little tough love: stop using Twilio. They don't provide the
features that you need. If it requires multiple hacks to your code and
proxying the files through your server just to fix headers, it isn't
worth the price you are paying. Put the files on S3 and be done with
it.

I don't know much about Twilio, or whether I'm even spelling it right,
but either get them to fix their service or leave.
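To make the S3 suggestion concrete: the headers Twilio gets wrong can simply be set on the S3 object at upload time and S3 will serve them. A hedged sketch; the header names are standard HTTP, but how you attach them to the PUT depends on your S3 client library, so the upload call itself is left out:

```ruby
# Sketch of the S3 route: set Content-Length and Content-Disposition on
# the stored object so S3 serves them, fixing the headers Twilio omits.
# Attaching these to the PUT request is library-specific.
def s3_headers_for(filename, byte_size)
  {
    "Content-Type"        => "audio/mpeg",
    "Content-Length"      => byte_size.to_s,
    "Content-Disposition" => "attachment; filename=\"#{filename}\""
  }
end

headers = s3_headers_for("call-42.mp3", 1_048_576)
# pass `headers` to your S3 client's put/upload call
```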

On 4/6/10, Eric Anderson e...@pixelwareinc.com wrote:
 I am having some difficulty trying to figure out how to stream out
 files from Heroku without locking up my dynos.

 The basic setup
 
 I am making an app that combines Heroku and Twilio. Part of this app
 involves presenting users with the recordings that have been made on
 Twilio. Twilio provides a URL to these recordings but they don't
 really set the headers right (for the MP3 they don't provide the
 content length messing up some clients and they don't set the content
 disposition to attachment causing some clients to play the file inline
 and not do a very good job at that).

 In addition to these headers issues I would ideally like to not give
 out the URL to the recording as it is not protected by any
 authentication.

 Attempt One
 ---
 So my first thought was to simply proxy the download through Heroku. I
 figured this would allow me to adjust the headers and hide the Twilio
 URL. The problem with this is that while proxying the file it prevents
 the dyno from handling other requests. I had hoped that streaming the
 content out (in Rails, rendering with a block that chunks it out) or
 using some sort of sendfile header would help. Neither option worked. I
 contacted support about this hoping there would be a decent solution
 but they said a dyno gets locked up until the file is downloaded.
 There is no way around this. This means that if I get a user on a slow
 modem downloading a file, it takes down the dyno. If I get multiple
 slow users downloading files and don't have enough dynos, it basically
 will take down my site. :(

 Attempt Two
 --
 So my next thought was to allow them to download from a location other
 than Heroku but still one where I have control over the HTTP headers.
 S3 seemed like an ideal candidate as Heroku runs on Amazon as well. So
 I wrote the code to copy the file from Twilio to S3 through Heroku
 when a download is requested then redirect to S3. This works but not
 well. It still locks up my dyno for about 20 seconds (for a 30 min
 conversation) while copying (most of the time is spent downloading
 from Twilio although some still spent uploading from Heroku to S3).
 Longer conversations would lock up my dynos even longer.

 I am trying to avoid pre-sending all conversations up to S3 as I want
 to avoid paying S3 to store the conversations when I am already paying
 Twilio to store them. I don't mind sending them up to S3 for temp
 storage but I need to do it on demand and it looks like they can't be
 downloaded fast enough to make that happen.

 So I am looking for suggestions of ways to deliver these files while
 not locking up the dynos. Is there any way to send a file from a URL
 to S3 without having it go through Heroku? Any suggestions in general?




Re: Best way to add a huge dataset?

2010-03-12 Thread Carl Fyffe
This is just an idea:

Instead of bringing the data down and turning your app off for
multiple days, you could leave the app up and do all of the processing
on Heroku. You will want to create a branch from your current
production code, and in this branch you will create the migrations for
the new tables. Then you create a background job that goes through
your live data and updates it. You should probably have a table that
has a list of the data that needs to be updated. On first load this
will have every record in the db, and as your background job works
through the data it removes the record. The best part is, whenever
your live application makes a change you can have an after filter that
puts an entry back into the queue table for that data to be migrated
(if it isn't already in there because it hadn't been processed yet).
When the jobs get close to being completed (say an hour's worth of
work remains) then you take the system down, let the background job
complete, and do a check of the data. You will have all of the old
data still and the new data at the same time. Then you can push all of
the other changes to the code base and turn your system back on!

The benefits are:

* Your system stays live while the work is being done
* you don't have to worry about bringing data down and then back up
* your system stays live
* the update can take as long as you need to get it right
* you can watch the progress by keeping an eye on the queue
* you know all of the data will be migrated because of the queue

The down sides:
* it will probably cost more because you should up your db size in
heroku to the max so users don't notice the impact
* it might take longer because you don't want it to go as fast as
possible because it might impact your live system
* lots of moving parts

Just an idea. Good luck!
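The de-duplicating queue behaviour described above (re-enqueue on change, but only if the record isn't already queued) can be sketched in a few lines. All the names here are invented; in the real app the queue would be a database table and the re-enqueue would live in an after filter:

```ruby
require "set"

# MIGRATION_QUEUE stands in for the queue table of record ids still to
# be migrated; a Set gives the "only if it isn't already in there"
# behaviour for free.
MIGRATION_QUEUE = Set.new

def enqueue_for_migration(record_id)
  MIGRATION_QUEUE.add(record_id)  # no-op if the id is already queued
end

# On first load, every record goes in:
(1..5).each { |id| enqueue_for_migration(id) }

# A live write touches record 3 while the job is running; the queue
# does not grow, so the record is migrated exactly once more:
enqueue_for_migration(3)
puts MIGRATION_QUEUE.size  # => 5
```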


On 3/12/10, Mike mikel...@gmail.com wrote:
 I'm going to be adding a number of discrete, but enormous (maybe many
 gigs each), datasets to my Heroku app's database.  In many ways, I'm
 in a similar situation faced by Tobin in another current post, but
 with a different question:
 http://groups.google.com/group/heroku/browse_thread/thread/141c3ef84b22fc18

 Right now I still haven't merged the datasets into my database yet.
 What's the best way for me to approach this?

 The lack of ability to push individual tables with taps suggests to me
 I'm going to want to do this probably as a one shot deal, rather than
 doing each dataset sequentially and testing that one before proceeding
 to the next.  I'm thinking about doing a db:pull to get the current
 state of my database, and then shutting down my application in
 maintenance mode, running a local merge of the datasets (maybe taking
 days I'm guessing just to process the enormous things), doing some
 exhaustive local testing on the result, and then doing a push back to
 Heroku (maybe taking days again), before reactivating my app.  Because
 of their massive size, it seems like after I've done one, doing any
 further db:pulls is going to be basically impossible.  Just the idea
 of possibly having made a mistake in merging the datasets that I don't
 catch until after it's been pushed to the site gives me the shivers.
 Overall, I wonder if there could be a better way that I'm overlooking.

 One possible alternative I thought of: would it be possible to do
 something involving creating a local bundle from my database using
 YamlDB?  But then I'm not sure how to get the bundle back onto the
 server and then restore from it.  The documentation on Heroku
 doesn't seem to really talk about that possibility.

 Also, in my case this data is integral to the application, so I'm not
 going to be able to split it up into a separate Heroku application
 like in Tobin's case.  Is there going to be any practical way for me
 to be pulling just the non-dataset data from the server in order to
 use on a development machine?

 Does anyone have any ideas on how they would approach this problem?
 If so, I'd be filled with gratitude.

 Mike




Re: Best way to add a huge dataset?

2010-03-12 Thread Carl Fyffe
Let me clarify something that I mistakenly implied (I am writing on a
BlackBerry, so I couldn't read your post and write at the same time). I
said "update your live data," and what I meant to say was this:

Copy your live data as you are merging it with the new dataset to the
new tables you have created. So basically:

* bring app down
* push migrations for processing
* bring app back online (shouldn't take but a minute)
* push new dataset up while app is live. None of the current code base
can see that table, so it should be fine
* load queue with records that need to be processed (all records)
* start background job
** job looks in queue for a record to process
** job grabs record, merges with new dataset and saves into new table
** repeat until the queue is down to approximately the number of
records that are updated daily.
* put app in maintenance mode
* process remaining records as quickly as possible
* ensure data integrity
* push latest code
* migrate new tables to final resting spot
* come out of maintenance mode

I think that is more clear than my last attempt :-)
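The loop in the middle of those steps ("repeat until the queue is down to roughly a day's churn") might look like this. Everything here is a stand-in: a plain array for the queue, and an append in place of the real merge-and-save:

```ruby
# Sketch of the background job's drain loop. `queue` holds ids still to
# migrate; we stop once only about a day's worth of churn remains, and
# that remainder is finished during the maintenance window.
def drain(queue, daily_churn)
  processed = []
  while queue.size > daily_churn
    record_id = queue.shift
    processed << record_id  # stands in for merge-and-save into new table
  end
  processed
end

queue = (1..250).to_a
done  = drain(queue, 100)
puts done.size   # => 150
puts queue.size  # => 100 records left for the maintenance window
```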

On 3/12/10, Carl Fyffe carl.fy...@gmail.com wrote:
 This is just an idea:

 Instead of bringing the data down and turning your app off for
 multiple days, you could leave the app up and do all of the processing
 on Heroku. You will want to create a branch from your current
 production code, and in this branch you will create the migrations for
 the new tables. Then you create a background job that goes through
 your live data and updates it. You should probably have a table that
 has a list of the data that needs to be updated. On first load this
 will have every record in the db, and as your background job works
 through the data it removes the record. The best part is, whenever
 your live application makes a change you can have an after filter that
 puts an entry back into the queue table for that data to be migrated
 (if it isn't already in there because it hadn't been processed yet).
 When the jobs get close to being completed (say an hour's worth of
 work remains) then you take the system down, let the background job
 complete, and do a check of the data. You will have all of the old
 data still and the new data at the same time. Then you can push all of
 the other changes to the code base and turn your system back on!

 The benefits are:

 * Your system stays live while the work is being done
 * you don't have to worry about bringing data down and then back up
 * your system stays live
 * the update can take as long as you need to get it right
 * you can watch the progress by keeping an eye on the queue
 * you know all of the data will be migrated because of the queue

 The down sides:
 * it will probably cost more because you should up your db size in
 heroku to the max so users don't notice the impact
 * it might take longer because you don't want it to go as fast as
 possible because it might impact your live system
 * lots of moving parts

 Just an idea. Good luck!


 On 3/12/10, Mike mikel...@gmail.com wrote:
 I'm going to be adding a number of discrete, but enormous (maybe many
 gigs each), datasets to my Heroku app's database.  In many ways, I'm
 in a similar situation faced by Tobin in another current post, but
 with a different question:
 http://groups.google.com/group/heroku/browse_thread/thread/141c3ef84b22fc18

 Right now I still haven't merged the datasets into my database yet.
 What's the best way for me to approach this?

 The lack of ability to push individual tables with taps suggests to me
 I'm going to want to do this probably as a one shot deal, rather than
 doing each dataset sequentially and testing that one before proceeding
 to the next.  I'm thinking about doing a db:pull to get the current
 state of my database, and then shutting down my application in
 maintenance mode, running a local merge of the datasets (maybe taking
 days I'm guessing just to process the enormous things), doing some
 exhaustive local testing on the result, and then doing a push back to
 Heroku (maybe taking days again), before reactivating my app.  Because
 of their massive size, it seems like after I've done one, doing any
 further db:pulls is going to be basically impossible.  Just the idea
 of possibly having made a mistake in merging the datasets that I don't
 catch until after it's been pushed to the site gives me the shivers.
 Overall, I wonder if there could be a better way that I'm overlooking.

 One possible alternative I thought of: would it be possible to do
 something involving creating a local bundle from my database using
 YamlDB?  But then I'm not sure how to get the bundle back onto the
 server and then restore from it.  The documentation on Heroku
 doesn't seem to really talk about that possibility.

 Also, in my case this data is integral to the application, so I'm not
 going to be able to split it up into a separate Heroku application
 like in Tobin's case.  Is there going to be any practical way for me
 to be pulling just

Re: Maximum RAM per dyno?

2010-03-10 Thread Carl Fyffe
These kinds of questions crack me up. I am going to try to address this
without sounding rude, but if I do happen to come across as rude, I
apologize for that up front.

No, the $xx global variables are not persistent across the cloud. If
you have more than one machine then you have to jump through very
complex hoops to get them to share memory like that.

First, before you start building crazy caches in memory, you should
benchmark your app to ensure that it performs at a respectable level
with the number of users you expect. If it doesn't, the first thing
you should do is increase the dynos and see if that helps. If it does,
stop working on the caches and deploy! An extra $32 a month is far
less than the pain, heartache, and time you will waste trying to
create a frankencache.
Secondly, caching in memory isn't very useful if you only have 12
users and only use one dyno. If you have many users and multiple
dynos, then caching in memory makes even less sense: you will almost
certainly get cache misses all the time, because users are not
guaranteed to return to the same dyno.

If caching is absolutely necessary, beg your way into the private beta
for memcached or wait until it becomes public. However, there are
pretty solid caching mechanisms already in place; see more below.

Do people normally write large apps in Sinatra? I thought it was meant
for dinky one-off apps. If you were using Rails, the caching would
be taken care of for you and you wouldn't have to ask these questions.
Basically, all of the suggestions you have posted are questions that
have already been answered by the guys who wrote the frameworks. Use
the best practices of the frameworks and you should be fine. But in
the end, let the benchmarks be the measure of the work you have to do.

On one project, a colleague sysadmin wanted to keep adding memory to
make the app run faster. I asked him to set up multiple app servers
and load balance instead. Both looked to perform the same, until we
benchmarked them. Multiple app servers blew the monster-memory app
server away.

"Benchmark twice, code once," as my carpenter father would say.

On 3/10/10, Daniele to...@vitamino.it wrote:
 So a global variable $xx is not shared in the cloud?!

 On 10 Mar, 06:49, Oren Teich o...@heroku.com wrote:
 I don't have a good answer for your question, but note there is no
 guarantee
 that your requests will be served from the same physical machine - we'll
 move the dyno around as demanded by the cloud.




Re: Maximum RAM per dyno?

2010-03-10 Thread Carl Fyffe
Daniele,

My apologies for coming across as rude; it was not my intention, which
is why I apologized up front. As Chris said, there are better ways to
skin the global-variable cat. Also, most Ruby docs say to use them
sparingly:

http://www.rubyist.net/~slagell/ruby/globalvars.html

Which to most people means: avoid at all costs. You can put them in
the db. If they are constant and never change, you can put them in
config. If they are client-specific, put them in a cookie and store
them in the browser (as long as they aren't huge). If they are huge,
then the database is your best bet, because you don't want to clutter
your memory with them. If the database is too slow, then put it on the
file system.

But whatever you do, please avoid using global variables. I have
googled for the article that explains why, but I can't find it. All I
could find was a repeat of "use sparingly."

As I said earlier, benchmark the easiest solution first. It might just
meet your needs.
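The db-backed option above usually ends up as a read-through lookup: check a cache, fall back to the database, and warm the cache on a miss. An illustrative sketch, with plain hashes standing in for memcached and the real database:

```ruby
# Read-through lookup: the database is authoritative; the cache is a
# disposable copy, so losing it (dyno restart, memcached eviction)
# costs one extra database read, nothing more.
def fetch_setting(key, cache, db)
  return cache[key] if cache.key?(key)
  value = db[key]     # fall back to the authoritative store
  cache[key] = value  # warm the cache for the next request
  value
end

db    = { "greeting" => "hello" }
cache = {}
fetch_setting("greeting", cache, db)  # miss: reads db, warms cache
fetch_setting("greeting", cache, db)  # hit: served from cache
```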

On Wed, Mar 10, 2010 at 10:38 AM, Chris r3ap3r2...@gmail.com wrote:
 Carl,
 I think your tone is fine :-) and I appreciate you taking the time to
 post your experiences.  Looking forward to memcached being deployed.

 Daniele,
  If you really need a global variable to be accessible across servers
 then memcached works well, as long as it doesn't matter if that global
 variable gets expired.  You still need to store the variable
 persistently in a database: pull it from memcached if it's there; if
 not, hit the
 database.  (Alternatively, depending on the variable, just re-create
 it if it's expired and doesn't need to be persisted somewhere.)

 -Chris

 On Wed, Mar 10, 2010 at 11:51 PM, Carl Fyffe carl.fy...@gmail.com wrote:
 These kinds of questions crack me up. I am going to try to address this
 without sounding rude, but if I do happen to come across as rude, I
 apologize for that up front.

 No, the $xx global variables are not persistent across the cloud. If
 you have more than one machine then you have to jump through very
 complex hoops to get them to share memory like that.

 First, before you start building crazy caches in memory, you should
 benchmark your app to ensure that it performs at a respectable level
 with the number of users you expect. If it doesn't, the first thing
 you should do is increase the dynos and see if that helps. If it does,
 stop working on the caches and deploy! An extra $32 a month is far
 less than the pain, heartache, and time you will waste trying to
 create a frankencache.
 Secondly, caching in memory isn't very useful if you only have 12
 users and only use one dyno. If you have many users and multiple dynos
 then caching in memory makes even less sense because you will almost
 certainly be getting cache misses all the time, because the users are
 not guaranteed to return to the same dyno.

 If caching is absolutely necessary, beg your way into the private beta
 for memcached or wait until it becomes public. However, there are
 pretty solid caching mechanisms already in place, see more below.

 Do people normally write large apps in Sinatra? I thought it was meant
 for dinky one-off apps. If you were using Rails, the caching would
 be taken care of for you and you wouldn't have to ask these questions.
 Basically all of the suggestions you have posted are less than
 adequate answers that have been answered by the guys that wrote the
 frameworks. Use the best practices of the frameworks and you should be
 fine. But in the end let the benchmarks be the measure for the work
 you have to do.

 On one project, a colleague sysadmin wanted to keep adding memory to
 make the app run faster. I asked him to setup multiple app servers and
 load balance instead. Both looked to perform the same, until we
 benchmarked it. Multiple app servers blew the monster memory app
 server away.

 "Benchmark twice, code once," as my carpenter father would say.

 On 3/10/10, Daniele to...@vitamino.it wrote:
 So a global variable $xx is not shared in the cloud?!

 On 10 Mar, 06:49, Oren Teich o...@heroku.com wrote:
 I don't have a good answer for your question, but note there is no
 guarantee
 that your requests will be served from the same physical machine - we'll
 move the dyno around as demanded by the cloud.


Re: Maximum RAM per dyno?

2010-03-10 Thread Carl Fyffe
Chris,

Will this work for you? http://docs.heroku.com/http-caching

Carl

On Wed, Mar 10, 2010 at 2:22 PM, Chris Hanks
christopher.m.ha...@gmail.com wrote:
 On Mar 9, 10:40 pm, Chris Hanks christopher.m.ha...@gmail.com wrote:
 I'm interested in this too. I have several thousand MongoDB documents
 that are read-only and frequently accessed, so I figured I'd just
 cache them in the dyno's memory to speed up requests.

 So is 300 MB the hard limit for each dyno's RAM, then? I suppose that
 if it grows beyond that point, the dyno is restarted?

 Does anyone have an answer for this? Thanks in advance!




Re: Basic Production Site

2010-03-08 Thread Carl Fyffe
There is an add-on for New Relic:
http://addons.heroku.com/newrelic

Basically, you want to watch your response times using Apdex scoring
to determine whether your site is fast enough for the traffic that you
have.
http://newrelic.com/features.html#ApdexScoring

With the traffic that you have, more than likely you won't need more
than one dyno (as Oren stated).

On Mon, Mar 8, 2010 at 4:17 PM, Terence Lee hon...@gmail.com wrote:
 You would add dynos to increase the number of rails instances that are
 run concurrently, so you can scale your site.  Workers would correspond
 to background jobs.

 -Terence

 On Mon, 2010-03-08 at 13:10 -0800, DAZ wrote:
 Thanks Oren,

 That's useful as a starting point. What do I use to 'see how it goes'?
 Sorry, I'm new to this game and not sure I'd know if the site was
 performing well or not. What do you mean when you say 'scaling traffic
 instantly'? Why would I add dynos and/or workers?

 cheers,

 DAZ

 On Mar 8, 9:06 pm, Oren Teich o...@heroku.com wrote:
  Everything is fully backed up for disaster recovery purposes.  We don't
  provide user-accessible backups - they're only in case something goes
  wrong with the systems.  This includes your DB and your app.
 
  Start on a koi + 1 dyno, and see how it goes. You can scale the traffic
  instantly.
 
  Oren
 
 
 
  On Mon, Mar 8, 2010 at 12:31 PM, DAZ daz4...@gmail.com wrote:
   Hi,
 
   I'm planning on launching a production site using Heroku, but have a
   few questions:
 
   1) Are sites that are hosted on Heroku backed up or do I have to do it
   manually - what is the procedure for doing this?
   2) Is the database backed up as well?
   3) How does the pricing work? This site has around 500 unique visitors
   a day, generating 4000 hits. It is a basic CMS site with a database
   backend. Do I need to choose what type of plan I have in advance or
   will I be told if any limits are being exceeded?
 
   Thanks for any help anybody can give me,
 
   DAZ
 






Re: How can I edit my db schema from the console?

2010-02-15 Thread Carl Fyffe
Ask the DataMapper mailing list?
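One pointer before you go there: DataMapper does expose raw SQL through its adapter (`DataMapper.repository(:default).adapter.execute(sql)`, if memory serves), which sidesteps the ActiveRecord connection error entirely. The helper below only builds the statement; the table and column names are examples:

```ruby
# Build an ALTER TABLE statement to rename a column in place, so the
# database doesn't need to be wiped. Table/column names are examples;
# run the result through your DataMapper adapter, not ActiveRecord.
def rename_column_sql(table, old_name, new_name)
  "ALTER TABLE #{table} RENAME COLUMN #{old_name} TO #{new_name}"
end

sql = rename_column_sql("posts", "titel", "title")
# e.g. DataMapper.repository(:default).adapter.execute(sql)
puts sql  # => "ALTER TABLE posts RENAME COLUMN titel TO title"
```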

On 2/15/10, senihele mjmmayern...@gmail.com wrote:
 No one has any ideas?  This is a pretty basic question...

 On Jan 30, 8:29 pm, senihele mjmmayern...@gmail.com wrote:
 Hey all,

 I have a Sinatra/Datamapper application on Heroku.  Datamapper doesn't
 handle migrations very well, and to the best of my knowledge can't
 remove or rename a column without wiping out the entire database.
 Anyone have any ideas on how I can do this manually?  I tried to run
 the raw SQL via the console, but when I run

 ActiveRecord::Base.connection.execute("select ...")

 I get the error message:

 ActiveRecord::ConnectionNotEstablished:
 ActiveRecord::ConnectionNotEstablished

 It seems like DataMapper also now has support for AR-style migrations,
 but I can't find anything on how to set this up with Sinatra.  Since I
 haven't been using AR-style migrations up until this point, it would
 just be easier to be able to remove and rename the columns
 directly.

 Any ideas?  Thanks.




Re: Querying Across Apps on Heroku

2010-01-26 Thread Carl Fyffe
There are many ways to solve this problem.

One way: Have each app provide a search service that the central app
can call. This works if you don't mind having the data segmented by
application.

Second way: If the results need to be ranked as a whole then you
should put all of the data into a single database.
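The first approach can be sketched as a thin federation layer: each app exposes a search endpoint, and the central app calls them all and merges the results. This is only a sketch under assumptions — the app names, the result shape (`"title"`/`"score"` hashes), and the injectable `fetch_results` callable are hypothetical (in a real app it would wrap `Net::HTTP` against each app's search URL). Note the naive score sort is exactly the cross-app ranking problem mentioned below: scores from different apps aren't really comparable.

```ruby
# Hypothetical federated search across several Heroku apps.
def federated_search(apps, query, fetch_results)
  apps.flat_map { |app| fetch_results.call(app, query) }
      .sort_by { |hit| -hit["score"] } # naive merge; per-app scores
                                       # aren't truly comparable
end

# Stubbed fetcher standing in for an HTTP call to each app's search endpoint:
stub = lambda do |app, _query|
  { "blog"  => [{ "title" => "Post", "score" => 0.9 }],
    "store" => [{ "title" => "Item", "score" => 0.5 }] }[app]
end

hits = federated_search(%w[blog store], "a query", stub)
# hits is now ordered by score across both apps
```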

On Tue, Jan 26, 2010 at 12:09 PM, Ken Collins k...@metaskills.net wrote:

 I've seen live DB to DB solutions fail many times before. But I'm willing to 
 admit I could have been doing it wrong :) My advice is to go with one DB and 
 partition the app. Others may have better advice.

  - Ken


 On Jan 26, 2010, at 2:51 AM, Splashlin wrote:

 I have multiple apps on Heroku that each have their own database.  I
 want to build a new application that serves as a summary tool for all
 the data across the different databases.  What is the best way to
 tackle this issue?

 Should I use one massive database on the new application and just
 point all my other apps to it or is there a way to move from database
 to database collecting the information?

 Thanks




Re: How to investigate Backlog too deep? (follow-up)

2009-12-29 Thread Carl Fyffe
This mailing list is one of the reasons Heroku is so awesome. The fact
that Heroku participates and listens to this list just makes the
Heroku service that much better. Thank you to the Heroku team and this
community. Happy Holidays!

Carl

On Tue, Dec 29, 2009 at 12:08 PM, Oren Teich o...@heroku.com wrote:
 Hi guys,
 yes, this is a (now) known issue - during the slug spin up process we have
 found an edge case that isn't so edge, causing this error.  We have
 hot-patched the servers to fix the bug, and are working on implementing a
 cleaned up code path.  No apps should be experiencing this, and we hope to
 have prevented it from happening in the future as well.
 Sorry for the lack of updates - while the technical team has had full
 coverage over the holidays and has been working on this, we haven't done a
 great job giving you guys updates.
 Service stability hasn't been meeting our targets the past two months.
  We've found a lot of areas that we've improved, and this looks to be one of
 the last outstanding major fixes.  I'm working on a summary of the work
 we've done for you guys so you can know what the status is, and what we've
 done to make sure we just get better in the future.
 Oren
 On Tue, Dec 29, 2009 at 8:57 AM, Trevor Turk trevort...@gmail.com wrote:

 On Dec 29, 9:48 am, Casper Fabricius casper.fabric...@gmail.com
 wrote:
  I am in the same boat as Tim. Despite very low traffic and two active
  dynos, my site tends to go down with a permanent "Backlog too deep" and no
  other resolution than rebooting the app. It is very frustrating to have the
  app go down like this from time to time, with no logs and no way to debug
  it.
 
  I have no long running processes in the dynos - everything like that
  goes to Delayed Job, even sending out emails.

 I've noticed this problem as well with a few of my applications. The
 sites will go down and the only way to get them started again is to
 commit something meaningless and re-push the application. Honestly, I
 expected to hear something from Heroku about this by now.

 Is this a known issue? Is there anything we can do to provide help in
 solving the problem? Next time an application of mine fails with this
 message, I'll create a support ticket and try to help troubleshoot.

 Thanks,
 - Trevor





Re: If you reserve full instance for custom SSL - why don't I get more dynos?

2009-12-10 Thread Carl Fyffe
Thank you for writing this up!

On Thu, Dec 10, 2009 at 12:00 PM, Wojciech Kruszewski wojci...@oxos.pl wrote:
 In fact this is possible with their current environment:
 http://wojciech.oxos.pl/post/277669886/save-on-herokus-custom-ssl-addons

 On Dec 9, 7:58 pm, Wojciech Kruszewski wojci...@oxos.pl wrote:
 This is theoretically possible with their architecture, but they are
 currently reviewing how easy it would be to implement it and if it's
 worth the trouble.

 I created a public feature request:
 http://support.heroku.com/forums/42310/entries/87156
 - would you care to add your vote?

 Cheers,
 Wojciech

 On Dec 8, 11:47 pm, Chris Hanks christopher.m.ha...@gmail.com wrote:

  Wojciech, if you ask support about that and get some good news, would
  you report back? I'm curious about this too.

  Thanks!

  Chris

  On Dec 8, 2:05 pm, Oren Teich o...@heroku.com wrote:

    I don't know if that's possible or not; it's probably a function of the
   SSL protocol and our routing mesh, but it's beyond my technical
   knowledge.  Best bet is to drop support@ a line, and see what they
   say.  They'll be able to dig into the details for you.

   Oren

   On Tue, Dec 8, 2009 at 12:42 PM, Wojciech Kruszewski wojci...@oxos.pl 
   wrote:
Thanks Oren, this makes sense.

So can that one mostly idle server handle SSL requests for multiple
applications?

I mean I tried Heroku and was very happy with the experience - looks
 like it needs little to no maintenance on my part. I'd wish to host a
 handful of smaller web apps, each with 1-3 dynos.

I could live with piggyback ssl, if it was my own wildcard
certificate.

- Wojciech

On Dec 8, 8:58 pm, Oren Teich o...@heroku.com wrote:
They are totally independent.  The way our architecture works, dynos
run on machines called railguns, which are specially set up for the
job.  We have to setup a special (and yes, mostly idle) server just to
handle the SSL requests.  It's not possible with the product we have
today to run dynos on that server.

Oren

On Tue, Dec 8, 2009 at 7:48 AM, Wojciech Kruszewski 
wojci...@oxos.pl wrote:
 Hi,

 I've read your explanation about why you charge $100/mo for custom 
 SSL
 (http://docs.heroku.com/ssl#faq). You need exclusive IP, Amazon
 assigns only one IP for an instance, so you need to reserve full
 instance just to use one SSL cert - seems fair.

 Ok, but if you reserve full EC2 instance just for me... then why do 
 I
 have to pay for extra dynos? Aren't you double-billing for this
 instance?

 I believe it's just against your architecture but still I'd like 
 to
 know the explanation.

 Regards,
 Wojciech

 --
 http://twitter.com/WojciechK | http://oxos.pl - Ruby on Rails development





Re: Debugging Email

2009-11-16 Thread Carl Fyffe
It isn't a great way to debug, but you should probably test it on a
staging branch on Heroku. There's a great write-up here:

http://stackoverflow.com/questions/1279787/staging-instance-on-heroku
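The pattern in that write-up boils down to one codebase with a separate git remote per Heroku app. A minimal sketch — the app names `myapp-staging`/`myapp` and the remote URLs are hypothetical placeholders, and in practice `heroku create` adds the remote for you:

```shell
set -e
repo=$(mktemp -d)   # throwaway repo standing in for the real app
cd "$repo"
git init -q

# One remote per Heroku app (normally created by `heroku create <name>`):
git remote add staging    git@heroku.com:myapp-staging.git
git remote add production git@heroku.com:myapp.git
git remote -v

# Deploy flow (not run here): verify on staging first, then promote:
#   git push staging master
#   git push production master
```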

On Mon, Nov 16, 2009 at 10:05 AM, Neil neil.middle...@gmail.com wrote:
 We have an app that's sending out emails on certain events.  In
 development this works fine, but in production (on Heroku) we're not
 getting any email coming out of the app.

 I can't see anything in the logs that looks useful, and am struggling
 to think how to debug this.  Do you guys have any tips?  I am using
 the Sendgrid add-on.





Re: Can Heroku receive mail?

2009-11-02 Thread Carl Fyffe

The first thing you need is the ability to reach out to the mail
server and fetch the emails. There are several plugins that will help
with this. A little Googling and you should find a couple of examples.

The second piece is repeated polling of the mail server. The best way
to do this is to use Background Job and have it check your email and
then schedule a new Background Job (simply genius, I wish I had
thought of it). Recursion wins again.
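The self-rescheduling pattern can be sketched like this. With the real delayed_job gem the re-enqueue would be a call like `Delayed::Job.enqueue(CheckMailJob.new(...))`; here the queue and mailbox are plain injected objects (both hypothetical) so the recursion itself is visible without a database:

```ruby
class CheckMailJob
  def initialize(mailbox, queue)
    @mailbox, @queue = mailbox, queue
  end

  def perform
    @mailbox.unread.each { |mail| process(mail) }
    # Each run schedules the next one -- the "recursion" described above.
    @queue << CheckMailJob.new(@mailbox, @queue)
  end

  def process(mail)
    # application-specific handling of one incoming message
  end
end

# Minimal stand-ins for the mail source and the job queue:
Mailbox = Struct.new(:unread)
queue = []
CheckMailJob.new(Mailbox.new([]), queue).perform
# queue now holds the next scheduled check
```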

On Mon, Oct 26, 2009 at 3:43 PM, Shpigford j...@joshpigford.com wrote:

 The app I'd like to run on Heroku will need to eventually be able to
 receive email.

 As an example, a user would send an email to a unique email address
 and then our app would parse the email and do various things with it.

 Is that possible with Heroku?

 





Re: Nightly Backups or Don't Learn the Hard Way

2009-10-13 Thread Carl Fyffe

Nice change to the docs! Thanks! Much clearer now.

On Mon, Oct 12, 2009 at 3:58 PM, Jim Gilliam j...@gilliam.com wrote:
 It never occurred to me that the unlimited bundles were a backup strategy.
 It's probably because the resources form says "nightly backup soon" - which
 indicates that bundles aren't backup, and that backup isn't available yet.

 People tend to keep a rolling 7 days of db backup, at least I do.  Having
 that as a $5/mo option, separate from single bundle or unlimited
 bundles, would probably be used a lot.  If the heroku costs are pretty much
 just S3, having it super cheap (or even free) if you supply your own S3,
 would be awesome.

 Jim

 On Mon, Oct 12, 2009 at 10:36 AM, Oren Teich o...@heroku.com wrote:

 Coincidentally, we've been working on documenting our security
 policies (both how we treat your data as well as how we protect it).
 This Danger/MS kerfuffle shows me I can't get it out soon enough.

 In brief, there's two different aspects to this.

 1) protection we provide.  We provide disaster recovery of all data.
 All database data is stored in a RAID 10 configuration.  This provides
 us a huge amount of resiliency in case of individual hardware failure
 on Amazon's side.  In addition, all data in the database is backed up
 once every 24 hours to Amazon S3.  These backups are stored in
 different availability zones to ensure no SPOF (single point of
 failure).  The backups are provided for disaster recovery only at this
 time - they are not there to help individual application developers
 recover.  This is mostly due to process, not capability.  We're
 backing up the data in aggregate, so it's a few minutes of work to
 restore an entire DB, but a few hours of work to restore an individual
 app.

 2) Protection we enable.  Bundles are the best way for an individual
 app owner to backup their entire app - git, database, etc.  These
 enable you to either store the data on our S3 account (with unlimited
 bundles), or download them to your local machine.  One common pattern
 is to have cron on your mac automatically capture them for you and
 download the next day.  We've had surprisingly little adoption of the
 unlimited_bundles add-on, and also not too much feedback on how we can
 specifically improve the experience.  One obvious way would be to
 auto-capture at a regular time, perhaps as part of the cron addon.

 Oren

 On Oct 12, 2009, at 6:11 AM, Chap wrote:

 
  I'm sure we've all heard the news of Danger/MS losing all their
  Sidekick users' data.
 
  Which gets me thinking, what are you guys doing for backup? The
  bundles seem cool, but it would be nice if there was some automated
  way of creating them and downloading them on a regular basis. Not that
  I don't trust the cloud...
 
 
  





 





Re: Nightly Backups or Don't Learn the Hard Way

2009-10-12 Thread Carl Fyffe

Regarding the bundles not being adopted... I never understood their
benefit. I just returned to the docs to see if I missed something...
There isn't anything there. No explanation of what they are or how
they should be used. There is a mention of them in the Backups section
of Import/Export but that is it. If Bundles are the primary means of
doing backups then they deserve a full section by themselves.

If there is something there and I missed it, then that means it isn't
in the right place or needs better marketing. And I am not alone
because it isn't being used. However, backups are not on many
developer's minds until something awful happens. I would bet you guys
would see improved use after the Danger/MS mess if you A) actually had
something to save the day and B) advertised it better.

I always kinda had the feeling that Bundles were the "little red
button" that I didn't ask about.

From the Fifth Element:
Zorg: I hate warriors, too narrow-minded. I'll tell you what I do like
though: a killer, a dyed-in-the-wool killer. Cold blooded, clean,
methodical and thorough. Now a real killer, when he picked up the
ZF-1, would've immediately asked about the little red button on the
bottom of the gun.

On Mon, Oct 12, 2009 at 1:36 PM, Oren Teich o...@heroku.com wrote:

 Coincidentally, we've been working on documenting our security
 policies (both how we treat your data as well as how we protect it).
 This Danger/MS kerfuffle shows me I can't get it out soon enough.

 In brief, there's two different aspects to this.

 1) protection we provide.  We provide disaster recovery of all data.
 All database data is stored in a RAID 10 configuration.  This provides
 us a huge amount of resiliency in case of individual hardware failure
 on Amazon's side.  In addition, all data in the database is backed up
 once every 24 hours to Amazon S3.  These backups are stored in
 different availability zones to ensure no SPOF (single point of
 failure).  The backups are provided for disaster recovery only at this
 time - they are not there to help individual application developers
 recover.  This is mostly due to process, not capability.  We're
 backing up the data in aggregate, so it's a few minutes of work to
 restore an entire DB, but a few hours of work to restore an individual
 app.

 2) Protection we enable.  Bundles are the best way for an individual
 app owner to backup their entire app - git, database, etc.  These
 enable you to either store the data on our S3 account (with unlimited
 bundles), or download them to your local machine.  One common pattern
 is to have cron on your mac automatically capture them for you and
 download the next day.  We've had surprisingly little adoption of the
 unlimited_bundles add-on, and also not too much feedback on how we can
 specifically improve the experience.  One obvious way would be to
 auto-capture at a regular time, perhaps as part of the cron addon.

 Oren

 On Oct 12, 2009, at 6:11 AM, Chap wrote:


 I'm sure we've all heard the news of Danger/MS losing all their
 Sidekick users' data.

 Which gets me thinking, what are you guys doing for backup? The
 bundles seem cool, but it would be nice if there was some automated
 way of creating them and downloading them on a regular basis. Not that
 I don't trust the cloud...


 


 





Re: Reading Email

2009-10-07 Thread Carl Fyffe

It almost looks like you should have cron on another box create
Delayed Jobs to do this. This solves your 30 second timeout problem
but introduces another complexity. Maybe this is a necessary
complexity. As for the Google get issue, I would just not put a link
on any page to the URL, and put a Disallow in your robots.txt

User-agent: *
Disallow: /process-emails.html

Or just leave it out entirely, that way no one knows it exists.
Security through obscurity is usually not good enough but in this
case I am not sure that it matters.


On Wed, Oct 7, 2009 at 11:13 AM, Yuri Niyazov yuri.niya...@gmail.com wrote:

 Also, I forgot the following fun fact about Heroku's cron service.
 This was true when I investigated it; might still be true now - not
 sure.

 Since your app runs on X Heroku VMs, where X is often > 1, then, when
 you use Heroku's cron, the cronjob is executed on each box
 simultaneously - unless you do something clever (and I was unable to
 figure out what that something clever is), X email processor instances
 run at the same time. If you need guarantee that each email is
 processed once only, this will screw it up for you.

 On Wed, Oct 7, 2009 at 11:05 AM, Yuri Niyazov yuri.niya...@gmail.com wrote:
 I haven't checked out the online cron services yet, but there's
 another issue that I had to solve, and I don't know whether they would
 support this or not:

 Heroku limits the execution time of every request to 30 seconds each,
 and a request that takes longer than that is abruptly interrupted.
 This means that the magic URL handler has to be written in such a way
 that it doesn't take longer than 30 secs; I decided to take the
 dirty-hack approach to this: the URL handler processes two emails at a
 time (let's say that 30 seconds is almost always enough to open an
 IMAP connection, do a search, and download the text of two emails).
 However, the URL handler checks the total number of messages to be
 processed, and returns a status code for same. So:

      upto = 2
      msg_id_list = imap.search(["NOT", "DELETED"])
      msg_id_list = msg_id_list[0, upto] if upto
      msg_id_list.each do |msg_id|
        m = imap.fetch(msg_id, "RFC822")[0].attr["RFC822"]
        process m
      end
      render :json => msg_id_list.to_json


 and then in the script on the cron-box:

      begin
        msg_id_list = call_url.parse_json
      end until msg_id_list.empty?


 As far as the Google indexing your URL issue: make sure that the GET
 request returns a blank page, and the POST actually executes the
 cronjob. And, of course, you can always protect that URL via
 basic-auth or authenticity-token.

 On Wed, Oct 7, 2009 at 7:42 AM, Wojciech wojci...@oxos.pl wrote:

 so I have a separate box with actual crond on it, and
 it has a script that hits a specific URL on my app on heroku every x
 minutes to process email.

 There are services that do it for you (i.e. periodically call your
 magic URL):
 http://www.onlinecronservices.com/

 But be careful: this URL could be called by anybody and could even get
 indexed by Google. You might allow only certain IPs (ip of your online
 cron service) to call this URL to protect the app.

 There's also this poor man's cron approach, I've seen in Drupal:
 http://drupal.org/project/poormanscron - but it's a bit crazy.

 Cheers,
 Wojciech


 On Tue, Oct 6, 2009 at 3:06 PM, Carl Fyffe carl.fy...@gmail.com wrote:

  Rails makes it so easy to send emails. Receiving emails isn't that
  difficult either, but requires a cron or daemon. What is the best way
  to do this on Heroku today?

  Carl


 



 





Reading Email

2009-10-06 Thread Carl Fyffe

Rails makes it so easy to send emails. Receiving emails isn't that
difficult either, but requires a cron or daemon. What is the best way
to do this on Heroku today?

Carl




Re: Heroku push rejected, error: Unable to append

2009-09-29 Thread Carl Fyffe

Heroku had a git outage today. It did not affect running applications,
but did affect deployment.

On Tue, Sep 29, 2009 at 4:23 PM, Shane Becker
veganstraighte...@gmail.com wrote:

 Yes, I got the exact same error earlier today. I contacted support
 about it 6 hours ago and haven't heard anything. It seems to be fixed
 now, though, at least for me.

 yep. working now for me too.

 





Re: Spam

2009-06-24 Thread Carl Fyffe

Honestly, it would just be cool if you had someone from Heroku that
checked the support group on a regular basis. It seems like questions
go unanswered from time to time. I know you guys are young and small,
but a dedicated person watching the forums and mailing lists would be
great. Someone that could answer the easy questions on their own but
has direct access to you guys for the more difficult questions.

On Wed, Jun 24, 2009 at 7:27 PM, Morten Bagai mor...@heroku.com wrote:

 Hey there,

 I made that settings change for the group, and also nuked some of
 recent spam. Sorry about the inconvenience. On a related note, we're
 very interested in how we can make the support/community experience
 better for you guys.

 One of the actions I'm considering is moving the Heroku discussion
 forum to our Zendesk support system at support.heroku.com. It has a
 decent forums feature, which you can check out here: 
 http://support.heroku.com/forums/51588/entries
 . If you have any feedback on that I'd love to hear about it.

 Best,

 Morten

 On Jun 24, 2009, at 8:18 AM, ladyfox14 wrote:


 I second that.

 It would also be nice if groups.google.com had a spam button that
 could only be used for registered users.

 Or is it possible for heroku to set up a forum on their main page?

 On Jun 23, 9:12 am, Trevor Turk trevort...@gmail.com wrote:
  I've read elsewhere that setting the "first post by a user must be
  approved by a moderator" thingy in Google Groups can really help with
  spam...
 


 





Writing to /public

2009-05-08 Thread Carl Fyffe

Wow, I am just bumping into all of the constraints these days...

Is there any way to get around the read-only filesystem? My app
dynamically generates assets at runtime and writes them to public. Is
there any way we can put them in tmp and link to them? Or create a
special area in public that is like tmp?
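One workaround sketch, under the assumption that only the app's tmp directory is writable on a dyno (and that tmp is per-instance and not persistent, so anything written there must be regenerable): write the generated asset under tmp and stream it from a controller instead of linking into /public. The helper name and paths here are illustrative.

```ruby
require 'fileutils'
require 'tmpdir'

# Hypothetical writable area standing in for #{RAILS_ROOT}/tmp on a dyno.
TMP_ASSETS = File.join(Dir.tmpdir, "generated_assets")

# Write a runtime-generated asset somewhere writable and return its path.
def store_generated_asset(name, data)
  FileUtils.mkdir_p(TMP_ASSETS)
  path = File.join(TMP_ASSETS, name)
  File.open(path, "wb") { |f| f.write(data) }
  path
end

# A controller action would then stream it rather than serve it from /public:
#   send_file store_generated_asset("chart.png", png_bytes),
#             :type => "image/png", :disposition => "inline"
```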




Git Submodule Support

2009-05-07 Thread Carl Fyffe

Google searching turns up a lack of support for git submodules in projects.

What is the current magic incantation to make this work, or is it
better to just make the code part of the project?
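The collapse approach can be sketched end-to-end: copy the submodule's files into the main repo and commit them directly, so a push to Heroku carries everything with no submodule pointer to resolve. The repo names and the one-file "dependency" below are hypothetical stand-ins for whatever shared code the submodule held.

```shell
set -e
workdir=$(mktemp -d)
cd "$workdir"

# Stand-in for the shared-code repo that would have been a submodule:
git init -q lib
cd lib
echo "module Lib; end" > lib.rb
git add lib.rb
git -c user.email=dev@example.com -c user.name=dev commit -qm "lib code"
cd ..

# The main app: grab the dependency's files and commit them directly,
# instead of recording a submodule reference that won't be fetched.
git init -q app
cd app
git clone -q ../lib vendor/lib
rm -rf vendor/lib/.git          # drop its repo metadata; files become ours
git add vendor/lib
git -c user.email=dev@example.com -c user.name=dev commit -qm "vendor lib"
```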




Re: Git Submodule Support

2009-05-07 Thread Carl Fyffe

I am glad I am not alone wanting submodule support. I found the same
code and stayed away.

I think some kind of answer from Heroku (I know they may be in the air
right now) would be awesome.

Carl

On 5/7/09, Bill Burcham bill.burc...@gmail.com wrote:
 I found a rake recipe that purportedly did the trick. I won't paste it here
 since I never got it to work on Heroku myself. My approach is currently just
 to collapse all my submodules into the main repo and push it.
 I do hope heroku adds support for submodules since its a valuable tool for
 code reuse.

 





Re: RMagick

2009-05-01 Thread Carl Fyffe

The problem was 'rmagick'. Changed to 'RMagick' and everything worked.

On Thu, Apr 30, 2009 at 9:42 PM, Carl Fyffe carl.fy...@gmail.com wrote:
 When I try to use the console, it fails in the same way with the same
 error as in this gist:
 http://gist.github.com/104251

 The code that I have does not have VERSION anywhere in it, which
 appears to be colliding with something in Rack. I don't know if this
 is causing the plugins not to load or not. Then there is still the
 issue with rmagick not loading as well.

 I am trying to deploy a vanilla install of CommunityEngine:
 http://communityengine.com

 Thanks,

 Carl

 On Thu, Apr 30, 2009 at 7:08 PM, Morten Bagai mor...@heroku.com wrote:

 Carl,

 Is it possibly you're doing something like require 'rmagick'? Instead
 of the correct form which is require 'RMagick'? Try pulling up a
 console, and doing something like this (substituting a valid image
 path for you app):

 $ heroku console
 Ruby console for boing.heroku.com
 >> require 'RMagick'
 => []
 >> include Magick
 => Object
 >> ImageList.new('public/images/logo.png')
 [public/images/logo.png PNG 186x50 186x50+0+0 DirectClass 8-bit 1kb]
 => scene=0

 Best,

 /Morten

 On Apr 29, 2009, at 9:05 PM, Carl Fyffe wrote:


 When I remove it from .gems I get this fancy error:

 /usr/local/lib/ruby/gems/1.8/gems/actionpack-2.3.2/lib/
 action_controller/vendor/rack-1.0/rack.rb:17:
 warning: already initialized constant VERSION
 /usr/local/lib/ruby/gems/1.8/gems/rails-2.3.2/lib/rails/plugin/
 loader.rb:184:in
 `ensure_all_registered_plugins_are_loaded!': Could not locate the
 following plugins: community_engine and white_list (LoadError)
   from /usr/local/lib/ruby/gems/1.8/gems/rails-2.3.2/lib/rails/plugin/
 loader.rb:44:in
 `load_plugins'
   from /usr/local/lib/ruby/gems/1.8/gems/rails-2.3.2/lib/
 initializer.rb:348:in
 `load_plugins'

 [... several lines snipped ...]

 no such file to load -- rmagick
 /usr/local/lib/ruby/site_ruby/1.8/rubygems/custom_require.rb:31:in
 `gem_original_require'
 /usr/local/lib/ruby/site_ruby/1.8/rubygems/custom_require.rb:31:in
 `require'
 /usr/local/lib/ruby/gems/1.8/gems/activesupport-2.3.2/lib/
 active_support/dependencies.rb:158:in
 `require_without_desert'
 /home/slugs/12838_f2b2289_c549/mnt/.gems/gems/desert-0.5.0/lib/
 desert/ruby/object.rb:8:in
 `require'


 The whole error can be seen here: http://gist.github.com/104251

 The app is working locally...

 On Wed, Apr 29, 2009 at 9:38 PM, Morten Bagai mor...@heroku.com
 wrote:

 Hey,

 Rmagick is preinstalled. Try removing it from .gems file and just
 requiring it as you normally would.

 Morten



 On Apr 29, 2009, at 6:08 PM, Carl Fyffe carl.fy...@gmail.com wrote:


 My application uses RMagick. I put it into my .gems file and do the
 git push heroku dance while everything compiles and deploys.
 Unfortunately, it does not work.

 - Heroku receiving push

 - Installing gem rmagick from http://gems.rubyforge.org
   ERROR:  Error installing rmagick:
   ERROR: Failed to build gem native extension.

   /usr/local/bin/ruby extconf.rb install rmagick --no-ri --no-rdoc
 --bindir=/code/repos/12838/bin --version="> 0" -s
 http://gems.rubyforge.org


   Gem files will remain installed in /code/repos/12838/
 gems_build/
 gems/rmagick-2.9.1 for inspection.
   Results logged to /code/repos/12838/gems_build/gems/
 rmagick-2.9.1/ext/RMagick/gem_make.out
   Building native extensions.  This could take a while...


 error: hooks/pre-receive exited with error code 1


 I even tried the rmagick 1.15.17 which is said to be installed in
 the
 full list of gems, but that dies too.


 help?






 


 






Full Text Search

2009-05-01 Thread Carl Fyffe

What is the best way to do Full Text Search on Heroku?




Re: Full Text Search

2009-05-01 Thread Carl Fyffe

Awesome. Thanks for the pointer!

On 5/1/09, shaners veganstraighte...@gmail.com wrote:

 What is the best way to do Full Text Search on Heroku?


 http://docs.heroku.com/full-text-indexing

 
 still vegan. still straightedge.
 shane becker
 +1 801 898-9481
 blog: http://iamshane.com
 shirts: http://theresistancearmy.com




 





RMagick

2009-04-29 Thread Carl Fyffe

My application uses RMagick. I put it into my .gems file and do the
git push heroku dance while everything compiles and deploys.
Unfortunately, it does not work.

-----> Heroku receiving push

-----> Installing gem rmagick from http://gems.rubyforge.org
   ERROR:  Error installing rmagick:
ERROR: Failed to build gem native extension.

   /usr/local/bin/ruby extconf.rb install rmagick --no-ri --no-
rdoc --bindir=/code/repos/12838/bin --version=">= 0" -s http://gems.rubyforge.org


   Gem files will remain installed in /code/repos/12838/gems_build/
gems/rmagick-2.9.1 for inspection.
   Results logged to /code/repos/12838/gems_build/gems/
rmagick-2.9.1/ext/RMagick/gem_make.out
   Building native extensions.  This could take a while...


error: hooks/pre-receive exited with error code 1


I even tried rmagick 1.15.17, which the full list of gems says is
preinstalled, but that dies too.


help?




Re: RMagick

2009-04-29 Thread Carl Fyffe

When I left rmagick out of the manifest it complained...

On Wed, Apr 29, 2009 at 9:31 PM, Ricardo Chimal, Jr. rica...@heroku.com wrote:

 for rmagick 1.15.17 you don't need to include rmagick in your .gems
 manifest

 On Apr 29, 6:08 pm, Carl Fyffe carl.fy...@gmail.com wrote:
 My application uses RMagick. I put it into my .gems file and do the
 git push heroku dance while everything compiles and deploys.
 Unfortunately, it does not work.

 - Heroku receiving push

 - Installing gem rmagick from http://gems.rubyforge.org
        ERROR:  Error installing rmagick:
         ERROR: Failed to build gem native extension.

        /usr/local/bin/ruby extconf.rb install rmagick --no-ri --no-
 rdoc --bindir=/code/repos/12838/bin --version=">= 0" -s http://gems.rubyforge.org

        Gem files will remain installed in /code/repos/12838/gems_build/
 gems/rmagick-2.9.1 for inspection.
        Results logged to /code/repos/12838/gems_build/gems/
 rmagick-2.9.1/ext/RMagick/gem_make.out
        Building native extensions.  This could take a while...

 error: hooks/pre-receive exited with error code 1

 I even tried the rmagick 1.15.17 which is said to be installed in the
 full list of gems, but that dies too.

 help?

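As an aside, a .gems manifest following Ricardo's advice simply leaves rmagick out. A hypothetical sketch (the gem names below are placeholders, not from this thread; syntax per Heroku's gem-manifest docs of the era):

```text
# .gems -- one gem per line; rmagick is omitted because
# Heroku preinstalls RMagick 1.15.17 on the platform
haml
will_paginate --version '>= 2.3.2'
```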



Re: RMagick

2009-04-29 Thread Carl Fyffe

When I remove it from .gems I get this fancy error:

/usr/local/lib/ruby/gems/1.8/gems/actionpack-2.3.2/lib/action_controller/vendor/rack-1.0/rack.rb:17:
warning: already initialized constant VERSION
/usr/local/lib/ruby/gems/1.8/gems/rails-2.3.2/lib/rails/plugin/loader.rb:184:in
`ensure_all_registered_plugins_are_loaded!': Could not locate the
following plugins: community_engine and white_list (LoadError)
from 
/usr/local/lib/ruby/gems/1.8/gems/rails-2.3.2/lib/rails/plugin/loader.rb:44:in
`load_plugins'
from 
/usr/local/lib/ruby/gems/1.8/gems/rails-2.3.2/lib/initializer.rb:348:in
`load_plugins'

[... several lines snipped ...]

no such file to load -- rmagick
/usr/local/lib/ruby/site_ruby/1.8/rubygems/custom_require.rb:31:in
`gem_original_require'
/usr/local/lib/ruby/site_ruby/1.8/rubygems/custom_require.rb:31:in `require'
/usr/local/lib/ruby/gems/1.8/gems/activesupport-2.3.2/lib/active_support/dependencies.rb:158:in
`require_without_desert'
/home/slugs/12838_f2b2289_c549/mnt/.gems/gems/desert-0.5.0/lib/desert/ruby/object.rb:8:in
`require'


The whole error can be seen here: http://gist.github.com/104251

The app is working locally...

On Wed, Apr 29, 2009 at 9:38 PM, Morten Bagai mor...@heroku.com wrote:

 Hey,

 Rmagick is preinstalled. Try removing it from .gems file and just
 requiring it as you normally would.

 Morten



 On Apr 29, 2009, at 6:08 PM, Carl Fyffe carl.fy...@gmail.com wrote:


 My application uses RMagick. I put it into my .gems file and do the
 git push heroku dance while everything compiles and deploys.
 Unfortunately, it does not work.

 - Heroku receiving push

 - Installing gem rmagick from http://gems.rubyforge.org
       ERROR:  Error installing rmagick:
           ERROR: Failed to build gem native extension.

       /usr/local/bin/ruby extconf.rb install rmagick --no-ri --no-
 rdoc --bindir=/code/repos/12838/bin --version=">= 0" -s
 http://gems.rubyforge.org


       Gem files will remain installed in /code/repos/12838/gems_build/
 gems/rmagick-2.9.1 for inspection.
       Results logged to /code/repos/12838/gems_build/gems/
 rmagick-2.9.1/ext/RMagick/gem_make.out
       Building native extensions.  This could take a while...


 error: hooks/pre-receive exited with error code 1


 I even tried the rmagick 1.15.17 which is said to be installed in the
 full list of gems, but that dies too.


 help?


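Morten's suggestion boils down to requiring the preinstalled gem at runtime. A minimal sketch of doing that defensively (the helper name and fallback behavior are my own assumptions, not from the thread; RMagick has historically answered to more than one require name):

```ruby
# Hypothetical helper: try each known require name for RMagick and
# report whether one of them loaded, instead of crashing on boot.
def load_rmagick
  %w[RMagick rmagick].each do |name|
    begin
      require name
      return true
    rescue LoadError
      next # that spelling isn't available; try the other one
    end
  end
  false # RMagick isn't installed in this environment
end
```

An app can then branch on `load_rmagick` to disable image processing locally while using the preinstalled gem on Heroku.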