Re: Maximum RAM per dyno?
Ok, thanks Oren!

On Mar 10, 1:17 pm, Oren Teich wrote:
> As the document you point to indicates, 300MB is the hard limit. We
> automatically kill any dyno using more than 300MB.

--
You received this message because you are subscribed to the Google Groups "Heroku" group. To post to this group, send email to her...@googlegroups.com. To unsubscribe from this group, send email to heroku+unsubscr...@googlegroups.com. For more options, visit this group at http://groups.google.com/group/heroku?hl=en.
Re: Maximum RAM per dyno?
As the document you point to indicates, 300MB is the hard limit. We automatically kill any dyno using more than 300MB.

Oren

On Wed, Mar 10, 2010 at 12:10 PM, Chris Hanks wrote:
> Anyway, can someone verify that 300 MB is the maximum RAM available
> for a dyno? I don't expect to get near it anytime soon, but it would
> be helpful to know. [...]
Re: Maximum RAM per dyno?
No, I'm familiar with HTTP caching, and it's not what I'm looking to do. Thanks anyway, though.

What I'm doing is actually not that complex. MongoMapper already has an identity map; I'll just be tweaking it to persist between requests. And I'm only doing this for a few of my models (ones that are accessed somewhat randomly by id several times per request, and whose records are only modified during site maintenance anyway). It's not like I'm trying to write my own caching system from scratch.

Anyway, can someone verify that 300 MB is the maximum RAM available for a dyno? I don't expect to get near it anytime soon, but it would be helpful to know.

Thanks!

On Mar 10, 11:36 am, Carl Fyffe wrote:
> Chris,
>
> Will this work for you? http://docs.heroku.com/http-caching
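[Editor's note: the idea above — a per-process cache of read-mostly records that persists between requests — can be sketched roughly as follows. This is not MongoMapper's actual identity-map API; `RecordCache` and its injected finder are hypothetical names, kept generic so the sketch is self-contained.]

```ruby
# Hypothetical per-process cache for read-mostly records. Entries live
# as long as the process (dyno), so lookups after the first request hit
# memory instead of the database. The finder is injected (e.g.
# ->(id) { Model.find(id) }) to keep this sketch standalone.
class RecordCache
  def initialize(&finder)
    @finder = finder
    @store  = {}   # id => record
  end

  # Return the cached record, loading and memoizing it on first access.
  def [](id)
    @store.fetch(id) { @store[id] = @finder.call(id) }
  end

  def size
    @store.size
  end
end
```

Note this only helps within a single dyno; as Oren points out elsewhere in the thread, requests are not guaranteed to hit the same dyno, so it suits records that are identical everywhere (read-only, changed only during maintenance).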
Re: Maximum RAM per dyno?
Chris,

Will this work for you? http://docs.heroku.com/http-caching

Carl

On Wed, Mar 10, 2010 at 2:22 PM, Chris Hanks wrote:
> Does anyone have an answer for this? Thanks in advance!
Re: Maximum RAM per dyno?
On Mar 9, 10:40 pm, Chris Hanks wrote:
> I'm interested in this too. I have several thousand MongoDB documents
> that are read-only and frequently accessed, so I figured I'd just
> cache them in the dyno's memory to speed up requests.
>
> So is 300 MB the hard limit for each dyno's RAM, then? I suppose that
> if it grows beyond that point, the dyno is restarted?

Does anyone have an answer for this? Thanks in advance!
Re: Maximum RAM per dyno?
As far as I know the beta is closed, but if you're willing to wait, memcached will be rolled out to the general public "soon" (from what Oren has said). I agree with everyone else: it sounds like that is your best bet for something that's fast. I'd also look at something like Redis if you want to do something outside of Heroku.

-Terence

On Wed, 2010-03-10 at 08:22 -0800, Daniele wrote:
> What I'm doing is a micro firewall that acts as a filter before every
> request. So what I need is to keep a moderate-size array in memory and
> read and write it *quickly*. [...]
Re: Maximum RAM per dyno?
Hi Carl, no problem :)

As I wrote in another post, I need to use (read & write) a small amount of data (about 200Kb) in a before_filter. I need it to be *fast*, and the DB is not a viable solution (also, I need to package this as a plugin, so I'd prefer not to have any DB dependencies). Filesystem... is not supported on Heroku :P

So locally I tested a global var and it did its work. Yes, I know it is not a best practice. Any other ideas?

Thank you for your time.

On 10 Mar, 17:25, Carl Fyffe wrote:
> But whatever you do, please avoid using global variables. [...]
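[Editor's note: a common alternative to a bare global variable here is a small class-level store behind a mutex, sketched below. `FilterStore` is a hypothetical name; note that on Heroku this state lives in one dyno's memory only, so each dyno sees its own copy — which may be acceptable for a per-request filter, but it is not shared state.]

```ruby
require 'thread'  # Mutex (core in Ruby 1.9+, harmless to require)

# Per-process key/value store: a namespaced, mutex-guarded alternative
# to a $global variable. Fast (plain Hash access), but NOT shared
# across dynos -- every process holds an independent copy.
class FilterStore
  @data  = {}
  @mutex = Mutex.new

  class << self
    def read(key)
      @mutex.synchronize { @data[key] }
    end

    def write(key, value)
      @mutex.synchronize { @data[key] = value }
    end
  end
end
```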
Re: Maximum RAM per dyno?
Daniele,

My apologies for coming across rude; it was not my intention, which was why I apologized up front. As Chris said, there are better ways to skin the global-variable cat. Also, most Ruby docs say to use them sparingly:

http://www.rubyist.net/~slagell/ruby/globalvars.html

Which to most people means: avoid at all costs. You can put them in the db. If they are constant and never change, you can put them in config. If they are client-specific, put them in a cookie and store them in the browser (as long as they aren't huge). If they are huge, then the database is your best bet, because you don't want to clutter your memory with them. If the database is too slow, then put it on the file system.

But whatever you do, please avoid using global variables. I have googled for the article that explains why, but I can't find it. All I could find was a repeat of "use sparingly".

As I said earlier, benchmark the easiest solution first. It might just meet your needs.

On Wed, Mar 10, 2010 at 10:38 AM, Chris wrote:
> Carl,
> I think your tone is fine :-) and I appreciate you taking the time to
> post your experiences. Looking forward to memcached being deployed. [...]
Re: Maximum RAM per dyno?
Thanks Chris,
the memcached solution is right, but:

- it is in private beta (by the way, I will send a mail to join the beta)
- I don't know how much it will cost, but I suppose it will be too much (for me) just to handle some Kb

What I'm doing is a micro firewall that acts as a filter before every request. So what I need is to keep a moderate-size array in memory, and read and write it *quickly*. Any suggestions are welcome.

On 10 Mar, 16:38, Chris wrote:
> If you really need a global variable to be accessible across servers
> then memcached works good as long as it doesn't matter if that global
> variable gets expired. [...]
Re: Maximum RAM per dyno?
Carl,
I think your tone is fine :-) and I appreciate you taking the time to post your experiences. Looking forward to memcached being deployed.

Daniele,
If you really need a global variable to be accessible across servers, then memcached works well, as long as it doesn't matter if that global variable gets expired. You need to store the variable persistently in a database: pull it from memcached if it's there; if not, then hit the database. (Alternatively, depending on the variable, just re-create it if it's expired and doesn't need to be persisted somewhere.)

-Chris

On Wed, Mar 10, 2010 at 11:51 PM, Carl Fyffe wrote:
> No, the $xx global variables are not persistent across the cloud. If
> you have more than one machine then you have to jump through very
> complex hoops to get them to share memory like that. [...]
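[Editor's note: the read-through pattern Chris describes — try the cache, fall back to the database, repopulate on a miss — looks roughly like this. The two stores are plain hashes standing in for a memcached client and a database query, so the sketch runs standalone.]

```ruby
# Cache-aside lookup: check the cache first, fall back to the
# authoritative database copy on a miss, and repopulate the cache.
# `cache` and `db` are injected hash-like objects here; in a real app
# the cache would be a memcached client and `db` a database query.
def fetch_with_cache(key, cache, db)
  cached = cache[key]
  return cached unless cached.nil?

  value = db[key]              # authoritative copy lives in the database
  cache[key] = value unless value.nil?
  value
end
```

The trade-off Chris notes applies: once cached, the value may be stale (or expired) relative to the database, so it only suits data that can tolerate that.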
Re: Maximum RAM per dyno?
Yes Carl, your tone is quite rude. And perhaps you replied to the wrong person. By the way, I need the global variable for other uses; caching is not a problem for me.

On 10 Mar, 15:51, Carl Fyffe wrote:
> These kinds of questions crack me up. I am going to try to address this
> without sounding rude, but if I do happen to come across as rude, I
> apologize for that up front.

_cut
Re: Maximum RAM per dyno?
These kinds of questions crack me up. I am going to try to address this without sounding rude, but if I do happen to come across as rude, I apologize for that up front.

No, the $xx global variables are not persistent across the cloud. If you have more than one machine, then you have to jump through very complex hoops to get them to share memory like that.

First, before you start creating crazy caches in memory, you should benchmark your app to ensure that it performs at a respectable level with the number of users you expect. If it doesn't, the first thing you should do is increase the dynos and see if that helps. If it does, stop working on the caches and deploy! An extra $32 a month is far less than the pain and heartache and time you will waste trying to create a Frankencache.

Secondly, caching in memory isn't very useful if you only have 12 users and only use one dyno. If you have many users and multiple dynos, then caching in memory makes even less sense, because you will almost certainly be getting cache misses all the time: users are not guaranteed to return to the same dyno.

If caching is absolutely necessary, beg your way into the private beta for memcached or wait until it becomes public. However, there are pretty solid caching mechanisms already in place; see more below.

Do people normally write large apps in Sinatra? I thought it was meant for dinky one-off apps... If you were using Rails, the caching would be taken care of for you and you wouldn't have to ask these questions. Basically, all of the suggestions you have posted are less-than-adequate answers that have been answered by the guys who wrote the frameworks. Use the best practices of the frameworks and you should be fine. But in the end, let the benchmarks be the measure for the work you have to do.

On one project, a colleague sysadmin wanted to keep adding memory to make the app run faster. I asked him to set up multiple app servers and load balance instead. Both looked to perform the same, until we benchmarked it. Multiple app servers blew the monster-memory app server away.

Benchmark twice, code once, as my carpenter father would say.

On 3/10/10, Daniele wrote:
> So a global variable $xx is not shared in the cloud?!
Re: Maximum RAM per dyno?
So a global variable $xx is not shared in the cloud?!

On 10 Mar, 06:49, Oren Teich wrote:
> I don't have a good answer for your question, but note there is no
> guarantee that your requests will be served from the same physical
> machine - we'll move the dyno around as demanded by the cloud.
Re: Maximum RAM per dyno?
I'm interested in this too. I have several thousand MongoDB documents that are read-only and frequently accessed, so I figured I'd just cache them in the dyno's memory to speed up requests.

So is 300 MB the hard limit for each dyno's RAM, then? I suppose that if it grows beyond that point, the dyno is restarted?

On Mar 9, 9:49 pm, Oren Teich wrote:
> I don't have a good answer for your question, but note there is no
> guarantee that your requests will be served from the same physical
> machine - we'll move the dyno around as demanded by the cloud.
> memcached is the way to do persistent (beyond single request) caching.
> We're a few days away from making it public beta.
Re: Maximum RAM per dyno?
On Tue, Mar 9, 2010 at 11:02 PM, Alex Chaffee wrote:
> I've got some frequently-accessed data I'd like to store in RAM
> between requests. I'm using Sinatra, so I'll probably just use an LRU
> cache in a @@class variable. I think I can muddle through all the
> technical issues but one:
>
> How big can I reasonably make my cache? I.e., how high (or low) should
> I set my threshold before I start expiring unused data?
>
> The only guidance I could find from a quick perusal of heroku.com was
> on http://legal.heroku.com/aup: "Dyno RAM usage: 300MB - Hard" --
> which is good to know, but not a complete answer. I'll obviously want
> to set my cache well below that limit. But without monitoring tools I
> don't have any idea how much RAM is used by the normal processing of
> Rack + Sinatra per request, nor do I know how many requests are being
> serviced per second. My cache is supposed to increase performance, not
> decrease it by hammering the dyno into swap space or otherwise
> interfering with other system functions on the dyno.
>
> So... any ideas? Has anyone else done this? Are there any low-level
> monitoring tools I can use to find out how much RAM I'm currently
> using, how loaded the system is, or anything of that nature? Would
> New Relic help here (and does it work for Sinatra apps)?
>
> BTW, although I may want to use memcached as an *additional* caching
> layer, what I'm interested in exploring now is the feasibility of
> storing transient data in the app server itself. (I don't want the
> overhead of instantiating Ruby objects, especially ActiveRecord
> objects, not to mention that memcached isn't officially available as
> an addon.)
>
> ---
> Alex Chaffee - a...@cohuman.com - http://alexch.github.com
> Stalk me: http://friendfeed.com/alexch | http://twitter.com/alexch |
> http://alexch.tumblr.com

Are you sure you need to do this? Is it slow enough that you've decided this is a necessary optimization? My experience has been that Heroku is extremely fast, and worrying about things like this isn't necessary; it just makes the code base more complex and less reliable for no noticeable gain. I've spent days complicating my code base with optimizations that turned out to be pointless. Both DataMapper and ActiveRecord already make some very impressive optimizations. If you haven't documented an issue with speed and profiled your app to see where the real problem lies, you might want to try that before taking the time and effort to "muddle through all the technical issues."

--
You received this message because you are subscribed to the Google Groups "Heroku" group.
To post to this group, send email to her...@googlegroups.com.
To unsubscribe from this group, send email to heroku+unsubscr...@googlegroups.com.
For more options, visit this group at http://groups.google.com/group/heroku?hl=en.
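[For reference, the class-variable LRU cache Alex describes can be sketched in a few lines of plain Ruby. This is only an illustration, not code from the thread; it assumes Ruby 1.9+, whose hashes preserve insertion order, so the hash itself can serve as the recency list. The `LruCache` name and `fetch` interface are made up for the example.]

```ruby
# A minimal bounded LRU cache: a hash where the first key is always the
# least recently used entry (relies on Ruby 1.9+ insertion-ordered hashes).
class LruCache
  def initialize(max_entries = 1000)
    @max_entries = max_entries
    @store = {}
  end

  # Return the cached value for key, computing it with the block on a miss.
  def fetch(key)
    if @store.key?(key)
      # On a hit, re-insert the key so it moves to the back (most recent).
      @store[key] = @store.delete(key)
    else
      @store[key] = yield(key)
      # Evict from the front (least recent) once we exceed the cap.
      @store.delete(@store.keys.first) while @store.size > @max_entries
    end
    @store[key]
  end

  def size
    @store.size
  end
end
```

In a Sinatra app this would live in a class variable shared across requests, e.g. `@@docs = LruCache.new(500)` and then `@@docs.fetch(id) { Doc.find(id) }` inside a route. Note that, per Oren's reply below in the thread, each dyno gets its own copy, so this only works for data that can be redundantly cached per process.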
Re: Maximum RAM per dyno?
I don't have a good answer for your question, but note that there is no guarantee your requests will be served from the same physical machine - we'll move the dyno around as demanded by the cloud. memcached is the way to do persistent (beyond a single request) caching. We're a few days away from making it public beta.

Oren

On Tue, Mar 9, 2010 at 9:02 PM, Alex Chaffee wrote:
> I've got some frequently-accessed data I'd like to store in RAM
> between requests. I'm using Sinatra, so I'll probably just use an LRU
> cache in a @@class variable. I think I can muddle through all the
> technical issues but one:
>
> How big can I reasonably make my cache? I.e., how high (or low) should
> I set my threshold before I start expiring unused data?
>
> The only guidance I could find from a quick perusal of heroku.com was
> on http://legal.heroku.com/aup: "Dyno RAM usage: 300MB - Hard" --
> which is good to know, but not a complete answer. I'll obviously want
> to set my cache well below that limit. But without monitoring tools I
> don't have any idea how much RAM is used by the normal processing of
> Rack + Sinatra per request, nor do I know how many requests are being
> serviced per second. My cache is supposed to increase performance, not
> decrease it by hammering the dyno into swap space or otherwise
> interfering with other system functions on the dyno.
>
> So... any ideas? Has anyone else done this? Are there any low-level
> monitoring tools I can use to find out how much RAM I'm currently
> using, how loaded the system is, or anything of that nature? Would
> New Relic help here (and does it work for Sinatra apps)?
>
> BTW, although I may want to use memcached as an *additional* caching
> layer, what I'm interested in exploring now is the feasibility of
> storing transient data in the app server itself. (I don't want the
> overhead of instantiating Ruby objects, especially ActiveRecord
> objects, not to mention that memcached isn't officially available as
> an addon.)
>
> ---
> Alex Chaffee - a...@cohuman.com - http://alexch.github.com
> Stalk me: http://friendfeed.com/alexch | http://twitter.com/alexch |
> http://alexch.tumblr.com
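[Since the open question in the thread is how close a process is to the 300 MB hard limit without any platform monitoring tools, one low-tech option is to have the process report its own resident set size. This is a sketch, not something from the thread, and it assumes a Unix-like dyno environment where `ps -o rss=` prints the process's RSS in kilobytes.]

```ruby
# Report this process's resident set size (RSS) in megabytes by shelling
# out to ps. Assumes a Unix environment where `ps -o rss=` emits RSS in KB.
def current_rss_mb
  `ps -o rss= -p #{Process.pid}`.to_i / 1024.0
end
```

A cache could log this per request, or simply stop admitting new entries once the number approaches a self-imposed ceiling comfortably below the 300 MB limit (say, 200 MB), leaving headroom for Rack/Sinatra request processing.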