Perhaps tmpfs is an option. The benefit of tmpfs is that you can create a
filesystem larger than physical memory, and the virtual memory manager will
swap unused items out to disk. You could then NFS-export the filesystem, or
do something else with it.
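A rough sketch of what that might look like on Linux (the size, mount point,
and export line are all illustrative):

    # create a tmpfs larger than RAM; the VM swaps cold pages to disk
    mount -t tmpfs -o size=64g tmpfs /export/cache
    # then export it over NFS, e.g. with a line like this in /etc/exports:
    # /export/cache 10.0.0.0/24(ro)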
If you have control over the reproxy, you can do a simple hash against the
list of all machines you have. Varnish/squid can also do internal forwards
after hashing.
It's a little weird, but varnish affords you a lot of smarts on the
serving end.
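A rough sketch of the simple-hash idea, in Python (the node list and key
shape are made up):

    import hashlib

    NODES = ["cache01:80", "cache02:80", "cache03:80"]

    def node_for(key):
        # md5 gives a stable distribution across processes and languages
        digest = hashlib.md5(key.encode()).hexdigest()
        return NODES[int(digest, 16) % len(NODES)]

    print(node_for("track/123456.mp3"))  # a given key always maps to one node

One caveat: plain modulo remaps most keys when you add or remove a machine,
so consistent hashing is worth it if the machine list changes often.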
Another thing worth noting, I guess, is that typic
On Nov 2, 2009, at 1:35 PM, Jay Paroline wrote:
What I'd love to do is get those popular files served from memory,
which should alleviate load on the disks considerably. Obviously the
filesystem cache does some of this already, but since it's not
distributed, it uses the space a lot less effic
You could also do a relatively simple solution: tack a two-digit shard ID
onto the front of your key, then use it to direct the request to a specific
cluster internally. Give the clusters a lot of RAM and rely on OS filesystem
caching to keep frequently requested files in memory. Would be v
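Something like this, say (Python; the 100-way split and the key shape are
assumptions):

    import zlib

    def shard_key(key, shards=100):
        # derive a two-digit shard ID from the key itself
        shard = zlib.crc32(key.encode()) % shards
        return "%02d:%s" % (shard, key)

    print(shard_key("track/123456.mp3"))  # e.g. "07:track/123456.mp3"

The front end then only has to peel off the first two digits to know which
cluster to send the request to.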
Adam: yes, we serve up the file contents, not the URL to the media.
lighttpd makes this simple with the X-Sendfile header.
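For anyone who hasn't seen the pattern, it's roughly this (a WSGI sketch;
assumes lighttpd with "allow-x-send-file" enabled for the backend, and the
key-to-path lookup is made up):

    def lookup_path(key):
        # hypothetical: map the request key to a file on disk
        return "/srv/media/%s.mp3" % key

    def app(environ, start_response):
        key = environ.get("QUERY_STRING", "")
        start_response("200 OK", [
            ("Content-Type", "audio/mpeg"),
            # lighttpd intercepts this header and streams the file itself
            ("X-Sendfile", lookup_path(key)),
        ])
        return [b""]  # empty body; the server sends the file instead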
Having not used varnish or squid before: do either of them support some form
of distributed memory caching, so that rather than buying a single
expensive box with tons of memory, we
> You could also redirect the client to the proxy/cache after computing the
> filename, but that exposes the name in a way that might be reusable.
perlbal is great for this... I think nginx might be able to do it too?
Internal "reproxy": the server returns headers for where the load balancer is
to re
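In Perlbal terms that's roughly this (a sketch; assumes "SET enable_reproxy
= true" on the Perlbal service, the internal URL is made up, and nginx's
equivalent header is X-Accel-Redirect):

    def app(environ, start_response):
        start_response("200 OK", [
            # Perlbal fetches this URL itself; the client never sees it
            ("X-REPROXY-URL", "http://10.0.0.12:8080/files/abc123.mp3"),
        ])
        return [b""]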
dormando wrote:
You could put something like varnish in between that final step and your
client: the key is pulled in, the file is looked up, then the file is fetched
*through* varnish. Of course, I don't know offhand how much work it would be
to make your app deal with that fetch-through scenario.
Since these files are
So you actually give back the file contents in the response, not the URL to
the media? If so, then that does complicate things a little bit. I still
think that memcached might not be the best solution for this, though it
could obviously be configured to do it.
On Mon, Nov 2, 2009 at 5:44 PM, Jay
I'm not sure how well a reverse proxy would fit our needs, having
never used one before. The way we do streaming is that a client sends a
one-time-use key to the stream server. The key is used to determine which
file should be streamed, and then the file is returned. The effect is
that no two requests
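A sketch of that flow, assuming the one-time keys live in memcached
(python-memcache client; the key layout is made up, and get-then-delete
isn't atomic, so a real version needs more care with races):

    import memcache

    mc = memcache.Client(["127.0.0.1:11211"])

    def path_for_key(key):
        # the file path was stored under the key when it was issued
        path = mc.get("streamkey:" + key)
        if path is None:
            return None  # unknown or already-used key
        mc.delete("streamkey:" + key)  # burn the key
        return path

The returned path can then go back through something like X-Sendfile as
mentioned earlier in the thread.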
I'm guessing you might get better mileage out of using something written
more for this purpose, e.g. squid set up as a reverse proxy.
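For what it's worth, a minimal accel-mode config might look like this (the
host, sizes, and site name are invented):

    http_port 80 accel defaultsite=stream.example.com
    cache_peer 10.0.0.5 parent 8080 0 no-query originserver name=app
    cache_mem 4096 MB
    maximum_object_size_in_memory 8 MB  # big enough for the 5-6MB tracks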
On Mon, Nov 2, 2009 at 4:35 PM, Jay Paroline wrote:
I'm running this by you guys to make sure we're not trying something
completely insane. ;)
We already rely on memcached quite heavily to minimize load on our DB
with stunning success, but as a music streaming service, we also serve
up lots and lots of 5-6MB files, and right now we don't have a
di