Re: help with a multi process http server.

2007-01-25 Thread David Bovill

Along those lines, Andre - why not just put the Rev HTTP servers behind a
mature HTTP proxy, as they do in the Java and Zope world? Here is an article:


http://www.vmunix.com/mark/blog/archives/2006/01/02/fastcgi-scgi-and-apache-background-and-future

And a quote from the above:

What Java and Zope app servers do (for the unfamiliar) is run their own
solid HTTP servers that do intelligent URL parsing/generation for you, to
make sticking them behind an HTTP proxy (like Apache's mod_proxy, or Squid,
or whatever) at an arbitrary point in the URI a piece of cake. Typically you
redirect some URL's traffic (a virtual host, subdirectory, etc.) off to the
dedicated app server the same way a proxy server sits between your web
browser and the web server. It works just like directing requests off to a
Handler in Apache, except the request is actually sent off to another HTTP
server instead of handed off to a module or CGI script. And of course the
reply comes back as an HTTP object that's sent back to the originator.
There's a bunch of reasons why doing this with HTTP instead of CGI is a
really nice approach. One is that setting up these app servers becomes
pretty simple for sysadmins, and doing the configs on the upstream
webserver/proxy is IDENTICAL no matter what kind of downstream app server
you're talking to. That reduces errors. It's flexible, too, allowing you to
start up an app server instance (which, of course, acts like a web server)
on a port, run it as whatever system user you want, jail it, zone it,
firewall it, whatever, and then you send HTTP requests to the thing. You can
go straight to the app server in your web browser to debug stuff. Since it's
HTTP, we already have a full suite of tools that can do intelligent things
with the protocol. Firewalls, load balancers, proxies, and so on. There's a
huge market of mature HTTP brokers, both free and commercial, including
Apache itself (mod_proxy, which can be hooked into rewrites).
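For readers unfamiliar with the setup the article describes, a minimal Apache reverse-proxy stanza might look like the sketch below; the path and port are illustrative, not taken from the article.

```apache
# Hand all traffic under /app/ to a downstream app server on
# localhost:8081. ProxyPassReverse rewrites Location headers in the
# replies so that redirects still point at the proxy, not the backend.
ProxyPass        /app/ http://localhost:8081/
ProxyPassReverse /app/ http://localhost:8081/
```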






On 23/01/07, Jan Schenkel [EMAIL PROTECTED] wrote:


--- Andre Garzia [EMAIL PROTECTED] wrote:
 Hello Folks,

 I am here experimenting with a new approach to building Revolution-based
 servers. We've been asking for threads or forks for a while, but until
 the day of such implementation comes, we need to use what we can. One
 other language I always liked (to the point that we created a whole
 company around it some years ago) was REBOL. I remember from the
 blogosphere that someone did a REBOL-based webserver with some nice
 benchmarks; I decided to check it out because, last I remembered, REBOL
 was also a single-threaded language. After checking many sites, I
 discovered that they, along with other people, are using a process pool
 and a scheduler that handles the connections and redirects them to the
 next free process in the pool.

 I decided to take the same approach: I made a tweaked version of my
 RevHTTP server that can be launched from a shell() call, so that I can
 create a scheduler that can launch some number of processes to make up
 the pool. I created a simple communication scheme using wget to make
 little semaphores, so that each server instance can tell the scheduler
 whether it is busy or free. So far, so good, but there's one problem.

 Suppose the scheduler is running on 8080 and each server instance in the
 pool is running from 8081 onwards. When the client connects to 8080, the
 scheduler sends back a redirection response so that the client refreshes
 to a different port (of a free instance in the pool). The problem is
 that an HTTP client such as a browser will then request the favicon and
 all the links in the HTML from the same port, because it re-uses data
 from the connection that yielded that result to fill in missing data in
 the URL. For example, if you make a link that goes to /newpage.html,
 then the server will make it http://yourserver/newpage.html. If I
 answered from port 8081, all links in the page will inherit that port,
 and I want all the connections to come to the scheduler running on a
 different port.

 One approach to solve this is to parse the response and change all the
 HTML responses to include the correct URLs. This is very boring and
 slow, because we must cope with href, src, link, rel and all kinds of
 CSS includes and stuff. What I hoped to find was some HTTP header field
 that would say something like: hey, this server is actually running at
 port bla bla bla, such as:

 host: localhost:8080

 despite the fact that that answer came through 8081. This way the whole
 thing would work, and maybe we would have a web server built with Rev
 that could see some real-world use...

 Anyone have two cents?

 Andre


Hi Andre et al,

During one of my similar experiments, I ended up with
an http server that didn't do any of the serving
itself, but acted as a router between the client
computer and the server apps.
The server apps ran on ports that were blocked from
outside access, instead of using redirects. The
housekeeping can be done fairly easily, and you can
use a chunking approach along with asynchronous write
to socket commands.

Re: help with a multi process http server.

2007-01-25 Thread Andre Garzia

David,

my current idea is to keep everything in 100% Transcript, with maybe
some shell() calls. What I have right now is:

* A 100% Transcript-based webserver.
* A lot of CGI libraries and utilities.
* An experimental FastCGI daemon written in Transcript.

That blog post is very interesting, and I too follow some of the ideas
outlined in it. The only thing I am trying to solve is the denial of
service problem during blocking calls. Even with FastCGI we have this
trouble: none of my web apps running under FastCGI can contain a single
blocking call, for even if a small "wait 10 ticks" happens inside the
app, the whole engine will block.


That's why I (and others) recommend using plain old CGI for real-world
scenarios. Spawning one Rev instance per request is not that bad; the
interpreter is small enough and starts very fast.


As for your question on putting RevHTTP behind a good proxy, the problem
again is blocking calls. If the engine is busy figuring out some lengthy
calculation and the master proxy tries to send something more, then
we're pretty much in denial of service.


any clue?

Andre

On Jan 25, 2007, at 11:57 AM, David Bovill wrote:

Along those lines, Andre - why not just put the Rev HTTP servers behind a
mature HTTP proxy, as they do in the Java and Zope world? Here is an
article:

http://www.vmunix.com/mark/blog/archives/2006/01/02/fastcgi-scgi-and-apache-background-and-future

And a quote from the above:

What Java and Zope app servers do (for the unfamiliar) is run their own
solid HTTP servers that do intelligent URL parsing/generation for you, to
make sticking them behind an HTTP proxy (like Apache's mod_proxy, or
Squid, or whatever) at an arbitrary point in the URI a piece of cake.
Typically you redirect some URL's traffic (a virtual host, subdirectory,
etc.) off to the dedicated app server the same way a proxy server sits
between your web browser and the web server. It works just like directing
requests off to a Handler in Apache, except the request is actually sent
off to another HTTP server instead of handed off to a module or CGI
script. And of course the reply comes back as an HTTP object that's sent
back to the originator.
There's a bunch of reasons why doing this with HTTP instead of CGI is a
really nice approach. One is that setting up these app servers becomes
pretty simple for sysadmins, and doing the configs on the upstream
webserver/proxy is IDENTICAL no matter what kind of downstream app server
you're talking to. That reduces errors. It's flexible, too, allowing you
to start up an app server instance (which, of course, acts like a web
server) on a port, run it as whatever system user you want, jail it, zone
it, firewall it, whatever, and then you send HTTP requests to the thing.
You can go straight to the app server in your web browser to debug stuff.
Since it's HTTP, we already have a full suite of tools that can do
intelligent things with the protocol. Firewalls, load balancers, proxies,
and so on. There's a huge market of mature HTTP brokers, both free and
commercial, including Apache itself (mod_proxy, which can be hooked into
rewrites).


___
use-revolution mailing list
use-revolution@lists.runrev.com
Please visit this url to subscribe, unsubscribe and manage your subscription 
preferences:
http://lists.runrev.com/mailman/listinfo/use-revolution


Re: help with a multi process http server.

2007-01-25 Thread Richard Gaskin

Andre Garzia wrote:
That's why I (and others) recommend using plain old CGI for real-world
scenarios. Spawning one Rev instance per request is not that bad; the
interpreter is small enough and starts very fast.


I enjoyed that blog post as well (thanks for posting it, David), and 
among other things it offers one of the clearest explanations of the 
various CGI alternatives I've come across.


That said, I think you raise a good point here, Andre.  There's a lot of 
concern about scalability, but perhaps not enough about right-sizing.


Not every system will need to handle the traffic load Google Maps gets. 
And the relative few that do have budget concerns far bigger than 
picking a CGI alternative; if they can address those, the interface is a 
no-brainer.


Where I see Andre's work as being very useful is the other 99% of web 
services that serve a smaller audience.


Look at Basecamp: most teams using a Basecamp installation number in the
dozens, maybe the low hundreds, rarely much larger. The number of
concurrent users in any given group is of course a fraction of that.


Being able to snap together instant web services for small and medium 
businesses with an ROI greater than the alternatives would seem a worthy 
goal, and it seems Andre's work moves us all a bit closer to that goal.


--
 Richard Gaskin Managing Editor, revJournal
 ___
 Rev tips, tutorials and more: http://www.revJournal.com


Re: help with a multi process http server.

2007-01-23 Thread Alex Shaw

Hi Dave

I see two situations where this can be a problem (particularly when 
related to file serving) which would need a lot of extra code to work 
properly.


1. Large files. Using the seek command would get around this.

2. Dynamic environments where file contents can change.

So serving ever-changing large files would need a few locks & checks.

I haven't tried this, but if you use the "for write" form to open a file
for writing, does this mean the same file cannot be opened for a read?


regards
alex

Dave Cragg wrote:

This would put an extra burden on the scheduler when it has to write
back large quantities of data to simultaneous requests from different
clients. But I think it should be possible to slice up the responses so
that you only write back to the client sockets in small chunks (say 4 KB
at a time). This should allow simultaneous connections to appear to work
simultaneously.



Re: help with a multi process http server.

2007-01-23 Thread Jan Schenkel
--- Andre Garzia [EMAIL PROTECTED] wrote:
 Hello Folks,

 I am here experimenting with a new approach to building Revolution-based
 servers. We've been asking for threads or forks for a while, but until
 the day of such implementation comes, we need to use what we can. One
 other language I always liked (to the point that we created a whole
 company around it some years ago) was REBOL. I remember from the
 blogosphere that someone did a REBOL-based webserver with some nice
 benchmarks; I decided to check it out because, last I remembered, REBOL
 was also a single-threaded language. After checking many sites, I
 discovered that they, along with other people, are using a process pool
 and a scheduler that handles the connections and redirects them to the
 next free process in the pool.

 I decided to take the same approach: I made a tweaked version of my
 RevHTTP server that can be launched from a shell() call, so that I can
 create a scheduler that can launch some number of processes to make up
 the pool. I created a simple communication scheme using wget to make
 little semaphores, so that each server instance can tell the scheduler
 whether it is busy or free. So far, so good, but there's one problem.

 Suppose the scheduler is running on 8080 and each server instance in the
 pool is running from 8081 onwards. When the client connects to 8080, the
 scheduler sends back a redirection response so that the client refreshes
 to a different port (of a free instance in the pool). The problem is
 that an HTTP client such as a browser will then request the favicon and
 all the links in the HTML from the same port, because it re-uses data
 from the connection that yielded that result to fill in missing data in
 the URL. For example, if you make a link that goes to /newpage.html,
 then the server will make it http://yourserver/newpage.html. If I
 answered from port 8081, all links in the page will inherit that port,
 and I want all the connections to come to the scheduler running on a
 different port.

 One approach to solve this is to parse the response and change all the
 HTML responses to include the correct URLs. This is very boring and
 slow, because we must cope with href, src, link, rel and all kinds of
 CSS includes and stuff. What I hoped to find was some HTTP header field
 that would say something like: hey, this server is actually running at
 port bla bla bla, such as:

 host: localhost:8080

 despite the fact that that answer came through 8081. This way the whole
 thing would work, and maybe we would have a web server built with Rev
 that could see some real-world use...

 Anyone have two cents?

 Andre
 

Hi Andre et al,

During one of my similar experiments, I ended up with
an http server that didn't do any of the serving
itself, but acted as a router between the client
computer and the server apps.
The server apps ran on ports that were blocked from
outside access, instead of using redirects. The
housekeeping can be done fairly easily, and you can
use a chunking approach along with asynchronous write
to socket commands.
However, rapidly refreshing the browser can result in
a serious overload of the routing/scheduling app so
you'll want to add the necessary logic to cancel
requests that are being processed by the server apps.

Just my two eurocents,

Jan.

Quartam Reports for Revolution
http://www.quartam.com

=
As we grow older, we grow both wiser and more foolish at the same time.  (La 
Rochefoucauld)


 



help with a multi process http server.

2007-01-22 Thread Andre Garzia

Hello Folks,

I am here experimenting with a new approach to building Revolution-based
servers. We've been asking for threads or forks for a while, but until
the day of such implementation comes, we need to use what we can. One
other language I always liked (to the point that we created a whole
company around it some years ago) was REBOL. I remember from the
blogosphere that someone did a REBOL-based webserver with some nice
benchmarks; I decided to check it out because, last I remembered, REBOL
was also a single-threaded language. After checking many sites, I
discovered that they, along with other people, are using a process pool
and a scheduler that handles the connections and redirects them to the
next free process in the pool.


I decided to take the same approach: I made a tweaked version of my
RevHTTP server that can be launched from a shell() call, so that I can
create a scheduler that can launch some number of processes to make up
the pool. I created a simple communication scheme using wget to make
little semaphores, so that each server instance can tell the scheduler
whether it is busy or free. So far, so good, but there's one problem.
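The pool-and-scheduler bookkeeping described here can be sketched in a few lines. This is a hypothetical illustration in Python (Andre's actual code is Transcript and uses wget-based semaphores rather than a shared object); the class and port numbers are made up for the sketch.

```python
# Sketch of the scheduler's bookkeeping: a pool of worker ports, each
# flagged busy or free, with the scheduler handing each incoming
# request the next free worker.

class SchedulerPool:
    def __init__(self, first_port, size):
        # Map worker port -> busy flag (False = free, True = busy).
        self.workers = {first_port + i: False for i in range(size)}

    def acquire(self):
        """Return the next free worker port, or None if all are busy."""
        for port, busy in sorted(self.workers.items()):
            if not busy:
                self.workers[port] = True
                return port
        return None

    def release(self, port):
        """A worker signals back that it is free again."""
        self.workers[port] = False


pool = SchedulerPool(8081, 3)   # workers on 8081, 8082, 8083
p1 = pool.acquire()             # 8081
p2 = pool.acquire()             # 8082
pool.release(p1)
p3 = pool.acquire()             # 8081 again, since it was freed
```

In the real setup the busy/free signal travels over HTTP (the wget "semaphores"), but the scheduling decision reduces to this table lookup.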


Suppose the scheduler is running on 8080 and each server instance in the
pool is running from 8081 onwards. When the client connects to 8080, the
scheduler sends back a redirection response so that the client refreshes
to a different port (of a free instance in the pool). The problem is
that an HTTP client such as a browser will then request the favicon and
all the links in the HTML from the same port, because it re-uses data
from the connection that yielded that result to fill in missing data in
the URL. For example, if you make a link that goes to /newpage.html,
then the server will make it http://yourserver/newpage.html. If I
answered from port 8081, all links in the page will inherit that port,
and I want all the connections to come to the scheduler running on a
different port.


One approach to solve this is to parse the response and change all the
HTML responses to include the correct URLs. This is very boring and
slow, because we must cope with href, src, link, rel and all kinds of
CSS includes and stuff. What I hoped to find was some HTTP header field
that would say something like: hey, this server is actually running at
port bla bla bla, such as:


host: localhost:8080

despite the fact that that answer came through 8081. This way the whole
thing would work, and maybe we would have a web server built with Rev
that could see some real-world use...


Anyone have two cents?

Andre



Re: help with a multi process http server.

2007-01-22 Thread Dave Cragg

On 22 Jan 2007, at 19:25, Andre Garzia wrote:

Suppose the scheduler is running on 8080 and each server instance in the
pool is running from 8081 onwards. When the client connects to 8080, the
scheduler sends back a redirection response so that the client refreshes
to a different port (of a free instance in the pool). The problem is
that an HTTP client such as a browser will then request the favicon and
all the links in the HTML from the same port, because it re-uses data
from the connection that yielded that result to fill in missing data in
the URL. For example, if you make a link that goes to /newpage.html,
then the server will make it http://yourserver/newpage.html. If I
answered from port 8081, all links in the page will inherit that port,
and I want all the connections to come to the scheduler running on a
different port.

One approach to solve this is to parse the response and change all the
HTML responses to include the correct URLs. This is very boring and
slow, because we must cope with href, src, link, rel and all kinds of
CSS includes and stuff. What I hoped to find was some HTTP header field
that would say something like: hey, this server is actually running at
port bla bla bla, such as:

host: localhost:8080

despite the fact that that answer came through 8081. This way the whole
thing would work, and maybe we would have a web server built with Rev
that could see some real-world use...

Anyone have two cents?


This is very interesting, Andre. I wish I had your energy.

One thought.

If I understand correctly, under this system the scheduler immediately
responds to the client with a redirect to the same URL but on a
different port. Instead of using a redirect, is it not possible for the
scheduler to hand off the request directly to an available process (for
example, on localhost:8082), wait for the response, and then have the
scheduler write the response directly back to the client? This would
preserve the socket details for the client.


This would put an extra burden on the scheduler when it has to write
back large quantities of data to simultaneous requests from different
clients. But I think it should be possible to slice up the responses so
that you only write back to the client sockets in small chunks (say 4 KB
at a time). This should allow simultaneous connections to appear to work
simultaneously.
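The chunked write-back Dave suggests can be sketched as follows. This is a hypothetical Python illustration (the function names and the 4 KB figure are assumptions for the sketch), showing only the slicing logic, not the socket handling:

```python
# Relay a worker's full response back to the client in small slices,
# so one large reply does not monopolise the scheduler's event loop.

CHUNK = 4096  # bytes per write ("small chunks")

def iter_chunks(response_bytes, chunk_size=CHUNK):
    """Yield successive chunk_size slices of the response."""
    for offset in range(0, len(response_bytes), chunk_size):
        yield response_bytes[offset:offset + chunk_size]

def relay(response_bytes, write_to_client):
    """Send the response to the client chunk by chunk; between writes
    the scheduler is free to service other client sockets."""
    for chunk in iter_chunks(response_bytes):
        write_to_client(chunk)
```

In an event-driven server each write would be issued asynchronously, with the next slice sent only when the previous write completes.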


Also, is there not a problem in redirecting clients that have made a
POST request? My memory of the HTTP RFC is that redirects only use the
GET method. The above idea would get round that problem.


Cheers
Dave


Re: help with a multi process http server.

2007-01-22 Thread [EMAIL PROTECTED]
Hi Andre,

I hope I understood you correctly:

did you try the old <base href="..."> tag?

If you use, in the header of the first HTML file:

<head>
<base href="http://server2.com:...">
  <!-- ... -->
</head>

everything will be loaded from server2 at the given port.
A declaration like

<img src="/src/logo.gif">

will become

<img src="http://server2.com:.../src/logo.gif">

If you put in ANY html page delivered from your servers:

<head>
<base href="http://schedulerserver.com">
  <!-- ... -->
</head>

all requests will be loaded from schedulerserver.com at port 80, even if
the parent page was delivered from another server.


Even in the old days of HTML one could send a base href tag in the head
of the HTML file, and all the following relative links were calculated
in the browsers according to this base. I think this works in all
browsers on the market.
If you have a PHP or SHTML generated index file on the main server which
sends a first HTML page with such a base href tag, the browsers will
connect to the declared server for downloading and so on - you do not
need to parse any pages; the relative-base mechanism does the rest.
  
Even the favicon can be declared explicitly; compare:

http://www.selfhtml.org/
http://de.selfhtml.org/navigation/suche/index.htm?Suchanfrage=base+href

(there may be English versions as well)

and everything comes from the second server ...

http://de.selfhtml.org/html/kopfdaten/beziehungen.htm#quellen
favicon in line 4 of the header:

<head>
<!-- ... other head-section details ... -->
<link rel="stylesheet" type="text/css" href="../../src/selfhtml.css">
<link rel="shortcut icon" type="image/x-icon" href="/src/favicon.ico">
<link rel="author" title="Impressum" href="../../editorial/impressum.htm">
<link rel="contents" title="Inhaltsverzeichnis" href="../../navigation/inhalt.htm">
<link rel="index" title="Stichwortverzeichnis" href="../../navigation/stichwort.htm">
<link rel="search" title="Suche" href="../../navigation/suche/index.htm">
<link rel="help" title="Hilfe" href="../../editorial/index.htm">
<link rel="copyright" title="Urheberrecht" href="../../editorial/copyright.htm">
<link rel="top" title="SELFHTML" href="../../index.htm">
<link rel="up" title="HTML-Kopfdaten" href="index.htm">
<link rel="next" title="Durchsuchbarkeit mit Server-Kommunikation" href="durchsuchbarkeit.htm">
<link rel="prev" title="Adressbasis und Zielfensterbasis" href="basis.htm">
<link rel="first" title="Titel einer HTML-Datei" href="titel.htm">
<link rel="last" title="Durchsuchbarkeit mit Server-Kommunikation" href="durchsuchbarkeit.htm">
</head>

Regards, Franz

Kind regards,
Franz Böhmisch

[EMAIL PROTECTED]
http://www.animabit.de
GF Animabit Multimedia Software GmbH
Am Sonnenhang 22
D-94136 Thyrnau
Tel +49 (0)8501-8538
Fax +49 (0)8501-8537





Re: help with a multi process http server.

2007-01-22 Thread Andre Garzia

Franz,

I owe you a beer! :-)

You just saved my life... I'll experiment with this and post the  
result on this list! :D


Cheers
andre

On Jan 22, 2007, at 6:37 PM, [EMAIL PROTECTED] wrote:


Hi Andre,

I hope I understood you correctly:

did you try the old <base href="..."> tag?

If you use, in the header of the first HTML file:

<head>
<base href="http://server2.com:...">
  <!-- ... -->
</head>

everything will be loaded from server2 at the given port.
A declaration like

<img src="/src/logo.gif">

will become

<img src="http://server2.com:.../src/logo.gif">

If you put in ANY html page delivered from your servers:

<head>
<base href="http://schedulerserver.com">
  <!-- ... -->
</head>

all requests will be loaded from schedulerserver.com at port 80, even if
the parent page was delivered from another server.

Even in the old days of HTML one could send a base href tag in the head
of the HTML file, and all the following relative links were calculated
in the browsers according to this base. I think this works in all
browsers on the market.
If you have a PHP or SHTML generated index file on the main server which
sends a first HTML page with such a base href tag, the browsers will
connect to the declared server for downloading and so on - you do not
need to parse any pages; the relative-base mechanism does the rest.

Even the favicon can be declared explicitly; compare:

http://www.selfhtml.org/
http://de.selfhtml.org/navigation/suche/index.htm?Suchanfrage=base+href

(there may be English versions as well)

and everything comes from the second server ...

http://de.selfhtml.org/html/kopfdaten/beziehungen.htm#quellen
favicon in line 4 of the header:

<head>
<!-- ... other head-section details ... -->
<link rel="stylesheet" type="text/css" href="../../src/selfhtml.css">
<link rel="shortcut icon" type="image/x-icon" href="/src/favicon.ico">
<link rel="author" title="Impressum" href="../../editorial/impressum.htm">
<link rel="contents" title="Inhaltsverzeichnis" href="../../navigation/inhalt.htm">
<link rel="index" title="Stichwortverzeichnis" href="../../navigation/stichwort.htm">
<link rel="search" title="Suche" href="../../navigation/suche/index.htm">
<link rel="help" title="Hilfe" href="../../editorial/index.htm">
<link rel="copyright" title="Urheberrecht" href="../../editorial/copyright.htm">
<link rel="top" title="SELFHTML" href="../../index.htm">
<link rel="up" title="HTML-Kopfdaten" href="index.htm">
<link rel="next" title="Durchsuchbarkeit mit Server-Kommunikation" href="durchsuchbarkeit.htm">
<link rel="prev" title="Adressbasis und Zielfensterbasis" href="basis.htm">
<link rel="first" title="Titel einer HTML-Datei" href="titel.htm">
<link rel="last" title="Durchsuchbarkeit mit Server-Kommunikation" href="durchsuchbarkeit.htm">
</head>

Regards, Franz

Kind regards,
Franz Böhmisch

[EMAIL PROTECTED]
http://www.animabit.de
GF Animabit Multimedia Software GmbH
Am Sonnenhang 22
D-94136 Thyrnau
Tel +49 (0)8501-8538
Fax +49 (0)8501-8537







Re: help with a multi process http server.

2007-01-22 Thread Andre Garzia

Dave,

I didn't want to add that burden to the scheduler because I am afraid of
denial of service; my whole quest in the field of Revolution-based
servers is trying to avoid such situations. I need to check if I can
redirect POST requests... If I can't, then I'll need to implement a
solution like you outlined here, at least for POST calls.


If I manage to build such a thing, I'll post it here on the list.

Cheers
andre

On Jan 22, 2007, at 6:18 PM, Dave Cragg wrote:


On 22 Jan 2007, at 19:25, Andre Garzia wrote:

Suppose the scheduler is running on 8080 and each server instance in the
pool is running from 8081 onwards. When the client connects to 8080, the
scheduler sends back a redirection response so that the client refreshes
to a different port (of a free instance in the pool). The problem is
that an HTTP client such as a browser will then request the favicon and
all the links in the HTML from the same port, because it re-uses data
from the connection that yielded that result to fill in missing data in
the URL. For example, if you make a link that goes to /newpage.html,
then the server will make it http://yourserver/newpage.html. If I
answered from port 8081, all links in the page will inherit that port,
and I want all the connections to come to the scheduler running on a
different port.

One approach to solve this is to parse the response and change all the
HTML responses to include the correct URLs. This is very boring and
slow, because we must cope with href, src, link, rel and all kinds of
CSS includes and stuff. What I hoped to find was some HTTP header field
that would say something like: hey, this server is actually running at
port bla bla bla, such as:

host: localhost:8080

despite the fact that that answer came through 8081. This way the whole
thing would work, and maybe we would have a web server built with Rev
that could see some real-world use...

Anyone have two cents?


This is very interesting, Andre. I wish I had your energy.

One thought.

If I understand correctly, under this system the scheduler immediately
responds to the client with a redirect to the same URL but on a
different port. Instead of using a redirect, is it not possible for the
scheduler to hand off the request directly to an available process (for
example, on localhost:8082), wait for the response, and then have the
scheduler write the response directly back to the client? This would
preserve the socket details for the client.

This would put an extra burden on the scheduler when it has to write
back large quantities of data to simultaneous requests from different
clients. But I think it should be possible to slice up the responses so
that you only write back to the client sockets in small chunks (say 4 KB
at a time). This should allow simultaneous connections to appear to work
simultaneously.

Also, is there not a problem in redirecting clients that have made a
POST request? My memory of the HTTP RFC is that redirects only use the
GET method. The above idea would get round that problem.


Cheers
Dave




Re: help with a multi process http server.

2007-01-22 Thread David Bovill

Not sure if this text gives you some arguments :)

 http://www.nightmare.com/medusa/medusa.html


Re: help with a multi process http server.

2007-01-22 Thread Dave Cragg


On 22 Jan 2007, at 20:54, Andre Garzia wrote:

I need to check if I can redirect POST requests... If I can't, then
I'll need to implement a solution like you outlined here, at least for
POST calls.



Andre, I just looked up the rfc to see if this was the case. This is  
the intro to the section describing the various redirect responses  
(301, 302, etc.):



10.3 Redirection 3xx

   This class of status code indicates that further action needs to be
   taken by the user agent in order to fulfill the request. The action
   required MAY be carried out by the user agent without interaction
   with the user if and only if the method used in the second request
   is GET or HEAD.


So it looks like POST requests can't be redirected, at least not  
without the browser asking the user for some interaction.


Cheers
Dave