Re: Webapp Getting redirected to an external IP Address

2024-06-05 Thread Owen Rubel
Are you using this with a cloud service?

This seems more like a misconfiguration in your setup. I have seen AWS
route traffic to an internal IP in this way before.

Owen Rubel
oru...@gmail.com


On Tue, Jun 4, 2024 at 6:26 PM Tom Robinson 
wrote:

> Hi Mark,
>
> On Tue, 4 Jun 2024 at 15:50, Mark Thomas  wrote:
>
> > On 04/06/2024 05:07, Tom Robinson wrote:
> > > Hi,
> > >
> > > We are running a tomcat7 application
> >
> > You do realise that support for Tomcat 7 ended on 31 March 2021, don't
> > you?
> >
>
> Yes, I do realise that tomcat7 is very old. We are running a legacy
> application not of our design.
>
> > > on our LAN which gets redirected from a private, internal IP address
> > > to an external IP address, at which point it fails. I can't find where
> > > this is happening.
> >
> > Is it an actual redirect - i.e. a 30x response? Or do you mean something
> > else?
> >
> > If a redirect, does it redirect on the first request?
> >
>
> OK, you are right, it's not a redirect (not a 30x response). I didn't think
> to open the browser's developer tools to check this until now.
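Mark's question (is it an actual 30x?) can also be checked without a browser. A minimal JDK-only sketch; the local stub server and the documentation address 203.0.113.7 are hypothetical stand-ins for the real Tomcat host:

```java
import com.sun.net.httpserver.HttpServer;
import java.net.HttpURLConnection;
import java.net.InetSocketAddress;
import java.net.URL;

public class RedirectCheck {
    public static void main(String[] args) throws Exception {
        // Stub server that issues a 302, standing in for the suspect host.
        HttpServer server = HttpServer.create(new InetSocketAddress(0), 0);
        server.createContext("/", exchange -> {
            exchange.getResponseHeaders().add("Location", "http://203.0.113.7:8443/kb");
            exchange.sendResponseHeaders(302, -1); // -1 = no response body
            exchange.close();
        });
        server.start();
        int port = server.getAddress().getPort();

        // Disable automatic redirect following so the 30x itself is visible.
        HttpURLConnection conn = (HttpURLConnection)
                new URL("http://localhost:" + port + "/").openConnection();
        conn.setInstanceFollowRedirects(false);
        System.out.println("status=" + conn.getResponseCode());
        System.out.println("location=" + conn.getHeaderField("Location"));
        server.stop(0);
    }
}
```

Pointed at the real host with redirect-following disabled, the same client code would show whether a Location header is actually being sent.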
>
> > > Where and what can I check for this redirect and how to control it or
> > > switch it off altogether.
> >
> > Tomcat doesn't do this by default.
> >
> > Tomcat 7 doesn't have the redirect valve so it won't be that.
> >
> > Are you sure that the redirect is being issued by Tomcat? Might there be
> > a reverse proxy in the mix somewhere?
> >
>
> No reverse proxies configured that I specifically know about.
>
>
> > Other than that, it would have to be in the application code somewhere.
> >
>
> In that case it must be as you say; i.e. in the code somewhere.
>
>
> > > I browse to here on our LAN:
> > >
> > > https://myinternalhost.mydomain.com.au:8443
> >
> > Check what myinternalhost.mydomain.com.au resolves to in terms of an IP
> > address.
> >
>
> Amongst other things, I administer the network, DNS and DHCP so I know that
> the name resolution is correct. I have re-checked to confirm.
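For completeness, resolution can also be checked from the JVM's own point of view, since the JVM may cache DNS results independently of the OS. A small sketch; "localhost" stands in here for myinternalhost.mydomain.com.au:

```java
import java.net.InetAddress;

public class ResolveCheck {
    public static void main(String[] args) throws Exception {
        // The host is an argument so the same check works for any name;
        // "localhost" is only a placeholder default.
        String host = args.length > 0 ? args[0] : "localhost";
        for (InetAddress addr : InetAddress.getAllByName(host)) {
            System.out.println(host + " -> " + addr.getHostAddress());
        }
    }
}
```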
>
> > Try requesting a page that won't trigger a directory redirect. Something
> > like:
> >
> > https://myinternalhost.mydomain.com.au:8443/index.html
> >
> > You may need to adjust that for your application.
> >
>
> I found this in tomcat7/webapps/index.jsp:
>
> # cat index.jsp
> <%@ taglib uri="/tags/struts-logic" prefix="logic" %>
>
> <logic:redirect ... />
>
> That's the entire file! I'm not really clued in to how that works, but it
> does look like a code-based redirect.
>
> This whole query has come about because I've been trying to secure the
> tomcat webapps with SSL. Certificate management in Java is challenging,
> since it requires yet another certificate management tool (keytool).
>
> I realise now that I've just browsed to a default webapp running on tomcat.
> Further investigation shows that the other webapps (some 17 separate
> webapps) are indeed working correctly and SSL secured. I think I just
> panicked a little seeing the 'redirect' to an external IP and got
> bogged down unnecessarily into that redirect.
>
> For example, if I browse to:
>
> https://myinternalhost.mydomain.com.au:8443/legacyapp1
>
> the webapp runs, there is no redirect and it's SSL secured. The same for
> legacyapp[2-17].
>
> I appreciate and thank you for your help.
>
> Kind regards,
> Tom
>
>
> >
> > > I end up here:
> > >
> > > https://a.b.c.d:8443/kb
> > >
> > > Where a.b.c.d is our external, ISP provided IP Address.
> > >
> > > Why is this happening and how can I fix it?
> > >
> > > *Kind regards,*
> > >
> > > *Tom Robinson*
> > > *IT Manager/System Administrator*
> > >
> >
> > -
> > To unsubscribe, e-mail: users-unsubscr...@tomcat.apache.org
> > For additional commands, e-mail: users-h...@tomcat.apache.org
> >
> >
>
> --
> *MoTeC Pty Ltd*
>
> 121 Merrindale Drive
> Croydon South 3136
> Victoria Australia
> *T: *61 3 9761 5050
> *W: *www.motec.com.au <https://www.motec.com.au/>
>
>


Re: Is ARM64 architecture officially supported ?

2020-04-17 Thread Owen Rubel
I run tomcat on Armbian https://www.armbian.com/
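To confirm which architecture a given JVM is actually running on, the standard system properties report it; a trivial sketch:

```java
public class ArchCheck {
    public static void main(String[] args) {
        // "aarch64" on ARM64 JVMs, "amd64"/"x86_64" on Intel machines.
        System.out.println("os.arch=" + System.getProperty("os.arch"));
        System.out.println("java.vm.name=" + System.getProperty("java.vm.name"));
    }
}
```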

Owen Rubel
oru...@gmail.com


On Fri, Apr 17, 2020 at 4:14 AM Emilio Fernandes <
emilio.fernande...@gmail.com> wrote:

> Hello Tomcat community!
>
> We are considering using AWS Graviton [1] based instances, which use ARM64
> processors, for our backend services.
> I've googled around and found [2] saying that Tomcat is being tested on
> the ARM64 architecture at TravisCI! This is great!
> Does this mean that Tomcat is officially supported on ARM64? I was not
> able to find any specific documentation listing which platforms are
> officially supported.
>
> Does anyone from the community have any experience with Tomcat/HTTPD on
> ARM64 in production?
>
> Thanks,
> Emilio
>
> 1. https://aws.amazon.com/ec2/graviton/
> 2. https://tomcat.apache.org/ci.html#TravisCI
>


Re: Error parsing HTTP request header

2019-12-07 Thread Owen Rubel
Well, this isn't an issue with Tomcat itself. I'm using an embedded version
of Tomcat in the BeAPI API Toolkit and this does not occur, mainly because
it automates 100% of the API functionality. I would say it is most likely
your application.

How are you testing your app? You should be doing integration tests; they
would show the discrepancy between what is sent and what comes back in the
response data.

Owen Rubel
oru...@gmail.com


On Sat, Dec 7, 2019 at 3:11 AM thulasiram k  wrote:

> Hi Chris,
>
> Thanks for trying to help here. As suggested, I have checked the access
> logs and found the below error line whenever the above exception occurs in
> catalina.out.
>
>  "GET null null" 400 -
>
>
> Thanks
> Ram
>
>
> On Wed, Dec 4, 2019 at 7:41 PM Christopher Schultz <
> ch...@christopherschultz.net> wrote:
>
> > -BEGIN PGP SIGNED MESSAGE-
> > Hash: SHA256
> >
> > Ram,
> >
> > On 12/4/19 06:02, thulasiram k wrote:
> > > Hi,
> > >
> > > we recently upgraded our Tomcat from 7.0.91 to 7.0.94 on the Windows
> > > platform. Post upgrade we are seeing the below exception in the logs
> > > very frequently. Can you please suggest how to avoid this?
> > >
> > > Dec 02, 2019 1:34:09 PM org.apache.coyote.http11.AbstractHttp11Processor process
> > > INFO: Error parsing HTTP request header
> > > Note: further occurrences of HTTP header parsing errors will be logged at DEBUG level.
> > > java.lang.IllegalArgumentException: Invalid character found in the request target. The valid characters are defined in RFC 7230 and RFC 3986
> > >     at org.apache.coyote.http11.InternalInputBuffer.parseRequestLine(InternalInputBuffer.java:199)
> > >     at org.apache.coyote.http11.AbstractHttp11Processor.process(AbstractHttp11Processor.java:1050)
> > >     at org.apache.coyote.AbstractProtocol$AbstractConnectionHandler.process(AbstractProtocol.java:637)
> > >     at org.apache.tomcat.util.net.JIoEndpoint$SocketProcessor.run(JIoEndpoint.java:319)
> > >     at java.base/java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128)
> > >     at java.base/java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628)
> > >     at org.apache.tomcat.util.threads.TaskThread$WrappingRunnable.run(TaskThread.java:61)
> > >     at java.base/java.lang.Thread.run(Thread.java:830)
> > >
> > > And will this affect anything? I ask because I didn't receive any
> > > complaints from users.
> >
> > No spec-compliant client should be sending anything in an HTTP header
> > that causes a problem. Unfortunately, not all clients (browsers) are
> > actually spec-compliant, and it's very easy to write an application
> > that also violates the spec.
> >
> > Each version of Tomcat becomes more and more strict in order to either
> > improve security or improve the web ecosystem in general or both.
> >
> > When you get this error, do you see an entry in your access log? Can
> > you correlate the error with what you see in the access log? You
> > should see a 400 status code for this request. If you aren't sure
> > which character is illegal, post the whole line from the request
> > (perhaps sanitized for hostname, IP address, any secrets it may -- but
> > shouldn't -- contain) and we'll take a look.
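As background to the "invalid character" error: characters such as `{`, `}` and `"` are not allowed raw in a request target, so a spec-compliant client must percent-encode them. A small sketch of what the encoded target would look like (the /search?q= path is hypothetical):

```java
import java.net.URLEncoder;
import java.nio.charset.StandardCharsets;

public class TargetEncode {
    public static void main(String[] args) throws Exception {
        // '{', '"' and '}' are among the characters RFC 3986 forbids
        // unencoded in a request target; clients sending raw JSON in a
        // query string trigger exactly this Tomcat error.
        String rawQueryValue = "{\"id\":1}";
        String encoded = URLEncoder.encode(rawQueryValue, StandardCharsets.UTF_8.name());
        System.out.println("/search?q=" + encoded);
    }
}
```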
> >
> > A few years ago, Tomcat changed the way it performs Cookie parsing to
> > be much more strict and that broke a bunch of applications, including
> > my own. It turned out that we were writing a cookie value that was
> > actually not allowed, but Tomcat was allowing it. Upgrading would have
> > caused my application to break in two places: when writing that Cookie
> > value and also when reading it. The solution was to fix the
> > application to encode those cookie values so that they were HTTP-safe.
> > (In our case, we decided that base64 encoding would be best.)
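The encode-before-write, decode-after-read pattern Christopher describes can be sketched with the JDK's Base64 API (the sample cookie value is invented):

```java
import java.nio.charset.StandardCharsets;
import java.util.Base64;

public class CookieSafe {
    public static void main(String[] args) {
        // A value containing ';' and '"', which are not allowed in an
        // RFC 6265 cookie-value.
        String raw = "user=admin; theme=\"dark\"";

        // Encode before writing the cookie, decode after reading it back.
        String encoded = Base64.getUrlEncoder()
                .encodeToString(raw.getBytes(StandardCharsets.UTF_8));
        String decoded = new String(Base64.getUrlDecoder().decode(encoded),
                StandardCharsets.UTF_8);

        System.out.println("encoded=" + encoded);
        System.out.println("roundtrip=" + decoded.equals(raw));
    }
}
```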
> >
> > I'm not saying that this problem is cookie-related. It's just an
> > example of a situation where the application was the problem and we
> > didn't notice until Tomcat became more strict. Our application is
> > actually *safer* now because it is guaranteed to work with all clients
> > which strictly adhere to the specifications, while before it was
> > relying on sloppy interpretations

Re: Initiating httpservletrequest from inside Tomcat / TomEE

2019-05-06 Thread Owen Rubel
I didn't think that TomEE was Tomcat. Wasn't it its own thing? I thought
TomEE was a separate project, maintained by a team up here in Seattle,
whereas Tomcat is an Apache project.

Owen Rubel
oru...@gmail.com


On Mon, May 6, 2019 at 9:16 AM Paul Carter-Brown
 wrote:

> Hi John,
>
> See original request. It's pretty much a Kafka/Servlet proxy/gateway:
>
> I'm trying to design a Kafka consumer and producer that will run inside the
> tomcat jvm and pick up messages off a Kafka topic and translate them into a
> servlet request and pass it through tomcat and then when the response is
> complete then translate it into a Kafka message and put it onto another
> topic as a reply. This way I can reuse our existing jax-rs rest services
> and expose them as an async api over Kafka. The idea is to make the Kafka
> messages similar to http in that they would consist of headers and a body.
> The body would be json.
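The HTTP-in-Kafka envelope described above could be modeled along these lines (purely a sketch; the header names are invented, not an existing API):

```java
import java.util.LinkedHashMap;
import java.util.Map;

public class KafkaHttpEnvelope {
    // Mirrors an HTTP request: headers plus a JSON body, carried as the
    // headers and value of a Kafka record.
    final Map<String, String> headers = new LinkedHashMap<>();
    final String jsonBody;

    KafkaHttpEnvelope(String method, String path, String jsonBody) {
        headers.put("x-http-method", method);
        headers.put("x-http-path", path);
        headers.put("content-type", "application/json");
        this.jsonBody = jsonBody;
    }

    public static void main(String[] args) {
        KafkaHttpEnvelope req = new KafkaHttpEnvelope("GET", "/example", "{}");
        System.out.println(req.headers.get("x-http-method") + " "
                + req.headers.get("x-http-path"));
        System.out.println("body=" + req.jsonBody);
    }
}
```

A reply envelope would carry a status header and the response JSON the same way.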
>
>
> On Mon, May 6, 2019 at 6:13 PM John Dale  wrote:
>
> > You could try debugging the tomcat code and find out how, right after
> > it parses the TCP request, it invokes the servlet.  You can then
> > create your own harness for tomcat code after initializing the
> > appropriate context for the request to tomcat.  I don't know off hand
> > where in the tomcat code this cut point can be found.
> >
> > Is this a performance issue, or are you building a proxy?
> >
> > What is the problem you're trying to solve?
> >
> > On 5/6/19, Paul Carter-Brown  wrote:
> > > Yea, but the issue is that only works when calling in the context of a
> > > current servlet call.
> > >
> > > Here is the kind of problem I want to solve:
> > >
> > > @WebServlet(name = "MyExample", urlPatterns = {"/example"}, loadOnStartup = 1)
> > > public class Example extends HttpServlet {
> > >
> > >     @PersistenceContext
> > >     private EntityManager em;
> > >
> > >     @Override
> > >     public void init(ServletConfig config) {
> > >         Thread t = new Thread(() -> {
> > >             while (true) {
> > >                 try {
> > >                     // Do a GET to /example/ and get the response without
> > >                     // going out on localhost and back in.
> > >                     // We can't just call doGet as we want the request to
> > >                     // flow through the servlet filters, do the
> > >                     // entitymanager injection etc.
> > >                     Thread.sleep(1);
> > >                 } catch (Exception e) {
> > >                 }
> > >             }
> > >         });
> > >         t.start();
> > >     }
> > >
> > >     @Override
> > >     protected void doGet(HttpServletRequest req, HttpServletResponse resp)
> > >             throws ServletException, IOException {
> > >         // do stuff like use em
> > >         resp.setStatus(200);
> > >         resp.getWriter().write("Hello World");
> > >     }
> > > }
> > >
> > >
> > >
> > >
> > > On Mon, May 6, 2019 at 5:35 PM John Dale  wrote:
> > >
> > >> For reference, I did find this after searching "calling a servlet
> > >> programmatically":
> > >> https://docs.oracle.com/cd/E19146-01/819-2634/abxbn/index.html
> > >>
> > >> On 5/6/19, Paul Carter-Brown  wrote:
> > >> > I think we are completely missing each other. Forget sockets - that
> > was
> > >> > just an example. I have code running in a Tomcat App server which is
> > >> > not
> > >> > managed by Tomcat and is not initiated by anything within Tomcat.
> That
> > >> code
> > >> > now wants to call a servlet hosted in that very same JVM. Any way to
> > do
> > >> > that without going out and back in on TCP?
> > >> >
> > >> >
> > >> > On Mon, May 6, 2019 at 5:14 PM John Dale  wrote:
> > >> >
> > >> >> Sockets are an implementation of TCP/UDP inherently.
> > >> >>
> > >> >> Perhaps a mountaintop signal fire?
> > >> >>
> > >> >> ;)
> > >> >>
> > >> >> John
> > >> >>
> > >> >>
> > >> >> On 5/6/19, Paul Carter-Brown  wrote:
> > >> >> > lol on the Semaphore Telegraph,
> > >> >> >
> > >> >> > I

Re: Per EndPoint Threads???

2017-08-15 Thread Owen Rubel
Owen Rubel
oru...@gmail.com

On Tue, Aug 15, 2017 at 8:23 AM, Christopher Schultz <
ch...@christopherschultz.net> wrote:

> Owen,
>
> On 8/13/17 10:46 AM, Owen Rubel wrote:
> > Owen Rubel oru...@gmail.com
> >
> > On Sun, Aug 13, 2017 at 5:57 AM, Christopher Schultz <
> > ch...@christopherschultz.net> wrote:
> >
> > Owen,
> >
> > On 8/12/17 12:47 PM, Owen Rubel wrote:
> >>>> What I am talking about is something that improves
> >>>> communication as we notice that a communication channel needs
> >>>> more resources. Not caching what is communicated... improving
> >>>> the CHANNEL for communicating the resource (whatever it may
> >>>> be).
> >
> > If the channel is an HTTP connection (or TCP; the application
> > protocol isn't terribly relevant), then you are limited by the
> > following:
> >
> > 1. Network bandwidth 2. Available threads (to service a particular
> > request) 3. Hardware resources on the server
> > (CPU/memory/disk/etc.)
> >
> > Let's ignore 1 and 3 for now, since you are primarily concerned
> > with concurrency, and concurrency is useless if the other resources
> > are constrained or otherwise limiting the equation.
> >
> > Let's say we had "per endpoint" thread pools, so that e.g. /create
> > had its own thread pool, and /show had another one, etc. What would
> > that buy us?
> >
> > (Let's ignore for now the fact that one set of threads must always
> > be used to decode the request to decide where it's going, like
> > /create or /show.)
> >
> > If we have a limited total number of threads (e.g. 10), then we
> > could "reserve" some of them so that we could always have 2 threads
> > for /create even if all the other threads in the system (the other
> > 8) were being used for something else. If we had 2 threads for
> > /create and 2 threads for /show, then only 6 would remain for e.g.
> > /edit or /delete. So if 6 threads were already being used for /edit
> > or /delete, the 7th incoming request would be queued, but anyone
> > making a request for /show or /create would (if a thread in those
> > pools is available) be serviced immediately.
> >
> > I can see some utility in this ability, because it would allow the
> > container to ensure that some resources were never starved... or,
> > rather, that they have some priority over certain other services.
> > In other words, the service could enjoy guaranteed provisioning
> > for certain endpoints.
> >
> > As it stands, Tomcat (and, I would venture a guess, most if not
> > all other containers) implements a fair request pipeline where
> > requests are (at least roughly) serviced in the order in which they
> > are received. Rather than guaranteeing provisioning for a
> > particular endpoint, the closest thing that could be implemented
> > (at the application level) would be a
> > resource-availability-limiting mechanism, such as counting the
> > number of in-flight requests and rejecting those which exceed some
> > threshold with e.g. a 503 response.
> >
> > Unfortunately, that doesn't actually prioritize some requests, it
> > merely rejects others in order to attempt to prioritize those
> > others. It also starves endpoints even when there is no reason to
> > do so (e.g. in the 10-thread scenario, if all 4 /show and /create
> > threads are idle, but 6 requests are already in process for the
> > other endpoints, a 7th request for those other endpoints will be
> > rejected).
> >
> > I believe that per-endpoint provisioning is a possibility, but I
> > don't think that the potential gains are worth the certain
> > complexity of the system required to implement it.
> >
> > There are other ways to handle heterogeneous service requests in a
> > way that doesn't starve one type of request in favor of another.
> > One obvious solution is horizontal scaling with a load-balancer. An
> > LB can be used to implement a sort of guaranteed-provisioning for
> > certain endpoints by providing more back-end servers for certain
> > endpoints. If you want to make sure that /show can be called by any
> > client at any time, then make sure you spin-up 1000 /show servers
> > and register them with the load-balancer. You can survive with only
> > maybe 10 nodes servicing /delete requests; others will either wait
> > in a queue or receive a 503 from the lb.
> >
> > For my money, I'd maximize the number of threads available 

Re: Per EndPoint Threads???

2017-08-13 Thread Owen Rubel
Owen Rubel
oru...@gmail.com

On Sun, Aug 13, 2017 at 5:57 AM, Christopher Schultz <
ch...@christopherschultz.net> wrote:

> Owen,
>
> On 8/12/17 12:47 PM, Owen Rubel wrote:
> > What I am talking about is something that improves communication as
> > we notice that a communication channel needs more resources. Not
> > caching what is communicated... improving the CHANNEL for
> > communicating the resource (whatever it may be).
>
> If the channel is an HTTP connection (or TCP; the application protocol
> isn't terribly relevant), then you are limited by the following:
>
> 1. Network bandwidth
> 2. Available threads (to service a particular request)
> 3. Hardware resources on the server (CPU/memory/disk/etc.)
>
> Let's ignore 1 and 3 for now, since you are primarily concerned with
> concurrency, and concurrency is useless if the other resources are
> constrained or otherwise limiting the equation.
>
> Let's say we had "per endpoint" thread pools, so that e.g. /create had
> its own thread pool, and /show had another one, etc. What would that
> buy us?
>
> (Let's ignore for now the fact that one set of threads must always be
> used to decode the request to decide where it's going, like /create or
> /show.)
>
> If we have a limited total number of threads (e.g. 10), then we could
> "reserve" some of them so that we could always have 2 threads for
> /create even if all the other threads in the system (the other 8) were
> being used for something else. If we had 2 threads for /create and 2
> threads for /show, then only 6 would remain for e.g. /edit or /delete.
> So if 6 threads were already being used for /edit or /delete, the 7th
> incoming request would be queued, but anyone making a request for
> /show or /create would (if a thread in those pools is available) be
> serviced immediately.
>
> I can see some utility in this ability, because it would allow the
> container to ensure that some resources were never starved... or,
> rather, that they have some priority over certain other services. In
> other words, the service could enjoy guaranteed provisioning for
> certain endpoints.
>
> As it stands, Tomcat (and, I would venture a guess, most if not all
> other containers) implements a fair request pipeline where requests
> are (at least roughly) serviced in the order in which they are
> received. Rather than guaranteeing provisioning for a particular
> endpoint, the closest thing that could be implemented (at the
> application level) would be a resource-availability-limiting
> mechanism, such as counting the number of in-flight requests and
> rejecting those which exceed some threshold with e.g. a 503 response.
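The in-flight-counting mechanism described above could be sketched at the application level with a Semaphore (illustrative only, not a Tomcat feature; the cap of 2 is arbitrary):

```java
import java.util.concurrent.Semaphore;

public class EndpointLimiter {
    // Cap on concurrent in-flight requests for one endpoint (value is illustrative).
    private final Semaphore inFlight = new Semaphore(2);

    /** Returns an HTTP-style status: 200 if admitted, 503 if over the cap. */
    public int handle(Runnable work) {
        if (!inFlight.tryAcquire()) {
            return 503; // shed load instead of queueing
        }
        try {
            work.run();
            return 200;
        } finally {
            inFlight.release();
        }
    }

    public static void main(String[] args) throws Exception {
        EndpointLimiter limiter = new EndpointLimiter();
        limiter.inFlight.acquire(2); // simulate two requests already in flight
        System.out.println("third -> " + limiter.handle(() -> {}));
        limiter.inFlight.release(2);
        System.out.println("after release -> " + limiter.handle(() -> {}));
    }
}
```

As the email notes, this rejects the excess rather than prioritizing anything: the third request is shed even though the server might have idle capacity elsewhere.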
>
> Unfortunately, that doesn't actually prioritize some requests, it
> merely rejects others in order to attempt to prioritize those others.
> It also starves endpoints even when there is no reason to do so (e.g.
> in the 10-thread scenario, if all 4 /show and /create threads are
> idle, but 6 requests are already in process for the other endpoints, a
> 7th request for those other endpoints will be rejected).
>
> I believe that per-endpoint provisioning is a possibility, but I don't
> think that the potential gains are worth the certain complexity of the
> system required to implement it.
>
> There are other ways to handle heterogeneous service requests in a way
> that doesn't starve one type of request in favor of another. One
> obvious solution is horizontal scaling with a load-balancer. An LB can
> be used to implement a sort of guaranteed-provisioning for certain
> endpoints by providing more back-end servers for certain endpoints. If
> you want to make sure that /show can be called by any client at any
> time, then make sure you spin-up 1000 /show servers and register them
> with the load-balancer. You can survive with only maybe 10 nodes
> servicing /delete requests; others will either wait in a queue or
> receive a 503 from the lb.
>
> For my money, I'd maximize the number of threads available for all
> requests (whether within a single server, or across a large cluster)
> and not require that they be available for any particular endpoint.
> Once you have to depart from a single server, you MUST have something
> like a load-balancer involved, and therefore the above solution
> becomes not only more practical but also more powerful.
>
> Since relying on a one-box-wonder to run a high-availability web
> service isn't practical, provisioning is necessarily above the
> cluster-node level, and so the problem has effectively moved from the
> app server to the load-balancer (or reverse proxy). I believe the
> application server is an inapprop

Re: Per EndPoint Threads???

2017-08-12 Thread Owen Rubel
On Sat, Aug 12, 2017 at 3:13 PM, Christopher Schultz <
ch...@christopherschultz.net> wrote:

> Owen,
>
> On 8/12/17 12:47 PM, Owen Rubel wrote:
> > On Sat, Aug 12, 2017 at 9:36 AM, Christopher Schultz <
> > ch...@christopherschultz.net> wrote:
> >
> > Owen,
> >
> > On 8/12/17 11:21 AM, Owen Rubel wrote:
> >>>> On Sat, Aug 12, 2017 at 1:19 AM, Mark Thomas
> >>>> <ma...@apache.org> wrote:
> >>>>
> >>>>> On 12/08/17 06:00, Christopher Schultz wrote:
> >>>>>> Owen,
> >>>>>>
> >>>>>> Please do not top-post. I have re-ordered your post to
> >>>>>> be bottom-post.
> >>>>>>
> >>>>>> On 8/11/17 10:12 PM, Owen Rubel wrote:
> >>>>>>> On Fri, Aug 11, 2017 at 5:58 PM,
> >>>>>>> <christop...@baus.net> wrote:
> >>>>>>
> >>>>>>>>> Hi All,
> >>>>>>>>>
> >>>>>>>>> I'm looking for a way (or a tool) in Tomcat to
> >>>>>>>>> associate threads with endpoints.
> >>>>>>>>
> >>>>>>>> It isn't clear to me why this would be necessary.
> >>>>>>>> Threads should be allocated on demand to individual
> >>>>>>>> requests. If one route sees more traffic, then it
> >>>>>>>> should automatically be allocated more threads. This
> >>>>>>>> could starve some requests if the maximum number of
> >>>>>>>> threads had been allocated to a lesser-used route,
> >>>>>>>> while available threads went unused for a more commonly
> >>>>>>>> used route.
> >>>>>>
> >>>>>>> Absolutely but it could ramp up more threads as
> >>>>>>> needed.
> >>>>>>
> >>>>>>> I base the logic on neurons and neurotransmitters.
> >>>>>>> When neurons talk to each other, they send back
> >>>>>>> neurotransmitters to reinforce that pathway.
> >>>>>>
> >>>>>>> If we could do the same through threads by adding
> >>>>>>> additional threads for endpoints that receive more
> >>>>>>> traffic vs those which do not, it would enforce better
> >>>>>>> and faster communication on those paths. The current
> >>>>>>> way Tomcat does it is not dynamic and it just applies
> >>>>>>> to ALL pathways equally which is not efficient.
> >>>>>> How would this improve efficiency at all?
> >>>>>>
> >>>>>> There is nothing inherently "showy" or "edity" about a
> >>>>>> particular thread; each request-processing thread is
> >>>>>> indistinguishable from any other. I don't believe there
> >>>>>> is a way to improve the situation even if "per-endpoint"
> >>>>>> (whatever that would mean) threads were a possibility.
> >>>>>>
> >>>>>> What would you attach to a thread that would make it any
> >>>>>> better at editing records? Or deleting them?
> >>>>>
> >>>>> And I'll add that the whole original proposal ignores a
> >>>>> number of rather fundamental points about how Servlet
> >>>>> containers (and web servers in general) work. To name a
> >>>>> few:
> >>>>>
> >>>>> - Until the request has been parsed (which requires a
> >>>>> thread) Tomcat doesn't know which Servlet (endpoint) the
> >>>>> request is destined for. Switching processing to a
> >>>>> different thread at that point would add significant
> >>>>> overhead for no benefit.
> >>>>>
> >>>>> - Even after parsing, the actual Servlet that processes
> >>>>> the request (if any) can change during processing (e.g. a
> >>>>> Filter that conditionally forwards to a different Servlet,
> >>>>> authentication, etc.)
> >>>>>
> >>>>> There is nothing about a endpoint specific thread that
> >>>>> would allow it to process a req

Re: Per EndPoint Threads???

2017-08-12 Thread Owen Rubel
On Sat, Aug 12, 2017 at 9:36 AM, Christopher Schultz <
ch...@christopherschultz.net> wrote:

> Owen,
>
> On 8/12/17 11:21 AM, Owen Rubel wrote:
> > On Sat, Aug 12, 2017 at 1:19 AM, Mark Thomas <ma...@apache.org>
> > wrote:
> >
> >> On 12/08/17 06:00, Christopher Schultz wrote:
> >>> Owen,
> >>>
> >>> Please do not top-post. I have re-ordered your post to be
> >>> bottom-post.
> >>>
> >>> On 8/11/17 10:12 PM, Owen Rubel wrote:
> >>>> On Fri, Aug 11, 2017 at 5:58 PM, <christop...@baus.net>
> >>>> wrote:
> >>>
> >>>>>> Hi All,
> >>>>>>
> >>>>>> I'm looking for a way (or a tool) in Tomcat to associate
> >>>>>> threads with endpoints.
> >>>>>
> >>>>> It isn't clear to me why this would be necessary. Threads
> >>>>> should be allocated on demand to individual requests. If
> >>>>> one route sees more traffic, then it should automatically
> >>>>> be allocated more threads. This could starve some requests
> >>>>> if the maximum number of threads had been allocated to a
> >>>>> lesser-used route, while available threads went unused for
> >>>>> a more commonly used route.
> >>>
> >>>> Absolutely but it could ramp up more threads as needed.
> >>>
> >>>> I base the logic on neurons and neurotransmitters. When
> >>>> neurons talk to each other, they send back
> >>>> neurotransmitters to reinforce that pathway.
> >>>
> >>>> If we could do the same through threads by adding additional
> >>>> threads for endpoints that receive more traffic vs those
> >>>> which do not, it would enforce better and faster
> >>>> communication on those paths.> The current way Tomcat does it
> >>>> is not dynamic and it just applies to ALL pathways equally
> >>>> which is not efficient.
> >>> How would this improve efficiency at all?
> >>>
> >>> There is nothing inherently "showy" or "edity" about a
> >>> particular thread; each request-processing thread is
> >>> indistinguishable from any other. I don't believe there is a
> >>> way to improve the situation even if "per-endpoint" (whatever
> >>> that would mean) threads were a possibility.
> >>>
> >>> What would you attach to a thread that would make it any better
> >>> at editing records? Or deleting them?
> >>
> >> And I'll add that the whole original proposal ignores a number of
> >> rather fundamental points about how Servlet containers (and web
> >> servers in general) work. To name a few:
> >>
> >> - Until the request has been parsed (which requires a thread)
> >> Tomcat doesn't know which Servlet (endpoint) the request is
> >> destined for. Switching processing to a different thread at that
> >> point would add significant overhead for no benefit.
> >>
> >> - Even after parsing, the actual Servlet that processes the
> >> request (if any) can change during processing (e.g. a Filter that
> >> conditionally forwards to a different Servlet, authentication,
> >> etc.)
> >>
> >> There is nothing about a endpoint specific thread that would
> >> allow it to process a request more efficiently than a general
> >> thread.
> >>
> >> Any per-endpoint thread-pool solution will require the
> >> additional overhead to switch processing from the general parsing
> >> thread to the endpoint specific thread. This additional cost
> >> comes with zero benefits hence it will always be less efficient.
> >>
> >> In short, there is no way pre-allocating threads to particular
> >> endpoints can improve performance compared to just adding the
> >> same number of additional threads to the general thread pool.
>
> > Ah, OK, thank you for the very concise answer. I am chasing a pipe
> > dream I guess. Maybe there is another way to get this kind of benefit.
> The answer is caching, and that can be done at many levels, but the
> thread-level makes the least sense due to the reasons Mark outlined above.
>
> - -chris

Re: Per EndPoint Threads???

2017-08-12 Thread Owen Rubel
On Sat, Aug 12, 2017 at 9:36 AM, Christopher Schultz <
ch...@christopherschultz.net> wrote:

> -BEGIN PGP SIGNED MESSAGE-
> Hash: SHA256
>
> Owen,
>
> On 8/12/17 11:21 AM, Owen Rubel wrote:
> > On Sat, Aug 12, 2017 at 1:19 AM, Mark Thomas <ma...@apache.org>
> > wrote:
> >
> >> On 12/08/17 06:00, Christopher Schultz wrote:
> >>> Owen,
> >>>
> >>> Please do not top-post. I have re-ordered your post to be
> >>> bottom-post.
> >>>
> >>> On 8/11/17 10:12 PM, Owen Rubel wrote:
> >>>> On Fri, Aug 11, 2017 at 5:58 PM, <christop...@baus.net>
> >>>> wrote:
> >>>
> >>>>>> Hi All,
> >>>>>>
> >>>>>> I'm looking for a way (or a tool) in Tomcat to associate
> >>>>>> threads with endpoints.
> >>>>>
> >>>>> It isn't clear to me why this would be necessary. Threads
> >>>>> should be allocated on demand to individual requests. If
> >>>>> one route sees more traffic, then it should automatically
> >>>>> be allocated more threads. This could starve some requests
> >>>>> if the maximum number of threads had been allocated to a
> >>>>> lessor used route, while available threads went unused for
> >>>>> more commonly used route.
> >>>
> >>>> Absolutely but it could ramp up more threads as needed.
> >>>
> >>>> I base the logic on neuron and neuralTransmitters. When
> >>>> neurons talk to each other, they send back neural
> >>>> transmitters to enforce that pathway.
> >>>
> >>>> If we could do the same through threads by adding additional
> >>>> threads for endpoints that receive more traffic vs those
> >>>> which do not, it would reinforce better and faster
> >>>> communication on those paths.
> >>>> The current way Tomcat does it
> >>>> is not dynamic and it just applies to ALL pathways equally,
> >>>> which is not efficient.
> >>> How would this improve efficiency at all?
> >>>
> >>> There is nothing inherently "showy" or "edity" about a
> >>> particular thread; each request-processing thread is
> >>> indistinguishable from any other. I don't believe there is a
> >>> way to improve the situation even if "per-endpoint" (whatever
> >>> that would mean) threads were a possibility.
> >>>
> >>> What would you attach to a thread that would make it any better
> >>> at editing records? Or deleting them?
> >>
> >> And I'll add that the whole original proposal ignores a number of
> >> rather fundamental points about how Servlet containers (and web
> >> servers in general) work. To name a few:
> >>
> >> - Until the request has been parsed (which requires a thread)
> >> Tomcat doesn't know which Servlet (endpoint) the request is
> >> destined for. Switching processing to a different thread at that
> >> point would add significant overhead for no benefit.
> >>
> >> - Even after parsing, the actual Servlet that processes the
> >> request (if any) can change during processing (e.g. a Filter that
> >> conditionally forwards to a different Servlet, authentication,
> >> etc.)
> >>
> >> There is nothing about an endpoint-specific thread that would
> >> allow it to process a request more efficiently than a general
> >> thread.
> >>
> >> Any per-endpoint thread-pool solution will require the
> >> additional overhead to switch processing from the general parsing
> >> thread to the endpoint specific thread. This additional cost
> >> comes with zero benefits hence it will always be less efficient.
> >>
> >> In short, there is no way pre-allocating threads to particular
> >> endpoints can improve performance compared to just adding the
> >> same number of additional threads to the general thread pool.
>
> > Ah, OK, thank you for the very concise answer. I am chasing a pipe
> > dream, I guess. Maybe there is another way to get this kind of benefit.
> The answer is caching, and that can be done at many levels, but the
> thread-level makes the least sense due to the reasons Mark outlined
> above.
>
> - -chris
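The caching Chris recommends can live at the application layer. A minimal sketch of the idea (the `ResponseCache` class and key names are hypothetical, not from this thread): cache the rendered body of a hot, read-mostly endpoint so repeat requests never reach the expensive code path at all.

```java
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;
import java.util.function.Function;

public class ResponseCache {
    private final Map<String, String> cache = new ConcurrentHashMap<>();

    // Returns the cached body for a key, computing (and storing) it on first use.
    public String get(String key, Function<String, String> expensiveRender) {
        return cache.computeIfAbsent(key, expensiveRender);
    }

    public static void main(String[] args) {
        ResponseCache c = new ResponseCache();
        // First call renders; the second is served from memory and never
        // invokes its render function at all.
        String a = c.get("/v0.1/user/show?id=1", k -> "rendered:" + k);
        String b = c.get("/v0.1/user/show?id=1",
                k -> { throw new IllegalStateException("should not re-render"); });
        System.out.println(a.equals(b)); // prints "true"
    }
}
```

This buys the speed-up per hot path that per-endpoint threads were hoped to provide, without touching the thread pool; a production cache would also need invalidation and a size bound.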

Re: Per EndPoint Threads???

2017-08-12 Thread Owen Rubel
Ah, OK, thank you for the very concise answer. I am chasing a pipe dream, I
guess. Maybe there is another way to get this kind of benefit.

Thanks again for your answer.

Owen Rubel
oru...@gmail.com

On Sat, Aug 12, 2017 at 1:19 AM, Mark Thomas <ma...@apache.org> wrote:

> On 12/08/17 06:00, Christopher Schultz wrote:
> > Owen,
> >
> > Please do not top-post. I have re-ordered your post to be bottom-post.
> >
> > On 8/11/17 10:12 PM, Owen Rubel wrote:
> >> On Fri, Aug 11, 2017 at 5:58 PM, <christop...@baus.net> wrote:
> >
> >>>> Hi All,
> >>>>
> >>>> I'm looking for a way (or a tool) in Tomcat to associate
> >>>> threads with endpoints.
> >>>
> >>> It isn't clear to me why this would be necessary. Threads should
> >>> be allocated on demand to individual requests. If one route sees
> >>> more traffic, then it should automatically be allocated more
> >>> threads. This could starve some requests if the maximum number of
> >>> threads had been allocated to a lesser-used route, while
> >>> available threads went unused for a more commonly used route.
> >
> >> Absolutely but it could ramp up more threads as needed.
> >
> >> I base the logic on neurons and neurotransmitters. When neurons
> >> talk to each other, they send back neurotransmitters to reinforce
> >> that pathway.
> >
> >> If we could do the same through threads by adding additional
> >> threads for endpoints that receive more traffic vs those which do
> >> not, it would reinforce better and faster communication on those
> >> paths.
> >> The current way Tomcat does it is not dynamic and it just
> >> applies to ALL pathways equally, which is not efficient.
> > How would this improve efficiency at all?
> >
> > There is nothing inherently "showy" or "edity" about a particular
> > thread; each request-processing thread is indistinguishable from any
> > other. I don't believe there is a way to improve the situation even if
> > "per-endpoint" (whatever that would mean) threads were a possibility.
> >
> > What would you attach to a thread that would make it any better at
> > editing records? Or deleting them?
>
> And I'll add that the whole original proposal ignores a number of rather
> fundamental points about how Servlet containers (and web servers in
> general) work. To name a few:
>
> - Until the request has been parsed (which requires a thread) Tomcat
> doesn't know which Servlet (endpoint) the request is destined for.
> Switching processing to a different thread at that point would add
> significant overhead for no benefit.
>
> - Even after parsing, the actual Servlet that processes the request (if
> any) can change during processing (e.g. a Filter that conditionally
> forwards to a different Servlet, authentication, etc.)
>
> There is nothing about an endpoint-specific thread that would allow it to
> process a request more efficiently than a general thread.
>
> Any per-endpoint thread-pool solution will require the additional
> overhead to switch processing from the general parsing thread to the
> endpoint specific thread. This additional cost comes with zero benefits
> hence it will always be less efficient.
>
> In short, there is no way pre-allocating threads to particular endpoints
> can improve performance compared to just adding the same number of
> additional threads to the general thread pool.
>
> Mark
>
> -
> To unsubscribe, e-mail: users-unsubscr...@tomcat.apache.org
> For additional commands, e-mail: users-h...@tomcat.apache.org
>
>
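Mark's hand-off argument can be made concrete with plain `java.util.concurrent` executors. This is a toy model, not Tomcat internals: both variants produce the identical result, but the per-endpoint variant pays a second queue hop after parsing.

```java
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.Future;

public class HandoffSketch {
    static String parse(String raw)   { return raw.split(" ")[1]; }  // request line -> target path
    static String handle(String path) { return "handled:" + path; }

    public static void main(String[] args) throws Exception {
        ExecutorService general = Executors.newFixedThreadPool(4);     // parses every request
        ExecutorService perEndpoint = Executors.newFixedThreadPool(4); // hypothetical endpoint pool

        // General pool only: parse and handle on the same thread (one hop).
        Future<String> direct = general.submit(() -> handle(parse("GET /v0.1/user/show")));

        // Per-endpoint variant: parse on the general pool, then pay an extra
        // hand-off to transfer the request to the endpoint pool.
        Future<String> handedOff = general.submit(() -> {
            String path = parse("GET /v0.1/user/show");          // parsed on the general pool
            return perEndpoint.submit(() -> handle(path)).get(); // extra hand-off before handling
        });

        // Identical output; the second path just did more queueing work.
        System.out.println(direct.get().equals(handedOff.get())); // prints "true"
        general.shutdown();
        perEndpoint.shutdown();
    }
}
```

The extra `submit(...).get()` is pure overhead (queue insertion, context switch, wake-up), which is the "zero benefits" cost Mark describes.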


Re: Per EndPoint Threads???

2017-08-11 Thread Owen Rubel
Absolutely but it could ramp up more threads as needed.

I base the logic on neurons and neurotransmitters. When neurons talk to
each other, they send back neurotransmitters to reinforce that pathway.

If we could do the same through threads by adding additional threads for
endpoints that receive more traffic vs those which do not, it would reinforce
better and faster communication on those paths.

The current way Tomcat does it is not dynamic and it just applies to ALL
pathways equally, which is not efficient.


Owen Rubel
oru...@gmail.com

On Fri, Aug 11, 2017 at 5:58 PM, <christop...@baus.net> wrote:

> > Hi All,
> >
> > I'm looking for a way (or a tool) in Tomcat to associate threads with
> > endpoints.
>
> It isn't clear to me why this would be necessary. Threads should be
> allocated on demand to individual requests. If one route sees more
> traffic, then it should automatically be allocated more threads. This
> could starve some requests if the maximum number of threads had been
> allocated to a lesser-used route, while available threads went unused
> for a more commonly used route.
>
>
>
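The on-demand allocation described in the quoted reply is exactly what a standard executor already does. A small sketch with `java.util.concurrent` (a plain JDK pool, not Tomcat's own implementation): the pool starts empty and grows a thread per concurrent request, so a busy route automatically gets more threads simply by submitting more work.

```java
import java.util.concurrent.CountDownLatch;
import java.util.concurrent.SynchronousQueue;
import java.util.concurrent.ThreadPoolExecutor;
import java.util.concurrent.TimeUnit;

public class OnDemandPool {
    public static void main(String[] args) throws Exception {
        // Zero core threads, up to 8; a SynchronousQueue forces a new worker
        // whenever no idle thread is available to take the task directly.
        ThreadPoolExecutor pool = new ThreadPoolExecutor(
                0, 8, 60, TimeUnit.SECONDS, new SynchronousQueue<>());

        CountDownLatch release = new CountDownLatch(1);
        for (int i = 0; i < 5; i++) {          // five concurrent "requests"
            pool.execute(() -> {
                try { release.await(); } catch (InterruptedException ignored) { }
            });
        }
        System.out.println(pool.getPoolSize()); // 5 -- grew to match demand
        release.countDown();
        pool.shutdown();
    }
}
```

No per-endpoint reservation is needed for this behavior; demand itself drives how many threads each route occupies at any moment.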


Per EndPoint Threads???

2017-08-11 Thread Owen Rubel
Hi All,

I'm looking for a way (or a tool) in Tomcat to associate threads with
endpoints.

The reason is that, on the whole, threads are not used by the system as a
single unit but are distributed dynamically to specific pieces. Tomcat repeats
this process over and over but never retains the knowledge of which endpoints
continually see high-volume traffic and which see lower-volume traffic.

Even at startup/restart, these individual endpoints should start with a
higher number of threads by DEFAULT as a result of their continually
higher traffic.

Is there a way to assign/distribute the number of threads across the
available endpoints, much like 'load balancing'?

ie:
localhost/v0.1/user/show: 50%
localhost/v0.1/user/create: 10%
localhost/v0.1/user/edit: 5%
localhost/v0.1/user/delete: 2%
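For illustration, the 'load balancing' idea above could be approximated by splitting a fixed thread budget in proportion to each endpoint's traffic share. This is a hypothetical sketch (Tomcat exposes no such per-endpoint setting, and the class and method names are invented):

```java
import java.util.LinkedHashMap;
import java.util.Map;

public class EndpointThreadBudget {
    // Distribute a fixed thread budget across endpoints in proportion to
    // their observed traffic share (the percentages from the list above).
    static Map<String, Integer> allocate(Map<String, Integer> trafficPct, int totalThreads) {
        int pctSum = trafficPct.values().stream().mapToInt(Integer::intValue).sum();
        Map<String, Integer> out = new LinkedHashMap<>();
        for (Map.Entry<String, Integer> e : trafficPct.entrySet()) {
            // At least one thread per endpoint so low-traffic routes never starve.
            out.put(e.getKey(), Math.max(1, totalThreads * e.getValue() / pctSum));
        }
        return out;
    }

    public static void main(String[] args) {
        Map<String, Integer> traffic = new LinkedHashMap<>();
        traffic.put("/v0.1/user/show", 50);
        traffic.put("/v0.1/user/create", 10);
        traffic.put("/v0.1/user/edit", 5);
        traffic.put("/v0.1/user/delete", 2);
        System.out.println(allocate(traffic, 100));
    }
}
```

As the replies in this thread explain, a single shared pool already achieves this proportionality implicitly, because hot endpoints submit more work and therefore occupy more threads at any instant.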

Owen Rubel
oru...@gmail.com