Re: [rt-users] RT 4 Upgrade Slow Performance - CustomFields?

2012-06-02 Thread Nathan Baker
Ruslan,

I agree with your recommendation in general for most installations,
especially ones larger than ours.  I don't think increasing the
KeepAliveTimeout is necessary anymore now that I've fixed the swapping
issue, because the initial page load is no longer slow.  However, in
our environment I know there will never be more than 5 people accessing
the site at once, so we will never run into the scenario you gave as an
example.  The reason I left the KeepAliveTimeout at 60 is that I'd like
the site to be as fast as possible once a user starts doing something:
a user could easily look at a ticket for more than 15 seconds (the
default KeepAliveTimeout in my Apache configuration) before continuing,
but is unlikely to take longer than 60 seconds between clicks.  For our
environment it would be fine to dedicate 5 Apache processes to 5
simultaneous users.
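
For what it's worth, the relevant part of the Apache config I'm
describing looks roughly like this (the numbers are illustrative, not
copied verbatim from my httpd.conf):

# keep connections open so a user's follow-up clicks reuse the process
KeepAlive On
KeepAliveTimeout 60

# prefork pool sized for a handful of concurrent users; with mod_perl
# each process can use 60-100MB, so keep the spare/maximum counts small
<IfModule mpm_prefork_module>
    StartServers      2
    MinSpareServers   1
    MaxSpareServers   3
    MaxClients        5
</IfModule>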

Thanks for the information on the lightweight front-end; it looks like
it would help a lot, although I think it might be overkill for our
relatively small installation.

-Nate

On Sat, Jun 2, 2012 at 6:00 PM, Ruslan Zakirov wrote:

> It shouldn't be necessary if you know how to fit things in.
>
> You don't want KeepAliveTimeout to be very high. A keep-alive of 60
> seconds means that once a user touches an Apache process, that process
> is held and can't serve other users for 60 seconds, even if the user
> does nothing. If 10 users hit the server within a minute, you need an
> 11th Apache process to serve the next user. Your deployment is not
> sized for such values.
>
> For big keep-alive values you need two-step processing, with a light
> frontend and a heavy backend. The frontend keeps connections open and
> can hold many of them with a low memory footprint. For example, take a
> look at the following blog post, especially the memory footprint chart:
> http://blog.webfaction.com/a-little-holiday-present
>
> As the backend you can use an FCGI server running RT, your current
> Apache setup, or something else.
>
> Take a look at the following extension:
>
> http://search.cpan.org/~ruz/RT-Extension-Nginx-0.02/lib/RT/Extension/Nginx.pm#FEATURES
>
> It generates a config for nginx that uses a few features of the server,
> plus some knowledge of RT, to lower the memory footprint, increase
> concurrency, and reduce page load times.
>
> --
> Best regards, Ruslan.
>


Re: [rt-users] RT 4 Upgrade Slow Performance - CustomFields?

2012-06-02 Thread Ruslan Zakirov
On Fri, Jun 1, 2012 at 12:07 AM, Nathan Baker wrote:
> Thanks Kenn, I checked and didn't see any permissions globally set for
> everyone, except that the Create Ticket right is set for Everyone on
> each of our queues.
>
> I made a few more changes though, and am considering the problem fixed
> at this point.  I found that the system was doing a lot of memory
> swapping, even though I had increased the memory from 512MB with RT
> 3.8.8 (and MySQL) to 2GB with 4.0.5 (and PostgreSQL).  I disabled all
> debugging and heavy logging, and adjusted the Apache configuration to
> increase the KeepAliveTimeout to 60 and reduce MinSpareServers and
> MaxSpareServers.  The Apache processes were using 60-100MB each
> (because of mod_perl, I think), so with 15 Apache processes running
> that's potentially 1.5GB.  After making that change the system is
> "lightning fast" again.  I still might add 1-2GB of memory just to be
> safe; I just didn't think that much should be necessary.

It shouldn't be necessary if you know how to fit things in.

You don't want KeepAliveTimeout to be very high. A keep-alive of 60
seconds means that once a user touches an Apache process, that process
is held and can't serve other users for 60 seconds, even if the user
does nothing. If 10 users hit the server within a minute, you need an
11th Apache process to serve the next user. Your deployment is not
sized for such values.

For big keep-alive values you need two-step processing, with a light
frontend and a heavy backend. The frontend keeps connections open and
can hold many of them with a low memory footprint. For example, take a
look at the following blog post, especially the memory footprint chart:
http://blog.webfaction.com/a-little-holiday-present

As the backend you can use an FCGI server running RT, your current
Apache setup, or something else.
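
As a rough sketch (the port and address below are just examples, not
something RT requires), the frontend half can be as small as:

# nginx as the lightweight frontend: it holds the keep-alive
# connections cheaply and proxies each request to the heavy backend
server {
    listen 80;
    keepalive_timeout 60;

    location / {
        # backend = your current Apache/mod_perl setup or an FCGI
        # server running RT; 127.0.0.1:8080 is only an example address
        proxy_pass http://127.0.0.1:8080;
        proxy_set_header Host $host;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
    }
}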

Take a look at the following extension:

http://search.cpan.org/~ruz/RT-Extension-Nginx-0.02/lib/RT/Extension/Nginx.pm#FEATURES

It generates a config for nginx that uses a few features of the server,
plus some knowledge of RT, to lower the memory footprint, increase
concurrency, and reduce page load times.

> I also have rt-clean-sessions running every night, which should help some.
>
> Thank you everyone that helped!

-- 
Best regards, Ruslan.


Re: [rt-users] Cross site request forgery?

2012-06-02 Thread Paul Tomblin
On Sat, Jun 2, 2012 at 4:04 PM, Paul Tomblin wrote:
> But when I try to access this file as the source in my .autocomplete,
> it gets a 404.  I've tried it with a relative path and an absolute
> path, same results.

It would help, I suppose, if I were actually able to tell upper case
from lower case when comparing the name as I wrote it in the javascript
against the name I used when I created the file.  I feel like such an
idiot now.


-- 
http://www.linkedin.com/in/paultomblin
http://careers.stackoverflow.com/ptomblin


[rt-users] Cross site request forgery?

2012-06-02 Thread Paul Tomblin
I'm trying to do a jQuery autocomplete, but using my "other" database
rather than the RT database.  I created a web form in my extension's
own html/cf directory, which I can access.  I also put an autocomplete
component in html/cf/AutoComplete called "People", which looks a lot
like RT's Helpers/Autocomplete/Users:
% $r->content_type('application/json');
<% JSON( \@suggestions ) |n %>
% $m->abort;
<%ARGS>
$field => undef
$term => undef
</%ARGS>
<%INIT>
use RTx::FooBar::Records::Peoples;

$RT::Logger->debug("called AutoComplete/People");

my $people = RTx::FooBar::Records::Peoples->new(Handle => CFHandle());
$people->Limit(
    FIELD           => $field,
    OPERATOR        => 'LIKE',
    VALUE           => '%'.$term.'%',
    ENTRYAGGREGATOR => 'AND',
);

# collect label/value pairs for the autocomplete widget
my @suggestions;
while (my $person = $people->Next)
{
  my $suggestion = { label => $person->$field, value => $person };
  push @suggestions, $suggestion;
}
</%INIT>

I've already tested that my autohandler provides the correct CFHandle
to my database, and that RTx::FooBar::Records::Peoples returns the
correct rows when accessed like this.
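
For reference, the javascript side is wired up roughly like this (the
selector and the "/rt" prefix are placeholders, not the literal values
from my page):

// "#person-name" and "/rt" are placeholders; jQuery UI appends the
// "term" parameter to the source URL by itself
jQuery('#person-name').autocomplete({
    source: '/rt/cf/AutoComplete/People?field=Name',
    minLength: 2
});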

But when I try to access this file as the source in my .autocomplete,
it gets a 404.  I've tried it with a relative path and an absolute
path, same results.
And if I try to access the url directly, I get this RT page that says
it's a possible cross-site request forgery.

What can I do to make this work?

-- 
http://www.linkedin.com/in/paultomblin
http://careers.stackoverflow.com/ptomblin