Matt
You could consider using multiple PostgreSQL replica servers in hot
standby mode with pgpool in front of them; it may help, but only a little.
Because of the way Spacewalk does its queries, I doubt it will help much,
as the safeties in pgpool will think just about every query may write
and send it to t
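For anyone who wants to try the replica setup anyway, a minimal sketch of the relevant pgpool-II settings is below (hostnames are hypothetical, and values are illustrative, not tuned recommendations). As noted above, pgpool's write-detection will still route most of Spacewalk's query mix to the primary:

```ini
# pgpool.conf (fragment) -- hypothetical hosts, streaming-replication setup
backend_hostname0 = 'pg-primary.example.com'
backend_port0 = 5432
backend_weight0 = 1

backend_hostname1 = 'pg-replica1.example.com'
backend_port1 = 5432
backend_weight1 = 1

load_balance_mode = on             # send read-only queries to replicas
master_slave_mode = on             # backends are primary + hot standby
master_slave_sub_mode = 'stream'   # replicas fed by streaming replication
```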
Yeah, it was quite a trial to get it to scale to that degree, so I tried a
lot of different things. I increased max_fds in the jabber configs and
the ulimits for the jabber user, and it's pretty stable now, though even the
back-end Python code wasn't written for multiple OSA dispatchers
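For reference, the two knobs mentioned above are the per-user open-file limit and jabberd's own descriptor budget; a sketch of raising both follows (the value 65536 is illustrative, not a tuned recommendation):

```ini
# /etc/security/limits.conf -- raise the open-file limit for the jabber user
jabber  soft  nofile  65536
jabber  hard  nofile  65536

# /etc/jabberd/c2s.xml -- raise jabberd's own limit to match, e.g.:
#   <max_fds>65536</max_fds>
```

The jabber user has to log in again (or the service be restarted) for the new limits to take effect.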
Matt,
you have a lot of clients, so it starts to make sense to
increase the database connections.
Also, OSAD isn't meant to scale that high; in fact, you should run out of
file handles for it before it even gets to that many clients.
Furthermore, I hope you are using Spacewalk proxies; if not y
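The file-handle ceiling above is easy to estimate: each connected osad client holds one persistent XMPP connection to jabberd, so the daemon needs roughly one file descriptor per client. A rough sketch (the `reserved` headroom for logs, listening sockets, etc. is a hypothetical allowance):

```python
def max_osad_clients(fd_limit, reserved=64):
    """Rough upper bound on concurrent osad clients for a given fd limit.

    Each client holds one persistent connection, so the bound is the
    process's descriptor limit minus headroom for non-client descriptors.
    """
    return max(fd_limit - reserved, 0)

print(max_osad_clients(1024))   # default limit on many distros -> 960
print(max_osad_clients(65536))  # after raising ulimit -n -> 65472
```

This is why a few thousand clients blow past the stock 1024-descriptor limit long before the database becomes the bottleneck.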
Well, for a smaller number of clients (<500-1,000), separating out the
database might be more work than it's worth (though good experience).
As a first step, I would take a look at increasing the logging in
Postgres to get an idea of where the slowness is occurring. Also, something
like atop or ano
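A minimal sketch of the logging settings worth turning on in postgresql.conf for this kind of hunt (the 250 ms threshold is illustrative; all of these take effect on a config reload):

```ini
# postgresql.conf (fragment)
log_min_duration_statement = 250   # log any statement slower than 250 ms
log_checkpoints = on               # spot checkpoint-driven I/O spikes
log_lock_waits = on                # log waits longer than deadlock_timeout
log_temp_files = 0                 # log every temp file (sorts spilling to disk)
```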
Thank you for the tips.
In this case there is 6GB of memory available, and the high I/O occurs on
the Postgres disk; on the other disks, I/O is normal. The system is not
using swap, and there is 3GB of swap available.
I will separate out the Postgres database and apply your tips.
2016-10-10 21:58 GMT-03:00 Matt Moldvan:
>
On 10 Oct 2016, at 17:33, Jan Dobes wrote:
>
> On 10.10.2016 11:43 Morten A. Middelthon wrote:
>>
>> Has anyone gone through the trouble of applying the patch on Spacewalk
>> 2.2? My Spacewalk server is unfortunately running RHEL 5, which means
>> I can't upgrade past Spacewalk v2.2 :(
>> My p