Nice, I did intend to do some kind of replication to the other datacenter
for a hot/cold standby (or even active/active if it was fast enough) but
Bucardo had a pretty steep learning curve, and I'm sort of bound by ITIL to
create change requests and so on, along with other priorities. Anyway, at s
Matt
You might consider using multiple PostgreSQL replica servers in hot
standby mode with pgpool in front of them; it may help, but only a little.
Because of the way Spacewalk does its queries, I doubt it will help much,
as the safeties in pgpool will think just about every query may write
and send it to t
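Roughly the kind of setup I mean, as a sketch only (hostnames are made up,
and this assumes plain streaming replication with pgpool-II in front):

    # postgresql.conf on the primary (PostgreSQL 9.x style settings)
    wal_level = hot_standby
    max_wal_senders = 3

    # postgresql.conf on each replica
    hot_standby = on

    # pgpool.conf, listing the primary and one replica
    backend_hostname0 = 'pg-primary.example.com'
    backend_port0 = 5432
    backend_hostname1 = 'pg-replica1.example.com'
    backend_port1 = 5432
    load_balance_mode = on
    master_slave_mode = on
    master_slave_sub_mode = 'stream'

As said above, pgpool only load-balances statements it is certain are
read-only, so most of Spacewalk's traffic would still land on the primary.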
Yeah, it was quite a trial to get it to scale to that degree, so I tried a
lot of different things. I increased the max_fds in the jabber configs and
the ulimits for the jabber user, and it's pretty stable now, though even the
back-end Python code wasn't written for having multiple OSA dispatchers
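For the archives, the two settings I mean are roughly these (the values
below are illustrative only, not the exact ones from my setup):

    # /etc/jabberd/c2s.xml -- raise the file-descriptor ceiling for c2s
    <max_fds>32768</max_fds>

    # /etc/security/limits.conf -- matching ulimits for the jabber user
    jabber  soft  nofile  32768
    jabber  hard  nofile  32768

followed by a restart of the jabberd service so the new limits take effect.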
Matt,
You have a lot of clients, so those numbers start to make sense for
increasing the database connections.
Also, OSAD isn't meant to scale that high; in fact, you should run out of
file handles for it before it even gets to that many clients.
Furthermore, I hope you are using Spacewalk proxies; if not, y
Well, for a smaller number of clients (<500-1,000), separating out the
database might be more work than it's worth (though it's good experience).
The first step, I would say, is to take a look at increasing the logging in
Postgres to get an idea of where the slowness is occurring. Also, something
like atop or ano
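By increasing the logging I mean something along these lines in
postgresql.conf (the threshold is just an example):

    # log any statement that runs longer than 250 ms
    log_min_duration_statement = 250
    # log sessions that wait longer than deadlock_timeout for a lock
    log_lock_waits = on
    # log every temp file spilled to disk (0 = all sizes)
    log_temp_files = 0

then reload PostgreSQL (systemctl reload postgresql on CentOS 7) and watch
the log while the slowness is happening.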
Thank you for the tips.
In this case there is 6GB of memory available, and the high I/O occurs on
the Postgres disk. On the other disks, I/O is normal. The system is not
using swap, and there is 3GB of swap.
I will separate Postgres and apply your tips.
2016-10-10 21:58 GMT-03:00 Matt Moldvan :
We have about 6,000 systems to manage and it was unusable otherwise... I
had way too much trouble trying to get OSAD to work through proxies and F5
load balancers, so I ended up pointing them all to two masters that
are still using the same Postgres database VM. I was also toying with
having
Tuning for 5,000 clients is nuts; that would hurt your performance.
Try running pgtune for about 50 to maybe 500 clients max, but I would try
the lower setting first.
Now let's talk about the high I/O: that usually happens when you don't
have enough working memory in PostgreSQL's configuration. When that
happe
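Concretely, with the classic pgtune script that would look something like
this for the 6GB box in this thread (the connection count and memory figure
are just examples):

    pgtune -i /var/lib/pgsql/data/postgresql.conf \
           -o /tmp/postgresql.conf.pgtune \
           -T Web -c 200 -M 6442450944

which lands you somewhere around shared_buffers of a quarter of RAM and a
modest work_mem, instead of the enormous per-connection memory that tuning
for 5,000 connections would imply.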
I had similar issues and ended up first breaking out the database to its
own VM, then increasing the Postgres debug logs. I saw that there were a
large number of operations running against the snapshot tables, with locks
and so on being set for a long period of time. In /etc/rhn/rhn.conf, try
di
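(The message above is cut off here; if the advice is the usual one about
the snapshot tables, the setting I know of is the following, but that is my
assumption, not necessarily what was meant:

    # /etc/rhn/rhn.conf -- assumption: stop recording system snapshots
    enable_snapshots = 0

followed by a spacewalk-service restart.)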
I understand. Can you explain to me what is recorded in the database when
systems request information from SP, and why the default installation
doesn't make sense? Is the correct approach for me to install httpd, Tomcat,
and Postgres with my own optimizations?
2016-10-10 17:06 GMT-03:00 Konstantin Raskoshnyi :
> Because all your sy
Because all your systems request information from SP, the default
installation doesn't make any sense if you have more than 50 machines, so
you need to tune Postgres, Tomcat & Linux itself.
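To make that a bit more concrete, "tune postgres, tomcat & linux" usually
means touching at least these three places (the values are placeholders,
not recommendations):

    # postgresql.conf: size memory to the host instead of the tiny defaults
    shared_buffers = 1536MB
    effective_cache_size = 4GB

    # /etc/sysconfig/tomcat (location may differ): more heap for Java
    JAVA_OPTS="-Xms512m -Xmx1024m"

    # sysctl: discourage swapping out Postgres pages
    vm.swappiness = 10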
On Mon, Oct 10, 2016 at 12:34 PM, Allan Moraes wrote:
> Hi
> In my CentOS 7 server, is installed the spacew
Hi,
On my CentOS 7 server, Spacewalk 2.4 and PostgreSQL are installed from the
default installation. Via iotop, my PostgreSQL writes a lot of information
all day long. Why does this occur?
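A couple of standard commands that can help answer that (the database name
below is a placeholder; the real one is the db_name value in
/etc/rhn/rhn.conf):

    # accumulate per-process disk writes to confirm it really is postgres
    iotop -o -a

    # inside the Spacewalk database: which tables get the most writes?
    psql -U postgres -d <db_name> -c "
      SELECT relname, n_tup_ins, n_tup_upd, n_tup_del
      FROM pg_stat_user_tables
      ORDER BY n_tup_ins + n_tup_upd + n_tup_del DESC
      LIMIT 10;"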