On Sat, 31 Mar 2012, Daniel Baumann wrote:
> On 03/30/2012 10:26 PM, Martin Bagge / brother wrote:
>> The hosting for the Pootle server might be gone in less than three
>> months. I'll try to find an alternative hosting solution
>
> how resource intensive is pootle (cpu/ram/hdd)? if it's not too much,
> you could run it in an lxc container on one of my machines (where
> {lists,git}.lxde.org is).
It's a Django application, and I use some minor scripts to do cleanup.
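For context, the cleanup is nothing fancy; a minimal sketch of the kind of
script I mean (not the exact one -- the settings module name and the 'cleanup'
management command are assumptions about a Django 1.3-era install):

    #!/usr/bin/env python
    # Hypothetical cron job: purge expired Django sessions for the Pootle site.
    import os
    from django.core.management import call_command

    # Point Django at the site's settings module (module name is an assumption).
    os.environ['DJANGO_SETTINGS_MODULE'] = 'pootle.settings'

    # 'cleanup' deletes expired sessions on Django 1.3/1.4-era installs.
    call_command('cleanup')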
The current box has the following specs:
             total       used       free     shared    buffers     cached
Mem:        512636     482264      30372          0      45764     107788
-/+ buffers/cache:      328712     183924
Swap:      1510072     197548    1312524
Filesystem           1K-blocks      Used Available Use% Mounted on
/dev/sda1             36961640  25445284  11140840  70% /
tmpfs                     5120         0      5120   0% /lib/init/rw
tmpfs                    51264       220     51044   1% /run
udev                    251532         0    251532   0% /dev
tmpfs                   102528         0    102528   0% /run/shm
(Pootle isn't the only thing running on that machine though, so the usage
numbers are a bit off =))
processor       : 0
vendor_id       : GenuineIntel
cpu family      : 15
model           : 2
model name      : Intel(R) Pentium(R) 4 CPU 2.80GHz
stepping        : 9
cpu MHz         : 2792.999
cache size      : 512 KB
fdiv_bug        : no
hlt_bug         : no
f00f_bug        : no
coma_bug        : no
fpu             : yes
fpu_exception   : yes
cpuid level     : 2
wp              : yes
flags           : fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush dts acpi mmx fxsr sse sse2 ss ht tm pbe up pebs bts cid xtpr
bogomips        : 5585.99
clflush size    : 64
cache_alignment : 128
address sizes   : 36 bits physical, 32 bits virtual
power management:
It runs the Pootle application behind Apache, and I use memcached to lessen
the load somewhat.
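The memcached hookup on the Django side is roughly this kind of setting (a
sketch with illustrative values, assuming the Django 1.3+ style CACHES dict,
not the exact production config):

    # settings.py -- illustrative values, not the exact config
    CACHES = {
        'default': {
            'BACKEND': 'django.core.cache.backends.memcached.MemcachedCache',
            'LOCATION': '127.0.0.1:11211',  # local memcached instance
            'TIMEOUT': 300,                 # keep entries for five minutes
        }
    }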
Possible?
--
/brother
http://martin.bagge.nu
The output of Bruce Schneier's pseudorandom generator follows no describable
pattern and cannot be compressed.