Re: Update duplicity

2012-03-03 Thread Scott Kitterman
On Saturday, March 03, 2012 07:35:13 PM Andreas Moog wrote:
> On 03.03.2012 19:18, Dan Lange wrote:
> > v0.6.17 was released three months ago but Ubuntu hasn't updated their
> > packages for 11.10 (oneiric). Is there something blocking this or has
> > Duplicity slipped through the cracks?
> 
> We generally do not package new upstream versions for previous releases
> of Ubuntu due to the risk of introducing regressions. If there is a
> particular bug you'd like to get fixed, have a look at the Stable Release
> Update procedure outlined at https://wiki.ubuntu.com/StableReleaseUpdates

Precise does have 0.6.17, so you'll see it in 12.04.  If you'd like the new
version as a whole on 11.10, backports is what you should look into:

https://wiki.ubuntu.com/UbuntuBackports

Scott K



Re: Update duplicity

2012-03-03 Thread Andreas Moog
On 03.03.2012 19:18, Dan Lange wrote:
> v0.6.17 was released three months ago but Ubuntu hasn't updated their
> packages for 11.10 (oneiric). Is there something blocking this or has
> Duplicity slipped through the cracks?

We generally do not package new upstream versions for previous releases
of Ubuntu due to the risk of introducing regressions. If there is a
particular bug you'd like to get fixed, have a look at the Stable Release
Update procedure outlined at https://wiki.ubuntu.com/StableReleaseUpdates

Cheers,
  Andreas





Update duplicity

2012-03-03 Thread Dan Lange
v0.6.17 was released three months ago but Ubuntu hasn't updated their
packages for 11.10 (oneiric). Is there something blocking this or has
Duplicity slipped through the cracks? 0.6.17 fixes some critical bugs, and
I'd much rather stick with the official packages than maintain my own. I
just noticed 0.6.18 was released on Feb 29th. Perhaps you can skip right
over 0.6.17 and go straight to 0.6.18?

Thanks,

Dan


Re: zram swap on Desktop

2012-03-03 Thread John Moser

On 03/03/2012 12:05 AM, Phillip Susi wrote:

On 02/27/2012 08:58 PM, John Moser wrote:

I believe that swap space is only actually freed when the memory it is 
backing is freed.  In other words, if the process frees the memory, 
the swap is freed, but when a page is read back in from swap, it is 
left in swap so that it can be discarded again in the future without 
having to be written out again.  This can waste some memory: pages can 
still sit in the zram swap after they have been moved back into 
regular RAM.




This may be true.  Also, zram doesn't seem to bother compacting until 
it's being added to, so it bloats and then doesn't shrink much.  For 
example, you can put 500MB into the zram swap at a 29% compression 
ratio, with memory used at about 30% of the stored data; free 200MB 
and it stays at a 29% compression ratio with the compressed data about 
70MB smaller, but memory used climbs to around 40% of the (now 
smaller) stored data, because only about 20MB of RAM actually came 
back: fragmentation left a lot of empty space in zram pages that also 
contain in-use compressed data.  It doesn't compact in the background.
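
For reference, those ratios can be read straight off the device's 
sysfs statistics.  A quick sketch, assuming the attribute names the 
3.x staging zram driver exposes (orig_data_size, compr_data_size, 
mem_used_total):

#!/usr/bin/env python
# Sketch: compute zram compression ratio and fragmentation overhead
# from sysfs.  Attribute names are those of the 3.x staging zram driver.
import os

SYSFS = "/sys/block/zram0"

def stat(name):
    # Each attribute is a single integer, in bytes.
    with open(os.path.join(SYSFS, name)) as f:
        return int(f.read().strip())

orig = stat("orig_data_size")    # uncompressed bytes stored
compr = stat("compr_data_size")  # bytes after compression
used = stat("mem_used_total")    # bytes actually held, incl. fragmentation

if orig == 0:
    raise SystemExit("zram holds no data")
print("compression ratio: %.0f%%" % (100.0 * compr / orig))
print("actual residency:  %.0f%%" % (100.0 * used / orig))
print("fragmentation overhead: %d KiB" % ((used - compr) // 1024))

The gap between the last two numbers is the empty space stuck inside 
partially-used zram pages.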



- Desktops may benefit by eschewing physical swap for RAM
* But this breaks suspend-to-disk; then again, so does everything:
+ Who has a 16GB swap partition? Many people have 4GB, 8GB, 16GB RAM
+ The moment RAM + SWAP - CACHE > TOTAL_SWAP, suspend to disk breaks


Cache can be discarded at hibernate time, so you only need RAM + SWAP. 
Also, people generally don't hibernate while that much RAM is in use, 
and almost never with much swap used.  Also, I *think* I saw a patch 
somewhere recently to address this by avoiding the zram device for 
hibernation and falling back to other swap devices instead.
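
To make that arithmetic concrete, here's a rough sketch (my own 
illustration, not anyone's actual tooling) of the fit test from 
/proc/meminfo: RAM in use, minus discardable cache, plus what's 
already in swap, has to fit in the swap total:

#!/usr/bin/env python
# Sketch: rough "will suspend-to-disk fit?" test per the discussion above.
def meminfo():
    # Parse /proc/meminfo into a dict of kB values.
    info = {}
    with open("/proc/meminfo") as f:
        for line in f:
            key, rest = line.split(":", 1)
            info[key] = int(rest.split()[0])  # values are in kB
    return info

m = meminfo()
in_use = m["MemTotal"] - m["MemFree"] - m["Cached"] - m["Buffers"]
swap_used = m["SwapTotal"] - m["SwapFree"]
needed = in_use + swap_used
if needed > m["SwapTotal"]:
    print("hibernate likely fails: need %d kB, have %d kB of swap"
          % (needed, m["SwapTotal"]))
else:
    print("hibernate should fit: need %d kB of %d kB of swap"
          % (needed, m["SwapTotal"]))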




Well, I mean, I shut off my VMs and all.  A quick glance at top and 
some math tells me I'm using 2.3GB right now ... and there's nothing 
I'd want to close if I decided to hibernate my computer for the night.  
Closing down programs sort of defeats the purpose.  Maybe LibreOffice.

It looks like CleanCache/CompCache is a better solution since it 
avoids the step of emulating a block device.





zcache on cleancache just compresses the page cache (file-backed 
pages), not swap (anonymous pages).  zcache on frontswap is the 
solution for compressing swap without a block device, also written by 
the zram author; I'm not sure how to configure it, though.  cleancache 
and zcache are in 3.2 staging, frontswap is not.  For what it's worth, 
I'm running both zram swap and cleancache-backed zcache in tandem; one 
does not affect the other.  I've tested this running Ubuntu 11.10 with 
288MB of RAM, which is painful; it becomes crippling MUCH faster 
without zcache enabled.
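
For anyone wanting to reproduce the zram-swap half of that setup, it's 
just a block device sized through sysfs and then used as ordinary 
swap.  A minimal sketch (run as root; assumes the staging zram module 
is already loaded, e.g. via modprobe zram):

#!/usr/bin/env python
# Sketch: enable swap on zram.  mkswap and swapon are the standard tools.
import subprocess

DISKSIZE = 256 * 1024 * 1024  # 256MB of (uncompressed) swap space

# zram requires the device to be sized before first use.
with open("/sys/block/zram0/disksize", "w") as f:
    f.write(str(DISKSIZE))

subprocess.check_call(["mkswap", "/dev/zram0"])
# High priority so the kernel prefers zram over any disk-backed swap.
subprocess.check_call(["swapon", "-p", "100", "/dev/zram0"])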


http://i.imgur.com/aAeSE.png

All of the above is swap on zram, no disk-backed device.  With zcache 
enabled and two CPUs (kswapd uses 60%-70% CPU like this!) I can get this 
far with about 20-25MB of page cache, and I can still raise and lower 
Firefox and open a new tab in gnome-terminal to run killall on Firefox 
(attempting to close Firefox normally was taking too long for the 
dialog boxes to load and draw--it annoyed me).


I'm pretty sure zram will be superseded by zcache on frontswap.  zcache 
is a tmem backend; frontswap and cleancache are tmem frontends.  Any 
backend can be used with any frontend, so when (if) the frontswap 
frontend goes into mainline, zcache will load onto it.  zcache 
essentially *is* zram: it uses zram's xvmalloc allocator when handling 
frontswap pages (and a different allocator for page cache), and it's 
even written by the same author.  zcache is just zram ported to tmem, 
which makes it both the same and separate--it is zram, but it's not 
zram.  As tmem looks like the direction the kernel is moving, zram will 
probably go away: the appropriate compressed in-memory file system is 
tmpfs over compressed swap, since the RAM backing tmpfs can be swapped 
out, so zcache and zram will both effectively compress tmpfs.  That 
leaves little use for zram as a block device in RAM housing a 
compressed file system.
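
To illustrate that frontend/backend split (a conceptual sketch only; 
the real kernel interface is a C ops structure, and every name here is 
illustrative, not the actual API):

import zlib

class CompressingBackend:
    # Stands in for zcache: keeps pages compressed in RAM.
    def __init__(self):
        self.pages = {}
    def put(self, key, page):
        self.pages[key] = zlib.compress(page)
    def get(self, key):
        data = self.pages.pop(key, None)
        return None if data is None else zlib.decompress(data)

class SwapFrontend:
    # Stands in for frontswap: offers anonymous pages to any backend.
    def __init__(self, backend):
        self.backend = backend
    def store(self, slot, page):
        self.backend.put(("swap", slot), page)
    def load(self, slot):
        return self.backend.get(("swap", slot))

# Any backend plugs into any frontend:
fs = SwapFrontend(CompressingBackend())
fs.store(0, b"x" * 4096)           # a uniform 4KB page compresses well
assert fs.load(0) == b"x" * 4096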


Of course, that theory raises a question: what about when you don't 
have swap?  Does the kernel still make its swapping decisions and then 
ask frontswap whether it can do something with the memory even when no 
actual swap space exists (i.e. attempting to swap without swap)?


Anyway, in the end I feel the situation boils down to this: you get an 
Intel CPU these days and it comes with 6 cores.  What do you do with 6 
cores?  You run some pretty extreme applications.  What does your 
average desktop do with 6 cores with hyperthreading enabled, running 
12-way SMP?  It uses 1-2 cores ... what do you do with the other 10?  
Run compression/decompression and compact fragmented compressed swap, 
what else?  On something like the XO laptop, where RAM is limited, it's 
simply a necessity*.  On a desktop with a smaller RAM space and a 
slower CPU, it's a livable trade-off that does enhance performance 
somewhat (it's a godsend on a dual core with just 512MB of RAM trying 
to run Unity).




*I believe the use is different on a sy

Re: cpufreqd as standard install?

2012-03-03 Thread John Moser

On 03/03/2012 12:13 AM, Phillip Susi wrote:

On 02/29/2012 04:40 PM, John Moser wrote:

At full load (encoding a video), it eventually reaches 80C and the
system shuts down.


It sounds like you have some broken hardware.  The stock heatsink and 
fan are designed to keep the cpu from overheating under full load at 
the design frequency and voltage.  You might want to verify that your 
motherboard is driving the cpu at the correct frequency and voltage.




Possibly.

The only other use case I can think of is high ambient temperature.  
Remember, server rooms use air conditioning; I found that for a while 
my machine would quickly overheat if the room temperature was above 
79F, so I kept the room at 75F.  The heat sink was completely clogged 
with dust at the time, though, which is why I recently cleaned and 
inspected it and checked all the fan speed monitors and motherboard 
settings to make sure everything was running as it should.


In any case, if the A/C goes down in a server room, it would be nice 
to have CPU frequency scaling kick in and take the clock speed down 
before the chips overheat.  Modern servers--for example, the Dell 
PowerEdge II and III revisions from 4 or 5 years ago--lean on their 
low-power capabilities, and modern data centers use a centralized DC 
converter with high-voltage (220V) DC mains in the data center to 
reduce power waste, because of the high cost of electricity.  It's 
extremely likely that such servers can clock low enough not to 
overheat without air conditioning--an emergency situation.


Of course, the side benefit of not overheating desktops with inadequate 
cooling or faulty motherboard behavior is simply a bonus.  Still, I 
believe in fault tolerance.



I currently have cpufreqd configured to clock to 1.8GHz at 73C, and move
to the ondemand governor at 70C.


This need for manual configuring is a good reason why it is not a 
candidate for standard install.




I've attached a configuration that generically uses sensors (i.e. if 
the 'sensors' program gives useful output, this works).  It covers 
just one core, though (on a multi-core system they all read the same 
temperature, since the sensor is per-CPU); you could easily generate 
this automatically.
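
Since the attachment may not survive the archive, here's a rough 
sketch of that auto-generation idea: parse the output of 'sensors' for 
a core temperature and emit one rule stanza per threshold.  The stanza 
layout and the sensor= syntax are from memory, so check 
cpufreqd.conf(5) before trusting any of it; the profile names are 
invented for illustration:

#!/usr/bin/env python
# Sketch: auto-generate per-threshold cpufreqd rules from 'sensors'
# output.  The [Rule] layout and sensor= syntax are from memory; verify
# against cpufreqd.conf(5).  Profile names are invented for illustration.
import re
import subprocess

out = subprocess.check_output(["sensors"]).decode()
# Grab the first "label: +NN.N(deg)C" style reading, e.g. "Core 0: +51.0C".
m = re.search(r"^(.+?):\s*\+([0-9.]+)\s*\xb0?C", out, re.MULTILINE)
if not m:
    raise SystemExit("no temperature reading found")
label, temp = m.group(1).strip(), float(m.group(2))
print("# current reading: %s = %.1fC" % (label, temp))

for low, high, profile in [(0, 70, "ondemand"),
                           (70, 73, "capped-1.8GHz"),
                           (73, 100, "minimum")]:
    print("[Rule]")
    print("name=%s-range" % profile)
    print("sensor=%s:%d-%d" % (label, low, high))
    print("profile=%s" % profile)
    print()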


Mind you, on the topic of automatic generation: 80C is a hard limit.  
It just is.  My machine reports (through sensors) +95.0C as 
"Critical", but my BIOS shuts the system down immediately at +80.0C.  
Silicon physically does not tolerate temperatures above 80.0C well at 
all; if a chip claims it can run at 95.0C, it's lying.  Even SOI CMOS 
doesn't tolerate those temperatures.


As well, again, you could write some generic profiles that detect when 
the system is running on battery (UPS, laptop) and make appropriate 
adjustments based on how much battery life is left.
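
The on-battery check is easy to probe; a sketch using the kernel's 
power_supply sysfs class (the paths are the conventional ones, but 
they vary by machine):

#!/usr/bin/env python
# Sketch: detect battery power and remaining capacity via sysfs.
import glob

def on_battery():
    # True if no AC adapter reports itself online.
    for path in glob.glob("/sys/class/power_supply/*/online"):
        with open(path) as f:
            if f.read().strip() == "1":
                return False  # some AC source is online
    return True

def battery_percent():
    # Remaining capacity of the first battery found, or None.
    for path in glob.glob("/sys/class/power_supply/*/capacity"):
        with open(path) as f:
            return int(f.read().strip())
    return None

if on_battery():
    print("on battery (%s%% left): pick a conservative profile"
          % battery_percent())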



At 73C, the system switches from 1.9GHz to 1.8GHz. Ten seconds later,
it's at 70C and switches back to 1.9GHz. 41 seconds after that, it
reaches 73C again and switches to 1.8GHz.

That means at stock frequency (1.9GHz) with stock cooling equipment, the
CPU overheats under full load. Clocked 0.1GHz slower than its rated
speed, it rapidly cools. Which is ridiculous; who designed this thing?


This sounds like your motherboard is overvolting the cpu in that 1.9 
GHz stepping.




Possibly, but the settings are all at their defaults, nothing set to 
overclock (it has jumper-free overclocking configuration, but the 
default option for clock rate and voltage is "Standard", which I 
assume means the CPU-supplied values).


Basically the argument here is between "supply fault tolerance" and 
"well, your motherboard is [old|poorly designed], so buy a new one."  
Fault tolerance is an excellent argument for hard drives (I have, in 
fact, suggested in the past that Ubuntu monitor hard disks for behavior 
indicative of dying drives--SMART errors, IDE RESET commands because 
the drive hangs, etc.--and start annoying the user with messages about 
the SEVERE risk of extreme data loss if he doesn't back up his data; 
see the sketch below), but really, if my mobo/CPU is aging and the CPU 
runs a little hot, I'm not going to cry when the CPU suddenly burns out 
and my machine shuts down.  I'll be confused and annoyed, but I'll buy 
a new one--I might buy an entire new computer, unaware that just my CPU 
is broken, and shove the old hard drive in there.  So there's no harm 
in letting the user's hardware burn itself out, if you think that's 
what's going on here.
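
As a sketch of that drive-monitoring idea (smartctl's -H health check 
is real and ships with smartmontools; the nagging policy is obviously 
my own invention):

#!/usr/bin/env python
# Sketch: poll SMART's overall health self-assessment and nag on failure.
import subprocess

def smart_healthy(device):
    # smartctl -H prints "... test result: PASSED" on a healthy drive.
    proc = subprocess.Popen(["smartctl", "-H", device],
                            stdout=subprocess.PIPE)
    out, _ = proc.communicate()
    return b"PASSED" in out

if not smart_healthy("/dev/sda"):
    print("SEVERE: /dev/sda reports SMART failure -- back up your data NOW")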


Of course, that doesn't mean you can't have a diagnostic center 
somewhere that the user can review to see the whole collection.  
"Ethernet: Lots of garbage [Possibly:  Faulty switch, faulty NIC, 
another computer with a chattering NIC spewing packets]."  "CPU:  
Overheats under high CPU load [Possibly:  Dust-clogged CPU heat sink, 
failing CPU fan, overclocking, failing CPU, failing motherboard voltage 
regulators, buggy motherboard BIOS]."  "/!\ Hard drive:  Freezes and 
needs IDE Resets [Possibly