Re: Symbols files for C++ libraries for Ubuntu main

2023-06-15 Thread Christopher James Halse Rogers



On Fri, Jun 9 2023 at 12:10:07 -0700, Steve Langasek wrote:

Hi Seb,

On Fri, Jun 09, 2023 at 02:27:02PM +0200, Sebastien Bacher wrote:
 I would like to ask if there is any chance the MIR team would
 reconsider their position on the topic (at least until the day we
 have a somewhat working solution we can use)?

 which also included those types of changes

 - _Znam@Base 2.0~b2-0ubuntu3
 + _Znaj@Base 2.0~b2-0ubuntu4
 +#MISSING: 2.0~b2-0ubuntu4# _Znam@Base 2.0~b2-0ubuntu3

 I personally don't understand why we have those symbols existing on
 armhf which don't exist on amd64, nor why _Znam@Base is becoming
 _Znaj@Base, nor how we are supposed to handle such cases.


Passing them through c++filt may help explain:

$ echo _Znam@Base | c++filt
operator new[](unsigned long)@Base
$ echo _Znaj@Base | c++filt
operator new[](unsigned int)@Base
$

There are various C++ functions whose signature will change based on
the architecture word length.

.symbols files support various kinds of globbing etc to be able to
express this logically (e.g., you could say '_Zna[mj]@Base' instead of
listing two different symbols as optional), but as you've found, it's
an onerous, iterative process to identify all the ways C++ symbols
vary across architectures and then encode this in a .symbols file.
And in this case, the symbol isn't part of the library's public ABI
anyway; this is just a function from the base C++ library!
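
(From memory, the two variants could be collapsed into a single
optional pattern; the header line below is paraphrased and the exact
tag syntax is worth checking against deb-src-symbols(5):

libcupsfilters.so.2 libcupsfilters2 #MINVER#
 (regex|optional)"^_Zna[mj]@Base$" 2.0~b2

i.e. one pattern line covering both the 32- and 64-bit manglings,
marked optional so dpkg-gensymbols doesn't fail whichever of the two
shows up on a given arch.)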

 4. those tweaks need to be done manually since it's not only applying
 the diff but also adding the optional keyword in places; I got one
 wrong and it failed to build again

 add one more symbol specific to riscv
 http://launchpadlibrarian.net/647875197/libcupsfilters_2.0~b2-0ubuntu6_2.0~b2-0ubuntu7.diff.gz


Yep.

 I understand the motivation for wanting a symbols file but I agree
 with what Robie said: what's the benefit? In that case we spent a few
 hours to end up with a .symbols which has over 150 '(optional)'
 entries; that doesn't protect us much better than just not having a
 .symbols or using -c0, but still has a high cost.


I wouldn't say that it doesn't protect you.  It's a pain to set up
initially and, as you note, you can even have to do further fix-ups as
a result of toolchain changes, as the set of template functions and
other C++ sugar from outside of the library that gets exported as ELF
symbols can change.  It DOES have a high cost, but in the end it
provides the same level of protection against accidental ABI breakage
as it does for C libraries.

It would be nice to have better, consistent tooling around ABI
checking for C++ libraries.  I think the KDE team had some tools
around automating the generation of symbols files - it does require
two passes, first to build on all archs and then to merge the results.
But in principle that's better than whack-a-mole.
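
(If memory serves, that's pkgkde-symbolshelper from pkg-kde-tools; the
usual flow is roughly to upload, collect the buildd logs from every
architecture, and feed them back in, e.g.

$ pkgkde-symbolshelper batchpatch -v 2.0~b2 amd64.log armhf.log riscv64.log

with the filenames hypothetical and the exact interface worth checking
in pkgkde-symbolshelper(1); it rewrites debian/*.symbols with the
per-arch differences merged in.)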

We could also consider using abi-compliance-checker instead of symbols
files for C++ libraries.  There is a dh-acc debhelper addon, but I've
never used it.  We are currently using abi-compliance-checker for the
ABI analysis of armhf for the move to 64-bit time_t; it's unmaintained
upstream, but it does seem to work pretty well - the vast majority of
issues we've encountered with it, when trying to run it over the
entire Ubuntu archive, have been due to header files that #include
headers from packages they don't depend on, or collections of headers
that can't all be included together.  Both of these are issues of much
less significance when it's the maintainer doing the work.  It would
require the same sort of two-pass setup process as the KDE tools,
though, and if it has to be done per-arch (probably), it's more
awkward to set up because a-c-c .dump files aren't ascii that you can
scrape from the build logs of failed builds - but it might be a more
reliable tool over the long term than dpkg-gensymbols for C++
libraries.
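
(For anyone who hasn't used it, the manual invocation is roughly as
follows, with the paths and versions hypothetical:

$ cat > old.xml <<EOF
<version>2.0~b1</version>
<headers>/usr/include/cupsfilters</headers>
<libs>/usr/lib/x86_64-linux-gnu/libcupsfilters.so.2</libs>
EOF
$ # ... new.xml likewise, pointing at the new build ...
$ abi-compliance-checker -lib libcupsfilters -old old.xml -new new.xml

which produces an HTML report of added/removed symbols and changed
type layouts.)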


In the Mir (not MIR ☺) team we've periodically been annoyed by
maintaining symbols files for C++¹, and have experimented with both
abi-compliance-checker and abigail. Neither of those experiments ended
up sticking, though, for reasons I'm not fully aware of. Alan
Griffiths and Michał Sawicz did most of that investigation; I'll see
if they can help shed light on the problems we encountered.


If we *can* get (one of) the ABI checking tools working they'll be more 
valuable than a symbols file anyway, as they actually check that ABI 
didn't change rather than just that the symbol strings in the DSO match.


¹: and we're playing on *easy* mode, where we, as upstream, use a 
linker script to export only symbols we intend to export.
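
(For the curious, a minimal sketch of such a script, with hypothetical
symbol and version names:

MIR_SERVER_2.15 {
  global:
    extern "C++" {
      "mir::Server::Server()";
      "mir::Server::~Server()";
    };
  local:
    *;
};

passed via -Wl,--version-script=..., so anything not listed stays
local to the DSO.)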




Downside: symbols files also let you track which version of a package
each symbol was added in, which lets packages' versioned dependencies
on the library be no stricter than actually necessary.


I don't speak for the MIR team; I have no objection to them relaxing
the requirement of .symbols files for C++ libraries in main.  Just
offering 

Re: systemd-oomd issues on desktop

2022-06-16 Thread Christopher James Halse Rogers



On Mon, Jun 13 2022 at 14:18:53 +0200, Lukas Märdian wrote:

Am 10.06.22 um 12:17 schrieb Sebastien Bacher:

Le 10/06/2022 à 11:40, Julian Andres Klode a écrit :

The bug reports we see show that systemd-oomd is working correctly:
the browser gets killed, and the system remains responsive instead of
becoming unresponsive as would usually be the case.


It might be working 'correctly' but it is not perceived as such by
users. I've seen regular complaints from users since the release
stating that their browser or code editor got closed in front of them
without warning, on a machine they had been using for years with the
same configuration and software without issue.

They might be getting short on resources but in practice they never
experienced a sluggish system due to it and just see the feature as
buggy.


I agree with Julian in that systemd-oomd in general seems to be 
working as expected. Its purpose is all about jumping in _before_ a 
system reaches its point of no return and unresponsive swapping death.


Therefore, I feel like we should not increase the recommended 
"SwapUsedLimit=90%" default much higher, i.e. option (1), as that 
could lead to situations where it's already too late to clear some 
memory and thus defeat the purpose of having sd-oomd.
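
(For reference, that default lives in /etc/systemd/oomd.conf, so
option (1) would be a change along the lines of

[OOM]
SwapUsedLimit=95%

with the value here purely illustrative.)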



OTOH, receiving those bug reports shows that sd-oomd is not yet 
properly optimized either, killing people's "important" applications 
first (such as the browser). Especially, if the browser applies some 
memory monitoring on its own to discard/unload unused tabs and free 
up memory, as suggested by Olivier.



The option (3) recommended by Nick could be one viable option in the 
Ubuntu context (only 1G swap available) for the time being, until we 
can have a proper upstream solution (using notifications and hooks) 
[1]. Thanks for bringing this up with the upstream developers, Michel!


I wonder if we could use a more selective approach, though, using 
"OOMScoreAdjust=" in the systemd.exec environment (i.e. Gnome-Shell 
launcher in Ubuntu's context, as sd-oomd is currently only enabled on 
Ubuntu Desktop) [2], to reduce the probability of certain "important" 
apps being killed first, e.g. by maintaining an allow-list of such 
apps.
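
(Mechanically that would just be a drop-in, something like the
following, with the unit name and value hypothetical:

# /etc/systemd/user/org.gnome.Shell@.service.d/oom.conf
[Service]
OOMScoreAdjust=-500

A negative value makes the process, and whatever it spawns, less
attractive to the kernel OOM killer; whether systemd-oomd's own
candidate selection honours it is something we'd need to verify.)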


Of course we do not want to introduce different classes of apps 
randomly, but would need to come up with a proper policy of which 
apps would be eligible to have a lower "OOMScoreAdjust" value. I feel 
like having individual mechanisms on the application layer to keep 
memory consumption under control, such as a browser's tab unloading, 
could be a fair eligibility criteria.


I'm not sure to what extent this could acceptably be backported to
22.04, but I understand that we actually have all the infrastructure
in place so that GNOME Shell could adjust the *foreground*
application's OOM score.


systemd-oomd has generally been a great improvement on my 16GiB RAM + 
2GiB swap laptop, turning ~1/day hard lockups due to OOM conditions 
into something being relatively-cleanly terminated, but it would be 
nicer still if it could preferentially kill Evolution chugging along in 
the background rather than the Firefox window I'm currently using (or 
terminal window I'm currently compiling in).


That, and a notification that an OOM condition was averted and 
[application] was killed to avoid it, would be a significant 
improvement in behaviour.




Re: Packaging policy discussion: After=network-online.target

2021-05-13 Thread Christopher James Halse Rogers



On Thu, May 13 2021 at 17:34:58 +0100, Dimitri John Ledkov wrote:

On Thu, May 13, 2021 at 4:12 PM Steve Langasek wrote:


 Hi there,

 On Wed, May 12, 2021 at 05:52:07PM +1000, Christopher James Halse Rogers wrote:
 > There's an nfs-utils SRU¹ hanging around waiting for a policy
 > decision on use of the After=network-online.target systemd unit
 > dependency. I'm not an expert here, but it looks like part of my SRU
 > rotation today is starting the discussion on this so we can resolve
 > it one way or another!

 > I am not an expert in this area, but as I understand it, the
 > tradeoff here is:
 > 1. Without a dependency on After=network-online.target there is no
 > guarantee that the network interface(s) will be usable at the time
 > the nfs-utils unit triggers, and nfs-utils will fail if the relevant
 > network interface is not usable, or
 > 2. With a dependency on After=network-online.target nfs-utils will
 > reliably start, but if there are any interfaces which are configured
 > but do not come up this will result in the boot hanging until the
 > timeout is hit.

 > In mitigation of (2), there are apparently a number of default
 > packages which already have a dependency on
 > After=network-online.target, so boot hanging if interfaces are down
 > is the status quo?

 From one of the comments in the bug report, I gathered that systemd
 upstream (who, specifically?) was taking a position that
 distributions should not use After=network-online.target.  I think
 this is entirely unhelpful; the target exists for this purpose, it is
 not required for systemd internally to get the system up but exists
 only for other services to depend on.

 There are risks of services not starting on boot because the
 network-online target is not reached.  However, that is not the same
 thing as a "hung boot", because other services will still start on
 their own, and things like gdm and tty don't depend on
 network-online.target, *unless* you're in a situation where you've
 introduced a dependency between the filesystem and network-online.
 This is possible when we're talking about nfs, because the same
 system might both export nfs filesystems and mount them from
 localhost.

 But I'm not sure it should block this specific change.

 > The obvious thing to do here would be to follow Debian, but as far
 > as I can tell there is not currently a Debian policy about this -
 > the best I can find is an ancient draft of a best-practices guide²
 > suggesting that packages SHOULD handle networking dynamically, but
 > if they do not MUST have a dependency on After=network-online.target

 > As far as I understand it, handling networking dynamically requires
 > upstream code changes (although maybe fairly simple code changes?).

 It does require upstream code changes; not always simple.  And it's
 not always *correct* to make upstream code changes instead of simply
 starting the service when the system is "online"; you can find a
 number of examples in Ubuntu of services that it only makes sense to
 start once your network is "up" - e.g. apt-daily.service,
 update-notifier, whoopsie, ...

 There are issues with the network-online target, to be sure.  There
 is not a clear definition of the target, and there have definitely
 been implementation bugs in what does/does not block the target.
 I've had discussions with the Foundations Team in the past about this
 but it has yet to result in a specification.

 My working definition of what network-online.target SHOULD mean is:

  - at least one interface is up, with routes
  - all interfaces which are 'optional: no' (netplan sense) are up
    - including completion of ipv6 RA and ipv4 link-local if enabled
      on the interface
  - there is a default route for at least one configured address family
  - attempts to discover default routes for other configured address
    families have completed (success or fail)
  - DNS is configured

 Things that must not block the network-online target:
  - interfaces that are marked 'optional: yes'
  - address sources that are listed in 'optional-addresses' for an
    interface
  - default route for an address family for which no interfaces have
    addresses

 At least historically, neither networkd nor NetworkManager has
 fulfilled this definition.  It would be nice to get there, but the
 first step is having some agreed definition such as the above so that
 we can treat deviations as bugs.



If netplan.io could implement that, it would be nice - i.e. either
synthetically (by generating a service unit on the fly that calls
systemd-networkd-wait-online with extra arguments specifying all the
non-optional interfaces), or by creating a new "netplan-wait-online"
binary which would be wanted by network-online.target and perform all
of the above.
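
(A sketch of the synthetic variant, interface names obviously
hypothetical:

# generated by netplan, e.g. into /run/systemd/system/netplan-wait-online.service
[Unit]
Before=network-online.target

[Service]
Type=oneshot
ExecStart=/lib/systemd/systemd-networkd-wait-online --interface=eth0 --interface=br0 --timeout=120

[Install]
WantedBy=network-online.target

systemd-networkd-wait-online accepts --interface= repeatedly, I
believe, so the generator would mostly be translating netplan's
'optional' flags into that list.)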

 > It seems unlikely that, whatever 

Packaging policy discussion: After=network-online.target

2021-05-12 Thread Christopher James Halse Rogers

Hello everyone,

There's an nfs-utils SRU¹ hanging around waiting for a policy decision 
on use of the After=network-online.target systemd unit dependency. I'm 
not an expert here, but it looks like part of my SRU rotation today is 
starting the discussion on this so we can resolve it one way or another!


I am not an expert in this area, but as I understand it, the tradeoff 
here is:
1. Without a dependency on After=network-online.target there is no 
guarantee that the network interface(s) will be usable at the time the 
nfs-utils unit triggers, and nfs-utils will fail if the relevant network 
interface is not usable, or
2. With a dependency on After=network-online.target nfs-utils will 
reliably start, but if there are any interfaces which are configured 
but do not come up this will result in the boot hanging until the 
timeout is hit.


In mitigation of (2), there are apparently a number of default packages 
which already have a dependency on After=network-online.target, so boot 
hanging if interfaces are down is the status quo?


The obvious thing to do here would be to follow Debian, but as far as I 
can tell there is not currently a Debian policy about this - the best I 
can find is an ancient draft of a best-practices guide² suggesting 
that packages SHOULD handle networking dynamically, but if they do not 
MUST have a dependency on After=network-online.target
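
(Concretely, that dependency is the usual pair of directives in the
service's unit file:

[Unit]
Wants=network-online.target
After=network-online.target

After= on its own only orders the unit relative to the target if
something else has already pulled the target in; Wants= is what
actually requests it.)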


As far as I understand it, handling networking dynamically requires 
upstream code changes (although maybe fairly simple code changes?).


It seems unlikely that, whatever we decide, we'll immediately do a full 
sweep of the archive and fix everything, so it looks like our choice is 
between:


1. The long-term goal is to have no After=network-online.target 
dependencies in default boot (stretch goal: in main). Whenever we run 
into a package-fails-if-network-is-not-yet-up bug, we patch the code 
and submit upstream. Over time we audit existing users of 
After=network-online.target and patch them for dynamic networking, as 
time permits.


2. We don't expect to be able to reach no After=network-online.target 
dependencies in the default boot, so it's not a priority to avoid them. 
Whenever we run into a package-fails-if-network-is-not-yet-up bug, we 
add an After=network-online.target dependency.


Option (1) seems to be the technically superior option (and is 
recommended by systemd upstream³), but appears to require more work. I 
have limited insight into how much work that would be; someone from 
Foundations or Server probably needs to weigh in on that.


Option (2) is a formalisation of the status quo, so would seem to be 
less work.


Let the discussion begin!

¹: https://bugs.launchpad.net/ubuntu/+source/nfs-utils/+bug/1918141
²: https://github.com/ajtowns/debian-init-policy/blob/master/systemd-best-practices.pad

³: https://www.freedesktop.org/wiki/Software/systemd/NetworkTarget/





Re: System setting for changing modifier keys?

2014-11-26 Thread Christopher James Halse Rogers

On Wed, Nov 26, 2014 at 5:45 AM, Barry Warsaw ba...@ubuntu.com wrote:
Are there any plans to make changing modifier keys more human friendly
in Vivid (unity8 perhaps)?

AFAICT, the only way to change modifier keys (e.g. swap control and
caps lock, remap left & right alt) is to use the tried and true
xmodmap, but this is far from user friendly.


What's even worse is that this isn't the best place to do this - what 
you really want to do is set


XKBOPTIONS=compose:menu,ctrl:nocaps

in /etc/default/keyboard; then this is applied everywhere, even VTs.
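
(For the specific swap-control-and-caps-lock case that would be
something like

XKBOPTIONS="ctrl:swapcaps"

or ctrl:nocaps to simply make Caps Lock another Ctrl; the available
option names are listed in /usr/share/X11/xkb/rules/base.lst. You'll
need to restart X, and run setupcon for the consoles, for it to take
effect.)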




SRU team PSA - verifying your own SRU uploads

2013-11-20 Thread Christopher James Halse Rogers
Hi all!

Some time ago the SRU team announced that it's perfectly ok to verify
your own SRU uploads.

We *prefer* an unrelated verification, as it's so easy to have some
cruft hanging around accidentally fixing things.

But we prefer even more strongly to not have SRUs hanging around in
*-proposed forever, unloved, and representing work done that's not
benefiting users.

We now have a bot sweeping the SRU bugs in *-proposed commenting when a
package has been waiting to be verified for 90 days. Feel free to treat
this as a trigger to do a (real!) self-verification.

Chris Halse Rogers




Re: Should gir1.2-gudev-1.0 depend on libgudev-1.0?

2013-03-25 Thread Christopher James Halse Rogers
On Tue, Mar 26, 2013 at 8:12 AM, Andreas Hasenack andr...@canonical.com wrote:


Apologies if this was already discussed in this mailing list.

I got a bug filed against landscape-client today
(https://bugs.launchpad.net/landscape-client/+bug/1159997) complaining
that it wouldn't start on Raring unless libgudev-1.0-0 is installed.

I checked and it seems that on Oneiric, Precise and Quantal
gir1.2-gudev-1.0 pulls in libgudev-1.0, but not on Raring. And
landscape-client depends on gir1.2-gudev-1.0.

So, before I add an explicit dependency for libgudev-1.0 in
landscape-client in Raring, I thought I should ask if gir1.2-gudev-1.0
dropped the dependency on libgudev-1.0 by mistake or on purpose.



Yes, gir1.2-gudev-1.0 should pull in libgudev-1.0; this would be a 
packaging bug in gir1.2-gudev-1.0.





Re: cpufreqd as standard install?

2012-03-05 Thread Christopher James Halse Rogers
On Sat, 2012-03-03 at 03:16 -0500, John Moser wrote:
 On 03/03/2012 12:13 AM, Phillip Susi wrote:
  On 02/29/2012 04:40 PM, John Moser wrote:
  At full load (encoding a video), it eventually reaches 80C and the
  system shuts down.
 
  It sounds like you have some broken hardware.  The stock heatsink and 
  fan are designed to keep the cpu from overheating under full load at 
  the design frequency and voltage.  You might want to verify that your 
  motherboard is driving the cpu at the correct frequency and voltage.
 
 
 Possibly.
 
 The only other use case I can think of is when ambient temperature is 
 hot.  Remember server rooms use air conditioning; I did find that for a 
 while my machine would quickly overheat if the room temperature was 
 above 79F, and so kept the room at 75F.  The heat sink was completely 
 clogged with dust at the time, though, which is why I recently cleaned 
 and inspected it and checked all the fan speed monitors and motherboard 
 settings to make sure everything was running as appropriate.
 
 In any case if the A/C goes down in a server room, it would be nice to 
 have the system CPU frequency scaling kick in and take the clock speed 
 down before the chip overheats.  Modern servers--for example, the new 
 revision of the Dell PowerEdge II and III as per 4 or 5 years ago--lean 
 on their low-power capabilities, and modern data centers use a 
 centralized DC converter and high voltage (220V) DC mains in the data 
 center to reduce power waste because of the high cost of electricity.  
 It's extremely likely that said servers would provide a low enough clock 
 speed to not overheat without air conditioning, which is an emergency 
 situation.
 
 Of course, the side benefit of not overheating desktops with inadequate 
 cooling or faulty motherboard behavior is simply a bonus.  Still, I 
 believe in fault tolerance.
 
  I currently have cpufreqd configured to clock to 1.8GHz at 73C, and move
  to the ondemand governor at 70C.
 
  This need for manual configuring is a good reason why it is not a 
  candidate for standard install.
 
 
 I've attached a configuration that generically uses sensors (i.e. if the 
 program 'sensors' gives useful output, this works).  It's just one core 
 though (a multi-core system reads the same temperature for them all, as 
 it's per-CPU); you can easily automatically generate this.
 
 Mind you on the topic of automatic generation, 80C is a hard limit.  It 
 just is.  My machine reports (through sensors) +95.0C as Critical, but 
 my BIOS shuts down the system at +80.0C immediately.  Silicon physically 
 does not tolerate temperatures above 80.0C well at all; if a chip claims 
 it can run at 95.0C it's lying.  Even SOD-CMOS doesn't tolerate those 
 temperatures.
 
 As well, again, you could write some generic profiles that detect when 
 the system is running on battery (UPS, laptop) and make appreciable 
 adjustments based on how much battery life is left.

To restrict the maximum frequency when on battery / low battery?  The
last analysis I've seen, by Matthew Garrett, was that the most
power-efficient way to run modern CPUs is to have them run as fast as
possible - in order to do the pending work in the shortest possible time
- then drop down to a low-power C-state.

Thermal management is a different matter, and one which *should* be
handled reasonably currently - either by the BIOS or by the kernel's
ACPI subsystem.  Restricting the CPU clock is not one of the things this
will currently do, though.

I recall when this has been brought up before the consensus has been
that a robust solution would need to be implemented in the kernel, as
there's no guarantee that userspace code will be run in time.  Talking
to the ARM guys at Plumbers 2011 it seems like this is something they'll
be tackling.







Re: cpufreqd as standard install?

2012-03-05 Thread Christopher James Halse Rogers
On Mon, 2012-03-05 at 21:10 -0500, Phillip Susi wrote:
 On 03/05/2012 08:10 PM, Christopher James Halse Rogers wrote:
  To restrict the maximum frequency when on battery / low battery?  The
  last analysis I've seen, by Matthew Garrett, was that the most
  power-efficient way to run modern CPUs is to have them run as fast as
  possible - in order to do the pending work in the shortest possible time
  - then drop down to a low-power C-state.
 
 This is incorrect.  Lower frequency ( coupled with lower voltage )
 provides less power per instruction.  You may be confusing some of his
 writing about the p4clockmod driver, which doesn't actually lower the
 cpu voltage or frequency, but rather just forces the cpu to HLT as if
 it were idle for part of the time.  This does not give better
 efficiency, which is why he patched that driver to refuse to bind to
 the ondemand governor.
 

Less power per instruction, or less power per instruction amortized over
the run-time?  My understanding was that hitting the low C-states was
such a huge power win that the increased power per instruction was
offset by the longer C-state residency¹.

¹: http://www.codon.org.uk/~mjg59/power/good_practices.html




Re: cpufreqd as standard install?

2012-03-05 Thread Christopher James Halse Rogers
On Mon, 2012-03-05 at 21:39 -0500, Phillip Susi wrote:
 On 03/05/2012 09:19 PM, Christopher James Halse Rogers wrote:
  Less power per instruction, or less power per instruction amortized over
  the run-time?  My understanding was that hitting the low C-states was
  such a huge power win that the increased power per instruction was
  offset by the longer C-state residency¹.
  
  ¹: http://www.codon.org.uk/~mjg59/power/good_practices.html
 
 That's the article that I was thinking of that is mostly about the
 p4clockmod driver.  Using it means you have the same power per
 instruction, but also are spending less time in the deeper C states,
 and so is bad.  With correct frequency management, the lower power per
 instruction of the lower frequencies outweighs the reduced time in the
 lower C states.  There probably still is an optimal point somewhere
 between the min and max frequency where you get the benefit of both
 lower power per instruction when executing instructions, without
 giving up too much time in the lower C states, but finding that
 balance is tricky and highly hardware and load specific.  My current
 CPU operates from 1600 to 3301 MHz ( where the last 1 MHz just enables
 turbo boost ).  Disabling that turbo boost state would probably
  provide the most savings in power per instruction with minimal
  loss of deep C6 time.  A load that keeps 1600 MHz mostly busy may
  very well be more efficient at one of the intermediate frequencies,
  but figuring out which is tricky.
 One thing is almost certain: it isn't 3301 MHz.  Of course, a load
 that keeps 1600 MHz rather busy would trigger the ondemand governor to
 shift to one of the intermediate frequencies anyway, so the default
 situation is probably quite near optimal.  Load spikes that would
 cause the ondemand governor to shift to 3301 ( turbo boost ) would be
 best disabled when on battery power, but I believe that many laptops
 have proper ACPI bios that does disable turbo boost when on battery
 power, so we're good there too.
 

I think we might have wildly different interpretations of that article.
The third paragraph is:

“C states offer significant power savings, but cannot be entered when
the CPU is executing instructions. The best power savings can be
obtained by running the CPU as fast as possible until any outstanding
work is completed and then allowing the CPU to go completely idle. The
powersave governor will extend the time taken to complete the work and
reduce the amount of time spent idle. 

On any modern CPU the benefit of carrying out the work at a lower clock
and voltage will be outweighed by the loss of the idle time.

In almost any workload, powersave will consume more energy than any
other option.”

And has:

“Summary: Use ondemand. Conservative is a valid option for processors
that take a sufficiently long time to switch performance states that
ondemand will not work.”

There's a *separate* dot-point about not using p4clockmod, but I read
that as entirely separate.

Of course, that was written in 2008; IIRC this is before the age of
turbo mode, so that might change the conclusions.





X transition time!

2012-01-23 Thread Christopher James Halse Rogers
Hey all!

It's that time of the cycle again - the time to switch to a new X
server, when the Ubuntu-X team conspires to break everyone's system for
a day or so while the archive settles back into a consistent state.

But wait!  This time we're trying out a new method for X transitions
that should entirely remove the breakage. The full X stack has been
staged in a PPA, everything is built against it, and we're preparing to
copy that wholesale to the archive.

There should not be any time when the archive is in an inconsistent
state.  In fact, you shouldn't notice anything much, apart from a large
number of upgrades. This includes users of the proprietary nvidia and
fglrx drivers, with the exception of users of nvidia-96 (for extremely
old cards).

While the Ubuntu-X team has tested the packages and upgrades to the
packages, we obviously can't cover everything.  Please be on the lookout
for X problems in the next couple of days.

Thanks!
Chris




Re: [ubuntu-x] Smoke testing of Precise X server bits

2012-01-19 Thread Christopher James Halse Rogers
On Fri, 2012-01-20 at 03:01 +0100, Chase Douglas wrote:
 Hi all,
 
 We have everything ready (almost) for the upload of the X server into
 Precise. It includes X server 1.11 plus the input stack from 1.12. It
 also includes a bunch of interdependent packages that would break if you
 were only to update your X server. Here's the known issues with the PPA:
 
 * No utouch-geis support, which means most of your gestures won't work
   - Will be fixed by feature freeze
 * Multitouch in Qt from indirect devices (e.g. trackpads) is broken
   - Will be fixed in next Qt upload *after* we push these packages
 * Qt is still building for armel, so don't test this on armel yet
 * A security hole will kill your screen saver if you type
   Ctrl+Alt+KP_Multiply
   - Will be fixed in xkeyboard-config upload in the next couple hours

This is now fixed.




Korean users: request for comments on switch to ttf-nanum fonts

2011-10-25 Thread Christopher James Halse Rogers
Hi all!

In bug #836430¹ (and associated merge proposals) we have a request to
switch the default fonts for Korean locales from ttf-unfonts-core to
ttf-nanum.  It seems that ChromeOS and OS X are both using these fonts.

All the pieces are ready to make this change, but I'm not a native
Korean speaker and so I'd like to ensure that this change has broad
support before it's made.

If no-one has any objections to this change I propose that it be made in
2 weeks, after everyone's back from UDS.  That should give interested
parties long enough to raise any problems.  If people do have objections
to the change then they should be resolved first, obviously.

¹: https://bugs.launchpad.net/bugs/836430






Re: Python startup time (was: Brainstorming for UDS-P)

2011-09-28 Thread Christopher James Halse Rogers
On Tue, 2011-09-27 at 01:08 +0200, Benjamin Drung wrote:
 Am Montag, den 26.09.2011, 13:48 +0200 schrieb Sebastien Bacher:
  Le vendredi 23 septembre 2011 à 21:56 +0100, Allison Randal a écrit :
   Hi all,
   
   While we're all in the final preparations for Oneiric, it's round about
   that time in the cycle to start thinking about plans for the next cycle.
   What's on your mind?
  
  Some desktopish suggestions:
  
  - Boot and desktop performances (boot time, memory usage, power
  consumption)
 
 Moving applications from Python 2 to Python 3 will increase the startup
 time. I wrote a little script that runs hello world programs n times
 and measures the time. That reveals a startup time increase of a
 factor of two between Python 2 and 3. Running the hello world programs
 100 times on my system with a Core i5 takes the following time (in
 seconds):
 
 C:   0.04
 D:   0.14
 Haskell: 0.22
 Bash:0.23
 CShell:  0.25
 Perl:0.27
 PHP: 0.81
 Python:  1.54
 Python3: 3.22
 Ruby:0.28
 Shell:   0.10
 ZShell:  0.22
 C#:  2.55

If we cared sufficiently about C# startup time, I suspect we could cut
this significantly by doing AOT compilation on install, same as we
byte-compile python.

Perhaps I should try some benchmarking…
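
(Something like the following, paths hypothetical - mono's AOT mode
writes a hello.exe.so next to the assembly, which the runtime then
picks up automatically:

$ mono --aot /usr/lib/foo/hello.exe
$ time mono /usr/lib/foo/hello.exe

A dpkg trigger doing that at install time would be the rough analogue
of byte-compiling Python.)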




Patch pilot report: 2011/08/02

2011-08-02 Thread Christopher James Halse Rogers
Today's patch piloting involved lots of poking other people but I feel
was nonetheless productive:

https://bugs.launchpad.net/ubuntu/+source/apt-offline/+bug/819514
 • ACK'd sync, subscribed archive.  Contributor transitioned the package to
dh_python2 in Debian; this is a sync back to Ubuntu.
https://bugs.launchpad.net/ubuntu/+source/gnote/+bug/819532
 • ACK'd new-upstream-version sync.

https://code.launchpad.net/~hypodermia/ubuntu/oneiric/compiz/fix-for-bug-301174/+merge/64632
 • Bell sound should follow XDG sound theme
 • Discussed with Sam Spillsbury in #ayatana; it'll either get merged
then fixed or fixed then merged.

https://bugs.launchpad.net/ubuntu/+source/gvfs/+bug/388904
 • UI Disagreement with upstream
 • Specific patch seems to have problems; 
 • Sent email to desktop list to begin discussion on whether we want to
carry this as a distro-patch.






Patch Pilot report, 2011/07/26

2011-07-25 Thread Christopher James Halse Rogers
I tried to do a sweep of the “Needs Fixing” branches in the hope of
quickly knocking out some of the noise.  Of course, it always takes
longer than you think…

https://code.launchpad.net/~pro-mathesh812004/ubuntu/oneiric/scim-tables/oneiric/+merge/64119
• Set status to In Progress, noted what the submitter needed to do to
get the merge back on the queue.
• Asked about the “All rights reserved” copyright statement on the
submitted tables.

https://code.launchpad.net/~fougner/ubuntu/oneiric/kdepim/fix-for-791635/+merge/65073
• Checked out the upstream status; there's some ongoing discussion.
• Set to In Progress

https://code.launchpad.net/~dpolehn-gmail/ubuntu/oneiric/pxlib/fix-755924-use-pkg-config/+merge/65151
• Last Needs Fixing review has not been responded to.
• Set to In Progress

https://code.launchpad.net/~neonofneophyte/ubuntu/oneiric/notification-daemon/fix-for-557887/+merge/65720
• Cosmetic issue in the description of notification-daemon.
• Patch was sent upstream to Debian a month ago, still sitting in the
BTS.  Made minor fixes to the packaging, and uploaded.

https://code.launchpad.net/~elmo/ubuntu/oneiric/libstomp-ruby/bug-707317/+merge/67463
• Marked as In Progress - this looks like it'll be uploaded to Debian
and then synced back, so doesn't need to be on the queue.

https://code.launchpad.net/~pali/ubuntu/oneiric/pulseaudio/pulseaudio/+merge/67286
• Marked as In Progress - previous Needs Fixing review comments correct
and not yet addressed

https://code.launchpad.net/~vanvugt/ubuntu/lucid/nvidia-graphics-drivers/fix-627022/+merge/66255
• Upload looks good, pinged Alberto for an ACK as he manages the
restricted drivers






Re: Very odd issue, dead spots on screen

2011-07-10 Thread Christopher James Halse Rogers
On Sun, 2011-07-10 at 22:04 -0400, John Moser wrote:
 I can't click links or pictures or highlight text...
 
 so I unmaximize Chrome, move it, and then click it.
 
 Can't highlight in any other app either ... it's that rectangle of the
 screen, it's like there's an invisible window overlayed and I can't
 click or right click through it.
 
 I don't know how to diagnose this.
 

If you run “xwininfo” from a terminal and click on the dead area of
the screen you'll probably find there's an InputOnly window, probably
also with the override-redirect flag set.  If you're lucky it will have
_NET_WM_PID set (which xprop can show) so you'll be able to work out
the pid, and hence program, which owns it.
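
Roughly (the window id here is made up):

$ xwininfo | grep -E 'Window id|Class|Override'
$ xprop -id 0x3e00004 _NET_WM_PID

then match the pid against ps output.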

This can sometimes be caused by compiz stacking bugs.  It's generally
not an X issue; the server is just doing what's been asked of it.

Chris




Re: IronPython and Mono are very old. How can we get an update?

2011-04-07 Thread Christopher James Halse Rogers
On Fri, Apr 8, 2011 at 2:31 AM, Vernon Cole vernondc...@gmail.com wrote:
 On Thu, Apr 7, 2011 at 6:26 AM, chalserog...@gmail.com
 chalserog...@gmail.com wrote:
 [...]
 Mono's upstream 2.10 release page suggests that they're shipping both
 IronPython and IronRuby in the main mono distribution, but looking at
 the source I can't find it, so I'm not sure what's happening there.

 IronPython and IronRuby are now building from the same source.
  https://github.com/IronLanguages/main

 Regardless, Mono 2.10.1 is now available in Debian experimental, and
 so will be available early on in the Oneiric (what will become Ubuntu
 11.10) development cycle.  Since we're very close to the end of the
 Natty cycle upgrading to 2.10.1 presents too big a regression risk to
 pull it in this time.

 What needs to be done is to update the dlr-languages² source package.
 This is maintained by the pkg-cli-libs team in Debian, and we sync it
 across from there.  As we're well after Natty feature freeze, updating
 in Natty would require a Feature Freeze exception¹.  As there seems to
 be only one package with a(n optional) dependency on IronPython in the
  archive it *might* be possible to get an FFe and have the new package
  in the Natty release, but it would be more reasonable to aim for Oneiric.

 One can RUN IronPython 2.7 on Mono 2.6.7, but not BUILD it,
 http://ironpython.codeplex.com/wikipage?title=IronPython%20on%20Mono
 So I'm not sure how well an FFe would work.

Oh.  Yeah, that's not going to work then!

 I'm assuming that the
 build engine runs on the same Mono version as the release?

Yes.  The policy is that everything in the archive has to be built by
tools in the archive.


  Perhaps we will need to maintain a .NET 2.0 compatible binary of IPy
 on the IronPython site until well after Debian releases with Mono 2.10
 under the hood.


Debian stable is going to have Mono 2.6.7 until the next release,
which is likely to be about 2 years away.  There's likely to be a
backport done, though, so it should (eventually) be reasonably easy
for Debian users to have Mono 2.10.  Ubuntu 11.10 will have Mono 2.10,
Ubuntu 11.04 won't - but, again, is likely to get *some* sort of
backport, even through a PPA.  Whether that makes it worth maintaining
a 2.0-compatible IPy is up to you :).

 If you'd like help in the mechanical process of updating the package,
 the Ubuntu packaging guide³ has a good rundown, or feel free to ask -
 IRC #debian-cli on oftc.net is friendly and generally active.  Since
 it looks like dlr-languages is one of the more complex things to
 package, I could probably find the time to update it in the next month
 or so if you're not comfortable with the process.

 ¹: https://wiki.ubuntu.com/FreezeExceptionProcess
 ²: http://git.debian.org/?p=pkg-cli-libs/packages/dlr-languages.git;a=summary
 ³: https://wiki.ubuntu.com/PackagingGuide

 Okay, I just looked at the link, and would require a month or so for
 me to figure out how to do it. I have been a bit hesitant about
 changing things I don't understand, ever since I crashed the Internet
 in several neighboring states by incorrectly updating a gateway
 router. (Long story, and the Internet was much smaller then.) So, yes,
 please make those changes when you can.  :-)

Well, we'd be reviewing and sponsoring your changes, so if you *did*
break the Internet again it'd be our fault :).


 Make sure that the source is coming from github, not codeplex.
 I'll see that the build patches get into a new tarball, They are now
 in the git trunk but not backported to the 2.7 maintenance branch yet.
 How often do you fetch a new tarball?

As often as the maintainer feels like it.  For well maintained
packages (here's where you can come in ☺) that's generally once per
upstream release, unless there's some problem - like it not being
buildable against Mono 2.6.7 :).

There's a certain amount of impedance mismatch between Debian and
predominantly-Windows upstreams like Iron*, but that's something that
can largely be worked around.  The biggest problem is probably the
different IronPython/IronRuby release schedules which mean we can't
have IronPython 2.7 and IronRuby 1.0 unless we have two different
source packages, and even then there's the problem of the shared DLR
component.



Re: IronPython and Mono are very old. How can we get an update?

2011-03-20 Thread Christopher James Halse Rogers
On Sun, 2011-03-20 at 18:28 -0600, Vernon Cole wrote:
 Chris:
 Thanks for the detailed information. That was pretty much what I was
 looking for. It makes everything make sense.
 
 I had tried downloading the mono source and rebuilding -- what an
 error!  I've spent most of my morning removing the new, broken mono
 and replacing the stock version. Ugh!

Hm.  I should have linked http://apebox.org/wordpress/linux/370/ which
is a description of how to do a parallel mono install in a way that
works with the Debian CLI policy and tools.




Re: build-from-branch into the primary archive

2011-02-27 Thread Christopher James Halse Rogers
On Thu, 2011-02-24 at 09:20 +0100, Martin Pitt wrote:
 Steve Langasek [2011-02-19 11:48 -0800]:
  That sounds reasonable to me.  How do you gpg sign a tag in bzr?  I've never
  seen any information about this in the UDD documentation.
 
 Neither have I, I'm eager to learn whether and how that works as well.
 But I think it's a prerequisite to maintain the current GPG strength
 security.

bzr sign-my-commits (unsurprisingly!) adds a GPG signature to all the
commits you've made in the branch, and there's an option exposed in the
bzr configuration capplet in bzr-gtk to always sign your commits.

I'm not aware of any way of signing a tag, although simply requiring all
top-level commits to the Ubuntu branch be GPG-signed would probably be
sufficient.
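
For reference, roughly:

$ bzr sign-my-commits
# and, to sign by default, in ~/.bazaar/bazaar.conf under [DEFAULT]:
#   create_signatures = always

(the option name is from memory, so worth checking against the bzr
documentation before relying on it).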

Chris





Re: HDMI and automatically hardware recognition

2011-02-08 Thread Christopher James Halse Rogers

I see Luke has taken the audio half of this, as for the video part…

On Tue, 2011-02-08 at 13:36 +0100, Yann Santschi wrote:
 Hi everybody ! I just found one point that can really be improved in
  future releases of Ubuntu. First I will explain how I found it, then
  what I think about it, and third (the most important point), a
  proposal to address this gap...
 
 I got Ubuntu because I was worried about Windows (it's slow, it has a
 lot of glitches,...). With Ubuntu I can work without complications,
 until today... I bought a TV and wanted to connect it through HDMI
 cable. I configured my nvidia-settings and chose digital audio ouptut.
 It works fine, but...
 Every time I reconnect my TV, I must :
 - Manually detect connected screens
 - Set the correct screen settings
 - Change audio output to digital
 Every time I disconnect my TV, I must :
 - Set the correct screen settings (before disconnecting)
 - Change audio output to analog (otherwise laptop speakers don't work).
  In Windows, it works automatically in most cases (sometimes not,
 it's just another Windows glitch...).
 

This is a combination of the nvidia-settings tool not being integrated
into GNOME, the binary nvidia drivers not supporting the standard XRandR
interface for monitor configuration, and no-one investing the time to
make the standard GNOME display tools talk the proprietary NV-CONTROL
protocol.

If we had the code, we could fix the nvidia drivers - indeed, this
*should* Just Work™ with the open-source nouveau drivers, and does for
me.  If convenience is more important than raw 3D performance you may
find the nouveau drivers to be better for you.

Of course, there's also the chance that hotplug detection won't work for
your system with the nouveau drivers, but at least in that case we've
got a chance of fixing it.

 I think that it's a pain for end-users (in this case I am) to have to
 always (re)set these damn settings to watch a movie, to connect a
 projector, to connect the external display. I know that's possible to
 script that, but as an end-user, I just don't want to : I want to use
 my computer and not to configure it all the time. When I connect my
 usb flash drive, it's automatically mounted and ready-to-use. Why it
 isn't the same with external displays ?
 
 A way to fix this problem, is to create something such as an hardware
 profile that contains settings and when a new display and/or audio
 output are detected (I think it's possible to detect a hardware
 change), a window / widget asks the user if he want's to apply saved
 settings or if he want's to set new settings.
 
 What do you think about that ?

I think that what you actually want is for things to work automatically,
like with USB, rather than have a pop up window.  :)

This is how it works for drivers which support XRandR and generate
hotplug events right now - if you plug in a monitor, it will be set up
the same way as it was last time, or if it hasn't been plugged in before
it will have a sensible default set up.  The big 3 - ati, intel, nouveau
- all have the capabilities to do this.

Similarly for audio (although I'm less familiar with the hardware
capabilities here) - if we see an HDMI audio device is connected, and
totem starts playing a movie, sending that stream over the HDMI
connection is a good default.




Re: [ubuntu-x] New X stack in xorg-edgers

2011-01-14 Thread Christopher James Halse Rogers
On Fri, 2011-01-14 at 14:02 -0800, Bryce Harrington wrote:
 Thanks Felix.  Weird we didn't bang into that problem on our machines,
 but Chris is uploading rebuilds for these drivers presently.
 
 We'll send a notice once these are corrected.
 

Corrected, via a combination of rebuilds, a new wacom upstream version,
and (temporarily) dropping -siliconmotion and -openchrome from
xserver-xorg-video-all.

The proprietary fglrx and nvidia drivers obviously don't work against
this new Xserver.  Due to packaging bugs, they *also* won't prevent the
upgrade going through, leaving a non-functioning X until they are
removed.

Please remove the fglrx or nvidia drivers before testing.

If you're not using a proprietary driver, -siliconmotion, or
-openchrome, xorg-edgers is good to go.


 Bryce
 
 On Fri, Jan 14, 2011 at 04:13:48PM -0500, Felix Kuehling wrote:
  Hi Bryce,
  
  The PPA is still not usable:
  
  xserver-xorg-video-siliconmotion, openchrome and vmware are broken
  (still depend on the old xorg-video-abi-8.0).
  
  If I try to remove them, other packages break due to unresolved
  dependencies:
  
* xserver-xorg-video-all depends on xserver-xorg-video-*
* xserver-xorg depends on xserver-xorg-video-all
* xserver-xorg-core depends on xserver-xorg-video-all
  
  In the end the only way to resolve the broken dependencies is to not
  update at all or completely uninstall Xorg.
  
  Similar story on the input side with xserver-xorg depending on
  xserver-xorg-input-all depending on xserver-xorg-input-wacom, which is
  broken due to a dependency on an old xorg-input-abi-11.0.
  
  Regards,
Felix
  
  On Fri, 2011-01-14 at 15:54 -0500, Bryce Harrington wrote:
   We're going to be moving natty to its new X stack within the next week
   or two.  This includes snapshots for the upcoming xserver 1.10 and mesa
   7.10, along with some new driver bits.  Currently this stack is
   available in xorg-edgers, and maps well to what we'll be uploading
   shortly.
   
   You can help us pre-test these bits by installing xorg-edgers on your
   systems at this time.  Our testing has shown it to be quite stable, but
   the devil's in the corner cases when it comes to X, so your testing will
    help us get a head start on making it work better for your HW.
   
   1.  The PPA is available at:
   
   https://launchpad.net/~xorg-edgers/+archive/ppa
   
   2.  Here's a simple way to install:
   
   sudo add-apt-repository ppa:xorg-edgers/ppa
   sudo apt-get update
   sudo apt-get upgrade
   
   Then reboot your system to get to the new bits.
   
   3.  If you want to go back to stock Ubuntu, there is a corresponding
   ppa-purge tool:
   
   sudo ppa-purge xorg-edgers
   sudo apt-get update
   sudo apt-get upgrade
   
   4.  If you happen to find bugs, you can report them the usual way:
   
 ubuntu-bug xorg
   
   Include a photo of the screen if appropriate (e.g. screen corruption).
   
   Thanks,
   Ubuntu X Team
   
   
  
  -- 
   _Felix Kuehling
   \ _  |   MTS Software Development Eng.
   /|_| |   SW-Linux Base Gfx
  |__/ \|   T 905.882.2600 x8928
  
  
  
  -- 
  Ubuntu-x mailing list
  ubunt...@lists.ubuntu.com
  Modify settings or unsubscribe at: 
  https://lists.ubuntu.com/mailman/listinfo/ubuntu-x
 






Re: maverick radeon 6.13.2 SRU?

2010-09-30 Thread Christopher James Halse Rogers
On Thu, 2010-09-30 at 10:56 -0700, Bryce Harrington wrote:
 On Thu, Sep 30, 2010 at 02:47:02PM +0100, Daniel J Blueman wrote:
  What is the consensus an SRU for the radeon 6.13.2 Xorg driver in Maverick?
  
  http://lists.x.org/archives/xorg-driver-ati/2010-September/017206.html
  
  It is probably too late to ship with the media, so is eligible for a
  post-release update, right?
 
 No, the SRU team has typically been strictly against doing new upstream
 versions of video drivers as SRUs, so ubuntu-x usually does not do this.

I've also seen some IRC reports of this driver release dramatically
reducing performance under KDE, although that's somewhat of an anecdote
at this point.




Xserver 1.9 transition

2010-08-09 Thread Christopher James Halse Rogers
Summary for the impatient:

A new X server is soon to be uploaded which requires all the
drivers to be rebuilt.  Be careful when upgrading in the next few
days.


As promised earlier in the cycle, we've got the second X server
transition coming up.  This will take us to X server 1.9, which features
improvements to startup time, some memory usage improvements, and many
DRI2 fixes.

The 1.9 server has a new input and video ABI which means existing driver
packages will break.

The server needs to be uploaded first so the new drivers build against
the correct ABI, which means that there will be a period where safe
upgrades will have held-back packages.  The dependencies should ensure
that you won't accidentally get a non-working combination of X server
and drivers, but be careful if an upgrade wants to remove X packages.

Happy testing!




Re: Maverick graphics driver updates?

2010-07-08 Thread Christopher James Halse Rogers
On Thu, 2010-07-08 at 11:05 +0100, Daniel J Blueman wrote:
 At present in Maverick repositories, we have:
 
 mesa 7.8.1
 xserver-xorg-video-radeon 6.13.0
 xserver-xorg-video-intel 2.11.0
 xserver-xorg-video-nouveau 0.0.16+git20100518
 
 Does it make sense to move to mesa 7.8.2, intel 2.12.0, radeon 6.13.1
 and potentially a newer nouveau snapshot as early as possible, to
 maximise exposure?

These are all planned - in fact, we additionally plan to move to mesa
7.9 and Xserver 1.9.

A launchpad bug won't speed this process.  You're welcome to help,
though - come and say hi in #ubuntu-x!

I was hoping to get Xserver 1.9 in first.  All the drivers will need a
rebuild after this anyway, so it's a good time to update them.

Chris Halse Rogers




Upcoming X breakage

2010-06-06 Thread Christopher James Halse Rogers
Summary for the impatient:

A new X server is about to be uploaded which requires all the
drivers to be rebuilt.  Be careful when upgrading in the next few
days.


So far in Maverick X has been a relatively sedentary beast, not much
changed from Lucid.  It's time for X to get a bit more interesting…

We're going to be merging the new 1.8 X server and associated drivers
from Debian over the next couple of days.  This server has a new input
and video ABI, which means existing driver packages will break.

The server needs to be uploaded first so the new drivers build against
the correct ABI, which means that there will be a period where safe
upgrades will have held-back packages.  The dependencies should ensure
that you won't accidentally get a non-working combination of Xserver and
drivers, but be careful if an upgrade wants to remove X packages.

There will be another X transition later in the Maverick cycle when we
switch to X server 1.9.  I'll send out a similar notice then, too.

Happy testing!




Re: i915 - kms - lucid - dri(?) problem

2010-04-29 Thread Christopher James Halse Rogers
It's highly likely that you're seeing bug 541511[1] or one of its many
duplicates.  There's a kernel patch that *probably* fixes this getting
close to being applied upstream, so the hope is that we can fix this in
an SRU.

 [1]
https://bugs.edge.launchpad.net/ubuntu/+source/xserver-xorg-video-intel/+bug/541511




Re: [RFC] MP3 player management via Rhythmbox etc.

2010-04-29 Thread Christopher James Halse Rogers
On Thu, 2010-04-29 at 14:37 -0400, John Moser wrote:
 On Thu, Apr 29, 2010 at 2:31 PM, Brandon Holtsclaw
 m...@brandonholtsclaw.com wrote:
  I use Rhythmbox.  I don't care about Amarok, or whatever.  This will
  be framed for Rhythmbox; as for Kubuntu and whatever else, just cry
  Feature Parity until someone duplicates work solving the same
  problem again.
 
  No need. In this case, Amarok since 1.4-ish has handled media players
  both generic and branded ones ( even media cards ) in this way, I think
  this will be one of those rare cases where the shoes of feature parity
  are reversed :)
 

This has also been (mostly) supported in Rhythmbox[1] and Banshee for
quite some time, via the .is_(audio|media)_player file.  I'd guess that
Amarok is also reading these files?  I'm more familiar with Banshee than
Rhythmbox, but I'm fairly sure both will detect removable storage with
an .is_(audio|media)_player file as a generic DAP, and support the “drag
a playlist to the DAP” workflow you're after.

There's currently no tool I know of to generate the .is_(audio|
media)_player file, but I believe this would be a pretty simple Nautilus
extension to write.
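
For reference, the marker file itself is trivial to create by hand - an
empty file is enough for basic detection, and the optional keys shown in
the comments are from memory, so check the FAQ linked below for the
authoritative list:

  # the mount point is just an example
  touch /media/PLAYER/.is_audio_player
  # optional keys further describing the device, e.g.:
  #   audio_folders=Music/
  #   output_formats=audio/mpeg,application/ogg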

The more advanced parts of your proposed interface - transcoding when
space is tight, multiple transcoding policies, and so on - would need to be
implemented, but the basics should work right now.


[1] live.gnome.org/Rhythmbox/FAQ




Re: Kernel bug in Intel driver

2010-04-17 Thread Christopher James Halse Rogers
On Fri, 2010-04-16 at 09:54 +0100, Mark Ellse wrote:
 Also, what about this one, which freezes graphics on a large number of
 Intel boards? (Is Intel a small, insignificant manufacturer whose
 products are not a priority?)
 
 https://bugs.launchpad.net/ubuntu/+source/xserver-xorg-video-intel/+bug/456902
 
I'm not entirely sure what more you expect from that bug - its
importance is high, it has had an upstream developer participating, and an
awesome member of the x-swat team providing PPA testing packages.  The
fact that it remains unfixed is not due to any lack of work on it!

The X team have been monitoring this, and other similar problems - the
i8xx chips have not had a good time in Lucid.  As you can see on that bug
there has been lots of work upstream to isolate and fix the cause.  

Tracking the various upstream bugs you'll see that there have been a
number of false starts, but it looks like there *might* be a fix soon.
That fix won't be going into Lucid's release, as it's a big patch which
touches code for all the Intel cards, and hence might introduce new
problems in the i9xx chips.  If all goes well, this might make it into
lucid-updates and 10.04.1.

Because of the serious stability issues i845 and i855 users have been
reporting we're testing disabling both 3D and KMS on these chips.  If
feedback shows that this still isn't stable, we've got a fallback plan
to switch to vesa on those cards, which has been reported stable.
That's a serious functionality regression, but might be necessary to
provide a sufficiently stable desktop.  In this case, users will be able
to manually enable the -intel driver, KMS, and 3D if they wish - these
bugs are incredibly timing-specific, with some setups crashing almost
immediately after starting X, and others crashing less than once a week.




Call for Testing: Nouveau

2010-03-18 Thread Christopher James Halse Rogers
Hi all,

The drive towards stabilising Nouveau for a rocking Lucid LTS release
has resulted in large changes to the packaging stack.  Nouveau's kernel
module is now in the main kernel package, which fixes most of the
problems people have previously reported.  If you have previously tested
nouveau and found your VTs didn't work, or that plymouth splash didn't
work, or that you just got no display at all please test again with the
2.6.32-16 kernel.  If you've updated in the last day, you should already
have this kernel installed.  You can check which kernel you are running
by running “uname -r” from a terminal - it should print something like
“2.6.32-16-generic”.  The important bit is 2.6.32-16.

This work means that nouveau now has a lack of good bugs reported
against it.  I'm hoping that the fine testers of Ubuntu can rectify
this!  If you've got an nvidia graphics card, please try disabling the
restricted drivers and testing nouveau for a day.  Just using your
computer with the nouveau drivers and reporting any problems you
encounter will be useful, but if you want to test more systematically,
there's a list[1] of things to check on the wiki.

Filing a good bug is as simple as running “ubuntu-bug xorg” and
describing the problem you see, and how you can trigger it, in the
report.  The apport hooks will magically attach all the relevant logs
for you.

If you run into a bug and want to invest some extra effort to help get
it fixed, after reporting the bug you can test with a newer version of
Nouveau.  These packages are available in the xorg-edgers/nouveau PPA.  
Further instructions are on the wiki[2].

Your feedback is greatly appreciated.  Let's go find some bugs!

[1]: https://wiki.ubuntu.com/X/Testing/NouveauEvaluation
[2]:
https://wiki.ubuntu.com/X/Testing/NouveauEvaluation#upstreamtesting 




Re: Lots of Kernel related brakage in Lucid (amd64)

2010-03-15 Thread Christopher James Halse Rogers
On Mon, 2010-03-15 at 05:09 +, Scott Beamer wrote:
 Just downloaded the latest nightly live CD and installed.
 
 After reboot computer freezes at splash screen.

This might be plymouth; see if hitting SysRq+Alt+k kills the splash
screen and brings up X.
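
If SysRq appears to do nothing at all, first check that the magic SysRq
key is enabled - a quick sketch, where 1 means all SysRq functions are
allowed:

  cat /proc/sys/kernel/sysrq
  echo 1 | sudo tee /proc/sys/kernel/sysrq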

 
 Reinstalled from a previous daily Live CD (about a week old) and ran sudo
 apt-get update && sudo apt-get dist-upgrade.
 
 Rebooted, screen freeze again at Ubuntu splash screen.
 
 Rebooted and reverted to the previous kernel and was able to log in just fine.
 
 On a related note
 
 Upgrading the kernel-headers package to the latest causes apt/dpkg to lock up
 while unpacking.
 
 That took a lot of playing with to work around.
 
 Workaround for now is to remain with older Kernel version 2.6.32-15
 
 
 Hardware:
 
 ASUS Laptop
 Intel Core i5-430M CPU
 4GB of DDR3 1066MHz SDRAM
 500GB Hard Drive 
 NVidia GT325M Graphics Engine with 1GB DDR3 Dedicated VRAM (also Intel 
 Onboard GMA and sound).

Ah!  Someone's got some switchable graphics.  That's not going to be as
well-tested as other setups, so you might run into problems others
aren't seeing.

If you feel like some additional testing, the NouveauEvaluation wiki
page[1] has instructions for testing more recent upstream versions of
Nouveau.  There's at least one commit in there relevant to switchable
graphics.

Regardless of whether you test the newer nouveau, it would be useful if
you could file a bug on launchpad.  “ubuntu-bug xorg” will attach a
bunch of useful information.


[1]: https://wiki.ubuntu.com/X/Testing/NouveauEvaluation#upstreamtesting





Call for Testing: Nouveau

2010-03-09 Thread Christopher James Halse Rogers
Hi all,

The drive towards stabilising Nouveau for a rocking Lucid LTS release
has resulted in large changes to the packaging stack.  Nouveau's kernel
module is now in the main kernel package, which fixes most of the
problems people have previously reported.  If you have previously tested
nouveau and found your VTs didn't work, or that plymouth splash didn't
work, or that you just got no display at all please test again with the
2.6.32-16 kernel.  If you've updated in the last day, you should already
have this kernel installed.  You can check which kernel you are running
by running “uname -r” from a terminal - it should print something like
“2.6.32-16-generic”.  The important bit is 2.6.32-16.

This work means that nouveau now has a lack of good bugs reported
against it.  I'm hoping that the fine testers of Ubuntu can rectify
this!  If you've got an nvidia graphics card, please try disabling the
restricted drivers and testing nouveau for a day.  Just using your
computer with the nouveau drivers and reporting any problems you
encounter will be useful, but if you want to test more systematically,
there's a list[1] of things to check on the wiki.

Filing a good bug is as simple as running “ubuntu-bug xorg” and
describing the problem you see, and how you can trigger it, in the
report.  The apport hooks will magically attach all the relevant logs
for you.

If you run into a bug and want to invest some extra effort to help get
it fixed, after reporting the bug you can test with a newer version of
Nouveau.  These packages are available in the xorg-edgers/nouveau PPA.  
Further instructions are on the wiki[2].

Your feedback is greatly appreciated.  Let's go find some bugs!

[1]: https://wiki.ubuntu.com/X/Testing/NouveauEvaluation
[2]:
https://wiki.ubuntu.com/X/Testing/NouveauEvaluation#upstreamtesting 





Re: Making Resolution Setting More User-Friendly

2010-01-04 Thread Christopher James Halse Rogers
On Tue, Jan 5, 2010 at 12:49 AM, Markus Hitter m...@jump-ing.de wrote:

 Am 01.01.2010 um 01:13 schrieb Craig Van Degrift:

 I have been troubled by how confusing it can be for new users of
 Ubuntu/Kubuntu/Xubuntu to get their old display hardware showing
 higher
 resolution.  I have attempted to write a WWW page that is designed
 to help
 these newbies as well as confused old-timers like myself.  Could
 anyone
 interested in helping with this look at

 http://yosemitefoothills.com/UbuntuLucidDisplayNotes.html

 and give me feedback.

 I have the same problem and solve it by putting something like this
 into /etc/X11/Xsession.d/45custom_xrandr-settings ($HOME/.Xsession no
 longer works):

 xrandr --newmode 1280x1024 SGI 134.400 1280 1296 1440 1688 1024
 1025 1028 1066 +hsync +vsync
 xrandr --addmode VGA1 1280x1024 SGI

 You can get the required numbers with PowerStrip, a tool for MS
 Windows.
You can also get them from the “cvt” command.
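
For example (a sketch - the mode name and timing values come from cvt's
output, and VGA1 is whatever output name xrandr reports for your monitor):

  cvt 1280 1024 60
  # paste the Modeline values cvt prints into --newmode, then:
  xrandr --newmode "1280x1024_60.00" <values printed by cvt>
  xrandr --addmode VGA1 1280x1024_60.00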

I have toyed with the idea of adding this to a “Do you not see the
resolution you're after” button in gnome-display-properties.  It would
actually be quite easy to implement, although I think it'd require
extending the xrandr plugin for gnome-settings-daemon a bit.

For added bonus points, it would talk to gdm's gnome-settings-daemon, too.



Re: [minor] Tray Tooltips

2009-09-29 Thread Christopher James Halse Rogers
On Tue, 2009-09-29 at 12:43 -0400, Brian Vidal wrote:
 If we look at the tooltips on the tray we will see a big inconsistency  
 between them.

This sounds like a good papercut candidate.  Consistency among the
default notification icons seems a worthy goal.

 
 I suggest to use the volume-applet way of displaying tooltips.
 
 
 Volume Applet
 <b>Out: xx%</b>
 {volume in dB}
 {device name}
 
 nm-applet
 Network Manager
 
 <b>Network: {network name}</b>
 {signal strength} %
 Time connected: {uptime} or IP: {ip} or Speed: {speed}
 
 Power Manager
 gnome-power-manager
 
 <b>Battery: {battery} %</b>
 Time available: {h} {m}
 Battery Status:  {capacity}% - Good.

Battery capacity might be a bit niche for a default tooltip!




Re: [minor] Tray Tooltips

2009-09-29 Thread Christopher James Halse Rogers
On Tue, 2009-09-29 at 18:29 -0400, Brian Vidal wrote:
 On Tue, 29 Sep 2009 18:18:49 -0400, Christopher James Halse Rogers  
 r...@ubuntu.com wrote:
 
 So, should I file a bug?
 Or will *somebody* fix it without notice?

As this isn't a clear-cut case, I'd wait for further comments on this
idea and the design.  Once we have a clear sense of what we want, *then*
file bugs against the relevant packages.

 
  On Tue, 2009-09-29 at 12:43 -0400, Brian Vidal wrote:
  If we look at the tooltips on the tray we will see a big inconsistency
  between them.
 
  This sounds like a good papercut candidate.  Consistency among the
  default notification icons seems a worthy goal.
 
 
  I suggest to use the volume-applet way of displaying tooltips.
 
 
  Volume Applet
  <b>Out: xx%</b>
  {volume in dB}
  {device name}
 
  nm-applet
  Network Manager
 
  <b>Network: {network name}</b>
  {signal strength} %
  Time connected: {uptime} or IP: {ip} or Speed: {speed}
 
  Power Manager
  gnome-power-manager
 
  <b>Battery: {battery} %</b>
  Time available: {h} {m}
  Battery Status:  {capacity}% - Good.
 
  Battery capacity might be a bit niche for a default tooltip!
 
 
 -- 
 Using Opera's revolutionary e-mail client: http://www.opera.com/mail/
 






Re: Fwd: [Ubuntu-gaming] AMD/ATI vs NVIDIA vs Intel

2009-07-05 Thread Christopher James Halse Rogers
On Sun, 2009-07-05 at 22:03 -0400, Danny Piccirillo wrote:
 Fwd'd. 
 
 -- Forwarded message --
 From: John Vivirito gnomefr...@gmail.com
 Date: Sun, Jul 5, 2009 at 21:37
 Subject: Re: [Ubuntu-gaming] AMD/ATI vs NVIDIA vs Intel
 To: ubuntu-devel-discuss@lists.ubuntu.com
 
 
 On 06/22/2009 09:36 AM, Remco wrote:
  On Sun, Jun 21, 2009 at 10:57 AM, Ryan Swartserjndest...@gmail.com
 wrote:
  Well, Intel adopting Moblin might see them becoming a much stronger
 Linux
  partner, but then again, you never know what clandestine things are
 going on
  in the background dealings (although moblin looks really cool now,
 so
  personally I'm glad about Intel's involvement).
 
  And I think Intel is heavily involved with the Xorg project. Aren't
 a
  couple of Intel guys leading the project now?
 
  Nvidia drivers have never given me trouble under Linux
 
  If you want to try the Nouveau drivers, you should install Fedora 11
  on a USB stick and add nouveau.modeset=1 to GRUB. That's a really
 easy
  way to test it, without messing up your Ubuntu install.
 
  Remco
 
 
 We have the free drivers in the following PPA
 deb http://ppa.launchpad.net/xorg-edgers/ppa/ubuntu karmic main
 deb-src http://ppa.launchpad.net/xorg-edgers/ppa/ubuntu karmic main
 Please let it be known the packages in this repo are not
 stable and may not work.
 Had to add that so people don't blame me when they fail
 to work :).
 I have nothing to do with those packages other than being a user

If you're wanting to play with nouveau, the xorg-edgers' nouveau ppa[1]
is a fine place to start.

[1] https://edge.launchpad.net/~xorg-edgers/+archive/nouveau





Re: Infrastructure vs. Interface

2009-07-02 Thread Christopher James Halse Rogers
On Thu, 2009-07-02 at 13:40 -0500, Patrick Goetz wrote:
...
snip
...
 One area where I agree that there is considerable room for improvement 
 and something that would have to be fixed in order for your idea to work 
 is package dependencies. Currently most people use stuff like debhelper 
 to build packages, which means AFAIK that dependencies are set against 
 the currently installed packages on the machine on which the package is 
 being built, which usually translates into dependencies that are 
 stricter than they need to be.  In the Real World(tm) I often find 
 myself working on a project where I must have feature X in application 
 Z, which is only available in version N+1, while version N is available 
 in the installed distro.  No problem; I go to the package pool for 
 distro-newer, grab version N+1 of application Z and try to dpkg -i it, 
 only to be told something like unmet dependencies:  this program 
 requires libncurses_5.7.1.2 and you only have libncurses_5.7.1.1 
 installed.  For reals?  Sorry, I'm skeptical.  Of course I can download 
 the source package and use dpkg-buildpackage, but at this point you lose 
 about 90%+ of whatever users were still left trying to solve their 
 problem in a somewhat sane fashion.

The premise here is inaccurate: packages don't pick up versioned library
dependencies based on the installed version.  The observation is
accurate, though - the versions can be unnecessarily strict.  

The versioned library dependency information is taken from the shlibs
file provided by the library package.  The shlibs version is meant to be
bumped each time new interfaces are added - otherwise the dependency
would be incorrect.  It is also unnecessarily strict: even if the
application doesn't link to any of those new interfaces, it still gets a
versioned dependency on the most recent version that added
interfaces.

dpkg has fairly recently grown a different dependency mechanism, based
on versioning the actual ABI symbols; that allows less conservative,
yet still correct, dependency versioning.  This still requires manual
work, and isn't mandatory, so not all libraries are going to use
symbols files anytime soon.
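
For illustration (a hypothetical libfoo; the version numbers are made up),
the difference looks roughly like this.  A shlibs entry gives one minimum
version for the whole library:

  libfoo 1 libfoo1 (>= 1.4)

whereas the corresponding symbols file gives a minimum version per
exported symbol:

  libfoo.so.1 libfoo1 #MINVER#
   foo_init@Base 1.0
   foo_frobnicate@Base 1.4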




Re: Replace PulseAudio with OSS v4?

2009-06-20 Thread Christopher James Halse Rogers
On Sat, 2009-06-20 at 01:47 -0400, Danny Piccirillo wrote:
 After reading this post on Insane Coding (via Slashdot) it seems that
 PulseAudio is actually a very bad choice in the long term due to
 horrible latency
[Data needed]
  and lower sound quality
[Data needed]
 and that we should work to use OSS v4. It's a long read but seems to
 be worth it. What do others think about this? 

That the blog post was long on verbiage and contained no data.  Also
that the author concentrated on the audio-mixing role of PulseAudio to
the exclusion of its other, in my opinion more interesting, features
such as audio hotplug.  Oh, and that the comments suggest that the OSSv4
kernel components would apparently require extensive work to be accepted
into mainline.

There may be value in considering OSS v4, but the foundation of that
consideration should be actual data.  I don't believe that blog post (a)
contained any data, or (b) made a particularly strong argument for OSS
v4 over ALSA.

Members of the audio-team may have more interesting and informed
contributions.




Re: Replace Tomboy with Gnote?

2009-06-16 Thread Christopher James Halse Rogers
On Tue, 2009-06-16 at 23:32 -0400, Danny Piccirillo wrote:
 Let's all refrain from a mono flamewar. We all know where we stand (if
 you don't, look elsewhere to learn more!) and won't change anyone's
 opinion. 
 
 
 Anyways, someone on the forums started a discussion about this and I
 was wondering what you guys on the list thought. There was a surprising
 amount of support and quite a few people seem to have already switched
 to Gnote. Reasons seem to be: improved integration, similar look,
 faster and uses less memory, and it's smaller (and for those who care,
 it doesn't require mono). Reasons against seem to be: lacking some
 features. There didn't seem to be much detail on any of the points on
 both sides though. 
 

There doesn't seem to be a lot of content here.
Questions that would need to be answered:
* Better integration with what?
* Faster - as measured by?  How much faster?  Will this remain when it
is feature-complete?
* Less memory - again, as measured by?  How much less?  Will this remain
when it is feature-complete?
* What features does it lack?

And additionally:
* How responsive is upstream?  
* How quickly are bugs fixed?
* Is upstream likely to be robust?
* Security flaws?

Without answers to these questions there's really nothing to discuss.
If you can provide some answers to these questions, there's a discussion
to be had and the tradeoffs can be weighed.  Otherwise there's no data,
and the discussion will revolve solely around posters' objections to
Mono.




Re: Gnome is falling behind in Karmic--

2009-06-14 Thread Christopher James Halse Rogers
On Sun, 2009-06-14 at 21:35 -0700, Dean Loros wrote:
 On 06/14/2009 09:11 PM, Dmitrijs Ledkovs wrote:
  2009/6/15 Dean Loros autocross...@gmail.com:

  Talking in the forums it has been noted that Gnome will be at 2.27-3 by
  mid this week...Whilst we are still at 2.26---I was wondering if there
  are problems with bringing the new stuff in?
 
  --
  Dean Loros
  Performance by Design Ltd.
  autocrosser at ubuntuforums.org
  
  Gnome follows an odd/even release scheme. 2.27.3 will be the third alpha
  milestone of the 2.27 UNSTABLE & EXPERIMENTAL release for developers. The
  2.27 series will eventually (somewhere in September) become a
  STABLE 2.28 release.
 
  Karmic will ship with 2.28. If you really want to run 2.27.x GNOME you
  can run the Karmic alphas (currently at alpha 2). The Karmic alpha is
  not a production-ready release either, so expect a lot of breakage,
  regressions and instability. After all, it has a new X.org, a new kernel,
  unstable GNOME, and so on.
 
  We are not behind, we are using the latest stable 2.26 release of
  gnome in a stable release and playing around with experimental gnome
  in our development release.
 

 Yes--I do know all of the above--I have been with Ubuntu from Warty and have
 run the Garnome project many times several years ago--I am currently
 using Karmic and was looking through Synaptic--hence my question.
 
 I have been testing software for over 10 years now---so I'm not afraid to
 run unstable or testing apps. I was merely curious---I see more 2.26
 GNOME than I do 2.27.

Many GNOME components haven't actually released a tarball for 2.27.  As
far as a cursory browse of the actual tarballs goes [1], Karmic is
pretty much up to date with all the latest releases.  Evolution is the
only one that I've noticed lagging behind.

[1] ftp://ftp.gnome.org/pub/gnome/desktop/2.27/2.27.2/sources/




Re: GRUB 2 now default for new installations

2009-06-11 Thread Christopher James Halse Rogers
On Thu, 2009-06-11 at 09:20 +0200, Markus Hitter wrote:
 Am 10.06.2009 um 21:44 schrieb Lars Wirzenius:
 
  ke, 2009-06-10 kello 15:21 -0400, John Moser kirjoitti:
  Every argument for putting Grub or the kernel on a separate partition
  has been based around the idea that these files are somehow more
  important than, say, /bin/sh
 
  Putting the kernel (i.e., /boot) on a separate partition is often
  mandated by the BIOS not being able to read all of a large hard  
  disk. I
  have a motherboard from 2008 that has that problem, so it's not  
  ancient
  history, either.
 
 Additionally, if you have more than one installation of Ubuntu on the  
 same platter, you really want to share /boot with both installations.
 
 Not doing so means two /boot's, while you can address only one of  
 those in the master boot record. As /boot also contains kernels, you  
 end up booting grub from one partition and the kernel from the other  
 partition. Kernel install scripts can't deal with such a situation,  
 you end up sync'ing those two /boots manually after each update of  
 one of the kernels.
 
Kind of.  I don't have separate /boot partitions for my Karmic, Jaunty,
and Squeeze installs - grub2 + os-prober makes this work pretty well, but
it does require running update-grub2 in the Karmic install to update the
master grub.cfg.
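
For the record, that manual step is just, from the Karmic install
(assuming os-prober is installed), something like:

  sudo update-grub
  # sanity-check that the other installs' kernels were picked up
  grep menuentry /boot/grub/grub.cfg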

It's a bit of a trade-off, really.  Not sharing /boot means a manual
step for non-Karmic kernel ABI updates, while sharing /boot in my experience
results in contention for menu.lst.




Re: GRUB 2 now default for new installations

2009-06-10 Thread Christopher James Halse Rogers
On Wed, 2009-06-10 at 23:35 +0200, Reinhard Tartler wrote:
 John Moser john.r.mo...@gmail.com writes:
 
  GRUB2 on its own partition is silly.  Like having a separate /boot.
 
 It is required for stuff like root on LVM, a configuration supported by
 the alternate installer.

This is news to my laptop, which is happily booting from /-on-LVM with
grub2.  That's one of the advantages of grub2.

As far as I'm aware, the alternate installer doesn't yet understand that
grub2 can happily handle /boot residing on an LVM volume and installs
LILO instead, but grub2 is quite happy to boot from LVM.




Re: Current situation of amarok, and of latex tools

2009-05-25 Thread Christopher James Halse Rogers
On Mon, 2009-05-25 at 11:12 +0800, Christopher Chan wrote:
  That is not the case with OpenSolaris based ZFS root capable 
  installations. While the whole disk maybe taken up by a zfs pool, the 
  installation will create three at least zfs filesystems. ROOT/, 
  ROOT/opt, export, and export/home all exist on my OpenSolaris 
  installation. So all data is stored in the user's home directory is not 
  at all affected by upgrades or downgrades.
 

 
 ...I need more sleep and to get out of Hong Kong...my command of English 
 has gone down the drain.
 
 Allow me to retype that:
 
 That is not the case with OpenSolaris based ZFS root capable 
 installations. While the whole disk maybe taken up by a zfs pool, the 
 installation will create at least three zfs filesystems. ROOT/, 
 ROOT/opt, export, and export/home all exist on my OpenSolaris 
 installation. So all data is stored in the user's home directory and is not 
 at all affected by upgrades or downgrades.
 
So, what happens when, say, I upgrade to a new version of Evolution and
it decides to convert all its existing mailboxes to the new database
format on first run, and I later want to revert because of new bugs?  It
doesn't matter that I can roll back everything but /home to the previous
Evolution version - that mail is now essentially gone as far as the old
Evolution is concerned.

Alternatively, replace Evolution with MySQL or such.

This is what I understand to be the hard problem in *supporting* package
downgrades.




Re: Current situation of amarok, and of latex tools

2009-05-25 Thread Christopher James Halse Rogers
On Mon, 2009-05-25 at 07:58 +0200, Vincenzo Ciancia wrote:
 Il giorno lun, 25/05/2009 alle 02.09 +0200, Markus Hitter ha scritto:
  
  Craft a system where people can switch back and forth between  
  different package versions. This update broke foo? - Report a bug  
  and switch foo back to the previous version - Damage gone, user
  happy.
 
 Using alpha (which I do very often, and this reminds me I have to
 download karmic) may lead to data loss. I recall a bug in the gnome
 control center (which was in ubuntu for a short while), where you'd
 lose your entire home directory in a single shot for a bug. I often
 synchronise data with a portable disk but I do not version it, so if I
 lose a directory and then synchronise without checking, I'm **.
 Yes, I should start using a different method.
 
 Crafting a safe testing environment ideally would mean to use a
 filesystem with snapshots. I do not know how difficult would be to
 implement snapshots in ubuntu but it seems that if it was easy we would
 already have those.
 
We do already have those - you want to look up LVM.  It's not as fancy
as ZFS, but it does copy-on-write snapshots just fine, if a little less
efficiently than you'd get with filesystem-level snapshots.  It also
lacks any form of swanky UI.
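
For the curious, the manual version looks something like this (a sketch
only - the volume group, LV name and size are made up, and the snapshot
needs to be large enough to hold whatever changes during the test):

  # copy-on-write snapshot of the root LV before a risky upgrade
  sudo lvcreate --size 2G --snapshot --name root-snap /dev/vg0/root
  # throw it away once you're happy with the result
  sudo lvremove /dev/vg0/root-snap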




Re: Current situation of amarok, and of latex tools

2009-05-24 Thread Christopher James Halse Rogers
On Mon, 2009-05-25 at 03:03 +0100, Dmitrijs Ledkovs wrote:
 2009/5/25 Christopher James Halse Rogers r...@ubuntu.com:
  Having some sort of roll back to previous package version button might
  be a nice idea, though it would need to be designed in such a way that
  made it clear there was no guarantee that it'd work.  I'm not sure
  whether we'd be doing users a favour here.
 
 
 There is a technology to do this but it's not GPL...
 
 OpenSolaris and Nexenta use ZFS which supports snapshots. Before each
 package installation transaction (i.e. one upgrade of N packages) they
 take a snapshot of a system, do the upgrade. If the user doesn't like
 it, they can rollback to any of the previous snapshots because they
 are available.

Won't that only be an acceptable solution if the user is willing to
throw away all the work they've done since the package upgrade?  ZFS is
cool and all, but this doesn't seem to address any of the hard parts of
why downgrades aren't supported now.




Re: Ubuntu sourcecode

2009-05-04 Thread Christopher James Halse Rogers
On Tue, 2009-05-05 at 00:56 -0400, Lakshmanan MR wrote:
 hi,
 
 I am so interested in Ubuntu and I want to go and see what's under the
 hood. So I just want to tweak the Ubuntu source code...

This is a common misunderstanding as to the role of Ubuntu - there isn't
really much Ubuntu source code, there's just the code of the various
upstream projects (GNOME - which is itself composed of many disparate
projects - the Linux kernel, bzr, etc.) that are shipped together to form
an Ubuntu install.

That said...
 Can you please help
 I want to.
 1. obtain the ubuntu source code (not the binaries) ..
for each package $PACKAGENAME:

apt-get source $PACKAGENAME

 2. build it myself .. 
cd $PACKAGEDIR;
debuild -us -uc

 3. make an iso image out of it .. 
I don't know this part of our infrastructure.  Someone else may pipe up.

 4. put it on a cd 
With your choice of CD writing software.

 5. boot my system with that cd and install it .
As you would any other Ubuntu CD.

That said, you probably don't actually want to do this.  If you want to
modify a particular piece of software, all you need to do is to get the
source package (apt-get source $PACKAGENAME), make your change, build it
(with debuild or dpkg-buildpackage), then install the package.
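
In full, that workflow is roughly (the package name is just an example):

  apt-get source hello
  sudo apt-get build-dep hello
  cd hello-*/
  # ...make your change...
  debuild -us -uc
  sudo dpkg -i ../hello_*.deb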




Re: notify-osd specification: something is missing/changed?

2009-03-20 Thread Christopher James Halse Rogers
On Thu, 2009-03-19 at 21:03 -0400, Mackenzie Morgan wrote:
 On Thursday 19 March 2009 5:45:17 am Nicolò Chieffo wrote:
  I've seen that in notify-osd source there are some calls that contains
  the name blur. Does it blur for you? Maybe it's a problem of my
  video card...
 
 It can only do the dimming/blurring if you use a compositing window manager 
 like Compiz, or if you turn on compositing in KWin or Metacity.

I suspect that it can only blur while using Compiz (I don't remember
bumping into a blur plugin in kwin's kontrol kentre, and even if kwin
supports blur I'm not sure whether it uses the same X flags that compiz
uses).  This would also require Compiz's blur plugin to (a) be enabled,
and (b) be running on a card with enough features to support it.  There
are 3 blur modes and all but the MipMap one require pixel shaders
IIRC.  Your card may not have these.




Re: notify-osd specification: something is missing/changed?

2009-03-20 Thread Christopher James Halse Rogers
On Fri, 2009-03-20 at 10:07 +0100, Nicolò Chieffo wrote:
 I don't think that compiz is required, because the compiz blur plugin
 blurs everything, so the blur code in notify-osd shouldn't be required,
 but only a compositing window manager enabled.
 
 Can you tell me which video card you have and if it works for you
 (without the compiz blur plugin enabled)? Thanks.

From bubble.c:

/* the behind-bubble blur only works with the enabled/working compiz-plugin blur
 * by setting the hint _COMPIZ_WM_WINDOW_BLUR on the bubble-window */

Is this what you're after?  The actual Gaussian blur convolution appears
to only be used to draw the shadow, which seems reasonable to me - my
understanding is that blurring behind the window can only be performed
by the composite manager, since it's the only thing that actually knows
what's behind the window.




Re: Reason for removing animation from Gnome login?

2009-03-20 Thread Christopher James Halse Rogers
On Fri, 2009-03-20 at 22:25 +0100, Ernst Persson wrote:
 Hi,
 
 I saw Scott removing animation from gnome login. What's the reason for
 that? I don't see any motivation or reference to a bug report.
 It works very nicely here and looks nice with all the nvidia drivers:
 nv, nouveau and nvidia.
 

In one of the comparative-startup-time threads it was mentioned that
these effects added ~2sec each to the startup time on someone's (I think
Scott's) system.

It might be good to have that documented somewhere, possibly the
changelog.




Re: Metacity as a compositing manager

2009-02-09 Thread Christopher James Halse Rogers
On Mon, 2009-02-09 at 14:10 -0500, Danny Piccirillo wrote:
 Would it be a good idea to plan to use Metacity as the default
 compositing manager for Ubuntu instead of compiz in the future? 
 
 Compiz seems mostly unnecessary. If metacity was used, it would be
 easier on the machine and work for people who don't have the hardware
 for compiz. Anyone who wants all the extra effects can still install
 compiz, but for almost everyone, shouldn't metacity be fine? 

There are two problems here: the first is that Metacity's compositor is
_slower_ and more CPU intensive than Compiz for people with decent 3d
drivers (particularly nvidia users - the blob is great at 3d, not so
good at 2d).  For example, the alt-tab provided by Metacity's compositor
is significantly slower than Compiz's, at least for me.

The second is that Metacity's compositor is in no way feature-comparable
with Compiz.  I believe the 'scale' plugin is enabled in our default
compiz setup; this gives exposé-like functionality which is not provided
by Metacity, and is a _huge_ usability win.

The characterisation of Compiz as just about shiny effects is wrong.
The default plugin set also provides a better _window manager_ than
Metacity in many ways.




Re: Metacity as a compositing manager

2009-02-09 Thread Christopher James Halse Rogers
On Tue, 2009-02-10 at 01:25 +0100, Remco wrote:
 On Tue, Feb 10, 2009 at 12:03 AM, Christopher James Halse Rogers
 chalserog...@gmail.com wrote:
  The characterisation of Compiz as just about shiny effects is wrong.
  The default plugin set also provides a better _window manager_ than
  Metacity in many ways.
 
 I wouldn't know about that last claim. Metacity doesn't have all the
 useful features of Compiz, but it does work a lot better. While pure
 compositing actions such as Alt+Tab may be faster on Compiz (I don't
 notice any difference), the applications themselves are a lot slower.
 Have you ever tried resizing a window in Compiz? Metacity is much
 smoother.
If I remember correctly, this is an artefact of the slowness of
GL_EXT_tfp and making lots of new textures.  This is why the default
resize mode for Compiz is 'rectangle', which isn't slow at all.  Some of
this slowness will be going away as drivers get better.

  Also, Compiz is different with regards to snapping windows.
 That's a very subtle difference, but combined with the slow resizing,
 it makes the desktop feel a lot harder to manipulate.
I, personally, don't find Compiz's snapping behaviour better or worse
than Metacity's.  If you can provide a use-case where Compiz's snapping
behaviour is bad (as opposed to simply different from Metacity's
behaviour), then this can be fixed in Compiz.

 
 Switching between non-composited and composited Metacity is also a
 smoother transition. One, it doesn't take too long for the desktop to
 come up again (though it should really be as seamless as in Windows
 Vista). But what's more important: everything still works the same.
 It's just slightly more beautiful and useful.
 
 Would you recommend switching from Compiz to Metacity when your laptop
 goes from AC to batery power? That's not a pretty sight. While
 Metacity doesn't do this perfectly seamlessly either, at least it's
 relatively fast, and it doesn't mess up your window positions.

No, I wouldn't.  But mainly because Compiz is no worse on battery life
than Metacity.  Compositing should be a battery-life _win_, generally.
There were some powertop benchmarks done, Compiz vs Metacity a year or
so ago, and the outcome was that they didn't make a consistent
difference.

I don't switch between Metacity and Compiz, and I don't suggest other
people do, either.

 
 The ideal solution would be for Metacity (or the appropriate Gnome
 app) to implement some of the features that Compiz provides. Scale,
 Animations and the Desktop Wall come to mind. Animations may sound
 like a useless eye-candy thing, but when done subtly, it just provides
 more clues as to where objects are moving toward. Right now, for
 example, even compositing Metacity shows some kind of black rectangle
 effect when minimizing. That doesn't fit with the nicely shaded
 windows.

You appear to be describing something we already have.  Compiz.  It
behaves differently to Metacity, yes, but that's not necessarily a bad
thing in and of itself.  If Compiz does something badly, file a bug.  If
Metacity does something badly, file a bug.  But having Compiz behave
exactly the same as Metacity is an explicit non-goal.

 
 Bottom line: the core of Metacity is just a lot better than that of
 Compiz. Compiz has the advantage of a huge amount of plugins. But I
 don't see why Metacity couldn't get plugins itself.

Because this is pretty much anathema to the goals of Metacity.  The
motto is “Metacity is Cheerios!”  Also, providing a plugin system would
almost certainly require serious changes to Metacity's core; you'd end
up with something like... Compiz.

 
 Does anyone know if there is any development going on with Metacity's
 compositing mode (or the appropriate Gnome app)?
 
There is gnome-shell, which uses a forked metacity with a completely
different compositor to provide a whole desktop, with panels and
task-switcher and such.

I would be amazed if any of these suggestions moved into Metacity's
trunk.

I'm not suggesting that Compiz is perfect; far from it.  I *am*
suggesting that the way to get a better desktop is to improve Compiz,
rather than reimplementing it in Metacity.




Re: Midnight Commander in 8.10

2008-10-30 Thread Christopher James Halse Rogers
On Thu, 2008-10-30 at 01:26 -0400, Felix Miata wrote:
 On 2008/10/30 12:41 (GMT+1000) Chris Jones composed:
 
  And as mentioned already, anyone who uses mc will probably not be using
  it off a live cd anyway.
 
 Any time I boot a Knoppix CD it's a virtual certainty that the first thing I
 do once it finishes booting is start MC. I boot Knoppix to fix things, and MC
 is my main tool for generic fixing. A live CD without MC is like a tool chest
 that contains no wrench or socket that fits the most common bolt sizes, and
 no fitsall wrenches either.

Except the Ubuntu Desktop CD isn't intended as a recovery disc.  A
Knoppix CD (which doesn't have the constraint that it should give a
preview of a full Ubuntu install) is full of useful recovery programs.

You appear to be complaining that the Ubuntu live CD isn't a good
recovery disc, which is a reasonable statement.  It's not really
intended to be, so your follow-up “so you should add some things to make
it a good recovery disc” *isn't* a reasonable request when the CD is
already full.  If there were space enough to allow us to ship a Desktop
CD that was both a full Ubuntu install *and* a good recovery disc, you
could reasonably argue that we should ship mc on there.  However there
simply *isn't* enough space; the Desktop CD is an endless fight to fit
onto a CD.  Even if Midnight Commander could automatically fix every
problem that could ever occur on any system, it *still* wouldn't get on
the Desktop CD as a recovery tool!




Re: Proposal for apt install-recommends settings

2008-10-27 Thread Christopher James Halse Rogers
Because I suck, here's the mail I accidentally privately sent.

On Mon, 2008-10-27 at 20:03 +0100, Markus Hitter wrote:
...
snip
...
 Perhaps you've seen it already, Synaptic has such a switch in its
 preferences. While this switch isn't ill-placed there, I think it  
 would be an advantage to put this into a more global place, like the  
 sources.list file. Then, the adjustment of this switch would go to  
 the package sources selector accordingly.

 What would you think about a global switch, without making a hijack- 
 package?
 
Isn't the content of /etc/apt/apt.conf.d already such a global switch?
Would you like a low-priority debconf question to more easily toggle
this (I have no idea how acceptable such a question would be; I suspect
it would be frowned upon)?  All the tools have the ability to override
global preferences too (--without-recommends for aptitude, for example).
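
For the record, the global switch is a one-liner; the file name below is
just a convention, anything in that directory gets read:

  // /etc/apt/apt.conf.d/99local
  APT::Install-Recommends "false";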

I'm not sure that installing/removing a package to flip a config switch
is particularly elegant - it seems like it would be more useful to
educate the relevant users about the rich apt configuration options
available.





Re: OCaml support on Ubuntu Proposal to improve it

2008-09-28 Thread Christopher James Halse Rogers
On Sat, 2008-09-27 at 18:58 +0200, Vincenzo Ciancia wrote:
 On sab, 2008-09-27 at 08:28 -0500, Scott Kitterman wrote:
  
  While that's true for major changes, if there are updates that would
  help 
  you should feel free to suggest them.
 
 Let's clarify this a bit more: ubuntu has a huge set of ocaml packages,
 which makes it appear a wonderful platform for ocaml development.
 Unfortunately in ocaml you need to build dependencies in the right
 order, because - if I recall this correctly - depending libraries store
 a hash of the ancestor in the dependency graph, and refuse to
 load if this does not match. Thus, we currently have in ubuntu a huge
 set of packages that just can't be used, and one typically resorts to a
 local installation of ocaml.

This sounds like it could be easily fixed by a set of no-change binary
rebuilds, something that would almost certainly be acceptable at this
(and almost any) stage of development.

[...snip..]

 Perhaps the right thing to do is to invent a
 way to build dependencies in order on build servers?
 
I may be missing some of the subtleties here, but why can't this be
handled right now using versioned build-depends?
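
i.e. something along the lines of this in debian/control (the package
names and versions are hypothetical):

  Build-Depends: debhelper (>= 7), ocaml-nox, libfoo-ocaml-dev (>= 1.2.3-2)

which should put the dependent package into dep-wait on the buildds until
the required version of the base library is published, rather than letting
it build against a stale one.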




Re: Backtracing, Invalidated Bugs and Quality

2008-08-21 Thread Christopher James Halse Rogers
On Thu, 2008-08-21 at 15:26 +0200, Krzysztof Lichota wrote:
 2008/8/20 Null Ack [EMAIL PROTECTED]:
  I'm not convinced that the strategy of asking users to install
  specialised debugging packages is the right way to go. I see a very
  low hit rate with this working in practice.
 
 It is not surprising. Asking people to install multi-megabyte packages
 and reproduce the bug is not going to work as:
 1) the bug often cannot be reproduced (and the user does not want to
 be hit by a bug again)
 2) the user is requested an extra work he does not understand
 3) some users do not have resources to install debug packages as they are huge
 
 IMO the solution would be to create a debugging symbols server.
 Microsoft had these for years. The information about debugging symbols
 is only needed on server, client only sends (in simplest version) MD5
 sum of library and address offset, which is transformed into the
 symbol by symbol server.

In what way is this different to the current Apport infrastructure?  My
understanding is that the client sends in the crashdump and the apport
retracers on launchpad replay it on a system with the debugging symbols
installed.

The retracable crashdumps are already nicely handled; how does the
symbols server help in cases when the retracers would fail?




Re: Dealing with codecs, was: Making deals with Microsoft

2008-06-09 Thread Christopher James Halse Rogers
On Mon, 2008-06-09 at 15:59 -0400, Martin Owens wrote:
 Hey Remco,
 
  I only have/had two problems with the situation, and that's not
  something against Canonical per se
 
 I do have some problems with this but it can not be solved by limiting
 the user's workstation; I don't even believe it's right to keep certain
 formats off the CD for instance keeping liblame away from ubuntu is a
 great travesty since it's free software.
...snip...
 I'd favour having all codecs on board by default; for instance we
 should be trying to get real media to make a _real_ real media plugin
 and stop fobbing us off with their helix stuff, we should ask them for
 open source versions of the codec that works with everything. Not only
 that but any codec that we currently use windows dlls for we _must_
 reverse engineer and recode from scratch, work is already going into
 ffmpeg for wmv support. More money is needed to free these parts
 properly.
 
The problem is not that we don't have open-source encoders/decoders for
these formats - ffmpeg is open-source, and has been able to decode
most of the codecs provided by the windows dlls for some time.  The
problem is that the compression algorithms used by these codecs are
patented, and we don't have a license.  It's not possible to decode MP3s
without needing a license (which, IIRC, Fraunhofer provides to all
end-users).  It's not possible to decode MPEG2 video without needing a
license.  It's not possible to decode wmv (or, technically, VC-1)
without needing a license.

It sucks, but that's software patents for you.




Re: making deals with M$

2008-06-09 Thread Christopher James Halse Rogers
On Sun, 2008-06-08 at 19:54 +0200, Milosz Derezynski wrote:
 Sorry for dropping in.
 
 Has there been any mention of Microsoft that they will never, ever sue
 anyone who uses Mono nor the Mono developers themselves, or is this
 all under the Novell/Microsoft convenant? If they never made such a
 statement, on what else than pure hope that they will never litigate
 can something like Mono be built?
 
Yes. All the ECMA-standardised parts of the CLI are available under
royalty-free, non-discriminatory terms.  Link:
http://web.archive.org/web/20030424174805/http://mailserver.di.unipi.it/pipermail/dotnet-sscli/msg00218.html




Re: pulseaudio enabled on dist-upgrades?

2008-04-07 Thread Christopher James Halse Rogers
On Mon, 2008-04-07 at 21:51 +0100, James Westby wrote:
 On Mon, 2008-04-07 at 13:31 -0700, Matt Price wrote:
  i notice on my hardy upgrade that pulseaudio, though installed, is not
  active by default.  is there a standard way to enable it on upgrades?
  the wiki lists this for gutsy:
  https://wiki.ubuntu.com/PulseAudio
  but that looks dated.  happy to write up any changes for the wiki if a
  developer knows more and can respond on the list.  thanks,
  matt
 
 Hi,
 
 pulseaudio doesn't start because of the file
 
   /etc/default/pulseaudio
 
 the comments in there explain how to make it start. I don't know
 if you need to do anything else.

My understanding is that we don't use the system-wide daemon even on
fresh installs, because running per-user daemons offers less of a
security target (among other things) and works just as well or better.
My system is far from a vanilla dist-upgrade, but I believe that
System-Preferences-Sound-Sounds-Enable Software Mixing is the button
you want to press.  Assuming the pulseaudio ESD bridge is installed as a
part of the upgrade (I believe it is), that will get gnome-session to
start pulseaudio on login.
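
A quick way to check whether a per-user daemon is actually running for
your session after you log back in (just a sketch):

  pgrep -u "$USER" -l pulseaudio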




Re: Does metacity compositor use acceleration?

2008-03-31 Thread Christopher James Halse Rogers
On Mon, 2008-03-31 at 20:57 +0100, chombee wrote:
 I want to use the metacity compositor but I'm finding that it's much
 slower than with no compositor or with compiz. Something's up. I'm using
 an Nvidia Quadro4 900 XGL, and have the nvidia driver enabled via the
 ...
Does that mean that you are using Xgl?  You probably don't want to be -
nvidia's 2d acceleration isn't so terrible that you need to put an extra
layer in there to use the 3d engine to do 2d.




Re: BitTorrent support in Ubuntu

2008-03-03 Thread Christopher James Halse Rogers
On Mon, 2008-03-03 at 21:58 -0500, Mackenzie Morgan wrote:
 A bunch of us argued for Deluge and were told we could install it
 after the fact if we wanted access to all those features, but that
 Transmission was chosen for simplicity.  Mentioning the number of
 features your program has makes your case harder to fight to get it
 included.

It's not so much the number of features your program has, it's more the
number of options your program exposes.  Compare Transmission's
preferences dialog to Deluge's - Transmission's is a single page which
captures (in my opinion) all of the useful options that Deluge exposes in 6
tabs worth of pages plus one for the plugins, which then have their own
configuration pages.  The default Ubuntu applications should do what
they need to do with a clean, simple UI.  Deluge is undoubtedly more
featureful, but at the expense of a complicated UI.  In my
opinion the extra features of Deluge aren't worth the trade-off for most
people - especially the people most likely to be using the default
applications.

 On Mon, Mar 3, 2008 at 9:53 PM, Alan McGovern
 [EMAIL PROTECTED] wrote:
 Hi,
 
 I'd just like to generate a bit of discussion on the choice of
 Transmission as the default bittorrent client for Ubuntu.
 First, i'm the developer of the C# based MonoTorrent library
 (X11/MIT License), so I'm probably biased in my opinions as to
 what client is best. The bittorent client i'd like to propose
 is the GUI for MonoTorrent which was developed as part of the
 google Summer of Code. My thoughts are laid out in my blog
 (http://monotorrent.blogspot.com/2008/03/so-thought-struck-me.html) 
 but i'll put the main text here for ease of reading.
 
 My angle is that MonoTorrent supports everything Transmission
 does [b]and significantly more[/b]. Features such as the Fast
 Peer Extensions, multi-tracker protocol and UDP Tracker
 protocol are all great things for end users which MonoTorrent
 supports (details linked in blog). The GUI has RSS
 integration, cool tagging of downloads into groups, integrates
 into the notification area and can monitor a directory to
 automatically download new torrents which are placed there. It
 also has a nice clean interface. Best of all, in the video[1],
 which i encourage you to watch, it downloads a Ubuntu
 torrent ;)
 
 One issue which I think some people might raise is that of
 memory consumption. All I can say is that I have done
 extensive work on optimising MonoTorrent for both memory
 consumption and CPU usage and am happy that it is pretty good.
 Secondly - install size. The current packages i have for Suse
 are ~380kB (~600kB installed). This can be reduced by at least
 100kB as outlined in the blogpost.
 
More likely people are going to ask "where is the Ubuntu package?",
closely followed by "where is the code?".  I presume you're talking
about the project associated with this website[1], but there didn't seem
to be any links to it from the blog you linked to.  There also doesn't
seem to be an actual release tarball on [1], either, which will make
people hesitant to package it in the first place.

The video looks kinda cool.  The tagging of torrents is neat.  I'd be
interested in seeing this in Ubuntu, but it doesn't look appealing to do
that yet - packagers _really like_ projects to have tarball releases
(with version numbers and logs of significant changes and all that jazz)
and definitely _bugtrackers that aren't forums_[2].  If you need a
bugtracker, somewhere to store tarball releases, and such, there's
always launchpad[3] :).

[1] http://monotorrent.com
[2] This seems to be a common feature of projects for which Windows is a
significant target.  I don't get it.
[3] http://launchpad.net




Re: A Look at the Ubuntu Installer

2008-01-09 Thread Christopher James Halse Rogers

On Tue, 2008-01-08 at 22:30 +, (``-_-´´)
-- Fernando wrote:
 On Monday 07 January 2008 21:10:41 Mackenzie Morgan wrote:
  10GB is more than enough under normal usage.  You'd have to install...all of
  GNOME, KDE, Enlightenment...and it still wouldn't be full.  Even with all
  that and a lot more, I'm at around 7GB full.
  
  On Jan 7, 2008 4:05 PM, Mario Vukelic [EMAIL PROTECTED] wrote:
  
  
   On Tue, 2008-01-08 at 09:50 +1300, Jonathan Musther wrote:
One thing I've been thinking would be good for quite some time is
creating separate / and /home partitions by default.
  
   While a separate /home makes reinstalls easier, how would you know the
   size of / the user needs?
 
 $ df -h
 Filesystem            Size  Used Avail Use% Mounted on
 /dev/sda1             9.7G  6.5G  2.7G  71% /
 
 10 GiB for small disks, and 20 GiB should be enough for most beginners and
 power users.
 Even powerful users will most probably set up extra mountpoints.
 
You might want to bear in mind that /tmp is on /, not /home.  A number
of apps (k3b, for example) sometimes want to write large files (such as
9 GB DVD images) there temporarily, and it's puzzling for a user when
GNOME says "13 GB available disk space" but writing a 9 GB file fails.
Partitioning up hard drive space in this manner is not trivial, and the
least surprising option seems to be the standard just-one-partition
layout.

If we really want to push the convenience of multiple partitions by
default, I think we'd need something more dynamic.  A totally blue-sky
idea would be something like: default to LVM, with /, /home,
swap, /whatever logical volumes and a daemon that watches disk usage and
lvextends a logical volume once it gets over 75% full (or, for added
bonus points, when some process wants to allocate a file that would push
the usage over 75%).  Or something.
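
To make that slightly more concrete, here's a very rough sketch of the
watcher side of the idea - the volume group and LV names are invented, the
threshold is hard-coded, and a real implementation would need to check for
free extents before growing anything:

  #!/bin/sh
  # Grow /home's logical volume by 1GiB once it passes 75% usage.
  USE=$(df -P /home | awk 'NR==2 { sub("%", "", $5); print $5 }')
  if [ "$USE" -gt 75 ]; then
      lvextend -L +1G /dev/ubuntu/home && resize2fs /dev/ubuntu/home
  fi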

I don't think the benefits of a separate /home are sufficient to offset
the unexpected failure-cases introduced.  Especially since the benefits
are mostly only for power users who will likely set up their own
partitioning scheme.




Re: Using iwl3945 instead of ipw3945 in gutsy?

2007-08-27 Thread Christopher James Halse Rogers
On Mon, 2007-08-27 at 19:30 +0300, Marius Gedminas wrote: 
 On Mon, Aug 27, 2007 at 05:25:05PM +0200, Reinhard Tartler wrote:
  Hi fellow gutsy users,
 
 Hi,
 
  I noticed that gutsy ships the ipw3945 driver in
  /lib/modules/2.6.22-10-generic/ubuntu/wireless/iwlwifi/iwl3945.ko in
  addition to the ipw3945:
  /lib/modules/2.6.22-10-generic/ubuntu/wireless/ipw3945/ipw3945.ko
 
 Cool, I'll have to try it.
 
  On boot, ipw3945 is loaded. Is there some easy way to switch from
  ipw3945 to iwl3945? I mean, I can of course `modprobe -r $driver &&
  modprobe $other_driver`, but is there some way to make this more
  permanent?
 
 You could add ipw3945 to /etc/modprobe.d/blacklist, but I don't know if
 that will get you iwl3945 loaded automatically.  Probably not: I do not
 see the pci alias lines in modinfo iwl3945 that I see in modinfo ipw3945.

You can add iwl3945 to /etc/modules, and it'll get loaded on startup.
If I remember the changelog correctly, the PCI aliases are deliberately
patched out to ensure iwl isn't accidentally loaded.  Experiences with
this driver vary - for some people it apparently works better than ipw;
for me it stops working after 15-60 minutes and leaks references,
preventing the module from being unloaded and blocking a clean shutdown[1].
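
In other words, using the two files mentioned above (double-check the
blacklist filename on your release):

  $ echo "blacklist ipw3945" | sudo tee -a /etc/modprobe.d/blacklist
  $ echo "iwl3945" | sudo tee -a /etc/modules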

  Moreover, a manual module load didn't make network manager detect my
  wifi hardware with iwl. Does this work for other people?

I find that restarting NetworkManager works -
/etc/dbus/event.d/25NetworkManager and 26... can be restarted like init
scripts.
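
i.e. something along these lines, using the path above (double-check the
exact filename on your system):

  $ sudo /etc/dbus/event.d/25NetworkManager restart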

[1]https://bugs.launchpad.net/ubuntu/+source/linux-ubuntu-modules-2.6.22/+bug/130457




Re: Ubuntu development...

2007-08-25 Thread Christopher James Halse Rogers

On Sun, 2007-08-26 at 05:41 +0100, Chris Warburton wrote:
 On Sun, 2007-08-26 at 00:26 -0400, Tim Hull wrote:
 
  2. The current default fonts look dreadfully ugly. For one thing, they
  are MASSIVE (on Gutsy at least), and they use a type of hinting which
  makes them look quite ugly when compared to OS X. What I suggest here
  is to 1) revert the auto-DPI-detect/change to 11-pt font in Gutsy -
  things look much better in Feisty with the settings where they were 2)
  investigate changing the default hinting to either autohinter or no
  hinting - this, while a little blurry, seems to look eons better than
  the native hinter.
  
 On this point I'm wondering if this may be caused by CompositeByDefault,
 since running XGL on my laptop (non-free ATI drivers, no aiglx for
 me :( ) causes the font rendering to change and ends up with much larger
 fonts than regular Xorg. This happens regardless of whether a
 compositing window manager is being run or not. Perhaps an accelerated X
 server is the reason Gutsy fonts look bigger and uglier for you than
 Feisty? (I haven't changed the font preferences from the Ubuntu
 defaults)
 

This is possibly an artifact of incorrect DPI reporting by Xgl, if
indeed it's only apparent under Xgl.  I've got a new xgl package in the
works, waiting for review [1].  Once that lands, I'd be very interested
in hearing whether you still see this behaviour - it should be easy to
fix if it's still apparent.

[1] https://bugs.launchpad.net/ubuntu/+source/xserver-xgl/+bug/126255
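
In the meantime, comparing the DPI each server reports should show whether
Xgl is the culprit - the display numbers below are only examples, and the
gconf key is from memory, so verify it with gconf-editor:

  $ DISPLAY=:0 xdpyinfo | grep resolution   # the underlying X server
  $ DISPLAY=:1 xdpyinfo | grep resolution   # Xgl, if it runs on :1

  # pin the GNOME font DPI explicitly, if you want to rule it out
  $ gconftool-2 --type float --set /desktop/gnome/font_rendering/dpi 96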



