Re: Ubuntu Desktop Security Defaults

2009-04-14 Thread Null Ack
Considering the noise in the blog space over a Linux Magazine article
about security problems with Ubuntu Server, I think we should revisit
this topic. The article is at:

http://www.linux-mag.com/id/7297/2/

The key criticisms of Ubuntu Server raised by Linux Magazine are:

1. Users' home directories are readable by everyone by default
2. The installer allows a blank MySQL root password
3. System accounts are allowed unnecessary shell access
4. Unneeded daemons listening on the network despite other
configurations servicing those needs
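On the first point, the issue is that Ubuntu creates home directories mode 755, so any local user can read any other user's files. A minimal sketch of checking and tightening this, run against a scratch directory rather than a real home (the adduser default lives in DIR_MODE in /etc/adduser.conf):

```shell
# Use a scratch directory as a stand-in for /home/someuser.
d=$(mktemp -d)

chmod 0755 "$d"           # the permissive default being criticised
stat -c '%a' "$d"         # prints 755: group and world can read and enter

chmod 0750 "$d"           # tightened: owner full, group read/enter, world nothing
stat -c '%a' "$d"         # prints 750
```

Setting DIR_MODE=0750 in /etc/adduser.conf would apply the tighter mode to newly created accounts; existing homes need an explicit chmod.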

In our previous discussion on this topic here, I raised some personal
concerns I have with Ubuntu desktop security:

1. No firewall enabled by default
2. AppArmor providing a false sense of safety about containing the
damage zero-day exploits could do: by default it protects only one
daemon, CUPS, so it does very little.

The reality is that other desktop distros such as Fedora have a far
stronger set of security features than our beloved Ubuntu.

I think we need to make progress on these issues. I think John
previously made an excellent suggestion about using something like
Plash with hooks into GTK.

-- 
Ubuntu-devel-discuss mailing list
Ubuntu-devel-discuss@lists.ubuntu.com
Modify settings or unsubscribe at: 
https://lists.ubuntu.com/mailman/listinfo/ubuntu-devel-discuss


Re: Ubuntu Desktop Security Defaults

2009-04-14 Thread Null Ack
Thanks Mathias. I note that the discussion there is limited to the
Server build, whereas this thread covers both desktop and server topics.



Re: Ubuntu Desktop Security Defaults

2009-04-14 Thread Null Ack
 I guess I was hallucinating working on the apparmor profile for
 clamav-daemon and freshclam (also run as a daemon) today.


That's great, though Scott, please don't make the mistake of taking a
strawman approach. What I said was about AppArmor defaults. I don't see
my current dev build of the desktop having any profiles loaded by
default other than CUPS.

If the considered opinion is to continue with AppArmor then clearly
getting more profiles into it is the way to go.

However, if you look back into this discussion thread, I think John
made a very sound set of points about the limitations of AppArmor /
SELinux style approaches for a desktop system, and about the weaknesses
of X security. He makes what seems to be a very sound suggestion about
Plash and hooking into GTK, thus overcoming both the need to determine
in advance what a desktop user might do and the X security problems.

Regards
Nullack



Re: Archive frozen for preparation of Ubuntu 9.04

2009-04-09 Thread Null Ack
In relation to this archive freeze, can I please point out that with
gstreamer plugins we are currently behind the upstream stable releases
in:

1. gstreamer ugly plugin
2. gstreamer ugly multiverse plugin
3. gstreamer ffmpeg plugin

We are leaving ourselves open to bug reports on errors that are
already fixed in these stable releases, especially given that
gstreamer is the default multimedia backend for Ubuntu.

I have a large test library of samples I could use to help test
updates to these plugins. I'm here to help you all with this, but I
need packages to install in order to test.

Thanks
Nullack

-- 
Ubuntu-motu mailing list
Ubuntu-motu@lists.ubuntu.com
Modify settings or unsubscribe at: 
https://lists.ubuntu.com/mailman/listinfo/ubuntu-motu


Re: Ubuntu Desktop Security Defaults

2009-03-17 Thread Null Ack
Gday John,

Good to see another Aussie on the list and contributing some top info :)

I've looked into Plash and I think your suggestion is excellent.

I was thinking of a two pronged approach:

1. AppArmor / SELinux or some other static, central policy to contain
daemons, as these services typically have fixed functions and can be
locked down in a static way. I note here that Microsoft did this
locking down for Vista services, going through all the services and
implementing a least-privilege model. We could exceed Windows by
combining least privilege with mandatory access control policies.

2. A longer-term secondary phase of securing X. Again we find
ourselves behind Windows: in Vista a number of changes made the system
far more resilient against shatter attacks. Depending on the specifics
of how X is secured, sandboxes like Plash could be considered too.

I do disagree with you on enabling a firewall by default. What you say
is well informed: yes, you can use injection attacks to bypass
firewalls. But a firewall is a basic level of protection that Windows
and OS X enable by default, and an attack has to be more
sophisticated, using injection for example, to circumvent one.

Regards,

Nullack



Ubuntu Desktop Security Defaults

2009-03-16 Thread Null Ack
Gday folks :)

There is a difference between what I foresee as sensible security
defaults for our desktop build and what is currently being delivered.
It may very well be that there are aspects of the current setup that I
am not fully aware of, and if so I'd like to better understand the
reasoning behind the current situation. Otherwise, perhaps I could
suggest some possible enhancements:

* Enabling UFW, or some other firewall, by default
* Having AppArmor actually protect the desktop build, rather than the
current false illusion of coverage with just CUPS being protected
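Both bullets are easy to verify on a running system. A sketch, assuming apparmor-utils (for aa-status) is installed; both checks need root, hence the fallbacks:

```shell
# Is any firewall policy active? (ufw is disabled by default)
sudo -n ufw status 2>/dev/null || echo "ufw unavailable or no root"

# Which AppArmor profiles are actually loaded right now?
sudo -n aa-status 2>/dev/null \
  || cat /sys/kernel/security/apparmor/profiles 2>/dev/null \
  || echo "cannot read the AppArmor profile list without root"
```

On a stock desktop install the first command reports the firewall inactive and the second lists little beyond the cupsd profile, which is the gap being described.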

In my view, users want to feel secure in knowing that should a
zero-day exploit be identified, AppArmor or SELinux or whatever will
contain the damage the exploited service can do, beyond the standard
"user is not root" UNIX setup.

Thanks and regards,
Nullack



Packaging Problem with SDL in Jaunty

2009-02-13 Thread Null Ack
Afternoon MOTU's :)

Thanks to a helpful person on the Ubuntu forums, a fix has been
identified for the SDL-related problems users are experiencing on
Jaunty. It's covered in bug #328932:

https://bugs.launchpad.net/ubuntu/+source/libsdl1.2/+bug/328932

It seems a recompile with default options fixes the problem. Following
a package search I see SDL is a universe package. It would be terrific
to have this issue fixed if possible.

Thanks and regards

Nullack



Reasons Why Jaunty Will Not Ship With 2.6.29

2009-02-11 Thread Null Ack
Can I please be advised why Jaunty will not ship with 2.6.29, and why
the kernel team has elected to ship .28?

I'm sure the kernel team are aware of the many driver changes in .29,
but I'm not clear whether they propose to backport those into .28.
What about features? Or patches that for one reason or another have
not made it into .28 but really should have gone in as fixes? I would
appreciate being better informed of how it's proposed this will be
managed.

Regards
Nullack



Re: Any news on skype+pulseaudio+intel_hda_realtek ?

2009-02-10 Thread Null Ack
So in essence, Scott, due to what you've highlighted as a lack of
testing input during the pre-production lifecycle phases, you're
suggesting that end users should bear the brunt of testing? That
Ubuntu needs to move forward rapidly, staying cutting edge, and can't
be too concerned with the risk of regressions?

Yes, PulseAudio was implemented. Yes, it was a disaster, and even the
upstream lead developer more or less said so about Ubuntu's
implementation. It was half-baked. So too is Compiz, with all its
incompatibilities with things like 3D OpenGL, which Ubuntu decided to
enable by default even though we all know that key architectural items
like GEM are missing. Lots of new users clambered onto the "look at my
cool wobbly windows" Linux bandwagon and then were disheartened when
they realised it didn't work properly, and that there are many other
visible bugs in the Ubuntu desktop experience. A bug in NM that I
reported way back in the alpha still isn't fixed, which for my user
experience is a nuisance. Cruft Remover was poorly tested and entered
production in a problematic state. I could go on, but I won't.

If Ubuntu and Canonical are truly serious about quality, clearly the
professionals amongst us who sport big cowboy spurs and a good ol'
wild-western release philosophy need to be tamed. Otherwise, we might
as well all join Fedora. That's not the Ubuntu I want to be involved
in. I want to contribute towards a robust system that provides a
quality desktop user experience. I'd like to echo Andrew Morton's
observation that too many kernel developers focus on new features
without resolving existing problems.

We are far better off focusing on improving the testing phases than
dumping testing on end users. We will only alienate new users and
limit the strategic growth of Ubuntu if we go all cowboy.

Regards

Nullack



OpenAL Regressions In Intrepid

2008-09-23 Thread Null Ack
Gday everyone,

The Linux Standard Base is surely a good thing. I don't know whether
OpenAL is included in the LSB or not. What I do know is that someone
decided to change the library naming for OpenAL in Intrepid, and this
is causing many regressions in other apps that can no longer find
OpenAL.

Can I please refer people to this bug:

https://bugs.launchpad.net/ubuntu/+source/openal-soft/+bug/273558

Some questions that come to mind are:

1. Why did we change the naming?
2. What is the best solution in the long term here for us?

Regards

Nullack



Possible Idea for New MOTU Contributors

2008-09-20 Thread Null Ack
Gday folks :)

A number of users on the Ubuntu forums have been writing how-tos in
the tutorials & tips section. Some of these involve tweaks, patch
compiles and other things that would be better served by contributions
in MOTU. Some ideas I've got on this:

* Perhaps the creation of a role for a MOTU member or similar to act
as a bridge between that section of the forum and participation in
MOTU. I don't think there'd be much work in this, just keeping an eye
on the types of how-tos being posted and maybe sending the author a PM
along the lines of "that's great, it helps the users; are you
interested in helping get that into Ubuntu as a whole? Here are some
places to start looking in the wiki, and some videos on YouTube, in
case you are."

* Perhaps a sticky in the tutorials section along the same lines.

My own story with this might be helpful. I had figured out that I
needed to compile a patched version of a package to get the
functionality I needed to work right. Not wanting to be selfish, I
posted a how-to guide on the forums and have been helping with support
questions from other users. When a friend on IRC suggested what I was
doing was pointless, that was motivation enough to get over my fear of
the learning curve and the time needed to fix the package properly, as
a root-cause, no-hassles fix for everyone. My feedback is that the
MOTU wiki is excellent, and Daniel's YouTube videos are also
excellent. Coupled with the support on the MOTU IRC channel (many
thanks to all), I managed to blunder my way through it. I think I can
probably now do similar patches in less than five minutes. And now I
am a proud parent who has taken the birth of my new deb package, given
her a really nice value on the process list, and I will treasure her
functions as she runs and sleeps on the kernel! Blessed is she!

While I make a living in ICT, I don't call myself a developer, and I
don't think being one is a prerequisite for contributing to MOTU.

Regards

Nullack



Re: failure of linux-image-2.6.24-19-generic 2.6.24-19.41 to upgrade

2008-09-16 Thread Null Ack
Trevor, if you have the -proposed source activated in your software
sources, turn it off if you want a more robust upgrade experience.

Generally kernel updates are useful, and if there are problems it's
worth looking at why the update didn't work.

There's some good documentation, Trevor, to help you report a bug and
do a little debugging yourself. It's not so hard if you can follow
some instructions :) Here:

https://help.ubuntu.com/community/ReportingBugs

https://wiki.ubuntu.com/Bugs/BestPractices

https://wiki.ubuntu.com/DebuggingProcedures

https://wiki.ubuntu.com/KernelTeamBugPolicies

Regards

-- 
ubuntu-au mailing list
ubuntu-au@lists.ubuntu.com
https://lists.ubuntu.com/mailman/listinfo/ubuntu-au


Re: failure of linux-image-2.6.24-19-generic 2.6.24-19.41 to upgrade

2008-09-16 Thread Null Ack
That's what we're here for, to help Australians embrace Ubuntu :)

Yes, that's what I meant: Software Sources. It's not enabled by
default, so if you haven't changed it, it won't be there.

As much as Ubuntu is tested across the world, sometimes things slip
through. The way to contribute is to get involved in the bug process
for the update that didn't work. The links I referenced have some
documentation to help you.
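For anyone following along in a terminal instead of the Software Sources dialog, unticking the proposed updates box is equivalent to commenting out the -proposed lines in /etc/apt/sources.list. A sketch on a sample file (the mirror URL is illustrative; edit the real file as root only once you are comfortable):

```shell
# Work on a sample copy first; the real file is /etc/apt/sources.list.
printf '%s\n' \
  'deb http://au.archive.ubuntu.com/ubuntu hardy main restricted' \
  'deb http://au.archive.ubuntu.com/ubuntu hardy-proposed main restricted' \
  > sources.list.sample

# Comment out any -proposed entries, then confirm the result.
sed -i '/-proposed/s/^deb/# deb/' sources.list.sample
cat sources.list.sample
```

After making the same change to the real file, sudo apt-get update refreshes the package lists without the pre-release pocket.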


2008/9/17 Trevor Cowan [EMAIL PROTECTED]:

 Null Ack,
 Thank you for your reply and assistance, I am most grateful.

 I went System/Administration/Software Sources. Is this where you meant? I am 
 a newbie and not sure how to proceed to turn off the proposed source 
 activated in sources.

 Trevor






Re: Backtracing, Invalidated Bugs and Quality

2008-09-13 Thread Null Ack
Gday everyone. As part of my work with the QA Team I want to
contribute to fixing the process gaps in this area. Can I summarise
what I see as the problem:

Problem situation: I'm increasingly noticing that certain types of
bugs are being marked Invalid or Incomplete with boilerplate messages
instructing the bug reporter to conduct a backtrace. The engagement of
the end user is poor, the user experience is non-intuitive, the
documentation walking a user through how to do this is poor, and the
net effect is that the hit rate of users actually fulfilling the
request is very low. The net result is usually that the bug stagnates
and duplicate bugs pile up. It may, or may not, then get filed
upstream if the number of duplicates gets high enough for somebody to
notice. I have a high regard for all the Ubuntu developers, and I say
this carefully, but I think if we're all honest about the situation,
as some developers have been with me, there is an element of "gee, I'm
so busy and this bug report looks non-trivial... I might just copy and
paste my backtrace wording onto this, move the status to get it out of
the way, and if it's a real problem eventually a bunch of users will
report it too, and then I might send it upstream, because upstream
have the time to look at it."

There was a discussion on IRC which I've summarised here for the list
and some proposed action items:

1. The Tool Chain For Debugging is Not Robust

The point was made that the debugging toolchain is complex and will
not consistently provide the needed debugging information on all
occasions. Sometimes the retracing will fail for some reason. I simply
see this as a longer-term challenge for the FOSS community: working on
bugs in the toolchain matters, because a reliable and repeatable
method for getting right into the guts of the registers and stack is
important for fixing the more curly bugs. The toolchain being
imperfect, however, is not an excuse for failing to implement best
practices in Ubuntu for debugging in the meantime. We can make
progress.

2. The Volume Of Bugs Coming Through Makes The Hard Ones Too Hard

It was suggested that the number of bugs coming through is so high
that trying to fix the more tricky ones isn't worth the time given the
available person power. I made the point, and I'd like to highlight it
again, that the complexity of fixing a bug should not be the criterion
for which bugs get developer attention. The best practice for building
quality into Ubuntu, in my view, is that the determinants should be
how seriously a bug affects the user experience and how common that
experience is. When I've got stuck in my testing work on Ubuntu I've
appealed for help in the testing section of the Ubuntu forum or on
IRC, and I've been greatly encouraged by good, helpful responses. I'm
sure bug squadders and Ubuntu testers would be happy to respond to
developers with unit testing, feedback, etc. I recently helped
Alexander Sack with performance feedback on a web browsing item and
unit testing - it was fun!

3. The Need For Improving Apport

A developer suggested that there is not a gap with Apport as it exists
now. I disagree, and I cited the example of a package compiled with
optimised compiler flags, where a debug package will need to be
installed to get a meaningful trace. I know from experience that
automating this, or at least giving the user an easier and more
intuitive workflow for it, is better than documentation. I really like
the ideas for improving Apport's functionality that Markus shared
earlier.

Action Item 1: I'm not a developer, but I can help any developers with
testing and feedback for enhancements to Apport. I might also be able
to assist with design / blueprints / discussing possible features. Or,
someone could come up with compelling reasons why Apport is fine the
way it is and the workflow issues can be resolved another way.

Another thing that came up in the talks was that the backtrace
boilerplate copy-and-paste isn't always accurate in the circumstances
in which it's used. Sometimes the real issue is being able to
replicate the problem, not the backtrace. Or a backtrace on a debug
build is truly needed, but the user doesn't know how to help in detail
and bug squadders can't replicate the problem at will on their
configurations. Or, since there is considerable obsolete information
hanging around, there is confusion among bug squadders about what
exactly to do and human error has occurred.

Action Item 2: A review of the documentation on both the user side and
the bug squadder / developer side to more fully explain and walk
people through the situation. I can help here too, but again I'm not a
developer, so the more technical aspects of the backtrace (why it
sometimes fails, how to do it manually) will need other people's
involvement. Basically, we need to improve the hit rate.

That seems to be what the IRC logs touched on, thanks.


Re: The Case For Re-Evaluating Our Release Approach To FFMPEG

2008-09-12 Thread Null Ack
2008/9/10 Reinhard Tartler [EMAIL PROTECTED]:
 Null Ack [EMAIL PROTECTED] writes:

 Summary : I think we need to have regular snapshots of svn ffmpeg,
 libavcodec and so forth released in both the current development build
 and as backports to production builds. User's expect to have video
 experiences atleast as good as Windows and Mac, and this is necessary
 for actually delivering that.

 The main problem is lack of manpower. Every time ffmpeg is updated, we
 can more or less expect applications and libraries that use them to
 break.

 FWIW, the next upstream snapshot that I'm preparing for
 debian/experimental right now is going to drop nearly all
 patches. Packaging new snapshots should become pretty easy then.

Thanks for the responses guys.

Reinhard, I'm excited to hear about the progress with dropping many
patches and streamlining the process for syncing from SVN. I'm also
thankful for your interest in bug 263153, which I think is likely
fixed in the latest gstreamer ffmpeg plugin release.

I understand about person power, and I will commit to helping you with
testing new ffmpeg releases and related applications. I have a test
library that covers many different containers, compression types and
other features. I'm somewhat new to gstreamer, but I've got a pretty
solid understanding of digital media technologies and practices.



Re: Backtracing, Invalidated Bugs and Quality

2008-09-12 Thread Null Ack
Thanks for all the discussion on this folks. :)

Just now I had a crash in Totem, with Apport leading me to 9
previously reported bugs that are either Invalid or Incomplete because
the bug reporter did not do a backtrace to help fix the problem. Now I
have the same issue that was originally reported in the first bug
report all the way back in May 2007, with no concrete progress since.

On top of this, people have said that it's a recurring discussion that
comes up every six months or so, so let's fix this, eh?

To recap, I've suggested that all Alpha builds could be debug-by-default builds.

Others, such as Markus, have what I frankly think is a better idea,
where Apport tells the user the situation, downloads a debug version
of the package and waits for the crash to occur again. Then it sends
the backtrace to the right bug for analysis.

Krzysztof had a promising idea, similar to what MS apparently do:
"The information about debugging symbols is only needed on the server;
the client only sends (in the simplest version) the MD5 sum of the
library and the address offset, which is transformed into the symbol
by the symbol server."
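A minimal sketch of the client side of that idea, with every name and value illustrative rather than part of any real protocol: the client reports only a checksum of the faulting library and the crash offset, and a server that keeps the debug builds maps the pair back to a symbol.

```shell
# Stand-in for the faulting library on the user's machine.
lib=$(mktemp)
printf 'fake library contents' > "$lib"

# The client sends only this small pair; the symbol server resolves
# (checksum, offset) to a function name using its own debug builds.
sum=$(md5sum "$lib" | cut -d' ' -f1)
offset=0x1a2b                    # illustrative crash address offset
echo "$sum $offset"

rm -f "$lib"
```

The appeal is that the cost to the user is tiny compared with downloading and installing debug packages locally.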

Can we focus on a debate about what the best approach is? This in turn
can lead to the details of implementation.

Thanks

Nullack



The Case For Re-Evaluating Our Release Approach To FFMPEG

2008-09-08 Thread Null Ack
Gday everyone,

It was suggested to me on IRC that I should discuss this matter on
this mail list.

Summary: I think we need to have regular snapshots of SVN ffmpeg,
libavcodec and so forth released both in the current development build
and as backports to production builds. Users expect video experiences
at least as good as on Windows and Mac, and this is necessary for
actually delivering that.

My argument :

To be honest, my original approach to meeting my video needs on Ubuntu
was to turf out the default apps and do my own custom compiles of
mplayer, mencoder and gnome-mplayer. This continues to work well and
frankly is still superior to what I can do under gstreamer and Totem
(for things such as deinterlacing and other video filters). However, I
felt guilty about doing this, because I was not supporting the Ubuntu
principle of having one standard method for doing things, and I was
limiting the value of the testing work I do on Ubuntu by not using the
default applications in all circumstances. So some time ago I bit the
bullet, committed myself to using the default apps, and left mplayer
for any related tests.

I am thankful for Sebastien's updates to the gstreamer good and ugly
plugins recently, as well as the updates Intrepid has received with
Totem.

However, the ffmpeg gstreamer plugin is a key plugin for most users'
multimedia experiences. It provides to gstreamer:

* 256 elements
* 39 types

Of particular note among these many features is that some very common
video formats are handled through this plugin, such as AVC / H.264
decoding. AVC is one of the formats gaining real momentum: it is
widely used on Blu-ray, HD DVD and by some digital video broadcasters,
and as an efficient backup format for personal media. As a subscriber
to the ffmpeg commit mailing list, I know that in the past months
there has been substantial improvement to the AVC decoding code and
the resolution of many related bugs.

AVC is just one of the many decoders ffmpeg handles that have had
numerous bug fixes in the past months.

Since gstreamer released a new ffmpeg plugin, I have been keen to see
it arrive in Ubuntu and have Intrepid enjoy the more reliable video
experience this would offer our users. I'm advised, though, that what
is needed is to upgrade ffmpeg and its related libraries across the
board to deliver the new gstreamer plugin. Upgrading ffmpeg across the
board would also benefit more advanced Ubuntu users who, for example,
may be doing video transcoding via libavcodec. They won't need to
suffer known bugs in old ffmpeg builds.

I want to note how the FFmpeg project manages releases:

* They don't do them
* Their standard response to bug reports is to compile SVN and retest.

What seems to happen in practice is that FFmpeg in Ubuntu is rarely
updated: Intrepid's packages are currently seven months old, for an
upstream project that has numerous commits daily.

I feel bad for our users, because I see bug reports on Launchpad that
I know are never going to go anywhere, because ffmpeg currently isn't
kept up to date and is not backported for their build.

Anyone with even a passing view of the situation has to agree this is
not ideal. I contend that the risk of having old binaries in the
repos, and all the problems that brings with poor user experiences,
outweighs the risk that new code will bring new problems. My practical
experience of doing my own compiles of SVN head has consistently been
that things are fixed and enhanced. On one occasion I had a problem
where the code would not compile, and on another a bad commit occurred
which affected functionality, but that was fixed in half a day and I
simply recompiled. Upstream strive for the SVN build to be fully
functional, and in my experience that's met on nearly all occasions.

My skills are not in packaging, but I can certainly assist with
testing and helping construct a freeze exception rationale for
Intrepid. Please consider.



Re: Bugs for NM 0.7

2008-09-05 Thread Null Ack
To add to this, we have some serious regressions: static IPs and
custom MTU values cannot be applied consistently:

https://bugs.launchpad.net/ubuntu/+source/network-manager/+bug/258743

https://bugs.launchpad.net/ubuntu/+source/network-manager/+bug/256054

http://bugzilla.gnome.org/show_bug.cgi?id=548114

I'm concerned that GNOME seems to be pushing through beta 1, beta 2
and onwards without resolving these bugs.



Correct Process for Package Update Requests

2008-09-01 Thread Null Ack
Gday folks,

Can someone please clarify the correct process for package update
requests?

On the 27th of June I asked the MOTU mailing list what it was, and was
advised: "The correct way to do this is to file a bug against the
package and tag it upgrade." Since then I've done this for 6+ bugs.

Yesterday I filed two bugs complying with this; one of them was a MOTU
package, not a core-dev one, and I was advised in the bug comments
that "there is no need to open a new version update request". Also,
when I asked a dev who's interested in this functional area (video) on
IRC if he could confirm the bug, he felt it wasn't useful to file bugs
for upgrade requests.

I think we need general agreement on the process here.

Regards,

Nullack



Backtracing, Invalidated Bugs and Quality

2008-08-20 Thread Null Ack
Evening Devs,

Tonight I was running some of my test suite and had
tracker-preferences crash unexpectedly during a routine workflow of
viewing (not changing) preferences. Apport came through, and I ended
up at an invalid existing bug from 2007, because the user had not
submitted a backtrace with debugging symbols. This has happened to me
before, and my mind has been busy since, thinking about how this
detracts from quality and what to do about it. These are real bugs,
some of them in production, that are not being fixed.

I'm not convinced that the strategy of asking users to install
specialised debugging packages is the right way to go; I see a very
low hit rate with it working in practice. I have professional
experience in managing testing projects and consulting in related
fields, so with Ubuntu being close to my heart I often think about how
we approach testing and what processes could be improved. Can I please
offer some thoughts:

1. The Debug-By-Default Build. This would be where the entire
operating system is built using debug packages. It could be done at a
targeted point of the lifecycle, such as during Alpha, where Apport
will deliver all debug symbols by default. We could still distribute a
non-debug build for users who must have that type of build, but it
could be hidden away so that the most common type is the debug build.
We engage some community evangelists who promote its importance so it
gets readily brought into practice.

2. The Hybrid Debug Build. Similar, but for technical reasons only
some packages are debug builds.

3. Extending Investment in the Canonical Test Lab. There are sound and
proven arguments I could help to present that demonstrate how the cost
to fix defects, both in monetary terms and in terms of
image, future sales and so forth,
increases at an escalating rate the further a defect
progresses through the lifecycle. A business case could be built that looks
at extending whatever Canonical Test Lab exists now with the mission
of capturing the higher-priority backtrace bugs and replicating
them in house under controlled conditions. My consulting career has
been based in Australia and I only have knowledge of what the rates
are for various testing roles in my country. I do though understand
that some other countries have far lower rates. I'm not suggesting
exploitation of cheap labour, but labour costs
could be reduced considerably by choosing a location for the lab at
fair market prices. It might additionally be possible to set up a
large-scale test lab using hardware donated by Canonical's partners. A
more aggressive, multi-phase strategy could be planned for the
future, such as another team building up
the automated test harness to the point where most function points
are tested all night, every night, in every build before it gets
posted to the daily ISOs. The results could be mined by automatic
processes that then engage Ubuntu developers or upstream projects
automatically.

4. Extending The Ubuntu Entry Criteria. At least from my perspective
(which may be insular), the practice of the Ubuntu release methodology
for accepting new code into existing packages is something along
the lines of "has Debian accepted it and does it compile". I fully
understand that upstream projects lack person power for runtime
testing and need their code to be included in pre-release
distros to be tested. One thing that has gotten results for me in
projects I've managed is not just focusing on runtime-level tests.
Static testing tools really can be useful and can be quite specific.
It's possible to set arbitrary benchmarks for release entry criteria as
a minimum standard. You can set levels of compliance, such as a mandatory
level where certain code problems are specifically banned, and others with
an allowed number of warnings and so on. I realise this would need
careful implementation, but I think chipping away at it piece by piece
could realistically, over time, become an accepted part of what upstream
projects do in a standard way to demonstrate their new code changes
are ready for distros to look at.

These are just some ideas I had. Anyway, sincerely thanks for Ubuntu
and all the work that goes into it.

Regards

Nullack

-- 
Ubuntu-devel-discuss mailing list
Ubuntu-devel-discuss@lists.ubuntu.com
Modify settings or unsubscribe at: 
https://lists.ubuntu.com/mailman/listinfo/ubuntu-devel-discuss


Re: Backtracing, Invalidated Bugs and Quality

2008-08-20 Thread Null Ack
2008/8/21 Markus Hitter [EMAIL PROTECTED]:

 Am 20.08.2008 um 11:42 schrieb Null Ack:

 I'm not convinced that the strategy of asking users to install
 specialised debugging packages is the right way to go. I see a very
 low hit rate with this working in practice.

 How about getting this even more automated? Apport would have three buttons:

  [ Abort ]   [ Submit Report only ]  [ Allow getting bug fixed ]

 The third button would not only send the bug report, but replace (apt-get)
 the standard package with a symbol-equipped equivalent as well. Having a
 debug version of a package among standard packages hurts only neglible and
 most users won't even notice.

 Voi-la, next crash time Apport will come along with a backtrace.

Markus, I particularly like your suggestion here. If there are certain
types of bugs that cannot be fixed without backtraces with debugging
symbols, we must come up with easy tools on the desktop that create
those conditions.



 1. The Debug By Default Build.

 Good idea, but the distro won't fit on the CD any longer. Don't know if this
 is an issue for developers.


Personally I don't care about the size, I'd just burn a DVD.


@Bryce - I don't think it matters what processes other projects
use. To my way of thinking it is about process improvement and having
processes that are all geared to delivering the outcomes: outcomes
that show Ubuntu to have rock-solid stability, to be easy to use, to
have a quality user experience and so on.

@Emmet - I think it's unhealthy to treat the difficulty or time a fix
costs the developer as the criterion for what gets looked at. A quality
user experience should be the primary factor, and any developer in my
book who's committed to Ubuntu quality would be tenacious about
chasing it.

Back to Markus:


 4. Extending The Ubuntu Entry Criteria.

 This would hobble invention of new packages immediately. As seen with the
 recent Empathy discussion, new packages don't go straight from the
 developer's alpha release into the distribution CD anyways.


I'm not so sure it would hobble open source software projects. Can I
please explain more fully? I am talking about packages that the Ubuntu
architects have already allowed into the distro. In this case, for
example, we might be considering allowing a new revision of gedit
into the alpha repos. I'm not talking about new packages altogether.

Best practices on commercial projects that I've seen would involve
something along the lines of:

* Devs come up with the new code
* It is fully code reviewed by a human and made to meet certain benchmarks
* Static testing on the code occurs using static testing tools and
made to meet certain benchmarks
* and so on

In the case of Ubuntu, with our example new version of gedit:

* Has any code review been done?
* Has any static testing tool looked at the code?

As to the implementation, as I said it would have to be carefully
implemented. Can I summarise please:

Core basis for my extending-the-entry-criteria argument: the earlier
problems are fixed, the smaller the compounding multiplier of time and
money that goes into fixing them.

I'm suggesting a staggered implementation. There are many ways this
could be done, one might be:

1. The Ubuntu security team start a proactive security initiative
that uses a static test tool to identify memory-management
problems that are security problems. The security team contact the
upstream projects, saying something along the lines of "we're using
this code analysis tool and we suggest your code has security
problems".

2. Case studies and outcomes are shared on the web. Promotion of
the benefits occurs over time and open source interest rises.

3. Ubuntu makes the leading step in showing its commitment to
quality by requiring that all upstream projects run the
static security test tool before code will be accepted into the
repos. Tools are built to make this pretty easy for upstream.

4. As time goes on, this becomes second nature. More people get
interested in it, and add-ons are written that expand what the static
test tool looks at and extend the rules regarding acceptance of new
code into existing repo packages.
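A toy version of step 1's "specifically banned" tier can be sketched
with nothing more than GCC's own diagnostics standing in for a
dedicated static analysis tool (cppcheck, Coverity and friends would
play the same role); the file and flag choices here are illustrative:

```shell
# A tiny worked example: pick one defect class (use of an
# uninitialised variable) and make it a hard failure.
cat > example.c <<'EOF'
#include <stdio.h>

int main(void)
{
    int x;                 /* bug: never initialised */
    printf("%d\n", x);     /* read of an uninitialised value */
    return 0;
}
EOF

# -Werror=uninitialized promotes just this warning class to an error --
# the "mandatory" compliance tier described above:
if gcc -Wall -Werror=uninitialized -O2 -c example.c 2>errors.txt; then
    echo "entry criteria: PASS"
else
    echo "entry criteria: REJECTED"
    cat errors.txt
fi
```

Other warning classes could sit in the "allowed number of warnings"
tier simply by counting them instead of failing the build.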

Skip to Step 22: Imagine my ultimate vision, where every upstream
project is required to, and does, perform extensive static testing on
its code, and there are pages of standards about criteria for Ubuntu entry.
Imagine a teenager with a killer idea for a really cool app, who
comes along to IRC and says "Oh, what the heck, why do I have to deal
with this crap?" And the cowboy developer is responded to by a
seasoned open source dev guru who replies "because it results in
better code, with better quality, with better user experiences, without
encumbering you with doing it all yourself".



Re: DVD - CD BURNING

2008-07-30 Thread Null Ack
Please don't get the impression I'm on a Gnome crusade, but what does
Brasero not do that you want? My personal experience has been that
dual-layer and single-layer DVDs as well as CDs all burn fine using good
media at max speeds.

-- 
ubuntu-au mailing list
ubuntu-au@lists.ubuntu.com
https://lists.ubuntu.com/mailman/listinfo/ubuntu-au


Re: DVD - CD BURNING

2008-07-29 Thread Null Ack
Just drop the burn speed down some in Brasero.

Personally I wouldn't install Qt-based apps onto Gnome - guess I'm a purist :)

It's probably due to poor-quality media - I use Verbatim now and can
burn at 16x with no problems using Brasero.

cheers



Re: DVD - CD BURNING

2008-07-29 Thread Null Ack
Try flashing the drive's firmware to the latest version.



Re: Utilities wanted please

2008-07-23 Thread Null Ack
Gnome has an app called HardInfo



Re: Utilities wanted please

2008-07-23 Thread Null Ack
Dave, it's in the System menu, not the Applications menu. Cheers



Re: [OT] Optus: Why does cutting a cable bring down the entire network?

2008-07-17 Thread Null Ack
Good insights there gents.

This has me wondering how Australia's internet infrastructure is
vulnerable to terrorist attack. Imagine the damage that could be done.

Unlike countries like the US, we obviously don't have the huge
conglomeration of backbone networks that would make a similar attack
far more isolated.



Re: [OT] Optus: Why does cutting a cable bring down the entire network?

2008-07-16 Thread Null Ack
That's an interesting question, and one which should be put full force
to Optus! The internet was actually designed to withstand nuclear
attack by the US military before it became public, and network design
treats redundancy as a key goal. You could do some traceroutes to get
some insight into how your packets are being routed.
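A sketch of what that looks like (the target hosts here are
placeholders, and mtr may need installing first):

```shell
# Show each router hop between you and a host -- a sudden change in
# the path after an outage is visible here:
traceroute www.optus.com.au

# mtr gives a live, continuously-updated version of the same view:
mtr --report www.optus.com.au
```

Comparing the hop list before and after an outage shows whether your
ISP actually has a redundant path or just one cable.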



Re: The Power of Ubuntu

2008-07-11 Thread Null Ack
I actually wrote up a how to on the forum:

http://ubuntuforums.org/showthread.php?t=848144

have fun

2008/7/11 Sridhar Dhanapalan [EMAIL PROTECTED]:
 On Wed, 9 Jul 2008 at 10:00, Null Ack [EMAIL PROTECTED] wrote:
 It occurred to me how powerful Ubuntu is. Even without Gnome, I could
 still run my Upnp server to watch films on my xbox 360.

 How do you achieve this? I'd love to see my Xbox 360 being used for more than
 just gaming.

 Cheers :)

 --
 Your toaster doesn't crash. Your television doesn't crash.
 Why should your computer? http://www.linux.org.au/linux




The Power of Ubuntu

2008-07-08 Thread Null Ack
I was reflecting this morning on my previous criticisms of areas where
Ubuntu and GNU/Linux need to improve in order to reach their
potential. I recently had an experience that crystallised the power of
Ubuntu for me, and I felt I should share this view with you all.

Part of my ICT specialisation is in test management and release
management. Naturally my contributions to Ubuntu have tended to lean
into bugs and testing. I was playing around with the Intrepid Alpha 1
release. Having raised some bugs and satisfied myself I have
sufficiently covered core functions I was interested in, I settled
into using it daily. One of the upgrades was for Xorg and for a few
days I was without X and Gnome while all the package dependencies were
met in development.

It occurred to me how powerful Ubuntu is. Even without Gnome, I could
still run my Upnp server to watch films on my xbox 360. I could still
use apt-get to keep my systems configuration items up to date with the
repos binaries. I had nano for basic docos and my printer working
fine.

Best of all, I was able to leverage the power the system has with
logging. I was easily able to determine the problems with X from the
logs in /var/log, unlike the vagaries of Windows Server or Vista, where
at some level the actual problem gets lost inside the web of hidden
layers within the system internals. And trust me, I know the Windows
platform very well.

There are so many exciting things happening too. I hear a rumour that
native ZFS in kernel space is coming, and DRI2 so that memory on video
cards can be fully managed. In comparison, we have only vague
references to Windows 7 and Midori.

I have the comfort of knowing there is no back door, no hidden little
government probe that can be put into closed code. Am I paranoid? I
think not: open code, as Schneier puts it, is a cornerstone of
security.

I am free to put up new ideas and show how certain functions might be
improved. I was interested in gnome-mplayer and provided some
insights there, which the actual developer responded to and is now
looking at for a future release. I am part of the ecosystem and can
support its betterment.

Going back to Xbox 360 media sharing, what do we find? MS implement a
UPnP service that is not standards compliant, so that it works out of
the box with Windows stuff only. To make it worse, they have also
implemented it in error, as the video side is dealt with differently
from the music side at a technical level. So all their interfacing code
has worked around this, rather than doing it in a consistently correct
way to start with.

The future of ICT relies on open, consistent standards that avoid
vendor lock-in.

In my house, and professionally, I have used Windows, OS X, Unix,
Mainframes and so on. No system is perfect, but what is clear to me is
that Linux has the momentum behind it and the right free and open
approach to be the system that lasts for centuries ahead.

I love you Ubuntu.



Re: Advice

2008-07-06 Thread Null Ack
And do you have DHCP on or do you have to statically allocate your IPs?

2008/7/7 Karl Goetz [EMAIL PROTECTED]:
 On Mon, 2008-07-07 at 05:53 +1000, bobkay6 wrote:
 Abuntu,

Just installed ubuntu 8.04. having trouble getting  on net
 through  through a network DI router   system do I need to install a

 A what network router? What is DI?

  different  driver

 To get on net and could you advise as to where I can download same.


 The device is an ADSL router? USB or ethernet?
 kk

  Bob.

 --
 Karl Goetz [EMAIL PROTECTED]

 --
 ubuntu-au mailing list
 ubuntu-au@lists.ubuntu.com
 https://lists.ubuntu.com/mailman/listinfo/ubuntu-au





Re: Monitor problem

2008-07-06 Thread Null Ack
Yes, but what about texture compression?

And accelerated direct rendering in 2d?

For example, I've noticed a good improvement with mplayer using xv
instead of x11 as an output driver.

If you have sources for further reference I'm genuinely interested, thanks

2008/7/7 Daniel Mons [EMAIL PROTECTED]:
 Null Ack wrote:
 | Martin you appear to not be familiar with 2d acceleration on Linux?
 |

 You appear not to be familiar with exactly what 2D acceleration is,
 how generic it is, and how little it has to do with the actual
 application level number crunching.

 2D acceleration has been around since the days of PCI graphics with
 256KB frame buffers, and yes, even in Linux (or more correctly, XFree86
 and today Xorg).  If you own a card made in the last 10 years with
 enough frame buffer space to hold your entire resolution at the correct
 colour depth [*], upgrading to a new card will do absolutely nothing for
 your application speed when dealing with programs such as F-Spot, GIMP, etc.

 The recommendation was to upgrade from a low-end ATi video card to
 something better to improve F-Spot performance.  This is incorrect, and
 will not yield the performance benefits desired.  The bottleneck is
 somewhere else.  My guess is F-Spot is doing some heavy reading or
 pre-caching of images from the disk on first start, which is usually the
 case for such programs.  A lot of this can be disabled in the
 application preferences.

 - -Dan

 [*] Some maths for you:
 Full HD is 1920x1080 at 32 bits per pixel.

 1920 pixels * 1080 pixels * 4 bytes per pixel / (1024^2 bytes per
 megabyte) = 7.9MB

 It requires only 7.9 MB of framebuffer space to store a screen worth of
 information at HD resolution with full colour depth.  Anything more is
 totally unused when dealing with programs like F-Spot and other image
 viewers.

 --
 ubuntu-au mailing list
 ubuntu-au@lists.ubuntu.com
 https://lists.ubuntu.com/mailman/listinfo/ubuntu-au




Re: Linux Servers for Infrastructure

2008-07-06 Thread Null Ack
Hi Daniel, I'm also interested in this topic. I was trying to get
more of an in-depth understanding from a forum post I did here:

http://ubuntuforums.org/showthread.php?t=848194

It would be great if you shared your views on that.

I thought there was a generic 386-only build, other than the AMD64 and
server kernels, the IA build, etc.

Thanks again

2008/7/5 Daniel Mons [EMAIL PROTECTED]:
 Which kernel are you using?  The linux-image-generic kernel supplied
 with Ubuntu requires a 686 equivalent processor (first appearing with
 the Pentium Pro CPUs).

 You might need to switch to the linux-image-386 for support on your
 processor.

 - -Dan

 --
 ubuntu-au mailing list
 ubuntu-au@lists.ubuntu.com
 https://lists.ubuntu.com/mailman/listinfo/ubuntu-au




Re: Monitor problem

2008-07-06 Thread Null Ack
Many thanks for that Dan



Re: Monitor problem

2008-07-04 Thread Null Ack
2008/7/4 The Wassermans [EMAIL PROTECTED]:
 On Fri, 2008-07-04 at 21:38 +1000, Null Ack wrote:
 Are you running your monitor at its native resolution in Ubuntu?

 Sorry Null,  I have a jargon problem.  What does native resolution
 mean in this context?

 The GUI offers a selection of from 1680x1050 to 720x400.  Upon
 installation Ubuntu defaulted to 1680x1050.  That's what I'm using.

 Regards
 Dave W



Dave, LCD monitors have a "native" resolution at which they work
best. Unlike on a CRT, when other, non-native resolutions are displayed
on the monitor, the pixels have to be lit in a way that causes the
sharpness to drop, due to the way an LCD works. The first place I would
explore with this is to identify the native resolution of your
particular LCD and ensure you are using that before looking at other causes.
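A quick way to check, assuming an X setup recent enough to ship
xrandr (the 1680x1050 mode below is just this monitor's example):

```shell
# List the modes the monitor advertises; the preferred/native mode is
# normally the largest one in the list:
xrandr -q

# Switch to it explicitly if the desktop picked something else:
xrandr -s 1680x1050
```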

Cheers



Re: iinet users

2008-07-03 Thread Null Ack
Just use Synaptic: click Settings, Repositories, Download from, Other,
then choose iiNet in the list.



Re: [OT]Re: Microsoft CDs

2008-07-03 Thread Null Ack
It's not lunacy at all.

I do own a 360, but I consider the PC gaming experience to be superior
to a console's. As well, there are many use cases for
Linux gaming - such as on mobile media devices or mobile phones.

There is no technical reason why Linux gaming can't occur. Linux works
on many platforms and is open, so cool apps are possible: say a Linux
phone running GLTron, or indeed an SLI powerhouse running the latest
OpenGL and OpenAL FPS.

You would be surprised, Daniel, how often I see people saying they don't
run Linux because the gaming experience is poor.

Perhaps if you considered the position more carefully you would not
be so quick to dismiss it out of hand.



Linux Servers for Infrastructure

2008-07-03 Thread Null Ack
I'm reflecting on an infrastructure project I did recently and how it
might have been done using Linux (Ubuntu) servers. In this example the
desktops have to remain the approved XP SOE. To give an outline of the
environment:

* 1450 desktops running Windows XP on a SOE in three buildings
separated via fibre connections
* Beyond the SOE applications are packaged into MSI's and controlled
via group policy
* AD is used throughout

The services for the servers are:

* File serving over the fibre connections to the large replicated SANs
(there are two) that store all data
* Authentication
* Software distribution for patches and MSI packages to be installed
into the desktops as allowed by group policy
* DNS
* Mail
* NTP
* Intranet and Internet web serving
* Print serving
* Monitoring and alert system
* Single sign on
* Security auditing of desktops

Two eight-way servers (for scalability) were deployed in separate
physical locations and set up in a cluster for all services, to allow
for online maintenance of one node. The servers had no internal
storage and booted off a LUN on the SAN.

I'm not sure about the software distribution aspects and group policy?

I'm curious about this. What I see happening is Linux being used for
app / web / DB servers but not a lot in desktop infrastructure -
maybe it's just the places I've worked at.

Thoughts?



Re: Partition Help

2008-06-28 Thread Null Ack
Simple fix for grub is:

ALT+F2
gksudo gedit /boot/grub/menu.lst
Edit partition number
Remove unwanted boot descriptors
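For reference, the stanza being edited looks roughly like this; the
disk/partition numbers and kernel version are assumptions for your
system (grub legacy counts from zero, so hd0,1 is the second partition
of the first disk):

```text
title   Ubuntu 8.04, kernel 2.6.24-19-generic
root    (hd0,1)
kernel  /boot/vmlinuz-2.6.24-19-generic root=/dev/sda2 ro quiet splash
initrd  /boot/initrd.img-2.6.24-19-generic
```

The root line and the root= kernel argument are the partition numbers
to fix; the title blocks you no longer want can be deleted whole.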

cheers

2008/6/28 Simon Ives [EMAIL PROTECTED]:
 I forgot to mention that Hardy is located on the second partition.  I
 was worried that that there may be an issue because Hardy wouldn't be
 physically located at the beginning of the drive.  Also, would Grub
 recognise the changes?

 Thanks.

 Simon

 Message: 10
 Date: Sat, 28 Jun 2008 12:44:22 +1000
 From: Null Ack [EMAIL PROTECTED]
 Subject: Re: Partition Help
 To: ubuntu-au@lists.ubuntu.com
 Message-ID:
   [EMAIL PROTECTED]
 Content-Type: text/plain; charset=UTF-8

 I would think you should be able to simply delete the unwanted
 partition and resize the one you want but to be safe Id backup the
 data beforehand :)

 2008/6/28 Simon Ives [EMAIL PROTECTED]:
  I've got a, hopefully, simple question regarding the partitions on my
  system.
 
  I have two equal size partitions (ext3) with the first containing Gutsy
  and the second Hardy.  I no longer need Gutsy and would like to have
  just a single partition with Hardy.  I don't want to remove the Hardy
  install that I already have.  ?Can I simply use a tool such as GParted
  to accomplish this or is there some other process that's better/easier?
 
  Thanks.
 
  Simon.
 
 
  --
  ubuntu-au mailing list
  ubuntu-au@lists.ubuntu.com
  https://lists.ubuntu.com/mailman/listinfo/ubuntu-au

 --
 ubuntu-au mailing list
 ubuntu-au@lists.ubuntu.com
 https://lists.ubuntu.com/mailman/listinfo/ubuntu-au




Re: Synaptic and Picasa

2008-06-27 Thread Null Ack
Download the deb file (be it x86 or x64) from:

http://picasa.google.com/linux/download.html

And double-click it to install.

Why did you edit your sources file? I would revert that to fix up your sources.
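Or, from a terminal; the filename pattern here is a guess at what the
download is called:

```shell
# Install the downloaded package directly:
sudo dpkg -i picasa_*.deb

# Pull in any dependencies dpkg complained were missing:
sudo apt-get -f install
```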



Getting Packages Updated

2008-06-26 Thread Null Ack
My apologies for asking what seems like a basic question, but how does
a user go about requesting that a package get updated? The MOTU wiki
seems geared to becoming a MOTU. I went looking on the Ubuntu forums
as well: no relevant posts in the search, so I created a post, bumped it
some time later, and still got no replies despite numerous people reading it.

Additionally, I was getting confused about how to determine where a
package comes from. I have since discovered that if I right-click in
Synaptic, one of the tabs shows an email address for the maintainer.
Often the comeback from upstream devs is to contact the maintainer for
a new version.

I've been compiling some packages from source, but it's messy because
they aren't integrated into my system and the dependencies become
a problem. I would really appreciate updates to:

1. Tripwire
2. SVN mplayer
3. SVN gnome-mplayer (revision 700 has some important fixes)
4. GIT x264 and libx264

I especially consider mplayer, gnome-mplayer and x264 to be a special
case where SVN builds do not pose any real risk when backported
into released Ubuntu revisions. Many people are resorting to compiling
these themselves to get updates, but it causes dependency problems
with the rest of the system. It would be really terrific if these
three packages could be built, say, on a weekly basis and made available
in the backports repository. Development on these moves ahead most days,
and the only problem I've ever had is a build not compiling, which is
fixed in a few hours by a new release. It also helps that these three
are not core to a default Ubuntu install, so new users won't be
confused by the regular updates.
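Until then, for anyone compiling these themselves, checkinstall is a
partial answer to the integration problem: it wraps "make install"
into a .deb the package manager knows about. A sketch, assuming an
already-configured mplayer SVN checkout (the package name and version
string are made up):

```shell
cd mplayer            # an existing, configured SVN checkout (assumption)
make

# Instead of "sudo make install", build and register a .deb so the
# installed files are tracked and cleanly removable later:
sudo checkinstall --pkgname=mplayer-svn \
                  --pkgversion=1:svn$(date +%Y%m%d) \
                  make install
```

It doesn't solve dependency resolution, but at least dpkg knows the
files exist and the build can be removed with apt.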

Thanks

-- 
Ubuntu-motu mailing list
Ubuntu-motu@lists.ubuntu.com
Modify settings or unsubscribe at: 
https://lists.ubuntu.com/mailman/listinfo/ubuntu-motu


Re: Graphics Card Issue

2008-06-25 Thread Null Ack
Youll need to post xorg.conf and Xorg.0.log for needed details. Youll
get a quicker fix for this if you do a bit of investigation yourself
:) Just trawl through the log and see what X is doing.
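The usual first step is to pull the error and warning lines out of the
log; Xorg marks them (EE) and (WW). A sketch, shown here against a
small fabricated sample rather than a real /var/log/Xorg.0.log:

```shell
# Sample of what an Xorg.0.log contains (real file: /var/log/Xorg.0.log):
cat > Xorg.0.log.sample <<'EOF'
(II) LoadModule: "glx"
(WW) NVIDIA(0): Unable to get display device for DPI computation.
(II) NVIDIA(0): Setting mode "1680x1050"
(EE) Failed to load module "type1" (module does not exist, 0)
EOF

# (EE) = error, (WW) = warning -- these lines are what to post to the list:
grep -E '^\((EE|WW)\)' Xorg.0.log.sample
```

On a real system, point the grep at /var/log/Xorg.0.log instead.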



Re: Graphics Card Issue

2008-06-25 Thread Null Ack
Here ya go, use this xorg.conf - it should get you up and happening on
the NV driver. Then uninstall Envy to clean it all out.


xorg.conf.nullack
Description: Binary data


Re: Graphics Card Issue

2008-06-25 Thread Null Ack
Are you sure you used the xorg.conf I provided? It won't load the nv
driver? What does X say in the logs?



Ubuntu x64 Build Options

2008-06-24 Thread Null Ack
Hi everyone :)

I was reading on a blog about some drastic performance gains a bloke
got from recompiling various bits of his video software with GCC
compiler options optimised for his CPU. The results were good. This
got me curious:

1. What compiler options are used for compiling official Ubuntu x64 packages?
2. Is it possible to query a compiled program to identify the compile
options used?

I know that video algorithms in particular are sped up by SSE4 and so on.

Many thanks



Re: Totally Out of Ideas on Fixing Network Bug

2008-06-23 Thread Null Ack
Ok, so an update on this. Thanks a lot for the help with this :)

I took the advice and disabled ACPI in the BIOS. It wouldn't boot, but
my research showed that adding acpi=off to the grub boot line made
it get into init. The init is messy, with messages like "fatal error:
can't find battery (battery.ko)" twice, and I no longer get timestamps
in my /var/log kernel outputs. It may very well be an issue I can
ignore, but it just doesn't feel right.

So I joined the acpi kernel list and reported the issue. I just hope
Andrew Morton is wrong and I get a patch fixing this bug :)

I've since learnt that Intel has a direct bug line for ACPI issues and
VIA doesn't. VIA is apparently known for ACPI dramas. It is somewhat
comforting to be able to deflect my emotions away from Linux and onto
that evil company called VIA!!

I hope this is resolved, because then at least, if a fix comes, I've
done something little that will prevent others with the same hardware
having problems in the future.



Re: Totally Out of Ideas on Fixing Network Bug

2008-06-22 Thread Null Ack
Thanks again Daniel, very much appreciated. I've been through my CMOS
settings but unfortunately no joy.

So I have been doing more research and I think I'm narrowing it down
now. In summary I think:

After a period of no network use, ACPI decides IRQ 23 isn't needed
ACPI turns off IRQ 23
eth0 times out and won't come back without a reboot
ifdown/ifup won't fix it
I'm yet to test manually unloading and reloading the module for my VIA
Rhine II NIC to see if that brings it back without a reboot. Just
waiting for the timeout.
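For the record, the reload test I mean is the following (via-rhine is
the module name this NIC's driver loads under, per the log below;
commands need root):

```shell
# Take the interface down, drop the driver, and bring it all back:
sudo ifdown eth0
sudo modprobe -r via-rhine
sudo modprobe via-rhine
sudo ifup eth0

# Then check whether the kernel re-registered the IRQ for eth0:
grep eth0 /proc/interrupts
```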

Jun 21 03:04:07 ppp kernel: [   57.447218] eth0: no IPv6 routers present
Jun 21 04:29:46 ppp kernel: [ 5193.747505] irq 23: nobody cared (try booting
with the irqpoll option)
Jun 21 04:29:46 ppp kernel: [ 5193.747514] Pid: 0, comm: swapper Tainted:
P2.6.24-19-generic #1
Jun 21 04:29:46 ppp kernel: [ 5193.747516]
Jun 21 04:29:46 ppp kernel: [ 5193.747517] Call Trace:
Jun 21 04:29:46 ppp kernel: [ 5193.747519]  IRQ
[__report_bad_irq+0x1e/0x80] __report_bad_irq+0x1e/0x80
Jun 21 04:29:46 ppp kernel: [ 5193.747550]  [note_interrupt+0x2ad/0x2e0]
note_interrupt+0x2ad/0x2e0
Jun 21 04:29:46 ppp kernel: [ 5193.747562]  [handle_fasteoi_irq+0xa1/0x110]
handle_fasteoi_irq+0xa1/0x110
Jun 21 04:29:46 ppp kernel: [ 5193.747571]  [do_IRQ+0x7b/0x100]
do_IRQ+0x7b/0x100
Jun 21 04:29:46 ppp kernel: [ 5193.747577]  [ret_from_intr+0x0/0x0a]
ret_from_intr+0x0/0xa
Jun 21 04:29:46 ppp kernel: [ 5193.747583]  [pci_conf1_read+0x0/0x100] pci_conf1_read+0x0/0x100
Jun 21 04:29:46 ppp kernel: [ 5193.747596]  [__do_softirq+0x60/0xe0] __do_softirq+0x60/0xe0
Jun 21 04:29:46 ppp kernel: [ 5193.747609]  [call_softirq+0x1c/0x30] call_softirq+0x1c/0x30
Jun 21 04:29:46 ppp kernel: [ 5193.747614]  [do_softirq+0x35/0x90] do_softirq+0x35/0x90
Jun 21 04:29:46 ppp kernel: [ 5193.747618]  [irq_exit+0x88/0x90] irq_exit+0x88/0x90
Jun 21 04:29:46 ppp kernel: [ 5193.747621]  [do_IRQ+0x80/0x100] do_IRQ+0x80/0x100
Jun 21 04:29:46 ppp kernel: [ 5193.747624]  [default_idle+0x0/0x40] default_idle+0x0/0x40
Jun 21 04:29:46 ppp kernel: [ 5193.747628]  [default_idle+0x0/0x40] default_idle+0x0/0x40
Jun 21 04:29:46 ppp kernel: [ 5193.747630]  [ret_from_intr+0x0/0x0a] ret_from_intr+0x0/0xa
Jun 21 04:29:46 ppp kernel: [ 5193.747633]  EOI  [lapic_next_event+0x0/0x10] lapic_next_event+0x0/0x10
Jun 21 04:29:46 ppp kernel: [ 5193.747648]  [default_idle+0x29/0x40] default_idle+0x29/0x40
Jun 21 04:29:46 ppp kernel: [ 5193.747654]  [cpu_idle+0x6f/0xc0] cpu_idle+0x6f/0xc0
Jun 21 04:29:46 ppp kernel: [ 5193.747662]  [start_kernel+0x2c5/0x350] start_kernel+0x2c5/0x350
Jun 21 04:29:46 ppp kernel: [ 5193.747670]  [x86_64_start_kernel+0x12e/0x140] _sinittext+0x12e/0x140
Jun 21 04:29:46 ppp kernel: [ 5193.747678]
Jun 21 04:29:46 ppp kernel: [ 5193.747679] handlers:
Jun 21 04:29:46 ppp kernel: [ 5193.747680] [usbcore:usb_hcd_irq+0x0/0x60] (usb_hcd_irq+0x0/0x60 [usbcore])
Jun 21 04:29:46 ppp kernel: [ 5193.747702] [via_rhine:rhine_interrupt+0x0/0x7f0] (rhine_interrupt+0x0/0x7f0 [via_rhine])
Jun 21 04:29:46 ppp kernel: [ 5193.747710] Disabling IRQ #23
Jun 21 04:34:46 ppp kernel: [ 5493.104588] NETDEV WATCHDOG: eth0: transmit timed out
Jun 21 04:34:46 ppp kernel: [ 5493.104738] eth0: Transmit timed out, status 0003, PHY status 786d, resetting...
Jun 21 04:34:46 ppp kernel: [ 5493.105384] eth0: link up, 100Mbps, full-duplex, lpa 0x45E1
Jun 21 05:05:02 ppp kernel: [ 7308.203455] NETDEV WATCHDOG: eth0: transmit timed out
Jun 21 05:05:02 ppp kernel: [ 7308.203606] eth0: Transmit timed out, status 1003, PHY status 786d, resetting...
Jun 21 05:05:02 ppp kernel: [ 7308.204254] eth0: link up, 100Mbps, full-duplex, lpa 0x45E1
Jun 21 05:35:16 ppp kernel: [ 9121.303308] NETDEV WATCHDOG: eth0: transmit timed out
Jun 21 05:35:16 ppp kernel: [ 9121.303457] eth0: Transmit timed out, status 1003, PHY status 786d, resetting...
Jun 21 05:35:16 ppp kernel: [ 9121.304106] eth0: link up, 100Mbps, full-duplex, lpa 0x45E1
Jun 21 06:05:32 ppp kernel: [10936.402170] NETDEV WATCHDOG: eth0: transmit timed out
Jun 21 06:05:32 ppp kernel: [10936.402319] eth0: Transmit timed out, status 0003, PHY status 786d, resetting...
Jun 21 06:05:32 ppp kernel: [10936.402968] eth0: link up, 100Mbps, full-duplex, lpa 0x45E1
Jun 21 06:12:40 ppp kernel: [11364.189787] NETDEV WATCHDOG: eth0: transmit timed out
Jun 21 06:12:40 ppp kernel: [11364.189937] eth0: Transmit timed out, status 0003, PHY status 786d, resetting...
Jun 21 06:12:40 ppp kernel: [11364.190589] eth0: link up, 100Mbps, full-duplex, lpa 0x45E1
Jun 21 06:36:06 ppp kernel: [12769.492097] NETDEV WATCHDOG: eth0: transmit timed out
Jun 21 06:36:06 ppp kernel: [12769.492247] eth0: Transmit timed out, status 0003, PHY status 786d, resetting...
Jun 21 06:36:06 ppp kernel: [12769.492892] eth0: link up, 100Mbps, full-duplex, lpa 0x45E1
Jun 21 07:06:22 ppp kernel: [14584.590959] NETDEV WATCHDOG: eth0: transmit timed out
Jun 21 07:06:22 ppp kernel: [14584.591109] eth0: Transmit timed out, status 0003, PHY status 786d, resetting...
Jun 21 07:06:22 ppp kernel: 

Re: On Bugs and Linux Quality

2008-06-22 Thread Null Ack
Daniel, with respect, I did not mean to suggest that the solution to
improving the quality of GNU/Linux is centralised control.

However, people are in control of aspects of Linux - such as release
decisions about key subsystems, or release decisions as they relate to
distros. These decision makers have the power to conform, or not to conform
as some unfortunately choose, to decades-old principles about what
constitutes an alpha, beta or production release.

Clearly, there are a lot of problems when the parties in control declare
a release stable when it's not. With the kernel, I gave the example where
Andrew Morton shared with us that he often sees regression bugs go without
fixes and sees developers ignore bug reports. There are other examples too
in other key subsystems of just about any Linux distro. Take, for example,
all the problems with X releases, and how most recently a new release of X
was made with a blocker bug and other serious bugs.

If more focus and discipline were put into what constitutes a production
release, I think that would be a very good direction to take. Who cares if
there are more release candidates for kernels or more betas for X? If it's
not ready, it's not ready. Some bugs can be tricky for a developer to
replicate and resolve, and it's human nature not to see an issue as equally
severe if it's not happening on your machine.

I don't see proper release management stifling any freedoms in FOSS projects.
It just means having a proper quality standard before bits are declared
stable and ready for production. I greatly enjoy Ubuntu over all the other
distros I've tried (Arch, openSUSE, Fedora), but I am certainly not the only
person I've seen sharing the view that arbitrary time-based releases aren't
conducive to good software.
-- 
ubuntu-au mailing list
ubuntu-au@lists.ubuntu.com
https://lists.ubuntu.com/mailman/listinfo/ubuntu-au


Re: On Bugs and Linux Quality

2008-06-22 Thread Null Ack
Slawek, having been through the tender process for numerous Government
contracts (both inside Government and outside with vendors), the key pros and
cons I see for Linux are:

1. Pro - reduced TCO
2. Pro - an easy sell for servers
3. Con - a hard sell for desktops. I did not see anything particularly solid
preventing it - it's more a lack of understanding. I'm sure some areas
really could not do without Office, but most that make this claim are in my
experience wrong about OpenOffice's capabilities. Some sites have custom .NET
apps running, so it would be critical that Mono or some equivalent really
worked. Actually, I don't really understand all the whining about Mono, as I
understand it is now an open standard and not an MS standard? There's
probably going to be the occasional legacy app written on the Win32
platform that doesn't play nice with Linux. What we did on one
project, where all the infrastructure was replaced, was to have a few Citrix
sessions running the legacy apps - for some reason they didn't want
virtualisation for desktop apps.

In my experience, even getting OpenOffice into departments was difficult. The
one place it was done was on a Java developer build, where the users were
all developers working on Java projects.


Totally Out of Ideas on Fixing Network Bug

2008-06-20 Thread Null Ack
Gday everyone :)

So I'm having a lot of problems with losing eth0 connectivity after a period
of time. I'm trying to be an advocate for Ubuntu, but it's hard when a major
bug makes the experience painful. I'm desperate to fix this problem. I have
various details up on the Launchpad bug report at:

https://bugs.launchpad.net/ubuntu/+source/acpi/+bug/111282

Many thanks for any help.
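For anyone triaging this, a rough sketch of how to gauge how often the driver
watchdog is firing, run against a saved log excerpt. The sample lines and the
/tmp path below are illustrative stand-ins for a real kern.log, not actual
diagnostic output from this machine:

```shell
# Write two illustrative watchdog lines to a scratch file
# (in practice you would point grep at /var/log/kern.log instead)
cat <<'EOF' > /tmp/kern_excerpt.log
Jun 21 04:34:46 ppp kernel: [ 5493.104588] NETDEV WATCHDOG: eth0: transmit timed out
Jun 21 05:05:02 ppp kernel: [ 7308.203455] NETDEV WATCHDOG: eth0: transmit timed out
EOF

# Count how many times the watchdog fired in the excerpt
grep -c 'NETDEV WATCHDOG: eth0' /tmp/kern_excerpt.log
```

Comparing the timestamps of successive matches shows whether the resets come
at a regular interval (roughly every half hour in the log above), which can
help narrow down whether the trigger is load-related or periodic.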