Re: Causes of session management problems in Plasma 5

2015-11-26 Thread Henry Miller
I'll object to "no interaction after logout". More than once I've asked "why is 
logout taking so long?" and jumped to a terminal (not always konsole) to look.

 Also, on Windows (at least) a running terminal application will block logout 
so I may need to kill the application while in a logout context.  I'm not sure 
how this relates to konsole on other platforms, but it seems like the use case 
exists. 

On November 23, 2015 7:02:46 PM CST, Andreas Hartmetz  
wrote:
>Hello,
>
>As apparently one of the last users of session management, because I 
>shut down my computers about once every day, I run into problems about 
>as often as I log into a session that is supposed to be restored.
>The number one problem is Konsole just not restoring.
>So I took some time to investigate the problem. The result is that
>there 
>are several bugs that conspire to break session restore. It goes about 
>like this:
>
>- ksmserver (the session manager) sends clients the "SaveYourself" 
>  message and collects the responses. This works fine.
>- In Qt applications, this results in a call to 
>  QGuiApplicationPrivate::commitData(), which calls 
>  QApplicationPrivate::tryCloseAllWindows() after the part that sends
>  the SaveYourselfDone response to the session manager.
>  When QGuiApplication::quitOnLastWindowClosed() is true (the default),
>  this results in the application quitting.
>- ksmserver notices that (e.g.) konsole has terminated and purges it
>  from its internal data
>- ksmserver rounds up remaining processes, which at this point do not 
> include konsole, and saves their restore data. konsole thus has saved 
>  its state, but ksmserver forgot about it and doesn't remember to do
>  anything with konsole when restoring the session later.
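For reference, the client side of that handshake looks roughly like this in the public Qt API (a simplified sketch; real applications save and restore actual window state):

```cpp
#include <QApplication>
#include <QSessionManager>

int main(int argc, char **argv)
{
    QApplication app(argc, argv);

    QObject::connect(&app, &QGuiApplication::commitDataRequest,
                     [](QSessionManager &) {
        // Save open documents/tabs here. Per the Qt documentation,
        // the application should NOT exit in this handler; the session
        // manager decides afterwards whether the process terminates.
    });

    if (app.isSessionRestored()) {
        // Recreate windows from the state saved under app.sessionId().
    }
    return app.exec();
}
```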
>
>The two most obvious errors are thus:
>
>- QGuiApplicationPrivate::commitData() calling 
>  QApplicationPrivate::tryCloseAllWindows(), together with
>QGuiApplication::quitOnLastWindowClosed() being true by default. Quote 
>  from documentation of signal QGuiApplication::commitDataRequest():
>  "You should not exit the application within this signal. Instead, the
>  session manager may or may not do this afterwards, depending on the
>  context."
>  Note that it says session manager and afterwards, not QGuiApplication
>  and virtually immediately.
>
>- The session manager not "locking down" or better copying the list of
>  clients *while* logging out. This would arguably only help buggy
>  clients, but may still be a net positive.
>  Why copy the list? Logout may be canceled, so it is valuable to keep
>  the main client list updated for after logout cancellation.
>
>- Bonus: I've found that KMainWindowPrivate::init() calls 
>  QCoreApplication::setQuitLockEnabled(true), which is equivalent to
>  QGuiApplication::setQuitOnLastWindowClosed(true), which is either
>  redundant with the default or overrides something the user has
>  explicitly changed. I noticed this while trying to narrow down the 
>  problem with a call to 
>  QGuiApplication::setQuitOnLastWindowClosed(false) from konsole's
>  Application class. Which is not a good solution, but sufficient as a
>  proof of concept fix. That fix works as far as session save and 
>  restore are concerned.
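For reference, the proof-of-concept fix amounts to something like this (a sketch; the helper name is mine, and konsole's real code may differ):

```cpp
#include <QGuiApplication>

// Proof-of-concept workaround described above: keep the process alive even
// after commitData() has closed all windows, so ksmserver still knows about
// the client when it writes out the session state.
void applySessionRestoreWorkaround()   // hypothetical helper name
{
    QGuiApplication::setQuitOnLastWindowClosed(false);
}
```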
>
>So now I wonder what the chances are to fix those bugs.
>
>- Make ksmserver more robust in the face of clients dying too early,
>  sure. I hope to get around to it Soon(TM).
>
>- Remove QCoreApplication::setQuitLockEnabled(true) from 
> KMainWindowPrivate::init() - seems like a good idea to me, objections?
>
>- Remove any window closing from QGuiApplicationPrivate::commitData() -
>  this is actually an old feature that was even modified in 2014 to
>  fix a problem on Windows(?!) - I guess that one is there to prevent 
>  interaction after session saving, but it's a very crude way to do 
>  that. IMO it would be better to do nothing, it would be even better 
>  to block user input and possibly I/O handling for the duration of 
>  logout and unblock them when logout is canceled.
>  Note: the Windows fix is about the method being expected to *kill*
> the application, which probably comes from a lack of knowledge about X
>  session management which is the main purpose of that method. Commit
>  9835a63dde06a1e5d2f in qtbase.
>
>I'd be grateful for any additional insight and / or historical 
>perspective.
>
>Cheers,
>Andreas

-- 
Sent from my Android device with K-9 Mail. Please excuse my brevity.

Re: CI Requirements - Lessons Not Learnt?

2017-01-16 Thread Henry Miller

Having been paid to maintain a system that was a maze of #ifdefs, I
disagree with your statement that a professional should have no problem
with adding them. Sometimes a professional will add them, but not if
there is another choice. Every #ifdef adds to the cost of maintenance,
and that is a very unprofessional thing to do to your company. 

Professional compiler writers tell us that the preprocessor is the
hardest part of supporting C++. IDE writers tell us that it is because
of the preprocessor that they do not give us nearly as many refactoring
tools as Java has. The preprocessor is also why you find bugs in the
refactoring tools we do get. 
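As a made-up illustration (the macro and function names here are invented): every such block multiplies the configurations that must be built and tested, and a tool that parses only one branch silently misses the others.

```cpp
#include <cassert>
#include <string>

// Invented example of the pattern under discussion: three build-time
// configurations of one function. Each #if branch is a configuration that
// must be compiled, tested, and kept in sync by hand.
#if defined(USE_NEW_XKBCOMMON)
std::string keymapBackend() { return "new xkbcommon"; }
#elif defined(USE_OLD_XKBCOMMON)
std::string keymapBackend() { return "old xkbcommon"; }
#else
std::string keymapBackend() { return "builtin fallback"; }
#endif
```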

Most of the work in KWin these days is Wayland. It is perfectly
reasonable for Martin to say that you must have the latest for Wayland;
it is a bleeding-edge feature, and distributions should not ship it
without a willingness to update as bugs in core components are fixed. If
they don't like this, running KWin LTS on old-fashioned X is what we
should point them to. 

This doesn't solve the CI problem that sparked this discussion, but your
proposal actually makes it worse. You are indirectly asking the CI
system to provide and build KDE with both the old and the new xcbcommon. (Not
just KWin; all of KDE needs to be tested in both configurations!) 
-- 
  Henry Miller
  h...@millerfarm.com

On Sun, Jan 15, 2017, at 03:58 PM, Alexander Neundorf wrote:
> Hi Martin,
> 
> just replying somewhere...
> 
> On 2017 M01 15, Sun 14:52:30 CET Martin Gräßlin wrote:
> > I think that is a reasonable suggestion. If distros patch our
> > dependencies we need to consider this as a fork. And a fork should be
> > called a fork. It needs to be clear that KDE is not responsible for any
> > issues caused by the fork and thus the complete product must be renamed.
> > 
> > Also if a component like KWin gets forked this means that the complete
> > product Plasma has to be considered as forked.
> 
> please excuse me, but I'd like to share some thoughts on the situation
> from a 
> non-technical POV. Maybe it can give you an impulse to think about the 
> situation. Ignore them if you think it is inappropriate.
> 
> You have been maintaining kwin for several years now, and AFAIK for the
> last few years more or less full-time as your day job, right?
> 
> If I'm right that this is the scenario, here are my thoughts
> - it would be good to have a second developer working on kwin, so you are
> a 
> (small) team. Then it is easier to discuss stuff and it would take some 
> pressure off of you.
> - is kwin more or less the only thing you're working on? Doing that for
> several years, and alone... it might have a relaxing effect if there were
> some more variety in your job
> - OTOH, if you are maintaining kwin fulltime as paid job, I consider it 
> reasonable to expect that the maintainer is able to maintain necessary 
> #ifdefs, or apply pragmatic solutions, just to solve the problem for his 
> users... but that is not realistic to expect if the maintainer is quite 
> stressed out and all alone with this work.
> 
> All the best
> Alex
> 


Re: Releases in 3 months

2013-07-11 Thread henry miller


"Àlex Fiestas"  wrote:
>On Wednesday 10 July 2013 13:22:20 Sune Vuorela wrote:
>> On 2013-07-09, Sune Vuorela  wrote:
>> > So. first one.
>> 
>> Second one
>> 
>> Release frequency.
>> 
>> We have a giant quality problem. Distros won't ship a .0 release to
>real
>> users (as opposed to testers/power users) and wait until there has
>been
>> a couple of bug fix releases. Until we ensure that our .0 releases
>are
>> usable I don't see how we can cut down on that.
>> 
>> Some distros release in a 6 month cycle. Others in a 8. and ones even
>in
>> longer cycles. Going for anything shorter than 6 months will ensure
>that
>> distros are going to skip releases. why work with releases that they
>> aren't going to ship to users anyways?
>Not by distributions working that way I guess.
>
>Part of the reasons why I want this release schedule is exactly for
>these 
>distros. Let me explain.
>
>Right now distributions pick the release they see fit and make a distro
>with 
>it. It might be .0, .2 or .5.
>
>If a distribution in their right mind decides to pick a .5 release while a .0
>already out there (this already happened), what is happening here is
>that a 
>HUGE release with a LOT of changes won't even get to the users of that 
>distribution at least for another distribution cycle. This usually
>happens 
>with distributions that have a release cycle of 9 months.
>
>With having releases every 3 months we make the amount of features
>smaller and 
>more often so distributions will always be able to pick a more updated
>release 
>than with the current situation.
>
>> And given there need to be some stabilization and integration work,
>I'm
>> sure skipping releases would be the default for most distros.
>Hopefully
>> distros can coordinate and at least skip the same. Mostly leading to
>the
>> other releases being useless because they only reach very few users.
>This is already happening, no change here.
>
>> And as it currently is, we need the .4 and .5 releases.
>and .6 and .7 and .8 and .9, we could have a 4.0.200, there is always
>need of 
>bugfixing releases, question is how many of these point releases are
>pending 
>of upstream KDE and not downstream distros.
>
>To make it clear, I WANT to have .4 and .5 releases, just not made by
>upstream 
>developers.

Might I suggest the following addition: every year (exact timing debatable) we 
mark a release as long-term. Distributions are encouraged to release the latest, 
but those will never get beyond a .3 release.  Distributions that want more 
stability can work together to submit patches to the long-term release: every 
month we will release a new version of any long-term release that has a change.

I realize this is more work for our packagers, but I hope they can script it to 
a cron job that just checks for changes and creates tarballs once a month.

The purpose of my proposal is to make it easier for our downstreams to work 
together. If RHEL ships 4.14 in a long-term release and Kubuntu ships 4.15, odds 
are a security bug found in one is in the other. However, patches between 
versions may not apply cleanly, so it is extra work for the second distribution 
to fix: and this assumes they inform each other of the bug.  By giving them a 
common place where they are encouraged to work together, they can both provide 
better quality, which makes us look good.

-- 
Sent from my Android phone with K-9 Mail. Please excuse my brevity.

Re: Adopting AppData in KDE?

2013-11-04 Thread henry miller


Richard Hughes  wrote:
>
>Please don't portray me as a modern-day highwayman as I'm really just
>trying to build an awesome application installer for GNOME. It's two
>orders of magnitude harder to actually write a shared standard and ask
>other desktops to adopt it (making changes as required) rather than a
>quick hack that just works with one desktop on one distribution.

Let me rewrite the above in FAQ format:
Q: Why does KDE not ship AppData files?
A: The maintainers of AppData have admitted they have no interest in standards, 
so KDE has no formal way to get the things we need changed.  In addition, 
while AppData claims to be distribution-agnostic, it is really a Fedora thing and 
few other distributions' packages use it, thus violating KDE's rule against 
patches for one distribution only.

Is AppData good or bad? I say bad, not on technical grounds but social ones: the 
effort to standardize something that should be standardized was not made.  It 
meets Fedora's needs, but what about Debian?  What about NetBSD?  What about... 
Yes, standardization is a lot of effort; that is because getting a good 
compromise for everyone is hard.  

KDE should, in my opinion, refuse AppData in hopes that the next time someone has 
a great idea they remember that AppData failed (or took too long to catch on) 
because it failed to pursue standardization early.  If someone has a better idea 
for how to stop developers from this anti-social behavior, I'm open to it, so that 
I can consider AppData on technical merits like it should be.
-- 
Sent from my Android phone with K-9 Mail. Please excuse my brevity.

Re: Adopting AppData in KDE?

2013-11-05 Thread henry miller


Sven Brauch  wrote:
>Let's not make a fight of this. I think the point that some people
>(including 
>me) didn't find the strategy for creating a standard quite optimal was
>made, 
>and we should drop it now and focus on discussing the adoption of the 
>specification.
>

I want to formally state that I agree with this.


Re: KMountPoint::probablySlow and cifs mount points

2013-11-26 Thread henry miller


"Aurélien Gâteau"  wrote:
>On Sunday 24 November 2013 19:42:25, Mark Gaiser wrote:
>> On Sun, Nov 24, 2013 at 5:05 PM, Albert Astals Cid 
>wrote:
>> > In Okular we just got bug
>> > https://bugs.kde.org/show_bug.cgi?id=327846
>> > PDF Render time is unreasonably slow over cifs on high latency
>(WAN)
>> > network connections
>> > 
>> > Basically the issue is that poppler is quite read-intensive over
>files,
>> > reading them over and over, and since the file is "local but really
>> > remote" it takes ages to render for big files.
>> > 
>> > The only solution i can think of is doing a local copy and then
>working on
>> > that.
>> 
>> That would work with small files (< 10 MB) but will get you into
>> trouble for bigger files.
>> You can't - or shouldn't - do that in an automated manner. If the
>user
>> manually copies the file and then opens it in a pdf reader: fine.
>Just
>> don't auto copy the file. So you can probably give the user a popup
>I disagree with that: I believe the use should not have to worry about
>this 
>and trust the application to be smart and do whatever is most
>appropriate to 
>deliver the best result instead.

I agree, with the note that if there isn't 'significantly' more disk space 
locally than the file size, the right thing is to not copy locally.  Hopefully 
not a problem for PDFs, though.

>> 
>> it is. It also depends on your internet connection and where you're
>> connecting to. Saying smb, nfs or cifs is slow - per se - is plain
>> wrong. All of those are "likely" fast if you mount them locally.
>
>It really depends on the definition of "fast". I made measurements some
>time 
>ago for my home setup (so LAN only): NFS over a 100Mbit connection was
>almost 
>as fast as local access. Same NFS over a wifi connection capped at
>3MBytes. 
>That is much slower and can be worth adjusting the code path to take
>into 
>account this difference.

If you have the right RAID setup on a NAS box connected with gigabit ethernet, 
the network drives can even be faster than the local ones. This is becoming 
common in offices, which are one target of Okular.

>
>> > Or anyone has a better suggestion to fix this issue?

I just had a complex, and perhaps crazy, idea: check the times for the first few 
reads; if they suggest the remote file is 'slow', do a local copy. Okular 
might be able to get away with reading all the data for the first page, 
rendering that, and then starting the local copy if reading took too long.
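That heuristic could be sketched roughly like this (function names and thresholds are invented; real code would live next to the document loader):

```cpp
#include <cassert>
#include <chrono>
#include <functional>

// Sketch of the probe-then-copy idea above: time a few initial reads and
// report whether the source looks slow enough to justify a local copy.
bool shouldCopyLocally(const std::function<void()> &readChunk,
                       int probeReads = 4,
                       std::chrono::milliseconds threshold =
                           std::chrono::milliseconds(50))
{
    using namespace std::chrono;
    const auto start = steady_clock::now();
    for (int i = 0; i < probeReads; ++i)
        readChunk();                        // e.g. read one 4 KiB block
    const auto perRead =
        duration_cast<milliseconds>(steady_clock::now() - start) / probeReads;
    return perRead > threshold;             // slow source: copy, then reopen
}
```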

It seems to me that anything else is optimizing for one set of users at the 
expense of the other. On the other hand, tablets on wifi (or worse) are probably 
hurt more by the wrong optimization than office users are, and the code is much 
simpler.

-- 
Sent from my Android phone with K-9 Mail. Please excuse my brevity.

Re: DrKonqi improvement idea

2012-03-11 Thread henry miller
Good ideas, if anyone actually implements them. A couple of comments.

Most users are not programmers - they will not know how to recognize that similar 
backtraces share the same root cause. Worse, I know of many cases where I - a 
programmer - was wrong.

The more automated detection we can do, the better. In fact, many random crashes 
have been traced down to the same root cause, so we really need the ability to 
check the user's config in a distribution-specific way. If some 
misconfiguration is found, we can ignore the backtrace.

Keep privacy in mind as well.

Since I'm not offering to do the painting I'll step away from the bikeshed now.
-- 
Sent from my Android phone with K-9 Mail. Please excuse my brevity.

Niko Sams  wrote:

Hi,

I'd like to talk about an idea on how DrKonqi (which is a really
useful thing btw) could be
further improved.
In short: DrKonqi shouldn't create bugs directly but talk to a "layer" in between.

DrKonqi -> crashes.kde.org -> bugs.kde.org

crashes.kde.org is a new web application - a bit similar to bugzilla:
- lists all reported crashes with intelligent filtering duplicates
- developers can set duplicates
- developers can link a crash to a bug (or create automatically a bug
for a crash)
- developers can enter a solution (that will be presented to the user
that hits this crash)
eg.:
- "update to version x.y"
- "temporary workaround: don't click button x"
- "you misconfigured x, see http://y.com for how to fix it"
- "the developers are aware of this issue but have not yet fixed
it, see http://bugs.kde.org/... for details"
- "the bug is fixed but an update has not yet been released.
Update to version x.y once it is released."
- comments can be added by users and developers (to ask how to reproduce etc)

For the end user there could be the following scenarios:
- user posts the crash, crashes.kde.org finds a duplicate crash in
its database and will tell the
user how to proceed (see possible solutions above)
- user posts the crash, crashes.kde.org can't find an exact duplicate
and will show the user
all possible duplicates
- user posts the crash, crashes.kde.org doesn't find a duplicate. User
gets the possibility to
subscribe to updates for this crash to get an email when a solution
for his crash was entered
by the developers

One big difference in implementation I would propose:
DrKonqi makes a *single* POST to crashes.kde.org including all
information and then just shows
a WebView where the server side can do anything. That gives greater
independence of the used
KDE version and changes on the server side.
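On the client side, that single POST might look roughly like this (a sketch using plain Qt networking; the URL and form field names are invented for illustration):

```cpp
#include <QNetworkAccessManager>
#include <QNetworkReply>
#include <QNetworkRequest>
#include <QUrl>
#include <QUrlQuery>

// Sketch of the "single POST, then let the server drive the UI" idea.
QNetworkReply *postCrash(QNetworkAccessManager *manager,
                         const QString &appName,
                         const QString &backtrace)
{
    QUrlQuery form;
    form.addQueryItem(QStringLiteral("application"), appName);
    form.addQueryItem(QStringLiteral("backtrace"), backtrace);

    QNetworkRequest request(QUrl(QStringLiteral("https://crashes.kde.org/submit")));
    request.setHeader(QNetworkRequest::ContentTypeHeader,
                      QStringLiteral("application/x-www-form-urlencoded"));
    // The reply body would then be rendered in a web view, so the server
    // side can evolve without changes to the installed DrKonqi.
    return manager->post(request, form.query(QUrl::FullyEncoded).toUtf8());
}
```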

Advantages over current solution:
- bugs.kde.org isn't filled with duplicates
- crashes.kde.org can be specialized on crashes
- sending a crash would not only help developers fixing that bug but
also help the user by showing
a solution for his issue.

What software could crashes.kde.org run? I'm not sure, maybe a
bugzilla installation or something
custom written. Or some other bugtracking software.

So, what do you think?
Niko



Re: DrKonqi improvement idea

2012-03-13 Thread henry miller
Don't forget security and privacy. The last n keys would be a bad thing to have 
if the user just entered a password.
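One way to respect that (a sketch; the class name and the cap of 20 keys are mine, following the proposal below) is to have the global event filter drop keystrokes whenever the focused widget is a password field:

```cpp
#include <QApplication>
#include <QKeyEvent>
#include <QLineEdit>
#include <deque>

// Sketch of a keystroke ring buffer for crash reports that skips password
// input, per the privacy concern above.
class KeyHistoryFilter : public QObject
{
public:
    std::deque<int> lastKeys;   // most recent key codes, capped at 20

protected:
    bool eventFilter(QObject *watched, QEvent *event) override
    {
        if (event->type() == QEvent::KeyPress) {
            auto *edit = qobject_cast<QLineEdit *>(QApplication::focusWidget());
            const bool isPassword = edit && edit->echoMode() != QLineEdit::Normal;
            if (!isPassword) {
                lastKeys.push_back(static_cast<QKeyEvent *>(event)->key());
                if (lastKeys.size() > 20)
                    lastKeys.pop_front();
            }
        }
        return QObject::eventFilter(watched, event);  // never consume events
    }
};
```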
-- 
Sent from my Android phone with K-9 Mail. Please excuse my brevity.

Thomas Zander  wrote:

Quoting Niko Sams :

> On Mon, Mar 12, 2012 at 11:36, Thomas Zander  wrote:
>> I'd say this is a great idea, mostly because it means a lot more can be
>> automated on lots of ends.
>> Naturally, the actual automation means research and development, which means
>> manpower.   I didn't get from your email if you wanted to volunteer for some
>> of this work :)
>>
>> Personally I'd go for a solution that also tries to register the last 20
>> keystrokes and 20 mouse clicks (qt global event listener) and if and when a
>> crash occurs that info can be send with the backtrace.  So even if there are
>> no debug packages installed we get some info to do data-mining on.
>
> That would provide useful information? I guess it depends on the
> application and bug.

In my experience it's useful in many cases. If your application is 
complicated then just having a backtrace, even with line numbers, may 
not allow you to reproduce the crash. You need some extra info to know 
what the user did just before it crashed.
Asking the user is not the best solution; I've seen many backtraces 
with a comment from the user stating they were typing some text, while 
the backtrace showed they were printing. It's hard to be a debugger 
when your work just got lost...
A better solution would be that we know which buttons were clicked and 
that the user pressed the tab button or the ctrl-w buttons etc.
So, I'd say this can be very useful to get the whole picture of the 
moment the crash happened :)

And this also means that duplicates are good, the backtrace may be the 
same but we still add useful info to the issue.

>> In either case, I'd think you want something custom written. It's not too much
>> work to do the basics and maybe we can steal some code that compares
>> backtraces and steal some ideas or code for on-disk data-store of those
>> backtraces.
> +1
> The existing solutions are very complex and have lots of features. And
> they solve different
> use cases.

Hmm, it's indeed not easy to find something fitting. Looking at the 
code of one of the links found in this thread, I notice it's written in 
Perl. While I understand the urge to do that, I'm not convinced that 
there is enough Perl knowledge in KDE to keep the sysadmins happy if we 
just imported their systems. How hard can it be to parse a backtrace 
anyway? ;-)
Anyway, the actual core parsing and matching will probably be 
something that can be improved over time. Possibly with the help of 
other open source code.

-- 
Thomas Zander



Re: Review Request: Make KAuth ready for frameworks + API Changes

2012-03-19 Thread Henry Miller

---
This is an automatically generated e-mail. To reply, visit:
http://git.reviewboard.kde.org/r/104337/#review11598
---


In kauth/autotests/HelperTest.cpp, the comment on lines 57-68 should be reworded. 
In general, when someone is told not to touch some lines, they won't. You should 
be clear on why the code is the way it is.  However, saying "you don't want to 
touch this code" is a bad thing: it gives someone permission not to look closely, 
even when in the future their change would break things.  Yes, what you are doing 
is subtle, but that is no excuse for someone to not understand it.

- Henry Miller


On March 18, 2012, 10:25 p.m., Dario Freddi wrote:
> 
> ---
> This is an automatically generated e-mail. To reply, visit:
> http://git.reviewboard.kde.org/r/104337/
> ---
> 
> (Updated March 18, 2012, 10:25 p.m.)
> 
> 
> Review request for kdelibs, Kevin Ottens, David Faure, and Alexander Neundorf.
> 
> 
> Description
> ---
> 
> Preamble - sorry for having to name-call people but apparently we still don't 
> have a frameworks way for reviewing code (which sucks). And sorry for the 
> long summary, but it's worth reading. However.
> 
> This huge patchsets brings KAuth in the marvelous world of Frameworks. If you 
> dislike ReviewBoard's way of displaying diffs or simply want to see a commit 
> list, please refer to the URL in "Branch".
> 
> First of all, I pulled in a dependency on KJob after a chat with Kevin. This 
> makes KAuth tier2, but shouldn't be a big issue.
> 
> Then there's the hard part: source compatibility is reasonably broken here. 
> The changes I had to do were mostly for the sake of revamping the internal 
> workflow of the library. The main problem KAuth had was the fact it was 
> completely synchronous, leading to a multitude of problems. After these 
> changes it's fully asynchronous instead (reason for pulling in KJob), the API 
> was simplified, and some unused features like multiple action execution have 
> been removed.
> 
> The main changes at a glance:
> 
>  * Some renaming to the enums
>  * Moving Action & ActionReply to be implicitly shared
>  * Removing ActionWatcher (now useless due to the new semantics of execute())
>  * Removing some useless APIs from Action, namely executeActions, 
> execute(helper)
>  * execute() now returns a KJob
>  * helperID() -> helperId()
>  * Static action replies are now static accessors returning a new instance. 
> This was a complete mistake in the first place, but it's still there with a 
> different semantic to ease porting. The main use case for changing this is a 
> failure to handle implicitly shared classes in multithreaded environments 
> with that approach.
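From that description, client code would presumably move to something like this (a sketch based on the summary above; the action id is invented, and exact class and signal names may differ in the final framework):

```cpp
#include <KAuth/Action>
#include <KAuth/ExecuteJob>
#include <QDebug>

// Sketch of the new asynchronous workflow: execute() returns a KJob
// subclass instead of blocking, and the outcome arrives via a signal.
void runPrivilegedAction()
{
    KAuth::Action action(QStringLiteral("org.kde.example.read"));  // invented id
    KAuth::ExecuteJob *job = action.execute();
    QObject::connect(job, &KAuth::ExecuteJob::result, [job]() {
        if (job->error())
            qWarning() << "action failed:" << job->errorText();
    });
    job->start();
}
```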
> 
> Of course, while it would be awesome to have all the code reviewed, I 
> understand it's a very big change so I'd like at least some feedback on the 
> following points:
> 
>  * General sanity of the new API
>  * Consistency of the enums. StatusInvalid vs. ExecuteMode vs. 
> AuthorizationDeniedError. While the semantic seems correct to me, I'd like to 
> have some feedback on whether consistency is valuable in the ordering of 
>  vs.  and which one should be preferred in case.
>  * Whether to deprecate static accessors such as static const ActionReply 
> SuccessReply(). I strongly favor this.
>  * Whether the new dependency of kcoreaddons for the sake of using KJob is 
> reasonable or I should go for a different alternative.
>  * CMake sanity for the new dependency of kcoreaddons.
> 
> The code is pretty much unit-tested and it should have a decent coverage, 
> even if I had no way to check this. For unit tests, I had to create a fake 
> authorization backend for testing purposes, whereas I managed to reuse the 
> dbus backend for helper communication, so that I could even test that. For 
> running the helper and the client in the same process, in the unit test I am 
> resorting to making the dbus service of the helper live in a separate thread, 
> to prevent asynchronous DBus calls from failing due to QDBus' local-loop 
> optimization. The test is also run on the session bus.
> 
> 
> Diffs
> -
> 
>   staging/kauth/CMakeLists.txt PRE-CREATION 
>   staging/kauth/autotests/BackendsManager.h PRE-CREATION 
>   staging/kauth/autotests/BackendsManager.cpp PRE-CREATION 
>   staging/kauth/autotests/CMakeLists.txt PRE-CREATION 
>   staging/kauth/autotests/HelperTest.cpp PRE-CREATION 
>   staging/kauth/autotests/SetupActionTest.cp

Re: why kdelibs?

2010-11-02 Thread Henry Miller
On Saturday 30 October 2010 12:01:18 Albert Astals Cid wrote:

> 
> Just for those that have short memories let me explain what happened.
> 
> We killed our printing stack because we were "promised" that QPrinter would
> be maintained and better than KPrinter was. And years later, QPrinter is
> unmaintained and provides less features KPrinter delivered much more years
> ago.
> 
> So please come back to the real world were Nokia doesn't have infinite
> manpower and where the only thing Nokia wants to do is sell cell phones.

Woah.   My recollection is that we killed the kprinter stack because, while everyone 
wanted us to have it, nobody was actually interested in maintaining it.  A 
couple of months before 4.0 it was brought up on this very list that kprinter 
did not work, and nobody stepped up to do the work (or at least it didn't work 
in a good KDE 4.0 way - I didn't look into the issue either).  Nobody considered 
QPrinter better than what we could do - but it was much better than the 
nothing we could offer as an alternative.

Phonon is a better example of what could happen.   If only we could get Nokia 
to import our fixes once in a while, it might even be a good example of what 
the proposal is.   There are many areas where KDE offers something that 
non-KDE Qt users would want if they could get it - but they do not want or need a 
full desktop.