Re: CI Requirements - Lessons Not Learnt?

2017-01-12 Thread Jan Kundrát

On Thursday, 12 January 2017 15:28:08 CET, Kevin Kofler wrote:
What will happen now is that they will revert your commits that require the 
unavailable version of the library. It is just more work for us packagers


Hi Kevin,
do you have some examples of distribution maintainers actually doing such a 
stupid thing?


In my professional experience, the distro maintainers that I have worked 
with were reasonable people who invest time into doing valuable QA and 
packaging duties. Surely there's no place for "hey, let's go break this 
code" as your proposal suggests.


With kind regards,
Jan

--
Trojitá, a fast Qt IMAP e-mail client -- http://trojita.flaska.net/


Re: CI Requirements - Lessons Not Learnt?

2017-01-11 Thread Jan Kundrát

On Wednesday, 11 January 2017 6:57:50 CET, Martin Gräßlin wrote:
That doesn't work. Such inflexibility takes away the advantage 
of having a CI.


What base system(s) do you prefer to target as a developer, Martin?

A CI system can have different sets of base images for different projects 
(and different branches, etc). Something along these lines:


- KF5 might care a bit about slow-moving distributions such as RHEL7 (well, 
except for a requirement on Qt 5.6)


- Plasma LTS might want to target versions of faster-moving distros which 
were around at a time of their release. Say, Fedora 24? Ubuntu 16.04 LTS?


- The master branches of Plasma might want to get rid of the legacy 
workarounds. Can we use the latest Fedora with aggressive backports of 
rawhide packages upon request for this?


- Various applications within KDE might have completely different 
requirements. Some "leaf" applications might want to target slow-moving 
distributions with their ancient Qt.


So this incomplete set of requirements probably translates to four base 
images. I'm using RPM-centric terminology and picking these distros 
because of my professional background with these systems:


1) CentOS 7 with Qt 5.6 from EPEL and installed devtoolset
2) Ubuntu 16.04 LTS with a distro Qt (that is 5.5)
3) latest Fedora with Qt from git and an unspecified number of packages 
from rawhide

4) Debian Jessie with a system Qt 5.3

Each project in KDE can then choose whether they care about these 
individual base images (subject to the availability of dependencies, of 
course -- if KF5 doesn't care about Jessie and Qt 5.3, no project which uses 
any KF5 can possibly opt in to support that configuration for obvious 
reasons). By default, all projects get just 3) for the "latest and 
greatest" and for minimal wasted manpower.


With my (non-KDE) sysadmin hat on, I believe that the infrastructure should 
be provided as a service offering for developers. It is the developer's job 
to produce working code which is packageable. I don't think it's a 
developer's job to make a CI's sysadmin life uneventful, though. Perhaps 
the architecture outlined above can help achieve these goals with minimal 
manpower?


Cheers,
Jan

--
Trojitá, a fast Qt IMAP e-mail client -- http://trojita.flaska.net/


Re: What's kde-core-devel for?

2016-12-19 Thread Jan Kundrát
KDE has expanded over the last few years to include projects which are not 
based on kdelibs or kf5 (or even Qt). There are e-mails about new project 
incubation, upcoming conferences and CFPs and other semi-social topics. I 
am interested in these discussions and I thought that this is what k-c-d is 
for.


+1 for killing the ReviewBoard traffic on kde-core-devel, though.

With kind regards,
Jan

--
Trojitá, a fast Qt IMAP e-mail client -- http://trojita.flaska.net/


Re: announcement: Kwave is in kdereview

2016-10-17 Thread Jan Kundrát

On Tuesday, 11 October 2016 21:41:09 CEST, Thomas Eschenbacher wrote:

the _(...) macro has nothing to do with i18n


Isn't that a bit confusing? Underscore is used by gettext to mean the 
*opposite* of what Kwave uses it for. It is also a reserved identifier in 
C++. Inventing non-standard idioms with non-obvious semantics just to save 
one from typing QLatin1String or QStringLiteral doesn't seem like a good 
idea.


With kind regards,
Jan

--
Trojitá, a fast Qt IMAP e-mail client -- http://trojita.flaska.net/


Re: KDE CI enforcing ECMEnableSanitizers.cmake/KDECompilerSettings.cmake?

2016-04-26 Thread Jan Kundrát

On Wednesday, 27 April 2016 01:21:22 CEST, Albert Astals Cid wrote:
It is strange that your Qt5-only tests are failing, may it be that they are 
loading some plugin that is linked against some KF5 lib?


Qt guesses what platform one is running on in order to blend with it. In 
KDE and under the Plasma desktop, this involves loading 
plugins/platformthemes/KDEPlatformTheme.so which belongs to KF5's 
frameworkintegration.


Is the KDE CI setting some variables which might trigger loading of these 
plugins?


Cheers,
Jan

--
Trojitá, a fast Qt IMAP e-mail client -- http://trojita.flaska.net/


Re: Change to Mail Infrastructure - SPF and DKIM verification will now be enforced

2015-12-09 Thread Jan Kundrát
I've taken the liberty of removing the ad hominem which you used. I'm not 
happy with your approach to this discussion, but I'll try to stick with the 
technical points.


There is active work within the DMARC WG, with first drafts being published 
only *two months ago* [1]. My suggestion for everybody who doesn't have 
time to follow this process is to sit back, relax, and watch the IETF come 
up with a solution, and *then* start implementing their suggestions. Asking 
one's user base to reach every list service administrator out there with a 
"fix your DKIM/DMARC" is not going to work. Deploying DMARC at this point 
in time, when substantial changes are still being worked on, doesn't look 
like a good idea, either. This is all that I'm saying.



The mailing list hosts don't have to deploy DKIM. All they have to do
is not break signatures on mails bearing a DKIM signature.
Which, as I noted in my email is something that only requires a few
toggles within the Mailman administration interface.
(And, using the withlist tool can be changed on all lists on an entire
server with relative ease). This is what Debian has chosen to do.


You're saying that it's easy to configure a ML to stop breaking DMARC 
signatures. I disagree. Here's my reasoning:


1) Full compliance with DMARC requires a substantial reduction of features 
which distinguish mailing lists from dumb forwarders. This includes:


- the Reply-To munging,
- adding a [prefix] to subject headers,
- automatic signatures,
- in case of an overly strict DKIM setup, the various List-* headers which are 
actually mandated by RFCs to be automatically added.


2) Some domains might specify DMARC policies which prevent *any* 
distribution of their e-mails over mailing lists. The only solution for 
this problem is rewriting the RFC5322.From header to something like:


From: "Foo Bar via a KDE ML" 

This in turn leads to e-mails where one cannot reply to the original 
author anymore, etc etc etc.


In case someone is still following this thread, let me quote [2] John R. 
Levine, one of the Internet graybeards:



Mailing list apps can't "implement DMARC" other than by getting rid of every 
feature that makes lists more functional than simple forwarders. Given that we haven't 
done so for any of the previous FUSSPs that didn't contemplate mailing lists, because 
those features are useful to our users, it seems unlikely we'll do so now.

If receivers want to implement DMARC policy, they need to make their false 
alarm whitelist first. This appears to be a substantial, perhaps 
insurmountable, hurdle.


"FUSSP" is a "Final Ultimate Solution to the Spam Problem".

That entire thread is worth reading, btw.

Cheers,
Jan

[1] https://tools.ietf.org/html/draft-andersen-arc-00
[2] http://www.ietf.org/mail-archive/web/ietf/current/msg87157.html

--
Trojitá, a fast Qt IMAP e-mail client -- http://trojita.flaska.net/


Re: Change to Mail Infrastructure - SPF and DKIM verification will now be enforced

2015-12-08 Thread Jan Kundrát

On Tuesday, 8 December 2015 10:19:51 CET, Ben Cooksley wrote:

a) Clearing the "subject_prefix" setting
b) Clearing "msg_header" and "msg_footer"
c) Disabling "scrub_nondigest" and "first_strip_reply_to"

Depending on who posts to your list, you may also need to:
a) Set "reply_goes_to_list" to "Poster"
b) Set "include_sender_header" to "False".


So you're proposing that all mailing lists all over the world should 
cease adding the "[foo-bar]" prefix into subjects, and refrain from 
adding/overwriting the Reply-To header (among other things). I've seen many 
instances of these discussions, the bikeshedding was fun, but there was 
never any particular outcome.


It is irrelevant what our personal preference is. The fact of life is that 
there *are* mailing lists out there which perform these modifications, and 
these MLs won't change their config despite changes on our side. If we 
start rejecting these e-mails, well, our addresses will be unsubscribed 
from these MLs and we won't be able to participate in relevant technical 
discussions. If that happens, I'm afraid that the @kde.org e-mail addresses 
will no longer provide their stated value.


Cheers,
Jan

--
Trojitá, a fast Qt IMAP e-mail client -- http://trojita.flaska.net/


Re: Change to Mail Infrastructure - SPF and DKIM verification will now be enforced

2015-12-08 Thread Jan Kundrát

On Tuesday, 8 December 2015 16:09:43 CET, Nicolás Alvarez wrote:
It is irrelevant what our personal preference about doing 
modifications to messages is (like the tag in the subject). The 
fact of life is that there *are* mail providers out there (like 
Yahoo) which are already enforcing DMARC and may reject messages 
with such DKIM-breaking modifications, and these mail providers 
won't change their config to accommodate us.


Nicely said. Yes, there are providers such as Yahoo, AOL, and nobody else 
:) who decided to break a long-working infrastructure. The question is 
whether we want to join this club.


Should we start enforcing the same rules that Yahoo is enforcing? (Ben 
didn't say what SPF and DKIM rules he's planning to publish for @kde.org, 
btw.) Do we have many Yahoo/AOL users among our developers?


Should we start publishing rules which effectively instruct others to 
discard all e-mails from @kde.org once they go through a non-DMARC mailing 
list?


Should we discard e-mails which are intended for our developers because 
they went through a non-DMARC mailing list?


My answer to these two questions is "no" and "no", obviously. I don't know 
how else to say this -- Debian is not exactly a small open source project, 
and their sysadmins apparently haven't cared about DKIM so far. It's a 
technology which requires Everybody Else™ to perform extra work and to 
configure new services on the servers which host various mailing lists. Are 
we seriously trying to push an unspecified number of third-party ML 
providers to deploy DKIM because Ben decided that it's worth the effort? 
Seriously, the Internet just doesn't work this way. Even if Debian's 
infrastructure is changed, there are still a number of mailing lists which 
have worked well in the past years, and now they will stop working for 
@kde.org accounts.


Cheers,
Jan

--
Trojitá, a fast Qt IMAP e-mail client -- http://trojita.flaska.net/


Re: Change to Mail Infrastructure - SPF and DKIM verification will now be enforced

2015-12-08 Thread Jan Kundrát

On Friday, 4 December 2015 10:56:42 CET, Ben Cooksley wrote:

To be specific I will be enabling the following line:

On-BadSignature tempfail

within the configuration of OpenDKIM on our servers.


Thanks, but that's not a full answer. What is the proposed content of the 
following DNS records?


1) TXT, kde.org (for the final SPF policy)
2) TXT, _dmarc.kde.org (for *our* DMARC policy, which is an extremely 
important piece of missing information)

3) TXT, default._domainkey.kde.org (and others which you intend to use)
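
For illustration only -- the values below are made up, not a proposal -- 
these three records usually take this shape:

  kde.org.                    IN TXT "v=spf1 mx -all"
  _dmarc.kde.org.             IN TXT "v=DMARC1; p=none; rua=mailto:postmaster@kde.org"
  default._domainkey.kde.org. IN TXT "v=DKIM1; k=rsa; p=MIGfMA0..."

The interesting bits are precisely the DMARC policy (p=...) and the SPF 
qualifier (-all vs. ~all), which is why I'm asking.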

Cheers,
Jan

--
Trojitá, a fast Qt IMAP e-mail client -- http://trojita.flaska.net/


Re: Change to Mail Infrastructure - SPF and DKIM verification will now be enforced

2015-12-04 Thread Jan Kundrát

On Friday, 4 December 2015 10:56:42 CET, Ben Cooksley wrote:

Note that in the long run with DMARC looming you will need to switch
to #2 anyway, and keeping your current behaviour will likely lead to
mail from people who use Yahoo / AOL / etc ending up in the spam
folder with many mailing list members. I'll be starting a discussion
regarding taking this step on KDE systems at some point in the near
future (switching to DMARC compatible policies).

For more information, please see http://wiki.list.org/DEV/DMARC


Do I understand your plan correctly? The following projects appear to not 
re-sign their ML traffic, and they mangle headers at the same time. If so, 
I won't be able to use my @kde.org addresses on the mailing lists of these 
projects, for example:


- Qt,
- Debian,
- Gentoo,
- OpenStack,
- anything hosted at SourceForge,
- and many, many more, essentially anybody who has been ignoring DKIM.

Please, change your plans, this is obviously a huge no-go.

Cheers,
Jan

--
Trojitá, a fast Qt IMAP e-mail client -- http://trojita.flaska.net/


Re: Change to Mail Infrastructure - SPF and DKIM verification will now be enforced

2015-12-03 Thread Jan Kundrát

On Thursday, 3 December 2015 07:13:07 CET, Ben Cooksley wrote:

I will be re-enabling DKIM validation in one week's time - which will
then break subscriptions to Debian mailing lists (as any email from
anyone who has enabled DKIM which hits their lists will not be
accepted by KDE email infrastructure)


Ben, could you please briefly explain your idea about how a complying 
mailing list service should behave? Suppose that I have an installation of 
mlmmj which:


- mangles the Subject header,
- preserves the original From header,
- maybe replaces a Reply-To with the ML's address,
- introduces a bunch of specific List-* headers,
- otherwise doesn't manipulate the MIME tree or the message texts.

What should I do to make sure that this service continues working once you 
flip the switch?


I would like to have more information about what you mean by "DKIM 
validation" -- what specific steps are you going to introduce, and how is 
the end result going to react to missing or invalid DKIM signatures?


Also, quoting RFC 6376, section 6.3:

  In general, modules that consume DKIM verification output SHOULD NOT
  determine message acceptability based solely on a lack of any
  signature or on an unverifiable signature; such rejection would cause
  severe interoperability problems.  If an MTA does wish to reject such
  messages during an SMTP session (for example, when communicating with
  a peer who, by prior agreement, agrees to only send signed messages),
  and a signature is missing or does not verify, the handling MTA
  SHOULD use a 550/5.7.x reply code.

That seems in line with what e.g. GMail is doing, only enforcing DKIM 
validation for notoriously faked domains like eBay and PayPal where the 
phishing potential is high.


With kind regards,
Jan

--
Trojitá, a fast Qt IMAP e-mail client -- http://trojita.flaska.net/


Re: Bringing back rsibreak from unmaintained

2015-08-18 Thread Jan Kundrát

On Monday, 17 August 2015 20:04:04 CEST, Albert Astals Cid wrote:

Other comments?


Nice, happy to see it -- it builds here, with a bunch of warnings:

[2/29] Generating index.cache.bz2
index.docbook:2: element para: validity error : ID gnu-fdl already defined
element div: validity error : ID header already defined
element div: validity error : ID header_content already defined
element div: validity error : ID header_left already defined
element div: validity error : ID header_right already defined
element div: validity error : ID header already defined
element div: validity error : ID header_content already defined
element div: validity error : ID header_left already defined
element div: validity error : ID header_right already defined
element div: validity error : ID footer already defined
element div: validity error : ID footer_text already defined
element div: validity error : ID header already defined
element div: validity error : ID header_content already defined
element div: validity error : ID header_left already defined
element div: validity error : ID header_right already defined
element div: validity error : ID footer already defined
element div: validity error : ID footer_text already defined
element div: validity error : ID header already defined
element div: validity error : ID header_content already defined
element div: validity error : ID header_left already defined
element div: validity error : ID header_right already defined
element div: validity error : ID footer already defined
element div: validity error : ID footer_text already defined
element div: validity error : ID header already defined
element div: validity error : ID header_content already defined
element div: validity error : ID header_left already defined
element div: validity error : ID header_right already defined
element div: validity error : ID footer already defined
element div: validity error : ID footer_text already defined
element div: validity error : ID footer already defined
element div: validity error : ID footer_text already defined

The stderr is full of output which probably just wastes space. I don't 
think that these are good default settings:


** resetAfterTinyBreak !!
** resetAfterBigBreak !!
** resetAfterTinyBreak !!
** resetAfterTinyBreak !!

However, the worst thing is that the passive popup for tiny breaks doesn't 
appear to notice that I'm still moving my mouse. This is with KF5 and 
Plasma5 from git from very late July, on X11.


Cheers,
Jan

--
Trojitá, a fast Qt IMAP e-mail client -- http://trojita.flaska.net/


Re: Bringing back rsibreak from unmaintained

2015-08-18 Thread Jan Kundrát

On Wednesday, 19 August 2015 01:00:13 CEST, Christoph Feck wrote:

-- Build files have been written to: /local/build/kf5/rsibreak
/local/git/extragear/utils/rsibreak/src/rsitimer.cpp:26:20: fatal 
error: kdebug.h: No such file or directory


A missing dep on kde4libssupport, https://git.reviewboard.kde.org/r/124809/ 
.


Jan

--
Trojitá, a fast Qt IMAP e-mail client -- http://trojita.flaska.net/


Re: Bringing back rsibreak from unmaintained

2015-08-18 Thread Jan Kundrát

On Wednesday, 19 August 2015 00:30:01 CEST, Albert Astals Cid wrote:
These messages are not new, IMHO does not apply to this request of bringing 
back from unmaintained ;)


I agree that it's not a blocker of course, but you asked for feedback :).


However, the worst thing is that the passive popup for tiny breaks doesn't
appear to notice that I'm still moving my mouse. This is with KF5 and
Plasma5 from git from very late July, on X11.


Are you sure about that? You mean the countdown goes down even 
if you move the mouse?


Yes, that's what I'm seeing. It's interesting that the idle tracking 
appears to work when nothing is displayed, i.e., during the normal mode: 
the app detects that I'm actively using my computer and starts tracking my 
activity, the taskbar icon (the violet pie) goes from 100% to 75% when I 
start typing something, and back to 100% violet after a short while of 
inactivity. It's just the attention grabber ("hey, stop working now") which 
ignores my activity and continues the countdown as if I were idle.


The "please relax for %1 second(s)" countdown just doesn't notice my mouse 
or keyboard activity. In addition, it's always shown as a passive pop-up 
even when I pick "Simple Gray Effect" as the notification effect during 
breaks.


Cheers,
Jan

--
Trojitá, a fast Qt IMAP e-mail client -- http://trojita.flaska.net/


Re: Plasma Applet for Audio Volume for kdereview

2015-08-14 Thread Jan Kundrát

On Thursday, 6 August 2015 12:43:28 CEST, Martin Klapetek wrote:

You can still use kmix with Plasma, there is even a port to kf5 though I'm
not sure what its state is.


FYI, I've been running with the KF5 kmix for a couple of months without any 
issues. I'm using just plain old ALSA, not PA.


Cheers,
Jan

--
Trojitá, a fast Qt IMAP e-mail client -- http://trojita.flaska.net/


Re: 3 UDSEntry optimizations

2015-07-20 Thread Jan Kundrát

On Sunday, 19 July 2015 23:11:05 CEST, Mark Gaiser wrote:

Regarding gerrit. How can I make patch 2 and 3 dependent on 1?


You did a good job. The correct way is to produce three commits locally, 
1 being the parent of 2 and 2 being the parent of 3, and to push these to 
refs/for/master, which is what you did. Gerrit will do the right thing 
here.
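
In shell terms, the whole flow is just (a sketch; the branch name is 
arbitrary):

  git checkout -b uds-optimizations origin/master
  # ...create commits 1, 2 and 3 on top of each other...
  git push origin HEAD:refs/for/master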



And why is gerrit failing?


You're getting a Verified -1 vote from the CI because the test suite doesn't 
pass. I would encourage people to take a look at the CI matrix overview [1] 
and fix these test failures. I was trying to get rid of most of these, but 
my time is limited and I don't know much about these libraries. Jenkins 
says the same, btw.


With kind regards,
Jan

[1] http://ci-logs.kde.flaska.net/matrix.html

--
Trojitá, a fast Qt IMAP e-mail client -- http://trojita.flaska.net/


A CI dashboard with multiple versions of Qt5 on Linux

2015-06-11 Thread Jan Kundrát

Hi,
if you would like to check how well the KF5 builds cope with multiple Qt5 
versions, take a look at this page generated from the Zuul/Gerrit CI 
system:


http://ci-logs.kde.flaska.net/matrix.html

I am open to suggestions for service improvements, so if you have an idea 
on how to make this more useful, I'm all ears.


The test failures look like real stuff, not occasional non-deterministic 
breakage, btw.


With kind regards,
Jan

--
Trojitá, a fast Qt IMAP e-mail client -- http://trojita.flaska.net/


Re: Something has gone horribly wrong.. Linux builds carnage.

2015-05-14 Thread Jan Kundrát

On Thursday, 14 May 2015 17:40:09 CEST, Scarlett Clark wrote:
I woke up this morning to a sea of red. Almost all of the linux builds are 
failing. It looks like QT5 was triggered by an scm change, but 
hard to tell as 
it was also started and aborted by kaning.


Sune reported this to Thiago who proposed a patch:

https://codereview.qt-project.org/112060

Cheers,
Jan

--
Trojitá, a fast Qt IMAP e-mail client -- http://trojita.flaska.net/


Re: KDE Frameworks 5.10.0 released

2015-05-09 Thread Jan Kundrát

Hi David,
could you please clarify the release procedure, in particular what 
determines whether commits pushed after the -rc1 tag are included or not?


I pushed a Qt 5.5 build fix to kitemmodels yesterday, but it apparently 
didn't make it through. Not a big deal, of course, but it got me curious. 
Should I send a mail to the release team next time, maybe?


Cheers,
Jan

--
Trojitá, a fast Qt IMAP e-mail client -- http://trojita.flaska.net/


Gerrit upgraded to 2.11

2015-04-22 Thread Jan Kundrát

Hi,
we're now running with Gerrit 2.11. This brings a couple of new features 
and changes.


My personal favorite is the online editing via web browser. One can now fix 
up small changes in patches without firing up Git. It's also possible to 
create new changes from scratch, without touching Git at all. Stuff is 
still checked by Continuous Integration (CI), etc. Detailed documentation 
on how to use this is at [1].


A full list of changes is available from [2] and [3]; previously, we were 
on the 2.9 branch.


With kind regards,
Jan

[1] https://gerrit.vesnicky.cesnet.cz/r/Documentation/user-inline-edit.html
[2] 
https://gerrit-documentation.storage.googleapis.com/ReleaseNotes/ReleaseNotes-2.11.html
[3] 
https://gerrit-documentation.storage.googleapis.com/ReleaseNotes/ReleaseNotes-2.10.html


--
Trojitá, a fast Qt IMAP e-mail client -- http://trojita.flaska.net/


Re: Distros and QtWebEngine

2015-04-20 Thread Jan Kundrát

On Monday, 20 April 2015 21:14:51 CEST, Sune Vuorela wrote:

And Red Hat is following Fedora.


RHEL might not be a good example here because they are rather a strange 
beast. RHEL has actually never shipped QtWebKit (!) and they also aren't 
shipping Qt5 so far.


Cheers,
Jan

--
Trojitá, a fast Qt IMAP e-mail client -- http://trojita.flaska.net/


Re: Distros and QtWebEngine

2015-04-20 Thread Jan Kundrát

On Monday, 20 April 2015 21:12:44 CEST, Franz Fellner wrote:
Is it really necessary to use a multiprocess web framework just 
to view HTML mails?


I suppose that it is necessary to use an HTML content renderer which:

- is still supported,
- remains reasonably secure and up-to-date,
- provides sufficient features to make sure that users' privacy is not 
compromised.


Whether it implies using a multiprocess architecture or not is an internal 
implementation detail. We might think that it's an overengineered beast, 
but our opinion is not as important as the opinion of the guys who are 
doing the actual work.


Can't this be done with different backends, so users/distros 
have the option to simply use KHTML?


I cannot speak for KDEPIM, but I can speak for Trojita which is currently 
using QtWebKit.


Based on a quick glance through the KHTMLPart's public API, I cannot use it 
in Trojita. One of the reasons is that HTML e-mails use the cid: URL scheme 
for accessing data from other MIME parts in the same message. I don't see 
any way to implement custom URL schemes *and* to disable arbitrary network 
access on a per-message basis at the same time.


E-mail clients unfortunately are special; their use case is much 
different from a web browser's. They cannot work through a "just render 
this HTML you read from this QIODevice" model, and at the same time they 
are expected to render rich HTML with full support for modern CSS. I'm not 
surprised that KHTML's API is not sufficient; the QML-based API of Qt's 
WebKit2 wasn't sufficient, either.
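
For the record, the approach which does work for Trojita with QtWebKit 
boils down to installing a custom QNetworkAccessManager on the web page. A 
minimal sketch (mimePartReply() is a stand-in for the real code which maps 
a cid: URL to the data of the corresponding MIME part):

  #include <QNetworkAccessManager>
  #include <QNetworkReply>
  #include <QNetworkRequest>

  class MessageNetAccessManager : public QNetworkAccessManager
  {
  protected:
      QNetworkReply *createRequest(Operation op, const QNetworkRequest &req,
                                   QIODevice *outgoingData) override
      {
          // Serve references to other MIME parts of the same message
          if (req.url().scheme() == QLatin1String("cid"))
              return mimePartReply(req.url());
          // Anything else gets an empty URL; the base class returns a reply
          // which fails with ProtocolUnknownError, so no network is touched
          return QNetworkAccessManager::createRequest(
              op, QNetworkRequest(QUrl()), outgoingData);
      }

      QNetworkReply *mimePartReply(const QUrl &cidUrl); // stand-in, see above
  };

The manager is then attached through QWebPage::setNetworkAccessManager(), 
so the policy applies to each message view separately.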


Cheers,
Jan

--
Trojitá, a fast Qt IMAP e-mail client -- http://trojita.flaska.net/


Re: Another proposal for modernization of our infrastructure

2015-02-03 Thread Jan Kundrát

On Tuesday, 3 February 2015 12:37:30 CEST, Martin Sandsmark wrote:

So everyone with a KDE account will be able to push to any KDE project,
bypassing Gerrit?


Yes.

--
Trojitá, a fast Qt IMAP e-mail client -- http://trojita.flaska.net/


Re: Another proposal for modernization of our infrastructure

2015-02-03 Thread Jan Kundrát

On Tuesday, 3 February 2015 11:36:37 CEST, Martin Sandsmark wrote:

On Fri, Jan 30, 2015 at 11:44:22PM -0200, Thiago Macieira wrote:
Many of your complaints about usability (threading, replies, 
etc.) are solved 
or at least partially addressed in the new Gerrit UI, which 
versions like 2.7 
have. It might not be the default on the installation, so 
check the settings 
and try to turn it on.


Do you know of any Gerrit installations that have this enabled?


Yes, the one we're testing in KDE is reasonably recent. It lives at 
https://gerrit.vesnicky.cesnet.cz/ , and it uses the new change screen by 
default.


On other Gerrit servers >= 2.8, you can go to (your name at upper right) -> 
Settings -> Preferences -> Change view (second option from the bottom) and 
select "New screen".



We use Gerrit (2.7) at work, and the UI is still pretty horrible,


The new change screen only got added in 2.8, and 2.7 doesn't have it.

Cheers,
Jan

--
Trojitá, a fast Qt IMAP e-mail client -- http://trojita.flaska.net/


Re: Another proposal for modernization of our infrastructure

2015-02-03 Thread Jan Kundrát

On Tuesday, 3 February 2015 11:48:30 CEST, Martin Sandsmark wrote:

As mentioned already, we've been using Gerrit at work for quite a while now,
and having the code broken up by comments (sometimes many lines in case of a
discussion) makes it extremely hard to actually follow the flow of the code.

Do you know if upstream would accept to change this, or how hard it would be
to change?


I believe that this is fixed in the new change UI:

- The diff viewer shows comments minimized/collapsed and in a way which 
consumes less space.
- The review page shows file/line/range comments with a pointer to what 
file and what part of a file this is about.


Now, one thing which is arguably missing and can be improved is adding a 
small chunk of actual file content to the comments shown on the review page. I 
think that upstream will be happy to accept such a patch.


Cheers,
Jan

--
Trojitá, a fast Qt IMAP e-mail client -- http://trojita.flaska.net/


Re: Another proposal for modernization of our infrastructure

2015-02-03 Thread Jan Kundrát

On Tuesday, 3 February 2015 11:53:30 CEST, Martin Sandsmark wrote:

I think the point was more that what Gerrit has fixed were simple UI
glitches, not radical improvements that change the existing design to make
it easier for less experienced or casual users (or even experienced users,
but that's another discussion). :-)


Thanks for explaining this to me.

As they completely revamped the change screen UI in 2.8, I do not think 
that this point is true, either.


Cheers,
Jan

--
Trojitá, a fast Qt IMAP e-mail client -- http://trojita.flaska.net/


Re: Another proposal for modernization of our infrastructure

2015-02-02 Thread Jan Kundrát

On Monday, 2 February 2015 11:22:57 CEST, David Jarvie wrote:

I occasionally contributed patches in the past to Qt, but since the
current gerrit setup was introduced I've never even contemplated doing so
because it looks too much effort to get to grips with. It's far too
off-putting for occasional contributors.


Seems that the Qt project has a big problem in the *perceived* complexity 
of getting involved, then. I'm saying "perceived" because you said that you 
did not try; the mere fact that they use a tool which has a reputation for 
being hard to use, along with the state of their documentation, suggests to 
you and to others that it is apparently a big pain to get involved. I 
understand that people do not want KDE to end up that way.


However, we also have people with little to no experience using Gerrit just 
fine. Shall we therefore focus on explaining that contributing through 
Gerrit is actually not painful?


The fact that the Qt project uses an obsolete version of Gerrit along with 
a non-standard workflow and keeps changing the branch names certainly 
doesn't help. I can see why it's confusing to suddenly see no master 
branch, for example.


Anyway, the proposal which I have for solving this problem is about writing 
nice documentation. Maybe even screencasts showing how easy it can be for a 
total beginner to send their first patch.


With kind regards,
Jan

--
Trojitá, a fast Qt IMAP e-mail client -- http://trojita.flaska.net/


Re: Another proposal for modernization of our infrastructure

2015-01-31 Thread Jan Kundrát

On Friday, 30 January 2015 03:30:55 CEST, Kevin Kofler wrote:
Unfortunately, file level strikes me as a less than helpful default. Can 
this be changed to line-level merges in our instance? (I think the ideal 
would be to use git's native merging algorithm(s), but I expect some 
limitations due to the convenient web resolving UI.)


Um, it seems that I managed to confuse myself here -- this feature already 
exists and is active on our instance. A content merge is already enabled. 
I'm afraid I haven't needed it yet, so I cannot comment on what sorts of 
conflicts it can solve, and what conflicts require a human action to fix.


Maybe someone already needed this and can provide more details?

As a result, people who opt to disable JavaScript in their browser for 
whatever reason (e.g., security) will have:


I agree with the sentiment in general, but at the same time, one could 
reasonably point out that Gerrit's choice of port 29418 for git-over-SSH 
might trip some corporate firewalls because it is not HTTP or HTTPS. Sure, 
disabling outbound traffic to insecure ports can "increase the security of 
our corporation". It is up to everyone to evaluate whether that particular 
benefit is something worth the trouble for them.


In this context, I wonder what security benefits it brings when someone 
disables JavaScript for a trusted service where the entire set of JS code 
is free software.



* the Gerrit web interface not working at all (or at least not until such an
  alternative web UI is implemented in a way not requiring client-side
  JavaScript and deployed on KDE infrastructure),
* the integration between various utilities also not working, e.g., Bugzilla
  will not list pending review requests at all.
To me, this contradicts the web maxim of graceful degradation.


Note that even if people disable JS, they are still able to do any of 
the following as soon as they get a change number from e.g. the project 
mailing list or an IRC channel:


- pull changes from Gerrit for local testing,
- upload patches and create new changes or push updates to existing ones,
- record a result of their code review, including full voting and an 
eventual merge.
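
For example, voting and submitting from a terminal goes through Gerrit's 
SSH command suite (assuming the standard SSH service; "1234,5" is a change 
number and patch set):

  ssh -p 29418 user@gerrit-host gerrit review --code-review=+2 --submit 1234,5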


Why can the work not be done on the server side? Especially for the 
integration between services, I would expect a simple API call for data 
lookup to be doable on the server side at least as easily as from 
client-side JavaScript.


Yes, the technical options are essentially unlimited and someone /could/ 
write code doing just that. Maybe nobody sees the value of disabling JS as 
compelling enough to commit their time. Or maybe people actually like JS 
and appreciate the feature set it brings.


One benefit of having the UI implemented in JS is that the APIs are 
*guaranteed* to offer enough functionality to be able to implement an 
alternative client as, say, an IDE plugin. If Gerrit were generating static 
web pages, it would be very easy to accidentally introduce features which 
just could not be implemented in other clients because the required APIs 
were never made public.


These other clients exist today, btw. If a lack of support for JS-less 
browsers bothers you, may I suggest installing Gertty? It even has support 
for doing patch review offline when on a plane, and it bidirectionally 
syncs stuff when you reconnect later.


Cheers,
Jan

--
Trojitá, a fast Qt IMAP e-mail client -- http://trojita.flaska.net/


Re: Another proposal for modernization of our infrastructure

2015-01-31 Thread Jan Kundrát

On Saturday, 31 January 2015 22:09:36 CEST, Thomas Lübking wrote:
Aside that this is an exhaustive HowTo on git and gerrit*, 
there're apparently "upload your plain diff" webfrontends.
(Though I think the question was brought up and not answered 
how follow-up patches are handled - eg. whether you've to pass 
some existing gerrit url to fetch the change-id from)


The gerrit-patch-uploader says that you are supposed to copy-paste the 
Change-Id line from e.g. the change page, yes.


Keep in mind that Gerrit 2.11 allows editing your change right from 
Gerrit's web UI, so there is no need to upload another followup patch for a 
significant chunk of these changes.


Cheers,
Jan

--
Trojitá, a fast Qt IMAP e-mail client -- http://trojita.flaska.net/


Re: Another proposal for modernization of our infrastructure

2015-01-31 Thread Jan Kundrát

On Saturday, 31 January 2015 21:38:23 CEST, Inge Wallin wrote:
It is one thing if there is one tool that is totally too weak to work for 
experienced people and one tool that is awesome but very 
difficult to learn.  
But that's not the situation we have here.  I think we have one 
tool that is 
very good and then one that many have pointed out as cumbersome 
and difficult to 
learn for not very experienced people - but - with an edge when it comes to 
advanced git users.


I am a bit surprised to read that Phabricator is apparently considered 
"very good" even though no KDE project has tried it yet. I was under the 
impression that the usual order of steps is:


1) install a tool,
2) play with it,
3) identify its strengths and weaknesses,
4) make an informed opinion.

Did I miss something? Which KDE project has been testing Phabricator?

I know how long it took for me to get used to git, and I think I'm pretty 
experienced. Adding to that burden is not the way to get new people.


I'll repeat my request from earlier in this thread -- please quantify the 
expected increase of the barrier to entry. Different does not imply 
harder.


In Trojita, we have GCI students submitting patches within 15 minutes, 
following the developer-oriented newbie-unfriendly documentation. 15 
minutes for a high-schooler to start contributing. Translators have sent 
patches via Gerrit as well. Interestingly enough, the only complaint was 
from an experienced KDE developer, not from a newcomer.


Maybe the newcomers just do not care whether they're learning about 
Phabricator, Reviewboard or Gerrit.


Anyway, I don't really see a problem with the following steps:

0) Ignore any documentation, especially the clone command which already 
includes getting the hook. I don't read no documentation.

1) git clone ...
2) make changes && make && run
3) git commit -a
4) git push
5) Damn, that stupid Gerrit tells me on a command line that I should push 
to refs/for/master instead of just master. Oh well, stupid crap.

6) git push origin refs/for/master
7) Damn, now that stupid Gerrit tells me that I am supposed to copy-paste 
this line into my terminal for some crazy hook or what not, and to amend my 
commits. Annoying crap!

8) (copy-paste a line from Gerrit's output)
9) git commit --amend
10) git push origin refs/for/master

For those who actually read the documentation, it's of course more 
straightforward, but we all know that nobody reads documentation, hence the 
extra detours.
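
For reference, the documented path is roughly this (with Gerrit's usual 
SSH port and hook location):

  git clone <project URL>
  scp -p -P 29418 user@gerrit-host:hooks/commit-msg .git/hooks/
  # hack, git commit, and then:
  git push origin HEAD:refs/for/master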


Besides, it has already been established that there will be support for 
direct patch upload.


It would be very interesting to hear the VDG people's and the 
translators' and documentors' view on this.


Given that the translators still use SVN, I sincerely hope that the 
proposed outcome does not involve us switching back to Subversion, or 
reducing our use of Git to an SVN equivalent.


Cheers,
Jan

--
Trojitá, a fast Qt IMAP e-mail client -- http://trojita.flaska.net/


Re: Another proposal for modernization of our infrastructure

2015-01-31 Thread Jan Kundrát

On Thursday, 29 January 2015 19:31:20 CEST, Eike Hein wrote:

Just for the record: I consider you a KDE sysadmin (you're
administrating infra used by KDE, after all), so I meant the
kde.org more general. Thanks.


I forgot about this mail, and I realize that I am not sure whether my reply 
was clear or if it left some space for a possible misunderstanding. Sorry 
for the noise if we already understood each other.


The following KDE people have root there:
- Ben Cooksley
- Frederik Gladhorn
- Victor Blazques

If there are others who need access, there's no problem adding them. The 
KDE server holding the PostgreSQL backups is tesla.kde.org.


--
Trojitá, a fast Qt IMAP e-mail client -- http://trojita.flaska.net/


Re: Another proposal for modernization of our infrastructure

2015-01-31 Thread Jan Kundrát

On Saturday, 31 January 2015 12:20:15 CEST, Inge Wallin wrote:
Given how few of our community have participated so far, I think it 
borders on pure falsehood to claim clear consensus on *anything*. I would 
put it more like "some people want it", and I can certainly see 
the appeal.


Fair enough, you have a point -- I suspect there is no consensus that CI is 
useful, or that there is any value in having a clean git history without 
"fix missing semicolon" commits. I agree that having per-commit CI 
coverage can well be considered an undesirable thing by some developers.


Which is why I have no intention of pushing these to all KDE projects. What 
I am proposing is an opt-in for those who care.


But 
from that to simply state "the costs in HW are worth it" (and conveniently 
forgetting the cost in maintenance) is a very long step.


I believe that the cost of maintenance is sufficiently covered by section 5 
of the proposal, so I have to admit that I don't know what I am 
conveniently forgetting about.


Could you please explain what maintenance cost you are worried about? Is it 
perhaps related to the number of build jobs being run, or the number of 
throwaway VMs we use for builds? Is it about the services which replace 
Jenkins?


The scripting which is currently used for build VMs with Gerrit/Zuul lives 
at [1]. The bootstrapping part which turns a downloaded, minimal VM image 
into a build node is [2].


Cheers,
Jan

[1] http://quickgit.kde.org/?p=sysadmin%2Fgerrit-project-config.git&a=tree
[2] 
http://quickgit.kde.org/?p=sysadmin%2Fgerrit-project-config.git&a=blob&f=turbo-hipster%2Fcloud-init.sh


--
Trojitá, a fast Qt IMAP e-mail client -- http://trojita.flaska.net/


Re: Another proposal for modernization of our infrastructure

2015-01-31 Thread Jan Kundrát

On Thursday, 29 January 2015 22:57:33 CEST, Ben Cooksley wrote:

Given that upstream has had multiple attempts now at an improved
interface, I would question whether they would be willing to accept a
user interface which is suitable for our needs. It appears that they
are quite comfortable with an interface many find unintuitive or
difficult to use. If they weren't, considering the number of backers
it has - one would think someone would have sponsored such an
interface.


I don't think this is an accurate and fair description of upstream. They 
fixed a UI usability glitch that Martin complained about in less than 12 
hours. That sounds like they are pretty welcoming to 3rd-party feedback, 
IMHO.



As for the CI backend, please mention what is wrong with Jenkins - if
it would be integrated to check code review submissions.


The reasons for considering another CI platform are described in my report 
in section 3.3.


Cheers,
Jan

--
Trojitá, a fast Qt IMAP e-mail client -- http://trojita.flaska.net/


Re: Another proposal for modernization of our infrastructure

2015-01-31 Thread Jan Kundrát

On Thursday, 29 January 2015 21:03:32 CEST, Eike Hein wrote:

I think it's a real concern, and I'm wary of "we can patch 
it away" because carrying a huge custom patch delta for UI 
mods is what kept us from upgrading Bugzilla for multiple 
years. I think "is it realistic that we can maintain this 
and keep up with upstream even if Ben or Jan get hit by a 
bus" is an important question with regard to both proposals.


That's a very good question, and a reason for why I am not patching Gerrit 
with stuff not accepted upstream. I agree that carrying big custom patches 
won't scale.


So far, we don't have any patches at all. I'll be backporting stuff such as 
the show-headers-prior-to-cpp from 2.11 because it is super-easy to do so, 
and because 2.11 isn't released yet.


We also have some JavaScript proof-of-concept for Bugzilla integration. You 
can check its complexity at [1]. I managed to write that over a Sunday, and 
I am definitely not a web guy. I had zero jQuery experience prior to this.



I have similar concerns with some of the promised benefits 
in the proposal because they strike me more of "we could", 
which is cool, but it's not "we will". E.g. if test building 
precombined patches takes an OpenStack cluster - do we 
have one? Where are we going to get that horsepower? Can 
we keep it?


Designing contingency plans is indeed important (see section 5 of that 
proposal; it talks about managing infrastructure-as-code). You are also 
right that the current infrastructure is best-effort and that KDE won't get 
an SLA without paying for one. If we (KDE) need an SLA, we (the company the 
cluster is hosted at) will be happy to be asked for a quote :). Or we (KDE) 
can just host this stuff anywhere else and pay someone else.


But it seems to me that we already have pretty clear consensus that we 
absolutely do want a pre-approval CI coverage, and that the costs in HW are 
worth it. Does someone from KDE e.V. know whether we could get some free HW 
resources from a commercial partner (hi RedHat/SuSE/Digia)? Do we have some 
backup cash to e.g. rent VM time from Amazon/Rackspace/whatever in an 
unlikely event that the current hosting platform is withdrawn with no prior 
notice?


About the "we could" vs. "we will" in general, I have to admit I'm slightly 
confused by that. The proposal is careful to describe what is available 
today, and to make a clear difference in saying what needs to be done in 
the future. Maybe some part needs clarification -- what parts do you think 
are more of the yes-this-would-be-nice-but-I'm-worried nature?


With kind regards,
Jan

[1] https://gerrit.vesnicky.cesnet.cz/r/static/bugzilla.js

--
Trojitá, a fast Qt IMAP e-mail client -- http://trojita.flaska.net/


Re: Another proposal for modernization of our infrastructure

2015-01-31 Thread Jan Kundrát

On Saturday, 31 January 2015 11:14:01 CEST, Ben Cooksley wrote:

Fixing a usability glitch and accepting a radical redesign of your
interface are completely different.


Your mail suggested that they apparently do not care about improving their 
UI, because if they did, they would have solved everything already. I 
disagree with that, and provide evidence which supports the idea that 
Gerrit upstream in fact also cares about users, including those who are not 
already experienced with its UI.



We're not the first ones to complain about its interface but they've
not done anything to remedy this despite a complete redesign (which
reduced the level of functionality interestingly).


How does the ChangeScreen2 reduce the level of functionality?


I see the following in that section:

1) A note that Jenkins is "a glorified job launcher" as we don't use 
any of its advanced functionality (which I refuted - it is much more 
than that).


You reiterated that it is cool to have graphs tracking the number of failed 
tests. I proposed to fix the tests instead, and offered a solution which 
eliminates this problem for tests that are inherently unstable (see 
3.3.2.). I also explained how running cppcheck and code coverage fits into 
this.


The way I understand this reasoning is: "we have failing tests -> we have 
to have graphs so that people know whether any more tests start failing". 
That sounds rather suboptimal to me. I would much prefer to fix the actual 
cause of pain rather than to provide tools which plot the pain level and 
the frequency of the patient's seizures. Defect tracking is a tool, not a 
goal. If there is another tool which ensures that no additional defects can 
enter a repository, why not simply use that? (Please see the report for 
dealing with non-deterministic tests; this *is* covered.)



2) Some notes that a proposed patch may be based against a week old
revision. This is making assumptions about how a Jenkins setup would
be made - as we're in control of what it does there is nothing to stop
us trying to apply the patch on top of the latest code in Git.


You have to somehow account for the delay between the time a human reviews 
a patch and the time it gets merged. Any other patch could have landed in 
the meanwhile. What builds a patch queue so that you have this covered?



In terms of checking dependency buildability, once again - this is
possible but we don't do anything like this at the moment to preserve
resources.


Given enough CPUs, how would you do this with our current Jenkins setup? 
This is absolutely not just a resource problem; you need something to build 
these projects against appropriate refs, and do this in an atomic manner. 
Zuul does it, and internally it's a ton of work. Jenkins does not do it. 
KDE's CI scripts do not do it, either.



As for it not having a declarative configuration, we're in the process
of refining the Groovy based Job DSL script Scarlett wrote. This will
shift the configuration of Jenkins jobs entirely into Git, and
depending on how things work out - jobs could be automatically setup
when new repositories come into existence or when branch metadata is
revised.


The report said that there are automated tools which provide workarounds 
for this aspect of Jenkins. It's good to see KDE adopting one now.


However, you are investing resources in making a tool with horrible 
configuration more usable. More power to you, it's your time, but this is 
exactly what the report says -- you are working around the tool's 
limitations here.



About the only point left standing is that it doesn't check individual
subcommits, but we've yet to see whether the KDE project as a whole
sees this as necessary


The fact that most of KDE's projects have no use for pre-merge CI does not 
imply that projects who want to opt in should be punished. This is 
absolutely *not* about pushing an advanced workflow to everybody. It is 
about being *able* to accommodate such an advanced workflow at all.


This is possible today with Gerrit+Zuul, and it was easy to configure that 
way. Our Zuul builds the dependencies (=projects outside of Gerrit) on a 
per-push basis, exactly how KDE's Jenkins does it. There is no time wasted 
on doing per-commit builds for these, because nobody could react on 
failures anymore -- a change is already in master by that point.


What this is all about is enforcing that each commit which goes through 
code review is regression free.


Cheers,
Jan

--
Trojitá, a fast Qt IMAP e-mail client -- http://trojita.flaska.net/


Re: Another proposal for modernization of our infrastructure

2015-01-31 Thread Jan Kundrát

On Thursday, 29 January 2015 23:11:29 CEST, Eike Hein wrote:

Maybe, but this is actually something I like from the
Phabricator proposal: It provides an impression of our
relationship with Phabricator upstream, which it says
is a good and constructive one.


I believe that our relation with the Gerrit upstream community is a good 
and constructive one, too. We also have patches to demonstrate this.


Then we have the infra people of OpenStack which are an excellent group of 
highly-talented people who know very well what they are doing, and yet are 
patient enough to help others and to explain why they are doing stuff the 
way they do to curious newcomers. That's one of the reasons why this 
proposal is modeled after the infrastructure which the OpenStack project 
uses.


Cheers,
Jan

--
Trojitá, a fast Qt IMAP e-mail client -- http://trojita.flaska.net/


Re: Another proposal for modernization of our infrastructure

2015-01-31 Thread Jan Kundrát

On Saturday, 31 January 2015 13:08:07 CEST, Inge Wallin wrote:

Well, all of the above and more.  Hosting, electricity, networking,


I'm including all of the above as HW costs in my proposal. We (KDE) do 
not have our own datacenter after all.



manual work as the number of physical machines increases,


Due to the nature of build jobs which constitute a pretty bursty load, 
renting VMs sounds like a cost-effective approach for our scale. I do not 
expect that it would make financial sense for us to procure enough physical 
HW to cover the peak demand -- hence the VMs. Renting VMs also enables 
corporate sponsors to offer unused capacity for less money, or even for 
free.


Another benefit is that bootstrapping a new VM is about executing a single 
command (and this is not hype; it's how all of the current build VMs in use 
by Gerrit were launched).



manual work as the complexity increases, and so on.


I would prefer if we stopped using phrases such as "increased complexity", 
and switched to expressing a specific and measurable difference in costs.


The reason for this request is that "increased complexity" is almost always 
true, yet people have no way of actually judging what it is going to mean. 
For example, we have in-house expertise in Jenkins where Ben, Nicolas, 
Scarlett and Marko understand the stack we use. Switching to anything else 
will cost us some time to get the respective sysadmins up to speed to the 
new tools. I do not question this.


However, chances are that at the same time adopting a new system could 
provide significant savings in future. For example, if a new system allowed 
easier updates to the build structure, it is reasonable to have a wiki page 
with instructions and to let developers propose changes for their projects. 
That way, sysadmin time could be freed for other tasks. So even though 
there is some initial investment, the long-term operation can easily get 
less demanding, and cheaper.


In fact, better tooling could attract more people. "Hey, this is a nice 
system which provides almost exactly what I need; can I improve it?" It 
also presents an opportunity to clean some of the cruft accumulated over 
the years, thereby reducing the barrier to entry for new contributors and 
improving the bus factor.


Finally, there is also that aspect of providing a better service to our 
developers. Running infrastructure costs something, yet we do it. IMHO, we 
should strive for offering the best service that we can afford with the 
resources we have today, and have a contingency plan for future.


Right now, we have access to some nice CPUs for free. If it proves useful 
and we get used to it, and if the resources stop being free at some future 
point and we cannot obtain a free alternative from some other sponsor, 
then we should take a look at how much money it would cost us on a 
commercial basis at that point. If the price is too big, then the service is 
*not* that useful for us after all, and we will live without it just fine. 
If, however, we decide that the cost is worth the price and no free/cheaper 
alternative exists, then we should go and pay. There is no vendor lock-in, 
and it is trivial to migrate to another provider. I just don't see why we 
should be actively prevented from using resources which we have today just 
because we might have to ask somewhere else in the future. Turning off 
pre-merge CI would get us back to the current level of resource utilization 
immediately.


That's what this proposal is all about, and please keep in mind that it's 
something which is already deployed and tested. It isn't an 
out-of-the-blue proposal with an uncertain cost of deployment.


Everything can be automated in theory but in practice there is 
never a fully automatic system.


Yes, which is why it's important to look at the architecture and to spell 
out what our expectations of the maintenance effort are. It might be 
reasonable to e.g. compare them with the current effort which goes into 
keeping Jenkins alive and modernizing it.


Keeping the current state has its very real cost, too.

With kind regards,
Jan

--
Trojitá, a fast Qt IMAP e-mail client -- http://trojita.flaska.net/


Re: Another proposal for modernization of our infrastructure

2015-01-30 Thread Jan Kundrát

On Thursday, 29 January 2015 12:25:57 CEST, Jan Kundrát wrote:
Hi Martin, thanks for an excellent idea, sorting headers before 
actual code changes makes a lot of sense. I have a quick'n'dirty 
patch at [1].


The patch has been merged upstream and will be released in the next version 
(2.11). I'll also backport it when I upgrade our instance to 2.10 because 
it's reasonably self-contained, and because I also think that it's an 
excellent idea to have this better ordering.


Thanks for suggesting it.

Cheers,
Jan

--
Trojitá, a fast Qt IMAP e-mail client -- http://trojita.flaska.net/


Re: Feature matrix for future infrastructure

2015-01-29 Thread Jan Kundrát

On Thursday, 29 January 2015 12:49:17 CEST, Christoph Feck wrote:
If it even allows to edit a change request from a different person 
online, then I *want that*. I find it much more time consuming and 
demotivating to nitpick small style/whitespace changes, than to simply 
edit them out.


Yes, it works like that.

Cheers,
Jan

--
Trojitá, a fast Qt IMAP e-mail client -- http://trojita.flaska.net/


Re: Another proposal for modernization of our infrastructure

2015-01-29 Thread Jan Kundrát

On Wednesday, 28 January 2015 13:14:14 CEST, Martin Gräßlin wrote:
Navigation through the code is difficult, you cannot see the 
complete change in one go, but have to go through each file. This 
is something I consider as unfortunate as normally I prefer reading the 
changes to the header before the implementation, but due to alphabetic 
ordering we do not have this. Unfortunately when navigating through 
the change to get to the header file, the implementation is marked as if 
you had reviewed it.


Hi Martin, thanks for an excellent idea, sorting headers before actual code 
changes makes a lot of sense. I have a quick'n'dirty patch at [1].


Git has a config option, diff.orderfile, which might solve this reasonably 
well. Do you think that the following sorting order is reasonable as a 
KDE-wide default?


CMake*
cmake*
src/*.h
src/*.cpp
*test*/*.h
*test*/*.cpp
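
Activating such a file is then a single command per clone, or it could even 
ship in a system-wide gitconfig (the path is just an example):

  git config diff.orderfile ~/.config/git/kde-diff-order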

What I also find bad is that you actually need to know the magic keyboard 
shortcuts to use it in a sane way. And the shortcuts are, to be honest, 
strange: ] is a key which is great to use on an English layout but on 
many other layouts it's just a very cumbersome key to use.


Do you have some suggestions on how this could be improved? Upstream is 
quite friendly, and I'll be happy to offer patches which improve the user 
experience.


Cheers,
Jan

[1] https://gerrit-review.googlesource.com/63812

--
Trojitá, a fast Qt IMAP e-mail client -- http://trojita.flaska.net/


Re: Another proposal for modernization of our infrastructure

2015-01-29 Thread Jan Kundrát

On Thursday, 29 January 2015 18:22:35 CEST, Eike Hein wrote:

One thing I'm unclear on: Does the gerrit test instance run
on machines administrated by kde.org these days?


The VM runs at my workplace. The KDE sysadmins have root access, PostgreSQL 
backups are automatically pushed to a KDE server twice a day, and Git is 
replicated to git.kde.org within seconds after each push.


Cheers,
Jan

--
Trojitá, a fast Qt IMAP e-mail client -- http://trojita.flaska.net/


Re: Another proposal for modernization of our infrastructure

2015-01-28 Thread Jan Kundrát

On Wednesday, 28 January 2015 10:08:54 CEST, Ben Cooksley wrote:

1) Most applications integrate extremely poorly with LDAP. They
basically take the details once on first login and don't sync the
details again after that (this is what both Chiliproject and
Reviewboard do). How does Gerrit perform here?


Data are fetched from LDAP as needed. There's a local cache for speedup 
(with configurable TTL and support for explicit flushes).



2) For this trivial scratch repository script, will it offer its own
user interface or will developers be required to pass arguments to it
through some other means? The code you've presented thus far makes it
appear some other means will be required.


I might not fully understand this question -- I thought we already discussed 
this. The simplest method of invocation can be as easy as `ssh 
user@somehost create-personal-project foobar`, with SSH keys verified by 
OpenSSH. This is the same UI as our current setup. There are other options, 
some of them with fancy, web-based UIs.



3) We've used cGit in the past, and found it suffered from performance
problems with our level of scale. Note that just because a solution
scales with the size of repositories does not necessarily mean it
scales with the number of repositories, which is what bites cGit. In
light of this, what do you propose?


An option which is suggested in the document is to use our current quickgit 
setup, i.e. GitPHP. If it works, there's no need to change it, IMHO, and 
sticking with it looks like a safe thing to me. But there are many 
additional choices (including gitiles).



4) Has Gerrit's replication been demonstrated to handle 2000 Git
repositories which consume 30gb of disk space? How is metadata such as
repository descriptions (for repository browsers) replicated?


Yes, Gerrit scales far beyond that. See e.g. the thread at 
https://groups.google.com/forum/#!topic/repo-discuss/5JHwzednYkc for real 
users' feedback about large-scale deployments.



5) If Gerrit or it's hosting system were to develop problems how would
the replication system handle this? From what I see it seems highly
automated and bugs or glitches within Gerrit could rapidly be
inflicted upon the anongit nodes with no apparent safe guards (which
our present custom system has). Bear in mind that failure scenarios
can always occur in the most unexpected ways, and the integrity of our
repositories is of paramount importance.


I agree that one needs proper, offline and off-site backups for critical 
data, and that any online Git replication is not a proper substitute for 
this. The plan for disaster recovery therefore is "restore from backup".


In terms of Gerrit, this means backing up all of the Git repositories and 
dumping the PostgreSQL database, and storing these in a location which 
cannot be wiped out or modified by an attacker who has root on the main Git 
server, or by a software bug in our Git hosting. One cannot get that with 
just Git replication, of course.


What are the safeguard mechanisms that you mentioned? What threats do they 
mitigate? I'm asking because e.g. the need for frequent branch deletion is 
minimized by Gerrit's code review process, which uses branches internally. 
What risks do you expect to see here?



6) Notifications: Does it support running various checks that our
hooks do at the moment for license validity and the like? When these
rules are tripped the author is emailed back on their own commits.


Yes, the proposed setup supports these. The best place for implementing 
them is via CI invocation through the ref-updated hook. My personal 
preference would be a ref-updated event handler in Zuul to ensure proper 
scalability, but there are other options.

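To sketch what I have in mind -- the option names follow the documented 
argument list of Gerrit's ref-updated hook, while check-licenses and 
mail-submitter are hypothetical helper scripts:

 #!/bin/sh
 # installed as $site_path/hooks/ref-updated
 while [ $# -gt 0 ]; do
   case "$1" in
     --project)   project=$2; shift 2 ;;
     --refname)   ref=$2; shift 2 ;;
     --oldrev)    old=$2; shift 2 ;;
     --newrev)    new=$2; shift 2 ;;
     --submitter) submitter=$2; shift 2 ;;
     *) shift ;;
   esac
 done
 # run the checks over the pushed range and mail the author on failure
 check-licenses "$project" "$old..$new" \
   || mail-submitter "$submitter" "$project" "$ref"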


7) Storing information such as tree metadata location within
individual Git repositories is a recipe for delivering a system that
will eventually fail to scale, and will abuse resources. Due to the
time it takes to fork out to Git,


Gerrit uses JGit, a Java implementation of Git. There are no forks.


plus the disk access necessary for
it to retrieve the information in question, I suspect your generation
script will take several load intensive minutes to complete even if it
only covers mainline repositories. This is comparable to the
performance of Chiliproject in terms of generation at the moment.


The yesterday-released Gerrit 2.10 adds a REST API for fetching arbitrary 
data from files stored in Git with aggressive caching. I would like to use 
that for generating that kde_projects.xml file.

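A rough sketch, assuming the metadata lives in a file called metadata.yaml 
inside a hypothetical sysadmin/repo-metadata project; Gerrit serves file 
content base64-encoded over anonymous HTTP:

 curl -s 'https://gerrit.example.org/projects/sysadmin%2Frepo-metadata/branches/master/files/metadata.yaml/content' \
   | base64 -d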


The original generation of our Git hooks invoked Git several times per
commit, which meant the amount of time taken to process 1000 commits
easily reached 10 minutes. I rebuilt them to invoke git only a handful
of times per push - which is what we have now.


Gerrit has a different architecture with no forks and aggressive caching. 
I'm all for benchmarking, though. Do you want a test repository to run your 
benchmarks against?



8) Shifting information 

Re: Feature matrix for future infrastructure

2015-01-28 Thread Jan Kundrát

On Monday, 26 January 2015 18:11:34 CEST, Thomas Lübking wrote:
Eg. I can very well see that somebody concerned w/ i18n would 
like to look up code via cgit (or similar - no flames here, 
please ;-), download a single file, fix a so far untranslated 
string, diff -pru it with the original and simply upload the 
patch w/o even compiling the entire software himself.


It will be even easier -- the upcoming Gerrit 2.11 contains an online 
editor, so the workflow will be "open a file, edit it, push a button to 
create a change request".


The current demo setup would only require to add one of the 
-existing- webfrontends.
I assume that it was simply not done (for the demo setup) 
because committing to :refs/for is just much more efficient for 
regular contributors to a project (like rbtools, but minus the 
extra CLI to learn/remember) than clicking through RB or 
similar.


Yes, it never occurred to me that patch upload is so important for people. 
Now that it has been made clear, the proposal says that a way for uploading 
patches will be preserved. See the PDF attached to the "Another proposal 
for modernization of our infrastructure" thread for details.


Cheers,
Jan

--
Trojitá, a fast Qt IMAP e-mail client -- http://trojita.flaska.net/


Re: Sysadmin report on the modernization of our infrastructure

2015-01-27 Thread Jan Kundrát

On Tuesday, 27 January 2015 09:51:46 CEST, Ben Cooksley wrote:

Jenkins provides rich tracking of tests, code coverage and code
quality (eg: cppcheck) in addition to checking if it builds.
Zuul is designed to determine if it builds and if tests fail -
providing a binary pass/fail response.


This is not true. Please read Zuul's documentation at [1] (or the 
modernization proposal I'll be sending later today) for a short overview.



I'll also note that Jenkins provides scheduling and can bring nodes up
and down as needed (when equipped with access to a cloud/cluster).
For this reason Openstack is still relying on Jenkins in part as Zuul
can't do this.


This is not true.

- OpenStack uses Nodepool for node management (VM teardown & bringing up 
new nodes, and image building), not Jenkins.
- The reason why OpenStack uses Jenkins *with* Zuul rather than just Zuul + 
Turbo-Hipster is purely due to inertia and less work required due to their 
history.
- OpenStack's use of Jenkins is limited to acting as a conduit for 
launching jobs.
- Zuul knows how many resources are online at any time, so Nodepool can be 
(and is) used with it just fine.



Jenkins also permits us to track jobs unrelated to our code reviews,


So does Zuul; the OpenStack project is building release tarballs and has 
nightly QA processes on stable branches, all with Zuul.



Please recall that no change of bug tracker or CI system is being
planned at this time - such a change would be for future discussion.


Fair enough, but then the argument of a "fully integrated solution" should 
not be advertised as Phabricator's advantage, IMHO.


Cheers,
Jan

[1] http://ci.openstack.org/zuul/

--
Trojitá, a fast Qt IMAP e-mail client -- http://trojita.flaska.net/


Re: Feature matrix for future infrastructure

2015-01-24 Thread Jan Kundrát

On Friday, 23 January 2015 15:21:34 CEST, Boudewijn Rempt wrote:
There is no way an artist who has a nice patch for Krita is 
ever going to be able to inducted into becoming a Krita 
developer if they have to follow

instructions like this:

https://techbase.kde.org/Development/Gerrit


Hi Boudewijn,
that page contains instructions for existing KDE developers on how to work 
with KDE's Gerrit effectively. You are right that an artist who has zero 
experience with Git and is writing their first C++ patch is likely going to 
have trouble with following a developer-level documentation. A better 
introductionary documentation is surely needed, and it will be needed 
regardless of what platform we choose.


The people with the lowest level of developer experience that I've been in 
touch with in KDE are probably some of the GCI students. I think it is 
reasonable to assume that they still know a bit more than your artist who 
is about to send a patch to Krita. This might explain why these students 
were able to contribute via Gerrit without much trouble, and why I 
initially questioned your analysis.


And, apart from detail-by-detail comparisons, gerrit would be an exceedingly 
bad choice for a community like KDE, with its enormous diversity of skill 
levels. Gerrit is uninviting and complicated.


Ah, this might be the core of these differing opinions. Do I understand you 
right? In your mind, KDE should offer tools which make it extremely easy 
for inexperienced users to get involved, even if it makes the job of 
maintainers and/or senior developers a bit more complicated, is that right? 
In my mind, KDE should care primarily about people who are doing the work 
now, while providing enough rope for motivated newcomers to be able to 
participate if they are willing to follow instructions and learn new stuff.


Now, both of these approaches certainly have merits. Without bringing in new 
blood, a project will eventually die. Without caring about experienced 
developers' comfort, maintainers will drift away and the death will come, 
too.


Maybe the tools suitable for these two approaches are not necessarily the 
same, then.


Gerrit is just a kind of reviewboard with a git integration, phabricator 
is a whole integrated development platform. 


Agreed, which is why the comparison should be about Phabricator + 
whatever_else_is_needed on one hand, and Gerrit + 
whatever_is_needed_for_that on the other hand. Comparing individual pieces 
without seeing the whole mosaic doesn't make much sense.


With kind regards,
Jan

--
Trojitá, a fast Qt IMAP e-mail client -- http://trojita.flaska.net/


Re: Sysadmin report on the modernization of our infrastructure

2015-01-21 Thread Jan Kundrát
Please read Ben's article again. We do this currently and it's 
not working. This 
is what needs to be replaced. Phabricator seems to support this, or so they 
say, **and** it does what Gerrit does. So why not use that and 
have everything 
integrated? It's not as simple as picking a WWW git browser, it must be 
integrated into the rest of the system. And it must be easy to 
maintain etc. 
pp. Really, just reread the report.


The report mentions problems with the current Git replication. I'm also 
aware of the technical nature of these things beyond what was said in the 
report because I did talk to Nicolas and Ben about them. The current setup 
is slow, pull-based, unreliable, and can only work once an hour on an 
overseas node due to network latencies and systems' throughput. Fair 
enough.


Compared to that, I know about people using Gerrit's replication with a 
world-wide, distributed network of Git mirrors on multiple continents. What 
I'm proposing is simply using what works well for these people and use that 
for scalable repository browsing as well. There is no need to write any 
custom scripts. Just deploy as many mirrors as we might need, and add an 
arbitrary read-only WWW git browser to them, with no extra configuration 
required -- just serve all the repos in a read-only manner.


Gerrit's replication is not stupid, it does understand the concept of 
queues and automated retries, so it actually *works* and handles 
bandwidth-limited or latency-bound nodes fine.

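For reference, one mirror in the replication plugin's etc/replication.config 
could look roughly like this (the host name and timings are invented):

 [remote "anongit-eu"]
   url = gerrit@anongit1.example.org:/srv/git/${name}.git
   mirror = true
   replicationDelay = 15
   replicationRetry = 5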

If one were to use Phabricator, there would still be the same set of problems 
with the git browser having to scale to sustain automated web crawlers and 
whatnot. We will also still need N machines to bring actual content close to 
the users. Nothing changes by using a single application here, really. The 
problem is that replication needs improvements, so let's improve the 
replication. Gerrit can do that. What does Phabricator use for replication, 
and how does it deal with crappy network?


Scripting stuff means it's not configurable for users, which is extremely 
important here. And no, writing a script, maintaining it, and 
adding a config 
UI on top of it if is explicitly **not** what the sysadmins are 
looking for. 
They want to cut down on tools and maintenance burden.


OK, I agree that a perfect way is to add this to Gerrit's existing 
notifications and contribute that patch upstream -- that already includes 
the whole infrastructure, up to and including the pretty UI. Let's do this; 
I'll write that patch and push it upstream once I know that this is not 
going to be a wasted work (see my first mail in this thread).


Note: You removed my marker here, that the below is "nice to have". Please 
keep it in, it's important. I never said that this stuff is of utmost 
importance.


I remove those parts of the mail which I mostly agree with; I find that 
better for readability. Sorry for confusion. As you said, it's important to 
assign a proper importance to various features; we're in total agreement 
here.



It's true that there's no git browser where you could attach notes to a
particular line and open a ticket for that. Do we need such a feature,
and if so, how do we intend to use it?


We currently have this in the form of the kde-commits mailing 
list. It is an 
extremely useful feature of the KDE community. What we get 
with Phabricator 
is that, just so much better.


Fair enough, the problem with what we have right now is that nobody is 
guaranteed to read these and to follow up on them, to paraphrase someone 
(you, maybe?) from the previous iteration of this conversation. Improving 
that is a nice bonus, so it's indeed cool to be able to create 
tickets/issues by a single click when one browses a git repo. It's nice 
that Phabricator can do that. Do we however actually have some people doing 
this project-wide code auditing *and* not sending patches at the same time?


I'm just wondering whether this fancy feature would be used that often, and 
whether it could be better to do this as regular code review instead. And I 
also understand that you've listed it in the nice bonuses section.


Having an external tool like our current kanban board on todo.kde.org means 
it's not integrated with the rest. No easy way to link to 
reviews, branches, 
issues, etc. pp. Phabricator gives us all of this, for free!


If something is bundled, it doesn't mean that it's free. One of the costs 
that you pay is that you cannot switch to a better alternative because 
everything and everybody's workflow assumes that you're using the built-in 
solution.


Well, and I disagree here. Having it all integrated will mean we eventually 
have a GitHub clone which makes it trivial to close issues or 
reference stuff 
from the wiki and have stuff updated there as needed. And remember, I said 
that this stuff is *nice to have*. It's not the important 
reason why we should 
use Phabricator, it's just additional sugar on top which you 
won't have with 

Re: Sysadmin report on the modernization of our infrastructure

2015-01-21 Thread Jan Kundrát

On Wednesday, 21 January 2015 16:28:20 CEST, Thomas Lübking wrote:
- The major concern about gerrit seems about scratch repos and 
I think I found a quite old bug/request on this which might 
cross trivial approaches w/ scripts? [1]

 Otoh, it seems Phabricator doesn't support scratch repos right now either.


The bug you linked to is about something other than scratch repos, as far as 
I can see.


- The lack of non-repo driven (web interface) patch supplies in 
gerrit does not seem to have been addressed in the document, but 
there've been many calls for such. So what's the state on this?


There's a tool for that, https://tools.wmflabs.org/gerrit-patch-uploader/ .


- Phabricator is suggested to have integration w/ a bugtracker
 - Can gerrit be easily integrated w/ bugzilla (or other 
trackers), notably by cross-referencing the review (once! ;-) 
for the BUG keyword? (link and tag that this bug is being 
worked on - tag must be removed in case the patch is abandoned) 
- and of course closing as well.


Yes, see my other mail once it clears the moderation queue.

Cheers,
Jan

--
Trojitá, a fast Qt IMAP e-mail client -- http://trojita.flaska.net/


Re: Sysadmin report on the modernization of our infrastructure

2015-01-21 Thread Jan Kundrát

On Wednesday, 21 January 2015 15:54:52 CEST, Milian Wolff wrote:

6) The discussion focuses on highlighting Phabricator's benefits, which is
understandable from your point of view. However, much of the same can
be said about Gerrit as well, especially its backing by a well-known
player, its adoption by multiple companies and multinational corporations
(Sony Ericsson, SAP, Kitware, ...) as well as by many FLOSS projects (Qt,
Android, LibreOffice, Eclipse, OpenStack, WikiMedia, Typo3, CyanogenMod,
Tizen, and a ton of others). Its scalability has also been verified by
real-world use over many years (for far longer than Phabricator's).


Imo, quite the contrary. It concentrates on the issues the admins have with 
their current setup, and then shows how Phabricator could help with that.


Right, I should have said "the proposal to use Phabricator and the reasons 
for using that particular tool focus on highlighting Phabricator's benefits". 
Sorry for not being clear, I appreciate the value of describing the 
pain points of the legacy setup (and I would appreciate even more details to 
be able to offer a better alternative).



9) I do not know what effort for integration with KDE Identity you're
referring to. It's a simple LDAP server, and the entire integration was
trivial. It's really just a matter of configuring Gerrit to talk to LDAP,
which I expect to be also the case with Phabricator.


I don't see where this is mentioned in regard to Gerrit. I can 
only find LDAP 
being mentioned when talking about the status quo for KDE, which does not 
include Gerrit.


The part which I was commenting on is the following paragraph:

As a result of this it would be necessary to combine several different 
components to produce a complete Git solution. This would require further 
effort to integrate them with both each other and parts of KDE 
infrastructure such as Identity. Even after such effort is completed a 
certain degree of synchronisation between the tools will need to be 
maintained, such as registering repositories in both the code hosting 
tool and Gerrit.


There was no extra work to integrate Gerrit with KDE's Identity, and I 
expect most of the other tools which we might need to use to have LDAP 
support out of the box because that's an industry standard for identity 
management.


KDE Identity == LDAP, in this context.

The paragraph above also assumes that Gerrit would not be used as a primary 
code hosting place. There's no reason for that, so the conclusions raised 
("this will require syncing") do not hold.



4) You have indicated some (pretty important to me) limitations of
Phabricator with a remark that "it's a fast moving software, we might get
there eventually". I think that software should be evaluated based on the
functionality present right now in the development version, and not based on
open feature requests. We've been promising e.g. support for multiple
accounts in Trojita for many years already, and that didn't make it happen.


This belongs to the point below, imo. Or what are you referring 
to? 


When I read the proposal, I see some enthusiasm for Phabricator's swift 
development pace. That's good, but at the same time it isn't an answer to a 
lack of features. Here are some of the relevant bits:


- CI integration
- scratch repos
- clustering for scaleability
- preserving author metadata
- direct git access to patches under review
- patch upload via git

What I'm saying is that an open feature request and a rapid speed of 
development are no guarantee that these will be ready in a month, or even 
in a couple of years.


To my knowledge, here are some things that Gerrit does not provide, but 
Phabricator potentially provides:


Yes, Gerrit doesn't include wiki pages, issue tracker, Kanban planner and 
various calendars. I don't necessarily see that as a drawback. Do we want 
to migrate wikis and Bugzilla? If yes, I can understand that Phabricator 
might be a compelling tool, but so far the proposal was limited to just 
revamping the git hosting and code review.


- a project overview with a code browser, project meta data etc. pp. and a 
list of commits inside a repository. Qt still uses gitorious for this, afaik


Right, we will have to pick one (or multiple) of the WWW git browsers out 
there. If I was designing such a service, I would let it run from a 
dedicated VM and have Gerrit replicate stuff to these servers. Scaling up a 
stateless, read-only service such as cgit/gitweb/... is very easy.


- the ability to get notified about new commits in a project. (this is 
different from new reviews)


Gerrit has hooks for triggering this (ref-updated), and it's easy to send 
e-mails from that context. There are scripts for that in git's contrib/.


@sysadmins, how is that tackled by Phabricator?

Also, if proper code review were enforced, there would be no need for 
this.


- Apparently the anongit problem, but Ben would need to fill in 
more details here.


We are already using a stock 

Re: Sysadmin report on the modernization of our infrastructure

2015-01-21 Thread Jan Kundrát

On Wednesday, 21 January 2015 20:07:21 CEST, Ben Cooksley wrote:

1) A detailed list of the issues which a competing proposal would have to
address. Some glimpse of this is in the last paragraph, but that's just a
very high-level description. Could you please provide a list of
functionality that has to be supported by a replacement, so 
that we can work

out how to get there?


The list in question was drawn from my summary mail which was sent to
the previous thread.
Not everything in that list is a requirement (as no software exists
which can meet all of them).


Ben, it just isn't clear to me what parts of the system are free for 
replacement and which parts are set in stone to remain as-is. Maybe we 
don't know yet; that would be fine as well. However, as I'm reading the 
rest of the text, it seems that some people are looking forward to migrating 
away from Bugzilla, or to using Phabricator for task management as well. 
There are also options for doing away with the current set of git hooks, 
or at least some of them.


3) The idea of rolling out Phabricator is not specified, so 
it's hard to get
a proper understanding of what exactly will have to be done. 
What changes in

workflow are going to be needed? What custom tooling will have to be
written? How is Phabricator going to play with the rest of the
infrastructure? What pieces of the infrastructure will actually remain?


There would be no changes in workflow as such.
The only way to properly test a tool though is to actually put it into
use for a while - and thus find out whether it is able to fit our
workflows.

In terms of custom tooling, we will need to integrate it with Jenkins
and set it up to receive keys from Identity.
Due to the way it works with SSH keys this will be a fairly trivial
process as our existing systems which auto-sync keys for svn.kde.org
and git.kde.org can be almost entirely reused.


Let me ask in a different way, then:

- How are you going to integrate with CI?
- How are you going to plug into Bugzilla?
- How are you going to tie in with Jenkins in order to automatically create 
build jobs/profiles?
- How are you going to send commit e-mails? How are people going to filter 
them or to subscribe to various projects of interest?

- How are IRC notifications going to be supported?

I'm sure there are more; the above is just to show what I'm asking about 
when I speak about integration.



7) It is possible to have scratch repositories with Gerrit, but it's true
that it will require a simple tool which verifies user's request and
executes an API call with proper credentials. We're speaking about one
trivial webapp or a shell script here.


This is a separate application, and would therefore require separate
maintenance, correct?


Here's the script that I'm talking about, in pseudocode:

 if not check_ldap_group($user, "developers"):
   die "you are not a KDE developer"

 if not regexp_match($proj, '^[-a-zA-Z0-9_]+$'):
   die "invalid project name"

 if operation == "delete":
   return `ssh bot@gerrit delete --yes-really-delete --force \
   scratch/$user/$proj`

 if operation != "create":
   die "invalid operation"

 return `ssh bot@gerrit create-project --owner $user \
   --parent sysadmin/gerrit-user-scratch-projects \
   scratch/$user/$proj`

That's 9 lines prior to line-wrapping for mail, including error handling. 
As for maintenance, what's your estimate of the time required to maintain 
this over the next 10 years?


As a nice bonus, Gerrit supports enforcing quotas for the number of per-user 
repositories as well as the consumed disk space; there's a plugin for that.

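A sketch of what such limits might look like in that plugin's quota.config; 
the namespace and numbers are made up, and the exact key names should be 
double-checked against the plugin's documentation:

 [quota "scratch/*"]
   maxProjects = 500
   maxRepoSize = 200 m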


(Plus people have to find it)


I'll be happy to include a nice page in Gerrit's top menu saying "Create 
project" which will explain how to request official repositories as well as 
how to request regular projects from sysadmins :).



How would developers delete no longer used scratch repositories?


In the same manner as creating them, see above. It's also possible to have 
a cronjob querying the date of last change, etc.



Thiago indicated that for sane mail handling one wants the patches Qt
currently has.
Is this not the case?


I don't know what is or is not sane, but [1] is a random example of what we 
have right now. Is that sane enough?


E-mails that I'm receiving from Qt's own Gerrit look the same, but maybe 
I'm missing some crucial difference.



9) I do not know what effort for integration with KDE Identity you're
referring to. It's a simple LDAP server, and the entire integration was
trivial. It's really just a matter of configuring Gerrit to talk to LDAP,
which I expect to be also the case with Phabricator.


The integration in question that I'm referring to is SSH keys, and
ensuring details are automatically updated from LDAP when they change.
If I'm not wrong, your current solution requires keys to be uploaded a
second time.


Yes. This is a matter of one command for each event:

 `cat $key | ssh bot@gerrit set-account $user --add-ssh-key -`

...and an appropriate 

Re: Sysadmin report on the modernization of our infrastructure

2015-01-21 Thread Jan Kundrát

On Wednesday, 21 January 2015 23:57:07 CEST, Ben Cooksley wrote:
Using either 
http://www.guywarner.com/2014/06/part-2-integrating-phabricator-and.html

or http://www.dctrwatson.com/2013/01/jenkins-and-phabricator/ or a
variation thereof.


That is quite some custom code that one has to maintain, though.


Commit emails could either be sent by our existing hooks, or we could
migrate to Herald and customise its template to fit what we need if 
necessary.
People would filter them / subscribe to them through Herald.


How would they subscribe via Herald if it was done via the existing hooks?


Doesn't seem too high, although I don't see how that would be made web
accessible - which might be the hard and costly part maintenance wise.
(You have to deal with security issues too as you are in a separate
web application, so you need to authenticate the developer first).


Well, Apache's mod_authnz_ldap and a "Require group developers" stanza 
make this really easy (a sketch follows below). Just look up $user from an 
appropriate env var 
provided by the web server. Where is the problem?

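A minimal sketch of that Apache setup, with all URLs and DNs invented:

 <Location "/scratch">
   AuthType Basic
   AuthName "KDE Identity"
   AuthBasicProvider ldap
   AuthLDAPURL "ldap://ldap.example.org/ou=people,dc=example,dc=org?uid"
   Require ldap-group cn=developers,ou=groups,dc=example,dc=org
 </Location>

The script behind that location can then simply trust REMOTE_USER.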


Our existing solution is triggered on change events in LDAP and causes
all SSH keys to be re-read and a new ~/.ssh/authorized_keys file to be
written out. You can't rely on OpenLDAP stating the addition/removals
properly when using the syncrepl interface, at least in my experience.
In this way we avoid dependence on the Identity web application.


A quick & dirty approach:

 `ssh bot@gerrit set-account $user --delete-ssh-key ALL`
 `ssh bot@gerrit set-account $user --add-ssh-key - < authorized_keys`

A better, race-free version would also have to invoke `comm`, and only 
add/remove the keys which actually have to be added or removed. That's 
left as an exercise for the reader, it's easy enough (a sketch follows 
below). Or, to avoid relying 
on a local state altogether, just issue a REST call for SSH key retrieval 
and base a decision on that. It's gonna be like 10 lines of custom code.

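A rough sketch of that comm-based variant; the bot credentials and file 
names are illustrative, and `tail -n +2` strips Gerrit's magic JSON prefix 
line from the REST response:

 sort ldap_keys.txt > desired
 curl -s -u bot:secret "https://gerrit.example.org/a/accounts/$user/sshkeys" \
   | tail -n +2 | jq -r '.[].ssh_public_key' | sort > current
 # keys only in "current" get removed, keys only in "desired" get added
 comm -23 current desired | while read -r key; do
   ssh bot@gerrit set-account "$user" --delete-ssh-key "$key"
 done
 comm -13 current desired | while read -r key; do
   printf '%s\n' "$key" | ssh bot@gerrit set-account "$user" --add-ssh-key -
 done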

Cheers,
Jan

--
Trojitá, a fast Qt IMAP e-mail client -- http://trojita.flaska.net/


Re: Sysadmin report on the modernization of our infrastructure

2015-01-21 Thread Jan Kundrát

On Wednesday, 21 January 2015 23:12:33 CEST, Thomas Lübking wrote:

The bug turned up when asking Google about gerrit and scratch repos.
The problem is that pushing into any branch would close a 
review - I can only assume it was linked in the mail thread I 
found because a similar issue would affect clones etc. (ie. 
likely when the change-id gets pushed anywhere)


My uninformed guess would be to handle the change-id smarter, 
ie. bind it to a branch (and pot. repo)


IMHO, a pretty straightforward option is to close bugzilla entries once a 
change is approved and lands in an appropriate branch, and that's easy with 
Gerrit's its-bugzilla plugin. It is possible to specify which branches are 
appropriate, of course.

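A sketch of such a rule in the plugin's etc/its/actions.config; the rule 
name is invented, and the action which actually flips the Bugzilla status 
would need to be checked against the plugin's documentation:

 [rule "comment-on-merge"]
   event-type = change-merged
   action = add-standard-comment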


Thanks - looks actually nice (and supports dumb diffs ;-)
Since I've no mediawiki account: do you know whether it already 
allows ppl. to access (all) their patches w/o modifications 
(ie. like the RB dashboard) as well?


Yes. Their instance doesn't keep a secondary index which limits 
searching, but a proper query would be something like 
owner:y...@example.org OR comment:'uploaded by y...@example.org'. That 
works on our Gerrit.


Jan

--
Trojitá, a fast Qt IMAP e-mail client -- http://trojita.flaska.net/


Re: Sysadmin report on the modernization of our infrastructure

2015-01-21 Thread Jan Kundrát

Hi Ben,
thanks for your proposal. A couple of points which I'm missing from the 
text, and a few further questions as well:


1) A detailed list of the issues which a competing proposal would have to 
address. Some glimpse of this is in the last paragraph, but that's just a 
very high-level description. Could you please provide a list of 
functionality that has to be supported by a replacement, so that we can 
work out how to get there?


2) It is unclear to me whether you plan to use Gitolite in future or not. 
At first, the proposal says that there are inherent scaling issues with it 
and that the replication is beyond its scaling limits, yet at the end of 
the document you say that a replacement has to support Gitolite metadata 
generation. I do not understand that.


3) The idea of rolling out Phabricator is not specified, so it's hard to 
get a proper understanding of what exactly will have to be done. What 
changes in workflow are going to be needed? What custom tooling will have 
to be written? How is Phabricator going to play with the rest of the 
infrastructure? What pieces of the infrastructure will actually remain?


4) You have indicated some (pretty important to me) limitations of 
Phabricator with a remark that "it's a fast moving software, we might get 
there eventually". I think that software should be evaluated based on the 
functionality present right now in the development version, and not based on 
open feature requests. We've been promising e.g. support for multiple 
accounts in Trojita for many years already, and that didn't make it happen.


5) You're indicating a requirement for scratch repos to be present from the 
very beginning, yet you acknowledge that Phabricator won't have it for at 
least six months.


6) The discussion focuses on highlighting Phabricator's benefits, which is 
understandable from your point of view. However, much of the same can 
be said about Gerrit as well, especially its backing by a well-known 
player, its adoption by multiple companies and multinational corporations 
(Sony Ericsson, SAP, Kitware, ...) as well as by many FLOSS projects (Qt, 
Android, LibreOffice, Eclipse, OpenStack, WikiMedia, Typo3, CyanogenMod, 
Tizen, and a ton of others). Its scalability has also been verified by 
real-world use over many years (for far longer than Phabricator's).


Now coming to Gerrit and its analysis:

7) It is possible to have scratch repositories with Gerrit, but it's true 
that it will require a simple tool which verifies user's request and 
executes an API call with proper credentials. We're speaking about one 
trivial webapp or a shell script here.


8) There is no need for any modifications to Gerrit as your text implies. 
What is running and integrated into KDE right now is an unpatched release 
straight from upstream, with no custom plugins.


9) I do not know what effort for integration with KDE Identity you're 
referring to. It's a simple LDAP server, and the entire integration was 
trivial. It's really just a matter of configuring Gerrit to talk to LDAP, 
which I expect to be also the case with Phabricator.


10) Re Phabricator's support for submitting and retrieving changes via pure 
git, I believe you're overly optimistic in this regard. This is a pretty 
fundamental design decision which can only be retroactively fitted with 
a significant amount of pain.


11) While the Gerrit trial has been running for a few months, being used by 
real KDE projects and generating actual feedback and tweaks, there's been no 
trial of Phabricator so far. In my opinion, it is premature to have a plan 
for migration to a particular tool prior to having verified said tool in a 
production environment. In this regard, your proposal effectively discusses 
throwing away the results we got with Gerrit, and fails to provide a 
rationale for that. Indeed, the question is "where is Phabricator better 
than Gerrit?", and I propose to focus on this aspect in future.


12) WikiMedia uses Phabricator for issue tracking and nothing else -- it's 
just a Bugzilla replacement for them. According to publicly available 
information, their reasons for choosing Phabricator had everything to do 
with the PHP as a language known to all of their developers. They still use 
Gerrit for git repo management and code review.


So given that we're about to rebuild the whole Git infrastructure anyway, 
the counter-proposal is to build it around Gerrit this time. Based on what 
I know about KDE's infrastructure, Gerrit can fill in these roles without 
any problem. I would like to work with you towards that goal; are you 
interested?


Finally, I would like to say that I appreciate the work of the sysadmin 
team. In future though, I'd love to see a bit more transparency of the 
entire process. Right now it isn't clear to me whether investing a few 
additional man-months of my time towards working with Gerrit has any merit, 
or whether it's been already decided that the KDE's future is with 
Phabricator. I don't 

Re: Changes to our Git infrastructure

2015-01-06 Thread Jan Kundrát

On Monday, 5 January 2015 20:57:47 CEST, Frank Reininghaus wrote:
Ultimately, a constant stream of 
newcomers is the only thing that keeps a free software project alive 
in the long term.


Yes, as long as these newcomers eventually get enough interest and enough 
skills to become maintainers. I agree with the importance of getting new 
blood in, but there's also a need to educate these contributors so that 
their skills become better over time.


I'm a bit worried that 
any new patch review system which requires more effort before one can 
submit a patch for review might put some potential new contributors 
off.


That's a valid point, so yeah, I agree that we should evaluate our tool(s) 
in light of being usable by newbies as well as by professionals. Thanks for 
saying that.


Cheers,
Jan

--
Trojitá, a fast Qt IMAP e-mail client -- http://trojita.flaska.net/


Re: Changes to our Git infrastructure

2015-01-06 Thread Jan Kundrát

On Tuesday, 6 January 2015 07:40:01 CEST, Ian Wadham wrote:
   a) I do not know anything about Dr K, but I will try and 
find someone who does.
   b) Unfortunately there is nobody available any more who 
knows anything about
Dr K, but I (or another suggested guy) will try to 
help.  How about we take this
offline via email or IRC and then you can walk us 
through the problem you are
trying to fix, its significance and impact and how you 
are going about fixing it…


This has a risk of splitting the discussion about a patch into multiple 
independent streams where people will have hard time keeping track of all 
the results, IMHO.



The polishing (fixing nitpicks, etc.) should come *after* the stone is cut.


That's a good suggestion.


Going straight to that mode is inappropriate because it conveys the message,
"The problem you are trying to fix is unimportant to us."


Would it work for you if there was a bot which pointed out these issues to 
the patch author? That way it would be obvious which part of the review is 
random nitpicking which is indeed not important when one considers the 
general direction of the patch, and in addition it would come from a 
machine so there won't be any bad feelings about people not appreciating 
the contributor's work.


No amount of new technology, neither Gerrit nor the energy of 
cats confined in a
bag, can help.  There are management solutions to technical 
problems, but there
are no technical solutions to management problems, as a 
colleague of mine used to say.


Agreed. So the actual problem is lack of skilled maintainers who just 
aren't around. I agree that tooling cannot fix it -- the tooling can only 
help a bit by making their job easier. If the maintainer is simply not 
here, then you cannot get a proper review, sure.


This is an interesting discussion, and I think that there is no problem for 
it happening in parallel to the current talk about reshaping the git 
infrastructure -- but maybe in another ML thread.



but the problem is that there's completely unmaintained code where
nobody feels qualified to sign off patches.


Exactly.  And there are simple, technology-free solutions to that problem,
if anybody is interested.


What are these solutions?


  a) There is no encouragement for the reviewer to build and TEST the
   patch, independently of the reviewee.


My personal opinion is that people should not be approving patches unless 
they tested them, or unless they have sufficient reason to believe that the 
change is OK (such as a trivial change with an associated unit test 
pre-approved by a CI run).


How can we motivate people to test these changes without discouraging them 
from doing reviews at all?



  c) Patching encourages incremental, evolutionary development,
   rather than standing back and taking a design view of the way
   things are developing.  BTW, ReviewBoard supports post-commit
   reviews of several patches or commits together, but that feature
   is turned off in the KDE usage [1].


So we need another approach (or tool) to help us perform review of the 
architecture of our software. Got some suggestions? Launchpad blueprints? 
Wiki pages?


Cheers,
Jan

--
Trojitá, a fast Qt IMAP e-mail client -- http://trojita.flaska.net/


Re: Changes to our Git infrastructure

2015-01-06 Thread Jan Kundrát

On Monday, 5 January 2015 22:22:19 CEST, Boudewijn Rempt wrote:

Usually, half-way through they ask me, "why doesn't KDE use github?"


I do not understand how stuff would change if we used GitHub, though. There 
would still be that huge gap of not understanding which of the repos to 
use. I think that this is easy to solve with individual apps, but it's a 
hard problem when it comes to a platform, or indeed to an environment where 
the border between the individual pieces is unclear (e.g. which repo 
contains the plasma clock applet in KDE4, and where is it found in 
plasma5)? I do not have an answer on how to make this more obvious or 
beginner friendly.


The contributors still have to know what a DVCS is, and have to understand 
the concept of a commit and what a push means. But all of that applies to 
GitHub as well.


About the only difference that I can see are GitHub's pull requests. I 
understand that the knowledge of how to create a PR there is pretty 
widespread among our contributors. The fact that ReviewBoard doesn't allow 
actual approval (i.e. where it's approved *and* merged into our SCM) 
doesn't help here at all, that's also true.


I would encourage you to read 
https://techbase.kde.org/Development/Gerrit#Getting_Started . Do you think 
that the workflow proposed there is reasonably beginner-friendly?


We can then discuss ways in which it can be simplified. E.g. the need to 
set up an extra remote can go away easily *if* we agree on directing pushes 
to Gerrit once (and if) it's adopted. A website listing all projects can 
easily show a copy-pasteable link on how to clone and get the change-id 
hook in place during the initial clone, etc. We can add some scripting to 
sync SSH keys from LDAP to Gerrit, so that people only have to register 
with Identity and they'll be all set. That all is possible.

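Such a copy-pasteable snippet could look like this; the host and project 
names are placeholders, and fetching the commit-msg hook this way is the 
standard Gerrit convention:

 git clone https://gerrit.example.org/someproject
 scp -p -P 29418 user@gerrit.example.org:hooks/commit-msg \
   someproject/.git/hooks/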

What I'm saying here is that I believe that the feature set supported by 
Gerrit is actually very close to that of GitHub. It's different because 
there's no "fork me" button and the concepts do not map 1:1 to each other, 
but the general ideas are very similar.


Cheers,
Jan

--
Trojitá, a fast Qt IMAP e-mail client -- http://trojita.flaska.net/


Re: Changes to our Git infrastructure

2015-01-06 Thread Jan Kundrát

On Monday, 5 January 2015 14:03:13 CEST, Thomas Friedrichsmeier wrote:

I think there is an easy test for this (well, not a real test, but a
useful initial heuristic): Can you explain exactly how to submit a
patch for your project
- to someone without prior knowledge of the tools involved
- without assuming the required tools/keys/accounts are already set up
- without any further reading
- covering all required steps in sufficient detail
in no more than 2000 words, or (if it's based on a web-wizard) in less
than 20 minutes?


This includes links to other pages which explain how to work with git, but 
I think that it does qualify:


https://techbase.kde.org/Development/Gerrit#Getting_Started

Does it match your requirements?

That page started as an attempt to provide documentation for existing KDE 
developers, so it does go into the depths of how to manually Cc reviewers. 
If we decide to use Gerrit, then I think it would make a lot of sense to 
introduce a single-page "Submitting Patches Quickstart" which would just 
describe the absolute basics on one page, including a git primer.


Cheers,
Jan

--
Trojitá, a fast Qt IMAP e-mail client -- http://trojita.flaska.net/


Re: Changes to our Git infrastructure

2015-01-05 Thread Jan Kundrát

On Sunday, 4 January 2015 19:32:28 CEST, Jeff Mitchell wrote:
I don't follow this line of logic. The end result is software 
stored in git trees, but how it gets there is a totally 
different concern. Whether it comes from patches that are then 
accepted and merged, or direct merging of branches, the end 
result is the same.


- Existing KDE account holders can and do use git for their workflow.
- Using non-git workflow for others introduces a different workflow to the 
mix.

- Having two workflows is more complex than having just a single one.

Does it make it more understandable?

My goal is to help bridge the gap between the existing project 
maintainers (who produce software in git trees) and the new 
contributors (who produce patches).


KDE purposefully has a very low barrier to entry. A contributor 
can normally push anywhere.


When I said "a contributor", I was referring to somebody who is not a KDE 
developer yet. Please re-read what I wrote again because your understanding 
assumed that a contributor is someone who can push to git. I can see why 
you arrived at what you said in that case, but that's not what I said.



.NET is a framework, not a language. Maybe you meant C#. ...


Thanks for educating me, but I don't think it helps move this 
discussion forward in a productive manner.


I do. Because there are a huge number of languages that have 
compilers to produce .NET CLI. Some of them are indeed 
relatively obscure. Saying .NET doesn't mean anything in terms 
of which languages you are taking issue with. Let's be clear 
what we're talking about.


I was quite obviously referring to any tools which make use of the .NET 
runtime environment. I do think that mandating these for efficient work 
with our code review system is a pretty big no-go.


As other people have added to the list of requirements, patch 
management needs to be able to be done via the web UI. Nobody 
has to install any runtime environment.


The requirement I listed was "make the changes available as git refs so 
that I do not need any other tool to work with them". If that is not 
available, then another requirement is "have a nice CLI implemented in a 
language that is common on a KDE desktop".


The fact that I can fetch and upload patches via a website does not satisfy 
this requirement. It's a band-aid, not a fully usable solution.


Those that want to contribute will be required to install a 
whole slew of development packages.


Unless they have e.g. done something with Qt already, or unless they're C++ 
developers already, etc.


Especially if it's only needed if they want to do advanced CLI 
stuff.


Fetching patches is not advanced stuff. It's cool to have a manual bypass 
for fetching stuff by hand, but that is a hotfix, not a productive 
solution.


The sysadmins appear to have a strong preference for a unified 
tool to do both.


No, our preference is for well-integrated tools, which is the 
same preference you previously stated.


I'm happy to hear this, but then I don't know how to interpret Ben's 
earlier responses in this thread and our conversations on IRC. Anyway, it's 
cool that we agree on something :).


To put my money where my mouth is, yes, I am willing to 
maintain the extra systems, and I have been doing this with 
Gerrit (and CI) for a couple of months already.


Yes, because Gerrit is not a tool being provided by the 
sysadmin team and as such is not in scope for us to maintain. 
It's great that you're willing to help out, but your offer to 
help maintain Gerrit has no bearing on whether or not it's what 
we end up proposing to the community.


I'm simply stating that any possible argument saying "we prefer a single 
tool because we don't have the manpower to maintain more of them" is moot 
because that manpower is waving its hand and saying "I'm gonna do this 
right now".


With kind regards,
Jan

--
Trojitá, a fast Qt IMAP e-mail client -- http://trojita.flaska.net/


Re: Changes to our Git infrastructure

2015-01-05 Thread Jan Kundrát

On Sunday, 4 January 2015 13:21:12 CEST, Thomas Friedrichsmeier wrote:

True, but don't forget about the other side of the story:
- potential contributors will have to learn more stuff, before they
  can even _start_ contributing, which may be a real turn-off in some
  cases.


That's a valid concern, so the core question is whether this scenario:

- make changes
- make a commit
- push that commit

is any more complex than:

- make changes
- create a diff
- upload that diff via a browser.

I have to admit I have no idea which one is *really* easier for a total 
newbie, and which one is easier for an average contributor.



Your project's situation may be very different from mine. Personally,
I'm much more worried about keeping the entry barrier as low as
possible, than about reducing the amount of effort that I have to put
into supporting and educating contributors.


That's also a possibility. However, in my experience with Gerrit, GCI 
students (who are kids aged 12-18, IIRC) were able to upload patches 
without substantial problems. That's despite the rudimentary state of our 
KDE-specific documentation, and Gerrit having an "oh no, let's run away from 
that beast" reputation. In short, I think that there's actually a solution 
which can both be a low enough barrier to entry *and* help maintainers do 
their job more easily.


Cheers,
Jan

--
Trojitá, a fast Qt IMAP e-mail client -- http://trojita.flaska.net/


Re: Changes to our Git infrastructure

2015-01-05 Thread Jan Kundrát

On Monday, 5 January 2015 06:05:33 CEST, Ben Cooksley wrote:

Ease of installation and the availability of the necessary
interpreters within mainstream distributions should be more than
sufficient criteria here. Limiting it by any other criteria is playing
pure favouritism to a given set of language(s) and unnecessarily
limits our options.


Ben, you and Jeff appear to disagree with my point that e.g. requiring a 
PHP tool to be installed client-side on each developer's and contributor's 
machine might be a little bit discouraging. It is OK to say that you 
disagree, but that doesn't make the point any less valid. It's fine to 
have people assign varying importance to different evaluation criteria, so 
please do not use your sysadmin hat to unilaterally remove this "pure 
favoritism" just because you do not see any value in it.


My impression was that we're gathering a list of possible requirements and 
*then* we, as a community, are going to assign some importance factor to 
each and every item raised. It is therefore acceptable to have mutually 
exclusive criteria at this point, or even bits which some of us might find 
to be totally irrelevant. They are going to be sorted out by the community's 
consensus, I suppose.


With kind regards,
Jan

--
Trojitá, a fast Qt IMAP e-mail client -- http://trojita.flaska.net/


Re: Changes to our Git infrastructure

2015-01-05 Thread Jan Kundrát

On Monday, 5 January 2015 12:43:06 CEST, Milian Wolff wrote:
Hm, why don't we do a prioritization poll? Quite some items 
raised by others 
are totally unimportant to me, and probably vice versa. While I 
agree that it 
would be nice to make everyone happy, I doubt that's going to 
work out. If we 
concentrate on the important aspects first, the admins would have less work 
and most people are still happy.


+1, this is exactly what I wanted to say.

Cheers,
Jan

--
Trojitá, a fast Qt IMAP e-mail client -- http://trojita.flaska.net/


Re: Changes to our Git infrastructure

2015-01-05 Thread Jan Kundrát

On Monday, 5 January 2015 18:01:12 CEST, Jeff Mitchell wrote:
The problem here is that you believe -- incorrectly -- that a 
single workflow cannot include more than one tool. The reason I 
can definitively say that you are incorrect is because your own 
preferred workflow involves more than one tool, regardless of 
how they interact. And if yours does, you can't complain about 
other workflows that do.


I was complaining about an IMHO artificial split where drive-by people 
submit changes in a different way than core developers. I stated that this 
introduces some needless difference to the way devs and contributors work, 
and that we should check whether there are tools that remove this 
difference. I know that e.g. Gerrit removes that difference, so I am not 
thrilled by the idea of using something without that feature.


It does if my point was (and it was) that a workflow consisting 
of producing a commit in Git and having the review take place 
via a web UI is a very broadly accepted paradigm in software 
development, and one that is often considered to be friendly to 
newcomers.


You're right, and I apologise for not understanding you correctly, we're in 
violent agreement after all.


and I believe you were saying that it's fine for a CR tool to 
work on patches and not git trees.


Correct. Although I recognize the merits of such an approach, I 
do not believe that the only acceptable way for a code review 
tool to work is on git trees instead of via patches. And I do 
not believe that this one feature is enough to outright dismiss 
all other options.


That's another thing where I should have probably worded my responses 
better. The requirements I listed were things which I found valuable for my 
work. I did not mean to say that it's the only possible way of doing 
reviews, or that I found everybody who disagrees with me to be a moron. 
It's just that these features are important for me, so I would like to see 
them and I wanted to make sure they are listed as a requirement in a list 
of points gathered by the community.


Maybe this misunderstanding is caused by sysadmins likely perceiving the 
requirements as hard ones which MUST be provided by any solution, while my 
impression is that we were asked to say what is important for us, and the 
evaluation is to be performed by us all together, not just the sysadmins.


Given your earlier statements I imagine, but this is only 
supposition, that one of the reasons you desire such an approach 
is so that you can have all review actions be performed via SSH 
commands without requiring either a web UI or external tool. 
While this is certainly nice to have, I don't believe that it is 
very usable for newcomers.


I agree that *that* would suck :).

Given the earlier 
distinction you made between contributors and developers, it 
also requires those that want to contribute patches to have full 
KDE developer accounts with commit/push access in order to push 
those diffs up for code review...something not required from a 
web interface requiring only an Identity account.


There is no need for full KDE developer account to upload changes for 
review with Gerrit. All that is needed is a regular Identity account.


Behind the scenes, it works by Gerrit being a special git server which 
intercepts pushes and stores each of these changes in a separate ref. I'll 
be happy to go into more detail in another thread, off-list or on IRC.

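For the record, the conventions involved look roughly like this (the branch 
and change numbers are made up):

 # upload a local commit for review against the master branch
 git push origin HEAD:refs/for/master
 # anyone can later fetch patch set 2 of change 1234 as a plain git ref
 git fetch origin refs/changes/34/1234/2 && git checkout FETCH_HEAD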

Cheers,
Jan

--
Trojitá, a fast Qt IMAP e-mail client -- http://trojita.flaska.net/


Re: Changes to our Git infrastructure

2015-01-05 Thread Jan Kundrát

On Monday, 5 January 2015 16:05:07 CEST, Jeff Mitchell wrote:

- Existing KDE account holders can and do use git for their workflow.
- Using non-git workflow for others introduces a different 
workflow to the mix.

- Having two workflows is more complex than having just a single one.

Does it make it more understandable?


No. What you're saying is having two tools is more complex. 
It's still one workflow.


I feel like you're just language-lawyering here. The workflow I propose 
pushes the burden of producing clean patches to the contributor. The 
workflow you're advocating for appears to center around sending patches 
around, so by definition in your workflow there's a big difference in the 
way 3rd party contributors work as opposed to what KDE developers do. My 
proposal aims at closing this gap.


GitHub is a notable example showing that people don't seem to 
have an issue with a workflow that uses Git + a web-based tool 
to manage their code reviews. I'm not saying we need to end up 
with that, I just don't think it's credible to claim that it's 
too difficult or complex.


That isn't an example that proves your point, though. The GitHub workflow 
actually involves a `git push` to be done by the contributor. That means 
that the GitHub workflow relies on contributors to curate git trees, and as 
such I like that workflow [1] because both core developers and contributors 
produce the same sort of artefacts. It's a different workflow from 
uploading patches via browser or via some client-side tool, though, and I 
believe you were saying that it's fine for a CR tool to work on patches and 
not git trees.


[1] GitHub manages to screw things up when one does a rebase, but that's an 
implementation detail for the purposes of this discussion. Yes, it does 
make the workflow hardly usable for me.


Cheers,
Jan

--
Trojitá, a fast Qt IMAP e-mail client -- http://trojita.flaska.net/


Re: Changes to our Git infrastructure

2015-01-03 Thread Jan Kundrát

On Saturday, 3 January 2015 03:31:26 CEST, Ben Cooksley wrote:

Regrettably there were one or two items which conflicted. I sided with
the option which kept the barrier to entry as low
as possible as that seemed to be the greater consensus within the thread.


Hi Ben,
thanks for compiling a list. However, I was hoping that the result from 
this first phase would include every mentioned feature (perhaps on a wiki 
page?), so that we can then discuss how many people find each of these 
features valuable. I did not include some of the bits which others already 
expressed when I wrote my reply.


Things I miss from the list you just gathered:

- Working on git trees, not patches. This directly translates into making 
the contributors familiar with our workflow, and therefore getting them 
immersed into what we're doing and helping bridge the gap between 
maintainers and contributors.


- Being able to tweak small bits of the review request straight from the 
web application (in the nice-to-have category; this is different from 
"Developers can tweak proposed changes easily before they are landed 
without the involvement of the submitter.").


- Retaining proper authorship (git author) of the original author without 
extra work.


I think it's also reasonable to add:

- Not needing a CLI tool in an obscure language (PHP, Java, .NET,...).

- Not needing a CLI tool or an explicit authorization at all for operations 
such as "download patch".



- Project Management:
  - Coherent and Performant
  - One canonical place to get the desired information (no more duplication)
  - Can either be links to elsewhere or directly hosted by the system
  - Covers items like documentation (wiki, api), bug tracking, task
boards, CI, code browsing and reviews, etc.


I'm confused by this part. This thread is called "Changes to our Git 
infrastructure". I see that code review is very relevant to that because 
some efficient tools do extend Git, but I don't understand why this list 
contains information about wikis, bug tracking and task boards. I do not 
think that we should be looking for a single tool to do everything (and the 
kitchen sink), so I would appreciate a bit more information on what exactly 
your opinion is here, and why so.



  - A weaker case exists for clone repositories - making them more
nice to have than critical.


I believe that people requested a place to store their changes which for 
some reason cannot be easily upstreamed, but at the same time they do not 
want to bother other folks by having a visible branch "in your face" in 
the main repo. If that is indeed the case, we should focus on this 
*concept* and put aside the fact that it's right now implemented as 
GitHub-style repository clones. Other tools might very well support such 
a scenario with something entirely different from clone repos.



- Integrated:
  A single application which handles everything will always be able to
offer a better and more coherent experience than several applications
we have connected to each other.


I do not agree with that. Well-integrated applications which work well 
together while doing one thing well are superior to a single tool which 
tries to do everything.



  There is also a lower maintenance burden from an integrated
application, as it is one piece of software to update - not 3 or 4.


I am volunteering to get my hands dirty, and I believe others have 
expressed their willingness to join the sysadmin team as well. In particular, 
I'll be happy to take care of the Gerrit deployment and help others perform 
day-to-day maintenance of Gerrit. This includes participating in the rest 
of the Git-hosting business, anongit, repo hooks, etc. I'm also interested 
in CI, which is another area in which I can help.


I've worked as a sysadmin for a couple of years and am pretty familiar with 
treating the physical infrastructure of servers and gear as code.



- Scalable:
  The applications which make up our infrastructure need to be able to
handle the demands we place on them without consuming resources
unnecessarily.


That's a good goal, but seems very vague to me. When we move to next 
phases, it is important to spell out what these demands are to prevent 
possible misunderstandings.



As this is quite an extensive list of requirements, we won't be able
to get everyone what they're after. We'll have to do things on a best
effort basis - depending on what is available, what features are
missing, etc. Unfortunately there is no perfect solution here - it
just does not exist.


I can see all of the above fulfilled with Gerrit, but I'm OK with holding 
off on a proper evaluation until we call for one.


With kind regards,
Jan

--
Trojitá, a fast Qt IMAP e-mail client -- http://trojita.flaska.net/


Re: Changes to our Git infrastructure

2015-01-03 Thread Jan Kundrát

On Saturday, 3 January 2015 21:35:12 CEST, Jeff Mitchell wrote:

On 3 Jan 2015, at 14:00, Jan Kundrát wrote:
- Working on git trees, not patches. This directly translates 
into making the contributors familiar with our workflow, and 
therefore getting them immersed into what we're doing and 
helping bridge the gap between maintainers and contributors.


I agree that this is missing from the list of things people 
brought up, but I'd appreciate an explanation as to how this 
directly translates into our workflow. As far as I can tell 
our workflow is what we make it; if contributions from outside a 
core team are done through a patch-based review instead of a git 
tree-based review, then patch-based review is our workflow.


Because what we together produce is software stored in git trees, not 
series of patches.


My goal is to help bridge the gap between the existing project maintainers 
(who produce software in git trees) and the new contributors (who produce 
patches). If we can offload the management of git trees to the 
contributors, then the following happens:


- contributors learn to master the same tools as the maintainers,
- there's one less thing for a maintainer to do on a contributor's behalf,
- maintainers have more time to process more incoming reviews,
- contributors can eventually transition to maintainers more easily because 
they already know the tools.



- Not needing a CLI tool in an obscure language (PHP, Java, .NET,...).


.NET is a framework, not a language. Maybe you meant C#.


Thanks for educating me, but I don't think it helps move this discussion 
forward in a productive manner.



Regardless, I fail to see how any of those are obscure.


I sincerely believe that pushing Yet Another Runtime Environment to our 
contributors is something which reduces the chances of them contributing. 
Would I install e.g. PHP to contribute to Qt? I probably would, because Qt 
is an excellent and high-profile software project. I don't think I would do 
this just to e.g. send a patch to a random KDE app that I use twice a year, 
though, and I also can't imagine people doing this to contribute to 
Trojita.



They're three of the most popular and widespread languages in the world.


The popularity has to be judged with our target audience in mind because we 
still haven't achieved dominance even on Linux, and absolutely not on other 
OSes. I think that most of our contributors don't come from the pool of PHP 
programmers, Java programmers, or those who use the .NET framework or the 
most popular desktop OSes. These languages/frameworks/environments have 
historically been alien to us, and we are talking about tools that need to 
be on each contributor's machine in order to participate in the 
development.


You are free to disagree with my impression that e.g. requiring PHP or 
Mono to be installed on a dev machine presents an extra step for 
contributors to pass, though.


The subject line is unfortunately a bit narrow, but since code 
ties into everything, changes to our hosting necessarily affects 
all of our other systems. Changing our git infrastructure is a 
reasonable time to look at changing other things as well. There 
are a number of capabilities we'd like to provide and a number 
of systems we'd like to be able to consolidate.


It isn't clear to me what exactly is being reconsidered at this point. I 
was told that CI is specifically not in scope, and now I read that wiki 
pages and issue tracking are being considered. I don't understand where the 
line was drawn and why.


My impression is that the goals for a code review tool are very different 
from needs for a project management tool. The sysadmins appear to have a 
strong preference for a unified tool to do both. As a (non-KDE) sysadmin, I 
understand the general reasoning, but at the same time, as a developer I do 
not really care about that, and am strongly against using an inferior tool 
just because it was easier to install and takes less effort to maintain. To 
put my money where my mouth is, yes, I am willing to maintain the extra 
systems, and I have been doing this with Gerrit (and CI) for a couple of 
months already.


Cheers,
Jan

--
Trojitá, a fast Qt IMAP e-mail client -- http://trojita.flaska.net/


Outgoing e-mails from Gerrit review to uninterested people

2015-01-03 Thread Jan Kundrát

On Saturday, 3 January 2015 08:57:43 CEST, Aaron J. Seigo wrote:
It would be nice if there was an opt-out for this. I receive a 
large number of 
emails from gerrit for reviews which I have been automatically 
subscribed to 
which I have absolutely zero interest in.


Hi Aaron,
sorry about that. Do you still have these e-mails? It would be interesting 
for me to see them to learn what triggered your inclusion in there. Either 
way, tweaking the notifications is something which we should discuss, and 
this real-world testing and feedback enables us to arrive at a better 
configuration for everybody.


Our current setup tries to auto-determine people familiar with the code 
by looking at the authors of the lines which are touched by a patch, and 
auto-Cc-ing the top three (IIRC) of these people. This is easy to disable 
or configure on a per-project basis. Unfortunately, there's no support for 
checking whether each developer is OK with getting subsequent information 
about these changes. I have to admit I'm a bit surprised that you do not 
like this feature because I'm usually curious about what happens with code 
I write, but maybe I'm just on too few projects to feel the pain of spam.
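
For the curious, the heuristic amounts to roughly the following (a sketch 
only, not the actual implementation; it approximates "the top three authors 
of the touched lines" by blaming the touched files as of master):

    git diff --name-only master... | while read f; do
        git blame --line-porcelain master -- "$f" 2>/dev/null
    done | sed -n 's/^author //p' | sort | uniq -c | sort -rn | head -n 3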


I see that you got Cced on a review which went through 11 iterations. That, 
combined with quite a few comments, can very well lead to a number of 
e-mails. According to the mail logs, you received 29 e-mails so far. I 
don't have an easy way of distinguishing between those where you were 
explicitly Cc-ed and those where this automation decided to add you. On 
December 16th, I cut this number down a bit by making the CI process only 
send the result to the patch uploader and not to all reviewers.


Oh, and what we are also doing is sending notifications about new patches 
to the project's mailing list, but I assume that you aren't referring to 
these patches.


Suggestions on how to improve this are welcome.

Cheers,
Jan

--
Trojitá, a fast Qt IMAP e-mail client -- http://trojita.flaska.net/


Re: Changes to our Git infrastructure

2014-12-29 Thread Jan Kundrát

On Monday, 29 December 2014 17:03:25 CEST, argonel wrote:

Personal clones are for forks. If you can't get a patch set accepted by
upstream, its equally unlikely that upstream are going to let you put a
private branch in their repo for sharing that patch set.


This is a social issue, then. What you describe makes sense -- if a patch 
is extremely dirty, for example, I can imagine a project maintainer not 
willing to carry it as a visible branch in their repo.


However, the personal prefixes appear to solve this problem semi-neatly. A 
perfect solution would be refs that aren't getting included in clones (and 
guess what, there's one review system which works exactly like that).
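
For illustration, Gerrit keeps every uploaded change under the 
refs/changes/ namespace, which a plain `git clone` never fetches, yet 
anyone can pull a specific change explicitly (the change and patch set 
numbers below are just an example):

    # refs/changes/<last two digits>/<change number>/<patch set>
    git fetch origin refs/changes/67/167/2
    git checkout FETCH_HEAD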



I'm sure I'm not
the only one carrying patches that are arguably sharable but not
upstreamable.


Got an example so that I know what you're describing?


I've also used clones to share an experiment that may not belong in the
proper repo now or ever. Making everyone who uses the main repo pay to
carry an experimental branch is somewhat unfriendly, especially if you're
not normally involved with the project. You may also wish to avoid the
scrutiny of the others involved in the main project until you're ready,
which the sudden appearance of a new branch during checkout would certainly
invite.


That's a valid concern. On the other hand, it's a pretty simple way of 
enforcing collaboration. There were keynotes during the last Akademy 
where people mentioned their worries about development moving into 
isolated silos. Disabling clones leads directly to a sort of enforced 
collaboration (or, failing that, to people pushing stuff to GitHub).



As I see it, scratch repos are the first stage in a project's life cycle.
Before playground, you might fiddle with something, drop it in a scratch
repo and share the link on IRC. Deleting it is painless when you discover
that your idea is terrible, or already exists elsewhere.


I agree with scratch repos being useful as a first step.


There are probably still quite a few people away for the holiday season,
perhaps this decision can be deferred for a couple of weeks until its more
likely that everyone is back and paying attention?


+1, and sending a mail to kde-cvs-announce to make sure all KDE account 
holders are aware of both this and the other thread is a good idea.


With kind regards,
Jan

--
Trojitá, a fast Qt IMAP e-mail client -- http://trojita.flaska.net/


Re: Changes to branch management

2014-12-29 Thread Jan Kundrát

On Monday, 29 December 2014 09:50:06 CEST, Ben Cooksley wrote:

Unfortunately allowing force pushes is an extremely messy business
with the hooks - so we're unable to do this (for maintenance reasons
among others).


Could you please elaborate on this one?

The only reason I remember ever hearing was that it will send all 
notification e-mails again when you force-push. Why would that be a 
problem? Would disabling e.g. the CCMAIL, BUG and CCBUG keywords (or all of 
the notification hooks) in all branches that support force pushes be a 
reasonable fix?


With kind regards,
Jan

--
Trojitá, a fast Qt IMAP e-mail client -- http://trojita.flaska.net/


Re: Changes to our Git infrastructure

2014-12-29 Thread Jan Kundrát

On Monday, 29 December 2014 20:41:03 CEST, Jeff Mitchell wrote:
(The current scratch area itself is already entirely 
custom-coded on top of Gitolite, and that means it must be 
maintained.)


Can we take a look at these custom patches? I'm asking because I see this 
exact feature described at upstream's manual at [1]. What are the 
KDE-specific additions?


With kind regards,
Jan

[1] http://gitolite.com/gitolite/wild.html#deleting-a-wild-repo

--
Trojitá, a fast Qt IMAP e-mail client -- http://trojita.flaska.net/


Re: Changes to our Git infrastructure

2014-12-29 Thread Jan Kundrát

On Monday, 29 December 2014 19:44:21 CEST, Jeremy Whiting wrote:

2. The students typically change their commits quite often after review
(sometimes many times to finally get it right) and force pushing isn't
permitted, but is on clones.

I guess 2 could be solved with more commits rather than changing the
commits though.


In my personal opinion, the moment we adjust our workflow from a good one 
(rebase locally, force push) to a worse one (with commits such as "fix 
missing semicolon") just to work around an ACL setup of a git hosting 
service, we're doing things very wrong.


With kind regards,
Jan

--
Trojitá, a fast Qt IMAP e-mail client -- http://trojita.flaska.net/


Re: Changes to our Git infrastructure

2014-12-29 Thread Jan Kundrát
We agreed on IRC that these patches are used for personal clones. The 
support for scratch space, i.e. self-service repo creation, is implemented 
by upstream Gitolite, and no custom patches for that are in production now.


With kind regards,
Jan

--
Trojitá, a fast Qt IMAP e-mail client -- http://trojita.flaska.net/


Re: Changes to our Git infrastructure

2014-12-29 Thread Jan Kundrát

On Monday, 29 December 2014 23:05:48 CEST, Jeff Mitchell wrote:

...what does that have to do with anything?


It means that there is no problem with having scratch repos (self service 
repo creation) with Gitolite.


I find that relevant because you mentioned that the current scratch area 
itself is already entirely custom-coded on top of Gitolite, and that means 
it must be maintained.


You meant personal clones. Confusion fixed, problem solved, let's move on.

With kind regards,
Jan

--
Trojitá, a fast Qt IMAP e-mail client -- http://trojita.flaska.net/


Re: Changes to our Git infrastructure

2014-12-27 Thread Jan Kundrát
Hi, here's my possibly incomplete wishlist of how I would like to work on 
SW within KDE.


- The tools should recognize that we have a limited number of people 
familiar with the code, while the pool of newcomers tends to be bigger. 
This means that we should teach these newcomers how to eventually become 
project maintainers. Without swamping them with unintuitive trivia, of 
course.


- I want tight CI integration. If there's a missing semicolon in that new 
C++ code, I don't want to find it myself. A tool should tell the submitter 
about that as fast as possible, and show them what the error is.


- Stuff that breaks tests should not be allowed to even hit the target 
branch. Yes, this means that I would like to abolish the rule which says 
that any KDE developer (myself included) can commit straight to the tree of 
any project on the day-to-day basis. There should be an opt-out for 
projects to enforce CI coverage. I'm OK with self-approvals (we're too 
short on manpower), but I still want CI checks. I've introduced too many 
breakages already.


- Emergencies will happen, so I want all KDE devs to be able to push an 
automated button to bypass the CI's verdict. (And realistically, any 
stricter ACL would never get approved.)


- Machine time is cheaper than people time. If some automation can save our 
time, we should use it IMHO. Don't know whom to Cc on a review request? Let 
the tool suggest people who have touched this area of a file recently. Don't 
know project maintainers? No problem, they get an e-mail by default, with 
no user action.


- The tools should take care of all aspects of a particular change hitting 
the git tree. I'd like to work on commits and git trees because that's 
what I'll end up managing. A patch review is not enough because it 
delegates some of the responsibilities to project maintainers. I want 
people submitting a git commit, not a patch. (A git commit consists of the 
diff, the commit message, the parent(s) and additional metadata.) I would 
like to be able to review all of it.


- I want to be able to fix contributor's mistakes reasonably easy. If I can 
fix trivial typos, add BUG: keywords or rebase stuff right from the code 
review system webpage, that sounds good.


- While a change initially got started by someone, others should be able to 
help and fix anything about that change, including replacing the patch. A 
change doesn't belong to its author because we are a collaborative 
project.


- The tool(s) we use should have reasonable APIs for building other tools 
around them.
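
Gerrit's REST API is one example of what I have in mind: anonymous, 
scriptable queries over plain HTTPS (the query below is just an 
illustration; the response is JSON prefixed with a )]}' guard line which 
has to be stripped before parsing):

    curl 'https://gerrit.vesnicky.cesnet.cz/r/changes/?q=status:open+project:trojita'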


- The review system should be self-contained. I do not want to ever ask 
people "this is a nice patch, where can I pull from?".


I also agree with essentially all points which were raised by other people 
who commented on this thread, with about a single exception -- "Upload 
patches via git diff + web" is not relevant for me, and I'm afraid it 
conflicts with my wishlist item of patch creators preparing git history 
trees for me.


Hope this helps,
Jan

--
Trojitá, a fast Qt IMAP e-mail client -- http://trojita.flaska.net/


Re: Changes to our Git infrastructure

2014-12-27 Thread Jan Kundrát

On Tuesday, 23 December 2014 13:21:37 CEST, Milian Wolff wrote:
Furthermore, I'd like to use the same review mechanism for 
post-review. When a 
patch is triggering problems, I'd like to start a discussion in 
the context of 
the commit that was merged. Again, I want to annotate source 
lines. So rather 
than sending mails to kde-commits which are then lost, I want to have that 
tracked on a website for others to see.


Hi, would you also like this action to take some effect on the patch, such 
as a revert? I'm a bit worried that there won't be much difference between 
comments lost on the kde-commits ML and comments lost on $website. That 
said, I like the benefit of being sure that the comment at least reaches 
the people who authored and reviewed the troublesome change.


With kind regards,
Jan

--
Trojitá, a fast Qt IMAP e-mail client -- http://trojita.flaska.net/


Re: Changes to our Git infrastructure

2014-12-25 Thread Jan Kundrát

On Thursday, 25 December 2014 09:20:53 CEST, Ben Cooksley wrote:

No comments on scratch


Scratch repositories ("I can do whatever here, it's simply mine") are good, 
but their actual utility is limited on the current setup. If it takes 
minutes/half-an-hour for a new repo creation to propagate, I will just use 
GitHub because it gives me my repo right now.


Are the commit-sanity hooks ("audit" hooks in repo-management.git) active 
for scratch? If yes, they should IMHO be disabled, because when I'm 
importing random, 3rd-party stuff in there, having to file a ticket or even 
making that commit myself in repo-management.git is too inconvenient for me.


Which might be fine, maybe we don't want people to use KDE's git for these 
purposes. What's the purpose of scratch repos?



or clone repositories?


These are IMHO useless. They got popular by GitHub because their workflow 
is built around pull requests and personal clones. My opinion is that this 
should be done by branches in a code review system, or at least by working 
in a branch(es) of the single repo. The target audience here are KDE 
developers, not everybody who can click a Fork Me! button.



Or the movement of repositories?


Repo movement takes time of all involved people, and we're short on 
manpower. There could be good reasons for a move every now and then, but at 
the same time doing these moves routinely is something that I would like 
not to see.


In particular, I like the current repo structure by simply being kio and 
trojita instead of kde/frameworks/kio and kde/extragear/pim/trojita. 
I will be OK with moving all KDE projects under a common prefix, e.g. 
kde/kio and kde/trojita, making sure that everything sysadmin is 
sysadmin/something and perhaps even scratch stuff starting at 
scratch/foo. That can really help set up proper ACLs with various tools 
(my favorite one wouldn't care, though).


I don't think that encoding the KDE module structure, such as 
frameworks/foo or extragear/graphics/bar, would provide any value, and 
in fact, I would like to see the current mess of having essentially a flat 
list of git repos *and* a tree of them with different names being 
abolished. Why do we need that tree when it's not a real tree?



Or how anongit functions (what you find works least well, etc)?


See above for issues with propagation delay. 30s is IMHO acceptable, half 
an hour not so much. Oh, and the same applies to regular pushes (and 
especially to force pushes). If I need a shared repo, one of the use cases 
is that I'm using it to sync my work between two machines with no direct 
network path, and when I'm doing that, I'll surely be using force pushes 
because it might be my only way of testing. In a scratch repo, or in a 
personal branch of a shared repo.


Cheers,
Jan

--
Trojitá, a fast Qt IMAP e-mail client -- http://trojita.flaska.net/


Re: Changes to our Git infrastructure

2014-12-25 Thread Jan Kundrát

On Thursday, 25 December 2014 11:06:05 CEST, Ben Cooksley wrote:

Not sure why random / 3rd party stuff would be imported - regardless
of whether it is a scratch repository or otherwise.
Distributions tend to frown upon bundling...


I've had a need for this twice.

The first instance was trying to fix breakage in Ragel, a 3rd party SW 
which generates parsers. It contained a logic error related to cross 
compilation. I wanted a place to host my modifications, and because I ran 
into that problem during my work on KDE, I wanted to use KDE's infra for 
that. I couldn't, because I wasn't able to import upstream's content into 
my personal scratch repo.


The second case was a tool I wanted to use during my work on CI. It's not a 
SW library to be used by program we develop, but rather an existing tool 
which I wanted to extend with some additional level of functionality. 
Again, I wasn't able to import this due to the checking hooks, so I ended 
up using another hosting service.



Which might be fine, maybe we don't want people to use KDE's git for these
purposes. What's the purpose of scratch repos?


For new development projects.


That makes sense. If the scratch repos are not for hosting random crap I 
need for my KDE work, but rather new stuff I create for KDE, then having 
these hooks operating in this strict mode is OK. In that case the message 
about the purpose of the scratch space should probably be made a bit 
stronger, because I wasn't aware of this limitation despite being with KDE 
since 2007.


I'm not saying that limiting the intended use cases like this is 
necessarily bad. There's always Gitorious, GitHub, repo.or.cz and other 
similar services. If we don't want to be in the business of providing KDE 
developers a universal Git hosting, we don't have to be. There's a 
downside, though, that people who already have to use something non-KDE 
might want to default to using these permissive, 3rd-party hosting services 
even for projects that should be on git.k.o, simply because the other 
platform has worked well all the time before.



This question was asked because some of the possible solutions out
there behave like Gitorious/Github and require a minimum level of
structure. If this was unacceptable then we'd need to exclude them
from the candidate shortlist.


Ah, OK. I think it would be a bit short-sighted to eliminate a potentially 
more useful tool only on the basis of it requiring a one-time config 
change. (My personal favorite doesn't require such a change.)



We've never had any problems with immediate propagation once a
repository exists except when a mirror is having some trouble.
At least not to my knowledge anyway.


I've seen repeated problems with propagation delay of *force* pushes to 
g...@git.kde.org:clones/websites/build-kde-org/jkt/th-build.git . I think I 
mentioned them on IRC already.


Cheers,
Jan

--
Trojitá, a fast Qt IMAP e-mail client -- http://trojita.flaska.net/


Re: Changes to branch management

2014-12-25 Thread Jan Kundrát

On Thursday, 25 December 2014 08:21:05 CEST, Ben Cooksley wrote:

In essence, yes - those are the two possible options we have.
Force pushing will *still* be prohibited under this proposal as it
stands (and would be a CoC violation if done).


Hi Ben,
this is a very strong statement. I believe that you have a good reason 
for making it, but I do not understand what that reason is. I think that 
one of the reasons you strongly dislike force pushes is the limitations of 
the current hook setup. That's a relevant technical point, but IMHO it 
isn't something which would qualify a force push as a CoC violation. The 
CoC is a generic document which doesn't even talk about the concept of SCM. 
Maybe it's my knee-jerk reaction when people call something they consider 
evil a CoC violation, but I just about totally disagree when I read such a 
statement. I would hate to see this subthread getting derailed into 
language lawyering over what's in the CoC and what isn't there, so I'll 
stop here by saying that I don't agree with that particular conclusion.


The reason why I think that a force push sometimes makes sense is my 
experience with Trojita. There's a couple of long-forgotten WIP branches 
which only differ by some of them being already squashed into a more 
manageable form or rebased on a more recent master. The current state leads 
to branches like foo, foo-2 etc. (I think we're at foo-5 with Trojita 
now). What alternative to force pushes would you recommend? Should we stick 
with the foo-number scheme? Why is that good?


With kind regards,
Jan

--
Trojitá, a fast Qt IMAP e-mail client -- http://trojita.flaska.net/


Re: Changes to branch management

2014-12-24 Thread Jan Kundrát

On Wednesday, 24 December 2014 01:57:15 CEST, Ben Cooksley wrote:

Unfortunately i'm not sure if Gitolite's ACL mechanisms let us
differentiate between tags and branches so if we allow anyone to
delete branches they'll probably be able to do the same for tags.


Are the generated config files or the scripts for generating gitolite's 
config files available somewhere? The Gitolite setup I'm familiar with uses 
explicit qualifiers such as refs/heads/ and refs/tags/ when setting up 
ACLs. Do you use something different on git.k.o? If not, then managing tags 
separately from branches should come for free.


The way I see it, there are two reasonable alternatives with the current 
setup:


1) Everybody can create, delete and force-push to all branches except the 
reserved ones (kde/*, master, stable,... see the list).


2) People are free to create, delete and force-push to all branches below 
my/$username/ (in my case, that would be my/jkt/foo for example). Only repo 
owners can create, delete and force-push to arbitrary branch names.


Deciding which of these to use should be just a matter of style. Both seem 
very sensible to me, and they will definitely present an improvement over 
the current status where people can create, but not clean up afterwards.
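
To make option 2 concrete, a sketch in Gitolite's config syntax (the group 
name is made up; Gitolite expands the special word USER to the name of 
whoever is pushing):

    repo trojita
        RW+   refs/heads/my/USER/   =   @all
        RW+   refs/heads/           =   @trojita-owners
        RW    refs/tags/            =   @trojita-owners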


With kind regards,
Jan

--
Trojitá, a fast Qt IMAP e-mail client -- http://trojita.flaska.net/


Re: Problems with infrastructure

2014-12-21 Thread Jan Kundrát

On Friday, 19 December 2014 22:16:36 CEST, Scarlett Clark wrote:
Jenkins is compatible 
and works with Gerrit, so I don't understand why another CI is being 
considered.


Because when I started this effort this spring, build.kde.org appeared to 
be on life support. I also wanted to expand the number of tools I 
understand, and make sure that I can evaluate various CI tools without the 
baggage of having to stick with a particular CI solution just because 
"we've always done it that way". That's why I started looking at various 
tools, and the whole stack which the OpenStack infrastructure team have 
built [1] looked extremely compelling (note: they still use Jenkins).


The killer feature for me was their support for testing everything, where 
each and every commit that is going to land in a repository is checked to 
make sure it doesn't introduce any regressions. Not on a per-push basis, 
but on a per-commit basis. This is something which has bitten me in the 
past: I occasionally introduced commit series which contained breakage in 
the middle, only to be corrected in a subsequent commit. That was bad 
because it breaks `git bisect` when one starts looking for errors 
discovered later, by unrelated testing.
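
A local approximation of that per-commit checking, for anyone who wants to 
try it (a sketch; substitute your project's own build and test commands):

    # replay the series on top of origin/master and run the build & test
    # cycle after each commit; the rebase stops at the first failing one,
    # which is exactly what keeps `git bisect` usable later
    git rebase -x 'make && ctest --output-on-failure' origin/master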


At the same time, doing per-commit tests means that these tests have to run 
in parallel, and that there must be $something which controls this 
execution. This is well-described at [2], so I would encourage you to read 
this so that you understand what the extra functionality is. To the best of 
my knowledge, Jenkins cannot do that. Based on your level of experience 
with Jenkins, do you think it's possible with it alone? Would that also be 
possible in a cross-repository manner?


Now that we've established a necessity to use something extra to control 
this logic of job execution, we still have some use for Jenkins of course. 
Something has to serve as an RPC tool for actually scheduling the build 
jobs on various servers/VMs/workers/chroots/dockers/whatever. The question 
is whether it still makes sense to use Jenkins at that point, weighing the 
nice features -- such as being able to track the history of the number of 
failing tests, or having a pretty dashboard pointing to faulty commits -- 
on one hand against having to create a ton of XML files with build job 
definitions on the other. Does Jenkins still provide a net positive 
gain?


The system which I was building had no need for drawing graphs of compiler 
warnings per project throughout past two months. What I wanted to have was 
an efficient system which will report back to the rest of the CI 
infrastructure the result of a build of a proposed change to help keep a 
project's quality up to a defined level. The only use of Jenkins in that 
system would be for remotely triggering build jobs, and somehow pushing the 
results back. I do not think that going through all of the Jenkins 
complexity is worth the effort in that particular use case.


BTW, the way in which KDE uses Jenkins right now does not really make use 
of many Jenkins functions. The script which prepares the git tree is 
custom. The management of build artifacts is reimplemented by hand as well. 
In fact, most of the complexity is within the Python scripts which control 
the installation of dependencies, mapping of config options into cmake 
arguments etc. These are totally Jenkins-agnostic, and would work just as 
well if run by hand. That's why I'm using them in the CI setup I deployed. 
Thanks for keeping these scripts alive.


So in the end, I had a choice of either using Jenkins only to act as a dumb 
transport of commands like "build KIO's master branch" and responses such 
as "build is OK", or bypassing Jenkins entirely and using $other_system. If 
the configuration of the $other_system is easier than Jenkins', then it 
seemed to be a good idea to try it out. Because I was already using Zuul 
(see the requirement for doing trunk gating and speculative execution of 
the dependent jobs as described in [2]), and Zuul uses Gearman for its 
RPC/IPC/messagebus needs, something which just plugs into Gearman made a 
lot of sense. And it turned out that there is such a project, the 
Turbo-Hipster thing. I gave it a try. I had to fix a couple of bugs and add 
some features (SCPing the build logs, for one), and I'm quite happy with 
the result.


But please keep in mind that this is just about how to launch the build 
jobs. TH's involvement with the way how an actual build is performed is 
limited to a trivial shell script [3]. And by the way, the OpenStack's CI 
which inspired me to do this is still using Jenkins for most of its build 
execution needs.


Anyway, as I'm following these discussions, I think you don't really have 
many reasons to start being afraid that even that part of your work which 
is 100% Jenkins-specific would come to no use. There appears to be a huge 
inertia for sticking with whatever we're using right now Just Because™, 
even if all 

Re: [Kde-pim] Problems with infrastructure

2014-12-18 Thread Jan Kundrát

On Thursday, 18 December 2014 14:52:12 CEST, Sebastian Kügler wrote:
Of course it would be prudent to give KDE's sysadmin's access 
at some point, but it's not required per se.


Hi, that's always been the case; all sysadmins have root access, and they 
also have the admin role within Gerrit.


Cheers,
Jan

--
Trojitá, a fast Qt IMAP e-mail client -- http://trojita.flaska.net/


Re: [Kde-pim] Problems with infrastructure

2014-12-16 Thread Jan Kundrát

On Monday, 15 December 2014 22:25:37 CEST, Kevin Kofler wrote:
That creates the situation that we 
either all switch and have uniformity or we don't and then we end up with 
reviewboard+gerrit (Albert Astals Cid), which to me sounds a lot like 
blackmail (of course not by Albert, he's just the messenger).


I do not see anything wrong with using two different tools if the community 
cannot agree on using a single one. Yup, if there was consensus, it would 
be great to do it in a unified way, but that consensus apparently isn't 
here now.


I fail to see how is this any attempt at blackmailing.

Cheers,
Jan

--
Trojitá, a fast Qt IMAP e-mail client -- http://trojita.flaska.net/


Re: [Kde-pim] Problems with infrastructure

2014-12-16 Thread Jan Kundrát

Hi Ben,


It isn't just
the tool itself which has to be maintained: we have commit hooks,
integration with other bits of infrastructure and so forth which also
needs to both be implemented and maintained.


In case of Gerrit, there is no need for custom hooks as they stay on 
git.kde.org, and therefore I believe this point is not relevant to its 
adoption. The whole setup has been designed and implemented in a way 
suitable for long-term parallel operation alongside KDE's git.


As for the integration bits, they're done now. The tool just talks to LDAP, 
and maintenance of that connection is effectively zero, unless a dramatic 
revamp of our LDAP is planned. The repo mirroring was a matter of setting 
up a single user account and configuring proper ACLs, and these are also 
finished already.


I can understand the general reasons for limiting the number of services 
which we offer and support. However, I would appreciate it if we mentioned 
how big these costs are, as there's some room for misinterpretation 
otherwise.



The more custom work we have, the harder it is to upgrade things.


While true in general, I fail to see how it is relevant to Gerrit. What 
custom bits are involved here?



We'll confuse newcomers if
projects A, B and C are reviewed on tool X while projects D, E and F
are reviewed on tool Y.


I haven't received a single complaint from the Trojita GCI students about 
any difficulty with this. They did struggle with making an actual change to 
the code, with proper coding style, with commit message formatting, with 
git in general; they even failed to understand the written text about CCBUG 
and BUG keywords in our wiki, but nope, I haven't seen them struggle with a 
need to use Gerrit or its differences from RB. YMMV, of course.


Because the majority of complaints actually came from people who are 
well-versed with ReviewBoard, my best guess is that there's muscle memory 
at play here. This is supported by an anecdote -- when I was demoing the RB 
interface to a colleague who maintains Gerrit at $differentOrg, we both 
struggled with finding buttons for managing a list of issues within RB. 
It's been some time since I worked with RB, and it showed.


I remember having a hard time grokking the relation between a review and 
attaching/updating a file on RB. I didn't read the docs, and it showed.



A single tool would be best here. Let me make
clear that it is not a case of Reviewboard vs. Gerrit here - as other
options need to be evaluated too.


I understand that people would like to avoid migrating to Gerrit if a 
migration to a $better-tool was coming. Each migration hurts, and it makes 
a lot of sense to reduce the number of hops.


However, what I'm slightly worried about is postponing Gerrit indefinitely 
until all future replacements are evaluated. I don't see people putting 
significant time into any alternative for code review right now. Do we have 
any chances of these people making themselves known in close future? How 
long would be a reasonable waiting time for a testing deployment of 
alternate tools? When are we going to reduce our candidates just to the 
contenders which have been deployed and tested by some projects?



In regards to the difficulty of Gerrit - I tend to agree with this
argument (it took me at least a minute to find the comment button, and
I didn't even get to reviewing a diff).


The documentation, however, explains the functionality in a pretty clean 
manner, see 
https://gerrit.vesnicky.cesnet.cz/r/Documentation/user-review-ui.html .


We also aren't the first project trying to work with Gerrit, so there's 
plenty of tooling available right now, not waiting to be written. There's a 
text-mode interface (the gertty project), there's integration in Qt 
Creator, there are pure-CLI tools for creating reviews, other web UIs are 
in development, and there are even Android clients.



Plus there are major concerns
with integration into our infrastructure, such as pulling SSH keys
from LDAP for instance (no, you can't have the tool maintain them
itself - as mainline Git and Subversion need the keys too).


Yes, SSH-keys-in-LDAP is a PITA, but given that one needs a patched OpenSSH 
to look up keys from LDAP anyway, I don't think this is a blocker issue. 
The situation is exactly the same with the Gitolite setup which currently 
runs on git.k.o though, as that doesn't talk to LDAP either. As you 
mentioned during our IRC chat, there's a Python daemon which polls for 
changes in LDAP, and propagates these into Gitolite's config backend in a 
non-public git repo. Why wouldn't this be enough for Gerrit, then?


Gerrit has both an SSH-authenticated API and a REST HTTPS API for adding 
and removing SSH keys by an admin account. *If* this is needed, I'll be 
happy to make it work, it's simply a matter of calling two trivial scripts. 
Would you see any problems with hooking into the identity webapp or its 
backend, if there's any, for this? An edge trigger would be cool.
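
For the REST variant, the call would look roughly like this (a sketch; the 
credentials and the key file are placeholders, and /a/ marks Gerrit's 
authenticated endpoints):

    curl --user admin:secret -H 'Content-Type: text/plain' \
         --data-binary @id_rsa.pub \
         'https://gerrit.vesnicky.cesnet.cz/r/a/accounts/jkt/sshkeys'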


Are there any 

Re: [Kde-pim] Problems with infrastructure

2014-12-15 Thread Jan Kundrát

On Monday, 15 December 2014 10:46:03 CEST, Lydia Pintscher wrote:

Yeah. Wikimedia just switched to it for bug tracking. More will follow.


My understanding of the reason behind this switch is that they are PHP 
programmers, so they prefer to work with software written in PHP.


Made my life as product manager there a lot easier already.


I do see a plan to migrate from Gerrit to Phabricator on your wiki now, 
but the same page also says "We need help learning about the possibilities 
of Phabricator in this area: what is missing, what exists in a different 
way, what is remarkably interesting, which are the blockers that should be 
reported upstream?". Considering that this is different from what I heard a 
month ago, and given that there's AFAIK no code review in your deployed UI 
now, I wonder what the plan is.


With kind regards,
Jan

--
Trojitá, a fast Qt IMAP e-mail client -- http://trojita.flaska.net/


Re: Fwd: Re: [Kde-pim] Problems with infrastructure

2014-12-15 Thread Jan Kundrát

On Monday, 15 December 2014 07:34:24 CEST, Luca Beltrame wrote:

- Apache Allura
https://allura.apache.org/


That is said to support pull requests, but I wasn't able to find an example 
of that on their website. Got one?


Also, loading a list of commits took tens of seconds at the time I tried it 
:(.



- Rhodecode
http://rhodecode.com
This is the one we chose, and it is indeed very powerful; the only catch is 
that the company behind it closed the source, but a fully open source fork 
has been made:
https://kallithea-scm.org/


The documentation again says that it supports pull requests, and I was able 
to find two of them in total in their repos. I don't think that it's a 
particularly well-tested feature, then. One was from June, the other from 
October 2014.


Cheers,
Jan

--
Trojitá, a fast Qt IMAP e-mail client -- http://trojita.flaska.net/


Re: [Kde-pim] Problems with infrastructure

2014-12-13 Thread Jan Kundrát

On Friday, 12 December 2014 22:44:39 CEST, Albert Astals Cid wrote:

That's very different from saying whole KDE should just
switch to Gerrit, and I'm not proposing that. Some people have made
themselves clear that no change is going to happen, and I can live with
that.


Where was that discussed? Which people is that?


(Removing PIM from the list, because I don't see this as a PIM matter.)

That was the impression which I got from the #kde-devel IRC channel and the 
kde-core-devel ML right after that frameworks BoF during Akademy. When 
re-reading the threads and the IRC logs today, I no longer have the 
impression that there was a clear, absolute and strict "no", but there was 
nonetheless IMHO quite a strong resistance to using "something as horrific 
as Gerrit". That might explain why I think that there will be a subset of 
people who won't be fine with any change, and because I respect their 
opinion, I don't want to force such a change upon them.


So, basically, from my point of view -- the tools are here, the CI is done. 
The CI bits in particular make the workflow much more appealing to me. Now 
it's up to the KDE developers to come to a decision whether they want that 
or not.


With kind regards,
Jan

--
Trojitá, a fast Qt IMAP e-mail client -- http://trojita.flaska.net/


Re: [Kde-pim] Problems with infrastructure

2014-12-11 Thread Jan Kundrát

On Thursday, 11 December 2014 23:20:59 CEST, Albert Astals Cid wrote:
You need to understand though that changing patch 
review systems is 
not your decision to take (nor mine); we need to have a general 
agreement/consensus when changing systems this important.


Changing systems is not what I propose, though. What I'm arguing for is 
empowering the individual projects to be able to choose tools which work 
well for them. That's very different from saying whole KDE should just 
switch to Gerrit, and I'm not proposing that. Some people have made 
themselves clear that no change is going to happen, and I can live with 
that.


I do happen to think that yes, switching to Gerrit would indeed be 
a good move for KDE as a whole, but sharing a view is something else than 
making people change their systems. If you like RB and you're a project 
maintainer, sure, by all means do use it for your projects -- I'm not going 
to force you to switch for the sake of my pleasure or something similar.


I also admit that I would probably feel a little bit sad if Trojita ended 
up being the only project which stuck with Gerrit, but if that was the 
general consensus of the community, who am I to dispute the wishes of 
people doing the actual work?


I'm sorry if the impression which I managed to create by pointing out what 
I perceive to be strong points of Gerrit and weak points of the 
alternatives was something different.


With kind regards,
Jan

--
Trojitá, a fast Qt IMAP e-mail client -- http://trojita.flaska.net/


Re: [Kde-pim] Problems with infrastructure

2014-12-10 Thread Jan Kundrát

On Wednesday, 10 December 2014 10:28:59 CEST, Christian Mollekopf wrote:
* pull requests/the webinterface: reviewboard is awesome for single patches 
every now and then, it's rather useless when you work with 
branches IMO. With github we have a nice webinterface to review 
branches while keeping it simple.
Gerrit may be what I'm looking for though (never used it, 
looking forward to see how the testing goes).


That depends on what you're looking for. Gerrit puts emphasis on review of 
individual patches. It will present related patches together, but you 
won't be able to review them as a single entity -- the goal of Gerrit is to 
ensure that the history is clean, and it is operating under an assumption 
that each commit matters. Unless I'm mistaken, a GitHub pull request shows 
you a diff which represents the starting point and the final point of that 
branch as a single unit, with an option to go into individual commits. 
Gerrit is different -- not worse, different :).


Regarding the testing of Gerrit, I haven't received much feedback yet. 
Three repositories are using it so far (kio, plasma-framework, trojita), 
and if a repo owner/maintainer of some other project is willing to take 
part in this testing, I won't see a problem with it.


With kind regards,
Jan

--
Trojitá, a fast Qt IMAP e-mail client -- http://trojita.flaska.net/


Re: [Kde-pim] Problems with infrastructure

2014-12-10 Thread Jan Kundrát

On Wednesday, 10 December 2014 19:41:31 CEST, Albert Astals Cid wrote:
D is really important to me since it makes it harder for non-hardcore git 
users to contribute; it took me days to start understanding Qt's 
gerrit and i 
am still not sure i understand it fully, with reviewboard i do git diff and 
use the web to upload a patch, as simple as it gets.


Please take your time to try out KDE's Gerrit and don't judge it based on 
your experience with Qt's Gerrit (in fact, try to forget that one if 
possible). There's been more than two years of development which went into 
the version which we use, and this effort is IMHO quite visible.


As a random data point, I've had two newcomers (one of them a GCI student) 
submitting their first patch ever through Gerrit within 15 minutes after I 
asked them to use Gerrit, with no prior experience whatsoever. I'm pretty 
sure that GCI students in general aren't considered a benchmark of 
quality.


Also, uploading a patch with Gerrit is a matter of `git push gerrit 
HEAD:refs/for/master`. Are you suggesting that this is harder than `git 
format-patch origin/master` and uploading the resulting file manually?
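
For completeness, the main one-time step which that push command assumes, 
besides adding the "gerrit" remote, is installing the commit-msg hook (a 
sketch; the user name is a placeholder and 29418 is Gerrit's default SSH 
port):

    # the hook appends a Change-Id trailer which Gerrit uses to track
    # successive revisions of the same change
    scp -p -P 29418 albert@gerrit.vesnicky.cesnet.cz:hooks/commit-msg .git/hooks/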


And yes, i know people complain about reviewboard, but that is because it's 
the tool we use; if we used gerrit, we would probably get complaints too. I 
want to make sure we're not investing time in what at the end 
is most probably a zero-sum game.


Right, I believe that one. As a project maintainer though, I can say that 
Gerrit does make my life much easier -- being able to test patch series 
myself by a single `git pull` is *so* different from the experience of 
fetching patches by hand from RB (and undoing the occasional breakage). 
Also, there's no early CI feedback with RB, and nobody is working on this, 
nor has anyone announced any plans to work on this topic during the past 
years. That alone would change the balance for me.


Cheers,
Jan

--
Trojitá, a fast Qt IMAP e-mail client -- http://trojita.flaska.net/


Re: [Kde-pim] Problems with infrastructure

2014-12-10 Thread Jan Kundrát

On Thursday, 11 December 2014 00:51:28 CEST, Albert Astals Cid wrote:

Yes, it is harder.

You need to set up git correctly, so that "gerrit" in that 
command is valid; 
you need to understand you're pushing to a different server than the real 
one; you need to commit (i never do format-patch, just git 
diff) -- all in all 
it needs you to have a bigger git understanding.


I see what you're saying, and you're probably right -- there's a bar, 
indeed. That bar could however be effectively removed by having 
spoonfeeding, step-by-step documentation on how to submit patches with 
Gerrit. I'm still hoping that I'm not the only guy who cares about this, 
and that maybe someone else is going to produce such a howto (hint for 
bystanders: now is the time, I've put quite a few hours into this already).


Furthermore, there are upstream patches under review for making it possible 
to create a review through the web, with no use of CLI tools or a `git 
push` equivalent of any sort. When these are released, I'll be happy to 
upgrade, as usual.


Besides in reviewboard i could get a tarball, produce a diff and upload it 
easily, i have not investigated Luigi's links yet, but as far 
as i know that 
is not easy/doable in gerrit.


Do we have some stats saying how many people download tarballs / zips from 
ReviewBoard? Is there a User-Agent breakdown for the patch submission to RB 
so that we could look on how many people do push files around, and can we 
compare that to the number of people using rb-tools? I'll be happy to do 
the number crunching myself, but I don't have access to the logs.


Anyway, I understand that my experience is probably going to differ from 
the experience of anybody else to some extent, but to me, the hardest thing 
in making a patch is usually finding out what changes to make in a random 
C++ file of a project whose sources I'm seeing for the first time. Compared 
to *that*, creating a git diff has always been much easier for me.


Moreover, when that patch is ready, someone still needs to commit it and 
make sure that it doesn't introduce any regressions. Right now, all parts 
of this duty are effectively up to the project maintainer, which means that 
the process doesn't scale at all. Unless the patch author is a KDE 
developer already (in which case I fully expect them to be able to 
copy-paste three commands from a manual to be able to push to Gerrit), a 
project maintainer has to fetch stuff from RB by hand, copy-paste a commit 
message, perform a build cycle, verify that stuff still works and upload 
the result to git.


Considering a typical turnover time of patches which I see within KDE, I 
don't think that we have superfluous reviewers or project maintainers, so 
my suggestion is to prioritize making their workflow easier at the expense 
of, well, forcing the contributors to spend their time copy-pasting a 
couple of commands from a manual *once* during the course of their 
involvement with a given project.


Anyway, I know that the pre-Gerrit process was so painful for me that I 
actually decided to invest dozens of hours of my time into this and get 
the Gerrit + CI ball rolling, and I'm not really willing to go back to 
shuffling bits around by hand. This is 2014, and we have computers to do 
the repetitive bits for us, don't we?


I've had my fair share of beers tonight, so I hope I won't manage to offend 
people with my bluntness here.


Cheers,
Jan

--
Trojitá, a fast Qt IMAP e-mail client -- http://trojita.flaska.net/


Re: Pre-merge CI for Gerrit

2014-12-09 Thread Jan Kundrát

On Tuesday, 2 December 2014 12:05:46 CEST, Jan Kundrát wrote:
Right now, the CI runs only for dummy.git (doing nothing) and 
for trojita.git (doing three separate build & test checks to 
cover various combinations of ancient and new Qt4, Qt5, clang, 
gcc and debug and release builds). Doing this for Trojita was 
pretty easy because it has no dependencies on other projects 
except Qt. I'm of course all for extending this to other KDE 
projects as well, and help is welcome.


A heads-up -- I've now turned on the CI for Gerrit changes against 
plasma-framework. KIO can be covered as well once two patches which fix 
two test failures are merged.


More details are in my mail to kde-frameworks-devel [1].

I think that this is a right time to take a look how the system works and 
send comments or feature requests.


With kind regards,
Jan

[1] http://thread.gmane.org/gmane.comp.kde.devel.frameworks/20895

--
Trojitá, a fast Qt IMAP e-mail client -- http://trojita.flaska.net/


Pre-merge CI for Gerrit

2014-12-02 Thread Jan Kundrát

Hi,
I managed to get a pre-merge continuous integration working with Gerrit. 
This means that whenever someone uploads/updates a change to Gerrit, it 
gets through a CI run and the result is reported back to Gerrit as an 
advice -- see e.g. [1] for an example. A KDE developer can still override 
the CI result if they choose so.


For technical reasons, the CI is independent of KDE's own CI, both 
software-wise and with respect to the build HW. It's using very similar 
tooling to what the OpenStack project is doing -- everything is 
coordinated by Zuul [2] while the actual jobs are launched by Turbo-Hipster 
[3]. Resource-wise, see [4] for where it's running.


Right now, the CI runs only for dummy.git (doing nothing) and for 
trojita.git (doing three separate build & test checks to cover various 
combinations of ancient and new Qt4, Qt5, clang, gcc and debug and release 
builds). Doing this for Trojita was pretty easy because it has no 
dependencies on other projects except Qt. I'm of course all for extending 
this to other KDE projects as well, and help is welcome.


With kind regards,
Jan

[1] https://gerrit.vesnicky.cesnet.cz/r/167
[2] http://ci.openstack.org/zuul/
[3] http://josh.people.rcbops.com/2013/09/building-a-zuul-worker/
[4] https://conf.kde.org/en/Akademy2014/public/events/140

--
Trojitá, a fast Qt IMAP e-mail client -- http://trojita.flaska.net/


Re: Pre-merge CI for Gerrit

2014-12-02 Thread Jan Kundrát

On Tuesday, 2 December 2014 19:46:18 CEST, Albert Astals Cid wrote:
Dependencies are the hard part. Any reason you didn't piggy-back on 
build.kde.org for it?


That's right.

The reason for not using Jenkins was that the existing KDE instance was 
not up to that task without significant changes which are underway (and I 
originally estimated that this project would be completed much sooner 
anyway) -- there was really no support for feeding it my own git ref and 
letting it test that without extra code.


Also, using just Jenkins would not give me what I'm ultimately looking 
for, i.e. support for gating the trunk against regressions, cross-project 
integration, etc. Another missing feature is that the Gerrit Trigger 
plugin by default tests just the change itself, not how it would look 
after a merge (this is git, so a patch could be based on something other 
than the tip of the target branch). So that led me to a decision to use 
something smarter than just the plain old Gerrit/Jenkins/Gerrit-Trigger combo.
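
To illustrate: what should be tested is the state of the repository 
*after* the proposed change has been merged, roughly like this (the 
change ref and branch below are placeholders, not literally what Zuul 
runs):

  # fetch the proposed change from Gerrit's refs/changes namespace
  git fetch origin refs/changes/64/164/1
  # recreate the post-merge state of the target branch
  git checkout -b ci-testing origin/master
  git merge --no-ff FETCH_HEAD
  # ...and only then configure, build, and run the tests

Testing just the change's own commit would miss breakage caused by newer 
commits which have landed on the branch in the meantime.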


The Zuul tool does exactly what I'd like to achieve. The OpenStack infra 
project uses it to launch jobs through Jenkins (in fact, through 8 Jenkins 
masters), but at that point Jenkins' role is essentially degraded to just 
a dumb job launcher. When doing pre-merge CI, the nice features of Jenkins 
such as plotting the trends of build warnings, extraction of test results 
into HTML tables and what not are not that relevant, IMHO. What matters is 
whether the build passes and whether the tests run, all in a well-defined 
manner and under an explicit policy of what sorts of mishaps to allow. An 
example -- if I want my project warning-free, I build with -Werror, and I 
don't care whether a patch introduces 50 warnings or just 1; either it 
passes my policy decision of not allowing warnings, or it fails. If I have 
flaky tests, I mark them as such so that `ctest` or `make test` or 
whatever doesn't result in a failure, etc.

Anyway, my impression was that the OpenStack infra team and Wikimedia 
staffers both seem to struggle with Jenkins; they have scripts which 
produce XMLs to be fed to Jenkins and effectively let it act as a form of 
RPC. So I wanted to see whether I could avoid Jenkins, because it doesn't 
provide any added value in this particular use case, AFAIK.
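
As a concrete sketch of that policy-lives-in-the-build idea, the 
no-warnings rule can be enforced entirely at configure time (the flags 
below are an example, not what the Trojita job actually uses):

  # every compiler warning becomes a hard build failure
  cmake -DCMAKE_CXX_FLAGS="-Wall -Wextra -Werror" /path/to/source
  make

The CI then doesn't have to parse compiler output or count warnings; a 
non-zero exit status is the whole policy.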


One of the things missing or up-for-overhaul within KDE's CI was preparing 
job definitions in some DSL, specifying the dependencies at scale, etc. 
I've heard that it's under consideration to use JJB, the Jenkins Job 
Builder, a tool from OpenStack which can equally well generate 
configuration for both Jenkins and Zuul. This would be great, of course, 
and being aligned with an extremely vibrant development community (and, in 
fact, a family of communities) is a huge benefit IMHO.


So in the end, this Gerrit/Zuul/Turbo-Hipster pipeline eventually ends up 
executing a shell script. This shell script can be job-specific, and it 
should probably do stuff like fetching the already-built project 
dependencies and launching the build and test cycle. I just hacked 
together a very quick .sh one for building Trojita, but I'm of course sure 
that the only reasonable path is to have a single script to be used both 
by KDE's Jenkins and this Gerrit thingy.
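
For the record, that quick script is not much more than the following 
sketch (paths and options here are illustrative, not the exact job 
definition):

  #!/bin/sh
  set -e  # abort on the first failing command

  SRC="$PWD"
  BUILD="$PWD/build"
  mkdir -p "$BUILD" && cd "$BUILD"

  # configure and build the ref which Zuul has prepared for us
  cmake -DCMAKE_BUILD_TYPE=Debug "$SRC"
  make -j"$(nproc)"

  # run the test suite; a non-zero exit status fails the job
  ctest --output-on-failure

A single shared script along these lines is exactly what both KDE's 
Jenkins and this Gerrit setup could call.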


Cheers,
Jan

--
Trojitá, a fast Qt IMAP e-mail client -- http://trojita.flaska.net/


Re: Pre-merge CI for Gerrit

2014-12-02 Thread Jan Kundrát

On Tuesday, 2 December 2014 12:05:46 CEST, Jan Kundrát wrote:

[1] https://gerrit.vesnicky.cesnet.cz/r/167


Sorry for the noise, that was a very bad example. A much better one is at 
https://gerrit.vesnicky.cesnet.cz/r/164 .


Because that change has been merged now, the comments are shown collapsed. 
Just click to expand the bits added by "Continuous Integration" to see 
what this is all about, and what sort of feedback a user gets when a build 
fails.


(Right now, the links to logs point to a machine with only IPv6 
connectivity; I have a DNS thing to fix, sorry. They are real, though.)


With kind regards,
Jan

--
Trojitá, a fast Qt IMAP e-mail client -- http://trojita.flaska.net/


Re: Gerrit: merging feature branches

2014-10-29 Thread Jan Kundrát

On Tuesday, 28 October 2014 19:13:53 CET, Marco Martin wrote:
Gerrit question: I have a feature branch in plasma-framework 
(mart/basicDeleteUndo), and i wanted to do the review process with gerrit.


Hi,
Gerrit is quite flexible, and supports many different use cases. It's up to 
us to agree on how to use them so that they support our work. My personal 
favorite is to use a small number of long-lived branches (candidates: 
master, kde4, release, etc.), and to use Gerrit's topics for changes which 
are somehow grouped together, yet ultimately targeted to be merged into 
the respective branch once they are ready.


A topic is set by pushing to 
refs/for/target-branch-name/your/topic/name (it can also be modified later 
on through the web UI). One can pretty easily search for a single topic, 
the links are clickable, etc.
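
In your case that could look something like this (taking the topic name 
from your branch; any name would do):

  git push gerrit HEAD:refs/for/master/basicDeleteUndo

i.e. each commit is reviewed individually, targeted at master, and 
grouped under the basicDeleteUndo topic.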


Would this work for you? Should we define some other workflow?

Oh, and as Thomas said, in Gerrit there's always a workflow of one change 
per commit. If the workflow you're trying to use doesn't match this, then 
you will have a hard time implementing it. My personal opinion is that 
enforcing self-contained and clean commits is a reasonable thing, so I 
don't mind the need to review each change separately. A local rebase 
(perhaps an interactive one to squash related stuff together) is a great 
tool in my opinion.



now i tried the following 3 approaches, that all fail miserably:
* from my branch: git push gerrit HEAD:refs/for/master gives 
[remote rejected] 
master - refs/for/master (no new changes)


I would like to see a picture of the graph of the changes you tried to 
push. Something like `git log --graph --decorate --pretty=oneline 
--abbrev-commit --branches --remotes HEAD` should do the trick. (Be sure to 
`git fetch` from your KDE remote beforehand.)


With kind regards,
Jan

--
Trojitá, a fast Qt IMAP e-mail client -- http://trojita.flaska.net/


Re: Using Gerrit for code review in KDE

2014-10-18 Thread Jan Kundrát

On Thursday, 16 October 2014 23:43:00 CEST, Kevin Kofler wrote:
In Gerrit, I basically get an ugly command-line interface: I 
have to push to 
a magic ref encoding all the information (and IIRC, git-cola only lets me 
enter the basic refs/for/branchname, the special characters in stuff like 
%r=f...@example.com confuse it, so I'd have to push from a 
terminal if I want 
to use those).


Right, the beauty is in the eye of the beholder; I'm glad that you prefer 
web UIs over CLI tools. I happen to have it the other way round. I'm sure 
there are other people like you, and I'm also sure there are other people 
like me.


Would it work for you to use git-cola for your initial push? You can use 
Gerrit's web UI for setting reviewers and what not once the initial push is 
done. That should require roughly as much work as your existing RB workflow 
-- you still have to produce the patch and somehow get it into RB/Gerrit. 
The only difference is Gerrit's `git push` vs. RB's `git format-patch` plus 
a manual upload through your browser (you don't appear to use rb-tools, and 
therefore I'm not proposing `git gpush` as an alternative for you).


I'm surprised that you find shuffling around patch files by hand easier, 
but use whatever you prefer by all means.


Setting reviewers requires a special command-line-style 
parameter appended to the ref that is found in the documentation (that %r= 
thing). There is also no autocompletion nor client-side validation of the 
reviewer nicks/addresses, unlike on ReviewBoard's friendly web interface.


You might want to give Gerrit's web interface a try. Gerrit supports adding 
reviewers (with autocompletion and validation) in there, too. There's an 
"Add..." button in the top-middle, next to the list of the current 
reviewers.
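
For completeness, the command-line way from the quoted paragraph is a 
push option along these lines (the address is a placeholder):

  git push gerrit HEAD:refs/for/master%r=reviewer@example.com

Nobody is forced to use that, though; the web UI does the same job.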


Cheers,
Jan

--
Trojitá, a fast Qt IMAP e-mail client -- http://trojita.flaska.net/


Re: Fwd: PVS-Studio KDE analysis

2014-09-29 Thread Jan Kundrát

On Monday, 29 September 2014 18:39:08 CEST, Christoph Feck wrote:

Russian folks behind the PVS-Studio static analyzer
(http://www.viva64.com/en/pvs-studio/) made an analysis of the KDE project. 


Hi, can we make them run it on extragear (and especially 
extragear/pim/trojita)?


Cheers,
Jan


--
Trojitá, a fast Qt IMAP e-mail client -- http://trojita.flaska.net/


Re: Using Gerrit for code review in KDE

2014-09-22 Thread Jan Kundrát
The language for Code-Review +2 now reads "Looks good to me and I know this 
code, approved". I hope people won't be afraid to approve changes now :).


Cheers,
Jan

--
Trojitá, a fast Qt IMAP e-mail client -- http://trojita.flaska.net/


Re: Using Gerrit for code review in KDE

2014-09-16 Thread Jan Kundrát

On Monday, 15 September 2014 16:49:39 CEST, Milian Wolff wrote:

Where do I see the diff there?


Thanks to Ben and his review of my patches, Gerrit is now replicating all 
of the changes under review into KDE's git as well. In the context of this 
discussion, it means that there's now a link to KDE's quickgit for showing 
these diffs -- just click on the "(quickgit)" link on the change screen. Of 
course that's a read-only view and therefore not suitable for adding 
comments, etc.


This doesn't change anything for users of KDE's git as these refs are not 
cloned by default.


With kind regards,
Jan

--
Trojitá, a fast Qt IMAP e-mail client -- http://trojita.flaska.net/


Re: Using Gerrit for code review in KDE

2014-09-15 Thread Jan Kundrát

On Saturday, 13 September 2014 23:05:48 CEST, Eike Hein wrote:

Yeah, that's something I'm OK with too. Maybe we can even
adapt the UI to use strings like Sven proposes?


https://gerrit.vesnicky.cesnet.cz/r/35

With kind regards,
Jan

--
Trojitá, a fast Qt IMAP e-mail client -- http://trojita.flaska.net/


Re: Using Gerrit for code review in KDE

2014-09-15 Thread Jan Kundrát

On Monday, 15 September 2014 16:49:39 CEST, Milian Wolff wrote:

Where do I see the diff there?


For me, it's easiest to just click on any file name. That will open a diff 
view (either a side-by-side or a unidiff one, based on your prefs). The 
diff shows just a single file, but you can use the "[" and "]" keys for 
switching to the next/previous one. Use "?" for help; "u" goes back to the 
change screen.


In the gerrit that runs on qt-project, I can 
easily click one button to go to a unified or side-by-side diff 
view. Is that a custom extension?


What qt-project.org has implemented as a custom extension is support for 
showing multiple files at once on a single page. That's something which 
upstream Gerrit doesn't support yet (there are open patches pending review 
and/or future work, though).


Generally, it seems as if the qt-project gerrit has a much 
cleaner GUI. I'm pretty lost when looking at the one up there...


To make matters more interesting, upstream Gerrit switched to a new change 
UI a couple of releases ago, and that's the default view on KDE's Gerrit. 
While you can still activate the old change screen in your per-user 
settings, I would recommend against it, as upstream is pretty open about 
their plans to remove the old change screen in a future release.


I understand how someone who is used to working with Qt's Gerrit (and who 
invested time into learning its quirks) might find the new change screen 
unintuitive. However, it was pretty easy for me to unlearn the old habits 
and to get used to the new locations of various buttons, and now I can 
manage with the new one just fine. I even like it more than the old view.


Since Gerrit has a full-blown API for basically every feature, what about a 
GSoC project for building, say, a KDevelop plugin for doing code reviews? 
With per-selection-range commenting, review browsing and what not?


Hope this helps,
Jan

--
Trojitá, a fast Qt IMAP e-mail client -- http://trojita.flaska.net/


Re: Using Gerrit for code review in KDE

2014-09-14 Thread Jan Kundrát

On Saturday, 13 September 2014 23:29:55 CEST, David Edmundson wrote:

I think a good example is your patch today (and pretending you're not a
maintainer). There was a single typo in a commit message. I wanted it
fixed, but I don't want to have to review that whole thing again
(in reviewboard terms: fix it and ship it). I would have given a +2, but
when you re-push to gerrit I would have to +2 again before you can merge.

It'd be a perfect example of where a self +2 would be fine.


Any project in Gerrit can be configured to copy the Code-Review ratings 
from the previous iteration of a review when only the commit message gets 
changed. I see both positives and negatives of such an option -- on the one 
hand it would help you in a situation like this one, but on the other hand 
it might let non-KDE developers do stupid things like changing the commit 
message arbitrarily. That's why I went the paranoid way and configured this 
to use the defaults and not copy stuff around in such a situation. We still 
copy on a trivial rebase, though.
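
If we decide to flip this, it should be a small change to the label 
configuration in the project's refs/meta/config branch -- a sketch based 
on Gerrit's documented label options (the section likely already exists 
in project.config):

  [label "Code-Review"]
      copyAllScoresOnTrivialRebase = true
      # flip this one to also keep votes when only the commit
      # message changes:
      copyAllScoresIfNoCodeChange = true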


Are you guys more in favor of copying the reviews upon a change in the 
commit message? I can make it happen.


Cheers,
Jan

--
Trojitá, a fast Qt IMAP e-mail client -- http://trojita.flaska.net/

