Indeed, Yaroslav.

I agree these are concerns and it is likely no solution will ever be perfect.

I think the best way to limit (not avoid) such concerns is to make the process as transparent as possible. That is quite a challenge, since one of the key points in the process is that the names of reviewers are kept confidential, to allow them to be completely honest in their reviews.
We may not be able to do that, since it is not really part of our culture.

The abuse of the system that I have personally witnessed is one related to COI. I have been wondering how the steering committee was populated. My memory is that it was made up of three types of people:
* representatives of major labs (mostly Paris-based)
* representatives of major industrial actors (e.g., France Telecom)
* individual experts

The desire to have several representatives of major industrial actors is an explicit choice. The government wanted to make sure that money given to fund research would have industrial outcomes. Even though some very fundamental projects end up being funded, putting many industrial actors on the committee is a good way to ensure that most funded projects are "practical", with real-life outcomes and economic benefits in the end.

As for having experts from major Paris-based labs, I think that was not necessarily an explicit choice. I think it rather came about out of convenience. France is largely focused on its capital (the government easily regards the rest of the country as peasants). Financially speaking, it is more effective to invite someone working a 10-minute walk from the meeting place than a 5-hour train ride away. And last, people living in Paris are more likely to lobby the ANR to be part of the committee. Of course, I am projecting here and it may be more complex, but this is the impression I got.

(As Wikimedians, we could well imagine that major chapters will in effect have their members on the FDC, simply because that is easier and more effective than having members from a newborn, far-away chapter. This will not necessarily be an explicit choice but will simply happen naturally. However, as Wikimedia, we could well imagine that if we want to put a specific funding focus on... say... Africa and women... it would probably make sense to put people interested in Africa and in women on the committee. That would be an explicit choice.)

The one type of "abuse" I have witnessed went as follows.

Imagine the 20 experts of the steering committee in the room, ranking the projects on lists A and B.

Project P is considered. Unfortunately, one of the experts is not only an expert but is also involved in Project P, or perhaps works at one of the companies involved in Project P. He announces a COI and leaves the room. On the surface, all is fine.

In reality, there is another expert in the room, involved in Project J and in the same COI situation. Before the meeting, the two experts agreed that each would support the other's project. So in the end, their COIs were poorly handled.

I am not quite sure we can really avoid this situation.


The other point that is slightly annoying is that experts' terms are limited to two years. Consequently, individual experts can only serve for 2 years and are then done with it. However, big industrial actors (e.g., France Telecom) will always have somebody to propose, so the organization itself stays on the committee regardless of the 2-year terms. Of course, one might argue that FT needs to be on that committee anyway, since it is an actor you cannot do without, but in that case the choice of limiting terms for individuals is rather dishonest and unhelpful. Practically speaking, if an individual is a fabulous expert on robotics, it is unfortunate to refuse his help after two years over a question of term limits while he is still willing and helpful.

(I do not need to draw any parallel with certain Wikimedia chapters, right?)

Those are the other points that came to mind as I read your answers.


On 2/10/12 12:35 PM, Yaroslav M. Blanter wrote:
Florence, I think you gave a great description of the process, and I agree
that we should aim at the degree of transparency achieved in it. Actually,
if I were in charge of setting up such a review panel, this is close to
how I would do it. However, I also have similar personal experience. I am
an academic researcher, and I have participated in such panels on many
occasions: as a panel member, as a referee, and, obviously, as an
applicant. I do not claim I fully understand all the details, which also
vary from case to case, but there are a couple of things I learned from my
participation, and it is probably good to list them here.

1) If I am a panel member and I want to kill a proposal (for instance, for
personal reasons), it is fairly easy. I just need to find an issue the
experts did not comment on and present it as a very serious one. If
there is another panel member who wants to kill the same proposal, the
proposal is dead. You do not even need to conspire.

2) If I want to promote a proposal, it is very difficult, since everybody
assumes I am somehow personally involved. The most efficient way to promote
a borderline proposal is to kill the competitors (see above).

3) If I see that there are panel members with a personal agenda, trying to
kill good proposals, it is usually very difficult to withstand them. The
majority of the panel does not care, and if one panel member is killing a
proposal and another is defending it, the proposal is most likely dead.

4) You mentioned that there are "other issues", like opening a new lab,
which come on top of the proposal quality and can change the ranking. My
experience is that these issues often tend to dominate, and they are the
easiest tool for panel members who want to change the ranking with respect
to the expert evaluation.

Even such an open procedure can be subject to manipulation, and one has
to be extra careful here. I hope this helps.

Cheers
Yaroslav

On Thu, 09 Feb 2012 23:52:20 +0100, Florence Devouard <anthe...@yahoo.com>
wrote:
I wanted to share an experience with regards to a future FDC.

For two years, I was a member of the "comité de pilotage" (which I
will translate here as "steering committee") of the ANR (the National
Research Agency in France).

Every year, the ANR distributes about 1000 M€ to support research in
France.

The ANR's programmatic activity is divided into 6 clearly defined themes + 1
non-specific area. Some themes are further divided for more granularity.
For example, I was on the steering committee of CONTINT, which is one of
the four programs of the main theme "information and communication
technologies". My program was about "production and sharing of content
and knowledge (creation, edition, search, interfaces, use, trust,
reality, social networks, future of the internet), associated services and
robotics".

Every year, the steering committee of each group defines the strategic
goals for the year and lists keywords to better refine the description of
what is and is not covered.

Then a public call for projects is made. People have 2 months to present
their project. From memory, CONTINT received perhaps 200 projects.

The projects are peer-reviewed by community members (just as research
articles are reviewed by peers), and the peers provide annotations and
recommendations for or against support. There is no administrative
filter at this point.

Then a committee made up of peers reviews all the projects and their
annotations/comments and ranks them into three groups: C, rejected; B, why
not; A, proposed. Still no administrative filtering at this point.

The steering committee, about 20 people made up of community members
(volunteers) and ANR staff, reviews the As and Bs. The steering committee is
kindly asked to try to keep A projects on the A list and B projects on the B list.

However, various considerations will cause some projects to be pushed up
and others pushed down. These may range from "this lab is great, they need
funding to continue long-running research" to "damn, we did not fund any
robotics project this year even though it is within our priorities; which
would be the best one to push up?" or "if we push down this rather costly
project, we could fund these three smaller ones". We may also recommend
that a project team rework its budget if we think it is a little too costly
compared to the expected impact.

At the end of the session, we have a brand new list of As followed by Bs.
All projects are ranked. At this point, the budget is only an
approximation, so we usually know that all As will be covered, but only
zero to a few Bs may be.

The budget is known slightly later, and the exact list of funded projects
is published.
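
(To make that last step concrete, here is a minimal sketch in Python of how
the funded set can fall out of the final ranked list once the budget is
known. All project names, costs and the budget figure are invented for
illustration, and the cut-off rule, i.e. funding walks down the list and
stops at the first project that no longer fits, is my assumption; the
process above only says that all As are covered and zero to a few Bs may
be.)

# Minimal sketch, hypothetical names and amounts: walk down the final
# ranked list (As first, then Bs) and stop at the first project the
# remaining budget can no longer cover.

ranked = [
    ("P1", "A", 800),   # (project, rank, cost in k-euro), invented values
    ("P2", "A", 450),
    ("P3", "B", 600),
    ("P4", "B", 300),
]

def fund(projects, budget):
    funded = []
    for name, rank, cost in projects:
        if cost > budget:
            break  # cut-off: this project and everything below goes unfunded
        funded.append((name, rank))
        budget -= cost
    return funded, budget

print(fund(ranked, budget=2000))
# -> ([('P1', 'A'), ('P2', 'A'), ('P3', 'B')], 150): all As and one B funded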

How do we make sure that what we fund is the best choice?
Not by administrative decision.
But by two rounds of independent peer review, whose reviewers can estimate
the quality of the proposed projects and the chances of the organisations
doing them well.
And by a further round in which we know that all remaining projects are
interesting and feasible, but select them according to the strategic
goals defined a year earlier.

There are also "special calls" when there is a budget to support a highly
specific issue. Project leaders have to decide whether their project
belongs to a "regular theme", the "white" area, or a "special call".

The idea behind this is also that they have to make the effort to
articulate their needs clearly and show what the outcome would be.

The staff do not really make decisions. The staff are there to make sure
the whole process works smoothly, to receive the proposals and make sure
they fit the basic requirements, to recruit peers for the reviews (upon
suggestions made... by the steering committee or other peers), to organise
the meetings, to publish the results, and so on. Of course, some of them
do influence the process because of their deep inside knowledge of all the
actors involved. The staff is about 30 people overall.

How do we evaluate afterwards that we made the right choices and funded
the right projects?
First, because as with any research funding, there are deliverables;
Second, because once a year there is a sort of conference where all
funded organizations participate and show their results. If an
organization does not play by the rules or repeatedly fails to produce
results, it inevitably falls into the C range at some point in the
peer-review process.

I am presenting a simplified version of the process, but that is generally
it. I am not saying it is a perfect system either; it is not. But from what
I hear, the system works fairly well and is not manipulated as much as
other funding systems may be ;)

Last, members of the steering committee may only serve a 2-year term.
No more. There is also a proper COI agreement to sign and respect.
Thanks to the various steps in the process and to the good (heavy) work
done by the staff, the volunteers' workload is totally acceptable.

Note that this is government money, but the government does not review
each proposal. The government set up a process in which there is
enough trust (through the peer-review system and through the themes-and-keywords
methodology) to delegate the decision-making. The program is a
3-year program defined by the board of the organization. The majority of
the board are high-level members of the government (Education, Budget,
Research, Industry, etc.). This board does define the program and the
allocation of the budget between the various themes. But the board does
not make the decisions as to which projects are accepted or not.

Florence



