MPI defaults

2009-03-14 Thread Francesco P. Lovergine
Hi all

while working on HDF5 I'm considering moving MPI support to use
mpi-default-dev and mpi-default-bin. That would imply supporting a single
reference platform (openmpi or lam, completely dropping mpich, AFAIK),
which is quite different from what has been done until today. I wonder if
it would be more appropriate to have instead an mpi-all-dev and
mpi-any-bin package which depends on all supported platforms on every
architecture. That would allow transparently building lam, mpich and
openmpi flavors whenever possible, and depending on whatever appropriate
tool the admin installs.

-- 
Francesco P. Lovergine


-- 
To UNSUBSCRIBE, email to debian-science-requ...@lists.debian.org
with a subject of unsubscribe. Trouble? Contact listmas...@lists.debian.org



Re: MPI defaults

2009-03-14 Thread Manuel Prinz
On Saturday, 14.03.2009, at 10:54 +0100, Francesco P. Lovergine wrote:
> while working on HDF5 I'm considering moving MPI support to use
> mpi-default-dev and mpi-default-bin. That would imply supporting a single
> reference platform (openmpi or lam, completely dropping mpich, AFAIK),
> which is quite different from what has been done until today. I wonder if
> it would be more appropriate to have instead an mpi-all-dev and
> mpi-any-bin package which depends on all supported platforms on every
> architecture. That would allow transparently building lam, mpich and
> openmpi flavors whenever possible, and depending on whatever appropriate
> tool the admin installs.

That is probably the goal to aim for, but there are some technical
problems with it. Most debian/rules files can't easily include a snippet
and still end up doing the right thing.
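To make the intent concrete, here is a minimal Python sketch of the per-flavor build loop such a snippet would have to express. Note that the flavor names and the `--with-mpi` configure flag are illustrative assumptions, not an existing Debian interface:

```python
# Hypothetical sketch: compute the build steps a packaging snippet would
# run, one pass per MPI flavor. The flavor names and the --with-mpi flag
# are invented for illustration.

def build_plan(flavors):
    """Return (build_dir, configure_command) pairs, one per flavor."""
    plan = []
    for flavor in flavors:
        plan.append((f"build-{flavor}",
                     ["./configure", f"--with-mpi={flavor}"]))
    return plan

for build_dir, cmd in build_plan(["openmpi", "mpich", "lam"]):
    print(build_dir, " ".join(cmd))
```

The hard part, of course, is not the loop itself but getting every package's debian/rules to express it consistently.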

There are also some other issues: most MPI implementations do not play
well together, meaning there are currently problems if two or more are
installed. (Yes, there shouldn't be, but it's an unresolved issue as of
now.) Also, LAM/MPI can be considered dead upstream and is not
recommended for use; I think Debian should not build against it
anymore. There's nothing against still providing the libraries, of
course. Also, MPICH is superseded by MPICH2, which no one has ever
packaged. I do not know the details, but as of my last check MPICH2
seemed not to have support for modern interconnects. They are supported
via forks, so one would have to package an MPICH2 version for each
interconnect.

Mpi-defaults was supposed to support packages that need MPI support, and
give it to them in an easy way. It was not designed to be a full-blown
solution. Such a solution is definitely worth having, but several
questions need to be addressed first. I do maintain a list of issues
that in my opinion need to be resolved first; I need to write it up in a
better way and have it discussed here and in pkg-scicomp first. Also, we
should check whether we really want so many (and outdated) MPI versions
in Debian, or if we can go with one for the apps and provide libs for
all the others. If MPI were standardized on ABI compatibility, that
would resolve the whole issue. But since that is not going to happen
anytime soon, we have to work on a solution that works for us.

I really welcome your idea and would be glad to hear suggestions if you
have an idea of how to implement it! I also hope we can have the
discussion about MPI in Debian soon, but I currently have to spend the
time fixing my MPI implementation before I can write up a proposal.

Best regards
Manuel




Bibliography and File Management

2009-03-14 Thread Benda Xu
Dear guys and girls on the list,

I have downloaded a lot of papers from journals for reading and
reference. Although I tried to develop a naming scheme to organize the
files (mostly in PDF format), I have run into chaos these days: I forget
which is which without actually opening the files one by one.

There will certainly be more and more papers I collect, and I am
wondering about a smart way to manage the references.

I have tried JabRef[1] and a similar tool under GNOME[2]. They maintain a
BibTeX file and keep track of the locations of the reference files.

I do not have a full desktop environment, so I would like a CUI-friendly
(esp. Emacs-friendly) tool for the same purpose. RefTeX, which comes with
AUCTeX, does not seem to associate its entries with actual local files. I
searched the web with no luck.

I would like to hear your advice.

Cheers!

Footnotes: 
[1]  http://jabref.sourceforge.net/

[2]  I tried a year ago and forgot its lovely name.

-- 
Benda Xu
Academic Talent Program,
Fundamental Science of Mathematics and Physics,
Tsinghua University,
P.R.China

http://alioth.debian.org/~heroxbd-guest





Re: Bibliography and File Management

2009-03-14 Thread Bryan Bishop
On Sat, Mar 14, 2009 at 6:23 PM, Benda Xu hero...@gmail.com wrote:
> I have downloaded a lot of papers from journals for reading and
> reference. Although I tried to develop a naming scheme to organize the
> files (mostly in PDF format), I have run into chaos these days: I forget
> which is which without actually opening the files one by one.

Yes, I run into this problem as well. I go on reading sprees of
hundreds of papers; recently it was something like 200 papers and 150
MB on microfluidics. BibTeX is nice, but not always a given. Google
Scholar allows you to export citations as you find paper search
results; perhaps it would be possible to write a userscript/JavaScript
hack that would automatically download the BibTeX citation as you
download a file? That way, you always keep the bibliographic
information together with each download. This is a hack, not a real
solution, of course.
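As a rough illustration of that pairing idea (this is not an existing tool, and the citation-key naming convention is my own assumption), the bookkeeping on disk could be as simple as:

```python
from pathlib import Path

def save_pair(directory, key, pdf_bytes, bibtex_entry):
    """Store a paper's PDF and its BibTeX entry side by side, both named
    after the citation key, so neither can get separated from the other."""
    d = Path(directory)
    d.mkdir(parents=True, exist_ok=True)
    pdf_path = d / f"{key}.pdf"
    bib_path = d / f"{key}.bib"
    pdf_path.write_bytes(pdf_bytes)
    bib_path.write_text(bibtex_entry)
    return pdf_path, bib_path

# Example with made-up content; a userscript would feed in the real
# downloaded bytes and the exported citation.
pdf, bib = save_pair("papers", "doe2009microfluidics",
                     b"%PDF-1.4 ...",
                     "@article{doe2009microfluidics, title={...}}")
```

The point is only that a shared citation key ties the two files together; the browser-side part of the hack would still be JavaScript.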

- Bryan
http://heybryan.org/
1 512 203 0507





Full paper-to-bibliography toolchain

2009-03-14 Thread Bryan Bishop
Hi all,

This email comes about because of the recent thread about bibliography
management. In particular, I've always had my eye out for what sort of
software should (or should not) exist for scientific papers. Some
immediate examples:

AutoScholar
http://heybryan.org/projects/autoscholar/

AutoScholar is a Perl script that takes a paper title, queries Google
Scholar, and fetches a PDF link if available, either on the first page
of search results or in the Get This Article link. This truly belongs
as a module in the 'surfraw' project more than anything. Future fixes
should honestly include automatically following through to the
publisher's website via WWW::Mechanize and looking for PDF links. PDF
links usually come in three types: (1) direct links, (2) links to a
page that refreshes to the PDF, or (3) a popup with some JavaScript
black magic (as in the case of ScienceDirect), which I don't know how
to handle with Perl's WWW::Mechanize; any hints?

Call for Papers (CFP) file format standards: suggestions for a microformat
http://heybryan.org/cfp.html


http://wikicfp.org/ is a wiki for posting CFPs. I've posted a few. Many
of the CFPs that flood my inbox are forwarded (by me) to the wikicfp
Gmail address, but I know that the poor guy who runs it isn't keeping up
with the CFP emails I send his way. Also, I know there's no automatic
way of reading CFPs, since they hardly have a standard format. Yes,
there is standardized information contained within each, but not always
in the same format. Anyway, CFPs should be released in a standardized
format that carries metadata: descriptions, authors/participants/keynote
speakers, locations and addresses, deadlines, email addresses, URLs,
BibTeX for previous proceedings publications, and so on. The wikicfp
wiki has an interface for entering information, but unfortunately it
doesn't always capture all of the information in a CFP, since not all
CFPs follow the same three-tiered submission deadline format.
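For illustration only, a machine-readable CFP along those lines might look like the record below. Every field name and value here is my own invention, not a proposed standard:

```python
import json

# Sketch of the fields a standardized CFP record could carry; all names
# and values are invented for illustration.
cfp = {
    "title": "Example Workshop on Scientific Computing",
    "description": "An invented workshop used only to show the fields.",
    "speakers": ["A. Keynote"],
    "location": "Example City",
    "deadlines": {"abstract": "2009-05-01",
                  "full_paper": "2009-06-01",
                  "camera_ready": "2009-08-01"},
    "contact": "chair@example.org",
    "url": "http://example.org/cfp",
    "bibtex_of_past_proceedings": [],
}
blob = json.dumps(cfp, indent=2)
print(blob)
```

Anything in this shape could be shipped over RSS or in a zip file and searched or imported into a calendar mechanically, which is the whole point.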

What are the advantages of sending around CFP files? You could process
more of them, and more quickly. You would only have to download an RSS
feed or zip file of CFPs and search for terms that you are interested
in. You could use the calendar/date-time information to import into
your own personal calendar/scheduling system. And it might also be a
good way to keep track of your work on different abstracts, posters,
papers, etc., with respect to deadlines, topics, etc. Perhaps even
through the submission-review-editing-(hopeful)-acceptance process?


Recently I mentioned the idea of a GreaseMonkey userscript to
complement paper-reading over Google Scholar, here on this mailing
list:
http://lists.debian.org/debian-science/2009/03/msg00046.html

Google has an option in the user preferences page on Google Scholar (
http://scholar.google.com/ ) to show Export citation links next to
each paper it turns up as a result of a user's queries, including a
BibTeX format. If you're downloading all of these papers, perhaps a
userscript that detects a click and simultaneously downloads both the
PDF and the citation would be appropriate? Or even better, perhaps
exporting just the citation with the link into a queue for later
processing? (This goes hand-in-hand with a list of things to improve
Google with, like search session management (to see recent queries and
recent results, instead of going in circles with Google Trends and
Google Search History), which I'll probably never get around to
implementing.)

Btw, speaking of GreaseMonkey, here's a script that will fix
ScienceDirect's naughty popup behavior for showing PDFs:
http://userscripts.org/tags/sciencedirect
http://userscripts.org/scripts/show/41663

There are some browser plugins for Firefox, such as Zotero, that do
in-browser bibliography management.
http://www.zotero.org/

But it's not entirely clear how often Zotero is able to capture both
the bibliographic information and the actual PDF. Anybody know? I'd
like to be able to just impose a standard on all of you: a tar file
with a PDF and a .bib (BibTeX) file. But alas, this doesn't seem likely
to happen. ;-) I did once have an opportunity to impose code on PLoS
ONE, but I didn't take advantage of the situation; silly me!
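For what it's worth, if anyone did want to try the tar-with-.bib convention, it's nearly trivial with Python's standard tarfile module. The bundle layout (one `key.pdf` plus one `key.bib`) is, again, just my suggestion:

```python
import io
import tarfile

def bundle_paper(tar_path, key, pdf_bytes, bibtex_entry):
    """Pack key.pdf and key.bib into a single tar archive so the paper
    and its citation travel together."""
    with tarfile.open(tar_path, "w") as tar:
        for name, data in ((f"{key}.pdf", pdf_bytes),
                           (f"{key}.bib", bibtex_entry.encode("utf-8"))):
            info = tarfile.TarInfo(name)
            info.size = len(data)
            tar.addfile(info, io.BytesIO(data))

# Example with made-up content standing in for a real paper.
bundle_paper("doe2009.tar", "doe2009", b"%PDF-1.4 ...",
             "@article{doe2009, title={...}}")
with tarfile.open("doe2009.tar") as tar:
    names = sorted(tar.getnames())
```

The receiving side just untars and gets a PDF plus a citation that can never be separated in transit.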

Another software package I once put a few hours into was something I
foolishly called Autozen 2008. It was a Perl script for cyclical PDF
viewing; in other words, pages would be flashed up for a few seconds
at a time on one of my many idle monitors. Instead of having a
television blazing in the background, when I get distracted at least
I'm being distracted by something educational and interesting. Huh, I
never knew that the wetting properties of acrylic liquids were
inversely proportional to their capillary crawl distance (or
something). It would also have been interesting to set up a public
repository for a few clients to connect to for a reading circle of
sorts, where we all throw in some