On 08/12/2013 10:04 PM, Andrei Alexandrescu wrote:
> I'd agree a lot more with what follows if it weren't for workshops, symposia,
> and journals, which together complete quite a large spectrum of publication
> and debate venues, all with different tradeoffs.
I agree that there are other avenues of dissemination out there, but I do feel that the particular predominance of conference proceedings in CS skews the field quite strongly. Regarding journals -- I was quite shocked when I first realized what long review times can be found in some major computer science journals (I hope this has changed, but it was certainly true when I was looking more intensely at the literature in the period 2008-2010). I personally suspect that's a direct consequence of proceedings being the main publication venue -- journal turnaround times _could_ be much, much faster, and are in many other fields (e.g. physics), but people tolerate those long delays essentially because of cultural factors: there's an expectation that you target conferences for quick publication and journals for long-term archival.

> But that's a matter common to all deadline-oriented work. The tradeoffs are
> well known. Also, Journals and trade magazines don't have such.

Yes, but in CS the deadline-oriented venues predominate much more than in other fields, and on grounds that I don't believe are valid any more (there are now quicker and easier ways to disseminate the latest results in a fast-moving field).

> On the upside that's incentive for producing good-quality work. Indeed,
> conference proceedings are predictably better than workshops or
> non-peer-reviewed publications.

I agree there are better and worse conferences, and that the selection practices of better conferences offer authors an incentive to do well. I just think they'd have the same incentive to do well, and a better chance of achieving it, if the main publication venue were instead well-run journals with fast review turnaround. It may be academic cultural bias on my part, but my preference (and the norm in my field) is for meetings to be places for discussion and exchange rather than a publication venue, so I don't mind meetings where the contents have not been strongly reviewed.
On the other hand, in my field unrefereed electronic preprints are a perfectly normal first dissemination method and are viewed pretty much as first-class publications in research/citation terms, though not of course in terms of career brownie points or formal research assessments. You'd think that would be an insane free-for-all, but the data suggests otherwise -- researchers' care for their own reputation means that what's out there is usually not so different in quality from what winds up in the journals.

> I haven't seen anyone really complaining about it. Not sure what surveys you
> mention. Students who don't submit this time around have more time focusing
> on research.

The complaint is principally mine, because I think if there's empirical work that relies on user input in some way (whether it's people playing a game, or answering survey questions, or assessing the relevance of query results, or the quality of user interface designs...), you have to be quite careful about how you select your test subjects in order to avoid biasing the results. In my experience this is not something computer scientists take into consideration. Then again, my general experience is that CS doesn't have very good empirical traditions, and those that do exist tend to be biased by the history of the discipline. But I also think they're negatively affected by deadline-focused publication.

> We're all guilty of that to a small extent. Generally people are good at
> picking relevant literature, and my advisers were very careful in reviewing
> quotations.

I don't think many people _deliberately_ try to mis-cite work; I think it's an accidental consequence of deadline-focused work. (Hey, you see the same thing in software development, with horrible hacks being put in to make sure the product is "fixed" for the release deadline, right? :-)

> That may happen at second and third tier venues. Good conferences accept good
> research, which means a submitter won't risk a rejection by chopping work in
> multiple pieces and submitting it separately. Never really heard of a labmate
> doing this.

But I imagine you have encountered the close cousin of this approach, which is: produce a piece of work for conference X; get rejected; rework it in a couple of weeks to make it look like something suitable for conference Y ...? I find that kind of slanting of a work in order to "sell" it problematic, although I'd accept that it depends on who's doing it and how extreme the slanting is.

As for good conferences, I'd suggest a major complaint here is that while it may be difficult to get a really bad paper in, it's easy for a good paper to be bumped with no real justification, and a typical reason for that is that it simply gets handed to an overworked reviewer without the time or inclination to try to understand the work properly. With journals you have a right of reply, and if it's obvious that a referee is either biased or not capable of understanding the work, you can appeal to the editor to discount their opinion.

> Journal papers.

Yes, but that's insufficient in CS given the dominance of proceedings as a publication venue. It's also my impression that quite a lot of top-tier journal publications are essentially the final step of a process that begins with 2-3 years of hawking a particular body of work around the conferences -- without which it's unlikely you'll get a look-in except in obscure journals. (But top-tier journals tend to be problematic in all fields -- your best bet of publishing in any major journal is to co-author with someone who's already published there.)

> Many academists defend the likes of IEEE etc., which use the funds gathered
> that way for good purposes. I know most conferences don't prevent their
> authors from putting their work online. In CS it is very rarely the case that
> a paper cannot be found without having to pay for it, and in those rare cases
> a little social engineering gets you the paper (I wrote the author and got it
> once).

I regularly have pain and annoyance trying to get hold of stuff that has been published in IEEE proceedings lines. Besides, we were talking about the time delay between academia and industry -- I don't think we can expect many companies to take out library subscriptions or risk $25 a pop on papers. But I think open access is another debate, albeit one that is, in my view, pretty unanswerable in principle in the internet age.

> Well I'm unconvinced but this seems to be one of those potayto-potahto kind
> of things. I do agree the situation can improve.

Well, everyone has their own priorities and different experience of the problems, and depending on your area of work (and the institutions you're affiliated with) different things may affect you more or less. I used to joke with various professors that they would have to die before we could solve the problems of research publication, because from their point of view the pros almost invariably outweighed the cons -- they were adept at working within the current system and had much at risk with alternatives. These were people I had great affection for, you understand, but I think the point wasn't entirely without merit -- not because they had a conflict of interest in a selfish way, but because their sense of responsibility, both towards the quality of their own work and towards their ability to positively influence the careers of their students, led them to prefer the system they knew over alternatives.