Re: [ECOLOG-L] Bayesian inference in ecology/biology

2018-09-10 Thread David Schneider

Hello Jorge,
See also the special section on Bayesian inference in

Ecological Applications, 6(4), 1996, pp. 1036-1094

Ellison and Dixon make the case that Bayesian inference can
(a) make better use of pre-existing data; (b) allow stronger
conclusions to be drawn from large-scale experiments with few
replicates; and (c) be more relevant to environmental decision-making.

They clearly prefer evidence-based priors (a, above).
As to environmental decision-making (c), that occurs in the
policy arena, where it is up to us as scientists to make the
best case on the evidence, and to insist on evidence, as in medical
practice.

Good luck,
David S

On 2018-09-10 21:07, David Schneider wrote:

Hello Jorge,

If you would like to look at an accessible text try:

Bayesian Methods for Ecology (Cambridge University Press, 2007)
by Michael A. McCarthy

The Wikipedia entry has a number of errors, so not to be trusted.

Speaking of being careful, there are no verifiable pictures of
Thomas Bayes.  The picture that is always shown is of a cleric
from a time similar to that of Bayes.

Speaking of being careful, you have to be *really astute*
to find anything like "Bayes' Theorem" in the Bayes (and Price)
article of 1763.  Try reading it.

For some background, a quick read:

When did Bayesian inference become "Bayesian"?
Stephen E. Fienberg
Bayesian Analysis
Volume 1, Number 1 (2006), 1-40.

For a well informed and tongue in cheek example
of Bayesian analysis, see:

Stigler, Stephen M (1983). "Who Discovered Bayes' Theorem?".
The American Statistician. 37 (4): 290–296.
doi:10.1080/00031305.1983.10483122.

In my experience Bayesian analysis makes sense with a
monitoring program, where we have evidence-based priors.
In fisheries research we have annual surveys, and we have
*lots* of subjective priors to deal with, to which Bayesian
analysis has an answer.

I think Bayesian makes sense in an activity such as NEON,
National Ecological Observatory Network, where I did a
Bayesian workshop in R.

The computational details require substantial
computer resources (MCMC, look it up), including a "burn-in"
period for the chains under whatever distributional
assumptions you choose.
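
As a minimal sketch of the evidence-based-prior idea, here is the
conjugate Beta-Binomial case in base R, with made-up numbers (a
hypothetical site-occupancy prior from earlier surveys, updated by this
year's survey); nothing here is from the NEON workshop.  No MCMC is
needed in the conjugate case, which is exactly why real, non-conjugate
models are where the MCMC machinery and its burn-in come in.

# Hypothetical prior, equivalent to 12 occupied sites out of 50
# from earlier surveys: Beta(12, 38).
prior_a <- 12
prior_b <- 38

# This year's (made-up) survey: 9 occupied sites out of 25 visited.
y <- 9
n <- 25

# Conjugate update: posterior is Beta(prior_a + y, prior_b + n - y).
post_a <- prior_a + y
post_b <- prior_b + (n - y)

# Posterior median and 95% credible interval for occupancy probability.
qbeta(c(0.025, 0.5, 0.975), post_a, post_b)

Swap in a vague Beta(1, 1) prior to see how much the evidence-based
prior tightens the interval.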

The differences in opinion within the Bayesian camp
can be as intense as the Fisher vs Neyman-Pearson wars
in the frequentist camp in the last century.

In the end though, the way we think, as humans, is Bayesian,
working from subjective priors.  The question is: what is the role
of Bayesian thinking in science, a public activity?

Good luck!
David S



On 2018-09-10 16:10, Jorge A. Santiago-Blay wrote:

Bayesian inference in ecology/biology

Dear Colleagues:

I am looking for papers, book chapters, etc. on the subject of
Bayesian inference in ecology/biology. I have found a few articles
available on the web (and already ordered a few books) but they do not
seem to provide an appropriate background for me (and I suspect for
the students who want to learn more about it).

If you have constructive suggestions, please send them directly to me
at

blayjo...@gmail.com

Sincerely,

Jorge

P.S. Apologies for potential duplicate emails.

Jorge A. Santiago-Blay, PhD
blaypublishers.com [1]

1. Positive experiences for authors of papers published in _LEB_
http://blaypublishers.com/testimonials/

2. Free examples of papers published in _LEB_:
http://blaypublishers.com/category/previous-issues/ [2].

3. _Guidelines for Authors_ and page charges of _LEB_:
http://blaypublishers.com/archives/ [3] _._

4. Want to subscribe to _LEB_?
http://blaypublishers.com/subscriptions/ [4]

http://blayjorge.wordpress.com/

http://paleobiology.si.edu/staff/individuals/santiagoblay.cfm


Links:
--
[1] http://blaypublishers.com
[2] http://blaypublishers.com/category/previous-issues/
[3] http://blaypublishers.com/archives/
[4] http://blaypublishers.com/subscriptions/




[ECOLOG-L] Post-doctoral positions available at KAUST

2017-08-08 Thread David Schneider

On 2017-08-08 12:19, Joanne I. Ellis wrote:

This is a quick request to please pass on the ads below, particularly
if you know of someone looking for a Postdoc or PhD opportunity in
marine sciences. Thank you all so much.

Joanne Ellis

We have two postdoctoral fellowships available within the Integrated
Ocean Processes group at KAUST (King Abdullah University of Science and
Technology).  The post-doc positions are for a three-year period and
include fellowships to study "Multiple stressors in the Red Sea" and
"Marine and Coastal Ecosystem Services".  The post-doctoral fellows and
PhD student would be working with faculty and
research scientists with experience in marine ecology, benthic
ecosystems, coral reef dynamics, genomics and oceanography. The
fellowships are open to candidates with a strong research background
across the natural sciences.

POST-DOCTORAL FELLOW:  MULTIPLE STRESSORS EFFECTS IN THE RED SEA
MARINE HABITATS

https://iop.kaust.edu.sa/Pages/Post%20Doc%20Multiple%20Stressors.aspx

POST-DOCTORAL FELLOW:  MARINE AND COASTAL ECOSYSTEM SERVICES

https://iop.kaust.edu.sa/Pages/post%20doc%20ecosystem%20services.aspx

PHD STUDENT POSITION:  EFFECTS OF CONTAMINANTS ON THE FITNESS OF CORAL
REEF ASSOCIATED SPECIES

https://iop.kaust.edu.sa/Pages/PhD%20studentship.aspx



Re: [ECOLOG-L] Editor bias in peer review

2017-05-20 Thread David Schneider

Hello Edwin,

You asked for examples of rejection and subsequent publication.
Here are 3 marine examples from gatekeeper journals
Nature 271:352  rejected by Science
American Naturalist 139:148  rejected by Marine Ecology--Progress Series
Phil. Trans. R. Soc. Lond. B 352: 633  rejected by Ecology

I think your baseline should include a full listing of the history
of a manuscript - each rejection of a manuscript by date, and  final
publication by date.
Otherwise your baseline is biased, isn't it?

Also, if you are doing an experiment, how will you allocate
experimental units?  If you are doing a survey, how will you achieve
a representative sample?

Over a long publication career, marked by my first peer reviewed
publication in 1978, I have experienced many rejections.  My view
is that science is not science until it is communicated.  And that
my role is to get the result published, even if not where I think
it deserves to be published.  That's my view.  Suum cuique.

Yours in the pursuit of evidence based results,
David Schneider

On 2017-05-19 13:41, Edwin Cruz-Rivera wrote:

Dear all,

   I apologize for the cross listing. We are trying to cover as broad
a canvas as possible:

In the past years, journals have increased the responsibilities of
editors-in-chief to the point that they have become gatekeepers of
their publications. The bottom line is that papers get sent out to
peer reviewers only when editors say so, if they deem the article to
be "of broad enough interest" to their readers.

Clearly, there is a spectacular number of problems with this (though
we do not seem to talk about them). For one, systematic bias can be
introduced in a multitude of ways: what terrestrial researchers
consider "hot topics" of "general interest" may not be the same as
what freshwater or marine ones do. I keep glancing at the
plant-herbivore interactions literature, seeing how marine papers often
cite terrestrial works, but not the other way around.

After talking to several colleagues, it seems that the trend is "I
(insert editor's name) don't think this is of general interest but it
is really good, so I recommend you submit your manuscript to this
journal of also general interest (open access journal from our
publisher that costs you thousands of dollars to publish in)." This,
frankly, seems like a dishonest practice; if it is good enough for one
general ecology journal it should be for another. Have we exchanged
fashion for quality? We want to know your opinion.

We would like to compile data on the frequency of such cases. Our
hypothesis is that the definition of "general interest" or "worthy of
peer review" in ecology is completely arbitrary and we will be
designing an experiment to test this, but we would like to establish a
baseline by asking for cases in which authors have felt their papers
have been rejected out of bias rather than merit. In order to narrow
the field, it will be important to have articles that were published
in journals after "broader" journals rejected them without peer
review.

Your responses will be kept confidential,

Edwin

=
Dr. Edwin Cruz-Rivera
Associate Professor
Department of Biological Sciences
University of the Virgin Islands
#2 John Brewers Bay
St. Thomas 00802
USVI
Tel: 1-340-693-1235
Fax: 1-340-693-1385

"It is not the same to hear the devil as to see him coming your way"
(Puerto Rican proverb)


Re: [ECOLOG-L] Question about authorship

2017-02-24 Thread David Schneider

Hello Gabriel,

There are clear guidelines from the
International Council of Medical Journal Editors, ICMJE,
to answer your questions.

http://www.icmje.org/recommendations/browse/roles-and-responsibilities/

More on the topic can be found with a Google search: Vancouver protocol

I wonder if the prof with whom you worked knows about these guidelines.
Does the student who contacted you know there are guidelines?

Best of luck,
David Schneider

On 2017-02-23 17:04, Gabriel Chavez wrote:

Hello ECOLOGers,

My name is Gabriel and I had a question to pose concerning use of
authorship on a scientific paper. I worked on a long-term study regarding
carbon sequestration and nutrient cycling in Pacific Northwest forests on a
permanent plot network with other faculty and undergraduates at my college.
We had the data and were interpreting it but hadn't published any papers or
sent anyone to any conferences with the results.

I have since graduated from said college, and I recently learned that a
student took over that previous work we had conducted and is planning on
submitting an abstract to the ESA, and wanted to know if I wanted to be
included as a co-author on the formal paper that is being published (of
course I want to be included). My question to all of you is: in what
capacities am I "allowed" to use this paper or results that come out of the
study? For example, showcasing this abstract or paper on my LinkedIn page,
including it in graduate school applications, that sort of thing. What, in
your opinion, are the limits in which I can use this work? Thank you.


Re: [ECOLOG-L] Should Calculus Be Required of All Ecology/Biology Majors?

2016-10-18 Thread David Schneider

Hi Howie Neufeld,

I'm going to suggest that, rather than requiring a particular
course such as calculus, you identify a set of courses that
address quantitative reasoning based on the logic of mathematics
as it applies to measured quantities.

This set might include calculus courses that emphasize practical
applications of the rules of calculus.
It might include statistics courses that focus on writing the
model instead of 'name the test' or 'get the p-value.'
It might include a course in experimental design.
It might include physics or chemistry courses where there is
application to biology.
It might include a course in physiology, a discipline with
a strong and long established quantitative basis.
It might include a course in genetics where the quantitative
analysis is central.
It might include a course in network theory, as it applies
to biological systems.
And almost any course labelled 'modelling' would be part
of such a list.

Any list of courses will be constrained by what is on the go
at any one university, or what can be arranged at a distance.

Good luck!
David S

On 2016-10-18 17:34, Andrew Wright wrote:

Hi all,

I agree with Carrie too and I already responded to Howard, but here's my
response again for you all:

The answer is simply 'yes'. Although biology was the science for
non-mathematicians back in the day, more and more modelling is coming
into the discipline and students will need a reasonable mathematical
foundation to cope in biology in the future. Even if only a basic
foundation is provided, this will help students understand innovative
statistical approaches and more complex models that touch on their
fields, even if they are unable to use them themselves.

More generally there should be more maths requirements in Biology.
Otherwise students will simply fall behind.

Andrew

--
Andrew Wright, Ph.D.

VaquitaAreBrowncoats: Where Sci-Fi meets Science, the Cosmos meets
Conservation and Firefly meets Flipper. Shiny
https://www.facebook.com/vaquitaarebrowncoats.

"We don't have to save the world. The world is big enough to look
after itself. What we have to be concerned about is whether or not the
world we live in will be capable of sustaining us in it." Douglas
Adams

GNU Terry Pratchett
On 19 October 2016 at 06:20, John Anderson  wrote:


I am fascinated by this discussion and would love to hear more
points of view.  As far as Carrie's excellent post goes, I guess I am not
sure why one would expect a Calculus course to do her 6 points any
more than many other classes?  I was required to take two terms of
calculus as an undergrad Zoology major back when there were such
majors, plus a year of physics.  We had to take a year of physical
Chemistry before we could take Biology, and then could only enroll
in Biology if we simultaneously took Organic Chem.  It always seemed
to me that a LOT of these classes were more about getting rid of
people than educating them.  Weirdly, stats was NOT required.  In
all the years since I have used calculus (briefly) in a course on
theoretical population biology, I use Chemistry primarily when I
teach physiology, but professionally I use Stats all the time.
Talking with colleagues, this pattern seems by no means unique.
Thoughts?

On Tue, Oct 18, 2016 at 11:04 AM, Joseph Russell
 wrote:

I agree with Carrie here! When I was a Marine Biology undergrad at
Stockton University in NJ, we were required to take two semesters of
physics. However, the physics I and II courses that we took were not
the same as would have been taken by a physics major. Our Physics
courses were titled "physics for life sciences" which narrowed down
the concepts to those that applied to people in the life sciences
field. I believe the calculus courses that we were required to take
were standard calculus, but I could see something like this working
as well, where the calculus courses would not be like a calculus
course taken by a math major, but rather, the curriculum would be
designed so that the concepts and learning objectives would suit the
field of study. Carrie has provided an excellent list below with the
6 points of valuable competencies for prospective biologists.

JOSEPH RUSSELL, MNR

_Wildlife Management and Recreational Planning Research Fellow_

Stockton University

Galloway, NJ 08205

(609) 287-0596 [9]

joseph.russ...@stockton.edu

www.stockton.edu [10]
Sent from my iPhone

On Oct 18, 2016, at 10:18 AM, Carrie Eaton  wrote:

Hi all,

I responded with a few details already to Howard.  But I’ll just
generally say that if you are thinking about curricular redesign,
I’d like to suggest backward design based on concepts and
competencies that employers need and which have been well identified
by many national level reports. For example, Vision and Change.
Vision and Change identifies 6 vital competencies for all biology
students:

1.  ABILITY TO APPLY THE PROCESS OF SCIENCE

2.  ABILITY TO USE QUANTITATIVE REASONING

3.  ABILITY TO USE MODELING AND SIMULATION

4.  AB

Re: [ECOLOG-L] Query on authorship -taking responsibility

2016-08-22 Thread David Schneider

Hello all,
The ESA criteria are essentially the same as the
'Vancouver protocol' by the International
Council of Medical Journal Editors.

 See

http://www.icmje.org/recommendations/browse/roles-and-responsibilities/defining-the-role-of-authors-and-contributors.html

In the case at hand, if a person cannot be located, they cannot take 
responsibility.

And so cannot be listed as coauthor under the ESA and ICMJE criteria.

A note in the acknowledgements as to contribution would be appropriate.

Yours in ethical publishing,
David S

On 2016-08-22 11:55, Cliff Duke wrote:

The Ecological Society of America. See
http://www.esa.org/esa/about/governance/esa-code-of-ethics/

-Original Message-
From: Aaron T. Dossey [mailto:bugoc...@gmail.com]
Sent: Monday, August 22, 2016 10:22 AM
To: Cliff Duke ; ECOLOG-L@LISTSERV.UMD.EDU
Cc: Dr. Aaron T. Dossey ;
all.things.b...@gmail.com
Subject: Re: [ECOLOG-L] Query on authorship


Is that Entomological Society or Ecological Society?

* A code written of, for and by professors I am sure nonetheless.


On 8/22/2016 8:15 AM, Cliff Duke wrote:
Concerning the recent "query on authorship," there is no ethical 
ambiguity. The ESA Code of Ethics is quite clear on that point:


"Researchers will not add or delete authors from a manuscript 
submitted for publication without consent of those authors.
Researchers will not include as coauthor(s) any individual who has not 
agreed to the content of the final version of the manuscript."


If you can't locate the person who contributed, that person can't
consent to the manuscript. Just note their contribution in the
acknowledgements section of the paper.



ATD of ATB and ISI


Re: [ECOLOG-L] fixed vs. random effects in field research

2016-05-17 Thread David Schneider
Here is how Quinn and Keough (2002, Cambridge University
Press) address the distinction between random and fixed effects.
_
8.1.1 Types of predictor variables (factors)
There are two types of categorical predictor variables
in linear models. The most common type is
a fixed factor, where all the levels of the factor (i.e.
all the groups or treatments) that are of interest
are included in the analysis. We cannot extrapolate
our statistical conclusions beyond these specific
levels to other groups or treatments not in
the study. If we repeated the study, we would
usually use the same levels of the fixed factor
again. Linear models based on fixed categorical
predictor variables (fixed factors) are termed fixed
effects models (or Model 1 ANOVAs). Fixed effect
models are analogous to linear regression models
where X is assumed to be fixed. The other type of
factor is a random factor, where we are only using
a random selection of all the possible levels (or
groups) of the factor and we usually wish to make
inferences about all the possible groups from our
sample of groups. If we repeated the study, we
would usually take another sample of groups
from the population of possible groups. Linear
models based on random categorical predictor
variables (random factors) are termed random
effects models (or Model 2 ANOVAs). 
__

In the Grossman query (below) temperature, rainfall, and
density would likely be fixed because they are of interest -- 
the contrasts would be of interest across  the particular
values of temperature,  rainfall, and density.  Inference
would be only to the measured values and their contrasts.
All three variables become fixed if fitted as a regression
instead of as categorical variables. 
Temperature might be taken as a random variable over a small 
range, but would not be credible as a random variable over 
a wide range, given its profound effect on biological processes. 
Location would be either random or fixed, depending on whether 
the inference was to only those 3 sites at the stated dates of 
measurement (fixed), or to all possible sites in some stated 
area (random),  or to the hypothetical population of a very 
large number of repetitions at those sites (random, as above).
If the locations were known to differ in some salient 
biological way, such that they could be ordered as to
expected effect, location could be legitimately treated as fixed.

The choice of random versus fixed categorical variable lies with 
the judgement and knowledge of the biologist.
A good statistician will demur on demands for hard and fast rules.
A good  statistician will instead probe the biologist as to the 
scope of inference, then help the biologist form the 
correctly nested (log) likelihood ratio (as in Quinn and 
Keough or any of many texts).  The likelihood ratio is
key - in either a decision theoretic context (as in Quinn
and Keough) or with inference from a prior to a posterior 
probability, if that is what you want to do. 
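
To make the 'write the model' point concrete, here is a minimal sketch
in R.  It assumes a hypothetical data frame d with columns density,
temp, and site (none of this comes from the Grossman query), and the
lme4 package for the random-intercept version.

# Hypothetical data frame d with columns density, temp, and site.
library(lme4)

# Site as a fixed factor: inference is restricted to these particular sites.
m_fixed <- lm(density ~ temp + site, data = d)

# Site as a random intercept: inference is to a population of sites,
# of which these are treated as a sample.  Fit by ML (REML = FALSE)
# so that log likelihoods are comparable across models.
m_random <- lmer(density ~ temp + (1 | site), data = d, REML = FALSE)

# The (log) likelihood ratio for the fixed-site model against a no-site model:
m_null <- lm(density ~ temp, data = d)
LR <- as.numeric(2 * (logLik(m_fixed) - logLik(m_null)))
pchisq(LR, df = nlevels(factor(d$site)) - 1, lower.tail = FALSE)

The choice between m_fixed and m_random is the scope-of-inference
judgement described above; the software does not make it for you.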

~ David Schneider




Quoting "Street, Garrett" :

> There is also an excellent section on what constitutes a random or fixed
> effect in Tom Hobbs and Mevin Hooten's "Bayesian Models: a Statistical Primer
> for Ecologists" using fecundity of spotted owls (adapted from Clark's work on
> the subject), and again using hypothetical sampling of aboveground biomass,
> as examples. Both examples are accompanied by clear and concise explanations
> of the implications for the underlying distributions and assumptions of the
> model one might seek to fit, and for the ecology informing the models.
> 
> Garrett Street
> Assistant Professor
> Wildlife, Fisheries, and Aquaculture
> Mississippi State University
> 
> On May 17, 2016, at 4:34 PM, Brian Church
> <church...@gmail.com> wrote:
> 
> There is a fairly detailed discussion of fixed vs. random effects on
> CrossValidated here:
>
http://stats.stackexchange.com/questions/4700/what-is-the-difference-between-fixed-effect-random-effect-and-mixed-effect-mode
> 
> Based on the discussion there, it seems like temperature, rainfall, and
> density could all be considered to be random effects for the following
> reasons:
> 1. You are unlikely to sample the entire populations for those variables.
> 2. They are not being controlled
> 3. They are likely continuous and distributed in some way (e.g., normal)
> rather than discrete values
> 4. You are unlikely to be interested in responses at a specific temperature,
> rainfall, and density; rather, it seems more interesting to understand
> effects relating to the underlying distributions of those variables.
> 
> Those commenting in the CrossValidated forum cite a few sources, though they
> seem to be general/mathematical rather than ecology-specific. Hope tha

Re: [ECOLOG-L] The speed of peer-review

2016-04-27 Thread David Schneider
Neil,
Thanks for posting two recent publications on speed of
review. My experience with the review process as an 
author goes back to 1978 (Nature 271).  My experience as a
reviewer began in 1988.  For a decade I was responsible
for the review process at a leading journal in marine
science.

The PLOS ONE article was consistent with my experience.
The Cooke et al. article collided with my experience.
Set against the 'need for speed' we have the equally
alliterative 'haste makes waste.'

As to science and the 'need for speed', imagine yourself
as the science person on a panel, charged by the US FDA with putting
science to the decision on fast-tracking trials of a new drug.
And you, the science person, are put in a room full of weeping
parents, and their children in wheelchairs, all doomed to
die before age 30, all parents firmly convinced the new drug
is efficacious.  At the same time, from experience you know
that new drugs can have tragic side effects (e.g. thalidomide).
[Search Google Images for thalidomide, if the term is
outside your experience.]

I've writ it large to make the case.  Some delay means the review
process is working, compared to ROM (read only memory) publication
in an ejournal that promises rapid publication once you pay
the publication fee.

Yours in science in the public interest,
David S








Quoting Neil Hammerschlag :

> Edwin et al
> 
> Here are two recent papers that evaluate author perspectives on review times
> and possible implications for conservation.
> 
> 
> Haddaway NR, Gutowsky LFG, Wilson ADM, Gallagher AJ, Donaldson MR,
> Hammerschlag N, Cooke SJ. (2015) How Long Is Too Long in Contemporary Peer
> Review? Perspectives from Authors Publishing in Conservation Biology
>
> Journals.
> PLoS ONE 10(8): e0132557.
> 
> Cooke SJ, Nguyen VM, Wilson AD, Donaldson MR, Gallagher A, Hammerschlag N,
> Haddaway NR. (2016) The need for speed in a crisis discipline: perspectives
> on peer review duration and implications for conservation
> science. Endangered
> Species Research 30: 11-19
> 
> Cheers
> 
> Neil
> 
> 
> 
> 
> 
> Neil Hammerschlag, Ph.D. 
> Research Assistant Professor
> Rosenstiel Marine School (RSMAS) | Abess Center (CESP)
> Predator Ecology Lab | Shark Research & Conservation Program (SRC)
> University of Miami
> 
> e: nhammersch...@rsmas.miami.edu
> o: 305.421.4356 | c: 305.951.6577 | t:
> @DrNeilHammer
> 
> Lab Website: SharkTagging.com
> 
> 
> 
> 
> On Apr 26, 2016, at 10:58 AM, Edwin Cruz-Rivera
> <edwin.cruzriv...@uvi.edu> wrote:
> 
> Dear All,
> I am very curious about the life cycle of manuscripts in
> online journals these days. I have been doing some numbers on PLOS One, which
> advertises as the journal “accelerating the publication of peer-reviewed”
> science. However, a quick look at the papers that have been published in the
> past few months reveals most of these were accepted 5-9 months after
> submission. What strikes me as odd is that PLOS One gives you two weeks to
> review a manuscript, and they start pestering you with reminders even before
> the review is late…and may you not be late for 48 hours! So how does a
> journal that expects such a fast turnaround from peer reviewers deal with
> authors at such glacial pace? To begin with, it is not as if publication
> comes cheap in this journal. Should 1250 USD not include a bit of expediency?
> The numbers here seem odd. We have had a paper stuck in limbo since November
> 2015 without a final answer yet, supposedly because they cannot find an
> editor (out of > 6000) who can manage the revised version of the paper.
> So the key question is, I suppose: Is this seemingly epic sluggishness the
> norm in open access/online publication these days?
> At this point, I am not really convinced PLOS One should be advertising as
> “the fast one”…or is it?
> Any thoughts?
> 
> Edwin
> =
> Dr. Edwin Cruz-Rivera
> Visiting Associate Professor
> Department of Biological Sciences
> University of the Virgin Islands
> #2 John Brewers Bay
> St. Thomas 00802
> USVI
> Tel: 1-340-693-1235
> Fax: 1-340-693-1385
> 
> "It is not the same to hear the devil as to see him coming your way"
> (Puerto Rican proverb)
> 
> 


Re: [ECOLOG-L] humorous papers

2015-06-08 Thread David Schneider
For a humorous application of inverse probability
(which somehow came to be called Bayesian statistics)
see
Stigler, S.M. 1983.  Who discovered Bayes's Theorem?
The American Statistician 37:290-296

David Schneider


Quoting Gary Grossman :

> I'm looking for funny articles published and a few come to mind that I
> can't remember citations for so I thought I'd ask here.  I don't really
> want to page through J. Irreproducible Results or Worm Runner's Digest but
> there are a few I'm hoping someone can help me with (vis-a-vis pdfs)
> 
> In either the late 70's or 80's there was a note in Nature that comprised
> the poem and reviewers' comments on Shelley's "Ozymandias"
> 
> Then at about the same time someone published a paper in Limn. & Ocean.
> estimating the biomass of the Loch Ness monster.
> 
> And also at some point someone published a satirical paper on "if no one
> heard it, did the tree in the forest really fall?"
> 
> Of course any other humorous gems would be appreciated.
> Please remember the list doesn't allow attachments, so please respond to my
> university email.
> 
> TIA, g2
> 
> 
> 
> 
> -- 
> Gary D. Grossman, PhD
> 
> Professor of Animal Ecology
> Warnell School of Forestry & Natural Resources
> University of Georgia
> Athens, GA, USA 30602
> 
> http://grossman.myweb.uga.edu/ <http://www.arches.uga.edu/%7Egrossman>
> 
> Board of Editors - Animal Biodiversity and Conservation
> Editorial Board - Freshwater Biology
> Editorial Board - Ecology Freshwater Fish
> 


Re: [ECOLOG-L] Leaving science?

2014-07-28 Thread David Schneider
Hey Allison,
Most of the grad students with me have been MSc students.
I encourage them to cast the net wide to find a
rewarding professional career.  Often that means
a series of hires, each with some new set of skills
to learn.  And I encourage them to think about 
any institution, whether it be NGO, consulting,
federal or provincial/state or corporate.
Jobs of former MSc ecology students include
provincial wildlife division, a medical testing firm, 
local power utility, federal fisheries, information 
officer with a province, the Pacific Whale Foundation,
an aquarium, a medical records company, several 
consulting firms.  And others.  You have valuable technical
skills.  Leaving grad school and the attendant research 
environment does not mean leaving behind your technical
skills.  The US, just like Canada where I live, needs
your technical skills, which are transferable and valuable.

Are you on LinkedIn?  If not, have a look at it.

Best,
David S.
http://www.mun.ca/osc/dschneider/bio.php


Quoting "Allison F. Walston" :

> Hey everyone
> 
> I graduated with my MS in ecology earlier this year and I was able to get a
> temporary job after graduation. However, the job will be ending shortly and
> they won't be able to make any permanent hires in the foreseeable future. I
> have a few other irons in the fire, but I am growing increasingly skeptical
> that any of them will pan out. 
> 
> I know a lot of people are in a similar situation given the job market and
> I've recently started thinking about looking outside of science. I did well
> during grad school and gained a lot of analytical skills. However, I can
> imagine the confusion my grad degree would cause for a potential employer
> outside of science/biology/conservation. 
>  
> Has anyone else made the decision to leave science shortly after grad
> school? What sort of things are career paths are worth looking into?
> 
> Any advice would be greatly appreciated.
> 
> Allison
> 


Re: [ECOLOG-L] questionable publications

2014-06-10 Thread David Schneider
Hello Lui,
Here are three examples that got past the
review process to publication, but were found to be
fraudulent: multivitamins, MMR vaccine/autism, and skin grafts.

All 3 were fraudulent, and so I think it is appropriate to
name names.

Chandra, Ranjit Kumar. "Effect of Vitamin and Trace-element Supplementation on
Cognitive Function in Elderly Subjects." Nutrition 17.9 (2001): 709-12. 

Wakefield A, Murch S, Anthony A et al. (1998). "Ileal-lymphoid-nodular
hyperplasia, non-specific colitis, and pervasive developmental disorder in
children". Lancet 351 (9103): 637–41. doi:10.1016/S0140-6736(97)11096-0. PMID
9500320. Retrieved 2007-09-05. (Retracted, see PMID 20137807)

Summerlin, W. T., Miller, G. E., Good, R. A. (1973) Successful tissue and organ
transplantation without immunosuppression. J. Clin. Invest. 52, 34a

google:  MMR vaccine controversy, Ranjit Chandra, William Summerlin

Shortcomings can be hard to spot with fraudulent papers.

Shortcomings are often easier to spot in papers where there
is no obvious intention of fraud. Here is a publication
where the data presented support a conclusion opposite to
that drawn by the authors.

Mar. Biol. 9: 63-64

In this case I think the authors deserve credit for presenting 
data in a way that allows re-analysis.  Often that is not the 
case - the route from Tables and Figures to conclusion is 
inscrutable.  

Many students won't have the statistical background to spot the 
error in Mar. Biol. 9: 63-64

You may  wish to consider asking students to look at the 
guidelines for reviewers from a journal of their choice,
then apply the guidelines to 3 articles in the same journal. 

Then have the class share the results.  Some students will
find problems, some won't.  The class experience  provides some
sense of the diversity of errors that reviewers spot, and the
prevalence of errors in the refereed literature.

With kind regards,
David Schneider

Quoting Lui Marinelli :

> Hope this isn't out of order... years ago, a teacher had us review some bad,
> peer reviewed, published articles, to show us that what is published isn't
> necessarily gospel; we need to look at it with a critical eye.  Basically,
> these were publications that had obvious shortcomings.  The first were quite
> easy to identify the problem in, and then they got tougher.  I'd like to use
> similar publications to teach a similar lesson to my students... any examples
> of publications come to mind?
>  
>  
> Lui 
> 
> Lui Marinelli, PhD
> VP Contract Administration, SCFA
> Instructor, School of Environment and Geomatics (formerly Renewable
> Resources)
> Selkirk College
> 301 Frank Beinder Way
> Castlegar, BC
> V1N 3J1
> CANADA
> 
> (250) 365-1269
> ( tel:2503651269) 
> lmarine...@selkirk.ca
> 


Re: [ECOLOG-L] puffin problems, decadal scale evidence

2014-05-29 Thread David Schneider
Dear David,
Thanks for posting the article with its long list of anomalous ecosystem
changes in the Gulf of Maine.  The puffin in Maine is indeed a
canary: it is an introduced (and vigorously fostered) species
in Maine.  Its normal range is Iceland (millions!) with a North
American outpost in Canada (tens of thousands at Witless Bay,
Newfoundland).  An introduced species at the edge of its normal
range strikes me as a bellwether of longer term trends.

Moving from bells on a wether to longer term trends, here is
something to think about: lobsters were once commercially
harvested in Virginia.  I came within a trice of
rescuing the lobster data recorded by a scientist at VIMS
(Virginia Institute of Marine Science).  His files were discarded
at his death.

In my view bells on a wether are useful.  On the science side
we need evidence, over decades.  

Best regards,
David S.


 





Quoting David Inouye :

>
http://www.motherjones.com/environment/2014/04/gulf-maine-puffin-climate-change
> 
> 
> 
> is an interesting article about the changes in the ecology of the 
> ocean in Maine, and how it's affecting the reintroduced puffin 
> population (and others). 
> 


[ECOLOG-L] fact checking at journals

2014-05-03 Thread David Schneider
Or, if you are qualified to review certain aspects
> but not others, then restrict your comments to the things you actually
> have a clue about.
> 
> For example, my favorite peer review was my submission to a certain
> not-for-profit open-access journal published by a certain well known
> organization.  I sent a paper to its One journal and got the review,
> signed by a Physicist who had no background in anything the paper was
> about.  When a reviewer reviews a paper they have no expertise, heck
> no background in, the article has not been peer reviewed.  It has been
> read by someone, possibly in an editorial capacity.
> 
>   As long as we are sucker bait for every published paper, and peer
> review manuscripts we are completely unqualified to peer review, and
> editors allow this to happen, and readers allow papers to go without
> question, things will not work right.  As long as we question
> everything, it will.
> 
> On Sat, May 3, 2014 at 3:33 PM, David Schneider 
> wrote:
> > Dear ecologers,
> >
> > The question posed in the quote on fact-checking
> > from David Duffy is:
> > Who should undertake the cost of fact checking?
> > The quote gives 3 answers: warranty by the author, make the
> > data available, in-house by the publisher.
> >
> > In my experience, as reviewer and in an editorial capacity,
> > one can expect a reviewer to check the facts (results) against
> > interpretation.  But it is often difficult for a reviewer to
> > fact check the results, simply because the data are of necessity
> > presented in summary fashion.  Here are two examples.
> > If I see a confidence limit that includes zero,
> > where the data are counts, I can be fairly sure as a fact
> > checker that the wrong error distribution was used to
> > compute the ci.  If I see a standard error or standard
> > deviation on count data, such that the variance is not at
> > least twice the mean, then I am suspicious, but I can't
> > check it, in the absence of the data.
> >
> > In my view the advent of online data archives are an important
> > step.  I would also suggest that journals need  to do more
> > than ask the author to warrant that they checked the facts.
> > I would suggest that a journal ask an author how they checked
> > the facts.
> >
> > That's my view.  What is your take on who should undertake
> > the cost of fact checking?
> >
> > Best,
> > David S.
> >
> >
> > Quoting Malcolm McCallum :
> >
> >> It is REALLY easy to screw up a figure, table or number set in a text
> >> if you have no one to review it before submission.  IF you are a peer
> >> reviewer, this is one of the things you probably should be looking at.
> >>  Do the numbers make sense?
> >> Peer review isn't there just to screen out garbage, its also there to
> >> assist authors. This is especially the case when an editor selects a
> >> reviewer specifically because of their expertise in a particular area.
> >> I recall once as an editor that I sent a paper that involved some
> >> fancy modeling to a mathematical modeler to review the math.  It was
> >> outside of what I did.  She said he didn't know anything about the
> >> biology, and I told her that was easily covered by the other two
> >> reviewers, I just wanted to make sure the math was not tom-foolery.
> >> More of this needs to happen in peer review.  I see a lot of papers
> >> that misuse different techniques.
> >>
> >> For example, I recall a paper published in one big ecology journal in
> >> which they used Bayesian statistics, and misinterpreted the sets.  They
> >> said something had an effect, when the graph and stats clearly
> >> indicated there was no effect!!!  So, the paper ended up widely
> >> covered in the news and people assumed it was what it said, when what
> >> it spent 4-5 pages discussing was complete rubbish.  I've also seen
> >> interval analysis used where fuzzy sets should be used, and the misuse
> >> and over-use of monte carlo analysis is just over the top.
> >>
> >> Monte Carlo is only supposed to be used when you have a very great
> >> understanding of the system and very few assumptions and hopefully not
> >> a lot of unpredictable influences.  This is actually not all that
> >> common in ecology and environmental work.  yet, Monte Carlo is used
> >> and abused by simply "Assuming" things are that might not be.  When
> >> you do this with MC you can get VERY wrong answers and there is
> >> vi

Re: [ECOLOG-L] fact checking at journals

2014-05-03 Thread David Schneider
Dear ecologers,

The question posed in the quote on fact-checking 
from David Duffy is:
Who should undertake the cost of fact checking?
The quote gives 3 answers: a warranty by the author, making the
data available, and in-house checking by the publisher.

In my experience, as reviewer and in an editorial capacity,
one can expect a reviewer to check the facts (results) against
interpretation.  But it is often difficult for a reviewer to 
fact check the results, simply because the data are of necessity 
presented in summary fashion.  Here are two examples. 
If I see a confidence limit that includes zero,
where the data are counts, I can be fairly sure that the
wrong error distribution was used to compute the ci.
If I see a standard error or standard 
deviation on count data, such that the variance doesn't
increase with the mean, then I am suspicious, but I can't 
check it, in the absence of the data. 
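
For what it is worth, the first check is easy to illustrate in base R
with made-up counts (nothing here comes from any particular paper): a
normal-approximation interval on sparse counts can dip below zero,
while a Poisson model with a log link cannot.

# Made-up count data with a low mean.
x <- c(0, 0, 0, 0, 0, 0, 0, 0, 1, 3)

# Naive normal-approximation 95% ci on the mean count: lower limit < 0.
mean(x) + c(-1.96, 1.96) * sqrt(var(x) / length(x))

# Write the model instead: Poisson errors, log link.
fit <- glm(x ~ 1, family = poisson)

# Wald 95% ci on the log scale, back-transformed: bounded above zero.
exp(confint.default(fit))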

In my view online data archives are an important
step.  I would also suggest that journals need  to do more
than ask the author to warrant that they checked the facts.
I would suggest that a journal ask an author to state how 
they checked the facts, as presented in their Results.  

That's my view.  What is your take on who should undertake 
the cost of fact checking?

Best, 
David S.


Quoting Malcolm McCallum :

> It is REALLY easy to screw up a figure, table or number set in a text
> if you have no one to review it before submission.  IF you are a peer
> reviewer, this is one of the things you probably should be looking at.
>  Do the numbers make sense?
> Peer review isn't there just to screen out garbage, its also there to
> assist authors. This is especially the case when an editor selects a
> reviewer specifically because of their expertise in a particular area.
> I recall once as an editor that I sent a paper that involved some
> fancy modeling to a mathematical modeler to review the math.  It was
> outside of what I did.  She said he didn't know anything about the
> biology, and I told her that was easily covered by the other two
> reviewers, I just wanted to make sure the math was not tom-foolery.
> More of this needs to happen in peer review.  I see a lot of papers
> that misuse different techniques.
> 
> For example, I recall a paper published in one big ecology journal in
> which they used Bayesian statistics, and misinterpreted the sets.  They
> said something had an effect, when the graph and stats clearly
> indicated there was no effect!!!  So, the paper ended up widely
> covered in the news and people assumed it was what it said, when what
> it spent 4-5 pages discussing was complete rubbish.  I've also seen
> interval analysis used where fuzzy sets should be used, and the misuse
> and over-use of monte carlo analysis is just over the top.
> 
> Monte Carlo is only supposed to be used when you have a very great
> understanding of the system and very few assumptions and hopefully not
> a lot of unpredictable influences.  This is actually not all that
> common in ecology and environmental work.  yet, Monte Carlo is used
> and abused by simply "Assuming" things are that might not be.  When
> you do this with MC you can get VERY wrong answers and there is
> virtually no way to check it.  Fuzzy approaches are much more robust
> in this regard as is interval analysis. But, you hardly see anyone who
> knows how to use these things, or people are caught 20-30 years
> out-of-date thinking they are controversial.
> 
> The ideal way to do things is to use fuzzy sets to isolate your data
> sets to be used in monte carlo.  That way, you reduce the odds of
> going completely off tangent. However, no one seems to do this either.
>  it is pretty amazing because outside of ecology, the alternate
> methods are widely applied to many different situations.  Heck, they
> even have fuzzy monte carlo and fuzzy neural networks now.  But, that
> is an entire different topic.
> 
> The point is, I think it is very reasonable for an editor to select a
> peer reviewer from an outside field to check up on methods and
> techniques that are outside of his/her expertise, especially if these
> are highly technical and particularly novel.  A biologist is not always
> the best reviewer for some biology papers in such cases.
> 
> On Wed, Apr 30, 2014 at 1:11 PM, David Duffy  wrote:
> > "To address this, the publishers of clinical journals must do more to
> > ensure that someone takes responsibility for the fact-checking. That could
> > involve asking authors to guarantee that they have checked figures,
> tables,
> > text and abstracts for internal consistency. Publishers could require
> > authors to make available suitably anonymized data on each patient as
> > metadata to the study, so that readers can trace the source of any
> > discrepancy that might creep through. Or the publishers could reach into
> > their pockets and provide more in-house resources to perform the necessary
> > checking. What is not acceptable is for the situation to continu

Re: [ECOLOG-L] brief results of job search survey

2014-04-05 Thread David Schneider
Dear Malcolm,  

I appreciate your summarizing to a *large* job board
the experience of 136 ecologers.  Useful in my opinion 
to students in a lab where the only role
model is a researcher in an academic setting.

In my experience people committed to a career in ecology
take several positions along the way, and usually end up 
in a professional career in ecology.

You can never tell where commitment to career in ecology will 
lead you. As an example among many, a recent MSc student with
me (thesis on lobster fecundity) is currently in Maui, 
employed as Research Analyst at Pacific Whale Foundation.

With kind regards,
David S.



Quoting Malcolm McCallum :

> 136 job seekers were kind enough to fill out my survey on your job
> search in 2013-2014.
> A more thorough analysis will be hopefully published in consort with
> other data about jobs etc I have collected since 2001 or 2002 (can't
> recall the start date right now!).  I will state up front though that
> applicants with the most applications did not necessarily have the
> most interviews.  I'm not sure if this holds across the board as a
> trend, but 100-200 applications is a lot to send out while tailoring
> an application for each vacancy.
> 
> So, here it is.  Of the 136 respondents
> 34.6% applied for 1-5 academic jobs
> 20.6% applied for 5-10 academic jobs
> 21.3% applied for 11-20 academic jobs
> 10.3% applied for 21-30 academic jobs
> 6.6% applied for 31-40 academic jobs
> 3.7% applied for 41-50 academic jobs
> 0.7% applied for 51-60 academic jobs
> none applied for 61-91 academic jobs
> 2.2% applied for 100-200 academic jobs.
> 
> Then, of the 136 respondents...
> (57.4% got at least one phone interview)
> 
> 42.6% had no phone interviews
> 25% had only one phone interview
> 13.2% had two phone interviews
> 5.9% had three phone interviews
> 5.9% had four phone interviews
> 3.7% had five phone interviews
> 0.7% had seven phone interviews
> 0.7% had ten phone interviews
> 0.7% had thirteen phone interviews
> 
> Campus interviews
> (58.8% got at least one campus interview)
> 
> zero = 41.2%
> one = 33.8%
> two = 16.9%
> three = 4.4%
> four = 0
> five = 0.7%
> six = 0
> seven = 0
> eight = 0.7%
> nine = 0.7%
> ten = 0
> eleven = 0
> twelve = 0.7%
> 
> Job offers
> (29.4% got at least one offer).
> 
> zero = 67.6%
> one = 24.3%
> two = 4.4%
> three = 0
> four = 0.7%
> 
> 
> -- 
> Malcolm L. McCallum
> Department of Environmental Studies
> University of Illinois at Springfield
> 
> Managing Editor,
> Herpetological Conservation and Biology
> 
>  "Nothing is more priceless and worthy of preservation than the rich
> array of animal life with which our country has been blessed. It is a
> many-faceted treasure, of value to scholars, scientists, and nature
> lovers alike, and it forms a vital part of the heritage we all share
> as Americans."
> -President Richard Nixon upon signing the Endangered Species Act of
> 1973 into law.
> 
> "Peer pressure is designed to contain anyone with a sense of drive" -
> Allan Nation
> 
> 1880's: "There's lots of good fish in the sea"  W.S. Gilbert
> 1990's:  Many fish stocks depleted due to overfishing, habitat loss,
> and pollution.
> 2000:  Marine reserves, ecosystem restoration, and pollution reduction
>   MAY help restore populations.
> 2022: Soylent Green is People!
> 
> The Seven Blunders of the World (Mohandas Gandhi)
> Wealth w/o work
> Pleasure w/o conscience
> Knowledge w/o character
> Commerce w/o morality
> Science w/o humanity
> Worship w/o sacrifice
> Politics w/o principle
> 
> Confidentiality Notice: This e-mail message, including any
> attachments, is for the sole use of the intended recipient(s) and may
> contain confidential and privileged information.  Any unauthorized
> review, use, disclosure or distribution is prohibited.  If you are not
> the intended recipient, please contact the sender by reply e-mail and
> destroy all copies of the original message.
> 


Re: [ECOLOG-L] Advice Needed

2014-03-06 Thread David Schneider
Dear Heather,
When I ask the students who want to do an MSc with
me where they want to be in 5 years, what the student
wants in many cases is a professionally
rewarding career, which is a much larger space than
'wildlife manager' or 'research prof.'  That larger
space is often a sequence of positions, and includes
consulting firms, teaching at a community college,
going off to teach in Qatar, a corporate
position (green them from the inside), and state and
federal agencies beyond just the wildlife division.

An example I can give you, among many, is a student
who did an MSc with me, worked for an NGO in Labrador, then
with the Nature Conservancy, and is now with the provincially
regulated electrical power provider (think hydro
and minimizing environmental impacts of the electricity
that makes our lives possible).

Ecolog is a wonderful source of professionally rewarding
positions because of its size.  Ask yourself where you 
want to be, and cast your net as wide as you can, including
Ecolog.  

A Master's degree has become the entry standard for many positions.
It is not the only route to a professionally rewarding career
in your area of interest, as I know from the careers of
several people I know, now in their 20s and 30s.
You might want to think about casting your net
wide to a paid position outside academia, a state agency,
or an NGO, and use the time to decide on a non-profit management
master's program.

Best,
David S.
http://www.mun.ca/biology/dschneider/


Quoting "Watson, Heather (wats3...@vandals.uidaho.edu)"
:

>   Dear Fellow Eco-loggers,
> 
> I am desperately seeking advice. I am a recent graduate (degree in ecology
> and conservation biology with a minor in wildlife) who participated in campus
> clubs, volunteered, worked as a biology lab TA, ran my own research project,
> and maintained a decent GPA. I am trying to make my big break into the "real
> world" and have found it to be harsh and full of rejection.  I am trying to
> work towards a non-profit management position and am unsure how to further my
> progress. My thought is to go into a non-profit management masters program,
> however I would like to get more career experience first. Therefore, I am
> asking the skilled professionals of the eco-log world to give me their best
> advice on what kind of job I should be looking for now. As well as any
> suggestions for making a resume and cover letter that would make you want to
> hire someone. Some online videos suggest making them visually different by
> adding a picture or color and I felt that seemed unprofessional. How does a
> recent graduate like myself make it through the minefield of the job market
> into landing an interview and hopefully a meaningful job?
> 
> Sincerely,
> 
> A stressed graduate
> Any much appreciated responses can be sent to watson3...@outlook.com. Thank
> you very much for your time and consideration.
> 


Re: [ECOLOG-L] G-test with zero values

2014-02-12 Thread David Schneider
Hello Jason,
The 21st century approach to percent and count data
is to write the model, not search for the 'right test.' 

In my experience it is possible for 4th year undergrads
and 1st year grad students, with little stats experience,
to learn this approach.

Statistical analysis based on writing the statistical model 
can be carried out in almost all stat packages,
including SPSS and Minitab.  Not to mention SAS and R.  

Statistically adept readers of Ecolog will recognize
the problems that zeros create when analyzing percent or count data,
once one has learned to write the model.  These include
too many expected values near zero, or other
problems such as zero-inflated counts.

I trust they will hold off on such problems -- in my view 
the first and most important step for you is grasping the 
idea of writing the model that captures your conceptualization of 
the research question and operating hypotheses,
instead of searching for the 'right test.'
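
To make 'writing the model' concrete, here is a minimal sketch in R.
The data frame, variable names, and numbers are invented purely for
illustration; the two families shown are the standard starting points
for proportion and count responses.

  # Hypothetical data: 'hits' out of 'n' point-intercept trials per plot,
  # plus a raw count response; note that zeros need no transformation here
  set.seed(1)
  d <- data.frame(
    treatment = rep(c("control", "burn"), each = 10),
    hits      = c(0, 3, 5, 0, 2, 7, 1, 0, 4, 2, 8, 12, 9, 15, 0, 11, 10, 13, 7, 9),
    n         = 20,
    counts    = rpois(20, lambda = 4)
  )

  # Percent-cover style data expressed as successes/failures:
  # binomial GLM with a logit link
  m_prop <- glm(cbind(hits, n - hits) ~ treatment, family = binomial, data = d)

  # Count data: Poisson GLM with a log link; MASS::glm.nb is the usual
  # next step if the counts turn out to be overdispersed
  m_count <- glm(counts ~ treatment, family = poisson, data = d)

  summary(m_prop)
  summary(m_count)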

In the fall term of 2013 a highly motivated grad student 
with at best a tenuous grasp of algebra learned this 
approach.  If she can learn to write the model, and 
execute it, and interpret the result, and check the
assumptions, then you can.

Wishing you the best,
David S.
http://www.mun.ca/biology/dschneider/


> > On Wed, Feb 12, 2014 at 12:56 AM, Jason Hernandez <
> > jason.hernande...@yahoo.com> wrote:
> >
> >> Some time ago, I inquired about ways to analyze percent cover data, and
> >> one of the suggestions was to test for heterogeneity.  The snag, however,
> >> is that this requires multiplying each cell value by its natural log.  My
> >> data set has a lot of zero values, which are important to keep; but of
> >> course there is no natural log of zero.  Is there a way to adjust the
> >> analysis to included these zero values?  i have not managed to find
> >> anything on this.
> >>
> >> Jason Hernandez


Re: [ECOLOG-L] Questions about Data Types

2013-10-31 Thread David Schneider
Hello Paul,
The conventional classification is due to Stevens (1946):
nominal, ordinal, interval, ratio.

Current practice in statistics for ordinal responses, such as
Likert items, is ordered logistic regression, which treats the
response as ordered categories and models the cumulative category
probabilities rather than treating the scores as interval data.
The most readable text is Hoffmann (2004), Generalized Linear
Models.

There are many other GzLM texts.
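
As a sketch of what that looks like in practice, a cumulative-logit
(proportional odds) model can be fit in R with MASS::polr.  The data
and variable names below are hypothetical, invented only to show the
call.

  library(MASS)   # provides polr()

  set.seed(1)
  # Hypothetical Likert responses (1 = Strongly Disagree ... 5 = Strongly Agree)
  d <- data.frame(
    response = factor(sample(1:5, 100, replace = TRUE), ordered = TRUE),
    course   = sample(c("lecture", "lab"), 100, replace = TRUE)
  )

  # Ordered (cumulative logit) logistic regression
  m <- polr(response ~ course, data = d, Hess = TRUE)
  summary(m)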

Hope that helps,
David Schneider

Quoting "Klawinski, Paul" :

> Hi all,
> 
> I am trying to educate my colleagues about some things and would like to ask
> you some questions as you are members of a talented and learned society.
> I am aware of the controversy so am interested in what I would consider
> common knowledge so, if you respond, please do so without consulting a
> stats text.
> 
> Feel free to answer off list.
> 
> What are the different types of data (nominal, ordinal, etc.)?
> 
> For each data type, list what types of statistical measures you can
> compute for each.
> 
> Which data type do Likert style data fall into (responses using phrases
> like Strongly Disagree, Disagree, Neither Agree nor Disagree, Agree,
> Strongly Agree even if you identify these responses with a number)?
> 
> Comments or questions?
> 
> Thanks,
> 
> Paul
> --
> Paul Klawinski, Ph.D.
> 
> Monte Harmon Professor of Biology | Department of Biology
> 816-415-7628 (office) | 816-781-7700 ext. 5565 (lab)
> 
> William Jewell College: Live What You Learn | www.jewell.edu
> Please join Jewell in conservation efforts.  Print this email only if
> necessary.
> 
> “Luck favors the prepared.”
> Edna “E” Mode
> 
> 






[ECOLOG-L] A response to E.O. Wilson's opinion about math

2013-05-16 Thread David Schneider
"Pioneers in science only rarely make discoveries by extracting ideas from pure
mathematics. Most of the stereotypical photographs of scientists studying rows of
equations on a blackboard are instructors explaining discoveries already made.
Real progress comes in the field writing notes, at the office amid a litter of 
doodled paper, in the hallway struggling to explain something to a friend, 
or eating lunch alone. Eureka moments require hard work. And focus."

E.O. Wilson, NY Times, 5 April 2013
Great Scientist ≠ Good at Math
E.O. Wilson shares a secret: Discoveries emerge from ideas, not
number-crunching


"Biomathematics is not merely a new application for existing mathematical
methods. You can't just pull an established mathematical technique off the
shelf and put it to use.  Biology requires - indeed demands - entirely new
mathematical concepts and techniques, and it raises new and fascinating
problems for mathematical research.
   If the main driving force behind new mathematics in the twentieth century was
the  physical sciences, in the twenty-first century it will be the life
sciences."

Ian Stewart.  The Mathematics of Life.  Basic Books.  2011.




Re: [ECOLOG-L] "The Audacity of Graduate School"

2012-10-17 Thread David Schneider
Hello Ecolog,
Here are my thoughts, written 11 PM from Boulder, CO.

Grad school is indeed audacious, and not a default choice.
As someone who spent 3 years on the 'dark side' (academic
admin) I know that there are *huge* differences among labs.
Some labs are very happy and students move to productive
professional lives.  Other labs are miserable.

My advice is, ask yourself why you are going to grad
school.  Then use the web to investigate labs. In addition
to contacting the prof, contact students in the lab and
ask them about their experience.  Like me, some profs 
encourage  prospective students to contact current and 
former students (maybe I'm weird).  

By the way, the numbers on NSERC (Canada) success rates quoted
below are misleading.  Success rates are low in some
programs, well above 70% in others.  For grad students, most
universities in Canada offer 20-25K/year in science, if you
meet academic standards and are accepted.  It's not
princely, but then it's only 2 years for an MSc, if you
find the right lab.  And it's mostly or entirely a stipend,
not a full-time TA position.

David S.
http://www.mun.ca/osc/dschneider/



Quoting "Aaron T. Dossey" :

> Actually, I would strongly recommend AGAINST grad school, or grad school 
> only as a last resort.  There are many ways to achieve a successful and 
> fruitful career while following your dreams, and many roads that do not 
> lead through a stint as a temporary under-paid technician/piece of 
> equipment (ie: grad student and postdoc/postech/postemp).
> 
> First, figure out what you want to do, then investigate what it takes to 
> get there.  You'll be surprised at how few careers actually require a 
> Ph.D., and how few careers which do require one actually exist/are 
> available.
> 
> Good luck!
> ATD of ATB
> 
> -- 

Here is the article in Chronicle of Higher Ed.

http://sciencecareers.sciencemag.org/career_magazine/previous_issues/articles/2012_09_28/caredit.a1200108

> 
> 
> 
> On 10/16/2012 11:38 PM, Lindsay Veazey wrote:
> > As one of many hopeful individuals trying to find an open program in which to
> > begin an advanced degree, I'd also like to point out the pitiful state of
> > scientific funding in North America. The current NSERC funding success rate is
> > below 8%, and the NSF success rate hovers around 20%. Additionally, in my
> > discussions with students of all levels, both current and (hopefully)
> > prospective, I've noticed that funding has essentially dried up for M.Sc
> > candidates, and is not much better for Ph.D candidates.
> >
> > I'm wondering if any subscribers have recommendations for programs abroad,
> > like MESPOM, that welcome foreign students instead of stacking the deck
> > against their entry.
> >
> > Dr. Dossey, thank you for a well-written submission that rings all too true.
> >
> 
> 
> -- 
> Aaron T. Dossey, Ph.D.
> Biochemistry and Molecular Biology
> Founder/Owner: All Things Bugs
> Capitalizing on Low-Crawling Fruit from Insect-Based Innovation
> http://allthingsbugs.com/about/people/
> http://www.facebook.com/Allthingsbugs
> 1-352-281-3643
> 





[ECOLOG-L] Climate change - EOS Forum

2012-07-04 Thread David Schneider
Hello all,
We have seen considerable diversity in how to respond, as
scientists, to the topic of climate change.  Clearly one
size does not fit all.  For those friends and acquaintances
who ask, I like to start with simple statements based on
evidence, which I value highly as a scientist - evidence
assembled by the IPCC, plus an accessible explanation of what
happens in a greenhouse and why it applies to CO2 (methane, etc.)
in the atmosphere.

For policy makers, I start with the evidence (IPCC) and
then move to the risks of inaction (much less clear!).

For those who respond with arguments we recognize
(ad hominem attacks, cherry-picked data, etc.) I describe the
fallacy, being careful not to stray into ad hominem myself.

For those who venture into a public forum (e.g., a talk on a
college campus) I like debate, not surprise.  In the
debates about evolution, Stephen Jay Gould mastered the
arguments, and so was prepared to debate the topic.

For those who go political ('warmist' or 'climate alarmist' as 
below) I like Don Stong's response - call them on going political.

Finally, it helps to do some research on the person to whom you
are responding, to find out their motivation ($$$? or something else?).
Search for:
Paul Cherubini El Dorado

You might be surprised.

David Schneider
http://www.mun.ca/osc/dschneider/bio.php

On Mon, Jul 2, 2012 at 4:14 PM, Paul Cherubini  wrote:

> On Jul 2, 2012, at 1:45 PM, Corbin, Jeffrey D. wrote:
>
>  1) but I made the specific point at our counter-presentation that
>> we have a great deal to discuss as to HOW society should
>> confront climate change - Cap&Trade, Carbon tax, mitigation,
>> etc. But such a discussion must begin with an acceptance of
>> the understood science.
>>
>
> The notion of anthropogenic global warming is hardly
> settled.  There is a large body of doubters of anthropogenic warming,
> especially because global mean temps have stabilized
> since 1998 http://tinyurl.com/6ca5gzt  That flattening of
> warming was not predicted by the anthropogenic warmists.
>
>  2)  the general public who does have difficulty filtering
>> out the conflicting sides of the "debate".
>>
>
> The public and industry pay a lot of attention to websites
> such as http://wattsupwiththat.com/ that examine the
> claims and track records of the anthropogenic climate
> alarmists in great depth and provide evidence suggesting
> global mean temps may continue to be relatively
> stable for another 20 years or so.
>
> The public also listens to industry leaders who say things like:
> "fears about climate change, drilling, and energy dependence
> are overblown" -  http://tinyurl.com/6wezuce
>
> Paul Cherubini
> El Dorado, Calif.
>








[ECOLOG-L] Families in Science - Balancing your personal and professional life

2012-04-29 Thread David Schneider
Here is an article that might be relevant to
the discussion.

http://www.americanscientist.org/issues/feature/2012/2/when-scientists-choose-motherhood

David Schneider


- Forwarded message from Robert Hamilton  -
Date: Sun, 29 Apr 2012 14:20:53 -0400
From: Robert Hamilton 
Reply-To: Robert Hamilton 
 Subject: Re: [ECOLOG-L] Families in Science - Balancing your personal and
professional life
  To: ECOLOG-L@LISTSERV.UMD.EDU

I must say that I find this conversation somewhat embarrassing, and hope
it never gets out into the public domain. I have and have always had
friends and neighbours who work 2 or 3 jobs to keep things going.
Literally going to work at 6AM and not coming home till after 10PM
working jobs at places like Walmart and McDonalds. Lots of people work
8+ hours per day, 50 weeks a year, like say my Dad, and had no problem
raising a family and contributing to the community. This whole thing is
a study in extreme narcissism. How's that for a wet blanket!

Robert Hamilton, PhD
Professor of Biology
Alice Lloyd College
Pippa Passes, KY 41844


-Original Message-
From: Ecological Society of America: grants, jobs, news
[mailto:ECOLOG-L@LISTSERV.UMD.EDU] On Behalf Of Jahi Chappell
Sent: Saturday, April 28, 2012 10:07 PM
To: ECOLOG-L@LISTSERV.UMD.EDU
Subject: Re: [ECOLOG-L] Families in Science - Balancing your personal
and professional life

While putting resources into science, including ecology, is of course a
wonderful, necessary, and valuable thing, assuredly supporting our own
families with our presence, time, and energy (and societal resources) is
at least as wonderful, necessary, and valuable. Indeed, as many benefits
as flow from science and science funding, we know that having strong
families and communities makes everyone better off, ceteris paribus, and
having strong families and communities requires time and resource
investment from everyone.

Even granting the proposition that we in the US produce the "best and
most successful scientists in the world", all accounts indicate that we
certainly don't produce the highest average of "happy and most secure
and successful families in the world." We have a *lot* of those, but
alas, our median is likely much lower than our mean, and both are likely
behind countries like those Andres analyzed. So much of what so many are
lacking are basic needs, connections, support networks, and resources,
something depending as much or more on good and participatory governance
than new scientific discovery--we need more time for more participation
outside our work and research, not less.

On 4/27/12 10:22 AM, "David L. McNeely"  wrote:

This is not meant as a wet blanket, as I encourage family friendly
employment practices for all countries and for all occupations.  But, I
wonder how those figures would look if all areas of science were
considered?  It may be that smaller economies, and the Scandinavian
countries in particular, put a greater fraction of their available
resources for scientific research into ecology than do larger economies
and non-Scandinavian countries.  Is U.S. science more diversified than
Finnish or Icelandic science?


David McNeely

 Andres Lopez-Sepulcre  wrote:
Since we're at it, I did the same calculation for all four countries
ranked first in gender equality by the Global Gender Gap Report. All
four, as far as I remember, provide generous paternity leaves that
guarantee job security and can be shared between mother and father.
ISI indexed publications in Ecology per capita (countries ranked in
order of 'gender equality index')
Iceland: 1167
Norway: 1794
Finland: 1500
Sweden: 1361
Not only do these countries do significantly better in ecology 'per
capita' than the less family-oriented scientific powerhouses (e.g.
USA: 650, UK: 660), but it almost seems that if anything, their ranking
in the gender equality index is correlated with their productivity, not
an 'impediment' ... save for Iceland, but do remember that Iceland
suffered the largest financial collapse in world history in these last 5
years.
Even though this small sample and oversimplified analysis is not proof of
anything, I hope it can change peoples' perceptions that countries that
have increased social welfare, gender equality and more protective
labour laws are less productive.
Andres Lopez-Sepulcre
Laboratoire d'Ecologie, UMR 7625
Ecole Normale Superieure, Paris
alo...@biologie.ens.fr
http://web.me.com/asepulcre
On Apr 27, 2012, at 6:43 PM, Cecilia Hennessy wrote:
PERFECT response, thank you so much!  If we Americans could stop patting
ourselves on the back long enough to realize that other countries have
successful ways of doing things too, maybe we could learn from
international example and progress more efficiently.
cheers!

On Fri, Apr 27, 2012 at 7:48 AM, Andres Lopez-Sepulcre
 wrote:
"...however, why should the USA modify the system producing among th

[ECOLOG-L] funding and citation metrics

2012-01-22 Thread David Schneider
In typically understated fashion, the Canadian funding
scheme for academic science has an answer to excessive
reliance on citation metrics.  

'In your proposal for an individual grant tell us, briefly, 
your 10 year plan and within that the project(s) you propose 
over your next 5 year individual grant. (no convincing 
long term plan = lethally negative marks).  Then tell us 
your best contributions in the last 5 years (which of course the
reviewers can check, as they see fit, via citation metrics
and other criteria).

'In your proposal for a group strategic grant, tell us how 
your proposal relates to public good science or to private
good science.' (criterion tends to be well written proposal
with regard to science and relation to public or private good
science).

'In your proposal for a grant in collaboration with industry
tell us how you will work with your industry partner.'
(criterion tends to be well written proposal
with regard to science and cash contribution by industry).

Your program officer at NSF can relate what they know about 
the degree to which citation rate determines grant outcome. 

David Schneider
Memorial University, St. John's NL Canada




- Forwarded message from malcolm McCallum 
-
Date: Sat, 21 Jan 2012 20:04:50 -0600
From: malcolm McCallum 
Reply-To: malcolm McCallum 
 Subject: Re: [ECOLOG-L] academic publishers and politics
  To: ECOLOG-L@LISTSERV.UMD.EDU

For people who are interested in the politics of publishing and
citation metrics, the following are really worth
reading...technically, we should all be following this stuff. Although
scientists in general are pretty smart, a huge bunch of us tends to
ignore the continual political under-cutting of our profession.
Sometimes I wonder if we are not as a group standing up for ourselves
enough!

Lawrence, P.A. 2007. The mismeasurement of science. Current Biology
17(15):R583-R585.
Answer from the hero in Leo Szilard’s 1948 story “The Mark Gable
Foundation” when asked
by a wealthy entrepreneur who believes that science has progressed too
quickly, what he
should do to retard this progress: “You could set up a foundation with
an annual endowment of
thirty million dollars. Research workers in need of funds could apply
for grants, if they could
make a convincing case. Have ten committees, each composed of twelve
scientists, appointed to
pass on these applications. Take the most active scientists out of the
laboratory and make them
members of these committees. ...First of all, the best scientists
would be removed from their
laboratories and kept busy on committees passing on applications for
funds. Secondly
the scientific workers in need of funds would concentrate on problems
which were considered
promising and were pretty certain to lead to publishable results.
...By going after the obvious,
pretty soon science would dry out. Science would become something like
a parlor game.
...There would be fashions. Those  who followed the fashions would get
grants. Those who wouldn’t
would not.”


Todd, PA, and R.J. Ladle. 2008. Hidden dangers of a "citation
culture." Ethics in Science and environmental politics 8:preprint (I
don't have the paginated version). ABSTRACT: The influence of the
journal impact factor and the effect of a ‘citation culture’ on
science and scientists have been discussed extensively (Lawrence 2007;
Curr Biol 17:R583–585). Nevertheless, many still believe that the
number of citations a paper receives provides some measure of its
quality. This belief may be unfounded, however, as there are 2
substantial areas of error that can distort a citation count or any
metric based on a citation count. One is the deliberate manipulation
of the system by scientists trying to ensure the highest possible
number of cites to their papers; this has  been examined elsewhere
(Lawrence 2003; Nature 422:259–261). The second area of inaccuracy is
inherent to how papers are cited, indexed and searched for. It is this
latter, lesser known, source of error that we will investigate here.

Campbell, P. 2008. Escape from the impact factor. Ethics in science
and environmental politics 8:5-7.
ABSTRACT: As Editor-in-Chief of the journal Nature, I am concerned by
the tendency within academic
administrations to focus on a journal’s impact factor when judging the
worth of scientific contributions
by researchers, affecting promotions, recruitment and, in some
countries, financial bonuses
for each paper. Our own internal research demonstrates how a high
journal impact factor can be the
skewed result of many citations of a few papers rather than the
average level of the majority, reducing
its value as an objective measure of an individual paper. Proposed
alternative indices have their
own drawbacks. Many researchers say that their important work has been
published in low-impact
journals. Focusing on the citations of individual papers is a more
reliable indicator of an individual’s
impact. A p

Re: [ECOLOG-L] Transformation of percent cover data for power analysis

2011-11-30 Thread David Schneider
Hello all,
For advice on the use of the arcsin transform, I
recommend the following paper:

http://www.esajournals.org/doi/abs/10.1890/10-0340.1

The title alone makes the link worth clicking.
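
If a transformation is still needed for simulation purposes (as in the
question quoted below), one workaround is to nudge the exact 0s and 1s
inward before a logit transform.  A minimal sketch in R; the adjustment
value and variable names are illustrative choices, not a prescription:

  # Hypothetical cover values on [0, 1], including exact 0s and 1s
  p <- c(0, 0.02, 0.15, 0.40, 0.73, 1)

  # Nudge 0s and 1s inward before taking the logit; eps is arbitrary
  eps     <- 0.001
  p_adj   <- pmin(pmax(p, eps), 1 - eps)
  p_logit <- qlogis(p_adj)        # log(p / (1 - p))

  # The back-transform stays strictly inside (0, 1)
  plogis(p_logit)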

David Schneider


Quoting Jordan Marshall :

> Brian
> 
> I use arcsin(square root(proportion)) anytime I'm doing analysis of
> percent data. The reason may not be justified for the type of simulation
> you're running, which I'm not familiar with. I use this transformation
> since percent data is inherently not normally distributed.
> Arcsin(sqrt(proportion)) does transform the data to near normal
> distribution.
> 
> Jordan
> 
> -- 
> Jordan M. Marshall, PhD
> Assistant Professor
> Department of Biology
> Indiana University-Purdue University Fort Wayne
> 2101 E. Coliseum Blvd.
> Fort Wayne, IN 46805
> 
> Office (260) 481-6038
> Mobile (865) 919-9811
> Fax(260) 481-6087
> 
> www.jordanmarshall.com 
> 
> >>> On 11/30/2011 at 12:00 AM, ECOLOG-L automatic digest system
>  wrote:
> > Date:Tue, 29 Nov 2011 14:33:19 -0500
> > From:=?ISO-8859-1?Q?Brian_Mitchell?= 
> > Subject: Transformation of percent cover data for power analysis
> > 
> > Hello ecolog,
> > 
> > I am working on a power analysis simulation for long-term forest monitoring
> > data, with the goal of documenting our power to detect trends over time.
> > The simulation is based on a repeated measures hierarchical model, where
> > future data is simulated based on the initial data set and a bootstrap of
> > pilot data differences between observation periods, multiplied by a range of
> > effect sizes (50% decline to 50% increase).
> > 
> > My question is about the appropriate transformation to use for percent cover
> > data in this simulation. I don’t want to use raw percentages because the
> > simulation will easily result in proportions less than zero or greater than
> > one.  Similarly, a log transform can easily result in back-transformed
> > proportions greater than one.  Most other transforms I’ve looked at would
> > not prevent back-transformed data from exceeding one or the other
> > boundaries.  The exception is the logistic transform, which would indeed
> > force all simulated data to be between zero and one when back-transformed.
> > However, the logistic transform gives values of negative infinity for a
> > percent cover of zero, and positive infinity for a percent cover of one.  I
> > was thinking that adding a tiny number to zeros and subtracting a tiny
> > number from ones (e.g., 0.1) would solve the problem (roughly equivalent
> > to a log of x+1 transform), but I have been unable to find reference to
> > anyone using this approach for percent cover data.  Does anyone have any
> > thoughts about the validity of my proposed approach or of another approach
> > that would help solve my problem?
> > 
> > Thanks!
> > 
> > Brian Mitchell
> > NPS Northeast Temperate Network Program Manager
> > Adjunct Assistant Professor, University of Vermont
> > brian_mitch...@nps.gov  
> 






Re: [ECOLOG-L] Non-parametric statistics

2011-07-15 Thread David Schneider
Hello all,
Excellent advice from Nicole Michel.
There is a learning curve for GzLM, but it
is well worth the effort.

David Schneider
http://www.mun.ca/biology/dschneider/b7932/
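
For readers who want to see what the GzLM/GLMM route described in
Nicole's message below looks like in code, here is a minimal sketch in
R using lme4.  The data, variable names, and effect structure are
hypothetical; the Poisson versus negative binomial comparison follows
her AIC suggestion.

  library(lme4)   # glmer(), glmer.nb()

  set.seed(1)
  # Hypothetical data: seeds eaten per plant, plants nested within
  # populations, measured over four years
  d <- expand.grid(pop_id = factor(1:12), year = factor(2008:2011), plant = 1:5)
  d$pop_size    <- runif(12, 10, 500)[as.integer(d$pop_id)]
  d$seeds_eaten <- rnbinom(nrow(d), mu = 3, size = 1)

  # Poisson GLMM with a random intercept for population
  m_pois <- glmer(seeds_eaten ~ year + scale(pop_size) + (1 | pop_id),
                  family = poisson, data = d)

  # Negative binomial GLMM for overdispersed counts
  m_nb <- glmer.nb(seeds_eaten ~ year + scale(pop_size) + (1 | pop_id), data = d)

  AIC(m_pois, m_nb)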


Quoting "Michel, Nicole L" :

> Hi Alan et al.,
> 
> Generalized Linear Mixed Models (not to be confused with General Linear Mixed
> Models) are designed for exactly this sort of data.  The Generalized form
> lets you define the distribution to be whatever you want it to be.  With a
> count variable like this, you should start out with either a negbin or
> poisson distribution and a log link, and use AIC (or AICc, depending on your
> sample size) to choose the best-fitting model.  However, in recent analyses I
> ran using count data as dependent variables, I actually found a log
> distribution with either a log or identity (=normal) link to have the best
> fit.  FYI, if you're using a log link and/or distribution and have any '0'
> values, you will need to add 1 to each value prior to running the models to
> avoid the log(0) problem.
> 
> SPSS has the capability to run Generalized Linear Mixed Models, as do both
> SAS (Proc GLIMMIX) and R.
> 
> Best,
> Nicole Michel
> 
> *
> Nicole Michel
> PhD Candidate
> 4060 Stern
> Dept. of Ecology and Evolutionary Biology
> 400 Boggs
> Tulane University
> New Orleans, LA 70118
> Fax: 504-862-8706
> *
> 
> 
> From: Ecological Society of America: grants, jobs, news
> [ECOLOG-L@LISTSERV.UMD.EDU] on behalf of Alan Griffith (agriffit)
> [agrif...@umw.edu]
> Sent: Thursday, July 14, 2011 9:39 AM
> To: ECOLOG-L@LISTSERV.UMD.EDU
> Subject: [ECOLOG-L] Non-parametric statistics
> 
> Hello all,
> 
> I have been searching for some advice on appropriate non-parametric
> statistics for the analysis of a dependent variable that fails normality and
> homogeneity assumptions under both sqrt and ln transformations.
> 
> First I will describe the dataset.  The data are from a field sample.  I have
> 4 years of data from the same set of ecological populations.  The number of
> populations varies year to year.  The number of individuals sampled in a
> population may have varied within and among years.
> 
> Here is a description of the model I would like to implement.  Let’s say
> the Dependent Variable is # seeds eaten / plant.  So, I want to implement
> individual plant nested within population (i.e.  a mixed model with
> population identifier as random variable or SUBJECT(PopID)).  YEAR is a
> categorical independent variable, Population Size is one continuous
> independent variable.  Total # Seeds produced / plant is another continuous
> independent variable.  I would also like to test interactions.
> 
> As I said before, I was not successful in transforming my dependent variable
> using my standard choices (ln and sqrt).  I had found references to using
> rank transformed data in an ANOVA / ANCOVA model, but this was rejected by a
> reviewer.  I am familiar with simple nonparametric tests like Kruskal-Wallis,
> but I do not see how to preserve the complex model with such tests.
> 
> My first hope is to find a method, generally accepted by ecologists, that is
> easily implemented in SPSS.  If this is not possible, I can explore more
> complicated analyses with the help of my campus math / stats consultant.
> 
> Thanks for you advice.
> 
> Alan B. Griffith, PhD
> Associate Professor
> Department of Biological Sciences
> University of Mary Washington
> (540) 654-1422
> agrif...@umw.edu
> 


Re: [ECOLOG-L] Hypothesis Testing in Ecology

2011-03-04 Thread David Schneider
Dear list members,

As someone who has
-been teaching model-based stats to natural scientists for decades
-mastered the logic and arcana of Neyman-Pearson
  Decision Theoretic Hypothesis Testing (p-values)
-routinely used model-based statistics and parameter estimation
  with confidence intervals whenever possible
-exposed students to the Nester collection of quotes

I offer the following: 

-The Anderson book comes well recommended.
-At the same time, it is important that students
 understand NPDTHT, in order to understand and
 evaluate the great bulk of published work in ecology.
-Teaching model-based stats puts considerable demands on
 students, and it puts many of them between a rock (a
 supervisor who adheres to NPDTHT) and a hard place (a
 course in model-based stats).
-Rational treatment of uncertainty is a must in ecology.
-NPDTHT proves nothing.  It merely excludes chance
 (at some stated level of uncertainty) as an explanation
 for some observed result; a small sketch of the estimation
 alternative follows below.
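
As a small illustration of the contrast between NPDTHT and parameter
estimation with confidence intervals, here is a sketch in R.  The data
and effect size are simulated, purely for illustration.

  set.seed(42)
  # Simulated data: growth under two treatments
  growth <- data.frame(
    trt = rep(c("ambient", "warmed"), each = 15),
    y   = c(rnorm(15, mean = 10, sd = 2), rnorm(15, mean = 12, sd = 2))
  )

  m <- lm(y ~ trt, data = growth)

  # NPDTHT view: a p-value that excludes (or fails to exclude) chance
  summary(m)$coefficients

  # Estimation view: how big the effect is, and how uncertain
  confint(m, "trtwarmed", level = 0.95)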

David Schneider
c/o Biology, Memorial University, St. John's NL
http://www.elsevierdirect.com/ISBN/9780126278651/Quantitative-Ecology

Quoting Manuel Spínola :

> Dear list members,
> 
> For those interested in statistical hypothesis testing, null hypothesis 
> significance testing, and p-values, I would like to suggest the following 
> web site with many quotes from many well known statisticians.
> 
> http://warnercnr.colostate.edu/~anderson/nester.html
> 
> and for new approaches on statistics (moving away from "hypothesis 
> testing" and p-values) applied to ecology:
> 
> Anderson, D. R. 2008. Model based inference in the life sciences.  
> Springer, NY.
> 
> Best,
> 
> Manuel
> 
> On 01/03/2011 12:46 p.m., Ruchira Datta wrote:
> > To calculate p-values properly requires paying a lot of attention to how you
> > choose the null hypothesis and whether it is really appropriate for your
> > problem and the state of the art.  I do not have a lot of experience in
> > ecology, but in bioinformatics people often choose null hypotheses because
> > they make the p-values easy to compute, or because everyone does it that
> > way, or (more cynically) because they make their results appear significant.
> > One can get a good p-value by choosing a null hypothesis that is almost
> > certain to be wrong, regardless of the fact that the consensus was already
> > that this null hypothesis was almost certain to be wrong before any of the
> > reported experiments were undertaken. That doesn't mean the reported
> > experiments advanced scientific understanding.
> >
> > --Ruchira
> >
> > On Tue, Mar 1, 2011 at 6:24 AM, Jeff Houlahan  wrote:
> >
> >> Hi Chris and all, I actually think that it's a mistake to diminish the role
> >> of p-values.  My opinion on this (strongly influenced by the writings of Rob
> >> Peters) is that there is only one way to demonstrate understanding and that
> >> is through prediction.  And predictions only demonstrate understanding if
> >> you make better predictions than you would make strictly by chance.  The
> >> only way to tell if you've done better than chance is through p-values.  So,
> >> while there is a great deal more to science than p-values, the ultimate
> >> tests of whether science has led to increased understanding are p-values.
> >>   Best.
> >>
> >> Jeff Houlahan
> >> Dept of Biology
> >> 100 Tucker Park Road
> >> UNB Saint John
> >>
> >
> 
> 
> -- 
> *Manuel Spínola, Ph.D.*
> Instituto Internacional en Conservación y Manejo de Vida Silvestre
> Universidad Nacional
> Apartado 1350-3000
> Heredia
> COSTA RICA
> mspin...@una.ac.cr
> mspinol...@gmail.com
> Teléfono: (506) 2277-3598
> Fax: (506) 2237-7036
> Personal website: Lobito de río 
> <https://sites.google.com/site/lobitoderio/>
> Institutional website: ICOMVIS <http://www.icomvis.una.ac.cr/>
>