ONLINE COURSE – Advancing in R (ADVR01): Data Wrangling, Data Viz, GLMs,
GLMMs and Model Selection

https://www.prstats.org/course/advancing-in-r-advr01/

25th - 29th March 2024

Please feel free to share!

*Course details*
This course is designed to provide attendees with a
comprehensive understanding of statistical modelling and its applications
in various fields, such as ecology, biology, sociology, agriculture, and
health. We cover all foundational aspects of modelling, including all
coding aspects, ranging from data wrangling, visualisation and exploratory
data analysis, to generalized linear mixed models, assessing
goodness-of-fit and carrying out model comparison.


*Data wrangling*
For data wrangling, we focus on tools provided by R's tidyverse. Data
wrangling is the art of taking raw and messy data and formatting and
cleaning it so that data analysis and visualization may be performed on it.
Done poorly, it can be time consuming, laborious, and error-prone.
Fortunately, the tools provided by R's tidyverse allow us to do data
wrangling in a fast, efficient, and high-level manner, with dramatic
consequences for the ease and speed with which we analyse data. We start
with how to read data of different types into R. We then cover in detail
all the dplyr tools, such as select, filter, mutate, and others. Here, we
will also cover the pipe operator (%>%) to create data wrangling pipelines
that take raw, messy data in at one end and return cleaned, tidy data at
the other. We then cover how to perform descriptive or summary statistics
on our data using dplyr’s
group_by and summarise functions. We then turn to combining and merging
data. Here, we will consider how to concatenate data frames, including
concatenating all data files in a
folder, as well as cover the powerful SQL-like join operations that allow
us to merge information in different data frames. The final topic we will
consider is how to “pivot” data
from a “wide” to “long” format and back
using tidyr’s pivot_longer and pivot_wider functions.
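
As a flavour of the kind of pipeline covered, here is a minimal sketch
using made-up data (the column names site, species, and count are purely
illustrative):

```r
library(dplyr)
library(tidyr)

# Small made-up data frame standing in for raw data read into R
data_df <- tibble(
  site    = c("A", "A", "B", "B"),
  species = c("x", "y", "x", "y"),
  count   = c(10, 3, 7, 5)
)

# Summary statistics with group_by and summarise, chained with the pipe
data_df %>%
  group_by(site) %>%
  summarise(total = sum(count))

# Pivot from long to wide format: one column per species
data_df %>%
  pivot_wider(names_from = species, values_from = count)
```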

*Data visualisation*
For visualisation, we focus on the ggplot2 package. We begin with a brief
overview of the general principles of data visualization and of the
principles behind ggplot2. We then proceed to cover the major types of
plots for visualizing
distributions of univariate data: histograms, density plots, barplots, and
Tukey boxplots. In all of these cases, we will consider how to visualize
multiple distributions simultaneously on the same plot using different
colours and "facet" plots. We then turn to the visualization of
bivariate
data using scatterplots. Here, we will explore how to apply linear and
nonlinear smoothing functions to the data, how to add marginal histograms
to the scatterplot, add labels to
points, and scale each point by the value of a third variable. We then
cover some additional plot types that are often related but not identical
to those major types covered during the
beginning of the course: frequency polygons, area plots, line plots,
uncertainty plots, violin plots, and geospatial mapping. We then consider
more fine-grained control of the plot by
changing axis scales, axis labels, axis tick points, colour palettes, and
ggplot "themes". Finally, we consider how to make plots for
presentations and publications. Here, we will
introduce how to insert plots into documents using RMarkdown, and also how
to create labelled grids of subplots of the kind seen in many published
articles.
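
A short sketch of several of these ideas together, using the built-in
iris data set: a scatterplot with a linear smoother, colours and facets
by group, a theme, and axis labels.

```r
library(ggplot2)

ggplot(iris, aes(x = Sepal.Length, y = Sepal.Width, colour = Species)) +
  geom_point() +                          # bivariate scatterplot
  geom_smooth(method = "lm", se = TRUE) + # linear smoothing function
  facet_wrap(~ Species) +                 # one facet per group
  theme_classic() +                       # a ggplot "theme"
  labs(x = "Sepal length (cm)", y = "Sepal width (cm)")
```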

*Generalized linear models*
Generalized linear models are generalizations of linear regression models
for situations where the outcome variable is, for example, a binary,
ordinal, or count variable. The
specific models we cover include binary, binomial, and categorical logistic
regression, Poisson and negative binomial regression for count variables,
as well as extensions for
overdispersed and zero-inflated data. We begin by providing a brief
overview of the normal general linear model. Understanding this model is
vital for the proper understanding of how
it is generalized in generalized linear models. Next, we introduce the
widely used binary logistic regression model, which is a regression
model for when the outcome variable is
binary. Next, we cover binomial logistic regression, and the
multinomial case, which is for modelling outcome variables that are
polychotomous, i.e., have more than two
categorically distinct values. We will then cover Poisson regression, which
is widely used for modelling outcome variables that are counts (i.e., the
number of times something has
happened). We then cover extensions to accommodate overdispersion, starting
with the quasi-likelihood approach, then covering the negative binomial and
beta-binomial models for counts and discrete proportions, respectively.
Finally, we will cover zero-inflated Poisson and negative binomial models,
which are for count data with excessive numbers of zero
observations.
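
For instance, both a binary logistic regression and a Poisson regression
are fitted in R with glm() by choosing the appropriate family. The
simulated variables below are purely illustrative:

```r
# Simulate illustrative data
set.seed(101)
n <- 100
x <- rnorm(n)
y_bin   <- rbinom(n, size = 1, prob = plogis(0.5 + 1.2 * x)) # binary outcome
y_count <- rpois(n, lambda = exp(0.3 + 0.8 * x))             # count outcome

# Binary logistic regression and Poisson regression
m_logit   <- glm(y_bin ~ x, family = binomial(link = "logit"))
m_poisson <- glm(y_count ~ x, family = poisson(link = "log"))

summary(m_poisson)

# For overdispersed counts, a negative binomial model can be fitted with
# MASS::glm.nb(y_count ~ x)
```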

*Mixed models*
We will focus primarily on multilevel linear models, but also cover
multilevel generalized linear models. Likewise, we will also describe
Bayesian approaches to multilevel modelling.
We will begin by focusing on random effects multilevel models. These models
make it clear how multilevel models are in fact models of models. In
addition, random effects models
serve as a solid basis for understanding mixed effects models, i.e. models
with both fixed and random effects. In this coverage of random effects, we will also
cover the important concepts of statistical
shrinkage in the estimation of effects, as well as intraclass correlation.
We then proceed to cover linear mixed effects models, particularly focusing
on varying intercept and/or varying
slopes regression models. We will then cover further aspects of linear
mixed effects models, including multilevel models for nested and crossed
data, and group-level predictor
variables. Towards the end of the course we also cover generalized linear
mixed models (GLMMs), how to accommodate overdispersion through
individual-level random effects, as
well as Bayesian approaches to multilevel models using the brms R package.
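
A sketch of a varying intercept and varying slope model, using lme4 and
its built-in sleepstudy data set:

```r
library(lme4)

# Reaction time modelled on Days of sleep deprivation, with intercepts
# and slopes for Days varying by Subject
m <- lmer(Reaction ~ Days + (Days | Subject), data = sleepstudy)
summary(m)

# Generalized linear mixed models use glmer() with a family argument,
# e.g. glmer(y ~ x + (1 | group), family = poisson, data = ...)
```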

*Model selection and model simplification*
Throughout the course we consider the fundamental issue of how to measure
model fit and a model’s predictive performance, and discuss a wide range of
other major model fit
measurement concepts like likelihood, log likelihood, deviance, and
residual sums of squares. We thoroughly explore nested model comparison,
particularly in general and
generalized linear models, and their mixed effects counterparts. We discuss
out-of-sample generalization, and introduce leave-one-out cross-validation
and the Akaike Information
Criterion (AIC). We also cover general concepts and methods related to
variable selection, including stepwise regression, ridge regression, Lasso,
and elastic nets. Finally, we turn to
model averaging, which may represent a preferable alternative to model
selection.
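
As a small example of nested model comparison, two nested linear models
fitted to the built-in mtcars data can be compared with an F-test and
with AIC:

```r
# Nested linear models: m0 is a special case of m1
m0 <- lm(mpg ~ wt, data = mtcars)
m1 <- lm(mpg ~ wt + hp, data = mtcars)

anova(m0, m1)  # F-test for nested linear models
AIC(m0, m1)    # lower AIC suggests better expected predictive performance
```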
Please email oliverhoo...@prstatistics.com with any questions.

-- 

Oliver Hooker PhD.
PR stats


_______________________________________________
R-sig-ecology mailing list
R-sig-ecology@r-project.org
https://stat.ethz.ch/mailman/listinfo/r-sig-ecology