By the end of this practical you should feel confident doing the following:

- Fitting detection functions using ds() from the Distance package
- Checking and comparing fitted detection functions
- Estimating abundance using a Horvitz-Thompson-type estimator
First we need to load the requisite R libraries:
library(Distance)
## Loading required package: mrds
## This is mrds 2.2.3
## Built: R 4.0.2; ; 2020-08-01 10:33:56 UTC; unix
##
## Attaching package: 'Distance'
## The following object is masked from 'package:mrds':
##
## create.bins
library(ggplot2)
library(knitr)
We can then load the data from the RData file (this needs to be in your R working directory):
load("spermwhale.RData")
Before we set off fitting detection functions, let's look at the relationships between various variables in the data.
The most important covariate in a distance sampling analysis is distance itself. We can plot a histogram of the distances to check that (1) we imported the data correctly and (2) it conforms to the usual shape for line transect data.
hist(distdata$distance, xlab="Distance (m)",
main="Distance to sperm whale observations")
We can also see how many observations were recorded per Beaufort sea state level. We can first use table() to see what values occur and how many there are:
table(distdata$Beaufort)
##
## 0 1 1.8 2 2.4 2.5 2.7 2.8 2.9 3 3.1 3.5 3.6 3.7 3.9 4 4.5 5
## 5 4 7 31 8 10 1 1 4 41 2 6 1 2 2 8 1 3
The survey protocol on these surveys took an average of several observers' assessments of the weather, hence the decimal values. We can use the breaks= argument to hist() to bin these values into integer categories. We'll plot the values from the segments at the same time, so we can get an idea of what the conditions were like over the whole survey, not just when we saw sperm whales:
par(mfrow=c(1,2))
hist(distdata$Beaufort, breaks=seq(0, 5, by=1),
main="Observations per Beaufort level",
xlab="Beaufort")
hist(segs$Beaufort, breaks=seq(0, 5, by=1),
main="Segments per Beaufort level",
xlab="Beaufort")
We can see here that most of the segments had a Beaufort of 2 or more, and very few had Beaufort of 0-1 (right plot). There are correspondingly few observations at the lowest Beaufort levels (left plot), simply because there wasn't much survey effort in those conditions. We also see that observations fall off with increasing Beaufort (left plot), even though there were plenty of segments at those higher Beaufort levels (right plot).
We can create a binned version of the variable using the cut() function in R for later use:
distdata$BeaufortCut <- cut(distdata$Beaufort, seq(0, 5, by=1),
                            include.lowest=TRUE, ordered_result=TRUE)
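We can check that the binning worked using table() again:
table(distdata$BeaufortCut)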
Given that we are including data from two different surveys, we can also check that the surveys have similar histograms of distance (we'll check below, via the detection function, whether we need to include a covariate to control for this).
par(mfrow=c(1,2))
hist(subset(distdata, Survey=="en04395")$distance,
main="", ylim=c(0, 30),
xlab="Distance (m)")
hist(subset(distdata, Survey=="GU0403")$distance,
main="", ylim=c(0, 30),
xlab="Distance (m)")
It's now time to fit some detection function models. We'll use the ds() function from the Distance package to fit the detection function. You can access the help file for the ds() function by typing ?ds – this will give you information about what the different arguments to the function are and what they do.
We can fit a very simple model with the following code:
df_hn <- ds(data=distdata, truncation=6000, key="hn", adjustment=NULL)
## Fitting half-normal key function
## Key only model: not constraining for monotonicity.
## AIC= 2252.06
## No survey area information supplied, only estimating detection function.
Let's dissect the call and see what each argument means:

- data=distdata: the data to use to fit the model, as we prepared above.
- truncation=6000: sets the truncation distance. Here, observations at distances greater than 6000m will be discarded before fitting the detection function.
- key="hn": the key function to use for the detection function, in this case half-normal (?ds lists the other options).
- adjustment=NULL: the adjustment term series to fit. NULL here means that no adjustments should be fitted (again, ?ds lists all options).

Other useful arguments for this practical are:

- formula=: gives the formula to use for the scale parameter. By default it takes the value ~1, meaning the scale parameter is constant and not a function of covariates.
- order=: specifies the "order" of the adjustments to be used. This is a number (or vector of numbers) giving the order of the terms. For example, order=2 fits order 2 adjustments; order=c(2,3) will fit a model with order 2 and 3 adjustments (mathematically, it only makes sense to include order 3 alongside order 2). By default the value is NULL, which has ds() select the number of adjustments by AIC.
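For example, a half-normal key with a cosine adjustment of order 2 could be fitted like this (a sketch of how these arguments combine; the name df_hn_cos is just illustrative):
df_hn_cos <- ds(data=distdata, truncation=6000, key="hn",
                adjustment="cos", order=2)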
We set truncation=6000 here, why? The truncation removes observations in the tail of the distribution. We do this because we only care about the detection function near 0, which has the most effect on \(\hat{p}\) (since that's where the detection function is largest). We need to trade off getting the shape right against spending many parameters modelling what happens far from the observer, which has little effect on the result. Here we use about 96% of the data. Len Thomas suggests choosing the truncation distance \(w\) so that \(g(w)\approx 0.15\) (i.e., at the truncation distance the detection function should be about 0.15). You can play around with this here if you're interested, by changing truncation= and seeing what the differences in \(\hat{p}\) are (but push on to the next section if you're happy!).
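For instance, you could refit with a shorter truncation distance (5000m here is an arbitrary alternative) and compare the "Average p" lines of the two summaries:
df_hn_5k <- ds(data=distdata, truncation=5000, key="hn", adjustment=NULL)
summary(df_hn_5k)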
We can look at the summary of the fitted detection function using the summary() function:
summary(df_hn)
##
## Summary for distance analysis
## Number of observations : 132
## Distance range : 0 - 6000
##
## Model : Half-normal key function
## AIC : 2252.06
##
## Detection function parameters
## Scale coefficient(s):
## estimate se
## (Intercept) 7.900732 0.07884776
##
## Estimate SE CV
## Average p 0.5490484 0.03662569 0.06670757
## N in covered region 240.4159539 21.32287580 0.08869160
Goodness of fit quantile-quantile plot and test results can be accessed using the gof_ds() function:
gof_ds(df_hn)
##
## Goodness of fit results for ddf object
##
## Distance sampling Cramer-von Mises test (unweighted)
## Test statistic = 0.396179 p-value = 0.0739475
We can plot the models simply using the plot() function:
plot(df_hn)
The dots on the plot indicate the distances at which observations were made. We can remove them (particularly useful for a model without covariates) using the additional argument showpoints=FALSE (try this out!).
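For example, with the half-normal model fitted above:
plot(df_hn, showpoints=FALSE)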
Now try fitting a few models and comparing them using AIC. Don’t try to fit all possible models, just try a selection (say, a hazard-rate, a model with adjustments and a couple with different covariates). You can also try out changing the truncation distance.
Here's an example to work from. Some tips before you start:

- You can run the current line of code in RStudio with Control+Enter (on Windows/Linux; Command+Enter on Mac).
- I tend to name models starting with df_ to indicate that this is a detection function, followed by shortened forms of the model form and covariates, separated by underscores, but use what makes sense to you (and future you!).

Here's an example of adding covariates into the model using the formula= argument:
df_hr_ss_size <- ds(distdata, truncation=6000, adjustment=NULL,
key="hr", formula=~Beaufort+size)
## Fitting hazard-rate key function
## AIC= 2249.327
## No survey area information supplied, only estimating detection function.
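As before, we can check the goodness of fit of this model too:
gof_ds(df_hr_ss_size)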
Once you have the hang of writing models and looking at the differences between them, you should move onto the next section.
Looking at the models individually can be a bit unwieldy – it's nicer to put that data into a table and sort the table by the relevant statistic. The summarize_ds_models() function in the Distance package performs this task for us.
The code below will make a results table with relevant statistics for model selection in it. The summarize_ds_models() function takes a series of fitted model objects as its arguments. We can do that with the two models that I fitted like so:
model_table <- summarize_ds_models(df_hn, df_hr_ss_size, output="plain")
kable(model_table, digits=3, row.names = FALSE, escape=FALSE,
caption = "Comparison of half normal and hazard rate with sea state and group size.")
Model | Key function | Formula | C-vM \(p\)-value | Average detectability | se(Average detectability) | Delta AIC
---|---|---|---|---|---|---
df_hr_ss_size | Hazard-rate | ~Beaufort + size | 0.880 | 0.355 | 0.074 | 0.000
df_hn | Half-normal | ~1 | 0.074 | 0.549 | 0.037 | 2.733
(You can add the models you fitted above into this list.)
Note that there is a considerable spike in our distance data. This may be down to observers guarding the trackline (spending too much time searching near zero distance). It’s therefore possible that the hazard-rate model is overfitting to this peak. So we’d like to investigate results from the half-normal model too and see what the effects are on the final spatial models.
Just for fun, let's estimate abundance from these models using a Horvitz-Thompson-type estimator.
We know the Horvitz-Thompson estimator has the following form: \[ \hat{N} = \frac{A}{a} \sum_{i=1}^n \frac{s_i}{p_i} \] We can calculate each part of this equation in R:
- \(A\) is the total area of the region we want to estimate abundance for. This was \(A=5.285\times 10^{11}~\text{m}^2\).
- \(a\) is the total area we surveyed. We know that the total transect length was \(L=\) 9,498,474m and we know the truncation distance \(w\); since \(a=2wL\), we can calculate \(a\).
- \(s_i\) are the group sizes; these are stored in df_hn$ddf$data$size.
- \(p_i\) are the estimated detection probabilities; these can be obtained using predict(df_hn$ddf)$fitted.

We know that in general operations are vectorised in R, so calculating c(1, 2, 3)/c(4, 5, 6) will give c(1/4, 2/5, 3/6). This means we can simply divide the vector of \(s_i\) values by the vector of \(p_i\) values and then use the sum() function to add up the terms.
Try out estimating abundance using the formula above with both df_hn and your favourite model from the previous section.
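Here's a minimal sketch for df_hn, using the quantities listed above (the intermediate object names are my own):
A <- 5.285e11                     # total study area (m^2), from above
transect_length <- 9498474        # total transect length L (m)
w <- 6000                         # truncation distance (m)
a <- 2 * w * transect_length      # covered area, a = 2wL
s <- df_hn$ddf$data$size          # group sizes, s_i
p <- predict(df_hn$ddf)$fitted    # detection probabilities, p_i
Nhat <- (A / a) * sum(s / p)
Nhat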
Note that in the solutions to this exercise (posted on the course website) I show how to use the function dht() to estimate abundance (and uncertainty) for a detection function analysis. This involves some more complex data manipulation steps, so we've left it out here in favour of getting to grips with the mathematics.
It's common, especially in marine surveys, for animals at zero distance to be missed by observers. There are several ways to deal with this issue. For now, we are just going to use a very simple constant correction factor to inflate the abundance.
From Palka (2006), we think that observations on the track line were such that \(g(0)=0.46\); we can apply that correction to our abundance estimate in a very primitive way.
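For example, continuing from the Nhat sketch above (dividing by \(g(0)\) inflates the estimate, since detection on the track line is imperfect):
Nhat_corrected <- Nhat / 0.46
Nhat_corrected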
This kind of correction works fine when it's appropriate to adjust by a single number; in general we'd like to model the perception bias, for example using "mark-recapture distance sampling" techniques.
Save your top few models in an RData file, so we can load them up later on. We'll also save the distance data we used to fit our models.
save(df_hn, df_hr_ss_size, # add your models here, followed by commas!
distdata,
file="df-models.RData")
You can check it worked by using the load() function to recover the models.
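For example, in a fresh R session:
load("df-models.RData")
ls()   # should include df_hn, df_hr_ss_size and distdata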
Palka, D. L. (2006) Summer Abundance Estimates of Cetaceans in US North Atlantic Navy Operating Areas. Northeast Fisheries Science Center Reference Document 06-03.