By the end of this practical, you should feel comfortable:

- Fitting a density surface model using dsm()
- Interpreting the output of summary() when called on a dsm model object
- Increasing the k parameter of smooth terms to increase their flexibility
The example code below uses the df_hn detection function in the density surface models. You can substitute this for your own best model as you go, or copy and paste the code at the end and see what results you get using your model for the detection function.
In this practical, we're just going to do basic comparisons, looking at plots and summary output. We'll get more into model comparison in later practicals.
Solutions are given between these horizontal lines with bold headers. The "solutions" presented here are not definitive; many modelling options are possible.
Here I'll use a few different detection functions that I fitted in the first practical. To run the code in this file, you'll first need to have run the first practical solution so that the required objects have been saved.
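First, we load the Distance and dsm packages; the startup messages below come from these calls:

library(Distance)
library(dsm)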
## Loading required package: mrds
## This is mrds 2.2.3
## Built: R 4.0.2; ; 2020-08-01 10:33:56 UTC; unix
## 
## Attaching package: 'Distance'
## 
## The following object is masked from 'package:mrds':
## 
##     create.bins
## 
## Loading required package: mgcv
## Loading required package: nlme
## This is mgcv 1.8-33. For overview type 'help("mgcv-package")'.
## Loading required package: numDeriv
## This is dsm 188.8.131.5207
## Built: R 4.0.2; ; 2020-10-01 05:44:34 UTC; unix
Loading the sperm whale data again and the RData files where we saved our results:
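A sketch of that loading step, assuming the file names used in the first practical's solution (adjust these to match whatever you saved):

load("spermwhale.RData")  # assumed name: survey data (segs, obs, etc.)
load("df-models.RData")   # assumed name: saved detection functions, including df_hn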
Before we fit a model using dsm(), we must first remove the observations from the spatial data that we excluded when we fitted the detection function – those observations at distances greater than the truncation.
# keep only observations within the truncation distance
obs <- obs[obs$distance <= df_hn$ddf$meta.data$width, ]
Here we've used the value of the truncation stored in the detection function object, but we could also use the numeric value (which we can also find by checking the detection function's summary()).
(If we don't do this, the DSM will still fit fine; we just get a warning.)
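For example, a sketch using a numeric value directly (the 6000 here is purely illustrative – check your own model for the actual truncation distance):

# equivalent to the above if the truncation distance were 6000 m (illustrative value)
obs <- obs[obs$distance <= 6000, ]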
Using the data that we've saved so far, we can build a call to the dsm() function and fit our first density surface model. Here we're only going to look at models that include spatial smooths.
Let's start with a very simple model – a bivariate smooth of x and y:
dsm_nb_xy <- dsm(count ~ s(x, y, k=25), ddf.obj=df_hn,
                 segment.data=segs, observation.data=obs,
                 family=nb())
Note again that we try to have informative model object names so that we can work out what the main features of the model were from its name alone.
We can look at a summary() of this model. Look through the summary output and try to pick out the important information based on what we've talked about in the lectures so far.
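summary(dsm_nb_xy)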
## 
## Family: Negative Binomial(0.093) 
## Link function: log 
## 
## Formula:
## count ~ s(x, y, k = 25) + offset(off.set)
## 
## Parametric coefficients:
##             Estimate Std. Error z value Pr(>|z|)    
## (Intercept) -20.4863     0.2207  -92.84   <2e-16 ***
## ---
## Signif. codes:  0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1
## 
## Approximate significance of smooth terms:
##          edf Ref.df Chi.sq  p-value    
## s(x,y) 13.91  17.55  66.36 1.05e-06 ***
## ---
## Signif. codes:  0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1
## 
## R-sq.(adj) =   0.08   Deviance explained = 34.9%
## -REML = 394.96  Scale est. = 1         n = 949
Things to look for:

- Is the bivariate smooth of location (s(x, y)) significant at the highest level?
- How does the EDF of the smooth compare to the basis complexity (k) we gave it?
- Does the sample size (n=) match the number of rows in the segment data?
Answering the above questions:
From the summary output we can see that the bivariate term, s(x,y), is significant at the highest ("0") level – that's not too surprising given it's the only term in the model, though!
The EDF of the smooth is about 13.9, compared with the k=25 we set above, so we're nowhere near using all of the degrees of freedom we gave the model. Note that we don't need to reduce k; the penalty has done its job and reduced the wigglyness of the model accordingly.
The sample size is reported as n = 949; we can check the number of rows in the segment data:
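nrow(segs)
## [1] 949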
and confirm that's correct. These might not match if you have NA ("not available") values in your segment data, for example if you have covariates with missing values (SST missing due to cloud cover, say). If there are NAs where values are needed, the whole row will be dropped. It's usually worth checking this to make sure you're fitting to the data that you think you are.
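One quick way to check for such rows (a minimal sketch using base R's complete.cases()):

# how many segment rows have a missing value in any column?
sum(!complete.cases(segs))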
As discussed in the lectures, the plot output is not terribly useful for bivariate smooths like these, so we'll use vis.gam() to visualise the smooth instead:
vis.gam(dsm_nb_xy, view=c("x","y"), plot.type="contour",
        too.far=0.1, main="s(x,y) (link scale)", asp=1)
Notes on the arguments:

- view=c("x","y") displays the smooth over x and y (we can choose any two variables in our model to display like this)
- plot.type="contour" gives this "flat" plot; set plot.type="persp" for a "perspective" plot, in 3D
- the too.far=0.1 argument only displays the values of the smooth not "too far" from the data (try changing this value to see what happens)
- asp=1 ensures that the aspect ratio of the plot is 1, making the pixels square
- see the ?vis.gam manual page for more information on the plotting options
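For example, to view the same smooth as a 3D surface (theta and phi just set the viewing angle):

vis.gam(dsm_nb_xy, view=c("x","y"), plot.type="persp",
        too.far=0.1, theta=30, phi=30)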
We can set the basis complexity via the k argument to the s() term in the formula. For example, the following re-fits the above model with a much smaller basis complexity than before:
dsm_nb_xy_smallk <- dsm(count ~ s(x, y, k=10), ddf.obj=df_hn,
                        segment.data=segs, observation.data=obs,
                        family=nb())
Compare the output of vis.gam() for this model to the model with the larger basis complexity.
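For example, one way to compare them side-by-side, reusing the settings from above (the titles are just labels):

par(mfrow=c(1,2))
vis.gam(dsm_nb_xy, view=c("x","y"), plot.type="contour",
        too.far=0.1, main="s(x,y, k=25) (link scale)", asp=1)
vis.gam(dsm_nb_xy_smallk, view=c("x","y"), plot.type="contour",
        too.far=0.1, main="s(x,y, k=10) (link scale)", asp=1)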
Instead of fitting a bivariate smooth of x and y using s(x, y), we could instead use the additive nature of the model and fit the following:
dsm_nb_x_y <- dsm(count ~ s(x) + s(y), ddf.obj=df_hn,
                  segment.data=segs, observation.data=obs,
                  family=nb())
Compare this model with dsm_nb_xy using summary() and vis.gam().
**Comparing to the univariate model**
First taking a look at the summary:
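summary(dsm_nb_x_y)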
## 
## Family: Negative Binomial(0.085) 
## Link function: log 
## 
## Formula:
## count ~ s(x) + s(y) + offset(off.set)
## 
## Parametric coefficients:
##             Estimate Std. Error z value Pr(>|z|)    
## (Intercept) -20.4301     0.2374  -86.07   <2e-16 ***
## ---
## Signif. codes:  0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1
## 
## Approximate significance of smooth terms:
##        edf Ref.df Chi.sq  p-value    
## s(x) 5.171  6.302  22.59 0.001711 ** 
## s(y) 4.779  5.886  25.73 0.000444 ***
## ---
## Signif. codes:  0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1
## 
## R-sq.(adj) =  0.052   Deviance explained = 31.3%
## -REML = 395.86  Scale est. = 1         n = 949
we can see that the s(x) and s(y) terms each have EDFs lower than the (default) k=10 they were given. Their combined EDF (about 9.9) is still smaller than that of the bivariate s(x,y) term (about 13.9), and the deviance explained drops (31.3% versus 34.9%) – perhaps because these terms have a lot of work to do without being able to properly model the interaction between x and y. We can see this when plotting the s(x) + s(y) model alongside the s(x,y) model:
par(mfrow=c(1,2))
vis.gam(dsm_nb_xy, view=c("x","y"), plot.type="contour",
        too.far=0.1, main="s(x,y, k=25) (link scale)", asp=1)
vis.gam(dsm_nb_x_y, view=c("x","y"), plot.type="contour",
        too.far=0.1, main="s(x) + s(y) (link scale)", asp=1)
We can also look at univariate plots of the terms in the s(x) + s(y) model using plot():
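# plot both smooth terms on a single page
plot(dsm_nb_x_y, pages=1)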