In a message dated Friday 23 December 2011, dr. VM Giacalone wrote:
> Hi everybody,
> I’m using qgis1.7.3 and the R-adehabitat package to evaluate the home range
> of my fish in a marine protected area. Well, I’m interested in comparing
> the different KUD… LSCV, h-adj, h-ref, but when I launch the application,
> I can get data about MCP and kud h-ref only, with the following message: R
> message (kernel LSCV): no subset converged, no output.
>  
> I have lots of FIX with the same coordinates due to the proximity of fish
> shelter to a receiver location. Arcview3 with animal movement routine is
> able to calculate LSCV kud while Arcgis9 and HRT routine seems to have the
> same “problem” of QGis. 
> My pc setup is winXP sp3, qgis1.7.3, R2.11.1.
>  
> Thanks in advance for your help and merry Christmas!
>  
> Max
>  

LSCV h calculations sometimes do not converge.
It's an old and well-known "property", long since understood (it depends on the 
nature of the least-squares algorithm used), and AFAIK no general solution is 
possible.

Hooge's Animal Movement uses a sort of 'dirty trick': when LSCV does not 
converge, it falls back to the fixed (reference) h calculation and returns that 
instead (which is not what you asked for and - worse - you are not told about 
the substitution).

Adehabitat is 'honest' in that it warns you that LSCV did not converge.
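
As a quick check (a minimal sketch, assuming the old adehabitat API and a data 
frame called locs with an ID factor plus X/Y coordinates), you can ask kernelUD 
which animals actually converged before trusting the bandwidths:

  library(adehabitat)
  ud <- kernelUD(locs[, c("X", "Y")], locs$ID, h = "LSCV", grid = 40)
  sapply(ud, function(u) u$h$convergence)   # TRUE/FALSE for each animal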

If it can help, here's the R code I routinely use to calculate home ranges.


BTW: if you have several "identical" locations, as a rule of thumb there are
two approaches:
1) that location is a "den" or something behaviourally equivalent, e.g. a
resting site, and by your experimental design it is not part of the home range
-> drop all those locations
2) those points tend to accumulate around some location that actually is a
center of activity (say, a foraging site), and they are part of the home range
-> have a look at the jitter() function ;) (a small sketch follows below)
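
A minimal sketch of option 2, assuming the locations sit in a data frame called 
locs with numeric X/Y in metres and that ~5 m of noise is acceptable for your 
receivers:

  set.seed(1)                            # only to make the example reproducible
  locs$X <- jitter(locs$X, amount = 5)   # add up to +/- 5 m of noise to Easting
  locs$Y <- jitter(locs$Y, amount = 5)   # same for Northing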

Hope this helps...



-- 
What makes the universe so hard to comprehend is that there's nothing
to compare it with.
 ----------------------------------------------------------
Damiano G. Preatoni, PhD

Unità di Analisi e Gestione delle Risorse Ambientali
  - Guido Tosi Research Group
Dipartimento di Scienze Teoriche e Applicate
Università degli Studi dell'Insubria
Via J.H. Dunant, 3 - 21100 Varese (ITALY)

tel +39 0332421538 fax +39 0332421446
http://biocenosi.dipbsf.uninsubria.it/
ICQ: 78690321 jabber: p...@jabber.org skype: prea.net
-----------------------------------------------------------
Please consider the environment before printing this email
Please do not send attachments in proprietary formats
http://www.gnu.org/philosophy/no-word-attachments.html
Use the UNI CEI Standard ISO/IEC 26300:2006
-----------------------------------------------------------
O< stop html mail - http://www.asciiribbon.org
###############################################################################
# UAGRA R Scripts                                                  HR_engine.R
###############################################################################
# Home-range calculation utility functions
#############################################################################
#    Copyright (C) 2007  Damiano G. Preatoni
#
#    This program is free software: you can redistribute it and/or modify
#    it under the terms of the GNU General Public License as published by
#    the Free Software Foundation, either version 3 of the License, or
#    (at your option) any later version.
#
#    This program is distributed in the hope that it will be useful,
#    but WITHOUT ANY WARRANTY; without even the implied warranty of
#    MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the
#    GNU General Public License for more details.
#
#    You should have received a copy of the GNU General Public License
#    along with this program.  If not, see <http://www.gnu.org/licenses/>.
#############################################################################
#
# Version 0.3
#
# Description: calculates different home range estimators
#
# Usage: source this file, then call HR_cruncher() (and, if needed,
#        kerneloverlap.spot(), corearea() or exporter())
#
# Requires: adehabitat
#
# Returns: an OutputBox list containing the individual outputs
#
# References: Wauters LA, Preatoni DG, Molinari A, Tosi G (2007) Radio-tracking
#             squirrels: Performance of home range density and linkage estimators
#             with small range and sample size.
#             Ecological Modelling, 202(3-4):333-344
#
###############################################################################
# created anne 2007-07-19
# updated prea Thu Jan 21 09:28:46 CET 2010
# revision history:
# prea Wed Aug 1 11:26:45 CEST 2007 - reshaped and converted into sourceable
#     function.
# anne Wed Nov 14 17:11:00 CEST 2007 - removed the old header and thought about
#     the ifs
# anne Mon Nov 26 20:58:00 CEST 2007 - found how to get all the output in one
#     object (not yet) and abandoned switches' idea
# anne Wed Nov 28 15:30:00 CEST 2007 - set the output object and started to fix
#     the first checking-ifs
# anne Wed Jan 23 15:07:38 CEST 2008 - added kernelUD in hadj and used the %in% 
#     selection for kernels
# anne Thu Jan 24 9:40:00 CEST 2008 - ended the conversion of lscv into a 
#     scope-free function and set the OutputBox names
# anne Wed Feb 6 16:29:30 CEST 2008 - upgraded to 0.2.0, i.e. changed the interface
# anne Tue Mar 11 13:54:00 CEST 2008 - corrected hadj and thought about deep 
#     refactoring
# anne Tue Apr 22 14:35:00 CEST 2008 - created turner function and corrected hadj
# anne Thu May 29 22:25:00 CEST 2008 - let khr.area calculate ALL levels
# prea Thu Sep 25 13:32:22 CEST 2008 - added grid= as external parameter
# anne Mon Feb 2 17:10:00 CEST 2009 - added capability of reading sahrlocs' locs
#     dataframe, therefore version skips to 0.2.2
# prea Thu Jan 21 09:28:46 CET 2010 - version 0.3. renamed to HR_engine.R, 
#     translated variable names and comments into English, added kerneloverlap.spot
#     function borrowed from a separate file (20090302 - rewrote from scratch modifying
#     adehabitat kerneloverlap function), added corearea function for incremental
#     area analysis sensu Hanski et al. 2000 (created Wed Aug 1 11:26:45 CEST 2007,
#     updated prea Tue Aug 7 11:23:50 CEST 2007, added clustering in version 1.1)
#
###############################################################################

#@TODO(anne) need to add a column to the final dataframe to store the number of
#            locations used per ID. Look for sample code to get such a column
#            into lepre_periods_general.R (mountain hare project folder)
#@TODO(anne) line 206.

#@TODO(prea) if possible, somehow intercept the messages sent out when lscv is not converging and print them out along with the animal ID
#@NOTE(anne) lscv messages should be kernelUD's affair, not motore
#@NOTE(prea) since kernelUD is Clément's affair we're stuck, so better get that message and forget the single point of truth (open a ticket on the Trac?)
#@TODO(anne) integrate sahrlocs' grid

## wrapper function to calculate home ranges with almost all the available methods.
## the final user should call just this function. x can be a sahrlocs object or a
## data frame with the locations, structured as follows:
## $ ID  : Factor : animal unique identifier (a sahrlocs 'Name' column is renamed to ID)
## $ X   : numeric: Easting
## $ Y   : numeric: Northing
## $ ... whatever else
HR_cruncher <- function(x, method=c("mcp","href","lscv","hadj","clusthr","NNCH"), prc=95, grid=40) {
  #print("HR_Cruncher called...")
  if (!require(adehabitat)) 
    stop("This procedure requires adehabitat to be installed")
  #print("sanity check for adehabitat done")
  if (inherits(x, "sahrlocs")) {
    # sahrlocs objects: extract the locations and rename 'Name' to 'ID'
    dd <- getsahrlocs(x, what="locs")
    dd$ID <- dd$Name
    dd$Name <- NULL
  } else if (inherits(x, "data.frame")) {
    dd <- x
  } else {
    stop("x should be either a sahrlocs object or a dataframe")
  }
  #print("sanity check for dataframes done")
  # if (!all(c("ID","X","Y") %in% names(dd)))
  #   stop("check the dataframe: ID, X or Y are missing.")
  if (length(dd) > 3) {
    dd <- subset(dd, select = c("ID", "X", "Y"))
  }
  #print("sanity check for columns done")
  if ((!inherits(dd$ID, "factor"))) {
    dd$ID <- factor(dd$ID)
  }
  #print("sanity check for ID as factor done")

  # calculate location extent to size up "grid"
  Xmin <- min(dd$X)
  Xmax <- max(dd$X)
  Ymin <- min(dd$Y)
  Ymax <- max(dd$Y)
  #print("limits calculated")

  OutputBox <- list()
  class(OutputBox) <- c("list","homerange")

  ############################################################# helper functions
  ## filters out only lscv UD where lscv algorithm converged
  lscv.cruncher <- function(dd, grid=40) {
    khr.lscv <- kernelUD(dd[,c('X','Y')],dd[,'ID'], h="LSCV", grid=grid)
    cleankhr <- list()
    class(cleankhr) <- c("khr","khrud")
    for (i in names(khr.lscv)) {
      if (khr.lscv[[i]]$h$convergence == TRUE) {
        cleankhr[[i]] <- khr.lscv[[i]]
      }
    }
    return(cleankhr)
  }

  ## transpose a dataframe
  #@TODO(anne): refactor :)
  turner <- function(table, colname) {
    storeID <- names(table)
    table <- as.data.frame(t(as.data.frame(table)))
    #table <- data.frame(table)
    colnames(table) <- c(colname)          # assign column names
    table$ID <- storeID                    # better have back ID as a column
    row.names(table) <- table$ID
    return(table)
  }
  
  ## transpose on more than one column
  multicolturner <- function(table) {
    storeID <- names(table)
    storecols <- row.names(table)
    table <- as.data.frame(t(as.data.frame(table)))
    #table <- data.frame(table)       # cast again as dataframe since t returns a matrix
    ### colnames must be "20%ha".
    newnames <- list()
    counter <- 0
    for (i in storecols) {
      counter <- counter+1
      newnames[counter] <- paste("ha_percent_", i, sep="")
    }
    colnames(table) <- newnames         # assign column names
    table$ID <- storeID     # better have back ID as a column
    row.names(table) <- table$ID
    return(table)
  }

  ################################################## analysis methods dispatcher
  if ("mcp" %in% method) {                                                 # MCP
    print("MCP calculation running... should be fast")
    OutputBox[["mcp.shapes"]] <- mcp(dd[,c('X','Y')],dd[,'ID'], percent = prc)
    mcpArea <- mcp.area(dd[,c('X','Y')],dd[,'ID'], percent = prc, unin = "m", unout = "ha", plotit = FALSE)
    pivot <- as.data.frame(t(mcpArea))
    pivot$ID <- row.names(pivot)
    names(pivot) <- c(paste("MCP_", prc, sep=""),"ID")
    OutputBox[["mcp.areas"]] <- pivot
    print("Done.")  
  }
  if ("href" %in% method) {                                          # KHR(href)
    print("Kernel href calculation...")
    OutputBox[["href.volumes"]] <- kernelUD(dd[,c('X','Y')],dd[,'ID'],grid=grid)
    khr.area <-  kernel.area(dd[,c('X','Y')],dd[,'ID'],levels=seq(10,95, by=5),unin="m",unout="ha",grid=grid)
    khr.area <- multicolturner(khr.area)
    khr.h <- list()
    for (i in names(OutputBox[["href.volumes"]])) {
      khr.h[i] <- OutputBox$href.volumes[[i]]$h
    }
    khr.h <- turner(khr.h, "h_ref")
    khr.area <- merge(khr.h, khr.area, by.x = 'ID', by.y = 'ID') 
    OutputBox[["href.areas"]] <- khr.area
    print("Done.")
    rm(khr.area)
    rm(khr.h)
  } 
  if ("lscv" %in% method) {                                          # KHR(lscv)
    print("Kernel LSCV calculation...")
    khr.vols <- lscv.cruncher(dd,grid)
    khr.area <- kernel.area(dd[,c('X','Y')], dd$ID, h="LSCV",levels=seq(10,95, by=5),unin="m",unout="ha",grid=grid)
    khr.h <- list()
    if (is.null(names(khr.vols))) {
      if (length(method) == 1) {
        warning("No subset converged. Analysis unsuccessful. Output object is NULL.")
        return(NULL)
      } else {
        warning("No subset converged. Analysis unsuccessful.")
      }
    } else {
      OutputBox[["lscv.volumes"]] <- khr.vols
      for (i in names(OutputBox[["lscv.volumes"]])) {
        khr.h[i] <- OutputBox$lscv.volumes[[i]]$h$h
        #print(OutputBox$lscv.volumes[[i]]$h$h)
      }
      khr.area <- multicolturner(khr.area)
      khr.h <- turner(khr.h, "h_LSCV")
      khr.area <- merge(khr.h, khr.area, by.x = 'ID', by.y = 'ID')
      OutputBox[["lscv.areas"]] <- khr.area
      print("Done.")
    }
  } 
  if ("hadj" %in% method) {                                          # KHR(hadj)
    print("Kernel hadj calculation...")
    if (is.null(OutputBox[["href.volumes"]])) {
      OutputBox[["href.volumes"]] <- kernelUD(dd[,c('X','Y')],dd[,'ID'],grid=grid)
    }
    if (is.null(OutputBox[["lscv.volumes"]])) {
      OutputBox[["lscv.volumes"]] <- lscv.cruncher(dd,grid)
    }
    #@TODO(anne) hadj calculations (see ecological modelling 202: 333-344).
    #@TODO(anne) leave here or place in a separate file?
    # 1: hadj[i] = (hlscv[i]/href[i])
    N <- length(names(OutputBox$href.volumes))          # how many rows do we need?
    hadj <- data.frame(ID=character(N),hadj=numeric(N)) # size up a destination dataframe
    row.names(hadj) <- names(OutputBox$href.volumes)    # give rows names
    hadj$ID <- names(OutputBox$href.volumes)            # fill in ID column
    # if lscv is globally unsuccessful, abort execution.
    #@TODO find a cleaner way...
    abort <- FALSE
    for (each in row.names(hadj)) {
      if (!is.null(OutputBox$lscv.volumes[[each]]$h$convergence)) {
        abort <- FALSE
        break
      } else {
        abort <- TRUE
      }
    }
    if (abort == TRUE) {
      warning("LSCV never converged. hadj would be equal to href. Aborting. Output object is NULL.")
      return(NULL)
    }
    for (i in row.names(hadj)) {
      if (is.null(OutputBox$lscv.volumes[[i]]$h$convergence)) {
        # not converged, hadj = 1
        hadj[i,'hadj'] <- 1
      }
      else {
        if (OutputBox$lscv.volumes[[i]]$h$convergence) { # paranoia!
          # converged, hadj[i] = hlscv[i]/href[i]
          hadj[i,'hadj'] <- OutputBox$lscv.volumes[[i]]$h$h/OutputBox$href.volumes[[i]]$h     
        }
        else { # paranoia (again)!
          hadj[i,'hadj'] <- 1    
        }
      }
    }
    # 2: calculate average hadj
    hadj.mean <- mean(hadj$hadj)
    # 3: recalculate kernel surface areas with h=hadj.mean * href[i]
    khr.hadjdf <- data.frame(percent=seq(10,95,by=5))
    hadj.values <- list()
    khr.geo <- list()
    class(khr.geo) <- c("khrud","khr") ## FIX! class MUST be BOTH khrud and khr, if not getverticeshr will complain
    for (i in names(OutputBox$href.volumes)) {  
      h <- hadj.mean*OutputBox$href.volumes[[i]]$h
      d <- subset(dd, ID == i, select = c('ID','X','Y'))
      khr.hadjdf[i] <- kernel.area(d[,c('X','Y')],d$ID,h=h,levels=seq(10,95,by=5),unin="m",unout="ha",grid=grid)
      khr.geo[i] <- kernelUD(d[,c('X','Y')], h=h, grid=grid)
      hadj.values[i] <- h
    }
    OutputBox[["hadj.volumes"]] <- khr.geo
    row.names(khr.hadjdf) <- khr.hadjdf$percent
    khr.hadjdf$percent <- NULL
    khr.hadj <- multicolturner(khr.hadjdf) # reshape areas table
    hadj.values <- turner(hadj.values, "h_adj") # reshapes h table
    khr.hadj <- merge(hadj.values, khr.hadj, by.x = 'ID', by.y = 'ID')
    OutputBox[["hadj.areas"]] <- khr.hadj
    print("Done.")
  }
  if ("clusthr" %in% method) {                                      # cluster HR
    #print("clusthr option is not yet active. Tell Anne to work on it.")
    print("clusthr method is not implemented yet.")
    #@TODO(anne) implement bidimensional cluster (Kenward et al. 2001) 
    }
  if ("NNCH" %in% method) {                                             # LoCoH!
    print("a-NNCH calculation. Could be long. It empirically takes e^n seconds...")
    maxdistance <- sqrt((max(dd$Y) - min(dd$Y))^2 + (max(dd$X) - min(dd$X))^2) # extent diagonal
    nnch.shapes <- NNCH(dd[,c('X','Y')],dd[,'ID'], a=maxdistance, unin="m", unout="ha", status=FALSE, duplicates=0.2)
    nnch.area <- NNCH.area(nnch.shapes, rev(seq(10, 100, by = 5)), plotit=FALSE)
    nnch.area <- as.data.frame(t(nnch.area))
    # strip out the 'a' and all other stuff from the animal names (damn!)
    nnch.area$ID <- sapply(strsplit(row.names(nnch.area), '\\..*'), "[", 1)
    row.names(nnch.area) <- nnch.area$ID
    # names(nnch.area) <- c("NNCH95","ID")
    OutputBox[["NNCH.shapes"]] <- nnch.shapes
    OutputBox[["NNCH.areas"]] <- nnch.area
    print("Done.")
  }
  ## well. if someone wants to add other dispatchers... move on from here 
  invisible(OutputBox)
}
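
## Usage sketch (a non-authoritative example; 'locs' is a hypothetical data frame
## with ID, X and Y columns, coordinates in metres):
# hr <- HR_cruncher(locs, method = c("mcp", "href", "lscv"), prc = 95, grid = 40)
# names(hr)       # e.g. "mcp.shapes", "mcp.areas", "href.volumes", "href.areas", ...
# hr$href.areas   # per-animal h_ref and surface areas (ha) at the 10-95% levels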


###############################################################################
# Calculates overlap using the "single point of truth" approach: UDs are pre-
# calculated and stored as R objects (either on disk or not) of class khrud, khr.
# Compared to the original kerneloverlap, kerneloverlap.spot does not calculate
# UDs by itself, but needs an object of class khrud, khr containing already
# calculated UDs.
################################################################################
kerneloverlap.spot <- function (x, method = c("HR", "PHR", "VI", "BA",
    "UDOI", "HD"), lev = 95, conditional = FALSE, ...) {
    method <- match.arg(method)
    if (!require(adehabitat)) 
      stop("This procedure requires adehabitat to be installed")
    if (!inherits(x, c("khrud","khr"))) {
      stop("x argument must be of class khrud! Aborting.")
    }
    vol <- getvolumeUD(x)
    res <- matrix(0, ncol = length(x), nrow = length(x))
    for (i in 1:length(x)) {
        for (j in 1:i) {
            if (method == "HR") {
                vi <- vol[[i]]$UD
                vj <- vol[[j]]$UD
                vi[vi <= lev] <- 1
                vi[vi > lev] <- 0
                vj[vj <= lev] <- 1
                vj[vj > lev] <- 0
                vk <- vi * vj
                res[i, j] <- sum(vk)/sum(vi)
                res[j, i] <- sum(vk)/sum(vj)
            }
            if (method == "PHR") {
                vi <- x[[i]]$UD
                vj <- x[[j]]$UD
                ai <- vol[[i]]$UD
                aj <- vol[[j]]$UD
                ai[ai <= lev] <- 1
                ai[ai > lev] <- 0
                aj[aj <= lev] <- 1
                aj[aj > lev] <- 0
                if (conditional) {
                  vi <- vi * ai
                  vj <- vj * aj
                  res[j, i] <- sum(vi * aj) * (attr(vi, "cellsize")^2)
                  res[i, j] <- sum(vj * ai) * (attr(vi, "cellsize")^2)
                }
                else {
                  res[j, i] <- sum(vi * aj) * (attr(vi, "cellsize")^2)
                  res[i, j] <- sum(vj * ai) * (attr(vi, "cellsize")^2)
                }
            }
            if (method == "VI") {
                vi <- c(x[[i]]$UD)
                vj <- c(x[[j]]$UD)
                ai <- vol[[i]]$UD
                aj <- vol[[j]]$UD
                ai[ai <= lev] <- 1
                ai[ai > lev] <- 0
                aj[aj <= lev] <- 1
                aj[aj > lev] <- 0
                if (conditional) {
                  vi <- vi * ai
                  vj <- vj * aj
                  res[i, j] <- res[j, i] <- sum(pmin(vi, vj)) * 
                    (attr(x[[i]]$UD, "cellsize")^2)
                }
                else {
                  res[i, j] <- res[j, i] <- sum(pmin(vi, vj)) * 
                    (attr(x[[i]]$UD, "cellsize")^2)
                }
            }
            if (method == "BA") {
                vi <- x[[i]]$UD
                vj <- x[[j]]$UD
                ai <- vol[[i]]$UD
                aj <- vol[[j]]$UD
                ai[ai <= lev] <- 1
                ai[ai > lev] <- 0
                aj[aj <= lev] <- 1
                aj[aj > lev] <- 0
                if (conditional) {
                  vi <- vi * ai
                  vj <- vj * aj
                  res[j, i] <- res[i, j] <- sum(sqrt(vi) * sqrt(vj)) * 
                    (attr(vi, "cellsize")^2)
                }
                else {
                  res[j, i] <- res[i, j] <- sum(sqrt(vi) * sqrt(vj)) * 
                    (attr(vi, "cellsize")^2)
                }
            }
            if (method == "UDOI") {
                vi <- x[[i]]$UD
                vj <- x[[j]]$UD
                ai <- vol[[i]]$UD
                aj <- vol[[j]]$UD
                ai[ai <= lev] <- 1
                ai[ai > lev] <- 0
                aj[aj <= lev] <- 1
                aj[aj > lev] <- 0
                if (conditional) {
                  vi <- vi * ai
                  vj <- vj * aj
                  ak <- sum(ai * aj) * (attr(vi, "cellsize")^2)
                  res[j, i] <- res[i, j] <- ak * sum(vi * vj) * 
                    (attr(vi, "cellsize")^2)
                }
                else {
                  ak <- sum(ai * aj) * (attr(vi, "cellsize")^2)
                  res[j, i] <- res[i, j] <- ak * sum(vi * vj) * 
                    (attr(vi, "cellsize")^2)
                }
            }
            if (method == "HD") {
                vi <- x[[i]]$UD
                vj <- x[[j]]$UD
                ai <- vol[[i]]$UD
                aj <- vol[[j]]$UD
                ai[ai <= lev] <- 1
                ai[ai > lev] <- 0
                aj[aj <= lev] <- 1
                aj[aj > lev] <- 0
                if (conditional) {
                  vi <- vi * ai
                  vj <- vj * aj
                  res[j, i] <- res[i, j] <- sqrt(sum((sqrt(vi) - 
                    sqrt(vj))^2 * (attr(vi, "cellsize")^2)))
                }
                else {
                  res[j, i] <- res[i, j] <- sqrt(sum((sqrt(vi) - 
                    sqrt(vj))^2 * (attr(vi, "cellsize")^2)))
                }
            }
        }
    }
    rownames(res) <- names(x)
    colnames(res) <- names(x)
    return(res)
}
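
## Usage sketch (assumes 'hr' comes from HR_cruncher() above, i.e. a list of
## pre-calculated UDs): pairwise overlap of the 95% href kernels
# ov <- kerneloverlap.spot(hr$href.volumes, method = "VI", lev = 95)
# ov              # square matrix with one row/column per animal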


###############################################################################
# Estimates core areas according to Hanski et al. 2000.
# Usage: res <- corearea(loc, id, method = "kernel")
# References:
# Hanski, I. K.; Stevens, P. C.; Ihalempia, P. & Selonen, V. (2000)
# Home-range size, movements, and nest-site use in the Siberian flying squirrel,
# Pteromys volans. Journal of Mammalogy, 81, 798-809.
#
###############################################################################
corearea <- function(xy, id = NULL, levels = seq(20,95,by = 5), method = "mcp", unin = "m", unout = "m", plot = TRUE) {
  if (!require(adehabitat)) 
    stop("This procedure requires adehabitat to be installed")
  alpha <- 0.05
  ret <- list()
  if(!method %in% c("clusthr","mcp","kernel")) {
    stop("method should be either \"clusthr\" or \"mcp\" or \"kernel\"") }
  if(method == "clusthr") {
    clust <- clusthr(xy, id)
    ret[['area']] <- clusthr.area(clust, percent = levels, unin = unin, unout = unout, plotit = FALSE)
    }
  if(method == "mcp") {
    ret[['area']] <- mcp.area(xy, id, percent = levels, unin = unin, unout = unout, plotit = FALSE)
    }
  if(method == "kernel") {
    ret[['area']] <- kernel.area(xy, id, h = "href", levels = levels, unin = unin, unout = unout)
    }
  #print(is(ret[['area']]))
  level.areas <- stack(as.data.frame(t(ret[['area']])))
  names(level.areas) <- c('area','coropleth')
  level.areas$coropleth <- ordered(level.areas$coropleth,levels=levels) # as factor!
  ret[['aov']] <- aov(area ~ coropleth, data = level.areas)
  # if coropleth factor is significant go on, else stop here
  if (!summary(ret[['aov']])[[1]][,5][1] <= alpha) {
    warning("No significant differences among coropleth areas.", call. = FALSE)
  }
  else {
    # do TukeyHSD
    ret[['tukey']] <- TukeyHSD(ret[['aov']],"coropleth",ordered = TRUE)
    # pick only consecutive pairs of coropleths
    n <- length(levels)
    i <- 1
    t <- vector()
    for(j in 1:n) {
      t[j] <- i
      i <- (i+n-j)
    }
    p <- ret[['tukey']]$coropleth[t[1:(length(t)-1)],4]
    if (length(p[p <= alpha]) == 0) {
      warning("No significant differences between consecutive coropleths found.", call. = FALSE)
    }
    else {
      val <- as.numeric(strsplit(names(p[p <= alpha]),"-")[[1]])
      cat("Tukey multiple comparison of means between consecutive levels\n\n")
      print(ret[['tukey']]$coropleth[t[1:(length(t)-1)],])
      cat("\n\nSignificant differences are present between levels",val[2],"and",val[1],"%\n")
      if (plot) {
        #@TODO(prea) make the plot using average +- standard deviation
        ret[['plot']] <- boxplot(area ~ coropleth,data=level.areas,xlab='% UD Coropleth',ylab=paste('Area (',unout, ")"))
        abline(v = match(val[2],levels),col="red")
      }
    }
  }
  invisible(ret)
}
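
## Usage sketch (hypothetical 'locs' data frame as above): incremental area
## analysis on MCPs, with areas reported in hectares
# ca <- corearea(locs[, c("X", "Y")], locs$ID, method = "mcp", unout = "ha")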

###############################################################################
## shapefile export functions
nooverlapname <- function(name, dir=getwd(), ext) {
  tempname <- name
  num <- 0
  fullname = file.path(dir, paste(name, ext, sep=""))
  while (file.exists(fullname)) {
    num <- num+1
    tempname <- paste(name, "_", num, sep="")
    fullname = file.path(dir, paste(tempname, ext, sep=""))
  }
  return(tempname)
}
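
## Usage sketch ("fox_MCP95" is just a made-up base name):
# nooverlapname("fox_MCP95", dir = getwd(), ext = ".shp")
## returns "fox_MCP95" if no such shapefile exists in 'dir', otherwise
## "fox_MCP95_1", "fox_MCP95_2", ... until a free name is found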

exporter <- function(x, attr, field="ID", out=allavailable, level=95, export= c("global", "separated"), shapename, dir = getwd(), owr=T) {
  namelist <- NULL
    if ((!inherits(x, "homerange")))
        stop("The object I\'m supposed to export \ndoes not come out from HR_cruncher, but if it has the same \nstructure, set its class to both list and homerange.")
    if (!require(rgdal) || !require(adehabitat)) 
      stop("This procedure requires rgdal and adehabitat to be installed.")
    rr <- attr
    allavailable <- sub("\\.areas","",agrep("areas",names(x),value=T)) # here we collect all the '*.areas' elements present in the OutputBox. This is a dirty trick to get all geometry elements, whose names could in fact change
    if ("mcp" %in% out) {
      mcpSpol <- area2spol(x$mcp.shapes)
    rrmcp <- merge(rr, x$mcp.areas, by.x = field, by.y = field)
    row.names(rrmcp) <- rrmcp$ID
      mcpSPDF <- SpatialPolygonsDataFrame(mcpSpol, rrmcp, match.ID = TRUE) 
        if ("global" %in% export) {
        ## NOT SAFE! grep the level from area column.
      name <- paste(shapename, "MCP", level, "all", sep="")
      if (owr == F) {
        name <- nooverlapname(name, dir=dir, ext=".shp")
      }
#       namelist <- as.list(c(namelist, name))
      namelist <- c(namelist, name)
      writeOGR(mcpSPDF,dir,name, "ESRI Shapefile")
      }
      if ("separated" %in% export) {
        warning("The separate MCP exporting is not yet implemented.")
      }
    }

  ########### GENERIC FACTORIZED EXPORTER FUNCTION 
  xport <- function(krnvol, krnarea, lev, shpname){ 
    kernelist <- NULL
    khr.out <- getverticeshr(krnvol, lev=lev)
    khr.outspol <- kver2spol(khr.out)
    # filter krnarea to pick up only the right columns
    hacolname <- grep(lev, grep("%ha",names(krnarea),value=T), value=T)
    hcolname <- grep("h_",names(krnarea),value=T)
    filter <- subset(krnarea, select= c(field,hcolname,hacolname))
    attribs <- merge(rr, filter, by.x = field, by.y = field)
    row.names(attribs) <- attribs[,field]
    khr.polygons <- SpatialPolygonsDataFrame(khr.outspol, attribs, match.ID = TRUE)
      if ("global" %in% export) {
        name <- paste(shpname,lev, sep="-")
        if (owr == F) 
          name <- nooverlapname(name, dir=dir, ext=".shp")
        kernelist <- c(kernelist,name)
        writeOGR(khr.polygons,dir, name, "ESRI Shapefile")
      }
      if ("separated" %in% export) {
        for (k in getSpPPolygonsIDSlots(khr.polygons))  {
          name <- paste(shpname, lev ,k, sep="-")
          if (owr == F) 
            name <- nooverlapname(name, dir=dir, ext=".shp")
          kernelist <- c(kernelist, name)
          writeOGR(khr.polygons[khr.polygons$ID == k,],dir,name, "ESRI Shapefile")
        }
      }
    # print(kernelist)
    # print(is(kernelist))
    # return(as.list(kernelist))
    return(kernelist)
  }
  ######### END

  if ("href" %in% out) {
    href.volumes <- x$href.volumes
    href.areas <- x$href.areas
    hrefs <- paste(shapename, "-href", sep="")
        list <- xport(href.volumes, href.areas, lev=level, shpname=hrefs)
    namelist <- c(namelist, list)
    # print(namelist)
  }

  if ("lscv" %in% out) {
    lscv.volumes <- x$lscv.volumes
    lscv.areas <- x$lscv.areas
    hlscvs <- paste(shapename, "-hlscv", sep="")
    list <- xport(lscv.volumes, lscv.areas, lev=level, shpname=hlscvs)
    namelist <- c(namelist, list)
  }

  if ("hadj" %in% out) {
    hadj.volumes <- x$hadj.volumes
    hadj.areas <- x$hadj.areas
    hadjs <- paste(shapename, "-hadj", sep="")
    list <- xport(hadj.volumes, hadj.areas, lev=level, shpname=hadjs)
    namelist <- c(namelist, list)
  }
  # print(namelist)
  return(namelist)
}
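
## Usage sketch (non-authoritative; 'hr' from HR_cruncher(), 'attribs' a
## hypothetical data frame with an ID column matching the animal names):
# exporter(hr, attr = attribs, out = c("mcp", "href"), level = 95,
#          export = "global", shapename = "fox", dir = getwd(), owr = TRUE)
## writes e.g. "foxMCP95all.shp" and "fox-href-95.shp" into 'dir' and returns
## the list of shapefile names written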

### End of File ##
# kate: encoding utf-8; syntax r; space-indent on; indent-width 2;
# kate: word-wrap-column 80; word-wrap-marker on; word-wrap-marker-color green;
# kate: auto-brackets on; indent-mode cstyle;
