I'm looking at two variables, concentration and precipitation, measured at
two networks of stations:
network A measures both variables at a few (~30), widely spaced stations
network B measures only precipitation at many (~1800) densely spaced
stations
a decent correlation between the two variables
Dear Ralph,
I guess you'd need to define three variables,
concA, precA and precB. Then, you can let precA and precB have
different neighbourhood specs, but make them statistically identical
by specifying the same variograms for precA and precB as
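A minimal sketch of that three-variable setup in gstat. Everything here is an assumption for illustration, not from the original thread: the data objects dataA and dataB, the column names conc and prec, the nmax values, and the variogram parameters.

```r
library(gstat)

# Hypothetical inputs: dataA holds conc and prec at the ~30 network-A
# stations; dataB holds prec at the ~1800 network-B stations.
g <- gstat(NULL, id = "concA", conc ~ 1, data = dataA)
g <- gstat(g,    id = "precA", prec ~ 1, data = dataA, nmax = 10)
g <- gstat(g,    id = "precB", prec ~ 1, data = dataB, nmax = 40)

# Make precA and precB statistically identical by giving every direct
# and cross variogram the same model (replace with your fitted model):
g <- gstat(g, model = vgm(1, "Sph", 300, 0.1), fill.all = TRUE)

# Cokriging predictions at new locations would then be:
# pred <- predict(g, newdata = grid)
```

Note that fill.all = TRUE copies the one model onto all direct and cross variograms, which is the simplest way to force the "statistically identical" behaviour described above.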
Dear all,
I fitted a number of theoretical variogram models, such as spherical,
Gaussian, exponential, and linear, to an empirical semivariogram using the
gstat package. Please, how do I assess and identify the theoretical model
that best fits the empirical semivariogram?
Thanks while eager to read from
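One common yardstick, assuming the fitting was done with gstat's fit.variogram: compare the attained weighted sum of squared errors (the "SSErr" attribute of the fitted object) across candidate models. A sketch, where v stands for an empirical variogram produced by variogram() and the initial psill/range/nugget values are placeholders:

```r
library(gstat)

models <- c("Sph", "Gau", "Exp", "Lin")
fits <- lapply(models, function(m)
  fit.variogram(v, vgm(1, m, 300, 0.1)))

# Weighted sum of squared errors attained by each fit (smaller is better):
sse <- sapply(fits, attr, "SSErr")
names(sse) <- models
print(sse)
```

SSErr only measures fit to the empirical semivariogram; cross-validation of the resulting kriging predictions (see ?krige.cv) is a complementary check.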
Hi everyone.
I remember some very interesting and professional-looking R projects that
somehow involved geographical maps.
Now that I'd like to work with maps myself, let me ask you this question:
Which of the packages that are currently available would you think are the
most elegant and
Hi Markus,
That's... rather broad.
I'd suggest installing the Spatial task view, reading about what's
available as part of that collection at
http://cran.r-project.org/web/views/Spatial.html and then thinking
about what you are trying to accomplish.
Surely a more focused answer would be more
Hi all, I see someone had earlier posted a similar question to the list. But my
problem is that the counties in the input data don't match those on the map.
Codes below:
library(spdep)
library(maps)
data <- read.csv("C:/Users/mitra.devkota/Desktop/MT1.csv", header = TRUE)
names(data)
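Before joining, it helps to see exactly which names disagree. A base-R sketch: the example vectors below are hypothetical; with real data you would build data_names from your CSV's county column (the column name is unknown here) and map_names from map("county", plot = FALSE)$names in the maps package, which uses lowercase "state,county" keys.

```r
# Hypothetical keys in the maps::map("county") style, "state,county":
data_names <- c("montana,gallatin", "montana,lewis & clark", "montana,bighorn")
map_names  <- c("montana,gallatin", "montana,lewis and clark", "montana,big horn")

# Counties in the data with no match on the map, and vice versa:
in_data_only <- setdiff(data_names, map_names)
in_map_only  <- setdiff(map_names, data_names)
print(in_data_only)
print(in_map_only)
```

Spelling variants like "&" vs "and" or "bighorn" vs "big horn" are typical culprits, and tolower()/trimws() on both sides before comparing catches the rest.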
Dear all,
I have a set of 7 overlaying rasters for which I want to perform a principal
components analysis and subsequently extract the corresponding maps of the two
or three principal components (i.e. the scores).
I've been trying to do this for a while, but I keep getting stuck in the
Hey Edward,
you may find this function useful.
https://github.com/environmentalinformatics-marburg/magic/blob/master/eot/src/EotDenoise.R
It is designed to 'denoise' raster data (i.e. run a principal components
analysis and keep only the first k components - k being an integer you
specify - for the reconstruction).
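The idea behind that kind of denoising can be sketched in base R, independently of the linked function: run a PCA, drop all but the first k components, and back-transform. The toy data below (rows as pixels, columns as layers) are an assumption for illustration.

```r
set.seed(1)
# Toy data: 100 "pixels" x 7 "layers", a rank-2 signal plus small noise
signal <- matrix(rnorm(200), 100, 2) %*% matrix(rnorm(14), 2, 7)
x <- signal + matrix(rnorm(700, sd = 0.1), 100, 7)

pca <- prcomp(x, center = TRUE, scale. = FALSE)
k <- 2  # number of components to keep

# Reconstruct from the first k principal components only,
# then add the column means back:
recon <- pca$x[, 1:k] %*% t(pca$rotation[, 1:k])
recon <- sweep(recon, 2, pca$center, "+")

# The k-component reconstruction stays close to the noisy input
mean(abs(recon - x))
```

Because the discarded components carry mostly noise here, the reconstruction error is on the order of the noise level.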
Dear Rolf,
Thank you for the insightful answer. Following your explanation I found out
that the key problem was that I didn't specify the simulation in the Linhom
envelope. For some reason I thought that lambda would be overriding the
envelope-creating point process. So, your example 4 was really
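For the record, specifying the simulation when building a Linhom envelope in spatstat looks roughly like this sketch; X (the observed point pattern) and lam (an intensity image of class "im") are hypothetical stand-ins for the objects in this thread.

```r
library(spatstat)

# Simulate envelopes from an inhomogeneous Poisson process with
# intensity lam, instead of letting envelope() default to CSR:
E <- envelope(X, fun = Linhom, simulate = expression(rpoispp(lam)),
              nsim = 99)
plot(E)
```

Without the simulate argument, envelope() simulates complete spatial randomness, which is exactly the mismatch described above.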
César, I think you can do princomp(na.omit(values(s)), cor=TRUE)
For large objects, also see the pca example in ?raster::predict
Robert
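Extracting score maps then follows the same pattern. A hedged sketch: the raster-specific lines are commented out (they assume a 7-layer stack s, as in the question), and the PCA step itself is shown on a plain matrix standing in for values(s).

```r
# With the raster package (s being the 7-layer stack):
#   vals <- values(s)
#   pc   <- princomp(na.omit(vals), cor = TRUE)
#   score_maps <- predict(s, pc, index = 1:3)   # see the pca example in ?raster::predict

# The PCA step on a stand-in matrix (rows = cells, columns = layers):
set.seed(42)
vals <- matrix(rnorm(70), 10, 7)
pc <- princomp(na.omit(vals), cor = TRUE)

# Scores for the first three components, one column per score map:
scores3 <- pc$scores[, 1:3]
dim(scores3)
```

na.omit() drops whole rows containing NA, so with real rasters the commented predict() route is preferable: it handles NA cells per pixel and returns the score maps as raster layers.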
On Tue, Jul 23, 2013 at 10:42 AM, César Capinha nrevist...@yahoo.co.uk wrote:
Dear all,
I have a set of 7 overlaying rasters for which I want to perform a
On 24/07/13 07:56, Allar Haav wrote:
Dear Rolf,
Thank you for the insightful answer. Following your explanation I found
out that the key problem was that I didn't specify the simulation in
the Linhom envelope. For some reason I thought that lambda would be
overriding the envelope-creating point
Ah! Thank you! Exponentiating the model truly fixed the problem, as the
pixel image indeed stems from a multinomial logit model. I am not really
sure why, though; something to do with the Poisson process probability
function?
All the best,
Allar
On 23/07/2013 22:57, Rolf Turner wrote:
This
Dear All,
Please, what is the usefulness of a shapefile created for spatial data having
longitude and latitude as the spatial component and yield measured in bu/ac
as the attribute component? How relevant is the shapefile in the course of
carrying out spatial analysis? I am a beginner. I have just created a
Allar Haav writes:
I am currently trying to use spatstat's inhomogeneous cluster analysis
methods (mostly Linhom), but ran into a problem. Namely, I have a pixel
image (of class "im") with values ranging from 0 to 1 indicating point
probabilities. When creating and plotting random points with it,