Dear Gianni,
I think you want this:
tapply(areaFe$K <= fer.conc.max, areaFe$ID, sum, na.rm=TRUE)
E.g.:
ID <- rep(1:10, each=10)
K <- runif(100)
areaFe <- data.frame(ID, K)
fer.conc.max <- 0.5
tapply(areaFe$K <= fer.conc.max, areaFe$ID, sum, na.rm=TRUE)
Robert
On Fri, Mar 13, 2009
Dear users,
I'm quite new to R and still learning the basics, so apologies for this
email; I don't wish to abuse the mailing list. I have read the manual and
the online help for tapply(...). I have a data frame and I need to count
the Fe values, with sum, for every ID sample area:
fer.conc.max <- tapply(areaFe$K, areaFe$
And if none of these helps, remember to specify the NA (not available) values
in your data:
read.table("data.txt", na.strings="yournavalues", as.is=TRUE)
If the NA values in your original data were, for example, "NN", R would think
the column was made of factors and convert it accordingly. By using
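A minimal, self-contained sketch of this point (the file contents below are invented, and textConnection stands in for a real file on disk):

```r
## Invented example data: missing values are coded as "NN" in the raw file
txt <- "x y
1.2 NN
3.4 5.6"

## Without na.strings, the column containing "NN" would be read as
## character/factor; with it, "NN" becomes NA and both columns stay numeric
d <- read.table(textConnection(txt), header = TRUE,
                na.strings = "NN", as.is = TRUE)
str(d)
```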
On Thu, 12 Mar 2009, Huang Juying wrote:
Dear list members,
I am a new R user and spdep user. I tried to use function "localG" to
calculate Getis Gi. My targeted variable is log(SalePrice) for over 1000
houses. The listw is the number of neighbors within 3000m.
SalePrice.Gi <- localG(SaleP
Hello Virgilio,
INLA looks like a great alternative to the MCMC methods. You're right:
ultimately, I am interested in the marginals. I'll take a look at the
course and let you know how it works out. This could be a huge time
saver.
Thanks,
-Ben
On Thu, Mar 12, 2009 at 3:46 AM, Virgilio Gome
Hi Pieter-
Dave Foley at NOAA has developed R and Matlab routines for downloading lots
of oceanographic data products from the CoastWatch servers. Products
include MODIS and others. Have a look at his routines for getting data in
any space-time interval you need...
http://coastwatch.pfel.noaa.g
Dear Cristina,
It sounds like a problem with the decimal
separator in your raw data. If it is a comma,
that can explain your problem. When reading the data into R, indicate dec=","
(in read.table, dec sets the decimal separator; sep sets the field separator).
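A small sketch with invented data showing read.table's dec argument in action (dec, not sep, controls the decimal separator):

```r
## Invented file using ";" as field separator and "," as decimal separator
txt <- "x;y
1,5;2,7
3,1;4,9"

d <- read.table(textConnection(txt), header = TRUE, sep = ";", dec = ",")
d$x + d$y   # both columns are numeric, not factors: 4.2 and 8.0
```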
HTH
Marcelino
At 18:29 12/03/2009, Huang Juying wrote:
Dear list members,
I am a new R user and spdep user. I tried to use function "localG" to calculate
Getis Gi. My targeted variable is log(SalePrice) for over 1000 houses. The
listw is the number of neighbors within 3000m.
> SalePrice.Gi <- localG(SalePrice, nb3000m.W, zero.policy=TRUE)
Then I
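For readers hitting the same truncated call: a minimal, self-contained sketch of what such a localG call involves. The coordinates and prices below are invented; dnearneigh builds the 3000 m neighbour list and nb2listw converts it to a listw, as in spdep's documentation:

```r
library(spdep)

set.seed(1)
xy    <- cbind(runif(50, 0, 10000), runif(50, 0, 10000))  # invented coords (m)
price <- rlnorm(50, meanlog = 12)                         # invented sale prices

nb <- dnearneigh(xy, d1 = 0, d2 = 3000)   # neighbours within 3000 m
lw <- nb2listw(nb, style = "W", zero.policy = TRUE)
gi <- localG(log(price), lw, zero.policy = TRUE)
head(gi)                                  # local G statistics (z-values)
```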
Dear list,
We are working on an agent-based simulation of agricultural production
in the Pampas of Argentina. Very briefly, we want to ask the list for
tips about how to divide a polygon (e.g., a county, or administrative
division) into several smaller polygons of different sizes. Each of
the smal
On Thu, 12 Mar 2009, Ben Fissel wrote:
Hello,
I am attempting to fit a CAR count-data model of the Besag-York-Mollié
form to US zip code data (the entire US minus Hawaii and Alaska). However, I'm
running out of memory reading in the zip code data. The zip code data I
obtained from the censu
You may also want to try the Thomas process:
ow <- map('state','illinois', plot=FALSE, fill=TRUE)
ow.sppgs <- map2SpatialPolygons(ow, IDs=ow$names,
proj4string=CRS("+proj=longlat"))
ow <- as(ow.sppgs, "owin")
x <- rThomas(2, 0.1, 10, win = ow)
sp.pts <- as(x, "SpatialPoints")
plot(ow.sppgs)
Dear Els,
please look at the default value for tol.hor in ?variogram and specify
it in your second call.
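As a concrete illustration of this point, a sketch using the meuse data shipped with sp/gstat; alpha and tol.hor are documented arguments of gstat's variogram (see ?variogram for the default tolerance):

```r
library(sp)
library(gstat)

data(meuse)
coordinates(meuse) <- ~ x + y

## 45-degree directional variogram; tol.hor is the angular tolerance
## around that direction, set explicitly here rather than left at its default
v45 <- variogram(log(zinc) ~ 1, meuse, alpha = 45, tol.hor = 22.5)
head(v45)
```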
--
Edzer
Els Verfaillie wrote:
> Dear list,
>
> When plotting directional variograms, I wonder why in the first plot I see a
> different sample variogram for the 45° direction than in the
Dear list,
When plotting directional variograms, I wonder why in the first plot I see a
different sample variogram for the 45° direction than in the second plot
(where I only want to plot the 45° variogram). Do I have to define for the
second plot an angular tolerance that is the default for the fir
Hi Ben,
> There are 31625 zipcodes which correspond to a geographic area in the
> US. I'm expecting it to take a while to run, as long as it takes less
I know that some people in Spain were using similar models with around
8500 areas, and it took 18 hours to run. So I am not sure whether in your case
WinBU
Hello Virgilio,
There are 31625 zipcodes which correspond to a geographic area in the US.
I'm expecting it to take a while to run, as long as it takes less than say
a week I'm prepared to wait. Naturally, I'll set my code up and test it on a
smaller geographic area before scaling up. As long as i
Ben,
How many areas will you have in case you are able to read in all your
shapefiles? Are you sure that you will be able to handle that once you
have your data ready? If you have too many areas, perhaps you should
consider aggregating your data in some way and try running a model with
that.
I know
Hello,
I am attempting to fit a CAR count-data model of the Besag-York-Mollié
form to US zip code data (the entire US minus Hawaii and Alaska). However, I'm
running out of memory reading in the zip code data. The zip code data I
obtained from the census at
http://www2.census.gov/geo/tiger/TIGE
Dear list,
thank you very much for all the many suggestions on how to extract values to
points! I ended up using the RSAGA package with the pick.from.ascii.grid()
function.
Cheers,
Frauke
> Date: Tue, 10 Mar 2009 20:19:17 +0800
> Subject: Re: [R-sig-Geo] extract values to points
> From: r.hij