R users: I have been trying to minimize computing time by taking advantage of lapply(). My data is a 1000 x 30 matrix, and the distance matrix was created with dist(). What I am trying to do is compute standard distances using the frequencies attached to the nearest neighbors of n reference zones. This gives 1000 standard distances, and I would like to see the frequency distribution of these standard distances.
# Convert decimal degrees into UTM miles
x <- (data[, 1] - 58277.194363) * 0.000621
y <- (data[, 2] - 4414486.03135) * 0.000621

# Combine x and y for computing distances
coords <- cbind(x, y)
pts <- nrow(data)

# Coerce the dist() output to a full matrix so it can be indexed by row
dis <- as.matrix(dist(coords))

# Subset housing data and employment data
RES <- data[, 3:17]
EMP <- data[, 378:392]

# Combine all the subdata as D
D <- cbind(coords, RES, EMP)
cases <- ncol(D) - ncol(coords)

# Create threshold bandwidths for defining the nearest neighbors
thrs <- seq(0, 35, by = 1)
SDTAZ <- rep(list(matrix(NA, nrow(D), length(thrs))), cases)

for (j in 1:nrow(D)) {
  for (k in 1:length(thrs)) {
    for (l in 1:cases) {
      nb <- which(dis[j, ] <= thrs[k])        # neighbors within the bandwidth
      w  <- D[nb, l + 2] - D[j, l + 2]
      w  <- w - min(w) + 1                    # shift frequencies to be positive
      SDTAZ[[l]][j, k] <- sqrt(sum(w * dis[j, nb]^2) / sum(w))
    }
  }
}

I think I should replace this nested loop with lapply(), but my attempts produced different values. I would appreciate it if someone could kindly help me. Thank you very much.

------------------------------------
Takatsugu Kobayashi
PhD Candidate
Indiana University, Dept. Geography

______________________________________________
R-help@stat.math.ethz.ch mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
and provide commented, minimal, self-contained, reproducible code.
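One way the triple for-loop above could be restructured around lapply()/vapply(): compute the neighbor set once per reference point and threshold instead of once per variable, then assemble the per-variable matrices at the end. This is a sketch on synthetic stand-in data (the original 1000-row matrix is not available here), and the helper name `sd_one` is hypothetical:

```r
## Synthetic stand-ins for coords and the RES/EMP columns (assumed shapes)
set.seed(1)
n      <- 100
cases  <- 4
coords <- matrix(runif(n * 2, 0, 35), ncol = 2)
vals   <- matrix(rpois(n * cases, 10), ncol = cases)  # stand-in for D[, -(1:2)]
dis    <- as.matrix(dist(coords))                     # dist() coerced to a matrix
thrs   <- seq(0, 35, by = 5)

## Weighted standard distance for one reference point j and one threshold t:
## the neighbor set is found once, then reused for all `cases` variables.
sd_one <- function(j, t) {
  nb <- which(dis[j, ] <= t)
  vapply(seq_len(cases), function(l) {
    w <- vals[nb, l] - vals[j, l]
    w <- w - min(w) + 1                    # shift frequencies to be positive
    sqrt(sum(w * dis[j, nb]^2) / sum(w))
  }, numeric(1))
}

## One (cases x length(thrs)) matrix per point, built with lapply()
res <- lapply(seq_len(n), function(j)
  sapply(thrs, function(t) sd_one(j, t)))

## Rearrange into the SDTAZ layout: one (n x length(thrs)) matrix per variable
SDTAZ <- lapply(seq_len(cases), function(l)
  t(sapply(res, function(m) m[l, ])))
```

Because `which(dis[j, ] <= t)` now runs once per (point, threshold) rather than once per (point, threshold, variable), the neighbor search is done `cases` times less often; the arithmetic itself is unchanged, so the values should match the loop version.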