Thanks to you both. Calling recover() (an option hitherto unknown to me) helped
me identify the problem.
For the record, the error occurred in the geom_path() line, not in the list
concatenation as I had previously thought. It was a logic problem: when
typeof == NULL the function jumped, but i
json_dir is a list of JSON lists mapping lat/long route points between
locations using CloudMade's API.
post_url is the URL of the HTTP request
for (n in json_dir) {
  i = i + 1
  if (!is.null(json_dir[[i]])) {  # typeof() never returns NULL; is.null() is the right test
    if (i == 1) {
      dat_add
Have you looked at plyr?
Generally, ldply works well for this sort of thing.
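To make that concrete, here is a minimal sketch of the ldply route on hypothetical route data (real CloudMade responses would carry more fields than lat/long):

```r
library(plyr)

# hypothetical list of route points; some entries may be NULL
routes <- list(
  list(lat = 42.35, long = -71.06),
  NULL,
  list(lat = 41.88, long = -87.63)
)

# drop the NULLs first, then let ldply row-bind each point into a data.frame
dat <- ldply(Filter(Negate(is.null), routes), as.data.frame)
```

Filter(Negate(is.null), ...) also sidesteps the NULL-entry check that the original loop needed.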
--
View this message in context:
http://r.789695.n4.nabble.com/convert-a-list-to-a-data-frame-tp4532206p4533257.html
Sent from the R help mailing list archive at Nabble.com.
Working code that normalizes each row's value against the subset's maximum.
Does the invocation of max() somehow instruct R to 'step back' and evaluate
the subset?
Thanks, Zack
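Without the original code, here is a minimal base-R sketch of that pattern on hypothetical data: ave() replaces each value with FUN applied to its group, so the division is plain element-wise arithmetic and no "stepping back" is involved.

```r
df <- data.frame(group = c("a", "a", "b", "b"),
                 value = c(2, 4, 5, 10))

# ave() returns, for each row, max() of that row's group
df$norm <- df$value / ave(df$value, df$group, FUN = max)
```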
Just read into a data.frame with read.table and then subset to use the first
column.
e.g.,
your_desired_data <- data.frame(read.table(path_to_file, sep = , fill = TRUE))
your_desired_data <- your_desired_data[, 1]
I'm sure there's a better way to do this using plyr. I just can't nail the
right series of commands.
I've got a vector of strings. I want to remove every string whose count within
the vector is 1.
Right now I'm using:
Where x2 is the original data.frame and my character strings live
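A base-R sketch of that filter on a hypothetical vector: table() counts the occurrences, and indexing the table by the vector itself maps each element back to its own count.

```r
x <- c("apple", "pear", "apple", "fig", "pear")
counts <- table(x)

# keep only strings that appear more than once
x_kept <- x[counts[x] > 1]
```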
The first block of code should be reproducible.
For the second block, you need only a data.frame. I've included a few rows
from the one I'm working with.
Two required libraries: maps, ggplot2.
http://r.789695.n4.nabble.com/file/n4492267/upton_tank_trunc_nabble.csv
upton_tank_trunc_nabble.csv
I'm sure this is a smack-the-head moment, but I haven't been able to find an
example of this on Nabble or SO, so thought I'd ask.
This works:
michigan <- map_data('county', 'michigan')
mich_points <- data.frame(x = rnorm(n = 200, median(michigan[,1]), 0.75),
                          y = rnorm(n = 200, median(michigan[,2]),
Question:
twitteR's searchTwitter() function contains a 'geocode' argument that
returns tweets from users whose location falls within a given radius.
I'm not completely familiar with the API from which twitteR pulls, but no
mechanism exists to extract location coordinates from the tweets
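For reference, the geocode argument is a single "latitude,longitude,radius" string; a sketch with hypothetical Boston coordinates follows (the searchTwitter() call itself needs authentication, so it is commented out):

```r
# hypothetical coordinates and radius
lat <- 42.3601; long <- -71.0589; radius <- "10mi"
geocode <- sprintf("%f,%f,%s", lat, long, radius)

# library(twitteR)
# tweets <- searchTwitter("#rstats", n = 100, geocode = geocode)
```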
Trying to install RMySQL on 64-bit Windows 7.
Using R-2.14.2 with Rtools214 and MySQL Server 5.5.
Read through several step-by-steps of RMySQL source installation.
Troubleshooting:
- Copied libmysql.dll to R-2.14.2/bin AND R-2.14.2/bin/i386.
- Copied libmysql.dll and libmysql.lib to MySQL
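One step those walkthroughs often hinge on, offered as a suggestion rather than a confirmed fix: set MYSQL_HOME to the server directory before building RMySQL from source (the path below is hypothetical; adjust to your install):

```r
# hypothetical install path for MySQL Server 5.5 on 64-bit Windows
Sys.setenv(MYSQL_HOME = "C:/Program Files/MySQL/MySQL Server 5.5")

# install.packages("RMySQL", type = "source")  # then rebuild from source
```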
I have a four-digit string I want to convert to five digits. Take the
following frame:
zip
2108
60321
60321
22030
91910
I need row 1 to read '02108'. This forum directed me to formatC previously
(thanks!). That usually works but, for some reason, it's not working in this
instance. Neither of the syntaxes
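A likely culprit, sketched on the sample column: formatC's flag = "0" only zero-pads numeric input, so on a character column it falls back to space-padding.

```r
zip <- c("2108", "60321", "60321", "22030", "91910")

formatC(zip, width = 5, flag = "0")              # character input: space-padded
formatC(as.integer(zip), width = 5, flag = "0")  # integer input: zero-padded
sprintf("%05d", as.integer(zip))                 # equivalent result
```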
Question:
I'm trying to use paste() with rep() to reformat a series of values as zip
codes. e.g., if column 1 looks like:
52775
83111
99240
4289
112
57701
20001
I want rows 4 and 5 to read,
04289
00112
My thought was this:
perry_frame$zip <- ifelse(nchar(as.character(perry_frame$zip)) < 5,
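A sketch along the paste()/rep() lines of the question, wrapped in a hypothetical helper pad5(): rep() builds the right number of "0"s per value, and paste0() attaches them.

```r
pad5 <- function(z) {
  z <- as.character(z)
  # one "0"-string per element, sized to reach width 5
  zeros <- sapply(pmax(0, 5 - nchar(z)),
                  function(k) paste(rep("0", k), collapse = ""))
  paste0(zeros, z)
}

pad5(c("52775", "4289", "112"))
```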
Wanted to post an exchange with Roger Bivand on combining KML files.
I had hoped to combine multiple KML files into a single
SpatialPolygonsDataFrame. I used readOGR to bring the files into R; had I worked
with .shp files, I might've used readShapePoly and a unique IDvar. spRbind
construction would
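That approach can be sketched as follows, assuming two hypothetical KML files and layer names (spChFIDs() makes the polygon IDs unique, which spRbind() requires before it will bind the pair):

```r
library(rgdal)     # readOGR()
library(maptools)  # spChFIDs(), spRbind()

# hypothetical files and layer names
a <- readOGR("region1.kml", layer = "region1")
b <- readOGR("region2.kml", layer = "region2")

# spRbind() refuses duplicate polygon IDs, so re-ID each object first
a <- spChFIDs(a, paste0("a", row.names(a)))
b <- spChFIDs(b, paste0("b", row.names(b)))
combined <- spRbind(a, b)
```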
Just posting again here...
--
View this message in context:
http://r.789695.n4.nabble.com/Prediction-from-censReg-tp4155855p4158844.html
Hi -
First post, so excuse any errors in protocol:
Wanted to ask if there's an easy way to use 'predict' with objects of class
'censReg', 'maxLik', 'maxim' or 'list'.
Have a left-censored dataset, attempting to use a Tobit model and am working
with the censReg package. I like how easy it is
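While waiting on a predict method: censReg appends logSigma as the last element of the coefficient vector, so one hedged workaround is computing the latent-variable linear predictor by hand. The coefficients below are hypothetical stand-ins for coef(fit):

```r
# hypothetical fitted coefficients, logSigma last as censReg stores them
b <- c("(Intercept)" = 1.0, x1 = 2.0, x2 = -0.5, logSigma = 0.1)
newdata <- data.frame(x1 = c(1, 2), x2 = c(0, 4))

# match columns to coefficient names, dropping logSigma
X <- cbind("(Intercept)" = 1, as.matrix(newdata))
xb <- drop(X %*% b[colnames(X)])  # linear predictor, ignoring censoring
```

This gives E[y*|x] for the latent variable; predictions on the censored scale would need the Tobit correction terms as well.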