> while (header_lines) {
>   nextline <- readLines(test_con, 1)
>   header_lines <- length(grep("*end", nextline, fixed = TRUE)) == 0
> }
> fread_dat <- read.table(test_con, header = TRUE)
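A connection-free alternative: data.table's fread can skip straight to a marker, since its `skip` argument also accepts a search string and reading begins at the first line containing it. A hedged sketch (the filename is an assumption, and the marker line itself may land in the result and need dropping):

```r
library(data.table)
# skip = "<string>" starts reading at the first line containing that string;
# here that is the "*end" marker line, so check whether the first row of the
# result is the marker and drop it if so (assumption about the file layout)
dat <- fread("myfile.txt", skip = "*end")
```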
>
> Jim
>
> On Sat, Jun 27, 2015 at 2:16 AM, Trevor Davies
> wrote:
I'm trying to read in a file using the function fread.
The file that I'm trying to read in has about 100 lines of information I
don't want prior to getting to my matrix of data that I do want. On the
line prior to the data I want, there is always a string identifier "*end*".
The following fread ca
reports when stuff
> goes wrong is useful).
>
> best,
> Simon
>
>
> On 16/07/14 20:25, Trevor Davies wrote:
>
I have run a quasipoisson spatial model via GAM (NB just wouldn't work) and
I am getting the following output of one of my parameters
(COR.YEARLY.MEAN). Does this suggest an error in the model fit? The model
seems to have converged. Apologies for the lack of reproducible example
but it didn't rea
I think the easiest, most straightforward way would be to just throw it
into a loop and subset the data on each pass (untested code below,
but I'm sure you get the gist). ~Trevor
sex1<-unique(tips$sex)
day1<-unique(tips$day)
for (i in 1:length(sex1)){
for (j in 1:length(day1)){
pdf(pas
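Completing the untested loop above as a runnable sketch (the `tips` columns, the plotted variables, and the output filename scheme are all assumptions):

```r
# hypothetical data standing in for `tips`
tips <- data.frame(sex = rep(c("F", "M"), each = 6),
                   day = rep(c("Sat", "Sun"), 6),
                   total_bill = runif(12, 5, 50),
                   tip = runif(12, 1, 10))
sex1 <- unique(tips$sex)
day1 <- unique(tips$day)
for (i in seq_along(sex1)) {
  for (j in seq_along(day1)) {
    # subset to one sex/day combination and write one PDF per combination
    sub <- tips[tips$sex == sex1[i] & tips$day == day1[j], ]
    pdf(paste0("tips_", sex1[i], "_", day1[j], ".pdf"))
    plot(sub$total_bill, sub$tip,
         main = paste(sex1[i], day1[j]), xlab = "total bill", ylab = "tip")
    dev.off()
  }
}
```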
Hello,
I was hoping someone could point me in the direction towards a package
where I can use delaunay triangulation to create a polygon set where the
inside of the triangles are tagged with an estimate of a mean value of the
points making up the points of the triangle. This is fisheries trawl da
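One place to start is the 'deldir' package, which computes the Delaunay triangulation and can list the triangles with the indices of their vertex points. A sketch under the assumption that I am recalling the `triang.list()` return structure correctly (a list of three-row data frames with a `ptNum` column); the data here are invented:

```r
library(deldir)
# x, y are point locations; z is the value observed at each point (fake data)
x <- runif(20); y <- runif(20); z <- rnorm(20)
dxy  <- deldir(x, y)
tris <- triang.list(dxy)  # one data frame per triangle, with vertex indices
# tag each triangle with the mean of z over its three vertices
tri_means <- sapply(tris, function(tr) mean(z[tr$ptNum]))
```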
Is there a simple way to subtract base-60 (coordinate system) values?
A trivial example:
I have a Lat of 44.1 degrees and I want to subtract 0.2 degrees from it.
Therefore the answer should be 43.5 degrees (base 60).
I can do a change to character; stringsplit ; change back - deal with the
leadi
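A purely numeric route avoids the character round-trip: convert the DDD.MM notation to decimal degrees, subtract, then convert back. A minimal sketch, assuming positive coordinates (sign handling would need care):

```r
# DDD.MM <-> decimal degrees (assumes positive values; 44.1 means 44 deg 10 min)
dm_to_dec <- function(x) { d <- trunc(x); d + (x - d) * 100 / 60 }
dec_to_dm <- function(x) { d <- trunc(x); d + (x - d) * 60 / 100 }
dec_to_dm(dm_to_dec(44.1) - dm_to_dec(0.2))  # 43.5, up to floating point
```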
od wrote:
> I think it didn't converge. The warning message is for the whole fitting
> iteration, whereas mod07.3.BASE.bam5.1$converged indicates whether
> smoothing parameter selection converged at the final step of the iteration.
>
> best,
> Simon
>
>
> On 17/06/14 19
I'm running some spatial GAMs and am using the bam call (negbin family) and
am getting conflicting information on whether the model is converging or
not. When the model completes its run, I get a warning message that the
model did not converge. When I look at the object itself, I'm told that it
d
Hello, I'm interested in calculating the number of km north/south and east/west
between two sets of geographic points. Sort of like the gcdist() function,
but rather than just a single distance, have the x & y components broken up.
I've found some raw code online but I have been unable to find a package
that has th
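For short separations, a rough spherical sketch splits the offset into components; the constants and the flat-earth simplification are assumptions, and packages such as geosphere handle the geodesy properly:

```r
R_km <- 6371  # mean Earth radius, an approximation
deg2rad <- function(d) d * pi / 180
# north-south distance depends only on the latitude difference
ns_km <- function(lat1, lat2) R_km * deg2rad(lat2 - lat1)
# east-west distance shrinks with cos(latitude); use the mean latitude
ew_km <- function(lon1, lon2, lat1, lat2) {
  R_km * deg2rad(lon2 - lon1) * cos(deg2rad((lat1 + lat2) / 2))
}
```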
Hello,
I've been playing around trying to get some maps working in ggplot & ggmap.
I have a series of maps I'm trying to make, and I want to fix the bubble
area size of banana catches to the specified breaks. I.e. so banana catches
of 2 will always be a specified size, so when I make the next map, b
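In ggplot2, pinning the `limits` and `breaks` of the size scale keeps bubble sizes comparable across a series of maps; the data and the break values below are hypothetical:

```r
library(ggplot2)
# fake catch data; fixed limits mean a catch of 2 draws at the same size
# on every map in the series, regardless of each map's own data range
df <- data.frame(lon = runif(10), lat = runif(10), catch = runif(10, 0, 50))
ggplot(df, aes(lon, lat, size = catch)) +
  geom_point() +
  scale_size_area(limits = c(0, 50), breaks = c(2, 10, 25, 50))
```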
n before using the trip* functions.
>
> I know there's a lot of options listed above, but it really does
> depend on which aspects matter to you.
>
> HTH
>
>
>
> On Tue, Mar 11, 2014 at 10:09 AM, Trevor Davies
> wrote:
> > Hello,
> >
> > Sorr
Hello,
Sorry for the lack of a complete example but this is more of a class type
question.
I have a map of the coast that I generated through PBSmapping:
xlims <- c(292,303.5)
ylims <- c(41.5,49.5)
plotMap(worldLLhigh, xlim=xlims, ylim=ylims, col=grey(0.1),bg=grey(0.9),
xlab="", ylab="", las=1,
PM, Hadley Wickham wrote:
> If you load plyr first, then dplyr, I think everything should work.
> dplyr::summarise works similarly enough to plyr::summarise that it
> shouldn't cause problems.
>
> Hadley
>
> On Wed, Jan 29, 2014 at 4:19 PM, Trevor Davies
> wrote:
Thanks - that's solves my problems.
All the best - Trevor
On Wed, Jan 29, 2014 at 2:28 PM, Ista Zahn wrote:
> Hi Trevor,
>
> See help("::") and help("detach")
>
> Best,
> Ista
>
> On Wed, Jan 29, 2014 at 5:19 PM, Trevor Davies
> wrote:
I think I have a hole in my understanding of how R uses packages (or at
least how it gives functions in packages priority). I thought I would give
the new dplyr package a test drive this morning (which is blazingly fast
BTW) and I've gone down the rabbit hole.
The issue is that I'm unable to use
Is there a quick function that can convert minutes (seconds) after midnight
to a time?
i.e. 670.93 (minutes after midnight) --> 11:10:56.**
I know it can be done by hand but I thought there must be a function for
this already.
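One base-R way, as a sketch: split the minutes into hours, whole minutes, and fractional seconds with modular arithmetic (note 0.93 min is 55.8 s, so the seconds print as 55.80 rather than 56):

```r
m <- 670.93  # minutes after midnight
sprintf("%02d:%02d:%05.2f", m %/% 60, floor(m %% 60), (m %% 1) * 60)
# "11:10:55.80"
```

Alternatively, `format(as.POSIXct(m * 60, origin = "1970-01-01", tz = "UTC"), "%H:%M:%OS2")` gives the same result via date-time machinery.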
Thank you.
Hello,
I was wondering if weighting has been added to glmmadmb yet? I found
a post from about a year ago saying it hadn't been implemented, but I was
hoping the documentation may just not have caught up.
Thanks,
Trevor Davies
Yes, I caught my error once I posted it - I was fiddling with match prior
to hammering down with merge but your solution is much better. Thank you.
On Fri, Jul 12, 2013 at 3:05 PM, David Winsemius wrote:
>
> On Jul 12, 2013, at 2:56 PM, Trevor Davies wrote:
>
> > I always thin
complex situation. If anyone has
something a little less heavy-handed I'd love to hear it.
Have a great weekend.
On Fri, Jul 12, 2013 at 2:18 PM, Trevor Davies wrote:
>
> I'm trying to find a function that can replace multiple instances of
> values or characters in a vector
I'm trying to find a function that can replace multiple instances of values
or characters in a vector in a one step operation. As an example, the
vector:
x <- c(rep('x',3),rep('y',3),rep('z',3))
> x
[1] "x" "x" "x" "y" "y" "y" "z" "z" "z"
I would simply like to replace all of the x's with 1's,
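A named lookup vector does this kind of replacement in one step, since indexing by name recycles over the whole vector; a minimal sketch:

```r
x <- c(rep('x', 3), rep('y', 3), rep('z', 3))
# named vector as a lookup table: names are old values, elements are new ones
map <- c(x = "1", y = "2", z = "3")
unname(map[x])
# "1" "1" "1" "2" "2" "2" "3" "3" "3"
```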
1 1 AA AA
2 2 AB d
3 3 AC e
Thanks again.
On Wed, Mar 28, 2012 at 3:52 PM, David Winsemius wrote:
>
> On Mar 28, 2012, at 6:40 PM, Trevor Davies wrote:
>
> Thank you, works perfectly.
>>
>>
> Good. There is also a recode function in package 'car'
Thank you, works perfectly.
On Wed, Mar 28, 2012 at 3:11 PM, David Winsemius wrote:
>
> On Mar 28, 2012, at 5:26 PM, Trevor Davies wrote:
>
> I've looked but I cannot find a more elegant solution.
>>
>> I would like to be able to scan through a data.frame and r
I've looked but I cannot find a more elegant solution.
I would like to be able to scan through a data.frame and remove multiple
and various instances of certain contents.
A trivial example is below. It works, it just seems like there should be a
one line solution.
#Example data:
a <-
data.frame
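One compact option is to apply a named lookup across every column with `lapply`; the example data and replacement pairs below are hypothetical, and the sketch assumes the affected columns are character:

```r
# fake data standing in for the trivial example
a <- data.frame(v1 = c("AA", "AB", "AC"), v2 = c("AA", "d", "e"),
                stringsAsFactors = FALSE)
map <- c(AB = "d", AC = "e")  # hypothetical old -> new pairs
# replace matching entries in every column, leave everything else untouched
a[] <- lapply(a, function(col) ifelse(col %in% names(map), map[col], col))
```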
Thanks David and Bill. I don't know how I couldn't think of that - maybe
because I'm working through a brutal flu right now.
Thanks for the help. This list is a fantastic resource.
Trevor
On Fri, Nov 18, 2011 at 4:18 PM, David Winsemius wrote:
>
> On Nov 18, 2011, at 7
A late Friday afternoon coding question. I'm having a hard time thinking
of the correct search terms for what I want to do.
If I have a df like this:
a <-
data.frame(name=c(rep('a',10),rep('b',15)),year=c(1971:1980,1971:1985),amount=1:25)
name year amount
1 a 1971 1
2 a 1972
Sorry, I should really have started a new thread with this, because it is
really a new question only loosely related to the first Q.
Thanks for the assist.
As suggested I switched over to sweave. I have a lot of .tex tables that I
> have already created that I was previously inserting into my tex
/blah.tex}
It works fine.
What am I missing here?
Thanks.
On Fri, Oct 28, 2011 at 11:48 AM, Duncan Murdoch
wrote:
> On 28/10/2011 2:40 PM, Trevor Davies wrote:
>
>> I have found that I like having my captions and labels in my latex
>> document
>> rather than having th
I have found that I like having my captions and labels in my latex document
rather than having them contained in my xtable output file (I haven't fully
gone to sweave yet). I know I can do something like this by using the
'only.contents' argument in xtable. Unfortunately, the only.contents
argume
Alternatively, since you are on Gmail you can set up a folder and filter so
all r-help emails bypass your inbox and go right to an r-help folder (or
something). I find it very useful for just browsing during down time so I
can offer my assistance, or move the little gems to an 'r-keepers' folder.
Here is one option:
a <- data.frame(day = c(rep(4, 8), rep(6, 8)),
                unit = c(1:8, seq(2, 16, 2)),
                value = round(runif(16, 1, 34), 0))  # approx. your data
b <- data.frame(day = c(rep(4, 16), rep(6, 16)), unit = 1:16)  # fake df
b1 <- merge(a, b, by = c('day', 'unit'), all.y = TRUE)
b1$value[is.na(b1$value)] <- 0
> d  1 0 0 0 1 0 0 0
> b  0 0 1 0 0 0 1 0
>
>
> Bill Dunlap
> Spotfire, TIBCO Software
> wdunlap tibco.com
>
> > -Original Message-
> > From: r-help-boun...@r-project.org [mailto:r-help-boun...@
This has been dogging me for a while. I've started making a lot of tables
via xtable so the way I want to sort things is not always in alphabetical or
numerical order.
As an example, consider I have a dataframe as follows
set.seed(100)
a <- data.frame(V1=sample(letters[1:4],100, replace=T),V2=1:1
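One common way to impose a non-alphabetical order (which xtable will then respect) is to set explicit factor levels and sort on them; a sketch with an arbitrary ordering:

```r
set.seed(100)
a <- data.frame(V1 = sample(letters[1:4], 100, replace = TRUE), V2 = 1:100)
# impose a custom, non-alphabetical order via factor levels, then sort
a$V1 <- factor(a$V1, levels = c("c", "a", "d", "b"))
a_sorted <- a[order(a$V1), ]
levels(a_sorted$V1)  # "c" "a" "d" "b"
```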