Re: [R] Quantile Regression without intercept

2015-10-06 Thread Lorenz, David
Did you verify that the correct percentages of observations were above/below
the regression lines? I did a quick check and, for example, did not
consistently get 50% of the observed response values greater than the
tau=.5 line. I did when I included an intercept term.
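That check can be reproduced with a small simulation (a sketch with made-up data, not the original Gdp/Hpa/Unemployment series; requires the quantreg package):

```r
# Sketch: compare residual sign proportions for the tau = 0.5 fit
# with and without an intercept, on simulated data.
library(quantreg)
set.seed(1)
x <- runif(100, 1, 10)
y <- 2 + 3 * x + rnorm(100)
fit1 <- rq(y ~ x, tau = 0.5)        # with intercept
fit0 <- rq(y ~ x - 1, tau = 0.5)    # intercept suppressed, as in lm()
mean(resid(fit1) > 0)               # about 0.5, by the gradient condition
mean(resid(fit0) > 0)               # need not be anywhere near 0.5
```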



> Date: Mon, 5 Oct 2015 21:14:04 +0530
> From: Preetam Pal 
> To: stephen sefick 
> Cc: "r-help@r-project.org" 
> Subject: Re: [R] Quantile Regression without intercept
> Message-ID: <56129a41.025f440a.b1cf4.f...@mx.google.com>
> Content-Type: text/plain; charset="UTF-8"
>
> Yes, it works. Thanks!
>
> -Original Message-
> From: "stephen sefick" 
> Sent: 05-10-2015 09:01 PM
> To: "Preetam Pal" 
> Cc: "r-help@r-project.org" 
> Subject: Re: [R] Quantile Regression without intercept
>
> I have never used this, but does the formula interface work like lm? Y~X-1?
>
>
> On Mon, Oct 5, 2015 at 10:27 AM, Preetam Pal 
> wrote:
>
> Hi guys,
>
> Can you please tell me how to run quantile regression without the
> intercept term? I only know the rq function in the quantreg package,
> but it automatically fits an intercept model; I can't change that, it
> seems.
>
> I have numeric data on the Y variable (Gdp) and 2 X variables (Hpa and
> Unemployment), with 125 observations each.
>
> Appreciate your help with this.
>
> Regards,
> Preetam
>
> __
> R-help@r-project.org mailing list -- To UNSUBSCRIBE and more, see
> https://stat.ethz.ch/mailman/listinfo/r-help
> PLEASE do read the posting guide
> http://www.R-project.org/posting-guide.html
> and provide commented, minimal, self-contained, reproducible code.
>
> --
>
> Stephen Sefick
> **
> Auburn University
> Biological Sciences
> 331 Funchess Hall
> Auburn, Alabama
> 36849
> **
> sas0...@auburn.edu
> http://www.auburn.edu/~sas0025
> **
>
> Let's not spend our time and resources thinking about things that are so
> little or so large that all they really do for us is puff us up and make us
> feel like gods.  We are mammals, and have not exhausted the annoying little
> problems of being mammals.
>
> -K. Mullis
>
> "A big computer, a complex algorithm and a long time does not equal
> science."
>
>   -Robert Gentleman




Re: [R] Quantile Regression without intercept

2015-10-06 Thread Lorenz, David
Thanks for the details; I suspected something like that.
That raises the question: what is the meaning of quantile regression
through the origin? If the tau=.5 line does not pass through half the data,
how do I interpret the line?


On Tue, Oct 6, 2015 at 8:03 AM, Roger Koenker  wrote:

>
> > On Oct 6, 2015, at 7:58 AM, Lorenz, David  wrote:
> >
> > Did you verify that the correct percentages were above/below the
> regression
> > lines? I did a quick check and for example did not consistently get 50%
> of
> > the observed response values greater than the tau=.5 line. I did when I
> > included the nonzero intercept term.
>
> Your "correct percentages" are only correct when you have an intercept in
> the model; without an intercept there is no gradient condition to ensure
> that.




Re: [R] wilcox.test - difference between p-values of R and online calculators

2014-09-04 Thread Lorenz, David
  I think that the issue, at least with the online calculator that I looked
at, is that it does not adjust the standard deviation of the test
statistic for ties, so the standard deviation is larger and hence the
larger p-value. I was able to reproduce the reported z-score using the
equation for the standard deviation without ties.
Dave
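For reference, the tie adjustment can be reproduced directly from the usual normal-approximation formulas (a sketch using the x and y data quoted below; the variance formula is the standard tie-corrected one for the rank-sum statistic):

```r
# Standard deviation of W with and without the tie correction,
# for the data in the thread below (m = 25, n = 26 observations).
x <- c(359,359,359,359,359,359,335,359,359,359,359,
       359,359,359,359,359,359,359,359,359,359,303,359,359,359)
y <- c(332,85,359,359,359,220,231,300,359,237,359,183,286,
       355,250,105,359,359,298,359,359,359,28.6,359,359,128)
m <- length(x); n <- length(y); N <- m + n
ties <- table(c(x, y))                      # sizes of the tie groups
sigma_no_ties <- sqrt(m * n * (N + 1) / 12)
sigma_ties <- sqrt(m * n / 12 *
                   ((N + 1) - sum(ties^3 - ties) / (N * (N - 1))))
c(sigma_no_ties, sigma_ties)  # ties shrink sigma, so |z| grows, p shrinks
```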

Message: 14
> Date: Wed, 3 Sep 2014 23:20:04 +0200
> From: peter dalgaard
> To: David L Carlson
> Cc: "r-help@r-project.org", W Bradley Knox
> Subject: Re: [R] wilcox.test - difference between p-values of R and
> online calculators
> Content-Type: text/plain; charset=us-ascii
>
> Notice that correct=TRUE for wilcox.test refers to the continuity
> correction, not the correction for ties.
>
> You can fairly easily simulate from the exact distribution of W:
>
> x <- c(359,359,359,359,359,359,335,359,359,359,359,
>   359,359,359,359,359,359,359,359,359,359,303,359,359,359)
> y <- c(332,85,359,359,359,220,231,300,359,237,359,183,286,
>   355,250,105,359,359,298,359,359,359,28.6,359,359,128)
> R <- rank(c(x,y))
> sim <- replicate(1e6,sum(sample(R,25))) - 325
>
> # With no ties, the ranks would be a permutation of 1:51, and we could do
> sim2 <- replicate(1e6,sum(sample(1:51,25))) - 325
>
> In either case, the p-value is the probability that W >= 485 or W <= 165,
> and
>
> > mean(sim >= 485 | sim <= 165)
> [1] 0.000151
> > mean(sim2 >= 485 | sim2 <= 165)
> [1] 0.002182
>
> Also, try
>
> plot(density(sim))
> lines(density(sim2))
>
> and notice that the distribution of sim is narrower than that of sim2
> (hence the smaller p-value with tie correction), but also that the normal
> approximation is not nearly as good as in the untied case. The
> "clumpiness" is due to the fact that 35 of the ranks are tied at the
> midrank 34 (corresponding to the original 359's).
>
> -pd
>
> On 03 Sep 2014, at 19:13, David L Carlson wrote:
>
> > Since they all have the same W/U value, it seems likely that the
> difference is how the different versions adjust the standard error for
> ties. Here are a couple of posts addressing the issues of ties:
> >
> > http://tolstoy.newcastle.edu.au/R/e8/help/09/12/9200.html
> >
> http://stats.stackexchange.com/questions/6127/which-permutation-test-implementation-in-r-to-use-instead-of-t-tests-paired-and
> >
> > David C
> >
> > From: wbradleyk...@gmail.com [mailto:wbradleyk...@gmail.com]
> > On Behalf Of W Bradley Knox
> > Sent: Wednesday, September 3, 2014 9:20 AM
> > To: David L Carlson
> > Cc: Tal Galili; r-help@r-project.org
> > Subject: Re: [R] wilcox.test - difference between p-values of R and
> online calculators
> >
> > Tal and David, thanks for your messages.
> >
> > I should have added that I tried all variations of true/false values for
> the exact and correct parameters. Running with correct=FALSE makes only a
> tiny change, resulting in W = 485, p-value = 0.0002481.
> >
> > At one point, I also thought that the discrepancy between R and these
> online calculators might come from how ties are handled, but the fact that
> R and two of the online calculators reach the same U/W values seems to
> indicate that ties aren't the issue, since (I believe) the U or W values
> contain all of the information needed to calculate the p-value, assuming
> the number of samples is also known for each condition. (However, it's been
> a while since I looked into how MWU tests work, so maybe now's the time to
> refresh.) If that's correct, the discrepancy seems to be based in what R
> does with the W value that is identical to the U values of two of the
> online calculators. (I'm also assuming that U and W have the same meaning,
> which seems likely.)
> >
> > - Brad
> >
> > 
> > W. Bradley Knox, PhD
> > http://bradknox.net
> > bradk...@mit.edu

Re: [R] Simulative data production

2014-04-30 Thread Lorenz, David
Merve,
  I'm not 100 percent sure I understand everything that you want, but start
with the simulated Likert-scale data. The code that you have is not very
efficient and it has at least one typo. I do not know whether columns or
rows represent the persons, so I'll set up NROW and NCOL.
  An efficient way to generate multiple columns of the same distribution is
to generate all of the random numbers at once and make them a matrix.
Example code below.

NROW <- 20
NCOL <- 4
# replace=TRUE is needed; otherwise sample() cannot draw NROW*NCOL values from 1:5
MAT <- matrix(sample(1:5, NROW*NCOL, replace=TRUE), ncol=NCOL)

  Random normal deviates are typically generated from the rnorm function.
But you stated you wanted to generate the normal distribution from total
scores. I'm a bit confused because you refer to 200 people but that does
not correspond to any number in the data that you have generated.
  Hope this helps.
Dave
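To sketch the second part (assuming, on my side, that rows are persons and that "normally distributed" refers to the total scores), bell-shaped category probabilities give roughly normal totals, and the result can be exported as CSV for import into SPSS:

```r
# Hypothetical sketch: 200 persons x 20 five-point items with
# bell-shaped category probabilities, so total scores are ~normal.
set.seed(123)
n <- 200; k <- 20
p <- dnorm(1:5, mean = 3, sd = 1)     # weights for categories 1..5
dat <- matrix(sample(1:5, n * k, replace = TRUE, prob = p), nrow = n)
total <- rowSums(dat)
shapiro.test(total)                   # rough check of normality
# write.csv(dat, "likert.csv", row.names = FALSE)  # then open in SPSS
```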

Date: Tue, 29 Apr 2014 09:38:52 +0300
> From: Merve Şahin 
> To: r-help@r-project.org
> Subject: [R] Simulative data production
> Message-ID:
>  3hhswa7tn6a...@mail.gmail.com>
> Content-Type: text/plain
>
> Hello,
> My name is Merve from Abant İzzet Baysal University. I want to produce
> simulated data using R, but I couldn't. I want to produce n=200, 5-point
> Likert-type, 20-item, normally distributed data. The normal
> distribution is provided by the total scores of each of the 200 persons.
> I produced this kind of data;
> veri.seti lace=T),sample(1:5,25,replace=T),sample(1:5,25,replace=T
> V1 V2 V3 V4
> 1   4  1  5  1 
> 2   2  4  2  2 
> 3   5  5  1  4 
> 4   4  5  3  4 
> 5   3  2  3  1 
> 6   3  1  2  1 
> 7   1  3  5  4 
> 8   2  4  1  1 
> 9   3  1  5  4 
> 10  4  5  4  5 
> 11  2  1  4  5 
> 12  2  3  1  5 
> 13  1  4  2  4 
> 14  1  1  1  4 
> 15  4  3  4  1 
> 16  2  2  5  2 
> 17  4  4  1  4 
> 18  5  5  2  4 
> 19  4  2  1  3 
> 20  3  5  3  2 
> 21  2  4  4  4 
> 22  4  3  4  4 
> 23  5  1  5  2 
> 24  4  2  2  2 
> 25  2  2  1  3 
>
> But I cannot check or ensure the normal distribution. Also, I want to
> load this data into SPSS 20.0. How can I do this? Can you help me, please?
>
> *Res. Asst. Merve ŞAHİN*
> *Abant İzzet Baysal University*
> *Department of Educational Sciences*
> *Measurement and Evaluation Division*
>



Re: [R] Aggregate rows with same fields, within factors

2013-09-17 Thread Lorenz, David
Andrea,
  The argument na.action controls how missing values are treated. Is this
what you wanted?

aggregate(IND~., data=net1, sum, na.action=na.pass)

Dave
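A toy illustration of the difference (my own made-up data, not Andrea's):

```r
# With the default na.action (na.omit) the NA row is dropped entirely;
# with na.pass the group is kept and sum() returns NA for it.
d <- data.frame(g = c("a", "a", "b", "b"), x = c(1, 2, NA, 4))
aggregate(x ~ g, data = d, sum)                        # b's NA row dropped
aggregate(x ~ g, data = d, sum, na.action = na.pass)   # b kept, sum is NA
# to keep the groups but ignore NAs inside them:
aggregate(x ~ g, data = d, function(z) sum(z, na.rm = TRUE),
          na.action = na.pass)
```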

>Date: Mon, 16 Sep 2013 11:42:07 -0400
>From: Andrea Goijman 
>To: arun 
>Cc: R help 
>Subject: Re: [R] Aggregate rows with same fields, within factors
>Message-ID: <CA+vCKnXekYBkfvcQYTxGwh1WGPxJM8Yjo2ycpv7fkdsztvc...@mail.gmail.com>
>Content-Type: text/plain
>
>it works, but it eliminates the rows with NA
>
>is there a way to keep those?
>
>
>On Mon, Sep 16, 2013 at 11:22 AM, arun  wrote:
>
> Hi,
> Try:
>
>
>  aggregate(IND~., data=net1, sum)
>    CAMP LOTE HAB TRANS       ORDEN IND
> 1    C1   B1   C    C1                0
> 2    C1   B1   B    B3        ACARI   3
> 3    C1   B1   B    B1      ARANEAE   1
> 4    C1   B1   B    B3      ARANEAE   2
> 5    C1   B1   B    B3   COLEOPTERA   2
> 6    C1   B1   B    B1      DIPTERA  27
> 7    C1   B1   B    B3      DIPTERA  11
> 8    C1   B1   C    C2      DIPTERA   3
> 9    C1   B1   B    B1    HEMIPTERA  11
> 10   C1   B1   B    B3    HEMIPTERA 231
> 11   C1   B1   C    C2    HEMIPTERA 147
> 12   C1   B1   B    B1  HYMENOPTERA   8
> 13   C1   B1   B    B3  HYMENOPTERA   2
> 14   C1   B1   C    C2  HYMENOPTERA   1
> 15   C1   B1   B    B1  LEPIDOPTERA   1
> 16   C1   B1   B    B1   NEUROPTERA   1
> 17   C1   B1   B    B1   ORTHOPTERA   2
> 18   C1   B1   B    B3   ORTHOPTERA   1
>
>
> A.K.
>
> - Original Message -
> From: Andrea Goijman 
> To: R help 
> Cc:
> Sent: Monday, September 16, 2013 11:09 AM
> Subject: [R] Aggregate rows with same fields, within factors
>
> Dear R list,
>
> I want to aggregate the number of individuals 'IND' of the same ORDER,
> within each site and season CAMP,TRANS... but I also want to keep record
of
> the habitat HAB and LOTE
>
> For example I have this:
>
>      CAMP LOTE HAB TRANS IND       ORDEN
> 1765   C1   B1   B    B1   7   HEMIPTERA
> 1766   C1   B1   B    B1   7     DIPTERA
> 1767   C1   B1   B    B1   1     DIPTERA
> 1768   C1   B1   B    B1   1  NEUROPTERA
> 1769   C1   B1   B    B1   1   HEMIPTERA
> 1770   C1   B1   B    B1   5     DIPTERA
> 1771   C1   B1   B    B1   1     DIPTERA
>
> And I want this
>
>      CAMP LOTE HAB TRANS IND       ORDEN
> 1765   C1   B1   B    B1   8   HEMIPTERA
> 1766   C1   B1   B    B1  14     DIPTERA
> 1768   C1   B1   B    B1   1  NEUROPTERA
>
>
> I'm using aggregate the way I show below, but it is not working, and I
> cannot figure out why.
>
> Thanks!
>
> Andrea
>
>
>
> net1<-structure(list(CAMP = structure(c(1L, 1L, 1L, 1L, 1L, 1L, 1L,
> 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L,
> 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L,
> 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L), .Label = c("C1",
> "C2", "C3", "C4"), class = "factor"), LOTE = structure(c(1L,
> 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L,
> 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L,
> 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L,
> 1L), .Label = c("B1", "B4", "B5", "F7", "G6", "G8", "R10", "W9",
> "Z2", "Z3"), class = "factor"), HAB = structure(c(1L, 1L, 1L,
> 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L,
> 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L,
> 1L, 1L, 1L, 2L, 2L, 2L, 2L, 2L, 2L, 2L, 2L, 2L, 2L, 2L, 2L), .Label =
> c("B",
> "C"), class = "factor"), TRANS = structure(c(1L, 1L, 1L, 1L,
> 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 2L,
> 2L, 2L, 2L, 3L, 3L, 3L, 3L, 3L, 3L, 3L, 3L, 3L, 3L, 3L, 3L, 3L,
> 3L, 3L, 4L, 4L, 4L, 4L, 5L, 5L, 5L, 5L, 5L, 5L, 5L, 5L), .Label = c("B1",
> "B2", "B3", "C1", "C2", "C3"), class = "factor"), IND = c(2L,
> 6L, 7L, 1L, 1L, 7L, 7L, 1L, 1L, 1L, 5L, 1L, 1L, 1L, 4L, 1L, 2L,
> 1L, 1L, NA, NA, NA, NA, 28L, 4L, 2L, 1L, 3L, 193L, 1L, 2L, 7L,
> 2L, 1L, 5L, 1L, 1L, 1L, 0L, 0L, 0L, 0L, 62L, 1L, 1L, 1L, 80L,
> 1L, 1L, 4L), ORDEN = structure(c(9L, 10L, 8L, 8L, 15L, 9L, 8L,
> 8L, 12L, 9L, 8L, 8L, 11L, 3L, 8L, 8L, 10L, 9L, 15L, 1L, 1L, 1L,
> 1L, 9L, 8L, 8L, 5L, 2L, 9L, 10L, 3L, 9L, 9L, 9L, 8L, 10L, 5L,
> 15L, 1L, 1L, 1L, 1L, 9L, 8L, 10L, 9L, 9L, 8L, 8L, 9L), .Label = c("",
> "ACARI", "ARANEAE", "CHILOGNATHA", "COLEOPTERA", "DERMAPTERA",
> "DICTYOPTERA", "DIPTERA", "HEMIPTERA", "HYMENOPTERA", "LEPIDOPTERA",
> "NEUROPTERA", "NN", "ODONATA", "ORTHOPTERA", "PSOCOPTERA", "STREPSIPTERA",
> "THYSANOPTERA", "TRICHOPTERA"), class = "factor")), .Names = c("CAMP",
> "LOTE", "HAB", "TRANS", "IND", "ORDEN"), row.names = c(1760L,
> 1761L, 1762L, 1763L, 1764L, 1765L, 1766L, 1767L, 1768L, 1769L,
> 1770L, 1771L, 1772L, 1773L, 1920L, 1921L, 1922L, 1923L, 1924L,
> 1774L, 1775L, 1776L, 1777L, 1778L, 1779L, 1780L, 1781L, 1782L,
> 1783L, 1784L, 1785L, 1786L, 1787L, 1788L, 1789L, 1790L, 1791L,
> 1925L, 1731L, 1732L, 1733L, 1734L, 1735L, 1736L, 1737L, 1738L,
> 1739L, 1740L, 1741L, 1742L), class = "data.frame")
>
> #generate grouping list
> b <- list(net1$CAMP, net1$LOTE, net1$HAB, net1$TRANS, net1$ORDEN)
>
> #ag

Re: [R] R-help Digest, Vol 128, Issue 30

2013-10-28 Thread Lorenz, David
Pavlos,
  There are several ways to evaluate how well new data fit an old
regression. Part of the answer depends on what you are concerned about. For
example, if you are concerned about bias, you can test whether the mean of
the new data is within the expected range for the mean of that many new
values. The equations for these prediction intervals are given in good
texts on linear regression.
Dave
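A sketch of that idea with simulated stand-ins (the names datasetA/datasetB and the quadratic trend are my assumptions, not the actual gene-expression data):

```r
# Fit on dataset A, then ask whether new responses in dataset B fall
# inside the model's 95% prediction intervals.
set.seed(42)
datasetA <- data.frame(time = seq(0, 10, length.out = 200))
datasetA$y <- 3 + 0.5 * datasetA$time - 0.04 * datasetA$time^2 +
  rnorm(200, sd = 0.3)
datasetB <- data.frame(time = runif(10, 0, 10))
datasetB$y <- 3 + 0.5 * datasetB$time - 0.04 * datasetB$time^2 +
  rnorm(10, sd = 0.3)

fitA <- lm(y ~ poly(time, 2), data = datasetA)   # the pre-computed model
pred <- predict(fitA, newdata = datasetB,
                interval = "prediction", level = 0.95)
inside <- datasetB$y >= pred[, "lwr"] & datasetB$y <= pred[, "upr"]
mean(inside)                        # near 0.95 if B is consistent with A
t.test(datasetB$y - pred[, "fit"])  # screens the new residuals for bias
```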

Date: Sun, 27 Oct 2013 13:36:12 +0200
From: Pavlos Pavlidis 
To: r-help 
Subject: Re: [R] how well *new data* fit a pre-computed model
Message-ID:

Content-Type: text/plain

Here is a link to a plot that illustrates the question:
bio.lmu.de/~pavlidis/pg1.pdf

the question is how to evaluate whether the blue points fit the curve well
enough. The curve has been produced from the black points

best
pavlos


On Sun, Oct 27, 2013 at 1:30 PM, Pavlos Pavlidis wrote:

> Hi all,
> I have fitted polynomial models to a bunch of data (time-course analysis).
> The experiment is "the expression value of gene A under condition K over
> time". The data points that have been used to fit the model are about 200
> (dataset A). Furthermore I have a few data (dataset B; about 10 points)
for
> "the expression values of gene A under condition G over time". The
question
> is:
>
> how can I evaluate how well the dataset B fits the model generated by
> dataset A?
>
> kind regards,
> pavlos
>
> --
>
> Pavlos Pavlidis, PhD
>
> Foundation for Research and Technology - Hellas
> Institute of Molecular Biology and Biotechnology
> Nikolaou Plastira 100, Vassilika Vouton
> GR - 711 10, Heraklion, Crete, Greece
>






Re: [R] predefined area under the curve

2014-01-17 Thread Lorenz, David
Elisa,
  Part of the issue is that there is no unique solution. You could increase
each of the 6 values by a fixed amount to make up the 0.242003 difference
in area, or you could increase them proportionally, among many other
schemes.
  I'll outline an approach and you can decide how you want to proceed.
1. Determine the contribution (trapezoidal weight) of each point to the
total area:
matrix(diff(c(el[1,1], el[,1], el[12,1]), lag=2)/2)
2. The total weight of the last 6 values is 1.42607687227.
3. To distribute the change equally over the last 6 values, simply add
0.242003/1.42607687227 = 0.1696984 to each of those 6 values.
4. To distribute it unequally, you'll need to work that out yourself, but
knowing the relative weights will be the key.
Dave
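Those steps can be done in one go (a sketch, assuming auc() implements the trapezoidal rule; el is the structure from the message quoted below):

```r
# Trapezoidal weights, current area, and the constant shift for rows 7-12.
el <- structure(c(-1.42607687227285, -1.0200762327862, -0.736315917376129,
  -0.502402223373355, -0.293381232121193, -0.0965586152896391,
  0.0965586152896391, 0.293381232121194, 0.502402223373355,
  0.73631591737613, 1.0200762327862, 1.42607687227285,
  1.99095972340185, 1.84006682649012, 1.71563586990498,
  1.60312301737773, 0.748443534297919, 0.696909774793038,
  0.64586377528834, 0.594330015783459, 0.270606020696256,
  0.24247780756, 0.211370068418158, 0.173646844190226),
  .Dim = c(12L, 2L), .Dimnames = list(NULL, c("", "GG")))
x <- el[, 1]; yv <- el[, 2]
w <- diff(c(x[1], x, x[12]), lag = 2) / 2     # per-point trapezoid weights
sum(w * yv)                                   # current area, ~2.603
delta <- (2.845 - sum(w * yv)) / sum(w[7:12]) # ~0.1697
yv[7:12] <- yv[7:12] + delta                  # shift the last 6 y-values
sum(w * yv)                                   # now 2.845
```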


>Date: Thu, 16 Jan 2014 13:24:41 +
>From: eliza botto 
>To: "r-help@r-project.org" 
>Subject: [R] predefined area under the curve
>Message-ID: 
>Content-Type: text/plain
>
>Dear UseRs of R,
>My sincere apologies in advance if my question isn't relevant to the
>operations in R. I actually have the following two-column data, with 12
>rows in it.
> dput(el)
>
>structure(c(-1.42607687227285, -1.0200762327862, -0.736315917376129,
> -0.502402223373355, -0.293381232121193, -0.0965586152896391,
> 0.0965586152896391, 0.293381232121194, 0.502402223373355,
>0.73631591737613, 1.0200762327862, 1.42607687227285,
>1.99095972340185, 1.84006682649012, 1.71563586990498,
>1.60312301737773, 0.748443534297919, 0.696909774793038,
>0.64586377528834, 0.594330015783459, 0.270606020696256,
>0.24247780756, 0.211370068418158, 0.173646844190226), .Dim =
>c(12L, 2L), .Dimnames = list(NULL, c("", "GG")))
>
>When I plot column 2 against column 1, I get a curve with an area
>[auc(column1,column2)] under it equal to 2.602997. As I am calibrating it
>for further simulations, I know that the area under the curve should
>actually be equal to 2.845. I also know that the first 6 rows have been
>located accurately, therefore the rows from 7 to 12 need to be relocated
>in such a manner that the area under the curve gets equal to, or as close
>as possible to, 2.845. How can I do that? I have been doing it manually
>but at the cost of time and accuracy.
>
>Thankyou very much in advance.
>Elisa

