You need a "#" at the beginning of the string to specify that it is a
hex code for the color. Try "#".
On Thu, Jun 6, 2024 at 9:07 AM Yosu Yurramendi
wrote:
>
> What is the HEX code for "transparent" color?
> I've tried "" "FF00" "", but they don't work.
> Thanks
>
>
One more function to consider using and teaching is the attach
function. If you use `attach` with the name of a file that was
created using `save`, then it creates a new, empty environment, `load`s
the contents of the file into that environment, and attaches the
environment to the search path (by
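A minimal sketch of that save/attach workflow (the file name
"mydata.rda" and the objects are invented for illustration):

```r
x <- 1:10
y <- letters[1:5]
save(x, y, file = "mydata.rda")
rm(x, y)
attach("mydata.rda")         # new environment, loaded and attached
x                            # found via the attached environment
detach("file:mydata.rda")    # remove it from the search path again
```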
Stephen, I see lots of answers with packages and resources, but not
book recommendations. I have used Introduction to Data Technologies
by Paul Murrell (https://www.stat.auckland.ac.nz/~paul/ItDT/) to teach
SQL and database design and would recommend looking at it as a
possibility.
On Mon, Aug 2
Using the `assign` function is almost always a sign that you are
making things more complicated than is needed. It is better to work
directly with lists, which make it much easier to set and get names.
Your code and description can be done pretty simply using the `lapply`
function (often easier th
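A hedged sketch of the list-based alternative (the object names d1,
d2, d3 are invented):

```r
# Instead of assign("res_d1", mean(d1)); assign("res_d2", mean(d2)); ...
datasets <- list(d1 = rnorm(10), d2 = rnorm(10), d3 = rnorm(10))
results <- lapply(datasets, mean)  # one named list of results
results$d2                         # set and get by name, no assign/get
```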
Because `<-` and `=` do different things (and I am one of the old
fossils that wish they had not been confounded).
`fun(x <- expr)` Assigns the value of `expr` to the variable `x` in
the frame/environment that `fun` is called from.
`fun(x = expr)` Assigns the value of `expr` to the variable `x` in
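A small demonstration of the difference (f and n are invented names;
this assumes no n already exists in the workspace):

```r
f <- function(n = 5) n^2

f(n = 3)    # `=` names the argument; no variable n is created outside f
exists("n") # FALSE, assuming a clean workspace

f(n <- 3)   # `<-` assigns n where f is called from, then passes 3
n           # now 3 in the calling environment
```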
Since any space that follows 2 or 3 + signs (or - signs) also follows
a single + (or -), this can be done with positive look behind, which
may be a little simpler:
x <- c(
'leucocyten + gramnegatieve staven +++ grampositieve staven ++',
'leucocyten - grampositieve coccen +'
)
strsplit(x, "(?<=
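The pattern above is cut off; the following is a guess at the intended
lookbehind, splitting on a space preceded by a + or - (which, as noted,
also matches after runs of ++ or +++):

```r
x <- c('leucocyten + gramnegatieve staven +++ grampositieve staven ++',
       'leucocyten - grampositieve coccen +')
strsplit(x, "(?<=[+-]) ", perl = TRUE)  # lookbehind requires perl = TRUE
```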
I think the tricky part here is the different types of `quote`. I
would use `bquote` here, try:
doit <- function(x){
  ds <- deparse(substitute(x))
  cat("1\n")
  print(ds)
  eval(bquote(lm(.(ds))), parent.frame())
}
Just note that getting the parent.frame() portion right in anything
more complic
The first thing to understand is that despite similarity in names,
`match` and `match.call` are doing very different things, which should
not be confused with each other.
For understanding what a function is doing, it is helpful to watch
what it does at each step. With functions like `lm` that ar
A little simpler answer than the others.
Look at package Namespaces. When a package is created, the NAMESPACE
file defines which functions in the package are exported (i.e.
available for you to use), the other functions are "private" to the
package meaning that other functions in the package can
To expand a little on Christopher's answer.
The short answer is that having the different syntaxes can lead to
more readable code (when used properly).
Note that there are now 2 different (but somewhat similar) pipes
available in R (there could be more in some package(s) that I don't
know about,
You might try `hasName` instead of `exists` since `exists` is designed
for environments and `hasName` for objects (like lists). Note that
the order of the arguments is switched between the 2 functions.
This does the same thing as Andrew Simmons' answer, but is a little bit shorter.
On Tue, Dec 27
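A quick illustration of the two functions and their switched argument
order (the list is invented):

```r
lst <- list(a = 1, b = 2)
hasName(lst, "a")  # TRUE  -- hasName(x, name): the object comes first
hasName(lst, "c")  # FALSE
exists("a")        # exists(name, ...): the name comes first and is
                   # looked up in environments, not inside lst
```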
Another option is the map family of functions in the purrr package
(yes, this depends on another package being loaded, which may affect
things if you are including this in your own package, creating a
dependency).
In map and friends, if the "function" is a string or integer, then it
is taken as th
Terry,
I don't know if it is much cleaner or not, but you can use:
sapply(fits, `[[`, 'iter')
This calls the `[[` function (to extract list elements) on each
element of the top list, with the extra argument of `iter` to say
which element.
On Tue, Dec 27, 2022 at 10:16 AM Therneau, Terry M., Ph
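A toy version of that call, with an invented list of "fits" that each
contain an iter component:

```r
fits <- list(m1 = list(coef = 1:3, iter = 5),
             m2 = list(coef = 4:6, iter = 8))
sapply(fits, `[[`, 'iter')
# m1 m2
#  5  8
```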
Here is one approach:
tmp <- data.frame(min=seq(0,150, by=15))
tmp %>%
mutate(hm=sprintf("%2d Hour%s %2d Minutes",
min %/% 60, ifelse((min %/% 60) == 1, " ", "s"),
min %% 60))
You could replace `sprintf` with `str_glue` (and update the syntax as
well) if
A p-value is for testing a specific null hypothesis, but you do not
state your null hypothesis anywhere.
It is the null value that needs to be subtracted from the bootstrap
differences, not the observed difference. By subtracting the observed
difference you are setting up a situation where the p-val
I would suggest using the microbenchmark package to do the time
comparison. This will run each a bunch of times for a more meaningful
comparison.
One possible reason for the difference is the number of missing values
in your data (along with the number of columns). Consider the
difference in the
While it is possible to fill bars with patterns, it is not
recommended. Fill patterns can lead to what is called the Moiré
effect and other optical illusions. Depending on the fill patterns
and how they relate to each other this can cause an illusion of
movement within the plot, straight lines ap
I think that the current documentation is correct, but that does not
mean that it cannot be improved.
The key phrase for me is "from the current position" which says to me
that the match needs to happen right there, not just somewhere in the
rest of the string.
If you used the expression " +t" t
One option is to use the my.symbols and ms.image functions from the
TeachingDemos package. There is an example under ?ms.image.
On Mon, Aug 10, 2020 at 7:43 AM Pedro páramo wrote:
>
> Hi,
>
> There is a way to add a photo like a free text but images on a plot, (hist,
> chart trough ggplot) to a
Rui pointed out that you can examine the source yourself. FAQ 7.40
has a link to an article with detail on finding and examining the
source code.
A general algorithm for checking for duplicates follows (I have not
examined the R source code to see if they use something more clever).
Create an emp
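The description is cut off above; here is a sketch of that generic
algorithm (not necessarily what R's duplicated() does internally),
using an environment as the hash table of values seen so far:

```r
my_duplicated <- function(x) {
  seen <- new.env(hash = TRUE)                 # empty hash table
  vapply(as.character(x), function(key) {
    if (exists(key, envir = seen, inherits = FALSE)) {
      TRUE                                     # seen before: duplicate
    } else {
      assign(key, TRUE, envir = seen)          # record first occurrence
      FALSE
    }
  }, logical(1), USE.NAMES = FALSE)
}
my_duplicated(c(1, 2, 1, 3, 2))  # FALSE FALSE TRUE FALSE TRUE
```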
There is more than one way to do it, and it would help if you
provided some sample data.
But here is an example of one way to do it:
examp.dat <- as.data.frame(matrix(sample(1:5, 100*6, replace=TRUE), ncol=6))
tmp.count <- apply(examp.dat, 1, function(x) sum(x>=3))
examp2.dat <- examp.dat[tmp.co
As Borris mentioned, paste0 works well for this.
Another option is the sprintf function:
sprintf("c%i", 1:10)
For this example they do the same thing, but as things become more
complicated sometimes you will want paste0 and sometimes sprintf will
be better.
Compare the above to
sprintf("c%02i",
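The comparison is cut off above; completing the idea:

```r
sprintf("c%i", 1:10)    # "c1"  "c2"  ... "c10"   widths vary
sprintf("c%02i", 1:10)  # "c01" "c02" ... "c10"   zero-padded to width 2
```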
On CRAN there are Task Views which are moderated lists and summaries
of packages available for specific topics. The first one
(alphabetically) is the Bayesian task view:
https://cran.r-project.org/web/views/Bayesian.html which lists many
packages for doing/learning Bayesian statistics using R. Th
Here is one of my favorites:
https://medium.com/@ODSC/how-300-matchboxes-learned-to-play-tic-tac-toe-using-menace-35e0e4c29fc
On Tue, May 12, 2020 at 2:39 AM Abby Spurdle wrote:
>
> > In my opinion the advantage of computers is not Artificial
> > Intelligence, but rather Artificial Patience (mos
It is a nice dream, but it is really abdicating ethical responsibility
to the computer instead of the researcher. And I personally don't
trust computers over people for this.
What could go wrong?
First, how do you guarantee that the statistical plan was locked in
place before the data was collec
As others have pointed out, ncol calls the length function, so you are
pretty safe in terms of output of getting the same result when applied
to the results of functions like read.csv (there will be a big
difference if you ever apply those functions to a matrix or some other
data structures).
One
You could use the `pos` arg to place the newly loaded package(s) on
the search path after the stats package. That would give priority for
any functions in the stats package over the newly loaded package (but
also give priority for any other packages earlier on the search path).
On Sat, Nov 23, 20
This is in part answered by FAQ 7.21.
The most important part of that answer is at the bottom where it says
that it is usually better to use a list.
It may be safer to use a list for your case so that other important
variables do not become masked (hidden by the global variables you
just created)
Just to add one more option (which is best probably depends on if all
the same dates are together in adjacent rows, if an earlier date can
come later in the data frame, and other things):
df$count <- cumsum(!duplicated(df$Date))
Still a cumsum of logicals, just a different way of getting the logi
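A small illustration with invented dates:

```r
df <- data.frame(Date = c("2020-01-01", "2020-01-01", "2020-01-02",
                          "2020-01-02", "2020-01-03"))
df$count <- cumsum(!duplicated(df$Date))
df$count  # 1 1 2 2 3
```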
Generally you should do the power analysis before collecting any data.
Since you have results it looks like you already have the data
collected.
But if you want to compute the power for a future study, one option is
to use simulation.
1. decide what the data will look like
2. decide how you will
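The steps above are cut off; a minimal sketch of the simulation
approach for a two-sample t-test (all design numbers -- n = 30 per
group, effect 0.5, sd 1, alpha 0.05 -- are invented):

```r
pow <- mean(replicate(1000, {
  x <- rnorm(30, mean = 0,   sd = 1)   # simulate data under the
  y <- rnorm(30, mean = 0.5, sd = 1)   # assumed alternative
  t.test(x, y)$p.value < 0.05          # did this simulated study reject?
}))
pow  # proportion of rejections = estimated power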
When the goal of looping is to compute something and save each
iteration into a vector or list, then it is usually easier to use the
lapply/sapply/replicate functions and save the result into a single
list rather than a bunch of global variables.
Here is a quick example that does the same computat
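A quick sketch of the same computation both ways (the computation
itself is invented):

```r
# loop version: pre-allocate, then fill
res1 <- numeric(10)
for (i in 1:10) res1[i] <- mean(rnorm(100))

# apply-family versions: one expression, one result object
res2 <- sapply(1:10, function(i) mean(rnorm(100)))
res3 <- replicate(10, mean(rnorm(100)))
```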
I am not an expert on Rscript, but I don't think that an actual
terminal is ever used when using Rscript. And `interactive()` will
probably always be false.
So if you want the script to pause for input, you need to have some
form of user interface to work with.
One option is to use the tcltk pac
Can you show us an example with the data that you are using and the
output from t.test?
A t-value of 1.96 is not an automatic rejection. It depends on alpha
and the degrees of freedom. Even if we set alpha at 0.05, 1.96 should
not give a p-value less than 0.05 with finite degrees of freedom.
Th
Here is another approach that uses only the default packages:
> onecar <- mtcars[10,]
> w <- which(duplicated(rbind(mtcars,onecar), fromLast = TRUE))
> w
[1] 10
> mtcars.subset <- mtcars[-w,]
>
>
> threecars <- mtcars[c(1,10,15),]
> w <- which(duplicated(rbind(mtcars,threecars), fromLast=TRUE))
>
The basic test of independence for a table based on the Chi-squared
distribution can be done using the `chisq.test` function. This is in
the stats package which is installed and loaded by default, so you
don't need to do anything additional. There is also the `fisher.test`
function for Fisher's e
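A minimal example on an invented 2x2 table:

```r
tab <- matrix(c(20, 10, 15, 25), nrow = 2)
chisq.test(tab)   # Chi-squared test of independence
fisher.test(tab)  # exact test; also in stats, nothing extra to load
```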
The uniroot function can be used to find a value in a specified
interval, if it exists.
On Tue, Aug 14, 2018 at 3:30 PM Tania Morgado Garcia wrote:
>
> Hello everyone. I'm new to R and I'm using spline functions. With the
> command splinefun (x, y) I get the function of interpolating the values x
Look at the spin and stitch functions in the knitr package if you want
to process an existing script into an output that mixes the code run
with the output.
Look at the txtStart and related functions in the TeachingDemos
package if you want the code and output saved in a file from a session
where
The error is because the read.csv function converted both columns to
factors. The simplest thing to do is to set stringsAsFactors=FALSE in
the call to read.csv so that they are compared as strings. You could
also call as.character on each of the columns if you don't want to
read the data in again
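Both fixes sketched (the file and column names are invented):

```r
dat <- read.csv("mydata.csv", stringsAsFactors = FALSE)  # read as character
# or, without re-reading the data:
dat$col1 <- as.character(dat$col1)
dat$col2 <- as.character(dat$col2)
```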
You may find the answers to this question on Cross Validated (along
with the discussion) to be useful:
https://stats.stackexchange.com/questions/35940/simulation-of-logistic-regression-power-analysis-designed-experiments
On Tue, Oct 10, 2017 at 10:09 AM, davide cortellino
wrote:
> Dear All
>
>
>
You can also look at the my.symbols and ms.arrows functions in the
TeachingDemos package.
On Wed, Mar 29, 2017 at 7:44 AM, julio cesar oliveira wrote:
> Dears,
>
> The arrows command uses the start and end coordinates of each vector, but I
> have the starting coordinates, azimuth, and length.
>
What is the result of running:
getOption("device")
?
It should be something like: "RStudioGD". It sounds like this has
been changed to something else, if that is the case it is a matter of
either changing it back, or figuring out where the change is being
made and fixing that.
On Mon, Jan 23,
Some tools that might help include spread.labels from the plotrix
package, spread.labs from the TeachingDemos package, and
dynIdentify/TkIdentify from the TeachingDemos package.
On Tue, Dec 13, 2016 at 4:37 PM, Marna Wagley wrote:
> Hi R user,
> I have created using metaNMDS (Nonmetirc Multidimen
I would suggest looking at the strapply function in the gsubfn
package. That gives you more flexibility in specifying what to look
for in the structure of the data, then extract only those pieces that
you want.
On Fri, Oct 14, 2016 at 5:16 PM, Joe Ceradini wrote:
> Afternoon,
>
> I unfortunatel
I don't know if the parallel approach would work or not, but a
possibly simpler approach would be to use the tclTaskSchedule function
from the tcltk package. You could use this to schedule your update
code to run on a regular basis, then you have access to the command
line between times that it ru
There are multiple ways of doing this, but here are a couple.
To just test the fixed effect of treatment you can use the glm function:
test <- read.table(text="
replicate treatment n X
1 A 32 4
1 B 33 18
2 A 20 6
2 B 21 18
3 A 7 0
3 B 8 4
", header=TRUE)
fit1 <- glm( cbind(X,n-X) ~ treatment, da
If your goal is to visualize the predicted curve from an lm fit (or
other model fit) then you may want to look at the Predict.Plot and
TkPredict functions from the TeachingDemos package.
On Sun, Sep 25, 2016 at 7:01 AM, Matti Viljamaa wrote:
> I’m trying to plot regression lines using curve()
This indicates that your Discharge column has been stored/converted as
a factor (run str(df) to verify and check other columns). This
usually happens when functions like read.table are left to try to
figure out what each column is and it finds something in that column
that cannot be converted to a
You might consider the Predict.Plot and TkPredict functions in the
TeachingDemos package. These help you explore multiple linear
regression models by plotting the "line" relating the response to one
of the predictors at given values of the other predictors. These
lines can be combined in a single
As has been mentioned, this really requires a GUI tool beyond just R.
Luckily there are many GUI tools that have been linked to R and if
Shiny (the shiniest of them) does not have something like this easily
available so far then you may want to look elsewhere in the meantime.
One option is the tcl
And you can specify the symbols and colors using code like:
with(lili, plot(y,conc, pch= c(16,18,17)[sample],
col=c('red','green','blue')[sample], log="y"))
modifying to meet your data and preferences, of course.
On Tue, Aug 30, 2016 at 11:48 AM, Clint Bowman wrote:
>
> with(lili,plot(y,conc,pc
You can attach rda files directly with the attach function, no need to
load them first (see the what argument in the help for attach). This
may do what you want more directly.
In general it is better to not use loops and attach for this kind of
thing. It is better to store multiple data objects
Lauren,
The easier you make it for us to help you, the more likely you
are to get help and the more helpful the answers will be.
First, please post in plain text (not HTML). Second, reproducible
code (including sample data) helps us help you.
Your code above is incorrect, the 3 lines with
The names function is a primitive, which means that if it does not
already do what you want, it is generally not going to be easy to
coerce it to do it.
However, the names of an object are generally stored as an attribute
of that object, which can be accessed using the attr or attributes
functions
Stefano,
It is usually best to keep these discussions on R-help. People there
may be quicker to respond and could have better answers. Keeping the
discussion on the list also means that if others in the future find
your question, they will also find the answers and discussion. And
some of us can
The rnoaa package has the function ncdc_stations which can be used to
search for stations in a region. You could use that giving it an
extent around the coordinates that you are interested in (add and
subtract a small amount from the coordinates), then pass the results
from that function (possibly
Please post in plain text, the message is very hard to read with the
reformatting that was done.
Did you receive any warnings when you fit your models?
The fact that the last coefficient is NA in both outputs suggests that
there was some co-linearity in your predictor variables and R chose to
dro
You need to figure out how to tell txtProgressBar what the progress is.
One simple option would be that if you are installing 10 packages,
then create the bar with a range of values from 0 to 10 and initialize
it at 0, then after the first package installs update it to show 1,
after the 2nd instal
It worked for me.
> matplot(for_jhon$ID, for_jhon[,2:73], type='l')
>
> Any I dea on how I can label multiple-line plot based on column names?
>
> Thanks for your help
>
> John
>
> On Tue, Jul 19, 2016 at 8:46 PM, Greg Snow <538...@gmail.com> wrote:
>&
Most attachments get stripped off, so your data did not make it through.
But try:
matplot(for_jhon$ID, for_jhon[,2:73], type='l')
On Tue, Jul 19, 2016 at 12:24 PM, John Wasige wrote:
> Dear all,
>
> This is to kindly request for your help. I would like to plot my data.
>
> The R script below g
If you want square plots on a rectangular plotting region, then where
do you want the extra space to go?
One option would be to add outer margins to use up the extra space.
The calculations to figure out exactly how much space to put in the
outer margins will probably not be trivial.
Another opti
I think that you need to reconsider your conditions.
The smallest number in your candidate set is 1, so if you sample 100
1's they will add to 100 which is greater than 50. So to have a set
of numbers that sums to 50 you will need to either include negative
numbers, 0's, or sample fewer than 50 v
There are several options. The option that is most like search and
replace is to use the `sub` or `gsub` function (or similar functions
in added packages). But you may be able to accomplish what you want
even simpler by using the `paste`, `paste0`, or `sprintf` functions.
On Tue, Jun 28, 2016 at
You can use the grconvertX and grconvertY functions to find the
coordinates (in user coordinates to pass to rect) of the figure region
(or other regions).
Probably something like:
grconvertX(c(0,1), from='nfc', to='user')
grconvertY(c(0,1), from='nfc', to='user')
On Fri, Jun 24, 2016 at 8:19 P
Please help us help you. Tell us what you have tried and where you
have looked (otherwise we may just point you to things you already
know about). Also what is your focus (simple analysis, learning,
programming, ...)?
The best place to start is with "An Introduction to R", which installs
with R.
Super,
Are you just interested in having the final intervals computed for
you? Or are you trying to compute them yourself so that you can learn
more about what they do? Or something else?
If the first is the case then you can just use the multcomp package
as you have mentioned. David was assu
There are some packages that add labels or other attributes (units) to
columns of data frames and have methods to display the labels, units,
etc. One of these packages is Hmisc, see the label and unit
functions. I believe that there are other packages as well. This may
provide what the original
Filling polygons with lines is a throwback to the time when the height
of quality graphics was the mechanical pen plotter (a device that used
a pen in a mechanical arm to draw the plot on a piece of paper).
Computing and printing technology has advanced quite a bit from that
day, so you may want to
Here is the rough equivalent to what you did but using the R6 package:
library(R6)
Cache <- R6Class("Cache",
  public = list(
    myCache = numeric(0),
    add = function(x) {
      self$myCache <- c(self$myCache, x)
      print(self$myCache)
    }
  )
)
cacheThis <- Cache$new()
cacheThis$add(1
Boris,
You may want to look into the R6 package. This package has tools that
help create objects (environments) with methods that can use and
change the object. You can have your persistent table stored as part
of your object and then create methods that will use and modify the
table within the
You can create a list of functions then use subscripting. E.g.:
funvec <- c(sin, cos, tan)
for(i in 1:3) {
  print(funvec[[i]](pi/6))
}
Just create the list with the different functions that you want to
call, then subscript that list with your n_r variable.
You can also look at ?switch, but I t
You need to use `isolate` on one of the assignments so that it does
not register as an update. Here are a few lines of code from the
server.R file for an example that I use that has a slider for r
(correlation) and another slider for r^2 and whenever one is changed,
I want the other to update:
To give a full answer we need some more detail from you. For example
what operating system are you on? what do you mean by "users click on
it"? and at what point do you want them to click (after running R,
when looking at the desktop, etc.)
But to help get you started you may want to look at the
One option is to call `legend` twice and do some manual positioning.
This worked for me:
plot(1:10)
legend('topleft', lty=1:3, bty="n", legend=c('','','') )
legend('topleft', pch=c(20,8,1), bty="n",
legend=c('clyde','irving','melvin'), inset=c(0.1,0))
You may need to fiddle with the amount of ins
You are trying to use shortcuts where shortcuts are not appropriate,
and you end up going a lot further around than if you did not use the
shortcut; see fortune(312).
You should really reread the help page: help("[[") and section 6.1 of
An Introduction to R.
Basically you should be able to do something
Do you have the sample sizes that the sample proportions were computed
from (e.g. 0.5 could be 1 out of 2 or 100 out of 200)?
If you do then you can specify the model with the proportions as the y
variable and the corresponding sample sizes as the weights argument to
glm.
If you only have proport
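A sketch of the first case with invented data, where p is a sample
proportion and n the sample size it came from:

```r
dat <- data.frame(p = c(0.50, 0.25, 0.70),
                  n = c(2, 100, 200),
                  x = c(1.1, 2.3, 3.0))
fit <- glm(p ~ x, family = binomial, weights = n, data = dat)
summary(fit)
```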
Yes, you can use the knitr package directly (that is what Rstudio
uses, but it is its own package).
On Fri, Jan 8, 2016 at 1:16 PM, Ragia . wrote:
> dear group,
> is there a way to write my outputs to any kind of files ( the output contains
> text and graph) using R only without installing rstud
You may also be interested in the xspline function (graphics package,
so you don't need to install or load anything extra) since you mention
general splines. These splines can be made similar to Bezier curves
(but not exactly the same). The function returns a set of coordinates
(when draw=FALSE)
Adrian,
Draw the polygon once without the border and with the hole in it, then
go back and draw the border around the outer polygon without any fill.
On Wed, Dec 2, 2015 at 9:31 AM, Adrian Dușa wrote:
> On Wed, Dec 2, 2015 at 5:19 PM, David L Carlson wrote:
>>
>> Using only base graphics, one solut
Richard,
I think the reason that this gives the warning is for the rest of us
who don't think about asking about missing values in non-data objects.
I could imagine someone choosing a poor name for a variable and doing
something like:
mean <- mean(x)
is.na(mean)
which would then tell them wheth
John,
One additional point that I have not seen brought up yet. If your
main goal is to have all the output from an existing R script put into
a single output file then you should look at the `stitch` function in
the knitr package. This will take an existing R script and convert it
to one of the
Here is one option if you don't want to write the explicit for loop
(there is still a loop):
library(TeachingDemos)
v<-0:60
z<-3/5+4i/5
t<-z^(v/9)
tmpfun <- function(npoints) {
plot( Re(t)[seq_len(npoints)], Im(t)[seq_len(npoints)],
xlab="Real", ylab="Imaginary", xlim=c(-1,1), ylim=c(-1,
Look at the polylineoffset function in the polyclip package. It looks
like it does what you are asking for.
On Mon, Nov 2, 2015 at 5:33 AM, WRAY NICHOLAS
wrote:
> Hi I am plotting various strands of information, and I want to create an
> "envelope" around each line, so that the locus of the env
An alternative to your approach is to pass your data to the approxfun
or splinefun functions and then use the integrate function on the
result.
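A sketch with invented (x, y) pairs:

```r
x <- c(0, 1, 2, 3, 4)
y <- c(0, 1, 4, 9, 16)
f <- splinefun(x, y)       # or approxfun(x, y) for linear interpolation
integrate(f, lower = 0, upper = 4)  # area under the interpolated curve
```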
On Mon, Aug 24, 2015 at 3:10 AM, CarstenH wrote:
> Hi all
>
> I need to calculate the area under a curve (integral) for the following data
> pairs:
>
> D
For base plotting functions (not grid) then you may be interested in
the updateusr function in the TeachingDemos package. If you can find
the current coordinates of 2 points on the 1st plot (the background
image) that are not in the same horizontal or vertical line (use the
locator function if not
R has several options for projecting a 3 dimensional plot onto a 2
dimensional plane and plotting the result. Which is best depends on
what you want.
You mention a function "plot3d" but not which package it comes from.
You are more likely to receive prompt and useful help when you do your
part of
I would suggest that instead of trying to view all the results in the
console you save the result into an object, then use the View (note
the capital V) function to be able to scroll through the results. The
head and tail functions have already been mentioned and I second their
use for a quick
You might be interested in the HWidentify and HTKidentify functions in
the TeachingDemos package. They currently don't do maps, but since
the functions are pure R code it would not be hard to modify them.
On Wed, Jul 22, 2015 at 5:35 PM, Marie-Louise wrote:
> Hello,
> I am trying to build a map
I think that the Cox model still works well when the only information
in the censoring is conditional on variables in the model. What you
describe could be called non-informative conditional on x.
To really see the difference you need informative censoring that
depends on something not included i
If you want you script to wait until you have a value entered then you
can use the tkwait.variable or tkwait.window commands to make the
script wait before continuing (or you can bind the code to a button so
that you enter the value, then click on the button to run the code).
On Wed, Jul 8, 2015 a
This is FAQ 7.21.
The most important part of the answer in FAQ 7.21 is the last section
where it states that it is often easier to use a list rather than
messing around with trying to dynamically name global variables.
If you tell us what you are trying to accomplish then we may have
better advic
The examples on the help page for the function "simfun" in the
TeachingDemos package have some examples of simulating data from
nested designs with some terms fixed and some random. I don't think
any of the examples match your conditions exactly, but could be
modified to do so (changing a random e
Thanks for your final paragraph, we sometimes see people who want us
to do their homework for them and that does not go over well, but I
think your situation is one where many would be happy to help. So here
are some hints to help:
The rnorm command expects the standard deviation, not the variance
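For example (the parameters are invented):

```r
x <- rnorm(1000, mean = 10, sd = sqrt(25))  # variance 25 means sd = 5
var(x)  # close to 25
```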