If CRAN were a passive repository like, say, SourceForge, the discussion
about its policies would not be relevant to this list. However, the
development of R and its packages is intimately connected to the
CRAN repository policy.
I doubt any of the players in building our current R ecosystem
of predict.warn code
Test example:
# A simple test of predict.warn.R for 2 factors
# J C Nash 20050822
rm(list=ls())
source("predict.warn.R")
x <- c(1, 2, 3, 4, 5, 6, 7, 8)
fac1 <- c("A", "A", , , , "B", )
fac1 <- c(fac1, "A")
fac1
y <- c(2, 4, 5, 8, 9.5, 12, 13, 17)
fac2 <- c(1, 1, 1, 2, 2, 1, 2, 3)
fac2 <- as.factor(fac2) # Note
After some thought, I decided r-devel was probably the best of the R lists
for this item. Do feel free to share, as the purpose is to improve documentation
and identify potential issues.
John Nash
The R Consortium has awarded some modest funding for "histoRicalg",
a project to document and
In looking at rootfinding for the histoRicalg project (see
gitlab.com/nashjc/histoRicalg),
I thought I would check how uniroot() solves some problems. The following short
example
ff <- function(x){ exp(0.5*x) - 2 }
ff(2)
ff(1)
uniroot(ff, 0, 10)
uniroot(ff, c(0, 10), trace=1)
uniroot(ff, c(0,
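For context, ff() has its root at x = 2*log(2), about 1.386294. A minimal correct call passes the interval as a length-2 vector (a sketch, not the exact call from the original message; the tol value here is my own choice):

```r
# Root of exp(0.5*x) - 2 = 0 is x = 2*log(2), approximately 1.386294
ff <- function(x) exp(0.5*x) - 2
res <- uniroot(ff, c(0, 10), tol = 1e-10)
res$root   # close to 1.386294
```

The two-argument form uniroot(ff, 0, 10) does not match the documented interface, which may be part of what the example above was probing.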
In https://cran.r-project.org/bin/linux/ubuntu/
Administration and Maintances of R Packages
^^
Minor stuff, but if someone who can edit is on the page,
perhaps it can be changed to "Maintenance"
Best, JN
> [4] 5.03391827 0.49004503 2.76198165
> [7] 1.09760394 1.92979280 1.34802525
> [10] 1.38677998 1.38628970 1.38635074
> [13] 1.38628970
>
> This will not tell you why the objective function is being called (e.g. in a
> line search
> or in derivative estimati
Having worked with optim() and related programs for years, it surprised me
that I haven't noticed this before, but optim() is inconsistent in how it
deals with bounds constraints specified at infinity. Here's an example:
# optim-glitch-Ex.R
x0<-c(1,2,3,4)
fnt <- function(x, fscale=10){
yy <-
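The example file is truncated above, but the inconsistency being described can be shown with a simpler function. This is a hedged reconstruction, not the original optim-glitch-Ex.R: a scalar lower = -Inf equals the default and passes silently, while a vector of infinite bounds triggers optim()'s switch to L-BFGS-B with a warning, even though the problem is still unconstrained.

```r
# Hedged sketch (my stand-in function, not the original fnt):
# infinite bounds given as a vector make optim() switch methods,
# while a scalar -Inf (the default) does not.
fq <- function(x) sum((x - 2)^2)
x0 <- c(1, 2, 3, 4)
a1 <- optim(x0, fq, lower = -Inf)          # Nelder-Mead, no warning
a2 <- optim(x0, fq, lower = rep(-Inf, 4))  # warns, switches to L-BFGS-B
```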
The main thing is to post the "small reproducible example".
My (rather long-term) experience can be written as
if (exists("reproducible example") ) {
DeveloperFixHappens()
} else {
NULL
}
JN
On 2019-03-29 11:38 a.m., Saren Tasciyan wrote:
> Well, first I can't sign in bugzilla
As the original coder (in mid 1970s) of BFGS, CG and Nelder-Mead in optim(),
I've
been pushing for some time for their deprecation. They aren't "bad", but we have
better tools, and they are in CRAN packages. Similarly, I believe other
optimization
tools in the core (optim::L-BFGS-B, nlm, nlminb)
by folk outside the core.
JN
On 2019-03-04 9:12 a.m., Avraham Adler wrote:
> On Mon, Mar 4, 2019 at 5:01 PM J C Nash <mailto:profjcn...@gmail.com>> wrote:
>
> As the original coder (in mid 1970s) of BFGS, CG and Nelder-Mead in
> optim(), I've
> be
Rereading my post below, I realize there is scope for misinterpretation. As I
have said earlier,
I recognize the workload in doing any streamlining, and also the immense
service to us
all by r-core. The issue is how to manage the workload efficiently while
maintaining
and modernizing the capability.
As with many areas of R usage, my view is that the concern is one
of making it easier to find appropriate information quickly. The
difficulty is that different users have different needs. So if
one wants to know (most of) what is available, the Time Series
Task View is helpful. If one is a novice,
wrote:
>>>>>> J C Nash
>>>>>> on Thu, 26 Mar 2020 09:29:53 -0400 writes:
>
> > Given that a number of us are housebound, it might be a good time to
> try to
> > improve the approximation. It's not an area where I have much
> ex
Given that a number of us are housebound, it might be a good time to try to
improve the approximation. It's not an area where I have much expertise, but in
looking at the qbeta.c code I see a lot of root-finding, where I do have some
background. However, I'm very reluctant to work alone on this,
Does this issue fit in the more general one of centralized vs
partitioned checks? I've suggested before that the CRAN team
seems (and I'll be honest and admit I don't have a good knowledge
of how they work) to favour an all-in-one checking, whereas it
might be helpful to developers and also widen
Sorry, Martin, but I've NOT commented on this matter, unless someone has been
impersonating me.
Someone else?
JN
On 2021-01-11 4:51 a.m., Martin Maechler wrote:
>> Viechtbauer, Wolfgang (SP)
>> on Fri, 8 Jan 2021 13:50:14 + writes:
>
> > Instead of a separate file to
Is this a topic for Google Summer of Code? See
https://github.com/rstats-gsoc/gsoc2021/wiki
On 2021-01-09 12:34 p.m., Dirk Eddelbuettel wrote:
>
> The idea of 'white lists' to prevent known (and 'tolerated') issues, note,
> warnings, ... from needlessly reappearing is very powerful and general,
One of the mechanisms by which R has been extended and improved has been through
the efforts of students and mentors in the Google Summer of Code initiatives.
This
year Toby Hocking (along with others) has continued to lead this effort.
This year, Google has changed the format somewhat so that
This message is to let R developers know that the project in the Subject is now
a Google Summer of Code project.
Our aim in this project is to find simplifications and corrections to the nls()
code, which has become heavily patched. Moreover, it has some deficiencies in
that
there is no
of
inference can be when applied to parameters in such models.
JN
On 2021-08-20 11:35 a.m., Martin Maechler wrote:
>>>>>> J C Nash
>>>>>> on Fri, 20 Aug 2021 11:06:25 -0400 writes:
>
> > In our work on a Google Summer of Code project
>
In our work on a Google Summer of Code project "Improvements to nls()",
the code has proved sufficiently entangled that we have found (so far!)
few straightforward changes that would not break legacy behaviour. One
issue that might be fixable is that nls() returns no result if it
encounters some
In mentoring and participating in a Google Summer of Code project "Improvements
to nls()",
I've not found examples of use of the "subset" argument in the call to nls().
Moreover,
in searching through the source code for the various functions related to
nls(), I can't
seem to find where subset
Extreme scaling quite often ruins optimization calculations. If you think
available methods
are capable of doing this, there's a bridge I can sell you in NYC.
I've been trying for some years to develop a good check on scaling so I can
tell users
who provide functions like this to send (lots
FWIW, nlsr::nlxb() gives same answers.
JN
On 2023-01-25 09:59, Dave Armstrong wrote:
Dear Colleagues,
I recently answered [this question]() on StackOverflow that identified
what seems to be unusual behaviour with `stats:::nls.fitted()`. In
particular, a null model returns a single fitted
nls() actually uses different modeling formulas depending on the 'algorithm',
and
there is, in my view as a long-time nonlinear modeling person, an unfortunate
structural issue that likely cannot be resolved simply. This is because for
nonlinear
modeling programs we really should be using
A tangential email discussion with Simon U. has highlighted a long-standing
matter that some tools in the base R distribution are outdated, but that
so many examples and other tools may use them that they cannot be deprecated.
The examples that I am most familiar with concern optimization and
Thanks Martin.
Following Duncan's advice as well as some textual input, I have put a proposed
Rd file for
optim on a fork of the R code at
https://github.com/nashjc/r/blob/master/src/library/stats/man/optim.Rd
This has the diffs given below from the github master. The suggested changes
Better check your definitions of SVD -- there are several forms, but all I
am aware of (and I wrote a couple of the codes in the early 1970s for the SVD)
have positive singular values.
JN
On 2023-07-16 02:01, Durga Prasad G me14d059 wrote:
Respected Development Team,
This is Durga Prasad
But why time methods that the author (me!) has been telling the community for
years have updated replacements? Especially as optimx::optimr() uses the same
syntax as optim()
and gives access to a number of solvers, both production and didactic. This set
of solvers is being improved or added to regularly, with a
to the arm64 results if that is your worry.
Cheers,
Simon
On 04/02/2024 4:47 p.m., J C Nash wrote:
Slightly tangential: I had some woes with some vignettes in my
optimx and nlsr packages (actually in examples comparing to OTHER
packages) because the M? processors don't have 80 bit registers
Slightly tangential: I had some woes with some vignettes in my
optimx and nlsr packages (actually in examples comparing to OTHER
packages) because the M? processors don't have 80 bit registers of
the old IEEE 754 arithmetic, so some existing "tolerances" are too
small when looking to see if is
t, getting the indicated result of 0 for (sum(vv1) - 1.0e0), with
non-zero on my
Ryzen 7 laptop.
JN
# FPExtendedTest.R J C Nash
loopsum <- function(vec){
  n <- length(vec)
  vsum <- 0.0
  for (i in 1:n) { vsum <- vsum + vec[i] }
  vsum
}
small<-.Machine$double.eps/4 # 1/4 of the
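The point, as I understand it: eps/4 is below half a unit in the last place of 1.0, so in strict 64-bit IEEE arithmetic each addition of small onto a running total of 1.0 rounds back to 1.0, while x87-style 80-bit accumulation can retain the tiny contributions. A self-contained sketch under that assumption:

```r
# Sketch: in strict 64-bit IEEE double arithmetic, 1 + eps/4 rounds back
# to 1, so the loop returns exactly 1.0; 80-bit extended registers may not.
small <- .Machine$double.eps / 4
loopsum <- function(vec) {
  vsum <- 0.0
  for (v in vec) vsum <- vsum + v
  vsum
}
vv1 <- c(1.0, rep(small, 10))
loopsum(vv1) - 1.0   # 0 on SSE2-style 64-bit doubles
```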
Slightly tangential, but about two decades ago I was researching
how multimedia databases might be reasonably structured. To have a
concrete test case, I built a database of English Country (Playford)
dances, which I called Playford's Progeny. (Ben B. will be aware of
this, too.) This proved
We'd be more than happy to have you contribute directly. The goal is not just an
information session, but to get some movement toward ways to make the package
collection(s)
easier to use effectively. Note to selves: "effectively" is important -- we
could make
things easy by only recommending a few
advise others to do so. I
don't know how representative my little corner of the world is,
though.
I have an embryonic task view on mixed models at
https://github.com/bbolker/mixedmodels-misc/blob/master/MixedModels.ctv
but the perfect is the enemy of the good ...
On Fri, Feb 10, 2017 at 9:56 AM, J
lig...@statistik.tu-dortmund.de> wrote:
On 15.03.2017 18:30, Ben Bolker wrote:
On 17-03-15 11:09 AM, J C Nash wrote:
Possibly tangential, but has there been any effort to set up a Sparc testbed? It
seems we could use a network-available (virtual?) machine, since this platform
is
often the unfortunate one. Unless, of course, there's a sunset date.
For information, I mentioned SPARC at our local linux group,
Having been around a while and part of several programming language and
other standards (see ISO 6373:1984 and IEEE 754-1985), I prefer some democracy
at the
level of getting a standard. Though perhaps at the design level I can agree
with Hadley. However, we're now at the stage of needing to
Duncan's observation is correct. The background work to the standards
I worked on was a big effort, and the content was a lot smaller than R,
though possibly similar in scope to dealing with the current question.
The "voting" was also very late in the process, after the proposals
were developed,
It occurs to me that there could be packages developed by early-career R
developers that might fit this prize, which is considered quite prestigious
(not to mention the cash) in the numerical methods community.
It is also likely that people may not be aware of the award in the R community.
For the past few weeks I've been struggling to check a new version of optimx
that gives
a major upgrade to the 2013 version currently on CRAN and subsumes several
other packages.
It seems to work fine, and passes Win-builder, R CMD check, etc.
However, both devtools and cran reverse dependency
I think several of us have had similar issues lately. You might have seen my
posts on reverse dependencies.
It seems there are some sensitivities in the CRAN test setup, though I think
things are improving.
Last week I submitted optimx again. I don't think I changed anything but the
date and
Will pending queries to CRAN-submissions about false positives in the check
process be cleared
first, or left pending? I've been waiting quite a while re: new optimx package,
which has 1 so-called "ERROR" (non-convergence, please use a different method
msg)
and 1 WARNING because new optimx
ario, Canada
> Web: https://socialsciences.mcmaster.ca/jfox/
>
>
>
>> -Original Message-
>> From: R-package-devel [mailto:r-package-devel-boun...@r-project.org] On
>> Behalf Of J C Nash
>> Sent: Monday, August 27, 2018 8:44 PM
>> To:
have the same parent):
>
> ea <- TraceSetup_2(fn = function(x) x^2 - 2*x + 1)
>> ls(ea)
> [1] "fn"     "ftrace" "gr"     "ifn"    "igr"
>> ea$fn
> function(x) x^2 - 2*x + 1
>>
>> eb <- TraceSetup_2(fn = function(x
In order to track progress of a variety of rootfinding or optimization
routines that don't report some information I want, I'm using the
following setup (this one for rootfinding).
TraceSetup <- function(ifn=0, igr=0, ftrace=FALSE, fn=NA, gr=NA){
# JN: Define globals here
groot<-list(ifn=ifn,
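The listing is cut off, but the idea — counters kept in an environment that the wrapped function updates on every call — can be sketched as follows. The names wrapped and the simplified signature are my invention; the real TraceSetup stores ifn, igr, ftrace, fn and gr.

```r
# Hedged sketch of an environment-based call counter; wrapper names
# are my own, not necessarily those in the original TraceSetup.
TraceSetup <- function(fn, ftrace = FALSE) {
  env <- new.env()
  env$ifn <- 0
  env$fn  <- fn
  env$wrapped <- function(x) {
    env$ifn <- env$ifn + 1   # count every objective-function call
    val <- env$fn(x)
    if (ftrace) cat("call", env$ifn, ": f(", x, ") =", val, "\n")
    val
  }
  env
}

ea <- TraceSetup(function(x) x^2 - 2*x + 1)
ea$wrapped(3)   # returns 4
ea$ifn          # one call recorded
```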
TraceSetup -> -> registerNames
> Execution halted
Also had to use utils::globalVariables( ...
JN
On 2018-08-27 08:40 PM, Richard M. Heiberger wrote:
> Does this solve the problem?
>
> if (getRversion() >= '2.15.1')
> globalVariables(c('envroot'))
>
> I ke
Excuses for the length of this, but optimx has a lot of packages
using it.
Over the past couple of months, I have been struggling with checking
a new, and much augmented, optimx package. It offers some serious
improvements:
- parameter scaling on all methods
- two safeguarded Newton methods
'm running Linux Mint
18.3 Sylvia.
Linux john-j6-18 4.10.0-38-generic #42~16.04.1-Ubuntu SMP Tue Oct 10 16:32:20
UTC 2017 x86_64 x86_64 x86_64 GNU/Linux
john@john-j6-18 ~ $ R
R version 3.4.4 (2018-03-15) -- "Someone to Lean On"
J C Nash
_
esses. Output from these is not capturable/sinkable
> by the master R process. The gist of what's happening is:
>
>> sink("output.log")
>> system("echo hello") ## not sinked/captured
> hello
>> sink()
>> readLines("output.log")
&g
On 2018-04-11 03:24 PM, Henrik Bengtsson wrote:
> R CMD check, which is used internally runs checks in standalone
> background R processes. Output from these is not capturable/sinkable
> by the master R process. The gist of what's happening is:
>
>> sink("output.log")
&
> Georgi Boshnakov
>
>
>
>
>
> From: R-package-devel [r-package-devel-boun...@r-project.org] on behalf of J
> C Nash [profjcn...@gmail.com]
> Sent: 11 April 2018 19:05
> To: List r-package-devel
> Subject: [R-pkg-devel] Saving output of chec
If NOTEs are going to be treated as errors, then a lot of infrastructure (all my
packages for optimization and nonlinear modelling, which are dependencies of
a few dozen other packages etc.) will disappear. This is because they have
version
numbering I've been using in some form that pre-dates R
There is a quite well-developed but not terribly large C program
for conjugate gradients and similar approaches to optimization I would
like to wrap in a package for use in R. I would then build this into the
optimx package I maintain. I suspect that the approach may turn out to be
one of the most
LSE FALSE FALSE FALSE FALSE FALSE
>
> The || operator works on length 1 Booleans. Since fval can be of length
> greater than 1 at that point, the proper condition seems to be:
>
> any(is.infinite(fval)) || any(is.na(fval))
>
> Best regards,
>
> Sebastian
>
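A minimal illustration of the fix quoted above: any() collapses each vector test to a single TRUE/FALSE, which is what || requires (recent R versions error outright on longer operands).

```r
# any() reduces each vector test to a length-1 logical before ||
fval <- c(1.0, Inf, NA)
bad <- any(is.infinite(fval)) || any(is.na(fval))
bad   # TRUE
```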
iolate some conditions e.g., character rather than
numeric etc.
JN
On 2019-06-07 8:44 a.m., J C Nash wrote:
> Uwe Ligges <ligges@statistik.tu-dortmund.de>
> Fri Jun 7 11:44:37 CEST 2019
>
> Previous message (by thread): [R-pkg-devel] try() in R CMD check --as-cran
&
FALSE FALSE FALSE FALSE FALSE
>>>
>>> The || operator works on length 1 Booleans. Since fval can be of
>> length
>>> greater than 1 at that point, the proper condition seems to be:
>>>
>>> any(is.infinite(fval)) || any(is.na(fval))
>> a little ty
n...@gmail.com>> wrote:
>
> On 07/06/2019 12:32 p.m., William Dunlap wrote:
> > The length-condition-not-equal-to-one checks will cause R to shut down
> > even if the code is in a tryCatch().
>
> That's strange. I'm unable to reproduce it with my tries, and
After making a small fix to my optimx package, I ran my usual R CMD check
--as-cran.
To my surprise, I got two ERRORs unrelated to the change. The errors popped up
in
a routine designed to check the call to the user objective function. In
particular,
one check is that the size of vectors is
I'm seeking some general advice about including vignettes in my packages,
which are largely for nonlinear estimation and function minimization
(optimization).
This means that my packages offer alternatives to many other tools, and the user
then has the chore of deciding which is appropriate. Bad
Are you sure you want to try to run R etc. under Wine?
- If you have Windows running, either directly or in a VM, you can run R there.
- If you have Windows and want to run R under some other OS, then set up a VM
e.g., Linux Mint, for that. I sometimes test R for Windows in a VirtualBox VM
for
Possibly the "old" site-library is not getting over-written. I had to
manually delete.
See https://www.mail-archive.com/r-help@r-project.org/msg259132.html
JN
On 2020-07-28 7:21 a.m., Dirk Eddelbuettel wrote:
>
> Hi Adelchi,
>
> On 28 July 2020 at 11:46, Adelchi Azzalini wrote:
> | When I
-06-10 9:37 a.m., Dirk Eddelbuettel wrote:
>
> On 10 June 2021 at 09:22, J C Nash wrote:
> | Thanks to help from Duncan Murdoch, we have extracted the nls()
> functionality to a package nlspkg and are building
> | an nlsalt package. We can then run nlspkg::AFunction() and
>
to install with "makevars.win", I got a WARNING on running a CHECK until I
replaced it with "Makevars.win", i.e., the camel-case name.
Do these observations merit edits in the manual?
JN
On 2021-06-11 11:16 a.m., J C Nash wrote:
> After some flailing around, discover
Hi,
I'm mentoring Arkajyoti Bhattacharjee for the Google Summer of Code project
"Improvements to nls()".
Thanks to help from Duncan Murdoch, we have extracted the nls() functionality
to a package nlspkg and are building
an nlsalt package. We can then run nlspkg::AFunction() and
> [11] LC_MEASUREMENT=en_CA.UTF-8 LC_IDENTIFICATION=C
>
> attached base packages:
> [1] stats graphics grDevices utils datasets methods base
>
> loaded via a namespace (and not attached):
> [1] compiler_4.1.0
>>
On 2021-06-10 9
> On 12.06.2021 16:39, J C Nash wrote:
>> Two minor notes:
>>
>> 1) The Writing R Extensions manual, as far as I can determine, does not
>> inform package
>> developers that Makevars.win needs to be in the src/ subdirectory. I
>> followed the example
>
ymmetric.matrix
>>>> <https://rdrr.io/cran/matrixcalc/src/R/is.symmetric.matrix.R>, and
>>>> is.positive.definite
>>>> <https://rdrr.io/cran/matrixcalc/src/R/is.positive.definite.R>) and
>>> there
>>>> is only one function in post
I just downloaded the source matrixcalc package to see what it contained. The
functions
I looked at seem fairly straightforward and the OP could likely develop
equivalent features
in his own code, possibly avoiding a function call. Avoiding the function
call means NAMESPACE etc. are not
Another approach is to change the responsibility.
My feeling is that tests in the TESTING package should be modifiable by the
maintainer of
the TESTED package, with both packages suspended if the two maintainers cannot
agree. We
need to be able to move forward when legacy behaviour is outdated
021 9:49 a.m., J C Nash wrote:
>> Another approach is to change the responsibility.
>>
>> My feeling is that tests in the TESTING package should be modifiable by the
>> maintainer of
>> the TESTED package, with both packages suspended if the two maintainers
>&
I'd second Uwe's point. I was one of 31 signatories to the first IEEE 754 (I
didn't participate in the two more recent releases, as I had already torn my
hair out with the details of low-level bit manipulations). Before the standard,
porting code was truly a nightmare. We did it
because we had to and
I think this is similar in nature (though not detail) to an issue raised
on StackOverflow where the OP used "x" in dot args and it clashed with the
"x" in a numDeriv call in my optimx package. I've got a very early fix (I
think), though moderators on StackOverflow were unpleasant enough to
delete
On 9/2/23 4:23 PM, Greg Hunt wrote:
The percent encoded characters appear to be valid in that URL, suggesting
that rejecting them is an error. That kind of error could occur when the
software processing them converts them back to a non-unicode character set.
On Sun, 3 Sep 2023 at 4:34 am, J C Nas
I'm posting this in case it helps some other developers getting build failure.
Recently package nlsr that I maintain got a message that it failed to build on
some platforms. The exact source of the problem is still to be illuminated,
but seems to be in knitr::render and/or pandoc or an
I've spent a couple of hours with an Rmarkdown document where I
was describing some spherical coordinates made up of a radius r and
some angles. I wanted to fix the radius at 1.
In my Rmarkdown text I wrote
Thus we have `r = 1` ...
This caused failure to render with "unexpected =". I was
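For anyone hitting the same thing: as I understand the knitr rule, a backtick followed by r and a space marks inline R code, so the two forms below behave very differently (the second is the workaround mentioned in the reply):

```markdown
Thus we have `r = 1` ...    <!-- knitr parses this as inline R code "= 1": render fails -->
Thus we have ` r = 1` ...   <!-- leading space: an ordinary code span, renders fine -->
```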
Yes. An initial space does the trick. Thanks. J
On 2023-11-03 11:48, Serguei Sokol wrote:
On 03/11/2023 at 15:54, J C Nash wrote:
I've spent a couple of hours with an Rmarkdown document where I
was describing some spherical coordinates made up of a radius r and
some angles. I wanted to fix
Thanks. That seems to be the issue. Also vincent's suggestion of checkRd.
JN
On 2023-10-22 10:52, Ivan Krylov wrote:
On Sun, 22 Oct 2023 10:43:08 -0400
J C Nash wrote:
\itemize{
\item{fnchk OK;}{ \code{excode} = 0;
\code{infeasible} = FALSE
I'm doing a major update of the optimx package and things were going relatively
smoothly until this weekend when files that have passed win-builder gave NOTEs
on r-devel for several manual (.Rd) files.
The NOTE is of the form
* checking Rd files ... NOTE
checkRd: (-1) fnchk.Rd:40-41: Lost
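For reference, the "Lost braces" NOTE typically comes from the two-argument \item{}{} form inside \itemize, which only \describe supports. A hedged guess at the fix for the fnchk.Rd fragment quoted earlier:

```latex
% checkRd "Lost braces" usually means \item{label}{desc} inside \itemize;
% \itemize items take no arguments, so two-part items belong in \describe:
\describe{
  \item{fnchk OK}{\code{excode} = 0; \code{infeasible} = FALSE}
}
```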
FWIW: optimx::optimx is outdated and only there for legacy use.
Better to use the optimx::optimr() function for single solvers.
JN
On 2022-11-25 05:10, Ivan Krylov wrote:
On Fri, 25 Nov 2022 09:59:10 +
"ROTOLO, Federico /FR" writes:
When submitting my package parfm, I get the following
Recently I updated my package nlsr and it passed all the usual checks and was
uploaded to CRAN. A few days later I got a message that I should "fix" my
package as it had failed in "M1max" tests.
The "error" was actually a failure in a DIFFERENT package that was used as
an example in a vignette.
In work on an upgrade to my optimx package, I added to my (plain text) NEWS
file.
The lines
VERSION 2023-06-25
o This is a MAJOR revision and overhaul of the optimx package and its
components.
o Fixed CITATION file based on R CMD check --as-cran complaints
regarding
if there are plans to use NEWS for some purpose in
the future i.e., to actually track changes beyond package maintainer's
comments?
Cheers, and thanks again.
JN
On 2023-07-26 10:03, Ivan Krylov wrote:
On Wed, 26 Jul 2023 09:37:38 -0400
J C Nash writes:
I'd like to avoid NOTEs if possible
The important information is in the body of the man page for news(),
i.e., found by
?utils::news
and this explains why putting an "o" in front of a line clears the
NOTE. Once I realized that CRAN is running this, I could see the
"why". Thanks.
JN
On 2023-07-26 10:25, Duncan Murdoch wrote:
My nlsr package was revised mid-February. After CRAN approved it, I got a
message that it was "failing" M1Mac tests. The issue turned out to be ANOTHER
package that was being used in an example in a vignette. Because M1 does not
provide the IEEE 754 80 bit registers, a method in package minqa did
In updating my nlsr package, I ran R CMD check --as-cran and got an error
that /usr/lib/R/doc/html/katex/katex.js was not found.
I installed the (large!) r-cran-katex. No joy.
katex.js was in /usr/share/R/doc/html/katex/ so I created a symlink. Then
I got katex-config.js not found (but in 1
For info, I put a little study I did about the byte code compiler and
other speedup approaches (but not multicore) on the Rwiki at
http://rwiki.sciviews.org/doku.php?id=tips:rqcasestudy
which looks at a specific problem, so may not be relevant to everyone.
However, one of my reasons for doing
The message below came to me from the Getting Open Source Logic INto
Government list. I'm passing it on to the devel list as the infoworld
article may have some ideas of relevance to the R project, mainly
concerning build and test issues and tracking changes in the code base.
While the
While as a Linux user who has not so far been banished to Winland I have
not experienced this problem, it seems to be the type of issue where a
how to, for example, on the R Wiki, would be helpful. Moreover, surely
this is a name conflict on different platforms, so possibly a list of
these
There is quite a literature on related methods for variance. If anyone
is interested, I did some work (and even published the code in the
magazine Interface Age in 1981) on some of these. I could probably put
together scans of relevant materials, some of which are not easily
available. It
This issue has been known for some time and I've had "why don't you fix
this?" queries. However, I'm not one of the R-core folk who could do so,
and don't code in C. Moreover, as far as I can tell, the version of
L-BFGS-B in R is not one of the standard releases from Morales and Nocedal.
As
I had a bunch of examples of byte code compiles in something I was
writing. Changed to 3.0.2 and the advantage of compiler disappears. I've
looked in the NEWS file but do not see anything that suggests that the
compile is now built-in. Possibly I've just happened on a bunch of
examples where it
)
Actually, I'm not greatly anxious about all this. Mainly I want to make
sure that I get whatever advice is to be rendered so it is correct.
Best,
JN
On 13-11-03 02:22 PM, Duncan Murdoch wrote:
On 13-11-03 2:15 PM, Prof J C Nash (U30A) wrote:
I had a bunch of examples of byte code compiles
Thanks. I should not try adjusting code after some hours of proofreading.
Making that change gave a suitable time difference.
Best, JN
On 13-11-03 03:46 PM, Henrik Bengtsson wrote:
tfor <- cmpfun(tfor)
twhile <- cmpfun(twhile)
/Henrik
On Sun, Nov 3, 2013 at 11:55 AM, Prof J C Nash
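The two lines Henrik quotes are the key step. Here is a self-contained version showing compiler::cmpfun(); the name tfor comes from the thread, but the loop body is my stand-in since the original is not shown:

```r
library(compiler)  # part of base R

# Stand-in loop function; the thread's tfor body is not shown.
tfor <- function(n) {
  s <- 0
  for (i in 1:n) s <- s + i
  s
}
tfor <- cmpfun(tfor)   # replace with the byte-compiled version
tfor(10)               # 55
```

Since R 3.4 the JIT byte-compiles functions automatically, which is why an explicit cmpfun() call gains little in later versions, consistent with the observation about the compiler advantage disappearing.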
Over the years, this has been useful to me (not just in R) for many
nonlinear optimization tasks. The alternatives often clutter the screen.
On 13-11-06 06:00 AM, r-devel-requ...@r-project.org wrote:
People do sometimes use this pattern for displaying progress (e.g. iteration
counts).
I noted Duncan's comment that an answer had been provided, and went to
the archives to find his earlier comment, which I am fairly sure I saw a
day or two ago. However, neither May nor June archives show Duncan in
the thread except for the msg below (edited for space). Possibly tech
failures are
As you dig deeper you will find vmmin.c, cgmin.c and (I think) nmmin.c
etc. Those were, as I understand, converted by p2c from my Pascal codes
that you can find in the pascal library on netlib.org. These can be run
with the Free Pascal compiler.
Given how long ago these were developed (30 years
I won't comment on the C/C++ option, as I'm not expert in that. However,
R users and developers should know that Nocedal et al. who developed
L-BFGS-B released an update to correct a fault in 2011. It was important
enough that an ACM TOMS article was used for the announcement.
I recently
As the author of 3 of the 5 methods in optim, I think you may be wasting
your time if this is for performance. My reasons are given in
http://www.jstatsoft.org/v60/i02
Note that most of the speed benefits of compilation are found in the
objective and gradient function, with generally more minor