> "ReidH" == Huntsinger, Reid <[EMAIL PROTECTED]>
> on Wed, 3 Aug 2005 13:21:45 -0400 writes:
ReidH> I thought setting keep.data=FALSE might help, but
ReidH> running this on a 32-bit Linux machine, the R process
ReidH> seems to use 1.2 GB until just before clara returns.
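
As a hedged sketch of the option under discussion, assuming the 'cluster'
package's clara() and reusing the object name from the original post:

library(cluster)
## keep.data = FALSE: do not store a copy of the data inside the returned
## object; the result is smaller, but memory use *during* the call can
## still be large, as noted above.
cc <- clara(mydata, k = 7, keep.data = FALSE)
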
On Wed, 3 Aug 2005, Prof Brian Ripley wrote:
>> From the help page:
>
> 'clara' is fully described in chapter 3 of Kaufman and Rousseeuw
> (1990). Compared to other partitioning methods such as 'pam', it
> can deal with much larger datasets. Internally, this is achieved
> by considering sub-datasets of fixed size ('sampsize') such that the
> time and storage requirements become linear in n rather than quadratic.
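
A hedged illustration of that sub-sampling, again using the object name
from the original post; the particular samples and sampsize values here
are arbitrary:

library(cluster)
## clara() draws 'samples' sub-datasets of 'sampsize' rows each, runs the
## pam algorithm on each, and keeps the best set of medoids, so time and
## storage stay linear in the number of rows rather than quadratic.
cc <- clara(mydata, k = 7, samples = 50, sampsize = 1000)
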
> "Nestor" == Nestor Fernandez <[EMAIL PROTECTED]>
> on Wed, 03 Aug 2005 18:44:38 +0200 writes:
Nestor> I'm trying to estimate clusters from a
Nestor> very large dataset using clara but the program stops
Nestor> with a memory error. The (very simple) code and the
Nestor> error:
-----Original Message-----
From: [EMAIL PROTECTED]
[mailto:[EMAIL PROTECTED]] On Behalf Of Nestor Fernandez
Sent: Wednesday, August 03, 2005 12:45 PM
To: r-help@stat.math.ethz.ch
Subject: [R] clara - memory limit
Dear all,
I'm trying to estimate clusters from a very large dataset using clara but
the program stops with a memory error.
Dear all,
I'm trying to estimate clusters from a very large dataset using clara but the
program stops with a memory error. The (very simple) code and the error:
library(foreign)   # read.dbf() comes from the 'foreign' package
library(cluster)   # clara() comes from the 'cluster' package
mydata <- read.dbf(file = "fnorsel_4px.dbf")
my.clara.7k <- clara(mydata, k = 7)

Error: cannot allocate vector of size 465108 Kb
The dataset c
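
For context on the allocation error above, a rough sketch of how much
memory a single double-precision copy of the data needs; the dimensions
below are invented placeholders (the dataset's actual size is cut off
above):

## One copy of an n x p numeric matrix needs about n * p * 8 bytes.
n <- 1e6; p <- 60                        # invented placeholder dimensions
n * p * 8 / 2^20                         # size of one such copy, in MB
## For data that is already loaded:
## print(object.size(mydata), units = "Mb")
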
I need information about the clara routine. The on-line doc says that the
argument 'stand' is a logical, indicating if the measurements in x are
standardized before calculating the dissimilarities. Measurements are
standardized for each variable (column), by subtracting the variable's mean
value and dividing by the variable's mean absolute deviation.
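
As a hedged sketch of that standardization (the function name below is
made up; x stands for the numeric data):

## Subtract each column's mean and divide by its mean absolute deviation,
## i.e. the scaling described in the documentation excerpt above.
standardize_clara <- function(x) {
  apply(x, 2, function(col) {
    m <- mean(col)
    (col - m) / mean(abs(col - m))
  })
}
## clara(x, k, stand = TRUE) should then behave roughly like
## clara(standardize_clara(x), k, stand = FALSE).
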