Re: [R-SIG-Finance] Naming conventions for merged XTS columns

2012-10-09 Thread Jeff Ryan
To provide some additional clarity now that I've been able to look into it:

Firstly, merge on data.frames is quite different from merge in
xts/zoo.  merge in zoo/xts is cbind, which with your objects gets
you something different:

> cbind(as.data.frame(M), as.data.frame(N))
           a a
1970-01-02 1 2
1970-01-03 2 1
1970-01-04 2 2

From there, xts is indeed different from both matrix and zoo.  The
matrix class doesn't care about non-unique column names.  zoo appends
the object symbol to the column, though it can behave differently
depending on the options passed.

Most of these variations are too expensive for xts in general, but to
keep column names unique we employ the R function
make.names(..., unique = TRUE).  Technically this happens in the C
code, but when the columns are named, the results you see come from
this call.
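
You can see the same rule directly in base R:

> make.names(c("a", "a"), unique = TRUE)
[1] "a"   "a.1"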

What gets more interesting is when names are not defined.  You can
peruse below and see what happens in at least a few cases.

In general, xts tries to make sure you have unique column names,
which is oddly consistent with the data.frame constructor itself:

data.frame(a=1,a=3)
  a a.1
1 1   3

So, if you have colnames defined, make.names(colnames(x),unique=TRUE)
is the behavior you can expect.  If not, it depends on how you call
the related bind functions:

> x
                    [,1]
1970-01-01 00:00:01    1
1970-01-01 00:00:02    2
1970-01-01 00:00:03    3

> merge(x,x,x)
                    x x.1 x.2
1970-01-01 00:00:01 1   1   1
1970-01-01 00:00:02 2   2   2
1970-01-01 00:00:03 3   3   3

> cbind(x,x,x)
                    ..1 ..2 ..3
1970-01-01 00:00:01   1   1   1
1970-01-01 00:00:02   2   2   2
1970-01-01 00:00:03   3   3   3

> do.call(merge,list(x,x,x))
                    X1.3 X1.3.1 X1.3.2
1970-01-01 00:00:01    1      1      1
1970-01-01 00:00:02    2      2      2
1970-01-01 00:00:03    3      3      3

HTH
Jeff

On Tue, Oct 9, 2012 at 7:52 PM, Worik Stanton  wrote:
> When I merge two xts with the same column names a '.1' is appended...
>
> Where does this convention come from and can it be firmly relied on?
>
> Sorry if this is a general 'R' question... But merge acts differently
> for data.frames
>
> cheers
> Worik
>
>> M <- xts(trunc(3*runif(3)), seq(as.Date(1), as.Date(3), by=1))
>> M
>            [,1]
> 1970-01-02    0
> 1970-01-03    2
> 1970-01-04    1
>> colnames(M) <- 'a'
>> N <- xts(trunc(3*runif(3)), seq(as.Date(1), as.Date(3), by=1))
>> colnames(N) <- 'a'
>> merge(N,M)
>            a a.1
> 1970-01-02 2   0
> 1970-01-03 1   2
> 1970-01-04 2   1
>>
>
>> merge(as.data.frame(M), as.data.frame(N) )
>   a
> 1 1
> 2 2
> 3 2
>>
>
>
> --
> it does not matter  I think that I shall never see
> how much I dig and digA billboard lovely as a tree
> this hole just  Indeed, unless the billboards fall
> keeps getting deeper  I'll never see a tree at all
>



-- 
Jeffrey Ryan
jeffrey.r...@lemnica.com

www.lemnica.com



Re: [R-SIG-Finance] Naming conventions for merged XTS columns

2012-10-09 Thread Jeff Ryan
I'm not in front of the relevant bits of code, but it would likely be in 
make.unique.names or something like that within merge.xts at the R level. 

That said, relying on it seems unwise. The pattern is meant to match zoo 
behavior, which in turn is meant to match matrix, IIRC. But inconsistencies at 
the R level are pretty common around the edges, so xts matches "inconsistent" 
behavior to be most correct :-)

It very much depends on what the colnames are on the incoming objects (defined, 
unique, etc.) and on the order of the calls -- even as far as merge() vs 
merge.xts() vs cbind() vs cbind.xts() vs do.call(merge, ...) vs do.call(merge.xts, ...).

You get the idea. Plus there are very real performance issues when trying to 
match native R or zoo behaviors in all cases. 

Setting colnames explicitly would be my advice. 
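
For instance, something like this (a small sketch with made-up objects and 
names, not your data):

> library(xts)
> M <- xts(1:3, as.Date("1970-01-02") + 0:2)
> N <- xts(4:6, as.Date("1970-01-02") + 0:2)
> colnames(M) <- "M.a"
> colnames(N) <- "N.a"
> merge(N, M)
           N.a M.a
1970-01-02   4   1
1970-01-03   5   2
1970-01-04   6   3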

Best
Jeff

Jeffrey Ryan|Founder|jeffrey.r...@lemnica.com

www.lemnica.com

On Oct 9, 2012, at 7:52 PM, Worik Stanton  wrote:

> When I merge two xts with the same column names a '.1' is appended...
> 
> Where does this convention come from and can it be firmly relied on?
> 
> Sorry if this is a general 'R' question... But merge acts differently
> for data.frames
> 
> cheers
> Worik
> 
>> M <- xts(trunc(3*runif(3)), seq(as.Date(1), as.Date(3), by=1))
>> M
>            [,1]
> 1970-01-02    0
> 1970-01-03    2
> 1970-01-04    1
>> colnames(M) <- 'a'
>> N <- xts(trunc(3*runif(3)), seq(as.Date(1), as.Date(3), by=1))
>> colnames(N) <- 'a'
>> merge(N,M)
>            a a.1
> 1970-01-02 2   0
> 1970-01-03 1   2
> 1970-01-04 2   1
>> 
> 
>> merge(as.data.frame(M), as.data.frame(N) )
>  a
> 1 1
> 2 2
> 3 2
>> 
> 
> 
> -- 
> it does not matter  I think that I shall never see
> how much I dig and digA billboard lovely as a tree
> this hole just  Indeed, unless the billboards fall
> keeps getting deeper  I'll never see a tree at all
> 



[R-SIG-Finance] Naming conventions for merged XTS columns

2012-10-09 Thread Worik Stanton
When I merge two xts with the same column names a '.1' is appended...

Where does this convention come from and can it be firmly relied on?

Sorry if this is a general 'R' question... But merge acts differently
for data.frames

cheers
Worik

> M <- xts(trunc(3*runif(3)), seq(as.Date(1), as.Date(3), by=1))
> M
           [,1]
1970-01-02    0
1970-01-03    2
1970-01-04    1
> colnames(M) <- 'a'
> N <- xts(trunc(3*runif(3)), seq(as.Date(1), as.Date(3), by=1))
> colnames(N) <- 'a'
> merge(N,M)
           a a.1
1970-01-02 2   0
1970-01-03 1   2
1970-01-04 2   1
>

> merge(as.data.frame(M), as.data.frame(N) )
  a
1 1
2 2
3 2
>


-- 
it does not matter  I think that I shall never see
how much I dig and digA billboard lovely as a tree
this hole just  Indeed, unless the billboards fall
keeps getting deeper  I'll never see a tree at all



Re: [R-SIG-Finance] Higher Order Moment Portfolio

2012-10-09 Thread Brian G. Peterson

On 10/09/2012 06:24 PM, nserdar wrote:

Please let me know any codes about "higher Order Moment Portfolio
Optimisation" in R


R package 'PerformanceAnalytics' can calculate all the higher order 
moments and co-moments.
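
For example (a minimal sketch using the edhec data set that ships with 
PerformanceAnalytics; the column selection and the pairwise co-moment calls 
are just illustrative):

library(PerformanceAnalytics)
data(edhec)
R <- edhec[, 1:4]
skewness(R)                  # third moments, column by column
kurtosis(R)                  # fourth moments, column by column
CoSkewness(R[, 1], R[, 2])   # pairwise co-skewness
CoKurtosis(R[, 1], R[, 2])   # pairwise co-kurtosis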


Packages 'PortfolioAnalytics' and 'Meucci' can optimize arbitrary 
objective functions using these higher moments.


Regards,

   - Brian


--
Brian G. Peterson
http://braverock.com/brian/
Ph: 773-459-4973
IM: bgpbraverock



[R-SIG-Finance] Higher Order Moment Portfolio

2012-10-09 Thread nserdar

Please let me know of any code for "higher order moment portfolio
optimisation" in R.

Regards,
Serdar






Re: [R-SIG-Finance] "ugarchfit" function of "rugarch" package needs at least 100 data points.

2012-10-09 Thread Eric Zivot
While GARCH parameter estimates can be quite variable (as Pat has nicely
shown), the resulting volatility forecasts tend to be much more stable (at
least in the short term). If the goal is short-term vol forecasting then one
doesn't really care too much about the GARCH point estimates. The important
long-term parameters are the unconditional vol and the persistence of the
GARCH process as these dictate where and how fast the vol forecasts evolve.
For the short term, most GARCH forecasts look a lot like simple EWMAs. So if
you only have 100 obs and want a simple short-term forecast, use an EWMA.
That's the RiskMetrics approach.
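
A minimal sketch of that kind of forecast (the lambda = 0.94 decay is the
classic RiskMetrics daily value; the returns below are simulated purely for
illustration):

# RiskMetrics-style EWMA variance recursion:
#   sigma2[t] = lambda * sigma2[t-1] + (1 - lambda) * r[t-1]^2
ewma_vol <- function(r, lambda = 0.94) {
  s2 <- numeric(length(r))
  s2[1] <- var(r)                     # seed with the sample variance
  for (t in 2:length(r)) {
    s2[t] <- lambda * s2[t - 1] + (1 - lambda) * r[t - 1]^2
  }
  sqrt(s2)                            # conditional volatility path
}

set.seed(1)
r <- rnorm(100, sd = 0.01)            # ~100 observations, as in the question
tail(ewma_vol(r), 1)                  # most recent conditional vol estimate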

-Original Message-
From: r-sig-finance-boun...@r-project.org
[mailto:r-sig-finance-boun...@r-project.org] On Behalf Of Patrick Burns
Sent: Tuesday, October 09, 2012 1:05 PM
To: r-sig-finance@r-project.org
Subject: Re: [R-SIG-Finance] "ugarchfit" function of "rugarch" package needs
at least 100 data points.

The blog post:

http://www.portfolioprobe.com/2012/09/17/variability-of-garch-estimates/

shows how variable garch results are using
2000 daily observations.

Even if there is a way to get your model fit, I doubt that it would mean
very much.

If you do find a way, I would suggest that you create multiple datasets that
are simulations of the model the same size as your data.  Then estimate the
model on the simulations and see how variable those model estimates are.
They will be very variable, I predict.

Pat

On 09/10/2012 19:58, Tanvir Khan wrote:
> Right now I'm trying to fit a GJR Garch model on the inflation rate of 
> my country, Bangladesh, but the problem is the function "ugarchfit" of 
> the "rugarch" package requires at least 100 data points to run. I have 
> yearly
> (12 months average) data and my country is not even 50 years old! So, 
> is there any way to use this function to fit a data with less than 100 
> observations? Any type of suggestion will be extremely helpful.
>

--
Patrick Burns
patr...@burns-stat.com
http://www.burns-stat.com
http://www.portfolioprobe.com/blog
twitter: @portfolioprobe



Re: [R-SIG-Finance] "ugarchfit" function of "rugarch" package needs at least 100 data points.

2012-10-09 Thread Patrick Burns

The blog post:

http://www.portfolioprobe.com/2012/09/17/variability-of-garch-estimates/

shows how variable GARCH results are even when using
2000 daily observations.

Even if there is a way to get your model fit,
I doubt that it would mean very much.

If you do find a way, I would suggest that you
create multiple datasets that are simulations of
the model, each the same size as your data.  Then
estimate the model on the simulations and see how
variable those estimates are.  They will be very
variable, I predict.
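
A rough sketch of that exercise with rugarch (the GJR spec, the 50 simulated
paths, and the return series 'r' are all illustrative assumptions, and failed
refits are not handled here):

library(rugarch)
spec <- ugarchspec(variance.model = list(model = "gjrGARCH"),
                   mean.model = list(armaOrder = c(0, 0)))
fit   <- ugarchfit(spec, data = r)              # 'r' is your return series
sim   <- ugarchsim(fit, n.sim = length(r), m.sim = 50)
paths <- fitted(sim)                            # one simulated series per column
refit <- apply(paths, 2, function(p) coef(ugarchfit(spec, data = p)))
apply(refit, 1, sd)                             # spread of the re-estimated parameters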

Pat

On 09/10/2012 19:58, Tanvir Khan wrote:

Right now I'm trying to fit a GJR Garch model on the inflation rate of my
country, Bangladesh, but the problem is the function "ugarchfit" of the
"rugarch" package requires at least 100 data points to run. I have yearly
(12 months average) data and my country is not even 50 years old! So, is
there any way to use this function to fit a data with less than 100
observations? Any type of suggestion will be extremely helpful.



--
Patrick Burns
patr...@burns-stat.com
http://www.burns-stat.com
http://www.portfolioprobe.com/blog
twitter: @portfolioprobe



Re: [R-SIG-Finance] "ugarchfit" function of "rugarch" package needs at least 100 data points.

2012-10-09 Thread alexios ghalanos

Dear Tanvir,

To get an 'accurate' estimate of the persistence of a GARCH process you 
really do need more than 100 data points; anything less is unlikely to be 
informative or accurate, and the fit might even fail to converge. More 
generally, in the frequentist approach adopted by the rugarch package, the 
question of determining the presence of GARCH dynamics is difficult to answer 
with very few data points. Instead, if you insist on using GARCH with 
limited-history monthly data, my suggestion is to try one of the Bayesian 
packages (bayesGARCH or LaplacesDemon; the latter has extensive examples, 
including how to fit a number of GARCH models).
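
A bare-bones sketch of that route (the object 'infl' stands in for your 
inflation series, and the chain settings are illustrative only; note that 
bayesGARCH fits a plain GARCH(1,1) with Student-t errors rather than GJR):

library(bayesGARCH)
y    <- as.numeric(infl)                    # your (demeaned) series
mcmc <- bayesGARCH(y, control = list(n.chain = 2, l.chain = 2000))
smpl <- formSmpl(mcmc, l.bi = 500)          # drop burn-in draws
summary(smpl)                               # posterior summary of the parameters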


Regards,

Alexios


On 09/10/2012 19:58, Tanvir Khan wrote:

Right now I'm trying to fit a GJR Garch model on the inflation rate of my
country, Bangladesh, but the problem is the function "ugarchfit" of the
"rugarch" package requires at least 100 data points to run. I have yearly
(12 months average) data and my country is not even 50 years old! So, is
there any way to use this function to fit a data with less than 100
observations? Any type of suggestion will be extremely helpful.





[R-SIG-Finance] "ugarchfit" function of "rugarch" package needs at least 100 data points.

2012-10-09 Thread Tanvir Khan
Right now I'm trying to fit a GJR-GARCH model to the inflation rate of my
country, Bangladesh, but the problem is that the "ugarchfit" function of the
"rugarch" package requires at least 100 data points to run. I have yearly
(12-month average) data and my country is not even 50 years old! So, is
there any way to use this function to fit data with fewer than 100
observations? Any suggestion would be extremely helpful.

-- 
**
Tanvir Khan
Applied Statistician
Institute Of Statistical Research & Training
University Of Dhaka
http://bd.linkedin.com/pub/tanvir-khan/39/281/536
tkh...@isrt.ac.bd
tanvir...@gmail.com
**

***THE WORLD IS OPEN SOURCE***



Re: [R-SIG-Finance] Slow data EOD

2012-10-09 Thread Ralph Vince
Ah, OK... then I can perhaps create a function to append it onto the
file I am downloading at end of day, or something like that.
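
Something along these lines might do it (a rough sketch; the fields come from
quantmod::getQuote's default Yahoo output, and the path just mirrors the one
in my script):

library(quantmod)
q   <- getQuote("SPY")                       # delayed quote; effectively EOD late in the day
row <- data.frame(Date   = format(Sys.Date(), "%Y%m%d"),
                  Open   = q$Open, High = q$High, Low = q$Low,
                  Close  = q$Last, Volume = q$Volume)
write.table(row, "/home/oracle/broadbaseddata/SPY.csv",
            append = TRUE, quote = FALSE, sep = ",",
            row.names = FALSE, col.names = FALSE)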

On Tue, Oct 9, 2012 at 1:02 PM, G See  wrote:
> On Tue, Oct 9, 2012 at 11:58 AM, Ralph Vince  wrote:
>> Thanks Garrett, Im really just looking for timely end-of-day data on
>> this though. Ralph
>
> For example, getQuote("SPY"), will return end of day data if you call
> it late in the afternoon ;-)



Re: [R-SIG-Finance] Slow data EOD

2012-10-09 Thread G See
On Tue, Oct 9, 2012 at 11:58 AM, Ralph Vince  wrote:
> Thanks Garrett, Im really just looking for timely end-of-day data on
> this though. Ralph

For example, getQuote("SPY") will return end-of-day data if you call
it late in the afternoon ;-)



Re: [R-SIG-Finance] Slow data EOD

2012-10-09 Thread Ralph Vince
Thanks Garrett, I'm really just looking for timely end-of-day data on
this though. Ralph

On Tue, Oct 9, 2012 at 12:40 PM, G See  wrote:
> Hi Ralph,
>
> You can get real time intraday data from yahoo or google:
> http://www.quantshare.com/sa-426-6-ways-to-download-free-intraday-and-tick-data-for-the-us-stock-market
>
> Or you can get delayed data with quantmod::getQuote.
>
> Maybe that will work better for you.
>
> I'm not sure about your particular issue, but one of the issues with
> yahoo's daily data is that sometimes it has duplicate timestamps (and
> different volume) for the most recent observation.
>
> HTH,
> Garrett
>
> On Tue, Oct 9, 2012 at 11:26 AM, Ralph Vince  wrote:
>> I'm downloading certain equity data from yahoo on an eod basis, using
>> the code, below. It works wonderfully, formatting the data and dates
>> precisely as I am looking for EXCEPT often the data is late. Often,
>> the latest market day's data is not up until 10, 11 pm that night.
>>
>> Is there something I am doing wrong here? Surely, yahoo must have the
>> data by the close. is the way I am invoking calling the file, below,
>> causing this? Or is there a way to obtain it from google earlier? I;d
>> be very grateful for any help along these lines. Ralph Vince
>>
>> require(quantmod)
>> library(plan)
>> brsym <- c(
>> "AAPL",
>> "ABT",
>> ...
>> "WMT",
>> "XOM"
>> );
>> for (i in 1:length(brsym)) {
>> tryCatch({
>> j <- paste("http://table.finance.yahoo.com/table.csv?s=",brsym[[i]],sep="";);
>> j <- paste(j,"&g=d&ignore=.csv",sep="");
>> print(j);
>> X <- read.csv(j, header=TRUE);
>> # Convert the "Date" column from a factor class to a Date class
>> X$Date <- as.Date(X$Date)
>> # Sort the X object by the Date column -- order(-X$Date) will sort it
>> # in the other direction
>> X <- X[order(X$Date),]
>> # Format the date column as you want
>> X$Date <- format(as.Date(X$Date),"%Y%m%d");
>> X <- X[,1:6]
>> kk <- trim.whitespace(brsym[[i]]);
>> k <- paste("/home/oracle/broadbaseddata/", kk, sep="");
>> k <- trim.whitespace(k);
>> k <- paste(k,".csv", sep="");
>> write.table(X, k, append = FALSE, quote = FALSE, sep = ",",
>> eol = "\n", na = "NA", dec = ".", row.names = FALSE,
>> col.names = FALSE, qmethod = c("escape", "double"));
>> print(k);
>> ko <- paste(X$Date[1], "-",X$Date[length(X$Date)]);
>> print(ko);
>> }, interrupt = function(ex) {
>> cat("An interrupt was detected.\n");
>> print(ex);
>> }, error = function(ex) {
>> cat("An error was detected.\n");
>> print(ex);
>> }, finally = {
>> cat("done\n");
>> })
>> }
>>


Re: [R-SIG-Finance] Slow data EOD

2012-10-09 Thread G See
Hi Ralph,

You can get real time intraday data from yahoo or google:
http://www.quantshare.com/sa-426-6-ways-to-download-free-intraday-and-tick-data-for-the-us-stock-market

Or you can get delayed data with quantmod::getQuote.

Maybe that will work better for you.

I'm not sure about your particular issue, but one of the issues with
yahoo's daily data is that sometimes it has duplicate timestamps (and
different volume) for the most recent observation.
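
If that bites you, one guard is to keep only the last row per date after
reading the CSV -- a two-line sketch against the data.frame X from your script:

X <- X[order(X$Date), ]
X <- X[!duplicated(X$Date, fromLast = TRUE), ]   # keep the latest row for each date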

HTH,
Garrett

On Tue, Oct 9, 2012 at 11:26 AM, Ralph Vince  wrote:
> I'm downloading certain equity data from yahoo on an eod basis, using
> the code, below. It works wonderfully, formatting the data and dates
> precisely as I am looking for EXCEPT often the data is late. Often,
> the latest market day's data is not up until 10, 11 pm that night.
>
> Is there something I am doing wrong here? Surely, yahoo must have the
> data by the close. is the way I am invoking calling the file, below,
> causing this? Or is there a way to obtain it from google earlier? I;d
> be very grateful for any help along these lines. Ralph Vince
>
> require(quantmod)
> library(plan)
> brsym <- c(
> "AAPL",
> "ABT",
> ...
> "WMT",
> "XOM"
> );
> for (i in 1:length(brsym)) {
> tryCatch({
> j <- paste("http://table.finance.yahoo.com/table.csv?s=",brsym[[i]],sep="";);
> j <- paste(j,"&g=d&ignore=.csv",sep="");
> print(j);
> X <- read.csv(j, header=TRUE);
> # Convert the "Date" column from a factor class to a Date class
> X$Date <- as.Date(X$Date)
> # Sort the X object by the Date column -- order(-X$Date) will sort it
> # in the other direction
> X <- X[order(X$Date),]
> # Format the date column as you want
> X$Date <- format(as.Date(X$Date),"%Y%m%d");
> X <- X[,1:6]
> kk <- trim.whitespace(brsym[[i]]);
> k <- paste("/home/oracle/broadbaseddata/", kk, sep="");
> k <- trim.whitespace(k);
> k <- paste(k,".csv", sep="");
> write.table(X, k, append = FALSE, quote = FALSE, sep = ",",
> eol = "\n", na = "NA", dec = ".", row.names = FALSE,
> col.names = FALSE, qmethod = c("escape", "double"));
> print(k);
> ko <- paste(X$Date[1], "-",X$Date[length(X$Date)]);
> print(ko);
> }, interrupt = function(ex) {
> cat("An interrupt was detected.\n");
> print(ex);
> }, error = function(ex) {
> cat("An error was detected.\n");
> print(ex);
> }, finally = {
> cat("done\n");
> })
> }
>


[R-SIG-Finance] Slow data EOD

2012-10-09 Thread Ralph Vince
I'm downloading certain equity data from yahoo on an EOD basis, using
the code below. It works wonderfully, formatting the data and dates
precisely as I am looking for, EXCEPT that often the data is late. Often,
the latest market day's data is not up until 10 or 11 pm that night.

Is there something I am doing wrong here? Surely yahoo must have the
data by the close. Is the way I am calling the file, below, causing
this? Or is there a way to obtain it from google earlier? I'd be very
grateful for any help along these lines. Ralph Vince

require(quantmod)
library(plan)
brsym <- c(
"AAPL",
"ABT",
...
"WMT",
"XOM"
);
for (i in 1:length(brsym)) {
tryCatch({
j <- paste("http://table.finance.yahoo.com/table.csv?s=",brsym[[i]],sep="";);
j <- paste(j,"&g=d&ignore=.csv",sep="");
print(j);
X <- read.csv(j, header=TRUE);
# Convert the "Date" column from a factor class to a Date class
X$Date <- as.Date(X$Date)
# Sort the X object by the Date column -- order(-X$Date) will sort it
# in the other direction
X <- X[order(X$Date),]
# Format the date column as you want
X$Date <- format(as.Date(X$Date),"%Y%m%d");
X <- X[,1:6]
kk <- trim.whitespace(brsym[[i]]);
k <- paste("/home/oracle/broadbaseddata/", kk, sep="");
k <- trim.whitespace(k);
k <- paste(k,".csv", sep="");
write.table(X, k, append = FALSE, quote = FALSE, sep = ",",
eol = "\n", na = "NA", dec = ".", row.names = FALSE,
col.names = FALSE, qmethod = c("escape", "double"));
print(k);
ko <- paste(X$Date[1], "-",X$Date[length(X$Date)]);
print(ko);
}, interrupt = function(ex) {
cat("An interrupt was detected.\n");
print(ex);
}, error = function(ex) {
cat("An error was detected.\n");
print(ex);
}, finally = {
cat("done\n");
})
}
