Rob,

There have been a few reports of issues with the default value of 100 for 
rows_at_time, and I have had issues on OS X connecting to an Oracle server. I 
have set the value to 1 in the wrapper functions that I wrote to obtain 
data from our server, where I call odbcConnect() and sqlQuery(), both of which 
have this argument.
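
A minimal sketch of such a wrapper (the DSN name "ourDSN" and the helper name 
fetch_query are illustrative, not from my actual code):

```r
# Force row-by-row fetching to work around drivers that mis-fetch
# multiple rows. Both odbcConnect() and sqlQuery() accept rows_at_time.
fetch_query <- function(sql, dsn = "ourDSN") {
  library(RODBC)                                 # odbcConnect()/sqlQuery()
  channel <- odbcConnect(dsn, rows_at_time = 1)  # disable multi-row fetches
  on.exit(odbcClose(channel))                    # always close the channel
  sqlQuery(channel, sql, rows_at_time = 1)       # belt and braces: set it here too
}
```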

This issue is referenced in the help for sqlQuery() in the Details section:

Modern drivers should work (and work faster, especially if communicating with a 
remote machine) with rows_at_time = 100, the usual default, or more. (However, 
some drivers may mis-fetch multiple rows, in which case set rows_at_time = 1 
when creating the connection.) However, if max is specified then this may fetch 
too many rows and hence it could be reduced (but then this setting applies to 
all subsequent fetches from that result set). Another circumstance in which you 
might want to reduce rows_at_time is if there are large character columns in the 
result set: with the default value up to 6Mb of buffer for each such column 
could be allocated to store intermediate results.


Note that setting rows_at_time to 0 results in it being set to 1 internally in 
sqlQuery(), since the following code is present in the function:

    rows_at_time <- if (missing(rows_at_time)) 
        attr(channel, "rows_at_time")
    else max(1, min(1024, rows_at_time))


> max(1, min(1024, 0))
[1] 1
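
The same clamp, written out as a small standalone function (this just mirrors 
the expression above; it is not itself RODBC code), shows both bounds:

```r
# Mirrors sqlQuery()'s internal clamping of rows_at_time to 1..1024
clamp <- function(x) max(1, min(1024, x))

clamp(0)     # 0 is raised to 1
clamp(500)   # values in 1..1024 pass through unchanged
clamp(5000)  # values above 1024 are capped at 1024
```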


HTH,

Marc Schwartz 
  
On Feb 16, 2010, at 4:49 PM, Rob Forler wrote:

> It turns out that in the sqlQuery I must set rows_at_time = 0 to get rid of
> this problem.
> 
> Does anyone have any idea why this might be?
> 
> On Tue, Feb 16, 2010 at 12:52 PM, Rob Forler <rfor...@uchicago.edu> wrote:
> 
>> some more info
>>> t(t(odbcGetInfo(connection)))
>>                 [,1]
>> DBMS_Name        "Adaptive Server Anywhere"
>> DBMS_Ver         "12.70.0000"
>> Driver_ODBC_Ver  "03.51"
>> Data_Source_Name "dbname"
>> Driver_Name      "Adaptive Server Anywhere"
>> Driver_Ver       "09.00.0001"
>> ODBC_Ver         "03.52.0000"
>> Server_Name      "dbname"
>> 
>> 
>> 
>> 
>> On Tue, Feb 16, 2010 at 11:39 AM, Rob Forler <rfor...@uchicago.edu> wrote:
>> 
>>> Hello,
>>> 
>>> We are having some strange issues with RODBC related to integer columns.
>>> Whenever we do an SQL query, the data in an integer column is 150 actual
>>> data points, then 150 0's, then 150 actual data points, then 150 0's.
>>> However, our database actually has numbers where the 0's are filled in.
>>> Furthermore, other datatypes do not have this problem: double and varchar
>>> are correct and do not alternate to null. Also, if we increase
>>> rows_at_time to 1024, there are larger gaps between the 0's and the actual
>>> data. The server is a Sybase IQ database. We have tested it on a different
>>> database, Sybase ASE, and we still get this issue.
>>> 
>>> For example :
>>> 
>>> We have the following query
>>> 
>>> sqlString = "Select ActionID, Velocity from ActionDataTable"
>>> 
>>> # where ActionID is of integer type and Velocity is of double type
>>> connection = odbcConnect("IQDatabase")  # this is a Sybase IQ database
>>> sqlData = sqlQuery(connection, sqlString)
>>> 
>>> 
>>> sqlData$ActionID might be 1,2,3,4,5,6,....150, 0,0,0,0,0,0,0,....,0,0,0,
>>> 301,302,303,304,.....448,449,500,0,0,0...,0,0
>>> 
>>> and Velocity will have data values along the whole column without these
>>> big areas of 0's.
>>> 
>>> Thanks for the help,
>>> Robert Forler
>>>

______________________________________________
R-help@r-project.org mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
and provide commented, minimal, self-contained, reproducible code.