Thanks Pierce - I got the drift of the loop, though so far I have actually
tried it out with the ODBC<->SQLite driver.

However, in your notes below you mentioned that you used a large dump. My
question is: when faced with such large row sets, how do you limit the
amount of data read in by the SELECT statement?

SELECT * FROM table would presumably read in the entire table, and a LIMIT
clause would stop at the specified number of rows and not go beyond it.
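
For instance, would something like this paging loop be the way to do it?
(Just a sketch reusing the posts table from your example; the page size
and the count-based stop condition are my own guesses.)

   | db rs row offset pageSize count |
   pageSize := 100.
   offset := 0.
   db := NBSQLite3Connection openOn: '/tmp/so.db'.
   [ [ count := 0.
       rs := db execute: 'select * from posts limit ', pageSize printString,
                 ' offset ', offset printString.
       [ (row := rs next) notNil ] whileTrue: [
           count := count + 1.
           "process the row here, e.g. row at: 'Title'" ].
       rs close.
       offset := offset + pageSize.
       "a short page means we have reached the end of the table"
       count = pageSize ] whileTrue
   ] ensure: [ db close ].

Or is there a better idiom than LIMIT/OFFSET for walking a table with
millions of rows?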

Another doubt I had is about the example code below, from NBSQLite3:
is rs close in the right place - and would it be closing the connection?

   res := db beginTransaction.

   rs := db execute: '...'.
   rs close.

   res := db commitTransaction. 
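
My current understanding - please correct me if this is wrong - is that
the two closes act on different things, roughly like this:

   res := db beginTransaction.

   rs := db execute: 'select * from posts limit 10'.
   [ (row := rs next) notNil ] whileTrue: [ "process each row" ].
   rs close.                 "releases only the result set"

   res := db commitTransaction.
   db close.                 "this is what actually closes the connection"

Is that the right picture?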

Regards,
Sanjay



Pierce Ng wrote:
> On Thu, Apr 30, 2015 at 06:29:53AM -0700, Sanjay Minni wrote:
>> the smalltalkhub page shows an example
>> db execute: 'SELECT * FROM BLOGPOSTS;'.
>> but how are the rows supposed to be accessed - in the tests I see they
>> have to be walked through one by one like a 3GL loop
> 
> Sanjay,
> 
> Here's how you can collect all rows:
> 
>   | db resultSet row coll |
>   coll := OrderedCollection new.
>   db := NBSQLite3Connection openOn: '/tmp/so.db'.
>   [   resultSet := db execute: 'select * from posts limit 10'.
>       [ (row := resultSet next) notNil ] whileTrue: [
>           coll add: row ]
>   ] ensure: [ db close ].
>   coll inspect.
> 
> Each row is an NBSQLite3Row instance. Use #at: with the column name as key 
> to get at the data.
> 
>   coll fifth at: 'Title' 
>   ==> 'How do I calculate someone''s age in C#?'
> 
> With the collection you still have to loop through them, no?
> 
> I am using NBSQLite3 to play with existing data, and I haven't worked
> with enough different types of data to attempt to generalize a looping
> construct. So I've always done by-row processing.
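> 
> For example, here's a sketch of the same loop as above, just processing
> each row as it arrives instead of collecting it:
> 
>   | db resultSet row |
>   db := NBSQLite3Connection openOn: '/tmp/so.db'.
>   [   resultSet := db execute: 'select * from posts limit 10'.
>       [ (row := resultSet next) notNil ] whileTrue: [
>           "handle one row at a time; nothing accumulates in RAM"
>           Transcript show: (row at: 'Title'); cr ]
>   ] ensure: [ db close ].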
> 
> Another reason is that, while my data sets aren't "Big Data", I am
> aesthetically inclined against reading everything into RAM. :-) Above
> example uses the Sep 2011 StackOverflow data dump. The SQLite datafile
> created from the dump, with full text indexing, is 16GB and the table
> posts has ~6.5 million rows.
> 
> HTH.
> 
> Pierce




