> my application reads about 100.000 records from the DB during startup (it will 
> be even more in the future!). This takes about 20 seconds. Is that normal for 
> such a number of records? For me, it seems a bit too long.

Reading 100.000 rows from a table is a big job. Are they read using a 
single SELECT, or is a sequence of SELECTs used?


> The executed SQL statement looks like this:
> 
> sql = "SELECT \"cid\", \"name\", \"city\", \"client\" "
>         "FROM   company "
>         "WHERE  \"confidentiality\" <= " + conf + " "
>         "AND    \"name\" LIKE '" + mPattern + "' "
>         "ORDER BY \"name\" DESC, \"city\" DESC";
> 
> I created an index for the 4 selected columns. Is there anything that is time 
> consuming in this statement? Or is the performance for 100.000 records OK?

There is not much an index can help with in this query:

- On "confidentiality" the selection is done with "<=", which on 
average selects half the data. To judge this one would have to know 
more about your database; if only a few rows meet the <= condition, 
an index there would be highly efficient.

- On "name" the usefulness of the index is reduced if "mPattern" 
starts with a wildcard. Usually "LIKE" can only use an index to 
search for the leading non-wildcard characters.

- The index on "cid", "city", "client" should not really affect this 
query, since indexes are used for selection (and perhaps for 
sorting), not for output. You should have a look at the query plan 
here.
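To make the last two points concrete, here is a minimal sketch of inspecting a query plan. It uses Python with SQLite rather than SAP DB (SAP DB has its own EXPLAIN facility; the table and column names just mirror the question), but the behaviour it demonstrates is the general one: a LIKE pattern with a leading wildcard cannot use the index on "name", while a prefix pattern can.

```python
# Illustrative only: SQLite stands in for SAP DB; schema mirrors the
# "company" table from the question. EXPLAIN QUERY PLAN is SQLite's
# way of looking at the query plan.
import sqlite3

con = sqlite3.connect(":memory:")
# COLLATE NOCASE lets SQLite's LIKE optimization use the index,
# since LIKE is case-insensitive by default.
con.execute("CREATE TABLE company (cid INTEGER, name TEXT COLLATE NOCASE, "
            "city TEXT, client TEXT, confidentiality INTEGER)")
con.execute("CREATE INDEX idx_name ON company (name)")

def plan(sql):
    # Collapse the plan rows into one string; the detail text is the
    # last column of each EXPLAIN QUERY PLAN row.
    rows = con.execute("EXPLAIN QUERY PLAN " + sql).fetchall()
    return " ".join(r[-1] for r in rows)

# Prefix pattern: the planner can search idx_name.
print(plan("SELECT cid FROM company WHERE name LIKE 'Acme%'"))
# Leading wildcard: the planner falls back to scanning the table.
print(plan("SELECT cid FROM company WHERE name LIKE '%cme'"))
```

The exact plan wording varies between database systems, but the pattern to look for is the same: "using index" for the prefix search versus a full-table scan for the leading-wildcard search.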

Elmar

_______________________________________________
sapdb.general mailing list
[EMAIL PROTECTED]
http://listserv.sap.com/mailman/listinfo/sapdb.general
