Re: [sqlite] selecting the top 3 in a group

2009-01-07 Thread Robert Citek
Turning the pseudocode into a bash script produced the desired output:

# Loop over each division, then list that division's top three teams.
# (wins+0 forces a numeric sort in case wins is stored as text.)
for i in $(sqlite3 team.db 'select distinct div from teams') ; do
  sqlite3 -separator $'\t' team.db '
select div, team, wins
from teams
where div="'$i'"
order by wins+0 desc
limit 3 ;'
done

I am still curious to know if there is a purely SQL way to do the same.
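
For reference, one pure-SQL pattern that should produce the same result
(a sketch, untested against the real data) is a correlated subquery that
keeps a row only if fewer than three teams in its division have more
wins; note that ties can yield more than three rows per division:

select div, team, wins
from teams t
where (select count(*)
         from teams t2
        where t2.div = t.div
          and t2.wins+0 > t.wins+0) < 3
order by div, wins+0 desc;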

Regards,
- Robert

On Thu, Jan 8, 2009 at 12:06 AM, Robert Citek  wrote:
> In pseudocode, I want to do something similar to this:
>
> for $i in (select div from teams) {
>  select div, team, wins from teams where div=$i order by wins desc limit 3 ;
> }
___
sqlite-users mailing list
sqlite-users@sqlite.org
http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users


Re: [sqlite] selecting the top 3 in a group

2009-01-07 Thread Robert Citek
That gets me the best team in the first five divisions.  I would like
the top three teams within each division.

Regards,
- Robert

On Thu, Jan 8, 2009 at 12:19 AM, aditya siram  wrote:
> Hi Robert,
> SQL has a LIMIT keyword. I have used it to take the top 'x' entries of a
> large table, so for example:
> SELECT * from table LIMIT 20
>
> You should be able to use it in your query like so:
> select div, team, max(wins) from teams group by div limit 5;
___
sqlite-users mailing list
sqlite-users@sqlite.org
http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users


Re: [sqlite] selecting the top 3 in a group

2009-01-07 Thread aditya siram
Hi Robert,
SQL has a LIMIT keyword. I have used it to take the top 'x' entries of a
large table, so for example:
SELECT * from table LIMIT 20

You should be able to use it in your query like so:
select div, team, max(wins) from teams group by div limit 5;

-deech

On Thu, Jan 8, 2009 at 12:06 AM, Robert Citek wrote:

> How can I construct a SQL query to pick the top three (3) items in a
> group?
>
> I have a list of sports teams which are grouped into divisions, say A,
> B, C, D, etc.  At the end of the season I would like to get a list of
> the top three teams (those with the most wins) in each division.  If I
> wanted the best team from each division, I could write this:
>
> select div, team, max(wins) from teams group by div ;
>
> Unfortunately, max() has no option to return more than one item,
> e.g. max(wins,3) for the top 3.
>
> In pseudocode, I want to do something similar to this:
>
> for $i in (select div from teams) {
>  select div, team, wins from teams where div=$i order by wins desc limit 3
> ;
> }
>
> Is there a way to do the equivalent using only SQL?
>
> Thanks in advance for any pointers.
>
> Regards,
> - Robert
> ___
> sqlite-users mailing list
> sqlite-users@sqlite.org
> http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users
>
___
sqlite-users mailing list
sqlite-users@sqlite.org
http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users


[sqlite] selecting the top 3 in a group

2009-01-07 Thread Robert Citek
How can I construct a SQL query to pick the top three (3) items in a group?

I have a list of sports teams which are grouped into divisions, say A,
B, C, D, etc.  At the end of the season I would like to get a list of
the top three teams (those with the most wins) in each division.  If I
wanted the best team from each division, I could write this:

select div, team, max(wins) from teams group by div ;

Unfortunately, max() has no option to return more than one item,
e.g. max(wins,3) for the top 3.

In pseudocode, I want to do something similar to this:

for $i in (select div from teams) {
  select div, team, wins from teams where div=$i order by wins desc limit 3 ;
}

Is there a way to do the equivalent using only SQL?

Thanks in advance for any pointers.

Regards,
- Robert
___
sqlite-users mailing list
sqlite-users@sqlite.org
http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users


Re: [sqlite] Big performance regressions since 3.4.0?

2009-01-07 Thread develo...@yahoo.com

Or solve two problems by improving the algorithm for non-indexed
GROUP BY queries:

  http://www.sqlite.org/cvstrac/tktview?tn=1809

D. Richard Hipp wrote:
>Version 3.5.3 made a change to the way DISTINCT is processed.
>Probably that change is making your particular case much slower.  The
>change can be seen at:
>
>http://www.sqlite.org/cvstrac/chngview?cn=4538
>
>This change was in response to grumbling on the mailing list
>
>http://www.mail-archive.com/sqlite-users@sqlite.org/msg28894.html
>
>It would appear that I need to spend some time improving this
>optimization - enabling it only in cases where it seems likely to
>improve performance and disabling it in cases like yours where it makes
>things much worse.  We'll try to have a look at that for version 3.6.9.


___
sqlite-users mailing list
sqlite-users@sqlite.org
http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users


Re: [sqlite] SQLite with NAS storage

2009-01-07 Thread Edward J. Yoon
>>> Each of the NAS_000 ~ N storages has approximately
>>> 300,000 files; the average file size is a few MB (not
>>> over a GB). The broker servers (with the SQLite
>>> library) are on the NAS
>
> It's not clear how many broker servers there are.
> One per NAS?

80 ~ 100 servers, generally one per NAS, but each can be connected
to any NAS (so there are many possible combinations).

>>> and the front-end web servers (more than 200 servers)
>>> communicate with live broker servers after requesting
>>> the location from the location-addressing system.
>
> , which is implemented in MySQL, right?

Yes.

>>> There are high frequency read/write/delete operations.
>
> Let's say a few MB is 50 MB, so 300,000 files on one NAS
> would contain 5E7 * 3E5 = 15E12 = 15 TB
>
> There would have to be 20E6 / 3E5 = 67 NAS installations,
> all connected to 200 webservers via broker servers.
>
> I'm afraid the chosen architecture isn't scalable, and code
> tweaking in sqlite will not help much.
>
> Opening and closing one of 20,000,000 files for every
> logical transaction is not suitable for such a scale. An
> operation of that size should be able to construct a better
> solution.
>

Exactly, it may not be suitable.

At this time, we are focused on short-term efforts. If I solve
these problems, I'll report my experiences to this community.

All of your advice is really helpful to me.

Thanks,
Edward

On Thu, Jan 8, 2009 at 6:04 AM, Kees Nuyt  wrote:
> On Wed, 7 Jan 2009 10:17:06 -0800, "Jim Dodgen"
>  wrote in General Discussion of SQLite
> Database :
>
>
>> I'm a little worried about how long it takes to open one
>> of 20,000,000 files in a directory on the NAS?
>
> I agree. It would require a very cleverly constructed
> directory tree, and very short (sub)dir names to reduce the
> effort to locate a file.
>
> "Edward J. Yoon" wrote:
>
>>> Each of the NAS_000 ~ N storages has approximately
>>> 300,000 files; the average file size is a few MB (not
>>> over a GB). The broker servers (with the SQLite
>>> library) are on the NAS
>
> It's not clear how many broker servers there are.
> One per NAS?
>
>>> and the front-end web servers (more than 200 servers)
>>> communicate with live broker servers after requesting
>>> the location from the location-addressing system.
>
> , which is implemented in MySQL, right?
>
>>> There are high frequency read/write/delete operations.
>
> Let's say a few MB is 50 MB, so 300,000 files on one NAS
> would contain 5E7 * 3E5 = 15E12 = 15 TB
>
> There would have to be 20E6 / 3E5 = 67 NAS installations,
> all connected to 200 webservers via broker servers.
>
> I'm afraid the chosen architecture isn't scalable, and code
> tweaking in sqlite will not help much.
>
> Opening and closing one of 20,000,000 files for every
> logical transaction is not suitable for such a scale. An
> operation of that size should be able to construct a better
> solution.
>
> Or we still don't understand what's really going on.
> --
>  (  Kees Nuyt
>  )
> c[_]
> ___
> sqlite-users mailing list
> sqlite-users@sqlite.org
> http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users
>



-- 
Best Regards, Edward J. Yoon @ NHN, corp.
edwardy...@apache.org
http://blog.udanax.org
___
sqlite-users mailing list
sqlite-users@sqlite.org
http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users


Re: [sqlite] 600ms for simple query: How to optimize it?

2009-01-07 Thread D. Richard Hipp

On Jan 7, 2009, at 6:11 PM, Lukas Haase wrote:

> Hello,
>
> Can somebody tell me why this (simple) query takes so much time? This
> query does nothing more than querying a table and JOINing two other
> tables together.
>
> SELECT
>   ti1.topicID AS topicID,
>   ti2.topic_textID AS parent,
>   n.level,
>   n.level_order
> FROM navigation AS n
> LEFT JOIN topic_ids AS ti1 ON ti1.topicID = n.topicID
> LEFT JOIN topic_ids AS ti2 ON ti2.topicID = n.parent_topicID
> WHERE ti1.topic_textID = 'X';

SQLite should be running this query in O(NlogN).

If you change the first LEFT JOIN to a plain old JOIN (which should  
give equivalent results by virtue of the WHERE clause restricting  
ti1.topic_textID to not be NULL) then it should run in O(logN) - much  
faster.  Try it and let me know.

SELECT
ti1.topicID AS topicID,
ti2.topic_textID AS parent,
n.level,
n.level_order
FROM navigation AS n
JOIN topic_ids AS ti1 ON ti1.topicID = n.topicID
LEFT JOIN topic_ids AS ti2 ON ti2.topicID = n.parent_topicID
WHERE ti1.topic_textID = 'X';
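
A quick way to compare the two (a sketch, assuming the schema quoted
below) is to prefix each variant with EXPLAIN QUERY PLAN. With the plain
JOIN, SQLite should be free to visit ti1 first through the topic_textID
index and then probe navigation by its INTEGER PRIMARY KEY, instead of
scanning all of navigation:

EXPLAIN QUERY PLAN
SELECT ti1.topicID, ti2.topic_textID, n.level, n.level_order
FROM navigation AS n
JOIN topic_ids AS ti1 ON ti1.topicID = n.topicID
LEFT JOIN topic_ids AS ti2 ON ti2.topicID = n.parent_topicID
WHERE ti1.topic_textID = 'X';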


>
>
> I thought I optimized the table well with indexes, but one such query
> takes 500 to 1000ms in my C++ program.
>
> Here are my table definitions and the indexes (unfortunately I need  
> the
> VARCHAR(20) field because I get the "topicID" only as text):
>
> CREATE TABLE topic_ids(
>   topicID INTEGER,
>   topic_textID VARCHAR(20),
>   PRIMARY KEY(topicID)
> );
> CREATE INDEX topic_textID ON topic_ids(topic_textID);
>
> CREATE TABLE navigation(
>   topicID INTEGER PRIMARY KEY,
>   parent_topicID INTEGER,
>   level VARCHAR(20),
>   level_order INTEGER
> );
> CREATE INDEX parent_topicID ON navigation(parent_topicID);
> CREATE INDEX level ON navigation(level);
> CREATE INDEX level_order ON navigation(level_order);
>
> I need to execute this query in a database application each time a new
> page is opened. So 500ms are really too much. A few ms would be great.
>
> And the tables themselves are not really huge:
>
> SELECT COUNT(*) FROM navigation;
> 19469
> SELECT COUNT(*) FROM topic_ids;
> 19469
>
> Does anybody have an idea what's going wrong here? How can I speed up
> this query?
>
> Thank you very much in advance,
> Luke
>
> ___
> sqlite-users mailing list
> sqlite-users@sqlite.org
> http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users

D. Richard Hipp
d...@hwaci.com



___
sqlite-users mailing list
sqlite-users@sqlite.org
http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users


Re: [sqlite] Java Database Access Code Generator

2009-01-07 Thread jose isaias cabrera

Have you thought of D?

http://www.digitalmars.com/d/1.0/index.html

The code is like Java, but with C++ speed, and it's a stand-alone program.
It's free and has a few sqlite3 libraries:

http://www.dsource.org/projects/ddbi

and

http://www.dprogramming.com/sqlite.php

it's so easy, a caveman can do it. .-)

just a thought... .-)

josé


- Original Message - 
From: "Mark Fraser" 
To: 
Sent: Wednesday, January 07, 2009 6:23 PM
Subject: [sqlite] Java Database Access Code Generator


> Hello,
>
> I am looking for suggestions on a simple tool to generate java db access
> code that works with SQLite.
>
> Ideally what I want is something that will take a database schema file
> with create table statements as input and will generate the java classes
> necessary to encapsulate basic operations on the database.
>
> Obviously I have done a lot of searching already but have not found
> anything current that has the simplicity and functionality I am hoping 
> for.
>
> Has anyone here successfully used such a tool with java/SQLite?
>
> Thanks.
>
> ___
> sqlite-users mailing list
> sqlite-users@sqlite.org
> http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users 

___
sqlite-users mailing list
sqlite-users@sqlite.org
http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users


[sqlite] Java Database Access Code Generator

2009-01-07 Thread Mark Fraser
Hello,

I am looking for suggestions on a simple tool to generate java db access 
code that works with SQLite.

Ideally what I want is something that will take a database schema file 
with create table statements as input and will generate the java classes 
necessary to encapsulate basic operations on the database.

Obviously I have done a lot of searching already but have not found 
anything current that has the simplicity and functionality I am hoping for.

Has anyone here successfully used such a tool with java/SQLite?

Thanks.

___
sqlite-users mailing list
sqlite-users@sqlite.org
http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users


[sqlite] 600ms for simple query: How to optimize it?

2009-01-07 Thread Lukas Haase
Hello,

Can somebody tell me why this (simple) query takes so much time? This 
query does nothing more than querying a table and JOINing two other 
tables together.

SELECT
ti1.topicID AS topicID,
ti2.topic_textID AS parent,
n.level,
n.level_order
FROM navigation AS n
LEFT JOIN topic_ids AS ti1 ON ti1.topicID = n.topicID
LEFT JOIN topic_ids AS ti2 ON ti2.topicID = n.parent_topicID
WHERE ti1.topic_textID = 'X';

I thought I optimized the table well with indexes, but one such query 
takes 500 to 1000ms in my C++ program.

Here are my table definitions and the indexes (unfortunately I need the 
VARCHAR(20) field because I get the "topicID" only as text):

CREATE TABLE topic_ids(
topicID INTEGER,
topic_textID VARCHAR(20),
PRIMARY KEY(topicID)
);
CREATE INDEX topic_textID ON topic_ids(topic_textID);

CREATE TABLE navigation(
topicID INTEGER PRIMARY KEY,
parent_topicID INTEGER,
level VARCHAR(20),
level_order INTEGER
);
CREATE INDEX parent_topicID ON navigation(parent_topicID);
CREATE INDEX level ON navigation(level);
CREATE INDEX level_order ON navigation(level_order);

I need to execute this query in a database application each time a new 
page is opened. So 500ms are really too much. A few ms would be great.

And the tables themselves are not really huge:

SELECT COUNT(*) FROM navigation;
19469
SELECT COUNT(*) FROM topic_ids;
19469

Does anybody have an idea what's going wrong here? How can I speed up 
this query?

Thank you very much in advance,
Luke

___
sqlite-users mailing list
sqlite-users@sqlite.org
http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users


Re: [sqlite] SQLite with NAS storage

2009-01-07 Thread Kees Nuyt
On Wed, 7 Jan 2009 11:14:11 +0900, "Edward J. Yoon"
 wrote in General Discussion of
SQLite Database :

> Thanks for your reply.
>
>> That's a lot of files. Or did you mean rows?
>> Are you sure? There can be many other reasons.
>
> There are a lot of files, so I don't know exactly
> why at this time, but I thought network latency
> can't be denied.
>
> /Edward

Which of my suggestions did you already try?


>On Wed, Jan 7, 2009 at 4:07 AM, Kees Nuyt  wrote:
>> On Tue, 6 Jan 2009 11:23:29 +0900, "Edward J. Yoon"
>>  wrote in General Discussion of
>> SQLite Database :
>>
>>> Hi, I'm newbie in here.
>>>
>>> I'm using SQLite, all data (very huge and 20 million files)
>>
>> That's a lot of files. Or did you mean rows?
>>
>>> stored on NAS storage. Lately my system has been getting
>>> progressively slower. Network cost seems too large.
>>
>> Are you sure? There can be many other reasons.
>>
>>> To improve its performance, I'm thinking about a local lock file
>>> instead of NAS, as described below.
>>>
>>> char str[1024] = "/tmp";
>>> strcat(str, lockfile);
>>> sprintf(str, "%s-lock", zFilename);
>>>
>>> But I'm not sure this is a good idea.
>>> I would love to hear your advice!!
>>
>> I think that's not the right way to start.
>> This is what I would do, more or less in
>> this order:
>>
>> 1- Optimize the physical database properties
>>   PRAGMA page_size (read the docs first!)
>>   PRAGMA [default_]cache_size
>>
>> 2- Optimize SQL: use transactions
>>   where appropriate.
>>
>> 3- Optimize your code. Don't close database
>>   connections if they can be reused.
>>
>> 4- Optimize the schema: create indexes that
>>   help, leave out indexes that don't help.
>>
>> 5- Investigate the communication to/from NAS.
>>   Do all NIC's train at the highest possible speed?
>>   Some limiting switch or router in between?
>>   Do you allow jumbo frames?
>>
>> 6- Consider SAN/iSCSI, direct attached storage.
>>
>> 7- Consider changing SQLite code.
>>
>>
>> Without more details on your use case, people will only get
>> general advice like the above.
>>
>>>Thanks.
>>
>> Hope this helps.
>> --
>>  (  Kees Nuyt

-- 
  (  Kees Nuyt
  )
c[_]
___
sqlite-users mailing list
sqlite-users@sqlite.org
http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users


Re: [sqlite] SQLite with NAS storage

2009-01-07 Thread Kees Nuyt
On Wed, 7 Jan 2009 10:17:06 -0800, "Jim Dodgen"
 wrote in General Discussion of SQLite
Database :


> I'm a little worried about how long it takes to open one 
> of 20,000,000 files in a directory on the NAS?

I agree. It would require a very cleverly constructed
directory tree, and very short (sub)dir names to reduce the
effort to locate a file.

"Edward J. Yoon" wrote:

>> Each of the NAS_000 ~ N storages has approximately
>> 300,000 files; the average file size is a few MB (not
>> over a GB). The broker servers (with the SQLite
>> library) are on the NAS

It's not clear how many broker servers there are.
One per NAS?

>> and the front-end web servers (more than 200 servers)
>> communicate with live broker servers after requesting
>> the location from the location-addressing system.

, which is implemented in MySQL, right?

>> There are high frequency read/write/delete operations.

Let's say a few MB is 50 MB, so 300,000 files on one NAS
would contain 5E7 * 3E5 = 15E12 = 15 TB

There would have to be 20E6 / 3E5 = 67 NAS installations,
all connected to 200 webservers via broker servers.

I'm afraid the chosen architecture isn't scalable, and code
tweaking in sqlite will not help much. 

Opening and closing one of 20,000,000 files for every
logical transaction is not suitable for such a scale. An
operation of that size should be able to construct a better
solution.

Or we still don't understand what's really going on.
-- 
  (  Kees Nuyt
  )
c[_]
___
sqlite-users mailing list
sqlite-users@sqlite.org
http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users


Re: [sqlite] Fwd: memory usage

2009-01-07 Thread Kees Nuyt
On Wed, 7 Jan 2009 10:25:12 -0800, ed 
wrote in General Discussion of SQLite Database
:

>Hello, I did not receive a reply to my question. Does anyone have any
>information on this?

Apparently not.
I am not much of a source hacker, but perhaps you are.
You might be able to intercept allocation and free calls and
keep tallies per active database handle. You would have to
add a few entrypoints for this.

In short:
Set up a hash table with counters for current and maximum
allocation, using the db handle as the key in the hashtable.

Add an entrypoint that registers which db handle will be
used in the next sqlite3_* call. Call that entrypoint before
every sqlite3_* call.

Add code to the "allocate" entrypoint:
increment the current and maximum memory 
counter for the currently active db handle.

Add code to the "free" entrypoint:
decrement the current memory counter 
for the currently active db handle

Add an entrypoint to report the contents of the hashtable.

>thanks,
>ed
>
>-- Forwarded message --
>From: ed 
>Date: Tue, Dec 30, 2008 at 10:02 AM
>Subject: memory usage
>To: sqlite-users@sqlite.org
>
>
>Hello,
>My multi-threaded application has various sqlite db's open simultaneously,
>in memory using the :memory: keyword, disk based db's and at times, tmpfs
>(ram) db's. Is there a way to view each individual database's memory usage?
>
>I found the functions sqlite3_memory_used() and
>sqlite3_status(SQLITE_STATUS_MEMORY_USED, ...) but these look like they
>provide memory statistics for all of sqlite, not per database.
>
>thanks,
>ed

Hope this helps.
-- 
  (  Kees Nuyt
  )
c[_]
___
sqlite-users mailing list
sqlite-users@sqlite.org
http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users


Re: [sqlite] SQLite with NAS storage

2009-01-07 Thread Carl Lindgren

Edward J. Yoon wrote:
>> Is access to *one* of the 20 million different SQLite files getting
>> progressively slower? How big is that specific SQLite file? Is that
>> the one that is "huge"? I use SQLite over an NAS (at times), and never
>> experience any noticeable slowdown. Is access to his NAS itself slow,
>> perhaps not just via SQLite but just over the regular filesystem?
>> 
>
> Each of the NAS_000 ~ N storages has approximately 300,000 files; the
> average file size is a few MB (not over a GB). The broker servers (with
> the SQLite library) are on the NAS, and the front-end web servers (more
> than 200 servers) communicate with live broker servers after requesting
> the location from the location-addressing system. There are
> high-frequency read/write/delete operations.
>
> The number of files/storages/clients keeps increasing little by little.
>   
How is the NAS attached to the network? Is it a USB attached drive to a 
server?

Many network bandwidth issues aren't caused by the DB itself but 
rather by the network architecture. A simple rule of thumb is that 
effective bandwidth is only as large as the smallest bandwidth of any 
appliance/component the data travels through. Some NAS appliances 
are notoriously slow due to the restrictions on bandwidth within the 
appliance. And you won't notice it until you get to a specific point, 
which may be a certain number of concurrent users or amount of data 
transferred, or a combination of both.

If you have the NAS attached to a server, you could set up a routine that 
would copy the desired db file to the server (or another server on the 
network) while the user is using that db file. That would give you more 
of a distributed file system architecture than just trying to serve 
everything off of the NAS and would in turn take pressure off of the NAS 
appliance.

I would be curious to know at what point this became an issue. Other 
factors would be the type/make/model of the NAS and how it's set up on 
the network. Are you using hubs or switches? Have you had a traffic 
monitor on the network? Is this network MS-based or Unix, do you have a 
PDC, and how is the NAS authenticating users? Are the connections mapped 
drives? What I would suggest is stepping back to the point of the 
performance degradation and working from there. It may be that you have 
just reached the outer limits of that particular NAS appliance.

Personally, for the reasons stated above, I do not use NAS for DBs with 
concurrent users, but rather for archiving and occasional user storage 
of non-current data. If NAS is the only solution I have, I plan to 
expand it with another appliance when I notice any degradation of 
performance, and then balance the load between the appliances.

From what I have gathered here, I think it's safe to assume that this 
isn't a SQLite issue but rather a Network/NAS issue.

Hope this helps,
Carl


Carl Lindgren
C. R. Lindgren Consulting / Business on the Desktop


> /Edward
>
> On Wed, Jan 7, 2009 at 9:57 PM, P Kishor  wrote:
>   
>> On 1/7/09, Thomas Briggs  wrote:
>> 
>>>I actually thought the original question was perfectly clear.  I
>>>  thought the proposed solution (included in the original post) was
>>>  perfectly logical too.  So what's all the fuss?
>>>   
>> The confusion, at least for me, arose from the following sentence in the OP 
>> --
>>
>> "I'm using SQLite, all data (very huge and 20 million files) "
>>
>> and the response to request for clarification of the above.
>>
>> - we know he is using SQLite
>>
>> - we know "it" is all data (although, I am not sure what else could
>> SQLite be used for other than "data")
>>
>> - we know "it" is very huge
>>
>> - we know there are 20 million *files* involved
>>
>> No matter how I put together the above four pieces of information, I
>> can't grok it.
>>
>> Is access to *one* of the 20 million different SQLite files getting
>> progressively slower? How big is that specific SQLite file? Is that
>> the one that is "huge"? I use SQLite over an NAS (at times), and never
>> experience any noticeable slowdown. Is access to his NAS itself slow,
>> perhaps not just via SQLite but just over the regular filesystem?
>>
>> So there... no fuss, just a desire to understand better what exactly
>> is the problem.
>>
>> 
>>>  On Wed, Jan 7, 2009 at 7:28 AM, P Kishor  wrote:
>>>  > On 1/6/09, Edward J. Yoon  wrote:
>>>  >> Thanks,
>>>  >>
>>>  >>  In more detail, SQLite used for user-based applications (20 million is
>>>  >>  the size of app-users). and MySQL used for user location (file path on
>>>  >>  NAS) addressing.
>>>  >
>>>  > Edward,
>>>  >
>>>  > At least I still don't understand why you have 20 million databases.
>>>  > My suspicion is that something is getting lost in the translation
>>>  > above, and neither you nor anyone on the list is benefitting from 

[sqlite] Fwd: memory usage

2009-01-07 Thread ed
Hello, I did not receive a reply to my question. Does anyone have any
information on this?

thanks,
ed

-- Forwarded message --
From: ed 
Date: Tue, Dec 30, 2008 at 10:02 AM
Subject: memory usage
To: sqlite-users@sqlite.org


Hello,
My multi-threaded application has various sqlite db's open simultaneously,
in memory using the :memory: keyword, disk based db's and at times, tmpfs
(ram) db's. Is there a way to view each individual database's memory usage?

I found the functions sqlite3_memory_used() and
sqlite3_status(SQLITE_STATUS_MEMORY_USED, ...) but these look like they
provide memory statistics for all of sqlite, not per database.

thanks,
ed
___
sqlite-users mailing list
sqlite-users@sqlite.org
http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users


Re: [sqlite] SQLite with NAS storage

2009-01-07 Thread Jim Dodgen
I'm a little worried about how long it takes to open one of 20,000,000 files
in a directory on the NAS?


On Wed, Jan 7, 2009 at 6:36 AM, Edward J. Yoon wrote:

> > Is access to *one* of the 20 million different SQLite files getting
> > progressively slower? How big is that specific SQLite file? Is that
> > the one that is "huge"? I use SQLite over an NAS (at times), and never
> > experience any noticeable slowdown. Is access to his NAS itself slow,
> > perhaps not just via SQLite but just over the regular filesystem?
>
> Each of the NAS_000 ~ N storages has approximately 300,000 files; the
> average file size is a few MB (not over a GB). The broker servers (with
> the SQLite library) are on the NAS, and the front-end web servers (more
> than 200 servers) communicate with live broker servers after requesting
> the location from the location-addressing system. There are
> high-frequency read/write/delete operations.
>
> The number of files/storages/clients keeps increasing little by little.
>
> /Edward
>
> On Wed, Jan 7, 2009 at 9:57 PM, P Kishor  wrote:
> > On 1/7/09, Thomas Briggs  wrote:
> >>I actually thought the original question was perfectly clear.  I
> >>  thought the proposed solution (included in the original post) was
> >>  perfectly logical too.  So what's all the fuss?
> >
> > The confusion, at least for me, arose from the following sentence in the
> OP --
> >
> > "I'm using SQLite, all data (very huge and 20 million files) "
> >
> > and the response to request for clarification of the above.
> >
> > - we know he is using SQLite
> >
> > - we know "it" is all data (although, I am not sure what else could
> > SQLite be used for other than "data")
> >
> > - we know "it" is very huge
> >
> > - we know there are 20 million *files* involved
> >
> > No matter how I put together the above four pieces of information, I
> > can't grok it.
> >
> > Is access to *one* of the 20 million different SQLite files getting
> > progressively slower? How big is that specific SQLite file? Is that
> > the one that is "huge"? I use SQLite over an NAS (at times), and never
> > experience any noticeable slowdown. Is access to his NAS itself slow,
> > perhaps not just via SQLite but just over the regular filesystem?
> >
> > So there... no fuss, just a desire to understand better what exactly
> > is the problem.
> >
> >>
> >>
> >>  On Wed, Jan 7, 2009 at 7:28 AM, P Kishor  wrote:
> >>  > On 1/6/09, Edward J. Yoon  wrote:
> >>  >> Thanks,
> >>  >>
> >>  >>  In more detail, SQLite used for user-based applications (20 million
> is
> >>  >>  the size of app-users). and MySQL used for user location (file path
> on
> >>  >>  NAS) addressing.
> >>  >
> >>  > Edward,
> >>  >
> >>  > At least I still don't understand why you have 20 million databases.
> >>  > My suspicion is that something is getting lost in the translation
> >>  > above, and neither you nor anyone on the list is benefitting from it.
> >>  > Could you please make a little more effort at explaining what exactly
> >>  > is your problem -- it well might be an "xy problem."
> >>  >
> >>  > If you really do have 20 million SQLite databases on a NAS, and you
> >>  > don't care about changing anything about the situation except for
> >>  > improving the speed of access from that NAS, well, since you will
> >>  > likely be accessing only one db at a time, perhaps you could copy
> that
> >>  > specific db to a local drive before opening it.
> >>  >
> >>  > In any case, something tells me that you will get better mileage if
> >>  > you construct a good question for the list with enough background
> >>  > detail.
> >>  >
> >>  >
> >>  >>
> >>  >>
> >>  >>  On Wed, Jan 7, 2009 at 1:31 PM, P Kishor 
> wrote:
> >>  >>  > On 1/6/09, Edward J. Yoon  wrote:
> >>  >>  >> > Do you have 20 million sqlite databases?
> >>  >>  >>
> >>  >>  >>
> >>  >>  >> Yes.
> >>  >>  >
> >>  >>  > Since all these databases are just files, you should stuff them
> into a
> >>  >>  > Postgres database, then write an application that extracts the
> >>  >>  > specific row from the pg database with 20 mil rows giving you
> your
> >>  >>  > specific SQLite database on which you can do your final db work.
> >>  >>  >
> >>  >>  > Seriously, you need to rethink 20 mil databases as they defeat
> the
> >>  >>  > very purpose of having a database.
> >>  >>  >
> >>  >>  >
> >>  >>  >>
> >>  >>  >>
> >>  >>  >>  On Wed, Jan 7, 2009 at 12:36 PM, Jim Dodgen 
> wrote:
> >>  >>  >>  > I think the question was about the structure of your data
> >>  >>  >>  >
> >>  >>  >>  > a sqlite database is a file and can contain many tables.
> tables can contain
> >>  >>  >>  > many rows.
> >>  >>  >>  >
> >>  >>  >>  > Do you have 20 million sqlite databases?
> >>  >>  >>  >
> >>  >>  >>  > This information can help people formulate an answer.
> >>  >>  >>  >
> >>  >>  >>  > On Tue, Jan 

Re: [sqlite] Exporting database to CSV file

2009-01-07 Thread Brandon, Nicholas (UK)

> 
> Is there a way to do this entirely through php?  I would like 
> to make a query on a table and write the results to a csv 
> file so that the user can have the option of downloading it.  
> Has anyone ever done something similar to this?
> 
> Thanks
> 

I believe there is a function like 'fputcsv' which may work for you.

However, I would test the multi-line output as mentioned in the earlier
email from Sylvain. I recall trying to use the function and that it
suffers from the same problem. You may find it easier just to code it
directly yourself.
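
If shelling out from PHP (e.g. via exec()) is acceptable, another route
is to let the sqlite3 command-line tool write the CSV itself; a minimal
sketch, assuming a database mydb.db and a table mytable (both
hypothetical names). Its csv mode should quote fields that contain
separators or newlines:

sqlite3 mydb.db <<'EOF'
.mode csv
.output results.csv
SELECT * FROM mytable;
EOF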


This email and any attachments are confidential to the intended
recipient and may also be privileged. If you are not the intended
recipient please delete it from your system and notify the sender.
You should not copy it or use it for any purpose nor disclose or
distribute its contents to any other person.


___
sqlite-users mailing list
sqlite-users@sqlite.org
http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users


Re: [sqlite] Exporting database to CSV file

2009-01-07 Thread Jonathon
Thanks for the replies... Actually, it seems my requirements have changed.

Is there a way to do this entirely through php?  I would like to make a
query on a table and write the results to a csv file so that the user can
have the option of downloading it.  Has anyone ever done something similar
to this?

Thanks

J

On Tue, Jan 6, 2009 at 8:02 AM, Alexey Pechnikov wrote:

> Hello!
>
> In the message of Tuesday 06 January 2009 15:33:42, Sylvain Pointeau
> wrote:
> > The import has the big limitation of not being able to import a file
> > when a field spans multiple lines. I don't know if this is the same
> > for the export...
>
> See virtualtext extension from spatialite project.
>
> Best regards, Alexey.
> ___
> sqlite-users mailing list
> sqlite-users@sqlite.org
> http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users
>
___
sqlite-users mailing list
sqlite-users@sqlite.org
http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users


Re: [sqlite] SQLite with NAS storage

2009-01-07 Thread Edward J. Yoon
> Is access to *one* of the 20 million different SQLite files getting
> progressively slower? How big is that specific SQLite file? Is that
> the one that is "huge"? I use SQLite over an NAS (at times), and never
> experience any noticeable slowdown. Is access to his NAS itself slow,
> perhaps not just via SQLite but just over the regular filesystem?

Each of the NAS_000 ~ N storages has approximately 300,000 files; the
average file size is a few MB (not over a GB). The broker servers (with
the SQLite library) are on the NAS, and the front-end web servers (more
than 200 servers) communicate with live broker servers after requesting
the location from the location-addressing system. There are
high-frequency read/write/delete operations.

The number of files/storages/clients keeps increasing little by little.

/Edward

On Wed, Jan 7, 2009 at 9:57 PM, P Kishor  wrote:
> On 1/7/09, Thomas Briggs  wrote:
>>I actually thought the original question was perfectly clear.  I
>>  thought the proposed solution (included in the original post) was
>>  perfectly logical too.  So what's all the fuss?
>
> The confusion, at least for me, arose from the following sentence in the OP --
>
> "I'm using SQLite, all data (very huge and 20 million files) "
>
> and the response to request for clarification of the above.
>
> - we know he is using SQLite
>
> - we know "it" is all data (although, I am not sure what else could
> SQLite be used for other than "data")
>
> - we know "it" is very huge
>
> - we know there are 20 million *files* involved
>
> No matter how I put together the above four pieces of information, I
> can't grok it.
>
> Is access to *one* of the 20 million different SQLite files getting
> progressively slower? How big is that specific SQLite file? Is that
> the one that is "huge"? I use SQLite over an NAS (at times), and never
> experience any noticeable slowdown. Is access to his NAS itself slow,
> perhaps not just via SQLite but just over the regular filesystem?
>
> So there... no fuss, just a desire to understand better what exactly
> is the problem.
>
>>
>>
>>  On Wed, Jan 7, 2009 at 7:28 AM, P Kishor  wrote:
>>  > On 1/6/09, Edward J. Yoon  wrote:
>>  >> Thanks,
>>  >>
>>  >>  In more detail, SQLite used for user-based applications (20 million is
>>  >>  the size of app-users). and MySQL used for user location (file path on
>>  >>  NAS) addressing.
>>  >
>>  > Edward,
>>  >
>>  > At least I still don't understand why you have 20 million databases.
>>  > My suspicion is that something is getting lost in the translation
>>  > above, and neither you nor anyone on the list is benefitting from it.
>>  > Could you please make a little more effort at explaining what exactly
>>  > is your problem -- it well might be an "xy problem."
>>  >
>>  > If you really do have 20 million SQLite databases on a NAS, and you
>>  > don't care about changing anything about the situation except for
>>  > improving the speed of access from that NAS, well, since you will
>>  > likely be accessing only one db at a time, perhaps you could copy that
>>  > specific db to a local drive before opening it.
>>  >
>>  > In any case, something tells me that you will get better mileage if
>>  > you construct a good question for the list with enough background
>>  > detail.
>>  >
>>  >
>>  >>
>>  >>
>>  >>  On Wed, Jan 7, 2009 at 1:31 PM, P Kishor  wrote:
>>  >>  > On 1/6/09, Edward J. Yoon  wrote:
>>  >>  >> > Do you have 20 million sqlite databases?
>>  >>  >>
>>  >>  >>
>>  >>  >> Yes.
>>  >>  >
>>  >>  > Since all these databases are just files, you should stuff them into a
>>  >>  > Postgres database, then write an application that extracts the
>>  >>  > specific row from the pg database with 20 mil rows giving you your
>>  >>  > specific SQLite database on which you can do your final db work.
>>  >>  >
>>  >>  > Seriously, you need to rethink 20 mil databases as they defeat the
>>  >>  > very purpose of having a database.
>>  >>  >
>>  >>  >
>>  >>  >>
>>  >>  >>
>>  >>  >>  On Wed, Jan 7, 2009 at 12:36 PM, Jim Dodgen  wrote:
>>  >>  >>  > I think the question was about the structure of your data
>>  >>  >>  >
>>  >>  >>  > a sqlite database is a file and can contain many tables. tables 
>> can contain
>>  >>  >>  > many rows.
>>  >>  >>  >
>>  >>  >>  > Do you have 20 million sqlite databases?
>>  >>  >>  >
>>  >>  >>  > This information can help people formulate an answer.
>>  >>  >>  >
>>  >>  >>  > On Tue, Jan 6, 2009 at 6:14 PM, Edward J. Yoon 
>> wrote:
>>  >>  >>  >
>>  >>  >>  >> Thanks for your reply.
>>  >>  >>  >>
>>  >>  >>  >> > That's a lot of files. Or did you mean rows?
>>  >>  >>  >> > Are you sure? There can be many other reasons.
>>  >>  >>  >>
>>  >>  >>  >> There is a lot of files. So, I don't know exactly why at this 
>> time,
>>  >>  >>  >> But thought network latency can't be 

Re: [sqlite] Deleting duplicate records

2009-01-07 Thread Igor Tandetnik
"Craig Smith"  wrote in
message news:5d97aa0a-73c0-4b2c-83e7-dd7cef798...@macscripter.net
> Alexey, thank you very much for your idea to put a CONSTRAINT on the
> table in the first place, that is the trick for a long term solution.
> Here is how I have put it together:
>
> CREATE TABLE talks (member_id INTEGER, date DATE, CONSTRAINT
> constraint_ignore_dup UNIQUE (member_id, date) ON CONFLICT IGNORE);
>
> I believe that I understand this statement, except for the term
> constraint_ignore_dup.  Is that a variable name?  Could it be pretty
> much anything I want, and if so, what is its purpose?

It's optional. You can write simply

CREATE TABLE talks (member_id INTEGER, date DATE,
UNIQUE (member_id, date) ON CONFLICT IGNORE);

In some other DBMSes, a constraint name may be used in a DROP CONSTRAINT 
statement. But SQLite doesn't support that, so naming a constraint is 
pointless. SQLite supports the CONSTRAINT keyword only to ease porting 
from other systems.
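
With the table created as above, a quick sketch (hypothetical data)
shows the effect:

INSERT INTO talks VALUES (1, '2009-01-07');
INSERT INTO talks VALUES (1, '2009-01-07');  -- duplicate, silently ignored
SELECT COUNT(*) FROM talks;                  -- returns 1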

Igor Tandetnik



___
sqlite-users mailing list
sqlite-users@sqlite.org
http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users


Re: [sqlite] SQLite with NAS storage

2009-01-07 Thread Thomas Briggs
   I actually thought the original question was perfectly clear.  I
thought the proposed solution (included in the original post) was
perfectly logical too.  So what's all the fuss?

On Wed, Jan 7, 2009 at 7:28 AM, P Kishor  wrote:
> On 1/6/09, Edward J. Yoon  wrote:
>> Thanks,
>>
>>  In more detail, SQLite used for user-based applications (20 million is
>>  the size of app-users). and MySQL used for user location (file path on
>>  NAS) addressing.
>
> Edward,
>
> At least I still don't understand why you have 20 million databases.
> My suspicion is that something is getting lost in the translation
> above, and neither you nor anyone on the list is benefitting from it.
> Could you please make a little more effort at explaining what exactly
> is your problem -- it well might be an "xy problem."
>
> If you really do have 20 million SQLite databases on a NAS, and you
> don't care about changing anything about the situation except for
> improving the speed of access from that NAS, well, since you will
> likely be accessing only one db at a time, perhaps you could copy that
> specific db to a local drive before opening it.
>
> In any case, something tells me that you will get better mileage if
> you construct a good question for the list with enough background
> detail.
>
>
>>
>>
>>  On Wed, Jan 7, 2009 at 1:31 PM, P Kishor  wrote:
>>  > On 1/6/09, Edward J. Yoon  wrote:
>>  >> > Do you have 20 million sqlite databases?
>>  >>
>>  >>
>>  >> Yes.
>>  >
>>  > Since all these databases are just files, you should stuff them into a
>>  > Postgres database, then write an application that extracts the
>>  > specific row from the pg database with 20 mil rows giving you your
>>  > specific SQLite database on which you can do your final db work.
>>  >
>>  > Seriously, you need to rethink 20 mil databases as they defeat the
>>  > very purpose of having a database.
>>  >
>>  >
>>  >>
>>  >>
>>  >>  On Wed, Jan 7, 2009 at 12:36 PM, Jim Dodgen  wrote:
>>  >>  > I think the question was about the structure of your data
>>  >>  >
>>  >>  > a sqlite database is a file and can contain many tables. tables can 
>> contain
>>  >>  > many rows.
>>  >>  >
>>  >>  > Do you have 20 million sqlite databases?
>>  >>  >
>>  >>  > This information can help people formulate an answer.
>>  >>  >
>>  >>  > On Tue, Jan 6, 2009 at 6:14 PM, Edward J. Yoon 
>> wrote:
>>  >>  >
>>  >>  >> Thanks for your reply.
>>  >>  >>
>>  >>  >> > That's a lot of files. Or did you mean rows?
>>  >>  >> > Are you sure? There can be many other reasons.
>>  >>  >>
>>  >>  >> There is a lot of files. So, I don't know exactly why at this time,
>>  >>  >> But thought network latency can't be denied.
>>  >>  >>
>>  >>  >> /Edward
>>  >>  >>
>>  >>  >> On Wed, Jan 7, 2009 at 4:07 AM, Kees Nuyt  wrote:
>>  >>  >> > On Tue, 6 Jan 2009 11:23:29 +0900, "Edward J. Yoon"
>>  >>  >> >  wrote in General Discussion of
>>  >>  >> > SQLite Database :
>>  >>  >> >
>>  >>  >> >> Hi, I'm newbie in here.
>>  >>  >> >>
>>  >>  >> >> I'm using SQLite, all data (very huge and 20 million files)
>>  >>  >> >
>>  >>  >> > That's a lot of files. Or did you mean rows?
>>  >>  >> >
>>  >>  >> >> stored on NAS storage. Lately my system has been getting
>>  >>  >> >> progressively slower. Network cost seems too large.
>>  >>  >> >
>>  >>  >> > Are you sure? There can be many other reasons.
>>  >>  >> >
>>  >>  >> >> To improve its performance, I'm think about local lock file
>>  >>  >> >> instead of NAS as describe below.
>>  >>  >> >>
>>  >>  >> >> char str[1024] = "/tmp";
>>  >>  >> >> strcat(str, lockfile);
>>  >>  >> >> sprintf(str, "%s-lock", zFilename);
>>  >>  >> >>
>>  >>  >> >> But, I'm not sure this is good idea.
>>  >>  >> >> I would love to hear your advice!!
>>  >>  >> >
>>  >>  >> > I think that's not the right way to start.
>>  >>  >> > This is what I would do, more or less in
>>  >>  >> > this order:
>>  >>  >> >
>>  >>  >> > 1- Optimize the physical database properties
>>  >>  >> >   PRAGMA page_size (read the docs first!)
>>  >>  >> >   PRAGMA [default_]cache_size
>>  >>  >> >
>>  >>  >> > 2- Optimize SQL: use transactions
>>  >>  >> >   where appropriate.
>>  >>  >> >
>>  >>  >> > 3- Optimize your code. Don't close database
>>  >>  >> >   connections if they can be reused.
>>  >>  >> >
>>  >>  >> > 4- Optimize the schema: create indexes that
>>  >>  >> >   help, leave out indexes that don't help.
>>  >>  >> >
>>  >>  >> > 5- Investigate the communication to/from NAS.
>>  >>  >> >   Do all NIC's train at the highest possible speed?
>>  >>  >> >   Some limiting switch or router in between?
>>  >>  >> >   Do you allow jumbo frames?
>>  >>  >> >
>>  >>  >> > 6- Consider SAN/iSCSI, direct attached storage.
>>  >>  >> >
>>  >>  >> > 7- Consider changing SQLite code.
>>  >>  >> >
>>  >>  >> 

Re: [sqlite] SQLite with NAS storage

2009-01-07 Thread P Kishor
On 1/6/09, Edward J. Yoon  wrote:
> Thanks,
>
>  In more detail, SQLite used for user-based applications (20 million is
>  the size of app-users). and MySQL used for user location (file path on
>  NAS) addressing.

Edward,

At least I still don't understand why you have 20 million databases.
My suspicion is that something is getting lost in the translation
above, and neither you nor anyone on the list is benefitting from it.
Could you please make a little more effort at explaining what exactly
is your problem -- it well might be an "xy problem."

If you really do have 20 million SQLite databases on a NAS, and you
don't care about changing anything about the situation except for
improving the speed of access from that NAS, well, since you will
likely be accessing only one db at a time, perhaps you could copy that
specific db to a local drive before opening it.

In any case, something tells me that you will get better mileage if
you construct a good question for the list with enough background
detail.


>
>
>  On Wed, Jan 7, 2009 at 1:31 PM, P Kishor  wrote:
>  > On 1/6/09, Edward J. Yoon  wrote:
>  >> > Do you have 20 million sqlite databases?
>  >>
>  >>
>  >> Yes.
>  >
>  > Since all these databases are just files, you should stuff them into a
>  > Postgres database, then write an application that extracts the
>  > specific row from the pg database with 20 mil rows giving you your
>  > specific SQLite database on which you can do your final db work.
>  >
>  > Seriously, you need to rethink 20 mil databases as they defeat the
>  > very purpose of having a database.
>  >
>  >
>  >>
>  >>
>  >>  On Wed, Jan 7, 2009 at 12:36 PM, Jim Dodgen  wrote:
>  >>  > I think the question was about the structure of your data
>  >>  >
>  >>  > a sqlite database is a file and can contain many tables. tables can 
> contain
>  >>  > many rows.
>  >>  >
>  >>  > Do you have 20 million sqlite databases?
>  >>  >
>  >>  > This information can help people formulate an answer.
>  >>  >
>  >>  > On Tue, Jan 6, 2009 at 6:14 PM, Edward J. Yoon 
> wrote:
>  >>  >
>  >>  >> Thanks for your reply.
>  >>  >>
>  >>  >> > That's a lot of files. Or did you mean rows?
>  >>  >> > Are you sure? There can be many other reasons.
>  >>  >>
>  >>  >> There is a lot of files. So, I don't know exactly why at this time,
>  >>  >> But thought network latency can't be denied.
>  >>  >>
>  >>  >> /Edward
>  >>  >>
>  >>  >> On Wed, Jan 7, 2009 at 4:07 AM, Kees Nuyt  wrote:
>  >>  >> > On Tue, 6 Jan 2009 11:23:29 +0900, "Edward J. Yoon"
>  >>  >> >  wrote in General Discussion of
>  >>  >> > SQLite Database :
>  >>  >> >
>  >>  >> >> Hi, I'm newbie in here.
>  >>  >> >>
>  >>  >> >> I'm using SQLite, all data (very huge and 20 million files)
>  >>  >> >
>  >>  >> > That's a lot of files. Or did you mean rows?
>  >>  >> >
>  >>  >> >> stored on NAS storage. Lately my system has been getting
>  >>  >> >> progressively slower. Network cost seems too large.
>  >>  >> >
>  >>  >> > Are you sure? There can be many other reasons.
>  >>  >> >
>  >>  >> >> To improve its performance, I'm think about local lock file
>  >>  >> >> instead of NAS as describe below.
>  >>  >> >>
>  >>  >> >> char str[1024] = "/tmp";
>  >>  >> >> strcat(str, lockfile);
>  >>  >> >> sprintf(str, "%s-lock", zFilename);
>  >>  >> >>
>  >>  >> >> But, I'm not sure this is good idea.
>  >>  >> >> I would love to hear your advice!!
>  >>  >> >
>  >>  >> > I think that's not the right way to start.
>  >>  >> > This is what I would do, more or less in
>  >>  >> > this order:
>  >>  >> >
>  >>  >> > 1- Optimize the physical database properties
>  >>  >> >   PRAGMA page_size (read the docs first!)
>  >>  >> >   PRAGMA [default_]cache_size
>  >>  >> >
>  >>  >> > 2- Optimize SQL: use transactions
>  >>  >> >   where appropriate.
>  >>  >> >
>  >>  >> > 3- Optimize your code. Don't close database
>  >>  >> >   connections if they can be reused.
>  >>  >> >
>  >>  >> > 4- Optimize the schema: create indexes that
>  >>  >> >   help, leave out indexes that don't help.
>  >>  >> >
>  >>  >> > 5- Investigate the communication to/from NAS.
>  >>  >> >   Do all NIC's train at the highest possible speed?
>  >>  >> >   Some limiting switch or router in between?
>  >>  >> >   Do you allow jumbo frames?
>  >>  >> >
>  >>  >> > 6- Consider SAN/iSCSI, direct attached storage.
>  >>  >> >
>  >>  >> > 7- Consider changing SQLite code.
>  >>  >> >
>  >>  >> >
>  >>  >> > Without more details on your use case, people will only get
>  >>  >> > general advice like the above.
>  >>  >> >
>  >>  >> >>Thanks.
>  >>  >> >
>  >>  >> > Hope this helps.
>  >>  >> > --
>  >>  >> >  (  Kees Nuyt
>  >>  >> >  )
>  >>  >> > c[_]
>  >>  >> > ___
>  >>  >> > sqlite-users mailing list
>  >>  >> > 

Re: [sqlite] Deleting duplicate records

2009-01-07 Thread Alexey Pechnikov
Hello!

In the message of Wednesday 07 January 2009 08:56:02, Craig Smith wrote:
> CREATE TABLE talks (member_id INTEGER, date DATE, CONSTRAINT  
> constraint_ignore_dup UNIQUE (member_id, date) ON CONFLICT IGNORE);
>
> I believe that I understand this statement, except for the term  
> constraint_ignore_dup.  Is that a variable name?  Could it be pretty  
> much anything I want, and if so, what is its purpose?

constraint_ignore_dup is the constraint name. You can use any string,
for example, constraint1 or my_constraint.

Best regards, Alexey.
___
sqlite-users mailing list
sqlite-users@sqlite.org
http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users