Re: [sqlite] Upgrade sqlite 3.3.4 to sqlite 3.6.7

2009-01-13 Thread Edward J. Yoon
Thanks for your helpful information!

On Wed, Jan 14, 2009 at 12:48 AM, Griggs, Donald
<donald.gri...@allscripts.com> wrote:
> Subject: [sqlite] Upgrade sqlite 3.3.4 to sqlite 3.6.7
>
> Hi,
>
> I am considering upgrading SQLite 3.3.4 to 3.6.7, and I wonder whether
> there are any file-format changes (or problems).
> ===
>
> Upgrading from version 2 to version 3 (understandably) required a dump
> and restore, but upgrading from one version 3 release to a newer one does not.
>
> Following is from page: http://sqlite.org/different.html
>
> Stable Cross-Platform Database File
>
> The SQLite file format is cross-platform. A database file written on one
> machine can be copied to and used on a different machine with a
> different architecture. Big-endian or little-endian, 32-bit or 64-bit
> does not matter. All machines use the same file format. Furthermore, the
> developers have pledged to keep the file format stable and backwards
> compatible, so newer versions of SQLite can read and write older
> database files.
>
> Most other SQL database engines require you to dump and restore the
> database when moving from one platform to another and often when
> upgrading to a newer version of the software.
>
>
>
>
> -----Original Message-----
> From: sqlite-users-boun...@sqlite.org
> [mailto:sqlite-users-boun...@sqlite.org] On Behalf Of Edward J. Yoon
> Sent: Tuesday, January 13, 2009 2:45 AM
> To: General Discussion of SQLite Database
> Subject: [sqlite] Upgrade sqlite 3.3.4 to sqlite 3.6.7
>
> Hi,
>
> I am considering upgrading SQLite 3.3.4 to 3.6.7, and I wonder whether
> there are any file-format changes (or problems).
>
> --
> Best Regards, Edward J. Yoon @ NHN, corp.
> edwardy...@apache.org
> http://blog.udanax.org
> ___
> sqlite-users mailing list
> sqlite-users@sqlite.org
> http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users
>



-- 
Best Regards, Edward J. Yoon @ NHN, corp.
edwardy...@apache.org
http://blog.udanax.org


[sqlite] Upgrade sqlite 3.3.4 to sqlite 3.6.7

2009-01-12 Thread Edward J. Yoon
Hi,

I am considering upgrading SQLite 3.3.4 to 3.6.7, and I wonder whether
there are any file-format changes (or problems).

-- 
Best Regards, Edward J. Yoon @ NHN, corp.
edwardy...@apache.org
http://blog.udanax.org


Re: [sqlite] SQLite with NAS storage

2009-01-07 Thread Edward J. Yoon
>>> Each NAS (NAS_000 through NAS_N) holds approximately 300,000
>>> files; the average file size is a few MB (never over a GB).
>>> The broker servers (with the SQLite library) are on the
>>> NAS
>
> It's not clear how many broker servers there are.
> One per NAS?

80 to 100 servers, generally one per NAS, but each can connect to any
NAS (so there are many possible combinations).

>>> and the front-end web servers (more than 200 servers)
>>> communicate with live broker servers after requesting a
>>> location from the location-addressing system.
>
> , which is implemented in MySQL, right?

Yes.

>>> There are high-frequency read/write/delete operations.
>
> Let's say a few MB is 50 MB, so 300,000 files on one NAS
> would contain 5E7 * 3E5 = 15E12 = 15 TB
>
> There would have to be 20E6 / 3E5 = 67 NAS installations,
> all connected to 200 webservers via broker servers.
>
> I'm afraid the chosen architecture isn't scalable, and code
> tweaking in sqlite will not help much.
>
> Opening and closing one of 20,000,000 files for every
> logical transaction is not suitable for such a scale. An
> operation of that size should be able to construct a better
> solution.
>

Exactly, it may not be suitable.

For now, we are focused on short-term fixes. Once I solve them, I'll
report my experience back to this community.

All of your advice has been really helpful.

Thanks,
Edward

On Thu, Jan 8, 2009 at 6:04 AM, Kees Nuyt <k.n...@zonnet.nl> wrote:
> On Wed, 7 Jan 2009 10:17:06 -0800, "Jim Dodgen"
> <j...@dodgen.us> wrote in General Discussion of SQLite
> Database <sqlite-users@sqlite.org>:
>
>
>> I'm a little worried about how long it takes to open one
>> of 20,000,000 files in a directory on the NAS?
>
> I agree. It would require a very cleverly constructed
> directory tree, and very short (sub)dir names to reduce the
> effort to locate a file.
>
> "Edward J. Yoon" wrote:
>
>>> Each NAS (NAS_000 through NAS_N) holds approximately 300,000
>>> files; the average file size is a few MB (never over a GB).
>>> The broker servers (with the SQLite library) are on the
>>> NAS
>
> It's not clear how many broker servers there are.
> One per NAS?
>
>>> and the front-end web servers (more than 200 servers)
>>> communicate with live broker servers after requesting a
>>> location from the location-addressing system.
>
> , which is implemented in MySQL, right?
>
>>> There are high-frequency read/write/delete operations.
>
> Let's say a few MB is 50 MB, so 300,000 files on one NAS
> would contain 5E7 * 3E5 = 15E12 = 15 TB
>
> There would have to be 20E6 / 3E5 = 67 NAS installations,
> all connected to 200 webservers via broker servers.
>
> I'm afraid the chosen architecture isn't scalable, and code
> tweaking in sqlite will not help much.
>
> Opening and closing one of 20,000,000 files for every
> logical transaction is not suitable for such a scale. An
> operation of that size should be able to construct a better
> solution.
>
> Or we still don't understand what's really going on.
> --
>  (  Kees Nuyt
>  )
> c[_]
>



-- 
Best Regards, Edward J. Yoon @ NHN, corp.
edwardy...@apache.org
http://blog.udanax.org


Re: [sqlite] SQLite with NAS storage

2009-01-07 Thread Edward J. Yoon
> Is access to *one* of the 20 million different SQLite files getting
> progressively slower? How big is that specific SQLite file? Is that
> the one that is "huge"? I use SQLite over an NAS (at times), and never
> experience any noticeable slowdown. Is access to his NAS itself slow,
> perhaps not just via SQLite but just over the regular filesystem?

Each NAS (NAS_000 through NAS_N) holds approximately 300,000 files;
the average file size is a few MB (never over a GB). The broker
servers (with the SQLite library) are on the NAS, and the front-end
web servers (more than 200 of them) communicate with live broker
servers after requesting a location from the location-addressing
system. There are high-frequency read/write/delete operations.

The number of files/storages/clients keeps increasing little by little.

/Edward

On Wed, Jan 7, 2009 at 9:57 PM, P Kishor <punk.k...@gmail.com> wrote:
> On 1/7/09, Thomas Briggs <t...@briggs.cx> wrote:
>>I actually thought the original question was perfectly clear.  I
>>  thought the proposed solution (included in the original post) was
>>  perfectly logical too.  So what's all the fuss?
>
> The confusion, at least for me, arose from the following sentence in the OP --
>
> "I'm using SQLite, all data (very huge and 20 million files) "
>
> and the response to request for clarification of the above.
>
> - we know he is using SQLite
>
> - we know "it" is all data (although, I am not sure what else could
> SQLite be used for other than "data")
>
> - we know "it" is very huge
>
> - we know there are 20 million *files* involved
>
> No matter how I put together the above four pieces of information, I
> can't grok it.
>
> Is access to *one* of the 20 million different SQLite files getting
> progressively slower? How big is that specific SQLite file? Is that
> the one that is "huge"? I use SQLite over an NAS (at times), and never
> experience any noticeable slowdown. Is access to his NAS itself slow,
> perhaps not just via SQLite but just over the regular filesystem?
>
> So there... no fuss, just a desire to understand better what exactly
> is the problem.
>
>>
>>
>>  On Wed, Jan 7, 2009 at 7:28 AM, P Kishor <punk.k...@gmail.com> wrote:
>>  > On 1/6/09, Edward J. Yoon <edwardy...@apache.org> wrote:
>>  >> Thanks,
>>  >>
>>  >>  In more detail, SQLite is used for user-based applications (20
>>  >>  million is the number of app users), and MySQL is used for
>>  >>  addressing user locations (file paths on the NAS).
>>  >
>>  > Edward,
>>  >
>>  > At least I still don't understand why you have 20 million databases.
>>  > My suspicion is that something is getting lost in the translation
>>  > above, and neither you nor anyone on the list is benefitting from it.
>>  > Could you please make a little more effort at explaining what exactly
>>  > is your problem -- it well might be an "xy problem."
>>  >
>>  > If you really do have 20 million SQLite databases on a NAS, and you
>>  > don't care about changing anything about the situation except for
>>  > improving the speed of access from that NAS, well, since you will
>>  > likely be accessing only one db at a time, perhaps you could copy that
>>  > specific db to a local drive before opening it.
>>  >
>>  > In any case, something tells me that you will get better mileage if
>>  > you construct a good question for the list with enough background
>>  > detail.
>>  >
>>  >
>>  >>
>>  >>
>>  >>  On Wed, Jan 7, 2009 at 1:31 PM, P Kishor <punk.k...@gmail.com> wrote:
>>  >>  > On 1/6/09, Edward J. Yoon <edwardy...@apache.org> wrote:
>>  >>  >> > Do you have 20 million sqlite databases?
>>  >>  >>
>>  >>  >>
>>  >>  >> Yes.
>>  >>  >
>>  >>  > Since all these databases are just files, you should stuff them into a
>>  >>  > Postgres database, then write an application that extracts the
>>  >>  > specific row from the pg database with 20 mil rows giving you your
>>  >>  > specific SQLite database on which you can do your final db work.
>>  >>  >
>>  >>  > Seriously, you need to rethink 20 mil databases as they defeat the
>>  >>  > very purpose of having a database.
>>  >>  >
>>  >>  >
>>  >>  >>
>>  >>  >>
>>  >>  >>  On Wed, Jan 7, 2009 at 12:36 PM, Jim Dodgen <j...@

Re: [sqlite] SQLite with NAS storage

2009-01-06 Thread Edward J. Yoon
Thanks,

In more detail, SQLite is used for user-based applications (20 million
is the number of app users), and MySQL is used for addressing user
locations (file paths on the NAS).

On Wed, Jan 7, 2009 at 1:31 PM, P Kishor <punk.k...@gmail.com> wrote:
> On 1/6/09, Edward J. Yoon <edwardy...@apache.org> wrote:
>> > Do you have 20 million sqlite databases?
>>
>>
>> Yes.
>
> Since all these databases are just files, you should stuff them into a
> Postgres database, then write an application that extracts the
> specific row from the pg database with 20 mil rows giving you your
> specific SQLite database on which you can do your final db work.
>
> Seriously, you need to rethink 20 mil databases as they defeat the
> very purpose of having a database.
>
>
>>
>>
>>  On Wed, Jan 7, 2009 at 12:36 PM, Jim Dodgen <j...@dodgen.us> wrote:
>>  > I think the question was about the structure of your data
>>  >
>>  > a SQLite database is a file and can contain many tables. Tables can
>>  > contain many rows.
>>  >
>>  > Do you have 20 million sqlite databases?
>>  >
>>  > This information can help people formulate an answer.
>>  >
>>  > On Tue, Jan 6, 2009 at 6:14 PM, Edward J. Yoon 
>> <edwardy...@apache.org>wrote:
>>  >
>>  >> Thanks for your reply.
>>  >>
>>  >> > That's a lot of files. Or did you mean rows?
>>  >> > Are you sure? There can be many other reasons.
>>  >>
>>  >> There are a lot of files. I don't know exactly why yet, but I
>>  >> suspect network latency can't be ruled out.
>>  >>
>>  >> /Edward
>>  >>
>>  >> On Wed, Jan 7, 2009 at 4:07 AM, Kees Nuyt <k.n...@zonnet.nl> wrote:
>>  >> > On Tue, 6 Jan 2009 11:23:29 +0900, "Edward J. Yoon"
>>  >> > <edwardy...@apache.org> wrote in General Discussion of
>>  >> > SQLite Database <sqlite-users@sqlite.org>:
>>  >> >
>>  >> >> Hi, I'm a newbie here.
>>  >> >>
>>  >> >> I'm using SQLite; all data (very large, about 20 million files) is
>>  >> >
>>  >> > That's a lot of files. Or did you mean rows?
>>  >> >
>>  >> >> stored on NAS storage. Lately my system has been getting
>>  >> >> progressively slower; the network cost seems too high.
>>  >> >
>>  >> > Are you sure? There can be many other reasons.
>>  >> >
>>  >> >> To improve performance, I'm thinking about a local lock file
>>  >> >> instead of one on the NAS, as described below.
>>  >> >>
>>  >> >> char str[1024];
>>  >> >> snprintf(str, sizeof(str), "/tmp/%s-lock", zFilename);
>>  >> >>
>>  >> >> But I'm not sure this is a good idea.
>>  >> >> I would love to hear your advice!
>>  >> >
>>  >> > I think that's not the right way to start.
>>  >> > This is what I would do, more or less in
>>  >> > this order:
>>  >> >
>>  >> > 1- Optimize the physical database properties
>>  >> >   PRAGMA page_size (read the docs first!)
>>  >> >   PRAGMA [default_]cache_size
>>  >> >
>>  >> > 2- Optimize SQL: use transactions
>>  >> >   where appropriate.
>>  >> >
>>  >> > 3- Optimize your code. Don't close database
>>  >> >   connections if they can be reused.
>>  >> >
>>  >> > 4- Optimize the schema: create indexes that
>>  >> >   help, leave out indexes that don't help.
>>  >> >
>>  >> > 5- Investigate the communication to/from NAS.
>>  >> >   Do all NICs train at the highest possible speed?
>>  >> >   Some limiting switch or router in between?
>>  >> >   Do you allow jumbo frames?
>>  >> >
>>  >> > 6- Consider SAN/iSCSI, direct attached storage.
>>  >> >
>>  >> > 7- Consider changing SQLite code.
>>  >> >
>>  >> >
>>  >> > Without more details on your use case, people will only get
>>  >> > general advice like the above.
>>  >> >
>>  >> >>Thanks.
>>  >> >
>>  >> > Hope this helps.

Re: [sqlite] SQLite with NAS storage

2009-01-06 Thread Edward J. Yoon
> Do you have 20 million sqlite databases?

Yes.

On Wed, Jan 7, 2009 at 12:36 PM, Jim Dodgen <j...@dodgen.us> wrote:
> I think the question was about the structure of your data
>
> a SQLite database is a file and can contain many tables. Tables can contain
> many rows.
>
> Do you have 20 million sqlite databases?
>
> This information can help people formulate an answer.
>
> On Tue, Jan 6, 2009 at 6:14 PM, Edward J. Yoon <edwardy...@apache.org>wrote:
>
>> Thanks for your reply.
>>
>> > That's a lot of files. Or did you mean rows?
>> > Are you sure? There can be many other reasons.
>>
>> There are a lot of files. I don't know exactly why yet, but I
>> suspect network latency can't be ruled out.
>>
>> /Edward
>>
>> On Wed, Jan 7, 2009 at 4:07 AM, Kees Nuyt <k.n...@zonnet.nl> wrote:
>> > On Tue, 6 Jan 2009 11:23:29 +0900, "Edward J. Yoon"
>> > <edwardy...@apache.org> wrote in General Discussion of
>> > SQLite Database <sqlite-users@sqlite.org>:
>> >
>> >> Hi, I'm a newbie here.
>> >>
>> >> I'm using SQLite; all data (very large, about 20 million files) is
>> >
>> > That's a lot of files. Or did you mean rows?
>> >
>> >> stored on NAS storage. Lately my system has been getting
>> >> progressively slower; the network cost seems too high.
>> >
>> > Are you sure? There can be many other reasons.
>> >
>> >> To improve performance, I'm thinking about a local lock file
>> >> instead of one on the NAS, as described below.
>> >>
>> >> char str[1024];
>> >> snprintf(str, sizeof(str), "/tmp/%s-lock", zFilename);
>> >>
>> >> But I'm not sure this is a good idea.
>> >> I would love to hear your advice!
>> >
>> > I think that's not the right way to start.
>> > This is what I would do, more or less in
>> > this order:
>> >
>> > 1- Optimize the physical database properties
>> >   PRAGMA page_size (read the docs first!)
>> >   PRAGMA [default_]cache_size
>> >
>> > 2- Optimize SQL: use transactions
>> >   where appropriate.
>> >
>> > 3- Optimize your code. Don't close database
>> >   connections if they can be reused.
>> >
>> > 4- Optimize the schema: create indexes that
>> >   help, leave out indexes that don't help.
>> >
>> > 5- Investigate the communication to/from NAS.
>> >   Do all NICs train at the highest possible speed?
>> >   Some limiting switch or router in between?
>> >   Do you allow jumbo frames?
>> >
>> > 6- Consider SAN/iSCSI, direct attached storage.
>> >
>> > 7- Consider changing SQLite code.
>> >
>> >
>> > Without more details on your use case, people will only get
>> > general advice like the above.
>> >
>> >>Thanks.
>> >
>> > Hope this helps.
>> > --
>> >  (  Kees Nuyt
>> >  )
>> > c[_]
>> >
>>
>>
>>
>> --
>> Best Regards, Edward J. Yoon @ NHN, corp.
>> edwardy...@apache.org
>> http://blog.udanax.org
>>
>
>
>
> --
> Jim Dodgen
> j...@dodgen.us
>



-- 
Best Regards, Edward J. Yoon @ NHN, corp.
edwardy...@apache.org
http://blog.udanax.org


Re: [sqlite] SQLite with NAS storage

2009-01-06 Thread Edward J. Yoon
Thanks for your reply.

> That's a lot of files. Or did you mean rows?
> Are you sure? There can be many other reasons.

There are a lot of files. I don't know exactly why yet, but I suspect
network latency can't be ruled out.

/Edward

On Wed, Jan 7, 2009 at 4:07 AM, Kees Nuyt <k.n...@zonnet.nl> wrote:
> On Tue, 6 Jan 2009 11:23:29 +0900, "Edward J. Yoon"
> <edwardy...@apache.org> wrote in General Discussion of
> SQLite Database <sqlite-users@sqlite.org>:
>
>> Hi, I'm a newbie here.
>>
>> I'm using SQLite; all data (very large, about 20 million files) is
>
> That's a lot of files. Or did you mean rows?
>
>> stored on NAS storage. Lately my system has been getting
>> progressively slower; the network cost seems too high.
>
> Are you sure? There can be many other reasons.
>
>> To improve performance, I'm thinking about a local lock file
>> instead of one on the NAS, as described below.
>>
>> char str[1024];
>> snprintf(str, sizeof(str), "/tmp/%s-lock", zFilename);
>>
>> But I'm not sure this is a good idea.
>> I would love to hear your advice!
>
> I think that's not the right way to start.
> This is what I would do, more or less in
> this order:
>
> 1- Optimize the physical database properties
>   PRAGMA page_size (read the docs first!)
>   PRAGMA [default_]cache_size
>
> 2- Optimize SQL: use transactions
>   where appropriate.
>
> 3- Optimize your code. Don't close database
>   connections if they can be reused.
>
> 4- Optimize the schema: create indexes that
>   help, leave out indexes that don't help.
>
> 5- Investigate the communication to/from NAS.
>   Do all NICs train at the highest possible speed?
>   Some limiting switch or router in between?
>   Do you allow jumbo frames?
>
> 6- Consider SAN/iSCSI, direct attached storage.
>
> 7- Consider changing SQLite code.
>
>
> Without more details on your use case, people will only get
> general advice like the above.
>
>>Thanks.
>
> Hope this helps.
> --
>  (  Kees Nuyt
>  )
> c[_]
>



-- 
Best Regards, Edward J. Yoon @ NHN, corp.
edwardy...@apache.org
http://blog.udanax.org


Re: [sqlite] SQLite with NAS storage

2009-01-05 Thread Edward J. Yoon
Again,

We have a lot of read/write operations, so I suspect network latency
is the problem, and I am thinking about a lock-management system.

On Tue, Jan 6, 2009 at 11:23 AM, Edward J. Yoon <edwardy...@apache.org> wrote:
> Hi, I'm a newbie here.
>
> I'm using SQLite; all data (very large, about 20 million files) is
> stored on NAS storage. Lately my system has been getting progressively
> slower, and the network cost seems too high.
>
> To improve performance, I'm thinking about creating the lock file
> locally instead of on the NAS, as sketched below.
>
> char str[1024];
> snprintf(str, sizeof(str), "/tmp/%s-lock", zFilename);
>
> But I'm not sure this is a good idea. I would love to hear your advice!
> Thanks.
> --
> Best Regards, Edward J. Yoon @ NHN, corp.
> edwardy...@apache.org
> http://blog.udanax.org
>



-- 
Best Regards, Edward J. Yoon @ NHN, corp.
edwardy...@apache.org
http://blog.udanax.org


[sqlite] SQLite with NAS storage

2009-01-05 Thread Edward J. Yoon
Hi, I'm a newbie here.

I'm using SQLite; all data (very large, about 20 million files) is
stored on NAS storage. Lately my system has been getting progressively
slower, and the network cost seems too high.

To improve performance, I'm thinking about creating the lock file
locally instead of on the NAS, as sketched below.

char str[1024];
snprintf(str, sizeof(str), "/tmp/%s-lock", zFilename);

But I'm not sure this is a good idea. I would love to hear your advice!
Thanks.
-- 
Best Regards, Edward J. Yoon @ NHN, corp.
edwardy...@apache.org
http://blog.udanax.org