Re: [sqlite] [SQLite improve productivity]

2007-08-06 Thread John Stanton

Igor Mironchick wrote:

Hi.

How can I improve productivity of this code:

a_messenger_t::data_buff_t --> std::deque< some_struct >

// 



   char * errors = 0;

   for( a_messenger_t::data_buff_t::const_iterator it = msg.m_buff.begin(),
   last = msg.m_buff.end(); it != last; ++it )
   {
   // Converting long to std::string...
   std::string sec( itos( it->m_time.sec() ) );
   std::string usec( itos( it->m_time.usec() ) );

   // Columns in 'data' table defined like (INTEGER, INTEGER, INTEGER, TEXT)
   char * sql = sqlite3_mprintf( "INSERT INTO data VALUES( '%q', '%q', '%q', '%q' )",
   m_sources_map[ it->m_source ].m_sid.c_str(),
   sec.c_str(), usec.c_str(), it->m_value.c_str() );

   // m_db --> sqlite3*
   int ret = sqlite3_exec( m_db, sql, 0, 0, &errors );

   if( ret != SQLITE_OK )
   {
   ACE_DEBUG( ( LM_INFO, ACE_TEXT( "[%D] %M -- %C\n" ), errors ) );
   sqlite3_free( errors );
   }

   sqlite3_free( sql );
   }

Any ideas? This method is very slow - about 1 kB of new data per second in my
DB.



Use prepared statements and transactions.

In outline:

  sqlite3_open()
  sqlite3_prepare()
  BEGIN
  while data remains
      sqlite3_bind...
      sqlite3_step()
      sqlite3_reset()
  repeat
  COMMIT
  sqlite3_finalize()
  sqlite3_close()

If you have a large data set you might want to add another loop to limit 
 each transaction to a few thousand steps.


You might jump up to 10,000 inserts per sec or more.

-
To unsubscribe, send email to [EMAIL PROTECTED]
-



Re: [sqlite] [SQLite improve productivity]

2007-08-06 Thread Christian Smith

Igor Mironchick uttered:

Thanks, very helpful reply. One more question: is it necessary to issue "END"
after "BEGIN", or is "COMMIT" enough?




You can use "COMMIT", and probably should, as it is more descriptive of
what is happening. Check the docs for transaction commands:

http://www.sqlite.org/lang_transaction.html


Christian

--
/"\
\ /ASCII RIBBON CAMPAIGN - AGAINST HTML MAIL
 X   - AGAINST MS ATTACHMENTS
/ \




Re: [sqlite] [SQLite improve productivity]

2007-08-05 Thread Igor Mironchick
Thanks, very helpful reply. One more question: is it necessary to issue "END"
after "BEGIN", or is "COMMIT" enough?


By the way, editing the SQL query improved performance a little, too:

// Columns in 'data' table defined like (INTEGER, INTEGER, INTEGER, TEXT)
char * sql = sqlite3_mprintf( "INSERT INTO data VALUES( '%q', '%q', '%q', '%q' )",
 m_sources_map[ it->m_source ].m_sid.c_str(),
 sec.c_str(), usec.c_str(), it->m_value.c_str() );

was changed to:

// Columns in 'data' table defined like (INTEGER, INTEGER, INTEGER, TEXT)
char * sql = sqlite3_mprintf( "INSERT INTO data VALUES( %ld, %ld, %ld, '%q' )",
 m_sources_map[ it->m_source ].m_sid,
 it->m_time.sec(), it->m_time.usec(), it->m_value.c_str() );

where m_sources_map[ it->m_source ].m_sid is long now.

This is a common issue: high-latency transactions. SQLite has a high
per-transaction overhead, which can be amortized across multiple
INSERTs or UPDATEs to improve the average INSERT rate. You are doing a
single INSERT per transaction, so wrap multiple INSERTs inside a
single "BEGIN" ... "END" transaction.


See:
http://www.sqlite.org/cvstrac/wiki?p=PerformanceConsiderations

Christian


--
Regards,
Igor Mironchick,
Intervale ©
#ICQ 492-597-570






Re: [sqlite] [SQLite improve productivity]

2007-08-05 Thread Christian Smith
This is a common issue: high-latency transactions. SQLite has a high
per-transaction overhead, which can be amortized across multiple INSERTs
or UPDATEs to improve the average INSERT rate. You are doing a single
INSERT per transaction, so wrap multiple INSERTs inside a single "BEGIN"
... "END" transaction.


See:
http://www.sqlite.org/cvstrac/wiki?p=PerformanceConsiderations

Christian

Igor Mironchick uttered:


Hi.

How can I improve productivity of this code:

a_messenger_t::data_buff_t --> std::deque< some_struct >

//

  char * errors = 0;

  for( a_messenger_t::data_buff_t::const_iterator it = msg.m_buff.begin(),
  last = msg.m_buff.end(); it != last; ++it )
  {
  // Converting long to std::string...
  std::string sec( itos( it->m_time.sec() ) );
  std::string usec( itos( it->m_time.usec() ) );

  // Columns in 'data' table defined like (INTEGER, INTEGER, INTEGER, TEXT)
  char * sql = sqlite3_mprintf( "INSERT INTO data VALUES( '%q', '%q', '%q', '%q' )",
  m_sources_map[ it->m_source ].m_sid.c_str(),
  sec.c_str(), usec.c_str(), it->m_value.c_str() );

  // m_db --> sqlite3*
  int ret = sqlite3_exec( m_db, sql, 0, 0, &errors );

  if( ret != SQLITE_OK )
  {
  ACE_DEBUG( ( LM_INFO, ACE_TEXT( "[%D] %M -- %C\n" ), errors ) );
  sqlite3_free( errors );
  }

  sqlite3_free( sql );
  }

Any ideas? This method is very slow - about 1 kB of new data per second in my DB.




--
/"\
\ /ASCII RIBBON CAMPAIGN - AGAINST HTML MAIL
 X   - AGAINST MS ATTACHMENTS
/ \




[sqlite] [SQLite improve productivity]

2007-08-05 Thread Igor Mironchick

Hi.

How can I improve the performance of this code:

a_messenger_t::data_buff_t --> std::deque< some_struct >

//

   char * errors = 0;

   for( a_messenger_t::data_buff_t::const_iterator it = msg.m_buff.begin(),
   last = msg.m_buff.end(); it != last; ++it )
   {
   // Converting long to std::string...
   std::string sec( itos( it->m_time.sec() ) );
   std::string usec( itos( it->m_time.usec() ) );

   // Columns in 'data' table defined like (INTEGER, INTEGER, INTEGER, TEXT)
   char * sql = sqlite3_mprintf( "INSERT INTO data VALUES( '%q', '%q', '%q', '%q' )",
   m_sources_map[ it->m_source ].m_sid.c_str(),
   sec.c_str(), usec.c_str(), it->m_value.c_str() );

   // m_db --> sqlite3*
   int ret = sqlite3_exec( m_db, sql, 0, 0, &errors );

   if( ret != SQLITE_OK )
   {
   ACE_DEBUG( ( LM_INFO, ACE_TEXT( "[%D] %M -- %C\n" ), errors ) );
   sqlite3_free( errors );
   }

   sqlite3_free( sql );
   }

Any ideas? This method is very slow - about 1 kB of new data per second in my DB.

--
Regards,
Igor Mironchick,
Intervale ©
#ICQ 492-597-570

