Re: Need help for performance tuning with Mysql

2012-05-23 Thread Alex Schaft
On 2012/05/24 04:10, Yu Watanabe wrote:
 2. Instead, INDEXes and schema design must be studied. Please provide:
 SHOW CREATE TABLE
   | thold_data | CREATE TABLE `thold_data` (
 `id` int(11) NOT NULL auto_increment,
 `rra_id` int(11) NOT NULL default '0',
 `data_id` int(11) NOT NULL default '0',
 `thold_hi` varchar(100) default NULL,
 `thold_low` varchar(100) default NULL,
 `thold_fail_trigger` int(10) unsigned default NULL,
 `thold_fail_count` int(11) NOT NULL default '0',
 `thold_alert` int(1) NOT NULL default '0',
 `thold_enabled` enum('on','off') NOT NULL default 'on',
 `bl_enabled` enum('on','off') NOT NULL default 'off',
 `bl_ref_time` int(50) unsigned default NULL,
 `bl_ref_time_range` int(10) unsigned default NULL,
 `bl_pct_down` int(10) unsigned default NULL,
 `bl_pct_up` int(10) unsigned default NULL,
 `bl_fail_trigger` int(10) unsigned default NULL,
 `bl_fail_count` int(11) unsigned default NULL,
 `bl_alert` int(2) NOT NULL default '0',
 `lastread` varchar(100) default NULL,
 `oldvalue` varchar(100) NOT NULL default '',
 `repeat_alert` int(10) unsigned default NULL,
 `notify_default` enum('on','off') default NULL,
 `notify_extra` varchar(255) default NULL,
 `host_id` int(10) default NULL,
 `syslog_priority` int(2) default '3',
 `cdef` int(11) NOT NULL default '0',
 PRIMARY KEY  (`id`),
 KEY `rra_id` (`rra_id`)
   ) ENGINE=MyISAM AUTO_INCREMENT=69641 DEFAULT CHARSET=latin1 |
 EXPLAIN SELECT
   I have seen the following select query in the slow query log.
   I saw update queries as well.

 mysql> explain select * from thold_data where thold_enabled='on' AND data_id = 91633;
 +----+-------------+------------+------+---------------+------+---------+------+------+-------------+
 | id | select_type | table      | type | possible_keys | key  | key_len | ref  | rows | Extra       |
 +----+-------------+------------+------+---------------+------+---------+------+------+-------------+
 |  1 | SIMPLE      | thold_data | ALL  | NULL          | NULL | NULL    | NULL | 6161 | Using where |
 +----+-------------+------------+------+---------------+------+---------+------+------+-------------+
 1 row in set (0.06 sec)

 If cache size tuning is not an option, do you think the following action
 would be a way to speed up the queries a little more?
You are selecting a record based on the value of data_id and
thold_enabled, but don't have an index on either? Add an index for both.
If data_id is unique, then you would only need an index on that.
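
For reference, a minimal sketch of the two options named above (the index
names are illustrative, not from the thread):

CREATE INDEX data_id ON thold_data (data_id);
CREATE INDEX thold_enabled ON thold_data (thold_enabled);
-- or, if data_id really is unique per row:
ALTER TABLE thold_data ADD UNIQUE KEY data_id (data_id);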

Alex

-- 
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/mysql



Re: Need help for performance tuning with Mysql

2012-05-23 Thread Alex Schaft
On 2012/05/24 07:37, Alex Schaft wrote:
 You are selecting a record based on the value of data_id and
 thold_enabled, but don't have an index on either? Add an index for both.
 If data_id is unique, then you would only need an index on that.

 Alex


On second thought, an index on thold_enabled won't mean much I think, so
either leave it off or create an index on data_id plus thold_enabled.
Someone more knowledgeable may correct me.
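
A sketch of that composite variant (index name illustrative):

CREATE INDEX data_id_enabled ON thold_data (data_id, thold_enabled);
-- EXPLAIN on the slow query should then show type=ref instead of type=ALL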

Alex


-- 
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/mysql



Re: Re: Why is creating indexes faster after inserting massive data rows?

2012-05-07 Thread Alex Schaft

On 2012/05/07 10:53, Zhangzhigang wrote:

Johan,

Plain and simple: the indices get updated after every insert statement,

whereas if you only create the index *after* the inserts, the index gets
created in a single operation, which is a lot more efficient.


OK, so creating the index *after* the inserts means it gets built in a single
operation. But doesn't the index still have to be updated row by row after the
data rows have all been inserted? Does it work that way? I cannot see the
difference in overhead between the two approaches.

My simplified 2c. When inserting rows with active indexes one by one 
(insert), mysql has to


1) lock the space for the data to be added,
2) write the data,
3) lock the index,
4) write the index key(s),
5) unlock the index,
6) unlock the data.

This happens for each row

When first doing all data without index, only 1, 2, and 6 happen. When 
you then create an index, it can lock the index, read all the data and 
write all index keys in one go and then unlock the index.
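
A minimal sketch of the difference, on hypothetical tables (the pattern, not
anyone's exact schema):

-- per-row maintenance: the index is updated on every INSERT
CREATE TABLE t1 (id INT, val VARCHAR(100), KEY val_idx (val));
INSERT INTO t1 VALUES (1, 'a'), (2, 'b');   -- ...and many more

-- bulk build: load first, build the index once at the end
CREATE TABLE t2 (id INT, val VARCHAR(100));
INSERT INTO t2 VALUES (1, 'a'), (2, 'b');   -- ...and many more
CREATE INDEX val_idx ON t2 (val);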


If you make an omelet, do you fetch your eggs from the fridge one by 
one, or all at the same time? :)


HTH,
Alex


--
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/mysql



Re: mixing and matching mysql mssql whileloop within an if case

2012-05-03 Thread Alex Schaft

  
  
I suppose an easier way is to have a getrow function. Something like:

while ($row = getrow($RS)) {
    // ... process $row ...
}

function getrow($RS)
{
    global $current_server_is_mysql;  // flag set elsewhere in the script
    if ($current_server_is_mysql) {
        return mysql_fetch_assoc($RS);
    } else {
        return sqlsrv_fetch_array($RS, SQLSRV_FETCH_ASSOC);
    }
}



On 2012/05/03 19:20, Haluk Karamete wrote:

  Please take a look at the following code and tell me if there is a way
around it.

if ($current_server_is_mysql):
while ($row = mysql_fetch_assoc($RS)) {
else:
while( $row = sqlsrv_fetch_array( $RS, SQLSRV_FETCH_ASSOC)){
endif:
Depending on the server I'm working with, I'd like to compile my
records into the $RS recordset the proper/native way. If I can get
past this part, the rest should be all right, because both mysql and
mssql $row can be tapped into the same way. For example,
$row['fieldname'] will give me the field value whether the row was a
mysql or mssql resource. So it all boils down to the above snippet
failing.

The error I get points to the else: part in the above snippet.

Parse error: syntax error, unexpected T_ELSE in
D:\Hosting\5291100\html\blueprint\pages\populate_migration_table.php
on line 415
I can understand why I am getting this error.

But I'm hoping you guys can offer a work-around without me
resorting to duplicating the entire while-loop shebang.







Batch update

2012-02-14 Thread Alex Schaft

Hi,

I need to update a table along the lines of the following

update table set LastUpdate=now(), UpdateSource='Abc' Where Key1 = 'Def' 
and Key2 = 'ghi'


I may need to do anywhere from 2 to 20 of these. Would it be better
to call an update statement for each,

or should I do a batch INSERT with ON DUPLICATE KEY UPDATE?

I'm going to try both ways now, but was wondering which would be better
internally; my instincts tell me the latter.
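
For what it's worth, a sketch of the batch form (table name hypothetical; this
assumes a unique key over (Key1, Key2), otherwise ON DUPLICATE KEY UPDATE
never fires):

INSERT INTO mytable (Key1, Key2, LastUpdate, UpdateSource)
VALUES ('Def', 'ghi', NOW(), 'Abc'),
       ('Def', 'jkl', NOW(), 'Abc')
ON DUPLICATE KEY UPDATE
  LastUpdate = VALUES(LastUpdate),
  UpdateSource = VALUES(UpdateSource);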


Thanks,
Alex

--
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/mysql



Re: Batch update

2012-02-14 Thread Alex Schaft

On 2/14/2012 10:30 AM, cars...@bitbybit.dk wrote:

On 14.02.2012 10:20, Alex Schaft wrote:


Hi,

I need to update a table along the lines of the following

update table set LastUpdate=now(), UpdateSource='Abc' Where Key1 = 'Def'
and Key2 = 'ghi'

I may need to do anywhere from 2 to 20 of these. Would it be better
to call an update statement for each,


What exactly are these? Is 'Abc' constant for all rows, or does it
change for each Key1/Key2 value?


If so, then simply
WHERE (Key1=... AND Key2=...) OR (Key1=... AND Key2=...) OR... should 
suffice

or even:
WHERE (Key1, Key2) IN ((val1, val2), (val3, val4), ...)

If 'Abc' changes per Key1/Key2 value, explore the CASE statement.

Best,

/ Carsten


Hi,

'Abc' would stay the same, as well as 'Def', but 'ghi' will change. I'll
look into the WHERE with the IN operator, thanks.
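
A sketch of the IN form Carsten suggests, since 'Abc' stays constant here
(table name hypothetical):

UPDATE mytable
SET LastUpdate = NOW(), UpdateSource = 'Abc'
WHERE (Key1, Key2) IN (('Def', 'ghi'), ('Def', 'jkl'), ('Def', 'mno'));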



--
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/mysql



Re: C api mysql_store_result vs mysql_use_result

2012-02-09 Thread Alex Schaft

On 2012/02/09 01:40 PM, Johan De Meersman wrote:

- Original Message -

From: Alex Schaft al...@quicksoftware.co.za

If I were to do a select count(*) from x where y prior to doing
select * from x where y to get a number of records, how would this impact
performance on the server itself? Would the first query be the one to
do the most processing, with the second one being faster, or would both
have to do the same amount of work?

Heh. The amount of work put into parsing and executing would be the same, 
except if you can compose your count query to use only indexed fields.

Easily checked with an explain of both queries, I'd say.

Also, do consider if you really need a %complete progress indicator, or if a 
simple record counter with no indicated endpoint will do. That is, do your 
users need to know how long it's going to take, or do they just want assurance 
that the process didn't hang?


From the user's perspective, they just need to know the process didn't 
hang. The count() query is more for getting memory requirements upfront. 
Can I handle it all, or do I need to break it down into pages?
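
Johan's check is easy to sketch (x and y as in the question, treated as
placeholders):

EXPLAIN SELECT COUNT(*) FROM x WHERE y = 1;
EXPLAIN SELECT * FROM x WHERE y = 1;
-- with an index on y, the COUNT(*) can often be answered from the index
-- alone (Extra: Using index), while SELECT * still has to read the rows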



-- 
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/mysql

C api mysql_store_result vs mysql_use_result

2012-02-08 Thread Alex Schaft

Hi,

I'm currently using mysql_store_result to retrieve all records of a
query. This poses a problem, however, if say a couple of thousand records
get returned and the user gets no feedback on progress. I now
want to change this to mysql_use_result. The only catch is that you
don't know how many records you're going to get, which complicates
allocating memory for them.


If I were to do a select count(*) from x where y prior to doing select * 
from x where y to get a number of records, how would this impact 
performance on the server itself? Would the first query be the one to do 
the most processing, with the second one being faster, or would both 
have to do the same amount of work?


Thanks,
Alex



-- 
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/mysql

Re: Maximum line length or statement length for mysqldump

2011-10-22 Thread Alex Schaft



On 2011/10/21 10:26 AM, Johan De Meersman wrote:

- Original Message -

From: Alex Schaft al...@quicksoftware.co.za

Got my app reading in a dump created with extended-inserts off, and
lumping all of the insert statements together. Works like a charm

Just for laughs, would you mind posting the on-disk size of your database, and 
the restore time with both extended and single inserts?


ibdata1 is currently sitting at 6 GB. Without extended inserts the restore
takes about a minute and a half; with them, a couple of seconds. I'm well
aware of the speed differences. That's why I'm now reading in the
non-extended dump and joining the values together into big SQL statements.
This now takes about 10 seconds, but I'm still optimizing that.




-- 
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/mysql?unsub=arch...@jab.org

Re: Maximum line length or statement length for mysqldump

2011-10-21 Thread Alex Schaft

On 2011/10/20 03:43 PM, Johan De Meersman wrote:

- Original Message -

From: Alex Schaft al...@quicksoftware.co.za

I realize that; I'm just trying to stop the phone calls saying I
started a restore and my PC just froze.

I might just read all the single insert lines, and get a whole lot of
values clauses together before passing it on to get around the
performance issue while having some idea of progress.

Wouldn't it be better to educate your users, then? :-) Much less trouble for 
you.

Either that, or just do the Windows thing: print a progress bar that goes to
95% in ten seconds, then run the entire restore and then progress the
remaining 5% :-p

You could probably write a sed script that intersperses the INSERT INTO lines 
with some form of progress printing. I remain convinced that users simply need 
to learn patience, though.


Got my app reading in a dump created with extended-inserts off, and 
lumping all of the insert statements together. Works like a charm



-- 
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/mysql?unsub=arch...@jab.org

Maximum line length or statement length for mysqldump

2011-10-20 Thread Alex Schaft

Hi,

I'm monitoring a mysqldump via stdout, catching the create table
commands prior to flushing them to my own text file. Then on the restore
side, I'm trying to feed these to mysql via the C api so I can monitor
progress (no. of lines in the dump file vs no. of lines sent to mysql),
but the lines are as much as 16 KB long in the text file, times about 110
of those for one huge insert statement.


What can I pass to mysqldump to get more sane statement lengths?

Alex



-- 
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/mysql?unsub=arch...@jab.org

Re: Maximum line length or statement length for mysqldump

2011-10-20 Thread Alex Schaft

On 2011/10/20 10:53 AM, Alex Schaft wrote:


What can I pass to mysqldump to get more sane statement lengths?


+1 for extended-inserts...


-- 
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/mysql?unsub=arch...@jab.org

Re: Maximum line length or statement length for mysqldump

2011-10-20 Thread Alex Schaft

On 2011/10/20 11:54 AM, Johan De Meersman wrote:

- Original Message -

From: Alex Schaft al...@quicksoftware.co.za

I'm monitoring a mysqldump via stdout, catching the create table
commands prior to flushing them to my own text file. Then on the
restore side, I'm trying to feed these to mysql via the C api so I can
monitor progress (no. of lines in the dump file vs no. of lines sent to
mysql), but the lines are as much as 16 KB long in the text file, times
about 110 of those for one huge insert statement.

What can I pass to mysqldump to get more sane statement lengths?

That's a pretty sane statement length, actually. It's a lot more efficient to
lock the table once, insert a block of records, update the indices once and
unlock the table, as opposed to doing that for every separate record.

I realize that; I'm just trying to stop the phone calls saying I
started a restore and my PC just froze.


I might just read all the single insert lines, and get a whole lot of 
values clauses together before passing it on to get around the 
performance issue while having some idea of progress.


Alex



-- 
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/mysql?unsub=arch...@jab.org

Index question

2011-10-11 Thread Alex Schaft
If you have a table with columns A and B, and might do a WHERE on A or B,
or an ORDER BY A, B, would single-column indexes on A and B suffice, or
would performance of the ORDER BY query be improved by an index on (A, B)?


Thanks


-- 
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/mysql?unsub=arch...@jab.org

Re: Index question

2011-10-11 Thread Alex Schaft

On 2011/10/11 02:22 PM, Rik Wasmus wrote:

Just to clarify: having indexes of (a,b) or (b,a) makes no difference?

They DO.

See it as a lookup table which starts with 'a' in the first case, and 'b' in
the second one. Looking for anything that matches 'b' in an index (a,b)
requires a full scan as you don't know 'a'; likewise, searching for 'a' in an
index (b,a) requires a full scan. See it as looking through a phonebook trying
to locate someone by first rather than last name. It's in there, just not
easily accessible.

However, if you have an index on (a,b) and DO know which 'a' you want
('Smith'), looking for 'Smith, John' is faster with an index (a,b) than with
only an index on (a).

Johan was trying to explain this distinction:

- index (a,b) is good for searches on ONLY a or BOTH a & b, but bad for ONLY b
- index (b,a) is good for searches on ONLY b or BOTH a & b, but bad for ONLY a
- index (a) & index (b) is good for searches on ONLY a or ONLY b, and is
suboptimal for searching for BOTH a & b (although faster than no index; the
query optimizer has to choose which index to use, it can't use both).
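
Rik's three cases as a sketch, on a hypothetical table t with columns a and b:

CREATE INDEX ab ON t (a, b);
SELECT * FROM t WHERE a = 1;            -- can use index ab
SELECT * FROM t WHERE a = 1 AND b = 2;  -- can use index ab
SELECT * FROM t WHERE b = 2;            -- cannot seek in ab; scans instead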
Next question. If you have the two separate indexes and then do two
queries, one for a and one for b, and you then get a list of unique IDs
from both, would it be faster to create the intersection yourself rather
than have the server do the legwork?



-- 
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/mysql?unsub=arch...@jab.org

Re: Index question

2011-10-11 Thread Alex Schaft

On 2011/10/11 02:30 PM, Alex Schaft wrote:

[snip: Rik's explanation of (a,b) vs (b,a) vs separate indexes, and my
follow-up question about intersecting the results myself, both quoted in
full above]





Then there are index merge optimizations too, I suppose.
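
A hedged illustration of what that can look like (MySQL 5.0 and up, with
hypothetical single-column indexes idx_a and idx_b on table t):

EXPLAIN SELECT * FROM t WHERE a = 1 OR b = 2;
-- the optimizer may report type: index_merge with
-- Extra: Using union(idx_a,idx_b); Using where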

-- 
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/mysql?unsub=arch...@jab.org

Update table on lost connection

2011-09-28 Thread Alex Schaft

Hi,

We're busy moving legacy apps from foxpro tables to mysql. User logins 
were tracked via a record in a table which the app then locked, 
preventing multiple logins for the same user code.


I want to simulate this via a 'locked' column in a mysql table, but
would need the field to be cleared if the server loses the connection to
the client. How would I do this, or is there an alternative?
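
One alternative, not from the thread: MySQL's named advisory locks are freed
automatically when the client connection dies, which is exactly the cleanup
wanted here (the lock name is hypothetical):

SELECT GET_LOCK('login_JSMITH', 0);   -- 1 = acquired, 0 = already held elsewhere
-- ...session runs; if the connection is lost, the server frees the lock...
SELECT RELEASE_LOCK('login_JSMITH');  -- explicit release on a clean logout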


Thanks,
Alex



-- 
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/mysql?unsub=arch...@jab.org

Re: Import .dbf files

2011-07-19 Thread Alex Schaft

On 2011/07/19 09:52 PM, andrewmchor...@cox.net wrote:

Hello

I am about to create a database in mysql. I would like to be able to import
some dBASE III (.dbf) files into the tables I will be defining. What is the
easiest way to import the tables? Is there software that can be downloaded
that will allow me to do this?

Andrew


Hi,

You want to look at www.xharbour.org, a 32-bit open-source compiler for
dbase/clipper code. We're currently using it to write apps accessing
mysql data.


Another option is www.advantagedatabase.com, a full SQL database server
for dbf files. It has a local server (think SQLite) as well as a host of
client interfaces with which you would be able to connect via odbc, .net,
jdbc etc. to retrieve your data that way.





-- 
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/mysql?unsub=arch...@jab.org

Index creation

2011-06-21 Thread Alex Schaft

Hi,

I'm busy creating a single-column index on a 518505-record table, which has
now been in the 'copy to tmp table' state for about 2 hours.


The server has 2 GB of RAM and an Intel(R) Xeon(TM) 3.00GHz CPU, running on
hardware RAID 5. The InnoDB config was left as a standard install from the
my-medium config:


innodb_data_home_dir = /var/lib/mysql/
innodb_data_file_path = ibdata1:10M:autoextend
innodb_log_group_home_dir = /var/lib/mysql/
innodb_log_arch_dir = /var/lib/mysql/
innodb_buffer_pool_size = 16M
innodb_additional_mem_pool_size = 2M
innodb_log_file_size = 5M
innodb_log_buffer_size = 8M
innodb_flush_log_at_trx_commit = 1
innodb_lock_wait_timeout = 50

The server is: mysql Ver 14.7 Distrib 4.1.22, for redhat-linux-gnu (i386)
using readline 4.3


Table stats are currently as follows (SHOW TABLE STATUS, one column per line):

Name: wininv_invdet
Engine: InnoDB
Version: 9
Row_format: Fixed
Rows: 518526
Avg_row_length: 1824
Data_length: 945815552
Max_data_length: NULL
Index_length: 1144487936
Data_free: 0
Auto_increment: 518506
Create_time: 2011-06-21 07:36:20
Update_time: NULL
Check_time: NULL
Collation: latin1_swedish_ci
Checksum: NULL
Create_options:
Comment: InnoDB free: 9216 kB

Innodb status is:

=
110621  9:47:04 INNODB MONITOR OUTPUT
=
Per second averages calculated from the last 31 seconds
--
SEMAPHORES
--
OS WAIT ARRAY INFO: reservation count 1187081, signal count 1182987
Mutex spin waits 4624590, rounds 17483575, OS waits 350615
RW-shared spins 138728, OS waits 66949; RW-excl spins 833217, OS waits 
690480


TRANSACTIONS

Trx id counter 0 4252
Purge done for trx's n:o  0 4198 undo n:o  0 0
History list length 0
Total number of lock structs in row lock hash table 255
LIST OF TRANSACTIONS FOR EACH SESSION:
---TRANSACTION 0 0, not started, process no 16841, OS thread id 2954886048
MySQL thread id 3186, query id 3047513 localhost root
show engine innodb status
---TRANSACTION 0 4251, ACTIVE 62 sec, process no 16841, OS thread id 
2957421472 inserting, thread declared inside InnoDB 160

mysql tables in use 2, locked 3
258 lock struct(s), heap size 27968, undo log entries 2285
MySQL thread id 2, query id 3041739 pc-00030.quicksoftware.co.za 
10.1.1.30 root copy to tmp table
CREATE INDEX WININV_INVDET_SUPREF3 ON `wininv_invdet` (`indkey_004`) /* 
Create synthetic Index */


FILE I/O

I/O thread 0 state: waiting for i/o request (insert buffer thread)
I/O thread 1 state: waiting for i/o request (log thread)
I/O thread 2 state: waiting for i/o request (read thread)
I/O thread 3 state: waiting for i/o request (write thread)
Pending normal aio reads: 0, aio writes: 0,
 ibuf aio reads: 0, log i/o's: 0, sync i/o's: 0
Pending flushes (fsync) log: 1; buffer pool: 1
5028159 OS file reads, 4867919 OS file writes, 272069 OS fsyncs
105.58 reads/s, 17525 avg bytes/read, 84.48 writes/s, 3.87 fsyncs/s
-
INSERT BUFFER AND ADAPTIVE HASH INDEX
-
Ibuf for space 0: size 519, free list len 271, seg size 791, is not empty
13085268 inserts, 13064005 merged recs, 1775632 merges
Hash table size 69257, used cells 63, node heap has 1 buffer(s)
181.74 hash searches/s, 737.07 non-hash searches/s
---
LOG
---
Log sequence number 6 1970388696
Log flushed up to   6 1970162325
Last checkpoint at  6 1963765307
1 pending log writes, 0 pending chkp writes
108282 log i/o's done, 1.68 log i/o's/second
--
BUFFER POOL AND MEMORY
--
Total memory allocated 35308974; in additional pool allocated 2095872
Buffer pool size   1024
Free buffers   0
Database pages 1022
Modified db pages  557
Pending reads 0
Pending writes: LRU 0, flush list 0, single page 0
Pages read 6249386, created 1640028, written 7658612
112.93 reads/s, 9.87 creates/s, 109.77 writes/s
Buffer pool hit rate 973 / 1000
--

recordset search question

2011-05-23 Thread Alex Schaft

Hi,

I'm sure this must've been done before, so if someone can point me at a 
discussion or assist me in some other way I'd appreciate it.


If I'm browsing a paged list of invoices, say in numerical order, and I then
want to reposition the list on a certain client, I can do a second query to
get the record id, but how would I easily reposition the browse listing, back
in date order, on the record found in the second query?
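
One hedged sketch of a seek-style repositioning (table and column names
hypothetical): first fetch the target row's date-order keys, then restart the
date-ordered browse from that point:

SELECT inv_date, id FROM invoices WHERE client = 'ACME' LIMIT 1;
-- then reposition the date-ordered listing at that row:
SELECT * FROM invoices
WHERE inv_date > '2011-05-01'
   OR (inv_date = '2011-05-01' AND id >= 42)
ORDER BY inv_date, id
LIMIT 20;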


Thanks,
Alex


--
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/mysql?unsub=arch...@jab.org