Re: MySQL 5.1: incorrect arithmetic calculation

2013-02-15 Thread Alex Keda

15.02.2013 15:07, Alex Keda writes:

OK. But, how about:
mysql> SELECT 365 * 1.67 * ( 1 - 0.10);
+--------------------------+
| 365 * 1.67 * ( 1 - 0.10) |
+--------------------------+
|                 548.5950 |
+--------------------------+
1 row in set (0.00 sec)

mysql>
??


sorry, too much work... =)

--
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/mysql



Re: MySQL 5.1: incorrect arithmetic calculation

2013-02-15 Thread Alex Keda

15.02.2013 14:43, Johan De Meersman writes:


- Original Message -

From: Alex Keda ad...@lissyara.su

mysql> SELECT SUM(`Amount`*`Cost`*(1-`Discont`)) as `Summ` FROM `WorksCompliteAgregate` WHERE (`ContractID` = 10369 AND `Month` = 497);

Based off the select you printed, this comes to EXACTLY 548.595 for the first 
row and 0 for the second row.



mysql> SELECT SUM(`Amount`*`Cost`*(1-`Discont`)*100)/100 as `Summ` FROM `WorksCompliteAgregate` WHERE (`ContractID` = 10369 AND `Month` = 497);

The more detailed result here, though, seems to suggest that there's a longer 
fraction in your table than is printed by your select. Would your column happen 
to be a Float?


but, my desktop calculator gives the result 548.60

Which is the correct rounding for 548.595. Check if your column is a float, and 
if it is, go google for floating point mathematics. They do not work the way 
you think they do. Use decimal(n,m) for money - or any discrete number, for 
that matter. Floats are not exact values, they are APPROXIMATE values.

https://dev.mysql.com/doc/refman/5.0/en/problems-with-float.html may also give 
you an idea of what goes wrong.
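Johan's float-vs-DECIMAL point can be illustrated outside MySQL. A quick sketch in Python (the `decimal` module standing in for MySQL's DECIMAL type; the numbers are the ones from this thread):

```python
from decimal import Decimal

# Binary doubles cannot represent 1.67 or 0.10 exactly, so the product
# lands slightly off 548.595 -- which is why a float column can end up
# displaying 548.59 instead of the correctly rounded 548.60.
approx = 365 * 1.67 * (1 - 0.10)

# Exact decimal arithmetic, matching the DECIMAL(n,m) behaviour
# recommended above for money columns.
exact = Decimal('365') * Decimal('1.67') * (1 - Decimal('0.10'))

print(approx)
print(exact)   # 548.5950
```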



OK. But, how about:
mysql> SELECT 365 * 1.67 * ( 1 - 0.10);
+--------------------------+
| 365 * 1.67 * ( 1 - 0.10) |
+--------------------------+
|                 548.5950 |
+--------------------------+
1 row in set (0.00 sec)

mysql>
??




MySQL 5.1: incorrect arithmetic calculation

2013-02-14 Thread Alex Keda

bkp0# mysql h5000_bill
Welcome to the MySQL monitor.  Commands end with ; or \g.
Your MySQL connection id is 1643184
Server version: 5.1.68-log FreeBSD port: mysql-server-5.1.68

Copyright (c) 2000, 2013, Oracle and/or its affiliates. All rights reserved.

Oracle is a registered trademark of Oracle Corporation and/or its
affiliates. Other names may be trademarks of their respective
owners.

Type 'help;' or '\h' for help. Type '\c' to clear the current input 
statement.


mysql> set names utf8;
Query OK, 0 rows affected (0.00 sec)

mysql> SELECT * FROM `WorksCompliteAgregate` WHERE (`ContractID` = 10369 AND `Month` = 497);
+----------+------------+-------+-----------+---------+--------+---------+--------+
| ID       | ContractID | Month | ServiceID | Comment | Cost   | Discont | Amount |
+----------+------------+-------+-----------+---------+--------+---------+--------+
| 10551851 |      10369 |   497 |         1 | №20440  |   1.67 |    0.10 |    365 |
| 10551854 |      10369 |   497 |         2 | №20441  | 150.00 |    1.00 |      1 |
+----------+------------+-------+-----------+---------+--------+---------+--------+
2 rows in set (0.00 sec)

mysql> SELECT SUM(`Amount`*`Cost`*(1-`Discont`)) as `Summ` FROM `WorksCompliteAgregate` WHERE (`ContractID` = 10369 AND `Month` = 497);
+--------+
| Summ   |
+--------+
| 548.59 |
+--------+
1 row in set (0.00 sec)

mysql> SELECT SUM(`Amount`*`Cost`*(1-`Discont`)*100)/100 as `Summ` FROM `WorksCompliteAgregate` WHERE (`ContractID` = 10369 AND `Month` = 497);
+------------+
| Summ       |
+------------+
| 548.594985 |
+------------+
1 row in set (0.00 sec)

mysql> SELECT 365 * 1.67 * ( 1 - 0.10);
+--------------------------+
| 365 * 1.67 * ( 1 - 0.10) |
+--------------------------+
|                 548.5950 |
+--------------------------+
1 row in set (0.00 sec)

mysql>
===

but, my desktop calculator gives the result 548.60




Re: Need help for performance tuning with Mysql

2012-05-23 Thread Alex Schaft
On 2012/05/24 04:10, Yu Watanabe wrote:
 2. Instead INDEXes and schema design must be studied.  Please provide:
 SHOW CREATE TABLE
   | thold_data | CREATE TABLE `thold_data` (
 `id` int(11) NOT NULL auto_increment,
 `rra_id` int(11) NOT NULL default '0',
 `data_id` int(11) NOT NULL default '0',
 `thold_hi` varchar(100) default NULL,
 `thold_low` varchar(100) default NULL,
 `thold_fail_trigger` int(10) unsigned default NULL,
 `thold_fail_count` int(11) NOT NULL default '0',
 `thold_alert` int(1) NOT NULL default '0',
 `thold_enabled` enum('on','off') NOT NULL default 'on',
 `bl_enabled` enum('on','off') NOT NULL default 'off',
 `bl_ref_time` int(50) unsigned default NULL,
 `bl_ref_time_range` int(10) unsigned default NULL,
 `bl_pct_down` int(10) unsigned default NULL,
 `bl_pct_up` int(10) unsigned default NULL,
 `bl_fail_trigger` int(10) unsigned default NULL,
 `bl_fail_count` int(11) unsigned default NULL,
 `bl_alert` int(2) NOT NULL default '0',
 `lastread` varchar(100) default NULL,
 `oldvalue` varchar(100) NOT NULL default '',
 `repeat_alert` int(10) unsigned default NULL,
 `notify_default` enum('on','off') default NULL,
 `notify_extra` varchar(255) default NULL,
 `host_id` int(10) default NULL,
 `syslog_priority` int(2) default '3',
 `cdef` int(11) NOT NULL default '0',
 PRIMARY KEY  (`id`),
 KEY `rra_id` (`rra_id`)
   ) ENGINE=MyISAM AUTO_INCREMENT=69641 DEFAULT CHARSET=latin1 |
 EXPLAIN SELECT
   I have seen the following select query in the slow query log.
   I also saw update queries as well.

 mysql> explain select * from thold_data where thold_enabled='on' AND data_id = 91633;
 +----+-------------+------------+------+---------------+------+---------+------+------+-------------+
 | id | select_type | table      | type | possible_keys | key  | key_len | ref  | rows | Extra       |
 +----+-------------+------------+------+---------------+------+---------+------+------+-------------+
 |  1 | SIMPLE      | thold_data | ALL  | NULL          | NULL | NULL    | NULL | 6161 | Using where |
 +----+-------------+------------+------+---------------+------+---------+------+------+-------------+
 1 row in set (0.06 sec)

 If cache size tuning is not an option, do you think the following action would
 speed up the queries a little more?
You are selecting a record based on the value of data_id and
thold_enabled, but don't have an index on either? Add an index for both.
If data_id is unique, then you would only need an index on that.
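The effect of that index shows up directly in the query plan. A sketch with Python's sqlite3 (sqlite stands in for MySQL here, with a stripped-down version of the thold_data table; the principle -- full scan without an index, keyed lookup with one -- carries over):

```python
import sqlite3

con = sqlite3.connect(':memory:')
con.execute("CREATE TABLE thold_data (id INTEGER PRIMARY KEY, data_id INT, thold_enabled TEXT)")
con.executemany("INSERT INTO thold_data (data_id, thold_enabled) VALUES (?, 'on')",
                [(i,) for i in range(1000)])

query = "SELECT * FROM thold_data WHERE thold_enabled = 'on' AND data_id = 500"

# No usable index: the plan is a full table scan.
plan_before = con.execute("EXPLAIN QUERY PLAN " + query).fetchall()

# With an index on data_id, the same query becomes a keyed lookup.
con.execute("CREATE INDEX idx_data_id ON thold_data (data_id)")
plan_after = con.execute("EXPLAIN QUERY PLAN " + query).fetchall()

print(plan_before[0][-1])   # a SCAN of the whole table
print(plan_after[0][-1])    # a SEARCH ... USING INDEX idx_data_id
```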

Alex




Re: Need help for performance tuning with Mysql

2012-05-23 Thread Alex Schaft
On 2012/05/24 07:37, Alex Schaft wrote:
 You are selecting a record based on the value of data_id and
 thold_enabled, but don't have an index on either? Add an index for both.
 If data_id is unique, then you would only need an index on that.

 Alex


On second thought, an index on thold_enabled won't mean much I think, so
either leave it off or create an index on data_id plus thold_enabled.
Someone more knowledgeable may correct me.

Alex





Re: Re: Why is creating indexes faster after inserting massive data rows?

2012-05-07 Thread Alex Schaft

On 2012/05/07 10:53, Zhangzhigang wrote:

Johan,

Plain and simple: the indices get updated after every insert statement,

whereas if you only create the index *after* the inserts, the index gets 
created in a single operation, which is a lot more efficient..


Ok. Creating the index *after* the inserts, the index gets created in a single
operation. But doesn't the index still have to be built row by row after the
data rows have all been inserted? Does it work this way?
I cannot see the difference in overhead between the two approaches.
My simplified 2c. When inserting rows with active indexes one by one 
(insert), mysql has to


1) lock the space for the data to be added,
2) write the data,
3) lock the index,
4) write the index key(s),
5) unlock the index,
6) unlock the data

This happens for each row

When first doing all data without index, only 1, 2, and 6 happen. When 
you then create an index, it can lock the index, read all the data and 
write all index keys in one go and then unlock the index.


If you make an omelet, do you fetch your eggs from the fridge one by 
one, or all at the same time? :)
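The difference between the two orderings can be sketched like this (Python's sqlite3 used for illustration; the table and index names are made up):

```python
import sqlite3

def load_with_index_first(rows):
    con = sqlite3.connect(':memory:')
    con.execute("CREATE TABLE t (a INT, b TEXT)")
    con.execute("CREATE INDEX idx_a ON t (a)")    # index maintained on every insert
    con.executemany("INSERT INTO t VALUES (?, ?)", rows)
    return con

def load_then_index(rows):
    con = sqlite3.connect(':memory:')
    con.execute("CREATE TABLE t (a INT, b TEXT)")
    con.executemany("INSERT INTO t VALUES (?, ?)", rows)  # steps 1, 2 and 6 only
    con.execute("CREATE INDEX idx_a ON t (a)")    # one bulk build instead of n updates
    return con

rows = [(i, 'x') for i in range(10000)]
c1 = load_with_index_first(rows)
c2 = load_then_index(rows)
# Both connections end up in the same state; the second ordering is
# typically much faster at scale because the index is built in one pass.
```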


HTH,
Alex





Re: mixing and matching mysql mssql while loop within an if case

2012-05-03 Thread Alex Schaft

  
  
I suppose an easier way is to have a getrow function. Something like:

while ($row = getrow($RS)) {
  .
  .
  .
}

function getrow($RS)
{
  if ($current_server_is_mysql)
  {
    return mysql_fetch_assoc($RS);
  }
  else
  {
    return sqlsrv_fetch_array($RS, SQLSRV_FETCH_ASSOC);
  }
}



On 2012/05/03 19:20, Haluk Karamete wrote:

  Please take a look at the following code and tell me if there is a way
around it.

if ($current_server_is_mysql):
while ($row = mysql_fetch_assoc($RS)) {
else:
while( $row = sqlsrv_fetch_array( $RS, SQLSRV_FETCH_ASSOC)){
endif:
Depending on the server I'm working with, I'd like to compile my
records into the $RS recordset the proper/native way. If I can get
pass this part, the rest should be all right cause both mysql and
mssql $row can be tapped into the same way. For example,
$row['fieldname'] will give me the field value whether the row was a
mysql or mssql resource. So it all boils down to the above snippet
failing.

The error I get points to the else: part in the above snippet.

Parse error: syntax error, unexpected T_ELSE in
D:\Hosting\5291100\html\blueprint\pages\populate_migration_table.php
on line 415
I can understand why I am getting this error.

But I'm hoping you guys can offer a work-around without me resorting to
duplicating the entire while-loop shebang.





-- 
  
  



Batch update

2012-02-14 Thread Alex Schaft

Hi,

I need to update a table along the lines of the following

update table set LastUpdate=now(), UpdateSource='Abc' Where Key1 = 'Def' 
and Key2 = 'ghi'


I possibly need to do anywhere from 2 to 20 of these. Would it be better 
to call an update statement for each of these,

or should I do a batch INSERT with ON DUPLICATE KEY UPDATE?

Going to try both ways now, but was wondering what would be best 
internally; my instincts tell me the latter.
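For what it's worth, a third option is a single prepared UPDATE executed with many parameter sets, which most client libraries expose as executemany. A sketch with Python's sqlite3 (hypothetical jobs table; MySQL drivers offer the same interface):

```python
import sqlite3

con = sqlite3.connect(':memory:')
con.execute("CREATE TABLE jobs (Key1 TEXT, Key2 TEXT, LastUpdate TEXT, UpdateSource TEXT)")
con.executemany("INSERT INTO jobs (Key1, Key2) VALUES (?, ?)",
                [('Def', 'ghi'), ('Def', 'jkl'), ('Def', 'mno')])

# One prepared UPDATE, 2-20 parameter tuples: the statement is parsed once.
pairs = [('Def', 'ghi'), ('Def', 'jkl')]
con.executemany(
    "UPDATE jobs SET LastUpdate = datetime('now'), UpdateSource = 'Abc' "
    "WHERE Key1 = ? AND Key2 = ?", pairs)

updated = con.execute("SELECT COUNT(*) FROM jobs WHERE UpdateSource = 'Abc'").fetchone()[0]
print(updated)  # 2
```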


Thanks,
Alex




Re: Batch update

2012-02-14 Thread Alex Schaft

On 2/14/2012 10:30 AM, cars...@bitbybit.dk wrote:

On 14.02.2012 10:20, Alex Schaft wrote:


Hi,

I need to update a table along the lines of the following

update table set LastUpdate=now(), UpdateSource='Abc' Where Key1 = 'Def'
and Key2 = 'ghi'

I possibly need to do anywhere from 2 to 20 of these. Would it be better
to call an update statement for each of these,


What exactly are these? Is 'Abc' constant for all rows, or does it 
change for each Key1/Key2 pair?


If so, then simply
WHERE (Key1=... AND Key2=...) OR (Key1=... AND Key2=...) OR... should 
suffice

or even:
WHERE (Key1, Key2) IN ((val1, val2), (val3, val4), ...)

If 'Abc' changes per Key1/Key2 pair, explore the CASE statement.
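The row-value IN form suggested here can be sketched with Python's sqlite3 (requires SQLite 3.15+ for row values; the table and values are made up, and MySQL accepts the same construct):

```python
import sqlite3

con = sqlite3.connect(':memory:')
con.execute("CREATE TABLE t (Key1 TEXT, Key2 TEXT, UpdateSource TEXT)")
con.executemany("INSERT INTO t (Key1, Key2) VALUES (?, ?)",
                [('Def', 'ghi'), ('Def', 'jkl'), ('Xyz', 'ghi')])

# One statement updates every matching (Key1, Key2) pair at once.
con.execute("""
    UPDATE t SET UpdateSource = 'Abc'
    WHERE (Key1, Key2) IN (VALUES ('Def', 'ghi'), ('Def', 'jkl'))
""")

hit = con.execute("SELECT COUNT(*) FROM t WHERE UpdateSource = 'Abc'").fetchone()[0]
print(hit)  # 2
```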

Best,

/ Carsten


Hi,

'Abc' would stay the same, as well as 'Def', but 'ghi' will change. I'll 
look into the where with the in operator, thanks.






Re: C api mysql_store_result vs mysql_use_result

2012-02-09 Thread Alex Schaft

On 2012/02/09 01:40 PM, Johan De Meersman wrote:

- Original Message -

From: Alex Schaft al...@quicksoftware.co.za

If I were to do a select count(*) from x where y prior to doing
select * from x where y to get a number of records, how would this impact
performance on the server itself? Would the first query be the one to
do the most processing, with the second one being faster, or would both
have to do the same amount of work?

Heh. The amount of work put into parsing and executing would be the same, 
except if you can compose your count query to use only indexed fields.

Easily checked with an explain of both queries, I'd say.

Also, do consider if you really need a %complete progress indicator, or if a 
simple record counter with no indicated endpoint will do. That is, do your 
users need to know how long it's going to take, or do they just want assurance 
that the process didn't hang?


From the user's perspective, they just need to know the process didn't 
hang. The count() query is more for getting memory requirements upfront. 
Can I handle it all, or do I need to break it down into pages?
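The pattern being discussed -- a cheap COUNT(*) up front to size memory, then either fetch-all or stream -- might look like this (Python's sqlite3 for illustration; the threshold is arbitrary):

```python
import sqlite3

con = sqlite3.connect(':memory:')
con.execute("CREATE TABLE x (y INT)")
con.executemany("INSERT INTO x VALUES (?)", [(i,) for i in range(5000)])

# First query: how many rows are coming?
n = con.execute("SELECT COUNT(*) FROM x WHERE y % 2 = 0").fetchone()[0]

if n < 10000:
    # Small enough: grab everything at once (mysql_store_result-style).
    rows = con.execute("SELECT y FROM x WHERE y % 2 = 0").fetchall()
else:
    # Too big: iterate row by row (mysql_use_result-style) and process
    # incrementally instead of accumulating, in real code.
    rows = []
    for row in con.execute("SELECT y FROM x WHERE y % 2 = 0"):
        rows.append(row)

print(n, len(rows))  # 2500 2500
```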




C api mysql_store_result vs mysql_use_result

2012-02-08 Thread Alex Schaft

Hi,

I'm currently using mysql_store_result to retrieve all records of a 
query. This poses a problem however if say a couple of thousand records 
get returned, and the user gets no feedback during the progress. I now 
want to change this to mysql_use_result. The only catch is that you then 
don't know in advance how many records you're going to get when 
allocating memory for them.


If I were to do a select count(*) from x where y prior to doing select * 
from x where y to get a number of records, how would this impact 
performance on the server itself? Would the first query be the one to do 
the most processing, with the second one being faster, or would both 
have to do the same amount of work?


Thanks,
Alex




Re: Maximum line length or statement length for mysqldump

2011-10-22 Thread Alex Schaft



On 2011/10/21 10:26 AM, Johan De Meersman wrote:

- Original Message -

From: Alex Schaft al...@quicksoftware.co.za

Got my app reading in a dump created with extended-inserts off, and
lumping all of the insert statements together. Works like a charm

Just for laughs, would you mind posting the on-disk size of your database, and 
the restore time with both extended and single inserts?


ibdata1 is currently sitting at 6 gigs. Without extended inserts it takes 
about a minute and a half, and with them a couple of seconds. I'm well 
aware of the speed differences. That's why I'm now reading in the 
non-extended dump and joining the values together into big sql statements. 
This now takes about 10 seconds, but I'm still optimizing that.





Re: Maximum line length or statement length for mysqldump

2011-10-21 Thread Alex Schaft

On 2011/10/20 03:43 PM, Johan De Meersman wrote:

- Original Message -

From: Alex Schaft al...@quicksoftware.co.za

I realize that; I'm just trying to stop the phone calls saying "I
started a restore, and my pc just froze"

I might just read all the single insert lines, and get a whole lot of
values clauses together before passing it on to get around the
performance issue while having some idea of progress.

Wouldn't it be better to educate your users, then? :-) Much less trouble for 
you.

Either that, or just do the windows thing: print a progress bar that goes to 
95% in ten seconds, then run the entire restore and then progress the remaining 
5% :-p

You could probably write a sed script that intersperses the INSERT INTO lines 
with some form of progress printing. I remain convinced that users simply need 
to learn patience, though.


Got my app reading in a dump created with extended-inserts off, and 
lumping all of the insert statements together. Works like a charm




Maximum line length or statement length for mysqldump

2011-10-20 Thread Alex Schaft

Hi,

I'm monitoring a mysqldump via stdout, catching the create table 
commands prior to flushing them to my own text file. Then on the restore 
side, I'm trying to feed these to mysql via the c api so I can monitor 
progress (no of lines in the dump file vs no of lines sent to mysql), 
but the lines are as much as 16k long in the text file times about 110 
of those for one huge insert statement.


What can I pass to mysqldump to get more sane statement lengths?

Alex




Re: Maximum line length or statement length for mysqldump

2011-10-20 Thread Alex Schaft

On 2011/10/20 10:53 AM, Alex Schaft wrote:


What can I pass to mysqldump to get more sane statement lengths?


+1 for extended-inserts...



Re: Maximum line length or statement length for mysqldump

2011-10-20 Thread Alex Schaft

On 2011/10/20 11:54 AM, Johan De Meersman wrote:

- Original Message -

From: Alex Schaft al...@quicksoftware.co.za

I'm monitoring a mysqldump via stdout, catching the create table
commands prior to flushing them to my own text file. Then on the
restore side, I'm trying to feed these to mysql via the c api so I can
monitor progress (no of lines in the dump file vs no of lines sent to mysql),
but the lines are as much as 16k long in the text file times about
110 of those for one huge insert statement.

What can I pass to mysqldump to get more sane statement lengths?

That's a pretty sane statement length, actually. It's a lot more efficient to 
lock the table once, insert a block of records, update the indices once and 
unlock the table; as opposed to doing that for every separate record.
I realize that; I'm just trying to stop the phone calls saying "I 
started a restore, and my pc just froze"


I might just read all the single insert lines, and get a whole lot of 
values clauses together before passing it on to get around the 
performance issue while having some idea of progress.
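The lumping step described above can be sketched as plain string handling (illustration only -- the parsing is deliberately simplistic and would mis-handle a literal ' VALUES ' inside the data):

```python
# Combine consecutive single-row INSERT statements for the same table
# into multi-row ones, flushing every `batch` rows.
def batch_inserts(lines, batch=100):
    out, values, prefix = [], [], None
    for line in lines:
        line = line.rstrip(';\n')
        head, _, tail = line.partition(' VALUES ')
        if head != prefix and values:   # table changed: flush what we have
            out.append(prefix + ' VALUES ' + ','.join(values) + ';')
            values = []
        prefix = head
        values.append(tail)
        if len(values) >= batch:
            out.append(prefix + ' VALUES ' + ','.join(values) + ';')
            values = []
    if values:                          # flush the remainder
        out.append(prefix + ' VALUES ' + ','.join(values) + ';')
    return out

dump = ["INSERT INTO t VALUES (1,'a');", "INSERT INTO t VALUES (2,'b');"]
print(batch_inserts(dump))  # ["INSERT INTO t VALUES (1,'a'),(2,'b');"]
```

Each combined statement can then be sent to the server while the line counter drives the progress display.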


Alex




Index question

2011-10-11 Thread Alex Schaft
If you have a table with columns A & B, and might do a where on A or B, 
or an order by A, B, would single column indexes on A and B suffice or 
would performance on the order by query be improved by an index on A,B?


Thanks



Re: Index question

2011-10-11 Thread Alex Schaft

On 2011/10/11 02:22 PM, Rik Wasmus wrote:

Just to clarify: do indexes on (a,b) and (b,a) make no difference?

They DO.

See it as a lookup table which starts with 'a' in the first case, and 'b' in the
second one. Looking for anything that matches 'b' in an index (a,b) requires
a full scan as you don't know 'a'; likewise, searching for 'a' in an index
(b,a) requires a full scan. See it as looking through a phonebook trying to
locate someone by first- rather than last name. It's in there, just not easily
accessible.

However, if you have an index on (a,b) and DO know which 'a' you want
('Smith'), looking for 'Smith, John' is faster with an index (a,b) than with
only an index on (a).

Johan was trying to explain this distinction:

- index (a,b) is good for searches on ONLY a or BOTH a & b, but bad for ONLY b
- index (b,a) is good for searches on ONLY b or BOTH a & b, but bad for ONLY a
- index (a) & index (b) is good for searches on ONLY a or ONLY b, and is
suboptimal for searching for BOTH a & b (although faster than no index, the
query optimizer has to choose which index to use; it can't use both).
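Those three cases can be checked against a query planner. A sketch with Python's sqlite3 (the same optimizer principle applies in MySQL; phonebook-style columns a and b):

```python
import sqlite3

con = sqlite3.connect(':memory:')
con.execute("CREATE TABLE phone (a TEXT, b TEXT)")   # a = lastname, b = firstname
con.execute("CREATE INDEX idx_ab ON phone (a, b)")
con.executemany("INSERT INTO phone VALUES (?, ?)",
                [('Smith', 'John'), ('Smith', 'Anne'), ('Jones', 'John')])

def plan(where):
    row = con.execute("EXPLAIN QUERY PLAN SELECT * FROM phone WHERE " + where).fetchone()
    return row[-1]   # the human-readable plan detail

by_a  = plan("a = 'Smith'")                 # keyed lookup via idx_ab
by_ab = plan("a = 'Smith' AND b = 'John'")  # keyed lookup via idx_ab
by_b  = plan("b = 'John'")                  # idx_ab cannot narrow this: a scan
print(by_a, by_ab, by_b, sep='\n')
```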
Next question. If you have the two separate indexes and then do two 
queries, one for a and one for b. If you then get a list of unique id's 
of both, would it be faster to create an intersection yourself rather 
than have the server do the legwork?
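Client-side, the intersection itself is trivial (Python sets; the id values below are made up). The catch is that both full id lists must be materialized and shipped to the client first, whereas a single WHERE a = ... AND b = ... lets the server stop at the index:

```python
# ids from: SELECT id FROM t WHERE a = ?   (hypothetical result)
ids_matching_a = {1, 2, 3, 5, 8}
# ids from: SELECT id FROM t WHERE b = ?   (hypothetical result)
ids_matching_b = {2, 3, 5, 13}

# The client-side legwork: a set intersection.
both = ids_matching_a & ids_matching_b
print(sorted(both))  # [2, 3, 5]
```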




Re: Index question

2011-10-11 Thread Alex Schaft

On 2011/10/11 02:30 PM, Alex Schaft wrote:

On 2011/10/11 02:22 PM, Rik Wasmus wrote:
Just to clarify: do indexes on (a,b) and (b,a) make no difference?

They DO.

See it as a lookup table which starts with 'a' in the first case, and 
'b' in the second one. Looking for anything that matches 'b' in an index 
(a,b) requires a full scan as you don't know 'a'; likewise, searching 
for 'a' in an index (b,a) requires a full scan. See it as looking 
through a phonebook trying to locate someone by first- rather than 
last name. It's in there, just not easily accessible.

However, if you have an index on (a,b) and DO know which 'a' you want
('Smith'), looking for 'Smith, John' is faster with an index (a,b) 
than with only an index on (a).

Johan was trying to explain this distinction:

- index (a,b) is good for searches on ONLY a or BOTH a & b, but bad 
for ONLY b
- index (b,a) is good for searches on ONLY b or BOTH a & b, but bad 
for ONLY a
- index (a) & index (b) is good for searches on ONLY a or ONLY b, 
and is suboptimal for searching for BOTH a & b (although faster than 
no index, the
Next question. If you have the two separate indexes and then do two 
queries, one for a and one for b. If you then get a list of unique 
id's of both, would it be faster to create an intersection yourself 
rather than have the server do the legwork?





Then there's index merge optimizations too I suppose


Update table on lost connection

2011-09-28 Thread Alex Schaft

Hi,

We're busy moving legacy apps from foxpro tables to mysql. User logins 
were tracked via a record in a table which the app then locked, 
preventing multiple logins for the same user code.


I want to simulate this via a locked column in a mysql table, but 
would need the field to be cleared if the server loses the connection to 
the client. How would I do this, or is there an alternative?


Thanks,
Alex




Re: Import .dbf files

2011-07-19 Thread Alex Schaft

On 2011/07/19 09:52 PM, andrewmchor...@cox.net wrote:

Hello

I am about to create a database in mysql. I would like to be able to import 
some dbase3 (.dbf) files into the tables I will be defining. What is the 
easiest way to import the table. Is there software that can be downloaded that 
will allow me to do this?

Andrew


Hi,

You want to look at www.xharbour.org. A 32bit open source compiler for 
dbase/clipper code. We're currently using it to write apps accessing 
mysql data.


Another option is www.advantagedatabase.com, a full sql database server 
for dbf files. It has a local server (think sqlite) as well as a host of 
client interfaces available with which you would be able to connect to 
via odbc, .net, jdbc etc to retrieve your data that way.






Index creation

2011-06-21 Thread Alex Schaft
--
ROW OPERATIONS
--
1 queries inside InnoDB, 0 queries in queue
Main thread process no. 16841, id 2976910240, state: flushing log
Number of rows inserted 10916769, updated 2438114, deleted 0, read 15124668
37.22 inserts/s, 0.00 updates/s, 0.00 deletes/s, 37.22 reads/s

I'm fairly new at MySQL use and optimization. I upped the 
buffer_pool size already, but I don't think it affects the current query.


What can I generally do to actually make this perform?

Thanks,
Alex




Re: Bug using 32-bit libmysqlclient on a 64-bit system?

2011-06-13 Thread Alex Gaynor
Unfortunately the decision to run 32-bit libs on 64-bit systems is outside
of my control.  Given that it *should* work I'm more interested in
diagnosing whether this is a bug of some sort in libmysqlclient or a bug in
my code/build procedure.

Alex

On Sat, Jun 4, 2011 at 10:06 AM, walter harms wha...@bfs.de wrote:

 It is basically not a clever solution to run 32-bit libs on a 64-bit system.
 You have to compile with -m32 and all sorts of things.
 It is *way* better to compile pure 64-bit.

 re,
  wh

 Am 04.06.2011 02:18, schrieb Alex Gaynor:
  I've got a 64-bit Linux system, with a 32-bit libmysqlclient (and a
 64-bit),
  and a C program using the libmysqlclient API which behaves very
 differently
  depending on which platform it is compiled for.  The program is:
 

 



Bug using 32-bit libmysqlclient on a 64-bit system?

2011-06-03 Thread Alex Gaynor
I've got a 64-bit Linux system, with a 32-bit libmysqlclient (and a 64-bit),
and a C program using the libmysqlclient API which behaves very differently
depending on which platform it is compiled for.  The program is:

#include <stdio.h>
#include <string.h>

#include <mysql.h>


int main() {
    MYSQL *conn = mysql_init(NULL);
    mysql_real_connect(conn, NULL, "root", NULL, "test_mysqldb",
                       0, NULL, 0);
    mysql_real_query(conn, "SHOW COLLATION", strlen("SHOW COLLATION"));
    MYSQL_RES *result = mysql_store_result(conn);
    int n_fields = mysql_num_fields(result);
    MYSQL_FIELD *fields = mysql_fetch_fields(result);
    int i;
    for (i = 0; i < n_fields; i++) {
        printf("%s: %d\n", fields[i].name, fields[i].type);
    }
    mysql_free_result(result);
    mysql_close(conn);
}


When run under 64-bit I get the expected result:

alex@devalex:/tmp$ ./test
Collation: 253
Charset: 253
Id: 8
Default: 253
Compiled: 253
Sortlen: 8

However when run under 32-bit I get something very unexpected:

alex@devalex:/tmp$ ./test32
Collation: 253
CHARACTER_SET_NAME: 142345400
COLLATIONS: 142345464
: 142345496
: 142345584
def: 1280069443


I'm not sure what the issue is, and it may very well be on my end, but any
debugging help you can provide would be great (this was originally extracted
from a bug in a Python MySQL driver I'm working on using the ctypes FFI).

Thanks,
Alex


recordset search question

2011-05-23 Thread Alex Schaft

Hi,

I'm sure this must've been done before, so if someone can point me at a 
discussion or assist me in some other way I'd appreciate it.


If I'm browsing a paged list of invoices say in numerical order and I 
then want to reposition the list on a certain client, I can do a second 
query to the get the record id, but how would I easily reposition the 
browse listing back in date order on the record found in the second query?
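One way to do the repositioning is keyset pagination: look up the found record's sort key with the second query, then page from that key with a row-value comparison. A sketch with Python's sqlite3 (hypothetical invoices schema; requires SQLite 3.15+ for row values, and MySQL 8.0+ supports the same comparison):

```python
import sqlite3

con = sqlite3.connect(':memory:')
con.execute("CREATE TABLE invoices (id INTEGER PRIMARY KEY, client TEXT, inv_date TEXT)")
con.executemany("INSERT INTO invoices (client, inv_date) VALUES (?, ?)",
                [('acme', '2011-01-%02d' % d) for d in range(1, 21)])

# Second query: locate the record of interest and grab its sort key.
target = con.execute("SELECT inv_date, id FROM invoices WHERE id = 7").fetchone()

# Reposition the date-ordered browse so the page starts at that record.
page = con.execute("""
    SELECT id FROM invoices
    WHERE (inv_date, id) >= (?, ?)
    ORDER BY inv_date, id LIMIT 5
""", target).fetchall()

print([r[0] for r in page])  # [7, 8, 9, 10, 11]
```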


Thanks,
Alex





Re: map polygon data for popular us cities

2011-04-02 Thread Alex Ksikes
you might be interested in this:

http://simplegeo.ksikes.net
https://github.com/alexksikes/simplegeo

On Mon, Feb 7, 2011 at 7:29 AM, Sander de Bruijne san...@debruijne.eu wrote:

 http://www.geonames.org/

 http://download.geonames.org/export/dump/?C=S;O=D


 On 02/02/2011 11:30 AM, viraj wrote:

 dear list,
 where can i find a list of map polygons for united states cities? any
 open database? or tool to obtain correct coordinates?

 i googled but couldn't find anything useful.. may be the terms i use
 are not the correct keywords :(

 any help or advice would be really appreciated.


 ~viraj








What size must I download

2010-03-21 Thread Alex
I typed
bzr branch lp:mysql-server

and 986582 KB have been downloaded so far.

What is the total size of the repo this command must download?





Re: Queue / FIFO in MySQL?

2009-09-07 Thread Alex Arul Lurthu
Please check out http://q4m.31tools.com/. It is a message queue storage
engine with a SQL interface. Haven't used it in a production setup, though we
did some tests.

-- 
Thanks
Alex
http://alexlurthu.wordpress.com


MySQL Release?

2009-09-04 Thread Alex Katebi
Hi,

I noticed that the 6.0.11 branch is not available for download anymore. Did
they rename it to 5.4?

Thanks,
-Alex


6.0.11 source

2009-05-21 Thread Alex Katebi
Anyone knows where I can download the 6.0.11 gziped source from?


SOS mysql signal syntax error

2009-05-20 Thread Alex Katebi
Hi Folks,

  I am getting a syntax error with the MySQL SIGNAL. I have a trigger
that needs a signal for raising an error condition if a row with a
specific value is removed.

  CREATE TRIGGER my_trig BEFORE DELETE ON my_tbl
  FOR EACH ROW BEGIN
  DECLARE mysig CONDITION FOR SQLSTATE '45000'; 
  IF OLD.name = 'base' THEN 
 SIGNAL mysig SET MESSAGE_TEXT='base row removal is not allowed';   

  END IF;   
  END




Re: SOS mysql signal syntax error

2009-05-20 Thread Alex Katebi
OK I tried this exact syntax and I get the same error. I tried it on mysql
client for 6.0.10

On Wed, May 20, 2009 at 2:22 PM, Gavin Towey gto...@ffn.com wrote:

 Interesting.  This syntax is only supposed to be available as of 5.4, but
 it doesn't even work there.  The reference I found was at :
 http://dev.mysql.com/tech-resources/articles/mysql-54.html

 But I couldn't find other references to the new signal support.

 This is listed as the example on that page, but it doesn't work in
 5.4.0-beta

 CREATE PROCEDURE p (divisor INT)
 BEGIN
  DECLARE divide_by_zero CONDITION FOR SQLSTATE '22012';
  IF divisor = 0 THEN
  SIGNAL divide_by_zero;
 END IF;
 END

 Methinks someone forgot to include this feature in the release!


 -Original Message-
 From: Alex Katebi [mailto:alex.kat...@gmail.com]
 Sent: Wednesday, May 20, 2009 10:58 AM
 To: mysql
 Subject: SOS mysql signal syntax error

 Hi Folks,

  I am getting syntax error with the mysql signal. I have a trigger
 that needs a signal for raising an error condition if a row with
 specific value  is removed.

  CREATE TRIGGER my_trig BEFORE DELETE ON my_tbl
  FOR EACH ROW BEGIN
  DECLARE mysig CONDITION FOR SQLSTATE '45000';
  IF OLD.name = 'base' THEN
 SIGNAL mysig SET MESSAGE_TEXT='base row removal is not allowed';
  END IF;
  END



 The information contained in this transmission may contain privileged and
 confidential information. It is intended only for the use of the person(s)
 named above. If you are not the intended recipient, you are hereby notified
 that any review, dissemination, distribution or duplication of this
 communication is strictly prohibited. If you are not the intended recipient,
 please contact the sender by reply email and destroy all copies of the
 original message.



Re: load data into temporary table

2009-05-19 Thread Alex K
Thank you, but the real problem occurs when you don't know the schema
of the table in advance. If data.txt has two columns, how can I still
load it into a temporary table? I'm asking this question because I'd
like to add an import-CSV feature to a web application. I know that you
can load data infile into a table without specifying the schema of that
table, but it does not look like you can do load data infile into a
temporary table.

Thank you,

Alex
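A sketch of the unknown-schema import idea (Python's sqlite3 for illustration; column names are taken straight from the CSV header and not escaped, so this is illustration only -- with MySQL, a dynamically built CREATE TEMPORARY TABLE followed by LOAD DATA INFILE follows the same shape):

```python
import csv, io, sqlite3

def csv_to_temp_table(con, name, text):
    # Read the header to discover the schema, then create a matching
    # temporary table and bulk-insert the remaining rows.
    rows = list(csv.reader(io.StringIO(text)))
    header, data = rows[0], rows[1:]
    cols = ', '.join('"%s" TEXT' % c for c in header)
    con.execute('CREATE TEMPORARY TABLE "%s" (%s)' % (name, cols))
    con.executemany('INSERT INTO "%s" VALUES (%s)'
                    % (name, ','.join('?' * len(header))), data)

con = sqlite3.connect(':memory:')
csv_to_temp_table(con, 'import', 'a,b\n1,x\n2,y\n')
print(con.execute('SELECT * FROM "import"').fetchall())  # [('1', 'x'), ('2', 'y')]
```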

2009/5/19 Janek Bogucki janek.bogu...@studylink.com:
 Hi,

 mysql> create temporary table t(i int);

 mysql> \! echo 1 > /tmp/data.txt

 mysql> load data infile '/tmp/data.txt' into table t;
 Query OK, 1 row affected (0.00 sec)
 Records: 1  Deleted: 0  Skipped: 0  Warnings: 0

 mysql> select * from t;
 +------+
 | i    |
 +------+
 |    1 |
 +------+
 1 row in set (0.00 sec)

 Best Regards,
 -Janek, CMDEV 5.0.
 StudyLink. Helping People Realise Their Potential.
 http://studylink.com


 On Tue, 2009-05-19 at 02:57 +0100, Alex K wrote:
 Hello,

 Would anyone know how to load data infile into a temporary table?

 Thank you,

 Alex









What is wrong with this SYNTAX?

2009-05-19 Thread Alex Katebi
The parser is complaining about a syntax error near STRCMP.

  CREATE TRIGGER bgp.tglobal BEFORE UPDATE on bgp.global
  FOR EACH ROW BEGIN
  IF STRCMP(NEW.Variable_name,'ASN') != 0 THEN
 set NEW.Variable_name=NULL;
  ELSEIF STRCMP(NEW.Variable_name, 'RouterId') != 0 THEN
 set NEW.Variable_name=NULL
  ELSEIF STRCMP(NEW.Variable_name, 'ASN') != 0 THEN
 set NEW.Value=CONVERT(CONVERT(NEW.Value, UNSIGNED), CHAR);
  ELSEIF STRCMP(NEW.Variable_name, 'RouterId') != 0 THEN
 set NEW.Value=INET_NTOA(INET_ATON(NEW.Value));
  ENDIF
  END

Thanks in advance!
Alex
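For the archives: the parse error is not actually at STRCMP itself. The second SET is missing its terminating semicolon, ENDIF must be spelled END IF, and a multi-statement body needs a DELIMITER change in the mysql client. Also, with != in every branch the first branch fires for any name other than 'ASN', so the later branches are unreachable. A hedged reconstruction of the apparent intent (normalize the known variables, NULL out unknown ones):

```sql
DELIMITER //
CREATE TRIGGER bgp.tglobal BEFORE UPDATE ON bgp.global
FOR EACH ROW BEGIN
  IF STRCMP(NEW.Variable_name, 'ASN') = 0 THEN
    SET NEW.Value = CONVERT(CONVERT(NEW.Value, UNSIGNED), CHAR);
  ELSEIF STRCMP(NEW.Variable_name, 'RouterId') = 0 THEN
    SET NEW.Value = INET_NTOA(INET_ATON(NEW.Value));
  ELSE
    -- unknown variable name: blank it out, as the original appears to intend
    SET NEW.Variable_name = NULL;
  END IF;
END//
DELIMITER ;
```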


Native Function

2009-05-18 Thread Alex Katebi
Hi All,

   I am using the mysql embedded library (libmysqld) in my application.  I
am using triggers and calling a C/C++ function in my application. This is
how I convey configuration changes to my application.
I think my only choice is to make a Native Function in the mysql source code
since UDF is not available in libmysqld. It would be nice if there were a
generic function for this purpose that would take two string arguments and
return an integer. This way I can use it for anywhere I need to tie in
triggers to my application.

Is there an easier way to accomplish this. I mean invoking an application
C/C++ function from mysql stored procedures?

Thanks in advance!
Alex


Re: Native Function

2009-05-18 Thread Alex Katebi
Hello Martin,

   This sounds great! I am not sure if plugins are supported for the
embedded mysql applications. Meaning that my application is linked with the
libmysqld not libmysql. Do you know the answer?

Thanks,
Alex



On Mon, May 18, 2009 at 10:17 AM, Martin Gainty mgai...@hotmail.com wrote:

  yes if the library is a plugin
 create the plugin
 http://dev.mysql.com/doc/refman/5.1/en/plugin-api.html
 also make sure the table mysql.plugin is created

 compile and run mysqld dynamically is a pre-requisite
 http://dev.mysql.com/doc/refman/5.1/en/plugin-writing.html

 once the dynamic lib is created follow these instructions to install the
 plugin
 http://dev.mysql.com/doc/refman/5.1-maria/en/install-plugin.html

 Martin Gainty






  Date: Mon, 18 May 2009 09:33:15 -0400
  Subject: Native Function
  From: alex.kat...@gmail.com
  To: mysql@lists.mysql.com

 
  Hi All,
 
  I am using the mysql embedded library (libmysqld) in my application. I
  am using triggers and calling a C/C++ function in my application. This is
  how I convey configuration changes to my application.
  I think my only choice is to make a Native Function in the mysql source
 code
  since UDF is not available in libmysqld. It would be nice if there were a
  generic function for this purpose that would take two string arguments
 and
  return an integer. This way I can use it for anywhere I need to tie in
  triggers to my application.
 
  Is there an easier way to accomplish this. I mean invoking an application
  C/C++ function from mysql stored procedures?
 
  Thanks in advance!
  Alex




load data into temporary table

2009-05-18 Thread Alex K
Hello,

Would anyone know how to load data infile into a temporary table?

Thank you,

Alex




AMD64

2009-04-07 Thread Alex Katebi
Hi,

What is the size of MYSQL_TYPE_LONG on a 64-bit machine? I am trying to store
pointers. If this type is 4 bytes on a 32-bit machine and 8 bytes on a 64-bit
machine, that will make things much easier.

Thanks,
-Alex
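In the C API, MYSQL_TYPE_LONG is bound to a C int buffer, which stays 4 bytes on common 32-bit and 64-bit Linux ABIs, so it cannot hold a 64-bit pointer; MYSQL_TYPE_LONGLONG (C long long, 8 bytes) can. A quick size check via Python's ctypes (illustrative only; the buffer-type-to-C-type mapping is from the C API documentation):

```python
import ctypes

# MYSQL_TYPE_LONG is bound to a C int buffer; MYSQL_TYPE_LONGLONG to a
# C long long buffer (per the C API prepared-statement type table).
int_size = ctypes.sizeof(ctypes.c_int)            # 4 on common 32/64-bit ABIs
longlong_size = ctypes.sizeof(ctypes.c_longlong)  # 8 on common ABIs
ptr_size = ctypes.sizeof(ctypes.c_void_p)         # 4 on 32-bit, 8 on 64-bit

# A pointer is only guaranteed to fit in the 8-byte type.
fits_in_longlong = ptr_size <= longlong_size
```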


passing a bad pointer question

2009-04-07 Thread Alex Katebi
Hi,

   If a mysql client application passes a bad MYSQL* connection pointer to
the server, how does the server cope with that? Does the server crash, drop
the client connection, or ignore the function call and return an error?
If it returns an error, what is the error code?

Thanks,
-Alex


Re: AMD64

2009-04-07 Thread Alex Katebi
On Tue, Apr 7, 2009 at 11:17 AM, Martijn Tonies m.ton...@upscene.comwrote:

 Alex,

 Please respond to the list instead of my personal address.

Sorry, I thought I was.




 Nevertheless, if MYSQL_TYPE_LONG is the datatype for a table field,
 it would always -have- to be the same size, cause different clients can
 connect (if not being embedded), wouldn't it?


Yes if 64bit clients and 32bit servers or 32bit clients and 64bit servers
can talk to each other.
Do they? I found it much too complicated to go that route for my
client/server design. In my case the answer is no.




 With regards,

 Martijn Tonies
 Upscene Productions
 http://www.upscene.com

 Download Database Workbench for Oracle, MS SQL Server, Sybase SQL
 Anywhere, MySQL, InterBase, NexusDB and Firebird!

 Database questions? Check the forum:
 http://www.databasedevelopmentforum.com



  Hi Martijn,

  I made an embedded mysql server. The API for clients to my server uses
 the standard mysql client C API. If clients pass bad pointers to the server
 I don't want the server to crash. So the server knows about all valid mysql
 connections. If a bad mysql pointer is passed in, it can detect that. So
 when I save valid pointers when a connection is made I like to save it as a
 MYSQL_TYPE_LONG. The client and the server have to run on similar types of
 machines: 64bit client with 64bit server and 32bit client with 32bit server,
 no mismatch. The Long size in a 32bit machine is 4 bytes but in a 64bit
 machine is 8 bytes. So I wanted to avoid conditionals for building for a
 64bit machine.


  On Tue, Apr 7, 2009 at 10:49 AM, Martijn Tonies m.ton...@upscene.com
 wrote:



  What is the size of MYSQL_TYPE_LONG in a 64bit machine? I am trying to
 save
  pointers. If this type is 4 bytes on a 32bit machine and 8 bytes on a
 64bit
  machine will make it much easier.



Ehm, wouldn't that like, totally fail if -saved- by a 64bit machine and
 -read- by
a 32bit machine if that were the case?

Perhaps you should explain in more detail what you're trying to do?

With regards,

Martijn Tonies
Upscene Productions
http://www.upscene.com

Download Database Workbench for Oracle, MS SQL Server, Sybase SQL
Anywhere, MySQL, InterBase, NexusDB and Firebird!

Database questions? Check the forum:
http://www.databasedevelopmentforum.com







is this a bug?

2009-03-21 Thread Alex Katebi
In mysql embedded server 6.09 built from source for Linux 2.6.27.11
kernel.
First create a connection (mysql_real_connect) then close the connection in
a different thread.
This causes a SIGSEGV crash in THD::store_globals(). My mysql embedded
library is thread-safe.

Thanks in advance,
-Alex


Autoconf --basedir

2009-01-27 Thread Alex Katebi
Anyone knows how to figure out the mysql basedir string using Autoconf and
then pass it to Automake for an embedded application?

Thanks in advance,
-Alex


stored procedures embedded server

2008-12-03 Thread Alex Katebi
   For embedded server do I need to configure
--with-embedded-privilege-control?
It looks like the stored procedure support is looking for the mysql.proc table.

Thanks.


Re: slow in statement?

2008-11-10 Thread Alex K
Hi Ananda,

Indexes are present. I'm passing no more than 10 values.

Alex

2008/11/10 Ananda Kumar [EMAIL PROTECTED]:
 it should not cause any issues, unless your passing too many values in id
 in(1,2,...n).
 Are the indexes present.


 On 11/10/08, Alex K [EMAIL PROTECTED] wrote:

 Hello,

 It seems to me that the statement id in (id1, id2 ... idn) is much
 slower than id=id1 or id=id2 ... or id=idn, or am I doing something
 wrong?

 Thank you,

 Alex








slow in statement?

2008-11-10 Thread Alex K
Hello,

It seems to me that the statement id in (id1, id2 ... idn) is much
slower than id=id1 or id=id2 ... or id=idn, or am I doing something
wrong?

Thank you,

Alex
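For a short constant list on an indexed column, the optimizer should treat IN as a range access, so the two forms ought to perform similarly; EXPLAIN on both is the quickest way to confirm the index is actually used. A sketch against a hypothetical table t with an index on id:

```sql
EXPLAIN SELECT * FROM t WHERE id IN (1, 2, 3);
EXPLAIN SELECT * FROM t WHERE id = 1 OR id = 2 OR id = 3;
-- both should report an access type of range (or ref) with the id index as key;
-- if either shows type ALL, the index is not being used for that form
```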




Re: slow in statement?

2008-11-10 Thread Alex K
Here we go:

http://pastebin.com/m2439985d

replace $company_ids by list of ids from 'companies'
replace $neg_company_ids by -1 * $company_ids
replace $location_ids by list of location ids from 'locations'
replace $all_company_ids by list of ids from 'all_companies'

The reason I'm using $neg_company_ids is that I want to
differentiate between ids from 'companies' and ids from
'all_companies'.
I noticed that when I replace the in statements with ors, it is
actually faster.

Thank you so much.

2008/11/10 Ananda Kumar [EMAIL PROTECTED]:
 Can you please show me the explain plan for this sql and also the table
 structure.

 explain select statement

 desc table_name


 On 11/10/08, Alex K [EMAIL PROTECTED] wrote:

 Hi Ananda,

 Indexes are present. I'm passing no more 10 values.

 Alex

 2008/11/10 Ananda Kumar [EMAIL PROTECTED]:
  it should not cause any issues, unless your passing too many values in
 id
  in(1,2,...n).
  Are the indexes present.
 
 
  On 11/10/08, Alex K [EMAIL PROTECTED] wrote:
 
  Hello,
 
  It seems to me that the statement id in (id1, id2 ... idn) is much
  slower than id=id1 or id=id2 ... or id=idn or I am doing something
  wrong?
 
  Thank you,
 
  Alex
 
 
 
 






embedded client test scripts

2008-10-12 Thread Alex Katebi
Any information on how to run the test scripts for the embedded client.

Thanks,
-Alex


simple design choice

2008-10-03 Thread Alex K
Hello,

I have a table of 1 million users. I want to add a flag called
delete if a user wants to delete his account. Note that this situation
does not happen a lot.

1) Should I alter my users table and add a delete flag to the users table?
 It's easy to update; however, it uses a lot of unnecessary space.

2) Should I create a new table (user_id, flag) already prefilled with
all user_ids?

3) Should I create a new table called deleted_users that has a user_id
if this user wants to be deleted?
 It's a hassle to update but takes into consideration the sparseness of the data.

Thank you,

Alex
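Option 3 can be kept cheap to query with a LEFT JOIN (or NOT EXISTS) that filters out deleted accounts. A sketch with hypothetical table and column names:

```sql
CREATE TABLE deleted_users (
  user_id INT UNSIGNED NOT NULL PRIMARY KEY,
  deleted_at DATETIME NOT NULL
);

-- Active users = everyone without a row in deleted_users.
SELECT u.*
FROM users u
LEFT JOIN deleted_users d ON d.user_id = u.id
WHERE d.user_id IS NULL;
```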




Re: simple design choice

2008-10-03 Thread Alex K
2) is probably cleaner but a hassle as well because one needs to make
sure all user_ids are also in this new table.

2008/10/3 Alex K [EMAIL PROTECTED]:
 Hello,

 I have a table of a 1 million users. I want to add a flag called
 delete if a user wants to delete his account. Note that this situation
 does not happen a lot.

 1) Should I alter my users table and add a delete flag to the users table.
 it's easy to update however it uses a lot of unnecessary space.

 2) Should I create a new table user_id, flag already prefilled with
 all user_ids.

 3) Should I create a new table called deleted_users that has a user_id
 if this user wants to be deleted.
 it's hassle to update but takes into consideration the spareness of the 
 data.

 Thank you,

 Alex





Re: simple design choice

2008-10-03 Thread Alex K
That seems like a nice trick. I suppose the flag would just be an int
and not an enum in this case.

2008/10/3 Mr. Shawn H. Corey [EMAIL PROTECTED]:
 On Fri, 2008-10-03 at 09:58 -0700, Rob Wultsch wrote:
 On Fri, Oct 3, 2008 at 9:49 AM, Alex K [EMAIL PROTECTED] wrote:
  Hello,
 
  I have a table of a 1 million users. I want to add a flag called
  delete if a user wants to delete his account. Note that this situation
  does not happen a lot.
 
  1) Should I alter my users table and add a delete flag to the users table.
  it's easy to update however it uses a lot of unnecessary space.
 
  2) Should I create a new table user_id, flag already prefilled with
  all user_ids.
 
  3) Should I create a new table called deleted_users that has a user_id
  if this user wants to be deleted.
  it's hassle to update but takes into consideration the spareness of the 
  data.
 
  Thank you,
 
  Alex

 #1 Define "uses a lot of unnecessary space". I would imagine it would
 add not much more than 1 MB to the size of the db, depending on column
 choice. A decent choice I think.

 #2 Yuck.

 #3 A compact and clean solution.


 If you're going to do #1, make the new column status, with two states:
 active and deleted.  In the future you can add more states without
 re-doing your tables again.


 --
 Just my 0.0002 million dollars worth,
  Shawn

 Linux is obsolete.
 -- Andrew Tanenbaum






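Shawn's status-column variant of option 1 in SQL form (a sketch; the column and table names are illustrative). An ENUM keeps readable state names while storing them as small integers, and new states can be appended later without rewriting existing rows:

```sql
ALTER TABLE users
  ADD COLUMN status ENUM('active', 'deleted') NOT NULL DEFAULT 'active';

-- Later, further states can be appended to the end of the list:
ALTER TABLE users
  MODIFY COLUMN status ENUM('active', 'deleted', 'suspended')
    NOT NULL DEFAULT 'active';
```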


Tables are too wide

2008-09-27 Thread Alex Katebi
Hi,

  When selecting from some of the mysql database tables (for example:
select * from mysql.db), the result set does not fit in a standard xterm
window. Is there any terminal display program on Linux that has a horizontal
scroll bar that I can use? Or what can I do to split the table output to fit
on my screen?

Thanks,
-Alex
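The mysql client itself has two features that avoid an external viewer: terminate a statement with \G for vertical output (one column per line), or set a pager such as less -S, which chops long lines and lets the arrow keys scroll horizontally:

```
mysql> SELECT * FROM mysql.db\G

mysql> pager less -S
mysql> SELECT * FROM mysql.db;
mysql> nopager
```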


UDF Question

2008-09-04 Thread Alex Katebi
Hello,

  I am planning to write a UDF (User Defined Function) that acts like a
server side client. This UDF is called by a client first. After that the UDF
spawns a thread and then exits. Within this spawned thread it will get work from
a network socket. After that it will start executing SQL statements against
some database tables in the server.

   What are the ramifications of the above model within the MySQL server?

Thanks,
-Alex


mysql user and authentication ???

2008-08-31 Thread Alex Katebi
Hello,

I just installed the 6.0.6 version of the mysqld on Linux using the
binary RPM.
The RPM installed all the programs then it started the mysqld server.

I read lots of documents about mysql user and password. But I have no idea
what the users are used for, and how users are created.

The Linux root user is the only user that can start and stop the server. Is
this fact correct?

Thanks,
-Alex
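MySQL accounts are unrelated to Linux accounts: they live in the mysql.user grant tables and govern who may connect, from which host, and what SQL they may run. They are created with CREATE USER and GRANT (a sketch; the user name, host, password, and database are placeholders). Separately, any OS user with rights to the data directory and init script can start the server; the daemon itself normally runs as the mysql OS user.

```sql
CREATE USER 'appuser'@'localhost' IDENTIFIED BY 'secret';
GRANT SELECT, INSERT, UPDATE, DELETE ON appdb.* TO 'appuser'@'localhost';
SHOW GRANTS FOR 'appuser'@'localhost';
```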


starting mysqld on Linux fedora

2008-08-30 Thread Alex Katebi
Hello,

  Is there any documentation on how to start the mysqld server?
I have installed version 6.0 server and client. But I don't know how to
start the server correctly.

Thanks,
-Alex
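With the server RPM installed, the usual ways to start it are the init script the package drops in place or the mysqld_safe wrapper (a sketch; script names vary slightly across distributions and versions):

```shell
# via the packaged init script
sudo /etc/init.d/mysql start
sudo /etc/init.d/mysql status

# or directly, under the crash-restart wrapper, as the mysql OS user
sudo mysqld_safe --user=mysql &
```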


Re: Reset of Status Parameters

2008-06-20 Thread Alex Arul Lurthu
I would also add Baron's maatkit http://www.maatkit.org/ (innotop) for
InnoDB details to the arsenal.

On Fri, Jun 20, 2008 at 3:11 PM, Ian Simpson [EMAIL PROTECTED] wrote:

 I tend to use the 'mytop' program, which shows the average
 queries/second for the entire lifetime and for the last 5 seconds, as
 well as showing a bunch of other statistics and a list of running
 queries. It's a handy little monitoring tool.

 On Fri, 2008-06-20 at 12:17 +0530, Venu Madhav Padakanti wrote:
  I am using MySQL version 5.0.22, I am interested in knowing the current
  performance on the MySQL.
 
  With the status command we can get the queries per second but it will
  average since the beginning of time when SQL was up and running and not
  the current rate?
 
  Is there any way to reset that parameter so that the data can reflect
  current without restarting the MySQL
 
  Thanks in advance
  ..venu
 --
 Ian Simpson
 System Administrator
 MyJobGroup

 This email may contain confidential information and is intended for the
 recipient(s) only. If an addressing or transmission error has misdirected
 this email, please notify the author by replying to this email. If you are
 not the intended recipient(s) disclosure, distribution, copying or printing
 of this email is strictly prohibited and you should destroy this mail.
 Information or opinions in this message shall not be treated as neither
 given nor endorsed by the company. Neither the company nor the sender
 accepts any responsibility for viruses or other destructive elements and it
 is your responsibility to scan any attachments.




-- 
Thanks
Alex
http://alexlurthu.wordpress.com
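Regarding the quoted question at the end: the session-level counters can be reset with FLUSH STATUS, and a current rate can also be computed without any reset by sampling SHOW GLOBAL STATUS twice and diffing, which is essentially what mytop and innotop do. A sketch:

```sql
FLUSH STATUS;   -- resets the session/thread status counters to zero
SHOW GLOBAL STATUS LIKE 'Questions';
-- sample again N seconds later:
-- (new_value - old_value) / N = current queries per second
```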


Re: Very slow inserts into InnoDB tables

2008-06-13 Thread Alex Arul Lurthu
Please check that the my.cnf configurations are the same.

 What are your configuration parameters in terms of innodb flush log trx
commit, bin logging, sync binlog and innodb unsafe for binlog?

If the systems have raid, check if the BBWC is enabled on the new host and
WB is enabled.


On Fri, Jun 13, 2008 at 5:02 PM, Ian Simpson [EMAIL PROTECTED] wrote:

 Hi list,

 Have a bit of a mystery here that I hope somebody can help with.

 I've just got a new server that I'm using as a dedicated MySQL server.
 In terms of hardware it's pretty much identical, if not slightly
 superior to an existing server already in production use.

 It's having a real struggle processing INSERT statements to InnoDB
 tables; it's maxing out at around 100 inserts per second, even with very
 simple two column tables (inserts into MyISAM tables run fine).
 Meanwhile, the original server can happily process around 1000
 inserts/sec into an identical table.

 The MySQL configuration of the two databases is identical, except for
 the tablespace file size (the new server has a larger tablespace
 defined), and the InnoDB logs (again, new server has larger logs).

 Can anybody suggest an area of investigation as to the cause?

 Thanks,
 --
 Ian Simpson





-- 
Thanks
Alex
http://alexlurthu.wordpress.com


Re: Very slow inserts into InnoDB tables

2008-06-13 Thread Alex Arul Lurthu
Also, how often do you issue a commit? Batching the inserts inside a
transaction might help.

On Fri, Jun 13, 2008 at 6:53 PM, Ananda Kumar [EMAIL PROTECTED] wrote:

 check for iostat to see if the disk is heavly used.

 On 6/13/08, Ian Simpson [EMAIL PROTECTED] wrote:

 Hi Alex,

 Configurations are identical, other than the differences I initially
 mentioned. I've diffed both the configuration files and the output of
 SHOW VARIABLES on both servers.

 I've contacted my hosting provider to ask about the RAID settings.

 Variable_name: innodb_flush_log_at_trx_commit
Value: 1
 Variable_name: sync_binlog
Value: 0
 Variable_name: innodb_locks_unsafe_for_binlog
Value: OFF

 Thanks

 --
 Ian Simpson

 On Fri, 2008-06-13 at 17:43 +0530, Alex Arul Lurthu wrote:
  Please check if the my.cnf configurations to be the same.
 
   What are your configuration parameters in terms of innodh flush log
  trx commit , bin logging, sync binlog and innodb unsafe for binlog ?
 
  If the systems have raid, check if the BBWC is enabled on the new host
  and WB is enabled.
 
 
  On Fri, Jun 13, 2008 at 5:02 PM, Ian Simpson [EMAIL PROTECTED]
  wrote:
  Hi list,
 
  Have a bit of a mystery here that I hope somebody can help
  with.
 
  I've just got a new server that I'm using as a dedicated MySQL
  server.
  In terms of hardware it's pretty much identical, if not
  slightly
  superior to an existing server already in production use.
 
  It's having a real struggle processing INSERT statements to
  InnoDB
  tables; it's maxing out at around 100 inserts per second, even
  with very
  simple two column tables (inserts into MyISAM tables run
  fine).
  Meanwhile, the original server can happily process around 1000
  inserts/sec into an identical table.
 
  The MySQL configuration of the two databases is identical,
  except for
  the tablespace file size (the new server has a larger
  tablespace
  defined), and the InnoDB logs (again, new server has larger
  logs).
 
  Can anybody suggest an area of investigation as to the cause?
 
  Thanks,
  --
  Ian Simpson
 
  This email may contain confidential information and is
  intended for the recipient(s) only. If an addressing or
  transmission error has misdirected this email, please notify
  the author by replying to this email. If you are not the
  intended recipient(s) disclosure, distribution, copying or
  printing of this email is strictly prohibited and you should
  destroy this mail. Information or opinions in this message
  shall not be treated as neither given nor endorsed by the
  company. Neither the company nor the sender accepts any
  responsibility for viruses or other destructive elements and
  it is your responsibility to scan any attachments.
 
 
 
  --
  Thanks
  Alex
  http://alexlurthu.wordpress.com






-- 
Thanks
Alex
http://alexlurthu.wordpress.com


Re: Very slow inserts into InnoDB tables

2008-06-13 Thread Alex Arul Lurthu
Replication-based inserts are serial, whereas most of the time the inserts on
masters are concurrent. This leads to the slaves falling behind. To tackle
this we have used the following strategies:

1. Use raid 0 on the slaves (the master uses raid 10) so as to speed up writes.

2. Pre-fetch and cache the data that needs to be modified by the slave sql
thread.
3. Set innodb flush trx log commit to 2 or even 0.
4. Out of desperation, sometimes disable innodb double write and also xa
support.
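Point 3, together with the point-4 desperation settings, as a my.cnf fragment for reference. All of these trade slave-side durability for write throughput, and the last two are only sensible if the slave can be rebuilt from the master:

```ini
[mysqld]
innodb_flush_log_at_trx_commit = 2   # 2: flush to OS cache per commit; 0: once/second
sync_binlog = 0
innodb_doublewrite = 0
innodb_support_xa = 0
```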

On Fri, Jun 13, 2008 at 7:33 PM, Ian Simpson [EMAIL PROTECTED] wrote:



 Hi guys, thanks for pitching in.

 The inserts are from replication; we're not using transactions on the
 master (yet), and I don't think there's a way of telling MySQL to batch
 incoming replication statements if they're not already in a transaction.

 Disk usage: the older server (the one that's running fine) is running
 more transactions per second, but has lower blocks written and read per
 second than the new server:

 The working server (which in addition to replicating is also handling a
 bunch of read queries)

 Device: tps Blk_read/s Blk_wrtn/s Blk_read Blk_wrtn
 sda 88.47 782.20 998.77 9046888130 11551757459

 The new server, which is just trying to handle replication

 Device: tps Blk_read/s Blk_wrtn/s Blk_read Blk_wrtn
 sda 77.83 1367.55 2914.72 358474084 764029986

 Thanks,
 
 --
 Ian Simpson




 On Fri, 2008-06-13 at 19:15 +0530, Alex Arul Lurthu wrote:
  also how often do you issue a commit. batching the inserts inside a
  transaction might help.
 
  On Fri, Jun 13, 2008 at 6:53 PM, Ananda Kumar [EMAIL PROTECTED]
  wrote:
  check for iostat to see if the disk is heavly used.
 
 
  On 6/13/08, Ian Simpson [EMAIL PROTECTED] wrote:
  Hi Alex,
 
  Configurations are identical, other than the
  differences I initially
  mentioned. I've diffed both the configuration files
  and the output of
  SHOW VARIABLES on both servers.
 
  I've contacted my hosting provider to ask about the
  RAID settings.
 
  Variable_name: innodb_flush_log_at_trx_commit
  Value: 1
  Variable_name: sync_binlog
  Value: 0
  Variable_name: innodb_locks_unsafe_for_binlog
  Value: OFF
 
  Thanks
 
  --
  Ian Simpson
 
  On Fri, 2008-06-13 at 17:43 +0530, Alex Arul Lurthu
  wrote:
   Please check if the my.cnf configurations to be the
  same.
  
   What are your configuration parameters in terms of
  innodh flush log
   trx commit , bin logging, sync binlog and innodb
  unsafe for binlog ?
  
   If the systems have raid, check if the BBWC is
  enabled on the new host
   and WB is enabled.
  
  
   On Fri, Jun 13, 2008 at 5:02 PM, Ian Simpson
  [EMAIL PROTECTED]
   wrote:
   Hi list,
  
   Have a bit of a mystery here that I hope
  somebody can help
   with.
  
   I've just got a new server that I'm using as
  a dedicated MySQL
   server.
   In terms of hardware it's pretty much
  identical, if not
   slightly
   superior to an existing server already in
  production use.
  
   It's having a real struggle processing
  INSERT statements to
   InnoDB
   tables; it's maxing out at around 100
  inserts per second, even
   with very
   simple two column tables (inserts into
  MyISAM tables run
   fine).
   Meanwhile, the original server can happily
  process around 1000
   inserts/sec into an identical table.
  
   The MySQL configuration of the two databases
  is identical,
   except for
   the tablespace file size (the new server has
  a larger
   tablespace
   defined), and the InnoDB logs (again, new
  server has larger
   logs).
  
   Can anybody suggest an area of investigation
  as to the cause?
  
   Thanks,
   --
   Ian Simpson
  
  
  
  
   --
   Thanks
   Alex
   http://alexlurthu.wordpress.com
 

backup questions

2008-01-28 Thread Alex K
Hi guys,

Is it safe to perform a backup of a database by copying its mysql
files if the database is:

1) MyISAM and the database is not being used (mysql server turned off)?

2) InnoDB and the database is not being used?

3) MyISAM and the database is being used?

4) InnoDB and the database is being used?

I know I can use mysqlhotcopy or mysqldump, but I'm just curious whether it's
possible to perform a backup by simply copying the files over under the
conditions described above.

Thank you so much,

Alex
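Short answers: 1 and 2 are safe (a cleanly stopped server leaves both engines consistent on disk, provided the whole data directory, including ibdata*/ib_logfile* for InnoDB, is copied); 3 is only safe while the tables are flushed and read-locked, which is exactly what mysqlhotcopy automates; 4 is not safe, since a raw copy of a live InnoDB tablespace will be internally inconsistent. A sketch of the two safe manual variants (paths and credentials are placeholders):

```shell
# Cold copy (cases 1 and 2): stop the server first, then copy everything.
mysqladmin -u root -p shutdown
cp -a /var/lib/mysql /backup/mysql-$(date +%F)

# MyISAM hot copy (case 3): in a mysql session keep
#   FLUSH TABLES WITH READ LOCK;
# open while the files are copied, then issue UNLOCK TABLES;
```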




Re: performance of heterogeneous joins

2008-01-25 Thread Alex K
Cool it's good to know thank you.

On 25/01/2008, Jay Pipes [EMAIL PROTECTED] wrote:
 Nope, no difference, AFAIK.

 Alex K wrote:
  Any ideas pertaining this newbie question?
 
  Thank you so much,
 
  Hi Guys,
 
  Is there a performance hit when joining across multiple databases as
  opposed to joining multiples tables in one database? Suppose the same
  tables are available across all databases.
 
  Thank you,
 
  Alex
 
 







Re: performance of heterogeneous joins

2008-01-24 Thread Alex K
Any ideas pertaining to this newbie question?

Thank you so much,

 Hi Guys,

 Is there a performance hit when joining across multiple databases as
 opposed to joining multiples tables in one database? Suppose the same
 tables are available across all databases.

 Thank you,

 Alex





Re: select from otherdb.table question?

2008-01-20 Thread Alex K
Hi Brent,

ahh of course :) thank you so much for answering though.

Alex

On 20/01/2008, Brent Baisley [EMAIL PROTECTED] wrote:
 When you establish a connection, it's a connection to a server, not a
 specific DB. You can set a default db so that you don't always have
 to specify the db name you are working with. So to answer your
 question, no, a new connection is not established.

 Brent


 On Jan 19, 2008, at 10:19 AM, Alex K wrote:

  Hi Guys,
 
  What does the statement select * from otherdb.table do if I haven't
  explicitly connected to otherdb previously? I would assume it connects
  to otherdb and does the select on table but does it create a new
  connection each time? Is it as efficient as explicitly connecting to
  otherdb and then querying. I'm using webware DBUtils for connection
  pooling. Would these connections also be taken into account?
 
  Thank you so much,
 
  Alex
 
  --
  MySQL General Mailing List
  For list archives: http://lists.mysql.com/mysql
  To unsubscribe:http://lists.mysql.com/mysql?
  [EMAIL PROTECTED]
 


 --
 MySQL General Mailing List
 For list archives: http://lists.mysql.com/mysql
 To unsubscribe:http://lists.mysql.com/[EMAIL PROTECTED]






performance of heterogeneous joins

2008-01-19 Thread Alex K
Hi Guys,

Is there a performance hit when joining across multiple databases as
opposed to joining multiples tables in one database? Suppose the same
tables are available across all databases.

Thank you,

Alex




select from otherdb.table question?

2008-01-19 Thread Alex K
Hi Guys,

What does the statement select * from otherdb.table do if I haven't
explicitly connected to otherdb previously? I would assume it connects
to otherdb and does the select on table but does it create a new
connection each time? Is it as efficient as explicitly connecting to
otherdb and then querying? I'm using webware DBUtils for connection
pooling. Would these connections also be taken into account?

Thank you so much,

Alex




basic style schema question

2008-01-18 Thread Alex K
Hi Guys,

Let's suppose I have the following table:

create table companies
(
id  int(11) not null auto_increment primary key,

# user's login and password
email   varchar(100),
passwordvarchar(100),

# basic information
company_namevarchar(100),
contact_namevarchar(100),
street  varchar(100),
cityvarchar(100),
state   varchar(7),
zip varchar(13),
phone   varchar(25),

# user's company description
description text,
category_other  text,

# localization
desired_zip varchar(7),
latitudedec(10,7),
longitude   dec(10,7),

# user's personalized options
url varchar(200) not null unique,
logo_md5varchar(32),
linked_url  varchar(200),
color_bgrd  varchar(16),

# content moderation (null if nothing, 1 for new, 2 for updates)
updates smallint,
banned  boolean
);

Would you keep this as one table or would you split it into multiple
tables such as users, company localizations, personalized options and
moderation which would hold each the fields under each comment
together with a company_id? The first way of doing it is easier to
update because I would not have to update all the other linked tables.
But the other way of doing it seems cleaner.

To summarize one table vs. many tables with one to one relations?
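(One possible split, sketched with the column names above; whether it pays off
depends on access patterns.)

```sql
CREATE TABLE company_options (
    company_id  INT(11) NOT NULL PRIMARY KEY,
    url         VARCHAR(200) NOT NULL UNIQUE,
    logo_md5    VARCHAR(32),
    linked_url  VARCHAR(200),
    color_bgrd  VARCHAR(16),
    FOREIGN KEY (company_id) REFERENCES companies (id)
);
```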

Thank you,

Alex




Re: basic style schema question

2008-01-18 Thread Alex K
Hi Kevin,

Well the basic information, company description and personalized
options will be selected many times (whenever a user submits a query).
It will basically be show on the result page of the search engine.

The user's login / password well is used to login, then the user may
update the company basic information, description and personalized
options. These updates may happen sporadically though. Once every 3
minutes these fields are selected again in order to update the search
engine index.

Thank you,

Alex

On 18/01/2008, Kevin Hunter [EMAIL PROTECTED] wrote:
 At 11:44a -0500 on 18 Jan 2008, Alex K wrote:
  To summarize one table vs. many tables with one to one relations?

 As per usual, it depends on your needs.  For most flexibility, and to
 give the DB the best chance to give the best plan for the possible
 requests I might make in the future, I generally tend towards
 normalization, not denormalization.

 The question is: what do you want to do with the schema?

 Kevin

 --
 MySQL General Mailing List
 For list archives: http://lists.mysql.com/mysql
 To unsubscribe:http://lists.mysql.com/[EMAIL PROTECTED]






Re: Circular replication

2007-12-05 Thread Alex Arul Lurthu
Chain replication is fine as long as reading stale data from the last slave
in your chain is OK; the staleness depends on the write throughput and the
capacity of the intermediate slaves. But chain replication combined with
circular replication is a definite no-no in production: if any intermediate
node fails, you will not be able to restore it easily and the data goes out
of sync.

On Dec 5, 2007 12:31 AM, Krishna Chandra Prajapati [EMAIL PROTECTED]
wrote:

 Hi All,

 Is circular replication or chain replication suitable for a production
 environment? Has any testing been done? If yes, please let me
 know. Is there any other issue related to circular replication?

 Thanks
 --
 Krishna Chandra Prajapati
 MySQL DBA,
 Ed Ventures e-Learning Pvt.Ltd.
 1-8-303/48/15, Sindhi Colony
 P.G.Road, Secunderabad.
 Pin Code: 53
 Office Number: 040-66489771
 Mob: 9912924044
 URL: ed-ventures-online.com
 Email-id: [EMAIL PROTECTED]




-- 
Thanks
Alex
http://alexlurthu.wordpress.com


Re: Determining Table Storage Engine Type on Crashed Table

2007-12-01 Thread Alex Arul Lurthu
How about looking at the .frm file of the table.

On 11/28/07, Richard Edward Horner [EMAIL PROTECTED] wrote:
 FYI, this did not work :)

 Thanks though!

 Rich(ard)

 On Nov 23, 2007 3:37 AM, Paul McCullagh [EMAIL PROTECTED]
 wrote:
  Maybe this will work:
 
  SHOW CREATE TABLE table_name;
 
 
  On Nov 21, 2007, at 9:42 PM, Richard Edward Horner wrote:
 
   Hey everybody,
  
   Hopefully some of you are already enjoying time off. I am not...yet :)
  
   Anyway, is there a way to determine what storage engine a table is
   using if it's crashed?  When it's fine, I can just run:
  
   mysql show table status like 'table_name';
   +-++-++
   ++-+--+--
   +---++-
   +-+-+---
   +--++-+
   | Name| Engine | Version | Row_format | Rows   |
   Avg_row_length | Data_length | Max_data_length  | Index_length |
   Data_free | Auto_increment | Create_time | Update_time
   | Check_time  | Collation | Checksum | Create_options
   | Comment |
   +-++-++
   ++-+--+--
   +---++-
   +-+-+---
   +--++-+
   | table_name | MyISAM |  10 | Fixed  | 985984 | 13
   |12817792 | 3659174697238527 | 34238464 | 0 |
   1182153 | 2007-11-15 17:44:28 | 2007-11-21 15:28:07 | 2007-11-21
   15:28:18 | latin1_swedish_ci | NULL || |
   +-++-++
   ++-+--+--
   +---++-
   +-+-+---
   +--++-+
   1 row in set (0.00 sec)
  
   As you can see, the second column returned is the Engine. In this
   case, MyISAM. Now, if I crash the table, it doesn't work:
  
   mysql show table status like 'table_name';
   +-++-++--
   ++-+-+--
   +---++-+-
   ++---+--+
   +-
   ---+
   | Name| Engine | Version | Row_format | Rows | Avg_row_length
   | Data_length | Max_data_length | Index_length | Data_free |
   Auto_increment | Create_time | Update_time | Check_time | Collation |
   Checksum | Create_options | Comment
   |
   +-++-++--
   ++-+-+--
   +---++-+-
   ++---+--+
   +-
   ---+
   | table_name | NULL   |NULL | NULL   | NULL |   NULL |
  NULL |NULL | NULL |  NULL |
   NULL | NULL| NULL| NULL   | NULL  | NULL |
   NULL   | Table './blah/table_name' is marked as crashed and
   should be repaired |
   +-++-++--
   ++-+-+--
   +---++-+-
   ++---+--+
   +-
   ---+
   1 row in set (0.00 sec)
  
   Now, let's assume for a moment this were an InnoDB table. If I were to
   try and run repair, it would say that the storage engine does not
   support repair so clearly it knows what the storage engine is. How do
   I get it to tell me? Or I guess a broader more helpful question would
   be, What are all the ways to determine a table's storage engine
   type?
  
   Thanks,
   --
   Richard Edward Horner
   Engineer / Composer / Electric Guitar Virtuoso
   [EMAIL PROTECTED]
   http://richhorner.com - updated June 28th
  
   --
   MySQL General Mailing List
   For list archives: http://lists.mysql.com/mysql
   To unsubscribe:http://lists.mysql.com/mysql?
   [EMAIL PROTECTED]
  
 
 



 --
 Richard Edward Horner
 Engineer / Composer / Electric Guitar Virtuoso
 [EMAIL PROTECTED]
 http://richhorner.com - updated June 28th

 --
 MySQL General Mailing List
 For list archives: http://lists.mysql.com/mysql
 To unsubscribe:http://lists.mysql.com/[EMAIL PROTECTED]



-- 
Sent from Gmail for mobile | mobile.google.com

Thanks
Alex
http://alexlurthu.wordpress.com


Re: show slave staus

2007-11-27 Thread Alex Arul Lurthu
You can set the pager command to grep out the unwanted fields.
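For example (a sketch; pager works in the Unix command-line client, and the
field names come from the show slave status output):

```
mysql> pager grep -E 'Running|Seconds_Behind_Master'
mysql> show slave status\G
mysql> nopager
```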

On 11/28/07, bruce [EMAIL PROTECTED] wrote:
 hi..

 simple question!!

 mysql show slave status

 returns a number of fields, with information on the status of the slave. is
 there a way to only return the field(s) i'm interested in...

 i thought that i had figured this out, but i can't recall, and it's been
 awhile since i've played with this!

 thanks


 --
 MySQL General Mailing List
 For list archives: http://lists.mysql.com/mysql
 To unsubscribe:http://lists.mysql.com/[EMAIL PROTECTED]



-- 
Sent from Gmail for mobile | mobile.google.com

Thanks
Alex
http://alexlurthu.wordpress.com




Re: SELECT Speed

2007-11-26 Thread Alex Arul Lurthu
   |2 | acctStatusType_id | A
 | 936 | NULL | NULL   |  | BTREE  | |
 | acct  |  1 | index1   |3 | day   | A
 |   88638 | NULL | NULL   |  | BTREE  | |
 | acct  |  1 | index1   |4 | nas_id| A
 | 1659511 | NULL | NULL   |  | BTREE  | |
 +---++--+--+---+
 ---+-+--++--++-+

 MySQL version is 4.1.20
 What is the bottleneck in my server? How could I improve MySQL server
 performance?

 Thank you!

 --
 Alexander Bespalov


 --
 MySQL General Mailing List
 For list archives: http://lists.mysql.com/mysql
 To unsubscribe:http://lists.mysql.com/[EMAIL PROTECTED]



-- 
Sent from Gmail for mobile | mobile.google.com

Thanks
Alex
http://alexlurthu.wordpress.com




Re: Database page corruption on disk occurring during mysqldump on a fresh database

2007-09-05 Thread Alex Arul Lurthu
The checksum errors might have various causes. We had a similar issue where
we restored the database multiple times and replaced the RAM sticks; nothing
helped. Finally we traced the issue down to the chassis. I recommend testing
the restore on a different machine to rule out any hardware issue.

-- 
Thanks
Alex
http://alexlurthu.wordpress.com

On 9/5/07, Krishna Chandra Prajapati [EMAIL PROTECTED] wrote:

 Hi,
 This might be happening for one of two reasons:
 1. The system date might not be correct.
 2. Something is wrong with the log position (incorrect log position).

 Regards,
 Krishna Chandra Prajapati

 On 8/31/07, Maurice Volaski [EMAIL PROTECTED] wrote:
 
  A 64-bit Gentoo Linux box had just been upgraded from MySQL 4.1 to
  5.0.44 fresh (by dumping in 4.1 and restoring in 5.0.44) and almost
  immediately after that, during which time the database was not used,
  a crash occurred during a scripted mysqldump. So I restored and days
  later, it happened again. The crash details seem to be trying to
  suggest some other aspect of the operating system, even the memory or
  disk is flipping a bit. Or could I be running into a bug in this
  version of MySQL?
 
  Here's the output of the crash
  ---
  InnoDB: Database page corruption on disk or a failed
  InnoDB: file read of page 533.
  InnoDB: You may have to recover from a backup.
  070827  3:10:04  InnoDB: Page dump in ascii and hex (16384 bytes):
len 16384; hex
 
  [dump itself deleted for brevity]
 
;InnoDB: End of page dump
  070827  3:10:04  InnoDB: Page checksum 646563254,
  prior-to-4.0.14-form checksum 2415947328
  InnoDB: stored checksum 4187530870, prior-to-4.0.14-form stored
  checksum 2415947328
  InnoDB: Page lsn 0 4409041, low 4 bytes of lsn at page end 4409041
  InnoDB: Page number (if stored to page already) 533,
  InnoDB: space id (if created with = MySQL-4.1.1 and stored already) 0
  InnoDB: Page may be an index page where index id is 0 35
  InnoDB: (index PRIMARY of table elegance/image)
  InnoDB: Database page corruption on disk or a failed
  InnoDB: file read of page 533.
  InnoDB: You may have to recover from a backup.
  InnoDB: It is also possible that your operating
  InnoDB: system has corrupted its own file cache
  InnoDB: and rebooting your computer removes the
  InnoDB: error.
  InnoDB: If the corrupt page is an index page
  InnoDB: you can also try to fix the corruption
  InnoDB: by dumping, dropping, and reimporting
  InnoDB: the corrupt table. You can use CHECK
  InnoDB: TABLE to scan your table for corruption.
  InnoDB: See also InnoDB:
  http://dev.mysql.com/doc/refman/5.0/en/forcing-recovery.html
  InnoDB: about forcing recovery.
  InnoDB: Ending processing because of a corrupt database page.
 
  --
 
  Maurice Volaski, [EMAIL PROTECTED]
  Computing Support, Rose F. Kennedy Center
  Albert Einstein College of Medicine of Yeshiva University
 
  --
  MySQL General Mailing List
  For list archives: http://lists.mysql.com/mysql
  To unsubscribe:
  http://lists.mysql.com/[EMAIL PROTECTED]
 
 


 --
 Krishna Chandra Prajapati
 MySQL DBA,

 Ed Ventures e-Learning Pvt. Ltd,
 201,202, Ashoka Bhoopal Chambers,
 S P Road, Secunderabad 53.
 Ph. No. - 040-39188771
 Url: www.ed-ventures-online.com



Re: recovering from 'disk full' mysql error

2007-09-05 Thread Alex Arul Lurthu
stop slave;

change master to master_log_file='Relay_Master_Log_File' ,
master_log_pos=Exec_master_log_pos;

#Get the values for the above command from your show slave status output.

start slave;

The above process should fix your problem.


On 9/4/07, Russell E Glaue [EMAIL PROTECTED] wrote:

 I had a disk full error on the master MySQL (4.1.22), which was
 replicating to a
 slave (4.1.22).

 My question is, how do I recover a slave replica when the master had a
 disk full
 error, space was freed, but the 'disk full' issue left a corrupted master
 binlog?
 1) Do I have to reinitialize the slave from scratch,
 2) or can I use some CLI tool to repair the damaged binlog (see below)
 3) or can I skip to the next binlog

 This is assuming I performed a stop and start of the master mysql, then
 'purge
 binary logs' from master.
 And would the resolution be the same if I did not start and stop the
 master mysql?


 According to bug 9949 (http://bugs.mysql.com/bug.php?id=9949) this was
 addressed
 in 4.1.9.
 I am running 4.1.22, and have experienced this issue as outlined.

 I am not finding a good sum of reasoning and process to proceed with
 resolving
 this type of predicament. Can anyone clue me in to resolving this?


 I purged enough binlogs to free up disk space on master, and then started
 the
 slave replica process on the slave.

 I get this famous error:
 -
 070904 12:46:26 [Note] Slave I/O thread: connected to master
 '[EMAIL PROTECTED]:3306',  replication started in log '
 master1-bin.54' at
 position 138164107
 070904 12:46:26 [ERROR] Error reading packet from server: binlog truncated
 in
 the middle of event (server_errno=1236)
 070904 12:46:26 [ERROR] Got fatal error 1236: 'binlog truncated in the
 middle of
 event' from master when reading data from binary log
 -

 I do a binlog dump on 'master1-bin.54', and I get the below error
 ('Error in
 log format or read error') at the end of the output:

 -
 #070904 11:46:22 server id 2  end_log_pos 137541177 Query
 thread_id=48871
 exec_time=0 error_code=0
 #070904 11:46:22 server id 2  end_log_pos 137655584 Query
 thread_id=48871
 exec_time=0 error_code=0
 #070904 11:46:22 server id 2  end_log_pos 137655719 Query
 thread_id=48773
 exec_time=0 error_code=0
 #070904 11:46:22 server id 2  end_log_pos 137770204 Query
 thread_id=48773
 exec_time=0 error_code=0
 #070904 11:46:24 server id 2  end_log_pos 137770339 Query
 thread_id=48870
 exec_time=0 error_code=0
 #070904 11:46:24 server id 2  end_log_pos 137962851 Query
 thread_id=48870
 exec_time=0 error_code=0
 #070904 11:46:24 server id 2  end_log_pos 137962986 Query
 thread_id=48871
 exec_time=0 error_code=0
 #070904 11:46:24 server id 2  end_log_pos 138152129 Query
 thread_id=48871
 exec_time=0 error_code=0
 #070904 11:46:24 server id 2  end_log_pos 138152264 Query
 thread_id=48773
 exec_time=0 error_code=0
 #070904 11:46:24 server id 2  end_log_pos 138163972 Query
 thread_id=48773
 exec_time=0 error_code=0
 #070904 11:46:24 server id 2  end_log_pos 138164107 Query
 thread_id=48870
 exec_time=0 error_code=0
 ERROR: Error in Log_event::read_log_event(): 'read error', data_len:
 81904,
 event_type: 2
 Could not read entry at offset 138164107:Error in log format or read error
 -


 -RG

 --
 MySQL General Mailing List
 For list archives: http://lists.mysql.com/mysql
 To unsubscribe:
 http://lists.mysql.com/[EMAIL PROTECTED]




-- 
Thanks
Alex
http://alexlurthu.wordpress.com


Re: Delete query question

2007-09-05 Thread Alex Arul Lurthu
reply inline

On 9/5/07, Olaf Stein [EMAIL PROTECTED] wrote:

 delete from geno_260k where ident=(select ident from geno_260k where a1=0
 group by ident having count(a1) > 25);


When a subquery in a WHERE clause can return more than one row, the =
should be replaced by IN.
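Note that MySQL also rejects a DELETE whose subquery reads the same table
(error 1093), so on top of switching to IN, one common workaround is to wrap
the subquery in a derived table:

```sql
DELETE FROM geno_260k
WHERE ident IN (
    SELECT ident FROM (
        SELECT ident
        FROM geno_260k
        WHERE a1 = 0
        GROUP BY ident
        HAVING COUNT(a1) > 25
    ) AS tmp
);
```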

-- 
Thanks
Alex
http://alexlurthu.wordpress.com


Re: Simple question SQL

2007-09-05 Thread Alex Arul Lurthu
select * from Products order by Visited desc limit 10; should give you the
top 10 products.


On 9/5/07, Tomas Abad [EMAIL PROTECTED] wrote:

 HI All,



 I have a Table and want to know the most visited products.



 Products

 -  Id

 -  Name

 -  Visited



 Visited is numeric.




-- 
Thanks
Alex
http://alexlurthu.wordpress.com


Re: innodb engine status

2007-08-29 Thread Alex Arul Lurthu
To have a good understanding of the show innodb status output, check out
http://www.mysqlperformanceblog.com/2006/07/17/show-innodb-status-walk-through/

One area you can look at is the LATEST DETECTED DEADLOCK section, but in most
cases I have found calculations on the status variables more helpful.

--
Alex
http://alexlurthu.wordpress.com

On 8/29/07, Ananda Kumar [EMAIL PROTECTED] wrote:

 Hi Friends,

 When I do SHOW INNODB STATUS\G it gives me details of transactions
 happening on the InnoDB engine.

 Please let me know what information I should consider from this output
 to ensure everything is fine, or whether there are issues I should address.

 I am using mysql version 5.0.41-community-log

 regards
 anandkl



Re: SCRIPT OR TOOL TO GIVE CHANGES IN DATABASES

2007-08-29 Thread Alex Arul Lurthu
If you want to track schema-level changes, updates, inserts, etc., you
always have the binlogs/update logs. If you want to track select queries
as well, you have the general query log. Check out
http://dev.mysql.com/doc/refman/4.1/en/log-files.html for the different logs
available. Of course, be wary of the performance implications.

-- 
Alex
http://alexlurthu.wordpress.com/

On 8/29/07, Krishna Chandra Prajapati [EMAIL PROTECTED] wrote:

 Hi All,

 Is there any script or tool that generates a report and sends an email
 for changes done in the databases?

 Is there any other (manual) way, so that I can see what changes have been
 done in the database today?

 Regards,
 Krishna



Re: thread_concurrency in linux

2007-08-29 Thread Alex Arul Lurthu
I am not sure whether you are talking about innodb_thread_concurrency. If so
please check out
http://www.mysqlperformanceblog.com/2006/06/05/innodb-thread-concurrency for
more details. Innodb_thread_concurrency works on linux.

Thanks
Alex

On 8/29/07, Andrew Braithwaite [EMAIL PROTECTED] wrote:

 Hi,

 Does anyone know if thread_concurrency works in linux or is it just
 limited to Solaris and Windows?

 I know the general rule is number of CPU's*2 but will this actually have
 any effect with Linux's threading model?

 Thanks for any help :)

 Andrew

 Mysql, query


 This message has been scanned for viruses by BlackSpider MailControl -
 www.blackspider.com

 --
 MySQL General Mailing List
 For list archives: http://lists.mysql.com/mysql
 To unsubscribe:
 http://lists.mysql.com/[EMAIL PROTECTED]




-- 
Thanks
Alex
http://blog.360.yahoo.com/alex.lurthu


Re: Access denied for user on Windows Server

2007-08-22 Thread Alex Kloss
Car Toper wrote:
 More info...

 I am able to login with the account via phpMyAdmin on the server.
 Also, I am trying to use phpMyAdmin to change the permissions, but I
 cannot figure out the syntax:

 GRANT ON ppsntracker.* TO [EMAIL PROTECTED] WITH GRANT OPTION;

 Cartoper

 On 8/21/07, Car Toper [EMAIL PROTECTED] wrote:
   
 I have a MySql server running on a Windows box that I cannot connect
 to from another machine.  I first checked to make sure the user has
 the correct permissions, which the user does, it has the %.  I did
 read something that gave me the impression the user needs to have a
 tranditional windows account on the machine, so I created one.  Still
 nothing.  Is there anything special I need to do to allow access to
 MySql when the database is running on a different machine?

 

   
If you're trying to create a superuser on the server, you would use

GRANT ALL PRIVILEGES ON ppsntracker.* TO [EMAIL PROTECTED] WITH
GRANT OPTION;

Although I wouldn't recommend this for security purposes; it's a much
more secure idea to selectively grant the
privileges as necessary to the user and assign a password, such as

GRANT SELECT,INSERT,DELETE ON ppsntracker.* TO [EMAIL PROTECTED]
IDENTIFIED BY 'password';

And so forth.




Re: [MySQL] Fastest way to Search from a Table

2007-08-07 Thread Alex Arul Lurthu
On 8/7/07, Tharabar [EMAIL PROTECTED] wrote:

 SELECT `field1`,`field2`,`field3` FROM 'table' WHERE `field5` LIKE
 '%keyword%1' AND `field5` LIKE '%keyword2%'


The most trivial way to speed up a query is to add an index on the columns
used in the WHERE clause. But a LIKE operation with a pattern beginning
with '%' renders the index useless.

If you use MyISAM you can always use full text search on a text field for
such use cases.
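A sketch of that approach, reusing the column names from the question
(MyISAM only):

```sql
ALTER TABLE `table` ADD FULLTEXT (field5);

SELECT field1, field2, field3
FROM `table`
WHERE MATCH (field5) AGAINST ('+keyword1 +keyword2' IN BOOLEAN MODE);
```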


-- 
Thanks
Alex
http://alexlurthu.wordpress.com


Re: MySQL database move

2007-07-09 Thread Alex Arul Lurthu

Hi Ace,

If you can't afford downtime and you are using InnoDB, try removing
autoextend on the current data file, create a data file on a different
partition, and put autoextend on that one. If you are using MyISAM, you can
move a few tables to a different disk and use symlinks.


--
Thanks
Alex
http://blog.360.yahoo.com/alex.lurthu

On 7/9/07, Ananda Kumar [EMAIL PROTECTED] wrote:


Hi Ace,
Can't you zip or move old bin-logs to a different disk and release some
free
space on the current drive.

regards
anandkl


On 7/9/07, Ace [EMAIL PROTECTED] wrote:

 Hi,

   We have crisis. Disk with MySQL database is full. Now we want to move
 database to another disk. How can we do it?

 --
 Thanks,
 Rajan




Re: bin-log position.

2007-07-08 Thread Alex Arul Lurthu

Anand,

If the dump was taken from the slave then the binlog positions that were
dumped out are that of the slave. Now you might do 2 things :

1. Setup a slave of your master

   To do this, use the binlog positions that you obtained using the
command show slave status on your slave. The master log file will be
mysql-bin.29. But your show slave status output doesn't show the
Exec_Master_Log_Pos; you need to use that position for setting up a new
slave of your master.

2. Setup a slave of your slave

 To do this, use the binlog positions in your mysqldump output.

P.S: Make sure you setup the hostnames in the change master command
appropriately.
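A sketch of case 1 (the hostname and positions are placeholders to be filled
in from show slave status on the existing slave):

```sql
CHANGE MASTER TO
    MASTER_HOST = 'master-host',
    MASTER_LOG_FILE = 'mysql-bin.29',
    MASTER_LOG_POS = 12345;  -- placeholder: the Exec_Master_Log_Pos value
START SLAVE;
```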


--
Thanks
Alex
http://blog.360.yahoo.com/alex.lurthu


Re: bin-log position.

2007-07-08 Thread Alex Arul Lurthu

You will be taking a chance if you use Read_Master_Log_Pos: 106683511. It is
not guaranteed that whatever has been read from the master has actually been
applied. The SQL thread may have stopped applying the read events somewhere,
in which case Read_Master_Log_Pos and Exec_Master_Log_Pos will differ.

--
Thanks
Alex
http://blog.360.yahoo.com/alex.lurthu

On 7/9/07, Ananda Kumar [EMAIL PROTECTED] wrote:


Hi Alex,
Thanks a lot,
I missed on taking the exe_master_log position value.
If I start from Read_Master_Log_Pos: 106683511, will it cause any harm?

regards
anandkl



On 7/9/07, Alex Arul Lurthu [EMAIL PROTECTED] wrote:

 Anand,

 If the dump was taken from the slave then the binlog positions that were
 dumped out are that of the slave. Now you might do 2 things :

 1. Setup a slave of your master

To do this use the binlog positions that you obtained using the
 command show slave status on your slave. The master log file will be 
 mysql-bin.29 . But your show slave output status doesnt show the
 exe_master_log position. You need to use that position for setting up a new
 slave of your master.

 2. Setup a slave of your slave

   To do this use the binglog positions in your mysqldump output.

 P.S: Make sure you setup the hostnames in the change master command
 appropriately.


 --
 Thanks
 Alex
 http://blog.360.yahoo.com/alex.lurthu






Re: Performance problem MySQL 4.0.20

2007-07-07 Thread Alex Arul Lurthu

On 7/6/07, [EMAIL PROTECTED] [EMAIL PROTECTED] wrote:


How does the cpu power influence the speed of a query?



Sort is a cpu intensive process.

*Check if you are suffering from locks on the tables you look up.*

Also, was the table in the system where the queries are running faster
rebuilt recently. You can try rebuilding the tables.

How is the overall system load on both machines? Is one more loaded than the
other? The machine that takes longer to execute the query might be busy
serving some process other than MySQL.


Thanks
Alex


Re: bin-log position.

2007-07-07 Thread Alex Arul Lurthu

On 7/6/07, Ananda Kumar [EMAIL PROTECTED] wrote:



-- CHANGE MASTER TO MASTER_LOG_FILE='mysql-bin.03',
MASTER_LOG_POS=311;

As see in the mysqldump is saying to start recovery from bin-log file '
mysql-bin.03' and position=MASTER_LOG_POS=311,




You need to use the positions specified in the mysqldump since when u
restore using the data, you will get data only till the snapshot specified
in the dump.

Thanks
Alex


Re: mysqldump on slave.

2007-07-07 Thread Alex Arul Lurthu

You haven't mentioned which version of MySQL you are on. If it is an older
version, I would recommend the --opt option. Also, if you run a dump with the
single-transaction option on a loaded system with writes, things might slow
down significantly because of the old row versions the server has to keep
around for the duration of the dump.

Thanks
Alex

On 7/6/07, Ananda Kumar [EMAIL PROTECTED] wrote:


Hi All,
I am planning to take mysqldump on slave running innodb engine.

I going to use the below command

/usr/bin/mysqldump --user=root  --password= --flush-logs
--all-databases
--master-data=2 --single-transaction  > alldbdump.sql

Before doing the mysqldump, should I stop the slave, or is the above command
enough? Please let me know, and also whether there is any other option I can
use to speed up mysqldump.

regards
anandkl



Re: Cannot Connect: Can't create a new thread (errno 11)

2007-07-05 Thread Alex Arul Lurthu

Chris,

Did you check if there is any user-level MAX_USER_CONNECTIONS restriction
set while granting access?
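A quick way to check (a sketch; the resource-limit columns exist in the
mysql.user table on 5.0-era servers):

```sql
SELECT user, host, max_connections, max_user_connections
FROM mysql.user;
-- max_connections here is the per-hour cap, max_user_connections the
-- concurrent cap; nonzero values were set via GRANT ... WITH MAX_...
```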

Thanks
Alex

On 7/4/07, Chris Faust [EMAIL PROTECTED] wrote:


Folks,



I'm wondering if someone could help us in trying to resolve the above error;
we have read through what threads we could find and tried all the
suggestions, but have had no luck.



There are no memory problems. In fact, when this happens, if there is still a
localhost client connection to the server you can do whatever you want;
everything runs normally. It just refuses all new connections. When it
happens there is plenty of memory on the machine and no load at all
(everything is running just as it should). Once things fall below 90,
everything comes back.



The problem happens when the process list hits the 91st process, every time.
With 90 processes there are no problems; the 91st process/connection gets the
above error, and any after that simply get Can't connect to MySQL server.



We've tried all the obvious things, messing with the number of
connections,
open files etc. Nothing seems to work.



We are running MySQL 5 on Centos 5 (flavor of RedHat). The machine is
dedicated to MySQL.



Below is a link to a text file that shows the OS limits for the mysql
account on the machine and the output from show variables (including it
made
the message to large for the list).



http://208.3.90.212/wtresults.txt



If anyone has any suggestions, it would be appreciated.



Thanks!






Re: Performance problem MySQL 4.0.20

2007-07-05 Thread Alex Arul Lurthu

Hi,

Run explain plan on both the machines and post the same.

~Alex

On 7/4/07, [EMAIL PROTECTED] [EMAIL PROTECTED] wrote:


Hello Ananda,

yes, the testmachine has the same data.


Regards,
Spiker
--
GMX FreeMail: 1 GB Postfach, 5 E-Mail-Adressen, 10 Free SMS.
Alle Infos und kostenlose Anmeldung: http://www.gmx.net/de/go/freemail

--
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:
http://lists.mysql.com/[EMAIL PROTECTED]




Re: find a particular record.

2007-06-29 Thread Alex Greg

I am doing this to move data from one table to another and getting the
error below. How can I check this particular record at row number 921888
in the dats_tem table?

insert into reports1.dats1 select * from dats_tem;

ERROR 1292 (22007): Incorrect datetime value: '-00-00 00:00:00' for
column 'P_LAST_SOLD_DATE' at row 921888


An extremely quick and dirty solution:

mysqldump -uusername -p -t --skip-opt database_name dats_tem | grep
INSERT | head -921888 | tail -1

replacing username with your username and database_name with your
database name, of course.

-- Alex




Re: General Query Log -- No Timestamp

2007-06-29 Thread Alex Greg

On 6/26/07, Brown, Charles [EMAIL PROTECTED] wrote:


The MySQL general query log does not include the timestamp of queries that
it logs, because queries are logged many seconds before they are
executed.


Which version of MySQL are you running? I'm running 5.0.22 on my
desktop, but I'm fairly sure that older versions included timestamps
in the general query log.

I just enabled mine to test this (by adding log = /var/log/mysql.log
to /etc/my.cnf) and it looks something like:

070629  8:17:44   6 Connect [EMAIL PROTECTED] on monitoring
 6 Query   set autocommit=0
 6 Query   select * from urls where active=y
070629  8:17:45   6 Query   INSERT INTO results VALUES
(NULL,5,now(),0.5833,35267)
 6 Query   INSERT INTO results VALUES
(NULL,6,now(),0.0137,0)
 6 Query   INSERT INTO results VALUES
(NULL,8,now(),0.7762,28130)
 6 Query   INSERT INTO results VALUES
(NULL,9,now(),0.0348,4217)

-- Alex




Re: Blob data

2007-06-25 Thread Alex Arul Lurthu

Ratheesh,

If you still want to keep your blob data in the database, separate the
blob storage out into its own table. This will help you alleviate a few of
your performance and maintenance problems.

~Alex

On 6/22/07, Steve Edberg [EMAIL PROTECTED] wrote:


At 12:11 PM +0530 6/22/07, Ratheesh K J wrote:
Hello All,

I want a clarification. We run a forum wherein people send messages
with/without attachments. Attachments may contain images, documents,
etc. We are actually storing the attachments in a blob column.
Sometimes the attachments are big, and today the table size has
grown to 40 GB. This has created a headache for any maintenance
task: backup, restoration, etc.

I want to know whether this is the right approach, or should we
actually store the attachments in directories and just store the
attachment path in the database?

Kindly suggest the best approach so that I can reduce the database size.

Thanks in advance


Yes, storing files - especially non-textual files - in the file
system instead of the database is generally considered the best
practice.

At one point I had created a document management system that stored
everything in the database as you are doing; my rationale was that it
allowed me to manage permissions using the existing database
permissions, and to back up the whole database using mysqldump, vs
mysqldump + doing a tar of the files. However, I abandoned this
approach for the following reasons:

(1) Storing non-plaintext items (eg; pictures) in the database makes
it bigger and slower without added value - you can't (at least not
yet, or in the foreseeable future) do a meaningful search on a blob.

(2) It becomes more difficult to split storage out onto multiple
filesystems; eg, leaving the database files in /var/database, putting
the documents themselves into /home/docmanager, etc.

(3) It makes queries on the commandline unwieldy; if you have a blob
field, doing a select * to check a record's contents can dump a lot
of garbage on the screen.

(4) It can make doing incremental backups more difficult; if the
documents themselves are relatively static, but the document metadata
stored in the database is very dynamic, it becomes simple to do a
compact daily database dump + a weekly document directory backup (for
example) if the files are not in the database.

What I do is create a unique SHA1 hash when a file is uploaded (eg;
sha1(rand())). The original filename and the 40-character hash are
stored in the database, and the document is stored in the filesystem
using the hash as the filename. I can optionally compress and encrypt
the document as well, storing the encryption key in the database.
This gives (for me) adequate document security. An additional
advantage is that you can take advantage of the filesystem tree if
you have a large number of documents. For example, if a document hash
is 'f0118e9bd2c4fb29c64ee03abce698b8', you can store the file in the
directory tree f0/11/8e/f0118e9bd2c4fb29c64ee03abce698b8 (extending
to as many levels as you feel necessary). By keeping the number of
files per directory fairly small, file retrieval becomes relatively
fast. As the hashes approximate a random distribution, you should
always have a close-to-balanced tree.
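A minimal Python sketch of the hashing-and-fan-out scheme described above
(the function and parameter names are mine, not from Steve's system):

```python
import hashlib
import os

def storage_path(data: bytes, levels: int = 3) -> str:
    """Map file contents to a nested path like f0/11/8e/<full-hash>.

    The first `levels` two-character chunks of the SHA1 hex digest become
    directory names; since hashes are close to uniformly distributed, the
    resulting tree stays roughly balanced and each directory stays small.
    """
    digest = hashlib.sha1(data).hexdigest()
    parts = [digest[i:i + 2] for i in range(0, levels * 2, 2)]
    return os.path.join(*parts, digest)

print(storage_path(b"example attachment bytes"))
```

The 40-character digest (plus the original filename) would then be stored
in the database row, as in the scheme above.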

Lastly, I store a hash of the document itself in the database as
well. This allows me to detect if duplicate files are uploaded, and
to determine if a previously-uploaded file has been corrupted in some
way.

steve

--
+--- my people are the people of the dessert, ---+
| Steve Edberghttp://pgfsun.ucdavis.edu/ |
| UC Davis Genome Center[EMAIL PROTECTED] |
| Bioinformatics programming/database/sysadmin (530)754-9127 |
+ said t e lawrence, picking up his fork +





Re: replication

2007-05-15 Thread Alex Arul Lurthu

Check if you are seeing any access-denied errors in the MySQL error log.

~Alex

On 5/14/07, richard [EMAIL PROTECTED] wrote:


Ian P. Christian wrote:
 richard wrote:

 as far as I can see, these commands select which db's to replicate on
 the slave that currently exist on the master server.
 What i am asking is, if I set a server up as a slave, can I have other
 databases on the slave that are not part of the replication system?


 Yes - I've been doing this for some time, works fine.




mmm, ok.
I've set replication up and have show master and show slave output
displaying the following.
Any updates I make to the master are not transferred to the slave.
The log position increases on the master but doesn't change on the
slave. Any ideas?
cheers
Rich


mysql show master status \G
*** 1. row ***
File: bla-bin.004
Position: 681
Binlog_do_db: foo_db
Binlog_ignore_db:
1 row in set (0.00 sec)



mysql show slave status \G
*** 1. row ***
Slave_IO_State: Checking master version
Master_Host: 192.168.1.110
Master_User: slave_user
Master_Port: 3306
Connect_Retry: 60
Master_Log_File: bla-bin.004
Read_Master_Log_Pos: 512
Relay_Log_File: db_box-relay-bin.01
Relay_Log_Pos: 4
Relay_Master_Log_File: bla-bin.004
Slave_IO_Running: Yes
Slave_SQL_Running: Yes
Replicate_Do_DB: foo_db
Replicate_Ignore_DB:
Replicate_Do_Table:
Replicate_Ignore_Table:
Replicate_Wild_Do_Table:
Replicate_Wild_Ignore_Table:
Last_Errno: 0
Last_Error:
Skip_Counter: 0
Exec_Master_Log_Pos: 512
Relay_Log_Space: 4
Until_Condition: None
Until_Log_File:
Until_Log_Pos: 0
Master_SSL_Allowed: No
Master_SSL_CA_File:
Master_SSL_CA_Path:
Master_SSL_Cert:
Master_SSL_Cipher:
Master_SSL_Key:
Seconds_Behind_Master: 0
1 row in set (0.00 sec)







Re: Redo logs take to much disk space

2007-05-15 Thread Alex Arul Lurthu

The answer lies in expire_logs_days. If you set this to a suitable number
of days, logs older than that will be purged automatically.
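For example, a my.cnf fragment (the 7-day retention is an illustrative
value, not a recommendation from this thread):

```ini
[mysqld]
# automatically purge binary logs older than this many days
expire_logs_days = 7
```

The same variable can be set at runtime with SET GLOBAL expire_logs_days = 7;
and a one-off cleanup is possible with PURGE BINARY LOGS BEFORE '...';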

~Alex


On 5/15/07, [EMAIL PROTECTED] [EMAIL PROTECTED] wrote:


 One question about this: is it safe to turn off log_bin?
I think you can. The log is necessary for data replication and sometimes
for data recovery.
You can read about it here:
http://dev.mysql.com/doc/refman/5.0/en/binary-log.html
 Or can you tweak it somehow so that it won't take so much disk space?
You can bzip old logs if you need them but don't want them to take so much
space :)







Re: backup solution.

2007-05-15 Thread Alex Arul Lurthu

On 5/15/07, Ananda Kumar [EMAIL PROTECTED] wrote:


Hi All,
I have taken a mysqldump of my entire database; is it possible to restore
just one table from this mysqldump?



Yes, that's possible.

cat your-dump-filename | grep 'tablename you want to restore' | mysql -u
user -ppassword should do it.
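If the grep approach is too blunt (it matches the table name anywhere in
the dump), the dump can instead be cut at mysqldump's per-table comment
markers. A hedged Python sketch; it assumes the default mysqldump output
format with its "-- Table structure for table ..." comments:

```python
def extract_table(dump_text: str, table: str) -> str:
    """Return the part of a mysqldump file that belongs to one table.

    Relies on the '-- Table structure for table `name`' comment lines
    that the default mysqldump format emits before each table.
    """
    marker = f"-- Table structure for table `{table}`"
    out, capture = [], False
    for line in dump_text.splitlines(keepends=True):
        if line.startswith("-- Table structure for table"):
            # Start capturing at our table's marker, stop at the next one.
            capture = line.rstrip().startswith(marker)
        if capture:
            out.append(line)
    return "".join(out)
```

The extracted section can then be piped into the mysql client as usual.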



Also, I have taken a binary backup of my database, and one of the datafiles
is corrupt. Is it possible to restore just this file from the backup and
start MySQL? The database is running on the InnoDB engine.



If all the files are from the same cold binary backup, then you will be
able to do a restore. Otherwise it is highly unlikely to work.

~Alex


Re: Memory Problems

2007-05-15 Thread Alex Arul Lurthu

On 5/15/07, Christoph Klünter [EMAIL PROTECTED] wrote:




I have set the sort_buffer_size to 1G but even this doesn't help.
Any hints? Should we try a 64-bit OS?



Setting sort_buffer_size to 1 GB is not recommended. It is a
thread-specific configuration parameter, which means every thread is
eligible to allocate its own 1 GB sort buffer.
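A back-of-the-envelope illustration of why a huge per-thread buffer is
dangerous (the connection count is illustrative, not from this thread):

```python
sort_buffer_size = 1 * 1024**3    # 1 GB per thread, as in the question
connections = 100                 # hypothetical concurrent threads

# Worst case: every connection runs a sort at once, and each allocates
# a full sort buffer on top of the server's shared buffers.
worst_case_bytes = connections * sort_buffer_size
print(worst_case_bytes // 1024**3, "GB")  # prints: 100 GB
```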

On a 32-bit system, you can push database memory usage to approximately
3.2 GB with the kernel-level hacks described at
http://www.puschitz.com/TuningLinuxForOracle.shtml#GrowingTheOracleSGATo2.7GBInx86RHEL2.1WithoutVLM
or by using VLM. But if you are using a 32-bit MySQL build, then you
will not be able to use more than 4 GB, even in theory.

Using a 64-bit OS and a 64-bit build of MySQL will enable you to use
more than 4 GB of memory effectively.

~Alex


Re: backup solution.

2007-05-15 Thread Alex Arul Lurthu

On 5/15/07, Ananda Kumar [EMAIL PROTECTED] wrote:


Hi Alex,
Thanks for the info,
For the second question, do you mean I should restore the entire backup or
just that one file from my backup?

All the files should be from the same backup. AFAIK, MySQL doesn't have an
option to recover only one datafile. Hence, restore the entire backup.

~Alex


Re: Memory Problems

2007-05-15 Thread Alex Arul Lurthu

On 5/15/07, Micah Stevens [EMAIL PROTECTED] wrote:


I think you may be able to get around this by using multiple key
buffers? (MySQL 4.1 or later)

Key buffers cache only index data; they don't help with sorting the way
the sort buffer does, and they don't affect the InnoDB engine. Even when
using multiple key buffers, the memory allocated still belongs to the same
process, so the limitation remains at the process level.

~Alex


Re: replication

2007-05-14 Thread Alex Arul Lurthu

Technically, yes, you can. Replication simply means that queries executed
on the master are re-executed on the slave. So, as long as the database
that exists on the master also exists on the slave, you should have no
problems.
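A minimal slave-side my.cnf sketch of such a setup (the server-id value is
illustrative; foo_db matches the database shown in richard's status output
elsewhere in this thread):

```ini
[mysqld]
server-id       = 2
# replicate only foo_db; other databases on the slave stay local
replicate-do-db = foo_db
```

Note that with statement-based replication, replicate-do-db filters on the
default database selected with USE, so cross-database statements may be
skipped.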

Thanks
Alex

On 5/14/07, richard [EMAIL PROTECTED] wrote:



as far as I can see, these commands select which db's to replicate on
the slave that currently exist on the master server.
What i am asking is, if I set a server up as a slave, can I have other
databases on the slave that are not part of the replication system?




MySQL 中文网 wrote:

 of course

 set it up with the option replicate-do-db or replicate-ignore-db :)

 richard wrote:

 Hi,

 I have a server (master version 4.0.1) that has a database that I
 want to replicate on another server (slave version 4.1.20).
 The slave already has another db on it. Is is possible to have a
 mixture of replicated and non replicated dbs running on a slave?

 cheers
 Rich



 --







 **
 MySQL 中文网
 叶金荣 (Ye Jinrong)   independent MySQL consultant, system architect, trainer

 Services: LAMP architecture design, solutions, load balancing; MySQL consulting, training, support
 Mobile: 13466608698
 [EMAIL PROTECTED] (or [EMAIL PROTECTED])
 Website: http://imysql.cn
 **
 http://imysql.cn offers users a professional, free Chinese-language MySQL community; visitors welcome

 May you receive more blessings than you can handle!

 2007/05/14 17:50:01







Re: MySQL Benchmarking

2007-03-15 Thread Alex Greg

On 3/14/07, Clyde Lewis [EMAIL PROTECTED] wrote:

System Configuration: Sun Microsystems  sun4u Sun Fire E2900
System clock frequency: 150 MHZ
Memory size: 65536 Megabytes
CPU: 12 @ 1200 MHz

I'm looking for a tool that will allow us to determine the max number
of databases that can run in a single instance of MySQL on a pretty
beefy server( Spec above).

In total we will have about ~40 MySQL
instances running on this server. Each instance of MySQL will have
between 30-60 individual databases supporting an OLTP
application. I know that there are no known internal limits in MySQL
on the number of databases that can be created, but I
would like to get my hands on a tool that can simulate the number of
databases and identify where we would potentially run into
performance issues.


As I mentioned above, your performance issues are going to come not
from the number of databases, but from (primarily) how well-designed
your database tables and queries are, and (secondly) how you configure
the mysql server(s).

One important factor to bear in mind is that with 40 separate MySQL
instances on the single 64GB server, you will have a maximum 1.6GB of
RAM per instance (excluding memory used by the O/S and other
applications). This will have to be divided up between the various
memory buffers (key_buffer, innodb_buffer_pool, etc.) allocated by
each mysql process, so you might want to reconsider if you really need
to run 40 separate mysql processes, or whether all the databases can
live in the same MySQL instance and thus probably make better use of
the available RAM.

With regards to stress-testing and benchmarking, two popular tools for
benchmarking MySQL servers are:

Super Smack: http://vegan.net/tony/supersmack/
Sysbench: http://sysbench.sourceforge.net/


We need to determine whether to have multiple
servers to support the ~40 instances or have all ~40 instances on the
same machine. Any help of ideas would be greatly appreciated with
this decision.


I would be inclined to have separate machines, rather than put
everything on one huge server. By spreading the data around, you are
reducing the risk if the one mega-machine were to become unavailable,
and also reducing resource contention (on the disks, CPU, RAM etc.).


-- Alex




Re: how would mysqld restart affect dynamically set global variables?

2007-03-14 Thread Alex Greg

We're running MySQL 4.1.20. If I understand the manual correctly, I can
change max_connections while mysqld is running, without restarting mysqld,
to make the change take effect.


Correct.


But what if mysqld restarts later in some other situation, like a
machine reboot? Would my (global) change to max_connections remain?


No; you'll need to put it in my.cnf in order for it to persist after
mysqld is shut down.
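A sketch of both halves (the value 500 is illustrative):

```ini
# /etc/my.cnf -- read only at startup, so the setting survives restarts
[mysqld]
max_connections = 500
```

At runtime, SET GLOBAL max_connections = 500; takes effect immediately but
is lost when mysqld restarts; doing both keeps the live value and the
persisted value in sync.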

-- Alex



