Re: List of Publicly Accessible MySQL Databases?

2008-08-25 Thread Jay Pipes
Hi!

Check out db4free.net. :)

Cheers,

Jay

Andrew J. Leer wrote:
 Is there a listing of public MySQL Databases anywhere?
 
 It's just in case someone is new to databases (not me... other people at my
 office) and wants to get a look at an existing working
 database to learn SQL on.
 
 I've found one such database:
 
 Genome Bioinformatics
 db.host=genome-mysql.cse.ucsc.edu
 db.user=genomep
 db.password=password
 
 But I really don't think the people I'm trying to teach here know much
 about Genome Bioinformatics (and consequently I don't know anything
 about that either...)
 
 Thank you,
 Andrew J. Leer
 




Re: Innodb vs myisam

2008-04-03 Thread Jay Pipes
Please actually read my reply before asking the same question.  As I 
stated, InnoDB outputs *estimated* row counts in EXPLAIN, whereas MyISAM 
outputs *accurate* row counts.
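For example (a minimal sketch, with a hypothetical table t loaded with the
same data on both engines):

mysql> EXPLAIN SELECT * FROM t\G   -- the "rows" column is an estimate on InnoDB
mysql> SELECT COUNT(*) FROM t;     -- the true row count

On MyISAM the two numbers match exactly; on InnoDB the EXPLAIN figure can
drift a little to either side of the true count.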


-jay

Krishna Chandra Prajapati wrote:

Hi,

On myisam storage system

mysql> explain select ui.user_id, ucp.user_id, ucp.payment_date from
user_info ui, user_course_payment ucp where ui.user_id=ucp.user_id;
+----+-------------+-------+--------+-------------------------------+-----------------------+---------+-----------------+----------+-------------+
| id | select_type | table | type   | possible_keys                 | key                   | key_len | ref             | rows     | Extra       |
+----+-------------+-------+--------+-------------------------------+-----------------------+---------+-----------------+----------+-------------+
|  1 | SIMPLE      | ucp   | index  | PRIMARY,user_course_pay_comp1 | user_course_pay_comp1 | 30      | NULL            | *256721* | Using index |
|  1 | SIMPLE      | ui    | eq_ref | PRIMARY                       | PRIMARY               | 10      | dip.ucp.user_id |        1 | Using index |
+----+-------------+-------+--------+-------------------------------+-----------------------+---------+-----------------+----------+-------------+
2 rows in set (0.00 sec)


On innodb storage system

mysql> explain select ui.user_id, ucp.user_id, ucp.payment_date from
user_info ui, user_course_payment ucp where ui.user_id=ucp.user_id;
+----+-------------+-------+--------+-------------------------------+-------------------------+---------+-----------------+----------+-------------+
| id | select_type | table | type   | possible_keys                 | key                     | key_len | ref             | rows     | Extra       |
+----+-------------+-------+--------+-------------------------------+-------------------------+---------+-----------------+----------+-------------+
|  1 | SIMPLE      | ucp   | index  | PRIMARY,user_course_pay_comp1 | idx_user_course_payment | 9       | NULL            | *256519* | Using index |
|  1 | SIMPLE      | ui    | eq_ref | PRIMARY                       | PRIMARY                 | 10      | dip.ucp.user_id |        1 | Using index |
+----+-------------+-------+--------+-------------------------------+-------------------------+---------+-----------------+----------+-------------+
2 rows in set (0.00 sec)

I have executed ANALYZE TABLE on both systems (InnoDB and MyISAM),
yet there is still a small difference, highlighted with asterisks above.

Is it the behavior of MyISAM or InnoDB, or the internal working of the storage
engines?

Thanks,
Krishna




On Wed, Apr 2, 2008 at 9:06 PM, Rob Wultsch [EMAIL PROTECTED] wrote:


On Wed, Apr 2, 2008 at 5:06 AM, Krishna Chandra Prajapati 
[EMAIL PROTECTED] wrote:


Horribly ugly stuff


I know I sure as heck am not going to spend half an hour to turn those
queries into something understandable, and I expect no one else will
either.  If you want help please remove all extraneous details (turn table
and column names into t1, t2, col1, col2, etc., or descriptive names like parent,
child, datetime_end) and send out something that is easy to reproduce. You
get a cupcake if you include DDL that populates itself with random data.

Also, using \G instead of a semicolon will make database output a heck of
a lot easier to read in email form.
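For example (a sketch; any query works the same way):

mysql> EXPLAIN SELECT * FROM t WHERE id = 1\G

prints each column of each row on its own line, so nothing wraps in email.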

--
Rob Wultsch
[EMAIL PROTECTED]
wultsch (aim)










Re: Innodb vs myisam

2008-04-02 Thread Jay Pipes
The MyISAM isn't scanning more rows.  It's that the InnoDB rows output 
in EXPLAIN is an estimate and the MyISAM one is accurate...


-jay

Krishna Chandra Prajapati wrote:

Hi All,

I have the same table configuration; everything is the same except the storage engine.

Explain result on innodb system

mysql> explain select ucpr.course_amount, ucpr.coupon_amount,
ucp.payment_order_id, ui.course_id, uct.ref, uet.ref, ui.user_id,
ucpr.coupon,  ucp.payment_service_id, ucp.payment_id   FROM
user_course_pricing ucpr, user_info ui, course c, user_course_payment ucp
left outer join user_cc_trans uct on
ucp.payment_order_id=uct.payment_order_id left outer join user_ec_trans uet
on ucp.payment_order_id=uet.payment_order_id   WHERE ucp.payment_order_id is
not null and date_format(ucp.payment_date, '%m-%Y')='05-2007' and
ucp.user_id = ucpr.user_id  and ucp.user_id = ui.user_id and
ui.course_id = c.course_id and ucp.payment_id in (1, 2, 5, 6) and
ui.course_id not in  (1005, 1007, 1008) and ui.course_id not in (select
course_id from course_attribute where ATTRIBUTE ='HOSTED_AFFILIATE')   ORDER
BY ucp.Payment_date;
+----+--------------------+------------------+-----------------+-------------------------------+------------------------+---------+--------------------------+-------+------------------------------------------------------------+
| id | select_type        | table            | type            | possible_keys                 | key                    | key_len | ref                      | rows  | Extra                                                      |
+----+--------------------+------------------+-----------------+-------------------------------+------------------------+---------+--------------------------+-------+------------------------------------------------------------+
|  1 | PRIMARY            | c                | range           | PRIMARY                       | PRIMARY                | 10      | NULL                     |   134 | Using where; Using index; Using temporary; Using filesort  |
|  1 | PRIMARY            | ui               | ref             | PRIMARY,idx_user_info_2       | idx_user_info_2        | 10      | dip.c.course_id          |   279 | Using index                                                |
|  1 | PRIMARY            | ucp              | eq_ref          | PRIMARY,user_course_pay_comp1 | PRIMARY                | 10      | dip.ui.user_id           |     1 | Using where                                                |
|  1 | PRIMARY            | ucpr             | eq_ref          | PRIMARY                       | PRIMARY                | 10      | dip.ucp.user_id          |     1 | Using where                                                |
|  1 | PRIMARY            | uct              | ref             | user_cc_trans_order_id        | user_cc_trans_order_id | 10      | dip.ucp.payment_order_id |     1 |                                                            |
|  1 | PRIMARY            | uet              | index           | NULL                          | idx_user_ec_trans      | 35      | NULL                     | 13959 | Using index                                                |
|  2 | DEPENDENT SUBQUERY | course_attribute | unique_subquery | PRIMARY                       | PRIMARY                | 44      | func,const               |     1 | Using index; Using where                                   |
+----+--------------------+------------------+-----------------+-------------------------------+------------------------+---------+--------------------------+-------+------------------------------------------------------------+
7 rows in set (0.00 sec)


Explain result on myisam system

mysql> explain
    -> select ucpr.course_amount, ucpr.coupon_amount, ucp.payment_order_id, ui.course_id, uct.ref, uet.ref, ui.user_id, ucpr.coupon,
    -> ucp.payment_service_id, ucp.payment_id
    -> FROM user_course_pricing ucpr, user_info ui, course c, user_course_payment ucp left outer join user_cc_trans uct on
    -> ucp.payment_order_id=uct.payment_order_id left outer join user_ec_trans uet on ucp.payment_order_id=uet.payment_order_id
    -> WHERE ucp.payment_order_id is not null and date_format(ucp.payment_date, '%m-%Y')='05-2007' and ucp.user_id = ucpr.user_id
    -> and ucp.user_id = ui.user_id and ui.course_id = c.course_id and ucp.payment_id in (1, 2, 5, 6) and ui.course_id not in
    -> (1005, 1007, 1008) and ui.course_id not in (select course_id from course_attribute where ATTRIBUTE ='HOSTED_AFFILIATE')
    -> ORDER BY ucp.Payment_date;
+----+-------------+-------+------+---------------+-----+---------+-----+------+-------+
| id | select_type | table | type | possible_keys | key | key_len | ref | rows | Extra |
[the rest of this EXPLAIN output was truncated in the archive]

Re: Large Database Performance - Reference Sites?

2008-02-10 Thread Jay Pipes
You will likely need to be a lot more specific about what you are asking 
for here, David.  What is a large select?  What constitutes a large 
update?  What number of joined tables composes a multi join in your 
specific case?  What is text functionality?


-jay

David Stoller wrote:

Can someone with large databases (100 million records, 20K-row avg, x5)
 
contact me for some questions regarding performance on:

1. Text functionality
2. Performance
  large selects
  multi joins
  large updates
  bulk inserts
 
 
Best Regards,
[EMAIL PROTECTED] mailto:[EMAIL PROTECTED] 
R  D DBA

Malha Technology Park
Jerusalem 91481, Israel
972-2-6499241
 






Re: performance of heterogeneous joins

2008-01-24 Thread Jay Pipes

Nope, no difference, AFAIK.
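A cross-database join is just a join with schema-qualified table names
(a sketch; schema and table names are hypothetical), and the optimizer
treats it exactly like a same-database join:

SELECT o.id, c.name
FROM db1.orders AS o
JOIN db2.customers AS c ON o.customer_id = c.id;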

Alex K wrote:

Any ideas pertaining this newbie question?

Thank you so much,


Hi Guys,

Is there a performance hit when joining across multiple databases as
opposed to joining multiple tables in one database? Suppose the same
tables are available across all databases.

Thank you,

Alex








Re: Prepared SQL statements - Faster performance?

2008-01-14 Thread Jay Pipes
Are you using the PREPARE STATEMENT server-side syntax or an emulated 
prepared statement like in PDO?
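For reference, the server-side syntax looks like this (a sketch; table and
column names are hypothetical):

PREPARE sel FROM 'SELECT * FROM t WHERE col1 = ? AND col2 = ?';
SET @a = 1, @b = 'foo';
EXECUTE sel USING @a, @b;   -- re-execute with new values for each of the 5,000 runs
DEALLOCATE PREPARE sel;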


-jay

mos wrote:
I would like to speed up my Select queries since I'm executing approx 
5,000 of them, same syntax but the search values for 2 columns will 
change with each query. Will I see any performance increase if I prepare 
the statement and use parameters? (I don't need to use the query cache 
since the result set will be returned only once for each set of search 
values.)


TIA
Mike
MySQL 5.024






Re: Forbidden subquery

2007-12-20 Thread Jay Pipes
No problem.  I hope by now you figured out I made a typo... :)  The
WHERE in the DELETE should be prod_price_chg_flag='O', not ='X'. :)
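Putting the correction in place, the whole sequence reads:

CREATE TEMPORARY TABLE to_delete
SELECT prod_price.prod_id FROM prod_price
WHERE prod_price.prod_price_chg_flag = 'X';

DELETE prod_price FROM prod_price
JOIN to_delete ON prod_price.prod_id=to_delete.prod_id
WHERE prod_price.prod_price_chg_flag = 'O';

DROP TABLE to_delete;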


-jay

Jerry Schwartz wrote:

Hi Jerry!

The very last sentence on:
http://dev.mysql.com/doc/refman/5.0/en/delete.html

is: "Currently, you cannot delete from a table and select from the same
table in a subquery."


[JS] Yes, I knew that. I just thought that illegal query was the best way of
expressing what I wanted to do.




But, to bypass that, you can create a temp table and join to that:


[JS] Bingo! It didn't occur to me to make a temporary table. That should do
exactly what I want!

Thanks.


CREATE TEMPORARY TABLE to_delete
SELECT prod_price.prod_id FROM prod_price
WHERE prod_price.prod_price_chg_flag = 'X';

DELETE prod_price FROM prod_price
JOIN to_delete ON prod_price.prod_id=to_delete.prod_id
WHERE prod_price.prod_price_chg_flag = 'X';

DROP TABLE to_delete;

Cheers,

Jay

Jerry Schwartz wrote:

What I want to accomplish is expressed best as

DELETE FROM prod_price
WHERE prod_price.prod_price_chg_flag = 'O'
AND prod_price.prod_id IN

(SELECT prod_price.prod_id FROM prod_price
WHERE prod_price.prod_price_chg_flag = 'X')
;

This is clear, concise, and completely illegal. I want to delete

every O

record which has an accompanying X record.

I tried using a self-join like this

DELETE FROM p1 USING prod_price AS p1 JOIN prod_price AS p2 ON

p1.prod_id =

p2.prod_id
WHERE p1.prod_price_chg_flag = 'O'
AND p2.prod_price_chg_flag = 'X'
;

And got a storage engine error 134 (MyISAM table). I'm not even

certain that

this would have done what I wanted, but I guess I won't find out.

Here's what the table prod_price looks like:

   Table: prod_price
Create Table: CREATE TABLE `prod_price` (
  `prod_price_id` varchar(15) NOT NULL default '',
  `prod_id` varchar(15) default NULL,
  `prod_price_del_format` varchar(255) default NULL,
  `prod_price_val_date` date default NULL,
  `prod_price_chg_flag` char(1) default NULL,
  `prod_price_disp_curr` varchar(10) default NULL,
  `prod_price_disp_price` decimal(10,2) default NULL,
  `prod_price_end_curr` varchar(10) default NULL,
  `prod_price_end_price` decimal(10,2) default NULL,
  `prod_price_reg_price` varchar(5) default NULL,
  `prod_price_changed` tinyint(1) default NULL,
  `prod_price_added` datetime default NULL,
  `prod_price_updated` datetime default NULL,
  PRIMARY KEY  (`prod_price_id`),
  KEY `prod_id` (`prod_id`)
) ENGINE=MyISAM DEFAULT CHARSET=utf8

I'm running 5.0.45-community-nt.

Suggestions?

Regards,

Jerry Schwartz
The Infoshop by Global Information Incorporated
195 Farmington Ave.
Farmington, CT 06032

860.674.8796 / FAX: 860.674.8341

www.the-infoshop.com
www.giiexpress.com
www.etudes-marche.com

















Re: Another cry for help..

2007-12-20 Thread Jay Pipes

You could use a view:

CREATE VIEW all_scores AS
SELECT s.tid, s.vid, s.uid, u.username, s.highScore
FROM score s
JOIN objects o
ON s.tid = o.tid
JOIN itemtypes it
ON s.vid = it.vid
JOIN users u
ON s.uid = u.uid
WHERE o.shortname = %s /* Should these ANDs really be ORs? */
AND it.itemtype LIKE %s;

SELECT @my_high_score := highScore
FROM all_scores
WHERE username = %s
LIMIT 1;

SELECT @total_scores := COUNT(*) FROM all_scores;

SELECT @total_greater_my_score := COUNT(*)
FROM all_scores
WHERE highScore > @my_high_score;

SELECT ((@total_greater_my_score + 1) / @total_scores) * 100 AS percentile;

Hope this helps,

Jay


Anders Norrbring wrote:

Brent Baisley skrev:

You might be able to use variables to store the result of the query.
Although I've never tried assigning the result of a query to a
variable, only field values.

SELECT (count(*)+1)/(SELECT COUNT(*) FROM score AS s2
WHERE s2.tid = @tid:=(SELECT tid FROM objects WHERE shortname = %s)
AND s2.vid = @vid:=(SELECT vid FROM itemtypes WHERE itemtype LIKE %s)
) * 100
AS percentile
WHERE s1.tid = @tid
AND s1.vid = @vid
AND s1.highScore > (SELECT highScore FROM score WHERE tid = s1.tid AND
vid = s1.vid
AND uid = (SELECT uid FROM users WHERE username = %s))

Brent


At first glance it doesn't work at all; I get NULL results from it,
but I haven't spent any time trying to locate the problem yet...


So, I'm still open for ideas!





On 12/20/07, Anders Norrbring [EMAIL PROTECTED] wrote:

Hi.. I'm struggling with a query that I'm trying to simplify as much as
possible, but I can't seem to get rid of using the very same subqueries
several times.
Would there be a way to optimize the following so I get rid of
subqueries that do the exact same thing more than once?


SELECT (count(*)+1) / (SELECT COUNT(*) FROM score AS s2
WHERE s2.tid = (SELECT tid FROM objects WHERE shortname = %s)
AND s2.vid = (SELECT vid FROM itemtypes WHERE itemtype LIKE %s)) * 100
AS percentile FROM score AS s1
WHERE s1.tid = (SELECT tid FROM objects WHERE shortname = %s)
AND s1.vid = (SELECT vid FROM types WHERE itemtype LIKE %s)
AND s1.highScore > (SELECT highScore FROM score WHERE tid = s1.tid AND
vid = s1.vid
AND uid = (SELECT uid FROM users WHERE username = %s))














Re: Forbidden subquery

2007-12-19 Thread Jay Pipes

Hi Jerry!

The very last sentence on:
http://dev.mysql.com/doc/refman/5.0/en/delete.html

is: "Currently, you cannot delete from a table and select from the same
table in a subquery."


But, to bypass that, you can create a temp table and join to that:

CREATE TEMPORARY TABLE to_delete
SELECT prod_price.prod_id FROM prod_price
WHERE prod_price.prod_price_chg_flag = 'X';

DELETE prod_price FROM prod_price
JOIN to_delete ON prod_price.prod_id=to_delete.prod_id
WHERE prod_price.prod_price_chg_flag = 'X';

DROP TABLE to_delete;

Cheers,

Jay

Jerry Schwartz wrote:

What I want to accomplish is expressed best as

DELETE FROM prod_price
WHERE prod_price.prod_price_chg_flag = 'O'
AND prod_price.prod_id IN

(SELECT prod_price.prod_id FROM prod_price
WHERE prod_price.prod_price_chg_flag = 'X')
;

This is clear, concise, and completely illegal. I want to delete every O
record which has an accompanying X record.

I tried using a self-join like this

DELETE FROM p1 USING prod_price AS p1 JOIN prod_price AS p2 ON p1.prod_id =
p2.prod_id
WHERE p1.prod_price_chg_flag = 'O'
AND p2.prod_price_chg_flag = 'X'
;

And got a storage engine error 134 (MyISAM table). I'm not even certain that
this would have done what I wanted, but I guess I won't find out.

Here's what the table prod_price looks like:

   Table: prod_price
Create Table: CREATE TABLE `prod_price` (
  `prod_price_id` varchar(15) NOT NULL default '',
  `prod_id` varchar(15) default NULL,
  `prod_price_del_format` varchar(255) default NULL,
  `prod_price_val_date` date default NULL,
  `prod_price_chg_flag` char(1) default NULL,
  `prod_price_disp_curr` varchar(10) default NULL,
  `prod_price_disp_price` decimal(10,2) default NULL,
  `prod_price_end_curr` varchar(10) default NULL,
  `prod_price_end_price` decimal(10,2) default NULL,
  `prod_price_reg_price` varchar(5) default NULL,
  `prod_price_changed` tinyint(1) default NULL,
  `prod_price_added` datetime default NULL,
  `prod_price_updated` datetime default NULL,
  PRIMARY KEY  (`prod_price_id`),
  KEY `prod_id` (`prod_id`)
) ENGINE=MyISAM DEFAULT CHARSET=utf8

I'm running 5.0.45-community-nt.

Suggestions?

Regards,

Jerry Schwartz
The Infoshop by Global Information Incorporated
195 Farmington Ave.
Farmington, CT 06032

860.674.8796 / FAX: 860.674.8341

www.the-infoshop.com
www.giiexpress.com
www.etudes-marche.com










Re: Update but insert if not exist

2007-12-17 Thread Jay Pipes

INSERT ... ON DUPLICATE KEY UPDATE:

http://dev.mysql.com/doc/refman/5.0/en/insert-on-duplicate.html
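Applied to the Shops table from the question (a sketch; the literal values
are made up, and it assumes ShopReference has a UNIQUE index so the server
can detect the duplicate):

INSERT INTO Shops (ShopNameInFull, ShopReference, TillNumber)
VALUES ('Main Street Store', 'MS001', '3')
ON DUPLICATE KEY UPDATE
  ShopNameInFull = VALUES(ShopNameInFull),
  TillNumber = VALUES(TillNumber);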

Cheers,

Jay

J Trahair wrote:

This is a question I want to know the answer to, as well! Is there any way of 
avoiding looking up a specific record in a table to see if it exists, before 
deciding whether to INSERT INTO or UPDATE, eg:

mstrSQL = "SELECT * FROM Shops WHERE ShopReference = '" & grd1.TextMatrix(numRowNo, 1) & "'"
Set rsRecordset = New ADODB.Recordset
gconn.CursorLocation = adUseServer
rsRecordset.Open mstrSQL, gconn, adOpenDynamic, adLockOptimistic
If rsRecordset.EOF = True Then
    mstrSQL = "INSERT INTO Shops (ShopNameInFull, ShopReference, TillNumber) VALUES ('" & strShopNameInFull & "', '" & strShopReference & "', '" & strTillNumber & "')"
    mconn.Execute mstrSQL
Else
    mstrSQL = "UPDATE Shops SET ShopNameInFull = '" & strShopNameInFull & "', ShopReference = '" & strShopReference & "', TillNumber = '" & strTillNumber & "' WHERE ShopReference = '" & grd1.TextMatrix(numRowNo, 1) & "'"
    mconn.Execute mstrSQL
End If


just thought I'd ask!

Jonathan Trahair






Re: Spfile in Mysql......

2007-11-27 Thread Jay Pipes

Sujatha S wrote:

MySQL should bring this as a new feature in their next release!


Unlikely.  Dynamic changes are, well, dynamic.  Permanent stuff goes in 
the my.cnf.
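For example (a sketch), a dynamic change like

SET GLOBAL max_connections = 500;

lasts only until the server restarts; to make it permanent you add the
matching line

max_connections = 500

under [mysqld] in my.cnf yourself.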


-jay


Regards,

Sujatha
On Nov 27, 2007 11:44 AM, Shanmugam, Dhandapani 
[EMAIL PROTECTED] wrote:


Hello,




The dynamic changes made on a mysql server instance vanish once
the instance goes down...!! Is there any way for mysql to store the
dynamic changes in the my.cnf file automatically (like Oracle), so that on
the next startup mysql automatically picks up the dynamic changes from the
my.cnf file?


Unfortunately there is not. You should alter your my.cnf file to record the
changes you make.








Re: Slow Subquery

2007-10-22 Thread Jay Pipes
Indeed, as you say, Brent, correlated subqueries are not well-optimized 
in MySQL.  The specific subquery (the IN() subquery) demonstrated in the 
original post is, however, optimized in MySQL 6.0 :)


More comments inline.

Brent Baisley wrote:
You are using a correlated subquery, which MySQL is terrible at. 
Whenever you find yourself doing a correlated subquery, see if you can 
switch it to a derived table with a join, which MySQL is far better at. 
A derived table is like a virtual table you create on the fly. It's 
very simple, just assign a name to your query and then treat it as if it 
is a regular table.


Actually, in this case, no need for a derived table.  A simple join will 
suffice:


SELECT p.* FROM projects p
JOIN projects_tags pt ON p.id = pt.project_id
JOIN tags t ON pt.tag_id = t.id
WHERE t.name='foo';

Make sure you've got indexes on projects (id), projects_tags (project_id,
tag_id), and tags (name).


Cheers,

Jay


So your query would look something like this:
SELECT projects.* FROM projects
JOIN (SELECT projects_tags.project_id FROM tags, projects_tags WHERE
tags.name='foo' AND projects_tags.tag_id=tags.id) AS ptagids

ON projects.id=ptagids.project_id

Your IN has become a JOIN and mysql optimizes it far better.

On Oct 19, 2007, at 6:57 PM, Ryan Bates wrote:

I'm trying to determine why a subquery is slower than running two 
separate queries. I have a simple many-to-many association using 3 
tables: projects, tags and projects_tags. Here's the query I'm using 
to find the projects with a given tag:


SELECT * FROM projects WHERE id IN (SELECT projects_tags.project_id 
FROM tags, projects_tags WHERE tags.name='foo' AND 
projects_tags.project_id=projects.id);

(0.36 sec)

Compare that with splitting it into two queries:

SELECT projects_tags.project_id FROM tags, projects_tags WHERE 
tags.name='foo' AND projects_tags.project_id=projects.id

(0.00 sec) /* returns 1, 2, 3 */

SELECT * FROM projects WHERE id IN (1, 2, 3);
(0.00 sec)

Why is it so much faster? Looking at the explain statement (below) of 
the one with the subquery, it appears it's not using the primary key 
index on the projects table. Why is it that MySQL doesn't perform this 
simple optimization? And is there a solution that will allow me to 
still use a subquery?


I realize I can use a join instead of a subquery, but this is a 
simplified example.



Here's the explain statement:

*** 1. row ***
   id: 1
  select_type: PRIMARY
table: projects
 type: ALL
possible_keys: NULL
  key: NULL
  key_len: NULL
  ref: NULL
 rows: 15433
Extra: Using where
*** 2. row ***
   id: 2
  select_type: DEPENDENT SUBQUERY
table: tags
 type: ref
possible_keys: PRIMARY,index_tags_on_name
  key: index_tags_on_name
  key_len: 258
  ref: const
 rows: 1
Extra: Using where; Using index
*** 3. row ***
   id: 2
  select_type: DEPENDENT SUBQUERY
table: projects_tags
 type: ref
possible_keys: tag_id
  key: tag_id
  key_len: 5
  ref: my_database.tags.id
 rows: 10
Extra: Using where


Here's the table dumps:

CREATE TABLE `projects` (
  `id` int(11) NOT NULL auto_increment,
  `name` varchar(255) default NULL,
  PRIMARY KEY  (`id`)
) ENGINE=InnoDB DEFAULT CHARSET=latin1;

CREATE TABLE `tags` (
  `id` int(11) NOT NULL auto_increment,
  `name` varchar(255) default NULL,
  PRIMARY KEY  (`id`),
  KEY `index_tags_on_name` (`name`)
) ENGINE=InnoDB DEFAULT CHARSET=latin1;

CREATE TABLE `projects_tags` (
  `project_id` int(11) default NULL,
  `tag_id` int(11) default NULL,
  KEY `tag_id` (`tag_id`)
) ENGINE=InnoDB DEFAULT CHARSET=latin1;


I'm using MySQL 5.0.37. Thanks in advance.

Ryan











Re: Need help with a natural sort order for version numbers and release code names

2007-10-12 Thread Jay Pipes

Daevid Vincent wrote:
 
I'm trying to get some 'release/version numbers' to sort properly.


mysql> SELECT ReleaseID, Name FROM releases ORDER BY Name DESC;
+---+-+

| ReleaseID | Name|
+---+-+
|18 | Unspecified | 
|20 | Next Patch  | 
|58 | LOCset  | 
|74 | Abashiri| 
|54 | 4.6.0 (Folsom)  | 
		  -- 4.5.10 should be here
|99 | 4.5.9   | 
|98 | 4.5.6   | 
|93 | 4.5.5 (Purdy)   | 
|97 | 4.5.4   | 
|96 | 4.5.3   | 
|94 | 4.5.2   | 
|   100 | 4.5.10  |   -- should be ^ there
|91 | 4.5.1 Deferred  | 
|78 | 4.5.1 (Leavenworth) | 
|95 | 4.2.7.4 | 
|92 | 4.2.7.3 | 
|90 | 4.2.7.2 | 
|87 | 4.2.7.1 | 
|88 | 4.2.7.0 |  

I like this order, especially with the top four,
except that 4.5.10 should be higher up,
just under 4.6.0, not under 4.5.2 as it is now.


So I tried the "+ 0" trick, which makes things even worse
(notice the 4.2.6.1 and 4.2.6.0 -- yipes!):


mysql> SELECT ReleaseID, Name FROM releases ORDER BY Name + 0 DESC;


What about ORDER BY REPLACE(Name, '.', '') + 0 DESC?


+---+-+
| ReleaseID | Name|
+---+-+
 (18,20,58,74) are moved
:(
|54 | 4.6.0 (Folsom)  | 
|78 | 4.5.1 (Leavenworth) | 
|   100 | 4.5.10  | 
|91 | 4.5.1 Deferred  | 
|93 | 4.5.5 (Purdy)   | 
|94 | 4.5.2   | 
|96 | 4.5.3   | 
|97 | 4.5.4   | 
|98 | 4.5.6   | 
|99 | 4.5.9   | 
|82 | 4.2.6.1 |?
|76 | 4.2.2   | 
|75 | 4.2.4   | 
|72 | 4.2.1   | 
|73 | 4.2.3   | 
|67 | 4.2.6.0 |?



I'm pretty sure this is going to involve some sort of splitting the version
from the release codeword via some string functions, and then operating on
that part.
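One way to do that split (a sketch against the releases table above; it
assumes at most three dot-separated numeric parts, and the non-numeric
names cast to zero and so fall to the bottom unless you add a tiebreaker):

SELECT ReleaseID, Name
FROM releases
ORDER BY
  CAST(SUBSTRING_INDEX(Name, '.', 1) AS UNSIGNED) DESC,
  CAST(SUBSTRING_INDEX(SUBSTRING_INDEX(Name, '.', 2), '.', -1) AS UNSIGNED) DESC,
  CAST(SUBSTRING_INDEX(SUBSTRING_INDEX(Name, '.', 3), '.', -1) AS UNSIGNED) DESC;

This puts 4.5.10 between 4.6.0 and 4.5.9, since each part is compared
numerically rather than as a string.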


D.Vin
http://daevid.com
---
eval() is my favorite templating engine.









Re: 2008 conference fee?

2007-10-08 Thread Jay Pipes

Sid Lane wrote:

stupid non-technical ?:

does anyone know what the registration fee is going to be for the 2008
conference?  my mgr needs a # today to put in next yr's budget and I couldn't
find it on the conference site.  if it's not been finalized could someone
tell me what it was last year?


Hi!

I believe the conference fees will be similar to last year:

$1,095 conference w/o tutorials
$495 tutorials
$1,495 conference w/tutorials

Plus, as always, there are significant discounts available for a variety 
of groups (students, government, user groups, educators, etc..)


I *think* that's right... :)

Cheers, and post back here if you've got any further ?s.

Jay Pipes
Program Chair, MySQL Conference and Expo 2008





Re: [mysql] duplicating lines

2007-09-13 Thread Jay Pipes

Hi Craig,

would you mind posting the SHOW CREATE TABLE for the tables in question? 
 I'm having trouble determining what is the primary key for your 
service ticket table...


Thanks,

-Jay

Weston, Craig (OFT) wrote:

Hello again,

I am having a duplication of results problem. I believe my
query (below) is giving me exactly what I deserve in writing it.

 


What it returns to me is a row for each status. What I would most want
would be a single row with the oldest status, i.e. the status datetime
that happened earliest.

 


What I am trying to do is determine when a service desk ticket first
enters any one of these three categories. I am not using distinct on
`thedata2`.`Source` as this does not affect the result set.

 

I have 2 tables. 


One of the tables lists all the ticket information at time of the ticket
being closed. The other has an entry referenced by ticket number for
each time a ticket is touched or updated. So what I am trying to do is
identify the last time it was touched with the appropriate status
change.

 


Does anyone have any idea what I could do to eliminate the duplicate
with the oldest time? I am experimenting in the idea of a subquery but
can't think of anything else.

 

 


???

 


Thanks,

craig

 

 


SELECT

`thedata2`.`Source`,

`thedata1`.`Status`,

`thedata2`.`Priority`,

`thedata1`.`start_Time`,

`thedata1`.`Close_Time`,

`thedata1`.`workday` 'cycletime'

FROM

`thedata2`

Inner Join `thedata1` ON `thedata1`.`Source` = `thedata2`.`SourceSR`

 


WHERE

 


(`thedata1`.`Status` like  'Resolved' OR

`thedata1`.`Status` like 'Restored' OR

`thedata1`.`Status` like 'Isolation')

 

and 


`thedata2`.`Open_Time` BETWEEN  '2007-02-01 00:00' AND '2007-08-31
23:59:59'

And

 


((`thedata2`.`Priority` = 1 and `thedata1`.`workday`  14400)

OR

(`thedata2`.`Priority` = 2 and `thedata1`.`workday`  86400)

or

(`thedata2`.`Priority` = 2 and `thedata1`.`workday`  172800))
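One way to collapse the duplicates down to the earliest status time per
ticket is a groupwise minimum (a sketch; it reuses the join and status
filter from the query above and assumes `SourceSR` is the ticket number):

SELECT `thedata2`.`SourceSR`, MIN(`thedata1`.`start_Time`) AS first_status_time
FROM `thedata2`
Inner Join `thedata1` ON `thedata1`.`Source` = `thedata2`.`SourceSR`
WHERE `thedata1`.`Status` IN ('Resolved', 'Restored', 'Isolation')
GROUP BY `thedata2`.`SourceSR`;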








Re: Does this MySQL client exist?

2007-09-13 Thread Jay Pipes

Christoph Boget wrote:

I did a search and couldn't find anything like what I'm looking for and
though I doubt something like this does exist, I figured I'd ask anyway.  Is
there a client (not phpMyAdmin) that can connect to a server (that is
running MySQL) using SSH and connect to the database that way?  Right now,
the only way we are allowed to access the actual server is by using either
SSH or SFTP.  The only way we can access the MySQL database on that server
is either use phpMyAdmin (which I don't particularly care for; not to
disparage the hard work of the developers, it's just a matter of personal
preference) or use the command line.

I'm hoping that there is client software out there that can do what I'm
looking for.  Does it exist?


Use the mysql client, like so:

# ssh [EMAIL PROTECTED]

[EMAIL PROTECTED] ~ mysql --user=dbuser --password somedatabasename
Enter password: XXX

mysql> SELECT blah blah...




Re: finding count of spaces in a string

2007-09-04 Thread Jay Pipes

[EMAIL PROTECTED] wrote:
We have numerous identical tables with a varchar column that holds data 
like this: 0 0 0 1 0 1 0 25 7 0 139 0 9. Essentially there are a bunch 
of integers with a single space as a separator. There _should_ be no more 
than 30 entries ( and 29 spaces ), but sometimes the system misfires and 
there are more or less.  Is there a MySQL solution to getting a count of 
the spaces present in the field, figuring that spaces + 1 will equal 
entries? It's fairly straight forward using a PHP application, but I'd 
like to get the DB server to accomplish this task. Not having much luck 
finding a solution in the manual. 


SELECT
CHAR_LENGTH(field_name) - CHAR_LENGTH(REPLACE(field_name, ' ', '')) as 
num_spaces FROM my_table;
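For example, against the sample value quoted above:

SELECT CHAR_LENGTH('0 0 0 1 0 1 0 25 7 0 139 0 9')
     - CHAR_LENGTH(REPLACE('0 0 0 1 0 1 0 25 7 0 139 0 9', ' ', '')) + 1
     AS num_entries;   -- returns 13, so spaces + 1 = entries checks out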


Cheers,

Jay




Re: thread_concurrency in linux

2007-09-03 Thread Jay Pipes

Andrew Braithwaite wrote:

Does anyone know if thread_concurrency works in linux or is it just
limited to Solaris and Windows?


Hi!  That variable only affects Solaris, as the Solaris threading 
library supports thr_setconcurrency().


innodb_thread_concurrency, however, can affect all platforms, AFAIK:

http://www.mysql.org/doc/refman/5.0/en/innodb-parameters.html#option_mysqld_innodb_thread_concurrency
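It is also settable at runtime without a restart (a sketch; pick a value
suited to your CPU count):

SET GLOBAL innodb_thread_concurrency = 8;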


I know the general rule is number of CPU's*2 but will this actually have
any effect with Linux's threading model?


Nope, at least AFAIK.

-jay




Re: Index usage - MyISAM vs InnoDB

2007-08-27 Thread Jay Pipes

Hi!  Comments inline.

Edoardo Serra wrote:
SELECT sum(usercost) FROM cdr WHERE calldate BETWEEN '2007-06-01 
00:00:00' AND '2007-06-30 23:59:59'


If I run it on the MyISAM table, MySQL choose the right index (the one 
on the calldate column) and the query is fast enough


If I run it on the InnoDB table, MySQL uses no index even if an EXPLAIN 
query tells me that 'calldate' is between the available indexes


Here are my EXPLAIN results

mysql> EXPLAIN SELECT sum(usercost) FROM cdr_innodb WHERE calldate
BETWEEN '2007-06-01 00:00:00' AND '2007-06-30 23:59:59';
+----+-------------+-------+------+-----------------------------+------+---------+------+---------+-------------+
| id | select_type | table | type | possible_keys               | key  | key_len | ref  | rows    | Extra       |
+----+-------------+-------+------+-----------------------------+------+---------+------+---------+-------------+
|  1 | SIMPLE      | cdr   | ALL  | calldate,date-context-cause | NULL | NULL    | NULL | 5016758 | Using where |
+----+-------------+-------+------+-----------------------------+------+---------+------+---------+-------------+
1 row in set (0.00 sec)


mysql> EXPLAIN SELECT sum(usercost) FROM cdr_myisam WHERE calldate
BETWEEN '2007-06-01 00:00:00' AND '2007-06-30 23:59:59';
+----+-------------+-------+-------+-----------------------------+----------+---------+------+--------+-------------+
| id | select_type | table | type  | possible_keys               | key      | key_len | ref  | rows   | Extra       |
+----+-------------+-------+-------+-----------------------------+----------+---------+------+--------+-------------+
|  1 | SIMPLE      | cdr   | range | calldate,date-context-cause | calldate | 8       | NULL | 772050 | Using where |
+----+-------------+-------+-------+-----------------------------+----------+---------+------+--------+-------------+
1 row in set (0.11 sec)

Another strange thing is that the EXPLAIN on InnoDB says the table has 
5016758 rows but a SELECT count(*) returns 4999347 rows (which is the 
correct number)


The rows returned in EXPLAIN SELECT (and SHOW TABLE STATUS) for InnoDB 
tables is an estimate.  For MyISAM, it is the actual number of rows in 
the table.  This is because InnoDB has to track a version for each row 
in the table (for transactional isolation), and MyISAM does not, which 
makes it much easier to just have a simple row count for the table.


This estimate of rows returned is what is used by the optimizer to 
determine what execution plan is optimal for this particular query.  In 
this case, there are approximately 772K out of 5M rows which meet the 
WHERE condition -- or about 15% of the total number of rows in the 
table.  There is a certain threshold, where above it the optimizer will 
choose to do a sequential table scan of the data, versus do many random 
seeks into memory or disk.


It seems that you are hovering around the threshold for where the 
optimizer chooses to do a sequential table scan (InnoDB) vs a range 
operation on a btree with lookups into the data file for each matched 
row in the index (MyISAM).  The difference in returning an estimate vs. 
the actual row count *might* be the cause of the difference in execution 
plans.  Or, it could have something to do with the weights that the 
optimizer chooses to place on bookmark lookups in MyISAM vs a quick 
table scan in InnoDB.  I'd be interested to see what the difference in 
*performance* is?  Also, in *either* engine, if you are executing this 
particular query a *lot*, the best thing for you to do would be to put 
the index on (calldate, usercost) so that you have a covering index 
available to complete the query.
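For example (a sketch):

ALTER TABLE cdr ADD INDEX ix_calldate_usercost (calldate, usercost);

With that covering index in place, the range scan and the SUM can both be
satisfied entirely from the index, on either engine.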


Cheers!

Jay


Tnx in advance for help

Regards

Edoardo Serra
WeBRainstorm S.r.l.







Re: servers full potential / FT searches locking tables

2007-08-27 Thread Jay Pipes
SELECTs don't lock the table.  Are you having frequent UPDATEs while 
selecting?  That would be the reason for locks.


-jay

Justin wrote:

Ok.. Straight to the point.. Here is what I currently have.

MySQL Ver 14.12 Distrib 5.0.27
RHEL vs 5
584GB Raid 5 storage
8GB of RAM
and Dual 5130 processors (2.0GHz Intel Dual-Core Xeon)

My question is: am I utilizing the server's potential with the
following as my settings?  The server is a dedicated MySQL server, so I
want all power to go to the server. It just seems laggy at times, and I
want to be sure I've optimized to the fullest potential.


My biggest issue is with FT searches. Tables get locked during larger 
queries and I can't select anything when that happens. Is there any way 
not to lock the tables on a Full Text search? (does that make sense?)


thanks again for any insight

Justin.

Here's a dump of the my.cnf and the phpmyadmin dump of vars.

/etc/my.cnf

[mysqld]
datadir=/var/lib/mysql
socket=/var/lib/mysql/mysql.sock
wait_timeout=60
default-character-set=utf8
max_allowed_packet = 3000M
max_connections = 5000
ft_min_word_len=3

server-id=1
log-error = /var/log/mysql/error.log
expire_logs_days = 3


# Default to using old password format for compatibility with mysql 3.x
# clients (those using the mysqlclient10 compatibility package).
old_passwords=0

[mysql.server]
user=mysql

[mysqld_safe]
err-log=/var/log/mysql/mysqld.log
pid-file=/var/run/mysqld/mysqld.pid


auto increment increment   1
auto increment offset  1
automatic sp privileges  ON
back log  50
basedir  /
binlog cache size  32,768
bulk insert buffer size  8,388,608
character set client  utf8
character set connection  utf8
character set database  utf8
character set filesystem  binary
character set results  utf8
character set server  utf8
character set system  utf8
character sets dir  /usr/share/mysql/charsets/
collation connection  utf8_general_ci
collation database  utf8_general_ci
collation server  utf8_general_ci
completion type  0
concurrent insert  1
connect timeout  5
datadir  /var/lib/mysql/
date format  %Y-%m-%d
datetime format  %Y-%m-%d %H:%i:%s
default week format  0
delay key write  ON
delayed insert limit  100
delayed insert timeout  300
delayed queue size  1,000
div precision increment  4
engine condition pushdown  OFF
expire logs days  3
flush  OFF
flush time  0
ft boolean syntax  + -><()~*:""&|
ft max word len  84
ft min word len  3
ft query expansion limit  20
ft stopword file  (built-in)
group concat max len  1,024
have archive  YES
have bdb  NO
have blackhole engine  NO
have compress  YES
have crypt  YES
have csv  NO
have dynamic loading  YES
have example engine  NO
have federated engine  NO
have geometry  YES
have innodb  YES
have isam  NO
have merge engine  YES
have ndbcluster  NO
have openssl  DISABLED
have query cache  YES
have raid  NO
have rtree keys  YES
have symlink  YES
init connect
init file
init slave
innodb additional mem pool size  1,048,576
innodb autoextend increment  8
innodb buffer pool awe mem mb  0
innodb buffer pool size  8,388,608
innodb checksums  ON
innodb commit concurrency  0
innodb concurrency tickets  500
innodb data file path  ibdata1:10M:autoextend
innodb data home dir
innodb doublewrite  ON
innodb fast shutdown  1
innodb file io threads  4
innodb file per table  OFF
innodb flush log at trx commit  1
innodb flush method
innodb force recovery  0
innodb lock wait timeout  50
innodb locks unsafe for binlog  OFF
innodb log arch dir
innodb log archive  OFF
innodb log buffer size  1,048,576
innodb log file size  5,242,880
innodb log files in group  2
innodb log group home dir  ./
innodb max dirty pages pct  90
innodb max purge lag  0
innodb mirrored log groups  1
innodb open files  300
innodb support xa  ON
innodb sync spin loops  20
innodb table locks  ON
innodb thread concurrency  8
innodb thread sleep delay  10,000
interactive timeout  28,800
join buffer size  131,072
key buffer size  8,388,600
key cache age threshold  300
key cache block size  1,024
key cache division limit  100
language  /usr/share/mysql/english/
large files support  ON
large page size  0
large pages  OFF
lc time names  en_US
license  GPL
local infile  ON
locked in memory  OFF
log  OFF
log bin  OFF
log bin trust function creators  OFF
log error  /var/log/mysql/error.log
log queries not using indexes  OFF
log slave updates  OFF
log slow queries  OFF
log warnings  1
long query time  10
low priority updates  OFF
lower case file system  OFF
lower case table names  0
max allowed packet  1,073,740,800
max binlog cache size  4,294,967,295
max binlog size  1,073,741,824
max connect errors  10
max connections  5,000
max delayed threads  20
max error count  64
max heap table size  16,777,216
max insert delayed threads  20
max join size  18446744073709551615
max length for sort data  1,024
max prepared stmt count  16,382
max relay log size  0
max seeks for key  4,294,967,295
max sort length  1,024
max sp recursion depth  0
max tmp tables  32
max user 

Re: servers full potential / FT searches locking tables

2007-08-27 Thread Jay Pipes

A read lock does not prevent other reads.

Rolando Edwards wrote:

SELECTs do lock the tables implicitly.

According to page 400 (Section 28.1: Locking Concepts) of the MySQL 5.0
Certification Study Guide (ISBN 0-672-32812-7), here is what the first bullet
point says under the heading "A lock on data can be acquired implicitly or
explicitly":

"For a client that does nothing special to acquire locks, the MySQL server
implicitly acquires locks as necessary to process the client's statements
safely. For example, the server acquires a read lock when the client issues a
SELECT statement and a write lock when the client issues an INSERT statement.
Implicit locks are acquired only for the duration of a single statement."

- Original Message -
From: Jay Pipes [EMAIL PROTECTED]
To: Justin [EMAIL PROTECTED]
Cc: mysql@lists.mysql.com
Sent: Monday, August 27, 2007 2:26:29 PM (GMT-0500) America/New_York
Subject: Re: servers full potential / FT searches locking tables

SELECTs don't lock the table.  Are you having frequent UPDATEs while 
selecting?  That would be the reason for locks.


-jay

Justin wrote:

Ok.. Straight to the point.. Here is what I currently have.

MySQL Ver 14.12 Distrib 5.0.27
RHEL vs 5
584GB Raid 5 storage
8GB of RAM
and Dual 5130 processors (2.0GHz Intel Dual-Core Xeon)

My question is: am I utilizing the server's potential with the
following as my settings?  The server is a dedicated MySQL server, so I
want all power to go to the server. It just seems laggy at times, and I
want to be sure I've optimized to the fullest potential.


My biggest issue is with FT searches. Tables get locked during larger 
queries and I can't select anything when that happens. Is there any way 
not to lock the tables on a Full Text search? (does that make sense?)


thanks again for any insight

Justin.

Here's a dump of the my.cnf and the phpmyadmin dump of vars.

/etc/my.cnf

[mysqld]
datadir=/var/lib/mysql
socket=/var/lib/mysql/mysql.sock
wait_timeout=60
default-character-set=utf8
max_allowed_packet = 3000M
max_connections = 5000
ft_min_word_len=3

server-id=1
log-error = /var/log/mysql/error.log
expire_logs_days = 3


# Default to using old password format for compatibility with mysql 3.x
# clients (those using the mysqlclient10 compatibility package).
old_passwords=0

[mysql.server]
user=mysql

[mysqld_safe]
err-log=/var/log/mysql/mysqld.log
pid-file=/var/run/mysqld/mysqld.pid


auto increment increment   1
auto increment offset  1
automatic sp privileges  ON
back log  50
basedir  /
binlog cache size  32,768
bulk insert buffer size  8,388,608
character set client  utf8
character set connection  utf8
character set database  utf8
character set filesystem  binary
character set results  utf8
character set server  utf8
character set system  utf8
character sets dir  /usr/share/mysql/charsets/
collation connection  utf8_general_ci
collation database  utf8_general_ci
collation server  utf8_general_ci
completion type  0
concurrent insert  1
connect timeout  5
datadir  /var/lib/mysql/
date format  %Y-%m-%d
datetime format  %Y-%m-%d %H:%i:%s
default week format  0
delay key write  ON
delayed insert limit  100
delayed insert timeout  300
delayed queue size  1,000
div precision increment  4
engine condition pushdown  OFF
expire logs days  3
flush  OFF
flush time  0
ft boolean syntax  + -><()~*:""&|
ft max word len  84
ft min word len  3
ft query expansion limit  20
ft stopword file  (built-in)
group concat max len  1,024
have archive  YES
have bdb  NO
have blackhole engine  NO
have compress  YES
have crypt  YES
have csv  NO
have dynamic loading  YES
have example engine  NO
have federated engine  NO
have geometry  YES
have innodb  YES
have isam  NO
have merge engine  YES
have ndbcluster  NO
have openssl  DISABLED
have query cache  YES
have raid  NO
have rtree keys  YES
have symlink  YES
init connect
init file
init slave
innodb additional mem pool size  1,048,576
innodb autoextend increment  8
innodb buffer pool awe mem mb  0
innodb buffer pool size  8,388,608
innodb checksums  ON
innodb commit concurrency  0
innodb concurrency tickets  500
innodb data file path  ibdata1:10M:autoextend
innodb data home dir
innodb doublewrite  ON
innodb fast shutdown  1
innodb file io threads  4
innodb file per table  OFF
innodb flush log at trx commit  1
innodb flush method
innodb force recovery  0
innodb lock wait timeout  50
innodb locks unsafe for binlog  OFF
innodb log arch dir
innodb log archive  OFF
innodb log buffer size  1,048,576
innodb log file size  5,242,880
innodb log files in group  2
innodb log group home dir  ./
innodb max dirty pages pct  90
innodb max purge lag  0
innodb mirrored log groups  1
innodb open files  300
innodb support xa  ON
innodb sync spin loops  20
innodb table locks  ON
innodb thread concurrency  8
innodb thread sleep delay  10,000
interactive timeout  28,800
join buffer size  131,072
key buffer size  8,388,600
key cache age threshold  300
key cache block size  1,024

Re: user permissions to all DB

2007-08-21 Thread Jay Pipes

solidzh wrote:

2007/8/21, Jay Pipes [EMAIL PROTECTED]:

Terry wrote:

Hello,

I want to grant a user all permissions to all DBs on the system as
well as any new DBs that show up.  I want to avoid having to modify
permissions everytime a new DB is added.  Is there a way to do this?

GRANT SELECT, INSERT, CREATE, ... ON *.* TO 'username'@'hostname'
IDENTIFIED BY 'password';



That's fine, but why not
grant all on *.* to 'user'@'host' identified by 'pwd'; ?


Because then you give the user SUPER, FILE, ALTER, SHUTDOWN, and PROCESS 
privileges, which probably isn't a good idea... :)
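You can always verify exactly what an account ended up with (a sketch):

SHOW GRANTS FOR 'username'@'hostname';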


Cheers,

Jay




Re: user permissions to all DB

2007-08-20 Thread Jay Pipes

Terry wrote:

Hello,

I want to grant a user all permissions to all DBs on the system as
well as any new DBs that show up.  I want to avoid having to modify
permissions everytime a new DB is added.  Is there a way to do this?


GRANT SELECT, INSERT, CREATE, ... ON *.* TO 'username'@'hostname' 
IDENTIFIED BY 'password';


Cheers,

Jay




Re: user permissions to all DB

2007-08-20 Thread Jay Pipes

Yep.

Terry wrote:

Just to verify, will that include all new databases?

On 8/20/07, Jay Pipes [EMAIL PROTECTED] wrote:

Terry wrote:

Hello,

I want to grant a user all permissions to all DBs on the system as
well as any new DBs that show up.  I want to avoid having to modify
permissions everytime a new DB is added.  Is there a way to do this?

GRANT SELECT, INSERT, CREATE, ... ON *.* TO 'username'@'hostname'
IDENTIFIED BY 'password';

Cheers,

Jay







Re: user permissions to all DB

2007-08-20 Thread Jay Pipes

Terry,

I absolutely agree with Rolando on this.

Rolando,

Although I agree with you, I was only trying to answer Terry's question :)

Cheers,

Jay

Rolando Edwards wrote:

You must be very careful when granting permissions on every database this way.

Here is why:

By giving a user permissions on all databases this way,
you also give away permissions to the 'mysql' schema.
This is where the grant tables live.

A person could
1) insert new users into mysql.user like this
INSERT INTO mysql.user VALUES (...);

2) delete users from mysql.user like this
DELETE FROM mysql.user WHERE host='...' AND user='...';

3) maliciously or accidentally change passwords like this
UPDATE mysql.user SET PASSWORD=PASSWORD('insert new password') WHERE 
host='...' AND user='...';

4) grant additional privileges to himself like this
UPDATE mysql.user SET grant_priv='Y',execute_priv='Y',... WHERE host='...' AND 
user='...';

After setting those privileges, the person would then run FLUSH PRIVILEGES;
Then, all the privileges the user gave himself would go into effect !!!
Of course, the user would need the RELOAD privilege to do FLUSH PRIVILEGES;

Even if the user does not have RELOAD privilege, the user could still give 
himself this privilege in a delayed way like this
UPDATE mysql.user SET reload_priv='Y' WHERE host='...' AND user='...';

Then the next time mysqld is restarted, all the privileges the user gave 
himself would go into effect !!!

It is therefore NOT RECOMMENDED the user be granted privileges over the 'mysql' 
schema.

Instead, do this:
GRANT SELECT, INSERT, CREATE, ... ON db1.* TO 'username'@'hostname' IDENTIFIED BY 'password';

GRANT SELECT, INSERT, CREATE, ... ON db2.* TO 'username'@'hostname' IDENTIFIED 
BY 'password';
GRANT SELECT, INSERT, CREATE, ... ON db3.* TO 'username'@'hostname' IDENTIFIED 
BY 'password';
GRANT SELECT, INSERT, CREATE, ... ON db4.* TO 'username'@'hostname' IDENTIFIED 
BY 'password';

Grant the necessary privileges to each database individually and leave out 
'mysql'.

Unfortunately, you cannot grant privileges to all databases and revoke 
privileges from one schema ('mysql' in this instance)
You must enumerate the databases you specifically want to grant the user 
privileges to.

GUARD THE mysql SCHEMA WITH YOUR LIFE, PLEASE 

- Original Message -
From: Jay Pipes [EMAIL PROTECTED]
To: Terry [EMAIL PROTECTED]
Cc: mysql@lists.mysql.com
Sent: Monday, August 20, 2007 3:41:52 PM (GMT-0500) America/New_York
Subject: Re: user permissions to all DB

Terry wrote:

Hello,

I want to grant a user all permissions to all DBs on the system as
well as any new DBs that show up.  I want to avoid having to modify
permissions everytime a new DB is added.  Is there a way to do this?


GRANT SELECT, INSERT, CREATE, ... ON *.* TO 'username'@'hostname' 
IDENTIFIED BY 'password';


Cheers,

Jay







Re: Insert Select query problem

2007-08-10 Thread Jay Pipes

Ed Reed wrote:

Hi All,
 
I have an issue that I need to resolve that is difficult to explain. I

hope that someone can understand what I'm trying to do and shed some
light on a solution. Here goes.
 
I have three tables, inventory, which is a list of transactions with

positive and negative values; request, which essentially is a temporary
table that gets deleted after it's used here; and purchase, which holds
the solution as to whether an item is to be purchased or removed from
inventory,
 
CREATE TABLE `inventory` (

  `id` bigint(20) NOT NULL AUTO_INCREMENT,
  `Item` varchar(100) NOT NULL DEFAULT '',
  `Qty` int(11) NOT NULL DEFAULT '0',
  PRIMARY KEY (`id`)
) ENGINE=MyISAM AUTO_INCREMENT=13 DEFAULT CHARSET=latin1; 
 
CREATE TABLE `purchase` (

  `id` bigint(20) NOT NULL AUTO_INCREMENT,
  `Source` int(11) DEFAULT NULL,
  `Item` varchar(100) NOT NULL DEFAULT '',
  `Qty` int(11) NOT NULL DEFAULT '0',
  PRIMARY KEY (`id`)
) ENGINE=MyISAM DEFAULT CHARSET=latin1; 
 
CREATE TABLE `request` (

  `id` bigint(20) NOT NULL AUTO_INCREMENT,
  `Required` varchar(100) NOT NULL DEFAULT '',
  `Qty` int(11) NOT NULL DEFAULT '0',
  PRIMARY KEY (`id`)
) ENGINE=MyISAM AUTO_INCREMENT=4 DEFAULT CHARSET=latin1; 
 
My Inventory and Request tables have data in them like this,
 
Insert Into `inventory` (Item, Qty)
Values 
('Apples',5),

('Bananas',4),
('Cherries',6),
('Apples',-1),
('Bananas',1),
('Cherries',-2),
('Apples',3),
('Bananas',-7),
('Cherries',19),
('Apples',-5),
('Bananas',88),
('Cherries',6);
 
Insert Into `request` (Required, Qty)

Values
('Apples', 12),
('Bananas', 112),
('Cherries', 5);
 
Now what I'd like to do is create a single Insert Select query that

creates a record in my purchase table for each of the items in my
request table based on the number of items available in my inventory.
But, if there aren't enough items in the inventory to cover the amount
requested, I need to have a second record for that item in the purchase
table with the qty difference to another source. So based on the data in
the inventory my current totals are,
 
+--+--+

| Item | Sum(Qty) |
+--+--+
| Apples   | 2|
| Bananas  | 86   |
| Cherries | 29   |
+--+--+
 
and based on the qty of items in my request I would like to have a

purchase table that looks like this,
 
++--+-+

| Source | Item | Qty |
++--+-+
| 1  | Apples   | 2   |
| 0  | Apples   | 10  |
| 1  | Bananas  | 86  |
| 0  | Bananas  | 26  |
| 1  | Cherries | 5   |
++--+-+
 
with a source of 1 meaning pull the items from inventory and a source

of 0 means purchase them from somewhere else.
 
Can anyone help me with this?


Try this:

INSERT INTO purchase (Source, Item, Qty)
SELECT
  1, totals.Item, r.Qty
FROM request r
JOIN (
 SELECT Item, SUM(Qty) AS TotQty
 FROM inventory
 GROUP BY Item
) AS totals
ON r.Required = totals.Item
WHERE r.Qty <= totals.TotQty
UNION ALL
SELECT
  0, totals.Item, (r.Qty - totals.TotQty)
FROM request r
JOIN (
 SELECT Item, SUM(Qty) AS TotQty
 FROM inventory
 GROUP BY Item
) AS totals
ON r.Required = totals.Item
WHERE r.Qty > totals.TotQty;
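If you also want the partial pull from inventory that your desired output
shows (the (1, Apples, 2) row), a variant of the same idea caps the first
branch with LEAST (a sketch against the same tables):

INSERT INTO purchase (Source, Item, Qty)
SELECT
  1, totals.Item, LEAST(r.Qty, totals.TotQty)
FROM request r
JOIN (
 SELECT Item, SUM(Qty) AS TotQty
 FROM inventory
 GROUP BY Item
) AS totals
ON r.Required = totals.Item
WHERE totals.TotQty > 0
UNION ALL
SELECT
  0, totals.Item, (r.Qty - totals.TotQty)
FROM request r
JOIN (
 SELECT Item, SUM(Qty) AS TotQty
 FROM inventory
 GROUP BY Item
) AS totals
ON r.Required = totals.Item
WHERE r.Qty > totals.TotQty;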

cheers,

Jay




Re: 2007 MySQL Community Survey - Got Ten Minutes to Spare?

2007-08-08 Thread Jay Pipes

[EMAIL PROTECTED] wrote:
Problems again with the survey design and functionality. Page 4 asks
questions about Falcon, and to be honest I don't know anything about
Falcon, but you've required answers to advance and only offered 'yes' and
'no' as choices.  Without a 'Don't know' choice as an option, your results
will be totally skewed. I designed survey questionnaires in my past life
as a psychologist, and this one is fatally flawed. I'm done. I won't try
again.


Well, since I'm not a psychologist, I did the best I could, David.  I'll 
remember your input for the next one and hopefully do a better job.


Cheers,

Jay




Re: 2007 MySQL Community Survey - Got Ten Minutes to Spare?

2007-08-07 Thread Jay Pipes

Thanks for the input!  Hopefully, I've fixed the problem.

Please retry.  It should have saved your previous answers.

Thanks much!

Jay

J.R. Bullington wrote:

I received the same as David. The question was:

What OS do you currently use? Please check all that apply: (page 3, question 4 
or 5):

I chose Linux 2.6.x and Windows. It kept telling me that I needed to answer the 
question.

I changed the answer to just Linux and it let me thru.

Just to provide a little more info for you.

J.R.



From: [EMAIL PROTECTED]
Sent: Tuesday, August 07, 2007 9:35 AM
To: 'Jay Pipes' [EMAIL PROTECTED], mysql@lists.mysql.com
Subject: RE: 2007 MySQL Community Survey - Got Ten Minutes to Spare? 

Having a moment of altruism, I started doing the survey only to find that 
it wouldn't let me advance to the next page (from either page 1 or page 2, 
can't recall). I kept getting an error of "an answer is required of this
question" even when I had provided one. No good deed goes unpunished
perhaps...


David


So I was gonna take this survey (I don't need or care about the book,
just wanted to help you out) and honestly, it's more like a quiz --
needless to say I didn't do it. 

:-| 



-Original Message-
From: Jay Pipes [mailto:[EMAIL PROTECTED] 
Sent: Friday, August 03, 2007 2:33 PM

To: mysql@lists.mysql.com
Subject: 2007 MySQL Community Survey - Got Ten Minutes to Spare?

2007 MySQL Community Survey - Bribe Included

I've created a survey on SurveyMonkey that I am hoping to get 
a bunch of 
responses for. The survey will help the community team identify how 
(in)effectively we communicate development and other goals 
and also what 
features you, our community users, most want in future versions of 
MySQL. So, hey, give us ten minutes of your time and help us 
make MySQL 
better.


A Blatant Bribe for Participating

And for those who need a bribe, we'll be giving away two Apress books 
(each) to 3 random survey takers. The survey is anonymous, but if you 
would like to go into the drawing for the books, just include 
your email 
address in the very last question...otherwise, just leave it blank.


Link to the 2007 Community survey:

http://www.surveymonkey.com/s.aspx?sm=ONCdTFPxjyq55NAUBUJ_2fvQ_3d_3d

Thanks much!

Jay Pipes
Community Relations Manager, North America
MySQL, Inc.










2007 MySQL Community Survey - Got Ten Minutes to Spare?

2007-08-03 Thread Jay Pipes

2007 MySQL Community Survey - Bribe Included

I've created a survey on SurveyMonkey that I am hoping to get a bunch of 
responses for. The survey will help the community team identify how 
(in)effectively we communicate development and other goals and also what 
features you, our community users, most want in future versions of 
MySQL. So, hey, give us ten minutes of your time and help us make MySQL 
better.


A Blatant Bribe for Participating

And for those who need a bribe, we'll be giving away two Apress books 
(each) to 3 random survey takers. The survey is anonymous, but if you 
would like to go into the drawing for the books, just include your email 
address in the very last question...otherwise, just leave it blank.


Link to the 2007 Community survey:

http://www.surveymonkey.com/s.aspx?sm=ONCdTFPxjyq55NAUBUJ_2fvQ_3d_3d

Thanks much!

Jay Pipes
Community Relations Manager, North America
MySQL, Inc.

--
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/[EMAIL PROTECTED]



MySQL Camp II - August 23-24 - Brooklyn, New York

2007-08-03 Thread Jay Pipes

=== MySQL Camp II - August 23-24 - Brooklyn, New York ==

The second MySQL Camp is happening August 23rd and 24th at Polytechnic 
University in Brooklyn, New York.  Like the first MySQL Camp, this one 
is a *completely free*, *community-driven* event.


About MySQL Camp


The camp is a relaxed, barcamp-style unconference that gets MySQL and 
FLOSS community members, users, and developers together for the purpose 
of driving innovation and participation.  Sessions at the un-conference 
are proposed and voted on by the campers both onsite and before the camp 
begins.  The camp is part hackfest, part interactive learning and 
sharing, and part relaxed networking event.


The focus of MySQL Camp II is participation.  Come prepared to shout out 
ideas, challenge traditional thinking, make new friends, and work with 
fellow community members on both your own and community projects.


Limited Registration


Registration for MySQL Camp II is restricted to only 200 participants, 
and space is filling up quickly.  To register, email Jay Pipes 
([EMAIL PROTECTED]) the following information:


- Your Name
- Your Company or Affiliation (if applicable)
- Your Location
- Your Email Address

Links and More Information
--

Up to date information about the camp is available at http://mysqlcamp.org.

Information about Polytechnic University is available at 
http://www.poly.edu.


For hotel information and room sharing, please check the MySQLCamp.org 
website continually as the camp dates approach.


==

--
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/[EMAIL PROTECTED]



Re: Optimize code?

2007-04-27 Thread Jay Pipes

Hi Jerry, comments inline

Jerry Schwartz wrote:

I need (ultimately) to update some prices in a prod_price table. First, I
need to locate a product and its associated prices using a field
prod.prod_pub_prod_id which is not unique, and is often null, but it is
indexed. (Subsequent operations will use PRIMARY keys, but I haven't gotten
there yet.) I further qualify a product by an associated pub.pub_code, to
weed out possible duplicate prod_pub_prod_id entries from different
publishers.


Good... I would move to lookups/joins on a primary key ASAP for performance.


My SELECT statement is

SELECT SQL_CALC_FOUND_ROWS prod.prod_num,
prod_price.prod_price_end_curr,
prod_price.prod_price_end_price,
prod_price.prod_price_disp_curr,
prod_price.prod_price_disp_price
FROM pub JOIN prod JOIN prod_price
WHERE pub.pub_id = prod.pub_id
AND pub.pub_code IN ('ener','fit','govt','heal','id','life','manu')
AND prod.prod_id = prod_price.prod_id
AND prod.prod_pub_prod_id = 101771
AND prod_price.prod_price_disp_curr = 'USD'
AND prod_price.prod_price_end_curr = 'USD';

An EXPLAIN of this query looks pretty good:

*** 1. row ***
   id: 1
  select_type: SIMPLE
table: prod
 type: ref
possible_keys: PRIMARY,pub_id,prod_pub_prod_id
  key: prod_pub_prod_id
  key_len: 766


Whoooaaahhh is it really a 766-byte-wide key?  That's going to kill you.
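
If prod_pub_prod_id really is a long VARCHAR (a key that wide suggests 
something like a utf8 VARCHAR(255)), one possible mitigation is a prefix 
index, so the index stores only the selective leading characters.  A 
sketch only; the prefix length here is an assumption:

ALTER TABLE prod
  DROP INDEX prod_pub_prod_id,
  ADD INDEX prod_pub_prod_id (prod_pub_prod_id(12));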


  ref: const
 rows: 2
Extra: Using where
*** 2. row ***
   id: 1
  select_type: SIMPLE
table: pub
 type: eq_ref
possible_keys: PRIMARY
  key: PRIMARY
  key_len: 45


Same here.  45-byte-wide PK is a killer.


  ref: giiexpr_db.prod.pub_id
 rows: 1
Extra: Using where
*** 3. row ***
   id: 1
  select_type: SIMPLE
table: prod_price
 type: ref
possible_keys: prod_id
  key: prod_id
  key_len: 46


Same


  ref: giiexpr_db.prod.prod_id
 rows: 2
Extra: Using where

As you can see, it first retrieves the (possibly multiple) prod records
based upon the prod_pub_prod_id, which is keyed. Then it hops over to the
pub table using the common pub_id field, which is the PRIMARY key in the pub
table, so it can check my IN condition. Finally, it picks up (possibly
multiple) prod_price records using the common field prod_id.

The optimization seems pretty good. A single execution of this query, using
the MySQL CLI, is reported to take .05 seconds. Unfortunately, I have about
20,000 products to process; so at a minimum I would expect it to take 1,000
seconds. Even ignoring the overhead from PHP, this is going to run for
awhile.

Does anyone have any suggestions for improving my code?

Regards,

Jerry Schwartz
Global Information Incorporated
195 Farmington Ave.
Farmington, CT 06032

860.674.8796 / FAX: 860.674.8341







--
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/[EMAIL PROTECTED]



Re: Problem on millions of records in one table?

2007-04-19 Thread Jay Pipes

He, Ming Xin PSE NKG wrote:

 Hi, Pipes
Is it reliable to use MySQL 5.1 in a commercial product now since it is
still a beta version?


Hmmm.  Probably depends on what you are doing with it... But, in 
general, it's fairly stable at this point but, like all beta software, 
cannot be considered a production version.


Cheers,

Jay


-Original Message-
From: Jay Pipes [mailto:[EMAIL PROTECTED] 
Sent: Wednesday, April 18, 2007 10:04 PM

To: Brent Baisley
Cc: He, Ming Xin PSE NKG; mysql@lists.mysql.com
Subject: Re: Problem on millions of records in one table?

Brent Baisley wrote:

It all depends on how complicated your data and searches are. I've got


tables that add 2-3 million per day and I don't have performance 
problems. Although we only retain at most 500 million records, not a 
full year's worth.


That said, you can get horrible performance out of mysql with tables as 
small as 100,000 records if you don't structure your queries correctly 
or use a good table structure. If I know the tables are going to grow 
quickly and I don't need the entire dataset all the time, I'll use merge 
tables. This makes it easy to remove old data easily from the default 
table set.


Hi!  Have you tried out the new partitioning features of MySQL 5.1 to do 
this?  Would be cool if you had some performance numbers comparing the 
older MERGE table method with the newer partitioning...


Cheers!

Jay

- Original Message - From: He, Ming Xin PSE NKG 
[EMAIL PROTECTED]

To: mysql@lists.mysql.com
Sent: Tuesday, April 17, 2007 11:03 PM
Subject: Problem on millions of records in one table?


Hi, all

The number of records in one table increases constantly. As
estimated, it will grow to at least 30 million within one year.
So we worry about whether MySQL can handle such a large number
of records with good performance, or whether we need some other
solution to avoid this problem, such as partitioning or splitting
the big table. Any help or ideas would be greatly appreciated.

Best Regards
mingxin








--
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/[EMAIL PROTECTED]



Re: Help please: SELECT in binlog?

2007-04-19 Thread Jay Pipes

Fionn Behrens wrote:

We recently switched to mysql5 and while we were at it we also changed
our logs from text to bin as suggested by the migration script we had
(probably created by debian people).

Now I unfortunately had to reconstruct what had happened during a faulty
run of our application and I could not get any SELECT statement from the
log!? The usual search engine run didn't bring up anything useful,

so my questions are:

1) Are the selects somewhere in the binlogs and I just have not found
   the right voodoo to make them come out?


No, no selects.  Only statements that change data are written to the binlog, AFAIK.


2) If they are not there by default, can I configure mysqld to store
   SELECTs in a binlog?


Not that I know of.


3) If not, is the old text log all I can go back to?


You can have both, AFAIK.  The general query log keeps all queries, 
including SELECTs.  Binlog only has data-modifying queries.
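
For example, on a 5.0-era server both logs could be enabled in my.cnf 
(the paths here are illustrative only):

[mysqld]
log     = /var/log/mysql/general.log
log-bin = /var/log/mysql/mysql-bin

Just bear in mind that the general query log grows very quickly on a 
busy server.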


Cheers,

jay

--
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/[EMAIL PROTECTED]



Re: Problem on millions of records in one table?

2007-04-18 Thread Jay Pipes

Brent Baisley wrote:
It all depends on how complicated your data and searches are. I've got 
tables that add 2-3 million per day and I don't have performance 
problems. Although we only retain at most 500 million records, not a 
full year's worth.


That said, you can get horrible performance out of mysql with tables as 
small as 100,000 records if you don't structure your queries correctly 
or use a good table structure. If I know the tables are going to grow 
quickly and I don't need the entire dataset all the time, I'll use merge 
tables. This makes it easy to remove old data easily from the default 
table set.


Hi!  Have you tried out the new partitioning features of MySQL 5.1 to do 
this?  Would be cool if you had some performance numbers comparing the 
older MERGE table method with the newer partitioning...


Cheers!

Jay

- Original Message - From: He, Ming Xin PSE NKG 
[EMAIL PROTECTED]

To: mysql@lists.mysql.com
Sent: Tuesday, April 17, 2007 11:03 PM
Subject: Problem on millions of records in one table?


Hi, all

The number of records in one table increases constantly. As
estimated, it will grow to at least 30 million within one year.
So we worry about whether MySQL can handle such a large number of
records with good performance, or whether we need some other solution
to avoid this problem, such as partitioning or splitting the big
table. Any help or ideas would be greatly appreciated.

Best Regards
mingxin





--
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/[EMAIL PROTECTED]



Re: Why is the Falcon license listed as 'PROPRIETARY' in 5.2.3?

2007-04-10 Thread Jay Pipes

Baron Schwartz wrote:

Greetings,

On 5.2.3:

select plugin_name, plugin_license from plugins;
+-------------+----------------+
| plugin_name | plugin_license |
+-------------+----------------+
| binlog      | GPL            |
| partition   | GPL            |
| ARCHIVE     | GPL            |
| BLACKHOLE   | GPL            |
| CSV         | GPL            |
| Falcon      | PROPRIETARY    |
| FEDERATED   | GPL            |
| MEMORY      | GPL            |
| InnoDB      | GPL            |
| MyISAM      | GPL            |
| MRG_MYISAM  | GPL            |
| ndbcluster  | GPL            |
+-------------+----------------+

Why is Falcon listed as PROPRIETARY?  I assume that won't be the 
eventual license when it's finished.  Is it just work in-progress to 
make it GPL or something?


Hi!  This was an oversight, and due to the original Netfrastructure code 
from Jim Starkey.  It is now fixed in the codebase, as evidenced here:


http://lists.mysql.com/commits/24222

Cheers!

Jay

--
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/[EMAIL PROTECTED]



Re: Do NULL values slow down the database?

2007-03-29 Thread Jay Pipes

Ales Zoulek wrote:

Hi,

I've read reacently, that it's not good to use columns with NULL
values in MySQL, because it slows down the select queries over that
columns. Is it true? Or do that affects only some situations or some
versions? Are there some relevant statistics about that?


There is not really a noticeable slowdown just for having NULLable 
columns.  However, there are situations where separating off frequently 
accessed columns from (often NULLable) infrequently accessed columns 
into two or more tables can provide very good performance improvement, 
as the infrequently accessed columns have much less likelihood of 
taking up space in memory, especially in memory-starved applications.
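
As a purely illustrative sketch (hypothetical schema), the split might 
look like this:

CREATE TABLE users (
  user_id INT UNSIGNED NOT NULL AUTO_INCREMENT
, display_name VARCHAR(50) NOT NULL
, PRIMARY KEY (user_id)
);

CREATE TABLE user_profiles (
  user_id INT UNSIGNED NOT NULL
, bio TEXT NULL
, homepage VARCHAR(255) NULL
, PRIMARY KEY (user_id)
);

Hot queries touch only the narrow users table; the NULL-heavy, rarely 
read columns live in user_profiles and are joined in only when needed.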


Cheers,

Jay


On 3/29/07, Reinhart Viane [EMAIL PROTECTED] wrote:

Hello list,

I have a table events in a database that has a field named duration. This
field is a mediumint containing an amount of minutes (eg 65, 87, 10368)
Now I need these to be output in h:mm format (so 65 will be
represented as 1:05).

My complete query is:
select YEAR(events.workdate) as theyear,
(sum(events.duration)/60), clients.name, persons.name
from events, persons, clients
where events.personid = persons.personid
and events.clientid = clients.clientid
group by clients.name, events.personid, theyear;

this does, of course, not give me the wanted result.
How can I convert these numerical entries to hh:mm in my query?

(days do not matter, I just need hours and minutes, thx)

Regards and thanks,

Reinhart Viane
D-studio
Graaf van Egmontstraat 15/3
2800 Mechelen
[EMAIL PROTECTED] +32(0)15 44 89 01



--
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:
http://lists.mysql.com/[EMAIL PROTECTED]









--
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/[EMAIL PROTECTED]



Re: improving performance of server

2007-03-27 Thread Jay Pipes
Could you post the actual code you are using for the INSERT?  Also, what 
storage engine are you using?


Jay

andrew collier wrote:

hello,

i am having some trouble getting mysql to perform decently on my machine. it is 
a 2 GHz dual core AMD 64 machine (although i am presently running a 32 bit 
linux system) with 1 Gb RAM and 1 Gb swap partition. the data i am using is 
loaded into a table described by:

CREATE TABLE IF NOT EXISTS strikes (
id  INT UNSIGNED PRIMARY KEY AUTO_INCREMENT,
epoch   DATETIME,
usecMEDIUMINT UNSIGNED,
fdate   DOUBLE,
lat FLOAT(6,4),
lon FLOAT(7,4),
error   TINYINT UNSIGNED,
nstat   TINYINT UNSIGNED
);

the data itself is pretty big: 70082053 records.

during the loading process mysqld pretty much hogs the CPU but uses only around 
5% of the RAM. it takes 13m50s to load the data. there is a lot of disk 
activity, but this is just reading the data and there is virtually no swap 
space in use.

adding the following index:

INDEX coords (lat,lon)

takes a really long time. once again the hard disk is working hard, but there 
is no swapping, so obviously this is just due to database reads. CPU usage is 
about the same. in the end after 60 hours i gave up: had to reboot to windows 
to do some other stuff. but it was just taking unreasonably long anyway.

i am pretty sure that a lot more of this processing could be done without that 
much disk activity and i am guessing that is what is slowing the whole process 
down. the configuration i have in my.cnf is:

[client]
port= 3306
socket  = /var/run/mysql/mysql.sock

[mysqld]
port= 3306
socket  = /var/run/mysql/mysql.sock
skip-locking
key_buffer = 32M
max_allowed_packet = 1M
table_cache = 4
sort_buffer_size = 128K
read_buffer_size = 256K
read_rnd_buffer_size = 256K
net_buffer_length = 2K
thread_stack = 64K

server-id   = 1

[mysqldump]
quick
max_allowed_packet = 16M

[mysql]
no-auto-rehash

[isamchk]
key_buffer = 8M
sort_buffer_size = 8M

[myisamchk]
key_buffer = 8M
sort_buffer_size = 8M

[mysqlhotcopy]
interactive-timeout

please could someone give me an idea of how i might go about making this whole 
thing a little more efficient? thanks!

best regards,
andrew collier.

--





--
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/[EMAIL PROTECTED]



Re: Increasing the Query Cache Size has performance ?

2007-03-20 Thread Jay Pipes
Not sure what is going on, but the version of MySQL you are using is 
ancient.  The current version of MySQL is 5.0.37.  Even MySQL 4.1 (which 
has been end-of-lifed) has a latest version of 4.1.22.  If it is a bug 
you are seeing, it likely has been fixed in a later version.


Cheers,

Jay

Kishore Jalleda wrote:

Hello Everybody,
 I increased the query_cache_size on one of our
main servers from 100 MB to 250 MB, since I was seeing a very high rate
of Qcache low-mem prunes. The server was fine for 15 minutes and the
low-mem prunes went down to almost zero, but then we started getting too
many connections errors and the queries were taking too long to execute;
only after I rolled back the change did the server start behaving
normally. This could not be a coincidence, as the server has been running
fine for months even under heavy traffic conditions.
 Has anybody ever experienced such a thing or
know what could be the cause?

Server info
4.1.11-Debian_4sarge3-log
32GB RAM
Max_connections : 400


Thanks
Kishore Jalleda




--
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/[EMAIL PROTECTED]



Re: function based index

2007-03-13 Thread Jay Pipes

Brown, Charles wrote:

Hello Dan. Try it and see if you can:

mysql> create index indx101 on tab101(min(c1),c2);

Answer: It's not supported.


-Original Message-
From: Dan Nelson [mailto:[EMAIL PROTECTED] 
Sent: Tuesday, March 13, 2007 9:10 AM

To: Ananda Kumar
Cc: mysql@lists.mysql.com
Subject: Re: function based index

In the last episode (Mar 13), Ananda Kumar said:

Is it possible to create function based index in mysql as available
in oracle. Please point me to any documentation on this.


Sorry; mysql doesn't have function-based indexes.  The closest you can
get in mysql is if you add another column and an UPDATE trigger that
populates it with the results of your function, then index that column. 


Hi!  We've recently opened up our public worklog for commenting... which 
means you should definitely add comments to the worklog task which 
covers function indexes:


http://forge.mysql.com/worklog/task.php?id=1075
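
As an aside, a minimal sketch of the workaround Dan describes (the table 
and column names here are hypothetical):

ALTER TABLE t ADD COLUMN c1_lower VARCHAR(100);
CREATE INDEX ix_c1_lower ON t (c1_lower);

CREATE TRIGGER t_bi BEFORE INSERT ON t
FOR EACH ROW SET NEW.c1_lower = LOWER(NEW.c1);

CREATE TRIGGER t_bu BEFORE UPDATE ON t
FOR EACH ROW SET NEW.c1_lower = LOWER(NEW.c1);

Queries then filter on the indexed c1_lower column instead of LOWER(c1).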

Cheers!

jay

--
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/[EMAIL PROTECTED]



Re: binary into blob

2007-03-07 Thread Jay Pipes

Ed wrote:

Hi All,
I'm trying to figure out how to put a pdf file into a blob field.

I guess a pdf file is a binary file and it will contain characters that will 
mess things up, so my question is:


can it be done?  Or better, how can it be done? ;)

Any pointers to documentation are a bonus!


Is there a specific reason you want to store this in a database?  Why 
not use the local (or networked) file system and simply store the 
metadata about the PDF in the database?


Cheers,

Jay

--
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/[EMAIL PROTECTED]



Re: help me optimize this sql

2007-03-06 Thread Jay Pipes
SELECT *
FROM table_one t1
INNER JOIN table_two t2
ON t1.column_one = t2.column_one
LEFT JOIN table_three t3
ON t3.column_two = t1.column_three
AND t3.column_four = t1.column_five
WHERE column_six LIKE '%dsc%'
AND column_seven LIKE '%aaa%';

There is no need for a derived table.

Also, using LIKE '%xxx%' prohibits indexes on column_six and
column_seven from being used.

Also, typically, when doing a LEFT JOIN to a table, that table is used
on the *right* side of the ON expression.  In your SQL, it is on the
left side, which doesn't make much sense.  I think you mean for it to be
on the right...
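
(If the search terms could be anchored at the start of the string, for
example:

SELECT * FROM table_one WHERE column_six LIKE 'dsc%';

then an index on column_six could be used for a range scan.)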

Cheers,

Jay



wangxu wrote:
 sql:
   select *
   from table_one inner join table_two   on table_two.column_one = 
 table_one.column_one 
  left join (SELECT * from table_three) table_four
 on table_four.column_two = table_one.column_three  and 
 table_four.column_four= table_one.column_five 
   where column_six like '%dsc%' and column_seven like '%aaa%'
 
   explain:
 
   *** 1. row ***
  id: 1
 select_type: PRIMARY
   table: table_one
type: ALL
   possible_keys: 
 key: NULL
 key_len: NULL
 ref: NULL
rows: 481
   Extra: Using where
   *** 2. row ***
  id: 1
 select_type: PRIMARY
   table: table_two
type: ref
   possible_keys: idx_column_one
 key: idx_column_one
 key_len: 153
 ref: table_one.column_one
rows: 1
   Extra: Using where
   *** 3. row ***
  id: 1
 select_type: PRIMARY
   table: derived2
type: ALL
   possible_keys: NULL
 key: NULL
 key_len: NULL
 ref: NULL
rows: 2297
   Extra:
   *** 4. row ***
  id: 2
 select_type: DERIVED
   table: table_three
type: ALL
   possible_keys: NULL
 key: NULL
 key_len: NULL
 ref: NULL
rows: 2250
   Extra:
 
 
 
   Can I optimize this sql ?
   thanks!
 



-- 
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/[EMAIL PROTECTED]



Re: Elimination Query

2007-03-06 Thread Jay Pipes

DELETE bm_KW FROM bm_KW
INNER JOIN bmjn_KW kw1
ON bm_KW.KeywordID = kw1.KeywordID
AND kw1.ItemID = '1016'
LEFT JOIN bmjn_KW kw2
ON bm_KW.KeywordID = kw2.KeywordID
AND kw2.ItemID != '1016'
WHERE kw2.KeywordID IS NULL;

Miles Thompson wrote:
This query works but is there any way of making it more elegant or 
speeding it up?


DELETE from bm_KW USING bm_KW, bmjn_KW
WHERE
   bm_KW.KeywordID IN
(SELECT bmjn_KW.KeywordID FROM bmjn_KW
 WHERE bmjn_KW.ItemID = '1016' ) AND

   bm_KW.KeywordID NOT IN
(SELECT bmjn_KW.KeywordID FROM bmjn_KW
 WHERE bmjn_KW.ItemID != '1016');

Its purpose is to delete only the keywords which are unique to the item 
being deleted, 1016 in this case.


The bm_KW table stores the keywords and consists of two columns: 
KeywordID and Keyword. KeywordID is an auto-incrementing primary key.


The bmjn_KW table stores only pointers to keywords and items and 
consists of two columns: ItemID - a foreign key pointing to the id of a 
given item - and KeywordID a foreign key pointing to the KeywordID in 
the bm_KW table.


When an item is added the bm_KW table is searched to determine if any of 
the keywords used to describe it have been used before. If so a record 
is added to bmjn_KW referencing the item and the  KeywordID in bm_KW.


If the keyword has not been used it is added to bm_KW and then 
referenced as described above.


Any thoughts or opinions?

Regards - Miles Thompson







--
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/[EMAIL PROTECTED]



Re: Elimination Query

2007-03-06 Thread Jay Pipes

Miles Thompson wrote:

That looks a lot more professional - thanks.


No problem!  test it out first, though! ;)


Where can I get information on writing better SQL?


There's lots of good books on it.  Joe Celko's "SQL for Smarties" books 
are great for that stuff.  (Also, shameless plug: my own book "Pro 
MySQL" (Apress, 2005) has a couple chapters on effective SQL coding.. :)


Cheers,

Jay


Regards - Miles Thompson


From: Jay Pipes [EMAIL PROTECTED]

DELETE bm_KW FROM bm_KW
INNER JOIN bmjn_KW kw1
ON bm_KW.KeywordID = kw1.KeywordID
AND kw1.ItemID = '1016'
LEFT JOIN bmjn_KW kw2
ON bm_KW.KeywordID = kw2.KeywordID
AND kw2.ItemID != '1016'
WHERE kw2.KeywordID IS NULL;

Miles Thompson wrote:
This query works but is there any way of making it more elegant or 
speeding it up?


DELETE from bm_KW USING bm_KW, bmjn_KW
WHERE
   bm_KW.KeywordID IN
(SELECT bmjn_KW.KeywordID FROM bmjn_KW
 WHERE bmjn_KW.ItemID = '1016' ) AND

   bm_KW.KeywordID NOT IN
(SELECT bmjn_KW.KeywordID FROM bmjn_KW
 WHERE bmjn_KW.ItemID != '1016');


snip












--
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/[EMAIL PROTECTED]



Re: help me optimize this ALL

2007-03-05 Thread Jay Pipes
No, because you have no WHERE condition.

wangxu wrote:
 sql:
 
 SELECT *   
 FROM table_one INNER JOIN table_one table_one1 ON table_one1.column_one = 
 table_one.column_two 
INNER JOIN table_one table_one2 ON table_one2.column_one = 
 table_one.column_three 
 
 
 explain:
 
 *** 1. row ***
id: 1
   select_type: SIMPLE
 table: table_one
  type: ALL
 possible_keys: idx_column_two,idx_column_three   
   key: NULL
   key_len: NULL
   ref: NULL
  rows: 2037
 Extra:
 *** 2. row ***
id: 1
   select_type: SIMPLE
 table: table_one1
  type: ref
 possible_keys: idx_column_one
   key: idx_column_one
   key_len: 5
   ref: table_one.column_two
  rows: 1
 Extra: Using where
 *** 3. row ***
id: 1
   select_type: SIMPLE
 table: table_one2
  type: ref
 possible_keys: idx_column_one
   key: idx_column_one
   key_len: 5
   ref: table_one.column_three
  rows: 1
 Extra: Using where
 
 Can I optimize this ALL on table one?
 thanks!
 
  


-- 
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/[EMAIL PROTECTED]



Re: Heavily-loaded MySQL: Resource temporarily unavavailable?

2007-03-02 Thread Jay Pipes

Chris Boot wrote:

Rolando Edwards wrote:

Also consider

wait_timeout in my.ini


This is set to 28800. I don't consider this a problem since I've hardly 
ever got anywhere near my connection limit.



- Original Message -
From: Rolando Edwards [EMAIL PROTECTED]
To: Chris Boot [EMAIL PROTECTED]
Cc: mysql@lists.mysql.com
Sent: Friday, March 2, 2007 2:19:38 PM (GMT-0500) Auto-Detected
Subject: Re: Heavily-loaded MySQL: Resource temporarily unavavailable?

I hope I am not asking dumb questions
but please consider the following:

1) Do you mysql_close every connection when rendering a page ?
2) Have you ever tried mysql_pconnect (which does not require doing a 
mysql_close to any persistent connection) ?


I have, and these suck up my connections like you wouldn't imagine. 
There are lots of different sites hosted on the server, and using 
persistent connections would require thousands of connections.


Yeah, don't use pconnect.

3) Do you have interactive_timeout set (in seconds) in my.ini (default 
is 28800 [8 hours]) ?


It's set to the default. Once again I don't consider this an issue since 
connections don't stay open long enough for this to even be useful.


4) Do you have 'max_user_connections' set to zero (default anyway) in 
my.ini ?


I don't, this is set to 0.

I've done a few things since my first post that seem to have helped, 
though I'm not sure quite yet. I've upped the back_log from 50 to 128, 
upped thread_cache from 20 to 32, thread_concurrency from 4 to 32 (does 
this do anything on Linux?).


The thread_cache (I believe you mean *thread_cache_size*?)  is the 
number of THD* cleared instances that the server keeps internally so 
that the memory allocation of creating a new THD (connection class 
instance) is mitigated.  But, unless you have very high concurrent 
connections, this likely will not make much of a difference.  Look at 
the difference between Connections and Threads_created status variables 
to see what percentage of your connections are being created from 
scratch (as opposed to partial recycling via the thread_cache)
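
A quick way to eyeball that (illustrative only):

SHOW STATUS LIKE 'Connections';
SHOW STATUS LIKE 'Threads_created';

If Threads_created is a tiny fraction of Connections, the thread cache 
is already doing its job.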


thread_concurrency won't do anything on Linux.  It's for Solaris.

However, innodb_thread_concurrency is different.  It's the maximum (or 
infinite if set to 0 or more than 20) number of operating system threads 
that InnoDB can use in its queueing system.  I wouldn't recommend 
changing this.


Those in themselves didn't seem to make much difference, however I have 
now made some of the busier sites connect to 127.0.0.1 (using TCP 
instead of UNIX sockets) which has either slowed connections enough to 
make the problem go away, or is making better use of back_log which I'm 
not sure is used for UNIX sockets. Any ideas?


You may want to try reverting that and simply turning off networking 
entirely, choosing to use unix sockets for everything:


--skip-networking

This should provide a good connection time reduction.

See here for more information on that option:

http://dev.mysql.com/doc/refman/5.0/en/dns.html

Other variables you may want to look at is ensuring that your 
table_cache (or table_open_cache if 5.1+) has enough room to deal with 
600 connections * the number of average tables in a typical SQL 
expression executed against the server.  If this variable value is 
really low, you could be experiencing file descriptor swap/thrashing as 
so many threads are opening and then closing file descriptors rapidly.
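
A rough way to check (the number below is illustrative, not a 
recommendation):

SHOW STATUS LIKE 'Opened_tables';

If Opened_tables keeps climbing under a steady workload, raise the 
cache in my.cnf, e.g.:

[mysqld]
table_cache = 2048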


Cheers,

Jay

Thanks again,
Chris



- Original Message -
From: Chris Boot [EMAIL PROTECTED]
To: mysql@lists.mysql.com
Sent: Friday, March 2, 2007 1:34:40 PM (GMT-0500) Auto-Detected
Subject: Heavily-loaded MySQL: Resource temporarily unavavailable?

All,

I'm running a loaded MySQL server and every so often MySQL seems to
refuse connections in batches, which manifests itself as the following
errors in PHP:


mysql_connect() [function.mysql-connect]:
Can't connect to local MySQL server through socket
'/var/run/mysqld/mysqld.sock' (11) in [path to file] on line [x]

I've got a carefully tuned my.ini and Apache configuration, which can
theoretically process 400 connections at a time and this works most of
the time. MySQL is set to accept 600 simultaneous connections though
this is never reached (according to phpMyAdmin's stats anyway). I've
upped the open files limit on MySQL and Apache (PHP runs as a module).

At this stage I'm completely out of ideas as to what I can do to fix 
my problem.

Any ideas? What extra information can I provide that could help?

Many thanks,
Chris







--
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/[EMAIL PROTECTED]



Re: Heavily-loaded MySQL: Resource temporarily unavavailable?

2007-03-02 Thread Jay Pipes

Chris Boot wrote:

Jay Pipes wrote:

Chris Boot wrote:

Rolando Edwards wrote:

Also consider

wait_timeout in my.ini


This is set to 28800. I don't consider this a problem since I've 
hardly ever got anywhere near my connection limit.



- Original Message -
From: Rolando Edwards [EMAIL PROTECTED]
To: Chris Boot [EMAIL PROTECTED]
Cc: mysql@lists.mysql.com
Sent: Friday, March 2, 2007 2:19:38 PM (GMT-0500) Auto-Detected
Subject: Re: Heavily-loaded MySQL: Resource temporarily unavavailable?

I hope I am not asking dumb questions
but please consider the following:

1) Do you mysql_close every connection when rendering a page ?
2) Have you ever tried mysql_pconnect (which does not require doing 
a mysql_close to any persistent connection) ?


I have, and these suck up my connections like you wouldn't imagine. 
There are lots of different sites hosted on the server, and using 
persistent connections would require thousands of connections.


Yeah, don't use pconnect.

3) Do you have interactive_timeout set (in seconds) in my.ini 
(default is 28800 [8 hours]) ?


It's set to the default. Once again I don't consider this an issue 
since connections don't stay open long enough for this to even be 
useful.


4) Do you have 'max_user_connections' set to zero (default anyway) 
in my.ini ?


I don't, this is set to 0.

I've done a few things since my first post that seem to have helped, 
though I'm not sure quite yet. I've upped the back_log from 50 to 
128, upped thread_cache from 20 to 32, thread_concurrency from 4 to 
32 (does this do anything on Linux?).


The thread_cache (I believe you mean *thread_cache_size*?)  is the 
number of THD* cleared instances that the server keeps internally so 
that the memory allocation of creating a new THD (connection class 
instance) is mitigated.  But, unless you have very high concurrent 
connections, this likely will not make much of a difference.  Look at 
the difference between Connections and Threads_created status 
variables to see what percentage of your connections are being created 
from scratch (as opposed to partial recycling via the thread_cache)


phpMyAdmin claims my thread cache hitrate is 99.82%, so I doubt that 
change would have made much difference indeed.


Correct.


thread_concurrency won't do anything on Linux.  It's for Solaris.


I thought as much. Thanks.

However, innodb_thread_concurrency is different.  It's the maximum (or 
infinite if set to 0 or more than 20) number of operating system 
threads that InnoDB can use in its queueing system.  I wouldn't 
recommend changing this.


I haven't, and although I use InnoDB most of the heavily-loaded tables 
are all MyISAM. But none of the above should really affect my connection 
problems which is, at the moment, the only place I'm seeing performance 
problems.


Indeed.

Those in themselves didn't seem to make much difference, however I 
have now made some of the busier sites connect to 127.0.0.1 (using 
TCP instead of UNIX sockets) which has either slowed connections 
enough to make the problem go away, or is making better use of 
back_log which I'm not sure is used for UNIX sockets. Any ideas?


You may want to try reverting that and simply turning off networking 
entirely, choosing to use unix sockets for everything:


--skip-networking

This should provide a good connection time reduction.

See here for more information on that option:

http://dev.mysql.com/doc/refman/5.0/en/dns.html


Hmm, but that doesn't explain why I've stopped seeing the refused 
connections though, or does it? DNS shouldn't really be much of an 
impact due to the host cache no? Is there some kernel parameter or other 
that I can use to tune how UNIX sockets work or something?


I would guess that turning off networking altogether would give better 
performance than the host cache and DNS together.  But, then again, as 
you say, if it ain't broke, why fix it ;)


Other variables you may want to look at is ensuring that your 
table_cache (or table_open_cache if 5.1+) has enough room to deal with 
600 connections * the number of average tables in a typical SQL 
expression executed against the server.  If this variable value is 
really low, you could be experiencing file descriptor swap/thrashing 
as so many threads are opening and then closing file descriptors rapidly.


table_cache is set to 512, which is probably a little low for my 
environment but it seems ok for now (opened_tables is about 3K after 4 
hours MySQL uptime).


If you've adjusted your ulimit for file descriptors, then, sure, put 
this up so that not as much thrashing occurs.



Thanks for all the help,


No prob!

Jay


Cheers,

Jay

Thanks again,
Chris



- Original Message -
From: Chris Boot [EMAIL PROTECTED]
To: mysql@lists.mysql.com
Sent: Friday, March 2, 2007 1:34:40 PM (GMT-0500) Auto-Detected
Subject: Heavily-loaded MySQL: Resource temporarily unavavailable?

All,

I'm running a loaded MySQL server and every so often MySQL seems to 
refuse

Re: [LICENSING] why so hazy? Comparing to Samba.

2007-02-22 Thread Jay Pipes

software advocate wrote:

Wow, well that is just neat. It's too bad I left PHP for TurboGears.

The second problem is the buzz of frameworks (TurboGears, JBoss, etc.) which
use/want to use MySQL as a backend. This is really the only issue I have
with MySQL. They do support other databases, and it's not like anyone is
twisting their arm to use MySQL, but the option is nice since people already
know MySQL (if they need to send a raw statement).


I have no idea what this means. :(

First, you were complaining about PHP and Jim noted that we have a PHP 
native driver in the works.  Then, you move on to Python and Java... 
sounds like you're just flame-baiting.


Also, your lack of knowledge about the GPL is apparent.  You aren't 
arguing anything to do with licensing.  You're simply complaining that 
something isn't free as in beer when you want it to be.


Hey, if you want to go ahead and waste valuable development time by 
writing your own mysql client library for your (TurboGears???) software 
instead of building in the small licensing cost that goes with embedding 
or linking with the MySQL GPL libs, go right ahead.  Nobody's stopping 
you, and nobody's stopping your potential customers from buying your 
competitor's software which is moving right along while you re-invent 
the wrong wheels.


Cheers.

Jay


On 2/22/07, Jim Winstead [EMAIL PROTECTED] wrote:


On Thu, Feb 22, 2007 at 01:39:49PM -0900, software advocate wrote:
 This is exactly why someone needs to develop a non-GPL MySQL drop-in
client
 for PHP. This would get around license costs, despite what MySQL AB has
to
 say, this would be completely legal. Ask your local rep from the FSF.
One
 could always reverse engineer the protocol like the Samba team. Also 
the

 fact is, an idea can NOT be copyrighted. You can read the source, take
 notes, or even make documentation to create your own client.

In fact, MySQL AB has developed exactly such a thing. (Look for info on
'mysqlnd'.)

This licensing comment in the internals documentation is old, and is
supposed to be removed. Unfortunately, updating the internals
documentation is not something that gets a lot of priority.

Jim Winstead
MySQL Inc.






--
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/[EMAIL PROTECTED]



Re: [LICENSING] why so hazy? Comparing to Samba.

2007-02-22 Thread Jay Pipes

software advocate wrote:

On 2/22/07, Jay Pipes [EMAIL PROTECTED] wrote:


I have no idea what this means. :(

First, you were complaining about PHP and Jim noted that we have a PHP
native driver in the works.  Then, you move on to Python and Java...
sounds like you're just flame-baiting.



I don't mean to sound like I'm trolling. PHP is the most common point of
reference, something most people can relate to. Just to be clear, my reasons
for not using PHP were not MySQL related.


Fair enough, and my apologies for sounding harsh (it's been a long week 
but there's really no excuse for my being rude.)



Also, your lack of knowledge about the GPL is apparent.  You aren't
arguing anything to do with licensing.  You're simply complaining that
something isn't free as in beer when you want it to be.


I don't lack any knowledge. What I'm saying is people are looking at what
the MySQL website has to say and often get confused. I've seen plenty of
people who just have the simplest of questions which are obfuscated by the
MySQL website.


OK, there is a mismatch here between what you are saying and what you 
are describing as a problem for your own development.  To be more 
specific, there really isn't a licensing issue *unless* you are linking 
or embedding MySQL in a *non open source* application which you are 
*distributing* as a package.  I can think of no real-world scenario 
under which an application written with the TurboGears framework would 
come into a licensing issue, as the framework is designed AFAIK, to work 
with the mysql connector library present on the server running the 
application (MySQLdb library or the python-language driver, I can't 
remember its name...)


Where confusion typically comes up is not confusion at all; instead it 
usually follows one of the following conditions:


The developer has embedded libmysqld or libmysql (the server or the 
client) within the application and does not want to open source the 
application.  This comes into play in packaged applications typically, 
not PHP, Python, Perl stuff.  And, in this case, the developer has the 
choice of a) requiring the installing user to have installed MySQL 
separately, b) open sourcing their own application, c) writing their own 
native MySQL client, or d) building in licensing costs into the 
application after having gotten a quote from MySQL for such use.


The GPL is about *user* freedom, and reciprocity (i.e. if I open source, 
you open source).  Many developers love the idea of *using* GPL software 
but don't like the idea of open sourcing their *own* software, which is 
unfortunate for the developer, but precisely what the GPL aims to 
prevent, in order to best protect the rights of the end-user.


Non-copyleft licenses such as BSD/MIT aim to protect the *developer's* 
rights and free up any restrictions on the developer, allowing the 
developer to use and incorporate such software in their own projects 
with no reciprocity of opening up their own source.


There are merits to both approaches, but that is outside the scope of 
discussion here.


I do see your point about the sockets layer/client protocol and that 
language is being removed/has been removed from our website because it 
has been, as you point out, a source of confusion; I will agree with you 
there.


But, on a general note, I *don't* think that MySQL licensing is 
confusing, and any confusion thereof stems from confusion about what the 
GPL itself states.  I hear tons of times how developers claim that they 
fall under the mere aggregation clause when in fact they do not -- it's 
just an attempt to bypass the GPL restrictions.


Like yourself, I'm no lawyer, and Clint, our general counsel is welcome 
to chime in here... but my honest *personal* opinion is that most times 
I hear complaints about this, it tends to be just that, a complaint that 
there is no way around the GPL and that people wish that the license was 
BSD so that there would be no reason to acquire a license *if needed*.


Am I pushing MySQL license costs?  NO WAY (hey, I'm in the Community 
Team!)  What I am saying is that I think there are pretty clear lines 
which say on one side no license needed and on the other side license 
needed.  The client protocol clause was indeed confusing, which is why 
Jim and I noted it's being/has been removed from the site.



Hey, if you want to go ahead and waste valuable development time by

writing your own mysql client library for your (TurboGears???) software
instead of building in the small licensing cost that goes with embedding
or linking with the MySQL GPL libs, go right ahead.  Nobody's stopping
you, and nobody's stopping your potential customers from buying your
competitor's software which is moving right along while you re-invent
the wrong wheels.



The plus to writing a language-specific extension is not only getting away
from the license fee, but having a language-specific extension. By having the
extension written in a language, say Python, you have more

Re: [LICENSING] why so hazy? Comparing to Samba.

2007-02-22 Thread Jay Pipes

software advocate wrote:

On 2/22/07, Jay Pipes [EMAIL PROTECTED] wrote:


I do see your point about the sockets layer/client protocol and that
language is being removed/has been removed from our website because it
has been, as you point out, a source of confusion; I will agree with you
there.


There is one more piece of very confusing information to more people. I 
know

what it means, but directly it is confusing.

http://dev.mysql.com/doc/internals/en/licensing-notice.html

Therefore if you use this description to write a program, you must release
your program as GPL.

Under any type of law, how does this even hold water? The GPL is about
distribution of software or a work, not an NDA. This type of statement
should never have been made.


Agreed, which is why Jim and I have stated it is removed from the rest 
of the website, but the internals docs haven't unfortunately been 
updated like the rest of the site.  Note that we will work to remove it 
as soon as possible.


cheers,

Jay

--
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/[EMAIL PROTECTED]



Re: Alter table - adding constraints?

2007-02-21 Thread Jay Pipes

Chris White wrote:

Jay Paulson wrote:

2) both tables have data in them.

  
This is most likely your issue then, depending on the table size, go 
through and make sure that anything in the referenced column matches the 
referencing column.  You should also be able to use SHOW INNODB STATUS 
to see what's possibly failing.


No, it's because you already have a constraint in your schema called 
fk_regions.  Just change the constraint name to one that is unique for 
your schema...  This is the reason I usually name my constraints like:


fk_fromtable_totable

So this doesn't become an issue.
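
For example (hypothetical tables and columns):

ALTER TABLE cities
  ADD CONSTRAINT fk_cities_regions
  FOREIGN KEY (region_id) REFERENCES regions (region_id);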

cheers,

Jay

--
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/[EMAIL PROTECTED]



Re: Using Triggers to Maintain a Table to prevent complex join statements...

2007-02-21 Thread Jay Pipes

Cory Robin wrote:

We have a statement that joins 8 different tables to pull
multiple rows from each table.

I have heard of people using triggers to create, update and
delete records in a table based on statements in other
tables.

The obvious result of this would be to SPEED up results
right?  :)

I'd love to find someone with experience with this that I
can pick info from.  lol


Post the SQL you are using and I'll give you a hand.
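
In the meantime, here is the general shape of the idea, as a sketch only 
(the orders/order_totals tables are hypothetical):

CREATE TRIGGER orders_ai AFTER INSERT ON orders
FOR EACH ROW
  UPDATE order_totals
  SET total_amount = total_amount + NEW.amount
  WHERE customer_id = NEW.customer_id;

Readers then SELECT from the small, pre-aggregated order_totals table 
instead of joining the eight source tables at query time; matching 
AFTER UPDATE and AFTER DELETE triggers would keep it consistent.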

Cheers!

Jay

--
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/[EMAIL PROTECTED]



Re: Real BITs which use 1 bit in 5.1?

2007-02-18 Thread Jay Pipes

Kevin Burton wrote:

A little birdie:

http://forge.mysql.com/wiki/Top10SQLPerformanceTips

notes..

In 5.1 BOOL/BIT NOT NULL type is 1 bit, in previous versions it's 1 byte.

Is this true?


Hmm, I had wondered about that when someone yelled it out at MySQL Camp...

No, it's not true.  The BIT data type is not the same as BOOL.  BIT is 
used for easier bitmask/bitfield type columns, and BOOL is used for 
boolean values, but it is currently aliased to TINYINT(1).  So, this is 
patently false.


True boolean data type support is, however, planned for a future 
release...not sure when though.


I'll remove that from the wiki page.

Cheers,

Jay


I didn't see a note in the manual..

I assume it would be here

http://dev.mysql.com/doc/refman/5.1/en/storage-requirements.html




--
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/[EMAIL PROTECTED]



Re: Merge tables and ON DUPLICATE KEY UPDATE

2007-02-12 Thread Jay Pipes

Kevin Burton wrote:

I want to use a merge table so that I can direct all new INSERTs to a
new merge table and migrate old data off the system by having a
continually sliding window of underlying MyISAM tables.

The problem is that if I do INSERT ... ON DUPLICATE KEY UPDATE and
that value isn't in the leading table where all INSERTs go, a *new* row
will be created.

Is there any way around this problem?


What about using partitioning in MySQl 5.1+?  Would this work?

http://dev.mysql.com/doc/refman/5.1/en/partitioning.html
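
A sketch of what the sliding window could look like with 5.1 partitions 
(the schema here is hypothetical):

CREATE TABLE events (
  id BIGINT UNSIGNED NOT NULL
, created DATE NOT NULL
, payload VARCHAR(255)
, PRIMARY KEY (id, created)
) PARTITION BY RANGE (TO_DAYS(created)) (
  PARTITION p200701 VALUES LESS THAN (TO_DAYS('2007-02-01'))
, PARTITION p200702 VALUES LESS THAN (TO_DAYS('2007-03-01'))
, PARTITION pmax VALUES LESS THAN MAXVALUE
);

ALTER TABLE events DROP PARTITION p200701;

Since it is a single logical table, INSERT ... ON DUPLICATE KEY UPDATE 
should see all rows, not just the newest underlying table.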

Cheers,

Jay

--
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/[EMAIL PROTECTED]



Re: to join or not to join, that is the query

2007-02-11 Thread Jay Pipes

Miguel Vaz wrote:

I have three tables:
TABLE Person
- id_person, name, id_levelA, id_sizeA, id_levelB, id_sizeB
TABLE Levels
- id, desc
TABLE Sizes
- id, desc


Hi!  You can always join a table twice :)

SELECT
  p.id_person
, lA.desc as levelA
, sA.desc as sizeA
, lB.desc as levelB
, sB.desc as sizeB
FROM Person p
INNER JOIN Levels lA
ON p.id_levelA = lA.id
INNER JOIN Levels lB
ON p.id_levelB = lB.id
INNER JOIN Sizes sA
ON p.id_sizeA = sA.id
INNER JOIN Sizes sB
ON p.id_sizeB = sB.id;

Of course, if the id_levelA field is NULLable, you would use a LEFT JOIN 
instead of an INNER JOIN.


Here is an example output:

mysql> CREATE TABLE Person (
    ->   id_person INT UNSIGNED NOT NULL
    -> , name VARCHAR(20) NOT NULL
    -> , id_levelA TINYINT UNSIGNED NOT NULL
    -> , id_sizeA TINYINT UNSIGNED NOT NULL
    -> , id_levelB TINYINT UNSIGNED NOT NULL
    -> , id_sizeB TINYINT UNSIGNED NOT NULL
    -> , PRIMARY KEY (id_person)
    -> );
Query OK, 0 rows affected (0.07 sec)

mysql> INSERT INTO Person VALUES (1, 'Miguel', 1, 1, 2, 2);
Query OK, 1 row affected (0.04 sec)

mysql> CREATE TABLE Levels ( id TINYINT UNSIGNED NOT NULL , `desc`
    -> VARCHAR(20) NOT NULL , PRIMARY KEY (id) );
Query OK, 0 rows affected (0.06 sec)

mysql> CREATE TABLE Sizes ( id TINYINT UNSIGNED NOT NULL , `desc`
    -> VARCHAR(20) NOT NULL , PRIMARY KEY (id) );
Query OK, 0 rows affected (0.00 sec)

mysql> INSERT INTO Levels VALUES (1, 'Level One'),(2, 'Level Two');
Query OK, 2 rows affected (0.00 sec)
Records: 2  Duplicates: 0  Warnings: 0

mysql> INSERT INTO Sizes VALUES (1, 'Size One'),(2, 'Size Two');
Query OK, 2 rows affected (0.00 sec)
Records: 2  Duplicates: 0  Warnings: 0

mysql> SELECT
    ->   p.id_person
    -> , lA.desc as levelA
    -> , sA.desc as sizeA
    -> , lB.desc as levelB
    -> , sB.desc as sizeB
    -> FROM Person p
    -> INNER JOIN Levels lA
    -> ON p.id_levelA = lA.id
    -> INNER JOIN Levels lB
    -> ON p.id_levelB = lB.id
    -> INNER JOIN Sizes sA
    -> ON p.id_sizeA = sA.id
    -> INNER JOIN Sizes sB
    -> ON p.id_sizeB = sB.id;
+-----------+-----------+----------+-----------+----------+
| id_person | levelA    | sizeA    | levelB    | sizeB    |
+-----------+-----------+----------+-----------+----------+
|         1 | Level One | Size One | Level Two | Size Two |
+-----------+-----------+----------+-----------+----------+
1 row in set (0.00 sec)

A couple notes for you:

1) desc is a keyword, so I would not recommend using it as a field 
name.  Use something like description instead to make your life easier


2) This kind of table structure is typically an indication of a poorly 
designed schema because it is not normalized.  You should instead have 
tables like PersonLevels and PersonSizes, which can store any number of a 
person's levels and sizes... read up on normalization to understand this 
concept.


Cheers,

Jay

--
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/[EMAIL PROTECTED]



Re: Select records of last week

2007-02-11 Thread Jay Pipes

Or, another, perhaps easier way:

SELECT * FROM foobar
WHERE yourdatefield > CURDATE() - INTERVAL 7 DAY;
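
And for an INT column holding a unix timestamp, presumably something
like:

SELECT * FROM foobar
WHERE yourintfield > UNIX_TIMESTAMP(CURDATE() - INTERVAL 7 DAY);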

Cheers,

Jay

Lars Schwarz wrote:

depending on your mysql version:

for the date/time field version:

select * from foobar where yourdatefield > DATE_ADD( CURDATE( ) ,
INTERVAL -7 DAY )

i don't use unix timestamps, but you may check the FROM_UNIXTIME() and
UNIX_TIMESTAMP() functions for converting them before using the
DATE_ADD().

lars

On 2/11/07, barz040 [EMAIL PROTECTED] wrote:

Hi all

I want to extract the records that aren't older than 1 week.

How can I do this in each of these two cases?

1) when my date field is int(10) and I store records as unix time
(1171152000)

2) when my date field is a datetime field and I store records in the
format 2007-02-11 12:50:02

Thanks

--
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:
http://lists.mysql.com/[EMAIL PROTECTED]









--
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/[EMAIL PROTECTED]



Re: mysql v5 math a bit out. How do I round the info to become correct

2006-12-14 Thread Jay Pipes
For exact calculations, you need to use the DECIMAL data type.  See this
section in the manual for the reasons why:

http://dev.mysql.com/doc/refman/5.1/en/precision-math-examples.html
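
In your case that would mean something like this (the precision and 
scale here are illustrative):

ALTER TABLE finance
  MODIFY DebitAmount DECIMAL(12,2)
, MODIFY CreditAmount DECIMAL(12,2);

You can also ROUND() the running variable to two places in the SELECT, 
but exact DECIMAL storage is the real fix.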

Cheers,

Jay

Kerry Frater wrote:
 I am running a small procedure for set jobs that calculates a running total
 for me to display. It works fine in most cases but the math concerns me.
 I am using 5.0.18 on my Windows PC and the latest 5.x on my Linux server
 (both test machines).
 The finance table is an InnoDb table.
 CreditAmount and Debitamount are both fields set to FLOAT.
 
 
 The SQL code called from my program
 
select f.*,@rbal:=@rbal+creditamount+(debitamount * -1) as runbal
from (select @rbal:=0) rb,finance f
where f.jobref='abc1234'
order by f.jobref,f.inputorder
 
 I have one combination that has as data the following (listed in InputOrder)
 DebitAmount   CreditAmount
 314.43
               10314.4
 10000
 
 (at least that is what the select * displays for the table) So I expected to
 see the rolling runbal column to be:
 -314.43
 9999.97
 -0.03
 
 what I actually got was
 -314.43
 10000
 -0.000305176
 
 Now I can understand some to be rounding errors and I would like to know how
 to tell MySQL that I am only interested in two decimal places in the
 evaluated variable, but the math of (-314.43 + 10314.4) = 10000 is more of a
 concern. The last calculated value of runbal could be explained by the use
 of float, as it is trying to do 10000 - 10000 which is 0 of course, and the
 last float would round to that. But of course I shouldn't be starting from
 10000.
 
 Thanks for any advice
 
 Kerry
 
 


-- 
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/[EMAIL PROTECTED]



Re: Where to get Source Distribution of MySQL Server 5.0 Standard for FreeBSD?

2006-12-14 Thread Jay Pipes
Daniel Kasak wrote:
 VeeJay wrote:
Where can one find the Source Distribution of MySQL Server 5.0 Standard
for FreeBSD?
 
 Not on the website, that's for sure.
 Have you tried the usual warez sites, p2p networks, etc?

Really?  Seems pretty straightforward to me.  On the downloads page,
under the FreeBSD heading:

http://dev.mysql.com/downloads/mysql/5.0.html

Jay


-- 
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/[EMAIL PROTECTED]



Re: Query not using indexes?

2006-12-13 Thread Jay Pipes
Make sure that all joined fields are of identical types, otherwise the
query executor must cast each and every join value, which may be
affecting the query time...
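
A quick way to audit that (illustrative only):

SHOW CREATE TABLE users\G
SHOW CREATE TABLE collections\G

If, say, users.id were a VARCHAR while collections.user_id were an INT,
aligning them would look something like this (hypothetical):

ALTER TABLE collections MODIFY user_id VARCHAR(20) NOT NULL;

For string columns, the character set and collation should match too.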

Jay

Chris Boget wrote:
Here is a query I'm trying to run.  It doesn't look like it is overly
complex and, granted, it's joining a few tables, but I wouldn't imagine
it would take 6.5s to execute.
 
 SELECT
  TRIM( users.username ) AS username,
  TRIM( games.game_name ) AS game_name,
  CASE
WHEN 0 != UNIX_TIMESTAMP( MAX( collections.modified )) THEN TRIM( MAX(
 collections.modified ))
ELSE TRIM( MAX( collections.created ))
  END AS modified,
  TRIM( users.hide_inventory ) AS hide_inventory
 FROM
  collections
 INNER JOIN users ON users.id = collections.user_id
 INNER JOIN game_pieces ON game_pieces.id = collections.game_piece_id
 INNER JOIN games_sets ON games_sets.id = game_pieces.games_set_id
 INNER JOIN games ON games.id = games_sets.game_id
 GROUP BY
  users.username,
  games.game_name
 ORDER BY
  users.username;
 
 167 rows in set (6.49 sec)
 
 Table collections has 76,328 rows
 Table users has 291 rows
 Table game_pieces has 5,491 rows
 Table games_sets has 29 rows
 Table games has 3 rows
 
 Table games has a FK for games_sets which has a FK for game_pieces.
 
 All are InnoDB tables on a MySQL 5.x database
 
 Here is the same query EXPLAINed:
 
 *** 1. row ***
   id: 1
  select_type: SIMPLE
table: games
 type: index
 possible_keys: PRIMARY
  key: ix_games_GameName
  key_len: 102
  ref: NULL
 rows: 3
Extra: Using index; Using temporary; Using filesort
 *** 2. row ***
   id: 1
  select_type: SIMPLE
table: games_sets
 type: ref
 possible_keys: PRIMARY,ix_games_sets_games_sets,ix_games_sets_games
  key: ix_games_sets_games
  key_len: 8
  ref: cake_communal_haven.games.id
 rows: 4
Extra: Using index
 *** 3. row ***
   id: 1
  select_type: SIMPLE
table: game_pieces
 type: ref
 possible_keys: PRIMARY,ix_game_pieces_games_set_id,ix_game_pieces_set_piece
  key: ix_game_pieces_games_set_id
  key_len: 9
  ref: cake_communal_haven.games_sets.id
 rows: 127
Extra: Using where; Using index
 *** 4. row ***
   id: 1
  select_type: SIMPLE
table: collections
 type: ref
 possible_keys:
 ix_collections_game_piece_id,ix_collections_user_id_game_piece_id
 ,ix_collections_user_id
  key: ix_collections_game_piece_id
  key_len: 8
  ref: cake_communal_haven.game_pieces.id
 rows: 23
Extra:
 *** 5. row ***
   id: 1
  select_type: SIMPLE
table: users
 type: ref
 possible_keys: PRIMARY
  key: PRIMARY
  key_len: 22
  ref: cake_communal_haven.collections.user_id
 rows: 1
Extra:
 5 rows in set (0.00 sec)
 
 In the EXPLAIN, it doesn't look like any indexes are being used in rows
 4 and 5.  Is that right?  I'm looking at all the columns that are being
 joined and they all do have indexes.
 
 If I take out the tables game_pieces, games_sets and games and remove
 TRIM( games.game_name ) AS game_name from the select, the execution
 time goes down to 1.9 seconds so it seems as if it isn't using an
 appropriate index from either/any of those tables.
 
 Is there anything I can do to speed this query up?  Or is the joining of
 76k+ rows to 5k+ rows (plus the other tables) really going to slow
 things down that significantly?  I can't imagine that it would because
 I'm sure there are a lot of other people using MySQL on much larger
 databases.
 
 thnx,
 Christoph
 
 


-- 
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/[EMAIL PROTECTED]



Re: Trying to create a new table in memory.

2006-11-30 Thread Jay Pipes
Charles Danko wrote:
 Hi,
 
 I am trying to write a new table into main memory, but I am getting an out
 of memory error.
 
 Each entry consists of 2 medium_int and 1 tiny_int variables, and the table
 contains just over 100,000,000 rows.  By my count, this makes just over
 700MB of data.  The machine I am using has 2GB, but I am still getting an
 out of memory error.  What am I doing wrong?

Need some more information from you.

a) Which storage engine are you using?
b) What settings for that storage engine exist in your config file?  For
instance, if InnoDB, then what is the value of innodb_buffer_pool_size?
c) What operating system are you using?
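
For example, if it turns out you're using the MEMORY (HEAP) engine --
just a guess until you confirm -- each MEMORY table is capped by
max_heap_table_size, which defaults to 16MB, so a ~700MB table would
fail long before your 2GB of RAM runs out.  A sketch:

SET max_heap_table_size = 1073741824;  -- raise the cap to 1GB for this session

CREATE TABLE t (
  a MEDIUMINT NOT NULL,
  b MEDIUMINT NOT NULL,
  c TINYINT NOT NULL
) ENGINE=MEMORY;

Note also that MEMORY tables use fixed-length rows plus per-row
overhead, so the in-memory footprint will be larger than the raw 700MB.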


Cheers,

Jay

-- 
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/[EMAIL PROTECTED]



MySQL Camp drawing near

2006-10-24 Thread Jay Pipes
Hello all!

The first MySQL Camp (un)conference is coming up in just two weeks!  It
looks like there will be over 50 MySQL developers and community members
joining ranks at Google Headquarters in Mountain View, California, to
participate in this community-driven event.

Session topics have been proposed on a variety of technical and
community issues, including hacking on the MySQL server and the code
contribution and bug-fixing process, modularizing the MySQL source code,
open source licensing topics, hardcore DBA and developer topics, and
community evangelism.

The session schedule is starting to materialize, and will be filling in
over the next week by the participants.  If you are interested in
attending this free event, please head over to the camp website:
http://mysqlcamp.org and check out what's going on.

Hope to see everyone there!

Cheers,

Jay Pipes
Community Relations Manager, North America, MySQL, Inc.
[EMAIL PROTECTED] +1 614 406 1267


-- 
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/[EMAIL PROTECTED]



Re: delimiter

2006-10-20 Thread Jay Pipes
On Thu, 2006-10-19 at 18:18 -0300, João Cândido de Souza Neto wrote:
 I tried to escape the pipe character but it does not work.
 
 I shall try to explain better what is happening.
 
 I have got a .sql file which creates my whole database (including triggers).
 
 When I run it in phpMyAdmin it fails; then I tried to run just the part of my
 .sql file which creates a trigger at a mysql prompt, and it works fine.

I believe the DELIMITER command is currently only a mysql client command
(i.e. it only works in the command line client).  Perhaps it works in
the Query Browser; I don't use the GUI tools, so I'm not sure.
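
For reference, here's how it looks in the command line client (a
minimal sketch; the trigger and table names are made up):

DELIMITER //
CREATE TRIGGER my_trigger BEFORE INSERT ON my_table
FOR EACH ROW
BEGIN
  SET NEW.created = NOW();
END //
DELIMITER ;

If you're feeding the .sql file through PHP, strip the DELIMITER lines
and send each CREATE TRIGGER statement as its own single query instead.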

 Is there some trouble using the DELIMITER command from a PHP script?
 
 
 
 Dan Buettner [EMAIL PROTECTED] escreveu na mensagem 
 news:[EMAIL PROTECTED]
 João, the pipe character | often needs to be escaped, usually with a
 backslash like so: \|
 
 Can you post the command you are running and the error you get?
 
 Dan
 
 
 On 10/19/06, João Cândido de Souza Neto [EMAIL PROTECTED] wrote:
  Hi everyone.
 
  I am getting a syntax error when I try to execute DELIMITER | in my
  MySQL 5 server.
 
  Could anyone help me about it?
 
  --
  João Cândido de Souza Neto
  Curitiba Online
  [EMAIL PROTECTED]
  (41) 3324-2294 (41) 9985-6894
  http://www.curitibaonline.com.br
 
 
 
  --
  MySQL General Mailing List
  For list archives: http://lists.mysql.com/mysql
  To unsubscribe:http://lists.mysql.com/[EMAIL PROTECTED]
 
  
 
 
 


-- 
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/[EMAIL PROTECTED]



Re: References on Optimizing File Sort

2006-10-17 Thread Jay Pipes
On Tue, 2006-10-17 at 09:26 -0700, Robert DiFalco wrote:
 I have an unavoidable filesort in a very large query. Can someone point
 me to references for optimizing filesort? I'm assuming this is going to
 be changes to my.ini or the hardware. 

Well, one method to *eliminate* Using filesort is to either use a
covering index, or take advantage of InnoDB's clustered organization
(which has an implicit sort of data records on the primary key).  Can
you show us an example of the query in question, using an EXPLAIN?
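
As a quick sketch of the covering-index approach (table and column
names here are made up):

CREATE INDEX ix_cust_name ON customers (last_name, first_name);

EXPLAIN SELECT last_name, first_name
FROM customers
ORDER BY last_name, first_name\G

Since the index covers both the ORDER BY and the SELECT list, the Extra
column should show "Using index" with no "Using filesort".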

Cheers,

Jay


-- 
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/[EMAIL PROTECTED]



Re: Query missing rows in location of zip by distance

2006-09-25 Thread Jay Pipes
On Mon, 2006-09-25 at 06:09 -0700, Steffan A. Cline wrote:
 Basically all 6 should come up in the big query because they are within the
 same zip and that any other locations within the same area. They all have
 the same latitude and longitude but only 1 shows up.

If they all have the same lat/long, then the distance from the centroid
of your supplied zip code will be the same for all 6.  Because you are
grouping on the distance, only 1 record will return.  Remove the GROUP
BY distance.
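
Something like this, in other words (a sketch -- I'm guessing at your
table and column names, with @lat/@lon standing in for the centroid of
the supplied zip code):

SELECT z.zip, z.city,
       3956 * 2 * ASIN(SQRT(
         POW(SIN((@lat - z.lat) * PI() / 180 / 2), 2) +
         COS(@lat * PI() / 180) * COS(z.lat * PI() / 180) *
         POW(SIN((@lon - z.lon) * PI() / 180 / 2), 2))) AS distance
FROM zips z
ORDER BY distance;

No GROUP BY needed; ORDER BY alone ranks the rows by distance, so all 6
records with identical coordinates will come back.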

Cheers,

Jay


-- 
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/[EMAIL PROTECTED]



Re: MIT Kerberos integration with MySQL

2006-09-22 Thread Jay Pipes
Hi!

Please see the Forge wiki and provide comments at the end of the technical 
specification for pluggable authentication and authorization.  Thanks!

Jay

http://forge.mysql.com/wiki/PluggableAuthenticationSupport
http://forge.mysql.com/wiki/PluggableAuthorizationSupport

On Friday 22 September 2006 12:45, Dan Nelson wrote:
 In the last episode (Sep 22), Whisler, David said:
  Does anyone know if MySQL currently integrates with MIT Kerberos
  authentication (http://web.mit.edu/Kerberos/)
  http://web.mit.edu/Kerberos/ .  And if not, are they working on
  this for a future release?  MIT Kerberos has broad use in government,
  academic and research institutions as well as some corporate
  environments.  In addition, Microsoft has implemented it's own
  version of Kerberos.  And Oracle integrates with it, which enables us
  to have single-sign on for our Oracle Database applications once we
  have logged into our network, but we'd like to have this same
  function available for our MySQL databases as well.

 There's already a Permit authentification outside of mysql bug report
 filed:

 http://bugs.mysql.com/bug.php?id=4703

 Must be pretty far down on the todo list, though..

 --
   Dan Nelson
   [EMAIL PROTECTED]

-- 
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/[EMAIL PROTECTED]



Re: Avoiding multi-col indexes increasing speed inspite of fully-enforced constraints on a fully-normalized db

2006-09-18 Thread Jay Pipes
Hi!  Please post the actual SHOW CREATE TABLE statements for the tables
in question.  Thanks!

Jay

On Mon, 2006-09-18 at 12:03 +0500, Asif Lodhi wrote:
 Hi,
 
 I have a multi-column index (TrnsxType, TrnsxDate, TrnsxID, DepartID).
  This index along with
 a multi-column index of some child tables results in 8-column indexes
 
 (TrnsxType, TrnsxDate, TrnsxID, DepartID, OrderType, OrderDate,
 OrderNo, DepartmentID), etc.
 
 I cannot eliminate Department ID because the software has to go with
 the manual procedures where each department has its own
 order/invoice/bill books - each with its own number series.
 In case the software goes down because of a power failure or
 something, things will continue on the manual system - using the
 manual system's document numbers, etc.  When the power becomes
 available, transactions manually recorded will be fed into the
 software-based system.
 
 To cope with the very likely possibility of duplication of manual
 order/invoice numbers etc. with those of software generated
 invoice/order numbers, etc., I am storing invoice/order numbers
 recorded on manual invoices/orders as Negative numbers in the database
 - so that even if there is a duplication, the two numbers stay
 separate - yet to the physical paper world they stay the same - well,
 almost - differing only in the signs!  However, even in that case, I
 have a problem - since there can be power/network failures, I am
 storing document (invoice/order, etc.) numbers with unique machine
 numbers embedded in them (as the left-most 3-4 digits, for example) so
 even if the transactions are fed into a network-disconnected computer
 the transaction numbers stay unique because of the left-most
 machine-number digits in the transaction-numbers.  However, the manual
 system has a separate document number series for each department - so
 even if I store manual document numbers in -ve and use the left most
 3-4 digits of the transaction-number column as the embedded
 machine-numbers (to make the transaction number unique, that is - in
 case connection to the server is dropped), I am going to have
 duplication errors - unless I get the department-no in the unique
 index (can't use a primary key as I am using InnoDB).
 
 I am storing all types of transactions in a single master/detail table
 combination with each transaction distinguished by its transaction
 type (order, invoice, purchase requisition, cash sale, etc.)  However,
 that puts one more field into the index - in addition to increasing
 the data load on the tables.  I decided on a single two-table design
 because the total number of transactions per year is not very big -
 last year the total number of transactions was under 100,000 - i.e.
 under a hundred thousand.  I reckon that it can go as high as 500,000 to
 1,000,000, but not much more in the near future.
 
 If I create separate tables for each transaction type - invoice,
 order, cash sale, credit sale, etc. then I fear the system will be
 having to deal with too many tables (there are at least 10 transaction
 types).  Since keeping different types of transactions in different
 tables will only decrease the index key length by 1 and there will
 still be three columns in the indexes and there will be many tables
 with three-column indexes, do you guys think that splitting up the
 tables like this will increase performance?
 
 Lastly, my database is fully normalized and I have tried to enforce
 data-integrity at the database level with all constraints enforced.
 Since, on innoDB tables, there is a requirement of building indexes
 for foreign key constraints, I fear I'll have performance degradation
 problems on multi-column indexes.  Because of that I have designed the
 database so that there will be very few updates or deletes - because
 of the stuff that I read about InnoDB issues.  Does MySQL perform
 well with so many constraints enforced?  I have STRICT SQL and all
 other restricted clauses enabled in my.ini file.
 
 Cutting it short:  can you recommend a solution that I can use to
 reduce the number of columns in indexes?  can you give me an advice to
 increase the MySQL performance in the face of fully-enforced
 constraints?
 
 I am posting this query on this list because I have seen some very
 good responses to similar  problems on this list.
 
 Thanks in advance,
 
 Asif
 


-- 
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/[EMAIL PROTECTED]



Re: Complex SQL for multiple joins

2006-08-31 Thread Jay Pipes
Also, depending on the number of permissions you are tracking, you could
use a single INT field and do bitwise ORing in your application to
determine permission checks...

Though I usually don't recommend denormalizing the schema, this is one
scenario that would potentially make life a bit easier.  Instead of
having six tables, you would have 3:

Users
UserGroups
User2Group (stores many-to-many relationship)

Both the fact tables (Users and UserGroups) would have an INT UNSIGNED
field called, say, permission_flags which could contain up to 32 flag
values for various permissions.

This is a very compact and efficient way of *storing* permissions.
Retrieving sets of users/groups based on a single flag would be easy,
though an index would not be used.

For instance, imagine you have set bit 1 to mean "has read access".

To find all users with read access, you would do:

SELECT * FROM Users WHERE permissions & 1;

Let's say you have another permission for write access at the second bit
and you want to see all users with both read and write permission, you'd
do:

SELECT * FROM Users WHERE permissions & (1 + 2) = (1 + 2);

the third bit would be 2^2, or 4, etc.:

SELECT * FROM Users WHERE permissions & (1 + 2 + 4) = (1 + 2 + 4);

Additionally, what is nice about this type of organization is that you
can store the user's permissions in session memory and reference the
permissions without having to go to the database by using the same
bitwise operations in your application code.

For instance, in PHP you would write something like:

<?php
define('CAN_READ', 1 << 0);
define('CAN_WRITE', 1 << 1);
define('CAN_DO_OTHER', 1 << 2);

$permissions = $my_session_vars['permissions'];
$can_read = $permissions & CAN_READ;
$can_write = $permissions & CAN_WRITE;

...
?>

Some more notes: 

* Use a 64-bit BIGINT datatype if you need 33-64 flags for permissions
* You can use a SET datatype as well in this manner
* Make sure you understand bitwise operations

Cheers,

Jay

On Thu, 2006-08-31 at 09:39 -0400, Brent Baisley wrote:
 You should be able to do it 1 query, I can think of two ways. I'm not sure 
 which one will run faster. I use table aliasing to keep 
 the lines shorter.
 The first possibility would have user permissions in one column and group 
 permissions in another:
 
 SELECT users.userid,perm_u.permissions,perm_g.permissions
 FROM users
 LEFT JOIN users_permissions AS u_p ON users.userid=u_p.userid
 LEFT JOIN permissions AS perm_u ON u_p.permid=perm_u.permid
 LEFT JOIN users_usergroups AS u_ug ON users.userid=u_ug.userid
 LEFT JOIN usergroups_permissions AS ug_p ON u_ug.groupid=ug_p.groupid
 LEFT JOIN permissions AS perm_g ON ug_p.permid=perm_g.permid
 WHERE users.userid=#
 
 But if you want to just have one column of permissions, you'll need to use 
 UNIONs. I think this is more readable.
 
 SELECT users.userid AS userid,permissions AS permissions
  FROM users
  LEFT JOIN users_permissions AS u_p ON users.userid=u_p.userid
  LEFT JOIN permissions ON u_p.permid=permissions.permid
  WHERE users.userid=#
 UNION
 SELECT users.userid AS userid,permissions
  FROM users
  LEFT JOIN users_usergroups AS u_ug ON users.userid=u_ug.userid
  LEFT JOIN usergroups_permissions AS ug_p ON u_ug.groupid=ug_p.groupid
  LEFT JOIN permissions AS perm_g ON ug_p.permid=perm_g.permid
  WHERE users.userid=#
 
 Either of those should work. I don't know your table structure, so I can't 
 get too specific with it.
 
 - Original Message - 
 From: Stephen Orr [EMAIL PROTECTED]
 To: mysql@lists.mysql.com
 Sent: Wednesday, August 30, 2006 7:27 PM
 Subject: Complex SQL for multiple joins
 
 
  Hi,
 
  For my current website project I'm developing a fine-grained access control
  system.
  I have the schema mostly worked out, although it will be having additional
  data added to parts of it later.
 
  For the record, I have 6 tables:
 
  users (contains all my individual users)
  usergroups (contains all the groups users can belong to)
  permissions (contains all the things that the system allows)
  users_usergroups (a join table that identifies which users belong to which
  groups)
  users_permissions (another join table that identifies which users have which
  permissions)
  usergroups_permissions (the final join table that identifies which
  usergroups have which permissions)
 
  Each of the _permissions join tables has an additional type column
  specifying whether the link allows or denies that particular action.
 
  What I've managed to get so far is queries returning which users belong to
  which groups, which users have which permissions, and which usergroups have
  which permissions. However, I need to go one step further and retrieve the
  permissions belonging to the usergroups which a specified user is a member
  of. Ideally I'd like to retrieve the individual users permissions at the
  same time.
 
  So what I need is a query that returns permissions belonging to a specific
  user, and permissions belonging to the usergroups that the 

Re: Degrading write performance using MySQL 5.0.24

2006-08-30 Thread Jay Pipes
On Wed, 2006-08-30 at 08:31 -0700, Phantom wrote:
 We have an application that stores versioned data in MySQL. Everytime a
 piece of data is retrieved and written to, it is stored in the database with
 a new version and all old versions are subsequently deleted. We have a
 request rate of 2 million reads per hour and 1.25 million per hour. What I
 am seeing is that as the DB grows the performance on the writes degrades
 substantially. When I start with a fresh database writes are at 70ms. But
 once the database reaches around 10GB the writes are at 200 ms. The DB can
 grow upto 35GB. I have tried almost performance related tuning described in
 the MySQL documentation page.
 
 What do I need to look at to start addressing this problem or this is how
 the performance is going to be ?

Before getting into server parameters, is it possible to take a look at
your schema and a sample of your SQL queries from the application?  That
would help immensely.  70ms for an UPDATE seems very slow... and 200ms
is even worse.

Cheers,
-- 
Jay Pipes
Community Relations Manager, North America, MySQL, Inc.
[EMAIL PROTECTED] :: +1 614 406 1267


-- 
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/[EMAIL PROTECTED]



Re: query cache question

2006-08-30 Thread Jay Pipes
Could you post an example of a query you are expecting to be cached?

On Wed, 2006-08-30 at 11:43 -0500, Mazur Worden, Kathy wrote:
 Hi,
 
  
 
 I just turned on the query cache by modifying its size and limit and
 restarting the server.  However queries aren't being stored to it:
 
  
 
 Variable_name            Value
 Qcache_free_blocks       1
 Qcache_free_memory       10477008
 Qcache_hits              0
 Qcache_inserts           0
 Qcache_lowmem_prunes     0
 Qcache_not_cached        3759
 Qcache_queries_in_cache  0
 Qcache_total_blocks      1
 
  
 
 
 
 I've found
 http://dev.mysql.com/tech-resources/articles/mysql-query-cache.html
 which says the qcache_not_cached variable is the number of queries which
 were either not SELECT's or have SQL_NO_CACHE specified so I ran several
 selects (selected records in a few tables using Query Browser and
 refreshed a few web pages hitting the db).  None of the queries have the
 no cache option specified.
 
  
 
 What else would prevent queries from getting into the cache?
 
  
 
 Kathy Mazur Worden
 
 Prairie Area Library System
 
  
 


-- 
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/[EMAIL PROTECTED]



Re: Degrading write performance using MySQL 5.0.24

2006-08-30 Thread Jay Pipes
On Wed, 2006-08-30 at 09:34 -0700, Phantom wrote:
 Here is the schema for the table that generates the Sequence Numbers needed
 per item and the table that actually stores the data:
 
 CREATE TABLE SEQUENCE_NUMBER
 (
 USER_KEY VARBINARY(255) NOT NULL,
 SUBKEY VARBINARY(255)NOT NULL,
 SEQUENCE_NBR INT NOT NULL,
 GLOBAL_GENERATION INT NOT NULL,
 KEY_GENERATION INT NOT NULL,
 LAST_ACCESS_TIME BIGINT NOT NULL,
 PRIMARY KEY (USER_KEY, SUBKEY)
 )TYPE=InnoDB;

You have a serious problem with your data types here...  Using a
VARBINARY(255), VARBINARY(255) as your primary key is not a good idea.
What is the purpose of the sequence table? Why not simply use an
auto_incrementing integer?  Additionally, why are you using a BIGINT for
an access time?  Why not just use the (much smaller) TIMESTAMP datatype?
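
A sketch of the direction I mean, using your SEQUENCE_NUMBER table
(assuming TIMESTAMP's range is acceptable for your access times):

CREATE TABLE SEQUENCE_NUMBER
(
SEQ_ID INT UNSIGNED NOT NULL AUTO_INCREMENT,
USER_KEY VARBINARY(255) NOT NULL,
SUBKEY VARBINARY(255) NOT NULL,
SEQUENCE_NBR INT NOT NULL,
GLOBAL_GENERATION INT NOT NULL,
KEY_GENERATION INT NOT NULL,
LAST_ACCESS_TIME TIMESTAMP NOT NULL,
PRIMARY KEY (SEQ_ID),
UNIQUE KEY (USER_KEY, SUBKEY)
) ENGINE=InnoDB;

You keep the (USER_KEY, SUBKEY) uniqueness guarantee, but secondary
indexes now carry a 4-byte primary key instead of up to 512 bytes.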

 CREATE TABLE ITEMS
 (
 USER_KEY VARBINARY(255) NOT NULL,
 SUBKEY VARBINARY(255) NOT NULL,
 VERSION_STAMP_HASH VARBINARY(255) NOT NULL,
 VERSION_STAMP LONGBLOB NOT NULL,
 USER_DATA LONGBLOB NOT NULL,
 LAST_ACCESS_TIME BIGINT NOT NULL,
 TYPE VARCHAR(255) NOT NULL,
 DATA_HASH VARBINARY(255) NOT NULL,
 HINTED_STORAGE_ID VARBINARY(255),
 USER_KEY_HASH VARBINARY(255) NOT NULL,
 ROWID BIGINT NOT NULL AUTO_INCREMENT,
 PRIMARY KEY (USER_KEY, SUBKEY, VERSION_STAMP_HASH),
 INDEX (ROWID),
 INDEX (HINTED_STORAGE_ID),
 INDEX (USER_KEY_HASH),
 INDEX (LAST_ACCESS_TIME,TYPE),
 INDEX (DATA_HASH)
 ) TYPE=InnoDB;

Same comment from above applies here, but it's even worse here because
*every one of the five secondary indexes* on the table will have a 512
byte primary key value appended to *each index record*, resulting in
horrible performance problems.  Again, what is the purpose of the
sequence numbers as the primary key?

 Sample queries are :
 
 SELECT GLOBAL_GENERATION, KEY_GENERATION, SEQUENCE_NBR, LAST_ACCESS_TIME
 FROM SEQUENCE_NUMBER Where USER_KEY = ? AND SUBKEY = ?;
 
 UPDATE SEQUENCE_NUMBER SET KEY_GENERATION = ?, SEQUENCE_NBR = ?,
 GLOBAL_GENERATION = ?, LAST_ACCESS_TIME = ? WHERE USER_KEY = ? AND SUBKEY =
 ? AND LAST_ACCESS_TIME = ?;
 
 INSERT INTO SEQUENCE_NUMBER VALUES (?, ?, ?, ?, ?, ?);
 
 REPLACE INTO SEQUENCE_NUMBER VALUES (?, ?, ?, ?, ?, ?);
 
 SELECT * FROM ITEMS Where USER_KEY = ? AND SUBKEY = ?;
 
 SELECT * FROM ITEMS WHERE USER_KEY = ? AND SUBKEY = ? AND
 VERSION_STAMP_HASH = ?;
 
 INSERT INTO ITEMS (USER_KEY, SUBKEY, VERSION_STAMP_HASH, VERSION_STAMP,
 USER_DATA, LAST_ACCESS_TIME, TYPE, DATA_HASH, HINTED_STORAGE_ID,
 USER_KEY_HASH) VALUES (?, ?, ?, ?, ?, ?, ?, ?, ?, ?);
 
 DELETE FROM ITEMS WHERE USER_KEY = ? AND SUBKEY = ? AND VERSION_STAMP_HASH
 = ?;



 These are the main queries that we execute.
 
 Thanks
 A
 
 
 On 8/30/06, Jay Pipes [EMAIL PROTECTED] wrote:
 
  On Wed, 2006-08-30 at 08:31 -0700, Phantom wrote:
   We have an application that stores versioned data in MySQL. Everytime a
   piece of data is retrieved and written to, it is stored in the database
  with
   a new version and all old versions are subsequently deleted. We have a
   request rate of 2 million reads per hour and 1.25 million per hour. What
  I
   am seeing is that as the DB grows the performance on the writes degrades
   substantially. When I start with a fresh database writes are at 70ms.
  But
   once the database reaches around 10GB the writes are at 200 ms. The DB
  can
   grow upto 35GB. I have tried almost performance related tuning described
  in
   the MySQL documentation page.
  
   What do I need to look at to start addressing this problem or this is
  how
   the performance is going to be ?
 
  Before getting into server parameters, is it possible to take a look at
  your schema and a sample of your SQL queries from the application?  That
  would help immensely.  70ms for an UPDATE seems very slow... and 200ms
  is very slow.
 
  Cheers,
  --
  Jay Pipes
  Community Relations Manager, North America, MySQL, Inc.
  [EMAIL PROTECTED] :: +1 614 406 1267
 
 


-- 
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/[EMAIL PROTECTED]



RE: query cache question

2006-08-30 Thread Jay Pipes
Those queries look just fine to me.

Could you output the result of the following:

SELECT @@global.query_cache_size;

On Wed, 2006-08-30 at 12:27 -0500, Mazur Worden, Kathy wrote:
 Maybe that's my issue, I had thought the type of query would not matter.
 
 I've run simple queries in query browser like this:
 Select * from table t;
 
 And refreshed web pages hit about a hundred times a day containing
 counting queries like this:
 select count(BuildingInfo.deliverycode) AS 'Total' from BuildingInfo,
 AgencyInfo where BuildingInfo.agencyid = AgencyInfo.agencyid AND
 AgencyInfo.type = 'academic' AND (BuildingInfo.membershipstatus = 'Full'
 OR BuildingInfo.membershipstatus = 'Developmental')
 
 And select queries at least as long as these:
 SELECT AgencyInfo.Name, AgencyInfo.agencyid from AgencyInfo where
 AgencyInfo.Type = 'school' AND (AgencyInfo.membershipstatus = 'Full' OR
 AgencyInfo.membershipstatus = 'Developmental') ORDER BY AgencyInfo.Name
 
 SELECT AgencyInfo.Name, BuildingInfo.Name as 'BuildingName',
 AgencyInfo.agencyid, BuildingInfo.buildingid FROM BuildingInfo INNER
 JOIN AgencyInfo ON  BuildingInfo.agencyid = AgencyInfo.agencyid ORDER BY
 AgencyInfo.Name, BuildingInfo.Name;
 
 There are indexes on the id and name related fields used in the WHERE
 clauses.
 
 Kathy Mazur Worden
 Prairie Area Library System
 
 -Original Message-
 From: Jay Pipes [mailto:[EMAIL PROTECTED] 
 Sent: Wednesday, August 30, 2006 12:07 PM
 To: Mazur Worden, Kathy
 Cc: mysql@lists.mysql.com
 Subject: Re: query cache question
 
 Could you post an example of a query you are expecting to be cached?
 
 On Wed, 2006-08-30 at 11:43 -0500, Mazur Worden, Kathy wrote:
  Hi,
  
   
  
  I just turned on the query cache by modifying its size and limit and
  restarting the server.  However queries aren't being stored to it:
  
   
  
  Variable_name            Value
  Qcache_free_blocks       1
  Qcache_free_memory       10477008
  Qcache_hits              0
  Qcache_inserts           0
  Qcache_lowmem_prunes     0
  Qcache_not_cached        3759
  Qcache_queries_in_cache  0
  Qcache_total_blocks      1
  
   
  
  
  
  I've found
  http://dev.mysql.com/tech-resources/articles/mysql-query-cache.html
  which says the qcache_not_cached variable is the number of queries
 which
  were either not SELECT's or have SQL_NO_CACHE specified so I ran
 several
  selects (selected records in a few tables using Query Browser and
  refreshed a few web pages hitting the db).  None of the queries have
 the
  no cache option specified.
  
   
  
  What else would prevent queries from getting into the cache?
  
   
  
  Kathy Mazur Worden
  
  Prairie Area Library System
  
   
  
 
 


-- 
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/[EMAIL PROTECTED]



RE: Degrading write performance using MySQL 5.0.24

2006-08-30 Thread Jay Pipes
What type of data are you inserting?  What storage engine are you
inserting into?  What is the average row size?

On Wed, 2006-08-30 at 12:32 -0400, George Law wrote:
 I see the same type of slowdowns using 5.0.18.
 
 I am using LOAD DATA INFILE to load CSV files.
 
 With clean tables, I see fairly quick inserts (i.e. instant):
 
 2006-08-30 12:07:15 : begin import into table1
 2006-08-30 12:07:15: end import into table1 records (10962) 
 
 
 From earlier this morning, before I rotated my tables:
 2006-08-30 09:02:01 : begin import into table1
 2006-08-30 09:05:07: end import into table1 records (10082)
 
 
 I've posted about this before - one person will say that it's my indexes
 getting rebuilt, others have said it's disk I/O. I can never get a solid
 answer.
 
 If I disable the keys, do the import, then re-enable the keys, it takes
 just as long, 
 if not longer.
 
 
 I have just about given up on finding a solution for this and just
 rotate my tables out
 regularly once the imports take over 5 minutes to process roughly 10,000
 records
 
 --
 George
 
 
 
 
 
 -Original Message-
 From: Jay Pipes [mailto:[EMAIL PROTECTED] 
 Sent: Wednesday, August 30, 2006 12:06 PM
 To: Phantom
 Cc: mysql@lists.mysql.com
 Subject: Re: Degrading write performance using MySQL 5.0.24
 
 On Wed, 2006-08-30 at 08:31 -0700, Phantom wrote:
  We have an application that stores versioned data in 
 MySQL. Everytime a
  piece of data is retrieved and written to, it is stored in 
 the database with
  a new version and all old versions are subsequently 
 deleted. We have a
  request rate of 2 million reads per hour and 1.25 million 
 per hour. What I
  am seeing is that as the DB grows the performance on the 
 writes degrades
  substantially. When I start with a fresh database writes 
 are at 70ms. But
  once the database reaches around 10GB the writes are at 
 200 ms. The DB can
  grow upto 35GB. I have tried almost performance related 
 tuning described in
  the MySQL documentation page.
  
  What do I need to look at to start addressing this problem 
 or this is how
  the performance is going to be ?
 
 Before getting into server parameters, is it possible to 
 take a look at
 your schema and a sample of your SQL queries from the 
 application?  That
 would help immensely.  70ms for an UPDATE seems very slow... 
 and 200ms
 is very slow.
 
 Cheers,
 -- 
 Jay Pipes
 Community Relations Manager, North America, MySQL, Inc.
 [EMAIL PROTECTED] :: +1 614 406 1267
 
 
 -- 
 MySQL General Mailing List
 For list archives: http://lists.mysql.com/mysql
 To unsubscribe:
 http://lists.mysql.com/[EMAIL PROTECTED]
 
 
 


-- 
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/[EMAIL PROTECTED]



RE: Degrading write performance using MySQL 5.0.24

2006-08-30 Thread Jay Pipes
| timezone                         | char(3)  | YES  |     |         |       |
| msw_name                         | char(10) | YES  |     | NULL    |       |
| called_party_after_transit_route | char(1)  | YES  |     | NULL    |       |
| called_party_on_dest_num_type    | int(1)   | YES  |     | 0       |       |
| called_party_from_src_num_type   | int(1)   | YES  |     | 0       |       |
| call_source_realm_name           | char(3)  | YES  |     | NULL    |       |
| call_dest_realm_name             | char(3)  | YES  |     | NULL    |       |
| call_dest_crname                 | char(50) | YES  |     | NULL    |       |
| call_dest_custid                 | char(20) | YES  |     | NULL    |       |
| call_zone_data                   | char(20) | YES  |     | NULL    |       |
| calling_party_on_dest_num_type   | int(1)   | YES  |     | 0       |       |
| calling_party_from_src_num_type  | int(1)   | YES  |     | 0       |       |
| original_isdn_cause_code         | int(1)   | YES  |     | 0       |       |
+----------------------------------+----------+------+-----+---------+-------+
 
 
 -Original Message-
 From: Jay Pipes [mailto:[EMAIL PROTECTED] 
 Sent: Wednesday, August 30, 2006 1:44 PM
 To: George Law
 Cc: mysql@lists.mysql.com
 Subject: RE: Degrading write performance using MySQL 5.0.24
 
 What type of data are you inserting?  What storage engine are you
 inserting into?  What is the average row size?
 
 On Wed, 2006-08-30 at 12:32 -0400, George Law wrote:
  I see the same type of slow downs using 5.0.18
  
  I am using load data in file to load CSV files.  
  
  with clean tables, I see fairly quick inserts (ie instant)
  
  2006-08-30 12:07:15 : begin import into table1
  2006-08-30 12:07:15: end import into table1 records (10962) 
  
  
  From earlier this morning, before I rotated my tables:
  2006-08-30 09:02:01 : begin import into table1
  2006-08-30 09:05:07: end import into table1 records (10082)
  
  
  I've posted about this before - one person will say that 
 its my indexes
  getting rebuilt, others have said its disk io. I can never 
 get a solid
  answer.
  
  If I disable the keys, do the import, then re-enable the 
 keys, it takes
  just as long, 
  if not longer.
  
  
  I have just about given up on finding a solution for this and just
  rotate my tables out
  regularly once the imports take over 5 minutes to process 
 roughly 10,000
  records
  
  --
  George
  
  
  
  
  
  -Original Message-
  From: Jay Pipes [mailto:[EMAIL PROTECTED] 
  Sent: Wednesday, August 30, 2006 12:06 PM
  To: Phantom
  Cc: mysql@lists.mysql.com
  Subject: Re: Degrading write performance using MySQL 5.0.24
  
  On Wed, 2006-08-30 at 08:31 -0700, Phantom wrote:
   We have an application that stores versioned data in 
  MySQL. Everytime a
   piece of data is retrieved and written to, it is stored in 
  the database with
   a new version and all old versions are subsequently 
  deleted. We have a
   request rate of 2 million reads per hour and 1.25 million 
  per hour. What I
   am seeing is that as the DB grows the performance on the 
  writes degrades
   substantially. When I start with a fresh database writes 
  are at 70ms. But
   once the database reaches around 10GB the writes are at 
  200 ms. The DB can
   grow upto 35GB. I have tried almost performance related 
  tuning described in
   the MySQL documentation page.
   
   What do I need to look at to start addressing this problem 
  or this is how
   the performance is going to be ?
  
  Before getting into server parameters, is it possible to 
  take a look at
  your schema and a sample of your SQL queries from the 
  application?  That
  would help immensely.  70ms for an UPDATE seems very slow... 
  and 200ms
  is very slow.
  
  Cheers,
  -- 
  Jay Pipes
  Community Relations Manager, North America, MySQL, Inc.
  [EMAIL PROTECTED] :: +1 614 406 1267
  
  
  -- 
  MySQL General Mailing List
  For list archives: http://lists.mysql.com/mysql
  To unsubscribe:
  http://lists.mysql.com/[EMAIL PROTECTED]
  
  
  
 
 
 


-- 
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/[EMAIL PROTECTED]



RE: query cache question

2006-08-30 Thread Jay Pipes
Please show the output of:

SELECT @@global.query_cache_type;

and then:

SHOW GLOBAL STATUS LIKE 'Qc%';

Thanks!

On Wed, 2006-08-30 at 13:43 -0500, Mazur Worden, Kathy wrote:
 The output of that query is:
 
 @@global.query_cache_size
 10485760
 
 Kathy Mazur Worden
 Prairie Area Library System
 
 -Original Message-
 From: Jay Pipes [mailto:[EMAIL PROTECTED] 
 Sent: Wednesday, August 30, 2006 12:42 PM
 To: Mazur Worden, Kathy
 Cc: mysql@lists.mysql.com
 Subject: RE: query cache question
 
 Those queries look just fine to me.
 
 Could you output the result of the following:
 
 SELECT @@global.query_cache_size;
 
 On Wed, 2006-08-30 at 12:27 -0500, Mazur Worden, Kathy wrote:
  Maybe that's my issue, I had thought the type of query would not
 matter.
  
  I've run simple queries in query browser like this:
  Select * from table t;
  
  And refreshed web pages hit about a hundred times a day containing
  counting queries like this:
  select count(BuildingInfo.deliverycode) AS 'Total' from BuildingInfo,
  AgencyInfo where BuildingInfo.agencyid = AgencyInfo.agencyid AND
  AgencyInfo.type = 'academic' AND (BuildingInfo.membershipstatus =
 'Full'
  OR BuildingInfo.membershipstatus = 'Developmental')
  
  And select queries at least as long as these:
  SELECT AgencyInfo.Name, AgencyInfo.agencyid from AgencyInfo where
  AgencyInfo.Type = 'school' AND (AgencyInfo.membershipstatus = 'Full'
 OR
  AgencyInfo.membershipstatus = 'Developmental') ORDER BY
 AgencyInfo.Name
  
  SELECT AgencyInfo.Name, BuildingInfo.Name as 'BuildingName',
  AgencyInfo.agencyid, BuildingInfo.buildingid FROM BuildingInfo INNER
  JOIN AgencyInfo ON  BuildingInfo.agencyid = AgencyInfo.agencyid ORDER
 BY
  AgencyInfo.Name, BuildingInfo.Name;
  
  There are indexes on the id and name related fields used in the WHERE
  clauses.
  
  Kathy Mazur Worden
  Prairie Area Library System
  
  -Original Message-
  From: Jay Pipes [mailto:[EMAIL PROTECTED] 
  Sent: Wednesday, August 30, 2006 12:07 PM
  To: Mazur Worden, Kathy
  Cc: mysql@lists.mysql.com
  Subject: Re: query cache question
  
  Could you post an example of a query you are expecting to be cached?
  
  On Wed, 2006-08-30 at 11:43 -0500, Mazur Worden, Kathy wrote:
   Hi,
   

   
   I just turned on the query cache by modifying its size and limit and
   restarting the server.  However queries aren't being stored to it:
   

   
   Variable_name            Value
   Qcache_free_blocks       1
   Qcache_free_memory       10477008
   Qcache_hits              0
   Qcache_inserts           0
   Qcache_lowmem_prunes     0
   Qcache_not_cached        3759
   Qcache_queries_in_cache  0
   Qcache_total_blocks      1
   

   
   
   
   I've found
   http://dev.mysql.com/tech-resources/articles/mysql-query-cache.html
   which says the qcache_not_cached variable is the number of queries
  which
   were either not SELECT's or have SQL_NO_CACHE specified so I ran
  several
   selects (selected records in a few tables using Query Browser and
   refreshed a few web pages hitting the db).  None of the queries have
  the
   no cache option specified.
   

   
   What else would prevent queries from getting into the cache?
   

   
   Kathy Mazur Worden
   
   Prairie Area Library System
   

   
  
  
 
 


-- 
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/[EMAIL PROTECTED]



RE: Degrading write performance using MySQL 5.0.24

2006-08-30 Thread Jay Pipes
Well, clearly you've got a denormalized schema here, but I'll assume you
will be normalizing this after the import?  I hope so.  Having a PK of
CHAR(50) / INT will lead to poorer performance than a simple
incrementing integer key.  Additionally, it seems all the fields are
NULLable, which wastes even more space.  Plus, not quite sure what all
the undef1, undef2, undef3 ... etc fields are?  Also, storing timestamps
in a CHAR(19) is another great way to kill performance.

As far as just simply *getting* this information into the database as
quickly as possible, you might try dumping it into a temporary table,
then normalizing it and INSERT ... SELECT * FROM temp_table to batch the
insert as one move into the main table...
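
For example (a sketch; the file path and CSV options are placeholders):

CREATE TEMPORARY TABLE table1_load LIKE table1;

LOAD DATA INFILE '/path/to/batch.csv'
INTO TABLE table1_load
FIELDS TERMINATED BY ',';

INSERT INTO table1
SELECT * FROM table1_load;  -- transform columns here as you normalize

DROP TEMPORARY TABLE table1_load;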

Jay

On Wed, 2006-08-30 at 15:11 -0400, George Law wrote:
 Jay,
 
 Here you go - at least according to mysqldump 
 
 
 CREATE TABLE `table1` (
   `start_time` char(19) default NULL,
   `start_time_epoch` int(10) default '0',
   `call_duration` char(9) default NULL,
   `call_source` char(15) default NULL,
   `call_source_q931sig_port` int(5) default '0',
   `call_dest` char(15) default NULL,
   `undef1` char(1) default NULL,
   `call_source_custid` char(20) default NULL,
   `called_party_on_dest` char(32) default NULL,
   `called_party_from_src` char(32) default NULL,
   `call_type` char(2) default NULL,
   `undef2` tinyint(1) default NULL,
   `disconnect_error_type` char(1) default '',
   `call_error_num` int(4) default '0',
   `call_error` char(24) default NULL,
   `undef3` char(1) default NULL,
   `undef4` char(1) default NULL,
   `ani` char(32) default NULL,
   `undef5` char(1) default NULL,
   `undef6` char(1) default NULL,
   `undef7` char(1) default NULL,
   `cdr_seq_no` int(9) NOT NULL default '0',
   `undef8` char(1) default NULL,
   `callid` char(50) NOT NULL default '',
   `call_hold_time` char(9) default NULL,
   `call_source_regid` char(20) default '',
   `call_source_uport` int(1) default '0',
   `call_dest_regid` char(20) default '',
   `call_dest_uport` int(1) default '0',
   `isdn_cause_code` int(3) default '0',
   `called_party_after_src_calling_plan` char(32) default NULL,
   `call_error_dest_num` int(4) default '0',
   `call_error_dest` char(25) default NULL,
   `call_error_event_str` char(20) default '',
   `new_ani` char(32) default NULL,
   `call_duration_seconds` int(5) default '0',
   `incoming_leg_callid` char(1) default NULL,
   `protocol` enum('sip','h323') default NULL,
   `cdr_type` enum('start1','start2','end1','end2','hunt') default NULL,
   `hunting_attempts` int(1) default '0',
   `caller_trunk_group` int(3) default NULL,
   `call_pdd` int(5) default '0',
   `h323_dest_ras_error` int(2) default '0',
   `h323_dest_h225_error` int(2) default '0',
   `sip_dest_respcode` int(3) default '0',
   `dest_trunk_group` char(1) default NULL,
   `call_duration_fractional` decimal(8,3) default '0.000',
   `timezone` char(3) default '',
   `msw_name` char(10) default NULL,
   `called_party_after_transit_route` char(1) default NULL,
   `called_party_on_dest_num_type` int(1) default '0',
   `called_party_from_src_num_type` int(1) default '0',
   `call_source_realm_name` char(3) default NULL,
   `call_dest_realm_name` char(3) default NULL,
   `call_dest_crname` char(50) default NULL,
   `call_dest_custid` char(20) default NULL,
   `call_zone_data` char(20) default NULL,
   `calling_party_on_dest_num_type` int(1) default '0',
   `calling_party_from_src_num_type` int(1) default '0',
   `original_isdn_cause_code` int(1) default '0',
   PRIMARY KEY  (`callid`,`cdr_seq_no`)
 ) ENGINE=MyISAM DEFAULT CHARSET=latin1 MAX_ROWS=1;
 
  
 
 -Original Message-
 From: Jay Pipes [mailto:[EMAIL PROTECTED] 
 Sent: Wednesday, August 30, 2006 3:06 PM
 To: George Law
 Cc: mysql@lists.mysql.com
 Subject: RE: Degrading write performance using MySQL 5.0.24
 
 Hi!
 
 Could you please post a SHOW CREATE TABLE table1 \G
 
 thanks!
 
 On Wed, 2006-08-30 at 14:32 -0400, George Law wrote:
  data is all alphanumeric - any char fields are all fixed 
 lengths, no
  varchars
  
  
 Name: table1
   Engine: MyISAM
  Version: 10
   Row_format: Fixed
 Rows: 330344
   Avg_row_length: 624
  Data_length: 206134656
  Max_data_length: 2680059592703
 Index_length: 18638848
Data_free: 0
   Auto_increment: NULL
  Create_time: 2006-08-30 09:50:23
  Update_time: 2006-08-30 14:17:17
   Check_time: NULL
Collation: latin1_swedish_ci
 Checksum: NULL
   Create_options: max_rows=1
  Comment: 
  
  
  
 +---------------------------------+----------+------+-----+---------+-------+
 | Field                           | Type     | Null | Key | Default | Extra |
 +---------------------------------+----------+------+-----+---------+-------+
 | start_time                      | char(19) | YES  |     | NULL    |       |

RE: How to draw data model diagrams from existing schema?

2006-08-30 Thread Jay Pipes
Use MySQL Workbench:

http://dev.mysql.com/downloads/guitools/

On Wed, 2006-08-30 at 15:11 -0400, Jerry Schwartz wrote:
 Automatically? Don't know. I did it by hand using Dia, a free alternative to
 MS Visio. It was tedious.
 
 Regards,
 
 Jerry Schwartz
 Global Information Incorporated
 195 Farmington Ave.
 Farmington, CT 06032
 
 860.674.8796 / FAX: 860.674.8341
 
 
 -Original Message-
 From: news [mailto:[EMAIL PROTECTED] Behalf Of Matthew Wilson
 Sent: Wednesday, August 30, 2006 2:58 PM
 To: mysql@lists.mysql.com
 Subject: How to draw data model diagrams from existing schema?
 
 
 
 I've spent the last month building a fairly elaborate database with lots
 of foreign keys.
 
 I want to draw a diagram that maps the relationships between all the
 tables, but I haven't found any software that can do that.
 
 Does anyone have any suggestions?  I want to draw a picture with a box
 for each table with links to all the other tables.
 
 TIA
 
 Matt
 
 --
 A better way of running series of SAS programs:
 http://overlook.homelinux.net/wilsonwiki/SasAndMakefiles
 
 
 --
 MySQL General Mailing List
 For list archives: http://lists.mysql.com/mysql
 To unsubscribe:
 http://lists.mysql.com/[EMAIL PROTECTED]
 
 
 
 
 


-- 
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/[EMAIL PROTECTED]



RE: query cache question

2006-08-30 Thread Jay Pipes
Ah, I think I may have figured it out...

Are you using prepared statements?  If so, doing that prevents the query
cache from caching the resultset entirely.
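
A quick way to verify: run a query twice straight from the mysql
command line client (which does not use prepared statements) and watch
the counters:

SELECT COUNT(*) FROM AgencyInfo WHERE Type = 'school';
SHOW GLOBAL STATUS LIKE 'Qcache_inserts';

SELECT COUNT(*) FROM AgencyInfo WHERE Type = 'school';
SHOW GLOBAL STATUS LIKE 'Qcache_hits';

If Qcache_inserts rises after the first run and Qcache_hits after the
second, the cache itself is fine and it's the client library that is
bypassing it.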

On Wed, 2006-08-30 at 14:17 -0500, Mazur Worden, Kathy wrote:
 @@global.query_cache_type
 ON
 
 Variable_name            Value
 Qcache_free_blocks       1
 Qcache_free_memory       10477008
 Qcache_hits              0
 Qcache_inserts           0
 Qcache_lowmem_prunes     0
 Qcache_not_cached        20318
 Qcache_queries_in_cache  0
 Qcache_total_blocks      1
 
 Thanks
 
 -Original Message-
 From: Jay Pipes [mailto:[EMAIL PROTECTED] 
 Sent: Wednesday, August 30, 2006 2:12 PM
 To: Mazur Worden, Kathy
 Cc: mysql@lists.mysql.com
 Subject: RE: query cache question
 
 Please show the output of:
 
 SELECT @@global.query_cache_type;
 
 and then:
 
 SHOW GLOBAL STATUS LIKE 'Qc%';
 
 Thanks!
 
 On Wed, 2006-08-30 at 13:43 -0500, Mazur Worden, Kathy wrote:
  The output of that query is:
  
  @@global.query_cache_size
  10485760
  
  Kathy Mazur Worden
  Prairie Area Library System
  
  -Original Message-
  From: Jay Pipes [mailto:[EMAIL PROTECTED] 
  Sent: Wednesday, August 30, 2006 12:42 PM
  To: Mazur Worden, Kathy
  Cc: mysql@lists.mysql.com
  Subject: RE: query cache question
  
  Those queries look just fine to me.
  
  Could you output the result of the following:
  
  SELECT @@global.query_cache_size;
  
  On Wed, 2006-08-30 at 12:27 -0500, Mazur Worden, Kathy wrote:
   Maybe that's my issue, I had thought the type of query would not
  matter.
   
   I've run simple queries in query browser like this:
   Select * from table t;
   
   And refreshed web pages hit about a hundred times a day containing
   counting queries like this:
   select count(BuildingInfo.deliverycode) AS 'Total' from
 BuildingInfo,
   AgencyInfo where BuildingInfo.agencyid = AgencyInfo.agencyid AND
   AgencyInfo.type = 'academic' AND (BuildingInfo.membershipstatus =
  'Full'
   OR BuildingInfo.membershipstatus = 'Developmental')
   
   And select queries at least as long as these:
   SELECT AgencyInfo.Name, AgencyInfo.agencyid from AgencyInfo where
   AgencyInfo.Type = 'school' AND (AgencyInfo.membershipstatus = 'Full'
  OR
   AgencyInfo.membershipstatus = 'Developmental') ORDER BY
  AgencyInfo.Name
   
   SELECT AgencyInfo.Name, BuildingInfo.Name as 'BuildingName',
   AgencyInfo.agencyid, BuildingInfo.buildingid FROM BuildingInfo INNER
   JOIN AgencyInfo ON  BuildingInfo.agencyid = AgencyInfo.agencyid
 ORDER
  BY
   AgencyInfo.Name, BuildingInfo.Name;
   
   There are indexes on the id and name related fields used in the
 WHERE
   clauses.
   
   Kathy Mazur Worden
   Prairie Area Library System
   
   -Original Message-
   From: Jay Pipes [mailto:[EMAIL PROTECTED] 
   Sent: Wednesday, August 30, 2006 12:07 PM
   To: Mazur Worden, Kathy
   Cc: mysql@lists.mysql.com
   Subject: Re: query cache question
   
   Could you post an example of a query you are expecting to be cached?
   
   On Wed, 2006-08-30 at 11:43 -0500, Mazur Worden, Kathy wrote:
Hi,

 

I just turned on the query cache by modifying its size and limit
 and
restarting the server.  However queries aren't being stored to it:

 

    Variable_name            Value
    Qcache_free_blocks       1
    Qcache_free_memory       10477008
    Qcache_hits              0
    Qcache_inserts           0
    Qcache_lowmem_prunes     0
    Qcache_not_cached        3759
    Qcache_queries_in_cache  0
    Qcache_total_blocks      1

 



I've found
   
 http://dev.mysql.com/tech-resources/articles/mysql-query-cache.html
which says the qcache_not_cached variable is the number of queries
   which
were either not SELECT's or have SQL_NO_CACHE specified so I ran
   several
selects (selected records in a few tables using Query Browser and
refreshed a few web pages hitting the db).  None of the queries
 have
   the
no cache option specified.

 

What else would prevent queries from getting into the cache?

 

Kathy Mazur Worden

Prairie Area Library System

 

   
   
  
  
 
 


-- 
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/[EMAIL PROTECTED]



Re: Subqueries in MySQL 4.1

2006-08-23 Thread Jay Pipes
On Wed, 2006-08-23 at 22:23 +0200, spacemarc wrote:
 Hi,
 I have a query like this:
 
 SELECT table1.*,(
 SELECT COUNT( field2 )
 FROM table2
 WHERE id=10
 ) AS total
 FROM table1
 GROUP BY id
 LIMIT 1
 
 but subqueries do not work with MySQL < 4.1. How can I convert it
 (or make it work) in MySQL 3.x / 4.0, possibly in only one query?

Your query doesn't show any relationship between the two tables (via a
join condition or correlation) so you would have to do two queries
(which is exactly what your original query does anyway):

SELECT COUNT(field2) INTO @counter FROM table2 WHERE id = 10;

SELECT table1.*, @counter as total
FROM table1
LIMIT 1;

Note that I took out the GROUP BY clause, which is pointless given that
the query returns only the first row anyway.

Jay


-- 
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/[EMAIL PROTECTED]



Re: Slow log logs non-slow statements

2006-08-15 Thread Jay Pipes
It is likely you are also logging any queries not using an index (doing
full table scans).  Check the configuration variable: log_long_format
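
Depending on your server version, the my.cnf entries look something
like this (log-long-format is the older spelling; 4.1 and later also
accept log-queries-not-using-indexes):

[mysqld]
log-slow-queries
long_query_time = 10
# comment these out if you only want genuinely slow statements logged:
# log-long-format
# log-queries-not-using-indexes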

Cheers,
Jay

On Tue, 2006-08-15 at 14:45 +0200, Dominik Klein wrote:
 I have specified
 
 log-slow-queries
 long-query-time=10
 
 in my.cnf and restarted my server. After that I see statements logged 
 into the slow-log-file.
 
 But it seems that mysql logs too much into that file.
 
 When executing this statement:
 
 mysql> show variables like '%tx%';
 +---+-+
 | Variable_name | Value   |
 +---+-+
 | tx_isolation  | REPEATABLE-READ |
 +---+-+
 1 row in set (0,00 sec)
 
 it immediately shows up in the slow-log:
 
 # Time: 060815 14:40:22
 # User@Host: root[root] @ localhost []
 # Query_time: 0  Lock_time: 0  Rows_sent: 1  Rows_examined: 1
 show variables like '%tx%';
 
 This is also true for simple select statements which give a result in 
 (0,00 sec).
 
 How can I make mysql log only those queries that are really slow?
 
 Thanks in advance
 Dominik
 


-- 
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/[EMAIL PROTECTED]



MySQL [bar]Camp

2006-08-10 Thread Jay Pipes
Hi all!

Wanted everyone to know about an upcoming MySQL camp to be held the
weekend of November 10th through 12th in Silicon Valley at a venue to be
determined soon.

Please hop over to the wiki at http://mysqlcamp.org and check it out.
Feel free to add suggestions for topics, sign your name on the
participants page, and anything else.

Not familiar with barcamps?  Check out http://barcamp.org to find out
about the concept!

Cheers!

Jay Pipes
Community Relations Manager, North America, MySQL, Inc.
[EMAIL PROTECTED]


-- 
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/[EMAIL PROTECTED]



Re: previous and next query

2006-08-04 Thread Jay Pipes
On Fri, 2006-08-04 at 17:14 -0700, Tanner Postert wrote:
 select text from table where user = 2 and id < 3 order by id ASC limit 1;
 select text from table where user = 2 and id > 3 order by id ASC limit 1;
 
 is it possible to do this using 1 query?

(select text from table
 where user = 2 and id < 3 order by id ASC limit 1)
UNION ALL
(select text from table
 where user = 2 and id > 3 order by id ASC limit 1);

(The parentheses are required so that each ORDER BY ... LIMIT applies
to its own SELECT instead of to the UNION as a whole.)


-- 
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/[EMAIL PROTECTED]



Re: Query Help for Loosely Couple Properties

2006-08-02 Thread Jay Pipes
On Tue, 2006-08-01 at 16:23 -0700, Robert DiFalco wrote:
 The question is, how do I query this? Say I want all records from table
 T whose COLOR property value is ORANGE.
 
 The only thing I can come up with (and I'm no SQL expert and this looks
 wrong to me) is the following:
 
 SELECT *
 FROM T
 WHERE
 (
 T.ID NOT IN 
 ( 
 SELECT StringVal.REF_ID 
 FROM StringVal 
 WHERE StringValue.TYPE_ID = COLOR 
 )
 AND
 EXISTS
 ( 
 SELECT * 
 FROM StringType 
 WHERE StringType.DEF_VAL LIKE Orange AND StringType.ID = COLOR
 
 )
 )
 OR
 (
 T.ID IN 
 (
 SELECT StringVal.REF_ID 
 FROM StringVal 
 WHERE StringVal.VAL LIKE Orange AND StringVal.TYPE_ID = COLOR
 )
 )

SELECT T.*
FROM T
INNER JOIN StringType ST
ON ST.ID = COLOR
LEFT JOIN StringVal V
ON T.ID = V.REF_ID
AND V.TYPE_ID = ST.ID
WHERE V.VAL = 'Orange'
OR (V.REF_ID IS NULL AND ST.DEF_VAL = 'Orange');


-- 
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/[EMAIL PROTECTED]



Re: Doing a join

2006-08-02 Thread Jay Pipes
On Wed, 2006-08-02 at 11:13 -0600, John Meyer wrote:
 I have two tables:
 
 
 MEMBERS:
 MEM_ID
 ...
 
 GROUPS:
 GRO_ID:
 ...
 
 And one joiner
 
 MEM_GRO:
 MEM_ID, GRO_ID
 
 
 I want to print out a list like this
 
 GROUP_NAME, NUMBER_OF_MEMBERS
 
 Even when the number of members is 0, how do I do that?

SELECT G.NAME AS GROUP_NAME, COUNT(M.MEM_ID) AS NUMBER_OF_MEMBERS
FROM GROUPS G
LEFT JOIN MEM_GRO AS M
ON G.GRO_ID = M.GRO_ID
GROUP BY G.GRO_ID;

(Note COUNT(M.MEM_ID) rather than COUNT(*): with the LEFT JOIN, a group
with no members still produces one all-NULL row, which COUNT(*) would
count as 1 instead of 0.)

By the way, when you use ALLCAPS for everything, it makes it very
difficult to pick out SQL keywords, and MAKES IT SEEM LIKE YOU ARE
SHOUTING.

-jay



-- 
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/[EMAIL PROTECTED]



Re: tune a geometric query

2006-08-01 Thread Jay Pipes
On Tue, 2006-08-01 at 17:39 +0530, ViSolve DB Team wrote:
 Hello Prashant,
 
 If you do need the duplicate rows in the final result, use UNION ALL with 
 your query. Otherwise you can opt for UNION as UNION is faster than UNION 
 ALL.

I have never heard of any evidence of this; in fact, it makes more sense
that the reverse would be true, as MySQL would not have to do an
implicit DISTINCT on the outermost resultset.
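
A trivial way to see the difference:

SELECT 1 UNION SELECT 1;      -- one row: duplicates removed
SELECT 1 UNION ALL SELECT 1;  -- two rows: no duplicate check

That duplicate-removal pass is extra work, which is why UNION ALL is
generally the faster of the two when you don't need the deduplication.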

Prashant:

Please post an EXPLAIN of your original posted SQL query.  Use the \G
flag from the command line client to make it easier to read the results.
Thanks!

-jay


-- 
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/[EMAIL PROTECTED]



Re: Query Help for Loosely Couple Properties

2006-08-01 Thread Jay Pipes
On Tue, 2006-08-01 at 16:23 -0700, Robert DiFalco wrote:
 I have a table that contains properties that can be associated with any
 table whose primary key is a LONG. Lets say that there is just one kind
 of property. The table looks something like this:
 
 TABLE StringVal
   REF_ID  BIGINT// row to associate property with
   TYPE_ID BIGINT// type of string property
   VAL VARCHAR   // property value
 
   P_KEY( REF_ID, TYPE_ID )
 
 There is another table to represent a specific StringVal type along with
 its default value:
 
TABLE StringType
   ID  BIGINT   // The TYPE ID
   NAMEVARCHAR  // The unique name of this property
   DEF_VAL VARCHAR  // The default value of this property

Actually, the rub is that you are not using specific columns for
specific entity attributes, and are pretty much storing everything in
one gigantic table.  Any particular reason for this?  For instance, why
not have a column called color, instead of overcomplicating things?
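
In other words, something like (a sketch):

ALTER TABLE T ADD COLUMN color VARCHAR(20) NULL DEFAULT 'Orange';

SELECT * FROM T WHERE color = 'Orange';

One indexable column, no extra joins, and the default value lives in
the column definition instead of a separate StringType row.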

-jay


-- 
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/[EMAIL PROTECTED]



Re: Value of a referenced field

2006-06-29 Thread Jay Pipes

Hi Andreas!

Andreas Bauer wrote:
In my postgres database, the value in the
insert into command for table 2 is
currval(sequence). There are several such insert into
commands back-to-back:


insert into table1 (authorid, lastname, firstname) 
values (nextval('s_authors'), 'Meyers', 'Scott');


insert into table2 (authorid, title, subtitle)
values (currval('s_authors'), 'Effektiv C++ Programmieren',
'50 Wege zur Verbesserung Ihrer Programme und Entwuerfe');

How to in mysql?


You use NULL for the first, and the LAST_INSERT_ID() function for the
next table:


INSERT INTO table1 (authorid, lastname, firstname)
VALUES (NULL, 'Meyers', 'Scott');

INSERT INTO table2 (authorid, title, subtitle)
VALUES (LAST_INSERT_ID(), 'Effektiv C++ Programmieren',
'50 Wege zur Verbesserung Ihrer Programme und Entwuerfe');

You can read more about LAST_INSERT_ID() here:

http://dev.mysql.com/doc/refman/5.0/en/information-functions.html

Cheers!

--
Jay Pipes
Community Relations Manager, North America, MySQL Inc.
Roaming North America, based in Columbus, Ohio
email: [EMAIL PROTECTED]mob: +1 614 406 1267

Are You MySQL Certified? http://www.mysql.com/certification
Got Cluster? http://www.mysql.com/cluster

--
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/[EMAIL PROTECTED]



Re: Query Speed

2006-06-28 Thread Jay Pipes

Jesse wrote:
The error was, 
Unknown column 'primary_grouping.State' in 'on clause'.  I assume this 
is in the ON clause that's JOINing the member_counts to the 
primary_grouping.


No, that's because of a stupid mistake on my part.  Here you go:

SELECT
primary_grouping.State
, primary_grouping.Sub
, ST.StateName
, primary_grouping.ChapterType
, member_counts.member_count AS TotMem
, advisor_counts.advisor_count AS TotAdv
FROM (
  SELECT S.State, S.Sub, C.ChapterType
  FROM Schools S
   INNER JOIN Chapters C
ON S.ID = C.SchoolID
  GROUP BY S.State, S.Sub, C.ChapterType
) AS primary_grouping
INNER JOIN State ST
 ON primary_grouping.State = ST.State
INNER JOIN (
  SELECT S.State, S.Sub, C.ChapterType, COUNT(*) AS member_count
  FROM Schools S
   INNER JOIN Chapters C
ON S.ID = C.SchoolID
   INNER JOIN Members M
ON C.ID = M.ChapterID
  GROUP BY S.State, S.Sub, C.ChapterType
) AS member_counts
 ON primary_grouping.State = member_counts.State
 AND primary_grouping.Sub = member_counts.Sub
 AND primary_grouping.ChapterType = member_counts.ChapterType
INNER JOIN (
  SELECT S.State, S.Sub, C.ChapterType, COUNT(*) AS advisor_count
  FROM Schools S
   INNER JOIN Chapters C
ON S.ID = C.SchoolID
   INNER JOIN AdvChapterLink ACL
ON C.ID = ACL.ChapterID
   INNER JOIN LocalAdvisors LA
ON ACL.AdvisorID = LA.ID
AND LA.InvoiceNo IS NOT NULL
  GROUP BY S.State, S.Sub, C.ChapterType
) AS advisor_counts
 ON primary_grouping.State = advisor_counts.State
 AND primary_grouping.Sub = advisor_counts.Sub
 AND primary_grouping.ChapterType = advisor_counts.ChapterType;



- Original Message - From: Jay Pipes [EMAIL PROTECTED]
To: Jesse [EMAIL PROTECTED]
Cc: mysql mysql@lists.mysql.com
Sent: Tuesday, June 27, 2006 7:22 PM
Subject: Re: Query Speed



Jesse wrote:
I worked with the query for a while, trying equi-joins instead of 
JOINs, and various other things.  I found that the queries that I was 
using to represent the TotMem & TotAdv columns were what was closing 
things down. I finally ended up using a sub-query to solve the 
problem.  I gathered the basic information, then added the count 
columns to that, and now, I'm down to less than a second execution.  
Much better!  Here's the new query:


SELECT sq.State, sq.Sub, sq.StateName, sq.ChapterType, (SELECT 
Count(*) FROM Members M JOIN Chapters C1 ON C1.ID=M.ChapterID JOIN 
Schools S1 ON S1.ID=C1.SchoolID WHERE S1.State=sq.State AND 
S1.Sub=sq.Sub AND C1.ChapterType=sq.ChapterType) AS TotMem, (SELECT 
Count(*) FROM ((AdvChapterLink ACL JOIN LocalAdvisors LA ON 
LA.ID=ACL.AdvisorID) JOIN Chapters C2 ON C2.ID=ACL.ChapterID) JOIN 
Schools S2 ON S2.ID=C2.SchoolID WHERE S2.State=Sq.State AND 
S2.Sub=Sq.Sub AND C2.ChapterType=sq.ChapterType AND LA.InvoiceNo IS 
NOT NULL) AS TotAdv FROM (SELECT S.State, S.Sub, ST.StateName, 
C.ChapterType FROM Schools S, State ST, Chapters C WHERE 
ST.State=S.State AND C.SchoolID=S.ID GROUP BY S.State, S.Sub, 
ST.StateName, C.ChapterType) AS sq ORDER BY State, Sub, ChapterType


Correlated subqueries are evil.  You will get much better performance 
by converting the correlations to derived tables (subqueries in the 
FROM clause) like so (I formatted it differently so I could read it).


Here is your original query (from above):

SELECT
  sq.State
, sq.Sub
, sq.StateName
, sq.ChapterType
, (
    SELECT Count(*)
    FROM Members M
    JOIN Chapters C1 ON C1.ID=M.ChapterID
    JOIN Schools S1 ON S1.ID=C1.SchoolID
    WHERE S1.State=sq.State
    AND S1.Sub=sq.Sub
    AND C1.ChapterType=sq.ChapterType
  ) AS TotMem
, (
    SELECT Count(*)
    FROM (
      (
        AdvChapterLink ACL
        JOIN LocalAdvisors LA ON LA.ID=ACL.AdvisorID
      )
      JOIN Chapters C2 ON C2.ID=ACL.ChapterID
    )
    JOIN Schools S2 ON S2.ID=C2.SchoolID
    WHERE S2.State=Sq.State
    AND S2.Sub=Sq.Sub
    AND C2.ChapterType=sq.ChapterType
    AND LA.InvoiceNo IS NOT NULL
  ) AS TotAdv
FROM (
  SELECT S.State, S.Sub, ST.StateName, C.ChapterType
  FROM Schools S, State ST, Chapters C
  WHERE ST.State=S.State
  AND C.SchoolID=S.ID
  GROUP BY S.State, S.Sub, ST.StateName, C.ChapterType
) AS sq
ORDER BY
  State
, Sub
, ChapterType;

The problem is that the two correlated subqueries in the SELECT clause 
will be executed *once* for every result returned from the outer 
primary query.  That means lots of queries, needlessly.  Instead, get 
rid of all the parenthetical joins (not needed), the join to State in 
the GROUP BY (not needed) and convert the correlations into derived 
tables, using standard joins.  Remember to think in terms of the 
*sets* of data upon which you are operating, not in terms of iterating 
through a set of data. They are different concepts.  Here's the new 
query:


SELECT
primary_grouping.State
, primary_grouping.Sub
, ST.StateName
, primary_grouping.ChapterType
, member_counts.member_count AS TotMem
, advisor_counts.advisor_count AS TotAdv
FROM (
  SELECT S.State, S.Sub, C.ChapterType
  FROM Schools S
   INNER JOIN Chapters C
ON S.ID = C.SchoolID
  GROUP BY S.State, S.Sub, C.ChapterType
) AS primary_grouping
INNER JOIN (
  SELECT

Re: Query Speed

2006-06-27 Thread Jay Pipes

Jesse wrote:
I worked with the query for a while, trying equi-joins instead of JOINs, 
and various other things.  I found that the queries that I was using to 
represent the TotMem & TotAdv columns were what was closing things down.  
I finally ended up using a sub-query to solve the problem.  I gathered 
the basic information, then added the count columns to that, and now, 
I'm down to less than a second execution.  Much better!  Here's the new 
query:


SELECT sq.State, sq.Sub, sq.StateName, sq.ChapterType, (SELECT Count(*) 
FROM Members M JOIN Chapters C1 ON C1.ID=M.ChapterID JOIN Schools S1 ON 
S1.ID=C1.SchoolID WHERE S1.State=sq.State AND S1.Sub=sq.Sub AND 
C1.ChapterType=sq.ChapterType) AS TotMem, (SELECT Count(*) FROM 
((AdvChapterLink ACL JOIN LocalAdvisors LA ON LA.ID=ACL.AdvisorID) JOIN 
Chapters C2 ON C2.ID=ACL.ChapterID) JOIN Schools S2 ON S2.ID=C2.SchoolID 
WHERE S2.State=Sq.State AND S2.Sub=Sq.Sub AND 
C2.ChapterType=sq.ChapterType AND LA.InvoiceNo IS NOT NULL) AS TotAdv 
FROM (SELECT S.State, S.Sub, ST.StateName, C.ChapterType FROM Schools S, 
State ST, Chapters C WHERE ST.State=S.State AND C.SchoolID=S.ID GROUP BY 
S.State, S.Sub, ST.StateName, C.ChapterType) AS sq ORDER BY State, Sub, 
ChapterType


Correlated subqueries are evil.  You will get much better performance by 
converting the correlations to derived tables (subqueries in the FROM 
clause) like so (I formatted it differently so I could read it).


Here is your original query (from above):

SELECT
  sq.State
, sq.Sub
, sq.StateName
, sq.ChapterType
, (
    SELECT Count(*)
    FROM Members M
    JOIN Chapters C1 ON C1.ID=M.ChapterID
    JOIN Schools S1 ON S1.ID=C1.SchoolID
    WHERE S1.State=sq.State
    AND S1.Sub=sq.Sub
    AND C1.ChapterType=sq.ChapterType
  ) AS TotMem
, (
    SELECT Count(*)
    FROM (
      (
        AdvChapterLink ACL
        JOIN LocalAdvisors LA ON LA.ID=ACL.AdvisorID
      )
      JOIN Chapters C2 ON C2.ID=ACL.ChapterID
    )
    JOIN Schools S2 ON S2.ID=C2.SchoolID
    WHERE S2.State=Sq.State
    AND S2.Sub=Sq.Sub
    AND C2.ChapterType=sq.ChapterType
    AND LA.InvoiceNo IS NOT NULL
  ) AS TotAdv
FROM (
  SELECT S.State, S.Sub, ST.StateName, C.ChapterType
  FROM Schools S, State ST, Chapters C
  WHERE ST.State=S.State
  AND C.SchoolID=S.ID
  GROUP BY S.State, S.Sub, ST.StateName, C.ChapterType
) AS sq
ORDER BY
  State
, Sub
, ChapterType;

The problem is that the two correlated subqueries in the SELECT clause 
will be executed *once* for every result returned from the outer primary 
query.  That means lots of queries, needlessly.  Instead, get rid of all 
the parenthetical joins (not needed), the join to State in the GROUP BY 
(not needed) and convert the correlations into derived tables, using 
standard joins.  Remember to think in terms of the *sets* of data upon 
which you are operating, not in terms of iterating through a set of 
data.  They are different concepts.  Here's the new query:


SELECT
primary_grouping.State
, primary_grouping.Sub
, ST.StateName
, primary_grouping.ChapterType
, member_counts.member_count AS TotMem
, advisor_counts.advisor_count AS TotAdv
FROM (
  SELECT S.State, S.Sub, C.ChapterType
  FROM Schools S
   INNER JOIN Chapters C
ON S.ID = C.SchoolID
  GROUP BY S.State, S.Sub, C.ChapterType
) AS primary_grouping
INNER JOIN (
  SELECT S.State, S.Sub, C.ChapterType, COUNT(*) AS member_count
  FROM Schools S
   INNER JOIN Chapters C
ON S.ID = C.SchoolID
   INNER JOIN Members M
ON C.ID = M.ChapterID
  GROUP BY S.State, S.Sub, C.ChapterType
) AS member_counts
 ON primary_grouping.State = member_counts.State
 AND primary_grouping.Sub = member_counts.Sub
 AND primary_grouping.ChapterType = member_counts.ChapterType
INNER JOIN (
  SELECT S.State, S.Sub, C.ChapterType, COUNT(*) AS advisor_count
  FROM Schools S
   INNER JOIN Chapters C
ON S.ID = C.SchoolID
   INNER JOIN AdvChapterLink ACL
ON C.ID = ACL.ChapterID
   INNER JOIN LocalAdvisors LA
ON ACL.AdvisorID = LA.ID
AND LA.InvoiceNo IS NOT NULL
  GROUP BY S.State, S.Sub, C.ChapterType
) AS advisor_counts
INNER JOIN State ST
 ON primary_grouping.State = ST.State
ORDER BY
State
, Sub
, ChapterType;

This should reduce the number of queries actually executed to 3 instead 
of > 700 million.


Let us know the output of EXPLAIN and the speed difference.

Cheers,

--
Jay Pipes
Community Relations Manager, North America, MySQL Inc.
Roaming North America, based in Columbus, Ohio
email: [EMAIL PROTECTED]mob: +1 614 406 1267

Are You MySQL Certified? http://www.mysql.com/certification
Got Cluster? http://www.mysql.com/cluster

--
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/[EMAIL PROTECTED]



Re: mysqld refuses to run on boot

2006-06-22 Thread Jay Pipes

Fredrik Andersson wrote:

Both datadir and basedir are owned by mysql and the mysql group.

I have hit rock bottom here, it seems :(


First, check the error log for MySQL.  It should be in the datadir, named 
localhost.localdomain.err I believe on RH.  There will be an entry in 
there explaining that something is problematic with permissions.  It 
probably is the PID directory, IMO.  Make sure you know where MySQL is 
trying to create the pid file...


Let us know how things work out.

Cheers!


On 6/21/2006, Jay Pipes [EMAIL PROTECTED] wrote:


Probably a permissions issue.  Ensure that the directory in which the
pid file is created (I believe /var/run or /var/lib/mysql on RH) has
write permission for the mysql system user/group.  Also, ensure
permissions/ownership on the datadir (/var/lib/mysql) for the mysql
owner/group.

Fredrik Andersson wrote:

Hi all

I have problems getting MySQL to autoboot on my RedHat installation. I have
tried to add the mysql.server start script from the install dir to the
system with chkconfig --add mysql (I copied it to /etc/init.d/) and then
trying to add mysql to the default boot order with chkconfig mysql on
but no luck there.

I have even tried to edit rc.local with the commands service mysql start
(this works when I run it myself) and mysqld -u mysql ...

The log file says that mysql has been started and then ended directly
after..

060621 13:12:29 mysqld started
060621 13:12:33 mysqld ended

Anyone that have any tip on how to solve this?

I can add that chkconfig --list | grep mysql tells me that MySQL has
been configured to run on boot levels 3,4 and 5 correctly and go down on
1,2 and 6 but still it refuses to work. =(

Version being used is RHEL 4.3 and MySQL 5.0.22.

All help is greatly appreciated.


--
Jay Pipes
Community Relations Manager, North America, MySQL Inc.
Roaming North America, based in Columbus, Ohio
email: [EMAIL PROTECTED]mob: +1 614 406 1267

Are You MySQL Certified? http://www.mysql.com/certification
Got Cluster? http://www.mysql.com/cluster

--
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/[EMAIL PROTECTED]




--
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/[EMAIL PROTECTED]



Re: mysqld refuses to run on boot

2006-06-21 Thread Jay Pipes
Probably a permissions issue.  Ensure that the directory in which the 
pid file is created (I believe /var/run or /var/lib/mysql on RH) has 
write permission for the mysql system user/group.  Also, ensure 
permissions/ownership on the datadir (/var/lib/mysql) for the mysql 
owner/group.


Fredrik Andersson wrote:

Hi all

I have problems getting MySQL to autoboot on my RedHat installation. I have
tried to add the mysql.server start script from the install dir to the
system with chkconfig --add mysql (I copied it to /etc/init.d/) and then
trying to add mysql to the default boot order with chkconfig mysql on
but no luck there.

I have even tried to edit rc.local with the commands service mysql start
(this works when I run it myself) and mysqld -u mysql ...

The log file says that mysql has been started and then ended directly
after..

060621 13:12:29 mysqld started
060621 13:12:33 mysqld ended

Anyone that have any tip on how to solve this?

I can add that chkconfig --list | grep mysql tells me that MySQL has
been configured to run on boot levels 3,4 and 5 correctly and go down on
1,2 and 6 but still it refuses to work. =(

Version being used is RHEL 4.3 and MySQL 5.0.22.

All help is greatly appreciated.



--
Jay Pipes
Community Relations Manager, North America, MySQL Inc.
Roaming North America, based in Columbus, Ohio
email: [EMAIL PROTECTED]mob: +1 614 406 1267

Are You MySQL Certified? http://www.mysql.com/certification
Got Cluster? http://www.mysql.com/cluster

--
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/[EMAIL PROTECTED]



Re: query slow

2006-06-21 Thread Jay Pipes

luiz Rafael wrote:

Dear friends

is there any way to optimize the query below? It takes about 2 minutes to 
complete; I think it's because there is no index on the emissao field


SELECT  * FROM  `sav00_sava0400_dbf`   WHERE 2000 = YEAR(`emissao`) OR 
(1999 = YEAR(`emissao`) AND 12 <= MONTH(`emissao`))  ORDER BY emissao ASC


SELECT  * FROM  `sav00_sava0400_dbf`
WHERE emissao BETWEEN '2000-01-01' AND '2000-12-31'
UNION ALL
SELECT  * FROM  `sav00_sava0400_dbf`
WHERE emissao BETWEEN '1999-12-01' AND '1999-12-31'
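
If emissao really is unindexed, adding an index lets both halves of the
UNION do a range scan instead of a full table scan (idx_emissao is just
an illustrative name):

ALTER TABLE `sav00_sava0400_dbf` ADD INDEX idx_emissao (`emissao`);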

--
Jay Pipes
Community Relations Manager, North America, MySQL Inc.
Roaming North America, based in Columbus, Ohio
email: [EMAIL PROTECTED]mob: +1 614 406 1267

Are You MySQL Certified? http://www.mysql.com/certification
Got Cluster? http://www.mysql.com/cluster

--
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/[EMAIL PROTECTED]



Re: How To Pronounce MySQL

2006-06-08 Thread Jay Pipes

Jesse wrote:
I have been using Microsoft SQL Server for a while, and I'm now trying 
to switch all our applications over to use MySQL.  Microsoft SQL Server 
is pronounced "Sequel Server".  Is MySQL pronounced "My Sequel", or is 
it pronounced "My S-Q-L"?  I mean, generally speaking?


What Jimmy G said is correct, the official pronunciation is:

My' Ess'-Queue-El

However, as a bit of trivia, the original pronunciation is more like:

Mee' Ess'-Queue-El

because "My" is the name of Monty's daughter, and in Swedish, the 
pronunciation of "My" sounds more like "Me"...


:)

p.s.  That is also where Max-DB comes from.  Max is the name of 
Monty's son, not an implication that Max-DB is better than MySQL.


--
Jay Pipes
Community Relations Manager, North America, MySQL Inc.
Roaming North America, based in Columbus, Ohio
email: [EMAIL PROTECTED]mob: +1 614 406 1267

Are You MySQL Certified? http://www.mysql.com/certification
Got Cluster? http://www.mysql.com/cluster

--
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/[EMAIL PROTECTED]



Re: libmysqlclient_r.so not being created

2006-06-06 Thread Jay Pipes

[EMAIL PROTECTED] wrote:

Any idea why libmysqlclient_r.so is not being created when I compile after a 
configure --without-server?



Compile MySQL with the --enable-thread-safe-client configure option, 
e.g. ./configure --without-server --enable-thread-safe-client.

--
Jay Pipes
Community Relations Manager, North America, MySQL Inc.
Roaming North America, based in Columbus, Ohio
email: [EMAIL PROTECTED]mob: +1 614 406 1267

Are You MySQL Certified? http://www.mysql.com/certification
Got Cluster? http://www.mysql.com/cluster

--
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/[EMAIL PROTECTED]



Re: fetch floats/doubles in native binary representation

2006-05-26 Thread Jay Pipes

Andras Pal wrote:

Hi,
  I've a large table with lots of floats (or doubles) that i want to query
using C API. By default, in C, the rows are returned as a type of char **,
therefore floating numbers are returned as ascii strings. Is it possible
to fetch somehow directly these numbers in their original binary
representation? I know it is architecture-dependent, but if we use
localhost, it won't cause any problem if the result is casted directy to
(float) and/or (double). For our problem, it would be much more efficent
to do the queries this way than the server converts the number to ascii,
the client converts the number againt to binary (on the same machine), and
we do what we want to do after it (which is a simple multiplication, not
as time consuming as two conversions...).


Hi Andras!

Why not have the MySQL server do the computation, if it is a simple 
multiplication?  Can you post the code you are using so we might suggest 
an alternative solution?
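
For instance, a simple multiplication can go straight into the query
itself; the table and column names below are invented, since we have
not seen your schema:

SELECT id, val_a * val_b AS product
FROM measurements
WHERE id = 42;

That way only the final result crosses the client boundary, with no
double conversion.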


Cheers,

--
Jay Pipes
Community Relations Manager, North America, MySQL Inc.
Roaming North America, based in Columbus, Ohio
email: [EMAIL PROTECTED]mob: +1 614 406 1267

Are You MySQL Certified? http://www.mysql.com/certification
Got Cluster? http://www.mysql.com/cluster

--
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/[EMAIL PROTECTED]



Re: One to many meetting specific conditions

2006-05-26 Thread Jay Pipes

Scott Haneda wrote:

4.0.18-standard-log

I have a very basic one to many relationship, accounts and transactions.

There is only one account per user, but of course, there can be x
transactions.  These are once a month charges for billing.

I need to be able to select all accounts where next_charge_date <= NOW().
That's the easy part, I get all the records I want.  However, some of those
get charges through one gateway, and some get charged through another.

The transaction table has a field called merchant, let's say it can be bankA
or bankB.

So, I need a list of accounts where none of their many transaction records
has the merchant bankA.


Wouldn't something like this suit your needs?

SELECT a.account_id
FROM accounts a
LEFT JOIN transactions t
ON a.account_id = t.account_id
AND t.merchant = 'bankA'
WHERE a.next_charge_date <= NOW()
AND t.account_id IS NULL;

The join matches only bankA transactions, so accounts that have none
come back with t.account_id as NULL, and the WHERE clause keeps exactly
those.

--
Jay Pipes
Community Relations Manager, North America, MySQL Inc.
Roaming North America, based in Columbus, Ohio
email: [EMAIL PROTECTED]mob: +1 614 406 1267

Are You MySQL Certified? http://www.mysql.com/certification
Got Cluster? http://www.mysql.com/cluster

--
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/[EMAIL PROTECTED]



Re: Joining multiple tables with grouping functions

2006-05-26 Thread Jay Pipes

Ville Mattila wrote:

I am trying to get a list of all Invoices with the total sum and the
paid sum of each invoice, as well as the customer name. I tried the
following query:

SELECT Invoices.*, SUM(InvoiceContents.Amount * InvoiceContents.Price)
AS InvoiceTotal, Customers.Name, SUM(Payments.Amount) PaidTotal,
MAX(Payments.Date) LastPayment FROM Invoices LEFT JOIN InvoiceContents
ON (InvoiceContents.InvoiceID = Invoices.ID) LEFT JOIN Customers ON
(Customers.ID = Invoices.CustomerID) LEFT JOIN Payments ON
(Payments.InvoiceID = Invoices.ID) GROUP BY Invoices.ID ORDER BY
Invoices.ID;

The query works fine, but multiplies the total paid sum by the number
of matched InvoiceContents rows. If I remove the JOIN with
InvoiceContents, the query works fine (except I can't get the total sum
of the invoice).


Hi Ville!

Try this:

SELECT
i.*
, ic.InvoiceTotal
, c.Name
, COALESCE(p.PaidTotal, 0.00) AS PaidTotal
, COALESCE(p.LastPayment, 'No payment made') AS LastPayment
FROM Invoices i
 INNER JOIN (
  SELECT
InvoiceID
  , SUM(Amount * Price) AS InvoiceTotal
  FROM InvoiceContents
  GROUP BY InvoiceID
 ) AS ic
  ON i.ID = ic.InvoiceID
 INNER JOIN Customers c
  ON i.CustomerID = c.ID
 LEFT JOIN (
  SELECT
InvoiceID
  , SUM(Amount) AS PaidTotal
  , MAX(Payments.Date) AS LastPayment
  FROM Payments
  GROUP BY InvoiceID
 ) AS p
  ON i.ID = p.InvoiceID;

Just to explain, what I'm doing here is using two derived tables to get 
the aggregate information for both payments and invoices.  I have 
changed some of your LEFT JOINs to INNER JOINs, because I could not see 
a way that an invoice would *not* have a customer, or that an invoice 
would *not* have any contents.  However, I kept the LEFT JOIN on the 
payments aggregate because it is likely that an invoice would not have a 
payment associated with it.


You will notice that I have taken the aggregates (the SUM and MAX 
functions) out of the main query and put them into the derived tables 
(the subqueries in the FROM clause).  This prevents the multiplication 
effect you saw earlier.


Also, I have used the COALESCE function in the outer SELECT in order to 
eliminate any possible NULL values from the results and replace those 
with a correct value.
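
For example, COALESCE returns its first non-NULL argument:

SELECT COALESCE(NULL, 0.00);   -- yields 0.00
SELECT COALESCE(12.50, 0.00);  -- yields 12.50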


Hope this helps,

--
Jay Pipes
Community Relations Manager, North America, MySQL Inc.
Roaming North America, based in Columbus, Ohio
email: [EMAIL PROTECTED]mob: +1 614 406 1267

Are You MySQL Certified? http://www.mysql.com/certification
Got Cluster? http://www.mysql.com/cluster

--
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/[EMAIL PROTECTED]



Re: mysql performance / ndb 5.1 performance

2006-05-25 Thread Jay Pipes

Moritz Möller wrote:

Hi Dan,

there are about 2GB free, so the net size would still be 32 GB.

The queries are really optimized, 99.9% of all queries can be satisfied
without table scans.

Well, I guess I have to give NDB a chance, I hope it will help. The only
alternative I can think of is to cluster the database at the application level (use
server userID%numServers), which would be a [insert favourite non-swear-word
here] lot of work ;)


Hi Moritz!

There is an alternative solution than NdbCluster, and that would be to 
set up a replicated environment, and have commodity hardware slaves 
supply the bulk of the SELECT operations, with the 8-core machine used 
as the master replication database.
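
A rough sketch of attaching one such slave; the host, credentials, and
binlog coordinates below are placeholders, not real values:

CHANGE MASTER TO
  MASTER_HOST='master.example.com',
  MASTER_USER='repl',
  MASTER_PASSWORD='secret',
  MASTER_LOG_FILE='master-bin.000001',
  MASTER_LOG_POS=4;
START SLAVE;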


Your application server or web server would have to point SELECTs to the 
slaves for reporting purposes, and do writes to the master only.  This 
is a cheap way to get scale-out performance from commodity hardware, and 
it is pretty customizable as far as the replication layout you would 
want...


For instance, you could have your application server direct a certain 
category of queries to one slave, and another category to another slave, 
depending on traffic conditions.


BTW, how many requests/sec are you averaging, and also, what is the 
percentage reads to writes in your database?  You can get both answers 
from SHOW STATUS variables.
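
For instance, something like this; note that these counters are
cumulative since server startup, so sample them twice and take the
difference:

SHOW STATUS LIKE 'Com_select';
SHOW STATUS LIKE 'Com_insert';
SHOW STATUS LIKE 'Com_update';
SHOW STATUS LIKE 'Com_delete';
SHOW STATUS LIKE 'Questions';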


Cheers,

Jay



-Original Message-
From: Dan Trainor [mailto:[EMAIL PROTECTED] 
Sent: Thursday, May 25, 2006 1:41 AM

To: Moritz Möller; mysql@lists.mysql.com
Subject: Re: mysql performance

Moritz Möller wrote:

Hi list,

we're running some large high-traffic mysql servers, and are currently
reaching the limit of our machines.

We're using mysql 4.1 / innodb on debian, ibdata is about 35GB. Hardware

is

quad xeon dualcore, 8 GB RAM. Disk-io is nearly zero, limiting factor is
CPU.
The queries run very fast (I seldom see a process that's running longer

than

a second), but there are too many of them, I guess.

As far as I know, NDB keeps the whole database in memory, so with indices
and some mem as reserve, we'd need ~48GB (3x16 or something) in total for
NDB :(

Does someone know other solutions to this? Is NDB the only storage engine
supporting clustering?

Thanks in advantage,

Moritz





Hi -

That's quite a large database.  I, too, have been dealing with what I 
thought was a large database for this new project.  Being 2G, it hardly 
compares to your database size.


Keep in mind, however, that a 36G ibdata file does not necessarily mean 
that you are using 36G to store data.  InnoDB documents from the MySQL 
site explain ways to compact these files, possibly shrinking the size of 
ibdata files.  Another way to get a better idea of how much data you're 
actually using is to use the 'SHOW TABLE STATUS' query from within 
MySQL.  Take the 'InnoDB free:' item under the 'Comment:' column, and 
subtract this from the total size of the ibdata file(s).  This will give 
you a more accurate representation of how much of that ibdata file 
you're actually using.  I think.  (Someone mind correcting me if I'm way 
off here?)
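
A sketch of that check (substitute your real table name):

SHOW TABLE STATUS LIKE 'yourtable' \G

The Comment column reads something like InnoDB free: 4096 kB.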


NDB may not be your solution.  Even though disk-based storage is 
included with NDB in 5.1 and beyond, I'm not too sure how this will 
affect the speed of your operations.  I suppose it's worth a try, however.


Please take this advice with a grain of salt, as InnoDB is still quite 
new to me, as well.  Other things I've found to speed up large databases 
are properly making indexes and testing them with the EXPLAIN 
statement.  This alone has let me speed up our operations by as much as 
30% in most cases.


Thanks
-dant




--
Jay Pipes
Community Relations Manager, North America, MySQL Inc.
Roaming North America, based in Columbus, Ohio
email: [EMAIL PROTECTED]mob: +1 614 406 1267

Are You MySQL Certified? http://www.mysql.com/certification
Got Cluster? http://www.mysql.com/cluster

--
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/[EMAIL PROTECTED]



Re: Benchmarking

2006-05-25 Thread Jay Pipes

Dan Trainor wrote:
I'm curious as to what you guys use for benchmarking nowadays.  I'd like 
to benchmark performance of an InnoDB database on a fancy new server, 
compared to an old degraded one.


Hi Dan!

I use SysBench for most things, also MyBench for a few things (from 
Jeremy Zawodny) as well as ApacheBench (ab), supersmack (really 
customizable), and have used httperf in the past.


For general MySQL benchmarking, you can always run the MySQL benchmark 
suite (included in source distributions) on one machine and on the 
other, and see differences that way.


Cheers,

--
Jay Pipes
Community Relations Manager, North America, MySQL Inc.
Roaming North America, based in Columbus, Ohio
email: [EMAIL PROTECTED]mob: +1 614 406 1267

Are You MySQL Certified? http://www.mysql.com/certification
Got Cluster? http://www.mysql.com/cluster

--
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/[EMAIL PROTECTED]



Re: Performance Available

2006-05-23 Thread Jay Pipes
Please post your exact table schema using SHOW CREATE TABLE, and your 
exact query, along with an EXPLAIN SELECT for the query.
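
Something like this, with your real table and query substituted in:

SHOW CREATE TABLE yourtable \G
EXPLAIN SELECT * FROM yourtable WHERE field IN (1,2,3) \G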


Thanks!

-jay

Jan Gomes wrote:

Hi Guys,

I have a simple structure of tables; however, it has 50 million records and 
2.5 GB of data.
The table is MyISAM and has 4 integer fields and 1 TEXT field, one primary key with two integer fields 
and one btree index with one integer field.


There is a select on this table using an index (with one integer field), with a set of values 
for this field ( select * from table where field in (value1,value2,value3,value4,etc) ).


This select takes 4s on average.

Is this a good time for the select? How can I run this select in less time?

I have made these optimizations:

1 - Compressed the index
2 - Sorted the index with myisamchk


PS: This table is read-only; there are no inserts, updates or deletes.





--
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/[EMAIL PROTECTED]



Re: Performance Available

2006-05-23 Thread Jay Pipes

Jan Gomes wrote:

The query is:
SELECT * FROM table
where id_table2 in (1652272,1652293,1652316,1652362);

+-------------+-------+-------+--------------------+------------+---------+------+------+-------------+
| select_type | table | type  | possible_keys      | key        | key_len | ref  | rows | Extra       |
+-------------+-------+-------+--------------------+------------+---------+------+------+-------------+
| SIMPLE      | table | range | PRIMARY,table_idx1 | table_idx1 | 4       | NULL |   72 | Using where |
+-------------+-------+-------+--------------------+------------+---------+------+------+-------------+



If this is what the EXPLAIN is, and you are getting 4 second response 
times, I would guess that something is amiss.  Have you tried running an 
ANALYZE TABLE on your table to rebuild the index statistics?  Be aware 
that this will issue a table lock on the table, so do it off-hours.
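
A minimal sketch, reusing the table name from your CREATE TABLE:

ANALYZE TABLE `table`;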


Then, issue the query again and post back time results.

Also, post the output of SELECT @@version;

Cheers

Jay


PS: Pardon me for some things; my English is not very good


It's just fine!


Hi Jay,

Thanks for you attention.

Show you my structure and EXPLAIN:

CREATE TABLE `table` (
 `id_table1` int(10) unsigned NOT NULL default '0',
 `id_table2` int(10) unsigned NOT NULL default '0',
 `field1`    smallint(5) unsigned NOT NULL default '0',
 `field2`    mediumint(8) unsigned NOT NULL default '0',
 `textField` text NOT NULL,

 PRIMARY KEY  (`id_table1`,`id_table2`),
 KEY `table_idx1` (`id_table2`)
) ENGINE=MyISAM DEFAULT CHARSET=utf8 PACK_KEYS=1


 


You don't show the query you are explaining.
It certainly can't be the one in your previous post below, because column
names don't match.


+-------------+-------+-------+--------------------+------------+---------+------+------+-------------+
| select_type | table | type  | possible_keys      | key        | key_len | ref  | rows | Extra       |
+-------------+-------+-------+--------------------+------------+---------+------+------+-------------+
| SIMPLE      | table | range | PRIMARY,table_idx1 | table_idx1 | 4       | NULL |   72 | Using where |
+-------------+-------+-------+--------------------+------------+---------+------+------+-------------+

 

Please post your exact table schema using SHOW CREATE TABLE, and your 
exact query, along with an EXPLAIN SELECT for the query.


Thanks!

-jay

Jan Gomes wrote:
   


Hi Guys,

I have a simple structure of tables; however, it has 50 million records and 
2.5 GB of data.
The table is MyISAM and has 4 integer fields and 1 TEXT field, one primary key with two integer fields 
and one btree index with one integer field.


There is a select on this table using an index (with one integer field), with a set of values 
for this field ( select * from table where field in (value1,value2,value3,value4,etc) ).


This select takes 4s on average.

Is this a good time for the select? How can I run this select in less time?

I have made these optimizations:

1 - Compressed the index
2 - Sorted the index with myisamchk


PS: This table is read-only; there are no inserts, updates or deletes.

 


==
Best regards,
Jan Gomes - [EMAIL PROTECTED]





--
Jay Pipes
Community Relations Manager, North America, MySQL Inc.
Roaming North America, based in Columbus, Ohio
email: [EMAIL PROTECTED]mob: +1 614 406 1267

Are You MySQL Certified? http://www.mysql.com/certification
Got Cluster? http://www.mysql.com/cluster


--
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/[EMAIL PROTECTED]



Re: Find ids that are NOT in other two tables.

2006-05-22 Thread Jay Pipes

Yesmin Patwary wrote:

  Hi,
   
  I greatly appreciate your kind help.  Would it work if I rewrite your query as below, to remove all records from company_db that don't have any relation to the other two tables?  
   


DELETE
FROM company_db c
LEFT JOIN person p
ON c.company_id = p.company_id
LEFT JOIN customer cu
ON c.company_id = cu.company_id
WHERE p.company_id IS NULL
AND cu.company_id IS NULL;
   


  I am not sure if this join query will remove records from other tables.


The DELETE statement can specify which tables you want to delete from. 
Specify the table you want to delete from before the FROM keyword.  And, 
yes, with that change the query works properly and removes rows only 
from the tables named before FROM, as the example below indicates :)
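
Applied to your tables, that means moving the alias before FROM (a
sketch reusing your aliases):

DELETE c
FROM company_db c
LEFT JOIN person p
ON c.company_id = p.company_id
LEFT JOIN customer cu
ON c.company_id = cu.company_id
WHERE p.company_id IS NULL
AND cu.company_id IS NULL;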


[EMAIL PROTECTED]:~$ mysql -uroot -p test
Enter password:
Welcome to the MySQL monitor.  Commands end with ; or \g.
Your MySQL connection id is 1 to server version: 5.0.18-log

Type 'help;' or '\h' for help. Type '\c' to clear the buffer.

mysql> CREATE TABLE t1 (id INT NOT NULL PRIMARY KEY);
Query OK, 0 rows affected (0.21 sec)

mysql> CREATE TABLE t2 (id INT NOT NULL PRIMARY KEY);
Query OK, 0 rows affected (0.01 sec)

mysql> CREATE TABLE t3 (id INT NOT NULL PRIMARY KEY);
Query OK, 0 rows affected (0.00 sec)

mysql> INSERT INTO t1 VALUES (1),(2),(3),(4),(5);
Query OK, 5 rows affected (0.05 sec)
Records: 5  Duplicates: 0  Warnings: 0

mysql> INSERT INTO t2 VALUES (2),(4),(6),(8),(10);
Query OK, 5 rows affected (0.00 sec)
Records: 5  Duplicates: 0  Warnings: 0

mysql> INSERT INTO t3 VALUES (3),(5),(7),(9),(11);
Query OK, 5 rows affected (0.00 sec)
Records: 5  Duplicates: 0  Warnings: 0

mysql> DELETE t1 FROM t1
    -> LEFT JOIN t2 ON t1.id = t2.id
    -> LEFT JOIN t3 ON t1.id = t3.id
    -> WHERE t2.id IS NULL
    -> AND t3.id IS NULL;
Query OK, 1 row affected (0.01 sec)

mysql> SELECT * FROM t1;
++
| id |
++
|  2 |
|  3 |
|  4 |
|  5 |
++
4 rows in set (0.00 sec)

mysql> SELECT * FROM t2;
++
| id |
++
|  2 |
|  4 |
|  6 |
|  8 |
| 10 |
++
5 rows in set (0.00 sec)

mysql> SELECT * FROM t3;
++
| id |
++
|  3 |
|  5 |
|  7 |
|  9 |
| 11 |
++
5 rows in set (0.00 sec)

Hope this answers your questions!

Cheers,

-jay



Jay Pipes [EMAIL PROTECTED] wrote:
  Yesmin Patwary wrote:

Dear All,

Need to find ids that are NOT in the other two tables. With the help of PHP scripting and three queries I am able to do the task. Can this be done with one MySQL query? 

Below are the current queries: 


SELECT DISTINCT company_id FROM company_db
SELECT company_id FROM person WHERE company_id='$company_id'
SELECT company_id FROM customer WHERE company_id='$company_id'



Use outer joins:

SELECT DISTINCT c.company_id
FROM company_db c
LEFT JOIN person p
ON c.company_id = p.company_id
LEFT JOIN customer cu
ON c.company_id = cu.company_id
WHERE p.company_id IS NULL
AND cu.company_id IS NULL;




--
Jay Pipes
Community Relations Manager, North America, MySQL Inc.
Roaming North America, based in Columbus, Ohio
email: [EMAIL PROTECTED]mob: +1 614 406 1267

Are You MySQL Certified? http://www.mysql.com/certification
Got Cluster? http://www.mysql.com/cluster

--
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/[EMAIL PROTECTED]


