RE: Index help

2007-11-12 Thread Rob Wultsch
On Nov 8, 2007 4:08 PM, Stut [EMAIL PROTECTED] wrote:
 Hi all,

 I've inherited a PHP app that uses a MySQL database. The following query
is extremely slow and I've been battling for a couple of days on and off
 to try and get a combination of indexes to optimise it. Any help would
 be greatly appreciated.


 select household_d.id, household_d.ad_type, household_d.ad_catid,
 household_d.ad_renewed, household_d.ad_userid,
 household_d.ad_trade, household_d.price,
 SUBSTRING(household_d.description, 1, 301) as description,
 users.issuperseller, users.phone, users.town
 from household_d
 left join users on household_d.ad_userid = users.id
 where household_d.ad_catid in (1, 2, 3, 4, 5, 6, 7, 8, 9, 10)
and household_d.ad_status = 'live'
and household_d.id not in (1, 2)
 order by ad_renewed desc
 limit 0,14


 Explain currently states that it's using the primary key for the users
 table, and the following for the household_d table...

 select_type = SIMPLE
 type = ref
 key = ad_status
 ref = const
 key_len = 1
 rows = 22137
 extra = Using where; Using filesort

 Running locally this query is pretty fast, but on the live site it is
 currently taking anything up to a minute. My limited knowledge of MySQL
 indexes led me to add an index with ad_catid, ad_status, ad_renewed and
 id, but explain only says it's a possible key, it doesn't actually use it.

 Any tips appreciated.

A few things pop out at me:
1. Mysql is known for not dealing with 'IN(' or 'OR' queries particularly well.
Could you instead do something like:
where household_d.ad_catid BETWEEN 1 AND 10
?
1.1 Same deal with household_d.id
2. I am going to guess that ad_status has very low cardinality. I
generally build up composite indexes by creating an index on whatever
column I think would have the highest useful cardinality. I then test
it to make sure mysql actually uses the new index. If it does not I
figure out why. I then add another column to the index and test the
query to make sure extra length gets used and the query did not get
slower. Rinse, wipe, repeat.
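Rob's build-and-test loop can be sketched in a few lines. This uses Python's bundled sqlite3 as a stand-in for MySQL (SQLite's EXPLAIN QUERY PLAN plays the role of MySQL's EXPLAIN here); the index name and the reduced table layout are made up for illustration:

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE household_d (id INTEGER PRIMARY KEY,"
            " ad_catid INTEGER, ad_status TEXT, ad_renewed INTEGER)")
# Step 1 of the loop: add a candidate composite index...
con.execute("CREATE INDEX idx_cat_status ON household_d (ad_catid, ad_status)")

# Step 2: ...then check the plan to confirm the optimizer actually uses it
plan = con.execute(
    "EXPLAIN QUERY PLAN "
    "SELECT id FROM household_d WHERE ad_catid = 3 AND ad_status = 'live'"
).fetchall()
detail = " ".join(row[-1] for row in plan)
print(detail)  # should mention idx_cat_status
```

If the plan doesn't mention the new index, that's the cue to figure out why before adding the next column.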

-- 
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe: http://lists.mysql.com/[EMAIL PROTECTED]



Re: Index help

2007-11-12 Thread Stut

Hi Rob,

Thanks for your reply.

Rob Wultsch wrote:

On Nov 8, 2007 4:08 PM, Stut [EMAIL PROTECTED] wrote:

Hi all,

I've inherited a PHP app that uses a MySQL database. The following query
is extremely slow and I've been battling for a couple of days on and off
to try and get a combination of indexes to optimise it. Any help would
be greatly appreciated.


select household_d.id, household_d.ad_type, household_d.ad_catid,
household_d.ad_renewed, household_d.ad_userid,
household_d.ad_trade, household_d.price,
SUBSTRING(household_d.description, 1, 301) as description,
users.issuperseller, users.phone, users.town
from household_d
left join users on household_d.ad_userid = users.id
where household_d.ad_catid in (1, 2, 3, 4, 5, 6, 7, 8, 9, 10)
   and household_d.ad_status = 'live'
   and household_d.id not in (1, 2)
order by ad_renewed desc
limit 0,14


Explain currently states that it's using the primary key for the users
table, and the following for the household_d table...

select_type = SIMPLE
type = ref
key = ad_status
ref = const
key_len = 1
rows = 22137
extra = Using where; Using filesort

Running locally this query is pretty fast, but on the live site it is
currently taking anything up to a minute. My limited knowledge of MySQL
indexes led me to add an index with ad_catid, ad_status, ad_renewed and
id, but explain only says it's a possible key, it doesn't actually use it.

Any tips appreciated.


A few things pop out at me:
1. Mysql is known for not dealing with 'IN(' or 'OR' queries particularly well.
Could you instead do something like:
where household_d.ad_catid BETWEEN 1 AND 10
?
1.1 Same deal with household_d.id


Unfortunately not. Neither ad_catid nor id is sequential.


2. I am going to guess that ad_status has very low cardinality. I
generally build up composite indexes by creating an index on whatever
column I think would have the highest useful cardinality. I then test
it to make sure mysql actually uses the new index. If it does not I
figure out why. I then add another column to the index and test the
query to make sure extra length gets used and the query did not get
slower. Rinse, wipe, repeat.


That's basically what I was doing, but I wasn't getting anywhere. Since 
I posted the question I've thrown more RAM at the server and it's a lot 
better now. I still worry that it's using where and filesort, but it's 
possible there's no way to make it any quicker. I've not had any 
complaints about the speed since I put more RAM in.


Thanks again.

-Stut




Re: Index help

2007-11-12 Thread Rob Wultsch
On Nov 12, 2007 7:57 AM, Stut [EMAIL PROTECTED] wrote:
 Hi Rob,

 Thanks for your reply.


 Rob Wultsch wrote:
  On Nov 8, 2007 4:08 PM, Stut [EMAIL PROTECTED] wrote:
  Hi all,
 
  I've inherited a PHP app that uses a MySQL database. The following query
  is extremely slow and I've been battling for a couple of days on and off
  to try and get a combination of indexes to optimise it. Any help would
  be greatly appreciated.
 
 
  select household_d.id, household_d.ad_type, household_d.ad_catid,
  household_d.ad_renewed, household_d.ad_userid,
  household_d.ad_trade, household_d.price,
  SUBSTRING(household_d.description, 1, 301) as description,
  users.issuperseller, users.phone, users.town
  from household_d
  left join users on household_d.ad_userid = users.id
  where household_d.ad_catid in (1, 2, 3, 4, 5, 6, 7, 8, 9, 10)
 and household_d.ad_status = 'live'
 and household_d.id not in (1, 2)
  order by ad_renewed desc
  limit 0,14
 
 
  Explain currently states that it's using the primary key for the users
  table, and the following for the household_d table...
 
  select_type = SIMPLE
  type = ref
  key = ad_status
  ref = const
  key_len = 1
  rows = 22137
  extra = Using where; Using filesort
 
  Running locally this query is pretty fast, but on the live site it is
  currently taking anything up to a minute. My limited knowledge of MySQL
  indexes led me to add an index with ad_catid, ad_status, ad_renewed and
  id, but explain only says it's a possible key, it doesn't actually use it.
 
  Any tips appreciated.
 
  A few things pop out at me:
  1. Mysql is known for not dealing with 'IN(' or 'OR' queries particularly
  well.
  Could you instead do something like:
  where household_d.ad_catid BETWEEN 1 AND 10
  ?
  1.1 Same deal with household_d.id

 Unfortunately not. Neither ad_catid nor id are sequential.

  2. I am going to guess that ad_status has very low cardinality. I
  generally build up composite indexes by creating an index on whatever
  column I think would have the highest useful cardinality. I then test
  it to make sure mysql actually uses the new index. If it does not I
  figure out why. I then add another column to the index and test the
  query to make sure extra length gets used and the query did not get
  slower. Rinse, wipe, repeat.

 That's basically what I was doing, but I wasn't getting anywhere. Since
 I posted the question I've thrown more RAM at the server and it's a lot
 better now. I still worry that it's using where and filesort, but it's
 possible there's no way to make it any quicker. I've not had any
 complaints about the speed since I put more RAM in.

 Thanks again.

 -Stut

If you have to deal with it again, consider using a bunch of UNIONs
instead of the 'IN'. It's not the prettiest thing, but it should fix your
performance issue.
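The UNION rewrite can be sanity-checked with a small sketch. This uses Python's sqlite3 as a stand-in for MySQL, with a stripped-down, hypothetical version of the household_d table; the point is just that the two forms return identical rows:

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE household_d (id INTEGER PRIMARY KEY,"
            " ad_catid INTEGER, ad_renewed INTEGER)")
con.executemany("INSERT INTO household_d VALUES (?, ?, ?)",
                [(1, 1, 10), (2, 2, 30), (3, 9, 20), (4, 4, 40)])

# Original form: one query with IN
in_q = con.execute(
    "SELECT id, ad_renewed FROM household_d WHERE ad_catid IN (1, 2, 4) "
    "ORDER BY ad_renewed DESC LIMIT 2").fetchall()

# Rewrite: one branch per category; each branch is a separate query
# that can use an index on ad_catid on its own
union_q = con.execute(
    "SELECT id, ad_renewed FROM household_d WHERE ad_catid = 1 "
    "UNION ALL SELECT id, ad_renewed FROM household_d WHERE ad_catid = 2 "
    "UNION ALL SELECT id, ad_renewed FROM household_d WHERE ad_catid = 4 "
    "ORDER BY ad_renewed DESC LIMIT 2").fetchall()

print(in_q, union_q)
```

The ORDER BY and LIMIT apply to the whole compound, so the result set is the same as the IN version.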




Re: Index help

2007-11-12 Thread Rob Wultsch
On Nov 12, 2007 9:22 AM, Afan Pasalic [EMAIL PROTECTED] wrote:
  If you have to deal with it again, consider using a bunch of UNIONs
 instead of the 'IN'. It's not the prettiest thing, but it should fix your
 performance issue.
  Could you please give me more details about your statement that mysql deals
 not so well with IN, and it's better to use UNIONS?
  I just tested two queries, using IN and UNIONs (using MySQL Browser), and
 I'm getting almost identical download/fetch times.
Someone call me on it if any of the following is incorrect:
Prior to MySQL 5.0, queries using OR or IN would not use an index
unless the leftmost portion of the index was the portion using the
OR/IN. MySQL 5.0 and later can use an index merge to replicate a UNION
optimization (among others).

Mysql treats each query in a union as a completely separate query. If
the reason the query was not using an index was because of the IN or
OR a unions might allow mysql to use the index.  Look at the explain
of the union query. Is it using an index on whichever column you are
trying to optimize? I am going to guess it is not, or even if it is
the query is not very efficient otherwise.

What version of mysql are you on?
What % of the entries would be matched by the IN?

MySQL used to choose to do a table scan if an index would not
eliminate a significant portion (like 70%+) of the rows. The algorithm has
since changed, but that remains a good rule of thumb.




Transactions and locking

2007-11-12 Thread Yves Goergen
Hi,

there's a lot of information about how transactions and locking work
in InnoDB, but maybe there's also a simple and understandable answer to
my simple question:

When I start a transaction, then find the maximum value of a column and
use that + 1 to write a new row into the table, how do transactions
protect me from somebody else doing the same thing so that we'd both end
up writing a new row with the same value?

Here's a description:

BEGIN TRANSACTION
new_id := (SELECT MAX(id) FROM table) + 1
-- some more work here
INSERT INTO table (id, ...) VALUES (new_id, ...)
COMMIT

What happens if another user does the same in that "more work" region?

(Of course, this example is pseudocode, I really have a PHP application
that does this.)

-- 
Yves Goergen LonelyPixel [EMAIL PROTECTED]
Visit my web laboratory at http://beta.unclassified.de




Re: Transactions and locking

2007-11-12 Thread Martijn Tonies
Hello Yves,

 there's a lot of information about how transactions and locking work
 in InnoDB, but maybe there's also a simple and understandable answer to
 my simple question:

 When I start a transaction, then find the maximum value of a column and
 use that + 1 to write a new row into the table, how do transactions
 protect me from somebody else doing the same thing so that we'd both end
 up writing a new row with the same value?

They won't; a constraint protects you from inserting a new row with
the same value.

 Here's a description:

 BEGIN TRANSACTION
 new_id := (SELECT MAX(id) FROM table) + 1
 -- some more work here
 INSERT INTO table (id, ...) VALUES (new_id, ...)
 COMMIT

 What happens if another user does the same in that "more work" region?

You will end up with the same new_id value, but the primary key
constraint - if you have one - will reject the insert.

Transactions come in multiple flavors; have a look at the different
isolation levels:
http://dev.mysql.com/doc/refman/5.0/en/commit.html
http://dev.mysql.com/doc/refman/5.0/en/set-transaction.html

Depending on your isolation level, for example, you will see new rows
in the table between your begin transaction and select max... or
between two select max statements. Other isolation levels will give
you the same max value when reading the value twice, even though
someone else inserted a new row.

Martijn Tonies
Database Workbench - development tool for MySQL, and more!
Upscene Productions
http://www.upscene.com
My thoughts:
http://blog.upscene.com/martijn/
Database development questions? Check the forum!
http://www.databasedevelopmentforum.com





Re: Transactions and locking

2007-11-12 Thread Perrin Harkins
On Nov 12, 2007 1:25 PM, Yves Goergen [EMAIL PROTECTED] wrote:
 When I start a transaction, then find the maximum value of a column and
 use that + 1 to write a new row into the table, how do transactions
 protect me from somebody else doing the same thing so that we'd both end
 up writing a new row with the same value?

Usually you would use an auto_increment column for this.  If you want
to do it manually, you either need to lock the whole table (to prevent
rows from being added) or do the work in one statement (untested):

INSERT INTO table (id) SELECT MAX(id) + 1 FROM table;

You could also keep a separate table that just holds the current ID in
a single row and use an update to get it (also untested):

UPDATE counter SET id = LAST_INSERT_ID(id + 1);

Putting the LAST_INSERT_ID in there lets you grab the id afterward in
the same way you get it from an auto_increment, without doing another
select.
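The single-statement approach can be exercised with sqlite3 standing in for MySQL (LAST_INSERT_ID() is MySQL-specific, so this sketch uses the MAX()+1 form, with COALESCE added so the first insert works on an empty table; the table name is made up):

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE t (id INTEGER PRIMARY KEY)")

# Read-the-max and insert happen in one statement, so no other session
# can slip an insert in between the two steps
for _ in range(3):
    con.execute("INSERT INTO t (id) SELECT COALESCE(MAX(id), 0) + 1 FROM t")

ids = [row[0] for row in con.execute("SELECT id FROM t ORDER BY id")]
print(ids)  # [1, 2, 3]
```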

- Perrin





Re: Transactions and locking

2007-11-12 Thread Yves Goergen
Okay, I feel like I need to clarify some things.

I do have a UNIQUE INDEX constraint on those columns, so the other user
won't actually write the same value another time, but it will fail at a
level which it should not.

I don't want to use AUTO_INCREMENT because it's not portable. My
application should work on MySQL and SQLite (and maybe someday it will
also run on many other systems - today, incompatibilities are just too big).

Here's another example:

SELECT COUNT(*) FROM table WHERE name = ?
-- a short delay which is long enough for a concurrent request :(
UPDATE table SET name = ? WHERE id = ?

I do the first query to find out whether my new name is already
assigned. Each name can only appear one time. If I just try and update
the row, the query will fail, but I don't know why. All I could do is
try and parse the error message, but this will be DBMS-dependent. I'd
like to do it in a way so that I can tell the user whether the name was
not unique or there was another error. But this case should be detected
separately.

I'll have a look at those isolation levels though. Maybe it's what I'm
looking for.

-- 
Yves Goergen LonelyPixel [EMAIL PROTECTED]
Visit my web laboratory at http://beta.unclassified.de




which duplicate key was hit on last insert?

2007-11-12 Thread Lev Lvovsky
We have tables in our database that, in addition to primary key  
constraints also have unique() constraints of several columns in the  
table:


CREATE TABLE Test (
    COL1 INT UNSIGNED AUTO_INCREMENT PRIMARY KEY,
    COL2 VARCHAR(10) NOT NULL,
    COL3 VARCHAR(10) NOT NULL,
    UNIQUE (COL2, COL3)
);

There are two insert scenarios which would cause a DUPLICATE KEY  
error - one which contained a pre-existing COL1 value, and another  
which contained a pre-existing COL2,COL3 value.  Is there any way to  
differentiate between which KEY, 'PRIMARY', or 'UNIQUE' (as listed by  
the 'show create table Test' in the mysql client) was actually  
violated in the last insert?


This is specifically for use with the ON DUPLICATE KEY UPDATE  
clause.  Normally we use behavior this to produce a no-op upon adds  
of identical records, however this can cause problems in the case  
that the KEY that was DUPLICATE was in fact the UNIQUE() key, and not  
the PRIMARY key.  Knowing which of these triggered the DUPLICATE key  
error would be helpful in determining what to do next - is this  
information stored anyplace?


thanks,
-lev




Simple Query

2007-11-12 Thread Ben Wiechman
I need help writing what is probably a rather simple query.

 

I have two tables. The first contains several columns, but most importantly
an id column.

 

The second has two columns, an id that corresponds with the id in the
first table, and a value.

 

For every row in the first table I'd like to insert a row into the second
with a set value.

 

I tried this but it is not working. I'm not very familiar with subqueries as
you can see.

 

insert into table_2 ( id, value ) values ( (select id from table_1), '1' );

 

Ben Wiechman

[EMAIL PROTECTED]

 



Re: Transactions and locking

2007-11-12 Thread Perrin Harkins
On Nov 12, 2007 2:43 PM, Yves Goergen [EMAIL PROTECTED] wrote:
 SELECT COUNT(*) FROM table WHERE name = ?
 -- a short delay which is long enough for a concurrent request :(
 UPDATE table SET name = ? WHERE id = ?

I think that even with SERIALIZABLE isolation level, this won't lock
anything if it doesn't match any rows, so someone could do an insert
between those statements.  I could be wrong about that.

The alternative is to lock the table.  I'm not sure how that would be
done in SQLite, although SQLite works by taking an exclusive write
lock on the entire database so it may not be an issue.

- Perrin




Re: Transactions and locking

2007-11-12 Thread Yves Goergen
On 12.11.2007 20:43 CE(S)T, Yves Goergen wrote:
 I'll have a look at those isolation levels though. Maybe it's what I'm
 looking for.

Not quite. But I'm going the LOCK TABLES way now. Locking a single table
exclusively for those rare moments seems to be the best solution.

I could also implement an abstraction for that, because other DBMS have
different syntax to do the same thing. Since I only need these locks for
a very short time and a single table with no transaction support, this
works fine for me.

-- 
Yves Goergen LonelyPixel [EMAIL PROTECTED]
Visit my web laboratory at http://beta.unclassified.de




Re: Simple Query

2007-11-12 Thread Peter Brawley
I tried this but it is not working. I'm not very 
familiar with subqueries as you can see. 



insert into table_2 ( id, value ) values ( (select id from table_1), '1' );


insert into table_2 (id,value)
select id,1 from table_1;
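A quick check of that INSERT ... SELECT form (sqlite3 standing in for MySQL; columns as described in the question, with the set value stored as text):

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE table_1 (id INTEGER PRIMARY KEY, other TEXT)")
con.execute("CREATE TABLE table_2 (id INTEGER, value TEXT)")
con.executemany("INSERT INTO table_1 (id) VALUES (?)", [(1,), (2,), (3,)])

# One row in table_2 per row in table_1, all with the same set value
con.execute("INSERT INTO table_2 (id, value) SELECT id, '1' FROM table_1")

rows = con.execute("SELECT id, value FROM table_2 ORDER BY id").fetchall()
print(rows)
```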

PB


Ben Wiechman wrote:

I need help writing what is probably a rather simple query.

 


I have two tables. The first contains several columns, but most importantly
an id column.

 


The second has two columns, an id that corresponds with the id in the
first table, and a value.

 


For every row in the first table I'd like to insert a row into the second
with a set value.

 


I tried this but it is not working. I'm not very familiar with subqueries as
you can see.

 


insert into table_2 ( id, value ) values ( (select id from table_1), '1' );

 


Ben Wiechman

[EMAIL PROTECTED]

 



  





Processlist full of Opening tables

2007-11-12 Thread Samuel Vogel

Hey guys,

I run MySQL on a high-traffic server with approximately 10k 
databases. For some time now MySQL has been very sluggish.
When I look at my processlist it shows more than 25 processes (sometimes 
on the same table) with status "Opening tables". Some processes also 
show "closing tables".
Since I am running a shared hosting environment, I cannot examine the 
situation closely enough to see whether table locks are involved. But 
that does not seem to be the case, since it happens not just for some 
users but for everybody. Also, when I run a simple select query it 
takes more than 2 sec.


I guess MySQL is trying to tell me that my hard drive is too slow. How 
could I ease this situation?
My key_buffer is set to 1Gb of my 4Gb system memory. Other values are 
tweaked as well, but I do not think they matter in this case.

Would partitioning the disks as RAID 0 ease the situation?

What other places do I have to look at to further narrow down the problem?

Regards,
Samy




Re: Transactions and locking

2007-11-12 Thread Yves Goergen
On 12.11.2007 22:16 CE(S)T, Yves Goergen wrote:
 Since I only need these locks for
 a very short time and a single table with no transaction support, this
 works fine for me.

Damn, I found out that I need table locking *and* transactions. I'm lost...

Maybe I'm really better off using a sequence (like the one PostgreSQL
offers and like it is available as an add-on for Perl [1]). But then
again, I need queries outside of a transaction so that the sequence's
next number is immediately committed and visible to other users. I have
the impression that it all doesn't work.

[1]
http://search.cpan.org/~adamk/DBIx-MySQLSequence-1.00/lib/DBIx/MySQLSequence.pm

-- 
Yves Goergen LonelyPixel [EMAIL PROTECTED]
Visit my web laboratory at http://beta.unclassified.de




secure mysql port

2007-11-12 Thread Kelly Opal
Hi
Is there any way to restrict access to the TCP port on MySQL? I only
want my 5 class C's to be able to access the port, but it is a public
server.

Any help would be greatly appreciated.

Kelly





Re: Transactions and locking

2007-11-12 Thread Perrin Harkins
On Nov 12, 2007 5:24 PM, Yves Goergen [EMAIL PROTECTED] wrote:
 Damn, I found out that I need table locking *and* transactions.

What makes you say that?

 Maybe I'm really better off using a sequence (like the one PostgreSQL
 offers and like it is available as an add-on for Perl [1]).

That Perl module uses the exact technique I described to you with
updates and LAST_INSERT_ID().

Frankly, doing the insert and checking for an error seems like a
pretty reasonable solution to me, since you only have two databases to
care about at this point.

- Perrin




Re: Transactions and locking

2007-11-12 Thread Yves Goergen
On 12.11.2007 23:31 CE(S)T, Perrin Harkins wrote:
 On Nov 12, 2007 5:24 PM, Yves Goergen [EMAIL PROTECTED] wrote:
 Damn, I found out that I need table locking *and* transactions.
 
 What makes you say that?

BEGIN TRANSACTION
SELECT MAX(id) FROM table
INSERT INTO table (id) VALUES (?)
INSERT INTO othertable (id) VALUES (?)
COMMIT

First I find a new id value, then I do several INSERTs that need to be
atomic, and especially roll back completely if a later one fails.

 That Perl module uses the exact technique I described to you with
 updates and LAST_INSERT_ID().

AUTO_INCREMENT isn't portable. Now I only support MySQL and SQLite.
But I also did PostgreSQL (until it failed one of the more complex
queries, maybe it comes back one day) and maybe Oracle or whatever will
be compatible, too, so that I then stand there with my AUTO_INCREMENT
and can't use it.

 Frankly, doing the insert and checking for an error seems like a
 pretty reasonable solution to me, since you only have two databases to
 care about at this point.

I wonder if I can safely use an error code to determine this error
condition and then just retry. Here's an interesting page:

http://dev.mysql.com/doc/refman/5.0/en/error-messages-server.html
 Error: 1022 SQLSTATE: 23000 (ER_DUP_KEY)
 Message: Can't write; duplicate key in table '%s'

No documentation for SQLite.

PostgreSQL uses several SQLSTATE codes for this situation.
(http://www.postgresql.org/docs/8.2/interactive/errcodes-appendix.html)

Something must be wrong with SQL-92 because the two reference tables
have no common SQLSTATE values for related error conditions. But
generally I think that an SQLSTATE beginning with 23 is close enough
for a match.
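The "catch the constraint error instead of parsing the message" approach looks roughly like this in a driver that maps class-23 SQLSTATEs to an exception type (sqlite3 shown here; Python's DB-API raises IntegrityError for constraint violations, which plays the role of checking that the SQLSTATE begins with 23):

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE users (id INTEGER PRIMARY KEY, name TEXT UNIQUE)")
con.execute("INSERT INTO users (name) VALUES ('alice')")

# Just try the insert; a duplicate surfaces as a typed exception,
# no message parsing needed
try:
    con.execute("INSERT INTO users (name) VALUES ('alice')")
    duplicate = False
except sqlite3.IntegrityError:
    duplicate = True

print(duplicate)
```

On a duplicate the application can report "name already taken" and retry or prompt, while any other error propagates normally.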

-- 
Yves Goergen LonelyPixel [EMAIL PROTECTED]
Visit my web laboratory at http://beta.unclassified.de




Re: Transactions and locking

2007-11-12 Thread Perrin Harkins
On Nov 12, 2007 5:58 PM, Yves Goergen [EMAIL PROTECTED] wrote:
 BEGIN TRANSACTION
 SELECT MAX(id) FROM table
 INSERT INTO table (id) VALUES (?)
 INSERT INTO othertable (id) VALUES (?)
 COMMIT

 First I find a new id value, then I do several INSERTs that need to be
 atomic, and especially roll back completely if a later one fails.

If you use a table lock on the first table where you get the ID, you
know that ID is safe to use.  Using a table lock when you get the ID
and then trusting transactions to roll back all the inserts in the
event of a later failure should work fine.
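Perrin's point, that the transaction alone undoes the earlier inserts when a later one fails, can be sketched with sqlite3 standing in for InnoDB (table names made up; the connection's context manager issues the ROLLBACK on exception):

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE a (id INTEGER PRIMARY KEY)")
con.execute("CREATE TABLE b (id INTEGER PRIMARY KEY)")
con.execute("INSERT INTO b (id) VALUES (1)")  # pre-existing row to force a failure
con.commit()

try:
    with con:  # commits on success, rolls back on exception
        con.execute("INSERT INTO a (id) VALUES (1)")
        con.execute("INSERT INTO b (id) VALUES (1)")  # duplicate key -> error
except sqlite3.IntegrityError:
    pass

# The insert into a was rolled back along with the failed insert into b
count_a = con.execute("SELECT COUNT(*) FROM a").fetchone()[0]
print(count_a)  # 0
```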

  That Perl module uses the exact technique I described to you with
  updates and LAST_INSERT_ID().

 AUTO_INCREMENT isn't portable.

You're misunderstanding.  The LAST_INSERT_ID() function doesn't use
AUTO_INCREMENT.  That's why the perl module uses it.  It just copies
the value you pass to it and makes that available without another
select.  It's not portable to SQLite, but you can use a sequence there
instead.

- Perrin




Error while installing DBD::mysql

2007-11-12 Thread Siva Prasad


Hi All,

I don't know whether this is the correct group or not; just a hope that I may
get a solution.

I have installed DBI, and when I run the following Perl script:

 

--SCRIPT

use strict;
use warnings;
use DBI;

my $DSN = "DBI:mysql:database=faculte;host=localhost;port=3306";
my $DB_OBJ = DBI->connect($DSN, "root", "sqlroot123");

#my @dr = DBI->installed_drivers;
#print "@dr\n";

my $sql = "select * from fac_class";

my $sth = $DB_OBJ->prepare($sql);
my $data = $sth->execute();
print "$data\n";

--SCRIPT

 

I got the error below:

-
install_driver(mysql) failed: Can't locate DBD/mysql.pm in @INC (@INC contains:
C:/Perl/site/lib C:/Perl/lib .) at (eval 4) line 3.
Perhaps the DBD::mysql perl module hasn't been fully installed,
or perhaps the capitalisation of 'mysql' isn't right.
Available drivers: DBM, ExampleP, File, Gofer, Proxy, SQLite, Sponge.
 at db_interac.pl line 7
-

 

 

So I downloaded the Perl DBD::mysql module and tried installing it, with no
luck.

I used the following command to run Makefile.PL on Windows:

perl Makefile.PL --cflags="-IC:\Program Files\MySQL\MySQL Server 5.0\include"

The above paths are correct.

 

I got the following error:

-
Failed to determine directory of mysqlclient.lib. Use

  perl Makefile.PL --libs="-Ldir -lmysqlclient -lz -lm -lcrypt -lnsl"

to set this directory. For details see the INSTALL.html file,
section "Linker flags" or type

  perl Makefile.PL --help
-

I tried to run mysql_config --cflags, with no luck; I got the error below:

'mysql_config' is not recognized as an internal or external command,
operable program or batch file.

 

Can anybody help me resolve the above problem?

 






Re: which duplicate key was hit on last insert?

2007-11-12 Thread Michael Dykman
On Nov 12, 2007 6:42 PM, Lev Lvovsky [EMAIL PROTECTED] wrote:

 We have tables in our database that, in addition to primary key
 constraints also have unique() constraints of several columns in the
 table:

  CREATE TABLE Test (
      COL1 INT UNSIGNED AUTO_INCREMENT PRIMARY KEY,
      COL2 VARCHAR(10) NOT NULL,
      COL3 VARCHAR(10) NOT NULL,
      UNIQUE (COL2, COL3)
  );

 There are two insert scenarios which would cause a DUPLICATE KEY
 error - one which contained a pre-existing COL1 value, and another
 which contained a pre-existing COL2,COL3 value.  Is there any way to
 differentiate between which KEY, 'PRIMARY', or 'UNIQUE' (as listed by
 the 'show create table Test' in the mysql client) was actually
 violated in the last insert?

 This is specifically for use with the ON DUPLICATE KEY UPDATE
  clause.  Normally we use this behavior to produce a no-op upon adds
 of identical records, however this can cause problems in the case
 that the KEY that was DUPLICATE was in fact the UNIQUE() key, and not
 the PRIMARY key.  Knowing which of these triggered the DUPLICATE key
 error would be helpful in determining what to do next - is this
 information stored anyplace?


 On Nov 12, 2007, at 1:27 PM, Michael Dykman wrote:

   When I last researched the question, maybe 6 months ago, the sad truth
   was (and likely still is): no, there is no way to distinguish between
   your various unique keys when the duplicate key constraint is raised.
 
   - michael dykman

 Thanks for your reply Michael.

 In our code, we're running this via a stored procedure (for security)
 - is the only choice at this point, to do a SELECT, to check whether
 the record exists, and subsequently INSERT only if we return no rows?

 That seems like the wrong way to do it, but I'll definitely use it if
 it's my only choice.

I had a similar issue in a fairly sophisticated migration
application. In that case I was fortunate enough to have data
available in my procedure that allowed me to disambiguate logically
without another hit to the database.
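For contrast with the MySQL situation described above, SQLite's constraint error text does name the columns of the violated key, which makes the two failure cases distinguishable there. A sketch using a simplified version of the thread's Test table (sqlite3 types substituted for MySQL's):

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("""CREATE TABLE Test (
    COL1 INTEGER PRIMARY KEY,
    COL2 TEXT NOT NULL,
    COL3 TEXT NOT NULL,
    UNIQUE (COL2, COL3))""")
con.execute("INSERT INTO Test VALUES (1, 'a', 'b')")

# This row collides on the UNIQUE(COL2, COL3) key, not the primary key
try:
    con.execute("INSERT INTO Test VALUES (2, 'a', 'b')")
    msg = ""
except sqlite3.IntegrityError as e:
    msg = str(e)

print(msg)  # the error text names COL2/COL3, identifying the violated key
```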



-- 
 - michael dykman
 - [EMAIL PROTECTED]

 - All models are wrong.  Some models are useful.
