Re: Help needed in creating primary key, foreign key on a varchar datatype column

2004-11-21 Thread Heikki Tuuri
Dayakar,
- Original Message - 
From: Dayakar [EMAIL PROTECTED]
Newsgroups: mailing.database.myodbc
Sent: Sunday, November 21, 2004 7:39 AM
Subject: Help needed in creating primary key, foreign key on a varchar 
datatype column


Hello,
I am converting my database from Oracle to MySQL 4.1 and I want the same
structure as it is in Oracle, like primary key and foreign key references
etc.
In Oracle I have created my primary key and foreign key references on a
varchar datatype column, so can anyone help me in doing the same in
MySQL? I have tried doing the same by following the examples given in the MySQL
manual, but was not successful.
If anyone can help me by giving an example then it would be a great help
for me.
[EMAIL PROTECTED]:~/mysql-4.1/client> ./mysql test
Welcome to the MySQL monitor.  Commands end with ; or \g.
Your MySQL connection id is 2 to server version: 4.1.8-debug-log
Type 'help;' or '\h' for help. Type '\c' to clear the buffer.
mysql> CREATE TABLE t(a VARCHAR(100) NOT NULL PRIMARY KEY, b INT) TYPE=INNODB;
Query OK, 0 rows affected, 1 warning (0.01 sec)

mysql> CREATE TABLE t1(a1 INT NOT NULL PRIMARY KEY, b1 VARCHAR(50), FOREIGN KEY (b1) REFERENCES t(a)) TYPE=INNODB;
Query OK, 0 rows affected, 1 warning (0.00 sec)

mysql>
regards
dayakar
Best regards,
Heikki Tuuri
Innobase Oy
Foreign keys, transactions, and row level locking for MySQL
InnoDB Hot Backup - a hot backup tool for InnoDB which also backs up MyISAM 
tables
http://www.innodb.com/order.php

Order MySQL technical support from https://order.mysql.com/
--
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/[EMAIL PROTECTED]


Re: LOAD DATA INFILE question...

2004-11-21 Thread Patrick Connolly
Somewhere about Sat, 20-Nov-2004 at 06:27PM +0100 (give or take), Ferhat BINGOL 
wrote:

| Hi,
| 

| I have a 72 fields data txt file and I was inserting all data
| previously but now I need only some of them to dump into the table.

| I would like to select only 4 fields which are the 1st, 5th,28th
| and 71st fields.

| Is there a statement to do that?
| 

I think it would be simpler to pre-process the file using cut with the
appropriate delimiter if it's not tab-delimited already.  Then import
the reduced file.

HTH

-- 
   ___ Patrick Connolly  
 {~._.~}   
 _( Y )_  Good judgment comes from experience 
(:_~*~_:) Experience comes from bad judgment
 (_)-(_)


-- 
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/[EMAIL PROTECTED]



[solved] Re: LOAD DATA INFILE question...

2004-11-21 Thread Ferhat BINGOL
Hi Patrick,

I was doing that, but it was taking 20 seconds to chop the fields from the file.
That was the problem.

Meanwhile, about my problem: I have seen that the lines in the data files end with
\r\n, not \n, so that solved the problem.
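
In other words, telling LOAD DATA about the line terminator did the trick; roughly
like this (the file and table names here are just placeholders):

  LOAD DATA INFILE '/path/to/data.txt'
    INTO TABLE mytable
    LINES TERMINATED BY '\r\n';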

Now it takes 3 seconds to dump the file and chop the necessary fields.

thanks to MySQL.

:)

Thank you again for your answer...


- Original Message - 
From: Patrick Connolly [EMAIL PROTECTED]
To: Ferhat BINGOL [EMAIL PROTECTED]
Cc: mysql [EMAIL PROTECTED]
Sent: Sunday, November 21, 2004 10:00 AM
Subject: Re: LOAD DATA INFILE question...


 Somewhere about Sat, 20-Nov-2004 at 06:27PM +0100 (give or take), Ferhat
BINGOL wrote:

 | Hi,
 |

 | I have a 72 fields data txt file and I was inserting all data
 | previously but now I need only some of them to dump into the table.

 | I would like to select only 4 fields which are the 1st, 5th,28th
 | and 71st fields.

 | Is there a statement to do that?
 |

 I think it would be simpler to pre-process the file using cut with the
 appropriate delimiter if it's not tab-delimited already.  Then import
 the reduced file.

 HTH

 -- 
___ Patrick Connolly
  {~._.~}
  _( Y )_  Good judgment comes from experience
 (:_~*~_:) Experience comes from bad judgment
  (_)-(_)





-- 
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/[EMAIL PROTECTED]



Primary key error

2004-11-21 Thread DBS
Hi list,

MySQL newbie here and am using Navicat to learn how to manage a database for
a test OS shopping cart.  I got the below error message after importing a
table into the database.  Can anyone tell me what I could do to correct the
problem?  Would I need to add an additional column to the table in question
for the primary keys, one for each row entry?

Zen_products does not have a primary key.  Updates to this table will be
done using the following pseudo statement:
UPDATE zen_products SET ModifiedFieldsAndValues Where
 “AllFieldsAndOldValues”
Updates to a record in this table may update more than one record.

Thanks anyone,

DBS



-- 
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/[EMAIL PROTECTED]



row numbers, jumping rows

2004-11-21 Thread João Borsoi Soares
Hello list,

I wonder if someone could help me to get the fastest result on my query.
I need to classify one entity, according to a specific attribute (let's
call it X), looking at a sample extracted from a database table. 
First I need to sort my sample ordered by a specific table field (X).
Then I should divide my sorted sample into 10 equal groups (same number of
rows). Finally, I should classify my entity (1 to 10), finding out in
which group its attribute X fits.

Let me explain my thoughts. First I should make a SELECT COUNT(*).. to
find the total number of rows in my sample. Then I make queries using
LIMIT until I find which group the X attribute fits in. In the worst
case I will have to do 10 queries, which I think would take too long in
my case.

I wanted to do it in only one query. It would be possible if I could
make a query that returns results jumping over rows, like STEP N where N
would be the number of items in each group. Is there anything in MySQL
that can give me that? I also thought that if I could have the row numbers in
the query result, I could solve it with a simple condition like WHERE
rowNum % N = 0.
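
For instance, I imagine something roughly like this, using a user variable as a row
counter (the table and column names are only placeholders, and I have not tested
whether the numbering is reliable):

  SET @rownum := 0;
  SELECT rownum, x
  FROM (SELECT @rownum := @rownum + 1 AS rownum, x
        FROM (SELECT x FROM sample ORDER BY x) AS sorted) AS numbered
  WHERE rownum % 1000 = 0;   -- here N = 1000 rows per group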

Any ideas?

Thanks.




-- 
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/[EMAIL PROTECTED]



how to create views in mysql4.1

2004-11-21 Thread Dayakar
Hello friends,

Can anyone help me in creating a view using MySQL 4.1? If we cannot, is there any 
other alternative? It would be great if anyone could give me an example of that.


regards
dayakar


SMILE ALL THE WHILE..

-- 
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/[EMAIL PROTECTED]



Re: SQL code sought to remove duplicates, using most recent date

2004-11-21 Thread Drew Ferguson
On Sat, 20 Nov 2004 19:14:55 -0800
[EMAIL PROTECTED] wrote:

 they have the same perid, program and startdate.  I would very much like
 to get a listing for each perid that shows their perid, program, startdate
 and then the latest (only the latest) stopdate.  This would give unique

SELECT perid,prog,startdt,MAX(stopdt) FROM test GROUP BY perid,prog,startdt

or

SELECT perid,prog,MIN(startdt),MAX(stopdt) FROM test GROUP BY perid,prog

might do what you want

-- 
Drew

-- 
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/[EMAIL PROTECTED]



Re: Help needed in creating primary key, foreign key on a varchar datatype column

2004-11-21 Thread Rhino
I just checked the definition of CREATE TABLE in the MySQL manual
http://dev.mysql.com/doc/mysql/en/CREATE_TABLE.html and I didn't see
anything that suggests that primary keys based on VARCHAR columns need to be
treated differently than other column types in MySQL. However, when I tried
to create a table based on a VARCHAR(500) column, I got a message that said:

ERROR 1074: Too big column length for column 'account_number' (max = 255).
Use BLOB instead

Therefore, if your column is more than 255 characters long, you will have to
use BLOB instead of VARCHAR. If your column is 255 or fewer characters long,
change your column definition to CHAR from VARCHAR.

Also, I believe you will have to include the phrase 'Type=INNODB' in your
table definition; I think only the INNODB engine actually enforces FOREIGN
KEY constraints.

Here is an example of creating two tables, a parent and a child, that use
the INNODB engine:

drop table if exists dept;
create table dept(
deptno char(3) not null,
deptname varchar(36) not null,
mgrno char(6),
primary key(deptno)
) Type=InnoDB;

drop table if exists emp;
create table emp(
empno char(6) not null,
firstnme char(12) not null,
midinit char(1),
lastname char(15) not null,
workdept char(3) not null,
salary dec(9,2) not null,
primary key(empno),
index(workdept),
foreign key(workdept) references dept(deptno) on delete restrict
) Type=InnoDB;

Table 'dept' is the parent table and contains one row for each department of
an imaginary company. Table 'emp' is the child table and contains one row
for each employee of the imaginary company. Each row in the 'emp' table has
a department number in its 'workdept' column; 'workdept' is the foreign key
and points to the 'deptno' column of the 'dept' table.
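
For example, with those tables in place, inserts like the following (the values are
made up) should succeed, while an 'emp' row whose workdept does not exist in 'dept'
should be rejected with a foreign key error:

insert into dept values ('A01', 'Accounting', '000010');
insert into emp values ('000010', 'John', 'Q', 'Doe', 'A01', 55000.00);
insert into emp values ('000020', 'Jane', 'R', 'Roe', 'Z99', 60000.00); -- fails: no 'Z99' in dept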

Rhino


- Original Message - 
From: Dayakar [EMAIL PROTECTED]
To: [EMAIL PROTECTED]
Sent: Sunday, November 21, 2004 12:39 AM
Subject: Help needed in creating primary key, foreign key on a varchar
datatype column


Hello,

I am converting my database from Oracle to MySQL 4.1 and I want the same
structure as it is in Oracle, like primary key and foreign key references etc.

In Oracle I have created my primary key and foreign key references on a
varchar datatype column, so can anyone help me in doing the same in MySQL?
I have tried doing the same by following the examples given in the MySQL manual,
but was not successful.

If anyone can help me by giving an example then it would be a great help for
me.

regards
dayakar


-- 
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/[EMAIL PROTECTED]



Re: Primary key error

2004-11-21 Thread Rhino

- Original Message - 
From: DBS [EMAIL PROTECTED]
To: [EMAIL PROTECTED]
Sent: Sunday, November 21, 2004 7:37 AM
Subject: Primary key error


 Hi list,

 MySQL newbie here and am using Navicat to learn how to manage a database
for
 a test OS shopping cart.  I got the below error message after importing a
 table into the database.  Can anyone tell me what I could do to correct
the
 problem?  Would I need to add an additional column to the table in
question
 for the primary keys, one for each row entry?

 Zen_products does not have a primary key.  Updates to this table will be
 done using the following pseudo statement:
 UPDATE zen_products SET ModifiedFieldsAndValues Where
  AllFieldsAndOldValues
 Updates to a record in this table may update more than one record.

You may not need to add any columns to the table, although I can't say for
sure since you don't provide a definition of the existing table or describe
the data in it.

MySQL or Navicat wants your table to have a primary key, which it apparently
doesn't have. The good news is that you can add a primary key to an existing
table, even after it has data in it. The primary key definition can identify
a single column or a combination of columns as the primary key. Here are
examples of both:

alter table mytable add primary key (id);

alter table mytable add primary key(area_code, phone_number);

You need to look at your table definition and choose a column or combination
of columns that uniquely identifies each row in the table. Then write and
execute the appropriate ALTER TABLE statement. If the ALTER TABLE statement
works, it should prove that your analysis was correct and you have chosen an
appropriate column or columns as the primary key; if the ALTER TABLE fails,
it will probably be because you have analyzed the data incorrectly and
chosen something that isn't unique for the data in the table.

If you can't find a unique column or combination of columns in your table,
you may have to add an additional column or columns to the table to ensure
that a primary key is possible on the table, then add the primary key via
the ALTER TABLE statement.
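
For example, something like this (untested, and assuming the column name 'id' is not
already in use) adds a new auto-increment column and makes it the primary key in one
step:

alter table zen_products add column id int not null auto_increment primary key;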

Choosing a good primary key is a non-trivial task; if you have no experience
with data modelling, particularly normalization, you should get some help
from someone who has this experience. It is not particularly hard to do but
if you have no experience, it is difficult to explain briefly how to do it
in a way that you are likely to understand.

If you have no one to help you, you can try posting a detailed description
of your data and perhaps someone on this mailing list can help you figure
out the best primary key for your data.

Rhino




-- 
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/[EMAIL PROTECTED]



Re: how to create views in mysql4.1

2004-11-21 Thread Rhino

- Original Message - 
From: Dayakar [EMAIL PROTECTED]
To: [EMAIL PROTECTED]
Sent: Sunday, November 21, 2004 9:38 AM
Subject: how to create views in mysql4.1


 Hello friends,

 Can any one help me in creating view using mysql4.1. If we cannot then any
other alternative. It would be great if anyone give me an example for that.


Have you tried looking in the manual?
http://dev.mysql.com/doc/mysql/en/CREATE_VIEW.html

Rhino


-- 
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/[EMAIL PROTECTED]



Re: how to create views in mysql4.1

2004-11-21 Thread Jeff Smelser
On Sunday 21 November 2004 08:38 am, Dayakar wrote:

 Can any one help me in creating view using mysql4.1. If we cannot then any
 other alternative. It would be great if anyone give me an example for that.

That's a 5.0 feature... You're not gonna get too far with 4.1.
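
Once you're on 5.0, it's just something like this (the table and column names are
made up):

CREATE VIEW active_orders AS
  SELECT order_id, customer_id, total
  FROM orders
  WHERE status = 'open';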

Jeff
-- 
===
Jabber: tradergt@(smelser.org|jabber.org)
Quote:Tagline thievery... coming up on the next Geraldo!
===




Re: InnoDB data files keep growing with innodb_file_per_table

2004-11-21 Thread Heikki Tuuri
Ivan,
I have now analyzed your ibdata1 file.
As I suspected, the 'history list' was corrupt. It was broken at about 
transaction 1 500 000. Current trx id was already 20 million. The history 
list length was 8.5 million!

Breakpoint 12, trx_purge_rseg_get_next_history_log (rseg=0x402c5268)
   at trx0purge.c:644
644 rseg->last_page_no = FIL_NULL;
(gdb) print log_hdr
$12 = (trx_ulogf_t *) 0x482dd7a6 
(gdb) x/50b log_hdr
0x482dd7a6: 0x00 0x00 0x00 0x00 0x00 0x18 0x06 0xf4
0x482dd7ae: 0x00 0x00 0x00 0x00 0x00 0x18 0x06 0xf5
0x482dd7b6: 0x00 0x00 0x17 0xd4 0x00 0x00 0x00 0x00
0x482dd7be: 0x8c 0x00 0x00 0x00 0x18 0x01 0x17 0xf6
0x482dd7c6: 0x16 0xc4 0xff 0xff 0xff 0xff 0x00 0x00
0x482dd7ce: 0x00 0x00 0x03 0x0f 0x16 0xe6 0x17 0xf6
0x482dd7d6: 0x1c 0x01
I do not know why the list had broken. The prev field is 0xFFFFFFFF, which 
means FIL_NULL.

I have now added to 4.1.8 some debug code to track this. SHOW INNODB STATUS 
now prints the history list length. And if the list length exceeds 20 000 
when purge thinks it has purged everything, mysqld will print to the .err 
log a warning like below:

041121 15:40:33  InnoDB: Warning: purge reached the head of the history 
list,
InnoDB: but its length is still reported as 8546148! Make a detailed bug
InnoDB: report, and post it to bugs.mysql.com

We will see how common the history list corruption is. My tests did not 
produce any corruption.

Best regards,
Heikki Tuuri
Innobase Oy
Foreign keys, transactions, and row level locking for MySQL
InnoDB Hot Backup - a hot backup tool for InnoDB which also backs up MyISAM 
tables
http://www.innodb.com/order.php

Order MySQL technical support from https://order.mysql.com/
- Original Message - 
From: Heikki Tuuri [EMAIL PROTECTED]
Newsgroups: mailing.database.myodbc
Sent: Thursday, November 11, 2004 9:59 PM
Subject: Re: InnoDB data files keep growing with innodb_file_per_table


John,
please zip ibdata1, which is 'only' 100 MB, and upload it when you have 
shut
down mysqld.

I have been simulating your workload, but I only get 25 segments. No leak
seen.
Regards,
Heikki
- Original Message - 
From: John B. Ivski [EMAIL PROTECTED]
Newsgroups: mailing.database.myodbc
Sent: Thursday, November 11, 2004 8:17 PM
Subject: Re: InnoDB data files keep growing with innodb_file_per_table


Heikki,
Heikki Tuuri wrote:
hmm... could it be that segments 0 1, 0 2, 0 3, etc. were printed close
to the end of the output? The print routine first prints inode pages
that are completely used, and after that other inode pages. Since the
tablespace validation said the tablespace is ok, I guess the segments
really are there.
You're absolutely right, they're there - I must've missed them when
looking through the output.
They're not at the end but around 0 17000, though.
SEGMENT id 0 16683 space 0; page 11430; res 1 used 1; full ext 0
fragm pages 1; free extents 0; not full extents 0: pages 0
SEGMENT id 0 16684 space 0; page 11430; res 1 used 1; full ext 0
fragm pages 1; free extents 0; not full extents 0: pages 0
SEGMENT id 0 1 space 0; page 2; res 2 used 2; full ext 0
fragm pages 2; free extents 0; not full extents 0: pages 0
SEGMENT id 0 2 space 0; page 2; res 1 used 1; full ext 0
fragm pages 1; free extents 0; not full extents 0: pages 0
SEGMENT id 0 3 space 0; page 2; res 1 used 1; full ext 0
fragm pages 1; free extents 0; not full extents 0: pages 0
SEGMENT id 0 4 space 0; page 2; res 1 used 1; full ext 0
fragm pages 1; free extents 0; not full extents 0: pages 0
SEGMENT id 0 5 space 0; page 2; res 1 used 1; full ext 0
fragm pages 1; free extents 0; not full extents 0: pages 0
SEGMENT id 0 6 space 0; page 2; res 0 used 0; full ext 0
fragm pages 0; free extents 0; not full extents 0: pages 0
SEGMENT id 0 7 space 0; page 2; res 1 used 1; full ext 0
fragm pages 1; free extents 0; not full extents 0: pages 0
SEGMENT id 0 8 space 0; page 2; res 0 used 0; full ext 0
fragm pages 0; free extents 0; not full extents 0: pages 0
SEGMENT id 0 9 space 0; page 2; res 1 used 1; full ext 0
fragm pages 1; free extents 0; not full extents 0: pages 0
SEGMENT id 0 10 space 0; page 2; res 4 used 4; full ext 0
fragm pages 4; free extents 0; not full extents 0: pages 0
SEGMENT id 0 11 space 0; page 2; res 1 used 1; full ext 0
fragm pages 1; free extents 0; not full extents 0: pages 0
SEGMENT id 0 12 space 0; page 2; res 0 used 0; full ext 0
fragm pages 0; free extents 0; not full extents 0: pages 0
SEGMENT id 0 13 space 0; page 2; res 1 used 1; full ext 0
fragm pages 1; free extents 0; not full extents 0: pages 0
SEGMENT id 0 14 space 0; page 2; res 0 used 0; full ext 0
fragm pages 0; free extents 0; not full extents 0: pages 0
SEGMENT id 0 15 space 0; page 2; res 160 used 160; full ext 2
fragm pages 32; free extents 0; not full extents 0: pages 0
SEGMENT id 0 17259 space 0; page 2; res 1 

Re: SQL code sought to remove duplicates, using most recent date

2004-11-21 Thread Michael Stassen
[EMAIL PROTECTED] wrote:
Hi!
I have a problem that may only be solvable using a programming approach, but
I'm wondering if there is an SQL method in MySQL.
The problem is that I have a table that looks like this:
CREATE TABLE `test` ( `uid` mediumint(6) NOT NULL auto_increment,
  `perid` varchar(10) default NULL,  
  `prog` char(2) default NULL,
  `startdt` date default NULL,
  `stopdt` date default NULL, 
   PRIMARY KEY  (`uid`) 
) TYPE=MyISAM AUTO_INCREMENT=1 ;

Please see a listing (below) of some records, for this is my problem.  The
perid identifies people (person id is perid).  The next code says what
program they are in (always a 2 digit code).  Then there is a startdate and
a stopdate.  Some records (id# 471) are repeated because people are in the
same program and the startdates are on the same day, and the stopdates are
the only way to cull out the repeats.  When I say repeats I mean they
have the same perid, program and startdate. 
This doesn't make sense to me.  Are you saying that in your data model it is 
appropriate for the same person to start the same program on the same day 2 
or 3 times?  Or do you mean that it isn't, but your application has allowed 
that to happen, and now you are trying to clean up?  It appears that you are 
changing a person's stopdt by adding a new row, rather than by updating the 
existing row.  Usually, that's not what you want.

Going forward, after you've removed the duplicates, I'd suggest adding a 
unique constraint to prevent duplicates:

  ALTER TABLE test
  ADD UNIQUE per_prog_start (perid, prog, startdt);
Then change the stopdt by updating the existing row, rather than adding a 
new row.

I would very much like to get a
listing for each perid that shows their perid, program, startdate and then
the latest (only the latest) stopdate.  This would give unique records if I
were only to examine the perid, program code and startdate.
1 222 JP  2004-01-26 2006-01-26
2  38 SW  2004-01-21 2004-04-21
3 471 BP  2004-01-09 2004-06-01
4 471 BP  2004-01-09 2004-06-06
5 471 JP  2004-06-01 2004-06-08
6 471 JP  2004-06-08 2006-06-08
7 471 JP  2004-06-01 2006-06-15
8 471 JP  2004-11-10 2006-11-10
9 440 OT  2004-02-19 2004-04-01
   10 479 JP  2003-11-14 2003-12-02
So is there some way to do this using an SQL query?
Yes.
  SELECT perid, prog, startdt, MAX(stopdt)
  FROM test
  GROUP BY perid, prog, startdt;
Note that you can't get the uid with this simple query, because it isn't one 
of the grouped columns.  If you need the uid, as I expect you do, it becomes 
a little more complicated.

  CREATE TEMPORARY TABLE tmp
  ( `perid` varchar(10) default NULL,
`prog` char(2) default NULL,
`startdt` date default NULL,
`stopdt` date default NULL
  );
  LOCK TABLES test READ;
  INSERT INTO tmp
SELECT perid, prog, startdt, MAX(stopdt)
FROM test
GROUP BY perid, prog, startdt;
  SELECT test.*
  FROM test JOIN tmp USING (perid, prog, startdt, stopdt)
  ORDER BY uid;
  UNLOCK TABLES;
  DROP TABLE tmp;
This method is documented in the manual 
http://dev.mysql.com/doc/mysql/en/example-Maximum-column-group-row.html, 
along with a simpler version using subqueries if you have mysql 4.1.

Of course, that just selects the rows you want.  If you want to remove the 
duplicates from test, you have a few options.  If there aren't too many, you 
could

  SELECT perid, prog, startdt
  FROM test
  GROUP BY perid, prog, startdt
  HAVING COUNT(stopdt) > 1;
to find the groups with duplicates, then manually remove the bad rows.
To have mysql do it, we modify the temp table solution from above:
  CREATE TEMPORARY TABLE tmp
  ( `perid` varchar(10) default NULL,
`prog` char(2) default NULL,
`startdt` date default NULL,
`stopdt` date default NULL
  );
  LOCK TABLES test WRITE;
  INSERT INTO tmp
SELECT perid, prog, startdt, MAX(stopdt)
FROM test
GROUP BY perid, prog, startdt;
  DELETE test
  FROM test LEFT JOIN tmp USING (perid, prog, startdt, stopdt)
  WHERE tmp.perid IS NULL;
  UNLOCK TABLES;
  DROP TABLE tmp;
Thanks so much for looking at this!
Cheers!
-Alex
Michael
--
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/[EMAIL PROTECTED]


temporary tables, replication, and innodb tables

2004-11-21 Thread Mike Debnam
I'm working on a new series of queries that share a temporary table.
I've tried using both create temporary table ... select and create
temp table then insert into t ... select from y. Both seem to
create a lock that (if the select portion runs long enough) causes a
deadlock with the replication thread on that box (it's a slave).

When the select portion runs more than innodb_lock_wait_timeout
seconds the slave replication thread dies with the errors:

041119 16:54:06 [ERROR] Slave: Error 'Lock wait timeout exceeded; try
restarting transaction' on query. ., Error_code: 1205
041119 16:54:06 [ERROR] Error running query, slave SQL thread aborted.
Fix the problem, and restart the slave SQL thread with SLAVE START.
We stopped at log 'db-bin.81' position 65976472

Am I missing something here, or is the only way to avoid potential
problems with the slave replication thread to increase
innodb_lock_wait_timeout to a large enough value that it will be
longer than any potential select for the temporary table?
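
That is, bumping something like this in my.cnf on the slave, where 600 seconds is
just an example value:

[mysqld]
innodb_lock_wait_timeout = 600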

All innodb tables, MySQL 4.1.7 for both master and slaves.

Thanks.

Mike

-- 
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/[EMAIL PROTECTED]



counting sequences

2004-11-21 Thread starr
Hi!

I need to use SQL to count some sequences.  We have taken a short snapshot of 1 
year for people registered in programs.  So we have data in a table like so:

ID m1 m2 m3 m4 m5 m6 m7 m8 m9 m10 m11 m12

The m1, m2, m3 refers to month 1, month2, month3, etc.

The data for 1 person might look like so:

0023  1 1 1 0 0 2 2 1 1 0 1 0    and there is 1 row for each person.

The 1 means that they were in a program called SW, the 2 means Other,
and the 0 means Not in any program.

My problem is to count durations for each person, by program, and get average 
stays in program 1 and in program 2.

Is there a nice way to do this with SQL?   
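
The best I can come up with so far is something like the following (the table name
'registrations' is made up), which only counts the total months per person in each
program; it does not handle separate stays, so I suspect the average-stay part needs
application code:

  SELECT ID,
         (m1=1)+(m2=1)+(m3=1)+(m4=1)+(m5=1)+(m6=1)
       + (m7=1)+(m8=1)+(m9=1)+(m10=1)+(m11=1)+(m12=1) AS months_in_sw,
         (m1=2)+(m2=2)+(m3=2)+(m4=2)+(m5=2)+(m6=2)
       + (m7=2)+(m8=2)+(m9=2)+(m10=2)+(m11=2)+(m12=2) AS months_in_other
  FROM registrations;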

Thanks heaps!

-Alex


Re: temporary tables, replication, and innodb tables

2004-11-21 Thread Heikki Tuuri
Mike,
- Original Message - 
From: Mike Debnam [EMAIL PROTECTED]
Newsgroups: mailing.database.myodbc
Sent: Sunday, November 21, 2004 9:25 PM
Subject: temporary tables, replication, and innodb tables


I'm working on a new series of queries that share a temporary table.
I've tried using both create temporary table  select and create
temp table then insert into t ... select from y. Both seem to
create a lock that (if the select portion runs long enough) causes a
deadlock with the replication thread on that box (it's a slave).
do you have binlogging switched on in the slave? If yes, then CREATE ... 
SELECT ... will indeed take locks on the rows it scans in the SELECT. If 
not, then the SELECT is a consistent, non-locking read. Can you switch off 
binlogging?

Another solution is to use SELECT ... INTO OUTFILE. That is always processed 
as a consistent read.
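
That is, roughly like this (the file name is only an example):

SELECT ... INTO OUTFILE '/tmp/tmp_result.txt' FROM y WHERE ...;
LOAD DATA INFILE '/tmp/tmp_result.txt' INTO TABLE t;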

When the select portion runs more than innodb_lock_wait_timeout
seconds the slave replication thread dies with the errors:
041119 16:54:06 [ERROR] Slave: Error 'Lock wait timeout exceeded; try
restarting transaction' on query. ., Error_code: 1205
041119 16:54:06 [ERROR] Error running query, slave SQL thread aborted.
Fix the problem, and restart the slave SQL thread with SLAVE START.
We stopped at log 'db-bin.81' position 65976472
Am I missing something here or is the only way to avoid potential
problems with the slave replication thread is to increase
innodb_lock_wait_timeout to a large enough value that it will be
longer than any potential select for the temporary table?
That is a solution.
This locking problem is yet another case where the upcoming MySQL row-level 
binlogging will help. Then we do not need locks on the SELECT table ever, 
because execution does not need to be serializable.

All innodb tables, MySQL 4.1.7 for both master and slaves.
Thanks.
Mike
Best regards,
Heikki Tuuri
Innobase Oy
Foreign keys, transactions, and row level locking for MySQL
InnoDB Hot Backup - a hot backup tool for InnoDB which also backs up MyISAM 
tables
http://www.innodb.com/order.php

Order MySQL technical support from https://order.mysql.com/ 

--
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/[EMAIL PROTECTED]


Remote Connecting

2004-11-21 Thread Danesh Daroui
Hi all,
I have a Linux Fedora Core 3 machine which is supposed to act as 
database server and a Windows XP client. I had installed mysql server 
4.1.5 before and I have defined remote user on my Linux machine as below:

 GRANT ALL PRIVILEGES ON *.* TO 'RemoteUser'@'%' IDENTIFIED BY 
'password' WITH GRANT OPTION;

and I could connect to my server from my Windows XP client like below:
mysql -h 192.168.1.2 -u RemoteUser -p
and everything was OK. Now, I have upgraded both my server and client to 
MySQL 4.1.7, which is the latest version, and now I cannot connect to my 
server as before. Can anyone help? Is there something that has been 
changed in the new version?

Thanks,
Danesh Daroui


--
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/[EMAIL PROTECTED]


Re: Remote Connecting

2004-11-21 Thread Peter
Hello Danesh,
what error do you get after the upgrade ?
Peter
Danesh Daroui wrote:
Hi all,
I have a Linux Fedora Core 3 machine which is supposed to act as 
database server and a Windows XP client. I had installed mysql server 
4.1.5 before and I have defined remote user on my Linux machine as below:

  GRANT ALL PRIVILEGES ON *.* TO 'RemoteUser'@'%' IDENTIFIED BY 
'password' WITH GRANT OPTION;

and I could connect to my server from my Windows XP client like below:
mysql -h 192.168.1.2 -u RemoteUser -p
and everything was OK. Now, I have upgraded my both server and client to 
MySQL 4.1.7 which is the latest version and now I can not connect to my 
server as before. Can anyone help ? Is there something which has been 
changed in new version ?

Thanks,
Danesh Daroui



--
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/[EMAIL PROTECTED]


very slow concurrent queries on HEAP table

2004-11-21 Thread Bernd Heller
Hi all,
I've got a rather odd performance problem with concurrent queries here. 
My query regrettably always needs to do a full table scan - really 
can't be helped. So my
idea was to cache the data in a HEAP table to get maximum performance 
out of it and it works really well, I'm down to 0.07 seconds for a 
single query.
Now the problem I just found is when I run 10 or 20 identical queries 
at the same time: my CPU goes up to 100% for a surprisingly long time, 
and when I look at what mysqld is doing with a profiler, it's burning 
70% of its time in pthread_cond_wait and pthread_mutex_lock.

To me this looks as if the table gets locked and the queries don't 
really execute concurrently, but I'm not sure how to find out what is 
going on and there are no updates or inserts happening, just plain 
selects. Table_locks_immediate increments by 1 for each query, but 
Table_locks_waited remains at 0. Also, SHOW PROCESSLIST says all 
queries are in the 'Sending data' state most of the time.

I'm not sure what to do about this, but the performance is not acceptable 
right now. The table has 100,000 rows at present and each 
row contains only ints of different sizes totaling to about 200 bytes 
per row. The heap table is small enough to fit into memory, and there 
is also no swapping or thrashing during the queries, so it should 
execute lightning fast, right? it just doesn't. This is mysql 4.1.7 
(official max binary) on MacOS X.

Any help would be very much appreciated!
Bernd
--
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/[EMAIL PROTECTED]


loading data file causes errors

2004-11-21 Thread Eric Wagar
I am moving a table from a 3.23.56 db to a 4.1.7 db.  I am currently only testing 
to see if I can.  So far, I have been able to create the receiving table, but 
not to insert the data.  (The only difference I see when using 
phpMyAdmin is the collation column on the 4.1.7 server)

When I try to load the data, it responds with:
mysql> load data infile 'xoops_users2' into table test_cm_xoops.xoops_users2;
Query OK, 24117 rows affected, 65535 warnings (2.28 sec)
Records: 24117  Deleted: 0  Skipped: 0  Warnings: 1615838

The load looks as though it is reading the lines of the file 
(24080 of them), but the data is not coming in.

I don't see any warnings to the screen, and I don't see anything in the log.  
(I am using Linux RH9, and I am assuming the log is in /var/log.  The file is 
very tiny, so maybe I am not even logging.)

What I'd like to know is how do I see what the errors are?  Where would I find 
the log files, and what would they be named?  (Or, would my.cnf need to be 
configured with something other than 'err-log'?)
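
Or is there a statement I should be running right after the load to see them,
something like this (just guessing at the syntax):

mysql> show warnings;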

Thanks
eric

-- 
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/[EMAIL PROTECTED]



Problems With 4.0 - 4.1 Upgrade on FreeBSD 4.10

2004-11-21 Thread Gustafson, Tim
Hello

I have upgraded from mySQL 4.0 to mySQL 4.1 on a FreeBSD 4.10 box.

The upgrade went perfectly with no warning or error messages.  However,
after the mySQL server was running for a few hours, it crashed.  It
restarted itself thanks to mysqld_safe.  However, it keeps crashing and
restarting itself, with the following message in
/var/db/mysql/my.hostname.com.err:

mysqld got signal 11;
This could be because you hit a bug. It is also possible that this
binary
or one of the libraries it was linked against is corrupt, improperly
built,
or misconfigured. This error can also be caused by malfunctioning
hardware.
We will try our best to scrape up some info that will hopefully help
diagnose
the problem, but since we have already crashed, something is definitely
wrong
and this may fail.

Now, I suppose it could be hardware or the O/S, but this box had been
running mySQL 4.0 in the exact same configuration for about a year.  The
hardware has not changed, so I doubt that there is a problem with it.  I
tried recompiling the server without optimizations as well, just in
case, but that didn't help either.

Has anyone else had similar problems with upgrading mySQL on a FreeBSD
box?

Thanks in advance!

Tim Gustafson
MEI Technology Consulting, Inc
[EMAIL PROTECTED]
(516) 379-0001 Office
(516) 480-1870 Mobile/Emergencies
(516) 908-4185 Fax
http://www.meitech.com/ 





Ok now connector/J doesn't work.

2004-11-21 Thread William R. Mussatto
Mark Matthews said:
 -BEGIN PGP SIGNED MESSAGE-
 Hash: SHA1

 William R. Mussatto wrote:
 Keith Ivey said:

William R. Mussatto wrote:


I've been googling for 1/2 hr w/o any answers.  Sorry if I've missed
 the obvious.
Problem: fresh install of MySQL 4.1.7, AS perl 5.8
DBI and DBD-Mysql via ppm.
Also new connector/J version

Client does not support authentication protocol


What version of MySQL were you using previously?  If it was 4.1.0 or
 earlier, then this
might be useful reading:

http://dev.mysql.com/doc/mysql/en/Password_hashing.html

--
Keith Ivey [EMAIL PROTECTED]
Washington, DC

 Thanks, I'll check it out.  I did the development under linux using
 3.23.xx and perl 5.6 Basically debian stable.

 From the article it looks like I'll have to fall back to a 4.0
 version.

 I was hoping, but using a clean install on the windows box to avoid
 these issues.
 I'll try  SET PASSWORD FOR 'some_user'@'some_host' =
 OLD_PASSWORD('mypass');

 William,

 You'll need a copy of DBD::MySQL that's linked with libmysql from 4.1 to
 get around this issue (and use the new server-side prepared statements
 as well). Patrick Galbraith (a MySQL engineer) has pushed the required
 changes into the CVS repo of DBD::MySQL, however that hasn't been
 released yet as a binary by the DBD::MySQL maintainer  (DBD::MySQL
 version  2.9015)

 Thanks that worked, now to see what happends when I try w/connector/J

 As long as you're using a recent version of Connector/J (3.0.15 or
 3.1.4), it'll work fine.

 Regards,

   -Mark
Tried it and now I get connection timed out after three tries.  I'm
running the testbed using the Tomcat buried in NetBeans.  Cut and pasted
the example from the online docs.  Put the jar in the WEB-INF/lib
directory.  Compiles fine.  I'm using the inline driver registration
rather than modifying the
web.xml file.

Tested user on the same box with command line client using the
--host=localhost and it connected fine (had run grant all for the
'username'@'localhost' earlier).  Any thoughts?

Of course I've been using your old mm version with 3.23.x (to match our
debian install).

With 4.1.7, both the old mm connector and the current Connector/J time
out. Anything I should check? And no, there is no local firewall installed
on the box.

-- 
William R. Mussatto, Senior Systems Engineer
Ph. 909-920-9154 ext. 27
FAX. 909-608-7061



-- 
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/[EMAIL PROTECTED]



Re: loading data file causes errors

2004-11-21 Thread Eric Wagar
 I am moving a table from a 3.23.56 db to a 4.1.7 db.  I currently only
 testing to see if I can.  So far, I have been able to create the receiving
 table, but not be able to insert the data.  (The only difference I see when
 using phpMyAdmin is the collation column on the 4.1.7 server)

 When I try to load the data, it responds with:
 mysql> load data infile 'xoops_users2' into table
 test_cm_xoops.xoops_users2; Query OK, 24117 rows affected, 65535 warnings
 (2.28 sec)
 Records: 24117  Deleted: 0  Skipped: 0  Warnings: 1615838

 The load is looking as though it is reading the line numbers of the file
 (24080), but the data is not coming in.

 I don't see any warnings to the screen, and I don't see anything in the
 log. (I am using Linux RH9, and I am assuming the log is in /var/log.  The
 file is very tiny, so maybe I am not even logging.)

 What I'd like to know is how do I see what the errors are?  Where would I
 find the log files, and what would they be named?  (Or, would my.cnf need
 to be configured with something other than 'err-log'?)

I have figured my problem out.  I was trying to do too much through 
phpMyAdmin.  I was able to get the data into the db by doing '/usr/bin/mysql 
-u root -p test_cm_xoops < xoops_users2', which worked flawlessly.

Now that I have that figured out, I am on my way to the fun tedious part of 
changing the field names.

Thanks
eric

-- 
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/[EMAIL PROTECTED]