Re: how things get messed up

2010-02-18 Thread Martijn Tonies
We have applications for colleges in India, with the same idea of having a
single table for manipulating student records, but we are not following the
archiving concept.


For example, stupersonal and stuclass play a central role in our application.
After 7 years there are now 9000 records in the table [PostgreSQL backend].
Because of this, the performance of the entire application [fees, attendance,
exams, etc.] is degrading. As a remedy, I proposed this year-wise architecture
for our new version [MySQL].


I also have a problem with the year-wise approach: I have a number of mutually
related tables for students, such as stu_last_studies, stu_family_details,
stu_address, stu_extracurri and so on. If I go for a year basis, I have to
make all of the above tables year-based as well.

Hence, I feel it will be difficult to manage that many tables after a few years.

Regarding the archive system you mentioned, can you give me an idea of how it
would work? [If needed, I will provide the table structures.]


It would be a great help to me.


If you have performance problems with just 9000 records, there's something
seriously wrong with your queries and indices and/or your application code.
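
For example, a first thing to check on a table of that size (using stuclass
and a student_id column from the thread purely as an illustration; the real
column names may differ) would be something like:

EXPLAIN SELECT * FROM stuclass WHERE student_id = 12345;

-- If the plan shows a full table scan (type = ALL, key = NULL),
-- a missing index is the usual culprit:
ALTER TABLE stuclass ADD INDEX idx_student (student_id);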

With regards,

Martijn Tonies
Upscene Productions
http://www.upscene.com

Download Database Workbench for Oracle, MS SQL Server, Sybase SQL
Anywhere, MySQL, InterBase, NexusDB and Firebird!

Database questions? Check the forum:
http://www.databasedevelopmentforum.com 






RE: how things get messed up

2010-02-18 Thread Jerry Schwartz
From: Vikram A [mailto:vikkiatb...@yahoo.in] 
Sent: Wednesday, February 17, 2010 11:41 PM
To: Jerry Schwartz
Cc: MY SQL Mailing list
Subject: Re: how things get messed up

 

Dear Jerry Schwartz

We have applications for colleges in India, with the same idea of having a
single table for manipulating student records, but we are not following the
archiving concept.

For example, stupersonal and stuclass play a central role in our application.
After 7 years there are now 9000 records in the table [PostgreSQL backend].
Because of this, the performance of the entire application [fees, attendance,
exams, etc.] is degrading. As a remedy, I proposed this year-wise architecture
for our new version [MySQL].



[JS] You have 9000 records? That should not slow down any application. I must 
not understand you.


I also have a problem with the year-wise approach: I have a number of mutually
related tables for students, such as stu_last_studies, stu_family_details,
stu_address, stu_extracurri and so on. If I go for a year basis, I have to
make all of the above tables year-based as well. Hence, I feel it will be
difficult to manage that many tables after a few years.



[JS] I did not mean that you should have tables for each year. I was suggesting 
that you have tables for recent data and tables for archived data. 


Regarding the archive system you mentioned, can you give me an idea of how it
would work? [If needed, I will provide the table structures.]



[JS] This is best described with a picture. Here is a small example of what I 
meant:

 

              `student_master_table`  (all years)
                      /         \
                     /           \
         `grades_current`   `grades_archive`
                     \           /
                      \         /
               `class_master_table`

 

The structures of the two grades tables should be almost the same, something 
like

 

grade_id          auto_increment (in grades_current only)
student_id        indexed
class_id          indexed
class_start_date
grade_received

 

You would add new grade records to the `grades_current` table.
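
A minimal sketch of those two definitions, assuming InnoDB and guessing at
the column types, might be:

CREATE TABLE `grades_current` (
    `grade_id`         INT UNSIGNED NOT NULL AUTO_INCREMENT PRIMARY KEY,
    `student_id`       INT UNSIGNED NOT NULL,
    `class_id`         INT UNSIGNED NOT NULL,
    `class_start_date` DATE NOT NULL,
    `grade_received`   CHAR(2),
    KEY (`student_id`),
    KEY (`class_id`)
) ENGINE=InnoDB;

CREATE TABLE `grades_archive` LIKE `grades_current`;
-- Archived rows keep the grade_id they were originally given,
-- so the archive copy does not need AUTO_INCREMENT:
ALTER TABLE `grades_archive` MODIFY `grade_id` INT UNSIGNED NOT NULL;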

 

Now, suppose that you don’t usually need data more than five years old. Once a 
year you would run these queries:

 

INSERT INTO `grades_archive` SELECT * FROM `grades_current` WHERE
`class_start_date` < YEAR(DATE_SUB(NOW(), INTERVAL 4 YEAR));

DELETE FROM `grades_current` WHERE `class_start_date` < YEAR(DATE_SUB(NOW(),
INTERVAL 4 YEAR));
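
A variation that computes one cutoff date, compares dates to dates, and runs
both statements in a single transaction (assuming `class_start_date` is a
DATE column and the tables are InnoDB) would be:

SET @cutoff := DATE_SUB(CURDATE(), INTERVAL 4 YEAR);

START TRANSACTION;
INSERT INTO `grades_archive`
    SELECT * FROM `grades_current` WHERE `class_start_date` < @cutoff;
DELETE FROM `grades_current` WHERE `class_start_date` < @cutoff;
COMMIT;

Using the same cutoff in both statements ensures the DELETE removes exactly
the rows that were just copied.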

 

That would keep the `grades_current` table small. If you want to find a 
student’s recent grade history, you would use a query like

 

SELECT * FROM `grades_current` WHERE `student_id` = 12345;

 

If you decide that you need a student’s complete history, you could do 

 

SELECT * FROM `grades_current` WHERE `student_id` = 12345 UNION ALL SELECT * 
FROM `grades_archive` WHERE `student_id` = 12345;

 

That is a quick outline of what I was saying.

 

I don’t know how big your database is, so I can’t begin to guess whether or not 
this is necessary. On my desktop computer, where I do my testing, I have two 
tables: one has about 104000 records, the other has about 20 records. The 
query 

 

SELECT `prod`.`prod_num`, `prod_price`.`prod_price_del_format`, 
`prod_price`.`prod_price_end_price` FROM `prod` JOIN `prod_price` ON 
`prod`.`prod_id` = `prod_price`.`prod_id` WHERE `prod`.`prod_num` = 40967;

 

took 0.70 seconds. Repeating the query with different values of `prod_num`
gave progressively faster results, showing that caching works as expected:
after three such queries, the response time was down to 0.14 seconds.
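
For reference, a quick way to confirm that such a join is using indexes
rather than scanning is to run EXPLAIN on the same query:

EXPLAIN SELECT `prod`.`prod_num`, `prod_price`.`prod_price_del_format`,
    `prod_price`.`prod_price_end_price`
FROM `prod` JOIN `prod_price` ON `prod`.`prod_id` = `prod_price`.`prod_id`
WHERE `prod`.`prod_num` = 40967;

If the key column comes back NULL for either table, an index on
`prod`.`prod_num` or `prod_price`.`prod_id` is probably missing.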

 

I understand that schools in India can be very, very big; so perhaps you need 
an archive scheme such as the one I described. In fact, it might be useful to 
extend this whole concept to using an archive database, rather than archive 
tables within the same database. The database engine wouldn’t really care, but 
since the archive database wouldn’t change very often you wouldn’t have to back 
it up very often, either.
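
For example, if the archive tables lived in a separate database (the names
`college` and `college_archive` below are purely illustrative), the combined
history query would only change by qualifying the table names:

SELECT * FROM `college`.`grades_current` WHERE `student_id` = 12345
UNION ALL
SELECT * FROM `college_archive`.`grades_archive` WHERE `student_id` = 12345;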

 

Regards,

 

Jerry Schwartz

The Infoshop by Global Information Incorporated

195 Farmington Ave.

Farmington, CT 06032

 

860.674.8796 / FAX: 860.674.8341

 

www.the-infoshop.com

 

 


It would be a great help to me.

Thank you

VIKRAM A

 

  _  

From: Jerry Schwartz jschwa...@the-infoshop.com
To: Vikram A vikkiatb...@yahoo.in; Johan De Meersman vegiv...@tuxera.be
Cc: MY SQL Mailing list mysql@lists.mysql.com
Sent: Tue, 16 February, 2010 9:32:22 PM
Subject: RE: how things get messed up

-Original Message-
From: Vikram A [mailto:vikkiatb...@yahoo.in]
Sent: Friday, February 12, 2010 4:13 AM
To: Johan De Meersman
Cc: MY SQL Mailing list
Subject: Re: how things get messed up

Sir,

Thanks for your suggestion,
I will go for blob storage, because our application will maintain the data on
a yearly basis [stupersonal2008, stupersonal2009, etc.]. So I feel we may not
face this kind of performance issue in our application.

[JS] It sounds like you are planning to have one table per year. Regardless of
where you put your blobs, I think that is a bad idea from a design standpoint.
Re: how things get messed up

2010-02-18 Thread Johan De Meersman
*cough*partitioning*cough*
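
To make that concrete: a sketch of year-based RANGE partitioning for a
grades-style table (table, column names and types are illustrative only;
MySQL 5.1 requires the partitioning column to be part of every unique key):

CREATE TABLE `grades` (
    `grade_id`         INT UNSIGNED NOT NULL AUTO_INCREMENT,
    `student_id`       INT UNSIGNED NOT NULL,
    `class_id`         INT UNSIGNED NOT NULL,
    `class_start_date` DATE NOT NULL,
    `grade_received`   CHAR(2),
    PRIMARY KEY (`grade_id`, `class_start_date`),
    KEY (`student_id`)
) ENGINE=InnoDB
PARTITION BY RANGE (YEAR(`class_start_date`)) (
    PARTITION p2007 VALUES LESS THAN (2008),
    PARTITION p2008 VALUES LESS THAN (2009),
    PARTITION p2009 VALUES LESS THAN (2010),
    PARTITION pmax  VALUES LESS THAN MAXVALUE
);

Old years can then be dropped or moved with ALTER TABLE ... DROP PARTITION
instead of a yearly INSERT/DELETE job.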

On Thu, Feb 18, 2010 at 5:24 PM, Jerry Schwartz
jschwa...@the-infoshop.com wrote:

 [quoted text snipped]

RE: auto_increment weirdness

2010-02-18 Thread Gavin Towey
Reproduced in 5.1.43.  Could not reproduce it in 5.0.66
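
This looks like a consequence of InnoDB's auto-increment handling in 5.1 and
later rather than of the INSERT itself (assuming the tables end up as InnoDB;
SHOW CREATE TABLE will tell you). With innodb_autoinc_lock_mode=1, the 5.1
default ("consecutive" mode), a bulk insert such as INSERT ... SELECT reserves
auto-increment values in doubling batches, and the unused part of the last
batch is discarded, so the counter lands on a power of two. 5.0 has no lock
modes, which would explain why 5.0.66 does not show it. A way to check and,
if gap-free counters matter, to change the behavior:

SHOW VARIABLES LIKE 'innodb_autoinc_lock_mode';

-- The variable can only be set at server startup, e.g. in my.cnf:
-- [mysqld]
-- innodb_autoinc_lock_mode = 0   -- "traditional": one value at a time

Mode 0 trades the gaps for a table-level AUTO-INC lock held for the whole
statement, so it can hurt insert concurrency.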


-Original Message-
From: Yang Zhang [mailto:yanghates...@gmail.com]
Sent: Wednesday, February 17, 2010 6:05 PM
To: mysql@lists.mysql.com
Subject: auto_increment weirdness

Hi, for some reason, I have an auto_increment field that's magically
bumped up to the next biggest power of 2 after a big INSERT...SELECT
that inserts a bunch of tuples (into an empty table). Is this expected
behavior? I couldn't find any mention of this from the docs (using the
MySQL 5.4.3 beta).

Small reproducible test case:

First, generate some data: from bash, run seq 30000 > /tmp/seq

Next, run this in mysql:

create table x (a int auto_increment primary key, b int);
create table y (b int);
load data infile '/tmp/seq' into table y;
insert into x (b) select b from y;
show create table x;

This will show auto_increment = 32768 instead of 30001.

Is this a bug introduced in the beta? Has it been fixed in newer
releases? Couldn't find a mention in the bug database. Thanks in
advance.
--
Yang Zhang
http://www.mit.edu/~y_z/




ERROR 1442 (HY000) when delete inside trigger statement

2010-02-18 Thread viraj
I have two tables, T1 and T2, and one trigger.

The trigger is BEFORE UPDATE on T1, and it updates some values in T2. Once
that is done, the trigger tries to delete the subject row from T1 (DELETE
FROM T1 WHERE id = NEW.id).

I tried with a second trigger on T2 (AFTER/BEFORE UPDATE) and with a
procedure called from inside this trigger, but every time I get:

 issue: ERROR 1442 (HY000): Can't update table 'T1' in stored
function/trigger because it is already used by statement which invoked
this stored function/trigger.

I found http://forums.mysql.com/read.php?99,122354,122354#msg-122354 and
many other articles, but they all ended without a solution. (With BEFORE
UPDATE it is possible to assign new values via NEW.*, but I did not find
anything that allows a successful delete.)


Could somebody please confirm that this is not possible, so I can think
about some other workaround? :)

thanks

~viraj




MySQL Community Server 5.1.44 has been released

2010-02-18 Thread Karen Langford

Dear MySQL users,

MySQL Community Server 5.1.44, a new version of the popular Open
Source Database Management System, has been released.  MySQL 5.1.44 is
recommended for use on production systems.

For an overview of what's new in MySQL 5.1, please see

http://dev.mysql.com/doc/refman/5.1/en/mysql-nutshell.html

For information on installing MySQL 5.1.44 on new servers or upgrading
to MySQL 5.1.44 from previous MySQL releases, please see

http://dev.mysql.com/doc/refman/5.1/en/installing.html

MySQL Server is available in source and binary form for a number of
platforms from our download pages at

http://dev.mysql.com/downloads/

Not all mirror sites may be up to date at this point in time, so if
you can't find this version on some mirror, please try again later or
choose another download site.

We welcome and appreciate your feedback, bug reports, bug fixes,
patches, etc.:

http://forge.mysql.com/wiki/Contributing

For information on open issues in MySQL 5.1, please see the errata
list at

http://dev.mysql.com/doc/refman/5.1/en/open-bugs.html

The following section lists the changes in the MySQL source code since
the previous released version of MySQL 5.1.  It may also be viewed
online at

http://dev.mysql.com/doc/refman/5.1/en/news-5-1-44.html

Enjoy!

===

C.1.2. Changes in MySQL 5.1.44

   InnoDB Plugin Notes:

 * This release includes InnoDB Plugin 1.0.6. This version is
   considered of Release Candidate (RC) quality.
   In this release, the InnoDB Plugin is included in source and
   binary distributions, except RHEL3, RHEL4, SuSE 9 (x86,
   x86_64, ia64), and generic Linux RPM packages. It also does
   not work for FreeBSD 6 and HP-UX or for Linux on generic ia64.

   Functionality added or changed:

 * Replication: Introduced the
   --binlog-direct-non-transactional-updates server option. This
   option causes updates using the statement-based logging format
   to tables using non-transactional engines to be written
   directly to the binary log, rather than to the transaction
   cache.
   Before using this option, be certain that you have no
   dependencies between transactional and non-transactional
   tables. A statement that both selects from an InnoDB table and
   inserts into a MyISAM table is an example of such a
   dependency. For more information, see Section 16.1.3.4,
   Binary Log Options and Variables.
   (Bug#46364: http://bugs.mysql.com/bug.php?id=46364)
   See also Bug#28976: http://bugs.mysql.com/bug.php?id=28976,
   Bug#40116: http://bugs.mysql.com/bug.php?id=40116.

   Bugs fixed:

 * Partitioning: When an ALTER TABLE ... REORGANIZE PARTITION
   statement on an InnoDB table failed due to
   innodb_lock_wait_timeout expiring while waiting for a lock,
   InnoDB did not clean up any temporary files or tables which it
   had created. Attempting to reissue the ALTER TABLE statement
   following the timeout could lead to storage engine errors, or
   possibly a crash of the server.
   (Bug#47343: http://bugs.mysql.com/bug.php?id=47343)

 * Replication: In some cases, inserting into a table with many
   columns could cause the binary log to become corrupted.
   (Bug#50018: http://bugs.mysql.com/bug.php?id=50018)
   See also Bug#42749: http://bugs.mysql.com/bug.php?id=42749.

 * Replication: When using row-based replication, setting a BIT
   or CHAR column of a MyISAM table to NULL, then trying to
   delete from the table, caused the slave to fail with the error
   Can't find record in table.
   (Bug#49481: http://bugs.mysql.com/bug.php?id=49481,
   Bug#49482: http://bugs.mysql.com/bug.php?id=49482)

 * Replication: When logging in row-based mode, DDL statements
   are actually logged as statements; however, statements that
   affected temporary tables and followed DDL statements failed
   to reset the binary log format to ROW, with the result that
   these statements were logged using the statement-based format.
   Now the state of binlog_format is restored after a DDL
   statement has been written to the binary log.
   (Bug#49132: http://bugs.mysql.com/bug.php?id=49132)

 * Replication: When using row-based logging, the statement
   CREATE TABLE t IF NOT EXISTS ... SELECT was logged as CREATE
   TEMPORARY TABLE t IF NOT EXISTS ... SELECT when t already
   existed as a temporary table. This was caused by the fact that
   the temporary table was opened and the results of the SELECT
   were inserted into it when a temporary table existed and had
   the same name.
   Now, when this statement is executed, t is created as a base
   table, the results of the SELECT are inserted into it --- even
   if there already exists a temporary table having the same name
   --- and the statement is logged 

Re: ERROR 1442 (HY000) when delete inside trigger statement

2010-02-18 Thread Jesper Wisborg Krogh
--- Original Message ---
 From: viraj kali...@gmail.com
 To: mysql@lists.mysql.com
 Sent: 19/2/10, 05:48:41
 Subject: ERROR 1442 (HY000) when delete  inside trigger statement

  issue: ERROR 1442 (HY000): Can't update table 'T1' in stored
 function/trigger because it is already used by  statement which invoked
 this stored function/trigger.

 could somebody please confirm this is not possible!!! so i can think
 about some other workaround :)

--

That is correct. As far as I know, there is no way in a MySQL trigger either
to operate on the table the trigger belongs to (apart from the row the
trigger is firing for, through the NEW values) or to reject an insert,
update, or delete.

It is of course possible to do a workaround in a stored procedure and use 
permissions to ensure that the normal users cannot update the table directly. I 
don't know whether that will be an acceptable solution in your case though.
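
A minimal sketch of that kind of wrapper procedure (T1, T2 and the column
names below are only placeholders for the real schema):

DELIMITER //

CREATE PROCEDURE update_and_purge(IN p_id INT, IN p_value INT)
BEGIN
    -- Apply the change to T2, then remove the row from T1
    -- (the step the trigger was not allowed to perform):
    UPDATE T2 SET some_col = p_value WHERE t1_id = p_id;
    DELETE FROM T1 WHERE id = p_id;
END//

DELIMITER ;

-- Applications then call the procedure instead of updating T1 directly:
CALL update_and_purge(42, 7);

Granting the application user EXECUTE on the procedure but no direct
UPDATE/DELETE privilege on T1, as suggested above, keeps the logic in one
place.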

Jesper
