Re: InnoDB data files keep growing with innodb_file_per_table

2004-11-11 Thread Heikki Tuuri
Ivan,
- Original Message - 
From: John B. Ivski [EMAIL PROTECTED]
Newsgroups: mailing.database.myodbc
Sent: Thursday, November 11, 2004 4:55 AM
Subject: Re: InnoDB data files keep growing with innodb_file_per_table


Heikki,
the output shows that there are no dangling transactions, and purge is
not lagging behind.
Yes, that's what I thought... weird, huh :/
If you update a secondary index column, that requires purge to clean up
the index.
The tables have structure similar to the following:
...
Then do
SHOW TABLE STATUS FROM test;
What does it print as the InnoDB free space for the table test.t? That
is the free space in the system tablespace.
Unfortunately I won't be able to shut down the server until this weekend. 
Will let you know the results.

no need to do that any more, because the free space info can also be seen 
from the output of the innodb_tablespace_monitor.
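For anyone reading along in the archives: in 4.0/4.1 the monitors are enabled by
creating a specially named InnoDB table in any database (the column definition
is irrelevant). A minimal sketch, per the manual:

CREATE TABLE innodb_tablespace_monitor (a INT) TYPE=InnoDB;
-- output then appears periodically in the server's error log
DROP TABLE innodb_tablespace_monitor;
-- dropping the table turns the output off; innodb_table_monitor works the same way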

InnoDB is clearly leaking 'segments' in the system tablespace. They are 
probably undo logs. For some reason, a trx commit or purge fails to free 
them.

SEGMENT id 0 75994 space 0; page 82136; res 2720 used 2680; full ext 41
fragm pages 32; free extents 0; not full extents 1: pages 24
SEGMENT id 0 76006 space 0; page 82136; res 2592 used 2569; full ext 39
fragm pages 32; free extents 0; not full extents 1: pages 41
These big segments are about 40 MB in size. Did you do ALTER TABLE or some 
other big transaction involving a million rows?

...
SEGMENT id 0 88879 space 0; page 119802; res 1 used 1; full ext 0
fragm pages 1; free extents 0; not full extents 0: pages 0
SEGMENT id 0 0 space 0; page 119802; res 1 used 1; full ext 0
fragm pages 1; free extents 0; not full extents 0: pages 0
SEGMENT id 0 1 space 0; page 119802; res 1 used 1; full ext 0
fragm pages 1; free extents 0; not full extents 0: pages 0
Most segments seem to be undo logs resulting from small transactions.
You said that you tried max_purge_lag, and it caused lengthy waits? What 
value did you try?
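
For the archives: innodb_max_purge_lag is given at server startup, e.g. in
my.cnf with the 4.0-era set-variable syntax; the value below is only an
illustration:

set-variable = innodb_max_purge_lag=100000
# 0, the default, imposes no delay on INSERT/UPDATE/DELETE when purge lags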

I will try to repeat the problem by simulating your database workload.
Please use also innodb_table_monitor, and send the output to me.
Thanks for the advice.
Good luck,
Ivan
Best regards,
Heikki Tuuri
Innobase Oy
Foreign keys, transactions, and row level locking for MySQL
InnoDB Hot Backup - a hot backup tool for InnoDB which also backs up MyISAM 
tables
http://www.innodb.com/order.php

Order MySQL technical support from https://order.mysql.com/ 

--
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/[EMAIL PROTECTED]


Re: Help using SSL from VB client, using MyODBC

2004-11-11 Thread Gleb Paharenko
Hello.



I'm not a VB expert, but some notes about methods of establishing such 

connections can be found at:

  http://dev.mysql.com/doc/mysql/en/Windows_and_SSH.html

  http://dev.mysql.com/doc/mysql/en/mysql_ssl_set.html
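
If it is of any help: with ADO you can pass the SSL settings in a DSN-less
connection string. A sketch, assuming a MyODBC 3.51 build with SSL support
(the option names and paths here are illustrative; check the option list for
your driver version):

DRIVER={MySQL ODBC 3.51 Driver};SERVER=myhost;DATABASE=mydb;UID=myuser;PWD=mypass;SSLCA=C:\certs\ca.pem;SSLCERT=C:\certs\client-cert.pem;SSLKEY=C:\certs\client-key.pem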





William Blair Wagner [EMAIL PROTECTED] wrote:

 I'm kind of new to the SSL scene.

 I've read all I can find on MySQL.org about setting up and using SSL.

 

 I'm on MySQL 4.20 and have built mysql after configuring with --use-vio

 and --use-openssl.  HAVE_OPENSSL = YES.  I can handle setting up the

 user table and GRANTS to require SSL for users and connections.

 

 What I don't know how to do is make my client use SSL with MySQL. Can

 you help me?  or give me some direction?

 

 My application runs on M$ Windows.

 It's written in M$ Visual Basic 6 from Visual Studio 6.

 I'm using MDAC 2.7 and M$ ADO.

 I'm using MyODBC 2.50 but can easily and happily upgrade to 3.51

(is 3.51 needed?)

 

 I have no idea what to do (set properties?) to cause my VB client to

 connect to the MySQL DB server using SSL?

 

 Any help would be huge! Thanx.

 

 

 



-- 
For technical support contracts, goto https://order.mysql.com/?ref=ensita
This email is sponsored by Ensita.NET http://www.ensita.net/
   __  ___ ___   __
  /  |/  /_ __/ __/ __ \/ /Gleb Paharenko
 / /|_/ / // /\ \/ /_/ / /__   [EMAIL PROTECTED]
/_/  /_/\_, /___/\___\_\___/   MySQL AB / Ensita.NET
   ___/   www.mysql.com







cfp: MCMP 2005

2004-11-11 Thread Mohamed khedr

   
Call for Papers
First International Workshop on Managing Context Information in Mobile and 
Pervasive Environments (MCMP’2005)

www.site.uottawa.ca/~mkhedr/MCMP05

In conjunction with The 6th International Conference on Mobile Data Management 
(MDM'2005)

May 9, 2005 – Ayia Napa, Cyprus

Workshop Organizers

W. Mansoor, Zayed University, U.A.E.
M. Khedr, University of Ottawa, Canada
D. Benslimane, Lyon 1 University, France
Z. Maamar, Zayed University, U.A.E.

Programme Committee

P. Bellavista, University of Bologna, Italy
U. Bellur, IIT, India
P.-A. Champin, Lyon 1 University, France
G. Chen, Dartmouth University, USA
B. Esfandiari, Carleton University, Canada
R. Garcia, UP, Spain
H. Harroud, University of Ottawa, Canada
G. Kouadri, University of Fribourg, Switzerland
S. Kouadri, University of Fribourg, Switzerland
T. Kanter, Ericsson, Sweden
T. Lemlouma, INRIA, France
J. Mäntyjärvi, VTT, Finland
N.C. Narendra, IBM Software Labs, India
K. Pousttchi, University of Augsburg, Germany
A. Schmidt, University of München, Germany
D. Taniar, Monash University, Australia
A. Vakali, University of Athena, Greece

Scope 

The increasing popularity of mobile devices (e.g., laptops, mobile phones, and
PDAs) and advances in wireless networking technologies are enabling new
classes of applications that target environments which are dynamic, mobile,
reconfigurable, and spontaneously personalized. These applications and their
target environments raise challenges for application developers, who have to
be aware of variations in the execution context, such as location, time,
users' activities, and devices' capabilities, in order to tune and adapt the
applications' intended functionality.

Developing and managing such context-aware applications would be extremely
complex and error-prone without management facilities capable of acquiring,
modelling, manipulating, reasoning about, and disseminating context
information. Without them, application developers would have to deal with
these issues in a proprietary manner, which would distract them from the
actual requirements of the applications they are developing on the one hand,
and hinder the interoperability of these context-aware applications on the
other. Unfortunately, current networking, computing, and management
technologies do not fully support such a model of automated, context-based
adaptability. The workshop will address these challenging issues, focusing on
novel methods for managing context information in pervasive and mobile
environments.

 

Relevant topics

· Novel algorithms for acquiring and disseminating context from physical and
logical sensors.
· Middleware and agent system support for managing context in pervasive
environments.
· Innovative approaches for modelling, reasoning about, storing, and
manipulating context information.
· Management of context information in deterministic and non-deterministic
pervasive environments.
· Facilities to provide persistence services based on context.
· Exploiting new types of context information, such as network-, social-, and
system-related context, and approaches for managing these new types of
context.
· Managing multiple environments and processes of context exchange.
· Methods for turning Internet service providers from passive carriers into
context-oriented service providers addressing large-scale pervasive
environments.
· Service discovery and invocation based on context.
· Activity-based computing and its relation to context-aware mobile
computing.
· Context-aware mobile database transactions and query processing.
· Evaluation metrics for the effectiveness of management techniques for
context information.

 

Papers and Evaluations

Authors are invited to submit original papers electronically, in PDF format, to
([EMAIL PROTECTED], [EMAIL PROTECTED]). Papers should not exceed 10
pages in LNCS camera-ready style. All papers will be reviewed. For more
information, contact one of the workshop chairs at [EMAIL PROTECTED],
wathiq.mansoor/[EMAIL PROTECTED], [EMAIL PROTECTED]

 

Deadlines

Submissions due: December 22, 2004

Acceptance notification: January 22, 2005 

Camera-ready papers submission: February 22, 2005 

Workshop: May 9, 2005






Re: InnoDB data files keep growing with innodb_file_per_table

2004-11-11 Thread Heikki Tuuri
Ivan,
there is something very strange in your tablespace monitor output:

04  5:35:51 INNODB TABLESPACE MONITOR OUTPUT

FILE SPACE INFO: id 0
size 120832, free limit 120064, free extents 3
not full frag extents 1: used pages 20, full frag extents 1462
first seg id not used 0 2
SEGMENT id 0 5635 space 0; page 461; res 1 used 1; full ext 0
fragm pages 1; free extents 0; not full extents 0: pages 0
SEGMENT id 0 5636 space 0; page 461; res 1 used 1; full ext 0
fragm pages 1; free extents 0; not full extents 0: pages 0
SEGMENT id 0 5637 space 0; page 461; res 1 used 1; full ext 0
fragm pages 1; free extents 0; not full extents 0: pages 0
...
the numbers increase by 1 all the way up from 5637 to 88879; res and used 
values are
mostly 1,
sometimes 2 or 3. Rarely (in like, 50 cases) they're pretty big, e.g.

SEGMENT id 0 75994 space 0; page 82136; res 2720 used 2680; full ext 41
fragm pages 32; free extents 0; not full extents 1: pages 24
SEGMENT id 0 76006 space 0; page 82136; res 2592 used 2569; full ext 39
fragm pages 32; free extents 0; not full extents 1: pages 41
...
SEGMENT id 0 88879 space 0; page 119802; res 1 used 1; full ext 0
fragm pages 1; free extents 0; not full extents 0: pages 0
SEGMENT id 0 0 space 0; page 119802; res 1 used 1; full ext 0
fragm pages 1; free extents 0; not full extents 0: pages 0
SEGMENT id 0 1 space 0; page 119802; res 1 used 1; full ext 0
fragm pages 1; free extents 0; not full extents 0: pages 0
NUMBER of file segments: 82815
Validating tablespace
Validation ok
---
END OF INNODB TABLESPACE MONITOR OUTPUT
===
You say that the segment id's go from 5635 to 1. But at the tablespace 
creation, InnoDB allocates several segments for foreign key system tables 
etc. There should be segments with id's 1, 2, 3, like in the output that I 
posted yesterday. Looks like the tablespace management data structures are 
corrupt.

The fact that segments are not being freed at a trx commit or a purge, may 
be a result of this corruption.

Not a single bug has been found from fsp0fsp.c in 4 years. This might also 
be corruption caused by the hardware or the OS.

If you can zip your ibdata files into moderate size, can you upload them 
with ftp to

support.mysql.com/pub/mysql/secret
My guess is that if you rebuild the tablespace, the leak problem will go 
away.

Regards,
Heikki
- Original Message - 
From: Heikki Tuuri [EMAIL PROTECTED]
Newsgroups: mailing.database.myodbc
Sent: Thursday, November 11, 2004 10:16 AM
Subject: Re: InnoDB data files keep growing with innodb_file_per_table


Ivan,
- Original Message - 
From: John B. Ivski [EMAIL PROTECTED]
Newsgroups: mailing.database.myodbc
Sent: Thursday, November 11, 2004 4:55 AM
Subject: Re: InnoDB data files keep growing with innodb_file_per_table


Heikki,
the output shows that there are no dangling transactions, and purge is
not lagging behind.
Yes, that's what I thought... weird, huh :/
If you update a secondary index column, that requires purge to clean up
the index.
The tables have structure similar to the following:
...
Then do
SHOW TABLE STATUS FROM test;
What does it print as the InnoDB free space for the table test.t? That
is the free space in the system tablespace.
Unfortunately I won't be able to shut down the server until this weekend.
Will let you know the results.
no need to do that any more, because the free space info can also be seen
from the output of the innodb_tablespace_monitor.
InnoDB is clearly leaking 'segments' in the system tablespace. They are
probably undo logs. For some reason, a trx commit or purge fails to free
them.
SEGMENT id 0 75994 space 0; page 82136; res 2720 used 2680; full ext 41
fragm pages 32; free extents 0; not full extents 1: pages 24
SEGMENT id 0 76006 space 0; page 82136; res 2592 used 2569; full ext 39
fragm pages 32; free extents 0; not full extents 1: pages 41
These big segments are about 40 MB in size. Did you do ALTER TABLE or some
other big transaction involving a million rows?
...
SEGMENT id 0 88879 space 0; page 119802; res 1 used 1; full ext 0
fragm pages 1; free extents 0; not full extents 0: pages 0
SEGMENT id 0 0 space 0; page 119802; res 1 used 1; full ext 0
fragm pages 1; free extents 0; not full extents 0: pages 0
SEGMENT id 0 1 space 0; page 119802; res 1 used 1; full ext 0
fragm pages 1; free extents 0; not full extents 0: pages 0
Most segments seem to be undo logs resulting from small transactions.
You said that you tried max_purge_lag, and it caused lengthy waits? What
value did you try?
I will try to repeat the problem by simulating your database workload.
Please use also innodb_table_monitor, and send the output to me.
Thanks for the advice.
Good luck,
Ivan
Best regards,
Heikki Tuuri
Innobase Oy
Foreign keys, transactions, and row level locking for MySQL
InnoDB Hot Backup 

Re: InnoDB data files keep growing with innodb_file_per_table

2004-11-11 Thread Heikki Tuuri
Ivan,
hmm... could it be that segments 0 1, 0 2, 0 3, etc. were printed close to 
the end of the output? The print routine first prints inode pages that are 
completely used, and after that other inode pages. Since the tablespace 
validation said the tablespace is ok, I guess the segments really are there.

Anyway, if we get the ibdata files, it should be relatively easy to find out 
what is wrong.

Regards,
Heikki
- Original Message - 
From: Heikki Tuuri [EMAIL PROTECTED]
To: [EMAIL PROTECTED]
Sent: Thursday, November 11, 2004 2:17 PM
Subject: Re: InnoDB data files keep growing with innodb_file_per_table


Ivan,
there is something very strange in your tablespace monitor output:

04  5:35:51 INNODB TABLESPACE MONITOR OUTPUT

FILE SPACE INFO: id 0
size 120832, free limit 120064, free extents 3
not full frag extents 1: used pages 20, full frag extents 1462
first seg id not used 0 2
SEGMENT id 0 5635 space 0; page 461; res 1 used 1; full ext 0
fragm pages 1; free extents 0; not full extents 0: pages 0
SEGMENT id 0 5636 space 0; page 461; res 1 used 1; full ext 0
fragm pages 1; free extents 0; not full extents 0: pages 0
SEGMENT id 0 5637 space 0; page 461; res 1 used 1; full ext 0
fragm pages 1; free extents 0; not full extents 0: pages 0
...
the numbers increase by 1 all the way up from 5637 to 88879; res and used 
values are
mostly 1,
sometimes 2 or 3. Rarely (in like, 50 cases) they're pretty big, e.g.

SEGMENT id 0 75994 space 0; page 82136; res 2720 used 2680; full ext 41
fragm pages 32; free extents 0; not full extents 1: pages 24
SEGMENT id 0 76006 space 0; page 82136; res 2592 used 2569; full ext 39
fragm pages 32; free extents 0; not full extents 1: pages 41
...
SEGMENT id 0 88879 space 0; page 119802; res 1 used 1; full ext 0
fragm pages 1; free extents 0; not full extents 0: pages 0
SEGMENT id 0 0 space 0; page 119802; res 1 used 1; full ext 0
fragm pages 1; free extents 0; not full extents 0: pages 0
SEGMENT id 0 1 space 0; page 119802; res 1 used 1; full ext 0
fragm pages 1; free extents 0; not full extents 0: pages 0
NUMBER of file segments: 82815
Validating tablespace
Validation ok
---
END OF INNODB TABLESPACE MONITOR OUTPUT
===
You say that the segment id's go from 5635 to 1. But at the tablespace 
creation, InnoDB allocates several segments for foreign key system tables 
etc. There should be segments with id's 1, 2, 3, like in the output that I 
posted yesterday. Looks like the tablespace management data structures are 
corrupt.

The fact that segments are not being freed at a trx commit or a purge, may 
be a result of this corruption.

Not a single bug has been found from fsp0fsp.c in 4 years. This might also 
be corruption caused by the hardware or the OS.

If you can zip your ibdata files into moderate size, can you upload them 
with ftp to

support.mysql.com/pub/mysql/secret
My guess is that if you rebuild the tablespace, the leak problem will go 
away.

Regards,
Heikki
- Original Message - 
From: Heikki Tuuri [EMAIL PROTECTED]
Newsgroups: mailing.database.myodbc
Sent: Thursday, November 11, 2004 10:16 AM
Subject: Re: InnoDB data files keep growing with innodb_file_per_table


Ivan,
- Original Message - 
From: John B. Ivski [EMAIL PROTECTED]
Newsgroups: mailing.database.myodbc
Sent: Thursday, November 11, 2004 4:55 AM
Subject: Re: InnoDB data files keep growing with innodb_file_per_table


Heikki,
the output shows that there are no dangling transactions, and purge is
not lagging behind.
Yes, that's what I thought... weird, huh :/
If you update a secondary index column, that requires purge to clean up
the index.
The tables have structure similar to the following:
...
Then do
SHOW TABLE STATUS FROM test;
What does it print as the InnoDB free space for the table test.t? That
is the free space in the system tablespace.
Unfortunately I won't be able to shut down the server until this 
weekend.
Will let you know the results.

no need to do that any more, because the free space info can also be seen
from the output of the innodb_tablespace_monitor.
InnoDB is clearly leaking 'segments' in the system tablespace. They are
probably undo logs. For some reason, a trx commit or purge fails to free
them.
SEGMENT id 0 75994 space 0; page 82136; res 2720 used 2680; full ext 41
fragm pages 32; free extents 0; not full extents 1: pages 24
SEGMENT id 0 76006 space 0; page 82136; res 2592 used 2569; full ext 39
fragm pages 32; free extents 0; not full extents 1: pages 41
These big segments are about 40 MB in size. Did you do ALTER TABLE or 
some
other big transaction involving a million rows?

...
SEGMENT id 0 88879 space 0; page 119802; res 1 used 1; full ext 0
fragm pages 1; free extents 0; not full extents 0: pages 0
SEGMENT id 0 0 space 0; page 119802; res 1 used 1; full ext 0
fragm pages 

Checking the status of a field, and ensuring it is not updated before I change it

2004-11-11 Thread Joshua Beall
Hi All,

I have the following situation: I want to check a row in a database (list of
jobs that need to be run).  So, my script checks the status field, and if it
is not started, then it marks it as locked and running, and when it
finishes, it marks it as completed.

Now, here is what I am concerned about - because there may be many processes
running at any one time, it seems feasible to me that two instance of my
script might simultaneously issue the SELECT statement, and seeing that the
job is not started, then subsequently issue UPDATE statements.  Both
processes think they are the only ones running.

How should I deal with this?  It seems to me that I need some sort of query
that, all in one transaction, first locks the row so that no other processes
can access it, then checks to see if it is running/completed or not.  If it
is not yet running/completed, it starts the job, updates the database row to
mark the job as running, and then unlocks the row.

Is this what I should do?  I'm not sure how to do this.  Any thoughts?  Any
help would be greatly appreciated!

Sincerely,
  -Josh










Re: Tricky self join query help?

2004-11-11 Thread Brent Baisley
I noticed a few mistakes in my query, which may be causing some 
confusion and would probably cause it not to work. But I'll break 
everything down.

The NULLs in the second owner column are the indicators that there is 
no matching owner in the most recent three months.

Breaking down the query, the first part indicates which table you 
want to delete records from:
DELETE FROM Events

Then you indicate which database you will be using to create the 
filter. In this case, Events and an alias of the Events table that I 
just called Owners. You are doing a left join so that you don't filter 
out any records from the first table, Events.
USING Events LEFT JOIN Events AS Owners

Since you are doing a join, you obviously need to specify a join 
condition. You are joining, based on OwnerID, the records from the 
Owners (Events) table that are younger than 3 months with ALL (because 
of the left join) records in the Events table.
ON Events.ownerID=Owners.ownerID AND Owners.eventDate >= 3 months ago

Anything that does not have a matching OwnerID from the Events/Owners 
join will have a NULL (or not match) value in the OwnerID column. 
Those are the ones you want to delete, thus the last piece.
WHERE Owners.ownerID IS NULL

You should add a filter on Events for records older than 3 months. It 
will probably speed things up and will assure you don't delete newer 
records. So add this to the end, which I didn't have originally:
AND Events.EventDate < 3 months ago

Final delete statement:
DELETE FROM Events
USING Events LEFT JOIN Events AS Owners
ON Events.ownerID=Owners.ownerID AND Owners.eventDate >= 3 months ago
WHERE Owners.ownerID IS NULL
AND Events.EventDate < 3 months ago
Original statement with typos and logic flaw:
DELETE
FROM Events
USING Events LEFT JOIN Events AS Owners
ON Events.ownerID=Owners.ownerID AND Events.eventData >= 3 months ago -- 
logic flaw, should be Owners.EventDate
WHERE Owners.ownerID IS NULL

Hope that helps.
On Nov 10, 2004, at 4:50 PM, Gerald Taylor wrote:
Thanks
And I am liking that other answer although it has
all nulls in the second owner column and I don't get how it works.
--
Brent Baisley
Systems Architect
Landover Associates, Inc.
Search  Advisory Services for Advanced Technology Environments
p: 212.759.6400/800.759.0577


Re: improving query response time

2004-11-11 Thread Aleksandr Guidrevitch
Hi Chetan,
Well, I would suggest adding a separate index
on either bsc_id or data_GenTime,
whichever returns the smaller result.
Then check with EXPLAIN that this index is used.
May
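For example (the index names are illustrative):

ALTER TABLE ind_kar_bng_robocop_gan_0_pm_ipc_0 ADD INDEX idx_bsc (bsc_id);
-- or, if the time range is the more selective condition:
ALTER TABLE ind_kar_bng_robocop_gan_0_pm_ipc_0 ADD INDEX idx_gentime (data_GenTime);
-- then re-check the plan:
EXPLAIN SELECT Sum(m0),Avg(m1),Max(m5),Min(m6)
FROM ind_kar_bng_robocop_gan_0_pm_ipc_0
WHERE bsc_id = 255 AND item_id = 0 \G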
chetan t wrote:
Hi mike,
as i have mentioned in my previous mail 

the table structure which i am using is as fallows,
CREATE TABLE IND_KAR_BNG_Metallica_PS_RT_4
(
gan_id INTEGER NOT NULL,
bsc_id INTEGER NOT NULL,
bts_id INTEGER NOT NULL,
bd_type VARCHAR(10) NOT NULL,
bd_id INTEGER NOT NULL,
duplex VARCHAR(10) NOT NULL,
data_GenTime DATETIME NOT NULL,
item_id INTEGER NOT NULL,
M0 INTEGER NOT NULL,
M1 INTEGER NOT NULL,
M2 INTEGER NOT NULL,
M3 INTEGER NOT NULL,
M4 INTEGER NOT NULL,
M5 INTEGER NOT NULL,
M6 INTEGER NOT NULL,
M7 INTEGER NOT NULL,
M8 INTEGER NOT NULL,
M9 INTEGER NOT NULL,
M10 INTEGER NOT NULL,
M11 INTEGER NOT NULL,
M12 INTEGER NOT NULL,
M13 INTEGER NOT NULL,
M14 INTEGER NOT NULL,
M15 INTEGER NOT NULL,
M16 INTEGER NOT NULL,
M17 INTEGER NOT NULL,
M18 INTEGER NOT NULL,
M19 INTEGER NOT NULL,
M20 INTEGER NOT NULL,
M21 INTEGER NOT NULL,
M22 INTEGER NOT NULL,
M23 INTEGER NOT NULL,
M24 INTEGER NOT NULL,
M25 INTEGER NOT NULL,
M26 INTEGER NOT NULL,
M27 INTEGER NOT NULL,
M28 INTEGER NOT NULL,
M29 INTEGER NOT NULL,
M30 INTEGER NOT NULL,
M31 INTEGER NOT NULL,
INDEX RetreiveIndex (data_GenTime,gan_id ,bsc_id
,bts_id ,bd_type ,bd_id ,item_id));
the type of query that is executed is as below
mysql> select Sum(m0),Avg(m1),Max(m5),Min(m6) from
ind_kar_bng_robocop_gan_0_pm_ipc_0 where
bsc_id = 255 and data_GenTime between '2004-11-09 00:00:00'
and '2004-11-10 19:41:44'
and item_id = 0;
+---------+----------+---------+---------+
| Sum(m0) | Avg(m1)  | Max(m5) | Min(m6) |
+---------+----------+---------+---------+
| 3899200 | 256.     |       0 |       0 |
+---------+----------+---------+---------+
1 row in set (5.67 sec)
this query executed when the record count in the table
ind_kar_bng_robocop_gan_0_pm_ipc_0 was 1096650 records;
as the number of records in the table keeps growing,
the query response time increases.
the explain select result of the same query is here
mysql> explain select Sum(m0),Avg(m1),Max(m5),Min(m6)
    -> from ind_kar_bng_robocop_gan_0_pm_ipc_0 where
    -> bsc_id = 255 and data_GenTime between '2004-11-09 00:00:00' and '2004-11-10 19:41:44'
    -> and item_id = 0 \G
*************************** 1. row ***************************
           id: 1
  select_type: SIMPLE
        table: ind_kar_bng_robocop_gan_0_pm_ipc_0
         type: ALL
possible_keys: RetreiveIndex
          key: NULL
      key_len: NULL
          ref: NULL
         rows: 1096650
        Extra: Using where
1 row in set (0.00 sec)
these are my system variables.
mysql> show variables;
+--------------------------+--------------------------+
| Variable_name            | Value                    |
+--------------------------+--------------------------+
| back_log                 | 50                       |
| basedir                  | C:\mysql\                |
| binlog_cache_size        | 32768                    |
| bulk_insert_buffer_size  | 8388608                  |
| character_set_client     | latin1                   |
| character_set_connection | latin1                   |
| character_set_database   | latin1                   |
| character_set_results    | latin1                   |
| character_set_server     | latin1                   |
| character_set_system     | utf8                     |
| character_sets_dir       | C:\mysql\share\charsets/ |
| collation_connection     | latin1_swedish_ci        |
| collation_database       | latin1_swedish_ci        |
| collation_server         | latin1_swedish_ci        |
| concurrent_insert        | ON                       |
| connect_timeout          | 5                        |
| datadir                  | C:\mysql\data\           |
| date_format              | %Y-%m-%d                 |
| datetime_format          | %Y-%m-%d %H:%i:%s        |
| default_week_format      | 0                        |
| delay_key_write          | ON                       |
| delayed_insert_limit     | 100                      |
| delayed_insert_timeout   | 300                      |
| delayed_queue_size       | 1000                     |
| expire_logs_days         | 0                        |
| flush

Queries taking 60 seconds+

2004-11-11 Thread John Smith
Afternoon All,

I have the following table structure:

CREATE TABLE properties (
  id int(11) NOT NULL auto_increment,
  propid varchar(14) NOT NULL default '0',
  townid varchar(255) NOT NULL default '',
  countyid mediumint(5) NOT NULL default '0',
  address text NOT NULL,
  price int(14) NOT NULL default '0',
  image text NOT NULL,
  description text NOT NULL,
  link text NOT NULL,
  underoffer tinyint(1) NOT NULL default '0',
  sold tinyint(1) NOT NULL default '0',
  added int(14) NOT NULL default '0',
  `new` tinyint(1) NOT NULL default '1',
  old tinyint(1) NOT NULL default '0',
  PRIMARY KEY  (id),
  KEY old (old),
  KEY `new` (`new`),
  KEY sold (sold),
  KEY underoffer (underoffer),
  KEY propid (propid),
  KEY price (price),
  KEY countyid (countyid),
  FULLTEXT KEY address (address)
) ENGINE=MyISAM

Which I have ran the following commands on:

myisamchk -rq --sort-index --analyze --sort-records=7 properties.MYI
myisampack properties.MYI
myisamchk -rq --sort-index --analyze properties.MYI

It contains just over 400,000 rows and compressed is 163 Meg in size. 

I have just upgraded to 4.1 as well to see if I can squeeze any more 
performance out.

This query:

SELECT SQL_CACHE SQL_CALC_FOUND_ROWS address,price, image, description,
link , underoffer, sold ,added ,new  FROM properties  WHERE  countyid =
44 AND price >= 1 AND old=0 ORDER BY price desc LIMIT 100, 10;

Takes 123 seconds and examines 19068 rows according to the query log;
without the limit it returns 9512 rows.

I am all resourced out and have spent weeks googling and reading the
docs etc.

I am pretty sure returning all properties containing the countyid should
not take 2 minutes, and I am confused at why it's examining 19068 rows
also.

It could be down to my server I suppose, which is a "jailed" (no idea
what that is, it's just what my hosting company calls it) FreeBSD virtual
server.

I have been playing with mysqld variables in my.cnf also... these are
probably all set too high... I am not really sure of the specs of my
server as I don't get that info.

set-variable = query_cache_type=2
set-variable= key_buffer=300M
set-variable= query_cache_size=200M
set-variable= query_cache_limit=50M
set-variable= max_allowed_packet=10M
set-variable= table_cache=50
set-variable= sort_buffer=5M
set-variable= read_rnd_buffer=5M
set-variable= record_buffer=5M
set-variable= tmp_table_size=64M
set-variable= thread_cache=9
# Try number of CPU's*2 for thread_concurrency
set-variable= thread_concurrency=2
set-variable= ft_min_word_len=3
set-variable = long_query_time=2
log-long-format
log-slow-queries = /var/log/mysqld.slow.log

The table is read only for most of the day and will get updated (once I
get the queries down to an acceptable level) nightly...if that helps.

Any suggestions or further tips would be very helpful, as it's taken me
months to get my code to input the data in the way I need; it's now
getting the data back out that is the problem.

Thanks
John





RE: SQL Syntax Problem

2004-11-11 Thread Adams, Pat 006
 -Original Message-
 From: David Blomstrom [mailto:[EMAIL PROTECTED] 
 Sent: Wednesday, November 10, 2004 4:08 PM
 To: [EMAIL PROTECTED]
 Subject: SQL Syntax Problem
 
 $sql = 'SELECT
 F.IDArea,
 C.IDArea, C.Name, C.Pop, C.Nationality,
 C.NationalityPlural, C.NationalityAdjective FROM cia_people 
 C, famarea2 F WHERE (C.Nationality is not null) AND (F.IDArea 
 = \'eur\') ORDER BY $_POST[\'order\'], 
 $_POST[\'direction\']'; $res = mysql_query($sql) or 
 die('Failed to run ' .
 $sql . ' - ' . mysql_error());

If you change the single quotes on the outside of the SQL statement to
double quotes, PHP will parse variables inside the string. Try 

$sql = "SELECT F.IDArea, C.IDArea, C.Name, C.Pop, C.Nationality,
C.NationalityPlural, C.NationalityAdjective "
. "FROM cia_people C, famarea2 F "
. "WHERE (C.Nationality is not null) AND (F.IDArea = 'eur') "
. "ORDER BY {$_POST['order']}, {$_POST['direction']}";

Notice that you need to put the variables in curly braces when you have
arrays being parsed.
--
Pat Adams
Applications Programmer
SYSCO Food Services of Dallas, L.P.
(469) 384-6009 




Re: Queries taking 60 seconds+

2004-11-11 Thread Victor Pendleton
What does the explain plan look like?
John Smith wrote:
Afternoon All,
I have the following table structure:
CREATE TABLE properties (
 id int(11) NOT NULL auto_increment,
 propid varchar(14) NOT NULL default '0',
 townid varchar(255) NOT NULL default '',
 countyid mediumint(5) NOT NULL default '0',
 address text NOT NULL,
 price int(14) NOT NULL default '0',
 image text NOT NULL,
 description text NOT NULL,
 link text NOT NULL,
 underoffer tinyint(1) NOT NULL default '0',
 sold tinyint(1) NOT NULL default '0',
 added int(14) NOT NULL default '0',
 `new` tinyint(1) NOT NULL default '1',
 old tinyint(1) NOT NULL default '0',
 PRIMARY KEY  (id),
 KEY old (old),
 KEY `new` (`new`),
 KEY sold (sold),
 KEY underoffer (underoffer),
 KEY propid (propid),
 KEY price (price),
 KEY countyid (countyid),
 FULLTEXT KEY address (address)
) ENGINE=MyISAM
Which I have ran the following commands on:
myisamchk -rq --sort-index --analyze --sort-records=7 properties.MYI
myisampack properties.MYI
myisamchk -rq --sort-index --analyze properties.MYI
It contains just over 400,000 rows and compressed is 163 Meg in size. 

I have just upgraded to 4.1 as well to see if I can squeeze any more 
performance out.
This query:
SELECT SQL_CACHE SQL_CALC_FOUND_ROWS address,price, image, description,
link , underoffer, sold ,added ,new  FROM properties  WHERE  countyid =
44 AND price >= 1 AND old=0 ORDER BY price desc LIMIT 100, 10;
Takes 123 seconds and examins 19068 rows according to the query log,
without the limit it returns 9512 rows.
I am all resourced out and have spent weeks googleing and reading the
docs etc.
I am pretty sure returing all properties containing the countyid should
not take 2 minutes and I am confused at why its examining 19068 rows
also.
It could be down to my server I suppose which is a jailed (no idea
what that is its just what my hosting company call it) freebsd virtial
server.
I have been playing with mysqld variables in my.cnf also...these are
probably all set too high...i am not really sure of the specs of my
server as I don't get that info..
set-variable = query_cache_type=2
set-variable= key_buffer=300M
set-variable= query_cache_size=200M
set-variable= query_cache_limit=50M
set-variable= max_allowed_packet=10M
set-variable= table_cache=50
set-variable= sort_buffer=5M
set-variable= read_rnd_buffer=5M
set-variable= record_buffer=5M
set-variable= tmp_table_size=64M
set-variable= thread_cache=9
# Try number of CPU's*2 for thread_concurrency
set-variable= thread_concurrency=2
set-variable= ft_min_word_len=3
set-variable = long_query_time=2
log-long-format
log-slow-queries = /var/log/mysqld.slow.log
The table is read only for most of the day and will get updated (once I
get the queries down to an acceptable level) nightly...if that helps.
Any suggestions or further tips would be very helpful as its taken me
months to get my code to input the data in the way I need its now trying
to get the data back out that is the problem.
Thanks
John
 




Strange error in DELETE query

2004-11-11 Thread Ronan Lucio
Hi,

I have a MySQL-4.0.18 installed on a FreeBSD system.

When I run the follow query:

DELETE
FROM table
WHERE client_id = 1
AND row_id IN (2,5,7)

only the first record is deleted.
Am I doing something wrong or is it a MySQL bug?

Thanks
Ronan






Re: Queries taking 60 seconds+

2004-11-11 Thread John Smith
On Thu, 2004-11-11 at 13:58, Victor Pendleton wrote:
 What does the explain plan look like?
 

id   select_type   table   type   possible_keys   key   key_len   ref   rows   
Extra  
1 SIMPLE properties ref old,price,countyid countyid 3 const 9233 Using where; 
Using filesort

The filesort I know is a problem but I thought I read sorting it using
myisamchk by the price key helps this.

Also if I remove ORDER BY and the price >= 1 AND old = 0 the query still
takes 46 seconds which is too long for a dynamic site that I am trying
to build.

John





RE: Strange error in DELETE query

2004-11-11 Thread Jay Blanchard
[snip]
When I run the follow query:

DELETE
FROM table
WHERE client_id = 1
AND row_id IN (2,5,7)

only the first record is deleted.
Am I doing something wrong or is it a MySQL bug?
[/snip]

It is not a bug, just say it out loud
AND row_id is 2 OR 5 OR 7

Once the OR condition is satisfied once, the query will halt. 




Re: Checking the status of a field, and ensuring it is not updated before I change it

2004-11-11 Thread gerald_clark

Joshua Beall wrote:
Hi All,
I have the following situation: I want to check a row in a database (list of
jobs that need to be run).  So, my script checks the status field, and if it
is not started, then it marks it as locked and running, and when it
finishes, it marks it as completed.
Now, here is what I am concerned about - because there may be many processes
running at any one time, it seems feasible to me that two instance of my
script might simultaneously issue the SELECT statement, and seeing that the
job is not started, then subsequently issue UPDATE statements.  Both
processes think they are the only ones running.
How should I deal with this?  It seems to me that I need some sort of query
that, all in one transaction, first locks the row so that no other processes
can access it, then checks to see if it is running/completed or not.  If it
is not yet running/completed, it starts the job, updates the database row to
mark the job as running, and then unlocks the row.
Is this what I should do?  I'm not sure how to do this.  Any thoughts?  Any
help would be greatly appreciated!
Sincerely,
 -Josh
UPDATE proctable SET status='running' WHERE procid=23 AND status='stopped';
If this fails, someone else has already started it.
OR
UPDATE proctable SET status='running', starter='myid' WHERE procid=23 
AND status='stopped';

SELECT status,starter FROM proctable WHERE procid=23;
Is it running, and do you own it?



RE: Queries taking 60 seconds+

2004-11-11 Thread Andy Eastham
John,

Have you got a single multi-column index on countyid, price and old, or do
you have individual indexes on each of these fields?  The former would be
much better.
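
Something like this, say (the index name is illustrative):

ALTER TABLE properties ADD INDEX county_old_price (countyid, old, price);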

Andy

 -Original Message-
 From: John Smith [mailto:[EMAIL PROTECTED]
 Sent: 11 November 2004 14:15
 To: Victor Pendleton
 Cc: [EMAIL PROTECTED]
 Subject: Re: Queries taking 60 seconds+
 
 On Thu, 2004-11-11 at 13:58, Victor Pendleton wrote:
  What does the explain plan look like?
 
 
 id   select_type   table   type   possible_keys   key   key_len   ref
 rows   Extra
 1 SIMPLE properties ref old,price,countyid countyid 3 const 9233 Using
 where; Using filesort
 
 The filesort I know is a problem but I thought I read sorting it using
 myisamchk by the price key helps this.
 
 Also if I remove ORDER BY and the price >= 1 AND old = 0 the query still
 takes 46 seconds which is too long for a dynamic site that I am trying
 to build.
 
 John
 
 







Re: Queries taking 60 seconds+

2004-11-11 Thread Philippe Poelvoorde
Hi,
could you try adding a key with
ALTER TABLE properties ADD INDEX(countyid,old,price);
It could maybe help getting less rows at a time.
--
Philippe Poelvoorde
COS Trading Ltd.


Re: Checking the status of a field, and ensuring it is not updated before I change it

2004-11-11 Thread SGreen
If your table is MyISAM you will have to lock the table so that no other 
process can access that row (or any other) while you do your check and 
update.

http://dev.mysql.com/doc/mysql/en/LOCK_TABLES.html

If your table is InnoDb you can still do the table lock or you can lock 
just the one row.

http://dev.mysql.com/doc/mysql/en/InnoDB_locking_reads.html
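
A minimal sketch of the row-lock variant, using a made-up jobs table to match
Josh's description:

SET AUTOCOMMIT=0;
BEGIN;
SELECT status FROM jobs WHERE job_id = 42 FOR UPDATE; -- row stays locked until commit
-- if the returned status is 'not started', claim the job:
UPDATE jobs SET status = 'running' WHERE job_id = 42;
COMMIT; -- releases the lock; a second process doing the same SELECT blocks until here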

Shawn Green
Database Administrator
Unimin Corporation - Spruce Pine

news [EMAIL PROTECTED] wrote on 11/11/2004 07:38:20 AM:

 Hi All,
 
 I have the following situation: I want to check a row in a database 
(list of
 jobs that need to be run).  So, my script checks the status field, and 
if it
 is not started, then it marks it as locked and running, and when it
 finishes, it marks it as completed.
 
 Now, here is what I am concerned about - because there may be many 
processes
 running at any one time, it seems feasible to me that two instance of my
 script might simultaneously issue the SELECT statement, and seeing that 
the
 job is not started, then subsequently issue UPDATE statements.  Both
 processes think they are the only ones running.
 
 How should I deal with this?  It seems to me that I need some sort of 
query
 that, all in one transaction, first locks the row so that no other 
processes
 can access it, then checks to see if it is running/completed or not.  If 
it
 is not yet running/completed, it starts the job, updates the database 
row to
 mark the job as running, and then unlocks the row.
 
 Is this what I should do?  I'm not sure how to do this.  Any thoughts? 
Any
 help would be greatly appreciated!
 
 Sincerely,
   -Josh
 
 
 
 
 
 
 
 


RE: Queries taking 60 seconds+

2004-11-11 Thread John Wards
On Thu, 2004-11-11 at 14:37, Andy Eastham wrote:
 Have you got a single multi-column index on countyid, price and old, or do
 you have individual indexes on each of these fields?  The former would be
 much better.

It's a single-column index on countyid; when I ran a select and just used
countyid = in the where clause it still took over 30 seconds.

It did take back fewer rows without the price and old in the query (the
exact number of rows it should look at)

But it's still 30 seconds... I would love it to be under 3 seconds.

The performance I get from the 3-column index should be the same as the
single-column index when the WHERE uses only that one column, should it
not? That saves me going off for an hour or so unpacking, adding, packing
etc. to find out that it still takes 30 seconds.

I am pretty sure it's probably a hardware issue but I am hoping I am
wrong... as this application is not ready to pay for itself just yet.

John





Re: Queries taking 60 seconds+

2004-11-11 Thread John Smith
On Thu, 2004-11-11 at 14:36, Philippe Poelvoorde wrote:
 Hi,
 
 could you try adding a key with
 ALTER TABLE properties ADD INDEX(countyid,old,price);
 It could maybe help getting less rows at a time.

I dropped the old and price from the where clause and the number of rows
scanned was the same as without the limit, which is good, so I am
guessing the 3-column index will do the same.

Problem is that it still took 30+ seconds to return 9000 rows.

Not an acceptable performance, really.

Any other ideas?

John





Re: Queries taking 60 seconds+

2004-11-11 Thread Victor Pendleton
If you build the composite indexes as suggested, does your performance 
improve?

John Smith wrote:
On Thu, 2004-11-11 at 13:58, Victor Pendleton wrote:
 

What does the explain plan look like?
   

id   select_type   table   type   possible_keys   key   key_len   ref   rows   Extra  
1 SIMPLE properties ref old,price,countyid countyid 3 const 9233 Using where; Using filesort

The filesort I know is a problem but I thought I read sorting it using
myisamchk by the price key helps this.
Also if I remove ORDER BY and the price =1 AND old = 0 the query still
takes 46 seconds which is too long for a dynamic site that I am trying
to build.
John
 




Re: Queries taking 60 seconds+

2004-11-11 Thread John Smith
On Thu, 2004-11-11 at 14:59, Victor Pendleton wrote:
 If you build the composite indexes as suggested, does your performance 
 improve?

Erm, do you think it would? It's just that with such a large table, and it
being compressed, it takes ages.






Prepared statement for MySQL 4.1

2004-11-11 Thread Scott Hamm
I've read the article about 'prepared statement' found in MySQL 4.1, and am
not sure if I understood what 'prepared statement' does and how can it
benefit us.  Can anyone elaborate on what 'prepared statement' could do with
examples where possible?
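
For illustration only (the table name is made up), the SQL-level syntax in 4.1
shows the idea: the statement is parsed once, then re-executed with different
values:

PREPARE stmt FROM 'SELECT * FROM jobs WHERE id = ?';
SET @id = 5;
EXECUTE stmt USING @id;
SET @id = 6;
EXECUTE stmt USING @id; -- re-run without re-parsing
DEALLOCATE PREPARE stmt;

Besides skipping the re-parse, the 4.1 client/server protocol sends parameter
values separately from the statement text, which also avoids quoting and
escaping problems.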

Thanks,


The Newbie Scott




Re: Queries taking 60 seconds+

2004-11-11 Thread Jigal van Hemert
From: John Smith [EMAIL PROTECTED]
 On Thu, 2004-11-11 at 14:36, Philippe Poelvoorde wrote:
  Hi,
 
  could you try adding a key with
  ALTER TABLE properties ADD INDEX(countyid,old,price);
  It could maybe help getting less rows at a time.

 I dropped the old and price for the where clause and the number of rows
 scanned were the same as without the limit which is good so I am
 guessing the 3 coloum index will do the same.

`price` is still in the ORDER BY, so removing it only from the WHERE clause
will not help really.

- create an INDEX on the columns in the WHERE clause _and_ ORDER BY / GROUP
BY, etc. This is the only way to ensure that all data is retrieved using an
INDEX
- the DESC direction will be slower than ASC (but you'll probably need it
anyway)
- the extra speed you could gain from the LIMIT will be removed by the
SQL_CALC_FOUND_ROWS option. This option makes sure that the query will be
executed as if the limit was not present to calculate the number of rows in
the entire result set.
- run OPTIMIZE TABLE regularly to help MySQL optimize execution paths; the
cardinality of the indexes is used to optimize the execution path.

All sites where huge result sets are possible will limit the set no matter
what. Sites like google _estimate_ the number of results.
You could also retrieve the id's of the desired records (with a maximum of
say 500 records) and store the id's in e.g. session data. Paging through the
results will only require you to retrieve the data of 10 or 20 records at a
time.
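
A sketch of that last idea against John's table (the literal ids in the second
query are placeholders for values pulled back out of session data):

-- once per search: collect up to 500 matching ids; cheap, only `id` is fetched
SELECT id FROM properties
WHERE countyid = 44 AND old = 0 AND price >= 1
ORDER BY price DESC
LIMIT 500;

-- per page: fetch full rows for just the ids on the current page
SELECT address, price, image, description, link
FROM properties
WHERE id IN (812, 57, 1093);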

Hope this will help,

Regards, Jigal





Re: Queries taking 60 seconds+

2004-11-11 Thread mos
At 07:52 AM 11/11/2004, you wrote:
Afternoon All,
The table is read only for most of the day and will get updated (once I
get the queries down to an acceptable level) nightly...if that helps.
Any suggestions or further tips would be very helpful as its taken me
months to get my code to input the data in the way I need its now trying
to get the data back out that is the problem.
Thanks
John
John,
Create a second table (MyISAM) but this time don't use compression 
on the table.

create table newtable select * from oldtable;
Create the compound index as someone else had suggested using Alter Table.
Repeat the query and it should return 9000 rows in a couple of seconds. We 
have tables with tens of millions of rows in them and doing an index 
retrieval on a few thousand rows is very fast. The problem is either the 
compression or index.

Mike 



Re: Queries taking 60 seconds+

2004-11-11 Thread John Wards
On Thu, 2004-11-11 at 15:45, Jigal van Hemert wrote:
 `price` is still in the ORDER BY, so removing it only from the WHERE clause
 will not help really.

No the query I ran didn't have an order by clause (sorry if the one I
pasted did..)

 - create an INDEX on the columns in the WHERE clause _and_ ORDER BY / GROUP
 BY, etc. This is the only way to ensure that all data is retrieved using an
 INDEX

OK, ta.

 - the DESC direction will be slower than ASC (but you'll probably need it
 anyway)

Ah right, I do need that but I will set the default to ASC

 - the extra speed you could gain from the LIMIT will be removed by the
 SQL_CALC_FOUND_ROWS option. This option makes sure that the query will be
 executed as if the limit was not present to calculate the number of rows in
 the entire result set.

This is really for formatting on the site and I suppose isn't really
needed, I used to run 2 queries until I found that option.

I use it to work out page numbers though, but for the beta search until
I can prove I can make money out of this I will remove it.

 - run OPTIMIZE TABLE regularly to help MySQL optimize execution paths; the
 cardinality of the indexes are used to optimize the execution path.

I have only done about 160-170 queries since I ran:

myisamchk -rq --sort-index --analyze

On the table, would this make any difference so soon?

 All sites where huge result sets are possible will limit the set no matter
 what. Sites like google _estimate_ the number of results.
 You could also retrieve the id's of the desired records (with a maximum of
 say 500 records) and store the id's in e.g. session data. Paging through the
 results will only require you to retrieve the data of 10 or 20 records at a
 time.

You've lost me there, it's late in the working day so that might be the
problem ;-)

Cheers
John






Re: Queries taking 60 seconds+

2004-11-11 Thread John Wards
On Thu, 2004-11-11 at 15:51, mos wrote:
 John,
  Create a second table (MyISAM) but this time don't use compression 
 on the table.
 
 create table newtable select * from oldtable;
 

Right will run that just now, good idea...just have to avoid the wife as
no doubt it will bog the site down and she has a production site on the
same server ;-)

 Create the compound index as someone else had suggested using Alter Table.
 
 Repeat the query and it should return 9000 rows in a couple of seconds. We 
 have tables with tens of millions of rows in them and doing an index 
 retrieval on a few thousand rows is very fast. The problem is either the 
 compression or index.

Will report back on success etc

Cheers
John





Help with query performance anomaly

2004-11-11 Thread Graham Cossey
Hi

Can someone offer any advice on a strange problem I have at present...

If I run a certain query (see below) on my local development PC using
mysqlcc it returns in 3.7s.

If I run the exact same query on my live webserver (again using mysqlcc) I
have yet to get a result !!

Both databases have the same table definitions (live db originally created
from mysqldump of dev PC) and have exactly the same [number of] records in
each table. Both machines are running MySQL 3.23.58. Dev PC is Fedora Core 2,
live is RedHat 9.

Other than this one query all else appears normal, any suggestions?
Let me know if you need more info and I'll attempt to supply it...

Many thanks

Graham

Query: SELECT code, sum(qty) as total
FROM table1 as d, db2.table2 as r, table3 as p
WHERE year=2004  AND month=10 AND r.col1=p.col1 and d.code=p.code and
from_period <= 200410 and to_period > 200410 and d.col3!='6'
GROUP BY code






Re: Strange error in DELETE query

2004-11-11 Thread Michael Stassen

Jay Blanchard wrote:
[snip]
When I run the follow query:
DELETE
FROM table
WHERE client_id = 1
AND row_id IN (2,5,7)
only the first record is deleted.
Am I doing something wrong or is it a MySQL bug?
[/snip]
It is not a bug, just say it out loud
AND row_id is 2 OR 5 OR 7
Once the OR condition is satisfied once, the query will halt. 
What are you talking about?  Queries don't halt on the first row matched. 
For example:

  mysql> SELECT * FROM t;
  +-----------+--------+
  | client_id | row_id |
  +-----------+--------+
  |         1 |      1 |
  |         1 |      2 |
  |         1 |      3 |
  |         1 |      4 |
  |         1 |      5 |
  |         1 |      6 |
  |         1 |      7 |
  |         2 |      1 |
  |         2 |      2 |
  |         2 |      3 |
  |         2 |      4 |
  |         2 |      5 |
  |         2 |      6 |
  |         2 |      7 |
  +-----------+--------+
  14 rows in set (0.00 sec)
  mysql> SELECT * FROM t
      -> WHERE client_id = 1
      -> AND row_id IN (2,5,7);
  +-----------+--------+
  | client_id | row_id |
  +-----------+--------+
  |         1 |      2 |
  |         1 |      5 |
  |         1 |      7 |
  +-----------+--------+
  3 rows in set (0.01 sec)
  mysql> DELETE FROM t
      -> WHERE client_id = 1
      -> AND row_id IN (2,5,7);
  Query OK, 3 rows affected (0.00 sec)
  mysql> SELECT * FROM t
      -> WHERE client_id = 1
      -> AND row_id IN (2,5,7);
  Empty set (0.00 sec)
The question is: what do you (Ronan) mean by "only the first record is 
deleted"?  If you run

  SELECT * FROM table
  WHERE client_id = 1
  AND row_id IN (2,5,7)
how many rows do you get?
Michael



Re: Help with query performance anomaly

2004-11-11 Thread SGreen
What does EXPLAIN show for the query on both systems?  (I am wondering if 
you may have an index on your development system that you do not have on 
your production server.)
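
That is, run the problem query prefixed with EXPLAIN on both machines and 
compare the key, key_len, and rows columns:

EXPLAIN SELECT code, sum(qty) as total
FROM table1 as d, db2.table2 as r, table3 as p
WHERE year=2004 AND month=10 AND r.col1=p.col1 and d.code=p.code and
from_period <= 200410 and to_period > 200410 and d.col3!='6'
GROUP BY code;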

Shawn Green
Database Administrator
Unimin Corporation - Spruce Pine


Graham Cossey [EMAIL PROTECTED] wrote on 11/11/2004 11:19:08 AM:

 Hi
 
 Can someone offer any advice on a strange problem I have at present...
 
 If I run a certain query (see below) on my local development PC using
 mysqlcc it returns in 3.7s.
 
 If I run the exact same query on my live webserver (again using mysqlcc) 
I
 have yet to get a result !!
 
 Both databases have the same table definitions (live db originally 
created
 from mysqldump of dev PC) and have exactly the same [number of] records 
in
 each table. Both machines are running MySQL 3.23.58. Dev PC if Fedora 
Core2,
 live is RedHat 9.
 
 Other than this one query all else appears normal, any suggestions?
 Let me know if you need more info and I'll attempt to supply it...
 
 Many thanks
 
 Graham
 
 Query: SELECT code, sum(qty) as total
 FROM table1 as d, db2.table2 as r, table3 as p
 WHERE year=2004  AND month=10 AND r.col1=p.col1 and d.code=p.code and
 from_period <= 200410 and to_period > 200410 and d.col3!='6'
 GROUP BY code
 
 
 
 


Re: Strange error in DELETE query

2004-11-11 Thread Ronan Lucio
Jay,

 It is not a bug, just say it out loud
 AND row_id is 2 OR 5 OR 7

 Once the OR condition is satisfied once, the query will halt. 

The problem is that if I use OR in the where clause, MySQL won't
use the index on the row_id column.

One important thing that I forgot to say is I run a SELECT with
the same where clause:

SELECT *
FROM table
WHERE client_id = 1
AND row_id IN (2,5,7)

and it returns me three rows,

thanks,
Ronan






Re: Queries taking 60 seconds+

2004-11-11 Thread Stefan Kuhn
It is a property of Mysql that such a query will benefit greatly from a 
composite index. So I would not consider anything else without having tried 
this.


On Thursday 11 November 2004 16:29, John Smith wrote:
 On Thu, 2004-11-11 at 14:59, Victor Pendleton wrote:
  If you build the composit indexes as suggested, does your performance
  improve?

 Erm, do you think it would? Its just that with such a large table and it
 being compressed it takes ages?

-- 
Stefan Kuhn M. A.
Cologne University BioInformatics Center (http://www.cubic.uni-koeln.de)
Zülpicher Str. 47, 50674 Cologne
Tel: +49(0)221-470-7428   Fax: +49 (0) 221-470-7786
My public PGP key is available at http://pgp.mit.edu




RE: Help with query performance anomaly

2004-11-11 Thread Graham Cossey
Thanks Shaun

EXPLAIN shows the same 'possible keys' for each table, but the 'key' and
'key_len' columns are different, as are the 'rows', of course.

I guess this points to a probable difference in key definitions?

Can 2 installations with the same table definitions produce different
results like this? Maybe something in the configs?

Thanks

Graham

 -Original Message-
 From: [EMAIL PROTECTED] [mailto:[EMAIL PROTECTED]
 Sent: 11 November 2004 16:28
 To: Graham Cossey
 Cc: [EMAIL PROTECTED]
 Subject: Re: Help with query performance anomaly


 What does EXPLAIN show for the query on both systems?  (I am wondering if
 you may have an index on your development system that you do not have on
 your production server.)

 Shawn Green
 Database Administrator
 Unimin Corporation - Spruce Pine


 Graham Cossey [EMAIL PROTECTED] wrote on 11/11/2004 11:19:08 AM:

  Hi
 
  Can someone offer any advice on a strange problem I have at present...
 
  If I run a certain query (see below) on my local development PC using
  mysqlcc it returns in 3.7s.
 
  If I run the exact same query on my live webserver (again using
 mysqlcc)
 I
  have yet to get a result !!
 
  Both databases have the same table definitions (live db originally
 created
  from mysqldump of dev PC) and have exactly the same [number of] records
 in
  each table. Both machines are running MySQL 3.23.58. Dev PC is Fedora
 Core2,
  live is RedHat 9.
 
  Other than this one query all else appears normal, any suggestions?
  Let me know if you need more info and I'll attempt to supply it...
 
  Many thanks
 
  Graham
 
  Query: SELECT code, sum(qty) as total
  FROM table1 as d, db2.table2 as r, table3 as p
  WHERE year=2004  AND month=10 AND r.col1=p.col1 and d.code=p.code and
  from_period <= 200410 and to_period > 200410 and d.col3!='6'
  GROUP BY code
 
 
 
  --
  MySQL General Mailing List
  For list archives: http://lists.mysql.com/mysql
  To unsubscribe:http://lists.mysql.com/[EMAIL PROTECTED]
 




-- 
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/[EMAIL PROTECTED]



Re: Strange error in DELETE query

2004-11-11 Thread Ronan Lucio
Michael,

 What are you talking about?  Queries don't halt on the first row matched.
 For example:

It's my thought, too. But it isn't happening on my MySQL server.

Now, doing the same tests you did, I got the same results as you.
Well, I'll inspect my code again, looking for some error that I
didn't see yet.

thanks,
Ronan



-- 
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/[EMAIL PROTECTED]



RE: Simple Small Database

2004-11-11 Thread Chris W. Parker
John mailto:[EMAIL PROTECTED]
on Wednesday, November 10, 2004 10:04 PM said:

  I want to make a small simple database that searches
 by state or zip code for jobs. I would enter just a
 job description, job position and job id #. so 3
 fields display. I want to enter in the information by
 an admin area but not a big deal. How hard would this
 be to create.

for me, that would be really easy (albeit time consuming). for you, it
sounds like it would be really hard (and even more time consuming).

but seriously, all those parts are individually very easy for many
people on this list. but for someone that seemingly doesn't know
anything about creating a database or writing the scripts to manage it,
it could be quite a daunting task.

there are two completely different subjects in your question (well
actually i guess it would be a statement since you never used any
question marks...). one is database related, the other is script related
(script = PHP).

i suggest you ask specific questions regarding MySQL databases to this
list and you ask specific questions to the PHP-General list (you can
sign up by going to www.php.net).
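
(for what it's worth, a minimal sketch of the kind of table the original
poster describes - every name and type below is assumed, not given:)

CREATE TABLE jobs (
  job_id      INT UNSIGNED NOT NULL AUTO_INCREMENT,
  position    VARCHAR(80) NOT NULL,
  description TEXT NOT NULL,
  state       CHAR(2) NOT NULL,
  zip         CHAR(5) NOT NULL,
  PRIMARY KEY (job_id),
  INDEX (state),
  INDEX (zip)
);

# searching by state or by zip then uses the two indexes:
SELECT job_id, position, description FROM jobs WHERE state = 'TX';
SELECT job_id, position, description FROM jobs WHERE zip = '75001';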


hth,
chris.

--
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/[EMAIL PROTECTED]



RE: Strange error in DELETE query

2004-11-11 Thread Paul DuBois
At 8:18 -0600 11/11/04, Jay Blanchard wrote:
[snip]
When I run the follow query:
DELETE
FROM table
WHERE client_id = 1
AND row_id IN (2,5,7)
only the first record is deleted.
Am I doing something wrong or is it a MySQL bug?
[/snip]
It is not a bug, just say it out loud
AND row_id is 2 OR 5 OR 7
Once the OR condition is satisfied once, the query will halt.
Why would it do that?  DELETE doesn't execute just until it succeeds
in deleting one row. (Not without a LIMIT clause, at least.)
Anyway, here's my test:
DROP TABLE IF EXISTS t;
CREATE TABLE t (client_id INT, row_id INT);
INSERT INTO t VALUES(1,1),(1,2),(1,3),(1,4),(1,5),(1,6),(1,7),(1,8);
SELECT * FROM t;
DELETE
FROM t
WHERE client_id = 1
AND row_id IN (2,5,7);
SELECT * FROM t;
Result:
+-----------+--------+
| client_id | row_id |
+-----------+--------+
|         1 |      1 |
|         1 |      2 |
|         1 |      3 |
|         1 |      4 |
|         1 |      5 |
|         1 |      6 |
|         1 |      7 |
|         1 |      8 |
+-----------+--------+
+-----------+--------+
| client_id | row_id |
+-----------+--------+
|         1 |      1 |
|         1 |      3 |
|         1 |      4 |
|         1 |      6 |
|         1 |      8 |
+-----------+--------+
--
Paul DuBois, MySQL Documentation Team
Madison, Wisconsin, USA
MySQL AB, www.mysql.com
--
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/[EMAIL PROTECTED]


RE: Help with query performance anomaly

2004-11-11 Thread SGreen
Check the results of SHOW CREATE TABLE for the three tables you use and 
compare between production and development. You should be able to spot any 
 differences in your key definitions.

If they are the same on both machines then you should probably run ANALYZE 
TABLE against the three tables on your production machine. That will 
update the query optimizer's statistics for those tables. If the optimizer 
has bad stats it can make poor choices about which index to use.

If that doesn't help, try using the OPTIMIZE TABLE command on your three 
tables. Heavy fragmentation can slow down data retrieval, too.
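
For example, using the anonymized table names from this thread:

SHOW CREATE TABLE table1;    # run on both servers and compare the output
ANALYZE TABLE table1;
ANALYZE TABLE db2.table2;
ANALYZE TABLE table3;
OPTIMIZE TABLE table1;       # repeat for the other two tables if needed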

Let me know how things turn out.

Shawn Green
Database Administrator
Unimin Corporation - Spruce Pine


Graham Cossey [EMAIL PROTECTED] wrote on 11/11/2004 11:48:13 AM:

 Thanks Shaun
 
  EXPLAIN shows the same 'possible_keys' for each table, but the 'key' and
  'key_len' columns are different, as are the 'rows', of course.
 
 I guess this points to a probable difference in key definitions?
 
 Can 2 installations with the same table definitions produce different
 results like this? Maybe something in the configs?
 
 Thanks
 
 Graham
 
  -Original Message-
  From: [EMAIL PROTECTED] [mailto:[EMAIL PROTECTED]
  Sent: 11 November 2004 16:28
  To: Graham Cossey
  Cc: [EMAIL PROTECTED]
  Subject: Re: Help with query performance anomaly
 
 
  What does EXPLAIN show for the query on both systems?  (I am wondering 
if
  you may have an index on your development system that you do not have 
on
  your production server.)
 
  Shawn Green
  Database Administrator
  Unimin Corporation - Spruce Pine
 
 
  Graham Cossey [EMAIL PROTECTED] wrote on 11/11/2004 11:19:08 
AM:
 
   Hi
  
   Can someone offer any advice on a strange problem I have at 
present...
  
   If I run a certain query (see below) on my local development PC 
using
   mysqlcc it returns in 3.7s.
  
   If I run the exact same query on my live webserver (again using
  mysqlcc)
  I
   have yet to get a result !!
  
   Both databases have the same table definitions (live db originally
  created
   from mysqldump of dev PC) and have exactly the same [number of] 
records
  in
   each table. Both machines are running MySQL 3.23.58. Dev PC is 
Fedora
  Core2,
   live is RedHat 9.
  
   Other than this one query all else appears normal, any suggestions?
   Let me know if you need more info and I'll attempt to supply it...
  
   Many thanks
  
   Graham
  
   Query: SELECT code, sum(qty) as total
   FROM table1 as d, db2.table2 as r, table3 as p
   WHERE year=2004  AND month=10 AND r.col1=p.col1 and d.code=p.code 
and
   from_period <= 200410 and to_period > 200410 and d.col3!='6'
   GROUP BY code
  
  
  
   --
   MySQL General Mailing List
   For list archives: http://lists.mysql.com/mysql
   To unsubscribe: http://lists.mysql.com/[EMAIL PROTECTED]
  
 
 
 


Re: Queries taking 60 seconds+

2004-11-11 Thread John Wards
Right, thanks for all the tips - the 3-column index has done the job;
queries are coming back in 0.7 seconds now, which is just the job, even
before they get cached.

Don't know how I missed that one, as it was obvious... I even tried
countyid and old... forgot about price.
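
(For the archive, a sketch of the index John describes - the table name
here is assumed, the columns are the ones he names:)

ALTER TABLE properties ADD INDEX countyid_old_price (countyid, old, price);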

John


-- 
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/[EMAIL PROTECTED]



RE: Help with query performance anomaly

2004-11-11 Thread Graham Cossey
 Thanks Shaun

 EXPLAIN shows the same 'possible_keys' for each table, but the 'key' and
 'key_len' columns are different, as are the 'rows', of course.

 I guess this points to a probable difference in key definitions?

 Can 2 installations with the same table definitions produce different
 results like this? Maybe something in the configs?

[snip]
 
 
  What does EXPLAIN show for the query on both systems?  (I am
 wondering if
  you may have an index on your development system that you do not have on
  your production server.)
 
[snip]
  
   Can someone offer any advice on a strange problem I have at present...
  
   If I run a certain query (see below) on my local development PC using
   mysqlcc it returns in 3.7s.
  
   If I run the exact same query on my live webserver (again using
  mysqlcc)
  I
   have yet to get a result !!
  
   Both databases have the same table definitions (live db originally
  created
   from mysqldump of dev PC) and have exactly the same [number
 of] records
  in
    each table. Both machines are running MySQL 3.23.58. Dev PC is Fedora
  Core2,
   live is RedHat 9.
  
   Other than this one query all else appears normal, any suggestions?
   Let me know if you need more info and I'll attempt to supply it...
  
   Many thanks
  
   Graham
  
[snip]

I've done mysqldumps of the tables involved on both machines and the create
table definitions and key definitions are identical.

The results of my EXPLAINs are pasted below.

Thanks
Graham

DEV BOX:

EXPLAIN SELECT d.dcode, sum(qty) as total
FROM table1 as d, db2.table2 as r, table3 as p
WHERE year=2004  AND month=10 AND r.pcode=p.pcode and d.dcode=p.dcode and
from_period <= 200410 and to_period > 200410 and d.region!='6'
GROUP BY dcode

table | type  | possible_keys                                                 | key      | key_len | ref                 | rows | Extra
d     | ALL   | [NULL]                                                        | [NULL]   | [NULL]  | [NULL]              |  322 | Using where; Using temporary; Using filesort
p     | index | PRIMARY                                                       | PRIMARY  | 19      | [NULL]              | 6082 | Using where; Using index
r     | ref   | PRIMARY,yr_mn_mk_rng_dr,yr_mn_st,yr_mn_pc,yr_mn_f,yr_mn_mk_st | yr_mn_pc | 13      | const,const,p.pcode |   41 | Using where

LIVE SERVER:

EXPLAIN SELECT d.dcode, sum(qty) as total
FROM table1 as d, db2.table2 as r, table3 as p
WHERE year=2004  AND month=10 AND r.pcode=p.pcode and d.dcode=p.dcode and
from_period <= 200410 and to_period > 200410 and d.region!='6'
GROUP BY dcode

table | type | possible_keys                                                 | key     | key_len | ref         | rows  | Extra
d     | ALL  | [NULL]                                                        | [NULL]  | [NULL]  | [NULL]      |   322 | Using where; Using temporary; Using filesort
r     | ref  | PRIMARY,yr_mn_mk_rng_dr,yr_mn_st,yr_mn_pc,yr_mn_f,yr_mn_mk_st | PRIMARY | 8       | const,const | 89618 | Using where
p     | ref  | PRIMARY                                                       | PRIMARY | 4       | r.pcode     |     2 | Using where; Using index



-- 
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/[EMAIL PROTECTED]



Re: mysqld segfaults.

2004-11-11 Thread Sasha Pachev
Fredrik Carlsson wrote:
Hi,
I have a problem with my mysqld: when it's receiving a lot of connections 
and at the same time doing insert/delete/update/select on the same table, 
mysqld segfaults and restarts

041108  0:59:08 Warning: Got signal 14 from thread 162602
Segmentation fault
041708 13:17:54  -e \nNumber of processes running now: 0
041708 13:17:54  mysqld restarted
041108 13:17:55  InnoDB: Started
/usr/pkg/libexec/mysqld: ready for connections.
Version: '4.0.21'  socket: '/tmp/mysql.sock'  port: 3306  Source 
distribution
041108 13:19:06 Warning: Got signal 14 from thread 2

I start mysqld with these flags:
--key_buffer_size=130M --table_cache=256 --sort_buffer_size=16M 
--read_buffer_size=4M --query_cache_size=64M

ulimit -n 8096
ulimit -l 80
ulimit -d 80
ulimit -m 80
ulimit -p 9000
ulimit -s 60
The table has about 450k rows and the size with indexes is ~1.4GB
Running NetBSD 1.6.2 and mysql 4.0.21
What can cause these segfaults, and is there anything I can tune to get 
rid of them?
Try lowering your sort_buffer_size and read_buffer_size - those are allocated 
per thread, and with a lot of connections can cause a memory shortage if they 
are large.
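
(As a rough illustration with the figures above: a 16M sort buffer plus a 4M 
read buffer is about 20 MB per connection, so a burst of 300 connections could 
try to claim roughly 300 x 20 MB = 6 GB on top of the 130M key buffer and 64M 
query cache.)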

--
Sasha Pachev
Create online surveys at http://www.surveyz.com/
--
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/[EMAIL PROTECTED]


Help with report

2004-11-11 Thread Laercio Xisto Braga Cavalcanti
Hi all,
 
I have the following select:
 
 
select  cons_nome as Consultorio,
 dent_nome as Dentista, 
 pac_nome as Paciente, 
 pac_convenio as Tipo, 
 Pac_matricula as Matricula, 
 concat(consulta_dia, 
 '/', 
 consulta_mes, 
 '/', 
 consulta_ano) as Data, 
 concat(Consulta_hora, 
 ':', 
 consulta_minuto) as Hora, 
 Consulta_procedimento as 'Proced.'
 from consulta c, dentista d, paciente p, consultorio cc
 where  cc.cons_codigo  = c.consulta_codigo_cons
 and  c.consulta_codigo_dentista = d.dent_codigo
 and  c.consulta_codigo_paciente = p.pac_codigo
 and   cons_codigo   =   1
 and  consulta_ano  =  2004 
 and  consulta_mes  =  09 
 order by
 cons_codigo,
 dent_codigo,
 consulta_ano, 
 consulta_mes, 
 consulta_dia, 
 Consulta_hora, 
 consulta_minuto
 into outfile 'teste01.txt';
 
This select produces the file teste01.txt below:
 
Itálica Paulista   Dentista 1  MARIA DA DORES CIRIACO DE SOUZA S
99890   8/9/2004   17:20   ORTO
Itálica Paulista   Dentista 1  MARIBEL ROCIO MELGAR PACHECOS
187635  8/9/2004   17:40   EXO
Itálica Paulista   Dentista 1  PAULA CRISTINA CAPUCHO  S
155181  15/9/2004   80 ORTO
Itálica Paulista   Dentista 1  JOSEANE DORIA RICARDO   S
172261  15/9/2004   820ORTO
Itálica Paulista   Dentista 1  RICARDO TERASSI MORAES  S   \N
15/9/2004   840EXO
Itálica Paulista   Dentista 1  RICARDO TERASSI MORAES  S   \N
15/9/2004   90 ORTO
Itálica Paulista   Dentista 1  ERICA GAMA PIMENTEL VIEIRA  S
199166  15/9/2004  9:40ORTO
Itálica Paulista   Dentista 2  YASMIN MACHADO SILVAS
194194  15/9/2004   1:0ORTO
Itálica Paulista   Dentista 2  JAMIL DIAS SILVAS
135453  15/9/2004   1:0ORTO
Itálica Paulista   Dentista 2  FERNANDA MARTINS FERREIRA   N
\N  15/9/2004  15:0ORTO
Itálica Paulista   Dentista 2  ARIANE TELES NASCIMENTO S   \N
15/9/2004   1:20   ORTO
Itálica Paulista   Dentista 2  IVONEIDE SOUSA DE MENEZES   S
148769  15/9/2004  15:40   ORTO
Itálica Paulista   Dentista 2  JOAO RICARDO DE ABRAHAO S   \N
15/9/2004   1:20   ORTO
Itálica Paulista   Dentista 2  AMANDA GONCALVES RODRIGUES  S
169180  15/9/2004  17:0CLINICA
Itálica Paulista   Dentista 2  ALINE OLIVIA SILVA OLIVEIRA S
142403  22/9/2004  10:0ORTO
Itálica Paulista   Dentista 2  FABIANA SANCHES OCANHA  S
152810  22/9/2004   1:0ORTO

and so on...
 
I need the report to have the column headers I put in the select, and to look
like the following:
 
 
Consultorio   DentistaPaciente
Tipo  Matricula   DataHora   Proced.
 
Itálica Paulista  Dentista 1   MARIA DA DORES CIRIACO DE SOUZA
S   99890 8/9/200417:20   ORTO
 MARIBEL ROCIO MELGAR
PACHECO   S   187635   8/9/200417:40   EXO
 PAULA CRISTINA CAPUCHO
S   155181  15/9/200480   ORTO
 JOSEANE DORIA RICARDO
S   172261 15/9/2004 820 ORTO
 RICARDO TERASSI MORAES
S   \N 15/9/2004840 EXO
 RICARDO TERASSI MORAES
S   \N 15/9/200490   ORTO
 ERICA GAMA PIMENTEL
VIEIRAS   199166 15/9/20049:40ORTO
   Dentista 2   YASMIN MACHADO SILVA
S   194194 15/9/20041:0  ORTO
 JAMIL DIAS SILVA
S   135453 15/9/20041:0  ORTO
 FERNANDA MARTINS
FERREIRA   N   \N15/9/200415:0ORTO
 ARIANE TELES NASCIMENTO
S   \N15/9/20041:20ORTO
 IVONEIDE SOUSA DE
MENEZES   S   148769 15/9/200415:40   ORTO
 JOAO RICARDO DE ABRAHAO
S   \N15/9/2004 1:20ORTO
 AMANDA GONCALVES
RODRIGUES S   169180 15/9/2004 17:0CLINICA
 ALINE OLIVIA SILVA
OLIVEIRA  S   142403  22/9/200410:0ORTO
 FABIANA SANCHES OCANHA
S   152810  22/9/20041:0  ORTO
 
Also, the accented text should read Itálica, but it comes out garbled in the output file.
 
Can anybody help me please?


Laercio Xisto Braga Cavalcanti

Endless Technology Sistemas de Informação 

Rua Peixoto Gomide 321 


Re: InnoDB data files keep growing with innodb_file_per_table

2004-11-11 Thread John B. Ivski
Heikki,
Heikki Tuuri wrote:
InnoDB is clearly leaking 'segments' in the system tablespace. They are 
probably undo logs. For some reason, a trx commit or purge fails to free 
them.

SEGMENT id 0 75994 space 0; page 82136; res 2720 used 2680; full ext 41
fragm pages 32; free extents 0; not full extents 1: pages 24
SEGMENT id 0 76006 space 0; page 82136; res 2592 used 2569; full ext 39
fragm pages 32; free extents 0; not full extents 1: pages 41
These big segments are about 40 MB in size. Did you do ALTER TABLE or 
some other big transaction involving a million rows?
Certainly not ALTER TABLE... My program makes nightly backups of the database using SELECT * INTO 
OUTFILE but they're incremental and use less than 5 rows/table, each table being processed in a 
separate transaction. About a fortnight ago I had to do 3-4 complete backups when debugging the 
backup program, and the largest table has ~1.6M rows, so those big segments may be the (still not 
freed) leftovers from those complete backups. The viewer also makes SELECT * from tables, but it is 
rarely used.

99% of queries that involve data modification are inserts that use only one row/transaction and 
happen at most 4-5 times/second, 1% happen every 4 hours, update about 5 rows (total) but never 
change more than 1000 rows/transaction.


SEGMENT id 0 88879 space 0; page 119802; res 1 used 1; full ext 0
fragm pages 1; free extents 0; not full extents 0: pages 0
SEGMENT id 0 0 space 0; page 119802; res 1 used 1; full ext 0
fragm pages 1; free extents 0; not full extents 0: pages 0
SEGMENT id 0 1 space 0; page 119802; res 1 used 1; full ext 0
fragm pages 1; free extents 0; not full extents 0: pages 0
Most segments seem to be undo logs resulting from small transactions.
You said that you tried max_purge_lag, and it caused lengthy waits? What 
value did you try?
I tried 100 and 1000. It was worse with the former, but both performed worse than with 
max_purge_lag=0.

I will try to repeat the problem by simulating your database workload.
Please let me know if there's any info I could help you with.
Please use also innodb_table_monitor, and send the output to me.
Sending it to you directly (it's ~100k and I'm not sure the list allows or 
needs it ;).
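(For anyone following along: in 4.0 the InnoDB monitors are switched on by
creating a specially named InnoDB table - the column definition is
irrelevant - and switched off by dropping it again. Output goes to the
server's error log periodically:)

CREATE TABLE innodb_table_monitor (a INT) TYPE=InnoDB;
# ... let it print to the error log for a while ...
DROP TABLE innodb_table_monitor;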
Good luck,
Ivan
--
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/[EMAIL PROTECTED]


Re: 4.1.7 serious problems

2004-11-11 Thread Sasha Pachev

I tested the memory and it seems ok.
I doubt this is an hardware issue, since version 4.1.3 works perfectly.
Ugo:
Do the production and the test server run on the same hardware?
Are you using the same mysqld binary on both?
What I am suspecting is that the binary on the production system has not 
been compiled for the right processor type.

--
Sasha Pachev
Create online surveys at http://www.surveyz.com/
--
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/[EMAIL PROTECTED]


RE: Help with query performance anomaly

2004-11-11 Thread SGreen
Response at end

Graham Cossey [EMAIL PROTECTED] wrote on 11/11/2004 12:19:17 PM:

  Thanks Shaun
 
  EXPLAIN shows the same 'possible_keys' for each table, but the 'key' and
  'key_len' columns are different, as are the 'rows', of course.
 
  I guess this points to a probable difference in key definitions?
 
  Can 2 installations with the same table definitions produce different
  results like this? Maybe something in the configs?
 
 [snip]
  
  
   What does EXPLAIN show for the query on both systems?  (I am
  wondering if
   you may have an index on your development system that you do not 
have on
   your production server.)
  
 [snip]
   
Can someone offer any advice on a strange problem I have at 
present...
   
If I run a certain query (see below) on my local development PC 
using
mysqlcc it returns in 3.7s.
   
If I run the exact same query on my live webserver (again using
   mysqlcc)
   I
have yet to get a result !!
   
Both databases have the same table definitions (live db originally
   created
from mysqldump of dev PC) and have exactly the same [number
  of] records
   in
    each table. Both machines are running MySQL 3.23.58. Dev PC is 
Fedora
   Core2,
live is RedHat 9.
   
Other than this one query all else appears normal, any 
suggestions?
Let me know if you need more info and I'll attempt to supply it...
   
Many thanks
   
Graham
   
 [snip]
 
 I've done mysqldumps of the tables involved on both machines and the 
create
 table definitions and key definitions are identical.
 
 The results of my EXPLAINs are pasted below.
 
 Thanks
 Graham
 
 DEV BOX:
 
 EXPLAIN SELECT d.dcode, sum(qty) as total
 FROM table1 as d, db2.table2 as r, table3 as p
 WHERE year=2004  AND month=10 AND r.pcode=p.pcode and d.dcode=p.dcode and
 from_period <= 200410 and to_period > 200410 and d.region!='6'
 GROUP BY dcode
 
 table | type  | possible_keys                                                 | key      | key_len | ref                 | rows | Extra
 d     | ALL   | [NULL]                                                        | [NULL]   | [NULL]  | [NULL]              |  322 | Using where; Using temporary; Using filesort
 p     | index | PRIMARY                                                       | PRIMARY  | 19      | [NULL]              | 6082 | Using where; Using index
 r     | ref   | PRIMARY,yr_mn_mk_rng_dr,yr_mn_st,yr_mn_pc,yr_mn_f,yr_mn_mk_st | yr_mn_pc | 13      | const,const,p.pcode |   41 | Using where
 
 LIVE SERVER:
 
 EXPLAIN SELECT d.dcode, sum(qty) as total
 FROM table1 as d, db2.table2 as r, table3 as p
 WHERE year=2004  AND month=10 AND r.pcode=p.pcode and d.dcode=p.dcode and
 from_period <= 200410 and to_period > 200410 and d.region!='6'
 GROUP BY dcode
 
 table | type | possible_keys                                                 | key     | key_len | ref         | rows  | Extra
 d     | ALL  | [NULL]                                                        | [NULL]  | [NULL]  | [NULL]      |   322 | Using where; Using temporary; Using filesort
 r     | ref  | PRIMARY,yr_mn_mk_rng_dr,yr_mn_st,yr_mn_pc,yr_mn_f,yr_mn_mk_st | PRIMARY | 8       | const,const | 89618 | Using where
 p     | ref  | PRIMARY                                                       | PRIMARY | 4       | r.pcode     |     2 | Using where; Using index
 
 

These are two different plans. Your development machine is using the index 
yr_mn_pc on the r table and is joining that table last. On your production 
server, the r table is joined second and is joined by the index PRIMARY. 
Let me know how the ANALYZE TABLE I suggested in a previous message works 
out to help the statistics.
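
(If fresh statistics still leave the optimizer on PRIMARY, 3.23 also accepts
index hints; a sketch, not tested against this schema:)

SELECT d.dcode, sum(qty) as total
FROM table1 as d, table3 as p, db2.table2 as r USE INDEX (yr_mn_pc)
WHERE year=2004 AND month=10 AND r.pcode=p.pcode AND d.dcode=p.dcode
  AND from_period <= 200410 AND to_period > 200410 AND d.region!='6'
GROUP BY dcode;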

Shawn Green
Database Administrator
Unimin Corporation - Spruce Pine

Re: Strange error in DELETE query

2004-11-11 Thread Michael Stassen
Ronan Lucio wrote:
snip
The problem is that if I use OR in the where clause, MySQL won't
use the indexes in the row_id column.
Yes, it will, as long as the OR conditions are on the *same* column.
  WHERE row_id IN (2,5,7)
and
  WHERE (row_id = 2 OR row_id = 5 OR row_id = 7)
are equivalent.  I prefer IN because it is easier to type and read, but 
mysql should treat them the same.

If you tried this with OR and mysql did not use the index, my guess is that 
you forgot the parentheses:

  SELECT * FROM table
  WHERE client_id = 1
  AND row_id = 2 OR row_id = 5 OR row_id = 7;
AND has higher precedence than OR, so mysql would see that as
  WHERE (client_id = 1 AND row_id = 2)
OR row_id = 5 OR row_id = 7;
which won't use an index and isn't what you meant.
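The intended query, with explicit grouping, would be:
  SELECT * FROM table
  WHERE client_id = 1
  AND (row_id = 2 OR row_id = 5 OR row_id = 7);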
Michael
--
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/[EMAIL PROTECTED]


Re: InnoDB data files keep growing with innodb_file_per_table

2004-11-11 Thread Sasha Pachev
Heikki Tuuri wrote:
Ivan,
hmm... could it be that segments 0 1, 0 2, 0 3, etc. were printed close 
to the end of the output? The print routine first prints inode pages 
that are completely used, and after that other inode pages. Since the 
tablespace validation said the tablespace is ok, I guess the segments 
really are there.

Anyway, if we get the ibdata files, it should be relatively easy to find 
out what is wrong.
Heikki:
If no bugs were discovered in that code for the last 4 years, we might be 
dealing with the good old problem that what you read from the disk might not 
always be quite what you wrote to it in the first place due to OS/hardware problems.

--
Sasha Pachev
Create online surveys at http://www.surveyz.com/
--
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/[EMAIL PROTECTED]


Re: Prepared statement for MySQL 4.1

2004-11-11 Thread Gleb Paharenko
Hello.



As the documentation says:

Prepared execution is an efficient way to execute a statement more than once.

You can find good examples in the MySQL documentation and in

tests/client_test.c.



Scott Hamm [EMAIL PROTECTED] wrote:

 I've read the article about 'prepared statement' found in MySQL 4.1, and am

 not sure if I understood what 'prepared statement' does and how can it

 benefit us.  Can anyone elaborate on what 'prepared statement' could do with

 examples where possible?

 

 Thanks,

 

 

 The Newbie Scott

 



-- 
For technical support contracts, goto https://order.mysql.com/?ref=ensita
This email is sponsored by Ensita.NET http://www.ensita.net/
   __  ___ ___   __
  /  |/  /_ __/ __/ __ \/ /Gleb Paharenko
 / /|_/ / // /\ \/ /_/ / /__   [EMAIL PROTECTED]
/_/  /_/\_, /___/\___\_\___/   MySQL AB / Ensita.NET
   ___/   www.mysql.com




-- 
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/[EMAIL PROTECTED]



Re: Prepared statement for MySQL 4.1

2004-11-11 Thread Mark Maunder
Scott,

http://dev.mysql.com/tech-resources/articles/4.1/prepared-statements.html

regards,

Mark.

On Thu, 2004-11-11 at 15:38, Scott Hamm wrote:
 I've read the article about 'prepared statement' found in MySQL 4.1, and am
 not sure if I understood what 'prepared statement' does and how can it
 benefit us.  Can anyone elaborate on what 'prepared statement' could do with
 examples where possible?
 
 Thanks,
 
 
 The Newbie Scott



-- 
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/[EMAIL PROTECTED]



Re: InnoDB data files keep growing with innodb_file_per_table

2004-11-11 Thread John B. Ivski
Heikki,
Heikki Tuuri wrote:
hmm... could it be that segments 0 1, 0 2, 0 3, etc. were printed close 
to the end of the output? The print routine first prints inode pages 
that are completely used, and after that other inode pages. Since the 
tablespace validation said the tablespace is ok, I guess the segments 
really are there.
You're absolutely right, they're there - I must've missed them when looking through the output. 
They're not at the end but around 0 17000, though.

SEGMENT id 0 16683 space 0; page 11430; res 1 used 1; full ext 0
fragm pages 1; free extents 0; not full extents 0: pages 0
SEGMENT id 0 16684 space 0; page 11430; res 1 used 1; full ext 0
fragm pages 1; free extents 0; not full extents 0: pages 0
SEGMENT id 0 1 space 0; page 2; res 2 used 2; full ext 0
fragm pages 2; free extents 0; not full extents 0: pages 0
SEGMENT id 0 2 space 0; page 2; res 1 used 1; full ext 0
fragm pages 1; free extents 0; not full extents 0: pages 0
SEGMENT id 0 3 space 0; page 2; res 1 used 1; full ext 0
fragm pages 1; free extents 0; not full extents 0: pages 0
SEGMENT id 0 4 space 0; page 2; res 1 used 1; full ext 0
fragm pages 1; free extents 0; not full extents 0: pages 0
SEGMENT id 0 5 space 0; page 2; res 1 used 1; full ext 0
fragm pages 1; free extents 0; not full extents 0: pages 0
SEGMENT id 0 6 space 0; page 2; res 0 used 0; full ext 0
fragm pages 0; free extents 0; not full extents 0: pages 0
SEGMENT id 0 7 space 0; page 2; res 1 used 1; full ext 0
fragm pages 1; free extents 0; not full extents 0: pages 0
SEGMENT id 0 8 space 0; page 2; res 0 used 0; full ext 0
fragm pages 0; free extents 0; not full extents 0: pages 0
SEGMENT id 0 9 space 0; page 2; res 1 used 1; full ext 0
fragm pages 1; free extents 0; not full extents 0: pages 0
SEGMENT id 0 10 space 0; page 2; res 4 used 4; full ext 0
fragm pages 4; free extents 0; not full extents 0: pages 0
SEGMENT id 0 11 space 0; page 2; res 1 used 1; full ext 0
fragm pages 1; free extents 0; not full extents 0: pages 0
SEGMENT id 0 12 space 0; page 2; res 0 used 0; full ext 0
fragm pages 0; free extents 0; not full extents 0: pages 0
SEGMENT id 0 13 space 0; page 2; res 1 used 1; full ext 0
fragm pages 1; free extents 0; not full extents 0: pages 0
SEGMENT id 0 14 space 0; page 2; res 0 used 0; full ext 0
fragm pages 0; free extents 0; not full extents 0: pages 0
SEGMENT id 0 15 space 0; page 2; res 160 used 160; full ext 2
fragm pages 32; free extents 0; not full extents 0: pages 0
SEGMENT id 0 17259 space 0; page 2; res 1 used 1; full ext 0
fragm pages 1; free extents 0; not full extents 0: pages 0
SEGMENT id 0 17 space 0; page 2; res 1 used 1; full ext 0
fragm pages 1; free extents 0; not full extents 0: pages 0
SEGMENT id 0 18 space 0; page 2; res 0 used 0; full ext 0
fragm pages 0; free extents 0; not full extents 0: pages 0
SEGMENT id 0 19 space 0; page 2; res 1 used 1; full ext 0
Anyway, if we get the ibdata files, it should be relatively easy to find 
out what is wrong.
I'll delete the whole tablespace this weekend and reimport data from backup. If it keeps growing 
I'll upload the data files (will be easier to do with them occupying much less than 2GB, too ;)

Good luck,
Ivan
--
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/[EMAIL PROTECTED]


RE: Prepared statement for MySQL 4.1

2004-11-11 Thread Scott Hamm
*THAT* one was what I already read and could not understand, since
English isn't my first language.

-Original Message-
From: Mark Maunder [mailto:[EMAIL PROTECTED]
Sent: Thursday, November 11, 2004 12:45 PM
To: Scott Hamm
Cc: 'Mysql ' (E-mail)
Subject: Re: Prepared statement for MySQL 4.1


Scott,

http://dev.mysql.com/tech-resources/articles/4.1/prepared-statements.html

regards,

Mark.

On Thu, 2004-11-11 at 15:38, Scott Hamm wrote:
 I've read the article about 'prepared statement' found in MySQL 4.1, and
am
 not sure if I understood what 'prepared statement' does and how can it
 benefit us.  Can anyone elaborate on what 'prepared statement' could do
with
 examples where possible?
 
 Thanks,
 
 
 The Newbie Scott


-- 
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/[EMAIL PROTECTED]



recover single table from binlog

2004-11-11 Thread walt
Does anyone know if there is a tool or a good way to recover a single table 
from the binlogs ?

I tried 
for n in dev-bin.*; do mysqlbinlog $n | grep table_name >> /tmp/file.sql; done

but it just gives me the first line of the insert/update statement. I guess 
you could dump all the bin files out to a text file and then use a perl 
script to look for 'insert into table_name' or 'update table_name' and keep 
reading lines until you hit a ';'.

walt


-- 
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/[EMAIL PROTECTED]



Fw: Strange error in DELETE query

2004-11-11 Thread Ronan Lucio
Hi,

I think I found out what was wrong.

ColdFusion has a tag, CFQUERYPARAM, that prevents
SQL injection.

Probably CFQUERYPARAM was removing everything after the first comma.

Without using CFQUERYPARAM the code works perfectly.
So, I had to create a UDF to remove everything except digits and commas.

Thanks in advance,
Ronan







-- 
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/[EMAIL PROTECTED]



Re: 4.1.7 serious problems

2004-11-11 Thread Ugo Bellavance
Sasha Pachev wrote:

I tested the memory and it seems ok.
I doubt this is an hardware issue, since version 4.1.3 works perfectly.

Ugo:
Do the production and the test server run on the same hardware?
One is a single Athlon XP, the other is a dual Athlon MP.
Are you using the same mysqld binary on both?
[EMAIL PROTECTED] local]# md5sum mysql-standard-4.1.7-pc-linux-i686.tar.gz
50adc5470228028dd28f0d51ae4f10f3  mysql-standard-4.1.7-pc-linux-i686.tar.gz
[EMAIL PROTECTED] local]# md5sum mysql-standard-4.1.7-pc-linux-i686.tar.gz
50adc5470228028dd28f0d51ae4f10f3  mysql-standard-4.1.7-pc-linux-i686.tar.gz
What I am suspecting is that the binary on the production system has 
not been compiled for the right processor type.


--
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/[EMAIL PROTECTED]


Re: Prepared statement for MySQL 4.1

2004-11-11 Thread Fagyal Csongor
Scott,
I've read the article about 'prepared statement' found in MySQL 4.1, and am
not sure if I understood what 'prepared statement' does and how can it
benefit us.  Can anyone elaborate on what 'prepared statement' could do with
examples where possible?
 

In the simplest case, consider this:
You have an application (say a webpage), which receives user input (say 
a submitted form) which contains an e-mail address that you want to 
insert into a database. Let's say you have this form field in a variable 
called $email:

Inserting:
$dbh->do("INSERT INTO mytable (email) VALUES ('$email')");
Now if you do not verify what is in $email, and the user passes you this 
: notavalidemail@'adress
you can run into trouble, as the above statement will parse to
INSERT INTO mytable (email) VALUES ('notavalidemail@'adress)
which is not good, it produces an SQL error because of the extra ' 
character. You can run into more serious trouble if you get something 
more SQL-like from your form, say DELETE FROM mytable or something 
like that ;-)

Using placeholders takes care of quoting; that is, the SQL statement will 
always be valid, as the data will no longer be part of the query. This:
$dbh->do("INSERT INTO mytable (email) VALUES (?)", undef, $email);
...will insert $email into the table without a runtime error (well, it 
is probably not what you want, as you will have an invalid e-mail 
address, but it is still better compared to letting others execute SQL 
commands on your machine...)

This thing by itself should be enough for anyone NOT to use 
non-prepared statements (like many badly written PHP scripts do). But 
there are more things... well, read the article once again, or wait for 
another e-mail on speed benefits :-)
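
(A side note: 4.1 also exposes prepared statements at the SQL level, which
is an easy way to try them out from the mysql command-line client; the
address below is just a placeholder:)

PREPARE ins FROM 'INSERT INTO mytable (email) VALUES (?)';
SET @email = 'someone@example.com';
EXECUTE ins USING @email;
DEALLOCATE PREPARE ins;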

Regards,
- Csongor
--
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/[EMAIL PROTECTED]


RE: Help with query performance anomaly

2004-11-11 Thread Graham Cossey


[big snip]


 These are two different plans. Your development machine is using
 the index
 yr_mn_pc on the r table and is joining that table last. On your
 production
 server, the r table is joined second and is joined by the index PRIMARY.
 Let me know how the ANALYZE TABLE I suggested in a previous message works
 out to help the statistics.


I have run ANALYZE on all tables on the live server and the result of
EXPLAIN is the same as before.

Any further suggestions? I'm off to double check the create table stuff once
more...

Thanks for your help (and patience!!)

Graham



-- 
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/[EMAIL PROTECTED]



RE: Prepared statement for MySQL 4.1

2004-11-11 Thread Kevin Cowley
Re speed benefits

There aren't any - at least not through the C API. It's at least twice as
slow as writing embedded statements and parsing every time. I'm waiting for
them to announce they've fixed it before we consider this route again.

Kevin Cowley
R&D
 
Tel: 0118 902 9099 (direct line)
Email: [EMAIL PROTECTED]
Web: http://www.alchemetrics.co.uk

 -Original Message-
 From: Fagyal Csongor [mailto:[EMAIL PROTECTED]
 Sent: 11 November 2004 18:10
 To: 'Mysql ' (E-mail)
 Subject: Re: Prepared statement for MySQL 4.1
 
 Scott,
 
 I've read the article about 'prepared statement' found in MySQL 4.1, and
 am
 not sure if I understood what 'prepared statement' does and how can it
 benefit us.  Can anyone elaborate on what 'prepared statement' could do
 with
 examples where possible?
 
 
 
 In the simplest case, consider this:
 
 You have an application (say a webpage), which receives user input (say
  a submitted form) which contains an e-mail address that you want to
 insert into a database. Let's say you have this form field in a variable
 called $email:
 
 Inserting:
  $dbh->do("INSERT INTO mytable (email) VALUES ('$email')");
 
 Now if you do not verify what is in $email, and the user passes you this
 : notavalidemail@'adress
 you can run into trouble, as the above statement will parse to
 INSERT INTO mytable (email) VALUES ('notavalidemail@'adress)
 which is not good, it produces an SQL error because of the extra '
 character. You can run into more serious trouble if you get something
 more SQL-like from your form, say DELETE FROM mytable or something
 like that ;-)
 
  Using placeholders takes care of quoting; that is, the SQL statement will
  always be valid, as the data will no longer be part of the query. This:
  $dbh->do("INSERT INTO mytable (email) VALUES (?)", undef, $email);
 ...will insert $email into the table without a runtime error (well, it
 is probably not what you want, as you will have an invalid e-mail
 address, but it is still better compared to letting others execute SQL
 commands on your machine...)
 
 This thing by itself should be enough for anyone NOT to use
 non-prepared statements (like many badly written PHP scripts do). But
 there are more things... well, read the article once again, or wait for
 another e-mail on speed benefits :-)
 
 Regards,
 - Csongor
 
 --
 MySQL General Mailing List
 For list archives: http://lists.mysql.com/mysql
 To unsubscribe:
 http://lists.mysql.com/[EMAIL PROTECTED]


**
ALCHEMETRICS LIMITED (ALCHEMETRICS)
Mulberry Park, Fishponds Road, Wokingham, Berkshire, RG41 2GX
Tel:  +44 (0) 118 902 9000Fax:  +44 (0) 118 902 9001
This e-mail is confidential and is intended for the use of the addressee only.
If you are not the intended recipient, you are hereby notified that you must 
not use, copy, disclose, otherwise disseminate or take any action based on 
this e-mail or any information herein.
If you receive this transmission in error, please notify the sender
immediately by reply e-mail or by using the contact details above and then
delete this e-mail.
Please note that e-mail may be susceptible to data corruption, interception 
and unauthorised amendment.  Alchemetrics does not accept any liability for 
any such corruption, interception, amendment or the consequences thereof.
**


-- 
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/[EMAIL PROTECTED]



Re: Help with query performance anomaly

2004-11-11 Thread Jamie Kinney
How do the OS statistics look on both boxes?  Do top, sar, vmstat or
iostat show any CPU, memory or I/O performance issues?  Does anything
odd appear in the /var/log/messages file?

-Jamie

On Thu, 11 Nov 2004 18:42:48 -, Graham Cossey
[EMAIL PROTECTED] wrote:
 
 
 [big snip]
 
 
  These are two different plans. Your development machine is using
  the index
  yr_mn_pc on the r table and is joining that table last. On your
  production
  server, the r table is joined second and is joined by the index PRIMARY.
  Let me know how the ANALYZE TABLE I suggested in a previous message works
  out to help the statistics.
 
 
 I have run ANALYZE on all tables on the live server and the result of
 EXPLAIN is the same as before.
 
 Any further suggestions? I'm off to double check the create table stuff once
 more...
 
 Thanks for your help (and patience!!)
 
 Graham
 
 
 
 
 --
 MySQL General Mailing List
 For list archives: http://lists.mysql.com/mysql
 To unsubscribe:http://lists.mysql.com/[EMAIL PROTECTED]
 


-- 
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/[EMAIL PROTECTED]



RE: RAID Question

2004-11-11 Thread Kirti S. Bajwa
Paul:

Thank you for your response. Another question:

Would it cause a problem if I configure --with-raid and then never use it in
any programming? Will it add any overhead?

It would help if this feature is available for future use.

Thanks 2nd time.

Kirti

-Original Message-
From: Paul DuBois [mailto:[EMAIL PROTECTED]
Sent: Wednesday, November 10, 2004 8:18 PM
To: Kirti S. Bajwa; [EMAIL PROTECTED]
Subject: RE: RAID Question


At 15:19 -0500 11/10/04, Kirti S. Bajwa wrote:
Hello List:

System: RH9, MySQL 4.1.7

I am in the process of re-setting up (I have test setup 4-5 times) a data
server with the above software. This server consists of 2-CPU (Intel)
RAID-1, 1-40GB IDE HDD for O/S & 2-250GB IDE HDD for storing data. 250 GB
IDE HDD are mirrored (RAID-1).

Previously, I setup RAID while setting up RH9. Recently, while reviewing
the
MySQL documentation, I noticed the following directive for configure
command:

# ./configure --prefix=/usr/local/mysql --with-raid

While researching on GOOGLE, I did find quite a bit of information on a
'MySQL RAID HOWTO' search, but nothing to answer my question. Can someone
explain how the above directive in configure works? In my setup, do I need
the above directive as shown?

It doesn't have anything to do with hardware raid.  It enables support
for the RAID table options that allow you to split the data file for
MyISAM tables into several files.

See the description for RAID_TYPE, RAID_CHUNKS, and RAID_CHUNKSIZE
options at: http://dev.mysql.com/doc/mysql/en/CREATE_TABLE.html
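
A sketch of those options in a table definition (the table itself is made
up for illustration):

CREATE TABLE access_log (
  id  INT NOT NULL,
  msg VARCHAR(255)
) TYPE=MyISAM RAID_TYPE=STRIPED RAID_CHUNKS=4 RAID_CHUNKSIZE=256;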

-- 
Paul DuBois, MySQL Documentation Team
Madison, Wisconsin, USA
MySQL AB, www.mysql.com

-- 
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/[EMAIL PROTECTED]

-- 
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/[EMAIL PROTECTED]



Re: InnoDB data files keep growing with innodb_file_per_table

2004-11-11 Thread Heikki Tuuri
Sasha,
- Original Message - 
From: Sasha Pachev [EMAIL PROTECTED]
Newsgroups: mailing.database.myodbc
Sent: Thursday, November 11, 2004 8:05 PM
Subject: Re: InnoDB data files keep growing with innodb_file_per_table


Heikki Tuuri wrote:
Ivan,
hmm... could it be that segments 0 1, 0 2, 0 3, etc. were printed close
to the end of the output? The print routine first prints inode pages
that are completely used, and after that other inode pages. Since the
tablespace validation said the tablespace is ok, I guess the segments
really are there.
Anyway, if we get the ibdata files, it should be relatively easy to find
out what is wrong.
Heikki:
If no bugs were discovered in that code for the last 4 years, we might be
dealing with the good old problem that what you read from the disk might 
not
always be quite what you wrote to it in the first place due to OS/hardware 
problems.
nice to hear from you :). Maybe you have time to attend MySQL Conference 
2005 in Silicon Valley?

I agree, that is a possibility. But the bug is rather specific, undo logs 
not being freed. That suggests an InnoDB bug in trx0*.c.

--
Sasha Pachev
Create online surveys at http://www.surveyz.com/
Best regards,
Heikki Tuuri
Innobase Oy
Foreign keys, transactions, and row level locking for MySQL
InnoDB Hot Backup - a hot backup tool for InnoDB which also backs up MyISAM 
tables
http://www.innodb.com/order.php

Order MySQL technical support from https://order.mysql.com/
--
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/[EMAIL PROTECTED]


Re: problem with distinct not solved by group by

2004-11-11 Thread Michael Stassen
You want the rows containing the maximum post_d for each movies_id group. 
As you've seen, this is tricky.  Fortunately, the manual suggests three 
solutions to this problem 
http://dev.mysql.com/doc/mysql/en/example-Maximum-column-group-row.html.
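One common approach - the self-exclusion LEFT JOIN - sketched against the
reviews table alone (join users and movies back in afterwards); it keeps a
review only if no newer review exists for the same movie:

SELECT r1.movies_id, r1.post_d
FROM reviews r1
LEFT JOIN reviews r2
  ON r2.movies_id = r1.movies_id AND r2.post_d > r1.post_d
WHERE r2.movies_id IS NULL
ORDER BY r1.post_d DESC
LIMIT 4;

(Ties on post_d can produce duplicates, so a unique timestamp or an id
tie-break helps.)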

Michael
Seth Leonard wrote:
I have three tables:
reviews
users
movies
I am trying to select the latest 4 reviews for DIFFERENT movies.
I can use the following query:
SELECT reviews.movies_id, movies.movie_title, users.name,
reviews.rating, reviews.post_d FROM reviews, users, movies WHERE
reviews.user_id = users.user_id and reviews.movies_id = movies.movie_id
ORDER BY post_d DESC LIMIT 4
However, this can return the same movie twice if any of the last 4
reviews are of the same movie.
DISTINCT is no help because I only want a distinct on movies_id, not
the whole row.
GROUP BY movies_id is no help because it takes the oldest review from
the unique movies_id.
Does anyone have an idea where I can take the most recent 4 rows that
have a different movies_id without doing extra processing work in PHP?
Thanks,
Seth
--
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/[EMAIL PROTECTED]


Moving innodb from Linux to Windows

2004-11-11 Thread Boyd E. Hemphill
All:

 

I am attempting to move a set of db's from Linux (MySQL 4.0.20) to Windows
(4.1.6).  I have everything running, and I can query tables that are all
lower case.

 

However I cannot issue a query on a table that contains uppercase letters.
This is the my.ini file:  

 

[client]

port=3306

 

[mysqld]

port=3306

 

basedir=C:/Program Files/MySQL/MySQL Server 4.1/

datadir=F:/xxx/

tmpdir=F:\\temp\\

 

default-character-set=latin1

default-storage-engine=INNODB

max_connections=100

query_cache_size=100M

table_cache=256

tmp_table_size=103M

thread_cache=8

 

#*** INNODB Specific options ***

innodb_data_home_dir=F:/xxx/

innodb_log_group_home_dir=F:/xxx/

innodb_data_file_path=ibdata1:15G:autoextend

innodb_log_arch_dir=F:/xxx/

 

innodb_additional_mem_pool_size=7M

innodb_flush_log_at_trx_commit=1

innodb_log_buffer_size=4M

innodb_buffer_pool_size=305M

innodb_log_file_size=10M

innodb_thread_concurrency=8

 

 

#*** MyISAM Specific options

myisam_max_sort_file_size=100M

myisam_max_extra_sort_file_size=100M

myisam_sort_buffer_size=205M

key_buffer_size=157M

read_buffer_size=64K

read_rnd_buffer_size=256K

sort_buffer_size=256K

 

#set-variable = lower_case_table_names=2

 

And this is the error I get if I attempt to access the table Grade

 

Select * from Grade

ERROR 1016 (HY000): Can't open file: 'grade.InnoDB' (errno: 1)

 

In the .err log I have:

04 13:47:03  InnoDB error:

Cannot find table tpri/grade from the internal data dictionary

of InnoDB though the .frm file for the table exists. Maybe you

have deleted and recreated InnoDB data files but have forgotten

to delete the corresponding .frm files of InnoDB tables, or you

have moved .frm files to another database?

Look from section 15.1 of http://www.innodb.com/ibman.html

how you can resolve the problem.

04 13:47:03  [ERROR] C:\Program Files\MySQL\MySQL Server
4.1\bin\mysqld-nt: Can't open file: 'grade.InnoDB' (errno: 1)

 

I am pulling my hair out!   Please help!
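
(The commented-out lower_case_table_names line in the my.ini above looks
relevant: on Windows, MySQL lowercases table names in statements by default,
while the InnoDB dictionary imported from Linux still stores Grade in mixed
case. Checking the running value is a quick first step:)

SHOW VARIABLES LIKE 'lower_case_table_names';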

 

Best Regards,

Boyd E. Hemphill

MySQL Certified Professional

[EMAIL PROTECTED]

Triand, Inc.

www.triand.com

O:  (512) 248-2278

 

 



Re: InnoDB data files keep growing with innodb_file_per_table

2004-11-11 Thread Heikki Tuuri
John,
please zip ibdata1, which is 'only' 100 MB, and upload it when you have shut 
down mysqld.

I have been simulating your workload, but I only get 25 segments. No leak 
seen.

Regards,
Heikki
- Original Message - 
From: John B. Ivski [EMAIL PROTECTED]
Newsgroups: mailing.database.myodbc
Sent: Thursday, November 11, 2004 8:17 PM
Subject: Re: InnoDB data files keep growing with innodb_file_per_table


Heikki,
Heikki Tuuri wrote:
hmm... could it be that segments 0 1, 0 2, 0 3, etc. were printed close
to the end of the output? The print routine first prints inode pages
that are completely used, and after that other inode pages. Since the
tablespace validation said the tablespace is ok, I guess the segments
really are there.
You're absolutely right, they're there - I must've missed them when 
looking through the output.
They're not at the end but around 0 17000, though.

SEGMENT id 0 16683 space 0; page 11430; res 1 used 1; full ext 0
fragm pages 1; free extents 0; not full extents 0: pages 0
SEGMENT id 0 16684 space 0; page 11430; res 1 used 1; full ext 0
fragm pages 1; free extents 0; not full extents 0: pages 0
SEGMENT id 0 1 space 0; page 2; res 2 used 2; full ext 0
fragm pages 2; free extents 0; not full extents 0: pages 0
SEGMENT id 0 2 space 0; page 2; res 1 used 1; full ext 0
fragm pages 1; free extents 0; not full extents 0: pages 0
SEGMENT id 0 3 space 0; page 2; res 1 used 1; full ext 0
fragm pages 1; free extents 0; not full extents 0: pages 0
SEGMENT id 0 4 space 0; page 2; res 1 used 1; full ext 0
fragm pages 1; free extents 0; not full extents 0: pages 0
SEGMENT id 0 5 space 0; page 2; res 1 used 1; full ext 0
fragm pages 1; free extents 0; not full extents 0: pages 0
SEGMENT id 0 6 space 0; page 2; res 0 used 0; full ext 0
fragm pages 0; free extents 0; not full extents 0: pages 0
SEGMENT id 0 7 space 0; page 2; res 1 used 1; full ext 0
fragm pages 1; free extents 0; not full extents 0: pages 0
SEGMENT id 0 8 space 0; page 2; res 0 used 0; full ext 0
fragm pages 0; free extents 0; not full extents 0: pages 0
SEGMENT id 0 9 space 0; page 2; res 1 used 1; full ext 0
fragm pages 1; free extents 0; not full extents 0: pages 0
SEGMENT id 0 10 space 0; page 2; res 4 used 4; full ext 0
fragm pages 4; free extents 0; not full extents 0: pages 0
SEGMENT id 0 11 space 0; page 2; res 1 used 1; full ext 0
fragm pages 1; free extents 0; not full extents 0: pages 0
SEGMENT id 0 12 space 0; page 2; res 0 used 0; full ext 0
fragm pages 0; free extents 0; not full extents 0: pages 0
SEGMENT id 0 13 space 0; page 2; res 1 used 1; full ext 0
fragm pages 1; free extents 0; not full extents 0: pages 0
SEGMENT id 0 14 space 0; page 2; res 0 used 0; full ext 0
fragm pages 0; free extents 0; not full extents 0: pages 0
SEGMENT id 0 15 space 0; page 2; res 160 used 160; full ext 2
fragm pages 32; free extents 0; not full extents 0: pages 0
SEGMENT id 0 17259 space 0; page 2; res 1 used 1; full ext 0
fragm pages 1; free extents 0; not full extents 0: pages 0
SEGMENT id 0 17 space 0; page 2; res 1 used 1; full ext 0
fragm pages 1; free extents 0; not full extents 0: pages 0
SEGMENT id 0 18 space 0; page 2; res 0 used 0; full ext 0
fragm pages 0; free extents 0; not full extents 0: pages 0
SEGMENT id 0 19 space 0; page 2; res 1 used 1; full ext 0
Anyway, if we get the ibdata files, it should be relatively easy to find
out what is wrong.
I'll delete the whole tablespace this weekend and reimport data from 
backup. If it keeps growing
I'll upload the data files (will be easier to do with them occupying much 
less than 2GB, too ;)

Good luck,
Ivan
--
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe: 
http://lists.mysql.com/[EMAIL PROTECTED]


--
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/[EMAIL PROTECTED]


Which PHP for MySQL 4.1

2004-11-11 Thread C.F. Scheidecker Antunes
Hello,
I would like to migrate my MySQL servers from 4.0 to 4.1.
As I use PHP as well as Java with these servers I wonder what PHP 4 
version would
be compatible with MySQL 4.1.

Has anyone used MySQL 4.1 with PHP yet?
I appreciate your thoughts.
Thanks.
--
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/[EMAIL PROTECTED]


RE: Which PHP for MySQL 4.1

2004-11-11 Thread Jay Blanchard
[snip]
I would like to migrate my MySQL servers from 4.0 to 4.1.

As I use PHP as well as Java with these servers I wonder what PHP 4 
version would
be compatible with MySQL 4.1.

Has anyone used MySQL 4.1 with PHP yet?
[/snip]

PHP 4 is compatible with MySQL 4.1. My caution to you would be about using
Apache 2, as it has some quirks that haven't been worked out yet.

--
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/[EMAIL PROTECTED]



Re: Which PHP for MySQL 4.1

2004-11-11 Thread Curtis Maurand
Jay Blanchard wrote:
[snip]
I would like to migrate my MySQL servers from 4.0 to 4.1.
As I use PHP as well as Java with these servers I wonder what PHP 4 
version would
be compatible with MySQL 4.1.

Has anyone used MySQL 4.1 with PHP yet?
[/snip]
PHP 4 is compatible with MySQL 4.1. My caution to you would be about using
Apache 2, as it has some quirks that haven't been worked out yet.
 

I've been using Apache 2 with PHP 4 for quite some time.  It's been 
working fine for me and my customers.

Curtis
--
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/[EMAIL PROTECTED]


Re: Which PHP for MySQL 4.1

2004-11-11 Thread Ugo Bellavance
Jay Blanchard wrote:
[snip]
I would like to migrate my MySQL servers from 4.0 to 4.1.
As I use PHP as well as Java with these servers I wonder what PHP 4 
version would
be compatible with MySQL 4.1.

Has anyone used MySQL 4.1 with PHP yet?
[/snip]
PHP 4 is compatible with MySQL 4.1. My caution to you would be about using
Apache 2, as it has some quirks that haven't been worked out yet.
 You must be careful about authentication, though.
see http://dev.mysql.com/doc/mysql/en/Password_hashing.html
and
http://dev.mysql.com/doc/mysql/en/Old_client.html
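
(If an older client library can no longer log in after the upgrade, the
documented stopgap is to give that account a pre-4.1 password hash; the
account name here is hypothetical:)

SET PASSWORD FOR 'webuser'@'localhost' = OLD_PASSWORD('secret');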
--
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/[EMAIL PROTECTED]


Getting count() to include 0 counts in select - slow

2004-11-11 Thread Mark Worsdall



SELECT
_objectives.id,
_objectives.name,
COUNT(go._iso._objective_id)
FROM go._objectives
INNER JOIN go._subjectHeadings
        ON go._subjectHeadings.id = go._objectives.subjectHeadings_id
INNER JOIN go._subjects
        ON go._subjects.id = go._objectives.subjects_id
LEFT JOIN go._iso
        ON go._iso._objective_id = _objectives.id
WHERE
go._subjectHeadings.id = 276
AND
go._subjects.id = 44
GROUP BY 1,2
ORDER BY go._objectives.displayOrder

58 rows in set (24.26 sec)


It did what I wanted, but took as long as the old method.

the _iso.user_id value for this is 175 in case that can be used to speed
things up.

I have tried various things but they all end up with the server's hard
drive going nutty and /tmp (1Gb) filling up!!!


I have included the descriptions of the tables and a count on them to
show the number of records involved.

I can't see the solution yet, but I am sure it can be done with the
select without me delving into indexes etc.


describe _objectives;
+--------------------+---------------+------+-----+---------+
| Field              | Type          | Null | Key | Default |
+--------------------+---------------+------+-----+---------+
| id                 | int(10)       |      | PRI | NULL    |
| subjects_id        | int(10)       | YES  |     | NULL    |
| subjectHeadings_id | int(10)       | YES  |     | NULL    |
| name               | varchar(240)  |      |     |         |
| active             | enum('Y','N') |      |     | Y       |
| displayOrder       | int(10)       |      |     | 10      |
| owner_id           | int(10)       |      |     | 0       |
+--------------------+---------------+------+-----+---------+
SELECT count(*) from go._objectives;
+----------+
| count(*) |
+----------+
|     4087 |
+----------+
1 row in set (0.01 sec)


describe _iso;
+---------------+---------------+------+-----+---------------------+
| Field         | Type          | Null | Key | Default             |
+---------------+---------------+------+-----+---------------------+
| id            | int(10)       |      | PRI | NULL                |
| student_id    | int(10)       |      |     | 0                   |
| ltaForm_id    | int(10)       |      |     | 0                   |
| lta_id        | int(10)       |      |     | 0                   |
| _objective_id | int(10)       |      |     | 0                   |
| sta           | varchar(254)  |      |     |                     |
| comment       | varchar(254)  |      |     |                     |
| lecturerNotes | varchar(254)  |      |     |                     |
| displayOrder  | int(10)       |      |     | 100                 |
| manstatus     | enum('W','C') |      |     | W                   |
| autostatus    | enum('W','C') |      |     | W                   |
| recType       | enum('T','L') |      |     | T                   |
| active        | enum('Y','N') |      |     | Y                   |
| create_date   | datetime      |      |     | 2003-12-08 00:00:00 |
| modified_date | timestamp(14) | YES  |     | NULL                |
| moduser_id    | int(10)       |      |     | 0                   |
| user_id       | int(10)       |      |     | 0                   |
| recuser_id    | int(10)       |      |     | 0                   |
| rec_date      | datetime      |      |     | 2003-12-08 00:00:00 |
+---------------+---------------+------+-----+---------------------+
SELECT count(*) from go._iso;
+----------+
| count(*) |
+----------+
|    11498 |
+----------+
1 row in set (0.01 sec)



Both describe results ended with an Extra column: it shows auto_increment
for the id field and is blank for every other row.
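
(Even though the hope was to avoid indexes: neither describe shows a key on
the join columns, so here is a guess at indexes that should cut the 24
seconds down, using the column names from the query above:)

ALTER TABLE _iso ADD INDEX (_objective_id);
ALTER TABLE _objectives ADD INDEX (subjectHeadings_id, subjects_id);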








-- 
Work:- postmasterAThinwick.demon.co.uk

--
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/[EMAIL PROTECTED]



Re: Which PHP for MySQL 4.1

2004-11-11 Thread OpenMacNews
Jay Blanchard wrote:
[snip]
I would like to migrate my MySQL servers from 4.0 to 4.1.
As I use PHP as well as Java with these servers I wonder what PHP 4
version would
be compatible with MySQL 4.1.
Has anyone used MySQL 4.1 with PHP yet?
[/snip]
fwiw, i build-from-source & use a collection of:
  mysql 4.1.7
  apache 2.0.52 (mpm=worker)
  php 5.0.2
  perl 5.8.5
  bdb 4.3.21 (crypto)
on OSX 10.3.6 without any hitches/issues that *i've* found. one *does* need to be 
careful about versions of pcre between apache & everything else, but once built, 
all's (seemingly) ok.
of course, YMMV ...
cheers,
richard
--
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/[EMAIL PROTECTED]


RE: Help with query performance anomaly

2004-11-11 Thread Graham Cossey

Everything seems fine except for this one query.
I'm not quite sure what the results of top will say, or what they mean,
given it's a Virtual Private Server environment.

Nothing untoward in /var/log/messages, /var/log/httpd/error_log, or the
virtual host httpd logs.

Have just run 'top' on the live server...

Before running the query I get:

13:56:09  up 45 days, 11:47,  1 user,  load average: 0.00, 0.28, 0.44
24 processes: 23 sleeping, 1 running, 0 zombie, 0 stopped
CPU0 states:   0.0% user   0.0% system   0.0% nice   0.0% iowait 100.0% idle
CPU1 states:   0.0% user   0.0% system   0.0% nice   0.0% iowait 100.0% idle
CPU2 states:   0.0% user   0.1% system   0.0% nice   0.0% iowait  99.4% idle
CPU3 states:   0.0% user   0.0% system   0.0% nice   0.0% iowait 100.0% idle
Mem:  6203744k av, 6194148k used,    9596k free,       0k shrd,  304848k buff
      1948476k active,    3601304k inactive
Swap: 4192956k av, 1876604k used, 2316352k free             4081216k cached

  PID USER PRI  NI  SIZE  RSS SHARE STAT %CPU %MEM   TIME CPU COMMAND
 7622   15   0   904  904   748 R 0.1  0.0   0:00   2 top
1 root   9   0   468  440   420 S 0.0  0.0   2:58   1 init
 1733 root   9   0   548  516   480 S 0.0  0.0   0:35   1 syslogd
 2242 root   8   0   808  736   684 S 0.0  0.0   0:11   0 xinetd
 3393 root   8   0   576  552   512 S 0.0  0.0   0:21   1 crond
15329 root   9   0  1284 1156  1088 S 0.0  0.0   0:27   2 sshd
 3264 root   8   0  3676 1548  1488 S 0.0  0.0   0:37   2 httpd
15296 apache 9   0  9904 8872  4752 S 0.0  0.1   0:00   2 httpd
 4576 apache 9   0  9876 8804  4344 S 0.0  0.1   0:01   1 httpd
 8992 root       9   0  1000 1000   848 S 0.0  0.0   0:00   1 mysqld_safe
10433 mysql  9   0 17036  16M  2012 S 0.0  0.2   0:00   2 mysqld
11360 mysql  8   0 17036  16M  2012 S 0.0  0.2   0:00   2 mysqld
11395 mysql  9   0 17036  16M  2012 S 0.0  0.2   0:00   0 mysqld
11425 mysql  9   0 17036  16M  2012 S 0.0  0.2   0:00   2 mysqld
11456 mysql  9   0 17036  16M  2012 S 0.0  0.2   0:00   3 mysqld
11491 mysql  9   0 17036  16M  2012 S 0.0  0.2   0:00   2 mysqld
12128 mysql  9   0 17036  16M  2012 S 0.0  0.2   0:00   2 mysqld
12162 mysql  9   0 17036  16M  2012 S 0.0  0.2   0:00   0 mysqld
12193 mysql  9   0 17036  16M  2012 S 0.0  0.2   0:00   2 mysqld
12224 mysql  9   0 17036  16M  2012 S 0.0  0.2   0:00   1 mysqld
32418 mysql  9   0 17036  16M  2012 S 0.0  0.2   0:00   2 mysqld
 5284    9   0  2288 2244  2068 S 0.0  0.0   0:00   3 sshd
 5538    9   0  1292 1292   984 S 0.0  0.0   0:00   1 bash

Now, I don't know if it's coincidental or not but after setting the query
running and re-issuing the top command I get:

 13:59:49  up 45 days, 11:51,  1 user,  load average: 0.98, 0.61, 0.53
26 processes: 24 sleeping, 2 running, 0 zombie, 0 stopped
CPU0 states:  75.0% user  25.0% system   0.0% nice   0.0% iowait   0.0% idle
CPU1 states:  76.0% user  24.0% system   0.0% nice   0.0% iowait   0.0% idle
CPU2 states:  63.0% user  36.0% system   0.0% nice   0.0% iowait   0.0% idle
Floating point exception

 Does not look good to me !!

Comments?
Advice?

Thanks
Graham

 -Original Message-
 From: Jamie Kinney [mailto:[EMAIL PROTECTED]
 Sent: 11 November 2004 19:25
 To: Graham Cossey
 Cc: [EMAIL PROTECTED]; [EMAIL PROTECTED]
 Subject: Re: Help with query performance anomaly


 How do the OS statistics look on both boxes?  Do top, sar, vmstat or
 iostat show any CPU, memory or I/O performance issues?  Does anything
 odd appear in the /var/log/messages file?

 -Jamie

 On Thu, 11 Nov 2004 18:42:48 -, Graham Cossey
 [EMAIL PROTECTED] wrote:
 
 
  [big snip]
 
  
   These are two different plans. Your development machine is using
   the index
   yr_mn_pc on the r table and is joining that table last. On your
   production
   server, the r table is joined second and is joined by the
 index PRIMARY.
   Let me know how the ANALYZE TABLE I suggested in a previous
 message works
   out to help the statistics.
  
 
  I have run ANALYZE on all tables on the live server and the result of
  EXPLAIN is the same as before.
 
  Any further suggestions? I'm off to double check the create
 table stuff once
  more...
 
  Thanks for your help (and patience!!)
 
  Graham
 
 
 
 
  --
  MySQL General Mailing List
  For list archives: http://lists.mysql.com/mysql
  To unsubscribe:
http://lists.mysql.com/[EMAIL PROTECTED]





-- 
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/[EMAIL PROTECTED]



RE: Which PHP for MySQL 4.1

2004-11-11 Thread Graham Cossey

[snip]
 
 of course, YMMV ...
 
[snip]

Sorry, YMMV? What does that mean?

Is there a reference somewhere for all these acronyms?

Thanks
Graham


-- 
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/[EMAIL PROTECTED]



Temporal values VS. int values

2004-11-11 Thread Jim Grill
Looking for some informed opinions...

Which is faster for searching by date ranges?

...Times stored in DATE columns - temporal values.

...Or dates stored as unix timestamps in an INT(10) UNSIGNED column - int
values.

Most of the queries will be searching for rows between certain date ranges
consisting of up to a few months' worth of data at a time.
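
For concreteness, the two layouts being compared would look something like
this (a sketch; table and column names are made up):

CREATE TABLE events_dt (created DATE NOT NULL, KEY (created));
SELECT COUNT(*) FROM events_dt
WHERE created BETWEEN '2004-09-01' AND '2004-11-11';

CREATE TABLE events_ts (created INT(10) UNSIGNED NOT NULL, KEY (created));
SELECT COUNT(*) FROM events_ts
WHERE created BETWEEN UNIX_TIMESTAMP('2004-09-01')
                  AND UNIX_TIMESTAMP('2004-11-11');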

Thanks,

Jim Grill

Re: Which PHP for MySQL 4.1

2004-11-11 Thread James E Hicks III
Graham Cossey wrote:
Sorry, YMMV? What does that mean?
Is there a reference somewhere for all these acronyms?
 

Yes it's called Google.

--
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/[EMAIL PROTECTED]


RE: Which PHP for MySQL 4.1

2004-11-11 Thread OpenMacNews
Sorry, YMMV? What does that mean?
YMMV = Your Mileage May Vary
Is there a reference somewhere for all these acronyms?
lots.
google (internet acronyms)
richard
--
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/[EMAIL PROTECTED]


slow-query log

2004-11-11 Thread Jeffrey M. Johnson
I noticed something interesting with our slow-query log and am looking for
an answer.

 

Our slow-query log is set to record anything over 7 seconds.

 

While monitoring the server I ran SHOW FULL PROCESSLIST and I occasionally
see entries like:

 

| ID | USER | HOST | DATABASE | Query | 120 | sleep | QUERY |

 

Where 120 is the time, which is well over our threshold of 7 seconds.
However, the query is not written to our slow-query log file and, to the best
of my knowledge, is not counted as a slow query.

 

Why does this occur?  

 

My guess is that it only counts active query time and not sleep time.
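
For reference, the threshold and the server's own slow-query counter can be
checked directly (these are standard variable and status names); as I
understand it, the Time column for a sleeping thread counts idle seconds,
not execution time:

SHOW VARIABLES LIKE 'long_query_time';  -- should report 7
SHOW STATUS LIKE 'Slow_queries';        -- queries that exceeded the threshold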

 

Jeff



RE: Which PHP for MySQL 4.1

2004-11-11 Thread Graham Cossey
[snip]
 
 Sorry, YMMV? What does that mean?
 
 Is there a reference somewhere for all these acronyms?
   
 
 
 Yes it's called Google.
 

Yep, as soon as I posted I knew I should not have :)
It's late and I'm getting lazy...


-- 
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/[EMAIL PROTECTED]




query of two tables returns far too many rows, more than the two tables contain

2004-11-11 Thread Chip Wiegand
I have two tables and I want to get the rows that are different between
them. The result I am getting is almost 50,000 rows, but the two tables,
combined, contain only about 600 rows total. Here is the select statement:

SELECT dealers.account_no, dealers.DealerName, blackgate_users.User_Name,
blackgate_users.DealerName
FROM dealers, blackgate_users
WHERE dealers.account_no NOT LIKE blackgate_users.User_Name

In these tables,
dealers.account_no is the same data as blackgate_users.User_Name, and
dealers.DealerName is the same data as blackgate_users.DealerName.
I just want the rows that are in the dealers table but not in the
blackgate_users table.

Thanks for any help,
Chip Wiegand
Computer Services
Simrad, Inc
425-778-8821 
425-771-7211 (FAX)

-- 
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/[EMAIL PROTECTED]



RE: Help with query performance anomaly

2004-11-11 Thread Graham Cossey
[snip]


I have now managed to get a top while the query is running:

14:29:52  up 45 days, 12:21,  1 user,  load average: 0.69, 0.28, 0.39
25 processes: 23 sleeping, 2 running, 0 zombie, 0 stopped
CPU0 states:  71.1% user  28.0% system   0.0% nice   0.0% iowait   0.0% idle
CPU1 states:  68.0% user  31.0% system   0.0% nice   0.0% iowait   0.0% idle
CPU2 states:  71.0% user  28.0% system   0.0% nice   0.0% iowait   0.0% idle
CPU3 states:  80.0% user  19.0% system   0.0% nice   0.0% iowait   0.0% idle
Mem:  6203744k av, 5764148k used,  439596k free,       0k shrd,  257900k buff
      1839520k active,    3282316k inactive
Swap: 4192956k av, 1881496k used, 2311460k free             3687672k cached

  PID USER PRI  NI  SIZE  RSS SHARE STAT %CPU %MEM   TIME CPU COMMAND
19462 mysql 14   0 19968  19M  2016 R95.4  0.3   0:38   0 mysqld
25248   10   0  1004 1004   748 R 0.3  0.0   0:00   3 top
1 root   9   0   468  440   420 S 0.0  0.0   2:58   1 init
 1733 root   9   0   548  516   480 S 0.0  0.0   0:35   3 syslogd
 2242 root   8   0   808  736   684 S 0.0  0.0   0:11   3 xinetd
 3393 root   9   0   576  552   512 S 0.0  0.0   0:21   2 crond
15329 root   9   0  1284 1156  1088 S 0.0  0.0   0:27   0 sshd
 3264 root   9   0  3676 1548  1500 S 0.0  0.0   0:37   1 httpd
15296 apache 9   0 10632 9608  4768 S 0.0  0.1   0:01   0 httpd
 4576 apache 9   0 10036 8964  4344 S 0.0  0.1   0:01   3 httpd
 8992 root       9   0  1000 1000   848 S 0.0  0.0   0:00   1 mysqld_safe
10433 mysql  9   0 19968  19M  2016 S 0.0  0.3   0:00   1 mysqld
11360 mysql  9   0 19968  19M  2016 S 0.0  0.3   0:00   2 mysqld
11395 mysql  9   0 19968  19M  2016 S 0.0  0.3   0:00   0 mysqld
11425 mysql  9   0 19968  19M  2016 S 0.0  0.3   0:00   2 mysqld
11456 mysql  

Re: Getting count() to include 0 counts in select - slow

2004-11-11 Thread SGreen
This will really speed things up:

ALTER TABLE _iso ADD KEY (_objective_id);

If you do an EXPLAIN on your current query, you will see that no index is
used for the _iso table, so the engine has to do a table scan. And, for
future reference, the output of SHOW CREATE TABLE is more informative
than DESCRIBE because it identifies which columns are in which index and
in what order. Don't forget to use \G and not ; as your command terminator,
like this:

SHOW CREATE TABLE _iso\G

or you will end up with lots of extra formatting.
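
As a sketch of the before/after check (the key name idx_objective is
illustrative, and the query is trimmed down to the join that matters):

EXPLAIN SELECT go._objectives.id, COUNT(go._iso._objective_id)
FROM go._objectives
LEFT JOIN go._iso ON go._iso._objective_id = go._objectives.id
GROUP BY 1\G
-- before the index: the _iso row shows type: ALL (a full table scan)

ALTER TABLE go._iso ADD KEY idx_objective (_objective_id);
-- after: re-running the EXPLAIN should show type: ref, key: idx_objective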

Shawn Green
Database Administrator
Unimin Corporation - Spruce Pine

Mark Worsdall [EMAIL PROTECTED] wrote on 11/11/2004 03:56:12 PM:

 
 
 
 SELECT
 _objectives.id,
 _objectives.name,
 COUNT(go._iso._objective_id)
 FROM go._objectives
 INNER JOIN go._subjectHeadings
 ON go._subjectHeadings.id = go._objectives.subjectHeadings_id
 INNER JOIN go._subjects
 ON go._subjects.id = go._objectives.subjects_id
 LEFT JOIN go._iso
 ON go._iso._objective_id = _objectives.id
 WHERE
 go._subjectHeadings.id = 276
 AND
 go._subjects.id = 44
 GROUP BY 1,2
 ORDER BY go._objectives.displayOrder
 
 58 rows in set (24.26 sec)
 
 
 Did what I wanted but took as long as old method.
 
 the _iso.user_id value for this is 175 in case that can be used to speed
 things up.
 
 I have tried various things but they all end up with the servers hard
 drive going nutty and /tmp (1Gb) filling up!!!
 
 
 I have included the descriptions of the tables and a count on them to
 show number of records involved.
 
 I can't see the solution yet but I am sure it can be done with the
 select without me delving into indexes etc.
 
 
 describe _objectives;
 ++---+--+-+-+
 | Field  | Type  | Null | Key | Default |
 ++---+--+-+-+
 | id | int(10)   |  | PRI | NULL|
 | subjects_id| int(10)   | YES  | | NULL|
 | subjectHeadings_id | int(10)   | YES  | | NULL|
 | name   | varchar(240)  |  | | |
 | active | enum('Y','N') |  | | Y   |
 | displayOrder   | int(10)   |  | | 10  |
 | owner_id   | int(10)   |  | | 0   |
 ++---+--+-+-+
 SELECT count(*) from go._objectives;
 +--+
 | count(*) |
 +--+
 | 4087 |
 +--+
 1 row in set (0.01 sec)
 
 
 describe _iso;
 +---+---+--+-+-+
 | Field | Type  | Null | Key | Default |
 +---+---+--+-+-+
 | id| int(10)   |  | PRI | NULL|
 | student_id| int(10)   |  | | 0   |
 | ltaForm_id| int(10)   |  | | 0   |
 | lta_id| int(10)   |  | | 0   |
 | _objective_id | int(10)   |  | | 0   |
 | sta   | varchar(254)  |  | | |
 | comment   | varchar(254)  |  | | |
 | lecturerNotes | varchar(254)  |  | | |
 | displayOrder  | int(10)   |  | | 100 |
 | manstatus | enum('W','C') |  | | W   |
 | autostatus| enum('W','C') |  | | W   |
 | recType   | enum('T','L') |  | | T   |
 | active| enum('Y','N') |  | | Y   |
 | create_date   | datetime  |  | | 2003-12-08 00:00:00 |
 | modified_date | timestamp(14) | YES  | | NULL|
 | moduser_id| int(10)   |  | | 0   |
 | user_id   | int(10)   |  | | 0   |
 | recuser_id| int(10)   |  | | 0   |
 | rec_date  | datetime  |  | | 2003-12-08 00:00:00 |
 +---+---+--+-+-+
 SELECT count(*) from go._iso;
 +--+
 | count(*) |
 +--+
 |11498 |
 +--+
 1 row in set (0.01 sec)
 
 
 
 This was on the end of both describe results.
  +
   Extra  |
  +
   auto_increment |
  |
  |
  |
  |
  |
  |
  +
 
 
 
 
 
 
 
 
 -- 
 Work:- postmasterAThinwick.demon.co.uk
 
 -- 
 MySQL General Mailing List
 For list archives: http://lists.mysql.com/mysql
 To unsubscribe:http://lists.mysql.com/[EMAIL PROTECTED]
 


Re: query of two tables returns far too many rows, more than the two tables contain

2004-11-11 Thread SGreen
What happened is called a Cartesian product. You basically asked for a 
combination of every value from one table matched to every value from the 
other table. In your case you did have an except-type condition so you 
eliminated a few of the matches.

Let me explain it this way: Imagine you have two identical lists of names. 
Each list is 200 names long and there are no duplicates within the list. 
The query you just wrote would combine each of the first 200 names with 
each one of the other 199 names from the other table that didn't match, so
that your final results would be (200x199) or 39,800 records. Way more than
the 400 you started with, isn't it?

What you want to do is to write a query that lists every record from the 
dealers table OPTIONALLY matched up to rows on the blackgate_users table 
wherever the names match.  This is what the LEFT JOIN operator was 
invented to do (you were using an INNER JOIN).  Now, to find the 
non-matches you look for rows from the blackgate_users table where a 
normally non-null value IS NULL. Because you want ALL of the rows from the 
dealers table but only some of the rows from the blackgate_users table 
(you declare that when you say LEFT JOIN), the query engine fills in the 
missing rows from the right hand table (in this case blackgate_users) 
with all NULL values.

SELECT dealers.account_no, dealers.DealerName, blackgate_users.User_Name, 
blackgate_users.DealerName
FROM dealers
LEFT JOIN blackgate_users
ON  dealers.account_no = blackgate_users.User_Name
WHERE blackgate_users.User_Name IS NULL

So by looking for only those rows where the right hand table contains 
NULL values, you find the non-matching rows. Get it?
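
A tiny worked example of the same pattern (throwaway tables, purely for
illustration):

CREATE TABLE t_all  (name CHAR(1));   -- holds 'x', 'y', 'z'
CREATE TABLE t_some (name CHAR(1));   -- holds 'x', 'y'

SELECT t_all.name
FROM t_all
LEFT JOIN t_some ON t_all.name = t_some.name
WHERE t_some.name IS NULL;            -- returns only 'z', the non-match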

Shawn Green
Database Administrator
Unimin Corporation - Spruce Pine

Chip Wiegand [EMAIL PROTECTED] wrote on 11/11/2004 04:29:08 PM:

 [snip]


mySQL 4.1.3-beta subqueries

2004-11-11 Thread L a n a
Hello,
Since I had a problem with the EXCEPT operator and subqueries, I investigated
the topic of MySQL version capabilities. I tried a few examples that were
offered on this mailing list, and all of them gave me a syntax error, so I
read the manual and tried some examples from it. However, things that should
work still don't work, and I'm getting frustrated... please help...

I have the following two tables in mySQL 4.1.3-beta :
CREATE TABLE `user_info` (
 `comments` varchar(250) default '',
 `user_id` int(11) NOT NULL auto_increment,
 `login_name` varchar(20) default NULL,
 `user_passwd` varchar(20) default NULL,
 PRIMARY KEY  (`user_id`),
 KEY `user_id` (`user_id`)
) TYPE=MyISAM;
CREATE TABLE `user_info` (
 `comments` varchar(250) default '',
 `user_id` int(11) NOT NULL auto_increment,
 `login_name` varchar(20) default NULL,
 `user_passwd` varchar(20) default NULL,
  PRIMARY KEY  (`user_id`),
 KEY `user_id` (`user_id`)
) TYPE=MyISAM;
I've read the manual and there are two examples:
1. SELECT * FROM t1 WHERE column1 = ANY (SELECT column1 FROM t2);
2. SELECT * FROM t1 WHERE (column1,column2) IN (SELECT column1,column2 FROM 
t2);

I adjusted them to my tables and tested as the following:
1.$sql = SELECT * from user_info WHERE login_name = ANY ( SELECT login_name 
from new_user_info);
2.$sql = SELECT * from user_info WHERE (login_name, user_passwd) IN ( 
SELECT login_name, user_passwd FROM new_user_info);

It gives the following error:
You have an error in your SQL syntax. Check the manual that corresponds to 
your MySQL server version for the right syntax to use near 'user_passwd) IN 
( SELECT login_name,user_passwd FROM new_user_i

Why? What's wrong? Can anyone help?
Thank you,
Lana

--
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/[EMAIL PROTECTED]


Maximum row size for MyISAM table type

2004-11-11 Thread Joshua Beall
Hi All,

I am a bit confused by the MySQL documentation on this subject.  From 
http://dev.mysql.com/doc/mysql/en/Storage_requirements.html

The maximum size of a row in a MyISAM table is 65,534 bytes. Each BLOB and 
TEXT column accounts for only five to nine bytes toward this size.

So, the maximum size is 64k, but you can go over that limit by using BLOB or 
TEXT types, because although they can hold up to 4gb (2^32 bytes for 
LONGBLOB/LONGTEXT types), they still only contribute 5 to 9 bytes?  What?  Am 
I reading that correctly?

Thanks for any clarification,
  -Josh 




-- 
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/[EMAIL PROTECTED]



Re: query of two tables returns far too many rows, more than the two tables contain

2004-11-11 Thread Chip Wiegand
[EMAIL PROTECTED] wrote on 11/11/2004 02:05:17 PM:
snip
 What you want to do is to write a query that lists every record from
 the dealers table OPTIONALLY matched up to rows on the 
 blackgate_users table wherever the names match.  This is what the 
 LEFT JOIN operator was invented to do (you were using an INNER 
 JOIN).  Now, to find the non-matches you look for rows from the 
 blackgate_users table where a normally non-null value IS NULL. 
 Because you want ALL of the rows from the dealers table but only 
 some of the rows from the blackgate_users table (you declare that 
 when you say LEFT JOIN), the query engine fills in the missing rows 
 from the right hand table (in this case blackgate_users) with all 
 NULL values. 
 
 SELECT dealers.account_no, dealers.DealerName, 
blackgate_users.User_Name, 
 blackgate_users.DealerName
 FROM dealers 
 LEFT JOIN blackgate_users
ON  dealers.account_no = blackgate_users.User_Name 
 WHERE blackgate_users.User_Name IS NULL 

Thanks for the help. That gets me much closer. I did a count in both
tables and figured there should be 121 rows returned by the query. The
above select statement gets me 141 rows. With a little sleuthing
around in there I will probably figure out what the extra 20 rows are.
Thank you very much.
Regards,
Chip
 

-- 
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/[EMAIL PROTECTED]



Re: SQL Syntax Problem

2004-11-11 Thread Ligaya Turmelle
Think I found it.  I made the changes with explanations of what I did.
If you have any further questions feel free to ask.  Oh, and this should
be on the list for others to see and maybe learn from.

Respectfully,
Ligaya Turmelle
<head>[DATABASE CONNECTION]</head>
<body>
  <div class="formdiv">
    <form action="remote.php" method="GET">
      <select name="order"> <!-- values here are what the switch is based off of.. so I changed them -->
        <option value="1">Country, etc.</option>
        <option value="2">Population</option>
        <option value="3">Nationality</option>
        <option value="4">Nationality: Plural</option>
        <option value="5">Nationality: Adjective</option>
        <option value="6">Geographic Region</option>
      </select>
      <input type="radio" name="direction" value="0">+
      <input type="radio" name="direction" value="1">-
      <input type="submit" name="submit" value="Submit">
    </form>
  </div>
  <?php
			$colors = array( '#eee', '', '#ff9', '', '#cff', '', '#cfc', '' );
			$n=0;
			$size=count($colors);

			$result = mysql_query('select count(*) FROM cia_people C, famarea2 F
WHERE C.IDArea = F.IDArea AND F.IDParent = "eur" AND C.Nationality is
not null');
			if (($result) && (mysql_result($result, 0) > 0)) {
// continue here with the code that starts
//$res = mysql_query("SELECT * FROM type".
			} else {
die('Invalid query: ' . mysql_error());
			}

			switch($_GET['order'])  // using the GET method requires the $_GET superglobal
			{	   	// see http://www.php.net/en/language.variables.predefined in the manual
   case 1:
   $order = 'cia_people.Name';
  		 		break;
   case 2:
   $order = 'cia_people.Pop';
  		 		break;
   case 3:
   $order = 'cia_people.Nationality';
  	 			break;
   case 4:
   $order = 'cia_people.NationalityPlural';
   break;
   case 5:
   $order = 'cia_people.NationalityAdjective';
  		 		break;
  	 			case 6:
   $order = 'famarea2.IDParentReg';
  		 		break;
   default:
   $order = 'cia_people.Name';
   break;
			}
			switch($_GET['direction'])  // same reason as above
			{
  case 0:
$direction = 'ASC';
    	break;

case 1:
$direction = 'DESC';
break;
default:
$direction = 'ASC';
break;
}
$sql =
'SELECT F.IDArea,
   C.IDArea,
   C.Name,
   C.Pop,
   C.Nationality,
   C.NationalityPlural,
   C.NationalityAdjective
FROM cia_people C, famarea2 F
WHERE (C.Nationality is not null)
   AND (F.IDArea = \'eur\')
ORDER BY ' . $order . ' ' . $direction;   /* here we just use the local
variables we moved everything into in the switch statements; note the space,
not a comma, between the column name and ASC/DESC */
$res = mysql_query($sql) or die('Failed to run ' .
$sql . ' - ' . mysql_error());

echo '<table class="sortphp" id="tab_cia_people_peo">
 <thead>
   <tr><th>Country</th><th>X</th></tr>
 </thead>
 <tbody>';
//<!-- BeginDynamicTable -->
$rowcounter=0;
while ($row = mysql_fetch_array ($res)) {
 $c=$colors[$rowcounter++%$size];
 echo tr style=\background-color:$c\ class='.
$row['Name'] .'. $_SERVER['PHP_SELF'] .'?id='.
$row['IDArea'] .
td class='tdname' '. $row['Name'] .'.
$row['Name'] ./td
tdnbsp;/td/tr\n;
}
?>
  </tr>
  </tbody>
</table>
</body>
</html>

David Blomstrom wrote:
Thanks. I guess this is turning into a PHP question
now, but I wondered if you tell me one more thing.
I made the change you suggested, and I now get this
parse error message:
Parse error: parse error, unexpected '{' in
C:\sites\geoworld\about\guide\world\eur\remote.php on
line 119
This is apparently the line it refers to, but it
doesn't make sense to me. I tried deleting the curly
braces/brackets, but it didn't fix anything.
ORDER BY ' . {$_POST['order']} . ',' .
{$_POST['direction']};
This is the script from Hell; every time I change it,
I get a new parse error!
Oh, yes - I also just discovered the single quotes in
my option values, like the one after Nationality:
option value=cia_people.Nationality'
I'm not sure where I picked those up; are they
supposed to be there? I removed them, but, again, it
didn't fix anything.
Thanks.
head[DATABASE CONNECTION]/head
body
  div class=formdiv
form action=remote.php method=GET
  select name=order
  option value=cia_people.Name'Country,
etc./option
  option
value=cia_people.Pop'Population/option
  option
value=cia_people.Nationality'Nationality/option
  option
value=cia_people.NationalityPlural'Nationality:
Plural/option
  option
value=cia_people.NationalityAdjective'Nationality:
Adjective/option
  option
value=famarea2.IDParentRegGeographic
Region/option
  /select
  input type=radio name=direction
value=0+
 

scalability of MySQL - future plans?

2004-11-11 Thread Jacek Becla
Hello,
What are the plans regarding improving scalability of MySQL? We are 
currently trying to decide what technology/product to use for a large 
project that will generate ~600TB/year starting in 2012. Any pointers to 
related articles, or hints on how safe it is to assume that MySQL will be
able to handle a petabyte-scale dataset in 8-10 years, would be greatly appreciated.

Best regards,
Jacek Becla
Stanford University
--
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/[EMAIL PROTECTED]


Re: SQL Syntax Problem

2004-11-11 Thread David Blomstrom
--- Ligaya Turmelle [EMAIL PROTECTED] wrote:

 Think I found it.  I made the changes with
 explanations of what I did. 
 If you have any further questions feel free to ask. 
 Oh and this should 
 be on the list for others to see and maybe learn
 from

Wow, thanks so much for going to all that trouble.
Several other people sent me tips, too. I feel bad
telling you that it still doesn't work. I got an
immediate parse error.

Also, I don't know if I should continue this on the
list since it may be turning into more of a PHP
problem. But it is a cool script that others might
like to learn about. You can see a working example on
my website at
http://www.geoworld.org/reference/people/ (A good
column to sort is Population; you'll see China at the
top of the column if you choose DESCENDING.)

But this page only sorts data from ONE database table.
I'm now trying to make one that will sort fields from
multiple tables. The problem is that there are too
many elements, none of which I really understand. So
if I fix a parse error, the data doesn't display, and
if I fix it so the data displays, the PHP sorting
switch doesn't work.

I have learned a few things:

1. For some reason, I can't limit the display with a
regular WHERE query. It displays ALL the rows (all the
world's nations), even if I ask it to display rows
only WHERE F.IDParent = 'eur' (Eurasia).

To make it work, I have to use an official join,
like this:

FROM cia_people C
LEFT JOIN 
   famarea2 F ON C.IDArea = F.IDArea 
WHERE F.IDParent = 'eur'

* * * * * * * * * *

2. I had the wrong field for the 'eur' values; it
should be F.IDParent, not IDArea.

* * * * * * * * * *

3. This is the most critical code:

ORDER BY ' . $_POST['order'] . ',' .
$_POST['direction'].';

It's usually the first to flake out, either causing a
parse error or simply not functioning. Every time I
modify another key function, I have to modify this
line, and it's too complex for me to re-engineer.
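
For reference, what I gather the final statement reaching MySQL needs to
look like is something of this shape (a sketch with one concrete column
and direction filled in; I haven't managed to get my script to build it
yet):

SELECT C.Name, C.Pop
FROM cia_people C
LEFT JOIN famarea2 F ON C.IDArea = F.IDArea
WHERE F.IDParent = 'eur'
ORDER BY C.Pop DESC;  -- a space between column and direction, not a comma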

* * * * * * * * * *

4. I've received a variety of opinions on the quotes
on functions throughout the source code. I'm not sure
if I should be using single quotes, double quotes,
or no quotes at all in certain instances.

* * * * * * * * * *

5. There may also be a conflict with globals and
$_POST. Again, I don't understand this stuff. If I
understand correctly, I should either turn globals on
or off (or not have them in the first place), and use
$_POST in one instance but not the other?

* * * * * * * * * *

I'm amazed there isn't more information about this
script readily available. It seems like such a useful
function, I thought it would be rather common.

Below is my current source code. It displays the data
correctly, without errors, but the sort function
doesn't work. Once again, it draws from two tables,
named cia_people and famarea2, joined by the field
they share in common, IDArea.

Every field cited as an option value is from table
cia_people except IDParentReg, which is the field from
table famarea2 I want to sort by. Actually, both
tables share a field named Name, but I think I
identified cia_people.Name in the query.

Don't feel obligated to pursue this; I've already
spent two days on it! :)

Thanks.

* * * * * * * * * *

<head>[DATABASE CONNECTION]</head>
<body>
  <div class="formdiv">
    <form action="remote.php" method="GET">
      <select name="order">
      <option value="Name">Country, etc.</option>
      <option value="Pop">Population</option>
      <option value="Nationality">Nationality</option>
      <option value="NationalityPlural">Nationality: Plural</option>
      <option value="NationalityAdjective">Nationality: Adjective</option>
      <option value="IDParentReg">Geographic Region</option>
      </select>
      <input type="radio" name="direction" value="0">+
      <input type="radio" name="direction" value="1">-
      <input type="submit" name="submit" value="Submit">
    </form>
  </div>
  <?php
$colors = array( '#eee', '', '#ff9', '', '#cff', '',
'#cfc', '' );
$n=0;
$size=count($colors);

$result = mysql_query('select count(*) FROM cia_people
C, famarea2 F
  WHERE C.IDArea = F.IDArea AND F.IDParent = "eur"
  AND C.Nationality is not null');
if (($result) && (mysql_result($result, 0) > 0)) {
// continue here with the code that starts
//$res = mysql_query("SELECT * FROM type".
} else {
die('Invalid query: ' . mysql_error());
}

switch($order)
{
   case 1:
   $order = 'Name';
   break;
   case 2:
   $order = 'Pop';
   break;
   case 3:
   $order = 'Nationality';
   break;
   case 4:
   $order = 'NationalityPlural';
   break;
   case 5:
   $order = 'NationalityAdjective';
   break;
   case 6:
   $order = 'IDParentReg';
   break;
   default:
   $order = 'Name';
   break;
}
switch($direction)
{
  case 0:
$direction = 'ASC';
break;

  case 1:
$direction = 'DESC';
break;

  default:
$direction = 'ASC';
break;
}

$sql =
SELECT F.IDArea,
C.IDArea,
C.Name,
C.Pop,
C.Nationality,
C.NationalityPlural,
C.NationalityAdjective
FROM 

Re: Maximum row size for MyISAM table type

2004-11-11 Thread Chris
Joshua Beall wrote:
Hi All,
I am a bit confused by the MySQL documentation on this subject.  From 
http://dev.mysql.com/doc/mysql/en/Storage_requirements.html

The maximum size of a row in a MyISAM table is 65,534 bytes. Each BLOB and 
TEXT column accounts for only five to nine bytes toward this size.

So, the maximum size is 64k, but you can go over that limit by using BLOB or 
TEXT types, because although they can hold up to 4gb (2^32 bytes for 
LONGBLOB/LONGTEXT types), they still only contribute 5 to 9 bytes?  What?  Am 
I reading that correctly?

Thanks for any clarification,
 -Josh 

 

Only 5-9 bytes count toward the total row size because TEXT and BLOB values
don't get stored in the row itself; the row holds just a pointer to them, and
that pointer takes up 5-9 bytes. At least, that's how I understand it.
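
As a sketch of what that accounting means (table and column names are made
up):

CREATE TABLE docs (
  title VARCHAR(255),  -- the declared width counts toward the 65,534-byte row limit
  body  LONGTEXT       -- can hold up to 4GB, yet only its 5-9 byte pointer counts
) TYPE=MyISAM;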

Chris
--
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/[EMAIL PROTECTED]


mySQL 4.1.3-beta subqueries (correction on table names)

2004-11-11 Thread L a n a
Hello,
Since I had a problem with the EXCEPT operator and subqueries, I investigated
the topic of MySQL version capabilities. I tried a few examples that were
offered on this mailing list, and all of them gave me a syntax error, so I
read the manual and tried some examples from it. However, things that should
work still don't work, and I'm getting frustrated... please help...

I have the following two tables in mySQL 4.1.3-beta :
CREATE TABLE `user_info` (
 `comments` varchar(250) default '',
 `user_id` int(11) NOT NULL auto_increment,
 `login_name` varchar(20) default NULL,
 `user_passwd` varchar(20) default NULL,
 PRIMARY KEY  (`user_id`),
 KEY `user_id` (`user_id`)
) TYPE=MyISAM;
CREATE TABLE `new_user_info` (
 `comments` varchar(250) default '',
 `user_id` int(11) NOT NULL auto_increment,
 `login_name` varchar(20) default NULL,
 `user_passwd` varchar(20) default NULL,
  PRIMARY KEY  (`user_id`),
 KEY `user_id` (`user_id`)
) TYPE=MyISAM;
Basically, the two tables contain the same structure of info for old and new users.
I've read the manual and there are two examples:
1. SELECT * FROM t1 WHERE column1 = ANY (SELECT column1 FROM t2);
2. SELECT * FROM t1 WHERE (column1,column2) IN (SELECT column1,column2 FROM 
t2);

I adjusted them to my tables and tested as the following:
1.$sql = SELECT * from user_info WHERE login_name = ANY ( SELECT login_name 
from new_user_info);
2.$sql = SELECT * from user_info WHERE (login_name, user_passwd) IN ( 
SELECT login_name, user_passwd FROM new_user_info);

It gives the following error:
You have an error in your SQL syntax. Check the manual that corresponds to 
your MySQL server version for the right syntax to use near 'user_passwd) IN 
( SELECT login_name,user_passwd FROM new_user_i

Why? What's wrong? Can anyone help?
Thank you,
Lana
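
One thing worth ruling out first (an assumption, not a certain diagnosis):
subqueries and row constructors need a 4.1 server, and a pre-4.1 server
rejects them with exactly this kind of syntax error, so it is worth
confirming which server the client is actually talking to:

SELECT VERSION();
-- anything reporting 4.0.x or older will fail on both subquery forms above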

--
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/[EMAIL PROTECTED]