Re: Error 1034: 136 when fixing table from Create Index on long table

2002-10-22 Thread Chris Stoughton
Jocelyn asked whether the file system supports files over 2 GB, and 
whether there was space left on disk.

The file system /export/data/dp20.a supports files larger than 2 GB. 
For example, the targetTsObj files are larger:

bash-2.04$ ls -lh targetTsObj*
-rw-rw----   1 mysql    mysql    134G Oct 20 05:37 targetTsObj.MYD
-rw-rw----   1 mysql    mysql    5.0k Oct 21 11:38 targetTsObj.MYI
-rw-rw----   1 mysql    mysql     32k Oct 18 21:05 targetTsObj.frm

And, there is still room on the disk that I *think* I am writing the 
index to.
bash-2.04$ df -h .
Filesystem            Size  Used Avail Use% Mounted on
/export/data/dp20.a   1.0T  302G  761G  29% /data/dp20.a
bash-2.04$

I have pointed tmpdir to this file system as well, and smaller databases 
have successfully built indices since these changes.

== So, is there a chance that the index file is being built somewhere 
else, and it runs out of space?  Any clues to help me track this down?

== Should I be using innodb tables instead of myisam tables for this 
application?


Thanks again.


Jocelyn Fournier wrote:

According to show variables, your tmp dir is /export/data/dp20.a/tmp/.
Are you sure you have a file system which allows creating files greater than
2 GB, for example?
----- Original Message -----
From: [EMAIL PROTECTED]
To: Jocelyn Fournier [EMAIL PROTECTED]
Sent: Tuesday, October 22, 2002 1:33 AM
Subject: Re: Error 1034: 136 when fixing table from Create Index on long
table


 

Well, I have datadir, bdb_tmpdir, and tmpdir all pointing to
/export/data/dp20.a, which is 28% full.  It is over one tera-byte.

I am worried that it is trying to write a temporary file, or an index
file, to some other location.

How can I see where the index file is written, to be sure?

Thanks!

- Original Message -
From: Jocelyn Fournier [EMAIL PROTECTED]
Date: Monday, October 21, 2002 5:37 pm
Subject: Re: Error 1034: 136 when fixing table from Create Index on long
table

   

Hi,

[root@forum] /usr/local/mysql/var perror 136
Error code 136:  Unknown error 136
136 = No more room in index file

Are you sure your file system can handle the size of your index
file ?

Regards,
Jocelyn
- Original Message -
From: Chris Stoughton [EMAIL PROTECTED]
To: [EMAIL PROTECTED]
Sent: Tuesday, October 22, 2002 12:26 AM
Subject: Error 1034: 136 when fixing table from Create Index on
long table


 

I am running 3.23.52-Max under Linux.

I now have a table with 54M rows:

mysql> select count(*) from targetTsObj;
+--+
| count(*) |
+--+
| 54549046 |
+--+
1 row in set (0.05 sec)

Creating an index on this takes 1 hour 10 minutes, with this error:

mysql> create index targetTsObjobjId on targetTsObj (objId);
ERROR 1034: 136 when fixing table

The describe command shows that no index has been built.

Previously, it had trouble building this same index, and complained
about not being able to open a file in /tmp
I suspect that /tmp was not large enough, so we changed the tmpdir
variable in my.cnf to point to a file system with *plenty* of room and
restarted the server.

myisamchk seems to have no complaints about this table:

bash-2.04$ myisamchk targetTsObj
Checking MyISAM file: targetTsObj
Data records: 54549046   Deleted blocks:   0
- check file-size
- check key delete-chain
- check record delete-chain
- check index reference
bash-2.04$ myisamchk -d targetTsObj

MyISAM file: targetTsObj
Record format:   Fixed length
Character set:   latin1 (8)
Data records: 54549046  Deleted blocks:         0
Recordlength:     2643

table description:
Key Start Len Index   Type
bash-2.04$

Please let me know what this error means, how to get around it, or what
additional information you need.  Thanks!

Here is the output of mysqladmin variables:

+---------------------------------+---------------------------------+
| Variable_name                   | Value                           |
+---------------------------------+---------------------------------+
| back_log                        | 50                              |
| basedir                         | /                               |
| bdb_cache_size                  | 8388600                         |
| bdb_log_buffer_size             | 262144                          |
| bdb_home                        | /export/data/dp20.a/data/mysql/ |
| bdb_max_lock                    | 1                               |
| bdb_logdir                      |                                 |
| bdb_shared_data                 | OFF                             |
| bdb_tmpdir                      | /export/data/dp20.a/tmp/        |
| bdb_version                     | Sleepycat Software: Berkeley DB 3.2.9a: (August 14, 2002) |
| binlog_cache_size               | 32768                           |
| character_set                   | latin1                          |

How to index a large table?

2002-10-22 Thread Chris Stoughton
I have tried a few different variables to allow a create index command 
to finish successfully.

1.  I set tmpdir to be a file system with ample space
2.  I increased tmp_table_size
3.  I increased myisam_sort_buffer_size to 100M

Here are the sizes of the database files:

bash-2.04$ ls -l  targetTsObj*
-rw-rw----   1 mysql    mysql    144173128578 Oct 20 05:37 targetTsObj.MYD
-rw-rw----   1 mysql    mysql            5120 Oct 21 11:38 targetTsObj.MYI
-rw-rw----   1 mysql    mysql           32750 Oct 18 21:05 targetTsObj.frm


I continue to get this error:


Database changed
mysql> create index targetTsObjobjId on targetTsObj (objId);
ERROR 1034: 136 when fixing table
mysql>

It takes 75 minutes for this to happen.  During that time, it creates a 
set of files called #sql*.MYD, .MYI, and .frm, and these grow until they 
are identical in size to the targetTsObj.* files.

Then, after several minutes of mysqld consuming 99% CPU time, it ends 
with an error.
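
For what it's worth, the disk-space arithmetic on this copy-then-index behaviour can be sketched as follows (byte counts come from the ls and df output in this thread; treating the observed #sql* copy as the only large temporary is an assumption):

```python
# Disk-space estimate for CREATE INDEX on a MyISAM table: the server
# copies the data file to #sql*.MYD before building the new .MYI.
# Figures come from the ls -l / df -h output in this thread.

myd_bytes = 144_173_128_578        # targetTsObj.MYD
free_bytes = 760 * 1024**3         # "760G" available per df -h (approximate)

copy_bytes = myd_bytes             # the #sql*.MYD copy observed above
left_for_myi = free_bytes - copy_bytes

print(f"data-file copy: {copy_bytes / 1024**3:.0f} GiB")
print(f"room left for the new index file: {left_for_myi / 1024**3:.0f} GiB")
```

So free space alone does not explain the failure, which matches the later observation in this thread that over 600 GB remained when the error occurred.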

1.  Is there a variable I should set in my.cnf?
2. Should I be using innodb tables instead of myisam?

Thanks.

==
For the record, here is what mysqladmin variables says:

+---------------------------------+---------------------------------+
| Variable_name                   | Value                           |
+---------------------------------+---------------------------------+
| back_log                        | 50                              |
| basedir                         | /                               |
| bdb_cache_size                  | 8388600                         |
| bdb_log_buffer_size             | 262144                          |
| bdb_home                        | /export/data/dp20.a/data/mysql/ |
| bdb_max_lock                    | 1                               |
| bdb_logdir                      |                                 |
| bdb_shared_data                 | OFF                             |
| bdb_tmpdir                      | /export/data/dp20.a/tmp/        |
| bdb_version                     | Sleepycat Software: Berkeley DB 3.2.9a: (August 14, 2002) |
| binlog_cache_size               | 32768                           |
| character_set                   | latin1                          |
| character_sets                  | latin1 big5 czech euc_kr gb2312 gbk sjis tis620 ujis dec8 dos german1 hp8 koi8_ru latin2 swe7 usa7 cp1251 danish hebrew win1251 estonia 

Re: How to index a large table?

2002-10-22 Thread Chris Stoughton
I have 760 GB free, not 375.

bash-2.04$ df -h .
Filesystem            Size  Used Avail Use% Mounted on
/export/data/dp20.a   1.0T  302G  760G  29% /data/dp20.a
bash-2.04$

The .MYD file is 134 GBytes
bash-2.04$ ls -lh targetTsObj*
-rw-rw----   1 mysql    mysql    134G Oct 20 05:37 targetTsObj.MYD
-rw-rw----   1 mysql    mysql    5.0k Oct 21 11:38 targetTsObj.MYI
-rw-rw----   1 mysql    mysql     32k Oct 18 21:05 targetTsObj.frm

so while the create index command is running, after the #sql* files 
are the same size as the targetTsObj.* files, we still have over 600 GB 
of disk space left.


Jocelyn Fournier wrote:

Hi,

In fact, I think you ran out of disk space.
Your MYD takes 135 GB.
You have 375 GB free.
When MySQL tries to add the index to your database, it copies the MYD and frm
under a #sql* name, so another 135 GB is eaten.
That leaves 240 GB to build the index file, so it's possible you ran out
of free space during the MYI generation. (What does df report?)

Regards,
 Jocelyn
- Original Message -
From: Chris Stoughton [EMAIL PROTECTED]
To: [EMAIL PROTECTED]
Sent: Tuesday, October 22, 2002 9:49 PM
Subject: How to index a large table?


 

I have tried a few different variables to allow a create index command
to finish successfully.

1.  I set tmpdir to be a file system with ample space
2.  I increased tmp_table_size
3.  I increased myisam_sort_buffer_size to 100M

Here are the sizes of the database files:

bash-2.04$ ls -l  targetTsObj*
-rw-rw----   1 mysql    mysql    144173128578 Oct 20 05:37 targetTsObj.MYD
-rw-rw----   1 mysql    mysql            5120 Oct 21 11:38 targetTsObj.MYI
-rw-rw----   1 mysql    mysql           32750 Oct 18 21:05 targetTsObj.frm


I continue to get this error:


Database changed
mysql> create index targetTsObjobjId on targetTsObj (objId);
ERROR 1034: 136 when fixing table
mysql>

It takes 75 minutes for this to happen.  During that time, it creates a
set of files called #sql*.MYD, .MYI, and .frm, and these grow until they
are identical in size to the targetTsObj.* files.

Then, after several minutes of mysqld consuming 99% CPU time, it ends
with an error.

1.  Is there a variable I should set in my.cnf?
2. Should I be using innodb tables instead of myisam?

Thanks.


   


==

For the record, here is what mysqladmin variables says:

+---------------------------------+---------------------------------+
| Variable_name                   | Value                           |
+---------------------------------+---------------------------------+
| back_log                        | 50                              |
| basedir                         | /                               |
| bdb_cache_size                  | 8388600                         |
| bdb_log_buffer_size             | 262144                          |
| bdb_home                        | /export/data/dp20.a/data/mysql/ |
| bdb_max_lock                    | 1                               |
| bdb_logdir                      |                                 |
| bdb_shared_data                 | OFF                             |
| bdb_tmpdir                      | /export/data/dp20.a/tmp/        |
| bdb_version                     | Sleepycat Software: Berkeley DB 3.2.9a: (August 14, 2002) |
| binlog_cache_size               | 32768                           |
| character_set                   | latin1                          |
| character_sets                  | latin1 big5 czech euc_kr gb2312 gbk sjis tis620 ujis dec8 dos german1 hp8 koi8_ru latin2 swe7 usa7 cp1251 danish hebrew win1251 estonia hungarian koi8_ukr win1251ukr greek win1250 croat cp1257 latin5 |
| concurrent_insert               | ON                              |
| connect_timeout                 | 5                               |
| datadir                         | /export/data/dp20.a/data/mysql/ |
| delay_key_write                 | ON                              |
| delayed_insert_limit            | 100                             |
| delayed_insert_timeout          | 300                             |
| delayed_queue_size              | 1000                            |
| flush                           | OFF                             |
| flush_time                      | 0                               |
| have_bdb                        | YES                             |
| have_gemini                     | NO                              |
| have_innodb                     | DISABLED                        |
| have_isam                       | YES                             |
| have_raid                       | NO                              |
| have_openssl                    | NO                              |
| init_file                       |                                 |
| innodb_additional_mem_pool_size | 1048576                         |
| innodb_buffer_pool_size         | 8388608                         |
| innodb_data_file_path           |                                 |
| innodb_data_home_dir            |                                 |
| innodb_file_io_threads          | 4                               |
| innodb_force_recovery           | 0                               |
| innodb_thread_concurrency       | 8                               |
| innodb_flush_log_at_trx_commit  | 16777216                        |
| innodb_fast_shutdown            | ON                              |
| innodb_flush_method             |                                 |
| innodb_lock_wait_timeout        | 50                              |
| innodb_log_arch_dir             |                                 |
| innodb_log_archive              | OFF                             |

Re: How to index a large table?

2002-10-22 Thread Chris Stoughton
Correct, that error means no more room in index file.

So, how do I get more room in the index file?
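
One hedged way to see how a file can be "full" on a half-empty disk: MyISAM addresses its files with fixed-width internal pointers, so a file maxes out when the pointer does, not when the disk does. A small sketch of that arithmetic (the 4-byte default and the MAX_ROWS table option are documented behaviour for the data file in 3.23; applying the same pointer reasoning to the index file is an assumption here):

```python
# MyISAM addresses byte offsets in its files with fixed-width pointers.
# With the default 4-byte pointer, the largest addressable offset is 4 GB,
# regardless of how much disk space remains. CREATE TABLE ... MAX_ROWS=n
# is the documented way to get wider pointers sized for n rows.
pointer_bytes = 4
max_offset = 2 ** (8 * pointer_bytes) - 1
print(f"4-byte pointer limit: {max_offset / 1024**3:.1f} GiB")

# This table clearly outgrew that for its data file; the row math from
# the myisamchk output in this thread reproduces the .MYD size exactly:
rows = 54_549_046
record_length = 2643
data_bytes = rows * record_length
print(data_bytes)   # matches the 144173128578-byte targetTsObj.MYD
```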


Paul DuBois wrote:


At 15:11 -0500 10/22/02, Chris Stoughton wrote:


I have 760 GB free, not 375.



Perhaps, but error 136 still means no more room in index file. 






bash-2.04$ df -h .
Filesystem            Size  Used Avail Use% Mounted on
/export/data/dp20.a   1.0T  302G  760G  29% /data/dp20.a
bash-2.04$

The .MYD file is 134 GBytes
bash-2.04$ ls -lh targetTsObj*
-rw-rw----   1 mysql    mysql    134G Oct 20 05:37 targetTsObj.MYD
-rw-rw----   1 mysql    mysql    5.0k Oct 21 11:38 targetTsObj.MYI
-rw-rw----   1 mysql    mysql     32k Oct 18 21:05 targetTsObj.frm

so while the create index command is running, after the #sql* files 
are the same size as the targetTsObj.* files, we still have over 600 
GB of disk space left.


Jocelyn Fournier wrote:

Hi,

In fact, I think you ran out of disk space.
Your MYD takes 135 GB.
You have 375 GB free.
When MySQL tries to add the index to your database, it copies the MYD
and frm under a #sql* name, so another 135 GB is eaten.
That leaves 240 GB to build the index file, so it's possible you ran
out of free space during the MYI generation. (What does df report?)

Regards,
 Jocelyn
- Original Message -
From: Chris Stoughton [EMAIL PROTECTED]
To: [EMAIL PROTECTED]
Sent: Tuesday, October 22, 2002 9:49 PM
Subject: How to index a large table?



I have tried a few different variables to allow a create index 
command
to finish successfully.

1.  I set tmpdir to be a file system with ample space
2.  I increased tmp_table_size
3.  I increased myisam_sort_buffer_size to 100M

Here are the sizes of the database files:

bash-2.04$ ls -l  targetTsObj*
-rw-rw----   1 mysql    mysql    144173128578 Oct 20 05:37 targetTsObj.MYD
-rw-rw----   1 mysql    mysql            5120 Oct 21 11:38 targetTsObj.MYI
-rw-rw----   1 mysql    mysql           32750 Oct 18 21:05 targetTsObj.frm


I continue to get this error:


Database changed
mysql> create index targetTsObjobjId on targetTsObj (objId);
ERROR 1034: 136 when fixing table
mysql>








-
Before posting, please check:
  http://www.mysql.com/manual.php   (the manual)
  http://lists.mysql.com/   (the list archive)

To request this thread, e-mail [EMAIL PROTECTED]
To unsubscribe, e-mail [EMAIL PROTECTED]
Trouble unsubscribing? Try: http://lists.mysql.com/php/unsubscribe.php




Error 1034: 136 when fixing table from Create Index on long table

2002-10-21 Thread Chris Stoughton
I am running 3.23.52-Max under Linux.

I now have a table with 54M rows:

mysql> select count(*) from targetTsObj;
+--+
| count(*) |
+--+
| 54549046 |
+--+
1 row in set (0.05 sec)

Creating an index on this takes 1 hour 10 minutes, with this error:

mysql> create index targetTsObjobjId on targetTsObj (objId);
ERROR 1034: 136 when fixing table

The describe command shows that no index has been built.

Previously, it had trouble building this same index, and complained 
about not being able to open a file in /tmp
I suspect that /tmp was not large enough, so we changed the tmpdir 
variable in my.cnf to point to a file system with *plenty* of room and 
restarted the server.  

myisamchk seems to have no complaints about this table:

bash-2.04$ myisamchk targetTsObj
Checking MyISAM file: targetTsObj
Data records: 54549046   Deleted blocks:   0
- check file-size
- check key delete-chain
- check record delete-chain
- check index reference
bash-2.04$ myisamchk -d targetTsObj

MyISAM file: targetTsObj
Record format:   Fixed length
Character set:   latin1 (8)
Data records: 54549046  Deleted blocks: 0
Recordlength: 2643

table description:
Key Start Len Index   Type
bash-2.04$

Please let me know what this error means, how to get around it, or what 
additional information you need.  Thanks!

Here is the output of mysqladmin variables:

+---------------------------------+---------------------------------+
| Variable_name                   | Value                           |
+---------------------------------+---------------------------------+
| back_log                        | 50                              |
| basedir                         | /                               |
| bdb_cache_size                  | 8388600                         |
| bdb_log_buffer_size             | 262144                          |
| bdb_home                        | /export/data/dp20.a/data/mysql/ |
| bdb_max_lock                    | 1                               |
| bdb_logdir                      |                                 |
| bdb_shared_data                 | OFF                             |
| bdb_tmpdir                      | /export/data/dp20.a/tmp/        |
| bdb_version                     | Sleepycat Software: Berkeley DB 3.2.9a: (August 14, 2002) |
| binlog_cache_size               | 32768                           |
| character_set                   | latin1                          |

Re: HELP - Having serious trouble here...

2002-10-14 Thread Chris Stoughton

It sounds like the glibc auto-rpm that Red Hat produced (to plug a 
security hole, I believe) is the root cause.  Getting MySQL 3.23.52 from 
MySQL will be the solution.

I don't know who could get Redhat to coordinate their updates.


John Hinton wrote:

I'm having the same problems. I'm on a RedHat machine... 7.2 running
MySQL ver. 3.23.41. I've been told to upgrade my package. 

I think RedHat has put out an update to some other library, module or
such, that is in conflict with remote/client management applications???

Everything is working fine 'on' the server... I have had reports from a
MAC user that his mysqlMAC or whatever he called it would not work. I
have noticed that my Mascon program fails to connect. I have noticed in
the logs that every time I try to connect it restarts mysql. I have not
tested calls made to MySQL from a java app or script, but I bet they will
not connect and will only restart mysql. Not good at all!

I think if someone knows 'what' causes this, it would be nice to get the
information out to the likes of RedHat, so they can let us subscribers
know to update, or send through the update to MySQL at the same time
this incompatibility issue arises.

I know we've had updates to glibc, and it seems I read something about
that. I don't remember where, as I've been researching this for the last
week and have been surfing/searching like a madman for several full
days!

If you are using RedHat, have you updated your packages recently?

Did you try to run mysql-front only this morning, maybe the first time
after doing recent updates?

I had a server shut down last Monday due to a fan dying. I fixed it
and of course went through reboots. I noticed Mascon would not run after
the reboots... But, it had been a while since I had tried to run Mascon,
so I'm not sure whether this occurred after the reboot, or perhaps after some
package upgrades. The server had run non-stop for over a year... so,
gee, that leaves lots of questions.

It would be nice to get to the bottom of what is happening.

gerald_clark wrote:
  

What is in the error logs?

Vernon Webb wrote:



I have been using MySQL for nearly a year now with no problems, until now. I
rebooted my box this morning and since rebooting I have had nothing but
trouble.

First off, I have noticed (I never looked before because I had no need to)
that there is nothing in my mysqld.log files. They are all 0k.

Secondly, I can no longer log in as root using MySQL-Front as I keep getting
an error message stating that it has lost connection during query.

The MySQL daemon keeps restarting itself over and over and I can't figure
out why.

I'm a little frazzled right now and am at a loss as to where to start to
resolve this. Anyone have any ideas? PLEASE!!!



  




  









Performance when using two BETWEEN statements in the WHERE clause multiple times

2002-10-13 Thread Chris Stoughton

I sent a similar question a few days ago.  I don't think there was a 
response.  If there was, sorry that I missed it.  I have worked around 
the issue, but would like to know whether there is something I can do to 
improve the orignal query.

I have a table with two spatial indices -- ra and decl, for right 
ascension and declination; think of them as x,y coordinates.  In order 
to match objects in one table to a second table, I choose a set of 
objects in the first table, find the limits of ra,decl, and then query 
the second table based on these limits.  I then do matching in a 
separate program, between these two lists.

For a specific example of one pair of queries:

select ra,decl from firstTable where fieldId=1
(based on the results of this query, calculate raMin, raMax, declMin, and 
declMax -- 1.1, 1.2, 3.4, 3.5 in this example)
select ra,decl,a,b,c,d from secondTable where ra between 1.1 and 1.2 and 
decl between 3.4 and 3.5
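
This pair-of-queries pattern can be sketched in a few lines. This is a hypothetical helper: the table and column names come from the post, but the function, its padding parameter, and the sample points are made up for illustration:

```python
# Build the secondTable query from the ra/decl values returned by the
# firstTable query, as described above. Purely illustrative.

def bounding_box_query(points, pad=0.0):
    """points: (ra, decl) pairs from firstTable; pad widens the box."""
    ras = [ra for ra, _ in points]
    decls = [decl for _, decl in points]
    return (
        "select ra,decl,a,b,c,d from secondTable "
        f"where ra between {min(ras) - pad} and {max(ras) + pad} "
        f"and decl between {min(decls) - pad} and {max(decls) + pad}"
    )

# e.g. rows returned for fieldId=1:
points = [(1.15, 3.45), (1.1, 3.4), (1.2, 3.5)]
print(bounding_box_query(points))
```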

So, I wind up sending the following series of query pairs:

select ra,decl from firstTable where fieldId=0
select ra,decl,a,b,c,d from secondTable where ra between 1.1 and 1.2 and 
decl between 3.4 and 3.5
select ra,decl from firstTable where fieldId=1
select ra,decl,a,b,c,d from secondTable where ra between 1.1 and 1.2 and 
decl between 3.5 and 3.6
select ra,decl from firstTable where fieldId=2
select ra,decl,a,b,c,d from secondTable where ra between 1.1 and 1.2 and 
decl between 3.6 and 3.7
select ra,decl from firstTable where fieldId=3
select ra,decl,a,b,c,d from secondTable where ra between 1.1 and 1.2 and 
decl between 3.7 and 3.8
select ra,decl from firstTable where fieldId=4
select ra,decl,a,b,c,d from secondTable where ra between 1.1 and 1.2 and 
decl between 3.8 and 3.9
select ra,decl from firstTable where fieldId=5
select ra,decl,a,b,c,d from secondTable where ra between 1.1 and 1.2 and 
decl between 3.9 and 4.0
select ra,decl from firstTable where fieldId=6
select ra,decl,a,b,c,d from secondTable where ra between 1.1 and 1.2 and 
decl between 4.0 and 4.1
select ra,decl from firstTable where fieldId=7
select ra,decl,a,b,c,d from secondTable where ra between 1.2 and 1.3 and 
decl between 3.5 and 3.6
select ra,decl from firstTable where fieldId=8
select ra,decl,a,b,c,d from secondTable where ra between 1.2 and 1.3 and 
decl between 3.6 and 3.7
select ra,decl from firstTable where fieldId=9
select ra,decl,a,b,c,d from secondTable where ra between 1.2 and 1.3 and 
decl between 3.7 and 3.8
select ra,decl from firstTable where fieldId=10
select ra,decl,a,b,c,d from secondTable where ra between 1.2 and 1.3 and 
decl between 3.8 and 3.9
select ra,decl from firstTable where fieldId=11
select ra,decl,a,b,c,d from secondTable where ra between 1.2 and 1.3 and 
decl between 3.9 and 4.0
select ra,decl from firstTable where fieldId=12
select ra,decl,a,b,c,d from secondTable where ra between 1.2 and 1.3 and 
decl between 4.0 and 4.1
and so on.

Each of these selects returns approx 200 objects from the million 
records in the table.

The problem is that the first 40 or so queries take less than a second 
each, which is very nice performance.  However, after that, things bog 
down.  The queries with where fieldId=n continue to perform well, but 
the queries with the two betweens take longer and longer, over a minute 
per query.  Not good!  While this is happening, there is no process 
consuming CPU, no swapping, and no heavy disk activity.

Is there a way I can create the indices to optimize these queries?

(The workaround is to add another field in targetTsObj, called 
radeclId.  This is an identifier for the grid cell that each ra,decl pair 
falls into.  I build a grid of boxes that cover the legal range of 
ra,decl, and each cell has a unique radeclId.  Before loading, I 
calculate the radeclId for each object.  Then, to do this query, I 
recast the where clause of the query from ra between 1.1 and 1.2 and decl 
between 4.0 and 4.1 to radeclId=id1 || radeclId=id2 || radeclId=id3 ... 
where the list id1, id2, id3, ... is calculated from the ra,decl 
ranges.  Each of these queries returns in well under a second, and they 
do not bog down.  Therefore, I conclude that the disk and memory are 
performing well, and that I need to fix how I use indices.)
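
The radeclId workaround described above can be sketched as follows; the 0.1-degree cell size, the id scheme, and the helper names are all illustrative assumptions, not the poster's actual code:

```python
import math

CELL = 0.1      # assumed grid cell size in degrees
N_RA = 3600     # number of cells along ra (360 / CELL)

def cell_index(x, offset=0.0):
    # Floor to the containing cell; rounding first guards against
    # floating-point error at exact cell boundaries.
    return math.floor(round((x + offset) / CELL, 9))

def radecl_id(ra, decl):
    """Unique integer id of the grid cell containing (ra, decl)."""
    return cell_index(decl, 90.0) * N_RA + cell_index(ra)  # decl spans -90..+90

def ids_for_box(ra_lo, ra_hi, decl_lo, decl_hi):
    """All cell ids a bounding-box query has to OR together."""
    return [
        j * N_RA + i
        for j in range(cell_index(decl_lo, 90.0), cell_index(decl_hi, 90.0) + 1)
        for i in range(cell_index(ra_lo), cell_index(ra_hi) + 1)
    ]

# Rewrite the two-BETWEEN where clause as the radeclId disjunction:
clause = " || ".join(f"radeclId={c}" for c in ids_for_box(1.1, 1.2, 3.4, 3.5))
print(clause)
```

Because each resulting predicate is a simple equality on a single indexed integer column, every lookup can be satisfied directly from the radeclId index, which is consistent with the sub-second timings reported above.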


Here is what the indices look like for the secondTable, called 
targetTsObj in this example:
mysql> show index from targetTsObj;
+-++---+--+-+---+-+--++-+
| Table   | Non_unique | Key_name  | Seq_in_index | Column_name | 
Collation | Cardinality | Sub_part | Packed | Comment |
+-++---+--+-+---+-+--++-+
| targetTsObj |  1 | objId |1 | objId   | 
A | 1228983 | NULL | NULL   | |
| targetTsObj |  1 | fieldId   |1 | fieldId | 
A |1985 | NULL | NULL   | |
| targetTsObj |  1 | bestObjId |1 | bestObjId   | 
A |   1 |   

Re: MySQL 3.23.49

2002-10-12 Thread Chris Stoughton

We found the same thing after installing the glibc auto-rpm from Red Hat.

The updated glibc packages from Red Hat will whack your 
/etc/nsswitch.conf file.  That is, it will mess with the order of how 
things are searched for.

We noticed this with mysql not being able to resolve remote server IPs. 
Removing any reference to 'nisplus' in nsswitch.conf helped this problem 
on RH7.1 (glibc2.2.4) systems.

Another, related problem that only appears on RH7.3 (glibc2.2.5) was 
remedied by putting 'dns' at the front of any list of methods that 
require reverse-dns lookups (i.e. hosts:dns files nis).
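
For concreteness, the change described above would look something like this in /etc/nsswitch.conf (illustrative fragment; only the hosts line is shown, and the exact method list depends on the site):

```
# /etc/nsswitch.conf -- put dns first for reverse lookups (the RH7.3 case)
hosts:      dns files nis
```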

We downloaded the MySQL pre-built 3.23.52, and that solved the problem.




J Ross Nicoll wrote:

I seem to have found a bug in MySQL 3.23.49 (untested in later versions). 
It appears that if the server cannot resolve the hostname of an incoming
TCP connection, it crashes.

I've been testing this under a RedHat 7.2/7.3 hybrid, and only came across 
the problem because I've managed to somehow stop DNS from working 
correctly! Unfortunately, I don't yet know exactly what's wrong 
with DNS.

Putting the hostname of the incoming IP address into the /etc/hosts file 
has solved this for us, but I thought you might want to know.



  









Re: information about the tables in the database

2002-10-10 Thread Chris Stoughton

I do this with

use db;
show tables;

then parse through the result.




Niclas Hedhman wrote:

On Wednesday 09 October 2002 21:43, Inbal Ovadia wrote:
  

Hi all,
Is there a way to know if I have table x in my db?
Let's say some catalog tables that hold information about the db, like
table names etc.,
so I can do a query like select tables from catalogName and get all the
tables in the db?



Talking about programmatically?
What client are you using?
Since the JDBC driver can pick up all the database, table and column 
information, it should be available reasonably easily in other clients as well.


  









Performance of a sequence of many indexed queries

2002-10-08 Thread Chris Stoughton

I have a database with a table called targetTsObj, and I want to perform 
many queries from it, such as this one:

select run,rerun,camcol,field,id,ra,decl,r from targetTsObj where ra 
between 255.84730673785904 and 256.31969326214096 and decl between 
58.7494108 and 58.9577892

One of these returns a few hundred records from the 1.2 million records 
in the table.

This query returns in under a second.  However, after 40 or so of these 
queries, the time per query is up to 30 seconds or so.

Is this performance as expected, or is there something I can do?  Build 
the indices differently?   A setting in my.cnf?

The machine is running Linux, with 1 GB of RAM, MySQL server version 3.23.49
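
One hedged explanation for the slowdown after 40 or so queries is that the relevant key blocks stop fitting in the key buffer once enough of this table's many indexes have been touched (3.23's default key_buffer is only a few MB). A back-of-the-envelope size estimate, with assumed key and pointer widths:

```python
# Rough size of the ra and decl indexes, to compare with key_buffer_size.
# The 8-byte key and 4-byte row pointer are illustrative assumptions;
# B-tree block overhead is ignored.

rows = 1_228_983            # cardinality of ra/decl from SHOW INDEX below
key_len = 8                 # a double-precision ra or decl value (assumed)
ptr_len = 4                 # row pointer (assumed)

per_index = rows * (key_len + ptr_len)
both = 2 * per_index
print(f"~{both / 1024**2:.0f} MB for the ra and decl indexes alone")
```

With 17 indexes on the table (see the SHOW INDEX output below), the working set easily exceeds a small default key buffer, which is consistent with the separate thread in this archive where raising key_buffer_size to 512M improved performance dramatically.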

Thanks!

===

I send the queries with a client program, using these C api's:

mysql_real_connect(mysql, host, user, passwd, db, port, unix_socket, 
client_flag);

mysql_query(mysql, query);

mysql_store_result(mysql);

In a loop, until all rows are fetched:
row = mysql_fetch_row(mysql_res);

And after getting all the rows for a query:
mysql_free_result(mysql_res);


Here is the result of the show index command:
mysql> show index from targetTsObj;
+-++--+--+-+---+-+--++-+
| Table   | Non_unique | Key_name | Seq_in_index | Column_name | 
Collation | Cardinality | Sub_part | Packed | Comment |
+-++--+--+-+---+-+--++-+
| targetTsObj |  1 | objId|1 | objId   | 
A | 1228983 | NULL | NULL   | |
| targetTsObj |  1 | fieldId  |1 | fieldId | 
A |1985 | NULL | NULL   | |
| targetTsObj |  1 | ra   |1 | ra  | 
A | 1228983 | NULL | NULL   | |
| targetTsObj |  1 | decl |1 | decl| 
A | 1228983 | NULL | NULL   | |
| targetTsObj |  1 | u|1 | u   | 
A |  111725 | NULL | NULL   | |
| targetTsObj |  1 | g|1 | g   | 
A |  122898 | NULL | NULL   | |
| targetTsObj |  1 | r|1 | r   | 
A |  122898 | NULL | NULL   | |
| targetTsObj |  1 | i|1 | i   | 
A |  122898 | NULL | NULL   | |
| targetTsObj |  1 | z|1 | z   | 
A |  111725 | NULL | NULL   | |
| targetTsObj |  1 | ug   |1 | u   | 
A |  111725 | NULL | NULL   | |
| targetTsObj |  1 | ug   |2 | g   | 
A | 1228983 | NULL | NULL   | |
| targetTsObj |  1 | gr   |1 | g   | 
A |  122898 | NULL | NULL   | |
| targetTsObj |  1 | gr   |2 | r   | 
A | 1228983 | NULL | NULL   | |
| targetTsObj |  1 | ri   |1 | r   | 
A |  122898 | NULL | NULL   | |
| targetTsObj |  1 | ri   |2 | i   | 
A | 1228983 | NULL | NULL   | |
| targetTsObj |  1 | iz   |1 | i   | 
A |  122898 | NULL | NULL   | |
| targetTsObj |  1 | iz   |2 | z   | 
A | 1228983 | NULL | NULL   | |
+-++--+--+-+---+-+--++-+
17 rows in set (0.00 sec)

Finally, here is /etc/my.cnf:

bash-2.04$ more /etc/my.cnf
[mysqld]
set-variable = key_buffer_size=512M
set-variable = table_cache=512
set-variable = query_buffer_size=20M
# set-variable = sort_buffer=100M
# set-variable = read_buffer_size=100M
datadir=/export/data/dp20.a/data/mysql
socket=/var/lib/mysql/mysql.sock

[mysql.server]
user=mysql
basedir=/var/lib

[safe_mysqld]
err-log=/var/log/mysqld.log
pid-file=/var/run/mysqld/mysqld.pid











Re: Performance while Building Indices -- how does it scale?

2002-09-25 Thread Chris Stoughton

Yes, I increased this to 512M, as suggested in another answer, and the 
performance improved dramatically.  Thanks for the tip.

Jeremy Zawodny wrote:

On Mon, Sep 23, 2002 at 07:25:17AM -0500, Chris Stoughton wrote:
  

Joseph,

Thanks for the quick answer.

Very nice to know that adding an index forces a rebuild of all indices!  
(Side note -- I was going to configure the database with a minimal set 
of indices, and then watch to see how people use the database, and then 
add indices on popular columns.)

I did not notice a lot of i/o activity, but will run vmstat for a while 
and gather statistics

The machine has 1GB of RAM.

Here is the configuration:
bash-2.04$ more /etc/my.cnf
[mysqld]
datadir=/data/dp14.a/data/mysql
socket=/var/lib/mysql/mysql.sock

[mysql.server]
user=mysql
basedir=/var/lib

[safe_mysqld]
err-log=/var/log/mysqld.log
pid-file=/var/run/mysqld/mysqld.pid


There is NO SETTING for key_buffer_size -- what value do you suggest?



Ack!  You're using the default, which is very small (compared to 1GB).

Start with 512M and work from there.

Jeremy
  









Re: Performance while Building Indices -- how does it scale?

2002-09-23 Thread Chris Stoughton

Joseph,

Thanks for the quick answer.

Very nice to know that adding an index forces a rebuild of all indices!  
(Side note -- I was going to configure the database with a minimal set 
of indices, and then watch to see how people use the database, and then 
add indices on popular columns.)

I did not notice a lot of i/o activity, but will run vmstat for a while 
and gather statistics

The machine has 1GB of RAM.

Here is the configuration:
bash-2.04$ more /etc/my.cnf
[mysqld]
datadir=/data/dp14.a/data/mysql
socket=/var/lib/mysql/mysql.sock

[mysql.server]
user=mysql
basedir=/var/lib

[safe_mysqld]
err-log=/var/log/mysqld.log
pid-file=/var/run/mysqld/mysqld.pid


There is NO SETTING for key_buffer_size -- what value do you suggest?

Thanks,

Chris


Joseph Bueno wrote:

 Chris Stoughton wrote:

 I have a table with a few million rows, with 633 columns.  I want to 
 create 10 indices on this table, six single-column and four 
 two-column indices.  The database will be loaded once and remain 
 static, and queried many times.  Please note that this is a small 
 prototype for the actual database, which will have 40 times more rows.

 I used two strategies to create these tables:

 1.  Create the indices in the create table statement
 2.  Load the table, and then use the create index statement for 
 each index.

 With Strategy 1, the loading started nicely  using the load data 
 infile command to load approx. 2000 records at a time.  It took 3 
 seconds for each load data infile command.  After a few hundred 
 load data infile commands, however, the time increased to two 
 minutes per command.

 With Strategy 2, the loading went from start to finish at 3 seconds 
 for each load data infile command.  But now, creating each index is 
 taking longer.  Here is a log:

 Fri Sep 20 15:17:43 CDT 2002 create index objId on bestTsObj (objId)
 Fri Sep 20 15:35:51 CDT 2002 create index fieldId on bestTsObj (fieldId)
 Fri Sep 20 15:56:02 CDT 2002 create index targetObjId on bestTsObj 
 (targetObjId)
 Fri Sep 20 16:20:02 CDT 2002 create index ra on bestTsObj (ra)
 Fri Sep 20 16:49:16 CDT 2002 create index decl on bestTsObj (decl)
 Fri Sep 20 17:53:38 CDT 2002 create index u on bestTsObj (u)
 Fri Sep 20 18:42:52 CDT 2002 create index g on bestTsObj (g)
 Fri Sep 20 22:04:25 CDT 2002 create index r on bestTsObj (r)
 Sat Sep 21 10:06:44 CDT 2002 create index i on bestTsObj (i)
 Sat Sep 21 19:31:52 CDT 2002 create index z on bestTsObj (z)
 Sun Sep 22 15:45:06 CDT 2002 create index ug on bestTsObj (u,g)


 The first index was created in a reasonable amount of time (18 
 minutes), but it is taking longer to create more indices.


 With both strategies, no process is swapping heavily, and mysqld is 
 not making full use of the available CPU cycles.

 Can you suggest how I can build multiple indices on large tables 
 efficiently?  Will it help to create a narrow table with only the 
 quantities I want to use in indices?

 We have installed mysql via rpm:
 bash-2.04$ mysqladmin
 mysqladmin  Ver 8.18 Distrib 3.23.36, for redhat-linux-gnu on i386

 bash-2.04$ uname -r
 2.4.18-xfs-1.1
 bash-2.04$
 What diagnostics do you suggest I run?

 Thanks

 Hi,

 You should use strategy #1, since each time you create a new index, all
 indexes are rebuilt (this is why each new index takes longer in
 strategy #2).

 Since mysqld is not consuming CPU nor swapping, it is probably doing a
 lot of disk I/O. You can verify that with the vmstat utility.

 You didn't specify how much RAM you have and what mysql configuration
 you are using (check /etc/my.cnf). You should pay special attention to
 'key_buffer_size' parameter value since it has a major impact on index
 performance.

 Regards,










Performance while Building Indices -- how does it scale?

2002-09-22 Thread Chris Stoughton

I have a table with a few million rows, with 633 columns.  I want to 
create 10 indices on this table, six single-column and four two-column 
indices.  The database will be loaded once and remain static, and 
queried many times.  Please note that this is a small prototype for 
the actual database, which will have 40 times more rows.

I used two strategies to create these tables:

1.  Create the indices in the create table statement
2.  Load the table, and then use the create index statement for each 
index.
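Strategy 1 (all indexes declared up front) could look like the following sketch. The column types are guesses, since the actual 633-column schema is not shown in the thread; only the index list is taken from the CREATE INDEX log:

```sql
-- Sketch of Strategy 1: declare the indexes in the CREATE TABLE
-- statement.  Column types are assumptions; the index list mirrors
-- the CREATE INDEX log in this thread.  TYPE=MyISAM is the 3.23 syntax.
CREATE TABLE bestTsObj (
    objId       BIGINT NOT NULL,
    fieldId     INT    NOT NULL,
    targetObjId BIGINT,
    ra          DOUBLE,
    decl        DOUBLE,
    u           FLOAT,
    g           FLOAT,
    r           FLOAT,
    i           FLOAT,
    z           FLOAT,
    -- ... remaining columns of the 633-column table ...
    INDEX (objId),
    INDEX (fieldId),
    INDEX (targetObjId),
    INDEX (ra),
    INDEX (decl),
    INDEX (u),
    INDEX (g),
    INDEX (r),
    INDEX (i),
    INDEX (z),
    INDEX (u, g)
) TYPE=MyISAM;
```

This way the index files grow incrementally during each load data infile, instead of every later CREATE INDEX rewriting all existing indexes.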

With Strategy 1, the loading started nicely, using the load data 
infile command to load approx. 2000 records at a time.  It took 3 
seconds for each load data infile command.  After a few hundred load 
data infile commands, however, the time increased to two minutes per 
command.

With Strategy 2, the loading went from start to finish at 3 seconds for 
each load data infile command.  But now, creating each index is taking 
longer.  Here is a log:

Fri Sep 20 15:17:43 CDT 2002 create index objId on bestTsObj (objId)
Fri Sep 20 15:35:51 CDT 2002 create index fieldId on bestTsObj (fieldId)
Fri Sep 20 15:56:02 CDT 2002 create index targetObjId on bestTsObj 
(targetObjId)
Fri Sep 20 16:20:02 CDT 2002 create index ra on bestTsObj (ra)
Fri Sep 20 16:49:16 CDT 2002 create index decl on bestTsObj (decl)
Fri Sep 20 17:53:38 CDT 2002 create index u on bestTsObj (u)
Fri Sep 20 18:42:52 CDT 2002 create index g on bestTsObj (g)
Fri Sep 20 22:04:25 CDT 2002 create index r on bestTsObj (r)
Sat Sep 21 10:06:44 CDT 2002 create index i on bestTsObj (i)
Sat Sep 21 19:31:52 CDT 2002 create index z on bestTsObj (z)
Sun Sep 22 15:45:06 CDT 2002 create index ug on bestTsObj (u,g)


The first index was created in a reasonable amount of time (18 minutes), 
but it is taking longer to create more indices.


With both strategies, no process is swapping heavily, and mysqld is not 
making full use of the available CPU cycles.

Can you suggest how I can build multiple indices on large tables 
efficiently?  Will it help to create a narrow table with only the 
quantities I want to use in indices?

We have installed mysql via rpm:
bash-2.04$ mysqladmin
mysqladmin  Ver 8.18 Distrib 3.23.36, for redhat-linux-gnu on i386

bash-2.04$ uname -r
2.4.18-xfs-1.1
bash-2.04$ 

What diagnostics do you suggest I run?

Thanks








