Re: example when indexing hurts simple select?

2006-06-11 Thread Marco Simon
Hi Gaspar,

MySQL allows you to pack the index, making it smaller and improving
performance.
Some information about that can be found here:
http://www.mysqlperformanceblog.com/2006/05/13/to-pack-or-not-to-pack-myisam-key-compression/
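
For reference, a minimal sketch of key packing (assuming MyISAM; TEST
is the table from this thread, and whether packing actually helps
depends on the key values):

-- Prefix-compress key values in the MyISAM index file (.MYI).
-- PACK_KEYS=1 packs numeric as well as string keys; the default
-- packs only long CHAR/VARCHAR keys.
ALTER TABLE TEST PACK_KEYS = 1;

Incidentally, the "Packed" column of SHOW INDEX (asked about below)
indicates how a key is packed; NULL means it is not packed.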



Gaspar Bakos wrote:
> Hi,
>
>
> RE:
>   
>> Have you tried
>> analyze table x;
>> 
>
> This was quick:
>
> mysql> analyze table TEST;
> +----------+---------+----------+-----------------------------+
> | Table    | Op      | Msg_type | Msg_text                    |
> +----------+---------+----------+-----------------------------+
> | CAT.TEST | analyze | status   | Table is already up to date |
> +----------+---------+----------+-----------------------------+
>
> --
>
> mysql> show index from TEST;
> +-------+------------+------------+--------------+-------------+-----------+-------------+----------+--------+------+------------+---------+
> | Table | Non_unique | Key_name   | Seq_in_index | Column_name | Collation | Cardinality | Sub_part | Packed | Null | Index_type | Comment |
> +-------+------------+------------+--------------+-------------+-----------+-------------+----------+--------+------+------------+---------+
> | TEST  |          1 | MMtestfiel |            1 | MMtestfiel  | A         |         838 |     NULL | NULL   |      | BTREE      | NULL    |
> | TEST  |          1 | MMi_m      |            1 | MMi_m       | A         |       25857 |     NULL | NULL   | YES  | BTREE      | NULL    |
> +-------+------------+------------+--------------+-------------+-----------+-------------+----------+--------+------+------------+---------+
>
>
> ---
> I am trying to figure out what the "Packed" field means.
>
> Gaspar
>
>   





Re: example when indexing hurts simple select?

2006-06-11 Thread Gaspar Bakos
Hi,


RE:
> Have you tried
> analyze table x;

This was quick:

mysql> analyze table TEST;
+----------+---------+----------+-----------------------------+
| Table    | Op      | Msg_type | Msg_text                    |
+----------+---------+----------+-----------------------------+
| CAT.TEST | analyze | status   | Table is already up to date |
+----------+---------+----------+-----------------------------+

--

mysql> show index from TEST;
+-------+------------+------------+--------------+-------------+-----------+-------------+----------+--------+------+------------+---------+
| Table | Non_unique | Key_name   | Seq_in_index | Column_name | Collation | Cardinality | Sub_part | Packed | Null | Index_type | Comment |
+-------+------------+------------+--------------+-------------+-----------+-------------+----------+--------+------+------------+---------+
| TEST  |          1 | MMtestfiel |            1 | MMtestfiel  | A         |         838 |     NULL | NULL   |      | BTREE      | NULL    |
| TEST  |          1 | MMi_m      |            1 | MMi_m       | A         |       25857 |     NULL | NULL   | YES  | BTREE      | NULL    |
+-------+------------+------------+--------------+-------------+-----------+-------------+----------+--------+------+------------+---------+


---
I am trying to figure out what the "Packed" field means.

Gaspar




Re: example when indexing hurts simple select?

2006-06-11 Thread Gaspar Bakos
Hi, Philip,

RE:
> What is the EXPLAIN output of each?

OK, first I naively typed:
explain create table test2 select * from TEST where MMi < 9000;
but of course, this does not work.

The simple select that uses the MMi_m index (and takes up to an hour):

mysql> explain select * from TEST where MMi_m < 9000;
+----+-------------+-------+-------+---------------+-------+---------+------+--------+-------------+
| id | select_type | table | type  | possible_keys | key   | key_len | ref  | rows   | Extra       |
+----+-------------+-------+-------+---------------+-------+---------+------+--------+-------------+
|  1 | SIMPLE      | TEST  | range | MMi_m         | MMi_m | 3       | NULL | 406649 | Using where |
+----+-------------+-------+-------+---------------+-------+---------+------+--------+-------------+

The same select ignoring the index (takes only 11 minutes):
mysql> explain select * from TEST ignore key (MMi_m) where MMi_m < 9000;
+----+-------------+-------+------+---------------+------+---------+------+-----------+-------------+
| id | select_type | table | type | possible_keys | key  | key_len | ref  | rows      | Extra       |
+----+-------------+-------+------+---------------+------+---------+------+-----------+-------------+
|  1 | SIMPLE      | TEST  | ALL  | NULL          | NULL | NULL    | NULL | 470992970 | Using where |
+----+-------------+-------+------+---------------+------+---------+------+-----------+-------------+

> Have you tried
> analyze table x;
> optimize table x;

Not yet.

As regards "optimize table", I thought it would not make too much sense,
because:

"OPTIMIZE TABLE should be used if you have deleted a large part of a
table or if you have made many changes to a table with variable-length
rows (tables that have VARCHAR, VARBINARY, BLOB, or TEXT columns)."

But I have just loaded this table from ASCII and made no changes.
Nevertheless, I will give it a try; maybe there is some feature of
"OPTIMIZE TABLE" I don't know about.

What did you have in mind?

> Is it MyISAM or Innodb ?

MyISAM.

I'll keep you posted. I am very curious about how this can be resolved.

Cheers,
Gaspar




Re: example when indexing hurts simple select?

2006-06-11 Thread Philip M. Gollucci

Questions:

1. Is there a way to decrease random seeks? E.g. mysqld config
parameters, increasing some buffer/cache sizes?

2. Optimize table: is there a way to rearrange the data so that random
seeks are minimized?

3. If we have to live with a large number of random seeks, does anyone
know how the underlying FS can be optimized?

What is the EXPLAIN output of each?

Have you tried
analyze table x;
optimize table x;
On a 100GB table, these might take a while. You probably want to LOCK
the table before running them.
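
Something like the following, as a rough sketch (TEST is the table from
the thread; for MyISAM both statements work while the write lock is
held):

LOCK TABLES TEST WRITE;
ANALYZE TABLE TEST;   -- refresh the key distribution statistics
OPTIMIZE TABLE TEST;  -- defragment the data file and sort the indexes
UNLOCK TABLES;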


Is it MyISAM or Innodb ?


--

Philip M. Gollucci ([EMAIL PROTECTED]) 323.219.4708
Consultant / http://p6m7g8.net/Resume/resume.shtml
Senior Software Engineer - TicketMaster - http://ticketmaster.com
1024D/A79997FA F357 0FDD 2301 6296 690F  6A47 D55A 7172 A799 97F

"It takes a minute to have a crush on someone, an hour to like someone,
and a day to love someone, but it takes a lifetime to forget someone..."




RE: Importing 3Gb File

2006-06-11 Thread mos

At 04:41 PM 6/11/2006, Ian Barnes wrote:

Hi,



Then after that, the following happened for 60 seconds and then it timed
out:

I have put more RAM in (I now have 768Mb, as well as 3Gb of swap).

Thanks for the suggestions!

Cheers
Ian



Ian,
Timed out? Wasn't there an error? Have you looked at the MySQL Log 
files? http://dev.mysql.com/doc/refman/5.0/en/error-log.html
You're trying to load 3.2gb of data with only 768mb of ram and 3gb of swap 
and that may not be enough. Also make sure your MySQL data directory has 
enough disk space. You will need more than 3gb of space free in your data 
directory (I'm assuming you're using MyISAM tables and not InnoDb). With 
indexes you could easily be looking at double or triple that size (6-9gb of 
disk space).
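
A quick way to check how much space the data and indexes actually take
(the database and table names here are placeholders):

-- Data_length and Index_length report the on-disk bytes of the
-- .MYD and .MYI files for a MyISAM table.
SHOW TABLE STATUS FROM mydb LIKE 'mytable';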


Mike




> -Original Message-
> From: mos [mailto:[EMAIL PROTECTED]
> Sent: 09 June 2006 07:15 PM
> To: mysql@lists.mysql.com
> Subject: Re: Importing 3Gb File
>
> At 10:20 AM 6/8/2006, you wrote:
> >Hi,
> >
> >I am trying to import a 3.2Gb sql dump file back into my sql server
> (4.1.12)
> >and im coming across the following error:
> >
> >mysql: Out of memory (Needed 178723240 bytes)
> >mysql: Out of memory (Needed 178719144 bytes)
> >
> >That error comes up after about 30 minutes worth of import and I would
> guess
> >about half way through the import.
>
> What does "Show Processlist" say its doing just before the crash? I've had
> problems with Load Data on a very large table 500 million rows because the
> machine did not have enough memory to build the index. The data was loaded
> just fine, it's building the index that hung it out to dry because that
> eats up memory like crazy. How much memory do you have on your machine?
> The
> cheapest solution might be to go out and get a few more gb of RAM.
>
> Mike
>
>
>
> >The file in question is a mysqldump
> >--all-databases file from another server that im trying to import onto my
> >desktop machine. I have tried to alter the my.cnf file a bit, and this is
> >what it looks like:
> >
> >[client]
> >#password   = your_password
> >port= 3306
> >socket  = /tmp/mysql.sock
> >
> >[mysqld]
> >port= 3306
> >socket  = /tmp/mysql.sock
> >skip-locking
> >key_buffer = 64M
> >max_allowed_packet = 8M
> >table_cache = 512
> >sort_buffer_size = 8M
> >net_buffer_length = 8M
> >myisam_sort_buffer_size = 45M
> >set-variable=max_connections=300
> >
> ># Replication Master Server (default)
> ># binary logging is required for replication
> >#log-bin
> >
> ># required unique id between 1 and 2^32 - 1
> ># defaults to 1 if master-host is not set
> ># but will not function as a master if omitted
> >server-id   = 1
> >
> >#bdb_cache_size = 4M
> >#bdb_max_lock = 1
> >
> ># Uncomment the following if you are using InnoDB tables
> >#innodb_data_home_dir = /var/db/mysql/
> >#innodb_data_file_path = ibdata1:10M:autoextend
> >#innodb_log_group_home_dir = /var/db/mysql/
> >#innodb_log_arch_dir = /var/db/mysql/
> ># You can set .._buffer_pool_size up to 50 - 80 %
> ># of RAM but beware of setting memory usage too high
> >#innodb_buffer_pool_size = 16M
> >#innodb_additional_mem_pool_size = 2M
> ># Set .._log_file_size to 25 % of buffer pool size
> >#innodb_log_file_size = 5M
> >#innodb_log_buffer_size = 8M
> >#innodb_flush_log_at_trx_commit = 1
> >#innodb_lock_wait_timeout = 50
> >
> >[mysqldump]
> >quick
> >max_allowed_packet = 16M
> >
> >[mysql]
> >no-auto-rehash
> ># Remove the next comment character if you are not familiar with SQL
> >#safe-updates
> >
> >[isamchk]
> >key_buffer = 10M
> >sort_buffer_size = 20M
> >read_buffer = 2M
> >write_buffer = 2M
> >
> >[myisamchk]
> >key_buffer = 10M
> >sort_buffer_size = 20M
> >read_buffer = 2M
> >write_buffer = 2M
> >
> >[mysqlhotcopy]
> >interactive-timeout
> >
> >
> >Does anyone have any advice as to what I could change to make it import,
> and
> >not break half way through. The command im running to import is: mysql -n
> -f
> >-p < alldb.sql
> >
> >Thanks in advance,
> >Ian
> >
> >
> >--
> >MySQL General Mailing List
> >For list archives: http://lists.mysql.com/mysql
> >To unsubscribe:http://lists.mysql.com/[EMAIL PROTECTED]
>
>
> --
> MySQL General Mailing List
> For list archives: http://lists.mysql.com/mysql
> To unsubscribe:
> http://lists.mysql.com/[EMAIL PROTECTED]






"Load Data Infile Replace ..." too slow

2006-06-11 Thread mos
I'm replacing 14 million rows of data using "Load data infile replace" and 
it is taking forever to complete. I killed it after 6.2 hours on an AMD 
3500 machine. I then deleted all the data from the table and used "Load 
data infile ignore" and it completed quite quickly in about 30 minutes.  Is 
there any way to speed up using "Load data infile replace"? TIA
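
For reference, a hedged sketch of the delete-then-reload approach
described above, with one extra trick; the names are placeholders, and
this assumes MyISAM and that overwriting everything is really the goal:

-- REPLACE must probe the unique index for every incoming row;
-- reloading into an emptied table avoids that per-row check.
DELETE FROM t;
-- Postpone non-unique index maintenance until after the load.
ALTER TABLE t DISABLE KEYS;
LOAD DATA INFILE '/tmp/rows.txt' IGNORE INTO TABLE t;
ALTER TABLE t ENABLE KEYS;  -- rebuild the disabled indexes in one pass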


Mike





FULLTEXT index help

2006-06-11 Thread Horst Azeglio

I'm trying to do a MySQL Query but it doesn't work. MySQL version: 4.0.26


When I put only one argument in MATCH, it shows no error but doesn't
return anything:

SELECT * FROM item WHERE MATCH (nom) against ('Huile');

or

SELECT * FROM item WHERE MATCH (nom_en) against ('Huile');

When I put two arguments:

SELECT * FROM item WHERE MATCH (nom,nom_en) against ('Huile');

It says: "Can't find FULLTEXT index matching the column list", but both nom
and nom_en are FULLTEXT indexed and the table "item" is MyISAM.
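
For what it's worth, MySQL requires a single FULLTEXT index whose column
list exactly matches the columns named in MATCH(); separate indexes on
nom and nom_en do not satisfy MATCH (nom,nom_en). A sketch of the
combined index (the index name nom_both is made up):

-- MyISAM only; builds one full-text index spanning both columns.
ALTER TABLE item ADD FULLTEXT KEY nom_both (nom, nom_en);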

Anyone can help? thank you





fixed or dynamic row sizes with a field with a type of "point"...?

2006-06-11 Thread Andras Pal
Hi,

we have a table with many (~0.5 billion) records and a geometry field
which was defined as a simple "point". `show table status` shows that
the row format is dynamic; however, a simple point in the GIS
representation has a fixed size (see WKB: 21 bytes: 1 for the byte
order, 4 for the type and 2x8 for the two doubles). We have found that
when we include this point field, all `select`ions and indexing (alter
table ... add [spatial] index ...) are much slower than when we exclude
this "point" column.

I've tried `alter table ... row_format=fixed` on a simple table
which has only one row. It says that it's ok:

mysql> alter table ... row_format=fixed;
Query OK, 1 row affected (0.00 sec)
Records: 1  Duplicates: 0  Warnings: 0

but afterwards, `show table status` still shows that the row format is
dynamic.

I suppose that with the dynamic row format, the engine uses an
additional index which maps each row to its real position on disk
(i.e. in the .MYD file). For a table with ~0.5 billion records that is
at least 4 gigabytes, and every indexing/selecting operation has to go
through it (and it may not fit in memory either...).

Our questions are:
 - why does the server report success for the `alter table ...
row_format=fixed` statement when it does not actually take effect, and,
more importantly:
 - is it possible to somehow force a fixed row size when one has a
point (geometry) field? We _know_ that we only have points (objects
with a size of 21 bytes). (One possible workaround is sketched below.)
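
A hypothetical workaround, untested here: store the two coordinates as
plain DOUBLE columns (a fixed 8 bytes each), which keeps the MyISAM row
format fixed, and build the POINT value only when a geometry is
actually needed. The trade-off is that no SPATIAL index is possible on
such a table, though a regular index on (x, y) can still serve simple
bounding-box queries.

-- Fixed-width rows: no variable-length columns anywhere.
CREATE TABLE points_fixed (
  id INT UNSIGNED NOT NULL PRIMARY KEY,
  x  DOUBLE NOT NULL,
  y  DOUBLE NOT NULL,
  INDEX xy (x, y)
) ENGINE=MyISAM ROW_FORMAT=FIXED;

-- Reconstruct a geometry value on demand.
SELECT AsText(Point(x, y)) FROM points_fixed WHERE id = 1;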

(Currently, we are using stock 5.0.21-standard and 5.0.22-standard on
FC4; `arch` is ix86 and amd64.)

cheers, Andras





RE: Importing 3Gb File

2006-06-11 Thread Ian Barnes
Hi,

Yes, I don't actually know if I have very large BLOBs, but the
possibility exists and is quite likely.

I am running FreeBSD, so I don't have the ulimit program; the only
program I have is called limits, and this is what I get when running it:

[EMAIL PROTECTED] /home # limits
Resource limits (current):
  cputime  infinity secs
  filesize infinity kb
  datasize   524288 kb
  stacksize   65536 kb
  coredumpsize infinity kb
  memoryuseinfinity kb
  memorylocked infinity kb
  maxprocesses 5446
  openfiles   10893
  sbsize   infinity bytes
  vmemoryuse   infinity kb
[EMAIL PROTECTED] /home #

I have upped the RAM in the unit from 512Mb to 768Mb, so I think I
should have enough RAM now.

Any other ideas?

Thanks,
Ian

> -Original Message-
> From: Jeremy Cole [mailto:[EMAIL PROTECTED]
> Sent: 09 June 2006 05:01 PM
> To: Ian Barnes
> Cc: mysql@lists.mysql.com
> Subject: Re: Importing 3Gb File
> 
> Hi Ian,
> 
> > I am trying to import a 3.2Gb sql dump file back into my sql server
> (4.1.12)
> > and im coming across the following error:
> >
> > mysql: Out of memory (Needed 178723240 bytes)
> > mysql: Out of memory (Needed 178719144 bytes)
> 
> That error message comes from some single place trying to allocate 178MB
> at a single time.  Do you have large BLOBs in your data?
> 
> This error message means that mysqld is being denied memory by the OS,
> either because you are actually out of memory, or because your ulimit
> has been reached (more likely).
> 
> Check your ulimits for your system with ulimit -a, and adjust if
> necessary in the mysql.server or mysqld_safe script (those both run as
> root, so can adjust ulimits upwards).
> 
> Regards,
> 
> Jeremy
> 
> --
> Jeremy Cole
> MySQL Geek, Yahoo! Inc.
> Desk: 408 349 5104
> 
> --
> MySQL General Mailing List
> For list archives: http://lists.mysql.com/mysql
> To unsubscribe:
> http://lists.mysql.com/[EMAIL PROTECTED]





RE: Importing 3Gb File

2006-06-11 Thread Ian Barnes
Hi,

This is all I could see just before it happened:

mysql> show processlist;
+----+------+-----------+---------+---------+------+----------------+--------------------------------------------------+
| Id | User | Host      | db      | Command | Time | State          | Info                                             |
+----+------+-----------+---------+---------+------+----------------+--------------------------------------------------+
| 11 | root | localhost | testing | Query   | 0    | creating table | CREATE TABLE `upgrade_history` (
  `upgrade_id` int(10) NOT NULL auto_increment,
  `upgrade_vers |
| 12 | root | localhost |         | Query   | 0    |                | show processlist                                 |
+----+------+-----------+---------+---------+------+----------------+--------------------------------------------------+
2 rows in set (0.00 sec)


Then after that, the following happened for 60 seconds and then it timed
out:

mysql> show processlist;
+----+------+-----------+---------+---------+------+-------+------------------+
| Id | User | Host      | db      | Command | Time | State | Info             |
+----+------+-----------+---------+---------+------+-------+------------------+
| 11 | root | localhost | testing | Sleep   | 0    |       |                  |
| 12 | root | localhost |         | Query   | 0    |       | show processlist |
+----+------+-----------+---------+---------+------+-------+------------------+
2 rows in set (0.00 sec)


I have put more RAM in (I now have 768Mb, as well as 3Gb of swap).

Thanks for the suggestions!

Cheers
Ian


> -Original Message-
> From: mos [mailto:[EMAIL PROTECTED]
> Sent: 09 June 2006 07:15 PM
> To: mysql@lists.mysql.com
> Subject: Re: Importing 3Gb File
> 
> At 10:20 AM 6/8/2006, you wrote:
> >Hi,
> >
> >I am trying to import a 3.2Gb sql dump file back into my sql server
> (4.1.12)
> >and im coming across the following error:
> >
> >mysql: Out of memory (Needed 178723240 bytes)
> >mysql: Out of memory (Needed 178719144 bytes)
> >
> >That error comes up after about 30 minutes worth of import and I would
> guess
> >about half way through the import.
> 
> What does "Show Processlist" say its doing just before the crash? I've had
> problems with Load Data on a very large table 500 million rows because the
> machine did not have enough memory to build the index. The data was loaded
> just fine, it's building the index that hung it out to dry because that
> eats up memory like crazy. How much memory do you have on your machine?
> The
> cheapest solution might be to go out and get a few more gb of RAM.
> 
> Mike
> 
> 
> 
> >The file in question is a mysqldump
> >--all-databases file from another server that im trying to import onto my
> >desktop machine. I have tried to alter the my.cnf file a bit, and this is
> >what it looks like:
> >
> >[client]
> >#password   = your_password
> >port= 3306
> >socket  = /tmp/mysql.sock
> >
> >[mysqld]
> >port= 3306
> >socket  = /tmp/mysql.sock
> >skip-locking
> >key_buffer = 64M
> >max_allowed_packet = 8M
> >table_cache = 512
> >sort_buffer_size = 8M
> >net_buffer_length = 8M
> >myisam_sort_buffer_size = 45M
> >set-variable=max_connections=300
> >
> ># Replication Master Server (default)
> ># binary logging is required for replication
> >#log-bin
> >
> ># required unique id between 1 and 2^32 - 1
> ># defaults to 1 if master-host is not set
> ># but will not function as a master if omitted
> >server-id   = 1
> >
> >#bdb_cache_size = 4M
> >#bdb_max_lock = 1
> >
> ># Uncomment the following if you are using InnoDB tables
> >#innodb_data_home_dir = /var/db/mysql/
> >#innodb_data_file_path = ibdata1:10M:autoextend
> >#innodb_log_group_home_dir = /var/db/mysql/
> >#innodb_log_arch_dir = /var/db/mysql/
> ># You can set .._buffer_pool_size up to 50 - 80 %
> ># of RAM but beware of setting memory usage too high
> >#innodb_buffer_pool_size = 16M
> >#innodb_additional_mem_pool_size = 2M
> ># Set .._log_file_size to 25 % of buffer pool size
> >#innodb_log_file_size = 5M
> >#innodb_log_buffer_size = 8M
> >#innodb_flush_log_at_trx_commit = 1
> >#innodb_lock_wait_timeout = 50
> >
> >[mysqldump]
> >quick
> >max_allowed_packet = 16M
> >
> >[mysql]
> >no-auto-rehash
> ># Remove the next comment character if you are not familiar with SQL
> >#safe-updates
> >
> >[isamchk]
> >key_buffer = 10M
> >sort_buffer_size = 20M
> >read_buffer = 2M
> >write_buffer = 2M
> >
> >[myisamchk]
> >key_buffer = 10M
> >sort_buffer_size = 20M
> >read_buffer = 2M
> >write_buffer = 2M
> >
> >[mysqlhotcopy]
> >interactive-timeout
> >
> >
> >Does anyone have any advice as to what I could change to make it import,
> and
> >not break half way through. The command im running to import is: mysql -n
> -f
> >-p < alldb.sql
> >
> >Thanks in advance,
> >Ian
> >
> >

example when indexing hurts simple select?

2006-06-11 Thread Gaspar Bakos
Hello,

There is a table (TEST) with ~100 million records and 70 columns
(mostly integers, some doubles, and a few short fixed char() fields);
its total size is ~100Gb.

The table has a single (not unique) index on one integer column: MMi.

If I invoke a simple select based on MMi, then the selection is VERY slow:

nohup time mysql CAT -u catadmin -p$MPWD -e "create table test2
select * from TEST where MMi < 9000;"

(this selects only ~0.5% of the table, by the way, so test2 is a
small table, and the time is not spent writing it to disk)

  Time used: 47 minutes:
0.00user 0.00system 47:17.37elapsed 0%CPU (0avgtext+0avgdata
0maxresident)k 0inputs+0outputs (0major+416minor)pagefaults 0swaps

If I do the same but ignore the index, the select time drops to 1/5th !!!

nohup time mysql CAT -u catadmin -p$MPWD -e "create table test3 \
select * from TEST ignore index (MMi) where \
MMi < 9000;"

  Time used: 11 minutes:
0.00user 0.00system 11:08.23elapsed 0%CPU (0avgtext+0avgdata
0maxresident)k 0inputs+0outputs (0major+415minor)pagefaults 0swaps

Without the index, MySQL does a simple thing: it reads the 100Gb table
sequentially and, while reading, parses the rows and checks whether
MMi < 9000. This is done at about 16Mb/s.

With the index, it performs a large number of random seeks. The data
(.MYD) is probably not organized on the disk according to sorted MMi.

Questions:

1. Is there a way to decrease random seeks? E.g. mysqld config
parameters, increasing some buffer/cache sizes? (One possibility is
sketched after this list.)

2. Optimize table: is there a way to rearrange the data so that random
seeks are minimized?

3. If we have to live with a large number of random seeks, does anyone
know how the underlying FS can be optimized?
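
On question 1, a hedged sketch of one MyISAM-specific knob: a dedicated
key cache plus index preloading, so at least the index lookups stop
seeking (the cache name hot_cache and the 1Gb size are made up; seeks
into the .MYD data file remain):

-- Create a named 1Gb key cache, assign TEST's indexes to it,
-- and preload them sequentially from the .MYI file.
SET GLOBAL hot_cache.key_buffer_size = 1024*1024*1024;
CACHE INDEX TEST IN hot_cache;
LOAD INDEX INTO CACHE TEST;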

Cheers,
Gaspar


(   All this TEST DB is kept on FC3 Redhat + MySQL 5.0.22, XFS
filesystem on a 1.5Tb RAID-5 array of 4 SATA disks (served by a
3ware RAID controller).  The computer is running on two opteron
2.0GHZ CPUs and 4Gb RAM.
)




Re: my-huge.cnf quite outdated

2006-06-11 Thread Gaspar Bakos
Hi, Keith,

RE:
> This seems to be the way things are with mysql nowdays.
> Is it not time for the developers to take a serious look
> into culling all the outdated and multiple ways of
> accomplishing the same thing from mysql and the
> documentation?

This is a somewhat different subject.
But you are right about it.

On the other hand, I have been using MySQL since 2001, and I enjoy
looking at the old syntax; seeing how it has changed helps me understand
what the new syntax means.

Back to my-huge.cnf, I am sure there are many people reading the list
who run MySQL on big-big servers, and they must have figured out how to
optimize it. I am curious about their advice.

> > Any opinions of the following : ?
> >
> > [mysqld]
> > key_buffer_size=1024M
> > myisam_sort_buffer_size=256M
> > sort_buffer_size=256M
> > bulk_insert_buffer_size=64M
> > join_buffer_size=64M
> > max_connections=5
> > read_buffer_size=8M
> > read_rnd_buffer_size=8M
> > net_buffer_length=1M
> > max_allowed_packet=16M

Cheers,
Gaspar




Re: my-huge.cnf quite outdated

2006-06-11 Thread Keith Roberts
This seems to be the way things are with mysql nowadays.

Is it not time for the developers to take a serious look 
into culling all the outdated and multiple ways of 
accomplishing the same thing from mysql and the 
documentation?

All the excess documentation for different ways of
accomplishing the same outcome makes the learning curve
for mysql a lot harder.

It doesn't make things any easier for people that want to 
take the MySQL certification exams either.

Why should we have to remember many different ways to 
accomplish the same thing, when one or two ways at the most 
would be quite sufficient?

For example, do we really need so many ways to start a 
server?

Do we really need different ways to add or drop indexes and 
modify tables?

With the newer versions of mysql (5.0.21+ or maybe version 
6.0.x), can we not dump the old syntax that is in mysql for 
backward compatibility reasons?

I would really like to see a slim and trim version of mysql,
the SQL commands, and the supporting cli programs, 
that is up to date and has a very fast learning curve.

Just my little gripe.

Kind Regards

Keith Roberts

In theory, theory and practice are the same;
in practice they are not.

On Sun, 11 Jun 2006, Gaspar Bakos wrote:

> To: mysqllist 
> From: Gaspar Bakos <[EMAIL PROTECTED]>
> Subject: my-huge.cnf quite outdated
> 
> Hi,
> 
> Isn't the my-huge.cnf in the MySQL 5.0.22 distribution quite outdated?
> It says "for systems with 512Mb RAM or more". Nowadays this is a pretty
> basic setup, and 'huge' is probably something in excess of 4Gb RAM.
> 
> I wonder if anyone has a recommendation for truly huge systems. For
> example a dual CPU AMD opteron system with 4Gb (or 8Gb) RAM that is
> fully devoted to serving the mysql daemon.
> 
> The config I have (see below) has been tuned to be optimal for creating
> indexes on a large (100Gb+) single database table. It works fine
> (although not satisfactory), but I worry that some parameters may have
> an optimal value or range, and it does not make sense to increase them
> like crazy.
> 
> Any opinions of the following : ?
> 
> [mysqld]
> key_buffer_size=1024M
> myisam_sort_buffer_size=256M
> sort_buffer_size=256M
> bulk_insert_buffer_size=64M
> join_buffer_size=64M
> max_connections=5
> read_buffer_size=8M
> read_rnd_buffer_size=8M
> net_buffer_length=1M
> max_allowed_packet=16M
> 
> # Cheers,
> # Gaspar
> 
> -- 
> MySQL General Mailing List
> For list archives: http://lists.mysql.com/mysql
> To unsubscribe:http://lists.mysql.com/[EMAIL PROTECTED]
> 
> 




increase the search speed

2006-06-11 Thread Octavian Rasnita
Hi,

I have the following table:

CREATE TABLE `z` (
  `hash` varchar(16) NOT NULL default '',
  `title` varchar(255) NOT NULL default '',
  `body` text NOT NULL,
  FULLTEXT KEY `title` (`title`,`body`)
) ENGINE=MyISAM DEFAULT CHARSET=latin1

I have tried the following query:

select sql_calc_found_rows hash, title, substr(body, 1, 250) as preview,
       match(title, body) against ('deputat') as rank
from z
where match(title, body) against ('deputat' in boolean mode)
order by rank desc limit 0, 20;

The table has almost 200.000 records.
I am using MySQL 5.0.16-standard.

I want to search for records that contain a certain word (or more
complicated expressions), so I need to use a boolean-mode search.
However, I want to return only the first 20 records ordered by rank,
which is why I also use a common (non-boolean) search to get that
rank.
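
A hedged variant that may be worth timing: boolean-mode MATCH also
returns a relevance value (a coarser one than natural-language mode),
and identical MATCH() expressions in the select list and the WHERE
clause are evaluated only once, so the second full-text search can be
dropped entirely:

select sql_calc_found_rows hash, title, substr(body, 1, 250) as preview,
       -- identical to the WHERE expression, so it is computed once
       match(title, body) against ('deputat' in boolean mode) as rank
from z
where match(title, body) against ('deputat' in boolean mode)
order by rank desc limit 0, 20;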

I have set MySQL to also index 3-character words.

The problem is that this query takes more than 12 seconds, and some
other one-word searches take almost 30 seconds, which is a lot for a
table with fewer than 200.000 records.

Can I do something to increase the search speed?

I think that maybe if I change some MySQL settings the search might
work faster, but I don't know what I need to change.

Thank you.

Teddy

