RE: write out tables

2001-12-19 Thread Brett Error

Mysqldump will write out all the SQL needed to rebuild the table and its
data (in plain text form).

Connecting with the client and doing a select * from table will also write
out the table in plain text form.

Finally, doing a select ... into outfile (see the manual for syntax details)
will also write out the table in plain text form.
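
For example (the database name, table name, and output path here are just
placeholders; adjust them to your own setup):

  shell> mysqldump mydb mytable > mytable.sql
  mysql> select * into outfile '/tmp/mytable.txt' from mytable;

The first gives you the CREATE TABLE plus INSERT statements; the second has
the server write a tab-delimited text file, which means the file ends up on
the server host and your MySQL user needs the FILE privilege.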



-Original Message-
From: Jinghua Tang [mailto:[EMAIL PROTECTED]]
Sent: Wednesday, December 19, 2001 11:32 AM
To: [EMAIL PROTECTED]
Subject: write out tables


  This is a newbie question. I am wondering if there is a way to write
out a table in plain text form. Thank you in advance!

  Jinghua


Jinghua Tang, Ph.D.
Department of Molecular Biology, MB-31
The Scripps Research Institute
10550 North Torrey Pines Road
La Jolla, CA 92037
Phone:  858-784-8647 (o), 858-268-9321 (h)
Fax:    858-784-8660
E-mail: [EMAIL PROTECTED]
http://www.scripps.edu/~jinghua







RE: select 10 of each

2001-12-19 Thread Brett Error

You'll have to issue two separate queries:

  select * from table where lang=1 limit 10;
  select * from table where lang=2 limit 10;
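
If you happen to be on MySQL 4.0 or later (where UNION is available), you
may also be able to do it in one statement; the parentheses keep each LIMIT
tied to its own select (substitute your real table name for "table"):

  (select * from table where lang=1 limit 10)
  union all
  (select * from table where lang=2 limit 10);

On 3.23, though, two separate queries are the way to go.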

-Original Message-
From: Jacob Friis Larsen [mailto:[EMAIL PROTECTED]]
Sent: Wednesday, December 19, 2001 11:46 AM
To: Mysql maillist
Subject: select 10 of each

How do I select 10 rows with lang = 1 and 10 rows with lang = 2 from a MySQL
database?

Please also reply to my E-Mail address.

:) Jacob






Strange insert behavior with larger tables

2001-09-10 Thread Brett Error

This weekend I wanted to better characterize how different key
configurations affect the way rows are inserted.  I ran into some behavior I
can't understand:

I created a prototype table to allow me to track the number of events that
occur on a particular day.  Events are described with three integer fields
and a type field which can be 0-3.  The table then is:  

date date not null, type tinyint not null, val1 int not null, val2 int not
null, val3 int not null, occurrences int not null.

I then added a primary key (date, type, val1, val2, val3) because I want to
ensure that I have only one row for each combination of these items.
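
In SQL, the prototype looks roughly like this (the table name "events" is
just what I'm calling it here):

  create table events (
    date date not null,
    type tinyint not null,
    val1 int not null,
    val2 int not null,
    val3 int not null,
    occurrences int not null,
    primary key (date, type, val1, val2, val3)
  );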

I ran these benchmarks on my Win2K box (real environment Linux, just testing
here).  I wrote a program which inserts a number of rows for each day (with
unique type, val1, val2, val3 values).  type can have the values 0-3, and
val1-3 can be between 0 and 24,000.  I found that I could insert about
10,000 rows in about 4-5 seconds (512 M RAM, IDE drives UDMA).  It
consistently took between 4-7 seconds to insert 10,000 rows until I got up
to about 2 million rows.  Around that point it started to slow down
significantly, taking about 10 seconds to insert 10,000 rows.  By the time I
got up to about 15 million rows, it was taking 30-40 seconds to insert
10,000 rows.  The strange thing, however, is that as soon as I changed to a
new day (the first element in my primary key), rows began inserting very
quickly again.  I was right back where I started: 10,000 rows in only
4-5 seconds.
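
The inserts themselves are nothing fancy; the program issues statements
roughly like this (values made up for illustration), one batch after
another:

  insert into events values
    ('2001-09-08', 0, 17, 4211, 9302, 1),
    ('2001-09-08', 2, 23999, 5, 18777, 1);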

I guessed that the key was the limiting factor here somehow, so I tried my
test again, but first dropped the primary key.  Without the key I no longer
have a guarantee of uniqueness, but I'm able to insert 10,000 rows every 3-4
seconds consistently, even well past 5 million rows for a single day.  Therefore, it
appears to be the key which is so dramatically slowing inserts as I get more
rows.

Curious, I ran perfmon on my machine and repeated the experiment.  I
observed the following:

- Time to insert quadruples (from 4-5 sec/10,000 rows to 20-25 sec/10,000
rows)
- Processor usage drops by about two thirds (100% usage to 37% usage)
- Amount written to disk/sec roughly halves (~400 bytes/sec to ~200
bytes/sec)
- Number of writes/sec increases by about two thirds (232 to 385)
- Avg. amount of data with each write drops 4 fold (19000 bytes to 5000
bytes)
- Avg. time to do each write stays constant (.004 sec)
- The machine is NOT swapping page files on and off the disk.

To summarize, at first when doing these inserts the machine is CPU bound.
It is using 100% of CPU resources.  However, as the number of rows for the
day we are inserting increases, the machine becomes severely IO bound.  This
seems to be due to the fact that although the number of writes to the disk
increases, the average amount of data per write drops roughly four-fold, and
the total amount of data written per second falls as well.  Basically we
switch from doing fewer, larger writes and being CPU bound, to doing many
more tiny writes that the disk can't keep up with.

At first (before seeing the perfmon data) I thought this might have something
to do with the key being held in a buffer until it grew too large; that is
when we'd start to see the performance slow down.  However, I have not been
able to find a value of the key buffer which really changes performance.
I've tried values between 3M and 70M, with very similar results.  I do get
slightly poorer performance with 3M than with 70M (the slowdown starts around
1.3 million rows instead of 2 million), but nothing that I think accounts for
this problem.
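
(For what it's worth, the buffer was sized through the usual my.cnf
mechanism, something like:

  [mysqld]
  set-variable = key_buffer=64M

with a different value for each run.)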

Why do inserts slow down once I get to around two million rows, and why
don't I seem to be able to affect this by tweaking buffer sizes?

Why do inserts then speed up again when switching to a new value for date,
but slow down again once I get close to 2 million for the new value?

I've included below a dump of variables.

Thanks in advance for your insight,

Brett Error


back_log: 50
basedir: C:\mysql\
binlog_cache_size: 32768
character_set: latin1
character_sets: latin1 big5 czech euc_kr gb2312 gbk sjis tis620 ujis dec8
dos german1 hp8 koi8_ru latin2 swe7 usa7 cp1251 danish hebrew win1251
estonia hungarian koi8_ukr win1251ukr greek win1250 croat cp1257 latin5
concurrent_insert: ON
connect_timeout: 5
datadir: g:\mysql data\
delay_key_write: ON
delayed_insert_limit: 100
delayed_insert_timeout: 300
delayed_queue_size: 1000
flush: OFF
flush_time: 1800
have_bdb: NO
have_gemini: NO
have_innodb: NO
have_isam: YES
have_raid: NO
have_ssl: NO
init_file: 
interactive_timeout: 28800
join_buffer_size: 131072
key_buffer_size: 52424704
language: C:\mysql\share\english\
large_files_support: ON
log: OFF
log_update: OFF
log_bin: OFF
log_slave_updates: OFF
log_long_queries: OFF
long_query_time: 10
low_priority_updates: OFF
lower_case_table_names: 1
max_allowed_packet: 1048576
max_binlog_cache_size: 4294967295
max_binlog_size: 1073741824
max_connections: 100
max_connect_errors: 10
max_delayed_threads: 20