I can't seem to find anything on my problem. Hope the list can help:
A thread class creates a connection to MySQL - all is well.
If the SQL takes too long, the calling thread interrupts that thread and closes
the resources:
rs.close(), stmt.close(), conn.close()
However, the mysqld server connections are not closed
after a wh
yves
When creating a varchar field in your table definition, add the BINARY attribute.
That way, selection is exact. Always.
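A minimal sketch of what I mean (table and column names are made up):
CREATE TABLE t (name VARCHAR(100) BINARY NOT NULL);
-- comparisons on a BINARY varchar are byte-for-byte,
-- so WHERE name = 'Yves' will not match 'yves'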
david
-Original Message-
From: Yves Goergen [mailto:[EMAIL PROTECTED]
Sent: Monday, March 03, 2008 1:44 PM
To: Anders Karlsson
Cc: MySQL
Subject: Re: Unicode sorting and binary c
glenn
UNION is a 4.0 feature.
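For example, on 4.0 or later this works (made-up table names):
SELECT id FROM t1
UNION
SELECT id FROM t2;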
david
>Notwithstanding end-of-life status, is there an issue in MySQL
>v3.23 r.e. the "union" operator?
dave
I believe GROUP is a reserved word. Change the column name to grp.
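A quick illustration (made-up table name; backquoting is the other way out if you can't rename):
SELECT group FROM mytable;   -- fails, GROUP is reserved
SELECT grp FROM mytable;     -- works after renaming the column
SELECT `group` FROM mytable; -- or quote the reserved name with backticks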
david
-Original Message-
From: Dave M G [mailto:[EMAIL PROTECTED]
Sent: Wednesday, January 04, 2006 8:55 AM
To: mysql@lists.mysql.com
Subject: Re: Exporting utf-8 data problems
Thank you for the advice.
In order
chris
I chased this down a while back.
You are correct: 4G is the max on the 4.0 & 4.1 versions
of MySQL. I don't know about 5.0, though.
I looked at the source for sql/mysqld.cc,
line 4170: UINT_MAX32.
Someone else found docs that said Monty had
done this since MyISAM tables had issues
deep inside the
Solaris box with 16 GB of memory,
OS is 5.8, MySQL at 4.0.20 64-bit.
With key_buffer_size set to 6144M, mysqld crashes
and restarts (mysqld_safe) under heavy load (1 day to 1 week).
Prior to this, key_buffer_size was set to 1.5G with no crashes since
early 2004.
the process size is 2G even tho
bertrand
I went to tab-delimited fields for the same reason:
commas appear in text often, tabs rarely do.
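A LOAD DATA sketch for tab-delimited files (file and table names are made up):
LOAD DATA INFILE '/tmp/export.txt' INTO TABLE mytable
FIELDS TERMINATED BY '\t'
LINES TERMINATED BY '\n';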
david
> -Original Message-
> From: Bertrand Gac [mailto:[EMAIL PROTECTED]
> Sent: Thursday, October 21, 2004 12:09 AM
> To: mysql
> Subject: Importing Excel Data in MySql
>
>
The statement is taking 3 hours and counting:
"OPTIMIZE TABLE tablename"
Normally this takes 30 minutes or so.
The boss wants to kill this thread.
Does that mean the table will end up corrupt?
Any input is helpful.
david
Hardware: Solaris, 4 CPU, running version 8
MySQL version 4.0
matt
>
> the issue is insert speed, I get 150k-1M records daily, of
> these, only
> 5-10 % are new records, I load the EBCDIC file into a temp table, and
> then do "insert ignore into historytable select * from temp table"
>
Since you have a temp table created (no keys I assume), use the comm
matt
1) Inserts using this format:
INSERT INTO table (col1, col2) VALUES (val1,val2), (val3,val4)
are much faster than single-row inserts. My experience is
2.5 hrs vs. 36 hrs.
2) The PACK_KEYS=1 may be hurting you. I've never used it.
3) There may be a cache somewhere
All
Hardware: Solaris 4 cpu, 16G ram, 900MHz, ver 5.8
MySQL: 4.0.20 binary install, 32 bit version
Admin install, I maintain the my.cnf, and do db admin.
Installed a production db (10 million rows) with no
problems. However, I can only open 20 or so connections
to mysqld. The error is:
ERROR 11
John
Try
select field, count(*)
from db.table
group by field;
David
> -Original Message-
> From: John Nichel [mailto:[EMAIL PROTECTED]
> Sent: Monday, May 24, 2004 10:37 AM
> To: MySQL List
> Subject: Query question
>
>
> Hi,
>
>I have a table which I want to select data from (
Jack
You need a compound index on your table:
ALTER TABLE ifInOctets_137 ADD INDEX i_id_dtime (id, dtime);
In your SELECT statement, change
"USE INDEX (dtime)"
to
"USE INDEX (i_id_dtime)"
Let us know how it works.
David
> -Original Message-
> From: Jack Coxen [mailto:[EMAIL
filter: select, mysql
Solaris 3.23.40 connections using perl, jdbc, odbc.
Yesterday, we experienced a rare mysqld failure where all connections
were in the state="Opening table". Normally our 200+ connections
are in state="sleep". The Time field from command
"show processlist"
showed each co
Eric
Try putting the "HAVING" clause after GROUP BY. HAVING comes
after GROUP BY and before ORDER BY and LIMIT.
David
> -Original Message-
> From: Eric Ellsworth [mailto:[EMAIL PROTECTED]
> Sent: Tuesday, January 27, 2004 8:01 AM
> To: [EMAIL PROTECTED]
> Subject: Syntax Check: Group BY with
Gregory
mysql,select,query
I agree with Joe: use a multiple-column index. Much more efficient.
All queries should come in under 5-10 seconds.
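Something like this (table and column names are guesses):
ALTER TABLE t ADD INDEX idx_a_b (col_a, col_b);
-- a WHERE on col_a and col_b uses the whole index;
-- a WHERE on col_a alone can still use the leftmost part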
David
> -Original Message-
> From: Gregory Newby [mailto:[EMAIL PROTECTED]
> Sent: Thursday, January 15, 2004 11:53 AM
> To: [EMAIL PROTECTED]
> S
mysql, query
A situation has come up where we want one slave to act
as a backup for two different databases located on two
different hosts. Can this be done?
The master setup is easy. The slave setup is unclear.
Can I have two sets of master-host, master-user, and
master-password, master-port,
Jonathon
We are using ODBC and JDBC to link up from Notes to MySQL.
David
> -Original Message-
> From: Jonathan Villa [mailto:[EMAIL PROTECTED]
> Sent: Monday, September 15, 2003 1:43 PM
> To: [EMAIL PROTECTED]
> Subject: Lotus Notes/Script...
>
>
> Has anyone every connected Lotus Not
Dudes & Dudetts
filter: mysql, select, sql
mysql version: 3.23.49
jdbc version: 2.0.14
development os: win2k
production os: solaris
Problem:
During a reboot of mysqld, the perl connections survived
and the jdbc connections did not. My definition of survived is
that the connections waite
Mustafa
Try this command from the client window:
show processlist;
You must be logged in as root to list all connections.
If you are logged in as a normal user, only the connections
you have privileges for are listed.
David
>
> How i list connected users(active) to mysql server?
>
--
kees
How do you measure spiked queries/s?
All I see is average queries/s from the status command.
I can see the calculation based on uptime in seconds and
total queries in that time. But that's average. My boss
wants avg and spiked queries/s on a web site.
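For reference, a sketch of the average calculation I mean (a spike figure would need periodic sampling, e.g. from cron):
SHOW STATUS LIKE 'Questions';
SHOW STATUS LIKE 'Uptime';
-- average queries/s = Questions / Uptime
-- for spikes, record Questions every N seconds and take the
-- largest (Questions_now - Questions_previous) / N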
David
> -Original Message-
> From: Ke
Guru's
The problem is that a column of type SMALLINT needs to become MEDIUMINT (MyISAM).
column: id SMALLINT UNSIGNED NOT NULL AUTO_INCREMENT
My solution is to use the following ALTER statement:
ALTER TABLE messages CHANGE id
id MEDIUMINT UNSIGNED NOT NULL AUTO_INCREMENT
My question is: what happens to the s
Gurus
I'm having trouble understanding the column type SET.
To explain my problem, I'll create a table, populate
it, and do selects.
mysql> CREATE TABLE settest( chain SET( "A", "B", "C" ) NOT NULL,
UNIQUE INDEX ichain( chain ) );
mysql> INSERT INTO settest SET chain="A,C
Hasan
Try typing your select statement in a text editor.
Open a mysql client connection and copy paste. This
gets around the problem.
For multiple sql statements, put them in a file and
redirect it to the mysql client:
shell> mysql -u hasan -p < file.sql
David
> -Original Message-
> Fr
Fibilt
try:
SELECT id, CONCAT(lname, fname) AS name, COUNT(*) AS cnt
FROM table1
GROUP BY name
HAVING cnt > 1;
This is close anyhow.
David
> -Original Message-
> From: Phillip S. Baker [mailto:[EMAIL PROTECTED]]
> Sent: Wednesday, October 16, 2002 1:17 PM
> To: [EMAI
Inandjo
Try setting the variable "wait_timeout" to 300 seconds.
That way, mysqld will close idle connections automatically.
Set the variable in the file "my.cnf".
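A my.cnf sketch (300 seconds is just an example value):
[mysqld]
wait_timeout=300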
David
> -Original Message-
> From: Inandjo Taurel [mailto:[EMAIL PROTECTED]]
> Sent: Wednesday, October 16, 2002 9:54
Steve
Try: SELECT MAX(field_timestamp) from table1;
David
> -Original Message-
> From: Steve Buehler [mailto:[EMAIL PROTECTED]]
> Sent: Tuesday, August 27, 2002 5:21 PM
> To: mysql; PHP
> Subject: time stamp
>
>
> I am using PHP with MySQL and have a timestamp field in my db
> table.
Ben
It would appear that the deletion of rows may be a problem.
After deleting rows older than 6 months, do you optimize the
table?
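For example (made-up table and column names):
DELETE FROM events WHERE created < DATE_SUB(NOW(), INTERVAL 6 MONTH);
OPTIMIZE TABLE events;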
As I understand it, MySQL does not actually delete the rows, it only marks
a bit for every row that's deleted. That way, delete speed is
fast. However, it slows down queries and i
Bhavin
I'm doing queries against one large table joined with 4 other smaller tables.
The large table is 12-14 million records. With a single-column index,
the table select is 13-15 seconds. With a compound index,
the delay is under one second. Use EXPLAIN to verify which
index is being used. Read the manual to tune your indexes.
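A sketch of that check (all names are made up):
ALTER TABLE big ADD INDEX idx_a_b (col_a, col_b);
EXPLAIN SELECT * FROM big WHERE col_a = 1 AND col_b = 2;
-- the key column of the EXPLAIN output should show idx_a_b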
mike0
.MYI is the indexes
.MYD is the data
Can't think why it would suddenly grow in size.
Try the CHECK TABLE command to get more detail.
David
> -Original Message-
> From: //mikezero/ [mailto:[EMAIL PROTECTED]]
> Sent: Monday, July 08, 2002 12:37 PM
> To: [EMAIL PROTECTED]
> Subject: a
Andy
2) Locks are per thread. If a thread dies, so does
its lock.
David
> -Original Message-
> From: Andy Sy [mailto:[EMAIL PROTECTED]]
> Sent: Wednesday, June 12, 2002 12:03 PM
> To: [EMAIL PROTECTED]
> Subject: Auto-increment across multiple tables / Atomicity of update
> statements
Guy
No. Requires multiple inserts.
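For instance (made-up tables, assuming orders has an AUTO_INCREMENT id; LAST_INSERT_ID() carries that key from the first insert into the second):
INSERT INTO orders (customer_id) VALUES (42);
INSERT INTO order_items (order_id, item) VALUES (LAST_INSERT_ID(), 'widget');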
David
> -Original Message-
> From: Defryn, Guy [mailto:[EMAIL PROTECTED]]
> Sent: Wednesday, May 15, 2002 4:55 PM
> To: [EMAIL PROTECTED]
> Subject: insert into multiple tables??
>
>
>
>
> Is it possible to insert data into different tables with o
Mike
Try moving the datadir to your home partition.
This is a variable in the my.cnf config file.
I just did this on an NT box. You will have
to stop mysql, change the config file, move
your current data dir to the new location and
restart mysql.
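The my.cnf entry would look something like this (the path is only an example):
[mysqld]
datadir=/home/mysql/data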
David
-Original Message-
From: Mike Mike [ma
ks!
David
-Original Message-
From: Michael Widenius [mailto:[EMAIL PROTECTED]]
Sent: Tuesday, April 23, 2002 12:59 AM
To: Lopez David E-r9374c
Cc: Mysql List (E-mail)
Subject: table lock - which connection id own's it
Hi!
>>>>> "Lopez" == Lopez David > writ
AntiSpam - mysql, sql, query
Version: 3.23.49a, MyISAM, NT, Solaris
My app is 150 daemons writing logs to mysql. The main
table is of fixed size but depends on four other tables
for foreign keys (hash tables). These tables are uploaded
once to the daemon at start-up. Occasionally, a new entry
Keith
What about placing the output of the SELECT in a
temporary table in mysql space and making
a second selection from the temp table? That would
get you what you want.
1) CREATE TEMPORARY TABLE tmp ...
2) INSERT INTO tmp SELECT ...
3) SELECT ... FROM tmp ...
Just a thought
David
-
Katen [mailto:[EMAIL PROTECTED]]
Sent: Wednesday, April 10, 2002 8:47 AM
To: Lopez David E-r9374c; [EMAIL PROTECTED]
Subject: RE: select query optimization
i made the change, but it looks like it didn't speed the query up at all.
here are the results from the first explain:
mysql> e
Katen
Try using a compound index with STATE and POOL
INDEX ipoolstate( POOL, STATE )
Use EXPLAIN SELECT to see what mysql thinks.
David
PS anybody know if KEY is the same as INDEX?
-Original Message-
From: Steve Katen [mailto:[EMAIL PROTECTED]]
Sent: Wednesday, April 10, 2002 8
Steve
Have you tried using a compound index:
INDEX( POOL, STATE )
Just a thought.
David
-Original Message-
From: Steve Katen [mailto:[EMAIL PROTECTED]]
Sent: Wednesday, April 03, 2002 10:13 AM
To: [EMAIL PROTECTED]
Subject: select query optimization
i am running a basic install of m
Jeff
You can make a clean start but it takes work.
1) Create a new table clients_tmp - same fields - no indexes.
Add a field and make it a simple integer - say idx_int.
2) Insert all the fields into clients_tmp from clients;
place the clients.idx field into clients_tmp.idx_int.
a) At th
Anti-SPAM: mysql, query
Gurus
I'm new to replication. I set it up this weekend and it has worked
flawlessly for several days. Then I used the command:
TRUNCATE TABLE tbl;
That worked on the master, but was not replicated on the slave.
No mention of this in the manual. Using version 3.23.49 on
master and slav
Walter
Try adding to your select statement:
ORDER BY DESCRIPTION ASC
David
-Original Message-
From: Walter D. Funk [mailto:[EMAIL PROTECTED]]
Sent: Wednesday, March 27, 2002 11:05 AM
To: [EMAIL PROTECTED]
Subject: How can I Sort by Relevance?
I have a query like this
select * fro
Hank
Instead of dropping down to OS cp commands, insert the data from
table TBL_FLAT to TBL_INDX.
INSERT INTO TBL_INDX SELECT * FROM TBL_FLAT;
This is valid in MySQL. The insert will be fast since it is
only one insert statement. Let the list know if this works
for you. Or if your method is val
Using MySQL 3.23.40
The app is a logging db: 130 clients log to a central MySQL db
with 2 million events per day. Aged events of over 7 days
are no longer required. The central table is MyISAM type
with fixed-length rows (datetime, enum and foreign keys only).
Selects are done infrequently while inserts are