Re: Problem with query on 5.11

2006-10-19 Thread Chris

Don O'Neil wrote:

Why does this query return no results:

SELECT * FROM FileList WHERE MATCH Filename AGAINST ("9640")

When there are entries in the Filename list that have 9640 in them?


How many rows are in the table? Full text won't work with only a couple 
of rows.


And you do have a FULLTEXT index on the Filename field, right?
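A quick sketch using the table and column names from the question: the FULLTEXT index must exist, and in natural-language mode a term that appears in 50% or more of the rows is ignored, so on a table with only a handful of rows the search often returns nothing; boolean mode sidesteps that threshold.

ALTER TABLE FileList ADD FULLTEXT (Filename);
SELECT * FROM FileList WHERE MATCH (Filename) AGAINST ('9640' IN BOOLEAN MODE);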

--
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/[EMAIL PROTECTED]



Problem with query on 5.11

2006-10-19 Thread Don O'Neil
Why does this query return no results:

SELECT * FROM FileList WHERE MATCH Filename AGAINST ("9640")

When there are entries in the Filename list that have 9640 in them?

I'm using MySQL 5.1.11.

Thanks!


-- 
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/[EMAIL PROTECTED]



Re: Oracle query..

2006-10-19 Thread Daniel Kasak

Renish wrote:


Hello,

Sorry, again you are wrong.

There's no connection with MySQL here.


Well you're *very* much in the wrong list. Why are you posting to a 
MySQL list if your question has nothing to do with MySQL? You do realise 
there are Oracle lists, don't you? Surely people there will be able to 
give you more accurate advice than people here?


--
Daniel Kasak
IT Developer
NUS Consulting Group
Level 5, 77 Pacific Highway
North Sydney, NSW, Australia 2060
T: (+61) 2 9922-7676 / F: (+61) 2 9922 7989
email: [EMAIL PROTECTED]
website: http://www.nusconsulting.com.au

--
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/[EMAIL PROTECTED]



Re: Oracle query..

2006-10-19 Thread Renish

Thank you for your speedy reply.
- Original Message - 
From: "Chris" <[EMAIL PROTECTED]>

To: "Renish" <[EMAIL PROTECTED]>
Cc: "Daniel Kasak" <[EMAIL PROTECTED]>; 
Sent: Friday, October 20, 2006 1:51 PM
Subject: Re: Oracle query..



Renish wrote:

Hello,

Sorry, again you are wrong.

There's no connection with MySQL here.

Let's imagine I only have Oracle 7.3 data and then import the same data
into Oracle 10g.


He's saying you need to talk to an Oracle mailing list or forum.

We don't use Oracle, so we can't help you.



--
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/[EMAIL PROTECTED]



Re: Oracle query..

2006-10-19 Thread Chris

Renish wrote:

Hello,

Sorry, again you are wrong.

There's no connection with MySQL here.

Let's imagine I only have Oracle 7.3 data and then import the same data
into Oracle 10g.


He's saying you need to talk to an Oracle mailing list or forum.

We don't use Oracle, so we can't help you.


--
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/[EMAIL PROTECTED]



Re: Oracle query..

2006-10-19 Thread Renish

Hello,

Sorry, again you are wrong.

There's no connection with MySQL here.

Let's imagine I only have Oracle 7.3 data and then import the same data into
Oracle 10g.


I hope this helps
Renish
- Original Message - 
From: "Daniel Kasak" <[EMAIL PROTECTED]>

To: "Renish" <[EMAIL PROTECTED]>; 
Sent: Friday, October 20, 2006 1:39 PM
Subject: Re: Oracle query..



Renish wrote:


Hi Mr Kasak,

I think you misunderstood my question. I don't want to import into MySQL; I
want to import the data into Oracle 10g. As I said, I already have Oracle 7.3
data available with me. Please tell me step by step what I should do, as I am
new to this field. I appreciate your time in this matter.


Ah. Well you're asking the wrong list then. But the general idea is the 
same.


First I would look for a migration tool. Failing that, I would export each 
table to csv files, create the destination tables in Oracle, and then 
import from the csv files. But again, migration tools are going to be the 
best bet, as they will set up your table definitions.


There is a *remote* chance that Oracle will be able to import from a 
'mysqldump' backup. You can also check that out, but I doubt it will work 
somehow.


--
Daniel Kasak
IT Developer
NUS Consulting Group
Level 5, 77 Pacific Highway
North Sydney, NSW, Australia 2060
T: (+61) 2 9922-7676 / F: (+61) 2 9922 7989
email: [EMAIL PROTECTED]
website: http://www.nusconsulting.com.au 



--
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/[EMAIL PROTECTED]



Re: Oracle query..

2006-10-19 Thread Daniel Kasak

Renish wrote:


Hi Mr Kasak,

I think you misunderstood my question. I don't want to import into MySQL; I
want to import the data into Oracle 10g. As I said, I already have Oracle 7.3
data available with me. Please tell me step by step what I should do, as I am
new to this field. I appreciate your time in this matter.


Ah. Well you're asking the wrong list then. But the general idea is the 
same.


First I would look for a migration tool. Failing that, I would export 
each table to csv files, create the destination tables in Oracle, and 
then import from the csv files. But again, migration tools are going to 
be the best bet, as they will set up your table definitions.


There is a *remote* chance that Oracle will be able to import from a 
'mysqldump' backup. You can also check that out, but I doubt it will 
work somehow.


--
Daniel Kasak
IT Developer
NUS Consulting Group
Level 5, 77 Pacific Highway
North Sydney, NSW, Australia 2060
T: (+61) 2 9922-7676 / F: (+61) 2 9922 7989
email: [EMAIL PROTECTED]
website: http://www.nusconsulting.com.au

--
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/[EMAIL PROTECTED]



Re: Oracle query..

2006-10-19 Thread Renish

Hi Mr Kasak,

I think you misunderstood my question. I don't want to import into MySQL; I
want to import the data into Oracle 10g. As I said, I already have Oracle 7.3
data available with me. Please tell me step by step what I should do, as I am
new to this field. I appreciate your time in this matter.


Cheers,
Renish Koshy


- Original Message - 
From: "Daniel Kasak" <[EMAIL PROTECTED]>

To: "Renish" <[EMAIL PROTECTED]>; 
Sent: Friday, October 20, 2006 11:50 AM
Subject: Re: Oracle query..



Renish wrote:

Hi All,

I have Oracle 7.3 data available with me. I see many *.ora files and one
Import_Rawdata.bat file. I don't know which one to import. Now I have
installed the Oracle 10g server and client. Could you please tell me how I
can import that data into Oracle 10g so that I can see all the tables?

For example, when I have an SQL file (*.sql) I used to import it in DOS mode
like:

Mysql4.1> bin> mysql < file.sql -p -u root

Thank you very much.

Regards,

Renish



There are probably migration tools available to move your data. Check on 
the MySQL website. If there are tools, I would recommend using them.


If you want to do it yourself, you need to export your data from Oracle 
before importing it into MySQL. Dump each table to a comma-delimited file, 
and use 'load data infile' to import it into MySQL. Read up on 'load data 
infile' to find the exact format and commands to use.


--
Daniel Kasak
IT Developer
NUS Consulting Group
Level 5, 77 Pacific Highway
North Sydney, NSW, Australia 2060
T: (+61) 2 9922-7676 / F: (+61) 2 9922 7989
email: [EMAIL PROTECTED]
website: http://www.nusconsulting.com.au 



--
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/[EMAIL PROTECTED]



Re: Oracle query..

2006-10-19 Thread Daniel Kasak

Renish wrote:

Hi All,

I have Oracle 7.3 data available with me. I see many *.ora files and one
Import_Rawdata.bat file. I don't know which one to import. Now I have
installed the Oracle 10g server and client. Could you please tell me how I
can import that data into Oracle 10g so that I can see all the tables?

For example, when I have an SQL file (*.sql) I used to import it in DOS mode
like:

Mysql4.1> bin> mysql < file.sql -p -u root

Thank you very much.

Regards,

Renish



There are probably migration tools available to move your data. Check on 
the MySQL website. If there are tools, I would recommend using them.


If you want to do it yourself, you need to export your data from Oracle 
before importing it into MySQL. Dump each table to a comma-delimited 
file, and use 'load data infile' to import it into MySQL. Read up on 
'load data infile' to find the exact format and commands to use.
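A minimal LOAD DATA INFILE sketch to go with that; the file path, table name and terminators are all hypothetical and must match however the CSV was exported:

LOAD DATA INFILE '/tmp/customers.csv'
INTO TABLE customers
FIELDS TERMINATED BY ',' OPTIONALLY ENCLOSED BY '"'
LINES TERMINATED BY '\n';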


--
Daniel Kasak
IT Developer
NUS Consulting Group
Level 5, 77 Pacific Highway
North Sydney, NSW, Australia 2060
T: (+61) 2 9922-7676 / F: (+61) 2 9922 7989
email: [EMAIL PROTECTED]
website: http://www.nusconsulting.com.au

--
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/[EMAIL PROTECTED]



Oracle query..

2006-10-19 Thread Renish

Hi All,

I have Oracle 7.3 data available with me. I see many *.ora files and one
Import_Rawdata.bat file. I don't know which one to import. Now I have
installed the Oracle 10g server and client. Could you please tell me how I
can import that data into Oracle 10g so that I can see all the tables?

For example, when I have an SQL file (*.sql) I used to import it in DOS mode
like:

Mysql4.1> bin> mysql < file.sql -p -u root

Thank you very much.

Regards,

Renish



--
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/[EMAIL PROTECTED]



Connection character set be changed silently?

2006-10-19 Thread Hanson Lu

My program connects to a MySQL server (4.1.12) and sets its character set
with "SET NAMES gbk".

When the client program starts, the variables are:

character_set_client      gbk
character_set_connection  gbk
character_set_database    gbk
character_set_results     gbk
character_set_server      gbk

After a long while, like one night, I found that the client character set had
been changed to latin1:

character_set_client      latin1
character_set_connection  latin1
character_set_database    gbk
character_set_results     latin1
character_set_server      gbk

It seems related to some "timeout" value; wait_timeout and
interactive_timeout are 28800 (8 hours).

I tested the same program with MySQL 4.1.7, and there the connection was
disconnected by the server once wait_timeout (8 hours) expired.

Why would the character set be changed with MySQL 4.1.12? My program uses the
MySQL C API.

MySQL setting : My.cnf
[mysqld]
socket=/var/lib/mysql/mysql.sock
# Default to using old password format for compatibility with mysql 3.x
# clients (those using the mysqlclient10 compatibility package).
old_passwords=1
default-character-set=gbk
[mysql.server]
user=mysql
default-character-set=gbk
default-collation=gbk_bin
[mysqld_safe]
err-log=/var/log/mysqld.log
pid-file=/var/run/mysqld/mysqld.pid
default-character-set=gbk

[mysql]
default-character-set=gbk

Thanks
Hanson
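A likely explanation, offered as a guess rather than a confirmed diagnosis: if the connection is dropped at wait_timeout and the client library silently reconnects, the new session comes up with the default character set (latin1 here), so any SET NAMES issued on the old session is lost. A minimal sketch of the usual workaround is to re-issue the setting on every new or reconnected session and verify it:

SET NAMES gbk;
-- confirm the session-level values afterwards
SHOW VARIABLES LIKE 'character_set_%';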


Alter Table Add Column - How Long to update

2006-10-19 Thread Ow Mun Heng
Just curious to know, 

I tried to add a column to a table with ~1.7 million rows (~1 GB in size),
and the ALTER took close to 15-20 minutes before it said it was done.
Is this kind of speed expected?

I don't really understand how ALTER TABLE ... ADD COLUMN is done, but when I
look at "show processlist" I see that it says the state is
"copying into tmp table".

Does the ALTER TABLE mean that MySQL has to copy the entire table,
row by row, into a temporary table and add in the additional column (or
two)?

I'm using InnoDB by the way
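For what it's worth, that is essentially what happens in this MySQL version: ALTER TABLE builds a copy of the table with the new definition, copies the rows across, then swaps it in, so the time grows with the table size. A minimal sketch with hypothetical names, watched from a second session:

ALTER TABLE my_big_table ADD COLUMN extra_flag TINYINT NULL;
-- in another connection, the copy shows up as a long-running state:
SHOW FULL PROCESSLIST;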

-- 
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/[EMAIL PROTECTED]



innodb and high server load

2006-10-19 Thread PaginaDeSpud




hi, I have two forum tables of about 700 MB each, and they were MyISAM. I was
getting some locking problems and decided to switch them to InnoDB, but the server
load grew from 3 to 20. I followed your steps but did not get any
improvement in server load.
Should I go back to MyISAM, or is there some way to solve this?
thanks



Re: Deletes on big tables

2006-10-19 Thread nigel wood

Marvin Wright wrote:


I have 3 tables where I keep cache records, the structures are something
like
 

TableA is a 1 to many on TableB which is a 1 to many on TableC 


To give you an idea of size, TableA has 8,686,769 rows, TableB has
56,322,236 rows and TableC has 1,089,635,551 rows.


My expiry runs on a daily basis and deletes approximately this number of
rows from each table.



 TableA500,000

 TableB  4,836,560

 TableC 71,534,549

 


My suggestions:

1) Add an (expirydate, primary key) composite index on TableA and make
sure your foreign keys are in place
2) Turn on cascading deletes for these three tables or (less optimally)
use a single multi-table delete in a stored procedure to delete lots of
rows (in A) per query, not single rows
3) Run the delete query with a row limit in a transaction (see the sketch
below), use SHOW INNODB STATUS to monitor how much rollback space the
InnoDB engine has left, and increase the batch size if possible
4) If your data integrity can take the risk, turn off InnoDB's
flush-on-commit for this connection during the deletes
5) Run OPTIMIZE TABLE once a week; if you're deleting lots of records for
a while, the database is probably fragmented. If necessary for
availability, use a DR slave/master pair and run OPTIMIZE TABLE on the DR pair
before promoting them to live.
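A minimal sketch of one expiry batch along those lines, assuming an index on TableA.expirydate and ON DELETE CASCADE foreign keys from TableB and TableC back up the chain; the cutoff and batch size are purely illustrative, and the batch is repeated until no rows are affected:

START TRANSACTION;
DELETE FROM TableA
 WHERE expirydate < CURDATE() - INTERVAL 45 DAY   -- substitute the real expiry rule
 LIMIT 10000;                                     -- tune against available rollback space
COMMIT;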


Then look at the memory. If you can't upgrade the OS to use all that
memory (we do on 32-bit servers), at least create a huge RAM disk and
tell MySQL to use it as temporary storage.


Hope that helps

Nigel


--
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/[EMAIL PROTECTED]



Re: Deletes on big tables

2006-10-19 Thread Chris



2.  If I could split the tables up into smaller tables would this
help ?  My dilemma here is that I can split the data, the data would be
in different tables but on the same hardware, the same number of deletes
would still have to happen so would it actually make any difference ?


No idea about the rest but yes you'll get better performance with 
smaller tables.


What indexes do you have on the table(s)? Hopefully no redundant or
duplicate ones.


Deletes have to update all indexes on the table at the same time - the 
less data, the smaller the index(es), the quicker the response.
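A quick way to audit that, sketched with a table name from the thread; any index whose leading columns duplicate another index's is a candidate to drop (the DROP shown is purely hypothetical):

SHOW INDEX FROM TableC;
-- e.g. ALTER TABLE TableC DROP INDEX some_redundant_index;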


--
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/[EMAIL PROTECTED]



Deletes on big tables

2006-10-19 Thread Marvin Wright
Hi,

 

This is a bit of a long mail, so apologies in advance; I've tried to
give as much information as possible that I think might be useful
regarding my problem.

 

I have 3 tables where I keep cache records, the structures are something
like

 

  TableA  TableB  TableC

  Id  Id  Id

  ... ... ...

 

The Id is the primary key on all tables and link on this field.

 

  TableA is a 1 to many on TableB which is a 1 to many on TableC

 

To give you an idea of size, TableA has 8,686,769 rows, TableB has
56,322,236 rows and TableC has 1,089,635,551 rows.

 

Whether data is to be expired/deleted is based on some rule about a date
in TableA.

 

To decide this I run a SELECT query to get all Ids from TableA that
have expired.

This query takes a minute or two to run.

 

I then create a loop for all Id's doing

 

  delete from TableC where id = <expired Id>

  delete from TableB where id = <expired Id>

  delete from TableA where id = <expired Id>

 

I am grouping 10,000 loops into a transaction.

 

My expiry runs on a daily basis and deletes approximately this number of
rows from each table.

 

  TableA500,000

  TableB  4,836,560

  TableC 71,534,549

 

I'm starting to run into problems where my expiry process is now taking
longer than 24 hours to complete.

Looking at system stats, the CPU is only at 33% and memory usage is normal,
so it looks to me like I/O is the bottleneck.

 

I'm wondering how I can improve the performance.

 

Firstly, the system I have is

 

  Redhat Enterprise ES rel 3 32-bit

  2 x Dual Core CPU

  16Gb Ram

  1 x 160Gb 10k SCSI drive boot drive

  5 x 160gb 10k SCSI drives RAID Level 1 all combined as 1 partition

 

I have 2 of these in a master/slave config, writes are to the master and
reads to the slave.

MySQL version is 4.1.12 and all tables are InnoDB.  File Systems are all
ext3.

 

 

I have some basic questions

 

1.  Would upgrading to a later MySQL version give any performance
improvements?
2.  If I could split the tables up into smaller tables would this
help ?  My dilemma here is that I can split the data, the data would be
in different tables but on the same hardware, the same number of deletes
would still have to happen so would it actually make any difference ?
3.  Would an alternative file system help, i.e. raw disk ?

 

I think I am limited by running on a 32-bit Linux OS: I am unable to take
advantage of the 16 GB of memory in the hardware, and if I try to allocate
more than 1.6 GB to my buffer pool, MySQL will not start. I seem to recall
this is due to a glibc/malloc issue. If I were to move to 64-bit, I believe
this would go away.

 

I'd be grateful for any advice or suggestions with this.

 

Cheers,

 

Marvin

 

 



Re: delimiter

2006-10-19 Thread João Cândido de Souza Neto
I tried to escape the pipe character but it does not work.

I shall try to explain better what is happening.

I have a .sql file which creates my whole database (including triggers).

When I run it through phpMyAdmin it fails; then I tried to run the part of my
.sql file which creates a trigger at a mysql prompt, and it works fine.

Is there some problem with using the DELIMITER command from a PHP script?
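That may well be the issue: DELIMITER is a command of the interactive mysql client, not an SQL statement the server understands, so a script that pushes the file through an API usually has to split the statements itself and send the CREATE TRIGGER body as a single statement with no DELIMITER lines. For reference, a minimal sketch of how it is normally used in the client, with a purely hypothetical trigger:

DELIMITER |
CREATE TRIGGER set_created BEFORE INSERT ON some_table
FOR EACH ROW
BEGIN
  SET NEW.created = NOW();
END|
DELIMITER ;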



""Dan Buettner"" <[EMAIL PROTECTED]> escreveu na mensagem 
news:[EMAIL PROTECTED]
João, the pipe character | often needs to be escaped, usually with a
backslash like so: \|

Can you post the command you are running and the error you get?

Dan


On 10/19/06, João Cândido de Souza Neto <[EMAIL PROTECTED]> wrote:
> Hi everyone.
>
> I keep getting a syntax error when I try to execute DELIMITER | on my
> MySQL 5 server.
>
> Could anyone help me about it?
>
> --
> João Cândido de Souza Neto
> Curitiba Online
> [EMAIL PROTECTED]
> (41) 3324-2294 (41) 9985-6894
> http://www.curitibaonline.com.br
>
>
>
> --
> MySQL General Mailing List
> For list archives: http://lists.mysql.com/mysql
> To unsubscribe:http://lists.mysql.com/[EMAIL PROTECTED]
>
> 



-- 
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/[EMAIL PROTECTED]



Re: mysqlclient in Apache

2006-10-19 Thread Joshua J. Kugler
On Wednesday 18 October 2006 20:44, Dan Nelson wrote:
> In the last episode (Oct 18), Danny Swarzman said:
> > I'm developing an Apache module that uses mySQL. It needs to be able
> > to talk to a remote host.
> >
> > I'm doing this in a Mac.
> >
> > I have a simple program in C that calls mysql_real_connect(). It
> > works with a remote host and with the localhost.
> >
> > When I put the same code into my Apache module, the call to
> > mysql_real_connect() fails.
> >
> > Anyone have any idea why this is happening?
>
> In the last episode (Oct 18), Danny Swarzman said:
> > I posted a question about running mysql in an Apache module. Maybe I
> > need a list with a different focus. Please suggest.
>
> This list is fine; you just need to include more detail, I think.  Like
> the error code you get from mysql_real_connect(), for starters.

You might want to look at the source code for mod_auth_mysql (apache module) 
for some pointers on doing stuff like that.  Hope that helps.

j

-- 
Joshua Kugler   
Lead System Admin -- Senior Programmer
http://www.eeinternet.com
PGP Key: http://pgp.mit.edu/  ID 0xDB26D7CE
PO Box 80086 -- Fairbanks, AK 99708 -- Ph: 907-456-5581 Fax: 907-456-3111

-- 
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/[EMAIL PROTECTED]



Re: delimiter

2006-10-19 Thread Dan Buettner

João, the pipe character | often needs to be escaped, usually with a
backslash like so: \|

Can you post the command you are running and the error you get?

Dan


On 10/19/06, João Cândido de Souza Neto <[EMAIL PROTECTED]> wrote:

Hi everyone.

I keep getting a syntax error when I try to execute DELIMITER | on my MySQL
5 server.

Could anyone help me with it?

--
João Cândido de Souza Neto
Curitiba Online
[EMAIL PROTECTED]
(41) 3324-2294 (41) 9985-6894
http://www.curitibaonline.com.br



--
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/[EMAIL PROTECTED]




--
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/[EMAIL PROTECTED]



delimiter

2006-10-19 Thread João Cândido de Souza Neto
Hi everyone.

I keep getting a syntax error when I try to execute DELIMITER | on my MySQL
5 server.

Could anyone help me with it?

-- 
João Cândido de Souza Neto
Curitiba Online
[EMAIL PROTECTED]
(41) 3324-2294 (41) 9985-6894
http://www.curitibaonline.com.br 



-- 
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/[EMAIL PROTECTED]



Re: InnoDB messup

2006-10-19 Thread George-Cristian Bîrzan
On Thu, Oct 19, 2006 at 11:03:11AM -0500, Mazur Worden, Kathy wrote:
> I had this problem just this morning and wound up fixing it by changing
> the innodb data and log file directories to new folders in the .cnf
> file.  This enabled to server to start up (new data and log files were
> created) and I then restored the data from a dump.

Well, the thing is... There is no dump. Or rather, what we have is
rather old... :-P

-- 
George-Cristian Bîrzan
Network Engineer
___
RCS & RDS Constanta
Tel.: +40341.400.401 / +40341.400.402
Fax: +40341.400.450
http://www.rcs-rds.ro
___

Privileged/Confidential Information may be contained in this message. If
you are not the addressee indicated in this message (or responsible for
delivery of the message to such person), you may not copy or deliver
this message to anyone. In such a case, you should destroy this message
and kindly notify the sender by reply e-mail.

-- 
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/[EMAIL PROTECTED]



Re: MySQL 5.1

2006-10-19 Thread Sid Lane

any update on the 5.1 general release date?  is it still on target for Q4 -
Q1?  any narrower window?

On 8/30/06, Colin Charles <[EMAIL PROTECTED]> wrote:


Logan, David (SST - Adelaide) wrote:

Hi!

> Does anybody have any idea when 5.1 may come to General Release? I am
> particularly interested in MySQL Cluster as I have several databases
> (around 50) totalling 26Gb and would like to consider moving to this
> version because of the Cluster Disk Data Storage and the fact the
> current hardware doesn't have the RAM requirements to hold these
> databases.

I expect late (Q4/2006) to early (Q1/2007) as a good time for release.

> I plan to store the data on a SAN and was wondering if anybody had any
> experience with this as yet?

That's why you're after 5.1, then: so you get disk-based rather than
memory-only storage.
--
Colin Charles, Community Engineer
MySQL AB, Melbourne, Australia, www.mysql.com
Office: [EMAIL PROTECTED] / +46 18 174 400 ext. 4528
Mobile: +614 12 593 292 / Skype: colincharles

MySQL Forge: http://forge.mysql.com/

--
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/[EMAIL PROTECTED]




Re: CPUs not being utilized

2006-10-19 Thread James Neff
I don't know the MailWatch application either but I'll share my 
experience with a multi-threaded application that uses MySQL.


I wrote a Java app that uses several threads to process data (we're 
validating a set of data according to our business rules).


Each thread has its own connection to the database, and MySQL spawns a 
thread to process each connection's requests.  If the application you're 
using only connects to the database with one connection object, 
then the MySQL server will only spawn one thread to service its 
requests.  To take advantage of multi-threading, your client (MailWatch in 
this case) has to know it can use multiple connections to make it go 
faster.  If it is not capable of doing that, then you're stuck with MySQL 
sitting at 98% idle most of the time. 
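One quick way to see this from the server side (each client connection appears as its own row/thread; if only one row is ever doing work, the client is effectively single-connection):

SHOW PROCESSLIST;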

The term used is "connection pooling".  See if MailWatch has a feature 
called that, or maybe you can find a comparable product that does.  Or, 
if you're feeling ambitious and MailWatch is open source, fix it so it does 
use connection pools, then send the code in for review. 

Even on a single-core, single-CPU system, a few threads will 
increase performance slightly.  Many more threads will start to slow 
it down due to overhead. 

In my Java application, each thread must instantiate its own Connection 
object.  If I try to use the same Connection object across threads, then 
they all compete for its use and I really don't see any gain in 
performance to the database.


Good luck!

--James





  

-Original Message-
From: Yoloits [mailto:[EMAIL PROTECTED]
Sent: Thursday, October 19, 2006 1:25 PM
To: mysql@lists.mysql.com
Subject: CPUs not being utilized

I am running MailWatch with mysql 4.1.20 with Red Hat
Enterprise 4 on a dual XEON 3.2 with 4 GB mem.  I have the
smp kernel loaded so both CPUs are seen.  They show up as 4
CPUs.  That is the only application the server runs.  I moved
to this server so I could increase the query speed that was
quite slow which is slightly better but it should be faster.
It is not using all the CPUs.

When I run a query, one of the 4 CPUs is pegged (0% idle) and
the other 3 remain at 98 to 100% idle.  Can MySQL utilize all
the CPUs?  Or is this a limitation of the application?  How
can I check to see what the problem is?  Have I missed a
setting in mysql?

Any help or guidance would be appreciated.

Jay Ehrhart
--
This message has been scanned for viruses
and dangerous content by the
Yolo County Office of Education MailScanner,
and is believed to be clean.








  


RE: CPUs not being utilized

2006-10-19 Thread Jerry Schwartz
I don't know your application at all. That being said, if you are running a
single-threaded application I don't see how MySQL could have more than one
thread running on one CPU.

Regards,

Jerry Schwartz
Global Information Incorporated
195 Farmington Ave.
Farmington, CT 06032

860.674.8796 / FAX: 860.674.8341


> -Original Message-
> From: Yoloits [mailto:[EMAIL PROTECTED]
> Sent: Thursday, October 19, 2006 1:25 PM
> To: mysql@lists.mysql.com
> Subject: CPUs not being utilized
>
> I am running MailWatch with mysql 4.1.20 with Red Hat
> Enterprise 4 on a dual XEON 3.2 with 4 GB mem.  I have the
> smp kernel loaded so both CPUs are seen.  They show up as 4
> CPUs.  That is the only application the server runs.  I moved
> to this server so I could increase the query speed that was
> quite slow which is slightly better but it should be faster.
> It is not using all the CPUs.
>
> When I run a query one of the 4 CPUs is pegged to zero and
> the other 3 remain at 98 to 100% idle.  Can mysql utilize all
> the CPUs?  Or is this a limitation of the application?  How
> can I check to see what the problem is?  Have I missed a
> setting in mysql?
>
> Any help or guidance would be appreciated.
>
> Jay Ehrhart
> --
> This message has been scanned for viruses
> and dangerous content by the
> Yolo County Office of Education MailScanner,
> and is believed to be clean.
>
>




-- 
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/[EMAIL PROTECTED]



CPUs not being utilized

2006-10-19 Thread Yoloits
I am running MailWatch with mysql 4.1.20 with Red Hat Enterprise 4 on a dual 
XEON 3.2 with 4 GB mem.  I have the smp kernel loaded so both CPUs are seen.  
They show up as 4 CPUs.  That is the only application the server runs.  I moved 
to this server so I could increase the query speed that was quite slow which is 
slightly better but it should be faster.  It is not using all the CPUs.

When I run a query, one of the 4 CPUs is pegged (0% idle) and the other 3 remain 
at 98 to 100% idle.  Can MySQL utilize all the CPUs?  Or is this a limitation 
of the application?  How can I check to see what the problem is?  Have I missed 
a setting in mysql?

Any help or guidance would be appreciated.

Jay Ehrhart
-- 
This message has been scanned for viruses
and dangerous content by the 
Yolo County Office of Education MailScanner, 
and is believed to be clean.



Re: How can I do a SELECT without locking the table against updates?

2006-10-19 Thread James Neff
I have this problem too on large tables.  Currently my table is 5 
million or so but could easily grow to 10 to 50 times that.


I ended up adding a field called Status, and then adding an index for 
the primary key and status fields.


Then I do my UPDATEs or DELETEs for a particular status value using the 
LIMIT clause.  If I use a LIMIT of, say, a few thousand rows, the operation is 
very quick, and I just keep running it until the number of affected rows 
returned is zero.  But without the STATUS and LIMIT in my SQL, it 
chokes after trying for several hours and I get the record-locks error.
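A sketch of that batched pattern with hypothetical table, column and batch values; rerun it until it reports zero affected rows:

DELETE FROM big_table
 WHERE status = 'expired'
 LIMIT 10000;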


--James


Brent Baisley wrote:

You would need to convert the table to InnoDB as George mentioned.

Alternatively, break up your SELECT into many smaller selects. If your 
query is running slow now, it's only going to get slower and slower as 
your table grows. I've noticed with MySQL that large result queries 
don't slow down linearly. You will probably find that the many smaller 
queries actually shorten the entire time it takes.


As an example, I've got a process that merges a 6 million row table 
with a 300 million row table on a daily basis. It merges by selecting 
records within a 20 minute time period (there is a certain amount of 
comparisons involved in the merging).  The tables have to be MyISAM 
because we are using MERGE tables.
It used to take 9 hours with much smaller data sets. Breaking it up it 
now takes about 80 minutes with triple the amount of data. Each query 
only takes a few seconds, so locking isn't a big issue. Insert/Updates 
complete between the selects. I also don't have to worry about MySQL 
running out of memory and swapping to disk regardless of how big the dataset 
gets.


- Original Message - From: "Jon Ribbens" 
<[EMAIL PROTECTED]>

To: 
Sent: Thursday, October 19, 2006 10:06 AM
Subject: How can I do a SELECT without locking the table against updates?



I have a simple single-table SELECT query that takes several
minutes to complete, due to a very large number of result rows being
involved. I don't think there is any way to optimise the query - MySQL
is already using the appropriate index etc, it's just a huge table and
the query has a large result set.

While the SELECT is executing, any INSERT/UPDATE queries on the table
are blocked.

Is there any way I can tell MySQL *not* to lock the table while the
SELECT is executing? I know this could lead to ambiguities, but in
this application it doesn't matter - if, for example, a new matching
row is added after the SELECT starts but before it finishes, it simply
doesn't matter whether that row is returned in the results or not.

If there is no way to do this in MySQL, does anyone have any
innovative suggestions as to how I could work around the problem? The
table is very large, so solutions involving multiple copies are tricky
to implement.

--
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:
http://lists.mysql.com/[EMAIL PROTECTED]







--
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/[EMAIL PROTECTED]



Re: £ (Great Britain Pound) breaks MySQL insert and update - disastrous overwrite

2006-10-19 Thread Paul Warner

>Turn on the querylog, and see what query mysql is actually getting.

Thanks, Gerald!  Indeed the query with £ signs is truncated - mysql isn't 
getting the last two characters (in this case it should be 424).  If one more 
character were taken off, it would overwrite the table.

update advert set adverttitle='jobs pages', mainbody='idiotic stuff £50605 
blahblahblah...', forfurtherinfo='contact Paul Warner, MISD
editing this text.', finalcopy='RESEARCH ARTISTManagement 
Information Services Division
Salary: £28,930.00-£50,219.00 paidiotic stuff £50605 
blahblahblah...
contact Paul Warner, MISD
editing this text.
Quote Reference: AF00428, Closing Date: 20 September 2006' where advertid=4


Whereas, if I remove the £ signs, the query is whole and updates fine:

 update advert set adverttitle='jobs pages', mainbody='idiotic stuff 50605 
blahblahblah...', forfurtherinfo='contact Paul Warner, MISD
editing this text.', finalcopy='RESEARCH ARTISTManagement 
Information Services Division
Salary: £28,930.00-£50,219.00 paidiotic stuff 50605 
blahblahblah...
contact Paul Warner, MISD
editing this text.
Quote Reference: AF00428,Closing Date: 20 September 2006' where advertid=424

Why, when the £ sign is in the middle of the query, does the end get chopped off?  
I reiterate that the update statement in the Java application log is always 
whole and complete, as in the second version.  So it isn't truncated until 
after it is passed to JDBC...

Thanks,
Paul






--
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/[EMAIL PROTECTED]



Re: merging mysql databases

2006-10-19 Thread Dan Buettner

imre -

If you are using MyISAM tables, you might be able to employ the
'mysqlhotcopy' script to snapshot the data tables, then transfer them
to your merge/analysis application db.  You'd need to make sure to
avoid filename overlaps, and make sure to get permissions correct.  In
my experience once you transfer the MyISAM files into a database
directory with proper permissions, they are ready to be used.

Another possibility if you're using any table format would be
mysqldump.  Dump the tables from your devices and then pipe the dumps
into your application db.

Dan

On 10/19/06, [EMAIL PROTECTED] <[EMAIL PROTECTED]> wrote:

Hi,

I am developing some devices that gather and process data using
MySQL (embedded). Is there any easy way to merge the databases from these
devices into a single database?

I could easily deal with the semantics of the merge at the SQL level. What I
don't know is how my merge/analysis application can see the databases copied
from the devices. (I'd rather not go through CSV exports if possible ...)

Any ideas?

Thx

ImRe

P.S.: Ver 5.0.24a



--
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/[EMAIL PROTECTED]




--
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/[EMAIL PROTECTED]



Re: InnoDB messup

2006-10-19 Thread Mazur Worden, Kathy
I had this problem just this morning and wound up fixing it by changing
the innodb data and log file directories to new folders in the .cnf
file.  This enabled the server to start up (new data and log files were
created) and I then restored the data from a dump.

Kathy Mazur Worden
>
> > -Original Message-
> > From: Dan Buettner [mailto:[EMAIL PROTECTED]
> > Sent: Wednesday, October 18, 2006 2:16 PM
> > To: George-Cristian Bîrzan
> > Cc: mysql@lists.mysql.com
> > Subject: Re: InnoDB messup
> >
> > Can you just put the files back where they were originally?
> > Ordinarily that would be in the path set up in mysql - see SHOW
> > VARIABLES LIKE "datadir"
> >
> > You do need to have your ibdata* files and ib_logfile* files all in
> > there, assuming you weren't using the file-per-table setup (if you
> > were then I am not sure; haven't tried that one yet myself).
> >
> > I'd shut down the server process, move everything into place,
> > double-check permissions on the files, and then start the mysqld
> > server process back up.

Hm, I did that, that's what I get after doing it.

--
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/[EMAIL PROTECTED]



Re: £ (Great Britain Pound) breaks MySQL insert and update - disastrous overwrite

2006-10-19 Thread Gerald L. Clark

Paul Warner wrote:

Ok, here is an update.  I have now switched everything to UTF-8, database, 
application, jsp page.  I have added a tomcat filter that sets the request and 
response encoding to UTF-8  presumably before anything else sees the request 
(it seems to write to the log 10 times for each request).  I added the 
following code that processes all the fields before submitting to the database 
for update or insert.

Charset utfCharset = Charset.forName("UTF-8");
CharsetDecoder decoder = utfCharset.newDecoder();
CharsetEncoder encoder = utfCharset.newEncoder();
try {
  ByteBuffer bbuf = encoder.encode(CharBuffer.wrap(advertArr[i]));
  CharBuffer cbuf = decoder.decode(bbuf);
   advertArr[i] = cbuf.toString();
   //log.info(" after conversion: " + advertArr[i]);
} catch ( CharacterCodingException cce ) {
   log.info("character coding exception: " + cce.getMessage());
}

I tested as I added each of these elements.  The problem persists.  Same 
errors.  Same massive, horrific overwrite of the table on every update.  AND 
the data displayed in the browser has a garbage character in front of the £ 
sign.  Why  All the books and internet pages I've looked at say this stuff 
should work.

By fiddling with the encoding settings I did achieve something.  Before 
switching everything to UTF-8, I explicitly set everything to ISO-8859-1 (since 
the £ sign is definitely included in ISO-8859-1), and for a while the update 
refused to do any rows at all, rather than overwriting them all.

Again, the update and insert statements work when pasted into the mysql command 
line utility.  A perl script brings up the data nicely, and does an update 
including a £ sign to just one row without a problem.

So, has anyone ever encountered this??  I'm going mad.


Turn on the querylog, and see what query mysql is actually getting.


--
Gerald L. Clark
Supplier Systems Corporation

--
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/[EMAIL PROTECTED]



Re: How can I do a SELECT without locking the table against updates?

2006-10-19 Thread Brent Baisley

You would need to convert the table to InnoDB as George mentioned.

Alternatively, break up your SELECT into many smaller selects. If your query is running slow now, it's only going to get slower and 
slower as your table grows. I've noticed with MySQL that large result queries don't slow down linearly. You will probably find that 
the many smaller queries actually shorten the entire time it takes.


As an example, I've got a process that merges a 6 million row table with a 300 million row table on a daily basis. It merges by 
selecting records within a 20 minute time period (there is a certain amount of comparisons involved in the merging).  The tables 
have to be MyISAM because we are using MERGE tables.
It used to take 9 hours with much smaller data sets. Breaking it up it now takes about 80 minutes with triple the amount of data. 
Each query only takes a few seconds, so locking isn't a big issue. Insert/Updates complete between the selects. I also don't have to 
worry about MySQL running out of memory and swapping to disk regardless of how big the dataset gets.


- Original Message - 
From: "Jon Ribbens" <[EMAIL PROTECTED]>

To: 
Sent: Thursday, October 19, 2006 10:06 AM
Subject: How can I do a SELECT without locking the table against updates?



I have a simple single-table SELECT query that takes several
minutes to complete, due to a very large number of result rows being
involved. I don't think there is any way to optimise the query - MySQL
is already using the appropriate index etc, it's just a huge table and
the query has a large result set.

While the SELECT is executing, any INSERT/UPDATE queries on the table
are blocked.

Is there any way I can tell MySQL *not* to lock the table while the
SELECT is executing? I know this could lead to ambiguities, but in
this application it doesn't matter - if, for example, a new matching
row is added after the SELECT starts but before it finishes, it simply
doesn't matter whether that row is returned in the results or not.

If there is no way to do this in MySQL, does anyone have any
innovative suggestions as to how I could work around the problem? The
table is very large, so solutions involving multiple copies are tricky
to implement.

--
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/[EMAIL PROTECTED]




--
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/[EMAIL PROTECTED]



Re: £ (Great Britain Pound) breaks MySQL insert and update - disastrous overwrite

2006-10-19 Thread Paul Warner
Ok, here is an update.  I have now switched everything to UTF-8, database, 
application, jsp page.  I have added a tomcat filter that sets the request and 
response encoding to UTF-8  presumably before anything else sees the request 
(it seems to write to the log 10 times for each request).  I added the 
following code that processes all the fields before submitting to the database 
for update or insert.

Charset utfCharset = Charset.forName("UTF-8");
CharsetDecoder decoder = utfCharset.newDecoder();
CharsetEncoder encoder = utfCharset.newEncoder();
try {
  ByteBuffer bbuf = encoder.encode(CharBuffer.wrap(advertArr[i]));
  CharBuffer cbuf = decoder.decode(bbuf);
   advertArr[i] = cbuf.toString();
   //log.info(" after conversion: " + advertArr[i]);
} catch ( CharacterCodingException cce ) {
   log.info("character coding exception: " + cce.getMessage());
}

I tested as I added each of these elements.  The problem persists.  Same 
errors.  Same massive, horrific overwrite of the table on every update.  AND 
the data displayed in the browser has a garbage character in front of the £ 
sign.  Why  All the books and internet pages I've looked at say this stuff 
should work.

By fiddling with the encoding settings I did achieve something.  Before 
switching everything to UTF-8, I explicitly set everything to ISO-8859-1 (since 
the £ sign is definitely included in ISO-8859-1), and for a while the update 
refused to do any rows at all, rather than overwriting them all.

Again, the update and insert statements work when pasted into the mysql command 
line utility.  A perl script brings up the data nicely, and does an update 
including a £ sign to just one row without a problem.

So, has anyone ever encountered this??  I'm going mad.

Thanks,
Paul

- Original Message 
From: Paul Warner <[EMAIL PROTECTED]>
To: mysql@lists.mysql.com
Sent: Thursday, October 19, 2006 11:03:15 AM
Subject: £ (Great Britain Pound) breaks MySQL insert and update - disastrous 
overwrite

Hello,

I have a Java Struts application running on Tomcat 5.  I have mysql 5 installed 
on redhat EL 3.  When a user enters text with a £ sign (Great Britain Pound) in 
the browser and clicks enter, any insert or update statement apparently gets 
truncated in mysql.  The insert reliably breaks and returns this error:

db exception thrown: java.sql.SQLException: Syntax error or access violation 
message from server: "You have an error in your SQL syntax; check the manual 
that corresponds to your MySQL server version for the right syntax to use near 
'' at line 1"

I am writing the insert statement to STDERR before and after the insert, and in 
the java app, it is fine.  The insert statement in the app is not truncated or 
messed up, and if I copy and paste it from the log into the mysql command line 
client, it does the insert without a problem, *with* the pound signs (£)!  But 
from the browser, it will not update and throws the error, every time.

If I enter some text in the browser without pound signs, and do the insert, it 
goes in fine.  Then if I update that text, after entering a pound sign, the 
update OVERWRITES EVERY ROW in the table!  In other words, the where statement 
in the update must be truncated  inside mysql such that the id is removed 
(should be "where advertid = 887").  Again, I am writing the update statement 
to STDERR, and in Java it is fine.  If I copy it from there into the mysql 
command line client, it updates just one row perfectly well.

The character set in the database table is latin1.  The character set in the 
browser is ISO-8859-1, and the character set in Tomcat and in my application 
are all the default.

Samples and info below.  Thanks for any help on this urgent problem!!

Thanks,
Paul

Table info:


DROP TABLE IF EXISTS `advert`;
CREATE TABLE `advert` (
  `AdvertID` int(11) NOT NULL auto_increment,
  `VacancyID` int(11) NOT NULL default '0',
  `AdvertTitle` varchar(100) NOT NULL default '',
  `MainBody` mediumtext,
  `ForFurtherInfo` mediumtext,
  `FinalCopy` mediumtext,
  `InstructionsForAgency` mediumtext,
  `PublicationReleaseDate` datetime default NULL,
  `BlueSheetDate` datetime default NULL,
  `Created` datetime NOT NULL default '0000-00-00 00:00:00',
  `Updated` timestamp NOT NULL default CURRENT_TIMESTAMP on update 
CURRENT_TIMESTAMP,
  UNIQUE KEY `AdvertID` (`AdvertID`),
  KEY `IDX_VacancyID` (`VacancyID`),
  KEY `IDX_PublicationReleaseDate` (`PublicationReleaseDate`),
  KEY `IDX_BlueSheetDate` (`BlueSheetDate`)
) ENGINE=MyISAM DEFAULT CHARSET=latin1;

 
--
-- Dumping data for table `advert`
--


sample insert statement (works in MySQL command line client, bombs from the 
browser) from STDERR:

insert into advert (vacancyid, adverttitle, mainbody, forfurtherinfo, 
finalcopy, instructionsforagency, created) values (884,'test3', 'Computer 
Officer (Part-time)
Department

RE: Hungarian collation

2006-10-19 Thread imre
> From: Peter Gulutzan <[EMAIL PROTECTED]>
> 
> MySQL is looking for an authoritative, official statement which states 
> all the current Hungarian collation rules.

According to the Reference Level Description of the Hungarian language (ISBN
9634206441, or the Hungarian version online:
http://bme-tk.bme.hu/other/kuszob/hangok.htm ) the rules are 
the following:

- The basic order of the alphabet is a á b c cs d dz dzs e é f g gy h i í j
k l ly m n ny o ó ö ő p q r s sz t ty u ú ü ű v w x y z zs
- For the short-long vowel pairs (a á, e é, i í, o ó, ö ő, u ú, ü ű)  long =
short usually, but long > short if all else 
is equal. E.g., kád < kar < kár < kard
- The long double consonants sort as if they had been expanded, i.e., ggy as
gygy, nny as nyny
- Compound words are sorted according to their word parts, i.e., meggyújt < meglát
< megy < meggy

An alternative collation sometimes used (in libraries, and in some dictionaries
and lexica) follows the basic Latin alphabet, with the accented
letters having the same value as the unaccented ones.  Or anything in between,
e.g., honoring the digraphs and the trigraph but leaving the accents out of
the picture.
 
I hope this helps.

ImRe



--
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/[EMAIL PROTECTED]



Re: How can I do a SELECT without locking the table against updates?

2006-10-19 Thread mos

At 09:06 AM 10/19/2006, you wrote:

I have a simple single-table SELECT query that takes several
minutes to complete, due to a very large number of result rows being
involved. I don't think there is any way to optimise the query - MySQL
is already using the appropriate index etc, it's just a huge table and
the query has a large result set.

While the SELECT is executing, any INSERT/UPDATE queries on the table
are blocked.

Is there any way I can tell MySQL *not* to lock the table while the
SELECT is executing? I know this could lead to ambiguities, but in
this application it doesn't matter - if, for example, a new matching
row is added after the SELECT starts but before it finishes, it simply
doesn't matter whether that row is returned in the results or not.

If there is no way to do this in MySQL, does anyone have any
innovative suggestions as to how I could work around the problem? The
table is very large, so solutions involving multiple copies are tricky
to implement.


Of course, as George mentioned, InnoDB is the best way to go. Another 
alternative is to use "INSERT DELAYED" 
http://dev.mysql.com/doc/refman/5.0/en/insert-delayed.html provided all 
your inserts are delayed (otherwise you could get corruption).


If you have optimized the table and there are no deleted rows, the inserts 
will not need to do any locking at all.
As for updates, make sure you are using an index on the update, preferably 
the primary key, to identify the row, and update several rows in one SQL 
statement.
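Sketches of both suggestions, with hypothetical table and column names (DELAYED works with MyISAM, not InnoDB: the client returns immediately and the row is queued until the table is free):

INSERT DELAYED INTO log_table (msg, created) VALUES ('hello', NOW());

UPDATE log_table SET processed = 1 WHERE id IN (101, 102, 103);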


Mike 


--
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/[EMAIL PROTECTED]



RE: Re: CREATE TABLE Inv_Id

2006-10-19 Thread Jerry Schwartz
How about the UUID() function? As I recall, the differences between a UUID
and Microsoft's GUID are trivial.

INSERT INTO Inv_Id SET id = UUID();
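A minimal sketch around that suggestion, with a hypothetical column definition; UUID() returns a 36-character string, so CHAR(36) is a workable GUID-style key:

CREATE TABLE Inv_Id (
  id CHAR(36) NOT NULL PRIMARY KEY
);
INSERT INTO Inv_Id SET id = UUID();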

Regards,

Jerry Schwartz
Global Information Incorporated
195 Farmington Ave.
Farmington, CT 06032

860.674.8796 / FAX: 860.674.8341


> -Original Message-
> From: Martijn Tonies [mailto:[EMAIL PROTECTED]
> Sent: Thursday, October 19, 2006 9:45 AM
> To: Dan Buettner; Scott Hamm
> Cc: Mysql
> Subject: Re: Re: CREATE TABLE Inv_Id
>
>
>
>
> > Scott, what's wrong with 'PRIMARY KEY' ?
>
> A PRIMARY KEY has nothing to do with the "uniqueidentifier" datatype.
>
> A "uniqueidentifier" is a GUID.
>
> Martijn Tonies
> Database Workbench - development tool for MySQL, and more!
> Upscene Productions
> http://www.upscene.com
> My thoughts:
> http://blog.upscene.com/martijn/
> Database development questions? Check the forum!
> http://www.databasedevelopmentforum.com
>
> >
> > On 10/19/06, Scott Hamm <[EMAIL PROTECTED]> wrote:
> > > Wish MySQL would have something like what Microsoft use,
> "uniqueidentifier"
> > > as datatype.
>
>
> --
> MySQL General Mailing List
> For list archives: http://lists.mysql.com/mysql
> To unsubscribe:
> http://lists.mysql.com/[EMAIL PROTECTED]
>
>




-- 
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/[EMAIL PROTECTED]



Re: How can I do a SELECT without locking the table against updates?

2006-10-19 Thread George-Cristian Bîrzan
On Thursday 19 October 2006 17:06, Jon Ribbens wrote:
> I have a simple single-table SELECT query that takes several
> minutes to complete, due to a very large number of result rows being
> involved. I don't think there is any way to optimise the query - MySQL
> is already using the appropriate index etc, it's just a huge table and
> the query has a large result set.
>
> While the SELECT is executing, any INSERT/UPDATE queries on the table
> are blocked.
>
> Is there any way I can tell MySQL *not* to lock the table while the
> SELECT is executing? I know this could lead to ambiguities, but in
> this application it doesn't matter - if, for example, a new matching
> row is added after the SELECT starts but before it finishes, it simply
> doesn't matter whether that row is returned in the results or not.
>
> If there is no way to do this in MySQL, does anyone have any
> innovative suggestions as to how I could work around the problem? The
> table is very large, so solutions involving multiple copies are tricky
> to implement.

Convert the table to InnoDB, which uses row level locking, as opposed to 
MyISAM which uses table level locking.
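A one-line sketch of that conversion, with a hypothetical table name; note the ALTER itself copies the table and locks it while it runs, so it is best done in a quiet window:

ALTER TABLE big_table ENGINE=InnoDB;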

-- 
George-Cristian Bîrzan
Network Engineer
___
RCS & RDS Constanta
Tel.: +40341.400.401 / +40341.400.402
Fax: +40341.400.450
http://www.rcs-rds.ro
___

Privileged/Confidential Information may be contained in this message. If
you are not the addressee indicated in this message (or responsible for
delivery of the message to such person), you may not copy or deliver
this message to anyone. In such a case, you should destroy this message
and kindly notify the sender by reply e-mail.

--
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/[EMAIL PROTECTED]



How can I do a SELECT without locking the table against updates?

2006-10-19 Thread Jon Ribbens
I have a simple single-table SELECT query that takes several
minutes to complete, due to a very large number of result rows being
involved. I don't think there is any way to optimise the query - MySQL
is already using the appropriate index etc, it's just a huge table and
the query has a large result set.

While the SELECT is executing, any INSERT/UPDATE queries on the table
are blocked.

Is there any way I can tell MySQL *not* to lock the table while the
SELECT is executing? I know this could lead to ambiguities, but in
this application it doesn't matter - if, for example, a new matching
row is added after the SELECT starts but before it finishes, it simply
doesn't matter whether that row is returned in the results or not.

If there is no way to do this in MySQL, does anyone have any
innovative suggestions as to how I could work around the problem? The
table is very large, so solutions involving multiple copies are tricky
to implement.

-- 
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/[EMAIL PROTECTED]



Re: Re: CREATE TABLE Inv_Id

2006-10-19 Thread Martijn Tonies



> Scott, what's wrong with 'PRIMARY KEY' ?

A PRIMARY KEY has nothing to do with the "uniqueidentifier" datatype.

A "uniqueidentifier" is a GUID.

Martijn Tonies
Database Workbench - development tool for MySQL, and more!
Upscene Productions
http://www.upscene.com
My thoughts:
http://blog.upscene.com/martijn/
Database development questions? Check the forum!
http://www.databasedevelopmentforum.com

>
> On 10/19/06, Scott Hamm <[EMAIL PROTECTED]> wrote:
> > Wish MySQL would have something like what Microsoft use,
"uniqueidentifier"
> > as datatype.


-- 
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/[EMAIL PROTECTED]



Re: Re: CREATE TABLE Inv_Id

2006-10-19 Thread Dan Buettner

Scott, what's wrong with 'PRIMARY KEY' ?

Dan

On 10/19/06, Scott Hamm <[EMAIL PROTECTED]> wrote:

Wish MySQL would have something like what Microsoft use, "uniqueidentifier"
as datatype.




--
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/[EMAIL PROTECTED]



RE: bin-log with expire_logs_days

2006-10-19 Thread George Law
Thanks Dan.

According to the docs, the "BEFORE" option was introduced in 4.1.   

I just tried the purge with the "to" option :
 PURGE MASTER LOGS TO 'db1-bin.002';
Query OK, 0 rows affected (0.01 sec)

so I think I will just purge a couple log files at a time until I can
get the disk space down to a more manageable capacity.  
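For reference, the BEFORE form Dan describes (available from 4.1 on) can take a date expression; the 45-day window here is only an example:

PURGE MASTER LOGS BEFORE DATE_SUB(NOW(), INTERVAL 45 DAY);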

The previous DBA had told me that the last time he purged the logs, it
took it several minutes - but I can only assume he tried to purge too
much at once.

Thanks again!

--
George


>-Original Message-
>From: Dan Buettner [mailto:[EMAIL PROTECTED] 
>Sent: Wednesday, October 18, 2006 3:28 PM
>To: George Law
>Cc: mysql@lists.mysql.com
>Subject: Re: bin-log with expire_logs_days
>
>I haven't used the server variable you refer to, but instead have
>always used an external command piped in via cron - PURGE BINARY LOGS
>BEFORE 
>and I just use a DATE_SUB function to subtract X days from 
>today's date.
>http://dev.mysql.com/doc/refman/5.0/en/purge-master-logs.html
>
>It's a pretty quick command to run, generally a fraction of a second.
>Since you have 132 files it might be a few seconds but I would not
>expect longer than that.
>
>I don't know whether MySQL willl go back and delete the old logs if
>you set that variable and restart - presumably it would, but not
>certain.
>
>Dan
>
>
>
>On 10/18/06, George Law <[EMAIL PROTECTED]> wrote:
>> Hi All,
>>
>> I have a **high traffic** mysql 4.0.18-standard-log server 
>running with
>> bin-logging enabled.
>>
>> Right now, this must be using a default setting for 
>expire_log_days.  I
>> do not see this anyway in
>> "show variables" or "show status"
>>
>>
>> $ echo "show variables" | sql |grep bin
>> binlog_cache_size   32768
>> log_bin ON
>> max_binlog_cache_size   4294967295
>> max_binlog_size 1073741824
>>
>>
>> # echo "show status" | sql |grep bin
>> Com_show_binlog_events  0
>> Com_show_binlogs9
>>
>> Right now, I have 132 bin-logs, each at 1 GB. the logs go back to
>> 2/11/2006
>>
>> If I were to add 'expire_logs_days 45' to my.cnf and restart 
>mysql, is
>> mysql going to attempt to purge the logs
>> > 45 days old and if so... how long does it typically take.  
>We cannot
>> afford to restart if its going to take
>> any significant amount of time for it to purge the logs and restart.
>>
>> thanks!
>>
>>
>> George Law
>> [EMAIL PROTECTED]
>> MSN: [EMAIL PROTECTED]
>> Phone: 864-678-3161
>>
>>
>> --
>> MySQL General Mailing List
>> For list archives: http://lists.mysql.com/mysql
>> To unsubscribe:
>http://lists.mysql.com/[EMAIL PROTECTED]
>>
>>
>

--
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/[EMAIL PROTECTED]



Seeking contribution for MySQL Quality Assurance

2006-10-19 Thread Giuseppe Maxia
Hi, all.
MySQL is about to launch yet another contribution project. We are still in the 
planning phase, and I have outlined the
issue in this article:

http://datacharmer.blogspot.com/2006/10/contributing-to-mysql-qa-ideas-wanted.html

Comments welcome!

Giuseppe

-- 
Giuseppe Maxia, QA Developer
MySQL AB, www.mysql.com

-- 
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/[EMAIL PROTECTED]



merging mysql databases

2006-10-19 Thread imre
Hi,

I am developing some devices that gather and process data using
MySQL (embedded). Is there an easy way to merge the databases from these
devices into a single database?

I could easily deal with the semantics of the merge at the SQL level. What I
don't know is how my merge/analysis application can see the databases copied
from the devices. (I'd rather not go through CSV exports if possible ...)

Any ideas?

Thx

ImRe

P.S.: Ver 5.0.24a
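
One approach that avoids CSV, sketched only and with made-up database, table,
and file names (it assumes each device's data can be dumped with mysqldump,
e.g. by pointing a regular mysqld at a copy of the device's data directory):
load each dump into the analysis server under its own schema, then merge with
INSERT ... SELECT.

mysqldump devicedb > device1.sql
mysql -e "CREATE DATABASE device1"
mysql device1 < device1.sql
mysql -e "INSERT INTO central.readings SELECT * FROM device1.readings"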



-- 
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/[EMAIL PROTECTED]



Re: CREATE TABLE Inv_Id

2006-10-19 Thread Scott Hamm

Wish MySQL had something like what Microsoft uses, "uniqueidentifier",
as a datatype.


Re: Which to download

2006-10-19 Thread Ratheesh K J
Hi Jacques,

Thanks for your response.

I can see the NDB Cluster storage engine RPMs under the "Linux x86 generic RPM 
(dynamically linked) downloads" but not under the "Red Hat Enterprise Linux 3 
RPM (x86) downloads". What if the same is needed for our system later on?

What exactly is the difference between the two sets of RPMs? On which systems 
are the "Linux x86 generic RPM (dynamically linked)" packages meant to be installed, then?


Thanks,
Ratheesh K J



- Original Message - 
From: "Jacques Marneweck" <[EMAIL PROTECTED]>
To: "Ratheesh K J" <[EMAIL PROTECTED]>
Cc: 
Sent: Thursday, October 19, 2006 3:16 PM
Subject: Re: Which to download


> Ratheesh K J wrote:
>> Hello all,
>>
>> Just wanted to know what is the difference between
>>
>> Red Hat Enterprise Linux 3 RPM (x86) downloads and
>> Linux x86 generic RPM (dynamically linked) downloads
>>
>> Which one should I download for a RHEL 3 system?
>>   
> Hi Ratheesh,
> 
> Use the RHEL3 one.
> 
> Regards
> --jm
>> Thanks,
>>
>> Ratheesh Bhat K J
>>
>>   
> 
> 
> -- 
> Jacques Marneweck
> http://www.powertrip.co.za/
> http://www.powertrip.co.za/blog/
> 
> #include 

Fw: bin-log with expire_logs_days

2006-10-19 Thread Visolve DB Team

Hi,

For Info about the 'expire-logs-days' bug fix and new release,
http://www.developertutorials.com/mysql-manual/manual_News.html

Thanks
ViSolve DB Team.
- Original Message - 
From: "Visolve DB Team" <[EMAIL PROTECTED]>

To: "George Law" <[EMAIL PROTECTED]>; 
Sent: Thursday, October 19, 2006 4:00 PM
Subject: Re: bin-log with expire_logs_days



Hi,

The system variable expire_logs_days removes the binary logs automatically 
after the given number of days.  The default is 0, which means "no automatic 
removal."  Possible removals happen at startup and at binary log rotation. 
For transactions, it never causes rotation; instead it writes to the memory 
cache.  The autocommit setting and the HAVE_REPLICATION symbol also have an 
impact on expire_logs_days.


As we understand it, for transactions: if the log file size is, say, 100 MB 
and the file is full, then when a new log entry is committed the file's 
content is removed from the beginning until the required space is freed, and 
the new entry is appended at the end (FIFO).


For more information on this variable,
http://bugs.mysql.com/bug.php?id=15580
http://bugs.mysql.com/bug.php?id=7236
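
For reference, on a server version that supports this variable, the my.cnf
form George asks about looks like the following (45 is just an illustrative
value):

[mysqld]
expire_logs_days = 45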


Thanks
ViSolve DB Team.
- Original Message - 
From: "George Law" <[EMAIL PROTECTED]>

To: 
Sent: Thursday, October 19, 2006 12:16 AM
Subject: bin-log with expire_logs_days



Hi All,

I have a **high traffic** mysql 4.0.18-standard-log server running with
bin-logging enabled.

Right now, this must be using a default setting for expire_log_days.  I
do not see this anywhere in
"show variables" or "show status"


$ echo "show variables" | sql |grep bin
binlog_cache_size   32768
log_bin ON
max_binlog_cache_size   4294967295
max_binlog_size 1073741824


# echo "show status" | sql |grep bin
Com_show_binlog_events  0
Com_show_binlogs9

Right now, I have 132 bin-logs, each at 1 GB. The logs go back to
2/11/2006.

If I were to add 'expire_logs_days 45' to my.cnf and restart mysql, is
mysql going to attempt to purge the logs > 45 days old, and if so... how long
does it typically take?  We cannot afford to restart if it's going to take
any significant amount of time for it to purge the logs and restart.

thanks!


George Law
[EMAIL PROTECTED]
MSN: [EMAIL PROTECTED]
Phone: 864-678-3161


--
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe: 
http://lists.mysql.com/[EMAIL PROTECTED]






--
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe: 
http://lists.mysql.com/[EMAIL PROTECTED]





--
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/[EMAIL PROTECTED]



Re: bin-log with expire_logs_days

2006-10-19 Thread Visolve DB Team

Hi,

The system variable expire_logs_days removes the binary logs automatically 
after the given number of days.  The default is 0, which means "no automatic 
removal."  Possible removals happen at startup and at binary log rotation. 
For transactions, it never causes rotation; instead it writes to the memory 
cache.  The autocommit setting and the HAVE_REPLICATION symbol also have an 
impact on expire_logs_days.


As we understand it, for transactions: if the log file size is, say, 100 MB 
and the file is full, then when a new log entry is committed the file's 
content is removed from the beginning until the required space is freed, and 
the new entry is appended at the end (FIFO).


For more information on this variable,
http://bugs.mysql.com/bug.php?id=15580
http://bugs.mysql.com/bug.php?id=7236


Thanks
ViSolve DB Team.
- Original Message - 
From: "George Law" <[EMAIL PROTECTED]>

To: 
Sent: Thursday, October 19, 2006 12:16 AM
Subject: bin-log with expire_logs_days



Hi All,

I have a **high traffic** mysql 4.0.18-standard-log server running with
bin-logging enabled.

Right now, this must be using a default setting for expire_log_days.  I
do not see this anywhere in
"show variables" or "show status"


$ echo "show variables" | sql |grep bin
binlog_cache_size   32768
log_bin ON
max_binlog_cache_size   4294967295
max_binlog_size 1073741824


# echo "show status" | sql |grep bin
Com_show_binlog_events  0
Com_show_binlogs9

Right now, I have 132 bin-logs, each at 1 GB. The logs go back to
2/11/2006.

If I were to add 'expire_logs_days 45' to my.cnf and restart mysql, is
mysql going to attempt to purge the logs > 45 days old, and if so... how long
does it typically take?  We cannot afford to restart if it's going to take
any significant amount of time for it to purge the logs and restart.

thanks!


George Law
[EMAIL PROTECTED]
MSN: [EMAIL PROTECTED]
Phone: 864-678-3161


--
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe: 
http://lists.mysql.com/[EMAIL PROTECTED]






--
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/[EMAIL PROTECTED]



£ (Great Britain Pound) breaks MySQL insert and update - disastrous overwrite

2006-10-19 Thread Paul Warner
Hello,

I have a Java Struts application running on Tomcat 5.  I have mysql 5 installed 
on redhat EL 3.  When a user enters text with a £ sign (Great Britain Pound) in 
the browser and clicks enter, any insert or update statement apparently gets 
truncated in mysql.  The insert reliably breaks and returns this error:

db exception thrown: java.sql.SQLException: Syntax error or access violation 
message from server: "You have an error in your SQL syntax; check the manual 
that corresponds to your MySQL server version for the right syntax to use near 
'' at line 1"

I am writing the insert statement to STDERR before and after the insert, and in 
the java app, it is fine.  The insert statement in the app is not truncated or 
messed up, and if I copy and paste it from the log into the mysql command line 
client, it does the insert without a problem, *with* the pound signs (£)!  But 
from the browser, it will not update and throws the error, every time.

If I enter some text in the browser without pound signs, and do the insert, it 
goes in fine.  Then if I update that text, after entering a pound sign, the 
update OVERWRITES EVERY ROW in the table!  In other words, the WHERE clause 
in the update must be getting truncated inside mysql such that the id is removed 
(it should be "where advertid = 887").  Again, I am writing the update statement 
to STDERR, and in Java it is fine.  If I copy it from there into the mysql 
command line client, it updates just one row perfectly well.

The character set in the database table is latin1.  The character set in the 
browser is ISO-8859-1, and the character set in Tomcat and in my application 
are all the default.

Samples and info below.  Thanks for any help on this urgent problem!!

Thanks,
Paul

Table info:


DROP TABLE IF EXISTS `advert`;
CREATE TABLE `advert` (
  `AdvertID` int(11) NOT NULL auto_increment,
  `VacancyID` int(11) NOT NULL default '0',
  `AdvertTitle` varchar(100) NOT NULL default '',
  `MainBody` mediumtext,
  `ForFurtherInfo` mediumtext,
  `FinalCopy` mediumtext,
  `InstructionsForAgency` mediumtext,
  `PublicationReleaseDate` datetime default NULL,
  `BlueSheetDate` datetime default NULL,
  `Created` datetime NOT NULL default '-00-00 00:00:00',
  `Updated` timestamp NOT NULL default CURRENT_TIMESTAMP on update 
CURRENT_TIMESTAMP,
  UNIQUE KEY `AdvertID` (`AdvertID`),
  KEY `IDX_VacancyID` (`VacancyID`),
  KEY `IDX_PublicationReleaseDate` (`PublicationReleaseDate`),
  KEY `IDX_BlueSheetDate` (`BlueSheetDate`)
) ENGINE=MyISAM DEFAULT CHARSET=latin1;

 
--
-- Dumping data for table `advert`
--


sample insert statement (works in MySQL command line client, bombs from the 
browser) from STDERR:

insert into advert (vacancyid, adverttitle, mainbody, forfurtherinfo, 
finalcopy, instructionsforagency, created) values (884,'test3', 'Computer 
Officer (Part-time)
Department of Social Anthropology

Vacancy Reference No: JF00881  Salary:£24,161-£31,525  pa pro rata

We wish to appoint a half-time Computer Officer to support and further the 
provision of information technology in all areas of the Department of Social 
Anthropology.  etc.
', '', 'TEST JOBManagement Information Services Division
Salary: Computer Officer (Part-time)
Department of Social Anthropology
Vacancy Reference No: JF00881  Salary:£24,161-£31,525  pa pro rata
We wish to appoint a half-time Computer Officer to support and further the 
provision of information technology in all areas of the Department of Social 
Anthropology.  etc.

Quote Reference: AF00884,Closing Date: 25 November 2006', '',now())

sample update statement (updates one row in mysql client, overwrites every row 
in the table from the browser):

update advert set adverttitle='test4', mainbody='Computer Officer (Part-time)
Department of Social Anthropology

Vacancy Reference No: JF00881  Salary:£24,161-£31,525  pa pro rata

We wish to appoint a half-time Computer Officer to support and further the 
provision of information technology in all areas of the Department of Social 
Anthropology. etc.', finalcopy='TEST JOBManagement Information 
Services Division
Salary: Computer Officer (Part-time)
Department of Social Anthropology
Vacancy Reference No: JF00881  Salary:£24,161-£31,525  pa pro rata
We wish to appoint a half-time Computer Officer to support and further the 
provision of information technology in all areas of the Department of Social 
Anthropology. etc.

Quote Reference: AF00884,Closing Date: 25 November 2006' where advertid=887


mysql  Ver 14.12 Distrib 5.0.22, for pc-linux-gnu (i686) using readline 5.0
Linux 2.4.21-47.EL #1 Wed Jul 5 20:46:55 EDT 2006 i686 i686 i386 GNU/Linux
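
One thing that may be worth checking (an assumption based on the symptoms, not
a confirmed diagnosis) is whether the JDBC connection negotiates the same
character set as the table. With Connector/J that is typically done through
connection properties, e.g. (host, port, and database name are placeholders):

jdbc:mysql://localhost:3306/yourdb?useUnicode=true&characterEncoding=ISO-8859-1

It may also help to pass the user-entered text through a PreparedStatement
placeholder instead of concatenating it into the SQL string, so that a stray
byte cannot cut the statement short.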










--
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/[EMAIL PROTECTED]



Re: Which to download

2006-10-19 Thread Jacques Marneweck

Ratheesh K J wrote:

Hello all,

Just wanted to know what is the difference between

Red Hat Enterprise Linux 3 RPM (x86) downloads and
Linux x86 generic RPM (dynamically linked) downloads

Which one should I download for a RHEL 3 system?
  

Hi Ratheesh,

Use the RHEL3 one.

Regards
--jm

Thanks,

Ratheesh Bhat K J

  



--
Jacques Marneweck
http://www.powertrip.co.za/
http://www.powertrip.co.za/blog/

#include 


--
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/[EMAIL PROTECTED]



Which to download

2006-10-19 Thread Ratheesh K J
Hello all,

Just wanted to know what is the difference between

Red Hat Enterprise Linux 3 RPM (x86) downloads and
Linux x86 generic RPM (dynamically linked) downloads

Which one should I download for a RHEL 3 system?

Thanks,

Ratheesh Bhat K J


Re: Import oracle

2006-10-19 Thread Martijn Tonies


> Can anyone tell me how I can import the *.gra (Oracle db file) files into
> Navicat or Access? Please let me know the steps, as I am very new to this field.

Install Oracle.

Pump/transfer metadata.

Pump data.
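
Very roughly, and assuming the classic export/import utilities are usable on
both ends (the connect strings, account, and file name below are placeholders):

exp userid=scott/tiger@old73 full=y file=olddb.dmp
imp userid=scott/tiger@new10g full=y file=olddb.dmp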

Martijn Tonies
Database Workbench - tool for InterBase, Firebird, MySQL, NexusDB, Oracle &
MS SQL Server
Upscene Productions
http://www.upscene.com
My thoughts:
http://blog.upscene.com/martijn/
Database development questions? Check the forum!
http://www.databasedevelopmentforum.com


-- 
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/[EMAIL PROTECTED]