Re: May I create more than 200 databases for one mysqld?

2006-07-10 Thread 古雷
Thanks a lot!
- Original Message - 
From: paul rivers [EMAIL PROTECTED]
To: '古雷' [EMAIL PROTECTED]; mysql@lists.mysql.com
Sent: Monday, July 10, 2006 11:41 AM
Subject: RE: May I create more than 200 databases for one mysqld?


 Yes, you can.
 
 Whether there are disadvantages depends mainly on how you are using mysql.
 
 When there are many databases (an order of magnitude or more than what you
 propose), some people report that show databases can be slow.  
 
 Otherwise, the disadvantages are mainly administrative.  For example, do you
 find permissions and accounts too messy for that many databases?  When you
 are trying to troubleshoot activity in one database by examining the client
 log, is it too noisy due to the other databases?  Do you find it too
 burdensome to upgrade that many databases from one version to the next?
 
 Since mysql is threaded, sometimes it makes sense to run multiple mysqld
 processes on the same machine.  But again, this is more an administrative
 decision than a technical one.  200+ databases is certainly well within
 practical mysqld limits.
 
 Regards,
 Paul
 
 -Original Message-
 From: 古雷 [mailto:[EMAIL PROTECTED] 
 Sent: Sunday, July 09, 2006 8:27 PM
 To: mysql@lists.mysql.com
 Subject: May I create more than 200 databases for one mysqld?
 
 May I create more than 200 databases for one mysqld?
 And is there any disadvantage when there're many databases on one mysqld?
 
 Thanks.
 
 Regards,
 
 Gu Lei


Re: How to look for balanced parenthesis?

2006-07-10 Thread Joerg Bruehe

Hi Mike, all!


mos wrote:
I have a complicated SQL statement with around a dozen
if(this,val1,val2) expressions embedded in it, and there are even nested If
clauses. I'm getting syntax errors because I'm not balancing the ( )
properly. Is there any free software out there for Windows that I can
copy and paste the SQL statement into that will show me where the
parentheses are unbalanced? It doesn't have to know about SQL; I just need it
to either highlight the parenthesis range or show me where the parentheses
become unbalanced.


I cannot give a list of editors that will do it,
but I know that vim (vi improved) does it.
It will also do syntax highlighting, but I do not know whether it knows 
about SQL already or would need an additional SQL syntax description.
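(A small usage note, in case it helps: in vim, placing the cursor on any
parenthesis and pressing % jumps to its matching partner, and :syntax on
turns on highlighting; whether SQL gets highlighted depends on an sql.vim
syntax file being installed with your vim.)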


vim is available freely (both free of charge and free open source) 
for Unix + Windows from here:

   http://www.vim.org/

It has several other nice features which take some time to learn, such as
folding (hiding inner structure elements from display), and it includes
vimdiff (a tool to compare two file versions side by side).


Whether you like vim (as I do) or not is a different issue; flame wars
about editors are famous.


I admit that the way you use vim is different from many Windows tools;
it is definitely not point-and-click.

There is also a gvim, more GUI-style, which I cannot comment on.

So you have to make your own choice.


HTH,
Jörg

--
Joerg Bruehe, Senior Production Engineer
MySQL AB, www.mysql.com
Office:  (+49 30) 417 01 487 VoIP: [EMAIL PROTECTED]

--
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/[EMAIL PROTECTED]



Re: Cumulative Totals

2006-07-10 Thread Gabriel PREDA

cumulative total index
...
  SELECT SUM(amount), DATE_FORMAT(`paymentDate`, '%Y-%m')
  FROM `payments`
  GROUP BY DATE_FORMAT(payments.date, '%Y-%m')
.

  Is there any way to get a running cumulative total directly from mysql?
  Something like:
  
  amount | paymentDate
  200| 2005-01
  258| 2005-02


Will WITH ROLLUP do what you want ?

SELECT SUM(amount), DATE_FORMAT(`paymentDate`, '%Y-%m')
FROM `payments`
GROUP BY DATE_FORMAT(payments.date, '%Y-%m') WITH ROLLUP

This will give you something like:
amount | paymentDate
   200 | 2005-01
    58 | 2005-02
   258 | NULL

Will it do ?
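(If a true running total per month is wanted, rather than just the grand
total at the end, one common workaround is a user variable accumulated over
a derived table. A sketch only, reusing the column names from your query
and untested against your schema:

SET @running := 0;
SELECT t.month,
       (@running := @running + t.monthly) AS cumulative
FROM ( SELECT DATE_FORMAT(`paymentDate`, '%Y-%m') AS month,
              SUM(amount) AS monthly
       FROM `payments`
       GROUP BY month
       ORDER BY month ) AS t;

The inner query produces one summed row per month; the outer pass then adds
them up in order.)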

--
Gabriel PREDA
Senior Web Developer

--
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/[EMAIL PROTECTED]



Difference between essential and complete distributions

2006-07-10 Thread Rob Desbois
 Is there somewhere I can find the exact differences between the contents of 
 the
 'essential' and 'complete' Windows MySQL distributions?
 I've tried the source code and searched all over the website but can't find 
 it anywhere.

Ping!



--
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/[EMAIL PROTECTED]



Re: Error

2006-07-10 Thread James runaku
The package you sent me was infected, so send me one by mail. Thank you. James
B. Runaku, P.O. Box 190-50100, Kakamega, Kenya.

mysql@lists.mysql.com wrote:  The message contains Unicode characters and has 
been sent as a binary attachment.





mysql DB server outage and crazy dentunusd figures reported by sar

2006-07-10 Thread Billy Barker

Hi,

We had an outage on our Mysql DB yesterday. Basically the Db was unusable
and we couldn't get a shell onto the host ('could not fork process').


OS: Red Hat Linux Advanced Server release 2.1AS (Pensacola), kernel
2.4.9-e.3enterprise
MySQL: Ver 4.0.16-standard for pc-linux on i686
All DBs on the host use MyISAM tables

Apart from some crazy figures for 'dentunusd', nothing else strange was
reported by the sar logging for the period of the outage.

17:00:00   dentunusd  file-sz  %file-sz  inode-sz  super-sz  %super-sz  dquot-sz  %dquot-sz  rtsig-sz  %rtsig-sz
17:01:00  4294967201     1306      5.18      1072         0       0.00         0       0.00         0       0.00
17:02:00          64     1331      5.27      1121         0       0.00         0       0.00         0       0.00
17:03:00          94     1347      5.34      1115         0       0.00         0       0.00         0       0.00
17:04:01  4294967186     1016      4.03      1320         0       0.00         0       0.00         1       0.10
17:05:01  4294967249      918      3.64      1447         0       0.00         0       0.00         0       0.00
17:10:15  4294967229      810      3.21      1445         0       0.00         0       0.00         6       0.59
17:15:33  4294967279      794      3.15      1453         0       0.00         0       0.00         6       0.59
17:22:02  4294967178      595      2.36      1494         0       0.00         0       0.00         1       0.10
17:23:00  4294967194      971      3.85      1188         0       0.00         0       0.00         0       0.00
17:24:00  4294967197     1278      5.06       882         0       0.00         0       0.00         0       0.00
17:25:00  4294967274     1264      5.01       927         0       0.00         0       0.00         0       0.00
17:26:00         308     1230      4.87      1154         0       0.00         0       0.00         0       0.00
17:27:00         314     1254      4.97      1156         0       0.00         0       0.00         0       0.00
17:28:00         448     1230      4.87      1205         0       0.00         0       0.00         0       0.00
17:29:00         451     1249      4.95      1209         0       0.00         0       0.00         0       0.00
17:30:00         466     1204      4.77      1247         0       0.00         0       0.00         0       0.00
17:31:00         476     1248      4.95      1228         0       0.00         0       0.00         0       0.00
17:32:00         488     1284      5.09      1234         0       0.00         0       0.00         0       0.00
17:33:00  4294967222     1286      5.10       920         0       0.00         0       0.00         0       0.00
17:34:01  4294967226     1345      5.33       903         0       0.00         0       0.00         0       0.00
17:35:01           0     1332      5.28       951         0       0.00         0       0.00         0       0.00
17:36:00         227     1309      5.19      1093         0       0.00         0       0.00         0       0.0

Has anyone ever seen this behaviour before?

Cheers

Bill.


vCard to MySQL

2006-07-10 Thread Timothy Murphy
Is there a standard MySQL structure to represent vCards?
If so, are there programs to install vCard address lists
in a MySQL database?

I'm running Fedora-5 Linux, with KDE,
and am interested in translating the KAddressBook/kabc format
(used with kmail) to and from MySQL,
as automagically as possible.
If there is a standard KDE application that does this
I would be doubly pleased.

Any suggestions or advice gratefully received.



-- 
Timothy Murphy  
e-mail (80k only): tim /at/ birdsnest.maths.tcd.ie
tel: +353-86-2336090, +353-1-2842366
s-mail: School of Mathematics, Trinity College, Dublin 2, Ireland

-- 
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/[EMAIL PROTECTED]



File (xls, csv, txt) to MySQL

2006-07-10 Thread Nicholas Vettese
If I have a file that is in a spreadsheet format, how can I dump that into a 
MySQL database, using PHP MyAdmin?

Thanks,
Nick

RE: File (xls, csv, txt) to MySQL

2006-07-10 Thread Jay Blanchard
[snip]
If I have a file that is in a spreadsheet format, how can I dump that
into a MySQL database, using PHP MyAdmin?
[/snip]

Using a LOAD DATA INFILE query. 

http://www.google.com/search?hl=enlr=q=csv+to+mysql+phpmyadmin
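For reference, a minimal sketch of such a statement (the file path, table,
and column names here are made up; adjust the field and line terminators to
match the CSV you export from the spreadsheet, and the statement can be
pasted into phpMyAdmin's SQL window):

LOAD DATA INFILE '/tmp/contacts.csv'
INTO TABLE contacts
FIELDS TERMINATED BY ',' ENCLOSED BY '"'
LINES TERMINATED BY '\n'
IGNORE 1 LINES
(first_name, last_name, email);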

--
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/[EMAIL PROTECTED]



Re: How does one speed up delete-Again

2006-07-10 Thread Brent Baisley
I've had to delete large amounts of records from tables with over 200 million records. You are correct in that you want to do it in
chunks. How you divide the deletes is up to you. You would want to do it based on an indexed column, like a date, and
specify a range.

DELETE data
FROM data, event
WHERE data.cid = event.cid
AND event.timestamp BETWEEN '2006-01-01' AND '2006-01-07'

DELETE data
FROM data, event
WHERE data.cid = event.cid
AND event.timestamp BETWEEN '2006-01-08' AND '2006-01-15'
...

You'll find it goes much, much quicker if you can narrow down the number of records deleted at one time. A delete that I would
have given up on after a few hours takes a few minutes when done in chunks.


BUT, if your data is always going to be divided into date chunks where you'll periodically delete everything prior to a certain
date, you may want to look into merge tables. A merge table is a pseudo table that is made up of a set of underlying
tables. You can add/drop underlying tables very quickly, or you could even reference an underlying table directly for certain tasks.
Under the right circumstances, merge tables can provide a considerable amount of scalability.
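A minimal sketch of that idea (MyISAM only; the table and column names here
are invented for illustration):

CREATE TABLE data_2006_01 (cid INT NOT NULL, payload TEXT, KEY (cid)) ENGINE=MyISAM;
CREATE TABLE data_2006_02 (cid INT NOT NULL, payload TEXT, KEY (cid)) ENGINE=MyISAM;

CREATE TABLE data_all (cid INT NOT NULL, payload TEXT, KEY (cid))
  ENGINE=MERGE UNION=(data_2006_01, data_2006_02) INSERT_METHOD=LAST;

-- Getting rid of January is then near-instant compared to a huge DELETE:
ALTER TABLE data_all UNION=(data_2006_02);
DROP TABLE data_2006_01;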


- Original Message - 
From: Jacob, Raymond A Jr [EMAIL PROTECTED]

To: mysql@lists.mysql.com
Sent: Sunday, July 09, 2006 10:36 PM
Subject: How does one speed up delete-Again


I started the operation below on Friday at 1300hrs EST:
DELETE data
FROM data, event
WHERE data.cid = event.cid
AND event.timestamp < 2006-05-01


It is now Sunday 22:00hrs EST and the operation is still running.

Question: Should it take this long to delete 7.5 million records from a
4.5 GB table?

Question: Other than writing a script to export all the cids to a file
and deleting the records one at a time (so that at least I can delete some
records), is there a way to delete records one at a time or in groups,
so that if I have to stop the operation the delete will not be rolled back?

Question: Does anyone on the list have experience deleting what I guess
is a large number of records from a large table? I.e., how long does it take?

r/Raymond


--
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/[EMAIL PROTECTED]


--
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/[EMAIL PROTECTED]



Please solve immediately Error found in migration from MS SQL 2000 to My sql

2006-07-10 Thread Kishore Kumar Barik

Database Migration Error:
My database is stored in MS SQL Server 2000 on a standalone system, in Windows
authentication mode, i.e. without any username and password. I am writing out my
whole process and problem in the hope that you can solve it.

In Source Connection Parameter

Stored Connection : myLibrary
Hostname : Localhost  Port : 1433
username : .
Password :
Database : Library (I am specifying this explicitly; it is not being fetched by the MySQL
Migration Tool)

Next

Target Connection Parameter

Stored Connection : Library
Hostname : Localhost  Port : 3306
Username : root
password : 

next 
Connecting to source database system (failed)
Retrieve schema information from source database system (failed)
Test connection to target database system (not completed)

from Advanced button in below I found following error message





Connecting to source database and retrieve schemata names.
Initializing JDBC driver ...
Driver class MS SQL JDBC Driver
Opening connection ...
Connection
jdbc:jtds:sqlserver://localhost:1433/Library;user=.;password=;charset=utf-8;domain=
The list of schema names could not be retrieved (error: 0).
ReverseEngineeringMssql.getSchemata :Network error IOException: Connection
refused: connect
Details:
net.sourceforge.jtds.jdbc.ConnectionJDBC2.init(ConnectionJDBC2.java:372)
net.sourceforge.jtds.jdbc.ConnectionJDBC3.init(ConnectionJDBC3.java:50)
net.sourceforge.jtds.jdbc.Driver.connect(Driver.java:178)
java.sql.DriverManager.getConnection(Unknown Source)
java.sql.DriverManager.getConnection(Unknown Source)
com.mysql.grt.modules.ReverseEngineeringGeneric.establishConnection(
ReverseEngineeringGeneric.java:95)
com.mysql.grt.modules.ReverseEngineeringMssql.getSchemata(
ReverseEngineeringMssql.java:99)
sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
sun.reflect.NativeMethodAccessorImpl.invoke(Unknown Source)
sun.reflect.DelegatingMethodAccessorImpl.invoke(Unknown Source)
java.lang.reflect.Method.invoke(Unknown Source)
com.mysql.grt.Grt.callModuleFunction(Unknown Source)





Please send me solution immediately. Then I will try MySQL with these tools
in Network architecture.
Thank you
Kishore K Barik
Kolkata, India


queue select

2006-07-10 Thread Vittorio Zuccalà
Hello,
i've two tables and they have the same field.
I'd like to make a select which take records in a table and after it
shows record in the other table
in the same field...

I try to explain:

Table1
FieldA - FieldB
AAA123 - Cellular
BBB111 - PostIt

Table2
FieldA - FieldH
AAB111 - Dido
BGF145 - Serp

My select has to show:
AAA123
BBB111
AAB111
BGF145

Is there a way for it?
Thanks!!

-- 
vittorio zuccalà
Finconsumo Banca SPA
[EMAIL PROTECTED]
Tel: 011-6319464



-- 
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/[EMAIL PROTECTED]

RE: queue select

2006-07-10 Thread Addison, Mark
From: Vittorio Zuccalà  Sent: 10 July 2006 14:26
 
 Hello,
 i've two tables and they have the same field.
 I'd like to make a select which take records in a table and after it
 shows record in the other table
 in the same field...
 
 I try to explain:
 
 Table1
 FieldA - FieldB
 AAA123 - Cellular
 BBB111 - PostIt
 
 Table2
 FieldA - FieldH
 AAB111 - Dido
 BGF145 - Serp
 
 My select has to show:
 AAA123
 BBB111
 AAB111
 BGF145
 
 Is there a way for it?
 Thanks!!

Sounds like you need a UNION query:

SELECT FieldA FROM Table1 UNION SELECT FieldA FROM Table2;
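One nuance worth knowing: plain UNION removes duplicate rows across the two
SELECTs. If duplicates should be kept (or the tables are large and the
de-duplication pass is not needed), UNION ALL is the cheaper variant:

SELECT FieldA FROM Table1 UNION ALL SELECT FieldA FROM Table2;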

hth,
mark
--
 





MARK ADDISON
WEB DEVELOPER

200 GRAY'S INN ROAD
LONDON
WC1X 8XZ
UNITED KINGDOM
T +44 (0)20 7430 4678
F 
E [EMAIL PROTECTED]
WWW.ITN.CO.UK


--
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/[EMAIL PROTECTED]



Re: queue select

2006-07-10 Thread Joerg Bruehe

Hi!

Vittorio Zuccalà wrote:

Hello,
i've two tables and they have the same field.
I'd like to make a select which take records in a table and after it
shows record in the other table


Read here:  http://dev.mysql.com/doc/refman/5.0/en/union.html



in the same field...


It cannot be the same, as it belongs to a different table.
In your example, both tables use the same name for one field, but for 
Union this is not required - all that is needed is that the result 
columns of the SELECTs that are combined into the UNION be compatible, 
so that a common result type can be set.




I try to explain:

Table1
FieldA - FieldB
AAA123 - Cellular
BBB111 - PostIt

Table2
FieldA - FieldH
AAB111 - Dido
BGF145 - Serp

My select has to show:
AAA123
BBB111
AAB111
BGF145

Is there a way for it?


Try this:
  SELECT FieldA FROM Table1 UNION SELECT FieldA FROM Table2


Happy reading!
Joerg

--
Joerg Bruehe, Senior Production Engineer
MySQL AB, www.mysql.com
Office:  (+49 30) 417 01 487 VoIP: [EMAIL PROTECTED]

--
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/[EMAIL PROTECTED]



MAX_JOIN_SIZE

2006-07-10 Thread Peter Lauri
Best group member,

I ran the query below on a very small database, and it failed with a
MAX_JOIN_SIZE error. I have never seen this before; how can I avoid it?
Do I have too many joins in one query? Should I write the joins
differently?

++

SQL query: 

SELECT part.joindate AS dateadded, part.profile, part.prefname, part.email,
pass.password, pdf.id AS pass1, pdf.password AS pass2, partype.type_desc
FROM cmmember
LEFT OUTER JOIN cmtest ON ( cmmember.id = cmtest.cmmember_id ) 
LEFT OUTER JOIN tblparticipants part ON ( part.memberid = cmtest.test_id ) 
LEFT OUTER JOIN parpass pass ON ( pass.memberid = part.memberid ) 
LEFT OUTER JOIN profilepdf pdf ON ( pdf.memberid = part.memberid ) 
LEFT OUTER JOIN tblpartype partype ON ( partype.type_num = part.par_type ) 
WHERE cmmember.id =6
AND (
pdf.pdftype = 'par'
OR pdf.pdftype IS NULL 
)
ORDER BY part.joindate DESC , part.prefname, part.email
LIMIT 0 , 30 


MySQL said: 

#1104 - The SELECT would examine more rows than MAX_JOIN_SIZE. Check your
WHERE and use SET SQL_BIG_SELECTS=1 or SET SQL_MAX_JOIN_SIZE=# if the SELECT
is ok
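For reference, the session-level workaround that the error message itself
points to looks like this (run in the same connection before the SELECT; the
MAX_JOIN_SIZE value is only an example):

SET SESSION SQL_BIG_SELECTS = 1;
-- or, alternatively, raise the row-estimate limit instead:
SET SESSION MAX_JOIN_SIZE = 1000000000;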

++

Best regards,
Peter Lauri



-- 
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/[EMAIL PROTECTED]



Where is release 5.0.23

2006-07-10 Thread Rick Robinson
The documentation shows MySQL 5.0.23 as released on July 4 - but I do not see 
it on any mirror anywhere?  Does anyone know where MySQL 
5.0.23 is?  Is it released, as the doc says?  Is it being held until release 
5.0.24?

Thanks for any update.
Regards,
R



-- 
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/[EMAIL PROTECTED]



Re: Please solve immediately Error found in migration from MS SQL 2000 to My sql

2006-07-10 Thread Jake Peavy

On 7/10/06, Kishore Kumar Barik [EMAIL PROTECTED] wrote:


 snip 
Please send me solution immediately.



anything else, your highness?

--
-jp


Chuck Norris got in touch with his feminine side, and promptly got her
pregnant.


Re: How to look for balanced parenthesis?

2006-07-10 Thread Rhino


- Original Message - 
From: Miles Thompson [EMAIL PROTECTED]

To: mysql@lists.mysql.com
Sent: Sunday, July 09, 2006 8:10 PM
Subject: Re: How to look for balanced parenthesis?



At 03:48 PM 7/9/2006, Rhino wrote:



- Original Message - From: mos [EMAIL PROTECTED]
To: mysql@lists.mysql.com
Sent: Sunday, July 09, 2006 1:35 PM
Subject: How to look for balanced parenthesis?


I have a complicated SQL statement with around a dozen
if(this,val1,val2) expressions embedded in it, and there are even nested If clauses.
I'm getting syntax errors because I'm not balancing the ( ) properly.
Is there any free software out there for Windows that I can copy and
paste the SQL statement into that will show me where the parentheses are
unbalanced? It doesn't have to know about SQL; I just need it to either highlight
the parenthesis range or show me where the parentheses become
unbalanced.


I'm going to give you one answer that you almost certainly won't like: 
Eclipse. Eclipse is an IDE for developing programs, especially Java, and 
it has a parenthesis matcher which also handles braces and square 
brackets. Installing Eclipse solely for the bracket matcher is a bit like 
using atomic weapons to kill mosquitos but if you were going to develop 
applications anyway and wanted a great IDE, it might be the answer to your 
problem. It's free by the way. You can get it at http://eclipse.org.


Another editor that can also match brackets is PFE, Programmer's File 
Editor. It's also free and is a good editor. You can find it many places, 
including http://www.lancs.ac.uk/staff/steveb/cpaap/pfe/pfefiles.htm. It 
only runs on Windows though.


Another decent little editor that has the feature is TextPad. It's also 
free and can be found at http://www.textpad.com/.


There are probably more basic editors out there that have bracket matchers 
but I can't name any for you. I'm not even sure what the feature you want 
is supposed to be called: bracket matcher, parenthesis balancer, or 
whatever. It's getting to be a pretty standard feature in editors in 
recent years, although it doesn't seem to be in NotePad or WordPad.


--
Rhino


Rhino,

Eclipse can't edit files on the server, can it?

I used to use UltraEdit, but then switched to EditPlus because it can edit 
remote files almost transparently. (Opening a file FTP's it down, you edit 
local copy, Saving FTP's it back.)




I really don't know if Eclipse can edit files on a server. My workspace is 
on my local PC so I've never tried editing a file on a server.


I can certainly see that it would be convenient to be able to edit a file on 
a server without having to manually transfer the file back and forth! But 
Eclipse is pretty smart so I'd be a little surprised if they forced you to 
transfer the file manually.


--
Rhino 






--
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/[EMAIL PROTECTED]



Re: How does one speed up delete-Again

2006-07-10 Thread Jacob, Raymond A Jr
Could the problem be the locked data table in this case?
mysql> show processlist;
| Id  | User | Host      | db    | Command | Time   | State        | Info |
|   1 | goat | xxx:62693 | snort | Query   | 424668 | Locked       | INSERT INTO data (sid,cid,data_payload) VALUES ('2','13305243','45C10FEF40002F11D162CD9E |
| 524 | root | localhost | snort | Query   | 424669 | Sending data | DELETE data from data JOIN sidtemp ON data.cid = sidtemp.cid |
| 537 | root | localhost | snort | Query   | 242922 | Locked       | DELETE data FROM data, event WHERE data.cid = event.cid AND event.timestamp < 2006-05-01 |
| 542 | root | localhost | NULL  | Query   | 0      | NULL         | show processlist |

I killed 524 & 537, started over, and the CPU load went down. The query is still
slow.
Do I need to bring the database down to single-user mode (I don't know the
correct term, so I am just guessing that single-user mode exists) and then do
the delete?

Thank you,
Raymond


Re: Please solve immediately Error found in migration from MS SQL 2000 to My sql

2006-07-10 Thread Daniel da Veiga

On 7/10/06, Kishore Kumar Barik [EMAIL PROTECTED] wrote:

Database Migration Error:
My database is stored in MSSQL Server 2000 in a standalone system in windows
authentication mode means without any username and password. I am writing my
whole process  problem if you people could solve my problem.

In Source Connection Parameter

Stored Connection : myLibrary
Hostname : Localhost  Port : 1433
username : .
Password :
Database : Library (I am explicitly specifing, It is not fetching by mySQL
migration Tool)

Next

Target Connection Parameter

Stored Connection : Library
Hostname : Localhost  Port : 3306
Username : root
password : 

next 
Connecting to source database system (failed)
Retrieve schema information from source database system (failed)
Test connection to target database system (not completed)

from Advanced button in below I found following error message



Connecting to source database and retrieve schemata names.
Initializing JDBC driver ...
Driver class MS SQL JDBC Driver
Opening connection ...
Connection
jdbc:jtds:sqlserver://localhost:1433/Library;user=.;password=;charset=utf-8;domain=


That's your error: the migration tool can't connect to the source
database, and thus can't read its info, and that leads to a failed
process.

Test your connection, recheck username/password, try another tool to
connect, check privileges. I know NOTHING about MSSQL, you should be
able to solve this problem better than any MySQL user.


The list of schema names could not be retrieved (error: 0).
ReverseEngineeringMssql.getSchemata :Network error IOException: Connection
refused: connect
Details:
net.sourceforge.jtds.jdbc.ConnectionJDBC2.init(ConnectionJDBC2.java:372)
net.sourceforge.jtds.jdbc.ConnectionJDBC3.init(ConnectionJDBC3.java:50)
net.sourceforge.jtds.jdbc.Driver.connect(Driver.java:178)
java.sql.DriverManager.getConnection(Unknown Source)
java.sql.DriverManager.getConnection(Unknown Source)
com.mysql.grt.modules.ReverseEngineeringGeneric.establishConnection(
ReverseEngineeringGeneric.java:95)
com.mysql.grt.modules.ReverseEngineeringMssql.getSchemata(
ReverseEngineeringMssql.java:99)
sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
sun.reflect.NativeMethodAccessorImpl.invoke(Unknown Source)
sun.reflect.DelegatingMethodAccessorImpl.invoke(Unknown Source)
java.lang.reflect.Method.invoke(Unknown Source)
com.mysql.grt.Grt.callModuleFunction(Unknown Source)



Please send me solution immediately.


I'll try not to be as sarcastic as others, but I REALLY hope this
sentence is a result of you not speaking your native language! We are
volunteers, not employees. You'll be able to solve this better than
any of us, the problem is not with MySQL, it is with MSSQL, as the
migration tool can't connect to your already existing database in
order to migrate it.

--
Daniel da Veiga
Computer Operator - RS - Brazil
-BEGIN GEEK CODE BLOCK-
Version: 3.1
GCM/IT/P/O d-? s:- a? C++$ UBLA++ P+ L++ E--- W+++$ N o+ K- w O M- V-
PS PE Y PGP- t+ 5 X+++ R+* tv b+ DI+++ D+ G+ e h+ r+ y++
--END GEEK CODE BLOCK--

--
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/[EMAIL PROTECTED]



mysqldump with single-transaction with high-concurrency DB

2006-07-10 Thread Mark Steele
Hi folks,

 

I've recently tried to do a database backup on a database server that
has a fairly high concurrency rate (1000+ queries/sec) and have noticed
that the backup process seemed to deadlock the machine and I had to
resort to extreme measures to get the database back up (killed the
process and had to restart it in recovery mode).

 

The command:

mysqldump --all-databases --opt --single-transaction --master-data=1 > dump.txt

 

All my tables use InnoDB, and the database is about 100 gigabytes in
size.

 

Does anyone have any suggestions for getting consistent database
snapshots? 

 

I tried the InnoDB binary backup tool in the past, but that led to a
corrupted database, and I'm not sure that it'll lead to a different
outcome, as both --single-transaction and the binary backup tool use the
same mechanism (versioning). The documentation describes
--single-transaction as taking a short global lock, which I believe is
the root cause of the deadlock I saw.

 

When the server was deadlocked, all the connections were 'waiting on
table', and the backup process was apparently stuck on 'flushing
tables'.

 

Cheers,

 

Mark Steele
Information Systems Manager

Zango

E: [EMAIL PROTECTED]
P: 514.787.4681 |  F: 514.787.4707

www.zango.com

Read our blog at http://blog.zango.com

 



Re: mysqldump with single-transaction with high-concurrency DB

2006-07-10 Thread Dan Buettner

Mark, that's the expected behavior of mysqldump with --opt and
--single-transaction; it locks all databases and all tables for the
duration of the dump, ensuring a consistent snapshot.

With a database this size (100 GB), it's an area where throwing
hardware at the problem may be your best bet.  I suggest one of two
approaches as possible solutions:

1) Buy a *really fast* disk array and set it up as striped on a
superfast connection, like Ultra320 SCSI or fibre.  This will lower
the amount of time required to write the mysqldump output (which will
likely exceed 100 GB data size due to overhead within the file).  You
might even look at 2 disk arrays on 2 channels, striping across both
the disks in the array and across the arrays.  Pros: fairly easy to
do, not terribly expensive.  Cons: You still lock up your main
database server for backups, though possibly for less time than you do
now.

2) Buy a second physical server for MySQL and set up replication.
Then use the replication server to do your backups - provided you
never let people connect directly to it, no one will notice when it
locks up for a few hours dumping data.  Once it's done dumping,
replication will catch up on its own.  This doesn't even have to be a
very fast box, depending on your needs.  If it falls behind from time
to time that may be acceptable - depends on your needs.  Pros:
possibly less expensive than superfast arrays, no lockups of your main
server, backup server in case of primary failure.  Cons: requires
monitoring of replication, and still requires a one-time consistent
dump as a starting point for replication.
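For what it's worth, a minimal sketch of the replication setup described in
option 2 (server ids, host name, and credentials are placeholders):

# my.cnf on the master
[mysqld]
server-id = 1
log-bin   = mysql-bin

# my.cnf on the backup slave
[mysqld]
server-id = 2

-- then, on the slave, point it at the master and start replicating:
CHANGE MASTER TO
  MASTER_HOST='master.example.com',
  MASTER_USER='repl',
  MASTER_PASSWORD='secret',
  MASTER_LOG_FILE='mysql-bin.000001',
  MASTER_LOG_POS=4;
START SLAVE;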

HTH,
Dan

On 7/10/06, Mark Steele [EMAIL PROTECTED] wrote:

Hi folks,



I've recently tried to do a database backup on a database server that
has a fairly high concurrency rate (1000+ queries/sec) and have noticed
that the backup process seemed to deadlock the machine and I had to
resort to extreme measures to get the database back up (killed the
process and had to restart it in recovery mode).



The command:

mysqldump --all-databases --opt --single-transaction --master-data=1
dump.txt



All my tables use InnoDB, and the database is about 100 gigabytes in
size.



Does anyone have any suggestions for getting consistent database
snapshots?



I tried the InnoDB binary backup tool in the past, but that lead to a
corrupted database, and I'm not sure that it'll lead to a different
outcome as both single-transaction and the binary backup tool use the
same mechanism (versionnning). The documentation describes the
single-transaction as taking a short global lock, which is the root
cause of the deadlock I saw I believe.



When the server was deadlocked, all the connections were 'waiting on
table', and the backup process was apparently stuck on 'flushing
tables'.



Cheers,



Mark Steele
Information Systems Manager

Zango

E: [EMAIL PROTECTED] mailto:[EMAIL PROTECTED]
P: 514.787.4681 |  F: 514.787.4707

www.zango.com BLOCKED::http://www.zango.com

Read our blog at http://blog.zango.com BLOCKED::http://blog.zango.com







--
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/[EMAIL PROTECTED]



Re: Where is release 5.0.23

2006-07-10 Thread Joerg Bruehe

Hi Rick, all!


Rick Robinson wrote:
The documentation shows MySQL 5.0.23 as released on July 4 - but I do not see it on any mirror anywhere?  Does anyone know where MySQL 
5.0.23 is?  Is it released, as the doc says?  Is it being held until release 5.0.24?


The release date in the documentation is *not* the date on which the 
binary packages were sent to the mirrors, or that on which the public 
announcement is sent.


Rather, the date is that of the latest file change (source or tools) 
that is included in that version.
After that change, the packages still get built, tested, and distributed 
- and this needs some days.


5.0.23 would have been published today, had we not spotted an 
un-intended effect.  Sorry for the delay!



Regards,
Joerg

--
Joerg Bruehe, Senior Production Engineer
MySQL AB, www.mysql.com


--
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/[EMAIL PROTECTED]



Re: mysqldump with single-transaction with high-concurrency DB

2006-07-10 Thread Dan Buettner

Sorry Mark - I thought your question was more of a "does this seem
right and how do I" post than a "something's wrong here" post.

I think your problem is coming in with the use of --opt.  The article
you reference, where it says "This is an online, non-blocking backup",
makes no mention of --opt, which as you note includes --lock-tables.

From the mysqldump man page:


--lock-tables, -l

 Lock all tables before starting the dump. The tables are locked with
 READ LOCAL to allow concurrent inserts in the case of MyISAM tables.
 For transactional tables such as InnoDB and BDB,
 --single-transaction is a much better option, because it does not
 need to lock the tables at all.

 Please note that when dumping multiple databases, --lock-tables
 locks tables for each database separately. So, this option does not
 guarantee that the tables in the dump file are logically consistent
 between databases. Tables in different databases may be dumped in
 completely different states.

Try running without --opt, possibly specifying the included options
you need individually, and see if that works better for you.
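As a concrete form of that suggestion (a sketch only, the exact flag set may
need adjusting), the --opt bundle can be disabled and its useful members
named explicitly, leaving out --lock-tables:

mysqldump --all-databases --single-transaction --master-data=1 \
  --skip-opt --quick --extended-insert --create-options \
  --add-drop-table --set-charset --disable-keys > dump.sql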

I understand what you're saying about MySQL replication; hence the
need for monitoring the replication to ensure good backups.

Dan




On 7/10/06, Mark Steele [EMAIL PROTECTED] wrote:

Hi Dan,


  --single-transaction
  Creates a consistent snapshot by dumping all tables in a
  single transaction. Works ONLY for tables stored in
  storage engines which support multiversioning (currently
  only InnoDB does); the dump is NOT guaranteed to be
  consistent for other storage engines. Option
  automatically turns off --lock-tables.
  --opt
  Same as --add-drop-table, --add-locks, --create-options,
  --quick, --extended-insert, --lock-tables, --set-charset,
  and --disable-keys. Enabled by default, disable with
  --skip-opt.

See http://dev.mysql.com/doc/refman/5.0/en/backup-policy.html

These options should produce a non-blocking consistent database
snapshot.

I can already accomplish this on a slave server, however MySQL
replication can lead to slave drift as it is statement based (as opposed
to row-based replication). The only safe way to guarantee a real backup
in a MySQL replication setup is via snapshots on the master.

-Original Message-
From: Dan Buettner [mailto:[EMAIL PROTECTED]
Sent: Monday, July 10, 2006 2:42 PM
To: Mark Steele
Cc: mysql@lists.mysql.com
Subject: Re: mysqldump with single-transaction with high-concurrency DB

Mark, that's the expected behavior of mysqldump with --opt and
--single-transaction; it locks all databases and all tables for the
duration of the dump, ensuring a consistent snapshot.

With a database this size (100 GB), it's an area where throwing
hardware at the problem may be your best bet.  I suggest one of two
approaches as possible solutions:

1) Buy a *really fast* disk array and set it up as striped on a
superfast connection, like Ultra320 SCSI or fibre.  This will lower
the amount of time required to write the mysqldump output (which will
likely exceed 100 GB data size due to overhead within the file).  You
might even look at 2 disk arrays on 2 channels, striping across both
the disks in the array and across the arrays.  Pros: fairly easy to
do, not terribly expensive.  Cons: You still lock up your main
database server for backups, though possibly for less time than you do
now.

2) Buy a second physical server for MySQL and set up replication.
Then use the replication server to do your backups - provided you
never let people connect directly to it, no one will notice when it
locks up for a few hours dumping data.  Once it's done dumping,
replication will catch up on its own.  This doesn't even have to be a
very fast box, depending on your needs.  If it falls behind from time
to time that may be acceptable - depends on your needs.  Pros:
possibly less expensive than superfast arrays, no lockups of your main
server, backup server in case of primary failure.  Cons: requires
monitoring of replication, and still requires a one-time consistent
dump as a starting point for replication.

HTH,
Dan

On 7/10/06, Mark Steele [EMAIL PROTECTED] wrote:
 Hi folks,



 I've recently tried to do a database backup on a database server that
 has a fairly high concurrency rate (1000+ queries/sec) and have
noticed
 that the backup process seemed to deadlock the machine and I had to
 resort to extreme measures to get the database back up (killed the
 process and had to restart it in recovery mode).



 The command:

 mysqldump --all-databases --opt --single-transaction --master-data=1
 dump.txt



 All my tables use InnoDB, and the database is about 100 gigabytes in
 size.



 Does anyone have any suggestions for getting consistent database
 snapshots?



 I tried the InnoDB binary backup tool in the past, but that lead to a
 corrupted database, 

RE: mysqldump with single-transaction with high-concurrency DB

2006-07-10 Thread Mark Steele

Dan: The options I specified are correct (according to the
documentation) to get a consistent non-blocking snapshot. 
(--single-transaction disables --lock-tables, --opt is the default
behavior for mysqldump).

My question was more in the nature of will these options work in high
concurrency situations or will they cause a deadlock. (or am I missing
something here)

The documentation states that --single-transaction will take a global
lock 'for a short period of time', which I took to mean that it would be
short enough not to disturb normal operations (which is what is implied
in the documentation).

If this isn't the case in high-concurrency situations, anyone have
another method to get a consistent snapshot?

Cheers,

Mark

-Original Message-
From: Dan Buettner [mailto:[EMAIL PROTECTED] 
Sent: Monday, July 10, 2006 3:21 PM
To: Mark Steele
Cc: mysql@lists.mysql.com
Subject: Re: mysqldump with single-transaction with high-concurrency DB

Sorry Mark - I thought your question was more of a does this seem
right and how do I than a something's wrong here post.

I think your problem is coming in with the use of --opt.  The article
you reference, where it says This is an online, non-blocking backup,
makes no mention of --opt, which as you note includes --lock-tables.
From mysqldump man page:

--lock-tables, -l

  Lock all tables before starting the dump. The tables are
locked with
  READ LOCAL to allow concurrent inserts in the case of MyISAM
tables.
  For transactional tables such as InnoDB and BDB,
  --single-transaction is a much better option, because it does
not
  need to lock the tables at all.

  Please note that when dumping multiple databases,
--lock-tables
  locks tables for each database separately. So, this option
does not
  guarantee that the tables in the dump file are logically
consistent
  between databases. Tables in different databases may be dumped
in
  completely different states.

Try running without --opt, possibly specifying the included options
you need individually, and see if that works better for you.

I understand what you're saying about MySQL replication; hence the
need for monitoring the replication to ensure good backups.

Dan




On 7/10/06, Mark Steele [EMAIL PROTECTED] wrote:
 Hi Dan,


   --single-transaction
   Creates a consistent snapshot by dumping all tables in a
   single transaction. Works ONLY for tables stored in
   storage engines which support multiversioning (currently
   only InnoDB does); the dump is NOT guaranteed to be
   consistent for other storage engines. Option
   automatically turns off --lock-tables.
   --opt
   Same as --add-drop-table, --add-locks, --create-options,
   --quick, --extended-insert, --lock-tables, --set-charset,
   and --disable-keys. Enabled by default, disable with
   --skip-opt.

 See http://dev.mysql.com/doc/refman/5.0/en/backup-policy.html

 These options should produce a non-blocking consistent database
 snapshot.

 I can already accomplish this on a slave server, however MySQL
 replication can lead to slave drift as it is statement based (as
opposed
 to row-based replication). The only safe way to guarantee a real
backup
 in a MySQL replication setup is via snapshots on the master.

 -Original Message-
 From: Dan Buettner [mailto:[EMAIL PROTECTED]
 Sent: Monday, July 10, 2006 2:42 PM
 To: Mark Steele
 Cc: mysql@lists.mysql.com
 Subject: Re: mysqldump with single-transaction with high-concurrency
DB

 Mark, that's the expected behavior of mysqldump with --opt and
 --single-transaction; it locks all databases and all tables for the
 duration of the dump, ensuring a consistent snapshot.

 With a database this size (100 GB), it's an area where throwing
 hardware at the problem may be your best bet.  I suggest one of two
 approaches as possible solutions:

 1) Buy a *really fast* disk array and set it up as striped on a
 superfast connection, like Ultra320 SCSI or fibre.  This will lower
 the amount of time required to write the mysqldump output (which will
 likely exceed 100 GB data size due to overhead within the file).  You
 might even look at 2 disk arrays on 2 channels, striping across both
 the disks in the array and across the arrays.  Pros: fairly easy to
 do, not terribly expensive.  Cons: You still lock up your main
 database server for backups, though possibly for less time than you do
 now.

 2) Buy a second physical server for MySQL and set up replication.
 Then use the replication server to do your backups - provided you
 never let people connect directly to it, no one will notice when it
 locks up for a few hours dumping data.  Once it's done dumping,
 replication will catch up on its own.  This doesn't even have to be a
 very fast box, depending on your needs.  If it falls behind from time
 to time that may be acceptable - depends on your needs.  

Re: How does one speed up delete-Again

2006-07-10 Thread Jacob, Raymond A Jr
It appears that every time I start a query, the event or the data table
gets locked.
Could this have any effect on why it takes so long to delete records?

Grasping at straws,
Thank you,
Raymond
 



mysql> show processlist;
| Id  | User | Host      | db            | Command | Time | State        | Info |
|   1 | goat | xxx:62693 | snort         | Query   | 1555 | Locked       | INSERT INTO event (sid,cid,signature,timestamp) VALUES ('2', '13305568', '12', '2006-07-10 11:48:22. |
| 542 | root | localhost | snort         | Query   | 1555 | Sending data | DELETE data FROM data, event WHERE data.cid = event.cid AND event.timestamp < 2006-04-01 |
| 543 | goat | xxx:62680 | snort_archive | Sleep   | 1555 |              | NULL |
| 544 | root | localhost | snort         | Query   | 0    | NULL         | show processlist |
| 545 | root | localhost | snort         | Query   | 308  | Sending data | DELETE data FROM data, event WHERE data.cid = event.cid AND event.timestamp < '2006-03-03' |
4 rows in set (0.00 sec)

mysql> show processlist;
| Id  | User | Host      | db            | Command | Time | State        | Info |
|   1 | goat | xxx:62693 | snort         | Query   | 333  | Locked       | INSERT INTO event (sid,cid,signature,timestamp) VALUES ('2', '13305782', '12', '2006-07-10 12:16:30. |
| 543 | goat | xxx:62680 | snort_archive | Sleep   | 333  |              | NULL |
| 544 | root | localhost | snort         | Query   | 0    | NULL         | show processlist |
| 545 | root | localhost | snort         | Query   | 333  | Sending data | DELETE data FROM data, event WHERE data.cid = event.cid AND event.timestamp < '2006-03-03' |
4 rows in set (0.00 sec)

mysql> show processlist;
| Id  | User | Host      | db            | Command | Time | State        | Info |
|   1 | goat | xxx:62693 | snort         | Query   | 337  | Locked       | INSERT INTO event (sid,cid,signature,timestamp) VALUES ('2', '13305782', '12', '2006-07-10 12:16:30. |
| 543 | goat | xxx:62680 | snort_archive | Sleep   | 337  |              | NULL |
| 544 | root | localhost | snort         | Query   | 0    | NULL         | show processlist |
| 545 | root | localhost | snort         | Query   | 337  | Sending data | DELETE data FROM data, event WHERE data.cid = event.cid AND event.timestamp < '2006-03-03' |
4 rows in set (0.01 sec)



mysql> show processlist;
| Id  | User | Host      | db            | Command | Time | State        | Info |

Question on views, stored procedures temporary tables

2006-07-10 Thread Daniel Kasak
Hi all.

I have a long and complicated chain of queries in MS Access that I want
to port to stored procedures & views.

What I'd like to do is something like:

1) a stored procedure grabs the original data set and dumps it into temporary
tables
2) I open a view, which is in turn based on other views, which
eventually point back to the temporary tables created in 1)

Can I do that? I.e., if I create the views now, will they still work later,
considering the temporary tables will be dropped and re-created later
on? Or do I have to drop the views as well, and re-create them after the
stored procedure that creates and populates the temporary tables?
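For what it's worth, here is a sketch of the pattern with invented names.
One caveat worth checking first: MySQL 5.0 does not let a view's SELECT
refer to a TEMPORARY table, so this sketch uses an ordinary staging table
that the procedure drops and re-creates:

DELIMITER //
CREATE PROCEDURE refresh_stage()
BEGIN
  DROP TABLE IF EXISTS stage_sales;
  CREATE TABLE stage_sales
    SELECT customer_id, SUM(amount) AS total
    FROM sales
    GROUP BY customer_id;
END //
DELIMITER ;

-- The view is created once; it resolves the table name when it is queried,
-- so it keeps working across re-creations as long as the columns match.
CREATE VIEW v_big_customers AS
  SELECT customer_id, total FROM stage_sales WHERE total > 1000;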

Thanks :)

Dan

-- 
Daniel Kasak
IT Developer
NUS Consulting Group
Level 5, 77 Pacific Highway
North Sydney, NSW, Australia 2060
T: (+61) 2 9922-7676 / F: (+61) 2 9922 7989
email: [EMAIL PROTECTED]
website: http://www.nusconsulting.com.au

-- 
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/[EMAIL PROTECTED]



mysql - multimedia database

2006-07-10 Thread Shain Lee
Hi,

I'm building a multimedia database that is to be online serving WAP
content. I decided to go with MySQL and installed the latest MySQL 5.0
on Fedora Core 5 with InnoDB; so far everything has worked.

My worry is: how can I create a huge multimedia database with MySQL?
There are lots of subcategories under each main category, for example:

  Movie stars -> actors  -> hollywood -> abc, ddd
                         -> bollywood -> aaa, sss
              -> actress -> hollywood -> aaa
                         -> bollywood -> qqq

There will be a lot of main categories and subcategories like that. All
content types are involved: wallpapers, greetings, ring tones, movie
clips, etc.

The content is supposed to be stored as BLOBs in the MySQL database itself.

So, can somebody help me create the most reliable MySQL multimedia
database?
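A minimal sketch of one way to model the nested categories plus the BLOB
content (table and column names are made up for illustration):

CREATE TABLE category (
  id        INT UNSIGNED NOT NULL AUTO_INCREMENT PRIMARY KEY,
  parent_id INT UNSIGNED NULL,           -- NULL for top-level categories
  name      VARCHAR(100) NOT NULL,
  INDEX (parent_id)
) ENGINE=InnoDB;

CREATE TABLE content (
  id          INT UNSIGNED NOT NULL AUTO_INCREMENT PRIMARY KEY,
  category_id INT UNSIGNED NOT NULL,
  title       VARCHAR(200) NOT NULL,
  mime_type   VARCHAR(50)  NOT NULL,     -- e.g. image/jpeg, audio/midi
  body        LONGBLOB     NOT NULL,     -- the media bytes themselves
  INDEX (category_id)
) ENGINE=InnoDB;

Note that a single BLOB larger than max_allowed_packet cannot be sent to or
read from the server in one piece, so very large media files are often kept
on disk with only their paths stored in MySQL.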
   
Any ideas and comments are welcome.

Thanx in advance,
Shaine.
   
   
   



Re: How to look for balanced parenthesis?

2006-07-10 Thread Hank

I used to use UltraEdit, but then switched to EditPlus because it can edit
remote files almost transparently. (Opening a file FTP's it down, you edit
local copy, Saving FTP's it back.)




FYI - UltraEdit can do this - It uses SFTP over SSH to edit remote files.
Works like a charm.

-Hank


How to log the execution time of each query

2006-07-10 Thread Saha, Mukul \(Cognizant\)

Hi,

I would like to log the execution time of each query in my MySQL server.
Is there any possible way?

Please note that the --log-slow-queries option will not suffice for my
requirement.



Thanks & Regards

Mukul Saha

___
Cognizant Technology Solutions,Kolkata.
Ph: 033-2357 3211 / 5221, Vnet:33790




