processlist

2007-03-15 Thread balaraju mandala

Hi All,

I have some doubts about DB connections. Please clarify if you know.

I am using MySQL 5. While my program (a Java program with threads) is
running, I sometimes don't get a DB connection within the expected time.
Whenever I type SHOW PROCESSLIST at the mysql prompt, it shows 180+
connections established, but almost all of them are in the Sleep state. Are
these many open connections delaying a new connection? If yes, is there any
way to close these sleeping connections?


Possible in sql, or just move it to a language

2007-03-15 Thread Scott Haneda
This is a one-off I need to do.

Table1 has email addresses in it, and table2 also has email addresses in it.
Table1 represents bounced emails that need to be purged from the database.

What I would like to do is mark a field in table2, called needs_purging,
as true when there is a match from table1.

Should I just run a loop in my language of choice, or is there a join I am
not seeing here?
-- 
-
Scott Haneda                                Tel: 415.898.2602
http://www.newgeo.com Novato, CA U.S.A.






RE: Possible in sql, or just move it to a language

2007-03-15 Thread Daevid Vincent
I'm just guessing at this, but I think this is a simple update:

UPDATE  table1, table2 
SET   table2.needs_purging = 1
WHERE   table1.bounce_email = table2.email

http://dev.mysql.com/doc/refman/5.1/en/update.html

You could also apply the same logic with a DELETE, I believe:

DELETE table2
FROM table1, table2
WHERE  table1.bounce_email = table2.email

http://dev.mysql.com/doc/refman/5.1/en/delete.html
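
For what it's worth, the same update can also be written with explicit JOIN
syntax; this is just a sketch reusing the guessed column names above
(bounce_email and email), and an index on table2.email should keep it fast:

UPDATE table2
JOIN   table1 ON table1.bounce_email = table2.email  -- same guessed column names as above
SET    table2.needs_purging = 1;

Either form should mark every table2 row whose address appears in table1.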







Is there a professional quality mySQL GUI for Linux?

2007-03-15 Thread Daevid Vincent
Can anyone recommend a real, quality, professional level mySQL GUI for
Linux? 
KDE, Gnome, whatever; it doesn't matter. Beggars can't be choosers, right?

Something along the lines of SQLYog (Enterprise ideally).

I'm kind of disappointed that I can't seem to find anything. They're all
either admin tools designed to set up users and all that stuff (yawn), or
they're so limited that I might as well just use an XP VMware image and a
Windows GUI client instead.

Sadly SQLYog has no intentions of porting to Linux :-\

mySQL Query Browser is for the most part useless. It's at v1.1.18 and gives
almost no benefit over using the CLI. You can't sort by clicking column
headings. The UI is awkward to use. You can't even set the font sizes, so
it's HUGE (at least in my KDE it is).

mySQL Workbench is alpha, and I couldn't even get it to connect to the
localhost server (even though the other tools in that package work)!

phpMyAdmin is wonderful -- for a Web UI tool. But not very practical for
serious development.

Anything else in my search is either equally amateur or simply just an
inactive or dead project.

How is it that mySQL is effectively a Linux-native tool for all intents and
purposes, yet there isn't nearly the selection of GUIs for it that there is
for Windows?!





Re: Is there a professional quality mySQL GUI for Linux?

2007-03-15 Thread Ow Mun Heng
On Thu, 2007-03-15 at 01:25 -0700, Daevid Vincent wrote:
 Can anyone recommend a real, quality, professional level mySQL GUI for
 Linux? 
 KDE, Gnome, whatever; it doesn't matter. Beggars can't be choosers, right?
 

Does it have to be free of cost and FLOSS? If not, then consider Aqua
Data Studio. It looks and works nicely.
It's Java-based, though, so it's a resource hog.






MySQL Summer of Code: Apply now, window closes 24 March 2007!

2007-03-15 Thread Kaj Arnö

Dear MySQL Users,

This message is for those of you who are interested in going beyond
*using* MySQL, into the realm of those *developing* the MySQL software
itself, for everyone else in the community to use and enjoy.

As I wrote a few days ago in my blog, MySQL is participating in the
Google Summer of Code 2007.

The application process for students has now opened! Those
interested have just over a week to apply, as the window
closes on 24 March 2007.

To participate in MySQL Summer of Code,

1. Read my blog announcement on http://www.planetmysql.org/kaj/?p=89

2. Read Google’s Guide to the GSoC Web App for Student Applicants at
http://groups.google.com/group/google-summer-of-code-announce/web/guide-to-the-gsoc-web-app-for-student-applicants

3. Go to the http://forge.mysql.com/wiki/SummerOfCode page to pick one
or several MySQL related projects that interest you

4. Load the http://forge.mysql.com/wiki/SummerOfCode/ApplicationTemplate
page for the MySQL related application template, to use as a reference
for when you fill in the application itself in the next step

5. Go to the Google Summer of Code Student Signup page at
http://code.google.com/soc/student_step1.html and start the signup
process itself. Be prepared that it can take a while to fill in.

When filling in the GSoC application, remember that we expect you to
follow our Application Template mentioned above. I.e.

- Your application needs to be concise.
- Your application needs to follow our format.
- First you give us your personal details,
- then what you plan on doing,
- then all your actual experience (broken down, for easy reading),
- then what exactly you intend on delivering (the most important part), and
- finally, a simple "Yes, I’m willing to sign the MySQL CLA" at
http://forge.mysql.com/wiki/MySQL_Contributor_License_Agreement

The type of projects available at
http://forge.mysql.com/wiki/SummerOfCode are of all ambition levels —
and we have not limited ourselves to entirely new features. Colin and
our mentors are still adding new tasks, and currently we have the
following items:

- Test Suite Development (Stewart Smith)
  mysql-test-run.pl and mysqltest
- Test case development (Giuseppe Maxia)
  Code coverage improvement
  System Tests - Load Tests and Long-Running Tests
  Test creation tools
- Benchmarking the MySQL Server
- Instance Manager fixes
- Integrate MySQL Cluster with Instance Manager
- INFORMATION_SCHEMA tables for MySQL Cluster status
- MySQL Based Atom Store
- Simple P4 bugs/features involving options/flags on bugs.mysql.com
- Simple features from Worklog, our detailed todo list

If at any point you need assistance, please don’t hesitate to contact
MySQL’s Summer of Code Project Administrator Colin Charles
([EMAIL PROTECTED]).

Kaj
--
Kaj Arnö [EMAIL PROTECTED]
MySQL AB, VP Community Relations, Munich, Germany




RE: Is there a professional quality mySQL GUI for Linux?

2007-03-15 Thread Daevid Vincent
No, it doesn't have to be free. However, I'm not a big fan of Java
applications either, for the very reason you mention. They tend to be big,
bloated and slow. Zend IDE is an example of that, and is mostly unusable
for real work IMHO.

Whoa! $400 for a single license! Yipes!
http://www.aquafold.com/licensing.html 






Re: Is there a professional quality mySQL GUI for Linux?

2007-03-15 Thread Martijn Tonies



 No, it doesn't have to be free. However I'm not a big fan of Java

What about using Wine?

Ours works fine under Wine.

Or you could try the one at www.sqlly.com

Martijn Tonies
Database Workbench - tool for InterBase, Firebird, MySQL, NexusDB, Oracle &
MS SQL Server
Upscene Productions
http://www.upscene.com
My thoughts:
http://blog.upscene.com/martijn/
Database development questions? Check the forum!
http://www.databasedevelopmentforum.com





mysql-workbench

2007-03-15 Thread Ezequiel Panepucci

Dear list,

Is mysql-workbench, the product derived from DBDesigner,
a commercial product only?

I have an alpha version on my computer but now it seems
from the MySQL webpage http://dev.mysql.com/downloads/gui-tools/5.0.html
that the workbench application is no longer available.

On http://www.mysql.com/products/tools/ we find that query-browser
and administrator have download links but workbench does not.

The alpha version just dies every now and then and is really not
reliable enough (Ubuntu 6.06).

Comments?

Thanks for the great database,
   Zac

. --.. . --.- ..- .. . .-..  .--. .- -. . .--. ..- -.-. -.-. ..
Ezequiel Panepucci, Ph.D.   | Paul Scherrer Institut
Phone: +41 (0)56 310 5267   | Swiss Light Source - WSLA/216
Fax  : +41 (0)56 310 5292   | 5232 Villigen PSI, Switzerland
Cell : +41 (0)79 598 6946






Re: mysql-workbench

2007-03-15 Thread Ezequiel Panepucci

I hate answering myself...

RTFWBF (Read The Fine WorkBench Forum)
http://forums.mysql.com/read.php?113,142277,142309#msg-142309

Sorry for the wasted bandwidth,
   Zac







Network traffic with MySQL

2007-03-15 Thread Jonathan Trahair
Hi Everyone

I have a new MySQL database at the back end of an invoicing and stock control
program. I ran a performance monitor while doing the look-ups involved in one
of the reports (invoice items in order of customer name between two dates -
you know the sort of thing). It appears that the server sends the data to the
client, which processes it there and, in the case of writing data back,
sends it back again.

It doesn't seem to make any difference whether the CursorLocation = adUseClient 
or adUseServer.

Set mconn = New ADODB.Connection
mconn.ConnectionString = "DRIVER={MySQL ODBC 3.51 Driver};SERVER=" & mstrServerName & _
    ";DATABASE=" & mstrThisDatabase & ";UID=root;PWD=" & mstrThisPassword & ";OPTION=3"
mconn.Open

Then, for updating the database I use
mconn.CursorLocation = adUseClient
grsCustomers.Open mstrSQL, mconn, adOpenStatic, adLockOptimistic
or if it's just for reporting, then
mconn.CursorLocation = adUseServer
grsCustomers.Open mstrSQL, mconn, adOpenForwardOnly, adLockReadOnly

If I set mconn.CursorLocation = adUseServer shouldn't the data be processed 
without sending it to the client, or am I missing something?

PS, what is Option = 3 in mconn.ConnectionString?

Thanks in advance

Jonathan Trahair
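
Regardless of where the cursor lives, the traffic largely depends on how much
of the report is expressed in the SQL itself; below is a rough sketch with
hypothetical table and column names (customers, invoices, customer_id and
invoice_date are not from the original message), just to illustrate pushing
the filtering and sorting to the server:

-- hypothetical schema: customers(id, name), invoices(customer_id, item, amount, invoice_date)
SELECT c.name, i.item, i.amount
FROM   invoices i
JOIN   customers c ON c.id = i.customer_id
WHERE  i.invoice_date BETWEEN '2007-01-01' AND '2007-03-15'  -- the two report dates
ORDER  BY c.name;

A query shaped like this returns only the rows the report needs, instead of
pulling whole tables across the wire for the client to filter.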

Re: Is there a professional quality mySQL GUI for Linux?

2007-03-15 Thread Thiago LPS

I use SQLYog running under Wine and/or CrossOver Office.

It works 100% fine :D

wine sqlyog.exe, then Next > Next > Next > Finish

:D





--
--
Thiago LPS
C.E.S.A.R - Administrador de Sistemas
msn: [EMAIL PROTECTED]
0xx 81 8735 2591
--


RE: Is there a professional quality mySQL GUI for Linux?

2007-03-15 Thread Tim Lucia
I trade between SQLYog and the SQL Explorer plugin for Eclipse.  The former only
shows 1 result set at a time (boo) while the latter shows more than one
(yeah!)  The former doesn't let you sort columns from your own query, only
the table preview.  The latter doesn't let you sort the columns.

Neither one is perfect.

Tim







max_rows query + SegFaulting at inopportune times

2007-03-15 Thread JP Hindin

Greetings all;

I have a quandary regarding table limits, and clearly I am not
understanding how this all works together. I have a test database which
needs to keep long-term historical data; currently the total dataset in
this one table is probably about 5.5GB in size - although since I have a
4GB table limit that I can't seem to shake, I'm not entirely positive yet.

First off, I'm running 4.1.11-Debian_4sarge7-log. I'm unsure if you MySQL
chaps are willing to help me with this distribution version, but I imagine
the primary question is fairly non-specific. The OS is obviously Deb
Sarge, running on a recent x86 machine (so it does include the large file
support in the kernel).

So, when I first received a 'Table is full' error I looked up the MySQL
documentation and found the section about altering max_rows on a
table. Nice and simple. I ran the following on my DB:
mysql> ALTER TABLE mytable max_rows=2000;
And some four days later when I looked at it, this was on the screen:
Segmentation fault

I checked the table status, and max_data_length had not changed. I thought
perhaps I was being too pushy with the max_rows, so I dropped a zero and
tried again - with the same results. About four days in, seg fault. So I
figured perhaps it was getting bent out of shape with a 4.0GB table
already in place, so I removed all rows, optimised the table, and tried
the first query again. Success immediately! The SHOW STATUS gave this:
     Row_format: Dynamic
           Rows: 0
 Avg_row_length: 0
    Data_length: 0
Max_data_length: 281474976710655
   Index_length: 1024

Looks good. Nice high max_data_length - so I loaded all the data into the
table. Again, four days pass for the data to complete the bulk INSERT, and
I run a SHOW STATUS again:
     Row_format: Dynamic
           Rows: 18866709
 Avg_row_length: 224
    Data_length: 4236151548
Max_data_length: 4294967295
   Index_length: 1141235712

And suddenly I'm back to square one. Now I'm suspecting that the
max_data_length is a combination of a lot of factors, and that the
avg_row_length plays into this. The documentation suggests setting
avg_row_length in the ALTER TABLE, however it also says:
"You have to specify avg_row_length only for tables with BLOB or TEXT
columns", so I didn't bother, as this table is a combination of ints,
varchars and datetimes.

I wanted to check with you wizened lot before I set another query going.
I'm going to assume that running an ALTER with the data in the DB is only
going to garner me another wasted week and a seg fault, so I think what I
should probably do is clean the table again and run the following:
mysql> ALTER TABLE mytable max_rows=2000 avg_row_length=224;
And then reload all my data and see if that helps.

Can someone confirm my guess that avg_row_length is a factor in
the max_data_length of the table, and tell me whether my above query will
release me from my hovering 4GB table limit?
Has anyone seen this blasted SegFault issue before?

I appreciate any help I can get with this one, I'm obviously missing
something, flame away. Many thanks.

 - JP





RE: Is there a professional quality mySQL GUI for Linux?

2007-03-15 Thread ddevaudreuil
Tim Lucia [EMAIL PROTECTED] wrote on 03/15/2007 07:47:29 AM:

 I trade between SQLYog and the SQL Explorer plugin for Eclipse.  The former only
 shows 1 result set at a time (boo) while the latter shows more than one
 (yeah!)  The former doesn't let you sort columns from your own query, only
 the table preview.  The latter doesn't let you sort the columns.
 
 Neither one is perfect.
 
 Tim
 

SQLYog 5.25 was just released and it now allows multiple result sets (one
per query tab).  There is also a beta release of a new monitoring tool.

Donna

Re: max_rows query + SegFaulting at inopportune times

2007-03-15 Thread Michael Dykman

What host OS are you running? And which file system? MySQL is always
limited by the file size that the host file system can handle.

- michael dykman







--
- michael dykman
- [EMAIL PROTECTED]

- All models are wrong.  Some models are useful.




Re: processlist

2007-03-15 Thread Michael Dykman

The problem is not on the MySQL side; the problem is in the
connection management of your Java application. Are you using
connection pooling? Nothing on the server is going to close those
connections for you (although they will go stale eventually); that is
up to your client application.

- michael




--
- michael dykman
- [EMAIL PROTECTED]

- All models are wrong.  Some models are useful.
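
For what it's worth, the sleeping connections can also be inspected and, if
need be, reaped from the server side; this is only a band-aid for the pooling
problem above, and the 600-second timeout is an illustrative value, not a
recommendation:

SHOW PROCESSLIST;                    -- sleeping connections show Command = Sleep
KILL 12345;                          -- 12345 is a placeholder for an Id value from the list above
SHOW VARIABLES LIKE 'wait_timeout';  -- default is 28800 seconds (8 hours)
SET GLOBAL wait_timeout = 600;       -- idle connections are closed after this many seconds;
                                     -- only affects connections opened after the change

A pooled client still has to validate or close idle connections itself,
otherwise it will just start seeing "MySQL server has gone away" errors once
the server times them out.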




Re: max_rows query + SegFaulting at inopportune times

2007-03-15 Thread JP Hindin

On Thu, 15 Mar 2007, Michael Dykman wrote:
 What host OS are you running? And which file system? MySQL is always
 limited by the file size that the host file system can handle.

Deb Sarge is a Linux distribution; the large file support I mentioned
allows files up to 2 TB in size.









RE: Is there a professional quality mySQL GUI for Linux?

2007-03-15 Thread Tim Lucia
I have upgraded to 5.25 today and it is true.  Outstanding!







Re: max_rows query + SegFaulting at inopportune times

2007-03-15 Thread Brent Baisley
You probably did not change the max_rows setting when you created the table. If you read the manual under AVG_ROW_LENGTH for CREATE
TABLE, it says: "When you create a MyISAM table, MySQL uses the product of the MAX_ROWS and AVG_ROW_LENGTH options to decide how big the
resulting table is. If you do not specify either option, the maximum size for a table is 4GB."


The 4GB limit is more of a default speed-optimization setting. Read up on the myisam_data_pointer_size setting for background
information.
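
To make that concrete, the usual fix is to raise both options together on the existing table and then confirm that Max_data_length
actually moved; the numbers below are only illustrative (loosely based on the figures earlier in the thread), and on MyISAM this ALTER
rewrites the whole table, which may be why the earlier attempts ran for days:

ALTER TABLE mytable MAX_ROWS=100000000 AVG_ROW_LENGTH=224;  -- illustrative values only

SHOW TABLE STATUS LIKE 'mytable';  -- check the Max_data_length column afterwards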







What's the fastest way to update a table from another table?

2007-03-15 Thread mos
I have two tables and I need to update Table1 with the rows from Table2.
They have 15 columns that need updating (columns that are in both tables).
Now, I could delete the old rows from Table1, except there are around 1 million
of them, out of a 35 million row table, and this takes too long. I'd like to
update the existing rows rather than delete the old rows because this should be
faster. The REPLACE syntax is no better because, according to the
documentation, it deletes the old row on a duplicate index and adds the new
row, so this is more work than necessary. Is there no way to use SQL to
update the existing rows of a table without deleting them? I should add
that there are a few rows in Table2 that aren't in Table1, so these rows
will need to be added to Table1 because there are no rows to update.


TIA
Mike
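
One approach that is often suggested for this kind of job, sketched here with
made-up column names (id as the shared unique key, col1 and col2 standing in
for the 15 real columns), is INSERT ... ON DUPLICATE KEY UPDATE, which needs
MySQL 4.1 or later:

-- existing ids are updated in place, missing ids are inserted; nothing is deleted
-- (id, col1, col2 are placeholders; assumes a unique or primary key on table1.id)
INSERT INTO table1 (id, col1, col2)
SELECT id, col1, col2
FROM   table2
ON DUPLICATE KEY UPDATE
       col1 = VALUES(col1),
       col2 = VALUES(col2);

Unlike REPLACE, this does not delete and re-insert the matching rows; it
updates them where they sit, which is usually what you want on a
35-million-row table.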




Re: MySQL Benchmarking

2007-03-15 Thread Alex Greg

On 3/14/07, Clyde Lewis [EMAIL PROTECTED] wrote:

System Configuration: Sun Microsystems  sun4u Sun Fire E2900
System clock frequency: 150 MHZ
Memory size: 65536 Megabytes
CPU: 12 @ 1200 MHz

I'm looking for a tool that will allow us to determine the max number
of databases that can run in a single instance of MySQL on a pretty
beefy server( Spec above).

In total we will have about ~40 MySQL
instances running on this server. In each instance of MySQL, there will
be between 30-60 individual databases supporting an OLTP
application. I know that there are no known internal limits that MySQL
has regarding the number of databases that can be created, but I
would like to get my hands on a tool that can simulate the number of
databases and identify where we would potentially run into
performance issues.


As I mentioned above, your performance issues are going to come not
from the number of databases, but from (primarily) how well-designed
your database tables and queries are, and (secondly) how you configure
the mysql server(s).

One important factor to bear in mind is that with 40 separate MySQL
instances on the single 64GB server, you will have a maximum 1.6GB of
RAM per instance (excluding memory used by the O/S and other
applications). This will have to be divided up between the various
memory buffers (key_buffer, innodb_buffer_pool, etc.) allocated by
each mysql process, so you might want to reconsider if you really need
to run 40 separate mysql processes, or whether all the databases can
live in the same MySQL instance and thus probably make better use of
the available RAM.
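
As a quick sanity check on that arithmetic, each instance's main buffers can
be inspected directly; the variable names below are standard, but the sizes
you see depend entirely on each instance's configuration:

-- run against each instance; these (plus the per-connection buffers) must fit in its share of RAM
SHOW VARIABLES LIKE 'key_buffer_size';          -- MyISAM index cache
SHOW VARIABLES LIKE 'innodb_buffer_pool_size';  -- InnoDB data and index cache
SHOW VARIABLES LIKE 'max_connections';          -- multiplies the per-connection buffers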

With regards to stress-testing and benchmarking, two popular tools for
benchmarking MySQL servers are:

Super Smack: http://vegan.net/tony/supersmack/
Sysbench: http://sysbench.sourceforge.net/


We need to determine whether to have multiple
servers to support the ~40 instances or have all ~40 instances on the
same machine. Any help or ideas would be greatly appreciated with
this decision.


I would be inclined to have separate machines, rather than put
everything on one huge server. By spreading the data around, you are
reducing the risk if the one mega-machine were to become unavailable,
and also reducing resource contention (on the disks, CPU, RAM etc.).


-- Alex




Want a Free Ride to MySQL Conference and Expo 2007?

2007-03-15 Thread Jeremy Cole

Hello All,

This year, Proven Scaling is sponsoring three people to attend the MySQL 
Conference and Expo 2007 in sunny Santa Clara, California, April 23-26.


The MySQL Conference and Expo is a great place to meet the MySQL 
developers, other MySQL users, and many interesting people. It is the 
premier MySQL event of the year, and has been getting bigger and bigger 
each year. If you really want to go, but you haven't been able to afford 
it for any reason, Free Ride is for you.


From your submissions, a panel will choose three people to receive:

  * Round-trip airfare from their location to SJC, SFO, or OAK airport
  * Transportation from the airport to hotel/conference
  * Hotel accommodations
  * A meal stipend
  * A full conference pass, provided by MySQL AB (Thanks, MySQL!)

While all entries will be considered, we will give preference to those 
who submit complete answers, demonstrate creativity and drive, can get 
to Northern California for a practical cost, and who would not otherwise 
be able to make it to the conference. Submission is open to everyone 
worldwide.


Entries will be accepted until Midnight PDT (GMT-7), March 23, 2007, and 
the winners will be announced shortly thereafter.


You can enter by visiting:

  http://www.provenscaling.com/freeride/

Good luck, and we hope to see you at the conference!

Regards,

Jeremy Cole
Owner / MySQL Geek
Proven Scaling LLC

--
high performance mysql consulting
www.provenscaling.com




how to match all words

2007-03-15 Thread Patrick Aljord

Hey all,
I have a table 'clients' like this:
id int(5),
name varchar(55),
address varchar(55)

I would like to select all the records that have '%x%' and '%y%', but
'%x%' can be in name and '%y%' can be in address. Also, in my query
there are generally more words to match (x, y, z, t, etc.) and I can't use
full-text search. Anyway, what's the best way to do this?

Thanx in advance

Pat




Re: how to match all words

2007-03-15 Thread mos


Pat,
If your table is small (a couple thousand rows), then you could try:

select * from clients where
  (name like '%word1%' or address like '%word1%') and
  (name like '%word2%' or address like '%word2%') and
  ...
  (name like '%word10%' or address like '%word10%');


This is quite slow and clumsy to implement. Why can't you use full-text
search? It is your best solution.


Mike 
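
A slightly more compact variant of the same idea (not from the thread, and no
faster, since a leading-wildcard LIKE can't use an index either way) is to
concatenate the two columns once per word:

select * from clients where
  concat_ws(' ', name, address) like '%word1%' and   -- each word may match in either column
  concat_ws(' ', name, address) like '%word2%' and   -- repeat one line per word
  concat_ws(' ', name, address) like '%word10%';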

