Re: Writing Arabic Text to a DataBase with Perl

2011-12-12 Thread Jon Forsyth
Hello Michaël,

Thank you for the tips.  SHOW CREATE TABLE helped me figure out that my
table was using Latin1 and I was able to change it to utf-8.  However, I
did not see any encoding specified on the column with this command.

I tried to fix the connection encoding with this line of Perl code:

$dbh->do("SET character_set_client='utf8'");

Now the output is somewhat improved because the Arabic is now written to
the correct column, but there seems to be an unresolved encoding issue, as
can be seen in the token_Arabic column from a mysql command line query:

+----+--------------+---------+
| id | token_Arabic | variant |
+----+--------------+---------+
|  1 | ??           | yA      |
|  2 |              | bny     |
|  3 | ??           | dA      |
|  4 |              | klmk    |
|  5 | ??           | Ey$ryn  |
+----+--------------+---------+

Previously this query showed Arabic characters, just in the wrong column.
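
One fix that often resolves this kind of half-converted output, assuming
DBD::mysql and a utf8 table: set the complete connection character set
rather than only character_set_client. SET NAMES covers
character_set_client, character_set_connection, and character_set_results
in one statement (issued once per connection, e.g. via $dbh->do, alongside
DBD::mysql's mysql_enable_utf8 connect option on the Perl side):

SET NAMES 'utf8';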

Thanks,

Jon


Writing Arabic Text to a DataBase with Perl

2011-12-10 Thread Jon Forsyth
Hello,

I hope someone can help me solve this.  I am trying to write 2 string
values to separate fields of the same row in a mysql DB table using the
Perl Module DBD::mysql.  One string value is of English letters, while the
other is of Arabic letters.  However, they are somehow switched and written
to the wrong field--the Arabic to the English and English to the Arabic
field.  I believe my file handle declaration ensures that the encoding I'm
dealing with is UTF-8:

---
open my $file, '<:encoding(utf8)', 'input_file.xml' or die "cannot open
file: $!";
---

and the mysql query:

---
$dbh->do("INSERT INTO bama_xml (token_Arabic, variant) VALUES
('$arab','$engl')");
---


I tried switching the values as below, but they still write to the wrong
place.

---
$dbh->do("INSERT INTO bama_xml (token_Arabic, variant) VALUES
('$engl','$arab')");
---

I also dropped the English string completely, but the Arabic still writes
to the wrong field--strange behavior, but I'm sure it's a problem with the
Arabic.

Thanks,

Jon


Null Output Problem

2011-10-11 Thread Jon Forsyth
Hello,

I have a problem with the following query:

SELECT subject_identifier, COUNT(*) FROM asr_sentence_score WHERE
total_words = correct_words GROUP BY subject_identifier;

Output:

+--------------------+----------+
| subject_identifier | COUNT(*) |
+--------------------+----------+
|                222 |        2 |
|                111 |        2 |
|                333 |        1 |
|                444 |       11 |
|                888 |        6 |
|                666 |       25 |
|                777 |        2 |
|                555 |       20 |
|                999 |        4 |
|                000 |        3 |
+--------------------+----------+
10 rows in set (0.00 sec)

The asr_sentence_score table is a list of test results where each row is a
single item (sentence) on the test.  The subject_identifier is unique to the
test taker, and is repeated for each test item.  I was using this query to
compute a count of how many items each test taker scored perfectly (total_words
= correct_words), but I realized that this excludes a test taker who did not
score perfectly on any item.  I want to output a '0' for those that did not
score any item perfectly.  My best guess at a solution would be to revise
the WHERE clause to something like this:

WHERE [total_words = correct_words] OR [COUNT(total_words = correct_words) =
0]

but this is bad syntax.  I put the brackets there for readability.
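
One way to get the zeros, assuming the schema described above: drop the
WHERE clause and count perfect items with a conditional sum, since in MySQL
a boolean expression evaluates to 1 or 0 (test takers who never scored a
perfect item then show a sum of 0):

SELECT subject_identifier, SUM(total_words = correct_words) AS perfect_items
FROM asr_sentence_score
GROUP BY subject_identifier;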

Thanks,

Jon


Re: import database

2011-10-10 Thread Jon Siebert
Here is one way:

http://dev.mysql.com/doc/refman/5.1/en/mysqldump.html



On Mon, Oct 10, 2011 at 4:47 PM, Jessica Bela jessicabel...@yahoo.com wrote:

 Hi all,
 how can I import onto my PC a MySQL database that was created on
 another PC with other tools?
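
A minimal sketch of the usual dump-and-restore cycle from that manual page,
with hypothetical database names (run the first command on the original PC,
the second on yours, after creating the empty target database):

mysqldump -u root -p source_db > dump.sql
mysql -u root -p target_db < dump.sql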


Re: Concerned : Developer getting There is no 'root'@'%' registered error message

2011-08-17 Thread Jon Siebert
Perhaps he could make the stored procedure functional by creating a user
with the EXECUTE privilege (somebody please feel free to correct me if I am
mistaken), since it is only to be used to execute a stored procedure. You
don't want someone attempting the 'root' username with a typical 'mysql'
password and being granted ALL PRIVILEGES. Just my opinion.


On Wed, Aug 17, 2011 at 9:54 AM, Johnny Withers joh...@pixelated.net wrote:

 It would allow anyone from anywhere to access your server as root with full
 access to all databases using the password 'mysql'. Not very secure. I
 don't
 recommend having a root@% defined.

 On Aug 17, 2011 8:50 AM, Brent Clark brentgclarkl...@gmail.com wrote:

 Hiya

 Thank you so much for replying. I really appreciate it.

 I know the answer (well I think I do :) ), but I'm still going to ask. What
 is the risk if I do the GRANT ALL PRIVILEGES ON *.* TO 'root'@'%' IDENTIFIED
 BY 'mysql' WITH GRANT OPTION;
 to satisfy the developer.

 Thank you again.
 Brent



 On 17/08/2011 15:42, Johnny Withers wrote:

  
  
   Change the definer to one of your registered root accounts: Root@127 or
   root@localhost.

   On Aug 17, 2011 8:39 AM, Brent Clark brentgclarkl...@gmail.com wrote:



Re: does mysql support master to master replication

2011-08-02 Thread Jon Siebert
MySQL, to my knowledge, supports only master-to-slave replication, as well
as chained master-slave-slave replication.

http://dev.mysql.com/doc/refman/5.0/en/replication.html

On Wed, Aug 3, 2011 at 1:10 AM, Angela liu yyll2...@yahoo.com wrote:

 Hi, Folks:

 Does mysql support master to master replication, or master to slave
 replication only?  I did not find the answer in the MySQL manual, but found
 some discussion over the internet.

 Does anybody know?  if so, anybody has implemented master to master
 replication in production?

 Thanks

 Angela



Re: does mysql support master to master replication

2011-08-02 Thread Jon Siebert
I had seen a discussion here as well, but honestly did not test it. It may
be of help.

http://www.linkedin.com/groupItem?view=srchtype=discussedNewsgid=72881item=60056153type=membertrk=eml-anet_dig-b_pd-ttl-cn
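
For reference, the circular (dual-master) setups discussed there are just
two servers that are each other's master and slave. A sketch with
hypothetical hosts and credentials; each server needs a unique server-id
and log-bin enabled, and the statements below are mirrored on the second
server, pointing back at the first:

CHANGE MASTER TO MASTER_HOST='serverB', MASTER_USER='repl',
  MASTER_PASSWORD='replpass', MASTER_LOG_FILE='mysql-bin.000001',
  MASTER_LOG_POS=4;
START SLAVE;
-- keep auto_increment keys from colliding between the two writers
SET GLOBAL auto_increment_increment = 2;
SET GLOBAL auto_increment_offset = 1;   -- use 2 on the other server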

On Wed, Aug 3, 2011 at 1:10 AM, Angela liu yyll2...@yahoo.com wrote:

 Hi, Folks:

 Does mysql support master to master replication, or master to slave
 replication only?  I did not find the answer in the MySQL manual, but found
 some discussion over the internet.

 Does anybody know?  if so, anybody has implemented master to master
 replication in production?

 Thanks

 Angela



Re: How to view Query Execution time

2011-08-01 Thread Jon Siebert
http://dev.mysql.com/doc/refman/5.1/en/query-log.html has information on the
query log that may be useful to you in the future. Also look into the slow
query log, which can help as well.
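
A sketch of the profiling approach mentioned below (available from MySQL
5.0.37 on), with a hypothetical query:

SET profiling = 1;
SELECT COUNT(*) FROM some_table;
SHOW PROFILES;               -- lists recent statements with their durations
SHOW PROFILE FOR QUERY 1;    -- per-stage timing for the first statement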

On Mon, Aug 1, 2011 at 11:54 AM, Prabhat Kumar aim.prab...@gmail.com wrote:

 you can also use EXPLAIN, which will give you much more detail.
 http://dev.mysql.com/doc/refman/5.5/en/execution-plan-information.html

 http://www.techiequest.com/mysql-visual-explain-hierarchical-view-of-query-execution-plan/

 On Sun, Jul 31, 2011 at 11:45 PM, Suresh Kuna sureshkumar...@gmail.com
 wrote:

  Usually, when a query finishes running, the client displays how long it
  took.
 
  Or else enable profiling and run the query to check the exact time it
  took to execute at each stage.
 
  On Mon, Aug 1, 2011 at 12:11 PM, Adarsh Sharma adarsh.sha...@orkash.com
  wrote:
 
   Dear all,
  
   I want to know how much time it took to run a sample query.
   In PostgreSQL, we enable timing with the \timing command.

   Is there any way to enable this in MySQL?
  
   Thanks
  
  
  
 
 
  --
  Thanks
  Suresh Kuna
  MySQL DBA
 



 --
 Best Regards,

 Prabhat Kumar
 MySQL DBA

 My Blog: http://adminlinux.blogspot.com
 My LinkedIn: http://www.linkedin.com/in/profileprabhat



Low priority write

2011-07-09 Thread Jon Siebert
  Low Priority Lock

Hello, I was wondering if anybody would have some input.

This will be on the same table.

Client (A) acquires a READ lock;
immediately thereafter a large number of clients request read locks as
well, and client (B) requests a LOW PRIORITY insert.

Another surge of clients request read locks as client (C) requests an INSERT
lock.

The last of the read locks are finished selecting.

Does client (B) or (C) acquire their lock first?

Will it go by priority, or in the order in which the requests were received?

I could not find documentation in forums or MySQL reference regarding this.
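
For reference, the modifier in question is written as below. Going by the
manual's lock rules, a LOW_PRIORITY write waits until no clients at all are
reading the table, while a normal write only waits for the locks already
held, so client (C) would be expected to get its lock before client (B);
treat that as a reading of the docs rather than a tested result.

INSERT LOW_PRIORITY INTO some_table (col) VALUES ('x');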

Thanks!


MySQL Cluster/Cluster Carrier Grade Changelogs Have Moved

2008-04-24 Thread Jon Stephens

Hi,

The changelogs for MySQL Cluster have been consolidated and can now be found 
here: http://dev.mysql.com/doc/refman/5.1/en/mysql-cluster-news.html


They're now arranged by NDB version number:

http://dev.mysql.com/doc/refman/5.1/en/mysql-cluster-news-6-3.html

http://dev.mysql.com/doc/refman/5.1/en/mysql-cluster-news-6-2.html

http://dev.mysql.com/doc/refman/5.1/en/mysql-cluster-news-6-1.html

Changelogs for individual releases can be found by converting all the "."
characters in the version string to "-" characters, appending this to
http://dev.mysql.com/doc/refman/5.1/en/mysql-cluster-news- and tacking on
".html" at the end.


This sounds complicated, so here's an example:

The changelog for MySQL 5.1.23-ndb-6.2.14 can be found at 
http://dev.mysql.com/doc/refman/5.1/en/mysql-cluster-news-5-1-23-ndb-6-2-14.html


cheers

jon.

--


Jon Stephens - [EMAIL PROTECTED]
Technical Writer - MySQL Documentation Team
Sun Microsystems AB - Database Technology Group
Liljeholmen, Stockholm, Sweden (GMT +01.00)
Mobile: +46 (0) 736 773 993
Skype: plastic-fish
MySQL: www.mysql.com
Sun: www.sun.com





Re: Understanding mysql NULL handling ...

2007-06-13 Thread Jon Ribbens
On Wed, Jun 13, 2007 at 12:50:03PM -0700, Mufaddal Khumri wrote:
 mysql> select * from t1 where sid != 2;
 
 As you can see, the rows that had sid = NULL did not get returned in  
 the results when I did ... WHERE sid != 2.
 
 Question: Is this behaviour correct and in accordance with the SQL  
 standard, or is it specific to MySQL?

It is in accordance with the standard. NULL is not equal to anything,
and it is not unequal to anything. NULL is just the magic ineffable
NULL. If you want your NULL rows too you must use:

  SELECT * FROM t1 WHERE sid != 2 OR sid IS NULL;
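
(MySQL also has the null-safe comparison operator <=>, which treats NULL as
an ordinary comparable value, so the same result can be had with:

  SELECT * FROM t1 WHERE NOT (sid <=> 2);

which matches the NULL rows too.)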




Re: maximum number of records in a table

2007-06-12 Thread Jon Ribbens
On Tue, Jun 12, 2007 at 11:09:41AM -0400, kalin mintchev wrote:
  I guess a lot of that depends what an acceptable query execution time for
  you is.
 
 well...  I don't really know. 30 secs maximum?! I've never worked with
 such huge tables. 3-5 million records is fine, but I've never worked on a
 db with a table with 100,000,000 records.

I wouldn't worry about it. I have a table here with nearly a billion
rows, and it fetches based on the primary key in 0.00 sec.

 my calculation was mostly based on resources - like RAM. Like I mentioned
 earlier, the .MYD and .MYI files together on the current one that I have -
 which has about 1.2 million records - are 90 MB.
 Are the .MYI files kept in RAM, or both .MYD and .MYI?

Neither of them is kept in RAM. You want a reasonable amount of
memory for buffers, query caching, etc, but you don't need to hold
any of the files in memory.

My main concern with large tables is disk space, especially if you're
ever going to want to do ALTER TABLE, when you will need enough free
space to hold the existing and new copies of the table on disk at the
same time.




Re: only select privilege

2007-06-08 Thread Jon Ribbens
On Fri, Jun 08, 2007 at 03:14:18PM +0530, Ananda Kumar wrote:
 How do i give only select privilege to a specific database.
 
 GRANT SELECT ON abc.* to 'ab'@'%' identified by 'ab';

Like that.

 mysql> select select_priv from user where user='qa';
 +-------------+
 | select_priv |
 +-------------+
 | N           |
 +-------------+
 But it shows as N. Can you please help me?

If that was 'Y' then the user would have select privileges on *all*
databases.
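
Database-level grants live in mysql.db rather than mysql.user, so the
per-database check for this user would be something like:

  SELECT db, select_priv FROM mysql.db WHERE user = 'ab';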




Re: only select privilege

2007-06-08 Thread Jon Ribbens
On Fri, Jun 08, 2007 at 04:25:00PM +0530, Ananda Kumar wrote:
So, whatever I did is right?

Yes. User 'ab' can SELECT on all tables in database 'abc'.




Re: MySQL Magazine - Issue 1 available NOW!!!!

2007-06-07 Thread Jon Ribbens
On Wed, Jun 06, 2007 at 05:56:44PM -0700, Peter Rosenthal wrote:
 On 04/06/07, Jon Ribbens [EMAIL PROTECTED] wrote:
 Um, what? Both that and the methods described in the magazine are
 completely wrong. You use mysql_real_escape_string(), that's it.
 
 I would disagree on the use of mysql_real_escape_string(). The use of
 placeholders is much safer from a maintenance and 'oops look I typoed it'
 perspective.

That's not a disagreement - you're just talking about a different
abstraction layer. Behind the scenes your 'placeholder' API will
be using mysql_real_escape_string().




Re: MySQL Magazine - Issue 1 available NOW!!!!

2007-06-04 Thread Jon Ribbens
On Mon, Jun 04, 2007 at 02:44:25PM -0700, Daevid Vincent wrote:
 Thanks for the magazine. I already incorporated a little extra SQL
 injection checking into my db.inc.php wrapper...
 
 //[dv] added to remove all comments (which may help with SQL injections
 as well).
 $sql = preg_replace('/#.*?[\r\n]/s', '', $sql);
 $sql = preg_replace('/--.*?[\r\n]/s', '', $sql);
 $sql = preg_replace('@/\*(.*?)\*/@s', '', $sql);

Um, what? Both that and the methods described in the magazine are
completely wrong. You use mysql_real_escape_string(), that's it.




Re: InnoDB dropping records / MyISAM working as it should

2007-05-15 Thread Jon Ribbens
On Tue, May 15, 2007 at 04:13:33PM -0500, Kenneth Loafman wrote:
 Can't tell.  The docs are somewhat lacking in detail; however, if I do a 
 db.autocommit(True), it works as it should.
 
 Will have to dig into the API code and see if that is where the semantic 
 discontinuity lies.

The MySQL-python-1.2.2/doc/FAQ.txt says:

| Starting with 1.2.0, MySQLdb disables autocommit by default, as
| required by the DB-API standard (PEP-249). If you are using InnoDB
| tables or some other type of transactional table type, you'll need
| to do connection.commit() before closing the connection, or else
| none of your changes will be written to the database.
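
The SQL-level behaviour being described, as a sketch against a hypothetical
InnoDB table:

  SET autocommit = 0;
  INSERT INTO some_innodb_table VALUES (1);
  COMMIT;  -- without this, the INSERT is rolled back when the connection closes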




Re: InnoDB dropping records / MyISAM working as it should

2007-05-15 Thread Jon Ribbens
On Tue, May 15, 2007 at 06:39:21PM -0500, Kenneth Loafman wrote:
 Interesting... I guess the intent was a disconnect that would break code 
 trying to work on MySQL, regardless of engine selected.  That decision 
 makes it two products, MySQL/MyISAM and MySQL/InnoDB, with different 
 semantics.  Yes, you can work around it, but it's not transparent.

I think the purist's response to that would be that it is a bug in
MySQL that MySQL does not support transactions always. But anyone
for whom that is a problem probably isn't on this list ;-)




Re: Mysterious 'Lost connection' errors

2007-05-11 Thread Jon Ribbens
On Thu, May 10, 2007 at 05:17:12PM +0100, Jon Ribbens wrote:
 I suspect some sort of bug in the MySQLd authentication code.

I've managed to discover using --debug that it's due to MySQLd failing
to handle EINTR from read() in the authentication stage.

I've filed a bug report: http://bugs.mysql.com/bug.php?id=28359

Does anyone know how/where to best attract the attention of a MySQL
developer to this bug?




Re: Mysterious 'Lost connection' errors

2007-05-10 Thread Jon Ribbens
On Thu, May 10, 2007 at 03:37:26AM +0100, Jon Ribbens wrote:
 I've just upgraded all the clients and servers to 5.0.41 (which looks
 like it just came out); I'll see what happens.

It hasn't solved the problem, but it has changed the error message to:

  OperationalError:
    (2013, "Lost connection to MySQL server at 'reading
authorization packet', system error: 0")

(still nothing in the logfile).

Does this ring any bells for anyone?




Re: Mysterious 'Lost connection' errors

2007-05-10 Thread Jon Ribbens
On Thu, May 10, 2007 at 08:58:37AM -0600, mos wrote:
 If you do a google search:
 http://www.google.ca/search?q=lost+mysql+connection+%22reading+authorization+packet%22&hl=en&start=90&sa=N
 
 you'll find about a hundred web sites encountering the exact same error. 

Indeed, I noticed that ;-)

 Maybe the problem is with the hardware, like your network card 
 disconnecting from the MySQL server machine? Or TCP/IP?

The only comment I could find from a MySQL employee suggested that.
But I don't believe them. There are no other network problems visible,
and the 'authorization packet' is mid-way through the connection setup
(i.e. the TCP/IP connection has already been successfully used to both
send and receive data if we get as far as the 'authorization packet').
I suspect some sort of bug in the MySQLd authentication code.




Mysterious 'Lost connection' errors

2007-05-09 Thread Jon Ribbens
We are using MySQL 5.0.27 on RedHat Enterprise Linux ES release 4,
and the MySQL-python-1.2.1_p2 connector.

We are getting intermittent mysterious errors as follows:

  OperationalError:
(2013, 'Lost connection to MySQL server during query')

when attempting to connect to the MySQL server (note: on the actual
connection attempt, this is before even trying a query). There doesn't
appear to be any particular pattern to when these errors occur.

The client and server are on different machines, communicating via
TCP, but I have not managed to find any networking problems.

Does anyone have any suggestions as to what the problem might be, or
how we might go about trying to solve it?

-- 
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/[EMAIL PROTECTED]



Re: Mysterious 'Lost connection' errors

2007-05-09 Thread Jon Ribbens
On Wed, May 09, 2007 at 09:22:34AM -0400, Michael Dykman wrote:
 I realize it's quite unlikely that you have performed a similar
 sloppy import, but there is likely some edge condition on your server
 (weird permissions in the data directory, corrupted tables, etc.),
 so I still recommend that you scrutinize your server logs for
 evidence of a spontaneous restart.  If that turns up nothing, you
 might try a fresh install of mysql on a separate host to see if the
 problem persists.

Thanks for your suggestions. The hostname.err log contains absolutely
nothing however, just the usual 'mysqld started' and 'Version:' lines.
Certainly there's nothing about any restart.

 Worst case, there is an upgrade patch available which might
 magically raise you above the problem.

Indeed, I see there is 5.0.37 now. I'd rather not go through an
upgrade though unless I knew it was likely to fix the problem.




Re: Mysterious 'Lost connection' errors

2007-05-09 Thread Jon Ribbens
On Wed, May 09, 2007 at 11:17:59AM -0400, Michael Dykman wrote:
 When we first examined our server logs, we saw the same..  in our case
 (again) it was only when we slowed down and examined the timestamps on
 the start/stop messages that we realized that the server was
 restarting at unexpected intervals.

The last restart, according to both the content of the log, and the
operating system timestamp on the log file itself, was 6 days ago.
The 'lost connection' message has happened several times today,
however.

 (how did you get your binaries?  pre-compiled from the archive
 or build your own? and for what OS?)

We used the official RPMs from www.mysql.com for our exact version of
RedHat Linux (RedHat Enterprise Linux ES Release 4, 64-bit).

 When you do find the cause let me know; I'm thinking of starting a
 collection :-)

I'll keep you informed ;-)




Re: Query problem

2007-05-09 Thread Jon Ribbens
On Wed, May 09, 2007 at 07:14:38PM +0200, Martijn Tonies wrote:
 I have a table of properties that is linked to a table of images with a
 one property to many images relationship. I have managed this with nested
 queries but want to try to do it in one query. My current query:
 
 $query = "SELECT * FROM images, properties WHERE images.property_id =
 properties.property_id";
 
 As you can see from the query this returns a row for every image so if a
 property has 3 images associated with it it will be returned 3 times.
 
 What exactly is your question?

I think he somehow wants to return each property once only but still
have every image returned in the result.
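
If that is the goal, one common approach is to collapse the images per
property with GROUP_CONCAT; a sketch against the tables above (image_id is
a hypothetical column name):

  SELECT properties.*, GROUP_CONCAT(images.image_id) AS image_ids
  FROM properties
  LEFT JOIN images ON images.property_id = properties.property_id
  GROUP BY properties.property_id;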




Re: Mysterious 'Lost connection' errors

2007-05-09 Thread Jon Ribbens
On Wed, May 09, 2007 at 10:07:41PM -0400, Mathieu Bruneau wrote:
 We found a similar issue because we were using persistent connections in
 PHP and had a firewall between the MySQL server and the webserver. The
 problem was that our persistent connections were set up to last up to
 something like 8 hours, but the firewall only kept state for a connection
 for up to 1 hour when no data was transferred. After the firewall had
 flushed the state of the connection and the webserver tried to
 communicate through it, it reported "Lost connection to MySQL
 server during query".

It's a good suggestion, but I'm pretty sure there's no firewall
acting between the client and the server, and that the connection is
not being persisted (i.e. all connection attempts are genuine new
connections).

I've just upgraded all the clients and servers to 5.0.41 (which looks
like it just came out); I'll see what happens.




MySQL 5.1.14 Release - Change in Cluster System Tables

2006-12-11 Thread Jon Stephens

Hi,

The following information is important to all MySQL Cluster 5.1 users, 
and especially to those using MySQL Cluster Replication.


It was not included in the 5.1.14 release announcement, so I'm quoting 
the relevant update to the 5.1.14 changelog ( 
http://dev.mysql.com/doc/refman/5.1/en/news-5-1-14.html ) here:


[begin]

Two major changes have taken place with regard to the MySQL Cluster 
system tables. These are:


 1. Incompatible change: The cluster database is no longer used. The
tables formerly found in the cluster database are now in the mysql
database, and have been renamed as ndb_binlog_index, ndb_apply_status,
and ndb_schema.


 2. The mysql.ndb_apply_status and mysql.ndb_schema tables (formerly
cluster.apply_status and cluster.schema) are now created by ndb_restore
in the event that they do not already exist on the slave cluster.
(Bug#14612: http://bugs.mysql.com/14612)


Note: When upgrading from versions of MySQL previous to 5.1.14 to 5.1.14 
or later, mysql_fix_privilege_tables merely creates a new 
mysql.ndb_binlog_index table, but does not remove the existing cluster 
database (or, if upgrading from MySQL 5.1.7 or earlier, the existing 
cluster_replication database), nor any of the tables in it.


For more information, see Section 15.10.4, “Replication Schema and 
Tables”: 
http://dev.mysql.com/doc/refman/5.1/en/mysql-cluster-replication-schema.html


[end]

The fact that news of this non-trivial change for MySQL Cluster 5.1 did 
not make it into the official 5.1.14 release announcement is entirely my 
fault, and I hope you will accept my deepest apologies for the omission.


cheers,

j.

--

Jon Stephens - [EMAIL PROTECTED]
Technical Writer - MySQL Documentation Team
___ Brisbane, Australia (GMT +10.00)
_x_ Bangkok, Thailand (GMT +07.00)
___ Office: +61 (7) 3209 1394
_x_ Office: +66 0 2740 3691 5 ext. #201
Mobile: +61 402 635 784
MySQL AB: www.mysql.com





Re: BINARY(N) as primary key?

2006-11-08 Thread Jon Frisby
I'm curious to know why simply having a UNIQUE constraint on the  
column is inadequate...


-JF

On Nov 7, 2006, at 6:47 PM, Michael Fischer wrote:



Any thoughts on using BINARY(N) or CHAR(N) as a primary key?

Performance issues? In mysql, in general?

Yes, in the context of the application, there is a very good
reason for doing this, and not using an auto increment integer.


Michael
--
Michael Fischer Happiness is a config option.
[EMAIL PROTECTED]Recompile and be happy.









Re: BINARY(N) as primary key?

2006-11-08 Thread Jon Frisby
It's my understanding that a PK in MySQL is basically the same as a  
unique index -- for MyISAM tables at least.  For InnoDB it's a bit  
different with InnoDB storing rows within the PK index (and inserting  
a hidden PK if none is provided).


In short:  I don't think you'll see any better performance by making  
your PK be this BINARY column, instead of just using an  
auto_increment PK column and having a UNIQUE constraint on your  
BINARY column.  I doubt you'll see significantly worse performance by  
doing it either.
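
The two layouts under discussion, as a sketch with hypothetical table and
column names:

  -- BINARY value as the PK (InnoDB clusters rows on it):
  CREATE TABLE t_bin (
    bin_key BINARY(16) NOT NULL PRIMARY KEY,
    payload VARCHAR(255)
  ) ENGINE=InnoDB;

  -- auto_increment PK plus a UNIQUE constraint on the BINARY value:
  CREATE TABLE t_int (
    id      INT UNSIGNED NOT NULL AUTO_INCREMENT PRIMARY KEY,
    bin_key BINARY(16) NOT NULL,
    payload VARCHAR(255),
    UNIQUE KEY (bin_key)
  ) ENGINE=InnoDB;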


-JF


On Nov 8, 2006, at 6:02 PM, Michael Fischer wrote:


On Wed, Nov 08, Jon Frisby wrote:


On Nov 7, 2006, at 6:47 PM, Michael Fischer wrote:



Any thoughts on using BINARY(N) or CHAR(N) as a primary key?

Performance issues? In mysql, in general?

Yes, in the context of the application, there is a very good
reason for doing this, and not using an auto increment integer.



I'm curious to know why simply having a UNIQUE constraint on the
column is inadequate...

-JF


I'm concerned with performance, not integrity constraints.

I was wondering, with respect to mysql internals, whether there
was any substantial penalty to using CHAR or BINARY vs. INTEGER
based primary keys. I imagine, though I have not probed the
source code, that comparisons are done with bitwise-ands, or
memcmp(3). For this of course, VARCHAR and VARBINARY fields
would be a pain, because, from row to row, you couldn't
accurately optimize for the width of the data in the key field.
However, it might be perfectly reasonable to do so with fixed
length CHAR or BINARY ones.

Thanks.

Michael
--
Michael Fischer Happiness is a config option.
[EMAIL PROTECTED]Recompile and be happy.






Re: MyISAM vs InnoDB

2006-11-01 Thread Jon Ribbens
Miles Thompson [EMAIL PROTECTED] wrote:
 At 07:56 AM 11/1/2006, Leandro Guimarães Faria Corcete DUTRA wrote:
 snip .. further notices; and SolidDB, which
 is still β.
 
 Help this poor English-speaker - what's the symbol you use to describe 
 SolidDB?

I assume it is a beta character, since solidDB for MySQL is indeed
in beta. See http://dev.soliddb.com/




jdbc question regarding enum

2006-11-01 Thread Jon Drukman
Disclaimer: I know nothing about Java or JDBC.  A co-worker is trying to
access a database I've set up using JDBC.  He says that my enum column
is always returning an integer value instead of the string.  Obviously
this is less than desirable.


Does anybody have any advice I could give him on where to look or
something to change?


-jsd-





How can I do a SELECT without locking the table against updates?

2006-10-19 Thread Jon Ribbens
I have a simple single-table SELECT query that takes several
minutes to complete, due to a very large number of result rows being
involved. I don't think there is any way to optimise the query - MySQL
is already using the appropriate index etc; it's just a huge table and
the query has a large result set.

While the SELECT is executing, any INSERT/UPDATE queries on the table
are blocked.

Is there any way I can tell MySQL *not* to lock the table while the
SELECT is executing? I know this could lead to ambiguities, but in
this application it doesn't matter - if, for example, a new matching
row is added after the SELECT starts but before it finishes, it simply
doesn't matter whether that row is returned in the results or not.

If there is no way to do this in MySQL, does anyone have any
innovative suggestions as to how I could work around the problem? The
table is very large, so solutions involving multiple copies are tricky
to implement.
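
(One workaround, if the table can be converted: InnoDB uses consistent
non-locking reads, so a long SELECT no longer blocks writers. A sketch,
with the caveat that the conversion itself needs room for a full second
copy of the table, much like the ALTER TABLE concern above:

  ALTER TABLE big_table ENGINE = InnoDB;

where big_table is a hypothetical name.)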




Re: Partition Help

2006-10-08 Thread Jon Stephens



Date: Mon, 02 Oct 2006 13:22:37 -0400
To: mysql@lists.mysql.com
From: Michael Gargiullo [EMAIL PROTECTED]
Subject: RE: Partition Help
Message-id: [EMAIL PROTECTED]


snip/


Daily partitions are created then sub partitioned across 6 data disks
and 6 index disks.

We attempted to build a new table per hour, and merge them after 3
hours. We killed the processes after 2 hours. 1 hour of data is approx
18GB. The server only has 12GB of RAM.

I wish we could partition down to TO_HOUR instead of TO_DAY


There's some discussion of this issue on the Partitioning Forum - 
http://forums.mysql.com/list.php?106 - and you're more likely to get 
topic-specific attention there from users and MySQL developers working 
with partitioning than you are here on the General list.


Also, have you checked out the recent articles on partitioning available 
from our DevZone? These include:


http://dev.mysql.com/tech-resources/articles/mysql_5.1_partitioning_with_dates.html

http://dev.mysql.com/tech-resources/articles/mysql_5.1_partitions.html

- both of which discuss date-based partitioning techniques that you 
might find useful.


cheers

jon.


--

Jon Stephens - [EMAIL PROTECTED]
Technical Writer - MySQL Documentation Team
___ Brisbane, Australia (GMT +10.00)
_x_ Bangkok, Thailand (GMT +07.00)
___ Office: +61 (7) 3209 1394
_x_ Office: +66 0 2740 3691 5 ext. #201
Mobile: +61 402 635 784
MySQL AB: www.mysql.com





Re: write-protection for some tables while other are writable

2006-08-24 Thread Jon Molin

On 8/23/06, Gregor Reich [EMAIL PROTECTED] wrote:

Hi all

Is there a possibility to have some tables write-protected while others
in the same db are not (and yet have the write-protected ones remain
updatable through the replication mechanism, i.e. they are tables on a
slave server)? I guess that both LOCK TABLES and read-only in my.cnf
fail to get this result.



How about only granting select rights to the user?
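
Grants can be issued per table, so a sketch with hypothetical names would
be:

  GRANT SELECT ON mydb.protected_table TO 'appuser'@'%';
  GRANT SELECT, INSERT, UPDATE, DELETE ON mydb.writable_table TO 'appuser'@'%';

The slave's replication SQL thread applies changes regardless of these
grants, so the protected tables stay updatable via replication.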

/Jon




Slow queries

2006-08-17 Thread Jon Molin

Hi list

I have 5 tables:

words (word_id int auto_increment, word varbinary(40)) (has ~3.5M
rows) with the keys:
PRIMARY KEY  (`word_id`),UNIQUE KEY `word_ind` (`word`)

phrases (phrase_id int auto_increment, phrase varbinary(100)) (has
~11M rows) with the keys:
PRIMARY KEY  (`phrase_id`),UNIQUE KEY `phrase_ind` (`phrase`)

phrase_words (phrase_id, word_id) (has ~31M rows) with:
UNIQUE KEY `phrase_ind` (`phrase_id`,`word_id`)
KEY `word` (`word_id`), KEY `phrase` (`phrase_id`)

word_searches (word_id, search_date date, search hour char(2), amount
smallint, type char(8), location char(2)) with:
KEY `word_search` (`word_id`),
KEY `id_search` (`search_date`),
KEY `word_date` (`word_id`,`search_date`)

(and a similar for phrase_searches, these two tables are merge tables
with one table for each month, each table having 15-30M rows)

phrases are built of words identified by phrase_words (these are
not human language words and phrases but rather random bytes, where
some are human readable).

Now, I'm trying to find out how many times words 1..n have been
searched for, and how many times phrases containing words 1..n have
been searched for.

These queries take a really long time to execute. First I select for the words:
EXPLAIN SELECT w.word as word, w.word_id as word_id, sum(ws.amount) as
amount FROM words w, word_searches ws WHERE
ws.word_id=w.word_id AND w.word IN ('p', 'xyz', 'zzz', 'abc') AND
ws.search_date >= '2006-07-17' AND ws.search_date <=
'2006-08-16' group by ws.word_id;
+----+-------------+-------+-------+---------------------------------+-------------+---------+----------------------+------+----------------------------------------------+
| id | select_type | table | type  | possible_keys                   | key         | key_len | ref                  | rows | Extra                                        |
+----+-------------+-------+-------+---------------------------------+-------------+---------+----------------------+------+----------------------------------------------+
|  1 | SIMPLE      | w     | range | PRIMARY,word_ind                | word_ind    | 42      | NULL                 |    4 | Using where; Using temporary; Using filesort |
|  1 | SIMPLE      | ws    | ref   | word_search,id_search,word_date | word_search | 4       | statistics.w.word_id |   15 | Using where                                  |
+----+-------------+-------+-------+---------------------------------+-------------+---------+----------------------+------+----------------------------------------------+

and then for the phrases:
EXPLAIN SELECT pw.word_id as word_id, sum(ps.amount) as amount FROM
phrase_words pw, phrase_searches ps WHERE
    pw.word_id IN (966,1,1250,1741) AND
    pw.phrase_id = ps.phrase_id AND ps.search_date >=
'2006-07-17' AND ps.search_date <= '2006-08-16'
    GROUP by pw.word_id;
+----+-------------+-------+-------+-------------------------------------+---------------+---------+-------------------------+--------+-------------+
| id | select_type | table | type  | possible_keys                       | key           | key_len | ref                     | rows   | Extra       |
+----+-------------+-------+-------+-------------------------------------+---------------+---------+-------------------------+--------+-------------+
|  1 | SIMPLE      | pw    | range | phrase_ind,word,phrase              | word          | 4       | NULL                    | 226847 | Using where |
|  1 | SIMPLE      | ps    | ref   | phrase_search,id_search,phrase_date | phrase_search | 4       | statistics.pw.phrase_id |     15 | Using where |
+----+-------------+-------+-------+-------------------------------------+---------------+---------+-------------------------+--------+-------------+

The queries take 40 seconds to several minutes on a dual Xeon 3GHz with 4GB
of RAM running only MySQL.

Can someone see something I've done wrong? I have the same data in
flat files, with one word or phrase on each row and one file for each
day, and doing grep/sort/uniq -c across all those files is quicker on a
slower server with a lot of other processes and with the files NFS
mounted.

mysqladmin status doesn't show any slow queries:
Uptime: 1215323  Threads: 2  Questions: 2191970  Slow queries: 0
Opens: 0  Flush tables: 1  Open tables: 64  Queries per second avg:
1.804

Thanks in advance
/Jon




Re: Slow queries

2006-08-17 Thread Jon Molin

On 8/17/06, Chris [EMAIL PROTECTED] wrote:

Jon Molin wrote:
  snip - schema and word-query EXPLAIN, quoted in full above /

  and then for the phrases:
  EXPLAIN SELECT pw.word_id as word_id, sum(ps.amount) as amount FROM
  phrase_words pw, phrase_searches ps WHERE
     pw.word_id IN (966,1,1250,1741) AND
     pw.phrase_id = ps.phrase_id AND ps.search_date >=
  '2006-07-17' AND ps.search_date <= '2006-08-16'
     GROUP by pw.word_id;

 +----+-------------+-------+-------+-------------------------------------+---------------+---------+-------------------------+--------+-------------+
 | id | select_type | table | type  | possible_keys                       | key           | key_len | ref                     | rows   | Extra       |
 +----+-------------+-------+-------+-------------------------------------+---------------+---------+-------------------------+--------+-------------+
 |  1 | SIMPLE      | pw    | range | phrase_ind,word,phrase              | word          | 4       | NULL                    | 226847 | Using where |
 |  1 | SIMPLE      | ps    | ref   | phrase_search,id_search,phrase_date | phrase_search | 4       | statistics.pw.phrase_id |     15 | Using where |
 +----+-------------+-------+-------+-------------------------------------+---------------+---------+-------------------------+--------+-------------+

The problem is it's picking the word index, which apparently is
returning 226,000+ rows.

Test this:

EXPLAIN SELECT pw.word_id as word_id, sum(ps.amount) as amount FROM
phrase_words pw IGNORE INDEX (word), phrase_searches ps WHERE
pw.word_id IN (966,1,1250,1741) AND
pw.phrase_id = ps.phrase_id AND ps.search_date >=
'2006-07-17' AND ps.search_date <= '2006-08-16'
GROUP by pw.word_id;

(that should ignore the 'word' index and instead use the 'phrase' index).



Unfortunately that didn't help; it leads to:
+----+-------------+-------+-------+-------------------------------------+-----------+---------+------+---------+----------------------------------------------+
| id | select_type | table | type  | possible_keys                       | key       | key_len | ref  | rows    | Extra                                        |
+----+-------------+-------+-------+-------------------------------------+-----------+---------+------+---------+----------------------------------------------+
|  1 | SIMPLE      | ps    | range | phrase_search,id_search,phrase_date | id_search | 3       | NULL | 3836930 | Using where; Using temporary; Using filesort |

Re: Slow queries

2006-08-17 Thread Jon Molin

On 8/17/06, Chris [EMAIL PROTECTED] wrote:


 Unfortunately that didn't help; it leads to:
 +----+-------------+-------+-------+-------------------------------------+-----------+---------+------+---------+----------------------------------------------+
 | id | select_type | table | type  | possible_keys                       | key       | key_len | ref  | rows    | Extra                                        |
 +----+-------------+-------+-------+-------------------------------------+-----------+---------+------+---------+----------------------------------------------+
 |  1 | SIMPLE      | ps    | range | phrase_search,id_search,phrase_date | id_search | 3       | NULL | 3836930 | Using where; Using temporary; Using filesort |

Yeah, it's finding a lot more rows there, which isn't what you want, so the
extra time isn't surprising.


Does rewriting the query to be an inner join help?

EXPLAIN SELECT pw.word_id as word_id, sum(ps.amount) as amount FROM
phrase_words pw INNER JOIN phrase_searches ps ON
(ps.phrase_id=pw.phrase_id) WHERE
pw.word_id IN (966,1,1250,1741) AND ps.search_date >=
'2006-07-17' AND ps.search_date <= '2006-08-16'
GROUP by pw.word_id;

or even:

EXPLAIN SELECT pw.word_id as word_id, sum(ps.amount) as amount FROM
phrase_words pw, phrase_searches ps WHERE
pw.phrase_id = ps.phrase_id AND pw.word_id IN (966,1,1250,1741) AND
ps.search_date >= '2006-07-17' AND ps.search_date <= '2006-08-16'
GROUP by pw.word_id;

(which puts the join between the two tables first).



That didn't help either. Same number of rows as my first join, and
about the same speed as well (only a few seconds' difference when
executed).



That would help with this discussion too:
http://lists.mysql.com/mysql/201015

;)


Yes, it'd be sweet if that mysql internals guru revealed her/him-self
from the cloud of guruness and spoke the true way of doing it.

What pisses me off most is that 'grep -E "^word$| word$|^word | word "
2006/07/*/phrases | wc -l' is so much quicker than the db :(
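
One more variation that sometimes helps with this shape of query: force
both the join order and the index choices explicitly (a sketch, untested
against this data):

SELECT pw.word_id AS word_id, SUM(ps.amount) AS amount
FROM phrase_words pw FORCE INDEX (word)
STRAIGHT_JOIN phrase_searches ps FORCE INDEX (phrase_search)
  ON ps.phrase_id = pw.phrase_id
WHERE pw.word_id IN (966,1,1250,1741)
  AND ps.search_date >= '2006-07-17' AND ps.search_date <= '2006-08-16'
GROUP BY pw.word_id;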




Re: MySQL Load Balancing

2006-08-08 Thread Jon Drukman

Ed Pauley II wrote:
This is another geographical location with automatic failover if there 
is a problem (network, hardware, etc.) with the primary location. When the 
problem is corrected, or corrects itself, the traffic is automatically 
sent back to the primary location. Without 2-way replication data would 
be lost. We have been doing this since MySQL 4.0 was released.


I don't think you're describing it properly.  Do you mean that both 
masters are not master for the same database?  In that case you could 
make them slaves of each other, but not for the same db.  At least, not 
as I understand it.  (And I have set up Mysql as active-active at 2 
geographically diverse colos.)


It is not a multi-master setup. The master at each location is both 
master and slave to each other. The slaves are only slaves to the master 
in their respective locations. My problem is really with how to load 
balance the slaves at each location.


We use a Netscaler 9000 in front of our slaves but you can use any load 
balancing appliance really.  We used to use an Alteon.


-jsd-





Re: Stored procedures

2006-07-31 Thread Jon

Thanks both Devananda and Peter, your replies helped me resolve it.

/Jon

On 7/30/06, Peter Brawley [EMAIL PROTECTED] wrote:


 I'm trying to make stored procedures use parameters for limit and tables,
 I guess this is either me using the wrong datatype or it's not possible.
 I'm having the same issue with setting the table for the query:
SET @sql = CONCAT( 'select * from some_table limit ', some_limit );
PREPARE stmt FROM @sql
etc.

PB
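
Expanded into a full procedure, that approach might look like the sketch
below (dynamic SQL inside stored procedures needs MySQL 5.0.13 or later;
table and procedure names are taken from the original post):

DELIMITER $$
CREATE PROCEDURE sp_test1 (IN some_limit INT)
BEGIN
  SET @sql = CONCAT('SELECT * FROM some_table LIMIT ', some_limit);
  PREPARE stmt FROM @sql;
  EXECUTE stmt;
  DEALLOCATE PREPARE stmt;
END $$
DELIMITER ;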

-


Devananda wrote:

Jon wrote:

Hi list

I'm trying to make stored procedures use parameters for limit and tables,
I
guess this is either me using the wrong datatype or it's not possible.
I'm
having the same issue with seting the table for the query:

CREATE  PROCEDURE sp_test1 (IN some_limit int)
select * from some_table limit some_limit;

and
CREATE  PROCEDURE sp_test2 (IN some_table table)
select * from some_table;


Both fail with ERROR 1064 (42000).

Can someone please shed some light on this for me? Is this a problem with
procedures not being able to do this or is it wrong datatypes or something

completely different?

Thanks in advance
Jon


It sounds like what you need is dynamic SQL in your stored procedures.
Check out http://forge.mysql.com/snippets/view.php?id=13 for some good
examples, more complex than what you're asking about, but they should shed
light on what you need to do.


Regards,
Devananda







Stored procedures

2006-07-25 Thread Jon

Hi list

I'm trying to make stored procedures use parameters for limit and tables, I
guess this is either me using the wrong datatype or it's not possible.  I'm
having the same issue with setting the table for the query:

CREATE  PROCEDURE sp_test1 (IN some_limit int)
select * from some_table limit some_limit;

and
CREATE  PROCEDURE sp_test2 (IN some_table table)
select * from some_table;


Both fail with ERROR 1064 (42000).

Can someone please shed some light on this for me? Is this a problem with
procedures not being able to do this or is it wrong datatypes or something
completely different?

Thanks in advance
Jon


Re: Stored procedures

2006-07-25 Thread Jon

On 7/25/06, Visolve DB Team [EMAIL PROTECTED] wrote:


Hello  Jon.



Hi there Team :)

And thanks for the quick reply

Could you tell me the version of MySQL? You can find the version by

executing the command

SELECT version()

If the version is below 5, the stored procedure feature will not work.




The version shown is 5.0.21-standard (from the rpm
MySQL-server-standard-5.0.21-1.rhel3).

I have no problem with other sp, like:
CREATE PROCEDURE sp_test3 (IN value int)
select count(*) from some_table where foo  value;

It's just defining the table and limit that I've had problems with (there is
also a note about this in the manual section on creating stored procedures:
http://dev.mysql.com/doc/refman/5.0/en/create-procedure.html )

So could it be that it's not possible?

/Jon


Thanks

Visolve DB Team.



- Original Message -
From: Jon [EMAIL PROTECTED]
To: mysql@lists.mysql.com
Sent: Tuesday, July 25, 2006 2:40 PM
Subject: Stored procedures

 snip - original message quoted in full above /





Re: Stored procedures

2006-07-25 Thread Jon

But the scope of a prepared statement is only the session? I want a stored
procedure to avoid some SQL in client-side code... Or do you mean a PREPARE
inside the stored procedure?

/Jon

On 7/25/06, Burke, Dan [EMAIL PROTECTED] wrote:



If I understand correctly, what you need is prepared statements.

http://dev.mysql.com/doc/refman/5.0/en/sqlps.html

Dan.


-Original Message-
From: [EMAIL PROTECTED] [mailto:[EMAIL PROTECTED] On Behalf Of Jon
Sent: Tuesday, July 25, 2006 7:44 AM
To: Visolve DB Team
Cc: mysql@lists.mysql.com; Sena
Subject: Re: Stored procedures

On 7/25/06, Visolve DB Team [EMAIL PROTECTED] wrote:

 Hello  Jon.


Hi there Team :)

And thanks for the quick reply

Could you tell me the version of MySQL? You can find the version by
 executing the command

 SELECT version()

 If the version is below 5, the stored procedure feature will not work.



The version shown is 5.0.21-standard (from the rpm
MySQL-server-standard-5.0.21-1.rhel3).

I have no problem with other sp, like:
CREATE PROCEDURE sp_test3 (IN value int)
select count(*) from some_table where foo  value;

It's just defining the table and limit that I've had problems with (there is
also a note about this in the manual section on creating stored procedures:
http://dev.mysql.com/doc/refman/5.0/en/create-procedure.html )

So could it be that it's not possible?

/Jon


Thanks
 Visolve DB Team.



 - Original Message -
 From: Jon [EMAIL PROTECTED]
 To: mysql@lists.mysql.com
 Sent: Tuesday, July 25, 2006 2:40 PM
 Subject: Stored procedures

  snip - original message quoted in full above /




--
This message has been scanned for viruses by TechTeam's email gateway.


---
This e-mail transmission is strictly confidential and intended solely
for the person or organization to whom it is addressed. It may contain
privileged and confidential information and if you are not the intended
recipient, you must not copy, distribute or take any action in reliance
on it. If you have received this e-mail in error, please notify the sender
as soon as possible and delete the e-mail message and any attachment(s).
This message has been scanned for viruses by TechTeam's email gateway.






Re: Stored procedures

2006-07-25 Thread Jon Molin

On 7/25/06, Chris White [EMAIL PROTECTED] wrote:


On Tuesday 25 July 2006 02:10 am, Jon wrote:
 CREATE  PROCEDURE sp_test1 (IN some_limit int)
 select * from some_table limit some_limit;

 and
 CREATE  PROCEDURE sp_test2 (IN some_table table)
 select * from some_table;

Well, first off with stored procedures the format is:

DELIMITER $$
CREATE PROCEDURE name ()
BEGIN
..
END $$
DELIMITER ;

DELIMITER is done so you can use ;'s within the stored procedure.



no need for using blocks and setting delimiter when it's only a single query

 The other thing too is that you're trying to select a table by a variable.
 That doesn't quite work, and I've tried a dozen or so variations myself
 hoping it would.  You know, I'd almost LIKE someone to go "No you're wrong,
 you just need to do this..." ;)

 Well, that's kinda what I want to hear. A simple yes or no for both tables
 and limits. I take it tables are a no-no, considering you've tried so many
 things.

Anyone who knows if the same is true for limits?


Re: I don't understand why SCSI is preferred.

2006-07-14 Thread Jon Frisby
It was my impression, from the information we've collected, that our
problem is very specific to Opteron.  It's possible that your problem
is actually unrelated. :(


-JF


On Jul 14, 2006, at 7:24 AM, living liquid|Christian Meisinger wrote:


We're using Opterons, Linux 2.6.x, and a SiL (Silicon Image) SATA
chipset whose particular model number I don't have in front of me.

After MUCH MUCH MUCH trial and error we've discovered that:
1) 2.6.16 substantially alleviates the problem but doesn't  
eliminate it.

2) There is a 3Ware card that's MUCH better in this regard.

Personally, I'm not a fan of 3Ware, having lost a RAID array due  
in no
small part to a BUG in their firmware (whose existence they knew  
about

but, naturally, refused to acknowledge until we presented them with
proof that it had to be a bug...) but you can control for such  
variables...



thanks

we use a 3ware 9000 SATA-RAID5 controller.
strange.

we have a xeon cpu here.
so it's not a amd specific problem either i guess.

maybe some strange SMP problem.

BUT we use kernel 2.6.11 so that could be the problem.
ahh n i hate kernel updates :)
i will try a kernel update... sometime ;)



chris









Re: I don't understand why SCSI is preferred.

2006-07-13 Thread Jon Frisby
We're using Opterons, Linux 2.6.x, and a SiL (Silicon Image) SATA  
chipset whose particular model number I don't have in front of me.


After MUCH MUCH MUCH trial and error we've discovered that:
1) 2.6.16 substantially alleviates the problem but doesn't eliminate it.
2) There is a 3Ware card that's MUCH better in this regard.

Personally, I'm not a fan of 3Ware, having lost a RAID array due in  
no small part to a BUG in their firmware (whose existence they knew  
about but, naturally, refused to acknowledge until we presented them  
with proof that it had to be a bug...) but you can control for such  
variables...


-JF


On Jul 12, 2006, at 11:48 PM, living liquid | Christian Meisinger wrote:

* - For example: We faced a NASTY problem using AMD 64-bit CPUs +  
SATA +

Linux where I/O on the system (the WHOLE system, not JUST the SATA
spindles -- network, PATA, USB, EVERYTHING) would suddenly come to a
grinding halt (or very nearly halted) randomly when the SATA  
subsystem

was under heavy load.  It required a LOT of trial-and-error kernel
adjustments to find a configuration that did not suffer this problem.


we have the same problem here.
what did you do to solve this problem?
i guess we need to trial-and-error our own kernel configuration
depending on our hardware, but what parameters did you change?

i'm very thankful for any help ... we have NO idea what's wrong :)


best regards chris






Re: I don't understand why SCSI is preferred.

2006-07-13 Thread Jon Frisby


On Jul 13, 2006, at 3:03 PM, mos wrote:


At 03:45 PM 7/12/2006, Jon Frisby wrote:

This REALLY should be an academic concern.  Either you have a system
that can tolerate the failure of a drive, or you do not.  The
frequency of failure rates is pretty much irrelevant:  You can train
incredibly non-technical (inexpensive) people to respond to a pager
and hot-swap a bad drive.
If you are in the position where the typical failure-rate of a class
of drive is of concern to you then either: A) You have a different
problem causing all your drives to fail ultra-fast (heat, electrical
noise, etc) or B) You haven't adequately designed your storage
subsystem.



It all depends how valuable your uptime is. If you double or triple
the time between hard disk failures, most people would pay extra
for that, so they buy SCSI drives. You wouldn't take your family car
and race in the Indy 500, would you? After a few laps at 150 mph
(if you can get it going that fast), it will seize up, so you go
into the pit stop and what? Get another family car and drive that?
And keep doing that until you finish the race? Down time is
extremely expensive and embarrassing. Just talk to the guys at
FastMail, who have had 2 outages even with hardware raid in place.
Recovery doesn't always work as smoothly as you think it should.


Again:  Either your disk sub-system can TOLERATE (read: CONTINUE
OPERATING IN THE FACE OF) a drive failure, or it cannot.  If you
can't hot-swap a dead drive, your system can't tolerate the failure
of a drive.


Your analogy is flawed.  The fact that companies like Google are  
running with incredibly good uptimes while using cheap, commodity  
hardware (including IDE drives!) demonstrates it.


SCSI drives WILL NOT improve your uptime by a factor of 2x or 3x.   
Using a hot-swappable disk subsystem, and having hot-spares WILL.   
Designing your systems without needless single points of failure WILL.




Software RAID? Are you serious? No way!


You make a compelling case for your position, but I'm afraid I still  
disagree with you.  *cough*


If you're using RAID10, or other forms of RAID that don't involve  
computing a checksum (and the write hole that accompanies it),  
there's little need for hardware support.  It won't make things  
dramatically faster unless you spend a ton of money on cache -- in  
which case you should seriously consider a SAN for the myriad other  
benefits it provides.  The reliability introduced by hardware RAID  
with battery backups is pretty negligible if you're doing your I/O  
right (I.E. you've made sure your drives aren't lying when they say a  
write has completed AND you're using fsync -- which MySQL does).


-JF






Re: I don't understand why SCSI is preferred.

2006-07-12 Thread Jon Frisby
This REALLY should be an academic concern.  Either you have a system  
that can tolerate the failure of a drive, or you do not.  The  
frequency of failure rates is pretty much irrelevant:  You can train  
incredibly non-technical (inexpensive) people to respond to a pager  
and hot-swap a bad drive.


If you are in the position where the typical failure-rate of a class  
of drive is of concern to you then either: A) You have a different  
problem causing all your drives to fail ultra-fast (heat, electrical  
noise, etc) or B) You haven't adequately designed your storage  
subsystem.


Save yourself the headache, and just set up a RAID10 PATA/SATA array  
with a hot spare.   Not sure if Linux/FreeBSD/et al support hot-swap  
of drives when using software RAID, but if it does then you don't  
even need to spend a few hundred bucks on a RAID controller.


-JF


On Jul 12, 2006, at 12:11 PM, mos wrote:


At 12:42 PM 7/12/2006, you wrote:

On Tuesday 11 July 2006 19:26, mos wrote:
 SCSI drives are also designed to run 24/7 whereas IDE drives are  
more

 likely to fail if used on a busy server.

This used to be the case.  But there are SATA drives out there now  
being made
for enterprise class, 100% duty cycle operations.  See, for  
example,
http://www.westerndigital.com/en/products/Products.asp?DriveID=238&Language=en
No, I am not affiliated with WD, just had good experience with  
these drives.
1.2 Million Hours MTBF at 100% duty cycle and a five year  
warranty.  Not bad.


That's good to hear, but  MTBF is really a pie in the sky estimate.  
I had an expensive HP tape drive that had something like 20,000 hr  
MTBF. Both of my units failed at under 70 hours. HP's estimate was  
power on hours (unit powered on and doing nothing), and did NOT  
include hours when the tape was in motion. Sheesh.


To get the MTBF estimate, the manufacturer will power on 100 drives  
(or more) and time to see when the first one fails. If it fails in  
1000 hours, then the MTBF is 100x1000hrs or 100,000 hours. This is  
far from being accurate because as we all know, the older the  
drive, the more likely it is to fail. (Especially after the  
warranty period has expired, the failure rate is quite high).


I am hoping the newer SATA II drives will provide SCSI performance  
at a reasonable price. It would be interesting to see if anyone has  
polled ISP's to see what they're using. I know they charge more (or  
at least they used to) for SCSI drives if you are renting a server  
from them. It would be interesting to see what their failure rate  
is on IDE vs SCSI vs SATA.


Mike

--
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/mysql? 
[EMAIL PROTECTED]





--
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/[EMAIL PROTECTED]



Re: I don't understand why SCSI is preferred.

2006-07-12 Thread Jon Frisby


On Jul 12, 2006, at 12:45 PM, Scott Tanner wrote:





I am hoping the newer SATA II drives will provide SCSI performance  
at a
reasonable price. It would be interesting to see if anyone has  
polled ISP's
to see what they're using. I know they charge more (or at least  
they used
to) for SCSI drives if you are renting a server from them. It  
would be

interesting to see what their failure rate is on IDE vs SCSI vs SATA.

Mike



  By newer SATA II drives, are you referring to SAS drives?


No, typically SATA II is meant to refer to SATA w/ NCQ + doubled  
max throughput.




  My company is in the process of switching to direct attached SAS
arrays for our database servers, as part of a scale-out model. We've
done testing between SATA, SCSI, and SAS arrays, and the SCSI and SAS
systems were very comparable. The number of disks in the array seemed
to have a larger effect than the type of disk. SAS also has more Fibre
Channel-like features than SCSI, making it better suited for HA environments.


Yeah, that's sort of the conventional-wisdom for drive arrays:  More  
spindles == faster.  It's roughly analogous to adding CPUs versus  
getting faster CPUs with a workload that's easily parallelizable.   
More spindles means more heads.  More heads means more simultaneous  
seeks, reads, and writes.


-JF



--
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/[EMAIL PROTECTED]



Re: I don't understand why SCSI is preferred.

2006-07-12 Thread Jon Frisby


On Jul 12, 2006, at 12:56 PM, Daniel da Veiga wrote:


On 7/12/06, mos [EMAIL PROTECTED] wrote:

At 12:42 PM 7/12/2006, you wrote:
On Tuesday 11 July 2006 19:26, mos wrote:
  SCSI drives are also designed to run 24/7 whereas IDE drives  
are more

  likely to fail if used on a busy server.

This used to be the case.  But there are SATA drives out there  
now being made
for enterprise class, 100% duty cycle operations.  See, for  
example,
http://www.westerndigital.com/en/products/Products.asp?DriveID=238&Language=en


No, I am not affiliated with WD, just had good experience with  
these drives.
1.2 Million Hours MTBF at 100% duty cycle and a five year  
warranty.  Not bad.


That's good to hear, but  MTBF is really a pie in the sky  
estimate. I had
an expensive HP tape drive that had something like 20,000 hr MTBF.  
Both of
my units failed at under 70 hours. HP's estimate was power on  
hours (unit
powered on and doing nothing), and did NOT include hours when the  
tape was

in motion. Sheesh.

To get the MTBF estimate, the manufacturer will power on 100  
drives (or
more) and time to see when the first one fails. If it fails in  
1000 hours,

then the MTBF is 100x1000hrs or 100,000 hours. This is far from being
accurate because as we all know, the older the drive, the more  
likely it is
to fail. (Especially after the warranty period has expired,  
failure rate is

quite high).

I am hoping the newer SATA II drives will provide SCSI performance  
at a
reasonable price. It would be interesting to see if anyone has  
polled ISP's


The answer (short and based on experience) is NO! A SATA drive is no
different from an IDE drive of the same type. I'm sure they'll release
fast and reliable drives based on SATA with different mechanisms
(like the one Joshua pointed to), but most will be IDE-like with a
different interface, and those high-demand drives are fated to cost a lot
more.


Rule of thumb:  If you see a SATA drive that is 18GB, 36GB, 72GB, or  
144GB and costs WAY more per GB than other SATA drives of more normal  
capacities (80GB, 100GB, 120GB, 160GB, 200GB...) then it's probably  
using the same physical drive as a SCSI drive but with a SATA  
interface tacked on instead.



That is something only an ISP or corporation would give (and no one
will EVER sign it, *lol*). SCSI has one more advantage I forgot to add
to my previous message, they can be arranged better in RAID with hot
swap. I can only tell about my company, where servers have all SCSI
disks (IBM, Dell).


Have you had any specific problems with SATA/PATA hot-swap?  We've  
only had problems when we've tried to use a 3ware RAID card and  
tried to do hot-swap...


-JF


--
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/[EMAIL PROTECTED]



Re: I don't understand why SCSI is preferred.

2006-07-12 Thread Jon Frisby


On Jul 12, 2006, at 12:58 PM, Chris White wrote:


On Tuesday 11 July 2006 04:18 pm, Brian Dunning wrote:

My understanding is that SCSI has a faster transfer rate, for
transferring large files. A busy database needs really fast access,
for making numerous fast calls all over the disk. Two different,
unrelated things.

I am more than willing to be called Wrong, slapped, and cast from a
bridge.


Hmm, not sure if the question at hand is being answered.  The  
topics I've seen
so far seem to indicate why SCSI is fast.  However, the original  
question was

more along the lines of Does it matter with regards to database
performance?.  From what I know of MySQL, not really, because  
MySQL does a
good amount of work in memory.  The only time I'd see disk access  
being a

factor is if you had a large mass of swap/virtual memory.

Now one place where I'm sure it would matter is if you were doing a
substantial amount of logging, or db dumping to disk.  Then yes,  
you'd want a

nice fast disk at that point.


That's just silly.  ALL databases attempt to do as MUCH AS POSSIBLE  
in memory.  The disk is ALWAYS the enemy when it comes to a  
relational database.  The only question is the design of the database  
and of the queries.  If you have some leeway to muck about with the  
design of each then you can often find ways of making the database  
*do less work* (talk to the disk/ram less) which is always preferable  
to trying to make the disk faster.
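
To make "do less work" concrete: one common trick is a covering index, so a  
hot query can be answered from the index alone, without ever touching the row  
data.  A minimal sketch (table and column names invented for illustration):

CREATE INDEX idx_orders_user_created ON orders (user_id, created_at);

-- the query below only references user_id and created_at, so the index
-- covers it; EXPLAIN shows "Using index" in the Extra column in that case
EXPLAIN SELECT created_at FROM orders WHERE user_id = 42;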


-JF

--
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/[EMAIL PROTECTED]



Re: I don't understand why SCSI is preferred.

2006-07-11 Thread Jon Frisby
It's my understanding that the biggest remaining difference has to do  
with SCSI having far superior command queueing capabilities --  
although SATA's command queueing may have closed the gap somewhat --  
which provides for much better real-world performance when you have  
multiple database threads doing work.


The bottom line is that (at least in the past -- who knows, perhaps  
the latest-n-greatest SATA gear has truly tipped the scales, although  
I doubt it) you will see better real-world performance with less  
fidgeting* from SCSI (or Fibre Channel, switched or otherwise) in  
terms of access times and throughput than you will from PATA or SATA.


* - For example: We faced a NASTY problem using AMD 64-bit CPUs +  
SATA + Linux where I/O on the system (the WHOLE system, not JUST the  
SATA spindles -- network, PATA, USB, EVERYTHING) would suddenly come  
to a grinding halt (or very nearly halted) randomly when the SATA  
subsystem was under heavy load.  It required a LOT of trial-and-error  
kernel adjustments to find a configuration that did not suffer this  
problem.


As to whether it is PREFERRED, that comes down to your constraints.   
There are some problem domains where it's REALLY REALLY HARD to split  
database load across multiple servers.  There are many problem  
domains where bad or overly-simplistic design patterns are common  
that make scaling to multiple machines hard.  So sometimes you wind  
up in a nasty situation where your only option is to have REALLY fast  
spindles -- in which case, the 10x or 20x price premium for SCSI may  
be unavoidable.


Generally speaking, if you need ultra-fast spindles you should  
probably be re-evaluating your database architecture as you're asking  
for financial and technological pain.


-JF

On Jul 11, 2006, at 4:18 PM, Brian Dunning wrote:

My understanding is that SCSI has a faster transfer rate, for  
transferring large files. A busy database needs really fast access,  
for making numerous fast calls all over the disk. Two different,  
unrelated things.


I am more than willing to be called Wrong, slapped, and cast from a  
bridge.


--
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/mysql? 
[EMAIL PROTECTED]





--
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/[EMAIL PROTECTED]



Re: Index and multiple fields

2006-04-03 Thread Jon Drukman

Markus Fischer wrote:

Basically, this means if I've a table like this

id1
id2
id3
id4
id5

and I've two different select statements:

select * from ... where id1 = .. and id2 = ..

and the other being

select * from ... where id3 = .. and id4 = ..

I would create two indexes, one for id1/id2 and the other for id3/id4 ,
right?


yep.  remember that mysql can only use one index per table in a given 
query, and that column order is significant in a multi column index.


so if your index was (id1, id2) then a query with where id1 = x and id2 
= y would use the index, as would id1 = x.  but plain old where id2 
= x would not.
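
e.g., a quick sketch (made-up table):

CREATE TABLE t (
  id1 INT, id2 INT, id3 INT, id4 INT, id5 INT,
  INDEX idx12 (id1, id2),   -- serves id1 alone, or id1 + id2
  INDEX idx34 (id3, id4)    -- serves id3 alone, or id3 + id4
);

SELECT * FROM t WHERE id1 = 1 AND id2 = 2;  -- can use idx12
SELECT * FROM t WHERE id1 = 1;              -- can use idx12 (leftmost prefix)
SELECT * FROM t WHERE id2 = 2;              -- cannot use idx12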


-jsd-


--
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/[EMAIL PROTECTED]



still cannot start MySQL

2006-03-17 Thread Jon Miller
I'm still having a problem starting MySQL.
I get the following message:
Unable to initialise database connection: Can't connect to local MySQL server 
through socket '/var/run/mysqld/mysqld.sock' 
I've checked /etc/mysql/my.cnf and the socket file is supposed to be created in 
'/var/run/mysqld'.

Thanks


-- 
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/[EMAIL PROTECTED]

Replication: slaves don't change to new log file properly

2006-03-16 Thread Jon Drukman
Our mysql master machine crashed (hardware problem) and rebooted.  When 
it came back up, it started a new master log file.  The slaves, however, 
were all stuck on the last log file.  They still showed both replication 
threads running, no errors, but they just did not advance.  I had to 
manually run STOP SLAVE; CHANGE MASTER TO ...; START SLAVE on all of 
them and then they picked up properly.
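
for reference, the manual fix on each slave looked roughly like this (the log 
file name and position here are placeholders -- the real values came from SHOW 
MASTER STATUS on the master after it rebooted):

STOP SLAVE;
CHANGE MASTER TO
  MASTER_LOG_FILE = 'master-bin.002',   -- placeholder name
  MASTER_LOG_POS  = 4;                  -- placeholder position
START SLAVE;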


Why didn't the slaves figure out there was a new log file?

-jsd-


--
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/[EMAIL PROTECTED]



missing mysqld.sock

2006-03-14 Thread Jon Miller
Is there a way to recreate the mysqld.sock file?  It has gone missing from 
/var/run/mysqld, and now MySQL will not run.


Thanks

Jon L. Miller,  ASE, CNS, CLS, MCNE, CCNA
Director/Sr Systems Consultant
MMT Networks Pty Ltd
http://www.mmtnetworks.com.au
Resellers for: Novell Gold Partner, Cisco Partner, Peopletelecom, Westnet, 
Sophos Anti-Virus, CA Products

I don't know the key to success, but the key to failure
 is trying to please everybody. -Bill Cosby


-- 
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/[EMAIL PROTECTED]

Unable to initialise database

2006-01-22 Thread Jon Miller
Having a problem connecting to a MySQL database from a mail server.  In the 
mail logs I'm getting:
mail MailScanner[4904]: Unable to initialise database connection: Can't connect 
to MySQL server on '192.168.10.4' (110)

The script that is running has a mysql_connect statement with the correct 
credentials.  From the command line of the mail server I can issue the command 
# MySQL -u mailwatch -h 192.168.10.4 mailscanner -p 
then I give it the password  after which I have a MySQL prompt.
I have in the database the user name with GRANT privileges on both local host 
and the mail server.

So I'm trying to figure out why it does not work from within the script.  I 
think it may have to do with the DBI connector.
Tried the following but getting errors

#!/usr/bin/perl

use DBI

$host = '192.168.10.4';
$port = '3306';

$dbh = DBI->connect("DBI:mysql:mailscanner", "mailwatch", "mailwatch");

if ($dbh) { print "Connection successful! \n"; }

error message:
DBI version 192.168.10.4 required--this is only version 1.46 at 
/usr/share/perl/5.8/Exporter/Heavy.pm line 107.
BEGIN failed--compilation aborted at ./dbi.pl line 5.


Any ideas?

Jon


--
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/[EMAIL PROTECTED]



upgrade advice

2006-01-21 Thread Jon Miller
I'm currently running MySQL on RedHat 7.2 and now I'm in the process of 
finishing testing a new server which has SUSE Linux Enterprise Server 9.
My question is my database currently resides on RH7.2 and 
MySQL-server-4.0.13-0.  I want to install the latest version on the SUSE system 
but I believe this to be  Version: 4.0.18 Release: 32.20 (this is with Service 
Pack 3 from Novell).
How do I do a backup of the existing database and restore it to the new one? Are there any 
changes that may cause my apps to have a fit?

Thanks

Jon


--
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/[EMAIL PROTECTED]



table missing

2006-01-12 Thread Jon Miller
I've created a database with a table, etc.  I'm now getting a message stating 
the table has gone missing.
Is there a way to either reindex or import the database so the table shows 
up?

Thanks


--
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/[EMAIL PROTECTED]



script error in program.

2006-01-11 Thread Jon Miller
I'm getting an error in a script and cannot figure out why it's not working.  The 
problem is on line 15:  while ($row = mysql_fetch_object ($result)).

Movie Database<BR>
<pre>
<?php
// Connect to MySQL server
// User name is root and password is blank
$link = mysql_connect('127.0.0.1','root','mmtnet');
// Select the database
$db = mysql_select_db("movie_library", $link);
// Query the database for all fields from table 'movies'
$result = mysql_query("SELECT * FROM movies", $link);

// Loop through all the rows
while ($row = mysql_fetch_object($result))
{
// print the object for each row
print_r($row);
}
?>
</pre>
</BODY>
</HTML>


Thanks


--
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/[EMAIL PROTECTED]



missing table

2006-01-10 Thread Jon Miller
In a new setup I had the users table in MySQL.  A few hours later I went to 
set up a new account and found out that the user table is missing.  Is there a 
way to get it back?  I can still login as root and myself.
I wanted to add another user to the database.

Thanks


--
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/[EMAIL PROTECTED]



error in script

2006-01-08 Thread Jon Miller
I'm a newbie to MySQL so please bear with me.  I'm creating a program from a 
script I found and I'm wondering why I keep getting the same error message.
I can connect to the mysql server while I'm ssh to the server, I also can run 
phpmyadmin from my windows desktop and see the databases and create databases, 
tables and input the data.  
I'm using Apache2, PHP4 and MySQL Ver 12.22 Distrib 4.0.24, for pc-linux-gnu 
(i386)

Thanks

Jon

Movie Database



Fatal error:  Call to undefined function:  mysql_connect() in 
/var/www/test5.php on line 8

/var/www/test5.php:

<HTML>
<BODY>
Movie Database<BR>
<pre>
<?php
// Connect to MySQL server
// User name is root and password is blank
$link = mysql_connect("192.168.2.16","mysql-username","mysql-password");
// Select the database
$db = mysql_select_db("movie_library", $link);
// Query the database for all fields from table 'movies'
$result = mysql_query("SELECT * FROM movies", $link);

// Loop through all the rows 
while ($row = mysql_fetch_object($result))
{
// print the object for each row
print_r($row);
}
?>
</pre>
</BODY>
</HTML>


--
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/[EMAIL PROTECTED]



tutorial sites

2006-01-07 Thread Jon Miller
Does anyone know of some good tutorial sites on using Apache, PHP and MySQL?  
Just starting out and want to create a website (mine) where I can change up the 
data as often as I like.

Thanks

Jon L. Miller,  ASE, CNS, CLS, MCNE, CCNA
Director/Sr Systems Consultant
MMT Networks Pty Ltd
http://www.mmtnetworks.com.au
Resellers for: Novell Gold Partner, Cisco Partner, Peopletelecom, Westnet, 
Sophos Anti-Virus, 

I don't know the key to success, but the key to failure
 is trying to please everybody. -Bill Cosby




--
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/[EMAIL PROTECTED]



missing mysqld.sock

2006-01-06 Thread Jon Miller
I've installed mysql 4 and noticed that both mysql 4.0 and 4.1 were on the 
system.  In my attempt to remove the older version I may have done away with my 
mysqld.sock.  When I try to access mysql I'm getting the following message:
debOS:~# /etc/init.d/mysql start
Starting MySQL database server: mysqld...failed.
Please take a look at the syslog.
/usr/bin/mysqladmin: connect to server at 'localhost' failed
error: 'Can't connect to local MySQL server through socket 
'/var/run/mysqld/mysqld.sock' (2)'
Check that mysqld is running and that the socket: '/var/run/mysqld/mysqld.sock' 
exists!


Any ideas?

Thanks


-- 
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/[EMAIL PROTECTED]

logging issue

2006-01-03 Thread Jon Miller
System:
Red hat 7.2
My SQL modules:
MySQL-devel-4.0.13-0
php-mysql-4.1.2-7.2.6
MySQL-shared-3.23.55-1
MySQL-server-4.0.13-0
MySQL-client-4.0.13-0
CAMysql-9.0-220
Msql-Mysql-DBI-perl-bin-1.1823-1


I would like to turn on verbose logging to troubleshoot an issue I'm having 
with a program on another (mail) server trying to access the mysql server.  
From the mail server I can issue the following:
mysql -h 192.168.10.4 -u mailwatch -p and enter the password, and it connects.  
Yet from within the initial program it does not work, and in the mail logs it 
has: 
Jan  3 18:19:31 mail MailScanner[11376]: Unable to initialise database 
connection: Can't connect to MySQL server on '192.168.10.4' (110)
I've asked the mailscanner list, the mailwatch list and the postfix list and we 
are not getting anywhere.  

Thanks


--
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/[EMAIL PROTECTED]



need to upgrade

2006-01-01 Thread Jon Miller
I need to upgrade MySQL from a RH7.2 server to either a Suse Enterprise 9.2 or 
Debian 3.1 server.  I understand that the only way to get the data from the 
RH7.2 server is via mysqldump.  Are there any gotchas or issues doing this?

Thanks


--
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/[EMAIL PROTECTED]



cannot connect to database

2005-12-31 Thread Jon Miller
I'm trying to initialize a connection to a database on another server from a mail 
server.  I can telnet to it but cannot initialize the database connection.
The account and user exist in the database, and the password is correct, as I've 
tried it from the mysql server.
Is there another test I can do to find out exactly what is causing the problem? 
 The firewall is set to allow port 3307 (it's the port mysql listens on).

Thanks


-- 
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/[EMAIL PROTECTED]

testing for connectivity to database

2005-12-31 Thread Jon Miller
I'm trying to initialize a connection to a database on another server from a mail 
server.  I can telnet to it but cannot initialize the database connection.
The account and user exist in the database, and the password is correct, as I've 
tried it from the mysql server.
Is there another test I can do to find out exactly what is causing the problem? 
 The firewall is set to allow port 3307 (it's the port mysql listens on).

Thanks


-- 
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/[EMAIL PROTECTED]

installed modules

2005-12-30 Thread Jon Miller
I'd like to know how to query MySQL for loaded or installed modules.  Is this 
done through using the code:
<? phpinfo(); ?>

I'm looking on my system for loaded modules such as DBD-MySQL, zlib-devel and 
perl-DBI.

Thanks


-- 
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/[EMAIL PROTECTED]

re-entering a sql script

2005-12-30 Thread Jon Miller
I have a sql script that needs to be re-entered using the following command: 
mysql < create.sql.  But the database already exists.  Is there a way to 
overwrite or update the database using the script, or do I have to delete the 
existing database?


-- 
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/[EMAIL PROTECTED]

MySQL Control Center

2005-11-22 Thread Jon Drukman
What happened to MySQL Control Center (aka mycc or mysqlcc)?  The 
dev.mysql.com site redirects to the Query Browser page.  QB is a poor 
substitute for mycc.  It looks like neither of them has had active 
development much lately but at least mycc, even in its beta stage, is 
fairly useful.


-jsd-


--
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/[EMAIL PROTECTED]



Re: Replication fails with file not found error - but file is there

2005-11-01 Thread Jon Drukman

Gleb Paharenko wrote:

Hello.



Have a look here:

  http://dev.mysql.com/doc/refman/5.0/en/not-enough-file-handles.html



thanks, i've upped the open-files-limit variable.  we'll see how it goes.
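
for anyone who hits this later: errno 24 is EMFILE, "too many open files".  
the setting lives in my.cnf under [mysqld] -- e.g. open_files_limit = 8192, an 
example value rather than a recommendation -- and a quick way to sanity-check 
what the running server actually got is:

SHOW VARIABLES LIKE 'open_files_limit';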

-jsd-


--
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/[EMAIL PROTECTED]



Replication fails with file not found error - but file is there

2005-10-31 Thread Jon Drukman
Master and slaves are both Mysql 4.1.14 standard.  There are six slaves, 
and this error kept happening on #2 and #4, but then it stopped.  I 
thought it was all gone for good but today it happened on #3.


The symptom is:  Replication stops with the following error:
Error 'Can't find file: './gspot/product.frm' (errno: 24)' on query. 
Default database: 'gspot'. Query: 'UPDATE product SET rdate='2006-06-30' 
WHERE id=928302 LIMIT 1'



The actual file and query change from occurrence to occurrence.  In all 
cases, the file is actually present on the disk.  Logging in to the 
slave and executing SLAVE START causes replication to resume normally.


Here's the full output from SHOW SLAVE STATUS:

Slave_IO_State = Waiting for master to send event
Master_Host = c10-gs-stage1.cnet.com
Master_User = replica
Master_Port = 3306
Connect_Retry = 60
Master_Log_File = c10-gs-stage1-bin.01
Read_Master_Log_Pos = 218146109
Relay_Log_File = c17-gs-db-slave3-relay-bin.02
Relay_Log_Pos = 217866316
Relay_Master_Log_File = c10-gs-stage1-bin.01
Slave_IO_Running = Yes
Slave_SQL_Running = No
Replicate_Do_DB = gspot,gfaqs
Replicate_Ignore_DB =
Replicate_Do_Table =
Replicate_Ignore_Table =
Replicate_Wild_Do_Table =
Replicate_Wild_Ignore_Table =
Last_Errno = 1017
Last_Error = Error 'Can't find file: './gspot/product.frm' (errno: 24)' 
on query. Default database: 'gspot'. Query: 'UPDATE product SET 
rdate='2006-06-30' WHERE id=928302 LIMIT 1'

Skip_Counter = 0
Exec_Master_Log_Pos = 217866265
Relay_Log_Space = 218146160
Until_Condition = None
Until_Log_File =
Until_Log_Pos = 0
Master_SSL_Allowed = No
Master_SSL_CA_File =
Master_SSL_CA_Path =
Master_SSL_Cert =
Master_SSL_Cipher =
Master_SSL_Key =
Seconds_Behind_Master =


I could just add error 1017 to the ignore list, I guess, but this gives 
me the heebie jeebies.  Any ideas?
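
For reference, the ignore list I mean is the slave-skip-errors startup option, 
set in my.cnf on the slave (e.g. slave-skip-errors = 1017; it needs a restart).  
Not a recommendation -- blindly skipping errors can hide real problems -- but 
if used, the follow-up check is:

SHOW SLAVE STATUS\G

...watching that Slave_SQL_Running stays Yes.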


-jsd-


--
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/[EMAIL PROTECTED]



RE: Input on Materialized Views

2005-10-21 Thread Jon Frisby
 Better, mark this view (or particular rows if it's not too 
 expensive) as dirty and recompute it only on access, you 
 may spare few cycles...

That sort of depends on what you're going for.  Typically materialized
views are used for summarizations of hypercubes for OLAP systems (data
marts / data warehouses).  You want access to these views to be as fast
as possible as they're often accessed by interactive programs (reporting
UI).

Recomputing when the change happens is desirable because slowing down
that change just a tiny bit generally won't be a huge deal, as the thing
doing the change is going to be your ETL process which is generally
non-interactive anyway.
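
As a concrete (if simplified) sketch of recompute-on-change -- assuming
MySQL 5.0-style triggers and invented table names; in practice the same logic
often lives in the ETL code itself:

-- hypothetical fact table and its "materialized" daily rollup
CREATE TABLE sales       (sold_at DATETIME, amount DECIMAL(10,2));
CREATE TABLE sales_daily (day DATE PRIMARY KEY, total DECIMAL(14,2));

-- pay a small cost on every insert so reads of sales_daily stay instant
CREATE TRIGGER sales_rollup AFTER INSERT ON sales
FOR EACH ROW
  INSERT INTO sales_daily (day, total)
  VALUES (DATE(NEW.sold_at), NEW.amount)
  ON DUPLICATE KEY UPDATE total = total + NEW.amount;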

-JF

--
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/[EMAIL PROTECTED]



RE: Turning tables on their side

2005-10-19 Thread Jon Frisby
You want a Pivot Table.  Excel will do this nicely (assuming you have
65536 rows or less), but SQL does not provide a mechanism to do this.
If you want a web based interface you can look at Jtable.  (I *think*
that's what it's called -- it's a Java web app that provides an HTML
pivot table interface...)
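
That said, if the set of question IDs is fixed and known up front, you can
fake a pivot in plain SQL with conditional aggregation -- it just doesn't
generalize to arbitrary columns.  A sketch, with an invented table name:

SELECT respondent_id,
       MAX(CASE WHEN question_id = 201 THEN answer_id   END) AS q201,
       MAX(CASE WHEN question_id = 202 THEN answer_id   END) AS q202,
       MAX(CASE WHEN question_id = 204 THEN answer_text END) AS q204
FROM answers
GROUP BY respondent_id;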

-JF
 

 -Original Message-
 From: Jeffrey Goldberg [mailto:[EMAIL PROTECTED] 
 Sent: Wednesday, October 19, 2005 12:24 PM
 To: mysql@lists.mysql.com
 Subject: Turning tables on their side
 
 I suspect that this is the wrong list for this kind of 
 question, but if someone could point me to appropriate 
 sources, I would very much appreciate it.
 
 I am new to SQL but inherited project designed by someone who 
 doesn't seem answer his email anymore.
 
 Essentially date were collected on the web using PHP 
 inserting things into a MySQL data base.  It took me time, 
 but I now have a handle on what is in which of the 15 tables involved.
 
 Each response to each question by each respondent produced 
 its own record (row).  That is, I have something like
 
 
   respondent_idquestion_id  answer_id  answer_text
   
 
23   201  56 NULL
23   202  20 NULL
23   203   1 NULL
23   204NULL Arlington
24   201  52 NULL
24   202  21 NULL
24   203   0 NULL
24   204NULL Richmond
 
 
 and so on for other respondent_ids as well.
 
 What I would like to get for my users is something that looks like
 
 
   respondent_id   q201             q202             q203               ...
   ------------------------------------------------------------------------
   23              text-for-ans56   text-for-ans20   text-for-answer1   ...
   24              text-for-ans52   text-for-ans21   text-for-answer0   ...
 
 
 So instead of having a record for each response, I'd like to 
 have a single record for each respondent that shows all of 
 that respondents responses.
 
 For someone who knows SQL this should be easy.  I suspect that a
 
   group by respondent_id
 
 clause will play a role, but I just don't see it.
 
 As I said, references to books or sites that I should learn 
 from would also be welcome.
 
 -j
 
 
 
 
 --
 MySQL General Mailing List
 For list archives: http://lists.mysql.com/mysql
 To unsubscribe:
 http://lists.mysql.com/[EMAIL PROTECTED]
 
 

--
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/[EMAIL PROTECTED]



RE: Turning tables on their side

2005-10-19 Thread Jon Frisby
Create an Excel spreadsheet.  Import the raw data, structured as-is,
into a worksheet.  Select all the relevant columns.  Go to Data -
Pivot Table and Pivot Chart Report.  Click Finish.  From the
PivotTable Field List, drag the respondant ID into the box labeled
Drop Row Fields Here, then drag question ID into the box labeled Drop
Column Fields Here.

Voila.

-JF


 -Original Message-
 From: Jeffrey Goldberg [mailto:[EMAIL PROTECTED] 
 Sent: Wednesday, October 19, 2005 1:44 PM
 To: Brent Baisley
 Cc: mysql@lists.mysql.com
 Subject: Re: Turning tables on their side
 
 [mailed and posted]
 
 On Oct 19, 2005, at 3:34 PM, Brent Baisley wrote:
 
  The person you inherited from formatted the data correctly in my 
  opinion.
 
 I agree.
 
  What you are trying to do is store the data as you see it, which is 
  rarely a normalized data model. Your presentation layer 
 should handle 
  the formatting for the user.
 
 I'm sorry that I didn't make the question clear.  My goal is 
 to export an MS-Excel file that looks like my target.  I do 
 not wish to change how things are done in the DB.  The 
 end-users will want a spreadsheet like that for doing their 
 analysis.  Not for queries.
 
 I'm using phpmyadmin which will do an Excel export of a table 
 for me.  I just need to create the temporary table long 
 enough to do the export.
 
 -j
 
 --
 MySQL General Mailing List
 For list archives: http://lists.mysql.com/mysql
 To unsubscribe:
 http://lists.mysql.com/[EMAIL PROTECTED]
 
 

--
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/[EMAIL PROTECTED]



RE: Non-linear degradation in bulk loads?

2005-10-17 Thread Jon Frisby
About 6GB...  Is there any way to forcibly limit this?

-JF
 

 -Original Message-
 From: Heikki Tuuri [mailto:[EMAIL PROTECTED] 
 Sent: Sunday, October 16, 2005 2:23 AM
 To: mysql@lists.mysql.com
 Subject: Re: Non-linear degradation in bulk loads?
 
 Jon,
 
 your hardware/OS combination is quite new and unusual. It 
 might indeed be an OS problem. We observed from a 64-bit RHEL 
 4 that when the file cache of the OS grew bigger than 4 GB, 
 then the file I/O performance dropped to one tenth of the 
 normal. You would not expect that kind of behavior from a 64-bit OS.
 
 When you see the slowdown, what does 'top' say about the OS 
 file cache size?
 
 
 The database is our main sites database but we've 
 dramatically reduced the load on that machine over the past 
 couple months through careful optimization of our code.  The 
 box is a dual, dual-core Opteron, 8GB of RAM running a 32-bit 
 Linux 2.4 kernel and MySQL 4.0.20 (32-bit of course).  We 
 have 1GB allocated to the buffer pool, and our usual 1GB *
 3 log files.  8 I/O threads.
 
 Load on the box sits at around 6-7, with a large (50%) 
 amount of time spent in wait state, but actual disk 
 throughput to our software RAID array (No longer on a SAN...) 
 is quite low -- 6-9k blocks/s out, 1-6k blocks/s in.
 
 
 Regards,
 
 Heikki
 
 
 - Original Message -
 From: Jon Frisby [EMAIL PROTECTED]
 Newsgroups: mailing.database.myodbc
 Sent: Friday, October 14, 2005 10:39 PM
 Subject: RE: Non-linear degradation in bulk loads?
 
 
  I've tried tweaking the structure of the schema to have, for example, a 
  PRIMARY KEY index on email, no other indexes, and then insert in sorted 
  order -- made no improvement whatsoever.  Another clue that leads me to 
  believe that this may be an OS issue:  Starting a large cp on the same 
  box (from a local filesystem other than the one the InnoDB data pool was 
  on, to NFS) caused MySQL to become COMPLETELY backlogged (we went from 
  ~15-20 connections at any given instant to 750 (our max_connections 
  setting)).
 
  -JF
 
 
  -Original Message-
  From: Heikki Tuuri [mailto:[EMAIL PROTECTED]
  Sent: Wednesday, October 12, 2005 8:15 AM
  To: mysql@lists.mysql.com
  Subject: Re: Non-linear degradation in bulk loads?
 
  Jon,
 
  hmm... maybe one of the indexes inevitably is in a random order.
 
  Please post a typical
 
  SHOW INNODB STATUS\G
 
  when the inserts happen slowly.
 
  What is your my.cnf like?
 
  Regards,
 
  Heikki
  Innobase/Oracle
 
  - Original Message -
  From: Jon Frisby [EMAIL PROTECTED]
  To: Heikki Tuuri [EMAIL PROTECTED]; 
  mysql@lists.mysql.com
  Sent: Wednesday, October 12, 2005 3:08 AM
  Subject: RE: Non-linear degradation in bulk loads?
 
 
   Two solutions: 1) sort the rows to be inserted on the key 'email' 
   before inserting.
  
   2) Or:
  
   http://dev.mysql.com/doc/mysql/en/innodb-tuning.html
   
   If you have UNIQUE constraints on secondary keys, starting from MySQL 
   3.23.52 and 4.0.3, you can speed up table imports by temporarily 
   turning off the uniqueness checks during the import session:
   SET UNIQUE_CHECKS=0;
  
   For big tables, this saves a lot of disk I/O because InnoDB can use 
   its insert buffer to write secondary index records in a batch.
   
  
   But make sure you do not have any duplicates in the rows!
 
  After sending my mail, I discovered SET UNIQUE_CHECKS=0, and 
  subsequent to that it also occurred to me to try putting the 
  data in in sorted order.  Unfortunately, doing 
  UNIQUE_CHECKS=0 did not work, and even the combination of 
  both did not work.  First chunk (3.4m rows) was ~1.5 minutes, 
  second was ~5 minutes...
 
  At this point I'm inclined to believe that there is something 
  very wrong with the disk subsystem because of this and other 
  problems (doing a large cp from the datapool filesystem to 
  another filesystem brought the database to a near-halt, among 
  other things).
 
  As a stop-gap solution, I created the table with no indexes, 
  and loaded all the data (loaded in linear time), and plan on 
  doing a CREATE UNIQUE INDEX on the table.  Will this happen 
  in linear time, or near-linear time?
 
  *sigh*
 
  -JF 
 
 
   --
   MySQL General Mailing List
   For list archives: http://lists.mysql.com/mysql
   To unsubscribe: 
   http://lists.mysql.com/[EMAIL PROTECTED]
 
 
 
  -- 
  MySQL General Mailing List
  For list archives: http://lists.mysql.com/mysql
  To unsubscribe: 
  http://lists.mysql.com/[EMAIL PROTECTED]
  
 
 
 -- 
 MySQL General Mailing List
 For list archives: http://lists.mysql.com/mysql
 To unsubscribe:
 http://lists.mysql.com/[EMAIL PROTECTED]
 
 

--
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/[EMAIL PROTECTED]



RE: Non-linear degradation in bulk loads?

2005-10-17 Thread Jon Frisby
Actually, I believe we're running 32-bit, with bigmem...  Does similar
behavior occur in such a scenario?

-JF
 

 -Original Message-
 From: Heikki Tuuri [mailto:[EMAIL PROTECTED] 
 Sent: Sunday, October 16, 2005 2:23 AM
 To: mysql@lists.mysql.com
 Subject: Re: Non-linear degradation in bulk loads?
 
 Jon,
 
 your hardware/OS combination is quite new and unusual. It 
 might indeed be an OS problem. We observed from a 64-bit RHEL 
 4 that when the file cache of the OS grew bigger than 4 GB, 
 then the file I/O performance dropped to one tenth of the 
 normal. You would not expect that kind of behavior from a 64-bit OS.
 
 When you see the slowdown, what does 'top' say about the OS 
 file cache size?
 
 
 The database is our main sites database but we've 
 dramatically reduced the load on that machine over the past 
 couple months through careful optimization of our code.  The 
 box is a dual, dual-core Opteron, 8GB of RAM running a 32-bit 
 Linux 2.4 kernel and MySQL 4.0.20 (32-bit of course).  We 
 have 1GB allocated to the buffer pool, and our usual 1GB *
 3 log files.  8 I/O threads.
 
 Load on the box sits at around 6-7, with a large (50%) 
 amount of time spent in wait state, but actual disk 
 throughput to our software RAID array (No longer on a SAN...) 
 is quite low -- 6-9k blocks/s out, 1-6k blocks/s in.
 
 
 Regards,
 
 Heikki
 
 
 - Original Message -
 From: Jon Frisby [EMAIL PROTECTED]
 Newsgroups: mailing.database.myodbc
 Sent: Friday, October 14, 2005 10:39 PM
 Subject: RE: Non-linear degradation in bulk loads?
 
 
  I've tried tweaking the structure of the schema to have, for example, a 
  PRIMARY KEY index on email, no other indexes, and then insert in sorted 
  order -- made no improvement whatsoever.  Another clue that leads me to 
  believe that this may be an OS issue:  Starting a large cp on the same 
  box (from a local filesystem other than the one the InnoDB data pool was 
  on, to NFS) caused MySQL to become COMPLETELY backlogged (we went from 
  ~15-20 connections at any given instant to 750 (our max_connections 
  setting)).
 
  -JF
 
 
  -Original Message-
  From: Heikki Tuuri [mailto:[EMAIL PROTECTED]
  Sent: Wednesday, October 12, 2005 8:15 AM
  To: mysql@lists.mysql.com
  Subject: Re: Non-linear degradation in bulk loads?
 
  Jon,
 
  hmm... maybe one of the indexes inevitably is in a random order.
 
  Please post a typical
 
  SHOW INNODB STATUS\G
 
  when the inserts happen slowly.
 
  What is your my.cnf like?
 
  Regards,
 
  Heikki
  Innobase/Oracle
 
  - Original Message -
  From: Jon Frisby [EMAIL PROTECTED]
  To: Heikki Tuuri [EMAIL PROTECTED]; 
  mysql@lists.mysql.com
  Sent: Wednesday, October 12, 2005 3:08 AM
  Subject: RE: Non-linear degradation in bulk loads?
 
 
   Two solutions: 1) sort the rows to be inserted on the key 'email' 
   before inserting.
  
   2) Or:
  
   http://dev.mysql.com/doc/mysql/en/innodb-tuning.html
   
   If you have UNIQUE constraints on secondary keys, starting from MySQL 
   3.23.52 and 4.0.3, you can speed up table imports by temporarily 
   turning off the uniqueness checks during the import session:
   SET UNIQUE_CHECKS=0;
  
   For big tables, this saves a lot of disk I/O because InnoDB can use 
   its insert buffer to write secondary index records in a batch.
   
  
   But make sure you do not have any duplicates in the rows!
 
  After sending my mail, I discovered SET UNIQUE_CHECKS=0, and 
  subsequent to that it also occurred to me to try putting the 
  data in in sorted order.  Unfortunately, doing 
  UNIQUE_CHECKS=0 did not work, and even the combination of 
  both did not work.  First chunk (3.4m rows) was ~1.5 minutes, 
  second was ~5 minutes...
 
  At this point I'm inclined to believe that there is something 
  very wrong with the disk subsystem because of this and other 
  problems (doing a large cp from the datapool filesystem to 
  another filesystem brought the database to a near-halt, among 
  other things).
 
  As a stop-gap solution, I created the table with no indexes, 
  and loaded all the data (loaded in linear time), and plan on 
  doing a CREATE UNIQUE INDEX on the table.  Will this happen 
  in linear time, or near-linear time?
 
  *sigh*
 
  -JF 
 
 
   --
   MySQL General Mailing List
   For list archives: http://lists.mysql.com/mysql
   To unsubscribe: 
   http://lists.mysql.com/[EMAIL PROTECTED]
 
 
 
  -- 
  MySQL General Mailing List
  For list archives: http://lists.mysql.com/mysql
  To unsubscribe: 
  http://lists.mysql.com/[EMAIL PROTECTED]
  
 
 
 -- 
 MySQL General Mailing List
 For list archives: http://lists.mysql.com/mysql
 To unsubscribe:
 http://lists.mysql.com/[EMAIL PROTECTED]
 
 

--
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http

RE: Non-linear degradation in bulk loads?

2005-10-17 Thread Jon Frisby
Sorry to spam the group, but I just noticed that I asserted we were on a
2.4 kernel -- we're on 2.6.9-11 as part of CentOS 4.1...

-JF
 

 -Original Message-
 From: Heikki Tuuri [mailto:[EMAIL PROTECTED] 
 Sent: Sunday, October 16, 2005 2:23 AM
 To: mysql@lists.mysql.com
 Subject: Re: Non-linear degradation in bulk loads?
 
 Jon,
 
 your hardware/OS combination is quite new and unusual. It 
 might indeed be an OS problem. We observed from a 64-bit RHEL 
 4 that when the file cache of the OS grew bigger than 4 GB, 
 then the file I/O performance dropped to one tenth of the 
 normal. You would not expect that kind of behavior from a 64-bit OS.
 
 When you see the slowdown, what does 'top' say about the OS 
 file cache size?
 
 
 The database is our main sites database but we've 
 dramatically reduced the load on that machine over the past 
 couple months through careful optimization of our code.  The 
 box is a dual, dual-core Opteron, 8GB of RAM running a 32-bit 
 Linux 2.4 kernel and MySQL 4.0.20 (32-bit of course).  We 
 have 1GB allocated to the buffer pool, and our usual 1GB *
 3 log files.  8 I/O threads.
 
 Load on the box sits at around 6-7, with a large (50%) 
 amount of time spent in wait state, but actual disk 
 throughput to our software RAID array (No longer on a SAN...) 
 is quite low -- 6-9k blocks/s out, 1-6k blocks/s in.
 
 
 Regards,
 
 Heikki
 
 
 - Original Message -
 From: Jon Frisby [EMAIL PROTECTED]
 Newsgroups: mailing.database.myodbc
 Sent: Friday, October 14, 2005 10:39 PM
 Subject: RE: Non-linear degradation in bulk loads?
 
 
  I've tried tweaking the structure of the schema to have, for example, a 
  PRIMARY KEY index on email, no other indexes, and then insert in sorted 
  order -- made no improvement whatsoever.  Another clue that leads me to 
  believe that this may be an OS issue:  Starting a large cp on the same 
  box (from a local filesystem other than the one the InnoDB data pool was 
  on, to NFS) caused MySQL to become COMPLETELY backlogged (we went from 
  ~15-20 connections at any given instant to 750 (our max_connections 
  setting)).
 
  -JF
 
 
  -Original Message-
  From: Heikki Tuuri [mailto:[EMAIL PROTECTED]
  Sent: Wednesday, October 12, 2005 8:15 AM
  To: mysql@lists.mysql.com
  Subject: Re: Non-linear degradation in bulk loads?
 
  Jon,
 
  hmm... maybe one of the indexes inevitably is in a random order.
 
  Please post a typical
 
  SHOW INNODB STATUS\G
 
  when the inserts happen slowly.
 
  What is your my.cnf like?
 
  Regards,
 
  Heikki
  Innobase/Oracle
 
  - Original Message -
  From: Jon Frisby [EMAIL PROTECTED]
  To: Heikki Tuuri [EMAIL PROTECTED]; 
  mysql@lists.mysql.com
  Sent: Wednesday, October 12, 2005 3:08 AM
  Subject: RE: Non-linear degradation in bulk loads?
 
 
   Two solutions: 1) sort the rows to be inserted on the key 'email' 
   before inserting.
  
   2) Or:
  
   http://dev.mysql.com/doc/mysql/en/innodb-tuning.html
   
   If you have UNIQUE constraints on secondary keys, starting from MySQL 
   3.23.52 and 4.0.3, you can speed up table imports by temporarily 
   turning off the uniqueness checks during the import session:
   SET UNIQUE_CHECKS=0;
  
   For big tables, this saves a lot of disk I/O because InnoDB can use 
   its insert buffer to write secondary index records in a batch.
   
  
   But make sure you do not have any duplicates in the rows!
 
  After sending my mail, I discovered SET UNIQUE_CHECKS=0, and 
  subsequent to that it also occurred to me to try putting the 
  data in in sorted order.  Unfortunately, doing 
  UNIQUE_CHECKS=0 did not work, and even the combination of 
  both did not work.  First chunk (3.4m rows) was ~1.5 minutes, 
  second was ~5 minutes...
 
  At this point I'm inclined to believe that there is something 
  very wrong with the disk subsystem because of this and other 
  problems (doing a large cp from the datapool filesystem to 
  another filesystem brought the database to a near-halt, among 
  other things).
 
  As a stop-gap solution, I created the table with no indexes, 
  and loaded all the data (loaded in linear time), and plan on 
  doing a CREATE UNIQUE INDEX on the table.  Will this happen 
  in linear time, or near-linear time?
 
  *sigh*
 
  -JF 
 
 
   --
   MySQL General Mailing List
   For list archives: http://lists.mysql.com/mysql
   To unsubscribe: 
   http://lists.mysql.com/[EMAIL PROTECTED]
 
 
 
  -- 
  MySQL General Mailing List
  For list archives: http://lists.mysql.com/mysql
  To unsubscribe: 
  http://lists.mysql.com/[EMAIL PROTECTED]
  
 
 
 -- 
 MySQL General Mailing List
 For list archives: http://lists.mysql.com/mysql
 To unsubscribe:
 http://lists.mysql.com/[EMAIL PROTECTED]
 
 

--
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe

RE: Non-linear degradation in bulk loads?

2005-10-17 Thread Jon Frisby
We only upgraded to CentOS 4.1 due to an emergency data center migration...  We 
weren't prepared to undergo the risk of a 64-bit upgrade at the same time.  I 
believe we're experimenting with 64-bit kernel now as part of our efforts to 
diagnose and resolve the I/O issue.

-JF
 

 -Original Message-
 From: Heikki Tuuri [mailto:[EMAIL PROTECTED] 
 Sent: Monday, October 17, 2005 10:52 AM
 To: mysql@lists.mysql.com
 Subject: Re: Non-linear degradation in bulk loads?
 
 Jon,
 
 I do not know. Why not install a 64-bit Linux in your computer?
 
 Regards,
 
 Heikki
 Oracle/Innobase
 
- Original Message -
From: Jon Frisby [EMAIL PROTECTED]
To: Heikki Tuuri [EMAIL PROTECTED]; 
mysql@lists.mysql.com
Sent: Monday, October 17, 2005 8:46 PM
Subject: RE: Non-linear degradation in bulk loads?
 
 
 Actually, I believe we're running 32-bit, with bigmem...  
 Does similar behavior occur in such a scenario?
 
 -JF
 
 
  -Original Message-
  From: Heikki Tuuri [mailto:[EMAIL PROTECTED]
  Sent: Sunday, October 16, 2005 2:23 AM
  To: mysql@lists.mysql.com
  Subject: Re: Non-linear degradation in bulk loads?
 
  Jon,
 
  your hardware/OS combination is quite new and unusual. It 
 might indeed 
  be an OS problem. We observed from a 64-bit RHEL
  4 that when the file cache of the OS grew bigger than 4 GB, 
 then the 
  file I/O performance dropped to one tenth of the normal. 
 You would not 
  expect that kind of behavior from a 64-bit OS.
 
  When you see the slowdown, what does 'top' say about the OS 
 file cache 
  size?
 
  
  The database is our main sites database but we've 
 dramatically reduced 
  the load on that machine over the past couple months 
 through careful 
  optimization of our code.  The box is a dual, dual-core 
 Opteron, 8GB 
  of RAM running a 32-bit Linux 2.4 kernel and MySQL 4.0.20 
 (32-bit of 
  course).  We have 1GB allocated to the buffer pool, and our 
 usual 1GB 
  *
  3 log files.  8 I/O threads.
 
  Load on the box sits at around 6-7, with a large (50%) 
 amount of time 
  spent in wait state, but actual disk throughput to our 
 software RAID 
  array (No longer on a SAN...) is quite low -- 6-9k blocks/s 
 out, 1-6k 
  blocks/s in.
  
 
  Regards,
 
  Heikki
 
 
  - Original Message -
  From: Jon Frisby [EMAIL PROTECTED]
  Newsgroups: mailing.database.myodbc
  Sent: Friday, October 14, 2005 10:39 PM
  Subject: RE: Non-linear degradation in bulk loads?
 
 
   I've tried tweaking the structure of the schema to have, for example, a 
   PRIMARY KEY index on email, no other indexes, and then insert in sorted 
   order -- made no improvement whatsoever.  Another clue that leads me to 
   believe that this may be an OS issue:  Starting a large cp on the same 
   box (from a local filesystem other than the one the InnoDB data pool was 
   on, to NFS) caused MySQL to become COMPLETELY backlogged (we went from 
   ~15-20 connections at any given instant to 750 (our max_connections 
   setting)).
  
   -JF
  
  
   -Original Message-
   From: Heikki Tuuri [mailto:[EMAIL PROTECTED]
   Sent: Wednesday, October 12, 2005 8:15 AM
   To: mysql@lists.mysql.com
   Subject: Re: Non-linear degradation in bulk loads?
  
   Jon,
  
   hmm... maybe one of the indexes inevitably is in a random order.
  
   Please post a typical
  
   SHOW INNODB STATUS\G
  
   when the inserts happen slowly.
  
   What is your my.cnf like?
  
   Regards,
  
   Heikki
   Innobase/Oracle
  
   - Original Message -
   From: Jon Frisby [EMAIL PROTECTED]
   To: Heikki Tuuri [EMAIL PROTECTED]; 
   mysql@lists.mysql.com
   Sent: Wednesday, October 12, 2005 3:08 AM
   Subject: RE: Non-linear degradation in bulk loads?
  
  
    Two solutions: 1) sort the rows to be inserted on the key 'email' 
    before inserting.
   
    2) Or:
   
    http://dev.mysql.com/doc/mysql/en/innodb-tuning.html
    
    If you have UNIQUE constraints on secondary keys, starting from MySQL 
    3.23.52 and 4.0.3, you can speed up table imports by temporarily 
    turning off the uniqueness checks during the import session:
    SET UNIQUE_CHECKS=0;
   
    For big tables, this saves a lot of disk I/O because InnoDB can use 
    its insert buffer to write secondary index records in a batch.
    
   
    But make sure you do not have any duplicates in the rows!
  
   After sending my mail, I discovered SET UNIQUE_CHECKS=0, and 
   subsequent to that it also occurred to me to try putting the 
   data in in sorted order.  Unfortunately, doing UNIQUE_CHECKS=0 
   did not work, and even the combination of both did not work.  
   First chunk (3.4m rows) was ~1.5 minutes, second was ~5 
   minutes...
  
   At this point I'm inclined to believe that there is 
   something very wrong with the disk subsystem because of this and other 
   problems (doing a large

RE: Non-linear degradation in bulk loads?

2005-10-17 Thread Jon Frisby
Side question:  If I use a 64-bit MySQL build on a 64-bit kernel, is it safe 
and sane to allocate, say, 6GB to the InnoDB buffer pool?

On an 8GB box, 64-bit software stack, what is the optimum memory allocation for 
a pure-InnoDB (* - MyISAM used only for grant tables) mysql server running as 
the sole application on the machine?
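
(For context, the sort of split being asked about would look roughly like
this in my.cnf -- the numbers here are the question itself, not a tested
recommendation, and assume MySQL is the only significant process on the box:)

set-variable = innodb_buffer_pool_size=6144M
set-variable = innodb_additional_mem_pool_size=32M
set-variable = innodb_log_buffer_size=8M
set-variable = key_buffer=32M    # MyISAM used only for the grant tables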

-JF 

 -Original Message-
 From: Heikki Tuuri [mailto:[EMAIL PROTECTED] 
 Sent: Monday, October 17, 2005 10:55 AM
 To: mysql@lists.mysql.com
 Subject: Re: Non-linear degradation in bulk loads?
 
 Jon,
 
 I am not 100 % sure that the problem we saw was in a 64-bit 
 Linux. It might have been 32-bit.
 
 Anyway, since CentOS is a clone of RHEL, this might be the 
 same file cache phenomenon. I do not know if one can force 
 the file cache to stay smaller than 4 GB. You can try running 
 some dummy programs that occupy a few GB of memory.
 
 Regards,
 
 Heikki
 Oracle/Innobase
 
 - Alkuperäinen viesti -
 Lähettäjä: Jon Frisby [EMAIL PROTECTED]
 Vastaanottaja: Heikki Tuuri [EMAIL PROTECTED]; 
 mysql@lists.mysql.com
 Lähetetty: Monday, October 17, 2005 8:49 PM
 Aihe: RE: Non-linear degradation in bulk loads?
 
 
 Sorry to spam the group, but I just noticed that I asserted 
 we were on a
 2.4 kernel -- we're on 2.6.9-11 as part of CentOS 4.1...
 
 -JF
 
 
  -Original Message-
  From: Heikki Tuuri [mailto:[EMAIL PROTECTED]
  Sent: Sunday, October 16, 2005 2:23 AM
  To: mysql@lists.mysql.com
  Subject: Re: Non-linear degradation in bulk loads?
 
  Jon,
 
  your hardware/OS combination is quite new and unusual. It 
 might indeed 
  be an OS problem. We observed from a 64-bit RHEL
  4 that when the file cache of the OS grew bigger than 4 GB, 
 then the 
  file I/O performance dropped to one tenth of the normal. 
 You would not 
  expect that kind of behavior from a 64-bit OS.
 
  When you see the slowdown, what does 'top' say about the OS 
 file cache 
  size?
 
  
  The database is our main site's database, but we've 
 dramatically reduced 
  the load on that machine over the past couple months 
 through careful 
  optimization of our code.  The box is a dual, dual-core 
 Opteron, 8GB 
  of RAM running a 32-bit Linux 2.4 kernel and MySQL 4.0.20 
 (32-bit of 
  course).  We have 1GB allocated to the buffer pool, and our 
 usual 1GB 
  *
  3 log files.  8 I/O threads.
 
  Load on the box sits at around 6-7, with a large (50%) 
 amount of time 
  spent in wait state, but actual disk throughput to our 
 software RAID 
  array (No longer on a SAN...) is quite low -- 6-9k blocks/s 
 out, 1-6k 
  blocks/s in.
  
 
  Regards,
 
  Heikki
 
 
  - Original Message -
  From: Jon Frisby [EMAIL PROTECTED]
  Newsgroups: mailing.database.myodbc
  Sent: Friday, October 14, 2005 10:39 PM
  Subject: RE: Non-linear degradation in bulk loads?
 
 
   I've tried tweaking the structure of the schema to have,
  for example, a =
   PRIMARY KEY index on email, no other indexes, and then
  insert in sorted =
   order -- made no improvement whatsoever.  Another clue that
  leads me to =
   believe that this may be an OS issue:  Starting a large cp
  on the same =
   box (from a local filesystem other than the one the InnoDB
  data pool was =
   on, to NFS) caused MySQL to become COMPLETELY backlogged
  (we went from =
   ~15-20 connections at any given instant to 750 (our
  max_connections =
   setting)).
  
   -JF
  
  
   -Original Message-
   From: Heikki Tuuri [mailto:[EMAIL PROTECTED]
   Sent: Wednesday, October 12, 2005 8:15 AM
   To: mysql@lists.mysql.com
   Subject: Re: Non-linear degradation in bulk loads?
 
    Jon,
 
    hmm... maybe one of the indexes inevitably is in a random order.
 
    Please post a typical
 
    SHOW INNODB STATUS\G
 
    when the inserts happen slowly.
 
    What is your my.cnf like?
 
    Regards,
 
    Heikki
    Innobase/Oracle
 
    - Alkuperäinen viesti -
    Lähettäjä: Jon Frisby [EMAIL PROTECTED]
    Vastaanottaja: Heikki Tuuri [EMAIL PROTECTED]; mysql@lists.mysql.com
    Lähetetty: Wednesday, October 12, 2005 3:08 AM
    Aihe: RE: Non-linear degradation in bulk loads?
 
 
     Two solutions: 1) sort the rows to be inserted on the key 'email'
     before inserting.
 
     2) Or:
 
     http://dev.mysql.com/doc/mysql/en/innodb-tuning.html
 
     If you have UNIQUE constraints on secondary keys, starting from MySQL
     3.23.52 and 4.0.3, you can speed up table imports by temporarily
     turning off the uniqueness checks during the import session:
     SET UNIQUE_CHECKS=0;
 
     For big tables, this saves a lot of disk I/O because InnoDB can use
     its insert buffer to write secondary index records in a batch.
 
     But make sure you do not have any duplicates in the rows!
 
    After sending my mail, I discovered SET UNIQUE_CHECKS=0, and
    subsequent to that it also occurred to me to try putting the
    data in in sorted order

RE: Non-linear degradation in bulk loads?

2005-10-14 Thread Jon Frisby
I've tried tweaking the structure of the schema to have, for example, a PRIMARY 
KEY index on email, no other indexes, and then insert in sorted order -- made 
no improvement whatsoever.  Another clue that leads me to believe that this may 
be an OS issue:  Starting a large cp on the same box (from a local filesystem 
other than the one the InnoDB data pool was on, to NFS) caused MySQL to become 
COMPLETELY backlogged (we went from ~15-20 connections at any given instant to 
750 (our max_connections setting)).

-JF


 -Original Message-
 From: Heikki Tuuri [mailto:[EMAIL PROTECTED] 
 Sent: Wednesday, October 12, 2005 8:15 AM
 To: mysql@lists.mysql.com
 Subject: Re: Non-linear degradation in bulk loads?
 
 Jon,
 
 hmm... maybe one of the indexes inevitably is in a random order.
 
 Please post a typical
 
 SHOW INNODB STATUS\G
 
 when the inserts happen slowly.
 
 What is your my.cnf like?
 
 Regards,
 
 Heikki
 Innobase/Oracle
 
 - Alkuperäinen viesti -
 Lähettäjä: Jon Frisby [EMAIL PROTECTED]
 Vastaanottaja: Heikki Tuuri [EMAIL PROTECTED]; 
 mysql@lists.mysql.com
 Lähetetty: Wednesday, October 12, 2005 3:08 AM
 Aihe: RE: Non-linear degradation in bulk loads?
 
 
  Two solutions: 1) sort the rows to be inserted on the key 'email' 
  before inserting.
 
  2) Or:
 
  http://dev.mysql.com/doc/mysql/en/innodb-tuning.html
  
  If you have UNIQUE constraints on secondary keys, starting 
 from MySQL
  3.23.52 and 4.0.3, you can speed up table imports by temporarily 
  turning off the uniqueness checks during the import session:
  SET UNIQUE_CHECKS=0;
 
  For big tables, this saves a lot of disk I/O because InnoDB can use 
  its insert buffer to write secondary index records in a batch.
  
 
  But make sure you do not have any duplicates in the rows!
 
 After sending my mail, I discovered SET UNIQUE_CHECKS=0, and 
 subsequent to that it also occurred to me to try putting the 
 data in in sorted order.  Unfortunately, doing 
 UNIQUE_CHECKS=0 did not work, and even the combination of 
 both did not work.  First chunk (3.4m rows) was ~1.5 minutes, 
 second was ~5 minutes...
 
 At this point I'm inclined to believe that there is something 
 very wrong with the disk subsystem because of this and other 
 problems (doing a large cp from the datapool filesystem to 
 another filesystem brought the database to a near-halt, among 
 other things).
 
 As a stop-gap solution, I created the table with no indexes, 
 and loaded all the data (loaded in linear time), and plan on 
 doing a CREATE UNIQUE INDEX on the table.  Will this happen 
 in linear time, or near-linear time?
 
 *sigh*
 
 -JF 
 
 
 --
 MySQL General Mailing List
 For list archives: http://lists.mysql.com/mysql
 To unsubscribe:
 http://lists.mysql.com/[EMAIL PROTECTED]
 
 

--
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/[EMAIL PROTECTED]



RE: Non-linear degradation in bulk loads?

2005-10-11 Thread Jon Frisby
 Two solutions: 1) sort the rows to be inserted on the key 
 'email' before inserting.
 
 2) Or:
 
 http://dev.mysql.com/doc/mysql/en/innodb-tuning.html
 
 If you have UNIQUE constraints on secondary keys, starting from MySQL
 3.23.52 and 4.0.3, you can speed up table imports by 
 temporarily turning off the uniqueness checks during the 
 import session:
 SET UNIQUE_CHECKS=0;
 
 For big tables, this saves a lot of disk I/O because InnoDB 
 can use its insert buffer to write secondary index records in a batch.
 
 
 But make sure you do not have any duplicates in the rows!

After sending my mail, I discovered SET UNIQUE_CHECKS=0, and subsequent
to that it also occurred to me to try putting the data in in sorted
order.  Unfortunately, doing UNIQUE_CHECKS=0 did not work, and even the
combination of both did not work.  First chunk (3.4m rows) was ~1.5
minutes, second was ~5 minutes...

At this point I'm inclined to believe that there is something very wrong
with the disk subsystem because of this and other problems (doing a
large cp from the datapool filesystem to another filesystem brought the
database to a near-halt, among other things).

As a stop-gap solution, I created the table with no indexes, and loaded
all the data (loaded in linear time), and plan on doing a CREATE UNIQUE
INDEX on the table.  Will this happen in linear time, or near-linear
time?
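
(Concretely, the stop-gap looks like this -- table and column names as in
the original post, the file name is a placeholder:)

CREATE TABLE test (
  email varchar(255) NOT NULL default '',
  when_happened datetime NOT NULL default '0000-00-00 00:00:00'
) TYPE=InnoDB;

LOAD DATA INFILE '/tmp/chunk01.txt' INTO TABLE test;
-- ...remaining chunks, each loading in roughly constant time...

CREATE UNIQUE INDEX email ON test (email);
CREATE INDEX when_happened ON test (when_happened);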

*sigh*

-JF

--
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/[EMAIL PROTECTED]



Non-linear degradation in bulk loads?

2005-10-10 Thread Jon Frisby
Everyone,

We're trying to do some bulk data loads on several different tables (on
several different machines, using several different techniques) and
seeing dramatically worse-than-linear performance.

We've tried the bulk-INSERT syntax, and the LOAD DATA INFILE syntax.
We've done ALTER TABLE ... DISABLE KEYS, SET FOREIGN_KEY_CHECKS=0 (where
appropriate), and so forth.
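
(For concreteness, one load session against the table shown below looks
like this -- the file path and delimiters are placeholders:)

SET FOREIGN_KEY_CHECKS=0;
ALTER TABLE test DISABLE KEYS;  -- effective for MyISAM tables only
LOAD DATA INFILE '/data/chunk_01.txt' INTO TABLE test
  FIELDS TERMINATED BY '\t' LINES TERMINATED BY '\n';
ALTER TABLE test ENABLE KEYS;
SET FOREIGN_KEY_CHECKS=1;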

The one that is the most immediate concern is a table of the form:

CREATE TABLE `test` (
  `email` varchar(255) NOT NULL default '',
  `when_happened` datetime NOT NULL default '0000-00-00 00:00:00',
  UNIQUE KEY `email` (`email`),
  KEY `when_happened` (`when_happened`)
) TYPE=InnoDB;

I'm loading data using LOAD DATA INFILE with chunks containing 3.4m rows
each (~135MB files).  The first chunk was very quick (about 1.5
minutes), but the tenth chunk has taken 22.6 hours and is still going.
(It's been getting progressively slower with each chunk...)

The database is our main site's database, but we've dramatically reduced
the load on that machine over the past couple months through careful
optimization of our code.  The box is a dual, dual-core Opteron, 8GB of
RAM running a 32-bit Linux 2.4 kernel and MySQL 4.0.20 (32-bit of
course).  We have 1GB allocated to the buffer pool, and our usual 1GB *
3 log files.  8 I/O threads.

Load on the box sits at around 6-7, with a large (50%) amount of time
spent in wait state, but actual disk throughput to our software RAID
array (No longer on a SAN...) is quite low -- 6-9k blocks/s out, 1-6k
blocks/s in.

Something *has* to be wrong here, but we're not sure what we've missed.
We've restored larger data sets from a mysqldump in the past in
dramatically less time on far inferior hardware. (A superset of this
same data to a schema which is also a superset, PLUS a bunch of other
rather large tables -- all in ~8 hours on a 3Ware RAID array on a dual
Xeon w/ 4GB of RAM)

We're inclined to believe that this is a configuration problem, as
opposed to a driver or hardware problem given the non-linear nature of
the performance degradation.  This implies we're doing something truly
stupid with our loads.  What could cause this kind of strangeness?

-JF

--
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/[EMAIL PROTECTED]



RE: Non-linear degradation in bulk loads?

2005-10-10 Thread Jon Frisby
Manoj,

Thanks for the reply!  Unfortunately, I failed to note that we don't
have an auto-extending data pool.  Our data pool is about 212GB, in 4GB
chunks, with about 4.5GB free right now.  We extend it explicitly, and
monitor to make sure it doesn't fill up.
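
(For reference, an explicit, non-autoextending pool of that shape is
declared along these lines in my.cnf -- file names illustrative, the real
pool would list ~53 such 4GB chunks:)

innodb_data_file_path = ibdata1:4096M;ibdata2:4096M;ibdata3:4096M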

-JF 

 -Original Message-
 From: Manoj [mailto:[EMAIL PROTECTED] 
 Sent: Monday, October 10, 2005 8:51 PM
 To: Jon Frisby
 Cc: mysql@lists.mysql.com
 Subject: Re: Non-linear degradation in bulk loads?
 
 Not sure, but given that you suffer from non-linear 
 degradation in performance, my guess is you might be extending 
 your ibdata file too frequently during the batch load 
 process. Check the innodb_data_file_path variable in my.cnf 
 for more details.
 
 Cheers
 
 Manoj
 On 10/11/05, Jon Frisby [EMAIL PROTECTED] wrote:
  Everyone,
 
  We're trying to do some bulk data loads on several different tables 
  (on several different machines, using several different techniques) 
  and seeing dramatically worse-than-linear performance.
 
  We've tried the bulk-INSERT syntax, and the LOAD DATA INFILE syntax.
  We've done ALTER TABLE ... DISABLE KEYS, SET FOREIGN_KEY_CHECKS=0 
  (where appropriate), and so forth.
 
  The one that is the most immediate concern is a table of the form:
 
  CREATE TABLE `test` (
    `email` varchar(255) NOT NULL default '',
    `when_happened` datetime NOT NULL default '0000-00-00 00:00:00',
    UNIQUE KEY `email` (`email`),
    KEY `when_happened` (`when_happened`)
  ) TYPE=InnoDB;
 
  I'm loading data using LOAD DATA INFILE with chunks containing 3.4m 
  rows each (~135MB files).  The first chunk was very quick 
 (about 1.5 
  minutes), but the tenth chunk has taken 22.6 hours and is 
 still going.
  (It's been getting progressively slower with each chunk...)
 
  The database is our main site's database, but we've 
 dramatically reduced 
  the load on that machine over the past couple months 
 through careful 
  optimization of our code.  The box is a dual, dual-core 
 Opteron, 8GB 
  of RAM running a 32-bit Linux 2.4 kernel and MySQL 4.0.20 
 (32-bit of 
  course).  We have 1GB allocated to the buffer pool, and our 
 usual 1GB 
  *
  3 log files.  8 I/O threads.
 
  Load on the box sits at around 6-7, with a large (50%) 
 amount of time 
  spent in wait state, but actual disk throughput to our 
 software RAID 
  array (No longer on a SAN...) is quite low -- 6-9k blocks/s 
 out, 1-6k 
  blocks/s in.
 
  Something *has* to be wrong here, but we're not sure what 
 we've missed.
  We've restored larger data sets from a mysqldump in the past in 
  dramatically less time on far inferior hardware. (A 
 superset of this 
  same data to a schema which is also a superset, PLUS a 
 bunch of other 
  rather large tables -- all in ~8 hours on a 3Ware RAID 
 array on a dual 
  Xeon w/ 4GB of RAM)
 
  We're inclined to believe that this is a configuration problem, as 
  opposed to a driver or hardware problem given the 
 non-linear nature of 
  the performance degradation.  This implies we're doing 
 something truly 
  stupid with our loads.  What could cause this kind of strangeness?
 
  -JF
 
  --
  MySQL General Mailing List
  For list archives: http://lists.mysql.com/mysql
  To unsubscribe:
 http://lists.mysql.com/[EMAIL PROTECTED]
 
 
 

--
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/[EMAIL PROTECTED]



Recommendations for memory use with 16GB

2005-09-27 Thread Jon Drukman
I'm moving from 32-bit intel to the brave new world of AMD64.  Our new 
servers are dual Opterons with 16GB of RAM.  We will be running mysql 
4.1.14-standard on redhat enterprise linux 4 x86_64.


Since I'm new to this, what's my best bang-for-buck in setting up 
mysql's memory usage?


-jsd-


--
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/[EMAIL PROTECTED]



Re: Finding the most recent related record?

2005-08-17 Thread Jon Drukman

Brian Dunning wrote:
I have a table of questions, and a table of answers. Each question  has 
a related answer record for each person who has answered that  question, 
and each answer is timestamped. How do I find a list of  questions where 
the MOST RECENT answer is less than 30 days ago?  (Basically trying to 
exclude questions that nobody has answered  lately.) Thanks.  :)


i've made up the table and column names since you didn't provide them.


select *
from
 question q
 join answer a on q.question_id = a.question_id
where
 a.answer_date > date_sub(now(), interval 30 day)
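
if you want each question listed exactly once, judged only by its latest
answer, a grouped variant (same made-up names) should also work:

select q.question_id, max(a.answer_date) as last_answered
from
 question q
 join answer a on q.question_id = a.question_id
group by q.question_id
having last_answered > date_sub(now(), interval 30 day)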


--
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/[EMAIL PROTECTED]



Re: convert varchar to char

2005-08-16 Thread Jon Drukman

Pooly wrote:

Damnit !
Thanks for pointing it, I forgot these things.
But it's a bit more subtle :
If any column in a table has a variable length, the entire row becomes
variable-length as a result. Therefore, if a table contains any
variable-length columns (VARCHAR, TEXT, or BLOB), all CHAR  columns
longer than three characters are changed to VARCHAR columns.

I get it working with :
ALTER TABLE sessions MODIFY id char(32) NOT NULL DEFAULT '', MODIFY ip
char(8) NOT NULL DEFAULT '0';
show create table sessions;
CREATE TABLE `sessions` (
  `id` char(32) NOT NULL default '',
  `user_id` int(6) NOT NULL default '0',
  `ip` char(8) NOT NULL default '0',
  `lastseen` timestamp NOT NULL default CURRENT_TIMESTAMP on update
CURRENT_TIMESTAMP,
  `expire` timestamp NOT NULL default '0000-00-00 00:00:00',
  PRIMARY KEY  (`id`),
  KEY `user_id2` (`user_id`)
) ENGINE=MyISAM DEFAULT CHARSET=latin1

and now I've got fixed-length rows !


or you could have just done:

alter table sessions row_format=fixed;
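
(if in doubt, SHOW TABLE STATUS LIKE 'sessions'; should now report Fixed
in the Row_format column.)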

-jsd-


--
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/[EMAIL PROTECTED]



slow count(1) behavior with large tables

2005-07-15 Thread Jon Drukman

i'm trying to run this query:

SELECT COUNT(1) FROM MSGS m, MBOARD b WHERE b.BaseType = 0 AND m.BoardID 
= b.BoardID;


MSGS has 9.5 million rows, and is indexed on BoardID
MBOARD has 69K rows and is indexed on BaseType

EXPLAIN shows:

mysql> explain SELECT COUNT(1) FROM MSGS m, MBOARD b WHERE b.BaseType = 
0 AND m.BoardID = b.BoardID;

+-------+------+------------------+------------+---------+-----------+-------+-------------+
| table | type | possible_keys    | key        | key_len | ref       | rows  | Extra       |
+-------+------+------------------+------------+---------+-----------+-------+-------------+
| b     | ref  | PRIMARY,BaseType | BaseType   |       1 | const     | 48614 |             |
| m     | ref  | BoardIndex       | BoardIndex |       4 | b.BoardID |   277 | Using index |
+-------+------+------------------+------------+---------+-----------+-------+-------------+

the query takes several minutes to run.  shouldn't this be a simple case 
of doing some math on index values?


-jsd-


--
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/[EMAIL PROTECTED]



Re: slow count(1) behavior with large tables

2005-07-15 Thread Jon Drukman

Andrew Braithwaite wrote:

Hi,

You're doing a join on 'BoardID' on the tables MSGS and MBOARD.  Is the
BoardID field indexed on the MSGS table too?  If not then that may be your
problem.



MSGS.BoardID is indexed, and the EXPLAIN output I included in the 
original message shows that it is indeed being used:



| m     | ref  | BoardIndex       | BoardIndex |       4 | b.BoardID |   277 | Using index |



--
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/[EMAIL PROTECTED]



Re: slow count(1) behavior with large tables

2005-07-15 Thread Jon Drukman

Andrew Braithwaite wrote:

Sorry, I meant to say is the 'BoardID' field indexed on the MBOARD table
too?


yes, BoardID is the primary key.  BaseType is also indexed.

from the EXPLAIN output i can see that mysql is choosing to use BaseType 
as the index for MBOARD (as we know, mysql can only use one index per 
table.)


i guess that means it has to do the join without an index.  that might 
be why it's slow.  i wonder if that can be worked around?
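
one untested experiment along those lines: add a composite index on MBOARD
covering both the filter column and the join column, so the single index
mysql picks can serve both:

ALTER TABLE MBOARD ADD INDEX BaseBoard (BaseType, BoardID);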



--
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/[EMAIL PROTECTED]



Problem with query

2005-05-24 Thread Jon Miller
Quite new to MySQL and queries in general and would like some help with the 
following query:
select prDate, prName, prLEmployee, prDescription, prTotalHours, prLStatus, 
prCompletionDate, prActive from tProject where prDate >= 2005-05-09

It generates a listing that has years from 2001 to present.  All I'm looking 
for is information starting from 2005-05-09 to the present.

Thanks

-- 
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/[EMAIL PROTECTED]

Re: Problem compiling mysql 4.1.11 on AIX 5.1

2005-05-09 Thread Jon Earle

Joerg Bruehe said:
 Still, this seems to be a problem with the header files supplied / used
 by gcc. Are you sure you used the fixincludes script?

Hi Joerg,

I tried your suggestion as per:

cd /opt/freeware/lib/gcc-lib/powerpc-ibm-aix5.1.0.0/3.3.2/
mv include inc
cd install-tools
export TARGET_MACHINE=AIX
./fixinc.sh /opt/freeware/lib/gcc-lib/powerpc-ibm-aix5.1.0.0/3.3.2/include

From fixinc.sh, I was getting errors saying that TARGET_MACHINE was not
defined, so I took a stab in the dark and set it to AIX (couldn't find any
docs to suggest anything else...)  It seemed to run to completion after that.

When I ran MySQL's configure, modified as per a number of posts regarding
openssl directives to be:

./configure \
--prefix=/usr2/tools/mysql \
--with-big-tables \
--with-low-memory \
--with-vio \
--with-openssl \
--with-openssl-includes=/usr2/tools/openssl/include \
--with-openssl-libs=/usr2/tools/openssl/lib \
--without-extra-tools \
--without-docs \
--without-bench \
--enable-local-infile

I then received many errors of the form:

...
checking dlfcn.h usability... no
checking dlfcn.h presence... yes
configure: WARNING: dlfcn.h: present but cannot be compiled
configure: WARNING: dlfcn.h: check for missing prerequisite headers?
configure: WARNING: dlfcn.h: see the Autoconf documentation
configure: WARNING: dlfcn.h: section Present But Cannot Be Compiled
configure: WARNING: dlfcn.h: proceeding with the preprocessor's result
configure: WARNING: dlfcn.h: in the future, the compiler will take precedence
configure: WARNING: ## -- ##
configure: WARNING: ## Report this to the AC_PACKAGE_NAME lists.  ##
configure: WARNING: ## -- ##
checking for dlfcn.h... yes
...

The config.log showed (for the above error, similar messages were emitted for
other errors):

...
configure:5338: checking dlfcn.h usability
configure:5350: gcc -c -Wa,-many -maix64 conftest.c >&5
In file included from conftest.c:23:
/opt/freeware/lib/gcc-lib/powerpc-ibm-aix5.1.0.0/3.3.2/include/stdio.h:268:
error: parse error before __gnuc_va_list
/opt/freeware/lib/gcc-lib/powerpc-ibm-aix5.1.0.0/3.3.2/include/stdio.h:269:
error: parse error before __gnuc_va_list
/opt/freeware/lib/gcc-lib/powerpc-ibm-aix5.1.0.0/3.3.2/include/stdio.h:270:
error: parse error before __gnuc_va_list
/opt/freeware/lib/gcc-lib/powerpc-ibm-aix5.1.0.0/3.3.2/include/stdio.h:272:
error: parse error before __gnuc_va_list
In file included from conftest.c:23:
/opt/freeware/lib/gcc-lib/powerpc-ibm-aix5.1.0.0/3.3.2/include/stdio.h:474:
error: parse error before __gnuc_va_list
/opt/freeware/lib/gcc-lib/powerpc-ibm-aix5.1.0.0/3.3.2/include/stdio.h:475:
error: parse error before __gnuc_va_list
/opt/freeware/lib/gcc-lib/powerpc-ibm-aix5.1.0.0/3.3.2/include/stdio.h:476:
error: parse error before __gnuc_va_list
...

The last configure message was:

...
checking for char... no
checking size of char... 0
configure: error: No size for char type.
A likely cause for this could be that there isn't any
static libraries installed. You can verify this by checking if you have libm.a
in /lib, /usr/lib or some other standard place.  If this is the problem,
install the static libraries and try again.  If this isn't the problem,
examine config.log for possible errors.  If you want to report this, use
'scripts/mysqlbug' and include at least the last 20 rows from config.log!

So, it appears I've made things worse.  :(

Any ideas where to go from here?

Cheers!
Jon


-- 
Jon Earle
Software Developer / Network Manager
Specialising in Open Source Software Solutions
http://kronos.honk.org/~earlej/

Rocket science is a lot more fun when you actually have rockets.  -- US
Navy Ad


-- 
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/[EMAIL PROTECTED]



MySQL on AIX 5.1

2005-05-09 Thread Jon Earle
Hi folks,

I need to get a version of MySQL running on AIX 5.1, but I see that the only
versions available are for v5.2 and v4.3.3.  I tried both versions (32 and
64-bit) for 5.2 on my 5.1 box, but received this error when trying to
install:

[EMAIL PROTECTED] mysql] bin/resolveip
exec(): 0509-036 Cannot load program bin/resolveip because of the following
errors:
0509-130 Symbol resolution failed for resolveip because:
0509-136   Symbol _isinf (number 44) is not exported from
   dependent module /usr/lib/libc.a(shr.o).
0509-192 Examine .loader section symbols with the
 'dump -Tv' command.

Is it possible to get a current working version for 5.1?  For the life of me,
I cannot get 4.1.11 to compile under this OS using gcc, and not having another
compiler at hand, I'm kinda down a dead-end road.

Note:  I just installed 4.1.10a for AIX 4.3.3 and resolveip seems to work. 
Would still like to get a current version (4.1.11 for AIX5.1), or get it to
compile.

Cheers!
Jon

-- 
Jon Earle
Software Developer / Network Manager
Specialising in Open Source Software Solutions
http://kronos.honk.org/~earlej/

Rocket science is a lot more fun when you actually have rockets.  -- US
Navy Ad


-- 
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/[EMAIL PROTECTED]



index slowing things down?

2005-05-06 Thread Jon Beyer
I was playing around with a table that had 100 tinyint fields.  Each  
record contained either a '0' or a '1' for each field.

A query such as SELECT * FROM `foobar` WHERE `f01` =1 AND `f02` =1 AND  
`f03` =1 AND `f04` =1 AND `f05` =1 AND `f06` =1 AND `f07` =1 AND `f08` =1  
AND `f09` =1 AND `f10` =1  runs in 0.07 seconds with 200,000 records.

When I add indexes to fields f01 and f02, the query now takes 0.23  
seconds.  Is this expected?  Is this simply the time required to load the  
index?  Are indexes simply not designed to work with something of such low  
cardinality?
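
(One experiment that might be worth trying -- a single composite index over
the columns tested together, instead of separate single-column indexes; the
index name here is made up:)

ALTER TABLE foobar ADD INDEX f01_f10
  (f01, f02, f03, f04, f05, f06, f07, f08, f09, f10);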

Thanks,
Jon
--
Chance favors only a prepared mind.
Jon Beyer
302N Dod Hall
Princeton University
Princeton, NJ 08544
609 986 8722
--
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/[EMAIL PROTECTED]


Problem compiling mysql 4.1.11 on AIX 5.1

2005-05-06 Thread Jon Earle
[1]: *** [all-recursive] Error 1
make[1]: Leaving directory `/usr1/acct/jonathan/ostk/build/mysql-4.1.11'
make: *** [all] Error 2

I then did a make distclean, changed CXX from gcc to g++, reran configure and
make.  This time I got as far as the link stage, where I got a rash of
undefined OpenSSL symbols.  I only have static openssl libs, plus they're
64-bit, so I'm currently a) rebuilding openssl to include shared libs and b)
adding the right option to build mysql in 64-bit mode.  I will see if I get
further.

As of now, I believe my Qs are:

1. What is the correct usage of curses, term and termcap?  Which do I need? 
And why don't the MySQL folks run into this problem - what's their AIX 5.1
environment like that gives them a smooth compilation?

2. The docs say that using CXX=gcc (to use gcc as a c++ compiler) is
preferable.  Why does it seem to fail for me, forcing me to use the true
g++ compiler?

Many thanks!
Jon

-- 
Jon Earle
Software Developer / Network Manager
Specialising in Open Source Software Solutions
http://kronos.honk.org/~earlej/

Rocket science is a lot more fun when you actually have rockets.  -- US
Navy Ad


-- 
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/[EMAIL PROTECTED]



RE: Out of memory, but plenty of swap space left

2005-04-08 Thread Jon Wagoner
Heikki, 

Yes, I'm running MySQL on 32-bit Linux.

I think maybe something had just gotten corrupted.  MySQL restarted
itself yesterday, with the following in the error log:

050407 16:24:49 [ERROR] Out of memory; check if mysqld or some other
process uses all available memory; if not, you may have to use 'ulimit'
to allow mysqld to use more memory or you can add more swap space
mysqld got signal 11;
This could be because you hit a bug. It is also possible that this
binary
or one of the libraries it was linked against is corrupt, improperly
built,
or misconfigured. This error can also be caused by malfunctioning
hardware.
We will try our best to scrape up some info that will hopefully help
diagnose
the problem, but since we have already crashed, something is definitely
wrong
and this may fail.

key_buffer_size=536870912
read_buffer_size=2093056
max_used_connections=201
max_connections=200
threads_connected=49
It is possible that mysqld could use up to
key_buffer_size + (read_buffer_size + sort_buffer_size)*max_connections
= 1342686 K
bytes of memory
Hope that's ok; if not, decrease some variables in the equation.

thd=0x59d064a0
Attempting backtrace. You can use the following information to find out
where mysqld died. If you see no messages after this, something went
terribly wrong...
frame pointer (ebp) is NULL, did you compile with
-fomit-frame-pointer? Aborting backtrace!
Trying to get some variables.
Some pointers may be invalid and cause the dump to abort...
thd->query at 0x59f907f8  is invalid pointer
thd->thread_id=68571
The manual page at http://www.mysql.com/doc/en/Crashing.html contains
information that should help you find out what is causing the crash.
050407 16:24:51  InnoDB: Database was not shut down normally!

 

Unless you tell me different, I'll just plan on upgrading to 4.1.11 (I'm
still running 4.1.8) 

-Original Message-
From: Heikki Tuuri [mailto:[EMAIL PROTECTED] 
Sent: Friday, April 08, 2005 12:24 AM
To: mysql@lists.mysql.com
Subject: Re: Out of memory, but plenty of swap space left

John,

are you running on a 32-bit computer? Then, normally, the process size
is 
limited to 2 GB.

Best regards,

Heikki Tuuri
Innobase Oy
Foreign keys, transactions, and row level locking for MySQL
InnoDB Hot Backup - a hot backup tool for InnoDB which also backs up
MyISAM 
tables
http://www.innodb.com/order.php


- Original Message - 
From: Jon Wagoner [EMAIL PROTECTED]
Newsgroups: mailing.database.myodbc
Sent: Thursday, April 07, 2005 7:31 PM
Subject: Out of memory, but plenty of swap space left


 Recently I've been getting error 1041 at times, which gives the
message
 to update the ulimit or add swap space.

 This is a dedicated MySQL server, so I have no ulimit set for MySQL.
 According to /proc/meminfo I have over 1GB of swap free.
 Mem:  2118533120 2067628032 509050880 19677184 1659768832
 Swap: 1998733312 686882816 1311850496
 MemTotal:  2068880 kB
 MemFree: 49712 kB
 MemShared:   0 kB
 Buffers: 19216 kB
 Cached:1320684 kB
 SwapCached: 300184 kB
 Active: 943996 kB
 Inactive:  1008424 kB
 HighTotal: 1179484 kB
 HighFree: 2044 kB
 LowTotal:   889396 kB
 LowFree: 47668 kB
 SwapTotal: 1951888 kB
 SwapFree:  1281104 kB

 The mysqld section of my.cnf contains the following:

 log-bin
 server-id  = 106
 basedir = /usr
 datadir = /var/lib/mysql
 tmpdir  = /tmp
 language= /usr/share/mysql/english
 skip-locking
 set-variable= key_buffer=512M
 set-variable= max_allowed_packet=1G
 set-variable= table_cache=3072
 set-variable= sort_buffer=2M
 set-variable= record_buffer=2M
 set-variable= thread_cache=8
 set-variable= thread_concurrency=8
 set-variable= myisam_sort_buffer_size=64M
 set-variable= thread_stack=128K
 set-variable= open_files_limit=8192
 set-variable= tmp_table_size=50M
 max_tmp_tables = 100
 innodb_data_home_dir = /var/lib/mysql/
 innodb_data_file_path = ibdata1:10M:autoextend
 innodb_log_group_home_dir = /var/lib/mysql/
 innodb_log_arch_dir = /var/lib/mysql/
 set-variable = innodb_buffer_pool_size=384M
 set-variable = innodb_additional_mem_pool_size=20M
 set-variable = innodb_log_file_size=5M
 set-variable = innodb_log_buffer_size=8M
 innodb_flush_log_at_trx_commit=1
 set-variable = innodb_lock_wait_timeout=50

 long_query_time=3D30
 query_cache_limit=3D1M
 query_cache_size=3D64M
 query_cache_type=3D1
 max_connections=3D200

 Does anyone have any suggestions as to why I'm getting out of memory
 errors?  Do I have some of the settings wrong?
 If it matters, I have about 50GB worth of data, split between InnoDB
and
 MyISAM tables.  I last got the error updating records in one of the
 MyISAM tables which was about 1MB in size.

 -- 
 MySQL General Mailing List
 For list archives: http://lists.mysql.com/mysql
 To unsubscribe

Out of memory, but plenty of swap space left

2005-04-07 Thread Jon Wagoner
Recently I've been getting error 1041 at times, which gives the message
to update the ulimit or add swap space.  

This is a dedicated MySQL server, so I have no ulimit set for MySQL.
According to /proc/meminfo I have over 1GB of swap free.
Mem:  2118533120 2067628032 509050880 19677184 1659768832
Swap: 1998733312 686882816 1311850496
MemTotal:  2068880 kB
MemFree: 49712 kB
MemShared:   0 kB
Buffers: 19216 kB
Cached:1320684 kB
SwapCached: 300184 kB
Active: 943996 kB
Inactive:  1008424 kB
HighTotal: 1179484 kB
HighFree: 2044 kB
LowTotal:   889396 kB
LowFree: 47668 kB
SwapTotal: 1951888 kB
SwapFree:  1281104 kB

The mysqld section of my.cnf contains the following:

log-bin
server-id  = 106
basedir = /usr
datadir = /var/lib/mysql
tmpdir  = /tmp
language= /usr/share/mysql/english
skip-locking
set-variable= key_buffer=512M
set-variable= max_allowed_packet=1G
set-variable= table_cache=3072
set-variable= sort_buffer=2M
set-variable= record_buffer=2M
set-variable= thread_cache=8
set-variable= thread_concurrency=8
set-variable= myisam_sort_buffer_size=64M
set-variable= thread_stack=128K
set-variable= open_files_limit=8192
set-variable= tmp_table_size=50M
max_tmp_tables = 100
innodb_data_home_dir = /var/lib/mysql/
innodb_data_file_path = ibdata1:10M:autoextend
innodb_log_group_home_dir = /var/lib/mysql/
innodb_log_arch_dir = /var/lib/mysql/
set-variable = innodb_buffer_pool_size=384M
set-variable = innodb_additional_mem_pool_size=20M
set-variable = innodb_log_file_size=5M
set-variable = innodb_log_buffer_size=8M
innodb_flush_log_at_trx_commit=1
set-variable = innodb_lock_wait_timeout=50

long_query_time=30
query_cache_limit=1M
query_cache_size=64M
query_cache_type=1
max_connections=200

Does anyone have any suggestions as to why I'm getting out of memory
errors?  Do I have some of the settings wrong?
If it matters, I have about 50GB worth of data, split between InnoDB and
MyISAM tables.  I last got the error updating records in one of the
MyISAM tables which was about 1MB in size.
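
(Working through the settings above with the server's own formula: 512M
key_buffer + (2M sort_buffer + 2M record_buffer) * 200 connections comes
to about 1312M, before adding the 384M InnoDB buffer pool, the 64M query
cache and per-thread stacks -- uncomfortably close to the ~2GB per-process
ceiling of a 32-bit build if enough connections get busy at once.)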

--
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/[EMAIL PROTECTED]



RE: Query question

2005-04-07 Thread Jon Wagoner
SELECT product_lines.* FROM product_lines LEFT JOIN
manufacturer_product_line_index ON
manufacturer_product_line_index.product_line_id = product_lines.id WHERE
manufacturer_product_line_index.product_line_id IS NULL
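
An equivalent formulation, where subqueries are available (MySQL 4.1 and
up), would be:

SELECT * FROM product_lines
WHERE id NOT IN
  (SELECT product_line_id FROM manufacturer_product_line_index)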

-Original Message-
From: Ed Lazor [mailto:[EMAIL PROTECTED] 
Sent: Thursday, April 07, 2005 11:39 AM
To: mysql@lists.mysql.com
Subject: Query question

Three tables like this:

--
product_lines
--
id
title

--
manufacturer
--
id
title

--
manufacturer_product_line_index
--
id
product_line_id
manufacturer_id


The index provides a one to many relationship - one product line can be
associated with more than one company.

How do I get a list of product lines not in the index?

Thanks,

Ed


-- 
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/[EMAIL PROTECTED]


--
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/[EMAIL PROTECTED]



Re: recovery of a very large table?

2005-04-07 Thread jon
Not a bad suggestion... but when I try it, I get the following output:
Checking MyISAM file: theTable
Data records: 22906970   Deleted blocks:   0
myisamchk: warning: Table is marked as crashed and last repair failed
- check file-size
myisamchk: error: Size of indexfile is: 2049552384   Should be: 19229444096
- check record delete-chain
- check key delete-chain
- check index reference
- check data record references index: 1
myisamchk: error: Can't read indexpage from filepos: -1
- check records and index references
myisamchk: error: Record at:  0  Can't find key for index:  1
MyISAM-table 'theTable' is corrupted
Fix it using switch -r or -o


Renato Golin wrote:
On Wednesday 06 April 2005 20:05, jon wrote:
Normal recovery seems to grab 490 rows... but, originally there were
some 22 million rows in there.

Seems your data file was corrupted too, not only the indexes, and it probably broke 
when updating the 491st record... try using myisamchk -e

  -e, --extend-check  Try to recover every possible row from the data file
  Normally this will also find a lot of garbage rows;
  Don't use this option if you are not totally desperate.
it could take a very long time to run also... be warned! ;)
--rengolin
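
(For the record, the repair variant of that option -- run it against a copy
of the table files first; the datadir path is an assumption:)

cd /var/lib/mysql/thedb
cp theTable.MYD theTable.MYD.bak
myisamchk -r -e theTable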


--
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/[EMAIL PROTECTED]


recovery of a very large table?

2005-04-06 Thread jon
Hey folks...
While we weren't paying attention, one of the tables we were logging to 
got big. Really big... like over 2 gigs... and then the server crashed 
and the data became corrupt.

'Course, we'd like that data back...
Normal recovery seems to grab 490 rows... but, originally there were 
some 22 million rows in there.

So far, I've tried:
myisamchk -f tableName
myisamchk -o tableName
myisamchk --unpack tableName
Same result, every time. Weird, eh?
Some corruption is no problem. (We'll take what we can get.)
I believe that the table was packed up at some point, but I'm not sure.
So... what are my options here?
Machine is linux - using ubuntu on the recovery box.
-- jon
--
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/[EMAIL PROTECTED]


  1   2   3   4   >