RE: InnoDB: Assertion failure - MySQL keeps crashing

2007-06-22 Thread David Griffiths
Did you see this part of the stack trace?

It is possible that mysqld could use up to key_buffer_size +
(record_buffer + sort_buffer)*max_connections = 182271 K bytes of memory
Hope that's ok, if not, decrease some variables in the equation 

How much memory (MyISAM and InnoDB) are you allocating to the database?
Do you have enough memory on the machine?
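
If it helps, something like this from the mysql client will show the relevant settings (a quick sketch - these SHOW commands exist in both 4.x and 5.x):

SHOW VARIABLES LIKE 'key_buffer_size';
SHOW VARIABLES LIKE 'innodb_buffer_pool_size';
SHOW VARIABLES LIKE 'innodb_additional_mem_pool_size';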

David



-Original Message-
From: Julien Marchand [mailto:[EMAIL PROTECTED] 
Sent: June 22, 2007 4:33 AM
To: mysql@lists.mysql.com
Subject: InnoDB: Assertion failure - MySQL keeps crashing

  Hello,



   My MySQL keeps crashing and restarting, and I get this log, which
repeats indefinitely:



   Number of processes running now: 0

   070622 13:01:46  mysqld restarted

   070622 13:01:46  InnoDB: Out of memory in additional memory pool.

   InnoDB: InnoDB will start allocating memory from the OS.

   InnoDB: You may get better performance if you configure a bigger

   InnoDB: value in the MySQL my.cnf file for

   InnoDB: innodb_additional_mem_pool_size.

   070622 13:01:46  InnoDB: Starting an apply batch of log records to
the database...

   InnoDB: Progress in percents: 11 12 13 14 15 16 17 18 19 20 21 22 23
24 25 26 27 28 29 30 31 32 33 34 35 36 37 38 39 40 41 42 43 44 45 46 47
48 49 50 51 52 53 54 55 56 57 58 59 60 61 62 63 64 65 66 67 68 69 70 71
72 73 74 75 76 77 78 79 80 81 82 83 84 85 86 87 88 89 90 91 92 93 94 95
96 97 98 99

   InnoDB: Apply batch completed

   070622 13:01:47  InnoDB: Started

   /usr/libexec/mysqld: ready for connections

   070622 13:01:48  InnoDB: Assertion failure in thread 114696 in file
fsp0fsp.c line 2945

   InnoDB: We intentionally generate a memory trap.

   InnoDB: Send a detailed bug report to mysql@lists.mysql.com

   mysqld got signal 11;
   This could be because you hit a bug. It is also possible that this
binary or one of the libraries it was linked against is corrupt,
improperly built, or misconfigured. This error can also be caused by
malfunctioning hardware.

   We will try our best to scrape up some info that will hopefully help
diagnose the problem, but since we have already crashed, something is
definitely wrong and this may fail



   key_buffer_size=8388600

   record_buffer=131072

   sort_buffer=2097144

   max_used_connections=0

   max_connections=80

   threads_connected=0

   It is possible that mysqld could use up to key_buffer_size +
(record_buffer + sort_buffer)*max_connections = 182271 K bytes of memory
   Hope that's ok, if not, decrease some variables in the equation



   Attempting backtrace. You can use the following information to find
out where mysqld died. If you see no messages after this, something went
terribly wrong...

   Cannot determine thread, fp=0x42d36f98, backtrace may not be correct.

   Stack range sanity check OK, backtrace follows:

   0x80ee059

   0x4a50efd6

   0x8273f32

   0x8234054

   0x8234654

   0x8234874

   0x823589b

   0x82030ca

   0x820338b

   0x81f00e7

   0x81efce7

   0x8235d70

   0x81f4c15

   0x4a509cce

   0x4a463b3a

   New value of fp=(nil) failed sanity check, terminating stack trace!

   Please read http://www.mysql.com/doc/U/s/Using_stack_trace.html and
follow instructions on how to resolve the stack trace. Resolved stack
trace is much more helpful in diagnosing the problem, so please do
resolve it. The manual page at http://www.mysql.com/doc/C/r/Crashing.html
contains information that should help you find out what is causing the
crash.



   Number of processes running now: 0

   070622 13:01:48  mysqld restarted

   070622 13:01:48  InnoDB: Out of memory in additional memory pool.

   InnoDB: InnoDB will start allocating memory from the OS.

   InnoDB: You may get better performance if you configure a bigger

   InnoDB: value in the MySQL my.cnf file for

   InnoDB: innodb_additional_mem_pool_size.

   070622 13:01:48  InnoDB: Database was not shut down normally.

   InnoDB: Starting recovery from log files...

   InnoDB: Starting log scan based on checkpoint at

   InnoDB: log sequence number 0 974975241

   InnoDB: Doing recovery: scanned up to log sequence number 0 974975241

   070622 13:01:48  InnoDB: Flushing modified pages from the buffer
pool...

   070622 13:01:48  InnoDB: Started

   /usr/libexec/mysqld: ready for connections

   070622 13:01:49  InnoDB: Assertion failure in thread 114696 in file
fsp0fsp.c line 2945

   InnoDB: We intentionally generate a memory trap.

   InnoDB: Send a detailed bug report to mysql@lists.mysql.com

   mysqld got signal 11; ..





   When I comment out the InnoDB configuration line, InnoDB doesn't
start and MySQL can start normally.



   But all the InnoDB tables can't be accessed...



   Do you have an idea of how to fix this problem? Sorry for my
English, but I'm French ^_^



   Thanks :)




 


RE: Re : InnoDB: Assertion failure - MySQL keeps crashing

2007-06-22 Thread David Griffiths
It doesn't look like a memory issue, but only you can tell (tune some of your 
parameters to make sure). What does top tell you? It's not just how much 
memory your machine has, but how much free memory there is.

Did you compile from source, or use a binary distribution? If you used a 
binary distribution, you might want to try compiling from source. Not because 
a source build is necessarily better, but the compiler or linker might surface 
a problem if the crash has something to do with your operating-system 
environment...

I haven't seen this issue before, so I am just making informed guesses...

Let me know how it goes.

David

-Original Message-
From: Julien Marchand [mailto:[EMAIL PROTECTED] 
Sent: June 22, 2007 12:43 PM
To: David Griffiths
Cc: mysql@lists.mysql.com
Subject: Re : InnoDB: Assertion failure - MySQL keeps crashing



Yes - and I also saw "InnoDB: Out of memory in additional memory pool.", so I 
gave more memory to InnoDB... I don't have the out-of-memory message any more, 
but the other errors are still here... and the problem too :/


 


Here is my.cnf (with the skip-innodb, which lets MySQL start properly for the 
hosted websites :/)


 


[mysqld]
safe-show-database
skip-networking
#innodb_data_file_path=ibdata1:50M:autoextend
datadir=/var/lib/mysql
socket=/var/lib/mysql/mysql.sock
set-variable = connect_timeout=20
set-variable = max_connections=80
set-variable = long_query_time=15
log-slow-queries = /var/log/mysql-slow-query.log

skip-innodb
innodb_data_file_path = ibdata1:10M:autoextend
set-variable = innodb_buffer_pool_size=50M
#set-variable = lower_case_table_names=0
set-variable = innodb_additional_mem_pool_size=50M
set-variable = innodb_file_io_threads=4
set-variable = innodb_lock_wait_timeout=50

[mysql.server]
user=mysql
basedir=/var/lib

[safe_mysqld]
err-log=/var/log/mysqld.log
pid-file=/var/run/mysqld/mysqld.pid


I have 512 MB RAM and 512 MB swap... thanks :)




- Original Message -
From: David Griffiths [EMAIL PROTECTED]
To: Julien Marchand [EMAIL PROTECTED]; mysql@lists.mysql.com
Sent: Friday, June 22, 2007, 21:18:01
Subject: RE: InnoDB: Assertion failure - MySQL keeps crashing

Did you see this part of the stack trace?

It is possible that mysqld could use up to key_buffer_size + (record_buffer + 
sort_buffer)*max_connections = 182271 K bytes of memory Hope that's ok, if not, 
decrease some variables in the equation 

How much memory (MyISAM and InnoDB) are you allocating to the database?
Do you have enough memory on the machine?

David



-Original Message-
From: Julien Marchand [mailto:[EMAIL PROTECTED]
Sent: June 22, 2007 4:33 AM
To: mysql@lists.mysql.com
Subject: InnoDB: Assertion failure - MySQL keeps crashing

  Hello,

   <snip>crash log identical to the one quoted at the top of this thread</snip>

RE: Object-Oriented database

2007-05-04 Thread David Griffiths
This really isn't what he's talking about - rather than storing data as
rows in tables, you store objects and their methods.

MySQL does not support this; you can get this sort of functionality
using something like Hibernate (an Object-Relational-Mapping tool),
which is free but has a learning curve.

David

-Original Message-
From: Jerry Schwartz [mailto:[EMAIL PROTECTED] 
Sent: May 4, 2007 6:36 AM
To: 'sam rumaizan'; mysql@lists.mysql.com
Subject: RE: Object-Oriented database 

If you serialize the object, you can store it in the database as a
BLOB.

Regards,

Jerry Schwartz
Global Information Incorporated
195 Farmington Ave.
Farmington, CT 06032

860.674.8796 / FAX: 860.674.8341


 -Original Message-
 From: sam rumaizan [mailto:[EMAIL PROTECTED]
 Sent: Friday, May 04, 2007 6:27 AM
 To: mysql@lists.mysql.com
 Subject: Object-Oriented database

 Is there such a thing as an object-oriented database for MySQL?
 Basically, can I store an item as an object in MySQL?














Re: tool to migrate data from oracle to mysql

2007-03-12 Thread David Griffiths
SQLWays also works, and it does stored procedures, etc, but it's 
thousands of dollars (depending on the # of objects in your database).


http://www.ispirer.com/products/

David



Arun Kumar PG wrote:

http://www.mysql.com/products/tools/migration-toolkit/

There is a video presentation here as well for Oracle to MySQL
migration.

Thanks,

- Arun


On 3/12/07, Ananda Kumar [EMAIL PROTECTED] wrote:


Hi Friends,
Does anybody know of any tool available for migrating data from Oracle
to MySQL? Can you please pass on the URL.

regards
anandkl








Re: Oracle to Mysql Sync

2007-03-07 Thread David Griffiths
I'd given some thought to this a while ago. The only way you are going 
to be able to tell if a row changed is to have a date column on every 
Oracle table that indicates the last time the data changed.


You'll need some program to start up that knows the last time it ran, 
and the current date, and look for any changed rows between those two 
dates/times. All dates/times need to come from the database (not the OS).


After it finishes, the current date that the code generated  needs to be 
saved somewhere for the next iteration.


This won't work when rows are deleted from the database. If you are 
lucky, and this never happens, it's not an issue. If it only happens on 
a few tables, you can put triggers on that table (ON DELETE) to generate 
delete-statements to store in some log-type-table that your program can 
then execute on the mysql database (and then remove the rows from the 
log-type-table). If you have 500 tables that can have rows deleted, then 
you have a bit of work on your hands.
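
As a sketch of that trigger idea (Oracle PL/SQL; the table, column and log-table names here are hypothetical):

-- Log table holding delete statements to replay against MySQL later
CREATE TABLE delete_log (
   logged_at DATE DEFAULT SYSDATE,
   sql_text  VARCHAR2(4000)
);

-- ON DELETE trigger on an example table "orders" (primary key order_id)
CREATE OR REPLACE TRIGGER orders_delete_trg
AFTER DELETE ON orders
FOR EACH ROW
BEGIN
   INSERT INTO delete_log (sql_text)
   VALUES ('DELETE FROM orders WHERE order_id = ' || :OLD.order_id);
END;
/

Your sync program would then read delete_log, execute each statement against MySQL, and clear the rows it has replayed.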


This isn't an elegant or simple solution, but I don't know of any 
application or tool that can be used to watch Oracle tables and apply 
the changes to a MySQL table in real-time or near real time.


On the flip side, because you will be batching updates/inserts/deletes 
that happen over a period of time (even if it's just 10-15 minutes) and 
applying them to MySQL all at once, you will be putting 10-15 minutes worth 
of Oracle processing onto MySQL all at once. Unfortunately, this isn't an 
apples-to-apples test. The Oracle statement might be

   update some_table set some_column = 'xyz'
   where some_other_column = 'abc'
   and some_third_column_id in (select some_value from some_other_table where some_column = 12)

and the MySQL statements would just be a bunch of updates keyed off the 
primary key. Oracle is doing way more work.


If you need to do an oranges-to-oranges comparison, then unfortunately 
the only thing I can think of is to log statements in a centralized 
location (the Oracle database?) and then replay them in order on the 
MySQL database. Again, this could be quite a bit of work.


The third option is to take the archived redo logs and extract the SQL 
from them, and replay that SQL (assuming your Oracle-SQL is 
ANSI-compliant and doesn't use (+) etc for outer joins, and isn't full 
of Oracle-specific functions, etc). This will only capture updates, 
inserts, deletes, however; any load on your database due to 
selects-statements won't be replayed on the MySQL cluster.


Check out, 
http://download-east.oracle.com/docs/cd/A91202_01/901_doc/server.901/a90117/logminer.htm


Note that you might have a tough time running this every 15 minutes.

All that said, MySQL Cluster is definitely a different beast. No foreign 
keys, and potentially slow selects if the query needs to hit multiple 
clusters to retrieve data. You also need a lot of RAM (2.1 times your 
dataset, if I remember correctly), etc, etc. Some of this goes away in 
MySQL 5.1.


There is a book coming out in a few weeks on the MySQL Cluster, and the 
MySQL Conference in Santa Clara has a set of talks devoted to MySQL Cluster.


David

Shain Miley wrote:

Hello everyone,
I had a quick question... I am looking to move away from our dependence
on Oracle over to using a MySQL Cluster. Due to the complexity of the
move it will have to happen over a period of time; what I would like to
do is keep our MySQL database in sync with our Oracle DB... this would
allow us to prove that the MySQL db can deal with the load, provide the
needed uptime, etc.

Does anyone have any thoughts on this? I don't mean I need this done
once... best case is every time something changes in an Oracle table, the
change will get sent to MySQL... worst case... it needs to be out of sync
by no more than 15 minutes... what do you think, is it possible?

Thanks in advance,

SKM

 





Re: load sharing

2007-02-26 Thread David Griffiths


If you can wait for 5.1 (in beta now), you can use partitioning to store 
a client on a different database in a different geographical site. You'd 
need to partition by region/state (assuming you capture address info). 
If you wanted to do any reporting, however, you'd need to set up a data 
warehouse, and every night do an extract-transform-load (ETL) from the 
regional sites into your main database.
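
For reference, the 5.1 partitioning syntax looks roughly like this (note that the partitions all live inside a single server; the table and region codes below are made up):

CREATE TABLE users (
   user_id   INT NOT NULL,
   region_id INT NOT NULL,
   name      VARCHAR(60),
   PRIMARY KEY (user_id, region_id)
)
PARTITION BY LIST (region_id) (
   PARTITION p_east    VALUES IN (1, 2, 3),
   PARTITION p_central VALUES IN (4, 5),
   PARTITION p_west    VALUES IN (6, 7, 8)
);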


It might make more sense to have mini-sites all over the country - 
database, web and application servers.


Since it sounds like development hasn't started, you can probably go 
with 5.1 - it should be released before summer.


David

Chris W wrote:
I have a potential client that is anticipating rapid growth of a web 
site they want me to build.  Some quick research tells me that there 
is the potential for as many as 50 million users that will access the 
site for an hour or two every day.  All of those users will be located 
in the USA so most of the access will be during the day..  To use the 
web site you will have to have an account and log in.  At this time I 
can't really say how much data will need to be stored about each user.
If this site grows as much as this client thinks, will I need to have 
some kind of load sharing system to access the database?
I was reading in the MySQL manual about the NDB Cluster storage 
engine.  Is this something that would work well in a situation like 
this?  One thing that was mentioned was the possibility of having 
servers in different locations which seems to make the Cluster storage 
engine not a good choice.


Can someone here give some insight and suggest other options I could 
look into?







Re: MySQL docs: Replication Chapter Revamped

2007-02-13 Thread David Griffiths


Very cool - looking forward to reading it with the new replication 
options in 5.1


David

Stefan Hinz wrote:

Replication Chapter Revamped

We have completely reworked the replication chapter for the 5.1 manual.
This targets a number of issues, including a major redesign (to make it
easier to read and find topics), some updates to the information and
processes, and incorporation of numerous bug fixes. You can see the new
chapter here:

* http://dev.mysql.com/doc/refman/5.1/en/replication.html

The main features:

* New layout. We've ripped apart the old chapter format and replaced it
  with a new one. No information has been lost, although a lot of it has
  been moved around.

* We now have four distinct sections:
  - Replication Configuration - includes details on setup (including How
To notes), options and variables, replication formats, and a new
section on common replication tasks.
  - Replication Solutions - this is designed to feature specific
scenarios where replication is used. For example, it contains the
scale-out solution that was in the FAQ, along with specific notes
and guides on backups, splitting replication, and SSL.
  - Replication Notes and Tips - this collects together sections that
were spread about the old structure, including upgrades,
compatibility, known features and issues and the FAQ.
  - Replication Implementation - the innards of the replication system
and how it works.

In all cases we've either rewritten or hugely expanded the information,
and there are also new illustrations with a consistent look and feel to
describe layouts and architecture. The new structure will make it easier
to add new functionality, scenarios and background information. For
example, one other section that is planned, but not in the current
documentation yet, is Replication Topologies. Other planned mprovements,
such as the MBR/SBR/RBR decision table and implicit commit tables now
have a more suitable home in the Replication Implementation section.

Regards,

Stefan
  





Re: JSP/JSTL problem

2007-01-02 Thread David Griffiths


This is a little dangerous as well; if an exception gets thrown, the 
statement doesn't get closed. My sample code is below.


That said, your query looks fine. Add logging to your code to figure out 
what's going on. Run your query by hand against your database; I suspect 
you have an issue with your data.


One final note before the source code - don't use values that have 
real-world meaning as primary keys. If the format of the SSN changes, 
and you create a foreign key back to the cust table, you've added a 
headache. Create auto-increment primary keys that are just value-less 
numbers, and create a unique index on the ssn column.




PreparedStatement stmt = null;
ResultSet rs = null;
try
{
   // Check the SSN to make sure it's a number before you start
   // preparing statements
   Integer iSSN = null;
   try
   {
      iSSN = new Integer(ssn);
   }
   catch (NumberFormatException e)
   {
      out.println("The ssn is not a valid number: " + ssn);
      return;
   }

   stmt = conn.prepareStatement("SELECT ssn, first, last FROM cust WHERE ssn = ?");
   stmt.setInt(1, iSSN.intValue());
   rs = stmt.executeQuery();
   if (rs.next())
   {
      out.println("Customer exists: " + rs.getString(1));
   }
   else
   {
      out.println("The SSN could not be found in the database: " + ssn);
   }
}
catch (Exception e)
{
   // Log it, or whatever
}
finally
{
   // The statements here will always be called, exception or no. By
   // wrapping the .close() statements in a try-catch, you guarantee that
   // each resource will have close called on it before returning
   try
   {
      rs.close();
   }
   catch (Exception e) {}

   try
   {
      stmt.close();
   }
   catch (Exception e) {}

   try
   {
      conn.close();
   }
   catch (Exception e) {}
}

David

murthy gandikota wrote:
I tried everything you suggested. I think it is the way I have set up 
the table in MySQL. This is the table:
+------------+---------------+------+-----+---------+-------+
| Field      | Type          | Null | Key | Default | Extra |
+------------+---------------+------+-----+---------+-------+
| ssn        | int(9)        |      | PRI |         |       |
| submitdate | date          | YES  |     | NULL    |       |
| submitto   | int(3)        | YES  |     | NULL    |       |
| first      | varchar(30)   | YES  |     | NULL    |       |
| last       | varchar(30)   | YES  |     | NULL    |       |
| loanAmt    | decimal(10,2) | YES  |     | NULL    |       |
| company    | int(3)        | YES  |     | NULL    |       |
| fee        | decimal(10,2) | YES  |     | NULL    |       |
| appType    | int(3)        | YES  |     | NULL    |       |
| appSource  | int(3)        | YES  |     | NULL    |       |
| appStatus  | int(3)        | YES  |     | NULL    |       |
| dateStatus | date          | YES  |     | NULL    |       |
| fundedAmt  | decimal(10,2) | YES  |     | NULL    |       |
+------------+---------------+------+-----+---------+-------+


Hassan Schroeder [EMAIL PROTECTED] wrote:  On 1/2/07, murthy gandikota wrote:

ps = con.prepareStatement("select first, last from cust where ssn=?");
int ssnint = Integer.parseInt(ssn.trim());
ps.setInt(1, ssnint);
ResultSet rs = ps.executeQuery();
if ( rs.next()) {
rs.close();
out.println("Customer already exists " + Integer.parseInt(ssn));
return;
}



  

I get the message "customer already exists" for EVERY ssn that I tried.

Not sure how you're actually running this, but it looks dangerous -- if
rs.next() is false, you're not closing that ResultSet object. And the
`return` seems pointless here. What happens if you change that 'if' to
'while', and print out the first, last, ssn results? (and for good
measure change that SELECT statement to 'SELECT ssn, first, last').

For comparison, here's some simple code similar to yours, which works
exactly as expected: if messageId doesn't exist in the DB, it prints out
the "not a valid id" message.

stmt = conn.prepareStatement("SELECT messageText FROM messages WHERE messageId = ?");
stmt.setInt(1, id);


rs = stmt.executeQuery();
if ( rs == null )
{
out.println("null ResultSet<br>");
// never happens :-)
}
if (rs.next())
{
out.println(rs.getString(1) + "<br>");
}
else
{
out.println("not a valid id");
}
rs.close();
stmt.close();
conn.close();

HTH,
  





Re: InnoDB does not preserve AUTO_INCREMENT -- WTF!?

2006-11-10 Thread David Griffiths

Daevid,

That page looks a little misleading.

First, it says it's stored in main memory, not on disk.

Then it says that on server startup, it finds the largest value in the 
table and initializes the counter to that. So it is disk-based on startup, 
and then resides in memory thereafter.


This doesn't work like Oracle. I am not sure how MyISAM does it.

There are a few things you could do.

One solution is to insert a dummy row with an id of 10,000. On server 
startup, the largest value will be 10,000 and it will start incrementing 
past that.


Another solution (a bit more elegant but way more work) is to create 
your own sequence table (like Oracle does). Each row in the table 
represents a sequence, and you can initialize to anything you want.


A common function that returns a primary key value (using SELECT... FOR 
UPDATE on the sequence table) based on a sequence name will do 
essentially the same thing as Oracle does.
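
A minimal sketch of that sequence-table idea (all names are hypothetical; the table needs to be InnoDB so the FOR UPDATE lock works):

CREATE TABLE sequences (
   name       VARCHAR(30) NOT NULL PRIMARY KEY,
   next_value BIGINT      NOT NULL
) ENGINE=InnoDB;

INSERT INTO sequences (name, next_value) VALUES ('order_seq', 10000);

-- Inside a transaction: lock the row, read the value, then bump it
START TRANSACTION;
SELECT next_value FROM sequences WHERE name = 'order_seq' FOR UPDATE;
UPDATE sequences SET next_value = next_value + 1 WHERE name = 'order_seq';
COMMIT;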


David

Ryan Stille wrote:
I came up with a workaround when we encountered this. I don't 
remember exactly (and I don't have access to that code anymore), but I 
think we manually put a piece of code in our SQL setup scripts, before 
any of our insert statements. This 'mysql command' would set the next 
available ID to whatever we wanted; it's just that if you mysqldump the 
database back out, then load it back in, this next-ID setting won't be 
remembered. So we had to add it to our dump/setup script each time. 
You'd have to look in the docs to find what command does this; it's 
something for setting the next auto-increment ID. Hope this helps.


-Ryan
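
(For reference, the statement being described here sounds like ALTER TABLE's AUTO_INCREMENT option - table name hypothetical; note that, per the original complaint, InnoDB still recomputes the counter from the table contents at restart, so this has to be re-run after each dump/reload:)

ALTER TABLE some_table AUTO_INCREMENT = 10000;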

Daevid Vincent wrote:

http://dev.mysql.com/doc/refman/5.0/en/innodb-auto-increment-column.html

We have recently switched several database tables from MyISAM to InnoDB, 
only to find out this colossal design flaw in InnoDB tables.

We ship out MySQL on our appliances in enterprise-level scenarios. We often 
like to start the AUTO_INCREMENT for several tables at 10,000 -- this way we 
can reserve the lower 'block' of IDs for our own internal and 'default' use 
so all customers have the same basic database schema. It also makes our code 
easier to write as we can, in one easy swoop, make the IDs of any record 
< 10,000 immutable, uneditable, etc.

Are there plans to fix what we perceive to be a tremendous bug? Why would 
you make it so that the AUTO_INCREMENT value is stored in memory only?! What 
use is that? I would have preferred if AUTO_INCREMENT threw an error on 
InnoDB tables; this way we would have known this months ago instead of now 
that we're well into this porting of tables, schema and now code.
This is such a subtle but significant change to the table-type that it
should have been made painfully obvious to anyone trying to switch 
types.
You are costing us many man-hours now of re-architecting existing 
code, and

trying to figure out a way to upgrade existing customers.
...And yes, we *DO* pay you your $10,000 support contract.

*sigh*

This is very disappointing.


  









RE: java.util.Date versus java.sql.Date

2006-10-10 Thread David Griffiths

A java.sql.Date does not have time information - just the day-month-year.

A java.util.Date has date and time information.

If you need date and time, use java.sql.Timestamp.

It's not very pretty moving from one to the other.

David


-Original Message-
From: Feliks Shvartsburd [mailto:[EMAIL PROTECTED]
Sent: October 10, 2006 12:04 PM
To: mysql@lists.mysql.com
Subject: java.util.Date versus java.sql.Date


Hello

 

Besides some obvious differences in implementation between util.Date
and sql.Date, are there any other issues, advantages or disadvantages
to using one versus the other?

 

Thanks

 

 






Re: MySQL 5.1

2006-08-28 Thread David Griffiths

The cluster engine has been available since the 4.0 tree, I believe. You can 
begin using it immediately with 5.0 (which is GA).

David

Logan, David (SST - Adelaide) wrote:

Hi Folks,
 
Does anybody have any idea when 5.1 may come to General Release? I am

particularly interested in MySQL Cluster as I have several databases
(around 50) totalling 26Gb and would like to consider moving to this
version because of the Cluster Disk Data Storage and the fact the
current hardware doesn't have the RAM requirements to hold these
databases.
 
I plan to store the data on a SAN and was wondering if anybody had any

experience with this as yet?
 
I have current thoughts along the lines of
 
2 x Pentium 4 1GHz, 1GB RAM as management nodes
4 x Pentium 4 1GHz, 1GB RAM as SQL nodes
2 x Pentium 4 1GHz (x 8 CPUs), 8GB RAM as data nodes
 
The databases are currently running, replicating and serving around 800

queries per second utilising a 100M network.
 
Thanks

---
** _/ **  David Logan 
***   _/ ***  ITO Delivery Specialist - Database

*_/*  Hewlett-Packard Australia Ltd
_/_/_/  _/_/_/    E-Mail: [EMAIL PROTECTED]
   _/  _/  _/  _/     Desk:   +618 8408 4273
  _/  _/  _/_/_/  Mobile: 0417 268 665
*_/   **
**  _/    Postal: 148 Frome Street,

   _/ **  Adelaide SA 5001
  Australia 
invent   
---
 






Re: Need to speed up deletes

2006-06-21 Thread David Griffiths

Is the table heavily indexed? Indexes cause inserts and deletes (and updates 
under certain conditions) to slow down.

Also, since you are deleting approx 2.5% of your rows, your delete-statement must have a where-clause. You might want to do an 
explain-plan on it to make sure it's not an optimization issue.


I would look at your disk drives and their interface (ATA, SATA, SCSI). I'm 
sure there is a fair bit of disk-writing going on.

You can try to drop some un-needed indexes (if there are any) and see if that helps. You could also drop all indexes except the 
one with the least cardinality in the where-clause of your delete script. That might not be feasible, however.
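
A sketch of that drop/recreate cycle (the index, column and table names here are hypothetical):

-- Drop secondary indexes the DELETE's where-clause doesn't use
ALTER TABLE big_table DROP INDEX idx_a, DROP INDEX idx_b;

DELETE FROM big_table WHERE created < '2006-01-01';

-- Recreate them afterwards in one rebuild pass
ALTER TABLE big_table ADD INDEX idx_a (col_a), ADD INDEX idx_b (col_b);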


Finally, try converting the table to InnoDB (you'll need to do some configuration in your my.cnf) - it tends to perform better in 
circumstances such as yours. I would do this on a test server first.


David



mos wrote:
I need to delete 200,000 rows from an 8 million row MyISAM table using 
an index. It's taking 20 minutes which is quite slow. Is there any way 
to speed it up? TIA


Mike
MySQL 4.1.10







Re: MySQL documentor - not MySQL Workbench

2006-06-01 Thread David Griffiths

Embarcadero, who in my humble opinion make the best database tools around, have 
added MySQL to some of their tools.

Their tools (DBArtisan for database management, and ER Studio for diagramming and change-management) are phenomenal in quality and 
features. They are also very very expensive.


http://embarcadero.com/products/erstudio/erdatasheet.html

David

Miles Thompson wrote:


Is there a tool, preferably open source, that can read database schema 
scripts (for lack of a better  term) or connect to the database, and 
generate a diagram? This for a MySQL 5.x database.


I've been working with MySQL Workbench, and if I used it as a dumb 
device it was OK. As soon as I started adding foreign keys - BLOOM! 
BLOOM! - lines and labels everywhere. The schema code it generated needed 
a lot of editing as well.


This is expecting rather a lot, but thought I would ask.

Regards - Miles Thompson







Re: i'm at a complete loss?

2006-05-29 Thread David Griffiths

Log into the server, and type "use mysql;" without the quotes.

Look at the user table - it defines which users can connect to the database, the IP addresses they can use, and the password 
they must provide.


For example, you could enter,

INSERT INTO user (host, user, password)
values ('127.0.0.1', 'mysql', password('mysql'));

and

INSERT INTO user (host, user, password)
values ('localhost', 'mysql', password('mysql'));

Don't forget to do a "flush privileges;" afterwards (again, without the quotes);
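
As an aside, you can get a similar result without touching the grant tables directly - GRANT does the flush for you (a sketch, reusing the account name from the example above; adjust the privileges to taste):

GRANT ALL PRIVILEGES ON *.* TO 'mysql'@'localhost' IDENTIFIED BY 'mysql';

-- And once you can log in with enough privileges, the root password can be set with:
SET PASSWORD FOR 'root'@'localhost' = PASSWORD('newpass');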

The mysql schema is thoroughly (but dryly) documented @ http://dev.mysql.com

David

Daniel McQuay wrote:

Thanks Greg, I did try that; here is what happened.

boxster# mysql -u root -p
Enter password:
ERROR 1045 (28000): Access denied for user 'root'@'localhost' (using
password: NO)
boxster#

So I really just don't know what to do from here. I'm up for any ideas
if anyone has 'em.

On 5/29/06, Greg Maruszeczka [EMAIL PROTECTED] wrote:


On Sun, 28 May 2006 20:17:53 -0400
Daniel McQuay [EMAIL PROTECTED] wrote:

 Hello list,

 I just installed MySQL on my FreeBSD box here at home and am having a
 few problems with privileges. I can connect to mysql using "mysql -u
 mysql" and there is no password needed. However, when I try to connect
 to the server using root ("mysql -u root") I get an error:
 <snip>
 ERROR 1045 (28000): Access denied for user 'root'@'localhost' (using
 password: NO)
 </snip>
 So reading that, it appears that I need a password, so I try "mysql -u
 root -p"; it prompts me for my root pass and when I put it in it does
 the same thing as above but with (using password: YES).

 I went to the MySQL web site and read 2.10.3 "Securing the Initial
 MySQL Accounts" and tried following along with that, but with no luck.
 When checking Google for help I read a lot about the initial
 installation - something about /usr/local/bin/mysql_install_db
 installing a privilege table.

 I installed MySQL using this guide here:
 http://raybdbomb.com/p/mysql-install-on-freebsd.html and everything
 seemed to go well, but like I said, I keep getting this error. Is
 there something else I should do? Any help on this would be MUCH
 appreciated.



Hi,

Wasn't clear to me in reading your post that you did this, so here goes:

Did you actually set a root password for mysql using a GRANT
statement after logging in with the default BLANK password?

mysql -u root -p [then just hit enter]

HTH,
G












Re: MySQL Optimization error ?

2006-04-20 Thread David Griffiths

This isn't a bug, it's a missing feature. The parent query won't use indexes, 
just the subquery.

There's been a lot of complaints about how it renders sub queries less than 
useful.

I seem to remember that it might be fixed in 5.1, but I'm not a reliable source 
of info on this.
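
A common workaround is to rewrite the IN subquery as a join, which can use the index (a sketch based on the table in the quoted message):

SELECT a.*
FROM an a
INNER JOIN (SELECT an_id FROM an WHERE an_id BETWEEN 100 AND 103) AS sub
        ON a.an_id = sub.an_id;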

David



Dyego Souza Dantas Leal wrote:

I have a good question: is the MySQL optimizer broken? See the queries:


mysql> explain select * from an where an_id in (100,200);
+----+-------------+-------+-------+---------------+-------+---------+------+------+-------------+
| id | select_type | table | type  | possible_keys | key   | key_len | ref  | rows | Extra       |
+----+-------------+-------+-------+---------------+-------+---------+------+------+-------------+
| 1  | SIMPLE      | an    | range | An_Id         | An_Id | 4       |      | 2    | Using where |
+----+-------------+-------+-------+---------------+-------+---------+------+------+-------------+
1 row in set (0.00 sec)


GREAT!!! MySQL uses the primary index to search the rows... But if 
I'm using a subselect, the response is not good.


mysql> explain select * from an where an_id in (select an_id from an where an_id between 100 and 103);
+----+--------------------+-------+-----------------+---------------+-------+---------+------+------+--------------------------+
| id | select_type        | table | type            | possible_keys | key   | key_len | ref  | rows | Extra                    |
+----+--------------------+-------+-----------------+---------------+-------+---------+------+------+--------------------------+
| 1  | PRIMARY            | an    | ALL             |               |       |         |      | 2489 | Using where              |
| 2  | DEPENDENT SUBQUERY | an    | unique_subquery | An_Id         | An_Id | 4       | func | 1    | Using index; Using where |
+----+--------------------+-------+-----------------+---------------+-------+---------+------+------+--------------------------+
2 rows in set (0.00 sec)

mysql>

This is very slow... the MAX returns in 0.001s, but the IN clause does not 
use the PRIMARY index; this causes a FULL TABLE SCAN!!!

Is the optimizer broken?

MySQL Version: 5.0.19-pro
Plataform: Windows or Linux box (debian kernel 2.6.14-1)
Memory : 1 GB of RAM
Table Type: InnoDB

Tnks in advance !






Re: database compatibility

2006-03-22 Thread David Griffiths

That's a pretty difficult request:

1) What versions? MySQL 4.0 has no stored procedures, functions, views, or subselects; 4.1 added subselects, and 5.0 added the rest. 
Oracle 8i doesn't support ANSI-style outer joins (it uses a proprietary format - see the sketch after this list).


2) Different engines; MySQL supports federated, archive, MyISAM, InnoDB in 5.0 (there are probably a few others). Each engine has 
different DML (Data Manipulation Language - select, insert, update, delete) and DDL options (ALTER TABLE, CREATE TABLE, etc).


3) Built-in functions vary widely (though there are some common ones, the format 
and structure can differ).
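
As a quick illustration of point 1, here is the same outer join in Oracle 8i's proprietary syntax versus the ANSI form MySQL understands (hypothetical tables):

-- Oracle 8i proprietary outer join
SELECT c.name, o.total
FROM customers c, orders o
WHERE c.customer_id = o.customer_id (+);

-- ANSI form, accepted by MySQL (and Oracle 9i and up)
SELECT c.name, o.total
FROM customers c
LEFT JOIN orders o ON c.customer_id = o.customer_id;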

That's just touching the surface (I have 5 minutes while a database machine 
reboots, so I thought I'd post a reply).

I am not sure what you are after, but you might want to consider an existing ORM (Object-Relational) tool that does the SQL for 
you. Hibernate for Java is amazing, and NHibernate is now out for .NET (not sure if it's alpha, beta or production).


If you are coding to experiment, I'd suggest you limit yourself to a few (MySQL-InnoDB is very popular, and Postgres). Both free, 
with lots of good online-documentation available.


Check out this article:

http://www.devx.com/dbzone/Article/20743

David

ChadDavis wrote:

Does anyone know of a resource (on the web perhaps) that discusses the
core differences between the different databases' SQL? I'm trying to
write code that produces the correct SQL for a variety of databases.
Such things as Oracle's SEQUENCES versus MySQL's AUTO_INCREMENT are of
interest. Maybe I'm asking too much to find a summary of such
differences. But I'm only interested in using mainstream SQL
functionality, nothing complicated.






Re: 1 day left: 75% discount on MySQL/Firebird/InterBase/Oracle/SQL Server developer tool!

2006-03-03 Thread David Griffiths


Definitely give this tool a try if you haven't - it has some very powerful features - schema diffs (and the ability to create 
patches, etc - very powerful if you maintain development, quality-assurance, and production databases).


The ability to sort connections, etc, by machine, by database, etc, is very 
powerful.

I haven't spent more than a few hours with it, but I was very impressed.

David


Martijn Tonies wrote:

This week only, because of the 5-year celebration of
our database development IDE Database Workbench,
there's a 75% discount on ALL modules.

Database Workbench supports InterBase, MySQL,
Firebird, Oracle and Microsoft SQL Server.

Feel free to spread this news!

http://www.upscene.com/index.htm?dbw_party_promo.htm








Re: Replication from multiple masters?

2006-03-01 Thread David Griffiths

That's not entirely true.

You can have two instances of mysql running on the slave, and dbA connects to 
one instance, and dbB connects to the other.



Jeff, when you say "different databases", do you mean that each master has a 
single mysql instance, and if you typed "show databases" on M1 you'd see 
(for example) dbA, and if you did the same on M2, you'd see dbB?

If so, I wonder if there is another way to get around it:

- create a virtual IP address that represents both masters. Use that virtual master in the my.cnf on the slave; each master has to 
have an identical replication account


- put dbA and dbB on the slave

- restrict replication from each master to their respective databases - dbA and dbB - ie don't replicate changes to the 
mysql database.


The two masters appear as one (which overcomes the single-IP-address limit in the slave's my.cnf file), and since each master has a 
different database inside the mysql instance, they aren't stepping on each other's toes.
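
Pointing the slave at that virtual address is then the normal CHANGE MASTER TO setup (a sketch; host and credentials are hypothetical):

CHANGE MASTER TO
   MASTER_HOST = 'virtual-master.example.com',
   MASTER_USER = 'repl',
   MASTER_PASSWORD = 'secret';
START SLAVE;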


Just my 2 cents.

David.

Greg Donald wrote:

On 3/1/06, Jeff [EMAIL PROTECTED] wrote:

Does anyone know if it's possible to replicate to a single slave from
different databases on different masters?

For instance:

M1:dbAM2:dbB
  \ /
  rep rep
\ /
 Slave



http://dev.mysql.com/doc/refman/5.1/en/replication-features.html

<snip>MySQL only supports one master and many slaves.</snip>



--
Greg Donald
Zend Certified Engineer
MySQL Core Certification
http://destiney.com/






Re: Replication from multiple masters?

2006-03-01 Thread David Griffiths
Good point about the bin-logs. Yup - that would sink it. If MySQL used individual binary logs per master database, it would work. 
And yes, if someone was silly enough to have two different databases with the same name, it would be bad, even with separate binary 
logs for each database.


If you have two mysql instances on a single slave, you'll need more memory, faster CPUs, more disk space, etc. But it could be a 
viable option if the machine is just being used to provide a hot-standby.



David





[EMAIL PROTECTED] wrote:


MySQL cannot handle more than one incoming binlog at a time. The 
facilities are just not in the code.


You also run into a nightmare if a database exists on BOTH masters (same 
name on both systems) and the PK values of any tables (also with 
matching names)  overlap. If  both masters update the same row at appx 
the same time, we could run into deadlocking in the slave that didn't 
happen on either master. It also means that the slave and at least one 
of the masters will become out of sync (because the other master's 
changes remain in the database) and replication is considered broken 
at that point.  It's a serious can of worms to handle multi-master 
replication.


Your two instances on one machine replicating to two separate masters 
is not multi-master replication (more than one master replicating with 
a single slave); it's two single-master slave setups running on the same 
machine. Close, but not quite what the original post was looking for (I 
don't think).


Shawn Green
Database Administrator
Unimin Corporation - Spruce Pine



David Griffiths [EMAIL PROTECTED] wrote on 03/01/2006 04:34:26 PM:

  <snip>David's reply and the follow-ups, quoted in full earlier in this thread</snip>





Re: Insert performance

2006-01-31 Thread David Griffiths

Indexes slow down inserts, updates (if the indexed column is being updated), 
and deletes.

If this is a data-refresh, consider dropping the indexes, importing the data, 
and then indexing the table.
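
A sketch of what that looks like for a trusted reload (session settings; the table name is hypothetical, and DISABLE KEYS applies only to MyISAM's non-unique indexes):

-- InnoDB: skip FK and unique checks while the data loads
SET foreign_key_checks = 0;
SET unique_checks = 0;
-- ... run the big INSERTs / LOAD DATA here ...
SET unique_checks = 1;
SET foreign_key_checks = 1;

-- MyISAM alternative: defer index maintenance to one rebuild pass
ALTER TABLE big_import DISABLE KEYS;
-- ... load ...
ALTER TABLE big_import ENABLE KEYS;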

You haven't mentioned how you are getting the data into the database. Is this a bulk load? Insert statements? Some piece of code 
selecting from one database and inserting into MySQL 5.0?


If you are using mysqldump to get data out as insert statements, use the 
--extended-insert option, as it's much, much faster.

David


Vinay wrote:

I am using MySQL 5.0 on HP-UX. It took about 14 hours to insert 1.7
million records. How do I make my insert run faster? The table has three
foreign key references and the referencing columns are indexed. Is that
impacting the insert statement performance?


Thanks for the help
Vinay






Re: Problem using IN statement MySQL 5

2006-01-11 Thread David Griffiths

Your select has two tables, but you don't join them.

Can you provide the table info, and a subset of the data that should be 
pulled back by this query?
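
If articles really is a comma-separated list stored in one column, IN compares article_id against that single string value, which would explain seeing only the first article. For what it's worth, FIND_IN_SET handles that layout (a sketch):

SELECT art.*
FROM encore enc
JOIN article art ON FIND_IN_SET(art.article_id, enc.articles) > 0
WHERE enc.encore_id = 10;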


David

Paul Nowosielski wrote:


Hello,

I'm trying to run q query with an IN statement in MySQL 5. Like so:

SELECT * from encore enc, article art
WHERE  enc.encore_id= '10' AND `article_id` IN (`articles`)

It should return all the articles in the encore.articles column but
instead only returns the first article.

In encore.articles is the data 43,44,45,46.
These are article IDs. If I manually place 43,44,45,46 into the query
like so:

SELECT * from encore enc, article art
WHERE  enc.encore_id= '10' AND `article_id` IN (43,44,45,46)

All 4 articles are returned. Any ideas why this is not working?

TIA!

 






Re: Reason for Auto-increment primary keys?

2005-12-21 Thread David Griffiths


Auto-incremented integers (be it bigint, mediumint, etc) are, from a 
purist point of view, better than natural primary keys like part 
number. Read "Practical Issues in Database Management" by Fabian 
Pascal: he argues against natural primary keys, because the business 
rules that underlie the data could change.


The example you use is the social security number - say the US federal 
government decides that a change is required. Say families will all have 
the same SSN number, so that they may file joint income tax returns. You 
have no control over it.


Also, in order to guarantee uniqueness, you often have to use a 
composite primary key (a primary key made up of many columns). This adds 
complexity and size to your database. It also makes tools like 
Hibernate (a database-to-Java-objects mapping tool that is very 
popular and powerful) more difficult to use (at least in version 2.1), 
because composite primary keys have extra overhead.


If a part-number and manufacturer (or supplier) are a unique 
combination, create a unique index on the two. Keep your primary key 
abstract (and unnatural).
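
A sketch of that layout (hypothetical part table):

CREATE TABLE part (
   part_id         INT NOT NULL AUTO_INCREMENT,
   part_number     VARCHAR(30) NOT NULL,
   manufacturer_id INT NOT NULL,
   PRIMARY KEY (part_id),
   UNIQUE KEY part_mfr_uniq (part_number, manufacturer_id)
);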


I know there are a lot of what-if statements (I mean, how often will 
the SSN change?), and maybe you've never heard of Hibernate, let alone 
had any desire to use it, but the underlying argument is valid - 
business rules change. For complex schemas with lots of data, it 
could be a nightmare if some external data source that you have no 
control over suddenly changed the rules underlying its data.


Timestamps (last-modified-dates) are very useful for trying to track 
down problems.


David

Kenneth Wagner wrote:


Rhino,

What I do is put the ID (integer primary key, auto-increment, unique) 
first - the first key in every file. Then I define my indexes.

You could do it the other way, as you ask. But I found this way is 
consistent: it can be traced anywhere on any file. A timestamp on 
important or critical files is also a big plus. It's not needed on 
static or almost-static files like counties, states, departments, etc. 
But orders, parts, inventory, customers and so on benefit from a 
timestamp. Besides, SQL is optimized for it AFAIK.

This hasn't ever been an issue, aside from a few people asking about 
how to use the ID, or what it's for.

Ken

- Original Message - From: Rhino [EMAIL PROTECTED]
To: Kenneth Wagner [EMAIL PROTECTED]; [EMAIL PROTECTED]
Cc: mysql mysql@lists.mysql.com
Sent: Wednesday, December 21, 2005 4:57 PM
Subject: Re: Reason for Auto-increment primary keys?




- Original Message - From: [EMAIL PROTECTED]
To: Kenneth Wagner [EMAIL PROTECTED]
Cc: mysql mysql@lists.mysql.com; Rhino [EMAIL PROTECTED]
Sent: Wednesday, December 21, 2005 5:15 PM
Subject: Re: Reason for Auto-increment primary keys?



Kenneth Wagner [EMAIL PROTECTED] wrote on 12/21/2005 04:27:53 PM:


Hi Rhino,

Excellent question. Felt as you do, initially.

Here's what changed my mind.

Integer keys are fast. And small. Hence, they take very little RAM space.

They are contiguous. A missing PK is easy to find. There's a gap in the
number sequence. Can't do this with the part description. No way to tell
if a record is missing.

Example: The system gets hung up or crashes and a reboot is needed. How
to test the integrity of the parts table, i.e., anything missing?
Checking the PK for continuity is a good place to start. With a timestamp
I would even know the date where the file got truncated. Example: It's
Dec 20th. The highest date in the file is Dec 1st at rec# 1203023. That's
where the analysis would begin. Other files that didn't get truncated but
have the related key # in them would tip me off as to how much is
missing. Like an order file.

Speed. Especially where related files are concerned. Foreign keys. Links
on integer fields are faster, smaller and more efficient. Keys remain
smaller and faster.

Activity testing: Let's say I do some statistical testing. Like how many
new parts per month on average. Easy to do with the integer PK. Even
easier if it has a timestamp. Then if the average suddenly drops or
increases I would want to know why. Or modify my DB tables or coding.
Note that the timestamp does not have to be in your example table. It
could be in an insert/update table that just tracks what has been added
or updated by PK, timestamp, activity type and updatedbyuserID.

So, there's 2 cents worth.

Wondering how relevant this is?

HTH,

Ken Wagner



- Original Message - From: Rhino [EMAIL PROTECTED]
To: mysql mysql@lists.mysql.com
Sent: Wednesday, December 21, 2005 2:54 PM
Subject: Reason for Auto-increment primary keys?


 One technique that I see a lot on this mailing list is people putting
 auto-incremented integer primary keys on their tables.

 Maybe I'm just old school but I've always thought that you should
 choose a primary key based on data that is actually in the table
 whenever possible, rather than generating 

Re: MyIsam Vs InnoDB

2005-11-24 Thread David Griffiths


Is your database connection set to auto-commit? MyISAM effectively commits 
each statement as it runs, whereas with InnoDB you can commit whenever you 
want. You might want to commit once at the end of your batch, as in the 
sketch below.
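
A sketch of batching under InnoDB (the table and values are hypothetical):

SET autocommit = 0;
INSERT INTO orders (order_id, total) VALUES (1, 9.99);
INSERT INTO orders (order_id, total) VALUES (2, 19.99);
-- ... the rest of the batch ...
COMMIT;
SET autocommit = 1;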


Also, look at your indexes - indexes make selects fast, but slow down 
inserts and deletes, and can slow down updates in some situations.


David

Andrew stolarz wrote:


hello, here are my current settings:

# MySQL Server Instance Configuration File
# --
# Generated by the MySQL Server Instance Configuration Wizard
#
#
# Installation Instructions
# --
#
# On Linux you can copy this file to /etc/my.cnf to set global options,
# mysql-data-dir/my.cnf to set server-specific options
# (@localstatedir@ for this installation) or to
# ~/.my.cnf to set user-specific options.
#
# On Windows you should keep this file in the installation directory
# of your server (e.g. C:\Program Files\MySQL\MySQL Server 4.1). To
# make sure the server reads the config file use the startup option
# --defaults-file.
#
# To run the server from the command line, execute this in a
# command line shell, e.g.
# mysqld --defaults-file="C:\Program Files\MySQL\MySQL Server 4.1\my.ini"
#
# To install the server as a Windows service manually, execute this in a
# command line shell, e.g.
# mysqld --install MySQL41 --defaults-file="C:\Program Files\MySQL\MySQL Server 4.1\my.ini"
#
# And then execute this in a command line shell to start the server,
e.g.
# net start MySQL41
#
#
# Guildlines for editing this file
# --
#
# In this file, you can use all long options that the program supports.
# If you want to know the options a program supports, start the program
# with the --help option.
#
# More detailed information about the individual options can also be
# found in the manual.
#
#
# CLIENT SECTION
# --
#
# The following options will be read by MySQL client applications.
# Note that only client applications shipped by MySQL are guaranteed
# to read this section. If you want your own MySQL client program to
# honor these values, you need to specify it as an option during the
# MySQL client library initialization.
#
[client]

port=3306

[mysql]

default-character-set=latin1


# SERVER SECTION
# --
#
# The following options will be read by the MySQL Server. Make sure that
# you have installed the server correctly (see above) so it reads this
# file.
#
[mysqld]

# The TCP/IP Port the MySQL Server will listen on
port=3306


#Path to installation directory. All paths are usually resolved relative
#to this.
basedir=C:/Program Files/MySQL/MySQL Server 5.0/

#Path to the database root
datadir=C:/Program Files/MySQL/MySQL Server 5.0/Data/

# The default character set that will be used when a new schema or table
# is created and no character set is defined
default-character-set=latin1

# The default storage engine that will be used when create new tables
when
default-storage-engine=innodb

# Set the SQL mode to strict
sql-mode=STRICT_TRANS_TABLES,NO_AUTO_CREATE_USER,NO_ENGINE_SUBSTITUTION


# The maximum amount of concurrent sessions the MySQL server will
# allow. One of these connections will be reserved for a user with
# SUPER privileges to allow the administrator to login even if the
# connection limit has been reached.
max_connections=5

# Query cache is used to cache SELECT results and later return them
# without actual executing the same query once again. Having the query
# cache enabled may result in significant speed improvements, if your
# have a lot of identical queries and rarely changing tables. See the
# Qcache_lowmem_prunes status variable to check if the current value
# is high enough for your load.
# Note: In case your tables change very often or if your queries are
# textually different every time, the query cache may result in a
# slowdown instead of a performance improvement.
query_cache_size=0

# The number of open tables for all threads. Increasing this value
# increases the number of file descriptors that mysqld requires.
# Therefore you have to make sure to set the amount of open files
# allowed to at least 4096 in the variable open-files-limit in
# section [mysqld_safe]
table_cache=256

# Maximum size for internal (in-memory) temporary tables. If a table
# grows larger than this value, it is automatically converted to a
# disk-based table. This limitation is for a single table; there can be
# many of them.
tmp_table_size=9M


# How many threads we should keep in a cache for reuse. When a client
# disconnects, the client's threads are put in the cache if there aren't
# more than thread_cache_size threads from before.  This greatly reduces
# the amount of thread creations needed if you have a lot of new
# connections. 

Re: A key question

2005-11-18 Thread David Griffiths


MySQL can use a multi-column index for queries that reference only some 
of its columns, with caveats.


If this is your index,

UNIQUE KEY `tidadx` (`price_data_ticker`,`price_data_date`)

and you plan to use price_data_date in all your queries, but never 
price_data_ticker, then simply reverse the order of the columns in your index 
definition:

UNIQUE KEY `tidadx` (`price_data_date`, `price_data_ticker`)


If you have a composite index on columns a, b and c:

create index a_b_c_idx ON table_name (a, b, c);

and you query with a in the where clause, the composite index will be 
used.


If you query with a and b in the where clause, the composite index 
will be used; ditto for a, b and c.


But if you query with b (and only b) in the where clause, the index 
won't be used.


If you use b and c in the where clause, the index won't be used.

Look here for other examples:
http://dev.mysql.com/doc/refman/4.1/en/multiple-column-indexes.html
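
If in doubt, EXPLAIN will tell you whether an index is being used - a 
quick sketch against the a_b_c_idx example above:

EXPLAIN SELECT * FROM table_name WHERE a = 1;           -- uses a_b_c_idx
EXPLAIN SELECT * FROM table_name WHERE a = 1 AND b = 2; -- uses a_b_c_idx
EXPLAIN SELECT * FROM table_name WHERE b = 2;           -- key column is NULL; index not used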

Don't add indexes you don't need - it slows down inserts (and updates 
where the indexed columns are being updated), uses up space in your 
database, and requires extra administration, etc.


David

Mikhail Berman wrote:


Michael,

Thank you for your comments.

This gives me new ideas on how to work with these issues.

And, no, at this point we are not planning to work with the
price_data_ticker field itself.

Regards,

Mikhail Berman

-Original Message-
From: Michael Stassen [mailto:[EMAIL PROTECTED] 
Sent: Friday, November 18, 2005 12:11 PM

To: Mikhail Berman
Cc: Jeremy Cole; Jasper Bryant-Greene; mysql@lists.mysql.com
Subject: Re: A key question

Mikhail Berman wrote:

Dear Jeremy,

Thank you for your help.

I do have exactly the situation you assumed. Here is the output of
SHOW CREATE TABLE:

CREATE TABLE `TICKER_HISTORY_PRICE_DATA` (
 `price_data_ticker` char(8) NOT NULL default '',
 `price_data_date` date NOT NULL default '0000-00-00',
 `price_data_open` float default NULL,
 `price_data_high` float default NULL,
 `price_data_low` float default NULL,
 `price_data_close` float default NULL,
 `price_data_volume` float default NULL,
 UNIQUE KEY `tidadx` (`price_data_ticker`,`price_data_date`)
) ENGINE=MyISAM DEFAULT CHARSET=latin1

As you can see, the UNIQUE KEY is on the first two fields, but most of
the work, joins & searches, will be done on the second field,
price_data_date.

As others have pointed out, your UNIQUE KEY on
(price_data_ticker,price_data_date) will serve as an index to speed
queries which search for a specific value of price_data_ticker and
queries which search for a specific combination of values of
price_data_ticker and price_data_date, but it won't help queries which
only search by price_data_date.  Yet, most of the work, joins & 
searches, will be done on the second field, price_data_date. 
 In that case, you definitely need an index on price_data_date.  Based

on your description, I'd suggest you have your index backwards.  What
you need is an index on (price_data_date, price_data_ticker).  This will
satisfy searches on price_data_date and on combinations of the two.
Hence,

  ALTER TABLE TICKER_HISTORY_PRICE_DATA
  DROP INDEX tidadx,
  ADD PRIMARY KEY (price_data_date, price_data_ticker);

That will satisfy most of your queries.  Then, the question becomes, do
you need a separate, single-column index on price_data_ticker?  That
will depend on whether you run queries which select based on
price_data_ticker without specifying price_data_date.

Michael



 



--
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/[EMAIL PROTECTED]



Re: Query Question

2005-10-04 Thread David Griffiths
Suppose you subscribe to a public email list that offers support on a 
free open source database, and you see an email where someone doesn't 
really provide nearly enough information to answer, what would you do?


What is the algorithm you are trying to implement to get the query-output?


Roy Harrell wrote:


Suppose I have a simple table as follows:

PartName	Tolerance   Cycles
A   1   10
A   2   11
A   3   13
A   4   15
A   5   18
B   1   12
B   2   14
B   3   16
B   4   16
B   5   17
C   1   6
C   2   7   
C   3   7
C   4   8
C   5   10


How do I set up a query whose output would 
look like this:


Tolerance   PartA   PartB   PartC
1   10  12  6
2   11  14  7
3   13  16  7
4   15  16  8
5   18  17  10


Thanks,

Roy Harrell
Adaptive Equipment
352.372.7821
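
For what it's worth, the usual way to get that shape of output in MySQL 
is conditional aggregation - a sketch, assuming the table is named parts 
(the table name is a guess; the columns are from the post above):

SELECT Tolerance,
       MAX(IF(PartName = 'A', Cycles, NULL)) AS PartA,
       MAX(IF(PartName = 'B', Cycles, NULL)) AS PartB,
       MAX(IF(PartName = 'C', Cycles, NULL)) AS PartC
  FROM parts
 GROUP BY Tolerance;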


 



--
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/[EMAIL PROTECTED]



Re: ETA and progress of full-text indexes on INNODB?

2005-09-12 Thread David Griffiths

Kevin,

From http://www.innodb.com/todo.php:

*In progress:*
   Add FULLTEXT indexes on InnoDB tables. A sponsor for this project
   has been found, and a developer has been hired. Appears probably in
   2006.



David.


Kevin Burton wrote:


Anyone know the ETA of having full-text index support on INNODB?

Kevin

 




--
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/[EMAIL PROTECTED]



Re: Weird delete behavior on mysql 4.0 - rows not disappearing....

2005-09-01 Thread David Griffiths


No, with the default transaction isolation level, REPEATABLE READ, 
that's how it is supposed to work. You've started a transaction in 
Window B, so Window B is immune to changes made in Window A until you 
finish the transaction in Window B. See the manual for details


http://dev.mysql.com/doc/mysql/en/innodb-consistent-read.html

I haven't explicitly started any transactions in Window B - it's select-only 
(with autocommit set to 0). Are you
saying that even though transactions have happened and been committed in Window 
A, I won't be able to see those
transactions in Window B?

The relevant part of the documentation in the link you sent is,

The query see[s] the changes made by exactly those transactions that committed 
before that point of time, and
no changes made by later or uncommitted transactions. The exception to this 
rule is that the query sees the
changes made by the transaction itself that issues the query.

In other words, if you start a query (and it's a long-running query), you won't 
see the results of any data
committed by another session during the running of that query. Fine. That's 
expected.

But if I am doing only queries (no transactions) via a connection, and no query 
is running when I commit data in
another session, then the query-window should see the results of those changes.

I suspect that the mysql client is implicitly starting a transaction when you do a 
set autocommit=0. Thus, any
changes made by any other sessions won't be visible till you do a commit or 
rollback. Each time a commit or
rollback is issued in the non-auto-commit session, you can see data changed by 
other sessions.

Regardless, this is not a repeatable-read issue. I think it's a mysql client 
issue, and the fact that the client
is creating transactions for you in the background.

This is not how the Oracle client works - you are always in non-auto-commit 
mode (and I'd love to figure out
how to set that - autocommit is so dangerous), and until you actually start a 
transaction with an update, insert,
delete or select-for-update, no transaction is started, and you can see the 
changes made by other sessions once
they've been committed (I tested SQL*Plus on Oracle 8i to make sure).

David


--
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/[EMAIL PROTECTED]



Re: Weird delete behavior on mysql 4.0 - rows not disappearing....

2005-09-01 Thread David Griffiths
I believe you - I'm just a bit surprised. I guess I had a singular view 
of how a session should work based on Oracle. I would have expected that 
until you execute SQL that requires a commit or a rollback, you wouldn't 
be in a transaction. Unfortunately, if you have connections that are 
read and write, and one connection ends up being used for SELECTs only 
(just bad luck), it's going to have an out-of-date view of the database.


To me, a transaction is something you commit or rollback. You can't 
commit or rollback a SELECT unless you've done a locking-read. I guess 
Oracle is just smarter about it, only starting a transaction behind the 
scenes if you've actually done something that warrants a transaction.


David



Michael Stassen wrote:


David Griffiths wrote:



No, with the default transaction isolation level, REPEATABLE READ, 
that's how it is supposed to work. You've started a transaction in 
Window B, so Window B is immune to changes made in Window A until you 
finish the transaction in Window B. See the manual for details

http://dev.mysql.com/doc/mysql/en/innodb-consistent-read.html

I haven't explicitly started any transactions in Window B - it's 
select-only (with autocommit set to 0). Are you saying that even though
transactions have happened and been committed in Window A, I won't be 
able

to see those transactions in Window B?



The key word is explicitly.  You have implicitly started a 
transaction with your first SELECT, precisely because you turned 
AUTOCOMMIT off.  That transaction continues until you COMMIT or 
ROLLBACK (or perform an action that implicitly commits 
http://dev.mysql.com/doc/mysql/en/innodb-implicit-command-or-rollback.html). 
 That's the point of setting AUTOCOMMIT to off.  If you only want to 
start transactions explicitly (with START TRANSACTION or BEGIN), then 
you need to leave AUTOCOMMIT on.  See the manual for details 
http://dev.mysql.com/doc/mysql/en/innodb-and-autocommit.html.
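
In SQL terms, a minimal sketch of the difference (reusing the bug_find
table from the original post further down this thread):

-- Window B, autocommit off:
SET autocommit = 0;
SELECT * FROM bug_find;  -- implicitly opens a transaction; snapshot taken here
-- Window A deletes the rows and commits...
SELECT * FROM bug_find;  -- still shows the old snapshot (REPEATABLE READ)
COMMIT;                  -- ends Window B's transaction
SELECT * FROM bug_find;  -- fresh snapshot; the delete is now visible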



The relevant part of the documentation in the link you sent is,

The query see[s] the changes made by exactly those transactions that 
committed before that point of time, and no changes made by later or

uncommitted transactions. The exception to this rule is that the query
sees the changes made by the transaction itself that issues the 
query. 
In other words, if you start a query (and it's a long-running query), 
you won't see the results of any data committed by another session 
during the

running of that query. Fine. That's expected.

But if I am doing only queries (no transactions) via a connection, 
and no

query is running when I commit data in another session, then the
query-window should see the results of those changes.



From the AUTOCOMMIT manual page cited above, "In InnoDB, all user 
activity occurs inside a transaction."


I suspect that the mysql client is implicitly starting a transaction 
when
you do a set autocommit=0. Thus, any changes made by any other 
sessions
won't be visible till you do a commit or rollback. Each time a commit 
or rollback is issued in the non-auto-commit session, you can see 
data changed by other sessions.



With AUTOCOMMIT off, the transaction starts, in your case, with your 
first SELECT.


Regardless, this is not a repeatable-read issue. I think it's a mysql 
client issue, and the fact that the client is creating transactions for

you in the background.



It's not the client.  That's how InnoDB works.

This is not how the Oracle client works - you are always in 
non-auto-commit mode (and I'd love to figure out how to set that -

autocommit is so dangerous), and until you actually start a transaction
with an update, insert, delete or select-for-update, no transaction is
started, and you can see the changes made by other sessions once they've
been committed (I tested SQL*Plus on Oracle 8i to make sure). 



I'll make no comments on how Oracle works, but what you seem to be 
describing is effectively what happens with AUTOCOMMIT on in MySQL.  
In general, I'd suggest that expecting any two RDBMSs (MySQL and 
Oracle, for example) to behave in exactly the same way will usually 
get you in trouble.



David



Michael




--
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/[EMAIL PROTECTED]



Re: Weird delete behavior on mysql 4.0 - rows not disappearing....

2005-09-01 Thread David Griffiths
Yah, I tested in SQL*Plus - one window could see inserts, updates and 
deletes that had been committed in another window (in which a commit or 
rollback had not been issued). I ran the test again - delete data from a 
table in one window and commit the change, and a select in the other 
window displays the results.


Note that SQL*Plus by default does not auto-commit, but the key elements 
of the test are the same. Data committed in one session is visible in 
another session once committed.


In Oracle/SQL*Plus, data committed in session A will show up in Session 
B if Session B has an open transaction. Here's the example (using 
session A and B).


Session A:

insert into temp_table (col1) values ('a');

Session B:

insert into temp_table (col1) values ('b');

At this point, neither is committed, and neither session can see what 
the other has done (the left hand doesn't know what the right is doing, 
so to speak).


Session A:

commit;

Session B:

SQL> select * from temp_table;

C
-
b
a


Session B has an open transaction, yet can see the data that was 
committed in another transaction. Its view of the data is, Show me all 
the data that has been committed to the database at the point where I 
started my query, plus all changes that I've made yet not committed or 
rolled back.


Oracle runs in READ COMMITTED (the above), while INNODB runs in 
REPEATABLE READ. Big difference. And I (stupidly) assumed they ran at 
the same transaction isolation level.


Learn something new every day.

David



[EMAIL PROTECTED] wrote:



If you are NOT in autocommit mode, your connection (or the server, it 
doesn't matter which) starts a transaction *when you issue your first 
command*. Every command you issue on that connection is in that 
initial transaction until you EXPLICITLY commit or rollback (or do 
something else that commits or rolls-back your transactions like ALTER 
TABLE) . At that point a new transaction is automatically started when 
you issue your next command.  If I remember correctly, closing a 
connection with a pending transaction defaults to a ROLLBACK. That way 
if a transaction is left incomplete due to communications failure, you 
maintain a consistent database.


 If autocommit is enabled (SET autocommit=1) then each command 
executes within its own mini-transaction (one little, tight 
transaction wrapped around each statement). Each SELECT can see what 
every other INSERT, UPDATE, or DELETE has done (assuming their 
transactions are committed) because it is not already inside a pending 
transaction. This is the default mode for user interaction for nearly 
every database product I have used. With autocommit active, you are 
required to explicitly issue a START TRANSACTION if you want a 
transaction that includes several commands.
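
A quick sketch of that last point (t1 is a hypothetical table):

SET autocommit = 1;
START TRANSACTION;           -- explicitly group several statements
INSERT INTO t1 VALUES (1);
INSERT INTO t1 VALUES (2);
COMMIT;                      -- both inserts become visible atomically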


Are you sure that's not how Oracle operates, too? I ask because MS SQL 
acts the same as MySQL when it comes to autocommits.


Shawn Green
Database Administrator
Unimin Corporation - Spruce Pine

David Griffiths [EMAIL PROTECTED] wrote on 09/01/2005 12:33:55 PM:

 I believe you - I'm just a bit surprised. I guess I had a singular view
 of how a session should work based on Oracle. I would have expected that
 until you execute SQL that requires a commit or a rollback, you wouldn't
 be in a transaction. Unfortunately, if you have connections that are
 read and write, and one connection ends up being used for SELECTs only
 (just bad luck), it's going to have an out-of-date view of the database.

 To me, a transaction is something you commit or rollback. You can't
 commit or rollback a SELECT unless you've done a locking-read. I guess
 Oracle is just smarter about it, only starting a transaction behind the
 scenes if you've actually done something that warrants a transaction.

 David



 Michael Stassen wrote:

  David Griffiths wrote:
 
 
  No, with the default transaction isolation level, REPEATABLE READ,
  that's how it is supposed to work. You've started a transaction in
  Window B, so Window B is immune to changes made in Window A until 
you

  finish the transaction in Window B. See the manual for details
  http://dev.mysql.com/doc/mysql/en/innodb-consistent-read.html
 
  I haven't explicitly started any transactions in Window B - it's
  select-only (with autocommit set to 0). Are you saying that even 
though

  transactions have happened and been committed in Window A, I won't be
  able
  to see those transactions in Window B?
 
 
  The key word is explicitly.  You have implicitly started a
  transaction with your first SELECT, precisely because you turned
  AUTOCOMMIT off.  That transaction continues until you COMMIT or
  ROLLBACK (or perform an action that implicitly commits
  http://dev.mysql.com/doc/mysql/en/innodb-implicit-command-or-rollback.html).
   That's the point of setting AUTOCOMMIT to off.  If you only want to
  start transactions explicitly (with START TRANSACTION or BEGIN), then
  you need to leave AUTOCOMMIT on.  See the manual for details

Re: Mysql to Oracle migration

2005-09-01 Thread David Griffiths
You need to talk to Oracle, or look on an Oracle mailing list. People 
here are more concerned about migrating from Oracle to MySQL, rather 
than the other way around.


There are probably lots of commercial tools out there that will do it 
(and compared to your Oracle licensing costs, they are probably 
relatively cheap).


David

Clyde Lewis wrote:

Does anyone know of a straightforward approach to migrate a 
mysql(4.1.11) Schema to Oracle(9i release 2).

Also, please provide any best practices.

Thanks in advance.




--
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/[EMAIL PROTECTED]



Weird delete behavior on mysql 4.0 - rows not disappearing....

2005-08-31 Thread David Griffiths
I just discovered some weird behaviour with MySQL 4.0 (4.0.24 and 
4.0.18) using InnoDB.


If you have two connections to mysql (I use the mysql client), one of 
which has autocommit turned on, and the other turned off, a row deleted 
from the client with autocommit turned on still shows up in the client 
with autocommit turned off, even after a commit.


That's complicated, so here's an example.

CREATE TABLE bug_find (col1 VARCHAR(10) NOT NULL);

Now open two windows (I'll call them Window A and Window B).

Leave Window A alone (I am assuming your client is in auto-commit mode).

In Window B, type,

SET autocommit = 0;

In Window A, type

INSERT INTO bug_find (col1) VALUES ('a');

This should be committed automatically.


In Window B, type

SELECT * from bug_find;

The row should be there.

In Window A, type,

DELETE FROM bug_find;

Again, this should be committed.

In Window B, type,

SELECT * FROM bug_find;

Whoops - still there, even though it's been removed.

In Window A, type,

commit;

In Window B, type,

SELECT * FROM bug_find;

Still there.

To make it disappear from Window B, type,

commit;

That makes no sense. The changes Window B sees (that are made by Window 
A) should not depend on issuing a commit - it has to see any data 
committed by Window A (unless it's trying to avoid dirty reads, which 
isn't the case here).


If Window B is in autocommit mode, you see the deletion right away. It 
seems to be the autocommit=0 that's screwing stuff up. I haven't tested 
this with the JDBC drivers, or with the Query Browser, or anything else. 
It may just be a MySQL client issue.


This is a big problem with data consistency. Note that this bug also 
exists for updates (any updates made in Window A are not seen by Window 
B until Window B issues a commit). Also, if you turn autocommit off 
partway through a session, the same behaviour happens.


Is this a known bug?

David.


--
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/[EMAIL PROTECTED]



Re: SQLyog Enterprise

2005-07-20 Thread David Griffiths


Also check out EMS MySQL Manager. I looked at every MySQL tool I could 
get my hands on, and ended up buying a license of this.


Main site is here:
http://sqlmanager.net/products/mysql/manager/

Feature list is here:

http://sqlmanager.net/en/products/mysql/manager/features

It also has access via HTTP tunnelling (I have an older version, so 
can't comment on it). It supports 5.0 (stored procedures, views, 
triggers), and has a decent visual designer that is able to reverse 
engineer a database and generate a diagram.


I have only two complaints: it's a very busy GUI, and it would be nice 
if it could generate a delta of database changes using the GUI (i.e. 
reverse engineer the database into a diagram, make changes, and have a 
database patch generated for you).


It's more expensive (at $135 for a business license for Windows, $95 for 
Linux) but it's well worth the money.


David

Terence wrote:

It's a bit weak on foreign keys, no support for procedures and views 
(if you're looking at MySQL 5) from a gui perspective (from what I 
have found). Other plus points are it has a http tunnel feature if you 
host your mysql database with someone (requires php on the hosted 
server), so you can actually browse the database as though it was 
local. There's also a free version if you don't need enterprise 
features, so maybe try that out first.


Navicat is an alternative with good support for procs and views. 
There's also a 30day trial version.


Scott Hamm wrote:

I'm contemplating buying SQLyog Enterprise for $49 
(non-commercial) for personal use. Does anyone use it, and how does it 
fare in your opinion?







--
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/[EMAIL PROTECTED]



Re: innodb performance issues

2005-07-15 Thread David Griffiths


Tony,

You said that you copied the my.cnf file from huge.cnf - not sure what 
version you are using (I missed your original post), but the my-huge.cnf 
in mysql 4.0.24 is for MyISAM. You should have used 
my-innodb-heavy-4G.cnf as the starting point for an InnoDB system. The 
my-huge.cnf allocates way too much memory to the MyISAM engine. All the 
innodb stuff is commented out.


If you want help, you'll need to post your my.cnf file and the full table 
definition (try SHOW CREATE TABLE tblShoppingCart; and paste the 
results in here).


You'll need to also post the queries that are hitting the database while 
you're having these issues.


David


tony wrote:


Hi,

A few days ago I posted a question about performance. I now have a
little more info; hopefully someone can help.

I have a table, tblShoppingCart with 3 fields, 


cartUid (int 11 auto increment)
userUid (int 11, indexed) 
strCartHash (varchar 32) 


The table is innodb

Normally my server load is below 0.1 and everything's fine. I have a
process that runs occasionally that pushes the load up to 1.5; when this
happens, inserts into the table seem to get blocked, i.e. taking up to 20
seconds. As soon as the load drops the inserts are fine again.
Interestingly, if I convert the table to MyISAM I don't get this
problem. However I really want to keep the table InnoDB as I use it in
transactions later.

My my.cnf file is copied from the default huge.cnf file. I have dual
Xeons with 4GB of RAM and I'm running MySQL 4.1.1 on Red Hat Linux.

Any pointers on where I can look further are appreciated.

Tony


 




--
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/[EMAIL PROTECTED]



Re: innodb performance issues

2005-07-15 Thread David Griffiths

Tony,

Your my.cnf file is set up for MyISAM, not InnoDB. MySQL allows you to 
allocate memory and resources to any and all storage engines. Yours is 
set up to give lots of resources to MyISAM, and none to InnoDB.


Reducing MyISAM:
key_buffer = 384M - this is way too much - I'd set it to 2-16 meg, 
assuming that the only MyISAM tables you have are in the mysql database.
query_cache_size = 32M - read up on the query cache - it's only useful 
for oft-repeated queries that hit tables in which the data rarely 
changes. We turn ours off.


The big variable in InnoDB (that affects performance the most) is 
innodb_buffer_pool_size. Since you are running a Xeon, I am guessing 
it's a 32-bit architecture, so there is a limit (around 2 gigabytes) on 
the maximum size of the process.


The amount of memory MySQL will use is:

innodb_buffer_pool_size + key_buffer + 
max_connections*(sort_buffer_size+read_buffer_size+binlog_cache_size) + 
max_connections*2MB

You should make sure that stays under 2 gigabytes. If MySQL uses much 
more memory, it will crash.
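
Plugging in the numbers from the my.cnf quoted below (binlog_cache_size
isn't set, so treat it as negligible):

384M (innodb_buffer_pool_size) + 384M (key_buffer)
 + 250 x (2M sort_buffer_size + 2M read_buffer_size)
 + 250 x 2MB
 = 384 + 384 + 1000 + 500 = roughly 2.2 gigabytes

which is already over that line.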



There are other tuning choices (including the thread-pool-cache). The 
best resource is the page on innodb performance tuning, and it can be 
found here:


http://dev.mysql.com/doc/mysql/en/innodb-configuration.html

You might also want to consider High Performance MySQL. There is lots of 
good info in there on setup, tuning, replication, etc.


David



tony wrote:


Hi David,


On Fri, 2005-07-15 at 10:25 -0700, David Griffiths wrote:
 


Tony,

 - not sure what version you are using
   



4.1.11. Server is a dual Xeon machine with 4GB of RAM, running MySQL and
the Apache webserver and not much else.


You should have used 
 

my-innodb-heavy-4G.cnf as the starting point for an InnoDB system. 
   



I can use this instead if it's going to help.

 

If you want help, you'll need to post your my.cnf file, 
   



[client]
port= 3306
socket  = /var/lib/mysql/mysql.sock
# The MySQL server
[mysqld]
port= 3306
socket  = /var/lib/mysql/mysql.sock
skip-locking
key_buffer = 384M
max_allowed_packet = 1M
table_cache = 512
sort_buffer_size = 2M
read_buffer_size = 2M
read_rnd_buffer_size = 8M
myisam_sort_buffer_size = 64M
thread_cache = 8
query_cache_size = 32M
log = /var/log/mysql/mysql.log
log-slow-queries= /var/log/mysql/mysql-slow.log
set-variable= max_connections=250
server-id   = 1
innodb_data_home_dir = /var/lib/mysql/
innodb_log_group_home_dir = /var/lib/mysql/
innodb_buffer_pool_size = 384M
innodb_additional_mem_pool_size = 20



 

the full table 
definition (try SHOW CREATE TABLE tblSoppingCart; and pasting the 
results in here).
   




tblCart | CREATE TABLE `tblCart` (
 `intCartUid` int(11) NOT NULL auto_increment,
 `intUserUid` int(11) NOT NULL default '0',
 `tsCartCreated` datetime NOT NULL default '0000-00-00 00:00:00',
 `tsLastUpdated` datetime NOT NULL default '0000-00-00 00:00:00',
 `strCartHash` varchar(32) NOT NULL default '',
 PRIMARY KEY  (`intCartUid`),
 KEY `intUserUid` (`intUserUid`),
 KEY `tsLastUpdated` (`tsLastUpdated`)
) ENGINE=InnoDB DEFAULT CHARSET=latin1


 

You'll need to also post the queries that are hitting the database while 
you're having these issues.


   




# Query_time: 20  Lock_time: 0  Rows_sent: 0  Rows_examined: 0
SET insert_id=34475,timestamp=1121407309;
INSERT INTO
 dbseThxWebOrders.tblCart
 (intUserUid,tsCartCreated,strCartHash)
 VALUES

(0,now(),'4e5d105f7cd34268e1a5e160d479ed91');

is an example from my slow query log. All of the offending queries today
were this same query.

Thanks for you help

Tony





 




--
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/[EMAIL PROTECTED]



Re: Optimizing Per-Table-Tablespaces

2005-06-22 Thread David Griffiths


Frank, thanks for the reply.

I'd thought of that, but I was worried about using HotBackup (we use 
this for all of our backups on our production machines) - the hot backup 
manual at http://www.innodb.com/manual.php doesn't mention if it can 
follow a symlink to the data file.


Thanks for the link - some very interesting presentations there (wish I 
had gone to the conference - next year maybe).


David

Dr. Frank Ullrich wrote:


David,

David Griffiths wrote:

The manual is a little unclear on per-table-tablespaces in 4.1/5.0 
(http://dev.mysql.com/doc/mysql/en/multiple-tablespaces.html)


Using per-table-tablespaces ignores the innodb_data_file_path (yes, 
it uses it for the ibdata files, but not for the 
tablespace/data-files for the individual tables). The manual doesn't talk 
about the relationship between the per-tables-tablespaces and the 
innodb_data_file_path (or just as importantly the lack of 
relationship between the two).


That would all be fine, except the same page also states,

Using multiple tablespaces can be beneficial to users who want to 
move specific tables to separate physical disks or who wish to 
restore backups of single tables quickly without interrupting the use 
of the remaining InnoDB tables.


How do you move a table (thus the tablespace) to a separate disk? It 
implies that different tables and their related tablespaces can be 
put on different disks, but doesn't really get into the specifics.



you have to symlink the innodb table file:
move it to the disk you want, then create a symlink in the correct 
database directory that points towards the new location.


See: http://www.mysqluc.com/pub/w/35/sessions.html (New InnoDB Features)



David



Regards,
 Frank.




--
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/[EMAIL PROTECTED]



Optimizing Per-Table-Tablespaces

2005-06-21 Thread David Griffiths
The manual is a little unclear on per-table-tablespaces in 4.1/5.0 
(http://dev.mysql.com/doc/mysql/en/multiple-tablespaces.html)


Using per-table-tablespaces ignores the innodb_data_file_path (yes, it 
uses it for the ibdata files, but not for the tablespace/data-files for 
the individual tables). The manual doesn't talk about the relationship between 
the per-tables-tablespaces and the innodb_data_file_path (or just as 
importantly the lack of relationship between the two).


That would all be fine, except the same page also states,

Using multiple tablespaces can be beneficial to users who want to move 
specific tables to separate physical disks or who wish to restore 
backups of single tables quickly without interrupting the use of the 
remaining InnoDB tables.


How do you move a table (thus the tablespace) to a separate disk? It 
implies that different tables and their related tablespaces can be put 
on different disks, but doesn't really get into the specifics.


David

--
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/[EMAIL PROTECTED]



Re: Subquery error

2005-06-17 Thread David Griffiths


Short-answer: use IN instead of =

Long-answer:

Your query is kind of weird. I think you want to use IN:

SELECT memberid, fullname FROM members WHERE memberid IN (select 
distinct memberid FROM familymembers)


The equals sign implies an exact match between the top-level query and 
the sub-query, but I am guessing your sub-query will return more than one 
row, and that's where your problem lies.


Oracle won't let you do it (I haven't worked with anything other than 
4.0 in MySQL, so I can't say regarding 4.1) - complains that a 
single-row-subquery returns more than one row.


If you do this in Oracle:

select * from listing_status where listing_status_id = (select 
listing_status_id from listing_status where rownum < 2)


it will work, as the sub-query returns exactly one row. Not sure how 
MySQL would handle it if you put a LIMIT on it. But I don't think 
that's what you are trying to do.
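
If it turns out your server doesn't support subqueries at all (4.0 and 
earlier), the same result can be had with a plain join - a sketch, using 
the column names from your query:

SELECT DISTINCT m.memberid, m.fullname
  FROM members m
  JOIN familymembers f ON m.memberid = f.memberid;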



(also) David



David Kagiri wrote:


i get an error when i run the query below though SQLyog.

SELECT memberid,fullname FROM members WHERE memberid
= (select distinct memberid FROM familymembers)

the error is

Error Code : 1064

You have an error in your SQL syntax. Check the manual that corresponds
to your MySQL server version for the right syntax to use near 'select
distinct memberid FROM familymembers)' at line 2

(60 ms taken)

I use MySQL version 4.1.7 - doesn't it support subqueries?








__
Do You Yahoo!?
Tired of spam?  Yahoo! Mail has the best spam protection around 
http://mail.yahoo.com 

 




--
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/[EMAIL PROTECTED]



Interesting Hardware Article

2005-06-17 Thread David Griffiths
Anandtech has an interesting article 
(http://www.anandtech.com/IT/showdoc.aspx?i=2447) on hardware for Linux 
database servers.


Some very interesting conclusions:

1) Moving to 64-bit MySQL on a 64-bit Xeon actually decreases 
performance by about 12% on average, while an Opteron running 64-bit 
MySQL gets a 32% performance increase.


2) Innodb scales better (obviously)

3) A server with one CPU that has a dual-core Opteron (the X2 CPUs) is 
faster than a server with two single-core CPUs.


4) SuSE SLES 9.1 outperforms Gentoo by about 12%

I would take Anandtech with a grain of salt - this isn't what they 
normally do, and I can't verify their benchmarking was reasonably 
accurate (surprised at the disks they used - one ATA, one SCSI).


David.

--
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/[EMAIL PROTECTED]



Re: can innodb_buffer_pool_size be set 2Gb on FreeBSD?

2005-06-17 Thread David Griffiths


I'll post something I heard about when looking into upgrading Oracle 8i 
from Windows to Oracle 10g on Linux.


To get more memory for the process, you would enable big memory pages, 
and then create an in-memory temp file system; you could then allocate 
extra memory for a process, and part of it would be swapped out to this 
temp file system in memory. Red Hat Advanced Server was the OS of choice 
for those who did it - I played around with it, but couldn't get Oracle 
to start with larger memory settings (we weren't running on RedHat AS). 
Maybe you'll have more luck.


A good page that talked about this was,

http://www.oracle-base.com/articles/Linux/LargeSGAOnLinux.php

Good luck.

David

Jeff Smelser wrote:


On Friday 17 June 2005 02:38 pm, Brady Brown wrote:
 


Have any of you MySQL/FreeBSD cats successfully set
innodb_buffer_pool_size > 2G without running into any of the memory
allocation problems found on Linux platforms?
   



It has nothing to do with Linux - it's an x86 thing. So no.

However, some kernels have things to let you go over, but you get weird 
results when doing so.


Jeff

 




--
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/[EMAIL PROTECTED]



Re: What is best open-source platform alternative to overcome innodb 2Gb memory limit on Linux? FreeBSD?

2005-06-15 Thread David Griffiths
Why not go AMD-64? Dual Opteron, with 8/16/32 gig of RAM? Get a 3ware 
SATA drive, and run Gentoo for AMD-64. You can increase your innodb 
buffer pool to use almost all that space. You can make your buffer pool 
as large as the physical RAM in your machine can support. No 2.5 gig per 
process or 4-gig limit on addressable memory (without address extensions).


Your hardware is holding you back more than your operating system.

David





Brady Brown wrote:


Hi,

I am currently running a large database (around 20Gb) on a 32bit x86 
Linux platform. Many of my larger data-crunching queries are 
disk-bound due to the limitation described in the innodb configuration 
documentation:


*Warning:* On 32-bit GNU/Linux x86, you must be careful not to set 
memory usage too high. glibc may allow the process heap to grow over 
thread stacks, which crashes your server. It is a risk if the value of 
the following expression is close to or exceeds 2GB:


Being a responsible citizen, I have my innodb_buffer_pool_size set 
below 2Gb.  But the time has come to scale the application, so I need 
an alternative solution that will allow me to set 
innodb_buffer_pool_size as high as my heart desires (or at least well 
beyond 2Gb).


Do any of you have battle-tested recommendations?
How about FreeBSD?  From what I can gather, it is a good idea to build 
MySQL on FreeBSD linked with the Linux Thread Library. Would doing so 
re-invoke the 2Gb limit?


I look forward to your collective responses. Thanks!

Brady




--
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/[EMAIL PROTECTED]



Re: Seriously.. When are we going to get subqueries?!

2005-06-09 Thread David Griffiths


This is the weirdest thread I've ever seen. I've never seen so many 
segues used in a thread.


All we need now is for someone to post a question about configuring 
Tomcat to work with Microsoft SQL Server.


To get back to the spirit of the original post, I personally wouldn't 
use subqueries due to the indexing issues unless all tables in the query 
were static in size and small enough that the performance hit was 
negligible. A table that is growing would mean performance would drop 
quickly (especially if there were joins between larger tables that could 
have used indexes).


Subqueries aren't as useful as they are in other databases yet. Here's 
hoping 5.0 goes gamma/production quickly and 5.1 gets started on (with 
this feature in place).


David

Jay Blanchard wrote:


[snip]
 


1. Join the development work.

   



I tried contributing over at the Tomcat project and really just got
abused
by the team there.
[/snip]

That is unfortunate, but cannot be held against the MySQL team, can it?

[snip]
 


B. Find a product more suitable to your needs.
   



My issue is that shops who are committed to MySQL want to use my
product.
I'm really just trying to get along here.
[/snip]

This is new information. Have you spoken to anyone at MySQL since the
list has not been as helpful as you had hoped?

 




--
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/[EMAIL PROTECTED]



Re: Slave Dying

2005-06-07 Thread David Griffiths
That's on the schedule (and has been for a bit), but our slave seems to 
stop replicating every week or two. Combine that with weekly pushes and 
other must-do stuff, and it always seems to get dumped on the back burner.


David

Gleb Paharenko wrote:


Hello.



I recommend you to upgrade to 4.1.12 (4.0.24) because there were a lot of bug 
fixes

as of 4.0.20.





David Griffiths [EMAIL PROTECTED] wrote:

We are running 4.0.20 on two servers (AMD Opteron and Xeon).

Our slave has died twice in the last month with the following error:

Could not parse relay log event entry. The possible reasons are: the
master's binary log is corrupted (you can check this by running
'mysqlbinlog' on the binary log), the slave's relay log is corrupted
(you can check this by running 'mysqlbinlog' on the relay log), a
network problem, or a bug in the master's or slave's MySQL code. If you
want to check the master's binary log or slave's relay log, you will be
able to know their names by issuing 'SHOW SLAVE STATUS' on this slave.

I've tried resetting replication by setting the master log file and
position to the values that are given by show slave status in case it
was a network hiccup, but the same error.

After I did this, the slave's binary log file shows,

/*!40019 SET @@session.max_insert_delayed_threads=0*/;
# at 4
#691231 16:00:00 server id 1  log_pos 0 Rotate to
colossus-bin.030  pos: 12435199
# at 47
#691231 16:00:00 server id 1  log_pos 0 Rotate to
colossus-bin.030  pos: 12435199

So I went to the master, and turned the binary log into a text file
using mysqlbinlog and scanned by hand the approximate time it died; I
didn't see anything particularly interesting.

I then used mysqlbinlog with the -j option (to start parsing at a
particular spot; in this case, 12435199). The error I got was,

ERROR: Error in Log_event::read_log_event(): 'Event too big', data_len:
1701209458, event_type: 44
Could not read entry at offset 12435199: Error in log format or read error

Googling on some of the phrases in that error message didn't turn up
much, other than that it could potentially be a hardware or
disk-controller issue (we are using 3ware, self-built drivers).

Anyone have any thoughts? This has been fairly recent (we had some
max-allowed-packet issues till I bumped that up and reduced the size of
the binary logs). The hardware and software has been in place nearly a
year (except the kernel, which we bumped up to try to get around
corruption in the InnoDB data files on the Opteron master).

David


--
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/[EMAIL PROTECTED]



Slave Dying

2005-06-06 Thread David Griffiths

We are running 4.0.20 on two servers (AMD Opteron and Xeon).

Our slave has died twice in the last month with the following error:

Could not parse relay log event entry. The possible reasons are: the 
master's binary log is corrupted (you can check this by running 
'mysqlbinlog' on the binary log), the slave's relay log is corrupted 
(you can check this by running 'mysqlbinlog' on the relay log), a 
network problem, or a bug in the master's or slave's MySQL code. If you 
want to check the master's binary log or slave's relay log, you will be 
able to know their names by issuing 'SHOW SLAVE STATUS' on this slave.


I've tried resetting replication by setting the master log file and 
position to the values that are given by show slave status in case it 
was a network hiccup, but the same error.


After I did this, the slave's binary log file shows,

/*!40019 SET @@session.max_insert_delayed_threads=0*/;
# at 4
#691231 16:00:00 server id 1  log_pos 0 Rotate to 
colossus-bin.030  pos: 12435199

# at 47
#691231 16:00:00 server id 1  log_pos 0 Rotate to 
colossus-bin.030  pos: 12435199



So I went to the master, and turned the binary log into a text file 
using mysqlbinlog and scanned by hand the approximate time it died; I 
didn't see anything particularly interesting.


I then used mysqlbinlog with the -j option (to start parsing at a 
particular spot; in this case, 12435199). The error I got was,


ERROR: Error in Log_event::read_log_event(): 'Event too big', data_len: 
1701209458, event_type: 44

Could not read entry at offset 12435199:Error in log format or read error

Googling on some of the phrases in that error message didn't turn up 
much, other than that it could potentially be a hardware or 
disk-controller issue (we are using 3ware, self-built drivers).


Anyone have any thoughts? This has been fairly recent (we had some 
max-allowed-packet issues till I bumped that up and reduced the size of 
the binary logs). The hardware and software has been in place nearly a 
year (except the kernel, which we bumped up to try to get around 
corruption in the Innodb data files on the Opteron master).


David

--
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/[EMAIL PROTECTED]



MySQL Migration Tool - who wrote it?

2005-05-27 Thread David Griffiths
This isn't exactly the right spot, but I can't find any info on the 
MySQL web site.


Anyone know who maintains the MySQL Migration Tool (or who is developing 
it, as it is currently Alpha)?


It does not support Oracle 8i (9i and 10g only), but I've looked through 
the source code, and it's not a huge change to add the 8i functionality. 
I'm willing to do it if I can find out who to submit the changes to.


David

--
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/[EMAIL PROTECTED]



Re: No Longer Receiving Emails

2005-05-27 Thread David Griffiths
I stopped receiving email most of yesterday as well - it's still 
catching up today.


I think there was a hiccup in the list.

David

Cummings, Shawn (GNAPs) wrote:



It's possible that Gabe's mail spool is full, and he is not receiving 
mail - including our responses.


Gabriel - if you are receiving any messages directly sent to you - 
please respond to the list to rule that out.


If no responses are made to any of these, either directly or through the 
list, then he isn't receiving mail at all for some reason, if not a 
full spool.




Jay Blanchard wrote:


I saw this on this list, so it is getting there just fine.

 








--
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/[EMAIL PROTECTED]



Re: Slow queries, why?

2005-05-04 Thread David Griffiths
Yes, indexes slow down inserts (or updates that change the value of a 
column that is indexed).

Also, remember that MySQL only uses one index per table in a query. 
So if there are some columns in your table that are indexed, but,

1) Have poor cardinality (number of distinct values - low cardinality 
means there aren't many distinct values)
2) Are only used in a where clause with another column that has good 
cardinality

then they are an excellent candidate for removal.
While EXPLAIN is great for queries, it won't help much with an insert; 
it might be useful for figuring out what indexes are used, and which 
ones aren't.
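
A quick way to check cardinality is SHOW INDEX, which reports an
estimated cardinality per indexed column (table_name is a placeholder):

SHOW INDEX FROM table_name;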

Use "show innodb status" to get an idea of what's going on (Gleb 
suggested it in the link to the innodb monitor).

You should also post the relevant parts of your my.cnf file; have you 
seen this equation before:

Memory Used By MySQL = Innodb_buffer_pool_size + key_buffer_size + 
max_connections*(sort_buffer_size+read_buffer_size+binlog_cache_size) + 
max_connections*2MB

Use it to calculate how much memory you are using.
Finally, read up on phantom reads: 
http://sunsite.mff.cuni.cz/MIRRORS/ftp.mysql.com/doc/en/InnoDB_Next-key_locking.html

This might be what's happening.
David
Gleb Paharenko wrote:
Hello.

We're running MySQL 4.11 on a machine with 2GB memory, the table is
InnoDB with a compound primary key, and additional indexes on all rows
with searchable options in the API. Any generic advice or admin tools
would be great.

Use EXPLAIN to determine how efficient your indexes are. Using a lot of
keys could slow down the INSERT operations but speed up the SELECTs.
InnoDB monitors might be helpful in your case as well. See:
 http://dev.mysql.com/doc/mysql/en/explain.html
 http://dev.mysql.com/doc/mysql/en/innodb-monitor.html

Joseph Cochran [EMAIL PROTECTED] wrote:

So here's my situation: we have a database that has a table of about 5
million rows. To put a new row into the table, I do an INSERT ...
SELECT, pulling data from one row in the table to seed the data for
the new row. When there are no active connections to the DB other than
the one making the INSERT, it runs like a charm. But during normal
daytime operation, when we run around 50 connections (most sleeping at
any one time), it takes up to two minutes to do, and ends up locking
any other inserts or updates against that table for the entire time.

I'll get into more specifics if they're required, but I wanted to ask
in general if MySQL has tools to diagnose this, or if anyone has had
general situations like this. In SQL Server (which is where I have
most of my experience) I could use the trace tool and the Query
Analyzer to tell what the execution plan for the query was and thus
what's stalling it (an index gone bad, a weird locking situation,
etc).

We're running MySQL 4.11 on a machine with 2GB memory, the table is
InnoDB with a compound primary key, and additional indexes on all rows
with searchable options in the API. Any generic advice or admin tools
would be great.

-- Joe


--
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/[EMAIL PROTECTED]


Re: InnoDB memory usage clarification

2005-04-27 Thread David Griffiths
Mayuran,
It depends on a bunch of things. What else is running on this server? 
Does the distro you use have the ability to take advantage of all 16 gig 
(ie if you have apache running, will it be stuck in the same 4 gig as 
MySQL, or can it use the memory above the 4 gig limit).

How big is your database? The innodb_buffer_pool_size holds data from 
your database in memory; if you run a query, and the data is in the 
buffer_pool, the query returns very quickly. If it is not in the 
buffer_pool, then MySQL/InnoDB has to go to disk to get the data. If 
your database is 100 megabytes, there is not much sense in setting a 
buffer_pool of 1 gigabyte. If your database is 10 gigabytes, then you 
will probably encounter some slowness as the disk is being accessed.

How many users will connect? Each user requires some memory for the 
connection, for sorting, etc, etc.

The following equation gives you an idea of how much memory MySQL will 
consume, based on various parameters:

innodb_buffer_pool_size + key_buffer + max_connections * (sort_buffer + 
record_buffer) + max_connections * 2 MB
If you try to grab too much, mysql will crash. Check your distribution 
to figure out what the max process size is.

David
Mayuran Yogarajah wrote:
The following are from the InnoDB configuration page:
# Set buffer pool size to 50-80% of your computer's memory,
# but make sure on Linux x86 total memory usage is < 2GB
*Warning:* On 32-bit GNU/Linux x86, you must be careful not to set 
memory usage too high. glibc may allow the process heap to grow over 
thread stacks, which crashes your server.

Can someone please explain what this means.  We have a 32bit Linux x86 
server with 16gigs of
ram.  Because it is 32bit and not 64bit we can't really make much use 
of all the ram.  I am wondering
which values I can safely increase without crashing the server. Here 
are some of the parameters we are
using in our conf file:

thread_concurrency = 16
table_cache = 512
innodb_buffer_pool_size = 1000M
innodb_additional_mem_pool_size = 20M
innodb_log_file_size = 100M
innodb_log_buffer_size = 8M
From SHOW INNODB STATUS:
BUFFER POOL AND MEMORY
--
Total memory allocated 462835472; in additional pool allocated 3569664
Buffer pool size   24576
Free buffers   0
Database pages 23956
Modified db pages  11531
Free buffers is 0.
Someone mentioned that because it's a quad Xeon each CPU would have 
2 gigs of RAM to work with.  Does this
mean that I can set the InnoDB buffer pool much higher?

any feedback is welcome.
thanks.

--
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/[EMAIL PROTECTED]


Re: innodb, optimizer and outer join

2005-04-22 Thread David Griffiths
Boyd,
You can tell Hibernate not to use outer-joins by setting 
hibernate.use_outer_join to false in the hibernate configuration 
properties file.

It's an always-never proposition. Of course, you can use the Hibernate 
Query object to write your own queries when you know you do need an 
outer join (and you still get the benefit of the relational-object 
mapping).

I won't answer the question about the Innodb optimizer, as I don't know 
the answer.
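
One way to check it on your own data, though, is to compare the plans 
directly - a sketch using the tables from Boyd's benchmark below:

EXPLAIN SELECT * FROM foo_child fc JOIN foo f
  ON fc.foo_id = f.foo_id WHERE f.foo_id = 1;
EXPLAIN SELECT * FROM foo_child fc LEFT JOIN foo f
  ON fc.foo_id = f.foo_id WHERE f.foo_id = 1;

If the two EXPLAIN outputs match (same join order, same keys), the 
optimizer is treating the two forms the same for this query.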

David
Boyd E. Hemphill wrote:
We are considering using Hibernate as a persistence layer to our web
application.  It seems to only want to do outer joins and this concerns
me
b/c they can be expensive.  I created the following benchmark experiment
and
learned that the explain plan for the two constrained queries is the
same.  


What I would like to know is can I depend on the performance being the
same,
or is the optimizer doing something different b/c of the outer join?  I
seem
to remember something about it not using the index all the time or
forcing a
full table scan in some cases. 


Since Hibernate seems to using only an outer join rather than a join, I
would like this concern put to rest.

Thanks for any insight.
Boyd

create table foo (
  foo_id int unsigned not null auto_increment primary key,
  foo_sn varchar(15)
)
;
create table foo_child (
  foo_child_id int unsigned not null auto_increment primary key,
  foo_id int unsigned not null,
  foo_child_sn varchar(15),
  index fk_foo$foo_child (foo_id)
)
;
insert into foo values 

 (1,'a'),
 (2,'b'),
 (3,'c'),
 (4,'d')
;
insert into foo_child values 

 (1,1,'z'),
 (2,1,'y'),
 (3,2,'x'),
 (4,3,'w'),
 (5,9,'v bad 1'),
 (6,9,'v bad 2'),
 (7,3,'t'),
 (8,4,'s')
;

-- unconstrained
select *
 from foo_child fc join foo f on fc.foo_id = f.foo_id
 

select *
 from foo_child fc left join foo f on fc.foo_id = f.foo_id

-- constrained
select *
 from foo_child fc join foo f on fc.foo_id = f.foo_id
where f.foo_id = 1  


select *
 from foo_child fc left join foo f on fc.foo_id = f.foo_id
where f.foo_id = 1  



Best Regards,
Boyd E. Hemphill
MySQL Certified Professional
[EMAIL PROTECTED]
Triand, Inc.
www.triand.com
O:  (512) 248-2278 x 405


 


--
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/[EMAIL PROTECTED]


Re: Could not parse relay log event entry. error on slave

2005-03-02 Thread David Griffiths
Thanks for the response.
We can't afford to lose information, and I don't like doing dangerous 
things.

I guess it's time to rebuild the slave.
David
Gleb Paharenko wrote:
Hello.

Other than rebuilding the slave from a backup of the master, is there
any way to get the replication back up?

Have you tried to stop the slave and then start with SQL_SLAVE_SKIP_COUNTER = n,
as suggested at:
 http://dev.mysql.com/doc/mysql/en/replication-problems.html

But if the replication starts successfully, you'll lose some information
(which can be critical). You may RESET the slave and then use a CHANGE
MASTER statement to begin the replication with bin-log position 889778259.
However it is dangerous: if the slave SQL thread was in the middle of
replicating temporary tables when it was stopped, and RESET SLAVE is issued,
these replicated temporary tables are deleted on the slave.

David Griffiths [EMAIL PROTECTED] wrote:

We have a master-slave setup in production.

The master is running on a dual-Opteron with SuSE 8 SLES.

The slave is running on a dual Xeon with SuSE 9.

Both run MySQL 4.0.20.

We recently moved our traffic database to the machine and started
writing additional traffic (perhaps as much as 600,000 inserts/updates
plus at least as many selects per day).

We use Nagios to monitor the machines, and have gotten alerts that the
slave is not responding (this started yesterday, which is our busiest day).

This morning, the alert appeared again, but this time, there was an
error in show slave status:

Could not parse relay log event entry. The possible reasons are: the
master's binary log is corrupted (you can check this by running
'mysqlbinlog' on the binary log), the slave's relay log is corrupted
(you can check this by running 'mysqlbinlog' on the relay log), a
network problem, or a bug in the master's or slave's MySQL code. If you
want to check the master's binary log or slave's relay log, you will be
able to know their names by issuing 'SHOW SLAVE STATUS' on this slave.

I am running mysqlbinlog on the current binary log on the slave, but
it's a large file, and is still going.

On the master, the binary-log-pos is 929084940. On the slave, it's way
back at 889778259.

Other than rebuilding the slave from a backup of the master, is there
any way to get the replication back up?

David


--
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/[EMAIL PROTECTED]


Could not parse relay log event entry. error on slave

2005-03-01 Thread David Griffiths
We have a master-slave setup in production.
The master is running on a dual-Opteron with SuSE 8 SLES.
The slave is running on a dual Xeon with SuSE 9.
Both run MySQL 4.0.20
We recently moved our traffic database to the machine and started 
writing additional traffic (perhaps as much as 600,000 inserts/updates 
plus at least as many selects per day).

We use Nagios to monitor the machines, and have gotten alerts that the 
slave is not responding (this started yesterday, which is our busiest day).

This morning, the alert appeared again, but this time, there was an 
error in show slave status

Could not parse relay log event entry. The possible reasons are: the 
master's binary log is corrupted (you can check this by running 
'mysqlbinlog' on the binary log), the slave's relay log is corrupted 
(you can check this by running 'mysqlbinlog' on the relay log), a 
network problem, or a bug in the master's or slave's MySQL code. If you 
want to check the master's binary log or slave's relay log, you will be 
able to know their names by issuing 'SHOW SLAVE STATUS' on this slave.

I am running a mysqlbinlog on the current binary log on the slave, but 
it's a large file, and is still going.

On the master, the binary-log-pos is 929084940. On the slave, it's way 
back at 889778259.

Other than rebuilding the slave from a backup of the master, is there 
any way to get the replication back up?

David
--
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/[EMAIL PROTECTED]


Re: InnoDB: Problem with innobackup

2005-02-18 Thread David Griffiths
James,
We've had this issue twice (every 4 months) - running on 4.0.20 - due to 
an old kernel (we just upgraded the kernel after the last issue).

Do you have a replicated (slave) database? We shut down the master and 
then the slave (a few minutes after the master to let all changes 
propagate), and then copy the data files from the slave to the master 
and restart. We have to rebuild the slave after, but the database is up 
and running at that point.

If that doesn't work, what about using an older (valid) backup and your 
binary logs? You can turn the binary logs into the SQL statements and 
run them on the old backup to bring the database up to date...
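
A sketch of that recovery path, with invented file names (mysqlbinlog prints
the binary log as SQL, which can be piped straight into the server):

shell> mysql -u root -p < old_valid_backup.sql
shell> mysqlbinlog host-bin.042 | mysql -u root -p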

Also, be careful about checking the tables - if one is found to be 
corrupt, it is marked as unusable until it is fixed. There are also 
different levels of CHECK TABLE - are you using the appropriate one?
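
For reference, the levels look like this (the table name is made up; QUICK,
MEDIUM and EXTENDED trade speed against thoroughness):

mysql> CHECK TABLE orders QUICK;
mysql> CHECK TABLE orders MEDIUM;
mysql> CHECK TABLE orders EXTENDED;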

David
James Green wrote:
Hi,
On running the hot backup tool we receive:
ibbackup: Re-reading page at offset 0 366297088 in /var/lib/mysql/data/ibdata1
ibbackup: Re-reading page at offset 0 366297088 in /var/lib/mysql/data/ibdata1
050218 15:18:01  InnoDB: Page dump in ascii and hex (16384 bytes):
len 16384; hex
eeaefd1a57557b35693200017183e16e45bf000[garbage
continues]

(.(.u:.%.1./.7u.E.e8.'%.e.c9]q...;
InnoDB: End of page dump
050218 15:18:01  InnoDB: Page checksum 4004445466,
prior-to-4.0.14-form checksum 3154721000
InnoDB: stored checksum 4004445466, prior-to-4.0.14-form stored
checksum 2825075037
InnoDB: Page lsn 1 1904468334, low 4 bytes of lsn at page end 1904466222
InnoDB: Page number (if stored to page already) 22357,
InnoDB: space id (if created with >= MySQL-4.1.1 and stored already) 0
InnoDB: Page may be an index page where index id is 0 162
ibbackup: Error: page at offset 0 366297088 in
/var/lib/mysql/data/ibdata1 seems corrupt!
innobackup: Error: ibbackup child process has died at innobackup.pl line 332.
We have gone through (via a script) and every table in every database
(all but 'mysql' are InnoDB) returns 'OK' using 'check table'.
We did suffer a hardware failure which required a table to be dropped
and rebuilt, however that was resolved and everything appears to be
operating fine now. Except we want the hot backup to work and it
clearly doesn't.
Looking for options. We have mysqldumps but clearly restoration will
be very slow. The server is Debian Linux (stable) with MySQL-4.1.9
from the mysql.com binary tarball.
Help! Many thanks!
James
 


--
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/[EMAIL PROTECTED]


Test Message

2005-02-09 Thread David Griffiths
Our mail server has been stopping emails to the list.
This is a test message to see if it bounces again.
Please ignore (and accept my apology for generating noise).
David.
--
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/[EMAIL PROTECTED]


Re: ANNOUNCE: SHSQL - SQL for LINUX/UNIX Shell scripts

2004-11-05 Thread David Griffiths
Wow - cool idea - nice job. Looking forward to playing with it.
David.
Eddy Macnaghten wrote:
Hi all
I have just released a utility (under the GPL) that enables SQL to be
incorporated into UNIX/LINUX shell scripts (easier than using psql or
similar with better integration).
For more information see http://www.edlsystems.com/shsql
To download ftp://ftp.edlsystems.com/shsql 

(needless to say a MySQL version is included :-))
Enjoy
Eddy
 


--
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/[EMAIL PROTECTED]


Re: mySQL Clustering and HA (NDB - Emic Networks Solution - Replication) : Enterpise Use

2004-11-04 Thread David Griffiths
You should read this page on clusters:
http://dev.mysql.com/doc/mysql/en/MySQL_Cluster_Limitations_in_4.1.html
Replication and clustering are different technologies.
Replication works great, but there are no algorithms to prevent the same 
row being updated on different servers at the same time (in other words, 
multimaster replication does not have collision detection algorithms 
like Oracle does). You can however chain servers together so that a 
server that is a slave is also the master to another server. I'd suggest 
you buy and read High Performance MySQL by Derek Balling and Jeremy 
Zawodny. They discuss all these issues.

Last but not least, there is a big disconnect between 24x7 and way 
too expensive for our budget. Say you get three computers replicating 
so that in the event of the failure of a single server, you still have 
two up and running. What if your power goes out for a day? Do you have a 
backup generator? Do you have multiple redundant network feeds? Do you 
have multiple hot-standby sites around the world to take over in the 
case of an earthquake/tidal wave/hurricane (even if you did, how long 
would it take for the DNS updates to percolate around the world)?

The cost of a setup goes up by orders of magnitude as you go past the 
99% availability. And remember, hardware requires someone on call or 
sitting at a desk 24 hours a day, 7 days a week to monitor the hardware, 
and change things around. When I worked for GTE, we spent over a million 
dollars on a single server that had redundant nodes (each with their own 
disks) all connected by a very fast fiber optic cable, and we needed 
several of them.

Food for thought.
David
http://www.oreillynet.com/cs/catalog/view/au/1758?x-t=book.view
Mark Papadakis wrote:
Hello all,
After playing with the idea of 'abandoning ship' in favor of IBM DB2
or Oracle, we decided to stick with mySQL, due to its simplicity and the
investment in time and experience we have put into it.
Our company needs a HA solution for ensuring 24x7 operation for the
mySQL server instances. As it is, there are two solutions
available for the problem: Emic Networks's EAC for mySQL and MySQL's
Cluster.
The Emic solution seems to work but is way too expensive for our
budget (around $4k for each 2-CPU node). So we need to either go with
NDB or try to get replication to work properly.
Here is a list of questions:
o How 'stable' is MySQL cluster (NDB) ? Is it ready for enterprise
use? Or even tested?
o Does the memory size limit the data we can manage? If it is a memory
based solution it should mean that it can handle of a very limited
number of databases/tables/rows, based on the available memory of the
nodes.
o Is there some sort of tight integration planned for mySQL cluster
and mySQL server ? Perhaps in 5.0 ?
o When is adding/removing nodes on the fly scheduled for
implementation? Without such a feature the system would have to be
shutdown - therefore not a complete HA solution.
o Has anyone gotten replication to work right? Perhaps Multi-Master replication?
Thank you very much in advance,
MarkP
 


--
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/[EMAIL PROTECTED]


Re: Strange results from a query

2004-10-21 Thread David Griffiths
Thanks, Stephen - that's interesting to know.
David

Stephen E. Bacher wrote:
FYI, I ran the same scenario under MySQL 3.23.58 and it produced the
results you wanted/expected:
+------+---------+---------+
| col1 | A-count | B-count |
+------+---------+---------+
|    1 |       4 |       0 |
|    2 |       0 |       6 |
+------+---------+---------+
So could it be a bug in 4.0.18?
- seb
---
David Griffiths [EMAIL PROTECTED] wrote:
 

One of our developers came to me yesterday with strange results from a 
query. I've created a simple version of the example. I've pasted the 
table definitions at the bottom if someone really needs to see them. 
This is on mysql 4.0.18.

insert into master (col1) values (1), (2);
insert into sub (col1, a, b, c) values
(1, 'a', null, '2'),
(1, 'a', null, '2'),
(2, null, 'b', '3'),
(2, null, 'b', '3');
mysql> select m.col1,
    -> sum(s1.c) as 'A-count',
    -> sum(s2.c) as 'B-count'
    -> FROM master m
    -> left join sub s1 on (m.col1 = s1.col1 AND s1.a is not null)
    -> left join sub s2 on (m.col1 = s2.col1 and s2.b is not null)
    -> group by m.col1;
+------+---------+---------+
| col1 | A-count | B-count |
+------+---------+---------+
|    1 |       8 |    NULL |
|    2 |    NULL |      12 |
+------+---------+---------+
2 rows in set (0.00 sec)
In case it's not obvious, the count for the column marked A should be 
4, not 8. And for B, it should be 6, not 12. The database seems to be 
iterating through the table twice.

If one of the outer-joins is removed, the results are correct. I would 
hazard a guess that if a third column existed in master/sub, and a third 
left-join was added, A would go to 12, and B would go to 16. Each 
outer join seems to spawn a new iteration through the data.

My question is why, and what would be the strategy to avoid this?
   

 


--
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/[EMAIL PROTECTED]


Re: MySQL Database Corruption (InnoDB), according to Innodb Hot Backup

2004-10-19 Thread David Griffiths
No worries about the late reply.
We took down the master, took a hot backup from the slave (I still need 
to convert that 30-day license into a permanent one), moved it to the 
master, started the master, and then took a hot backup and 
re-initialized the slave. Took all of a few hours, and things are good.

We did have some weird crashing issues with this machine while using an 
LSI RAID card (RAID 5) - ie creating an index killed mysql. We switched 
to a 3ware SATA card (almost as fast in RAID 0+1, and much cheaper even 
with wasting more disk space for mirroring) and the problems disappeared.

Unfortunately, this corruption occurred about 4 months into setting up 
MySQL/Innodb - I hope we don't have to go through this every few months. 
Taking an additional backup from the slave should give us extra redundancy.

Corruption and weird crashes could be the result of specific 
drivers/hardware and/or specific versions of Linux.

Do you have any suggestions for tracking these issues, so that any 
platform/distro issues can be avoided (and hopefully addressed by OEMs 
and developers)??

David

Heikki Tuuri wrote:
David,
I am sorry for a late reply.
The corruption clearly is in the ibdata file of the production database.
InnoDB Hot Backup checks the page checksums when it copies the ibdata files.
Since CHECK TABLE fails, the corruption probably is in that table. You can
try to repair the corruption by dump + DROP + reimport of that table.
innodb_force_recovery cannot fix any kind of corruption.
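
A sketch of that dump + DROP + reimport cycle (database and table names are
hypothetical):

shell> mysqldump mydb corrupt_table > corrupt_table.sql
mysql> DROP TABLE corrupt_table;
shell> mysql mydb < corrupt_table.sql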
 

InnoDB: Page lsn 3 1070601164, low 4 bytes of lsn at page end 1070609127
   

The corruption has almost certainly happened in the OS or the hardware,
because InnoDB checks page checksums before writing them to the ibdata
files. Since the lsn stored at the page start differs from what is stamped
at the page end, there is corruption at either end of the page. We have
received quite a few reports of strange crashes in Opteron/Linux boxes. That
suggests there are still OS bugs or hardware flaws in that platform.
Best regards,
Heikki
Innobase Oy
InnoDB - transactions, row level locking, and foreign keys for MySQL
InnoDB Hot Backup - a hot backup tool for InnoDB which also backs up MyISAM
tables
http://www.innodb.com/order.php
Order MySQL support from http://www.mysql.com/support/index.html
..
From: David Griffiths ([EMAIL PROTECTED])
Subject: MySQL Database Corruption (InnoDB), according to Innodb Hot Backup
Newsgroups: mailing.database.myodbc
Date: 2004-09-30 12:23:37 PST
I went to do some work on our database last night (dropping large
indexes, which can be time consuming). I checked to ensure that the
backup of that evening had run, but noticed that the size of the backup
was too small compared to previous days (I'm kicking myself for not
emailing the results of the backup to myself every night - I just have a
job that verifies that the backup actually ran).
So I ran the backup by hand. We have 8 data files, the first 7 being 4
gig in size, and the last being a 10-meg autoextend. This is MySQL
4.0.20 64bit, running on a dual Opteron machine running SuSE 8
Enterprise (64-bit). We are using ibbackup 2.0 beta (which is 64-bit for
the Opteron).
ibbackup (the Innodb backup utility) complains on the first file.
ibbackup: Re-reading page at offset 0 3272818688 in
/usr/local/mysql/var/ywdata1
this repeats a few hundred times
Then it dumps some ascii:
040930 11:44:14  InnoDB: Page dump in ascii and hex (16384 bytes):
len 16384; hex 55c3ee4d00030c4d00030c4c000374.
And at the bottom,
040930 11:44:14  InnoDB: Page checksum 1522485550, prior-to-4.0.14-form
checksum 1015768137
InnoDB: stored checksum 1438903885, prior-to-4.0.14-form stored checksum
4028531590
InnoDB: Page lsn 3 1070601164, low 4 bytes of lsn at page end 1070609127
InnoDB: Page number (if stored to page already) 199757,
InnoDB: space id (if created with >= MySQL-4.1.1 and stored already) 0
InnoDB: Page may be an index page where index id is 0 680
ibbackup: Error: page at offset 0 3272818688 in
/usr/local/mysql/var/ywdata1 seems corrupt!
While we no longer seem to have a backup, we do have a slave (not sure
if the corruption propagated to the slave; I know it can happen in Oracle).
I have a few questions:
1) Is InnoDB backup correct? This might be a false positive (doubt it
though).
2) What are the risks of stopping and starting the database? There is a
force-recovery option in innodb, which might fix the corruption. Note
that I answered this myself. I ran a check table on one of our larger
tables (600,000 rows) which killed the database. It came back up fine. I
re-ran the backup - same issue, with the same page checksums, etc.
3) Anyone have any experience with this? Keep in mind that this might be
an Opteron/MySQL-64bit issue. Or it might be a general issue in MySQL.
Thanks,
David
 


--
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http

Strange results from a query

2004-10-19 Thread David Griffiths
One of our developers came to me yesterday with strange results from a 
query. I've created a simple version of the example. I've pasted the 
table definitions at the bottom if someone really needs to see them. 
This is on mysql 4.0.18.

insert into master (col1) values (1), (2);
insert into sub (col1, a, b, c) values
(1, 'a', null, '2'),
(1, 'a', null, '2'),
(2, null, 'b', '3'),
(2, null, 'b', '3');
mysql> select m.col1,
    -> sum(s1.c) as 'A-count',
    -> sum(s2.c) as 'B-count'
    -> FROM master m
    -> left join sub s1 on (m.col1 = s1.col1 AND s1.a is not null)
    -> left join sub s2 on (m.col1 = s2.col1 and s2.b is not null)
    -> group by m.col1;
+------+---------+---------+
| col1 | A-count | B-count |
+------+---------+---------+
|    1 |       8 |    NULL |
|    2 |    NULL |      12 |
+------+---------+---------+
2 rows in set (0.00 sec)
In case it's not obvious, the count for the column marked A should be 
4, not 8. And for B, it should be 6, not 12. The database seems to be 
iterating through the table twice.

If one of the outer-joins is removed, the results are correct. I would 
hazard a guess that if a third column existed in master/sub, and a third 
left-join was added, A would go to 12, and B would go to 16. Each 
outer join seems to spawn a new iteration through the data.

My question is why, and what would be the strategy to avoid this?
Here are the table defs:
create table master (col1 int not null);
create table sub (col1 int not null, a char(1) null, b char(1) null, c 
smallint);

insert into master (col1) values (1), (2), (3);
Thanks, David
--
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/[EMAIL PROTECTED]


Re: Strange results from a query

2004-10-19 Thread David Griffiths
Sorry - removed some data to make it clearer.
insert into master (col1) values (1), (2);
is correct.
David
Michael Stassen wrote:
Before I think about this, which is it?
insert into master (col1) values (1), (2);
or
insert into master (col1) values (1), (2), (3);

Michael

--
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/[EMAIL PROTECTED]


Re: MySQL implementation of Oracle sequences

2004-10-15 Thread David Griffiths
Having done one of these conversions in the past, I can say that 
auto-incremented columns work just fine.

You insert the row, and then make a SELECT last_insert_id() call - 
this returns the value of the last auto-increment generated via an 
insert for the connection (so some other database connection won't 
overwrite your value). You can then propagate that value into child-records.

This is much easier, and more efficient than either of the two other 
suggestions.
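
A minimal sketch of the pattern (table and column names are invented for
illustration):

mysql> CREATE TABLE users (id INT AUTO_INCREMENT PRIMARY KEY,
    ->                     name VARCHAR(40)) TYPE=InnoDB;
mysql> INSERT INTO users (name) VALUES ('alice');
mysql> SELECT LAST_INSERT_ID();   -- per-connection, so safe under concurrency
mysql> INSERT INTO user_prefs (user_id, pref)
    -> VALUES (LAST_INSERT_ID(), 'default');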

David
Kenneth Lim wrote:
Hello -
I was wondering if others have had to deal with an Oracle to
MySQL migration and how you handled the implementation equivalent
of Oracle sequences in MySQL.
Our application uses a bunch of Oracle sequences to keep ID
uniqueness for each sequence type.  For example, we have:
 UserIDSequence
 NodeIDSequence
 etc.
When we create new records, we first ask the Oracle sequences
for all new IDs.  Then we generate a bunch of insert statements
and ultimately insert the new records.
We've thought of 3 possible solutions from easiest to difficult
based on our source semantics and amount of work.  I would 
appreciate any helpful insights that others might have.

We are using MySQL 4.1.5-Gamma with the InnoDB engine.
1- Create a single table with a single auto-increment column
  to hold a system-wide unique ID.  Every time we want a unique
  ID, we insert in this table and get the value with the
  LAST_INSERT_ID() function.
  This approach seems to create a bottleneck at this single
  table.
2- Create a single table with 2 columns: sequencename, counter.
  Every time we want a unique ID for a particular sequence, we
  increment the counter and get it back immediately.
  This approach seems to create a bottleneck also.  But I would
  imagine this approach is more costly than solution #1.
3- Redo our semantics by replacing our insert statements and
  allowing AUTO_INCREMENTed columns to keep the IDs unique.
Thanks for your feedback.
-ken
Kenneth Lim
Software Engineer
Senvid, Inc.
2445 Faber Place, Suite #200
Palo Alto, CA  94303
phone: 650-354-3612
fax: 650-354-8890
email: [EMAIL PROTECTED]
http://www.senvid.com
 


--
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/[EMAIL PROTECTED]


Re: Command that I believe should work...

2004-10-15 Thread David Griffiths
Only static data is allowed as default-values; functions, derived data 
etc, are not allowed.

An unfortunate shortcoming.
On the flipside, there is some weird rule that the first timestamp in a 
table will be set to the current date/time during an insert if the 
column is left out of the insert clause (ie you try to insert null). 
It's an ugly kludge, unfortunately, but it might do the trick for you.
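
A quick illustration of that rule (table and column names are made up):

mysql> CREATE TABLE invoice (inv_date TIMESTAMP, note VARCHAR(20));
mysql> INSERT INTO invoice (note) VALUES ('test');
mysql> SELECT inv_date FROM invoice;   -- inv_date was filled in with the current date/time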

David
Chris W. Parker wrote:
Robert Adkins [EMAIL PROTECTED] wrote on Friday, October 15, 2004 12:23 PM:

   INV_DATE DATETIME   DEFAULT NOW()   NOT NULL,
   );
   I receive an error message stating that there is an error with 'NOW()'

[snip]

   Is there a very different method of doing this under MySQL 4.0.21?

I think MySQL does not support a default value of NOW() like you'd
expect it to. Yeah I know, it sucks. I don't know at what point this was
added, if it's been added at all. (My MySQL version is a bit old also.)

Chris.
 


--
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/[EMAIL PROTECTED]


Re: subquery??

2004-10-12 Thread David Griffiths
Miguel,
No subquery needed
SELECT e.name, c.telephone
FROM employee e
LEFT JOIN contact c ON c.id = e.id
WHERE e.sex = 'F'
LEFT JOIN means there does not have to be a matching contact row to 
find an employee row, but if there is a matching row, the data will be 
returned.

I just guessed at what columns you were interested in, and assumed that 
id was the foreign key between employee and contact.

David
Miguel Loureiro wrote:
Hello, I'm new at this. I have 2 related tables. How can I see some
data from the main table (simple where clause) and, if data exists in the
related table, show it too? Confused??
Main table: Employee: id, name, sex, age
Related table: Contact: id, telephone, employee

I want to see all female employees and, if exists the respective
contacts...

Thanks 4 your help
Best Regards
Miguel Joaquim Rodrigues Loureiro
- Software Development * Internet Solutions -
  http://www.hlink.pt 


 


--
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/[EMAIL PROTECTED]


Re: Sync db

2004-10-01 Thread David Griffiths
It's safe to implement two-way replication (properly called multimaster 
replication) in MySQL if each master is guaranteed to only update a 
unique subset of that data (ie data that no other master database 
changes). Each master database would be able to safely read all the 
data, however.

For example, say you have a database that exists on two servers, with 
replicated data consisting of one table and four records, which I'll 
call A, B, C and D.

If server 1 only ever updates records A and B, and server 2 only ever 
updates C and D, then there is no issue with multimaster replication.

The issue is conflict resolution. In the example above, say record A was 
updated on server 1 at the same time record A was updated on server 2. 
Which change is correct? What if the conflict was not so trivial, but 
involved records on multiple tables with auto-incremented primary keys 
and foreign key constraints to other records inserted or updated at the 
same time? You could end up with a real mess.

Some other database vendors do offer multimaster replication (Oracle 
being the big one), but I've heard that even their conflict resolution 
algorithms are not perfect.

In your case, you need to ensure that the same record is not being 
updated (or deleted) on one database at the same time it is being 
accessed on another database. You have to figure out how to do that 
yourself, and implement it in whatever application is talking to the 
database(s). MySQL can't do it for you, at least not right now.

Have you considered having read-only databases at the remote locations, 
and a single master that people write to which in turn sends changes to 
the 4 read-only databases?
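
A sketch of that topology's configuration, with invented host names and
credentials (the master-* my.cnf options are how a 4.0-era slave was pointed
at its master):

# master my.cnf
[mysqld]
log-bin
server-id = 1

# each remote slave's my.cnf (server-id must be unique per server)
[mysqld]
server-id       = 2
master-host     = master.example.com
master-user     = repl
master-password = secret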

David
Jim Grill wrote:
Two way replication is possible. However, it does not work like you think it
would. It would not be safe to modify data on both databases at the same
time. See the FAQ on replication:
http://dev.mysql.com/doc/mysql/en/Replication_FAQ.html
There is some information regarding two-way replication that will shed some
light on the subject.
Jim Grill

yeah, I've seen that. Have you tried that before? I need to do it
two-way, but have not seen any special setup for that. Each site
will be entering data, and all sites will need to see the updated
change.
On Fri, 01 Oct 2004 10:38:07 -0400, Michael Stassen
[EMAIL PROTECTED] wrote:

Have you considered replication?
http://dev.mysql.com/doc/mysql/en/Replication.html
Michael

spiv007 wrote:

I want to know the best way to keep 4 mysql servers in sync.
I have 4 remote locations. I am thinking about putting a mysql server
in each location and, every hour, have the 1 db that I need to sync
synced together.
Is there a way to do it in somewhat real time, or even with a delay of
an hour or two?

--
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/[EMAIL PROTECTED]


Re: memory utilization

2004-10-01 Thread David Griffiths
We have an Opteron server with 6 gig of RAM.
The issue used to be 4 gig - the max amount of memory a 32-bit processor 
could access. With 64-bit processors, the amount of accessible memory 
has jumped into the terabyte range.

Pick a distribution that is for the AMD-64 (we use SuSE 8 Enterprise) 
and use the 64-bit binary. We used the hints inside the my.cnf for huge 
databases.

You need to apply that formula that you can find in the InnoDB section 
of the MySQL documentation:

Memory Used By MySQL = Innodb_buffer_pool_size + key_buffer_size + 
max_connections*(sort_buffer_size+read_buffer_size+binlog_cache_size) + 
max_connections*2MB

Because you are using InnoDB, you can set your key_buffer_size fairly low.
On our machine with 6-gig, we have the following settings:
sort_buffer_size = 512K
read_buffer_size = 512K
max_connections = 1400
innodb_buffer_pool_size = 3G
innodb_additional_mem_pool_size = 20M
key_buffer = 16M
To apply the formula to our server, we get
3000 meg + 16 meg + (1400 * (.5 meg + .5 meg)) + 2800 meg
Which is about 7.2 gig of memory that might be used by MySQL in a worst 
case scenario (with all 1400 connections open). We never expect to hit 
1400 connections, but we wanted to set it too high at first, and then 
shrink it down slowly.

MySQL is currently using about 3.1 gig at 168 queries per second, with 
95 to 98 percent reads. Most of our database is in RAM at any given 
time. MySQL is using about 5 percent of the two CPUs under this 
configuration. On our busiest day, when our load is 30% higher, MySQL 
uses about 10% of the CPU cycles.

You should have no problem throwing more RAM into an Opteron. One thing 
to note, we had to compile our own MySQL - we were getting segfaults 
with the default binary (something to do with fpic, I believe - I didn't 
do the actual compilation).

Hope that helps.
David
Mark Steele wrote:
Hi folks,
I have to setup some high performance servers
that will be used for MySQL databases and have
a couple questions regarding MySQL running on
Linux AMD-64 (Opteron).
We are looking at setting up these machines
with 16-64 gb of RAM, can MySQL running on Linux
handle this amount of RAM efficiently? Also
most of the tables (almost all of them) will
be using the InnoDB storage engine, any pointers
on what configuration settings we should use?
(for example on a 16 gb RAM server)
Anyone have experience with this kind of setup?
Regards,
Mark Steele
Implementation Director
CDT Inc.
Tel: (514) 842-7054
Fax: (514) 221-3395 

 


--
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/[EMAIL PROTECTED]


MySQL Database Corruption (InnoDB), according to Innodb Hot Backup

2004-09-30 Thread David Griffiths
I went to do some work on our database last night (dropping large 
indexes, which can be time consuming). I checked to ensure that the 
backup of that evening had run, but noticed that the size of the backup 
was too small compared to previous days (I'm kicking myself for not 
emailing the results of the backup to myself every night - I just have a 
job that verifies that the backup actually ran).

So I ran the backup by hand. We have 8 data files, the first 7 being 4 
gig in size, and the last being a 10-meg autoextend. This is MySQL 
4.0.20 64bit, running on a dual Opteron machine running SuSE 8 
Enterprise (64-bit). We are using ibbackup 2.0 beta (which is 64-bit for 
the Opteron).

ibbackup (the Innodb backup utility) complains on the first file.
ibbackup: Re-reading page at offset 0 3272818688 in 
/usr/local/mysql/var/ywdata1

this repeats a few hundred times
Then it dumps some ascii:
040930 11:44:14  InnoDB: Page dump in ascii and hex (16384 bytes):
len 16384; hex 55c3ee4d00030c4d00030c4c000374.
And at the bottom,
040930 11:44:14  InnoDB: Page checksum 1522485550, prior-to-4.0.14-form 
checksum 1015768137
InnoDB: stored checksum 1438903885, prior-to-4.0.14-form stored checksum 
4028531590
InnoDB: Page lsn 3 1070601164, low 4 bytes of lsn at page end 1070609127
InnoDB: Page number (if stored to page already) 199757,
InnoDB: space id (if created with >= MySQL-4.1.1 and stored already) 0
InnoDB: Page may be an index page where index id is 0 680
ibbackup: Error: page at offset 0 3272818688 in 
/usr/local/mysql/var/ywdata1 seems corrupt!

While we no longer seem to have a backup, we do have a slave (not sure 
if the corruption propagated to the slave; I know it can happen in Oracle).

I have a few questions:
1) Is InnoDB backup correct? This might be a false positive (doubt it 
though).

2) What are the risks of stopping and starting the database? There is a 
force-recovery option in innodb, which might fix the corruption. Note 
that I answered this myself. I ran a check table on one of our larger 
tables (600,000 rows) which killed the database. It came back up fine. I 
re-ran the backup - same issue, with the same page checksums, etc.

3) Anyone have any experience with this? Keep in mind that this might be 
an Opteron/MySQL-64bit issue. Or it might be a general issue in MySQL.

Thanks,
David
--
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/[EMAIL PROTECTED]


Re: huge innodb data files

2004-08-27 Thread David Griffiths
We had a similar problem (though not quite as bad).
I re-organized the datafiles (and fixed some indexes, etc) and we got a 
vast speed improvement.

I'd suggest you shutdown the database, use mysqldump to take a dump of 
the database, move the old datafiles out of the way, fix your my.cnf to 
create new datafiles of about the same size (and if you can, on 
different disks), and then reimport the dump. Note that it would be much 
faster on the import if you used the new extended insert (-e or 
--extended-insert), assuming the version of MySQL you are using supports it.
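
A sketch of that sequence with an invented database name (take the dump,
then shut the server down before touching the datafiles):

shell> mysqldump -e mydb > mydb.sql    # -e / --extended-insert: multi-row INSERTs
shell> mysqladmin -u root -p shutdown
# move the old ibdata* files aside, adjust innodb_data_file_path in my.cnf,
# restart mysqld so it creates the new datafiles, then reimport:
shell> mysql mydb < mydb.sql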

You didn't indicate the version, which might be helpful.
David

Ronan Lucio wrote:
Mayuran,
Well, I'm not a MySQL expert, but I think that a good
configuration in the my.cnf file can make it better.
Ronan
- Original Message -
From: Mayuran Yogarajah [EMAIL PROTECTED]
To: [EMAIL PROTECTED]
Sent: Thursday, August 26, 2004 5:37 PM
Subject: huge innodb data files
Our DB in production currently has 2 innodb data files,
the second one (which is marked autoextend) has now
grown past 26 gigs.  We are experiencing weird speed
problems with one of the tables.  Even though there are
no rows in this table, performing any kind of select takes
about 2 minutes to execute.  Has anyone had a similar
problem before ? What can I do to speed up queries to
this table ?
thanks,
M
--
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/[EMAIL PROTECTED]


 


--
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/[EMAIL PROTECTED]


Re: Moving a database

2004-08-06 Thread David Griffiths
Don't forget to copy the my.cnf file, and make any changes necessary 
(due to different directory/disk structures).

If you are using InnoDB, and can shut down the database, you should just 
be able to move the files in

data/database_name or var/database_name
like Mark said below (InnoDB stores files there as well).
Also, in the my.cnf, look to see if any datafiles are specified. If so, 
you need to move those files, plus the log files to the new server.

David
Mark Pittam wrote:
-Original Message-
From: David Barron [mailto:[EMAIL PROTECTED]
Sent: 06 August 2004 14:56
To: [EMAIL PROTECTED]
Subject: Moving a database
Good morning,
What's the best way to move a database and all of its tables from one
server to another?
Thanks
   


You can use the mysqldump utility to dump the database you want to move.
Then use the dump file to recreate the database in the mysql instance
running on your other server.
If you are using myisam tables and are able to shutdown your servers you
can copy all the files in the data/database_name directory into a
directory of the same name in the data directory of your new server. Be
sure to copy all the files (.MYD, .MYI, .frm)
Regards
Mark 

 


--
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/[EMAIL PROTECTED]


Re: Insert problems with InnoDB (big table)

2004-08-04 Thread David Griffiths
Also, are the indexes in place when you start your inserts? Constantly 
updating those indexes will be slow; try inserting without indexes, and 
then building the indexes.

You can also limit the size of your index file by,
1) Making sure all columns are as small as possible (ie MEDIUMINT rather 
than INT)
2) If possible, consider using partial indexes on VARCHAR or CHAR 
columns (see http://dev.mysql.com/doc/mysql/en/CREATE_INDEX.html).
3) Make sure you have enough tablespace. If your last datafile specified 
is autoextend, see if you are using it. InnoDB seems to use tablespace 
temporarily during index creation; if you don't have enough, and have an 
autoextend, it will start growing the autoextend-datafile for the index 
creation. This slows things down quite a bit. Sounds like this is not 
the case, however.
4) And, as mentioned below, turn autocommit off.

Index creation with InnoDB and large tables is very very slow. Heikki 
Tuuri has a faster-index creation on his TODO list 
(http://www.innodb.com/todo.php) but it's marked as Long Term
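
For reference, a minimal sketch of points 2 and 4 above, reusing the
smalltest table quoted below (the index name is invented):

mysql> CREATE INDEX name_prefix_idx ON smalltest (name(10));  -- index only the first 10 chars
mysql> SET AUTOCOMMIT = 0;
-- ... run the bulk INSERTs here ...
mysql> COMMIT;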

David
[EMAIL PROTECTED] wrote:
Are you disabling autocommit before doing the inserts? And committing
after all inserts are complete? 

-Original Message-
From: Luc Charland [mailto:[EMAIL PROTECTED] 
Sent: Tuesday, August 03, 2004 7:54 PM
To: [EMAIL PROTECTED]
Subject: Insert problems with InnoDB (big table)

We are evaluating the replacement of a Sybase database with MySQL. The
databases are 60+GB, containing more than 100 tables.
Since we need transactions, that implies InnoDB. We were happy with the
early results, but we hit a major roadblock when trying to import the
biggest table (20+GB, with 4 indexes).
We have reproduced the problem with a simpler table on many different
servers and MySQL versions (4.X).
At first, we easily insert 1600+ lines per second. As the number of
lines grows, the performance deteriorates (which I can understand), but
it eventually gets so slow that the import would take weeks.
Doing a vmstat on the server shows that after a certain limit is reached
(index bigger than the total mem ?), mysqld starts reading as much as
writing, and the CPU usage goes down as the I/O eventually reach the
maximum for the server.
If you wait long enough, you get less than 50 lines per second (which is
30+ times slower than the first few million inserts).
We have done the same tests on Sybase and another database on the same
machines and have not seen this behavior, so it is not hardware related.
We have done the same import in a MyISAM table and have not seen any
slowdown (the whole data was imported very fast, even if we had to wait
a very long time --5+ hours-- for the index to rebuild after).
We have tried to transform the MyISAM table into a InnoDB (same problem
occurs). We have tried to import from the MyISAM table into an empty
InnoDB, same problem occurs.
SETUP:
We have of course changed the following
innodb_buffer_pool_size= (50% to 80% of total ram)
innodb_log_file_size=(20% to 40% of total ram) we have tried different
innodb_flush_method we have tried innodb_flush_log_at_trx_commit (0, 1)
we have tried ibdata1:1G:autoextend, and also make it big enough so that
all the data will fit without autoextending.
we have tried creating the indexes after instead of before the inserts,
but like the documentation says, it is not better.
Is there an upper limit to the size of the indexes of a single table in
InnoDB?
Anybody else has seen this kind of slowdown for big InnoDB tables?
Here is a small table that reproduces the problem (if you make 5 to 15
million inserts). We wrote a few programs (one in C++, one in Python)
that generates random data and insert into the database.
__
create table smalltest (
  id int primary key,
  name varchar(80),
  model char(20),
  description varchar(255),
  lastupdate date,
  price decimal(8,2),
  cost decimal(8,2)
) type=innodb;
create unique index smallcomplex on smalltest (model, id, name);
create index smallprice on smalltest (price);
create index smallcost on smalltest (cost);
create index smallname on smalltest (name);
__
Thanks for any help.
Luc Charland
--
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:
http://lists.mysql.com/[EMAIL PROTECTED]

 


--
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/[EMAIL PROTECTED]


Re: InnoDB TableSpace Question

2004-08-03 Thread David Griffiths
Oracle cannot shrink datafiles (same idea as InnoDB datafiles) when data 
is deleted either.

David
Marc Slemko wrote:
On Tue, 3 Aug 2004 12:42:03 -0400 , David Seltzer [EMAIL PROTECTED] wrote:
 

Thanks Marc,
Is there really no way to reclaim unused space in an InnoDB table space? If
not, why is this not considered a tremendous limitation?
   

Some do consider it a tremendous limitation.  It all depends on how it
is being used.
Oh, and one thing I forgot... in newer 4.1 versions, if you set things
up so each table has its own file with innodb_file_per_table, then I
think if you do an optimize table it will end up shrinking the file
for that table since it will recreate it.  However that really is just
a workaround, and there are a lot of disadvantages to that method ...
especially the fact that free space is now per table instead of per
tablespace.
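
(A sketch of that 4.1 workaround; the table name is invented:)

# my.cnf
[mysqld]
innodb_file_per_table

mysql> OPTIMIZE TABLE some_big_table;   -- recreates the table, shrinking its own .ibd file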
 


--
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/[EMAIL PROTECTED]


Re: Replication + InnoDB = badness

2004-08-03 Thread David Griffiths
Could it be a network bandwidth issue? Remember, all that data needs to 
be transmitted across to the slave. If you are on a 10-megabit network, 
it could be the cause. Remember, Ethernet is not exactly an efficient 
protocol, and efficiency drops as network traffic increases.

A second machine might make it worse, not better.
David
Jon Drukman wrote:
Also even after we re-converted all the slave's Inno tables back to
MyISAM it *still* lagged out.  Only after I disabled the Inno engine
entirely did the problem abate.
Any ideas why?  Does InnoDB use resources even if there are no
active tables using the engine?

This is most confusing.  You're not using InnoDB *at all* and it was
slowing down the slave?
What InnoDB options had you set in my.cnf anyway?

[mysqld]
(replication commands omitted)
set-variable= query_cache_size=512M
set-variable= key_buffer=512M
set-variable= max_allowed_packet=4M
set-variable= table_cache=64
set-variable= sort_buffer=4M
set-variable= record_buffer=4M
set-variable= thread_cache=8
set-variable= tmp_table_size=128M
set-variable= thread_concurrency=4
set-variable= myisam_sort_buffer_size=128M
set-variable= max_connections=1800
set-variable= max_connect_errors=10
set-variable= wait_timeout=30
set-variable= max_binlog_size=5
set-variable= long_query_time=1
#innodb_data_home_dir = /var/opt/mysql/innodb
#innodb_log_group_home_dir = /var/opt/mysql/innodb
log-error=db3-log
#log-slow-queries
skip-innodb
i spoke too soon - the slave still lags behind the master but the problem is
not nearly as bad as it was with InnoDB enabled.  it seems like the combined
weight of replicating and serving tons of selects causes it to fall behind.
if we disable selects for a few seconds, it catches up again.  we need to
add a second slave.  (i've ordered two more just to be safe.)
mysql needs synchronous replication.  we're going to eval the EMIC
clustering product in the next few weeks.  i hope it works.
-jsd-
 


--
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/[EMAIL PROTECTED]


Re: Optimizer Index Weirdness

2004-07-31 Thread David Griffiths
Really? I had no idea. I am pretty sure that only Day will have a range. 
Our index is currently (yearmonth, day, stem_base), so I need to drop 
and recreate it.

I think part of the slowness is the size of the table as well. We have 
data going back years in there. I am thinking about breaking the table 
up by yearmonth (ie all data for 200407 goes into a table by that name).

Most people obviously query for recent data, so most of the time just a 
single table would be in use. This would mean the indexes would be more 
efficient, not having to sort through 38 million rows that without a 
doubt do not have data that the query requires. If someone requests data 
that spans a few months, a UNION would do the trick.
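
A sketch of how a cross-month query might look under that scheme (the
per-month table names and the stem value are placeholders):

mysql> SELECT * FROM traffic_boats_200406 WHERE stem_base = '...' AND day >= 7
    -> UNION ALL
    -> SELECT * FROM traffic_boats_200407 WHERE stem_base = '...' AND day <= 27;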

Thanks for the reply. BTW, where did you come across how MySQL uses 
indexes; this is pretty detailed info, and it would be great if it was 
documented somewhere.

David
Michael Stassen wrote:
Mysql uses multiple-column indexes from left to right.  
Multiple-column indexes are most effective when the column with a 
range criteria comes as far to the right as possible.  Think of the 
index as sorting your data into a book where the first column is the 
chapter, the second column is the page, and the third column gives the 
lines on the page.  In your sample query, you have an exact stem_base 
in mind, an exact yearmonth in mind, but a range of days.  With an 
index on (stem_base, yearmonth, day), you would turn to the 
stem_base='' chapter, then the yearmonth=200407 page, then read 
the lines for day 07 to 27.  Similarly, this would also work with an 
index on (yearmonth, stem_base, day).  With an index on (day, 
yearmonth, stem_base), however, you have to look at each of the day 
chapters from 7 to 27, find the stem_base page in each of those 
chapters, then find the yearmonth line on each of those pages.  That 
will work, but it's relatively complicated.

So, I would expect either an index on (stem_base, yearmonth, day) or 
an index on (yearmonth, stem_base, day) to be better than your current 
indexes starting with day.  Which one should you choose?  I expect 
both should work equally well for the sample query you gave, but since 
a 3-column index can be used as an index on the 1st column (as well as 
an index on the first 2 columns), the choice could matter for other 
queries.  That is, an index on (stem_base, yearmonth, day) could be 
used to select rows for

  SELECT * FROM traffic_boats WHERE stem_base = '';
and an index on (yearmonth, stem_base, day) could be used to select 
rows for

  SELECT * FROM traffic_boats WHERE yearmonth = 200407;
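
(For concreteness, a sketch of the second suggestion; the index name is
invented:)

mysql> CREATE INDEX ym_stem_day_idx ON traffic_boats (yearmonth, stem_base, day);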
Michael
David Griffiths wrote:
We have a table with 40 million rows. It has statistics on traffic 
from our website. Logs are processed once a night, and the data from 
those logs are added.

Our table (traffic_boats, InnoDB) has three columns of interest:
day INT
yearmonth INT
stem_base VARCHAR(100)
There is an index on day, an index on yearmonth, an index on 
stem_base, an index on (day, yearmonth), an index on (day, yearmonth 
and stem_base). I added the last two today to try to fix the 
performance issues we are having.

A typical query would look like,
SELECT * FROM traffic_boats WHERE stem_base = '' AND yearmonth = 
200407 AND day >= 07 AND day <= 27;

An explain-plan shows that the optimizer is picking the index on 
stem_base. It *should* be picking the composite index on (day, 
yearmonth and stembase). The greater-than-less-than is throwing it 
off. I can add a USE INDEX to force it to use the index I want it 
to, but that's a little hokey (and it gives me flashbacks to the days 
that I managed one of those commercial-RDBMS where tuning was a 
nightmare).

I've tried analyze table and optimize table (it's InnoDB) without 
luck.

What's really weird is that the optimizer comes up with a bad count of 
rows to be examined.

If I let the optimizer pick the index,
mysql> explain SELECT * FROM traffic_boats WHERE stem_base = ''
AND yearmonth = 200407 AND day >= 07 AND day <= 27;
+---------------+------+-----------------------------------------------------+----------+---------+-------+-------+-------------+
| table         | type | possible_keys                                       | key      | key_len | ref   | rows  | Extra       |
+---------------+------+-----------------------------------------------------+----------+---------+-------+-------+-------------+
| traffic_boats | ref  | idx_yearmonth,idx_day,idx_stem,ymd_stem_idx,ymd_idx | idx_stem | 100     | const | 42600 | Using where |
+---------------+------+-----------------------------------------------------+----------+---------+-------+-------+-------------+
1 row in set (0.02 sec)
It thinks it needs to examine 42600 rows.
If I force the correct index,
mysql> explain SELECT * FROM traffic_boats USE INDEX (ymd_stem_idx)
WHERE stem_base = '' AND yearmonth = 200407 AND day >= 07 AND day
<= 27

Optimizer Index Weirdness

2004-07-30 Thread David Griffiths
We have a table with 40 million rows. It has statistics on traffic from 
our website. Logs are processed once a night, and the data from those 
logs are added.

Our table (traffic_boats, InnoDB) has three columns of interest:
day INT
yearmonth INT
stem_base VARCHAR(100)
There is an index on day, an index on yearmonth, an index on stem_base, 
an index on (day, yearmonth), an index on (day, yearmonth and 
stem_base). I added the last two today to try to fix the performance 
issues we are having.

A typical query would look like,
SELECT * FROM traffic_boats WHERE stem_base = '' AND yearmonth = 
200407 AND day >= 07 AND day <= 27;

An explain-plan shows that the optimizer is picking the index on 
stem_base. It *should* be picking the composite index on (day, yearmonth 
and stembase). The greater-than-less-than is throwing it off. I can add 
a USE INDEX to force it to use the index I want it to, but that's a 
little hokey (and it gives me flashbacks to the days that I managed one 
of those commercial-RDBMS where tuning was a nightmare).

I've tried analyze table and optimize table (it's InnoDB) without luck.
What's really weird is that the optimizer comes up with a bad count of rows 
to be examined.

If I let the optimizer pick the index,
mysql> explain SELECT * FROM traffic_boats WHERE stem_base = '' AND
yearmonth = 200407 AND day >= 07 AND day <= 27;
+---------------+------+-----------------------------------------------------+----------+---------+-------+-------+-------------+
| table         | type | possible_keys                                       | key      | key_len | ref   | rows  | Extra       |
+---------------+------+-----------------------------------------------------+----------+---------+-------+-------+-------------+
| traffic_boats | ref  | idx_yearmonth,idx_day,idx_stem,ymd_stem_idx,ymd_idx | idx_stem | 100     | const | 42600 | Using where |
+---------------+------+-----------------------------------------------------+----------+---------+-------+-------+-------------+
1 row in set (0.02 sec)

It thinks it needs to examine 42600 rows.
If I force the correct index,
mysql> explain SELECT * FROM traffic_boats USE INDEX (ymd_stem_idx)
WHERE stem_base = '' AND yearmonth = 200407 AND day >= 07 AND day <= 27;
+---------------+-------+---------------+--------------+---------+------+---------+-------------+
| table         | type  | possible_keys | key          | key_len | ref  | rows    | Extra       |
+---------------+-------+---------------+--------------+---------+------+---------+-------------+
| traffic_boats | range | ymd_stem_idx  | ymd_stem_idx | 108     | NULL | 4019400 | Using where |
+---------------+-------+---------------+--------------+---------+------+---------+-------------+
1 row in set (0.00 sec)

It thinks it needs to examine 4,019,400 rows.
If I ran this query without the USE INDEX it would take a few minutes. 
If I force the index, it takes 20 seconds. You would think that using 
stem_base, day and yearmonth would be much more selective than using 
just stem_base.

Anyone got some insight into this?
David
--
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/[EMAIL PROTECTED]


Re: RAM-usage and hardware upgrade 10gb RAM

2004-07-20 Thread David Griffiths
We just put a new dual-Opteron server into our production environment. 
We ordered a Megaraid SCSI card and five 10k drives, and a 3Ware 
Escalade SATA card with six 7200 RPM drives (Maxtor) to see which ones 
were best.

Our network guy did a bunch of benchmarking on the drives and found that 
SCSI-RAID5 was a bit faster than SATA-RAID0+1.

The SATA was significantly cheaper (the 3Ware card was the same price as 
the Megaraid card, however). You might be able to tie a 10K SCSI rig if 
you went with the Western Digital Raptor drives.

We ended up putting the SATA drives in production - some bug in the SCSI 
driver kept crashing MySQL on index-creation, etc.

High Performance MySQL mentions that SCSI 15K drives are worth the extra 
money.

Fast hard drives are important, but so is lots of RAM (which is where 
the Opteron shines). In fact, all the benchmarks I've seen show that the 
Opteron/Athlon architecture beats Intel processors by a 30-odd percent 
margin if memory serves (note that for some reason, most benchmarks I've 
seen were on 3.23, which is outdated and not overly useful).

One of our websites serves up 2 million distinct pages per day; the 
original coders of the site did something dumb and opened a new connection 
to the database for most of those pages (probably about 1.8 million). 
Even with that additional load, our Opteron server has an average CPU 
load of about 10%.

David
Jan Kirchhoff wrote:
Egor Egorov wrote:
Money is not really an issue but of course we don't want to waste it
for scsi-hardware if we can reach almost the same speed with
hardware sata-raids.

'Almost' is a key word. Some SCSI disks are working at 15k RPM, which
will give you a HUGE MySQL performance growth compared to 10k disks.
AFAIR, there are no 15k RPM SATA disks yet.

But shouldn't a sata-based RAID10 with 8 discs do the job as well? Writes 
would be spread on 4 discs...
Has anybody experience with those external SCSI-to-SATA RAIDs?
A SCSI-solution would cost twice as much, but would it really speed 
things up compared to a massive use of parallel (raid0) sata-discs?
I know disc i/o is the bottleneck in our case, of course we want the 
fastest disc/raid-system we can possibly get for our money.
Is our thinking too simple or shouldn't it be possible to reach the 
speed of fast scsi-discs by simply taking 2-3 fast sata-discs in a 
hardware raid0?
Our goal is a raid10, so reading should be even faster.

Money is not really an issue, but of course we don't want to waste it.
We'd like to stay with x86 because all our hardware is intel/amd
and all our servers are running debian-linux. Can we expect better
performance or problems using kernel 2.6.x?

You can expect better performance on kernel 2.6.x of course,
especially on multiple requests.

Has anybody experience with RAM-usage and cpu-architecture (please 
have a look at my earlier post)?

thanks
Jan


--
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/[EMAIL PROTECTED]


Re: 64 Bit Support

2004-07-06 Thread David Griffiths
Yah - Jeremy Z. mentioned that a few days ago as well. Guess I have it 
confused with Posgres (we don't run MySQL on Windows, so I don't pay 
much attention to it except the performance issues most people seem to 
mention, which I guess made me assume Cygwin).

David
Egor Egorov wrote:
David Griffiths [EMAIL PROTECTED] wrote:
 

Sorry - didn't read your email closely enough. The Windows version is 
not native - runs under Cygwin. Is there a version of Cygwin for the 
Itanium 2?
   

Wrong. It is a native application. :) 

No, there are no Itanium binaries for Windows.


 


--
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/[EMAIL PROTECTED]


Re: 64 Bit Support

2004-07-02 Thread David Griffiths
The download page @ MySQL.com 
(http://dev.mysql.com/downloads/mysql/4.0.html) shows a wack of Itanium 
binaries, so I would guess it does.

David
Nawal Lodha wrote:
Can someone tell me if MySQL 4.1.3-beta supports 64 Bit Itanium 2
machines with Windows 2003?
Thanks,
Nawal Lodha.
 


--
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/[EMAIL PROTECTED]


Re: 64 Bit Support

2004-07-02 Thread David Griffiths
Sorry - didn't read your email closely enough. The Windows version is 
not native - runs under Cygwin. Is there a version of Cygwin for the 
Itanium 2?

The Windows versions are quite a bit slower than the Linux/Unix 
versions, so you might not get the benefits from that fast 64-bit processor.

David
Nawal Lodha wrote:
Thanks David. But I think the Itanium binaries are available only for
HP-UX or Linux but not for Windows 2003. 

-Original Message-
From: David Griffiths [mailto:[EMAIL PROTECTED] 
Sent: Friday, July 02, 2004 12:10 PM
To: [EMAIL PROTECTED]
Subject: Re: 64 Bit Support

The download page @ MySQL.com
(http://dev.mysql.com/downloads/mysql/4.0.html) shows a wack of Itanium
binaries, so I would guess it does.
David
Nawal Lodha wrote:
 

Can someone tell me if MySQL 4.1.3-beta supports 64 Bit Itanium 2 
machines with Windows 2003?

Thanks,
Nawal Lodha.

   


--
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/[EMAIL PROTECTED]

 


--
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/[EMAIL PROTECTED]


Re: Index problem

2004-06-30 Thread David Griffiths
What's the definition of the table? IE are you indexing an INT, VARCHAR, 
etc?

What's the definition of the index? Is it unique, composite, etc?
What's the storage engine in use? InnoDB? MyISAM?
Can you show the relevant parts of your my.cnf file?
What operating system are you using?
David
Oropeza Querejeta, Alejandro wrote:
Hi, I'm trying to create an index on a table with 199 million records.
The problem is that it is taking too long (8 hours and it is not yet
finished).
does anyone have any idea?
the server is 2Xeon 2.8 gigs with 6 Gb of ram.
Best regards
Alejandro
 


--
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/[EMAIL PROTECTED]


Re: Packet Errors

2004-06-30 Thread David Griffiths
These errors could mean a connection timed out, or a mysql-client didn't 
properly close the connection, or possibly a network error.

I went to mysql.com and looked in the searchable docs:
http://dev.mysql.com/doc/mysql/en/Communication_errors.html

If Aborted connections messages appear in the error log, the cause can 
be any of the following:

   * The client program did not call mysql_close() before exiting.
   * The client had been sleeping more than wait_timeout or
     interactive_timeout seconds without issuing any requests to the
     server. See section 5.2.3 Server System Variables
     http://dev.mysql.com/doc/mysql/en/Server_system_variables.html.
   * The client program ended abruptly in the middle of a data transfer.
When any of these things happen, the server increments the 
Aborted_clients status variable.
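
If timeouts are the cause, the relevant variables can be inspected, and on
servers new enough to support SET GLOBAL (4.0.3+) raised at runtime; the
value below is just the compiled-in default:

mysql> SHOW VARIABLES LIKE '%timeout%';
mysql> SET GLOBAL wait_timeout = 28800;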


Those searchable docs are very handy for looking up error codes, etc.
David.
Mike Blezien wrote:
Hello,
I recently noticed this error in our mysql error log file:

Aborted connection 5439 to db: 'database_name' user: 'someuser' host:
`localhost' (Got an error reading communication packets)
---
we're running MySQL 4.0.20, for pc-linux (i686) on a RH7.3, build from 
the standard RPM's

it seems this just standard recently from what I can see in the logs. 
Is there something we can do to eliminate this or prevent it, if 
possible. ??

TIA
MikemickaloBlezien
=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=
Thunder Rain Internet Publishing
Providing Internet Solutions that work!
http://thunder-rain.com
=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=


--
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/[EMAIL PROTECTED]


Re: Index problem

2004-06-30 Thread David Griffiths
So the table is,
folio int
vacante int
folio2 char(10)
and the table type is MyISAM
create index some_index on table(folio2);
and the table has about 200,000,000 rows.
MyISAM creates a file per table for table data, and for index data. You 
can find the files created underneath the mysql install directory in a 
directory with the database name (mysql/var if you are using 
source-compiled and mysql/data if you are using pre-compiled binaries). 
To quote the docs,

Each MyISAM table is stored on disk in three files. The files have 
names that begin with the table name and have an extension to indicate 
the file type. An '.frm' file stores the table definition. The data file 
has an '.MYD' (MYData) extension. The index file has an '.MYI' (MYIndex) 
extension.

What's the maximum file size on your system? With 200 million rows, the 
data file is probably larger than 2 gigabytes, so that's something to check.

You might be exceeding the capabilities of the MyISAM storage engine, or 
the version of MySQL you are using (which version *are* you using? 3.23 
or a 4.0.x, or 4.1?).

Can you reduce the size of the index by creating a partial index, like
create index some_index on table(folio2(5));
to only index part of the data?
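You can also check how big the table's data and index files are without 
leaving the client; SHOW TABLE STATUS reports both (a sketch with a 
hypothetical table name - watch the Data_length and Index_length columns):

SHOW TABLE STATUS LIKE 'the_big_table';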
David
Oropeza Querejeta, Alejandro wrote
Below are the answers
Best Regards
-Original Message-
From: David Griffiths [mailto:[EMAIL PROTECTED] 
Sent: Wednesday, June 30, 2004 01:29 PM
To: [EMAIL PROTECTED]
Subject: Re: Index problem

What's the definition of the table? I.e., are you indexing an INT, a VARCHAR, 
etc.?
3 fields
Folio, Vacante, int
Folio2 char(10)

What's the definition of the index? Is it unique, composite, etc?
Nonunique, single column (folio2)
What's the storage engine in use? InnoDB? MyISAM?
Myisam
Can you show the relevant parts of your my.cnf file?
I have the standard My-huge.cnf
What operating system are you using?
Redhat Linux 7.3 
David

Oropeza Querejeta, Alejandro wrote:
 

Hi, I'm trying to create an index on a table with 199 million records. 
The problem is that it is taking too long (8 hours and it is not yet 
finished).

Does anyone have any idea?
The server has two 2.8 GHz Xeons and 6 GB of RAM.
Best regards
Alejandro

   


 


--
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/[EMAIL PROTECTED]


Re: Job announcement?

2004-06-28 Thread David Griffiths
I've seen job announcements posted on other lists (and I think this one 
as well). I think it's relevant, and shouldn't offend anyone.

David
Michael Halligan wrote:
Greetings.
My company has an immediate opening in SF for a Sr. Mysql/DB
architect. I was wondering if this would be the appropriate list to
post such an announcement?
Michael
 


--
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/[EMAIL PROTECTED]



Re: MySQL Crashing On Index Creation/Select count(*) statement - InnoDB tables

2004-06-25 Thread David Griffiths
Frank,
We used the 64-bit source to compile 4.0.20, and we used the 32-bit 
binaries.

The problem was tracked down at about 1am - it was the kernel (or the 
SCSI drivers). We put a 3Ware SATA Raid-5 card in, and all the crashes 
went away.

There are 64-bit binaries, but we had some problems with them (the guy 
that initially tried them can't remember the exact issue). You need to 
add a -fPIC flag to get the source to compile for the Opteron.

The PIC flag is for position-independent code. Google it with Opteron 
and you'll see a bunch of posts on it.

David

Dr. Frank Ullrich wrote:
David,
David Griffiths wrote:
We are in the process of setting up a new MySQL server. It's a 
dual-Opteron (Tyan Thunder K8S motherboard) with 6 gig of DDR333 RAM 
(registered) and an LSI SCSI card with 6 SCSI drives (5 in a RAID-5 
array, with one hot-spare) running SuSE Enterprise 8.1 (64-bit).

I loaded all our data (about 2 gig) into the database back on 
Tuesday, and created the indexes without issue, as a test to see how 
long it would take.

Tonight, we were going to cut over to this new machine. I was setting 
up data as a test run, and started coming across Database page 
corruption on disk or a failed file read of page errors.

At first, we were using MySQL 4.0.20 64-bit, compiled from source by 
us (the -fPIC option needs to be included in the Makefile, and for 
some reason isn't in the binaries - also, no release notes for the AMD64 

So you can't use the binaries that MySQL provides and therefore you 
didn't test them? Or did you?
Why is this -fPIC option important?
I'm curious because we have a dual opteron system too and I wanted to 
install the 64bit binary (4.0.20-standard) from the MySQL web site.

Regards,
   Frank.

platform at http://dev.mysql.com/doc/mysql/en/Linux.html).
I could consistently crash the database by creating an index on a 
column (a varchar(50)). I could also crash it doing a SELECT 
COUNT(*)... from a table with 3 million rows. Unfortunately, I did not 
save the crash log.

We rolled back to 4.0.18, also 64-bit. Exactly the same issue. Here's 
the output.

- 

InnoDB: Database page corruption on disk or a failed
InnoDB: file read of page 12244.
InnoDB: You may have to recover from a backup.
040624 17:21:59  InnoDB: Page dump in ascii and hex (16384 bytes):
...
040624 17:21:59  InnoDB: Page checksum 1484130208, 
prior-to-4.0.14-form checksum 1108511089
InnoDB: stored checksum 2958040096, prior-to-4.0.14-form stored 
checksum 1108511089
InnoDB: Page lsn 0 204702464, low 4 bytes of lsn at page end 204702464
InnoDB: Page may be an index page where index id is 0 24
InnoDB: and table yw/boats2 index PRIMARY
InnoDB: Database page corruption on disk or a failed
InnoDB: file read of page 12244.
InnoDB: You may have to recover from a backup.
InnoDB: It is also possible that your operating
InnoDB: system has corrupted its own file cache
InnoDB: and rebooting your computer removes the
InnoDB: error.
InnoDB: If the corrupt page is an index page
InnoDB: you can also try to fix the corruption
InnoDB: by dumping, dropping, and reimporting
InnoDB: the corrupt table. You can use CHECK
InnoDB: TABLE to scan your table for corruption.
InnoDB: Look also at section 6.1 of
InnoDB: http://www.innodb.com/ibman.html about
InnoDB: forcing recovery.
InnoDB: Ending processing because of a corrupt database page.

- 

InnoDB is robust enough to recover, fortunately.
Then we thought it might be an issue with the 64-bit version, so we 
installed the 32-bit binary version (we didn't compile it) of 4.0.20.

I managed to make it crash in exactly the same way - adding an index 
to a table, dropping an index, or selecting a count from the same 
large table.
- 

040624 20:29:07  mysqld restarted
040624 20:29:08  InnoDB: Database was not shut down normally.
InnoDB: Starting recovery from log files...
InnoDB: Starting log scan based on checkpoint at
InnoDB: log sequence number 0 3576655719
InnoDB: Doing recovery: scanned up to log sequence number 0 3576655719
040624 20:29:08  InnoDB: Flushing modified pages from the buffer pool...
040624 20:29:09  InnoDB: Started
/usr/local/mysql/bin/mysqld: ready for connections.
Version: '4.0.18-standard-log'  socket: '/tmp/mysql.sock'  port: 3306
InnoDB: Database page corruption on disk or a failed
InnoDB: file read of page 23235.
InnoDB: You may have to recover from a backup.
040624 20:29:38  InnoDB: Page dump in ascii and hex (16384 bytes):
040624 20:29:38  InnoDB: Page checksum 1229875638, 
prior-to-4.0.14-form checksum 4263044155
InnoDB: stored checksum 2727822450, prior-to-4.0.14-form stored 
checksum 4263044155
InnoDB: Page lsn 0 748566710, low 4 bytes of lsn at page end 748566710

MySQL Crashing On Index Creation/Select count(*) statement - InnoDB tables

2004-06-24 Thread David Griffiths
We are in the process of setting up a new MySQL server. It's a 
dual-Opteron (Tyan Thunder K8S motherboard) with 6 gig of DDR333 RAM 
(registered) and an LSI SCSI card with 6 SCSI drives (5 in a RAID-5 
array, with one hot-spare) running SuSE Enterprise 8.1 (64-bit).

I loaded all our data (about 2 gig) into the database back on Tuesday, 
and created the indexes without issue, as a test to see how long it 
would take.

Tonight, we were going to cut over to this new machine. I was setting up 
data as a test run, and started coming across Database page corruption 
on disk or a failed file read of page errors.

At first, we were using MySQL 4.0.20 64-bit, compiled from source by us 
(the -fPIC option needs to be included in the Makefile, and for some 
reason isn't in the binaries - also, no release notes for the AMD64 
platform at http://dev.mysql.com/doc/mysql/en/Linux.html).

I could consistently crash the database by creating an index on a column 
(a varchar(50)). I could also crash it doing a SELECT COUNT(*)... from 
a table with 3 million rows. Unfortunately, I did not save the crash log.

We rolled back to 4.0.18, also 64-bit. Exactly the same issue. Here's 
the output.

-
InnoDB: Database page corruption on disk or a failed
InnoDB: file read of page 12244.
InnoDB: You may have to recover from a backup.
040624 17:21:59  InnoDB: Page dump in ascii and hex (16384 bytes):
...
040624 17:21:59  InnoDB: Page checksum 1484130208, prior-to-4.0.14-form 
checksum 1108511089
InnoDB: stored checksum 2958040096, prior-to-4.0.14-form stored checksum 
1108511089
InnoDB: Page lsn 0 204702464, low 4 bytes of lsn at page end 204702464
InnoDB: Page may be an index page where index id is 0 24
InnoDB: and table yw/boats2 index PRIMARY
InnoDB: Database page corruption on disk or a failed
InnoDB: file read of page 12244.
InnoDB: You may have to recover from a backup.
InnoDB: It is also possible that your operating
InnoDB: system has corrupted its own file cache
InnoDB: and rebooting your computer removes the
InnoDB: error.
InnoDB: If the corrupt page is an index page
InnoDB: you can also try to fix the corruption
InnoDB: by dumping, dropping, and reimporting
InnoDB: the corrupt table. You can use CHECK
InnoDB: TABLE to scan your table for corruption.
InnoDB: Look also at section 6.1 of
InnoDB: http://www.innodb.com/ibman.html about
InnoDB: forcing recovery.
InnoDB: Ending processing because of a corrupt database page.

-
InnoDB is robust enough to recover, fortunately.
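As the log itself suggests, scanning a suspect table for corruption is a 
single statement (a sketch, using the table name from the log above):

CHECK TABLE yw.boats2;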
Then we thought it might be an issue with the 64-bit version, so we 
installed the 32-bit binary version (we didn't compile it) of 4.0.20.

I managed to make it crash in exactly the same way - adding an index to 
a table, dropping an index, or selecting a count from the same large table.
-
040624 20:29:07  mysqld restarted
040624 20:29:08  InnoDB: Database was not shut down normally.
InnoDB: Starting recovery from log files...
InnoDB: Starting log scan based on checkpoint at
InnoDB: log sequence number 0 3576655719
InnoDB: Doing recovery: scanned up to log sequence number 0 3576655719
040624 20:29:08  InnoDB: Flushing modified pages from the buffer pool...
040624 20:29:09  InnoDB: Started
/usr/local/mysql/bin/mysqld: ready for connections.
Version: '4.0.18-standard-log'  socket: '/tmp/mysql.sock'  port: 3306
InnoDB: Database page corruption on disk or a failed
InnoDB: file read of page 23235.
InnoDB: You may have to recover from a backup.
040624 20:29:38  InnoDB: Page dump in ascii and hex (16384 bytes):

040624 20:29:38  InnoDB: Page checksum 1229875638, prior-to-4.0.14-form 
checksum 4263044155
InnoDB: stored checksum 2727822450, prior-to-4.0.14-form stored checksum 
4263044155
InnoDB: Page lsn 0 748566710, low 4 bytes of lsn at page end 748566710
InnoDB: Page may be an index page where index id is 0 15
InnoDB: and table yw/boats_clobs2 index PRIMARY
InnoDB: Database page corruption on disk or a failed
InnoDB: file read of page 23235.
InnoDB: You may have to recover from a backup.
InnoDB: It is also possible that your operating
InnoDB: system has corrupted its own file cache
InnoDB: and rebooting your computer removes the
InnoDB: error.
InnoDB: If the corrupt page is an index page
InnoDB: you can also try to fix the corruption
InnoDB: by dumping, dropping, and reimporting
InnoDB: the corrupt table. You can use CHECK
InnoDB: TABLE to scan your table for corruption.
InnoDB: Look also at section 6.1 of
InnoDB: http://www.innodb.com/ibman.html about
InnoDB: forcing recovery.
InnoDB: Ending processing because of a corrupt database page.
-

I am 

Re: Importing data, indexes, and analyzing tables.

2004-06-17 Thread David Griffiths
After a day of looking, I answered my own questions, and I'll post those 
answers here in case anyone else was interested in the answer.

First, LOAD DATA + ALTER TABLE ADD INDEX... seems to be slower than 
a mysqldump + ANALYZE TABLE. Of course, you don't always have a 
mysql dump file.

After importing a mysql dump file, it's wise to analyze all tables 
imported. I found that SHOW INDEX FROM table; would show the cardinality 
(the number of unique values) of an index. The more unique the data 
being indexed, the faster the index is. Another way to put it, if you 
set up an equation like,

(# of rows in table) divided by (cardinality of an index)
you would want a number that is as close to 1 as possible (there will 
never be more unique values in a table than there are rows). The lower 
that ratio is (the closer to 1), the more efficient the index becomes.

Here's an example of the cardinality after a dump but before an ANALYZE 
TABLE, and after an ANALYZE TABLE (these two indexes are on the same 
table) from our database; there are 502055 rows in this table. Index 
names have been changed to protect the innocent:

Before the ANALYZE,
index1 has a cardinality of 81214
index2 has a cardinality of 81214
After the ANALYZE
index1 has a cardinality of 97192
index2 has a cardinality of 20248
If no analyze was done, and someone did an equi-join on the column 
indexed by index1 and a second join on the column indexed by index2, the 
optimizer would use some other criteria for selecting an index other 
than the cardinality (perhaps the data-type of the column, the 
alphabetical order of the column, etc - anyone know what that criteria 
would be?).

After the analyze, you can see that the cardinality of index1 has gone 
up, while index2 has gone down. Index1 is now a far better choice for 
the optimizer to use (remember, MySQL can only use one index per table 
per query, so it has to pick the most efficient one), and should result 
in faster results.
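A minimal sketch of that workflow, with a hypothetical table name:

ANALYZE TABLE orders;     -- rebuild the key distribution statistics
SHOW INDEX FROM orders;   -- then check the Cardinality column

Using the numbers above: 502055 rows divided by index1's cardinality of 
97192 is roughly 5.2 rows per distinct key value, while index2, at 
20248, is about 24.8 - so index1 is the more selective of the two.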

Hope that provides some insight for anyone interested.
David.

David Griffiths wrote:
We have a somewhat large database, with a snapshot of the data that we 
import into the database. Normally, we create a database, create the 
tables, import the data with LOAD DATA, add the indexes, and then 
the foreign keys (we are using InnoDB, 4.0.18). We call this the 
load-data-method.

Sometimes we dump the data with mysqldump, create a new database, and 
pipe the dump file into the new database. We call this the 
mysqldump-method.

I was wondering about the time it takes to import either one. The 
mysqldump-method takes about 45 minutes. Tables, indexes and foreign 
keys are created. The load-data-method takes two to three hours.

Using LOAD DATA is supposed to be the fastest way to get data into 
MySQL. We use the ALTER TABLE ADD INDEX (), ADD INDEX() ... method, 
which is supposed to be the fastest way to add indexes to a table.

So I'm curious as to why it takes so much longer. I suspect that the 
mysqldump-method needs an ANALYZE TABLE ... (for each table in the 
database) at the end of it, to rebuild the statistics for the tables, 
and if that was done, then the load-data-method would be comparatively 
faster. Is this correct, or am I off-base with this? Are the 
statistics for the indexes in InnoDB correct or incorrect after a 
mysqldump file is imported into the db?

This brings my next question - as I was looking around, trying to get 
some insight, I was reading the mysql documentation on SHOW INDEX 
FROM tablename, which is supposed to show key-distribution across a 
table (http://dev.mysql.com/doc/mysql/en/ANALYZE_TABLE.html).

I've run it, and I'm a bit confused about inferring anything from the 
results of the SHOW INDEX... statement 
(http://dev.mysql.com/doc/mysql/en/SHOW_INDEX.html). The most 
interesting column in the result set seems to be the cardinality 
column, which stores the number of unique values in an index, but on 
large tables, it's tough to know if that's accurate if the index is 
not unique. Is there any way to read the results of a SHOW INDEX to be 
able to figure out if a table needs analyzing?

MySQL (Paul DuBois) was written before ANALYZE TABLE was relevant for 
InnoDB, and High Performance MySQL (Jeremy Zawodny and Derek Balling) 
doesn't delve into the details about the circumstances where tables 
(and indexes) need to be re-analyzed.

Thanks in advance for any replies.
David.




--
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/[EMAIL PROTECTED]


Re: Recommendation on god MySQL books

2004-06-17 Thread David Griffiths
God doesn't use MySQL (I think he leans towards Postgres - he needs 
views and triggers).

But if you are interested in MySQL, Paul DuBois's book, MySQL, Second 
Edition is a great reference. If you need more insight into performance 
tuning, then Jeremy Zawodny and Derek Balling's book, High Performance 
MySQL, can be very helpful.

Both are up to date.
David
Bartis, Robert M (Bob) wrote:
I'm looking for suggestions on books that would help me to improve my understanding of 
MySQL operations, admin operations, replication, etc. I'm new to MySQL and am about to 
embark on supporting a database for my team to use in recording test results. Any 
suggestions and recommendations, or ones to stay away from?
Thanks in advance
Bob
 


--
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/[EMAIL PROTECTED]


Importing data, indexes, and analyzing tables.

2004-06-16 Thread David Griffiths
We have a somewhat large database, with a snapshot of the data that we 
import into the database. Normally, we create a database, create the 
tables, import the data with LOAD DATA, add the indexes, and then the 
foreign keys (we are using InnoDB, 4.0.18). We call this the 
load-data-method.

Sometimes we dump the data with mysqldump, create a new database, and 
pipe the dump file into the new database. We call this the mysqldump-method.

I was wondering about the time it takes to import either one. The 
mysqldump-method takes about 45 minutes. Tables, indexes and foreign 
keys are created. The load-data-method takes two to three hours.

Using LOAD DATA is supposed to be the fastest way to get data into 
MySQL. We use the ALTER TABLE ADD INDEX (), ADD INDEX() ... method, 
which is supposed to be the fastest way to add indexes to a table.

So I'm curious as to why it takes so much longer. I suspect that the 
mysqldump-method needs an ANALYZE TABLE ... (for each table in the 
database) at the end of it, to rebuild the statistics for the tables, 
and if that was done, then the load-data-method would be comparatively 
faster. Is this correct, or am I off-base with this? Are the statistics 
for the indexes in InnoDB correct or incorrect after a mysqldump file is 
imported into the db?

This brings my next question - as I was looking around, trying to get 
some insight, I was reading the mysql documentation on SHOW INDEX FROM 
tablename, which is supposed to show key-distribution across a table 
(http://dev.mysql.com/doc/mysql/en/ANALYZE_TABLE.html).

I've run it, and I'm a bit confused about inferring anything from the 
results of the SHOW INDEX... statement 
(http://dev.mysql.com/doc/mysql/en/SHOW_INDEX.html). The most 
interesting column in the result set seems to be the cardinality 
column, which stores the number of unique values in an index, but on 
large tables, it's tough to know if that's accurate if the index is not 
unique. Is there any way to read the results of a SHOW INDEX to be able 
to figure out if a table needs analyzing?

MySQL (Paul DuBois) was written before ANALYZE TABLE was relevant for 
InnoDB, and High Performance MySQL (Jeremy Zawodny and Derek Balling) 
doesn't delve into the details about the circumstances where tables (and 
indexes) need to be re-analyzed.

Thanks in advance for any replies.
David.


--
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/[EMAIL PROTECTED]


Re: mysql memory optimization - AMD 64 - odd crashes

2004-06-08 Thread David Griffiths
Our Opteron server should be arriving today, so I can't provide a whole 
lot of insight.

First, what version are you using? 4.0.20? The 64-bit or 32-bit version? 
Knowing the version might help. If this is not a production machine, you 
might want to try using the version from the MySQL website

Also, I noticed a bug the other day that might be applicable, when 
reviewing what was coming up in 4.0.21 (the URL is 
http://bugs.mysql.com/bug.php?id=3754). It had to do with values > 4 
billion being used in the client for things like 
myisam_max_sort_file_size (the values work fine in the my.cnf). It's a 
longshot, but just in case. Check the bugs fixed between the version 
you are running and the latest - you might find some insight there.

By key-cache, are you referring to the variable key_buffer_size? If 
so, keep in mind that the key-buffer is for MyISAM, and only caches the 
indexes. From, http://dev.mysql.com/doc/mysql/en/MyISAM_key_cache.html

   * For index blocks, a special structure called the key cache (key
 buffer) is maintained. The structure contains a number of block
 buffers where the most-used index blocks are placed.
   * For data blocks, MySQL uses no special cache. Instead it relies on
 the native operating system filesystem cache.
Your database would have to be huge to have 4 gig worth of indexes. What 
if you try to scale it back to 2 gig (or less) and let it run for a few 
days?
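For what it's worth, both the setting and the cache behaviour are 
visible from the client. A quick sketch:

SHOW VARIABLES LIKE 'key_buffer_size';
SHOW STATUS LIKE 'Key_read%';

If Key_reads divided by Key_read_requests creeps above 0.01, the buffer 
really is too small; if the ratio stays tiny after scaling back to 2 
gig, the extra memory wasn't buying you anything.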

Does it generate a core file? There is a setting to specify the size of 
a core file; not sure if it turns the generation of a core file on or 
off (http://dev.mysql.com/doc/mysql/en/mysqld_safe.html and search for 
--core-file-size=size). Once you have the core file, you might want to 
talk to the people on the development list, and see if one of them is 
willing to take a look at it (assuming they can - they might need a 64-bit 
system to do so).

I know I haven't really answered your question; just listed what I would 
look at.

David
Johannes B. Ullrich wrote:
I just started setting up a dual opteron system with
Suse 9.1 (x86_64). At this point, I am sticking with the
Suse provided rpms.
I am having some odd mysql restarts (crashed with immediate
restart by safe_mysqld). Odd part about it: not a single line
in the error log. The connections just drop and come back as the
client tries to reconnect.
One possible issue I figured is the way I try to use memory.
Does MySQL allow allocating more than 4 GB of RAM? Right
now, I am using 4 GB for the key cache alone. It doesn't 
complain, and appears to use about 5 GB at the time
it crashes (the machine has 16 GB of RAM). But does it cause
instability to use that much key cache?



 


--
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/[EMAIL PROTECTED]


Re: question

2004-06-05 Thread David Griffiths
I think he's talking about downloading MySQL, and verifying the binary 
using MD5.

There is a link right above the downloads for verification of the binary.
http://dev.mysql.com/doc/mysql/en/Verifying_Package_Integrity.html
The contents of the page at that URL suggest using GNU Privacy Guard.
David
Paul DuBois wrote:
At 22:54 -0700 6/4/04, jerome greene wrote:
Where do you run the verify programs from? Are they ran from the 
command prompt? Do I copy the keys?

Thanks

What is the context of these questions?

--
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/[EMAIL PROTECTED]


Re: What does this sql query mean?

2004-06-04 Thread David Griffiths
Louie,
The inner-join is just joining the two tables, shopcart and items.
I think iip is an alias for items, but the alias would normally come 
after the name of the items table. Another way to write the query is 
(assuming iip is an alias for items),

SELECT * FROM shopcart, items iip
WHERE shopcart.itemID = iip.itemID
AND shopcart.cookieId = '4bfa673ee4c544d4352e2c70a78b70b5'
order by iip.itemName asc
The inner-join is just an alternative to what's above.
I read on some site that there were two advantages:
1) It kept the WHERE-clause a bit simpler
2) It was a bit faster, as the tables were pre-matched; I doubt this, and 
the source wasn't particularly trustworthy. The MySQL manual doesn't 
mention any performance gains from using the INNER-JOIN functionality.

Here's the relevant MySQL manual page: 
http://dev.mysql.com/doc/mysql/en/JOIN.html

The relevant section:
INNER JOIN and , (comma) are semantically equivalent in the absence of 
a join condition: both will produce a Cartesian product between the 
specified tables (that is, each and every row in the first table will be 
joined onto all rows in the second table).

By comma, they mean the comma separating the two tables in the FROM 
clause, as I've written it above (FROM shopcart, items).
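Written with an explicit INNER JOIN and the alias in its usual position, 
the same query would read (a sketch, assuming iip really is meant to 
alias items):

SELECT * FROM shopcart
INNER JOIN items iip ON shopcart.itemId = iip.itemId
WHERE shopcart.cookieId = '4bfa673ee4c544d4352e2c70a78b70b5'
ORDER BY iip.itemName ASC;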

David
Louie Miranda wrote:
Hi,
I'm just a newcomer to MySQL, and I was in the middle of debugging some code
that isn't mine. I was stuck here. I could not figure out what this select
does.
It's complicated...
Can anyone help me out?
select * from shopcart inner join items on shopcart.itemId = iip.itemId
where shopcart.cookieId = '4bfa673ee4c544d4352e2c70a78b70b5' order by
iip.itemName asc
Thanks
-- -
Louie Miranda
http://www.axishift.com

 


--
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/[EMAIL PROTECTED]


Re: Specifying an index length and the default value

2004-05-30 Thread David Griffiths
Matt,
Thanks for the great reply. We have a database that has been migrated 
over to MySQL 4.0, and the column definitions are a bit whacked - way 
more space is allocated than is needed for many columns. I think you are 
using MyISAM tables; we are using InnoDB. I think it would be worth 
dropping some indexes after seeing how much free tablespace there is, 
and then re-creating those indexes with a length specifier of the column 
size, to see if the amount of free tablespace changes. If it doesn't, 
then MySQL is indexing the whole column.

The question arose due to a unique index we have on a table; we can't 
seem to get the data in from our other database - the index fails, yet 
the value that it fails on appears only once in the table we are copying 
from. I wondered if maybe MySQL was defaulting the length of the index, 
and causing it to conflict with another value. I don't believe it is, 
however, as there is no value that is even remotely similar.

Regardless, I think it might be a worthwhile exercise to take a close 
look at our data, and see if we can guesstimate appropriate lengths.

Thanks,
David.
Matt W wrote:
Hi David,
Great questions:
- Original Message -
From: David Griffiths
Sent: Friday, May 28, 2004 6:05 PM
Subject: Specifying an index length and the default value
 

The length of indexes on varchar and char indexes can be specified at
index creation.
What is the default length of an index if no length is provided?
   

The default is to index the whole column length (of course that's not
possible with TEXT/BLOB columns).
 

The High Performance MySQL book hints that the index-length used is
specific for each entry in the indexed column (ie an index on a column
where the average length of the data is 8 bytes would take up (8 x
number-of-rows) bytes).
   

Well, maybe.  It depends... see below.
 

If a column was a varchar(128), would the index use 128 bytes per entry,
or would it use the number of bytes in each row of the indexed column.
So if each row had exactly four characters, the index would use four
bytes per row, but if a row was added with 8 characters, that one row
would have an index entry that was 8 bytes in size and the rest would
remain at 4?
   

I don't know that it's in the manual anywhere, but from experimenting and
stuff, I've found that, generally, indexes with a TOTAL length (if there's
multiple columns) of <= 8 bytes are fixed-length.  That is, they'll always use 8
bytes/row even if a string doesn't take up that much space.  Actually, this
"up to 8 bytes, fixed length" behavior might only occur with indexed
character columns (e.g. 100% numeric indexes may stay fixed-length
regardless of their size).  I'm not sure...
You can change the behavior, however, by setting PACK_KEYS in ALTER/CREATE
TABLE.  The default, in MySQL 4+, is DEFAULT, where MySQL decides whether
to use fixed-length keys (faster) or packed variable-length keys (space
saving) depending on the index.  Setting PACK_KEYS to 0 forces all
fixed-length keys (I *think* 0 in 3.23 is like DEFAULT in 4+).  Setting
PACK_KEYS to 1 packs all *non-unique* indexes, even if they're numeric or
<= 8 bytes.
It's useful to use myisamchk to find out the properties of columns in an
index:
myisamchk -dv /path/to/db/table
In the second half of the output, you will see information about the
table's indexes.  Some things you may see in the Type column are:
packed - I think this is for character indexes that have prefix compression
(multiple index entries that start with the same characters are
compressed).  Any unused space at the end of the index (storing 10 chars in
a 32 character index) is also not stored (like you were talking about
above).
prefix - I think this one is for numeric indexes that have prefix
compression (in an INT index, values 0 - 255 use the same 3 bytes, so those
can be compressed).
stripped - This is for character indexes that have unused trailing space
stripped (again, like you were talking about above).
Now, having said that, there's still some things in the myisamchk output
that I can't figure out: like sometimes there will be packed and
stripped on the same column; sometimes not.  And other things I can't
remember now that don't seem consistent.  I just kinda figured it out on my
own since I don't know that there are official descriptions anywhere.
But at least it gives you more of an idea of what's going on internally
than you can get from a SQL query. :-)
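For example, the PACK_KEYS settings Matt describes are plain table 
options in ALTER TABLE (a sketch with a hypothetical table name):

ALTER TABLE some_table PACK_KEYS = 1;        -- pack all non-unique indexes
ALTER TABLE some_table PACK_KEYS = 0;        -- force fixed-length keys
ALTER TABLE some_table PACK_KEYS = DEFAULT;  -- let MySQL decide per index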
 

Thanks for any input.
David.
   

Hope that helps somewhat.
Matt


 


--
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/[EMAIL PROTECTED]


Specifying an index length and the default value

2004-05-28 Thread David Griffiths
The length of indexes on varchar and char indexes can be specified at 
index creation.

What is the default length of an index if no length is provided?
The High Performance MySQL book hints that the index-length used is 
specific for each entry in the indexed column (ie an index on a column 
where the average length of the data is 8 bytes would take up (8 x 
number-of-rows) bytes).

If a column was a varchar(128), would the index use 128 bytes per entry, 
or would it use the number of bytes in each row of the indexed column. 
So if each row had exactly four characters, the index would use four 
bytes per row, but if a row was added with 8 characters, that one row 
would have an index entry that was 8 bytes in size and the rest would 
remain at 4?

Thanks for any input.
David.
--
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/[EMAIL PROTECTED]


Re: Very Strange data corruption

2004-05-25 Thread David Griffiths
MySQL really should throw an exception/error rather than just quietly 
trim your data and accept it. When your data is critical, and your 
business depends on it, you can't have bad data quietly going into the 
database.

David.
Mike Johnson wrote:
From: Jeff McKeon [mailto:[EMAIL PROTECTED]
 

Query: 

insert into
MIS.simcard(ID,ShipID,Service_Provider,SN,v1,v2,f1,d1,puk1,puk2,pin1,pin2,
TwoStage,Status,DateAssigned,DateDisabled,UserID)
VALUES('NULL', '6889927707', '1', '8988169214000421398', '881621456175',
'', '', '881693156175', '62982149', '', '', '', '1307', '1',
'1085508771', 'NULL', 'jsm');

Always results in a ShipID field value of 2147483647 instead of
6889927707
Even if I just do a simple:
insert into MIS.simcard (ShipID) values ('6889927707');
It does the same darn thing.
ShipID is an Int(11) field
Version 4.0.15
If I change the first digit of the input from a 6 to any 
other digit, it
gets entered correctly.  Any idea what is going on here!?

Version 4.0.15
   

The max value of INT is 2147483647, lower than the value you're inserting (even 
when unsigned, which is 4294967295). That's why that value is what gets inserted.
Manual page is here:
http://dev.mysql.com/doc/mysql/en/Numeric_type_overview.html
Try converting the column to a BIGINT; the signed max alone is 9223372036854775807.
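The conversion itself is a single statement (a sketch; whether the column 
should also be UNSIGNED or NOT NULL depends on the original definition):

ALTER TABLE MIS.simcard MODIFY ShipID BIGINT;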
 


--
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/[EMAIL PROTECTED]


Re: Very Strange data corruption

2004-05-25 Thread David Griffiths
It's best practice to write unit tests for all your code, with 
calculated data to show what you expect. That data then gets compared to 
what is actually generated, and if there is a discrepancy, then you have 
a bug somewhere.

But the reason modern databases have foreign keys, primary keys, 
not-nulls, check constraints and data-metadata (char(5), INT, BIGINT, 
etc) is to prevent bad data from going in.. If no exception is thrown 
because you are trying to put a BIGINT into an INT, then why throw one 
if you try to insert a NULL into a NOT-NULL column (assuming no DEFAULT 
is present)? Or what about foreign keys? Why not just quietly fail if a 
fk-constraint is violated?

Go even farther. Say your SQL is just incorrect (INSETR IN TO  instead 
of INSERT INTO). What if MySQL didn't throw an exception back to your 
PERL DBI or Java JDBC connection? After all, it's up to the developer to 
make sure their SQL syntax is correct.

The database has all sorts of constraints that can be applied to your 
data model. They should all have the same behaviour when violated.

David
Mike Johnson wrote:
From: David Griffiths [mailto:[EMAIL PROTECTED]
 

MySQL really should throw an exception/error rather than just quietly 
trim your data and accept it. When your data is critical, and your 
business depends on it, you can't have bad data quietly going 
into the 
database.
   

Someone correct me if I'm wrong, but isn't it considered best practice to validate 
data before it gets to the database?
I can't seem to find a source for this after a quick search on Google, though...
 


--
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/[EMAIL PROTECTED]


Re: Very Strange data corruption

2004-05-25 Thread David Griffiths
David Brodbeck wrote:
The client software ought to be range-checking the data before sending 
it to the database.  If the client isn't even doing that kind of 
minimal-effort check, how likely is it to be checking for exceptions?
That's not to say that an error or exception is a bad idea, but MySQL may be
constrained here by what the SQL standard says to do; I'm not sure.
 

I'm not sure what the SQL standard says on the matter, but Oracle, DB2 
and Postgres would throw an exception. In fact, there is a page on 
MySQL gotchas to document MySQL behaviour when it differs 
significantly from other databases (like the first datetime field in a 
table getting a value if none is provided during insert).

--
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/[EMAIL PROTECTED]


Re: Very Strange data corruption

2004-05-25 Thread David Griffiths

The client software ought to be range-checking the data before sending it to
the database.  If the client isn't even doing that kind of minimal-effort
check, how likely is it to be checking for exceptions?
 

Not sure what you code in, but in Java, you *HAVE* to catch 
SQLExceptions (or throw them up). Isn't it better to give the developer 
the option?

Also, most people who have replied seem to think that I/you are the 
author of the code.

If you use the MySQL client to do a LOAD DATA import (say of an export from 
another database), and you made a mistake in the DDL 
(data definition language) and used an INT when you should have used a 
BIGINT, you won't know.

David
--
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/[EMAIL PROTECTED]


Re: Very Strange data corruption

2004-05-25 Thread David Griffiths
Michael Stassen wrote:
This comes up frequently. MySQL's behavior is explained in the manual 
http://dev.mysql.com/doc/mysql/en/constraint_NOT_NULL.html.  It begins:

That's interesting, and I guess one just has to accept it as part of the 
MySQL philosophy. I don't agree, as I don't like the database trying to 
read my mind on what the best possible value is. I agree that the 
client code should do the same, but it's nice to have that last defense.

But the reason modern databases have foreign keys, primary keys, 
not-nulls, check constraints and data-metadata (char(5), INT, BIGINT, 
etc) is to prevent bad data from going in.. If no exception is thrown 

MyISAM tables do not have foreign key and check constraints.  I'm not 
sure what you mean by data-metadata.

Metadata is information about the columns and tables. For example, 
finding out that a column is an INT, not-null, etc. MySQL doesn't have a 
tonne of that stuff (some other databases literally have hundreds of 
tables and views with information about what's in the database, and 
what's going on).

MySQL will only throw an exception if you try to explicitly insert a 
NULL into a NOT-NULL column in a single row insert.  It won't throw an 
exception in a multi-row insert or if you implicitly set a column to 
NULL by leaving it out.  This is in keeping with the need to support 
non-transactional tables.

Foreign keys are different.  They are only supported in transactional 
tables (InnoDB), so ROLLBACK is available.

I am not really familiar with MyISAM - we use only InnoDB in our 
databases (other than the mysql database, of course) as we need the 
ACID transactions and row-level locking.

There is a reason the MySQL developers do things the way they do.  
It's documented in the manual.  The driving principle is the need to 
support non-transactional tables, not some idea that no data validity 
checking should ever be done by the db.

When MyISAM gets transactions (in the next major version, I think), will 
this behaviour go away? In fact, I thought MyISAM had a basic 
begin-commit/rollback transaction already?

This is a philosophical matter. There have been excellent arguments for 
both sides in previous threads on this topic. As it stands now, however, 
if you require a db where the db itself can be set up to validate all 
data, then MySQL is not the db for you. On the other hand, if you are 
willing to validate your data, MySQL is fast, reliable, and cheap. 
Personally, I don't find coding to validate input any more difficult 
than coding to handle exceptions.

We'll just have to code around it - it makes a strong case for adding 
unit tests to our development cycle.

I find it exactly the opposite. An exception is a try, two braces, a 
catch, two more braces with a stack-trace in between. If one is 
thrown, it means there's a bug. And using InnoDB, I know that I can roll 
the whole transaction back.

David
--
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/[EMAIL PROTECTED]


Re: Repeat loops in mysql, large data issue, suggestions wanted

2004-05-20 Thread David Griffiths
Assuming you insert 100,000 rows, you also have to consider that any 
indexes on the table will need to be re-analyzed to fix the statistics. 
Also, the inserts will be slower due to any indexes.

You didn't mention the table type (storage engine) you were planning on 
using, but if you use InnoDB, and do it all in one transaction (i.e. turn 
off auto-commit and commit just once at the end), then any failure 
during the mass insert will cause a rollback, so you don't have to worry 
about the integrity of your data.
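A minimal sketch of that, with hypothetical table and column names:

SET AUTOCOMMIT = 0;
INSERT INTO import_target (f1, f2, f3) VALUES ('a', 'b', 'c');
-- ...one INSERT per line of the file...
COMMIT;

If anything dies before the COMMIT, InnoDB rolls the whole batch back.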

Is it then possible to have a SQL statement that would somehow take the 
lump field data, select it, chop it up, and insert 100,000 rows, in this 
case, three fields per row?

I haven't heard of a SQL function like that. In addition, you'll suffer 
the pain twice by inserting it once as one big field, selecting it out 
again, breaking it up, and then re-inserting it.

Is the code going to be running on the machine with the database? That 
could improve it.

One other thing to consider is to use LOAD DATA INFILE to do a bulk load 
rather than a tonne of insert statements. You can do this from a 
command line on the machine where the MySQL server is installed. See the 
docs at http://www.mysql.com to get the syntax of LOAD DATA INFILE - it 
should work on the format of the file as you specified below.

David.
Scott Haneda wrote:
Faced with some larger than usual for me data requirements, I thought I
would ask some others what they think about my logic.
MySql 4
In short, I will have a file upload on a web server that will digest a file,
I will be able to dictate the format of this file. There may be a few
formats, the main one will be:
data\tdata\tdata\r
So, that is some data, a tab, some more data, another tab, then a return.
My trouble is that the data file could be 100,000 lines in length, I have a
few options:
Iterate through the file, one line at a time, using insert delayed I can put
the data into a table with no trouble, this is the simplest method, but
perhaps has performance issues.  In any language, repeating 100,000 times
will take some time, of course, it will happen in the background, but it
still will take some time.  There is also the trouble with a crashing
server: I would have an incomplete set of data inserts, and no really simple
way to deal with this.
I was thinking, perhaps MySql is up to this task, what if I were to insert
the entire file into one field in mysql, then I have all the data in mysql,
this insert should happen much faster, as it is just data, it could be a few
MB's in size, but still should not take too long to get it into MySql.  It
is also only one operation, so the chance of a server crash interfering is
less.
Is it then possible to have a SQL statement that would somehow take the lump
field data, select it, chop it up, and insert 100,000 rows, in this case,
three fields per row?
I was thinking I could easily do some simple string replaces on the data and
get it to one large INSERT statement, but this is not all that attractive to me
as I am not 100% certain each line in the file would be in the correct
format; someone may have accidentally put in 5 tabs on one line.  In order to
check for this I am back to repeating through each line server side and
testing the line for integrity.
Any ideas and thoughts?
MySql 4
 


--
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/[EMAIL PROTECTED]


Re: Repeat loops in mysql, large data issue, suggestions wanted

2004-05-20 Thread David Griffiths

You didn't mention the table-type (storage-engine) you were planning on
using, but if you use InnoDB, and do it all in one transaction (ie turn
off auto-commit and commit just once at the endi), then any failure
during the mass-insert will cause a rollback, so you don't have to worry
about the integrity of your data.
   

I can use any I like; I haven't even built the DB yet, so I am open to
any suggestions.
I don't see how I can do it all in one transaction - what does that mean?
 

What that means is you do 100,000 inserts, and then do one commit at the 
end. If the connection to the database dies, or the database itself 
crashes, then all the rows inserted will be rolled back, and no data 
will be in your database.

I think you may mean something like rather than doing 100,000 separate
inserts, somehow build that into just one insert string, so it is then one
connection?  Can you elaborate?
 

No, you still have to do all the inserts, but either they all get in, or 
none of them get in, depending on what happens as you are inserting.

Is it then possible to have a SQL statement that would somehow take the
lump field data, select it, chop it up, and insert 100,000 rows, in this
case, three fields per row?
I haven't heard of a SQL function like that. In addition, you'll suffer
the pain twice by inserting it once as one big field, selecting it out
again, breaking it up, and then re-inserting it.
   

I was thinking that perhaps MySql would be more efficient at it than some
server side middleware.
 

I haven't seen a function like that; it might exist. It might not.
Is the code going to be running on the machine with the database? That
could improve it.
   

The code that would do the insert will run on hardware that is on the same
network as MySql, but certainly not the same machine; they are two different
OSes, so it is not possible to run them on the same machine.
 

Not sure I follow. If you wrote it in C, and compiled it on the machine 
where the MySQL database was, or if you wrote it in Java, it could 
run anywhere. You can also write it in PERL and run it on the database 
machine, assuming you install PERL.

One other thing to consider is to use LOAD DATA INFILE to do a bulk load
rather than a tonne of insert statements. You can do this from a
command line on the machine where the MySQL server is installed. See the
docs at http://www.mysql.com to get the syntax of LOAD DATA INFILE - it
should work on the format of the file as you specified below.
   

Can LOAD DATA be used on MySql if the data is not on the same machine as
MySql?
 

Yes - with the LOCAL keyword, the file is read from the client machine. And 
LOAD DATA doesn't require any coding. For example, you just put this into a file:

LOAD DATA LOCAL INFILE 'file_with_all_the_data.txt'
INTO TABLE the_table_where_rows_go
FIELDS TERMINATED BY '\t'
OPTIONALLY ENCLOSED BY '' ESCAPED BY ''
LINES TERMINATED BY '\r\n';
This reads a file, and breaks each line up by a \t (and the line ends 
with \r\n in this example). It just dumps all the data into the table. 
The columns in the table have to be in the same order as the fields in 
each line.

No coding required.
David
--
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/[EMAIL PROTECTED]


Re: Simple table, 1.7 million rows, very slow SELECTs

2004-05-19 Thread David Griffiths
Jacob Elder wrote:
Thanks to everyone who helped me with this. I settled on breaking it down
into area code, exchange and subscriber. This is one of the most generous
lists I've ever had the pleasure of begging for help on.
 

I don't want to deter you from making the changes above, but it's pretty 
easy to figure out if the key-buffer is too small; it's much less work 
to try that out first and, only if it doesn't work, then make radical 
changes to your data.

Also, keep in mind that MySQL will use only one index per table in a 
query. If you break a single column into three columns, and then index the 
three columns, MySQL will now have to decide which of the three indexes 
to use. This means that because you are indexing a subset of your data, 
MySQL will end up doing more work because the index will not be as 
exact. For example,

SELECT area_code, prefix, postfix FROM phone_numbers WHERE area_code=402 
and prefix=232 and postfix=4222;

will force MySQL to pick the index on area_code, prefix, or postfix. 
Since postfix is the most discriminating index, it will reduce the 
number of rows down the fastest. Say there are 800 rows with the postfix 
4222. It now has to scan those 800 rows to match the area code and 
prefix. No index will be used there; it's a small scan, but it will add 
overhead to each search.

If you keep all the data in one column, then the one and only index 
should be chosen, and it should go straight to the row.
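A sketch of the single-column alternative (all names hypothetical):

CREATE INDEX idx_full_number ON phone_numbers (full_number);
SELECT full_number FROM phone_numbers WHERE full_number = '4022324222';

Here the equality match is satisfied by the one index alone, with no 
secondary scan.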

One other thing to consider is the length of your index. For char and 
varchar indexes, you can tell MySQL how many of the characters you want 
included in your index.

For example, CREATE INDEX index_name ON table(column(4));
will only index the first four characters of the column. You may want to 
specify an index that matches the length of your data (10 characters, I 
believe). Also, keep in mind that a char is faster than a varchar if you 
have exactly 10 characters for each row (rather than 7 sometimes, and 10 
other times). I am not sure what the default index length is, but it 
could be something like 32 or 64. Anyone know?

David

--
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/[EMAIL PROTECTED]


Re: Simple table, 1.7 million rows, very slow SELECTs

2004-05-18 Thread David Griffiths
Is that the only table in your MySQL installation?
MyISAM primary keys are put in a B-tree index, which is cached in memory 
by MySQL; the key_buffer_size parameter controls the size of that cache. 
What is it set to on your system (in the my.cnf file, probably in /etc 
or /var)?

The second time you run it, the index is definitely in memory, which is 
why it is so fast.

Perhaps the OS is swapping MySQL pages out to disk, or perhaps there is 
not enough memory allocated to the key-buffer to keep the index in memory.

The more frequently you access data, the more likely it is to be cached 
by the OS or the database. Not sure what is running on your system or 
how it is configured, but the amount of memory you have looks a bit 
light. Databases are much faster with more memory.

David.
Jacob Elder wrote:
Here's my table:
CREATE TABLE `data` (
  `junk` char(10) NOT NULL default '',
  PRIMARY KEY  (`junk`)
) TYPE=MyISAM;
There are about 1.7 million 10-character long strings. A query like this one
takes about 5 seconds:
SELECT junk FROM data WHERE junk='xx';
Subsequent queries for the same string return right away.
This is MySQL 4.0.18-5 from Debian testing on a dual Xeon 1.8 GHz with 512 MB of
RAM and hardware RAID 5. Load from other services on this machine is minimal.
There is no other MySQL traffic at this time.
Is it normal for this to take so long? Grepping against a flat text file
representing my data takes far less than a second. Any thoughts, folks?
 


--
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/[EMAIL PROTECTED]


Re: Simple table, 1.7 million rows, very slow SELECTs

2004-05-18 Thread David Griffiths
Sorry, the variable is actually key_buffer_size (I don't use MyISAM); 
I'm not sure what it defaults to, but the typical recommendation is 25% 
of your memory.

You can tell if your cache is effective by looking at the key_reads and 
key_read_requests (from the MySQL window, type SHOW STATUS without the 
quotes).

If the key_reads/key_read_requests ratio is >= .01, then you need to allocate 
more memory to the key_buffer_size.

For example our SHOW STATUS on a test database gives us:
| Key_read_requests| 156689872  |
| Key_reads| 445700 |
Which is (445700 / 156689872), or 0.00284 (truncated), which is fine.
One other thing I would recommend is turning off your query cache (I can 
almost hear the gasps from other members of this list). The query cache 
is designed to return the results of frequently executed queries 
(assuming you have enough memory allocated to the query cache to store 
the results). From the sounds of your database (one table with 1.7 
million records), it sounds like no two identical queries will be run 
with any frequency (I am guessing that a fairly even distribution of 
rows will be selected - you'll rarely-if-ever select the same row out 8 
times in 5 minutes outside of testing). If that's the case, turn off the 
query cache (query_cache_type = OFF in your my.cnf) and give that memory 
to something else.
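After restarting with that change, it's easy to confirm the cache really 
is off (a quick sketch):

SHOW VARIABLES LIKE 'query_cache%';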

I hear what you're saying about memory, but I really don't understand 
why a btree lookup would be so dramatically slow compared to a linear 
search with grep. Would something other than MyISAM be more appropriate 
here?

Your query has to be parsed, the index paged in from disk, a lookup done 
on the index, the disk accessed to find the row, and the row formatted 
and returned. Plus there is the overhead of putting the query and the 
result into the query cache. Grep just spins through the file. For a non-complicated 
task like this, grep is fast. When selecting hundreds of rows from 
dozens of tables with all sorts of criteria in the where clause, grep is 
not usable.

MyISAM is fine for this sort of work (though I prefer InnoDB for the 
row-locking, etc).

David

Jacob Elder wrote:
On Tue 18 May 2004 at 12:26:41PM -0700, David Griffiths wrote:
 

Is that the only table in your MySQL installation?
   

Yes, and no one has access to it yet but me.
 

MyISAM primary keys are put in a b-tree index, which is cached by MySQL 
in memory in the key_buffer_size parameter. What is it set to on your 
system (the my.cnf file, probably in /etc or /var)?
   

key_buffer_size does not appear in my.cnf. Is the default sensible for my
setup? 

 

The second time you run it, the index is definately in memory, which is 
why it is so fast.

Perhaps the OS is swapping MySQL pages out to disk, or perhaps there is 
not enough memory allocated to the key-buffer to keep the index in memory.

The more frequently you access data, the more likely it is to be cached 
by the OS or the database. Not sure what is running on your system or 
how it is configured, but the amount of memory you have looks a bit 
light. Databases are much faster with more memory.

David.
   

There are other services on this machine, but the load is rarely above 0.05.
I hear what you're saying about memory, but I really don't understand why a
btree lookup would be so dramatically slow compared to a linear search with
grep. Would something other than MyISAM be more appropriate here?
The chances of a given row being returned more than once per day is very
small, so caching the result doesn't help a lot.
 


--
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/[EMAIL PROTECTED]


Re: InnodB Hot Backup Questions

2004-05-14 Thread David Griffiths
Sorry - haven't had a chance to respond till now.

  So restore == apply-log, but one works on any computer, and the
other
  only works on the computer that it's node locked to.

 --apply-log works also in any computer regardless of the hostname or the
 license expiration date.


I'm running ibbackup on an unlicensed machine. The manual states,

ibbackup --apply-log /home/pekka/.backup-my.cnf


Here's the output with --apply-log (some sensitive info stripped)

---
mydb2:/data # ./ibbackup --apply-log ./my2.cnf

InnoDB Hot Backup version 1.40; Copyright 2003 Innobase Oy
License xxx is granted to
(--restore works in any computer regardless of the hostname)
Licensed for use in a computer whose hostname is 'hrdb1'
Expires 2005-6-1 (year-month-day) at 00:00
See http://www.innodb.com for further information
Type ibbackup --license for detailed license terms, --help for help

--- Error: the hostname of this computer is 'mydb2'.
Please contact [EMAIL PROTECTED] for a license renewal.


If I run with --restore, however (same directory, same files, 30 seconds
after the above command was run),


mydb2:/data # ./ibbackup --restore ./my2.cnf
InnoDB Hot Backup version 1.40; Copyright 2003 Innobase Oy
License xxx is granted to
(--restore works in any computer regardless of the hostname)
Licensed for use in a computer whose hostname is 'hrdb1'
Expires 2005-6-1 (year-month-day) at 00:00
See http://www.innodb.com for further information
Type ibbackup --license for detailed license terms, --help for help

Contents of ./my2.cnf:
innodb_data_home_dir got value /data
innodb_data_file_path got value
ibdata1:2000M;ibdata2:2000M;ibdata3:2000M;ibdata
4:100M:autoextend
innodb_log_group_home_dir got value /data
innodb_log_files_in_group got value 2
innodb_log_file_size got value 104857600


ibbackup: We were able to parse ibbackup_logfile up to
ibbackup: lsn 5 2076170601
040514  9:51:43  ibbackup: Full backup restoration successful!
-


Am I doing something wrong with regard to --apply-log?


 Looks like you have taken the backup with ibbackup-2.0, but are trying to
 run --apply-log with ibbackup-1.40. You should use the same or later
 ibbackup version to run --apply-log. Maybe you have forgotten to replace
 your old ibbackup-1.40 binary with the new ibbackup-2.0 binary, and the
 innobackup script is using the old binary?

That's a possibility - we were using a demo ibbackup for a month or two,
before ordering the full version - I guess the demo was beta, and the
licensed version was 1.4. There is no version number in the name of the file, and
I didn't think to check the version by running with the --help flag. I tried
with a more recent backup, and it worked (as you can see above).

 That is safer. It can reveal corruption.

  Or is it only done if you need
  to use the backup?

 --apply-log is needed to use the backup. But I recommend doing it
 immediately after you have taken the backup.

 In short, the logic is this:

 1. Take a backup.
 2. Run --apply-log to make the backup a 'consistent snapshot'.
 3. Use the backup.

Ok - thx - that makes sense.

David.


-- 
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/[EMAIL PROTECTED]



  1   2   >