I'm curious to know why simply having a UNIQUE constraint on the
column is inadequate...
-JF
On Nov 7, 2006, at 6:47 PM, Michael Fischer wrote:
Any thoughts on using BINARY(N) or CHAR(N) as a primary key?
Performance issues? In mysql, in general?
Yes, in the context of the application, there is a very good
reason for doing this, and not using an auto
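The pattern under discussion — an auto-increment surrogate primary key with the natural key protected by a UNIQUE constraint — can be sketched as follows. This is a minimal illustration in Python with SQLite standing in for MySQL; the table and column names are invented:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
# Surrogate integer primary key; the natural key (email) still gets a
# UNIQUE constraint, so integrity is enforced without making it the PK.
conn.execute("""
    CREATE TABLE users (
        id    INTEGER PRIMARY KEY AUTOINCREMENT,
        email TEXT NOT NULL UNIQUE
    )
""")
conn.execute("INSERT INTO users (email) VALUES ('a@example.com')")
try:
    conn.execute("INSERT INTO users (email) VALUES ('a@example.com')")
    duplicate_rejected = False
except sqlite3.IntegrityError:
    duplicate_rejected = True
print(duplicate_rejected)  # True: the UNIQUE constraint rejects the duplicate
```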
It was my impression, from the information we've collected that our
problem is very specific to Opteron. It's possible that your problem
is actually unrelated. :(
-JF
On Jul 14, 2006, at 7:24 AM, living liquid|Christian Meisinger wrote:
We're using Opterons, Linux 2.6.x, and a SiL (Silicon Image) SATA
chipset whose particular model number I don't have in front of me.
After MUCH MUCH MUCH trial and error we've discovered that:
1) 2.6.16 substantially alleviates the problem but doesn't eliminate it.
2) There is a 3Ware card
On Jul 13, 2006, at 3:03 PM, mos wrote:
At 03:45 PM 7/12/2006, Jon Frisby wrote:
This REALLY should be an academic concern. Either you have a system
that can tolerate the failure of a drive, or you do not. The
frequency of failure rates is pretty much irrelevant: You can train
incredibly non-technical (inexpensive) people to respond to a pager
and hot-swap a bad
On Jul 12, 2006, at 12:45 PM, Scott Tanner wrote:
I am hoping the newer SATA II drives will provide SCSI performance
at a
reasonable price. It would be interesting to see if anyone has
polled ISP's
to see what they're using. I know they charge more (or at least
they used
to) for SCSI
On Jul 12, 2006, at 12:56 PM, Daniel da Veiga wrote:
On 7/12/06, mos [EMAIL PROTECTED] wrote:
At 12:42 PM 7/12/2006, you wrote:
On Tuesday 11 July 2006 19:26, mos wrote:
SCSI drives are also designed to run 24/7 whereas IDE drives
are more
likely to fail if used on a busy server.
This
On Jul 12, 2006, at 12:58 PM, Chris White wrote:
On Tuesday 11 July 2006 04:18 pm, Brian Dunning wrote:
My understanding is that SCSI has a faster transfer rate, for
transferring large files. A busy database needs really fast access,
for making numerous fast calls all over the disk. Two
It's my understanding that the biggest remaining difference has to do
with SCSI having far superior command queueing capabilities --
although SATA's command queueing may have closed the gap somewhat --
which provides for much better real-world performance when you have
multiple database
Better, mark this view (or particular rows if it's not too
expensive) as dirty and recompute it only on access; you
may spare a few cycles...
That sort of depends on what you're going for. Typically materialized
views are used for summarizations of hypercubes for OLAP systems (data
marts /
You want a Pivot Table. Excel will do this nicely (assuming you have
65536 rows or less), but SQL does not provide a mechanism to do this.
If you want a web based interface you can look at Jtable. (I *think*
that's what it's called -- it's a Java web app that provides an HTML
pivot table
Create an Excel spreadsheet. Import the raw data, structured as-is,
into a worksheet. Select all the relevant columns. Go to Data -
Pivot Table and Pivot Chart Report. Click Finish. From the
PivotTable Field List, drag the respondent ID into the box labeled
Drop Row Fields Here, then drag
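Since SQL has no built-in pivot, the usual pure-SQL workaround is conditional aggregation. A sketch in Python with SQLite standing in for MySQL; the survey schema and data are invented for illustration:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE survey (respondent_id INT, question TEXT, answer TEXT)")
conn.executemany(
    "INSERT INTO survey VALUES (?, ?, ?)",
    [(1, 'q1', 'yes'), (1, 'q2', 'no'),
     (2, 'q1', 'no'),  (2, 'q2', 'yes')],
)
# One output column per question: MAX(CASE ...) picks out that question's
# answer within each respondent's group, pivoting rows into columns.
rows = conn.execute("""
    SELECT respondent_id,
           MAX(CASE WHEN question = 'q1' THEN answer END) AS q1,
           MAX(CASE WHEN question = 'q2' THEN answer END) AS q2
    FROM survey
    GROUP BY respondent_id
    ORDER BY respondent_id
""").fetchall()
print(rows)  # [(1, 'yes', 'no'), (2, 'no', 'yes')]
```

The drawback versus Excel is that each pivoted question must be written out by hand, which is why tools get recommended for ad-hoc use.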
amount of time spent in wait state, but actual disk
throughput to our software RAID array (No longer on a SAN...)
is quite low -- 6-9k blocks/s out, 1-6k blocks/s in.
Regards,
Heikki
- Original Message -
From: Jon Frisby [EMAIL PROTECTED]
Newsgroups: mailing.database.myodbc
Sent: Friday, October 14, 2005 10:39 PM
Subject: RE: Non-linear
- Original Message -
From: Jon Frisby [EMAIL PROTECTED]
To: Heikki Tuuri [EMAIL PROTECTED];
mysql@lists.mysql.com
Sent: Monday, October 17, 2005 8:46 PM
Subject: RE: Non-linear degradation in bulk loads?
Actually, I believe we're running 32-bit, with bigmem
Heikki
Oracle/Innobase
- Original Message -
From: Jon Frisby [EMAIL PROTECTED]
To: Heikki Tuuri [EMAIL PROTECTED];
mysql@lists.mysql.com
Sent: Monday, October 17, 2005 8:49 PM
Subject: RE: Non-linear degradation in bulk loads?
Sorry to spam the group
my.cnf like?
Regards,
Heikki
Innobase/Oracle
- Original Message -
From: Jon Frisby [EMAIL PROTECTED]
To: Heikki Tuuri [EMAIL PROTECTED];
mysql@lists.mysql.com
Sent: Wednesday, October 12, 2005 3:08 AM
Subject: RE: Non-linear degradation in bulk loads
Two solutions: 1) sort the rows to be inserted on the key
'email' before inserting.
2) Or:
http://dev.mysql.com/doc/mysql/en/innodb-tuning.html
If you have UNIQUE constraints on secondary keys, starting from MySQL
3.23.52 and 4.0.3, you can speed up table imports by
temporarily
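The first suggestion — sorting the rows on the indexed key before inserting, so the B-tree is filled in order instead of via random page splits — can be sketched like this. Python with SQLite standing in for MySQL; the data and names are invented:

```python
import sqlite3

rows = [("zoe@example.com",), ("al@example.com",), ("mia@example.com",)]

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE subscribers (email TEXT PRIMARY KEY)")
# Sorting on the key first means each insert appends at the "right edge"
# of the index rather than forcing splits throughout the B-tree.
rows.sort(key=lambda r: r[0])
conn.executemany("INSERT INTO subscribers (email) VALUES (?)", rows)
count = conn.execute("SELECT COUNT(*) FROM subscribers").fetchone()[0]
print(count)  # 3
```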
Everyone,
We're trying to do some bulk data loads on several different tables (on
several different machines, using several different techniques) and
seeing dramatically worse-than-linear performance.
We've tried the bulk-INSERT syntax, and the LOAD DATA INFILE syntax.
We've done ALTER TABLE ...
-
From: Manoj [mailto:[EMAIL PROTECTED]
Sent: Monday, October 10, 2005 8:51 PM
To: Jon Frisby
Cc: mysql@lists.mysql.com
Subject: Re: Non-linear degradation in bulk loads?
Not sure, but given that you suffer from non-linear
degradation in performance, my guess is you might be extending
your
set-variable =innodb_log_buffer_size=32M
The log buffer is too big.
Is there a performance penalty associated with making the log buffer
size too large, or is it just not beneficial?
-JF
--
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:
If you include an ORDER BY clause, and the contents of the table are not
modified, then yes. Unexpected insertion of a row in the middle of
the table (as ordered by your ORDER BY clause) may cause a row to be
seen twice (if it happens at an index below the point your queries have
reached) as it
We've seen this problem too. In our case we definitely had a mysql
user, and the mysql user definitely existed. From one prompt we could
start the gcc-compiled MySQL correctly, but not the Intel-compiled
MYSQL.
We were able to fix the problem by setting LD_LIBRARY_PATH=/lib.
-JF
The following behavior occurs in MySQL 4.0.20 with both InnoDB and
MyISAM tables. It happens on the primary-key index as well as other
indexed columns. The behavior does NOT occur in 3.23.56 and 3.23.58.
This is just setup...
mysql> select @x := 1;
mysql> explain SELECT @x := IF(MAX(id) > @x,
Sorry for the intrusion...
EasySQL is a meta-language for MySQL that provides a more sophisticated environment
for pure-SQL coding than MySQL provides on its own, eliminating the need for external
languages such as Perl in many instances.
The first version of EasySQL features iteration across
As I understand it, the particular cycle a release is in depends on how long
it's been since a major bug was reported. So an alpha becomes a beta if
nobody reports a major bug after N days, and a beta becomes a production
release if it goes N days without a major bug report. Thus, even if 4.1.3 is
The current approach is better. Having one row with 81 columns will be
harder to deal with in terms of writing code to display it. The size of the
table will be roughly the same either way -- either you have a few very big
rows, or many small rows, but that shouldn't be a huge issue if you index
Boyd,
First off, I should note that I haven't actually used EJB since about 1.1 or
so and have only been casually keeping up with some of the specs.
Therefore, some of what I say may be a bit out of date. Please ingest large
granules of salt while reading this.
What you're referring to is
Everyone,
We're looking to get an HDS SAN solution (95xx series I believe) at my
present employer, and our only remaining concern is the reliability of MySQL
when used in conjunction with dynamic multipathing software (under Linux).
We use 3.23.x mostly, some 4.0.x -- almost all our data is in
You can disable connection timeouts at the MySQL server level. You should
check the docs to be sure, but I think the relevant variable is
wait_timeout -- setting it to 0 should disable connection timeouts.
-JF
-Original Message-
From: [EMAIL PROTECTED] [mailto:[EMAIL PROTECTED]
From what you describe, I have to wonder if a database is even truly
appropriate for the situation at all. How large is the data set in
question? If it's very small, save for actual image data (which could be
stored as files) the question becomes how often and under what circumstances
the data
Even if I grant you that, the DB designer should have produced the
documentation. I'm willing to place the blame on both people; either
one could have avoided the problem. But, the DBD (data
administrator,
if you prefer) is more likely to have been *aware* of the
problem in the
The performance benefit to be had with NOT NULL columns comes from the
fact that a NOT NULL column can be of fixed length (allowing for fixed
length records). Using NOT NULL probably won't offer any benefit on a
VARCHAR column, since VARCHAR columns are not of fixed length to begin
with.
-JF
I didn't know that. What *does* it do if you specify a
string literal that's smaller than the CHAR(20) field, then?
Pad it with binary zeros?
MySQL returns a properly trimmed string, although it will allocate a
fixed amount of space for the storage. Other databases pad with blank
spaces.
The application is payroll/personnel. A programmer is tasked with
creating forms for data entry on new employees, including
supervisor.
If the user doesn't enter a new employee's supervisor, the
application
accepts it, figuring that it is not yet known, and stores
NULL for the
Use the --opt option when performing the dump. It can make a tremendous
difference.
-JF
-Original Message-
From: dan orlic [mailto:[EMAIL PROTECTED]
Sent: Thursday, September 11, 2003 12:31 PM
To: [EMAIL PROTECTED]
Subject: mysql dump speed
I have a bit of an issue with
SET AUTOCOMMIT=0; -- Disable automatic COMMITs after each statement.
-- Tx #1. Do your work here.
COMMIT; -- or ROLLBACK if there was an error.
-- Tx #2. Do more work here.
COMMIT; -- etc...
Keep in mind that errors can result in either the offending statement
being rolled back (leaving the
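The transaction skeleton above, in runnable form. Python's sqlite3 module stands in for MySQL here; commit()/rollback() play the role of COMMIT/ROLLBACK, and the table is invented:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE accounts (name TEXT, balance INT)")
conn.execute("INSERT INTO accounts VALUES ('a', 100)")
conn.commit()

# Tx #1: do some work, then COMMIT makes it permanent.
conn.execute("UPDATE accounts SET balance = balance - 10 WHERE name = 'a'")
conn.commit()

# Tx #2: something goes wrong, so ROLLBACK undoes everything since
# the last COMMIT.
conn.execute("UPDATE accounts SET balance = balance - 999 WHERE name = 'a'")
conn.rollback()

balance = conn.execute(
    "SELECT balance FROM accounts WHERE name = 'a'").fetchone()[0]
print(balance)  # 90: Tx #1 stuck, Tx #2 was rolled back
```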
By partitioning, you mean having one table divided across N
partitions?
Such a feat isn't directly possible with MySQL, however you can create N
tables instead of 1, and use a table of Type=RAID on the master to
unify them for purposes of queries that need to access data from many
partitions.
I've read about using symbolic links, but only for using
multiple data
directories, not splitting a file (not to mention, across
more than one node).
Symlinks won't work for splitting a file...
Since MySQL doesn't directly support this, any ideas on doing
it at the OS
level to
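The unify-N-tables idea can be sketched with a UNION ALL view over per-partition tables. SQLite (used here as a stand-in, since it has no MERGE/RAID table type) makes the shape clear; table names are invented:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
# Two "partitions" of the same logical table.
conn.execute("CREATE TABLE events_2005 (id INT, msg TEXT)")
conn.execute("CREATE TABLE events_2006 (id INT, msg TEXT)")
conn.execute("INSERT INTO events_2005 VALUES (1, 'old')")
conn.execute("INSERT INTO events_2006 VALUES (2, 'new')")
# The view unifies the partitions for queries that need to span them,
# much like the merged table does on the MySQL side.
conn.execute("""
    CREATE VIEW events AS
    SELECT * FROM events_2005
    UNION ALL
    SELECT * FROM events_2006
""")
total = conn.execute("SELECT COUNT(*) FROM events").fetchone()[0]
print(total)  # 2
```

Writes still have to target the correct partition table directly; only reads go through the unified view.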
This should work, but will not use any index on date_field:
SELECT field FROM table WHERE DATE_FORMAT(date_field, '%Y-%m-%d') =
'2003-10-10';
This is better and will allow an index to be used if appropriate:
SELECT field FROM table WHERE date_field >= '2003-10-10 00:00:00' AND
date_field
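The difference is observable in any engine's query plan. A sketch in Python with SQLite standing in for MySQL (substr() plays the role of DATE_FORMAT, since SQLite lacks it; schema invented):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE t (date_field TEXT, payload TEXT)")
conn.execute("CREATE INDEX idx_date ON t (date_field)")
conn.execute("INSERT INTO t VALUES ('2003-10-10 12:00:00', 'x')")

# Wrapping the column in a function hides it from the index...
plan1 = " ".join(r[-1] for r in conn.execute(
    "EXPLAIN QUERY PLAN SELECT * FROM t "
    "WHERE substr(date_field, 1, 10) = '2003-10-10'"))
# ...while a plain range predicate on the bare column can use it.
plan2 = " ".join(r[-1] for r in conn.execute(
    "EXPLAIN QUERY PLAN SELECT * FROM t "
    "WHERE date_field >= '2003-10-10 00:00:00' "
    "AND date_field < '2003-10-11 00:00:00'"))
print("USING INDEX" in plan1, "USING INDEX" in plan2)  # False True
```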
1) can you join a MyISAM table to an innodb table in a select?
Absolutely.
2) Under 'Restrictions...' in the manual, it says:
When you restart the MySQL server, InnoDB may reuse an old
value for an
AUTO_INCREMENT column.
Under what circumstances does this occur?
If you perform an INSERT
The advantage of other methods is that you avoid all sorts of nasty
locking on various parts of your database. The advantage of
the Hot Copy
product that Innobase Oy sell is that it doesn't place any
locks on your
InnoDB table space when it runs.
Wouldn't that break ACID? The
Is the workload I/O bound, CPU bound, or memory bound?
On Linux, you can get a sense of this as follows:
Run top and vmstat 5 on your MySQL box, and with them running have
your system perform whatever slow operations you are concerned about.
The key numbers here are the swap used, the %iowait, and
What you want is the FULL OUTER JOIN syntax which MySQL doesn't support.
SELECT fruit_table.date, fruit, vegetable FROM fruit_table FULL OUTER JOIN
vegetable_table ON (fruit_table.date = vegetable_table.date);
(I'm assuming that NULL is acceptable instead of the empty string for the
empty cells
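Since MySQL of that era lacks FULL OUTER JOIN, the standard emulation is a LEFT JOIN in each direction glued together with UNION. A runnable sketch in Python with SQLite as a stand-in; the sample dates and rows are invented:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE fruit_table (date TEXT, fruit TEXT)")
conn.execute("CREATE TABLE vegetable_table (date TEXT, vegetable TEXT)")
conn.execute("INSERT INTO fruit_table VALUES ('d1', 'apple')")
conn.execute("INSERT INTO fruit_table VALUES ('d2', 'pear')")
conn.execute("INSERT INTO vegetable_table VALUES ('d2', 'kale')")
conn.execute("INSERT INTO vegetable_table VALUES ('d3', 'leek')")
# LEFT JOIN both ways; UNION (not UNION ALL) collapses the doubled
# matched rows, giving FULL OUTER JOIN semantics.
rows = conn.execute("""
    SELECT f.date, fruit, vegetable
    FROM fruit_table f LEFT JOIN vegetable_table v ON f.date = v.date
    UNION
    SELECT v.date, fruit, vegetable
    FROM vegetable_table v LEFT JOIN fruit_table f ON f.date = v.date
    ORDER BY 1
""").fetchall()
print(rows)
# [('d1', 'apple', None), ('d2', 'pear', 'kale'), ('d3', None, 'leek')]
```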
-Original Message-
From: [EMAIL PROTECTED] [mailto:[EMAIL PROTECTED]]
Sent: Monday, December 09, 2002 6:00 PM
To: Jon Frisby
Subject: Re: RE: relational is relational is relational, but ...
Your message cannot be posted because it appears to be either spam or
simply off
I've attempted to get the CHECK clause to work as well, and have had no
luck. I don't think it's actually supported by MySQL yet.
-JF
-Original Message-
From: R. Hannes Niedner [mailto:[EMAIL PROTECTED]]
Sent: Tuesday, December 03, 2002 7:42 AM
To: Achille M. Luongo; MySQL
What you're asking for is the ON UPDATE CASCADE behavior of FOREIGN
KEYs, and I don't know if InnoDB supports this.
However, from a DB design standpoint, it is generally considered
MASSIVELY unwise for your PRIMARY KEY value to have any
business-meaning. If it has no business-meaning, it never
AND alter data then it must connect to the master.
-JF
-Original Message-
From: Massimo Bandinelli [mailto:[EMAIL PROTECTED]]
Sent: Thursday, November 21, 2002 1:48 AM
To: Jon Frisby
Cc: [EMAIL PROTECTED]
Subject: R: Mysql Replication
I want do build a mysql cluster
Yeah, I've experienced the same thing -- if I leave my computer
unattended it just saps memory until I start getting complaints from
Windows about VM usage. Same deal -- W2K with all the updates.
-JF
-Original Message-
From: Jon Finanger [mailto:[EMAIL PROTECTED]]
Sent: Tuesday,
SELECT papers.id,
concat(a1.first_name,' ',a1.middle_name,' ',a1.last_name)
as author1,
concat(a2.first_name,' ',a2.middle_name,' ',a2.last_name)
as author2,
concat(a3.first_name,' ',a3.middle_name,' ',a3.last_name)
as author3,
concat(a4.first_name,' ',a4.middle_name,'
that happen each day...
-JF
-Original Message-
From: Heikki Tuuri [mailto:Heikki.Tuuri;innodb.com]
Sent: Friday, November 08, 2002 12:22 AM
To: [EMAIL PROTECTED]
Subject: Re: Replication and temp tables...
Jon,
- Original Message -
From: Jon Frisby [EMAIL PROTECTED
Using 4.0.2 for both server and client, replicating the following query
seems to have caused a crash on the client:
CREATE TEMPORARY TABLE tmp1 (
day DATE NOT NULL,
campaign_id INT NOT NULL,
clicks INT,
clicked FLOAT,
approved FLOAT,
users_raw INT,
users_coreg INT,
IIRC, SQL For Smarties has some excellent advice on how to handle tree
structures in an SQL database efficiently, by treating nodes as sets
(set in the mathematical sense). You can efficiently do queries along
the lines of get me everything in category X or any of its
sub(-sub)-categories.
-JF
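The "nodes as sets" technique referred to is the nested-set model: each node stores a (lft, rgt) interval from a depth-first walk, and a node's descendants are exactly the rows whose interval falls inside its own. A sketch in Python with SQLite; the small category tree is invented:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE category (name TEXT, lft INT, rgt INT)")
# Tree: electronics -> (phones, laptops -> gaming), with intervals
# assigned by a depth-first walk.
conn.executemany("INSERT INTO category VALUES (?, ?, ?)", [
    ("electronics", 1, 8),
    ("phones",      2, 3),
    ("laptops",     4, 7),
    ("gaming",      5, 6),
])
# "Everything in category X or any of its sub(-sub)-categories" is a
# single range query -- no recursion needed.
subtree = [r[0] for r in conn.execute("""
    SELECT child.name
    FROM category child, category parent
    WHERE parent.name = 'laptops'
      AND child.lft BETWEEN parent.lft AND parent.rgt
    ORDER BY child.lft
""")]
print(subtree)  # ['laptops', 'gaming']
```

The trade-off is that inserting a node requires renumbering intervals to its right, so the model suits read-heavy trees.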
Another alternative for storage of currency values is to use the DECIMAL
type, which allows you to explicitly specify scale and precision.
You're still constrained to a maximum precision that matches that of
DOUBLE, and you get the inefficiency of storing values as strings but
you don't have to
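The precision argument for DECIMAL can be seen directly with Python's decimal module, which does the same exact base-10 arithmetic that a DECIMAL column does, versus binary floating point:

```python
from decimal import Decimal

# Binary floats cannot represent 0.1 or 0.2 exactly, so pennies drift:
float_sum = 0.1 + 0.2
# Decimal stores base-10 digits with explicit scale and precision, like
# MySQL's DECIMAL column type, so currency math stays exact:
dec_sum = Decimal("0.10") + Decimal("0.20")

print(float_sum == 0.3)            # False
print(dec_sum == Decimal("0.30"))  # True
```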
A colleague asked me what was wrong with this query:
mysql select user.id, email, count(click.id) as c from user, click
where when_signup = '2002-11-10' and click.user_id = user.id group by
c;
ERROR 1056: Can't group on 'c'
Of course, it's logically impossible to group by the result of an
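The fix is to group by a real column (user.id) and, if the count itself needs filtering, do that in HAVING after the grouping. A runnable sketch of the corrected shape, in Python with SQLite and a schema invented to match the query:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE user (id INT, email TEXT, when_signup TEXT)")
conn.execute("CREATE TABLE click (id INT, user_id INT)")
conn.execute("INSERT INTO user VALUES (1, 'a@x.com', '2002-11-10')")
conn.execute("INSERT INTO user VALUES (2, 'b@x.com', '2002-11-10')")
conn.executemany("INSERT INTO click VALUES (?, ?)",
                 [(1, 1), (2, 1), (3, 2)])
# GROUP BY the column, not the aggregate's alias; HAVING filters on
# the aggregate after the groups are formed.
rows = conn.execute("""
    SELECT user.id, email, COUNT(click.id) AS c
    FROM user, click
    WHERE when_signup = '2002-11-10' AND click.user_id = user.id
    GROUP BY user.id, email
    HAVING COUNT(click.id) >= 2
""").fetchall()
print(rows)  # [(1, 'a@x.com', 2)]
```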
Please accept my apologies for consuming bandwidth with such a blatant plug.
I just released a utility that will compare the rows of corresponding tables
in two MySQL databases and return any rows which differ. In addition, it
can optionally ignore any DATE/TIME/DATETIME/TIMESTAMP columns in
] [ZEROFILL]
-JF
-Original Message-
From: Jon Frisby [mailto:[EMAIL PROTECTED]]
Sent: Sunday, September 22, 2002 1:12 PM
To: mysql
Subject: Documentation correction...
Section 6.5.3 of the MySQL documentation has some slight mistakes:
Under the create_definition: section
This implies that I have to preguess how large each data file will be.
Correct. However, all InnoDB tables will share this space automatically.
(Corollary: A single table will automatically span several InnoDB data
files if need be.)
Now, I understand with MyISAM tables that they just grow
Whatever you have suggested so far hasn't worked. I thought that
maybe something
is missing in my stating the problem and implementing your
suggested solutions.
So here is a more expanded picture of the queries:
$sth=$dbh->prepare("INSERT INTO TABLE1 (id,var1,var2) VALUES (?,?,?)");
SQL does not exist in a vacuum.
You have to run a client.
The client is part of or runs under some scripting language ( perl ,
php, sh, command.com )
which in turn runs on an operating system.
You have to somehow pass values that you want to select, update,
or insert.
So it's really not
Hello all,
We're experiencing a bit of difficulty replicating from a 4.0.2 master on
Linux to a 4.0.2 slave on Linux. Most of our tables, including the one
that's giving us headaches are InnoDB. My colleague encountered the
problem, his description follows:
It seems to barf on auto_increment
Have you tried SELECT MAX(`timestamp`) FROM minutely_inbound_handovers;?
-JF
-Original Message-
From: [EMAIL PROTECTED] [mailto:[EMAIL PROTECTED]]
Sent: Friday, July 26, 2002 6:48 AM
To: [EMAIL PROTECTED]
Subject: max() null bug with funny column name
Description:
You expect a
I'm assuming you have AUTO_INCREMENT columns in each table?
Try this:
INSERT INTO TABLE1(...) VALUES(...);
SELECT @t1id := LAST_INSERT_ID();
INSERT INTO TABLE1A(table1_id, ...) VALUES(@t1id, );
INSERT INTO TABLE1B(table1_id, ...) VALUES(@t1id, );
...
-JF
-Original Message-
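The same capture-the-generated-id pattern in runnable form. Python's sqlite3 is the stand-in here, with cursor.lastrowid playing the role of LAST_INSERT_ID(); table names follow the example above:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE table1  (id INTEGER PRIMARY KEY AUTOINCREMENT, name TEXT)")
conn.execute("CREATE TABLE table1a (table1_id INT, note TEXT)")
conn.execute("CREATE TABLE table1b (table1_id INT, note TEXT)")

cur = conn.execute("INSERT INTO table1 (name) VALUES ('parent')")
t1id = cur.lastrowid  # the equivalent of SELECT LAST_INSERT_ID()
# Reuse the captured id for every dependent row:
conn.execute("INSERT INTO table1a (table1_id, note) VALUES (?, 'a')", (t1id,))
conn.execute("INSERT INTO table1b (table1_id, note) VALUES (?, 'b')", (t1id,))

print(t1id)  # 1
```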
We recently set up a 4.0.2 slave, which worked fine -- we loaded our data
snapshot (taken via mysqldump) and were able to perform complex queries
without problems...
However, as soon as we tried to get this machine to act as a slave to a
4.0.1 server it crashed. Immediately upon executing SLAVE
This seems to have not gotten through... Perhaps the spam filter ate it?
(sql, query)
-JF
-Original Message-
From: Jon Frisby [mailto:[EMAIL PROTECTED]]
Sent: Monday, July 15, 2002 4:27 PM
To: [EMAIL PROTECTED]
Subject: MySQL 4.0.2 replication going bonkers?
We recently set up
Does your my.cnf file contain any comments that begin after a configuration
option? I had similar problems setting up 4.0.2, but stripping such
comments fixed it. E.G.:
set-variable = innodb_mirrored_log_groups=1 # Some comment here...
MySQL 4.0.2 complains about the set-variable
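Concretely, the failing form versus the form that parses, as a my.cnf fragment (the variable and value are just the ones from the example):

```ini
# 4.0.2 chokes on a trailing comment after the value:
# set-variable = innodb_mirrored_log_groups=1  # Some comment here...

# Put the comment on its own line instead:
set-variable = innodb_mirrored_log_groups=1
```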
[mailto:[EMAIL PROTECTED]]
Sent: Thursday, June 27, 2002 2:44 PM
To: Jon Frisby
Cc: mysql
Subject: Re: Rand slowness.
I tried this, but it seems to be ignoring the LIMIT 3, since it
is returning
all the rows.
Try:
SELECT *, RAND() AS r FROM table ORDER BY r LIMIT 3;
Off the top of my
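That order-by-a-random-key suggestion, sketched runnably in Python with SQLite, where RANDOM() stands in for MySQL's RAND() and the table is invented:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE t (id INT)")
conn.executemany("INSERT INTO t VALUES (?)", [(i,) for i in range(10)])
# Sort the rows by a per-row random key, then keep only the first 3:
rows = conn.execute(
    "SELECT id FROM t ORDER BY RANDOM() LIMIT 3").fetchall()
print(len(rows))  # 3
```

Note this technique sorts the whole table, so it gets expensive on large tables.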
I just discovered a very scary behavior. We're using MySQL 4.0.1 on Linux,
both tables are InnoDB, AUTO_COMMIT is on (default) and these queries are
being issues from the MySQL interactive console.
If I do a query of the form:
SELECT
transaction_report.*,
confirmed
FROM
In MySQL 4.0.x:
DELETE order_details FROM order_details LEFT OUTER JOIN order_header ON
(order_details.order_number = order_header.order_number) WHERE
order_header.order_number IS NULL;
-JF
-Original Message-
From: Gurhan Ozen [mailto:[EMAIL PROTECTED]]
Sent: Thursday, June 20, 2002
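The same orphan cleanup can be expressed without multi-table DELETE syntax by using NOT IN (or NOT EXISTS). A sketch in Python with SQLite, which lacks multi-table DELETE; the sample data is invented:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE order_header  (order_number INT)")
conn.execute("CREATE TABLE order_details (order_number INT, item TEXT)")
conn.execute("INSERT INTO order_header VALUES (1)")
conn.executemany("INSERT INTO order_details VALUES (?, ?)",
                 [(1, 'kept'), (2, 'orphan')])
# Delete detail rows whose header no longer exists:
conn.execute("""
    DELETE FROM order_details
    WHERE order_number NOT IN (SELECT order_number FROM order_header)
""")
remaining = conn.execute("SELECT item FROM order_details").fetchall()
print(remaining)  # [('kept',)]
```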
E.g. the non-equivalence operator is the same. MySQL will use indexes
for foo > 0, but not foo <> 0, which ask for the same result (presuming
foo is an unsigned column).
Perhaps I was a bit unclear... Using foo > 0 does *NOT* use an index.
Using foo > 0 AND foo < somevalue *DOES* use an index.
As
Please excuse me if this is something blindingly obvious, but having now
encountered this in several circumstances, I have been unable to find a
resolution in the docs (perhaps I just missed it?).
In actuality we encountered the problem when doing some nasty joins, but the
problem seems
* MySQL, after reading the query, decides whether using an index would be
better than just a table row scan. Hence, it's MySQL's decision
MySQL is making the wrong decision. As stated below, it's doing a table
scan when it only needs to look at some 2,800 rows out of 970,000 rows. In
It has been my (unfortunate) experience that OLAP type applications are not
MySQL's strong point. Large dataset applications involving queries that
perform aggregations, and scan most/all of the dataset tend to take a very
very long time to execute on MySQL even when using a star-schema
I'm rather fond of GTE (GWD Text Editor) myself -- http://www.gwdsoft.com
-JF
-Original Message-
From: Jerry [mailto:[EMAIL PROTECTED]]
Sent: Tuesday, April 30, 2002 6:37 AM
To: MySQL General List
Subject: Good color coded SQL editor for MySQL?
Does anyone know of a good color
String literals in a truth test such as you set up are coerced to integers.
(A la C's 'atoi()' function.)
The following returns all rows from a table:
SELECT * FROM test WHERE '1'; // String becomes 1
SELECT * FROM test WHERE '1a'; // String becomes 1
SELECT * FROM
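SQLite's CAST exhibits the same atoi-style prefix parsing, so it can stand in to demonstrate how such string literals coerce in a numeric context:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
# The leading digits are kept and the rest is discarded -- like C's atoi():
vals = conn.execute(
    "SELECT CAST('1' AS INTEGER), CAST('1a' AS INTEGER), CAST('a' AS INTEGER)"
).fetchone()
print(vals)  # (1, 1, 0)
```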
Neither do I, it seems strange. This value has to be stored in some kind
of pointer to an object or char * in the program itself. I would first
assume that the content of the pointer is tested, but that can't be true
since the content in this case is the number 65.
You're assuming a very
: Tuesday, April 16, 2002 11:07 AM
To: Mysql List
Subject: RE: WHERE codition test
On 16 Apr 2002, at 10:58, Jon Frisby wrote:
String literals in a truth test such as you set up are coerced
to integers.
(A la C's 'atoi()' function.)
[snip]
If you're used to weakly-typed languages
I'll trust a MySQL alpha release before the final production releases of
most vendors. In fact, we're using 4.0.1 in our production
environment with
great success.
It's your system, and if you think it's worth the risk, it is of course your
decision to do so, ;) but I would never ever
But I did ask for a release date of version 4.1.
When it's ready.
A nice thing with a stored procedure is that you can lower the load
impact made by heavy queries on the database by using cursors to soften
the performance hit they are likely to make - e.g. make the query behave
like a
When it's ready.
That's a tautological answer.
You've already stated your lack of belief in the reliability of deadlines,
so what did you expect? If someone had responded with 4.1 will be done on
5/1/2002 you'd respond that the only way that would happen would be if it
were horribly buggy
Checked my own version and found it to be Ver 11.16 Distrib 3.23.49.
Heh, version 11??? Quite confused, but after a sanity check with reality
I realized I must be using version 3.23. Anyhow, I checked the status of
the next version and found out that version 4.0 is in alpha mode...
The Ver 11.16
FOREIGN KEYs by themselves make it faster and easier to validate that your
code is correctly maintaining referential integrity.
They also let O/R mappers do more of the work in generating code for
accessing your data (Blatant Plug: http://freshmeat.net/projects/easyorm/ --
EasyORM is an O/R