Re: characters oddity

2016-03-02 Thread Wm Mussatto
McGranahan, Jamen wrote:
We have two virtual machines, both running RedHat 7. Both are also running
MySQL 5.6.29 and both have the same data. We have two databases, however,
that keep throwing odd characters on one system while they are OK on the
other, and we've not been able to figure out why.

What it should look like (from the test machine, pointing to the database on
the test machine):



What it looks like on our Production database (from the test machine,
pointing to the production database):
Décimas a la censura de Carmen Aristegui
Guillermo Velázquez Benavidez

We have verified that my.cnf is the same on both machines, using utf8 as the
default character set. We have also verified that the character sets for the
databases and tables are identical. We know it has to be something with the
MySQL database on our Production server because we can point Production to
the Test database and the characters are translated correctly. But we
just haven't been able to figure out what it is - and it's been 48 hours'
worth of work and investigation. Any advice, guidance, or suggestions would
be greatly appreciated!  Thank you!

Jamen McGranahan
Systems Services Librarian
Vanderbilt University Library
Central Library
Room 811
419 21st Avenue South
Nashville, TN 37214

-
How is the information going into the database?  We had character issues
until we dealt with Perl's UTF-8 conversion and also set DBI to expect
UTF-8 characters.  The forms must also be configured to send UTF-8:

my $dbh = DBI->connect("dbi:mysql:$dbname", $dbuser, $dbpasswd,
                       { RaiseError => 1, mysql_enable_utf8 => 1 });
What is strange is that when it's not set up correctly, some characters
will go in and out correctly but others won't.  Not sure if any of this
will help.
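
A quick first check is to compare the character set variables on the two
servers; a mismatch in character_set_client, character_set_connection, or
character_set_results between the machines is a common cause of this
symptom.  A minimal sketch, run from the same client against each server:

-- diff this output between the Test and Production servers
SHOW VARIABLES LIKE 'character_set%';
SHOW VARIABLES LIKE 'collation%';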

--
William R. Mussatto
Systems Engineer
http://www.csz.com
909-920-9154





Re: Using Joins/Unions

2015-08-06 Thread Wm Mussatto
On Tue, August 4, 2015 16:05, Ryan Coleman wrote:
 No, but there should be. If there's not, my task is useless.

 Secondly yes. Unique name on it too.

 --
 Ryan Coleman
 Publisher, d3photography.com
 ryan.cole...@cwis.biz
 m. 651.373.5015
 o. 612.568.2749

 On Aug 4, 2015, at 17:33, Wm Mussatto mussa...@csz.com wrote:

 On Tue, August 4, 2015 11:19, Ryan Coleman wrote:
 I have been a MySQL user and supporter for over a decade (since 2001)
 and
 I am almost ashamed to admit that I haven’t the faintest idea on how to
 do
 joins and unions.

 I have a specific query I would love to run…

 I have two tables, one with Unique data (“images”) and one with
 corresponding paths but many replicated records (“files”).

 I want to run a query that takes the results from /images/ and also
 searches /images.file/ as a LIKE statement from /files.path/, sort by
 /files.filesize/ in descending order returning just the first record
 (largest file size).  There may be up to 750 records from /images/ and
 thusly could be 3000+ from /files/.

 How on earth do I do this?

 —
 Ryan
 First question, will there always be at least one record in the files
 table for every record in the images table?  That controls the kind of
 join you will use.  I don't think that a union is a player.  Also, is
 there a unique record ID in each of the tables?

Sorry, been swamped.  If you can ignore the cases where there is no entry
in the 'files' table, then a simple join will work.  Otherwise you would
need a LEFT JOIN.

Assume a structure:
images.id - unique record ID
images.commonName - this will be the common reference name in both
...
files.id - unique record ID
files.commonName - this will be the common reference name in both
files.location - where this record is on the disk/system
files.filesize - numeric field in whatever units you want (say bytes)


SELECT images.commonName, files.location, MAX(files.filesize)
FROM images, files
WHERE images.commonName = files.commonName
GROUP BY files.commonName
ORDER BY images.commonName;
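
One caveat with that GROUP BY: MySQL is free to return an arbitrary
files.location per group, so it may not be the row that holds the largest
filesize.  A deterministic variant might look like this (a sketch):

SELECT images.commonName, files.location, files.filesize
FROM images
JOIN files ON files.commonName = images.commonName
WHERE files.filesize = (SELECT MAX(f2.filesize)
                        FROM files f2
                        WHERE f2.commonName = files.commonName)
ORDER BY images.commonName;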

Here is my test structure.  No doubt someone else can get it optimized;
this seems to use a temp table.
--
-- Table structure for table `files`
--

CREATE TABLE `files` (
  `ID` int(10) unsigned NOT NULL AUTO_INCREMENT,
  `CommonName` varchar(255) NOT NULL,
  `Location` varchar(255) NOT NULL,
  `filesize` int(11) NOT NULL,
  PRIMARY KEY (`ID`),
  KEY `CommonName` (`CommonName`)
) ENGINE=MyISAM  DEFAULT CHARSET=latin1;

--
-- Dumping data for table `files`
--

INSERT INTO `files` (`ID`, `CommonName`, `Location`, `filesize`) VALUES
(1, 'Image1', 'FileLoc/1/image1.jpg', 1000),
(2, 'Image1', 'FileLoc/2/image1.jpg', 5),
(3, 'Image2', 'FileLoc/1/image2.jpg', 25000),
(4, 'Image2', 'FileLoc/2/image2.jpg', 5000);

-- 

--
-- Table structure for table `images`
--

CREATE TABLE `images` (
  `ID` int(10) unsigned NOT NULL AUTO_INCREMENT,
  `CommonName` varchar(255) NOT NULL,
  PRIMARY KEY (`ID`),
  UNIQUE KEY `CommonName` (`CommonName`)
) ENGINE=MyISAM  DEFAULT CHARSET=utf8;

--
-- Dumping data for table `images`
--

INSERT INTO `images` (`ID`, `CommonName`) VALUES
(1, 'Image1'),
(2, 'Image2');

Hope this helps a bit.

---

William R. Mussatto, Senior Systems Engineer
http://www.csz.com
Ph. 909-920-9154 ext 101
Direct: 909-962-8547





Re: Using Joins/Unions

2015-08-04 Thread Wm Mussatto
On Tue, August 4, 2015 11:19, Ryan Coleman wrote:
 I have been a MySQL user and supporter for over a decade (since 2001) and
 I am almost ashamed to admit that I haven’t the faintest idea on how to do
 joins and unions.

 I have a specific query I would love to run…

 I have two tables, one with Unique data (“images”) and one with
 corresponding paths but many replicated records (“files”).

 I want to run a query that takes the results from /images/ and also
 searches /images.file/ as a LIKE statement from /files.path/, sort by
 /files.filesize/ in descending order returning just the first record
 (largest file size).  There may be up to 750 records from /images/ and
 thusly could be 3000+ from /files/.

 How on earth do I do this?

 —
 Ryan
First question, will there always be at least one record in the files
table for every record in the images table?  That controls the kind of
join you will use.  I don't think that a union is a player. Also, is there
a unique record ID in each of the tables?
--
William R. Mussatto
Systems Engineer
http://www.csz.com
909-920-9154





Re: How to retrieve next record?

2014-12-11 Thread Wm Mussatto
On Thu, December 11, 2014 13:43, Larry Martell wrote:
 On Thu, Dec 11, 2014 at 4:34 PM, Trianon 33 triano...@gmail.com wrote:
 Hi all,

 It seems like a simple question, however I'm not able to find the
 answer.

 Let me explain. I have a DB with lots of columns, one is labeled as
 'kenteken' and it is also the primary key in the DB. I poke into this DB
 with the command

 $sDBQuery1 = "SELECT * FROM kentekenlogtest WHERE kenteken =
 '$sActueelkenteken'";

 This is ok and gives me the correct result. However, then I want the
 next
 row (according to the primary key), but with what command?

 That's were I'm stuck.

 Didn't find anything like NEXT, so... what to look for?


 What do you mean by 'next' row? Since you're querying by primary key
 you will only get one row, right?

Related: what is the form of the primary key?  If it's numeric (or sorts
correctly), something like
 $sDBQuery1 = "SELECT * FROM kentekenlogtest WHERE kenteken >
 '$sActueelkenteken' ORDER BY kenteken LIMIT 1";
might work.

--
William R. Mussatto
Systems Engineer
http://www.csz.com
909-920-9154





Re: Version 5.6.2-m5 Boolean Datatype

2013-05-22 Thread Wm Mussatto
Why not use ENUM with True|False or Y|N?  The only issue is that it doesn't
throw an error if you enter an illegal value, but I don't think I've ever
flagged the field as NOT NULL.
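
A sketch of the ENUM approach (illustrative table; in non-strict mode an
illegal value is stored as the empty string '' with a warning, while
STRICT_ALL_TABLES turns it into an error):

CREATE TABLE t (
  flag ENUM('N','Y') NOT NULL DEFAULT 'N'
);
INSERT INTO t (flag) VALUES ('Y');   -- ok
INSERT INTO t (flag) VALUES ('X');   -- '' plus warning, or error in strict mode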
On Wed, May 22, 2013 11:32, Darryle Steplight wrote:
 Hey Neil,
  Why not just store it as a TINYINT, that's what I do when I only care
 about 0 or 1 values?


 On Wed, May 22, 2013 at 2:19 PM, Neil Tompkins
 neil.tompk...@googlemail.com
 wrote:

 Hi Shawn

 I plan on installing the latest MySQL version tomorrow.  Does MySQL not
 support BOOL, e.g. true and false?

 Neil

 On 22 May 2013, at 19:05, shawn green shawn.l.gr...@oracle.com wrote:

  Hello Neil,
 
  On 5/22/2013 1:05 PM, Neil Tompkins wrote:
  Hi, like the link states:

  "For clarity: a TINYINT(1) datatype does NOT ENFORCE a boolean value
  data entry. For instance, it's still possible to insert a value of 2
  (any integer up to the TINYINT max value). I personally don't see the
  added value of a 'BOOLEAN' synonym type which in fact behaves unlike a
  boolean should."
 
  Has BOOL, BOOLEAN been taken out of MySQL 5.6 ?
 
 
  On Wed, May 22, 2013 at 6:01 PM, Ian Simpson i...@it.myjobgroup.co.uk
 wrote:
 
  BOOLEAN is a synonym for TINYINT(1) in MySQL:
 
  http://dev.mysql.com/doc/refman/5.6/en/numeric-type-overview.html
 
 
  On 22 May 2013 17:55, Neil Tompkins neil.tompk...@googlemail.com
 wrote:
 
  Hi,
 
  I've just created some tables that I designed using the MySQL Workbench
  Model.  However, the database type BOOLEAN which was in my models has
  been converted to TINYINT(1).  I'm currently running MySQL Version
  5.6.2-m5 on Windows 2008 server.

  Any ideas why this has been removed?

  This is exactly the same behavior that MySQL has had for over a decade.
  Nothing has been added or removed since release 4.1.0 (2003-04-03):
  http://dev.mysql.com/doc/refman/4.1/en/numeric-type-overview.html
  http://dev.mysql.com/doc/refman/4.1/en/news-4-1-x.html

  Also, why are you using a pre-release (milestone) version of 5.6 when
  the full release (GA) versions of 5.6 are available?
  http://dev.mysql.com/doc/relnotes/mysql/5.6/en/

  Regards,
  --
  Shawn Green






RE: Slow Response -- What Does This Sound Like to You?

2013-05-09 Thread Wm Mussatto
On Thu, May 9, 2013 15:25, Robinson, Eric wrote:

  -Original Message-
  From: Robinson, Eric [mailto:eric.robin...@psmnv.com]
  Sent: Thursday, May 09, 2013 1:58 PM
  To: mysql@lists.mysql.com
  Subject: Slow Response -- What Does This Sound Like to You?
 
  We have a situation where users complain that the system periodically
  freezes for 30-90 seconds. We check the slow query logs and find that
  one user issued a complex query that did indeed take 30-90 seconds to
  complete. However, NO slow queries are recorded for the other 50
  users, before, during, or after the freeze. Note that the complex
  query in question always shows: Lock_time: 0.

  Q: What conditions could cause a single query to lock up a database
  for a while for all users (even though it shows lock time: 0) but no
  other slow queries would show in the logs for any other users who are
  hitting the database at the same time?
 
  OS: RHEL3 x64
  CPU: 8 x 2.9GHz Xeon
  RAM: 32GB
  Disk: RAID 5 (6 x 512GB SSD)
  MySQL: 5.0.95 x64
  Engine: MyISAM
 


 MyISAM?  Or InnoDB?
 Lock_time perhaps applies only to table locks on MyISAM.

 SHOW ENGINE InnoDB STATUS;
 You may find some deadlocks.

 Is Replication involved?

 Anyone doing an ALTER?



 MyISAM, no replication involved, and nobody is altering the database. This
 happens whenever people run certain reports.


 --Eric
This may be a dumb question, but have you verified that the applications
do not issue a LOCK TABLES ...?  Either the big one or one of the others.
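
One way to catch it in the act, as a sketch: during a freeze, run

SHOW FULL PROCESSLIST;

and look for threads sitting in the "Locked" state; the thread they are
all waiting on is the one holding the table lock.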

--
William R. Mussatto
Systems Engineer
http://www.csz.com
909-920-9154





Re: Fwd: mysql_tzinfo_to_sql

2013-04-05 Thread Wm Mussatto
On Fri, April 5, 2013 13:19, Joe Kosinski wrote:
 I also tried mysql_tzinfo_to_sql /usr/share/zoneinfo > sqlLines with the
 following Terminal output:

   -bash: sqlLines: Permission denied

 Joe
Well, at least we are on familiar territory now.  Whatever directory you
are in requires that the user you are running as have permission to write
to it.  Try doing a cd first; that should get you back to the user's home
directory, where I would hope that user could create files.
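
Once the file is generated and loaded into the mysql database, a quick
check that the zone tables are populated (a sketch):

mysql> SELECT COUNT(*) FROM mysql.time_zone_name;   -- 0 means not loaded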
 Begin forwarded message:

 From: Joe Kosinski joj...@gmail.com
 Subject: Fwd: mysql_tzinfo_to_sql
 Date: April 5, 2013 3:25:59 PM EDT
 To: mysql@lists.mysql.com mysql@lists.mysql.com

 1. I was able to use use mysql; to change to the mysql database, and then
 I ran the queries, which showed that the time zone database is not
 populated.

 2. Then I tried to find the path of mysql_tzinfo_to_sql by running the
 command
 which mysql_tzinfo_to_sql with the following results:
  Last login: Fri Apr 5 14:50:02 on ttys000
  Joseph-Kosinskis-MacBook:~ josephkosinski$ which mysql_tzinfo_to_sql ; exit;
  logout
  [Process completed]

 However,  there was no output showing the path.  I ran the which command
 from the command drop down menu of the Shell menu.

 Are there any other suggestions on what I should do to populated the
 tables?

 Joe



 Joe

 Begin forwarded message:

 From: shawn green shawn.l.gr...@oracle.com
 Subject: Re: Fwd: mysql_tzinfo_to_sql
 Date: April 5, 2013 1:58:01 PM EDT
 To: Joe Kosinski joj...@gmail.com

 Hi Joe,

 On 4/5/2013 1:58 PM, Joe Kosinski wrote:
  The commands didn't work and the time zone tables are not loaded.
  Whenever I ran the queries
  mysql> SELECT * FROM time_zone;
  mysql> SELECT * FROM time_zone_name;
  I got an error message stating no database was selected.  I think when
  I tried to locate the path to mysql_tzinfo_to_sql previously, nothing
  worked.


 All tables are organized into databases. The timezone tables are in
 the `mysql` database. To change your active database, use the 'use'
 command in your session

 use mysql;

 --
 Shawn Green
 MySQL Principal Technical Support Engineer
 Oracle USA, Inc. - Hardware and Software, Engineered to Work Together.
 Office: Blountville, TN




--
William R. Mussatto
Systems Engineer
http://www.csz.com
909-920-9154





Re: MySQL Connection Information

2012-11-14 Thread Wm Mussatto
On Tue, November 13, 2012 22:38, Prabhat Kumar wrote:
 No - if you are using non-persistent connections and, once the query
 completes, you are closing the connection properly.

Is this true if you are using mod_php or equivalent?

 On Wed, Nov 14, 2012 at 11:44 AM, Trimurthy trimur...@tulassi.com wrote:

  hi every one, i am working with an application which is developed in
  php with mysql as the back end. in this application, on each and every
  page i am including config.php, which contains the queries to connect
  to the server. the user name and password are the same every time we
  connect to the database server. does it cause extra overhead on the
  server to process the connection request every time with the same user
  name and password?







 Thanks & Kind Regards,

  TRIMURTHY






 --
 Best Regards,

 Prabhat Kumar
 MySQL DBA

 My Blog: http://adminlinux.blogspot.com
 My LinkedIn: http://www.linkedin.com/in/profileprabhat







Re: Is ECC RAM necessary for MySQL server

2011-12-31 Thread Wm Mussatto
On Sat, December 31, 2011 05:14, Claudio Nanni wrote:
 Ryan,
 My opinion here.
 Any write to memory can go wrong,
 OS , MySQL , Storage engines, client library and so on.
 Innodb has some advanced mechanism for ACID compliance like the double
 write buffer but these are mostly to assure durability. Memory failure
 although not so frequent can still, in my opinion, corrupt anything
 included Innodb buffers.
 I would like the opinion of some other Innodb gurus.
 Happy new year.
 Claudio
 On Dec 31, 2011 2:04 PM, Ryan Chan ryanchan...@gmail.com wrote:

 Assume I am using InnoDB, which is ACID compliant.

 Do I still need to use ECC RAM, in order to make sure there is no
 chance of data corruption due data write?

 Thanks.
ECC memory helps ensure both the program and the data are correct.  If you
are running a 24/7 operation rather than a test system which will be
continually restarted, IMHO you need ECC memory.  Otherwise you cannot
guarantee that the instructions the program is executing are what the
program writer intended.  If the memory can have an error and your system
cannot detect it, ACID won't help; it will just ensure the error is
reliably written to disk.

--
William R. Mussatto
Systems Engineer
http://www.csz.com
909-920-9154





Re: FULL mysqldump

2011-12-23 Thread Wm Mussatto
On Fri, December 23, 2011 12:27, Reindl Harald wrote:


 Am 23.12.2011 21:14, schrieb Jim McNeely:
 Hello all, happy holidays!

 What is the best way to run a mysqldump to get the tables, the data, the
 triggers, the views, the procedures, the privileges and users,
 everything? It seems confusing in the online documentation, or is that
 just me?

 echo "Prepare while the server is running"
 rsync --times --perms --owner --group --recursive --delete-after
 /mysql_data/ /mysql_backup/
 echo "Offline sync"
 /sbin/service mysqld stop
 cd /Volumes/dune/mysql_data/
 rm -f /Volumes/dune/mysql_data/bin*
 rsync --progress --times --perms --owner --group --recursive
 --delete-after /mysql_data/ /mysql_backup/
 /sbin/service mysqld start
 /sbin/service mysqld start

 so you have a REAL consistent backup with minimal downtime that you can
 restore on any machine and pull dumps of whatever you really need,
 instead of braindead huge dumps with long locking time while they are
 made, or an inconsistent state without locking

 the first rsync runs while the server is online, and the second one,
 after mysqld is stopped, takes only a few moments because only the data
 changed in the meantime has to be synced again

 this way you can backup many GB of mysql-data with minimal downtime
 and 100% consistence

This is true if the problem is many relatively small tables.  Not sure how
well it would work if the problem was one or more very large tables.

--
William R. Mussatto
Systems Engineer
http://www.csz.com
909-920-9154





Re: One database per server architecture

2011-10-19 Thread Wm Mussatto
On Tue, October 18, 2011 22:40, Johan De Meersman wrote:
 - Original Message -
 From: Ryan Mark rm...@tribune.com

 (WordPress does not like to share an app server) and added memcache.

 Really? We run dozens of the thing on a couple of virtuals with no
 problems at all. Then again, we don't exactly get millions of hits on most
 of them, so if you do get a lot of traffic it might be a scaling issue.


 --
 Bier met grenadyn
 Is als mosterd by den wyn
 Sy die't drinkt, is eene kwezel
 Hy die't drinkt, is ras een ezel

I've turned on the reporting of full table scans into the slow query log,
and WordPress has a large number of full table scans, so it could easily
be a scaling issue.  We are running the current Debian release version so
YMMV.

--
William R. Mussatto
Systems Engineer
http://www.csz.com
909-920-9154





Re: optimising for 100000 entries

2011-09-15 Thread Wm Mussatto
On Thu, September 15, 2011 11:27, The Doctor wrote:
 On Thu, Sep 15, 2011 at 08:55:16AM -0400, Shawn Green (MySQL) wrote:
 On 9/14/2011 15:26, The Doctor wrote:
 On Wed, Sep 14, 2011 at 09:49:34PM +0530, Ananda Kumar wrote:
 So,
 You want to have 100,000 buttons for 100,000 entries or just have one
 filter
 column, which allows you to specify any type of WHERE CONDITION

 regards
 anandkl

 On Wed, Sep 14, 2011 at 7:17 PM, Arthur
 Fullerfuller.art...@gmail.comwrote:

 Forgive  my bluntness, but IMO it is silly to attempt to retrieve a
 100,000
 rows, except for reporting purposes, and in that case, said reports
 ought to
 run against a replica, not the OLTP instance.

 Far better, IMO, is to present (in the UI) an alphabet as buttons,
 plus a
 textbox for refinements. The alphabet buttons cause the recordSource
 to
 change to something like SELECT * FROM Clients WHERE ClientName LIKE
 'A*'.
 Click the B button and the RecordSource changes to SELECT * FROM
 Clients
 WHERE ClientName LIKE 'B*'. IMO, such an interface gives the user all
 the
 power she needs, and costs the system as little as possible.

 To accomplish this, all you need is a sproc that accepts one
 parameter,
 that being the letter corresponding to the letter-button the user
 pressed.

 I have implemented exactly this solution on a table with only half
 the
 number of rows you cite, but it works beautifully and it is quick as
 lightning.

 HTH,
 Arthur

 Arthur,

 this is exactly what comes to mind.

  I am wondering what needs to be adjusted in osCommerce for this to work.


 I am still confused by your question.  Most modern databases (even those
 that are not client-server capable) don't even break a sweat at handling
 only 100K rows of data. It is the types of queries you write and how
 much
 data you are attempting to move at any one time that are the most likely
 reasons for poor performance.

  Please clarify what you want to fix when you say "optimise MySQL for
  100000 entries".  Even with the minimal settings on a low-powered laptop, I
 would
 have no qualms about loading any version of MySQL produced in the last
 10
 years with a million rows of data and using it for personal research. Of
 course, there are things I could (and would) configure to help MySQL use
  its host system more efficiently. All of that is covered in the chapter
 in
 the operating manual called Optimization. Pick the link below that
 matches the version you are using for more information:
 http://dev.mysql.com/doc/refman/5.0/en/optimization.html
 http://dev.mysql.com/doc/refman/5.1/en/optimization.html
 http://dev.mysql.com/doc/refman/5.5/en/optimization.html

 Perhaps if you could tell us what you are trying to do we could suggest
 ways for doing it better?


 Clarification:

  I have 100000 **products** loaded into the shopping cart.

  From there it is slow to bring up the shopping cart.

 Check http://www.nk.ca/~aboo/racing/osc4/catalog/

 to see what is happening.
OK, this is a catalog, not a shopping cart.
I think you need to turn off buffering so that the web server sends stuff
out as it gets it.  It appears to be waiting for the full page to be built
before it sends anything.  It's a PHP system, so a PHP expert should chime
in with how to do this.

I suspect the category system, which is listing the number of products on
the left, is taking a lot of the time.
How do you link your products to your categories?  I suspect you will need
to add a column in the category table to list the number of products, so
you are not doing a SELECT COUNT(*) of products under category X each time.
If your select to list the categories is also doing the count, then you
might want to try separating that out (two SELECTs).  Without your table
structure I'm shooting in the dark a bit here.
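
A sketch of that denormalized count (osCommerce-style table names assumed
here for illustration, not taken from the post):

ALTER TABLE categories ADD COLUMN products_count INT UNSIGNED NOT NULL DEFAULT 0;

UPDATE categories c
SET c.products_count = (SELECT COUNT(*)
                        FROM products_to_categories ptc
                        WHERE ptc.categories_id = c.categories_id);

The sidebar then reads products_count instead of re-counting on every page
view; the count gets refreshed whenever products change.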

Just a few thoughts.  Luck.

The shopping cart is not the issue, the category list on the left is, I
think.


---

William R. Mussatto, Senior Systems Engineer
http://www.csz.com
Ph. 909-920-9154 ext 101
Direct: 909-962-8547





Re: Caution : Hard Disk Full

2011-08-26 Thread Wm Mussatto
On Fri, August 26, 2011 00:44, Johan De Meersman wrote:
 - Original Message -
 From: Adarsh Sharma adarsh.sha...@orkash.com

 Today by chance I am checking the space in mysql  data directories.

 By chance? That should be automated, as should a million other standard
 checks. Install Nagios.

 When it becomes full, I am sure my server down. Can anyone Please let
 me know the steps I need to follow in this condition.

 Pretty obvious: add space or remove data.

 Given that you say it's MyISAM data taking up the space, you simply need
 to optimize the tables to reclaim free space inside the datafiles - but
 make sure there's enough free space for a full copy of the remaining data,
 so start optimizing the smallest tables first. For InnoDB it's quite a bit
 more trouble.

 If there's no data you can delete, you could have a look at whether there
 are tables that don't need to be written to anymore - or set up archiving
 tables for exactly that purpose; you can convert those to compressed
 MyISAM, that should save quite some space, too.
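
A sketch of that reclaim pass (database name illustrative):

 -- list MyISAM tables smallest-first, with reclaimable space
 SELECT table_name, data_length, data_free
 FROM information_schema.tables
 WHERE table_schema = 'mydb' AND engine = 'MyISAM'
 ORDER BY data_length;

 OPTIMIZE TABLE mydb.some_small_table;   -- repeat, smallest first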

 Adding disks, well... if you set up the server with LVM or MD that
 shouldn't be too hard, but it looks like you didn't. Welcome to
 screwville. You'll need to swap out the disk for a larger one (yes,
 downtime) and copy all the data. That, or *add* a disk, set that one up
 with LVM, copy the data there and then add the old disk as a second
 physical volume and expand the logical volume.


 And this, dear pupils, is why we tell the marketeers to go screw
 themselves until they can provide a three-year volume estimate.

I would only add: while you are in there, add two disks and use MD to
create RAID 1, then put LVM on top of that.  With MD you can replace one
of the underlying drives with a larger one, bring that in, then replace
the other, bring that in, and increase the available space.  BTW: on
modern processors I've found MD to be faster than hardware RAID unless
you go real high end.  Plus you can put the disks on separate controllers.
Not a trivial exercise, but you are not locked in.
--
William R. Mussatto
Systems Engineer
http://www.csz.com
909-920-9154





RE: optimization strategies based on file-level storage

2011-06-17 Thread Wm Mussatto


On Fri, June 17, 2011 07:11, Jerry Schwartz wrote:
 -----Original Message-----
 <snip>

 What I was really trying to figure out was why it takes me 4 hours to
 add a new column to my 22-million-row table, and whether a different
 table design can avoid that problem.
 That reply in the forum says, "ALTER TABLE ... ADD COLUMN will always
 copy the entire table over, and rebuild all the indexes. (And,
 effectively, do an OPTIMIZE.) Do _not_ leave space for extra columns, it
 won't help."  I'm about to reply and point out the trick that you
 suggested to me: create dummy columns early and then just rename them
 later :)

 -Bennett

 [JS] They will be too small, or the wrong type, or there won't be enough
 of them.  Based upon 30+ years of database design, I'd bet money on
 it. ;-)

 Regards,

 Jerry Schwartz
 Global Information Incorporated

The only alternative design would be to create another table with the
added columns and a common key field, then lock the primary table and
populate the new table with the keys from the original table - and I'm
not convinced that would be any faster or less disruptive.
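
A sketch of that alternative (table and column names are illustrative,
not from the thread):

CREATE TABLE big_table_extra (
  id      INT UNSIGNED NOT NULL PRIMARY KEY,
  new_col VARCHAR(64) NULL
);

LOCK TABLES big_table READ, big_table_extra WRITE;
INSERT INTO big_table_extra (id) SELECT id FROM big_table;
UNLOCK TABLES;

-- queries then join the two tables on id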
--
William R. Mussatto
Systems Engineer
http://www.csz.com
909-920-9154


RE: Doubt regarding Mysqldump

2011-06-09 Thread Wm Mussatto


On Thu, June 9, 2011 11:59, Jerry Schwartz wrote:

<snip>

 A single table is always consistent. Data inconsistency occurs in sets
 of interrelated tables, in other words, on the database level.

 [JS] Not even a single table is always consistent (unless there are
 transactions). Consider a single transaction that consists of two steps:

 1. Delete record A
 2. Add record B

 Now consider

 1. Delete record A
 -Backup starts-
 2. Add record B

 You have no idea whether or not record B will be in your backup.

 Worse things can happen, of course:

 1. Delete record A
 !!KABOOM!!

 The data in the table is not going to be consistent.  You'd have to
 analyze the data to find out what did and didn't happen before the
 crash, back out step 1, and re-run that transaction in the application.

 Regards,

 Jerry Schwartz

Ah, LOCK TABLES, which is the MyISAM equivalent of transactions.  That
will deal with the problem of a backup in the middle, yes?  If that won't
work then you do need transactions, IMHO.
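
For MyISAM, mysqldump's --lock-tables option takes those locks for you;
in SQL terms the dump window looks roughly like this (a sketch):

LOCK TABLES t1 READ LOCAL, t2 READ LOCAL;
-- the dump's SELECTs run here; READ LOCAL still permits concurrent
-- MyISAM inserts at the end of the table
UNLOCK TABLES;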

--
William R. Mussatto
Systems Engineer
http://www.csz.com
909-920-9154


Re: URGENT: Change Default Location of where Database Files get written?

2011-05-13 Thread Wm Mussatto
On Fri, May 13, 2011 08:21, Tina Matter wrote:
 I have a MySQL question that I'm hoping someone can help answer.

 We have a linux machine which has MySQL 5.5.8 installed.
 It is currently installed in this location:/opt/mysql

  When creating a new database, a folder (with the name of the database)
  gets created in this location:
 /opt/mysql/data

 Is there any way to change the location of where data is stored?
 The database that I need to create is going to have over a billion
 records in it,
 so it needs to be in a specific place.

 I want the database folder to get created here:

 /science/databases/databasename

 Thanks for any help.
 Tina

 --
 Tina Matter
 Web Applications Developer
 University of Michigan
 Department of Epidemiology
 1415 Washington Heights, Suite 4605
 Ann Arbor, MI 48109
Since you are on a Linux box, the simplest method is to create the
database, but no tables, and then replace the directory with a symbolic
link to the desired location.  Make sure the permissions on the new
directory match those of the other directories.  If you are using InnoDB
you will need to set it to use separate files for each table
(innodb_file_per_table).

Hope this helps.

--
William R. Mussatto
Systems Engineer
http://www.csz.com
909-920-9154





Re: A common request

2011-03-31 Thread Wm Mussatto
On Thu, March 31, 2011 12:33, mos wrote:
 At 11:20 AM 3/31/2011, you wrote:
At 07:29 AM 3/31/2011, you wrote:
Thanks for your insight! But I'm still worried about the performance of
IN ( big list of values ). Can you tell me how it is implemented?

Suppose I have SELECT a FROM b WHERE c IN (1, 4, 5, 117, 118, 119, ...,
387945)

1) If I put 200 values there, does it do 200 individual SELECTs
internally, and union them?

No. It uses one Select statement.

Or does it notice that c has a UNIQUE index and thus at most one row
can be returned per SELECT, and does them all at once?

The IN() clause is very inefficient because MySQL will NOT use the index.
It will have to traverse the entire table looking for these values. That
is why a table join will be much faster than using IN().

 Oops, sorry - the IN() clause in MySQL 5.5 does use the index (not sure
 when they implemented that).  Even so, I still find it slow when the IN()
 clause has a lot of elements.
 I should have used an EXPLAIN in front of my SELECT statement to see if
 the index is used before I posted.
It's in the 5.0 series.  When you made your original statement I went back
and checked, since I use it in place of multiple ORs.

 Mike

2) If I want to get just the primary key, or join with another table
based on just the primary key, does this query ever touch the disk
(assuming the index is in memory, which I think it always is -- correct
me if I'm wrong about that).

It will get the information from the index and not have to access the
record data from disk. If the index is stored in memory, then it won't
have to go to disk (unless you also have a sort). That is why the query
cache is so important.


The way I would recommend doing it (for BTREE indexes, anyway) is to
 sort
the values in ascending order, and do the search in one pass through the
index. The index is already in memory, and it would be straightforward
 to
modify a binary search algorithm to find the rows corresponding to
monotonically ascending values of the primary key, all in one pass.

Even if the binary search algorithm is run 200 or 2000 times for a list,
it would still be faster than hitting the disk. (Even though the CPU
cache performance would be worse.)

Can you let me know the specifics of it, and especially how I can avoid
hitting the I/O bottlenecks?

Use a table join and make sure you have the indexes loaded into memory.
See http://dev.mysql.com/doc/refman/5.0/en/load-index.html. If using
InnoDb then its index cache scheme is quite good.
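
A sketch of the join alternative, reusing the column names from the
example earlier in the thread:

-- instead of: SELECT a FROM b WHERE c IN (1, 4, 5, 117, ...)
CREATE TEMPORARY TABLE wanted (c INT NOT NULL PRIMARY KEY);
INSERT INTO wanted (c) VALUES (1), (4), (5), (117);

SELECT b.a
FROM b
JOIN wanted w ON w.c = b.c;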


Mike


Thank you,
Greg

On 3/29/11 4:17 PM, Peter Brawley wrote:
  Why not optimize the IN ( ... ) to do the same type of thing?

If the argument to IN() is a list of values, it'll be OK. If it's a
SELECT, in 5.0 it will be slower than molasses (see The unbearable
slowness of IN() at http://www.artfulsoftware.com/queries.php).

  I always tried to avoid joins because I am planning to horizontally
 partition my data.

A severe  unfortunate constraint. Can't help you there.

PB

-

On 3/29/2011 1:27 PM, Gregory Magarshak wrote:
Yes, this would be fine. But often, the list of friends is obtained
from a social network like facebook, and is not stored internally.
Basically, I obtain the friend list in a request to facebook, and then
see which of those users have created things. So would I have to
 create
a temporary table and insert all those uids just to make a join? Why
not optimize the IN ( ... ) to do the same type of thing?

There is also a second problem: I want to use MySQL Cluster, because I
expect to have many users. Would it be efficient to use JOIN between
the friends table and the articles table? Both tables are partitioned
by user_id as the primary key, so the join would have to hit many
different nodes. I always tried to avoid joins because I am planning
 to
horizontally partition my data. But if MySQL cluster can handle this
join transparently and split it up based on the partition, then that's
fine. Do you have any info on this?

Greg

On 3/29/11 2:10 PM, Peter Brawley wrote:
  How can I quickly find all the articles written by this user's
 friends, and not just random articles?

Taking the simplest possible case, with table
 friends(userID,friendID)
where each friendID refers to a userID in another row, the friends of
userID u are ...

select friendID from friends where userID=u;

so articles by those friends of u are ...

select a.* from article a join ( select friendID from friends where
userID=u ) f on a.userID=f.friendID;

PB

-

On 3/29/2011 12:50 PM, Gregory Magarshak wrote:
Hey there. My company writes a lot of social applications, and there
is one operation that is very common, but I don't know if MySQL
supports it in a good way. I thought I'd write to this list for two
 reasons:

 1) Maybe MySQL has a good way to do this, and I just don't know
 about it

 2) Propose to MySQL developers a simple algorithm which would
 greatly improve MySQL support for social networking apps.

 Here is the situation. 

Re: Question about Backup

2011-03-22 Thread Wm Mussatto
On Tue, March 22, 2011 10:32, Karen Abgarian wrote:
 Why? If they shut down the slave, it will be quite consistent.  Only this
 technique is not so much of the 21st century; it is more like 30 years
 old.  Placing locks is about the same as shutting it down.

Ah, but if you have the dump function do the locking, it will also remember
to restart it when it's done.

 On Mar 22, 2011, at 6:01 AM, Johan De Meersman wrote:


 You are assuming that the database is one table of 5.000 gigabyte, and
 not 5.000 tables of one gigabyte; and that the backup needs to be
 consistent :-p


 - Original Message -
 From: Reindl Harald h.rei...@thelounge.net
 To: mysql@lists.mysql.com
 Sent: Monday, 21 March, 2011 12:44:08 PM
 Subject: Re: Question about Backup

  Forget mysqldump, because of TABLE LOCKS for such huge databases.
  I would set up a replication slave, because you can stop
  the slave and make a filesystem backup of the whole db folder
  while the production server is online; we have done this with our
  dbmail server since 2009.


 --
 Bier met grenadyn
 Is als mosterd by den wyn
 Sy die't drinkt, is eene kwezel
 Hy die't drinkt, is ras een ezel












Re: Backup Policy

2011-03-15 Thread Wm Mussatto
On Tue, March 15, 2011 12:36, Joerg Bruehe wrote:
 Hi!


 Adarsh Sharma wrote:
 Dear all,

  Taking backups is a necessary task on database servers. [[...]]

 Correct.


 We have options RAID, mylvmbackup , mysqldump. But it depends on the
 company requirement too.

 RAID is no backup!

 A RAID system may give you protection against a single disk drive
 failing (depending on the RAID level you configure), but it doesn't
 protect you against misuse, operator error, malware (virus, trojan),
 wilful deletion, ...

 RAID is no backup!  (This can't be repeated often enough.)

 Doing a backup means to take the data (including schema, privileges,
 passwords, triggers, ...) to some independent media where it is safe
 from all misfunction on the original machine.
 IMNSHO, a backup must be taken offline or write-protected in some other
 way, so that even a misfunction of the backup machine does not damage
 your backup.

 Old tape drives (or newer tape cartridges) with their physical write
 protection (ring, slider, ...) did provide such protection, it is a pity
 that they are too slow and too small for today's data (or too expensive
 for most people).

 With disks, my solution is:
 - Have the backup disks on a separate machine, via the network.
 - Have external backup disks, which are powered off if not is use.
 - Have two (or more) and use them alternating, so that even in case of a
   misfunction or drive failure (affecting the backup disk currently in
   use) the previous backup (on the other disk) remains safe.


  We have a database of more than 250GB in MySQL which is increasing day
  by day. Currently I am using the mysqldump utility of MySQL.
  I performed a full backup about 28 days ago. But is there any mechanism
  or script to take only incremental backups on a weekly or daily basis?

  Data is inserted in separate tables in separate databases. We can't
  afford some proprietary solution.

 If you can afford downtime (shutting down the database), dirvish is a
 good means to take a file system backup (all your data areas). Check it
  at www.dirvish.org.  There are plenty of alternatives, but I didn't try
 most of them. What I did try was rsnapshot, but I found it too
 inflexible for my purposes.

 I can't comment on the other approaches.

  Whatever approach you take: make sure the backup gets stored on some
  separate, protected media.


 HTH,
  Jörg

 --
 Joerg Bruehe,  MySQL Build Team,  joerg.bru...@oracle.com

Another advantage of external drives is they can be taken OFF site and
stored away from the building.  We use three: one on site, one in transit,
and one that mirrors (off site, of course) the images of the other two.
We dump nightly and then back up that dump.  We also back up the binary
logs, which get rotated every two days (a restore is the nightly backup
followed by the binary logs).  The only restore we have had to do was for
the nephew who knows html.  The disks are RAIDed, but as was stated, that
is to protect against single point failure.
--
William R. Mussatto
Systems Engineer
http://www.csz.com
909-920-9154





Re: MySql - crashes daily

2010-12-03 Thread Wm Mussatto


On Fri, December 3, 2010 14:11, Basil Daoust wrote:
 Any help greatly appreciated.

 Our mysql is restarting a LOT!
 Here is what is in the log; it doesn't seem to give much help.

 --start of log--
 InnoDB: Warning: a long semaphore wait:
 101203 15:12:40 - mysqld got signal 11;
 This could be because you hit a bug. It is also possible that this
 binary or one of the libraries it was linked against is corrupt,
 improperly built, or misconfigured. This error can also be caused by
 malfunctioning hardware.
 We will try our best to scrape up some info that will hopefully help
 diagnose the problem, but since we have already crashed, something is
 definitely wrong and this may fail.

 key_buffer_size=1073741824
 read_buffer_size=16773120
 max_used_connections=601
 max_connections=600
 threads_connected=27
 It is possible that mysqld could use up to
 key_buffer_size + (read_buffer_size + sort_buffer_size)*max_connections
 = 20706971K bytes of memory
 Hope that's ok; if not, decrease some variables in the equation.

 thd=(nil)
 Attempting backtrace. You can use the following information to find out
 where mysqld died. If you see no messages after this, something went
 terribly wrong...
 frame pointer is NULL, did you compile with -fomit-frame-pointer?
 Aborting backtrace!
 The manual page at http://www.mysql.com/doc/en/Crashing.html contains
 information that should help you find out what is causing the crash.
 --end of log--

 So before you ask if we have 20G of memory: we are running this on
 Amazon's High-Memory Quadruple Extra Large Instance - 68.4 GB of memory,
 26 EC2 Compute Units (8 virtual cores with 3.25 EC2 Compute Units each),
 1690 GB of local instance storage, 64-bit platform.

 We are using Fedora Core 8, with MySQL 5.0.45.
 These mysql packages are installed:
 mysql.x86_64         5.0.45-6.fc8   installed
 mysql-libs.x86_64    5.0.45-6.fc8   installed
 mysql-server.x86_64  5.0.45-6.fc8   installed

 It seems odd to me that we get an InnoDB error when this server doesn't
 have a single InnoDB table.
 I also did notice the max_used_connections, but of the last 20 or so
 restarts only 2 hit 601; others were as low as approx 150.

 I'm hopeful that someone has an idea or two.
 Basil

You might check your my.cnf file to see if the InnoDB engine is enabled.
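
A quick way to check from the client rather than grepping my.cnf (a
sketch):

SHOW VARIABLES LIKE 'have_innodb';   -- YES / NO / DISABLED on 5.0
SHOW ENGINES;                        -- every engine and its status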



--
William R. Mussatto
Systems Engineer
http://www.csz.com
909-920-9154


Re: filesystem or database??

2010-09-16 Thread Wm Mussatto


On Thu, September 16, 2010 13:30, mohit verma wrote:
 hello,
 when we install any package, the terminal shows at the end "reading
 database" (used by package managers like .deb or rpm). but as in linux
 everything is nothing but a file, the database shown above must be a part
 of the filesystem. so why do we not say "reading filesystem"? what
 differs, and what imposes saying filesystem or database differently, in
 spite of one being a subset of the other (the filesystem)?

A database can be stored as a file or a collection of related files.  It
need not be a hash or RDBMS based system.

IMHO, 'reading a filesystem' would imply scanning a bunch of directories
which contain unrelated files, whereas 'reading a database' implies the
information is in some sort of defined structure.

Just my $0.02 worth.

---

William R. Mussatto, Senior Systems Engineer
http://www.csz.com
Ph. 909-920-9154 ext 101
Direct: 909-962-8547



RE: Unique ID's across multiple databases

2010-09-13 Thread Wm Mussatto


On Mon, September 13, 2010 15:37, Daevid Vincent wrote:
 -----Original Message-----
 From: Kiss Dániel [mailto:n...@dinagon.com]
 Sent: Monday, September 13, 2010 5:59 AM

 Well, thanks, but I'm afraid using UUIDs (even with hex compression) is
 kind of suicide when it comes to performance.
 This is a good summary about the issues:
 http://www.mysqlperformanceblog.com/2007/03/13/to-uuid-or-not-to-uuid/

 Is this UUID issue unique to MySQL or are there other RDBMSs that handle
 it better (Postgres, Oracle, SQL Server, etc.)?

 I too have a need for a unique identifier that will mesh with other
 databases periodically, so that a user in one local DB/server will get
 migrated to a master DB which in turn will sync up with remote sites, so
 that all sites will have all users in them each night (for example).

 Having a mapping of UUID to local ID seems one way, but I feel there is a
 lot of room for collisions and integrity issues that way, no?

 There are some solutions at the bottom of that blog post. Are those not
 good then? They seem interesting to me.

Why does it have to be one field?  Use two fields: ServerID and SequenceID.
Across servers the pair would be unique, and within a given server the
SequenceID is the equivalent of a manual auto-increment field.  Set it via
MAX(SequenceID)+1 where ServerID is the local server ID.  Have an index on
the combined fields, and perhaps on the SequenceID field as well.
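
A minimal sketch of that two-field scheme (table and column names are
illustrative):

CREATE TABLE users (
  server_id   SMALLINT UNSIGNED NOT NULL,
  sequence_id BIGINT UNSIGNED   NOT NULL,
  name        VARCHAR(64)       NOT NULL,
  PRIMARY KEY (server_id, sequence_id),
  KEY (sequence_id)
);

-- manual auto-increment scoped to the local server (here, server 3);
-- under concurrent writers this needs LOCK TABLES or a transaction
INSERT INTO users (server_id, sequence_id, name)
SELECT 3, COALESCE(MAX(sequence_id), 0) + 1, 'alice'
FROM users WHERE server_id = 3;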


--
William R. Mussatto
Systems Engineer
http://www.csz.com
909-920-9154


Re: Is upgrading from 4.X to 5.X really that easy?

2010-08-16 Thread Wm Mussatto


On Mon, August 16, 2010 07:26, Nunzio Daveri wrote:
 Hi all, I was reading a few of the notes on this forum from a few months
 back, and it seems that ONE WAY of upgrading from 4.x to 5.x with
 MyISAM-only databases is to copy the actual data folder from the 4.x
 version to a temp place, then remove 4.x from the OS, install 5.x and
 then just put the 4.x data folder into the 5.x folder???

 Is it really that simple?  Has anyone done this and can verify this
 please?  I am thinking I am missing a few commands you have to run at
 least???  My database is pretty small, only 1.8GB, so I am thinking this
 is a walk in the park :-)

 Please advise...

 And as always... TIA...

 Nunzio

What version of 4.x?  I upgraded between Debian stable versions and got
burned because in the middle of the 4.x group MySQL changed to a more
correct handling of JOINs.  It didn't affect the data, which should work
as you expect, but it did have a subtle impact on the SELECT statements
embedded in various programs.  Read the release notes between YOUR current
and new versions.
--
William R. Mussatto
Systems Engineer
http://www.csz.com
909-920-9154


Re: Moving from one MySQL server to three MySQL servers?

2010-08-04 Thread Wm Mussatto
On Wed, August 4, 2010 11:40, Nunzio Daveri wrote:
 Hello Gurus :-)  I was running a simple load generator against our 16GB
 dual quad-core server and it pretty much came down to its knees within
 two hours of running tests.  The customer DOES NOT WANT to change any
 code; they just want to throw hardware at it, since it took them a year
 to create all of the code.  It is a 140GB database with 21GB of indexes,
 all using InnoDB - currently doing 70% reads and 30% writes.

 My question is what is the best way of distributing the load without
 changing any of the php / perl code that their web server uses?  This is
 what I am thinking, but I need someone to tell me if it is a good idea or
 a bad one, please:

 1. Set up a single master and 2 slaves.  The question is how to tell the
 web servers to get all the read data from the slaves and to only write
 to the master?

 2. Install a MySQL proxy box and let mysql proxy handle the load; problem
 is, now it is the SPOF!

 3. Use DNS round robin, BUT how to tell round robin to ONLY go to the
 master for writes and ONLY use one of the 2 slaves for reads?

As was mentioned, knowing what the test was would help.  Are you using a
single file or a separate file per table?  If this is a web application,
separate the database onto another server, move to separate files per
table, and put the files on separate spindles.  Of course, the real next
step is to find where the actual bottleneck is.  Do you have the slow
query log enabled, etc.?  What are the results?  How critical is the
consistency between reads and writes?  There will be a lag between the
master and slave which may or may not be critical.

--
William R. Mussatto
Systems Engineer
http://www.csz.com
909-920-9154





Re: newb problem

2010-07-20 Thread Wm Mussatto
On Tue, July 20, 2010 09:28, dennis skinner wrote:





 Hello, I am a new mysql user.  Can anyone tell me why this does not
 create a table?

  <?php    (the spaces before the question mark are not in the code)
 $dbuser="smeduser";
 $dbpassword="x";
 $dbname="smed";
 mysql_connect("localhost", $dbuser, $dbpassword);
 mysql_select_db($dbname) or die("unable to select database");
 $query="create table patnotes(patid int(9) not null unsigned, patnote
 int(6) not null unsigned auto_increment,
 parentid int not null unsigned, appuserid varchar(40) not null, subject
 varchar(100) not null,
 body longtext not null),
 primary key(patnote),
 unique id(patnote)";
 mysql_query($query);
 mysql_close();
 ?>    (the closing question mark and caret are on this line)

 This does not build a table and I am wondering what syntax I am missing
 here.  Thanks, dennis
First question:
Does smeduser have table creation privileges in this database?
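
Beyond privileges, the statement itself has syntax problems: UNSIGNED must
directly follow the integer type, and the closing parenthesis lands before
the key clauses.  A corrected form, as a sketch of the intended schema:

CREATE TABLE patnotes (
  patid     INT(9) UNSIGNED NOT NULL,
  patnote   INT(6) UNSIGNED NOT NULL AUTO_INCREMENT,
  parentid  INT UNSIGNED NOT NULL,
  appuserid VARCHAR(40) NOT NULL,
  subject   VARCHAR(100) NOT NULL,
  body      LONGTEXT NOT NULL,
  PRIMARY KEY (patnote)
);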
--
William R. Mussatto
Systems Engineer
http://www.csz.com
909-920-9154





RE: opening a server to generalized queries but not too far

2010-06-16 Thread Wm Mussatto
On Wed, June 16, 2010 14:47, Don Cohen wrote:
 Daevid Vincent writes:

   For the love of God and all that is holy,
   do NOT put the user/pass on the URL like that!!
 What's so unholy (or even unwise) about it?
The username and password show up in logs on the server and in the
browser's cache, since they are part of the page's address.  Anyone who
has access to either will get them.  Remember, browsers cache history.
   Or use mod_auth_mysql to maintain your 'authorized' users to your
 page.
 Why is this so much better?
 In my case it's worse cause I don't want this to be interactive.
 I want to install something on a user's machine that will access his
 data without him having to do anything.  The url is to be used by
 a program, not a person on a browser.

    And as Adam beat me to, use a VIEW to expose ONLY the columns and
    joins you want. This is also a good time to normalize the data and
    column names so that you don't expose what their internal names are.
 So far I don't like this solution.

    But also, as he said, I don't see what you're trying to accomplish.
  I'm trying to let a large number of users each access his own data
  and not everyone else's, in a very flexible way - in particular,
  allowing selection using functions, aggregations, filters.

    If someone is technically literate enough to format SQL statements,
    then just give them a read-only account to the mysql (or view)
    directly. Let them use their own GUI tool like SQLYog or whatever --
    it will be far more robust than anything you can write yourself.
  In this case there may be a lot of users but the queries are likely to
  be written by a small number.

   If you're trying to do some reports, then just code up the reports
 and
   use select boxes for the options you want someone to choose. Use jQuery
 and
   table_sorter plugin and you're done.
 I can't predict what options will be needed.
 And this seems much easier.
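
A sketch of the VIEW-plus-read-only-account approach suggested above
(database, table, and account names are illustrative):

CREATE VIEW mydb.my_rows AS
SELECT id, created, payload
FROM mydb.data
WHERE owner = SUBSTRING_INDEX(SESSION_USER(), '@', 1);

GRANT SELECT ON mydb.my_rows TO 'reader'@'%' IDENTIFIED BY 'secret';

Each user then sees only rows whose owner column matches the user name
they connected with.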









Re: a mysqldump causes all other queries to be queued super slow. Any clue what is going on?

2010-03-22 Thread Wm Mussatto
On Mon, March 22, 2010 11:08, Andres Salazar wrote:
 Hello,

 Everytime i run a mysqldump (mysql-server-5.0.77) all the other
 legitimate queries that are ocurring at that time pretty much sleep
 and build up in the processlist untill I either stop the dump or wait
 for it finish. The moment i do either one i can have about 8-15
 queries waiting they all free up immediately.

 What could be causing this? I know there might be many factors
 involved but Id like to get some clues as to where to look.

 Thanks

 Andres

First, what are the table types?
Did you set --lock-tables?  That will lock the tables within a database,
which could cause the behavior you are seeing; I get a similar result when
I do a REPAIR TABLE.  For transaction-enabled table types, try
--single-transaction, since it avoids read locks (according to the man
pages).
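
In SQL terms, --single-transaction wraps the dump in a consistent-snapshot
read, roughly (InnoDB only):

START TRANSACTION WITH CONSISTENT SNAPSHOT;
-- the dump's SELECTs all see the same point in time here,
-- without blocking concurrent writers
COMMIT;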

--
William R. Mussatto
Systems Engineer
http://www.csz.com
909-920-9154





Re: mysql RAID

2010-03-10 Thread Wm Mussatto
On Wed, March 10, 2010 09:04, Dan Nelson wrote:
 In the last episode (Mar 10), John G. Heim said:
 I have read (and have been told) to stay away from RAID-5 for
 update-intensive systems.  Are there performance concerns with RAID-10
 as
 well?  We will be buying from Dell (done deal for reasons too
 complicated
 to go into) and the disks they're selling are 146 Gb.  I can get up to 8
 of them in the server we're buying.  I asked them about just getting 2
 big
 disks and going with RAID-1.

  My understanding is that with RAID-10, the system can do multiple reads
  and writes simultaneously, so throughput is improved over systems
  without RAID or with RAID-1.  But the same logic would apply to RAID-5,
  only it doesn't work out that way.

 RAID-5 has an extra penalty on small random writes due to the I/O required
 to maintain the parity blocks (it does 2 reads and 2 writes for every
 write
 your app does).  RAID-10 is just a mirror so it doesn't have to worry
 about
 that.

 --
   Dan Nelson
   dnel...@allantgroup.com

If you can get the disks cheap you might want to get one or two extras and
keep them as spares.

--
William R. Mussatto
Systems Engineer
http://www.csz.com
909-920-9154





Re: compare column value to anything in a list of values

2010-02-17 Thread Wm Mussatto
On Wed, February 17, 2010 09:24, Cantwell, Bryan wrote:
 Is there a simple function or method to compare a value in a column to one
 or more items in a comma separated list?

 select * from table where value contains one of
 ('apple','orange','banana');

 and say value may = something like
 'produce(grape,orange,pear,apple,lettuce)'
 --
 MySQL General Mailing List
 For list archives: http://lists.mysql.com/mysql
 To unsubscribe:http://lists.mysql.com/mysql?unsub=mussa...@csz.com


IN('value1','value2') should work for exact matches; it also works for
integer values.
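
For matching one element inside a stored comma-separated list, as in the
example above, FIND_IN_SET is closer to the question (a sketch; note that
FIND_IN_SET and a leading-wildcard LIKE cannot use an index):

-- exact match against a list of values
SELECT * FROM t WHERE value IN ('apple','orange','banana');

-- match one element of a comma-separated string column
SELECT * FROM t WHERE FIND_IN_SET('apple', value_list) > 0;

-- arbitrary substring match; beware false hits ('pineapple' matches)
SELECT * FROM t WHERE value LIKE '%apple%';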

--
William R. Mussatto
Systems Engineer
http://www.csz.com
909-920-9154


-- 
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/mysql?unsub=arch...@jab.org



RE: Join Suddenly Failing

2010-01-22 Thread Wm Mussatto
On Fri, January 22, 2010 09:42, Scott Swaim wrote:
 I ran into this when I upgraded from 4.0.xx to 5.0.xx.  There was a change
 in the logic for the joins.  I determined that the FROM clause needs to be
 in parentheses, i.e. FROM (team, person, teamperson); this allows all of
 the fields in all of the tables to be used.  The change was made in MySQL
 so that otherwise only the last table (i.e. teamperson) is used for your
 JOIN.

 Scott Swaim
 I.T. Director
 Total Care / Joshua Family Medical Care
 (817) 297-4455
 Website: www.totalcareclinic.com
Actually the change happened half way through the 5.0.x series. Basically
you need to tell MySQL which tables you wish to join on.  Parentheses are
one way to say check all.
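
For example, the failing query quoted below should work again written with
the parentheses (a sketch, untested):

  SELECT contactinfo.contactdata, contactinfo.ContactID
  FROM (team, person, teamperson)
  LEFT JOIN personcontact ON person.PersonID = personcontact.PersonID
  LEFT JOIN contactinfo ON personcontact.ContactID = contactinfo.ContactID
  WHERE team.teamID = 22
    AND team.TeamID = teamperson.TeamID
    AND teamperson.PersonID = person.PersonID
    AND person.PeopleTypeID = 5
    AND contactinfo.ContactTypeID = 2;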

 NOTICE: The information contained in this e-mail is privileged and
 confidential and is intended for the exclusive use of the recipient(s)
 named
 above. If you are not the intended recipient or his or her agent, you are
 hereby notified that you have received this document in error and that any
 use, disclosure, dissemination, distribution, or copying of this message
 is
 prohibited. If you have received this communication in error, please
 notify
 the sender immediately by e-mail, and delete the original message
 -Original Message-
 From: Albert Padley [mailto:ap3des...@gmail.com]
 Sent: Friday, January 22, 2010 11:37 AM
 To: mysql@lists.mysql.com
 Subject: Join Suddenly Failing

 I have a website that gets used once a year for a soccer tournament. It
 has
 been working fine since 2006. No script changes since it was last used in
 2009. All of a sudden the following script started throwing an error.

 SELECT contactinfo.contactdata, contactinfo.ContactID FROM team, person,
 teamperson
 LEFT JOIN personcontact ON person.PersonID = personcontact.PersonID
 LEFT JOIN contactinfo ON personcontact.ContactID = contactinfo.ContactID
 WHERE team.teamID = 22
 AND team.TeamID = teamperson.TeamID
 AND teamperson.PersonID = person.PersonID
 AND person.PeopleTypeID =5
 AND contactinfo.ContactTypeID =2

 Error: Unknown column 'person.PersonID' in 'on clause' (1054)

 There are several of these type scripts and all are giving a similar
 error.

 The server version is 5.0.87. I suspect the hosting company may have
 upgraded to a new version of mysql.

 Thanks.

 Albert


--
William R. Mussatto
Systems Engineer
http://www.csz.com
909-920-9154


-- 
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/mysql?unsub=arch...@jab.org



Re: MySQL being hacked with commands through URL

2009-11-18 Thread Wm Mussatto
On Thu, November 19, 2009 09:47, James Coffman wrote:
 Hello all,

 My website has been hacked using a url such as:
 -1%20union%20all%20select%201,2,concat(username,char(58),password),4,5,6%20f
 rom%20users-- .

 I have been searching on the web for a solution/fix to this issue and I
 cannot seem to find one.  The command above is showing all usernames and
 passwords (in hashes) and I am not comfortable with that at all!  Is there
 anyone out there that may be able to help or may be able to point me in
 the
 direction that I need to go in order to correct this issue?
Looks like a SQL injection attack.  You should always filter any input
from the web to accept only those characters and conditions which are
reasonable for that field.

In perl you should also either $dbh->quote($inputString) or use the '?'
place holder mechanism.
For example if I'm expecting a page number (or other whole number) from
form variable PAGEID I do something like this.

($pid) = $q->param('PAGEID') =~ /(\d+)/;  Basically it will only accept
0-9s as input.   Hope this helps.
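
A minimal placeholder sketch in perl/DBI (assuming a $dbh connected as in
my earlier post; 'pages' and 'page_id' are only for illustration):

  my $sth = $dbh->prepare('SELECT title FROM pages WHERE page_id = ?');
  $sth->execute($pid);   # the driver quotes/escapes $pid for you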


How do you have your database server setup?  How are the commands being
passed to the database?


SOURCE IP FROM HEADER:

*Please block this account's access to the *
*internet until its cleaned up.  We are basing *
*this on an analysis of the header NOT the FROM*
*address.  *

--
William R. Mussatto
Systems Engineer
http://www.csz.com
909-920-9154


-- 
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/mysql?unsub=arch...@jab.org



Re: MySQL being hacked with commands through URL

2009-11-18 Thread Wm Mussatto
On Thu, November 19, 2009 09:47, James Coffman wrote:
 Hello all,

 My website has been hacked using a url such as:
 -1%20union%20all%20select%201,2,concat(username,char(58),password),4,5,6%20f
rom%20users-- .

 I have been searching on the web for a solution/fix to this issue and I
cannot seem to find one.  The command above is showing all usernames and
passwords (in hashes) and I am not comfortable with that at all!  Is
there anyone out there that may be able to help or may be able to point
me in the
 direction that I need to go in order to correct this issue?
Looks like a SQL injection attack.  You should always filter any input
from the web to accept only those characters and conditions which are
reasonable for that field.

Update to our phone conversation: it looks like the id value is NOT a
number ('ss' looks like '55' in my web font, sorry).

In perl you should also either $dbh->quote($inputString) or use the '?'
place holder mechanism.
For example if I'm expecting a page number (or other whole number) from
form variable PAGEID I do something like this.

($pid) = $q->param('PAGEID') =~ /(\d+)/;  Basically it will only accept
0-9s as input.   Hope this helps.


How do you have your database server setup?  How are the commands being
passed to the database?



--
William R. Mussatto
Systems Engineer
http://www.csz.com
909-920-9154




-- 
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/mysql?unsub=arch...@jab.org



Re: many-inserts go faster the second time

2009-10-02 Thread Wm Mussatto


On Fri, October 2, 2009 12:28, Bennett Haselton wrote:
 At 02:53 AM 10/2/2009, Joerg Bruehe wrote:
 Hi Bennett, all!

 Bennett Haselton wrote:
  At 08:24 AM 9/25/2009, Dan Nelson wrote:
  In the last episode (Sep 25), Bennett Haselton said:
   I have a script that runs several times in the evening, and on each
   run it adds several thousand entries to a table.
  
   On the first run, it adds the entries rather slowly.  But then on
   all subsequent runs (usually about a minute or two later), the many
   inserts go a lot faster.  This is true regardless of how many
   entries are added by each run -- whether the first and second run
   both add 50,000 or the first and second run both add 10,000, the
   first run goes slowly and the second one goes fast.  But by the
   following evening, the first run is back to going slowly again.
  
   It's as if in the minute or two following the first run of the
   script, MySQL catches its breath and realizes, hey, that table is
   getting a lot of entries added to it, so it waves some magic dust so
   that the next time I add a lot of entries, it goes a lot faster.
   (Hope I'm not losing anybody with the technical terminology here.)
   Then by the next evening the optimization parameter has
   exp^W^W^W^W the fairy dust has worn off.
  
  More likely, this is a relatively unused table, and the first batch of
  inserts pulls most of the index and some of the table data into RAM,
  which makes for much faster lookups on the next run.  What do top and
  iostat stats show on both runs?  I'd expect heavy disk usage and
  little CPU on the first run, and light disk and heavier CPU usage on
  the second.
 
  That's interesting, I can look at that next time I try it.  But if
  that's the case, wouldn't the first run go slowly at first, but then
  pick up speed once all of the indexes etc. have been pulled into
  memory?  Because that's not what I'm seeing -- if I insert 50,000 in
  the first run, it's slow all the way through, but then the second
  50,000 get inserted quickly.

 Your fairy dust is called access pattern, evaluated by a LRU or
 similar policy.

 Don't forget you may have caching on two levels: database and operating
 system.  Both have their own cache aging mechanisms.
 The details about caching and its effects will vary by the table
 handler you are using, MyISAM structures and policies definitely
 differing from InnoDB ones.

 Even if MySQL would not cache data and index pages, they would still
 reside in the operating system's file I/O cache, so the next access to
 them will be faster than the first one - regardless whether you read
 them or modify them.

 However, sooner or later they will be removed from all caches because
 they are not accessed until the next evening, whereas other pages were
 accessed and needed space in RAM.
 (Here, I ignore the case of a RAM which is larger than all data
 accessed for a day, it is too unlikely.)
 In the evening, when your job is run again, this starts anew.

 I understand that, but here's the problem:

 If the speed increase were just the result of values being recently
 accessed, then the speed increase should be a function of the number of
 inserts that I've already done.  So if I insert 10,000 rows in one
 script run and then 10,000 rows in a second script run, and the second
 script run is a lot faster, then the first 10,000 inserts were enough
 to optimize everything.  So that should mean if I do 50,000 inserts in
 a single script run, then the first 10,000 inserts should be enough to
 speed everything up.

 But that's not what I'm seeing.  What I'm seeing is that if I do 10,000
 inserts on the first run and 10,000 on the second, then the first run
 is slow and the second run is fast.  On the other hand if I do 50,000
 inserts on the first run and 50,000 on the second, then the entire
 first run is slow and the entire second run is fast.

 In any case, is there anything I can do to force MySQL to manually
 pre-optimize the entire table (even if it takes some time to do so,
 reading indexes into memory or whatever), other than kludgy solutions
 like doing a dummy insert of several thousand rows and then deleting
 them?

 -Bennett

Maybe not dummies but rather a subset of the first group.  Then do the
rest of the first set.
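
(As an aside: for MyISAM tables there is also an explicit way to warm the
key cache before the evening run; a sketch, where 'mytable' is only a
placeholder:)

  LOAD INDEX INTO CACHE mytable;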


RE: MySQL runs on 16-cores server

2009-04-13 Thread Wm Mussatto
On Mon, April 13, 2009 11:55, mos wrote:
 Jerry,

 At 09:53 AM 4/13/2009, Jerry Schwartz wrote:
Sorry for top-posting, but this is getting unwieldy.

The problems with hardware in multiprocessor systems have been dealt with
long since, assuming that Intel, AMD, et al have implemented the
 solutions.
Ten years ago and more, I worked with machines capable of 128 processors
 and
they seemed to work okay.

 Well having a machine with 128 processors and actually getting MySQL to
 take advantage of 128 processors is a different matter entirely.
 MySQL does not scale well beyond 4 processors, at least not like
 PostgreSql
 does. MySQL seems to hit a plateau rather quickly. If XtraDb's modified
 Innodb plugin scales better, then fine.  But I haven't seen any benchmarks
 showing the speed improvements relative to the number of processors used
 and is something I'd really like to see.

Of course, there was a price difference. :)

As others said, the major bottlenecks are likely to be internal (to the
 DB)
locking and disk access speed.

 Of course. When it comes to MySQL, I would invest more money into more
 memory and fast SSD drives rather than more CPU's. You'll get a bigger
 bang
 for the buck. :)

 Mike
It sounds like we are talking about a server where everything is trying to
get at the same database and tables, correct?  Sort of: if you had to put
Best Buy or Sears on a box, how would you do it, vs. if you had many
different databases all being hit at the same time?  Has anyone
benchmarked that scenario?

--
William R. Mussatto
Systems Engineer
http://www.csz.com
909-920-9154


-- 
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/mysql?unsub=arch...@jab.org



Re: MySQL runs on 16-cores server

2009-04-10 Thread Wm Mussatto
On Fri, April 10, 2009 05:24, Uwe Kiewel wrote:
 Moon's Father wrote:
 Hi.
If the server has 16 cores, how do I set parameters to make MySQL run
 well?

 IIRC mysqld is multi threaded - so if you have parallel queries, mysqld
 will spam multiple threads across multiple cores.
---
Don't you mean spaN.  I hope mySQL doesn't SPAM. ;-}
 HTH,
   Uwe


 --
 MySQL General Mailing List
 For list archives: http://lists.mysql.com/mysql
 To unsubscribe:http://lists.mysql.com/mysql?unsub=mussa...@csz.com





-- 
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/mysql?unsub=arch...@jab.org



Re: Re: Separate customer databases vs all in one

2009-03-18 Thread Wm Mussatto
On Wed, March 18, 2009 06:29, Johnny Withers wrote:
 Everytime we make a schema change in development we save an SQL script
 into
 a directory named in MMDD format. The scripts are named in
 MMDD_NNN_DESCRIPTION.SQL format, where NNN is the sequence number.
 This
I realize it's probably a bit late for this, but wouldn't YYYYMMDD... make
more sense?  That way they would sort in proper order.
 allows us to string the files together in the proper order to apply when
 an
 update goes out. We simply look at the last update date, combine all the
 scripts (either using cmd line utilities or an app we wrote to do it) and
 apply this combined script to all the databases. The application to all
 databases is currently handled by hand each time since we don't update
 more than once a week and there are not a lot of databases (15 to 20
 right now).

 I can see us needing an automated way to apply these scripts in the future
 and it'll probably involve a custom application that makes use of a
 settings
 file that has the connection string for each database. It'd simply loop
 these connections and apply the scripts over a date range.

 We don't have a need for being able to query multiple databases at a time.
 Since some of these databases reside on different servers, I don't even
 think we could do that -- unless we started using an engine that supports
 this.

 -jw
 On Wed, Mar 18, 2009 at 7:01 AM, Stephen Sunderlin
 vze80...@verizon.netwrote:


 How do do you synchronize alter schema across the databases and what
 method
 do you use for internal reporting aggregate across the databases?

 Mar 18, 2009 06:47:52 AM, joh...@pixelated.net wrote:

 I have an application and database doing this right now. We run both
 ways, a single db per customer and a single customer per db. Smaller
 customers are in a shared enviroment and larger customers have their
 own db, sometimes they even have their own server. We use a company_id
 field in each table to identify the data.


  On 3/17/09, Arthur Fuller fuller.art...@gmail.com wrote:
  Are these databases identical or merely similar? If they are
 structurally
  identical, I'd go for one database per customer. Then you have
 isolation,
  easy structure updates and above all, consistent front-end code, in
  whatever language that occurs. Just obtain the customer ID and then
 use
 the
  appropriate database. Everything else can remain the same.
 
  The only fly in the ointment concerns whether you'd ever have the
 customer
  need to cross databases. I would imagine that sort of thing is for
 internal
  use, not the customers. In that case, the performance hit if any won't
  impact upon the customer, just you.
 
  hth,
  Arthur
 
  On Tue, Mar 17, 2009 at 8:21 PM, Daevid Vincent dae...@daevid.com
 wrote:
 
  I'm writing a report tool wherein we have many customers who
 subscribe
  to this SaaS. There are millions of rows of data per customer. All
  customers are islands from each other (of course).
 
  Are there any major issues or benefits between storing each customer
 in
  their own database (with their own tables), or all lumped into a
 single
  database?
 
  At first thought, it seems that by separating them, queries should be
  faster, no (as there is less data to sift through per customer)? It of
  course makes upgrading table schema a wee bit more cumbersome, but a
  simple loop and script can handle that easily enough. And since you
 can
  query across databases, we can still make internal aggregate reports
 for
  our own usage.
 
  For example: SELECT * FROM customerA.foo.bar JOIN customerB.foo.bar;
 or
  we can use UNIONS etc. too.
 
  Consolidating them into one would seem to bloat the tables and slow
  things down (or is the fact that mySQL uses B-Trees invalidate that
  theory)? It also makes us have to have a customer_id entry in every
  table basically (or some FK to distinguish whose data is whose). It
 also
  feels like it could leak data if a malformed query were to get
 through,
  although I'm not terribly worried about this as we do some heavy UAT
  before pushing from DEV to TEST to PROD.
 
  Performance is a major factor concern here given our huge data sets
  involved. Does joining across databases impose any speed/performance
  hits vs. just joining across tables within a single database?
 
  http://daevid.com
 
 


 --
 -
 Johnny Withers
 601.209.4985
 joh...@pixelated.net

 --
 MySQL General Mailing List
 For list archives: http://lists.mysql.com/mysql
 To unsubscribe:
 http://lists.mysql.com/mysql?unsub=stephen.sunder...@verizon.net




 --
 -
 Johnny Withers
 601.209.4985
 joh...@pixelated.net




-- 
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/mysql?unsub=arch...@jab.org



Performance Spamassin PostgreSQL vs MySQL

2009-03-18 Thread Wm Mussatto
We are currently using PostgreSQL to store the Bayes information.  It
seems to periodically spend a lot of time 'vacuuming', which of course
drives up disk load.  The system admin has adjusted it so it only does
this at low load.  I'm curious if anyone has actually tested the
PostgreSQL vs MySQL versions.  We are currently running a uniprocessor
system (Linux version 2.6.18-6-vserver-amd64, Debian 2.6.18.dfsg.1-24).

System appears disk limited, we have the files on hardware raid 0 and have
moved nearly everything else off that set (they are the fastest drives).

Just curious.  Thanks.

Bill Mussatto
CyberStrategies, Inc.
www.csz.com


-- 
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/mysql?unsub=arch...@jab.org



Re: Upgrading

2009-03-09 Thread Wm Mussatto
On Mon, March 9, 2009 12:51, Mihail Manolov wrote:
 I would strongly suggest logging all your 4.0 queries for at least 24
 hours and then running them on your new 5.x server to avoid any
 surprises such as incompatible queries for example.

 Good luck!

 Mihail
Good idea.  I would pay particular attention to LEFT JOINs.  MySQL began
more strictly following the SQL specs and that caused me problems when I
did the upgrade.

Bill
 On Mar 9, 2009, at 1:42 PM, Matthew Stuart wrote:

 Hi all, I am on... wait for it... version 4.0.25 and I want to
 upgrade to MySQL 5.x

 Is there anything special I should do in order to upgrade? Do I need
 to uninstall v4 or can I just download the most current version and
 double click to upgrade? I am not particularly hardcore mysql
 minded, and quite honestly, if it ain't broke don't fix it is a good
 mantra of mine... but I need to move on now as I have received an
 sql file that is v5 compatible but not v4 compatible.


 Once I have it up and running I'll be fine, it's just that I am
 nervous about upgrading and consequently breaking it, and at that
 point, I'll be struggling to put it right.

 Any advice on how I can best do this / best practices etc will be
 very much appreciated.

 Many thanks.

 Mat

 --
 MySQL General Mailing List
 For list archives: http://lists.mysql.com/mysql
 To unsubscribe:
 http://lists.mysql.com/mysql?unsub=mmano...@liquidation.com






-- 
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/mysql?unsub=arch...@jab.org



Re: MySQL Closing/Opening tables

2009-02-27 Thread Wm Mussatto
On Fri, February 27, 2009 05:50, Baron Schwartz wrote:
 On Fri, Feb 27, 2009 at 4:19 AM,  dbrb2002-...@yahoo.com wrote:
 Hi

 Recently I noticed the server takes lot of time on and off when opening
 and closing tables. And I tried to increase the table_cache more the the
 total tables (file_limit is properly set); and the problem still
 continues and lowering it also continues.. and tried to set in middle..
 same

 Any thoughts on fixing this ? I am going crazy..

 Sometimes the threads spin 10-60secs in just opening and closing tables
 state..

 Have you checked to see if your disk is saturated with requests?  Try
 this:

 vmstat 5 5
 iostat -dx 5 5
Slight variant if you use logical volumes:
iostat -x 10 /dev/sda /dev/sdb /dev/sdc /dev/sdd /dev/sde
where the /dev/...'s are the actual base disks.  W/O the -d you get cpu
loads as well.   I use top -i (then z for color) if I need to know what
processes are running.  This is on Debian GNU Linux.
Look at the await column: "The average time (in milliseconds) for I/O
requests issued to the device to be served.  This includes the time spent
by the requests in queue and the time spent servicing them."

 Assuming you're on a Unix-like OS.

 --
 Baron Schwartz, Director of Consulting, Percona Inc.
 Our Blog: http://www.mysqlperformanceblog.com/
 Our Services: http://www.percona.com/services.html

 --
 MySQL General Mailing List
 For list archives: http://lists.mysql.com/mysql
 To unsubscribe:http://lists.mysql.com/mysql?unsub=mussa...@csz.com






-- 
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/mysql?unsub=arch...@jab.org



Re: MySQL Closing/Opening tables

2009-02-27 Thread Wm Mussatto
Looks like the system is doing a lot of disk WRITES.  Your writes/sec are
much higher than your reads/sec, but the time requests wait in the queue
is low.  Did you try top -i with the x option entered?  That will produce
a colored line if a task is I/O bound.

On Fri, February 27, 2009 11:51, dbrb2002-...@yahoo.com wrote:
 Thanks for the quick followup Baron..

  vmstat
 procs ---memory-- ---swap-- -io --system--
 -cpu--
  r  b   swpd   free   buff  cache   si   sobibo   in   cs us sy id
 wa st
  3  0100 499380 139256 560400000   190   693   11   11 20  2
 70  8  0

 iostat -dx 5 5

 Device: rrqm/s   wrqm/s   r/s   w/s   rsec/s   wsec/s avgrq-sz
 avgqu-sz   await  svctm  %util
 sda   0.02   187.72  0.28 10.36 4.66  1584.73   149.44
 2.03  191.16   5.23   5.56
 sdb   1.85   371.84 21.72 56.86  1514.44  3956.6769.63
 2.63   33.43   3.44  27.03

 Device: rrqm/s   wrqm/s   r/s   w/s   rsec/s   wsec/s avgrq-sz
 avgqu-sz   await  svctm  %util
 sda   0.00  1527.80  0.40 73.80 3.20 12812.80   172.72
 1.59   21.46   0.24   1.76
 sdb   0.0011.60  2.40 10.8065.60   950.4076.97
 0.085.82   3.67   4.84

 Device: rrqm/s   wrqm/s   r/s   w/s   rsec/s   wsec/s avgrq-sz
 avgqu-sz   await  svctm  %util
 sda   0.00   173.00  0.00 11.60 0.00  1476.80   127.31
 0.58   50.24   8.22   9.54
 sdb   0.00 5.80  0.00 13.40 0.00   153.6011.46
 2.61  195.13   5.63   7.54

 Device: rrqm/s   wrqm/s   r/s   w/s   rsec/s   wsec/s avgrq-sz
 avgqu-sz   await  svctm  %util
 sda   0.00  1624.40  0.00 126.00 0.00 14003.20   111.14
  5.31   42.14   0.38   4.84
 sdb   0.00   599.60  0.00 82.20 0.00 15697.60   190.97
 1.79   21.75   2.32  19.08

 Device: rrqm/s   wrqm/s   r/s   w/s   rsec/s   wsec/s avgrq-sz
 avgqu-sz   await  svctm  %util
 sda   0.00  1538.80  0.20 61.60 1.60 12803.20   207.20
 5.10   82.48   2.37  14.62
 sdb   0.20 1.00  7.40  2.40   200.0027.2023.18
 0.22   22.73  10.73  10.52


 The above is when low to medium load..



 
 From: Baron Schwartz ba...@xaprb.com
 To: dbrb2002-...@yahoo.com
 Cc: mysql@lists.mysql.com
 Sent: Friday, February 27, 2009 5:50:11 AM
 Subject: Re: MySQL Closing/Opening tables

 On Fri, Feb 27, 2009 at 4:19 AM,  dbrb2002-...@yahoo.com wrote:
 Hi

 Recently I noticed the server takes lot of time on and off when opening
 and closing tables. And I tried to increase the table_cache more the the
 total tables (file_limit is properly set); and the problem still
 continues and lowering it also continues.. and tried to set in middle..
 same

 Any thoughts on fixing this ? I am going crazy..

 Sometimes the threads spin 10-60secs in just opening and closing tables
 state..

 Have you checked to see if your disk is saturated with requests?  Try
 this:

 vmstat 5 5
 iostat -dx 5 5

 Assuming you're on a Unix-like OS.

 --
 Baron Schwartz, Director of Consulting, Percona Inc.
 Our Blog: http://www.mysqlperformanceblog.com/
 Our Services: http://www.percona.com/services.html




-- 
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/mysql?unsub=arch...@jab.org



RE: how to design book db

2009-01-06 Thread Wm Mussatto
On Tue, January 6, 2009 07:55, Jerry Schwartz wrote:

-Original Message-
From: c...@l-i-e.com [mailto:c...@l-i-e.com]
Sent: Tuesday, January 06, 2009 9:47 AM
To: mysql@lists.mysql.com
Subject: RE: how to design book db


Just theories here:

The same book re-issued by another publisher might have a different ISBN.

A book with an insert (e.g., CDROM) may have a different ISBN, but be
the same for some purposes.

And mistakes can be made...

Ultimately, I suspect that the uniqueness of ISBN to what normal folks
call the same book is not as clear as one would hope.


 [JS] I'm really glad I was able to eavesdrop on this conversation. I had
 no
 idea the ISBN issue was so murky.

 For better or worse, most of my publishers don't use ISBNs; many of them
 don't even assign product numbers.

 I guess the only way around it is to assign your own unique key.

The problem is books from different publishers are different editions and
need to be differentiated.  If you take a class, the professor will want
everyone to be looking at the same edition.  If he hands out errata sheets
and the second publisher corrects the first (or has new errors), things
will get confusing.  Author and Title would be the alternative index,
unless your system doesn't care about whether the book has a CD or not.  I
think our local library decided to come up with their own IDs since they
have both hardcover and paperback editions of the same book.  Even though
the paperback and the hardcover were the 'same' book, if I had lost the
paperback I would object to paying for a hardcover.

Just my $0.02 worth.

--
William R. Mussatto
Systems Engineer
http://www.csz.com
909-920-9154


-- 
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/mysql?unsub=arch...@jab.org



Re: how to design book db

2009-01-06 Thread Wm Mussatto
On Tue, January 6, 2009 13:30, PJ wrote:
 My comments, questions  explanations inserted below

 mos wrote:
 At 09:55 AM 1/6/2009, you wrote:

 -Original Message-
 From: c...@l-i-e.com [mailto:c...@l-i-e.com]
 Sent: Tuesday, January 06, 2009 9:47 AM
 To: mysql@lists.mysql.com
 Subject: RE: how to design book db
 
 
 Just theories here:
 
 
 
 The same book re-issued by another publisher might have a different
 ISBN.
 
 
 
 A book with an insert (e.g., CDROM) may have a different ISBN, but be
 the same for some purposes.
 
 
 
 And mistakes can be made...
 
 
 
 Ultimately, I suspect that the uniqueness of ISBN to what normal folks
 call the same book is not as clear as one would hope.
 
 
 [JS] I'm really glad I was able to eavesdrop on this conversation. I
 had no
 idea the ISBN issue was so murky.

 For better or worse, most of my publishers don't use ISBNs; many of
 them
 don't even assign product numbers.

 I guess the only way around it is to assign your own unique key.

 Not only do I have 2 ISBNs for a few books, but there is also the
 problem of books in foreign languages (French, Italian, German,
 Spanish)  -  my boss (my daughter who  owns  http:// www.ptahhotep.com )
 tells me that  foreign editions do have different ISBN numbers  but
  not to worry, they are different but they are unique and would not
 conflict with the US numbers.
 BTW, it might be worth while for PJ to look at how Amazon stores its
 data. I don't think you can find an easier to use database for
 searching on books. It looks like they store 2 ISBN numbers per book.
 It also appears they use fulltext indexing on a lot of fields so the
 user can search on anything.
 I'll look at the Amazon stuff. Thanks for the suggestion.

  But this is where I get a little bit muddled:
  I have a problem (probably just understanding how things work) with
  categories. Since there are many categories (somewhere like 40+), how do
  I handle that? Would it be best to set up foreign keys for a categories
  table?
  But then there is the problem of fulltext indexing... it only works with
  MyISAM but foreign keys only work with InnoDB?
  So do I use MyISAM, a categories field, and just use 1 table for books
  with fulltext indexing on the description, title, author, and categories
  fields?

  And what about fulltext indexing? Do I really need that? I thought it
  would be sufficient to search the fields for words or phrases (in the
  case of categories, which often will be like "second intermediate
  period").

 In effect, what I expect to be doing is to use php to format the pages
 dynamically using the search functions of MySQL - so the categories
 would be on a static page with javascript dropdown fields for the
 different categories which would poinnt to a file that would do the
 required search and php would then set up the page to display the
 retrieved info.
You could have a category table, a table to relate categories to
categories, and another table to relate books to categories.  That's the
general solution, since books could be in multiple categories.  I did a
system like that for a bookstore, but they didn't want to put the books
into categories and just wanted to search.  If you are using a procedural
language you can use that instead of foreign keys to enforce consistency;
then you can use MyISAM tables.  A sketch follows.
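
A minimal schema sketch of that three-table layout (all names are
illustrative; MyISAM per the fulltext discussion above):

  CREATE TABLE category (
    cat_id INT NOT NULL AUTO_INCREMENT PRIMARY KEY,
    name   VARCHAR(100) NOT NULL
  ) ENGINE=MyISAM;

  -- optional parent/child links between categories
  CREATE TABLE category_category (
    parent_id INT NOT NULL,
    child_id  INT NOT NULL,
    PRIMARY KEY (parent_id, child_id)
  ) ENGINE=MyISAM;

  -- many-to-many link between books and categories
  CREATE TABLE book_category (
    book_id INT NOT NULL,
    cat_id  INT NOT NULL,
    PRIMARY KEY (book_id, cat_id)
  ) ENGINE=MyISAM;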




 Also check out http://www.abebooks.com/

 re: ISBN numbers. If this application is for a used bookstore then
 you're going to have to allow books without ISBN's because books from
 40 years ago of course don't have ISBN's.
 No, this is not for a used bookstore... only for the ptahhotep site
 which is only a bibliography of books on Ancient Egypt. :-) - so you're
 right, there are old books in there.
 PJ



 Also see http://en.wikipedia.org/wiki/Isbn.

 Mike




-- 
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/mysql?unsub=arch...@jab.org



Re: Trying to Create a Trigger

2008-12-05 Thread Wm Mussatto
On Fri, December 5, 2008 12:14, Martijn Tonies wrote:
 Hi,

 What is the exact error message?


 Here's the latest query:

 delimiter //
 create trigger jobposts_control
  before delete on jobposts for each row
 begin
declare dummy varchar(255);
set @counted = (
select count(ad.adsource_id)
from adsource ad, jobposts jp
where ad.adsource_id = jp.adsource_id
and old.jobpost_id = jp.jobpost_id
);
   if @counted = 1 then SET dummy = 'Cannot delete this record'; end
 if;
 end //
 delimiter ;

 Here's the error message:

 ERROR: Unknown column 'old.jobpost_id' in 'where clause'

 Works fine here (although with different tables).

 Just for fun and giggles, have you tried OLD. (uppercase?)


 Okay . . . I tried OLD.


 delimiter //
 create trigger jobposts_control
  before delete on jobposts for each row
 begin
declare dummy varchar(255);
set @counted = (
select count(ad.adsource_id)
from adsource ad, jobposts jp
where ad.adsource_id = jp.adsource_id
and OLD.jobpost_id = jp.jobpost_id
);
if @counted = 1 then SET dummy = 'Cannot delete this record'; end
 if;
 end //
 delimiter;

 Try:

 end; //
 delimiter ;



 I just wrote the trigger source in the Trigger Editor in Database
 Workbench
 and it doesn't bother about delimiters and such...

 It also has no ; after the final END, but hey, who knows ;-)

 With regards,

 Martijn Tonies
 Upscene Productions
 http://www.upscene.com
I think you are missing the point.  Where is 'OLD' or 'old' defined?
Before you try to embed it in a trigger, try the basic query.  That seems
to be what it's complaining about.

--
William R. Mussatto
Systems Engineer
http://www.csz.com
909-920-9154


-- 
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/[EMAIL PROTECTED]



Re: moving from 3.23.58 to 5.0.45

2008-10-30 Thread Wm Mussatto
One thing that tripped me up when I did the same move was the change in
LEFT JOIN syntax.  It now joins to the LAST table in a list.  This change
actually happened half way through the 5.0.x series.  It doesn't affect
data, but it does affect queries.  There MAY also be a difference in how
string comparisons occur.  I've had to add 'RTRIM' when comparing CHAR
types with strings from the web, which are not space extended, but that
may just be my version of DBD::mysql.    Good luck.
On Thu, October 30, 2008 05:29, Claudio Nanni wrote:
 Hi Mark,

 from my experience I can tell you that you can easily migrate.

 A quick and dirty way is to use the two(or more) different MySQL
 installations
 on same server (or even different ones) and different ports of course.

 You dump the whole DB from the 3.23.58 and import it on the 5.0.45.
 If you have problems try with 4.1 or 4.0 as a midway step.

 At this very moment I can't tell you if you will have problems with the
 'big' jump, but you should test with a smaller set of data(if your DB is
 huge)
 and after that decide if do the direct jump or use a step in the middle.

 Here's the idea:

 [3.23.58]---[5.0.45]
 OR
 [3.23.58]---[4.x]---[5.0.45]
 OR
 [3.23.58]---[4.0.x]---[4.1.x]---[5.0.45]


 Commands to use:

 [3.23.58]# mysqldump --all-databases > dump.sql

 will do the job and

 [5.0.45] mysql> source dump.sql

 will complete the opera!

 remember that after that also the grant tables will be replaced from the
 original DB(3.23.58)

 hope it helps


 Claudio




 2008/10/30 Obantec Support [EMAIL PROTECTED]

 Hi

 are there any doc's on how to migrate DB in 3.23.58 format to 5.0.45
 moving from a Fedora Core3 to Centos5.2 server.

 Mark



 --
 MySQL General Mailing List
 For list archives: http://lists.mysql.com/mysql
 To unsubscribe:
 http://lists.mysql.com/[EMAIL PROTECTED]






-- 
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/[EMAIL PROTECTED]



Re: Major Performance Degradation after replacing Hard Drive

2008-07-21 Thread Wm Mussatto
On Mon, July 21, 2008 09:14, Brent Baisley wrote:
 Copying 5GB files shows you what kind of performance you would get for
 working with say video, or anything with large contiguous files.
 Database access tends to be random, so you want a drive with faster
 random access, not streaming speed. Try copying thousands of small
 files and compare the speeds.

 One odd thing to check is if the old drive supports command queueing
 and the new one does not. I assume that are both SATA drives. All SCSI
 drives support command queueing and it can make a huge difference
 depending on access patterns.
Also check that it's turned ON.
 Brent

 On Mon, Jul 21, 2008 at 8:42 AM, Phil [EMAIL PROTECTED] wrote:
 Nothing else running and no queries go against that table, it's
 effectively
 created just for this, so I would expect the table lock.

 Show (full) processlist has nothing but this running..

 Confirmed the faster disks by copying 5Gb files between two of the same
 type
 of disk (I installed two of them). 2xfaster than previous disks.

 my.cnf

 [mysqld]
 datadir=/var/lib/mysql
 socket=/var/lib/mysql/mysql.sock
 user=mysql
 # Default to using old password format for compatibility with mysql 3.x
 # clients (those using the mysqlclient10 compatibility package).
 old_passwords=1
 key_buffer_size=3072M
 max_allowed_packet=16M
 thread_stack=128K
 thread_cache_size=64
 thread_concurrency=8
 sort_buffer_size=32M
 join_buffer_size=3M
 read_buffer_size=16M
 query_cache_size=64M
 query_cache_limit=8M
 table_cache=300
 max_connections=500
 max_heap_table_size=1024M
 tmp_table_size=1024M
 myisam_sort_buffer_size=128M
 wait_timeout=3000

 set-variable=long_query_time=6
 log-slow-queries=/var/log/mysql-slow-queries.log

 8Gb Ram on this machine which is an intel quad core.

 Anything else I'm missing? It's *possible* a colleague had changed the
 my.cnf in the last few weeks and mysqld hadn't been restarted, but I
 don't
 see anything obvious in there and he can't remember.

 :(



 On Mon, Jul 21, 2008 at 7:08 AM, Ananda Kumar [EMAIL PROTECTED] wrote:

 Is there any other job running while the update is happening. Because,
 myisam does a table level lock. Please check the show full
 processlist.

 Also run mysqladmin -uroot -pxxx status. This would write lock
 information
 into the machine.err log file. Check in this file also if there is any
 locking happening.

 R u sure this disk is a FASTER disk than the earlier one?




 On 7/21/08, Phil [EMAIL PROTECTED] wrote:

 Hi All,


 Given a fairly simple table as follows

 CREATE TABLE `common_userx2` (
 `t_proj` char(6) default NULL,
 `t_id` int(11) NOT NULL default '0',
 `t_nick` varchar(50) character set latin1 collate latin1_bin NOT NULL
 default '',
 `t_country` varchar(50) NOT NULL default '',
 `t_cpid` varchar(50) NOT NULL default '',
 `t_url` varchar(50) default NULL,
 `t_create_date` int(11) default NULL,
 `t_create_time` bigint(20) NOT NULL,
 `t_has_profile` char(1) NOT NULL,
 `t_team0` int(11) default NULL,
 `t_metric1` double(20,6) NOT NULL default '0.00',
 `t_metric2` double NOT NULL default '0',
 `t_metric3` double NOT NULL default '0',
 `t_metric4` double default NULL,
 `t_active` char(1) NOT NULL default '',
 `t_rev_metric1` double(20,6) NOT NULL default '100.00',
 `t_projrank0` int(11) default NULL,
 `t_rev_metric2` double(20,6) NOT NULL default '100.00',
 `t_racrank0` int(11) default NULL,
 `t_teamrank0` int(11) default NULL,
 `t_countryrank0` int(11) default NULL,
 `t_createdaterank0` int(11) default NULL,
 PRIMARY KEY  (`t_id`),
 KEY `prank` (`t_rev_metric1`,`t_id`),
 KEY `trank` (`t_team0`,`t_rev_metric1`,`t_id`),
 KEY `countryrank` (`t_country`,`t_rev_metric1`,`t_id`),
 KEY `createdaterank` (`t_create_date`,`t_rev_metric1`,`t_id`),
 KEY `racrank` (`t_rev_metric2`,`t_id`)
 ) ENGINE=MyISAM DEFAULT CHARSET=latin1

 I have a ranking update statement as follows

 set @rank = 0;
 update common_userx2 set projrank0 = @rank:=@rank+1 order by
 t_rev_metric1,t_id;

 For the largest case this has close to 1M rows.

 For weeks it was taking around 10 seconds to do this. Yesterday I
 replaced
 the main data drive in the machine with a faster SATA Raptor drive. No
 problems occurred, but since then (and the subsequent reboot of the
 machine)
 this particular query is taking 45 minutes!

 I can't, for the life of me figure out why performance would be
 degraded
 so
 much. At first I thought perhaps it might be just disk/mysql caching
 but
 the
 performance has not increased any in subsequent runs.

 Any advice on where to look ?

 Phil

 --
 Help build our city at http://free-dc.myminicity.com !





 --
 Help build our city at http://free-dc.myminicity.com !


 --
 MySQL General Mailing List
 For list archives: http://lists.mysql.com/mysql
 To unsubscribe:http://lists.mysql.com/[EMAIL PROTECTED]





-- 
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/[EMAIL PROTECTED]



RE: Very slow inserts into InnoDB tables

2008-06-13 Thread Wm Mussatto

On Fri, June 13, 2008 08:26, Ian Simpson wrote:
 Hi Jerry,

 It could be a kernel issue; however, currently I'm suspecting that the
 drive in the new server simply doesn't have the same bandwidth
 capability. The iostat results I'm getting (although I'm not an expert
 in reading them, having only learned of it about 3 hours ago) suggest
 that the older server is handling roughly the same data quantities, but
 just using a much lower percentage of the drive's bandwidth.

 I can't seem to find a tool which reports on exactly how much write
 bandwidth a drive has; everything seems to focus on reading speed.

 Thanks,

 
 --
 Ian Simpson
Try something like:
iostat -xk /dev/sda /dev/sdb /dev/sdc 10
where the /dev/... are the drives you want to examine and '10' is the
redisplay rate.  The last column is %util.

Hope this helps.



 On Fri, 2008-06-13 at 11:18 -0400, Jerry Schwartz wrote:
 Having delved a little more into the capabilities of iostat, I've
 discovered that the drive bandwidth seems to be maxed out while MySQL is
 running, which I'd peg as the primary candidate for the problem.
 [JS] That suggests even more strongly that there is a difference in the
 kernel configuration. More physical I/O would drive the traffic up, by
 definition. Either MySQL is causing this, or the system file system is
 causing it.
 
 Looks like I'll be having more words with my hosting company about
 this...
 
 Thanks for all your help
 --
 Ian Simpson
 
 On Fri, 2008-06-13 at 10:40 -0400, Jerry Schwartz wrote:
  Disk usage: the older server (the one that's running fine) is running
  more transactions per second, but has lower blocks written and read
  per second than the new server:
  [JS] That, to me, suggests that the difference might be in the way the
  systems themselves are configured. Unfortunately, I don't know how
  Linux handles file system buffering.
  
  The working server (which in addition to replicating is also handling
  a bunch of read queries):
  
  Device:  tps   Blk_read/s   Blk_wrtn/s     Blk_read     Blk_wrtn
  sda    88.47       782.20       998.77   9046888130  11551757459
  
  The new server, which is just trying to handle replication:
  
  Device:  tps   Blk_read/s   Blk_wrtn/s     Blk_read     Blk_wrtn
  sda    77.83      1367.55      2914.72    358474084    764029986
  
  Thanks,
  --
  Ian Simpson
  
  
  
  On Fri, 2008-06-13 at 19:15 +0530, Alex Arul Lurthu wrote:
    Also, how often do you issue a commit?  Batching the inserts inside a
    transaction might help, as in the sketch below.
   transaction might help.
  
   On Fri, Jun 13, 2008 at 6:53 PM, Ananda Kumar [EMAIL PROTECTED]
   wrote:
    check with iostat to see if the disk is heavily used.
  
  
    On 6/13/08, Ian Simpson [EMAIL PROTECTED] wrote:
    Hi Alex,

    Configurations are identical, other than the differences I initially
    mentioned. I've diffed both the configuration files and the output of
    SHOW VARIABLES on both servers.

    I've contacted my hosting provider to ask about the RAID settings.

    Variable_name: innodb_flush_log_at_trx_commit
            Value: 1
    Variable_name: sync_binlog
            Value: 0
    Variable_name: innodb_locks_unsafe_for_binlog
            Value: OFF

    Thanks

    --
    Ian Simpson

    On Fri, 2008-06-13 at 17:43 +0530, Alex Arul Lurthu wrote:
     Please check if the my.cnf configurations are the same.

     What are your configuration parameters in terms of innodb flush log
     trx commit, bin logging, sync binlog and innodb unsafe for binlog?

     If the systems have raid, check if the BBWC is enabled on the new
     host and WB is enabled.

     On Fri, Jun 13, 2008 at 5:02 PM, Ian Simpson [EMAIL PROTECTED]
     wrote:
      Hi list,

      Have a bit of a mystery here that I hope somebody can help with.

      I've just got a new server that I'm using as a dedicated MySQL
      server. In terms of hardware it's pretty much identical, if not
      slightly superior to an existing server already in production use.

      It's having a real 

Re: Optimizing table (shall I create a primary field?)

2008-04-28 Thread Wm Mussatto
On Mon, April 28, 2008 09:44, Rob Wultsch wrote:
 On Mon, Apr 28, 2008 at 6:49 AM, Rob Wultsch [EMAIL PROTECTED] wrote:
 I am going to assume that you are asking this question because
  performance has not improved from this change. Is this correct?

  I don't think that your surrogate key (id) is useful, but that is
  probably minor. I think that your hosting company's suggestion is
  probably a good idea, but will also probably not impact performance. A
  primary key is basically only a unique not null key, and it sounds
  like your isbn field should have those properties.

  key_len=302
  This is pretty horrible. If I needed better performance I would try to
  turn the isbn field into a bigint (the dashes are the only non-numeric
  characters and have no significance, right?). That would start with
  something like:
  ALTER TABLE `books` ADD `new_isbn` BIGINT NOT NULL ;
  UPDATE `books` SET `new_isbn` = replace(`isbn`,'-','');
  ...
  though that may require significant changes in your app (converting
  the string into an int) but a stored procedure might make this
  somewhat less painful.

  What are your goals here?

 For the record: if performance is good enough I wouldn't change
 anything...

 --
 Rob Wultsch
 [EMAIL PROTECTED]
 wultsch (aim)

The ISBN field is way too long; I think they just changed ISBNs to 13
characters.  Depending on your application leading '0's may be important,
so you may be stuck with a character field.  As was suggested, lose the
'-' and spaces.  I don't think they are standard and I would think it's
easier to universally remove them.
--
William R. Mussatto
Systems Engineer
http://www.csz.com
909-920-9154


-- 
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/[EMAIL PROTECTED]



Re: UPDATEs against one db affecting others

2008-04-21 Thread Wm Mussatto
On Mon, April 21, 2008 07:28, Waynn Lue wrote:
 Is there any way to prevent UPDATEs on one database on one mysql
 instance from affecting SELECTs on other databases in the same
 instance?  I'm noticing a whole bunch of backed up connections on
 other databases when I run massive UPDATEs on one.

 Thanks!

Are there any I/O or CPU load issues?
If you don't have any stored procedures that work between them, I'd check
server level limits.  MySQL is supposed to be multi-threaded to avoid just
this sort of problem, but if your disk and/or CPU are maxed out (or for
that matter you are out of memory) the whole system may be tied up.

Just my thoughts; warning, it's a Monday here.

--
William R. Mussatto
Systems Engineer
http://www.csz.com
909-920-9154


-- 
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/[EMAIL PROTECTED]



Re: Two MySql servers, but very different performances for a SELECT JOIN

2008-04-15 Thread Wm Mussatto
On Tue, April 15, 2008 02:03, [EMAIL PROTECTED] wrote:
 I'm still trying to decipher the EXPLAIN result, and there is something I
 really don't understand:

 mysql EXPLAIN SELECT STRAIGHT_JOIN media.* FROM country,content,media
 WHERE
 country.id='Germany' AND country.detail_tid=content.tid AND
 content.id=media.content_id;

 On the FAST server:

 +--+---+-+--+--++--++
 |id|table  |type |possible_k|key   |k_le|ref   |rows|
 +--+---+-+--+--++--++
 |1 |country|const|PRIMARY   |PRIMARY   |383 |const |   1|
 |1 |content|ALL  |PRIMARY   |NULL  |NULL|NULL  | 137|
 |1 |media  |ref  |media_FI_2|media_FI_2|5   |content.id| 248|
 +--+---+-+--+--++--++

 On the SLOW server:
 +--+---+-+--+--++--+--+
 |id|table  |type |possible_k|key   |k_le|ref   | rows |
 +--+---+-+--+--++--+--+
 |1 |country|ref  |PRIMARY   |PRIMARY   |383 |const | 1|
 |1 |content|ALL  |PRIMARY   |NULL  |NULL|NULL  |   137|
 |1 |media  |ALL  |media_FI_2|NULL  |NULL|NULL  |125649|
 +--+---+-+--+--++--+--+

 On the 3rd row, on the fast server, type=ref, and ref points toward the
 content.id column.

 And on the slow server, type=ALL, and ref points to NULL. Since my MySQL
 schema defines the foreign key explicitly, shouldn't I have 'content.id'
 instead of 'NULL'?

 What is the difference between the 'possible_key' and 'key' columns?  Why
 is key=NULL on my slow server?

  Tristan

 --
 Tristan Marly
 http://www.linkedin.com/in/tristanmarly

Possible key refers to the keys that the server thinks might be usable.
key refers to the one it chose.  On the slow server it decided that the
possible key would not work so it didn't use it.  You might try to force
the use of the key and see what happens.  Did you try optimizing the slow
server's table?  It's possible that there were enough records
added/removed that the statistics need to be updated.  Just a guess; see
the sketch below.
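
For example (a sketch against the tables from the EXPLAIN output above):

  ANALYZE TABLE media;  -- refresh the index statistics

  SELECT STRAIGHT_JOIN media.*
  FROM country, content, media FORCE INDEX (media_FI_2)
  WHERE country.id='Germany' AND country.detail_tid=content.tid
    AND content.id=media.content_id;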

--
William R. Mussatto
Systems Engineer
http://www.csz.com
909-920-9154


-- 
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/[EMAIL PROTECTED]



Re: MySQl 5.0 optimization problem

2008-04-11 Thread Wm Mussatto
On Fri, April 11, 2008 06:47, Ben Clewett wrote:
 Are you using MyIsam or InnoDB?  Or something else?

 In either case the speed to get a COUNT() is largely down to the speed
 of your disks and the size of disk caching.  A COUNT() forces the system
 to read every row in order to count them, and any large table is probably
 larger than your caches.

 In some ways this is not so important, since it is unusual for a query
 to want to read every row of a table.  (I have 250GB tables which have
 excellent performance but would take minutes to count every row :)

 It might be better to consider the type of queries you will be using,
 and the type of table, and optimise for that...

 Ben

 Victor Danilchenko wrote:
 Hi,

 I am trying to optimize our DB server. We have one table which has
 1.3M entries, and the keys are GUIDs (so the key space is large).
 However, I have it all indexed. The performance was iffy, though, so I
 increased memory allocation, and the searches on the indexed fields seem
 to be OK now. Still, a simple count(field) on this table still takes,
 like, 2 minutes! I am guessing i am missing something obvious, but I
 read through a few MySQL optimization guides, and it seems like i am
 covering my bases.

 Here is the relevant slice of my my.cnf:

 #
 # * Fine Tuning
 #
 key_buffer  = 256M
 max_allowed_packet  = 64M
 thread_stack= 512K
 thread_cache_size   = 32
 #
 # * Query Cache Configuration
 #
 query_cache_limit   = 32M
 query_cache_size= 256M
 query_cache_type= 1
 table_cache = 512
 sort_buffer_size= 32M


 I am running Ubuntu 7.04, with 1GB of RAM. The relevant fields are
 all indexed, but I can't imagine why a simple count() would take so
 long, when the actual query by value on the same field is effectively
 instant (after my cache setting expansion).

 Does anyone have an idea of what I am missing? Also, if you think
 any of the above settings seem wrong for a server with 1GB of RAM,
 please let me know.
If the field you are counting is the first field in an index I would think
it would go much faster (system will just use the index).  If some of your
counts are fast and some are slow (you said iffy) that might explain the
difference.
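
For example (a sketch; 'big_table' and 'guid' stand in for the actual
table and key column):

  ALTER TABLE big_table ADD INDEX idx_guid (guid);
  SELECT COUNT(guid) FROM big_table;  -- can be answered from the index alone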

--
William R. Mussatto
Systems Engineer
http://www.csz.com
909-920-9154


-- 
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/[EMAIL PROTECTED]



Re: Indices in InnoDB/MySQL

2008-04-01 Thread Wm Mussatto
On Tue, April 1, 2008 10:01, Robert DiFalco wrote:
 I've been told that an index always contains the primary key. So if I
 have a LONG ID that is the primary key of table and create on index on
 LONG VALUE, the index on LONG VALUE will actually work as a typical
 compound index on ID,VALUE. My question is this, if I don't know that
 about MySQL and create my indices on purpose (i.e. not taking into
 account implementation details about MySQL) I would create the index
 explicitly on ID,VALUE. If I did that would I effectively have the same
 index as one on just VALUE or would there some how be duplicate data in
 the ID,VALUE index (i.e. ID, ID, VALUE)?

 R.

Close, but not quite there...
You should always have AN index which is the primary key.  You can create
other indexes which are not.  If you are searching for VALUE then that
should be the index.  If you make the index ID,VALUE it can't be used for
a search on VALUE alone, AFAIK, because it won't know the ID.  A sketch
follows.
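
A sketch of the difference (table and column names are illustrative; the
comment about InnoDB's internal layout is the point under discussion):

  CREATE TABLE t (
    id    BIGINT NOT NULL PRIMARY KEY,
    value BIGINT NOT NULL,
    KEY idx_value (value)  -- InnoDB stores this internally as (value, id)
  ) ENGINE=InnoDB;

  SELECT id FROM t WHERE value = 42;  -- can use idx_value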
--
William R. Mussatto
Systems Engineer
http://www.csz.com
909-920-9154





-- 
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/[EMAIL PROTECTED]



Re: Migrating form 3.23.49 to 5.0.32

2008-02-22 Thread Wm Mussatto
On Fri, February 22, 2008 10:01, Ed Curtis wrote:
 I'm doing the above migration as mentioned in the subject line. I
 figured I would use the MySQL Migration Toolkit to help it along but it
 won't let me connect to my 3.23.49 server. Is there any other way to
 migrate all my data easily?

 Thanks,

 Ed


I migrated slowly using mysqldump.  However, if you use 'left join' in
your programs the rules have changed.  If you have
select ... from tableA, tableB LEFT JOIN tableC on ...
the order of tableA and tableB now matters.  It was changed in the MIDDLE
of the 5.0.x series to be more compatible with SQL standards; see the
sketch below.
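
A sketch of the adjusted syntax (table and column names are illustrative):

  SELECT tableA.*, tableC.*
  FROM (tableA, tableB)
  LEFT JOIN tableC ON tableB.id = tableC.b_id
  WHERE tableA.id = tableB.a_id;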

--
William R. Mussatto
Systems Engineer
http://www.csz.com
909-920-9154


-- 
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/[EMAIL PROTECTED]



Re: Where are actual databases stored?

2008-02-08 Thread Wm Mussatto
On Fri, February 8, 2008 13:09, Paolo Amodio wrote:

 Il giorno 08/feb/08, alle ore 21:55, Riccardo Perotti ha scritto:

 Hi all:

 Our sistem crashed and we're trying to recover our data. The disc is
 fine but the OS is gone, so I'm trying to get at the actual mysql
 databases document. Does anybody know where to look for them in a
 *nix system?

 Thanks,

 Riccardo

 --
 [EMAIL PROTECTED]
 www.riccardoperotti.com



 --
 MySQL General Mailing List
 For list archives: http://lists.mysql.com/mysql
 To unsubscribe:http://lists.mysql.com/mysql?
 [EMAIL PROTECTED]




 Usually in

 /usr/local/mysql/data

 but it isn't a standard.
 What's your distro?

 See you

 Paolo Amodio
 [EMAIL PROTECTED]
 www.dixienet.it
As the man said, it depends on the distro: on Debian it's
/var/lib/mysql
at least for the MyISAM files.

--
William R. Mussatto
Systems Engineer
http://www.csz.com
909-920-9154


-- 
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/[EMAIL PROTECTED]



RE: does 'disable networking' make mySQL faster?

2008-01-15 Thread Wm Mussatto
On Tue, January 15, 2008 14:19, Daevid Vincent wrote:
 -Original Message-
 From: [EMAIL PROTECTED]
 [mailto:[EMAIL PROTECTED] On Behalf Of Baron Schwartz
 Sent: Tuesday, January 15, 2008 1:49 PM
 To: Daevid Vincent
 Cc: mysql@lists.mysql.com
 Subject: Re: does 'disable networking' make mySQL faster?

   On Jan 14, 2008 7:00 AM, Daevid Vincent 
 [EMAIL PROTECTED] wrote:
  
   I saw this on the PHP list and was wondering if there is
 any merit to
  it?!
   I would be surprised if disabling networking made a
 difference. I'm only
   concerned about the added speed. I get the security
 benefit of course.
  
 
   From: Manuel Lemos [mailto:[EMAIL PROTECTED]
   Sent: Saturday, January 12, 2008 12:57 PM
   To: [EMAIL PROTECTED]
   Subject: Re: [PHP] Re: SMTP vs mail()
  
   Every time I install MySQL on the same machine as the
 Web server, I
   disable networking to make it use Unix domain sockets,
 for either
   greater speed and security.

 Why don't you ask the author of the original post to explain or
 justify his opinion on faster performance, instead of asking people
 who didn't make that claim?

 Uh... Because it's a MYSQL question and not a PHP question for one.

 For two, there have been a couple people on THIS list who have suggested
 it
 is true, and there are a few more who (like me) are hard pressed to
 believe
 they are correct (or that mySQL would be written so poorly if that is
 true).

 Three, I have my doubts the original author knows what the hell he's
 talking
 about, and therefore asking him to justify would just give me more
 questionable results.

 d

I think I remember a README file in 3.23 that said something like this.
If it's still true, it may be because if the server knows it doesn't need
to deal with networking it can skip a whole bunch of tests like "is this
connection over a network?".  Using sockets involves less work than going
through the TCP/IP stack, so even if mysql itself doesn't take more time
the OS will.  Note, my information is dated; a config sketch follows.
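
If you want to test it, a sketch of the relevant my.cnf stanza (the socket
path varies by distro):

  [mysqld]
  skip-networking
  socket = /var/lib/mysql/mysql.sock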

Bill


-- 
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/[EMAIL PROTECTED]



Re: Any Penalty For $ Sign?

2008-01-04 Thread Wm Mussatto
Question is, will anyone ever have to write a quick utility in perl (or
any language which uses '$' as a delimiter) to support this?  Will you
have to explain your tables to anyone else?
On Fri, January 4, 2008 09:15, Victor Subervi wrote:
 LOL! True. I'm a Python guy ;)

 On Jan 4, 2008 1:14 PM, Gerald L. Clark [EMAIL PROTECTED]
 wrote:

  Victor Subervi wrote:
  Hi;
  I discovered I can use a $ sign in table names:
  my$table
  Is this recommended against? It's kind of handy in my app.
  TIA,
  Victor
 
 If you write any PHP or Perl code, you will be very sorry.

 --
 Gerald L. Clark
 Supplier Systems Corporation





-- 
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/[EMAIL PROTECTED]



Re: Any Penalty For $ Sign?

2008-01-04 Thread Wm Mussatto
On Fri, January 4, 2008 10:07, Victor Subervi wrote:
 Hmm. The idea is to take this open source. I'll eventually switch over to
 Java. I guess there may be some people tempted to write a quick perl
 script,
 as you say, but on the other hand it wouldn't be part of the open source
 package, which will be either all Python or all Java. Your thoughts?
 Victor
I'm thinking more of support.  For example, we dump all our databases
nightly and back them up using a shell script.  If we were doing it on a
table basis, the presence of '$' might cause issues.  Your open source
will live on a system, not in isolation; that's what I am concerned about.
However, if it makes your job much easier in Python then go for it.  In
Perl, at least, we can work around it.
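
For what it's worth, MySQL itself has no trouble with the name as such; a
minimal sketch, using the table from the earlier mail:

  CREATE TABLE my$table (id INT);  -- '$' is legal in an identifier
  SELECT * FROM `my$table`;        -- backticks keep the '$' unambiguous for other tools

The trouble only starts when such a name lands inside an interpolating
string in Perl, PHP, or a shell script.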

Bill
 On Jan 4, 2008 2:00 PM, Wm Mussatto [EMAIL PROTECTED] wrote:

 Question is, will anyone ever have to write a quick utility in perl (or
 any language which uses '$' as a delimiter) to support this?  Will you
 have to explain your tables to anyone else?
  On Fri, January 4, 2008 09:15, Victor Subervi wrote:
  LOL! True. I'm a Python guy ;)
 
  On Jan 4, 2008 1:14 PM, Gerald L. Clark [EMAIL PROTECTED]
  wrote:
 
   Victor Subervi wrote:
   Hi;
   I discovered I can use a $ sign in table names:
   my$table
   Is this recommended against? It's kind of handy in my app.
   TIA,
   Victor
  
  If you write any PHP or Perl code, you will be very sorry.
 
  --
  Gerald L. Clark
  Supplier Systems Corporation
 
 



 --
 MySQL General Mailing List
 For list archives: http://lists.mysql.com/mysql
 To unsubscribe:
 http://lists.mysql.com/[EMAIL PROTECTED]






-- 
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/[EMAIL PROTECTED]



Re: Determining Table Storage Engine Type on Crashed Table

2007-11-28 Thread Wm Mussatto
Can you look at the directory where the data files are
(e.g., on Debian it's /var/lib/mysql/{databasename})?
For the problem table you should see {tablename}.MYD and .MYI files if
the engine is MyISAM.
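
On 5.0 and up the data dictionary may also answer without opening the
table; a sketch, using the database name from your error message:

  SELECT ENGINE FROM information_schema.TABLES
   WHERE TABLE_SCHEMA = 'blah' AND TABLE_NAME = 'table_name';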

Hope this helps
On Wed, November 28, 2007 09:34, Richard Edward Horner wrote:
 FYI, this did not work :)

 Thanks though!

 Rich(ard)

 On Nov 23, 2007 3:37 AM, Paul McCullagh [EMAIL PROTECTED]
 wrote:
 Maybe this will work:

 SHOW CREATE TABLE table_name;


 On Nov 21, 2007, at 9:42 PM, Richard Edward Horner wrote:

  Hey everybody,
 
  Hopefully some of you are already enjoying time off. I am not...yet :)
 
  Anyway, is there a way to determine what storage engine a table is
  using if it's crashed?  When it's fine, I can just run:
 
   mysql> show table status like 'table_name'\G
   *************************** 1. row ***************************
              Name: table_name
            Engine: MyISAM
           Version: 10
        Row_format: Fixed
              Rows: 985984
    Avg_row_length: 13
       Data_length: 12817792
   Max_data_length: 3659174697238527
      Index_length: 34238464
         Data_free: 0
    Auto_increment: 1182153
       Create_time: 2007-11-15 17:44:28
       Update_time: 2007-11-21 15:28:07
        Check_time: 2007-11-21 15:28:18
         Collation: latin1_swedish_ci
          Checksum: NULL
    Create_options:
           Comment:
   1 row in set (0.00 sec)
 
  As you can see, the second column returned is the Engine. In this
  case, MyISAM. Now, if I crash the table, it doesn't work:
 
   mysql> show table status like 'table_name'\G
   *************************** 1. row ***************************
              Name: table_name
            Engine: NULL
           Version: NULL
        Row_format: NULL
              Rows: NULL
    Avg_row_length: NULL
       Data_length: NULL
   Max_data_length: NULL
      Index_length: NULL
         Data_free: NULL
    Auto_increment: NULL
       Create_time: NULL
       Update_time: NULL
        Check_time: NULL
         Collation: NULL
          Checksum: NULL
    Create_options: NULL
           Comment: Table './blah/table_name' is marked as crashed and should be repaired
   1 row in set (0.00 sec)
 
  Now, let's assume for a moment this were an InnoDB table. If I were to
  try and run repair, it would say that the storage engine does not
  support repair so clearly it knows what the storage engine is. How do
  I get it to tell me? Or I guess a broader more helpful question would
  be, What are all the ways to determine a table's storage engine
  type?
 
  Thanks,
  --
  Richard Edward Horner
  Engineer / Composer / Electric Guitar Virtuoso
  [EMAIL PROTECTED]
  http://richhorner.com - updated June 28th
 
  --
  MySQL General Mailing List
  For list archives: http://lists.mysql.com/mysql
  To unsubscribe:http://lists.mysql.com/mysql?
  [EMAIL PROTECTED]
 





 --
 Richard Edward Horner
 Engineer / Composer / Electric Guitar Virtuoso
 [EMAIL PROTECTED]
 http://richhorner.com - updated June 28th

 --
 MySQL General Mailing List
 For list archives: http://lists.mysql.com/mysql
 To unsubscribe:http://lists.mysql.com/[EMAIL PROTECTED]





-- 
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql

RE: Simple question SQL

2007-09-05 Thread Wm Mussatto
On Wed, September 5, 2007 7:17, Jay Blanchard said:
 [snip]
 I have a Table and want to know the most visited products.
 Products

 -  Id

 -  Name

 -  Visited
 [/snip]

 SELECT Id, Name, SUM(Visited) AS Total_Visits FROM Products
  GROUP BY Id ORDER BY Total_Visits DESC LIMIT 5
That should get you the top 5 from most to least visited.

--
William R. Mussatto
Systems Engineer
http://www.csz.com
909-920-9154


-- 
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/[EMAIL PROTECTED]



RE: Database architecture and security

2007-08-23 Thread Wm Mussatto
I concur.  Also, it makes it easier to remove a customer if they leave.
Finally, your backups will only lock up one customer's database at a time,
and for a much shorter period.
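
A sketch of that lifecycle, with hypothetical database, user, and password
names:

  CREATE DATABASE cust_acme;
  GRANT ALL PRIVILEGES ON cust_acme.* TO 'acme'@'localhost' IDENTIFIED BY 'secret';
  -- and when the customer leaves:
  DROP DATABASE cust_acme;
  DROP USER 'acme'@'localhost';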

On Thu, August 23, 2007 10:50, Jerry Schwartz said:
 Personally, I think I'd go with one DATABASE per customer. That way your
 code would be the same, and easier to handle. It would be easier to manage
 the security at the database level, I suspect. I'd set up a ../inc
 directory outside the web server root with one file per customer, holding
 the customer-specific variables such as database name, password, and so
 forth. Each file would be named after a customer. You'd prompt for a user
 name and password, include the appropriate customer-specific .inc file,
 check the password against what the user supplied, and if it passed then
 create a session with the .inc file variables stored as session variables.

 Regards,

 Jerry Schwartz
 The Infoshop by Global Information Incorporated
 195 Farmington Ave.
 Farmington, CT 06032

 860.674.8796 / FAX: 860.674.8341

 www.the-infoshop.com
 www.giiexpress.com
 www.etudes-marche.com


 -Original Message-
 From: Jason Pruim [mailto:[EMAIL PROTECTED]
 Sent: Thursday, August 23, 2007 10:59 AM
 To: MySQL List
 Subject: Database architecture and security

 Hi Everyone,

 Just had a quick question about a database I'm working on.

 I am planning on having the database open to customers of mine to
 store their mailing addresses on-line, and be able to manage the
 records.

 Is it safe, to have 1 database with lots of tables? Or am I safer
 setting up separate databases for everyone?

 I should mention, no one will be accessing the database directly,
 it'll be through a web interface and php to display it.

 Any info would be greatly appreciated!


 --

 Jason Pruim
 Raoset Inc.
 Technology Manager
 MQC Specialist
 3251 132nd ave
 Holland, MI, 49424
 www.raoset.com
 [EMAIL PROTECTED]



-- 
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/[EMAIL PROTECTED]



Re: Recreate Table With Sorted Data

2007-08-15 Thread Wm Mussatto
On Wed, August 15, 2007 13:09, Kebbel, John said:
 I have a table with 1600 student locks in random order. I would like them
 permanently sorted by the locker number they are assigned to. I assumed
 that ...

 ~  I would copy the table under a different name
 ~ Delete all records from this copy
 ~ Write a statement that would copy the records from the original table
 into the copied table in SORTED order
 ~ Delete the original table
 ~ Rename the copy  to the same name as the original

 Question 1: Is there a better way to get the desired result?
 Question 2: If not, what would the Insert/Select statement look like that
 would copy the records over in sorted order?
Why not just create an index on the locker number field?
Step 1: make it a plain index, and list the rows to ensure no two students
have the same locker.
Step 2: modify the index so that it's unique, and make it the primary key.

One of the advantages of a database is you can pull the information in any
order.

order by LockerNumber  to generate a list by lockers,
order by LastName, FirstName to generate a list in alphabetic order.

Unless you specify the order, the database engine is free to return rows in
whatever order comes out.  Even if it happened to deliver them in locker
order after the process you specified above, the first delete and/or add
would change that.
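
In SQL terms, a sketch with a hypothetical table name:

  SELECT LockerNumber, COUNT(*) FROM locks
   GROUP BY LockerNumber HAVING COUNT(*) > 1;        -- step 1: find duplicate lockers
  ALTER TABLE locks ADD UNIQUE INDEX (LockerNumber); -- step 2
  SELECT * FROM locks ORDER BY LockerNumber;         -- list by locker, on demand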

---

William R. Mussatto, Senior Systems Engineer
http://www.csz.com
Ph. 909-920-9154 ext. 27
FAX. 909-608-7061


-- 
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/[EMAIL PROTECTED]



Re: Finding a value in multiple tables

2007-07-31 Thread Wm Mussatto
On Tue, July 31, 2007 8:52, David Ruggles said:
 I need to find a userid across several different tables.

 What is the best way to do this?

 For example I have three tables
 TableA
 TableB
 TableC

 They all have a field id

 I want to find the value 123 in the field id in any one of these tables.

 Thanks,

 David Ruggles
 CCNA MCSE (NT) CNA A+
 Network Engineer  Safe Data, Inc.
 (910) 285-7200[EMAIL PROTECTED]

Possibly use a union?  I think more details would help.
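
A sketch, using the tables and value from your example:

  SELECT 'TableA' AS source, id FROM TableA WHERE id = 123
  UNION ALL
  SELECT 'TableB', id FROM TableB WHERE id = 123
  UNION ALL
  SELECT 'TableC', id FROM TableC WHERE id = 123;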

---

William R. Mussatto, Senior Systems Engineer
http://www.csz.com
Ph. 909-920-9154 ext. 27
FAX. 909-608-7061


-- 
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/[EMAIL PROTECTED]



Re: MySQL slow when altering tables.

2007-07-20 Thread Wm Mussatto
On Fri, July 20, 2007 12:32, Andrew Rosolino said:

 Whenever I alter a mysql table like add a new field it gets really slow
 and
 all other processes lag.
 Is there a memory variable I can adjust for this?
 --
 View this message in context:
Without more information it might be hard to answer.  A general thought:
ALTER TABLE causes the entire table to be re-created in its new form; the
old one is then deleted and the new one renamed.  For a large table that
is a significant bit of work (CPU, disk, and memory).  It might help if
you could give the size of the table and what the alter involved (for
example, generating a new index would take even more time).
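
You can watch the rebuild from another session while the ALTER runs; a
sketch:

  SHOW PROCESSLIST;  -- the ALTER thread typically shows State: 'copy to tmp table'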

--
William R. Mussatto
Systems Engineer
http://www.csz.com
909-920-9154


-- 
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/[EMAIL PROTECTED]



Re: Data migration Verification

2007-06-04 Thread Wm Mussatto
On Mon, June 4, 2007 9:31, paulizaz said:

 What do you mean by same output ?
Can you write a program to access both databases and have it check to see
if the data matches?  A lot depends on how the structure changed.  If the
new database rows have a one-to-one correspondence to the original
database, then first count the rows.  If it passes that test, compare the
data in each new row with the corresponding data in the old row.
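
A sketch of that cheap first pass, with hypothetical database and table
names:

  SELECT COUNT(*) FROM old_db.orders;
  SELECT COUNT(*) FROM new_db.orders;
  -- for any table that survived the restructuring unchanged:
  CHECKSUM TABLE old_db.orders, new_db.orders;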

Bill
 I have too much data to go through and check if all the data is the same.
 This is my problem. Sampling would speed this up, but I need something
 more
 accurate.
 All data is important.




 Olaf Stein-2 wrote:

 Besides the sample approach, output data (a set you would output on a
 live
 system anyway) from both db setups and see if you can get the same
 output
 from both

 Olaf


 On 6/1/07 10:35 AM, paulizaz [EMAIL PROTECTED] wrote:


 Hi all,

 I have somebody creating a C# class to migrate data from one SQL database
 to another slightly differently structured SQL database.

 Please can anybody suggest ways to verify that the data has migrated
 successfully, in whole and accurately??

 I feel a sample approach would not quite be substantial.
 I want to keep it separate from the migration process itself (having my
 person write a verification script may also not work, as he will be using
 the same thought processes and knowledge that he used for the migration).

 Free software, scripts, utilities, packages, industry approaches??

 Sorry, I'm no tech wizard. Any ideas appreciated.


 --
 MySQL General Mailing List
 For list archives: http://lists.mysql.com/mysql
 To unsubscribe:http://lists.mysql.com/[EMAIL PROTECTED]






-- 
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/[EMAIL PROTECTED]



Re: Diff between restore from tar and recover from power outage

2007-06-04 Thread Wm Mussatto
On Mon, June 4, 2007 14:21, murthy gandikota said:
 Hi
   When restoring a DB from a tar file, I had to repair some of the tables.
 Whereas when the DB is started after a power failure, everything works
 fine. I noticed this on RedHat linux and MySQL5.

   Any thoughts?

   Thanks
   Murthy

What was in the tar file?  If you tar'ed the data directory while the
server was running, it is almost guaranteed to fail, since only part of the
updated information may actually have been written to disk.  If the power
failure occurred on a system with a correctly configured UPS, mysql
probably got a shutdown command, so the disk files were in a stable state.

Normally you should dump the databases and then tar that (see the mysqldump
command for details).
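
A minimal sketch of that routine (paths and account are placeholders):

  mysqldump -u root -p --all-databases > /backup/all-databases.sql
  tar czf /backup/all-databases.tgz /backup/all-databases.sql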

---

William R. Mussatto, Senior Systems Engineer
http://www.csz.com
Ph. 909-920-9154 ext. 27
FAX. 909-608-7061


-- 
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/[EMAIL PROTECTED]



Re: design choice - quite many tables

2007-05-24 Thread Wm Mussatto
On Thu, May 24, 2007 2:12, Przemysław Klein said:
 Wm Mussatto wrote:
 Assuming you are using MyISAM table types, each table requires at least
 three files on the disk.  If they are in one database, they will all be
 in one directory (how fast is your OS at finding files in its directory
 structure?).
 Are they going to be opened at the same time (how many file handles can
 you have open at once?)?
 If separate databases, how do you intend to connect to them (a single
 connection specifying the database, or multiple connections)?

 My feeling is that you would not gain anything by having separate
 databases.

 Good luck.
 Thanks Wiliam.

 OS resources are sufficient. We connect through a connection pool, so it
 also shouldn't be a problem. The main reason for separating data into
 several databases is easier management (in particular, backup and
 recovery) and easier developer usability. I can hardly imagine managing a
 database with, let's say, 500 tables. I don't know if there is a way to
 'catalogue'/organize tables within one database (namespaces/schema?).
 The second argument is that data from past years isn't modified often
 and we can apply different backup policies.

 regards,

 --
 _/ _/ _/ _/ _/ _/ _/ _/ _/ _/ _/ _/ _/ _/
 Przemek Klein ([EMAIL PROTECTED])

There is nothing like knowing what your conditions are.  When I spoke of
connections, I come from Perl DBI, where each simultaneously accessed
database would require a separate database handle, which in turn would tie
up a connection.  You can get around this, but it's awkward.
I guess I'd pencil out the queries and see if you would have to ask
questions across the databases in the same query; again, just awkward, not
a show stopper.  As for backup, you could back up on a table-by-table
basis, but then that gets a bit more awkward.  If the historical tables
are truly read-only, there are special table types that might help there.
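
Cross-database queries do work as long as you qualify the table names; a
sketch with a hypothetical table, using your db2006/db2007 naming:

  SELECT d6.doc_id, d7.doc_id
    FROM db2006.documents AS d6
    JOIN db2007.documents AS d7 ON d6.doc_id = d7.doc_id;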
--
William R. Mussatto
Systems Engineer
http://www.csz.com
909-920-9154


-- 
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/[EMAIL PROTECTED]



Re: design choice - quite many tables

2007-05-23 Thread Wm Mussatto
On Tue, May 22, 2007 23:29, Przemysław Klein said:
 Martijn Tonies wrote:
 Hi,


  I'm working on quite a big database. It consists of about 200 tables.
  Additionally, about 50 tables are added per year (because of annual
  data). It means every year 50 new tables will have to appear in the
  application. And now I have a question. Should I use separate databases
  for the annual data (i.e. db2006, db2007, etc...) (I don't need
  constraints on those (annual) tables) or put all the tables in one
  database? Is there any way to 'catalogue'/organize tables within one
  database (namespace/schema)?

 Any thoughts?


 Yes, in my opinion, you should use the same tables for each year. So no
 tables per year or databases per year, unless there is a very very
 specific
 reason for this.

 Having tables on a per-year basis also means you cannot do cross-year
 queries easily and you have to adjust your queries according to the
 current
 year.

 Martijn Tonies
 Database Workbench - development tool for MySQL, and more!
 Upscene Productions
 http://www.upscene.com
 My thoughts:
 http://blog.upscene.com/martijn/
 Database development questions? Check the forum!
 http://www.databasedevelopmentforum.com



  The reason for distributing annual data into different tables is that
  they are NOT small. They store business documents in my company and can
  run to about 500k rows (and will grow each year). After the performance
  tests we did, it appears that keeping those data in one table (with an
  additional column 'year') wouldn't meet our response time requirements.
  I realize that this approach is not proper from a relational point of
  view, but it seems that we must separate annual data. Now, the question
  is whether we should keep them in one database (and be prepared for a
  database with approx 500 tables after 3-4 years) or in multiple databases.

 Regards,

 --
 _/ _/ _/ _/ _/ _/ _/ _/ _/ _/ _/ _/ _/ _/
 Przemek Klein ([EMAIL PROTECTED])

Assuming you are using MyISAM table types, each table requires at least
three files on the disk.  If they are in one database, they will all be in
one directory (how fast is your OS at finding files in its directory
structure?).
Are they going to be opened at the same time (how many file handles can
you have open at once?)?
If separate databases, how do you intend to connect to them (a single
connection specifying the database, or multiple connections)?

My feeling is that you would not gain anything by having separate databases.
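
The relevant limits are easy to inspect; a sketch (variable names as of
MySQL 5.0):

  SHOW VARIABLES LIKE 'table_cache';
  SHOW VARIABLES LIKE 'open_files_limit';
  SHOW STATUS LIKE 'Opened_tables';  -- if this grows quickly, the table cache is too small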

Good luck.
--
William R. Mussatto
Systems Engineer
http://www.csz.com
909-920-9154


-- 
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/[EMAIL PROTECTED]



Re: dumpfile question

2007-04-11 Thread Wm Mussatto
On Tue, April 10, 2007 19:45, Paul DuBois said:
 At 10:15 AM +0800 4/11/07, wangxu wrote:
   The following is my SQL:

   select * into dumpfile '/home/wangxu/test4.data' from mytable;

   The MySQL server reports: Result consisted of more than one row

   Why? How do I use SELECT INTO DUMPFILE?


  From the manual:

 If you use INTO DUMPFILE instead of INTO OUTFILE, MySQL writes only
 one row into the file, without any column or line termination and
 without performing any escape processing. This is useful if you want
 to store a BLOB value in a file.

 http://dev.mysql.com/doc/refman/5.0/en/select.html


 --
 Paul DuBois, MySQL Documentation Team
 Madison, Wisconsin, USA
 MySQL AB, www.mysql.com
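
In other words, the SELECT feeding INTO DUMPFILE must produce exactly one
row; a sketch against the table above:

  SELECT * INTO DUMPFILE '/home/wangxu/test4.data' FROM mytable LIMIT 1;
  -- for many rows, with delimiters and escaping, use OUTFILE instead:
  SELECT * INTO OUTFILE '/home/wangxu/test4.txt' FROM mytable;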

Related question: does mysqldump properly escape blobs so that they can be
restored via 'mysql -u xxx -p databaseName < dumpFile' ?

--
William R. Mussatto
Systems Engineer
http://www.csz.com
909-920-9154


-- 
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/[EMAIL PROTECTED]



RE: Why doesn't the InnoDB count() match table status?

2007-03-26 Thread Wm Mussatto
On Mon, March 26, 2007 16:21, Daevid Vincent said:
 You're about 5 years too late for this conversation, but I recall it

 Really? People have just happily accepted this absurd limitation for
 _five_
 years? Wow.

 having to do with the fact that when you're on a table that supports
 transactions, you don't know exactly how many records a particular
 session has available to it unless you actually go and count them.
 Depending on your settings, you may or may not see rows inserted by
 other uncommitted sessions, and they may disappear if the other
 sessions roll their transactions back.

 You know how many are *IN* the table on the disk at that particular
 moment.
Why would they be on the disk? Until the transaction is committed and the
caches are flushed, the info is really in memory, I thought.
 That's all that needs to be shown!?
 So if someone isn't using transactions, then that number will be accurate.
 This isn't rocket science.

 You should probably be filing bug reports or calling your support

 Oh. I will. ;-)

 Let us know if you find another database product that supports instant
 count(*)'s on transactioned tables.

 I don't care what other RDMS are or are not doing.
 I care what the one I'm paying for is not doing.

If you want to bypass the uncertainties built into transactional tables and
get a count that is 'accurate', how about locking the tables and then
issuing the count request?  I realize this sort of defeats the purpose of
transactional tables, but ...
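
A sketch of that approach, with a hypothetical table name:

  LOCK TABLES mytable READ;
  SELECT COUNT(*) FROM mytable;
  UNLOCK TABLES;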



-- 
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/[EMAIL PROTECTED]



Re: ENCODE() and DECODE()

2007-03-13 Thread Wm Mussatto
On Tue, March 13, 2007 1:45, Neil Tompkins said:
 I'm using ASP.  Do you know any resources that I could use ?

Apart from mentioning that Perl from ActiveState does have an ASP
variant, no.  Sorry, we turned off our last NT box when the Gartner Group
said that MS made fine servers as long as you never ran IIS on them, and
that was a while ago. Hopefully someone else on the list can help.  You
might google for AES and ASP and see what comes up.
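
If upgrading MySQL is ever an option, the server-side AES functions
(available from 4.0.2) are much stronger than ENCODE(); a sketch with a
hypothetical table, column, and key:

  INSERT INTO cards (cc_enc) VALUES (AES_ENCRYPT('4111111111111111', 'secret-key'));
  SELECT AES_DECRYPT(cc_enc, 'secret-key') FROM cards;
  -- cc_enc should be a BLOB column, for the truncation reason quoted below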

From: Wm Mussatto [EMAIL PROTECTED]
To: Neil Tompkins [EMAIL PROTECTED]
Subject: Re: ENCODE() and DECODE()
Date: Mon, 12 Mar 2007 15:27:26 -0700 (PDT)

On Mon, March 12, 2007 15:05, Neil Tompkins said:
  Do you think these functions would be ideal for my requirements in
 terms
  of
  encrypting credit card numbers when stored in a database ?
 
Actually, no.  If you are working in a language like Perl, look at
encrypting the information and then storing it. I think ENCODE() and
DECODE() are too weak.

In Perl I use 'use Crypt::CBC;' and then picked a strong cipher. If you use
Perl, please go to cpan.org for the details.  BTW, the reason for using a
blob type is to avoid truncation: after the data is encrypted, removing
trailing spaces is a BAD THING.

Bill
 
 
 From: Wm Mussatto [EMAIL PROTECTED]
 To: Neil Tompkins [EMAIL PROTECTED]
 CC: mysql@lists.mysql.com
 Subject: Re: ENCODE() and DECODE()
 Date: Mon, 12 Mar 2007 13:54:41 -0700 (PDT)
 
 On Mon, March 12, 2007 13:04, Neil Tompkins said:
   Sorry Bill, I'm unsure what you mean by bin
 My error, too early in the morning here 'blob'
  
  From: William R. Mussatto [EMAIL PROTECTED]
  To: mysql@lists.mysql.com
  Subject: Re: ENCODE() and DECODE()
  Date: Mon, 12 Mar 2007 10:31:53 -0700 (PDT)
  
  On Mon, March 12, 2007 8:04, Neil Tompkins said:
I've been researching the best method to store credit card
 numbers
  in
   a
database which are saved encrypted.  My version of mysql is 3.23
  therefore
I
think the only function I could use is
   
ENCODE() and DECODE().
   
I've tried these functions and they appear to work as I want.
I've
  a
couple
of questions though, can I use varchar when saving the data and
are
  these
functions suitable for my requirements ?
   
Thanks,
Neil
  use 'Bin' versions of fields since the encoded data may be binary.
  
  Bill
  
  
  --
  MySQL General Mailing List
  For list archives: http://lists.mysql.com/mysql
  To unsubscribe:
  http://lists.mysql.com/[EMAIL PROTECTED]
  
  
  
  
 
 
 
 
 
  --
  MySQL General Mailing List
  For list archives: http://lists.mysql.com/mysql
  To unsubscribe:http://lists.mysql.com/[EMAIL PROTECTED]
 
 





 --
 MySQL General Mailing List
 For list archives: http://lists.mysql.com/mysql
 To unsubscribe:http://lists.mysql.com/[EMAIL PROTECTED]





-- 
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/[EMAIL PROTECTED]



Re: ENCODE() and DECODE()

2007-03-12 Thread Wm Mussatto
On Mon, March 12, 2007 13:04, Neil Tompkins said:
 Sorry Bill, I'm unsure what you mean by bin
My error; too early in the morning here: 'blob'.

From: William R. Mussatto [EMAIL PROTECTED]
To: mysql@lists.mysql.com
Subject: Re: ENCODE() and DECODE()
Date: Mon, 12 Mar 2007 10:31:53 -0700 (PDT)

On Mon, March 12, 2007 8:04, Neil Tompkins said:
  I've been researching the best method to store credit card numbers in
 a
  database which are saved encrypted.  My version of mysql is 3.23
therefore
  I
  think the only function I could use is
 
  ENCODE() and DECODE().
 
  I've tried these functions and they appear to work as I want.  I've a
  couple
  of questions though, can I use varchar when saving the data and are
these
  functions suitable for my requirements ?
 
  Thanks,
  Neil
use 'Bin' versions of fields since the encoded data may be binary.

Bill


--
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:
http://lists.mysql.com/[EMAIL PROTECTED]







-- 
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/[EMAIL PROTECTED]