RE: problem

2011-05-12 Thread Gavin Towey
The server will disconnect idle connections after a while.  The wait_timeout 
variable controls how many seconds it will wait.  You can set it for your 
connection when you connect by issuing a query like:

SET SESSION wait_timeout=NNN;

Just give it a large enough value.
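For example, you might check the current value and then raise it for your session (86400 here is only an illustrative value; pick one that covers your idle periods):

```sql
-- See what the server currently allows before it drops an idle connection
SHOW SESSION VARIABLES LIKE 'wait_timeout';

-- Raise the idle timeout for this connection only, e.g. to 24 hours
SET SESSION wait_timeout = 86400;
```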

But also, since your program is going to be running continuously, there are 
many other reasons it could lose the connection.  Your program should detect 
the error, and attempt to re-connect.

Regards,
Gavin Towey

-Original Message-
From: swaroop jois [mailto:jois_swar...@yahoo.com]
Sent: Monday, May 02, 2011 12:35 AM
To: mysql@lists.mysql.com
Subject: problem

Hello friends,

I have MySQL server version 5.0.51a-Ubuntu installed on an Ubuntu 8.04
machine. Let me briefly describe what we are doing. Basically we have built a
server that listens for GPRS connections from clients, accepts data in the
form of packets, and inserts that data into a MySQL database. I run three
commands:

1. Listen for the GPRS connection and display all received packets on the
terminal.
2. Show the number of packets read in the Java serial forwarder (see TinyOS),
which listens on another port.
3. Invoke the Java files that insert the data into the database table.

Initially, when I run the commands, everything works fine: when the server
receives packets, it inserts the data into the table in the MySQL database.

The server keeps listening on the port (i.e. it runs 24*7). The problem
appears when data arrives again after a long idle period, say 12 hours (it is
not necessarily 12 hours). At that point, if I have to insert data again, I
have to recompile the code and rerun all the commands.

The error that is troubling me is: Result for query failed. SQLState = 08003.
I googled the error and found that this SQLState indicates the connection
does not exist.
I don't have any clues. Can anyone help me please?

Regards,
Swaroop

IMPORTANT: This email message is intended only for the use of the individual to 
whom, or entity to which, it is addressed and may contain information that is 
privileged, confidential and exempt from disclosure under applicable law. If 
you are NOT the intended recipient, you are hereby notified that any use, 
dissemination, distribution or copying of this communication is strictly 
prohibited.  If you have received this communication in error, please reply to 
the sender immediately and permanently delete this email. Thank you.

--
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe: http://lists.mysql.com/mysql?unsub=arch...@jab.org



RE: Foreign Key Error

2011-02-14 Thread Gavin Towey
> or column types in the table and the referenced table do not match for
> constraint

The columns Parent and Child are signed integers, while the referenced
categories.ID is unsigned, so the column types do not match.
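One way to fix it would be to make the referencing columns match the unsigned categories.ID shown in the DESCRIBE output below; a sketch, not tested against Victor's schema:

```sql
CREATE TABLE IF NOT EXISTS categoriesRelationships (
  ID     INT UNSIGNED AUTO_INCREMENT PRIMARY KEY,
  Store  VARCHAR(60),
  Parent INT UNSIGNED NOT NULL,
  Child  INT UNSIGNED NOT NULL,
  FOREIGN KEY (Parent) REFERENCES categories (ID),
  FOREIGN KEY (Child)  REFERENCES categories (ID)
) ENGINE=InnoDB;
```

InnoDB also requires an index on the referenced column; categories.ID is the primary key, so that requirement is already satisfied.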

Regards,
Gavin Towey

-Original Message-
From: Victor Subervi [mailto:victorsube...@gmail.com]
Sent: Monday, February 14, 2011 3:09 PM
To: mysql@lists.mysql.com
Subject: Foreign Key Error

Hi;
I have this command:

create table if not exists categoriesRelationships (ID integer
auto_increment primary key, Store varchar(60), Parent integer not null,
foreign key (Parent) references categories (ID), Child integer not null,
foreign key (Child) references categories (ID)) engine=innodb;

show innodb status prints out this:


LATEST FOREIGN KEY ERROR

110214 15:03:43 Error in foreign key constraint of table
test/categoriesRelationships:
foreign key (Parent) references categories (ID), Child integer not null,
foreign key (Child) references categories (ID)) engine=innodb:
Cannot find an index in the referenced table where the
referenced columns appear as the first columns, or column types
in the table and the referenced table do not match for constraint.
Note that the internal storage type of ENUM and SET changed in
tables created with >= InnoDB-4.1.12, and such columns in old tables
cannot be referenced by such columns in new tables.
See
http://dev.mysql.com/doc/refman/5.0/en/innodb-foreign-key-constraints.html
for correct foreign key definition.

mysql> describe categories;
+----------+-----------------+------+-----+---------+----------------+
| Field    | Type            | Null | Key | Default | Extra          |
+----------+-----------------+------+-----+---------+----------------+
| ID       | int(3) unsigned | NO   | PRI | NULL    | auto_increment |
| Store    | varchar(60)     | YES  |     | NULL    |                |
| Category | varchar(40)     | YES  |     | NULL    |                |
| Parent   | varchar(40)     | YES  |     | NULL    |                |
+----------+-----------------+------+-----+---------+----------------+
4 rows in set (0.00 sec)

Please advise.
TIA,
Victor




RE: map polygon data for popular us cities

2011-02-02 Thread Gavin Towey
Openstreetmap.org is as close as you'll get. I'd be surprised if they have 
shapes for cities beyond just lat/lon point data, but they should have shapes 
data for zips, counties, states and countries if I recall correctly.

-Original Message-
From: vegiv...@gmail.com [mailto:vegiv...@gmail.com] On Behalf Of Johan De 
Meersman
Sent: Wednesday, February 02, 2011 7:59 AM
To: viraj
Cc: mysql@lists.mysql.com
Subject: Re: map polygon data for popular us cities

On Wed, Feb 2, 2011 at 11:30 AM, viraj kali...@gmail.com wrote:

 dear list,
 where can i find a list of map polygons for united states cities? any
 open database? or tool to obtain correct coordinates?


A bit offtopic here, but I suspect that most such databases will be
proprietary and thus payable through the nose. Have a look at the
OpenStreetMap project, I suspect their database might be accessible under
some open license.

--
Bier met grenadyn
Is als mosterd by den wyn
Sy die't drinkt, is eene kwezel
Hy die't drinkt, is ras een ezel




RE: Slow query on MySQL4 server doing simple inner join of two InnoDB tables

2011-01-24 Thread Gavin Towey
If you show the EXPLAIN SELECT .. output, and the table structure, someone will 
be able to give a more definite answer.

-Original Message-
From: Kendall Gifford [mailto:zettab...@gmail.com]
Sent: Monday, January 24, 2011 2:29 PM
To: mysql@lists.mysql.com
Subject: Re: Slow query on MySQL4 server doing simple inner join of two InnoDB 
tables

On Mon, Jan 24, 2011 at 2:20 PM, Kendall Gifford zettab...@gmail.com wrote:



 On Mon, Jan 24, 2011 at 3:40 AM, Joerg Bruehe joerg.bru...@oracle.com wrote:

 Hi everybody!


 Shawn Green (MySQL) wrote:
  On 1/21/2011 14:21, Kendall Gifford wrote:
  Hello everyone, I've got a database on an old Fedora Core 4 server
  running
  MySQL 4 (mysql-server.x86_64 4.1.12-2.FC4.1). The database in question
  has
  just two (InnoDB) tables:
 
  messages (approx 2.5 million records)
  recipients (approx 6.5 million records)
 
  [[ ... see the original post for the schema details ... ]]
 
 
  I have the following query that is just too slow:
 
  SELECT messages.* FROM messages
  INNER JOIN recipients ON recipients.message_id = messages.id
  WHERE recipients.employee_id = X
  GROUP BY messages.id
  ORDER BY sent_at DESC
  LIMIT 0, 25;
 
  This takes about 44 seconds on average. [[...]]
 
 
  You need to get rid of the GROUP BY to make this go faster. You can do
  that by running two queries, one to pick the list of unique
  recipients.message_id values that match your where condition then
  another to actually retrieve the message data. [[...]]
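The two-query approach described above might be sketched like this (table and column names are taken from the thread; purely illustrative):

```sql
-- Query 1: the distinct message ids for this employee
SELECT DISTINCT message_id
FROM recipients
WHERE employee_id = X;

-- Query 2: fetch the messages, substituting the id list from query 1
SELECT *
FROM messages
WHERE id IN ( /* ids from query 1 */ )
ORDER BY sent_at DESC
LIMIT 0, 25;
```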

 I don't want to contradict Shawn, but currently I fail to see the need
 for the GROUP BY: Joining like this

  messages INNER JOIN recipients ON recipients.message_id = messages.id
  WHERE recipients.employee_id = X

 can return only one row, unless there are multiple recipients records
 for the same values of message_id and employee_id.

 I don't know whether that can happen in the poster's application, and
 whether it would cause trouble if the result line would occur multiple
 times.


 In my application, there CAN in fact be several recipients records with
 both the same message_id foreign key value AND the same employee_id
 value (some employees may be a recipient of a message several times over via
 alternative addresses and/or aliases). However, as I rework things, I could
 probably rework the application logic to drop the GROUP BY and just cope, in
 code, with the extra message records in my result set. (Just FYI, the SQL
 query is simply the default query as created by Rails or, more specifically,
 ActiveRecord 2.3.9, which I can/will be optimizing.)

 I will additionally be moving this database to a new server. However, for
 academic interest, I'll see if I can make time to post the query time(s)
 once I change the app, before moving the database to a new (and better
 configured) server.


Just an update for posterity, simply removing the GROUP BY clause of my
query above has, overall, no noticeable effect on performance. I suspect
server configuration, as pointed out by Reindl, is too much of a bottleneck
and is what I first need to change (working on that now). Perhaps the
removal of GROUP BY would/will be noticeable if the server configuration for
InnoDB tables wasn't so horrendous. I'll find out...

--
Kendall Gifford
zettab...@gmail.com




RE: Mysql Event scheduler

2011-01-10 Thread Gavin Towey
1. no
2. yes
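(1) is a no because mysqldump is an external program and events can only run SQL; backups belong in cron or another OS-level scheduler. For (2), a sketch of what an event could look like (table name and schedule are illustrative; loop over your tables or wrap this in a stored procedure as needed):

```sql
-- The scheduler must be enabled first
SET GLOBAL event_scheduler = ON;

CREATE EVENT analyze_nightly
  ON SCHEDULE EVERY 1 DAY
  DO ANALYZE TABLE mydb.mytable;
```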

-Original Message-
From: Machiel Richards [mailto:machi...@rdc.co.za]
Sent: Monday, January 10, 2011 2:07 AM
To: mysql mailing list
Subject: Mysql Event scheduler

Good day all

Can anyone perhaps off hand tell me whether the following two
tasks will be able to be scheduled with the MySQL event scheduler?

1. MySQL backups using mysqldump
2. Analyzing of tables (all tables for all databases)

 Regards
Machiel



RE: This just seems to slow

2011-01-03 Thread Gavin Towey
I much prefer LOAD DATA INFILE to mysqlimport.  The issue looks like you have a 
file with two columns, and a table with three.  You will probably need to be 
more specific about which columns map to which fields in the file.  Please 
report the error with any commands you run.

Also, most importantly, how slow is slow?  Have you measured the import speed 
in terms of rows per second?

The largest factor I have found that influences overall import speed is the 
innodb_buffer_pool_size.  Make sure you're not running with the default size.  
A buffer pool that's large enough to contain the secondary indexes on the table 
will also help a lot.
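For the two-columns-into-a-three-column-table case, an explicit column list tells LOAD DATA which table columns the file's fields feed. A sketch using the names from the quoted message below, not tested against the poster's data:

```sql
LOAD DATA LOCAL INFILE 't_dmu_history.txt'
INTO TABLE t_dmu_history
FIELDS TERMINATED BY ','
LINES TERMINATED BY '\r\n'
(DM_History_DM_ID, DM_History_Customer_ID);
```

The auto-increment column t_dmu_history_id is omitted from the list, so MySQL fills it in.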



-Original Message-
From: Jerry Schwartz [mailto:je...@gii.co.jp]
Sent: Monday, January 03, 2011 7:56 AM
To: 'mos'; mysql@lists.mysql.com
Subject: RE: This just seems to slow

Okay, I have a confession to make: I have never gotten Load Data Infile or
mysqlimport to work.

Here's my CSV file, named t_dmu_history.txt:

13071,299519
13071,299520
13071,299521
13071,299522
13071,299524
13071,299526
13071,299527
...

Here's my mysqlimport command:

mysqlimport -uaccess -pxxx --delete 
--columns=`dm_history_dm_id`,`DM_History_Customer_ID`
 --local --silent
--fields-terminated-by=',' --lines-terminated-by='\r\n' --host=localhost
maintable_usa t_dmu_history.txt

I'm running on Windows Vista, and mysqlimport is Ver 3.7 Distrib 5.1.31, for
Win32 (ia32)

It runs for awhile, but I wind up with only one record:

localhost> select * from t_dmu_history;
+------------------+------------------+------------------------+
| t_dmu_history_id | DM_History_DM_ID | DM_History_Customer_ID |
+------------------+------------------+------------------------+
|                1 |            13071 |                   NULL |
+------------------+------------------+------------------------+
1 row in set (0.00 sec)

Obviously mysqlimport is parsing the input file incorrectly, but I don't know
why.

Here's the table itself:

+------------------------+---------+------+-----+---------+----------------+
| Field                  | Type    | Null | Key | Default | Extra          |
+------------------------+---------+------+-----+---------+----------------+
| t_dmu_history_id       | int(11) | NO   | PRI | NULL    | auto_increment |
| DM_History_DM_ID       | int(11) | YES  | MUL | NULL    |                |
| DM_History_Customer_ID | int(11) | YES  | MUL | NULL    |                |
+------------------------+---------+------+-----+---------+----------------+

   Table: t_dmu_history
Create Table: CREATE TABLE `t_dmu_history` (
  `t_dmu_history_id` int(11) NOT NULL AUTO_INCREMENT,
  `DM_History_DM_ID` int(11) DEFAULT NULL,
  `DM_History_Customer_ID` int(11) DEFAULT NULL,
  PRIMARY KEY (`t_dmu_history_id`),
  KEY `DM_History_DM_ID` (`DM_History_DM_ID`),
  KEY `DM_History_Customer_ID` (`DM_History_Customer_ID`)
) ENGINE=InnoDB AUTO_INCREMENT=2 DEFAULT CHARSET=utf8

Regards,

Jerry Schwartz
Global Information Incorporated
195 Farmington Ave.
Farmington, CT 06032

860.674.8796 / FAX: 860.674.8341
E-mail: je...@gii.co.jp
Web site: www.the-infoshop.com


-Original Message-
From: mos [mailto:mo...@fastmail.fm]
Sent: Sunday, January 02, 2011 11:42 PM
To: mysql@lists.mysql.com
Subject: Re: This just seems to slow

Jerry,
   Use Load Data Infile when loading a lot of data. Whoever is giving
you the data should be able to dump it to a CSV file. Your imports will be
much faster.

Mike

At 07:51 PM 1/2/2011, you wrote:
I'm trying to load data into a simple table, and it is taking many hours
(and
still not done). I know hardware, etc., can have a big effect, but NOTHING
should have this big an effect.

=
us-gii> show create table t_dmu_history\G
*************************** 1. row ***************************
Table: t_dmu_history
Create Table: CREATE TABLE `t_dmu_history` (
   `t_dmu_history_id` int(11) NOT NULL AUTO_INCREMENT,
   `DM_History_DM_ID` int(11) DEFAULT NULL,
   `DM_History_Customer_ID` int(11) DEFAULT NULL,
   PRIMARY KEY (`t_dmu_history_id`),
   KEY `DM_History_DM_ID` (`DM_History_DM_ID`),
   KEY `DM_History_Customer_ID` (`DM_History_Customer_ID`)
) ENGINE=InnoDB AUTO_INCREMENT=1446317 DEFAULT CHARSET=utf8
=

Here's a snip of what the input file looks like:
=
SET autocommit=1;

#
# Dumping data for table 'T_DMU_History'
#

INSERT INTO `T_DMU_History` (`DM_History_DM_ID`, `DM_History_Customer_ID`)
VALUES (13071, 299519);
INSERT INTO `T_DMU_History` (`DM_History_DM_ID`, `DM_History_Customer_ID`)
VALUES (13071, 299520);
INSERT INTO `T_DMU_History` (`DM_History_DM_ID`, `DM_History_Customer_ID`)
VALUES (13071, 299521);
INSERT INTO `T_DMU_History` (`DM_History_DM_ID`, `DM_History_Customer_ID`)
VALUES (13071, 299522);
INSERT INTO `T_DMU_History` (`DM_History_DM_ID`, `DM_History_Customer_ID`)
VALUES (13071, 299524);
INSERT INTO `T_DMU_History` (`DM_History_DM_ID`, `DM_History_Customer_ID`)
VALUES (13071, 299526);
INSERT INTO `T_DMU_History` (`DM_History_DM_ID`, 

RE: 4 days to drop Index

2010-12-26 Thread Gavin Towey
The first thing to check is either that your key_buffer_Size is too small (if 
it's a myisam table) or your innodb_buffer_pool_size is too small (if it's a 
innodb table).

After that, most of the time for alter table depends on disk IO.  On decent 
hardware a rate of 4G/hr is common, so around 6 hours is much more reasonable 
for your operation.
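For instance, you can check both settings like this (whether either matters depends on the table's storage engine):

```sql
-- Current sizes in bytes; the compiled-in defaults are far too small
-- for a 24.7 GB table
SHOW VARIABLES LIKE 'innodb_buffer_pool_size';
SHOW VARIABLES LIKE 'key_buffer_size';
```

In servers of this vintage, innodb_buffer_pool_size can only be changed in my.cnf (under [mysqld]) followed by a restart; size it to your available RAM.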

Regards,
Gavin Towey


-Original Message-
From: yueliangdao0...@gmail.com [mailto:yueliangdao0...@gmail.com] On Behalf Of 
???
Sent: Thursday, December 23, 2010 11:29 PM
To: Adarsh Sharma
Cc: mysql@lists.mysql.com
Subject: Re: 4 days to drop Index

Hi.
   I think you should upgrade your hardware and adjust your mysqld's
parameters. Then your job will be fine.

David Yeung, In China, Beijing.
My First Blog:http://yueliangdao0608.cublog.cn
My Second Blog:http://yueliangdao0608.blog.51cto.com
My Msn: yueliangdao0...@gmail.com



2010/12/23 Adarsh Sharma adarsh.sha...@orkash.com

 Dear all,

 Things becomes too lazy when it takes too much time.

 I am suffering from this problem when droping index of size 17.7 GB on a
 table of size 24.7 GB.
 This table have some more indexes of different sizes.
 It takes near about 4 days to drop the index.

 Can anyone Please guide me whether it is fine in MySQL or things needed to
 work on.


 Thanks & Regards

 Adarsh Sharma



RE: explain shows type = ALL for indexed column

2010-12-07 Thread Gavin Towey
MySQL often handles subqueries poorly.  It's best to rewrite that as a JOIN
instead:

http://dev.mysql.com/doc/refman/5.1/en/rewriting-subqueries.html

If you have further questions after doing that, show the table structures, the 
query, and the explain output.
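The rewrite might look like this sketch (using the poster's placeholder names; DISTINCT stands in for the de-duplication that IN performs implicitly):

```sql
SELECT DISTINCT t1.fields
FROM Database1.table1 AS t1
JOIN Database2.table2 AS t2
  ON t1.indexed_field = t2.field;
```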

-Original Message-
From: Aaron Turner [mailto:synfina...@gmail.com]
Sent: Tuesday, December 07, 2010 11:10 AM
To: mysql@lists.mysql.com
Subject: explain shows type = ALL for indexed column

Basically, I'm doing a:

 select fields FROM Database1.table1 WHERE indexed_field IN (SELECT
field FROM Database2.table2 ...);

It's taking about 40sec to execute where table1 (InnoDB) only has
about 33k records and my subselect is returning about 600 records.
Explain shows that it's doing a type=ALL against table1, even though
the indexed_field is an indexed varchar(64).   I've verified the
subselect executes in under 1 second so I know it's not the problem.

I'm guessing that MySQL just can't keep everything in memory at once
to use the index since the indexed_field is relatively large.
Normally, I'd compare against an integer primary key, but that's not
possible and I can't modify the schema to make it possible.

I've been reading the my.cnf documentation and various tuning
articles, but it's not clear what variables I should tweak to solve
this specific issue.  Server is a dual-quad core w/ 4GB of RAM,
although it's not dedicated to MySQL (webserver and some other
database centric background jobs run).   Table1 however is on a
dedicated RAID1 disk pair and is getting regular inserts/deletes (it's
a log table).

Any advice would be appreciated!

--
Aaron Turner
http://synfin.net/ Twitter: @synfinatic
http://tcpreplay.synfin.net/ - Pcap editing and replay tools for Unix & Windows
Those who would give up essential Liberty, to purchase a little temporary
Safety, deserve neither Liberty nor Safety.
-- Benjamin Franklin
carpe diem quam minimum credula postero



This message contains confidential information and is intended only for the 
individual named.  If you are not the named addressee, you are notified that 
reviewing, disseminating, disclosing, copying or distributing this e-mail is 
strictly prohibited.  Please notify the sender immediately by e-mail if you 
have received this e-mail by mistake and delete this e-mail from your system. 
E-mail transmission cannot be guaranteed to be secure or error-free as 
information could be intercepted, corrupted, lost, destroyed, arrive late or 
incomplete, or contain viruses. The sender therefore does not accept liability 
for any loss or damage caused by viruses or errors or omissions in the contents 
of this message, which arise as a result of e-mail transmission. [FriendFinder 
Networks, Inc., 220 Humboldt Court, Sunnyvale, CA 94089, USA, FriendFinder.com




RE: Bind-address use?

2010-12-07 Thread Gavin Towey
> intermittent connection problems (error 2013)

Look at SHOW PROCESSLIST when you're having a connection problem.   If you see
lots of 'unauthenticated user' entries in the list, then it means you're having
DNS problems.

Typically the best way to handle this is to set skip-name-resolve, and to use
IP addresses instead of hostnames in your MySQL user accounts for
authentication.
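In my.cnf that could look like this (illustrative fragment):

```
[mysqld]
skip-name-resolve
```

Keep in mind that with name resolution off, accounts defined with hostnames will no longer match; grants need IP addresses or wildcards instead.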


-Original Message-
From: Claudio Nanni [mailto:claudio.na...@gmail.com]
Sent: Tuesday, December 07, 2010 1:02 AM
To: Machiel Richards
Cc: mysql mailing list
Subject: Re: Bind-address use?

If you don't specify it, MySQL will listen on all interfaces; that is the
standard option.

You don't break anything.

Claudio



2010/12/7 Machiel Richards machi...@rdc.co.za

  what will the effect be if we disable this option in the config file by
 means of hashing out the option?

 will it break the database or just listen on any device?


 -Original Message-
 *From*: Claudio Nanni claudio.na...@gmail.com
 *To*: Machiel Richards machi...@rdc.co.za
 *Cc*: mysql mailing list mysql@lists.mysql.com
 *Subject*: Re: Bind-address use?
 *Date*: Tue, 7 Dec 2010 09:22:31 +0100



 Hi,
 You may have multiple interfaces, each with multiple IPs (private or public).
 With bind-address you force MySQL to listen on a specific IP address
 instead of any. If you have both public and private IPs, binding to the
 private one, for example, prevents connections from the internet.
 Just keep in mind a server may belong to multiple subnets, and MySQL
 normally listens for connections coming from any of them.
 Cheers
 Claudio
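A minimal my.cnf sketch of the binding Claudio describes (the address is illustrative; use one of your server's actual IPs):

```
[mysqld]
# listen only on the private interface
bind-address = 192.168.1.10
```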

 On Dec 7, 2010 9:16 AM, Machiel Richards machi...@rdc.co.za wrote:
  Hi All
 
  I am unable to find an answer on what the bind-address in
  mysql-5.1 actually does.
 
  From the config file it says that it provides the same function
  as the previous skip-networking function.
 
  However, let's say I configure the bind-address value to the IP
  of the machine, will this cause any issues with applications connecting
  to the database from various other machines?
 
 
  My reason for looking into this is that currently we are
  experiencing intermittent connection problems (error 2013), but I have
  not been able to pinpoint the issue and was wondering if this is not
  perhaps the cause.
 
 
  Thank you in advance for all responses
 
  Regards
  Machiel




--
Claudio




RE: Another replication question

2010-11-24 Thread Gavin Towey
If you use a chain of masters, you can accomplish the same effect:

Master1 -> Master2 -> Master3 -> Slave

However I don't recommend this since the more links you have in a replication 
chain, the easier it is to break it in a way that's very not-fun to deal with.


-Original Message-
From: Rolando Edwards [mailto:redwa...@logicworks.net]
Sent: Wednesday, November 24, 2010 6:53 AM
To: Machiel Richards; mysql mailing list
Subject: RE: Another replication question

MySQL, by design, cannot do that.

A DB server can be master to multiple slaves, but a slave can replicate from
only one master. Think of the CHANGE MASTER TO command: its internal design
cannot accommodate reading from more than one set of relay logs.

You could attempt something convoluted, like
1) STOP SLAVE;
2) CHANGE MASTER TO Master 1
3) START SLAVE;
4) Read and process some binary log transactions, wait till you are zero sec 
behind master
5) STOP SLAVE;
6) CHANGE MASTER TO Master 2
7) START SLAVE;
8) Read and process some binary log transactions, wait till you are zero sec
behind master
9) Repeat from step 1

Make sure each master is updating only one specific set of databases, mutually
exclusive from the other masters
Make sure you properly record the log file and log position from each master
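Spelled out, one leg of that loop might look like the following (host, credentials, and log coordinates are placeholders; supply the values you recorded for each master):

```sql
STOP SLAVE;
CHANGE MASTER TO
  MASTER_HOST     = 'master1.example.com',
  MASTER_USER     = 'repl',
  MASTER_PASSWORD = '...',
  MASTER_LOG_FILE = 'mysql-bin.000123',  -- last recorded file for this master
  MASTER_LOG_POS  = 4567;                -- last recorded position
START SLAVE;
-- wait until SHOW SLAVE STATUS reports Seconds_Behind_Master = 0,
-- record the new coordinates, then repeat for the next master
```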

I would never try this under normal circumstances.

I think this was described in the High Performance MySQL book
http://www.amazon.com/dp/0596101716?tag=xaprb-20


Rolando A. Edwards
MySQL DBA (CMDBA)

155 Avenue of the Americas, Fifth Floor
New York, NY 10013
212-625-5307 (Work)
201-660-3221 (Cell)
AIM & Skype : RolandoLogicWorx
redwa...@logicworks.net
http://www.linkedin.com/in/rolandoedwards


-Original Message-
From: Machiel Richards [mailto:machi...@rdc.co.za]
Sent: Wednesday, November 24, 2010 7:20 AM
To: mysql mailing list
Subject: Another replication question

Hi All

I am back once again with another replication question (maybe this
can also be handled by MMM but not sure) this time for a different
client.

We are trying to find out how to setup 3 different masters to
replicate to a single slave server (without the need to have 3 different
instances running on the slave machine).

Does anybody have any ideas?

Any ideas will be greatly appreciated.

Regards
Machiel



RE: question about restoring...

2010-11-09 Thread Gavin Towey
No, you should import the data into another instance of mysql to extract the 
records.

Regards,
Gavin Towey


-Original Message-
From: Andy Wallace [mailto:awall...@ihouseweb.com]
Sent: Tuesday, November 09, 2010 10:34 AM
To: mysql list
Subject: question about restoring...

So, I got a request this morning to recover some specific records for
a client. I just want a handful of records from a couple of tables here.
I have a copy of the INNODB files for these two tables - is there a way
to extract the table contents from these files short of a full import?

thanks,
ansdy


--
Andy Wallace
iHOUSEweb, Inc.
awall...@ihouseweb.com
(866) 645-7700 ext 219
--
There are two ways to build software:
Make it so simple that there are obviously no bugs,
or make it so complex that there are no obvious bugs.




RE: question about restoring...

2010-11-09 Thread Gavin Towey
Not if he has the raw innodb files.


-Original Message-
From: Jerry Schwartz [mailto:je...@gii.co.jp]
Sent: Tuesday, November 09, 2010 11:05 AM
To: Gavin Towey; 'Andy Wallace'; 'mysql list'
Subject: RE: question about restoring...

That's overkill.

You should be able to import the data into another database within the same
instance, unless the file is too big to handle.

Regards,

Jerry Schwartz
Global Information Incorporated
195 Farmington Ave.
Farmington, CT 06032

860.674.8796 / FAX: 860.674.8341
E-mail: je...@gii.co.jp
Web site: www.the-infoshop.com

-Original Message-
From: Gavin Towey [mailto:gto...@ffn.com]
Sent: Tuesday, November 09, 2010 1:50 PM
To: Andy Wallace; mysql list
Subject: RE: question about restoring...

No, you should import the data into another instance of mysql to extract the
records.

Regards,
Gavin Towey


-Original Message-
From: Andy Wallace [mailto:awall...@ihouseweb.com]
Sent: Tuesday, November 09, 2010 10:34 AM
To: mysql list
Subject: question about restoring...

So, I got a request this morning to recover some specific records for
a client. I just want a handful of records from a couple of tables here.
I have a copy of the INNODB files for these two tables - is there a way
to extract the table contents from these files short of a full import?

thanks,
ansdy


--
Andy Wallace
iHOUSEweb, Inc.
awall...@ihouseweb.com
(866) 645-7700 ext 219
--
There are two ways to build software:
Make it so simple that there are obviously no bugs,
or make it so complex that there are no obvious bugs.





RE: Running Queries When INSERTing Data?

2010-11-08 Thread Gavin Towey
I'm not sure I understand exactly what you mean, but I think you just need to 
keep a timestamp associated with each row as it is inserted, put an index on 
it, then you can select new data just by using the appropriate time range.
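
A minimal sketch of that approach (table and column names hypothetical):

ALTER TABLE client_data
  ADD COLUMN inserted_at TIMESTAMP NOT NULL DEFAULT CURRENT_TIMESTAMP,
  ADD INDEX idx_inserted_at (inserted_at);

SELECT * FROM client_data
WHERE inserted_at > '2010-11-08 03:30:00';

Each row gets stamped automatically on insert, and the range scan on the
indexed column returns only rows newer than the last time you checked.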

Also, if you're parsing files into tab delimited format, you don't need to 
write a separate parser to insert rows line by line.  MySQL has LOAD DATA 
INFILE which takes delimited text files and inserts data in bulk, which is much 
faster than inserting line by line.
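
For example (file path and table name hypothetical; the field list must match
the column order of the tab-delimited file):

LOAD DATA LOCAL INFILE '/path/to/client_data.tsv'
INTO TABLE client_data
FIELDS TERMINATED BY '\t'
LINES TERMINATED BY '\n';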

Regards,
Gavin Towey

-Original Message-
From: Hal Vaughan [mailto:h...@halblog.com]
Sent: Monday, November 08, 2010 10:18 AM
To: mysql@lists.mysql.com
Subject: Running Queries When INSERTing Data?

I'm redesigning some software that's been in use since 2002.  I'll be working 
with databases that will start small and grow along the way.

In the old format, data would come to us in mega-big text files that had to be 
parsed and manipulated and so on with Perl to remove crap and finally produce 
one tab delimited file.  Once that file was created, another simple program 
would go through and use the data in each line for an INSERT statement that 
would put the data in a table.  This table also has an Idx field that is an 
auto-incrementing primary key for the table.

Each night at 3:30 am, a program would run and would go through the same 
process for each client.  I never timed it, but it could take something like 
30-60 seconds per client, but timing wasn't a major issue, since it had a LONG 
time from then until new data would be inserted into the DB.  The SELECT 
statements to pull the data for each client involve a number of AND and OR 
conditions.  The first one of these would create a temporary table with its 
results, then another long SELECT statement would create a 2nd temporary table 
by filtering the data out more.  This would continue for a few temporary tables 
until the data was filtered.  Then it would be packaged up and encrypted, then 
sent out to the client, who has a program on his computer to read that data and 
print it out if desired.

This has worked, but for a number of reasons, a once-a-day data pull and send 
won't work as well with the new design.  The program on the clients' computers 
will be able to access a special server just for them directly.  (I know the 
concept of a server for each human client sounds inefficient, but it actually 
improves operations in a number of ways.)  So each server will only have to 
provide data for one client.

The big difference is that I'd like to make it so they can access the data 
live, or almost live.  I don't mean all the data, but the subset that meets 
their needs.  In other words, the equivalent of what was sent to them daily in 
the old system.  Their individual servers will still get the big tab-delimited 
file that will still be INSERTed in to their DB line by line.  But I'd like to 
be able to select from the new data as it comes in, once it's been given a new 
number in the Idx field.

Is there any way to run a row of data through SELECT queries as it is being 
INSERTed into a table -- or just after?

The reason for doing this, instead of INSERTing all the data, then running a 
program is that as the database grows, pulling out the data will take longer 
and longer, so if there were a way to screen data as it comes in, that would 
make it easier to provide instantly available data.

I also know my knowledge of MySQL is quite limited, so if this can be done in 
better ways, I'd be interested in hearing about them.

Thank you!



Hal



RE: Running Queries When INSERTing Data?

2010-11-08 Thread Gavin Towey
If you are selecting records within a certain time range that is a subset of 
the entire set of data, then indexes which use the timestamp column will be 
fine.

More generally: create appropriate indexes to optimize queries.

Although typically, you should design the database to be correct first, then 
identify performance bottlenecks in the production system, and fix them 
individually.  Trying to predict performance is difficult.  If you're 
concerned, then it's best to create a test that reproduces the exact situation 
of which you are unsure so you can observe real performance, and experiment 
with optimizations.
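
For example, EXPLAIN will confirm whether such a range query actually uses the
timestamp index (table and column names hypothetical):

EXPLAIN SELECT * FROM client_data
WHERE inserted_at > NOW() - INTERVAL 1 HOUR;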


-Original Message-
From: Hal Vaughan [mailto:h...@halblog.com]
Sent: Monday, November 08, 2010 10:39 AM
To: mysql@lists.mysql.com
Subject: Re: Running Queries When INSERTing Data?


On Nov 8, 2010, at 1:23 PM, Gavin Towey wrote:

 I'm not sure I understand exactly what you mean, but I think you just need to 
 keep a timestamp associated with each row as it is inserted, put an index on 
 it, then you can select new data just by using the appropriate time range.

But won't that take just as long as any other query?  Or will it be sped up 
because all the matching records would be adjacent to each other -- like all 
at the end?

 Also, if you're parsing files into tab delimited format, you don't need to 
 write a separate parser to insert rows line by line.  MySQL has LOAD DATA 
 INFILE which takes delimited text files and inserts data in bulk, which is 
 much faster than inserting line by line.

THANKS!  Is this new?  I never read about it before, but then again, from about 
2005-2006 until now the system was running smoothly and that's a long gap with 
no programming work!


Hal


 Regards,
 Gavin Towey

 -Original Message-
 From: Hal Vaughan [mailto:h...@halblog.com]
 Sent: Monday, November 08, 2010 10:18 AM
 To: mysql@lists.mysql.com
 Subject: Running Queries When INSERTing Data?

 I'm redesigning some software that's been in use since 2002.  I'll be working 
 with databases that will start small and grow along the way.

 In the old format, data would come to us in mega-big text files that had to 
 be parsed and manipulated and so on with Perl to remove crap and finally 
 produce one tab delimited file.  Once that file was created, another simple 
 program would go through and use the data in each line for an INSERT 
 statement that would put the data in a table.  This table also has an Idx 
 field that is an auto-incrementing primary key for the table.

 Each night at 3:30 am, a program would run and would go through the same 
 process for each client.  I never timed it, but it could take something like 
 30-60 seconds per client, but timing wasn't a major issue, since it had a 
 LONG time from then until new data would be inserted into the DB.  The SELECT 
 statements to pull the data for each client involve a number of AND and OR 
 conditions.  The first one of these would create a temporary table with its 
 results, then another long SELECT statement would create a 2nd temporary 
 table by filtering the data out more.  This would continue for a few 
 temporary tables until the data was filtered.  Then it would be packaged up 
 and encrypted, then sent out to the client, who has a program on his computer 
 to read that data and print it out if desired.

 This has worked, but for a number of reasons, a once-a-day data pull and send 
 won't work as well with the new design.  The program on the clients' 
 computers will be able to access a special server just for them directly.  (I 
 know the concept of a server for each human client sounds inefficient, but it 
 actually improves operations in a number of ways.)  So each server will only 
 have to provide data for one client.

 The big difference is that I'd like to make it so they can access the data 
 live, or almost live.  I don't mean all the data, but the subset that meets 
 their needs.  In other words, the equivalent of what was sent to them daily 
 in the old system.  Their individual servers will still get the big 
 tab-delimited file that will still be INSERTed in to their DB line by line.  
 But I'd like to be able to select from the new data as it comes in, once it's 
 been given a new number in the Idx field.

 Is there any way to run a row of data through SELECT queries as it is being 
 INSERTed into a table -- or just after?

 The reason for doing this, instead of INSERTing all the data, then running a 
 program is that as the database grows, pulling out the data will take longer 
 and longer, so if there were a way to screen data as it comes in, that would 
 make it easier to provide instantly available data.

 I also know my knowledge of MySQL is quite limited, so if this can be done in 
 better ways, I'd be interested in hearing about them.

 Thank you!



 Hal

RE: Innodb can't start

2010-11-02 Thread Gavin Towey
Once you get InnoDB corruption like this, you generally have to dump all your 
data, shut down MySQL, wipe out all the InnoDB tables and files, then restart 
MySQL and reimport.

The error log points to 
http://dev.mysql.com/doc/refman/5.0/en/forcing-recovery.html, which explains 
how to force InnoDB to start so you can get your data out.

Regards,
Gavin Towey
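
A typical starting point from that manual page is to add innodb_force_recovery 
to my.cnf, beginning at the lowest level and raising it (the levels run 1 
through 6) only as far as needed to get mysqld up for the dump:

[mysqld]
innodb_force_recovery = 1

With the server started this way, run mysqldump on the affected databases, 
then remove the option, wipe the InnoDB files, and reimport.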

-Original Message-
From: Julien Jabouin [mailto:chatlumo@gmail.com]
Sent: Sunday, October 31, 2010 5:27 AM
To: mysql@lists.mysql.com
Subject: Innodb can't start

Hello,

I have a database with tables in innodb and from today database can't be start.

I don't know what to do, if your can help, this my mysql log error :

Oct 31 13:18:16 myserver mysqld[13681]: 101031 13:18:16 [Note]
/usr/sbin/mysqld: Normal shutdown
Oct 31 13:18:16 myserver mysqld[13681]:
Oct 31 13:18:16 myserver mysqld[13681]: 101031 13:18:16  InnoDB:
Starting shutdown...
Oct 31 13:18:18 myserver mysqld[13681]: 101031 13:18:18  InnoDB:
Shutdown completed; log sequence number 6 2573408134
Oct 31 13:18:18 myserver mysqld[13681]: 101031 13:18:18 [Note]
/usr/sbin/mysqld: Shutdown complete
Oct 31 13:18:18 myserver mysqld[13681]:
Oct 31 13:18:18 myserver mysqld_safe[14191]: ended
Oct 31 13:18:19 myserver mysqld_safe[14258]: started
Oct 31 13:18:19 myserver mysqld[14261]: 101031 13:18:19  InnoDB:
Started; log sequence number 6 2573408134
Oct 31 13:18:19 myserver mysqld[14261]: 101031 13:18:19 [Note]
/usr/sbin/mysqld: ready for connections.
Oct 31 13:18:19 myserver mysqld[14261]: Version: '5.0.51a-24+lenny3'
socket: '/var/run/mysqld/mysqld.sock'  port: 3306  (Debian)
Oct 31 13:18:20 myserver /etc/mysql/debian-start[14295]: Upgrading
MySQL tables if necessary.
Oct 31 13:18:20 myserver /etc/mysql/debian-start[14300]: Looking for
'mysql' in: /usr/bin/mysql
Oct 31 13:18:20 myserver /etc/mysql/debian-start[14300]: Looking for
'mysqlcheck' in: /usr/bin/mysqlcheck
Oct 31 13:18:20 myserver /etc/mysql/debian-start[14300]: This
installation of MySQL is already upgraded to 5.0.51a, use --force if
you still need to run mysql_upgrade
Oct 31 13:18:20 myserver /etc/mysql/debian-start[14307]: Checking for
insecure root accounts.
Oct 31 13:18:20 myserver /etc/mysql/debian-start[14311]: Triggering
myisam-recover for all MyISAM tables
Oct 31 13:18:22 myserver mysqld[14261]: InnoDB: Database page
corruption on disk or a failed
Oct 31 13:18:22 myserver mysqld[14261]: InnoDB: file read of page 178137.
Oct 31 13:18:22 myserver mysqld[14261]: InnoDB: You may have to
recover from a backup.
Oct 31 13:18:22 myserver mysqld[14261]: 101031 13:18:22  InnoDB: Page
dump in ascii and hex (16384 bytes):
Oct 31 13:18:22 myserver mysqld[14261]:  len 16384; hex
4eafb5eb0002b7d900061aaac14145bf000200ba800500aa00020002000303b0f391000179fbc3213332c3213272010002001b696e66696d756d0004000b73757072656d756d0010001100168000124367e422b00026000265530019001680001245cdc328ce1ead000265540021ffc680001247f7abe58f3de20006329
...
...
...
Oct 31 13:18:22 myserver mysqld[14261]:
000

Oct 31 13:18:22 myserver mysqld[14261]:
007000632e150a1b1aaac141; asc N
  AE   y!32
   !2r infimum  supremum Cg   eS E  (
   eT

RE: mySql versus Sql Server performance

2010-10-26 Thread Gavin Towey
It's not much, but the dataset is definitely larger than your buffer pool.  You 
can see how much data plus index space InnoDB is holding with:

SELECT SUM(data_length + index_length) AS data_size
FROM INFORMATION_SCHEMA.TABLES
WHERE ENGINE = 'InnoDB';

Then set innodb_buffer_pool_size to at least that number in my.cnf and restart 
the server (the buffer pool size is not a dynamic variable in MySQL 5.1).

I don't have much experience running MySQL on Windows; I think much more time 
is spent optimizing server performance on Linux-based systems than on 
Windows.


-Original Message-
From: Patrick Thompson [mailto:patrick.thomp...@channelintelligence.com]
Sent: Monday, October 25, 2010 2:24 PM
To: Gavin Towey; mysql@lists.mysql.com
Subject: RE: mySql versus Sql Server performance

Here's the innodb stuff - although the largest data set I've used in the stats 
run is around 20MB, which doesn't seem like much to me.

'innodb_adaptive_hash_index', 'ON'
'innodb_additional_mem_pool_size', '2097152'
'innodb_autoextend_increment', '8'
'innodb_autoinc_lock_mode', '1'
'innodb_buffer_pool_size', '49283072'
'innodb_checksums', 'ON'
'innodb_commit_concurrency', '0'
'innodb_concurrency_tickets', '500'
'innodb_data_file_path', 'ibdata1:10M:autoextend'
'innodb_data_home_dir', 'C:\MySQL Datafiles\'
'innodb_doublewrite', 'ON'
'innodb_fast_shutdown', '1'
'innodb_file_io_threads', '4'
'innodb_file_per_table', 'OFF'
'innodb_flush_log_at_trx_commit', '1'
'innodb_flush_method', ''
'innodb_force_recovery', '0'
'innodb_lock_wait_timeout', '50'
'innodb_locks_unsafe_for_binlog', 'OFF'
'innodb_log_buffer_size', '1048576'
'innodb_log_file_size', '25165824'
'innodb_log_files_in_group', '2'
'innodb_log_group_home_dir', '.\'
'innodb_max_dirty_pages_pct', '90'
'innodb_max_purge_lag', '0'
'innodb_mirrored_log_groups', '1'
'innodb_open_files', '300'
'innodb_rollback_on_timeout', 'OFF'
'innodb_stats_on_metadata', 'ON'
'innodb_support_xa', 'ON'
'innodb_sync_spin_loops', '20'
'innodb_table_locks', 'ON'
'innodb_thread_concurrency', '8'
'innodb_thread_sleep_delay', '1'
'innodb_use_legacy_cardinality_algorithm', 'ON'

Patrick
myList - everything you could possibly want (to buy)


-Original Message-
From: Gavin Towey [mailto:gto...@ffn.com]
Sent: Monday, October 25, 2010 4:50 PM
To: Patrick Thompson; mysql@lists.mysql.com
Subject: [SPAM] RE: mySql versus Sql Server performance
Importance: Low

So it's a primary key lookup.  That's a rather large primary key, though; it's 
going to bloat the table size, since InnoDB in MySQL uses clustered indexes.

So the explain plan and table structure look pretty straightforward.  It is 
using the index to satisfy the query.  The next question is what does the 
server memory configuration look like?

SHOW GLOBAL VARIABLES LIKE 'innodb%';

In particular innodb_buffer_pool defines the global set of memory where data 
and indexes from your table are cached.  Mysql could be showing slower 
performance if it is getting cache misses from the buffer pool and is being 
forced to read from disk excessively.

On dedicated mysql servers, the buffer pool should be about 80% of available 
RAM.  The default value is 8M which is pretty much unusable except for trivial 
cases.


-Original Message-
From: Patrick Thompson [mailto:patrick.thomp...@channelintelligence.com]
Sent: Monday, October 25, 2010 12:31 PM
To: Gavin Towey; mysql@lists.mysql.com
Subject: RE: mySql versus Sql Server performance

Query:

SELECT *
FROM Item
WHERE CollectionID = 'a0d3937b-f5a8-0640-dec8-bdd60f7f4775' AND 
ExternalID = 'fred1'

Explain Extended:

select '17304' AS `ID`,'fred1' AS 
`ExternalID`,'a0d3937b-f5a8-0640-dec8-bdd60f7f4775' AS `CollectionID`,NULL AS 
`ItemTypeVersion`,'Item 
xmlns=http://cipl.codeplex.com/CIPlItem1.xsd;Valid1/ValidItemStatus100/ItemStatusExternalIDfred1/ExternalIDModifiedDate2010-10-25T15:06:55.7188551-04:00/ModifiedDatePersonType
 
xmlns=http://cipl.codeplex.com/CIPlOther1.xsd;AddressUSAddressTypeCityStringCelebration
 
1/String/CityCountryStringUSA/String/CountryStateStringFL/String/StateStreetString1170
 Celebration blvd 
1/String/StreetZipInt3234748/Int32/Zip/USAddressType/AddressAlternateAddresses
 Count=2USAddressTypeCityStringCelebration 
1/String/CityCountryStringUSA/String/CountryStateStringFL/String/StateStreetString1170
 Celebration blvd 
1/String/StreetZipInt3234748/Int32/Zip/USAddressTypeUSAddressTypeCityStringSeattle
 1/String/CityCountryStringUSA/String/CountryPhoneNumbers 
Count=2PhoneNumberTypeAreaCodeInt32206/Int32/AreaCodeNumberInt327819281/Int32/NumberTags
 Count=1Stringnever answered 
1/String/Tags/PhoneNumberTypePhoneNumberTypeAreaCodeInt32206/Int32/AreaCodeNumberInt329991971/Int32/NumberTags
 Count=1Stringcell 
1/String/Tags/PhoneNumberType/PhoneNumbersStateStringWA/String/StateStreetString12070
 Lakeside pl 
1/String/StreetZipInt3298126/Int32/Zip/USAddressType/AlternateAddressesCreateDateDateTime2010-10-25T15:06:55.7168549-04:00/DateTime/CreateDateNameStringfred1/String/NameTags
 
Count=4Stringfirst/StringStringsecond/StringStringthird/StringString1/String

RE: mySql versus Sql Server performance

2010-10-25 Thread Gavin Towey
MySQL and most other databases require adjustment of server settings, and 
especially of table structures and indexes to achieve the best performance 
possible.

If you haven't examined index usage for the queries you're running, or adjusted 
server memory settings from defaults, then it's no surprise you would get poor 
performance.

I don't have the inclination to dig through your code; however, if you extract 
the actual queries you are running, then run EXPLAIN <query>; that will show 
how each one is using indexes.  You can put that information here, along with 
the SHOW CREATE TABLE <table>\G output for all tables involved, and someone 
here should be able to help diagnose why the queries might be slow.

Regards,
Gavin Towey
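
Using the query from later in this thread as an example, the requested 
diagnostics would look like:

EXPLAIN SELECT *
FROM Item
WHERE CollectionID = 'a0d3937b-f5a8-0640-dec8-bdd60f7f4775'
  AND ExternalID = 'fred1';

SHOW CREATE TABLE Item\G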


-Original Message-
From: Patrick Thompson [mailto:patrick.thomp...@channelintelligence.com]
Sent: Monday, October 25, 2010 6:38 AM
To: mysql@lists.mysql.com
Subject: mySql versus Sql Server performance

I am running an open source project that provides an abstraction layer over a 
number of different stores. I am puzzled by performance numbers I am seeing 
between mysql and sql server - a brief discussion is available here

http://cipl.codeplex.com/wikipage?title=Data%20Provider%20Comparison

The statistics were generated using mySql 5.1 and Sql Server 2008 on a machine 
with the following specs:

OS Name Microsoft Windows 7 Professional
System Model   HP Compaq nc8430 (RB554UT#ABA)
ProcessorIntel(R) Core(TM)2 CPU T7200  @ 2.00GHz, 2000 Mhz, 
2 Core(s), 2 Logical Processor(s)
Installed Physical Memory (RAM) 4.00 GB
Total Virtual Memory 6.75 GB
Page File Space 3.37 GB
Disk 120GB SSD with 22GB available

If this isn't the right place to ask this question, can someone point me to 
somewhere that is.

Thanks


Patrick
Are you using...
myListhttp://www.mylist.com/ - everything you could possibly want (to buy)
Let me know if you can't find something



The information contained in this email message is considered confidential and 
proprietary to the sender and is intended solely for review and use by the 
named recipient. Any unauthorized review, use or distribution is strictly 
prohibited. If you have received this message in error, please advise the 
sender by reply email and delete the message.




RE: mySql versus Sql Server performance

2010-10-25 Thread Gavin Towey
So it's a primary key lookup.  That's a rather large primary key, though; it's 
going to bloat the table size, since InnoDB in MySQL uses clustered indexes.

So the explain plan and table structure look pretty straightforward.  It is 
using the index to satisfy the query.  The next question is what does the 
server memory configuration look like?

SHOW GLOBAL VARIABLES LIKE 'innodb%';

In particular innodb_buffer_pool defines the global set of memory where data 
and indexes from your table are cached.  Mysql could be showing slower 
performance if it is getting cache misses from the buffer pool and is being 
forced to read from disk excessively.

On dedicated mysql servers, the buffer pool should be about 80% of available 
RAM.  The default value is 8M which is pretty much unusable except for trivial 
cases.


-Original Message-
From: Patrick Thompson [mailto:patrick.thomp...@channelintelligence.com]
Sent: Monday, October 25, 2010 12:31 PM
To: Gavin Towey; mysql@lists.mysql.com
Subject: RE: mySql versus Sql Server performance

Query:

SELECT *
FROM Item
WHERE CollectionID = 'a0d3937b-f5a8-0640-dec8-bdd60f7f4775' AND 
ExternalID = 'fred1'

Explain Extended:

select '17304' AS `ID`,'fred1' AS 
`ExternalID`,'a0d3937b-f5a8-0640-dec8-bdd60f7f4775' AS `CollectionID`,NULL AS 
`ItemTypeVersion`,'Item 
xmlns=http://cipl.codeplex.com/CIPlItem1.xsd;Valid1/ValidItemStatus100/ItemStatusExternalIDfred1/ExternalIDModifiedDate2010-10-25T15:06:55.7188551-04:00/ModifiedDatePersonType
 
xmlns=http://cipl.codeplex.com/CIPlOther1.xsd;AddressUSAddressTypeCityStringCelebration
 
1/String/CityCountryStringUSA/String/CountryStateStringFL/String/StateStreetString1170
 Celebration blvd 
1/String/StreetZipInt3234748/Int32/Zip/USAddressType/AddressAlternateAddresses
 Count=2USAddressTypeCityStringCelebration 
1/String/CityCountryStringUSA/String/CountryStateStringFL/String/StateStreetString1170
 Celebration blvd 
1/String/StreetZipInt3234748/Int32/Zip/USAddressTypeUSAddressTypeCityStringSeattle
 1/String/CityCountryStringUSA/String/CountryPhoneNumbers 
Count=2PhoneNumberTypeAreaCodeInt32206/Int32/AreaCodeNumberInt327819281/Int32/NumberTags
 Count=1Stringnever answered 
1/String/Tags/PhoneNumberTypePhoneNumberTypeAreaCodeInt32206/Int32/AreaCodeNumberInt329991971/Int32/NumberTags
 Count=1Stringcell 
1/String/Tags/PhoneNumberType/PhoneNumbersStateStringWA/String/StateStreetString12070
 Lakeside pl 
1/String/StreetZipInt3298126/Int32/Zip/USAddressType/AlternateAddressesCreateDateDateTime2010-10-25T15:06:55.7168549-04:00/DateTime/CreateDateNameStringfred1/String/NameTags
 
Count=4Stringfirst/StringStringsecond/StringStringthird/StringString1/String/Tags/PersonType/Item'
 AS `ObjectText`,'2010-10-25 15:06:55' AS `EnteredDate`,'2010-10-25 15:06:55' 
AS `LastModDate` from `ciplitemwell0404`.`item` where 
(('a0d3937b-f5a8-0640-dec8-bdd60f7f4775' = 
'a0d3937b-f5a8-0640-dec8-bdd60f7f4775') and ('fred1' = 'fred1'))

Explain:

1, 'SIMPLE', 'Item', 'const', 'PRIMARY,ItemsByID', 'PRIMARY', '889', 
'const,const', 1, ''


Table definition:

CREATE TABLE  `ciplitemwell0404`.`item` (
  `ID` bigint(20) unsigned NOT NULL AUTO_INCREMENT,
  `ExternalID` varchar(255) CHARACTER SET utf8 NOT NULL DEFAULT '',
  `CollectionID` varchar(40) CHARACTER SET utf8 NOT NULL,
  `ItemTypeVersion` varchar(255) CHARACTER SET utf8 DEFAULT NULL,
  `ObjectText` longtext NOT NULL,
  `EnteredDate` datetime NOT NULL,
  `LastModDate` datetime NOT NULL,
  PRIMARY KEY (`CollectionID`,`ExternalID`),
  UNIQUE KEY `ID` (`ID`),
  KEY `ItemsByID` (`CollectionID`,`ID`) USING BTREE
) ENGINE=InnoDB AUTO_INCREMENT=29687 DEFAULT CHARSET=latin1;


This is just the retrieve side - which seems to be around 1.5 times slower than 
the equivalent Sql Server numbers.

The update is much slower - 3 to 5 times slower depending on the record size. 
It makes sense to me to focus on the retrieve, maybe the update is just a 
reflection of the same problems.


Patrick
myList - everything you could possibly want (to buy)


-Original Message-
From: Gavin Towey [mailto:gto...@ffn.com]
Sent: Monday, October 25, 2010 2:00 PM
To: Patrick Thompson; mysql@lists.mysql.com
Subject: RE: mySql versus Sql Server performance

MySQL and most other databases require adjustment of server settings, and 
especially of table structures and indexes to achieve the best performance 
possible.

If you haven't examined index usage for the queries you're running, or adjusted 
server memory settings from defaults, then it's no surprise you would get poor 
performance.

I don't have the inclination to dig through your code; however, if you extract 
the actual queries you are running, then run EXPLAIN <query>; that will show 
how each one is using indexes.  You can put that information here, along with 
the SHOW CREATE TABLE <table>\G output for all tables involved, and someone 
here should be able to help diagnose why the queries might be slow.

Regards,
Gavin Towey


-Original Message-
From: Patrick

RE: Load Data Infile Errors

2010-10-25 Thread Gavin Towey
The answer is 3 =)

With MyISAM tables, you can have partially complete statements.  That is, if 
you get an error, all rows handled before the error are still in the table.  
With InnoDB, an error generates a rollback and your table is returned to its 
state before the statement was run.

To find the actual number of rows processed when using REPLACE or IGNORE, see 
the ROW_COUNT() function:
http://dev.mysql.com/doc/refman/5.0/en/information-functions.html#function_row-count

Regards,
Gavin Towey
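
A sketch of checking the processed-row count after a bulk load (file path and 
table name hypothetical):

LOAD DATA LOCAL INFILE '/path/to/rows.tsv' IGNORE INTO TABLE t;
SELECT ROW_COUNT();

ROW_COUNT() reports the rows affected by the immediately preceding statement, 
so it must be run on the same connection, before any other statement.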


-Original Message-
From: James W. McKelvey [mailto:james.w.mckel...@jpl.nasa.gov]
Sent: Monday, October 25, 2010 12:16 PM
To: mysql@lists.mysql.com
Subject: Load Data Infile Errors

Hello,

I have a question about the execution cycle of LOAD DATA INFILE.
If I issue a large file via LDI LOCAL, I know that the file is copied to
the MySQL server and executed there.

But at what point does the statement finish from the sender's point of view?

1) When the file is successfully copied?
2) When the file is copied and parsed?
3) When the file is completely processed?

I'm guessing 2).

The reason for asking is to determine what errors may be returned and
how I can deal with them.

Is it possible for the file to be partially processed, say, inserting
the first half of the rows? If 2) or 3), I would say no (barring some
serious server error).

Since LOCAL implies IGNORE, is there any way to get the number of
ignored rows? What about replace?

Ultimately I want to know under what conditions I should reissue the
file, and whether or not that could introduce duplicate entries for
tables with non-unique keys.

Thanks!




RE: Federated Tables versus Views

2010-10-08 Thread Gavin Towey
Neither, really.  You can qualify a table with the database name to operate on 
tables in multiple databases on the same host.

Federated is a way of accessing a table's data from a remote instance of mysql.
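
For comparison, a minimal FEDERATED sketch looks like this (host, credentials, 
and column list hypothetical; the local definition must match the remote 
table's structure):

CREATE TABLE this_db.some_table_fed (
  id INT NOT NULL,
  PRIMARY KEY (id)
) ENGINE=FEDERATED
CONNECTION='mysql://user:pass@remote_host:3306/some_db/some_table';

For tables on the same host, simply referring to some_db.some_table in your 
queries avoids either mechanism.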


-Original Message-
From: winterb...@gmail.com [mailto:winterb...@gmail.com] On Behalf Of 
Wellington Fan
Sent: Friday, October 08, 2010 9:52 AM
To: mysql@lists.mysql.com
Subject: Federated Tables versus Views

Hello All,

What should I consider when trying to choose between a Federated table and a
View when I have one table's data that I'd like to be available in other
databases on the same host?

My view definition would be something like:

CREATE
[some options]
VIEW
[this_db].[this_view] AS

SELECT
[some_db].[some_table].*
FROM
[some_db].[some_table]


Are there performance gains to be had, one against the other? Security
concerns? Replication gotchas?

Thanks!




RE: Indexing question

2010-10-04 Thread Gavin Towey
Include the query, EXPLAIN output, and the relevant SHOW CREATE TABLE `table` \G
output.  Someone should be able to offer suggestions.

-Original Message-
From: Tompkins Neil [mailto:neil.tompk...@googlemail.com]
Sent: Monday, October 04, 2010 8:54 AM
To: Joerg Bruehe
Cc: [MySQL]
Subject: Re: Indexing question

Jörg

Thanks for the useful reply.  Maybe I can EXPLAIN my select queries for you
to advise if any changes need to be made ?

Regards
Neil

On Mon, Oct 4, 2010 at 3:38 PM, Joerg Bruehe joerg.bru...@oracle.com wrote:

 Hi!


 Neil Tompkins wrote:
  Thanks for your reply. So should we create individual indexes on each
  field or a multiple column index ??

 This question cannot be answered without checking and measuring your
 installation. The decision whether to create an index is always an act
 of balancing:

 - If there is an index, the database server can use it to find data
  records by looking up the index, not scanning the base data.
  This results in load reduction (both CPU and disk IO) and speeds up
  query execution.

 - If there is an index, the database server must maintain it whenever
  data are altered (insert/update/delete), in addition to the base data.
  This is increased load (both CPU and disk IO) and slows down data
  changes.

 So obviously you want to create only those indexes that are helpful for
 query execution: you will never (voluntarily) create an index on a
 column which isn't used in search conditions, or whose use is already
 provided by other indexes.
 Of the remaining candidate indexes, you will never (voluntarily) create
 one that provides less gain in searches than it costs in data changes.

 With MySQL, AFAIK there is the limitation that on one table only one
 index can be used. As a result, the choice of indexes to create depends
 on the searches executed by your commands, their relative frequency, and
 the frequency of data changes.


 To answer your other question: If you run aggregate functions (like
 SUM(), MIN(), or MAX()) on all records of a table, their results could
 be computed by accessing a matching index only. I don't know whether
 MySQL does this, I propose you check that yourself using EXPLAIN.

 If you run them on subsets of a table only, an index on that column will
 not help in general.

 In database implementations, there is the concept of a covering index:
 If you have an index on columns A and B of some table, its contents
 (without the base data) would suffice to answer
   SELECT SUM(B) WHERE A = x
 Again, I don't know whether MySQL does this, and I refer you to EXPLAIN.

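For what it's worth, MySQL does use covering indexes: when the optimizer can answer a query from the index alone, EXPLAIN reports "Using index" in the Extra column. A quick way to check, with illustrative names (the missing FROM clause in the quoted example is filled in here):

```sql
-- Hypothetical table; the composite index on (A, B) holds both columns.
CREATE TABLE t (A INT, B INT, KEY idx_ab (A, B));

-- If the index covers the query, EXPLAIN's Extra column shows "Using index".
EXPLAIN SELECT SUM(B) FROM t WHERE A = 5;
```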

 HTH,
 Jörg

 --
 Joerg Bruehe,  MySQL Build Team,  joerg.bru...@oracle.com
 ORACLE Deutschland B.V.  Co. KG,   Komturstrasse 18a,   D-12099 Berlin
 Geschaeftsfuehrer: Juergen Kunz, Marcel v.d. Molen, Alexander v.d. Ven
 Amtsgericht Muenchen: HRA 95603






RE: update and times

2010-10-04 Thread Gavin Towey
Those unix_time values don't seem to correspond to the dates you have.

select NOW(), UNIX_TIMESTAMP(NOW());
+---------------------+-----------------------+
| NOW()               | UNIX_TIMESTAMP(NOW()) |
+---------------------+-----------------------+
| 2010-10-04 13:18:08 |            1286223488 |
+---------------------+-----------------------+

1286223488 vs 12862162510269684

Your value has far too many digits. That's also beyond the range of a 32 bit 
int.  Are you using BIGINT, or VARCHAR?

I suspect the issue is due to something different about the values you
have in your table.  Try posting the SHOW CREATE TABLE `table` \G output, and a
sample INSERT statement to populate the table.  That way someone can try to
reproduce the behavior you're seeing.
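One thing worth checking (an assumption based on the digit counts, not confirmed by the poster): the stored values have 17 digits, seven more than a 2010 unix timestamp, so they may be seconds scaled by 10^7. If so, the date is recoverable, and the raw values overflow a 32-bit INT but fit in BIGINT:

```sql
-- 32-bit INT UNSIGNED tops out at 4294967295; these values need BIGINT.
SELECT 12859430921341418 > 4294967295;

-- Stripping the last 7 digits yields a timestamp in the expected range.
SELECT FROM_UNIXTIME(12859430921341418 DIV 10000000);
```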




-Original Message-
From: kalin m [mailto:ka...@el.net]
Sent: Monday, October 04, 2010 1:11 PM
To: [MySQL]
Subject: update and times



hi all...

i'm doing tests with a table that gets updated based on random unix
times it contains. there is a column that has a bunch of random times
that look like:

+---------------------+-------------------+
| date_time           | unix_time         |
+---------------------+-------------------+
| 2010-10-01 10:24:52 | 12859430921341418 |
| 2010-10-01 21:18:13 | 12859822937839442 |
| 2010-10-01 16:08:00 | 12859636809115039 |
| 2010-10-01 19:47:43 | 12859768633824661 |
| 2010-10-01 16:48:30 | 12859661104829142 |
| 2010-10-01 15:25:37 | 12859611374324533 |
| 2010-10-01 12:27:28 | 12859504483288358 |
+---------------------+-------------------+


what i'm trying to do is update the column only if one of those times
hasn't yet passed. and it works, except sometimes...

like these 2 unix times:

this was in the table under unix time: 12862162385941345...

this 12862162510269684 got passed in the update command as in:

update the_table set updated = 1 where unix_time < 12862162510269684
limit 1;

executing this query didn't update the record.

why?


thanks...






RE: possible curdate() issue

2010-10-01 Thread Gavin Towey
You're using incorrect syntax for date math.  Use:

CURDATE() - INTERVAL 1 day

See the manual for details.

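To make the difference concrete (a sketch; the dates shown depend on when it is run):

```sql
SELECT CURDATE() - INTERVAL 1 DAY;           -- date arithmetic: yesterday's DATE
SELECT DATE_SUB(CURDATE(), INTERVAL 1 DAY);  -- equivalent form

-- Without INTERVAL, the date is cast to a number and plain subtraction
-- is done, e.g. 20101001 - 1 = 20101000, which is not a valid date.
SELECT CURDATE() - 1;
```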
-Original Message-
From: Ted Maas [mailto:tm...@uic.edu]
Sent: Friday, October 01, 2010 7:47 AM
To: mysql@lists.mysql.com
Subject: possible curdate() issue

This is most likely user error but I noticed today that
when I subract 1 from the curdate() function I get a very
interesting result:

mysql> select curdate() - 1 from dual;
+---+
| curdate() - 1 |
+---+
|  20101000 |
+---+
1 row in set (0.00 sec)


curdate() itself is OK:

mysql> select curdate() from dual;
++
| curdate()  |
++
| 2010-10-01 |
++
1 row in set (0.00 sec)


These also seem somewhat strange:

mysql> select curdate() - 70 from dual;
++
| curdate() - 70 |
++
|   20100931 |
++
1 row in set (0.00 sec)

mysql> select curdate() - 71 from dual;
++
| curdate() - 71 |
++
|   20100930 |
++
1 row in set (0.00 sec)


I use the curdate() - 1 to pick up yesterday's date from an index.
On Sept. 30 the code worked OK.  Today ... not so much.

Since this behavior is the same on MySQL 4.1.22, 5.0.77 and 5.1.36
I must be doing something wrong.


Any Ideas?

Ted Maas
Research Programmer
Systems Group
Academic Computing and Communications Center
University of Illinois at Chicago







RE: Object audit info

2010-10-01 Thread Gavin Towey
Alternatively, if you only want to log modifications, then the binlog would 
suffice.

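A sketch of inspecting the binary log (the log file name is illustrative; binary logging must already be enabled with the log-bin option in my.cnf):

```sql
SHOW BINARY LOGS;
SHOW BINLOG EVENTS IN 'mysql-bin.000001' LIMIT 10;
```

From the shell, mysqlbinlog mysql-bin.000001 decodes the same events into replayable SQL.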
-Original Message-
From: vegiv...@gmail.com [mailto:vegiv...@gmail.com] On Behalf Of Johan De 
Meersman
Sent: Friday, October 01, 2010 6:21 AM
To: PRATIKSHA JAISWAL
Cc: mysql@lists.mysql.com
Subject: Re: Object audit info

Simply activate the full log (log directive in my.cnf) - this will provide
you with logon, logoff and every command sent by every session.

Keep in mind that this is a LOT of data, so you want to keep it on a
separate set of spindles. It will also, obviously, add some overhead,
but if your logging disk can keep up that shouldn't be more than 2 or 3
percent, I think.

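On 5.1 and later the general query log can also be toggled at runtime, without a restart; on 5.0 the my.cnf log directive mentioned above is the only option. A sketch (the log path is illustrative):

```sql
SET GLOBAL general_log_file = '/var/log/mysql/general.log';
SET GLOBAL general_log = 'ON';
```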

On Fri, Oct 1, 2010 at 3:16 PM, PRATIKSHA JAISWAL 
pratikshadjayswa...@gmail.com wrote:

 Hi List,

 How can I audit every object of the database, including the database itself?

 I will give you a deeper idea about it. Our environment has lots of
 applications connected to the database.
 We have created users specific to applications, which have admin rights. Now,
 I want to monitor/audit information
 in case they add/drop/delete any table/user/database/index.

 Please guide me how can I achieve that.



 Regards,
 Pratiksha




--
Bier met grenadyn
Is als mosterd by den wyn
Sy die't drinkt, is eene kwezel
Hy die't drinkt, is ras een ezel




RE: Indexing question

2010-10-01 Thread Gavin Towey
You can't use a single index both to select records in a range and to order
them.  The ORDER BY will cause a filesort in that case.

Additionally, indexes are always read left to right.  So an index on ('user_id',
'product_id') will help when doing WHERE user_id=N AND product_id IN (1,2,3),
but wouldn't help for a condition on product_id alone.

See the manual for full details on how mysql uses indexes: 
http://dev.mysql.com/doc/refman/5.1/en/mysql-indexes.html

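To illustrate the left-to-right rule with hypothetical table and index names:

```sql
-- Composite index; user_id is the leftmost column.
ALTER TABLE orders ADD KEY idx_user_product (user_id, product_id);

-- Can use idx_user_product: the leftmost column is constrained.
EXPLAIN SELECT * FROM orders WHERE user_id = 42 AND product_id IN (1,2,3);

-- Cannot use idx_user_product: user_id is not constrained.
EXPLAIN SELECT * FROM orders WHERE product_id IN (1,2,3);
```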

-Original Message-
From: Jonas Galvez [mailto:jonasgal...@gmail.com]
Sent: Friday, October 01, 2010 11:48 AM
To: mysql@lists.mysql.com
Subject: Indexing question

Suppose I wanted to be able to perform queries against three columns of my
table: 'user_id', 'product_id' and 'created'. Most of the time I'll just be
range-selecting records from the table ordering by 'created'. But I may also
want to select where 'user_id' = something and 'product_id' in (list, of,
ids), ordered by 'created'. Do I need two separate indexes, one on 'created'
and another on ('user_id', 'product_id', 'created'), or does having only the
latter suffice the former case?


-- Jonas, http://jonasgalvez.com.br




RE: Migrating my mindset from MyISAM to InnoDB

2010-09-28 Thread Gavin Towey
1. Generally reducing fragmentation in the data/index files will reduce the 
footprint of tables on disk, and can be more efficient to query.  With innodb 
you need to be using the innodb-file-per-table option, and then you can use 
OPTIMIZE TABLE table; to rebuild it.  You don't get detailed progress like with 
myisamchk, but that's not important anyway.  You can estimate how long it will 
take by keeping track of how long any given ALTER / OPTIMIZE takes in GB/hr.

2.  Don't stare at the screen.  Start it, script the process, and have it email
your phone when it's done.  Do something else in the meantime.

3.  Yes, innodb table will take more space on disk.  If you have a really long 
primary key, and lots of secondary indexes, then it can take a *lot* more.  
Disk is cheap, don't worry about it.

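A sketch of the rebuild step described in point 1 (database and table names are placeholders):

```sql
-- Requires innodb_file_per_table for disk space to be reclaimed per table.
-- For InnoDB, OPTIMIZE TABLE is mapped to a table rebuild
-- (ALTER TABLE ... ENGINE=InnoDB), which also rebuilds all indexes.
OPTIMIZE TABLE mydb.big_table;
```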
Regards,
Gavin Towey

-Original Message-
From: Hank [mailto:hes...@gmail.com]
Sent: Tuesday, September 28, 2010 3:29 PM
To: mysql@lists.mysql.com
Subject: Migrating my mindset from MyISAM to InnoDB

Primarily due to many positive posts I've seen about MySQL 5.5 and
advances in InnoDB, I'm seriously considering converting all my MyISAM
databases to InnoDB.   I don't need many of the InnoDB features, but
if I'm going to upgrade from 4.1.14 to 5.5, I might as well bite the
bullet since that seems to be the direction of MySQL/Oracle.

I've been using MySQL 4.1.14 for years in my production environment,
including one master and several slaves for report and long running
queries.

Every 6 to 12 months the master MYI index files grow fairly large, so
I take the production database offline, and run myisamchk -r on the
index files to rebuild them and shrink them back down again. I usually
get a 20% to 30% space saving and improved performance after the
rebuilds. This has worked very well for me for, well, almost 10 years
now!

And when I say large my two main tables have about 200 million rows,
and the myisamchk can take between 60-160 minutes to complete.

I very much like how verbose myisamchk is in detailing which index it
is currently rebuilding, and the progress in terms of records
re-indexed.

SO, my questions are this:

1. With InnoDB, do the indexes ever need to be rebuilt to reduce index
size and improve performance like I get with MyISAM?
2. If so, are there any tools like myisamchk to monitor the InnoDB
index rebuild process, other than issuing a repair table... and
staring indefinitely at a blank screen until it finishes hours later?
3.  I've been testing the rebuild process during upgrading using
alter table table_name engine=innodb to convert my tables from
4.1.14 to 5.5.6, and I'm seeing a 130% increase (more than double) in
the raw disk space required for the new InnoDB tables compared to
their old MyISAM counterparts. (I am using single-file-per-table).  Is
this normal?  If not, how can I adjust the space requirements for
these tables so they don't take up so much additional space?

I'm sure I'll have more questions later, but many thanks for your
comments and thoughts.

-Hank




RE: Migrating my mindset from MyISAM to InnoDB

2010-09-28 Thread Gavin Towey
Also note, 5.5 isn't production ready.  5.1 is the current GA release.


-Original Message-
From: Hank [mailto:hes...@gmail.com]
Sent: Tuesday, September 28, 2010 3:29 PM
To: mysql@lists.mysql.com
Subject: Migrating my mindset from MyISAM to InnoDB

Primarily due to many positive posts I've seen about MySQL 5.5 and
advances in InnoDB, I'm seriously considering converting all my MyISAM
databases to InnoDB.   I don't need many of the InnoDB features, but
if I'm going to upgrade from 4.1.14 to 5.5, I might as well bite the
bullet since that seems to be the direction of MySQL/Oracle.

I've been using MySQL 4.1.14 for years in my production environment,
including one master and several slaves for report and long running
queries.

Every 6 to 12 months the master MYI index files grow fairly large, so
I take the production database offline, and run myisamchk -r on the
index files to rebuild them and shrink them back down again. I usually
get a 20% to 30% space saving and improved performance after the
rebuilds. This has worked very well for me for, well, almost 10 years
now!

And when I say large my two main tables have about 200 million rows,
and the myisamchk can take between 60-160 minutes to complete.

I very much like how verbose myisamchk is in detailing which index it
is currently rebuilding, and the progress in terms of records
re-indexed.

SO, my questions are this:

1. With InnoDB, do the indexes ever need to be rebuilt to reduce index
size and improve performance like I get with MyISAM?
2. If so, are there any tools like myisamchk to monitor the InnoDB
index rebuild process, other than issuing a repair table... and
staring indefinitely at a blank screen until it finishes hours later?
3.  I've been testing the rebuild process during upgrading using
alter table table_name engine=innodb to convert my tables from
4.1.14 to 5.5.6, and I'm seeing a 130% increase (more than double) in
the raw disk space required for the new InnoDB tables compared to
their old MyISAM counterparts. (I am using single-file-per-table).  Is
this normal?  If not, how can I adjust the space requirements for
these tables so they don't take up so much additional space?

I'm sure I'll have more questions later, but many thanks for your
comments and thoughts.

-Hank




RE: Segmentation fault - Redhat Linux 64 Bit

2010-09-27 Thread Gavin Towey
Either 1. Use strace to find out where it's getting a segfault, or 2. Use gdb
and get a backtrace of where it crashes.

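For example (a sketch; paths and options are illustrative, run against the mysql client binary that is crashing):

```
# 1. System-call trace up to the fault:
strace -f -o /tmp/mysql.trace mysql -u root -p

# 2. Backtrace under gdb:
gdb --args mysql -u root -p
(gdb) run
...reproduce the crash...
(gdb) bt
```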
-Original Message-
From: Sharath Babu Dodda [mailto:sharath.do...@gmail.com]
Sent: Monday, September 27, 2010 3:17 PM
To: mysql@lists.mysql.com
Subject: Segmentation fault - Redhat Linux 64 Bit

Hi there,

I installed Apache, MySQL and PHP on Redhat Linux 64 bit. However, when I
try to invoke MySQL, I'm getting the Segmentation fault error and I'm not
able to see the mysql prompt.

Begin of problem:

###

[...@xyz123 bin]$ sudo ./mysql -u root -p

Enter password:

Welcome to the MySQL monitor. Commands end with ; or \g.

Your MySQL connection id is 6

Server version: 5.0.91 Source distribution

Segmentation fault

###

End of problem:

Could you please suggest a solution for this? Thanks in advace for your
help.

regards,

Sharath




RE: prime number table

2010-09-17 Thread Gavin Towey
The data isn't in the .frm;  That only holds the structure of the table.

Your data is in the .MYD file of the same name, and indexes are in the .MYI 
file.

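Note that this applies to the MyISAM table prime. The copy prm was created as InnoDB (as its SHOW CREATE TABLE output shows), and without innodb_file_per_table InnoDB stores data and indexes in the shared ibdata files, which is why only the small .frm appears next to the table. One way to check which engine a table uses:

```sql
SHOW TABLE STATUS LIKE 'prm'\G
-- The Engine field shows the storage engine; for InnoDB, Data_length
-- reflects space inside ibdata unless innodb_file_per_table is enabled.
```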
-Original Message-
From: Elim PDT [mailto:e...@pdtnetworks.net]
Sent: Friday, September 17, 2010 11:29 AM
To: mysql@lists.mysql.com
Subject: prime number table

I got a file of the list of the 1st 1270607 prime numbers (the 1270607th
prime is 1999,
beat the $227 book at
http://www.amazon.com/prime-numbers-Carnegie-institution-Washington/dp/B0006AH1S8).
The file is the output of a Python script; the file size is about 12MB.

Then I created a simeple mysql table prime as

mysql> desc prime;
+----------+------------------+------+-----+---------+----------------+
| Field    | Type             | Null | Key | Default | Extra          |
+----------+------------------+------+-----+---------+----------------+
| oid      | int(10) unsigned | NO   | PRI | NULL    | auto_increment |
| pv       | int(10) unsigned | YES  |     | NULL    |                |
| descript | text             | YES  |     | NULL    |                |
+----------+------------------+------+-----+---------+----------------+
mysql> show create table prime;
--+
| Table | Create Table
--+
| prime | CREATE TABLE `prime` (
  `oid` int(10) unsigned NOT NULL AUTO_INCREMENT,
  `pv` int(10) unsigned DEFAULT NULL,
  `descript` text,
  PRIMARY KEY (`oid`)
) ENGINE=MyISAM AUTO_INCREMENT=1270608 DEFAULT CHARSET=latin1

The table file size is (prime.frm,prime.MYD,prime.MYI) = (9k; 24,817KB;
12,754KB)

Then I do
mysql> create table prm select * from prime order by prime.oid;
mysql> alter table prm modify oid int unsigned primary key auto_increment;

mysql> desc prm;
+----------+------------------+------+-----+---------+----------------+
| Field    | Type             | Null | Key | Default | Extra          |
+----------+------------------+------+-----+---------+----------------+
| oid      | int(10) unsigned | NO   | PRI | NULL    | auto_increment |
| pv       | int(10) unsigned | YES  |     | NULL    |                |
| descript | text             | YES  |     | NULL    |                |
+----------+------------------+------+-----+---------+----------------+

mysql> show create table prm;
+---+--
| Table | Create Table
+---+--
| prm   | CREATE TABLE `prm` (
  `oid` int(10) unsigned NOT NULL AUTO_INCREMENT,
  `pv` int(10) unsigned DEFAULT NULL,
  `descript` text,
  PRIMARY KEY (`oid`)
) ENGINE=InnoDB AUTO_INCREMENT=1270608 DEFAULT CHARSET=latin1 |
+---+--

The table file prm.frm is only 9KB

My question is: how come is it SO SMALL? (Currently the column descript
in both tables prime and prm is empty except for one identical row with a
very short string value.)

Is it recommended to index the other two columns?

Thanks





RE: Crashed Table - How to report/prevent?

2010-09-14 Thread Gavin Towey
Innodb doesn't suffer from corruption issues nearly as often as myisam tables, 
and it is able to recover on its own in many cases (crash recovery.)  In my 
experience, if innodb does get corrupted, it's most often because of hardware 
problems. You could consider using it instead.

-Original Message-
From: Steve Staples [mailto:sstap...@mnsi.net]
Sent: Monday, September 13, 2010 7:03 AM
To: mysql@lists.mysql.com
Subject: Crashed Table - How to report/prevent?

Good day MySQL!

I had a table that crashed last night.  There is a cron function that
goes out every 6 hours or so, that does a quick table backup (it's also
replicated, it's just something that we have running now).   ANYWAY, I
think it crashed early in the evening, but when the backup ran, it looks
like it marked it as crashed finally (which was about 4 hours later).

The table was able to be read up until then, but it looks like it wasn't
able to be written to.

What I want to know is, is there a quick and easy way to maybe every few
hours, check to see if there are any crashed tables or whatnot?

This is only the 2nd time in the last 1 1/2 years.  the table that
crashed, is a table that gets created every month, and is about 5gig in
size at the end of the month (it has a lot of records/fields in it which
is why i create a new one every month).

I could write a quick routine that goes and 'shows' the tables, and if
it fails, then repair it (or report it back to me via email or
something).


Any ideas on a way to automate something that is efficient and quick?

Steve





RE: no success (was: hard disk crash: how to discover the db?)

2010-09-14 Thread Gavin Towey
With PHP, a blank page usually means it's generating an error, but not 
displaying it in the browser; this is often disabled for security reasons, but 
makes debugging harder.  First step should be to get PHP to spit out the actual 
error.  There should probably be a setting in the mediawiki configuration for 
this, or see the PHP docs for error reporting.

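A minimal way to surface the error, assuming access to php.ini (these are standard PHP directives; the equivalent ini_set() calls at the top of the entry script also work):

```
; php.ini
display_errors = On
error_reporting = E_ALL
```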
-Original Message-
From: Uwe Brauer [mailto:oub.oub@gmail.com] On Behalf Of Uwe Brauer
Sent: Monday, September 13, 2010 10:35 AM
To: George Larson
Cc: andrew.2.mo...@nokia.com; mysql@lists.mysql.com
Subject: no success (was: hard disk crash: how to discover the db?)

 On Thu, 09 Sep 2010 18:02:09 -0400, George Larson 
 george.g.lar...@gmail.com wrote:

We do nightly backups at work just by tarring the mysql directory.  In
my environment, that is /var/lib/mysql.

Like this:

service mysql stop
cd /var/lib/mysql
rm -rf *
tar zxvf file.tar
rm -rf ib_logfile*
chown -R mysql:mysql .
service mysql start

I have followed these steps, but it does not seem to work.
I have reinstalled the mediawiki software and then tried
to open the corresponding wiki page, but I only see empty pages.
There are several possible reasons for this:

-  when the hard disk crashed, the database was damaged;
   in fact, when I look into the
   /var/lib/mysql/maqwiki
   directory, most of the files are very small in size, with the
   exception of searchindex.MYD and searchindex.MYI. I
   have no idea whether this is normal.

-  I did not do the recovery process correctly. Is there
   any hardcore mysql command I could use to check the
   database?

-  the connection between the mediawiki conf and the
   database is broken. I will ask on the mediawiki page
   about this.
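
For the database check asked about in the second point, MySQL does provide
such commands; a sketch, assuming MyISAM tables in a database named maqwiki
(the directory name mentioned above):

```sql
-- Check a table for corruption (run from the mysql prompt):
CHECK TABLE maqwiki.searchindex;

-- If the check reports errors, attempt a repair (MyISAM only):
REPAIR TABLE maqwiki.searchindex;
```

Running CHECK TABLE against each table in the database would show whether the
small file sizes correspond to damaged tables or simply to empty ones.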

Any advice and help is very much appreciated.

Uwe Brauer


--
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/mysql?unsub=gto...@ffn.com



--
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/mysql?unsub=arch...@jab.org



RE: Performance Tunning

2010-08-31 Thread Gavin Towey
This is a good place to start:
https://launchpad.net/mysql-tuning-primer


-Original Message-
From: Johnny Withers [mailto:joh...@pixelated.net]
Sent: Tuesday, August 31, 2010 5:38 AM
To: Johan De Meersman
Cc: kranthi kiran; mysql@lists.mysql.com
Subject: Re: Performance Tunning

So, it's not just me that is stuck in this infinite loop? I thought I had
gone mad!

--
-
Johnny Withers
601.209.4985
joh...@pixelated.net

On Tue, Aug 31, 2010 at 5:23 AM, Johan De Meersman vegiv...@tuxera.be wrote:

 1. Find out what is slow
 2. Fix it
 3. GOTO 1

 On Tue, Aug 31, 2010 at 11:13 AM, kranthi kiran
 kranthikiran@gmail.com wrote:

  Hi All,
   In performance tuning, what are the steps I can follow? Please help
  me.
 
  Thanks & Regards,
  Kranthi kiran
 



 --
 Bier met grenadyn
 Is als mosterd by den wyn
 Sy die't drinkt, is eene kwezel
 Hy die't drinkt, is ras een ezel



--
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/mysql?unsub=arch...@jab.org



RE: Fixture List generation using MySQL

2010-08-19 Thread Gavin Towey
That's almost a Cartesian product, except you want to eliminate results
where a team would be paired up with itself.

 create table teams ( id serial );
Query OK, 0 rows affected (0.02 sec)

 insert into teams values (), (), (), ();
Query OK, 4 rows affected (0.05 sec)
Records: 4  Duplicates: 0  Warnings: 0

[ff] test select * from teams;
++
| id |
++
|  1 |
|  2 |
|  3 |
|  4 |
++
4 rows in set (0.00 sec)

 select * from locations;
+--+
| name |
+--+
| home |
| away |
+--+
2 rows in set (0.00 sec)


 select * from teams t1 JOIN teams t2;
+++
| id | id |
+++
|  1 |  1 |
|  2 |  1 |
|  3 |  1 |
|  4 |  1 |
|  1 |  2 |
|  2 |  2 |
|  3 |  2 |
|  4 |  2 |
|  1 |  3 |
|  2 |  3 |
|  3 |  3 |
|  4 |  3 |
|  1 |  4 |
|  2 |  4 |
|  3 |  4 |
|  4 |  4 |
+++
16 rows in set (0.00 sec)


With no join condition, we get every possible combination of t1 paired with t2; 
however, this leads to the undesirable result of combinations like 
team 4 vs team 4.  So you just need to add a condition to prevent those rows 
from showing up:

 select * from teams t1 JOIN teams t2 ON t1.id!=t2.id;
+++
| id | id |
+++
|  2 |  1 |
|  3 |  1 |
|  4 |  1 |
|  1 |  2 |
|  3 |  2 |
|  4 |  2 |
|  1 |  3 |
|  2 |  3 |
|  4 |  3 |
|  1 |  4 |
|  2 |  4 |
|  3 |  4 |
+++
12 rows in set (0.10 sec)


Notice you get both combinations of 2 vs 1 and 1 vs 2, so you could just call 
whichever team is in the first column as the home team.


Regards,
Gavin Towey

-Original Message-
From: Tompkins Neil [mailto:neil.tompk...@googlemail.com]
Sent: Thursday, August 19, 2010 10:07 AM
To: [MySQL]
Subject: Re: Fixture List generation using MySQL

I'm looking at a routine / script to create the fixtures like

team 1 vs team 2
team 3 vs team 4
team 5 vs team 6 etc




 On Thu, Aug 19, 2010 at 3:44 PM, Peter Brawley 
 peter.braw...@earthlink.net wrote:



  I'm tasked with generating a list of fixtures from a table of teams,
 whereby
 each team plays each other home and away.  Does anyone have any
 experience
 generating such information using MySQL ?


 Basically ...

 select a.id,b.id from tbl a join tbl b on a.id < b.id
 union
 select a.id,b.id from tbl a join tbl b on a.id > b.id;

 PB

 -


 On 8/19/2010 9:12 AM, Tompkins Neil wrote:

 Hi,

 I'm tasked with generating a list of fixtures from a table of teams,
 whereby
 each team plays each other home and away.  Does anyone have any
 experience
 generating such information using MySQL ?

 Thanks for any input.

 Regards
 Neil





--
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/mysql?unsub=arch...@jab.org



RE: max_seeks_for_key

2010-08-17 Thread Gavin Towey
It's not really necessary for you to adjust that variable.



-Original Message-
From: jitendra ranjan [mailto:jitendra_ran...@yahoo.com]
Sent: Tuesday, August 17, 2010 7:52 AM
To: mysql@lists.mysql.com
Subject: max_seeks_for_key

Hi,

We have MyISAM tables which have around 10 lakh (1 million) records in each 
table. I want to search the records based on an index. What should the value of 
max_seeks_for_key be, as it is set at the default of 4294967295?

Thanks in advance.

Jeetendra Ranjan
MySQL DBA



--
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/mysql?unsub=arch...@jab.org



RE: Join Problem

2010-08-16 Thread Gavin Towey
What do you mean by not working?  What results do you get?
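
For reference, the same join can be written with an explicit ON clause, which
makes the intended filter easier to read (functionally equivalent to the
WHERE form quoted below):

```sql
SELECT f.id
FROM Flights f
JOIN Planes p ON f.plane_id = p.id
WHERE p.in_service = 1;
```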

-Original Message-
From: Victor Subervi [mailto:victorsube...@gmail.com]
Sent: Monday, August 16, 2010 6:59 AM
To: mysql@lists.mysql.com
Subject: Join Problem

Hi;
I have this code:

select f.id from Flights f join Planes p where f.plane_id=p.id and
p.in_service=1

mysql> describe Flights;
+-+---+--+-+-++
| Field   | Type  | Null | Key | Default | Extra  |
+-+---+--+-+-++
| id  | int(11)   | NO   | PRI | NULL| auto_increment |
| plane_id| int(11)   | NO   | MUL | NULL||
| pilot_id| int(11)   | NO   | MUL | NULL||
| flight_date | date  | NO   | | NULL||
| departure   | time  | NO   | | NULL||
| arrival | time  | NO   | | NULL||
| origination | enum('STT','STX') | YES  | | NULL||
| destination | enum('STT','STX') | YES  | | NULL||
| price   | float(6,2)| NO   | | NULL||
+-+---+--+-+-++

mysql> describe Planes;
+--+-+--+-+-++
| Field| Type| Null | Key | Default | Extra  |
+--+-+--+-+-++
| id   | int(11) | NO   | PRI | NULL| auto_increment |
| name | varchar(20) | NO   | | NULL||
| in_service   | tinyint(1)  | NO   | | 1   ||
| capacity | tinyint(2)  | NO   | | NULL||
| total_weight | int(6)  | NO   | | NULL||
+--+-+--+-+-++

My goal is to exclude results in which in_service !=1; however, the filter
isn't working. Please advise.
TIA,
Victor


--
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/mysql?unsub=arch...@jab.org



RE: opened_table status

2010-08-12 Thread Gavin Towey
You don't need to flush tables, just increase the table_cache.
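
A sketch of checking and raising it at runtime (variable names are those of
MySQL 5.0/5.1; 512 is an arbitrary example value, not a recommendation):

```sql
-- Opened_tables is a cumulative counter; what matters is how fast it grows.
SHOW GLOBAL STATUS LIKE 'Opened_tables';
SHOW GLOBAL VARIABLES LIKE 'table_cache';

-- Raise the cache at runtime; also set it in my.cnf so it survives a restart.
SET GLOBAL table_cache = 512;
```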

-Original Message-
From: jitendra ranjan [mailto:jitendra_ran...@yahoo.com]
Sent: Thursday, August 12, 2010 10:55 AM
To: mysql@lists.mysql.com
Subject: opened_table status

Hi,

My Opened_tables status is too high, but I don't want to increase the value of 
table_cache, and I also don't want to flush tables because that would reset the 
query cache.

Now my question is: how can I decrease the Opened_tables status?

Thanks in advance

Jeetendra Ranjan
MySQL DBA



--
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/mysql?unsub=arch...@jab.org



RE: info on mysql

2010-08-09 Thread Gavin Towey
For most of your questions, use:

SHOW GLOBAL STATUS;
SHOW GLOBAL VARIABLES;


(7) Database performance statistics queries
(8) Top 5 queries taking more time for executions.
(9) Engine information.

For these, you need to enable the slow query log, gather queries over a given 
interval of time, then use either mysqldumpslow or Maatkit's mk-query-digest 
to parse the log.
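
For example, several of the numbered questions map directly to statements like
these (the information_schema query assumes MySQL 5.0 or later):

```sql
-- (1) Server uptime, in seconds:
SHOW GLOBAL STATUS LIKE 'Uptime';

-- (2) Number of currently connected clients:
SHOW GLOBAL STATUS LIKE 'Threads_connected';

-- (3) Data directory location:
SHOW GLOBAL VARIABLES LIKE 'datadir';

-- (4) Approximate size of each database, in MB:
SELECT table_schema,
       ROUND(SUM(data_length + index_length) / 1024 / 1024, 1) AS size_mb
FROM information_schema.TABLES
GROUP BY table_schema;
```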

-Original Message-
From: PRATIKSHA JAISWAL [mailto:pratikshadjayswa...@gmail.com]
Sent: Monday, August 09, 2010 9:01 AM
To: mysql@lists.mysql.com
Subject: info on mysql

Hi All,

How can I get the following information from the database? Is there a query for
each of these?

(1) mysql server uptime
(2) Total number of users connected to server
(3) Data file information / where it is located through mysql prompt / size
of data file
(4) each Database size
(5) Database I/O information
(6) Invalid object in database
(7) Database performance statistics queries
(8) Top 5 queries taking more time for executions.
(9) Engine information.


--
Thanks
Pratiksha


--
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/mysql?unsub=arch...@jab.org



RE: Split string by regex

2010-08-03 Thread Gavin Towey
Since your conditions are pretty specific you can do this:

set @a='RA100';
select IF(@a REGEXP '[A-Za-z]{2}', SUBSTRING(@a,1,2), SUBSTRING(@a,1,1)) as 
letter, IF(@a REGEXP '[A-Za-z]{2}', SUBSTRING(@a,3), SUBSTRING(@a,2)) as number;


+++
| letter | number |
+++
| RA | 100|
+++


If this looks ugly, then that should be good motivation not to store multiple 
pieces of data as concatenated strings =)
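
If the goal is to populate the two columns permanently, the same expressions
work in an UPDATE; a sketch assuming a table named books with columns class
and class2 (names hypothetical):

```sql
-- MySQL evaluates SET assignments left to right, so assign class2 first,
-- while class still holds the original combined value.
UPDATE books
SET class2 = IF(class REGEXP '[A-Za-z]{2}', SUBSTRING(class, 3), SUBSTRING(class, 2)),
    class  = IF(class REGEXP '[A-Za-z]{2}', SUBSTRING(class, 1, 2), SUBSTRING(class, 1, 1));
```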

Regards,
Gavin Towey

-Original Message-
From: Adam Gray [mailto:acg...@me.com]
Sent: Tuesday, August 03, 2010 8:04 AM
To: mysql@lists.mysql.com
Subject: Split string by regex

Hello,

I'm working on a library OPAC system with books classified using the Library of 
Congress classification system. This takes the format of either one or two 
letters followed by some numbers, i.e. R272 or RA440 etc. What I want to do is 
split the field that holds this classification into two, one containing the 
letter portion and the other containing the number bit. So

+---+
| Class |
+---+
| R100  |
+---+
| RA65  |
+---+
| RP2   |
+---+

Would become

+++
| Class  | Class2 |
+++
|   R|   100  |
+++
|   RA   |   65   |
+++
|   RP   |2   |
+++
etc

Could this be done in MySQL? I want to do something along the lines of set 
class2 = SUBSTRING_INDEX(class,'[A-Z]',-1) but I understand this is not 
possible.

Any ideas?

Regards

Adam

--
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/mysql?unsub=gto...@ffn.com



--
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/mysql?unsub=arch...@jab.org



RE: Yet another query question

2010-07-26 Thread Gavin Towey
You'll need to use the technique described here:
http://dev.mysql.com/doc/refman/5.0/en/example-maximum-column-group-row.html
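
Applied to the tables quoted below, that technique might look like the
following sketch (untested): filter out the Nums marked new in Table1 first,
then take the max version per IDt and join back to recover the full row.

```sql
SELECT t2.IDt, t2.ID, t2.Num, t2.version
FROM Table2 t2
JOIN Table1 t1 ON t1.Num = t2.Num AND t1.state != 'new'
JOIN (SELECT t2b.IDt, MAX(t2b.version) AS maxv
      FROM Table2 t2b
      JOIN Table1 t1b ON t1b.Num = t2b.Num AND t1b.state != 'new'
      GROUP BY t2b.IDt) m
  ON m.IDt = t2.IDt AND m.maxv = t2.version;
```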


-Original Message-
From: Michael Stroh [mailto:st...@astroh.org]
Sent: Monday, July 26, 2010 2:50 PM
To: MySql
Subject: Yet another query question

Hi everyone, and thanks in advance for the help. I have a query that I'd like to 
perform using two tables, but I'm not sure of the best way to perform it short 
of creating a loop in my code and performing multiple queries.

I have two tables. The first table acts as a master table of sorts, and Num in 
Table1 maps directly to Num in Table2. One way to think of this is that I'm 
performing a query on Table2 and grouping the records by MAX(version), but I am 
not interested in records where state = 'new' in Table1 for the value of Num in 
Table2. I've tried to give an example below.

Table1:
Num, state
1    final
2    new
3    final

Table2:
Num, ID, IDt, version
1    1   100   1
1    2   101   1
1    3   102   1
2    4   100   2
2    5   103   1
3    6   100   2
3    7   103   1
3    8   104   1

Preferred result:
IDt   ID   Num   MAX(version)
100   6    3     2
101   2    1     1
102   3    1     1
103   7    3     1
104   8    3     1

Cheers,
Michael


--
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/mysql?unsub=gto...@ffn.com



--
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/mysql?unsub=arch...@jab.org



RE: Myisam advantages

2010-07-19 Thread Gavin Towey
In addition, the binlogs function as incremental backups. With a full backup + 
binlogs, you can do point-in-time recovery to any moment you choose.
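
The replay step might look like this sketch (the binlog path and cutoff time
are placeholders; restore the full backup first, then apply the binlogs up to
the chosen moment):

```shell
# Replay binary log events up to (but not including) the given moment,
# piping them back into the server. Paths and times are examples only.
mysqlbinlog --stop-datetime="2010-07-19 09:00:00" \
    /var/lib/mysql/mysql-bin.000042 | mysql -u root -p
```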

-Original Message-
From: Keith Murphy [mailto:bmur...@paragon-cs.com]
Sent: Monday, July 19, 2010 6:55 AM
To: Jan Steinman
Cc: mysql@lists.mysql.com
Subject: Re: Myisam advantages

Since everyone keeps bringing up Innodb's shared tablespace, I will point
out that Innodb has a file-per-table option where each table has it own
tablespace. This helps with management issues. While there is still a
central datafile it doesn't contain table data and is much smaller than if
you used a centralized table space.

keith

On Sat, Jul 17, 2010 at 12:37 PM, Jan Steinman j...@bytesmiths.com wrote:

 From: P.R.Karthik prk...@gmail.com


  I am newbie to mysql can i know the advantages of myisam storage engine
 and some of its special features.


 Works better with file-based incremental backup systems.

 With InnoDB, you end up backing up a humongous file of all your InnoDB
 tables, even if only one byte in one field of one table of one database was
 touched.

 
 There are only two ways to look at life: One is as if nothing is a miracle.
 The other is as if everything is a miracle. -- Albert Einstein
  Jan Steinman, EcoReality Co-op 


 --
 MySQL General Mailing List
 For list archives: http://lists.mysql.com/mysql
 To unsubscribe:
 http://lists.mysql.com/mysql?unsub=bmur...@paragon-cs.com




--
Chief Training Officer
Paragon Consulting Services
850-637-3877


--
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/mysql?unsub=arch...@jab.org



RE: which version is better for production servers?

2010-07-19 Thread Gavin Towey
No, current GA is 5.1.48

-Original Message-
From: Nilnandan Joshi [mailto:nilnand...@synechron.com]
Sent: Monday, July 19, 2010 6:14 AM
To: Andrés Tello
Cc: mysql@lists.mysql.com
Subject: Re: which version is better for production servers?

But, can we get GA releases for MySQL 5.5?

Andrés Tello wrote:
 I have just upgraded from MySQL 4.1 to MySQL 5.5.3, and I will upgrade to
 5.5.4.

 5.5. has a lot of speed improvements...

 and of course you should NOT use a MySQL development release for a
 production server!

 On Mon, Jul 19, 2010 at 8:05 AM, Nilnandan Joshi
 nilnand...@synechron.com mailto:nilnand...@synechron.com wrote:

 Hi all,

 I just wanna make new MySQL server for OLTP kind of environment. I
 want to use InnoDB storage engine.
 Which version is better for this kind of environment and which
 will give the great performance?
 MySQL 5.1.47/48 or MySQL 5.5?

 Should we use mysql development release for production server?

 regards,
 Nilnandan

 --
 MySQL General Mailing List
 For list archives: http://lists.mysql.com/mysql
 To unsubscribe:
  http://lists.mysql.com/mysql?unsub=mr.crip...@gmail.com




--
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/mysql?unsub=arch...@jab.org



RE: Mysql 4 and or Partitions.

2010-07-15 Thread Gavin Towey
Use indexes before thinking about partitions.
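
For the SUM-filtered-by-date query described below, that usually means an
index on the date column; a sketch with hypothetical table and column names:

```sql
-- Index the column the WHERE clause filters on.
ALTER TABLE ledger ADD INDEX idx_tx_date (tx_date);

-- Then confirm the index is actually used before reaching for partitions.
EXPLAIN
SELECT SUM(amount)
FROM ledger
WHERE tx_date BETWEEN '2010-01-01' AND '2010-06-30';
```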

-Original Message-
From: Andrés Tello [mailto:mr.crip...@gmail.com]
Sent: Thursday, July 15, 2010 8:44 AM
To: mysql
Subject: Mysql 4 and or Partitions.

Ok... I solved my mystery of the slow 22G table: I rebuilt the kernel to
support the memory, and now things are working.

But still quite slow.

I do sum() operations over the 22G table. I'm using the latest version of
mysql 4... (I WILL migrate soon, I promise). One thing I have noticed is
that operations over that 22G table, measuring the I/O with iostat, never
surpass 23mb/s for reading... even though if I test the I/O of the array, it can
easily give me 300mb/s for reads... and around 150 for random reads...

first of all, is there any way to squeeze more speed out from mysql 4?
No... I imagined that...

So, my next step is to migrate to a newer version of mysql ( YEY ^_^  )... I
have read a bit about partitions.

The specific query I'm making is one which does a sum filtered over a date
field.

What would be the best approach to partition this table?

Can I mix innodb with partioned tables and still have acid compliance?

Does splitting into several partitions create more read threads, or how does
parallelization over partitions work?

And last, does mysql 4 support partitions? XD





tia.


--
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/mysql?unsub=arch...@jab.org



RE: Allowing Connections From Remote Clients

2010-07-13 Thread Gavin Towey
The text of the error message means that this is not a possible cause; the 
error message given is coming from the mysql server itself after failed 
authentication -- in this case there's no matching u...@host combination.


-Original Message-
From: Michael Satterwhite [mailto:mich...@weblore.com]
Sent: Tuesday, July 13, 2010 12:54 PM
To: mysql@lists.mysql.com
Subject: Re: Allowing Connections From Remote Clients

On Tuesday, July 13, 2010 02:25:33 pm Prabhat Kumar wrote:
 GRANT ALL PRIVILEGES ON *.* TO username@'tuna.iamghost.com' IDENTIFIED BY
 'password';

 *.* is for all databases; if you want it on a particular DB:

 GRANT ALL PRIVILEGES ON MyDATABASE.* TO username@'tuna.iamghost.com'
 IDENTIFIED BY 'password';


You also need to verify that you don't have network connections turned off in
/etc/my.cnf

if you have skip-networking, comment it out
also make sure that the bind-address is set to 192.168.0.100 and not to the
localhost IP address.

One caution: the ip address you indicate appears to be one of the dynamically
assigned ones by a router. If so, it's possible that it will change. You might
want to switch to a static ip address for a server.
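
The my.cnf changes described above might look like this fragment (the address
is the example one from this thread; adjust to your server's static address):

```ini
[mysqld]
# skip-networking              # must stay commented out to allow TCP connections
bind-address = 192.168.0.100   # the server's LAN address, not 127.0.0.1
```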


--
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/mysql?unsub=gto...@ffn.com



--
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/mysql?unsub=arch...@jab.org



RE: query help

2010-06-15 Thread Gavin Towey
Hi Richard,

This is a LEFT JOIN, but with only one table you may be wondering what to join. 
 Typically you'll have to create a new table that contains all the dates in the 
range you want to check.  Then you left join your transactions table, and all 
the rows from your dates table that don't have a match are your answer.
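
As a sketch, using the transactions table quoted below and a hypothetical
dates table d2010 pre-filled with every date of the year:

```sql
-- One row per calendar day; fill it once with all dates of 2010.
CREATE TABLE d2010 (d DATE PRIMARY KEY);

-- Days with no transaction for employee 344: the LEFT JOIN leaves
-- t.ID NULL wherever no matching transaction row exists.
SELECT d.d
FROM d2010 d
LEFT JOIN transactions t
       ON t.DATE = d.d AND t.EMPLOYEE = 344
WHERE t.ID IS NULL
ORDER BY d.d;
```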

Regards,
Gavin Towey

-Original Message-
From: Richard Reina [mailto:rich...@rushlogistics.com]
Sent: Tuesday, June 15, 2010 11:30 AM
To: mysql@lists.mysql.com
Subject: query help


I have a table similar to this:

 -
|transactions |
|ID   |DATE  |EMPLOYEE|
|234  |2010-01-05| 345|
|328  |2010-04-05| 344|
|239  |2010-01-10| 344|

Is there a way to query such a table to give the days of the year that employee 
344 did not have a transaction?

Thanks for the help.

Richard

--
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/mysql?unsub=gto...@ffn.com




RE: MySQL For Huge Collections

2010-06-11 Thread Gavin Towey
Agreed.

Consider keeping meta data about the book in your mysql database, but storing 
and serving the actual files from somewhere else.

If I were you, I'd use an external full text search engine like Sphinx or 
Lucene to handle something like searching for content inside the book.

Also, in terms of requirements, 300k books doesn't say a lot.  Looking at 
project Gutenberg, I see that an uncompressed text copy of Sherlock Holmes is 
only 500k, so you're talking about maybe 150G of data -- which is pretty 
moderate.

Sounds like a fun project though, good luck!

Regards,
Gavin Towey


-Original Message-
From: Peter Chacko [mailto:peterchack...@gmail.com]
Sent: Thursday, June 10, 2010 9:05 PM
To: SHAWN L.GREEN
Cc: Andy; mysql@lists.mysql.com
Subject: Re: MySQL For Huge Collections

Usually, you'd be better off using a NAS for this purpose. A database is
designed to store highly transactional, record-oriented data that needs
fast access... You could look at any enterprise content management system
that rests its storage on a scalable NAS, with file virtualization in
the long run.

thanks

On Fri, Jun 11, 2010 at 8:04 AM, SHAWN L.GREEN shawn.l.gr...@oracle.com wrote:
 On 6/10/2010 10:16 PM, Andy wrote:

 Hello all,

 I am new to MySQL and am exploring the possibility of using it for my
 work.
 I have about ~300,000 e-books, each about 100 pages long. I am first going
 to extract each chapter from each e-book and then basically store an
 e-book
 as a collection of chapters. A chapter could of course be arbitrarily long
 depending on the book.

 My questions are:

 (1) Can MySQL handle data of this size?
 (2) How can I store text (contents) of each chapter? What data type will
 be
 appropriate? longtext?
 (3) I only envision running queries to extract a specific chapter from a
 specific e-book (say extract the chapter titled ABC from e-book number
 XYZ
 (or e-book titled XYZ)). Can MySQL handle these types of queries well on
 data of this size?
 (4) What are the benefits/drawbacks of using MySQL compared to using XML
 databases?

 I look forward to help on this topic. Many thanks in advance.
 Andy


 Always pick the right tool for the job.

 MySQL may not be the best tool for serving up eBook contents. However if you
 want to index and locate contents based on various parameters, then it may
 be a good fit for the purpose.

 Your simple queries would best be handled by a basic web server or FTP
 server because you seem to want

 http://your.site.here/ABC/xyz

 where ABC is your book and xyz is your chapter.

 Those types of technology are VERY well suited for managing the repetitive
 streaming and distribution of large binary objects (chapter files) like you
 might encounter with an eBook content delivery system.

 --
 Shawn Green
 MySQL Principle Technical Support Engineer
 Oracle USA, Inc.
 Office: Blountville, TN

 --
 MySQL General Mailing List
 For list archives: http://lists.mysql.com/mysql
 To unsubscribe:
  http://lists.mysql.com/mysql?unsub=peterchack...@gmail.com



--
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/mysql?unsub=gto...@ffn.com



--
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/mysql?unsub=arch...@jab.org



RE: Fancy partitioning scheme

2010-06-02 Thread Gavin Towey
MySQL doesn't support dynamic distribution of data among partitions.  The usual 
method is to create a partition for each fixed chunk of time, such as for each 
month/week/hour/day or whatever time slice breaks your data up into 
manageable pieces.   Note that a very large number of partitions (> 1000) isn't 
really recommended.
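A fixed-chunk layout for the table in question might look like this (an
illustrative sketch; the epoch boundaries are made up, and an INT
unix-timestamp column satisfies RANGE partitioning's integer-expression
requirement directly):

```sql
ALTER TABLE mytable
PARTITION BY RANGE (myunixtime) (
  PARTITION p201004 VALUES LESS THAN (1272672000),  -- 2010-05-01 00:00 UTC
  PARTITION p201005 VALUES LESS THAN (1275350400),  -- 2010-06-01 00:00 UTC
  PARTITION p201006 VALUES LESS THAN (1277942400),  -- 2010-07-01 00:00 UTC
  PARTITION pmax    VALUES LESS THAN MAXVALUE
);
```

A scheduled job would then periodically ADD a new partition ahead of the
current time and DROP the oldest one.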

Other notes:
Personally, I avoid schema-less constructions like this, because they are hard 
to work with.  Sure they're flexible, but you often pay a price in performance.

100 million rows isn't all that much with the proper indexing.  It really 
depends on your queries and access patterns.

Why not use mysql datetime or timestamp type?  Storing unix timestamps as int 
means you're going to have to convert values to use mysql's date functions.


Regards,
Gavin Towey

-Original Message-
From: Bryan Cantwell [mailto:bcantw...@firescope.com]
Sent: Wednesday, June 02, 2010 12:30 PM
To: mysql@lists.mysql.com
Subject: Fancy partitioning scheme

Perhaps someone has already accomplished this:

I have a simple table with 3 columns:
mytable(
  myid BIGINT(20) UNSIGNED NOT NULL DEFAULT 0,
  myunixtime INT(11) NOT NULL DEFAULT 0,
  myvalue BIGINT(20) UNSIGNED NOT NULL DEFAULT 0
)
It is collecting millions of rows.
The myunixtime column is a unix timestamp column.
I'd love to know if it is possible to partition the table so that the
partitions would be something like:

partition A = everything one day or less old,
partition B = everything 7 days old or less,
partition C = everything 31 days old or less,
partition D = everything older than 31 days.

Can partitioning be this dynamic? If not, what solution could be
suggested to handle doing date range queries on this table that can have
10's or 100's of millions of rows?




--
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/mysql?unsub=gto...@ffn.com




RE: Slow query using string functions

2010-05-26 Thread Gavin Towey
Jerry,

Are you sure this is really your explain plan for this query?  That's not at 
all what I would expect to see.

Regards,
Gavin Towey


-Original Message-
From: Jerry Schwartz [mailto:je...@gii.co.jp]
Sent: Wednesday, May 26, 2010 2:14 PM
To: mysql@lists.mysql.com
Subject: Slow query using string functions

I have a pretty simple query that seems to take a lot longer than it ought to
(over 2 minutes).

Table `feed_new` has a single VARCHAR(255) column, `new_title`, that is an
index.

Table `prod` has many fields: `prod_title` and `pub_id` are both indexes
(VARCHAR).

`feed_new` has 895 records, `prod` has 110432.

SELECT
feed_new.new_title AS `New Title FROM Feed`,
prod.prod_pub_prod_id AS `Lib Code FROM DB`,
prod.prod_title AS `Title FROM DB`,
prod.prod_num AS `Prod Num`,
prod.prod_published AS `Published FROM DB`
FROM feed_new JOIN prod
ON LEFT(feed_new.new_title, LENGTH(feed_new.new_title) - 5) =
LEFT(prod.prod_title, LENGTH(prod.prod_title) - 5)
WHERE prod.pub_id = @PUBID AND prod.prod_discont = 0
ORDER BY feed_new.new_title;

*************************** 1. row ***************************
           id: 1
  select_type: SIMPLE
        table: feed_new
         type: index
possible_keys: NULL
          key: new_title
      key_len: 768
          ref: NULL
         rows: 1
        Extra: Using index
*************************** 2. row ***************************
           id: 1
  select_type: SIMPLE
        table: prod
         type: ref
possible_keys: pub_id,pub_id_2
          key: pub_id
      key_len: 48
          ref: const
         rows: 9816
        Extra: Using where

The query is doing a scan of the 9816 records that have pub_id = @PUBID, but
even so this seems like a long time. Are the built-in string functions really
that slow?

I suspect it would be faster if I built separate tables that had just the
shortened versions of the titles, but I wouldn't think that would be
necessary.
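[The precomputed-title idea in the last paragraph can also be done in place
rather than with separate tables -- an editor's sketch; the column and index
names below are made up:]

```sql
-- Precompute the shortened title once, then index it, so the join becomes
-- an indexed equality instead of string functions evaluated per row pair.
ALTER TABLE prod ADD COLUMN prod_title_short VARCHAR(255);
UPDATE prod
   SET prod_title_short = LEFT(prod_title, CHAR_LENGTH(prod_title) - 5);
CREATE INDEX idx_title_short ON prod (prod_title_short);
-- ...and likewise for feed_new.new_title, then join on the two new columns.
```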


Regards,

Jerry Schwartz
Global Information Incorporated
195 Farmington Ave.
Farmington, CT 06032

860.674.8796 / FAX: 860.674.8341

www.the-infoshop.com





--
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/mysql?unsub=gto...@ffn.com



--
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/mysql?unsub=arch...@jab.org



RE: Can't create foreign key

2010-05-14 Thread Gavin Towey
Perror 121 says:
OS error code 121:  Remote I/O error

I'm not too sure why an ALTER to add a constraint would give that error, though.

Normally though, foreign key errors are shown in the SHOW ENGINE INNODB STATUS 
\G output, look for more details there.
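A sketch of that check, plus one more (the duplicate-name query is an
editor's addition, not from the thread; a constraint name that already
exists elsewhere in the database is a common cause of errno 121 on this
kind of ALTER):

```sql
SHOW ENGINE INNODB STATUS\G
-- read the "LATEST FOREIGN KEY ERROR" section of the output

-- errno 121 often means the constraint name is already taken:
SELECT constraint_name, table_name
FROM information_schema.table_constraints
WHERE constraint_schema = 'cc'
  AND constraint_name = 'fk_lessons';
```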

Regards,
Gavin Towey

-Original Message-
From: j...@msdlg.com [mailto:j...@msdlg.com]
Sent: Friday, May 14, 2010 10:18 AM
To: mysql@lists.mysql.com
Subject: Can't create foreign key

I'm trying to create a foreign key by executing the following statement:

ALTER TABLE `cc`.`takenlessons`

  ADD CONSTRAINT `fk_lessons`

  FOREIGN KEY (`LessonID` )

  REFERENCES `cc`.`lessons` (`id` )

  ON DELETE CASCADE

  ON UPDATE NO ACTION

, ADD INDEX `fk_lessons` (`LessonID` ASC) ;



I'm using the RC of MySQL workbench to do this. When I execute this
statement, I get the following error:

Error Code: 1005

Can't create table 'cc.#sql-115c_61' (errno: 121))



In the past when I got a similar error, it was because the fields didn't
match exactly. For instance, one may be Int(10) and one Int(11), or one may
be Unsigned, and the other not.  But, in this case, both match exactly.
What other reasons are there for a foreign key creation to fail like that?



Thanks,

Jesse



--
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/mysql?unsub=arch...@jab.org



RE: order by numeric value

2010-04-27 Thread Gavin Towey
That won't work on a value like $1.00

select CAST('$1.00' as decimal(8,2));
+-------------------------------+
| CAST('$1.00' as decimal(8,2)) |
+-------------------------------+
|                          0.00 |
+-------------------------------+
1 row in set, 1 warning (0.00 sec)


+---------+------+--------------------------------------------+
| Level   | Code | Message                                    |
+---------+------+--------------------------------------------+
| Warning | 1292 | Truncated incorrect DECIMAL value: '$1.00' |
+---------+------+--------------------------------------------+
1 row in set (0.00 sec)


It would have to be something like:

select CAST(REPLACE('$1.00','$','') as decimal(8,2)) ;
+-----------------------------------------------+
| CAST(REPLACE('$1.00','$','') as decimal(8,2)) |
+-----------------------------------------------+
|                                          1.00 |
+-----------------------------------------------+
1 row in set (0.00 sec)


In that case, it's better to just select the balance without the dollar sign 
and order on that column.
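Concretely, that might look like this rework of the quoted query (an
editor's sketch; joins and aliases abbreviated for illustration):

```sql
-- Order by the raw numeric aggregate; apply FORMAT only for display.
SELECT coa.accountname AS Account,
       CONCAT('$', FORMAT(COALESCE(SUM(sje.credit), 0)
                        - COALESCE(SUM(sje.debit),  0), 2)) AS Balance
FROM sales_journal_entries sje
LEFT JOIN chart_of_accounts coa ON coa.accountID = sje.accountID
GROUP BY sje.accountID
ORDER BY COALESCE(SUM(sje.credit), 0) - COALESCE(SUM(sje.debit), 0);
```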

Regards,
Gavin Towey

-Original Message-
From: DaWiz [mailto:da...@dawiz.net]
Sent: Tuesday, April 27, 2010 3:46 PM
To: Keith Clark; mysql@lists.mysql.com
Subject: Re: order by numeric value

Try

order by CAST(Balance as decimal(8,2)) asc;


Cast will work in the order by.

Glenn Vaughn

- Original Message -
From: Keith Clark keithcl...@k-wbookworm.com
To: mysql@lists.mysql.com
Sent: Tuesday, April 27, 2010 3:52 PM
Subject: order by numeric value


I have the following statement:

 select chart_of_accounts.accountname as Account,
 concat('$',format(coalesce(sum(sales_journal_entries.debit),0),2)) as
 Debit,
 concat('$',format(coalesce(sum(sales_journal_entries.credit),0),2)) as
 Credit,
 concat('$',format(coalesce(sum(sales_journal_entries.credit),0)-coalesce(sum(sales_journal_entries.debit),0),2))
 as Balance
 from sales_journal_entries
 left join sales_journal
 on sales_journal.journalID=sales_journal_entries.journalID
 left join chart_of_accounts
 on chart_of_accounts.accountID=sales_journal_entries.accountID
 where sales_journal.date > '2008-12-31'
 and sales_journal.date < '2010-01-01'
 group by sales_journal_entries.accountID
 order by Balance asc;

 and I'd like the output to be sorted by the Balance according to the
 numberic value, but it is sorting by the string result.  I tried
 abs(Balance) but I get the following error:

 1247 Reference 'Balance' not supported (reference to group function)

 I'm not sure I understand the error.


 --
 MySQL General Mailing List
 For list archives: http://lists.mysql.com/mysql
 To unsubscribe:http://lists.mysql.com/mysql?unsub=my...@dawiz.net



RE: better way to backup 50 Gig db?

2010-04-21 Thread Gavin Towey
Claudio,

So innodb may not be consistent between memory and disk at all times, but 
that's actually not important.  What is important is that the files on disk to 
be consistent with a specific binlog position.  That's all that is needed for a 
consistent backup, and can be done with filesystem snapshots.

Innodb may continue to do background flushing even during a FLUSH TABLES WITH 
READ LOCK, but it always keeps consistency between its log files, the binlog, 
and the tablespaces.

When you load your snapshot back into an instance of mysql, you'll often see it 
go through the crash recovery as it applies log file items to the tablespace, 
and deals with any unfinished open transactions that happened to be running, 
but once done your data will be consistent with the binlog position that was 
recorded when the read lock was held.
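The sequence being described, reduced to the statements involved (a sketch;
the filesystem snapshot itself is taken from a shell while the lock is held):

```sql
FLUSH TABLES WITH READ LOCK;
SHOW MASTER STATUS;   -- record File/Position: the binlog coordinates
                      -- the snapshot will be consistent with
-- (take the LVM/filesystem snapshot here, from another session)
UNLOCK TABLES;
```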

I do this every day on many servers, both for backup and for creating new 
slaves.  It always works.  Neither of those links you gave contradicts this; in 
fact they both essentially say it works great, as long as you're aware of the 
caveats.

Regards,
Gavin Towey


-Original Message-
From: Claudio Nanni [mailto:claudio.na...@gmail.com]
Sent: Wednesday, April 21, 2010 1:51 AM
To: Johan De Meersman
Cc: mysql@lists.mysql.com
Subject: Re: better way to backup 50 Gig db?

Johan,
Is the fact that InnoDB ignores the command FLUSH TABLES WITH READ LOCK;
enough? :)
InnoDB has buffers and activities going on even if you locked the tables and
you are not sure that its buffers are on the disk when you snapshot.
Again, you might be lucky and trust in the InnoDB recovery; what I'm saying is
that there is only one 100% guaranteed safe way to have binary backups.

have a look at these, very interesting:
http://forge.mysql.com/w/images/c/c1/MySQL_Backups_using_File_System_Snapshots-2009-02-26.pdf
http://www.mysqlperformanceblog.com/2006/08/21/using-lvm-for-mysql-backup-and-replication-setup/

Cheers

Claudio



2010/4/21 Johan De Meersman vegiv...@tuxera.be

 How does the below not guarantee me 100% that everything that can be
 consistent, is?

 mysql> flush tables with read lock;
 unixhost# sync
 unixhost# lvm create snapshot
 mysql> unlock tables;

 I agree that there may be data inconsistencies within MyISAM, as it has no
 transactions, but there's also no guarantee that there isn't an application
 in the middle of multiple insert statements the moment you shut your
 database or your application. You can't magically get full consistency out
 of your database if your application hasn't been designed for it.

 Shutting systems down for backup may be fine for small setups, but on a
 server that hosts multiple critical applications, you just can't do that.
 You back up as consistently as you can without downtime, and pick the time
 of night with the lowest likelyhood of activity for it.


 On Wed, Apr 21, 2010 at 8:12 AM, Claudio Nanni claudio.na...@gmail.comwrote:

 Gavin,
 Right,
 that is also an option, but you are never really 100% sure that everything
 in memory is on the disk (buffers etc...), though it is definitely good for
 disaster recovery.
 What I meant is that the only way to have a 100% guaranteed consistent
 binary backup is when the database is shut down.
 Of course this is almost never an option, unless (tada) you have a slave
 dedicated for that.
 One remark on your note:


 Just a note though, I noticed someone added replication to a slave as a
 backup option.  I really discourage that.  Replication makes no guarantees
 that the data on your slave is the same as the data on your master.  Unless
 you're also checking consistency, a slave should be treated as a somewhat
 unreliable copy of your data.

 While it is true that replication makes no guarantees, if your slave is
 not the same as the master and you rely on that for production, you have
 some problems,
 try to go to business and say, our slave (which at least 50% of our
 applications use to read data) is not really in sync, watch their facial
 expression!
 Believe me, in many production environments the method used for backups
 relies on the slave, not on the master.
 It is so much useful and important that you should have all your efforts
 go for having a consistent read-only slave 'dedicated' only for backups, no
 other client messing with it.

 Just my two cents

 Claudio



 Gavin Towey wrote:

 You can make binary backups from the master using filesystem snapshots.
  You only need to hold a global read lock for a split second.

 Regards,
 Gavin Towey


 --
 MySQL General Mailing List
 For list archives: http://lists.mysql.com/mysql
 To unsubscribe:http://lists.mysql.com/mysql?unsub=vegiv...@tuxera.be




 --
 Bier met grenadyn
 Is als mosterd by den wyn
 Sy die't drinkt, is eene kwezel
 Hy die't drinkt, is ras een ezel




--
Claudio


RE: better way to backup 50 Gig db?

2010-04-20 Thread Gavin Towey
More good ideas from Andrew!

Just a note though, I noticed someone added replication to a slave as a backup 
option.  I really discourage that.  Replication makes no guarantees that the 
data on your slave is the same as the data on your master.  Unless you're also 
checking consistency, a slave should be treated as a somewhat unreliable copy 
of your data.

Regards,
Gavin Towey

-Original Message-
From: andrew.2.mo...@nokia.com [mailto:andrew.2.mo...@nokia.com]
Sent: Tuesday, April 20, 2010 2:08 AM
To: li...@netrogenic.com
Cc: mysql@lists.mysql.com
Subject: RE: better way to backup 50 Gig db?

I would also recommend looking into some 3rd party tools.

http://www.percona.com/docs/wiki/percona-xtrabackup:start - Backup Innodb, 
MyISAM and XtraDB engines.
http://www.maatkit.org/ - Packed with useful features inc a parallel 
dump/import.

There's some great features in both products. I will leave you to do your own 
research into the tools as knowing their features will benefit you.

Best wishes

Andy

From: ext Jay Ess [li...@netrogenic.com]
Sent: 20 April 2010 09:06
Cc: mysql@lists.mysql.com
Subject: Re: better way to backup 50 Gig db?

Gavin Towey wrote:
 What Shawn said is important.

 Better options:
 1. Use InnoDB, and then you can make a consistent backup with `mysqldump 
 --single-transaction > backup.sql`  and keep your db server actively
 responding to requests at the same time.

 2. Use something like LVM to create filesytem snapshots which allow you to 
 backup your database, while only keeping a read lock on the db for a second 
 or so.

3. Set up replication and backup the replicated data using any of the
above method.



--
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/mysql?unsub=andrew.2.mo...@nokia.com


--
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/mysql?unsub=gto...@ffn.com



--
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/mysql?unsub=arch...@jab.org



RE: better way to backup 50 Gig db?

2010-04-20 Thread Gavin Towey
You can make binary backups from the master using filesystem snapshots.  You 
only need to hold a global read lock for a split second.

Regards,
Gavin Towey

From: Claudio Nanni [mailto:claudio.na...@gmail.com]
Sent: Tuesday, April 20, 2010 1:19 PM
To: Gavin Towey
Cc: andrew.2.mo...@nokia.com; li...@netrogenic.com; mysql@lists.mysql.com
Subject: Re: better way to backup 50 Gig db?

Where is Falcon (Sorry)

the only way to have a really consistent binary backup is to shut down the 
server.
the best way to shut down a server is to have a slave dedicated to backups that 
you can shutdown any time.

if you have only the content of the database folders under [datadir] it is not 
enough, you need the full [datadir] to 'dream' to restore
your db, unless you only use MyISAM tables, then you are more lucky.

The bottom line is:  Don't Dream, Prove it. Or it will become a nightmare 
sooner or later.


Ciao!

Claudio



2010/4/20 Gavin Towey gto...@ffn.commailto:gto...@ffn.com
More good ideas from Andrew!

Just a note though, I noticed someone added replication to a slave as a backup 
option.  I really discourage that.  Replication makes no guarantees that the 
data on your slave is the same as the data on your master.  Unless you're also 
checking consistency, a slave should be treated as a somewhat unreliable copy 
of your data.

Regards,
Gavin Towey

-Original Message-
From: andrew.2.mo...@nokia.commailto:andrew.2.mo...@nokia.com 
[mailto:andrew.2.mo...@nokia.commailto:andrew.2.mo...@nokia.com]
Sent: Tuesday, April 20, 2010 2:08 AM
To: li...@netrogenic.commailto:li...@netrogenic.com
Cc: mysql@lists.mysql.commailto:mysql@lists.mysql.com
Subject: RE: better way to backup 50 Gig db?

I would also recommend looking into some 3rd party tools.

http://www.percona.com/docs/wiki/percona-xtrabackup:start - Backup Innodb, 
MyISAM and XtraDB engines.
http://www.maatkit.org/ - Packed with useful features inc a parallel 
dump/import.

There's some great features in both products. I will leave you to do your own 
research into the tools as knowing their features will benefit you.

Best wishes

Andy

From: ext Jay Ess [li...@netrogenic.commailto:li...@netrogenic.com]
Sent: 20 April 2010 09:06
Cc: mysql@lists.mysql.commailto:mysql@lists.mysql.com
Subject: Re: better way to backup 50 Gig db?

Gavin Towey wrote:
 What Shawn said is important.

 Better options:
 1. Use InnoDB, and then you can make a consistent backup with `mysqldump 
 --single-transaction > backup.sql`  and keep your db server actively
 responding to requests at the same time.

 2. Use something like LVM to create filesytem snapshots which allow you to 
 backup your database, while only keeping a read lock on the db for a second 
 or so.

3. Set up replication and backup the replicated data using any of the
above method.



--
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/mysql?unsub=andrew.2.mo...@nokia.com


--
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/mysql?unsub=gto...@ffn.com


--
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/mysql?unsub=claudio.na...@gmail.com



--
Claudio



RE: Getting Array to display on SELECT

2010-04-20 Thread Gavin Towey
This is a PHP question.

-Original Message-
From: Gary [mailto:g...@paulgdesigns.com]
Sent: Tuesday, April 20, 2010 3:17 PM
To: mysql@lists.mysql.com
Subject: Getting Array to display on SELECT

I'm frankly not sure if this is a MySQL question or PHP, but I thought I
would start here.

I have a form that I have a (ever growing) list of checkboxes,  Here is a
sample of  the code for it.

 <input name=keyword[] type=checkbox value=fox />

It seems to go in. When I say "seems to," I get a result of "Array" in the
table; the code is listed below.  I have tried various solutions I found in
searching the issue, but have so far only been able to get "Array".

  echo '<table border=1><th>Id Number</th><th>Date Entered</th><th>Caption</th><th>Where Taken</th><th>Keywords</th><th>Description</th><th>Image</th>';
  while ($row = mysqli_fetch_array($data)) {

      echo '<tr><td>' . $row['image_id'] . '</td>';
      echo '<td>' . $row['submitted'] . '</td>';
      echo '<td>' . $row['caption'] . '</td>';
      echo '<td>' . $row['where_taken'] . '</td>';
      echo '<td>' . $row['keyword'] . '</td>';
      echo '<td>' . $row['description'] . '</td>';
      if (is_file($row['image_file'])) {
          echo '<td><img src='.$row['image_file'].' width=100px height=100px/></td>';
      }

As a bonus question, does anyone have any idea why the image would show up
in IE9, and not FF?

Thanks for your help.

Gary






--
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/mysql?unsub=gto...@ffn.com



--
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/mysql?unsub=arch...@jab.org



RE: better way to backup 50 Gig db?

2010-04-19 Thread Gavin Towey
What Shawn said is important.

Better options:
1. Use InnoDB, and then you can make a consistent backup with `mysqldump 
--single-transaction > backup.sql`  and keep your db server actively responding
to requests at the same time.

2. Use something like LVM to create filesytem snapshots which allow you to 
backup your database, while only keeping a read lock on the db for a second or 
so.



-Original Message-
From: Shawn Green [mailto:shawn.l.gr...@oracle.com]
Sent: Monday, April 19, 2010 3:24 PM
To: Mitchell Maltenfort
Cc: mysql@lists.mysql.com
Subject: Re: better way to backup 50 Gig db?

Mitchell Maltenfort wrote:
 I'm using MySQL to manage data on my computer.

 The total data is 50 Gig in MyISAM folders.

 As I type, I already have the folder with the myd, frm, etc being
 copied offsite.  As I understand it, if this computer dies tomorrow, I
 can reinstall MySQL on a new computer, drag over the archive, stick
 the folder under data and I'm back in business.

 Or am I dreaming?

 I'd rather be corrected now than find out the hard way.

 Any advice?


Did you remember to FLUSH (with read lock) those tables before you
started copying. Alternatively, you could have shutdown your MySQL
instance, too.

If not, then the in-memory and on-disk images of your tables are out of
sync. You may be copying away data that will appear corrupted after
restoration.

--
Shawn Green
MySQL Principal Technical Support Engineer
Oracle USA, Inc.
Office: Blountville, TN







RE: Slow Union Statement

2010-04-05 Thread Gavin Towey
Union does a distinct on all results.  UNION ALL will avoid that.
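The difference is easy to see in miniature. The sketch below uses SQLite via Python purely as a stand-in (the tables and values are made up); the UNION vs UNION ALL semantics are the same in MySQL:

```python
import sqlite3

# Two small tables with one overlapping pair of values.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE a (x INTEGER)")
conn.execute("CREATE TABLE b (x INTEGER)")
conn.executemany("INSERT INTO a VALUES (?)", [(1,), (2,), (3,)])
conn.executemany("INSERT INTO b VALUES (?)", [(2,), (3,), (4,)])

# UNION deduplicates, which forces a distinct pass over the whole result set.
union_rows = conn.execute("SELECT x FROM a UNION SELECT x FROM b").fetchall()

# UNION ALL just concatenates the two result sets -- no dedup pass.
union_all_rows = conn.execute(
    "SELECT x FROM a UNION ALL SELECT x FROM b").fetchall()

print(len(union_rows), len(union_all_rows))  # 4 6
```

On two tables of 90M and 33M rows, skipping that distinct pass is exactly where the half hour goes.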

Regards,
Gavin Towey

-Original Message-
From: chen.1...@gmail.com [mailto:chen.1...@gmail.com] On Behalf Of chen jia
Sent: Monday, April 05, 2010 11:07 AM
To: mysql@lists.mysql.com
Subject: Slow Union Statement

Hi there,

I run simple statement like this:

create table c
select * from a
union
select * from b;

where table a has 90,402,534 rows, and table b has 33,358,725 rows.
Both tables have the same three variables.

It's taken a long time, more than half an hour now. How do I make it faster?

Best,
Jia







RE: MyISAM better than innodb for large files?

2010-04-02 Thread Gavin Towey
I disagree.  There's nothing about his requirements that sounds like MyISAM is 
a better solution.  InnoDB should be your default for all tables, unless you 
have specific requirements that need MyISAM.  One specific example of an 
appropriate task for MyISAM is where you need very high insert throughput and 
you're not doing any updates/deletes concurrently.

You want the crash safety and data integrity that come with InnoDB, even more 
so as your dataset grows.  Its performance is far better than MyISAM tables 
for most OLTP uses, and as your number of concurrent readers and writers 
grows, the improvement in performance from using InnoDB over MyISAM becomes 
more pronounced.

Regards,
Gavin Towey

-Original Message-
From: Carsten Pedersen [mailto:cars...@bitbybit.dk]
Sent: Friday, April 02, 2010 12:58 PM
To: Mitchell Maltenfort
Cc: mysql@lists.mysql.com
Subject: Re: MyISAM better than innodb for large files?

InnoDB won't give you much in terms of disk crash recovery. That's what
backups are for.

Where InnoDB does excel is if your database server dies while updating
rows. If that happens, your database will come back up with sane data.

For both table types, once the data has been flushed to disk, the data
will still be there if your db server crashes.

It does indeed sound like you will be better off using MyISAM. This will
also reduce your disk space usage considerably.

/ Carsten

Mitchell Maltenfort skrev:
 I'm going to be setting up a MySQL database for a project.  My reading
 indicates that MyISAM (default) is going to be better than InnoDB for
 the project but I want to be sure I have the trade-offs right.


 This is going to be a very large data file -- many gigabytes -- only
 used internally, and once installed perhaps updated once a year,
 queried much more often.

 MyISAM apparently has the advantage in memory and time overheads.

 InnoDB's advantage seems to be better recovery from disk crashes.

 Should I stick with MyISAM (MySQL default), or does the recovery issue
 mean I'm better off using InnoDB for an insurance policy?

 Inexperienced minds want to know -- ideally, from experienced minds.

 Thanks!








RE: Table Length Question...

2010-03-30 Thread Gavin Towey
Not only should you definitely have one record per day, instead of one record 
per month, you should think about normalizing your structure further.  Try 
these articles for tips on how to design a database structure:

http://dev.mysql.com/tech-resources/articles/intro-to-normalization.html and 
http://mysqldump.azundris.com/archives/20-Nermalisation.html and some here 
http://www.keithjbrown.co.uk/vworks/mysql/
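The per-day layout described below can be sketched as DDL. Only the column names come from the post; the types and the choice of primary key here are guesses:

```sql
-- One row per user per day instead of one wide row per month.
CREATE TABLE daily_stats (
  name VARCHAR(50)  NOT NULL,
  id   INT UNSIGNED NOT NULL,
  id2  INT UNSIGNED NOT NULL,
  type TINYINT      NOT NULL,
  day  DATE         NOT NULL,
  f1   INT,
  f2   INT,
  f3   INT,
  f4   INT,
  PRIMARY KEY (id, id2, type, day)
) ENGINE=InnoDB;

-- Fetching one day's f1 now touches one short row, not a whole month:
SELECT f1 FROM daily_stats
WHERE id = 1 AND id2 = 1 AND type = 0 AND day = '2010-03-02';
```

A narrow row per day also makes month-boundary logic (28 vs 31 columns) disappear entirely.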

Good luck!

Regards,
Gavin Towey

-Original Message-
From: Steven Staples [mailto:sstap...@mnsi.net]
Sent: Tuesday, March 30, 2010 8:36 AM
To: mysql@lists.mysql.com
Subject: Table Length Question...

Hi there,

I currently store some information about a user's daily habits in a table.
The table has 4 fields per day, plus another 4 fields as the keys.  This
table, depending on the month, can be from (4 keys + (28 days * 4 fields per
day)) fields to (4 keys + (31 days * 4 fields per day)) fields long...

The table layout is like such:
+------+----+-----+------+-------+-------+-------+-------+-------+-------+-----
| name | id | id2 | type | d01f1 | d01f2 | d01f3 | d01f4 | d02f1 | d02f2 | ...and so on
+------+----+-----+------+-------+-------+-------+-------+-------+-------+-----

Performance-wise, would it be better to have it laid out in a manner such as
+------+----+-----+------+-----+----+----+----+----+
| name | id | id2 | type | day | f1 | f2 | f3 | f4 |
+------+----+-----+------+-----+----+----+----+----+
So that each row, contains a single days details, rather than have a single
row, contain the entire months details?

Also, when I do a select, if I want, say, d02f1, would it load the
entire row first and then just give me that field?
-Select `d02f1` from `mytable` where [where clause]
Or would it just load that field...

Do these questions make sense?  (they do in my head)

Steven Staples









RE: Best Configuration on Production Server

2010-03-19 Thread Gavin Towey
What changed between your test that took 5ms and the current production system?

-Original Message-
From: Abdul Mohid Khan [mailto:abdulmohid.k...@magnaquest.com]
Sent: Friday, March 19, 2010 12:27 AM
To: mysql@lists.mysql.com
Cc: Abdul Mohid Khan
Subject: Best Configuration on Production Server

Hi list,


I am having a problem with one of our production servers. Performance
decreases considerably in production. We are using a MySQL database for an
authentication server. When we tested the same setup in a test environment
(2 GB RAM, 2 CPUs), a request took 5 ms, but in production the same request
takes 50 ms.

The Production sever configuration

No of CPU : 8
RAM : 8 GB
OS : Cent OS

Here is the SHOW VARIABLES output from the system. Please help me find the
best configuration for my MySQL server in production. We are using both the
MyISAM and InnoDB engines.

| auto_increment_offset     | 1                          |
| automatic_sp_privileges   | ON                         |
| back_log                  | 50                         |
| basedir                   | /usr/                      |
| bdb_cache_size            | 8384512                    |
| bdb_home                  | /var/lib/mysql/            |
| bdb_log_buffer_size       | 262144                     |
| bdb_logdir                |                            |
| bdb_max_lock              | 1                          |
| bdb_shared_data           | OFF                        |
| bdb_tmpdir                | /tmp/                      |
| binlog_cache_size         | 32768                      |
| bulk_insert_buffer_size   | 8388608                    |
| character_set_client      | latin1                     |
| character_set_connection  | latin1                     |
| character_set_database    | latin1                     |
| character_set_filesystem  | binary                     |
| character_set_results     | latin1                     |
| character_set_server      | latin1                     |
| character_set_system      | utf8                       |
| character_sets_dir        | /usr/share/mysql/charsets/ |
| collation_connection      | latin1_swedish_ci          |
| collation_database        | latin1_swedish_ci          |
| collation_server          | latin1_swedish_ci          |
| completion_type           | 0                          |
| concurrent_insert         | 1                          |
| connect_timeout           | 10                         |
| datadir                   | /var/lib/mysql/            |
| date_format               | %Y-%m-%d                   |
| datetime_format           | %Y-%m-%d %H:%i:%s          |
| default_week_format       | 0                          |
| delay_key_write           | ON                         |
| delayed_insert_limit      | 100                        |
| delayed_insert_timeout    | 300                        |
| delayed_queue_size        | 1000                       |
| div_precision_increment   | 4                          |
| keep_files_on_create      | OFF                        |
| engine_condition_pushdown | OFF                        |
| expire_logs_days          | 0                          |
| flush                     | OFF                        |
| flush_time                | 0                          |
| ft_boolean_syntax         | + -()~*:|                  |
| ft_max_word_len           | 84

RE: udf return column name not value

2010-03-15 Thread Gavin Towey
You'll have to do something like this:

SET @sql := CONCAT('select ',columnname,' into retval from user where ID=',id);
PREPARE mySt FROM @sql;
EXECUTE mySt;
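Two caveats worth adding: the statement should be deallocated afterwards, and MySQL does not allow prepared statements inside stored functions, so in practice this becomes a stored procedure with an OUT parameter. A sketch, reusing the `user` table and column names from the question (parameter names are made up):

```sql
DELIMITER //
CREATE PROCEDURE getcolumnvalue(IN p_id INT, IN p_column VARCHAR(30),
                                OUT p_value VARCHAR(50))
BEGIN
  -- p_column is spliced in as an identifier: validate it against a
  -- whitelist in real code, or this is an injection hole.
  SET @sql := CONCAT('SELECT `', p_column, '` INTO @retval ',
                     'FROM user WHERE ID = ?');
  SET @id := p_id;
  PREPARE stmt FROM @sql;
  EXECUTE stmt USING @id;
  DEALLOCATE PREPARE stmt;
  SET p_value = @retval;
END//
DELIMITER ;

-- Usage:
-- CALL getcolumnvalue(1, 'name', @v);
-- SELECT @v;
```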



-Original Message-
From: chamila gayan [mailto:cgcham...@gmail.com]
Sent: Monday, March 15, 2010 12:58 AM
To: mysql@lists.mysql.com
Subject: udf return column name not value

CREATE FUNCTION getcolumnvalue(id int,columnname varchar(30))
RETURNS varchar(50) DETERMINISTIC
READS SQL DATA
begin
declare retval varchar(50);
return retval;
end;

I want to get the value of the named column, but it returns the column name
instead. For example, I want 'tom' but it returns 'name'.
Please tell me what is wrong with this.

thank you





RE: Nested inserts possible?

2010-03-12 Thread Gavin Towey
If you're inserting multiple rows in the dependent table for the same id in the 
parent table, you'll need to save the value of LAST_INSERT_ID(), otherwise 
subsequent inserts will change it!

INSERT INTO parent VALUES (...);
SET @id:=LAST_INSERT_ID();
INSERT INTO child1 VALUES (@id, ... );
INSERT INTO child2 VALUES (@id, ... );
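The same save-it-first pattern, sketched in Python with sqlite3 as a stand-in (the tables here are made up; `cursor.lastrowid` plays the role of MySQL's LAST_INSERT_ID()):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE parent (id INTEGER PRIMARY KEY, name TEXT)")
conn.execute("CREATE TABLE child (parent_id INTEGER, note TEXT)")

# Insert the parent row and capture its generated id immediately,
# before any other insert can overwrite the "last insert id".
cur = conn.execute("INSERT INTO parent (name) VALUES ('p1')")
parent_id = cur.lastrowid

# Both child rows reuse the saved id.
conn.execute("INSERT INTO child VALUES (?, 'first')", (parent_id,))
conn.execute("INSERT INTO child VALUES (?, 'second')", (parent_id,))

rows = conn.execute("SELECT parent_id FROM child").fetchall()
print(rows)  # [(1, 1)] per row: [(1,), (1,)]
```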

-Original Message-
From: Keith Clark [mailto:keithcl...@k-wbookworm.com]
Sent: Friday, March 12, 2010 7:57 AM
To: mysql@lists.mysql.com
Subject: Re: Nested inserts possible?

Johan,

That seems to be the ticket.  Thanks!

Keith

On Fri, 2010-03-12 at 16:54 +0100, Johan De Meersman wrote:
 Have a look at last_insert_id().

 On Fri, Mar 12, 2010 at 3:48 PM, Keith Clark
 keithcl...@k-wbookworm.com wrote:
 I have two tables and I have data to insert into both at the
 same time,
 but the second table depends on an ID that is auto created in
 the first
 table.  I'm not sure exactly how to accomplish this.

 Table_One

 Table_One_Index_ID
 Data_One
 Date_Two

 Table_Two

 Table_Two_Index_ID
 Table_One_Index_ID
 Data_Three
 Data_Four








 --
 Bier met grenadyn
 Is als mosterd by den wyn
 Sy die't drinkt, is eene kwezel
 Hy die't drinkt, is ras een ezel







RE: Error Removing Anonymous Accounts

2010-03-11 Thread Gavin Towey

DROP USER ''@'localhost';

If you use the correct GRANT/REVOKE and CREATE/DROP commands it's not necessary 
to update the tables and run FLUSH PRIVILEGES.  IMO manipulating those tables 
directly is a bad habit.
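For example (the account and database names here are placeholders):

```sql
-- Account management the supported way; no FLUSH PRIVILEGES needed,
-- because the server keeps its in-memory grant cache in sync itself.
CREATE USER 'app'@'localhost' IDENTIFIED BY 'secret';
GRANT SELECT, INSERT ON mydb.* TO 'app'@'localhost';
REVOKE INSERT ON mydb.* FROM 'app'@'localhost';
SHOW GRANTS FOR 'app'@'localhost';
DROP USER 'app'@'localhost';
```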

-Original Message-
From: Carlos Mennens [mailto:carlosw...@gmail.com]
Sent: Thursday, March 11, 2010 8:32 AM
To: MySQL
Subject: Re: Error Removing Anonymous Accounts

On Thu, Mar 11, 2010 at 11:29 AM, Rolando Edwards
redwa...@logicworks.net wrote:
 DELETE FROM mysql.user WHERE user='';
 FLUSH PRIVILEGES;

That worked, and I thank you. However, I am wondering why the MySQL guide
was incorrect. Perhaps it's right but I did something wrong. Any
thoughts?





RE: Properly ALTER Column Data?

2010-03-11 Thread Gavin Towey
Please read the tutorial, let us know if you have questions on the information 
in it:
http://dev.mysql.com/doc/refman/5.0/en/tutorial.html
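The operation being asked about (changing stored values, as opposed to the column type) is UPDATE; a sketch against the `sun` table shown below, with made-up replacement serial values:

```sql
-- Change a single row, keyed on a column that identifies it:
UPDATE sun SET Serial = 'S941-NEW' WHERE Model = 'Ultra 24';

-- Or rewrite several rows in one statement with CASE:
UPDATE sun
   SET Serial = CASE Model
                  WHEN 'Ultra 24' THEN 'S941-NEW'
                  WHEN '7310'     THEN 'S934-NEW'
                  WHEN 'J4400'    THEN 'S926-NEW'
                END;
```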



-Original Message-
From: Carlos Mennens [mailto:carlosw...@gmail.com]
Sent: Thursday, March 11, 2010 12:44 PM
To: MySQL
Subject: Properly ALTER Column Data?

I know this is very basic for most on the list but I need some MySQL
help. I am logged into my database and I successfully changed a 'field
type' from INT to VARCHAR. Now I need to modify the actual data I
inserted into those specific fields. I checked the manual and could
not really gather exactly how to format my command. There were just so
many options and so much information on the page...

My table data is as follows and I would like to 'ALTER' the data in
all three 'Serial' fields:

mysql> select Model, Serial, GFE, EOL from sun;
+----------+--------+------+------------+
| Model    | Serial | GFE  | EOL        |
+----------+--------+------+------------+
| Ultra 24 | 941    | 8402 | 2010-10-16 |
| 7310     | 934    | 9314 | 2012-08-27 |
| J4400    | 926    | 7623 | 2012-08-27 |
+----------+--------+------+------------+
3 rows in set (0.00 sec)

Can someone please help me understand how I am to properly use and
change the data in MySQL?





RE: Front End Application For MySQL

2010-03-09 Thread Gavin Towey
Google for hardware asset management software


-Original Message-
From: vegiv...@gmail.com [mailto:vegiv...@gmail.com] On Behalf Of Johan De 
Meersman
Sent: Tuesday, March 09, 2010 9:15 AM
To: Carlos Mennens
Cc: MySQL
Subject: Re: Front End Application For MySQL

Wrong place to ask, unless you want to reinvent the wheel. There are plenty of
applications out there for managing server pools, although tbh I can't think
of a good OSS one off-hand. On the job, we use HP OpenView. Do yourself a
favour and stay away from that one :-)

On Tue, Mar 9, 2010 at 5:33 PM, Carlos Mennens carlosw...@gmail.com wrote:

 I am looking for a front end application that will allow me to enter
 in all my server / workstation data into a MySQL backend running on
 Linux. I have been told that I need to take all 75 servers we have in
 our server room and keep a database of the following:

 - Make
 - Model
 - O.S.
 - I.P.
 - Hostname
 - Serial #
 - Company Asset #
 - Warranty Expiration Date
 - Room Location

 Does anyone know of a program or easiest way I can enter all this data
 into a MySQL database and manage it as things change and move around?
 Right now I am using a spreadsheet from OpenOffice and it's not
 efficient if you know what I mean. Anyone have any tips / suggestions?





--
Bier met grenadyn
Is als mosterd by den wyn
Sy die't drinkt, is eene kwezel
Hy die't drinkt, is ras een ezel





RE: UDF - Sequence Numbers

2010-03-08 Thread Gavin Towey
Others may correct me if I'm wrong, but if you want to maintain your own auto 
increment without duplicates like that, then you need to serialize inserts to 
your table.  That means either taking an explicit table lock in the process 
that's inserting, or using InnoDB with transactions at the SERIALIZABLE 
isolation level.

If I were you, I would maintain your incrementing sequence in a separate table.

Prep the table:
CREATE TABLE store_seq ( store_id int unsigned not null, nextid int unsigned 
not null );
insert into store_seq values ( 1, 1 );

Get next id:
update store_seq set nextid=LAST_INSERT_ID(nextid+1) where store_id=1;
select LAST_INSERT_ID();

This will give you the next id in an atomic way, and avoid replication 
problems.  It means your insert will take two queries, but that's a small price 
to pay for correctness.
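A sketch of the counter-table pattern, using sqlite3 in Python as a stand-in (in MySQL the UPDATE would use LAST_INSERT_ID(nextid+1) as shown above; here a transaction per allocation stands in for that atomicity):

```python
import sqlite3

# Per-store counter table, as suggested above.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE store_seq ("
             "store_id INTEGER PRIMARY KEY, nextid INTEGER NOT NULL)")
conn.execute("INSERT INTO store_seq VALUES (1, 1)")

def next_id(store_id):
    # Increment and read back inside one transaction, so two callers
    # can never be handed the same value.
    with conn:
        conn.execute(
            "UPDATE store_seq SET nextid = nextid + 1 WHERE store_id = ?",
            (store_id,))
        (val,) = conn.execute(
            "SELECT nextid FROM store_seq WHERE store_id = ?",
            (store_id,)).fetchone()
    return val

ids = [next_id(1) for _ in range(3)]
print(ids)  # [2, 3, 4]
```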

Regards,
Gavin Towey

-Original Message-
From: Johnny Withers [mailto:joh...@pixelated.net]
Sent: Monday, March 08, 2010 1:31 PM
To: MySQL General List
Subject: UDF - Sequence Numbers

I have two servers, both running 5.0.77-log, one is setup as a master, the
other as a replication slave.

The database contains a table that holds records of loans for financial
lending stores. This table has an ID column this is defined as auto
increment. There is another column called store_seq_num that holds the
sequence number for each loan done in each store. This column needs to work
like the auto-increment field; however, it's value is dependent upon which
store created the loan. Currently there is a UDF called fnNextStoreSeqNum
that returns the next sequence number for the new loan for the given store.
It does this by executing:

SELECT MAX(store_seq_num)+1 AS NextId FROM trans_adv WHERE
trans_adv.store_id=N;

It uses the store_seq_num key, and EXPLAIN says "Select tables optimized
away" in the Extra column.

The INSERT statement for this table looks something like this:

INSERT INTO trans_adv(store_id,store_seq_num,...)
VALUES(fnNextStoreSeqNum(10),10,);

The problem comes in on the replication server. Sometimes the sequence
numbers do not match the master. The root cause of the problem seems to be
when two loans are created in the same store at the same time (same second
-- or even 2 seconds apart sometimes). The master duplicates the sequence
number and the slave writes the correct sequence numbers. This seems to
happen when the server is under heavy load (600+ queries per sec). I have a
feeling it's due to the loan being created in a single transaction;
therefore the sequence number for the first loan really didn't exist to
any other connections until COMMIT was issued.

Is there a better way to do these sequence numbers? Should the key be
defined as UNIQUE? If it is defined as UNIQUE how can this key be added to
the existing table that has duplicate sequence numbers?

A partial create table statement is below for the trans_adv table.

CREATE TABLE `trans_adv` (
  `id` int(10) unsigned NOT NULL auto_increment,
  `store_id` int(10) unsigned NOT NULL default '0',
  `store_seq_num` int(10) unsigned NOT NULL default '0',
  PRIMARY KEY  (`id`),
  KEY `store_key` (`store_id`),
  KEY `store_seq_num_key` (`company_id`,`store_id`,`store_seq_num`),
) ENGINE=InnoDB AUTO_INCREMENT=3049363 DEFAULT CHARSET=latin1
;



--
-
Johnny Withers
601.209.4985
joh...@pixelated.net





RE: Displaying date/time

2010-03-05 Thread Gavin Towey
That's probably something best done in your presentation (app) layer.

If you must do this in mysql, then you'll probably want to write a stored 
function.
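As a sketch of the app-layer approach, here is one way to render the requested "1d 2h 29min ago" style in Python; the formatting rules (drop the largest units when they are zero) are inferred from Neil's examples:

```python
from datetime import datetime

def ago(then, now):
    """Format the gap between two datetimes as e.g. '1d 2h 29min ago'."""
    secs = int((now - then).total_seconds())
    weeks, secs = divmod(secs, 7 * 86400)
    days, secs = divmod(secs, 86400)
    hours, secs = divmod(secs, 3600)
    mins = secs // 60
    parts = []
    if weeks:
        parts.append(f"{weeks}w")
    if days or weeks:
        parts.append(f"{days}d")
    if hours or days or weeks:
        parts.append(f"{hours}h")
    parts.append(f"{mins}min")
    return " ".join(parts) + " ago"

# 1 day, 2 hours, 29 minutes apart:
print(ago(datetime(2010, 3, 4, 8, 0), datetime(2010, 3, 5, 10, 29)))
# 1d 2h 29min ago
```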

-Original Message-
From: Tompkins Neil [mailto:neil.tompk...@googlemail.com]
Sent: Friday, March 05, 2010 10:27 AM
To: prabhat kumar
Cc: [MySQL]
Subject: Re: Displaying date/time

Hi

That is kind of what I'm looking for.  However I'd like to be able to
display the difference between date 1 and date 2 like

1d 2h 29min ago

Thanks
Neil

On Fri, Mar 5, 2010 at 3:32 PM, prabhat kumar aim.prab...@gmail.com wrote:

 Might be this will help you:
 There is a table called message with 3 columns: id, pubdate, and message.
 You can get all messages from the last 5 minutes with the following
 example:

 SELECT TIMESTAMPDIFF(MINUTE, pubdate, now()), id, message from message
 where (TIMESTAMPDIFF(MINUTE, pubdate, now()) < 5);

 Thanks,


 On Fri, Mar 5, 2010 at 8:19 PM, Tompkins Neil 
 neil.tompk...@googlemail.com wrote:

 Hi

 I have a number of rows which have Date and Time data in.  I want to
 display
 in the following formats based on the systems current time

 e.g under 1 hour 24min ago
 e.g under 1 day 16h 29min ago
 e.g over 1 day 1d 2h 29min ago
 e.g over 1 week 1w 4d 2h 29min ago

 How would this best be achieve using MySQL.

 Thanks,
 Neil




 --
 Best Regards,

 Prabhat Kumar
 MySQL DBA
 Datavail-India Mumbai
 Mobile : 91-9987681929
 www.datavail.com

 My Blog: http://adminlinux.blogspot.com
 My LinkedIn: http://www.linkedin.com/in/profileprabhat






RE: Re bug#45458

2010-03-03 Thread Gavin Towey
Yes, but the optimizer doesn't know that.

-Original Message-
From: vegiv...@gmail.com [mailto:vegiv...@gmail.com] On Behalf Of Johan De 
Meersman
Sent: Wednesday, March 03, 2010 6:16 AM
To: Zardosht Kasheff
Cc: Jonas Oreland; mysql@lists.mysql.com
Subject: Re: Re bug#45458

Unless I'm very much mistaken, InnoDB tables always have a clustered index
as their primary key.

On Wed, Mar 3, 2010 at 2:58 PM, Zardosht Kasheff zardo...@gmail.com wrote:

 Hello Jonas,

 Thank you for filing this feature request. Are there plans to add
 support for clustered indexes in MySQL soon? This is something I have
 been researching on and off for a while now. Here are my thoughts.

 It seems that there are two parts to this feature request:
 1) a new flag that allows the storage engine to report that an index
 is clustered
 2) changes to the optimizer to properly support clustered keys.

 I like #1. The way that I dealt with it was not as good. I added
 handler::supports_clustered_keys(), and used that function and
 HA_CLUSTERING from my patch to determine if an index is clustered.
 Your method is better.

 As for #2, I do not think it is enough. Here are two other locations
 of code I know that will need to be modified:
 1) find_shortest_key in sql/sql_select.cc. (This will be an addition
 to MySQL bug #39653)
 2) get_best_ror_intersect in sql/opt_range.cc. This is for
 index_merge. A patch of what I have done is in the attached file
 9-index_merge_clustering.txt. This patch was the result of a long
 thread on the internals alias (which you may want to CC for this
 discussion). The link to the thread is
 http://lists.mysql.com/internals/36977.

 There may be more places that need to be modified. I think the
 approach to finding out if other places need to be modified is to
 pattern match off of how the optimizer deals with clustered v.
 non-clustered primary keys. It does so by having a function
 handler::primary_key_is_clustered. I think one needs to search the
 optimizer for all instances of this function, see why it is being
 called, and see if it applies to clustered v. non-clustered secondary
 keys as well.

 -Zardosht

 On Wed, Mar 3, 2010 at 5:57 AM, Jonas Oreland jo...@mysql.com wrote:
  Hi,
 
  I just filed http://bugs.mysql.com/bug.php?id=51687
  which is very related to your bug#45458.
 
  If you would care to look at it and provide feedback,
  I would appreciate it.
 
  /Jonas
 






--
Beer with grenadine
Is like mustard with wine
She who drinks it is a prude
He who drinks it is soon a donkey

This message contains confidential information and is intended only for the 
individual named.  If you are not the named addressee, you are notified that 
reviewing, disseminating, disclosing, copying or distributing this e-mail is 
strictly prohibited.  Please notify the sender immediately by e-mail if you 
have received this e-mail by mistake and delete this e-mail from your system. 
E-mail transmission cannot be guaranteed to be secure or error-free as 
information could be intercepted, corrupted, lost, destroyed, arrive late or 
incomplete, or contain viruses. The sender therefore does not accept liability 
for any loss or damage caused by viruses or errors or omissions in the contents 
of this message, which arise as a result of e-mail transmission. [FriendFinder 
Networks, Inc., 220 Humbolt court, Sunnyvale, CA 94089, USA, FriendFinder.com




RE: auto_increment weirdness

2010-02-18 Thread Gavin Towey
Reproduced in 5.1.43.  Could not reproduce it in 5.0.66


-Original Message-
From: Yang Zhang [mailto:yanghates...@gmail.com]
Sent: Wednesday, February 17, 2010 6:05 PM
To: mysql@lists.mysql.com
Subject: auto_increment weirdness

Hi, for some reason, I have an auto_increment field that's magically
bumped up to the next biggest power of 2 after a big INSERT...SELECT
that inserts a bunch of tuples (into an empty table). Is this expected
behavior? I couldn't find any mention of this from the docs (using the
MySQL 5.4.3 beta).

Small reproducible test case:

First, generate some data: from bash, run: seq 3 > /tmp/seq

Next, run this in mysql:

create table x (a int auto_increment primary key, b int);
create table y (b int);
load data infile '/tmp/seq' into table y;
insert into x (b) select b from y;
show create table x;

This will show auto_increment = 32768 instead of 3.

Is this a bug introduced in the beta? Has it been fixed in newer
releases? Couldn't find a mention in the bug database. Thanks in
advance.
--
Yang Zhang
http://www.mit.edu/~y_z/







RE: count children nodes

2010-02-16 Thread Gavin Towey
Trees can be complex in SQL; these articles give some different approaches to 
handling them:
http://hashmysql.org/index.php?title=Trees_and_hierarchical_data_in_SQL
http://dev.mysql.com/tech-resources/articles/hierarchical-data.htm
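Since MySQL 5.0/5.1 has no recursive queries, a common workaround from those
articles is to walk the adjacency list in application code. A minimal sketch
(table and column names borrowed from the original question; the data here is
invented):

```python
from collections import defaultdict

def count_unread_in_thread(messages, root_id):
    """Count unread messages in the subtree rooted at root_id.

    `messages` is a list of (uid, padre, leido) tuples, mirroring the
    tx_cc20_mensajes columns: padre is the parent uid (0 = top level)
    and leido is the read flag (0 = unread).
    """
    children = defaultdict(list)
    unread = {}
    for uid, padre, leido in messages:
        children[padre].append(uid)
        unread[uid] = (leido == 0)

    # Breadth-first walk over the adjacency list, counting unread nodes.
    count = 1 if unread.get(root_id) else 0
    queue = list(children[root_id])
    while queue:
        uid = queue.pop()
        if unread[uid]:
            count += 1
        queue.extend(children[uid])
    return count

msgs = [
    (1, 0, 1),  # root, read
    (2, 1, 0),  # reply, unread
    (3, 1, 1),  # reply, read
    (4, 2, 0),  # reply to a reply, unread
]
print(count_unread_in_thread(msgs, 1))  # 2
```

With the path-enumeration or nested-set designs from the articles, the same
count becomes a single query instead of a loop.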

Regards,
Gavin Towey

-Original Message-
From: David Arroyo Menendez [mailto:david.arr...@bbvaglobalnet.com]
Sent: Tuesday, February 16, 2010 8:27 AM
To: mysql@lists.mysql.com
Subject: count children nodes

Hello,

I've the next table structure:

CREATE TABLE tx_cc20_mensajes (
uid int(11) NOT NULL auto_increment,
pid int(11) DEFAULT '0' NOT NULL,
tstamp int(11) DEFAULT '0' NOT NULL,
crdate int(11) DEFAULT '0' NOT NULL,
cruser_id int(11) DEFAULT '0' NOT NULL,
deleted tinyint(4) DEFAULT '0' NOT NULL,
hidden tinyint(4) DEFAULT '0' NOT NULL,
remitente int(11) DEFAULT '0' NOT NULL,
destinatario int(11) DEFAULT '0' NOT NULL,
padre int(11) DEFAULT '0' NOT NULL,
mensaje text,
leido tinyint(3) DEFAULT '0' NOT NULL,

PRIMARY KEY (uid),
KEY parent (pid)
);

Where padre is the id of the parent message. I need to count the unread
messages in a thread. How can I do it?

With
$query = "select count(*) as num from tx_cc20_mensajes msj where hidden=0 and
deleted=0 and leido=0 and destinatario=".$uid." and remitente<>".$uid." and
(padre=".$est_row['uid']." or uid=".$est_row['uid'].")";
I am counting only the first level, but I need to count the rest of the
children messages. What is the query?

Thanks!





RE: Mysql crashes, do not understand backtrace

2010-02-15 Thread Gavin Towey
When running with force-recovery, the database will indeed be read-only.  This 
is on purpose.  When you have an error of this nature, the procedure is to find 
a force-recovery level which allows you to start mysql, then export all the 
data.  Then you can shut down mysql, wipe your innodb tablespace and log file, 
restart mysql in a clean state, then re-import your data.

Please proceed with caution though; if possible, make a backup of the MySQL 
data directory in its current corrupted state before you delete anything.

Regards
Gavin Towey

-Original Message-
From: Wesley Wajon [mailto:wes...@oktober.nl]
Sent: Monday, February 15, 2010 6:30 AM
To: mysql@lists.mysql.com
Subject: Mysql crashes, do not understand backtrace

On one of the servers we maintain mysql recently crashed.
In safe mode (innodb_force_recovery = 4) it runs but then you can't do
any mutations to the databases.

When we try to start it in normal mode it crashes and we do not really
understand the backtrace.
We eventually ended up flushing all the databases and adding them back from
backups one by one, except for one (used by OpenX 2.4.4).

It now runs normally with the backups, but could someone give us more
insight into what has happened and what could be the cause of it?

The server is a:
Intel(R) Xeon(R) CPU 3060  @ 2.40GHz with 4 GB RAM

running:
Linux 2.6.15-1.2054_FC5smp #1 SMP Tue Mar 14 16:05:46 EST 2006 i686 i686
i386 GNU/Linux

mysql version:
mysql  Ver 14.12 Distrib 5.0.27, for redhat-linux-gnu (i686) using
readline 5.0

mysqld log:
Number of processes running now: 0
100211 11:55:23  mysqld restarted
100211 11:55:23  InnoDB: Database was not shut down normally!
InnoDB: Starting crash recovery.
InnoDB: Reading tablespace information from the .ibd files...
InnoDB: Restoring possible half-written data pages from the doublewrite
InnoDB: buffer...
100211 11:55:23  InnoDB: Starting log scan based on checkpoint at
InnoDB: log sequence number 3 319452546.
InnoDB: Doing recovery: scanned up to log sequence number 3 320533610
InnoDB: 1 transaction(s) which must be rolled back or cleaned up
InnoDB: in total 159 row operations to undo
InnoDB: Trx id counter is 0 25336832
100211 11:55:23  InnoDB: Starting an apply batch of log records to the
database...
InnoDB: Progress in percents: 0 1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 16
17 18 19 20 21 22 23 24 25 26 27 28 29 30 31 32 33 34 35 36 37 38 39 40
41 42 43 44 45 46 47 48 49 50 51 52 53 54 55 56 57 58 59 60 61 62 63 64
65 66 67 68 69 70 71 72 73 74 75 76 77 78 79 80 81 82 83 84 85 86 87 88
89 90 91 92 93 94 95 96 97 98 99
InnoDB: Apply batch completed
InnoDB: Starting in background the rollback of uncommitted transactions
100211 11:55:23  InnoDB: Rolling back trx with id 0 25327402, 159 rows
to undo
100211 11:55:23  InnoDB: Started; log sequence number 3 320533610
InnoDB: Dump of the tablespace extent descriptor:  len 40; hex
639114ee23260004febfafbeaafffbef;
asc   c   #;
InnoDB: Serious error! InnoDB is trying to free page 17905
InnoDB: though it is already marked as free in the tablespace!
InnoDB: The tablespace free space info is corrupt.
InnoDB: You may need to dump your InnoDB tables and recreate the whole
InnoDB: database!
InnoDB: Please refer to
InnoDB: http://dev.mysql.com/doc/refman/5.0/en/forcing-recovery.html
InnoDB: about forcing recovery.
100211 11:55:23InnoDB: Assertion failure in thread 2960472992 in file
fsp0fsp.c line 2980
InnoDB: We intentionally generate a memory trap.
InnoDB: Submit a detailed bug report to http://bugs.mysql.com.
InnoDB: If you get repeated assertion failures or crashes, even
InnoDB: immediately after the mysqld startup, there may be
InnoDB: corruption in the InnoDB tablespace. Please refer to
InnoDB: http://dev.mysql.com/doc/refman/5.0/en/forcing-recovery.html
InnoDB: about forcing recovery.
mysqld got signal 11;
This could be because you hit a bug. It is also possible that this binary
or one of the libraries it was linked against is corrupt, improperly built,
or misconfigured. This error can also be caused by malfunctioning hardware.
We will try our best to scrape up some info that will hopefully help
diagnose
the problem, but since we have already crashed, something is definitely
wrong
and this may fail.

key_buffer_size=134217728
read_buffer_size=131072
max_used_connections=0
max_connections=100
threads_connected=0
It is possible that mysqld could use up to
key_buffer_size + (read_buffer_size + sort_buffer_size)*max_connections
= 348671 K
bytes of memory
Hope that's ok; if not, decrease some variables in the equation.

thd=(nil)
Attempting backtrace. You can use the following information to find out
where mysqld died. If you see no messages after this, something went
terribly wrong...
Cannot determine thread, fp=0xb0752308, backtrace may not be correct.
Stack range sanity check OK, backtrace follows:
0x8178043
0x83cf830
0x8381ad3
0x8387768
0x836739d
0x83682be

RE: query help

2010-02-09 Thread Gavin Towey
SELECT ID, check_no, amount FROM payables UNION SELECT ID, check_no, amount 
FROM paychecks;
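The same idea can be sanity-checked against SQLite (data copied from the
question; UNION ALL is suggested here as a variation, since it skips the
duplicate-elimination pass that plain UNION performs, which is cheaper when
the two sets are known to be disjoint, as the sample check numbers are):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE payables  (id INTEGER, check_no INTEGER, amount REAL);
    CREATE TABLE paychecks (id INTEGER, check_no INTEGER, amount REAL);
    INSERT INTO payables  VALUES (3, 3478, 67.00), (4, 3489, 98.00),
                                 (8, 3476, 56.00);
    INSERT INTO paychecks VALUES (23, 3469, 498.00), (34, 3502, 767.00),
                                 (36, 3504, 754.00);
""")

# UNION removes duplicate rows across the two SELECTs; UNION ALL keeps
# every row, stacking the two tables into one result set.
rows = conn.execute("""
    SELECT id, check_no, amount FROM payables
    UNION ALL
    SELECT id, check_no, amount FROM paychecks
    ORDER BY id
""").fetchall()
for r in rows:
    print(r)
```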

Regards,
Gavin Towey


-Original Message-
From: Richard Reina [mailto:rich...@rushlogistics.com]
Sent: Tuesday, February 09, 2010 9:23 AM
To: mysql@lists.mysql.com
Subject: query help

I am trying to write a query that merges 2 columns from different tables and 
show them as one column of data.  Something like the following.

payables
ID  |check_no| amount|
3   |3478| 67.00 |
4   |3489| 98.00 |
8   |3476| 56.00 |

paychecks
ID  |check_no| amount
23  |3469|498.00 |
34  |3502|767.00 |
36  |3504|754.00 |

I am struggling to write a select query that gives me amounts and check numbers 
from both of the tables in the same column.  Like the following:

ID  |check_no| amount|
3   |3478| 67.00 |
4   |3489| 98.00 |
8   |3476| 56.00 |
23  |3469|498.00 |
34  |3502|767.00 |
36  |3504|754.00 |

Any help would be greatly appreciated.

Thanks,

Richard





RE: Converting MyISAM to InnoDB

2010-02-08 Thread Gavin Towey
If you have a column defined as auto_increment, there must be a key on it.  
This is true in both MyISAM and InnoDB.

If you need further help, please show us the full structure of the real table 
you're operating on (not the one from your sandbox), the statement you run, and 
the error message.

Regards,
Gavin Towey

-Original Message-
From: Steve Staples [mailto:sstap...@mnsi.net]
Sent: Monday, February 08, 2010 9:39 AM
To: mysql@lists.mysql.com
Subject: Converting MyISAM to InnoDB

Hello again!

I am trying to convert my tables to InnoDB, and i am getting an error...

Error: 1075
Incorrect table definition; there can be only one auto column and it must be
defined as a key

Now, I converted a table in my sandbox earlier this morning to do some
testing, and it worked fine... mind you, i did truncate the table first, but
i am not sure if that is relevant or not.

The table structure has a TONNE of fields, but to give you an idea... here
is what is starts out like:

CREATE TABLE `radacct_201002` (
  `Year_Month` INT(11) UNSIGNED NOT NULL DEFAULT '0',
  `Radacct_Id` INT(11) UNSIGNED ZEROFILL NOT NULL AUTO_INCREMENT,
  `Acct_Authentic` VARCHAR(32) DEFAULT '',
Etc etc
  PRIMARY KEY  (`Year_Month`,`Radacct_Id`),
  UNIQUE KEY `radacct_id` (`Radacct_Id`),
  KEY (there are keys here not of any relevance that i can see)
) ENGINE=INNODB DEFAULT CHARSET=latin1

That is the table in the sandbox, and as i said, all i did was truncate it,
and change to innodb (there was prolly about 5-10 rows when i did it) and
there wasn't any issues.   When i do it to the live database (i copied a
table of live data, so i can convert it and see what kind of times/loads i
get) i get the error...

I am in the midst of removing the combined unique primary key, to see if
that is the culprit or not, but if anyone has any ideas, i am eager to
listen :)

Steve.








RE: Business Key Generation

2010-02-05 Thread Gavin Towey
Your auto-increment could be effectively a row number. Otherwise there is this 
technique with user variables:

SET @rownum := 0;
SELECT @rownum := @rownum + 1 AS rownum, ... FROM ... ;
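During a one-off migration, the same numbering can also be done from
application code. A sketch against SQLite with invented data (the employee
table is reduced to the relevant columns):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE employee (
        employeeid  INTEGER PRIMARY KEY,
        reference   TEXT,
        fkcompanyid INTEGER
    );
    INSERT INTO employee (employeeid, fkcompanyid) VALUES
        (10, 1), (11, 1), (12, 2), (13, 1);
""")

# Emulate the @rownum counter in application code: fetch one company's
# rows in a stable order, then number them 1, 2, 3, ...
company = 1
rows = conn.execute(
    "SELECT employeeid FROM employee"
    " WHERE fkcompanyid = ? ORDER BY employeeid",
    (company,),
).fetchall()
for rownum, (emp_id,) in enumerate(rows, start=1):
    conn.execute(
        "UPDATE employee SET reference = ? WHERE employeeid = ?",
        (str(rownum), emp_id),
    )

refs = conn.execute(
    "SELECT employeeid, reference FROM employee"
    " WHERE fkcompanyid = 1 ORDER BY employeeid"
).fetchall()
print(refs)  # [(10, '1'), (11, '2'), (13, '3')]
```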

Regards,
Gavin Towey


-Original Message-
From: Tom Goring [mailto:tom.gor...@gmail.com]
Sent: Friday, February 05, 2010 9:37 AM
To: mysql@lists.mysql.com
Subject: Business Key Generation

Hi,

I have a table that I need to migrate some data into:

CREATE TABLE `employee` (
  `employeeid` bigint(20) NOT NULL auto_increment,
..
  `reference` varchar(20) default NULL,
  `fkcompanyid` bigint(20) default NULL,
  PRIMARY KEY  (`employeeid`),
 KEY `FK4722E6AE82D7E095` (`fkcompanyid`),
CONSTRAINT `FK4722E6AE82D7E095` FOREIGN KEY (`fkcompanyid`) REFERENCES
`company` (`companyid`),
...
) ENGINE=InnoDB AUTO_INCREMENT=10001585 DEFAULT CHARSET=latin1 |


My question is I want to generate a reference business key (a number) as the
migrated data is blank.

something like

update employee set reference = ROWNUM where fkcompanyid = X

I think in Oracle ROWNUM would help me do this.
I.e. generate a reference based on the position in the result set.

any Ideas ?

--
Tom Goring



RE: Regarding storing non-english chars in mysql

2010-01-18 Thread Gavin Towey
http://dev.mysql.com/doc/refman/5.0/en/charset.html
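Question marks almost always mean a lossy conversion (typically through
latin1) somewhere between the application, the connection, and the column.
A small Python illustration of the difference; the usual MySQL-side fix,
stated here as standard advice rather than a verified diagnosis of the
poster's JDBC setup, is utf8 columns plus SET NAMES utf8 on the connection:

```python
# A quick check that Arabic text survives a UTF-8 round trip.  For MySQL
# the equivalent requirement is that the column character set, the
# connection character set ("SET NAMES utf8"), and the application all
# agree; a forced pass through latin1 is what produces question marks.
arabic = "\u0645\u0631\u062d\u0628\u0627"  # "marhaba" (hello)

encoded = arabic.encode("utf-8")
decoded = encoded.decode("utf-8")
assert decoded == arabic  # UTF-8 is lossless for these code points

# latin1 cannot represent these code points at all; forcing the
# conversion with errors="replace" yields the familiar '?'.
lossy = arabic.encode("latin-1", errors="replace").decode("latin-1")
print(lossy)  # ?????
```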


-Original Message-
From: Ningappa Koneri [mailto:ningappa.kon...@comviva.com]
Sent: Sunday, January 17, 2010 11:32 PM
To: mysql@lists.mysql.com
Subject: Regarding storing non-english chars in mysql

Dear all,

I have a problem migrating a GUI app (which displays multilingual data like 
Arabic) that currently uses Oracle as the backend db to MySQL (5.1). Currently 
in Oracle I am using the NVARCHAR datatype to store the Arabic chars. How do I 
get the same functionality in MySQL? I have tried a sample servlet/jsp to 
insert the Arabic chars into MySQL by creating a table having two columns of 
NATIONAL VARCHAR type, but it's not displaying Arabic chars; only question 
marks are there.

One more thing: in the sample example, before inserting I am converting to 
UTF-8.

Regards,
Ningappa Koneri
mLifestyle | www.comviva.com









RE: When using FOR UPDATE whole the table seems to lock instead of selected row

2010-01-18 Thread Gavin Towey
I think Baron was referring to a technique like this:

You sell a t-shirt: UPDATE table SET t=t-X WHERE t >= X. If you get rows 
affected, it's sold and ok; if not, the stock ran out before the operation. 
Either way it's safe. See 
http://dev.mysql.com/tech-resources/articles/storage-engine/part_3.html
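This conditional-update pattern can be demonstrated with SQLite standing in
for InnoDB (a sketch with invented data; in MySQL the atomicity under
concurrency comes from InnoDB's row locking on the UPDATE):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE stock (item TEXT PRIMARY KEY, qty INTEGER)")
conn.execute("INSERT INTO stock VALUES ('t-shirt', 2)")

def sell(conn, item, n):
    """Atomically decrement stock; report whether the sale went through.

    The "qty >= ?" guard folds the read-check-write into a single
    statement, so no SELECT ... FOR UPDATE is needed for this case.
    """
    cur = conn.execute(
        "UPDATE stock SET qty = qty - ? WHERE item = ? AND qty >= ?",
        (n, item, n),
    )
    return cur.rowcount == 1

print(sell(conn, "t-shirt", 1))  # True  (1 left)
print(sell(conn, "t-shirt", 2))  # False (only 1 left, row untouched)
print(sell(conn, "t-shirt", 1))  # True  (0 left)
```

Checking the affected-row count after the UPDATE is what tells the
application whether the stock was sufficient.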


-Original Message-
From: phark...@gmail.com [mailto:phark...@gmail.com] On Behalf Of Perrin Harkins
Sent: Friday, January 15, 2010 6:08 AM
To: Johan Machielse
Cc: Baron Schwartz; mysql@lists.mysql.com
Subject: Re: When using FOR UPDATE whole the table seems to lock instead of 
selected row

On Fri, Jan 15, 2010 at 2:54 AM, Johan Machielse
johan.machie...@kpnplanet.nl wrote:
 The problem is that multiple users can read and update the same field
 simultaneously (worse case) which could lead to unpredictable problems.

There are other ways to do handle most cases.  For example:
UPDATE table SET value = value + 1 WHERE key = 7;

If you need to grab the value after the update, you can get it from
last_insert_id():
UPDATE table SET value = last_insert_id(value + 1) WHERE key = 7;

However, if your situation is more complex than that, FOR UPDATE is
usually a good solution.

 What I really want is the following:
 When person A is reading and updating a field value, person B should not be
 able to do this simultaneously. Person B has to wait till the Person A has
 finished his work.

FOR UPDATE is the right solution for that.  Your only issue seems to
be that you feel too many rows are being locked.  That's an internal
implementation issue, but you may be able to change it by adjusting
which columns have indexes and keeping your statistics up to date.  Or
there  may not be enough cardinality on the column you're using in the
query to lock specific rows.  Using EXPLAIN on the SELECT query might
tell you more about what's happening.

- Perrin







RE: Binary Logging

2010-01-06 Thread Gavin Towey
Binary logs are pretty important for backups as well.  Most often people take a 
full backup using mysqldump or filesystem snapshots, and then back up their 
binlogs.  With the full backup + all the binlogs since then, you can recover to 
any point in time you choose.

If you're not currently running replication then you don't really need those 
binlogs.  Asking "will this be ok" is more a matter for you to decide in terms 
of your backup procedures.

Once you start using replication, you only need to keep binlogs around as long 
as it takes your slaves to copy the data from the master; with a fast network 
that can be seconds.

You can also use FLUSH LOGS; to force mysql to start a new binlog file, then 
purge all files before the most current one.
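Point-in-time recovery boils down to replaying, with mysqlbinlog, every
binary log written since the last full dump. A toy sketch of the selection
step (file names and timestamps invented; in practice the dump's recorded
log coordinates drive the choice):

```python
import datetime

def binlogs_needed(binlogs, last_full_backup):
    """Pick the binary logs required for point-in-time recovery.

    binlogs maps file name -> datetime of the file's last event.  Any
    file whose final event falls at or after the full-backup time has
    to be replayed (via mysqlbinlog) on top of the restored dump.
    """
    return sorted(
        name
        for name, last_event in binlogs.items()
        if last_event >= last_full_backup
    )

logs = {
    "mysql-bin.000001": datetime.datetime(2010, 1, 4, 23, 0),
    "mysql-bin.000002": datetime.datetime(2010, 1, 5, 23, 0),
    "mysql-bin.000003": datetime.datetime(2010, 1, 6, 23, 0),
}
backup_time = datetime.datetime(2010, 1, 5, 3, 0)
print(binlogs_needed(logs, backup_time))
# ['mysql-bin.000002', 'mysql-bin.000003']
```

Anything older than the last full backup is safe to purge, which is exactly
what PURGE BINARY LOGS does on the server side.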

Regards,
Gavin Towey

-Original Message-
From: Steve Staples [mailto:sstap...@mnsi.net]
Sent: Wednesday, January 06, 2010 11:43 AM
To: mysql@lists.mysql.com
Subject: RE: Binary Logging

Silly me sees that there is an 'expire_logs_days' in the my.cnf

But again, if i was to purge everything but the last day or 2... running
this command i found:

PURGE BINARY LOGS BEFORE DATE_SUB(NOW(), INTERVAL 2 DAY);

From the MySQL command line, will this be ok?

Steve.


-Original Message-
From: Steve Staples [mailto:sstap...@mnsi.net]
Sent: January 6, 2010 2:35 PM
To: mysql@lists.mysql.com
Subject: Binary Logging

Good afternoon.

I am having a weird problem with the Binary Logging of my MySQL.

For some reason, we have it on (still not sure why) but when i look in the
/var/log/mysql/ folder, the oldest file is from 2009-12-27.   The MySQL
service has not been restarted since the server came back up 150 days ago.
So somehow, they have been purged in the past, but i don't see a cron that
would have done it?

Anyway, my question is, is that I am currently moving data around, splitting
it into multiple tables, so there are TONNES of queries taking place, doing
inserts and such, and now I am up to 93GB in this folder.

I am starting to worry about running out of space on the harddrives, and we
have been talking about doing a master/slave setup, and from what I've read,
you need the binary logs on for replication?

Is this true?  Should I keep them, or can i ditch them, and will it
re-create them when we do go to a master/slave setup?

Once i finish moving data around, i can remove the old tables that i am
moving them out of, so I don't really see a huge issue with dumping them, i
just want to make sure.

Thanks in advance!

Steve Staples.








RE: MySQL db died / got corrupted / ???

2010-01-04 Thread Gavin Towey
Interesting. Usually innodb corruption is the result of hardware failure; it 
looks from the log like you need to recover your database, and remove the 
existing data at the filesystem level, then re-import.

How to get data out of a corrupted innodb is here:
http://dev.mysql.com/doc/refman/5.0/en/forcing-recovery.html

Make sure to make backups, even of your corrupted data.  You might want to 
check your hardware, it could be faulty.  Good luck!

Regards,
Gavin Towey

-Original Message-
From: John Oliver [mailto:joli...@john-oliver.net]
Sent: Monday, January 04, 2010 4:47 PM
To: mysql@lists.mysql.com
Subject: Re: MySQL db died / got corrupted / ???

I didn't notice that there was a **LOT** more junk dumped into the log:

100104 16:43:47  mysqld started
InnoDB: The log sequence number in ibdata files does not match
InnoDB: the log sequence number in the ib_logfiles!
100104 16:43:49  InnoDB: Database was not shut down normally!
InnoDB: Starting crash recovery.
InnoDB: Reading tablespace information from the .ibd files...
InnoDB: Restoring possible half-written data pages from the doublewrite
InnoDB: buffer...
InnoDB: Page directory corruption: supremum not pointed to
100104 16:43:50  InnoDB: Page dump in ascii and hex (16384 bytes):
 len 16384; hex 0 <about eleventy bazillion zeros> ;InnoDB: End of
page dump
100104 16:43:51  InnoDB: Page checksum 1575996416, prior-to-4.0.14-form
checksum 1371122432
InnoDB: stored checksum 0, prior-to-4.0.14-form stored checksum 0
InnoDB: Page lsn 0 0, low 4 bytes of lsn at page end 0
InnoDB: Page number (if stored to page already) 0,
InnoDB: space id (if created with >= MySQL-4.1.1 and stored already) 0
InnoDB: Page directory corruption: supremum not pointed to
100104 16:43:51  InnoDB: Page dump in ascii and hex (16384 bytes):
 len 16384; hex 0 <more zeros, lots and lots of them> ;InnoDB:
End of page dump
100104 16:43:51  InnoDB: Page checksum 1575996416, prior-to-4.0.14-form
checksum 1371122432
InnoDB: stored checksum 0, prior-to-4.0.14-form stored checksum 0
InnoDB: Page lsn 0 0, low 4 bytes of lsn at page end 0
InnoDB: Page number (if stored to page already) 0,
InnoDB: space id (if created with >= MySQL-4.1.1 and stored already) 0
100104 16:43:51InnoDB: Error: trying to access a stray pointer
0x80002bae3ff8
InnoDB: buf pool start is at 0x2bad4000, end at 0x2aab4bad4000
InnoDB: Probable reason is database corruption or memory
InnoDB: corruption. If this happens in an InnoDB database recovery, see
InnoDB: http://dev.mysql.com/doc/refman/5.0/en/forcing-recovery.html
InnoDB: how to force recovery.
100104 16:43:51InnoDB: Assertion failure in thread 47242840046192 in
file ./../include/buf0buf.ic line 268
InnoDB: We intentionally generate a memory trap.
InnoDB: Submit a detailed bug report to http://bugs.mysql.com.
InnoDB: If you get repeated assertion failures or crashes, even
InnoDB: immediately after the mysqld startup, there may be
InnoDB: corruption in the InnoDB tablespace. Please refer to
InnoDB: http://dev.mysql.com/doc/refman/5.0/en/forcing-recovery.html
InnoDB: about forcing recovery.
100104 16:43:51 - mysqld got signal 11 ;
This could be because you hit a bug. It is also possible that this
binary
or one of the libraries it was linked against is corrupt, improperly
built,
or misconfigured. This error can also be caused by malfunctioning
hardware.
We will try our best to scrape up some info that will hopefully help
diagnose
the problem, but since we have already crashed, something is definitely
wrong
and this may fail.

key_buffer_size=0
read_buffer_size=131072
max_used_connections=0
max_connections=100
threads_connected=0
It is possible that mysqld could use up to
key_buffer_size + (read_buffer_size + sort_buffer_size)*max_connections
= 217599 K
bytes of memory
Hope that's ok; if not, decrease some variables in the equation.

thd=(nil)
Attempting backtrace. You can use the following information to find out
where mysqld died. If you see no messages after this, something went
terribly wrong...
frame pointer is NULL, did you compile with
-fomit-frame-pointer? Aborting backtrace!
The manual page at http://dev.mysql.com/doc/mysql/en/crashing.html
contains
information that should help you find out what is causing the crash.
100104 16:43:51  mysqld ended

--
***
* John Oliver http://www.john-oliver.net/ *
* *
***

--
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/mysql?unsub=gto...@ffn.com



RE: Is there a better way than this?

2009-12-28 Thread Gavin Towey
No, that won't work. Remember that the WHERE clause is applied to each row 
individually -- if y is 25, then it cannot also be 24 at the same time, so the 
AND condition has no meaning there. What you're asking for is the set of all x 
that have 25 as a y value, which is 1 and 2.

You need to use aggregates to create conditions that are meaningful for all x 
with the same value:

SELECT x FROM a GROUP BY x HAVING sum(y=25) and not sum(y=24);
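This aggregate trick is easy to sanity-check outside MySQL. Here is a minimal, self-contained sketch using Python's sqlite3 (where a comparison like y=25 likewise evaluates to 0 or 1), loaded with the sample data quoted further down in this thread:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
cur = conn.cursor()
cur.execute("CREATE TABLE a (x INTEGER, y INTEGER)")
cur.executemany("INSERT INTO a VALUES (?, ?)",
                [(1, 24), (1, 25), (2, 25), (2, 26), (3, 27)])

# SUM(y=25) counts the rows in each group where y is 25, so the HAVING
# clause keeps groups with at least one y=25 row and no y=24 rows.
cur.execute("SELECT x FROM a GROUP BY x HAVING SUM(y=25) AND NOT SUM(y=24)")
rows = cur.fetchall()
print(rows)  # [(2,)]
```

Only x=2 survives: group 1 contains a y=24 row, and group 3 has no y=25 row at all.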

Regards,
Gavin Towey

-Original Message-
From: Chris W [mailto:4rfv...@cox.net]
Sent: Sunday, December 27, 2009 6:02 PM
To: Tim Molter
Cc: mysql@lists.mysql.com
Subject: Re: Is there a better way than this?

Unless I am missing something, this should work.

SELECT DISTINCT X FROM `A`
WHERE Y IN (25)
AND Y NOT IN (24)

Chris W


Tim Molter wrote:
 I'm new to MySQL and I'm looking for some guidance. I have a table A,
 with two columns X and Y with the following data:

 | X | Y  |
 | 1 | 24 |
 | 1 | 25 |
 | 2 | 25 |
 | 2 | 26 |
 | 3 | 27 |

 I want my SQL query to return 2 following this verbose logic: SELECT
 DISTINCT X FROM A WHERE Y equals 25 and Y also does NOT equal 24.

 I came up with the following SQL, which gives me my desired result,
 but is there a better way to do it? Can it be achieved using MINUS or
 UNION somehow?

 BTW, I'm using IN here because I intend to replace the single numbers
 (24 and 25) with arrays that have 0 to N members.

 SELECT DISTINCT X FROM `A`

 WHERE X IN (
 SELECT X FROM `A` WHERE Y IN (25)
 )

 AND X NOT IN (
 SELECT X FROM `A` WHERE Y IN (24)
 )

 Thanks!





This message contains confidential information and is intended only for the 
individual named.  If you are not the named addressee, you are notified that 
reviewing, disseminating, disclosing, copying or distributing this e-mail is 
strictly prohibited.  Please notify the sender immediately by e-mail if you 
have received this e-mail by mistake and delete this e-mail from your system. 
E-mail transmission cannot be guaranteed to be secure or error-free as 
information could be intercepted, corrupted, lost, destroyed, arrive late or 
incomplete, or contain viruses. The sender therefore does not accept liability 
for any loss or damage caused by viruses or errors or omissions in the contents 
of this message, which arise as a result of e-mail transmission. [FriendFinder 
Networks, Inc., 220 Humbolt court, Sunnyvale, CA 94089, USA, FriendFinder.com]




RE: Why does this query take so long?

2009-12-28 Thread Gavin Towey
It sounds like your laptop might be paging mysql's memory to disk or something 
like that.  Your laptop may not be the most reliable source for benchmarks.

Regards,
Gavin Towey

-Original Message-
From: René Fournier [mailto:m...@renefournier.com]
Sent: Monday, December 28, 2009 2:16 AM
To: René Fournier
Cc: mysql
Subject: Re: Why does this query take so long?

Even weirder, I came back to my laptop a couple hours later. And now the same 
queries are taking 3-10 seconds instead of 0.01 seconds. What could be causing 
this?

On 2009-12-28, at 1:19 PM, René Fournier wrote:

 Hmm, weird. I just re-imported the data (after drop/create table, etc.), and 
 now the spatial queries run fast.
 Has anyone seen this sort of thing happen? Maybe the Index got corrupted 
 somehow, and then MySQL had to do a full table scan (even though EXPLAIN 
 indicated it would use the Spatial Index)?



 On 2009-12-28, at 9:28 AM, René Fournier wrote:

 So just to clarify (hello?), the index which *should* be used (EXPLAIN says 
 so) and *should* make the query run faster than 4 seconds either isn't used 
 (why?) or simply doesn't speed up the query (again, why?).

 +----+-------------+-------+-------+---------------+-------+---------+------+------+-------------+
 | id | select_type | table | type  | possible_keys | key   | key_len | ref  | rows | Extra       |
 +----+-------------+-------+-------+---------------+-------+---------+------+------+-------------+
 |  1 | SIMPLE      | qs    | range | coord         | coord | 27      | NULL | 5260 | Using where |
 +----+-------------+-------+-------+---------------+-------+---------+------+------+-------------+

 SELECT id, province, latitude, longitude, AsText(coordinates), s_ts_r_m, 
 quartersection FROM qs WHERE MBRContains(GeomFromText('POLYGON((51.62582589 
 -114.82248918,51.65126254 -114.82248918,51.65126254 
 -114.78150333,51.62582589 -114.78150333,51.62582589 -114.82248918))'), 
 coordinates)

 8 rows in set (3.87 sec)


 On 2009-12-27, at 3:59 PM, René Fournier wrote:

 So... there is an index, and it's supposedly used:

 mysql> EXPLAIN SELECT id, province, latitude, longitude, 
 AsText(coordinates), s_ts_r_m, quartersection FROM qs WHERE 
 MBRContains(GeomFromText('POLYGON((51.62582589 -114.82248918,51.65126254 
 -114.82248918,51.65126254 -114.78150333,51.62582589 
 -114.78150333,51.62582589 -114.82248918))'), coordinates);
 +----+-------------+-------+-------+---------------+-------+---------+------+------+-------------+
 | id | select_type | table | type  | possible_keys | key   | key_len | ref  | rows | Extra       |
 +----+-------------+-------+-------+---------------+-------+---------+------+------+-------------+
 |  1 | SIMPLE      | qs    | range | coord         | coord | 27      | NULL | 5260 | Using where |
 +----+-------------+-------+-------+---------------+-------+---------+------+------+-------------+
 1 row in set (0.00 sec)

 But when I run the query:

 mysql> SELECT id, province, latitude, longitude, AsText(coordinates), 
 s_ts_r_m, quartersection FROM qs WHERE 
 MBRContains(GeomFromText('POLYGON((51.62582589 -114.82248918,51.65126254 
 -114.82248918,51.65126254 -114.78150333,51.62582589 
 -114.78150333,51.62582589 -114.82248918))'), coordinates)
     -> ;
 +--------+----------+-------------+---------------+-----------------------------------+--------------+----------------+
 | id     | province | latitude    | longitude     | AsText(coordinates)               | s_ts_r_m     | quartersection |
 +--------+----------+-------------+---------------+-----------------------------------+--------------+----------------+
 | 444543 | AB       | 51.63495228 | -114.79282412 | POINT(51.63495228 -114.79282412)  | 04-031-06 W5 | N4             |
 | 444564 | AB       | 51.64941120 | -114.79283278 | POINT(51.6494112 -114.79283278)   | 09-031-06 W5 | N4             |
 | 444548 | AB       | 51.63497789 | -114.81645649 | POINT(51.63497789 -114.81645649)  | 05-031-06 W5 | N4             |
 | 444561 | AB       | 51.64943119 | -114.81643801 | POINT(51.64943119 -114.81643801)  | 08-031-06 W5 | N4             |
 | 444547 | AB       | 51.62775680 | -114.80475858 | POINT(51.6277568 -114.80475858)   | 05-031-06 W5 | E4             |
 | 444549 | AB       | 51.63498028 | -114.80479925 | POINT(51.63498028 -114.80479925)  | 05-031-06 W5 | NE             |
 | 444560 | AB       | 51.64220442 | -114.80478262 | POINT(51.64220442 -114.80478262)  | 08-031-06 W5 | E4             |
 | 444562 | AB       | 51.64942854 | -114.80476596 | POINT(51.64942854 -114.80476596)  | 08-031-06 W5 | NE             |
 +--------+----------+-------------+---------------+-----------------------------------+--------------+----------------+
 8 rows in set (3.87 sec)

 So, there are ~2.6 million rows in the table, and coordinates is 
 spatially-indexed. Yet the query requires nearly 4 seconds. What am I doing 
 wrong?

 ...REne




RE: Weeks

2009-12-28 Thread Gavin Towey
See:
http://gtowey.blogspot.com/2009/04/how-to-select-this-wednesday-or-other.html

just calculate the two dates, and use WHERE order_date BETWEEN (calculated 
start date) AND (calculated end date)

This avoids applying functions to the actual column, since doing so prevents 
MySQL from using indexes to find your query results.
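The two boundary dates can be calculated in application code as well; a minimal Python sketch (assuming, per the question, that weeks run Sunday through Saturday):

```python
from datetime import date, timedelta

def week_range(d):
    """Return (sunday, saturday) of the week containing date d."""
    # date.weekday(): Monday=0 ... Sunday=6, so days elapsed since Sunday:
    days_since_sunday = (d.weekday() + 1) % 7
    start = d - timedelta(days=days_since_sunday)
    return start, start + timedelta(days=6)

start, end = week_range(date(2009, 12, 28))
print(start, end)  # 2009-12-27 2010-01-02, matching "Dec 27 - Jan 2"
```

The two values can then be interpolated into `WHERE order_date BETWEEN ... AND ...`, leaving the column itself untouched so its index stays usable.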

Regards,
Gavin Towey

-Original Message-
From: ML [mailto:mailingli...@mailnewsrss.com]
Sent: Monday, December 28, 2009 4:15 PM
To: mysql@lists.mysql.com
Subject: Weeks

Hi All,

trying to write some SQL that will give me records for the CURRENT WEEK.

Example, starting on a Sunday and going through Saturday.
This week it would be Dec 27 - Jan 2.

I am doing this so I can write a query that will show orders that are placed 
during the current week.

Here is what I have, but this is showing from today for the next seven days.

SELECT * FROM orders WHERE WEEK(NOW(), 7) = WEEK(orders.order_date, 7)
 AND DATEDIFF(NOW(),orders.order_date) < 7;

Would anyone have any advice?

-Jason








RE: Spatial extensions

2009-12-21 Thread Gavin Towey
How did you populate the point column? Should be something like:

UPDATE places SET point = GeomFromText(CONCAT('Point(', longitude, ' ', latitude, ')'));

You also should have a spatial index:

ALTER TABLE places MODIFY coordinates point NOT NULL, add spatial index 
(coordinates);

You can check the data with the ASTEXT() function:

SELECT longitude, latitude, ASTEXT(coordinates) FROM places;


Hope this helps
-Gavin Towey

From: René Fournier [mailto:m...@renefournier.com]
Sent: Saturday, December 19, 2009 12:42 AM
To: Gavin Towey
Cc: mysql
Subject: Re: Spatial extensions

Thanks Gavin.

I've got part your query working... sort of. Something I can't figure out is 
how to use MBRContains on a table with a POINT column. For example, this works:

mysql> select id, astext(coordinates) FROM places where 
MBRContains(GeomFromText('POLYGON((48.6 -116.4,53.4 -116.4,53.4 -111.6,48.6 
-111.6,48.6 -116.4))'), GeomFromText('Point(49 -114)')) limit 10;
+----+----------------------------------+
| id | astext(coordinates)              |
+----+----------------------------------+
|  1 | POINT(49.00701238 -110.00507933) |
|  2 | POINT(49.01415809 -110.01615511) |
|  3 | POINT(49.01424023 -110.00508075) |
|  4 | POINT(48.99978158 -110.01617366) |
|  5 | POINT(48.99978996 -110.00507794) |
|  6 | POINT(49.00683419 -110.02751996) |
|  7 | POINT(49.01402057 -110.03861578) |
|  8 | POINT(49.01407281 -110.02750442) |
|  9 | POINT(48.99974667 -110.0386263)  |
| 10 | POINT(48.9997718 -110.0275421)   |
+----+----------------------------------+
10 rows in set (0.00 sec)
But when I try to use the table's POINT column, nothing is returned:
mysql> select id, astext(coordinates) FROM places where 
MBRContains(GeomFromText('POLYGON((48.6 -116.4,53.4 -116.4,53.4 -111.6,48.6 
-111.6,48.6 -116.4))'), (coordinates)) limit 10;
Empty set (0.00 sec)
What am I missing? For clarity, here's the table schema:

CREATE TABLE `places` (
`id` mediumint(8) unsigned NOT NULL AUTO_INCREMENT,
`latitude` decimal(10,8) NOT NULL,
`longitude` decimal(12,8) NOT NULL,
`coordinates` point NOT NULL,
PRIMARY KEY (`id`),
KEY `latlng` (`latitude`,`longitude`),
KEY `coord` (`coordinates`(25))
) ENGINE=MyISAM AUTO_INCREMENT=50 DEFAULT CHARSET=latin1 
COLLATE=latin1_general_ci;

On 2009-12-18, at 2:21 AM, Gavin Towey wrote:


Not only is it 5.1, but there's a special branch that has improved GIS 
functions not found in the regular MySQL.  I'm not sure if/when they're 
planning on rolling them back into mysql:

http://downloads.mysql.com/forge/mysql-5.1.35-gis/

If it's not possible to use that version, then you can still implement a 
Distance function yourself as a stored procedure or UDF.  Just google for 
mysql+haversine or something similar.

The important part though is the MBRContains, which does an efficient box cull 
and uses the spatial index.  Oops, I forgot to change a couple occurrences of 
"line_segment" to "coordinates" -- "line_segment" was just the column name I was 
using in my original query.

Regards,
Gavin Towey

-Original Message-
From: René Fournier [mailto:m...@renefournier.com]
Sent: Thursday, December 17, 2009 8:54 AM
To: Gavin Towey
Cc: mysql
Subject: Re: Spatial extensions

Awesome, this is what I was trying to find, as you succinctly wrote it. I 
*really* appreciate getting pointed in the right direction, since I haven't 
found a lot of MySQL's GIS tutorials directed at what I'm trying to do.

Still, a couple questions, the Distance() function you included, that must 
require 5.1 or higher right? 5.0.88  on my box throws an error:

   Function places.Distance does not exist

Also, where does line_segment come from in the below query?
Thanks.

...Rene

On 2009-12-17, at 8:45 AM, Gavin Towey wrote:


Yes, spatial indexes are very fast:

Query would be something like:

SET @center = GeomFromText('POINT(37.372241 -122.021671)');

SET @radius = 0.005;

SET @bbox = GeomFromText(CONCAT('POLYGON((',
X(@center) - @radius, ' ', Y(@center) - @radius, ',',
X(@center) + @radius, ' ', Y(@center) - @radius, ',',
X(@center) + @radius, ' ', Y(@center) + @radius, ',',
X(@center) - @radius, ' ', Y(@center) + @radius, ',',
X(@center) - @radius, ' ', Y(@center) - @radius, '))')
);

select id, astext(coordinates), Distance(@center,line_segment) as dist
FROM places where MBRContains(@bbox, line_segment) order by dist limit 10;

Regards,
Gavin Towey


-Original Message-
From: René Fournier [mailto:m...@renefournier.com]
Sent: Wednesday, December 16, 2009 4:32 PM
To: mysql
Subject: Spatial extensions

I have table with 2 million rows of geographic points (latitude, longitude).
Given a location -- say, 52º, -113.9º -- what's the fastest way to query the 10 
closest points (records) from that table? Currently, I'm using a simple 
two-column index to speed up queries:

CREATE TABLE `places` (
`id` mediumint(8) unsigned NOT NULL AUTO_INCREMENT,
`latitude` decimal(10,8) NOT NULL,
`longitude` decimal(12,8) NOT NULL,
PRIMARY KEY (`id`),
KEY `latlng` (`latitude`,`longitude

RE: Spatial extensions

2009-12-17 Thread Gavin Towey
Not only is it 5.1, but there's a special branch that has improved GIS 
functions not found in the regular MySQL.  I'm not sure if/when they're 
planning on rolling them back into mysql:

http://downloads.mysql.com/forge/mysql-5.1.35-gis/

If it's not possible to use that version, then you can still implement a 
Distance function yourself as a stored procedure or UDF.  Just google for 
mysql+haversine or something similar.
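For reference, this is what such a haversine Distance UDF or stored procedure would compute. A plain-Python sketch (assumption: a mean Earth radius of 6371 km; the exact constant varies slightly by source):

```python
from math import radians, sin, cos, asin, sqrt

def haversine_km(lat1, lon1, lat2, lon2):
    """Great-circle distance in kilometres between two (lat, lon) points."""
    rlat1, rlon1, rlat2, rlon2 = map(radians, (lat1, lon1, lat2, lon2))
    dlat, dlon = rlat2 - rlat1, rlon2 - rlon1
    a = sin(dlat / 2) ** 2 + cos(rlat1) * cos(rlat2) * sin(dlon / 2) ** 2
    return 2 * 6371.0 * asin(sqrt(a))

# One degree of longitude at the equator is roughly 111 km:
print(round(haversine_km(0, 0, 0, 1), 1))  # 111.2
```

In the box-cull approach, this exact distance is only computed for the handful of rows that survive MBRContains, which is why it stays cheap even on large tables.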

The important part though is the MBRContains, which does an efficient box cull 
and uses the spatial index.  Oops, I forgot to change a couple occurrences of 
"line_segment" to "coordinates" -- "line_segment" was just the column name I was 
using in my original query.

Regards,
Gavin Towey

-Original Message-
From: René Fournier [mailto:m...@renefournier.com]
Sent: Thursday, December 17, 2009 8:54 AM
To: Gavin Towey
Cc: mysql
Subject: Re: Spatial extensions

Awesome, this is what I was trying to find, as you succinctly wrote it. I 
*really* appreciate getting pointed in the right direction, since I haven't 
found a lot of MySQL's GIS tutorials directed at what I'm trying to do.

Still, a couple questions, the Distance() function you included, that must 
require 5.1 or higher right? 5.0.88  on my box throws an error:

Function places.Distance does not exist

Also, where does line_segment come from in the below query?
Thanks.

...Rene

On 2009-12-17, at 8:45 AM, Gavin Towey wrote:

 Yes, spatial indexes are very fast:

 Query would be something like:

 SET @center = GeomFromText('POINT(37.372241 -122.021671)');

 SET @radius = 0.005;

 SET @bbox = GeomFromText(CONCAT('POLYGON((',
  X(@center) - @radius, ' ', Y(@center) - @radius, ',',
  X(@center) + @radius, ' ', Y(@center) - @radius, ',',
  X(@center) + @radius, ' ', Y(@center) + @radius, ',',
  X(@center) - @radius, ' ', Y(@center) + @radius, ',',
  X(@center) - @radius, ' ', Y(@center) - @radius, '))')
  );

 select id, astext(coordinates), Distance(@center,line_segment) as dist
 FROM places where MBRContains(@bbox, line_segment) order by dist limit 10;

 Regards,
 Gavin Towey


 -Original Message-
 From: René Fournier [mailto:m...@renefournier.com]
 Sent: Wednesday, December 16, 2009 4:32 PM
 To: mysql
 Subject: Spatial extensions

 I have table with 2 million rows of geographic points (latitude, longitude).
 Given a location -- say, 52º, -113.9º -- what's the fastest way to query the 
 10 closest points (records) from that table? Currently, I'm using a simple 
 two-column index to speed up queries:

 CREATE TABLE `places` (
 `id` mediumint(8) unsigned NOT NULL AUTO_INCREMENT,
 `latitude` decimal(10,8) NOT NULL,
 `longitude` decimal(12,8) NOT NULL,
 PRIMARY KEY (`id`),
 KEY `latlng` (`latitude`,`longitude`)
 ) ENGINE=MyISAM AUTO_INCREMENT=50 DEFAULT CHARSET=latin1 
 COLLATE=latin1_general_ci;

 My current query is fairly quick:

 SELECT SQL_NO_CACHE * FROM places WHERE latitude BETWEEN 51.98228037384 AND 
 52.033153677 AND longitude BETWEEN -113.94770681881 AND -113.86685484296;

 But I wonder a couple things:

 1. Would MySQL's [seemingly anemic] spatial extensions would speed things up 
 if I added a column of type POINT (and a corresponding spatial INDEX)?

 CREATE TABLE `places` (
 `id` mediumint(8) unsigned NOT NULL AUTO_INCREMENT,
 `latitude` decimal(10,8) NOT NULL,
 `longitude` decimal(12,8) NOT NULL,
 `coordinates` point NOT NULL,
 PRIMARY KEY (`id`),
 KEY `latlng` (`latitude`,`longitude`),
 KEY `coord` (`coordinates`(25))
 ) ENGINE=MyISAM AUTO_INCREMENT=50 DEFAULT CHARSET=latin1 
 COLLATE=latin1_general_ci;

 2. How would I write the query?

 ...Rene









RE: Cannot created stored procedure (Using example from mysql manual) -- mysql 5.1.37 -- Ubuntu 9.10

2009-12-16 Thread Gavin Towey
You need to use

DELIMITER //

Or some other symbol besides ; to change the client's end-of-statement delimiter.  
Otherwise the client ends the statement at the first ; inside the procedure body, 
before the CREATE PROCEDURE statement is complete.

This is described in the manual on that same page.

Regards
Gavin Towey

-Original Message-
From: Walton Hoops [mailto:wal...@vyper.hopto.org]
Sent: Wednesday, December 16, 2009 10:46 AM
To: mysql@lists.mysql.com
Subject: Cannot created stored procedure (Using example from mysql manual) -- 
mysql 5.1.37 -- Ubuntu 9.10

Hi all.



I am running into a very frustrating problem trying to create a stored
procedure.

I had originally assumed I was using bad syntax, but even examples copied
and pasted

directly from the manual are giving the same error.





mysql> CREATE DEFINER = 'walton'@'localhost' PROCEDURE account_count()

    -> BEGIN

    -> SELECT 'Number of accounts:', COUNT(*) FROM mysql.user;

ERROR 1064 (42000): You have an error in your SQL syntax; check the manual
that corresponds to your MySQL server version for the right syntax to use
near '' at line 3

mysql>



This example can be found at:

http://dev.mysql.com/doc/refman/5.1/en/create-procedure.html

Google has failed me on this one.



Can anyone advise me as to what I need to do to troubleshoot this?  Also if
it is in error in the documentation,

how would I go about notifying someone so it can be corrected?



Any help would be greatly appreciated.






RE: Importing large databases faster

2009-12-16 Thread Gavin Towey
There are scripts out there such as the Maatkit mk-parallel-dump/restore that 
can speed up this process by running in parallel.

However if you're doing this every week on that large of a dataset, I'd just 
use filesystem snapshots.  Your backup/restore would then only take as long 
as it takes for you to scp the database from one machine to another.

Regards,
Gavin Towey


-Original Message-
From: Madison Kelly [mailto:li...@alteeve.com]
Sent: Wednesday, December 16, 2009 12:56 PM
To: mysql@lists.mysql.com
Subject: Importing large databases faster

Hi all,

I've got a fairly large set of databases I'm backing up each Friday. The
dump takes about 12.5h to finish, generating a ~172 GB file. When I try
to load it though, *after* manually dumping the old databases, it takes
1.5~2 days to load the same databases. I am guessing this is, at least
in part, due to indexing.

My question is; Given an empty target DB and a dump file generated via:

ssh r...@server mysqldump --all-databases -psecret > /path/to/backup.sql

How can I go about efficiently loading it into a new database?
Specifically, can I disable triggers, indexes and what not until after
load finishes? I can only imagine that a single "ok, go create your
indexes now" at the end would be faster. Perhaps some way to hold off
commits from happening as often? The target server has 32Gb of RAM, so I
suspect I should be able to hold things in memory and commit to disk
relatively rarely.

I am currently loading via this command:

mysql -psecret < /path/to/backup.sql

The source and destination MySQL versions are:

Source:
mysql  Ver 14.13 Distrib 5.1.19-beta, for unknown-linux-gnu (x86_64)
using readline 5.0

Dest:
mysql  Ver 14.12 Distrib 5.0.77, for redhat-linux-gnu (x86_64) using
readline 5.1

The reason for the discrepancy is that the old server was setup from
source on CentOS 4.x by a previous tech and the destination server is
the stock version from CentOS 5.x. The source server will be phased out
soon, so no real attempt at maintaining matching versions was done.

Thanks!

Madi





RE: Spatial extensions

2009-12-16 Thread Gavin Towey
Yes, spatial indexes are very fast:

Query would be something like:

SET @center = GeomFromText('POINT(37.372241 -122.021671)');

SET @radius = 0.005;

SET @bbox = GeomFromText(CONCAT('POLYGON((',
  X(@center) - @radius, ' ', Y(@center) - @radius, ',',
  X(@center) + @radius, ' ', Y(@center) - @radius, ',',
  X(@center) + @radius, ' ', Y(@center) + @radius, ',',
  X(@center) - @radius, ' ', Y(@center) + @radius, ',',
  X(@center) - @radius, ' ', Y(@center) - @radius, '))')
  );

select id, astext(coordinates), Distance(@center,line_segment) as dist
FROM places where MBRContains(@bbox, line_segment) order by dist limit 10;
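The bounding-box POLYGON can equally be assembled in application code before being sent to MySQL. A sketch in Python (assuming, as in the SQL above, that the radius is in the same decimal-degree units as the coordinates):

```python
def bbox_polygon_wkt(x, y, radius):
    """WKT POLYGON for the axis-aligned box of half-width `radius` around (x, y)."""
    corners = [
        (x - radius, y - radius),
        (x + radius, y - radius),
        (x + radius, y + radius),
        (x - radius, y + radius),
        (x - radius, y - radius),  # WKT rings must close on the first point
    ]
    return "POLYGON((" + ",".join(f"{cx} {cy}" for cx, cy in corners) + "))"

wkt = bbox_polygon_wkt(37.372241, -122.021671, 0.005)
print(wkt)
```

The resulting string is what GeomFromText() receives; building it client-side just replaces the CONCAT/X()/Y() gymnastics with ordinary string formatting.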

Regards,
Gavin Towey


-Original Message-
From: René Fournier [mailto:m...@renefournier.com]
Sent: Wednesday, December 16, 2009 4:32 PM
To: mysql
Subject: Spatial extensions

I have table with 2 million rows of geographic points (latitude, longitude).
Given a location -- say, 52º, -113.9º -- what's the fastest way to query the 10 
closest points (records) from that table? Currently, I'm using a simple 
two-column index to speed up queries:

CREATE TABLE `places` (
 `id` mediumint(8) unsigned NOT NULL AUTO_INCREMENT,
 `latitude` decimal(10,8) NOT NULL,
 `longitude` decimal(12,8) NOT NULL,
 PRIMARY KEY (`id`),
 KEY `latlng` (`latitude`,`longitude`)
) ENGINE=MyISAM AUTO_INCREMENT=50 DEFAULT CHARSET=latin1 
COLLATE=latin1_general_ci;

My current query is fairly quick:

SELECT SQL_NO_CACHE * FROM places WHERE latitude BETWEEN 51.98228037384 AND 
52.033153677 AND longitude BETWEEN -113.94770681881 AND -113.86685484296;

But I wonder a couple things:

1. Would MySQL's [seemingly anemic] spatial extensions would speed things up if 
I added a column of type POINT (and a corresponding spatial INDEX)?

CREATE TABLE `places` (
 `id` mediumint(8) unsigned NOT NULL AUTO_INCREMENT,
 `latitude` decimal(10,8) NOT NULL,
 `longitude` decimal(12,8) NOT NULL,
 `coordinates` point NOT NULL,
 PRIMARY KEY (`id`),
 KEY `latlng` (`latitude`,`longitude`),
 KEY `coord` (`coordinates`(25))
) ENGINE=MyISAM AUTO_INCREMENT=50 DEFAULT CHARSET=latin1 
COLLATE=latin1_general_ci;

2. How would I write the query?

...Rene








RE: Importing large databases faster

2009-12-16 Thread Gavin Towey
I don't think so, I'm pretty sure you have to use mk-parallel-dump to get the 
data in a format it wants.  The docs are online though.

Regards,
Gavin Towey

-Original Message-
From: Madison Kelly [mailto:li...@alteeve.com]
Sent: Wednesday, December 16, 2009 4:35 PM
To: Gavin Towey
Cc: mysql@lists.mysql.com
Subject: Re: Importing large databases faster

Gavin Towey wrote:
 There are scripts out there such as the Maatkit mk-parallel-dump/restore that 
 can speed up this process by running in parallel.

 However if you're doing this every week on that large of a dataset, I'd just 
 use filesystem snapshots.  Your backup/restore would then only take as long 
 as it takes for you to scp the database from one machine to another.

 Regards,
 Gavin Towey

Thanks! Will the Maatkit script work on a simple --all-databases dump?

As for the copy, it's a temporary thing. This is just being done weekly
while we test out the new server. Once it's live, the new server will
indeed be backed up via LVM snapshots. :)

Madi





RE: Count records in join

2009-12-15 Thread Gavin Towey
Hi Miguel,

You'll need to use LEFT JOIN; that will show all records from the first table, 
with a row of all-NULL values for the second table where there is no match.  
Then you find those unmatched rows in your WHERE clause.
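A runnable sketch of this LEFT JOIN pattern, using Python's sqlite3 as a stand-in for MySQL and made-up sample data (the SQL itself works the same in MySQL):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
cur = conn.cursor()
cur.executescript("""
    CREATE TABLE progs  (id_prog INTEGER PRIMARY KEY, name TEXT);
    CREATE TABLE events (id INTEGER PRIMARY KEY, id_prog INTEGER, name TEXT);
    INSERT INTO progs  VALUES (1, 'festival'), (2, 'workshop'), (3, 'expo');
    INSERT INTO events VALUES (1, 1, 'opening'), (2, 1, 'closing'), (3, 3, 'demo');
""")

# Event count per prog: COUNT(e.id) ignores the NULLs produced for progs
# with no matching events, so empty progs show up with a count of 0.
cur.execute("""
    SELECT p.id_prog, p.name, COUNT(e.id) AS n_events
    FROM progs p
    LEFT JOIN events e ON e.id_prog = p.id_prog
    GROUP BY p.id_prog, p.name
""")
counts = cur.fetchall()
print(counts)

# Just the empty progs: the unmatched rows are the ones whose joined side is NULL.
cur.execute("""
    SELECT p.id_prog FROM progs p
    LEFT JOIN events e ON e.id_prog = p.id_prog
    WHERE e.id IS NULL
""")
empty = cur.fetchall()
print(empty)
```

An INNER JOIN would silently drop prog 2 here, which is exactly why LEFT JOIN is required for the "find the empty ones" question.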

Regards,
Gavin Towey

-Original Message-
From: Miguel Vaz [mailto:pagong...@gmail.com]
Sent: Tuesday, December 15, 2009 10:43 AM
To: mysql@lists.mysql.com
Subject: Count records in join

Hi,

I am stuck with a supposedly simple query:

- i have two tables:

PROGS
id_prog
name

EVENTS
id
id_prog
name

How can i list all records from PROGS with a sum of how many events each
have? I want to find the progs that are empty.

I remember something about using NULL, but i cant remember. :-P

Thanks.

MV


--
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/mysql?unsub=arch...@jab.org



RE: Optimization suggestions

2009-12-14 Thread Gavin Towey
Id should probably be an auto-incrementing INT; if you still need a unique
text identifier, I would make that a separate field. My opinion isn't the only
way, though; there is much debate on natural vs. surrogate keys.

I would normalize folderid into a lookup table, and make folderid an INT
value.

Threadid is another field that would probably be better as an INT.

As for your indexes, they depend completely on what type of queries you're 
going to be running.  Once you know that, then you can test them using sample 
data and EXPLAIN.

http://dev.mysql.com/doc/refman/5.0/en/explain.html
http://dev.mysql.com/doc/refman/5.0/en/mysql-indexes.html

About normalization:
http://dev.mysql.com/tech-resources/articles/intro-to-normalization.html
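A minimal sketch of the folderid lookup suggested above; the table and column names are illustrative, not from the original schema:

```sql
-- Lookup table: one row per folder, keyed by a small surrogate INT
CREATE TABLE folders (
  folderid INT UNSIGNED NOT NULL AUTO_INCREMENT,
  name     VARCHAR(100) NOT NULL,
  PRIMARY KEY (folderid),
  UNIQUE KEY (name)
) ENGINE=MyISAM;

-- messages2009.folderid then becomes INT UNSIGNED and joins as:
SELECT m.id, f.name AS folder
FROM messages2009 m
JOIN folders f ON f.folderid = m.folderid;
```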


Regards,
Gavin Towey


-Original Message-
From: Sudhir N [mailto:sudhir_nima...@yahoo.com]
Sent: Monday, December 14, 2009 10:31 AM
To: Mysql
Subject: Optimization suggestions

I have following table structure, I have to use merge storage engine.
Please have a look, and provide feedback if theres some thing wrong or if 
there's space for optimization.


/*Table structure for table `messages2009` */

CREATE TABLE `messages2009` (
  `id` varchar(36) NOT NULL default '',
  `folderid` varchar(36) NOT NULL default '',
  `fromid` int(11) NOT NULL default '0',
  `fromtype` varchar(10) NOT NULL default '',
  `toid` int(11) NOT NULL default '0',
  `totype` varchar(10) NOT NULL default '',
  `subject` varchar(255) default NULL,
  `body` text,
  `readbyrecipient` tinyint(1) NOT NULL default '0',
  `deletedbyauthor` tinyint(1) NOT NULL default '0',
  `deletedbyrecipient` tinyint(1) NOT NULL default '0',
  `threadid` varchar(36) NOT NULL default '',
  `senttime` timestamp NOT NULL default CURRENT_TIMESTAMP,
  PRIMARY KEY  (`id`),
  KEY `folderid` (`folderid`),
  KEY `threadid` (`threadid`),
  KEY `inboxfolderindex` (`folderid`,`toid`,`totype`),
  KEY `sentitemsindex` (`fromid`,`fromtype`)
) ENGINE=MyISAM DEFAULT CHARSET=latin1;

/*Table structure for table `messages` */
/*Merge table definition that covers all message tables*/

CREATE TABLE `messages` (
  `id` varchar(36) NOT NULL default '',
  `folderid` varchar(36) NOT NULL default '',
  `fromid` int(11) NOT NULL default '0',
  `fromtype` varchar(10) NOT NULL default '',
  `toid` int(11) NOT NULL default '0',
  `totype` varchar(10) NOT NULL default '',
  `subject` varchar(255) default NULL,
  `body` text,
  `readbyrecipient` tinyint(1) NOT NULL default '0',
  `deletedbyauthor` tinyint(1) NOT NULL default '0',
  `deletedbyrecipient` tinyint(1) NOT NULL default '0',
  `threadid` varchar(36) NOT NULL default '',
  `senttime` timestamp NOT NULL default CURRENT_TIMESTAMP,
  PRIMARY KEY  (`id`),
  KEY `folderid` (`folderid`),
  KEY `threadid` (`threadid`),
  KEY `inboxfolderindex` (`folderid`,`toid`,`totype`),
  KEY `sentitemsindex` (`fromid`,`fromtype`)
) ENGINE=MRG_MyISAM DEFAULT CHARSET=latin1 INSERT_METHOD=LAST 
UNION=(`messages2009`);






Sudhir Nimavat
Senior software engineer.
Quick start global PVT LTD.
Baroda - 390007
Gujarat, India

Personally I'm always ready to learn, although I do not always like being taught





RE: errno: 13

2009-12-11 Thread Gavin Towey
The MySQL daemon runs as the 'mysql' user, so give it ownership of the data
directory:

chown -R mysql:mysql /storage/mysql/data

Regards,
Gavin Towey

-Original Message-
From: Carl [mailto:c...@etrak-plus.com]
Sent: Friday, December 11, 2009 11:55 AM
To: mysql@lists.mysql.com
Subject: errno: 13

Fresh install of 5.1.41 on a brand new (Slackware 13 - 64 bit)  machine.  
Installed from tar.  Directory structure is: basedir=/usr/local/mysql and 
datadir=/storage/mysql/data.  I am currently running as root.  The permissions 
on the directories in /storage/mysql/data are 766 (I have double and triple 
checked this.)  I have created the mysql data tables by running 
mysql_install_db... it seemed to complete without error:

r...@mysql3:/usr/local/mysql/scripts# ./mysql_install_db 
--datadir=/storage/mysql/data --basedir=/usr/local/mysql -uroot
Installing MySQL system tables...
OK
Filling help tables...
OK

To start mysqld at boot time you have to copy
support-files/mysql.server to the right place for your system

PLEASE REMEMBER TO SET A PASSWORD FOR THE MySQL root USER !
To do so, start the server, then issue the following commands:

/usr/local/mysql/bin/mysqladmin -u root password 'new-password'
/usr/local/mysql/bin/mysqladmin -u root -h 10.10.10.31 password 'new-password'

Alternatively you can run:
/usr/local/mysql/bin/mysql_secure_installation

which will also give you the option of removing the test
databases and anonymous user created by default.  This is
strongly recommended for production servers.

See the manual for more instructions.

You can start the MySQL daemon with:
cd /usr/local/mysql ; /usr/local/mysql/bin/mysqld_safe &

You can test the MySQL daemon with mysql-test-run.pl
cd /usr/local/mysql/mysql-test ; perl mysql-test-run.pl

Please report any problems with the /usr/local/mysql/scripts/mysqlbug script!

The latest information about MySQL is available at http://www.mysql.com/
Support MySQL by buying support/licenses from http://shop.mysql.com/

I then ran

/usr/local/mysql/bin/mysqld_safe -uroot 

which produced the following error report:

091211 13:19:18 mysqld_safe Starting mysqld daemon with databases from 
/storage/mysql/data
091211 13:19:18 [Warning] Ignoring user change to 'root' because the user was 
set to 'mysql' earlier on the command line

091211 13:19:18 [Note] Plugin 'FEDERATED' is disabled.
091211 13:19:18 [Warning] /usr/local/mysql/bin/mysqld: ignoring option 
'--innodb-use-sys-malloc' due to invalid value 'ON'
^G/usr/local/mysql/bin/mysqld: Can't find file: './mysql/plugin.frm' (errno: 13)
091211 13:19:18 [ERROR] Can't open the mysql.plugin table. Please run 
mysql_upgrade to create it.
InnoDB: The InnoDB memory heap is disabled
InnoDB: Mutexes and rw_locks use GCC atomic builtins
091211 13:19:19  InnoDB: Operating system error number 13 in a file operation.
InnoDB: The error means mysqld does not have the access rights to
InnoDB: the directory.
InnoDB: File name /storage/mysql/data/ibdata1
InnoDB: File operation call: 'create'.
InnoDB: Cannot continue operation.
091211 13:19:19 mysqld_safe mysqld from pid file /storage/mysql/data/mysql3.pid 
ended
~

The relevant portions of the /storage/mysql/data/mysql directory are (for the 
first error):

-rw-rw 1 root root  0 2009-12-11 13:17 plugin.MYD
-rw-rw 1 root root   1024 2009-12-11 13:17 plugin.MYI
-rw-rw 1 root root   8586 2009-12-11 13:17 plugin.frm
-

It appears to me that mysqld is looking for the plugin.frm in the 
/usr/local/mysql/mysql directory which doesn't exist because my data directory 
is /storage/mysql/data.

The second error,

InnoDB: File name /storage/mysql/data/ibdata1
InnoDB: File operation call: 'create'

is probably the show stopper.  The relevant portions of my.cnf are:

# The MySQL server
[mysqld]
port= 3306
socket  = /var/run/mysql/mysql.sock
skip-locking
key_buffer = 384M
max_allowed_packet = 20M
max_sp_recursion_depth = 100
table_cache = 512
sort_buffer_size = 2M
read_buffer_size = 2M
myisam_sort_buffer_size = 64M
thread_cache = 8
query_cache_size = 32M
# Try number of CPU's*2 for thread_concurrency
thread_concurrency = 8
basedir=/usr/local/mysql
datadir=/storage/mysql/data
wait_timeout = 10800
max_connections = 600

and

# Uncomment the following if you are using InnoDB tables
innodb_data_home_dir = /storage/mysql/data
innodb_data_file_path = ibdata1:2000M;ibdata2:10M:autoextend
#innodb_log_group_home_dir = /var/lib/mysql/
#innodb_log_arch_dir = /var/lib/mysql/
ignore_builtin_innodb
plugin-load=innodb=ha_innodb.so;innodb_trx=ha_innodb.so;innodb_locks=ha_innodb.so;innodb_lock_waits=ha_innodb.so;innodb_cmp=ha_innodb.so;innodb_cmp_reset=ha_
innodb.so;innodb_cmpmem=ha_innodb.so;innodb_cmpmem_reset=ha_innodb.so
#

Note: ha_innodb.so is in the 'plugins' directory.

This error makes no sense to me.

Can anyone kick me in the right direction?

Thanks,

Carl





RE: Are you serious? mySQL 5.0 does NOT have a RENAME DATABASE?

2009-12-11 Thread Gavin Towey
Don't forget triggers, stored routines, views, database/table specific user 
permissions, and replication/binlog options!

Regards,
Gavin Towey

-Original Message-
From: Saravanan [mailto:suzuki_b...@yahoo.com]
Sent: Friday, December 11, 2009 2:02 PM
To: MySql; Michael Dykman
Subject: Re: Are you serious? mySQL 5.0 does NOT have a RENAME DATABASE?

If you have only MyISAM tables, you can rename the database's folder; that
works like a database rename. If you have InnoDB tables, you have to move the
tables one by one, because their details are stored in the InnoDB shared
tablespace, and moving the folder will not work.

Thanks,
Saravanan

--- On Fri, 12/11/09, Michael Dykman mdyk...@gmail.com wrote:

From: Michael Dykman mdyk...@gmail.com
Subject: Re: Are you serious? mySQL 5.0 does NOT have a RENAME DATABASE?
To: MySql mysql@lists.mysql.com
Date: Friday, December 11, 2009, 10:54 PM

No, not a loophole.  Just a plain-old management feature..  there is
nothing particularly hacky about it.. this is not trying to leverage
undocumented features: this has been a published part of the API for
at least a couple of years.

On the same file system, yes it should be pretty damned fast.
Depending on how your data is stored, it might not be quite as
simple as a unix 'mv' command.  If this is a production system, I
would recommend you do a dry run with a replicant/slave. No amount of
theorizing will tell as much as the experiment.

 - michael dykman

On Fri, Dec 11, 2009 at 4:40 PM, Daevid Vincent dae...@daevid.com wrote:
 Will this work in 5.0?

 If I'm reading this right, it seems like this is some kind of trick or
 loophole then right? If it works and solves my dilemma, I'm fine with that,
 but I'm just curious.

 How fast is this? I mean, if I have an 80GB database, is it like a real
 unix 'mv' command where it simply changing pointers or is it a full on
 copy/rm? (Assume same filesystem/directory)

 -Original Message-
 From: Michael Dykman [mailto:mdyk...@gmail.com]
 Sent: Friday, December 11, 2009 6:08 AM
 To: MySql
 Subject: Re: Are you serious? mySQL 5.0 does NOT have a
 RENAME DATABASE?

 If you want to move the database atomically,  a RENAME TABLE statement
 may have multiple clauses.

 RENAME TABLE
  olddb.foo to newdb.foo,
  olddb.bar to newdb.bar;

 Here,  I hot-swap a  new lookup table 'active.geo' into a live system
 confident that, at any given point, some version of this table always
 exists:

 RENAME TABLE
  active.geo to archive.geo,
  standby.geo to active.geo;

  - michael dykman


 On Fri, Dec 11, 2009 at 8:58 AM, Johan De Meersman
 vegiv...@tuxera.be wrote:
  On Fri, Dec 11, 2009 at 1:56 PM, Ken D'Ambrosio
 k...@jots.org wrote:
 
   rename table oldschema.table to newschema.table;
 
  Just to be 100% clear -- I assume you have to first create
 the destination
  database, and then do this for all the tables in the
 source database?
 
 
  Yep. Easily scriptable, though :-)
 



 --
  - michael dykman
  - mdyk...@gmail.com

 May you live every day of your life.
 Jonathan Swift

 Larry's First Law of Language Redesign: Everyone wants the colon.







--
 - michael dykman
 - mdyk...@gmail.com

May you live every day of your life.
Jonathan Swift

Larry's First Law of Language Redesign: Everyone wants the colon.
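The "easily scriptable" step mentioned above can even be done from SQL; a hedged sketch, with olddb/newdb as placeholder database names (run the SELECT, then execute its output after creating newdb):

```sql
-- Generate one RENAME TABLE statement per table in the old database:
SELECT CONCAT('RENAME TABLE olddb.', table_name,
              ' TO newdb.', table_name, ';')
FROM information_schema.tables
WHERE table_schema = 'olddb';
```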






RE: stored procedure and random table name - temp table, merge, prepared statement

2009-12-10 Thread Gavin Towey
Creating a temporary merge table works fine for me on 5.0.

Your table isn't InnoDB, is it?  That will fail with an error like the one
you're getting.
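For comparison, the prepared-statement fallback the poster wants to avoid can be confined to the few statements that truly need the dynamic name. A sketch using the poster's own placeholder names (@table_name, some_table), with the WHERE clause elided as in the original message:

```sql
-- Build only the dynamic-table query as a string; everything else
-- in the procedure can stay as ordinary static SQL.
SET @sql = CONCAT('INSERT INTO some_table (value) SELECT something FROM ',
                  @table_name);
PREPARE stmt FROM @sql;
EXECUTE stmt;
DEALLOCATE PREPARE stmt;
```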

Regards,
Gavin Towey

-Original Message-
From: Dante Lorenso [mailto:da...@lorenso.com]
Sent: Thursday, December 10, 2009 3:20 PM
To: mysql@lists.mysql.com
Subject: stored procedure and random table name - temp table, merge, prepared 
statement

All,

I have a stored procedure that I'm writing where I need to run a lot of
queries against a particular table.  The name of the table will be a
parameter to the stored procedure ... example:

CALL normalize_data('name_of_table_here');

Since I want to run queries against this table, I don't want to have to use
prepared statements for all the queries because treating my queries as
strings gets ugly.  Ideally I want to use the table name as a variable in
the stored procedure, but as a hack around that, I thought about trying this
trick instead: give the table name an alias.

-- remove our temporary table if it already exists
DROP TABLE IF EXISTS dante;
--
-- clone the table structure
CREATE TEMPORARY TABLE dante LIKE name_of_table_here;
--
-- change the temporary table to a merge table which references the named
table
ALTER TABLE dante ENGINE=MERGE UNION(name_of_table_here);

Once these 3 statements were run, the merge table would essentially just
be a view on the underlying table and all my following queries could
reference the dante table and not the strangely named random table.

Note, that queries above that use name_of_table_here would need to be
prepared and executed using the string concat approach.

The problem I am having is that this strategy is not working.  After running
the statements above, I check my new dante table and it doesn't work:

DESC dante;
Error Code : 1168
Unable to open underlying table which is differently defined or of
non-MyISAM type or doesn't exist

So, how can I accomplish what I am trying to do?  I just want to alias a
random table to a fixed name (preferably as a temporary table name so that
it won't conflict with other connections running similar code
simultaneously) so that I can avoid having to use prepared statements
through my whole stored procedure.  I may potentially perform 20-30 queries
to the table which is passed in and want to keep this code looking clean.

I could avoid this problem altogether if I can assign an alias to a table:

ALIAS dante TO name_of_table_here;

or use a variable table name in a query inside a stored procedure:

SET @table_name = 'name_of_table_here';

INSERT INTO some_table (value)
SELECT something
FROM @table_name
WHERE ...;

Am using MySQL 5.1.36.  Any pointers?

-- Dante



RE: login problem from django script, using python2.5/MySQLdb/connections.py

2009-12-09 Thread Gavin Towey
Access denied with error 1044 means the account is missing privileges on that
database (a wrong username/password combination would produce error 1045
instead).  Test your credentials using the mysql CLI.  You can log in as root
to grant the missing privileges, or supply the correct user/pass from your
script.
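A hedged sketch of the privilege check and fix, run as root; the privilege list and password are placeholders to adjust:

```sql
-- What can this account currently do?
SHOW GRANTS FOR 'django_editor'@'%';

-- Grant access to the database named in the error message
-- (narrow ALL PRIVILEGES to the minimum the app needs):
GRANT ALL PRIVILEGES ON django_server.* TO 'django_editor'@'%'
  IDENTIFIED BY 'placeholder-password';
FLUSH PRIVILEGES;
```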

Regards,
Gavin Towey

-Original Message-
From: John Griessen [mailto:j...@industromatic.com]
Sent: Wednesday, December 09, 2009 10:16 AM
To: mysql@lists.mysql.com
Subject: login problem from django script, using 
python2.5/MySQLdb/connections.py

If I can log in from a shell, what could stop a script from logging in?

I'm following a newbie tutorial for django, a web content mgt. system.

The following user and password are good if I use them fromthe same shell the 
script launches from.

Here's the error message from a django script using a python module about mysql:


File /usr/lib/pymodules/python2.5/MySQLdb/connections.py, line 170, in 
__init__
 super(Connection, self).__init__(*args, **kwargs2)
_mysql_exceptions.OperationalError: (1044, Access denied for user 
'django_editor'@'%' to database 'django_server')



Any ideas?

thanks,

John






RE: Here's an Idea for Re-Syncing Master and Slave During Production Hours without Interrupting Users (Much)

2009-12-04 Thread Gavin Towey
I think he's trying to say that this method wouldn't work for innodb, unless 
you copied files from an LVM snapshot, or something similar.

I would say that it's very important to know why data is getting out of sync 
between your master and slave.  Fixing those root causes would eliminate the 
need for this.  There are cases where non-deterministic queries will produce 
different results, but that's what row based replication is supposed to solve =)
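Where that option applies, a minimal my.cnf fragment for enabling it; this assumes MySQL 5.1+, where row-based replication exists (the poster's version isn't stated):

```ini
# my.cnf fragment: log changes row-by-row instead of by statement,
# so non-deterministic statements replicate deterministically
[mysqld]
binlog_format = ROW
```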

There are ways to resync data that don't involve all this as well:  Maatkit has 
some tools that compare data between servers, and can fix them with queries.  
No stopping the slave or locking the master necessary.  I've used them in 
production with good results.

Regards,
Gavin Towey



-Original Message-
From: Robinson, Eric [mailto:eric.robin...@psmnv.com]
Sent: Friday, December 04, 2009 9:00 AM
To: Tom Worster; mysql@lists.mysql.com
Subject: RE: Here's an Idea for Re-Syncing Master and Slave During Production 
Hours without Interrupting Users (Much)

 (1) innodb?

It's an off-the-shelf application that uses MyISAM tables. It is
possible to convert to innodb, but I have not been sold on innodb in
terms of its  performance characteristics for this particular
application. Maybe I've been reading the wrong stuff. Do you have
general thoughts on the differences with respect to performance?

 (2) why delete slave logs when you can
 restart the slave with --skip-slave and
 then use CHANGE MASTER TO?

Well... I guess mainly because I didn't know about that option! I
thought I needed to fake out mysql on this, but it sounds like I can
just do 'flush tables with read lock;reset master;' on the master and
'change master to...;' on the slave. So cool. Thanks for the input!
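A sketch of that sequence, with the host and coordinates as placeholders; note that RESET MASTER deletes all existing binary logs, so the data must be copied to the slave while the read lock is still held:

```sql
-- On the master:
FLUSH TABLES WITH READ LOCK;
RESET MASTER;   -- binary log restarts at mysql-bin.000001, position 4
-- ... copy the data files to the slave while the lock is held ...
UNLOCK TABLES;

-- On the slave:
STOP SLAVE;
CHANGE MASTER TO
  MASTER_HOST     = 'master-host',        -- placeholder
  MASTER_LOG_FILE = 'mysql-bin.000001',
  MASTER_LOG_POS  = 4;
START SLAVE;
```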

--
Eric Robinson


Disclaimer - December 4, 2009
This email and any files transmitted with it are confidential and intended 
solely for Tom Worster,my...@lists.mysql.com. If you are not the named 
addressee you should not disseminate, distribute, copy or alter this email. Any 
views or opinions presented in this email are solely those of the author and 
might not represent those of . Warning: Although  has taken reasonable 
precautions to ensure no viruses are present in this email, the company cannot 
accept responsibility for any loss or damage arising from the use of this email 
or attachments.
This disclaimer was added by Policy Patrol: http://www.policypatrol.com/






RE: Here's an Idea for Re-Syncing Master and Slave During Production Hours without Interrupting Users (Much)

2009-12-04 Thread Gavin Towey
 I would never have any confidence that the replication is solid
 enough to use the slave server for backup purposes.

I agree completely there.  That's the other reason I like filesystem snapshots 
is that it allows you to take a backup from the master relatively painlessly.

-Original Message-
From: Robinson, Eric [mailto:eric.robin...@psmnv.com]
Sent: Friday, December 04, 2009 1:24 PM
To: Gavin Towey; Tom Worster; mysql@lists.mysql.com
Subject: RE: Here's an Idea for Re-Syncing Master and Slave During Production 
Hours without Interrupting Users (Much)

 I would say that it's very important to know why data
 is getting out of sync between your master and slave.

Ultimately, I agree. But since it's a canned application, getting to
that point might be hard, and once it is resolved, new issues might
arise. I would never have any confidence that the replication is solid
enough to use the slave server for backup purposes. (Which, by the way,
is the real reason I'm doing this. In the middle of the night, when
there are few users on the system, I want to backup the slave, but first
I want to make sure I have a 100% reliable copy of the data.)

 There are ways to resync data that don't involve all
 this as well:  Maatkit has some tools

I've looked with great interest at Maatkit, but their tools are replete
with warnings about dangers, bugs, and crashes. They certainly do not
inspire confidence.

--
Eric Robinson







RE: Creating Table Through Union

2009-11-22 Thread Gavin Towey
The form would be like:

CREATE TABLE products
SELECT b0basics, b0fieldValues, s0prescriptions,
s0prescriptions0doctors, s0prescriptions0patient, pics FROM table1
UNION
SELECT b0basics, b0fieldValues, s0prescriptions,
s0prescriptions0doctors, s0prescriptions0patient, pics FROM table2;

Regards,
Gavin Towey

-Original Message-
From: Victor Subervi [mailto:victorsube...@gmail.com]
Sent: Sunday, November 22, 2009 10:56 AM
To: mysql@lists.mysql.com
Subject: Creating Table Through Union

Hi;
I would like to create a table out of merging the fields in other,
previously created tables. I have the following syntax which doesn't work:

create table products union (b0basics, b0fieldValues, s0prescriptions,
s0prescriptions0doctors, s0prescriptions0patient, pics);

Please advise.
TIA,
Victor

The information contained in this transmission may contain privileged and 
confidential information. It is intended only for the use of the person(s) 
named above. If you are not the intended recipient, you are hereby notified 
that any review, dissemination, distribution or duplication of this 
communication is strictly prohibited. If you are not the intended recipient, 
please contact the sender by reply email and destroy all copies of the original 
message.




RE: Strange problem with mysqldump / automysqlbackup (ERROR 1300)

2009-11-20 Thread Gavin Towey
Have you tried dumping that table manually using mysqldump on the command line 
to confirm it's not an issue with automysqlbackup?

Regards,
Gavin Towey

-Original Message-
From: René Fournier [mailto:m...@renefournier.com]
Sent: Friday, November 20, 2009 8:31 AM
To: mysql
Subject: Strange problem with mysqldump / automysqlbackup (ERROR 1300)

I've been using automysqlbackup 2.5 for years on a particular database, and 
it's always performed great. Recently, however, I've been encountering
problems when trying to re-import one of its dumped SQL files. (Not sure if it
matters, but the database file in question is large and growing -- about 10GB. 
The other databases automysqlbackup backs up are fine.)

Basically on the import, MySQL fails and returns an error indicating a problem 
with the dump file:

mysql -u root -p < dump_file.sql  (~10GB)
Enter password:
ERROR 1300 (HY000) at line 426: Invalid utf8 character string: '?03422'

Sure enough, I look at the line in dump_file.sql, which should contain two 
unsigned ints, and two unsigned small ints:

[...],(32562206,1228?03422,1641,135),[...]

And yup, there's a question mark in the middle of the second unsigned int, for 
some strange reason. Not in any of the other rows in that statement. When I 
look at the existing database from which the dump file was made, that row is 
fine:

mysql> SELECT * FROM bandwidth WHERE id = 32562206;
+--++---+---+
| id   | time_sec   | device_id | bytes |
+--++---+---+
| 32562206 | 1228803422 |  1641 |   135 |
+--++---+---+
1 row in set (0.00 sec)


So... it appears that mysqldump and/or automysqlbackup is having a problem
dumping a true copy of the database.

Anyone else run into this sort of thing? Any suggestions? Thanks.

...Rene




RE: DELETE DATA FROM TABLE

2009-11-19 Thread Gavin Towey
Assuming you're using either myisam tables, or innodb with file-per-table 
option turned on, then dropping a whole partition at a time will allow you to 
reclaim disk space.
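A hedged example of the partition drop; the table and partition names are illustrative (partitioning implies MySQL 5.1+):

```sql
-- Drops the oldest partition and reclaims its space immediately;
-- internally this is like dropping a table, so it is far faster
-- than a DELETE over the same rows:
ALTER TABLE crm DROP PARTITION p20091001;
```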

If you're using innodb with a single tablespace currently, then unfortunately, 
you would have to export all your data, shut down mysql, change your my.cnf, 
delete the tablespace and ib_log files, then restart and re-import all your 
data.  If you need to do this, you should probably seek a bit more information 
from this list or other sources.
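A sketch of the partition-drop approach described above; the table name, partition names, and daily RANGE partitioning scheme are all assumptions for illustration:

```sql
-- Hypothetical example: the table is range-partitioned by day, and the
-- oldest day is dropped to reclaim its space immediately (internally this
-- behaves like a DROP TABLE on that partition's data).
ALTER TABLE crm_events DROP PARTITION p20091005;

-- A partition for the next day is added at the high end to keep the
-- 45-day rotation going.
ALTER TABLE crm_events ADD PARTITION
  (PARTITION p20091120 VALUES LESS THAN (TO_DAYS('2009-11-21')));
```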

Regards,
Gavin Towey

-Original Message-
From: Krishna Chandra Prajapati [mailto:prajapat...@gmail.com]
Sent: Thursday, November 19, 2009 12:13 AM
To: MySQL
Subject: DELETE DATA FROM TABLE

Hi Experts,

I have a crm table where 12 million records are inserted per day. We are running
report queries on this table and using partitioning features for faster
results. We have to maintain 45 days of data, which means 540 million records. As per
my calculation, 540 million records will use 1.8 TB of disk space. Total disk space
available is 2.3TB.

Deleting data doesn't free up the disk space. So, I was thinking of rotating
the table, but I don't have enough disk space for that.

Any idea how this task can be performed?

Any idea or suggestion is highly appreciated.

Thanks & Regards,
Krishna Ch. Prajapati





RE: DELETE DATA FROM TABLE

2009-11-19 Thread Gavin Towey
Hi Krishna,

Drop partition should be very quick - much faster than doing a DELETE on the 
same amount of data.  Internally, it will be the same as doing a drop table for 
that partition.

Regards,
Gavin Towey

From: Krishna Chandra Prajapati [mailto:prajapat...@gmail.com]
Sent: Thursday, November 19, 2009 1:15 AM
To: Gavin Towey
Cc: MySQL
Subject: Re: DELETE DATA FROM TABLE

Hi Gavin,

I am using innodb with file-per-table. I agree with you dropping a partition 
will reclaim disk space.
ALTER TABLE <table_name> DROP PARTITION <partition_name>;

But, my concern is that ALTER TABLE <table_name> DROP PARTITION <partition_name> 
on a very big table might take a lot of time. (Although, I haven't tested it.)

Thanks for the immediate response.

Thanks & Regards,
Krishna Ch. Prajapati





RE: Selecting data from multiple tables

2009-11-15 Thread Gavin Towey
These table names are extraordinarily confusing, especially since your schema 
is de-normalized.  One of these tables should have (user_id int unsigned not 
null auto_increment primary key, username varchar(100));  All the rest should 
be using user_id.

Anyway, to answer your first question:

select * from Table_1 left join Table_2 using (photo_uid) where
Table_1.username != 'dopey' and Table_2.username != 'dopey';

You need to move the condition on Table_2 into the join clause:

select * from Table_1 left join Table_2
on Table_1.photo_uid = Table_2.photo_uid and Table_2.username != 'dopey'
where Table_1.username != 'dopey';

Regards,
Gavin Towey

-Original Message-
From: Ashley M. Kirchner [mailto:kira...@gmail.com]
Sent: Sunday, November 15, 2009 4:38 AM
To: mysql@lists.mysql.com
Subject: Selecting data from multiple tables

Hi folks,

I'm trying to, possibly, do the impossible here.  I have to select data from
4 different tables to come up with the right information, and I'm having one
heck of a time trying to figure it out.  This is going to be a long email ...

Table_1:
+---+--+--+-+-+---+
| Field | Type | Null | Key | Default | Extra |
+---+--+--+-+-+---+
| photo_uid | int(7) unsigned zerofill | NO   | PRI | NULL|   |
| username  | varchar(100) | NO   | | NULL|   |
| votes | int(5)   | YES  | | 0   |   |
+---+--+--+-+-+---+

Table_2:
+---+--+--+-+-+---+
| Field | Type | Null | Key | Default | Extra |
+---+--+--+-+-+---+
| photo_uid | int(7) unsigned zerofill | NO   | UNI | NULL|   |
| username  | varchar(100) | NO   | PRI | NULL|   |
| vote  | int(2)   | NO   | | 0   |   |
| voted_on  | datetime | NO   | | NULL|   |
+---+--+--+-+-+---+

Table_3:
++-+--+-+-++
| Field  | Type| Null | Key | Default | Extra  |
++-+--+-+-++
| username   | varchar(100)| NO   | UNI | NULL||
| info   | varchar(100)| NO   | | NULL||
++-+--+-+-++

Table_4:
+-----------+--------------------------+------+-----+---------+----------------+
| Field     | Type                     | Null | Key | Default | Extra          |
+-----------+--------------------------+------+-----+---------+----------------+
| photo_uid | int(7) unsigned zerofill | NO   | PRI | NULL    | auto_increment |
| username  | varchar(100)             | NO   |     | NULL    |                |
| photo     | varchar(100)             | NO   |     | NULL    |                |
+-----------+--------------------------+------+-----+---------+----------------+

Data used for query:
username=foo


There are several goals here:
  1. query Table 3 for info where username=foo  (always 1 record)

  2. query Table 3 for username where info = (result of Q1 above) EXCLUDING
username=foo
 (results in 0 to many records)

  3. query Table 1 for photo_uid where username= (all records in query from
Q2 above)

  4. query Table 4 for photo_uid from Q2 above and EXCLUDING username=foo
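Under the assumption that the four steps can be collapsed into a single query (a self-join on Table_3 for steps 1 and 2, then a join to Table_4 for step 4; Table_1 could be joined the same way for step 3), a sketch using the column names from the schemas above and 'foo' as the example username:

```sql
-- Steps 1-2: self-join Table_3 to find other usernames sharing foo's info,
-- excluding foo itself; step 4: join to Table_4 for those users' photos.
SELECT t4.photo_uid, t4.username, t4.photo
FROM Table_3 AS me
JOIN Table_3 AS friends
  ON friends.info = me.info
 AND friends.username != me.username
JOIN Table_4 AS t4
  ON t4.username = friends.username
WHERE me.username = 'foo';
```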


Now, I started fiddling with LEFT JOIN and came up with this:

select * from Table_1 left join Table_2 using (photo_uid) where
Table_1.username != 'dopey';
+---+--+---+--+--+-+
| photo_uid | username | votes | username | vote | voted_on|
+---+--+---+--+--+-+
|   011 | bashful  | 0 | NULL | NULL | NULL|
|   010 | bashful  | 0 | NULL | NULL | NULL|
|   005 | bashful  | 0 | dopey|1 | 2009-11-15 03:56:30 |
|   003 | bashful  | 0 | NULL | NULL | NULL|
|   001 | bashful  | 0 | NULL | NULL | NULL|
|   014 | grumpy   | 0 | bashful  |1 | 2009-11-15 03:48:55 |
+---+--+---+--+--+-+

Close, I need to also set Table_2.username != 'dopey', however the moment I
do that, I get exactly 1 record returned:

+---+--+---+--+--+-+
| photo_uid | username | votes | username | vote | voted_on|
+---+--+---+--+--+-+
|   014 | grumpy   | 0 | bashful  |1 | 2009-11-15 03:48:55 |
+---+--+---+--+--+-+

RE: cannot find my.cnf file

2009-11-13 Thread Gavin Towey
Did you remove the my.cnf file and then run /etc/init.d/mysql stop?  The my.cnf 
probably had non-default paths for the pid file, so if you remove the config 
file, now the startup script is looking in the wrong location.
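A quick way to see which pid file the running server was actually started with is to pull the --pid-file option out of the ps output. A sketch, using a sample line copied from this thread in place of live ps output:

```shell
# Sample ps output line (copied from this thread); in practice you would
# feed `ps -ef | grep mysqld` into the grep instead of this variable.
line='/usr/sbin/mysqld --basedir=/ --datadir=/var/lib/mysql --pid-file=/var/lib/mysql/object01.pid --port=3306'
printf '%s\n' "$line" | grep -o 'pid-file=[^ ]*'
# prints: pid-file=/var/lib/mysql/object01.pid
```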


Also, for your password issue, please show us the exact command you're using to 
try to log in, and the exact error message you get.

Regards
Gavin Towey

-Original Message-
From: Sydney Puente [mailto:sydneypue...@yahoo.com]
Sent: Friday, November 13, 2009 5:31 AM
To: mysql@lists.mysql.com
Subject: Re: cannot find my.cnf file

Yes I enter the password manually into the remote mysql client (actually 
Oracle's SQL developer) when I login.

I thought I would restart mysql with the /etc/init.d/mysql script and go back 
to the original default settings without any my.cnf present. Just to check it 
was some setting in my.cnf that caused the log in problem.

# ./mysql status
MySQL is running but PID file could not be found   [FAILED]
# ./mysql stop
MySQL manager or server PID file could not be found!   [FAILED]

In fact I cannot find a pid file anywhere on the box.
# ps -ef | grep mysql
root  6517 1  0 10:10 pts/000:00:00 /bin/sh /usr/bin/mysqld_safe 
--datadir=/var/lib/mysql --pid-file=/var/lib/mysql/radium01.pid
mysql 6623  6517  0 10:10 pts/000:00:24 /usr/sbin/mysqld --basedir=/ 
--datadir=/var/lib/mysql --user=mysql --log-error=/var/lib/mysql/object01.err 
--pid-file=/var/lib/mysql/object01.pid --socket=/var/lib/mysql/mysql.sock 
--port=3306
# locate pid | grep mysql
/usr/share/man/man1/mysql_waitpid.1.gz
/usr/bin/mysql_waitpid

I have no idea why a pid file would be missing!

any ideas

Syd






RE: cannot find my.cnf file

2009-11-12 Thread Gavin Towey
Also note that mysql doesn't need a my.cnf file and will happily run with 
default values.  It's possible that there is none and you'll have to create it.

To see where your mysqld is configured to check for the config file do:
mysql --verbose --help | grep -C3 my.cnf

This will give you a list of paths it checks in order.
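If none of those files exists, a minimal one can be created in the first path listed. As a sketch, and since the original question was about logging all queries, something like the following (the file location, section, and log path are assumptions; `log` is the MySQL 5.1-era option name for the general query log, and it is heavy on disk I/O):

```ini
# /etc/my.cnf - minimal sketch, paths are assumptions
[mysqld]
datadir = /var/lib/mysql
# Log every statement the server receives (general query log)
log = /var/lib/mysql/query.log
```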

Regards,
Gavin Towey

-Original Message-
From: John Daisley [mailto:john.dais...@butterflysystems.co.uk]
Sent: Thursday, November 12, 2009 10:30 AM
To: Sydney Puente
Cc: mysql@lists.mysql.com
Subject: Re: cannot find my.cnf file

should be in

/etc/my.cnf

or try the following at the command line

locate my.cnf

That should give you the location


On Thu, 2009-11-12 at 18:10 +, Sydney Puente wrote:
 Hello,
 I want to log all sql queries made against a mysql db.
 Googled and found I should add a line to my.cnf.

 However I cannot find a my.cnf file
 [r...@radium init.d]# ps -ef | grep mysql
 root 13614 1  0 Sep24 ?00:00:00 /bin/sh /usr/bin/mysqld_safe 
 --datadir=/var/lib/mysql --pid-file=/var/lib/mysql/object01.pid
 mysql13669 13614  0 Sep24 ?00:21:40 /usr/sbin/mysqld --basedir=/ 
 --datadir=/var/lib/mysql --user=mysql --log-error=/var/lib/mysql/object01.err 
 --pid-file=/var/lib/mysql/object01.pid
 root 23050 22746  0 19:05 pts/000:00:00 grep mysql
 [r...@radium init.d]# locate cnf
 /usr/share/doc/MySQL-server-community-5.1.39/my-huge.cnf
 /usr/share/doc/MySQL-server-community-5.1.39/my-innodb-heavy-4G.cnf
 /usr/share/doc/MySQL-server-community-5.1.39/my-large.cnf
 /usr/share/doc/MySQL-server-community-5.1.39/my-medium.cnf
 /usr/share/doc/MySQL-server-community-5.1.39/my-small.cnf
 /usr/share/man/man8/cnfsheadconf.8.gz
 /usr/share/man/man8/cnfsstat.8.gz
 /usr/share/ssl/openssl.cnf
 /usr/share/mysql/my-large.cnf
 /usr/share/mysql/my-huge.cnf
 /usr/share/mysql/my-innodb-heavy-4G.cnf
 /usr/share/mysql/my-medium.cnf
 /usr/share/mysql/my-small.cnf
 Any ideas?
 I might add I did not install mysql and I did not start it, and the guy who 
 did is on holiday!

 TIA

 Syd









