Re: Replication and user privileges

2019-02-26 Thread Jim

On 2/26/2019 1:57 PM, Jim wrote:

On 2/26/2019 9:44 AM, shawn l.green wrote:

Hello Jim,

On 2/25/2019 7:29 PM, Jim wrote:

On 2/25/2019 5:46 PM, shawn l.green wrote:

Hello Jim,

On 2/25/2019 5:04 PM, Jim wrote:
I have a question about mysql replication. I believe I understand most
of it, but have a question about user privileges.

I understand on the master, the replication user must have the
Repl_slave_priv privilege as described here:
https://dev.mysql.com/doc/refman/8.0/en/privileges-provided.html#priv_replication-slave 





My question is about what replication-related users and privileges must
exist on the slave.

So, for example, if an insert on the master that is to be replicated is
performed by user 'abc' with proper insert permissions on the master,
does that same 'abc' user with same insert permissions need to exist on
the slave as well?

In other words, what user is performing the replication operation on the
slave? I don't see any indication of users referenced in the bin logs
that I have examined on the master. Are user and privileges regarding
replicated queries irrelevant on the slave and that is handled all
internally via the replication thread with no regard to user privileges?


Thank you.
Jim



Your final supposition is correct. All privileges were checked and
verified on the master when the original command was executed. The
Replication system on the slave is going to repeat that change as well
as possible given the state of its copy of the data without regards to
"who originally performed this change" on the upstream master.

We do not store credentials in the Binary Log because they are not
important to either of the purposes of the Binary Log

* point-in-time recovery
or
* Replication (which is very much like an automated, continuous
point-in-time recovery)
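
One can see this directly by decoding a log; a small sketch (the log file
name is illustrative) shows statements and row images but no originating
account:

  mysqlbinlog --base64-output=decode-rows -vv mysql-bin.000042 | less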

===

That replication account you mentioned, on the master, is required to
give a slave (and you could have several) enough rights to read the
Binary Log and not much else. This allows you to create an account
that can login from a remote location with the "least privileges"
necessary to do its job. This minimizes your data's exposure should
that account become compromised.

Many other accounts could also have the REPL_SLAVE_PRIV privilege and
any of those could be used by a slave to do the same job. However
losing control over one of those more privileged accounts could pose a
higher risk to your data.
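
A minimal sketch of such a least-privilege account (host, name, and
password here are illustrative):

  CREATE USER 'repl'@'192.168.1.%' IDENTIFIED BY 'choose-a-strong-password';
  GRANT REPLICATION SLAVE ON *.* TO 'repl'@'192.168.1.%';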




Thanks, Shawn. Your response confirms what I had assumed was happening.

So bottom line... what I plan to do is strip the various
insert/update/delete privileges from appropriate db users on my slaves.
I had placed them there originally because I thought they would be
needed for the replicated queries, but not true based on your response.

I only want the various mysql users used by my code to have select privs
on the slaves so that if somehow a slave was mistakenly written to via a
bug in my code, that write would fail and I would receive the error. The
slaves should only be used for selects and should never experience a write.


That would make sense based on our discussion, correct?

Thanks again.
Jim



As masters and slaves can exchange "positions" or "roles" (it depends 
on how you like to mentally visualize the relationship) within a 
replication graph in a failover situation, adding time to 
re-establish actual permissions using GRANT commands to reset user 
accounts to their old privileges may not be time you want to spend.


A cleaner, simpler solution is to set the --super-read-only flag in 
the server:
https://dev.mysql.com/doc/refman/8.0/en/server-system-variables.html#sysvar_super_read_only 



That way, you get the behavior you want (no writes to a read-only 
slave) without forcing differences to the content of your privileges 
tables within different nodes of your Replication setup.  Each node 
will remain a transactionally consistent copy of all the others 
(within the temporal limits of replication being an asynchronous 
process).
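
For reference, a minimal sketch of turning that on, both at runtime and in
the config so it survives a restart (super_read_only is available in MySQL
5.7.8 and later):

  SET GLOBAL super_read_only = ON;   -- enabling it also enables read_only

  # in my.cnf, under [mysqld]:
  super_read_only = ON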


Yours,



Thanks, Shawn.

super-read-only looks perfect for what I want. I can keep my slaves 
with all the potential users needed to take over as master without 
risking unwanted writes.


Given how you read:
"If the read_only
<https://dev.mysql.com/doc/refman/8.0/en/server-system-variables.html#sysvar_read_only>
system variable is enabled, the server permits client updates only
from users who have the SUPER
<https://dev.mysql.com/doc/refman/8.0/en/privileges-provided.html#priv_super>
privilege. If the super_read_only
<https://dev.mysql.com/doc/refman/8.0/en/server-system-variables.html#sysvar_super_read_only>
system variable is also enabled, the server prohibits client updates
even from users who have SUPER
<https://dev.mysql.com/doc/refman/8.0/en/privileges-provided.html#priv_super>."
One somewhat gets the impression that in order to enable
super_read_only, one

Re: Replication and user privileges

2019-02-26 Thread Jim

On 2/26/2019 9:44 AM, shawn l.green wrote:

Hello Jim,

On 2/25/2019 7:29 PM, Jim wrote:

On 2/25/2019 5:46 PM, shawn l.green wrote:

Hello Jim,

On 2/25/2019 5:04 PM, Jim wrote:

I have a question about mysql replication. I believe I understand most
of it, but have a question about user privileges.

I understand on the master, the replication user must have the
Repl_slave_priv privilege as described here:
https://dev.mysql.com/doc/refman/8.0/en/privileges-provided.html#priv_replication-slave 





My question is about what replication-related users and privileges must
exist on the slave.

So, for example, if an insert on the master that is to be replicated is
performed by user 'abc' with proper insert permissions on the master,
does that same 'abc' user with same insert permissions need to exist on
the slave as well?

In other words, what user is performing the replication operation on the
slave? I don't see any indication of users referenced in the bin logs
that I have examined on the master. Are user and privileges regarding
replicated queries irrelevant on the slave and that is handled all
internally via the replication thread with no regard to user privileges?


Thank you.
Jim



Your final supposition is correct. All privileges were checked and
verified on the master when the original command was executed. The
Replication system on the slave is going to repeat that change as well
as possible given the state of its copy of the data without regards to
"who originally performed this change" on the upstream master.

We do not store credentials in the Binary Log because they are not
important to either of the purposes of the Binary Log

* point-in-time recovery
or
* Replication (which is very much like an automated, continuous
point-in-time recovery)

===

That replication account you mentioned, on the master, is required to
give a slave (and you could have several) enough rights to read the
Binary Log and not much else. This allows you to create an account
that can login from a remote location with the "least privileges"
necessary to do its job. This minimizes your data's exposure should
that account become compromised.

Many other accounts could also have the REPL_SLAVE_PRIV privilege and
any of those could be used by a slave to do the same job. However
losing control over one of those more privileged accounts could pose a
higher risk to your data.




Thanks, Shawn. Your response confirms what I had assumed was happening.

So bottom line... what I plan to do is strip the various
insert/update/delete privileges from appropriate db users on my slaves.
I had placed them there originally because I thought they would be
needed for the replicated queries, but not true based on your response.

I only want the various mysql users used by my code to have select privs
on the slaves so that if somehow a slave was mistakenly written to via a
bug in my code, that write would fail and I would receive the error. The
slaves should only be used for selects and should never experience a write.


That would make sense based on our discussion, correct?

Thanks again.
Jim



As masters and slaves can exchange "positions" or "roles" (it depends 
on how you like to mentally visualize the relationship) within a 
replication graph in a failover situation, adding time to re-establish 
actual permissions using GRANT commands to reset user accounts to 
their old privileges may not be time you want to spend.


A cleaner, simpler solution is to set the --super-read-only flag in 
the server:
https://dev.mysql.com/doc/refman/8.0/en/server-system-variables.html#sysvar_super_read_only 



That way, you get the behavior you want (no writes to a read-only 
slave) without forcing differences to the content of your privileges 
tables within different nodes of your Replication setup.  Each node 
will remain a transactionally consistent copy of all the others 
(within the temporal limits of replication being an asynchronous 
process).


Yours,



Thanks, Shawn.

super-read-only looks perfect for what I want. I can keep my slaves with 
all the potential users needed to take over as master without risking 
unwanted writes.


Given how you read:
"If the read_only
<https://dev.mysql.com/doc/refman/8.0/en/server-system-variables.html#sysvar_read_only>
system variable is enabled, the server permits client updates only from
users who have the SUPER
<https://dev.mysql.com/doc/refman/8.0/en/privileges-provided.html#priv_super>
privilege. If the super_read_only
<https://dev.mysql.com/doc/refman/8.0/en/server-system-variables.html#sysvar_super_read_only>
system variable is also enabled, the server prohibits client updates
even from users who have SUPER
<https://dev.mysql.com/doc/refman/8.0/en/privileges-provided.html#priv_super>."
One somewhat gets the impression that in order to enable
super_read_only, one must also enable read_only.


However, base

Re: Replication and user privileges

2019-02-25 Thread Jim

On 2/25/2019 5:46 PM, shawn l.green wrote:

Hello Jim,

On 2/25/2019 5:04 PM, Jim wrote:

I have a question about mysql replication. I believe I understand most
of it, but have a question about user privileges.

I understand on the master, the replication user must have the
Repl_slave_priv privilege as described here:
https://dev.mysql.com/doc/refman/8.0/en/privileges-provided.html#priv_replication-slave 




My question is about what replication-related users and privileges must
exist on the slave.

So, for example, if an insert on the master that is to be replicated is
performed by user 'abc' with proper insert permissions on the master,
does that same 'abc' user with same insert permissions need to exist on
the slave as well?

In other words, what user is performing the replication operation on the
slave? I don't see any indication of users referenced in the bin logs
that I have examined on the master. Are user and privileges regarding
replicated queries irrelevant on the slave and that is handled all
internally via the replication thread with no regard to user privileges?

Thank you.
Jim



Your final supposition is correct. All privileges were checked and 
verified on the master when the original command was executed. The 
Replication system on the slave is going to repeat that change as well 
as possible given the state of its copy of the data without regards to 
"who originally performed this change" on the upstream master.


We do not store credentials in the Binary Log because they are not 
important to either of the purposes of the Binary Log


* point-in-time recovery
or
* Replication (which is very much like an automated, continuous 
point-in-time recovery)


===

That replication account you mentioned, on the master, is required to 
give a slave (and you could have several) enough rights to read the 
Binary Log and not much else. This allows you to create an account 
that can login from a remote location with the "least privileges" 
necessary to do its job. This minimizes your data's exposure should 
that account become compromised.


Many other accounts could also have the REPL_SLAVE_PRIV privilege and 
any of those could be used by a slave to do the same job. However 
losing control over one of those more privileged accounts could pose a 
higher risk to your data.





Thanks, Shawn. Your response confirms what I had assumed was happening.

So bottom line... what I plan to do is strip the various 
insert/update/delete privileges from appropriate db users on my slaves.
I had placed them there originally because I thought they would be 
needed for the replicated queries, but not true based on your response.


I only want the various mysql users used by my code to have select privs 
on the slaves so that if somehow a slave was mistakenly written to via a 
bug in my code, that write would fail and I would receive the error. The 
slaves should only be used for selects and should never experience a write.


That would make sense based on our discussion, correct?

Thanks again.
Jim

--
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/mysql



Replication and user privileges

2019-02-25 Thread Jim
I have a question about mysql replication. I believe I understand most 
of it, but have a question about user privileges.


I understand on the master, the replication user must have the 
Repl_slave_priv privilege as described here:

https://dev.mysql.com/doc/refman/8.0/en/privileges-provided.html#priv_replication-slave

My question is about what replication-related users and privileges must 
exist on the slave.


So, for example, if an insert on the master that is to be replicated is 
performed by user 'abc' with proper insert permissions on the master, 
does that same 'abc' user with same insert permissions need to exist on 
the slave as well?


In other words, what user is performing the replication operation on the 
slave? I don't see any indication of users referenced in the bin logs 
that I have examined on the master. Are user and privileges regarding 
replicated queries irrelevant on the slave and that is handled all 
internally via the replication thread with no regard to user privileges?


Thank you.
Jim

--
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/mysql



Connections from mysql8.0 to mysql5.1 - bad handshake

2018-10-31 Thread Jim
I'm trying to configure replication of a mysql5.1 database master server 
to a mysql8.0 database server.

But the replication connection will not succeed.

I don't believe the issue is directly related to replication, because 
any connection attempt from mysql8.0 to mysql5.1 fails as follows.

As run from the mysql8.0 server:
$ mysql -h db5.1server.mydomain.com -u my_user -p
Enter password:
ERROR 1043 (08S01): Bad handshake

Given the following bug report, what I am trying to do does not sound 
hopeful:

https://bugs.mysql.com/bug.php?id=90994

I'm in the middle of a transition from centos6 to centos7. mysql5.1 is 
the standard mysql distribution in centos6. With centos7, I decided to 
skip mariadb and use the mysql8.0 distribution. My upgrade path would be 
much safer if I could get replication working as stated above.


Any thoughts or do I need to accept that what I'm attempting just isn't 
going to work?
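
Since replication is only documented as supported from one release series to
the next, the usual alternative is a staged dump-and-upgrade rather than a
direct 5.1-to-8.0 link; a rough sketch (file name illustrative):

  mysqldump --all-databases --routines --triggers --master-data=2 > legacy51.sql
  # load into an intermediate 5.5/5.6 server, run mysql_upgrade,
  # then repeat one major version at a time up to 8.0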


Thanks,
Jim

--
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/mysql



--initialize specified but the data directory has files in it. Aborting.

2015-11-13 Thread jim Zhou
Hi,

I did "yum install mysql-community-server" and "service mysqld start"
I got the error

Initializing MySQL database:  2015-11-13T15:54:01.203931Z 0 [Warning]
Changed limits: max_open_files: 1024 (requested 5000)
2015-11-13T15:54:01.204205Z 0 [Warning] Changed limits: table_open_cache:
431 (requested 2000)
2015-11-13T15:54:01.204397Z 0 [Warning] TIMESTAMP with implicit DEFAULT
value is deprecated. Please use --explicit_defaults_for_timestamp server
option (see documentation for more details).
2015-11-13T15:54:01.207712Z 0 [ERROR] --initialize specified but the data
directory has files in it. Aborting.
2015-11-13T15:54:01.207751Z 0 [ERROR] Aborting

can someone help?

thank you,
Jim
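
The error means the data directory was not empty when initialization ran. A
cautious sketch of one way to recover, assuming the default datadir
/var/lib/mysql and that nothing in it needs to be kept (move it aside rather
than deleting it):

  ls -la /var/lib/mysql
  mv /var/lib/mysql /var/lib/mysql.old
  mkdir /var/lib/mysql && chown mysql:mysql /var/lib/mysql
  service mysqld start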


Re: --initialize specified but the data directory has files in it. Aborting.

2015-11-13 Thread jim Zhou
[root@deweyods1 ~]# cat /etc/redhat-release
Red Hat Enterprise Linux Server release 6.6 (Santiago)

On Fri, Nov 13, 2015 at 8:56 AM, Reindl Harald <h.rei...@thelounge.net>
wrote:

>
>
> On 13.11.2015 at 17:46, Axel Diehl wrote:
>
>> what kind of OS do you have?
>>
>
> pretty sure Fedora/CentOS7 because "yum" and the package name, if i would
> be him i would just install mariadb which is the default mysql
> implementation these days on most linux distributions and hence the
> packages are better maintained
>
>
> -Original Message-
>> From: jim Zhou [mailto:jim.jz.z...@gmail.com]
>> Sent: Friday, 13 November 2015 17:12
>> To: mysql@lists.mysql.com
>> Subject: --initialize specified but the data directory has files in it.
>> Aborting.
>>
>> Hi,
>>
>> I did "yum install mysql-community-server" and "service mysqld start"
>> I got the error
>>
>> Initializing MySQL database:  2015-11-13T15:54:01.203931Z 0 [Warning]
>> Changed limits: max_open_files: 1024 (requested 5000)
>> 2015-11-13T15:54:01.204205Z 0 [Warning] Changed limits: table_open_cache:
>> 431 (requested 2000)
>> 2015-11-13T15:54:01.204397Z 0 [Warning] TIMESTAMP with implicit DEFAULT
>> value is deprecated. Please use --explicit_defaults_for_timestamp server
>> option (see documentation for more details).
>> 2015-11-13T15:54:01.207712Z 0 [ERROR] --initialize specified but the data
>> directory has files in it. Aborting.
>> 2015-11-13T15:54:01.207751Z 0 [ERROR] Aborting
>>
>> can someone help?
>>
>
>


Re: When to create a new user?

2015-08-19 Thread Jim

On 8/19/2015 9:24 AM, Reindl Harald wrote:


Am 19.08.2015 um 15:18 schrieb Jim:

On 8/19/2015 8:40 AM, Reindl Harald wrote:


Am 19.08.2015 um 14:29 schrieb Richard Reina:

I am writing a web application in perl that will create, edit, update and
delete data from a MySQL database. I have written a perl module that will
manage the connections (issue database handles). As new users sign up for
the application, should each get their own MySQL username and password, or
is it okay to execute their queries with the same (one generic) MySQL
username and password?


one generic for the application

since your application normally should never ever connect as root,
it doesn't even have the permissions to add mysql-users

how would you even imagine working with a user table on your application's
side which is different for every user - chicken/egg


One generic, non-admin user is what you'll find in most apps.

Some developers take the approach of creating several users based on
level of operation and least privilege, particularly for public facing
scripts.

So, for example, if the operation is to delete data, that might be one
user. Perhaps there is some very sensitive data in the environment and
you'll create a given user for accessing that data and no other user has
access to that data.

Each user is given no greater access than is required based on the
intent of that user.

Then the given script connects with the appropriate user here.

Of course, you should program against and have defenses for db-related
vulnerabilities like SQL-injection, but the thinking with the multiple
users is if you had a script that was vulnerable to some exploit, the
damage would be limited to the privileges of the mysql user used to
connect to your database. It's a bit harder to manage and requires some
more planning up front, but it adds to damage control in case of a db
related exploit. You'd make this call based on how sensitive and
important your data is and how much effort you are willing to put into
the planning and design.


yes, but what has all this to do with "As new users sign up for the
application" and creating a separate mysql-user for each application user?



OP's question was generic on when to create db users. I provided an 
alternative (arguably more secure for public facing scripts) to a single 
user per app... again... depends on the value of the data and level of 
programming effort.


Read it; don't read it; use it or don't. It's more information and adds 
to the education of anyone interested. If not used now, there might be a 
future case or someone else reading this that might consider the approach.



--
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/mysql



Re: When to create a new user?

2015-08-19 Thread Jim

On 8/19/2015 8:40 AM, Reindl Harald wrote:


Am 19.08.2015 um 14:29 schrieb Richard Reina:

I am writing a web application in perl that will create, edit, update and
delete data from a MySQL database. I have written a perl module that will
manage the connections (issue database handles). As new users sign up for
the application, should each get their own MySQL username and password, or
is it okay to execute their queries with the same (one generic) MySQL
username and password?


one generic for the application

since your application normally should never ever connect as root,
it doesn't even have the permissions to add mysql-users

how would you even imagine working with a user table on your application's
side which is different for every user - chicken/egg



One generic, non-admin user is what you'll find in most apps.

Some developers take the approach of creating several users based on 
level of operation and least privilege, particularly for public facing 
scripts.


So, for example, if the operation is to delete data, that might be one 
user. Perhaps there is some very sensitive data in the environment and 
you'll create a given user for accessing that data and no other user has 
access to that data.


Each user is given no greater access than is required based on the 
intent of that user.


Then the given script connects with the appropriate user here.
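
A minimal sketch of that per-role layout (database, host, and account names
are illustrative):

  CREATE USER 'app_read'@'10.0.0.%' IDENTIFIED BY '...';
  GRANT SELECT ON appdb.* TO 'app_read'@'10.0.0.%';

  CREATE USER 'app_write'@'10.0.0.%' IDENTIFIED BY '...';
  GRANT SELECT, INSERT, UPDATE, DELETE ON appdb.* TO 'app_write'@'10.0.0.%';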

Of course, you should program against and have defenses for db-related 
vulnerabilities like SQL-injection, but the thinking with the multiple 
users is if you had a script that was vulnerable to some exploit, the 
damage would be limited to the privileges of the mysql user used to 
connect to your database. It's a bit harder to manage and requires some 
more planning up front, but it adds to damage control in case of a db 
related exploit. You'd make this call based on how sensitive and 
important your data is and how much effort you are willing to put into 
the planning and design.


Jim


--
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/mysql



Re: Sending a fax

2015-06-15 Thread Jim McNeely
What is this “fax” he is speaking of?

 On Jun 15, 2015, at 9:28 AM, Jørn Dahl-Stamnes sq...@dahl-stamnes.net wrote:
 
 On Monday, June 15, 2015, Trianon 33 wrote:
 All,
 
 Maybe a somewhat weird request: I need to verify my fax can receive
 foreign faxes.
 
 If someone (one is really enough) sends me a (preferably 2-page) fax on
 +31848708584 (destination is in The Netherlands)
 
 I hope no one is that stupid. I'm sure it costs the sender a lot to send a 
 fax to / call that number!
 
 -- 
 Jørn Dahl-Stamnes
 homepage: http://photo.dahl-stamnes.net/
 
 --
 MySQL General Mailing List
 For list archives: http://lists.mysql.com/mysql
 To unsubscribe:http://lists.mysql.com/mysql
 


--
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/mysql



inconsistent optimization

2014-08-20 Thread Jim

Without going into specific details on queries...

Using mysql 5.1 as provided with CentOS6, I've noticed some queries 
providing what I can best explain as inconsistent optimization. The 
database can be quieted down to just controlled queries, and at times the 
same query will return very quickly while at other times it may take minutes.


I don't see the same behavior with mysql5.0 under CentOS5. The same 
queries on the same data return quickly and consistently.


When the queries run slowly they show in a process list as either in a 
"copy to temp table" or "sending data" state. At first I thought query 
restructuring to avoid the copy to temp table was a path to a solution, 
but now I don't think so, since the same query, changed so that it no 
longer needs a temp table, will sit in the "sending data" state for a 
long time.


The queries do eventually come back with correct results, but it takes 
minutes rather than milliseconds (sometimes slow; sometimes fast).


Have others seen this behavior? Any explanations?
Any reading to point to for further understanding?

Thanks.
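
For anyone hitting the same symptom, a few generic checks help narrow it
down (not a fix; the actual query is omitted here):

  SHOW FULL PROCESSLIST;                    -- confirm the state and how long it sits there
  EXPLAIN SELECT ...;                       -- compare the plan of a fast run vs. a slow run
  SHOW SESSION STATUS LIKE 'Created_tmp%';  -- on-disk temp tables point at temp table sizing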

--
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/mysql



Re: inconsistent optimization

2014-08-20 Thread Jim

innodb

On 8/20/2014 1:22 PM, Martin Gainty wrote:

Jim/Jaime

What engine are you implementing? /
Which MySQL engine are you implementing?
Greetings from South America,
Martín



Date: Wed, 20 Aug 2014 13:54:46 -0300
Subject: Re: inconsistent optimization
From: edua...@gerencianet.com.br
To: j...@lowcarbfriends.com
CC: mysql@lists.mysql.com

Well,

Try to start by checking IOPS vs. disk. Check your iowait and the cache
size.

Could you send a create table and the query for us?





Best regards,

Eduardo Fontinelle
Chief Technology Officer | Gerencianet
Phone: +55 (31) 3603-0812



2014-08-20 12:04 GMT-03:00 Jim j...@lowcarbfriends.com:


Without going into specific details on queries...

Using mysql 5.1 as provided with CentOS6, I've noticed some queries
providing what I can best explain as inconsistent optimization. The
database can be quieted down to just controlled queries, and at times the
same query will return very quickly while at other times it may take minutes.

I don't see the same behavior with mysql5.0 under CentOS5. The same
queries on the same data return quickly and consistently.

When the queries run slowly they show in a process list as either in a
"copy to temp table" or "sending data" state. At first I thought query
restructuring to avoid the copy to temp table was a path to a solution, but
now I don't think so, since the same query, changed so that it no longer
needs a temp table, will sit in the "sending data" state for a long time.

The queries do eventually come back with correct results, but it takes
minutes rather than milliseconds (sometimes slow; sometimes fast).

Have others seen this behavior? Any explanations?
Any reading to point to for further understanding?

Thanks.

--
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/mysql








--
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/mysql



fulltext question

2013-11-26 Thread Jim Sheffer
Hello all-

I have a question on searching via fulltext.

I have the following SQL statement:

var('SQLResultsID') = 'select *, MATCH 
(product_id,product_name,product_desc) AGAINST("' + $sqlKeywordSearch + '") AS 
SCORE from products WHERE MATCH (product_id,product_name,product_desc) 
AGAINST("' + $sqlKeywordSearchB + '" IN BOOLEAN MODE) AND active NOT LIKE 
"%no%" ORDER BY score DESC'

First off, the variable $sqlKeywordSearch contains the search keywords 
separated by spaces.  The second variable, $sqlKeywordSearchB, has keywords 
separated by spaces but also adds a "+" sign to the beginning of each keyword 
to do the Boolean search and match all the keywords in the search.

question #1 - Is this the best way to do a boolean search but also return a 
usable “score”?  The search is doing what I expect it to do so no problem there.

Here’s my main question: I want to be able to "boost" the rankings (score) of 
the results based on the fields.  Anything that matches on the product_id field 
I would like to get a higher ranking, then the product_name field next, then 
the product_desc last (obviously, if something matches in the product_id field 
it is of greater importance than if it matches in the product_description 
field).  I know I can boost, or adjust, the score to the keywords if I want 
but is there a way to add to the search score rankings based on the field 
searched?
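
One common pattern for per-field weighting, sketched below, ranks with a
separate MATCH() per column and keeps the BOOLEAN MODE match for filtering;
it assumes FULLTEXT indexes exist on each individual column as well as the
combined one, and uses a literal keyword for brevity:

  SELECT p.*,
         3 * (MATCH (product_id)   AGAINST ('widget')) +
         2 * (MATCH (product_name) AGAINST ('widget')) +
         1 * (MATCH (product_desc) AGAINST ('widget')) AS score
    FROM products p
   WHERE MATCH (product_id, product_name, product_desc)
         AGAINST ('+widget' IN BOOLEAN MODE)
   ORDER BY score DESC;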

as always, Thank everyone for any help!


James

James Sheffer  j...@higherpowered.com
Lasso Developer    http://www.higherpowered.com
phone:  469-256-0268




--
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/mysql



restore question

2013-07-05 Thread Jim Sheffer
Hi everyone-

This is probably a no brainer (I'm new to Navicat) but I have a backup of a 
database from Navicat.

I want to be able to see if a certain field has changed since this morning in 
the backup (We are having problems with an order that somehow duplicated the 
items.  I need to see if there was only 1 of each item or two removed from 
inventory).  I don't need to do a restore into the database, just have a look 
at the backup.

Is this possible without going through the hoops of creating a copy of the 
database and restoring to the copy (I assume this is possible) - I DO NOT want 
to restore into the currently running database :-)
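
One low-risk way to do this, assuming the backup can be exported as a plain
SQL dump, is to load it into a scratch schema and query only that copy
(names here are illustrative):

  mysql -u root -p -e "CREATE DATABASE backup_check"
  mysql -u root -p backup_check < this_mornings_backup.sql
  mysql -u root -p backup_check -e "SELECT * FROM orders WHERE order_id = 12345"
  mysql -u root -p -e "DROP DATABASE backup_check"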

Any suggestions would be greatly appreciated!


James Sheffer,

The HigherPowered Team!

supp...@higherpowered.com  sa...@higherpowered.com
Web Design & Development http://www.higherpowered.com
phone:  469-256-0268   
 We help businesses succeed on the web!
 ---


--
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/mysql



Re: weird difference in workbench and CLI query

2012-02-13 Thread Jim McNeely
Rik,

Your Crystal Ball was right! How did I miss that? You get the brownie point for 
the day. Thanks!

I knew it wasn't version discrepancy because workbench ssh's into the DB, and 
the script does as well, so it is running it on the native client in both 
cases. My sysadmin was saying it was version discrepancy as well and I was just 
sure that wasn't it.

Jim McNeely
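
As a general guard against this class of problem, feeding the SQL to the
client through a quoted heredoc keeps the shell from touching quotes or $
variables; a sketch (host and database names illustrative):

mysql -h dbhost -u report_user -p reports_db <<'SQL'
select c.acc_number, e.DateExam
  from chestcon_log c
  left join exams e on CONCAT('000', c.acc_number) = e.LastWordAccNum;
SQL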

On Feb 13, 2012, at 9:11 AM, Rik Wasmus wrote:

 When I run this query in workbench:
 select c.acc_number 'Accession Number', e.DateExam 'MBI Exam Date',
 s.ExamDate 'SJH Exam Date' from chestcon_log c
 left join exams e on CONCAT("000",c.acc_number) = e.LastWordAccNum
 left join sjhreports_ s on c.acc_number = s.AccessionNumber
 WHERE
 c.timestamp_exam <= CAST(DATE_ADD(CONCAT(CURDATE(), ' 23:59:59'), INTERVAL
 '-1' DAY) AS DATETIME) AND c.timestamp_exam >=
 CAST(DATE_ADD(CONCAT(CURDATE(), ' 00:00:00'), INTERVAL '-14' DAY) AS
 DATETIME)
 
 I get this:
 7330565  NULL 2012-02-01
 6604419   2011-01-25 NULL
 
 but when I run the same query in a shell script, on a Linux box or OS X, I
 get this:
 7330565  NULL        2012-02-01
 6604419  NULL        NULL
 
 I see mixes of single ' and double " ... And this is the line that fails, the 
 only occurrence of ":
left join exams e on CONCAT("000",c.acc_number) = e.LastWordAccNum
 
 Which my crystal ball tells me is probably a sign of improper escapes in 
 shell 
 scripts. What happens if you replace that line with:
 
left join exams e on CONCAT('000',c.acc_number) = e.LastWordAccNum
 -- 
 Rik Wasmus
 
 -- 
 MySQL General Mailing List
 For list archives: http://lists.mysql.com/mysql
 To unsubscribe:http://lists.mysql.com/mysql
 


--
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/mysql



Re: UPDATE triggers with REPLACE statements

2011-12-19 Thread Jim McNeely
Perfect!! This is the answer I was looking for. Thanks! I didn't know about 
this.

Jim McNeely
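
For readers following along, a minimal sketch of the suggested INSERT ... ON
DUPLICATE KEY UPDATE form (table and column names illustrative); because the
existing row is updated in place, the ON UPDATE trigger fires:

  INSERT INTO reports (report_id, body)
  VALUES (42, 'new text')
  ON DUPLICATE KEY UPDATE body = VALUES(body);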

On Dec 18, 2011, at 11:26 AM, Claudio Nanni wrote:

 Only if you can change the application you could use INSERT ... ON DUPLICATE 
 KEY UPDATE instead of REPLACE.
 
 Check Peter's post here: http://kae.li/iiigi
 
 Cheers
 
 Claudio
 
 
 2011/12/17 Jim McNeely j...@newcenturydata.com
 Here is a fun one!
 
 I have a set of tables that get populated and changed a lot from lots of 
 REPLACE statements. Now, I need an ON UPDATE trigger, but of course the 
 trigger never gets triggered because REPLACES are all deletes and inserts.
 
 The trigger is going to populate another table as a queue for a system to do 
 something whenever a particular field changes.
 
 SO, does anyone have some slick idea how to handle this little dilemma? I 
 have an idea but I have a feeling there is something better out there.
 
 Thanks!
 
 Jim McNeely
 --
 MySQL General Mailing List
 For list archives: http://lists.mysql.com/mysql
 To unsubscribe:http://lists.mysql.com/mysql
 
 
 
 
 -- 
 Claudio



Re: UPDATE triggers with REPLACE statements

2011-12-19 Thread Jim McNeely
In the MySQL documentation, we find this tantalizing statement:

It is possible that in the case of a duplicate-key error, a storage engine may 
perform the REPLACE as an update rather than a delete plus insert, but the 
semantics are the same. There are no user-visible effects other than a possible 
difference in how the storage engine increments Handler_xxx status variables.

Does anyone know what engine this is? I can't seem to find any info via google. 
If I could live with the choice of engine, I could make this work with no extra 
programming at all.

Thanks,

Jim McNeely

On Dec 18, 2011, at 11:26 AM, Claudio Nanni wrote:

 Only if you can change the application you could use INSERT ... ON DUPLICATE
 KEY UPDATE instead of REPLACE.
 Check Peter's post here: http://kae.li/iiigi
 
 Cheers
 
 Claudio
 
 
 2011/12/17 Jim McNeely j...@newcenturydata.com
 
 Here is a fun one!
 
 I have a set of tables that get populated and changed a lot from lots of
 REPLACE statements. Now, I need an ON UPDATE trigger, but of course the
 trigger never gets triggered because REPLACES are all deletes and inserts.
 
 The trigger is going to populate another table as a queue for a system to
 do something whenever a particular field changes.
 
 SO, does anyone have some slick idea how to handle this little dilemma? I
 have an idea but I have a feeling there is something better out there.
 
 Thanks!
 
 Jim McNeely
 --
 MySQL General Mailing List
 For list archives: http://lists.mysql.com/mysql
 To unsubscribe:http://lists.mysql.com/mysql
 
 
 
 
 -- 
 Claudio



Re: UPDATE triggers with REPLACE statements

2011-12-19 Thread Jim McNeely
With REPLACE, you just set up the query the same as an INSERT statement but 
otherwise it just works. With ON DUPLICATE UPDATE you have to set up the whole 
query with the entire text all over again as an update. The query strings for 
what I'm doing are in some cases pushing enough text in medical report fields 
that it uses a MediumText data type, and I am watchful of bandwidth and 
performance, so this seems better - I'm not sending the field names and values 
twice. It is also something I don't have to program, I can just set the engine. 
The performance bottleneck is NOT likely going to be MySQL with either engine, 
but the processes creating these queries have some limitations.

Anyway, I just thought I would share. BTW I experimented, and innoDB does 
updates and fires off update triggers for REPLACE statements, but MyISAM does 
delete/inserts.

Jim McNeely
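
For anyone wanting to check or switch the engine behind a table, a short
sketch (table name illustrative; ALTER TABLE rebuilds the table, so test it
on large tables first):

  SHOW TABLE STATUS LIKE 'reports'\G
  ALTER TABLE reports ENGINE = InnoDB;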

On Dec 19, 2011, at 1:28 PM, Claudio Nanni wrote:

 Good to know and good that you took time to read the manual, good approach.
 
 But why bother with REPLACE if you will go with INSERT ... ON DUPLICATE KEY
 UPDATE?
 
 The storage engine is a property of your table and you can set it and/or
 change it, it is the low-level layer (physical) of the database that takes
 care on how data is actually stored and retrieved.
 
 You can check your table with:
 
 SHOW TABLE STATUS LIKE 'your-table-name';
 
 Manual page: http://kae.li/iiiga
 
 Cheers
 
 Claudio
 
 2011/12/19 Jim McNeely j...@newcenturydata.com
 
 In the MySQL documentation, we find this tantalizing statement:
 
 It is possible that in the case of a duplicate-key error, a storage
 engine may perform the REPLACE as an update rather than a delete plus
 insert, but the semantics are the same. There are no user-visible effects
 other than a possible difference in how the storage engine increments
 Handler_xxx status variables.
 
 Does anyone know what engine this is? I can't seem to find any info via
 google. If I could live with the choice of engine, I could make this work
 with no extra programming at all.
 
 Thanks,
 
 Jim McNeely
 
 On Dec 18, 2011, at 11:26 AM, Claudio Nanni wrote:
 
 Only if you can change the application you could use INSERT ... ON DUPLICATE
 KEY UPDATE instead of REPLACE.
 
 Check Peter's post here: http://kae.li/iiigi
 
 Cheers
 
 Claudio
 
 
 2011/12/17 Jim McNeely j...@newcenturydata.com
 
 Here is a fun one!
 
 I have a set of tables that get populated and changed a lot from lots of
 REPLACE statements. Now, I need an ON UPDATE trigger, but of course the
 trigger never gets triggered because REPLACES are all deletes and
 inserts.
 
 The trigger is going to populate another table as a queue for a system
 to
 do something whenever a particular field changes.
 
 SO, does anyone have some slick idea how to handle this little dilemma?
 I
 have an idea but I have a feeling there is something better out there.
 
 Thanks!
 
 Jim McNeely
 --
 MySQL General Mailing List
 For list archives: http://lists.mysql.com/mysql
 To unsubscribe:http://lists.mysql.com/mysql
 
 
 
 
 --
 Claudio
 
 
 
 
 -- 
 Claudio


--
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/mysql



Re: UPDATE triggers with REPLACE statements

2011-12-19 Thread Jim McNeely
Not if you are using innoDB tables. For these, you use INSERT and UPDATE 
triggers. 

Jim McNeely
On Dec 19, 2011, at 11:58 AM, Halász Sándor wrote:

 2011/12/19 11:30 -0800, Jim McNeely 
 In the MySQL documentation, we find this tantalizing statement:
 
 It is possible that in the case of a duplicate-key error, a storage engine 
 may perform the REPLACE as an update rather than a delete plus insert, but 
 the semantics are the same. There are no user-visible effects other than a 
 possible difference in how the storage engine increments Handler_xxx status 
 variables. 
 
 Well, try it--but beware of these statements: "the semantics are the same. 
 There are no user-visible effects other than a possible difference in how the 
 storage engine increments Handler_xxx status variables."
 
 If accurate, the triggers are those of DELETE & INSERT, not UPDATE.
 
 
 -- 
 MySQL General Mailing List
 For list archives: http://lists.mysql.com/mysql
 To unsubscribe:http://lists.mysql.com/mysql
 


--
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/mysql



UPDATE triggers with REPLACE statements

2011-12-16 Thread Jim McNeely
Here is a fun one!

I have a set of tables that get populated and changed a lot from lots of 
REPLACE statements. Now, I need an ON UPDATE trigger, but of course the trigger 
never gets triggered because REPLACES are all deletes and inserts.

The trigger is going to populate another table as a queue for a system to do 
something whenever a particular field changes.

SO, does anyone have some slick idea how to handle this little dilemma? I have 
an idea but I have a feeling there is something better out there.

Thanks!

Jim McNeely
--
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/mysql



Re: import database

2011-10-10 Thread Jim Moseby
 Jessica Bela jessicabel...@yahoo.com 10/10/2011 4:47 PM 
Hi all,
how can I import on my PC a MySQL database that has been created on another 
PC and with other tools?
 
Assuming the source and destination are BOTH mysql databases:
 
mysqldump database > export.sql
 
...creates a file 'export.sql'.  Copy it to the target pc, then on the target
 
mysql < export.sql
 
If the source database is something OTHER than MySQL, we'd need to know what 
that database is.
 



SLOW performance over network

2011-09-29 Thread Jim Moseby
mysql  Ver 14.12 Distrib 5.0.51a, for debian-linux-gnu (i486) using readline 5.2
 
Strange, strange problem.
 
Everything was fine yesterday morning, then all of a sudden any query over the 
network takes a REALLY long time to return.  If I log in at the server console, 
every query is snappy-fast.  There had been no changes to clients or server 
when this happened.  All other networking services on the machine seem to be 
fine.  Processors are between 0-3% utilization, disk is at 8% utilization.
 
Server runs apache as well. There are many php pages on this web server that 
access the database, and display the results, all snappy-fast as usual.
 
I still use the old MySQL Administrator GUI on my windows box.  A simple 
'select * from tablename'  that would return only three records takes just over 
a minute to return (although it says '3 records returned in 0.0086 seconds' at 
the bottom).  I have many ODBC clients accessing this server as well, they all 
have slowness problems too.
 
I'm stumped.  What could possibly be causing this issue?
 
Thanks for any ideas!
 
Jim
 
 



Re: SLOW performance over network

2011-09-29 Thread Jim Moseby
Yeah:
 
# host 72.30.2.43 /* yahoo.com */
43.2.30.72.in-addr.arpa domain name pointer ir1.fp.vip.sk1.yahoo.com.
# host 10.1.20.97 /* my windows box */
97.20.1.10.in-addr.arpa has no PTR record
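
If reverse DNS does turn out to be the culprit, one common mitigation is to
disable hostname resolution entirely; a sketch of the my.cnf change (note
that account host parts must then be IP-based, not hostnames):

  [mysqld]
  skip-name-resolve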
 


 Todd Lyons tly...@ivenue.com 9/29/2011 10:26 AM 
On Thu, Sep 29, 2011 at 7:12 AM, Jim Moseby jmos...@elasticfabrics.com wrote:
 I still use the old MySQL Administrator GUI on my windows box.  A simple 
 'select * from tablename'  that would return only three records takes just 
 over a minute to return (although it says '3 records returned in 0.0086 
 seconds' at the bottom).  I have many ODBC clients accessing this server as 
 well, they all have slowness problems too.

 I'm stumped.  What could possibly be causing this issue?

Wild Guess: Does DNS resolution, especially reverse dns resolution,
still work on the mysql server?

..Todd

-- 
If Americans could eliminate sugary beverages, potatoes, white bread,
pasta, white rice and sugary snacks, we would wipe out almost all the
problems we have with weight and diabetes and other metabolic
diseases. -- Dr. Walter Willett, Harvard School of Public Health

--
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/mysql?unsub=jmos...@elasticfabrics.com





Re: replication between two tables in same database

2011-09-29 Thread Jim Moseby

Sounds like a job for CREATE TRIGGER to me.  :)
http://dev.mysql.com/doc/refman/5.0/en/create-trigger.html
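
A minimal sketch of that trigger approach for the insert path (table and
column names are illustrative; matching UPDATE and DELETE triggers would be
needed to keep the copy complete):

  CREATE TRIGGER items_sync_ins AFTER INSERT ON items_innodb
  FOR EACH ROW
    REPLACE INTO items_myisam (id, body) VALUES (NEW.id, NEW.body);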
 
 

 Tompkins Neil neil.tompk...@googlemail.com 9/29/2011 12:56 PM 
Hi

I've an InnoDB and a MyISAM table in the SAME database, and I wish to replicate
the data between the two because I need to use FULLTEXT searching on
the MyISAM table.  Is this possible? If so, how do I do it?

Thanks
Neil



Re: Backup Policy

2011-03-15 Thread Jim McNeely
You might want to look into replication 
(http://dev.mysql.com/doc/refman/5.5/en/replication.html). You can set up a 
replication slave to follow the master DB in real time, or offset by minutes, 
hours, days, or weeks, or whatever. That way you have a copy already served up 
waiting in the wings, very accessible. It's best to have at least one slave 
that is offset by at least a day IMHO because the problem could be human error 
and this will be faithfully replicated to the slave. MySQL replication works 
very well. Doing a dump is useful in some situations but we are more and more 
looking to more convenient ways, as storage and hardware is pretty cheap but 
time in a critical failure is not cheap.

Jim McNeely 
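
On the incremental-backup part of the question, one common low-tech pattern
is a periodic full dump plus copies of the binary logs written since it; a
rough sketch (paths illustrative):

  mysqldump --all-databases --single-transaction --flush-logs --master-data=2 > full_backup.sql
  # later, close out and copy the binary logs produced since the dump:
  mysqladmin flush-logs
  cp /var/lib/mysql/mysql-bin.0* /backup/binlogs/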

On Mar 15, 2011, at 1:51 PM, Wm Mussatto wrote:

 On Tue, March 15, 2011 12:36, Joerg Bruehe wrote:
 Hi!
 
 
 Adarsh Sharma wrote:
 Dear all,
 
 Taking Backup is must needed task in Database Servers. [[...]]
 
 Correct.
 
 
 We have options RAID, mylvmbackup , mysqldump. But it depends on the
 company requirement too.
 
 RAID is no backup!
 
 A RAID system may give you protection against a single disk drive
 failing (depending on the RAID level you configure), but it doesn't
 protect you against misuse, operator error, malware (virus, trojan),
 wilful deletion, ...
 
 RAID is no backup!  (This can't be repeated often enough.)
 
 Doing a backup means to take the data (including schema, privileges,
 passwords, triggers, ...) to some independent media where it is safe
 from all misfunction on the original machine.
 IMNSHO, a backup must be taken offline or write-protected in some other
 way, so that even a misfunction of the backup machine does not damage
 your backup.
 
 Old tape drives (or newer tape cartridges) with their physical write
 protection (ring, slider, ...) did provide such protection, it is a pity
 that they are too slow and too small for today's data (or too expensive
 for most people).
 
 With disks, my solution is:
 - Have the backup disks on a separate machine, via the network.
 - Have external backup disks, which are powered off if not is use.
 - Have two (or more) and use them alternating, so that even in case of a
  misfunction or drive failure (affecting the backup disk currently in
  use) the previous backup (on the other disk) remains safe.
 
 
 We have a database of more than 250GB in MySQL which is
 increasing day by day. Currently I am using the mysqldump utility of MySQL;
 I performed a full backup about 28 days ago. But is there any mechanism or
 script to take only incremental backups on a weekly or daily basis?
 
 Data is inserted in separate tables in separate databases. We can't
 afford to have some proprietary solution.
 
 If you can afford downtime (shutting down the database), dirvish is a
 good means to take a file system backup (all your data areas). Check it
 at www.dirvish.org.  There are plenty of alternatives, but I didn't try
 most of them. What I did try was rsnapshot, but I found it too
 inflexible for my purposes.
 
 I can't comment on the other approaches.
 
 Whatever approach you take: Make sure the backup gets stored on some
 separate, protected media.
 
 
 HTH,
 Jörg
 
 --
 Joerg Bruehe,  MySQL Build Team,  joerg.bru...@oracle.com
 
 Another advantage of external drives is they can be taken OFF site and
 stored away from the building.  We use three.  One on site, one in transit
 and one that mirrors (off site of course) the images of the other two.  We
 dump nightly and then backup that.  We also backup the binary logs which
 get rotated every two days (restore is nightly back followed by the binary
 logs).  The only only restore we have had to do is the nephew who knows
 html.   The disks are raided, but as was stated, that is to protect
 against single point failure.
 --
 William R. Mussatto
 Systems Engineer
 http://www.csz.com
 909-920-9154
 
 
 -- 
 MySQL General Mailing List
 For list archives: http://lists.mysql.com/mysql
 To unsubscribe:http://lists.mysql.com/mysql?unsub=j...@newcenturydata.com
 



Re: Help with slow query

2011-03-10 Thread Jim McNeely
Shawn,

Thanks for the great help! It still is not working. I did an EXPLAIN on this 
query with your amended split out join statements and got this:

+----+-------------+-------+-------+---------------+------------+---------+------+--------+-------------+
| id | select_type | table | type  | possible_keys | key        | key_len | ref  | rows   | Extra       |
+----+-------------+-------+-------+---------------+------------+---------+------+--------+-------------+
|  1 | SIMPLE      | a     | range | apptdate      | apptdate   | 4       | NULL | 296148 | Using where |
|  1 | SIMPLE      | p     | ALL   | NULL          | NULL       | NULL    | NULL | 262462 |             |
|  1 | SIMPLE      | t     | ALL   | NULL          | NULL       | NULL    | NULL | 311152 |             |
|  1 | SIMPLE      | c     | ref   | IdApptType    | IdApptType | 51      | func |      1 |             |
|  1 | SIMPLE      | af    | ALL   | NULL          | NULL       | NULL    | NULL |   5680 |             |
+----+-------------+-------+-------+---------------+------------+---------+------+--------+-------------+

What I'm not catching is why it says there is no key it can use for the patient 
table; here is a portion of the show create:

PRIMARY KEY (`zzk`),
  KEY `IdPatient` (`IdPatient`),
  KEY `SSN` (`SSN`),
  KEY `IdLastword` (`IdLastword`),
  KEY `DOB` (`DateOfBirth`),
  KEY `NameFirst` (`NameFirst`),
  KEY `NameLast` (`NameLast`)

So, the IdPatient is at least a POSSIBLE key, right?
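
When an index that clearly exists is not even listed as possible, a useful
first check is whether the joined columns really have identical definitions
(type, length, character set and collation), since a mismatch forces a
conversion and can disqualify the index; a sketch:

  SHOW CREATE TABLE Appt_\G
  SHOW CREATE TABLE patient_\G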

On Mar 9, 2011, at 8:34 PM, Shawn Green (MySQL) wrote:

 Hi Jim,
 
 On 3/9/2011 17:57, Jim McNeely wrote:
 I am trying to set up an export query which is executing very slowly, and I 
 was hoping I could get some help. Here is the query:
 
 SELECT a.IdAppt, a.IdPatient,
 p.NameLast, p.NameFirst, p.NameMI,
 a.IdProcedure, a.ProcName, CAST(CONCAT(a.ApptDate, " ", a.ApptTimeOut) AS 
 CHAR)
 ApptDateTime, a.ApptLenMin Duration,
 a.Room Primary_Resource_ID, 'R' as Primary_Resource_Type,
 t.Note, c.IdPractAffil DoctorID, af.PractName DoctorName, a.ICD9code ICD9,
 '??' Diagnosis_free_test
 
 from Appt_ a LEFT JOIN (patient_ p, today_ t, Copy_ c, Affil_ af)
 ON (a.IdPatient = p.IdPatient
 AND a.IdPatientDate = t.IdPatientDate
 AND CONCAT(a.IdAppt, '0') = c.IdApptType
 AND a.IdPriCarePhy = af.IdAffil)
 WHERE a.ApptDate >= '2009-03-01';
 
 p.IdPatient, t.IdPatientDate, c.IdApptType, and af.IdAffil are all indexed. 
 Also I selectively took out join parameters until there was nothing but a 
 join on the patient table, and it was still slow, but when I took that out, 
 the query was extremely fast. What might I be doing wrong?
 
 Thanks,
 
 Jim McNeely
 
 The performance problem is with your Cartesian product. I think you meant to 
 write:
 
 from Appt_ a
 LEFT JOIN patient_ p
  ON a.IdPatient = p.IdPatient
 LEFT JOIN today_ t
  ON a.IdPatientDate = t.IdPatientDate
 LEFT JOIN Copy_ c
  ON CONCAT(a.IdAppt, '0') = c.IdApptType
 LEFT JOIN Affil_ af
  ON a.IdPriCarePhy = af.IdAffil
 
 As of 5.0.12, the comma operator for table joins was demoted in the 'order of 
 precedence' for query execution. That means that MySQL became more complaint 
 with the SQL standard but it also means that using a comma-join instead of an 
 explicit ANSI join can result in a Cartesian product more frequently.
 
 Try my style and compare how it works. If both styles are similarly slow, 
 collect the EXPLAIN plan for this query and share with the list.
 
 Yours,
 -- 
 Shawn Green
 MySQL Principal Technical Support Engineer
 Oracle USA, Inc. - Hardware and Software, Engineered to Work Together.
 Office: Blountville, TN
 
 -- 
 MySQL General Mailing List
 For list archives: http://lists.mysql.com/mysql
 To unsubscribe:http://lists.mysql.com/mysql?unsub=j...@newcenturydata.com
 



Re: Help with slow query

2011-03-10 Thread Jim McNeely
Rhino,

Thanks for the help and time! Actually, I thought the same thing, but what's 
weird is that is the only thing that doesn't slow it down. If I take out all of 
the join clauses EXCEPT that one the query runs virtually instantaneously. for 
some reason it will use the index in that case and it works. If I take out 
everything like this:

SELECT a.IdAppt, a.IdPatient, 
p.NameLast, p.NameFirst, p.NameMI

from Appt_ a
LEFT JOIN patient_ p
 ON a.IdPatient = p.IdPatient
WHERE a.ApptDate >= '2009-03-01';

It is still utterly slow. EXPLAIN looks like this:

+----+-------------+-------+-------+---------------+----------+---------+------+--------+-------------+
| id | select_type | table | type  | possible_keys | key      | key_len | ref  | rows   | Extra       |
+----+-------------+-------+-------+---------------+----------+---------+------+--------+-------------+
|  1 | SIMPLE      | a     | range | apptdate      | apptdate | 4       | NULL | 296166 | Using where |
|  1 | SIMPLE      | p     | ALL   | NULL          | NULL     | NULL    | NULL | 262465 |             |
+----+-------------+-------+-------+---------------+----------+---------+------+--------+-------------+

But, very good try. I thought this might be it as well.

Thanks,

Jim McNeely

On Mar 10, 2011, at 9:05 AM, Rhino wrote:

 
 What I'm about to say may be completely out to lunch so don't be afraid to 
 dismiss it. I'm more a DB2 guy than a MySQL guy and I'm getting rusty on both 
 but I've always been struck by how similar the two dtabases are. Therefore, I 
 want to offer an insight on why this query would not perform terribly well in 
 DB2. I simply don't know if it is applicable to MySQL.
 
 In DB2, using functions on predicates (conditions in a WHERE clause), 
 prevents DB2 from using an index to satisfy that predicate. (Or at least it 
 used to: I'm not certain if that has been remedied in recent versions of the 
 DB2 optimizer.) Therefore, the CONCAT() function in the line
 AND CONCAT(a.IdAppt, '0') = c.IdApptType would ensure that no index on the 
 IdAppt column would be used to find the rows of the table that satisfied that 
 condition.
 
 My suggestion is that you try rewriting that condition to avoid using 
 CONCAT() - or any other function - and see if that helps the performance of 
 your query. That would require modifying your data to append a zero to the 
 end of the existing date in IdApptType column, which may or may not be a 
 reasonable thing to do. You'll have to decide about that.
 
 Again, I could be all wet here so don't have me killed if I'm wrong about 
 this :-) I'm just trying to help ;-)
 
 --
 Rhino
 
 On 2011-03-10 11:38, Jim McNeely wrote:
 Shawn,
 
 Thanks for the great help! It still is not working. I did an EXPLAIN on this 
 query with your amended split out join statements and got this:
 
 +----+-------------+-------+-------+---------------+------------+---------+------+--------+-------------+
 | id | select_type | table | type  | possible_keys | key        | key_len | ref  | rows   | Extra       |
 +----+-------------+-------+-------+---------------+------------+---------+------+--------+-------------+
 |  1 | SIMPLE      | a     | range | apptdate      | apptdate   | 4       | NULL | 296148 | Using where |
 |  1 | SIMPLE      | p     | ALL   | NULL          | NULL       | NULL    | NULL | 262462 |             |
 |  1 | SIMPLE      | t     | ALL   | NULL          | NULL       | NULL    | NULL | 311152 |             |
 |  1 | SIMPLE      | c     | ref   | IdApptType    | IdApptType | 51      | func |      1 |             |
 |  1 | SIMPLE      | af    | ALL   | NULL          | NULL       | NULL    | NULL |   5680 |             |
 +----+-------------+-------+-------+---------------+------------+---------+------+--------+-------------+
 
 What I'm not catching is why it says there is no key it can use for the 
 patient table; here is a portion of the show create:
 
 PRIMARY KEY (`zzk`),
   KEY `IdPatient` (`IdPatient`),
   KEY `SSN` (`SSN`),
   KEY `IdLastword` (`IdLastword`),
   KEY `DOB` (`DateOfBirth`),
   KEY `NameFirst` (`NameFirst`),
   KEY `NameLast` (`NameLast`)
 
 So, the IdPatient is at least a POSSIBLE key, right?
 
 On Mar 9, 2011, at 8:34 PM, Shawn Green (MySQL) wrote:
 
 Hi Jim,
 
 On 3/9/2011 17:57, Jim McNeely wrote:
 I am trying to set up an export query which is executing very slowly, and 
 I was hoping I could get some help. Here is the query:
 
 SELECT a.IdAppt, a.IdPatient,
 p.NameLast, p.NameFirst, p.NameMI,
 a.IdProcedure, a.ProcName, CAST(CONCAT(a.ApptDate, ' ', a.ApptTimeOut) AS 
 CHAR)
 ApptDateTime, a.ApptLenMin Duration,
 a.Room Primary_Resource_ID, 'R' as Primary_Resource_Type,
 t.Note, c.IdPractAffil DoctorID, af.PractName DoctorName, a.ICD9code ICD9,
 '??' Diagnosis_free_test
 
 from Appt_ a LEFT JOIN (patient_ p, today_ t, Copy_ c, Affil_ af)
 ON (a.IdPatient = p.IdPatient
 AND a.IdPatientDate = t.IdPatientDate
 AND CONCAT(a.IdAppt, '0') = c.IdApptType

Re: Help with slow query

2011-03-10 Thread Jim McNeely
Shawn,

This is the first thing that I though as well, but here is a portion from the 
show create table for patient_:


PRIMARY KEY (`zzk`),
 KEY `IdPatient` (`IdPatient`),
 KEY `SSN` (`SSN`),
 KEY `IdLastword` (`IdLastword`),
 KEY `DOB` (`DateOfBirth`),
 KEY `NameFirst` (`NameFirst`),
 KEY `NameLast` (`NameLast`)

This extremely simple join is still massively slow.

Jim

On Mar 10, 2011, at 10:00 AM, Shawn Green (MySQL) wrote:

 On 3/10/2011 12:32, Jim McNeely wrote:
 Rhino,
 
 Thanks for the help and time! Actually, I thought the same thing, but what's 
 weird is that it is the only thing that doesn't slow it down. If I take out all 
 of the join clauses EXCEPT that one, the query runs virtually 
 instantaneously. For some reason it will use the index in that case and it 
 works. If I take out everything like this:
 
 SELECT a.IdAppt, a.IdPatient,
 p.NameLast, p.NameFirst, p.NameMI
 
 from Appt_ a
 LEFT JOIN patient_ p
  ON a.IdPatient = p.IdPatient
 WHERE a.ApptDate= '2009-03-01';
 
 It is still utterly slow. EXPLAIN looks like this:
 
 +----+-------------+-------+-------+---------------+----------+---------+------+--------+-------------+
 | id | select_type | table | type  | possible_keys | key      | key_len | ref  | rows   | Extra       |
 +----+-------------+-------+-------+---------------+----------+---------+------+--------+-------------+
 |  1 | SIMPLE      | a     | range | apptdate      | apptdate | 4       | NULL | 296166 | Using where |
 |  1 | SIMPLE      | p     | ALL   | NULL          | NULL     | NULL    | NULL | 262465 |             |
 +----+-------------+-------+-------+---------------+----------+---------+------+--------+-------------+
 
 But, very good try. I thought this might be it as well.
 
 ... snip ...
 
 According to this report, there are no indexes on the `patient_` table that 
 include the column `IdPatient` as the first column. Fix that and this query 
 should be much faster.
 
 -- 
 Shawn Green
 MySQL Principal Technical Support Engineer
 Oracle USA, Inc. - Hardware and Software, Engineered to Work Together.
 Office: Blountville, TN
 
 -- 
 MySQL General Mailing List
 For list archives: http://lists.mysql.com/mysql
 To unsubscribe:http://lists.mysql.com/mysql?unsub=j...@newcenturydata.com
 


--
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/mysql?unsub=arch...@jab.org



MySQL Enterprise support now at Oracle?

2011-03-10 Thread Jim McNeely
Wow! We paid for MySQL enterprise plus enterprise support back in the good old 
days before ORACLE bought MySQL. I just sacrificed the sacred chicken and 
sprinkled the blood around my computer and went to sign up for support at 
support.oracle.com. After wading through the crappy Flash interface and telling 
them my dog's mother's maiden name and all, 3 hours later I got an email saying 
I was Approved. I feel better. 

I read a few of the numerous tutorials on HOW TO USE THEIR HELP THING and I 
still don't understand it. I did a search for MySQL on their site and got 
NOTHING. Have they dropped all support for MySQL? Is there somewhere else we 
should go to pay for one on one support for things like my little join query 
problem? Has anyone else had a good experience with Oracle's MySQL support? 
Maybe this is a bad dream and I'll wake up soon.

Thanks,

Jim McNeely
--
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/mysql?unsub=arch...@jab.org



Re: MySQL Enterprise support now at Oracle?

2011-03-10 Thread Jim McNeely
Shawn Green works for Oracle and has been very helpful, and I am happy to eat a 
little bit of shoe leather!

Thanks Shawn!

Jim

On Mar 10, 2011, at 11:11 AM, Jim McNeely wrote:

 Wow! We paid for MySQL enterprise plus enterprise support back in the good 
 old days before ORACLE bought MySQL. I just sacrificed the sacred chicken and 
 sprinkled the blood around my computer and went to sign up for support at 
 support.oracle.com. After wading through the crappy Flash interface and 
 telling them my dog's mother's maiden name and all, 3 hours later I got an 
 email saying I was Approved. I feel better. 
 
 I read a few of the numerous tutorials on HOW TO USE THEIR HELP THING and I 
 still don't understand it. I did a search for MySQL on their site and got 
 NOTHING. Have they dropped all support for MySQL? Is there somewhere else we 
 should go to pay for one on one support for things like my little join query 
 problem? Has anyone else had a good experience with Oracle's MySQL support? 
 Maybe this is a bad dream and I'll wake up soon.
 
 Thanks,
 
 Jim McNeely
 --
 MySQL General Mailing List
 For list archives: http://lists.mysql.com/mysql
 To unsubscribe:http://lists.mysql.com/mysql?unsub=j...@newcenturydata.com
 


--
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/mysql?unsub=arch...@jab.org



Help with slow query

2011-03-09 Thread Jim McNeely
I am trying to set up an export query which is executing very slowly, and I was 
hoping I could get some help. Here is the query:

SELECT a.IdAppt, a.IdPatient, 
p.NameLast, p.NameFirst, p.NameMI, 
a.IdProcedure, a.ProcName, CAST(CONCAT(a.ApptDate, ' ', a.ApptTimeOut) AS CHAR) 
ApptDateTime, a.ApptLenMin Duration,
a.Room Primary_Resource_ID, 'R' as Primary_Resource_Type, 
t.Note, c.IdPractAffil DoctorID, af.PractName DoctorName, a.ICD9code ICD9, 
'??' Diagnosis_free_test

from Appt_ a LEFT JOIN (patient_ p, today_ t, Copy_ c, Affil_ af) 
ON (a.IdPatient = p.IdPatient 
AND a.IdPatientDate = t.IdPatientDate 
AND CONCAT(a.IdAppt, '0') = c.IdApptType 
AND a.IdPriCarePhy = af.IdAffil)
WHERE a.ApptDate = '2009-03-01'; 

p.IdPatient, t.IdPatientDate, c.IdApptType, and af.IdAffil are all indexed. 
Also I selectively took out join parameters until there was nothing but a join 
on the patient table, and it was still slow, but when I took that out, the 
query was extremely fast. What might I be doing wrong?

Thanks,

Jim McNeely
--
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/mysql?unsub=arch...@jab.org



auto_increment by more than 1

2011-02-23 Thread Jim McNeely
Is there a way to set the auto-increment for a particular table to increase by 
some number more than one, like maybe 10?

Thanks in advance,

Jim McNeely
--
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/mysql?unsub=arch...@jab.org



Re: auto_increment by more than 1

2011-02-23 Thread Jim McNeely
I have read the manual, and you're right, the auto_increment_increment is a 
system wide setting. I only want this on one table. I am in this instance 
creating ID's for a separate system via HL7 for a Filemaker system, and 
FileMaker is too lame and slow to actually spit out an ID in time for the 
foreign system to function correctly within its workflow requirements for the 
end users. So, I am going to offset the ID's so that MySQL issues ID's on the 
10's, and FM issues ID's on the 5's. That way, it works similar to the way some 
people set up replication, but I only need it on this one table, I want the 
other tables to continue to increment normally. I don't want to do this in 
another instance of MySQL or another DB because I am otherwise trying to keep 
it simple. Here is the solution I came up with:

CREATE DEFINER=`user`@`%` TRIGGER ``.`p_number_zzk`
BEFORE INSERT ON ``.`p_number`
FOR EACH ROW
BEGIN
DECLARE maxy INT;
SET maxy = (SELECT ROUND(MAX(zzk),-1) from p_number);
IF ! NEW.zzk THEN
SET NEW.zzk = (maxy + 10);
END IF;
SET NEW.IdPatient = CONCAT('P', NEW.zzk);
END

It's probably ugly, but it works. Any objections to this? The zzk and IdPatient 
fields have unique validations on them.

Thanks,

Jim McNeely

On Feb 23, 2011, at 12:48 PM, Singer X.J. Wang wrote:

 Its theoretically possible, but its a hackish solution.. can you explain why 
 you want this?
 
 
 
 On Wed, Feb 23, 2011 at 15:46, Singer X.J. Wang w...@singerwang.com wrote:
 Right.. and that's not his question..
 
 
 
 On Wed, Feb 23, 2011 at 15:34, Shawn Green (MySQL) shawn.l.gr...@oracle.com 
 wrote:
 On 2/23/2011 12:41, Jim McNeely wrote:
 Is there a way to set the auto-increment for a particular table to increase 
 by some number more than one, like maybe 10?
 
 Thanks in advance,
 
 Jim McNeely
 
 
 The manual is your friend. Don't be afraid of it :)
 
 http://dev.mysql.com/doc/refman/5.5/en/replication-options-master.html#sysvar_auto_increment_increment
 
 -- 
 Shawn Green
 MySQL Principal Technical Support Engineer
 Oracle USA, Inc. - Hardware and Software, Engineered to Work Together.
 Office: Blountville, TN
 
 
 -- 
 MySQL General Mailing List
 For list archives: http://lists.mysql.com/mysql
 To unsubscribe:http://lists.mysql.com/mysql?unsub=w...@singerwang.com
 
 
 
 --
 The best compliment you could give Pythian for our service is a referral.
 



Re: auto_increment by more than 1

2011-02-23 Thread Jim McNeely
This doesn't work; it just sets the starting number. It will still 
increment by one unless you set the auto_increment_increment system variable, 
but that affects all the tables in the DB and not just the particular table.
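
For reference, these are the server-wide knobs in question, shown only to make
the scope clear (they apply to every table the server handles, which is exactly
why they don't fit the single-table case):

SET GLOBAL auto_increment_increment = 10;
SET GLOBAL auto_increment_offset    = 5;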

Thanks,

Jim McNeely

On Feb 23, 2011, at 10:26 AM, Carsten Pedersen wrote:

 Den 23-02-2011 18:41, Jim McNeely skrev:
 Is there a way to set the auto-increment for a particular table to increase 
 by some number more than one, like maybe 10?
 
 Thanks in advance,
 
 Jim McNeely
 
 CREATE TABLE t (
 ...
 ) AUTO_INCREMENT=10;
 
 
 / Carsten
 
 -- 
 MySQL General Mailing List
 For list archives: http://lists.mysql.com/mysql
 To unsubscribe:http://lists.mysql.com/mysql?unsub=j...@newcenturydata.com
 


--
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/mysql?unsub=arch...@jab.org



Re: Altering database size to add more space

2010-06-25 Thread Jim Lyons
I think you're confusing table size with database size.  The original post
grouped by schema so it appears the question concerns database size.  I
don't believe mysql imposes any limits on that.  Is there a limit on the
number of tables you can have in a schema imposed by mysql?


On Fri, Jun 25, 2010 at 4:13 AM, Johan De Meersman vegiv...@tuxera.bewrote:

 On Fri, Jun 25, 2010 at 7:11 AM, Prabhat Kumar aim.prab...@gmail.comwrote:

 In the case of MyISAM it will grow up to the space on your data drive or the max
 file size limited by the OS..



 Not entirely correct. There is some kind of limit to a MyISAM file that has
 to do with pointer size - I've encountered it several years ago.

 You shouldn't be encountering it, in most circumstances, but that's what
 the max_data_length column in *show table status* is about.

 Before 5.0.6, the default max datafile size was 4G, but that's been upped
 to 256T now.

 If you're really running in to this have a look at the various advanced
 options for create/alter table, like avg_row_length and max_rows; as well as
 the variable myisam_data_pointer_size.

 Now what Sarkis is running into, is more of a logical error: data_free does
 not tell you how much free space there is *for data*, but how much free
 space there is *in the existing datafile*. That is, it really tells you how
 much space in your file has become free by deleting rows et al. This also
 explains why it's always 0 for InnoDB tables :-)
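 
 A quick way to see both numbers mentioned above, plus the knobs for raising the
 MyISAM ceiling (the table name and values here are only examples):
 
 SHOW TABLE STATUS LIKE 'mytable';   -- look at Max_data_length and Data_free
 ALTER TABLE mytable MAX_ROWS = 1000000000 AVG_ROW_LENGTH = 200;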


 --
 Bier met grenadyn
 Is als mosterd by den wyn
 Sy die't drinkt, is eene kwezel
 Hy die't drinkt, is ras een ezel




-- 
Jim Lyons
Web developer / Database administrator
http://www.weblyons.com


Re: Altering database size to add more space

2010-06-24 Thread Jim Lyons
What do you mean time to increase?  What tells you that?

A database's size is determined by the amount of available disk space.  If
you need more than the filesystem it is currently on has, you can
move the entire schema (which is synonymous with database) to another
filesystem and symlink it.  You can also store individual tables and table
files on other file systems and symlink those.  Either way, you have the
total collection of disk space available to you.  This assumes a Linux OS.

If your innodb tables are being restricted, you need to see if you have a
max size defined for the table space and if that is what you're bumping
into.
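
A quick way to check whether such a cap is set (the 10G below is only an
example of what the value can look like):

SHOW VARIABLES LIKE 'innodb_data_file_path';
-- e.g. ibdata1:10M:autoextend:max:10G would stop growing at 10G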



On Thu, Jun 24, 2010 at 9:13 AM, Sarkis Karayan skara...@gmail.com wrote:

 I feel like I am missing something, because I am not able to find the
 answer to this simple question.

 How can I increase the size of a database?

 I am using the following query to check the available space and notice
 that it is time to increase.

 SELECT
  table_schema AS 'Db Name',
  Round( Sum( data_length + index_length ) / 1024 / 1024, 3) AS 'Db Size
 (MB)',
  Round( Sum( data_free ) / 1024 / 1024, 3 ) AS 'Free Space (MB)'
 FROM information_schema.tables
 GROUP BY table_schema ;


 Thank you.

 --
 MySQL General Mailing List
 For list archives: http://lists.mysql.com/mysql
 To unsubscribe:http://lists.mysql.com/mysql?unsub=jlyons4...@gmail.com




-- 
Jim Lyons
Web developer / Database administrator
http://www.weblyons.com


Re: list rows with no recent updates

2010-06-14 Thread Jim Lyons
Do you have a timestamp field on this table?

There's no way of seeing when a row was last updated unless you have a
timestamp field that automatically updates for any change (that's *any*
change - not necessarily the ones you want to keep track of) or creating
your own and updating them either on the update statement itself or in a
trigger.

You can pretty much tell when the last time an entire table was updated by
the date on the MYD or ibd file.

I'm assuming you don't want to constantly parse the binlog or general log.
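
A minimal sketch of the timestamp-column route, reusing the tbl_xyz names from
the post below (the last_updated column name is made up):

ALTER TABLE tbl_xyz
  ADD COLUMN last_updated TIMESTAMP NOT NULL
      DEFAULT CURRENT_TIMESTAMP ON UPDATE CURRENT_TIMESTAMP;

-- rows with no update in the last 2 hours
SELECT prod_id, prod_code
FROM tbl_xyz
WHERE last_updated < NOW() - INTERVAL 2 HOUR;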

On Mon, Jun 14, 2010 at 4:02 PM, MadTh madan.feedb...@gmail.com wrote:

 Hi,


 I ran a update command on around 2700 rows inside a mysql database table
 which has around 3000 table rows to change the ( say)  price of each item (
 with unique ID. unique product code).

 like:

 mysql UPDATE tbl_xyz  set listprice='9.45' where prod_id='3069' and
 prod_code='a0071';
 Query OK, 1 row affected (0.00 sec)
 Rows matched: 1  Changed: 1  Warnings: 0




 How can I list rows with no recent updates ( or the once where the above
 updates were not done)  or say with no updates in last 2 hours?





 Thank you.




-- 
Jim Lyons
Web developer / Database administrator
http://www.weblyons.com


Re: Geting current user pasword.

2010-06-03 Thread Jim Lyons
You mean  the mysql password of the user?  No, you can't get that,
even the server can't get it, at least not the clear-text version.
When you create/change your password an encrypted version is stored in
the mysql.user table along with the hostname.  If you have SELECT
permission on that table, you can get the encrypted version.  You
could use that for some sort of separate authentication by encrypting
a user-supplied password and comparing the 2 encrypted strings but you
can't get the actual password.

In other words, if the password is 'cat' you only see
'*FD98809C60DD51B6B380DB17B72F9A9E2FE1928F in mysql.user.  You could
have code that checked a user-supplied password, like:

if password(@pwd) = '*FD98809C60DD51B6B380DB17B72F9A9E2FE1928F'

but that's it.
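
A minimal sketch of that separate-authentication idea, assuming SELECT rights
on mysql.user (the user name and password here are made up):

SELECT user, host
FROM mysql.user
WHERE user = 'someuser'
  AND password = PASSWORD('cat');   -- returns a row only if 'cat' is the right password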

Jim

On Thu, Jun 3, 2010 at 12:12 PM, Guillermo srguiller...@yahoo.com.ar wrote:

 Hello,
   I need to get the user and password from the current session. I found the
 user() function, which gets the username, is there anything like that to get
 the password ?

 Thx

 Guillermo





 --
 MySQL General Mailing List
 For list archives: http://lists.mysql.com/mysql
 To unsubscribe:    http://lists.mysql.com/mysql?unsub=jlyons4...@gmail.com





-- 
Jim Lyons
Web developer / Database administrator
http://www.weblyons.com

--
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/mysql?unsub=arch...@jab.org



Re: Using RAND to get a unique ID that has not been used yet

2010-05-28 Thread Jim Lyons
If your specs are that specific (IDs must be between 1 and 999999)
then you could create a 999999-row table with one integer column and
prefill it with the numbers 1 to 999999 in random order.

Then you could write a function that would select and return the first
number in the table, then delete that record so you would not reuse
it.

Once you've done the work of sorting 999999 numbers in random order
(which can be done any which way) it's easy and you don't have to loop
an indeterminate number of times.  Otherwise you would be looping an increasing
number of times as you begin to fill up the table.
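
A bare-bones sketch of that pool table (the names are made up; filling it with
the numbers 1 to 999999 in random order is done once, up front):

CREATE TABLE id_pool (id INT UNSIGNED NOT NULL PRIMARY KEY);

-- hand out one unused ID and retire it
SELECT @next_id := id FROM id_pool LIMIT 1;
DELETE FROM id_pool WHERE id = @next_id;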

Jim

On Fri, May 28, 2010 at 10:38 AM, Andre Matos andrema...@mineirinho.org wrote:
 Hi All,

 I have a table that uses auto_increment to generate the Id automatically, and it is 
 working fine. However, I need to create a new table where the Id must be a 
 number generated randomly, so I cannot use the auto_increment.

 MySQL has a function RAND. So I could use something like this:

 SELECT FLOOR(RAND() * COUNT(*)) AS RandId FROM mytable

 But, let's suppose that the RandId is a number that was already used in the 
 table. Then I need to run the SELECT again and again until I find a number 
 that hasn't been used.

 Is there a way to have this SELECT to loop until it finds a number that 
 hasn't been used?

 The RandId must be only numbers and length of 6 (from 1 to 999999). No other 
 character is allowed.

 Thanks for any help!

 Andre

 --
 Andre Matos
 andrema...@mineirinho.org





 --
 MySQL General Mailing List
 For list archives: http://lists.mysql.com/mysql
 To unsubscribe:    http://lists.mysql.com/mysql?unsub=jlyons4...@gmail.com





-- 
Jim Lyons
Web developer / Database administrator
http://www.weblyons.com

--
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/mysql?unsub=arch...@jab.org



Re: Analysis of a weeks worth of general log

2010-04-20 Thread Jim Lyons
Has anyone tried using the log_output option in mysql 5.1 to have the
general log put into a table and not a flat file?  I used it for a while
before having to downgrade back to 5.0 but thought it was a great idea.  I'm
curious to see if anyone feels it helps analysis.
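
For what it's worth, a sketch of what that looks like in 5.1 (the per-minute
count is just one example of the kind of analysis that becomes easy):

SET GLOBAL log_output  = 'TABLE';
SET GLOBAL general_log = 'ON';

SELECT DATE_FORMAT(event_time, '%Y-%m-%d %H:%i') AS per_minute,
       COUNT(*) AS statements
FROM mysql.general_log
GROUP BY per_minute
ORDER BY per_minute;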

On Tue, Apr 20, 2010 at 6:02 AM, Imran Chaudhry ichaud...@gmail.com wrote:

 I have 7 days worth of general log data totalling 4.4GB.

 I want to analyze this data to get:

 a) queries per second, minute, hour and day
 b) a count of the number of selects versus write statements (delete,
 insert, replace and update)
 c) a variation of the above with select, replace, delete and insert
 versus update

 How can I do this?

 I've looked at mysqlsla which is complex, works well but does not
 quite get what I want. [1]

 I looked at MyProfi 0.18 which looks like it will get some of the
 answers but runs out of memory working on the smallest log file
 (mysql.log) even with memory_limit in php.ini set to 1024MB [2]

 -rw-r- 1 imran imran 268M 2010-04-19 13:03 mysql.log
 -rw-r- 1 imran imran 721M 2010-04-19 12:56 mysql.log.1
 -rw-r- 1 imran imran 737M 2010-04-19 13:05 mysql.log.2
 -rw-r- 1 imran imran 554M 2010-04-19 13:06 mysql.log.3
 -rw-r- 1 imran imran 499M 2010-04-19 13:02 mysql.log.4
 -rw-r- 1 imran imran 568M 2010-04-19 12:59 mysql.log.5
 -rw-r- 1 imran imran 488M 2010-04-19 13:01 mysql.log.6

 Any pointers please? If all else fails, I will prolly write a perl
 script to munge it.

 [1] http://hackmysql.com/mysqlsla
 [2] http://myprofi.sourceforge.net

 --
 GPG Key fingerprint = B323 477E F6AB 4181 9C65  F637 BC5F 7FCC 9CC9 CC7F

 --
 MySQL General Mailing List
 For list archives: http://lists.mysql.com/mysql
 To unsubscribe:http://lists.mysql.com/mysql?unsub=jlyons4...@gmail.com




-- 
Jim Lyons
Web developer / Database administrator
http://www.weblyons.com


Re: How to corrupt a database please???

2010-04-18 Thread Jim Lyons
You can remove the innodb logs and/or the innodb data file.  You can also
remove some of the individual .idb files (if you're using file-per-table
option).

On Sun, Apr 18, 2010 at 11:25 AM, Nurudin Javeri nsjav...@idh.com wrote:

 Hi all, I am hiring a few new junior DBA's and I want to put them thru a
 simple db repair training.  Does anyone know how I can deliberately corrupt
 a MyISAM and InnoDB database in different ways please?  So what I want to do
 is corrupt 3 MyISAM 100gb databases, 3 InnoDB 100gb databases - ALL WITH
 DIFFERENT ERROR MESSAGES and have these newbies fix them in a 2 hour period
 :-)  I have fixed oodles of db's but NEVER thought I would see the day where
 I would WANT to corrupt a db on purpose, but that day is here and am looking
 for advice please.

 Thanks...

 Nunu

 --
 MySQL General Mailing List
 For list archives: http://lists.mysql.com/mysql
 To unsubscribe:http://lists.mysql.com/mysql?unsub=jlyons4...@gmail.com




-- 
Jim Lyons
Web developer / Database administrator
http://www.weblyons.com


Re: MySQL Slave is almost 1 day behind

2010-03-26 Thread Jim Lyons
A few things to keep in mind:

1: the master may have several threads feeding into the binlog at a time,
but a slave only executes in a single thread.  Are you throwing more stuff
at the slave in multiple mysql threads?

2: is there something else going on with the slave box?  some big backup or
gzip or something that would chew up cycles?  any big mysql query or update
going on?

3: have you checked the disks on your slave.  Whenever I notice a slave
falling behind for an extended period of time, I ask the sys admins to check
the disk drives - if you're using some kind of RAID, they can become
degraded.

4: you might also check the slave's mysql error log to see if there's any
hint there.





On Fri, Mar 26, 2010 at 9:45 AM, Steven Staples sstap...@mnsi.net wrote:

 Good day :)

 We've had our master/slave server running for a while now, and just
 yesterday, we started getting behind.
 Not entirely sure what happened, but it is getting further and further
 behind.

 (master server)
 mysql show master status\G
 *** 1. row ***
File: mysql-bin.000280
Position: 58090245
Binlog_Do_DB: admin_server,baf,freeradius,radius
 Binlog_Ignore_DB:
 1 row in set (0.00 sec)


 (slave server)
 mysql show slave status\G
 *** 1. row ***
 Slave_IO_State: Waiting for master to send event
Master_Host: 192.168.7.101
Master_User: slave_user
Master_Port: 3306
  Connect_Retry: 60
Master_Log_File: mysql-bin.000280
Read_Master_Log_Pos: 55208258
 Relay_Log_File: backup-relay-bin.000530
  Relay_Log_Pos: 96663109
  Relay_Master_Log_File: mysql-bin.000259
   Slave_IO_Running: Yes
  Slave_SQL_Running: Yes
Replicate_Do_DB: admin_server,baf,freeradius,radius
Replicate_Ignore_DB:
 Replicate_Do_Table:
 Replicate_Ignore_Table:
Replicate_Wild_Do_Table:
 Replicate_Wild_Ignore_Table:
 Last_Errno: 0
 Last_Error:
   Skip_Counter: 0
Exec_Master_Log_Pos: 96662972
Relay_Log_Space: 2211376614
Until_Condition: None
 Until_Log_File:
  Until_Log_Pos: 0
 Master_SSL_Allowed: No
 Master_SSL_CA_File:
 Master_SSL_CA_Path:
Master_SSL_Cert:
  Master_SSL_Cipher:
 Master_SSL_Key:
  Seconds_Behind_Master: 77473
 1 row in set (0.00 sec)

 Now, we are logging the freeradius packets into mysql, and like I said, it
 has been running fine, up until yesterday.   Any idea how the slave would
 get this far behind, and not be generating any errors?

 It is my understanding, that the slave only does update/insert/delete
 queries, so even if there was a lot of select queries on the master, the
 slave wouldn't see them.  We are not running any queries on the slave (it
 was set up for backup purposes, so we could stop the slave and backup
 completely), and we haven't done a backup on the slave in a couple of days
 (yeah, i know... bad bad) so there is really no reason for this.

 Can anyone help/assist/point me in the right direction to figure out how to
 catch the slave back up to the master?  The master is not being overloaded,
 it is keeping up no problem, and the backup server is 8x the server than
 the
 application server, so it shoulnd't even be an i/o or cpu issue.

 Please help! :)


 Thanks in advance
 Steven Staples


 --
 MySQL General Mailing List
 For list archives: http://lists.mysql.com/mysql
 To unsubscribe:http://lists.mysql.com/mysql?unsub=jlyons4...@gmail.com




-- 
Jim Lyons
Web developer / Database administrator
http://www.weblyons.com


Re: MySQL Encryption

2010-03-20 Thread Jim

Hi Neil.

Information (in most cases a string < 100 chars, but that's probably not 
important) that actually needs to be decrypted, so a hash won't do.


Jim

On 3/20/2010 5:09 PM, Tompkins Neil wrote:

Hi

What sort of information are you looking to encrypt ?  If it is for user
passwords I'd recommend SHA256 which is one way encryption.  Or are you
looking to encrypt more sensitive information like card holder data ?

Regards
Neil

On Fri, Mar 19, 2010 at 4:22 PM, Jim j...@lowcarbfriends.com wrote:

Thanks for the reply, John.

What you are describing seems to be the approach I've seen on the
few places I've seen this topic discussed.

I've been considering something along those lines, essentially a two
part key.

Part one of the key is made from some data that is in the record I
want to protect and it is different for each record, very much like
you suggest.

Part two of the key is some constant key value I store somewhere.

The full key is created based on some defined manipulation of the
two parts, much like you suggest I believe.

But, then the issue comes of where to store part two of the key.

In your case, you are storing it in a stored procedure and I assume
that stored procedure resides on the same mysql server that holds
the data you want to protect.

That's where I start questioning the security of that approach. The
assumption being if someone got full control of that mysql box then
essentially all your eggs are in one basket.

I was thinking in terms of a most secure solution, you could have a
separate server (perhaps a mysql server) that for the purpose of
this example only serves part two of the key. That server is well
protected and non-public as is the mysql server that stores the data.

This way, two servers have to be compromised in order to gain all
the parts of the key and data. But, of course, that's kind of a
waste of a server and can you afford that and the extra resources
that go along with maintaining another server.

So, I was thinking, is it really so bad to store only one part of
the key in source code. That source code resides on a separate
server from the mysql server. Yes, the server that stores the source
code is a public server, but at least it's two servers that have to
be compromised to give up all the components needed to gain access
to the encrypted data.

I suppose maybe if I ask you to expand on what you mean by the
following that would be helpful to further understand your approach:

I then store the logic in a database stored procedure and use
database security to prevent unauthorised access.

Thanks,
Jim



On 3/19/2010 6:39 AM, John Daisley wrote:

Jim,

I tend to derive a key based on a separate character string and the
contents of the data in the same or a related table. This means
each row
has a unique encryption key and you never have to have the whole key
stored somewhere (you don't even know it :p ). Biggest advantage
to this
is should someone get hold of your data they have to work out your
character string and the logic for deriving the key or attempt
to hack
each and every individual row of the table because no two rows
will ever
have the same key.

For example, in a table with the columns `username`, `email_address`,
`password`, `jointime` (where password is encrypted with AES_ENCRYPT) I
may use a character string of 'awfully_complex_char_string-' and derive
the key like so


CONCAT('awfully_complex_char_string-', SUBSTRING(`email_address`,1,LOCATE('@',`email_address`)-1), CAST(`jointime` AS CHAR))

I then store the logic in a database stored procedure and use
database
security to prevent unauthorised access. At no point do I have this
logic outside the database in any external application or
script! That
would be silly :)

Regards

John Daisley

On Thu, Mar 18, 2010 at 7:26 PM, Jim j...@lowcarbfriends.com wrote:

In terms of encryption functions AES_DECRYPT and
AES_ENCRYPT, can
anyone point to any good links or offer any suggestions in
terms of
best practices on storage of the associated symmetric key? I've
found very little information on this when searching.

Does MySQL offer any asymmetric encryption capabilities?

What are people using in terms of a good solution for encrypting
specific columns of table data while providing protection of
the key?

Thanks,
Jim

Re: MySQL Encryption

2010-03-19 Thread Jim

Thanks for the reply, John.

What you are describing seems to be the approach I've seen on the few 
places I've seen this topic discussed.


I've been considering something along those lines, essentially a two 
part key.


Part one of the key is made from some data that is in the record I want 
to protect and it is different for each record, very much like you suggest.


Part two of the key is some constant key value I store somewhere.

The full key is created based on some defined manipulation of the two 
parts, much like you suggest I believe.


But, then the issue comes of where to store part two of the key.

In your case, you are storing it in a stored procedure and I assume that 
stored procedure resides on the same mysql server that holds the data 
you want to protect.


That's where I start questioning the security of that approach. The 
assumption being if someone got full control of that mysql box then 
essentially all your eggs are in one basket.


I was thinking in terms of a most secure solution, you could have a 
separate server (perhaps a mysql server) that for the purpose of this 
example only serves part two of the key. That server is well protected 
and non-public as is the mysql server that stores the data.


This way, two servers have to be compromised in order to gain all the 
parts of the key and data. But, of course, that's kind of a waste of a 
server and can you afford that and the extra resources that go along 
with maintaining another server.


So, I was thinking, is it really so bad to store only one part of the 
key in source code. That source code resides on a separate server from 
the mysql server. Yes, the server that stores the source code is a 
public server, but at least it's two servers that have to be compromised 
to give up all the components needed to gain access to the encrypted data.


I suppose maybe if I ask you to expand on what you mean by the following 
that would be helpful to further understand your approach:
I then store the logic in a database stored procedure and use database 
security to prevent unauthorised access.


Thanks,
Jim


On 3/19/2010 6:39 AM, John Daisley wrote:

Jim,

I tend to derive a key based on a separate character string and the
contents of the data in the same or a related table. This means each row
has a unique encryption key and you never have to have the whole key
stored somewhere (you don't even know it :p ). Biggest advantage to this
is should someone get hold of your data they have to work out your
character string and the logic for deriving the key or attempt to hack
each and every individual row of the table because no two rows will ever
have the same key.

For example, in a table with the columns `username`, `email_address`,
`password`, `jointime` (where password is encrypted with AES_ENCRYPT) I
may use a character string of 'awfully_complex_char_string-' and derive
the key like so

CONCAT('awfully_complex_char_string-', SUBSTRING(`email_address`,1,LOCATE('@',`email_address`)-1), CAST(`jointime` AS CHAR))

I then store the logic in a database stored procedure and use database
security to prevent unauthorised access. At no point do I have this
logic outside the database in any external application or script! That
would be silly :)
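
A rough sketch of how the pieces fit together, using the column names above
(the table name and sample values are invented; NOW() is constant within a
single statement, so the key's jointime part matches the stored jointime):

INSERT INTO users (username, email_address, jointime, password)
VALUES ('jim', 'jim@example.com', NOW(),
        AES_ENCRYPT('the secret',
            CONCAT('awfully_complex_char_string-',
                   SUBSTRING('jim@example.com', 1, LOCATE('@', 'jim@example.com') - 1),
                   CAST(NOW() AS CHAR))));

SELECT username,
       AES_DECRYPT(password,
           CONCAT('awfully_complex_char_string-',
                  SUBSTRING(email_address, 1, LOCATE('@', email_address) - 1),
                  CAST(jointime AS CHAR))) AS clear_text
FROM users;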

Regards

John Daisley

On Thu, Mar 18, 2010 at 7:26 PM, Jim j...@lowcarbfriends.com wrote:

In terms of encryption functions AES_DECRYPT and AES_ENCRYPT, can
anyone point to any good links or offer any suggestions in terms of
best practices on storage of the associated symmetric key? I've
found very little information on this when searching.

Does MySQL offer any asymmetric encryption capabilities?

What are people using in terms of a good solution for encrypting
specific columns of table data while providing protection of the key?

Thanks,
Jim

--
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:
http://lists.mysql.com/mysql?unsub=john.dais...@butterflysystems.co.uk





--
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/mysql?unsub=arch...@jab.org



MySQL Encryption

2010-03-18 Thread Jim
In terms of encryption functions AES_DECRYPT and AES_ENCRYPT, can anyone 
point to any good links or offer any suggestions in terms of best 
practices on storage of the associated symmetric key? I've found very 
little information on this when searching.


Does MySQL offer any asymmetric encryption capabilities?

What are people using in terms of a good solution for encrypting 
specific columns of table data while providing protection of the key?


Thanks,
Jim

--
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/mysql?unsub=arch...@jab.org



Re: Remove - Foreign key constraint in innodb

2010-02-25 Thread Jim Lyons
Yes - you can drop a foreign key constraint, use the 'alter table ... drop
foreign key ...' command.  If you get an error message, post the error
message.
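
For example (the table and constraint names here are invented; SHOW CREATE
TABLE shows the real constraint name to use):

SHOW CREATE TABLE child_table\G
ALTER TABLE child_table DROP FOREIGN KEY child_table_ibfk_1;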

On Thu, Feb 25, 2010 at 6:03 AM, Vikram A vikkiatb...@yahoo.in wrote:

 Hi,

 I tried to remove foreign key constraint in innodb table.

 I tried with different ways; but i am unable to drop the constraints.


 http://lists.mysql.com/mysql/113053
 It says that dropping the foreign key constraint is not possible in the
 innodb engine.

 Is it so? or any other possibilities? I am using mysql 5.1.32

 Please, Can any one you help me?

 Thank you

 VIKRAM A







-- 
Jim Lyons
Web developer / Database administrator
http://www.weblyons.com


Re: Garbage collecting/trimming ibdata1

2010-02-23 Thread Jim Lyons
Your innodb data file just auto-extended until you either reached its max or
ran out of disk space if you had no max.

The only way I know to reduce it is to dump all the innodb tables, drop the
innodb data file and logs (and drop the innodb tables if you're using
file-per-table), restart mysql, let it rebuild the innodb files, and reload
the innodb tables from the dump file.

On Wed, Feb 24, 2010 at 12:59 AM, Yang Zhang yanghates...@gmail.com wrote:

 I recently tried to run

  INSERT INTO general_log SELECT * FROM mysql.general_log;

 but that failed a few hours in because I ran out of disk space.
 'SELECT COUNT(*) FROM general_log' returns 0, yet ibdata1 is still
 49GB (started at 3GB before the INSERT; the source mysql.general_log,
 a CSV table, was initially 43GB). I tried TRUNCATE then DROP on
 general_log, then restarted mysqld, to no avail.

 From Googling, the only thing that appears remotely relevant to
 garbage collection is OPTIMIZE TABLE, but I'm not sure how to apply it
 in this case (now that the table has been dropped). How do I reclaim
 my disk space? Thanks in advance.
 --
 Yang Zhang
 http://www.mit.edu/~y_z/ http://www.mit.edu/%7Ey_z/

 --
 MySQL General Mailing List
 For list archives: http://lists.mysql.com/mysql
 To unsubscribe:http://lists.mysql.com/mysql?unsub=jlyons4...@gmail.com




-- 
Jim Lyons
Web developer / Database administrator
http://www.weblyons.com


Re: max() can't work

2010-02-06 Thread Jim Lyons
Yes - you must use the subselect.  Or, you can set a variable like:

select @max := max(movid) from table_name;
select * from table_name where movid = @max;


On Sat, Feb 6, 2010 at 8:34 AM, tech list bluetm...@gmail.com wrote:

 select * from table_name where movid = max(movid);

 why the sql above can't work?
 Shall I use a sub-select instead?

 select * from table_name where movid = (select max(movid) from table_name)
 ?


 Thanks in advance.

 --
 MySQL General Mailing List
 For list archives: http://lists.mysql.com/mysql
 To unsubscribe:http://lists.mysql.com/mysql?unsub=jlyons4...@gmail.com




-- 
Jim Lyons
Web developer / Database administrator
http://www.weblyons.com


Re: max() can't work

2010-02-06 Thread Jim Lyons
Why in the world would you think select * from table_name group by movid
having max(movid) would work? It seems to compile without errors but
doesn't give you what you seem to want.

This would work:

select * from table_name group by movid having movid = (select max(movid)
from table_name)

although then you're not really grouping so the GROUP BY is useless.

On Sat, Feb 6, 2010 at 11:01 AM, Vikram A vikkiatb...@yahoo.in wrote:

 hi

 It is not working,

 select * from table_name group by movid having max(movid)

 but it is working fine

 select * from table_name order by movid desc limit 1





 
 From: Roland Kaber roland.ka...@education.lu
 To: armando armand...@gmail.com
 Cc: tech list bluetm...@gmail.com; mysql@lists.mysql.com
 Sent: Sat, 6 February, 2010 8:28:06 PM
 Subject: Re: max() can't work

 The max() function is an aggregate function which can be used in
 conjunction with GROUP BY in the SELECT or HAVING clause:
 http://dev.mysql.com/doc/refman/5.0/en/group-by-functions.html . This
 code should work:
 select * from table_name group by movid having max(movid).

 However, there is a simpler and more efficient solution:
 select * from table_name order by movid desc limit 1.

 I hope this helps.

 Best regards
 Roland Kaber

 armando wrote:
  the field movid is type integer or varchar ?
 
 
  2010/2/6 tech list bluetm...@gmail.com
 
 
  select * from table_name where movid = max(movid);
 
  why the sql above can't work?
  Shall I use a sub-select instead?
 
  select * from table_name where movid = (select max(movid) from
 table_name)
  ?
 
 
  Thanks in advance.
 
  --
  MySQL General Mailing List
  For list archives: http://lists.mysql.com/mysql
  To unsubscribe:
 http://lists.mysql.com/mysql?unsub=armand...@gmail.com
 
 
 
 
 
 







-- 
Jim Lyons
Web developer / Database administrator
http://www.weblyons.com


Re: Selecting Dates

2010-02-01 Thread Jim Lyons
Shouldn't it be:
SELECT * FROM orders WHERE order_date >= '2010-01-01' AND order_date <=
'2010-01-30' ORDER BY order_date;

?

use >= and <= and repeat the column name.  The column's datatype should be
datetime or timestamp;
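
The same filter can also be written with BETWEEN, which is what the first
attempt was reaching for (BETWEEN is inclusive on both ends):

SELECT * FROM orders
WHERE order_date BETWEEN '2010-01-01' AND '2010-01-30'
ORDER BY order_date;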

On Mon, Feb 1, 2010 at 8:25 AM, sureshkumar...@gmail.com wrote:

 Hi
 What is the datatype of the column order_date. Give the full form of the
 date for the between condition.

 Thanks
 Suresh Kuna
 MySQL DBA
 --Original Message--
 From: ML
 To: mysql@lists.mysql.com
 Subject: Selecting Dates
 Sent: Feb 1, 2010 7:05 AM

 Hi All,

 Switching from Oracle to MySQL, I seem to be having some difficulty
 selecting dates using between or even where >= and <= like:

 SELECT * FROM orders WHERE order_date BETWEEN='2010-01-01' AND '2010-01-30'
 ORDER BY order_date;

 or

 SELECT * FROM orders WHERE order_date >='2010-01-01' AND <= '2010-01-30'
 ORDER BY order_date;

 Neither of these work.

 What am I missing?

 -ML

 --
 MySQL General Mailing List
 For list archives: http://lists.mysql.com/mysql
 To unsubscribe:
 http://lists.mysql.com/mysql?unsub=sureshkumar...@gmail.com



 Sent from BlackBerry® on Airtel




-- 
Jim Lyons
Web developer / Database administrator
http://www.weblyons.com


Re: How to purposely corrupt a table

2010-01-13 Thread Jim Lyons
I have a script that does this.  It goes through every table in every
database (excluding specific database like information_schema, etc) and
issues the command:

select 1 from db.tab limit 1;

At first, I only checked for the error code for a crashed table, but found I
had some other tables that had problems, like a corrupt innodb, that weren't
marked as crashed, so I now just report any table that gets an error.
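
A sketch of how the list of probe statements can be generated (only the system
schemas are excluded here; adjust the exclusion list to taste):

SELECT CONCAT('SELECT 1 FROM `', table_schema, '`.`', table_name, '` LIMIT 1;') AS probe
FROM information_schema.tables
WHERE table_schema NOT IN ('information_schema', 'mysql');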

Jim

On Wed, Jan 13, 2010 at 12:07 PM, Matt Carlson mcarl...@kmcis.com wrote:

 Hello,

 This is going to be a very odd question.  I'm looking for a way to
 purposefully corrupt a table.  The reason behind this, is that I would like
 to write a php script that will go through all databases/tables in the
 environment, and find any tables that are marked as crashed/corrupt, so
 that
 I can shoot an e-mail to appropriate support personnel to repair the table
 in question.

 So two questions really, 1) Does anyone know a good way to cause this to
 happen, and 2) is there already something written that would assist in
 this?


 Thank you,

 Matt



 --
 MySQL General Mailing List
 For list archives: http://lists.mysql.com/mysql
 To unsubscribe:http://lists.mysql.com/mysql?unsub=jlyons4...@gmail.com




-- 
Jim Lyons
Web developer / Database administrator
http://www.weblyons.com


Re: Exporting the result of a Query into excel

2010-01-05 Thread Jim Lyons
You can use the --xml option to the mysql command to get XML output.  You
can also get a tab-delimited output file that can be read into Excel (this
is what I usually do).

To get a tab-delimited file, you can use the --tab option of mysqldump or
start up the mysql program with the -sss option and do a select on the data
you want, redirecting it into an output file.

On Tue, Jan 5, 2010 at 7:17 AM, ishaq gbola ishaq...@yahoo.co.uk wrote:

 Hi all,

 I would like to know if there is a tool or command in MySQL that allows one
 to export the result of a query into Excel format








-- 
Jim Lyons
Web developer / Database administrator
http://www.weblyons.com


Re: Exporting the result of a Query into excel

2010-01-05 Thread Jim Lyons
A command to convert the table mytab in database mydb into a tab-delimited
file mytab.txt might be:

mysql -e'select * from mydb.mytab' -sss > mytab.txt

The -sss is necessary to remove all the formatting stuff that you normally
have in the output of a select statement.

An alternative, if you have a directory *** that mysql can write to ***:

mysqldump --tab=/home/mysql/temp mydb mytab

This will create 2 files in /home/mysql/temp:  mytab.txt and mytab.sql.  The
one you'll want is in mytab.txt.

Then use what transfer tool you have, ftp, scp, winscp, rsync to transfer
the file to another server.

On Tue, Jan 5, 2010 at 7:31 AM, ishaq gbola ishaq...@yahoo.co.uk wrote:

 Thanks a lot for that, but where does this file get saved in and how can i
 copy it to my local host if the database is on a remote server

 --- On Tue, 5/1/10, Jay Ess li...@netrogenic.com wrote:

 From: Jay Ess li...@netrogenic.com
 Subject: Re: Exporting the result of a Query into excel
 To: mysql@lists.mysql.com  Mysql mysql@lists.mysql.com
 Date: Tuesday, 5 January, 2010, 13:22

 ishaq gbola wrote:
  Hi all,
 
  I would like to know if there is a tool or command in mySQL that allows
 one to export the result of query into excel formart
 
 select * from table into outfile 'thefile.txt';
 That can be imported into excel using CSV and using TAB as separator.
 http://code.anjanesh.net/2007/12/export-mysql-data-to-excel-compatible.html

 --
 MySQL General Mailing List
 For list archives: http://lists.mysql.com/mysql
 To unsubscribe:http://lists.mysql.com/mysql?unsub=ishaq...@yahoo.co.uk









-- 
Jim Lyons
Web developer / Database administrator
http://www.weblyons.com


Re: Exporting the result of a Query into excel

2010-01-05 Thread Jim Lyons
Doesn't that depend on how often you want to do it?  If it's a one-time
thing or an occasional thing, it's easier to just dump the table into a
tab-delimited file.  If it's something you want to do on a regular basis,
ODBC is a way you might prefer to go.

But it also would require you to have the authority on a given box to set up
an ODBC connection (which might involve downloading software).  You would
also have to do it on every machine you want to load the data on.  A
tab-delimited file can be transferred to any machine, even stored on a
flash-drive and walked somewhere.


On Tue, Jan 5, 2010 at 1:50 PM, Carsten Pedersen cars...@bitbybit.dkwrote:

 Is there any particular reason not to use the MySQL ODBC driver to import
 the data directly into Excel?

 / Carsten

 Jim Lyons skrev:

  A command to convert the table mytab in database mydb into a tab-delimited
 file mytab.txt might be:

 mysql -e'select * from mydb.mytab' -sss > mytab.txt

 The -sss is necessary to remove all the formatting stuff that you normally
 have in the output of a select statement.

 An alternative, if you have a directory *** that mysql can write to ***:

 mysqldump --tab=/home/mysql/temp mydb mytab

 This will create 2 files in /home/mysql/temp:  mytab.txt and mytab.sql.
  The
 one you'll want is in mytab.txt.

 Then use what transfer tool you have, ftp, scp, winscp, rsync to transfer
 the file to another server.

 On Tue, Jan 5, 2010 at 7:31 AM, ishaq gbola ishaq...@yahoo.co.uk wrote:

  Thanks a lot for that, but where does this file get saved in and how can
 i
 copy it to my local host if the database is on a remote server

 --- On Tue, 5/1/10, Jay Ess li...@netrogenic.com wrote:

 From: Jay Ess li...@netrogenic.com
 Subject: Re: Exporting the result of a Query into excel
 To: mysql@lists.mysql.com  Mysql mysql@lists.mysql.com
 Date: Tuesday, 5 January, 2010, 13:22

 ishaq gbola wrote:

 Hi all,

 I would like to know if there is a tool or command in mySQL that allows

 one to export the result of query into excel formart
 select * from table into outfile thefile.txt;
 That can be imported into excel using CSV and using TAB as separator.

 http://code.anjanesh.net/2007/12/export-mysql-data-to-excel-compatible.html

 --
 MySQL General Mailing List
 For list archives: http://lists.mysql.com/mysql
 To unsubscribe:
 http://lists.mysql.com/mysql?unsub=ishaq...@yahoo.co.uk












-- 
Jim Lyons
Web developer / Database administrator
http://www.weblyons.com


Re: Render row without duplicates

2010-01-03 Thread Jim Lyons
The sql command does eliminate nulls, that's what the clause where ... is
not null does.  It appears you have an application that is rendering the
results so I would check the application to see if it is somehow putting a
null row on the screen.  Or when you copied the SQL into your program you
left out one of the conditions.  Try running the command in a stand-alone
program, like the mysql interactive program or SQLYOG.


Also, the following SQL gives the same result but is a little simpler.  The
derived table is unnecessary:

select Hospital1Code from tab where Hospital1Code is not null
 union
 select Hospital2Code from tab where Hospital2Code is not null
 union
 select Hospital3Code from tab where Hospital3Code is not null
 order by 1

Jim


On Sun, Jan 3, 2010 at 2:50 AM, bharani kumar bharanikumariyer...@gmail.com
 wrote:

 Hi ,

 Thanks*

 *SELECT * FROM (SELECT cHospital FROM MED_PATIENT where cHospital is not
 null union select cHospital1 from med_patient where cHospital1 is not null
 union select cHospital2 from med_patient where cHospital2 is not null ) A
 order by 1


  
 cHospital
 ---------
 (empty)
 1234
 8524


 I don't want the NULL row;
 
 how do I remove the null display?


 Thanks




 On Sat, Jan 2, 2010 at 11:38 PM, bharani kumar 
 bharanikumariyer...@gmail.com wrote:

 Hi, first I want to say thanks to the mysql group.
 
 Sorry, I have only just now found time to check my mail.
 
 I am not sure, but I guess this union solves my problem.
 
 But let me check it; give me some time please.


 On Sat, Jan 2, 2010 at 7:24 PM, Jim Lyons jlyons4...@gmail.com wrote:

 Your table structure makes the SQL a little inelegant, but I'd say this
 would give you what you seem to want:

 select Hospital1Code  from tab where Hospital1Code   is not null
 union
 select Hospital2Code  from tab where Hospital2Code   is not null
 union
 select Hospital3Code  from tab where Hospital3Code   is not null

 The union will eliminate duplicates.  Maybe this would be better

 select * from (
  select Hospital1Code  from tab where Hospital1Code   is not null
  union
  select Hospital2Code  from tab where Hospital2Code   is not null
  union
  select Hospital3Code  from tab where Hospital3Code   is not null
 ) A order by 1

 Jim







 On Sat, Jan 2, 2010 at 6:43 AM, bharani kumar 
 bharanikumariyer...@gmail.com wrote:

 Hi

 My fields something like

 hospital1,hospital2,hospital3,patientname,


 Exact table look like

 PatientName  Hospital1Code  Hospital2Code  Hospital3Code

Re: Render row without duplicates

2010-01-02 Thread Jim Lyons
Your table structure makes the SQL a little inelegant, but I'd say this
would give you what you seem to want:

select Hospital1Code  from tab where Hospital1Code   is not null
union
select Hospital2Code  from tab where Hospital2Code   is not null
union
select Hospital3Code  from tab where Hospital3Code   is not null

The union will eliminate duplicates.  Maybe this would be better

select * from (
 select Hospital1Code  from tab where Hospital1Code   is not null
 union
 select Hospital2Code  from tab where Hospital2Code   is not null
 union
 select Hospital3Code  from tab where Hospital3Code   is not null
) A order by 1

Jim






On Sat, Jan 2, 2010 at 6:43 AM, bharani kumar bharanikumariyer...@gmail.com
 wrote:

 Hi

 My fields something like

 hospital1,hospital2,hospital3,patientname,


 Exact table look like

 PatientName  Hospital1Code  Hospital2Code  Hospital3Code

 Bharani   1234NULL NULL

 Kumar 56781234 NULL

 Senthil9632 56758524

 John   1234  4567   8524


 Can you tell me the query which returns output like:


 HospitalID

 1234
 5678
 9632
 5675
 8524
 4567
 8524

 Constraint are

 1. No Duplicate records,
 2.One single column as Output Result ,



 The purpose of this query is: I have around 1000 patients in my DB.
 
 Each patient may have one, two, or three hospital codes; that's why the fields are
 hospital1, hospital2, hospital3.
 
 
 I know I can display all hospital codes uniquely, but not in a
 single column with unique records.

 Can you tell me how to do this ?


 Thanks




-- 
Jim Lyons
Web developer / Database administrator
http://www.weblyons.com


Re: Aborted_connects is incresing repidly

2009-12-30 Thread Jim Lyons
This error may be caused by applications that don't close their connections
when they finish.  The session stays open until the timeout.  If this
becomes a problem, look at the various applications running on your system.
You might be able to program them better, to either close their connections
or reuse a connection.

If you run show processlist you will probably find a lot of processes in
Sleep state.  You can write a script to run regularly that will kill any
process that is sleeping longer than a certain minimum amount of time.  That
way you won't hit your max connections and prevent others from logging in.
I did this recently when a rogue process was opening sessions and leaving
them open.  I ran the script, selected those processes run by that user
against that database that were in Sleep state, and killed them dead.
This fixed the problem until we were able to track down the rogue process
and fix it.
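
A sketch of the kind of query such a script runs (the 600-second threshold is
only an example; this assumes 5.1's information_schema.processlist):

SELECT id, user, host, db, time
FROM information_schema.processlist
WHERE command = 'Sleep' AND time > 600;

-- or generate the KILL statements directly
SELECT CONCAT('KILL ', id, ';') AS kill_stmt
FROM information_schema.processlist
WHERE command = 'Sleep' AND time > 600;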

Jim

On Wed, Dec 30, 2009 at 12:13 AM, Jeetendra Ranjan 
jeetendra.ran...@sampatti.com wrote:

 Hi,

 My MySQL server Aborted_connects status is showing 8692 and is rapidly
 increasing.

 What are the reasons and how do I decrease it?

 We are using the connect() method in PHP code and have tried the command below

 mysqladmin flush-hosts

 but still the value is same.


 Thanks
 Jeetendra Ranjan




-- 
Jim Lyons
Web developer / Database administrator
http://www.weblyons.com


Re: Spatial extensions

2009-12-17 Thread Jim Ginn




Rene:

We've easily integrated GIS with MySQL into our sites:

http://tenant.com/map-search.php
http://yearlyrentals.com
http://acnj.com/map.php
...

Thanks!

Jim Ginn
Visit My Work
(888) 546-4466 office
(609) 226-5709 cell


 Awesome, this is what I was trying to find, as you succinctly wrote it. I
 *really* appreciate getting pointed in the right direction, since I
 haven't found a lot of MySQL's GIS tutorials directed at what I'm trying to do.
 
 Still, a couple questions, the Distance() function you included, that must
 require 5.1 or higher right? 5.0.88 on my box throws an error:
 
   Function places.Distance does not exist
 
 Also, where does line_segment come from in the below query?
 Thanks.
 
 ...Rene
 
 On 2009-12-17, at 8:45 AM, Gavin Towey wrote:


 Yes, spatial indexes are very fast:
 
 Query would be something like:
 
 SET @center = GeomFromText('POINT(37.372241 -122.021671)');
 
 SET @radius = 0.005;
 
 SET @bbox = GeomFromText(CONCAT('POLYGON((',
   X(@center) - @radius, ' ', Y(@center) - @radius, ',',
   X(@center) + @radius, ' ', Y(@center) - @radius, ',',
   X(@center) + @radius, ' ', Y(@center) + @radius, ',',
   X(@center) - @radius, ' ', Y(@center) + @radius, ',',
   X(@center) - @radius, ' ', Y(@center) - @radius, '))')
   );
 
 select id, astext(coordinates), Distance(@center, line_segment) as dist
 FROM places where MBRContains(@bbox, line_segment) order by dist limit 10;

 Regards,
 Gavin Towey



-Original Message-
 From: René Fournier [mailto:m...@renefournier.com]
 Sent: Wednesday, December 16, 2009 4:32 PM
 To: mysql
 Subject: Spatial extensions

 I have a table with 2 million rows of geographic points (latitude, longitude).
 Given a location -- say, 52º, -113.9º -- what's the fastest way to query the
 10 closest points (records) from that table? Currently, I'm using a simple
 two-column index to speed up queries:

 CREATE TABLE `places` (
   `id` mediumint(8) unsigned NOT NULL AUTO_INCREMENT,
   `latitude` decimal(10,8) NOT NULL,
   `longitude` decimal(12,8) NOT NULL,
   PRIMARY KEY (`id`),
   KEY `latlng` (`latitude`,`longitude`)
 ) ENGINE=MyISAM AUTO_INCREMENT=50 DEFAULT CHARSET=latin1
   COLLATE=latin1_general_ci;

 My current query is fairly quick:

 SELECT SQL_NO_CACHE * FROM places
 WHERE latitude BETWEEN 51.98228037384 AND 52.033153677
   AND longitude BETWEEN -113.94770681881 AND -113.86685484296;

 But I wonder a couple of things:

 1. Would MySQL's [seemingly anemic] spatial extensions speed things up if I
 added a column of type POINT (and a corresponding spatial INDEX)?

 CREATE TABLE `places` (
   `id` mediumint(8) unsigned NOT NULL AUTO_INCREMENT,
   `latitude` decimal(10,8) NOT NULL,
   `longitude` decimal(12,8) NOT NULL,
   `coordinates` point NOT NULL,
   PRIMARY KEY (`id`),
   KEY `latlng` (`latitude`,`longitude`),
   KEY `coord` (`coordinates`(25))
 ) ENGINE=MyISAM AUTO_INCREMENT=50 DEFAULT CHARSET=latin1
   COLLATE=latin1_general_ci;

 2. How would I write the query?

 ...Rene
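
For reference, a sketch of adding the POINT column and a true SPATIAL index (SPATIAL indexes in MySQL 5.0/5.1 require MyISAM and a NOT NULL geometry column; note that a prefix KEY on a POINT column, as in the second CREATE TABLE above, is not a spatial index):

ALTER TABLE places ADD COLUMN coordinates POINT;
UPDATE places
  SET coordinates = GeomFromText(CONCAT('POINT(', latitude, ' ', longitude, ')'));
ALTER TABLE places
  MODIFY coordinates POINT NOT NULL,
  ADD SPATIAL INDEX sp_coord (coordinates);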






 
 


 


Re: Optimization suggestions

2009-12-14 Thread Jim Lyons
After one very quick look, the index on folderid alone is unnecessary since
you have another index in which that field is the high-order field.
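
For example, a sketch of dropping the redundant index (folderid is the leftmost column of inboxfolderindex, so queries filtering on folderid alone can still use that composite index):

ALTER TABLE messages2009 DROP INDEX folderid;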

On Mon, Dec 14, 2009 at 12:31 PM, Sudhir N sudhir_nima...@yahoo.com wrote:

 I have following table structure, I have to use merge storage engine.
 Please have a look, and provide feedback if theres some thing wrong or if
 there's space for optimization.


 /*Table structure for table `messages2009` */

 CREATE TABLE `messages2009` (
  `id` varchar(36) NOT NULL default '',
  `folderid` varchar(36) NOT NULL default '',
  `fromid` int(11) NOT NULL default '0',
  `fromtype` varchar(10) NOT NULL default '',
  `toid` int(11) NOT NULL default '0',
  `totype` varchar(10) NOT NULL default '',
  `subject` varchar(255) default NULL,
  `body` text,
  `readbyrecipient` tinyint(1) NOT NULL default '0',
  `deletedbyauthor` tinyint(1) NOT NULL default '0',
  `deletedbyrecipient` tinyint(1) NOT NULL default '0',
  `threadid` varchar(36) NOT NULL default '',
  `senttime` timestamp NOT NULL default CURRENT_TIMESTAMP,
  PRIMARY KEY  (`id`),
  KEY `folderid` (`folderid`),
  KEY `threadid` (`threadid`),
  KEY `inboxfolderindex` (`folderid`,`toid`,`totype`),
  KEY `sentitemsindex` (`fromid`,`fromtype`)
 ) ENGINE=MyISAM DEFAULT CHARSET=latin1;

 /*Table structure for table `messages` */
 /*Merge table definition that covers all message tables*/

 CREATE TABLE `messages` (
  `id` varchar(36) NOT NULL default '',
  `folderid` varchar(36) NOT NULL default '',
  `fromid` int(11) NOT NULL default '0',
  `fromtype` varchar(10) NOT NULL default '',
  `toid` int(11) NOT NULL default '0',
  `totype` varchar(10) NOT NULL default '',
  `subject` varchar(255) default NULL,
  `body` text,
  `readbyrecipient` tinyint(1) NOT NULL default '0',
  `deletedbyauthor` tinyint(1) NOT NULL default '0',
  `deletedbyrecipient` tinyint(1) NOT NULL default '0',
  `threadid` varchar(36) NOT NULL default '',
  `senttime` timestamp NOT NULL default CURRENT_TIMESTAMP,
  PRIMARY KEY  (`id`),
  KEY `folderid` (`folderid`),
  KEY `threadid` (`threadid`),
  KEY `inboxfolderindex` (`folderid`,`toid`,`totype`),
  KEY `sentitemsindex` (`fromid`,`fromtype`)
 ) ENGINE=MRG_MyISAM DEFAULT CHARSET=latin1 INSERT_METHOD=LAST
 UNION=(`messages2009`);






 Sudhir Nimavat
 Senior software engineer.
 Quick start global PVT LTD.
 Baroda - 390007
 Gujarat, India

 Personally I'm always ready to learn, although I do not always like being
 taught






-- 
Jim Lyons
Web developer / Database administrator
http://www.weblyons.com


Re: Are you serious? mySQL 5.0 does NOT have a RENAME DATABASE?

2009-12-11 Thread Jim Lyons
Can you use that syntax if the databases are on different file systems?  If
you can, and the original table is big, the command would take a while as it
moved data from one file system to another.

On Fri, Dec 11, 2009 at 7:58 AM, Johan De Meersman vegiv...@tuxera.bewrote:

 On Fri, Dec 11, 2009 at 1:56 PM, Ken D'Ambrosio k...@jots.org wrote:

   rename table oldschema.table to newschema.table;
 
  Just to be 100% clear -- I assume you have to first create the
 destination
  database, and then do this for all the tables in the source database?
 

 Yep. Easily scriptable, though :-)
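
One way to script it, as a sketch (olddb and newdb are placeholders, and newdb must already exist): run this and feed the output back to mysql:

SELECT CONCAT('RENAME TABLE olddb.', table_name, ' TO newdb.', table_name, ';')
FROM information_schema.tables
WHERE table_schema = 'olddb';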




-- 
Jim Lyons
Web developer / Database administrator
http://www.weblyons.com


Re: error creating table

2009-11-30 Thread Jim Lyons
I created dummy tables for Roles and Users specifying the primary keys as
'serial' and then tried the below syntax.  It failed.

Then I redefined the primary keys in the parent tables to be the exact same
type as the foreign keys in UserRole and it worked.

So, check the datatype of all your keys and make sure they match.
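
For example, a minimal sketch of parent tables whose key columns match the foreign keys exactly (definitions are assumptions based on the UserRole table below; SERIAL is shorthand for BIGINT UNSIGNED NOT NULL AUTO_INCREMENT UNIQUE, which does not match INT(10) UNSIGNED):

CREATE TABLE `mydb`.`Users` (
  `userid` INT(10) UNSIGNED NOT NULL AUTO_INCREMENT,
  PRIMARY KEY (`userid`)
) ENGINE = InnoDB;

CREATE TABLE `mydb`.`Roles` (
  `roleid` INT(10) UNSIGNED NOT NULL AUTO_INCREMENT,
  PRIMARY KEY (`roleid`)
) ENGINE = InnoDB;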

On Mon, Nov 30, 2009 at 11:50 AM, Sharique uddin Ahmed Farooqui 
saf...@gmail.com wrote:

 Hi,
 I'm developing a CMS, in which I have 3 tables user, roles and userRoles.
 Here is the code for userRoles table. on this I'm getting error
 creating table (error code 1005), both userid and roleid are pkey
 (int, auto increment)

 CREATE  TABLE IF NOT EXISTS `mydb`.`UserRole` (
  `roleid` INT(10) UNSIGNED NOT NULL ,
  `userid` INT(10) UNSIGNED NOT NULL ,
  PRIMARY KEY (`roleid`, `userid`) ,
  INDEX `fk_userid` (`userid` ASC) ,
  INDEX `fk_roleid` (`roleid` ASC) ,
  CONSTRAINT `fk_userid`
FOREIGN KEY (`userid` )
REFERENCES `mydb`.`Users` (`userid` )
ON DELETE NO ACTION
ON UPDATE NO ACTION,
  CONSTRAINT `fk_roleid`
FOREIGN KEY (`roleid` )
REFERENCES `mydb`.`Roles` (`roleid` )
ON DELETE NO ACTION
ON UPDATE NO ACTION)
 ENGINE = InnoDB
 DEFAULT CHARACTER SET = utf8
 COLLATE = utf8_general_ci;
 --
 Sharique uddin Ahmed Farooqui
 (C++/C# Developer, IT Consultant)
 http://safknw.blogspot.com/
 Peace is the Ultimate thing we want.





-- 
Jim Lyons
Web developer / Database administrator
http://www.weblyons.com


Re: Is a view efficient in joins?

2009-11-29 Thread Jim Lyons
A view is no more or less efficient than the queries that make it up.  Each
time you invoke the view, you repeat all the joins.

A join could be more efficient only if you go to a lot of effort to ensure
it forms the most efficient join(s) of the underlying tables.

Your solution of the summary table is actually a good one in many instances,
especially if it's not vital that it contain the most up-to-date data.
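
A minimal sketch of that summary-table approach (every table and column name here is hypothetical):

CREATE TABLE summary_by_customer AS
  SELECT customer_id, COUNT(*) AS order_count, SUM(amount) AS total_amount
  FROM orders
  GROUP BY customer_id;

-- periodic refresh, e.g. from cron or application code:
TRUNCATE TABLE summary_by_customer;
INSERT INTO summary_by_customer
  SELECT customer_id, COUNT(*), SUM(amount)
  FROM orders
  GROUP BY customer_id;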

On Sun, Nov 29, 2009 at 6:16 PM, Neil Aggarwal n...@jammconsulting.comwrote:

 Hello:

 I have a database with over 60 tables with thousands
 to millions or rows in each.

 I want to develop a summary of the data joined across
 all the tables.

 I can do this with a view, but I am concerned it will
 take a lot of resources to perform all the joins required
 by the view.  Is a view efficient at making joins?  Are
 the joins executed every time the view is used or is
 the data cached somehow?

 The other approach is for me to create a table to hold
 the summary data and write application code that periodically
 updates it.

 Which alternative would be best?

 Thanks,
Neil

 --
 Neil Aggarwal, (281)846-8957, http://UnmeteredVPS.net
 Host your MySQL driven app on a CentOS VPS for $25/mo
 Unmetered bandwidth = no overage charges, 7 day free trial






-- 
Jim Lyons
Web developer / Database administrator
http://www.weblyons.com


Re: Rename Database - Why Would This Not Work?

2009-11-23 Thread Jim Lyons
Does this work if any of the tables are InnoDB?

On Mon, Nov 23, 2009 at 8:17 AM, Robinson, Eric eric.robin...@psmnv.comwrote:

 RENAME TABLE
olddb.table1 TO newdb.table1,
olddb.table2 TO newdb.table2

 put the whole list in here, the whole statement will be applied to the
 system atomically

 The database has 1200+ tables, so your approach seems like more work to
 me. As it is, all I'm doing is:

 service mysql stop
 mv olddb newdb
 service mysql start
 mysqlcheck -o newdb


 --
 Eric Robinson





-- 
Jim Lyons
Web developer / Database administrator
http://www.weblyons.com


Re: Odd select question

2009-10-12 Thread Jim Lyons
That's legal, where set is a comma-delimited list of items of the same
datatype as col1.
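
For example (my_table and the values are placeholders; note that a FROM clause is needed, which the quoted snippet below omits):

SELECT col1, col2
FROM my_table
WHERE col1 IN (1, 7, 42);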

On Mon, Oct 12, 2009 at 2:41 PM, Bruce Ferrell bferr...@baywinds.orgwrote:

 I seem to recall a SQL select syntax along these lines:

 SELECT col1, col2
 WHERE col1 IN (set)

 Is this or similar syntax in MySQL or is my dotage coming upon me


 Thanks in advance,

 Bruce





-- 
Jim Lyons
Web developer / Database administrator
http://www.weblyons.com


Re: Correct way to start new .MYD .MYI files

2009-09-25 Thread Jim Lyons
Run mysqldump on the table ApplicationLog and save that somewhere where
there's space, then truncate the table.

If you would really like to keep the tables but only have a disk space
issue, then there is one thing you can do if you have disk space on another
file system.  Move the 3 files for ApplicationLog and create sym links for
them.  In Unix/Linux, you would use the ln command.  I believe Windows has a
way of creating symlinks with .sym files, but I'm not 100% sure.
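
A sketch of the dump-and-truncate route (the database name and dump path are placeholders; the dump itself is a shell step, e.g. mysqldump mydb ApplicationLog > /big_disk/ApplicationLog.sql):

TRUNCATE TABLE mydb.ApplicationLog;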


On Fri, Sep 25, 2009 at 2:33 PM, Jones, Keven keven.jo...@ncr.com wrote:

 Hello,

 I am new to MySQL. I have a disk space issue and I have found that 3 files
 are the cause.

 The files are ApplicationLog.MYD and ApplicationLog.MYI


 I would like to purge these files and basically get rid of the data that is
 in them. What is the proper way to accomplish this? Does anyone have a
 procedure to follow that will allow me to archive these files and then start
 new ones so I can get my disk space back?

 Thanks in advance.




-- 
Jim Lyons
Web developer / Database administrator
http://www.weblyons.com


Re: Anyone using LVM for backing up?

2009-06-22 Thread Jim Lyons
What we do to start is the following:

) open 2 windows to the server running the mysql instance
) in one window,
 ) run the mysql cli and issue command 'flush tables with read lock'
 ) stop the slave, if this is a running slave
 ) run either show master status or show slave status, whichever is
appropriate, to get log position
) in the other window
 ) run the sync command several times
 ) create the snapshots
) back in the first window
 ) unlock the tables
 ) start the slave, if appropriate
) back in the other window
 ) mount the snapshots


We do it this way to minimize the time the server we're syncing from is in
read lock.

If anyone sees any flaws in this, please let me know.  There's a lot more,
of course, involving rsync and change master.  I just dealt with the
beginning part.
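
For reference, the SQL side of those steps, as a sketch:

FLUSH TABLES WITH READ LOCK;
STOP SLAVE;            -- only if this host is itself a running slave
SHOW MASTER STATUS;    -- or SHOW SLAVE STATUS, to record the log file and position
-- ... run sync and create the LVM snapshots in the other window ...
UNLOCK TABLES;
START SLAVE;           -- if appropriate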



On Mon, Jun 22, 2009 at 3:41 PM, Little, Timothy 
tlit...@thomaspublishing.com wrote:

 We have a 20 gig db (that includes the MYIs and MYDs and FRMs).

 We are wondering how long LVM snapshots take -- that is, how long might
 the DB be read-locked?  Do we have to read-lock it and flush tables?

 Are we talking half a second, ten-seconds, 20 minutes?

 Currently, when we copy the raw files from one directory to another, it
 takes about 20 mins and brings the DB to its proverbial knees.  When we
 copy the files with the db server down, it takes 10 minutes or so.

 Tim...





-- 
Jim Lyons
Web developer / Database administrator
http://www.weblyons.com


Re: Call a routine on mysqld startup

2009-06-15 Thread Jim Lyons
There's an option called init-file that will invoke an SQL script on startup.
That would probably work for you.
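
For example (the path and procedure name are assumptions): put init-file=/etc/mysql/startup.sql in the [mysqld] section of my.cnf, and have startup.sql contain the call:

CALL mydb.populate_heap_table();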

On Mon, Jun 15, 2009 at 5:17 AM, Gabriel Linder lin...@jeuxvideo.comwrote:

 Hi,

 I must call a routine when mysqld start (to populate a heap table). I
 did not find any related options in mysqld --help --verbose. Is there a
 way to achieve this, without modifying the startup script ?

 Thanks in advance.





-- 
Jim Lyons
Web developer / Database administrator
http://www.weblyons.com


Re: MAC address as primary key - BIGINT or CHAR(12)

2009-05-14 Thread Jim Lyons
Definitely CHAR (or VARCHAR).

If the format of a MAC address changes at all, you could be in real
trouble.  Also, if a MAC address can have a leading 0 (I don't know anything
about MAC addresses), then storing it as some sort of number could lose
that.

This is a general rule for me.  A field might only contain numbers (at one
particular point in time), but if those numbers are really nominal data (for
which size or order does not matter), then they should be CHAR or VARCHAR
fields anyway.
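
A sketch of the two options being discussed (table and column names are hypothetical):

-- store the canonical hex form; CHAR preserves leading zeros:
CREATE TABLE devices (
  mac CHAR(12) NOT NULL,        -- e.g. '0A1B2C3D4E5F'
  PRIMARY KEY (mac)
);

-- or keep the BIGINT column and format it only for display:
SELECT LPAD(HEX(mac), 12, '0') AS mac_hex FROM devices_bigint;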

On Thu, May 14, 2009 at 8:26 AM, Ilia KATZ ik...@dane-elec.co.il wrote:

 Hi.
 Currently I have a table:
 1. MAC address defined as BIGINT
 2. MAC address set as primary key

 Should I consider changing it to CHAR(12)?

 Replies will be appreciated.
 Ilia






-- 
Jim Lyons
Web developer / Database administrator
http://www.weblyons.com


Re: MAC address as primary key - BIGINT or CHAR(12)

2009-05-14 Thread Jim Lyons
As I said in my post, this is a general principle for me.  Nominal data
should have a data type of some sort of character.  You will never run into
unexpected  problems  down the line.

On Thu, May 14, 2009 at 11:04 AM, Pete Wilson pete...@yahoo.com wrote:



 I'm new to MySQL so can't answer the OP's question, but:

 MAC addresses do not by design contain letters. Native MAC addresses are
 48-bit (6-byte) integers:

  http://standards.ieee.org/getieee802/download/802-2001.pdf

 The confusion arises because a MAC address is usually /represented/ as
  hexadecimal, and that might contain letters, but MAC addresses natively
  are pure 6-byte integers.

 So the issue is whether you want to index by a 48-bit number or a
 12-character ascii string. For efficiency's sake, I'm guessing you'd choose
 the former.

 I'll be interested in the answer, though, from someone with experience.

 -- Pete Wilson
   http://www.pwilson.net/


 --- On Thu, 5/14/09, Jim Lyons jlyons4...@gmail.com wrote:

  From: Jim Lyons jlyons4...@gmail.com
  Subject: Re: MAC address as primary key - BIGINT or CHAR(12)
  To: Ilia KATZ ik...@dane-elec.co.il
  Cc: mysql@lists.mysql.com
  Date: Thursday, May 14, 2009, 11:38 AM
  Definitely CHAR (or VARCHAR).
 
  If the format of a MAC address changes at all, you could be
  in real
  trouble.  Also, if a MAC address can have a leading 0
  (I don't know anything
  about MAC addresses), then storing it as some sort of
  number could lose
  that.
 
  This is a general rule for me.  A field might only
  contain numbers (at one
  particular point in time)  but if those numbers are
  really nominal data (in
  which the size or order does not matter) then they should
  be CHAR or VARCHAR
  fields anyway.
 
  On Thu, May 14, 2009 at 8:26 AM, Ilia KATZ ik...@dane-elec.co.il
  wrote:
 
   Hi.
   Currently I have a table:
   1. MAC address defined as BIGINT
   2. MAC address set as primary key
  
   Should I consider changing it to CHAR(12)?
  
   Replies will be appreciated.
   Ilia
  
  
  
  
 
 
  --
  Jim Lyons
  Web developer / Database administrator
  http://www.weblyons.com
 






-- 
Jim Lyons
Web developer / Database administrator
http://www.weblyons.com


Re: Help with mysql query, multiple list

2009-05-08 Thread Jim Lyons
why not something like below.  Assume you have 3 pairs of start/end
timestamps and you want to find everything within those 3 time periods:

select * from table_name where start >= start1 and end <= end1
union
select * from table_name where start >= start2 and end <= end2
union
select * from table_name where start >= start3 and end <= end3

On Fri, May 8, 2009 at 11:26 AM, Abhishek Pratap abhishek@gmail.comwrote:

 Hi All

 I am kind of stuck with this query, can't expand my thinking. Maybe this is
 a limitation.  Here it is

 I have a database with many cols two of which are start and end position
 for
 an event.

 Now I have a list of event time stamps, I want to find all the info once the
 current event time stamp is >= start time of event and <= end time of event.

 something like this

 select * from table_name where start <= ( LIST of time stamps) AND end >= (
 list of time stamps).

 Clearly above query accepts only one item in the list. Is there a way to do
 this for multiple items in the list ??? I can't think of anything at this
 moment.


 Thanks,
 -Abhi




-- 
Jim Lyons
Web developer / Database administrator
http://www.weblyons.com


Re: Help with mysql query, multiple list

2009-05-08 Thread Jim Lyons
Then either build the statement with a program such as a perl script, or
select all records with a start time after the minimum start time in your
list and an end time before the maximum end time in your list, then filter
them further in either a program or a stored procedure.
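
A sketch of that coarse pre-filter (the bounds are placeholders for the minimum start and maximum end over your list):

SELECT *
FROM table_name
WHERE `start` >= '2009-01-01 00:00:00'
  AND `end`   <= '2009-12-31 23:59:59';
-- then do the exact per-pair matching in application code or a stored procedure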

On Fri, May 8, 2009 at 11:45 AM, Abhishek Pratap abhishek@gmail.comwrote:

 Hi Jim

 Unfortunately I have thousands of such points. So explicit statement
 calling will be very expensive both computationally and in terms of
 writing..

 Thanks,
 -Abhi


 On Fri, May 8, 2009 at 12:37 PM, Jim Lyons jlyons4...@gmail.com wrote:

 why not something like below.  Assume you have 3 pairs of start/end
 timestamps and you want to find everything within those 3 time periods:

 select * from table_name where start >= start1 and end <= end1
 union
 select * from table_name where start >= start2 and end <= end2
 union
 select * from table_name where start >= start3 and end <= end3


 On Fri, May 8, 2009 at 11:26 AM, Abhishek Pratap 
 abhishek@gmail.comwrote:

 Hi All

 I am kind of stuck with this query, can't expand my thinking. Maybe this
 is a limitation.  Here it is

 I have a database with many cols two of which are start and end position
 for
 an event.

 Now I have a list of event time stamps, I want to find all the info once the
 current event time stamp is >= start time of event and <= end time of event.

 something like this

 select * from table_name where start <= ( LIST of time stamps) AND end >= (
 list of time stamps).

 Clearly above query accepts only one item in the list. Is there a way to
 do
 this for multiple items in the list ??? I can't think of anything at this
 moment.


 Thanks,
 -Abhi




 --
 Jim Lyons
 Web developer / Database administrator
 http://www.weblyons.com





-- 
Jim Lyons
Web developer / Database administrator
http://www.weblyons.com


Re: grant user create privilege

2009-05-07 Thread Jim Lyons
It's hard to believe this is the case, since I assume you've created other
databases in this instance, but the error on CREATE DATABASE, which is
essentially a mkdir in Unix, makes me wonder whether you have a file
permissions problem on the datadir directory.

On Wed, May 6, 2009 at 9:14 AM, John Clement
john.clem...@readingroom.comwrote:

 I'm clearly doing something wrong.  All I want is to grant a user rights
 to create databases:

 grant create on *.* to 'user'@'localhost' identified by 'pass';

 doesn't do the trick, nor does

 grant super on *.* to 'user'@'localhost' identified by 'pass';

 The user in question was originally created using

 grant all privileges on their_db.* to 'user'@'localhost' identified by
 'pass';

 If I try logging in as this user though the following happens:



 mysql create database testdb;
 ERROR 1044 (42000): Access denied for user 'user'@'localhost' to
 database 'testdb'

 Can someone point out the error of my ways?

 Many thanks, jc





-- 
Jim Lyons
Web developer / Database administrator
http://www.weblyons.com


Re: Creation date

2009-05-05 Thread Jim Lyons
I would have said to look at the creation date of the .frm file, but there
is also a Create_time field in the SHOW TABLE STATUS output, and
the two dates often differ.  Anyone know why they do?
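
Two ways to see it, as a sketch (database and table names are placeholders):

SHOW TABLE STATUS LIKE 'my_table';      -- see the Create_time column
SELECT create_time
FROM information_schema.tables
WHERE table_schema = 'mydb' AND table_name = 'my_table';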

On Tue, May 5, 2009 at 9:28 AM, Jerry Schwartz
jschwa...@the-infoshop.comwrote:

 At the risk of getting spanked for not finding this in the documentation,
 I'm asking a simple question:



 Can I tell when a table was created?



 Regards,



 Jerry Schwartz

 The Infoshop by Global Information Incorporated

 195 Farmington Ave.

 Farmington, CT 06032



 860.674.8796 / FAX: 860.674.8341



  http://www.the-infoshop.com www.the-infoshop.com






-- 
Jim Lyons
Web developer / Database administrator
http://www.weblyons.com


Re: How to generate unique primary key in MySQL?

2009-04-28 Thread Jim Lyons
If you just don't want an auto_increment primary key in your major data
tables, then create a table for the express purpose of generating primary
keys that uses auto_increment.  Something like:

create table myseq (x serial);


Each time you need a new key, get the next value from that table.  It would
be more like an Oracle sequence than an auto_increment.
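
A sketch of drawing the next value from that table:

INSERT INTO myseq VALUES (NULL);
SELECT LAST_INSERT_ID();   -- the value just generated; LAST_INSERT_ID() is per-connection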

On Mon, Apr 27, 2009 at 11:59 PM, yuan edit edit.y...@gmail.com wrote:

 I have a shopping cart table like this:

 CREATE TABLE shopping_cart(
 id VARCHAR(20) NOT NULL,
 product_id INT NOT NULL,
 product_quantity INT NOT NULL,
 ...
 ...
 user_id INT NOT NULL,
 current_timestamp TIMESTAMP,
 primary key (id)
 );

 I will not use auto_increment

 Is there other way to  generate unique primary key in MySQL?

 Thank you




-- 
Jim Lyons
Web developer / Database administrator
http://www.weblyons.com


Re: mysqldump syntax - dumping only specific tables starting with a certain character. (e.g. z*)

2009-04-19 Thread Jim Lyons
try something like:

mysqldump -u(user) -p (db-name) `ls z*` > (filename)


2009/4/19 ChoiSaehoon saeho...@hotmail.com


 Is there a way to dump only specific tables starting with a certain
 character?



 For example, I only want to dump tables starting with the character 'z'.

 The following doesn't work.

 mysqldump -u(user) -p (db-name) z* > (filename)



 Do I have to use regular expression here?



 Please help  thanks in advance.

 _
 MSN 메신저의 차세대 버전, Windows Live Messenger!
 http://windowslive.msn.co.kr/wlm/messenger/




-- 
Jim Lyons
Web developer / Database administrator
http://www.weblyons.com


Re: mysqldump syntax - dumping only specific tables starting with a certain character. (e.g. z*)

2009-04-19 Thread Jim Lyons
actually, that was stupid of me - you need a list of tables not files.

I think the only way to do this, and the way we do it, is to run some command
like:
mysql -e"show tables in db-name like 'z%'" >tabnames

Note the use of double-quotes and single-quotes.

Then use a loop to read the file tabnames and build a string to tack onto
the mysqldump command, or issue multiple mysqldump commands.  A shell
interpreter like bash or a program like perl or php can do this easily.

You might try putting the above command in back-ticks (`) and then inserting
it directly into the mysqldump command.
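
An alternative sketch that builds the list inside MySQL (db_name is a placeholder; very long lists may need group_concat_max_len raised):

SELECT GROUP_CONCAT(table_name SEPARATOR ' ')
FROM information_schema.tables
WHERE table_schema = 'db_name' AND table_name LIKE 'z%';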

2009/4/19 Jim Lyons jlyons4...@gmail.com

 try something like:

 mysqldump -u(user) -p (db-name) `ls z*` > (filename)


 2009/4/19 ChoiSaehoon saeho...@hotmail.com


 Is there a way to dump only specific tables starting with a certain
 character?



 For example, I only want to dump tables starting with the character 'z'.

 The following doesn't work.

 mysqldump -u(user) -p (db-name) z* > (filename)



 Do I have to use regular expression here?



 Please help  thanks in advance.

 _
 MSN 메신저의 차세대 버전, Windows Live Messenger!
 http://windowslive.msn.co.kr/wlm/messenger/




 --
 Jim Lyons
 Web developer / Database administrator
 http://www.weblyons.com




-- 
Jim Lyons
Web developer / Database administrator
http://www.weblyons.com


Re: Can't get TRIM to work?

2009-04-17 Thread Jim Lyons
Works for me.  I assume you're only trimming blanks from the beginning
and/or end of the string.  I have found that it will not trim tabs.  Try a
command like:

select concat("'", trim(notes), "'") from work_notes
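
Also, since TRIM() with no modifier removes only spaces, a sketch for stripping leading and trailing tabs explicitly:

select trim(both '\t' from notes) from work_notes;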



On Fri, Apr 17, 2009 at 2:17 PM, Richard Reina rich...@rushlogistics.comwrote:

 Hello All,

 I can't get trim to trim the blank space from a TEXT field in the query
 below and was wondering if someone could tell what I am doing wrong?

 SELECT TRIM(notes) FROM work_notes;

 Thanks for any help as I am at a complete loss.

 Richard






-- 
Jim Lyons
Web developer / Database administrator
http://www.weblyons.com


Re: MySQL replication status plugin

2009-04-15 Thread Jim Lyons
I'd just write a perl script to do it and return the appropriate status
code/message to Nagios.  Shouldn't be hard at all.  PHP or any language that
can talk to mysql would work, too.  You just mentioned the position; you'll
also have to compare the names of the binlog files: position 100 in file
bin-00010 is ahead of position 9 in file bin-9.
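
The fields such a script would compare, as a sketch:

SHOW MASTER STATUS;   -- on the master: File, Position
SHOW SLAVE STATUS\G   -- on the slave: Master_Log_File and Read_Master_Log_Pos,
                      -- plus Seconds_Behind_Master as a simpler health check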

On Wed, Apr 15, 2009 at 4:12 AM, Gabriel - IP Guys 
gabr...@impactteachers.com wrote:

 Good morning guys, (and girls), I hope all is well. I've been given the
 task to, and I quote - Write a Nagios plugin to test the replication
 status of two servers by
 comparing the position on the master to that on the slave



 To save myself a lot of work, I'd like to know if anything has been done
 in this arena already, I would be over the moon, if someone has beaten
 me to it, but I am so not in the mood to write one!



 Any hints, recommendations, and ideas are wholly welcome!



 ---

 Kind Regards,

 Mr Gabriel






-- 
Jim Lyons
Web developer / Database administrator
http://www.weblyons.com


how can i determine default database within a stored procedure?

2009-03-24 Thread Jim Lyons
The database() function returns the default database, so:

mysql> use scratch;
Database changed
mysql> select database();
+------------+
| database() |
+------------+
| scratch    |
+------------+

mysql> use mysql;
Database changed
mysql> select database();
+------------+
| database() |
+------------+
| mysql      |
+------------+


However, if the database function is invoked from within a stored procedure,
it only returns the name of the database in which it exists:

use scratch;

delimiter $$
create procedure thisdb()
begin
select database();
end$$
delimiter ;

use scratch;
call scratch.thisdb();
scratch

use mysql;
call scratch.thisdb();
scratch

use customer;
call scratch.thisdb();
scratch

This is documented behavior.  Is there any way for a stored procedure to
determine what the default schema of the user invoking it is?  We are
trying to track down cross-schema invocations of sp's and this is the last
piece I have to figure out.

Thanks,
Jim


-- 
Jim Lyons
Web developer / Database administrator
http://www.weblyons.com


Re: why is this happening?

2009-03-21 Thread Jim Lyons
No, there's only one mysqld process running.   Thanks, tho.

On Thu, Mar 19, 2009 at 3:56 PM, Claudio Nanni claudio.na...@gmail.comwrote:

 I am afraid you have two mysql instances up on the same host.

 ps -ef | grep mysql ?


 Claudio


 Jim Lyons wrote:

 I have a rather odd problem with a replication slave.

 The slave has been down for a number of hours.  Of course, both io and sql
 threads are stopped - from SHOW SLAVE STATUS:

   Slave_IO_Running: No
  Slave_SQL_Running: No


 However, the relay logs are still being written to.  I was under the
 impression that the slave's io thread was what brought data from the
 master's bin log to the slave's relay log.  With the io thread stopped,
 the
 relay logs should stop filling up - right?  Mine are definitely filling
 up.


 Does anyone know why the relay logs keep filling up after replication has
 broken?

 Thanks







-- 
Jim Lyons
Web developer / Database administrator
http://www.weblyons.com


Re: MySQL versus PostgreSQL for GPS Data

2009-03-20 Thread Jim Ginn
Juan:

Still seems excessive but in that case, ignore inserts that have no change
in lat / lon ...

Jim

 Jim Ginn wrote:

Not sure why you need the truck's location 'every second', i.e.
31,536,000 rows per year per truck?
Doing every 30 seconds seems more manageable at 1,051,200 rows per year
per truck?  Maybe better at 60 seconds?

 OpenGGD is also designed to deliver GPS data in real time; we have
 customers
 that sometimes want to track their trucks in real time, that's why we
 think
 the worst scenario could be one position per second.

 Juan Karlos

 2009/3/18 Jim Ginn j...@oats.com

 Juan:

 We've had success with spatial indexes and mysql on our sites however
 our
 numbers are smaller:

 http://brokersnetwork.com (200,000+ records)

 http://yearlyrentals.com (200,000+ records)

 http://avalonrealestate.com/map.php (4,400+ records)

 ...

 Not sure why you need the truck's location 'every second', i.e.

 31,536,000 rows per year per truck?

 Doing every 30 seconds seems more manageable at 1,051,200 rows per year
 per truck?  Maybe better at 60 seconds?

 Jim


  Juan,
 
  On Wed, Mar 18, 2009 at 11:14 AM, Juan Pereira
  juankarlos.open...@gmail.com wrote:
  Hello,
 
  I'm currently developing a program for centralizing the vehicle fleet
  GPS
  information -http://openggd.sourceforge.net-, written in C++.
 
  The database should have these requirements:
 
  - The schema for this kind of data consists of several arguments
  -latitude,
  longitude, time, speed. etc-, none of them is a text field.
  - The database also should create a table for every truck -around 100
  trucks-.
  - There won't be more  than 86400 * 365 rows per table -one GPS
 position
  every second along one year-.
  - There won't be more than 10 simultaneously read-only queries.
 
  The question is: Which DBMS do you think is the best for this kind of
  application? PostgreSQL or MySQL?
 
  I think it depends on exactly what you want to do with the data. MySQL
  has fairly poor support for spatial types but you can achieve a lot
  just manipulating normal data types. Postgres (which i know nothing
  about) appears to have better spatial support via postgis
 
  http://dev.mysql.com/doc/refman/5.0/en/spatial-extensions.html
 
  http://postgis.refractions.net/documentation/manual-1.3/
 
  In terms of data size you should not have a problem, I think you need
  to look at how you are going to query the tables.
 
  Cheers,
 
  Ewen
 
 
  Thanks in advance
 
  Juan Karlos.
 
 
 
 








why is this happening?

2009-03-19 Thread Jim Lyons
I have a rather odd problem with a replication slave.

The slave has been down for a number of hours.  Of course, both io and sql
threads are stopped - from SHOW SLAVE STATUS:

   Slave_IO_Running: No
  Slave_SQL_Running: No


However, the relay logs are still being written to.  I was under the
impression that the slave's io thread was what brought data from the
master's bin log to the slave's relay log.  With the io thread stopped, the
relay logs should stop filling up - right?  Mine are definitely filling up.


Does anyone know why the relay logs keep filling up after replication has
broken?

Thanks

-- 
Jim Lyons
Web developer / Database administrator
http://www.weblyons.com


Re: how can I make a stored procedure executable by public?

2009-03-18 Thread Jim Lyons
Thanks, Claudio, but that's not quite it.  I'm not writing any procedure.
I'm inserting code into procedures other people write.  I am taking each
procedure out of the mysql.proc table, inserting a few lines of code right
at the start of the body, and saving back into the proc table.  These lines
of code insert a line into my audit table.  I don't have any control over
what other people write, I just want to record when their procedures get
called.

The general log logs original calls to procedures, but I don't see that it
records calls made to one procedure from within another.

On Wed, Mar 18, 2009 at 6:42 AM, Claudio Nanni claudio.na...@gmail.comwrote:

 Hello Jim,

 If I understand your needs correctly, the steps you need to do are:

 Create one user X with insert privileges on the mydb.audit_table

 Create the stored procedure specifying the user X both in the DEFINER
 section and in the SQL_SECURITY section

 of the create procedure statement (
 http://dev.mysql.com/doc/refman/5.0/en/create-procedure.html)

 In this way who calls the stored procedure will have the table written with
 the grants of the original definer X.
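
 A sketch of that approach (the definer account, procedure name, and audit column are placeholders):

 DELIMITER $$
 CREATE DEFINER = 'audit_writer'@'localhost' PROCEDURE mydb.some_proc()
 SQL SECURITY DEFINER
 BEGIN
   INSERT INTO mydb.audit_table (called_at) VALUES (NOW());
   -- ... the rest of the procedure body ...
 END$$
 DELIMITER ;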

 Is it what you are looking for?

 Claudio


 2009/3/17 Jim Lyons jlyons4...@gmail.com

 I am writing a tracking procedure that will be inserted into every
 procedure
 (regardless of who writes the procedure) that will insert a record into an
 audit table.  This means the procedure, regardless of who writes it, must
 have the permission to insert into the table.  I am going to modify the
 code
 of the procedures once they're stored in the database and the authors of
 the
 procedures will probably not know that I will be doing it (although it's
 not
 really a secret) and the way they code will not be altered in any way.

 I would like to write a grant command like:

 grant insert on mydb.audit_table to public

 but I don't see anything in the manual.  Is there any way that I can do
 this?
 I know I can grant ALL privileges to a user, but I want to grant one
 privilege to all users, without having to loop through the mysql.user
 table
 and explicitly granting the insert privilege.

 I guess I could put it in test, but then everyone could do anything with
 it,
 which would not be particularly desirable.  The table should be insert
 only, not readable or updateable by anyone but the owner of mydb.

 Is there any way I can do this?

 Thanks,
 Jim

 --
 Jim Lyons
 Web developer / Database administrator
 http://www.weblyons.com





-- 
Jim Lyons
Web developer / Database administrator
http://www.weblyons.com


Re: MySQL versus PostgreSQL for GPS Data

2009-03-18 Thread Jim Ginn
Juan:

We've had success with spatial indexes and mysql on our sites however our
numbers are smaller:

http://brokersnetwork.com (200,000+ records)

http://yearlyrentals.com (200,000+ records)

http://avalonrealestate.com/map.php (4,400+ records)

...

Not sure why you need the truck's location 'every second', i.e.

31,536,000 rows per year per truck?

Doing every 30 seconds seems more manageable at 1,051,200 rows per year
per truck?  Maybe better at 60 seconds?

Jim


 Juan,

 On Wed, Mar 18, 2009 at 11:14 AM, Juan Pereira
 juankarlos.open...@gmail.com wrote:
 Hello,

 I'm currently developing a program for centralizing the vehicle fleet
 GPS
 information -http://openggd.sourceforge.net-, written in C++.

 The database should have these requirements:

 - The schema for this kind of data consists of several arguments
 -latitude,
 longitude, time, speed. etc-, none of them is a text field.
 - The database also should create a table for every truck -around 100
 trucks-.
 - There won't be more  than 86400 * 365 rows per table -one GPS position
 every second along one year-.
 - There won't be more than 10 simultaneously read-only queries.

 The question is: Which DBMS do you think is the best for this kind of
 application? PostgreSQL or MySQL?

 I think it depends on exactly what you want to do with the data. MySQL
 has fairly poor support for spatial types but you can achieve a lot
 just manipulating normal data types. Postgres (which i know nothing
 about) appears to have better spatial support via postgis

 http://dev.mysql.com/doc/refman/5.0/en/spatial-extensions.html

 http://postgis.refractions.net/documentation/manual-1.3/

 In terms of data size you should not have a problem, I think you need
 to look at how you are going to query the tables.

 Cheers,

 Ewen


 Thanks in advance

 Juan Karlos.









Re: how can I make a stored procedure executable by public?

2009-03-18 Thread Jim Lyons
great!  thanks much.

On Wed, Mar 18, 2009 at 1:52 PM, Claudio Nanni claudio.na...@gmail.comwrote:

 Ok sorry, I did not understand at first.

 GRANT INSERT on mydb.audit_table to ''@'%';

 should do the work.

 Cheers

 Claudio

 Jim Lyons wrote:

 Thanks, Claudio, but that's not quite it.  I'm not writing any procedure.
  I'm inserting code into procedures other people write.  I am taking each
 procedure out of the mysql.proc table, inserting a few lines of code right
 at the start of the body, and saving back into the proc table.  These lines
 of code insert a line into my audit table.  I don't have any control over
 what other people write, I just want to record when their procedures get
 called.

 The general log logs original calls to procedures, but I don't see that it
 records calls made to one procedure from within another.

 On Wed, Mar 18, 2009 at 6:42 AM, Claudio Nanni 
 claudio.na...@gmail.commailto:
 claudio.na...@gmail.com wrote:

Hello Jim,

If I unserstand well your needs the steps you need to do are:

Create one user X with insert privileges on the mydb.audit_table

Create the stored procedure specifying the user X both in the
DEFINER section and in the SQL_SECURITY section

of the create procedure statement
(http://dev.mysql.com/doc/refman/5.0/en/create-procedure.html)

In this way who calls the stored procedure will have the table
written with the grants of the original definer X.

Is it what you are looking for?

Claudio


2009/3/17 Jim Lyons jlyons4...@gmail.com
mailto:jlyons4...@gmail.com


I am writing a tracking procedure that will be inserted into
every procedure
(regardless of who writes the procedure) that will insert a
record into an
audit table.  This means the procedure, regardless of who
writes it, must
have the permission to insert into the table.  I am going to
modify the code
of the procedures once they're stored in the database and the
authors of the
procedures will probably not know that I will be doing it
(although it's not
really a secret) and the way they code will not be altered in
any way.

I would like to write a grant command like:

grant insert on mydb.audit_table to public

but I don't see anything in the manual Is there any way that I
can do this.
I know I can grant ALL privileges to a user, but I want to
grant one
privilege to all users, without having to loop through the
mysql.user table
and explicitly granting the insert privilege.

I guess I could put it in test, but then everyone could do
anything with it,
which would not be particularly desirable.  The table should
be insert
only, not readable or updateable by anyone but the owner of
mydb.

Is there any way I can do this?

Thanks,
Jim

--
Jim Lyons
Web developer / Database administrator
http://www.weblyons.com





 --
 Jim Lyons
 Web developer / Database administrator
 http://www.weblyons.com





-- 
Jim Lyons
Web developer / Database administrator
http://www.weblyons.com


how can I make a stored procedure executable by public?

2009-03-17 Thread Jim Lyons
I am writing a tracking procedure that will be inserted into every procedure
(regardless of who writes the procedure) that will insert a record into an
audit table.  This means the procedure, regardless of who writes it, must
have the permission to insert into the table.  I am going to modify the code
of the procedures once they're stored in the database and the authors of the
procedures will probably not know that I will be doing it (although it's not
really a secret) and the way they code will not be altered in any way.

I would like to write a grant command like:

grant insert on mydb.audit_table to public

but I don't see anything in the manual.  Is there any way that I can do this?
I know I can grant ALL privileges to a user, but I want to grant one
privilege to all users, without having to loop through the mysql.user table
and explicitly granting the insert privilege.

I guess I could put it in test, but then everyone could do anything with it,
which would not be particularly desirable.  The table should be insert
only, not readable or updateable by anyone but the owner of mydb.

Is there any way I can do this?

Thanks,
Jim

-- 
Jim Lyons
Web developer / Database administrator
http://www.weblyons.com


Question about LVM snapshots and innodb tables

2009-03-11 Thread Jim Lyons
The book “High Performance MySQL” states the following about using LVM
snapshots with innodb tables:  “All innodb files (InnoDB tablespace files
and InnoDB transaction logs) must be on a single logical volume
(partition).”  Here is a portion of the output of a df command run on one of our
hosts:



/dev/mapper/vg01-db   2.5T  2.0T  567G  78% /db

/dev/mapper/vg00-innodb  8.0G  2.0G  6.1G  25% /db/innodb

/dev/mapper/vg02-binlog  503G  140G  363G  28% /db/binlog

/dev/mapper/vg06-data4  755G  652G  103G  87% /db/data

/dev/mapper/vgc2-data8  6.2T  644G  5.6T  11% /db/data8

/dev/mapper/vgc3-data9  6.2T  1.8T  4.5T  29% /db/data9



Where /db/innodb contains the innodb logs and the one ibdata file.  However,
we use innodb_file_per_table so all the /db/datax filesystems have .ibd
files (many of the tables in the datadir, /db/data, are sym-linked to
/db/data8 and /db/data9, where the data actually resides).



We use LVM snapshots to move the data around, since our databases are
several terabytes.  Does this mean our snapshots are inconsistent?  There’s
no way we can get all innodb data on a single partition.



Thanks,


Jim Lyons

-- 
Jim Lyons
Web developer / Database administrator
http://www.weblyons.com


Re: Finding replicated database

2009-03-07 Thread Jim Lyons
BTW, same problems occur on the slave side with replicate-do and
replicate-ignore.  They seem to go away with row-based replication - that's
our big hope, anyway.  It appears to work so far in test.

On Sat, Mar 7, 2009 at 12:41 PM, Baron Schwartz ba...@xaprb.com wrote:

 Sure.  Set binlog-do-db to foo and set up a slave, and then try this:

 create database foo;
 create database bar;
 use bar;
 create table foo.table1(a int);
 use foo;
 insert into table1(a) values(1);

 Now go to the slave and check replication.  It's broken:

   Last_Error: Error 'Table 'foo.table1' doesn't exist' on query.
 Default database: 'foo'. Query: 'insert into table1(a) values(1)'

 Why?  Because binlog-do-db doesn't do what you think it does.  Check
 the docs again :) Read this:
 http://dev.mysql.com/doc/refman/5.0/en/replication-rules.html and then
 this:
 http://dev.mysql.com/doc/refman/5.0/en/replication-options-binary-log.html#option_mysqld_binlog-do-db

 I won't duplicate the documents here.  The docs are clear and
 comprehensive.  But pay attention to default database which is what
 changes when you say use foo or use bar.  Notice what the default
 database is in the error above!

 On Fri, Mar 6, 2009 at 12:08 PM, Atle Veka at...@flyingcroc.net wrote:
  Hi, care to elaborate on the pit-falls of binlog-do-db?
 
 
  Thanks,
  Atle
 
  On Wed, 4 Mar 2009, Baron Schwartz wrote:
 
  No.  But I would recommend removing binlog-do-db and using
  replicate-wild-do-table=mydb.% on the slave instead.  The
  binlog-do-db approach has many pitfalls anyway, and is likely to burn
  you at some point, e.g. get the slave out of sync with the master.
  The bonus of my suggestion is that you'll be able to see from SHOW
  SLAVE STATUS what's being replicated.
 
  --
  Baron Schwartz, Director of Consulting, Percona Inc.
  Our Blog: http://www.mysqlperformanceblog.com/
  Our Services: http://www.percona.com/services.html
 
 
 



 --
 Baron Schwartz, Director of Consulting, Percona Inc.
 Our Blog: http://www.mysqlperformanceblog.com/
 Our Services: http://www.percona.com/services.html





-- 
Jim Lyons
Web developer / Database administrator
http://www.weblyons.com


Re: && vs AND

2009-03-02 Thread Jim Lyons
Yes - it works in MySQL, as does ||.  See:

http://dev.mysql.com/doc/refman/5.1/en/logical-operators.html#operator_and
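
For example (t, a, and b are placeholders):

SELECT * FROM t WHERE a = 1 && b = 2;    -- MySQL-specific synonym
SELECT * FROM t WHERE a = 1 AND b = 2;   -- standard, portable form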



On Mon, Mar 2, 2009 at 5:02 PM, John Daisley 
john.dais...@mypostoffice.co.uk wrote:

 Never seen && used with MySQL, does it really work???

 As for other RDBMSs, I reckon it's likely to cause problems in Oracle, as
 & is used for session substitution variables in SQL*Plus.


 John Daisley
 Email: john.dais...@butterflysystems.co.uk
 Mobile: 07812 451238

 MySQL Certified Database Administrator (CMDBA)
 MySQL Certified Developer (CMDEV)
 MySQL Certified Associate (CMA)
 Comptia A+ Certified Professional IT Technician

















 On Mon, 2009-03-02 at 13:04 -0800, Daevid Vincent wrote:

  Someone sent me a huge SQL query today that used && instead of AND
  all over the place. I was a bit surprised that it is legal MySQL, but I
  was always under the impression that it is not proper. Can anyone confirm
  or deny this? Like, will ALL SQL RDBMSs support that syntax?
 
  http://dev.mysql.com/doc/refman/5.0/en/logical-operators.html
 
 
  __
  This email has been scanned by Netintelligence
  http://www.netintelligence.com/email




-- 
Jim Lyons
Web developer / Database administrator
http://www.weblyons.com


Re: catch the error

2009-02-26 Thread Jim Lyons
what's the error message?

On Thu, Feb 26, 2009 at 11:46 AM, PJ af.gour...@videotron.ca wrote:

 It is commented out because I am using mysql_connect.
 I don't think it would be good to use both, since the db1 references
 another db. But even when I use the db1.php and change the database and
 table, I get the same error message.




-- 
Jim Lyons
Web developer / Database administrator
http://www.weblyons.com


Re: MySQL log in issues

2009-02-22 Thread Jim Lyons
You should have typed  ... set password = password('newpass')

You've set root's password to the encrypted value of some other string.

You ought to be able to get around this by starting mysql (the server
process, not the client) using the --skip-grant-tables option and resetting
the password.
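
A sketch of the corrected sequence, once the server is running with --skip-grant-tables:

UPDATE mysql.user SET Password = PASSWORD('newpass') WHERE User = 'root';
FLUSH PRIVILEGES;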

On Sun, Feb 22, 2009 at 9:38 PM, Tim DeBoer tdeb...@gmail.com wrote:

 Hi everyone,
 I'm having some problems getting logged in to mysql to create a new
 database.
 I have to admit, my overall skill level with mysql is 'Entry level noob'.
 I just haven't ever had much need to mess with it in general, so if you
 want
 to laugh, feel free. I'll understand  ;)

 At any rate, as it has been about a year since the last time I tried to do
 this, I've forgotten the root password and tried reset it using the
 following steps:
 1. Stopped the mysqld daemon process.
 2. Started the mysqld daemon process with the --skip-grant-tables option.
 3. Started the mysql client with the -u root option.
 Once logged in as root, I did:

 UPDATE mysql.user SET Password='newpass' WHERE User='root';
 FLUSH PRIVILEGES;
 exit

 When I try to log in now:
 # mysql -u root -p
 Enter password:
 ERROR 1045 (28000): Access denied for user 'root'@'localhost' (using
 password: YES)

 Can someone give me a nudge/push/drag me kicking and screaming in the right
 direction?

 Thanks everyone  :)

 --
 Tim DeBoer




-- 
Jim Lyons
Web developer / Database administrator
http://www.weblyons.com


Re: enabling storage engine with RPM install

2009-02-20 Thread Jim Lyons
yes, but how does one add engines to an existing install?  Is it possible?
I might want/need to add a storage engine that was not in the initial
install.

On Fri, Feb 20, 2009 at 12:59 AM, Krishna Chandra Prajapati 
prajapat...@gmail.com wrote:

 Hi Jim,

 The MySQL RPM installation comes with the default storage engines MyISAM and
 InnoDB.  Just verify with SHOW ENGINES at the mysql prompt.


 On Fri, Feb 20, 2009 at 5:25 AM, Jim Lyons jlyons4...@gmail.com wrote:

 We have 5.0.22 installed on a test machine and for some reason the innodb
 storage engine was not enabled.  We install from RPMs so I'm not sure how
 to
 enable the storage engine. If we compiled ourselves, we'd recompile but
 that's not an option.

 Does anyone know how to enable a storage engine once mysql's been
 installed
 by an RPM?  How does one make the selections in the first place with RPMs?
 We've always just taken what we got and it was sufficient.

 Thanks,
 Jim

 --
 Jim Lyons
 Web developer / Database administrator
 http://www.weblyons.com




 --
 Krishna Chandra Prajapati
 MySQL DBA,
 Email-id: prajapat...@gmail.com




-- 
Jim Lyons
Web developer / Database administrator
http://www.weblyons.com


enabling storage engine with RPM install

2009-02-19 Thread Jim Lyons
We have 5.0.22 installed on a test machine and for some reason the innodb
storage engine was not enabled.  We install from RPMs so I'm not sure how to
enable the storage engine. If we compiled ourselves, we'd recompile but
that's not an option.

Does anyone know how to enable a storage engine once mysql's been installed
by an RPM?  How does one make the selections in the first place with RPMs?
We've always just taken what we got and it was sufficient.

Thanks,
Jim

-- 
Jim Lyons
Web developer / Database administrator
http://www.weblyons.com


Resend: enabling storage engine with RPM install

2009-02-19 Thread Jim Lyons
Sorry, but I'm resending because I made a mistake in terminology and want to
be clear.  The problem isn't that innodb is DISABLED on the database.  The
innodb engine is not supported by the database.

We have 5.0.22 installed on a test machine and for some reason the innodb
storage engine was not installed with it.  We install from RPMs so I'm not
sure how to install the storage engine. If we compiled ourselves, we'd
recompile but that's not an option.

Does anyone know how to install a storage engine once mysql's been installed
by an RPM?  How does one make the selections in the first place with RPMs?
We've always just taken what we got and it was sufficient.

Thanks,
Jim

-- 
Jim Lyons
Web developer / Database administrator
http://www.weblyons.com


Re: Resend: enabling storage engine with RPM install

2009-02-19 Thread Jim Lyons
Yes, I had a slip of the mind.  The engine that was not supported by the
install is the Federated engine.  I apologize, I had a blind spot.  The
SHOW ENGINES command lists FEDERATED but has NO in the Support column.

The question, though, is how does one add an unsupported engine to an RPM
install?  Is it possible?  Otherwise I have to either compile from source or
upgrade to a version that I hope will have it.
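
For reference, a sketch of checking what a given binary supports, and of adding an engine on 5.1 or later when it ships as a loadable plugin (the plugin and .so names are placeholders; an engine that simply was not compiled in, as on this 5.0.22 build, still needs a different build or an upgrade):

SHOW ENGINES;                                   -- Support column: YES, NO, or DISABLED
INSTALL PLUGIN example SONAME 'ha_example.so';  -- 5.1+ only, plugin engines only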

On Thu, Feb 19, 2009 at 10:08 PM, Baron Schwartz ba...@xaprb.com wrote:

 Where on earth did you get an RPM that doesn't have InnoDB support?  I
 find this unlikely.  I think it is more likely that you have some
 configuration error that's causing InnoDB to disable itself on start.
 How do you know InnoDB isn't supported?  And by isn't supported I
 mean isn't compiled into mysqld.

 Per your comment that InnoDB wasn't installed with mysqld -- it is not
 separate.  It's built into the /usr/sbin/mysqld binary (or whatever
 that is on your system).  For example, look at this:

 strings /usr/sbin/mysqld | grep -i innodb

 If you see a bunch of lines starting with InnoDB: blah blah, you
 have a binary that includes InnoDB, and it's just disabled for some
 reason.

 Baron

 On Thu, Feb 19, 2009 at 6:58 PM, Jim Lyons jlyons4...@gmail.com wrote:
  Sorry, but I'm resending because I made a mistake in terminology and want
 to
  be clear.  The problem isn't that innodb is DISABLED on the database.
  The
  innodb engine is not supported by the database.
 
  We have 5.0.22 installed on a test machine and for some reason the innodb
  storage engine was not installed with it.  We install from RPMs so I'm
 not
  sure how to install the storage engine. If we compiled ourselves, we'd
  recompile but that's not an option.
 
  Does anyone know how to install a storage engine once mysql's been
 installed
  by an RPM?  How does one make the selections in the first place with
 RPMs?
  We've always just taken what we got and it was sufficient.
 
  Thanks,
  Jim
 
  --
  Jim Lyons
  Web developer / Database administrator
  http://www.weblyons.com
 



 --
 Baron Schwartz, Director of Consulting, Percona Inc.
 Our Blog: http://www.mysqlperformanceblog.com/
 Our Services: http://www.percona.com/services.html




-- 
Jim Lyons
Web developer / Database administrator
http://www.weblyons.com


Re: Resend: enabling storage engine with RPM install

2009-02-19 Thread Jim Lyons
How does one do that in my.cnf?

It is possible to disable a supported engine by screwing up the my.cnf
configuration.  For example, I once pointed the InnoDB data file to a
directory that still had root as its owner.  The InnoDB engine appeared as
DISABLED in the SHOW ENGINES output, but it was supported.  I chown'd the
directory to the proper owner and it was fine.  (This may be why I mistyped
InnoDB in my first post - I had InnoDB on my mind.)

In the case of the FEDERATED engine in my database, it's not supported at
all.  I don't think I can turn support on or off in my.cnf.  I would love
it, if I could, though.

On Thu, Feb 19, 2009 at 10:47 PM, chaim.rie...@gmail.com wrote:

 Perhaps you disabled it via my.cnf


 Sent via BlackBerry from T-Mobile

 -Original Message-
 From: Jim Lyons jlyons4...@gmail.com

 Date: Thu, 19 Feb 2009 22:45:01
 To: MySQLmysql@lists.mysql.com
 Subject: Re: Resend: enabling storage engine with RPM install


 Yes, I had a slip of the mind.  The engine that was not supported by the
 install is the Federated engine.  I apologize, I had a blind spot.  The
 SHOW ENGINES command lists FEDERATED but has NO in the Support column.

 The question, though, is how does one add an unsupported engine to an RPM
 install?  Is it possible?  Otherwise I have to either compile from source
 or
 upgrade to a version that I hope will have it.

 On Thu, Feb 19, 2009 at 10:08 PM, Baron Schwartz ba...@xaprb.com wrote:

  Where on earth did you get an RPM that doesn't have InnoDB support?  I
  find this unlikely.  I think it is more likely that you have some
  configuration error that's causing InnoDB to disable itself on start.
  How do you know InnoDB isn't supported?  And by isn't supported I
  mean isn't compiled into mysqld.
 
  Per your comment that InnoDB wasn't installed with mysqld -- it is not
  separate.  It's built into the /usr/sbin/mysqld binary (or whatever
  that is on your system).  For example, look at this:
 
  strings /usr/sbin/mysqld | grep -i innodb
 
  If you see a bunch of lines starting with InnoDB: blah blah, you
  have a binary that includes InnoDB, and it's just disabled for some
  reason.
 
  Baron
 
  On Thu, Feb 19, 2009 at 6:58 PM, Jim Lyons jlyons4...@gmail.com wrote:
   Sorry, but I'm resending because I made a mistake in terminology and
 want
  to
   be clear.  The problem isn't that innodb is DISABLED on the database.
   The
   innodb engine is not supported by the database.
  
   We have 5.0.22 installed on a test machine and for some reason the
 innodb
   storage engine was not installed with it.  We install from RPMs so I'm
  not
   sure how to install the storage engine. If we compiled ourselves, we'd
   recompile but that's not an option.
  
   Does anyone know how to install a storage engine once mysql's been
  installed
   by an RPM?  How does one make the selections in the first place with
  RPMs?
   We've always just taken what we got and it was sufficient.
  
   Thanks,
   Jim
  
   --
   Jim Lyons
   Web developer / Database administrator
   http://www.weblyons.com
  
 
 
 
  --
  Baron Schwartz, Director of Consulting, Percona Inc.
  Our Blog: http://www.mysqlperformanceblog.com/
  Our Services: http://www.percona.com/services.html
 



 --
 Jim Lyons
 Web developer / Database administrator
 http://www.weblyons.com




-- 
Jim Lyons
Web developer / Database administrator
http://www.weblyons.com


Re: Resend: enabling storage engine with RPM install

2009-02-19 Thread Jim Lyons
What configuration parameter in the config file could have an impact on
whether a particular storage engine is supported?  The binaries are built
before the config is even used.  I don't mind posting the file, but I don't
see the point.

The question is pretty simple: can one add a storage engine to an RPM
install?  The config file follows.  Thanks.

[client]
socket  = /db/data/mysql.sock
port= 3306
[mysqld]
socket  = /db/data/mysql.sock
datadir = /db/data
tmpdir  = /db/tmp
port= 3306
user= mysql
max_allowed_packet = 1024M
lower_case_table_names=0
log-bin=/db/binlog/tlsgriffin01-bin
sync_binlog = 1
expire_logs_days = 14
log-error=/db/log/tlsgriffin01-err.log
log-slow-queries=/db/log/tlsgriffin01-slow.log
long_query_time = 1
log_warnings=2
server-id = 101
skip-slave-start
sysdate-is-now
log_bin_trust_function_creators=1
skip-external-locking
key_buffer_size = 128M
query_cache_size = 256M
table_cache = 4096
thread_concurrency = 14
thread_cache_size = 0
open_files_limit = 10240
max_connections = 1000
skip-bdb
read_buffer_size = 64M
read_rnd_buffer_size = 64M
sort_buffer_size = 64M
tmp_table_size = 512M
max_heap_table_size = 250M
myisam_sort_buffer_size = 64M
myisam_max_sort_file_size = 20G
innodb_data_home_dir = /db/innodb
innodb_log_group_home_dir = /db/innodb
innodb_data_file_path = ibdata1:10M:autoextend:max:4G
innodb_log_files_in_group = 2
innodb_log_file_size = 256M
innodb_file_per_table
innodb_buffer_pool_size = 400M
innodb_additional_mem_pool_size = 80M
transaction-isolation = READ-COMMITTED
[mysqldump]
quick
max_allowed_packet = 16M
[mysql]
no-auto-rehash
[myisamchk]
key_buffer = 64M
sort_buffer_size = 16M
read_buffer = 16M
write_buffer = 16M
[mysqlhotcopy]
interactive-timeout
[manager]
socket = /db/log/manager.sock
pid-file=/db/log/manager.pid
password-file = /db/data/.mysqlmanager.passwd
monitoring-interval = 60
port = 1998
bind-address = tlsgriffin01
[mysql.server]
use-manager
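
To check whether the build itself includes FEDERATED, rather than it
being switched off by configuration, something along these lines should
work on 5.0 (a sketch only; variable names shift a bit between
versions):

mysql> SHOW VARIABLES LIKE 'have_federated_engine';
mysql> SHOW ENGINES;

If the variable is missing or reports NO, no my.cnf setting will enable
the engine -- that particular binary was built without it, and the only
options are a package built with it or a newer release.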


On Thu, Feb 19, 2009 at 11:24 PM, chaim.rie...@gmail.com wrote:

 Can you post you my.cnf please

 Sent via BlackBerry from T-Mobile

 --
 *From*: Jim Lyons
 *Date*: Thu, 19 Feb 2009 23:06:33 -0600
 *To*: chaim.rie...@gmail.com

 *Subject*: Re: Resend: enabling storage engine with RPM install
 How does one do that in my.cnf?

 It is possible to disable a supported engine by screwing up the my.cnf
 configuration.  For example, I once pointed the InnoDB data file to a
 directory that still had root as its owner.  The InnoDB engine appeared as
 DISABLED in the SHOW ENGINES output, but it was supported.  I chown'd the
 directory to the proper owner and it was fine.  (This may be why I mistyped
 InnoDB in my first post - I had InnoDB on my mind.)

 In the case of the FEDERATED engine in my database, it's not supported at
 all.  I don't think I can turn support on or off in my.cnf.  I would love
 it, if I could, though.

 On Thu, Feb 19, 2009 at 10:47 PM, chaim.rie...@gmail.com wrote:

 Perhaps you disabled it via my.cnf


 Sent via BlackBerry from T-Mobile

 -Original Message-
 From: Jim Lyons jlyons4...@gmail.com

 Date: Thu, 19 Feb 2009 22:45:01
 To: MySQL mysql@lists.mysql.com
 Subject: Re: Resend: enabling storage engine with RPM install


 Yes, I had a slip of the mind.  The engine that was not supported by the
 install is the Federated engine.  I apologize, I had a blind spot.  The
 SHOW ENGINES command lists FEDERATED but has NO in the Support column.

 The question, though, is how does one add an unsupported engine to an RPM
 install?  Is it possible?  Otherwise I have to either compile from source
 or
 upgrade to a version that I hope will have it.

 On Thu, Feb 19, 2009 at 10:08 PM, Baron Schwartz ba...@xaprb.com wrote:

  Where on earth did you get an RPM that doesn't have InnoDB support?  I
  find this unlikely.  I think it is more likely that you have some
  configuration error that's causing InnoDB to disable itself on start.
  How do you know InnoDB isn't supported?  And by isn't supported I
  mean isn't compiled into mysqld.
 
  Per your comment that InnoDB wasn't installed with mysqld -- it is not
  separate.  It's built into the /usr/sbin/mysqld binary (or whatever
  that is on your system).  For example, look at this:
 
  strings /usr/sbin/mysqld | grep -i innodb
 
  If you see a bunch of lines starting with InnoDB: blah blah, you
  have a binary that includes InnoDB, and it's just disabled for some
  reason.
 
  Baron
 
  On Thu, Feb 19, 2009 at 6:58 PM, Jim Lyons jlyons4...@gmail.com
 wrote:
   Sorry, but I'm resending because I made a mistake in terminology and
 want
  to
   be clear.  The problem isn't that innodb is DISABLED on the
 database.
   The
   innodb engine is not supported by the database.
  
   We have 5.0.22 installed on a test machine and for some reason the
 innodb
   storage engine was not installed with it.  We install from RPMs so I'm
  not
   sure how to install the storage engine

what is best upgrade plan for large, replicating database?

2009-02-16 Thread Jim Lyons
I am beginning the process of upgrading a large mysql 5.0 database to 5.1.
Specifically, I'm going from 5.0.56 enterprise to 5.1.30 community (which I
think is now the GA version?)

The db is about 4 terabytes.  This means making a dump of the database,
installing a new version of the dbms, and reloading the database, would take
too long.  Also the databases are replicating.  I believe I need to upgrade
the slaves first, then go to the master.  I've read about the possibility of
having to rebuild indexes, mainly due to collation changes.  I didn't see
any compatibility issues in going to 5.1.30, although I can certainly
rebuild some indexes if I need to.

Can I (after backing up the data) just install the new mysql code, run
mysql_upgrade, and reasonably expect the database to work?
What is the time involved?
If I can just install new binaries and point them to the old data, the time
taken should not depend on the size of the database, should it (unless I
have to rebuild some indexes)?
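
For reference, an in-place upgrade of one slave along those lines might
look roughly like the following.  The package name, paths and options
are placeholders only, and the whole sequence should be rehearsed on a
copy of the data first:

mysql -e 'STOP SLAVE'
/etc/init.d/mysql stop
rsync -a /db/data/ /backup/data-pre-5.1/        # keep the old datadir
rpm -Uvh MySQL-server-community-5.1.30-0.glibc23.x86_64.rpm
/etc/init.d/mysql start
mysql_upgrade -u root -p                        # checks tables, fixes system tables
mysql -e 'START SLAVE'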

Thanks,

Jim

-- 
Jim Lyons
Web developer / Database administrator
http://www.weblyons.com


how can trace stored procedure usage?

2009-01-26 Thread Jim Lyons
I am trying to track the usage of stored procedures on our system.  My
solution so far is to parse the general log for CALL statements.  This works
well for procedures that are called from the command line, but the general
log does not seem to report procedures called from within other procedures.

Is there a way to do that?  Is there a better overall way to track procedure
calls?  What about function calls?
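
One low-tech workaround, if editing the routines is acceptable, is to
have each procedure record its own invocation in an audit table.  The
table and names below are only an illustration:

CREATE TABLE IF NOT EXISTS proc_audit (
  proc_name VARCHAR(64),
  called_at DATETIME,
  caller    VARCHAR(80)
);

-- first statement inside each routine to be tracked:
INSERT INTO proc_audit VALUES ('my_proc', NOW(), CURRENT_USER());

This catches nested calls as well, since the INSERT runs however the
routine was invoked, at the cost of one extra small write per call.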

Thanks,
Jim

-- 
Jim Lyons
Web developer / Database administrator
http://www.weblyons.com


Re: How do you backup HUGE tables?

2009-01-23 Thread Jim Lyons
We have a very large, multi-terabyte database with individual tables that
are over 100Gig.  We have it on a Red Hat Linux system and we set up logical
volumes, take LVM snapshots, then use rsync to move the data over.  This
works well and is a lot faster than dumping, and restoring is certainly
faster as well.
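
In case it helps anyone searching the archive, the snapshot sequence is
roughly the following; volume names, sizes and mount points are only
examples:

mysql> FLUSH TABLES WITH READ LOCK;    -- keep this session open
# in another shell, while the lock is held:
lvcreate --snapshot --size 16G --name mysql_snap /dev/vg0/mysql_data
mysql> UNLOCK TABLES;
mount /dev/vg0/mysql_snap /mnt/mysql_snap
rsync -a /mnt/mysql_snap/ backuphost:/backups/mysql/
umount /mnt/mysql_snap
lvremove -f /dev/vg0/mysql_snap

The read lock mainly pins a consistent binary log position and flushes
MyISAM tables; InnoDB files on the snapshot come back the same way they
would after a crash, via normal recovery on startup.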

On Fri, Jan 23, 2009 at 3:18 PM, Daevid Vincent dae...@daevid.com wrote:

 We have some INNODB tables that are over 500,000,000 rows. These
 obviously make for some mighty big file sizes:

 -rw-rw 1 mysql mysql 73,872,179,200 2009-01-22 02:31 ibdata1

 This can take a good amount of time to copy even just the file; a
 mysqldump can take hours to export, and the import took 3 days (with the
 usual intelligent optimizations: locks, inserts, keys, etc.). Yowza!

 So, it's not really feasible or ideal to do this daily. We'd like to do
 some kind of daily diff and weekly or monthly full backup. Also, as
 any good backup strategy involves off-site backups, it's not very
 convenient or even feasible to transfer 80+ GB over our pipe every day
 (which according to iperf is getting around 11 MBytes/sec from our data
 center to our office).

 Looking for ideas as to what others are doing to back up their very large
 data sets?

 We do have a master/slave setup. We're thinking of adding two more
 slaves that are read-only and not accessed via the web at all; they would
 just sit there, effectively acting as backups. One would be offsite in
 another building, and the idea is that we'd trickle in maybe 100k per
 minute as the data is inserted into the real master/slave pair, so the
 load should be negligible on our intranet/internet links.
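
 A minimal sketch of the my.cnf entries such a backup-only slave might
 carry, with placeholder values:

 [mysqld]
 server-id = 103    # any id unique within the replication setup
 read_only = 1      # slave SQL thread still applies changes; app accounts cannot write

 Note that read_only does not block users with the SUPER privilege, so
 the application accounts on that box should not have it.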


 ---
 I've done some research here, but nothing stands out as the winner...
 but I'm open to any of these ideas if you can make a strong case for
 them.

 http://dev.mysql.com/doc/refman/5.1/en/mysqlhotcopy.html
 mysqlhotcopy works only for backing up MyISAM and ARCHIVE tables.

 http://dev.mysql.com/doc/refman/5.0/en/innodb-backup.html
 InnoDB Hot Backup is an online backup tool you can use to backup your
 InnoDB database while it is running. InnoDB Hot Backup does not require
 you to shut down your database and it does not set any locks or disturb
 your normal database processing. InnoDB Hot Backup is a non-free
 (commercial) add-on tool with an annual license fee per computer on
 which the MySQL server is run.
 http://www.innodb.com/hot-backup/
 [not loving that it's a commercial tool]


 http://dev.mysql.com/doc/refman/5.0/en/replication-implementation-details.html

 http://dev.mysql.com/doc/refman/5.1/en/backup.html
 read the comments: "You can also take very fast online or hot backups if
 you have linux volume management or LVM" ... I knew there was a way to
 use LVM for backups somehow. Maybe a solution for us?
 http://www.mysqlperformanceblog.com/?s=backup

 http://www.mysqlperformanceblog.com/2006/08/21/using-lvm-for-mysql-backup-and-replication-setup/

 Some other backup solutions maybe:
 http://www.debianhelp.co.uk/mysqlscript.htm
 http://mysqlbackup.phpmagazine.net/ (seems dead, but maybe it just works
 so well, no need to improve it?)
 http://www.ozerov.de/bigdump.php




-- 
Jim Lyons
Web developer / Database administrator
http://www.weblyons.com

