Re: Doubt with stored procedures

2013-04-17 Thread Johan De Meersman
- Original Message -
 From: Antonio Fernández Pérez antoniofernan...@fabergames.com
 Subject: Doubt with stored procedures
 
 I have a doubt about stored procedure functionality. Is it possible that
 a stored procedure works with all databases on the server? I have
 created a stored procedure on dataBaseA and it also works with dataBaseB. Is that
 correct, independently of the user privileges defined?

Yes, that's all part of the MySQL magic: any stored procedure ever defined will 
work on any server ever to exist.

Well, either that, or you might want to provide us with a little more detail, 
including but not limited to:
 * Is this MySQL or NDB Cluster?
 * Software version?
 * What is the relation between the servers (master, slave, master/master, ...)?
 * How exactly did you define the SP, using what user, etc.?
 * How are you calling the SP, using what user, ...?
 * ...


That being said, using a set of default assumptions, I can tell you that both 
GRANT and CREATE PROCEDURE are replicated, so both all procedures and all users 
should logically exist on both sides of a replication setup.

-- 
Linux Kriek Wanderung
April 19-21, 2013
http://www.tuxera.be/lkw

--
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe: http://lists.mysql.com/mysql



Re: Doubt with stored procedures

2013-04-17 Thread hsv
 2013/04/17 14:16 +0200, Antonio Fernández Pérez 
I have a doubt about stored procedure functionality. Is it possible that a
stored procedure works with all databases on the server? I have created a
stored procedure on dataBaseA and it also works with dataBaseB. Is that
correct, independently of the user privileges defined? 

It is the default assumption that a procedure within a database is meant for 
use within that database, but one can call a procedure from any of the 
databases by qualifying the name--and the MySQL command SHOW PROCEDURE STATUS 
shows all procedures. The only question is the procedure's use of variables: if 
they refer only to the arguments, it is of no importance whence it is called. 
This is documented in the manual (sql-syntax.html#use):

USE statements within stored routines are not permitted. When a routine is 
invoked, an implicit USE db_name is performed (and undone when the routine 
terminates). This causes the routine to have the given default database while it 
executes. References to objects in databases other than the routine default 
database should be qualified with the appropriate database name. 
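For instance, a minimal sketch (the database and procedure names here are invented):

```sql
-- Created while dataBaseA was the default database:
CREATE PROCEDURE dataBaseA.hello() SELECT 'hi';

-- Callable from any other default database by qualifying the name:
USE dataBaseB;
CALL dataBaseA.hello();

-- Lists procedures from all databases:
SHOW PROCEDURE STATUS;
```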





Re: Doubt with stored procedures

2013-04-17 Thread Antonio Fernández Pérez
Hi everybody,

Firstly thanks for your reply.

I'm using MySQL on only one server (I have only one), without replication
and without a cluster configuration. I have defined the stored procedure as a
normal user, not as root, and the call is made as a normal user also. This
occurs with MySQL 5.1.49 on Debian 6.

This normal user has CREATE PROCEDURE privilege and EXECUTE privilege.
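For reference, grants along these lines would match that description (the user and host names are hypothetical; note that the privilege MySQL actually defines for creating procedures is named CREATE ROUTINE):

```sql
-- EXECUTE on dataBaseA.* lets this user CALL dataBaseA procedures
-- regardless of the current default database.
GRANT CREATE ROUTINE, EXECUTE ON dataBaseA.* TO 'appuser'@'localhost';
```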

Thanks.

Best regards,

Antonio.


RE: Doubt Regd. Circular Replication In Mysql

2012-09-24 Thread Stillman, Benjamin
Sounds like you're missing the following in your my.cnf on server B (probably 
all of them):

replicate-same-server-id = 0
log-slave-updates

While you're checking, might as well make sure your auto-increment settings 
are in there and correct also.




-Original Message-
From: Adarsh Sharma [mailto:eddy.ada...@gmail.com]
Sent: Monday, September 24, 2012 10:23 AM
To: mysql@lists.mysql.com
Subject: Doubt Regd. Circular Replication In Mysql

Hi all,

Today I set up circular replication between three nodes A, B & C successfully.
I expect whatever is written on A to be propagated to B & then propagated to C,
because the structure is like below:

A -> B -> C -> A

I created a sample table stag in the test database on A and inserted a few
records that were also replicated to B, but not to C. Now when I created the
same table on C, it shows errors in the SHOW SLAVE STATUS\G output on node A.

I need this setup because all these servers are in different colos, so that
whatever is written to any node is replicated to the others as well, for one
database.

I followed the below link for setting up this circular replication:

http://www.howtoforge.com/setting-up-master-master-replication-on-four-nodes-with-mysql-5-on-debian-etch-p2

Is it possible to achieve what I need, or do I need to create a multi-master
setup with 2 nodes only?

Thanks



Notice: This communication may contain privileged and/or confidential 
information. If you are not the intended recipient, please notify the sender by 
email, and immediately delete the message and any attachments without copying 
or disclosing them. LBI may, for any reason, intercept, access, use, and 
disclose any information that is communicated by or through, or which is stored 
on, its networks, applications, services, and devices.




RE: Doubt Regd. Circular Replication In Mysql

2012-09-24 Thread Stillman, Benjamin
replicate-same-server-id = 0 keeps MySQL from replicating binary log entries 
from itself. For instance, here's a rough overview:

You write to Server A.
Server A writes that to its binary log.
Server B reads Server A's binary log and completes the same thing.
Because log-slave-updates is enabled, Server B writes it to its own binary log.
Server C reads Server B's binary log and completes the same thing.
Again, with log-slave-updates enabled, Server C writes it to its own binary log.
Server A reads Server C's binary log.

Here's where the issue starts. Without replicate-same-server-id = 0, Server A 
will complete the insert/update/delete as it reads it from Server C's binary 
log. However, this query originated from Server A, so it's just going to do it 
again. Then it's again replicated to Server B, Server C, and so on. This can 
create a loop and/or break replication. For instance, if you drop a table on A, 
it replicates across and back to A. Replication will error out because when it 
tries to drop the same table again, it no longer exists. You need 
replicate-same-server-id = 0 set so that it knows not to execute any binary log 
entries with its own server ID.
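A config sketch for one member of such a ring (the values are illustrative, and the auto-increment settings assume a three-node ring):

```ini
[mysqld]
server-id                = 2   # must be unique per ring member
log-bin                  = mysql-bin
log-slave-updates              # re-log replicated events for the next hop
auto-increment-increment = 3   # one slot per ring member
auto-increment-offset    = 2   # this server's slot
```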



From: Adarsh Sharma [mailto:eddy.ada...@gmail.com]
Sent: Monday, September 24, 2012 10:39 AM
To: Stillman, Benjamin
Subject: Re: Doubt Regd. Circular Replication In Mysql

Yes, I fixed it, but I solved the issue by enabling log-slave-updates only.
Why do we use the parameter below?
replicate-same-server-id = 0

Yes, I configured the auto-increment settings properly.

Thanks
On Mon, Sep 24, 2012 at 8:03 PM, Stillman, Benjamin 
bstill...@limitedbrands.com wrote:
Sounds like you're missing the following in your my.cnf on server B (probably 
all of them):

replicate-same-server-id = 0
log-slave-updates




Re: Doubt Regd. Circular Replication In Mysql

2012-09-24 Thread Shawn Green

Hello Benjamin,

On 9/24/2012 10:52 AM, Stillman, Benjamin wrote:

replicate-same-server-id = 0 keeps MySQL from replicating binary log entries 
from itself. [...] You need replicate-same-server-id = 0 set so that it knows 
not to execute any binary log entries with its own server ID.



Not true.

Replication, by default, operates with --replicate-same-server-id=0. The 
only time you need to change it to 1 is for certain recovery 
scenarios. We added this variable specifically to allow for exceptions 
to the rule that every server in a replication chain (or ring) must have 
its own unique --server-id value.


It's not required for normal operations. In fact, we recommend you do not 
set it at all. Each server will automatically ignore any event that 
originates from a server with the same --server-id setting unless you 
specifically set --replicate-same-server-id=1.
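To check the effective values on a running server (replicate_same_server_id is exposed as a read-only system variable on recent versions; on older ones you may only see server_id):

```sql
SHOW VARIABLES LIKE 'server_id';
SHOW VARIABLES LIKE 'replicate_same_server_id';
```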


Regards
--
Shawn Green
MySQL Principal Technical Support Engineer
Oracle USA, Inc. - Hardware and Software, Engineered to Work Together.
Office: Blountville, TN






Re: Doubt Regd. Circular Replication In Mysql

2012-09-24 Thread Stillman, Benjamin
I stand corrected and apologize. Numerous multi-master setup descriptions I've 
read have said to set this (including the one linked in the original question). 
However, as you said, the entry in the manual clearly says it defaults to 0. 
Learn something new every day. Thanks Shawn.



On Sep 24, 2012, at 2:05 PM, Shawn Green shawn.l.gr...@oracle.com wrote:

 replicate-same-server-id = 0



Notice: This communication may contain privileged and/or confidential 
information. If you are not the intended recipient, please notify the sender by 
email, and immediately delete the message and any attachments without copying 
or disclosing them. LBI may, for any reason, intercept, access, use, and 
disclose any information that is communicated by or through, or which is stored 
on, its networks, applications, services, and devices.




RE: Doubt Regd. Circular Replication In Mysql

2012-09-24 Thread Rick James
Don't use circular replication with more than 2 servers.  If one of your 3 
crashes and cannot be recovered, you will have a nightmare on your hands to fix 
the broken replication.




Re: Doubt Regd. Circular Replication In Mysql

2012-09-24 Thread Adarsh Sharma
Agreed with your point, Rick. Right now I am maintaining my datadir &
logging on EBS volumes, so if any of the instances goes down, we will
launch a new instance & use the existing EBS volumes and start replication
again.

I think it will start automatically from the point where it went down and
start replicating again.

Can we use any other prevention for automating the failover?

Thanks





Re: Doubt regarding Mysqlsump

2011-06-09 Thread Karen Abgarian
Hello, comments inline.  Regards, Karen.
 
 
 I checked up in the mean time, and it does not make a truly consistent backup 
 of MyISAM - it locks all tables - yes, ALL tables - and then copies the 
 files. Given that MyISAM doesn't support transactions, that means that any 
 transactions (that is, sets of related queries) in progress will be copied 
 in the state they were, and the backup may contain inconsistent data like 
 unreferenced rows.

This, however, means that the resulting snapshot will be consistent. The fact of 
taking a backup really cannot create more consistency than what the application 
has by design. If the application inserts related rows into, say, two related 
tables without transactions, it kind of assumes that it is prepared to handle 
the case when updates make it to one table and not to the other. If it is not 
prepared to handle that, it means that any client crash will create inconsistent 
data, not to mention a server crash.

 
 I am not using xtrabackup, but I think the --single-transaction & -q
 options may solve this issue.
 --single-transaction, as you say, only affects InnoDB tables.
 
 --lock-all-tables will give you the same behaviour for MyISAM as xtrabackup, 
 but is likely to be incompatible with --single-transaction.
 
 Neither of those solves your consistency issue.
 
 
Somebody mentioned the xtrabackup to me.  How different is it from another 
wrapper program with the ibbackup at the core?   I will be very curious to know 
if there exists a product that does NOT do the following at the basic technical 
level:

- FLUSH TABLES WITH READ LOCK;  -- locks all tables
- take backup

With the products of this kind, all that can be done is to accept that it does 
what it does.   Which does not exclude the options like reducing the downtime 
with say split-mirror-like snapshots or creating a replica to experience the 
downtime there.  

If I let my fantasy run wild about what an alternative could be, it will be 
something like this:

- run a cycle for all MyISAM tables
- for each table, lock it for writes
- note the position in the binary log, record this position
- read the table into the backup
- release the lock 

To recover this, do the following:

- restore the table backups
- start reading the binlog from the first recorded position, record by record
- determine which table the record is a change for
- if the position is greater than the position recorded for the table, apply 
the change; otherwise don't.

The result will be the locks taken per table, which is equally bad compared to 
the normal access pattern for MyISAM tables.  
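One iteration of that per-table cycle, sketched in SQL (table and file names are invented; the lock must stay held in the same session while the copy runs, since LOCK TABLES is released when the session ends):

```sql
LOCK TABLES t1 READ;
SHOW MASTER STATUS;                          -- record File/Position for t1
SELECT * FROM t1 INTO OUTFILE '/tmp/t1.txt'; -- copy while locked
UNLOCK TABLES;
-- repeat for the next MyISAM table
```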

 The answer to the question is there a way to take consistent backups of 
 MyISAM tables without stopping the application? is no, there is not. The 
 binlog backup strategy I roughly outlined earlier can dramatically decrease 
 your application's downtime, however.
 

If we think about it, a MyISAM table by definition is a table whose consistency 
is based on whole-table locks. Considering that the backup is really 
a request to read the whole table, locking out everybody else, the question 
about backup can well be rephrased as is a MyISAM table a MyISAM table? The 
answer to this is a firm yes :-).




Re: Doubt regarding Mysqlsump

2011-06-09 Thread Johan De Meersman
- Original Message -
 From: Karen Abgarian a...@apple.com
 
 This however means that the resulting snapshot will be consistent.
  The fact of taking a backup really cannot create more consistency
 than what the application has by design.  If the application inserts
 related rows in say two related tables without transactions, it kind
 of assumes that it is prepared to handle the case when updates make
 it to one table and not to the other.   If it is not prepared to
 handle it, it means that any client crash will create inconsistent
 data, not to mention the server crash.

True, but I have never seen an application that checks for inconsistency in 
its tables. Making sure all users have stopped using the app ensures no 
in-flight transactions, and then you have a consistent database - save for 
application crashes, of course, as you mention. MyISAM was never designed for 
data consistency, so it is pretty hard to get, indeed. The original question 
asked for consistent backups, so I'm trying to give the best there is :-)

Like the physics jokes go, assume a perfectly flat surface without friction...

 Somebody mentioned the xtrabackup to me.  How different is it from
 another wrapper program with the ibbackup at the core?   I will be

Not very, I suspect.

 very curious to know if there exists a product that does NOT do the
 following at the basic technical level:
 
 - FLUSH TABLES WITH READ LOCK;  -- locks all tables
 - take backup

You only need to FLUSH TABLES if you want the datafiles instead of an SQL dump. 
In the latter case, you can just lock the tables you will back up. A read lock 
will give you an unchanging view of the locked tables, both on MyISAM and 
InnoDB. On MyISAM, that read lock will by default prevent writes, with the 
exception of inserts if there are no holes in the table; for InnoDB, a read lock 
will simply give a view at the then-current SCN and allow further writes to 
simply go on.

If the database was in a consistent state at the time of the lock, you can take 
a consistent backup at that point.

 With the products of this kind, all that can be done is to accept
 that it does what it does.   Which does not exclude the options like
 reducing the downtime with say split-mirror-like snapshots or
 creating a replica to experience the downtime there.

Correct, but with the same caveats about consistency.

 The result will be the locks taken per table, which is equally bad
 compared to the normal access pattern for MyISAM tables.

Yeps. Which is why you try to

 * take a full backup of the db in a consistent state (say, once a month) and 
switch the binary logs;
 * then switch the binary logs at a point where the database is in a consistent 
state, and copy all but the newly active one.

You can then restore the full snapshot, and be sure that at the end of each 
sequential binlog set you apply, it is again consistent.

As I indicated, ZRM is one product which does exactly that - save for ensuring 
the DB is consistent, of course - that's up to you.
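The full-plus-binlog cycle above, sketched as commands (paths are illustrative and ZRM's own mechanics are not shown; --flush-logs and --master-data are standard mysqldump options):

```sh
# Monthly: full dump, rotating the binary log at the dump point.
mysqldump --all-databases --flush-logs --master-data=2 > full.sql

# Between fulls: rotate, then archive every binlog except the newly active one.
mysql -e "FLUSH LOGS"
ls /var/lib/mysql/mysql-bin.[0-9]* | head -n -1 | xargs -I{} cp {} /backup/   # GNU head
```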


 If we think about it, a MyISAM table by definition is a table, the
 consistency of which is based on whole table locks.  Considering

A single table is always consistent. Data inconsistency occurs in sets of 
interrelated tables, in other words, on the database level.


-- 
Bier met grenadyn
Is als mosterd by den wyn
Sy die't drinkt, is eene kwezel
Hy die't drinkt, is ras een ezel




Re: Doubt regarding Mysqlsump

2011-06-09 Thread Karen Abgarian
Hi!  Inline, again. 
On Jun 9, 2011, at 4:58 AM, Johan De Meersman wrote:

 True, but I have never seen an application that checks for inconsistency in 
 it's tables. Making sure all users have stopped using the app ensures no 
 in-flight transactions, and then you have a consistent database - save 
 application crashes, of course, as you mention. MyISAM was never designed for 
 data consistency, so it is pretty hard to get, indeed. The original question 
 was asking for consistent backups, so I'm trying to give the best there is :-)
 
 Like the physics jokes go, assume a perfectly flat surface without 
 friction...

Ha, I remember that one about friction :)  What I meant to say about internal 
application consistency checks: applications are known to skip them, but 
they had better have a plan for what to do if, say, the application goes down.  
Because we have the client/server model, it kind of implies an N:1 
relationship between the applications and the database. The inconsistency can 
be created by any one of the N application pieces, so it is quite likely.  

 very curious to know if there exists a product that does NOT do the
 following at the basic technical level:
 
 - FLUSH TABLES WITH READ LOCK;  -- locks all tables
 - take backup
 
 You only need to FLUSH TABLES if you want the datafiles instead of an SQL 
 dump. In the latter case, you can just lock the tables you will backup. A 
 read lock will give you an unchanging view of the locked tables, both on 
 MyISAM and InnoDB. On MyISAM, that read lock will by default prevent writes, 
 with the exception of inserts if there are no holes in the table; for InnoDB 
 a read lock wil simply give a view at the then-current SCN and allow further 
 writes to simply go on.
 
 If the database was in a consistent state at the time of the lock, you can 
 take a consistent backup at that point.

FLUSH TABLES really is a way to take the lock on all tables (with an added 
bonus of flushing the stuff to disk).  How else would you lock the tables, list 
them in the LOCK TABLES statement?  That could be a lot of tables.  

About the datafiles, it is a valid point.  However, the value of dumping versus 
copying files goes down with the increase of data volume.   If we have a large 
table, recovery from a dump would mean reinserting all data, that is, redoing 
all the insertion work since the application was created.   This may take a 
while.   There is also index maintenance which could take quite a lot of time.  
  The file copy thing will actually not work with InnoDB at all, so ibbackup 
really is the only way to go about that.  

 
 The result will be the locks taken per table, which is equally bad
 compared to the normal access pattern for MyISAM tables.
 
 Yeps. Which is why you try to
 
 * take a full backup of the db in a consistent state (say, once a month) and 
 switch the binary logs;
 * then switch the binary logs at a point where the database is in a 
 consistent state, and copy all but the newly active one.
 
 You can then restore the full snapshot, and be sure that at the end of each 
 sequential binlog set you apply, it is again consistent.
 
 As I indicated, ZRM is one product which does exactly that - save for 
 ensuring the DB is consistent, of course - that's up to you.
 

What is described here, is a simple recovery plan.   That could be done 
relatively easy with scripts (if it matters, that is what I did).   Perhaps the 
ZRM does something else that qualifies it as a product. 

As we know it, taking a full backup while the database is in consistent state 
may lock everybody out for a very long time.   What placing a lock on the 
tables will do is this:

- lock tables T1, T2, T3
- wait until the table T4 is unlocked (possibly a while), then place a lock on 
it
- continue with the rest of tables until all are locked.  

This means that if there is something big going on with T4, the lock will take 
quite a long time.  Meanwhile, everybody will wait.   The wait will also have 
to continue until we took a snapshot of all tables, of course.   So it is the 
time of placing the lock plus the time of taking the snapshot.  

What I described before, is the procedure based on taking individual table 
locks, avoiding the need to lock the whole database and experiencing the large 
wait.  


 
 If we think about it, a MyISAM table by definition is a table, the
 consistency of which is based on whole table locks.  Considering
 
 A single table is always consistent. Data inconsistency occurs in sets of 
 interrelated tables, in other words, on the database level.
 

I would not be so sure about always. If someone fires an update of every 
row in the table and aborts it half way, half of the records in the table will 
be updated and half of them will not be. That's a matter of 
statement-level consistency, which also could render the table inconsistent 
within itself. A MyISAM table, that is.  

Personally, to me the using of MyISAM tables means: I 

RE: Doubt regarding Mysqlsump

2011-06-09 Thread Jerry Schwartz
snip


A single table is always consistent. Data inconsistency occurs in sets of
interrelated tables, in other words, on the database level.

[JS] Not even a single table is always consistent (unless there are 
transactions). Consider a single transaction that consists of two steps:

1. Delete record A
2. Add record B

Now consider

1. Delete record A
-Backup starts-
2. Add record B

You have no idea whether or not record B will be in your backup.

Worse things can happen, of course:

1. Delete record A
!!KABOOM!!

The data in the table is not going to be consistent. You'd have to analyze the 
data to find out what did and didn't happen before the crash, back out step 1, 
and re-run that transaction in the application.


Regards,

Jerry Schwartz
Global Information Incorporated
195 Farmington Ave.
Farmington, CT 06032

860.674.8796 / FAX: 860.674.8341
E-mail: je...@gii.co.jp
Web site: www.the-infoshop.com







RE: Doubt regarding Mysqlsump

2011-06-09 Thread Wm Mussatto


On Thu, June 9, 2011 11:59, Jerry Schwartz wrote:

snip

 A single table is always consistent. Data inconsistency occurs in sets of
 interrelated tables, in other words, on the database level.

 [JS] Not even a single table is always consistent (unless there are
 transactions). Consider a single transaction that consists of two steps:

 1. Delete record A
 2. Add record B

 Now consider

 1. Delete record A
 -Backup starts-
 2. Add record B

 You have no idea whether or not record B will be in your backup.

 Worse things can happen, of course:

 1. Delete record A
 !!KABOOM!!

 The data in the table is not going to be consistent. You'd have to analyze
 the data to find out what did and didn't happen before the crash, back out
 step 1, and re-run that transaction in the application.

 Regards,

 Jerry Schwartz

Ah, LOCK TABLE, which is the MyISAM equivalent of transactions. That will deal
with the problem of backup in the middle, yes? If that won't work, then you do
need transactions. IMHO.

--
William R. Mussatto
Systems Engineer
http://www.csz.com
909-920-9154


Re: Doubt regarding Mysqlsump

2011-06-07 Thread Claudio Nanni
Hi Adarsh,

I think this is the best option for you:
http://www.percona.com/docs/wiki/percona-xtrabackup:start

There is also a commercial alternative, InnoBackup, but I imagine you like
it free.

Cheers

Claudio
On Jun 7, 2011 7:59 AM, Adarsh Sharma adarsh.sha...@orkash.com wrote:
 Dear all,

 Is it possible to take backups of a table or a complete database without
 stopping the application that continuously inserts and selects data from
 the tables?

 For taking a complete backup of a database I follow the below steps:

 1. First stop the application that inserts & modifies tables.
 2. Run the mysqldump command to back up the tables.
 3. After the complete backup, start the application.

 I know MySQL replication helps a lot to solve this issue, but I don't have
 an extra server to configure it.

 So, can I solve this issue without replication, so that I don't need to
 stop my application, & I must have consistent backups too?

 Please note that the size of the databases may be more than 100GB


 Thanks




Re: Doubt regarding Mysqlsump

2011-06-07 Thread Nilnandan Joshi
Can you tell us which storage engine you are using?

On Tue, Jun 7, 2011 at 11:30 AM, Adarsh Sharma adarsh.sha...@orkash.com wrote:

 Is it possible to take backups of a table or a complete database without
 stopping the application that continuously inserts and selects data from
 the tables?




Re: Doubt regarding Mysqlsump

2011-06-07 Thread mark carson
Hi

We use the --single-transaction switch thinking it does less locking or waiting
for a required table lock. You then get a snapshot without stopping.
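
Concretely, a hot-dump invocation along those lines (an illustrative sketch; the database name and credentials are placeholders, not taken from the thread):

```shell
# --single-transaction takes a consistent InnoDB snapshot without blocking
# writers; --quick streams rows instead of buffering each table in memory.
mysqldump --single-transaction --quick --user=backup -p somedb > somedb.sql
```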

Subject should have included the word 'hot'? Looking forward to other 
suggestions.

Mark

On 2011/06/07 08:00, Adarsh Sharma wrote:
 Dear all,
 
 Is it possible to take backups of a table or complete database without 
 stopping
 the application that continuously inserts and select data from the tables.
 
 For taking complete backup of a database I follow the below steps :-
 
 1. First stop the application that insert  modifies tables.
 2. Mysqldump command to backup the tables.
 3. After complete backup , start the application.
 
 I know Mysql-Replication helps a lot to solve this issue but I have not any
 extra server to configure it.
 
 So , Can I solve this issue without Replication so that I don't need to stop 
 my
 application  I must have consistent backups too.
 
 Please note that size of databases may be more than 100GB
 
 
 Thanks
 

-- 
Mark Carson
Managing
Integrated Product Intelligence CC (CK95/35630/23)
EMail : mcar...@ipi.co.za/ (secondary:mcar...@pixie.co.za)
Physical Address : 34 Spanish Galliard, Mooikloof, Pretoria, South Africa
snailmail : P.O. Box 36095 Menlo Park 0102, South Africa
Tel. +27 12 996 1193/1815 Fax : +27 86 672 7012 Cell : +27 83 260 8515


This e-mail may contain PRIVILEGED AND/OR CONFIDENTIAL INFORMATION intended
only for use of the addressee. If you are not the addressee, or the person
responsible for delivering it to the person addressed, you may not copy or
deliver this to anyone else. If you received this e-mail by mistake, please
do not make use of it, nor disclose its contents to anyone. Thank you for
notifying us immediately by return e-mail or telephone. INFORMATION PROVIDED
IN THIS ELECTRONIC MAIL IS PROVIDED AS IS WITHOUT WARRANTY REPRESENTATION
OR CONDITION OF ANY KIND, EITHER EXPRESS OR IMPLIED, INCLUDING BUT NOT
LIMITED TO CONDITIONS OR OTHER TERMS OF MERCHANTABILITY AND/OR FITNESS FOR A
PARTICULAR PURPOSE. THE USER ASSUMES THE ENTIRE RISK AS TO THE ACCURACY AND
THE USE OF THIS DOCUMENT.


-- 
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/mysql?unsub=arch...@jab.org



Re: Doubt regarding Mysqlsump

2011-06-07 Thread Adarsh Sharma

Both MyISAM & InnoDB engines are used.

Thanks

Nilnandan Joshi wrote:

Can you tell us which storage engine you are using?

On Tue, Jun 7, 2011 at 11:30 AM, Adarsh Sharma 
adarsh.sha...@orkash.com wrote:


Dear all,

Is it possible to take backups of a table or complete database
without stopping the application that continuously inserts and
select data from the tables.

For taking complete backup of a database I follow the below steps :-

1. First stop the application that insert  modifies tables.
2. Mysqldump command to backup the tables.
3. After complete backup , start the application.

I know Mysql-Replication helps a lot to solve this issue but I
have not any extra server to configure it.

So , Can I solve this issue without Replication so that I don't
need to stop my application  I must have consistent backups too.

Please note that size of databases may be more than 100GB


Thanks

-- 
MySQL General Mailing List

For list archives: http://lists.mysql.com/mysql
To unsubscribe:  
 http://lists.mysql.com/mysql?unsub=nilnan...@gmail.com







Re: Doubt regarding Mysqlsump

2011-06-07 Thread Johan De Meersman
- Original Message -
 From: Claudio Nanni claudio.na...@gmail.com
 
 I think this is the best option for you:
 http://www.percona.com/docs/wiki/percona-xtrabackup:start

I must say, I still haven't looked very well at xtrabackup. How does it take 
consistent backups of MyISAM tables? I didn't think that was possible without 
shutting down the applications writing to them.


Adarsh, a vital piece of information is the storage engine you're using. Are 
your tables InnoDB or MyISAM? Afaik (see my question above :-p ) your approach 
is the only one that will allow you to take a consistent backup of MyISAM 
tables; for InnoDB tables xtrabackup should work fine.


Another option that might be of interest would be taking only one full backup 
per week or month using your current procedure, and taking daily backups of the 
binary logs between those. Still no 100% guarantee of consistency, but 
everything is in there without load on your database - except for the log 
writing overhead of course - and you can do point-in-time restores up to the 
individual statement if you feel like it. Zmanda ZRM Server is one solution 
that provides that level of backup.

Come to think of it, you could use your current procedure for backing up the 
binlogs consistently, too:
 1. shut application
 2. issue flush logs to switch to a new binlog
 3. restart application
 4. backup all but the active binlog at your leisure for a consistent backup at 
that point in time

That would enable you to do a quick daily backup with minimal application 
downtime, and the added benefit of point-in-time restores. The downside of that 
approach is increased restore time: you need to first restore the latest full 
backup, and then incrementally apply each of the binlog backups to the point 
you need to restore to.
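
Steps 2 and 4 above can be sketched as a small script (the paths, the `mysql-bin` base name, and the reliance on the binlog index file are assumptions about a typical setup, not details from the thread):

```shell
#!/bin/sh
# Sketch: copy every binary log except the active one. The server appends
# each binlog's path to the index file in order, so the last line of the
# index file is the currently active log.
backup_inactive_binlogs() {
    index_file=$1    # e.g. /var/lib/mysql/mysql-bin.index
    dest=$2          # destination directory for the backup
    total=$(wc -l < "$index_file")
    # Every line except the last names an inactive binlog, safe to copy.
    head -n "$((total - 1))" "$index_file" | while read -r log; do
        cp "$log" "$dest/"
    done
}

# Typical use after 'mysqladmin flush-logs' has rotated to a fresh binlog
# (step 2 above):
#   backup_inactive_binlogs /var/lib/mysql/mysql-bin.index /backup/binlogs
```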



-- 
Bier met grenadyn
Is als mosterd by den wyn
Sy die't drinkt, is eene kwezel
Hy die't drinkt, is ras een ezel

-- 
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/mysql?unsub=arch...@jab.org



Re: Doubt regarding Mysqlsump

2011-06-07 Thread Adarsh Sharma

Johan De Meersman wrote:

- Original Message -
  

From: Claudio Nanni claudio.na...@gmail.com

I think this is the best option for you:
http://www.percona.com/docs/wiki/percona-xtrabackup:start



I must say, I still haven't looked very well at xtrabackup. How does it take 
consistent backups of MyISAM tables? I didn't think that was possible without 
shutting down the applications writing to them.
  

I am working with both MyISAM & InnoDB tables.



Adarsh, a vital piece of information is the storage engine you're using. Are your tables InnoDB or MyISAM? Afaik _*(see my question above :-p )*_ 

Not getting your point marked as bold & underline.


your approach is the only one that will allow you to take a consistent backup 
of MyISAM tables; for InnoDB tables xtrabackup should work fine.
  
I am not using xtrabackup, but I think the --single-transaction & -q options 
may solve this issue.
I know this is valid only for InnoDB tables, but anyway I have both 
MyISAM & InnoDB tables; only the InnoDB tables' size increases within 
seconds, while the MyISAM tables' size increases over hours.


Can you please explain what happens when I issue the mysqldump command 
with the --single-transaction & -q options on InnoDB tables larger than 
100 GB while, on the other hand, my application continuously inserts data 
into the tables?


A compressed backup should take 2 or more hours.

Another option that might be of interest would be taking only one full backup 
per week or month using your current procedure, and taking daily backups of the 
binary logs between those. Still no 100% guarantee of consistency, but 
everything is in there without load on your database - except for the log 
writing overhead of course - and you can do point-in-time restores up to the 
individual statement if you feel like it. Zmanda ZRM Server is one solution 
that provides that level of backup.
  


Please note that I don't have my bin-log enabled.

I can enable it if required.

Thanks

Come to think of it, you could use your current procedure for backing up the 
binlogs consistently, too:
 1. shut application
 2. issue flush logs to switch to a new binlog
 3. restart application
 4. backup all but the active binlog at your leisure for a consistent backup at 
that point in time

That would enable you to do a quick daily backup with minimal application 
downtime, and the added benefit of point-in-time restores. The downside of that 
approach is increased restore time: you need to first restore the latest full 
backup, and then incrementally apply each of the binlog backups to the point 
you need to restore to.


  

Yet I am not able to find a final answer to the original question.

Thanks



Re: Doubt regarding Mysqlsump

2011-06-07 Thread Johan De Meersman

- Original Message -
 From: Adarsh Sharma adarsh.sha...@orkash.com
 
 Not getting U'r point marked as bold  Underline

I checked up in the mean time, and it does not make a truly consistent backup 
of MyISAM - it locks all tables - yes, ALL tables - and then copies the files. 
Given that MyISAM doesn't support transactions, that means that any 
transactions (that is, sets of related queries) in progress will be copied in 
the state they were, and the backup may contain inconsistent data like 
unreferenced rows.

 I am not using xtrabackup but I think --single-transaction  -q
 options may solved this issue
  I know this  is valid only for Innodb Tables but anyway's I have
  both MyISAM  Innodb tables but only Innodb tables  size is increasing in
 seconds and MyISAM table size increased after hours.

-q is good, but not relevant to your problem. It simply prevents buffering of 
the output, which speeds up the dump a bit.

--single-transaction, as you say, only affects InnoDB tables.

--lock-all-tables will give you the same behaviour for MyISAM as xtrabackup, 
but is likely to be incompatible with --single-transaction.

Neither of those solve your consistency issue.

 Can U Please explain me what happened when I issue the mysqldump
 command with options --single-transaction  -q option on Innodb tables of
 size greater than 100 GB  on the other hand my application continuously
 insert data in the tables.

The size isn't particularly relevant; --single-transaction basically 
snapshots the InnoDB tables so you copy all tables from the same 
point-in-time. I'm not 100% sure, but I think this is a pure read lock, so the 
inserts continue to happen; your backup process will not see them, but other 
processes will. This ensures a consistent InnoDB backup.

 Please note that I don't have my bin-log enabled.
 I can enable it if required.

It is obviously necessary if you choose to do binlog backups :-p

 Yet I am not able to find the finalize the answer of the original
 question.

The answer to is there a way to take consistent backups of MyISAM tables 
without stopping the application is no, there is not. The binlog backup 
strategy I roughly outlined earlier can dramatically decrease your 
application's downtime, however.



-- 
Bier met grenadyn
Is als mosterd by den wyn
Sy die't drinkt, is eene kwezel
Hy die't drinkt, is ras een ezel

-- 
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/mysql?unsub=arch...@jab.org



Re: Doubt Regarding Truncate

2011-02-11 Thread Claudio Nanni
The only case in which you recover the disk space automatically is with
MyISAM tables;
for other storage engines it depends on the specific engine.
And the only guaranteed way to get new, optimized tables is a *full* dump and
reload.

Rolando's link is good.

Cheers

Claudio

2011/2/11 Rolando Edwards redwa...@logicworks.net

 Do you have innodb_file_per_table turned on ???

 If this is off, then all your InnoDB data is going in
 /var/lib/mysql/ibdata1

 You actually need to do the following to recover all free space from all
 InnoDB tables:

 I commented on this in
 http://stackoverflow.com/questions/3927690/howto-clean-a-mysql-innodb-storage-engine/4056261#4056261


 Rolando A. Edwards
 MySQL DBA (SCMDBA)

 155 Avenue of the Americas, Fifth Floor
 New York, NY 10013
 212-625-5307 (Work)
 201-660-3221 (Cell)
 AIM  Skype : RolandoLogicWorx
 redwa...@logicworks.net
 http://www.linkedin.com/in/rolandoedwards

 -Original Message-
 From: Adarsh Sharma [mailto:adarsh.sha...@orkash.com]
 Sent: Friday, February 11, 2011 12:17 AM
 To: mysql@lists.mysql.com
 Subject: Doubt Regarding Truncate

 Dear all,

 Today I performed  the below commands on a 553 GB InnoDb table .

 truncate table page_crawled;
 optimize table page_crawled;

 But I couldn't find the free space available after truncation. The
 below disk usage is the same as before truncation:


 /dev/sda2  29G  9.5G   18G  36% /
 /dev/sda1  99M   11M   84M  11% /boot
 /dev/sda5  69G   35G   32G  52% /hdd1-1
 /dev/sdb1 274G  225G   36G  87% /hdd2-1
 /dev/sdc5 274G  225G   36G  87% /hdd3-1
 /dev/sdd5 274G  212G   49G  82% /hdd4-1
 /dev/sde1 266G  161G   92G  64% /hdd5-1

 Please guide me if I 'm doing something wrong.


 Thanks  best Regards

 Adarsh Sharma


 --
 MySQL General Mailing List
 For list archives: http://lists.mysql.com/mysql
 To unsubscribe:
 http://lists.mysql.com/mysql?unsub=redwa...@logicworks.net


 --
 MySQL General Mailing List
 For list archives: http://lists.mysql.com/mysql
 To unsubscribe:
 http://lists.mysql.com/mysql?unsub=claudio.na...@gmail.com




-- 
Claudio


RE: Doubt Regarding Truncate

2011-02-10 Thread Rolando Edwards
Do you have innodb_file_per_table turned on ???

If this is off, then all your InnoDB data is going in /var/lib/mysql/ibdata1

You actually need to do the following to recover all free space from all InnoDB 
tables:

I commented on this in 
http://stackoverflow.com/questions/3927690/howto-clean-a-mysql-innodb-storage-engine/4056261#4056261
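
Concretely, that amounts to statements like these (a sketch; `page_crawled` is the table from the original mail, and the rebuild only returns space to the filesystem for tables created with file-per-table ON):

```sql
-- Is each InnoDB table stored in its own .ibd file?
SHOW VARIABLES LIKE 'innodb_file_per_table';

-- With innodb_file_per_table ON, rebuilding the now-empty table frees
-- its tablespace back to the OS; with it OFF, the space stays inside
-- the shared ibdata1 file and only a dump-and-reload reclaims it.
ALTER TABLE page_crawled ENGINE=InnoDB;
```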


Rolando A. Edwards
MySQL DBA (SCMDBA)

155 Avenue of the Americas, Fifth Floor
New York, NY 10013
212-625-5307 (Work)
201-660-3221 (Cell)
AIM  Skype : RolandoLogicWorx
redwa...@logicworks.net
http://www.linkedin.com/in/rolandoedwards

-Original Message-
From: Adarsh Sharma [mailto:adarsh.sha...@orkash.com] 
Sent: Friday, February 11, 2011 12:17 AM
To: mysql@lists.mysql.com
Subject: Doubt Regarding Truncate

Dear all,

Today I performed  the below commands on a 553 GB InnoDb table .

truncate table page_crawled;
optimize table page_crawled;

But I couldn't find the free space available after truncation. The 
below disk usage is the same as before truncation:


/dev/sda2  29G  9.5G   18G  36% /
/dev/sda1  99M   11M   84M  11% /boot
/dev/sda5  69G   35G   32G  52% /hdd1-1
/dev/sdb1 274G  225G   36G  87% /hdd2-1
/dev/sdc5 274G  225G   36G  87% /hdd3-1
/dev/sdd5 274G  212G   49G  82% /hdd4-1
/dev/sde1 266G  161G   92G  64% /hdd5-1

Please guide me if I 'm doing something wrong.


Thanks  best Regards

Adarsh Sharma


-- 
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/mysql?unsub=redwa...@logicworks.net


--
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/mysql?unsub=arch...@jab.org



Re: doubt: mysqldump in linux like windows

2008-03-21 Thread dr_pompeii

Hello,

the process of the restore is painful.
I see.

Even in Windows, I don't know why it made the backup that way
(I didn't change any option in MySQL Administrator when making the backup),
if you say that the restore would be painful.

Thanks for your time.


Moon's Father wrote:
 
 If you skip the extend insert during mysqldump ,the process of the restore
 is painful.
 
 On Fri, Mar 21, 2008 at 5:05 AM, dr_pompeii [EMAIL PROTECTED] wrote:
 

 Hi Rolando

 thanks for the reply
 it works, thanks,

 new command used

 mysqldump --opt --skip-extended-insert --password=XXX --user=root somedb
 
 /home/Someuser/somepath/A.sql

 but i see one difference

 from windows

 /*!4 ALTER TABLE `articulo` DISABLE KEYS */;
 INSERT INTO `articulo`

 (`idArticulo`,`descripcion`,`stockactual`,`precioUnitario`,`precioUnitarioVenta`,`totalValorizado`,`xtraTextUnoArticulo`,`xtraNumDosArticulo`,`idLineaCategoria`,`idMedida`)
 VALUES
  ('1-15W40','ACEITE EXTRAVIDA X GLN
 15W40','0.00','0.00','0.00','0.00','','0.00','300','11'),
  ('1-P0001','CASCOS DE MOTOS
 HONDA','0.00','0.00','0.00','0.00','','0.00','300','10'),
  ('1-P0003','LLANTAS DUNLOP LT
 265/75R216','0.00','0.00','0.00','0.00','','0.00','300','10'),
  ('1-P0014','POLOS
 HONDA','0.00','0.00','0.00','0.00','','0.00','300','10'),


 now with the new command already shown
 i have this way


 LOCK TABLES `articulo` WRITE;
 /*!4 ALTER TABLE `articulo` DISABLE KEYS */;
 INSERT INTO `articulo` VALUES ('1-15W40','ACEITE EXTRAVIDA X GLN
 15W40','0.00','0.00','0.00','0.00','','0.00','300','11');
 INSERT INTO `articulo` VALUES ('1-CHA01','KIT CHACARERO AZUL
 (GDFGO,PORTAF,LLANT-DEL/POST)','0.00','0.00','0.00','0.00','','0.00
 ','300','14');
 INSERT INTO `articulo` VALUES ('1-P0001','CASCOS DE MOTOS
 HONDA','0.00','0.00','0.00','0.00','','0.00','300','10');


 i need like the windows way, thats mean,
 for the first line for insertion before to insert all rows
 i need

 INSERT INTO `articulo`

 (`idArticulo`,`descripcion`,`stockactual`,`precioUnitario`,`precioUnitarioVenta`,`totalValorizado`,`xtraTextUnoArticulo`,`xtraNumDosArticulo`,`idLineaCategoria`,`idMedida`)
 VALUES


 i tried adding --disable-keys but wierd and undesired results

 regards


 Rolando Edwards-3 wrote:
 
  Use --skip-extended-insert as another mysqldump option
 
  -Original Message-
  From: dr_pompeii [mailto:[EMAIL PROTECTED]
  Sent: Thursday, March 20, 2008 2:43 PM
  To: mysql@lists.mysql.com
  Subject: doubt: mysqldump in linux like windows
 
 
  Hello guys
 
  i have this situation
  in widnows with the mysql administrador i make backup
  i saw in the x.sql these lines for example
 
 
  /*!4 ALTER TABLE `articulo` DISABLE KEYS */;
  INSERT INTO `articulo`
 
 (`idArticulo`,`descripcion`,`stockactual`,`precioUnitario`,`precioUnitarioVenta`,`totalValorizado`,`xtraTextUnoArticulo`,`xtraNumDosArticulo`,`idLineaCategoria`,`idMedida`)
  VALUES
   ('1-15W40','ACEITE EXTRAVIDA X GLN
  15W40','0.00','0.00','0.00','0.00','','0.00','300','11'),
   ('1-CHA01','KIT CHACARERO AZUL
  (GDFGO,PORTAF,LLANT-DEL/POST)','0.00','0.00','0.00','0.00','','0.00
 ','300','14'),
   ('1-P0001','CASCOS DE MOTOS
  HONDA','0.00','0.00','0.00','0.00','','0.00','300','10'),
   ('1-P0003','LLANTAS DUNLOP LT
  265/75R216','0.00','0.00','0.00','0.00','','0.00','300','10'),
   ('1-P0014','POLOS
  HONDA','0.00','0.00','0.00','0.00','','0.00','300','10'),
 
 
  see pls that each row is written in a unique of line of text
 
  now in linux with command in a terminal i do in this way my backups
 
 
  mysqldump --opt --password=XXX --user=root somedb 
  /home/Someuser/somepath/A.sql
 
 
  the backup is done but in this way
 
  /*!4 ALTER TABLE `articulo` DISABLE KEYS */;
  INSERT INTO `articulo` VALUES ('1-15W40','ACEITE EXTRAVIDA X GLN
  15W40','0.00','0.00','0.00','0.00','','0.00
 ','300','11'),('1-CHA01','KIT
  CHACARERO AZUL
  (GDFGO,PORTAF,LLANT-DEL/POST)','0.00','0.00','0.00','0.00','','0.00
 ','300','14'),('1-P0001','CASCOS
  DE
 
  how you can see, all the rows appear in one line,
  dangeous, i dont want this behaviour when i open this file in windows
 tell
  me if i try to save this file i will missing some values or rows
  and in linux the gedit dies :(
 
  after to read this
  http://dev.mysql.com/doc/refman/5.0/en/mysqldump.html mysqldump
  i tried in this way
 
 
  mysqldump --opt --extended-insert--password=XXX --user=root somedb 
  /home/Someuser/somepath/A.sql
 
  with the same undesired results
 
  how i can resolve this??
 
  thanks in advanced
  --
  View this message in context:
 
 http://www.nabble.com/doubt%3A-mysqldump-in-linux-like-windows-tp16185833p16185833.html
  Sent from the MySQL - General mailing list archive at Nabble.com.
 
 
  --
  MySQL General Mailing List
  For list archives: http://lists.mysql.com/mysql
  To unsubscribe:
  http://lists.mysql.com/[EMAIL PROTECTED]
 
 
  --
  MySQL General Mailing List
  For list archives: 

RE: doubt: mysqldump in linux like windows

2008-03-20 Thread Rolando Edwards
Use --skip-extended-insert as another mysqldump option

-Original Message-
From: dr_pompeii [mailto:[EMAIL PROTECTED]
Sent: Thursday, March 20, 2008 2:43 PM
To: mysql@lists.mysql.com
Subject: doubt: mysqldump in linux like windows


Hello guys,

I have this situation:
in Windows, with MySQL Administrator, I make a backup.
I saw these lines in the x.sql, for example:


/*!4 ALTER TABLE `articulo` DISABLE KEYS */;
INSERT INTO `articulo`
(`idArticulo`,`descripcion`,`stockactual`,`precioUnitario`,`precioUnitarioVenta`,`totalValorizado`,`xtraTextUnoArticulo`,`xtraNumDosArticulo`,`idLineaCategoria`,`idMedida`)
VALUES
 ('1-15W40','ACEITE EXTRAVIDA X GLN
15W40','0.00','0.00','0.00','0.00','','0.00','300','11'),
 ('1-CHA01','KIT CHACARERO AZUL
(GDFGO,PORTAF,LLANT-DEL/POST)','0.00','0.00','0.00','0.00','','0.00','300','14'),
 ('1-P0001','CASCOS DE MOTOS
HONDA','0.00','0.00','0.00','0.00','','0.00','300','10'),
 ('1-P0003','LLANTAS DUNLOP LT
265/75R216','0.00','0.00','0.00','0.00','','0.00','300','10'),
 ('1-P0014','POLOS
HONDA','0.00','0.00','0.00','0.00','','0.00','300','10'),


Please see that each row is written on its own line of text.

Now in Linux, with a command in a terminal, I do my backups this way:


mysqldump --opt --password=XXX --user=root somedb > /home/Someuser/somepath/A.sql


the backup is done but in this way

/*!4 ALTER TABLE `articulo` DISABLE KEYS */;
INSERT INTO `articulo` VALUES ('1-15W40','ACEITE EXTRAVIDA X GLN
15W40','0.00','0.00','0.00','0.00','','0.00','300','11'),('1-CHA01','KIT
CHACARERO AZUL
(GDFGO,PORTAF,LLANT-DEL/POST)','0.00','0.00','0.00','0.00','','0.00','300','14'),('1-P0001','CASCOS
DE

As you can see, all the rows appear on one line.
Dangerous; I don't want this behaviour. When I open this file in Windows,
tell me whether, if I try to save this file, I will be missing some values
or rows; and in Linux, gedit dies :(

After reading the mysqldump page,
http://dev.mysql.com/doc/refman/5.0/en/mysqldump.html,
I tried this way:


mysqldump --opt --extended-insert --password=XXX --user=root somedb > /home/Someuser/somepath/A.sql

with the same undesired results

How can I resolve this?

Thanks in advance.
--
View this message in context: 
http://www.nabble.com/doubt%3A-mysqldump-in-linux-like-windows-tp16185833p16185833.html
Sent from the MySQL - General mailing list archive at Nabble.com.


--
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/[EMAIL PROTECTED]


--
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/[EMAIL PROTECTED]



RE: doubt: mysqldump in linux like windows

2008-03-20 Thread dr_pompeii

Hi Rolando

thanks for the reply
it works, thanks, 

new command used

mysqldump --opt --skip-extended-insert --password=XXX --user=root somedb > /home/Someuser/somepath/A.sql

but i see one difference

from windows

/*!4 ALTER TABLE `articulo` DISABLE KEYS */;
INSERT INTO `articulo`
(`idArticulo`,`descripcion`,`stockactual`,`precioUnitario`,`precioUnitarioVenta`,`totalValorizado`,`xtraTextUnoArticulo`,`xtraNumDosArticulo`,`idLineaCategoria`,`idMedida`)
VALUES 
 ('1-15W40','ACEITE EXTRAVIDA X GLN
15W40','0.00','0.00','0.00','0.00','','0.00','300','11'),
 ('1-P0001','CASCOS DE MOTOS
HONDA','0.00','0.00','0.00','0.00','','0.00','300','10'),
 ('1-P0003','LLANTAS DUNLOP LT
265/75R216','0.00','0.00','0.00','0.00','','0.00','300','10'),
 ('1-P0014','POLOS
HONDA','0.00','0.00','0.00','0.00','','0.00','300','10'),


now with the new command already shown
i have this way


LOCK TABLES `articulo` WRITE;
/*!4 ALTER TABLE `articulo` DISABLE KEYS */;
INSERT INTO `articulo` VALUES ('1-15W40','ACEITE EXTRAVIDA X GLN
15W40','0.00','0.00','0.00','0.00','','0.00','300','11');
INSERT INTO `articulo` VALUES ('1-CHA01','KIT CHACARERO AZUL
(GDFGO,PORTAF,LLANT-DEL/POST)','0.00','0.00','0.00','0.00','','0.00','300','14');
INSERT INTO `articulo` VALUES ('1-P0001','CASCOS DE MOTOS
HONDA','0.00','0.00','0.00','0.00','','0.00','300','10');


I need it like the Windows way; that means,
for the first line of the insertion, before inserting all the rows,
I need:

INSERT INTO `articulo`
(`idArticulo`,`descripcion`,`stockactual`,`precioUnitario`,`precioUnitarioVenta`,`totalValorizado`,`xtraTextUnoArticulo`,`xtraNumDosArticulo`,`idLineaCategoria`,`idMedida`)
VALUES 


I tried adding --disable-keys but got weird and undesired results.

regards
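
P.S. Two things may help get closer to the Windows-style output: mysqldump's --complete-insert option emits the column list before VALUES, and a small filter (a sketch; GNU sed is assumed) can break an extended insert back into one row per line:

```shell
# Naive readability filter: start a new line after each '),(' row separator
# in an extended INSERT. Caveat: this also splits any string *data* that
# happens to contain '),(', so use it for eyeballing dumps, not as a
# guaranteed-safe transformation.
split_rows() {
    sed 's/),(/),\n(/g' "$1"
}
```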


Rolando Edwards-3 wrote:
 
 Use --skip-extended-insert as another mysqldump option
 
 -Original Message-
 From: dr_pompeii [mailto:[EMAIL PROTECTED]
 Sent: Thursday, March 20, 2008 2:43 PM
 To: mysql@lists.mysql.com
 Subject: doubt: mysqldump in linux like windows
 
 
 Hello guys
 
 i have this situation
 in widnows with the mysql administrador i make backup
 i saw in the x.sql these lines for example
 
 
 /*!4 ALTER TABLE `articulo` DISABLE KEYS */;
 INSERT INTO `articulo`
 (`idArticulo`,`descripcion`,`stockactual`,`precioUnitario`,`precioUnitarioVenta`,`totalValorizado`,`xtraTextUnoArticulo`,`xtraNumDosArticulo`,`idLineaCategoria`,`idMedida`)
 VALUES
  ('1-15W40','ACEITE EXTRAVIDA X GLN
 15W40','0.00','0.00','0.00','0.00','','0.00','300','11'),
  ('1-CHA01','KIT CHACARERO AZUL
 (GDFGO,PORTAF,LLANT-DEL/POST)','0.00','0.00','0.00','0.00','','0.00','300','14'),
  ('1-P0001','CASCOS DE MOTOS
 HONDA','0.00','0.00','0.00','0.00','','0.00','300','10'),
  ('1-P0003','LLANTAS DUNLOP LT
 265/75R216','0.00','0.00','0.00','0.00','','0.00','300','10'),
  ('1-P0014','POLOS
 HONDA','0.00','0.00','0.00','0.00','','0.00','300','10'),
 
 
 see pls that each row is written in a unique of line of text
 
 now in linux with command in a terminal i do in this way my backups
 
 
 mysqldump --opt --password=XXX --user=root somedb 
 /home/Someuser/somepath/A.sql
 
 
 the backup is done but in this way
 
 /*!4 ALTER TABLE `articulo` DISABLE KEYS */;
 INSERT INTO `articulo` VALUES ('1-15W40','ACEITE EXTRAVIDA X GLN
 15W40','0.00','0.00','0.00','0.00','','0.00','300','11'),('1-CHA01','KIT
 CHACARERO AZUL
 (GDFGO,PORTAF,LLANT-DEL/POST)','0.00','0.00','0.00','0.00','','0.00','300','14'),('1-P0001','CASCOS
 DE
 
 how you can see, all the rows appear in one line,
 dangeous, i dont want this behaviour when i open this file in windows tell
 me if i try to save this file i will missing some values or rows
 and in linux the gedit dies :(
 
 after to read this
 http://dev.mysql.com/doc/refman/5.0/en/mysqldump.html mysqldump
 i tried in this way
 
 
 mysqldump --opt --extended-insert--password=XXX --user=root somedb 
 /home/Someuser/somepath/A.sql
 
 with the same undesired results
 
 how i can resolve this??
 
 thanks in advanced
 --
 View this message in context:
 http://www.nabble.com/doubt%3A-mysqldump-in-linux-like-windows-tp16185833p16185833.html
 Sent from the MySQL - General mailing list archive at Nabble.com.
 
 
 --
 MySQL General Mailing List
 For list archives: http://lists.mysql.com/mysql
 To unsubscribe:   
 http://lists.mysql.com/[EMAIL PROTECTED]
 
 
 --
 MySQL General Mailing List
 For list archives: http://lists.mysql.com/mysql
 To unsubscribe:http://lists.mysql.com/[EMAIL PROTECTED]
 
 
 

-- 
View this message in context: 
http://www.nabble.com/doubt%3A-mysqldump-in-linux-like-windows-tp16185833p16188637.html
Sent from the MySQL - General mailing list archive at Nabble.com.


-- 
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/[EMAIL PROTECTED]



Re: doubt: mysqldump in linux like windows

2008-03-20 Thread Moon's Father
If you skip extended inserts during mysqldump, the process of the restore
is painful.

On Fri, Mar 21, 2008 at 5:05 AM, dr_pompeii [EMAIL PROTECTED] wrote:


 Hi Rolando

 thanks for the reply
 it works, thanks,

 new command used

 mysqldump --opt --skip-extended-insert --password=XXX --user=root somedb 
 /home/Someuser/somepath/A.sql

 but i see one difference

 from windows

 /*!4 ALTER TABLE `articulo` DISABLE KEYS */;
 INSERT INTO `articulo`

 (`idArticulo`,`descripcion`,`stockactual`,`precioUnitario`,`precioUnitarioVenta`,`totalValorizado`,`xtraTextUnoArticulo`,`xtraNumDosArticulo`,`idLineaCategoria`,`idMedida`)
 VALUES
  ('1-15W40','ACEITE EXTRAVIDA X GLN
 15W40','0.00','0.00','0.00','0.00','','0.00','300','11'),
  ('1-P0001','CASCOS DE MOTOS
 HONDA','0.00','0.00','0.00','0.00','','0.00','300','10'),
  ('1-P0003','LLANTAS DUNLOP LT
 265/75R216','0.00','0.00','0.00','0.00','','0.00','300','10'),
  ('1-P0014','POLOS
 HONDA','0.00','0.00','0.00','0.00','','0.00','300','10'),


 now with the new command already shown
 i have this way


 LOCK TABLES `articulo` WRITE;
 /*!4 ALTER TABLE `articulo` DISABLE KEYS */;
 INSERT INTO `articulo` VALUES ('1-15W40','ACEITE EXTRAVIDA X GLN
 15W40','0.00','0.00','0.00','0.00','','0.00','300','11');
 INSERT INTO `articulo` VALUES ('1-CHA01','KIT CHACARERO AZUL
 (GDFGO,PORTAF,LLANT-DEL/POST)','0.00','0.00','0.00','0.00','','0.00
 ','300','14');
 INSERT INTO `articulo` VALUES ('1-P0001','CASCOS DE MOTOS
 HONDA','0.00','0.00','0.00','0.00','','0.00','300','10');


 i need like the windows way, thats mean,
 for the first line for insertion before to insert all rows
 i need

 INSERT INTO `articulo`

 (`idArticulo`,`descripcion`,`stockactual`,`precioUnitario`,`precioUnitarioVenta`,`totalValorizado`,`xtraTextUnoArticulo`,`xtraNumDosArticulo`,`idLineaCategoria`,`idMedida`)
 VALUES


 i tried adding --disable-keys but wierd and undesired results

 regards


 Rolando Edwards-3 wrote:
 
  Use --skip-extended-insert as another mysqldump option
 
  -Original Message-
  From: dr_pompeii [mailto:[EMAIL PROTECTED]
  Sent: Thursday, March 20, 2008 2:43 PM
  To: mysql@lists.mysql.com
  Subject: doubt: mysqldump in linux like windows
 
 
  Hello guys
 
  i have this situation
  in widnows with the mysql administrador i make backup
  i saw in the x.sql these lines for example
 
 
  /*!4 ALTER TABLE `articulo` DISABLE KEYS */;
  INSERT INTO `articulo`
 
 (`idArticulo`,`descripcion`,`stockactual`,`precioUnitario`,`precioUnitarioVenta`,`totalValorizado`,`xtraTextUnoArticulo`,`xtraNumDosArticulo`,`idLineaCategoria`,`idMedida`)
  VALUES
   ('1-15W40','ACEITE EXTRAVIDA X GLN
  15W40','0.00','0.00','0.00','0.00','','0.00','300','11'),
   ('1-CHA01','KIT CHACARERO AZUL
  (GDFGO,PORTAF,LLANT-DEL/POST)','0.00','0.00','0.00','0.00','','0.00
 ','300','14'),
   ('1-P0001','CASCOS DE MOTOS
  HONDA','0.00','0.00','0.00','0.00','','0.00','300','10'),
   ('1-P0003','LLANTAS DUNLOP LT
  265/75R216','0.00','0.00','0.00','0.00','','0.00','300','10'),
   ('1-P0014','POLOS
  HONDA','0.00','0.00','0.00','0.00','','0.00','300','10'),
 
 
  see pls that each row is written in a unique of line of text
 
  now in linux with command in a terminal i do in this way my backups
 
 
  mysqldump --opt --password=XXX --user=root somedb 
  /home/Someuser/somepath/A.sql
 
 
  the backup is done but in this way
 
  /*!4 ALTER TABLE `articulo` DISABLE KEYS */;
  INSERT INTO `articulo` VALUES ('1-15W40','ACEITE EXTRAVIDA X GLN
  15W40','0.00','0.00','0.00','0.00','','0.00
 ','300','11'),('1-CHA01','KIT
  CHACARERO AZUL
  (GDFGO,PORTAF,LLANT-DEL/POST)','0.00','0.00','0.00','0.00','','0.00
 ','300','14'),('1-P0001','CASCOS
  DE
 
  As you can see, all the rows appear on one line, which is dangerous;
  I don't want this behaviour. When I open this file in Windows, tell
  me: if I try to save this file, will I lose some values or rows?
  And in Linux, gedit dies :(
 
  After reading the mysqldump page at
  http://dev.mysql.com/doc/refman/5.0/en/mysqldump.html
  I tried it this way:
 
 
  mysqldump --opt --extended-insert --password=XXX --user=root somedb > /home/Someuser/somepath/A.sql
 
  with the same undesired results.
 
  How can I resolve this??
 
  Thanks in advance
  --
  View this message in context:
 
 http://www.nabble.com/doubt%3A-mysqldump-in-linux-like-windows-tp16185833p16185833.html
  Sent from the MySQL - General mailing list archive at Nabble.com.
 
 
  --
  MySQL General Mailing List
  For list archives: http://lists.mysql.com/mysql
  To unsubscribe:
  http://lists.mysql.com/[EMAIL PROTECTED]
 
 
  --
  MySQL General Mailing List
  For list archives: http://lists.mysql.com/mysql
  To unsubscribe:http://lists.mysql.com/[EMAIL PROTECTED]
 
 
 

 --
 View this message in context:
 http://www.nabble.com/doubt%3A-mysqldump-in-linux-like-windows-tp16185833p16188637.html
 Sent from the MySQL - General mailing list archive at Nabble.com.
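The fix Rolando suggests above, written out as a complete command (a sketch assuming the credentials and paths from the original mail; --skip-extended-insert makes mysqldump emit one INSERT statement per row, as MySQL Administrator does on Windows):

```shell
mysqldump --opt --skip-extended-insert --password=XXX --user=root somedb \
  > /home/Someuser/somepath/A.sql
```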


 --
 

Re: doubt about indexes

2007-05-29 Thread Baron Schwartz

Hi Ricardo,

Ricardo Conrado Serafim wrote:

Hi Group!

I've a doubt about indexes and I hope that you can help me.

If I have a table index with 5 columns (col_1,col_2,col_3,col_4,col_5) and
I do a query whose WHERE clause uses just col_1, col_2, col_4 and
col_5.

Will this query use the index, at least for col_1 and col_2?


It depends.  When in doubt, use EXPLAIN to see if the index is being used.
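A hedged sketch of that check (hypothetical table and column names): with a five-column index and col_3 missing from the WHERE clause, only the leftmost prefix (col_1, col_2) can be used.

```sql
-- Assumes an index idx (col_1, col_2, col_3, col_4, col_5) on table t.
EXPLAIN SELECT col_5
FROM t
WHERE col_1 = 1 AND col_2 = 2 AND col_4 = 4 AND col_5 = 5;
-- Inspect the key and key_len columns of the output: key_len tells you how
-- many leading index parts (here at most col_1 and col_2) are in use.
```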

Cheers
Baron

--
Baron Schwartz
http://www.xaprb.com/

--
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/[EMAIL PROTECTED]



Re: Doubt on Index Merge??

2006-08-10 Thread Chris

Ratheesh K J wrote:

Hello all,

I have a doubt on Index Merge. 


We are currently using MySQL server 4.1.11. As a part of DB management I am 
currently doing an analysis on all the Tables.

While looking into the table structures, their columns and Indexes I found that 
most of the tables have an Index on fields that have a very low cardinality.

For an estimate, let me say that there were indexes on fields with cardinality 
17 for a table with 13 lakh (1.3 million) rows. So I decided to remove the index on such 
fields.

I made this decision because I assume that the probability of MySQL optimizer 
choosing such indexes is very low. MySQL would always choose a better index 
than this.

Now I doubt my assumption as I move to MySQL server 5.0.x. In MySQL 5 there 
is a concept of Index Merge. So was it right for me to remove these indexes if 
we were to use MySQL 5?

How much of a difference in terms of performance would removal of Index make in 
MySQL 4.1.11?

How much of a difference in terms of performance would retaining of Index make 
in MySQL 5?


You can test those scenarios yourself.

You can use IGNORE INDEX like this:

SELECT blah FROM table IGNORE INDEX (index_name) WHERE ...;

so that will tell mysql specifically not to use that index without you 
having to drop it and recreate it.


Comes in handy ;)

And it *always* depends on the situation (queries you run a lot vs 
queries that run once a month) and data you have. Asking us not to take 
that into account is silly.
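A hedged companion sketch (table and index names are hypothetical): FORCE INDEX lets you test the opposite scenario, so both plans can be compared side by side without dropping anything.

```sql
-- Plan when the optimizer is pushed toward the low-cardinality index:
EXPLAIN SELECT id FROM my_table FORCE INDEX (low_card_idx) WHERE flag = 1;
-- Plan when that index is off the table:
EXPLAIN SELECT id FROM my_table IGNORE INDEX (low_card_idx) WHERE flag = 1;
```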


--
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/[EMAIL PROTECTED]



Re: Doubt about query optimization

2005-07-28 Thread Ronan Lucio

Eric,

Can you send us the actual show indexes from table and explain output that 
isn't shortend?


Thank you for answering my question.

Actually, the real query and tables are much more complex than
the data in the previous message.
I just tried to simplify the information so you could better understand the 
trouble.


I think I found what's happening.
A SELECT ... WHERE city = 1 returns more rows than a full table scan
of the table front (the real estates that appear on the front page).
So it seems MySQL chose the second option, since it has fewer rows
to examine.
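One way to verify that guess, as a sketch using the table names from this thread (FORCE INDEX here is a diagnostic, not a recommendation):

```sql
-- Compare the optimizer's row estimates for the two plans.
EXPLAIN SELECT cod FROM real_state WHERE city = 1 AND sell = 1;
EXPLAIN SELECT cod FROM real_state FORCE INDEX (city)
WHERE city = 1 AND sell = 1;
```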

Thanks,
Ronan 




--
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/[EMAIL PROTECTED]



Re: Doubt about query optimization

2005-07-27 Thread Eric Bergen
Can you send us the actual show indexes from table and explain output 
that isn't shortend?


Ronan Lucio wrote:


Hello,

I´m using MySQL-4.1.7, and I have a query that seems,
even using where clause, an specific table isn´t being optimized
properly:

I have four tables:

real_state: cod, name, city, ag_cod, sell
agency: cod, name, ag_status
front: cod, rs_cod, ag_cod
photo: cod, rs_cod

These table have the following indexes:

real_state:
   cod (pri)
   city
   ag_cod
agency:
   cod
   name
front:
   cod
   rs_cod
   ag_cod
photo
   cod
   rs_cod

When I EXPLAIN the query:

EXPLAIN
SELECT front.rs_cod, photo.cod, real_state.descr
FROM real_state, agency, front, photo
WHERE real_state.city = 1
AND real_state.sell = 1
AND front.rs_cod = real_state.cod
AND photo.rs_cod = front.rs_cod
AND agency.cod = real_state.ag_cod
AND ag_status = 'A'

It shows me (in a short):

table        key      rows
===========  =======  ====
front        rs_cod   2085
real_state   cod      1
agency       cod      1
photo        rs_cod   1

But the trouble is: If I execute:

SELECT COUNT(*)
FROM real_state, front
WHERE real_state.city = 1
AND real_state.cod = front.rs_cod

Returns: 271

So, why isn't the index front.rs_cod being used?

Any help would be appreciated,

thank you,
Ronan





--
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/[EMAIL PROTECTED]



Re: Doubt on MySQL for Linux...

2005-06-24 Thread Danny Stolle

Ashok Kumar wrote:

Hi friends,
 I have now started using Red Hat Linux, and I want to run MySQL on it.
I don't have much experience with Linux; I'm new to it. Which MySQL
installation should I choose from the site? There are a lot of binary
distributions, and I don't know which one is compatible with my OS.

 Linux and H/W specification of my system is as
follows.
1. Redhat Linux 9
2. Intel PIII

 And I also want to know how to install and
configure MySQL for use from 'C'. Please guide me on
this.


Thanks and Regards,
 Ashok Kumar.P.S.



 




Ashok,

I suggest that you carefully read the documentation on 
http://dev.mysql.com/doc/mysql/en/index.html


Here you can find the installation program and tutorials to help you get 
through MySQL.


If you are new to Linux, then perhaps read some material about Linux first.

Good luck and enjoy :^)

Best Regards,

Danny Stolle
Netherlands

EmoeSoft (http://www.emoesoft.nl)

--
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/[EMAIL PROTECTED]



Re: Doubt about Performance

2005-01-19 Thread Ronan Lucio
Shawn,

Thank you very much for your tips.
I´ll do that.

Ronan

- Original Message -
From: [EMAIL PROTECTED]
To: Ronan Lucio [EMAIL PROTECTED]
Cc: mysql@lists.mysql.com
Sent: Tuesday, January 18, 2005 6:36 PM
Subject: Re: Doubt about Performance


Ronan Lucio [EMAIL PROTECTED] wrote on 01/18/2005 02:53:39 PM:

 Shawn,

 Your tips are really good, but I can't use a cache either for
 queries or for pages, because our site serves hundreds of
 clients, each one with their own code making a different
 query, returning different rows.

 It would be too many queries to be cached.
 I know I didn´t say it in the previous message.

 Thank you for the help,
 Ronan



You would run your queries against your cached data using your web
site's application code. You can create additional arrays to act as
indexes against the data so that you will not need to do a full array
scan every time. Load your data into your arrays in the order of the
customer parameter, then you have already isolated each customer's data
to a contiguous portion of the data array.

Trust me, if you do it right (sorted and indexed data + fast lookup
routine), it should be 10-20 times faster than trying to read through the
same data from the database each and every time ([array search + array
seek + looped scan] instead of [SQL parsing + query processing + net lag +
data transfer time]).

I do not recommend doing this to every page on your site, only to those
pages that handle the highest traffic and only for data that doesn't
change quickly (on the order of several changes per hour or per day, not
several changes per second). For rapidly changing data, data you don't
need often, or unpredictable queries, read the data from the database. It
saves you no time to take the effort to cache that kind of data.

Shawn Green
Database Administrator
Unimin Corporation - Spruce Pine




-- 
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/[EMAIL PROTECTED]



Re: Doubt about Performance

2005-01-18 Thread Ronan Lucio
Greg,

Thank you very much for your help.

 An index the full size of the distinct key is probably very quick, especially
 if it only has 20 distinct values.  OTOH, if you already know you have a
 small number of distinct values, could you just store them normalized in a
 different table?

Actually, it is.
It has some tables:

features
=
- id
- description

groups
=
- id
- description

products
==
- id
- description
- group_id
- feature_id

And I´ll use a SELECT like this:

SELECT DISTINCT features.description
FROM products
LEFT JOIN features ON (products.feature_id = features.id)
WHERE products.group_id = $var_group
AND products.feature_id <> 0

The products table should have about a million records, but the
filtered query should go over about a thousand records (filtered
by group_id) and return about 20 distinct lines.

That's my situation, but I don't know how heavy such a query is
for the database, and how viable it is.

Thanks in advance,
Ronan



-- 
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/[EMAIL PROTECTED]



Re: Doubt about Performance

2005-01-18 Thread Sasha Pachev
Ronan Lucio wrote:
Hello,
Trying to keep the database and the application fast,
I'd like to clarify my mind about it.
Ronan:
I assume you mean has serious performance impact when you say weight. If 
this is not what you meant, please correct/clarify.

1) Is a SELECT DISTINCT over 5,000 records a weight
query? (Supposing it has about 20 different option the the
DISTINCT key).
This query will most likely result in a creation of a temporary table with 20 
columns and a key over all of them that will have no more than 5000 records, and 
 will take 5000 attempted inserts to populate. Assuming that your WHERE clause 
is ok, this query should take no more than 3 seconds or so on modern hardware. 
However, this could be bad if you are doing this frequently and there is other 
activity going on. On the other hand, the query cache could save you. If it does 
not, consider creating and maintaining a summary table.

2) Is SELECT ORDER BY RAND() over 1,500 records
a weight query?
Does the table have only 1,500 records, and is it going to stay that way? Are 
you selecting only a few reasonably sized columns? If yes, unless you are Yahoo 
or Google, you'll do fine on modern hardware - this query under those 
circumstances should take on the order of magnitude of 0.01 s. However, if you 
have more records in the table, and the WHERE clause is not optimized, things 
could get bad, and this time the query cache does not save you.

--
Sasha Pachev
Create online surveys at http://www.surveyz.com/
--
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/[EMAIL PROTECTED]


Re: Doubt about Performance

2005-01-18 Thread SGreen
Ronan Lucio [EMAIL PROTECTED] wrote on 01/18/2005 01:13:10 PM:

 Hello,
 
 Trying to keep the database and the application fast,
 I´d like to clearify my mind about it.
 
 1) Is a SELECT DISTINCT over 5,000 records a weight
 query? (Supposing it has about 20 different option the the
 DISTINCT key).
 
 2) Is SELECT ORDER BY RAND() over 1,500 records
 a weight query?
 
 I need to put these two queries on the first page of our site.
 So, I've been worried that they could slow down our site at the
 peaks.
 
 Thanks,
 Ronan
 
 

Depending on the size of the data, you may be better off caching your 5000 
rows and 1500 rows in arrays on your web server. You would only need to 
refill the array if your source data changed. Since you should also 
control the code that updates the source data of the arrays, you can have 
it refill your server-cached arrays as soon as it finishes making its 
changes (inserts, updates, or deletes) to the source data. 

I can't tell you exactly which commands/objects/techniques to use to make 
static, global instances of those arrays (so that every user-specific 
thread sees the same objects) as you never said what your web 
server/language was.  For example: if you were using an IIS/ASP server, I 
would tell you to store the arrays (NOT the recordsets! convert the 
recordsets to scalar data first) in the Application object (NOT the 
Session object).

I recommend this because you say this is going to be served on the first 
page of your site. Everyone is going to need this data at least once. By 
keeping those lists as arrays in the web server's memory then randomly 
picking from the arrays, you will get blazing performance. Update the 
arrays only when the source data changes. That way you can save your SQL 
cycles for other, less predictable queries. I have reduced the response 
times on some websites to 20% or less than their original time by using 
this technique (even with query caching enabled! The time savings is not 
just in avoiding query processing but also due to eliminating network lag 
and data transfer time).

Shawn Green
Database Administrator
Unimin Corporation - Spruce Pine

Re: Doubt about Performance

2005-01-18 Thread Ronan Lucio
Sasha,

 I assume you mean has serious performance impact when you say weight.
If
 this is not what you meant, please correct/clarify.

Yes, you´re right.


 1) Is a SELECT DISTINCT over 5,000 records a weight
 query? (Supposing it has about 20 different option the the
 DISTINCT key).

 This query will most likely result in the creation of a temporary table with 20
 columns and a key over all of them, that will have no more than 5000 records,
 and will take 5000 attempted inserts to populate. Assuming that your WHERE
 clause is ok, this query should take no more than 3 seconds or so on modern
 hardware. However, this could be bad if you are doing this frequently and
 there is other activity going on. On the other hand, the query cache could
 save you. If it does not, consider creating and maintaining a summary table.

Hmmm, I meant to say that the SELECT DISTINCT should return about
20 lines.

The table should have about a million records, but the WHERE clause
should filter it down to at most 5,000, and such a query would be on the
site's home page... :-/
Perhaps working with summaries would be a better choice.


 2) Is SELECT ORDER BY RAND() over 1,500 records
 a weight query?

 Does the table have only 1,500 records, and is it going to stay that way? Are
 you selecting only a few reasonably sized columns? If yes, unless you are
 Yahoo or Google, you'll do fine on modern hardware - this query under those
 circumstances should take on the order of magnitude of 0.01 s. However, if
 you have more records in the table, and the WHERE clause is not optimized,
 things could get bad, and this time the query cache does not save you.


The table should have about a million records, but the WHERE clause
should filter it down to at most 5,000, and such a query would be on the site's
home page.

I don't know if it can put the site in performance trouble or if it's
just paranoia of mine.

Thanks,
Ronan



-- 
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/[EMAIL PROTECTED]



Re: Doubt about Performance

2005-01-18 Thread SGreen
Ronan Lucio [EMAIL PROTECTED] wrote on 01/18/2005 02:53:39 PM:

 Shawn,
 
 Your tips are really good, but I can't use a cache either for
 queries or for pages, because our site serves hundreds of
 clients, each one with their own code making a different
 query, returning different rows.
 
 It would be too many queries to be cached.
 I know I didn´t say it in the previous message.
 
 Thank you for the help,
 Ronan
 
 

You would run your queries against your cached data using your web 
site's application code. You can create additional arrays to act as 
indexes against the data so that you will not need to do a full array 
scan every time. Load your data into your arrays in the order of the 
customer parameter, then you have already isolated each customer's data 
to a contiguous portion of the data array. 

Trust me, if you do it right (sorted and indexed data + fast lookup 
routine), it should be 10-20 times faster than trying to read through the 
same data from the database each and every time ([array search + array 
seek + looped scan] instead of [SQL parsing + query processing + net lag + 
data transfer time]). 

I do not recommend doing this to every page on your site, only to those 
pages that handle the highest traffic and only for data that doesn't 
change quickly (on the order of several changes per hour or per day, not 
several changes per second). For rapidly changing data, data you don't 
need often, or unpredictable queries, read the data from the database. It 
saves you no time to take the effort to cache that kind of data.

Shawn Green
Database Administrator
Unimin Corporation - Spruce Pine
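Shawn's "sorted and indexed data + fast lookup routine" advice above can be sketched in a few lines (a minimal illustration with made-up data, not the original site's code; a real application would rebuild the cache whenever the source data changes):

```python
# Cache the query results once, then build a dict "index" keyed on the
# customer parameter so lookups avoid scanning every cached row.
rows = [
    {"customer": "a", "item": 1},
    {"customer": "a", "item": 2},
    {"customer": "b", "item": 3},
]

# Build the index once, right after (re)loading the cache.
by_customer = {}
for row in rows:
    by_customer.setdefault(row["customer"], []).append(row)

def items_for(customer):
    # Constant-time index lookup instead of a full array scan.
    return by_customer.get(customer, [])
```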


RE: Doubt about Performance

2005-01-18 Thread Dathan Pattishall
 




 -Original Message-
 From: Ronan Lucio [mailto:[EMAIL PROTECTED] 
 Sent: Tuesday, January 18, 2005 10:13 AM
 To: mysql@lists.mysql.com
 Subject: Doubt about Performance
 
 Hello,
 
 Trying to keep the database and the application fast, I´d 
 like to clearify my mind about it.
 
 1) Is a SELECT DISTINCT over 5,000 records a weight
 query? (Supposing it has about 20 different option the the
 DISTINCT key).

This is not bad, it's a mysql function that uses a KEY if a key exist.

 
 2) Is SELECT ORDER BY RAND() over 1,500 records
 a weight query?
 
 I need to put these two queries in the first page of our site.
 So, I´ve been worried if it can slow down our site in the pics.

Do SELECT ... ORDER BY RAND() LIMIT N 

such that N is much less than 1500, on the order of 10.

This is a rather expensive operation and should be used with care.

I've gotten around this by generating a random number in my app and trying to 
match it to a known id by making multiple selects. This was less intensive than 
ORDER BY RAND().
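The app-side trick described above can also be approximated in SQL (a hedged sketch with a hypothetical table; assumes an indexed integer id column, and note that gaps in the id sequence bias the selection):

```sql
-- Pick one random id and fetch the nearest row; the derived table makes
-- RAND() evaluate once instead of once per examined row.
SELECT t.*
FROM my_table AS t
JOIN (SELECT FLOOR(1 + RAND() * (SELECT MAX(id) FROM my_table)) AS rid) AS r
  ON t.id >= r.rid
ORDER BY t.id
LIMIT 1;
```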

--
DVP
 
 Thanks,
 Ronan
 
 
 
 --
 MySQL General Mailing List
 For list archives: http://lists.mysql.com/mysql
 To unsubscribe:
 http://lists.mysql.com/[EMAIL PROTECTED]
 
 

--
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/[EMAIL PROTECTED]



Re: Doubt about TINYINT type

2004-04-28 Thread Don Read

On 27-Apr-2004 Paul DuBois wrote:
 At 13:29 -0400 4/27/04, Keith C. Ivey wrote:
On 27 Apr 2004 at 13:59, Ronan Lucio wrote:

  OK, I understood it, but I didn´t understand why is there an
  option
  TINYINT(n)

See http://dev.mysql.com/doc/mysql/en/Numeric_types.html

|   Another extension is supported by MySQL for optionally
|   specifying the display width of an integer value in
|   parentheses following the base keyword for the type (for
|   example, INT(4)). This optional display width specification
|   is used to left-pad the display of values having a width
|   less than the width specified for the column. However, the
|   display width does not constrain the range of values that
|   can be stored in the column, or the number of digits that
|   will be displayed for values having a width exceeding that
|   specified for the column.

I haven't found much use for display widths myself, but then
different people use MySQL differently.
 
 I agree.  I cannot think of a time when I've actually specified
 a display width, except just to see what effect it has on result
 display. :-)
 

The only time I've used it is in a billing app (w/ zerofill):

CREATE TABLE invoice (
  id mediumint(6) unsigned zerofill NOT NULL auto_increment,
  idcust mediumint(5) unsigned zerofill NOT NULL, 
 ... 
);

You can make some pretty decent reports with a shell script if the DB
lends a hand with formatting. 


-- 
Don Read [EMAIL PROTECTED]
-- It's always darkest before the dawn. So if you are going to 
   steal the neighbor's newspaper, that's the time to do it.

-- 
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/[EMAIL PROTECTED]



Re: Doubt about TINYINT type

2004-04-27 Thread Paul DuBois
At 11:53 -0300 4/27/04, Ronan Lucio wrote:
Hi,

If I create a column with the type TINYINT(2) UNSIGNED.

Will the content can have the value minor than 510 (like a number 499),
or will the content can have two values minor than 255 (like 11)?
I don't know what minor than means, but TINYINT is a single-byte
type.  The range for TINYINT is -128 to 127, and the range for
TINYINT UNSIGNED is 0 to 255.
You can look up numeric column type ranges here:

http://dev.mysql.com/doc/mysql/en/Numberic_type_overview.html

--
Paul DuBois, MySQL Documentation Team
Madison, Wisconsin, USA
MySQL AB, www.mysql.com
--
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/[EMAIL PROTECTED]


Re: Doubt about TINYINT type

2004-04-27 Thread Ronan Lucio
Paul,

 I don't know what minor than means

Sorry for my English.
I meant to say less than (or something like that).

 , but TINYINT is a single-byte
 type.  The range for TINYINT is -128 to 127, and the range for
 TINYINT UNSIGNED is 0 to 255.

OK, I understood that, but I didn't understand why there is an option
TINYINT(n)

Thanks,
Ronan



-- 
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/[EMAIL PROTECTED]



Re: Doubt about TINYINT type

2004-04-27 Thread Keith C. Ivey
On 27 Apr 2004 at 13:59, Ronan Lucio wrote:

 OK, I understood it, but I didn´t understand why is there an option
 TINYINT(n)

See http://dev.mysql.com/doc/mysql/en/Numeric_types.html

|   Another extension is supported by MySQL for optionally
|   specifying the display width of an integer value in
|   parentheses following the base keyword for the type (for
|   example, INT(4)). This optional display width specification
|   is used to left-pad the display of values having a width
|   less than the width specified for the column. However, the
|   display width does not constrain the range of values that
|   can be stored in the column, or the number of digits that
|   will be displayed for values having a width exceeding that
|   specified for the column.

I haven't found much use for display widths myself, but then
different people use MySQL differently.
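A minimal illustration of the display-width behaviour quoted above (hypothetical table; ZEROFILL makes the padding visible):

```sql
CREATE TABLE width_demo (n TINYINT(3) ZEROFILL);
INSERT INTO width_demo VALUES (7), (42), (120);
SELECT n FROM width_demo;
-- Displays 007, 042, 120: values narrower than the width are left-padded,
-- but the width never constrains what can be stored.
```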

--
Keith C. Ivey [EMAIL PROTECTED]
Tobacco Documents Online
http://tobaccodocuments.org


--
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/[EMAIL PROTECTED]



Re: Doubt about TINYINT type

2004-04-27 Thread Paul DuBois
At 13:29 -0400 4/27/04, Keith C. Ivey wrote:
On 27 Apr 2004 at 13:59, Ronan Lucio wrote:

 OK, I understood it, but I didn´t understand why is there an option
 TINYINT(n)
See http://dev.mysql.com/doc/mysql/en/Numeric_types.html

|   Another extension is supported by MySQL for optionally
|   specifying the display width of an integer value in
|   parentheses following the base keyword for the type (for
|   example, INT(4)). This optional display width specification
|   is used to left-pad the display of values having a width
|   less than the width specified for the column. However, the
|   display width does not constrain the range of values that
|   can be stored in the column, or the number of digits that
|   will be displayed for values having a width exceeding that
|   specified for the column.
I haven't found much use for display widths myself, but then
different people use MySQL differently.
I agree.  I cannot think of a time when I've actually specified
a display width, except just to see what effect it has on result
display. :-)
--
Paul DuBois, MySQL Documentation Team
Madison, Wisconsin, USA
MySQL AB, www.mysql.com
--
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/[EMAIL PROTECTED]


re: doubt

2003-03-21 Thread Egor Egorov
On Friday 21 March 2003 08:01, sivakumar wrote:
 Hi, I am having a problem creating a CD of my project (an online
 tutorial). I have the problem only while creating the CD. Actually, I am
 working over SSH (Linux), right.

 I have to make a dump of my source code and MySQL data to copy (from the server
 to C:\ of my PC, with Windows not case-sensitive, right).

 I have no problem making a tar file of my source code,
 but MySQL is showing one error.

 Database name : GEnglish
 Source code directory: GEnglish

 C:\CDrive\apache\mysql\bin> mysql GEnglish < English.txt

 Error 1136 at line 211: column count doesn't match value count at row 1

Check your INSERT statement in the script.

 If I retype the same command, then
 Error 1050 at line 11: Table Section_b1_Choose already exists

Some SQL commands from script were executed before error happened and 
table 'Section_b1_Choose' was created.

 But I actually have the table name as Section_B_Choose (note the capital
 letters), and Windows is not case-sensitive..




-- 
For technical support contracts, goto https://order.mysql.com/?ref=ensita
This email is sponsored by Ensita.net http://www.ensita.net/
   __  ___ ___   __
  /  |/  /_ __/ __/ __ \/ /Egor Egorov
 / /|_/ / // /\ \/ /_/ / /__   [EMAIL PROTECTED]
/_/  /_/\_, /___/\___\_\___/   MySQL AB / Ensita.net
   ___/   www.mysql.com




-
Before posting, please check:
   http://www.mysql.com/manual.php   (the manual)
   http://lists.mysql.com/   (the list archive)

To request this thread, e-mail [EMAIL PROTECTED]
To unsubscribe, e-mail [EMAIL PROTECTED]
Trouble unsubscribing? Try: http://lists.mysql.com/php/unsubscribe.php



Re: doubt

2003-03-20 Thread Jeff Kilpatrick
How are you dumping the database?  It doesn't seem like you should have
problems going either direction, especially to a case-insensitive
filesystem.

Copying databases with something like
$ mysqldump database | mysql -hremote_server database

works for me, even if it is fairly slow.

-jeff

(formerly kilpatjr_at_lupus.omrf.ouhsc.edu)

On Fri, 2003-03-21 at 00:01, sivakumar wrote:
 Hai, am having the problem in creating CD of my project(Online Tutorial).While 
 creating the CD only am having the problem. Actually am working in SSH(linux),right.
 
 I have to make a dump of my source code and mysql ..to copy (from server to C:\ of 
 my PC with windows not case- sensitive, right).
 
 Am having no problem in making tar file of my source code.
 But while mysql is showing one error.
 
 Database name : GEnglish
 Source code directory: GEnglish
 
 C:\CDrive\apache\mysql\bin> mysql GEnglish < English.txt
 
 Error 1136 at line 211: column count does'nt match value count at row 1
 
 If i retype the same command then
 Error 1050 at line 11:Table Section_b1_Choose already exists
 
 But actually am having the table name as Section_B_Choose(note few capital letters) 
 but windows is not case senstive..
 
 
 
 -
 Before posting, please check:
http://www.mysql.com/manual.php   (the manual)
http://lists.mysql.com/   (the list archive)
 
 To request this thread, e-mail [EMAIL PROTECTED]
 To unsubscribe, e-mail [EMAIL PROTECTED]
 Trouble unsubscribing? Try: http://lists.mysql.com/php/unsubscribe.php
 

-
Before posting, please check:
   http://www.mysql.com/manual.php   (the manual)
   http://lists.mysql.com/   (the list archive)

To request this thread, e-mail [EMAIL PROTECTED]
To unsubscribe, e-mail [EMAIL PROTECTED]
Trouble unsubscribing? Try: http://lists.mysql.com/php/unsubscribe.php



Re: Doubt

2001-05-03 Thread Eric Fitzgerald

There are a few options here.

First off, when making the query, just put the NOW() statement into the
INSERT; it's only a few extra keys to type, and it really isn't that
difficult.

The second option is using the TIMESTAMP type... take a look, it's probably
what you're looking for :)

- Original Message -
From: Sreevatsava Varada [EMAIL PROTECTED]
To: [EMAIL PROTECTED]
Sent: Wednesday, May 02, 2001 8:03 PM
Subject: Doubt


Hi,

   I've recently moved from MS-SQL Server 7.0 to mySQL. In MSSQL I used to
give the default value 'GetDate()' for a field in the database with the
datetime datatype. In MySQL I tried using Now(). But when I insert a new row,
the datetime field is taking the 0000-00-00 00:00:00 value. Please let me know
what should I give as the default value for 'record_date' field (of datetime
datatype) so that it takes the value returned by Now() function when a new
row is inserted using php script.


Thanks.



-
Before posting, please check:
   http://www.mysql.com/manual.php   (the manual)
   http://lists.mysql.com/   (the list archive)

To request this thread, e-mail [EMAIL PROTECTED]
To unsubscribe, e-mail [EMAIL PROTECTED]
Trouble unsubscribing? Try: http://lists.mysql.com/php/unsubscribe.php




Re: Doubt

2001-05-03 Thread Paul DuBois

At 8:33 AM +0530 5/3/01, Sreevatsava Varada wrote:
Hi,

I've recently moved from MS-SQL Server 7.0 to mySQL. In MSSQL I 
used to give the default value 'GetDate()' for a field in the 
database with the datetime datatype. In MySQL I tried using Now(). 
But when I insert a new row, the datetime field is taking the 
0000-00-00 00:00:00 value. Please let me know what should I give as the default 
value for 'record_date' field (of datetime datatype) so that it 
takes the value returned by Now() function when a new row is 
inserted using php script.

Default values must be constants in MySQL.  When you create a new record,
initialize your date field to NOW() explicitly.
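A minimal sketch of the explicit approach (hypothetical table and column names):

```sql
-- Default values must be constants, so supply NOW() in the INSERT itself.
INSERT INTO mytable (record_date, note) VALUES (NOW(), 'created');
```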


Thanks.


-- 
Paul DuBois, [EMAIL PROTECTED]

-
Before posting, please check:
   http://www.mysql.com/manual.php   (the manual)
   http://lists.mysql.com/   (the list archive)

To request this thread, e-mail [EMAIL PROTECTED]
To unsubscribe, e-mail [EMAIL PROTECTED]
Trouble unsubscribing? Try: http://lists.mysql.com/php/unsubscribe.php




RE: Doubt

2001-05-03 Thread Chris Bolt

 I've recently moved from MS-SQL Server 7.0 to mySQL. In MSSQL I 
 used to give the default value 'GetDate()' for a field in the 
  database with the datetime datatype. In MySQL I tried using Now(). 
  But when I insert a new row, the datetime field is taking the 
  0000-00-00 00:00:00 value. Please let me know what should I give as the default 
 value for 'record_date' field (of datetime datatype) so that it 
 takes the value returned by Now() function when a new row is 
 inserted using php script.
 
 Default values must be constants in MySQL.  When you create a new record,
 initialize your date field to NOW() explicitly.

Or make a TIMESTAMP column and omit it from the field list of your INSERT.

More info can be found at http://www.mysql.com/doc/D/A/DATETIME.html
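A hedged sketch of the TIMESTAMP alternative (hypothetical table): when the column is omitted from the INSERT's field list, MySQL fills it with the current date and time.

```sql
CREATE TABLE log_entry (
  record_date TIMESTAMP,
  note VARCHAR(100)
);
INSERT INTO log_entry (note) VALUES ('created now');
SELECT record_date, note FROM log_entry;
```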

-
Before posting, please check:
   http://www.mysql.com/manual.php   (the manual)
   http://lists.mysql.com/   (the list archive)

To request this thread, e-mail [EMAIL PROTECTED]
To unsubscribe, e-mail [EMAIL PROTECTED]
Trouble unsubscribing? Try: http://lists.mysql.com/php/unsubscribe.php




Re: Doubt

2001-03-31 Thread Sinisa Milivojevic

subba reddy pidugu writes:
  Hi !
  
  
  We have a small doubt regarding your software.
  We are using the Windows NT operating system and we have installed the MySQL database.
 We want to know how to create a database, create a user, and how to set the password for a
 specified user. Please give an answer as soon as possible.
  
  bye
  Thank You
  N.S.Reddy.Pidugu
  
  
  
  

Hi!

Download mysqlgui for your workstation and use grant and password
dialogues.
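Alternatively, the same can be done from the mysql command-line client. A sketch using the GRANT syntax of that era (database name, user name, and password are placeholders):

```sql
-- Run while connected as the MySQL root user.
CREATE DATABASE mydb;

-- In older MySQL versions, GRANT ... IDENTIFIED BY creates the
-- account if it does not exist and sets its password.
GRANT ALL PRIVILEGES ON mydb.* TO appuser@localhost IDENTIFIED BY 'secret';
FLUSH PRIVILEGES;
```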


Regards,

Sinisa

    __ _   _  ___ ==  MySQL AB
 /*/\*\/\*\   /*/ \*\ /*/ \*\ |*| Sinisa Milivojevic
/*/ /*/ /*/   \*\_   |*|   |*||*| mailto:[EMAIL PROTECTED]
   /*/ /*/ /*/\*\/*/  \*\|*|   |*||*| Larnaca, Cyprus
  /*/ /*/  /*/\*\_/*/ \*\_/*/ |*|
  /*/^^^\*\^^^
 /*/ \*\Developers Team

-
Before posting, please check:
   http://www.mysql.com/manual.php   (the manual)
   http://lists.mysql.com/   (the list archive)

To request this thread, e-mail [EMAIL PROTECTED]
To unsubscribe, e-mail [EMAIL PROTECTED]
Trouble unsubscribing? Try: http://lists.mysql.com/php/unsubscribe.php




Re: doubt urgent

2001-02-27 Thread Miguel Angel Solórzano

At 10:50 27/02/2001 +, gopinath rajagopal n wrote:
Hi!
First you need to set the privilege rights on the MySQL server for
the NT machine, user and databases. Read about the GRANT/REVOKE commands and
how the privilege system works in the MySQL Manual.

To connect you have several ways:

- A telnet session from your NT machine to Unix machine. Run the
mysql client on /mysql/bin directory.

- If your Unix server runs an SSH server and you have an SSH client such as
PuTTY for Windows, follow the same procedure as with telnet.

- You can download the Windows MySQL distribution from our site. It includes
a console client called mysql.exe that you can use with the appropriate
connection parameters to connect to the Unix server.

- You can install the ODBC driver MyODBC (you need to download it from
our web site) and use any client application that supports ODBC.

Regards,
Miguel
Hello Sir,

  I am having a doubt about connecting a MySQL client to the server. I want 
 some explanation; before this I worked in Oracle, and I am new to MySQL.

  My MySQL server runs on Linux, and my machine is Windows NT. Now I want 
 to connect to the server and access the database.

  First, how do I install the MySQL client on my machine? From where will 
 I get the EXE?
  How do I configure my client for the MySQL server on the Linux machine?

   Please explain my doubt ASAP.

Thanks & Regards
gopi


Nucleus Software Exports Ltd
Shakthi Towers,6th floor
766,Anna Salai
Chennai-2
Ph: 91-044-8510519,8510825


_
Chat with your friends as soon as they come online. Get Rediff Bol at
http://bol.rediff.com





-
Before posting, please check:
http://www.mysql.com/manual.php   (the manual)
http://lists.mysql.com/   (the list archive)

To request this thread, e-mail [EMAIL PROTECTED]
To unsubscribe, e-mail 
[EMAIL PROTECTED]
Trouble unsubscribing? Try: http://lists.mysql.com/php/unsubscribe.php

__  ___   __
   /  |/  /_ __/ __/ __ \/ /   http://www.mysql.com/
  / /|_/ / // /\ \/ /_/ / /__  Miguel Solórzano [EMAIL PROTECTED]
/_/  /_/\_, /___/\___\_\___/  São Paulo, Brazil
___/  Development Team


-
Before posting, please check:
   http://www.mysql.com/manual.php   (the manual)
   http://lists.mysql.com/   (the list archive)

To request this thread, e-mail [EMAIL PROTECTED]
To unsubscribe, e-mail [EMAIL PROTECTED]
Trouble unsubscribing? Try: http://lists.mysql.com/php/unsubscribe.php